Subject: Re: [PATCH 1/2] fix sys_unshare()+SEM_UNDO: add support for CLONE_SYSVSEM
Quoting Serge E. Hallyn (serue@us.ibm.com):
> Quoting Manfred Spraul (manfred@colorfullife.com):
> > Andrew Morton wrote:
> >> On Sun, 13 Apr 2008 10:04:17 +0200 Manfred Spraul <manfred@colorfullife.com> wrote:
> >>
> >>
> >>> sys_unshare(CLONE_NEWIPC) doesn't handle the undo lists properly; this
> >>> can cause kernel memory corruption. CLONE_NEWIPC must detach from the
> >>> existing undo lists.
> >>> Fix, part 1: add support for sys_unshare(CLONE_SYSVSEM)
> >>>
> >>>
> >>
> >> Is this a non-back-compatible change?
> >>
> >>
> > It adds a new feature - previously sys_unshare(CLONE_SYSVSEM) returned
> > -EINVAL.
> >
> >>> Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
> >>> ---
> >>> ipc/sem.c | 1 +
> >>> kernel/fork.c | 18 ++++++++++++++----
> >>> 2 files changed, 15 insertions(+), 4 deletions(-)
> >>>
> >>> diff --git a/ipc/sem.c b/ipc/sem.c
> >>> index 0b45a4d..35841bd 100644
> >>> --- a/ipc/sem.c
> >>> +++ b/ipc/sem.c
> >>> @@ -1298,6 +1298,7 @@ void exit_sem(struct task_struct *tsk)
> >>>  	undo_list = tsk->sysvsem.undo_list;
> >>>  	if (!undo_list)
> >>>  		return;
> >>> +	tsk->sysvsem.undo_list = NULL;
> >>>  	if (!atomic_dec_and_test(&undo_list->refcnt))
> >>>  		return;
> >>> diff --git a/kernel/fork.c b/kernel/fork.c
> >>> index 9c042f9..7f242b0 100644
> >>> --- a/kernel/fork.c
> >>> +++ b/kernel/fork.c
> >>> @@ -1675,13 +1675,17 @@ static int unshare_fd(unsigned long unshare_flags, struct files_struct **new_fdp
> >>>  }
> >>>  /*
> >>> - * Unsharing of semundo for tasks created with CLONE_SYSVSEM is not
> >>> - * supported yet
> >>> + * Unsharing of semundo for tasks created with CLONE_SYSVSEM doesn't require
> >>> + * any allocations: it means that the task leaves the existing undo lists,
> >>> + * just like sys_exit(). The new undo lists are allocated on demand in the
> >>> + * ipc syscalls.
> >>> + * new_ulistp is set to a non-NULL value, the caller expects that on success.
> >>>  */
> >>>  static int unshare_semundo(unsigned long unshare_flags, struct sem_undo_list **new_ulistp)
> >>>  {
> >>> -	if (unshare_flags & CLONE_SYSVSEM)
> >>> -		return -EINVAL;
> >>> +	if (unshare_flags & CLONE_SYSVSEM) {
> >>> +		*new_ulistp = (void*)1;
> >>> +	}
> >>>
> >>
> >> And can we do anything nicer than this?
> >>
> >>
> > Attached is an alternative. If you prefer it, I'll send another patch set.
>
> FWIW I definitely far far prefer this version :)
>
> As for 'maintainers', in the end wrt this code I'd defer to any two of
> Pavel, Nadia, and Pierre, each of whom I've seen do a great deal of
> digging around this code.
>
> (I think I saw some set of these go into -mm so I guess I'll just grab
> mmotm and test with that a bit later today)
>
> thanks,
> -serge

Of course if we do go this route, then the unshare(2) manpage will need
an update (so Michael Kerrisk cc:d).
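
For reference, here's a minimal (untested) userspace sketch of what the
new behaviour would allow; it assumes the glibc unshare() wrapper and
CLONE_SYSVSEM from <sched.h> with _GNU_SOURCE, and error handling is
kept to a minimum:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        /*
         * Detach from any SysV semaphore undo list shared with other
         * tasks (e.g. inherited via clone(CLONE_SYSVSEM)).  A private
         * list is allocated lazily on the next semop() with SEM_UNDO.
         */
        if (unshare(CLONE_SYSVSEM) == -1) {
                /* kernels without this patch fail here with EINVAL */
                perror("unshare(CLONE_SYSVSEM)");
                exit(EXIT_FAILURE);
        }

        printf("detached from the shared semundo list\n");
        return 0;
}

On kernels without the patch the call simply keeps returning EINVAL, so
callers can detect the old behaviour and fall back gracefully.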

thanks,
-serge

