Subject: Re: [PATCH 04/11] ipc: move locking out of ipcctl_pre_down_nolock
On Wed, 15 May 2013 18:08:03 -0700 Davidlohr Bueso <davidlohr.bueso@hp.com> wrote:

> This function currently acquires both the rw_mutex and the rcu lock on
> successful lookups, leaving the callers to explicitly unlock them, creating
> another two level locking situation.
>
> Make the callers (including those that still use ipcctl_pre_down()) explicitly
> lock and unlock the rwsem and rcu lock.
>
> ...
>
> @@ -409,31 +409,38 @@ static int msgctl_down(struct ipc_namespace *ns, int msqid, int cmd,
> return -EFAULT;
> }
>
> + down_write(&msg_ids(ns).rw_mutex);
> + rcu_read_lock();
> +
> ipcp = ipcctl_pre_down(ns, &msg_ids(ns), msqid, cmd,
> &msqid64.msg_perm, msqid64.msg_qbytes);
> - if (IS_ERR(ipcp))
> - return PTR_ERR(ipcp);
> + if (IS_ERR(ipcp)) {
> + err = PTR_ERR(ipcp);
> + /* the ipc lock is not held upon failure */

Terms like "the ipc lock" are unnecessarily vague. It's better to
identify the lock by name, e.g. msg_queue.q_perm.lock.

Where should readers go to understand the overall locking scheme? Is
there a description of the overall object hierarchy and of the role the
various locks play?

Have you done any performance testing of this patchset? Just from
squinting at it, I'd expect the effects to be small...
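
For readers who want the caller-side pattern the changelog describes
spelled out, here is a minimal sketch. It is not the literal patch: the
_sketch function name, the label, and the elided command handling are
illustrative, while the lock/unlock calls follow the quoted hunk.

/*
 * Illustrative sketch only -- not the actual patch.  The caller now
 * takes msg_ids(ns).rw_mutex and the RCU read lock itself, and is
 * responsible for dropping both on every return path.
 */
static int msgctl_down_sketch(struct ipc_namespace *ns, int msqid, int cmd,
			      struct msqid64_ds *msqid64)
{
	struct kern_ipc_perm *ipcp;
	int err = 0;

	down_write(&msg_ids(ns).rw_mutex);	/* writer side of the ipc rwsem */
	rcu_read_lock();

	ipcp = ipcctl_pre_down(ns, &msg_ids(ns), msqid, cmd,
			       &msqid64->msg_perm, msqid64->msg_qbytes);
	if (IS_ERR(ipcp)) {
		err = PTR_ERR(ipcp);
		/* msg_queue.q_perm.lock is not held upon failure */
		goto out_unlock;
	}

	/*
	 * Success: msg_queue.q_perm.lock is held.  Command handling and
	 * dropping that per-object lock are elided here.
	 */

out_unlock:
	rcu_read_unlock();
	up_write(&msg_ids(ns).rw_mutex);
	return err;
}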