Date: 2008-11-28
Subject: Re: [PATCH 6/6] fs: Introduce kern_mount_special() to mount special vfs
    On Thu, Nov 27, 2008 at 12:32:59AM +0100, Eric Dumazet wrote:
    > This function arms a flag (MNT_SPECIAL) on the vfs, to avoid
    > refcounting on permanent system vfs.
    > Use this function for sockets, pipes, anonymous fds.
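
    (For context, my reading of what the patch does - a rough sketch of the idea,
    not the actual code; MNT_SPECIAL and kern_mount_special() are names from the
    patch, the mntput() fast path below is my assumption:)

    	static inline void mntput(struct vfsmount *mnt)
    	{
    		if (mnt) {
    			/* assumed fast path: permanent kernel-internal mounts
    			 * (sockets, pipes, anon fds) get MNT_SPECIAL set at
    			 * kern_mount_special() time and are never refcounted */
    			if (mnt->mnt_flags & MNT_SPECIAL)
    				return;
    			mnt->mnt_expiry_mark = 0;
    			mntput_no_expire(mnt);
    		}
    	}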

    IMO that's pushing it past the point of usefulness; unless you can show
    that this really gives a considerable win on pipes et al. *AND* that it
    doesn't hurt other loads...

    dput() part: again, I want to see what happens on other loads; it's probably
    fine (and the win is certainly bigger than from the mntput() change), but... The
    thing is, atomic_dec_and_lock() in there is often done on dentries with
    d_count > 1, and that's fairly cheap (and doesn't involve contention on
    dcache_lock on sane targets).

    FWIW, unless there's a really good reason to do alpha atomic_dec_and_lock()
    in a special way, I'd try to compare with
    	if (atomic_add_unless(&dentry->d_count, -1, 1))
    		return;
    	if (<your flag>)
    		<sod off to special>;
    	spin_lock(&dcache_lock);
    	if (!atomic_dec_and_test(&dentry->d_count)) {
    		spin_unlock(&dcache_lock);
    		return;
    	}
    	<the rest as usual, with dcache_lock held>
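
    (For reference, that fast path is basically what the generic lib/dec_and_lock.c
    helper already does internally; from memory, roughly:)

    	int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
    	{
    		/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
    		if (atomic_add_unless(atomic, -1, 1))
    			return 0;

    		/* Otherwise do it the slow way */
    		spin_lock(lock);
    		if (atomic_dec_and_test(atomic))
    			return 1;
    		spin_unlock(lock);
    		return 0;
    	}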

    As for the alpha... unless I'm misreading the assembler in
    arch/alpha/lib/dec_and_lock.c, it looks like we have essentially an
    implementation of atomic_add_unless() in there and one that just
    might be better than what we've got in arch/alpha/include/asm/atomic.h.
    How about
    1:	ldl_l	x, addr
    	cmpne	x, u, y		/* y = (x != u) */
    	beq	y, 3f		/* if !y -> bugger off, return 0 */
    	addl	x, a, y
    	stl_c	y, addr		/* y = 1 if *addr unchanged since ldl_l, else 0 */
    	beq	y, 2f		/* lost the race -> retry */
    3:	/* return value is in y */
    	.subsection 2		/* out of the way */
    2:	br	1b
    	.previous
    for atomic_add_unless() guts? With that we are rid of HAVE_DEC_LOCK and
    get a uniform implementation of atomic_dec_and_lock() for all targets...

    AFAICS, that would be
    static __inline__ int atomic_add_unless(atomic_t *v, int a, int u)
    {
    	unsigned long temp, res;
    	__asm__ __volatile__(
    	"1:	ldl_l %0,%1\n"
    	"	cmpne %0,%4,%2\n"	/* res = (old != u) */
    	"	beq %2,3f\n"		/* equal to u -> don't add, return 0 */
    	"	addl %0,%3,%2\n"
    	"	stl_c %2,%1\n"		/* res = 1 on success, 0 if we lost the race */
    	"	beq %2,2f\n"
    	"3:\n"
    	".subsection 2\n"
    	"2:	br 1b\n"
    	".previous"
    	:"=&r" (temp), "=m" (v->counter), "=&r" (res)
    	:"Ir" (a), "Ir" (u), "m" (v->counter) : "memory");
    	smp_mb();
    	return res;
    }

    static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u)
    {
    	unsigned long temp, res;
    	__asm__ __volatile__(
    	"1:	ldq_l %0,%1\n"
    	"	cmpne %0,%4,%2\n"	/* res = (old != u) */
    	"	beq %2,3f\n"		/* equal to u -> don't add, return 0 */
    	"	addq %0,%3,%2\n"
    	"	stq_c %2,%1\n"		/* res = 1 on success, 0 if we lost the race */
    	"	beq %2,2f\n"
    	"3:\n"
    	".subsection 2\n"
    	"2:	br 1b\n"
    	".previous"
    	:"=&r" (temp), "=m" (v->counter), "=&r" (res)
    	:"Ir" (a), "Ir" (u), "m" (v->counter) : "memory");
    	smp_mb();
    	return res;
    }
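
    (For comparison, what arch/alpha/include/asm/atomic.h has now is, if memory
    serves, the generic cmpxchg() loop; roughly:)

    	static __inline__ int atomic_add_unless(atomic_t *v, int a, int u)
    	{
    		int c, old;

    		c = atomic_read(v);
    		for (;;) {
    			if (unlikely(c == u))
    				break;
    			old = atomic_cmpxchg(v, c, c + a);
    			if (likely(old == c))
    				break;
    			c = old;
    		}
    		return c != u;
    	}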

    Comments?

