Subject: Re: [PATCH v2 26/31] arm64: Miscellaneous library functions
On Thu, Aug 16, 2012 at 02:00:32PM +0100, Arnd Bergmann wrote:
> On Thursday 16 August 2012, Will Deacon wrote:
> > > > +
> > > > +#include <linux/kernel.h>
> > > > +#include <linux/spinlock.h>
> > > > +#include <linux/atomic.h>
> > > > +
> > > > +#ifdef CONFIG_SMP
> > > > +arch_spinlock_t __atomic_hash[ATOMIC_HASH_SIZE] __lock_aligned = {
> > > > + [0 ... (ATOMIC_HASH_SIZE-1)] = __ARCH_SPIN_LOCK_UNLOCKED
> > > > +};
> > > > +#endif
> > >
> > > What?
> > >
> > > I suppose this is a leftover from an earlier version using the
> > > generic bitops, right?
> >
> > We currently use the generic atomic bitops (asm-generic/bitops/atomic.h)
> > which contains:
> >
> > # define ATOMIC_HASH(a) (&(__atomic_hash[ (((unsigned long) a)/L1_CACHE_BYTES) & (ATOMIC_HASH_SIZE-1) ]))
> >
> > so we have to provide a definition for the array. We have additional patches
> > containing optimised assembly implementations of the atomic bitops which we
> > will push later, once we've got some hardware to benchmark with.
> >
>
> Ah, I was confusing this with the asm/atomic.h stuff, for which you already
> provide an optimized version.
>
> The generic atomic bitops are really horrible in performance and I would
> expect that there is just one obvious way to implement bitops using ldaxr/stlxr,
> so I recommend just doing that even if you have no hardware for benchmarking.
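
For reference, this is roughly what the generic header does with that hash: asm-generic/bitops/atomic.h (as it stood at the time) wraps a plain read-modify-write in the spinlock returned by the ATOMIC_HASH() macro quoted above, which is why an SMP architecture using it has to define __atomic_hash[]. The sketch below is paraphrased rather than copied from the tree, and the name generic_set_bit() is only illustrative (the in-tree function is simply set_bit()):

	#include <linux/bitops.h>	/* BIT_MASK(), BIT_WORD() */
	#include <linux/irqflags.h>
	#include <linux/spinlock.h>

	/* Paraphrased sketch of the generic SMP set_bit(): hash the word
	 * address to one of the __atomic_hash[] spinlocks (via the
	 * ATOMIC_HASH() macro quoted above) and do a plain OR under that
	 * lock with interrupts disabled. */
	static inline void generic_set_bit(int nr, volatile unsigned long *addr)
	{
		unsigned long mask = BIT_MASK(nr);
		unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
		arch_spinlock_t *lock = ATOMIC_HASH(p);	/* address -> lock */
		unsigned long flags;

		local_irq_save(flags);
		arch_spin_lock(lock);
		*p |= mask;		/* serialised only by the hashed lock */
		arch_spin_unlock(lock);
		local_irq_restore(flags);
	}

Every atomic bitop on every CPU contends for one of only ATOMIC_HASH_SIZE locks and takes an IRQ-disable/spinlock round trip per operation, which is where the poor performance Arnd mentions comes from.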

As Will said, we have the code already but I dropped it from the initial
patch set to be reviewed, to keep it simpler. It will be added later.
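
For illustration only, and explicitly not the dropped patches referred to above: a minimal sketch of the exclusive load/store loop Arnd describes, for a non-returning bitop. A plain set_bit() needs no ordering guarantees, so ldxr/stxr suffice; the test_and_*() variants would use ldaxr/stlxr (or explicit barriers) to provide the required acquire/release semantics.

	#include <linux/bitops.h>	/* BIT_MASK(), BIT_WORD() */

	/* Hypothetical arm64 set_bit() using an exclusive-access retry loop. */
	static inline void arm64_set_bit(int nr, volatile unsigned long *addr)
	{
		unsigned long mask = BIT_MASK(nr);
		unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
		unsigned long tmp;
		unsigned int res;

		asm volatile(
		"1:	ldxr	%0, %2\n"	/* load-exclusive the word */
		"	orr	%0, %0, %3\n"	/* set the requested bit */
		"	stxr	%w1, %0, %2\n"	/* store-exclusive; %w1 == 0 on success */
		"	cbnz	%w1, 1b\n"	/* lost exclusivity: retry */
		: "=&r" (tmp), "=&r" (res), "+Q" (*p)
		: "r" (mask)
		: "memory");
	}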

--
Catalin

