Subject: Re: [PATCH v3 1/2] perf, x86: Implement event scheduler helper functions
On Mon, 2011-11-14 at 18:51 +0100, Robert Richter wrote:
> @@ -22,8 +22,14 @@ extern unsigned long __sw_hweight64(__u64 w);
>  #include <asm/bitops.h>
>  
>  #define for_each_set_bit(bit, addr, size) \
> -        for ((bit) = find_first_bit((addr), (size)); \
> -             (bit) < (size); \
> +        for ((bit) = find_first_bit((addr), (size)); \
> +             (bit) < (size); \
> +             (bit) = find_next_bit((addr), (size), (bit) + 1))
> +
> +/* same as for_each_set_bit() but use bit as value to start with */
> +#define for_each_set_bit_cont(bit, addr, size) \
> +        for ((bit) = find_next_bit((addr), (size), (bit)); \
> +             (bit) < (size); \
>               (bit) = find_next_bit((addr), (size), (bit) + 1))

So my version has the +1 on the first find_next_bit() as well; this comes
from the assumption that the bit passed in has already been dealt with and
should not be visited again, i.e. continue _after_ @bit instead of _at_
@bit.
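
Concretely, a sketch of the variant I mean; it is just the macro quoted
above with the extra +1 on the initial lookup, not something taken from
the patch itself:

/* resume _after_ @bit: the initial lookup starts at (bit) + 1 */
#define for_each_set_bit_cont(bit, addr, size)                  \
        for ((bit) = find_next_bit((addr), (size), (bit) + 1);  \
             (bit) < (size);                                    \
             (bit) = find_next_bit((addr), (size), (bit) + 1))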

This seems consistent with the list_*_continue primitives as well, which
will start with the element after (or before for _reverse) the given
position.
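
For reference, a sketch of list_for_each_entry_continue() along the lines
of include/linux/list.h (quoted from memory, so the exact form may differ);
it advances past the current @pos before the first test:

/* continue iterating with the element _after_ @pos */
#define list_for_each_entry_continue(pos, head, member)                 \
        for (pos = list_entry(pos->member.next, typeof(*pos), member);  \
             &pos->member != (head);                                    \
             pos = list_entry(pos->member.next, typeof(*pos), member))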

Thoughts?

