From: Linus Torvalds <>
Date: Mon, 2 Apr 2018 16:37:17 -0700
Subject: Re: [PATCH] bitmap: fix memset optimization on big-endian systems
On Mon, Apr 2, 2018 at 3:58 PM, Omar Sandoval <osandov@osandov.com> wrote:
>
> Commit 2a98dc028f91 introduced an optimization to bitmap_{set,clear}()
> which uses memset() when the start and length are constants aligned to a
> byte. This is wrong on big-endian systems;
Ugh, yes.
In retrospect, I do wish I had just made the bitmap types be byte-based, but we had strong reasons for those "unsigned long" array semantics.
The traditional problem - and the reason why it is byte-order dependent - was that we often mix bitmap operations with "unsigned long flags" style operations.
We should probably have at least switched it to "unsigned int" with the whole 64-bit transition, but we never did even that, so the bitmap format is not just byte-order dependent, but dependent on the size of "long".
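To make that dependence concrete, here is a minimal user-space sketch (an illustration only, not kernel code): setting bits 0..7 of an unsigned long fills the lowest-addressed byte on little-endian but the highest-addressed one on big-endian, which is exactly why a single-byte memset() disagrees with bitmap_set() there.

#include <stdio.h>
#include <string.h>

int main(void)
{
	unsigned long word = 0;
	unsigned char bytes[sizeof(word)];

	/* Set bits 0..7, as bitmap_set(&word, 0, 8) would. */
	word |= 0xffUL;
	memcpy(bytes, &word, sizeof(word));

	/*
	 * Little-endian: bytes[0] == 0xff, so memset(map, 0xff, 1)
	 * happens to match. Big-endian: the 0xff lands in the
	 * highest-addressed byte, so the same memset() would set bits
	 * BITS_PER_LONG-8..BITS_PER_LONG-1 instead of bits 0..7.
	 */
	for (size_t i = 0; i < sizeof(word); i++)
		printf("byte %zu: %02x\n", i, bytes[i]);
	return 0;
}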
I guess the "unsigned long flag" issue still exists in several places, and we're stuck with it, probably forever. Think page flags, but also various networking flags etc.
You'd *think* they use bitmap operations consistently, but they definitely mix them with direct accesses to the flags field, e.g. the page flags are usually done using the PG_xyz bit numbers, but occasionally you have code that checks multiple independent bits at once, doing things like
#define PAGE_FLAGS_PRIVATE \
	(1UL << PG_private | 1UL << PG_private_2)

static inline int page_has_private(struct page *page)
{
	return !!(page->flags & PAGE_FLAGS_PRIVATE);
}
so the bits really are _exposed_ as being encoded as bits in an unsigned long.
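Here is a hedged sketch of that mixing, with made-up names (my_flags, MY_PRIVATE and friends stand in for page->flags and the PG_* numbers): one path manipulates a bit by number, the other tests a multi-bit mask directly, and the two only agree because both assume the same bit-in-unsigned-long encoding.

#include <stdio.h>

#define MY_PRIVATE	5
#define MY_PRIVATE_2	6
#define MY_FLAGS_PRIVATE \
	(1UL << MY_PRIVATE | 1UL << MY_PRIVATE_2)

/* Bit-number interface: roughly what set_bit() does, minus atomicity. */
static inline void set_flag(unsigned long *flags, int nr)
{
	*flags |= 1UL << nr;
}

/* Direct-mask interface: the page_has_private() pattern above. */
static inline int has_private(unsigned long flags)
{
	return !!(flags & MY_FLAGS_PRIVATE);
}

int main(void)
{
	unsigned long flags = 0;

	set_flag(&flags, MY_PRIVATE);
	printf("has_private: %d\n", has_private(flags));
	return 0;
}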
Your patch is obviously correct, and we just need to make sure people *really* understand that bitmaps are arrays of unsigned long, and byte (and bit) order is a real thing.
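For completeness, a small sketch of the underlying rule (my own illustration, not the patch itself): a memset() shortcut is only endian-safe when the region covers whole longs, because 0x00 and 0xff fills produce the same word value on any byte order, while a sub-word byte fill lands on different bit numbers.

#include <assert.h>
#include <string.h>

int main(void)
{
	unsigned long map[2];

	/* Whole-long fill: every byte becomes 0xff, so every bit is
	 * set no matter the byte order. */
	memset(map, 0xff, sizeof(map));
	assert(map[0] == ~0UL && map[1] == ~0UL);

	/* Sub-long fill: one 0xff byte is bits 0..7 on little-endian
	 * but the top eight bits of the word on big-endian. */
	map[0] = 0;
	memset(map, 0xff, 1);
	return map[0] == 0xffUL ? 0 : 1;	/* 0 only on little-endian */
}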
Linus