    Subject: Re: [PATCH] lib/string: Bring optimized memcmp from glibc
    * Nikolay Borisov:

    > +/*
    > + * Compare A and B bytewise in the byte order of the machine.
    > + * A and B are known to be different. This is needed only on little-endian
    > + * machines.
    > + */
    > +static inline int memcmp_bytes(unsigned long a, unsigned long b)
    > +{
    > +	long srcp1 = (long) &a;
    > +	long srcp2 = (long) &b;
    > +	unsigned long a0, b0;
    > +
    > +	do {
    > +		a0 = ((uint8_t *) srcp1)[0];
    > +		b0 = ((uint8_t *) srcp2)[0];
    > +		srcp1 += 1;
    > +		srcp2 += 1;
    > +	} while (a0 == b0);
    > +	return a0 - b0;
    > +}

    Shouldn't this be something like this instead?

    static inline int memcmp_bytes(unsigned long a, unsigned long b)
    {
    	if (sizeof(a) == 4)
    		return __builtin_bswap32(a) < __builtin_bswap32(b) ? -1 : 0;
    	else
    		return __builtin_bswap64(a) < __builtin_bswap64(b) ? -1 : 0;
    }

    (Or whatever macro versions the kernel has for this.)
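    For illustration only, a minimal sketch of what such a kernel-macro
    variant might look like, assuming swab32()/swab64() from
    <linux/swab.h> and assuming callers want the usual three-way
    memcmp-style result (untested):

    #include <linux/swab.h>   /* swab32(), swab64() */
    #include <linux/types.h>  /* u32, u64 */

    /*
     * Hypothetical sketch, not from the patch: byte-swap both words so a
     * plain integer comparison yields memcmp() ordering on little-endian.
     */
    static inline int memcmp_bytes(unsigned long a, unsigned long b)
    {
    	if (sizeof(a) == 4) {
    		u32 x = swab32(a), y = swab32(b);

    		return x < y ? -1 : (x > y ? 1 : 0);
    	} else {
    		u64 x = swab64(a), y = swab64(b);

    		return x < y ? -1 : (x > y ? 1 : 0);
    	}
    }

    The three-way return matches the original a0 - b0 semantics; whether
    the simpler two-way form above is sufficient depends on how the
    callers use the result.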

    Or is the expectation that targets which lack an assembler
    implementation of memcmp also have poor bswap built-ins?

    Thanks,
    Florian
