Date: 2012-11-29
From: H. Peter Anvin
Subject: Re: [PATCH] lib/raid6: Add AVX2 optimized recovery functions
On 11/29/2012 12:09 PM, Andi Kleen wrote:
> Jim Kukunas <james.t.kukunas@linux.intel.com> writes:
>> +
>> + /* ymm0 = x0f[16] */
>> + asm volatile("vpbroadcastb %0, %%ymm7" : : "m" (x0f));
>> +
>> + while (bytes) {
>> +#ifdef CONFIG_X86_64
>> + asm volatile("vmovdqa %0, %%ymm1" : : "m" (q[0]));
>> + asm volatile("vmovdqa %0, %%ymm9" : : "m" (q[32]));
>> + asm volatile("vmovdqa %0, %%ymm0" : : "m" (p[0]));
>> + asm volatile("vmovdqa %0, %%ymm8" : : "m" (p[32]));
>
> This is somewhat dangerous to assume registers do not get changed
> between assembler statements or assembler statements do not get
> reordered. Better always put such values into explicit variables or
> merge them into a single asm statement.
>
> asm volatile is also not enough to prevent reordering. If anything
> you would need a memory clobber.
>
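[For illustration only, a minimal sketch of the kind of rewrite Andi is
suggesting: the four loads folded into a single asm statement with explicit
"m" operands and a "memory" clobber, so the compiler can neither split nor
reorder them. The operand layout below is an assumption, not the submitted
patch.]

	/* Hypothetical rewrite, not the actual patch: one asm statement
	 * with explicit memory inputs and a "memory" clobber. */
	asm volatile("vmovdqa %0, %%ymm1\n\t"
		     "vmovdqa %1, %%ymm9\n\t"
		     "vmovdqa %2, %%ymm0\n\t"
		     "vmovdqa %3, %%ymm8"
		     : /* no outputs */
		     : "m" (q[0]), "m" (q[32]), "m" (p[0]), "m" (p[32])
		     : "memory");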

The code is compiled so that the xmm/ymm registers are not available to
the compiler. Do you have any known examples of asm volatiles being
reordered *with respect to each other*? My understandings of gcc is
that volatile operations are ordered with respect to each other (not
necessarily with respect to non-volatile operations, though.)

Either way, this implementation technique has been used for the MMX/SSE
implementations without any problems for 9 years now.
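[For illustration only, a sketch of the pattern being defended here:
back-to-back asm volatile statements that pass data through a ymm register
the compiler never allocates, relying on gcc keeping volatile asms in
program order relative to each other. The specific instructions are an
assumption, not a quote from the patch.]

	/* Illustrative only: consecutive asm volatile statements
	 * communicating through %ymm1, which the compiler cannot allocate
	 * because the file is built without AVX code generation and the
	 * FPU state is owned via kernel_fpu_begin()/kernel_fpu_end(). */
	asm volatile("vmovdqa %0, %%ymm1" : : "m" (q[0]));  /* load Q block */
	asm volatile("vpxor %ymm1, %ymm0, %ymm0");          /* fold into P  */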

	-hpa