From: Andi Kleen <ak@linux.intel.com>
Subject: Re: [PATCH] lib/raid6: Add AVX2 optimized recovery functions
Date: 2012-11-29
Jim Kukunas <james.t.kukunas@linux.intel.com> writes:
> +
> + /* ymm0 = x0f[16] */
> + asm volatile("vpbroadcastb %0, %%ymm7" : : "m" (x0f));
> +
> + while (bytes) {
> +#ifdef CONFIG_X86_64
> + asm volatile("vmovdqa %0, %%ymm1" : : "m" (q[0]));
> + asm volatile("vmovdqa %0, %%ymm9" : : "m" (q[32]));
> + asm volatile("vmovdqa %0, %%ymm0" : : "m" (p[0]));
> + asm volatile("vmovdqa %0, %%ymm8" : : "m" (p[32]));

It is somewhat dangerous to assume that registers are not changed
between assembler statements, or that the assembler statements are not
reordered. Better to always put such values into explicit variables, or
to merge them into a single asm statement.
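A rough sketch of what I mean (hypothetical, reusing the p/q pointers
from the hunk above; not the actual patch):

	/* Hypothetical sketch only: one asm statement, with the source
	 * bytes passed as memory operands so the compiler sees what the
	 * loads depend on instead of relying on statement ordering. */
	asm volatile("vmovdqa %0, %%ymm1\n\t"
		     "vmovdqa %1, %%ymm9\n\t"
		     "vmovdqa %2, %%ymm0\n\t"
		     "vmovdqa %3, %%ymm8"
		     : : "m" (q[0]), "m" (q[32]), "m" (p[0]), "m" (p[32]));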

asm volatile by itself is also not enough to prevent reordering; for
that you would need a memory clobber.
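For example (again only a sketch), adding a "memory" clobber makes the
statement a compiler-level barrier, so other loads and stores are not
moved across it:

	/* Hypothetical: the "memory" clobber forces the compiler to
	 * treat this asm as touching arbitrary memory, so it will not
	 * reorder or cache memory accesses around it. */
	asm volatile("vmovdqa %0, %%ymm1\n\t"
		     "vmovdqa %1, %%ymm9"
		     : : "m" (q[0]), "m" (q[32]) : "memory");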

-Andi


--
ak@linux.intel.com -- Speaking for myself only

