Date: Tue, 5 Apr 2022
From: Mark Rutland
Subject: Re: GCC 12 miscompilation of volatile asm (was: Re: [PATCH] arm64/io: Remind compiler that there is a memory side effect)
Sorry, I copied the wrong version of the x86_64 assembly (as generated by GCC 11.2.0). Updated below.

On Tue, Apr 05, 2022 at 01:51:30PM +0100, Mark Rutland wrote:
> My x86_64 test case is:
>
> | unsigned long rdmsr(unsigned long reg)
> | {
> |         unsigned int lo, hi;
> |
> |         asm volatile(
> |                 "rdmsr"
> |                 : "=d" (hi), "=a" (lo)
> |                 : "c" (reg)
> |         );
> |
> |         return ((unsigned long)hi << 32) | lo;
> | }
> |
> | void wrmsr(unsigned long reg, unsigned long val)
> | {
> |         unsigned int lo = val;
> |         unsigned int hi = val >> 32;
> |
> |         asm volatile(
> |                 "wrmsr"
> |                 :
> |                 : "d" (hi), "a" (lo), "c" (reg)
> |         );
> | }
> |
> | void msr_rmw_set_bits(unsigned long reg, unsigned long bits)
> | {
> |         unsigned long val;
> |
> |         val = rdmsr(reg);
> |         val |= bits;
> |         wrmsr(reg, val);
> | }
> |
> | void func_with_msr_side_effects(unsigned long reg)
> | {
> |         msr_rmw_set_bits(reg, 1UL << 0);
> |         msr_rmw_set_bits(reg, 1UL << 1);
> |         msr_rmw_set_bits(reg, 1UL << 2);
> |         msr_rmw_set_bits(reg, 1UL << 3);
> | }
>
> Per compiler explorer (https://godbolt.org/z/cveff9hq5) GCC trunk currently
> compiles this as:
>
> | msr_rmw_set_bits:
> |         mov     rcx, rdi
> |         rdmsr
> |         sal     rdx, 32
> |         mov     eax, eax
> |         or      rax, rsi
> |         or      rax, rdx
> |         mov     rdx, rax
> |         shr     rdx, 32
> |         wrmsr
> |         ret
> | func_with_msr_side_effects:
> |         ret
>

GCC 11.2 compiles that as:

| rdmsr:
|         mov     rcx, rdi
|         rdmsr
|         sal     rdx, 32
|         mov     eax, eax
|         or      rax, rdx
|         ret
| wrmsr:
|         mov     rax, rsi
|         mov     rdx, rsi
|         shr     rdx, 32
|         mov     rcx, rdi
|         wrmsr
|         ret
| msr_rmw_set_bits:
|         mov     rcx, rdi
|         rdmsr
|         sal     rdx, 32
|         mov     eax, eax
|         or      rax, rsi
|         or      rax, rdx
|         mov     rdx, rax
|         shr     rdx, 32
|         wrmsr
|         ret
| func_with_msr_side_effects:
|         push    rbx
|         mov     rbx, rdi
|         mov     esi, 1
|         call    msr_rmw_set_bits
|         mov     esi, 2
|         mov     rdi, rbx
|         call    msr_rmw_set_bits
|         mov     esi, 4
|         mov     rdi, rbx
|         call    msr_rmw_set_bits
|         mov     esi, 8
|         mov     rdi, rbx
|         call    msr_rmw_set_bits
|         pop     rbx
|         ret
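
For completeness, here is a minimal sketch of the approach the thread subject alludes to: adding an explicit "memory" clobber so the compiler has to assume a memory side effect and cannot drop the asm. The *_clobber names are illustrative only and this is not the posted arm64 patch; whether such a clobber should be necessary at all, rather than volatile alone being sufficient, is the question under discussion.

| unsigned long rdmsr_clobber(unsigned long reg)
| {
|         unsigned int lo, hi;
|
|         /* "memory" clobber added as an illustrative workaround */
|         asm volatile(
|                 "rdmsr"
|                 : "=d" (hi), "=a" (lo)
|                 : "c" (reg)
|                 : "memory"
|         );
|
|         return ((unsigned long)hi << 32) | lo;
| }
|
| void wrmsr_clobber(unsigned long reg, unsigned long val)
| {
|         unsigned int lo = val;
|         unsigned int hi = val >> 32;
|
|         asm volatile(
|                 "wrmsr"
|                 :
|                 : "d" (hi), "a" (lo), "c" (reg)
|                 : "memory"
|         );
| }

With the clobber in place, each asm is assumed to read and write memory, so the calls in func_with_msr_side_effects() should no longer be treated as side-effect free and elided down to a bare ret.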

Thanks,
Mark.
