Subject: Re: How does spin_unlock() in x86-64 align with the description in Documentation/memory-barriers.txt?
Date: 2013-03-22
>>> On 22.03.13 at 12:58, Zhu Yanhai <zhu.yanhai@gmail.com> wrote:
> Hi all,
> In the documentation it reads:
>
> (2) UNLOCK operation implication:
>
> Memory operations issued before the UNLOCK will be completed before the
> UNLOCK operation has completed.
>
> Memory operations issued after the UNLOCK may be completed before the
> UNLOCK operation has completed.
>
> However, on x86-64 __ticket_spin_unlock() merely does:
>
> static __always_inline void __ticket_spin_unlock(raw_spinlock_t *lock)
> {
>         asm volatile(
>                 ALTERNATIVE(UNLOCK_LOCK_PREFIX "incb (%0);" ASM_NOP3,
>                             UNLOCK_LOCK_ALT_PREFIX "movw $0, (%0)",
>                             X86_FEATURE_UNFAIR_SPINLOCK)
>                 :
>                 : "Q" (&lock->slock)
>                 : "memory", "cc");
> }
>
> Both UNLOCK_LOCK_PREFIX and UNLOCK_LOCK_ALT_PREFIX expand to empty
> strings, so there is no LOCK prefix and no explicit fence. How, then,
> does this function guarantee that memory operations issued before it
> have completed?

Please read the section "Memory Ordering in P6 and More Recent
Processor Families" in SDM Vol 3: on x86, stores are not reordered
with older loads or with other stores, so a plain store to the lock
word already carries the required release semantics. The "memory"
clobber is then only needed as a compiler barrier, so the compiler
cannot move memory accesses across the unlock.
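
To make that concrete, here is a minimal sketch (hypothetical names,
not the kernel's code) of a ticket-lock release written against the
C11 memory model; on x86 the release store compiles to a plain MOV,
with no LOCK prefix and no fence:

#include <stdatomic.h>

/* Illustrative only: a release store orders all earlier loads and
 * stores before it, which is exactly the UNLOCK guarantee quoted
 * above. x86-TSO already forbids the reorderings that "release"
 * rules out, so the compiler emits an ordinary store here. */
struct sketch_lock {
	atomic_uchar head;	/* ticket currently being served */
	atomic_uchar tail;	/* next ticket to be handed out */
};

static inline void sketch_unlock(struct sketch_lock *lock)
{
	/* Only the lock holder writes head, so a relaxed load plus
	 * a release store of head + 1 mirrors the "incb" above. */
	unsigned char h = atomic_load_explicit(&lock->head,
					       memory_order_relaxed);
	atomic_store_explicit(&lock->head, h + 1, memory_order_release);
}

The acquire side is different: taking a ticket is a racy
read-modify-write, which is where an atomic operation with the LOCK
prefix is actually needed.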

Jan


