Date: Mon, 10 Jul 2023 08:37:51 -0000
From: "tip-bot2 for Uros Bizjak" <>
Subject: [tip: perf/core] perf/x86: Use local64_try_cmpxchg
The following commit has been merged into the perf/core branch of tip:
Commit-ID:     4c1c9dea2089374cb58505a0df6136969ff3c8b9
Gitweb:        https://git.kernel.org/tip/4c1c9dea2089374cb58505a0df6136969ff3c8b9
Author:        Uros Bizjak <ubizjak@gmail.com>
AuthorDate:    Thu, 06 Jul 2023 16:16:48 +02:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Mon, 10 Jul 2023 09:52:35 +02:00
perf/x86: Use local64_try_cmpxchg
Use local64_try_cmpxchg() instead of the local64_cmpxchg(*ptr, old, new) == old pattern. The x86 CMPXCHG instruction returns success in the ZF flag, so this change saves a compare after the cmpxchg (and the related move instruction in front of the cmpxchg).

Also, try_cmpxchg() implicitly assigns the old *ptr value to "old" when the cmpxchg fails, so there is no need to re-read the value in the loop.
No functional change intended.
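The pattern is the same at every call site: the open-coded read/compare-exchange/retry loop becomes a do/while around try_cmpxchg. A minimal sketch of the transformation (read_counter() here is a hypothetical stand-in for the per-PMU read, e.g. rdpmcl() or msr_read_counter()):

	/* Before: on failure, prev must be re-read at the top. */
	again:
		prev = local64_read(&hwc->prev_count);
		new = read_counter();
		if (local64_cmpxchg(&hwc->prev_count, prev, new) != prev)
			goto again;

	/*
	 * After: a failed try_cmpxchg writes the current value back
	 * into prev, so only the counter read repeats.
	 */
		prev = local64_read(&hwc->prev_count);
		do {
			new = read_counter();
		} while (!local64_try_cmpxchg(&hwc->prev_count, &prev, new));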
Cc. "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Uros Bizjak <ubizjak@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20230706141720.2672-1-ubizjak@gmail.com --- arch/x86/events/amd/ibs.c | 4 ++-- arch/x86/events/core.c | 10 ++++------ arch/x86/events/intel/cstate.c | 10 ++++------ arch/x86/events/msr.c | 8 +++----- 4 files changed, 13 insertions(+), 19 deletions(-)
diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
index 74e6642..7d29be0 100644
--- a/arch/x86/events/amd/ibs.c
+++ b/arch/x86/events/amd/ibs.c
@@ -156,8 +156,8 @@ perf_event_try_update(struct perf_event *event, u64 new_raw_count, int width)
 	 * count to the generic event atomically:
 	 */
 	prev_raw_count = local64_read(&hwc->prev_count);
-	if (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
-			    new_raw_count) != prev_raw_count)
+	if (!local64_try_cmpxchg(&hwc->prev_count,
+				 &prev_raw_count, new_raw_count))
 		return 0;
 
 	/*
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 9d24870..23c9642 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -129,13 +129,11 @@ u64 x86_perf_event_update(struct perf_event *event)
 	 * exchange a new raw count - then add that new-prev delta
 	 * count to the generic event atomically:
 	 */
-again:
 	prev_raw_count = local64_read(&hwc->prev_count);
-	rdpmcl(hwc->event_base_rdpmc, new_raw_count);
-
-	if (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
-			    new_raw_count) != prev_raw_count)
-		goto again;
+	do {
+		rdpmcl(hwc->event_base_rdpmc, new_raw_count);
+	} while (!local64_try_cmpxchg(&hwc->prev_count,
+				      &prev_raw_count, new_raw_count));
 
 	/*
 	 * Now we have the new raw value and have updated the prev
diff --git a/arch/x86/events/intel/cstate.c b/arch/x86/events/intel/cstate.c
index 835862c..97035db 100644
--- a/arch/x86/events/intel/cstate.c
+++ b/arch/x86/events/intel/cstate.c
@@ -365,13 +365,11 @@ static void cstate_pmu_event_update(struct perf_event *event)
 	struct hw_perf_event *hwc = &event->hw;
 	u64 prev_raw_count, new_raw_count;
 
-again:
 	prev_raw_count = local64_read(&hwc->prev_count);
-	new_raw_count = cstate_pmu_read_counter(event);
-
-	if (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
-			    new_raw_count) != prev_raw_count)
-		goto again;
+	do {
+		new_raw_count = cstate_pmu_read_counter(event);
+	} while (!local64_try_cmpxchg(&hwc->prev_count,
+				      &prev_raw_count, new_raw_count));
 
 	local64_add(new_raw_count - prev_raw_count, &event->count);
 }
diff --git a/arch/x86/events/msr.c b/arch/x86/events/msr.c
index 0feaaa5..61c59e0 100644
--- a/arch/x86/events/msr.c
+++ b/arch/x86/events/msr.c
@@ -244,12 +244,10 @@ static void msr_event_update(struct perf_event *event)
 	s64 delta;
 
 	/* Careful, an NMI might modify the previous event value: */
-again:
 	prev = local64_read(&event->hw.prev_count);
-	now = msr_read_counter(event);
-
-	if (local64_cmpxchg(&event->hw.prev_count, prev, now) != prev)
-		goto again;
+	do {
+		now = msr_read_counter(event);
+	} while (!local64_try_cmpxchg(&event->hw.prev_count, &prev, now));
 
 	delta = now - prev;
 	if (unlikely(event->hw.event_base == MSR_SMI_COUNT)) {
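For reference, on architectures without a flag-returning compare-and-exchange, the generic fallback builds try_cmpxchg on top of cmpxchg, which is why no functional change is expected. Roughly (a sketch of the fallback semantics, not the exact kernel macro):

	static inline bool local64_try_cmpxchg(local64_t *l, s64 *old, s64 new)
	{
		s64 tmp = local64_cmpxchg(l, *old, new);

		if (likely(tmp == *old))
			return true;
		*old = tmp;	/* failed: hand the current value back to the caller */
		return false;
	}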