perf/x86/rapl: Use local64_try_cmpxchg in rapl_event_update()
author		Uros Bizjak <ubizjak@gmail.com>
		Mon, 7 Aug 2023 14:51:15 +0000 (16:51 +0200)
committer	Ingo Molnar <mingo@kernel.org>
		Tue, 3 Oct 2023 19:13:45 +0000 (21:13 +0200)
Use local64_try_cmpxchg() instead of local64_cmpxchg(*ptr, old, new) == old.

The x86 CMPXCHG instruction returns success in the ZF flag, so this change
saves a compare after CMPXCHG (and the related move instruction in front of
CMPXCHG).

Also, try_cmpxchg() implicitly assigns the old *ptr value to "old" when
CMPXCHG fails, so there is no need to re-read the value in the loop.

No functional change intended.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Link: https://lore.kernel.org/r/20230807145134.3176-2-ubizjak@gmail.com
arch/x86/events/rapl.c

index e8f53b2590a5537aad41a827b84b87bb56e533b5..6d3e738486437c8b8b6ad7e64ff117df1378d7ec 100644 (file)
@@ -179,13 +179,11 @@ static u64 rapl_event_update(struct perf_event *event)
        s64 delta, sdelta;
        int shift = RAPL_CNTR_WIDTH;
 
-again:
        prev_raw_count = local64_read(&hwc->prev_count);
-       rdmsrl(event->hw.event_base, new_raw_count);
-
-       if (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
-                           new_raw_count) != prev_raw_count)
-               goto again;
+       do {
+               rdmsrl(event->hw.event_base, new_raw_count);
+       } while (!local64_try_cmpxchg(&hwc->prev_count,
+                                     &prev_raw_count, new_raw_count));
 
        /*
         * Now we have the new raw value and have updated the prev