locking/qspinlock: Use atomic_try_cmpxchg_relaxed() in xchg_tail()
author		Uros Bizjak <ubizjak@gmail.com>
		Thu, 21 Mar 2024 19:52:47 +0000 (20:52 +0100)
committer	Ingo Molnar <mingo@kernel.org>
		Thu, 11 Apr 2024 13:14:54 +0000 (15:14 +0200)
commit		79a34e3d8411050c3c7550c5163d6f9dc41e8f66
tree		ddc759da727ab424638b4ea0cfa5200201e0a787
parent		21689e4bfb9ae8f8b45279c53faecaa5a056ffa5
locking/qspinlock: Use atomic_try_cmpxchg_relaxed() in xchg_tail()

Use atomic_try_cmpxchg_relaxed(*ptr, &old, new) instead of
atomic_cmpxchg_relaxed(*ptr, old, new) == old in xchg_tail().
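
For context, the retry loop in xchg_tail() changes roughly as follows
(a simplified sketch along the lines of kernel/locking/qspinlock.c,
not an exact copy of the file):

  Before:

	u32 old, new, val = atomic_read(&lock->val);

	for (;;) {
		new = (val & _Q_LOCKED_PENDING_MASK) | tail;
		old = atomic_cmpxchg_relaxed(&lock->val, val, new);
		if (old == val)		/* extra compare of the return value */
			break;
		val = old;
	}
	return old;

  After:

	u32 old, new;

	old = atomic_read(&lock->val);
	do {
		new = (old & _Q_LOCKED_PENDING_MASK) | tail;
		/* on failure, 'old' is updated with the current value */
	} while (!atomic_try_cmpxchg_relaxed(&lock->val, &old, new));
	return old;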

The x86 CMPXCHG instruction reports success in the ZF flag,
so this change saves a compare instruction after CMPXCHG.
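
To make the semantics explicit, a try_cmpxchg variant can be expressed
on top of cmpxchg roughly as below (hypothetical helper name, shown for
illustration only, not the kernel's actual implementation). An
architecture like x86 can instead derive the boolean result straight
from ZF after CMPXCHG, so the explicit comparison is never emitted:

	static inline bool try_cmpxchg_relaxed_sketch(atomic_t *v, int *oldp, int new)
	{
		int cur = atomic_cmpxchg_relaxed(v, *oldp, new);

		if (cur == *oldp)
			return true;
		*oldp = cur;	/* caller's 'old' now holds the fresh value */
		return false;
	}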

No functional change intended.

Since this code path is only compiled for NR_CPUS >= 16k, I have
tested it by unconditionally setting _Q_PENDING_BITS to 1 in
<asm-generic/qspinlock_types.h>.
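
For reference, the cmpxchg-based xchg_tail() is only built when the
pending field is a single bit; the selection in
<asm-generic/qspinlock_types.h> looks roughly like this (sketch from
memory, the comments are mine):

	#if CONFIG_NR_CPUS < (1U << 14)
	#define _Q_PENDING_BITS		8	/* xchg_tail() uses a 16-bit xchg() */
	#else
	#define _Q_PENDING_BITS		1	/* xchg_tail() uses the cmpxchg loop */
	#endif

Forcing _Q_PENDING_BITS to 1 therefore compiles and exercises the
modified cmpxchg path even on a configuration with a small NR_CPUS.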

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Waiman Long <longman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20240321195309.484275-1-ubizjak@gmail.com
kernel/locking/qspinlock.c