locking/atomic/x86: Introduce arch_atomic64_read_nonatomic() to x86_32
authorUros Bizjak <ubizjak@gmail.com>
Wed, 10 Apr 2024 06:29:34 +0000 (08:29 +0200)
committerIngo Molnar <mingo@kernel.org>
Wed, 10 Apr 2024 13:04:54 +0000 (15:04 +0200)
Introduce arch_atomic64_read_nonatomic() for 32-bit targets to load
the value from an atomic64_t location in a non-atomic way. This
function is intended to be used in cases where a subsequent atomic
operation will handle the torn value, and can be used to prime the
first iteration of unconditional try_cmpxchg() loops.
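
For illustration, a minimal sketch of how a caller might prime an
unconditional try_cmpxchg() loop with this helper (not part of this
patch; arch_atomic64_or() is used here only as an example):

	static __always_inline void arch_atomic64_or(s64 i, atomic64_t *v)
	{
		/* A torn read is fine: try_cmpxchg() will reload and retry. */
		s64 val = arch_atomic64_read_nonatomic(v);

		do { } while (!arch_atomic64_try_cmpxchg(v, &val, val | i));
	}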

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20240410062957.322614-2-ubizjak@gmail.com
arch/x86/include/asm/atomic64_32.h

index ec217aaf41eb8028396a979a51f6c783771a3709..bbdf174de110f12a25a0d538ea0065a9f3e50c24 100644 (file)
@@ -14,6 +14,32 @@ typedef struct {
 
 #define ATOMIC64_INIT(val)     { (val) }
 
+/*
+ * Read an atomic64_t non-atomically.
+ *
+ * This is intended to be used in cases where a subsequent atomic operation
+ * will handle the torn value, and can be used to prime the first iteration
+ * of unconditional try_cmpxchg() loops, e.g.:
+ *
+ *     s64 val = arch_atomic64_read_nonatomic(v);
+ *     do { } while (!arch_atomic64_try_cmpxchg(v, &val, val OP i));
+ *
+ * This is NOT safe to use where the value is not always checked by a
+ * subsequent atomic operation, such as in conditional try_cmpxchg() loops
+ * that can break before the atomic operation, e.g.:
+ *
+ *     s64 val = arch_atomic64_read_nonatomic(v);
+ *     do {
+ *             if (condition(val))
+ *                     break;
+ *     } while (!arch_atomic64_try_cmpxchg(v, &val, val OP i));
+ */
+static __always_inline s64 arch_atomic64_read_nonatomic(const atomic64_t *v)
+{
+       /* See comment in arch_atomic_read(). */
+       return __READ_ONCE(v->counter);
+}
+
 #define __ATOMIC64_DECL(sym) void atomic64_##sym(atomic64_t *, ...)
 #ifndef ATOMIC64_EXPORT
 #define ATOMIC64_DECL_ONE __ATOMIC64_DECL