m68k: implement xor_unlock_is_negative_byte
author	Matthew Wilcox (Oracle) <willy@infradead.org>
Wed, 4 Oct 2023 16:53:09 +0000 (17:53 +0100)
committer	Andrew Morton <akpm@linux-foundation.org>
Wed, 18 Oct 2023 21:34:16 +0000 (14:34 -0700)
Using EOR to clear the guaranteed-to-be-set lock bit also sets the
condition codes, so we can test the negative flag directly, just like
the x86 implementation.  This should be more efficient than the generic
implementation in filemap.c.  It would be better still if m68k had
__GCC_ASM_FLAG_OUTPUTS__, so the compiler could consume the negative
flag directly instead of materialising it with SMI.
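
For comparison, the generic C fallback this beats looks roughly like
the following (a sketch only, not the exact filemap.c code; the helper
name and the use of atomic_long_fetch_xor_release() are assumptions):

	/*
	 * Sketch of a generic fallback: atomically XOR the mask in,
	 * then test bit 7 (PG_waiters) of the old value.  The compiler
	 * cannot use the CPU condition codes here, so the test is a
	 * separate step after the atomic op.
	 */
	static inline bool generic_xor_unlock_is_negative_byte(unsigned long mask,
			volatile unsigned long *p)
	{
		unsigned long old;

		old = atomic_long_fetch_xor_release(mask, (atomic_long_t *)p);
		return (old & BIT(7)) != 0;
	}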

ColdFire doesn't have a byte-sized EOR, so we use a long-sized EOR and
test bit 7 afterwards.  That costs a second memory access, but it is
still slightly better than the current C code.
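
The helper exists for the benefit of the folio unlock fast path.
Roughly (a sketch, not verbatim from mm/filemap.c), the caller ends up
looking like this:

	/*
	 * Clear PG_locked and learn in the same operation whether
	 * PG_waiters (bit 7) is set, so the slow wakeup path is only
	 * taken when somebody is actually waiting on the lock bit.
	 */
	void folio_unlock(struct folio *folio)
	{
		/* Bit 7 allows x86 to check the byte's sign bit */
		BUILD_BUG_ON(PG_waiters != 7);
		VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
		if (xor_unlock_is_negative_byte(1 << PG_locked,
						folio_flags(folio, 0)))
			folio_wake_bit(folio, PG_locked);
	}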

Link: https://lkml.kernel.org/r/20231004165317.1061855-10-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Andreas Dilger <adilger.kernel@dilger.ca>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
arch/m68k/include/asm/bitops.h

index e984af71df6bee14a90af4ad9d21b57c95e6f8ed..80ee3609590556a9f43b80f226a719736e656f41 100644
@@ -319,6 +319,28 @@ arch___test_and_change_bit(unsigned long nr, volatile unsigned long *addr)
        return test_and_change_bit(nr, addr);
 }
 
+static inline bool xor_unlock_is_negative_byte(unsigned long mask,
+               volatile unsigned long *p)
+{
+#ifdef CONFIG_COLDFIRE
+       __asm__ __volatile__ ("eorl %1, %0"
+               : "+m" (*p)
+               : "d" (mask)
+               : "memory");
+       return *p & (1 << 7);
+#else
+       char result;
+       char *cp = (char *)p + 3;       /* m68k is big-endian */
+
+       __asm__ __volatile__ ("eor.b %1, %2; smi %0"
+               : "=d" (result)
+               : "di" (mask), "o" (*cp)
+               : "memory");
+       return result;
+#endif
+}
+#define xor_unlock_is_negative_byte xor_unlock_is_negative_byte
+
 /*
  *     The true 68020 and more advanced processors support the "bfffo"
  *     instruction for finding bits. ColdFire and simple 68000 parts