iomap: hold state_lock over call to ifs_set_range_uptodate()
author     Matthew Wilcox (Oracle) <willy@infradead.org>
           Wed, 4 Oct 2023 16:53:01 +0000 (17:53 +0100)
committer  Andrew Morton <akpm@linux-foundation.org>
           Wed, 18 Oct 2023 21:34:16 +0000 (14:34 -0700)
Patch series "Add folio_end_read", v2.

The core of this patchset is the new folio_end_read() call which
filesystems can use when finishing a page cache read instead of separate
calls to mark the folio uptodate and unlock it.  As an illustration of its
use, I converted ext4, iomap & mpage; more can be converted.
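
As a rough illustration of the interface (the example_* helpers and the
error-handling shape are illustrative only, not lifted from any of the
converted filesystems):

#include <linux/pagemap.h>	/* folio_mark_uptodate(), folio_unlock(), folio_end_read() */

/* Before: completing a read takes two separate atomic operations. */
static void example_read_done_old(struct folio *folio, int err)
{
	if (!err)
		folio_mark_uptodate(folio);
	folio_unlock(folio);
}

/* After: one call marks the folio uptodate (on success) and unlocks it. */
static void example_read_done_new(struct folio *folio, int err)
{
	folio_end_read(folio, err == 0);
}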

I think that's useful by itself, but the interesting optimisation is that
we can implement folio_end_read() with a single XOR instruction that sets the uptodate
bit, clears the lock bit, tests the waiter bit and provides a write memory
barrier.  That removes one memory barrier and one atomic instruction from
each page read, which seems worth doing.  That's in patch 15.
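
A loose sketch of the shape of that optimisation, assuming PG_waiters
sits in the sign bit of the byte holding PG_locked and borrowing the
xor_unlock_is_negative_byte() and folio_wake_bit() names from later in
the series (this is a guess at the shape, not the actual patch 15,
which lives in mm/filemap.c):

/*
 * Sketch only: a single atomic XOR sets PG_uptodate (known to be clear),
 * clears PG_locked (known to be set) and, via the sign bit of that same
 * byte, reports whether PG_waiters was set so a wakeup is needed.
 */
static void folio_end_read_sketch(struct folio *folio, bool success)
{
	unsigned long mask = 1UL << PG_locked;

	if (success)
		mask |= 1UL << PG_uptodate;

	if (xor_unlock_is_negative_byte(mask, folio_flags(folio, 0)))
		folio_wake_bit(folio, PG_locked);
}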

The last two patches could be a separate series, but basically we can do
the same thing with the writeback flag that we do with the unlock flag;
clear it and test the waiters bit at the same time.
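
Sketching that writeback half under the same assumptions (again a guess
at the shape, not the actual last two patches, which also have
writeback accounting to do):

/* Clear PG_writeback and learn whether anyone is waiting, in one atomic op. */
static void folio_end_writeback_sketch(struct folio *folio)
{
	if (xor_unlock_is_negative_byte(1UL << PG_writeback,
					folio_flags(folio, 0)))
		folio_wake_bit(folio, PG_writeback);
}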

This patch (of 17):

This is really preparation for the next patch, but it lets us call
folio_mark_uptodate() in just one place instead of two.

Link: https://lkml.kernel.org/r/20231004165317.1061855-1-willy@infradead.org
Link: https://lkml.kernel.org/r/20231004165317.1061855-2-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Andreas Dilger <adilger.kernel@dilger.ca>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 5db54ca29a35acf39324b9354d9d02da21425eed..6e780ca64ce3430d4f2aa51c15c69e8e2af86a64 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -57,30 +57,32 @@ static inline bool ifs_block_is_uptodate(struct iomap_folio_state *ifs,
        return test_bit(block, ifs->state);
 }
 
-static void ifs_set_range_uptodate(struct folio *folio,
+static bool ifs_set_range_uptodate(struct folio *folio,
                struct iomap_folio_state *ifs, size_t off, size_t len)
 {
        struct inode *inode = folio->mapping->host;
        unsigned int first_blk = off >> inode->i_blkbits;
        unsigned int last_blk = (off + len - 1) >> inode->i_blkbits;
        unsigned int nr_blks = last_blk - first_blk + 1;
-       unsigned long flags;
 
-       spin_lock_irqsave(&ifs->state_lock, flags);
        bitmap_set(ifs->state, first_blk, nr_blks);
-       if (ifs_is_fully_uptodate(folio, ifs))
-               folio_mark_uptodate(folio);
-       spin_unlock_irqrestore(&ifs->state_lock, flags);
+       return ifs_is_fully_uptodate(folio, ifs);
 }
 
 static void iomap_set_range_uptodate(struct folio *folio, size_t off,
                size_t len)
 {
        struct iomap_folio_state *ifs = folio->private;
+       unsigned long flags;
+       bool uptodate = true;
 
-       if (ifs)
-               ifs_set_range_uptodate(folio, ifs, off, len);
-       else
+       if (ifs) {
+               spin_lock_irqsave(&ifs->state_lock, flags);
+               uptodate = ifs_set_range_uptodate(folio, ifs, off, len);
+               spin_unlock_irqrestore(&ifs->state_lock, flags);
+       }
+
+       if (uptodate)
                folio_mark_uptodate(folio);
 }
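
With ifs_set_range_uptodate() now reporting whether the folio became
fully uptodate, the read-completion caller can make the
uptodate-and-unlock decision in one place.  A hedged sketch of where
that is heading once folio_end_read() exists (assumed shape only; the
real conversion is a later patch in this series and also has to track
outstanding sub-folio reads before unlocking):

static void iomap_read_end_sketch(struct folio *folio, size_t off,
		size_t len, int error)
{
	struct iomap_folio_state *ifs = folio->private;
	bool uptodate = !error;

	if (ifs && !error) {
		unsigned long flags;

		spin_lock_irqsave(&ifs->state_lock, flags);
		uptodate = ifs_set_range_uptodate(folio, ifs, off, len);
		spin_unlock_irqrestore(&ifs->state_lock, flags);
	}

	/* One call replaces the folio_mark_uptodate() + folio_unlock() pair. */
	folio_end_read(folio, uptodate);
}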