From: Dave Chinner
Date: Thu, 14 Jul 2022 02:04:38 +0000 (+1000)
Subject: xfs: reduce the number of atomics when locking a buffer after lookup
X-Git-Url: http://git.maquefel.me/?a=commitdiff_plain;h=d8d9bbb0ee6c79191b704d88c8ae712b89e0d2bb;p=linux.git

xfs: reduce the number of atomics when locking a buffer after lookup

Avoid an extra atomic operation in the non-trylock case by only doing a
trylock if the XBF_TRYLOCK flag is set. This follows the pattern in the
IO path with NOWAIT semantics, where the "trylock-fail-lock" path showed
5-10% reduced throughput compared to just using a single lock call when
not under NOWAIT conditions. So make that same change here, too.

See commit 942491c9e6d6 ("xfs: fix AIM7 regression") for details.

Signed-off-by: Dave Chinner
[hch: split from a larger patch]
Signed-off-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 81ca951b451a9..374c4e508b127 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -534,11 +534,12 @@ xfs_buf_find_lock(
 	struct xfs_buf		*bp,
 	xfs_buf_flags_t		flags)
 {
-	if (!xfs_buf_trylock(bp)) {
-		if (flags & XBF_TRYLOCK) {
+	if (flags & XBF_TRYLOCK) {
+		if (!xfs_buf_trylock(bp)) {
 			XFS_STATS_INC(bp->b_mount, xb_busy_locked);
 			return -EAGAIN;
 		}
+	} else {
 		xfs_buf_lock(bp);
 		XFS_STATS_INC(bp->b_mount, xb_get_locked_waited);
 	}
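
For illustration only, here is the locking pattern in isolation as a
standalone userspace sketch built on pthreads, not kernel code. The
names lock_buffer() and TRYLOCK_FLAG are hypothetical stand-ins for
xfs_buf_find_lock() and XBF_TRYLOCK; only the control flow mirrors the
patch: callers that asked for trylock semantics get a single trylock
attempt, while blocking callers go straight to the sleeping lock and
skip the extra atomic a failed trylock would cost.

/*
 * Userspace sketch of the flag-gated trylock pattern (assumed names,
 * pthreads instead of the XFS buffer lock).  Build with: cc -pthread.
 */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

#define TRYLOCK_FLAG	(1U << 0)	/* hypothetical, stands in for XBF_TRYLOCK */

static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;

/* Return 0 on success, -EAGAIN if TRYLOCK_FLAG is set and the lock is busy. */
static int lock_buffer(unsigned int flags)
{
	if (flags & TRYLOCK_FLAG) {
		/* Non-blocking caller: one trylock attempt, then bail out. */
		if (pthread_mutex_trylock(&buf_lock) != 0)
			return -EAGAIN;
	} else {
		/* Blocking caller: no trylock first, sleep on the lock directly. */
		pthread_mutex_lock(&buf_lock);
	}
	return 0;
}

int main(void)
{
	if (lock_buffer(0) == 0) {
		printf("blocking path took the lock without a trylock attempt\n");
		pthread_mutex_unlock(&buf_lock);
	}

	if (lock_buffer(TRYLOCK_FLAG) == 0) {
		printf("trylock path acquired the lock\n");
		pthread_mutex_unlock(&buf_lock);
	} else {
		printf("trylock path would return -EAGAIN\n");
	}
	return 0;
}

The design point is the same one the commit message makes: in the
common blocking case the lock is taken with a single lock call rather
than a failed trylock followed by a blocking lock.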