rcu/nocb: Recheck lazy callbacks under the ->nocb_lock from shrinker
Author:     Frederic Weisbecker <frederic@kernel.org>
AuthorDate: Wed, 29 Mar 2023 16:02:02 +0000 (18:02 +0200)
Commit:     Paul E. McKenney <paulmck@kernel.org>
CommitDate: Wed, 10 May 2023 00:26:59 +0000 (17:26 -0700)
The ->lazy_len is only checked locklessly. Recheck it under the
->nocb_lock to avoid spending time on flushing/waking when it is not
necessary. The ->lazy_len can still be incremented concurrently (from
1 to infinity), but under the ->nocb_lock we at least know for sure
whether there are lazy callbacks at all (->lazy_len > 0).

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
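
For readers unfamiliar with the idiom, here is a minimal, self-contained
userspace sketch of the lockless-check-then-locked-recheck pattern this
patch applies. It is not kernel code: a pthread mutex stands in for
->nocb_lock, and the names lazy_len, nocb_lock and flush_lazy_callbacks()
are illustrative stand-ins rather than the kernel's actual symbols.

/*
 * Userspace sketch of the check-then-recheck pattern. All names are
 * illustrative stand-ins for the kernel's symbols.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_mutex_t nocb_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_ulong lazy_len;		/* stands in for rdp->lazy_len */

/* Placeholder for the expensive flush; returns how many were flushed. */
static unsigned long flush_lazy_callbacks(void)
{
	return atomic_exchange(&lazy_len, 0);
}

static unsigned long shrink_scan(void)
{
	unsigned long count;

	/* Lockless fast path: skip the lock entirely when empty. */
	if (!atomic_load_explicit(&lazy_len, memory_order_relaxed))
		return 0;

	pthread_mutex_lock(&nocb_lock);
	/*
	 * Recheck under the lock: another flusher may have emptied the
	 * queue since the lockless check. Enqueuers may still increment
	 * the count concurrently, but a non-zero value here guarantees
	 * at least one callback exists, so the flush cannot be wasted.
	 */
	count = atomic_load(&lazy_len);
	if (!count) {
		pthread_mutex_unlock(&nocb_lock);
		return 0;
	}
	count = flush_lazy_callbacks();
	pthread_mutex_unlock(&nocb_lock);
	return count;
}

int main(void)
{
	atomic_store(&lazy_len, 3);	/* pretend three lazy callbacks are queued */
	printf("first scan flushed %lu\n", shrink_scan());	/* slow path: 3 */
	printf("second scan flushed %lu\n", shrink_scan());	/* fast path: 0 */
	return 0;
}

The relaxed lockless load keeps the common empty case cheap, while the
locked recheck ensures the expensive flush/wake path only runs when at
least one callback is guaranteed to exist; an enqueue racing in after
the recheck merely means more work gets flushed than was counted, which
is harmless.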
diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index c321fce2af8e31b493c38826e5dd4a9eb121f4be..dfa9c10d672773258ff52201f117358639a797f7 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1358,12 +1358,20 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
                if (!rcu_rdp_is_offloaded(rdp))
                        continue;
 
-               _count = READ_ONCE(rdp->lazy_len);
-
-               if (_count == 0)
+               if (!READ_ONCE(rdp->lazy_len))
                        continue;
 
                rcu_nocb_lock_irqsave(rdp, flags);
+               /*
+                * Recheck under the nocb lock. Since we are not holding the bypass
+                * lock, we may still race with increments from the enqueuer, but we
+                * know for sure whether there is at least one lazy callback.
+                */
+               _count = READ_ONCE(rdp->lazy_len);
+               if (!_count) {
+                       rcu_nocb_unlock_irqrestore(rdp, flags);
+                       continue;
+               }
                WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false));
                rcu_nocb_unlock_irqrestore(rdp, flags);
                wake_nocb_gp(rdp, false);