mm: zswap: remove unnecessary trees cleanups in zswap_swapoff()
author		Yosry Ahmed <yosryahmed@google.com>
		Wed, 24 Jan 2024 04:51:12 +0000 (04:51 +0000)
committer	Andrew Morton <akpm@linux-foundation.org>
		Thu, 22 Feb 2024 18:24:39 +0000 (10:24 -0800)
During swapoff, try_to_unuse() makes sure that zswap_invalidate() is
called for all swap entries before zswap_swapoff() is called.  This means
that all zswap entries should already be removed from the trees.  Simplify
zswap_swapoff() by removing the tree cleanup code and leaving an assertion
in its place.

Link: https://lkml.kernel.org/r/20240124045113.415378-3-yosryahmed@google.com
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Reviewed-by: Chengming Zhou <zhouchengming@bytedance.com>
Cc: Chris Li <chrisl@kernel.org>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Nhat Pham <nphamcs@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/zswap.c

index 464179d4339979daf56252c1aac9635352fa6c8e..0e4a869b6fd8aa75a65539cea28179c927c0ff93 100644 (file)
@@ -1808,19 +1808,9 @@ void zswap_swapoff(int type)
        if (!trees)
                return;
 
-       for (i = 0; i < nr_zswap_trees[type]; i++) {
-               struct zswap_tree *tree = trees + i;
-               struct zswap_entry *entry, *n;
-
-               /* walk the tree and free everything */
-               spin_lock(&tree->lock);
-               rbtree_postorder_for_each_entry_safe(entry, n,
-                                                    &tree->rbroot,
-                                                    rbnode)
-                       zswap_free_entry(entry);
-               tree->rbroot = RB_ROOT;
-               spin_unlock(&tree->lock);
-       }
+       /* try_to_unuse() invalidated all the entries already */
+       for (i = 0; i < nr_zswap_trees[type]; i++)
+               WARN_ON_ONCE(!RB_EMPTY_ROOT(&trees[i].rbroot));
 
        kvfree(trees);
        nr_zswap_trees[type] = 0;