bpf: Propagate error from htab_lock_bucket() to userspace
Author:    Hou Tao <houtao1@huawei.com>
Date:      Wed, 31 Aug 2022 04:26:28 +0000 (12:26 +0800)
Committer: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date:      Wed, 26 Oct 2022 10:34:40 +0000 (12:34 +0200)
[ Upstream commit 66a7a92e4d0d091e79148a4c6ec15d1da65f4280 ]

In __htab_map_lookup_and_delete_batch(), if htab_lock_bucket() returns
-EBUSY, the code moves on to the next bucket. Doing so not only silently
skips the elements in the current bucket, but can also incur an
out-of-bounds memory access or expose kernel memory to userspace if the
current bucket_cnt is greater than bucket_size or is zero.

Fix it by stopping the batch operation and returning -EBUSY when
htab_lock_bucket() fails; the application can then retry or skip the
busy batch as needed.
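
With this change a batched lookup-and-delete can fail with -EBUSY part
way through a walk, so callers need a retry path. The sketch below is
not part of this patch: it is a hypothetical userspace helper (drain_map,
BATCH_SZ are made-up names) showing one way to handle it with libbpf's
bpf_map_lookup_and_delete_batch(). It assumes libbpf >= 1.0 semantics
(the wrapper returns a negative errno), a hash map with 4-byte keys and
values, and a __u32 resume cursor as used by the kernel selftests for
hash maps.

  /* Hypothetical userspace helper, not part of this patch: drain a hash
   * map with batched lookup-and-delete, retrying whenever the kernel now
   * reports a contended bucket via -EBUSY instead of skipping it.
   */
  #include <errno.h>
  #include <stdbool.h>
  #include <stdio.h>
  #include <bpf/bpf.h>

  #define BATCH_SZ 64

  static int drain_map(int map_fd)
  {
          __u32 keys[BATCH_SZ], values[BATCH_SZ];
          __u32 batch = 0;
          bool first = true;
          LIBBPF_OPTS(bpf_map_batch_opts, opts);

          for (;;) {
                  __u32 count = BATCH_SZ;
                  int err;

                  err = bpf_map_lookup_and_delete_batch(map_fd,
                                  first ? NULL : &batch, &batch,
                                  keys, values, &count, &opts);
                  first = false;

                  if (err && err != -EBUSY && err != -ENOENT)
                          return err;     /* genuine failure */

                  /* Elements gathered before the stop are still reported
                   * through 'count'; consume them before deciding how to
                   * continue.
                   */
                  for (__u32 i = 0; i < count; i++)
                          printf("key %u -> value %u\n", keys[i], values[i]);

                  if (err == -ENOENT)
                          return 0;       /* whole map walked */

                  /* err == 0 or -EBUSY: resume from the returned cursor;
                   * on -EBUSY this retries the busy bucket.
                   */
          }
  }

Whether a caller retries immediately, backs off, or advances past the
busy bucket is a policy decision; the point of the fix is that the
failure is now visible to userspace instead of elements being dropped.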

Fixes: 20b6cc34ea74 ("bpf: Avoid hashtab deadlock with map_locked")
Reported-by: Hao Sun <sunhao.th@gmail.com>
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20220831042629.130006-3-houtao@huaweicloud.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index cae858985f0c463c2f7f5c71301422906c0e10d4..e7f45a966e6b5f7cf43def852841b14932f301e2 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -1671,8 +1671,11 @@ again_nocopy:
        /* do not grab the lock unless need it (bucket_cnt > 0). */
        if (locked) {
                ret = htab_lock_bucket(htab, b, batch, &flags);
-               if (ret)
-                       goto next_batch;
+               if (ret) {
+                       rcu_read_unlock();
+                       bpf_enable_instrumentation();
+                       goto after_loop;
+               }
        }
 
        bucket_cnt = 0;
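
A note on the error path shown above (my reading of the surrounding
function, not stated in the patch itself): the bucket walk runs under
rcu_read_lock() with instrumentation disabled, and the existing paths
that reach after_loop have already dropped the RCU read lock and
re-enabled instrumentation. Bailing out directly from the lock-failure
point therefore has to do both itself before jumping to the common
exit, which reports the current batch cursor and element count back to
userspace so the caller knows where to resume.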