lib/test_hmm: avoid accessing uninitialized pages
Author:     Miaohe Lin <linmiaohe@huawei.com>
AuthorDate: Thu, 9 Jun 2022 13:08:35 +0000 (21:08 +0800)
Commit:     Greg Kroah-Hartman <gregkh@linuxfoundation.org>
CommitDate: Wed, 17 Aug 2022 12:23:43 +0000 (14:23 +0200)
[ Upstream commit ed913b055a74b723976f8e885a3395162a0371e6 ]

If make_device_exclusive_range() fails, or marks fewer pages for
exclusive access than required, the remaining fields of pages are left
uninitialized, and dmirror_atomic_map() will then access those
uninitialized fields.  To fix it, call dmirror_atomic_map() iff all
pages are marked for exclusive access (we will break if mapped is less
than required anyway), so we won't access those uninitialized fields of
pages.

Link: https://lkml.kernel.org/r/20220609130835.35110-1-linmiaohe@huawei.com
Fixes: b659baea7546 ("mm: selftests for exclusive device memory")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
lib/test_hmm.c

index ac794e3540693185fd9dcb040a55926e79635222..a89cb4281c9dcb81cabe8199c3a64fa433fd026b 100644
@@ -731,7 +731,7 @@ static int dmirror_exclusive(struct dmirror *dmirror,
 
        mmap_read_lock(mm);
        for (addr = start; addr < end; addr = next) {
-               unsigned long mapped;
+               unsigned long mapped = 0;
                int i;
 
                if (end < addr + (ARRAY_SIZE(pages) << PAGE_SHIFT))
@@ -740,7 +740,13 @@ static int dmirror_exclusive(struct dmirror *dmirror,
                        next = addr + (ARRAY_SIZE(pages) << PAGE_SHIFT);
 
                ret = make_device_exclusive_range(mm, addr, next, pages, NULL);
-               mapped = dmirror_atomic_map(addr, next, pages, dmirror);
+               /*
+                * Do dmirror_atomic_map() iff all pages are marked for
+                * exclusive access to avoid accessing uninitialized
+                * fields of pages.
+                */
+               if (ret == (next - addr) >> PAGE_SHIFT)
+                       mapped = dmirror_atomic_map(addr, next, pages, dmirror);
                for (i = 0; i < ret; i++) {
                        if (pages[i]) {
                                unlock_page(pages[i]);
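
For readers outside the kernel tree, here is a minimal userspace sketch of
the pattern the fix relies on.  fill_some() and consume_all() are
hypothetical stand-ins for make_device_exclusive_range() and
dmirror_atomic_map(); this is an illustration, not the driver code.  A
helper that may populate only part of a caller-supplied array reports how
many entries it filled, and the caller walks the full array only when that
count matches what was requested.

#include <stdio.h>
#include <stddef.h>

#define NSLOTS 8

/*
 * Hypothetical helper standing in for make_device_exclusive_range():
 * it may populate only some leading entries and reports how many.
 */
static int fill_some(int *slots, size_t want)
{
	size_t filled = want > 3 ? 3 : want;	/* simulate a partial result */
	size_t i;

	for (i = 0; i < filled; i++)
		slots[i] = (int)i;
	return (int)filled;
}

/* Hypothetical consumer standing in for dmirror_atomic_map(). */
static void consume_all(const int *slots, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		printf("slot %zu = %d\n", i, slots[i]);
}

int main(void)
{
	int slots[NSLOTS];	/* uninitialized, like pages[] in the driver */
	int ret = fill_some(slots, NSLOTS);

	/* The fix's pattern: only consume when every entry was populated. */
	if (ret == NSLOTS)
		consume_all(slots, NSLOTS);
	else
		printf("only %d of %d slots filled, skipping consume\n",
		       ret, NSLOTS);
	return 0;
}

In the patch above, the equivalent check is ret == (next - addr) >> PAGE_SHIFT,
i.e. the count returned by make_device_exclusive_range() must equal the number
of pages covered by [addr, next) before dmirror_atomic_map() is called.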