commit (parent: 1f13dff)
mm/mprotect: only reference swap pfn page if type match
author    Peter Xu <peterx@redhat.com>              Tue, 23 Aug 2022 22:11:38 +0000 (18:11 -0400)
committer Andrew Morton <akpm@linux-foundation.org> Sun, 28 Aug 2022 21:02:46 +0000 (14:02 -0700)
Yu Zhao reported a bug after the commit "mm/swap: Add swp_offset_pfn() to
fetch PFN from swap entry" added a check in swp_offset_pfn() for swap type [1]:
kernel BUG at include/linux/swapops.h:117!
CPU: 46 PID: 5245 Comm: EventManager_De Tainted: G S O L 6.0.0-dbg-DEV #2
RIP: 0010:pfn_swap_entry_to_page+0x72/0xf0
Code: c6 48 8b 36 48 83 fe ff 74 53 48 01 d1 48 83 c1 08 48 8b 09 f6
c1 01 75 7b 66 90 48 89 c1 48 8b 09 f6 c1 01 74 74 5d c3 eb 9e <0f> 0b
48 ba ff ff ff ff 03 00 00 00 eb ae a9 ff 0f 00 00 75 13 48
RSP: 0018:ffffa59e73fabb80 EFLAGS: 00010282
RAX: 00000000ffffffe8 RBX: 0c00000000000000 RCX: ffffcd5440000000
RDX: 1ffffffffff7a80a RSI: 0000000000000000 RDI: 0c0000000000042b
RBP: ffffa59e73fabb80 R08: ffff9965ca6e8bb8 R09: 0000000000000000
R10: ffffffffa5a2f62d R11: 0000030b372e9fff R12: ffff997b79db5738
R13: 000000000000042b R14: 0c0000000000042b R15: 1ffffffffff7a80a
FS:  00007f549d1bb700(0000) GS:ffff99d3cf680000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000440d035b3180 CR3: 0000002243176004 CR4: 00000000003706e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
change_pte_range+0x36e/0x880
change_p4d_range+0x2e8/0x670
change_protection_range+0x14e/0x2c0
mprotect_fixup+0x1ee/0x330
do_mprotect_pkey+0x34c/0x440
__x64_sys_mprotect+0x1d/0x30
It triggers because pfn_swap_entry_to_page() could be called upon e.g. a
genuine swap entry.
Fix it by only calling it when it's a write migration entry where the page*
is used.
[1] https://lore.kernel.org/lkml/CAOUHufaVC2Za-p8m0aiHw6YkheDcrO-C3wRGixwDS32VTS+k1w@mail.gmail.com/
Link: https://lkml.kernel.org/r/20220823221138.45602-1-peterx@redhat.com
Fixes: 6c287605fd56 ("mm: remember exclusively mapped anonymous pages with PG_anon_exclusive")
Signed-off-by: Peter Xu <peterx@redhat.com>
Reported-by: Yu Zhao <yuzhao@google.com>
Tested-by: Yu Zhao <yuzhao@google.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 3a23dde73723bc8adb1847c403b99a3d66970b88..bc6bddd156ca65fef31cfc761f18ac9f6381e10b 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -196,10 +196,11 @@ static unsigned long change_pte_range(struct mmu_gather *tlb,
 			pages++;
 		} else if (is_swap_pte(oldpte)) {
 			swp_entry_t entry = pte_to_swp_entry(oldpte);
-			struct page *page = pfn_swap_entry_to_page(entry);
 			pte_t newpte;

 			if (is_writable_migration_entry(entry)) {
+				struct page *page = pfn_swap_entry_to_page(entry);
+
 				/*
 				 * A protection check is difficult so
 				 * just be safe and disable write