As the comments in "struct vm_fault" describe:
".address" : 'Faulting virtual address - masked'
".real_address" : 'Faulting virtual address - unmasked'
The documentation [1] says: "Whatever the routes, all architectures end up to the
invocation of handle_mm_fault() which, in turn, (likely) ends up calling
__handle_mm_fault() to carry out the actual work of allocating the page
tables."
__handle_mm_fault() assigns the two addresses as:
.address = address & PAGE_MASK,
.real_address = address,
This is a debug dump from running `./test_progs -a "*arena*"`:
[ 69.767494] arena fault: vmf->address = 10000001d000, vmf->real_address = 10000001d008
[ 69.767496] arena fault: vmf->address = 10000001c000, vmf->real_address = 10000001c008
[ 69.767499] arena fault: vmf->address = 10000001b000, vmf->real_address = 10000001b008
[ 69.767501] arena fault: vmf->address = 10000001a000, vmf->real_address = 10000001a008
[ 69.767504] arena fault: vmf->address = 100000019000, vmf->real_address = 100000019008
[ 69.769388] arena fault: vmf->address = 10000001e000, vmf->real_address = 10000001e1e8
So 'vmf->address' is already page-aligned, and we can use it directly for the
BPF arena kernel address space cast, without masking it again.
[1] https://docs.kernel.org/mm/page_tables.html
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Link: https://lore.kernel.org/r/20240507063358.8048-1-haiyue.wang@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
 	int ret;
 	kbase = bpf_arena_get_kern_vm_start(arena);
-	kaddr = kbase + (u32)(vmf->address & PAGE_MASK);
+	kaddr = kbase + (u32)(vmf->address);
 	guard(mutex)(&arena->lock);
 	page = vmalloc_to_page((void *)kaddr);