mm: optimise vmf_anon_prepare() for VMAs without an anon_vma
author    Matthew Wilcox (Oracle) <willy@infradead.org>
Fri, 26 Apr 2024 14:45:03 +0000 (15:45 +0100)
committer Andrew Morton <akpm@linux-foundation.org>
Mon, 6 May 2024 00:53:54 +0000 (17:53 -0700)
If the mmap_lock can be taken for read, we can call __anon_vma_prepare()
while holding it, saving ourselves a trip back through the fault handler.

Link: https://lkml.kernel.org/r/20240426144506.1290619-5-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Jann Horn <jannh@google.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/memory.c

index b5f72ce569e658777873a332bcb3a77dbedea56d..eea6e4984eaefade23089ec7eb1e673e0f6c9203 100644 (file)
@@ -3232,16 +3232,21 @@ static inline vm_fault_t vmf_can_call_fault(const struct vm_fault *vmf)
 vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
 {
        struct vm_area_struct *vma = vmf->vma;
+       vm_fault_t ret = 0;
 
        if (likely(vma->anon_vma))
                return 0;
        if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
-               vma_end_read(vma);
-               return VM_FAULT_RETRY;
+               if (!mmap_read_trylock(vma->vm_mm)) {
+                       vma_end_read(vma);
+                       return VM_FAULT_RETRY;
+               }
        }
        if (__anon_vma_prepare(vma))
-               return VM_FAULT_OOM;
-       return 0;
+               ret = VM_FAULT_OOM;
+       if (vmf->flags & FAULT_FLAG_VMA_LOCK)
+               mmap_read_unlock(vma->vm_mm);
+       return ret;
 }
 
 /*