s390/gmap: don't unconditionally call pte_unmap_unlock() in __gmap_zap()
author    David Hildenbrand <david@redhat.com>
          Thu, 9 Sep 2021 16:22:41 +0000 (18:22 +0200)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
          Thu, 18 Nov 2021 18:16:40 +0000 (19:16 +0100)
[ Upstream commit b159f94c86b43cf7e73e654bc527255b1f4eafc4 ]

... otherwise we will try to unlock, via a garbage pointer, a spinlock
that was never locked.

By the time we reach this code path, we have usually already looked up
a PGSTE successfully; however, evil user space could have manipulated
the VMA layout in the meantime and triggered removal of the page table.

Fixes: 1e133ab296f3 ("s390/mm: split arch/s390/mm/pgtable.c")
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Link: https://lore.kernel.org/r/20210909162248.14969-3-david@redhat.com
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index e0735c3437759ccee564c8b1c77db881dc4fd718..d63c0ccc5ccda17fbfb6e15a44a08eb01654559a 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -689,9 +689,10 @@ void __gmap_zap(struct gmap *gmap, unsigned long gaddr)
 
                /* Get pointer to the page table entry */
                ptep = get_locked_pte(gmap->mm, vmaddr, &ptl);
-               if (likely(ptep))
+               if (likely(ptep)) {
                        ptep_zap_unused(gmap->mm, vmaddr, ptep, 0);
-               pte_unmap_unlock(ptep, ptl);
+                       pte_unmap_unlock(ptep, ptl);
+               }
        }
 }
 EXPORT_SYMBOL_GPL(__gmap_zap);
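
For reference, here is a minimal sketch of the resulting locking
pattern, using the identifiers from the hunk above. get_locked_pte()
acquires the page-table lock (returned through ptl) only when it
actually finds a PTE, so pte_unmap_unlock() must stay on the success
path. This is an illustration of the pattern, not the full function
body:

	/*
	 * Sketch of the fixed pattern (identifiers as in the hunk above).
	 * get_locked_pte() returns NULL when no page table backs vmaddr,
	 * e.g. after user space rearranged the VMA layout and the page
	 * table was removed. In that case ptl was never written, so an
	 * unconditional pte_unmap_unlock() would unlock through a garbage
	 * pointer.
	 */
	spinlock_t *ptl;
	pte_t *ptep;

	ptep = get_locked_pte(gmap->mm, vmaddr, &ptl);
	if (likely(ptep)) {
		ptep_zap_unused(gmap->mm, vmaddr, ptep, 0);
		/* Only unlock the lock we actually took. */
		pte_unmap_unlock(ptep, ptl);
	}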