dma-direct: Leak pages on dma_set_decrypted() failure
authorRick Edgecombe <rick.p.edgecombe@intel.com>
Thu, 22 Feb 2024 00:17:21 +0000 (16:17 -0800)
committerChristoph Hellwig <hch@lst.de>
Wed, 28 Feb 2024 13:31:38 +0000 (05:31 -0800)
On TDX it is possible for the untrusted host to cause
set_memory_encrypted() or set_memory_decrypted() to fail such that an
error is returned and the resulting memory is shared. Callers need to
take care to handle these errors to avoid returning decrypted (shared)
memory to the page allocator, which could lead to functional or security
issues.

The DMA direct allocation paths could free decrypted/shared pages back to
the page allocator if dma_set_decrypted() fails. Since this failure should
be rare, just leak the pages in that case instead of freeing them.

Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
kernel/dma/direct.c

index 98b2e192fd6965be62e105333ba04a5cc5319f6b..4d543b1e9d577a9013099f2d70c1d797f5e09028 100644
@@ -286,7 +286,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
        } else {
                ret = page_address(page);
                if (dma_set_decrypted(dev, ret, size))
-                       goto out_free_pages;
+                       goto out_leak_pages;
        }
 
        memset(ret, 0, size);
@@ -307,6 +307,8 @@ out_encrypt_pages:
 out_free_pages:
        __dma_direct_free_pages(dev, page, size);
        return NULL;
+out_leak_pages:
+       return NULL;
 }
 
 void dma_direct_free(struct device *dev, size_t size,
@@ -367,12 +369,11 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 
        ret = page_address(page);
        if (dma_set_decrypted(dev, ret, size))
-               goto out_free_pages;
+               goto out_leak_pages;
        memset(ret, 0, size);
        *dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
        return page;
-out_free_pages:
-       __dma_direct_free_pages(dev, page, size);
+out_leak_pages:
        return NULL;
 }