erofs: avoid memory allocation failure during rolling decompression
author     Huang Jianan <huangjianan@oppo.com>
           Tue, 16 Mar 2021 03:15:14 +0000 (11:15 +0800)
committer  Gao Xiang <hsiangkao@redhat.com>
           Mon, 29 Mar 2021 02:18:00 +0000 (10:18 +0800)
Currently, the -ENOMEM returned here would be treated as an I/O error.
Therefore, it'd be better to guarantee memory allocation during rolling
decompression to avoid such a false I/O error.

In the long term, we might consider adding another !Uptodate case for
this situation.

Link: https://lore.kernel.org/r/20210316031515.90954-1-huangjianan@oppo.com
Reviewed-by: Gao Xiang <hsiangkao@redhat.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Huang Jianan <huangjianan@oppo.com>
Signed-off-by: Guo Weichao <guoweichao@oppo.com>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
fs/erofs/decompressor.c

index 1cb1ffd105698594a6b5e1d41f4ae56756e85fb5..34e73ff76f89eb8a53eba82643da625242246c18 100644 (file)
@@ -73,9 +73,8 @@ static int z_erofs_lz4_prepare_destpages(struct z_erofs_decompress_req *rq,
                        victim = availables[--top];
                        get_page(victim);
                } else {
-                       victim = erofs_allocpage(pagepool, GFP_KERNEL);
-                       if (!victim)
-                               return -ENOMEM;
+                       victim = erofs_allocpage(pagepool,
+                                                GFP_KERNEL | __GFP_NOFAIL);
                        set_page_private(victim, Z_EROFS_SHORTLIVED_PAGE);
                }
                rq->out[i] = victim;