crypto: x86/aes-xts - simplify loop in xts_crypt_slowpath()
author Eric Biggers <ebiggers@google.com>
Sat, 20 Apr 2024 05:54:55 +0000 (22:54 -0700)
committer Herbert Xu <herbert@gondor.apana.org.au>
Fri, 26 Apr 2024 09:26:10 +0000 (17:26 +0800)
Since the total length processed by the loop in xts_crypt_slowpath() is
a multiple of AES_BLOCK_SIZE, just round the length down to a multiple
of AES_BLOCK_SIZE even on the last step.  This doesn't change behavior, as
the last step will process a multiple of AES_BLOCK_SIZE regardless.
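
Below is a minimal stand-alone sketch (not kernel code; the helper name
round_down_pow2 and the tested length range are illustrative) showing why
this is safe: masking with ~(AES_BLOCK_SIZE - 1) is equivalent to
round_down(nbytes, AES_BLOCK_SIZE) for a power-of-two block size, and
rounding down a length that is already a multiple of AES_BLOCK_SIZE is a
no-op, so the last step is unaffected:

	#include <assert.h>
	#include <stdio.h>

	#define AES_BLOCK_SIZE 16	/* AES block size in bytes */

	/* Stand-in for the kernel's round_down() on power-of-two alignment. */
	static unsigned int round_down_pow2(unsigned int n, unsigned int align)
	{
		return n & ~(align - 1);
	}

	int main(void)
	{
		unsigned int nbytes;

		for (nbytes = 0; nbytes <= 4096; nbytes++) {
			/* Old code: explicit round_down() on non-final steps. */
			unsigned int old_len = round_down_pow2(nbytes, AES_BLOCK_SIZE);
			/* New code: mask off any partial block on every step. */
			unsigned int new_len = nbytes & ~(AES_BLOCK_SIZE - 1);

			assert(old_len == new_len);

			/* On the final step nbytes is already block-aligned,
			 * so the mask changes nothing. */
			if (nbytes % AES_BLOCK_SIZE == 0)
				assert(new_len == nbytes);
		}

		printf("mask and round_down agree for all tested lengths\n");
		return 0;
	}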

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
arch/x86/crypto/aesni-intel_glue.c

index 110b3282a1f272e203eea9438c6f955abf8ea8b5..02a4c0c276dfdef52dac42bdaeae178fa487a9f3 100644
@@ -935,16 +935,13 @@ xts_crypt_slowpath(struct skcipher_request *req, xts_crypt_func crypt_func)
        err = skcipher_walk_virt(&walk, req, false);
 
        while (walk.nbytes) {
-               unsigned int nbytes = walk.nbytes;
-
-               if (nbytes < walk.total)
-                       nbytes = round_down(nbytes, AES_BLOCK_SIZE);
-
                kernel_fpu_begin();
-               (*crypt_func)(&ctx->crypt_ctx, walk.src.virt.addr,
-                             walk.dst.virt.addr, nbytes, req->iv);
+               (*crypt_func)(&ctx->crypt_ctx,
+                             walk.src.virt.addr, walk.dst.virt.addr,
+                             walk.nbytes & ~(AES_BLOCK_SIZE - 1), req->iv);
                kernel_fpu_end();
-               err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
+               err = skcipher_walk_done(&walk,
+                                        walk.nbytes & (AES_BLOCK_SIZE - 1));
        }
 
        if (err || !tail)