path: root/crypto/ctr.c
Age | Commit message | Author
2018-04-21 | crypto: remove several VLAs | Salvatore Mesoraca
We avoid various VLAs[1] by using constant expressions for block size and alignment mask.

[1] http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
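As an illustration of the pattern this commit targets, here is a hedged C sketch, not the actual hunk: the function names are invented, while MAX_CIPHER_BLOCKSIZE and MAX_CIPHER_ALIGNMASK are the worst-case constants this series introduced for exactly this purpose.

    /* Before: the block size is only known at runtime, so 'tmp' is a VLA. */
    static void keystream_xor_vla(unsigned int bsize, u8 *dst, const u8 *src)
    {
            u8 tmp[bsize];                  /* VLA on the kernel stack */

            memcpy(tmp, src, bsize);
            crypto_xor(dst, tmp, bsize);
    }

    /* After: bound the buffer by the largest block size any cipher may have. */
    static void keystream_xor_fixed(unsigned int bsize, u8 *dst, const u8 *src)
    {
            u8 tmp[MAX_CIPHER_BLOCKSIZE + MAX_CIPHER_ALIGNMASK];

            memcpy(tmp, src, bsize);        /* bsize <= MAX_CIPHER_BLOCKSIZE */
            crypto_xor(dst, tmp, bsize);
    }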
2017-08-04 | crypto: algapi - make crypto_xor() take separate dst and src arguments | Ard Biesheuvel
There are quite a number of occurrences in the kernel of the pattern

    if (dst != src)
            memcpy(dst, src, walk.total % AES_BLOCK_SIZE);
    crypto_xor(dst, final, walk.total % AES_BLOCK_SIZE);

or

    crypto_xor(keystream, src, nbytes);
    memcpy(dst, keystream, nbytes);

where crypto_xor() is preceded or followed by a memcpy() invocation that is only there because crypto_xor() uses its output parameter as one of the inputs. To avoid having to add new instances of this pattern in the arm64 code, which will be refactored to implement non-SIMD fallbacks, add an alternative implementation called crypto_xor_cpy(), taking separate input and output arguments. This removes the need for the separate memcpy().

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
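The shape of the new helper, roughly as declared in include/crypto/algapi.h, and the call-site simplification it enables (buffer names here are illustrative):

    void crypto_xor_cpy(u8 *dst, const u8 *src1, const u8 *src2,
                        unsigned int size);

    /* Before: XOR in place, then copy to the real destination. */
    crypto_xor(keystream, src, nbytes);
    memcpy(dst, keystream, nbytes);

    /* After: one pass, separate output: dst = keystream ^ src. */
    crypto_xor_cpy(dst, keystream, src, nbytes);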
2017-03-09 | crypto: ctr - Propagate NEED_FALLBACK bit | Marcelo Cerri
When requesting a fallback algorithm, we should propagate the NEED_FALLBACK bit when searching for the underlying algorithm. This prevents drivers from allocating unnecessary fallbacks that are never called. For instance, currently the vmx-crypto driver will use the following chain of calls when calling the fallback implementation:

    p8_aes_ctr -> ctr(p8_aes) -> aes-generic

However p8_aes will always delegate its calls to aes-generic. With this patch, p8_aes_ctr will be able to use ctr(aes-generic) directly as its fallback. The same applies to aes_s390.

Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
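A sketch of the idea, not the exact hunk: when instantiating the template, fold the caller's NEED_FALLBACK requirement into the mask used for the inner algorithm lookup instead of silently dropping it.

    u32 mask;

    /* Keep the sync and NEED_FALLBACK constraints when grabbing the
     * underlying skcipher. */
    mask = crypto_requires_sync(algt->type, algt->mask) |
           crypto_requires_off(algt->type, algt->mask,
                               CRYPTO_ALG_NEED_FALLBACK);

    err = crypto_grab_skcipher(spawn, cipher_name, 0, mask);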
2017-02-11 | crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic | Ard Biesheuvel
Instead of unconditionally forcing 4 byte alignment for all generic chaining modes that rely on crypto_xor() or crypto_inc() (which may result in unnecessary copying of data when the underlying hardware can perform unaligned accesses efficiently), make those functions deal with unaligned input explicitly, but only if the Kconfig symbol HAVE_EFFICIENT_UNALIGNED_ACCESS is set. This will allow us to drop the alignmasks from the CBC, CMAC, CTR, CTS, PCBC and SEQIV drivers.

For crypto_inc(), this simply involves making the 4-byte stride conditional on HAVE_EFFICIENT_UNALIGNED_ACCESS being set, given that it typically operates on 16 byte buffers.

For crypto_xor(), an algorithm is implemented that simply runs through the input using the largest strides possible if unaligned accesses are allowed. If they are not, an optimal sequence of memory accesses is emitted that takes the relative alignment of the input buffers into account, e.g., if the relative misalignment of dst and src is 4 bytes, the entire xor operation will be completed using 4 byte loads and stores (modulo unaligned bits at the start and end). Note that all expressions involving misalign are simply eliminated by the compiler when HAVE_EFFICIENT_UNALIGNED_ACCESS is defined.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
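A simplified, self-contained userspace sketch of the crypto_xor() idea; the in-kernel version is more refined (it also exploits the relative misalignment of dst and src), and HAVE_EFFICIENT_UNALIGNED_ACCESS here merely stands in for the Kconfig symbol:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    static void xor_sketch(uint8_t *dst, const uint8_t *src, size_t len)
    {
    #ifdef HAVE_EFFICIENT_UNALIGNED_ACCESS
            /* Walk the buffers with the largest stride possible. */
            while (len >= sizeof(unsigned long)) {
                    unsigned long a, b;

                    memcpy(&a, dst, sizeof(a));   /* memcpy keeps this legal C */
                    memcpy(&b, src, sizeof(b));
                    a ^= b;
                    memcpy(dst, &a, sizeof(a));
                    dst += sizeof(unsigned long);
                    src += sizeof(unsigned long);
                    len -= sizeof(unsigned long);
            }
    #endif
            /* Byte-at-a-time for the tail (or for the whole buffer). */
            while (len--)
                    *dst++ ^= *src++;
    }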
2016-11-01 | crypto: skcipher - Get rid of crypto_spawn_skcipher2() | Eric Biggers
Since commit 3a01d0ee2b99 ("crypto: skcipher - Remove top-level givcipher interface"), crypto_spawn_skcipher2() and crypto_spawn_skcipher() are equivalent. So switch callers of crypto_spawn_skcipher2() to crypto_spawn_skcipher() and remove it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-01 | crypto: skcipher - Get rid of crypto_grab_skcipher2() | Eric Biggers
Since commit 3a01d0ee2b99 ("crypto: skcipher - Remove top-level givcipher interface"), crypto_grab_skcipher2() and crypto_grab_skcipher() are equivalent. So switch callers of crypto_grab_skcipher2() to crypto_grab_skcipher() and remove it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-18 | crypto: ctr - Use skcipher in rfc3686 | Herbert Xu
This patch converts rfc3686 to use the new skcipher interface as opposed to ablkcipher.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-11-26 | crypto: include crypto- module prefix in template | Kees Cook
This adds the module loading prefix "crypto-" to the template lookup as well. For example, attempting to load 'vfat(blowfish)' via AF_ALG now correctly includes the "crypto-" prefix at every level, correctly rejecting "vfat":

    net-pf-38
    algif-hash
    crypto-vfat(blowfish)
    crypto-vfat(blowfish)-all
    crypto-vfat

Reported-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-11-24 | crypto: prefix module autoloading with "crypto-" | Kees Cook
This prefixes all crypto module loading with "crypto-" so we never run the risk of exposing module auto-loading to userspace via a crypto API, as demonstrated by Mathias Krause:

    https://lkml.org/lkml/2013/3/4/70

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
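A sketch of the mechanism at a module's definition site, using the MODULE_ALIAS_CRYPTO() helper this series introduced; the expansion shown is approximate:

    /* An alias declared as: */
    MODULE_ALIAS_CRYPTO("rfc3686");

    /* emits both a plain and a "crypto-"-prefixed module alias, roughly: */
    MODULE_ALIAS("rfc3686");
    MODULE_ALIAS("crypto-rfc3686");

Crypto API lookups then request only the "crypto-" prefixed alias, so userspace cannot trick the kernel into auto-loading arbitrary modules by name.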
2013-02-25 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 | Linus Torvalds
Pull crypto update from Herbert Xu:
 "Here is the crypto update for 3.9:

  - Added accelerated implementation of crc32 using pclmulqdq.
  - Added test vector for fcrypt.
  - Added support for OMAP4/AM33XX cipher and hash.
  - Fixed loose crypto_user input checks.
  - Misc fixes"

* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (43 commits)
  crypto: user - ensure user supplied strings are nul-terminated
  crypto: user - fix empty string test in report API
  crypto: user - fix info leaks in report API
  crypto: caam - Added property fsl,sec-era in SEC4.0 device tree binding.
  crypto: use ERR_CAST
  crypto: atmel-aes - adjust duplicate test
  crypto: crc32-pclmul - Kill warning on x86-32
  crypto: x86/twofish - assembler clean-ups: use ENTRY/ENDPROC, localize jump labels
  crypto: x86/sha1 - assembler clean-ups: use ENTRY/ENDPROC
  crypto: x86/serpent - use ENTRY/ENDPROC for assember functions and localize jump targets
  crypto: x86/salsa20 - assembler cleanup, use ENTRY/ENDPROC for assember functions and rename ECRYPT_* to salsa20_*
  crypto: x86/ghash - assembler clean-up: use ENDPROC at end of assember functions
  crypto: x86/crc32c - assembler clean-up: use ENTRY/ENDPROC
  crypto: cast6-avx: use ENTRY()/ENDPROC() for assembler functions
  crypto: cast5-avx: use ENTRY()/ENDPROC() for assembler functions and localize jump targets
  crypto: camellia-x86_64/aes-ni: use ENTRY()/ENDPROC() for assembler functions and localize jump targets
  crypto: blowfish-x86_64: use ENTRY()/ENDPROC() for assembler functions and localize jump targets
  crypto: aesni-intel - add ENDPROC statements for assembler functions
  crypto: x86/aes - assembler clean-ups: use ENTRY/ENDPROC, localize jump targets
  crypto: testmgr - add test vector for fcrypt
  ...
2013-02-04 | crypto: use ERR_CAST | Julia Lawall
Replace PTR_ERR followed by ERR_PTR by ERR_CAST, to be more concise.

The semantic patch that makes this change is as follows: (http://coccinelle.lip6.fr/)

    // <smpl>
    @@
    expression err,x;
    @@
    - err = PTR_ERR(x);
      if (IS_ERR(x))
    -     return ERR_PTR(err);
    +     return ERR_CAST(x);
    // </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
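Applied to a typical call site (variable names illustrative, not the exact hunk), the transformation reads:

    /* Before: round-trip through PTR_ERR()/ERR_PTR() just to change the
     * pointer type. */
    alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER, CRYPTO_ALG_TYPE_MASK);
    err = PTR_ERR(alg);
    if (IS_ERR(alg))
            return ERR_PTR(err);

    /* After: a single cast of the error pointer. */
    alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER, CRYPTO_ALG_TYPE_MASK);
    if (IS_ERR(alg))
            return ERR_CAST(alg);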
2013-01-08 | crypto: ctr - make rfc3686 asynchronous block cipher | Jussi Kivilinna
Some hardware crypto drivers register asynchronous ctr(aes), which is left unused in IPsec because the rfc3686 template only supports synchronous block ciphers. Some other drivers register rfc3686(ctr(aes)) to work around this limitation, but not all.

This patch changes rfc3686 to use asynchronous block ciphers, so that async ctr(aes) implementations can be utilized automatically by IPsec.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2010-05-26 | crypto: Use ERR_CAST | Julia Lawall
Use ERR_CAST(x) rather than ERR_PTR(PTR_ERR(x)). The former makes it clearer what the purpose of the operation is, which otherwise looks like a no-op.

The semantic patch that makes this change is as follows: (http://coccinelle.lip6.fr/)

    // <smpl>
    @@
    type T;
    T x;
    identifier f;
    @@

    T f (...) { <+...
    - ERR_PTR(PTR_ERR(x))
    + x
     ...+> }

    @@
    expression x;
    @@

    - ERR_PTR(PTR_ERR(x))
    + ERR_CAST(x)
    // </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-08-13 | crypto: ctr - Use chainiv on raw counter mode | Herbert Xu
Raw counter mode only works with chainiv, which is no longer the default IV generator on SMP machines. This broke raw counter mode as it can no longer instantiate as a givcipher.

This patch fixes it by always picking chainiv on raw counter mode. This is based on the diagnosis and a patch by Huang Ying.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 | [CRYPTO] seqiv: Add Sequence Number IV Generator | Herbert Xu
This generator generates an IV based on a sequence number by xoring it with a salt. This algorithm is mainly useful for CTR and similar modes.

This patch also sets it as the default IV generator for ctr.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
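A self-contained userspace sketch of the idea; the function name and buffer handling are illustrative, since the real generator works on the request's IV buffer:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* IV for sequence number 'seq' = salt XOR seq (big endian). */
    static void seqiv_sketch(uint8_t *iv, const uint8_t *salt,
                             size_t ivsize, uint64_t seq)
    {
            size_t i;

            memcpy(iv, salt, ivsize);
            /* XOR the sequence number into the low-order end of the IV. */
            for (i = 0; i < sizeof(seq) && i < ivsize; i++)
                    iv[ivsize - 1 - i] ^= (uint8_t)(seq >> (8 * i));
    }

Because the sequence number never repeats within a key's lifetime, the generated IVs are unique, which is exactly what CTR-style modes require.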
2008-01-11 | [CRYPTO] ctr: Refactor into ctr and rfc3686 | Herbert Xu
As discussed previously, this patch moves the basic CTR functionality into a chainable algorithm called ctr. The IPsec-specific variant of it is now placed on top with the name rfc3686.

So ctr(aes) gives a chainable cipher with IV size 16 while the IPsec variant will be called rfc3686(ctr(aes)).

This patch also adjusts gcm accordingly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 | [CRYPTO] ctr: Fix multi-page processing | Herbert Xu
When the data spans across a page boundary, CTR may incorrectly process a partial block in the middle because the blkcipher walking code may supply partial blocks in the middle as long as the total length of the supplied data is more than a block. CTR is supposed to return any unused partial block in that case to the walker.

This patch fixes this by doing exactly that, returning partial blocks to the walker unless we received less than a block-worth of data to start with. This also allows us to optimise the bulk of the processing since we no longer have to worry about partial blocks until the very end.

Thanks to Tan Swee Heng for fixes and actually testing this :)

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
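The control flow of the fix, sketched against the blkcipher-era walker API (the helper names are illustrative): report any trailing partial block as unprocessed so the walker merges it into the next chunk, and only treat a sub-block remainder as final when it truly is the tail of the request.

    while (walk.nbytes >= bsize) {
            /* Process whole blocks only; report the leftover (< bsize)
             * back to the walker as unprocessed. */
            unsigned int leftover = ctr_crypt_blocks(desc, &walk, bsize);

            err = blkcipher_walk_done(desc, &walk, leftover);
    }

    if (walk.nbytes) {
            /* Fewer than bsize bytes remain in the whole request: the
             * genuine final partial block. */
            ctr_crypt_final(desc, &walk);
            err = blkcipher_walk_done(desc, &walk, 0);
    }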
2008-01-11 | [CRYPTO] ctr: Use crypto_inc and crypto_xor | Herbert Xu
This patch replaces the custom inc/xor in CTR with the generic functions.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
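For reference, a self-contained C re-implementation of what crypto_inc() does (crypto_xor() is the byte-wise XOR sketched earlier): treat the buffer as one big-endian integer and add 1.

    #include <stddef.h>
    #include <stdint.h>

    static void inc_be(uint8_t *ctr, size_t size)
    {
            size_t i = size;

            /* Add 1 at the least significant (last) byte and ripple the
             * carry toward the front. */
            while (i-- > 0)
                    if (++ctr[i] != 0)
                            break;
    }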
2008-01-11 | [CRYPTO] ctr: Add countersize | Joy Latten
This patch adds countersize to CTR mode. The template is now ctr(algo,noncesize,ivsize,countersize). For example, ctr(aes,4,8,4) indicates that the counter block will be composed of a salt/nonce that is 4 bytes, an IV that is 8 bytes and a counter that is 4 bytes.

When noncesize + ivsize < blocksize, CTR initializes the last (blocksize - ivsize - noncesize) bytes of the counter block to zero. Otherwise the counter block is composed of the IV (and nonce if necessary).

If noncesize + ivsize == blocksize, then this indicates that the user is passing in the entire counter block. Thus countersize indicates the number of bytes in the counter block to use as the counter for incrementing. CTR will increment the counter portion by 1, and begin encryption with that value.

Note that CTR assumes the counter portion of the block that will be incremented is stored in big endian.

Signed-off-by: Joy Latten <latten@austin.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
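A layout sketch for the ctr(aes,4,8,4) example above; the struct and function names are invented for illustration, since the real code works on a flat byte buffer:

    #include <stdint.h>
    #include <string.h>

    /* 16-byte AES counter block: 4-byte nonce + 8-byte IV + 4-byte counter. */
    struct ctr_aes_4_8_4 {
            uint8_t nonce[4];       /* salt/nonce from key material */
            uint8_t iv[8];          /* per-request IV */
            uint8_t counter[4];     /* big endian, incremented per block */
    };

    static void init_counter_block(struct ctr_aes_4_8_4 *blk,
                                   const uint8_t nonce[4], const uint8_t iv[8])
    {
            memcpy(blk->nonce, nonce, sizeof(blk->nonce));
            memcpy(blk->iv, iv, sizeof(blk->iv));
            memset(blk->counter, 0, sizeof(blk->counter));
            blk->counter[3] = 1;    /* incremented by 1 before the first block,
                                     * per the description above */
    }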
2008-01-11 | [CRYPTO] ctr: Add CTR (Counter) block cipher mode | Joy Latten
This patch implements CTR mode for IPsec. It is based on RFC 3686.

Please note:

1. CTR turns a block cipher into a stream cipher. Encryption is done in blocks, however the last block may be a partial block. A "counter block" is encrypted, creating a keystream that is xor'ed with the plaintext. The counter portion of the counter block is incremented after each block of plaintext is encrypted. Decryption is performed in the same manner.

2. The CTR counter block is composed of: nonce + IV + counter. The size of the counter block is equivalent to the blocksize of the cipher:

       sizeof(nonce) + sizeof(IV) + sizeof(counter) = blocksize

   The CTR template requires the name of the cipher algorithm, the size of the nonce, and the size of the IV: ctr(cipher,sizeof_nonce,sizeof_iv). So for example, ctr(aes,4,8) specifies that the counter block will be composed of 4 bytes from a nonce, 8 bytes from the IV, and 4 bytes for the counter, since aes has a blocksize of 16 bytes.

3. The counter portion of the counter block is stored in big endian for conformance to RFC 3686.

Signed-off-by: Joy Latten <latten@austin.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
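To make point 1 concrete, a minimal userspace sketch of the CTR loop, with the block cipher abstracted behind a callback; in the kernel a crypto_cipher plays this role, and the real template increments only the trailing counter field rather than the whole block:

    #include <stddef.h>
    #include <stdint.h>

    typedef void (*block_fn)(const void *key, const uint8_t *in, uint8_t *out);

    static void ctr_crypt(block_fn encrypt, const void *key,
                          uint8_t *ctrblk, size_t bsize,
                          const uint8_t *src, uint8_t *dst, size_t len)
    {
            uint8_t keystream[64];          /* assumes bsize <= 64 */

            while (len) {
                    size_t n = len < bsize ? len : bsize, i;

                    encrypt(key, ctrblk, keystream);    /* E_k(counter block) */
                    for (i = 0; i < n; i++)             /* last block may be partial */
                            dst[i] = src[i] ^ keystream[i];

                    /* Big-endian increment of the counter block. */
                    for (i = bsize; i-- > 0; )
                            if (++ctrblk[i] != 0)
                                    break;

                    src += n;
                    dst += n;
                    len -= n;
            }
    }

Since encryption and decryption XOR the same keystream, decrypting is the identical call with ciphertext as src.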