path: root/include/crypto
Age  Commit message  Author
2010-10-24  Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial  (Linus Torvalds)
* 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (39 commits) Update broken web addresses in arch directory. Update broken web addresses in the kernel. Revert "drivers/usb: Remove unnecessary return's from void functions" for musb gadget Revert "Fix typo: configuation => configuration" partially ida: document IDA_BITMAP_LONGS calculation ext2: fix a typo on comment in ext2/inode.c drivers/scsi: Remove unnecessary casts of private_data drivers/s390: Remove unnecessary casts of private_data net/sunrpc/rpc_pipe.c: Remove unnecessary casts of private_data drivers/infiniband: Remove unnecessary casts of private_data drivers/gpu/drm: Remove unnecessary casts of private_data kernel/pm_qos_params.c: Remove unnecessary casts of private_data fs/ecryptfs: Remove unnecessary casts of private_data fs/seq_file.c: Remove unnecessary casts of private_data arm: uengine.c: remove C99 comments arm: scoop.c: remove C99 comments Fix typo configue => configure in comments Fix typo: configuation => configuration Fix typo interrest[ing|ed] => interest[ing|ed] Fix various typos of valid in comments ... Fix up trivial conflicts in: drivers/char/ipmi/ipmi_si_intf.c drivers/usb/gadget/rndis.c net/irda/irnet/irnet_ppp.c
2010-10-18  Update broken web addresses in the kernel.  (Justin P. Mattock)
The patch below updates broken web addresses in the kernel Signed-off-by: Justin P. Mattock <justinmattock@gmail.com> Cc: Maciej W. Rozycki <macro@linux-mips.org> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Finn Thain <fthain@telegraphics.com.au> Cc: Randy Dunlap <rdunlap@xenotime.net> Cc: Matt Turner <mattst88@gmail.com> Cc: Dimitry Torokhov <dmitry.torokhov@gmail.com> Cc: Mike Frysinger <vapier.adi@gmail.com> Acked-by: Ben Pfaff <blp@cs.stanford.edu> Acked-by: Hans J. Koch <hjk@linutronix.de> Reviewed-by: Finn Thain <fthain@telegraphics.com.au> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2010-09-20  crypto: cryptd - Adding the AEAD interface type support to cryptd  (Adrian Hoban)
This patch adds AEAD support into the cryptd framework. Having AEAD support in cryptd enables crypto drivers that use the AEAD interface type (such as the AEAD-based RFC4106 AES-GCM implementation using Intel New Instructions) to leverage cryptd for asynchronous processing. Signed-off-by: Adrian Hoban <adrian.hoban@intel.com> Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com> Signed-off-by: Gabriele Paoloni <gabriele.paoloni@intel.com> Signed-off-by: Aidan O'Mahony <aidan.o.mahony@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2010-05-19  crypto: skcipher - Add ablkcipher_walk interfaces  (David S. Miller)
These are akin to the blkcipher_walk helpers. The main differences in the async variant are: 1) Only physical walking is supported. We can't hold on to kmap mappings across the async operation to support virtual ablkcipher_walk operations anyway. 2) Bounce buffers used for async mode need to be persistent and freed at a later point in time when the async op completes. Therefore we maintain a list of writeback buffers and require that the ablkcipher_walk user call the 'complete' operation so we can copy the bounce buffers out to the real buffers and free up the bounce buffer chunks. These interfaces will be used by the new Niagara2 crypto driver. Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
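A minimal sketch of how a driver loop over these helpers might look (not taken from the patch). The function name and the on-stack walk are simplifications: a real driver keeps the walk in its request context and calls the 'complete' step from its asynchronous completion path, as the commit text above describes.

#include <crypto/algapi.h>

/*
 * Simplified sketch only: walk an ablkcipher request chunk by chunk.
 * Only the physical walk exists for the async case; passing 0 to
 * ablkcipher_walk_done() signals that the whole chunk was consumed.
 */
static int example_walk_request(struct ablkcipher_request *req)
{
	struct ablkcipher_walk walk;
	int err;

	ablkcipher_walk_init(&walk, req->dst, req->src, req->nbytes);
	err = ablkcipher_walk_phys(req, &walk);

	while (!err && walk.nbytes) {
		/* Translate the current chunk into hardware descriptors here. */
		err = ablkcipher_walk_done(req, &walk, 0);
	}

	/*
	 * In a real driver this runs from the async completion handler so
	 * the bounce buffers can be written back and freed.
	 */
	ablkcipher_walk_complete(&walk);
	return err;
}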
2010-01-17  crypto: md5 - Add export support  (Max Vozeler)
This patch adds export/import support to md5. The exported type is defined by struct md5_state. This is modeled after the equivalent change to sha1_generic. Signed-off-by: Max Vozeler <max@hinterhof.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2010-01-07  crypto: pcrypt - Add pcrypt crypto parallelization wrapper  (Steffen Klassert)
This patch adds a parallel crypto template that takes a crypto algorithm and converts it to process the crypto transforms in parallel. For the moment only aead algorithms are supported. Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
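A usage sketch (not part of the patch): pcrypt is a template, so a user instantiates it by wrapping it around an existing AEAD algorithm name. The inner authenc algorithm below is only an example; any AEAD should do, since only AEAD is supported for now.

#include <linux/crypto.h>

/*
 * Sketch only: allocate a parallelized AEAD by asking for the pcrypt
 * template wrapped around an existing AEAD algorithm name.
 */
static struct crypto_aead *example_alloc_parallel_aead(void)
{
	return crypto_alloc_aead("pcrypt(authenc(hmac(sha1),cbc(aes)))", 0, 0);
}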
2009-10-19  crypto: hash - Remove legacy hash/digest code  (Benjamin Gilbert)
6941c3a0 disabled compilation of the legacy digest code but didn't actually remove it. Rectify this. Also, remove the crypto_hash_type extern declaration from algapi.h now that the struct is gone. Signed-off-by: Benjamin Gilbert <bgilbert@cs.cmu.edu> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-10-19  crypto: ghash - Add PCLMULQDQ accelerated implementation  (Huang Ying)
PCLMULQDQ is used to accelerate the most time-consuming part of GHASH, carry-less multiplication. More information about PCLMULQDQ can be found at: http://software.intel.com/en-us/articles/carry-less-multiplication-and-its-usage-for-computing-the-gcm-mode/ Because PCLMULQDQ changes XMM state, its usage must be enclosed in kernel_fpu_begin/end, which can be used only in process context; the acceleration is therefore implemented as a crypto_ahash. That is, requests in soft IRQ context will be deferred to the cryptd kernel thread. Signed-off-by: Huang Ying <ying.huang@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-09-11  Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds)
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (102 commits) crypto: sha-s390 - Fix warnings in import function crypto: vmac - New hash algorithm for intel_txt support crypto: api - Do not displace newly registered algorithms crypto: ansi_cprng - Fix module initialization crypto: xcbc - Fix alignment calculation of xcbc_tfm_ctx crypto: fips - Depend on ansi_cprng crypto: blkcipher - Do not use eseqiv on stream ciphers crypto: ctr - Use chainiv on raw counter mode Revert crypto: fips - Select CPRNG crypto: rng - Fix typo crypto: talitos - add support for 36 bit addressing crypto: talitos - align locks on cache lines crypto: talitos - simplify hmac data size calculation crypto: mv_cesa - Add support for Orion5X crypto engine crypto: cryptd - Add support to access underlaying shash crypto: gcm - Use GHASH digest algorithm crypto: ghash - Add GHASH digest algorithm for GCM crypto: authenc - Convert to ahash crypto: api - Fix aligned ctx helper crypto: hmac - Prehash ipad/opad ...
2009-09-02  crypto: vmac - New hash algorithm for intel_txt support  (Shane Wang)
This patch adds VMAC (a fast MAC) support into crypto framework. Signed-off-by: Shane Wang <shane.wang@intel.com> Signed-off-by: Joseph Cihula <joseph.cihula@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-08-29  crypto: skcipher - Fix skcipher_dequeue_givcrypt NULL test  (Herbert Xu)
As struct skcipher_givcrypt_request includes struct crypto_request at a non-zero offset, testing for NULL after converting the pointer returned by crypto_dequeue_request does not work. This can result in IPsec crashes when the queue is depleted. This patch fixes it by doing the pointer conversion only when the return value is non-NULL. In particular, we create a new function __crypto_dequeue_request that does the pointer conversion. Reported-by: Brad Bosch <bradbosch@comcast.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
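The pitfall is easy to reproduce outside the kernel. The snippet below is only an illustration of the bug class, not the kernel code: converting a possibly-NULL embedded pointer to its container before the NULL check yields a small non-NULL value whenever the member sits at a non-zero offset, so the "queue empty" case is missed.

#include <stddef.h>
#include <stdio.h>

struct inner { int dummy; };

struct outer {
	long pad;               /* puts 'member' at a non-zero offset */
	struct inner member;
};

#define container_of(ptr, type, field) \
	((type *)((char *)(ptr) - offsetof(type, field)))

static struct outer *bad_dequeue(struct inner *p)
{
	/* Converting first: the result is never NULL, even when p is. */
	return container_of(p, struct outer, member);
}

static struct outer *good_dequeue(struct inner *p)
{
	/* Convert only when the queue actually returned something. */
	return p ? container_of(p, struct outer, member) : NULL;
}

int main(void)
{
	printf("bad:  %p\n", (void *)bad_dequeue(NULL));   /* non-NULL! */
	printf("good: %p\n", (void *)good_dequeue(NULL));  /* NULL */
	return 0;
}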
2009-08-06  crypto: cryptd - Add support to access underlying shash  (Huang Ying)
cryptd_alloc_ahash() will allocate a cryptd-ed ahash for the specified algorithm name. The newly allocated one is guaranteed to be a cryptd-ed ahash, so the underlying shash can be obtained via cryptd_ahash_child(). Signed-off-by: Huang Ying <ying.huang@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
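A usage sketch, not taken from the patch: the helper pair as described above, with "sha1" standing in for whatever algorithm a real caller would wrap, and the function name made up for illustration.

#include <crypto/cryptd.h>
#include <crypto/hash.h>
#include <linux/err.h>
#include <linux/kernel.h>

/*
 * Sketch only: allocate a cryptd-wrapped ahash by name and reach the
 * underlying shash through cryptd_ahash_child().
 */
static int example_use_cryptd_hash(void)
{
	struct cryptd_ahash *chash;
	struct crypto_shash *child;

	chash = cryptd_alloc_ahash("sha1", 0, 0);
	if (IS_ERR(chash))
		return PTR_ERR(chash);

	/* The child transform is what cryptd's worker will actually run. */
	child = cryptd_ahash_child(chash);
	pr_info("cryptd child digest size: %u\n",
		crypto_shash_digestsize(child));

	cryptd_free_ahash(chash);
	return 0;
}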
2009-07-24  crypto: api - Fix aligned ctx helper  (Herbert Xu)
The aligned ctx helper was using a bogus alignment value that was one off from the correct value. Fortunately the current users do not require anything beyond the natural alignment of the platform, so this hasn't caused a problem. This patch fixes that and also removes the unnecessary minimum check, since if the alignment is less than the natural alignment then the subsequent ALIGN operation should be a no-op. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
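A small worked example of the off-by-one (plain C, not the kernel helper): crypto alignmasks have the form 2^n - 1, so the value handed to the align-up operation must be mask + 1, not the mask itself.

#include <stdint.h>
#include <stdio.h>

#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((uintptr_t)(a) - 1))

int main(void)
{
	uintptr_t addr = 0x1003;
	unsigned int mask = 15;		/* alignmask meaning "16-byte aligned" */

	/* Using the mask itself gives 0x1011, which is not 16-byte aligned. */
	printf("mask:     %#lx\n", (unsigned long)ALIGN_UP(addr, mask));
	/* Using mask + 1 gives the intended 0x1010. */
	printf("mask + 1: %#lx\n", (unsigned long)ALIGN_UP(addr, mask + 1));
	return 0;
}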
2009-07-22  crypto: sha512_generic - Use 64-bit counters  (Herbert Xu)
This patch replaces the 32-bit counters in sha512_generic with 64-bit counters. It also switches the bit count to the simpler byte count. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-22  crypto: sha512 - Export struct sha512_state  (Herbert Xu)
This patch renames struct sha512_ctx and exports it as struct sha512_state so that other sha512 implementations can use it as the reference structure for exporting their state. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-15  crypto: shash - Fix digest size offset  (Herbert Xu)
When an shash algorithm is exported as ahash, ahash will access its digest size through hash_alg_common. That's why the shash layout needs to match hash_alg_common. This wasn't the case because the alignments weren't identical. This patch fixes the problem. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-15  crypto: ahash - Add unaligned handling and default operations  (Herbert Xu)
This patch exports the finup operation where available and adds a default finup operation for ahash. The final, finup and digest operations will now also deal with unaligned result pointers by copying them. Finally, the export/import operations will now be exported too. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14  crypto: ahash - Remove old_ahash_alg  (Herbert Xu)
Now that all ahash implementations have been converted to the new ahash type, we can remove old_ahash_alg and its associated support. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14  crypto: crypto4xx - Switch to new style ahash  (Herbert Xu)
This patch changes crypto4xx to use the new style ahash type. In particular, we now use ahash_alg to define ahash algorithms instead of crypto_alg. This is achieved by introducing a union that encapsulates the new type and the existing crypto_alg structure. They're told apart through a u32 field containing the type value. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14  crypto: cryptd - Switch to template create API  (Herbert Xu)
This patch changes cryptd to use the template->create function instead of alloc in anticipation for the switch to new style ahash algorithms. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14  crypto: hash - Add helpers to free spawns  (Herbert Xu)
This patch adds the helpers crypto_drop_ahash and crypto_drop_shash so that these spawns can be dropped without ugly casts. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14  crypto: ahash - Add instance/spawn support  (Herbert Xu)
This patch adds support for creating ahash instances and using ahash as spawns. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14  crypto: ahash - Convert to new style algorithms  (Herbert Xu)
This patch converts crypto_ahash to the new style. The old ahash algorithm type is retained until the existing ahash implementations are also converted. All ahash users will automatically get the new crypto_ahash type. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14  crypto: api - Remove frontend argument from extsize/init_tfm  (Herbert Xu)
As the extsize and init_tfm functions belong to the frontend, the frontend argument is superfluous. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14  crypto: ahash - Add crypto_ahash_set_reqsize  (Herbert Xu)
This patch adds the helper crypto_ahash_set_reqsize so that implementations do not directly access the crypto_ahash structure. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
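A sketch of the intended usage pattern (the context structure and function names are made up): a tfm init hook declares its per-request context size through the helper instead of writing to struct crypto_ahash directly.

#include <crypto/internal/hash.h>

/* Made-up per-request context, for illustration only. */
struct example_ahash_request_ctx {
	u8 buffer[64];
	unsigned int byte_count;
};

static int example_ahash_init_tfm(struct crypto_tfm *tfm)
{
	/* Declare the request size via the helper rather than direct access. */
	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
				 sizeof(struct example_ahash_request_ctx));
	return 0;
}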
2009-07-14  crypto: shash - Export async functions  (Herbert Xu)
This patch exports the async functions so that they can be reused by cryptd when it switches over to using shash. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14  crypto: shash - Make descsize a run-time attribute  (Herbert Xu)
This patch changes descsize to a run-time attribute so that implementations can change it in their init functions. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-12  crypto: async - Use kzfree for requests  (Herbert Xu)
This patch changes the kfree call to kzfree for async requests. As the request may contain sensitive data it needs to be zeroed before it can be reallocated by others. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-11  crypto: sha256_generic - Add export/import support  (Herbert Xu)
This patch adds export/import support to sha256_generic. The exported type is defined by struct sha256_state, which is basically the entire descriptor state of sha256_generic. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-11  crypto: sha1_generic - Add export/import support  (Herbert Xu)
This patch adds export/import support to sha1_generic. The exported type is defined by struct sha1_state, which is basically the entire descriptor state of sha1_generic. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-11  crypto: shash - Export/import hash state only  (Herbert Xu)
This patch replaces the full descriptor export with an export of the partial hash state. This allows the use of a consistent export format across all implementations of a given algorithm. This is useful because a number of cases require the use of the partial hash state, e.g., PadLock can use the SHA1 hash state to get around the fact that it can only hash contiguous data chunks. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
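A sketch of how a caller can use this; the checkpoint/resume framing and the function are illustrative, not from the patch. For sha1 the exported blob is struct sha1_state, as the sha1_generic entry above notes, so a different sha1 implementation could import it just as well.

#include <crypto/hash.h>
#include <crypto/sha.h>

/*
 * Sketch only: hash some data, checkpoint the partial state, and resume
 * from it later.
 */
static int example_sha1_checkpoint(struct shash_desc *desc,
				   const u8 *data, unsigned int len)
{
	struct sha1_state state;
	int err;

	err = crypto_shash_update(desc, data, len);
	if (err)
		return err;

	err = crypto_shash_export(desc, &state);	/* checkpoint */
	if (err)
		return err;

	return crypto_shash_import(desc, &state);	/* resume */
}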
2009-07-09  crypto: shash - Add shash_instance_ctx  (Herbert Xu)
This patch adds the helper shash_instance_ctx which is the shash analogue of crypto_instance_ctx. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-08  crypto: shash - Add __crypto_shash_cast  (Herbert Xu)
This patch adds __crypto_shash_cast which turns a crypto_tfm into crypto_shash. It's analogous to the other __crypto_*_cast functions. It hasn't been needed until now since no existing shash algorithms have had an init function. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-08  crypto: shash - Add crypto_shash_ctx_aligned  (Herbert Xu)
This patch adds crypto_shash_ctx_aligned which will be needed by hmac after its conversion to shash. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-08  crypto: shash - Add shash_register_instance  (Herbert Xu)
This patch adds shash_register_instance so that shash instances can be registered without bypassing the shash checks applied to normal algorithms. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-08  crypto: shash - Add shash_attr_alg2 helper  (Herbert Xu)
This patch adds the helper shash_attr_alg2 which locates a shash algorithm based on the information in the given attribute. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-08  crypto: api - Add crypto_attr_alg2 helper  (Herbert Xu)
This patch adds the helper crypto_attr_alg2 which is similar to crypto_attr_alg but takes an extra frontend argument. This is intended to be used by new style algorithm types such as shash. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-08  crypto: shash - Add spawn support  (Herbert Xu)
This patch adds the functions needed to create and use shash spawns, i.e., to use shash algorithms in a template. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-08  crypto: api - Add new style spawn support  (Herbert Xu)
This patch modifies the spawn infrastructure to support new style algorithms like shash. In particular, this means storing the frontend type in the spawn and using crypto_create_tfm to allocate the tfm. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-08  crypto: shash - Add shash_instance  (Herbert Xu)
This patch adds shash_instance and the associated alloc/free functions. This is meant to be an instance with a shash algorithm under it. Note that the instance itself doesn't have to be shash. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-07  crypto: api - Add crypto_alloc_instance2  (Herbert Xu)
This patch adds a new argument to crypto_alloc_instance which sets aside some space before the instance for use by algorithms such as shash that place type-specific data before crypto_alg. For compatibility the function has been renamed so that existing users aren't affected. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-07  crypto: api - Add new template create function  (Herbert Xu)
This patch introduces the template->create function intended to replace the existing alloc function. The intention is for create to handle the registration directly, whereas currently the caller of alloc has to handle the registration. This allows type-specific code to be run prior to registration. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
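A skeleton of the new hook's shape (illustrative only; the body is elided): unlike ->alloc, the ->create function is expected to parse its attributes, build the instance and register it itself, e.g. via shash_register_instance for shash-based templates.

#include <crypto/algapi.h>
#include <linux/errno.h>
#include <linux/module.h>

/* Sketch only: a ->create hook that would build and register the instance. */
static int example_create(struct crypto_template *tmpl, struct rtattr **tb)
{
	/* parse tb, allocate the instance, then register it directly */
	return -ENOSYS;
}

static struct crypto_template example_tmpl = {
	.name	= "example",
	.create	= example_create,
	.module	= THIS_MODULE,
};

static int __init example_init(void)
{
	return crypto_register_template(&example_tmpl);
}

static void __exit example_exit(void)
{
	crypto_unregister_template(&example_tmpl);
}

module_init(example_init);
module_exit(example_exit);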
2009-03-04  crypto: zlib - New zlib crypto module, using pcomp  (Geert Uytterhoeven)
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com> Cc: James Morris <jmorris@namei.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-03-04  crypto: compress - Add pcomp interface  (Geert Uytterhoeven)
The current "comp" crypto interface supports one-shot (de)compression only, i.e. the whole data buffer to be (de)compressed must be passed at once, and the whole (de)compressed data buffer will be received at once. In several use-cases (e.g. compressed file systems that store files in big compressed blocks), this workflow is not suitable. Furthermore, the "comp" type doesn't provide for the configuration of (de)compression parameters, and always allocates workspace memory for both compression and decompression, which may waste memory. To solve this, add a "pcomp" partial (de)compression interface that provides the following operations:
  - crypto_compress_{init,update,final}() for compression,
  - crypto_decompress_{init,update,final}() for decompression,
  - crypto_{,de}compress_setup(), to configure (de)compression parameters (incl. allocating workspace memory).
The (de)compression methods take a struct comp_request, which is modeled after the z_stream object in zlib and contains buffer pointer and length pairs for input and output. The setup methods take an opaque parameter pointer and length pair. Parameters are supposed to be encoded using netlink attributes, whose meanings depend on the actual (name of the) (de)compression algorithm. Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
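A rough usage sketch of the operations listed above, not from the patch: "zlib" refers to the pcomp module in the entry above, 'params' stands for the caller-built netlink attribute blob, and the assumption that update/final return a negative errno on failure (and otherwise a non-negative value) should be checked against the compress.h header.

#include <crypto/compress.h>
#include <linux/err.h>
#include <linux/types.h>

/* Sketch only: one compression pass through the pcomp interface. */
static int example_pcomp(void *params, unsigned int paramlen,
			 const u8 *in, unsigned int inlen,
			 u8 *out, unsigned int outlen)
{
	struct crypto_pcomp *tfm;
	struct comp_request req;
	int err;

	tfm = crypto_alloc_pcomp("zlib", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_compress_setup(tfm, params, paramlen);
	if (!err)
		err = crypto_compress_init(tfm);

	if (!err) {
		req.next_in = in;
		req.avail_in = inlen;
		req.next_out = out;
		req.avail_out = outlen;

		err = crypto_compress_update(tfm, &req);
		if (err >= 0)
			err = crypto_compress_final(tfm, &req);
	}

	crypto_free_pcomp(tfm);
	return err < 0 ? err : 0;
}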
2009-02-19  crypto: api - Use dedicated workqueue for crypto subsystem  (Huang Ying)
A dedicated workqueue named kcrypto_wq is created for use by the crypto subsystem. The shared system keventd_wq is not suitable for encryption/decryption because of a potential starvation problem. Signed-off-by: Huang Ying <ying.huang@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-02-18  crypto: shash - Add crypto_shash_blocksize  (Herbert Xu)
This function is needed by algorithms that don't know their own block size, e.g., in s390 where the code is common between multiple versions of SHA. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-02-18  crypto: cryptd - Add support to access underlying blkcipher  (Huang Ying)
cryptd_alloc_ablkcipher() will allocate a cryptd-ed ablkcipher for the specified algorithm name. The newly allocated one is guaranteed to be a cryptd-ed ablkcipher, so the underlying blkcipher can be obtained via cryptd_ablkcipher_child(). Signed-off-by: Huang Ying <ying.huang@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
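A usage sketch in the spirit of the commit text (function and algorithm name are illustrative): a driver wraps a synchronous blkcipher in cryptd and can still reach the child transform for its direct path.

#include <crypto/cryptd.h>
#include <linux/crypto.h>
#include <linux/err.h>

/* Sketch only: allocate a cryptd-ed ablkcipher and reach the child. */
static int example_wrap_blkcipher(const u8 *key, unsigned int keylen)
{
	struct cryptd_ablkcipher *ctx;
	struct crypto_blkcipher *child;
	int err;

	ctx = cryptd_alloc_ablkcipher("cbc(aes)", 0, 0);
	if (IS_ERR(ctx))
		return PTR_ERR(ctx);

	/* The child is the plain blkcipher doing the actual work. */
	child = cryptd_ablkcipher_child(ctx);
	err = crypto_blkcipher_setkey(child, key, keylen);

	cryptd_free_ablkcipher(ctx);
	return err;
}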
2009-02-18  crypto: aes - Move key_length in struct crypto_aes_ctx to be the last field  (Huang Ying)
The Intel AES-NI acceleration instructions need key_enc and key_dec in struct crypto_aes_ctx to be 16-byte aligned; moving key_length to be the last field makes this easier. Signed-off-by: Huang Ying <ying.huang@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
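For reference, a sketch of the resulting field order; the struct is renamed here to mark it as an illustration, and include/crypto/aes.h remains the authoritative definition.

#include <crypto/aes.h>

/*
 * Illustration of the layout after this change: the round-key arrays come
 * first, so aligning the context as a whole gives 16-byte-aligned key_enc
 * and key_dec for AES-NI, while key_length trails at the end.
 */
struct crypto_aes_ctx_sketch {
	u32 key_enc[AES_MAX_KEYLENGTH_U32];
	u32 key_dec[AES_MAX_KEYLENGTH_U32];
	u32 key_length;
};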
2009-02-05  crypto: shash - Fix tfm destruction  (Herbert Xu)
We were freeing an offset into the slab object instead of the start. This patch fixes it by calling crypto_destroy_tfm which allows the correct address to be given. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-12-25  crypto: aes - Precompute tables  (Herbert Xu)
The tables used by the various AES algorithms are currently computed at run-time. This has created an init ordering problem because some AES algorithms may be registered before the tables have been initialised. This patch gets around this whole thing by precomputing the tables. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>