Commit Graph

705 Commits

Author SHA1 Message Date
Dan Williams
1f6672d44c async_tx/raid6: add missing dma_unmap calls to the async fail case
If we are unable to offload async_mult() or async_sum_product(), then
unmap the buffers before falling through to the synchronous path.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-21 10:47:40 -07:00
Dan Williams
1b6df69309 raid6test: fix stack overflow
Testing on x86_64 with NDISKS=255 yields:

   do_IRQ: modprobe near stack overflow (cur:ffff88007d19c000,sp:ffff88007d19c128)

...and eventually

   general protection fault: 0000 [#1]

Moving the scribble buffers off the stack allows the test to complete
successfully.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-16 21:03:29 -07:00
Linus Torvalds
332a339218 Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (102 commits)
  crypto: sha-s390 - Fix warnings in import function
  crypto: vmac - New hash algorithm for intel_txt support
  crypto: api - Do not displace newly registered algorithms
  crypto: ansi_cprng - Fix module initialization
  crypto: xcbc - Fix alignment calculation of xcbc_tfm_ctx
  crypto: fips - Depend on ansi_cprng
  crypto: blkcipher - Do not use eseqiv on stream ciphers
  crypto: ctr - Use chainiv on raw counter mode
  Revert crypto: fips - Select CPRNG
  crypto: rng - Fix typo
  crypto: talitos - add support for 36 bit addressing
  crypto: talitos - align locks on cache lines
  crypto: talitos - simplify hmac data size calculation
  crypto: mv_cesa - Add support for Orion5X crypto engine
  crypto: cryptd - Add support to access underlying shash
  crypto: gcm - Use GHASH digest algorithm
  crypto: ghash - Add GHASH digest algorithm for GCM
  crypto: authenc - Convert to ahash
  crypto: api - Fix aligned ctx helper
  crypto: hmac - Prehash ipad/opad
  ...
2009-09-11 09:38:37 -07:00
Dan Williams
bbb20089a3 Merge branch 'dmaengine' into async-tx-next
Conflicts:
	crypto/async_tx/async_xor.c
	drivers/dma/ioat/dma_v2.h
	drivers/dma/ioat/pci.c
	drivers/md/raid5.c
2009-09-08 17:55:21 -07:00
Dan Williams
83544ae9f3 dmaengine, async_tx: support alignment checks
Some engines have transfer size and address alignment restrictions.  Add
a per-operation alignment property to struct dma_device that the async
routines and dmatest can use to check alignment capabilities.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:53 -07:00
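The commit above adds a per-operation alignment capability that callers check before offloading. A minimal standalone C sketch of such a check, assuming the alignment is advertised as a power-of-two shift (names here are illustrative, not the dmaengine API):

    #include <stdbool.h>
    #include <stddef.h>

    /* The device advertises its required alignment as a shift; source and
     * destination offsets plus the transfer length must all be multiples
     * of that boundary before the async path offloads the operation. */
    static bool op_is_aligned(unsigned int align_shift,
                              size_t src_off, size_t dst_off, size_t len)
    {
            size_t mask = ((size_t)1 << align_shift) - 1;

            return ((src_off | dst_off | len) & mask) == 0;
    }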
Dan Williams
138f4c359d dmaengine, async_tx: add a "no channel switch" allocator
Channel switching is problematic for some dmaengine drivers as the
architecture precludes separating the ->prep from ->submit.  In these
cases the driver can select ASYNC_TX_DISABLE_CHANNEL_SWITCH to modify
the async_tx allocator to only return channels that support all of the
required asynchronous operations.

For example MD_RAID456=y selects support for asynchronous xor, xor
validate, pq, pq validate, and memcpy.  When
ASYNC_TX_DISABLE_CHANNEL_SWITCH=y any channel with all these
capabilities is marked DMA_ASYNC_TX allowing async_tx_find_channel() to
quickly locate compatible channels with the guarantee that dependency
chains will remain on one channel.  When
ASYNC_TX_DISABLE_CHANNEL_SWITCH=n async_tx_find_channel() may select
channels that lead to operation chains that need to cross channel
boundaries using the async_tx channel switch capability.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:51 -07:00
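As a rough illustration of the capability test described above (standalone sketch, not the dmaengine code): with channel switching disabled a channel is only eligible if it covers the entire required operation set, so a dependency chain never has to hop to a second channel.

    enum dma_cap {
            CAP_MEMCPY  = 1u << 0,
            CAP_XOR     = 1u << 1,
            CAP_XOR_VAL = 1u << 2,
            CAP_PQ      = 1u << 3,
            CAP_PQ_VAL  = 1u << 4,
    };

    static int channel_usable(unsigned int have, unsigned int need,
                              int allow_channel_switch)
    {
            if (allow_channel_switch)
                    return (have & need) != 0;  /* any overlap will do */
            return (have & need) == need;       /* must cover the whole set */
    }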
Dan Williams
0403e38277 dmaengine: add fence support
Some engines optimize operation by reading ahead in the descriptor chain
such that descriptor2 may start execution before descriptor1 completes.
If descriptor2 depends on the result from descriptor1 then a fence is
required (on descriptor2) to disable this optimization.  The async_tx
api could implicitly identify dependencies via the 'depend_tx'
parameter, but that would constrain cases where the dependency chain
only specifies a completion order rather than a data dependency.  So,
provide an ASYNC_TX_FENCE to explicitly identify data dependencies.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:42:50 -07:00
Dan Williams
f9dd213437 Merge branch 'md-raid6-accel' into ioat3.2
Conflicts:
	include/linux/dmaengine.h
2009-09-08 17:42:29 -07:00
Dan Williams
a348a7e6fd Merge commit 'v2.6.31-rc1' into dmaengine 2009-09-08 14:32:24 -07:00
Linus Torvalds
e9ee3a54a1 Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: skcipher - Fix skcipher_dequeue_givcrypt NULL test
2009-09-05 14:51:45 -07:00
Shane Wang
f1939f7c56 crypto: vmac - New hash algorithm for intel_txt support
This patch adds VMAC (a fast MAC) support into the crypto framework.

Signed-off-by: Shane Wang <shane.wang@intel.com>
Signed-off-by: Joseph Cihula <joseph.cihula@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-09-02 20:05:22 +10:00
Herbert Xu
2bf2901669 crypto: api - Do not displace newly registered algorithms
We have a mechanism where newly registered algorithms of a higher
priority can displace existing instances that use a different
implementation of the same algorithm with a lower priority.

Unfortunately the same mechanism can cause a newly registered
algorithm to displace itself if it depends on an existing version
of the same algorithm.

This patch fixes this by keeping all algorithms that the newly
registered algorithm depends on, thus protecting them from being
removed.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-08-31 15:56:54 +10:00
Dan Williams
cb3c82992f async_tx: raid6 recovery self test
Port drivers/md/raid6test/test.c to use the async raid6 recovery
routines.  This is meant as a unit test for raid6 acceleration drivers.  In
addition to the 16-drive test case this implements tests for the 4-disk and
5-disk special cases (dma devices cannot generically handle fewer than 2
sources), and adds a test for the D+Q case.

Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-08-29 19:09:28 -07:00
Dan Williams
0a82a6239b async_tx: add support for asynchronous RAID6 recovery operations
async_raid6_2data_recov() recovers two data disk failures

 async_raid6_datap_recov() recovers a data disk and the P disk

These routines are a port of the synchronous versions found in
drivers/md/raid6recov.c.  The primary difference is breaking out the xor
operations into separate calls to async_xor.  Two helper routines are
introduced to perform scalar multiplication where needed.
async_sum_product() multiplies two sources by scalar coefficients and
then sums (xor) the result.  async_mult() simply multiplies a single
source by a scalar.

This implementation also includes, in contrast to the original
synchronous-only code, special case handling for the 4-disk and 5-disk
array cases.  In these situations the default N-disk algorithm will
present 0-source or 1-source operations to dma devices.  To cover for
dma devices where the minimum source count is 2 we implement 4-disk and
5-disk handling in the recovery code.

[ Impact: asynchronous raid6 recovery routines for 2data and datap cases ]

Cc: Yuri Tikhonov <yur@emcraft.com>
Cc: Ilya Yanok <yanok@emcraft.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: David Woodhouse <David.Woodhouse@intel.com>
Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-08-29 19:09:27 -07:00
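The two helpers named above boil down to GF(256) scalar arithmetic over the RAID6 polynomial. A standalone C sketch of their synchronous equivalents (illustrative code, not the async_tx implementation):

    #include <stddef.h>
    #include <stdint.h>

    /* GF(256) multiply over the RAID6 polynomial x^8+x^4+x^3+x^2+1 (0x11d). */
    static uint8_t gf_mul(uint8_t a, uint8_t b)
    {
            uint8_t p = 0;

            while (b) {
                    if (b & 1)
                            p ^= a;
                    a = (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1d : 0));
                    b >>= 1;
            }
            return p;
    }

    /* async_mult analogue: multiply one source by a scalar coefficient. */
    static void mult(uint8_t *dst, const uint8_t *src, uint8_t coef, size_t len)
    {
            for (size_t i = 0; i < len; i++)
                    dst[i] = gf_mul(src[i], coef);
    }

    /* async_sum_product analogue: multiply two sources by coefficients and
     * sum (xor) the results. */
    static void sum_product(uint8_t *dst, const uint8_t *a, uint8_t ca,
                            const uint8_t *b, uint8_t cb, size_t len)
    {
            for (size_t i = 0; i < len; i++)
                    dst[i] = (uint8_t)(gf_mul(a[i], ca) ^ gf_mul(b[i], cb));
    }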
Dan Williams
b2f46fd8ef async_tx: add support for asynchronous GF multiplication
[ Based on an original patch by Yuri Tikhonov ]

This adds support for doing asynchronous GF multiplication by adding
two additional functions to the async_tx API:

 async_gen_syndrome() does simultaneous XOR and Galois field
    multiplication of sources.

 async_syndrome_val() validates the given source buffers against known P
    and Q values.

When a request is made to run async_pq against more than the hardware
maximum number of supported sources, we need to reuse the previously
generated P and Q values as sources for the next operation.  Care must
be taken to remove Q from P' and P from Q'.  For example, to perform a
5-source pq op with hardware that only supports 4 sources at a time, the
following approach is taken:

p, q = PQ(src0, src1, src2, src3, COEF({01}, {02}, {04}, {08}))
p', q' = PQ(p, q, q, src4, COEF({00}, {01}, {00}, {10}))

p' = p + q + q + src4 = p + src4
q' = {00}*p + {01}*q + {00}*q + {10}*src4 = q + {10}*src4

Note: 4 is the minimum acceptable maxpq; otherwise we punt to the
synchronous software path.

The DMA_PREP_CONTINUE flag indicates to the driver to reuse p and q as
sources (in the above manner) and fill the remaining slots up to maxpq
with the new sources/coefficients.

Note1: Some devices have native support for P+Q continuation and can skip
this extra work.  Devices with this capability can advertise it with
dma_set_maxpq.  It is up to each driver how to handle the
DMA_PREP_CONTINUE flag.

Note2: The api supports disabling the generation of P when generating Q;
this is ignored by the synchronous path but is implemented by some dma
devices to save unnecessary writes.  In this case the continuation
algorithm is simplified to only reuse Q as a source.

Cc: H. Peter Anvin <hpa@zytor.com>
Cc: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-08-29 19:09:27 -07:00
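For reference, a standalone sketch of what the software fallback for async_gen_syndrome computes: P is the plain XOR of the data blocks and Q is the {02}-weighted sum evaluated Horner-style, so data disk i carries coefficient {02}^i, matching the {01}, {02}, {04}, {08} coefficients in the example above. The "data..., P, Q" block layout is assumed; this is illustrative code, not the kernel implementation.

    #include <stddef.h>
    #include <stdint.h>

    /* Multiply a GF(256) element by {02} over the RAID6 polynomial 0x11d. */
    static uint8_t gf_mul2(uint8_t v)
    {
            return (uint8_t)((v << 1) ^ ((v & 0x80) ? 0x1d : 0));
    }

    /* blocks[] holds disks-2 data pointers followed by P and Q. */
    static void gen_syndrome(int disks, size_t len, uint8_t **blocks)
    {
            uint8_t *p = blocks[disks - 2];
            uint8_t *q = blocks[disks - 1];

            for (size_t i = 0; i < len; i++) {
                    uint8_t wp = blocks[disks - 3][i]; /* highest data disk */
                    uint8_t wq = wp;

                    for (int d = disks - 4; d >= 0; d--) {
                            wp ^= blocks[d][i];
                            wq = (uint8_t)(gf_mul2(wq) ^ blocks[d][i]);
                    }
                    p[i] = wp;
                    q[i] = wq;
            }
    }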
Dan Williams
95475e5711 async_tx: remove walk of tx->parent chain in dma_wait_for_async_tx
We currently walk the parent chain when waiting for a given tx to
complete; however, this walk may race with the driver cleanup routine.
The routines in async_raid6_recov.c may fall back to the synchronous
path at any point, so we need to be prepared to call async_tx_quiesce()
(which calls dma_wait_for_async_tx).  To remove the ->parent walk we
guarantee that every time a dependency is attached ->issue_pending() is
invoked, then we can simply poll the initial descriptor until
completion.

This also allows for a lighter weight 'issue pending' implementation as
there is no longer a requirement to iterate through all the channels'
->issue_pending() routines as long as operations have been submitted in
an ordered chain.  async_tx_issue_pending() is added for this case.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-08-29 19:09:27 -07:00
Dan Williams
af1f951eb6 async_tx: kill needless module_{init|exit}
If module_init and module_exit are nops then neither need to be defined.

[ Impact: pure cleanup ]

Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-08-29 19:09:26 -07:00
Dan Williams
ad283ea4a3 async_tx: add sum check flags
Replace the flat zero_sum_result with a collection of flags to contain
the P (xor) zero-sum result, and the soon-to-be-utilized Q (raid6
Reed-Solomon syndrome) zero-sum result.  Use the SUM_CHECK_ namespace instead
of DMA_ since these flags will be used on non-dma-zero-sum enabled
platforms.

Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-08-29 19:09:26 -07:00
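A sketch of how such a flag set can be laid out (the names follow the SUM_CHECK_ convention mentioned above, but treat this as illustrative rather than the exact kernel definition):

    /* One bit per parity check so a single word reports both results. */
    enum sum_check_bits {
            SUM_CHECK_P,    /* xor zero-sum check failed */
            SUM_CHECK_Q,    /* raid6 reed-solomon syndrome check failed */
    };

    enum sum_check_flags {
            SUM_CHECK_P_RESULT = 1 << SUM_CHECK_P,
            SUM_CHECK_Q_RESULT = 1 << SUM_CHECK_Q,
    };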
Herbert Xu
0c7d400faf crypto: skcipher - Fix skcipher_dequeue_givcrypt NULL test
As struct skcipher_givcrypt_request includes struct crypto_request
at a non-zero offset, testing for NULL after converting the pointer
returned by crypto_dequeue_request does not work.  This can result
in IPsec crashes when the queue is depleted.

This patch fixes it by doing the pointer conversion only when the
return value is non-NULL.  In particular, we create a new function
__crypto_dequeue_request that does the pointer conversion.

Reported-by: Brad Bosch <bradbosch@comcast.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-08-29 20:44:04 +10:00
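A standalone illustration of the bug class described above (generic struct names, not the crypto API): because the embedded request sits at a non-zero offset inside its container, converting the pointer before the NULL check turns a NULL "queue empty" result into a bogus non-NULL pointer.

    #include <stddef.h>

    struct base_req { int flags; };
    struct giv_req  { void *giv; struct base_req base; };

    static struct giv_req *dequeue_giv(struct base_req *req)
    {
            if (!req)
                    return NULL; /* queue empty: skip the offset math */
            return (struct giv_req *)((char *)req -
                                      offsetof(struct giv_req, base));
    }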
Steffen Klassert
a367b17f34 crypto: ansi_cprng - Fix module initialization
Return the value we got from crypto_register_alg() instead of
returning 0 in any case.

Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-08-29 17:36:25 +10:00
Steffen Klassert
36f87a4a29 crypto: xcbc - Fix alignment calculation of xcbc_tfm_ctx
The alignment calculation of xcbc_tfm_ctx uses alg->cra_alignmask
and not alg->cra_alignmask + 1 as it should. This led to frequent
crashes during the selftest of xcbc(aes-asm) on x86_64
machines. This patch fixes this. Also we use the alignmask
of xcbc and not the alignmask of the underlying algorithm
for the alignment calculation in xcbc_create now.

Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-08-20 17:58:04 +10:00
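The arithmetic at issue, in a standalone sketch (illustrative, not the xcbc code): an alignmask is "required alignment minus one", so rounding a context size up must use alignmask + 1 as the boundary; using the mask itself as the boundary yields the wrong offset.

    #include <stddef.h>

    /* Round a context size up to the boundary implied by an alignmask. */
    static size_t ctx_size(size_t len, size_t alignmask)
    {
            return (len + alignmask) & ~alignmask; /* boundary = mask + 1 */
    }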
Neil Horman
4e4ed83be6 crypto: fips - Depend on ansi_cprng
What about something like this?  It defaults the CPRNG to m and makes FIPS
dependent on the CPRNG.  That way you get a module build by default, but you can
change it to y manually during config and still satisfy the dependency, and if
you select N it disables FIPS as well.  I rather like that better than making
FIPS a tristate.  I just tested it out here and it seems to work well.  Let me
know what you think.

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-08-20 17:54:16 +10:00
Herbert Xu
63b5ac286d crypto: blkcipher - Do not use eseqiv on stream ciphers
Recently we switched to using eseqiv on SMP machines in preference
over chainiv.  However, eseqiv does not support stream ciphers so
they should still default to chainiv.

This patch applies the same check as done by eseqiv to weed out
the stream ciphers.  In particular, all algorithms where the IV
size is not equal to the block size will now default to chainiv.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-08-14 22:58:18 +10:00
Herbert Xu
aef27136b8 crypto: ctr - Use chainiv on raw counter mode
Raw counter mode only works with chainiv, which is no longer
the default IV generator on SMP machines.  This broke raw counter
mode as it can no longer instantiate as a givcipher.

This patch fixes it by always picking chainiv on raw counter
mode.  This is based on the diagnosis and a patch by Huang
Ying.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-08-13 23:10:39 +10:00
Herbert Xu
73fec12094 Revert crypto: fips - Select CPRNG
This reverts commit 215ccd6f55.

It causes CPRNG and everything selected by it to be built-in
whenever FIPS is enabled.  The problem is that it is selecting
a tristate from a bool, which is usually not what is intended.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-08-13 22:41:25 +10:00
Christian Kujau
a8ccc393dd crypto: rng - Fix typo
Correct a typo in crypto/rng.c

Signed-off-by: Christian Kujau <lists@nerdbynature.de>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-08-13 11:53:56 +10:00
Huang Ying
ace1366369 crypto: cryptd - Add support to access underlying shash
cryptd_alloc_ahash() will allocate a cryptd-ed ahash for the specified
algorithm name. The newly allocated one is guaranteed to be a cryptd-ed
ahash, so the underlying shash can be retrieved via cryptd_ahash_child().

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-08-06 15:35:20 +10:00
Huang Ying
9382d97af5 crypto: gcm - Use GHASH digest algorithm
Remove the dedicated GHASH implementation in GCM and use the GHASH
digest algorithm instead. This makes GCM use a hardware-accelerated
GHASH implementation automatically if one is available.

The ahash interface is used instead of shash because some
hardware-accelerated GHASH implementations need an asynchronous
interface.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-08-06 15:34:26 +10:00
Huang Ying
2cdc6899a8 crypto: ghash - Add GHASH digest algorithm for GCM
GHASH is implemented as a shash algorithm. The actual implementation
is copied from gcm.c. This makes it possible to add
architecture/hardware accelerated GHASH implementation.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-08-06 15:32:38 +10:00
Steffen Klassert
cbdcf80d8b crypto: authenc - Convert to ahash
This patch converts authenc to the new ahash interface.

Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-08-05 19:35:34 +10:00
Linus Torvalds
db06816cb9 Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx:
  dmaengine: at_hdmac: add DMA slave transfers
  dmaengine: at_hdmac: new driver for the Atmel AHB DMA Controller
  dmaengine: dmatest: correct thread_count while using multiple thread per channel
  dmaengine: dmatest: add a maximum number of test iterations
  drivers/dma: Remove unnecessary semicolons
  drivers/dma/fsldma.c: Remove unnecessary semicolons
  dmaengine: move HIGHMEM64G restriction to ASYNC_TX_DMA
  fsldma: do not clear bandwidth control bits on the 83xx controller
  fsldma: enable external start for the 83xx controller
  fsldma: use PCI Read Multiple command
2009-07-30 16:46:31 -07:00
Herbert Xu
0b767b4df3 crypto: hmac - Prehash ipad/opad
This patch uses crypto_shash_export/crypto_shash_import to prehash
ipad/opad to speed up hmac.  This is partly based on a similar patch
by Steffen Klassert.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-24 15:18:41 +08:00
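For context, the two one-block prefixes being prehashed are the standard HMAC pads. A standalone sketch for a key that already fits in one hash block (longer keys are hashed down first); illustrative code, not the kernel hmac template:

    #include <stdint.h>
    #include <string.h>

    /* Build the ipad/opad blocks; hashing each once and exporting the
     * partial state is what lets later requests skip reprocessing them. */
    static void hmac_pads(const uint8_t *key, size_t keylen, size_t blocksize,
                          uint8_t *ipad, uint8_t *opad)
    {
            memset(ipad, 0, blocksize);
            memcpy(ipad, key, keylen);
            memcpy(opad, ipad, blocksize);

            for (size_t i = 0; i < blocksize; i++) {
                    ipad[i] ^= 0x36;
                    opad[i] ^= 0x5c;
            }
    }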
Phil Carmody
7b4ffcf953 crypto: aes - Undefined behaviour in crypto_aes_expand_key
It's undefined behaviour in C to write outside the bounds of an array.
The key expansion routine takes a shortcut of creating 8 words at a
time, but this creates 4 additional words which don't fit in the array.

As everyone is hopefully now aware, GCC is at liberty to make any
assumptions and optimisations it likes in situations where it can
detect that UB has occurred, up to and including nasal demons, and
as the indices being accessed in the array are trivially calculable,
it's rash to invite gcc to take any liberties at all.

Signed-off-by: Phil Carmody <ext-phil.2.carmody@nokia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-24 13:59:17 +08:00
Steffen Klassert
0044f3eda9 crypto: shash - Test for the algorithms import function before exporting it
crypto_init_shash_ops_async() tests for setkey and not for import
before exporting the algorithms import function to ahash.
This patch fixes this.

Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-24 13:57:13 +08:00
Steffen Klassert
5befbd5a7e crypto: ahash - Use GFP_KERNEL on allocation if the request can sleep
ahash_op_unaligned() and ahash_def_finup() allocate memory atomically,
regardless of whether the request can sleep. This patch changes
this to use GFP_KERNEL if the request can sleep.

Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-24 13:56:31 +08:00
Herbert Xu
f592682f9f crypto: shash - Require all algorithms to support export/import
This patch provides a default export/import function for all
shash algorithms.  It simply copies the descriptor context as
is done by sha1_generic.

This in essence means that all existing shash algorithms now
support export/import.  This is something that will be depended
upon in implementations such as hmac.  Therefore all new shash
and ahash implementations must support export/import.

For those that cannot obtain a partial result, padlock-sha's
fallback model should be used so that a partial result is always
available.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-22 14:38:13 +08:00
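The default described above amounts to a straight copy of the descriptor context. A minimal standalone sketch, with descsize standing in for the algorithm's descriptor size (illustrative, not the kernel helpers):

    #include <string.h>

    static void default_export(void *out, const void *desc_ctx, size_t descsize)
    {
            memcpy(out, desc_ctx, descsize);
    }

    static void default_import(void *desc_ctx, const void *in, size_t descsize)
    {
            memcpy(desc_ctx, in, descsize);
    }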
Herbert Xu
13887ed688 crypto: sha512_generic - Use 64-bit counters
This patch replaces the 32-bit counters in sha512_generic with
64-bit counters.  It also switches the bit count to the simpler
byte count.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-22 14:38:13 +08:00
Herbert Xu
1f38ad8389 crypto: sha512 - Export struct sha512_state
This patch renames struct sha512_ctx and exports it as struct
sha512_state so that other sha512 implementations can use it
as the reference structure for exporting their state.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-22 14:38:12 +08:00
Herbert Xu
ac95301f27 crypto: xcbc - Fix shash conversion
Although xcbc was converted to shash, it didn't obey the new
requirement that all hash state must be stored in the descriptor
rather than the transform.

This patch fixes this issue and also optimises away the rekeying
by precomputing K2 and K3 within setkey.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-22 14:38:10 +08:00
Herbert Xu
b588ef6e69 crypto: xcbc - Use crypto_xor
This patch replaces the local xor function with the generic
crypto_xor function.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-22 14:38:10 +08:00
Herbert Xu
6fba00d176 crypto: cryptd - Add finup/export/import for hash
This patch adds the finup/export/import functions to the cryptd
ahash implementation.  We simply invoke the underlying shash
operations.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-22 11:10:22 +08:00
Herbert Xu
cbc86b9161 crypto: shash - Fix async finup handling of null digest
When shash_ahash_finup encounters a null request, we end up not
calling the underlying final function.  This patch fixes that.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-15 21:26:41 +08:00
Herbert Xu
a70c522520 crypto: ahash - Fix setkey crash
When the alignment check was made unconditional for ahash we
may end up crashing on shash algorithms because we're always
calling alg->setkey instead of tfm->setkey.

This patch fixes it.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-15 20:39:05 +08:00
Herbert Xu
b5ebd44eb7 crypto: xcbc - Fix incorrect error value when creating instance
If shash_alloc_instance() fails, we return the wrong error value.
This patch fixes it.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-15 16:53:33 +08:00
Herbert Xu
3b3fc322d9 crypto: hmac - Fix incorrect error value when creating instance
If shash_alloc_instance() fails, we return the wrong error value.
This patch fixes it.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-15 16:52:55 +08:00
Steffen Klassert
05ed8758fa crypto: cryptd - Fix uninitialized return value
If cryptd_alloc_instance() fails, the return value is uninitialized.
This patch fixes this by setting the return value.

Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-15 16:51:04 +08:00
Herbert Xu
66f6ce5e52 crypto: ahash - Add unaligned handling and default operations
This patch exports the finup operation where available and adds
a default finup operation for ahash.  The operations final, finup
and digest will now also deal with unaligned result pointers by
copying them.  Finally, export/import operations will now be
exported too.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-15 12:40:40 +08:00
Herbert Xu
093900c2b9 crypto: ahash - Use GFP_KERNEL in unaligned setkey
We currently use GFP_ATOMIC in the unaligned setkey function
to allocate the temporary aligned buffer.  Since setkey must
be called in a sleepable context, we can use GFP_KERNEL instead.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 21:48:35 +08:00
Herbert Xu
0e2d3a1263 crypto: shash - Fix alignment in unaligned operations
When we encounter an unaligned pointer we are supposed to copy
it to a temporary aligned location.  However the temporary buffer
isn't aligned properly.  This patch fixes that.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 21:43:56 +08:00
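The fix described above comes down to bumping the temporary copy up to the same boundary the algorithm demands. A standalone sketch of that pointer adjustment (mask = required alignment - 1; illustrative, not the shash code):

    #include <stdint.h>

    static void *ptr_align(void *p, uintptr_t alignmask)
    {
            return (void *)(((uintptr_t)p + alignmask) & ~alignmask);
    }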
Herbert Xu
8c32c516eb crypto: hash - Zap unaligned buffers
Some unaligned buffers on the stack weren't zapped properly, which
may cause secret data to be leaked.  This patch fixes them by doing
a zero memset.

It is also possible for us to place random kernel stack contents
in the digest buffer if a digest operation fails.  This is fixed
by only copying if the operation succeeded.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 21:35:36 +08:00
Herbert Xu
500b3e3c3d crypto: ahash - Remove old_ahash_alg
Now that all ahash implementations have been converted to the new
ahash type, we can remove old_ahash_alg and its associated support.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 20:29:57 +08:00
Herbert Xu
0b535adfb1 crypto: cryptd - Switch to new style ahash
This patch changes cryptd to use the new style ahash type.  In
particular, the instance is enlarged to encapsulate the new
ahash_alg structure.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 19:11:32 +08:00
Herbert Xu
9cd899a32f crypto: cryptd - Switch to template create API
This patch changes cryptd to use the template->create function
instead of alloc in anticipation for the switch to new style
ahash algorithms.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 18:45:45 +08:00
Herbert Xu
7be380f720 crypto: tcrypt - Add mask parameter
This patch adds a mask parameter to complement the existing type
parameter.  This is useful when instantiating algorithms that
require a mask other than the default, e.g., ahash algorithms.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 16:06:54 +08:00
Herbert Xu
01c2dece43 crypto: ahash - Add instance/spawn support
This patch adds support for creating ahash instances and using
ahash as spawns.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 15:54:09 +08:00
Herbert Xu
88056ec346 crypto: ahash - Convert to new style algorithms
This patch converts crypto_ahash to the new style.  The old ahash
algorithm type is retained until the existing ahash implementations
are also converted.  All ahash users will automatically get the
new crypto_ahash type.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 15:54:07 +08:00
Herbert Xu
2ca33da1de crypto: api - Remove frontend argument from extsize/init_tfm
As the extsize and init_tfm functions belong to the frontend the
frontend argument is superfluous.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 12:58:15 +08:00
Herbert Xu
0d6669e2ba crypto: cryptd - Use crypto_ahash_set_reqsize
This patch makes cryptd use crypto_ahash_set_reqsize to avoid
accessing crypto_ahash directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 12:58:12 +08:00
Herbert Xu
46309d8938 crypto: cryptd - Use shash algorithms
This patch changes cryptd to use shash algorithms instead of the
legacy hash interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 12:58:10 +08:00
Herbert Xu
7eddf95ec5 crypto: shash - Export async functions
This patch exports the async functions so that they can be reused
by cryptd when it switches over to using shash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 12:58:08 +08:00
Herbert Xu
6941c3a0aa crypto: hash - Remove legacy hash/digest implementation
This patch removes the implementation of hash and digest now that
no algorithms use them anymore.  The interface though will remain
until the users are converted across.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 12:58:07 +08:00
Herbert Xu
9ef074fa9b crypto: authenc - Remove reference to crypto_hash
Now that there are no more legacy hash implementations we can
remove the reference to crypto_hash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 12:58:07 +08:00
Herbert Xu
3106caab61 crypto: xcbc - Switch to shash
This patch converts the xcbc algorithm to the new shash type.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 12:58:06 +08:00
Herbert Xu
8bd1209cff crypto: hmac - Switch to shash
This patch changes hmac to the new shash interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 12:58:04 +08:00
Herbert Xu
113adefc73 crypto: shash - Make descsize a run-time attribute
This patch changes descsize to a run-time attribute so that
implementations can change it in their init functions.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 12:58:00 +08:00
Herbert Xu
57cfe44bcc crypto: shash - Move null setkey check to registration time
This patch moves the run-time null setkey check to shash_prepare_alg
just like we did for finup/digest.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-12 10:46:02 +08:00
Herbert Xu
9b2fda7b94 crypto: sha256_generic - Add export/import support
This patch adds export/import support to sha256_generic.  The exported
type is defined by struct sha256_state, which is basically the entire
descriptor state of sha256_generic.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-11 18:23:34 +08:00
Herbert Xu
3d4d277cf8 crypto: sha256_generic - Use 64-bit counter like sha1
This patch replaces the two 32-bit counter code in sha256_generic
with the simpler 64-bit counter code from sha1.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-11 18:23:33 +08:00
Herbert Xu
e2a7ce4e18 crypto: sha1_generic - Add export/import support
This patch adds export/import support to sha1_generic.  The exported
type is defined by struct sha1_state, which is basically the entire
descriptor state of sha1_generic.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-11 18:23:33 +08:00
Herbert Xu
8267adab94 crypto: shash - Move finup/digest null checks to registration time
This patch moves the run-time null finup/digest checks to the
shash_prepare_alg function which is run at registration time.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-11 18:23:33 +08:00
Herbert Xu
99d27e1c59 crypto: shash - Export/import hash state only
This patch replaces the full descriptor export with an export of
the partial hash state.  This allows the use of a consistent export
format across all implementations of a given algorithm.

This is useful because a number of cases require the use of the
partial hash state, e.g., PadLock can use the SHA1 hash state
to get around the fact that it can only hash contiguous data
chunks.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-11 18:23:32 +08:00
Herbert Xu
7ede5a5ba5 crypto: api - Fix crypto_drop_spawn crash on blank spawns
This patch allows crypto_drop_spawn to be called on spawns that
have not been initialised or have failed initialisation.  This
fixes potential crashes during initialisation without adding
special case code.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-09 11:34:06 +08:00
Herbert Xu
deee2289b9 crypto: shash - Propagate reinit return value
This patch fixes crypto_shash_import to propagate the value returned
by reinit.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-08 23:39:01 +08:00
Herbert Xu
f88ad8de28 crypto: shash - Use finup in default digest
This patch simplifies the default digest function by using finup.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-08 23:32:08 +08:00
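A standalone model of the simplification described above: once a finup operation is available, the default digest is just init followed by finup (generic function-pointer types here, not the shash code itself):

    struct hash_ops {
            int (*init)(void *state);
            int (*finup)(void *state, const unsigned char *data,
                         unsigned int len, unsigned char *out);
    };

    static int default_digest(const struct hash_ops *ops, void *state,
                              const unsigned char *data, unsigned int len,
                              unsigned char *out)
    {
            int err = ops->init(state);

            return err ? err : ops->finup(state, data, len, out);
    }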
Herbert Xu
619a6ebd25 crypto: shash - Add shash_register_instance
This patch adds shash_register_instance so that shash instances
can be registered without bypassing the shash checks applied to
normal algorithms.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-08 19:00:38 +08:00
Herbert Xu
7d6f56400a crypto: shash - Add shash_attr_alg2 helper
This patch adds the helper shash_attr_alg2 which locates a shash
algorithm based on the information in the given attribute.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-08 18:58:30 +08:00
Herbert Xu
d06854f024 crypto: api - Add crypto_attr_alg2 helper
This patch adds the helper crypto_attr_alg2 which is similar to
crypto_attr_alg but takes an extra frontend argument.  This is
intended to be used by new style algorithm types such as shash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-08 18:58:29 +08:00
Herbert Xu
942969992d crypto: shash - Add spawn support
This patch adds the functions needed to create and use shash
spawns, i.e., to use shash algorithms in a template.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-08 18:58:29 +08:00
Herbert Xu
97eedce1a6 crypto: api - Add new style spawn support
This patch modifies the spawn infrastructure to support new style
algorithms like shash.  In particular, this means storing the
frontend type in the spawn and using crypto_create_tfm to allocate
the tfm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-08 18:57:10 +08:00
Herbert Xu
2e4fddd8e4 crypto: shash - Add shash_instance
This patch adds shash_instance and the associated alloc/free
functions.  This is meant to be an instance that with a shash
algorithm under it.  Note that the instance itself doesn't have
to be shash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-08 18:57:10 +08:00
Herbert Xu
70ec7bb91a crypto: api - Add crypto_alloc_instance2
This patch adds a new argument to crypto_alloc_instance which
sets aside some space before the instance for use by algorithms
such as shash that place type-specific data before crypto_alg.

For compatibility the function has been renamed so that existing
users aren't affected.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-07 18:55:28 +08:00
Herbert Xu
f2ac72e826 crypto: api - Add new template create function
This patch introduces the template->create function intended
to replace the existing alloc function.  The intention is for
create to handle the registration directly, whereas currently
the caller of alloc has to handle the registration.

This allows type-specific code to be run prior to registration.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-07 12:30:33 +08:00
Sebastian Andrzej Siewior
fd09d7facb crypto: ansi_prng - alloc cipher just in init
As reported by Eric Sesterhenn the re-allocation of the cipher in reset leads
to:
|BUG: sleeping function called from invalid context at kernel/rwsem.c:21
|in_atomic(): 1, irqs_disabled(): 0, pid: 4926, name: modprobe
|INFO: lockdep is turned off.
|Pid: 4926, comm: modprobe Tainted: G   M 2.6.31-rc1-22297-g5298976 #24
|Call Trace:
| [<c011dd93>] __might_sleep+0xf9/0x101
| [<c0777aa0>] down_read+0x16/0x68
| [<c048bf04>] crypto_alg_lookup+0x16/0x34
| [<c048bf52>] crypto_larval_lookup+0x30/0xf9
| [<c048c038>] crypto_alg_mod_lookup+0x1d/0x62
| [<c048c13e>] crypto_alloc_base+0x1e/0x64
| [<c04bf991>] reset_prng_context+0xab/0x13f
| [<c04e5cfc>] ? __spin_lock_init+0x27/0x51
| [<c04bfce1>] cprng_init+0x2a/0x42
| [<c048bb4c>] __crypto_alloc_tfm+0xfa/0x128
| [<c048c153>] crypto_alloc_base+0x33/0x64
| [<c04933c9>] alg_test_cprng+0x30/0x1f4
| [<c0493329>] alg_test+0x12f/0x19f
| [<c0177f1f>] ? __alloc_pages_nodemask+0x14d/0x481
| [<d09219e2>] do_test+0xf9d/0x163f [tcrypt]
| [<d0920de6>] do_test+0x3a1/0x163f [tcrypt]
| [<d0926035>] tcrypt_mod_init+0x35/0x7c [tcrypt]
| [<c010113c>] _stext+0x54/0x12c
| [<d0926000>] ? tcrypt_mod_init+0x0/0x7c [tcrypt]
| [<c01398a3>] ? up_read+0x16/0x2b
| [<c0139fc4>] ? __blocking_notifier_call_chain+0x40/0x4c
| [<c014ee8d>] sys_init_module+0xa9/0x1bf
| [<c010292b>] sysenter_do_call+0x12/0x32

because a spin lock is held and crypto_alloc_base() may sleep.
There is no reason to re-allocate the cipher; the state is reset in
->setkey(). This patch makes the cipher allocation a one-time thing and
moves it to init.

Reported-by: Eric Sesterhenn <eric.sesterhenn@lsexperts.de>
Signed-off-by: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-03 12:10:47 +08:00
Sebastian Andrzej Siewior
ed94070058 crypto: ansi_prng - Use just a BH lock
The current code uses a mix of spin_lock() & spin_lock_irqsave(). This can
lead to deadlock with the right timing & cprng_get_random() + cprng_reset()
sequence.
I've converted them to bottom-half locks since all three users grab just a BH
lock, so this probably runs in softirq :)

Signed-off-by: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-03 12:09:41 +08:00
Herbert Xu
a68f6610d4 crypto: testmgr - Allow implementation-specific tests
This patch adds the support for testing specific implementations.
This should only be used in very specific situations.  Right now
this means specific implementations of random number generators.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-02 16:32:12 +08:00
Dan Williams
daf4219dbc dmaengine: move HIGHMEM64G restriction to ASYNC_TX_DMA
On HIGHMEM64G systems dma_addr_t is known to be larger than (void *)
which precludes async_xor from performing dma address conversions by
reusing the input parameter address list.  However, other parts of the
dmaengine infrastructure do not suffer this constraint, so the
HIGHMEM64G restriction can be down-levelled.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-07-01 16:12:53 -07:00
Herbert Xu
0b67fb65d1 crypto: skcipher - Change default sync geniv on SMP to eseqiv
As it stands we use chainiv for sync algorithms and eseqiv for
async algorithms.  However, when there is more than one CPU
chainiv forces all processing to be serialised which is usually
not what you want.  Also, the added overhead of eseqiv isn't that
great.

Therefore this patch changes the default sync geniv on SMP machines
to eseqiv.  For the odd situation where the overhead is unacceptable,
chainiv is still available as an option.

Note that on UP machines chainiv is still preferred over eseqiv
for sync algorithms.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-25 18:43:48 +08:00
Herbert Xu
435578aeaa crypto: skcipher - Fix request for sync algorithms
When a sync givcipher algorithm is requested, if an async version
of the same algorithm already exists, then we will loop forever
without ever constructing the sync version based on a blkcipher.

This is because we did not include the requested type/mask when
getting a larval for the geniv algorithm that is to be constructed.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-25 14:46:31 +08:00
Herbert Xu
259c5e05c1 crypto: testmgr - Remove hash size check
Until hash test vectors grow longer than 256 bytes, the only
purpose of the check is to generate a gcc warning.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-24 13:48:13 +08:00
Neil Horman
215ccd6f55 crypto: fips - Select CPRNG
The ANSI CPRNG has no dependence on FIPS support.  FIPS support, however,
requires the use of the CPRNG.  Adjust that dependency relationship in Kconfig.

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-21 21:38:03 +08:00
Herbert Xu
ea40065769 crypto: tcrypt - Fix module return code when testing by name
We should return 0/-ENOENT instead of 1/0 when testing by name.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-19 20:37:00 +08:00
Herbert Xu
27300176d7 crypto: ansi_cprng - Do not select FIPS
The RNG should work with FIPS disabled.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-19 20:32:58 +08:00
Steffen Klassert
a873a5f1c4 crypto: tcrypt - Test algorithms by name
This adds the 'alg' module parameter to be able to test an
algorithm by name. If the algorithm type is not ad-hoc
clear for a algorithm (e.g. pcrypt, cryptd) it is possilbe
to set the algorithm type with the 'type' module parameter.

Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-19 19:46:53 +08:00
Neil Horman
5b739ef8a4 random: Add optional continuous repetition test to entropy store based rngs
FIPS-140 requires that all random number generators implement continuous self
tests in which each extracted block of data is compared against the last block
for repetition.  The ansi_cprng implements such a test, but it would be nice if
the hw rngs did the same thing.  Obviously it's not something that's always
needed, but it seems like it would be a nice feature to have on occasion. I've
written the below patch which allows individual entropy stores to be flagged as
desiring a continuous test to be run on data as it is extracted.  By default this
option is off, but is enabled in the event that fips mode is selected during
bootup.

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-18 19:50:21 +08:00
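An illustrative standalone model of the continuous test described above (not the kernel implementation): each freshly extracted block is compared with the previous one and rejected if identical, which catches a stuck generator.

    #include <stdint.h>
    #include <string.h>

    struct cont_test {
            uint8_t last[16];
            int     primed;  /* the first block only seeds the comparison */
    };

    static int cont_test_check(struct cont_test *t,
                               const uint8_t *block, size_t len)
    {
            if (len > sizeof(t->last))
                    len = sizeof(t->last);
            if (t->primed && memcmp(t->last, block, len) == 0)
                    return -1;       /* repetition detected: fail */
            memcpy(t->last, block, len);
            t->primed = 1;
            return 0;
    }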
Vegard Nossum
722f2a6c87 Merge commit 'linus/master' into HEAD
Conflicts:
	MAINTAINERS

Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
2009-06-15 15:50:49 +02:00
Vegard Nossum
33f65df7ed crypto: don't track xor test pages with kmemcheck
The xor tests are run on uninitialized data, because it doesn't
really matter what the underlying data is. Annotate this false-
positive warning.

Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
2009-06-15 12:40:10 +02:00
Dan Williams
04ce9ab385 async_xor: permit callers to pass in a 'dma/page scribble' region
async_xor() needs space to perform dma and page address conversions.  In
most cases the code can simply reuse the struct page * array because the
size of the native pointer matches the size of a dma/page address.  In
order to support archs where sizeof(dma_addr_t) is larger than
sizeof(struct page *), or to preserve the input parameters, we utilize a
memory region passed in by the caller.

Since the code is now prepared to handle the case where it cannot
perform address conversions on the stack, we no longer need the
!HIGHMEM64G dependency in drivers/dma/Kconfig.

[ Impact: don't clobber input buffers for address conversions ]

Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-06-03 14:22:28 -07:00
Dan Williams
a08abd8ca8 async_tx: structify submission arguments, add scribble
Prepare the api for the arrival of a new parameter, 'scribble'.  This
will allow callers to identify scratchpad memory for dma address or page
address conversions.  As this adds yet another parameter, take this
opportunity to convert the common submission parameters (flags,
dependency, callback, and callback argument) into an object that is
passed by reference.

Also, take this opportunity to fix up the kerneldoc and add notes about
the relevant ASYNC_TX_* flags for each routine.

[ Impact: moves api pass-by-value parameters to a pass-by-reference struct ]

Signed-off-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-06-03 14:07:35 -07:00
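An illustrative standalone sketch of the shape of that change (names mirror the async_tx API but this is not the kernel definition): the common submission parameters move from a long argument list into one struct passed by reference, with room for the new scribble pointer.

    struct submit_ctl {
            unsigned long flags;       /* ASYNC_TX_* flags */
            void *depend_tx;           /* descriptor this op depends on */
            void (*cb_fn)(void *);     /* completion callback */
            void *cb_param;            /* callback argument */
            void *scribble;            /* scratch for address conversions */
    };

    static void init_submit(struct submit_ctl *s, unsigned long flags,
                            void *depend_tx, void (*cb)(void *), void *param,
                            void *scribble)
    {
            s->flags = flags;
            s->depend_tx = depend_tx;
            s->cb_fn = cb;
            s->cb_param = param;
            s->scribble = scribble;
    }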
Dan Williams
88ba2aa586 async_tx: kill ASYNC_TX_DEP_ACK flag
In support of inter-channel chaining async_tx utilizes an ack flag to
gate whether a dependent operation can be chained to another.  While the
flag is not set the chain can be considered open for appending.  Setting
the ack flag closes the chain and flags the descriptor for garbage
collection.  The ASYNC_TX_DEP_ACK flag essentially means "close the
chain after adding this dependency".  Since each operation can only have
one child the api now implicitly sets the ack flag at dependency
submission time.  This removes an unnecessary management burden from
clients of the api.

[ Impact: clean up and enforce one dependency per operation ]

Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-06-03 14:07:34 -07:00
Alex Riesen
aa07a6990f crypto: api - Use formatting of module name
Besides, for the old code, gcc-4.3.3 produced this warning:
  "format not a string literal and no format arguments"

Signed-off-by: Alex Riesen <raa.lkml@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:13:14 +10:00
Herbert Xu
a0cfae59f8 crypto: testmgr - Allow hash test vectors longer than a page
As it stands we test each hash vector both linearly and as
a scatter list if applicable.  This means that we cannot have
vectors longer than a page, even with scatter lists.

This patch fixes this by skipping test vectors with np != 0 when
testing linearly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:05:02 +10:00
Herbert Xu
fd57f22a09 crypto: testmgr - Check all test vector lengths
As we cannot guarantee the availability of contiguous pages at
run-time, all test vectors must either fit within a page, or use
scatter lists.  In some cases vectors were not checked as to
whether they fit inside a page.  This patch adds all the missing
checks.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:05:00 +10:00
Jarod Wilson
4e033a6bc7 crypto: tcrypt - Do not exit on success in fips mode
At present, the tcrypt module always exits with an -EAGAIN upon
successfully completing all the tests it's been asked to run. In fips
mode, integrity checking is done by running all self-tests from the
initrd, and it's much simpler to check the ret from modprobe for
success than to scrape dmesg and/or /proc/crypto. Simply stay
loaded, giving modprobe a retval of 0, if self-tests all pass and
we're in fips mode.

A side-effect of tracking success/failure for fips mode is that in
non-fips mode, self-test failures will return the actual failure
return codes, rather than always returning -EAGAIN, which seems more
correct anyway.

The tcrypt_test() portion of the patch is dependent on my earlier
pair of patches that skip non-fips algs in fips mode, at least to
achieve the fully intended behavior.

Nb: testing this patch against the cryptodev tree revealed a test
failure for sha384, which I have yet to look into...

Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:04:57 +10:00
Geert Uytterhoeven
3ce858cb04 crypto: compress - Return produced bytes in crypto_{,de}compress_{update,final}
If crypto_{,de}compress_{update,final}() succeed, return the actual number of
bytes produced instead of zero, so their users don't have to calculate that
themselves.

Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:04:56 +10:00
Jarod Wilson
a3bef3a31a crypto: testmgr - Skip algs not flagged fips_allowed in fips mode
Because all fips-allowed algorithms must be self-tested before they
can be used, they will all have entries in testmgr.c's alg_test_descs[].
Skip self-tests for any algs not flagged as fips_approved and return
-EINVAL when in fips mode.

Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:04:49 +10:00
Jarod Wilson
a1915d51e8 crypto: testmgr - Mark algs allowed in fips mode
Set the fips_allowed flag in testmgr.c's alg_test_descs[] for algs
that are allowed to be used when in fips mode.

One caveat: des isn't actually allowed anymore, but des (and thus also
ecb(des)) has to be permitted, because disallowing them results in
des3_ede being unable to properly register (see des module init func).

Also, crc32 isn't technically on the fips approved list, but I think
it gets used in various places that necessitate it being allowed.

This list is based on
http://csrc.nist.gov/groups/STM/cavp/index.html

Important note: allowed/approved here does NOT mean "validated", just
that it's an alg that *could* be validated.

Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:04:48 +10:00
Jarod Wilson
f7cb80f2b9 crypto: testmgr - Add ctr(aes) test vectors
Now with multi-block test vectors, all from SP800-38A, Appendix F.5.
Also added ctr(aes) to case 10 in tcrypt.

Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:04:46 +10:00
Herbert Xu
f8b0d4d09d crypto: testmgr - Dynamically allocate xbuf and axbuf
We currently allocate temporary memory that is used for testing
statically.  This renders the testing engine non-reentrant. As
algorithms may nest, i.e., one may construct another in order to
carry out a part of its operation, this is unacceptable.  For
example, it has been reported that an AEAD implementation allocates
a cipher in its setkey function, which causes it to fail during
testing as the temporary memory is overwritten.

This patch replaces the static memory with dynamically allocated
buffers.  We need a maximum of 16 pages so this slightly increases
the chances of an algorithm failing due to memory shortage.
However, as testing usually occurs at registration, this shouldn't
be a big problem.

Reported-by: Shasi Pulijala <spulijala@amcc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:04:45 +10:00
Jarod Wilson
29ecd4ab3d crypto: testmgr - Print self-test pass notices in fips mode
According to our FIPS CAVS testing lab guru, when we're in fips mode,
we must print out notices of successful self-test completion for
every alg to be compliant.

New and improved v2, without strncmp crap. Doesn't need to touch a flag
though, due to not moving the notest label around anymore.

Applies atop '[PATCH v2] crypto: catch base cipher self-test failures
in fips mode'.

Personally, I wouldn't mind seeing this info printed out regardless of
whether or not we're in fips mode, I think it's useful info, but will
stick with only in fips mode for now.

Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:04:43 +10:00
Jarod Wilson
941fb3287c crypto: testmgr - Catch base cipher self-test failures in fips mode
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:04:42 +10:00
Jarod Wilson
e08ca2da39 crypto: testmgr - Add ansi_cprng test vectors
Add ANSI X9.31 Continuous Pseudo-Random Number Generator (AES mode),
aka 'ansi_cprng' test vectors, taken from Appendix B.2.9 and B.2.10
of the NIST RNGVS document, found here:
    http://csrc.nist.gov/groups/STM/cavp/documents/rng/RNGVS.pdf

Successfully tested against both the cryptodev-2.6 tree and a Red
Hat Enterprise Linux 5.4 kernel, via 'modprobe tcrypt mode=150'.

The selection of 150 was semi-arbitrary; it didn't seem like it should
go any place in particular, so I started a new range for rng tests.

Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:04:40 +10:00
Jarod Wilson
7647d6ce20 crypto: testmgr - Add infrastructure for ansi_cprng self-tests
Add some necessary infrastructure to make it possible to run
self-tests for ansi_cprng. The bits are likely very specific
to the ANSI X9.31 CPRNG in AES mode, and thus perhaps should
be named more specifically if/when we grow additional CPRNG
support...

Successfully tested against the cryptodev-2.6 tree and a
Red Hat Enterprise Linux 5.x kernel with the follow-on
patch that adds the actual test vectors.

Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:04:38 +10:00
Jarod Wilson
5d667322a2 crypto: testmgr - Add self-tests for rfc4309(ccm(aes))
Add an array of encryption and decryption + verification self-tests
for rfc4309(ccm(aes)).

Test vectors all come from sample FIPS CAVS files provided to
Red Hat by a testing lab. Unfortunately, all the published sample
vectors in RFC 3610 and NIST Special Publication 800-38C contain nonce
lengths that the kernel's rfc4309 implementation doesn't support, so
while using some public domain vectors would have been preferred, it's
not possible at this time.

Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:04:36 +10:00
Jarod Wilson
e44a1b44c3 crypto: testmgr - Handle AEAD test vectors expected to fail verification
Add infrastructure to tcrypt/testmgr to support handling ccm decryption
test vectors that are expected to fail verification.

Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:04:35 +10:00
Geert Uytterhoeven
2f6ceb7933 crypto: pcomp - pcompress.c should include crypto/internal/compress.h
make C=1:
| crypto/pcompress.c:77:5: warning: symbol 'crypto_register_pcomp' was not declared. Should it be static?
| crypto/pcompress.c:89:5: warning: symbol 'crypto_unregister_pcomp' was not declared. Should it be static?

Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:04:20 +10:00
Geert Uytterhoeven
c79cf91006 crypto: testmgr - Kill test_comp() sparse warnings
make C=1:
| crypto/testmgr.c:846:45: warning: incorrect type in argument 5 (different signedness)
| crypto/testmgr.c:846:45:    expected unsigned int *dlen
| crypto/testmgr.c:846:45:    got int *<noident>
| crypto/testmgr.c:878:47: warning: incorrect type in argument 5 (different signedness)
| crypto/testmgr.c:878:47:    expected unsigned int *dlen
| crypto/testmgr.c:878:47:    got int *<noident>

Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:04:18 +10:00
Huang Ying
2cf4ac8beb crypto: aes-ni - Add support for more modes
Because kernel_fpu_begin() and kernel_fpu_end() operations are too
slow, the performance gain of general mode implementation + aes-aesni
is almost entirely cancelled out.

The AES-NI support for more modes are implemented as follow:

- Add a new AES algorithm implementation named __aes-aesni without
  kernel_fpu_begin/end()

- Use fpu(<mode>(AES)) to provide the kernel_fpu_begin/end() invocations

- Add <mode>(AES) ablkcipher, which uses cryptd(fpu(<mode>(AES))) to
  defer encryption/decryption to the cryptd context in soft_irq context.

Now the ctr, lrw, pcbc and xts support are added.

Performance testing based on dm-crypt shows that encryption time can be
reduced to 50% of the general mode implementation + aes-aesni implementation.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:04:16 +10:00
Huang Ying
150c7e8552 crypto: fpu - Add template for blkcipher touching FPU
Blkciphers touching the FPU need to be enclosed by kernel_fpu_begin() and
kernel_fpu_end(). If these are invoked in the cipher algorithm
implementation, they are invoked for each block, which hurts
performance, because they are "slow" operations. This
patch implements the "fpu" template, which makes these operations be
invoked once per request.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:04:15 +10:00
Huang Ying
505fd21d61 crypto: cryptd - Use nivcipher in cryptd_alloc_ablkcipher
Use crypto_alloc_base() instead of crypto_alloc_ablkcipher() to
allocate the underlying tfm in cryptd_alloc_ablkcipher, because
crypto_alloc_ablkcipher() prefers a GENIV-encapsulated cipher instead of
the raw one, while cryptd_alloc_ablkcipher needs the raw one.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:04:13 +10:00
Johannes Weiner
811d8f0626 crypto: api - Use kzfree
Use kzfree() instead of memset() + kfree().

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:04:11 +10:00
Frank Seidel
376bacb0a2 crypto: tcrypt - Reduce stack size
Apply kernel janitors' todos (printk calls need KERN_* constants at the
beginning of the line; reduce stack footprint where possible) to
tcrypt's test_hash_speed(), whose stack memory footprint was very high
(on i386, down from 1184 bytes to 160 now).
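
A hedged sketch of the stack-reduction pattern (variable names are
illustrative, not the exact tcrypt.c change):

	/* before: ~1 KiB of output buffer on the kernel stack */
	char output[1024];

	/* after: the buffer lives in .bss instead of on the stack
	 * (acceptable here because tcrypt is a single-threaded test
	 * module) */
	static char output[1024];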

Signed-off-by: Frank Seidel <frank@f-seidel.de>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-06-02 14:04:09 +10:00
Herbert Xu
d315a0e09f crypto: hash - Fix handling of sg entry that crosses page boundary
A quirk that we've always supported is having an sg entry that's
bigger than a page, or more generally an sg entry that crosses
page boundaries.  Even though it would be better to explicitly have
two sg entries for this, we need to support it for the existing users,
in particular IPsec.

The new ahash sg walking code did try to handle this, but there was
a bug where we didn't increment the page, so we kept walking over the
first page over and over again.

This patch fixes it.
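
A minimal userspace sketch of the stepping the walk needs (the names,
the process() callback and the hard-coded page size are assumptions,
not the crypto/ahash.c code):

#define PAGE_SIZE 4096u

static void walk_sg_entry(const unsigned char *base, unsigned int len,
			  void (*process)(const unsigned char *, unsigned int))
{
	unsigned int off = 0;

	while (off < len) {
		/* Hand out at most the remainder of the current page, and
		 * advance 'off' so the next chunk comes from the following
		 * page instead of re-reading the first one. */
		unsigned int in_page = PAGE_SIZE - (off & (PAGE_SIZE - 1));
		unsigned int n = len - off < in_page ? len - off : in_page;

		process(base + off, n);
		off += n;
	}
}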

Tested-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-05-31 23:09:22 +10:00
Linus Torvalds
cd208bcc7c Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: padlock - Revert aes-all alias to aes
  crypto: api - Fix algorithm module auto-loading
  crypto: eseqiv - Fix IV generation for sync algorithms
  crypto: ixp4xx - check firmware for crypto support
2009-05-17 15:48:05 -07:00
Herbert Xu
37fc334cc8 crypto: api - Fix algorithm module auto-loading
The commit a760a6656e (crypto:
api - Fix module load deadlock with fallback algorithms) broke
the auto-loading of algorithms that require fallbacks.  The
problem is that the fallback mask check is missing an AND, which
causes bits that should not be considered to interfere with the
result.
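
A hedged, self-contained illustration of the bug class (the flag value
matches linux/crypto.h, but the function and its expression are
schematic, not the exact crypto/api.c check):

#include <stdbool.h>

#define CRYPTO_ALG_NEED_FALLBACK 0x00000100	/* as in linux/crypto.h */

static bool fallback_bit_mismatch(unsigned int alg_flags, unsigned int type,
				  unsigned int mask)
{
	/* Buggy form:  (alg_flags ^ type) & CRYPTO_ALG_NEED_FALLBACK
	 * lets a bit the caller never asked about veto the match.
	 * Fixed form:  restrict the comparison to bits covered by 'mask'. */
	return (alg_flags ^ type) & mask & CRYPTO_ALG_NEED_FALLBACK;
}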

Reported-by: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-04-21 13:27:16 +08:00
Steffen Klassert
abe5fa7899 crypto: eseqiv - Fix IV generation for sync algorithms
If crypto_ablkcipher_encrypt() completes synchronously,
eseqiv_complete2() is called even if req->giv already points to the
generated IV. The generated IV is then overwritten with random data.
This patch fixes this by calling eseqiv_complete2() only if the
generated IV has to be copied to req->giv.

Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-04-15 20:45:03 +08:00
Dan Williams
099f53cb50 async_tx: rename zero_sum to val
'zero_sum' does not properly describe the operation of generating parity
and checking that it validates against an existing buffer.  Change the
name of the operation to 'val' (for 'validate').  This is in
anticipation of the p+q case where it is a requirement to identify the
target parity buffers separately from the source buffers, because the
target parity buffers will not have corresponding pq coefficients.

Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-04-08 14:28:37 -07:00
Dan Williams
fd74ea6588 Merge branch 'dmaengine' into async-tx-raid6 2009-04-08 14:28:13 -07:00
Linus Torvalds
133e2a3164 Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx:
  dma: Add SoF and EoF debugging to ipu_idmac.c, minor cleanup
  dw_dmac: add cyclic API to DW DMA driver
  dmaengine: Add privatecnt to revert DMA_PRIVATE property
  dmatest: add dma interrupts and callbacks
  dmatest: add xor test
  dmaengine: allow dma support for async_tx to be toggled
  async_tx: provide __async_inline for HAS_DMA=n archs
  dmaengine: kill some unused headers
  dmaengine: initialize tx_list in dma_async_tx_descriptor_init
  dma: i.MX31 IPU DMA robustness improvements
  dma: improve section assignment in i.MX31 IPU DMA driver
  dma: ipu_idmac driver cosmetic clean-up
  dmaengine: fail device registration if channel registration fails
2009-04-03 12:13:45 -07:00
Linus Torvalds
c54c4dec61 Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: ixp4xx - Fix handling of chained sg buffers
  crypto: shash - Fix unaligned calculation with short length
  hwrng: timeriomem - Use phys address rather than virt
2009-04-03 09:45:53 -07:00
Linus Torvalds
223cdea4c4 Merge branch 'for-linus' of git://neil.brown.name/md
* 'for-linus' of git://neil.brown.name/md: (53 commits)
  md/raid5 revise rules for when to update metadata during reshape
  md/raid5: minor code cleanups in make_request.
  md: remove CONFIG_MD_RAID_RESHAPE config option.
  md/raid5: be more careful about write ordering when reshaping.
  md: don't display meaningless values in sysfs files resync_start and sync_speed
  md/raid5: allow layout and chunksize to be changed on active array.
  md/raid5: reshape using largest of old and new chunk size
  md/raid5: prepare for allowing reshape to change layout
  md/raid5: prepare for allowing reshape to change chunksize.
  md/raid5: clearly differentiate 'before' and 'after' stripes during reshape.
  Documentation/md.txt update
  md: allow number of drives in raid5 to be reduced
  md/raid5: change reshape-progress measurement to cope with reshaping backwards.
  md: add explicit method to signal the end of a reshape.
  md/raid5: enhance raid5_size to work correctly with negative delta_disks
  md/raid5: drop qd_idx from r6_state
  md/raid6: move raid6 data processing to raid6_pq.ko
  md: raid5 run(): Fix max_degraded for raid level 4.
  md: 'array_size' sysfs attribute
  md: centralize ->array_sectors modifications
  ...
2009-04-03 09:08:19 -07:00
NeilBrown
bff61975b3 md: move lots of #include lines out of .h files and into .c
This makes the includes more explicit, and is preparation for moving
md_k.h to drivers/md/md.h

Remove include/raid/md.h as its only remaining use was to #include
other files.

Signed-off-by: NeilBrown <neilb@suse.de>
2009-03-31 14:33:13 +11:00
Yehuda Sadeh
f4f689933c crypto: shash - Fix unaligned calculation with short length
When the total length is shorter than the calculated number of unaligned
bytes, the call to shash->update breaks. For example, calling crc32c on an
unaligned buffer with a length of 1 can result in a system crash.
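
A hedged, self-contained sketch of the clamp the fix adds (names are
illustrative, not the shash.c source):

static unsigned int unaligned_head(unsigned long addr, unsigned int len,
				   unsigned int alignmask)
{
	unsigned int unaligned_len = alignmask + 1 - (addr & alignmask);

	/* Never process more "unaligned head" bytes than the buffer
	 * actually holds, e.g. crc32c over a single unaligned byte. */
	return unaligned_len > len ? len : unaligned_len;
}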

Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-03-27 13:03:51 +08:00
Linus Torvalds
562f477a54 Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (29 commits)
  crypto: sha512-s390 - Add missing block size
  hwrng: timeriomem - Breaks an allyesconfig build on s390:
  nlattr: Fix build error with NET off
  crypto: testmgr - add zlib test
  crypto: zlib - New zlib crypto module, using pcomp
  crypto: testmgr - Add support for the pcomp interface
  crypto: compress - Add pcomp interface
  netlink: Move netlink attribute parsing support to lib
  crypto: Fix dead links
  hwrng: timeriomem - New driver
  crypto: chainiv - Use kcrypto_wq instead of keventd_wq
  crypto: cryptd - Per-CPU thread implementation based on kcrypto_wq
  crypto: api - Use dedicated workqueue for crypto subsystem
  crypto: testmgr - Test skciphers with no IVs
  crypto: aead - Avoid infinite loop when nivaead fails selftest
  crypto: skcipher - Avoid infinite loop when cipher fails selftest
  crypto: api - Fix crypto_alloc_tfm/create_create_tfm return convention
  crypto: api - crypto_alg_mod_lookup either tested or untested
  crypto: amcc - Add crypt4xx driver
  crypto: ansi_cprng - Add maintainer
  ...
2009-03-26 11:04:34 -07:00
Dan Williams
729b5d1b8e dmaengine: allow dma support for async_tx to be toggled
Provide a config option for blocking the allocation of dma channels to
the async_tx api.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-25 09:13:25 -07:00
Dan Williams
06164f3194 async_tx: provide __async_inline for HAS_DMA=n archs
To allow an async_tx routine to be compiled away on a HAS_DMA=n arch,
it needs to be declared __always_inline; otherwise the compiler may
emit code and cause a link error.
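
A hedged sketch of the helper (the config symbol it keys off is an
assumption based on the commit text; the effect is what matters):

/* include/linux/async_tx.h (sketch) */
#ifdef CONFIG_HAS_DMA
#define __async_inline
#else
#define __async_inline __always_inline
#endif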

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-03-25 09:13:25 -07:00
Geert Uytterhoeven
0c01aed50d crypto: testmgr - add zlib test
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-03-04 15:42:15 +08:00
Geert Uytterhoeven
bf68e65ec9 crypto: zlib - New zlib crypto module, using pcomp
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Cc: James Morris <jmorris@namei.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-03-04 15:16:19 +08:00
Geert Uytterhoeven
8064efb874 crypto: testmgr - Add support for the pcomp interface
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-03-04 15:16:18 +08:00
Geert Uytterhoeven
a1d2f09544 crypto: compress - Add pcomp interface
The current "comp" crypto interface supports one-shot (de)compression only,
i.e. the whole data buffer to be (de)compressed must be passed at once, and
the whole (de)compressed data buffer will be received at once.
In several use-cases (e.g. compressed file systems that store files in big
compressed blocks), this workflow is not suitable.
Furthermore, the "comp" type doesn't provide for the configuration of
(de)compression parameters, and always allocates workspace memory for both
compression and decompression, which may waste memory.

To solve this, add a "pcomp" partial (de)compression interface that provides
the following operations:
  - crypto_compress_{init,update,final}() for compression,
  - crypto_decompress_{init,update,final}() for decompression,
  - crypto_{,de}compress_setup(), to configure (de)compression parameters
    (incl. allocating workspace memory).

The (de)compression methods take a struct comp_request, which was
modeled on the z_stream object in zlib, and contains buffer pointer
and length pairs for input and output.

The setup methods take an opaque parameter pointer and length pair. Parameters
are supposed to be encoded using netlink attributes, whose meanings depend on
the actual (name of the) (de)compression algorithm.
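
A hedged usage sketch of the new interface (the comp_request field
names follow the zlib-like layout described above and are assumptions
here, as are the buffer and parameter variables; "zlib" refers to the
pcomp algorithm added alongside this interface):

static int pcomp_one_shot(void *params, unsigned int params_len,
			  const void *in_buf, unsigned int in_len,
			  void *out_buf, unsigned int out_len)
{
	struct crypto_pcomp *tfm;
	struct comp_request req;
	int err;

	tfm = crypto_alloc_pcomp("zlib", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_compress_setup(tfm, params, params_len);	/* netlink-encoded */
	if (!err)
		err = crypto_compress_init(tfm);

	req.next_in = in_buf;		/* a (partial) input chunk */
	req.avail_in = in_len;
	req.next_out = out_buf;
	req.avail_out = out_len;

	if (!err)
		err = crypto_compress_update(tfm, &req);	/* may be repeated */
	if (!err)
		err = crypto_compress_final(tfm, &req);

	crypto_free_pcomp(tfm);
	return err;
}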

Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-03-04 15:05:33 +08:00
Adrian-Ken Rueegsegger
8c882f6413 crypto: Fix dead links
Signed-off-by: Adrian-Ken Rueegsegger <ken@codelabs.ch>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-03-04 14:43:52 +08:00
Herbert Xu
a760a6656e crypto: api - Fix module load deadlock with fallback algorithms
With the mandatory algorithm testing at registration, we have
now created a deadlock with algorithms requiring fallbacks.
This can happen if the module containing the algorithm requiring the
fallback is loaded before the fallback module itself has been loaded.
The system will then try to test the new algorithm, find
that it needs to load a fallback, and then try to load that.

As both algorithms share the same module alias, it can attempt
to load the original algorithm again and block indefinitely.

As algorithms requiring fallbacks are a special case, we can fix
this by giving them a different module alias than the rest.  Then
it's just a matter of using the right aliases according to what
algorithms we're trying to find.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-02-26 14:06:31 +08:00
Lee Nipper
bb402f16ec crypto: ahash - Fix digest size in /proc/crypto
crypto_ahash_show() is changed to use cra_ahash for the digestsize
reference.

Signed-off-by: Lee Nipper <lee.nipper@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-02-19 14:46:26 +08:00
Huang Ying
0a2e821d62 crypto: chainiv - Use kcrypto_wq instead of keventd_wq
keventd_wq has a potential starvation problem, so use the dedicated
kcrypto_wq instead.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-02-19 14:44:02 +08:00
Huang Ying
254eff7714 crypto: cryptd - Per-CPU thread implementation based on kcrypto_wq
The original cryptd thread implementation has a scalability issue;
this patch solves it with a per-CPU thread implementation.

struct cryptd_queue is defined as a per-CPU queue, holding one
struct cryptd_cpu_queue for each CPU. In struct cryptd_cpu_queue, a
struct crypto_queue holds all requests for the CPU, and a struct
work_struct is used to run them.
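
A hedged sketch of that layout (close to, but not claimed to be
identical with, the cryptd.c definitions):

struct cryptd_cpu_queue {
	struct crypto_queue queue;	/* all requests queued on this CPU */
	struct work_struct work;	/* runs this CPU's requests */
};

struct cryptd_queue {
	struct cryptd_cpu_queue *cpu_queue;	/* allocated with alloc_percpu() */
};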

Testing based on dm-crypt on an Intel Core 2 E6400 (two cores) machine
shows a 19.2% performance gain. The testing script is as follows:

-------------------- script begin ---------------------------
#!/bin/sh

dmc_create()
{
        # Create a crypt device using dmsetup
        dmsetup create $2 --table "0 `blockdev --getsize $1` crypt cbc(aes-asm)?cryptd?plain:plain babebabebabebabebabebabebabebabe 0 $1 0"
}

dmsetup remove crypt0
dmsetup remove crypt1

dd if=/dev/zero of=/dev/ram0 bs=1M count=4 >& /dev/null
dd if=/dev/zero of=/dev/ram1 bs=1M count=4 >& /dev/null

dmc_create /dev/ram0 crypt0
dmc_create /dev/ram1 crypt1

cat >tr.sh <<EOF
#!/bin/sh

for n in \$(seq 10); do
        dd if=/dev/dm-0 of=/dev/null >& /dev/null &
        dd if=/dev/dm-1 of=/dev/null >& /dev/null &
done
wait
EOF

for n in $(seq 10); do
        /usr/bin/time sh tr.sh
done
rm tr.sh
-------------------- script end   ---------------------------

The separator in the dm-crypt parameter is changed from "-" to "?",
because "-" is also used in some cipher driver names, and cryptd needs
the cipher driver name to be specified instead of the cipher name.

The test result on an Intel Core2 E6400 (two cores) is as follows:

without patch:
-----------------wo begin --------------------------
0.04user 0.38system 0:00.39elapsed 107%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6566minor)pagefaults 0swaps
0.07user 0.35system 0:00.35elapsed 121%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6567minor)pagefaults 0swaps
0.06user 0.34system 0:00.30elapsed 135%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6562minor)pagefaults 0swaps
0.05user 0.37system 0:00.36elapsed 119%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6607minor)pagefaults 0swaps
0.06user 0.36system 0:00.35elapsed 120%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6562minor)pagefaults 0swaps
0.05user 0.37system 0:00.31elapsed 136%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6594minor)pagefaults 0swaps
0.04user 0.34system 0:00.30elapsed 126%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6597minor)pagefaults 0swaps
0.06user 0.32system 0:00.31elapsed 125%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6571minor)pagefaults 0swaps
0.06user 0.34system 0:00.31elapsed 134%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6581minor)pagefaults 0swaps
0.05user 0.38system 0:00.31elapsed 138%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6600minor)pagefaults 0swaps
-----------------wo end   --------------------------


with patch:
------------------w begin --------------------------
0.02user 0.31system 0:00.24elapsed 141%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6554minor)pagefaults 0swaps
0.05user 0.34system 0:00.31elapsed 127%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6606minor)pagefaults 0swaps
0.07user 0.33system 0:00.26elapsed 155%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6559minor)pagefaults 0swaps
0.07user 0.32system 0:00.26elapsed 151%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6562minor)pagefaults 0swaps
0.05user 0.34system 0:00.26elapsed 150%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6603minor)pagefaults 0swaps
0.03user 0.36system 0:00.31elapsed 124%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6562minor)pagefaults 0swaps
0.04user 0.35system 0:00.26elapsed 147%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6586minor)pagefaults 0swaps
0.03user 0.37system 0:00.27elapsed 146%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6562minor)pagefaults 0swaps
0.04user 0.36system 0:00.26elapsed 154%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6594minor)pagefaults 0swaps
0.04user 0.35system 0:00.26elapsed 154%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+6557minor)pagefaults 0swaps
------------------w end   --------------------------

The median elapsed time is:
wo cryptwq: 0.31
w  cryptwq: 0.26

The performance gain is about (0.31-0.26)/0.26 = 0.192.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-02-19 14:42:19 +08:00
Huang Ying
25c38d3fb9 crypto: api - Use dedicated workqueue for crypto subsystem
Use dedicated workqueue for crypto subsystem

A dedicated workqueue named kcrypto_wq is created to be used by the
crypto subsystem. The system-shared keventd_wq is not suitable for
encryption/decryption because of a potential starvation problem.
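
A hedged sketch of what the dedicated workqueue amounts to (the symbol
name follows the commit text; the call site and error handling are
assumptions):

struct workqueue_struct *kcrypto_wq;
EXPORT_SYMBOL_GPL(kcrypto_wq);

static int __init crypto_wq_init(void)
{
	kcrypto_wq = create_workqueue("crypto");
	return kcrypto_wq ? 0 : -ENOMEM;
}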

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-02-19 14:33:40 +08:00
Herbert Xu
6fe4a28d88 crypto: testmgr - Test skciphers with no IVs
As it stands, an skcipher with no IV escapes testing altogether because
we only test givcipher objects.  This patch fixes the bypass logic
to test these algorithms.

Conversely, we're currently testing nivaead algorithms with IVs,
which would have deadlocked had it not been for the fact that no
nivaead algorithms have any test vectors.  This patch also fixes
that case.

Both fixes are ugly as hell, but this ugliness should hopefully
disappear once we move them into the per-type code (i.e., the
AEAD test would live in aead.c and the skcipher stuff in ablkcipher.c).

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-02-18 21:41:29 +08:00
Herbert Xu
5852ae4242 crypto: aead - Avoid infinite loop when nivaead fails selftest
When an aead constructed through crypto_nivaead_default fails
its selftest, we'll loop forever trying to construct new aead
objects but failing because it already exists.

The crux of the issue is that once an aead fails the selftest,
we'll ignore it on the next run through crypto_aead_lookup and
attempt to construct a new aead.

We should instead return an error to the caller if we find an
aead that has failed the test.

This bug hasn't manifested itself yet because we don't have any
test vectors for the existing nivaead algorithms.  They're tested
through the underlying algorithms only.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-02-18 21:21:24 +08:00
Herbert Xu
b170a137f4 crypto: skcipher - Avoid infinite loop when cipher fails selftest
When an skcipher constructed through crypto_givcipher_default fails
its selftest, we'll loop forever trying to construct new skcipher
objects but failing because it already exists.

The crux of the issue is that once a givcipher fails the selftest,
we'll ignore it on the next run through crypto_skcipher_lookup and
attempt to construct a new givcipher.

We should instead return an error to the caller if we find a
givcipher that has failed the test.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-02-18 21:20:06 +08:00
Herbert Xu
3f683d6175 crypto: api - Fix crypto_alloc_tfm/create_create_tfm return convention
This is based on a report and patch by Geert Uytterhoeven.

The functions crypto_alloc_tfm and crypto_create_tfm return a
pointer that needs to be adjusted by the caller when successful,
and otherwise an error value.  This means that the caller has
to check for the error and only perform the adjustment if the
pointer returned is valid.

Since all callers want to make the adjustment and we know how
to adjust it ourselves, it's much easier to just return the adjusted
pointer directly.

The only caveat is that we have to return a void * instead of
struct crypto_tfm *.  However, this isn't that bad because both
of these functions are for internal use only (by types code like
shash.c, not even algorithms code).

This patch also moves crypto_alloc_tfm into crypto/internal.h
(crypto_create_tfm is already there) to reflect this.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-02-18 16:56:59 +08:00
Herbert Xu
ff753308d2 crypto: api - crypto_alg_mod_lookup either tested or untested
As it stands, crypto_alg_mod_lookup() will search either tested or
untested algorithms, but never both at the same time.  However,
we need exactly that when constructing givcipher and aead objects,
so this patch adds support for it by setting the tested bit in
type but clearing it in mask.  This combination is currently
unused.
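
Schematically (a hedged illustration of the flag combination, not the
exact call in the givcipher/aead code; the wrapper name is made up):

static struct crypto_alg *lookup_tested_or_not(const char *name,
					       u32 type, u32 mask)
{
	/* Setting CRYPTO_ALG_TESTED in 'type' while clearing it in 'mask'
	 * asks the lookup to match algorithms regardless of whether they
	 * have passed their self-tests yet. */
	return crypto_alg_mod_lookup(name,
				     type | CRYPTO_ALG_TESTED,
				     mask & ~CRYPTO_ALG_TESTED);
}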

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-02-18 16:49:43 +08:00