Commit Graph

378 Commits

Author SHA1 Message Date
Linus Torvalds
0302e28dee Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security
Pull security subsystem updates from James Morris:
 "Highlights:

  IMA:
   - provide ">" and "<" operators for fowner/uid/euid rules

  KEYS:
   - add a system blacklist keyring

   - add KEYCTL_RESTRICT_KEYRING, exposes keyring link restriction
     functionality to userland via keyctl()

  LSM:
   - harden LSM API with __ro_after_init

   - add prlimit security hook, implement for SELinux

   - revive security_task_alloc hook

  TPM:
   - implement contextual TPM command 'spaces'"

* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (98 commits)
  tpm: Fix reference count to main device
  tpm_tis: convert to using locality callbacks
  tpm: fix handling of the TPM 2.0 event logs
  tpm_crb: remove a cruft constant
  keys: select CONFIG_CRYPTO when selecting DH / KDF
  apparmor: Make path_max parameter readonly
  apparmor: fix parameters so that the permission test is bypassed at boot
  apparmor: fix invalid reference to index variable of iterator line 836
  apparmor: use SHASH_DESC_ON_STACK
  security/apparmor/lsm.c: set debug messages
  apparmor: fix boolreturn.cocci warnings
  Smack: Use GFP_KERNEL for smk_netlbl_mls().
  smack: fix double free in smack_parse_opts_str()
  KEYS: add SP800-56A KDF support for DH
  KEYS: Keyring asymmetric key restrict method with chaining
  KEYS: Restrict asymmetric key linkage using a specific keychain
  KEYS: Add a lookup_restriction function for the asymmetric key type
  KEYS: Add KEYCTL_RESTRICT_KEYRING
  KEYS: Consistent ordering for __key_link_begin and restrict check
  KEYS: Add an optional lookup_restriction hook to key_type
  ...
2017-05-03 08:50:52 -07:00
Linus Torvalds
5a0387a8a8 Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
 "Here is the crypto update for 4.12:

  API:
   - Add batch registration for acomp/scomp
   - Change acomp testing to non-unique compressed result
   - Extend algorithm name limit to 128 bytes
   - Require setkey before accept(2) in algif_aead

  Algorithms:
   - Add support for deflate rfc1950 (zlib)

  Drivers:
   - Add accelerated crct10dif for powerpc
   - Add crc32 in stm32
   - Add sha384/sha512 in ccp
   - Add 3des/gcm(aes) for v5 devices in ccp
   - Add Queue Interface (QI) backend support in caam
   - Add new Exynos RNG driver
   - Add ThunderX ZIP driver
   - Add driver for hardware random generator on MT7623 SoC"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (101 commits)
  crypto: stm32 - Fix OF module alias information
  crypto: algif_aead - Require setkey before accept(2)
  crypto: scomp - add support for deflate rfc1950 (zlib)
  crypto: scomp - allow registration of multiple scomps
  crypto: ccp - Change ISR handler method for a v5 CCP
  crypto: ccp - Change ISR handler method for a v3 CCP
  crypto: crypto4xx - rename ce_ring_contol to ce_ring_control
  crypto: testmgr - Allow ecb(cipher_null) in FIPS mode
  Revert "crypto: arm64/sha - Add constant operand modifier to ASM_EXPORT"
  crypto: ccp - Disable interrupts early on unload
  crypto: ccp - Use only the relevant interrupt bits
  hwrng: mtk - Add driver for hardware random generator on MT7623 SoC
  dt-bindings: hwrng: Add Mediatek hardware random generator bindings
  crypto: crct10dif-vpmsum - Fix missing preempt_disable()
  crypto: testmgr - replace compression known answer test
  crypto: acomp - allow registration of multiple acomps
  hwrng: n2 - Use devm_kcalloc() in n2rng_probe()
  crypto: chcr - Fix error handling related to 'chcr_alloc_shash'
  padata: get_next is never NULL
  crypto: exynos - Add new Exynos RNG driver
  ...
2017-05-02 15:53:46 -07:00
Giovanni Cabiddu
3de4f5e1a5 crypto: scomp - allow registration of multiple scomps
Add crypto_register_scomps and crypto_unregister_scomps to allow
the registration of multiple implementations with one call.
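
As an illustration (not part of the patch), a driver carrying several scomp
algorithms could use the new helpers roughly as below; the algorithm entries
are driver-specific and elided, the module names are hypothetical, and the
same pattern applies to the acomp variants further down.

	/* Fragment: register/unregister a whole array of scomp algorithms
	 * with a single call each. */
	#include <linux/module.h>
	#include <crypto/internal/scompress.h>

	static struct scomp_alg my_scomp_algs[] = {
		/* ... the driver's struct scomp_alg definitions ... */
	};

	static int __init my_driver_init(void)
	{
		return crypto_register_scomps(my_scomp_algs,
					      ARRAY_SIZE(my_scomp_algs));
	}

	static void __exit my_driver_exit(void)
	{
		crypto_unregister_scomps(my_scomp_algs,
					 ARRAY_SIZE(my_scomp_algs));
	}

	module_init(my_driver_init);
	module_exit(my_driver_exit);
	MODULE_LICENSE("GPL");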

Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-04-24 18:11:07 +08:00
Giovanni Cabiddu
3ce5bc72eb crypto: acomp - allow registration of multiple acomps
Add crypto_register_acomps and crypto_unregister_acomps to allow
the registration of multiple implementations with one call.

Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-04-21 20:30:50 +08:00
Linus Torvalds
5ee4c5a929 Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:
 "This fixes the following problems:

   - regression in new XTS/LRW code when used with async crypto

   - long-standing bug in ahash API when used with certain algos

   - bogus memory dereference in async algif_aead with certain algos"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: algif_aead - Fix bogus request dereference in completion function
  crypto: ahash - Fix EINPROGRESS notification callback
  crypto: lrw - Fix use-after-free on EINPROGRESS
  crypto: xts - Fix use-after-free on EINPROGRESS
2017-04-18 09:03:50 -07:00
Herbert Xu
ef0579b64e crypto: ahash - Fix EINPROGRESS notification callback
The ahash API modifies the request's callback function in order
to clean up after itself in some corner cases (unaligned final
and missing finup).

When the request is complete ahash will restore the original
callback and everything is fine.  However, when the request gets
an EBUSY on a full queue, an EINPROGRESS callback is made while
the request is still ongoing.

In this case the ahash API will incorrectly call its own callback.

This patch fixes the problem by creating a temporary request
object on the stack which is used to relay EINPROGRESS back to
the original completion function.

This patch also adds code to preserve the original flags value.
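
For context, a caller-side sketch (hypothetical names, not from this patch)
of the completion handler that relies on this notification: -EINPROGRESS
only means the backlogged request is now in flight, not that the hash has
finished.

	#include <linux/completion.h>
	#include <linux/crypto.h>

	struct my_hash_result {
		struct completion completion;
		int err;
	};

	static void my_hash_complete(struct crypto_async_request *req, int err)
	{
		struct my_hash_result *res = req->data;

		if (err == -EINPROGRESS)	/* backlog notification only */
			return;

		res->err = err;
		complete(&res->completion);
	}

Such a callback is installed with ahash_request_set_callback() using
CRYPTO_TFM_REQ_MAY_BACKLOG, and the caller waits on the completion when
crypto_ahash_digest() returns -EINPROGRESS or -EBUSY.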

Fixes: ab6bf4e5e5 ("crypto: hash - Fix the pointer voodoo in...")
Cc: <stable@vger.kernel.org>
Reported-by: Sabrina Dubroca <sd@queasysnail.net>
Tested-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-04-10 19:09:18 +08:00
Ondrej Mosnáček
e55318c84f crypto: gf128mul - switch gf128mul_x_ble to le128
Currently, gf128mul_x_ble works with pointers to be128, even though it
actually interprets the words as little-endian. Consequently, it uses
cpu_to_le64/le64_to_cpu on fields of type __be64, which is incorrect.

This patch fixes that by changing the function to accept pointers to
le128 and updating all users accordingly.
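
For readers unfamiliar with the convention, a small user-space model of the
multiply-by-x operation under the little-endian ("ble") interpretation is
shown below. This sketches only the math; the kernel helper works on le128
fields via le64_to_cpu/cpu_to_le64 rather than plain integers.

	#include <stdint.h>
	#include <stdio.h>

	/* lo holds bits 0..63, hi holds bits 64..127; reduction uses
	 * x^128 = x^7 + x^2 + x + 1 (0x87), branch-free. */
	static void gf128_mul_x_ble_model(uint64_t *hi, uint64_t *lo)
	{
		uint64_t carry = *hi >> 63;

		*hi = (*hi << 1) | (*lo >> 63);
		*lo = (*lo << 1) ^ (carry * 0x87);
	}

	int main(void)
	{
		uint64_t hi = 0x8000000000000000ULL, lo = 1;

		gf128_mul_x_ble_model(&hi, &lo);
		printf("hi=%016llx lo=%016llx\n",
		       (unsigned long long)hi, (unsigned long long)lo);
		return 0;
	}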

Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-04-05 21:58:37 +08:00
Ondrej Mosnáček
acb9b159c7 crypto: gf128mul - define gf128mul_x_* in gf128mul.h
The gf128mul_x_ble function is currently defined in gf128mul.c, because
it depends on the gf128mul_table_be multiplication table.

However, since the function is very small and only uses two values from
the table, it is better for it to be defined as an inline function in
gf128mul.h. That way, the function can be inlined by the compiler for
better performance.

For consistency, the other gf128mul_x_* functions are also moved to the
header file. In addition, the code is rewritten to be constant-time.

After this change, the speed of the generic 'xts(aes)' implementation
increased from ~225 MiB/s to ~235 MiB/s (measured using 'cryptsetup
benchmark -c aes-xts-plain64' on an Intel system with CRYPTO_AES_X86_64
and CRYPTO_AES_NI_INTEL disabled).

Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-04-05 21:58:35 +08:00
Mat Martineau
8e323a02e8 KEYS: Keyring asymmetric key restrict method with chaining
Add a restrict_link_by_key_or_keyring_chain link restriction that
searches for signing keys in the destination keyring in addition to the
signing key or keyring designated when the destination keyring was
created. Userspace enables this behavior by including the "chain" option
in the keyring restriction:

  keyctl(KEYCTL_RESTRICT_KEYRING, keyring, "asymmetric",
         "key_or_keyring:<signing key>:chain");

Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
2017-04-04 14:10:13 -07:00
Mat Martineau
7e3c4d2208 KEYS: Restrict asymmetric key linkage using a specific keychain
Adds restrict_link_by_signature_keyring(), which uses the restrict_key
member of the provided destination_keyring data structure as the
key or keyring to search for signing keys.

Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
2017-04-04 14:10:13 -07:00
Mat Martineau
aaf66c8838 KEYS: Split role of the keyring pointer for keyring restrict functions
The first argument to the restrict_link_func_t functions was a keyring
pointer. These functions are called by the key subsystem with this
argument set to the destination keyring, but restrict_link_by_signature
expects a pointer to the relevant trusted keyring.

Restrict functions may need something other than a single struct key
pointer to allow or reject key linkage, so the data used to make that
decision (such as the trust keyring) is moved to a new, fourth
argument. The first argument is now always the destination keyring.
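
In prototype form the change amounts to something like the following; the
exact parameter names in the tree may differ.

	#include <linux/key.h>
	#include <linux/key-type.h>

	/* The destination keyring now comes first, and the data needed for
	 * the decision (here the trusted keyring) arrives as a new fourth
	 * argument. */
	int restrict_link_by_signature(struct key *dest_keyring,
				       const struct key_type *type,
				       const union key_payload *payload,
				       struct key *trust_keyring);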

Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
2017-04-03 10:24:56 -07:00
Herbert Xu
2e6d603e51 Merge git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux
Merging 4.11-rc3 to pick up md5 removal from /dev/random.
2017-03-24 21:58:58 +08:00
David Howells
cdfbabfb2f net: Work around lockdep limitation in sockets that use sockets
Lockdep issues a circular dependency warning when AFS issues an operation
through AF_RXRPC from a context in which the VFS/VM holds the mmap_sem.

The theory lockdep comes up with is as follows:

 (1) If the pagefault handler decides it needs to read pages from AFS, it
     calls AFS with mmap_sem held and AFS begins an AF_RXRPC call, but
     creating a call requires the socket lock:

	mmap_sem must be taken before sk_lock-AF_RXRPC

 (2) afs_open_socket() opens an AF_RXRPC socket and binds it.  rxrpc_bind()
     binds the underlying UDP socket whilst holding its socket lock.
     inet_bind() takes its own socket lock:

	sk_lock-AF_RXRPC must be taken before sk_lock-AF_INET

 (3) Reading from a TCP socket into a userspace buffer might cause a fault
     and thus cause the kernel to take the mmap_sem, but the TCP socket is
     locked whilst doing this:

	sk_lock-AF_INET must be taken before mmap_sem

However, lockdep's theory is wrong in this instance because it deals only
with lock classes and not individual locks.  The AF_INET lock in (2) isn't
really equivalent to the AF_INET lock in (3) as the former deals with a
socket entirely internal to the kernel that never sees userspace.  This is
a limitation in the design of lockdep.

Fix the general case by:

 (1) Double up all the locking keys used in sockets so that one set is
     used if the socket is created by userspace and the other set is used
     if the socket is created by the kernel.

 (2) Store the kern parameter passed to sk_alloc() in a variable in the
     sock struct (sk_kern_sock).  This informs sock_lock_init(),
     sock_init_data() and sk_clone_lock() as to the lock keys to be used.

     Note that the child created by sk_clone_lock() inherits the parent's
     kern setting.

 (3) Add a 'kern' parameter to ->accept() that is analogous to the one
     passed in to ->create() that distinguishes whether kernel_accept() or
     sys_accept4() was the caller and can be passed to sk_alloc().

     Note that a lot of accept functions merely dequeue an already
     allocated socket.  I haven't touched these as the new socket already
     exists before we get the parameter.

     Note also that there are a couple of places where I've made the accepted
     socket unconditionally kernel-based:

	irda_accept()
	rds_tcp_accept_one()
	tcp_accept_from_sock()

     because they follow a sock_create_kern() and accept off of that.

Whilst creating this, I noticed that lustre and ocfs don't create sockets
through sock_create_kern() and thus they aren't marked as for-kernel,
though they appear to be internal.  I wonder if these should do that so
that they use the new set of lock keys.
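
A minimal sketch of what change (3) looks like from a protocol's point of
view; the function is hypothetical and only illustrates how the new flag
would be threaded through.

	#include <linux/errno.h>
	#include <linux/net.h>
	#include <net/sock.h>

	/* ->accept() now receives a 'kern' flag so that, where a new
	 * struct sock is allocated, the kernel/userspace lock keys can be
	 * chosen just as sk_alloc() does for ->create(). */
	static int example_accept(struct socket *sock, struct socket *newsock,
				  int flags, bool kern)
	{
		/* ... dequeue or create the child socket, passing 'kern'
		 * wherever a new struct sock is allocated ... */
		return -EOPNOTSUPP;
	}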

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-03-09 18:23:27 -08:00
Eric Biggers
5527dfb6dd crypto: kpp - constify buffer passed to crypto_kpp_set_secret()
Constify the buffer passed to crypto_kpp_set_secret() and
kpp_alg.set_secret, since it is never modified.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-03-09 18:34:27 +08:00
Eric Biggers
3ea996ddfb crypto: gf128mul - constify 4k and 64k multiplication tables
Constify the multiplication tables passed to the 4k and 64k
multiplication functions, as they are not modified by these functions.

Cc: Alex Cope <alexcope@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-03-09 18:34:16 +08:00
Eric Biggers
63be5b53b6 crypto: gf128mul - fix some comments
Fix incorrect references to GF(128) instead of GF(2^128), as these are
two entirely different fields, and fix a few other incorrect comments.

Cc: Alex Cope <alexcope@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-03-09 18:34:14 +08:00
Herbert Xu
016df0abc5 crypto: api - Add crypto_requires_off helper
This patch adds crypto_requires_off which is an extension of
crypto_requires_sync for similar bits such as NEED_FALLBACK.
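
A plausible sketch of the helper, with crypto_requires_sync expressed in
terms of it; the in-tree definition may differ in detail.

	#include <linux/crypto.h>

	/* "Must the instance have the 'off' bit clear (e.g. be synchronous,
	 * or not need a fallback) given the requested type/mask?" */
	static inline u32 crypto_requires_off(u32 type, u32 mask, u32 off)
	{
		return (type ^ off) & mask & off;
	}

	static inline u32 crypto_requires_sync(u32 type, u32 mask)
	{
		return crypto_requires_off(type, mask, CRYPTO_ALG_ASYNC);
	}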

Cc: stable@vger.kernel.org #4.10
Suggested-by: Marcelo Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-02-27 18:09:39 +08:00
Ard Biesheuvel
db91af0fbe crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic
Instead of unconditionally forcing 4 byte alignment for all generic
chaining modes that rely on crypto_xor() or crypto_inc() (which may
result in unnecessary copying of data when the underlying hardware
can perform unaligned accesses efficiently), make those functions
deal with unaligned input explicitly, but only if the Kconfig symbol
HAVE_EFFICIENT_UNALIGNED_ACCESS is set. This will allow us to drop
the alignmasks from the CBC, CMAC, CTR, CTS, PCBC and SEQIV drivers.

For crypto_inc(), this simply involves making the 4-byte stride
conditional on HAVE_EFFICIENT_UNALIGNED_ACCESS being set, given that
it typically operates on 16 byte buffers.

For crypto_xor(), an algorithm is implemented that simply runs through
the input using the largest strides possible if unaligned accesses are
allowed. If they are not, an optimal sequence of memory accesses is
emitted that takes the relative alignment of the input buffers into
account, e.g., if the relative misalignment of dst and src is 4 bytes,
the entire xor operation will be completed using 4 byte loads and stores
(modulo unaligned bits at the start and end). Note that all expressions
involving misalign are simply eliminated by the compiler when
HAVE_EFFICIENT_UNALIGNED_ACCESS is defined.
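
A simplified user-space model of the word-at-a-time idea; it ignores
HAVE_EFFICIENT_UNALIGNED_ACCESS and the relative-misalignment handling
described above.

	#include <stddef.h>
	#include <stdint.h>
	#include <string.h>

	static void xor_buf(uint8_t *dst, const uint8_t *src, size_t len)
	{
		/* Word strides are only safe here when both buffers share
		 * the same alignment relative to the word size. */
		if (((uintptr_t)dst % sizeof(unsigned long)) ==
		    ((uintptr_t)src % sizeof(unsigned long))) {
			/* Handle leading bytes until dst is word aligned. */
			while (len && ((uintptr_t)dst % sizeof(unsigned long))) {
				*dst++ ^= *src++;
				len--;
			}
			/* XOR full words; memcpy sidesteps strict aliasing. */
			while (len >= sizeof(unsigned long)) {
				unsigned long a, b;

				memcpy(&a, dst, sizeof(a));
				memcpy(&b, src, sizeof(b));
				a ^= b;
				memcpy(dst, &a, sizeof(a));
				dst += sizeof(unsigned long);
				src += sizeof(unsigned long);
				len -= sizeof(unsigned long);
			}
		}
		/* Remaining bytes (or everything, if alignments differ). */
		while (len--)
			*dst++ ^= *src++;
	}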

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-02-11 17:52:28 +08:00
Rabin Vincent
379d972b81 crypto: doc - Fix hash export state information
The documentation states that crypto_ahash_reqsize() provides the size
of the state structure used by crypto_ahash_export().  But it's actually
crypto_ahash_statesize() which provides this size.
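
A hedged sketch of the corrected pattern; the helper name is illustrative.

	#include <crypto/hash.h>
	#include <linux/slab.h>

	static int save_and_resume(struct ahash_request *req,
				   struct ahash_request *req2)
	{
		struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
		void *state;
		int err;

		/* The export buffer must be crypto_ahash_statesize() bytes,
		 * not crypto_ahash_reqsize() bytes. */
		state = kmalloc(crypto_ahash_statesize(tfm), GFP_KERNEL);
		if (!state)
			return -ENOMEM;

		err = crypto_ahash_export(req, state);
		if (!err)
			err = crypto_ahash_import(req2, state);

		kfree(state);
		return err;
	}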

Signed-off-by: Rabin Vincent <rabinv@axis.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-02-03 18:16:11 +08:00
Ard Biesheuvel
c821f6ab2e crypto: skcipher - introduce walksize attribute for SIMD algos
In some cases, SIMD algorithms can only perform optimally when
allowed to operate on multiple input blocks in parallel. This is
especially true for bit slicing algorithms, which typically take
the same amount of time processing a single block or 8 blocks in
parallel. However, other SIMD algorithms may benefit as well from
bigger strides.

So add a walksize attribute to the skcipher algorithm definition, and
wire it up to the skcipher walk API. To avoid confusion between the
skcipher and AEAD attributes, rename the skcipher_walk chunksize
attribute to 'stride', and set it from the walksize (in the skcipher
case) or from the chunksize (in the AEAD case).
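
A fragment showing where the new attribute sits in a definition; the values
and callback names are illustrative and assumed to be provided elsewhere by
a driver.

	#include <crypto/aes.h>
	#include <crypto/internal/skcipher.h>

	static struct skcipher_alg bs_ctr_alg = {
		.base.cra_name		= "ctr(aes)",
		.base.cra_driver_name	= "ctr-aes-bitsliced-example",
		.base.cra_blocksize	= 1,
		.min_keysize		= AES_MIN_KEY_SIZE,
		.max_keysize		= AES_MAX_KEY_SIZE,
		.ivsize			= AES_BLOCK_SIZE,
		.chunksize		= AES_BLOCK_SIZE,
		.walksize		= 8 * AES_BLOCK_SIZE,	/* new attribute */
		.setkey			= example_setkey,
		.encrypt		= example_ctr_crypt,
		.decrypt		= example_ctr_crypt,
	};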

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-12-30 19:52:47 +08:00
Ard Biesheuvel
9ae433bc79 crypto: chacha20 - convert generic and x86 versions to skcipher
This converts the ChaCha20 code from a blkcipher to a skcipher, which
is now the preferred way to implement symmetric block and stream ciphers.

This ports the generic and x86 versions at the same time because the
latter reuses routines of the former.

Note that the skcipher_walk() API guarantees that all presented blocks
except the final one are a multiple of the chunk size, so we can simplify
the encrypt() routine somewhat.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-12-27 17:47:31 +08:00
Linus Torvalds
0aaf2146ec This pull contains one set of changes: a conversion of the crypto DocBook
to Sphinx.

Merge tag 'docs-4.10-2' of git://git.lwn.net/linux

Pull more documentation updates from Jonathan Corbet:
 "This converts the crypto DocBook to Sphinx"

* tag 'docs-4.10-2' of git://git.lwn.net/linux:
  crypto: doc - optimize compilation
  crypto: doc - clarify AEAD memory structure
  crypto: doc - remove crypto_alloc_ablkcipher
  crypto: doc - add KPP documentation
  crypto: doc - fix separation of cipher / req API
  crypto: doc - fix source comments for Sphinx
  crypto: doc - remove crypto API DocBook
  crypto: doc - convert crypto API documentation to Sphinx
2016-12-17 16:00:34 -08:00
Stephan Mueller
3f692d5f97 crypto: doc - clarify AEAD memory structure
The previous description was misleading and partially incorrect.
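
For reference, the layout being clarified can be summarised with the request
setup helpers; a sketch, with tag sizing and buffer allocation elided.

	#include <crypto/aead.h>
	#include <linux/scatterlist.h>

	/* Encryption view of the scatterlists:
	 *   src: | AD (assoclen) | plaintext  (cryptlen)            |
	 *   dst: | AD (assoclen) | ciphertext (cryptlen) | auth tag |
	 * For decryption, cryptlen additionally covers the tag. */
	static void setup_aead_encrypt(struct aead_request *req,
				       struct scatterlist *src,
				       struct scatterlist *dst,
				       unsigned int assoclen,
				       unsigned int cryptlen, u8 *iv)
	{
		aead_request_set_ad(req, assoclen);
		aead_request_set_crypt(req, src, dst, cryptlen, iv);
		/* followed by crypto_aead_encrypt(req) */
	}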

Reported-by: Harsh Jain <harshjain.prof@gmail.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
2016-12-13 16:38:06 -07:00
Stephan Mueller
8d23da22ac crypto: doc - add KPP documentation
Add the KPP API documentation to the kernel crypto API Sphinx
documentation. This addition includes the documentation of the
ECDH and DH helpers which are needed to create the appropriate input
data for the crypto_kpp_set_secret function.
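
A hedged sketch of the flow those helpers enable; error paths are trimmed
and parameter contents elided.

	#include <crypto/dh.h>
	#include <crypto/kpp.h>
	#include <linux/slab.h>

	static int set_dh_secret(struct crypto_kpp *tfm, struct dh *params)
	{
		unsigned int len = crypto_dh_key_len(params);
		char *buf;
		int err;

		buf = kmalloc(len, GFP_KERNEL);
		if (!buf)
			return -ENOMEM;

		/* Pack the DH parameters into the format set_secret expects. */
		err = crypto_dh_encode_key(buf, len, params);
		if (!err)
			err = crypto_kpp_set_secret(tfm, buf, len);

		kzfree(buf);	/* secret material: zero before freeing */
		return err;
	}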

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
2016-12-13 16:38:06 -07:00
Stephan Mueller
0184cfe72d crypto: doc - fix source comments for Sphinx
Update comments to avoid any complaints from Sphinx during compilation.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
2016-12-13 16:38:05 -07:00
Herbert Xu
34bc085c83 crypto: skcipher - Add separate walker for AEAD decryption
The AEAD decrypt interface includes the authentication tag in
req->cryptlen.  Therefore we need to exclude that when doing
a walk over it.

This patch adds separate walker functions for AEAD encryption
and decryption.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
2016-12-01 21:06:17 +08:00
Herbert Xu
479d014de5 Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Merge the crypto tree to pull in chelsio chcr fix.
2016-11-30 19:53:12 +08:00
Stephan Mueller
5102981212 crypto: drbg - prevent invalid SG mappings
When using SGs, only heap memory (memory that is valid as per
virt_addr_valid) is allowed to be referenced. The CTR DRBG used to
reference the caller-provided memory directly in an SG. In case the
caller provided stack memory pointers, the SG mapping is not considered
to be valid. In some cases, this would even cause a page fault.

The change adds a new scratch buffer that is used unconditionally to
catch the cases where the caller-provided buffer is not suitable for
use in an SG. The crypto operation of the CTR DRBG produces its output
with that scratch buffer and finally copies the content of the
scratch buffer to the caller's buffer.

The scratch buffer is allocated during allocation time of the CTR DRBG
as its access is protected with the DRBG mutex.
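
The constraint can be illustrated with a small sketch (not the DRBG code
itself): output is produced into a kmalloc'ed buffer that is safe to map
into an SG, then copied to the caller.

	#include <linux/scatterlist.h>
	#include <linux/slab.h>
	#include <linux/string.h>

	static int produce_output(u8 *caller_buf, unsigned int len)
	{
		struct scatterlist sg;
		u8 *scratch;

		/* caller_buf may live on the stack, so never put it in an SG;
		 * the scratch buffer satisfies virt_addr_valid(). */
		scratch = kmalloc(len, GFP_KERNEL);
		if (!scratch)
			return -ENOMEM;

		sg_init_one(&sg, scratch, len);
		/* ... run the crypto operation with &sg as the output ... */

		memcpy(caller_buf, scratch, len);
		kzfree(scratch);
		return 0;
	}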

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-30 19:46:44 +08:00
Herbert Xu
cc868d82ab crypto: cbc - Export CBC implementation
This patch moves the core CBC implementation into a header file
so that it can be reused by drivers implementing CBC.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-28 21:23:21 +08:00
Herbert Xu
266d051601 crypto: simd - Add simd skcipher helper
This patch adds the simd skcipher helper which is meant to be
a replacement for ablk helper.  It replaces the underlying blkcipher
interface with skcipher, and also presents the top-level algorithm
as an skcipher.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-28 21:23:18 +08:00
Herbert Xu
4e0958d19b crypto: cryptd - Add support for skcipher
This patch adds skcipher support to cryptd alongside ablkcipher.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-28 21:23:18 +08:00
Herbert Xu
f1c131b454 crypto: xts - Convert to skcipher
This patch converts xts over to the skcipher interface.  It also
optimises the implementation to be based on ECB instead of the
underlying cipher.  For compatibility the existing naming scheme
of xts(aes) is maintained as opposed to the more obvious one of
xts(ecb(aes)).

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-28 21:23:18 +08:00
Herbert Xu
b286d8b1a6 crypto: skcipher - Add skcipher walk interface
This patch adds the skcipher walk interface which replaces both
blkcipher walk and ablkcipher walk.  Just like blkcipher walk it
can also be used for AEAD algorithms.
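
A hedged sketch of how a cipher's encrypt path drives the walker; the
per-chunk processing is elided.

	#include <crypto/internal/skcipher.h>

	static int encrypt_with_walk(struct skcipher_request *req)
	{
		struct skcipher_walk walk;
		int err;

		err = skcipher_walk_virt(&walk, req, false);

		while (walk.nbytes) {
			/* ... transform walk.nbytes bytes from
			 *     walk.src.virt.addr into walk.dst.virt.addr ... */

			/* 0: everything in this chunk was processed */
			err = skcipher_walk_done(&walk, 0);
		}

		return err;
	}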

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-28 21:23:17 +08:00
Alex Cope
75aa0a7caf crypto: gf128mul - Zero memory when freeing multiplication table
GF(2^128) multiplication tables are typically used for secret
information, so it's a good idea to zero them on free.

Signed-off-by: Alex Cope <alexcope@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-17 23:34:59 +08:00
Alex Cope
d266f44b5b crypto: gf128mul - remove dead gf128mul_64k_lle code
This code is unlikely to be useful in the future because transforms
don't know how often keys will be changed, new algorithms are unlikely
to use lle representation, and tables should be replaced with
carryless multiplication instructions when available.

Signed-off-by: Alex Cope <alexcope@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-13 17:45:06 +08:00
Eric Biggers
60425a8bad crypto: skcipher - Get rid of crypto_spawn_skcipher2()
Since commit 3a01d0ee2b ("crypto: skcipher - Remove top-level
givcipher interface"), crypto_spawn_skcipher2() and
crypto_spawn_skcipher() are equivalent.  So switch callers of
crypto_spawn_skcipher2() to crypto_spawn_skcipher() and remove it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-01 08:37:17 +08:00
Eric Biggers
a35528eca0 crypto: skcipher - Get rid of crypto_grab_skcipher2()
Since commit 3a01d0ee2b ("crypto: skcipher - Remove top-level
givcipher interface"), crypto_grab_skcipher2() and
crypto_grab_skcipher() are equivalent.  So switch callers of
crypto_grab_skcipher2() to crypto_grab_skcipher() and remove it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-01 08:37:16 +08:00
Giovanni Cabiddu
1ab53a77b7 crypto: acomp - add driver-side scomp interface
Add a synchronous back-end (scomp) to acomp. This allows the compression
algorithms already present in LKCF to be easily exposed via acomp.

Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-10-25 11:08:31 +08:00
Giovanni Cabiddu
2ebda74fd6 crypto: acomp - add asynchronous compression api
Add acomp, an asynchronous compression api that uses scatterlist
buffers.
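
A hedged usage sketch; synchronous completion handling and error paths are
trimmed, and the algorithm name is only an example.

	#include <crypto/acompress.h>
	#include <linux/err.h>
	#include <linux/scatterlist.h>

	static int compress_once(struct scatterlist *src, unsigned int slen,
				 struct scatterlist *dst, unsigned int dlen)
	{
		struct crypto_acomp *tfm;
		struct acomp_req *req;
		int err;

		tfm = crypto_alloc_acomp("deflate", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		req = acomp_request_alloc(tfm);
		if (!req) {
			crypto_free_acomp(tfm);
			return -ENOMEM;
		}

		acomp_request_set_params(req, src, dst, slen, dlen);

		/* May return -EINPROGRESS; a real user sets a callback and
		 * waits for completion before freeing the request. */
		err = crypto_acomp_compress(req);

		acomp_request_free(req);
		crypto_free_acomp(tfm);
		return err;
	}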

Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-10-25 11:08:30 +08:00
Petr Mladek
c4ca2b0b25 crypto: engine - Handle the kthread worker using the new API
Use the new API to create and destroy the crypto engine kthread
worker. The API hides some implementation details.

In particular, kthread_create_worker() allocates and initializes
struct kthread_worker. It runs the kthread the right way
and stores task_struct into the worker structure.

kthread_destroy_worker() flushes all pending works, stops
the kthread and frees the structure.

This patch does not change the existing behavior except for dynamically
allocating struct kthread_worker and storing only a pointer to this
structure.

It is compile-tested only because I did not find an easy way to run the
code. It should be pretty safe given the nature of the change.
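
For reference, a minimal sketch of the worker API in isolation; the work
function and names are illustrative.

	#include <linux/err.h>
	#include <linux/kthread.h>

	static void my_work_fn(struct kthread_work *work)
	{
		/* process one queued item */
	}

	static int demo(void)
	{
		struct kthread_worker *worker;
		struct kthread_work work;

		/* allocates, initializes and starts the kthread */
		worker = kthread_create_worker(0, "my_worker");
		if (IS_ERR(worker))
			return PTR_ERR(worker);

		kthread_init_work(&work, my_work_fn);
		kthread_queue_work(worker, &work);
		kthread_flush_work(&work);

		/* flushes remaining work, stops the kthread, frees the worker */
		kthread_destroy_worker(worker);
		return 0;
	}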

Signed-off-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-10-25 11:08:25 +08:00
Eric Biggers
afb5a0a947 crypto: skcipher - Remove unused crypto_lookup_skcipher() declaration
The definition of crypto_lookup_skcipher() was already removed in
commit 3a01d0ee2b ("crypto: skcipher - Remove top-level givcipher
interface").  So the declaration should be removed too.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-10-21 11:03:41 +08:00
Herbert Xu
c3afafa478 Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Merge the crypto tree to pull in vmx ghash fix.
2016-10-10 11:19:47 +08:00
Marcelo Cerri
a397ba829d crypto: ghash-generic - move common definitions to a new header file
Move common values and types used by ghash-generic to a new header file
so drivers can directly use ghash-generic as a fallback implementation.

Fixes: cc333cd68d ("crypto: vmx - Adding GHASH routines for VMX module")
Cc: stable@vger.kernel.org
Signed-off-by: Marcelo Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-10-02 22:26:40 +08:00
Corentin LABBE
4cba7cf025 crypto: engine - permit to enqueue ashash_request
The current crypto engine allows only ablkcipher_request to be enqueued,
which rules it out for hardware that also handles hash algorithms.

This patch modifies the API to allow both ciphers and hashes to be
enqueued.

Since omap-aes/omap-des are the only users, this patch also converts them
to the new crypto engine API.

Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-09-07 21:08:27 +08:00
Corentin LABBE
2589ad8404 crypto: engine - move crypto engine to its own header
This patch moves the whole crypto engine API to its own header
crypto/engine.h.

Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-09-07 21:08:26 +08:00
Linus Torvalds
818e607b57 A number of improvements for the /dev/random driver; the most
important is the use of a ChaCha20-based CRNG for /dev/urandom, which
 is faster, more efficient, and easier to make scalable for
 silly/abusive userspace programs that want to read from /dev/urandom
 in a tight loop on NUMA systems.
 
 This set of patches also improves entropy gathering on VM's running on
 Microsoft Azure, and will take advantage of a hw random number
 generator (if present) to initialize the /dev/urandom pool.

Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random

Pull random driver updates from Ted Ts'o:
 "A number of improvements for the /dev/random driver; the most
  important is the use of a ChaCha20-based CRNG for /dev/urandom, which
  is faster, more efficient, and easier to make scalable for
  silly/abusive userspace programs that want to read from /dev/urandom
  in a tight loop on NUMA systems.

  This set of patches also improves entropy gathering on VM's running on
  Microsoft Azure, and will take advantage of a hw random number
  generator (if present) to initialize the /dev/urandom pool"

(It turns out that the random tree hadn't been in linux-next this time
around, because it had been dropped earlier as being too quiet.  Oh
well).

* tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
  random: strengthen input validation for RNDADDTOENTCNT
  random: add backtracking protection to the CRNG
  random: make /dev/urandom scalable for silly userspace programs
  random: replace non-blocking pool with a Chacha20-based CRNG
  random: properly align get_random_int_hash
  random: add interrupt callback to VMBus IRQ handler
  random: print a warning for the first ten uninitialized random users
  random: initialize the non-blocking pool via add_hwgenerator_randomness()
2016-07-27 15:11:55 -07:00
Herbert Xu
5c562338de crypto: skcipher - Add comment for skcipher_alg->base
This patch adds a missing comment for the base parameter in struct
skcipher_alg.

Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-19 15:42:42 +08:00
Herbert Xu
ac02725812 crypto: scatterwalk - Inline start/map/done
This patch inlines the functions scatterwalk_start, scatterwalk_map
and scatterwalk_done as they're all tiny and mostly used by the block
cipher walker.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-18 17:35:50 +08:00
Herbert Xu
4140139734 crypto: api - Optimise away crypto_yield when hard preemption is on
When hard preemption is enabled there is no need to explicitly
call crypto_yield.  This patch eliminates it if that is the case.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-18 17:35:49 +08:00
Herbert Xu
5506f53c7c crypto: scatterwalk - Remove scatterwalk_bytes_sglen
This patch removes the now unused scatterwalk_bytes_sglen.  Anyone
using this out-of-tree should switch over to sg_nents_for_len.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-18 17:35:48 +08:00