Commit Graph

2166 Commits

Eric Biggers
8b55107c57 crypto: algapi - remove unused notifications
There is a message posted to the crypto notifier chain when an algorithm
is unregistered, and when a template is registered or unregistered.  But
nothing is listening for those messages; currently there are only
listeners for the algorithm request and registration messages.

Get rid of these unused notifications for now.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-05 18:43:10 +11:00
Eric Biggers
ce8614a312 crypto: algapi - convert cra_refcnt to refcount_t
Reference counters should use refcount_t rather than atomic_t, since the
refcount_t implementation can prevent overflows, reducing the
exploitability of reference leak bugs.  crypto_alg.cra_refcnt is a
reference counter with the usual semantics, so switch it over to
refcount_t.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-01-05 18:43:09 +11:00
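
For illustration, a minimal sketch of the atomic_t to refcount_t conversion pattern described above; the struct and helpers are illustrative stand-ins, not the actual patch:

    #include <linux/refcount.h>
    #include <linux/slab.h>

    struct example_alg {                    /* illustrative stand-in for struct crypto_alg */
            refcount_t cra_refcnt;          /* was: atomic_t cra_refcnt; */
    };

    static void example_register(struct example_alg *alg)
    {
            refcount_set(&alg->cra_refcnt, 1);      /* was: atomic_set(..., 1) */
    }

    static void example_get(struct example_alg *alg)
    {
            /* refcount_inc() warns and saturates on overflow instead of wrapping */
            refcount_inc(&alg->cra_refcnt);         /* was: atomic_inc(...) */
    }

    static void example_put(struct example_alg *alg)
    {
            if (refcount_dec_and_test(&alg->cra_refcnt))    /* was: atomic_dec_and_test(...) */
                    kfree(alg);
    }
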
Gilad Ben-Yossef
427988d981 crypto: tcrypt - add multibuf aead speed test
The performance of some aead tfm providers is affected by
the amount of parallelism possible with the processing.

Introduce an async aead concurrent multiple buffer
processing speed test to be able to test performance of such
tfm providers.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-28 17:56:42 +11:00
Gilad Ben-Yossef
e161c5930c crypto: tcrypt - add multibuf skcipher speed test
The performance of some skcipher tfm providers is affected by
the amount of parallelism possible with the processing.

Introduce an async skcipher concurrent multiple buffer
processing speed test to be able to test performance of such
tfm providers.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-28 17:56:41 +11:00
Gilad Ben-Yossef
b34a0f67ba crypto: tcrypt - add multi buf ahash jiffies test
The multi buffer concurrent requests ahash speed test only
supported the cycles mode. Add support for the so-called
jiffies mode, which tests performance in bytes/sec.

We only add support for digest mode at the moment.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-28 17:56:41 +11:00
Gilad Ben-Yossef
8fcdc86856 crypto: tcrypt - allow setting num of bufs
For multiple buffers speed tests, the number of buffers, or
requests, used actually sets the level of parallelism a tfm
provider may utilize to hide latency. The existing number
(of 8) is good for some software based providers but not
enough for many HW providers with deep FIFOs.

Add a module parameter that allows setting the number of
multiple buffers/requests used, leaving the default at 8.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-28 17:56:40 +11:00
Gilad Ben-Yossef
4431bd4953 crypto: tcrypt - fix AEAD decryption speed test
The AEAD speed test pretended to support decryption; however, that
support was broken, as decryption requires a valid auth field which
the test did not provide.

Fix this by running the encryption path once with input/output sgls
switched to calculate the auth field prior to performing decryption
speed tests.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-28 17:56:39 +11:00
Gilad Ben-Yossef
7c3f132389 crypto: tcrypt - use multi buf for ahash mb test
The multi buffer ahash speed test was allocating multiple
buffers for use with the multiple outstanding requests it
was starting, but never actually used them (except to free
them); instead it used a single statically allocated buffer
for all requests.

Fix this by actually using the allocated buffers for the test.

It may seem tempting to instead remove the allocation and
freeing of the multiple buffers and leave things as they are,
since this is a hash test where the input is read-only.
However, multiple buffers better reflect a real-life scenario
with regard to data cache and TLB behaviour.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-28 17:56:39 +11:00
Herbert Xu
45fa9a324d Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Merge the crypto tree to pick up inside-secure fixes.
2017-12-22 20:00:50 +11:00
Corentin Labbe
d94c3d65df crypto: seqiv - Remove unused alg/spawn variable
This patch removes two unused variables and some dead code using them.

Fixes: 92932d03c2 ("crypto: seqiv - Remove AEAD compatibility code")

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-22 19:52:45 +11:00
Corentin Labbe
1f83f4d15d crypto: echainiv - Remove unused alg/spawn variable
This patch removes two unused variables and some dead code using them.

Fixes: 66008d4230 ("crypto: echainiv - Remove AEAD compatibility code")
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-22 19:52:45 +11:00
Eric Biggers
f5c421d545 crypto: gf128mul - remove incorrect comment
The comment in gf128mul_x8_ble() was copy-and-pasted from gf128mul.h and
makes no sense in the new context.  Remove it.

Cc: Harsh Jain <harsh@chelsio.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-22 19:52:40 +11:00
Eric Biggers
3a2d4fb51e crypto: null - Get rid of crypto_{get,put}_default_null_skcipher2()
Since commit 499a66e6b6 ("crypto: null - Remove default null
blkcipher"), crypto_get_default_null_skcipher2() and
crypto_put_default_null_skcipher2() are the same as their non-2
equivalents.  So switch callers of the "2" versions over to the original
versions and remove the "2" versions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-22 19:29:08 +11:00
Eric Biggers
cadc9ab503 crypto: api - Unexport crypto_larval_lookup()
crypto_larval_lookup() is not used outside of crypto/api.c, so unexport
it and mark it 'static'.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-22 19:29:06 +11:00
Eric Biggers
d76c68109f crypto: pcrypt - fix freeing pcrypt instances
pcrypt is using the old way of freeing instances, where the ->free()
method specified in the 'struct crypto_template' is passed a pointer to
the 'struct crypto_instance'.  But the crypto_instance is being
kfree()'d directly, which is incorrect because the memory was actually
allocated as an aead_instance, which contains the crypto_instance at a
nonzero offset.  Thus, the wrong pointer was being kfree()'d.

Fix it by switching to the new way to free aead_instance's where the
->free() method is specified in the aead_instance itself.

Reported-by: syzbot <syzkaller@googlegroups.com>
Fixes: 0496f56065 ("crypto: pcrypt - Add support for new AEAD interface")
Cc: <stable@vger.kernel.org> # v4.2+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-22 19:02:47 +11:00
Jonathan Cameron
af955bf15d crypto: af_alg - Fix race around ctx->rcvused by making it atomic_t
This variable was increased and decreased without any protection.
The result was an occasional miscount and negative wraparound,
resulting in false resource allocation failures.

Fixes: 7d2c3f54e6 ("crypto: af_alg - remove locking in async callback")
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-22 19:02:40 +11:00
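
A minimal sketch of the kind of change described above, moving an unprotected counter to atomic_t; the context struct and helpers are illustrative:

    #include <linux/atomic.h>

    struct example_ctx {
            atomic_t rcvused;               /* was a plain integer, updated without locking */
    };

    static void example_rx_account(struct example_ctx *ctx)
    {
            atomic_inc(&ctx->rcvused);      /* was: ctx->rcvused++; (racy) */
    }

    static void example_rx_release(struct example_ctx *ctx)
    {
            atomic_dec(&ctx->rcvused);      /* was: ctx->rcvused--; (racy) */
    }
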
Eric Biggers
e57121d08c crypto: chacha20poly1305 - validate the digest size
If the rfc7539 template was instantiated with a hash algorithm with
digest size larger than 16 bytes (POLY1305_DIGEST_SIZE), then the digest
overran the 'tag' buffer in 'struct chachapoly_req_ctx', corrupting the
subsequent memory, including 'cryptlen'.  This caused a crash during
crypto_skcipher_decrypt().

Fix it by, when instantiating the template, requiring that the
underlying hash algorithm has the digest size expected for Poly1305.

Reproducer:

    #include <linux/if_alg.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
            int algfd, reqfd;
            struct sockaddr_alg addr = {
                    .salg_type = "aead",
                    .salg_name = "rfc7539(chacha20,sha256)",
            };
            unsigned char buf[32] = { 0 };

            algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(algfd, (void *)&addr, sizeof(addr));
            setsockopt(algfd, SOL_ALG, ALG_SET_KEY, buf, sizeof(buf));
            reqfd = accept(algfd, 0, 0);
            write(reqfd, buf, 16);
            read(reqfd, buf, 16);
    }

Reported-by: syzbot <syzkaller@googlegroups.com>
Fixes: 71ebc4d1b2 ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539")
Cc: <stable@vger.kernel.org> # v4.2+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-22 19:02:33 +11:00
Colin Ian King
eaf356e4be crypto: cryptd - make cryptd_max_cpu_qlen module parameter static
The cryptd_max_cpu_qlen module parameter is local to the source and does
not need to be in global scope, so make it static.

Cleans up sparse warning:
crypto/cryptd.c:35:14: warning: symbol 'cryptd_max_cpu_qlen' was not
declared. Should it be static?

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-11 22:36:58 +11:00
Hauke Mehrtens
b5b9007730 crypto: ecdh - fix typo in KPP dependency of CRYPTO_ECDH
This fixes a typo in the CRYPTO_KPP dependency of CRYPTO_ECDH.

Fixes: 3c4b23901a ("crypto: ecdh - Add ECDH software support")
Cc: <stable@vger.kernel.org> # v4.8+
Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-11 22:36:56 +11:00
Stephan Mueller
d53c513579 crypto: af_alg - fix race accessing cipher request
When invoking an asynchronous cipher operation, the invocation of the
callback may be performed before the subsequent operations in the
initial code path are invoked. The callback deletes the cipher request
data structure which implies that after the invocation of the
asynchronous cipher operation, this data structure must not be accessed
any more.

The setting of the return code size with the request data structure must
therefore be moved before the invocation of the asynchronous cipher
operation.

Fixes: e870456d8e ("crypto: algif_skcipher - overhaul memory management")
Fixes: d887c52d6a ("crypto: algif_aead - overhaul memory management")
Reported-by: syzbot <syzkaller@googlegroups.com>
Cc: <stable@vger.kernel.org> # v4.14+
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-11 22:29:55 +11:00
Sebastian Andrzej Siewior
9abffc6f2e crypto: mcryptd - protect the per-CPU queue with a lock
mcryptd_enqueue_request() grabs the per-CPU queue struct and protects
access to it with disabled preemption. Then it schedules a worker on the
same CPU. The worker in mcryptd_queue_worker() guards access to the same
per-CPU variable with disabled preemption.

If we take CPU-hotplug into account then it is possible that between
queue_work_on() and the actual invocation of the worker the CPU goes
down and the worker will be scheduled on _another_ CPU. And here the
preempt_disable() protection does not work anymore. The easiest thing is
to add a spin_lock() to guard access to the list.

Another detail: mcryptd_queue_worker() does not process more than
MCRYPTD_BATCH invocations in a row. If there are still items left, then
it will invoke queue_work() to proceed with more later. I would
suggest simply dropping that check, because it does not use a system
workqueue and the workqueue is already marked as "CPU_INTENSIVE"; if
preemption is required then the scheduler should do it.
However, if queue_work() is used then the work item is marked as CPU
unbound. That means it will try to run on the local CPU but it may run
on another CPU as well, especially with CONFIG_DEBUG_WQ_FORCE_RR_CPU=y.
Again, the preempt_disable() won't work here, but the lock which was
introduced will help.
In order to keep the work item on the local CPU (and avoid RR) I changed
it to queue_work_on().

Cc: stable@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-11 22:29:54 +11:00
Stephan Mueller
11edb55596 crypto: af_alg - wait for data at beginning of recvmsg
The wait for data is a non-atomic operation that can sleep and therefore
potentially release the socket lock. The release of the socket lock
allows another thread to modify the context data structure. The waiting
operation for new data therefore must be called at the beginning of
recvmsg. This prevents a race condition where checks of the members of
the context data structure are performed by recvmsg while there is a
potential for modification of these values.

Fixes: e870456d8e ("crypto: algif_skcipher - overhaul memory management")
Fixes: d887c52d6a ("crypto: algif_aead - overhaul memory management")
Reported-by: syzbot <syzkaller@googlegroups.com>
Cc: <stable@vger.kernel.org> # v4.14+
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-11 22:29:54 +11:00
Eric Biggers
2b4f27c36b crypto: skcipher - set walk.iv for zero-length inputs
All the ChaCha20 algorithms as well as the ARM bit-sliced AES-XTS
algorithms call skcipher_walk_virt(), then access the IV (walk.iv)
before checking whether any bytes need to be processed (walk.nbytes).

But if the input is empty, then skcipher_walk_virt() doesn't set the IV,
and the algorithms crash trying to use the uninitialized IV pointer.

Fix it by setting the IV earlier in skcipher_walk_virt().  Also fix it
for the AEAD walk functions.

This isn't a perfect solution because we can't actually align the IV to
->cra_alignmask unless there are bytes to process, for one because the
temporary buffer for the aligned IV is freed by skcipher_walk_done(),
which is only called when there are bytes to process.  Thus, algorithms
that require aligned IVs will still need to avoid accessing the IV when
walk.nbytes == 0.  Still, many algorithms/architectures are fine with
IVs having any alignment, and even for those that aren't, a misaligned
pointer bug is much less severe than an uninitialized pointer bug.

This change also matches the behavior of the older blkcipher_walk API.

Fixes: 0cabf2af6f ("crypto: skcipher - Fix crash on zero-length input")
Reported-by: syzbot <syzkaller@googlegroups.com>
Cc: <stable@vger.kernel.org> # v4.14+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-11 22:29:53 +11:00
Eric Biggers
54c1fb39fe X.509: fix comparisons of ->pkey_algo
->pkey_algo used to be an enum, but was changed to a string by commit
4e8ae72a75 ("X.509: Make algo identifiers text instead of enum").  But
two comparisons were not updated.  Fix them to use strcmp().

This bug broke signature verification in certain configurations,
depending on whether the string constants were deduplicated or not.

Fixes: 4e8ae72a75 ("X.509: Make algo identifiers text instead of enum")
Cc: <stable@vger.kernel.org> # v4.6+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
2017-12-08 15:13:29 +00:00
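
A small standalone C illustration of why the pointer comparison only worked by accident, as described above; whether it "works" depends on string-constant deduplication:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            const char *pkey_algo = "rsa";  /* e.g. what the certificate parser stored */
            const char *expected  = "rsa";

            /* Pointer comparison: result depends on whether the compiler/linker
             * deduplicated the two "rsa" literals, so it may silently fail. */
            printf("pointer compare: %d\n", pkey_algo == expected);

            /* Content comparison: always correct. */
            printf("strcmp compare:  %d\n", strcmp(pkey_algo, expected) == 0);
            return 0;
    }
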
Eric Biggers
aa33003620 X.509: use crypto_shash_digest()
Use crypto_shash_digest() instead of crypto_shash_init() followed by
crypto_shash_finup().  (For simplicity only; they are equivalent.)

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
2017-12-08 15:13:29 +00:00
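
For reference, a sketch of the before/after pattern; the wrapper functions are illustrative and assume an already set-up shash_desc:

    #include <crypto/hash.h>

    static int hash_buffer_old(struct shash_desc *desc, const u8 *data,
                               unsigned int len, u8 *out)
    {
            int ret = crypto_shash_init(desc);

            return ret ? ret : crypto_shash_finup(desc, data, len, out);
    }

    static int hash_buffer_new(struct shash_desc *desc, const u8 *data,
                               unsigned int len, u8 *out)
    {
            /* One call, equivalent to init followed by finup. */
            return crypto_shash_digest(desc, data, len, out);
    }
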
Eric Biggers
72f9a07b6b KEYS: be careful with error codes in public_key_verify_signature()
In public_key_verify_signature(), if akcipher_request_alloc() fails, we
return -ENOMEM.  But that error code was set 25 lines above, and by
accident someone could easily insert new code in between that assigns to
'ret', which would introduce a signature verification bypass.  Make the
code clearer by moving the -ENOMEM down to where it is used.

Additionally, the callers of public_key_verify_signature() only consider
a negative return value to be an error.  This means that if any positive
return value is accidentally introduced deeper in the call stack (e.g.
'return EBADMSG' instead of 'return -EBADMSG' somewhere in RSA),
signature verification will be bypassed.  Make things more robust by
having public_key_verify_signature() warn about positive errors and
translate them into -EINVAL.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
2017-12-08 15:13:29 +00:00
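
A sketch of the defensive pattern described in the last paragraph; do_verify() is a stand-in for the real verification call chain, and the exact diff may differ:

    #include <linux/bug.h>
    #include <linux/errno.h>

    static int do_verify(void)
    {
            return 0;       /* stand-in for the akcipher verify path */
    }

    static int verify_signature_checked(void)
    {
            int ret = do_verify();

            /* Callers treat only negative values as errors, so a stray positive
             * return must never leak through: warn and convert it. */
            if (WARN_ON_ONCE(ret > 0))
                    ret = -EINVAL;
            return ret;
    }
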
Eric Biggers
a80745a6de pkcs7: use crypto_shash_digest()
Use crypto_shash_digest() instead of crypto_shash_init() followed by
crypto_shash_finup().  (For simplicity only; they are equivalent.)

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
2017-12-08 15:13:28 +00:00
Eric Biggers
7204eb8590 pkcs7: fix check for self-signed certificate
pkcs7_validate_trust_one() used 'x509->next == x509' to identify a
self-signed certificate.  That's wrong; ->next is simply the link in the
linked list of certificates in the PKCS#7 message.  It should be
checking ->signer instead.  Fix it.

Fortunately this didn't actually matter because when we re-visited
'x509' on the next iteration via 'x509->signer', it was already seen and
not verified, so we returned -ENOKEY anyway.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: James Morris <james.l.morris@oracle.com>
2017-12-08 15:13:28 +00:00
Eric Biggers
8ecb506d34 pkcs7: return correct error code if pkcs7_check_authattrs() fails
If pkcs7_check_authattrs() returns an error code, we should pass that
error code on, rather than using -ENOMEM.

Fixes: 99db443506 ("PKCS#7: Appropriately restrict authenticated attributes and content type")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: James Morris <james.l.morris@oracle.com>
2017-12-08 15:13:28 +00:00
Eric Biggers
0f30cbea00 X.509: reject invalid BIT STRING for subjectPublicKey
Adding a specially crafted X.509 certificate whose subjectPublicKey
ASN.1 value is zero-length caused x509_extract_key_data() to set the
public key size to SIZE_MAX, as it subtracted the nonexistent BIT STRING
metadata byte.  Then, x509_cert_parse() called kmemdup() with that bogus
size, triggering the WARN_ON_ONCE() in kmalloc_slab().

This appears to be harmless, but it still must be fixed since WARNs are
never supposed to be user-triggerable.

Fix it by updating x509_cert_parse() to validate that the value has a
BIT STRING metadata byte, and that the byte is 0 which indicates that
the number of bits in the bitstring is a multiple of 8.

It would be nice to handle the metadata byte in asn1_ber_decoder()
instead.  But that would be tricky because in the general case a BIT
STRING could be implicitly tagged, and/or could legitimately have a
length that is not a whole number of bytes.

Here was the WARN (cleaned up slightly):

    WARNING: CPU: 1 PID: 202 at mm/slab_common.c:971 kmalloc_slab+0x5d/0x70 mm/slab_common.c:971
    Modules linked in:
    CPU: 1 PID: 202 Comm: keyctl Tainted: G    B            4.14.0-09238-g1d3b78bbc6e9 #26
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.0-20171110_100015-anatol 04/01/2014
    task: ffff880033014180 task.stack: ffff8800305c8000
    Call Trace:
     __do_kmalloc mm/slab.c:3706 [inline]
     __kmalloc_track_caller+0x22/0x2e0 mm/slab.c:3726
     kmemdup+0x17/0x40 mm/util.c:118
     kmemdup include/linux/string.h:414 [inline]
     x509_cert_parse+0x2cb/0x620 crypto/asymmetric_keys/x509_cert_parser.c:106
     x509_key_preparse+0x61/0x750 crypto/asymmetric_keys/x509_public_key.c:174
     asymmetric_key_preparse+0xa4/0x150 crypto/asymmetric_keys/asymmetric_type.c:388
     key_create_or_update+0x4d4/0x10a0 security/keys/key.c:850
     SYSC_add_key security/keys/keyctl.c:122 [inline]
     SyS_add_key+0xe8/0x290 security/keys/keyctl.c:62
     entry_SYSCALL_64_fastpath+0x1f/0x96

Fixes: 42d5ec27f8 ("X.509: Add an ASN.1 decoder")
Cc: <stable@vger.kernel.org> # v3.7+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: James Morris <james.l.morris@oracle.com>
2017-12-08 15:13:27 +00:00
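
A sketch of the validation the fix adds when extracting the key bits; the signature is illustrative, but the DER rule it relies on is standard: the first content byte of a BIT STRING gives the number of unused bits:

    #include <linux/errno.h>
    #include <linux/types.h>

    static int example_extract_key_data(const u8 *value, size_t vlen,
                                        const u8 **key, size_t *key_len)
    {
            /* Require the metadata byte to be present and to be 0, i.e. the
             * bit string is a whole number of bytes. */
            if (vlen < 1 || value[0] != 0)
                    return -EBADMSG;

            *key = value + 1;       /* skip the unused-bits count */
            *key_len = vlen - 1;    /* no longer underflows to SIZE_MAX */
            return 0;
    }
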
Eric Biggers
9f480faec5 crypto: chacha20 - Fix keystream alignment for chacha20_block()
When chacha20_block() outputs the keystream block, it uses 'u32' stores
directly.  However, the callers (crypto/chacha20_generic.c and
drivers/char/random.c) declare the keystream buffer as a 'u8' array,
which is not guaranteed to have the needed alignment.

Fix it by having both callers declare the keystream as a 'u32' array.
For now this is preferable to switching over to the unaligned access
macros because chacha20_block() is only being used in cases where we can
easily control the alignment (stack buffers).

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-29 17:33:33 +11:00
Eric Biggers
a1c73383c0 crypto: chacha20 - Remove cra_alignmask
Now that crypto_chacha20_setkey() and crypto_chacha20_init() use the
unaligned access macros and crypto_xor() also accepts unaligned buffers,
there is no need to have a cra_alignmask set for chacha20-generic.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-29 17:33:32 +11:00
Eric Biggers
dbd872a123 crypto: chacha20 - Use unaligned access macros when loading key and IV
The generic ChaCha20 implementation has a cra_alignmask of 3, which
ensures that the key passed into crypto_chacha20_setkey() and the IV
passed into crypto_chacha20_init() are 4-byte aligned.  However, these
functions are also called from the ARM and ARM64 implementations of
ChaCha20, which intentionally do not have a cra_alignmask set.  This is
broken because 32-bit words are being loaded from potentially-unaligned
buffers without the unaligned access macros.

Fix it by using the unaligned access macros when loading the key and IV.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-29 17:33:32 +11:00
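
A sketch of the key load after the change; the helper is illustrative, the point being that get_unaligned_le32() tolerates buffers of any alignment:

    #include <asm/unaligned.h>
    #include <linux/types.h>

    /* 'key' may be unaligned, e.g. when called from the ARM/ARM64 glue code
     * that sets no cra_alignmask, so avoid direct u32 loads. */
    static void example_load_key(u32 key_words[8], const u8 *key)
    {
            int i;

            for (i = 0; i < 8; i++)
                    key_words[i] = get_unaligned_le32(key + i * 4);
    }
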
Eric Biggers
ecf3220d88 crypto: chacha20 - Fix unaligned access when loading constants
The four 32-bit constants for the initial state of ChaCha20 were loaded
from a char array which is not guaranteed to have the needed alignment.

Fix it by just assigning the constants directly instead.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-29 17:33:31 +11:00
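
A sketch of the constants fix; the four words are the standard ChaCha20 "expand 32-byte k" constants:

    #include <linux/types.h>

    /* Before: these words were loaded from a char array, which is not
     * guaranteed to be 4-byte aligned. After: assign the values directly. */
    static void example_init_consts(u32 state[16])
    {
            state[0] = 0x61707865;  /* "expa" */
            state[1] = 0x3320646e;  /* "nd 3" */
            state[2] = 0x79622d32;  /* "2-by" */
            state[3] = 0x6b206574;  /* "te k" */
    }
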
Jon Maxwell
c3a5360563 crypto: cryptd - Add cryptd_max_cpu_qlen module parameter
Make the cryptd queue length configurable. We recently had a customer
case where this needed to be tuned to accommodate the aesni_intel module
and prevent packet drops.

Signed-off-by: Jon Maxwell <jmaxwell37@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-29 17:33:31 +11:00
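
A sketch of how such a module parameter is declared; the default value and description string below are assumptions, not taken from the patch:

    #include <linux/module.h>

    unsigned int cryptd_max_cpu_qlen = 1000;        /* assumed default */
    module_param(cryptd_max_cpu_qlen, uint, 0);
    MODULE_PARM_DESC(cryptd_max_cpu_qlen, "Set cryptd max per-CPU queue length");

(The follow-up commit further up this log then marks the variable static to silence the sparse warning.)
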
Yang Shi
79e53b2a5d crypto: remove unused hardirq.h
The preempt counter APIs have been split out; currently, hardirq.h just
includes the irq_enter/exit APIs, which are not used by crypto at all.

So, remove the unused hardirq.h.

Signed-off-by: Yang Shi <yang.s@alibaba-inc.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-29 17:33:29 +11:00
Geert Uytterhoeven
c9683276dd crypto: keywrap - Add missing ULL suffixes for 64-bit constants
On 32-bit (e.g. with m68k-linux-gnu-gcc-4.1):

    crypto/keywrap.c: In function ‘crypto_kw_decrypt’:
    crypto/keywrap.c:191: warning: integer constant is too large for ‘long’ type
    crypto/keywrap.c: In function ‘crypto_kw_encrypt’:
    crypto/keywrap.c:224: warning: integer constant is too large for ‘long’ type

Fixes: 9e49451d7a ("crypto: keywrap - simplify code")
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-29 17:33:26 +11:00
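
A minimal illustration of the warning class being fixed; the constant is the standard RFC 3394 key-wrap default IV, used here only as an example of a 64-bit literal:

    #include <linux/types.h>

    /* On 32-bit builds with older compilers (such as the gcc 4.1 build quoted
     * above), a bare 64-bit constant triggers "integer constant is too large
     * for 'long' type"; the ULL suffix makes the intended type explicit. */
    static const u64 kw_iv_warns = 0xa6a6a6a6a6a6a6a6;     /* warns on such builds */
    static const u64 kw_iv_clean = 0xa6a6a6a6a6a6a6a6ULL;  /* no warning */
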
Tudor-Dan Ambarus
5601e014fe crypto: tcrypt - set assoc in sg_init_aead()
This results in better code readability.

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-29 17:33:26 +11:00
Martin Kepplinger
1af39daaad crypto: replace FSF address with web source in license notices
A few years ago the FSF moved and "59 Temple Place" is wrong. Having this
still in our source files feels old and unmaintained.

Let's take the license statement seriously and not confuse users.

As https://www.gnu.org/licenses/gpl-howto.html suggests, we replace the
postal address with "<http://www.gnu.org/licenses/>".

Signed-off-by: Martin Kepplinger <martink@posteo.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-29 17:33:25 +11:00
Pierre
4c0e22c905 crypto: ecc - Fix NULL pointer deref. on no default_rng
If crypto_get_default_rng returns an error, the function
ecc_gen_privkey should return an error as well. Instead, it
currently tries to use the default_rng anyway, causing a
kernel panic with a NULL pointer dereference. Returning the
error directly, as was evidently intended, fixes this.

Signed-off-by: Pierre Ducroquet <pinaraf@pinaraf.info>
Reviewed-by: PrasannaKumar Muralidharan <prasannatsmkumar@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-29 17:33:24 +11:00
Robert Baronescu
5c6ac1d4f8 crypto: tcrypt - fix S/G table for test_aead_speed()
In case the buffer length is a multiple of PAGE_SIZE,
the S/G table is incorrectly generated.
Fix this by handling buflen = k * PAGE_SIZE separately.

Signed-off-by: Robert Baronescu <robert.baronescu@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-29 16:43:40 +11:00
Eric Biggers
ecaaab5649 crypto: salsa20 - fix blkcipher_walk API usage
When asked to encrypt or decrypt 0 bytes, both the generic and x86
implementations of Salsa20 crash in blkcipher_walk_done(), either when
doing 'kfree(walk->buffer)' or 'free_page((unsigned long)walk->page)',
because walk->buffer and walk->page have not been initialized.

The bug is that Salsa20 is calling blkcipher_walk_done() even when
nothing is in 'walk.nbytes'.  But blkcipher_walk_done() is only meant to
be called when a nonzero number of bytes have been provided.

The broken code is part of an optimization that tries to make only one
call to salsa20_encrypt_bytes() to process inputs that are not evenly
divisible by 64 bytes.  To fix the bug, just remove this "optimization"
and use the blkcipher_walk API the same way all the other users do.

Reproducer:

    #include <linux/if_alg.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
            int algfd, reqfd;
            struct sockaddr_alg addr = {
                    .salg_type = "skcipher",
                    .salg_name = "salsa20",
            };
            char key[16] = { 0 };

            algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(algfd, (void *)&addr, sizeof(addr));
            reqfd = accept(algfd, 0, 0);
            setsockopt(algfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
            read(reqfd, key, sizeof(key));
    }

Reported-by: syzbot <syzkaller@googlegroups.com>
Fixes: eb6f13eb9f ("[CRYPTO] salsa20_generic: Fix multi-page processing")
Cc: <stable@vger.kernel.org> # v2.6.25+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-29 16:25:58 +11:00
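
For reference, a sketch of the conventional blkcipher_walk loop the fix reverts to; example_process() is a stand-in for salsa20_encrypt_bytes() and the keystream bookkeeping is omitted:

    #include <crypto/algapi.h>

    static void example_process(u8 *dst, const u8 *src, unsigned int bytes);

    static int example_crypt(struct blkcipher_desc *desc, struct scatterlist *dst,
                             struct scatterlist *src, unsigned int nbytes)
    {
            struct blkcipher_walk walk;
            int err;

            blkcipher_walk_init(&walk, dst, src, nbytes);
            err = blkcipher_walk_virt(desc, &walk);

            /* blkcipher_walk_done() is only called while walk.nbytes != 0, so a
             * zero-length request never reaches it. */
            while (walk.nbytes) {
                    example_process(walk.dst.virt.addr, walk.src.virt.addr,
                                    walk.nbytes);
                    err = blkcipher_walk_done(desc, &walk, 0);
            }
            return err;
    }
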
Eric Biggers
af3ff8045b crypto: hmac - require that the underlying hash algorithm is unkeyed
Because the HMAC template didn't check that its underlying hash
algorithm is unkeyed, trying to use "hmac(hmac(sha3-512-generic))"
through AF_ALG or through KEYCTL_DH_COMPUTE resulted in the inner HMAC
being used without having been keyed, resulting in sha3_update() being
called without sha3_init(), causing a stack buffer overflow.

This is a very old bug, but it seems to have only started causing real
problems when SHA-3 support was added (requires CONFIG_CRYPTO_SHA3)
because the innermost hash's state is ->import()ed from a zeroed buffer,
and it just so happens that other hash algorithms are fine with that,
but SHA-3 is not.  However, there could be arch or hardware-dependent
hash algorithms also affected; I couldn't test everything.

Fix the bug by introducing a function crypto_shash_alg_has_setkey()
which tests whether a shash algorithm is keyed.  Then update the HMAC
template to require that its underlying hash algorithm is unkeyed.

Here is a reproducer:

    #include <linux/if_alg.h>
    #include <sys/socket.h>

    int main()
    {
        int algfd;
        struct sockaddr_alg addr = {
            .salg_type = "hash",
            .salg_name = "hmac(hmac(sha3-512-generic))",
        };
        char key[4096] = { 0 };

        algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
        bind(algfd, (const struct sockaddr *)&addr, sizeof(addr));
        setsockopt(algfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
    }

Here was the KASAN report from syzbot:

    BUG: KASAN: stack-out-of-bounds in memcpy include/linux/string.h:341  [inline]
    BUG: KASAN: stack-out-of-bounds in sha3_update+0xdf/0x2e0  crypto/sha3_generic.c:161
    Write of size 4096 at addr ffff8801cca07c40 by task syzkaller076574/3044

    CPU: 1 PID: 3044 Comm: syzkaller076574 Not tainted 4.14.0-mm1+ #25
    Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS  Google 01/01/2011
    Call Trace:
      __dump_stack lib/dump_stack.c:17 [inline]
      dump_stack+0x194/0x257 lib/dump_stack.c:53
      print_address_description+0x73/0x250 mm/kasan/report.c:252
      kasan_report_error mm/kasan/report.c:351 [inline]
      kasan_report+0x25b/0x340 mm/kasan/report.c:409
      check_memory_region_inline mm/kasan/kasan.c:260 [inline]
      check_memory_region+0x137/0x190 mm/kasan/kasan.c:267
      memcpy+0x37/0x50 mm/kasan/kasan.c:303
      memcpy include/linux/string.h:341 [inline]
      sha3_update+0xdf/0x2e0 crypto/sha3_generic.c:161
      crypto_shash_update+0xcb/0x220 crypto/shash.c:109
      shash_finup_unaligned+0x2a/0x60 crypto/shash.c:151
      crypto_shash_finup+0xc4/0x120 crypto/shash.c:165
      hmac_finup+0x182/0x330 crypto/hmac.c:152
      crypto_shash_finup+0xc4/0x120 crypto/shash.c:165
      shash_digest_unaligned+0x9e/0xd0 crypto/shash.c:172
      crypto_shash_digest+0xc4/0x120 crypto/shash.c:186
      hmac_setkey+0x36a/0x690 crypto/hmac.c:66
      crypto_shash_setkey+0xad/0x190 crypto/shash.c:64
      shash_async_setkey+0x47/0x60 crypto/shash.c:207
      crypto_ahash_setkey+0xaf/0x180 crypto/ahash.c:200
      hash_setkey+0x40/0x90 crypto/algif_hash.c:446
      alg_setkey crypto/af_alg.c:221 [inline]
      alg_setsockopt+0x2a1/0x350 crypto/af_alg.c:254
      SYSC_setsockopt net/socket.c:1851 [inline]
      SyS_setsockopt+0x189/0x360 net/socket.c:1830
      entry_SYSCALL_64_fastpath+0x1f/0x96

Reported-by: syzbot <syzkaller@googlegroups.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-29 13:39:15 +11:00
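
A sketch of how the new check might be used from the HMAC template's instantiation path; the surrounding context is illustrative and assumes the helper is visible via the internal hash headers:

    #include <crypto/internal/hash.h>
    #include <linux/errno.h>

    /* 'salg' is the underlying shash algorithm being wrapped by hmac(). */
    static int example_check_unkeyed(struct shash_alg *salg)
    {
            /* Refuse keyed hashes, e.g. hmac(hmac(sha3-512-generic)). */
            if (crypto_shash_alg_has_setkey(salg))
                    return -EINVAL;
            return 0;
    }
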
Eric Biggers
887207ed9e crypto: af_alg - fix NULL pointer dereference in af_alg_free_areq_sgls()
If allocating the ->tsgl member of 'struct af_alg_async_req' failed,
during cleanup we dereferenced the NULL ->tsgl pointer in
af_alg_free_areq_sgls(), because ->tsgl_entries was nonzero.

Fix it by only freeing the ->tsgl list if it is non-NULL.

This affected both algif_skcipher and algif_aead.

Fixes: e870456d8e ("crypto: algif_skcipher - overhaul memory management")
Fixes: d887c52d6a ("crypto: algif_aead - overhaul memory management")
Reported-by: syzbot <syzkaller@googlegroups.com>
Cc: <stable@vger.kernel.org> # v4.14+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-29 13:39:14 +11:00
Eric Biggers
b32a7dc8ae crypto: algif_aead - fix reference counting of null skcipher
In the AEAD interface for AF_ALG, the reference to the "null skcipher"
held by each tfm was being dropped in the wrong place -- when each
af_alg_ctx was freed instead of when the aead_tfm was freed.  As
discovered by syzkaller, a specially crafted program could use this to
cause the null skcipher to be freed while it is still in use.

Fix it by dropping the reference in the right place.

Fixes: 72548b093e ("crypto: algif_aead - copy AAD from src to dst")
Reported-by: syzbot <syzkaller@googlegroups.com>
Cc: <stable@vger.kernel.org> # v4.14+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-29 13:39:14 +11:00
Eric Biggers
d2890c3778 crypto: rsa - fix buffer overread when stripping leading zeroes
In rsa_get_n(), if the buffer contained all 0's and "FIPS mode" is
enabled, we would read one byte past the end of the buffer while
scanning the leading zeroes.  Fix it by checking 'n_sz' before '!*ptr'.

This bug was reachable by adding a specially crafted key of type
"asymmetric" (requires CONFIG_RSA and CONFIG_X509_CERTIFICATE_PARSER).

KASAN report:

    BUG: KASAN: slab-out-of-bounds in rsa_get_n+0x19e/0x1d0 crypto/rsa_helper.c:33
    Read of size 1 at addr ffff88003501a708 by task keyctl/196

    CPU: 1 PID: 196 Comm: keyctl Not tainted 4.14.0-09238-g1d3b78bbc6e9 #26
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.0-20171110_100015-anatol 04/01/2014
    Call Trace:
     rsa_get_n+0x19e/0x1d0 crypto/rsa_helper.c:33
     asn1_ber_decoder+0x82a/0x1fd0 lib/asn1_decoder.c:328
     rsa_set_pub_key+0xd3/0x320 crypto/rsa.c:278
     crypto_akcipher_set_pub_key ./include/crypto/akcipher.h:364 [inline]
     pkcs1pad_set_pub_key+0xae/0x200 crypto/rsa-pkcs1pad.c:117
     crypto_akcipher_set_pub_key ./include/crypto/akcipher.h:364 [inline]
     public_key_verify_signature+0x270/0x9d0 crypto/asymmetric_keys/public_key.c:106
     x509_check_for_self_signed+0x2ea/0x480 crypto/asymmetric_keys/x509_public_key.c:141
     x509_cert_parse+0x46a/0x620 crypto/asymmetric_keys/x509_cert_parser.c:129
     x509_key_preparse+0x61/0x750 crypto/asymmetric_keys/x509_public_key.c:174
     asymmetric_key_preparse+0xa4/0x150 crypto/asymmetric_keys/asymmetric_type.c:388
     key_create_or_update+0x4d4/0x10a0 security/keys/key.c:850
     SYSC_add_key security/keys/keyctl.c:122 [inline]
     SyS_add_key+0xe8/0x290 security/keys/keyctl.c:62
     entry_SYSCALL_64_fastpath+0x1f/0x96

    Allocated by task 196:
     __do_kmalloc mm/slab.c:3711 [inline]
     __kmalloc_track_caller+0x118/0x2e0 mm/slab.c:3726
     kmemdup+0x17/0x40 mm/util.c:118
     kmemdup ./include/linux/string.h:414 [inline]
     x509_cert_parse+0x2cb/0x620 crypto/asymmetric_keys/x509_cert_parser.c:106
     x509_key_preparse+0x61/0x750 crypto/asymmetric_keys/x509_public_key.c:174
     asymmetric_key_preparse+0xa4/0x150 crypto/asymmetric_keys/asymmetric_type.c:388
     key_create_or_update+0x4d4/0x10a0 security/keys/key.c:850
     SYSC_add_key security/keys/keyctl.c:122 [inline]
     SyS_add_key+0xe8/0x290 security/keys/keyctl.c:62
     entry_SYSCALL_64_fastpath+0x1f/0x96

Fixes: 5a7de97309 ("crypto: rsa - return raw integers for the ASN.1 parser")
Cc: <stable@vger.kernel.org> # v4.8+
Cc: Tudor Ambarus <tudor-dan.ambarus@nxp.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: James Morris <james.l.morris@oracle.com>
Reviewed-by: David Howells <dhowells@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-29 13:39:14 +11:00
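
A standalone C illustration of the check-order fix described above: test the remaining length before dereferencing the pointer:

    #include <stddef.h>

    /* Strip leading zero bytes from a big-endian integer of length *n_sz. */
    static const unsigned char *strip_leading_zeroes(const unsigned char *ptr,
                                                     size_t *n_sz)
    {
            /* Buggy order: while (!*ptr && *n_sz), which reads *ptr even when
             * *n_sz is already 0, i.e. one byte past the end of the buffer. */
            while (*n_sz && !*ptr) {
                    (*n_sz)--;
                    ptr++;
            }
            return ptr;
    }
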
Linus Torvalds
43570f0383 Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:

 - avoid potential bogus alignment for some AEAD operations

 - fix crash in algif_aead

 - avoid sleeping in softirq context with async af_alg

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: skcipher - Fix skcipher_walk_aead_common
  crypto: af_alg - remove locking in async callback
  crypto: algif_aead - skip SGL entries with NULL page
2017-11-28 16:22:10 -08:00
Ondrej Mosnáček
c14ca83865 crypto: skcipher - Fix skcipher_walk_aead_common
The skcipher_walk_aead_common function calls scatterwalk_copychunks on
the input and output walks to skip the associated data. If the AD ends
at an SG list entry boundary, then after these calls the walks will
still be pointing to the end of the skipped region.

These offsets are later checked for alignment in skcipher_walk_next,
so the skcipher_walk may detect the alignment incorrectly.

This patch fixes it by calling scatterwalk_done after the copychunks
calls to ensure that the offsets refer to the right SG list entry.

Fixes: b286d8b1a6 ("crypto: skcipher - Add skcipher walk interface")
Cc: <stable@vger.kernel.org>
Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-25 08:08:52 +08:00
Stephan Mueller
7d2c3f54e6 crypto: af_alg - remove locking in async callback
The code paths protected by the socket-lock do not use or modify the
socket in a non-atomic fashion. The actions pertaining to the socket do not
even need to be handled as an atomic operation. Thus, the socket-lock
can be safely ignored.

This fixes a bug regarding scheduling in atomic as the callback function
may be invoked in interrupt context.

In addition, the sock_hold is moved before the AIO encrypt/decrypt
operation to ensure that the socket is always present. This avoids a
tiny race window where the socket is unprotected and yet used by the AIO
operation.

Finally, the release of resources for a crypto operation is moved into a
common function of af_alg_free_resources.

Cc: <stable@vger.kernel.org>
Fixes: e870456d8e ("crypto: algif_skcipher - overhaul memory management")
Fixes: d887c52d6a ("crypto: algif_aead - overhaul memory management")
Reported-by: Romain Izard <romain.izard.pro@gmail.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Tested-by: Romain Izard <romain.izard.pro@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-24 13:23:39 +08:00
Stephan Mueller
8e1fa89aa8 crypto: algif_aead - skip SGL entries with NULL page
The TX SGL may contain SGL entries that are assigned a NULL page. This
may happen if a multi-stage AIO operation is performed where the data
for each stage is pointed to by one SGL entry. Upon completion of that
stage, af_alg_pull_tsgl will assign NULL to the SGL entry.

The NULL cipher used to copy the AAD from TX SGL to the destination
buffer, however, cannot handle the case where the SGL starts with an SGL
entry having a NULL page. Thus, the code needs to advance the start
pointer into the SGL to the first non-NULL entry.

This fixes a crash visible on Intel x86 32 bit using the libkcapi test
suite.

Cc: <stable@vger.kernel.org>
Fixes: 72548b093e ("crypto: algif_aead - copy AAD from src to dst")
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-11-24 13:23:36 +08:00