Unlike chaining modes such as CBC, stream ciphers other than CTR
usually hold an internal state that must be preserved if the
operation is to be done piecemeal. This has not been represented
in the API, resulting in the inability to split up stream cipher
operations.
This patch adds the basic representation of an internal state to
skcipher and lskcipher. In the interest of backwards compatibility,
the default has been set such that existing users are assumed to
be operating in one go as opposed to piecemeal.
With the new API, each lskcipher/skcipher algorithm has a new
attribute called statesize. For skcipher, this is the size of
the buffer that can be exported or imported similar to ahash.
For lskcipher, instead of providing a buffer of ivsize, the user
now has to provide a buffer of ivsize + statesize.
Each skcipher operation is assumed to be final as they are now,
but this may be overridden with a request flag. When the override
occurs, the user may then export the partial state and reimport
it later.
For lskcipher operations this is reversed. All operations are
not final and the state will be exported unless the FINAL bit is
set. However, the CONT bit still has to be set for the state
to be used.
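As an illustration, here is a minimal sketch of the piecemeal flow described
above, assuming export/import helpers and a statesize accessor analogous to
ahash (crypto_skcipher_statesize, crypto_skcipher_export,
crypto_skcipher_import); these names and the not-final request flag are
inferred from this description rather than quoted from the patch:
    #include <crypto/skcipher.h>
    #include <linux/slab.h>

    static int encrypt_in_two_steps(struct crypto_skcipher *tfm,
                                    struct skcipher_request *req1,
                                    struct skcipher_request *req2)
    {
            u8 *state = kmalloc(crypto_skcipher_statesize(tfm), GFP_KERNEL);
            int err;

            if (!state)
                    return -ENOMEM;

            /* req1 is assumed to carry the new "not final" request flag so
             * that the internal state is preserved after this call. */
            err = crypto_skcipher_encrypt(req1);
            if (err)
                    goto out;

            /* Save the partial state ... */
            err = crypto_skcipher_export(req1, state);
            if (err)
                    goto out;

            /* ... and resume from it later, possibly in another request. */
            err = crypto_skcipher_import(req2, state);
            if (err)
                    goto out;

            err = crypto_skcipher_encrypt(req2);    /* final chunk */
    out:
            kfree_sensitive(state);
            return err;
    }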
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Having multiple in-flight AIO requests results in unpredictable
output because they all share the same IV. Fix this by only allowing
one request at a time.
Fixes: 83094e5e9e ("crypto: af_alg - add async support to algif_aead")
Fixes: a596999b7d ("crypto: algif - change algif_skcipher to be asynchronous")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The SP800-90C 3rd draft states that SHA-1 will be removed from all
specifications, including drbg, by the end of 2030. Given that kernels
built today will be operating past that date, start complying with the
upcoming requirements.
No functional change, as SHA-256 / SHA-512 based DRBG have always been
the preferred ones.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Update the code comment, self-test & health check to use HMAC SHA512,
instead of HMAC SHA256. These changes are in dead code or FIPS-enabled
code paths only and have no effect on usual kernel builds.
On systems booting in FIPS mode this has the effect of switching the
sanity self-test to the HMAC SHA512 based DRBG (which has been the
default DRBG).
This patch updates code from 9b7b94683a ("crypto: DRBG - switch to
HMAC SHA512 DRBG as default DRBG"), but is not worth cherry-picking
for stable updates, because it doesn't affect regular builds and has
no tangible effect on FIPS certification.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When drbg was originally introduced, the FIPS self-checks for all types
but CTR used the most preferred parameters for each type of DRBG.
Update the CTR self-check to use aes256 as well.
This patch updates code from 541af946fe ("crypto: drbg - SP800-90A
Deterministic Random Bit Generator"), but is not worth cherry-picking
for stable updates, because it doesn't affect regular builds and has
no tangible effect on FIPS certification.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
drbg supports multiple types of DRBG, with multiple parameter sets for
each. The health-check sanity test only checks one DRBG of a single
type. All three DRBG types can be enabled, yet instead of checking the
most preferred algorithm (last one wins), the code currently checks the
first one.
Update the ifdefs to ensure that the health check prefers HMAC over
HASH over CTR, last one wins, like all other code and functions.
This patch updates code from 541af946fe ("crypto: drbg - SP800-90A
Deterministic Random Bit Generator"), but is not worth cherry-picking
for stable updates, because it doesn't affect regular builds and has
no tangible effect on FIPS certification.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Static checkers insist that the mpi_alloc() allocation can fail so add
a check to prevent a NULL dereference. Small allocations like this
can't actually fail in current kernels, but adding a check is very
simple and makes the static checkers happy.
Fixes: 6637e11e4a ("crypto: rsa - allow only odd e and restrict value in FIPS mode")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
EINPROGRESS and EBUSY have special meaning for async operations.
However, shash is always synchronous, so these statuses have no special
meaning for shash and don't need to be excluded when handling errors.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'v6.7-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:
"This fixes a regression in ahash and hides the Kconfig sub-options for
the jitter RNG"
* tag 'v6.7-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: ahash - Set using_shash for cloned ahash wrapper over shash
crypto: jitterentropy - Hide esoteric Kconfig options under FIPS and EXPERT
As JITTERENTROPY is selected by default if you enable the CRYPTO
API, any Kconfig options added there will show up for every single
user. Hide the esoteric options under EXPERT as well as FIPS so
that only distro makers will see them.
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'v6.7-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
"API:
- Add virtual-address based lskcipher interface
- Optimise ahash/shash performance in light of costly indirect calls
- Remove ahash alignmask attribute
Algorithms:
- Improve AES/XTS performance of 6-way unrolling for ppc
- Remove some uses of obsolete algorithms (md4, md5, sha1)
- Add FIPS 202 SHA-3 support in pkcs1pad
- Add fast path for single-page messages in adiantum
- Remove zlib-deflate
Drivers:
- Add support for S4 in meson RNG driver
- Add STM32MP13x support in stm32
- Add hwrng interface support in qcom-rng
- Add support for deflate algorithm in hisilicon/zip"
* tag 'v6.7-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (283 commits)
crypto: adiantum - flush destination page before unmapping
crypto: testmgr - move pkcs1pad(rsa,sha3-*) to correct place
Documentation/module-signing.txt: bring up to date
module: enable automatic module signing with FIPS 202 SHA-3
crypto: asymmetric_keys - allow FIPS 202 SHA-3 signatures
crypto: rsa-pkcs1pad - Add FIPS 202 SHA-3 support
crypto: FIPS 202 SHA-3 register in hash info for IMA
x509: Add OIDs for FIPS 202 SHA-3 hash and signatures
crypto: ahash - optimize performance when wrapping shash
crypto: ahash - check for shash type instead of not ahash type
crypto: hash - move "ahash wrapping shash" functions to ahash.c
crypto: talitos - stop using crypto_ahash::init
crypto: chelsio - stop using crypto_ahash::init
crypto: ahash - improve file comment
crypto: ahash - remove struct ahash_request_priv
crypto: ahash - remove crypto_ahash_alignmask
crypto: gcm - stop using alignmask of ahash
crypto: chacha20poly1305 - stop using alignmask of ahash
crypto: ccm - stop using alignmask of ahash
net: ipv6: stop checking crypto_ahash_alignmask
...
Merge tag 'integrity-v6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity
Pull integrity updates from Mimi Zohar:
"Four integrity changes: two IMA-overlay updates, an integrity Kconfig
cleanup, and a secondary keyring update"
* tag 'integrity-v6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity:
ima: detect changes to the backing overlay file
certs: Only allow certs signed by keys on the builtin keyring
integrity: fix indentation of config attributes
ima: annotate iint mutex to avoid lockdep false positive warnings
Upon additional review, the new fast path in adiantum_finish() is
missing the call to flush_dcache_page() that scatterwalk_map_and_copy()
was doing. It's apparently debatable whether flush_dcache_page() is
actually needed, as per the discussion at
https://lore.kernel.org/lkml/YYP1lAq46NWzhOf0@casper.infradead.org/T/#u.
However, it appears that currently all the helper functions that write
to a page, such as scatterwalk_map_and_copy(), memcpy_to_page(), and
memzero_page(), do the dcache flush. So do it to be consistent.
Fixes: dadf5e56c9 ("crypto: adiantum - add fast path for single-page messages")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
alg_test_descs[] needs to be in sorted order, since it is used for
binary search. This fixes the following boot-time warning:
testmgr: alg_test_descs entries in wrong order: 'pkcs1pad(rsa,sha512)' before 'pkcs1pad(rsa,sha3-256)'
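For context, here is a minimal sketch of the kind of name lookup that
requires the table to stay sorted (illustrative only, not the exact
testmgr internals):
    #include <linux/bsearch.h>
    #include <linux/string.h>

    struct alg_test_desc {
            const char *alg;
            /* ... test routine, vectors, fips flags ... */
    };

    static int alg_test_desc_cmp(const void *key, const void *elt)
    {
            const struct alg_test_desc *desc = elt;

            return strcmp(key, desc->alg);
    }

    /* Only finds an entry reliably if descs[] is sorted by ->alg. */
    static const struct alg_test_desc *
    find_test(const struct alg_test_desc *descs, size_t n, const char *alg)
    {
            return bsearch(alg, descs, n, sizeof(*descs), alg_test_desc_cmp);
    }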
Fixes: ee62afb9d0 ("crypto: rsa-pkcs1pad - Add FIPS 202 SHA-3 support")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Originally the secondary trusted keyring provided a keyring to which extra
keys may be added, provided those keys were not blacklisted and were
vouched for by a key built into the kernel or already in the secondary
trusted keyring.
On systems with the machine keyring configured, additional keys may also
be vouched for by a key on the machine keyring.
Prevent loading additional certificates directly onto the secondary
keyring, vouched for by keys on the machine keyring, yet allow these
certificates to be loaded onto other trusted keyrings.
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
Add FIPS 202 SHA-3 hash signature support in x509 certificates, pkcs7
signatures, and authenticode signatures. Supports hashes of size 256
and up, as 224 is too weak for any practical purposes.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add support in rsa-pkcs1pad for FIPS 202 SHA-3 hashes, sizes 256 and
up, as 224 is too weak for any practical purpose.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Register FIPS 202 SHA-3 hashes in hash info for IMA and other
users. Sizes 256 and up, as 224 is too weak for any practical
purposes.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The "ahash" API provides access to both CPU-based and hardware offload-
based implementations of hash algorithms. Typically the former are
implemented as "shash" algorithms under the hood, while the latter are
implemented as "ahash" algorithms. The "ahash" API provides access to
both. Various kernel subsystems use the ahash API because they want to
support hashing hardware offload without using a separate API for it.
Yet, the common case is that a crypto accelerator is not actually being
used, and ahash is just wrapping a CPU-based shash algorithm.
This patch optimizes the ahash API for that common case by eliminating
the extra indirect call for each ahash operation on top of shash.
It also fixes the double-counting of crypto stats in this scenario
(though CONFIG_CRYPTO_STATS should *not* be enabled by anyone interested
in performance anyway...), and it eliminates redundant checking of
CRYPTO_TFM_NEED_KEY. As a bonus, it also shrinks struct crypto_ahash.
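The shape of the optimization is roughly the following (a sketch under the
assumption that the wrapping state is tracked by a per-tfm flag and that the
shash_desc lives in the request context; the field and helper names are
approximations, not verbatim kernel symbols):
    #include <crypto/internal/hash.h>

    static int demo_ahash_update(struct ahash_request *req)
    {
            struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);

            /* Backend is an shash: call it directly through the shash_desc
             * kept in the request context, leaving a single indirect call. */
            if (tfm->using_shash)
                    return shash_ahash_update(req, ahash_request_ctx(req));

            /* Native ahash: dispatch through the ahash_alg as before. */
            return crypto_ahash_alg(tfm)->update(req);
    }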
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since the previous patch made crypto_shash_type visible to ahash.c,
change checks for '->cra_type != &crypto_ahash_type' to '->cra_type ==
&crypto_shash_type'. This makes more sense and avoids having to
forward-declare crypto_ahash_type. The result is still the same, since
the type is either shash or ahash here.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The functions that are involved in implementing the ahash API on top of
an shash algorithm belong better in ahash.c, not in shash.c where they
currently are. Move them.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
struct ahash_request_priv is unused, so remove it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the alignmask for ahash and shash algorithms is always 0,
simplify crypto_gcm_create_common() accordingly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the alignmask for ahash and shash algorithms is always 0,
simplify chachapoly_create() accordingly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the alignmask for ahash and shash algorithms is always 0,
simplify crypto_ccm_create_common() accordingly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the alignmask for ahash and shash algorithms is always 0,
crypto_ahash_alignmask() always returns 0 and will be removed. In
preparation for this, stop checking crypto_ahash_alignmask() in testmgr.
As a result of this change,
test_sg_division::offset_relative_to_alignmask and
testvec_config::key_offset_relative_to_alignmask no longer have any
effect on ahash (or shash) algorithms. Therefore, also stop setting
these flags in default_hash_testvec_configs[].
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the alignmask for ahash and shash algorithms is always 0,
simplify the code in authenc accordingly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the alignmask for ahash and shash algorithms is always 0,
simplify the code in authenc accordingly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently, the ahash API checks the alignment of all key and result
buffers against the algorithm's declared alignmask, and for any
unaligned buffers it falls back to manually aligned temporary buffers.
This is virtually useless, however. First, since it does not apply to
the message, its effect is much more limited than e.g. is the case for
the alignmask for "skcipher". Second, the key and result buffers are
given as virtual addresses and cannot (in general) be DMA'ed into, so
drivers end up having to copy to/from them in software anyway. As a
result it's easy to use memcpy() or the unaligned access helpers.
The crypto_hash_walk_*() helper functions do use the alignmask to align
the message. But with one exception those are only used for shash
algorithms being exposed via the ahash API, not for native ahashes, and
aligning the message is not required in this case, especially now that
alignmask support has been removed from shash. The exception is the
n2_core driver, which doesn't set an alignmask.
In any case, no ahash algorithms actually set a nonzero alignmask
anymore. Therefore, remove support for it from ahash. The benefit is
that all the code to handle "misaligned" buffers in the ahash API goes
away, reducing the overhead of the ahash API.
This follows the same change that was made to shash.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Per section 4.c. of the IETF Trust Legal Provisions, "Code Components"
in IETF Documents are licensed on the terms of the BSD-3-Clause license:
https://trustee.ietf.org/documents/trust-legal-provisions/tlp-5/
The term "Code Components" specifically includes ASN.1 modules:
https://trustee.ietf.org/documents/trust-legal-provisions/code-components-list-3/
Add an SPDX identifier as well as a copyright notice pursuant to section
6.d. of the Trust Legal Provisions to all ASN.1 modules in the tree
which are derived from IETF Documents.
Section 4.d. of the Trust Legal Provisions requests that each Code
Component identify the RFC from which it is taken, so link that RFC
in every ASN.1 module.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The health test result in the current code is only given for the raw
time stamp that is currently being processed. This implies that to
react to a health test error, the result must be checked after each
raw time stamp is processed. To avoid this constant checking
requirement, any health test error is now recorded and stored, to be
analyzed at a later time if needed.
This change ensures that the power-up test catches any health test
error. Without this patch, the power-up health test result is not
enforced.
The introduced changes are already in use with the user space version of
the Jitter RNG.
Fixes: 04597c8dd6 ("jitter - add RCT/APT support for different OSRs")
Reported-by: Joachim Vandersmissen <git@jvdsn.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the shash algorithm type does not support nonzero alignmasks,
shash_alg::base.cra_alignmask is always 0, so OR-ing it into another
value is a no-op.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the shash algorithm type does not support nonzero alignmasks,
shash_alg::base.cra_alignmask is always 0, so OR-ing it into another
value is a no-op.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the shash algorithm type does not support nonzero alignmasks,
crypto_shash_alignmask() always returns 0 and will be removed. In
preparation for this, stop checking crypto_shash_alignmask() in testmgr.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the shash algorithm type does not support nonzero alignmasks,
crypto_shash_alignmask() always returns 0 and will be removed. In
preparation for this, stop checking crypto_shash_alignmask() in drbg.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently, the shash API checks the alignment of all message, key, and
digest buffers against the algorithm's declared alignmask, and for any
unaligned buffers it falls back to manually aligned temporary buffers.
This is virtually useless, however. In the case of the message buffer,
cryptographic hash functions internally operate on fixed-size blocks, so
implementations end up needing to deal with byte-aligned data anyway
because the length(s) passed to ->update might not be divisible by the
block size. Word-alignment of the message can theoretically be helpful
for CRCs, like what was being done in crc32c-sparc64. But in practice
it's better for the algorithms to use unaligned accesses or align the
message themselves. A similar argument applies to the key and digest.
In any case, no shash algorithms actually set a nonzero alignmask
anymore. Therefore, remove support for it from shash. The benefit is
that all the code to handle "misaligned" buffers in the shash API goes
away, reducing the overhead of the shash API.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The xcbc template is setting its alignmask to that of its underlying
'cipher'. Yet, it doesn't care itself about how its inputs and outputs
are aligned, which is ostensibly the point of the alignmask. Instead,
xcbc actually just uses its alignmask itself to runtime-align certain
fields in its tfm and desc contexts appropriately for its underlying
cipher. That is almost entirely pointless too, though, since xcbc is
already using the cipher API functions that handle alignment themselves,
and few ciphers set a nonzero alignmask anyway. Also, even without
runtime alignment, an alignment of at least 4 bytes can be guaranteed.
Thus, at best this code is optimizing for the rare case of ciphers that
set an alignmask >= 7, at the cost of hurting the common cases.
Therefore, this patch removes the manual alignment code from xcbc and
makes it stop setting an alignmask.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The vmac template is setting its alignmask to that of its underlying
'cipher'. This doesn't actually accomplish anything useful, though, so
stop doing it. (vmac_update() does have an alignment bug, where it
assumes u64 alignment when it shouldn't, but that bug exists both before
and after this patch.) This is a prerequisite for removing support for
nonzero alignmasks from shash.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The hmac template is setting its alignmask to that of its underlying
unkeyed hash algorithm, and it is aligning the ipad and opad fields in
its tfm context to that alignment. However, hmac does not actually need
any sort of alignment itself, which makes this pointless except to keep
the pads aligned to what the underlying algorithm prefers. But very few
shash algorithms actually set an alignmask, and it is being removed from
those remaining ones; also, after setkey, the pads are only passed to
crypto_shash_import and crypto_shash_export which ignore the alignmask.
Therefore, make the hmac template stop setting an alignmask and simply
use natural alignment for ipad and opad. Note, this change also moves
the pads from the beginning of the tfm context to the end, which makes
much more sense; the variable-length fields should be at the end.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The cmac template is setting its alignmask to that of its underlying
'cipher'. Yet, it doesn't care itself about how its inputs and outputs
are aligned, which is ostensibly the point of the alignmask. Instead,
cmac actually just uses its alignmask itself to runtime-align certain
fields in its tfm and desc contexts appropriately for its underlying
cipher. That is almost entirely pointless too, though, since cmac is
already using the cipher API functions that handle alignment themselves,
and few ciphers set a nonzero alignmask anyway. Also, even without
runtime alignment, an alignment of at least 4 bytes can be guaranteed.
Thus, at best this code is optimizing for the rare case of ciphers that
set an alignmask >= 7, at the cost of hurting the common cases.
Therefore, this patch removes the manual alignment code from cmac and
makes it stop setting an alignmask.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The cbcmac template is aligning a field in its desc context to the
alignmask of its underlying 'cipher', at runtime. This is almost
entirely pointless, since cbcmac is already using the cipher API
functions that handle alignment themselves, and few ciphers set a
nonzero alignmask anyway. Also, even without runtime alignment, an
alignment of at least 4 bytes can be guaranteed.
Thus, at best this code is optimizing for the rare case of ciphers that
set an alignmask >= 7, at the cost of hurting the common cases.
Therefore, remove the manual alignment code from cbcmac.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Most shash algorithms don't have custom ->import and ->export functions,
resulting in the memcpy() based default being used. Yet,
crypto_shash_import() and crypto_shash_export() still make an indirect
call, which is expensive. Therefore, change how the default import and
export are called to make it so that crypto_shash_import() and
crypto_shash_export() don't do an indirect call in this case.
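The resulting logic is roughly along these lines (a sketch, not a verbatim
quote of shash.c):
    #include <crypto/internal/hash.h>
    #include <linux/string.h>

    int demo_shash_export(struct shash_desc *desc, void *out)
    {
            struct crypto_shash *tfm = desc->tfm;
            struct shash_alg *shash = crypto_shash_alg(tfm);

            /* A custom ->export still goes through the indirect call... */
            if (shash->export)
                    return shash->export(desc, out);

            /* ...but the common memcpy() default is now done inline. */
            memcpy(out, shash_desc_ctx(desc), crypto_shash_descsize(tfm));
            return 0;
    }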
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The modular build fails because the self-test code depends on pkcs7
which in turn depends on x509 which contains the self-test.
Split the self-test out into its own module to break the cycle.
Fixes: 3cde3174eb ("certs: Add FIPS selftests")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When an algorithm of the new "lskcipher" type is exposed through the
"skcipher" API, calls to crypto_skcipher_setkey() don't pass on the
CRYPTO_TFM_REQ_FORBID_WEAK_KEYS flag to the lskcipher. This causes
self-test failures for ecb(des), as weak keys are not rejected anymore.
Fix this.
Fixes: 31865c4c4d ("crypto: skcipher - Add lskcipher")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Set the error value to -EINVAL instead of zero when the underlying
name (within "ecb()") fails basic sanity checks.
Fixes: 8aee5d4ebd ("crypto: lskcipher - Add compatibility wrapper around ECB")
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Closes: https://lore.kernel.org/r/202310111323.ZjK7bzjw-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
It is possible to stand up one's own certificates and sign PE-COFF
binaries using SHA-224. However, it never became popular or needed,
since it has similar costs to SHA-256. The Windows Authenticode
infrastructure never had support for SHA-224, and all secureboot keys
used for linux vmlinuz have always been using at least SHA-256.
Given that the point of mscode_parser is to support interoperability
with typical de-facto hashes, remove support for SHA-224 to avoid the
possibility of creating interoperability issues with rhboot/shim,
grub, and non-linux systems trying to sign or verify vmlinux.
SHA-224 itself is not removed from the kernel, as it is truncated
SHA-256. If requested I can write patches to remove SHA-224 support
across all of the drivers.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove support for sha1-signed kernel modules and for importing
sha1-signed x.509 certificates.
rsa-pkcs1pad keeps sha1 padding support, which seems to be used by the
virtio driver.
sha1 remains available as there are many drivers and subsystems using
it. Note that only hmac(sha1) with secret keys remains
cryptographically secure.
In the kernel there are filesystems, IMA, and tpm/pcr that appear to
be using sha1. Maybe they can all start to be slowly upgraded to
something else, e.g. blake3, ParallelHash, or SHAKE256, as needed.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When the source scatterlist is a single page, optimize the first hash
step of adiantum to use crypto_shash_digest() instead of
init/update/final, and use the same local kmap for both hashing the bulk
part and loading the narrow part of the source data.
Likewise, when the destination scatterlist is a single page, optimize
the second hash step of adiantum to use crypto_shash_digest() instead of
init/update/final, and use the same local kmap for both hashing the bulk
part and storing the narrow part of the destination data.
In some cases these optimizations improve performance significantly.
Note: ideally, for optimal performance each architecture should
implement the full "adiantum(xchacha12,aes)" algorithm and fully
optimize the contiguous buffer case to use no indirect calls. That's
not something I've gotten around to doing, though. This commit just
makes a relatively small change that provides some benefit with the
existing template-based approach.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fold shash_digest_unaligned() into its only remaining caller. Also,
avoid a redundant check of CRYPTO_TFM_NEED_KEY by replacing the call to
crypto_shash_init() with shash->init(desc). Finally, replace
shash_update_unaligned() + shash_final_unaligned() with
shash_finup_unaligned() which does exactly that.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For an shash algorithm that doesn't implement ->digest, currently
crypto_shash_digest() with aligned input makes 5 indirect calls: 1 to
shash_digest_unaligned(), 1 to ->init, 2 to ->update ('alignmask + 1'
bytes, then the rest), then 1 to ->final. This is true even if the
algorithm implements ->finup. This is caused by an unnecessary fallback
to code meant to handle unaligned inputs. In fact,
crypto_shash_digest() already does the needed alignment check earlier.
Therefore, optimize the number of indirect calls for aligned inputs to 3
when the algorithm implements ->finup. It remains at 5 when the
algorithm implements neither ->finup nor ->digest.
Similarly, for an shash algorithm that doesn't implement ->finup,
currently crypto_shash_finup() with aligned input makes 4 indirect
calls: 1 to shash_finup_unaligned(), 2 to ->update, and
1 to ->final. Optimize this to 3 calls.
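Concretely, the optimized default digest path boils down to something like
the sketch below, installed when an algorithm provides ->finup but not
->digest, giving three indirect calls in total (->digest, ->init, ->finup);
this is an illustration of the idea, not the literal shash.c code:
    #include <crypto/internal/hash.h>

    static int demo_default_digest(struct shash_desc *desc, const u8 *data,
                                   unsigned int len, u8 *out)
    {
            struct shash_alg *shash = crypto_shash_alg(desc->tfm);

            return shash->init(desc) ?:
                   shash->finup(desc, data, len, out);
    }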
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since commit adad556efc ("crypto: api - Fix built-in testing
dependency failures"), the following warning appears when booting an
x86_64 kernel that is configured with
CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y and CONFIG_CRYPTO_AES_NI_INTEL=y,
even when CONFIG_CRYPTO_XTS=y and CONFIG_CRYPTO_AES=y:
alg: skcipher: skipping comparison tests for xts-aes-aesni because xts(ecb(aes-generic)) is unavailable
This is caused by an issue in the xts template where it allocates an
"aes" single-block cipher without declaring a dependency on it via the
crypto_spawn mechanism. This issue was exposed by the above commit
because it reversed the order that the algorithms are tested in.
Specifically, when "xts(ecb(aes-generic))" is instantiated and tested
during the comparison tests for "xts-aes-aesni", the "xts" template
allocates an "aes" crypto_cipher for encrypting tweaks. This resolves
to "aes-aesni". (Getting "aes-aesni" instead of "aes-generic" here is a
bit weird, but it's apparently intended.) Due to the above-mentioned
commit, the testing of "aes-aesni", and the finalization of its
registration, now happens at this point instead of before. At the end
of that, crypto_remove_spawns() unregisters all algorithm instances that
depend on a lower-priority "aes" implementation such as "aes-generic"
but that do not depend on "aes-aesni". However, because "xts" does not
use the crypto_spawn mechanism for its "aes", its dependency on
"aes-aesni" is not recognized by crypto_remove_spawns(). Thus,
crypto_remove_spawns() unexpectedly unregisters "xts(ecb(aes-generic))".
Fix this issue by making the "xts" template use the crypto_spawn
mechanism for its "aes" dependency, like what other templates do.
Note, this fix could be applied as far back as commit f1c131b454
("crypto: xts - Convert to skcipher"). However, the issue only got
exposed by the much more recent changes to how the crypto API runs the
self-tests, so there should be no need to backport this to very old
kernels. Also, an alternative fix would be to flip the list iteration
order in crypto_start_tests() to restore the original testing order.
I'm thinking we should do that too, since the original order seems more
natural, but it shouldn't be relied on for correctness.
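The shape of the fix is sketched below (identifiers abridged and
illustrative): the single-block "aes" becomes a spawn held in the instance
context, so the dependency is visible to crypto_remove_spawns(), instead of
being allocated ad hoc with crypto_alloc_cipher() at init time.
    #include <crypto/internal/cipher.h>
    #include <crypto/internal/skcipher.h>

    struct demo_xts_instance_ctx {
            struct crypto_skcipher_spawn spawn;     /* the ecb(aes) part  */
            struct crypto_cipher_spawn tweak_spawn; /* the plain "aes" part */
    };

    static int demo_xts_grab_tweak(struct crypto_instance *inst,
                                   struct demo_xts_instance_ctx *ctx,
                                   const char *cipher_name, u32 mask)
    {
            /* Registering the spawn records the dependency on the chosen
             * "aes" implementation for the lifetime of the instance. */
            return crypto_grab_cipher(&ctx->tweak_spawn, inst, cipher_name,
                                      0, mask);
    }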
Fixes: adad556efc ("crypto: api - Fix built-in testing dependency failures")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The new sign/verify code broke the case of pkcs1pad without a
hash algorithm. Fix it by setting issig correctly for this case.
Fixes: 63ba4d6759 ("KEYS: asymmetric: Use new crypto interface without scatterlists")
Cc: stable@vger.kernel.org # v6.5
Reported-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Denis Kenzior <denkenz@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In case a health test error occurs during runtime, the power-up health
tests are rerun to verify that the noise source is still good and
that the reported health test error was an outlier. For performing this
power-up health test, the already existing entropy collector instance
is used instead of allocating a new one. This change has the following
implications:
* The noise that is collected as part of the newly run health tests is
inserted into the entropy collector and thus stirs the existing
data present in there further. Thus, the entropy collected during
the health test is not wasted. This is also allowed by SP800-90B.
* The power-on health test is not affected by the state of the entropy
collector, because it resets the APT / RCT state. The remainder of
the state is unrelated to the health test as it is only applied to
newly obtained time stamps.
This change also fixes a bug report about an allocation while in an
atomic lock (the lock is taken in jent_kcapi_random, jent_read_entropy
is called and this can call jent_entropy_init).
Fixes: 04597c8dd6 ("jitter - add RCT/APT support for different OSRs")
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As skcipher spawns may be of the type lskcipher, only the common
fields may be accessed. This was already the case but use the
correct helpers to make this more obvious.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As skcipher spawns may be of the type lskcipher, only the common
fields may be accessed. This was already the case but use the
correct helpers to make this more obvious.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As skcipher spawns may be of the type lskcipher, only the common
fields may be accessed. This was already the case but use the
correct helpers to make this more obvious.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As skcipher spawns may be of the type lskcipher, only the common
fields may be accessed. This was already the case but use the
correct helpers to make this more obvious.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As skcipher spawns may be of the type lskcipher, only the common
fields may be accessed. This was already the case but use the
correct helpers to make this more obvious.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As skcipher spawns may be of the type lskcipher, only the common
fields may be accessed. This was already the case but use the
correct helpers to make this more obvious.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As skcipher spawns may be of the type lskcipher, only the common
fields may be accessed. This was already the case but use the
correct helpers to make this more obvious.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As skcipher spawns may be of the type lskcipher, only the common
fields may be accessed. This was already the case but use the
correct helpers to make this more obvious.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As skcipher spawns may be of the type lskcipher, only the common
fields may be accessed. This was already the case but use the
correct helpers to make this more obvious.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As skcipher spawns may be of the type lskcipher, only the common
fields may be accessed. This was already the case but use the
correct helpers to make this more obvious.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As skcipher spawns may be of the type lskcipher, only the common
fields may be accessed. This was already the case but use the
correct helpers to make this more obvious.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As skcipher spawns may be of the type lskcipher, only the common
fields may be accessed. This was already the case but use the
correct helpers to make this more obvious.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add code to handle an underlying lskcipher object when grabbing
an skcipher spawn.
Fixes: 31865c4c4d ("crypto: skcipher - Add lskcipher")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As lskcipher requires the ecb wrapper for the transition add an
explicit dependency on it so that it is always present. This can
be removed once all simple ciphers have been converted to lskcipher.
Reported-by: Nathan Chancellor <nathan@kernel.org>
Fixes: 705b52fef3 ("crypto: cbc - Convert from skcipher to lskcipher")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove zlib-deflate test vectors as it no longer exists in the kernel.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Remove the implementation of zlib-deflate because it is completely
unused in the kernel.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Remove support for md4 and md5 hashes and signatures in the x.509
certificate parser, pkcs7 signature parser, and authenticode parser.
All of these are insecure or broken, and everyone has long ago
migrated to alternative hash implementations.
Also remove the md2 & md3 OIDs, which already didn't have support.
This is also likely the last user of md4 in the kernel, and thus
crypto/md4.c and the related tests in tcrypt & testmgr can likely be
removed. Other users such as cifs, smbfs, ext, modpost and sumversions
have their own internal implementations as needed.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The ASN.1 module in RFC 5280 appendix A.1 uses EXPLICIT TAGS whereas the
one in appendix A.2 uses IMPLICIT TAGS.
The kernel's simplified asn1_compiler.c always uses EXPLICIT TAGS, hence
definitions from appendix A.2 need to be annotated as IMPLICIT for the
compiler to generate RFC-compliant code.
In particular, GeneralName is defined in appendix A.2:
GeneralName ::= CHOICE {
        otherName                       [0]     OtherName,
        ...
        dNSName                         [2]     IA5String,
        x400Address                     [3]     ORAddress,
        directoryName                   [4]     Name,
        ...
        }
Because appendix A.2 uses IMPLICIT TAGS, the IA5String tag (0x16) of a
dNSName is not rendered. Instead, the string directly succeeds the
[2] tag (0x82).
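As a worked illustration, the dNSName "example.com" would be encoded as
follows under the two tagging regimes:
    /* IMPLICIT TAGS (appendix A.2): the [2] tag replaces the IA5String tag. */
    static const unsigned char dnsname_implicit[] = {
            0x82, 0x0b,                     /* [2] primitive, length 11 */
            'e', 'x', 'a', 'm', 'p', 'l', 'e', '.', 'c', 'o', 'm',
    };

    /* EXPLICIT TAGS would instead wrap the IA5String in a constructed [2]. */
    static const unsigned char dnsname_explicit[] = {
            0xa2, 0x0d,                     /* [2] constructed, length 13 */
            0x16, 0x0b,                     /*   IA5String, length 11     */
            'e', 'x', 'a', 'm', 'p', 'l', 'e', '.', 'c', 'o', 'm',
    };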
Likewise, the SEQUENCE tag (0x30) of an OtherName is not rendered.
Instead, only the constituents of the SEQUENCE are rendered: An OID tag
(0x06), a [0] tag (0xa0) and an ANY tag. That's three consecutive tags
instead of a single encompassing tag.
The situation is different for x400Address and directoryName choices:
They reference ORAddress and Name, which are defined in appendix A.1,
therefore use EXPLICIT TAGS.
The AKID ASN.1 module is missing several IMPLICIT annotations, hence
isn't RFC-compliant. In the unlikely event that an AKID contains other
elements beside a directoryName, users may see parse errors.
Add the missing annotations but do not tag this commit for stable as I
am not aware of any issue reports. Fixes are only eligible for stable
if they're "obviously correct" and with ASN.1 there's no such thing.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All callers ignore the return value, so simplify by not providing one.
Note that crypto_engine_exit() is typically called in a device driver's
remove path (or the error path in probe), where errors cannot be handled
anyhow.
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The oversampling rate used by the Jitter RNG allows the configuration of
the heuristically implied entropy in one timing measurement. This
entropy rate is (1 / OSR) bits of entropy per time stamp.
Considering that the Jitter RNG now supports APT/RCT health tests for
different OSRs, allow this value to be configured at compile time to
support systems with limited amount of entropy in their timer.
The allowed range of OSR values complies with the APT/RCT cutoff health
test values which range from 1 through 15.
The default value of the OSR selection support is left at 1 which is the
current default. Thus, the addition of the configuration support does
not alter the default Jitter RNG behavior.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The memory size consumed by the Jitter RNG is one contributing factor in
the amount of entropy that is gathered. As the amount of entropy
directly correlates with the distance of the memory from the CPU, the
caches that are possibly present on a given system have an impact on the
collected entropy.
Thus, the kernel compile time should offer a means to configure the
amount of memory used by the Jitter RNG. Although this option could be
turned into a runtime option (e.g. a kernel command line option), it
should remain a compile time option as otherwise administrators who may
not have performed an entropy assessment may select a value that is
inappropriate.
The default value selected by the configuration is identical to the
current Jitter RNG value. Thus, the patch should not lead to any change
in the Jitter RNG behavior.
To accommodate larger memory buffers, kvzalloc / kvfree is used.
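A minimal sketch of that allocation pattern, with an illustrative size
constant standing in for the Kconfig-derived value:
    #include <linux/slab.h>

    #define DEMO_JENT_MEMSIZE (64 * 1024)   /* illustrative, Kconfig-driven */

    static void *demo_alloc_jent_mem(void)
    {
            /* Falls back to vmalloc for larger buffers. */
            return kvzalloc(DEMO_JENT_MEMSIZE, GFP_KERNEL);
    }

    static void demo_free_jent_mem(void *mem)
    {
            kvfree(mem);    /* handles both kmalloc and vmalloc memory */
    }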
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The oversampling rate (OSR) value specifies the heuristically implied
entropy in the recorded data - H_submitter = 1/osr. A different entropy
estimate implies a different APT/RCT cutoff value. This change adds
support for OSRs 1 through 15. This OSR can be selected by the caller
of the Jitter RNG.
For this patch, the caller still uses one hard-coded OSR. A subsequent
patch allows this value to be configured.
In addition, the power-up self test is adjusted as follows:
* It allows the caller to provide an oversampling rate that should be
tested with - commonly it should be the same as used for the actual
runtime operation. This therefore makes the power-up testing consistent
with the runtime operation.
* It now calls jent_measure_jitter (i.e. collects the full entropy
that can possibly be harvested by the Jitter RNG) instead of only
jent_condition_data (which only returns the entropy harvested from
the conditioning component). This should now alleviate reports where
the Jitter RNG initialization thinks there is too little entropy.
* The power-up test now solely relies on the (enhanced) APT and RCT
test that is used as a health test at runtime.
The code allowing the different OSRs as well as the power-up test
changes are present in the user space version of the Jitter RNG 3.4.1
and thus was already in production use for some time.
Reported-by "Ospan, Abylay" <aospan@amazon.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds two different implementations of ECB. First of
all an lskcipher wrapper around existing ciphers is introduced as
a temporary transition aid.
Secondly a permanent lskcipher template is also added. It's simply
a wrapper around the underlying lskcipher algorithm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As an aid to the transition from cipher algorithm implementations
to lskcipher, add a temporary wrapper when creating simple lskcipher
templates by using ecb(X) instead of X if an lskcipher implementation
of X cannot be found.
This can be reverted once all cipher implementations have switched
over to lskcipher.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a new API type lskcipher designed for taking straight kernel
pointers instead of SG lists. Its relationship to skcipher will
be analogous to that between shash and ahash.
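A minimal sketch of what that looks like for a caller, with the prototype
inferred from this description (straight pointers plus an IV/state buffer)
rather than quoted from the header:
    #include <crypto/skcipher.h>

    static int demo_lskcipher_encrypt_buf(struct crypto_lskcipher *tfm,
                                          const u8 *src, u8 *dst,
                                          unsigned int len, u8 *iv)
    {
            /* Plain kernel pointers in and out; no scatterlist setup and
             * no request object, much like shash versus ahash. */
            return crypto_lskcipher_encrypt(tfm, src, dst, len, iv);
    }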
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move the macro CRYPTO_ALG_TYPE_AHASH_MASK out of linux/crypto.h
and into crypto/ahash.c so that it's not visible to users of the
Crypto API.
Also remove the unused CRYPTO_ALG_TYPE_HASH_MASK macro.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add the helper crypto_has_aead. This is meant to replace the
existing use of crypto_has_alg to locate AEAD algorithms.
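Assumed usage, mirroring crypto_has_alg():
    #include <crypto/aead.h>

    static bool demo_gcm_available(void)
    {
            /* Probe by name before committing to an allocation. */
            return crypto_has_aead("gcm(aes)", 0, 0);
    }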
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
tfm is assigned before it is used, so it does not need to be
initialized at declaration.
Signed-off-by: Li zeming <zeming@nfschina.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the sm2_compute_z_digest() function, the newly allocated structure
mpi_ec_ctx is used but is never initialized, which will cause a crash
when performing subsequent operations.
Fixes: e5221fa6a3 ("KEYS: asymmetric: Move sm2 code into x509_public_key")
Cc: stable@vger.kernel.org # v6.5
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
We found a hungtask bug in test_aead_vec_cfg as follows:
INFO: task cryptomgr_test:391009 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Call trace:
__switch_to+0x98/0xe0
__schedule+0x6c4/0xf40
schedule+0xd8/0x1b4
schedule_timeout+0x474/0x560
wait_for_common+0x368/0x4e0
wait_for_completion+0x20/0x30
wait_for_completion+0x20/0x30
test_aead_vec_cfg+0xab4/0xd50
test_aead+0x144/0x1f0
alg_test_aead+0xd8/0x1e0
alg_test+0x634/0x890
cryptomgr_test+0x40/0x70
kthread+0x1e0/0x220
ret_from_fork+0x10/0x18
Kernel panic - not syncing: hung_task: blocked tasks
For padata_do_parallel, when the return err is 0 or -EBUSY, it will call
wait_for_completion(&wait->completion) in test_aead_vec_cfg. In normal
case, aead_request_complete() will be called in pcrypt_aead_serial and the
return err is 0 for padata_do_parallel. But, when pinst->flags is
PADATA_RESET, the return err is -EBUSY for padata_do_parallel, and it
won't call aead_request_complete(). Therefore, test_aead_vec_cfg will
hung at wait_for_completion(&wait->completion), which will cause
hungtask.
The problem comes about as follows:
(padata_do_parallel)                 |
  rcu_read_lock_bh();                |
  err = -EINVAL;                     | (padata_replace)
                                     |   pinst->flags |= PADATA_RESET;
  err = -EBUSY                       |
  if (pinst->flags & PADATA_RESET)   |
  rcu_read_unlock_bh()               |
  return err                         |
In order to resolve the problem, we replace the return err -EBUSY with
-EAGAIN, which means parallel_data is changing, and the caller should call
it again.
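A hypothetical caller pattern under the new semantics (illustrative only;
the actual pcrypt change may handle this differently):
    #include <linux/padata.h>

    static int demo_submit(struct padata_shell *ps,
                           struct padata_priv *padata, int *cb_cpu)
    {
            int err;

            /* -EAGAIN now simply means "parallel_data is changing,
             * try again". */
            do {
                    err = padata_do_parallel(ps, padata, cb_cpu);
            } while (err == -EAGAIN);

            return err;
    }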
v3:
remove retry and just change the return err.
v2:
introduce padata_try_do_parallel() in pcrypt_aead_encrypt and
pcrypt_aead_decrypt to solve the hungtask.
Signed-off-by: Lu Jialin <lujialin4@huawei.com>
Signed-off-by: Guo Zihua <guozihua@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'v6.6-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
"API:
- Move crypto engine callback from tfm ctx into algorithm object
- Fix atomic sleep bug in crypto_destroy_instance
- Move lib/mpi into lib/crypto
Algorithms:
- Add chacha20 and poly1305 implementation for powerpc p10
Drivers:
- Add AES skcipher and aead support to starfive
- Add Dynamic Boost Control support to ccp
- Add support for STM32P13 platform to stm32"
* tag 'v6.6-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (149 commits)
Revert "dt-bindings: crypto: qcom,prng: Add SM8450"
crypto: chelsio - Remove unused declarations
X.509: if signature is unsupported skip validation
crypto: qat - fix crypto capability detection for 4xxx
crypto: drivers - Explicitly include correct DT includes
crypto: engine - Remove crypto_engine_ctx
crypto: zynqmp - Use new crypto_engine_op interface
crypto: virtio - Use new crypto_engine_op interface
crypto: stm32 - Use new crypto_engine_op interface
crypto: jh7110 - Use new crypto_engine_op interface
crypto: rk3288 - Use new crypto_engine_op interface
crypto: omap - Use new crypto_engine_op interface
crypto: keembay - Use new crypto_engine_op interface
crypto: sl3516 - Use new crypto_engine_op interface
crypto: caam - Use new crypto_engine_op interface
crypto: aspeed - Remove non-standard sha512 algorithms
crypto: aspeed - Use new crypto_engine_op interface
crypto: amlogic - Use new crypto_engine_op interface
crypto: sun8i-ss - Use new crypto_engine_op interface
crypto: sun8i-ce - Use new crypto_engine_op interface
...
Merge tag 'tpmdd-v6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd
Pull tpm updates from Jarkko Sakkinen:
- Restrict linking of keys to .ima and .evm keyrings based on
digitalSignature attribute in the certificate
- PowerVM: load machine owner keys into the .machine [1] keyring
- PowerVM: load module signing keys into the secondary trusted keyring
(keys blessed by the vendor)
- tpm_tis_spi: half-duplex transfer mode
- tpm_tis: retry corrupted transfers
- Apply revocation list (.mokx) to an all system keyrings (e.g.
.machine keyring)
Link: https://blogs.oracle.com/linux/post/the-machine-keyring [1]
* tag 'tpmdd-v6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd:
certs: Reference revocation list for all keyrings
tpm/tpm_tis_synquacer: Use module_platform_driver macro to simplify the code
tpm: remove redundant variable len
tpm_tis: Resend command to recover from data transfer errors
tpm_tis: Use responseRetry to recover from data transfer errors
tpm_tis: Move CRC check to generic send routine
tpm_tis_spi: Add hardware wait polling
KEYS: Replace all non-returning strlcpy with strscpy
integrity: PowerVM support for loading third party code signing keys
integrity: PowerVM machine keyring enablement
integrity: check whether imputed trust is enabled
integrity: remove global variable from machine_keyring.c
integrity: ignore keys failing CA restrictions on non-UEFI platform
integrity: PowerVM support for loading CA keys on machine keyring
integrity: Enforce digitalSignature usage in the ima and evm keyrings
KEYS: DigitalSignature link restriction
tpm_tis: Revert "tpm_tis: Disable interrupts on ThinkPad T490s"
When the hash algorithm for the signature is not available the digest size
is 0 and the signature in the certificate is marked as unsupported.
When validating a self-signed certificate, this needs to be checked,
because otherwise trying to validate the signature will fail with a
warning:
Loading compiled-in X.509 certificates
WARNING: CPU: 0 PID: 1 at crypto/rsa-pkcs1pad.c:537 \
pkcs1pad_verify+0x46/0x12c
...
Problem loading in-kernel X.509 certificate (-22)
Signed-off-by: Thore Sommer <public@thson.de>
Cc: stable@vger.kernel.org # v4.7+
Fixes: 6c2dc5ae4a ("X.509: Extract signature digest and make self-signed cert checks earlier")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Syzbot was able to trigger use of uninitialized memory in
af_alg_free_resources.
The bug is caused by the missing initialization of rsgl->sgl.need_unpin
before adding it to rsgl_list. In case of extract_iter_to_sg() failure,
rsgl is then left with an uninitialized need_unpin, which is read during
cleanup:
BUG: KMSAN: uninit-value in af_alg_free_sg crypto/af_alg.c:545 [inline]
BUG: KMSAN: uninit-value in af_alg_free_areq_sgls crypto/af_alg.c:778 [inline]
BUG: KMSAN: uninit-value in af_alg_free_resources+0x3d1/0xf60 crypto/af_alg.c:1117
af_alg_free_sg crypto/af_alg.c:545 [inline]
af_alg_free_areq_sgls crypto/af_alg.c:778 [inline]
af_alg_free_resources+0x3d1/0xf60 crypto/af_alg.c:1117
_skcipher_recvmsg crypto/algif_skcipher.c:144 [inline]
...
Uninit was created at:
slab_post_alloc_hook+0x12f/0xb70 mm/slab.h:767
slab_alloc_node mm/slub.c:3470 [inline]
__kmem_cache_alloc_node+0x536/0x8d0 mm/slub.c:3509
__do_kmalloc_node mm/slab_common.c:984 [inline]
__kmalloc+0x121/0x3c0 mm/slab_common.c:998
kmalloc include/linux/slab.h:586 [inline]
sock_kmalloc+0x128/0x1c0 net/core/sock.c:2683
af_alg_alloc_areq+0x41/0x2a0 crypto/af_alg.c:1188
_skcipher_recvmsg crypto/algif_skcipher.c:71 [inline]
Fixes: c1abe6f570 ("crypto: af_alg: Use extract_iter_to_sg() to create scatterlists")
Reported-and-tested-by: syzbot+cba21d50095623218389@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=cba21d50095623218389
Signed-off-by: Pavel Skripkin <paskripkin@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than having the callback in the request, move it into the
crypto_alg object. This avoids having crypto_engine look into the
request context, which is private to the driver.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Create crypto/internal/engine.h to house details that should not
be used by drivers. It is empty for the time being.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The engine file does not need the actual crypto type definitions
so move those header inclusions to where they are actually used.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The callbacks for prepare and unprepare request in crypto_engine
are superfluous. They can be done directly from do_one_request.
Move the code into do_one_request and remove the unused callbacks.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a new link restriction. Restrict the addition of keys in a keyring
based on the key having digitalSignature usage set. Additionally, verify
the new certificate against the ones in the system keyrings. Add two
additional functions to use the new restriction within either the builtin
or secondary keyrings.
[jarkko@kernel.org: Fix checkpatch.pl --strict issues]
Signed-off-by: Eric Snowberg <eric.snowberg@oracle.com>
Reviewed-and-tested-by: Mimi Zohar <zohar@linux.ibm.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
The RCT cutoff values are correct, but they don't exactly match the ones
one would expect when computing them using the formula in SP800-90B. This
discrepancy is due to the fact that the Jitter Entropy RCT starts at 1. To
avoid any confusion by future reviewers, add some comments and explicitly
subtract 1 from the "correct" cutoff values in the definitions.
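As a worked example (values assumed for illustration: alpha = 2^-30 and one
bit of entropy per sample, no oversampling; the macro name below is
hypothetical):
  /* SP800-90B section 4.4.1 repetition count test cutoff:
   *     C = 1 + ceil(-log2(alpha) / H) = 1 + 30 = 31
   * Because the Jitter Entropy RCT counter starts at 1 rather than 0,
   * the value kept in the code is the formula result minus 1.
   */
  #define JENT_RCT_CUTOFF_EXAMPLE (31 - 1)    /* == 30 */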
Signed-off-by: Joachim Vandersmissen <git@jvdsn.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function crypto_drop_spawn expects to be called in process
context. However, when an instance is unregistered while it still
has active users, the last user may cause the instance to be freed
in atomic context.
Fix this by delaying the freeing to a work queue.
Fixes: 6bfd48096f ("[CRYPTO] api: Added spawns")
Reported-by: Florent Revest <revest@chromium.org>
Reported-by: syzbot+d769eed29cc42d75e2a3@syzkaller.appspotmail.com
Reported-by: syzbot+610ec0671f51e838436e@syzkaller.appspotmail.com
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Florent Revest <revest@chromium.org>
Acked-by: Florent Revest <revest@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Calls to lookup_user_key() require a corresponding key_put() to
decrement the usage counter. Once it reaches zero, we schedule key GC.
Therefore decrement struct key.usage in alg_set_by_key_serial().
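The balanced get/put then looks roughly like this (the flag and permission
arguments are placeholders, not the actual alg_set_by_key_serial() code):
  key_ref_t kref = lookup_user_key(serial, 0, KEY_NEED_SEARCH);
  struct key *key;

  if (IS_ERR(kref))
          return PTR_ERR(kref);
  key = key_ref_to_ptr(kref);
  /* ... copy out the key material ... */
  key_put(key);    /* drop the reference taken by lookup_user_key() */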
Fixes: 7984ceb134 ("crypto: af_alg - Support symmetric encryption via keyring keys")
Cc: <stable@vger.kernel.org>
Signed-off-by: Frederick Lawler <fred@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix af_alg_alloc_areq() to initialise areq->first_rsgl.sgl.sgt.sgl to point
to the scatterlist array in areq->first_rsgl.sgl.sgl.
Without this, the gcm-aes-s390 driver will oops when it tries to do
gcm_walk_start() on req->dst because req->dst is set to the value of
areq->first_rsgl.sgl.sgl by _aead_recvmsg() calling
aead_request_set_crypt().
The problem comes if an empty ciphertext is passed: the loop in
af_alg_get_rsgl() just passes straight out and doesn't set areq->first_rsgl
up.
This isn't a problem on x86_64 using gcmaes_crypt_by_sg() because, as far
as I can tell, that ignores req->dst and only uses req->src[*].
[*] Is this a bug in aesni-intel_glue.c?
The s390x oops looks something like:
Unable to handle kernel pointer dereference in virtual kernel address space
Failing address: 0000000a00000000 TEID: 0000000a00000803
Fault in home space mode while using kernel ASCE.
AS:00000000a43a0007 R3:0000000000000024
Oops: 003b ilc:2 [#1] SMP
...
Call Trace:
[<000003ff7fc3d47e>] gcm_walk_start+0x16/0x28 [aes_s390]
[<00000000a2a342f2>] crypto_aead_decrypt+0x9a/0xb8
[<00000000a2a60888>] aead_recvmsg+0x478/0x698
[<00000000a2e519a0>] sock_recvmsg+0x70/0xb0
[<00000000a2e51a56>] sock_read_iter+0x76/0xa0
[<00000000a273e066>] vfs_read+0x26e/0x2a8
[<00000000a273e8c4>] ksys_read+0xbc/0x100
[<00000000a311d808>] __do_syscall+0x1d0/0x1f8
[<00000000a312ff30>] system_call+0x70/0x98
Last Breaking-Event-Address:
[<000003ff7fc3e6b4>] gcm_aes_crypt+0x104/0xa68 [aes_s390]
Fixes: c1abe6f570 ("crypto: af_alg: Use extract_iter_to_sg() to create scatterlists")
Reported-by: Ondrej Mosnáček <omosnacek@gmail.com>
Link: https://lore.kernel.org/r/CAAUqJDuRkHE8fPgZJGaKjUjd3QfGwzfumuJBmStPqBhubxyk_A@mail.gmail.com/
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: Sven Schnelle <svens@linux.ibm.com>
cc: Harald Freudenberger <freude@linux.vnet.ibm.com>
cc: "David S. Miller" <davem@davemloft.net>
cc: Paolo Abeni <pabeni@redhat.com>
cc: linux-crypto@vger.kernel.org
cc: linux-s390@vger.kernel.org
cc: regressions@lists.linux.dev
Tested-by: Sven Schnelle <svens@linux.ibm.com>
Tested-by: Ondrej Mosnáček <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The key buffer might contain the private part of the key, so use
kfree_sensitive() to free it.
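The resulting one-line change, shown only as an illustration:
  kfree_sensitive(key);    /* zeroes the allocation before freeing, unlike kfree(key) */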
Signed-off-by: Mahmoud Adam <mngyadam@amazon.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
These functions are defined in the sig.c file, but not called elsewhere,
so delete these unused functions.
crypto/sig.c:24:34: warning: unused function '__crypto_sig_tfm'.
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Closes: https://bugzilla.openanolis.cn/show_bug.cgi?id=5701
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
strlcpy() reads the entire source buffer first.
This read may exceed the destination size limit.
This is both inefficient and can lead to linear read
overflows if a source string is not NUL-terminated [1].
In an effort to remove strlcpy() completely [2], replace
strlcpy() here with strscpy().
Direct replacement is safe here since the -errno return value is used
to check for truncation instead of comparing against sizeof(dest).
[1] https://www.kernel.org/doc/html/latest/process/deprecated.html#strlcpy
[2] https://github.com/KSPP/linux/issues/89
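A minimal illustration of the replacement pattern (the buffer name and size
are made up for the example):
  char label[32];
  ssize_t ret = strscpy(label, src, sizeof(label));

  if (ret < 0)
          return ret;    /* -E2BIG: 'src' did not fit; no sizeof() comparison needed */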
Signed-off-by: Azeem Shaikh <azeemshaikh38@gmail.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix kernel-doc warnings in verify_pefile:
crypto/asymmetric_keys/verify_pefile.c:423: warning: Excess function
parameter 'trust_keys' description in 'verify_pefile_signature'
crypto/asymmetric_keys/verify_pefile.c:423: warning: Function parameter
or member 'trusted_keys' not described in 'verify_pefile_signature'
Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The 'MSG_MORE' state of the previous sendmsg() is fetched without the
socket lock held, so two sendmsg calls can race. This can be seen with a
large sendfile() as that now does a series of sendmsg() calls, and if a
write() comes in on the same socket at an inopportune time, it can flip the
state.
Fix this by moving the fetch of ctx->more inside the socket lock.
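A sketch of the reordering, assuming the hash_sendmsg() context named in the
Fixes tag (variable names are illustrative):
  lock_sock(sk);
  continuing = ctx->more;    /* previously read before lock_sock(), hence racy */
  /* ... append and hash the data while the lock is held ... */
  release_sock(sk);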
Fixes: c662b043cd ("crypto: af_alg/hash: Support MSG_SPLICE_PAGES")
Reported-by: syzbot+689ec3afb1ef07b766b2@syzkaller.appspotmail.com
Link: https://lore.kernel.org/r/000000000000554b8205ffdea64e@google.com/
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: syzbot+689ec3afb1ef07b766b2@syzkaller.appspotmail.com
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: Paolo Abeni <pabeni@redhat.com>
cc: "David S. Miller" <davem@davemloft.net>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
These error paths should return the appropriate error codes instead of
returning success.
Fixes: 63ba4d6759 ("KEYS: asymmetric: Use new crypto interface without scatterlists")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
af_alg_sendmsg() takes data-to-be-copied that's provided by write(),
send(), sendmsg() and similar into pages that it allocates and will merge
new data into the last page in the list, based on the value of ctx->merge.
Now that af_alg_sendmsg() accepts MSG_SPLICE_PAGES, it adds spliced pages
directly into the list and then incorrectly appends data to them if there's
space left because ctx->merge says that it can. This was cleared by
af_alg_sendpage(), but that got lost.
Fix this by skipping the merge if MSG_SPLICE_PAGES is specified and
clearing ctx->merge after MSG_SPLICE_PAGES has added stuff to the list.
Fixes: bf63e250c4 ("crypto: af_alg: Support MSG_SPLICE_PAGES")
Reported-by: Ondrej Mosnáček <omosnacek@gmail.com>
Link: https://lore.kernel.org/r/CAAUqJDvFuvms55Td1c=XKv6epfRnnP78438nZQ-JKyuCptGBiQ@mail.gmail.com/
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: Paolo Abeni <pabeni@redhat.com>
cc: "David S. Miller" <davem@davemloft.net>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'v6.5-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
"API:
- Add linear akcipher/sig API
- Add tfm cloning (hmac, cmac)
- Add statesize to crypto_ahash
Algorithms:
- Allow only odd e and restrict value in FIPS mode for RSA
- Replace LFSR with SHA3-256 in jitter
- Add interface for gathering of raw entropy in jitter
Drivers:
- Fix race on data_avail and actual data in hwrng/virtio
- Add hash and HMAC support in starfive
- Add RSA algo support in starfive
- Add support for PCI device 0x156E in ccp"
* tag 'v6.5-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (85 commits)
crypto: akcipher - Do not copy dst if it is NULL
crypto: sig - Fix verify call
crypto: akcipher - Set request tfm on sync path
crypto: sm2 - Provide sm2_compute_z_digest when sm2 is disabled
hwrng: imx-rngc - switch to DEFINE_SIMPLE_DEV_PM_OPS
hwrng: st - keep clock enabled while hwrng is registered
hwrng: st - support compile-testing
hwrng: imx-rngc - fix the timeout for init and self check
KEYS: asymmetric: Use new crypto interface without scatterlists
KEYS: asymmetric: Move sm2 code into x509_public_key
KEYS: Add forward declaration in asymmetric-parser.h
crypto: sig - Add interface for sign/verify
crypto: akcipher - Add sync interface without SG lists
crypto: cipher - On clone do crypto_mod_get()
crypto: api - Add __crypto_alloc_tfmgfp
crypto: api - Remove crypto_init_ops()
crypto: rsa - allow only odd e and restrict value in FIPS mode
crypto: geniv - Split geniv out of AEAD Kconfig option
crypto: algboss - Add missing dependency on RNG2
crypto: starfive - Add RSA algo support
...
As signature verification has a NULL destination buffer, the pointer
needs to be checked before the memcpy is done.
Fixes: addde1f2c9 ("crypto: akcipher - Add sync interface without SG lists")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The dst SG list needs to be set to NULL for verify calls. Do
this as otherwise the underlying algorithm may fail.
Furthermore the digest needs to be copied just like the source.
Fixes: 6cb8815f41 ("crypto: sig - Add interface for sign/verify")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The sm2 certificate requires a modified digest. Move the code
for the hashing from the signature verification path into the
code where we generate the digest.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Split out the sign/verify functionality from the existing akcipher
interface. Most algorithms in akcipher either support encryption
and decryption, or signing and verification. Only one supports both.
As a signature algorithm may not support encryption at all, these
two should be separated.
For now sig is simply a wrapper around akcipher as all algorithms
remain unchanged. This is a first step and allows users to start
allocating sig instead of akcipher.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The only user of akcipher does not use SG lists. Therefore forcing
users to use SG lists only results in unnecessary overhead. Add a new
interface that supports arbitrary kernel pointers.
For the time being the copy will be performed unconditionally. But
this will go away once the underlying interface is updated.
Note also that only encryption and decryption are addressed by this
patch, as sign/verify will go into a new interface (sig).
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The refcounter of the underlying algorithm should be incremented,
otherwise it will be destroyed together with the cloned cipher,
wrecking the original cipher.
Signed-off-by: Dmitry Safonov <dima@arista.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use it straight away in crypto_clone_cipher(), as that is not meant to
sleep.
Fixes: 51d8d6d0f4 ("crypto: cipher - Add crypto_clone_cipher")
Signed-off-by: Dmitry Safonov <dima@arista.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Purge crypto_type::init() as well.
The last user seems to be gone with commit d63007eb95 ("crypto:
ablkcipher - remove deprecated and unused ablkcipher support").
Signed-off-by: Dmitry Safonov <dima@arista.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Check that the RSA public exponent is odd and that its value satisfies
2^16 < e < 2^256.
FIPS 186-5 DSS (page 35) [1] specifies that:
1. The public exponent e shall be selected with the following constraints:
(a) The public verification exponent e shall be selected prior to
generating the primes, p and q, and the private signature exponent
d.
(b) The exponent e shall be an odd positive integer such that:
2^16 < e < 2^256.
[1] https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-5.pdf
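A sketch of the constraint on a big-endian byte-string exponent with no
leading zero bytes (the helper name and representation are illustrative;
the kernel RSA code operates on MPIs):
  static bool rsa_fips_e_ok_sketch(const u8 *e, size_t len)
  {
          /* odd: lowest bit of the least significant byte must be set */
          if (!len || !(e[len - 1] & 1))
                  return false;
          /* e < 2^256: at most 32 significant bytes */
          if (len > 32)
                  return false;
          /* e > 2^16: at least 3 significant bytes; a 2-byte value is at
           * most 2^16 - 1, and the smallest odd 3-byte value is 2^16 + 1.
           */
          if (len < 3)
                  return false;
          return true;
  }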
Signed-off-by: Mahmoud Adam <mngyadam@amazon.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Give geniv its own Kconfig option so that its dependencies are
distinct from that of the AEAD API code. This also allows it
to be disabled if no IV generators (seqiv/echainiv) are enabled.
Remove the obsolete select on RNG2 by SKCIPHER2 as skcipher IV
generators disappeared long ago.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The testmgr code uses crypto_rng without depending on it. Add
an explicit dependency to Kconfig.
Also sort the MANAGER2 dependencies alphabetically.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When af_alg_sendmsg() calls extract_iter_to_sg(), it passes MAX_SGL_ENTS as
the maximum number of elements that may be written to, but some of the
elements may already have been used (as recorded in sgl->cur), so
extract_iter_to_sg() may end up overrunning the scatterlist.
Fix this to limit the number of elements to "MAX_SGL_ENTS - sgl->cur".
Note: It probably makes sense in future to alter the behaviour of
extract_iter_to_sg() to stop if "sgtable->nents >= sg_max" instead, but
this is a smaller fix for now.
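The adjusted limit then looks roughly as follows (the call shape is
paraphrased from the commit text, not copied from the source):
  err = extract_iter_to_sg(&msg->msg_iter, len, &sgl->sgt,
                           MAX_SGL_ENTS - sgl->cur, 0);
  /* ... instead of passing MAX_SGL_ENTS and overrunning the entries that
   * sgl->cur says are already in use.
   */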
The bug causes errors looking something like:
BUG: KASAN: slab-out-of-bounds in sg_assign_page include/linux/scatterlist.h:109 [inline]
BUG: KASAN: slab-out-of-bounds in sg_set_page include/linux/scatterlist.h:139 [inline]
BUG: KASAN: slab-out-of-bounds in extract_bvec_to_sg lib/scatterlist.c:1183 [inline]
BUG: KASAN: slab-out-of-bounds in extract_iter_to_sg lib/scatterlist.c:1352 [inline]
BUG: KASAN: slab-out-of-bounds in extract_iter_to_sg+0x17a6/0x1960 lib/scatterlist.c:1339
Fixes: bf63e250c4 ("crypto: af_alg: Support MSG_SPLICE_PAGES")
Reported-by: syzbot+6efc50cc1f8d718d6cb7@syzkaller.appspotmail.com
Link: https://lore.kernel.org/r/000000000000b2585a05fdeb8379@google.com/
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: syzbot+6efc50cc1f8d718d6cb7@syzkaller.appspotmail.com
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Jens Axboe <axboe@kernel.dk>
cc: Matthew Wilcox <willy@infradead.org>
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
The following checkpatch warning has been fixed:
- WARNING: Missing a blank line after declarations
Signed-off-by: Franziska Naepelt <franziska.naepelt@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Cross-merge networking fixes after downstream PR.
Conflicts:
net/sched/sch_taprio.c
d636fc5dd6 ("net: sched: add rcu annotations around qdisc->qdisc_sleeping")
dced11ef84 ("net/sched: taprio: don't overwrite "sch" variable in taprio_dump_class_stats()")
net/ipv4/sysctl_net_ipv4.c
e209fee411 ("net/ipv4: ping_group_range: allow GID from 2147483648 to 4294967294")
ccce324dab ("tcp: make the first N SYN RTO backoffs linear")
https://lore.kernel.org/all/20230605100816.08d41a7b@canb.auug.org.au/
No adjacent changes.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Make AF_ALG sendmsg() support MSG_SPLICE_PAGES in the hashing code. This
causes pages to be spliced from the source iterator if possible.
This allows ->sendpage() to be replaced by something that can handle
multiple multipage folios in a single transaction.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Jens Axboe <axboe@kernel.dk>
cc: Matthew Wilcox <willy@infradead.org>
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Convert af_alg_sendpage() to use sendmsg() with MSG_SPLICE_PAGES rather
than directly splicing in the pages itself.
This allows ->sendpage() to be replaced by something that can handle
multiple multipage folios in a single transaction.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Jens Axboe <axboe@kernel.dk>
cc: Matthew Wilcox <willy@infradead.org>
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Make AF_ALG sendmsg() support MSG_SPLICE_PAGES. This causes pages to be
spliced from the source iterator.
This allows ->sendpage() to be replaced by something that can handle
multiple multipage folios in a single transaction.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Jens Axboe <axboe@kernel.dk>
cc: Matthew Wilcox <willy@infradead.org>
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Put the loop in af_alg_sendmsg() inside an if-statement. This indents
it and makes the next patch, which adds another branch to the
if-statement to handle MSG_SPLICE_PAGES, easier to review.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Jens Axboe <axboe@kernel.dk>
cc: Matthew Wilcox <willy@infradead.org>
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Use extract_iter_to_sg() to decant the destination iterator into a
scatterlist in af_alg_get_rsgl(). af_alg_make_sg() can then be removed.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Jens Axboe <axboe@kernel.dk>
cc: Matthew Wilcox <willy@infradead.org>
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Convert AF_ALG to use iov_iter_extract_pages() instead of
iov_iter_get_pages(). This will pin pages or leave them unaltered rather
than getting a ref on them as appropriate to the iterator.
The pages need to be pinned for DIO-read rather than having refs taken on
them to prevent VM copy-on-write from malfunctioning during a concurrent
fork() (the result of the I/O would otherwise end up only visible to the
child process and not the parent).
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Jens Axboe <axboe@kernel.dk>
cc: Matthew Wilcox <willy@infradead.org>
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Commit ac4e97abce ("scatterlist: sg_set_buf() argument must be in linear
mapping") checks that both the signature and the digest reside in the
linear mapping area.
However, more recently commit ba14a194a4 ("fork: Add generic vmalloced
stack support") made it possible to move the stack in the vmalloc area,
which is not contiguous, and thus not suitable for sg_set_buf() which needs
adjacent pages.
Always make a copy of the signature and digest in the same buffer used to
store the key and its parameters, and pass them to sg_init_one(). Prefer
this over conditionally copying only when necessary, to keep the code
simple. The buffer allocated with kmalloc() is in the linear mapping area.
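The resulting pattern is roughly (offsets and variable names are
illustrative; 'buf' is the kmalloc()ed buffer mentioned above):
  memcpy(buf, sig->s, sig->s_size);
  memcpy(buf + sig->s_size, sig->digest, sig->digest_size);
  sg_init_one(&src_sg, buf, sig->s_size + sig->digest_size);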
Cc: stable@vger.kernel.org # 4.9.x
Fixes: ba14a194a4 ("fork: Add generic vmalloced stack support")
Link: https://lore.kernel.org/linux-integrity/Y4pIpxbjBdajymBJ@sol.localdomain/
Suggested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Tested-by: Stefan Berger <stefanb@linux.ibm.com>
With the update of the permanent and intermittent health errors, the
actual indicator for the health test indicates a potential error only
for the one offending time stamp gathered in the current iteration
round. The next iteration round will "overwrite" the health test result.
Thus, the entropy collection loop in jent_gen_entropy checks for a
health test failure upon each loop iteration. However, the
initialization operation only checked the APT health test once per
APT window, which implies it would not catch most errors.
Thus, the check for all health errors is now invoked unconditionally
during each loop iteration of the startup test.
With the change, the error JENT_ERCT becomes unused as all health
errors are only reported with the JENT_HEALTH return code. This
allows the removal of the error indicator.
Fixes: 3fde2fe99a ("crypto: jitter - permanent and intermittent health errors")
Reported-by: Joachim Vandersmissen <git@jvdsn.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make the help text for CRYPTO_STATS explicitly mention that it reduces
the performance of the crypto API.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Some shash algorithms are so simple that they don't have an init_tfm
function. These can be cloned trivially. Check this before failing
in crypto_clone_shash.
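Usage sketch (error handling trimmed; 'tfm' is assumed to be an already
set-up shash such as hmac(sha256)):
  struct crypto_shash *clone = crypto_clone_shash(tfm);

  if (IS_ERR(clone))
          return PTR_ERR(clone);
  /* 'clone' is independent of 'tfm' but starts from the same setup;
   * algorithms without an init_tfm can be cloned trivially, others
   * need explicit clone support.
   */
  crypto_free_shash(clone);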
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Allow cmac to be cloned. The underlying cipher needs to support
cloning by not having a cra_init function (all implementations of
aes that do not require a fallback can be cloned).
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Allow simple ciphers to be cloned, if they don't have a cra_init
function. This basically rules out those ciphers that require a
fallback.
In future simple ciphers will be eliminated, and replaced with a
linear skcipher interface. When that happens this restriction will
disappear.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use the modern init_tfm/exit_tfm interface instead of the obsolete
cra_init/cra_exit interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
gcc warns if prototypes are only visible to the caller but
not the callee:
crypto/aegis128-neon-inner.c:134:6: warning: no previous prototype for 'crypto_aegis128_init_neon' [-Wmissing-prototypes]
crypto/aegis128-neon-inner.c:164:6: warning: no previous prototype for 'crypto_aegis128_update_neon' [-Wmissing-prototypes]
crypto/aegis128-neon-inner.c:221:6: warning: no previous prototype for 'crypto_aegis128_encrypt_chunk_neon' [-Wmissing-prototypes]
crypto/aegis128-neon-inner.c:270:6: warning: no previous prototype for 'crypto_aegis128_decrypt_chunk_neon' [-Wmissing-prototypes]
crypto/aegis128-neon-inner.c:316:5: warning: no previous prototype for 'crypto_aegis128_final_neon' [-Wmissing-prototypes]
The prototypes cannot be in the regular aegis.h, as the inner neon code
cannot include normal kernel headers. Instead add a new header just for
the functions provided by this file.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The test interface allows a privileged process to capture the raw
unconditioned noise that is collected by the Jitter RNG for statistical
analysis. Such testing allows analyzing how much entropy
the Jitter RNG noise source provides on a given platform. The obtained
data is the time stamp sampled by the Jitter RNG. Considering that
the Jitter RNG inserts the delta of this time stamp compared to the
immediately preceding time stamp, the obtained data needs to be
post-processed accordingly to obtain the data the Jitter RNG inserts
into its entropy pool.
The raw entropy collection is provided to obtain the raw unmodified
time stamps that are about to be added to the Jitter RNG entropy pool
and are credited with entropy. Thus, this patch adds an interface
which renders the Jitter RNG insecure. This patch is NOT INTENDED
FOR PRODUCTION SYSTEMS, but solely for development/test systems to
verify the available entropy rate.
Access to the data is given through the jent_raw_hires debugfs file.
The size of the read buffer should be a multiple of sizeof(u32) so that
the entire buffer is filled. Using the option jitterentropy_testing.boot_raw_hires_test=1
the raw noise of the first 1000 entropy events since boot can be
sampled.
This test interface allows generating the data required for
analysis whether the Jitter RNG is in compliance with SP800-90B
sections 3.1.3 and 3.1.4.
If the test interface is not compiled, its code is a noop which has no
impact on the performance.
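A sketch of a user-space consumer of the interface (the debugfs path is
assumed from the module and parameter names above; this is test tooling
only, not production code):
  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          uint32_t ts[1000];
          ssize_t n, i;
          int fd = open("/sys/kernel/debug/jitterentropy_testing/jent_raw_hires",
                        O_RDONLY);

          if (fd < 0)
                  return 1;
          n = read(fd, ts, sizeof(ts));    /* raw time stamps, multiples of u32 */
          close(fd);

          /* The Jitter RNG credits the delta between consecutive stamps,
           * so post-process accordingly before any entropy analysis.
           */
          for (i = 1; i < n / (ssize_t)sizeof(uint32_t); i++)
                  printf("%u\n", ts[i] - ts[i - 1]);
          return 0;
  }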
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Using the kernel crypto API, the SHA3-256 algorithm is used as
conditioning element to replace the LFSR in the Jitter RNG. All other
parts of the Jitter RNG are unchanged.
The application and use of the SHA-3 conditioning operation is identical
to the user space Jitter RNG 3.4.0 by applying the following concept:
- the Jitter RNG initializes a SHA-3 state which acts as the "entropy
pool" when the Jitter RNG is allocated.
- When a new time delta is obtained, it is inserted into the "entropy
pool" with a SHA-3 update operation. Note, this operation in most of
the cases is a simple memcpy() onto the SHA-3 stack.
- To cause a true SHA-3 operation for each time delta operation, a
second SHA-3 operation is performed hashing Jitter RNG status
information. The final message digest is also inserted into the
"entropy pool" with a SHA-3 update operation. Yet, this data is not
considered to provide any entropy, but it shall stir the entropy pool.
- To generate a random number, a SHA-3 final operation is performed to
calculate a message digest followed by an immediate SHA-3 init to
re-initialize the "entropy pool". The obtained message digest is one
block of the Jitter RNG that is returned to the caller.
Mathematically speaking, the random number generated by the Jitter RNG
is:
aux_t = SHA-3(Jitter RNG state data)
Jitter RNG block = SHA-3(time_i || aux_i || time_(i-1) || aux_(i-1) ||
... || time_(i-255) || aux_(i-255))
when assuming that the OSR = 1, i.e. the default value.
This operation implies that the Jitter RNG has an output-blocksize of
256 bits instead of the 64 bits of the LFSR-based Jitter RNG that is
replaced with this patch.
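A compressed conceptual sketch of the steps listed above, using the generic
shash API (function and variable names are illustrative; the real driver
wires SHA3-256 up through its own glue code, and the finalize/re-init step
only happens when a random block is actually requested):
  #include <crypto/hash.h>
  #include <crypto/sha3.h>

  static int jent_cond_sketch(struct shash_desc *pool, struct shash_desc *aux,
                              u64 time_delta, const u8 *state, unsigned int len,
                              u8 block[SHA3_256_DIGEST_SIZE])
  {
          u8 aux_digest[SHA3_256_DIGEST_SIZE];
          int ret;

          /* Insert the new time delta into the "entropy pool". */
          ret = crypto_shash_update(pool, (u8 *)&time_delta, sizeof(time_delta));
          if (ret)
                  return ret;

          /* Second SHA-3 operation over status data; its digest stirs the
           * pool but is credited with no entropy.
           */
          ret = crypto_shash_digest(aux, state, len, aux_digest);
          if (!ret)
                  ret = crypto_shash_update(pool, aux_digest, sizeof(aux_digest));
          if (ret)
                  return ret;

          /* Generate output: finalize the pool into one 256-bit block and
           * immediately re-initialize it for the next round.
           */
          ret = crypto_shash_final(pool, block);
          if (!ret)
                  ret = crypto_shash_init(pool);
          return ret;
  }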
The patch also replaces the varying number of invocations of the
conditioning function with a fixed number of invocations. The use
of the conditioning function is consistent with the userspace Jitter RNG
library version 3.4.0.
The code is tested with a system that exhibited the least amount of
entropy generated by the Jitter RNG: the SiFive Unmatched RISC-V
system. The measured entropy rate is well above the heuristically
implied entropy value of 1 bit of entropy per time delta. On all other
tested systems, the measured entropy rate is even higher by orders
of magnitude. The measurement was performed using updated tooling
provided with the user space Jitter RNG library test framework.
The performance of the Jitter RNG with this patch is roughly on par
with the performance of the Jitter RNG without the patch.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>