Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
 "API:
   - Allow DRBG testing through user-space af_alg
   - Add tcrypt speed testing support for keyed hashes
   - Add type-safe init/exit hooks for ahash

  Algorithms:
   - Mark arc4 as obsolete and pending for future removal
   - Mark anubis, khazad, seed and tea as obsolete
   - Improve boot-time xor benchmark
   - Add OSCCA SM2 asymmetric cipher algorithm and use it for integrity

  Drivers:
   - Fixes and enhancement for XTS in caam
   - Add support for XIP8001B hwrng in xiphera-trng
   - Add RNG and hash support in sun8i-ce/sun8i-ss
   - Allow imx-rngc to be used by kernel entropy pool
   - Use crypto engine in omap-sham
   - Add support for Ingenic X1830 with ingenic"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (205 commits)
  X.509: Fix modular build of public_key_sm2
  crypto: xor - Remove unused variable count in do_xor_speed
  X.509: fix error return value on the failed path
  crypto: bcm - Verify GCM/CCM key length in setkey
  crypto: qat - drop input parameter from adf_enable_aer()
  crypto: qat - fix function parameters descriptions
  crypto: atmel-tdes - use semicolons rather than commas to separate statements
  crypto: drivers - use semicolons rather than commas to separate statements
  hwrng: mxc-rnga - use semicolons rather than commas to separate statements
  hwrng: iproc-rng200 - use semicolons rather than commas to separate statements
  hwrng: stm32 - use semicolons rather than commas to separate statements
  crypto: xor - use ktime for template benchmarking
  crypto: xor - defer load time benchmark to a later time
  crypto: hisilicon/zip - fix the uninitalized 'curr_qm_qp_num'
  crypto: hisilicon/zip - fix the return value when device is busy
  crypto: hisilicon/zip - fix zero length input in GZIP decompress
  crypto: hisilicon/zip - fix the uncleared debug registers
  lib/mpi: Fix unused variable warnings
  crypto: x86/poly1305 - Remove assignments with no effect
  hwrng: npcm - modify readl to readb
  ...
commit 39a5101f98 by Linus Torvalds, 2020-10-13 08:50:16 -07:00
229 changed files with 9477 additions and 3114 deletions


@ -296,15 +296,16 @@ follows:
struct sockaddr_alg sa = {
.salg_family = AF_ALG,
.salg_type = "rng", /* this selects the symmetric cipher */
.salg_name = "drbg_nopr_sha256" /* this is the cipher name */
.salg_type = "rng", /* this selects the random number generator */
.salg_name = "drbg_nopr_sha256" /* this is the RNG name */
};
Depending on the RNG type, the RNG must be seeded. The seed is provided
using the setsockopt interface to set the key. For example, the
ansi_cprng requires a seed. The DRBGs do not require a seed, but may be
seeded.
seeded. The seed is also known as a *Personalization String* in NIST SP 800-90A
standard.
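A minimal user-space sketch of the seeding and generation flow just described
(illustrative only, not part of this patch; error handling is omitted, the
48-byte seed length is arbitrary, and the SOL_ALG fallback define only covers
older system headers):

#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

#ifndef SOL_ALG
#define SOL_ALG 279
#endif

int main(void)
{
        struct sockaddr_alg sa = {
                .salg_family = AF_ALG,
                .salg_type   = "rng",              /* the random number generator type */
                .salg_name   = "drbg_nopr_sha256", /* the RNG name */
        };
        unsigned char seed[48] = { 0 };            /* optional Personalization String */
        unsigned char buf[128];                    /* the kernel returns at most 128 bytes */
        int tfmfd, opfd;

        tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
        bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
        /* DRBGs need no seed, but one may be supplied; it is set as the "key". */
        setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, seed, sizeof(seed));
        opfd = accept(tfmfd, NULL, 0);
        read(opfd, buf, sizeof(buf));              /* obtain random numbers */
        close(opfd);
        close(tfmfd);
        return 0;
}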
Using the read()/recvmsg() system calls, random numbers can be obtained.
The kernel generates at most 128 bytes in one call. If user space
@ -314,6 +315,16 @@ WARNING: The user space caller may invoke the initially mentioned accept
system call multiple times. In this case, the returned file descriptors
have the same state.
The following CAVP testing interfaces are enabled when the kernel is built with the
CRYPTO_USER_API_RNG_CAVP option (see the sketch after this list):
- the concatenation of *Entropy* and *Nonce* can be provided to the RNG via the
ALG_SET_DRBG_ENTROPY setsockopt interface. Setting the entropy requires the
CAP_SYS_ADMIN capability.
- *Additional Data* can be provided using the send()/sendmsg() system calls,
but only after the entropy has been set.
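A matching CAVP-test sketch (again illustrative and not part of this patch; it
assumes the ALG_SET_DRBG_ENTROPY constant from the updated <linux/if_alg.h>,
CAP_SYS_ADMIN, a kernel built with CRYPTO_USER_API_RNG_CAVP, and arbitrary
test-vector lengths; the helper name is hypothetical):

#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

#ifndef SOL_ALG
#define SOL_ALG 279
#endif

void cavp_round(void)
{
        struct sockaddr_alg sa = {
                .salg_family = AF_ALG,
                .salg_type   = "rng",
                .salg_name   = "drbg_nopr_sha256",
        };
        unsigned char entropy[48] = { 0 };   /* Entropy || Nonce from the test vector */
        unsigned char addtl[16]   = { 0 };   /* Additional Data from the test vector */
        unsigned char buf[128];
        int tfmfd, opfd;

        tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
        bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
        /* Needs CAP_SYS_ADMIN; must be done on the bound socket before accept(). */
        setsockopt(tfmfd, SOL_ALG, ALG_SET_DRBG_ENTROPY, entropy, sizeof(entropy));
        opfd = accept(tfmfd, NULL, 0);
        send(opfd, addtl, sizeof(addtl), 0); /* Additional Data for the next generate */
        read(opfd, buf, sizeof(buf));        /* generate using the provided Additional Data */
        close(opfd);
        close(tfmfd);
}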
Zero-Copy Interface
-------------------
@ -377,6 +388,9 @@ mentioned optname:
provided ciphertext is assumed to contain an authentication tag of
the given size (see section about AEAD memory layout below).
- ALG_SET_DRBG_ENTROPY -- Setting the entropy of the random number generator.
This option is applicable to RNG cipher type only.
User space API example
----------------------


@ -0,0 +1,43 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/rng/ingenic,trng.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Bindings for DTRNG in Ingenic SoCs
maintainers:
- 周琰杰 (Zhou Yanjie) <zhouyanjie@wanyeetech.com>
description:
The True Random Number Generator in Ingenic SoCs.
properties:
compatible:
enum:
- ingenic,x1830-dtrng
reg:
maxItems: 1
clocks:
maxItems: 1
required:
- compatible
- reg
- clocks
additionalProperties: false
examples:
- |
#include <dt-bindings/clock/x1830-cgu.h>
dtrng: trng@10072000 {
compatible = "ingenic,x1830-dtrng";
reg = <0x10072000 0xc>;
clocks = <&cgu X1830_CLK_DTRNG>;
};
...


@ -0,0 +1,33 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/rng/xiphera,xip8001b-trng.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Xiphera XIP8001B-trng bindings
maintainers:
- Atte Tommiska <atte.tommiska@xiphera.com>
description: |
Xiphera FPGA-based true random number generator intellectual property core.
properties:
compatible:
const: xiphera,xip8001b-trng
reg:
maxItems: 1
required:
- compatible
- reg
additionalProperties: false
examples:
- |
rng@43c00000 {
compatible = "xiphera,xip8001b-trng";
reg = <0x43c00000 0x10000>;
};


@ -1174,6 +1174,8 @@ patternProperties:
description: Shenzhen Xingbangda Display Technology Co., Ltd
"^xinpeng,.*":
description: Shenzhen Xinpeng Technology Co., Ltd
"^xiphera,.*":
description: Xiphera Ltd.
"^xlnx,.*":
description: Xilinx
"^xnano,.*":


@ -13068,7 +13068,9 @@ F: lib/packing.c
PADATA PARALLEL EXECUTION MECHANISM
M: Steffen Klassert <steffen.klassert@secunet.com>
M: Daniel Jordan <daniel.m.jordan@oracle.com>
L: linux-crypto@vger.kernel.org
L: linux-kernel@vger.kernel.org
S: Maintained
F: Documentation/core-api/padata.rst
F: include/linux/padata.h


@ -77,11 +77,6 @@
vldr \out\()h, \sym + 8
.endm
.macro __adr, reg, lbl
adr \reg, \lbl
THUMB( orr \reg, \reg, #1 )
.endm
.macro in_bs_ch, b0, b1, b2, b3, b4, b5, b6, b7
veor \b2, \b2, \b1
veor \b5, \b5, \b6
@ -629,11 +624,11 @@ ENDPROC(aesbs_decrypt8)
push {r4-r6, lr}
ldr r5, [sp, #16] // number of blocks
99: __adr ip, 0f
99: adr ip, 0f
and lr, r5, #7
cmp r5, #8
sub ip, ip, lr, lsl #2
bxlt ip // computed goto if blocks < 8
movlt pc, ip // computed goto if blocks < 8
vld1.8 {q0}, [r1]!
vld1.8 {q1}, [r1]!
@ -648,11 +643,11 @@ ENDPROC(aesbs_decrypt8)
mov rounds, r3
bl \do8
__adr ip, 1f
adr ip, 1f
and lr, r5, #7
cmp r5, #8
sub ip, ip, lr, lsl #2
bxlt ip // computed goto if blocks < 8
movlt pc, ip // computed goto if blocks < 8
vst1.8 {\o0}, [r0]!
vst1.8 {\o1}, [r0]!
@ -689,12 +684,12 @@ ENTRY(aesbs_cbc_decrypt)
push {r4-r6, lr}
ldm ip, {r5-r6} // load args 4-5
99: __adr ip, 0f
99: adr ip, 0f
and lr, r5, #7
cmp r5, #8
sub ip, ip, lr, lsl #2
mov lr, r1
bxlt ip // computed goto if blocks < 8
movlt pc, ip // computed goto if blocks < 8
vld1.8 {q0}, [lr]!
vld1.8 {q1}, [lr]!
@ -718,11 +713,11 @@ ENTRY(aesbs_cbc_decrypt)
vmov q14, q8
vmov q15, q8
__adr ip, 1f
adr ip, 1f
and lr, r5, #7
cmp r5, #8
sub ip, ip, lr, lsl #2
bxlt ip // computed goto if blocks < 8
movlt pc, ip // computed goto if blocks < 8
vld1.8 {q9}, [r1]!
vld1.8 {q10}, [r1]!
@ -733,9 +728,9 @@ ENTRY(aesbs_cbc_decrypt)
vld1.8 {q15}, [r1]!
W(nop)
1: __adr ip, 2f
1: adr ip, 2f
sub ip, ip, lr, lsl #3
bxlt ip // computed goto if blocks < 8
movlt pc, ip // computed goto if blocks < 8
veor q0, q0, q8
vst1.8 {q0}, [r0]!
@ -804,13 +799,13 @@ ENTRY(aesbs_ctr_encrypt)
vmov q6, q0
vmov q7, q0
__adr ip, 0f
adr ip, 0f
sub lr, r5, #1
and lr, lr, #7
cmp r5, #8
sub ip, ip, lr, lsl #5
sub ip, ip, lr, lsl #2
bxlt ip // computed goto if blocks < 8
movlt pc, ip // computed goto if blocks < 8
next_ctr q1
next_ctr q2
@ -824,13 +819,13 @@ ENTRY(aesbs_ctr_encrypt)
mov rounds, r3
bl aesbs_encrypt8
__adr ip, 1f
adr ip, 1f
and lr, r5, #7
cmp r5, #8
movgt r4, #0
ldrle r4, [sp, #40] // load final in the last round
sub ip, ip, lr, lsl #2
bxlt ip // computed goto if blocks < 8
movlt pc, ip // computed goto if blocks < 8
vld1.8 {q8}, [r1]!
vld1.8 {q9}, [r1]!
@ -843,10 +838,10 @@ ENTRY(aesbs_ctr_encrypt)
1: bne 2f
vld1.8 {q15}, [r1]!
2: __adr ip, 3f
2: adr ip, 3f
cmp r5, #8
sub ip, ip, lr, lsl #3
bxlt ip // computed goto if blocks < 8
movlt pc, ip // computed goto if blocks < 8
veor q0, q0, q8
vst1.8 {q0}, [r0]!
@ -900,12 +895,12 @@ __xts_prepare8:
vshr.u64 d30, d31, #7
vmov q12, q14
__adr ip, 0f
adr ip, 0f
and r4, r6, #7
cmp r6, #8
sub ip, ip, r4, lsl #5
mov r4, sp
bxlt ip // computed goto if blocks < 8
movlt pc, ip // computed goto if blocks < 8
vld1.8 {q0}, [r1]!
next_tweak q12, q14, q15, q13
@ -961,8 +956,7 @@ ENDPROC(__xts_prepare8)
push {r4-r8, lr}
mov r5, sp // preserve sp
ldrd r6, r7, [sp, #24] // get blocks and iv args
ldr r8, [sp, #32] // reorder final tweak?
rsb r8, r8, #1
rsb r8, ip, #1
sub ip, sp, #128 // make room for 8x tweak
bic ip, ip, #0xf // align sp to 16 bytes
mov sp, ip
@ -973,12 +967,12 @@ ENDPROC(__xts_prepare8)
mov rounds, r3
bl \do8
__adr ip, 0f
adr ip, 0f
and lr, r6, #7
cmp r6, #8
sub ip, ip, lr, lsl #2
mov r4, sp
bxlt ip // computed goto if blocks < 8
movlt pc, ip // computed goto if blocks < 8
vld1.8 {q8}, [r4, :128]!
vld1.8 {q9}, [r4, :128]!
@ -989,9 +983,9 @@ ENDPROC(__xts_prepare8)
vld1.8 {q14}, [r4, :128]!
vld1.8 {q15}, [r4, :128]
0: __adr ip, 1f
0: adr ip, 1f
sub ip, ip, lr, lsl #3
bxlt ip // computed goto if blocks < 8
movlt pc, ip // computed goto if blocks < 8
veor \o0, \o0, q8
vst1.8 {\o0}, [r0]!
@ -1018,9 +1012,11 @@ ENDPROC(__xts_prepare8)
.endm
ENTRY(aesbs_xts_encrypt)
mov ip, #0 // never reorder final tweak
__xts_crypt aesbs_encrypt8, q0, q1, q4, q6, q3, q7, q2, q5
ENDPROC(aesbs_xts_encrypt)
ENTRY(aesbs_xts_decrypt)
ldr ip, [sp, #8] // reorder final tweak?
__xts_crypt aesbs_decrypt8, q0, q1, q6, q4, q2, q7, q3, q5
ENDPROC(aesbs_xts_decrypt)


@ -8,7 +8,6 @@
#include <asm/neon.h>
#include <asm/simd.h>
#include <crypto/aes.h>
#include <crypto/cbc.h>
#include <crypto/ctr.h>
#include <crypto/internal/simd.h>
#include <crypto/internal/skcipher.h>
@ -49,7 +48,7 @@ struct aesbs_ctx {
struct aesbs_cbc_ctx {
struct aesbs_ctx key;
struct crypto_cipher *enc_tfm;
struct crypto_skcipher *enc_tfm;
};
struct aesbs_xts_ctx {
@ -140,19 +139,23 @@ static int aesbs_cbc_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
kernel_neon_end();
memzero_explicit(&rk, sizeof(rk));
return crypto_cipher_setkey(ctx->enc_tfm, in_key, key_len);
}
static void cbc_encrypt_one(struct crypto_skcipher *tfm, const u8 *src, u8 *dst)
{
struct aesbs_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
crypto_cipher_encrypt_one(ctx->enc_tfm, dst, src);
return crypto_skcipher_setkey(ctx->enc_tfm, in_key, key_len);
}
static int cbc_encrypt(struct skcipher_request *req)
{
return crypto_cbc_encrypt_walk(req, cbc_encrypt_one);
struct skcipher_request *subreq = skcipher_request_ctx(req);
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct aesbs_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
skcipher_request_set_tfm(subreq, ctx->enc_tfm);
skcipher_request_set_callback(subreq,
skcipher_request_flags(req),
NULL, NULL);
skcipher_request_set_crypt(subreq, req->src, req->dst,
req->cryptlen, req->iv);
return crypto_skcipher_encrypt(subreq);
}
static int cbc_decrypt(struct skcipher_request *req)
@ -183,20 +186,27 @@ static int cbc_decrypt(struct skcipher_request *req)
return err;
}
static int cbc_init(struct crypto_tfm *tfm)
static int cbc_init(struct crypto_skcipher *tfm)
{
struct aesbs_cbc_ctx *ctx = crypto_tfm_ctx(tfm);
struct aesbs_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
unsigned int reqsize;
ctx->enc_tfm = crypto_alloc_cipher("aes", 0, 0);
ctx->enc_tfm = crypto_alloc_skcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
if (IS_ERR(ctx->enc_tfm))
return PTR_ERR(ctx->enc_tfm);
return PTR_ERR_OR_ZERO(ctx->enc_tfm);
reqsize = sizeof(struct skcipher_request);
reqsize += crypto_skcipher_reqsize(ctx->enc_tfm);
crypto_skcipher_set_reqsize(tfm, reqsize);
return 0;
}
static void cbc_exit(struct crypto_tfm *tfm)
static void cbc_exit(struct crypto_skcipher *tfm)
{
struct aesbs_cbc_ctx *ctx = crypto_tfm_ctx(tfm);
struct aesbs_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
crypto_free_cipher(ctx->enc_tfm);
crypto_free_skcipher(ctx->enc_tfm);
}
static int aesbs_ctr_setkey_sync(struct crypto_skcipher *tfm, const u8 *in_key,
@ -304,9 +314,9 @@ static int aesbs_xts_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
return aesbs_setkey(tfm, in_key, key_len);
}
static int xts_init(struct crypto_tfm *tfm)
static int xts_init(struct crypto_skcipher *tfm)
{
struct aesbs_xts_ctx *ctx = crypto_tfm_ctx(tfm);
struct aesbs_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
ctx->cts_tfm = crypto_alloc_cipher("aes", 0, 0);
if (IS_ERR(ctx->cts_tfm))
@ -319,9 +329,9 @@ static int xts_init(struct crypto_tfm *tfm)
return PTR_ERR_OR_ZERO(ctx->tweak_tfm);
}
static void xts_exit(struct crypto_tfm *tfm)
static void xts_exit(struct crypto_skcipher *tfm)
{
struct aesbs_xts_ctx *ctx = crypto_tfm_ctx(tfm);
struct aesbs_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
crypto_free_cipher(ctx->tweak_tfm);
crypto_free_cipher(ctx->cts_tfm);
@ -432,8 +442,6 @@ static struct skcipher_alg aes_algs[] = { {
.base.cra_ctxsize = sizeof(struct aesbs_cbc_ctx),
.base.cra_module = THIS_MODULE,
.base.cra_flags = CRYPTO_ALG_INTERNAL,
.base.cra_init = cbc_init,
.base.cra_exit = cbc_exit,
.min_keysize = AES_MIN_KEY_SIZE,
.max_keysize = AES_MAX_KEY_SIZE,
@ -442,6 +450,8 @@ static struct skcipher_alg aes_algs[] = { {
.setkey = aesbs_cbc_setkey,
.encrypt = cbc_encrypt,
.decrypt = cbc_decrypt,
.init = cbc_init,
.exit = cbc_exit,
}, {
.base.cra_name = "__ctr(aes)",
.base.cra_driver_name = "__ctr-aes-neonbs",
@ -483,8 +493,6 @@ static struct skcipher_alg aes_algs[] = { {
.base.cra_ctxsize = sizeof(struct aesbs_xts_ctx),
.base.cra_module = THIS_MODULE,
.base.cra_flags = CRYPTO_ALG_INTERNAL,
.base.cra_init = xts_init,
.base.cra_exit = xts_exit,
.min_keysize = 2 * AES_MIN_KEY_SIZE,
.max_keysize = 2 * AES_MAX_KEY_SIZE,
@ -493,6 +501,8 @@ static struct skcipher_alg aes_algs[] = { {
.setkey = aesbs_xts_setkey,
.encrypt = xts_encrypt,
.decrypt = xts_decrypt,
.init = xts_init,
.exit = xts_exit,
} };
static struct simd_skcipher_alg *aes_simd_algs[ARRAY_SIZE(aes_algs)];


@ -16,6 +16,7 @@
#include <linux/module.h>
#include <linux/init.h>
#include <linux/jump_label.h>
#include <linux/scatterlist.h>
#include <crypto/curve25519.h>
asmlinkage void curve25519_neon(u8 mypublic[CURVE25519_KEY_SIZE],


@ -20,6 +20,7 @@
void poly1305_init_arm(void *state, const u8 *key);
void poly1305_blocks_arm(void *state, const u8 *src, u32 len, u32 hibit);
void poly1305_blocks_neon(void *state, const u8 *src, u32 len, u32 hibit);
void poly1305_emit_arm(void *state, u8 *digest, const u32 *nonce);
void __weak poly1305_blocks_neon(void *state, const u8 *src, u32 len, u32 hibit)


@ -175,7 +175,6 @@ $code=<<___;
#else
.syntax unified
# ifdef __thumb2__
# define adrl adr
.thumb
# else
.code 32
@ -471,7 +470,8 @@ sha256_block_data_order_neon:
stmdb sp!,{r4-r12,lr}
sub $H,sp,#16*4+16
adrl $Ktbl,K256
adr $Ktbl,.Lsha256_block_data_order
sub $Ktbl,$Ktbl,#.Lsha256_block_data_order-K256
bic $H,$H,#15 @ align for 128-bit stores
mov $t2,sp
mov sp,$H @ alloca


@ -56,7 +56,6 @@
#else
.syntax unified
# ifdef __thumb2__
# define adrl adr
.thumb
# else
.code 32
@ -1885,7 +1884,8 @@ sha256_block_data_order_neon:
stmdb sp!,{r4-r12,lr}
sub r11,sp,#16*4+16
adrl r14,K256
adr r14,.Lsha256_block_data_order
sub r14,r14,#.Lsha256_block_data_order-K256
bic r11,r11,#15 @ align for 128-bit stores
mov r12,sp
mov sp,r11 @ alloca


@ -212,7 +212,6 @@ $code=<<___;
#else
.syntax unified
# ifdef __thumb2__
# define adrl adr
.thumb
# else
.code 32
@ -602,7 +601,8 @@ sha512_block_data_order_neon:
dmb @ errata #451034 on early Cortex A8
add $len,$inp,$len,lsl#7 @ len to point at the end of inp
VFP_ABI_PUSH
adrl $Ktbl,K512
adr $Ktbl,.Lsha512_block_data_order
sub $Ktbl,$Ktbl,.Lsha512_block_data_order-K512
vldmia $ctx,{$A-$H} @ load context
.Loop_neon:
___


@ -79,7 +79,6 @@
#else
.syntax unified
# ifdef __thumb2__
# define adrl adr
.thumb
# else
.code 32
@ -543,7 +542,8 @@ sha512_block_data_order_neon:
dmb @ errata #451034 on early Cortex A8
add r2,r1,r2,lsl#7 @ len to point at the end of inp
VFP_ABI_PUSH
adrl r3,K512
adr r3,.Lsha512_block_data_order
sub r3,r3,.Lsha512_block_data_order-K512
vldmia r0,{d16-d23} @ load context
.Loop_neon:
vshr.u64 d24,d20,#14 @ 0


@ -347,7 +347,7 @@ static int gcm_encrypt(struct aead_request *req)
u8 buf[AES_BLOCK_SIZE];
u8 iv[AES_BLOCK_SIZE];
u64 dg[2] = {};
u128 lengths;
be128 lengths;
u8 *tag;
int err;
@ -461,7 +461,7 @@ static int gcm_decrypt(struct aead_request *req)
u8 buf[AES_BLOCK_SIZE];
u8 iv[AES_BLOCK_SIZE];
u64 dg[2] = {};
u128 lengths;
be128 lengths;
u8 *tag;
int err;


@ -25,6 +25,9 @@ struct sha1_ce_state {
u32 finalize;
};
extern const u32 sha1_ce_offsetof_count;
extern const u32 sha1_ce_offsetof_finalize;
asmlinkage void sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
int blocks);


@ -25,6 +25,9 @@ struct sha256_ce_state {
u32 finalize;
};
extern const u32 sha256_ce_offsetof_count;
extern const u32 sha256_ce_offsetof_finalize;
asmlinkage void sha2_ce_transform(struct sha256_ce_state *sst, u8 const *src,
int blocks);


@ -9,6 +9,7 @@
#include <crypto/internal/hash.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/random.h>
#include <linux/string.h>
#include <linux/kernel.h>
#include <linux/cpufeature.h>
@ -22,10 +23,11 @@ static unsigned long iterations = 10000;
static int __init crc_test_init(void)
{
u16 crc16 = 0, verify16 = 0;
u32 crc32 = 0, verify32 = 0;
__le32 verify32le = 0;
unsigned char *data;
u32 verify32 = 0;
unsigned long i;
__le32 crc32;
int ret;
struct crypto_shash *crct10dif_tfm;
@ -98,7 +100,7 @@ static int __init crc_test_init(void)
crypto_shash_final(crc32c_shash, (u8 *)(&crc32));
verify32 = le32_to_cpu(verify32le);
verify32le = ~cpu_to_le32(__crc32c_le(~verify32, data+offset, len));
if (crc32 != (u32)verify32le) {
if (crc32 != verify32le) {
pr_err("FAILURE in CRC32: got 0x%08x expected 0x%08x (len %lu)\n",
crc32, verify32, len);
break;


@ -11,6 +11,7 @@
#include <linux/jump_label.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sizes.h>
#include <asm/cpufeature.h>
#include <asm/fpu/api.h>


@ -12,6 +12,7 @@
#include <crypto/internal/skcipher.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sizes.h>
#include <asm/simd.h>
asmlinkage void chacha_block_xor_ssse3(u32 *state, u8 *dst, const u8 *src,


@ -28,9 +28,9 @@
#define SCALE_F sizeof(unsigned long)
#ifdef CONFIG_X86_64
#define REX_PRE "0x48, "
#define CRC32_INST "crc32q %1, %q0"
#else
#define REX_PRE
#define CRC32_INST "crc32l %1, %0"
#endif
#ifdef CONFIG_X86_64
@ -48,11 +48,8 @@ asmlinkage unsigned int crc_pcl(const u8 *buffer, int len,
static u32 crc32c_intel_le_hw_byte(u32 crc, unsigned char const *data, size_t length)
{
while (length--) {
__asm__ __volatile__(
".byte 0xf2, 0xf, 0x38, 0xf0, 0xf1"
:"=S"(crc)
:"0"(crc), "c"(*data)
);
asm("crc32b %1, %0"
: "+r" (crc) : "rm" (*data));
data++;
}
@ -66,11 +63,8 @@ static u32 __pure crc32c_intel_le_hw(u32 crc, unsigned char const *p, size_t len
unsigned long *ptmp = (unsigned long *)p;
while (iquotient--) {
__asm__ __volatile__(
".byte 0xf2, " REX_PRE "0xf, 0x38, 0xf1, 0xf1;"
:"=S"(crc)
:"0"(crc), "c"(*ptmp)
);
asm(CRC32_INST
: "+r" (crc) : "rm" (*ptmp));
ptmp++;
}


@ -11,6 +11,7 @@
#include <linux/jump_label.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/scatterlist.h>
#include <asm/cpufeature.h>
#include <asm/processor.h>
@ -45,11 +46,11 @@ static inline u64 add_scalar(u64 *out, const u64 *f1, u64 f2)
asm volatile(
/* Clear registers to propagate the carry bit */
" xor %%r8, %%r8;"
" xor %%r9, %%r9;"
" xor %%r10, %%r10;"
" xor %%r11, %%r11;"
" xor %1, %1;"
" xor %%r8d, %%r8d;"
" xor %%r9d, %%r9d;"
" xor %%r10d, %%r10d;"
" xor %%r11d, %%r11d;"
" xor %k1, %k1;"
/* Begin addition chain */
" addq 0(%3), %0;"
@ -93,7 +94,7 @@ static inline void fadd(u64 *out, const u64 *f1, const u64 *f2)
" cmovc %0, %%rax;"
/* Step 2: Add carry*38 to the original sum */
" xor %%rcx, %%rcx;"
" xor %%ecx, %%ecx;"
" add %%rax, %%r8;"
" adcx %%rcx, %%r9;"
" movq %%r9, 8(%1);"
@ -165,28 +166,28 @@ static inline void fmul(u64 *out, const u64 *f1, const u64 *f2, u64 *tmp)
/* Compute src1[0] * src2 */
" movq 0(%1), %%rdx;"
" mulxq 0(%3), %%r8, %%r9;" " xor %%r10, %%r10;" " movq %%r8, 0(%0);"
" mulxq 0(%3), %%r8, %%r9;" " xor %%r10d, %%r10d;" " movq %%r8, 0(%0);"
" mulxq 8(%3), %%r10, %%r11;" " adox %%r9, %%r10;" " movq %%r10, 8(%0);"
" mulxq 16(%3), %%rbx, %%r13;" " adox %%r11, %%rbx;"
" mulxq 24(%3), %%r14, %%rdx;" " adox %%r13, %%r14;" " mov $0, %%rax;"
" adox %%rdx, %%rax;"
/* Compute src1[1] * src2 */
" movq 8(%1), %%rdx;"
" mulxq 0(%3), %%r8, %%r9;" " xor %%r10, %%r10;" " adcxq 8(%0), %%r8;" " movq %%r8, 8(%0);"
" mulxq 0(%3), %%r8, %%r9;" " xor %%r10d, %%r10d;" " adcxq 8(%0), %%r8;" " movq %%r8, 8(%0);"
" mulxq 8(%3), %%r10, %%r11;" " adox %%r9, %%r10;" " adcx %%rbx, %%r10;" " movq %%r10, 16(%0);"
" mulxq 16(%3), %%rbx, %%r13;" " adox %%r11, %%rbx;" " adcx %%r14, %%rbx;" " mov $0, %%r8;"
" mulxq 24(%3), %%r14, %%rdx;" " adox %%r13, %%r14;" " adcx %%rax, %%r14;" " mov $0, %%rax;"
" adox %%rdx, %%rax;" " adcx %%r8, %%rax;"
/* Compute src1[2] * src2 */
" movq 16(%1), %%rdx;"
" mulxq 0(%3), %%r8, %%r9;" " xor %%r10, %%r10;" " adcxq 16(%0), %%r8;" " movq %%r8, 16(%0);"
" mulxq 0(%3), %%r8, %%r9;" " xor %%r10d, %%r10d;" " adcxq 16(%0), %%r8;" " movq %%r8, 16(%0);"
" mulxq 8(%3), %%r10, %%r11;" " adox %%r9, %%r10;" " adcx %%rbx, %%r10;" " movq %%r10, 24(%0);"
" mulxq 16(%3), %%rbx, %%r13;" " adox %%r11, %%rbx;" " adcx %%r14, %%rbx;" " mov $0, %%r8;"
" mulxq 24(%3), %%r14, %%rdx;" " adox %%r13, %%r14;" " adcx %%rax, %%r14;" " mov $0, %%rax;"
" adox %%rdx, %%rax;" " adcx %%r8, %%rax;"
/* Compute src1[3] * src2 */
" movq 24(%1), %%rdx;"
" mulxq 0(%3), %%r8, %%r9;" " xor %%r10, %%r10;" " adcxq 24(%0), %%r8;" " movq %%r8, 24(%0);"
" mulxq 0(%3), %%r8, %%r9;" " xor %%r10d, %%r10d;" " adcxq 24(%0), %%r8;" " movq %%r8, 24(%0);"
" mulxq 8(%3), %%r10, %%r11;" " adox %%r9, %%r10;" " adcx %%rbx, %%r10;" " movq %%r10, 32(%0);"
" mulxq 16(%3), %%rbx, %%r13;" " adox %%r11, %%rbx;" " adcx %%r14, %%rbx;" " movq %%rbx, 40(%0);" " mov $0, %%r8;"
" mulxq 24(%3), %%r14, %%rdx;" " adox %%r13, %%r14;" " adcx %%rax, %%r14;" " movq %%r14, 48(%0);" " mov $0, %%rax;"
@ -200,7 +201,7 @@ static inline void fmul(u64 *out, const u64 *f1, const u64 *f2, u64 *tmp)
/* Step 1: Compute dst + carry == tmp_hi * 38 + tmp_lo */
" mov $38, %%rdx;"
" mulxq 32(%1), %%r8, %%r13;"
" xor %3, %3;"
" xor %k3, %k3;"
" adoxq 0(%1), %%r8;"
" mulxq 40(%1), %%r9, %%rbx;"
" adcx %%r13, %%r9;"
@ -246,28 +247,28 @@ static inline void fmul2(u64 *out, const u64 *f1, const u64 *f2, u64 *tmp)
/* Compute src1[0] * src2 */
" movq 0(%1), %%rdx;"
" mulxq 0(%3), %%r8, %%r9;" " xor %%r10, %%r10;" " movq %%r8, 0(%0);"
" mulxq 0(%3), %%r8, %%r9;" " xor %%r10d, %%r10d;" " movq %%r8, 0(%0);"
" mulxq 8(%3), %%r10, %%r11;" " adox %%r9, %%r10;" " movq %%r10, 8(%0);"
" mulxq 16(%3), %%rbx, %%r13;" " adox %%r11, %%rbx;"
" mulxq 24(%3), %%r14, %%rdx;" " adox %%r13, %%r14;" " mov $0, %%rax;"
" adox %%rdx, %%rax;"
/* Compute src1[1] * src2 */
" movq 8(%1), %%rdx;"
" mulxq 0(%3), %%r8, %%r9;" " xor %%r10, %%r10;" " adcxq 8(%0), %%r8;" " movq %%r8, 8(%0);"
" mulxq 0(%3), %%r8, %%r9;" " xor %%r10d, %%r10d;" " adcxq 8(%0), %%r8;" " movq %%r8, 8(%0);"
" mulxq 8(%3), %%r10, %%r11;" " adox %%r9, %%r10;" " adcx %%rbx, %%r10;" " movq %%r10, 16(%0);"
" mulxq 16(%3), %%rbx, %%r13;" " adox %%r11, %%rbx;" " adcx %%r14, %%rbx;" " mov $0, %%r8;"
" mulxq 24(%3), %%r14, %%rdx;" " adox %%r13, %%r14;" " adcx %%rax, %%r14;" " mov $0, %%rax;"
" adox %%rdx, %%rax;" " adcx %%r8, %%rax;"
/* Compute src1[2] * src2 */
" movq 16(%1), %%rdx;"
" mulxq 0(%3), %%r8, %%r9;" " xor %%r10, %%r10;" " adcxq 16(%0), %%r8;" " movq %%r8, 16(%0);"
" mulxq 0(%3), %%r8, %%r9;" " xor %%r10d, %%r10d;" " adcxq 16(%0), %%r8;" " movq %%r8, 16(%0);"
" mulxq 8(%3), %%r10, %%r11;" " adox %%r9, %%r10;" " adcx %%rbx, %%r10;" " movq %%r10, 24(%0);"
" mulxq 16(%3), %%rbx, %%r13;" " adox %%r11, %%rbx;" " adcx %%r14, %%rbx;" " mov $0, %%r8;"
" mulxq 24(%3), %%r14, %%rdx;" " adox %%r13, %%r14;" " adcx %%rax, %%r14;" " mov $0, %%rax;"
" adox %%rdx, %%rax;" " adcx %%r8, %%rax;"
/* Compute src1[3] * src2 */
" movq 24(%1), %%rdx;"
" mulxq 0(%3), %%r8, %%r9;" " xor %%r10, %%r10;" " adcxq 24(%0), %%r8;" " movq %%r8, 24(%0);"
" mulxq 0(%3), %%r8, %%r9;" " xor %%r10d, %%r10d;" " adcxq 24(%0), %%r8;" " movq %%r8, 24(%0);"
" mulxq 8(%3), %%r10, %%r11;" " adox %%r9, %%r10;" " adcx %%rbx, %%r10;" " movq %%r10, 32(%0);"
" mulxq 16(%3), %%rbx, %%r13;" " adox %%r11, %%rbx;" " adcx %%r14, %%rbx;" " movq %%rbx, 40(%0);" " mov $0, %%r8;"
" mulxq 24(%3), %%r14, %%rdx;" " adox %%r13, %%r14;" " adcx %%rax, %%r14;" " movq %%r14, 48(%0);" " mov $0, %%rax;"
@ -277,29 +278,29 @@ static inline void fmul2(u64 *out, const u64 *f1, const u64 *f2, u64 *tmp)
/* Compute src1[0] * src2 */
" movq 32(%1), %%rdx;"
" mulxq 32(%3), %%r8, %%r9;" " xor %%r10, %%r10;" " movq %%r8, 64(%0);"
" mulxq 40(%3), %%r10, %%r11;" " adox %%r9, %%r10;" " movq %%r10, 72(%0);"
" mulxq 32(%3), %%r8, %%r9;" " xor %%r10d, %%r10d;" " movq %%r8, 64(%0);"
" mulxq 40(%3), %%r10, %%r11;" " adox %%r9, %%r10;" " movq %%r10, 72(%0);"
" mulxq 48(%3), %%rbx, %%r13;" " adox %%r11, %%rbx;"
" mulxq 56(%3), %%r14, %%rdx;" " adox %%r13, %%r14;" " mov $0, %%rax;"
" adox %%rdx, %%rax;"
/* Compute src1[1] * src2 */
" movq 40(%1), %%rdx;"
" mulxq 32(%3), %%r8, %%r9;" " xor %%r10, %%r10;" " adcxq 72(%0), %%r8;" " movq %%r8, 72(%0);"
" mulxq 40(%3), %%r10, %%r11;" " adox %%r9, %%r10;" " adcx %%rbx, %%r10;" " movq %%r10, 80(%0);"
" mulxq 32(%3), %%r8, %%r9;" " xor %%r10d, %%r10d;" " adcxq 72(%0), %%r8;" " movq %%r8, 72(%0);"
" mulxq 40(%3), %%r10, %%r11;" " adox %%r9, %%r10;" " adcx %%rbx, %%r10;" " movq %%r10, 80(%0);"
" mulxq 48(%3), %%rbx, %%r13;" " adox %%r11, %%rbx;" " adcx %%r14, %%rbx;" " mov $0, %%r8;"
" mulxq 56(%3), %%r14, %%rdx;" " adox %%r13, %%r14;" " adcx %%rax, %%r14;" " mov $0, %%rax;"
" adox %%rdx, %%rax;" " adcx %%r8, %%rax;"
/* Compute src1[2] * src2 */
" movq 48(%1), %%rdx;"
" mulxq 32(%3), %%r8, %%r9;" " xor %%r10, %%r10;" " adcxq 80(%0), %%r8;" " movq %%r8, 80(%0);"
" mulxq 40(%3), %%r10, %%r11;" " adox %%r9, %%r10;" " adcx %%rbx, %%r10;" " movq %%r10, 88(%0);"
" mulxq 32(%3), %%r8, %%r9;" " xor %%r10d, %%r10d;" " adcxq 80(%0), %%r8;" " movq %%r8, 80(%0);"
" mulxq 40(%3), %%r10, %%r11;" " adox %%r9, %%r10;" " adcx %%rbx, %%r10;" " movq %%r10, 88(%0);"
" mulxq 48(%3), %%rbx, %%r13;" " adox %%r11, %%rbx;" " adcx %%r14, %%rbx;" " mov $0, %%r8;"
" mulxq 56(%3), %%r14, %%rdx;" " adox %%r13, %%r14;" " adcx %%rax, %%r14;" " mov $0, %%rax;"
" adox %%rdx, %%rax;" " adcx %%r8, %%rax;"
/* Compute src1[3] * src2 */
" movq 56(%1), %%rdx;"
" mulxq 32(%3), %%r8, %%r9;" " xor %%r10, %%r10;" " adcxq 88(%0), %%r8;" " movq %%r8, 88(%0);"
" mulxq 40(%3), %%r10, %%r11;" " adox %%r9, %%r10;" " adcx %%rbx, %%r10;" " movq %%r10, 96(%0);"
" mulxq 32(%3), %%r8, %%r9;" " xor %%r10d, %%r10d;" " adcxq 88(%0), %%r8;" " movq %%r8, 88(%0);"
" mulxq 40(%3), %%r10, %%r11;" " adox %%r9, %%r10;" " adcx %%rbx, %%r10;" " movq %%r10, 96(%0);"
" mulxq 48(%3), %%rbx, %%r13;" " adox %%r11, %%rbx;" " adcx %%r14, %%rbx;" " movq %%rbx, 104(%0);" " mov $0, %%r8;"
" mulxq 56(%3), %%r14, %%rdx;" " adox %%r13, %%r14;" " adcx %%rax, %%r14;" " movq %%r14, 112(%0);" " mov $0, %%rax;"
" adox %%rdx, %%rax;" " adcx %%r8, %%rax;" " movq %%rax, 120(%0);"
@ -312,7 +313,7 @@ static inline void fmul2(u64 *out, const u64 *f1, const u64 *f2, u64 *tmp)
/* Step 1: Compute dst + carry == tmp_hi * 38 + tmp_lo */
" mov $38, %%rdx;"
" mulxq 32(%1), %%r8, %%r13;"
" xor %3, %3;"
" xor %k3, %k3;"
" adoxq 0(%1), %%r8;"
" mulxq 40(%1), %%r9, %%rbx;"
" adcx %%r13, %%r9;"
@ -345,7 +346,7 @@ static inline void fmul2(u64 *out, const u64 *f1, const u64 *f2, u64 *tmp)
/* Step 1: Compute dst + carry == tmp_hi * 38 + tmp_lo */
" mov $38, %%rdx;"
" mulxq 96(%1), %%r8, %%r13;"
" xor %3, %3;"
" xor %k3, %k3;"
" adoxq 64(%1), %%r8;"
" mulxq 104(%1), %%r9, %%rbx;"
" adcx %%r13, %%r9;"
@ -516,7 +517,7 @@ static inline void fsqr(u64 *out, const u64 *f, u64 *tmp)
/* Step 1: Compute all partial products */
" movq 0(%1), %%rdx;" /* f[0] */
" mulxq 8(%1), %%r8, %%r14;" " xor %%r15, %%r15;" /* f[1]*f[0] */
" mulxq 8(%1), %%r8, %%r14;" " xor %%r15d, %%r15d;" /* f[1]*f[0] */
" mulxq 16(%1), %%r9, %%r10;" " adcx %%r14, %%r9;" /* f[2]*f[0] */
" mulxq 24(%1), %%rax, %%rcx;" " adcx %%rax, %%r10;" /* f[3]*f[0] */
" movq 24(%1), %%rdx;" /* f[3] */
@ -526,7 +527,7 @@ static inline void fsqr(u64 *out, const u64 *f, u64 *tmp)
" mulxq 16(%1), %%rax, %%rcx;" " mov $0, %%r14;" /* f[2]*f[1] */
/* Step 2: Compute two parallel carry chains */
" xor %%r15, %%r15;"
" xor %%r15d, %%r15d;"
" adox %%rax, %%r10;"
" adcx %%r8, %%r8;"
" adox %%rcx, %%r11;"
@ -563,7 +564,7 @@ static inline void fsqr(u64 *out, const u64 *f, u64 *tmp)
/* Step 1: Compute dst + carry == tmp_hi * 38 + tmp_lo */
" mov $38, %%rdx;"
" mulxq 32(%1), %%r8, %%r13;"
" xor %%rcx, %%rcx;"
" xor %%ecx, %%ecx;"
" adoxq 0(%1), %%r8;"
" mulxq 40(%1), %%r9, %%rbx;"
" adcx %%r13, %%r9;"
@ -607,7 +608,7 @@ static inline void fsqr2(u64 *out, const u64 *f, u64 *tmp)
asm volatile(
/* Step 1: Compute all partial products */
" movq 0(%1), %%rdx;" /* f[0] */
" mulxq 8(%1), %%r8, %%r14;" " xor %%r15, %%r15;" /* f[1]*f[0] */
" mulxq 8(%1), %%r8, %%r14;" " xor %%r15d, %%r15d;" /* f[1]*f[0] */
" mulxq 16(%1), %%r9, %%r10;" " adcx %%r14, %%r9;" /* f[2]*f[0] */
" mulxq 24(%1), %%rax, %%rcx;" " adcx %%rax, %%r10;" /* f[3]*f[0] */
" movq 24(%1), %%rdx;" /* f[3] */
@ -617,7 +618,7 @@ static inline void fsqr2(u64 *out, const u64 *f, u64 *tmp)
" mulxq 16(%1), %%rax, %%rcx;" " mov $0, %%r14;" /* f[2]*f[1] */
/* Step 2: Compute two parallel carry chains */
" xor %%r15, %%r15;"
" xor %%r15d, %%r15d;"
" adox %%rax, %%r10;"
" adcx %%r8, %%r8;"
" adox %%rcx, %%r11;"
@ -647,7 +648,7 @@ static inline void fsqr2(u64 *out, const u64 *f, u64 *tmp)
/* Step 1: Compute all partial products */
" movq 32(%1), %%rdx;" /* f[0] */
" mulxq 40(%1), %%r8, %%r14;" " xor %%r15, %%r15;" /* f[1]*f[0] */
" mulxq 40(%1), %%r8, %%r14;" " xor %%r15d, %%r15d;" /* f[1]*f[0] */
" mulxq 48(%1), %%r9, %%r10;" " adcx %%r14, %%r9;" /* f[2]*f[0] */
" mulxq 56(%1), %%rax, %%rcx;" " adcx %%rax, %%r10;" /* f[3]*f[0] */
" movq 56(%1), %%rdx;" /* f[3] */
@ -657,7 +658,7 @@ static inline void fsqr2(u64 *out, const u64 *f, u64 *tmp)
" mulxq 48(%1), %%rax, %%rcx;" " mov $0, %%r14;" /* f[2]*f[1] */
/* Step 2: Compute two parallel carry chains */
" xor %%r15, %%r15;"
" xor %%r15d, %%r15d;"
" adox %%rax, %%r10;"
" adcx %%r8, %%r8;"
" adox %%rcx, %%r11;"
@ -692,7 +693,7 @@ static inline void fsqr2(u64 *out, const u64 *f, u64 *tmp)
/* Step 1: Compute dst + carry == tmp_hi * 38 + tmp_lo */
" mov $38, %%rdx;"
" mulxq 32(%1), %%r8, %%r13;"
" xor %%rcx, %%rcx;"
" xor %%ecx, %%ecx;"
" adoxq 0(%1), %%r8;"
" mulxq 40(%1), %%r9, %%rbx;"
" adcx %%r13, %%r9;"
@ -725,7 +726,7 @@ static inline void fsqr2(u64 *out, const u64 *f, u64 *tmp)
/* Step 1: Compute dst + carry == tmp_hi * 38 + tmp_lo */
" mov $38, %%rdx;"
" mulxq 96(%1), %%r8, %%r13;"
" xor %%rcx, %%rcx;"
" xor %%ecx, %%ecx;"
" adoxq 64(%1), %%r8;"
" mulxq 104(%1), %%r9, %%rbx;"
" adcx %%r13, %%r9;"


@ -10,6 +10,7 @@
#include <crypto/internal/simd.h>
#include <crypto/nhpoly1305.h>
#include <linux/module.h>
#include <linux/sizes.h>
#include <asm/simd.h>
asmlinkage void nh_avx2(const u32 *key, const u8 *message, size_t message_len,


@ -10,6 +10,7 @@
#include <crypto/internal/simd.h>
#include <crypto/nhpoly1305.h>
#include <linux/module.h>
#include <linux/sizes.h>
#include <asm/simd.h>
asmlinkage void nh_sse2(const u32 *key, const u8 *message, size_t message_len,


@ -246,7 +246,7 @@ $code.=<<___ if (!$kernel);
___
&declare_function("poly1305_init_x86_64", 32, 3);
$code.=<<___;
xor %rax,%rax
xor %eax,%eax
mov %rax,0($ctx) # initialize hash value
mov %rax,8($ctx)
mov %rax,16($ctx)
@ -2853,7 +2853,7 @@ $code.=<<___;
.type poly1305_init_base2_44,\@function,3
.align 32
poly1305_init_base2_44:
xor %rax,%rax
xor %eax,%eax
mov %rax,0($ctx) # initialize hash value
mov %rax,8($ctx)
mov %rax,16($ctx)
@ -3947,7 +3947,7 @@ xor128_decrypt_n_pad:
mov \$16,$len
sub %r10,$len
xor %eax,%eax
xor %r11,%r11
xor %r11d,%r11d
.Loop_dec_byte:
mov ($inp,$otp),%r11b
mov ($otp),%al
@ -4085,7 +4085,7 @@ avx_handler:
.long 0xa548f3fc # cld; rep movsq
mov $disp,%rsi
xor %rcx,%rcx # arg1, UNW_FLAG_NHANDLER
xor %ecx,%ecx # arg1, UNW_FLAG_NHANDLER
mov 8(%rsi),%rdx # arg2, disp->ImageBase
mov 0(%rsi),%r8 # arg3, disp->ControlPc
mov 16(%rsi),%r9 # arg4, disp->FunctionEntry


@ -11,6 +11,7 @@
#include <linux/jump_label.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sizes.h>
#include <asm/intel-family.h>
#include <asm/simd.h>
@ -157,9 +158,6 @@ static unsigned int crypto_poly1305_setdctxkey(struct poly1305_desc_ctx *dctx,
dctx->s[1] = get_unaligned_le32(&inp[4]);
dctx->s[2] = get_unaligned_le32(&inp[8]);
dctx->s[3] = get_unaligned_le32(&inp[12]);
inp += POLY1305_BLOCK_SIZE;
len -= POLY1305_BLOCK_SIZE;
acc += POLY1305_BLOCK_SIZE;
dctx->sset = true;
}
}


@ -260,6 +260,23 @@ config CRYPTO_ECRDSA
standard algorithms (called GOST algorithms). Only signature verification
is implemented.
config CRYPTO_SM2
tristate "SM2 algorithm"
select CRYPTO_SM3
select CRYPTO_AKCIPHER
select CRYPTO_MANAGER
select MPILIB
select ASN1
help
Generic implementation of the SM2 public key algorithm. It was
published by the State Encryption Management Bureau, China,
as specified by OSCCA GM/T 0003.1-2012 -- 0003.5-2012.
References:
https://tools.ietf.org/html/draft-shen-sm2-ecdsa-02
http://www.oscca.gov.cn/sca/xxgk/2010-12/17/content_1002386.shtml
http://www.gmbz.org.cn/main/bzlb.html
config CRYPTO_CURVE25519
tristate "Curve25519 algorithm"
select CRYPTO_KPP
@ -1185,6 +1202,7 @@ config CRYPTO_AES_PPC_SPE
config CRYPTO_ANUBIS
tristate "Anubis cipher algorithm"
depends on CRYPTO_USER_API_ENABLE_OBSOLETE
select CRYPTO_ALGAPI
help
Anubis cipher algorithm.
@ -1199,6 +1217,7 @@ config CRYPTO_ANUBIS
config CRYPTO_ARC4
tristate "ARC4 cipher algorithm"
depends on CRYPTO_USER_API_ENABLE_OBSOLETE
select CRYPTO_SKCIPHER
select CRYPTO_LIB_ARC4
help
@ -1423,6 +1442,7 @@ config CRYPTO_FCRYPT
config CRYPTO_KHAZAD
tristate "Khazad cipher algorithm"
depends on CRYPTO_USER_API_ENABLE_OBSOLETE
select CRYPTO_ALGAPI
help
Khazad cipher algorithm.
@ -1486,6 +1506,7 @@ config CRYPTO_CHACHA_MIPS
config CRYPTO_SEED
tristate "SEED cipher algorithm"
depends on CRYPTO_USER_API_ENABLE_OBSOLETE
select CRYPTO_ALGAPI
help
SEED cipher algorithm (RFC4269).
@ -1612,6 +1633,7 @@ config CRYPTO_SM4
config CRYPTO_TEA
tristate "TEA, XTEA and XETA cipher algorithms"
depends on CRYPTO_USER_API_ENABLE_OBSOLETE
select CRYPTO_ALGAPI
help
TEA cipher algorithm.
@ -1870,6 +1892,15 @@ config CRYPTO_USER_API_RNG
This option enables the user-space interface for random
number generator algorithms.
config CRYPTO_USER_API_RNG_CAVP
bool "Enable CAVP testing of DRBG"
depends on CRYPTO_USER_API_RNG && CRYPTO_DRBG
help
This option enables extra API for CAVP testing via the user-space
interface: resetting of DRBG entropy, and providing Additional Data.
This should only be enabled for CAVP testing. You should say
no unless you know what this is.
config CRYPTO_USER_API_AEAD
tristate "User-space interface for AEAD cipher algorithms"
depends on NET
@ -1881,6 +1912,15 @@ config CRYPTO_USER_API_AEAD
This option enables the user-space interface for AEAD
cipher algorithms.
config CRYPTO_USER_API_ENABLE_OBSOLETE
bool "Enable obsolete cryptographic algorithms for userspace"
depends on CRYPTO_USER_API
default y
help
Allow obsolete cryptographic algorithms to be selected that have
already been phased out from internal use by the kernel, and are
only useful for userspace clients that still rely on them.
config CRYPTO_STATS
bool "Crypto usage statistics for User-space"
depends on CRYPTO_USER


@ -42,6 +42,14 @@ rsa_generic-y += rsa_helper.o
rsa_generic-y += rsa-pkcs1pad.o
obj-$(CONFIG_CRYPTO_RSA) += rsa_generic.o
$(obj)/sm2signature.asn1.o: $(obj)/sm2signature.asn1.c $(obj)/sm2signature.asn1.h
$(obj)/sm2.o: $(obj)/sm2signature.asn1.h
sm2_generic-y += sm2signature.asn1.o
sm2_generic-y += sm2.o
obj-$(CONFIG_CRYPTO_SM2) += sm2_generic.o
crypto_acompress-y := acompress.o
crypto_acompress-y += scompress.o
obj-$(CONFIG_CRYPTO_ACOMP2) += crypto_acompress.o


@ -254,6 +254,14 @@ static int alg_setsockopt(struct socket *sock, int level, int optname,
if (!type->setauthsize)
goto unlock;
err = type->setauthsize(ask->private, optlen);
break;
case ALG_SET_DRBG_ENTROPY:
if (sock->state == SS_CONNECTED)
goto unlock;
if (!type->setentropy)
goto unlock;
err = type->setentropy(ask->private, optval, optlen);
}
unlock:
@ -286,6 +294,11 @@ int af_alg_accept(struct sock *sk, struct socket *newsock, bool kern)
security_sock_graft(sk2, newsock);
security_sk_clone(sk, sk2);
/*
* newsock->ops assigned here to allow type->accept call to override
* them when required.
*/
newsock->ops = type->ops;
err = type->accept(ask->private, sk2);
nokey = err == -ENOKEY;
@ -304,7 +317,6 @@ int af_alg_accept(struct sock *sk, struct socket *newsock, bool kern)
alg_sk(sk2)->parent = sk;
alg_sk(sk2)->type = type;
newsock->ops = type->ops;
newsock->state = SS_CONNECTED;
if (nokey)


@ -10,7 +10,6 @@
#include <crypto/internal/hash.h>
#include <crypto/scatterwalk.h>
#include <linux/bug.h>
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/module.h>
@ -46,10 +45,7 @@ static int hash_walk_next(struct crypto_hash_walk *walk)
unsigned int nbytes = min(walk->entrylen,
((unsigned int)(PAGE_SIZE)) - offset);
if (walk->flags & CRYPTO_ALG_ASYNC)
walk->data = kmap(walk->pg);
else
walk->data = kmap_atomic(walk->pg);
walk->data = kmap_atomic(walk->pg);
walk->data += offset;
if (offset & alignmask) {
@ -99,16 +95,8 @@ int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
}
}
if (walk->flags & CRYPTO_ALG_ASYNC)
kunmap(walk->pg);
else {
kunmap_atomic(walk->data);
/*
* The may sleep test only makes sense for sync users.
* Async users don't need to sleep here anyway.
*/
crypto_yield(walk->flags);
}
kunmap_atomic(walk->data);
crypto_yield(walk->flags);
if (err)
return err;
@ -140,33 +128,12 @@ int crypto_hash_walk_first(struct ahash_request *req,
walk->alignmask = crypto_ahash_alignmask(crypto_ahash_reqtfm(req));
walk->sg = req->src;
walk->flags = req->base.flags & CRYPTO_TFM_REQ_MASK;
walk->flags = req->base.flags;
return hash_walk_new_entry(walk);
}
EXPORT_SYMBOL_GPL(crypto_hash_walk_first);
int crypto_ahash_walk_first(struct ahash_request *req,
struct crypto_hash_walk *walk)
{
walk->total = req->nbytes;
if (!walk->total) {
walk->entrylen = 0;
return 0;
}
walk->alignmask = crypto_ahash_alignmask(crypto_ahash_reqtfm(req));
walk->sg = req->src;
walk->flags = req->base.flags & CRYPTO_TFM_REQ_MASK;
walk->flags |= CRYPTO_ALG_ASYNC;
BUILD_BUG_ON(CRYPTO_TFM_REQ_MASK & CRYPTO_ALG_ASYNC);
return hash_walk_new_entry(walk);
}
EXPORT_SYMBOL_GPL(crypto_ahash_walk_first);
static int ahash_setkey_unaligned(struct crypto_ahash *tfm, const u8 *key,
unsigned int keylen)
{
@ -477,6 +444,14 @@ static int ahash_def_finup(struct ahash_request *req)
return ahash_def_finup_finish1(req, err);
}
static void crypto_ahash_exit_tfm(struct crypto_tfm *tfm)
{
struct crypto_ahash *hash = __crypto_ahash_cast(tfm);
struct ahash_alg *alg = crypto_ahash_alg(hash);
alg->exit_tfm(hash);
}
static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
{
struct crypto_ahash *hash = __crypto_ahash_cast(tfm);
@ -500,7 +475,10 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
ahash_set_needkey(hash);
}
return 0;
if (alg->exit_tfm)
tfm->exit = crypto_ahash_exit_tfm;
return alg->init_tfm ? alg->init_tfm(hash) : 0;
}
static unsigned int crypto_ahash_extsize(struct crypto_alg *alg)


@ -78,7 +78,7 @@ static int crypto_aead_copy_sgl(struct crypto_sync_skcipher *null_tfm,
SYNC_SKCIPHER_REQUEST_ON_STACK(skreq, null_tfm);
skcipher_request_set_sync_tfm(skreq, null_tfm);
skcipher_request_set_callback(skreq, CRYPTO_TFM_REQ_MAY_BACKLOG,
skcipher_request_set_callback(skreq, CRYPTO_TFM_REQ_MAY_SLEEP,
NULL, NULL);
skcipher_request_set_crypt(skreq, src, dst, len, NULL);
@ -120,7 +120,7 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
/*
* Make sure sufficient data is present -- note, the same check is
* is also present in sendmsg/sendpage. The checks in sendpage/sendmsg
* also present in sendmsg/sendpage. The checks in sendpage/sendmsg
* shall provide an information to the data sender that something is
* wrong, but they are irrelevant to maintain the kernel integrity.
* We need this check here too in case user space decides to not honor
@ -291,19 +291,20 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
areq->outlen = outlen;
aead_request_set_callback(&areq->cra_u.aead_req,
CRYPTO_TFM_REQ_MAY_BACKLOG,
CRYPTO_TFM_REQ_MAY_SLEEP,
af_alg_async_cb, areq);
err = ctx->enc ? crypto_aead_encrypt(&areq->cra_u.aead_req) :
crypto_aead_decrypt(&areq->cra_u.aead_req);
/* AIO operation in progress */
if (err == -EINPROGRESS || err == -EBUSY)
if (err == -EINPROGRESS)
return -EIOCBQUEUED;
sock_put(sk);
} else {
/* Synchronous operation */
aead_request_set_callback(&areq->cra_u.aead_req,
CRYPTO_TFM_REQ_MAY_SLEEP |
CRYPTO_TFM_REQ_MAY_BACKLOG,
crypto_req_done, &ctx->wait);
err = crypto_wait_req(ctx->enc ?


@ -38,6 +38,7 @@
* DAMAGE.
*/
#include <linux/capability.h>
#include <linux/module.h>
#include <crypto/rng.h>
#include <linux/random.h>
@ -53,15 +54,26 @@ struct rng_ctx {
#define MAXSIZE 128
unsigned int len;
struct crypto_rng *drng;
u8 *addtl;
size_t addtl_len;
};
static int rng_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
int flags)
struct rng_parent_ctx {
struct crypto_rng *drng;
u8 *entropy;
};
static void rng_reset_addtl(struct rng_ctx *ctx)
{
struct sock *sk = sock->sk;
struct alg_sock *ask = alg_sk(sk);
struct rng_ctx *ctx = ask->private;
int err;
kfree_sensitive(ctx->addtl);
ctx->addtl = NULL;
ctx->addtl_len = 0;
}
static int _rng_recvmsg(struct crypto_rng *drng, struct msghdr *msg, size_t len,
u8 *addtl, size_t addtl_len)
{
int err = 0;
int genlen = 0;
u8 result[MAXSIZE];
@ -82,7 +94,7 @@ static int rng_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
* seeding as they automatically seed. The X9.31 DRNG will return
* an error if it was not seeded properly.
*/
genlen = crypto_rng_get_bytes(ctx->drng, result, len);
genlen = crypto_rng_generate(drng, addtl, addtl_len, result, len);
if (genlen < 0)
return genlen;
@ -92,6 +104,63 @@ static int rng_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
return err ? err : len;
}
static int rng_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
int flags)
{
struct sock *sk = sock->sk;
struct alg_sock *ask = alg_sk(sk);
struct rng_ctx *ctx = ask->private;
return _rng_recvmsg(ctx->drng, msg, len, NULL, 0);
}
static int rng_test_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
int flags)
{
struct sock *sk = sock->sk;
struct alg_sock *ask = alg_sk(sk);
struct rng_ctx *ctx = ask->private;
int ret;
lock_sock(sock->sk);
ret = _rng_recvmsg(ctx->drng, msg, len, ctx->addtl, ctx->addtl_len);
rng_reset_addtl(ctx);
release_sock(sock->sk);
return ret;
}
static int rng_test_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
{
int err;
struct alg_sock *ask = alg_sk(sock->sk);
struct rng_ctx *ctx = ask->private;
lock_sock(sock->sk);
if (len > MAXSIZE) {
err = -EMSGSIZE;
goto unlock;
}
rng_reset_addtl(ctx);
ctx->addtl = kmalloc(len, GFP_KERNEL);
if (!ctx->addtl) {
err = -ENOMEM;
goto unlock;
}
err = memcpy_from_msg(ctx->addtl, msg, len);
if (err) {
rng_reset_addtl(ctx);
goto unlock;
}
ctx->addtl_len = len;
unlock:
release_sock(sock->sk);
return err ? err : len;
}
static struct proto_ops algif_rng_ops = {
.family = PF_ALG,
@ -111,14 +180,53 @@ static struct proto_ops algif_rng_ops = {
.recvmsg = rng_recvmsg,
};
static struct proto_ops __maybe_unused algif_rng_test_ops = {
.family = PF_ALG,
.connect = sock_no_connect,
.socketpair = sock_no_socketpair,
.getname = sock_no_getname,
.ioctl = sock_no_ioctl,
.listen = sock_no_listen,
.shutdown = sock_no_shutdown,
.mmap = sock_no_mmap,
.bind = sock_no_bind,
.accept = sock_no_accept,
.sendpage = sock_no_sendpage,
.release = af_alg_release,
.recvmsg = rng_test_recvmsg,
.sendmsg = rng_test_sendmsg,
};
static void *rng_bind(const char *name, u32 type, u32 mask)
{
return crypto_alloc_rng(name, type, mask);
struct rng_parent_ctx *pctx;
struct crypto_rng *rng;
pctx = kzalloc(sizeof(*pctx), GFP_KERNEL);
if (!pctx)
return ERR_PTR(-ENOMEM);
rng = crypto_alloc_rng(name, type, mask);
if (IS_ERR(rng)) {
kfree(pctx);
return ERR_CAST(rng);
}
pctx->drng = rng;
return pctx;
}
static void rng_release(void *private)
{
crypto_free_rng(private);
struct rng_parent_ctx *pctx = private;
if (unlikely(!pctx))
return;
crypto_free_rng(pctx->drng);
kfree_sensitive(pctx->entropy);
kfree_sensitive(pctx);
}
static void rng_sock_destruct(struct sock *sk)
@ -126,6 +234,7 @@ static void rng_sock_destruct(struct sock *sk)
struct alg_sock *ask = alg_sk(sk);
struct rng_ctx *ctx = ask->private;
rng_reset_addtl(ctx);
sock_kfree_s(sk, ctx, ctx->len);
af_alg_release_parent(sk);
}
@ -133,6 +242,7 @@ static void rng_sock_destruct(struct sock *sk)
static int rng_accept_parent(void *private, struct sock *sk)
{
struct rng_ctx *ctx;
struct rng_parent_ctx *pctx = private;
struct alg_sock *ask = alg_sk(sk);
unsigned int len = sizeof(*ctx);
@ -141,6 +251,8 @@ static int rng_accept_parent(void *private, struct sock *sk)
return -ENOMEM;
ctx->len = len;
ctx->addtl = NULL;
ctx->addtl_len = 0;
/*
* No seeding done at that point -- if multiple accepts are
@ -148,20 +260,58 @@ static int rng_accept_parent(void *private, struct sock *sk)
* state of the RNG.
*/
ctx->drng = private;
ctx->drng = pctx->drng;
ask->private = ctx;
sk->sk_destruct = rng_sock_destruct;
/*
* Non NULL pctx->entropy means that CAVP test has been initiated on
* this socket, replace proto_ops algif_rng_ops with algif_rng_test_ops.
*/
if (IS_ENABLED(CONFIG_CRYPTO_USER_API_RNG_CAVP) && pctx->entropy)
sk->sk_socket->ops = &algif_rng_test_ops;
return 0;
}
static int rng_setkey(void *private, const u8 *seed, unsigned int seedlen)
{
struct rng_parent_ctx *pctx = private;
/*
* Check whether seedlen is of sufficient size is done in RNG
* implementations.
*/
return crypto_rng_reset(private, seed, seedlen);
return crypto_rng_reset(pctx->drng, seed, seedlen);
}
static int __maybe_unused rng_setentropy(void *private, sockptr_t entropy,
unsigned int len)
{
struct rng_parent_ctx *pctx = private;
u8 *kentropy = NULL;
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
if (pctx->entropy)
return -EINVAL;
if (len > MAXSIZE)
return -EMSGSIZE;
if (len) {
kentropy = memdup_sockptr(entropy, len);
if (IS_ERR(kentropy))
return PTR_ERR(kentropy);
}
crypto_rng_alg(pctx->drng)->set_ent(pctx->drng, kentropy, len);
/*
* Since rng doesn't perform any memory management for the entropy
* buffer, save kentropy pointer to pctx now to free it after use.
*/
pctx->entropy = kentropy;
return 0;
}
static const struct af_alg_type algif_type_rng = {
@ -169,6 +319,9 @@ static const struct af_alg_type algif_type_rng = {
.release = rng_release,
.accept = rng_accept_parent,
.setkey = rng_setkey,
#ifdef CONFIG_CRYPTO_USER_API_RNG_CAVP
.setentropy = rng_setentropy,
#endif
.ops = &algif_rng_ops,
.name = "rng",
.owner = THIS_MODULE


@ -123,7 +123,7 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
crypto_skcipher_decrypt(&areq->cra_u.skcipher_req);
/* AIO operation in progress */
if (err == -EINPROGRESS || err == -EBUSY)
if (err == -EINPROGRESS)
return -EIOCBQUEUED;
sock_put(sk);


@ -11,7 +11,9 @@
#include <crypto/arc4.h>
#include <crypto/internal/skcipher.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sched.h>
static int crypto_arc4_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
unsigned int key_len)
@ -39,6 +41,14 @@ static int crypto_arc4_crypt(struct skcipher_request *req)
return err;
}
static int crypto_arc4_init(struct crypto_skcipher *tfm)
{
pr_warn_ratelimited("\"%s\" (%ld) uses obsolete ecb(arc4) skcipher\n",
current->comm, (unsigned long)current->pid);
return 0;
}
static struct skcipher_alg arc4_alg = {
/*
* For legacy reasons, this is named "ecb(arc4)", not "arc4".
@ -55,6 +65,7 @@ static struct skcipher_alg arc4_alg = {
.setkey = crypto_arc4_setkey,
.encrypt = crypto_arc4_crypt,
.decrypt = crypto_arc4_crypt,
.init = crypto_arc4_init,
};
static int __init arc4_init(void)


@ -17,6 +17,8 @@
#include <keys/asymmetric-subtype.h>
#include <crypto/public_key.h>
#include <crypto/akcipher.h>
#include <crypto/sm2.h>
#include <crypto/sm3_base.h>
MODULE_DESCRIPTION("In-software asymmetric public-key subtype");
MODULE_AUTHOR("Red Hat, Inc.");
@ -246,6 +248,61 @@ error_free_tfm:
return ret;
}
#if IS_REACHABLE(CONFIG_CRYPTO_SM2)
static int cert_sig_digest_update(const struct public_key_signature *sig,
struct crypto_akcipher *tfm_pkey)
{
struct crypto_shash *tfm;
struct shash_desc *desc;
size_t desc_size;
unsigned char dgst[SM3_DIGEST_SIZE];
int ret;
BUG_ON(!sig->data);
ret = sm2_compute_z_digest(tfm_pkey, SM2_DEFAULT_USERID,
SM2_DEFAULT_USERID_LEN, dgst);
if (ret)
return ret;
tfm = crypto_alloc_shash(sig->hash_algo, 0, 0);
if (IS_ERR(tfm))
return PTR_ERR(tfm);
desc_size = crypto_shash_descsize(tfm) + sizeof(*desc);
desc = kzalloc(desc_size, GFP_KERNEL);
if (!desc) {
ret = -ENOMEM;
goto error_free_tfm;
}
desc->tfm = tfm;
ret = crypto_shash_init(desc);
if (ret < 0)
goto error_free_desc;
ret = crypto_shash_update(desc, dgst, SM3_DIGEST_SIZE);
if (ret < 0)
goto error_free_desc;
ret = crypto_shash_finup(desc, sig->data, sig->data_size, sig->digest);
error_free_desc:
kfree(desc);
error_free_tfm:
crypto_free_shash(tfm);
return ret;
}
#else
static inline int cert_sig_digest_update(
const struct public_key_signature *sig,
struct crypto_akcipher *tfm_pkey)
{
return -ENOTSUPP;
}
#endif /* ! IS_REACHABLE(CONFIG_CRYPTO_SM2) */
/*
* Verify a signature using a public key.
*/
@ -299,6 +356,12 @@ int public_key_verify_signature(const struct public_key *pkey,
if (ret)
goto error_free_key;
if (strcmp(sig->pkey_algo, "sm2") == 0 && sig->data_size) {
ret = cert_sig_digest_update(sig, tfm);
if (ret)
goto error_free_key;
}
sg_init_table(src_sg, 2);
sg_set_buf(&src_sg[0], sig->s, sig->s_size);
sg_set_buf(&src_sg[1], sig->digest, sig->digest_size);


@ -234,6 +234,10 @@ int x509_note_pkey_algo(void *context, size_t hdrlen,
case OID_gost2012Signature512:
ctx->cert->sig->hash_algo = "streebog512";
goto ecrdsa;
case OID_SM2_with_SM3:
ctx->cert->sig->hash_algo = "sm3";
goto sm2;
}
rsa_pkcs1:
@ -246,6 +250,11 @@ ecrdsa:
ctx->cert->sig->encoding = "raw";
ctx->algo_oid = ctx->last_oid;
return 0;
sm2:
ctx->cert->sig->pkey_algo = "sm2";
ctx->cert->sig->encoding = "raw";
ctx->algo_oid = ctx->last_oid;
return 0;
}
/*
@ -266,7 +275,8 @@ int x509_note_signature(void *context, size_t hdrlen,
}
if (strcmp(ctx->cert->sig->pkey_algo, "rsa") == 0 ||
strcmp(ctx->cert->sig->pkey_algo, "ecrdsa") == 0) {
strcmp(ctx->cert->sig->pkey_algo, "ecrdsa") == 0 ||
strcmp(ctx->cert->sig->pkey_algo, "sm2") == 0) {
/* Discard the BIT STRING metadata */
if (vlen < 1 || *(const u8 *)value != 0)
return -EBADMSG;
@ -451,13 +461,20 @@ int x509_extract_key_data(void *context, size_t hdrlen,
struct x509_parse_context *ctx = context;
ctx->key_algo = ctx->last_oid;
if (ctx->last_oid == OID_rsaEncryption)
switch (ctx->last_oid) {
case OID_rsaEncryption:
ctx->cert->pub->pkey_algo = "rsa";
else if (ctx->last_oid == OID_gost2012PKey256 ||
ctx->last_oid == OID_gost2012PKey512)
break;
case OID_gost2012PKey256:
case OID_gost2012PKey512:
ctx->cert->pub->pkey_algo = "ecrdsa";
else
break;
case OID_id_ecPublicKey:
ctx->cert->pub->pkey_algo = "sm2";
break;
default:
return -ENOPKG;
}
/* Discard the BIT STRING metadata */
if (vlen < 1 || *(const u8 *)value != 0)


@ -30,6 +30,9 @@ int x509_get_sig_params(struct x509_certificate *cert)
pr_devel("==>%s()\n", __func__);
sig->data = cert->tbs;
sig->data_size = cert->tbs_size;
if (!cert->pub->pkey_algo)
cert->unsupported_key = true;


@ -6,7 +6,6 @@
*/
#include <crypto/algapi.h>
#include <crypto/cbc.h>
#include <crypto/internal/skcipher.h>
#include <linux/err.h>
#include <linux/init.h>
@ -14,34 +13,157 @@
#include <linux/log2.h>
#include <linux/module.h>
static inline void crypto_cbc_encrypt_one(struct crypto_skcipher *tfm,
const u8 *src, u8 *dst)
static int crypto_cbc_encrypt_segment(struct skcipher_walk *walk,
struct crypto_skcipher *skcipher)
{
crypto_cipher_encrypt_one(skcipher_cipher_simple(tfm), dst, src);
unsigned int bsize = crypto_skcipher_blocksize(skcipher);
void (*fn)(struct crypto_tfm *, u8 *, const u8 *);
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
u8 *dst = walk->dst.virt.addr;
struct crypto_cipher *cipher;
struct crypto_tfm *tfm;
u8 *iv = walk->iv;
cipher = skcipher_cipher_simple(skcipher);
tfm = crypto_cipher_tfm(cipher);
fn = crypto_cipher_alg(cipher)->cia_encrypt;
do {
crypto_xor(iv, src, bsize);
fn(tfm, dst, iv);
memcpy(iv, dst, bsize);
src += bsize;
dst += bsize;
} while ((nbytes -= bsize) >= bsize);
return nbytes;
}
static int crypto_cbc_encrypt_inplace(struct skcipher_walk *walk,
struct crypto_skcipher *skcipher)
{
unsigned int bsize = crypto_skcipher_blocksize(skcipher);
void (*fn)(struct crypto_tfm *, u8 *, const u8 *);
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
struct crypto_cipher *cipher;
struct crypto_tfm *tfm;
u8 *iv = walk->iv;
cipher = skcipher_cipher_simple(skcipher);
tfm = crypto_cipher_tfm(cipher);
fn = crypto_cipher_alg(cipher)->cia_encrypt;
do {
crypto_xor(src, iv, bsize);
fn(tfm, src, src);
iv = src;
src += bsize;
} while ((nbytes -= bsize) >= bsize);
memcpy(walk->iv, iv, bsize);
return nbytes;
}
static int crypto_cbc_encrypt(struct skcipher_request *req)
{
return crypto_cbc_encrypt_walk(req, crypto_cbc_encrypt_one);
}
static inline void crypto_cbc_decrypt_one(struct crypto_skcipher *tfm,
const u8 *src, u8 *dst)
{
crypto_cipher_decrypt_one(skcipher_cipher_simple(tfm), dst, src);
}
static int crypto_cbc_decrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
struct skcipher_walk walk;
int err;
err = skcipher_walk_virt(&walk, req, false);
while (walk.nbytes) {
err = crypto_cbc_decrypt_blocks(&walk, tfm,
crypto_cbc_decrypt_one);
if (walk.src.virt.addr == walk.dst.virt.addr)
err = crypto_cbc_encrypt_inplace(&walk, skcipher);
else
err = crypto_cbc_encrypt_segment(&walk, skcipher);
err = skcipher_walk_done(&walk, err);
}
return err;
}
static int crypto_cbc_decrypt_segment(struct skcipher_walk *walk,
struct crypto_skcipher *skcipher)
{
unsigned int bsize = crypto_skcipher_blocksize(skcipher);
void (*fn)(struct crypto_tfm *, u8 *, const u8 *);
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
u8 *dst = walk->dst.virt.addr;
struct crypto_cipher *cipher;
struct crypto_tfm *tfm;
u8 *iv = walk->iv;
cipher = skcipher_cipher_simple(skcipher);
tfm = crypto_cipher_tfm(cipher);
fn = crypto_cipher_alg(cipher)->cia_decrypt;
do {
fn(tfm, dst, src);
crypto_xor(dst, iv, bsize);
iv = src;
src += bsize;
dst += bsize;
} while ((nbytes -= bsize) >= bsize);
memcpy(walk->iv, iv, bsize);
return nbytes;
}
static int crypto_cbc_decrypt_inplace(struct skcipher_walk *walk,
struct crypto_skcipher *skcipher)
{
unsigned int bsize = crypto_skcipher_blocksize(skcipher);
void (*fn)(struct crypto_tfm *, u8 *, const u8 *);
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
u8 last_iv[MAX_CIPHER_BLOCKSIZE];
struct crypto_cipher *cipher;
struct crypto_tfm *tfm;
cipher = skcipher_cipher_simple(skcipher);
tfm = crypto_cipher_tfm(cipher);
fn = crypto_cipher_alg(cipher)->cia_decrypt;
/* Start of the last block. */
src += nbytes - (nbytes & (bsize - 1)) - bsize;
memcpy(last_iv, src, bsize);
for (;;) {
fn(tfm, src, src);
if ((nbytes -= bsize) < bsize)
break;
crypto_xor(src, src - bsize, bsize);
src -= bsize;
}
crypto_xor(src, walk->iv, bsize);
memcpy(walk->iv, last_iv, bsize);
return nbytes;
}
static int crypto_cbc_decrypt(struct skcipher_request *req)
{
struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
struct skcipher_walk walk;
int err;
err = skcipher_walk_virt(&walk, req, false);
while (walk.nbytes) {
if (walk.src.virt.addr == walk.dst.virt.addr)
err = crypto_cbc_decrypt_inplace(&walk, skcipher);
else
err = crypto_cbc_decrypt_segment(&walk, skcipher);
err = skcipher_walk_done(&walk, err);
}
return err;
}

View File

@ -15,7 +15,7 @@
* pages = {},
* month = {June},
*}
* Used by the iSCSI driver, possibly others, and derived from the
* Used by the iSCSI driver, possibly others, and derived from
* the iscsi-crc.c module of the linux-iscsi driver at
* http://linux-iscsi.sourceforge.net.
*
@ -50,7 +50,7 @@ struct chksum_desc_ctx {
};
/*
* Steps through buffer one byte at at time, calculates reflected
* Steps through buffer one byte at a time, calculates reflected
* crc using table.
*/

View File

@ -35,7 +35,7 @@ struct chksum_desc_ctx {
};
/*
* Steps through buffer one byte at at time, calculates reflected
* Steps through buffer one byte at a time, calculates reflected
* crc using table.
*/

View File

@ -9,6 +9,7 @@
#include <linux/err.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <crypto/engine.h>
#include <uapi/linux/sched/types.h>
#include "internal.h"
@ -465,7 +466,7 @@ EXPORT_SYMBOL_GPL(crypto_engine_stop);
* crypto-engine queue.
* @dev: the device attached with one hardware engine
* @retry_support: whether hardware has support for retry mechanism
* @cbk_do_batch: pointer to a callback function to be invoked when executing a
* @cbk_do_batch: pointer to a callback function to be invoked when executing
* a batch of requests.
* This has the form:
* callback(struct crypto_engine *engine)

View File

@ -22,6 +22,7 @@
#include <crypto/internal/akcipher.h>
#include <crypto/akcipher.h>
#include <linux/oid_registry.h>
#include <linux/scatterlist.h>
#include "ecrdsa_params.asn1.h"
#include "ecrdsa_pub_key.asn1.h"
#include "ecc.h"

View File

@ -10,16 +10,14 @@
#include <crypto/algapi.h>
#include <linux/completion.h>
#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/interrupt.h>
#include <linux/init.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/notifier.h>
#include <linux/numa.h>
#include <linux/refcount.h>
#include <linux/rwsem.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/types.h>
struct crypto_instance;
struct crypto_template;
@ -140,5 +138,11 @@ static inline void crypto_notify(unsigned long val, void *v)
blocking_notifier_call_chain(&crypto_chain, val, v);
}
static inline void crypto_yield(u32 flags)
{
if (flags & CRYPTO_TFM_REQ_MAY_SLEEP)
cond_resched();
}
#endif /* _CRYPTO_INTERNAL_H */

View File

@ -37,11 +37,11 @@
* DAMAGE.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/fips.h>
#include <linux/time.h>
#include <linux/crypto.h>
#include <crypto/internal/rng.h>
#include "jitterentropy.h"

View File

@ -36,7 +36,7 @@ static void c_stop(struct seq_file *m, void *p)
static int c_show(struct seq_file *m, void *p)
{
struct crypto_alg *alg = list_entry(p, struct crypto_alg, cra_list);
seq_printf(m, "name : %s\n", alg->cra_name);
seq_printf(m, "driver : %s\n", alg->cra_driver_name);
seq_printf(m, "module : %s\n", module_name(alg->cra_module));
@ -59,7 +59,7 @@ static int c_show(struct seq_file *m, void *p)
alg->cra_type->show(m, alg);
goto out;
}
switch (alg->cra_flags & CRYPTO_ALG_TYPE_MASK) {
case CRYPTO_ALG_TYPE_CIPHER:
seq_printf(m, "type : cipher\n");

View File

@ -14,6 +14,7 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/random.h>
#include <linux/scatterlist.h>
/*
* Hash algorithm OIDs plus ASN.1 DER wrappings [RFC4880 sec 5.2.2].

481
crypto/sm2.c Normal file
View File

@ -0,0 +1,481 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* SM2 asymmetric public-key algorithm
* as specified by OSCCA GM/T 0003.1-2012 -- 0003.5-2012 SM2 and
* described at https://tools.ietf.org/html/draft-shen-sm2-ecdsa-02
*
* Copyright (c) 2020, Alibaba Group.
* Authors: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
*/
#include <linux/module.h>
#include <linux/mpi.h>
#include <crypto/internal/akcipher.h>
#include <crypto/akcipher.h>
#include <crypto/hash.h>
#include <crypto/sm3_base.h>
#include <crypto/rng.h>
#include <crypto/sm2.h>
#include "sm2signature.asn1.h"
#define MPI_NBYTES(m) ((mpi_get_nbits(m) + 7) / 8)
struct ecc_domain_parms {
const char *desc; /* Description of the curve. */
unsigned int nbits; /* Number of bits. */
unsigned int fips:1; /* True if this is a FIPS140-2 approved curve */
/* The model describing this curve. This is mainly used to select
* the group equation.
*/
enum gcry_mpi_ec_models model;
/* The actual ECC dialect used. This is used for curve specific
* optimizations and to select encodings etc.
*/
enum ecc_dialects dialect;
const char *p; /* The prime defining the field. */
const char *a, *b; /* The coefficients. For Twisted Edwards
* Curves b is used for d. For Montgomery
* Curves (a,b) has ((A-2)/4,B^-1).
*/
const char *n; /* The order of the base point. */
const char *g_x, *g_y; /* Base point. */
unsigned int h; /* Cofactor. */
};
static const struct ecc_domain_parms sm2_ecp = {
.desc = "sm2p256v1",
.nbits = 256,
.fips = 0,
.model = MPI_EC_WEIERSTRASS,
.dialect = ECC_DIALECT_STANDARD,
.p = "0xfffffffeffffffffffffffffffffffffffffffff00000000ffffffffffffffff",
.a = "0xfffffffeffffffffffffffffffffffffffffffff00000000fffffffffffffffc",
.b = "0x28e9fa9e9d9f5e344d5a9e4bcf6509a7f39789f515ab8f92ddbcbd414d940e93",
.n = "0xfffffffeffffffffffffffffffffffff7203df6b21c6052b53bbf40939d54123",
.g_x = "0x32c4ae2c1f1981195f9904466a39c9948fe30bbff2660be1715a4589334c74c7",
.g_y = "0xbc3736a2f4f6779c59bdcee36b692153d0a9877cc62a474002df32e52139f0a0",
.h = 1
};
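These hex strings are the published domain parameters of the 256-bit prime-field curve sm2p256v1, i.e. the short Weierstrass curve
    y^2 = x^3 + a*x + b  (mod p)
with base point G = (g_x, g_y) of prime order n and cofactor h = 1; sm2_ec_ctx_init() below simply parses them into MPIs.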
static int sm2_ec_ctx_init(struct mpi_ec_ctx *ec)
{
const struct ecc_domain_parms *ecp = &sm2_ecp;
MPI p, a, b;
MPI x, y;
int rc = -EINVAL;
p = mpi_scanval(ecp->p);
a = mpi_scanval(ecp->a);
b = mpi_scanval(ecp->b);
if (!p || !a || !b)
goto free_p;
x = mpi_scanval(ecp->g_x);
y = mpi_scanval(ecp->g_y);
if (!x || !y)
goto free;
rc = -ENOMEM;
/* mpi_ec_setup_elliptic_curve */
ec->G = mpi_point_new(0);
if (!ec->G)
goto free;
mpi_set(ec->G->x, x);
mpi_set(ec->G->y, y);
mpi_set_ui(ec->G->z, 1);
rc = -EINVAL;
ec->n = mpi_scanval(ecp->n);
if (!ec->n) {
mpi_point_release(ec->G);
goto free;
}
ec->h = ecp->h;
ec->name = ecp->desc;
mpi_ec_init(ec, ecp->model, ecp->dialect, 0, p, a, b);
rc = 0;
free:
mpi_free(x);
mpi_free(y);
free_p:
mpi_free(p);
mpi_free(a);
mpi_free(b);
return rc;
}
static void sm2_ec_ctx_deinit(struct mpi_ec_ctx *ec)
{
mpi_ec_deinit(ec);
memset(ec, 0, sizeof(*ec));
}
static int sm2_ec_ctx_reset(struct mpi_ec_ctx *ec)
{
sm2_ec_ctx_deinit(ec);
return sm2_ec_ctx_init(ec);
}
/* RESULT must have been initialized and is set on success to the
* point given by VALUE.
*/
static int sm2_ecc_os2ec(MPI_POINT result, MPI value)
{
int rc;
size_t n;
const unsigned char *buf;
unsigned char *buf_memory;
MPI x, y;
n = (mpi_get_nbits(value)+7)/8;
buf_memory = kmalloc(n, GFP_KERNEL);
rc = mpi_print(GCRYMPI_FMT_USG, buf_memory, n, &n, value);
if (rc) {
kfree(buf_memory);
return rc;
}
buf = buf_memory;
if (n < 1) {
kfree(buf_memory);
return -EINVAL;
}
if (*buf != 4) {
kfree(buf_memory);
return -EINVAL; /* No support for point compression. */
}
if (((n-1)%2)) {
kfree(buf_memory);
return -EINVAL;
}
n = (n-1)/2;
x = mpi_read_raw_data(buf + 1, n);
if (!x) {
kfree(buf_memory);
return -ENOMEM;
}
y = mpi_read_raw_data(buf + 1 + n, n);
kfree(buf_memory);
if (!y) {
mpi_free(x);
return -ENOMEM;
}
mpi_normalize(x);
mpi_normalize(y);
mpi_set(result->x, x);
mpi_set(result->y, y);
mpi_set_ui(result->z, 1);
mpi_free(x);
mpi_free(y);
return 0;
}
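Note that the only accepted encoding is the uncompressed SEC1 form 0x04 || X || Y, so a valid SM2 public key here is 1 + 32 + 32 = 65 bytes (n = (65 - 1) / 2 = 32 in the code above), matching key_len = 65 in the sm2_tv_template vectors further down.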
struct sm2_signature_ctx {
MPI sig_r;
MPI sig_s;
};
int sm2_get_signature_r(void *context, size_t hdrlen, unsigned char tag,
const void *value, size_t vlen)
{
struct sm2_signature_ctx *sig = context;
if (!value || !vlen)
return -EINVAL;
sig->sig_r = mpi_read_raw_data(value, vlen);
if (!sig->sig_r)
return -ENOMEM;
return 0;
}
int sm2_get_signature_s(void *context, size_t hdrlen, unsigned char tag,
const void *value, size_t vlen)
{
struct sm2_signature_ctx *sig = context;
if (!value || !vlen)
return -EINVAL;
sig->sig_s = mpi_read_raw_data(value, vlen);
if (!sig->sig_s)
return -ENOMEM;
return 0;
}
static int sm2_z_digest_update(struct shash_desc *desc,
MPI m, unsigned int pbytes)
{
static const unsigned char zero[32];
unsigned char *in;
unsigned int inlen;
in = mpi_get_buffer(m, &inlen, NULL);
if (!in)
return -EINVAL;
if (inlen < pbytes) {
/* padding with zero */
crypto_sm3_update(desc, zero, pbytes - inlen);
crypto_sm3_update(desc, in, inlen);
} else if (inlen > pbytes) {
/* skip the starting zero */
crypto_sm3_update(desc, in + inlen - pbytes, pbytes);
} else {
crypto_sm3_update(desc, in, inlen);
}
kfree(in);
return 0;
}
static int sm2_z_digest_update_point(struct shash_desc *desc,
MPI_POINT point, struct mpi_ec_ctx *ec, unsigned int pbytes)
{
MPI x, y;
int ret = -EINVAL;
x = mpi_new(0);
y = mpi_new(0);
if (!mpi_ec_get_affine(x, y, point, ec) &&
!sm2_z_digest_update(desc, x, pbytes) &&
!sm2_z_digest_update(desc, y, pbytes))
ret = 0;
mpi_free(x);
mpi_free(y);
return ret;
}
int sm2_compute_z_digest(struct crypto_akcipher *tfm,
const unsigned char *id, size_t id_len,
unsigned char dgst[SM3_DIGEST_SIZE])
{
struct mpi_ec_ctx *ec = akcipher_tfm_ctx(tfm);
uint16_t bits_len;
unsigned char entl[2];
SHASH_DESC_ON_STACK(desc, NULL);
unsigned int pbytes;
if (id_len > (USHRT_MAX / 8) || !ec->Q)
return -EINVAL;
bits_len = (uint16_t)(id_len * 8);
entl[0] = bits_len >> 8;
entl[1] = bits_len & 0xff;
pbytes = MPI_NBYTES(ec->p);
/* ZA = H256(ENTLA | IDA | a | b | xG | yG | xA | yA) */
sm3_base_init(desc);
crypto_sm3_update(desc, entl, 2);
crypto_sm3_update(desc, id, id_len);
if (sm2_z_digest_update(desc, ec->a, pbytes) ||
sm2_z_digest_update(desc, ec->b, pbytes) ||
sm2_z_digest_update_point(desc, ec->G, ec, pbytes) ||
sm2_z_digest_update_point(desc, ec->Q, ec, pbytes))
return -EINVAL;
crypto_sm3_final(desc, dgst);
return 0;
}
EXPORT_SYMBOL(sm2_compute_z_digest);
static int _sm2_verify(struct mpi_ec_ctx *ec, MPI hash, MPI sig_r, MPI sig_s)
{
int rc = -EINVAL;
struct gcry_mpi_point sG, tP;
MPI t = NULL;
MPI x1 = NULL, y1 = NULL;
mpi_point_init(&sG);
mpi_point_init(&tP);
x1 = mpi_new(0);
y1 = mpi_new(0);
t = mpi_new(0);
/* r, s in [1, n-1] */
if (mpi_cmp_ui(sig_r, 1) < 0 || mpi_cmp(sig_r, ec->n) > 0 ||
mpi_cmp_ui(sig_s, 1) < 0 || mpi_cmp(sig_s, ec->n) > 0) {
goto leave;
}
/* t = (r + s) % n; reject the signature if t == 0 */
mpi_addm(t, sig_r, sig_s, ec->n);
if (mpi_cmp_ui(t, 0) == 0)
goto leave;
/* sG + tP = (x1, y1) */
rc = -EBADMSG;
mpi_ec_mul_point(&sG, sig_s, ec->G, ec);
mpi_ec_mul_point(&tP, t, ec->Q, ec);
mpi_ec_add_points(&sG, &sG, &tP, ec);
if (mpi_ec_get_affine(x1, y1, &sG, ec))
goto leave;
/* R = (e + x1) % n */
mpi_addm(t, hash, x1, ec->n);
/* check R == r */
rc = -EKEYREJECTED;
if (mpi_cmp(t, sig_r))
goto leave;
rc = 0;
leave:
mpi_point_free_parts(&sG);
mpi_point_free_parts(&tP);
mpi_free(x1);
mpi_free(y1);
mpi_free(t);
return rc;
}
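Written out, the check above is the standard SM2 verification equation. With public key P_A (ec->Q), base point G, message digest e and signature (r, s):
    t = (r + s) mod n, reject if t = 0
    (x1, y1) = [s]G + [t]P_A
    R = (e + x1) mod n
and the signature is accepted only when R = r.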
static int sm2_verify(struct akcipher_request *req)
{
struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
struct mpi_ec_ctx *ec = akcipher_tfm_ctx(tfm);
unsigned char *buffer;
struct sm2_signature_ctx sig;
MPI hash;
int ret;
if (unlikely(!ec->Q))
return -EINVAL;
buffer = kmalloc(req->src_len + req->dst_len, GFP_KERNEL);
if (!buffer)
return -ENOMEM;
sg_pcopy_to_buffer(req->src,
sg_nents_for_len(req->src, req->src_len + req->dst_len),
buffer, req->src_len + req->dst_len, 0);
sig.sig_r = NULL;
sig.sig_s = NULL;
ret = asn1_ber_decoder(&sm2signature_decoder, &sig,
buffer, req->src_len);
if (ret)
goto error;
ret = -ENOMEM;
hash = mpi_read_raw_data(buffer + req->src_len, req->dst_len);
if (!hash)
goto error;
ret = _sm2_verify(ec, hash, sig.sig_r, sig.sig_s);
mpi_free(hash);
error:
mpi_free(sig.sig_r);
mpi_free(sig.sig_s);
kfree(buffer);
return ret;
}
static int sm2_set_pub_key(struct crypto_akcipher *tfm,
const void *key, unsigned int keylen)
{
struct mpi_ec_ctx *ec = akcipher_tfm_ctx(tfm);
MPI a;
int rc;
rc = sm2_ec_ctx_reset(ec);
if (rc)
return rc;
ec->Q = mpi_point_new(0);
if (!ec->Q)
return -ENOMEM;
/* include the uncompressed flag '0x04' */
rc = -ENOMEM;
a = mpi_read_raw_data(key, keylen);
if (!a)
goto error;
mpi_normalize(a);
rc = sm2_ecc_os2ec(ec->Q, a);
mpi_free(a);
if (rc)
goto error;
return 0;
error:
mpi_point_release(ec->Q);
ec->Q = NULL;
return rc;
}
static unsigned int sm2_max_size(struct crypto_akcipher *tfm)
{
/* Unlimited max size */
return PAGE_SIZE;
}
static int sm2_init_tfm(struct crypto_akcipher *tfm)
{
struct mpi_ec_ctx *ec = akcipher_tfm_ctx(tfm);
return sm2_ec_ctx_init(ec);
}
static void sm2_exit_tfm(struct crypto_akcipher *tfm)
{
struct mpi_ec_ctx *ec = akcipher_tfm_ctx(tfm);
sm2_ec_ctx_deinit(ec);
}
static struct akcipher_alg sm2 = {
.verify = sm2_verify,
.set_pub_key = sm2_set_pub_key,
.max_size = sm2_max_size,
.init = sm2_init_tfm,
.exit = sm2_exit_tfm,
.base = {
.cra_name = "sm2",
.cra_driver_name = "sm2-generic",
.cra_priority = 100,
.cra_module = THIS_MODULE,
.cra_ctxsize = sizeof(struct mpi_ec_ctx),
},
};
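Only .verify and .set_pub_key are wired up in this akcipher_alg, so the initial sm2-generic driver covers signature verification with a public key; signing, encryption and private-key handling are not part of this submission.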
static int sm2_init(void)
{
return crypto_register_akcipher(&sm2);
}
static void sm2_exit(void)
{
crypto_unregister_akcipher(&sm2);
}
subsys_initcall(sm2_init);
module_exit(sm2_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Tianjia Zhang <tianjia.zhang@linux.alibaba.com>");
MODULE_DESCRIPTION("SM2 generic algorithm");
MODULE_ALIAS_CRYPTO("sm2-generic");

4
crypto/sm2signature.asn1 Normal file
View File

@ -0,0 +1,4 @@
Sm2Signature ::= SEQUENCE {
sig_r INTEGER ({ sm2_get_signature_r }),
sig_s INTEGER ({ sm2_get_signature_s })
}
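A worked decode of the first test vector below (c_size = 71 bytes) under this template:
    30 45                  SEQUENCE, 69 content bytes
      02 20 <32 bytes>     INTEGER sig_r
      02 21 00 <32 bytes>  INTEGER sig_s (the leading 0x00 keeps the high-bit value non-negative)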

View File

@ -149,17 +149,18 @@ int crypto_sm3_update(struct shash_desc *desc, const u8 *data,
}
EXPORT_SYMBOL(crypto_sm3_update);
static int sm3_final(struct shash_desc *desc, u8 *out)
int crypto_sm3_final(struct shash_desc *desc, u8 *out)
{
sm3_base_do_finalize(desc, sm3_generic_block_fn);
return sm3_base_finish(desc, out);
}
EXPORT_SYMBOL(crypto_sm3_final);
int crypto_sm3_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *hash)
{
sm3_base_do_update(desc, data, len, sm3_generic_block_fn);
return sm3_final(desc, hash);
return crypto_sm3_final(desc, hash);
}
EXPORT_SYMBOL(crypto_sm3_finup);
@ -167,7 +168,7 @@ static struct shash_alg sm3_alg = {
.digestsize = SM3_DIGEST_SIZE,
.init = sm3_base_init,
.update = crypto_sm3_update,
.final = sm3_final,
.final = crypto_sm3_final,
.finup = crypto_sm3_finup,
.descsize = sizeof(struct sm3_state),
.base = {

View File

@ -63,6 +63,7 @@ static u32 type;
static u32 mask;
static int mode;
static u32 num_mb = 8;
static unsigned int klen;
static char *tvmem[TVMEMSIZE];
static const char *check[] = {
@ -398,7 +399,7 @@ static void test_mb_aead_speed(const char *algo, int enc, int secs,
ret = do_one_aead_op(cur->req, ret);
if (ret) {
pr_err("calculating auth failed failed (%d)\n",
pr_err("calculating auth failed (%d)\n",
ret);
break;
}
@ -648,7 +649,7 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
crypto_aead_encrypt(req));
if (ret) {
pr_err("calculating auth failed failed (%d)\n",
pr_err("calculating auth failed (%d)\n",
ret);
break;
}
@ -864,8 +865,8 @@ static void test_mb_ahash_speed(const char *algo, unsigned int secs,
goto out;
}
if (speed[i].klen)
crypto_ahash_setkey(tfm, tvmem[0], speed[i].klen);
if (klen)
crypto_ahash_setkey(tfm, tvmem[0], klen);
for (k = 0; k < num_mb; k++)
ahash_request_set_crypt(data[k].req, data[k].sg,
@ -1099,8 +1100,8 @@ static void test_ahash_speed_common(const char *algo, unsigned int secs,
break;
}
if (speed[i].klen)
crypto_ahash_setkey(tfm, tvmem[0], speed[i].klen);
if (klen)
crypto_ahash_setkey(tfm, tvmem[0], klen);
pr_info("test%3u "
"(%5u byte blocks,%5u bytes per update,%4u updates): ",
@ -2418,7 +2419,8 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
if (mode > 300 && mode < 400) break;
fallthrough;
case 318:
test_hash_speed("ghash-generic", sec, hash_speed_template_16);
klen = 16;
test_hash_speed("ghash", sec, generic_hash_speed_template);
if (mode > 300 && mode < 400) break;
fallthrough;
case 319:
@ -3076,6 +3078,8 @@ MODULE_PARM_DESC(sec, "Length in seconds of speed tests "
"(defaults to zero which uses CPU cycles instead)");
module_param(num_mb, uint, 0000);
MODULE_PARM_DESC(num_mb, "Number of concurrent requests to be used in mb speed tests (defaults to 8)");
module_param(klen, uint, 0);
MODULE_PARM_DESC(klen, "Key length (defaults to 0)");
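With the new parameter the keyed-hash speed tests can be driven from the command line, e.g. (assuming the usual tcrypt module usage) modprobe tcrypt mode=318 sec=1 klen=16 for the ghash case above.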
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Quick & dirty crypto testing module");

View File

@ -25,7 +25,6 @@ struct aead_speed_template {
struct hash_speed {
unsigned int blen; /* buffer length */
unsigned int plen; /* per-update length */
unsigned int klen; /* key length */
};
/*
@ -97,34 +96,6 @@ static struct hash_speed generic_hash_speed_template[] = {
{ .blen = 0, .plen = 0, }
};
static struct hash_speed hash_speed_template_16[] = {
{ .blen = 16, .plen = 16, .klen = 16, },
{ .blen = 64, .plen = 16, .klen = 16, },
{ .blen = 64, .plen = 64, .klen = 16, },
{ .blen = 256, .plen = 16, .klen = 16, },
{ .blen = 256, .plen = 64, .klen = 16, },
{ .blen = 256, .plen = 256, .klen = 16, },
{ .blen = 1024, .plen = 16, .klen = 16, },
{ .blen = 1024, .plen = 256, .klen = 16, },
{ .blen = 1024, .plen = 1024, .klen = 16, },
{ .blen = 2048, .plen = 16, .klen = 16, },
{ .blen = 2048, .plen = 256, .klen = 16, },
{ .blen = 2048, .plen = 1024, .klen = 16, },
{ .blen = 2048, .plen = 2048, .klen = 16, },
{ .blen = 4096, .plen = 16, .klen = 16, },
{ .blen = 4096, .plen = 256, .klen = 16, },
{ .blen = 4096, .plen = 1024, .klen = 16, },
{ .blen = 4096, .plen = 4096, .klen = 16, },
{ .blen = 8192, .plen = 16, .klen = 16, },
{ .blen = 8192, .plen = 256, .klen = 16, },
{ .blen = 8192, .plen = 1024, .klen = 16, },
{ .blen = 8192, .plen = 4096, .klen = 16, },
{ .blen = 8192, .plen = 8192, .klen = 16, },
/* End marker */
{ .blen = 0, .plen = 0, .klen = 0, }
};
static struct hash_speed poly1305_speed_template[] = {
{ .blen = 96, .plen = 16, },
{ .blen = 96, .plen = 32, },

View File

@ -27,6 +27,7 @@
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/uio.h>
#include <crypto/rng.h>
#include <crypto/drbg.h>
#include <crypto/akcipher.h>
@ -3954,7 +3955,7 @@ static int test_akcipher_one(struct crypto_akcipher *tfm,
key = kmalloc(vecs->key_len + sizeof(u32) * 2 + vecs->param_len,
GFP_KERNEL);
if (!key)
goto free_xbuf;
goto free_req;
memcpy(key, vecs->key, vecs->key_len);
ptr = key + vecs->key_len;
ptr = test_pack_u32(ptr, vecs->algo);
@ -3966,7 +3967,7 @@ static int test_akcipher_one(struct crypto_akcipher *tfm,
else
err = crypto_akcipher_set_priv_key(tfm, key, vecs->key_len);
if (err)
goto free_req;
goto free_key;
/*
* First run test which do not require a private key, such as
@ -3976,7 +3977,7 @@ static int test_akcipher_one(struct crypto_akcipher *tfm,
out_len_max = crypto_akcipher_maxsize(tfm);
outbuf_enc = kzalloc(out_len_max, GFP_KERNEL);
if (!outbuf_enc)
goto free_req;
goto free_key;
if (!vecs->siggen_sigver_test) {
m = vecs->m;
@ -3995,6 +3996,7 @@ static int test_akcipher_one(struct crypto_akcipher *tfm,
op = "verify";
}
err = -E2BIG;
if (WARN_ON(m_size > PAGE_SIZE))
goto free_all;
memcpy(xbuf[0], m, m_size);
@ -4025,7 +4027,7 @@ static int test_akcipher_one(struct crypto_akcipher *tfm,
pr_err("alg: akcipher: %s test failed. err %d\n", op, err);
goto free_all;
}
if (!vecs->siggen_sigver_test) {
if (!vecs->siggen_sigver_test && c) {
if (req->dst_len != c_size) {
pr_err("alg: akcipher: %s test failed. Invalid output len\n",
op);
@ -4056,6 +4058,12 @@ static int test_akcipher_one(struct crypto_akcipher *tfm,
goto free_all;
}
if (!vecs->siggen_sigver_test && !c) {
c = outbuf_enc;
c_size = req->dst_len;
}
err = -E2BIG;
op = vecs->siggen_sigver_test ? "sign" : "decrypt";
if (WARN_ON(c_size > PAGE_SIZE))
goto free_all;
@ -4092,9 +4100,10 @@ static int test_akcipher_one(struct crypto_akcipher *tfm,
free_all:
kfree(outbuf_dec);
kfree(outbuf_enc);
free_key:
kfree(key);
free_req:
akcipher_request_free(req);
kfree(key);
free_xbuf:
testmgr_free_buf(xbuf);
return err;
@ -5376,6 +5385,12 @@ static const struct alg_test_desc alg_test_descs[] = {
.suite = {
.hash = __VECS(sha512_tv_template)
}
}, {
.alg = "sm2",
.test = alg_test_akcipher,
.suite = {
.akcipher = __VECS(sm2_tv_template)
}
}, {
.alg = "sm3",
.test = alg_test_hash,

View File

@ -3792,6 +3792,65 @@ static const struct hash_testvec hmac_streebog512_tv_template[] = {
},
};
/*
* SM2 test vectors.
*/
static const struct akcipher_testvec sm2_tv_template[] = {
{ /* Generated from openssl */
.key =
"\x04"
"\x8e\xa0\x33\x69\x91\x7e\x3d\xec\xad\x8e\xf0\x45\x5e\x13\x3e\x68"
"\x5b\x8c\xab\x5c\xc6\xc8\x50\xdf\x91\x00\xe0\x24\x73\x4d\x31\xf2"
"\x2e\xc0\xd5\x6b\xee\xda\x98\x93\xec\xd8\x36\xaa\xb9\xcf\x63\x82"
"\xef\xa7\x1a\x03\xed\x16\xba\x74\xb8\x8b\xf9\xe5\x70\x39\xa4\x70",
.key_len = 65,
.param_len = 0,
.c =
"\x30\x45"
"\x02\x20"
"\x70\xab\xb6\x7d\xd6\x54\x80\x64\x42\x7e\x2d\x05\x08\x36\xc9\x96"
"\x25\xc2\xbb\xff\x08\xe5\x43\x15\x5e\xf3\x06\xd9\x2b\x2f\x0a\x9f"
"\x02\x21"
"\x00"
"\xbf\x21\x5f\x7e\x5d\x3f\x1a\x4d\x8f\x84\xc2\xe9\xa6\x4c\xa4\x18"
"\xb2\xb8\x46\xf4\x32\x96\xfa\x57\xc6\x29\xd4\x89\xae\xcc\xda\xdb",
.c_size = 71,
.algo = OID_SM2_with_SM3,
.m =
"\x47\xa7\xbf\xd3\xda\xc4\x79\xee\xda\x8b\x4f\xe8\x40\x94\xd4\x32"
"\x8f\xf1\xcd\x68\x4d\xbd\x9b\x1d\xe0\xd8\x9a\x5d\xad\x85\x47\x5c",
.m_size = 32,
.public_key_vec = true,
.siggen_sigver_test = true,
},
{ /* From libgcrypt */
.key =
"\x04"
"\x87\x59\x38\x9a\x34\xaa\xad\x07\xec\xf4\xe0\xc8\xc2\x65\x0a\x44"
"\x59\xc8\xd9\x26\xee\x23\x78\x32\x4e\x02\x61\xc5\x25\x38\xcb\x47"
"\x75\x28\x10\x6b\x1e\x0b\x7c\x8d\xd5\xff\x29\xa9\xc8\x6a\x89\x06"
"\x56\x56\xeb\x33\x15\x4b\xc0\x55\x60\x91\xef\x8a\xc9\xd1\x7d\x78",
.key_len = 65,
.param_len = 0,
.c =
"\x30\x44"
"\x02\x20"
"\xd9\xec\xef\xe8\x5f\xee\x3c\x59\x57\x8e\x5b\xab\xb3\x02\xe1\x42"
"\x4b\x67\x2c\x0b\x26\xb6\x51\x2c\x3e\xfc\xc6\x49\xec\xfe\x89\xe5"
"\x02\x20"
"\x43\x45\xd0\xa5\xff\xe5\x13\x27\x26\xd0\xec\x37\xad\x24\x1e\x9a"
"\x71\x9a\xa4\x89\xb0\x7e\x0f\xc4\xbb\x2d\x50\xd0\xe5\x7f\x7a\x68",
.c_size = 70,
.algo = OID_SM2_with_SM3,
.m =
"\x11\x22\x33\x44\x55\x66\x77\x88\x99\xaa\xbb\xcc\xdd\xee\xff\x00"
"\x12\x34\x56\x78\x9a\xbc\xde\xf0\x12\x34\x56\x78\x9a\xbc\xde\xf0",
.m_size = 32,
.public_key_vec = true,
.siggen_sigver_test = true,
},
};
/* Example vectors below taken from
* http://www.oscca.gov.cn/UpFile/20101222141857786.pdf
*

View File

@ -54,49 +54,63 @@ EXPORT_SYMBOL(xor_blocks);
/* Set of all registered templates. */
static struct xor_block_template *__initdata template_list;
#define BENCH_SIZE (PAGE_SIZE)
#ifndef MODULE
static void __init do_xor_register(struct xor_block_template *tmpl)
{
tmpl->next = template_list;
template_list = tmpl;
}
static int __init register_xor_blocks(void)
{
active_template = XOR_SELECT_TEMPLATE(NULL);
if (!active_template) {
#define xor_speed do_xor_register
// register all the templates and pick the first as the default
XOR_TRY_TEMPLATES;
#undef xor_speed
active_template = template_list;
}
return 0;
}
#endif
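The #define xor_speed do_xor_register trick works because XOR_TRY_TEMPLATES expands to a series of xor_speed(&template) calls (see the later #define xor_speed(templ) do_xor_speed(...)), so the same macro either benchmarks each template or, in the deferred built-in path above, merely registers it.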
#define BENCH_SIZE 4096
#define REPS 800U
static void __init
do_xor_speed(struct xor_block_template *tmpl, void *b1, void *b2)
{
int speed;
unsigned long now, j;
int i, count, max;
int i, j;
ktime_t min, start, diff;
tmpl->next = template_list;
template_list = tmpl;
preempt_disable();
/*
* Count the number of XORs done during a whole jiffy, and use
* this to calculate the speed of checksumming. We use a 2-page
* allocation to have guaranteed color L1-cache layout.
*/
max = 0;
for (i = 0; i < 5; i++) {
j = jiffies;
count = 0;
while ((now = jiffies) == j)
cpu_relax();
while (time_before(jiffies, now + 1)) {
min = (ktime_t)S64_MAX;
for (i = 0; i < 3; i++) {
start = ktime_get();
for (j = 0; j < REPS; j++) {
mb(); /* prevent loop optimization */
tmpl->do_2(BENCH_SIZE, b1, b2);
mb();
count++;
mb();
}
if (count > max)
max = count;
diff = ktime_sub(ktime_get(), start);
if (diff < min)
min = diff;
}
preempt_enable();
speed = max * (HZ * BENCH_SIZE / 1024);
// bytes/ns == GB/s, multiply by 1000 to get MB/s [not MiB/s]
speed = (1000 * REPS * BENCH_SIZE) / (unsigned int)ktime_to_ns(min);
tmpl->speed = speed;
printk(KERN_INFO " %-10s: %5d.%03d MB/sec\n", tmpl->name,
speed / 1000, speed % 1000);
pr_info(" %-16s: %5d MB/sec\n", tmpl->name, speed);
}
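As a quick sanity check of the new formula: each timed run moves REPS * BENCH_SIZE = 800 * 4096 = 3,276,800 bytes, so if the best of the three runs takes 250,000 ns, speed = (1000 * 800 * 4096) / 250000 = 13107 MB/s (decimal megabytes, as the comment notes).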
static int __init
@ -129,14 +143,15 @@ calibrate_xor_blocks(void)
#define xor_speed(templ) do_xor_speed((templ), b1, b2)
printk(KERN_INFO "xor: measuring software checksum speed\n");
template_list = NULL;
XOR_TRY_TEMPLATES;
fastest = template_list;
for (f = fastest; f; f = f->next)
if (f->speed > fastest->speed)
fastest = f;
printk(KERN_INFO "xor: using function: %s (%d.%03d MB/sec)\n",
fastest->name, fastest->speed / 1000, fastest->speed % 1000);
pr_info("xor: using function: %s (%d MB/sec)\n",
fastest->name, fastest->speed);
#undef xor_speed
@ -150,6 +165,10 @@ static __exit void xor_exit(void) { }
MODULE_LICENSE("GPL");
#ifndef MODULE
/* when built-in xor.o must initialize before drivers/md/md.o */
core_initcall(calibrate_xor_blocks);
core_initcall(register_xor_blocks);
#endif
module_init(calibrate_xor_blocks);
module_exit(xor_exit);

View File

@ -282,6 +282,20 @@ config HW_RANDOM_INGENIC_RNG
If unsure, say Y.
config HW_RANDOM_INGENIC_TRNG
tristate "Ingenic True Random Number Generator support"
depends on HW_RANDOM
depends on MACH_X1830
default HW_RANDOM
help
This driver provides kernel-side support for the True Random Number Generator
hardware found in the Ingenic X1830 SoC. The YSH & ATIL CU1830-Neo uses the X1830 SoC.
To compile this driver as a module, choose M here: the
module will be called ingenic-trng.
If unsure, say Y.
config HW_RANDOM_NOMADIK
tristate "ST-Ericsson Nomadik Random Number Generator support"
depends on ARCH_NOMADIK
@ -512,6 +526,16 @@ config HW_RANDOM_CCTRNG
will be called cctrng.
If unsure, say 'N'.
config HW_RANDOM_XIPHERA
tristate "Xiphera FPGA based True Random Number Generator support"
depends on HAS_IOMEM
help
This driver provides kernel-side support for Xiphera True Random
Number Generator Intellectual Property Core.
To compile this driver as a module, choose M here: the
module will be called xiphera-trng.
endif # HW_RANDOM
config UML_RANDOM

View File

@ -24,6 +24,7 @@ obj-$(CONFIG_HW_RANDOM_TX4939) += tx4939-rng.o
obj-$(CONFIG_HW_RANDOM_MXC_RNGA) += mxc-rnga.o
obj-$(CONFIG_HW_RANDOM_IMX_RNGC) += imx-rngc.o
obj-$(CONFIG_HW_RANDOM_INGENIC_RNG) += ingenic-rng.o
obj-$(CONFIG_HW_RANDOM_INGENIC_TRNG) += ingenic-trng.o
obj-$(CONFIG_HW_RANDOM_OCTEON) += octeon-rng.o
obj-$(CONFIG_HW_RANDOM_NOMADIK) += nomadik-rng.o
obj-$(CONFIG_HW_RANDOM_PSERIES) += pseries-rng.o
@ -44,3 +45,4 @@ obj-$(CONFIG_HW_RANDOM_KEYSTONE) += ks-sa-rng.o
obj-$(CONFIG_HW_RANDOM_OPTEE) += optee-rng.o
obj-$(CONFIG_HW_RANDOM_NPCM) += npcm-rng.o
obj-$(CONFIG_HW_RANDOM_CCTRNG) += cctrng.o
obj-$(CONFIG_HW_RANDOM_XIPHERA) += xiphera-trng.o

View File

@ -463,11 +463,10 @@ static int cc_trng_clk_init(struct cctrng_drvdata *drvdata)
int rc = 0;
clk = devm_clk_get_optional(dev, NULL);
if (IS_ERR(clk)) {
if (PTR_ERR(clk) != -EPROBE_DEFER)
dev_err(dev, "Error getting clock: %pe\n", clk);
return PTR_ERR(clk);
}
if (IS_ERR(clk))
return dev_err_probe(dev, PTR_ERR(clk),
"Error getting clock\n");
drvdata->clk = clk;
rc = clk_prepare_enable(drvdata->clk);

View File

@ -285,6 +285,7 @@ static int imx_rngc_probe(struct platform_device *pdev)
rngc->rng.init = imx_rngc_init;
rngc->rng.read = imx_rngc_read;
rngc->rng.cleanup = imx_rngc_cleanup;
rngc->rng.quality = 19;
rngc->dev = &pdev->dev;
platform_set_drvdata(pdev, rngc);

View File

@ -0,0 +1,161 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Ingenic True Random Number Generator driver
* Copyright (c) 2019 (Qi Pengzhen) <aric.pzqi@ingenic.com>
* Copyright (c) 2020 (Zhou Yanjie) <zhouyanjie@wanyeetech.com>
*/
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/hw_random.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
/* DTRNG register offsets */
#define TRNG_REG_CFG_OFFSET 0x00
#define TRNG_REG_RANDOMNUM_OFFSET 0x04
#define TRNG_REG_STATUS_OFFSET 0x08
/* bits within the CFG register */
#define CFG_RDY_CLR BIT(12)
#define CFG_INT_MASK BIT(11)
#define CFG_GEN_EN BIT(0)
/* bits within the STATUS register */
#define STATUS_RANDOM_RDY BIT(0)
struct ingenic_trng {
void __iomem *base;
struct clk *clk;
struct hwrng rng;
};
static int ingenic_trng_init(struct hwrng *rng)
{
struct ingenic_trng *trng = container_of(rng, struct ingenic_trng, rng);
unsigned int ctrl;
ctrl = readl(trng->base + TRNG_REG_CFG_OFFSET);
ctrl |= CFG_GEN_EN;
writel(ctrl, trng->base + TRNG_REG_CFG_OFFSET);
return 0;
}
static void ingenic_trng_cleanup(struct hwrng *rng)
{
struct ingenic_trng *trng = container_of(rng, struct ingenic_trng, rng);
unsigned int ctrl;
ctrl = readl(trng->base + TRNG_REG_CFG_OFFSET);
ctrl &= ~CFG_GEN_EN;
writel(ctrl, trng->base + TRNG_REG_CFG_OFFSET);
}
static int ingenic_trng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
{
struct ingenic_trng *trng = container_of(rng, struct ingenic_trng, rng);
u32 *data = buf;
u32 status;
int ret;
ret = readl_poll_timeout(trng->base + TRNG_REG_STATUS_OFFSET, status,
status & STATUS_RANDOM_RDY, 10, 1000);
if (ret == -ETIMEDOUT) {
pr_err("%s: Wait for DTRNG data ready timeout\n", __func__);
return ret;
}
*data = readl(trng->base + TRNG_REG_RANDOMNUM_OFFSET);
return 4;
}
static int ingenic_trng_probe(struct platform_device *pdev)
{
struct ingenic_trng *trng;
int ret;
trng = devm_kzalloc(&pdev->dev, sizeof(*trng), GFP_KERNEL);
if (!trng)
return -ENOMEM;
trng->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(trng->base)) {
pr_err("%s: Failed to map DTRNG registers\n", __func__);
ret = PTR_ERR(trng->base);
return PTR_ERR(trng->base);
}
trng->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(trng->clk)) {
ret = PTR_ERR(trng->clk);
pr_crit("%s: Cannot get DTRNG clock\n", __func__);
return PTR_ERR(trng->clk);
}
ret = clk_prepare_enable(trng->clk);
if (ret) {
pr_crit("%s: Unable to enable DTRNG clock\n", __func__);
return ret;
}
trng->rng.name = pdev->name;
trng->rng.init = ingenic_trng_init;
trng->rng.cleanup = ingenic_trng_cleanup;
trng->rng.read = ingenic_trng_read;
ret = hwrng_register(&trng->rng);
if (ret) {
dev_err(&pdev->dev, "Failed to register hwrng\n");
return ret;
}
platform_set_drvdata(pdev, trng);
dev_info(&pdev->dev, "Ingenic DTRNG driver registered\n");
return 0;
}
static int ingenic_trng_remove(struct platform_device *pdev)
{
struct ingenic_trng *trng = platform_get_drvdata(pdev);
unsigned int ctrl;
hwrng_unregister(&trng->rng);
ctrl = readl(trng->base + TRNG_REG_CFG_OFFSET);
ctrl &= ~CFG_GEN_EN;
writel(ctrl, trng->base + TRNG_REG_CFG_OFFSET);
clk_disable_unprepare(trng->clk);
return 0;
}
static const struct of_device_id ingenic_trng_of_match[] = {
{ .compatible = "ingenic,x1830-dtrng" },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, ingenic_trng_of_match);
static struct platform_driver ingenic_trng_driver = {
.probe = ingenic_trng_probe,
.remove = ingenic_trng_remove,
.driver = {
.name = "ingenic-trng",
.of_match_table = ingenic_trng_of_match,
},
};
module_platform_driver(ingenic_trng_driver);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("漆鹏振 (Qi Pengzhen) <aric.pzqi@ingenic.com>");
MODULE_AUTHOR("周琰杰 (Zhou Yanjie) <zhouyanjie@wanyeetech.com>");
MODULE_DESCRIPTION("Ingenic True Random Number Generator driver");

View File

@ -330,7 +330,7 @@ static int __init mod_init(void)
int err = -ENODEV;
int i;
struct pci_dev *dev = NULL;
void __iomem *mem = mem;
void __iomem *mem;
u8 hw_status;
struct intel_rng_hw *intel_rng_hw;

View File

@ -195,10 +195,10 @@ static int iproc_rng200_probe(struct platform_device *pdev)
return PTR_ERR(priv->base);
}
priv->rng.name = "iproc-rng200",
priv->rng.read = iproc_rng200_read,
priv->rng.init = iproc_rng200_init,
priv->rng.cleanup = iproc_rng200_cleanup,
priv->rng.name = "iproc-rng200";
priv->rng.read = iproc_rng200_read;
priv->rng.init = iproc_rng200_init;
priv->rng.cleanup = iproc_rng200_cleanup;
/* Register driver */
ret = devm_hwrng_register(dev, &priv->rng);

View File

@ -143,9 +143,9 @@ static int __init mxc_rnga_probe(struct platform_device *pdev)
mxc_rng->dev = &pdev->dev;
mxc_rng->rng.name = "mxc-rnga";
mxc_rng->rng.init = mxc_rnga_init;
mxc_rng->rng.cleanup = mxc_rnga_cleanup,
mxc_rng->rng.data_present = mxc_rnga_data_present,
mxc_rng->rng.data_read = mxc_rnga_data_read,
mxc_rng->rng.cleanup = mxc_rnga_cleanup;
mxc_rng->rng.data_present = mxc_rnga_data_present;
mxc_rng->rng.data_read = mxc_rnga_data_read;
mxc_rng->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(mxc_rng->clk)) {

View File

@ -58,24 +58,24 @@ static int npcm_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
pm_runtime_get_sync((struct device *)priv->rng.priv);
while (max >= sizeof(u32)) {
while (max) {
if (wait) {
if (readl_poll_timeout(priv->base + NPCM_RNGCS_REG,
if (readb_poll_timeout(priv->base + NPCM_RNGCS_REG,
ready,
ready & NPCM_RNG_DATA_VALID,
NPCM_RNG_POLL_USEC,
NPCM_RNG_TIMEOUT_USEC))
break;
} else {
if ((readl(priv->base + NPCM_RNGCS_REG) &
if ((readb(priv->base + NPCM_RNGCS_REG) &
NPCM_RNG_DATA_VALID) == 0)
break;
}
*(u32 *)buf = readl(priv->base + NPCM_RNGD_REG);
retval += sizeof(u32);
buf += sizeof(u32);
max -= sizeof(u32);
*(u8 *)buf = readb(priv->base + NPCM_RNGD_REG);
retval++;
buf++;
max--;
}
pm_runtime_mark_last_busy((struct device *)priv->rng.priv);

View File

@ -122,14 +122,14 @@ static int optee_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
if (max > MAX_ENTROPY_REQ_SZ)
max = MAX_ENTROPY_REQ_SZ;
while (read == 0) {
while (read < max) {
rng_size = get_optee_rng_data(pvt_data, data, (max - read));
data += rng_size;
read += rng_size;
if (wait) {
if (timeout-- == 0)
if (wait && pvt_data->data_rate) {
if ((timeout-- == 0) || (read == max))
return read;
msleep((1000 * (max - read)) / pvt_data->data_rate);
} else {

View File

@ -145,12 +145,12 @@ static int stm32_rng_probe(struct platform_device *ofdev)
dev_set_drvdata(dev, priv);
priv->rng.name = dev_driver_string(dev),
priv->rng.name = dev_driver_string(dev);
#ifndef CONFIG_PM
priv->rng.init = stm32_rng_init,
priv->rng.cleanup = stm32_rng_cleanup,
priv->rng.init = stm32_rng_init;
priv->rng.cleanup = stm32_rng_cleanup;
#endif
priv->rng.read = stm32_rng_read,
priv->rng.read = stm32_rng_read;
priv->rng.priv = (unsigned long) dev;
priv->rng.quality = 900;

View File

@ -0,0 +1,150 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (C) 2020 Xiphera Ltd. */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/hw_random.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/delay.h>
#define CONTROL_REG 0x00000000
#define STATUS_REG 0x00000004
#define RAND_REG 0x00000000
#define HOST_TO_TRNG_RESET 0x00000001
#define HOST_TO_TRNG_RELEASE_RESET 0x00000002
#define HOST_TO_TRNG_ENABLE 0x80000000
#define HOST_TO_TRNG_ZEROIZE 0x80000004
#define HOST_TO_TRNG_ACK_ZEROIZE 0x80000008
#define HOST_TO_TRNG_READ 0x8000000F
/* trng statuses */
#define TRNG_ACK_RESET 0x000000AC
#define TRNG_SUCCESSFUL_STARTUP 0x00000057
#define TRNG_FAILED_STARTUP 0x000000FA
#define TRNG_NEW_RAND_AVAILABLE 0x000000ED
struct xiphera_trng {
void __iomem *mem;
struct hwrng rng;
};
static int xiphera_trng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
{
struct xiphera_trng *trng = container_of(rng, struct xiphera_trng, rng);
int ret = 0;
while (max >= sizeof(u32)) {
/* check for data */
if (readl(trng->mem + STATUS_REG) == TRNG_NEW_RAND_AVAILABLE) {
*(u32 *)buf = readl(trng->mem + RAND_REG);
/*
* Inform the trng of the read
* and re-enable it to produce a new random number
*/
writel(HOST_TO_TRNG_READ, trng->mem + CONTROL_REG);
writel(HOST_TO_TRNG_ENABLE, trng->mem + CONTROL_REG);
ret += sizeof(u32);
buf += sizeof(u32);
max -= sizeof(u32);
} else {
break;
}
}
return ret;
}
static int xiphera_trng_probe(struct platform_device *pdev)
{
int ret;
struct xiphera_trng *trng;
struct device *dev = &pdev->dev;
struct resource *res;
trng = devm_kzalloc(dev, sizeof(*trng), GFP_KERNEL);
if (!trng)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
trng->mem = devm_ioremap_resource(dev, res);
if (IS_ERR(trng->mem))
return PTR_ERR(trng->mem);
/*
* the trng needs to be reset first which might not happen in time,
* hence we incorporate a small delay to ensure proper behaviour
*/
writel(HOST_TO_TRNG_RESET, trng->mem + CONTROL_REG);
usleep_range(100, 200);
if (readl(trng->mem + STATUS_REG) != TRNG_ACK_RESET) {
/*
* there is a small chance the trng is just not ready yet,
* so we try one more time. If the second time fails, we give up
*/
usleep_range(100, 200);
if (readl(trng->mem + STATUS_REG) != TRNG_ACK_RESET) {
dev_err(dev, "failed to reset the trng ip\n");
return -ENODEV;
}
}
/*
* once again, to ensure proper behaviour we sleep
* for a while after zeroizing the trng
*/
writel(HOST_TO_TRNG_RELEASE_RESET, trng->mem + CONTROL_REG);
writel(HOST_TO_TRNG_ENABLE, trng->mem + CONTROL_REG);
writel(HOST_TO_TRNG_ZEROIZE, trng->mem + CONTROL_REG);
msleep(20);
if (readl(trng->mem + STATUS_REG) != TRNG_SUCCESSFUL_STARTUP) {
/* diagnose the reason for the failure */
if (readl(trng->mem + STATUS_REG) == TRNG_FAILED_STARTUP) {
dev_err(dev, "trng ip startup-tests failed\n");
return -ENODEV;
}
dev_err(dev, "startup-tests yielded no response\n");
return -ENODEV;
}
writel(HOST_TO_TRNG_ACK_ZEROIZE, trng->mem + CONTROL_REG);
trng->rng.name = pdev->name;
trng->rng.read = xiphera_trng_read;
trng->rng.quality = 900;
ret = devm_hwrng_register(dev, &trng->rng);
if (ret) {
dev_err(dev, "failed to register rng device: %d\n", ret);
return ret;
}
platform_set_drvdata(pdev, trng);
return 0;
}
static const struct of_device_id xiphera_trng_of_match[] = {
{ .compatible = "xiphera,xip8001b-trng", },
{},
};
MODULE_DEVICE_TABLE(of, xiphera_trng_of_match);
static struct platform_driver xiphera_trng_driver = {
.driver = {
.name = "xiphera-trng",
.of_match_table = xiphera_trng_of_match,
},
.probe = xiphera_trng_probe,
};
module_platform_driver(xiphera_trng_driver);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Atte Tommiska");
MODULE_DESCRIPTION("Xiphera FPGA-based true random number generator driver");

View File

@ -873,6 +873,7 @@ config CRYPTO_DEV_SA2UL
select CRYPTO_AES
select CRYPTO_AES_ARM64
select CRYPTO_ALGAPI
select CRYPTO_AUTHENC
select CRYPTO_SHA1
select CRYPTO_SHA256
select CRYPTO_SHA512

View File

@ -59,6 +59,32 @@ config CRYPTO_DEV_SUN8I_CE_DEBUG
This will create /sys/kernel/debug/sun8i-ce/stats for displaying
the number of requests per flow and per algorithm.
config CRYPTO_DEV_SUN8I_CE_HASH
bool "Enable support for hash on sun8i-ce"
depends on CRYPTO_DEV_SUN8I_CE
select MD5
select SHA1
select SHA256
select SHA512
help
Say y to enable support for hash algorithms.
config CRYPTO_DEV_SUN8I_CE_PRNG
bool "Support for Allwinner Crypto Engine PRNG"
depends on CRYPTO_DEV_SUN8I_CE
select CRYPTO_RNG
help
Select this option if you want to provide kernel-side support for
the Pseudo-Random Number Generator found in the Crypto Engine.
config CRYPTO_DEV_SUN8I_CE_TRNG
bool "Support for Allwinner Crypto Engine TRNG"
depends on CRYPTO_DEV_SUN8I_CE
select HW_RANDOM
help
Select this option if you want to provide kernel-side support for
the True Random Number Generator found in the Crypto Engine.
config CRYPTO_DEV_SUN8I_SS
tristate "Support for Allwinner Security System cryptographic offloader"
select CRYPTO_SKCIPHER
@ -85,3 +111,20 @@ config CRYPTO_DEV_SUN8I_SS_DEBUG
Say y to enable sun8i-ss debug stats.
This will create /sys/kernel/debug/sun8i-ss/stats for displaying
the number of requests per flow and per algorithm.
config CRYPTO_DEV_SUN8I_SS_PRNG
bool "Support for Allwinner Security System PRNG"
depends on CRYPTO_DEV_SUN8I_SS
select CRYPTO_RNG
help
Select this option if you want to provide kernel-side support for
the Pseudo-Random Number Generator found in the Security System.
config CRYPTO_DEV_SUN8I_SS_HASH
bool "Enable support for hash on sun8i-ss"
depends on CRYPTO_DEV_SUN8I_SS
select MD5
select SHA1
select SHA256
help
Say y to enable support for hash algorithms.

View File

@ -9,6 +9,7 @@
* You could find the datasheet in Documentation/arm/sunxi.rst
*/
#include "sun4i-ss.h"
#include <asm/unaligned.h>
#include <linux/scatterlist.h>
/* This is a totally arbitrary value */
@ -196,7 +197,7 @@ static int sun4i_hash(struct ahash_request *areq)
struct sg_mapping_iter mi;
int in_r, err = 0;
size_t copied = 0;
__le32 wb = 0;
u32 wb = 0;
dev_dbg(ss->dev, "%s %s bc=%llu len=%u mode=%x wl=%u h0=%0x",
__func__, crypto_tfm_alg_name(areq->base.tfm),
@ -408,7 +409,7 @@ hash_final:
nbw = op->len - 4 * nwait;
if (nbw) {
wb = cpu_to_le32(*(u32 *)(op->buf + nwait * 4));
wb = le32_to_cpup((__le32 *)(op->buf + nwait * 4));
wb &= GENMASK((nbw * 8) - 1, 0);
op->byte_count += nbw;
@ -417,7 +418,7 @@ hash_final:
/* write the remaining bytes of the nbw buffer */
wb |= ((1 << 7) << (nbw * 8));
bf[j++] = le32_to_cpu(wb);
((__le32 *)bf)[j++] = cpu_to_le32(wb);
/*
* amount of padding needed to reach 64 bytes, minus 8 (size) and minus 4 (the final '1')
@ -479,16 +480,16 @@ hash_final:
/* Get the hash from the device */
if (op->mode == SS_OP_SHA1) {
for (i = 0; i < 5; i++) {
v = readl(ss->base + SS_MD0 + i * 4);
if (ss->variant->sha1_in_be)
v = cpu_to_le32(readl(ss->base + SS_MD0 + i * 4));
put_unaligned_le32(v, areq->result + i * 4);
else
v = cpu_to_be32(readl(ss->base + SS_MD0 + i * 4));
memcpy(areq->result + i * 4, &v, 4);
put_unaligned_be32(v, areq->result + i * 4);
}
} else {
for (i = 0; i < 4; i++) {
v = cpu_to_le32(readl(ss->base + SS_MD0 + i * 4));
memcpy(areq->result + i * 4, &v, 4);
v = readl(ss->base + SS_MD0 + i * 4);
put_unaligned_le32(v, areq->result + i * 4);
}
}

View File

@ -1,2 +1,5 @@
obj-$(CONFIG_CRYPTO_DEV_SUN8I_CE) += sun8i-ce.o
sun8i-ce-y += sun8i-ce-core.o sun8i-ce-cipher.o
sun8i-ce-$(CONFIG_CRYPTO_DEV_SUN8I_CE_HASH) += sun8i-ce-hash.o
sun8i-ce-$(CONFIG_CRYPTO_DEV_SUN8I_CE_PRNG) += sun8i-ce-prng.o
sun8i-ce-$(CONFIG_CRYPTO_DEV_SUN8I_CE_TRNG) += sun8i-ce-trng.o

View File

@ -75,8 +75,9 @@ static int sun8i_ce_cipher_fallback(struct skcipher_request *areq)
return err;
}
static int sun8i_ce_cipher(struct skcipher_request *areq)
static int sun8i_ce_cipher_prepare(struct crypto_engine *engine, void *async_req)
{
struct skcipher_request *areq = container_of(async_req, struct skcipher_request, base);
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
struct sun8i_cipher_tfm_ctx *op = crypto_skcipher_ctx(tfm);
struct sun8i_ce_dev *ce = op->ce;
@ -87,8 +88,6 @@ static int sun8i_ce_cipher(struct skcipher_request *areq)
struct ce_task *cet;
struct scatterlist *sg;
unsigned int todo, len, offset, ivsize;
dma_addr_t addr_iv = 0, addr_key = 0;
void *backup_iv = NULL;
u32 common, sym;
int flow, i;
int nr_sgs = 0;
@ -119,7 +118,7 @@ static int sun8i_ce_cipher(struct skcipher_request *areq)
common |= rctx->op_dir | CE_COMM_INT;
cet->t_common_ctl = cpu_to_le32(common);
/* CTS and recent CE (H6) need length in bytes, in word otherwise */
if (ce->variant->has_t_dlen_in_bytes)
if (ce->variant->cipher_t_dlen_in_bytes)
cet->t_dlen = cpu_to_le32(areq->cryptlen);
else
cet->t_dlen = cpu_to_le32(areq->cryptlen / 4);
@ -141,41 +140,41 @@ static int sun8i_ce_cipher(struct skcipher_request *areq)
cet->t_sym_ctl = cpu_to_le32(sym);
cet->t_asym_ctl = 0;
addr_key = dma_map_single(ce->dev, op->key, op->keylen, DMA_TO_DEVICE);
cet->t_key = cpu_to_le32(addr_key);
if (dma_mapping_error(ce->dev, addr_key)) {
rctx->addr_key = dma_map_single(ce->dev, op->key, op->keylen, DMA_TO_DEVICE);
if (dma_mapping_error(ce->dev, rctx->addr_key)) {
dev_err(ce->dev, "Cannot DMA MAP KEY\n");
err = -EFAULT;
goto theend;
}
cet->t_key = cpu_to_le32(rctx->addr_key);
ivsize = crypto_skcipher_ivsize(tfm);
if (areq->iv && crypto_skcipher_ivsize(tfm) > 0) {
chan->ivlen = ivsize;
chan->bounce_iv = kzalloc(ivsize, GFP_KERNEL | GFP_DMA);
if (!chan->bounce_iv) {
rctx->ivlen = ivsize;
rctx->bounce_iv = kzalloc(ivsize, GFP_KERNEL | GFP_DMA);
if (!rctx->bounce_iv) {
err = -ENOMEM;
goto theend_key;
}
if (rctx->op_dir & CE_DECRYPTION) {
backup_iv = kzalloc(ivsize, GFP_KERNEL);
if (!backup_iv) {
rctx->backup_iv = kzalloc(ivsize, GFP_KERNEL);
if (!rctx->backup_iv) {
err = -ENOMEM;
goto theend_key;
}
offset = areq->cryptlen - ivsize;
scatterwalk_map_and_copy(backup_iv, areq->src, offset,
ivsize, 0);
scatterwalk_map_and_copy(rctx->backup_iv, areq->src,
offset, ivsize, 0);
}
memcpy(chan->bounce_iv, areq->iv, ivsize);
addr_iv = dma_map_single(ce->dev, chan->bounce_iv, chan->ivlen,
DMA_TO_DEVICE);
cet->t_iv = cpu_to_le32(addr_iv);
if (dma_mapping_error(ce->dev, addr_iv)) {
memcpy(rctx->bounce_iv, areq->iv, ivsize);
rctx->addr_iv = dma_map_single(ce->dev, rctx->bounce_iv, rctx->ivlen,
DMA_TO_DEVICE);
if (dma_mapping_error(ce->dev, rctx->addr_iv)) {
dev_err(ce->dev, "Cannot DMA MAP IV\n");
err = -ENOMEM;
goto theend_iv;
}
cet->t_iv = cpu_to_le32(rctx->addr_iv);
}
if (areq->src == areq->dst) {
@ -235,7 +234,9 @@ static int sun8i_ce_cipher(struct skcipher_request *areq)
}
chan->timeout = areq->cryptlen;
err = sun8i_ce_run_task(ce, flow, crypto_tfm_alg_name(areq->base.tfm));
rctx->nr_sgs = nr_sgs;
rctx->nr_sgd = nr_sgd;
return 0;
theend_sgs:
if (areq->src == areq->dst) {
@ -248,34 +249,83 @@ theend_sgs:
theend_iv:
if (areq->iv && ivsize > 0) {
if (addr_iv)
dma_unmap_single(ce->dev, addr_iv, chan->ivlen,
DMA_TO_DEVICE);
if (rctx->addr_iv)
dma_unmap_single(ce->dev, rctx->addr_iv, rctx->ivlen, DMA_TO_DEVICE);
offset = areq->cryptlen - ivsize;
if (rctx->op_dir & CE_DECRYPTION) {
memcpy(areq->iv, backup_iv, ivsize);
kfree_sensitive(backup_iv);
memcpy(areq->iv, rctx->backup_iv, ivsize);
kfree_sensitive(rctx->backup_iv);
} else {
scatterwalk_map_and_copy(areq->iv, areq->dst, offset,
ivsize, 0);
}
kfree(chan->bounce_iv);
kfree(rctx->bounce_iv);
}
theend_key:
dma_unmap_single(ce->dev, addr_key, op->keylen, DMA_TO_DEVICE);
dma_unmap_single(ce->dev, rctx->addr_key, op->keylen, DMA_TO_DEVICE);
theend:
return err;
}
static int sun8i_ce_handle_cipher_request(struct crypto_engine *engine, void *areq)
static int sun8i_ce_cipher_run(struct crypto_engine *engine, void *areq)
{
int err;
struct skcipher_request *breq = container_of(areq, struct skcipher_request, base);
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(breq);
struct sun8i_cipher_tfm_ctx *op = crypto_skcipher_ctx(tfm);
struct sun8i_ce_dev *ce = op->ce;
struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(breq);
int flow, err;
err = sun8i_ce_cipher(breq);
flow = rctx->flow;
err = sun8i_ce_run_task(ce, flow, crypto_tfm_alg_name(breq->base.tfm));
crypto_finalize_skcipher_request(engine, breq, err);
return 0;
}
static int sun8i_ce_cipher_unprepare(struct crypto_engine *engine, void *async_req)
{
struct skcipher_request *areq = container_of(async_req, struct skcipher_request, base);
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
struct sun8i_cipher_tfm_ctx *op = crypto_skcipher_ctx(tfm);
struct sun8i_ce_dev *ce = op->ce;
struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(areq);
struct sun8i_ce_flow *chan;
struct ce_task *cet;
unsigned int ivsize, offset;
int nr_sgs = rctx->nr_sgs;
int nr_sgd = rctx->nr_sgd;
int flow;
flow = rctx->flow;
chan = &ce->chanlist[flow];
cet = chan->tl;
ivsize = crypto_skcipher_ivsize(tfm);
if (areq->src == areq->dst) {
dma_unmap_sg(ce->dev, areq->src, nr_sgs, DMA_BIDIRECTIONAL);
} else {
if (nr_sgs > 0)
dma_unmap_sg(ce->dev, areq->src, nr_sgs, DMA_TO_DEVICE);
dma_unmap_sg(ce->dev, areq->dst, nr_sgd, DMA_FROM_DEVICE);
}
if (areq->iv && ivsize > 0) {
if (cet->t_iv)
dma_unmap_single(ce->dev, rctx->addr_iv, rctx->ivlen, DMA_TO_DEVICE);
offset = areq->cryptlen - ivsize;
if (rctx->op_dir & CE_DECRYPTION) {
memcpy(areq->iv, rctx->backup_iv, ivsize);
kfree_sensitive(rctx->backup_iv);
} else {
scatterwalk_map_and_copy(areq->iv, areq->dst, offset,
ivsize, 0);
}
kfree(rctx->bounce_iv);
}
dma_unmap_single(ce->dev, rctx->addr_key, op->keylen, DMA_TO_DEVICE);
return 0;
}
@ -347,9 +397,9 @@ int sun8i_ce_cipher_init(struct crypto_tfm *tfm)
crypto_tfm_alg_driver_name(&sktfm->base),
crypto_tfm_alg_driver_name(crypto_skcipher_tfm(op->fallback_tfm)));
op->enginectx.op.do_one_request = sun8i_ce_handle_cipher_request;
op->enginectx.op.prepare_request = NULL;
op->enginectx.op.unprepare_request = NULL;
op->enginectx.op.do_one_request = sun8i_ce_cipher_run;
op->enginectx.op.prepare_request = sun8i_ce_cipher_prepare;
op->enginectx.op.unprepare_request = sun8i_ce_cipher_unprepare;
err = pm_runtime_get_sync(op->ce->dev);
if (err < 0)
@ -366,10 +416,7 @@ void sun8i_ce_cipher_exit(struct crypto_tfm *tfm)
{
struct sun8i_cipher_tfm_ctx *op = crypto_tfm_ctx(tfm);
if (op->key) {
memzero_explicit(op->key, op->keylen);
kfree(op->key);
}
kfree_sensitive(op->key);
crypto_free_skcipher(op->fallback_tfm);
pm_runtime_put_sync_suspend(op->ce->dev);
}
@ -391,10 +438,7 @@ int sun8i_ce_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
dev_dbg(ce->dev, "ERROR: Invalid keylen %u\n", keylen);
return -EINVAL;
}
if (op->key) {
memzero_explicit(op->key, op->keylen);
kfree(op->key);
}
kfree_sensitive(op->key);
op->keylen = keylen;
op->key = kmemdup(key, keylen, GFP_KERNEL | GFP_DMA);
if (!op->key)
@ -416,10 +460,7 @@ int sun8i_ce_des3_setkey(struct crypto_skcipher *tfm, const u8 *key,
if (err)
return err;
if (op->key) {
memzero_explicit(op->key, op->keylen);
kfree(op->key);
}
kfree_sensitive(op->key);
op->keylen = keylen;
op->key = kmemdup(key, keylen, GFP_KERNEL | GFP_DMA);
if (!op->key)

View File

@ -22,6 +22,7 @@
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/reset.h>
#include <crypto/internal/rng.h>
#include <crypto/internal/skcipher.h>
#include "sun8i-ce.h"
@ -35,73 +36,108 @@
static const struct ce_variant ce_h3_variant = {
.alg_cipher = { CE_ALG_AES, CE_ALG_DES, CE_ALG_3DES,
},
.alg_hash = { CE_ALG_MD5, CE_ALG_SHA1, CE_ALG_SHA224, CE_ALG_SHA256,
CE_ALG_SHA384, CE_ALG_SHA512
},
.op_mode = { CE_OP_ECB, CE_OP_CBC
},
.ce_clks = {
{ "bus", 0, 200000000 },
{ "mod", 50000000, 0 },
}
},
.esr = ESR_H3,
.prng = CE_ALG_PRNG,
.trng = CE_ID_NOTSUPP,
};
static const struct ce_variant ce_h5_variant = {
.alg_cipher = { CE_ALG_AES, CE_ALG_DES, CE_ALG_3DES,
},
.alg_hash = { CE_ALG_MD5, CE_ALG_SHA1, CE_ALG_SHA224, CE_ALG_SHA256,
CE_ID_NOTSUPP, CE_ID_NOTSUPP
},
.op_mode = { CE_OP_ECB, CE_OP_CBC
},
.ce_clks = {
{ "bus", 0, 200000000 },
{ "mod", 300000000, 0 },
}
},
.esr = ESR_H5,
.prng = CE_ALG_PRNG,
.trng = CE_ID_NOTSUPP,
};
static const struct ce_variant ce_h6_variant = {
.alg_cipher = { CE_ALG_AES, CE_ALG_DES, CE_ALG_3DES,
},
.alg_hash = { CE_ALG_MD5, CE_ALG_SHA1, CE_ALG_SHA224, CE_ALG_SHA256,
CE_ALG_SHA384, CE_ALG_SHA512
},
.op_mode = { CE_OP_ECB, CE_OP_CBC
},
.has_t_dlen_in_bytes = true,
.cipher_t_dlen_in_bytes = true,
.hash_t_dlen_in_bits = true,
.prng_t_dlen_in_bytes = true,
.trng_t_dlen_in_bytes = true,
.ce_clks = {
{ "bus", 0, 200000000 },
{ "mod", 300000000, 0 },
{ "ram", 0, 400000000 },
}
},
.esr = ESR_H6,
.prng = CE_ALG_PRNG_V2,
.trng = CE_ALG_TRNG_V2,
};
static const struct ce_variant ce_a64_variant = {
.alg_cipher = { CE_ALG_AES, CE_ALG_DES, CE_ALG_3DES,
},
.alg_hash = { CE_ALG_MD5, CE_ALG_SHA1, CE_ALG_SHA224, CE_ALG_SHA256,
CE_ID_NOTSUPP, CE_ID_NOTSUPP
},
.op_mode = { CE_OP_ECB, CE_OP_CBC
},
.ce_clks = {
{ "bus", 0, 200000000 },
{ "mod", 300000000, 0 },
}
},
.esr = ESR_A64,
.prng = CE_ALG_PRNG,
.trng = CE_ID_NOTSUPP,
};
static const struct ce_variant ce_r40_variant = {
.alg_cipher = { CE_ALG_AES, CE_ALG_DES, CE_ALG_3DES,
},
.alg_hash = { CE_ALG_MD5, CE_ALG_SHA1, CE_ALG_SHA224, CE_ALG_SHA256,
CE_ID_NOTSUPP, CE_ID_NOTSUPP
},
.op_mode = { CE_OP_ECB, CE_OP_CBC
},
.ce_clks = {
{ "bus", 0, 200000000 },
{ "mod", 300000000, 0 },
}
},
.esr = ESR_R40,
.prng = CE_ALG_PRNG,
.trng = CE_ID_NOTSUPP,
};
/*
* sun8i_ce_get_engine_number() get the next channel slot
* This is a simple round-robin way of getting the next channel
* Flow 3 is reserved for xRNG operations
*/
int sun8i_ce_get_engine_number(struct sun8i_ce_dev *ce)
{
return atomic_inc_return(&ce->flow) % MAXFLOW;
return atomic_inc_return(&ce->flow) % (MAXFLOW - 1);
}
int sun8i_ce_run_task(struct sun8i_ce_dev *ce, int flow, const char *name)
{
u32 v;
int err = 0;
struct ce_task *cet = ce->chanlist[flow].tl;
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
ce->chanlist[flow].stat_req++;
@ -120,7 +156,10 @@ int sun8i_ce_run_task(struct sun8i_ce_dev *ce, int flow, const char *name)
/* Be sure all data is written before enabling the task */
wmb();
v = 1 | (ce->chanlist[flow].tl->t_common_ctl & 0x7F) << 8;
/* Only H6 needs to write a part of t_common_ctl along with "1", but since it is ignored
* on older SoCs, we have no reason to complicate things.
*/
v = 1 | ((le32_to_cpu(ce->chanlist[flow].tl->t_common_ctl) & 0x7F) << 8);
writel(v, ce->base + CE_TLR);
mutex_unlock(&ce->mlock);
@ -128,19 +167,56 @@ int sun8i_ce_run_task(struct sun8i_ce_dev *ce, int flow, const char *name)
msecs_to_jiffies(ce->chanlist[flow].timeout));
if (ce->chanlist[flow].status == 0) {
dev_err(ce->dev, "DMA timeout for %s\n", name);
dev_err(ce->dev, "DMA timeout for %s (tm=%d) on flow %d\n", name,
ce->chanlist[flow].timeout, flow);
err = -EFAULT;
}
/* No need to lock for this read, the channel is locked so
* nothing could modify the error value for this channel
*/
v = readl(ce->base + CE_ESR);
if (v) {
switch (ce->variant->esr) {
case ESR_H3:
/* Sadly, the error bit is not per flow */
if (v) {
dev_err(ce->dev, "CE ERROR: %x for flow %x\n", v, flow);
err = -EFAULT;
print_hex_dump(KERN_INFO, "TASK: ", DUMP_PREFIX_NONE, 16, 4,
cet, sizeof(struct ce_task), false);
}
if (v & CE_ERR_ALGO_NOTSUP)
dev_err(ce->dev, "CE ERROR: algorithm not supported\n");
if (v & CE_ERR_DATALEN)
dev_err(ce->dev, "CE ERROR: data length error\n");
if (v & CE_ERR_KEYSRAM)
dev_err(ce->dev, "CE ERROR: keysram access error for AES\n");
break;
case ESR_A64:
case ESR_H5:
case ESR_R40:
v >>= (flow * 4);
v &= 0xF;
if (v) {
dev_err(ce->dev, "CE ERROR: %x for flow %x\n", v, flow);
err = -EFAULT;
print_hex_dump(KERN_INFO, "TASK: ", DUMP_PREFIX_NONE, 16, 4,
cet, sizeof(struct ce_task), false);
}
if (v & CE_ERR_ALGO_NOTSUP)
dev_err(ce->dev, "CE ERROR: algorithm not supported\n");
if (v & CE_ERR_DATALEN)
dev_err(ce->dev, "CE ERROR: data length error\n");
if (v & CE_ERR_KEYSRAM)
dev_err(ce->dev, "CE ERROR: keysram access error for AES\n");
break;
case ESR_H6:
v >>= (flow * 8);
v &= 0xFF;
if (v) {
dev_err(ce->dev, "CE ERROR: %x for flow %x\n", v, flow);
err = -EFAULT;
print_hex_dump(KERN_INFO, "TASK: ", DUMP_PREFIX_NONE, 16, 4,
cet, sizeof(struct ce_task), false);
}
if (v & CE_ERR_ALGO_NOTSUP)
dev_err(ce->dev, "CE ERROR: algorithm not supported\n");
@ -150,7 +226,10 @@ int sun8i_ce_run_task(struct sun8i_ce_dev *ce, int flow, const char *name)
dev_err(ce->dev, "CE ERROR: keysram access error for AES\n");
if (v & CE_ERR_ADDR_INVALID)
dev_err(ce->dev, "CE ERROR: address invalid\n");
}
if (v & CE_ERR_KEYLADDER)
dev_err(ce->dev, "CE ERROR: key ladder configuration error\n");
break;
}
return err;
}
@ -280,13 +359,214 @@ static struct sun8i_ce_alg_template ce_algs[] = {
.decrypt = sun8i_ce_skdecrypt,
}
},
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_HASH
{ .type = CRYPTO_ALG_TYPE_AHASH,
.ce_algo_id = CE_ID_HASH_MD5,
.alg.hash = {
.init = sun8i_ce_hash_init,
.update = sun8i_ce_hash_update,
.final = sun8i_ce_hash_final,
.finup = sun8i_ce_hash_finup,
.digest = sun8i_ce_hash_digest,
.export = sun8i_ce_hash_export,
.import = sun8i_ce_hash_import,
.halg = {
.digestsize = MD5_DIGEST_SIZE,
.statesize = sizeof(struct md5_state),
.base = {
.cra_name = "md5",
.cra_driver_name = "md5-sun8i-ce",
.cra_priority = 300,
.cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = MD5_HMAC_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct sun8i_ce_hash_tfm_ctx),
.cra_module = THIS_MODULE,
.cra_init = sun8i_ce_hash_crainit,
.cra_exit = sun8i_ce_hash_craexit,
}
}
}
},
{ .type = CRYPTO_ALG_TYPE_AHASH,
.ce_algo_id = CE_ID_HASH_SHA1,
.alg.hash = {
.init = sun8i_ce_hash_init,
.update = sun8i_ce_hash_update,
.final = sun8i_ce_hash_final,
.finup = sun8i_ce_hash_finup,
.digest = sun8i_ce_hash_digest,
.export = sun8i_ce_hash_export,
.import = sun8i_ce_hash_import,
.halg = {
.digestsize = SHA1_DIGEST_SIZE,
.statesize = sizeof(struct sha1_state),
.base = {
.cra_name = "sha1",
.cra_driver_name = "sha1-sun8i-ce",
.cra_priority = 300,
.cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct sun8i_ce_hash_tfm_ctx),
.cra_module = THIS_MODULE,
.cra_init = sun8i_ce_hash_crainit,
.cra_exit = sun8i_ce_hash_craexit,
}
}
}
},
{ .type = CRYPTO_ALG_TYPE_AHASH,
.ce_algo_id = CE_ID_HASH_SHA224,
.alg.hash = {
.init = sun8i_ce_hash_init,
.update = sun8i_ce_hash_update,
.final = sun8i_ce_hash_final,
.finup = sun8i_ce_hash_finup,
.digest = sun8i_ce_hash_digest,
.export = sun8i_ce_hash_export,
.import = sun8i_ce_hash_import,
.halg = {
.digestsize = SHA224_DIGEST_SIZE,
.statesize = sizeof(struct sha256_state),
.base = {
.cra_name = "sha224",
.cra_driver_name = "sha224-sun8i-ce",
.cra_priority = 300,
.cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA224_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct sun8i_ce_hash_tfm_ctx),
.cra_module = THIS_MODULE,
.cra_init = sun8i_ce_hash_crainit,
.cra_exit = sun8i_ce_hash_craexit,
}
}
}
},
{ .type = CRYPTO_ALG_TYPE_AHASH,
.ce_algo_id = CE_ID_HASH_SHA256,
.alg.hash = {
.init = sun8i_ce_hash_init,
.update = sun8i_ce_hash_update,
.final = sun8i_ce_hash_final,
.finup = sun8i_ce_hash_finup,
.digest = sun8i_ce_hash_digest,
.export = sun8i_ce_hash_export,
.import = sun8i_ce_hash_import,
.halg = {
.digestsize = SHA256_DIGEST_SIZE,
.statesize = sizeof(struct sha256_state),
.base = {
.cra_name = "sha256",
.cra_driver_name = "sha256-sun8i-ce",
.cra_priority = 300,
.cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct sun8i_ce_hash_tfm_ctx),
.cra_module = THIS_MODULE,
.cra_init = sun8i_ce_hash_crainit,
.cra_exit = sun8i_ce_hash_craexit,
}
}
}
},
{ .type = CRYPTO_ALG_TYPE_AHASH,
.ce_algo_id = CE_ID_HASH_SHA384,
.alg.hash = {
.init = sun8i_ce_hash_init,
.update = sun8i_ce_hash_update,
.final = sun8i_ce_hash_final,
.finup = sun8i_ce_hash_finup,
.digest = sun8i_ce_hash_digest,
.export = sun8i_ce_hash_export,
.import = sun8i_ce_hash_import,
.halg = {
.digestsize = SHA384_DIGEST_SIZE,
.statesize = sizeof(struct sha512_state),
.base = {
.cra_name = "sha384",
.cra_driver_name = "sha384-sun8i-ce",
.cra_priority = 300,
.cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct sun8i_ce_hash_tfm_ctx),
.cra_module = THIS_MODULE,
.cra_init = sun8i_ce_hash_crainit,
.cra_exit = sun8i_ce_hash_craexit,
}
}
}
},
{ .type = CRYPTO_ALG_TYPE_AHASH,
.ce_algo_id = CE_ID_HASH_SHA512,
.alg.hash = {
.init = sun8i_ce_hash_init,
.update = sun8i_ce_hash_update,
.final = sun8i_ce_hash_final,
.finup = sun8i_ce_hash_finup,
.digest = sun8i_ce_hash_digest,
.export = sun8i_ce_hash_export,
.import = sun8i_ce_hash_import,
.halg = {
.digestsize = SHA512_DIGEST_SIZE,
.statesize = sizeof(struct sha512_state),
.base = {
.cra_name = "sha512",
.cra_driver_name = "sha512-sun8i-ce",
.cra_priority = 300,
.cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct sun8i_ce_hash_tfm_ctx),
.cra_module = THIS_MODULE,
.cra_init = sun8i_ce_hash_crainit,
.cra_exit = sun8i_ce_hash_craexit,
}
}
}
},
#endif
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_PRNG
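/*
* The PRNG below is exposed under the generic "stdrng" name. As a usage
* sketch, a kernel consumer could obtain it with
* crypto_alloc_rng("stdrng", 0, 0), seed it through crypto_rng_reset()
* and then read data with crypto_rng_get_bytes().
*/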
{
.type = CRYPTO_ALG_TYPE_RNG,
.alg.rng = {
.base = {
.cra_name = "stdrng",
.cra_driver_name = "sun8i-ce-prng",
.cra_priority = 300,
.cra_ctxsize = sizeof(struct sun8i_ce_rng_tfm_ctx),
.cra_module = THIS_MODULE,
.cra_init = sun8i_ce_prng_init,
.cra_exit = sun8i_ce_prng_exit,
},
.generate = sun8i_ce_prng_generate,
.seed = sun8i_ce_prng_seed,
.seedsize = PRNG_SEED_SIZE,
}
},
#endif
};
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
static int sun8i_ce_dbgfs_read(struct seq_file *seq, void *v)
static int sun8i_ce_debugfs_show(struct seq_file *seq, void *v)
{
struct sun8i_ce_dev *ce = seq->private;
int i;
unsigned int i;
for (i = 0; i < MAXFLOW; i++)
seq_printf(seq, "Channel %d: nreq %lu\n", i, ce->chanlist[i].stat_req);
@ -301,23 +581,28 @@ static int sun8i_ce_dbgfs_read(struct seq_file *seq, void *v)
ce_algs[i].alg.skcipher.base.cra_name,
ce_algs[i].stat_req, ce_algs[i].stat_fb);
break;
case CRYPTO_ALG_TYPE_AHASH:
seq_printf(seq, "%s %s %lu %lu\n",
ce_algs[i].alg.hash.halg.base.cra_driver_name,
ce_algs[i].alg.hash.halg.base.cra_name,
ce_algs[i].stat_req, ce_algs[i].stat_fb);
break;
case CRYPTO_ALG_TYPE_RNG:
seq_printf(seq, "%s %s %lu %lu\n",
ce_algs[i].alg.rng.base.cra_driver_name,
ce_algs[i].alg.rng.base.cra_name,
ce_algs[i].stat_req, ce_algs[i].stat_bytes);
break;
}
}
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_TRNG
seq_printf(seq, "HWRNG %lu %lu\n",
ce->hwrng_stat_req, ce->hwrng_stat_bytes);
#endif
return 0;
}
static int sun8i_ce_dbgfs_open(struct inode *inode, struct file *file)
{
return single_open(file, sun8i_ce_dbgfs_read, inode->i_private);
}
static const struct file_operations sun8i_ce_debugfs_fops = {
.owner = THIS_MODULE,
.open = sun8i_ce_dbgfs_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
DEFINE_SHOW_ATTRIBUTE(sun8i_ce_debugfs);
#endif
static void sun8i_ce_free_chanlist(struct sun8i_ce_dev *ce, int i)
@ -482,7 +767,8 @@ static int sun8i_ce_get_clks(struct sun8i_ce_dev *ce)
static int sun8i_ce_register_algs(struct sun8i_ce_dev *ce)
{
int ce_method, err, id, i;
int ce_method, err, id;
unsigned int i;
for (i = 0; i < ARRAY_SIZE(ce_algs); i++) {
ce_algs[i].ce = ce;
@ -515,6 +801,43 @@ static int sun8i_ce_register_algs(struct sun8i_ce_dev *ce)
return err;
}
break;
case CRYPTO_ALG_TYPE_AHASH:
id = ce_algs[i].ce_algo_id;
ce_method = ce->variant->alg_hash[id];
if (ce_method == CE_ID_NOTSUPP) {
dev_info(ce->dev,
"DEBUG: Algo of %s not supported\n",
ce_algs[i].alg.hash.halg.base.cra_name);
ce_algs[i].ce = NULL;
break;
}
dev_info(ce->dev, "Register %s\n",
ce_algs[i].alg.hash.halg.base.cra_name);
err = crypto_register_ahash(&ce_algs[i].alg.hash);
if (err) {
dev_err(ce->dev, "ERROR: Fail to register %s\n",
ce_algs[i].alg.hash.halg.base.cra_name);
ce_algs[i].ce = NULL;
return err;
}
break;
case CRYPTO_ALG_TYPE_RNG:
if (ce->variant->prng == CE_ID_NOTSUPP) {
dev_info(ce->dev,
"DEBUG: Algo of %s not supported\n",
ce_algs[i].alg.rng.base.cra_name);
ce_algs[i].ce = NULL;
break;
}
dev_info(ce->dev, "Register %s\n",
ce_algs[i].alg.rng.base.cra_name);
err = crypto_register_rng(&ce_algs[i].alg.rng);
if (err) {
dev_err(ce->dev, "Fail to register %s\n",
ce_algs[i].alg.rng.base.cra_name);
ce_algs[i].ce = NULL;
}
break;
default:
ce_algs[i].ce = NULL;
dev_err(ce->dev, "ERROR: tried to register an unknown algo\n");
@ -525,7 +848,7 @@ static int sun8i_ce_register_algs(struct sun8i_ce_dev *ce)
static void sun8i_ce_unregister_algs(struct sun8i_ce_dev *ce)
{
int i;
unsigned int i;
for (i = 0; i < ARRAY_SIZE(ce_algs); i++) {
if (!ce_algs[i].ce)
@ -536,6 +859,16 @@ static void sun8i_ce_unregister_algs(struct sun8i_ce_dev *ce)
ce_algs[i].alg.skcipher.base.cra_name);
crypto_unregister_skcipher(&ce_algs[i].alg.skcipher);
break;
case CRYPTO_ALG_TYPE_AHASH:
dev_info(ce->dev, "Unregister %d %s\n", i,
ce_algs[i].alg.hash.halg.base.cra_name);
crypto_unregister_ahash(&ce_algs[i].alg.hash);
break;
case CRYPTO_ALG_TYPE_RNG:
dev_info(ce->dev, "Unregister %d %s\n", i,
ce_algs[i].alg.rng.base.cra_name);
crypto_unregister_rng(&ce_algs[i].alg.rng);
break;
}
}
}
@ -573,14 +906,12 @@ static int sun8i_ce_probe(struct platform_device *pdev)
return irq;
ce->reset = devm_reset_control_get(&pdev->dev, NULL);
if (IS_ERR(ce->reset)) {
if (PTR_ERR(ce->reset) == -EPROBE_DEFER)
return PTR_ERR(ce->reset);
dev_err(&pdev->dev, "No reset control found\n");
return PTR_ERR(ce->reset);
}
if (IS_ERR(ce->reset))
return dev_err_probe(&pdev->dev, PTR_ERR(ce->reset),
"No reset control found\n");
mutex_init(&ce->mlock);
mutex_init(&ce->rnglock);
err = sun8i_ce_allocate_chanlist(ce);
if (err)
@ -605,6 +936,10 @@ static int sun8i_ce_probe(struct platform_device *pdev)
if (err < 0)
goto error_alg;
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_TRNG
sun8i_ce_hwrng_register(ce);
#endif
v = readl(ce->base + CE_CTR);
v >>= CE_DIE_ID_SHIFT;
v &= CE_DIE_ID_MASK;
@ -634,6 +969,10 @@ static int sun8i_ce_remove(struct platform_device *pdev)
{
struct sun8i_ce_dev *ce = platform_get_drvdata(pdev);
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_TRNG
sun8i_ce_hwrng_unregister(ce);
#endif
sun8i_ce_unregister_algs(ce);
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
@ -0,0 +1,413 @@
// SPDX-License-Identifier: GPL-2.0
/*
* sun8i-ce-hash.c - hardware cryptographic offloader for
* Allwinner H3/A64/H5/H2+/H6/R40 SoC
*
* Copyright (C) 2015-2020 Corentin Labbe <clabbe@baylibre.com>
*
* This file adds support for MD5 and SHA1/SHA224/SHA256/SHA384/SHA512.
*
* You can find the datasheet in Documentation/arm/sunxi/README
*/
#include <linux/dma-mapping.h>
#include <linux/pm_runtime.h>
#include <linux/scatterlist.h>
#include <crypto/internal/hash.h>
#include <crypto/sha.h>
#include <crypto/md5.h>
#include "sun8i-ce.h"
int sun8i_ce_hash_crainit(struct crypto_tfm *tfm)
{
struct sun8i_ce_hash_tfm_ctx *op = crypto_tfm_ctx(tfm);
struct ahash_alg *alg = __crypto_ahash_alg(tfm->__crt_alg);
struct sun8i_ce_alg_template *algt;
int err;
memset(op, 0, sizeof(struct sun8i_ce_hash_tfm_ctx));
algt = container_of(alg, struct sun8i_ce_alg_template, alg.hash);
op->ce = algt->ce;
op->enginectx.op.do_one_request = sun8i_ce_hash_run;
op->enginectx.op.prepare_request = NULL;
op->enginectx.op.unprepare_request = NULL;
/* FALLBACK */
op->fallback_tfm = crypto_alloc_ahash(crypto_tfm_alg_name(tfm), 0,
CRYPTO_ALG_NEED_FALLBACK);
if (IS_ERR(op->fallback_tfm)) {
dev_err(algt->ce->dev, "Fallback driver could not be loaded\n");
return PTR_ERR(op->fallback_tfm);
}
if (algt->alg.hash.halg.statesize < crypto_ahash_statesize(op->fallback_tfm))
algt->alg.hash.halg.statesize = crypto_ahash_statesize(op->fallback_tfm);
crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
sizeof(struct sun8i_ce_hash_reqctx) +
crypto_ahash_reqsize(op->fallback_tfm));
dev_info(op->ce->dev, "Fallback for %s is %s\n",
crypto_tfm_alg_driver_name(tfm),
crypto_tfm_alg_driver_name(&op->fallback_tfm->base));
err = pm_runtime_get_sync(op->ce->dev);
if (err < 0)
goto error_pm;
return 0;
error_pm:
pm_runtime_put_noidle(op->ce->dev);
crypto_free_ahash(op->fallback_tfm);
return err;
}
void sun8i_ce_hash_craexit(struct crypto_tfm *tfm)
{
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_tfm_ctx(tfm);
crypto_free_ahash(tfmctx->fallback_tfm);
pm_runtime_put_sync_suspend(tfmctx->ce->dev);
}
int sun8i_ce_hash_init(struct ahash_request *areq)
{
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
memset(rctx, 0, sizeof(struct sun8i_ce_hash_reqctx));
ahash_request_set_tfm(&rctx->fallback_req, tfmctx->fallback_tfm);
rctx->fallback_req.base.flags = areq->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP;
return crypto_ahash_init(&rctx->fallback_req);
}
int sun8i_ce_hash_export(struct ahash_request *areq, void *out)
{
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
ahash_request_set_tfm(&rctx->fallback_req, tfmctx->fallback_tfm);
rctx->fallback_req.base.flags = areq->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP;
return crypto_ahash_export(&rctx->fallback_req, out);
}
int sun8i_ce_hash_import(struct ahash_request *areq, const void *in)
{
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
ahash_request_set_tfm(&rctx->fallback_req, tfmctx->fallback_tfm);
rctx->fallback_req.base.flags = areq->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP;
return crypto_ahash_import(&rctx->fallback_req, in);
}
int sun8i_ce_hash_final(struct ahash_request *areq)
{
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg);
struct sun8i_ce_alg_template *algt;
#endif
ahash_request_set_tfm(&rctx->fallback_req, tfmctx->fallback_tfm);
rctx->fallback_req.base.flags = areq->base.flags &
CRYPTO_TFM_REQ_MAY_SLEEP;
rctx->fallback_req.result = areq->result;
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
algt = container_of(alg, struct sun8i_ce_alg_template, alg.hash);
algt->stat_fb++;
#endif
return crypto_ahash_final(&rctx->fallback_req);
}
int sun8i_ce_hash_update(struct ahash_request *areq)
{
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
ahash_request_set_tfm(&rctx->fallback_req, tfmctx->fallback_tfm);
rctx->fallback_req.base.flags = areq->base.flags &
CRYPTO_TFM_REQ_MAY_SLEEP;
rctx->fallback_req.nbytes = areq->nbytes;
rctx->fallback_req.src = areq->src;
return crypto_ahash_update(&rctx->fallback_req);
}
int sun8i_ce_hash_finup(struct ahash_request *areq)
{
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg);
struct sun8i_ce_alg_template *algt;
#endif
ahash_request_set_tfm(&rctx->fallback_req, tfmctx->fallback_tfm);
rctx->fallback_req.base.flags = areq->base.flags &
CRYPTO_TFM_REQ_MAY_SLEEP;
rctx->fallback_req.nbytes = areq->nbytes;
rctx->fallback_req.src = areq->src;
rctx->fallback_req.result = areq->result;
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
algt = container_of(alg, struct sun8i_ce_alg_template, alg.hash);
algt->stat_fb++;
#endif
return crypto_ahash_finup(&rctx->fallback_req);
}
static int sun8i_ce_hash_digest_fb(struct ahash_request *areq)
{
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
struct sun8i_ce_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg);
struct sun8i_ce_alg_template *algt;
#endif
ahash_request_set_tfm(&rctx->fallback_req, tfmctx->fallback_tfm);
rctx->fallback_req.base.flags = areq->base.flags &
CRYPTO_TFM_REQ_MAY_SLEEP;
rctx->fallback_req.nbytes = areq->nbytes;
rctx->fallback_req.src = areq->src;
rctx->fallback_req.result = areq->result;
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
algt = container_of(alg, struct sun8i_ce_alg_template, alg.hash);
algt->stat_fb++;
#endif
return crypto_ahash_digest(&rctx->fallback_req);
}
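/*
* The CE cannot handle zero-length requests, more than MAX_SG - 1 source
* SGs (one descriptor is reserved for the padding), or SGs whose length
* is not a multiple of 4 or whose offset is not 32-bit aligned; such
* requests go to the software fallback instead.
*/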
static bool sun8i_ce_hash_need_fallback(struct ahash_request *areq)
{
struct scatterlist *sg;
if (areq->nbytes == 0)
return true;
/* we need to reserve one SG for the padding */
if (sg_nents(areq->src) > MAX_SG - 1)
return true;
sg = areq->src;
while (sg) {
if (sg->length % 4 || !IS_ALIGNED(sg->offset, sizeof(u32)))
return true;
sg = sg_next(sg);
}
return false;
}
int sun8i_ce_hash_digest(struct ahash_request *areq)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg);
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
struct sun8i_ce_alg_template *algt;
struct sun8i_ce_dev *ce;
struct crypto_engine *engine;
struct scatterlist *sg;
int nr_sgs, e, i;
if (sun8i_ce_hash_need_fallback(areq))
return sun8i_ce_hash_digest_fb(areq);
nr_sgs = sg_nents(areq->src);
if (nr_sgs > MAX_SG - 1)
return sun8i_ce_hash_digest_fb(areq);
for_each_sg(areq->src, sg, nr_sgs, i) {
if (sg->length % 4 || !IS_ALIGNED(sg->offset, sizeof(u32)))
return sun8i_ce_hash_digest_fb(areq);
}
algt = container_of(alg, struct sun8i_ce_alg_template, alg.hash);
ce = algt->ce;
e = sun8i_ce_get_engine_number(ce);
rctx->flow = e;
engine = ce->chanlist[e].engine;
return crypto_transfer_hash_request_to_engine(engine, areq);
}
int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
{
struct ahash_request *areq = container_of(breq, struct ahash_request, base);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg);
struct sun8i_ce_hash_reqctx *rctx = ahash_request_ctx(areq);
struct sun8i_ce_alg_template *algt;
struct sun8i_ce_dev *ce;
struct sun8i_ce_flow *chan;
struct ce_task *cet;
struct scatterlist *sg;
int nr_sgs, flow, err;
unsigned int len;
u32 common;
u64 byte_count;
__le32 *bf;
void *buf;
int j, i, todo;
int nbw = 0;
u64 fill, min_fill;
__be64 *bebits;
__le64 *lebits;
void *result;
u64 bs;
int digestsize;
dma_addr_t addr_res, addr_pad;
algt = container_of(alg, struct sun8i_ce_alg_template, alg.hash);
ce = algt->ce;
bs = algt->alg.hash.halg.base.cra_blocksize;
digestsize = algt->alg.hash.halg.digestsize;
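/*
* SHA224 and SHA384 are truncated variants: the hardware writes the full
* SHA256/SHA512 state and only halg.digestsize bytes are copied to the
* result at the end of this function.
*/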
if (digestsize == SHA224_DIGEST_SIZE)
digestsize = SHA256_DIGEST_SIZE;
if (digestsize == SHA384_DIGEST_SIZE)
digestsize = SHA512_DIGEST_SIZE;
/* the padding can be up to two blocks */
buf = kzalloc(bs * 2, GFP_KERNEL | GFP_DMA);
if (!buf)
return -ENOMEM;
bf = (__le32 *)buf;
result = kzalloc(digestsize, GFP_KERNEL | GFP_DMA);
if (!result)
return -ENOMEM;
flow = rctx->flow;
chan = &ce->chanlist[flow];
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
algt->stat_req++;
#endif
dev_dbg(ce->dev, "%s %s len=%d\n", __func__, crypto_tfm_alg_name(areq->base.tfm), areq->nbytes);
cet = chan->tl;
memset(cet, 0, sizeof(struct ce_task));
cet->t_id = cpu_to_le32(flow);
common = ce->variant->alg_hash[algt->ce_algo_id];
common |= CE_COMM_INT;
cet->t_common_ctl = cpu_to_le32(common);
cet->t_sym_ctl = 0;
cet->t_asym_ctl = 0;
nr_sgs = dma_map_sg(ce->dev, areq->src, sg_nents(areq->src), DMA_TO_DEVICE);
if (nr_sgs <= 0 || nr_sgs > MAX_SG) {
dev_err(ce->dev, "Invalid sg number %d\n", nr_sgs);
err = -EINVAL;
goto theend;
}
len = areq->nbytes;
for_each_sg(areq->src, sg, nr_sgs, i) {
cet->t_src[i].addr = cpu_to_le32(sg_dma_address(sg));
todo = min(len, sg_dma_len(sg));
cet->t_src[i].len = cpu_to_le32(todo / 4);
len -= todo;
}
if (len > 0) {
dev_err(ce->dev, "remaining len %d\n", len);
err = -EINVAL;
goto theend;
}
addr_res = dma_map_single(ce->dev, result, digestsize, DMA_FROM_DEVICE);
cet->t_dst[0].addr = cpu_to_le32(addr_res);
cet->t_dst[0].len = cpu_to_le32(digestsize / 4);
if (dma_mapping_error(ce->dev, addr_res)) {
dev_err(ce->dev, "DMA map dest\n");
err = -EINVAL;
goto theend;
}
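/*
* Build the standard MD5/SHA padding in a separate buffer: a 0x80 byte,
* zero fill, then the message length in bits (64-bit little-endian for
* MD5, 64-bit big-endian for SHA1/SHA224/SHA256 and 128-bit big-endian
* for SHA384/SHA512).
*/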
byte_count = areq->nbytes;
j = 0;
bf[j++] = cpu_to_le32(0x80);
if (bs == 64) {
fill = 64 - (byte_count % 64);
min_fill = 2 * sizeof(u32) + (nbw ? 0 : sizeof(u32));
} else {
fill = 128 - (byte_count % 128);
min_fill = 4 * sizeof(u32) + (nbw ? 0 : sizeof(u32));
}
if (fill < min_fill)
fill += bs;
j += (fill - min_fill) / sizeof(u32);
switch (algt->ce_algo_id) {
case CE_ID_HASH_MD5:
lebits = (__le64 *)&bf[j];
*lebits = cpu_to_le64(byte_count << 3);
j += 2;
break;
case CE_ID_HASH_SHA1:
case CE_ID_HASH_SHA224:
case CE_ID_HASH_SHA256:
bebits = (__be64 *)&bf[j];
*bebits = cpu_to_be64(byte_count << 3);
j += 2;
break;
case CE_ID_HASH_SHA384:
case CE_ID_HASH_SHA512:
bebits = (__be64 *)&bf[j];
*bebits = cpu_to_be64(byte_count >> 61);
j += 2;
bebits = (__be64 *)&bf[j];
*bebits = cpu_to_be64(byte_count << 3);
j += 2;
break;
}
addr_pad = dma_map_single(ce->dev, buf, j * 4, DMA_TO_DEVICE);
cet->t_src[i].addr = cpu_to_le32(addr_pad);
cet->t_src[i].len = cpu_to_le32(j);
if (dma_mapping_error(ce->dev, addr_pad)) {
dev_err(ce->dev, "DMA error on padding SG\n");
err = -EINVAL;
goto theend;
}
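/* t_dlen covers the source data plus the padding words, expressed in bits
* when the variant sets hash_t_dlen_in_bits and in 32-bit words otherwise
*/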
if (ce->variant->hash_t_dlen_in_bits)
cet->t_dlen = cpu_to_le32((areq->nbytes + j * 4) * 8);
else
cet->t_dlen = cpu_to_le32(areq->nbytes / 4 + j);
chan->timeout = areq->nbytes;
err = sun8i_ce_run_task(ce, flow, crypto_tfm_alg_name(areq->base.tfm));
dma_unmap_single(ce->dev, addr_pad, j * 4, DMA_TO_DEVICE);
dma_unmap_sg(ce->dev, areq->src, nr_sgs, DMA_TO_DEVICE);
dma_unmap_single(ce->dev, addr_res, digestsize, DMA_FROM_DEVICE);
kfree(buf);
memcpy(areq->result, result, algt->alg.hash.halg.digestsize);
kfree(result);
theend:
crypto_finalize_hash_request(engine, breq, err);
return 0;
}
@ -0,0 +1,164 @@
// SPDX-License-Identifier: GPL-2.0
/*
* sun8i-ce-prng.c - hardware cryptographic offloader for
* Allwinner H3/A64/H5/H2+/H6/R40 SoC
*
* Copyright (C) 2015-2020 Corentin Labbe <clabbe@baylibre.com>
*
* This file handles the PRNG
*
* You can find a link to the datasheet in Documentation/arm/sunxi/README
*/
#include "sun8i-ce.h"
#include <linux/dma-mapping.h>
#include <linux/pm_runtime.h>
#include <crypto/internal/rng.h>
int sun8i_ce_prng_init(struct crypto_tfm *tfm)
{
struct sun8i_ce_rng_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
memset(ctx, 0, sizeof(struct sun8i_ce_rng_tfm_ctx));
return 0;
}
void sun8i_ce_prng_exit(struct crypto_tfm *tfm)
{
struct sun8i_ce_rng_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
memzero_explicit(ctx->seed, ctx->slen);
kfree(ctx->seed);
ctx->seed = NULL;
ctx->slen = 0;
}
int sun8i_ce_prng_seed(struct crypto_rng *tfm, const u8 *seed,
unsigned int slen)
{
struct sun8i_ce_rng_tfm_ctx *ctx = crypto_rng_ctx(tfm);
if (ctx->seed && ctx->slen != slen) {
memzero_explicit(ctx->seed, ctx->slen);
kfree(ctx->seed);
ctx->slen = 0;
ctx->seed = NULL;
}
if (!ctx->seed)
ctx->seed = kmalloc(slen, GFP_KERNEL | GFP_DMA);
if (!ctx->seed)
return -ENOMEM;
memcpy(ctx->seed, seed, slen);
ctx->slen = slen;
return 0;
}
int sun8i_ce_prng_generate(struct crypto_rng *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int dlen)
{
struct sun8i_ce_rng_tfm_ctx *ctx = crypto_rng_ctx(tfm);
struct rng_alg *alg = crypto_rng_alg(tfm);
struct sun8i_ce_alg_template *algt;
struct sun8i_ce_dev *ce;
dma_addr_t dma_iv, dma_dst;
int err = 0;
int flow = 3;
unsigned int todo;
struct sun8i_ce_flow *chan;
struct ce_task *cet;
u32 common, sym;
void *d;
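/* flow 3 is dedicated to the RNGs and serialized by ce->rnglock */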
algt = container_of(alg, struct sun8i_ce_alg_template, alg.rng);
ce = algt->ce;
if (ctx->slen == 0) {
dev_err(ce->dev, "not seeded\n");
return -EINVAL;
}
/* we want dlen + seedsize rounded up to a multiple of PRNG_DATA_SIZE */
todo = dlen + ctx->slen + PRNG_DATA_SIZE * 2;
todo -= todo % PRNG_DATA_SIZE;
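/* e.g. dlen = 32 with a 22-byte seed: 32 + 22 + 40 = 94, truncated to
* 80 bytes, i.e. four 20-byte PRNG blocks
*/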
d = kzalloc(todo, GFP_KERNEL | GFP_DMA);
if (!d) {
err = -ENOMEM;
goto err_mem;
}
dev_dbg(ce->dev, "%s PRNG slen=%u dlen=%u todo=%u multi=%u\n", __func__,
slen, dlen, todo, todo / PRNG_DATA_SIZE);
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
algt->stat_req++;
algt->stat_bytes += todo;
#endif
dma_iv = dma_map_single(ce->dev, ctx->seed, ctx->slen, DMA_TO_DEVICE);
if (dma_mapping_error(ce->dev, dma_iv)) {
dev_err(ce->dev, "Cannot DMA MAP IV\n");
goto err_iv;
}
dma_dst = dma_map_single(ce->dev, d, todo, DMA_FROM_DEVICE);
if (dma_mapping_error(ce->dev, dma_dst)) {
dev_err(ce->dev, "Cannot DMA MAP DST\n");
err = -EFAULT;
goto err_dst;
}
err = pm_runtime_get_sync(ce->dev);
if (err < 0) {
pm_runtime_put_noidle(ce->dev);
goto err_pm;
}
mutex_lock(&ce->rnglock);
chan = &ce->chanlist[flow];
cet = &chan->tl[0];
memset(cet, 0, sizeof(struct ce_task));
cet->t_id = cpu_to_le32(flow);
common = ce->variant->prng | CE_COMM_INT;
cet->t_common_ctl = cpu_to_le32(common);
/* recent CEs (H6) need the length in bytes, older ones use words */
if (ce->variant->prng_t_dlen_in_bytes)
cet->t_dlen = cpu_to_le32(todo);
else
cet->t_dlen = cpu_to_le32(todo / 4);
sym = PRNG_LD;
cet->t_sym_ctl = cpu_to_le32(sym);
cet->t_asym_ctl = 0;
cet->t_key = cpu_to_le32(dma_iv);
cet->t_iv = cpu_to_le32(dma_iv);
cet->t_dst[0].addr = cpu_to_le32(dma_dst);
cet->t_dst[0].len = cpu_to_le32(todo / 4);
ce->chanlist[flow].timeout = 2000;
err = sun8i_ce_run_task(ce, 3, "PRNG");
mutex_unlock(&ce->rnglock);
pm_runtime_put(ce->dev);
err_pm:
dma_unmap_single(ce->dev, dma_dst, todo, DMA_FROM_DEVICE);
err_dst:
dma_unmap_single(ce->dev, dma_iv, ctx->slen, DMA_TO_DEVICE);
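/* the first dlen bytes go to the caller, the following slen bytes
* become the seed for the next call
*/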
if (!err) {
memcpy(dst, d, dlen);
memcpy(ctx->seed, d + dlen, ctx->slen);
}
memzero_explicit(d, todo);
err_iv:
kfree(d);
err_mem:
return err;
}

View File

@ -0,0 +1,127 @@
// SPDX-License-Identifier: GPL-2.0
/*
* sun8i-ce-trng.c - hardware cryptographic offloader for
* Allwinner H3/A64/H5/H2+/H6/R40 SoC
*
* Copyright (C) 2015-2020 Corentin Labbe <clabbe@baylibre.com>
*
* This file handles the TRNG
*
* You can find a link to the datasheet in Documentation/arm/sunxi/README
*/
#include "sun8i-ce.h"
#include <linux/dma-mapping.h>
#include <linux/pm_runtime.h>
#include <linux/hw_random.h>
/*
* Note that according to the algorithm ID, two versions of the TRNG exist:
* the first is present in H3/H5/R40/A64 and the second in H6.
* This file adds support for both, but only the second works
* reliably according to rngtest.
*/
static int sun8i_ce_trng_read(struct hwrng *rng, void *data, size_t max, bool wait)
{
struct sun8i_ce_dev *ce;
dma_addr_t dma_dst;
int err = 0;
int flow = 3;
unsigned int todo;
struct sun8i_ce_flow *chan;
struct ce_task *cet;
u32 common;
void *d;
ce = container_of(rng, struct sun8i_ce_dev, trng);
/* round the data length up to a multiple of 32 bytes */
todo = max + 32;
todo -= todo % 32;
d = kzalloc(todo, GFP_KERNEL | GFP_DMA);
if (!d)
return -ENOMEM;
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
ce->hwrng_stat_req++;
ce->hwrng_stat_bytes += todo;
#endif
dma_dst = dma_map_single(ce->dev, d, todo, DMA_FROM_DEVICE);
if (dma_mapping_error(ce->dev, dma_dst)) {
dev_err(ce->dev, "Cannot DMA MAP DST\n");
err = -EFAULT;
goto err_dst;
}
err = pm_runtime_get_sync(ce->dev);
if (err < 0) {
pm_runtime_put_noidle(ce->dev);
goto err_pm;
}
mutex_lock(&ce->rnglock);
chan = &ce->chanlist[flow];
cet = &chan->tl[0];
memset(cet, 0, sizeof(struct ce_task));
cet->t_id = cpu_to_le32(flow);
common = ce->variant->trng | CE_COMM_INT;
cet->t_common_ctl = cpu_to_le32(common);
/* recent CEs (H6) need the length in bytes, older ones use words */
if (ce->variant->trng_t_dlen_in_bytes)
cet->t_dlen = cpu_to_le32(todo);
else
cet->t_dlen = cpu_to_le32(todo / 4);
cet->t_sym_ctl = 0;
cet->t_asym_ctl = 0;
cet->t_dst[0].addr = cpu_to_le32(dma_dst);
cet->t_dst[0].len = cpu_to_le32(todo / 4);
ce->chanlist[flow].timeout = todo;
err = sun8i_ce_run_task(ce, 3, "TRNG");
mutex_unlock(&ce->rnglock);
pm_runtime_put(ce->dev);
err_pm:
dma_unmap_single(ce->dev, dma_dst, todo, DMA_FROM_DEVICE);
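/* only the max bytes requested by the hwrng core are returned; the
* rounded-up surplus is wiped together with the rest of the buffer
*/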
if (!err) {
memcpy(data, d, max);
err = max;
}
memzero_explicit(d, todo);
err_dst:
kfree(d);
return err;
}
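/*
* The TRNG is exposed through the hw_random framework, so once registered
* it can feed /dev/hwrng and the kernel entropy pool like any other
* hardware RNG source.
*/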
int sun8i_ce_hwrng_register(struct sun8i_ce_dev *ce)
{
int ret;
if (ce->variant->trng == CE_ID_NOTSUPP) {
dev_info(ce->dev, "TRNG not supported\n");
return 0;
}
ce->trng.name = "sun8i Crypto Engine TRNG";
ce->trng.read = sun8i_ce_trng_read;
ce->trng.quality = 1000;
ret = hwrng_register(&ce->trng);
if (ret)
dev_err(ce->dev, "Fail to register the TRNG\n");
return ret;
}
void sun8i_ce_hwrng_unregister(struct sun8i_ce_dev *ce)
{
if (ce->variant->trng == CE_ID_NOTSUPP)
return;
hwrng_unregister(&ce->trng);
}
@ -12,6 +12,11 @@
#include <linux/atomic.h>
#include <linux/debugfs.h>
#include <linux/crypto.h>
#include <linux/hw_random.h>
#include <crypto/internal/hash.h>
#include <crypto/md5.h>
#include <crypto/rng.h>
#include <crypto/sha.h>
/* CE Registers */
#define CE_TDQ 0x00
@ -45,6 +50,16 @@
#define CE_ALG_AES 0
#define CE_ALG_DES 1
#define CE_ALG_3DES 2
#define CE_ALG_MD5 16
#define CE_ALG_SHA1 17
#define CE_ALG_SHA224 18
#define CE_ALG_SHA256 19
#define CE_ALG_SHA384 20
#define CE_ALG_SHA512 21
#define CE_ALG_TRNG 48
#define CE_ALG_PRNG 49
#define CE_ALG_TRNG_V2 0x1c
#define CE_ALG_PRNG_V2 0x1d
/* Used in ce_variant */
#define CE_ID_NOTSUPP 0xFF
@ -54,6 +69,14 @@
#define CE_ID_CIPHER_DES3 2
#define CE_ID_CIPHER_MAX 3
#define CE_ID_HASH_MD5 0
#define CE_ID_HASH_SHA1 1
#define CE_ID_HASH_SHA224 2
#define CE_ID_HASH_SHA256 3
#define CE_ID_HASH_SHA384 4
#define CE_ID_HASH_SHA512 5
#define CE_ID_HASH_MAX 6
#define CE_ID_OP_ECB 0
#define CE_ID_OP_CBC 1
#define CE_ID_OP_MAX 2
@ -65,6 +88,16 @@
#define CE_ERR_ADDR_INVALID BIT(5)
#define CE_ERR_KEYLADDER BIT(6)
#define ESR_H3 0
#define ESR_A64 1
#define ESR_R40 2
#define ESR_H5 3
#define ESR_H6 4
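/* the PRNG produces 160 bits of data per round and takes a 175-bit seed */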
#define PRNG_DATA_SIZE (160 / 8)
#define PRNG_SEED_SIZE DIV_ROUND_UP(175, 8)
#define PRNG_LD BIT(17)
#define CE_DIE_ID_SHIFT 16
#define CE_DIE_ID_MASK 0x07
@ -90,16 +123,34 @@ struct ce_clock {
* struct ce_variant - Describe CE capability for each variant hardware
* @alg_cipher: list of supported ciphers. For each CE_ID_ this gives the
* corresponding CE_ALG_XXX value
* @alg_hash: list of supported hashes. For each CE_ID_ this gives the
* corresponding CE_ALG_XXX value
* @op_mode: list of supported block modes
* @has_t_dlen_in_bytes: Does the request size for cipher is in
* @cipher_t_dlen_in_bytes: true if the cipher request size is given in
* bytes rather than words
* @hash_t_dlen_in_bits: true if the hash request size is given in
* bits rather than words
* @prng_t_dlen_in_bytes: true if the PRNG request size is given in
* bytes rather than words
* @trng_t_dlen_in_bytes: true if the TRNG request size is given in
* bytes rather than words
* @ce_clks: list of clocks needed by this variant
* @esr: The type of error register
* @prng: The CE_ALG_XXX value for the PRNG
* @trng: The CE_ALG_XXX value for the TRNG
*/
struct ce_variant {
char alg_cipher[CE_ID_CIPHER_MAX];
char alg_hash[CE_ID_HASH_MAX];
u32 op_mode[CE_ID_OP_MAX];
bool has_t_dlen_in_bytes;
bool cipher_t_dlen_in_bytes;
bool hash_t_dlen_in_bits;
bool prng_t_dlen_in_bytes;
bool trng_t_dlen_in_bytes;
struct ce_clock ce_clks[CE_MAX_CLOCKS];
int esr;
unsigned char prng;
unsigned char trng;
};
struct sginfo {
@ -129,8 +180,6 @@ struct ce_task {
/*
* struct sun8i_ce_flow - Information used by each flow
* @engine: ptr to the crypto_engine for this flow
* @bounce_iv: buffer which contain the IV
* @ivlen: size of bounce_iv
* @complete: completion for the current task on this flow
* @status: set to 1 by interrupt if task is done
* @t_phy: Physical address of task
@ -139,8 +188,6 @@ struct ce_task {
*/
struct sun8i_ce_flow {
struct crypto_engine *engine;
void *bounce_iv;
unsigned int ivlen;
struct completion complete;
int status;
dma_addr_t t_phy;
@ -158,6 +205,7 @@ struct sun8i_ce_flow {
* @reset: pointer to reset controller
* @dev: the platform device
* @mlock: Control access to device registers
* @rnglock: Control access to the RNG (dedicated channel 3)
* @chanlist: array of all flow
* @flow: flow to use in next request
* @variant: pointer to variant specific data
@ -170,6 +218,7 @@ struct sun8i_ce_dev {
struct reset_control *reset;
struct device *dev;
struct mutex mlock;
struct mutex rnglock;
struct sun8i_ce_flow *chanlist;
atomic_t flow;
const struct ce_variant *variant;
@ -177,17 +226,38 @@ struct sun8i_ce_dev {
struct dentry *dbgfs_dir;
struct dentry *dbgfs_stats;
#endif
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_TRNG
struct hwrng trng;
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
unsigned long hwrng_stat_req;
unsigned long hwrng_stat_bytes;
#endif
#endif
};
/*
* struct sun8i_cipher_req_ctx - context for a skcipher request
* @op_dir: direction (encrypt vs decrypt) for this request
* @flow: the flow to use for this request
* @backup_iv: buffer which contains the next IV to store
* @bounce_iv: buffer which contains the IV
* @ivlen: size of bounce_iv
* @nr_sgs: The number of source SGs (as given by dma_map_sg())
* @nr_sgd: The number of destination SGs (as given by dma_map_sg())
* @addr_iv: The IV addr returned by dma_map_single, needs to be unmapped later
* @addr_key: The key addr returned by dma_map_single, needs to be unmapped later
* @fallback_req: request struct for invoking the fallback skcipher TFM
*/
struct sun8i_cipher_req_ctx {
u32 op_dir;
int flow;
void *backup_iv;
void *bounce_iv;
unsigned int ivlen;
int nr_sgs;
int nr_sgd;
dma_addr_t addr_iv;
dma_addr_t addr_key;
struct skcipher_request fallback_req; // keep at the end
};
@ -207,6 +277,38 @@ struct sun8i_cipher_tfm_ctx {
struct crypto_skcipher *fallback_tfm;
};
/*
* struct sun8i_ce_hash_tfm_ctx - context for an ahash TFM
* @enginectx: crypto_engine used by this TFM
* @ce: pointer to the private data of driver handling this TFM
* @fallback_tfm: pointer to the fallback TFM
*/
struct sun8i_ce_hash_tfm_ctx {
struct crypto_engine_ctx enginectx;
struct sun8i_ce_dev *ce;
struct crypto_ahash *fallback_tfm;
};
/*
* struct sun8i_ce_hash_reqctx - context for an ahash request
* @fallback_req: pre-allocated fallback request
* @flow: the flow to use for this request
*/
struct sun8i_ce_hash_reqctx {
struct ahash_request fallback_req;
int flow;
};
/*
* struct sun8i_ce_rng_tfm_ctx - context for the PRNG TFM
* @seed: The seed to use
* @slen: The size of the seed
*/
struct sun8i_ce_rng_tfm_ctx {
void *seed;
unsigned int slen;
};
/*
* struct sun8i_ce_alg_template - crypto_alg template
* @type: the CRYPTO_ALG_TYPE for this template
@ -217,6 +319,7 @@ struct sun8i_cipher_tfm_ctx {
* @alg: one of the sub structs must be used
* @stat_req: number of requests done with this template
* @stat_fb: number of requests which used the fallback
* @stat_bytes: total number of bytes processed by this template
*/
struct sun8i_ce_alg_template {
u32 type;
@ -225,10 +328,13 @@ struct sun8i_ce_alg_template {
struct sun8i_ce_dev *ce;
union {
struct skcipher_alg skcipher;
struct ahash_alg hash;
struct rng_alg rng;
} alg;
#ifdef CONFIG_CRYPTO_DEV_SUN8I_CE_DEBUG
unsigned long stat_req;
unsigned long stat_fb;
unsigned long stat_bytes;
#endif
};
@ -246,3 +352,24 @@ int sun8i_ce_skencrypt(struct skcipher_request *areq);
int sun8i_ce_get_engine_number(struct sun8i_ce_dev *ce);
int sun8i_ce_run_task(struct sun8i_ce_dev *ce, int flow, const char *name);
int sun8i_ce_hash_crainit(struct crypto_tfm *tfm);
void sun8i_ce_hash_craexit(struct crypto_tfm *tfm);
int sun8i_ce_hash_init(struct ahash_request *areq);
int sun8i_ce_hash_export(struct ahash_request *areq, void *out);
int sun8i_ce_hash_import(struct ahash_request *areq, const void *in);
int sun8i_ce_hash(struct ahash_request *areq);
int sun8i_ce_hash_final(struct ahash_request *areq);
int sun8i_ce_hash_update(struct ahash_request *areq);
int sun8i_ce_hash_finup(struct ahash_request *areq);
int sun8i_ce_hash_digest(struct ahash_request *areq);
int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq);
int sun8i_ce_prng_generate(struct crypto_rng *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int dlen);
int sun8i_ce_prng_seed(struct crypto_rng *tfm, const u8 *seed, unsigned int slen);
void sun8i_ce_prng_exit(struct crypto_tfm *tfm);
int sun8i_ce_prng_init(struct crypto_tfm *tfm);
int sun8i_ce_hwrng_register(struct sun8i_ce_dev *ce);
void sun8i_ce_hwrng_unregister(struct sun8i_ce_dev *ce);
@ -1,2 +1,4 @@
obj-$(CONFIG_CRYPTO_DEV_SUN8I_SS) += sun8i-ss.o
sun8i-ss-y += sun8i-ss-core.o sun8i-ss-cipher.o
sun8i-ss-$(CONFIG_CRYPTO_DEV_SUN8I_SS_PRNG) += sun8i-ss-prng.o
sun8i-ss-$(CONFIG_CRYPTO_DEV_SUN8I_SS_HASH) += sun8i-ss-hash.o
@ -248,7 +248,6 @@ theend_iv:
offset = areq->cryptlen - ivsize;
if (rctx->op_dir & SS_DECRYPTION) {
memcpy(areq->iv, backup_iv, ivsize);
memzero_explicit(backup_iv, ivsize);
kfree_sensitive(backup_iv);
} else {
scatterwalk_map_and_copy(areq->iv, areq->dst, offset,
@ -368,10 +367,7 @@ void sun8i_ss_cipher_exit(struct crypto_tfm *tfm)
{
struct sun8i_cipher_tfm_ctx *op = crypto_tfm_ctx(tfm);
if (op->key) {
memzero_explicit(op->key, op->keylen);
kfree(op->key);
}
kfree_sensitive(op->key);
crypto_free_skcipher(op->fallback_tfm);
pm_runtime_put_sync(op->ss->dev);
}
@ -393,10 +389,7 @@ int sun8i_ss_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
dev_dbg(ss->dev, "ERROR: Invalid keylen %u\n", keylen);
return -EINVAL;
}
if (op->key) {
memzero_explicit(op->key, op->keylen);
kfree(op->key);
}
kfree_sensitive(op->key);
op->keylen = keylen;
op->key = kmemdup(key, keylen, GFP_KERNEL | GFP_DMA);
if (!op->key)
@ -419,10 +412,7 @@ int sun8i_ss_des3_setkey(struct crypto_skcipher *tfm, const u8 *key,
return -EINVAL;
}
if (op->key) {
memzero_explicit(op->key, op->keylen);
kfree(op->key);
}
kfree_sensitive(op->key);
op->keylen = keylen;
op->key = kmemdup(key, keylen, GFP_KERNEL | GFP_DMA);
if (!op->key)
@ -22,6 +22,7 @@
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/reset.h>
#include <crypto/internal/rng.h>
#include <crypto/internal/skcipher.h>
#include "sun8i-ss.h"
@ -40,6 +41,8 @@ static const struct ss_variant ss_a80_variant = {
static const struct ss_variant ss_a83t_variant = {
.alg_cipher = { SS_ALG_AES, SS_ALG_DES, SS_ALG_3DES,
},
.alg_hash = { SS_ALG_MD5, SS_ALG_SHA1, SS_ALG_SHA224, SS_ALG_SHA256,
},
.op_mode = { SS_OP_ECB, SS_OP_CBC,
},
.ss_clks = {
@ -61,7 +64,7 @@ int sun8i_ss_run_task(struct sun8i_ss_dev *ss, struct sun8i_cipher_req_ctx *rctx
const char *name)
{
int flow = rctx->flow;
u32 v = 1;
u32 v = SS_START;
int i;
#ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG
@ -264,13 +267,154 @@ static struct sun8i_ss_alg_template ss_algs[] = {
.decrypt = sun8i_ss_skdecrypt,
}
},
#ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_PRNG
{
.type = CRYPTO_ALG_TYPE_RNG,
.alg.rng = {
.base = {
.cra_name = "stdrng",
.cra_driver_name = "sun8i-ss-prng",
.cra_priority = 300,
.cra_ctxsize = sizeof(struct sun8i_ss_rng_tfm_ctx),
.cra_module = THIS_MODULE,
.cra_init = sun8i_ss_prng_init,
.cra_exit = sun8i_ss_prng_exit,
},
.generate = sun8i_ss_prng_generate,
.seed = sun8i_ss_prng_seed,
.seedsize = PRNG_SEED_SIZE,
}
},
#endif
#ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_HASH
{ .type = CRYPTO_ALG_TYPE_AHASH,
.ss_algo_id = SS_ID_HASH_MD5,
.alg.hash = {
.init = sun8i_ss_hash_init,
.update = sun8i_ss_hash_update,
.final = sun8i_ss_hash_final,
.finup = sun8i_ss_hash_finup,
.digest = sun8i_ss_hash_digest,
.export = sun8i_ss_hash_export,
.import = sun8i_ss_hash_import,
.halg = {
.digestsize = MD5_DIGEST_SIZE,
.statesize = sizeof(struct md5_state),
.base = {
.cra_name = "md5",
.cra_driver_name = "md5-sun8i-ss",
.cra_priority = 300,
.cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = MD5_HMAC_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct sun8i_ss_hash_tfm_ctx),
.cra_module = THIS_MODULE,
.cra_init = sun8i_ss_hash_crainit,
.cra_exit = sun8i_ss_hash_craexit,
}
}
}
},
{ .type = CRYPTO_ALG_TYPE_AHASH,
.ss_algo_id = SS_ID_HASH_SHA1,
.alg.hash = {
.init = sun8i_ss_hash_init,
.update = sun8i_ss_hash_update,
.final = sun8i_ss_hash_final,
.finup = sun8i_ss_hash_finup,
.digest = sun8i_ss_hash_digest,
.export = sun8i_ss_hash_export,
.import = sun8i_ss_hash_import,
.halg = {
.digestsize = SHA1_DIGEST_SIZE,
.statesize = sizeof(struct sha1_state),
.base = {
.cra_name = "sha1",
.cra_driver_name = "sha1-sun8i-ss",
.cra_priority = 300,
.cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct sun8i_ss_hash_tfm_ctx),
.cra_module = THIS_MODULE,
.cra_init = sun8i_ss_hash_crainit,
.cra_exit = sun8i_ss_hash_craexit,
}
}
}
},
{ .type = CRYPTO_ALG_TYPE_AHASH,
.ss_algo_id = SS_ID_HASH_SHA224,
.alg.hash = {
.init = sun8i_ss_hash_init,
.update = sun8i_ss_hash_update,
.final = sun8i_ss_hash_final,
.finup = sun8i_ss_hash_finup,
.digest = sun8i_ss_hash_digest,
.export = sun8i_ss_hash_export,
.import = sun8i_ss_hash_import,
.halg = {
.digestsize = SHA224_DIGEST_SIZE,
.statesize = sizeof(struct sha256_state),
.base = {
.cra_name = "sha224",
.cra_driver_name = "sha224-sun8i-ss",
.cra_priority = 300,
.cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA224_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct sun8i_ss_hash_tfm_ctx),
.cra_module = THIS_MODULE,
.cra_init = sun8i_ss_hash_crainit,
.cra_exit = sun8i_ss_hash_craexit,
}
}
}
},
{ .type = CRYPTO_ALG_TYPE_AHASH,
.ss_algo_id = SS_ID_HASH_SHA256,
.alg.hash = {
.init = sun8i_ss_hash_init,
.update = sun8i_ss_hash_update,
.final = sun8i_ss_hash_final,
.finup = sun8i_ss_hash_finup,
.digest = sun8i_ss_hash_digest,
.export = sun8i_ss_hash_export,
.import = sun8i_ss_hash_import,
.halg = {
.digestsize = SHA256_DIGEST_SIZE,
.statesize = sizeof(struct sha256_state),
.base = {
.cra_name = "sha256",
.cra_driver_name = "sha256-sun8i-ss",
.cra_priority = 300,
.cra_alignmask = 3,
.cra_flags = CRYPTO_ALG_TYPE_AHASH |
CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct sun8i_ss_hash_tfm_ctx),
.cra_module = THIS_MODULE,
.cra_init = sun8i_ss_hash_crainit,
.cra_exit = sun8i_ss_hash_craexit,
}
}
}
},
#endif
};
#ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG
static int sun8i_ss_dbgfs_read(struct seq_file *seq, void *v)
static int sun8i_ss_debugfs_show(struct seq_file *seq, void *v)
{
struct sun8i_ss_dev *ss = seq->private;
int i;
unsigned int i;
for (i = 0; i < MAXFLOW; i++)
seq_printf(seq, "Channel %d: nreq %lu\n", i, ss->flows[i].stat_req);
@ -280,28 +424,29 @@ static int sun8i_ss_dbgfs_read(struct seq_file *seq, void *v)
continue;
switch (ss_algs[i].type) {
case CRYPTO_ALG_TYPE_SKCIPHER:
seq_printf(seq, "%s %s %lu %lu\n",
seq_printf(seq, "%s %s reqs=%lu fallback=%lu\n",
ss_algs[i].alg.skcipher.base.cra_driver_name,
ss_algs[i].alg.skcipher.base.cra_name,
ss_algs[i].stat_req, ss_algs[i].stat_fb);
break;
case CRYPTO_ALG_TYPE_RNG:
seq_printf(seq, "%s %s reqs=%lu tsize=%lu\n",
ss_algs[i].alg.rng.base.cra_driver_name,
ss_algs[i].alg.rng.base.cra_name,
ss_algs[i].stat_req, ss_algs[i].stat_bytes);
break;
case CRYPTO_ALG_TYPE_AHASH:
seq_printf(seq, "%s %s reqs=%lu fallback=%lu\n",
ss_algs[i].alg.hash.halg.base.cra_driver_name,
ss_algs[i].alg.hash.halg.base.cra_name,
ss_algs[i].stat_req, ss_algs[i].stat_fb);
break;
}
}
return 0;
}
static int sun8i_ss_dbgfs_open(struct inode *inode, struct file *file)
{
return single_open(file, sun8i_ss_dbgfs_read, inode->i_private);
}
static const struct file_operations sun8i_ss_debugfs_fops = {
.owner = THIS_MODULE,
.open = sun8i_ss_dbgfs_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
DEFINE_SHOW_ATTRIBUTE(sun8i_ss_debugfs);
#endif
static void sun8i_ss_free_flows(struct sun8i_ss_dev *ss, int i)
@ -415,7 +560,8 @@ static void sun8i_ss_pm_exit(struct sun8i_ss_dev *ss)
static int sun8i_ss_register_algs(struct sun8i_ss_dev *ss)
{
int ss_method, err, id, i;
int ss_method, err, id;
unsigned int i;
for (i = 0; i < ARRAY_SIZE(ss_algs); i++) {
ss_algs[i].ss = ss;
@ -448,6 +594,34 @@ static int sun8i_ss_register_algs(struct sun8i_ss_dev *ss)
return err;
}
break;
case CRYPTO_ALG_TYPE_RNG:
err = crypto_register_rng(&ss_algs[i].alg.rng);
if (err) {
dev_err(ss->dev, "Fail to register %s\n",
ss_algs[i].alg.rng.base.cra_name);
ss_algs[i].ss = NULL;
}
break;
case CRYPTO_ALG_TYPE_AHASH:
id = ss_algs[i].ss_algo_id;
ss_method = ss->variant->alg_hash[id];
if (ss_method == SS_ID_NOTSUPP) {
dev_info(ss->dev,
"DEBUG: Algo of %s not supported\n",
ss_algs[i].alg.hash.halg.base.cra_name);
ss_algs[i].ss = NULL;
break;
}
dev_info(ss->dev, "Register %s\n",
ss_algs[i].alg.hash.halg.base.cra_name);
err = crypto_register_ahash(&ss_algs[i].alg.hash);
if (err) {
dev_err(ss->dev, "ERROR: Fail to register %s\n",
ss_algs[i].alg.hash.halg.base.cra_name);
ss_algs[i].ss = NULL;
return err;
}
break;
default:
ss_algs[i].ss = NULL;
dev_err(ss->dev, "ERROR: tried to register an unknown algo\n");
@ -458,7 +632,7 @@ static int sun8i_ss_register_algs(struct sun8i_ss_dev *ss)
static void sun8i_ss_unregister_algs(struct sun8i_ss_dev *ss)
{
int i;
unsigned int i;
for (i = 0; i < ARRAY_SIZE(ss_algs); i++) {
if (!ss_algs[i].ss)
@ -469,6 +643,16 @@ static void sun8i_ss_unregister_algs(struct sun8i_ss_dev *ss)
ss_algs[i].alg.skcipher.base.cra_name);
crypto_unregister_skcipher(&ss_algs[i].alg.skcipher);
break;
case CRYPTO_ALG_TYPE_RNG:
dev_info(ss->dev, "Unregister %d %s\n", i,
ss_algs[i].alg.rng.base.cra_name);
crypto_unregister_rng(&ss_algs[i].alg.rng);
break;
case CRYPTO_ALG_TYPE_AHASH:
dev_info(ss->dev, "Unregister %d %s\n", i,
ss_algs[i].alg.hash.halg.base.cra_name);
crypto_unregister_ahash(&ss_algs[i].alg.hash);
break;
}
}
}
@ -545,12 +729,9 @@ static int sun8i_ss_probe(struct platform_device *pdev)
return irq;
ss->reset = devm_reset_control_get(&pdev->dev, NULL);
if (IS_ERR(ss->reset)) {
if (PTR_ERR(ss->reset) == -EPROBE_DEFER)
return PTR_ERR(ss->reset);
dev_err(&pdev->dev, "No reset control found\n");
return PTR_ERR(ss->reset);
}
if (IS_ERR(ss->reset))
return dev_err_probe(&pdev->dev, PTR_ERR(ss->reset),
"No reset control found\n");
mutex_init(&ss->mlock);
@ -0,0 +1,444 @@
// SPDX-License-Identifier: GPL-2.0
/*
* sun8i-ss-hash.c - hardware cryptographic offloader for
* Allwinner A80/A83T SoC
*
* Copyright (C) 2015-2020 Corentin Labbe <clabbe@baylibre.com>
*
* This file adds support for MD5 and SHA1/SHA224/SHA256.
*
* You can find the datasheet in Documentation/arm/sunxi.rst
*/
#include <linux/dma-mapping.h>
#include <linux/pm_runtime.h>
#include <linux/scatterlist.h>
#include <crypto/internal/hash.h>
#include <crypto/sha.h>
#include <crypto/md5.h>
#include "sun8i-ss.h"
int sun8i_ss_hash_crainit(struct crypto_tfm *tfm)
{
struct sun8i_ss_hash_tfm_ctx *op = crypto_tfm_ctx(tfm);
struct ahash_alg *alg = __crypto_ahash_alg(tfm->__crt_alg);
struct sun8i_ss_alg_template *algt;
int err;
memset(op, 0, sizeof(struct sun8i_ss_hash_tfm_ctx));
algt = container_of(alg, struct sun8i_ss_alg_template, alg.hash);
op->ss = algt->ss;
op->enginectx.op.do_one_request = sun8i_ss_hash_run;
op->enginectx.op.prepare_request = NULL;
op->enginectx.op.unprepare_request = NULL;
/* FALLBACK */
op->fallback_tfm = crypto_alloc_ahash(crypto_tfm_alg_name(tfm), 0,
CRYPTO_ALG_NEED_FALLBACK);
if (IS_ERR(op->fallback_tfm)) {
dev_err(algt->ss->dev, "Fallback driver could not be loaded\n");
return PTR_ERR(op->fallback_tfm);
}
if (algt->alg.hash.halg.statesize < crypto_ahash_statesize(op->fallback_tfm))
algt->alg.hash.halg.statesize = crypto_ahash_statesize(op->fallback_tfm);
crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
sizeof(struct sun8i_ss_hash_reqctx) +
crypto_ahash_reqsize(op->fallback_tfm));
dev_info(op->ss->dev, "Fallback for %s is %s\n",
crypto_tfm_alg_driver_name(tfm),
crypto_tfm_alg_driver_name(&op->fallback_tfm->base));
err = pm_runtime_get_sync(op->ss->dev);
if (err < 0)
goto error_pm;
return 0;
error_pm:
pm_runtime_put_noidle(op->ss->dev);
crypto_free_ahash(op->fallback_tfm);
return err;
}
void sun8i_ss_hash_craexit(struct crypto_tfm *tfm)
{
struct sun8i_ss_hash_tfm_ctx *tfmctx = crypto_tfm_ctx(tfm);
crypto_free_ahash(tfmctx->fallback_tfm);
pm_runtime_put_sync_suspend(tfmctx->ss->dev);
}
int sun8i_ss_hash_init(struct ahash_request *areq)
{
struct sun8i_ss_hash_reqctx *rctx = ahash_request_ctx(areq);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
struct sun8i_ss_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
memset(rctx, 0, sizeof(struct sun8i_ss_hash_reqctx));
ahash_request_set_tfm(&rctx->fallback_req, tfmctx->fallback_tfm);
rctx->fallback_req.base.flags = areq->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP;
return crypto_ahash_init(&rctx->fallback_req);
}
int sun8i_ss_hash_export(struct ahash_request *areq, void *out)
{
struct sun8i_ss_hash_reqctx *rctx = ahash_request_ctx(areq);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
struct sun8i_ss_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
ahash_request_set_tfm(&rctx->fallback_req, tfmctx->fallback_tfm);
rctx->fallback_req.base.flags = areq->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP;
return crypto_ahash_export(&rctx->fallback_req, out);
}
int sun8i_ss_hash_import(struct ahash_request *areq, const void *in)
{
struct sun8i_ss_hash_reqctx *rctx = ahash_request_ctx(areq);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
struct sun8i_ss_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
ahash_request_set_tfm(&rctx->fallback_req, tfmctx->fallback_tfm);
rctx->fallback_req.base.flags = areq->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP;
return crypto_ahash_import(&rctx->fallback_req, in);
}
int sun8i_ss_hash_final(struct ahash_request *areq)
{
struct sun8i_ss_hash_reqctx *rctx = ahash_request_ctx(areq);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
struct sun8i_ss_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
#ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG
struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg);
struct sun8i_ss_alg_template *algt;
#endif
ahash_request_set_tfm(&rctx->fallback_req, tfmctx->fallback_tfm);
rctx->fallback_req.base.flags = areq->base.flags &
CRYPTO_TFM_REQ_MAY_SLEEP;
rctx->fallback_req.result = areq->result;
#ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG
algt = container_of(alg, struct sun8i_ss_alg_template, alg.hash);
algt->stat_fb++;
#endif
return crypto_ahash_final(&rctx->fallback_req);
}
int sun8i_ss_hash_update(struct ahash_request *areq)
{
struct sun8i_ss_hash_reqctx *rctx = ahash_request_ctx(areq);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
struct sun8i_ss_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
ahash_request_set_tfm(&rctx->fallback_req, tfmctx->fallback_tfm);
rctx->fallback_req.base.flags = areq->base.flags &
CRYPTO_TFM_REQ_MAY_SLEEP;
rctx->fallback_req.nbytes = areq->nbytes;
rctx->fallback_req.src = areq->src;
return crypto_ahash_update(&rctx->fallback_req);
}
int sun8i_ss_hash_finup(struct ahash_request *areq)
{
struct sun8i_ss_hash_reqctx *rctx = ahash_request_ctx(areq);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
struct sun8i_ss_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
#ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG
struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg);
struct sun8i_ss_alg_template *algt;
#endif
ahash_request_set_tfm(&rctx->fallback_req, tfmctx->fallback_tfm);
rctx->fallback_req.base.flags = areq->base.flags &
CRYPTO_TFM_REQ_MAY_SLEEP;
rctx->fallback_req.nbytes = areq->nbytes;
rctx->fallback_req.src = areq->src;
rctx->fallback_req.result = areq->result;
#ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG
algt = container_of(alg, struct sun8i_ss_alg_template, alg.hash);
algt->stat_fb++;
#endif
return crypto_ahash_finup(&rctx->fallback_req);
}
static int sun8i_ss_hash_digest_fb(struct ahash_request *areq)
{
struct sun8i_ss_hash_reqctx *rctx = ahash_request_ctx(areq);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
struct sun8i_ss_hash_tfm_ctx *tfmctx = crypto_ahash_ctx(tfm);
#ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG
struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg);
struct sun8i_ss_alg_template *algt;
#endif
ahash_request_set_tfm(&rctx->fallback_req, tfmctx->fallback_tfm);
rctx->fallback_req.base.flags = areq->base.flags &
CRYPTO_TFM_REQ_MAY_SLEEP;
rctx->fallback_req.nbytes = areq->nbytes;
rctx->fallback_req.src = areq->src;
rctx->fallback_req.result = areq->result;
#ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG
algt = container_of(alg, struct sun8i_ss_alg_template, alg.hash);
algt->stat_fb++;
#endif
return crypto_ahash_digest(&rctx->fallback_req);
}
static int sun8i_ss_run_hash_task(struct sun8i_ss_dev *ss,
struct sun8i_ss_hash_reqctx *rctx,
const char *name)
{
int flow = rctx->flow;
u32 v = SS_START;
int i;
#ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG
ss->flows[flow].stat_req++;
#endif
/* choose between stream0/stream1 */
if (flow)
v |= SS_FLOW1;
else
v |= SS_FLOW0;
v |= rctx->method;
for (i = 0; i < MAX_SG; i++) {
if (!rctx->t_dst[i].addr)
break;
mutex_lock(&ss->mlock);
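/* For the second and later chunks, BIT(17) is set and the previous
* destination (holding the intermediate digest) is fed back through the
* KEY/IV address registers, presumably so the SS resumes from that
* intermediate state.
*/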
if (i > 0) {
v |= BIT(17);
writel(rctx->t_dst[i - 1].addr, ss->base + SS_KEY_ADR_REG);
writel(rctx->t_dst[i - 1].addr, ss->base + SS_IV_ADR_REG);
}
dev_dbg(ss->dev,
"Processing SG %d on flow %d %s ctl=%x %d to %d method=%x src=%x dst=%x\n",
i, flow, name, v,
rctx->t_src[i].len, rctx->t_dst[i].len,
rctx->method, rctx->t_src[i].addr, rctx->t_dst[i].addr);
writel(rctx->t_src[i].addr, ss->base + SS_SRC_ADR_REG);
writel(rctx->t_dst[i].addr, ss->base + SS_DST_ADR_REG);
writel(rctx->t_src[i].len, ss->base + SS_LEN_ADR_REG);
writel(BIT(0) | BIT(1), ss->base + SS_INT_CTL_REG);
reinit_completion(&ss->flows[flow].complete);
ss->flows[flow].status = 0;
wmb();
writel(v, ss->base + SS_CTL_REG);
mutex_unlock(&ss->mlock);
wait_for_completion_interruptible_timeout(&ss->flows[flow].complete,
msecs_to_jiffies(2000));
if (ss->flows[flow].status == 0) {
dev_err(ss->dev, "DMA timeout for %s\n", name);
return -EFAULT;
}
}
return 0;
}
static bool sun8i_ss_hash_need_fallback(struct ahash_request *areq)
{
struct scatterlist *sg;
if (areq->nbytes == 0)
return true;
/* we need to reserve one SG for the padding */
if (sg_nents(areq->src) > MAX_SG - 1)
return true;
sg = areq->src;
while (sg) {
/* The SS can only hash full blocks; since it supports only MD5,
* SHA1, SHA224 and SHA256, the block size is always 64 bytes.
* TODO: handle requests whose last SG length is not a multiple of 64,
* which would require copying the data into a new 64-byte SG.
*/
if (sg->length % 64 || !IS_ALIGNED(sg->offset, sizeof(u32)))
return true;
sg = sg_next(sg);
}
return false;
}
int sun8i_ss_hash_digest(struct ahash_request *areq)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg);
struct sun8i_ss_hash_reqctx *rctx = ahash_request_ctx(areq);
struct sun8i_ss_alg_template *algt;
struct sun8i_ss_dev *ss;
struct crypto_engine *engine;
struct scatterlist *sg;
int nr_sgs, e, i;
if (sun8i_ss_hash_need_fallback(areq))
return sun8i_ss_hash_digest_fb(areq);
nr_sgs = sg_nents(areq->src);
if (nr_sgs > MAX_SG - 1)
return sun8i_ss_hash_digest_fb(areq);
for_each_sg(areq->src, sg, nr_sgs, i) {
if (sg->length % 4 || !IS_ALIGNED(sg->offset, sizeof(u32)))
return sun8i_ss_hash_digest_fb(areq);
}
algt = container_of(alg, struct sun8i_ss_alg_template, alg.hash);
ss = algt->ss;
e = sun8i_ss_get_engine_number(ss);
rctx->flow = e;
engine = ss->flows[e].engine;
return crypto_transfer_hash_request_to_engine(engine, areq);
}
/* sun8i_ss_hash_run - run an ahash request
* Send the data of the request to the SS along with an extra SG with padding
*/
int sun8i_ss_hash_run(struct crypto_engine *engine, void *breq)
{
struct ahash_request *areq = container_of(breq, struct ahash_request, base);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg);
struct sun8i_ss_hash_reqctx *rctx = ahash_request_ctx(areq);
struct sun8i_ss_alg_template *algt;
struct sun8i_ss_dev *ss;
struct scatterlist *sg;
int nr_sgs, err, digestsize;
unsigned int len;
u64 fill, min_fill, byte_count;
void *pad, *result;
int j, i, todo;
__be64 *bebits;
__le64 *lebits;
dma_addr_t addr_res, addr_pad;
__le32 *bf;
algt = container_of(alg, struct sun8i_ss_alg_template, alg.hash);
ss = algt->ss;
digestsize = algt->alg.hash.halg.digestsize;
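/*
* SHA224 is a truncated SHA256: the hardware produces the full SHA256
* state and only halg.digestsize bytes are copied to the result later.
*/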
if (digestsize == SHA224_DIGEST_SIZE)
digestsize = SHA256_DIGEST_SIZE;
/* the padding can be up to two blocks */
pad = kzalloc(algt->alg.hash.halg.base.cra_blocksize * 2, GFP_KERNEL | GFP_DMA);
if (!pad)
return -ENOMEM;
bf = (__le32 *)pad;
result = kzalloc(digestsize, GFP_KERNEL | GFP_DMA);
if (!result)
return -ENOMEM;
for (i = 0; i < MAX_SG; i++) {
rctx->t_dst[i].addr = 0;
rctx->t_dst[i].len = 0;
}
#ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG
algt->stat_req++;
#endif
rctx->method = ss->variant->alg_hash[algt->ss_algo_id];
nr_sgs = dma_map_sg(ss->dev, areq->src, sg_nents(areq->src), DMA_TO_DEVICE);
if (nr_sgs <= 0 || nr_sgs > MAX_SG) {
dev_err(ss->dev, "Invalid sg number %d\n", nr_sgs);
err = -EINVAL;
goto theend;
}
addr_res = dma_map_single(ss->dev, result, digestsize, DMA_FROM_DEVICE);
if (dma_mapping_error(ss->dev, addr_res)) {
dev_err(ss->dev, "DMA map dest\n");
err = -EINVAL;
goto theend;
}
len = areq->nbytes;
for_each_sg(areq->src, sg, nr_sgs, i) {
rctx->t_src[i].addr = sg_dma_address(sg);
todo = min(len, sg_dma_len(sg));
rctx->t_src[i].len = todo / 4;
len -= todo;
rctx->t_dst[i].addr = addr_res;
rctx->t_dst[i].len = digestsize / 4;
}
if (len > 0) {
dev_err(ss->dev, "remaining len %d\n", len);
err = -EINVAL;
goto theend;
}
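/*
* Build the final padding block: a 0x80 byte, zero fill, then the message
* length in bits (64-bit little-endian for MD5, big-endian for
* SHA1/SHA224/SHA256).
*/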
byte_count = areq->nbytes;
j = 0;
bf[j++] = cpu_to_le32(0x80);
fill = 64 - (byte_count % 64);
min_fill = 3 * sizeof(u32);
if (fill < min_fill)
fill += 64;
j += (fill - min_fill) / sizeof(u32);
switch (algt->ss_algo_id) {
case SS_ID_HASH_MD5:
lebits = (__le64 *)&bf[j];
*lebits = cpu_to_le64(byte_count << 3);
j += 2;
break;
case SS_ID_HASH_SHA1:
case SS_ID_HASH_SHA224:
case SS_ID_HASH_SHA256:
bebits = (__be64 *)&bf[j];
*bebits = cpu_to_be64(byte_count << 3);
j += 2;
break;
}
addr_pad = dma_map_single(ss->dev, pad, j * 4, DMA_TO_DEVICE);
rctx->t_src[i].addr = addr_pad;
rctx->t_src[i].len = j;
rctx->t_dst[i].addr = addr_res;
rctx->t_dst[i].len = digestsize / 4;
if (dma_mapping_error(ss->dev, addr_pad)) {
dev_err(ss->dev, "DMA error on padding SG\n");
err = -EINVAL;
goto theend;
}
err = sun8i_ss_run_hash_task(ss, rctx, crypto_tfm_alg_name(areq->base.tfm));
dma_unmap_single(ss->dev, addr_pad, j * 4, DMA_TO_DEVICE);
dma_unmap_sg(ss->dev, areq->src, nr_sgs, DMA_TO_DEVICE);
dma_unmap_single(ss->dev, addr_res, digestsize, DMA_FROM_DEVICE);
kfree(pad);
memcpy(areq->result, result, algt->alg.hash.halg.digestsize);
kfree(result);
theend:
crypto_finalize_hash_request(engine, breq, err);
return 0;
}
@ -0,0 +1,173 @@
// SPDX-License-Identifier: GPL-2.0
/*
* sun8i-ss-prng.c - hardware cryptographic offloader for
* Allwinner A80/A83T SoC
*
* Copyright (C) 2015-2020 Corentin Labbe <clabbe@baylibre.com>
*
* This file handles the PRNG found in the SS
*
* You can find a link to the datasheet in Documentation/arm/sunxi.rst
*/
#include "sun8i-ss.h"
#include <linux/dma-mapping.h>
#include <linux/pm_runtime.h>
#include <crypto/internal/rng.h>
int sun8i_ss_prng_seed(struct crypto_rng *tfm, const u8 *seed,
unsigned int slen)
{
struct sun8i_ss_rng_tfm_ctx *ctx = crypto_rng_ctx(tfm);
if (ctx->seed && ctx->slen != slen) {
memzero_explicit(ctx->seed, ctx->slen);
kfree(ctx->seed);
ctx->slen = 0;
ctx->seed = NULL;
}
if (!ctx->seed)
ctx->seed = kmalloc(slen, GFP_KERNEL | GFP_DMA);
if (!ctx->seed)
return -ENOMEM;
memcpy(ctx->seed, seed, slen);
ctx->slen = slen;
return 0;
}
int sun8i_ss_prng_init(struct crypto_tfm *tfm)
{
struct sun8i_ss_rng_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
memset(ctx, 0, sizeof(struct sun8i_ss_rng_tfm_ctx));
return 0;
}
void sun8i_ss_prng_exit(struct crypto_tfm *tfm)
{
struct sun8i_ss_rng_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
memzero_explicit(ctx->seed, ctx->slen);
kfree(ctx->seed);
ctx->seed = NULL;
ctx->slen = 0;
}
int sun8i_ss_prng_generate(struct crypto_rng *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int dlen)
{
struct sun8i_ss_rng_tfm_ctx *ctx = crypto_rng_ctx(tfm);
struct rng_alg *alg = crypto_rng_alg(tfm);
struct sun8i_ss_alg_template *algt;
struct sun8i_ss_dev *ss;
dma_addr_t dma_iv, dma_dst;
unsigned int todo;
int err = 0;
int flow;
void *d;
u32 v;
algt = container_of(alg, struct sun8i_ss_alg_template, alg.rng);
ss = algt->ss;
if (ctx->slen == 0) {
dev_err(ss->dev, "The PRNG is not seeded\n");
return -EINVAL;
}
/* The SS does not give an updated seed, so we need to get a new one.
 * So we ask for an extra PRNG_SEED_SIZE bytes of data.
 * We want dlen + seedsize rounded up to a multiple of PRNG_DATA_SIZE.
*/
todo = dlen + PRNG_SEED_SIZE + PRNG_DATA_SIZE;
todo -= todo % PRNG_DATA_SIZE;
d = kzalloc(todo, GFP_KERNEL | GFP_DMA);
if (!d)
return -ENOMEM;
flow = sun8i_ss_get_engine_number(ss);
#ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG
algt->stat_req++;
algt->stat_bytes += todo;
#endif
v = SS_ALG_PRNG | SS_PRNG_CONTINUE | SS_START;
if (flow)
v |= SS_FLOW1;
else
v |= SS_FLOW0;
dma_iv = dma_map_single(ss->dev, ctx->seed, ctx->slen, DMA_TO_DEVICE);
if (dma_mapping_error(ss->dev, dma_iv)) {
dev_err(ss->dev, "Cannot DMA MAP IV\n");
return -EFAULT;
}
dma_dst = dma_map_single(ss->dev, d, todo, DMA_FROM_DEVICE);
if (dma_mapping_error(ss->dev, dma_dst)) {
dev_err(ss->dev, "Cannot DMA MAP DST\n");
err = -EFAULT;
goto err_iv;
}
err = pm_runtime_get_sync(ss->dev);
if (err < 0) {
pm_runtime_put_noidle(ss->dev);
goto err_pm;
}
err = 0;
mutex_lock(&ss->mlock);
writel(dma_iv, ss->base + SS_IV_ADR_REG);
/* the PRNG acts badly (failing rngtest) without SS_KEY_ADR_REG set */
writel(dma_iv, ss->base + SS_KEY_ADR_REG);
writel(dma_dst, ss->base + SS_DST_ADR_REG);
writel(todo / 4, ss->base + SS_LEN_ADR_REG);
reinit_completion(&ss->flows[flow].complete);
ss->flows[flow].status = 0;
/* Be sure all data is written before enabling the task */
wmb();
writel(v, ss->base + SS_CTL_REG);
wait_for_completion_interruptible_timeout(&ss->flows[flow].complete,
msecs_to_jiffies(todo));
if (ss->flows[flow].status == 0) {
dev_err(ss->dev, "DMA timeout for PRNG (size=%u)\n", todo);
err = -EFAULT;
}
/* Since cipher and hash use the linux/cryptoengine and we have
 * a cryptoengine per flow, we are sure that they will issue only one
 * request per flow.
 * Since the cryptoengine waits for completion before submitting a new
 * request, the mlock could be released just after the final writel.
 * But the cryptoengine cannot handle crypto_rng, so we need to be sure
 * nothing else will use our flow.
 * The easiest way is to hold mlock until the hardware has finished our
 * request.
 * We could have used a per-flow lock, but this would increase
 * complexity.
 * The drawback is that no request can be handled on the other flow.
 */
mutex_unlock(&ss->mlock);
pm_runtime_put(ss->dev);
err_pm:
dma_unmap_single(ss->dev, dma_dst, todo, DMA_FROM_DEVICE);
err_iv:
dma_unmap_single(ss->dev, dma_iv, ctx->slen, DMA_TO_DEVICE);
if (!err) {
memcpy(dst, d, dlen);
/* Update seed */
memcpy(ctx->seed, d + dlen, ctx->slen);
}
memzero_explicit(d, todo);
kfree(d);
return err;
}
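
The buffer sizing above reserves dlen bytes of output plus PRNG_SEED_SIZE extra bytes whose tail is copied back as the next seed, then trims the total to a whole number of PRNG_DATA_SIZE (160-bit) blocks. A small standalone sketch of the arithmetic, with the constants taken from sun8i-ss.h further down; prng_buf_size() is a hypothetical name used only for illustration.

#include <stddef.h>
#include <stdio.h>

#define PRNG_DATA_SIZE (160 / 8)		/* one 160-bit PRNG output block */
#define PRNG_SEED_SIZE ((175 + 7) / 8)		/* DIV_ROUND_UP(175, 8) = 22 bytes */

static size_t prng_buf_size(size_t dlen)
{
	/* request dlen bytes plus room for a fresh seed, then trim to whole blocks */
	size_t todo = dlen + PRNG_SEED_SIZE + PRNG_DATA_SIZE;

	todo -= todo % PRNG_DATA_SIZE;
	return todo;
}

int main(void)
{
	/* e.g. a 32-byte request: 32 + 22 + 20 = 74, trimmed to 60 bytes (3 blocks) */
	printf("todo for dlen=32: %zu\n", prng_buf_size(32));
	return 0;
}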


@ -8,10 +8,16 @@
#include <crypto/aes.h>
#include <crypto/des.h>
#include <crypto/engine.h>
#include <crypto/rng.h>
#include <crypto/skcipher.h>
#include <linux/atomic.h>
#include <linux/debugfs.h>
#include <linux/crypto.h>
#include <crypto/internal/hash.h>
#include <crypto/md5.h>
#include <crypto/sha.h>
#define SS_START 1
#define SS_ENCRYPTION 0
#define SS_DECRYPTION BIT(6)
@ -19,6 +25,11 @@
#define SS_ALG_AES 0
#define SS_ALG_DES (1 << 2)
#define SS_ALG_3DES (2 << 2)
#define SS_ALG_MD5 (3 << 2)
#define SS_ALG_PRNG (4 << 2)
#define SS_ALG_SHA1 (6 << 2)
#define SS_ALG_SHA224 (7 << 2)
#define SS_ALG_SHA256 (8 << 2)
#define SS_CTL_REG 0x00
#define SS_INT_CTL_REG 0x04
@ -47,9 +58,17 @@
#define SS_OP_ECB 0
#define SS_OP_CBC (1 << 13)
#define SS_ID_HASH_MD5 0
#define SS_ID_HASH_SHA1 1
#define SS_ID_HASH_SHA224 2
#define SS_ID_HASH_SHA256 3
#define SS_ID_HASH_MAX 4
#define SS_FLOW0 BIT(30)
#define SS_FLOW1 BIT(31)
#define SS_PRNG_CONTINUE BIT(18)
#define MAX_SG 8
#define MAXFLOW 2
@ -59,6 +78,9 @@
#define SS_DIE_ID_SHIFT 20
#define SS_DIE_ID_MASK 0x07
#define PRNG_DATA_SIZE (160 / 8)
#define PRNG_SEED_SIZE DIV_ROUND_UP(175, 8)
/*
* struct ss_clock - Describe clocks used by sun8i-ss
* @name: Name of clock needed by this variant
@ -75,11 +97,14 @@ struct ss_clock {
* struct ss_variant - Describe SS capability for each variant hardware
* @alg_cipher: list of supported ciphers. for each SS_ID_ this will give the
* corresponding SS_ALG_XXX value
* @alg_hash: list of supported hashes. for each SS_ID_ this will give the
* corresponding SS_ALG_XXX value
* @op_mode: list of supported block modes
* @ss_clks! list of clock needed by this variant
* @ss_clks: list of clock needed by this variant
*/
struct ss_variant {
char alg_cipher[SS_ID_CIPHER_MAX];
char alg_hash[SS_ID_HASH_MAX];
u32 op_mode[SS_ID_OP_MAX];
struct ss_clock ss_clks[SS_MAX_CLOCKS];
};
@ -170,6 +195,8 @@ struct sun8i_cipher_req_ctx {
* @keylen: len of the key
* @ss: pointer to the private data of driver handling this TFM
* @fallback_tfm: pointer to the fallback TFM
*
* enginectx must be the first element
*/
struct sun8i_cipher_tfm_ctx {
struct crypto_engine_ctx enginectx;
@ -179,6 +206,46 @@ struct sun8i_cipher_tfm_ctx {
struct crypto_skcipher *fallback_tfm;
};
/*
* struct sun8i_ss_prng_ctx - context for PRNG TFM
* @seed: The seed to use
* @slen: The size of the seed
*/
struct sun8i_ss_rng_tfm_ctx {
void *seed;
unsigned int slen;
};
/*
* struct sun8i_ss_hash_tfm_ctx - context for an ahash TFM
* @enginectx: crypto_engine used by this TFM
* @fallback_tfm: pointer to the fallback TFM
* @ss: pointer to the private data of driver handling this TFM
*
* enginectx must be the first element
*/
struct sun8i_ss_hash_tfm_ctx {
struct crypto_engine_ctx enginectx;
struct crypto_ahash *fallback_tfm;
struct sun8i_ss_dev *ss;
};
/*
* struct sun8i_ss_hash_reqctx - context for an ahash request
* @t_src: list of DMA address and size for source SGs
* @t_dst: list of DMA address and size for destination SGs
* @fallback_req: pre-allocated fallback request
* @method: the register value for the algorithm used by this request
* @flow: the flow to use for this request
*/
struct sun8i_ss_hash_reqctx {
struct sginfo t_src[MAX_SG];
struct sginfo t_dst[MAX_SG];
struct ahash_request fallback_req;
u32 method;
int flow;
};
/*
* struct sun8i_ss_alg_template - crypto_alg template
* @type: the CRYPTO_ALG_TYPE for this template
@ -189,6 +256,7 @@ struct sun8i_cipher_tfm_ctx {
* @alg: one of sub struct must be used
* @stat_req: number of request done on this template
* @stat_fb: number of request which has fallbacked
* @stat_bytes: total data size done by this template
*/
struct sun8i_ss_alg_template {
u32 type;
@ -197,10 +265,13 @@ struct sun8i_ss_alg_template {
struct sun8i_ss_dev *ss;
union {
struct skcipher_alg skcipher;
struct rng_alg rng;
struct ahash_alg hash;
} alg;
#ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG
unsigned long stat_req;
unsigned long stat_fb;
unsigned long stat_bytes;
#endif
};
@ -218,3 +289,19 @@ int sun8i_ss_skencrypt(struct skcipher_request *areq);
int sun8i_ss_get_engine_number(struct sun8i_ss_dev *ss);
int sun8i_ss_run_task(struct sun8i_ss_dev *ss, struct sun8i_cipher_req_ctx *rctx, const char *name);
int sun8i_ss_prng_generate(struct crypto_rng *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int dlen);
int sun8i_ss_prng_seed(struct crypto_rng *tfm, const u8 *seed, unsigned int slen);
int sun8i_ss_prng_init(struct crypto_tfm *tfm);
void sun8i_ss_prng_exit(struct crypto_tfm *tfm);
int sun8i_ss_hash_crainit(struct crypto_tfm *tfm);
void sun8i_ss_hash_craexit(struct crypto_tfm *tfm);
int sun8i_ss_hash_init(struct ahash_request *areq);
int sun8i_ss_hash_export(struct ahash_request *areq, void *out);
int sun8i_ss_hash_import(struct ahash_request *areq, const void *in);
int sun8i_ss_hash_final(struct ahash_request *areq);
int sun8i_ss_hash_update(struct ahash_request *areq);
int sun8i_ss_hash_finup(struct ahash_request *areq);
int sun8i_ss_hash_digest(struct ahash_request *areq);
int sun8i_ss_hash_run(struct crypto_engine *engine, void *breq);


@ -55,7 +55,7 @@ static void set_dynamic_sa_command_1(struct dynamic_sa_ctl *sa, u32 cm,
sa->sa_command_1.w = 0;
sa->sa_command_1.bf.crypto_mode31 = (cm & 4) >> 2;
sa->sa_command_1.bf.crypto_mode9_8 = cm & 3;
sa->sa_command_1.bf.feedback_mode = cfb,
sa->sa_command_1.bf.feedback_mode = cfb;
sa->sa_command_1.bf.sa_rev = 1;
sa->sa_command_1.bf.hmac_muting = hmac_mc;
sa->sa_command_1.bf.extended_seq_num = esn;

View File

@ -15,6 +15,7 @@
#include <linux/ratelimit.h>
#include <linux/mutex.h>
#include <linux/scatterlist.h>
#include <crypto/internal/hash.h>
#include <crypto/internal/aead.h>
#include <crypto/internal/rng.h>


@ -99,7 +99,7 @@ static int meson_cipher(struct skcipher_request *areq)
unsigned int keyivlen, ivsize, offset, tloffset;
dma_addr_t phykeyiv;
void *backup_iv = NULL, *bkeyiv;
__le32 v;
u32 v;
algt = container_of(alg, struct meson_alg_template, alg.skcipher);
@ -340,10 +340,7 @@ void meson_cipher_exit(struct crypto_tfm *tfm)
{
struct meson_cipher_tfm_ctx *op = crypto_tfm_ctx(tfm);
if (op->key) {
memzero_explicit(op->key, op->keylen);
kfree(op->key);
}
kfree_sensitive(op->key);
crypto_free_skcipher(op->fallback_tfm);
}
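
kfree_sensitive() (formerly kzfree()) replaces the open-coded memzero_explicit()+kfree() pair removed above, with the added benefit of being safe on a NULL pointer. A rough paraphrase of its behaviour, not an exact copy of the mm/ implementation:

/* Rough paraphrase of kfree_sensitive(): a NULL-tolerant "wipe then free",
 * equivalent to the two-step pattern removed above.
 */
void kfree_sensitive(const void *p)
{
	size_t ks = ksize(p);	/* 0 for a NULL pointer */

	if (ks)
		memzero_explicit((void *)p, ks);
	kfree(p);
}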
@ -367,10 +364,7 @@ int meson_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
dev_dbg(mc->dev, "ERROR: Invalid keylen %u\n", keylen);
return -EINVAL;
}
if (op->key) {
memzero_explicit(op->key, op->keylen);
kfree(op->key);
}
kfree_sensitive(op->key);
op->keylen = keylen;
op->key = kmemdup(key, keylen, GFP_KERNEL | GFP_DMA);
if (!op->key)


@ -98,7 +98,7 @@ static struct meson_alg_template mc_algs[] = {
};
#ifdef CONFIG_CRYPTO_DEV_AMLOGIC_GXL_DEBUG
static int meson_dbgfs_read(struct seq_file *seq, void *v)
static int meson_debugfs_show(struct seq_file *seq, void *v)
{
struct meson_dev *mc = seq->private;
int i;
@ -118,19 +118,7 @@ static int meson_dbgfs_read(struct seq_file *seq, void *v)
}
return 0;
}
static int meson_dbgfs_open(struct inode *inode, struct file *file)
{
return single_open(file, meson_dbgfs_read, inode->i_private);
}
static const struct file_operations meson_debugfs_fops = {
.owner = THIS_MODULE,
.open = meson_dbgfs_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
DEFINE_SHOW_ATTRIBUTE(meson_debugfs);
#endif
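
The removed open()/file_operations boilerplate is exactly what DEFINE_SHOW_ATTRIBUTE() generates from the *_show function. Roughly, the macro from include/linux/seq_file.h expands to the following for this driver (paraphrased; exact layout may differ between kernel versions):

static int meson_debugfs_open(struct inode *inode, struct file *file)
{
	return single_open(file, meson_debugfs_show, inode->i_private);
}

static const struct file_operations meson_debugfs_fops = {
	.owner		= THIS_MODULE,
	.open		= meson_debugfs_open,
	.read		= seq_read,
	.llseek		= seq_lseek,
	.release	= single_release,
};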
static void meson_free_chanlist(struct meson_dev *mc, int i)


@ -1539,7 +1539,7 @@ static int atmel_aes_gcm_length(struct atmel_aes_dev *dd)
/* Write incr32(J0) into IV. */
j0_lsw = j0[3];
j0[3] = cpu_to_be32(be32_to_cpu(j0[3]) + 1);
be32_add_cpu(&j0[3], 1);
atmel_aes_write_block(dd, AES_IVR(0), j0);
j0[3] = j0_lsw;
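
be32_add_cpu() is the generic byte-order helper that performs the convert-add-convert dance the old line spelled out by hand. Paraphrasing its definition from include/linux/byteorder/generic.h:

/* Paraphrase of be32_add_cpu(): add a CPU-endian value to a big-endian
 * field in place.
 */
static inline void be32_add_cpu(__be32 *var, u32 val)
{
	*var = cpu_to_be32(be32_to_cpu(*var) + val);
}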


@ -912,7 +912,7 @@ static void atmel_tdes_skcipher_alg_init(struct skcipher_alg *alg)
{
alg->base.cra_priority = ATMEL_TDES_PRIORITY;
alg->base.cra_flags = CRYPTO_ALG_ASYNC;
alg->base.cra_ctxsize = sizeof(struct atmel_tdes_ctx),
alg->base.cra_ctxsize = sizeof(struct atmel_tdes_ctx);
alg->base.cra_module = THIS_MODULE;
alg->init = atmel_tdes_init_tfm;


@ -165,10 +165,6 @@ spu_skcipher_rx_sg_create(struct brcm_message *mssg,
return -EFAULT;
}
if (ctx->cipher.alg == CIPHER_ALG_RC4)
/* Add buffer to catch 260-byte SUPDT field for RC4 */
sg_set_buf(sg++, rctx->msg_buf.c.supdt_tweak, SPU_SUPDT_LEN);
if (stat_pad_len)
sg_set_buf(sg++, rctx->msg_buf.rx_stat_pad, stat_pad_len);
@ -317,7 +313,6 @@ static int handle_skcipher_req(struct iproc_reqctx_s *rctx)
u8 local_iv_ctr[MAX_IV_SIZE];
u32 stat_pad_len; /* num bytes to align status field */
u32 pad_len; /* total length of all padding */
bool update_key = false;
struct brcm_message *mssg; /* mailbox message */
/* number of entries in src and dst sg in mailbox message. */
@ -391,28 +386,6 @@ static int handle_skcipher_req(struct iproc_reqctx_s *rctx)
}
}
if (ctx->cipher.alg == CIPHER_ALG_RC4) {
rx_frag_num++;
if (chunk_start) {
/*
* for non-first RC4 chunks, use SUPDT from previous
* response as key for this chunk.
*/
cipher_parms.key_buf = rctx->msg_buf.c.supdt_tweak;
update_key = true;
cipher_parms.type = CIPHER_TYPE_UPDT;
} else if (!rctx->is_encrypt) {
/*
* First RC4 chunk. For decrypt, key in pre-built msg
* header may have been changed if encrypt required
* multiple chunks. So revert the key to the
* ctx->enckey value.
*/
update_key = true;
cipher_parms.type = CIPHER_TYPE_INIT;
}
}
if (ctx->max_payload == SPU_MAX_PAYLOAD_INF)
flow_log("max_payload infinite\n");
else
@ -425,14 +398,9 @@ static int handle_skcipher_req(struct iproc_reqctx_s *rctx)
memcpy(rctx->msg_buf.bcm_spu_req_hdr, ctx->bcm_spu_req_hdr,
sizeof(rctx->msg_buf.bcm_spu_req_hdr));
/*
* Pass SUPDT field as key. Key field in finish() call is only used
* when update_key has been set above for RC4. Will be ignored in
* all other cases.
*/
spu->spu_cipher_req_finish(rctx->msg_buf.bcm_spu_req_hdr + BCM_HDR_LEN,
ctx->spu_req_hdr_len, !(rctx->is_encrypt),
&cipher_parms, update_key, chunksize);
&cipher_parms, chunksize);
atomic64_add(chunksize, &iproc_priv.bytes_out);
@ -527,9 +495,6 @@ static void handle_skcipher_resp(struct iproc_reqctx_s *rctx)
__func__, rctx->total_received, payload_len);
dump_sg(req->dst, rctx->total_received, payload_len);
if (ctx->cipher.alg == CIPHER_ALG_RC4)
packet_dump(" supdt ", rctx->msg_buf.c.supdt_tweak,
SPU_SUPDT_LEN);
rctx->total_received += payload_len;
if (rctx->total_received == rctx->total_todo) {
@ -1853,26 +1818,6 @@ static int aes_setkey(struct crypto_skcipher *cipher, const u8 *key,
return 0;
}
static int rc4_setkey(struct crypto_skcipher *cipher, const u8 *key,
unsigned int keylen)
{
struct iproc_ctx_s *ctx = crypto_skcipher_ctx(cipher);
int i;
ctx->enckeylen = ARC4_MAX_KEY_SIZE + ARC4_STATE_SIZE;
ctx->enckey[0] = 0x00; /* 0x00 */
ctx->enckey[1] = 0x00; /* i */
ctx->enckey[2] = 0x00; /* 0x00 */
ctx->enckey[3] = 0x00; /* j */
for (i = 0; i < ARC4_MAX_KEY_SIZE; i++)
ctx->enckey[i + ARC4_STATE_SIZE] = key[i % keylen];
ctx->cipher_type = CIPHER_TYPE_INIT;
return 0;
}
static int skcipher_setkey(struct crypto_skcipher *cipher, const u8 *key,
unsigned int keylen)
{
@ -1895,9 +1840,6 @@ static int skcipher_setkey(struct crypto_skcipher *cipher, const u8 *key,
case CIPHER_ALG_AES:
err = aes_setkey(cipher, key, keylen);
break;
case CIPHER_ALG_RC4:
err = rc4_setkey(cipher, key, keylen);
break;
default:
pr_err("%s() Error: unknown cipher alg\n", __func__);
err = -EINVAL;
@ -1905,11 +1847,9 @@ static int skcipher_setkey(struct crypto_skcipher *cipher, const u8 *key,
if (err)
return err;
/* RC4 already populated ctx->enkey */
if (ctx->cipher.alg != CIPHER_ALG_RC4) {
memcpy(ctx->enckey, key, keylen);
ctx->enckeylen = keylen;
}
memcpy(ctx->enckey, key, keylen);
ctx->enckeylen = keylen;
/* SPU needs XTS keys in the reverse order the crypto API presents */
if ((ctx->cipher.alg == CIPHER_ALG_AES) &&
(ctx->cipher.mode == CIPHER_MODE_XTS)) {
@ -2872,9 +2812,6 @@ static int aead_authenc_setkey(struct crypto_aead *cipher,
goto badkey;
}
break;
case CIPHER_ALG_RC4:
ctx->cipher_type = CIPHER_TYPE_INIT;
break;
default:
pr_err("%s() Error: Unknown cipher alg\n", __func__);
return -EINVAL;
@ -2930,7 +2867,6 @@ static int aead_gcm_ccm_setkey(struct crypto_aead *cipher,
ctx->enckeylen = keylen;
ctx->authkeylen = 0;
memcpy(ctx->enckey, key, ctx->enckeylen);
switch (ctx->enckeylen) {
case AES_KEYSIZE_128:
@ -2946,6 +2882,8 @@ static int aead_gcm_ccm_setkey(struct crypto_aead *cipher,
goto badkey;
}
memcpy(ctx->enckey, key, ctx->enckeylen);
flow_log(" enckeylen:%u authkeylen:%u\n", ctx->enckeylen,
ctx->authkeylen);
flow_dump(" enc: ", ctx->enckey, ctx->enckeylen);
@ -3000,6 +2938,10 @@ static int aead_gcm_esp_setkey(struct crypto_aead *cipher,
struct iproc_ctx_s *ctx = crypto_aead_ctx(cipher);
flow_log("%s\n", __func__);
if (keylen < GCM_ESP_SALT_SIZE)
return -EINVAL;
ctx->salt_len = GCM_ESP_SALT_SIZE;
ctx->salt_offset = GCM_ESP_SALT_OFFSET;
memcpy(ctx->salt, key + keylen - GCM_ESP_SALT_SIZE, GCM_ESP_SALT_SIZE);
@ -3028,6 +2970,10 @@ static int rfc4543_gcm_esp_setkey(struct crypto_aead *cipher,
struct iproc_ctx_s *ctx = crypto_aead_ctx(cipher);
flow_log("%s\n", __func__);
if (keylen < GCM_ESP_SALT_SIZE)
return -EINVAL;
ctx->salt_len = GCM_ESP_SALT_SIZE;
ctx->salt_offset = GCM_ESP_SALT_OFFSET;
memcpy(ctx->salt, key + keylen - GCM_ESP_SALT_SIZE, GCM_ESP_SALT_SIZE);
@ -3057,6 +3003,10 @@ static int aead_ccm_esp_setkey(struct crypto_aead *cipher,
struct iproc_ctx_s *ctx = crypto_aead_ctx(cipher);
flow_log("%s\n", __func__);
if (keylen < CCM_ESP_SALT_SIZE)
return -EINVAL;
ctx->salt_len = CCM_ESP_SALT_SIZE;
ctx->salt_offset = CCM_ESP_SALT_OFFSET;
memcpy(ctx->salt, key + keylen - CCM_ESP_SALT_SIZE, CCM_ESP_SALT_SIZE);
@ -3603,25 +3553,6 @@ static struct iproc_alg_s driver_algs[] = {
},
/* SKCIPHER algorithms. */
{
.type = CRYPTO_ALG_TYPE_SKCIPHER,
.alg.skcipher = {
.base.cra_name = "ecb(arc4)",
.base.cra_driver_name = "ecb-arc4-iproc",
.base.cra_blocksize = ARC4_BLOCK_SIZE,
.min_keysize = ARC4_MIN_KEY_SIZE,
.max_keysize = ARC4_MAX_KEY_SIZE,
.ivsize = 0,
},
.cipher_info = {
.alg = CIPHER_ALG_RC4,
.mode = CIPHER_MODE_NONE,
},
.auth_info = {
.alg = HASH_ALG_NONE,
.mode = HASH_MODE_NONE,
},
},
{
.type = CRYPTO_ALG_TYPE_SKCIPHER,
.alg.skcipher = {
@ -4526,15 +4457,9 @@ static void spu_counters_init(void)
static int spu_register_skcipher(struct iproc_alg_s *driver_alg)
{
struct spu_hw *spu = &iproc_priv.spu;
struct skcipher_alg *crypto = &driver_alg->alg.skcipher;
int err;
/* SPU2 does not support RC4 */
if ((driver_alg->cipher_info.alg == CIPHER_ALG_RC4) &&
(spu->spu_type == SPU_TYPE_SPU2))
return 0;
crypto->base.cra_module = THIS_MODULE;
crypto->base.cra_priority = cipher_pri;
crypto->base.cra_alignmask = 0;


@ -388,7 +388,6 @@ struct spu_hw {
u16 spu_req_hdr_len,
unsigned int is_inbound,
struct spu_cipher_parms *cipher_parms,
bool update_key,
unsigned int data_size);
void (*spu_request_pad)(u8 *pad_start, u32 gcm_padding,
u32 hash_pad_len, enum hash_alg auth_alg,


@ -222,10 +222,6 @@ void spum_dump_msg_hdr(u8 *buf, unsigned int buf_len)
cipher_key_len = 24;
name = "3DES";
break;
case CIPHER_ALG_RC4:
cipher_key_len = 260;
name = "ARC4";
break;
case CIPHER_ALG_AES:
switch (cipher_type) {
case CIPHER_TYPE_AES128:
@ -919,21 +915,16 @@ u16 spum_cipher_req_init(u8 *spu_hdr, struct spu_cipher_parms *cipher_parms)
* @spu_req_hdr_len: Length in bytes of the SPU request header
* @isInbound: 0 encrypt, 1 decrypt
* @cipher_parms: Parameters describing cipher operation to be performed
* @update_key: If true, rewrite the cipher key in SCTX
* @data_size: Length of the data in the BD field
*
* Assumes much of the header was already filled in at setkey() time in
* spum_cipher_req_init().
* spum_cipher_req_init() fills in the encryption key. For RC4, when submitting
* a request for a non-first chunk, we use the 260-byte SUPDT field from the
* previous response as the key. update_key is true for this case. Unused in all
* other cases.
* spum_cipher_req_init() fills in the encryption key.
*/
void spum_cipher_req_finish(u8 *spu_hdr,
u16 spu_req_hdr_len,
unsigned int is_inbound,
struct spu_cipher_parms *cipher_parms,
bool update_key,
unsigned int data_size)
{
struct SPUHEADER *spuh;
@ -948,11 +939,6 @@ void spum_cipher_req_finish(u8 *spu_hdr,
flow_log(" in: %u\n", is_inbound);
flow_log(" cipher alg: %u, cipher_type: %u\n", cipher_parms->alg,
cipher_parms->type);
if (update_key) {
flow_log(" cipher key len: %u\n", cipher_parms->key_len);
flow_dump(" key: ", cipher_parms->key_buf,
cipher_parms->key_len);
}
/*
* In XTS mode, API puts "i" parameter (block tweak) in IV. For
@ -981,13 +967,6 @@ void spum_cipher_req_finish(u8 *spu_hdr,
else
cipher_bits &= ~CIPHER_INBOUND;
/* update encryption key for RC4 on non-first chunk */
if (update_key) {
spuh->sa.cipher_flags |=
cipher_parms->type << CIPHER_TYPE_SHIFT;
memcpy(spuh + 1, cipher_parms->key_buf, cipher_parms->key_len);
}
if (cipher_parms->alg && cipher_parms->iv_buf && cipher_parms->iv_len)
/* cipher iv provided so put it in here */
memcpy(bdesc_ptr - cipher_parms->iv_len, cipher_parms->iv_buf,


@ -251,7 +251,6 @@ void spum_cipher_req_finish(u8 *spu_hdr,
u16 spu_req_hdr_len,
unsigned int is_inbound,
struct spu_cipher_parms *cipher_parms,
bool update_key,
unsigned int data_size);
void spum_request_pad(u8 *pad_start,


@ -1170,21 +1170,16 @@ u16 spu2_cipher_req_init(u8 *spu_hdr, struct spu_cipher_parms *cipher_parms)
* @spu_req_hdr_len: Length in bytes of the SPU request header
* @isInbound: 0 encrypt, 1 decrypt
* @cipher_parms: Parameters describing cipher operation to be performed
* @update_key: If true, rewrite the cipher key in SCTX
* @data_size: Length of the data in the BD field
*
* Assumes much of the header was already filled in at setkey() time in
* spu_cipher_req_init().
* spu_cipher_req_init() fills in the encryption key. For RC4, when submitting a
* request for a non-first chunk, we use the 260-byte SUPDT field from the
* previous response as the key. update_key is true for this case. Unused in all
* other cases.
* spu_cipher_req_init() fills in the encryption key.
*/
void spu2_cipher_req_finish(u8 *spu_hdr,
u16 spu_req_hdr_len,
unsigned int is_inbound,
struct spu_cipher_parms *cipher_parms,
bool update_key,
unsigned int data_size)
{
struct SPU2_FMD *fmd;
@ -1196,11 +1191,6 @@ void spu2_cipher_req_finish(u8 *spu_hdr,
flow_log(" in: %u\n", is_inbound);
flow_log(" cipher alg: %u, cipher_type: %u\n", cipher_parms->alg,
cipher_parms->type);
if (update_key) {
flow_log(" cipher key len: %u\n", cipher_parms->key_len);
flow_dump(" key: ", cipher_parms->key_buf,
cipher_parms->key_len);
}
flow_log(" iv len: %d\n", cipher_parms->iv_len);
flow_dump(" iv: ", cipher_parms->iv_buf, cipher_parms->iv_len);
flow_log(" data_size: %u\n", data_size);


@ -200,7 +200,6 @@ void spu2_cipher_req_finish(u8 *spu_hdr,
u16 spu_req_hdr_len,
unsigned int is_inbound,
struct spu_cipher_parms *cipher_parms,
bool update_key,
unsigned int data_size);
void spu2_request_pad(u8 *pad_start, u32 gcm_padding, u32 hash_pad_len,
enum hash_alg auth_alg, enum hash_mode auth_mode,


@ -101,6 +101,7 @@ config CRYPTO_DEV_FSL_CAAM_CRYPTO_API
select CRYPTO_AUTHENC
select CRYPTO_SKCIPHER
select CRYPTO_LIB_DES
select CRYPTO_XTS
help
Selecting this will offload crypto for users of the
scatterlist crypto API (such as the linux native IPSec
@ -114,6 +115,7 @@ config CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI
select CRYPTO_AUTHENC
select CRYPTO_SKCIPHER
select CRYPTO_DES
select CRYPTO_XTS
help
Selecting this will use CAAM Queue Interface (QI) for sending
& receiving crypto jobs to/from CAAM. This gives better performance
@ -165,6 +167,7 @@ config CRYPTO_DEV_FSL_DPAA2_CAAM
select CRYPTO_AEAD
select CRYPTO_HASH
select CRYPTO_DES
select CRYPTO_XTS
help
CAAM driver for QorIQ Data Path Acceleration Architecture 2.
It handles DPSECI DPAA2 objects that sit on the Management Complex


@ -27,6 +27,8 @@ ifneq ($(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI),)
ccflags-y += -DCONFIG_CAAM_QI
endif
caam-$(CONFIG_DEBUG_FS) += debugfs.o
obj-$(CONFIG_CRYPTO_DEV_FSL_DPAA2_CAAM) += dpaa2_caam.o
dpaa2_caam-y := caamalg_qi2.o dpseci.o


@ -57,6 +57,8 @@
#include "key_gen.h"
#include "caamalg_desc.h"
#include <crypto/engine.h>
#include <crypto/xts.h>
#include <asm/unaligned.h>
/*
* crypto alg
@ -114,10 +116,13 @@ struct caam_ctx {
struct alginfo adata;
struct alginfo cdata;
unsigned int authsize;
bool xts_key_fallback;
struct crypto_skcipher *fallback;
};
struct caam_skcipher_req_ctx {
struct skcipher_edesc *edesc;
struct skcipher_request fallback_req;
};
struct caam_aead_req_ctx {
@ -829,11 +834,23 @@ static int xts_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
{
struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
struct device *jrdev = ctx->jrdev;
struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev->parent);
u32 *desc;
int err;
if (keylen != 2 * AES_MIN_KEY_SIZE && keylen != 2 * AES_MAX_KEY_SIZE) {
err = xts_verify_key(skcipher, key, keylen);
if (err) {
dev_dbg(jrdev, "key size mismatch\n");
return -EINVAL;
return err;
}
if (keylen != 2 * AES_KEYSIZE_128 && keylen != 2 * AES_KEYSIZE_256)
ctx->xts_key_fallback = true;
if (ctrlpriv->era <= 8 || ctx->xts_key_fallback) {
err = crypto_skcipher_setkey(ctx->fallback, key, keylen);
if (err)
return err;
}
ctx->cdata.keylen = keylen;
@ -1755,6 +1772,14 @@ static int skcipher_do_one_req(struct crypto_engine *engine, void *areq)
return ret;
}
static inline bool xts_skcipher_ivsize(struct skcipher_request *req)
{
struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
unsigned int ivsize = crypto_skcipher_ivsize(skcipher);
return !!get_unaligned((u64 *)(req->iv + (ivsize / 2)));
}
static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
{
struct skcipher_edesc *edesc;
@ -1762,12 +1787,34 @@ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
struct device *jrdev = ctx->jrdev;
struct caam_drv_private_jr *jrpriv = dev_get_drvdata(jrdev);
struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev->parent);
u32 *desc;
int ret = 0;
if (!req->cryptlen)
/*
* XTS is expected to return an error even for input length = 0
* Note that the case input length < block size will be caught during
* HW offloading and return an error.
*/
if (!req->cryptlen && !ctx->fallback)
return 0;
if (ctx->fallback && ((ctrlpriv->era <= 8 && xts_skcipher_ivsize(req)) ||
ctx->xts_key_fallback)) {
struct caam_skcipher_req_ctx *rctx = skcipher_request_ctx(req);
skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
skcipher_request_set_callback(&rctx->fallback_req,
req->base.flags,
req->base.complete,
req->base.data);
skcipher_request_set_crypt(&rctx->fallback_req, req->src,
req->dst, req->cryptlen, req->iv);
return encrypt ? crypto_skcipher_encrypt(&rctx->fallback_req) :
crypto_skcipher_decrypt(&rctx->fallback_req);
}
/* allocate extended descriptor */
edesc = skcipher_edesc_alloc(req, DESC_JOB_IO_LEN * CAAM_CMD_SZ);
if (IS_ERR(edesc))
@ -1905,6 +1952,7 @@ static struct caam_skcipher_alg driver_algs[] = {
.base = {
.cra_name = "xts(aes)",
.cra_driver_name = "xts-aes-caam",
.cra_flags = CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = AES_BLOCK_SIZE,
},
.setkey = xts_skcipher_setkey,
@ -3344,13 +3392,35 @@ static int caam_cra_init(struct crypto_skcipher *tfm)
struct caam_skcipher_alg *caam_alg =
container_of(alg, typeof(*caam_alg), skcipher);
struct caam_ctx *ctx = crypto_skcipher_ctx(tfm);
crypto_skcipher_set_reqsize(tfm, sizeof(struct caam_skcipher_req_ctx));
u32 alg_aai = caam_alg->caam.class1_alg_type & OP_ALG_AAI_MASK;
int ret = 0;
ctx->enginectx.op.do_one_request = skcipher_do_one_req;
return caam_init_common(crypto_skcipher_ctx(tfm), &caam_alg->caam,
false);
if (alg_aai == OP_ALG_AAI_XTS) {
const char *tfm_name = crypto_tfm_alg_name(&tfm->base);
struct crypto_skcipher *fallback;
fallback = crypto_alloc_skcipher(tfm_name, 0,
CRYPTO_ALG_NEED_FALLBACK);
if (IS_ERR(fallback)) {
dev_err(ctx->jrdev, "Failed to allocate %s fallback: %ld\n",
tfm_name, PTR_ERR(fallback));
return PTR_ERR(fallback);
}
ctx->fallback = fallback;
crypto_skcipher_set_reqsize(tfm, sizeof(struct caam_skcipher_req_ctx) +
crypto_skcipher_reqsize(fallback));
} else {
crypto_skcipher_set_reqsize(tfm, sizeof(struct caam_skcipher_req_ctx));
}
ret = caam_init_common(ctx, &caam_alg->caam, false);
if (ret && ctx->fallback)
crypto_free_skcipher(ctx->fallback);
return ret;
}
static int caam_aead_init(struct crypto_aead *tfm)
@ -3378,7 +3448,11 @@ static void caam_exit_common(struct caam_ctx *ctx)
static void caam_cra_exit(struct crypto_skcipher *tfm)
{
caam_exit_common(crypto_skcipher_ctx(tfm));
struct caam_ctx *ctx = crypto_skcipher_ctx(tfm);
if (ctx->fallback)
crypto_free_skcipher(ctx->fallback);
caam_exit_common(ctx);
}
static void caam_aead_exit(struct crypto_aead *tfm)
@ -3412,8 +3486,8 @@ static void caam_skcipher_alg_init(struct caam_skcipher_alg *t_alg)
alg->base.cra_module = THIS_MODULE;
alg->base.cra_priority = CAAM_CRA_PRIORITY;
alg->base.cra_ctxsize = sizeof(struct caam_ctx);
alg->base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY |
CRYPTO_ALG_KERN_DRIVER_ONLY;
alg->base.cra_flags |= (CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY |
CRYPTO_ALG_KERN_DRIVER_ONLY);
alg->init = caam_cra_init;
alg->exit = caam_cra_exit;


@ -373,6 +373,7 @@ EXPORT_SYMBOL(cnstr_shdsc_aead_encap);
* with OP_ALG_AAI_HMAC_PRECOMP.
* @ivsize: initialization vector size
* @icvsize: integrity check value (ICV) size (truncated or full)
* @geniv: whether to generate Encrypted Chain IV
* @is_rfc3686: true when ctr(aes) is wrapped by rfc3686 template
* @nonce: pointer to rfc3686 nonce
* @ctx1_iv_off: IV offset in CONTEXT1 register
@ -1550,13 +1551,14 @@ void cnstr_shdsc_xts_skcipher_encap(u32 * const desc, struct alginfo *cdata)
set_jump_tgt_here(desc, key_jump_cmd);
/*
* create sequence for loading the sector index
* Upper 8B of IV - will be used as sector index
* Lower 8B of IV - will be discarded
* create sequence for loading the sector index / 16B tweak value
* Lower 8B of IV - sector index / tweak lower half
* Upper 8B of IV - upper half of 16B tweak
*/
append_seq_load(desc, 8, LDST_SRCDST_BYTE_CONTEXT | LDST_CLASS_1_CCB |
(0x20 << LDST_OFFSET_SHIFT));
append_seq_fifo_load(desc, 8, FIFOLD_CLASS_SKIP);
append_seq_load(desc, 8, LDST_SRCDST_BYTE_CONTEXT | LDST_CLASS_1_CCB |
(0x30 << LDST_OFFSET_SHIFT));
/* Load operation */
append_operation(desc, cdata->algtype | OP_ALG_AS_INITFINAL |
@ -1565,9 +1567,11 @@ void cnstr_shdsc_xts_skcipher_encap(u32 * const desc, struct alginfo *cdata)
/* Perform operation */
skcipher_append_src_dst(desc);
/* Store upper 8B of IV */
/* Store lower 8B and upper 8B of IV */
append_seq_store(desc, 8, LDST_SRCDST_BYTE_CONTEXT | LDST_CLASS_1_CCB |
(0x20 << LDST_OFFSET_SHIFT));
append_seq_store(desc, 8, LDST_SRCDST_BYTE_CONTEXT | LDST_CLASS_1_CCB |
(0x30 << LDST_OFFSET_SHIFT));
print_hex_dump_debug("xts skcipher enc shdesc@" __stringify(__LINE__)
": ", DUMP_PREFIX_ADDRESS, 16, 4,
@ -1609,23 +1613,25 @@ void cnstr_shdsc_xts_skcipher_decap(u32 * const desc, struct alginfo *cdata)
set_jump_tgt_here(desc, key_jump_cmd);
/*
* create sequence for loading the sector index
* Upper 8B of IV - will be used as sector index
* Lower 8B of IV - will be discarded
* create sequence for loading the sector index / 16B tweak value
* Lower 8B of IV - sector index / tweak lower half
* Upper 8B of IV - upper half of 16B tweak
*/
append_seq_load(desc, 8, LDST_SRCDST_BYTE_CONTEXT | LDST_CLASS_1_CCB |
(0x20 << LDST_OFFSET_SHIFT));
append_seq_fifo_load(desc, 8, FIFOLD_CLASS_SKIP);
append_seq_load(desc, 8, LDST_SRCDST_BYTE_CONTEXT | LDST_CLASS_1_CCB |
(0x30 << LDST_OFFSET_SHIFT));
/* Load operation */
append_dec_op1(desc, cdata->algtype);
/* Perform operation */
skcipher_append_src_dst(desc);
/* Store upper 8B of IV */
/* Store lower 8B and upper 8B of IV */
append_seq_store(desc, 8, LDST_SRCDST_BYTE_CONTEXT | LDST_CLASS_1_CCB |
(0x20 << LDST_OFFSET_SHIFT));
append_seq_store(desc, 8, LDST_SRCDST_BYTE_CONTEXT | LDST_CLASS_1_CCB |
(0x30 << LDST_OFFSET_SHIFT));
print_hex_dump_debug("xts skcipher dec shdesc@" __stringify(__LINE__)
": ", DUMP_PREFIX_ADDRESS, 16, 4, desc,


@ -18,6 +18,8 @@
#include "qi.h"
#include "jr.h"
#include "caamalg_desc.h"
#include <crypto/xts.h>
#include <asm/unaligned.h>
/*
* crypto alg
@ -67,6 +69,12 @@ struct caam_ctx {
struct device *qidev;
spinlock_t lock; /* Protects multiple init of driver context */
struct caam_drv_ctx *drv_ctx[NUM_OP];
bool xts_key_fallback;
struct crypto_skcipher *fallback;
};
struct caam_skcipher_req_ctx {
struct skcipher_request fallback_req;
};
static int aead_set_sh_desc(struct crypto_aead *aead)
@ -725,11 +733,23 @@ static int xts_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
{
struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
struct device *jrdev = ctx->jrdev;
struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev->parent);
int ret = 0;
int err;
if (keylen != 2 * AES_MIN_KEY_SIZE && keylen != 2 * AES_MAX_KEY_SIZE) {
err = xts_verify_key(skcipher, key, keylen);
if (err) {
dev_dbg(jrdev, "key size mismatch\n");
return -EINVAL;
return err;
}
if (keylen != 2 * AES_KEYSIZE_128 && keylen != 2 * AES_KEYSIZE_256)
ctx->xts_key_fallback = true;
if (ctrlpriv->era <= 8 || ctx->xts_key_fallback) {
err = crypto_skcipher_setkey(ctx->fallback, key, keylen);
if (err)
return err;
}
ctx->cdata.keylen = keylen;
@ -1373,16 +1393,46 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
return edesc;
}
static inline bool xts_skcipher_ivsize(struct skcipher_request *req)
{
struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
unsigned int ivsize = crypto_skcipher_ivsize(skcipher);
return !!get_unaligned((u64 *)(req->iv + (ivsize / 2)));
}
static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
{
struct skcipher_edesc *edesc;
struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
struct caam_drv_private *ctrlpriv = dev_get_drvdata(ctx->jrdev->parent);
int ret;
if (!req->cryptlen)
/*
* XTS is expected to return an error even for input length = 0
* Note that the case input length < block size will be caught during
* HW offloading and return an error.
*/
if (!req->cryptlen && !ctx->fallback)
return 0;
if (ctx->fallback && ((ctrlpriv->era <= 8 && xts_skcipher_ivsize(req)) ||
ctx->xts_key_fallback)) {
struct caam_skcipher_req_ctx *rctx = skcipher_request_ctx(req);
skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback);
skcipher_request_set_callback(&rctx->fallback_req,
req->base.flags,
req->base.complete,
req->base.data);
skcipher_request_set_crypt(&rctx->fallback_req, req->src,
req->dst, req->cryptlen, req->iv);
return encrypt ? crypto_skcipher_encrypt(&rctx->fallback_req) :
crypto_skcipher_decrypt(&rctx->fallback_req);
}
if (unlikely(caam_congested))
return -EAGAIN;
@ -1507,6 +1557,7 @@ static struct caam_skcipher_alg driver_algs[] = {
.base = {
.cra_name = "xts(aes)",
.cra_driver_name = "xts-aes-caam-qi",
.cra_flags = CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = AES_BLOCK_SIZE,
},
.setkey = xts_skcipher_setkey,
@ -2440,9 +2491,32 @@ static int caam_cra_init(struct crypto_skcipher *tfm)
struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
struct caam_skcipher_alg *caam_alg =
container_of(alg, typeof(*caam_alg), skcipher);
struct caam_ctx *ctx = crypto_skcipher_ctx(tfm);
u32 alg_aai = caam_alg->caam.class1_alg_type & OP_ALG_AAI_MASK;
int ret = 0;
return caam_init_common(crypto_skcipher_ctx(tfm), &caam_alg->caam,
false);
if (alg_aai == OP_ALG_AAI_XTS) {
const char *tfm_name = crypto_tfm_alg_name(&tfm->base);
struct crypto_skcipher *fallback;
fallback = crypto_alloc_skcipher(tfm_name, 0,
CRYPTO_ALG_NEED_FALLBACK);
if (IS_ERR(fallback)) {
dev_err(ctx->jrdev, "Failed to allocate %s fallback: %ld\n",
tfm_name, PTR_ERR(fallback));
return PTR_ERR(fallback);
}
ctx->fallback = fallback;
crypto_skcipher_set_reqsize(tfm, sizeof(struct caam_skcipher_req_ctx) +
crypto_skcipher_reqsize(fallback));
}
ret = caam_init_common(ctx, &caam_alg->caam, false);
if (ret && ctx->fallback)
crypto_free_skcipher(ctx->fallback);
return ret;
}
static int caam_aead_init(struct crypto_aead *tfm)
@ -2468,7 +2542,11 @@ static void caam_exit_common(struct caam_ctx *ctx)
static void caam_cra_exit(struct crypto_skcipher *tfm)
{
caam_exit_common(crypto_skcipher_ctx(tfm));
struct caam_ctx *ctx = crypto_skcipher_ctx(tfm);
if (ctx->fallback)
crypto_free_skcipher(ctx->fallback);
caam_exit_common(ctx);
}
static void caam_aead_exit(struct crypto_aead *tfm)
@ -2502,8 +2580,8 @@ static void caam_skcipher_alg_init(struct caam_skcipher_alg *t_alg)
alg->base.cra_module = THIS_MODULE;
alg->base.cra_priority = CAAM_CRA_PRIORITY;
alg->base.cra_ctxsize = sizeof(struct caam_ctx);
alg->base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY |
CRYPTO_ALG_KERN_DRIVER_ONLY;
alg->base.cra_flags |= (CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY |
CRYPTO_ALG_KERN_DRIVER_ONLY);
alg->init = caam_cra_init;
alg->exit = caam_cra_exit;


@ -19,6 +19,8 @@
#include <linux/fsl/mc.h>
#include <soc/fsl/dpaa2-io.h>
#include <soc/fsl/dpaa2-fd.h>
#include <crypto/xts.h>
#include <asm/unaligned.h>
#define CAAM_CRA_PRIORITY 2000
@ -59,7 +61,7 @@ struct caam_skcipher_alg {
};
/**
* caam_ctx - per-session context
* struct caam_ctx - per-session context
* @flc: Flow Contexts array
* @key: [authentication key], encryption key
* @flc_dma: I/O virtual addresses of the Flow Contexts
@ -80,6 +82,8 @@ struct caam_ctx {
struct alginfo adata;
struct alginfo cdata;
unsigned int authsize;
bool xts_key_fallback;
struct crypto_skcipher *fallback;
};
static void *dpaa2_caam_iova_to_virt(struct dpaa2_caam_priv *priv,
@ -1054,12 +1058,24 @@ static int xts_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
{
struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
struct device *dev = ctx->dev;
struct dpaa2_caam_priv *priv = dev_get_drvdata(dev);
struct caam_flc *flc;
u32 *desc;
int err;
if (keylen != 2 * AES_MIN_KEY_SIZE && keylen != 2 * AES_MAX_KEY_SIZE) {
err = xts_verify_key(skcipher, key, keylen);
if (err) {
dev_dbg(dev, "key size mismatch\n");
return -EINVAL;
return err;
}
if (keylen != 2 * AES_KEYSIZE_128 && keylen != 2 * AES_KEYSIZE_256)
ctx->xts_key_fallback = true;
if (priv->sec_attr.era <= 8 || ctx->xts_key_fallback) {
err = crypto_skcipher_setkey(ctx->fallback, key, keylen);
if (err)
return err;
}
ctx->cdata.keylen = keylen;
@ -1443,17 +1459,44 @@ static void skcipher_decrypt_done(void *cbk_ctx, u32 status)
skcipher_request_complete(req, ecode);
}
static inline bool xts_skcipher_ivsize(struct skcipher_request *req)
{
struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
unsigned int ivsize = crypto_skcipher_ivsize(skcipher);
return !!get_unaligned((u64 *)(req->iv + (ivsize / 2)));
}
static int skcipher_encrypt(struct skcipher_request *req)
{
struct skcipher_edesc *edesc;
struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
struct caam_request *caam_req = skcipher_request_ctx(req);
struct dpaa2_caam_priv *priv = dev_get_drvdata(ctx->dev);
int ret;
if (!req->cryptlen)
/*
* XTS is expected to return an error even for input length = 0
* Note that the case input length < block size will be caught during
* HW offloading and return an error.
*/
if (!req->cryptlen && !ctx->fallback)
return 0;
if (ctx->fallback && ((priv->sec_attr.era <= 8 && xts_skcipher_ivsize(req)) ||
ctx->xts_key_fallback)) {
skcipher_request_set_tfm(&caam_req->fallback_req, ctx->fallback);
skcipher_request_set_callback(&caam_req->fallback_req,
req->base.flags,
req->base.complete,
req->base.data);
skcipher_request_set_crypt(&caam_req->fallback_req, req->src,
req->dst, req->cryptlen, req->iv);
return crypto_skcipher_encrypt(&caam_req->fallback_req);
}
/* allocate extended descriptor */
edesc = skcipher_edesc_alloc(req);
if (IS_ERR(edesc))
@ -1480,10 +1523,30 @@ static int skcipher_decrypt(struct skcipher_request *req)
struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
struct caam_request *caam_req = skcipher_request_ctx(req);
struct dpaa2_caam_priv *priv = dev_get_drvdata(ctx->dev);
int ret;
if (!req->cryptlen)
/*
* XTS is expected to return an error even for input length = 0
* Note that the case input length < block size will be caught during
* HW offloading and return an error.
*/
if (!req->cryptlen && !ctx->fallback)
return 0;
if (ctx->fallback && ((priv->sec_attr.era <= 8 && xts_skcipher_ivsize(req)) ||
ctx->xts_key_fallback)) {
skcipher_request_set_tfm(&caam_req->fallback_req, ctx->fallback);
skcipher_request_set_callback(&caam_req->fallback_req,
req->base.flags,
req->base.complete,
req->base.data);
skcipher_request_set_crypt(&caam_req->fallback_req, req->src,
req->dst, req->cryptlen, req->iv);
return crypto_skcipher_decrypt(&caam_req->fallback_req);
}
/* allocate extended descriptor */
edesc = skcipher_edesc_alloc(req);
if (IS_ERR(edesc))
@ -1537,9 +1600,34 @@ static int caam_cra_init_skcipher(struct crypto_skcipher *tfm)
struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
struct caam_skcipher_alg *caam_alg =
container_of(alg, typeof(*caam_alg), skcipher);
struct caam_ctx *ctx = crypto_skcipher_ctx(tfm);
u32 alg_aai = caam_alg->caam.class1_alg_type & OP_ALG_AAI_MASK;
int ret = 0;
crypto_skcipher_set_reqsize(tfm, sizeof(struct caam_request));
return caam_cra_init(crypto_skcipher_ctx(tfm), &caam_alg->caam, false);
if (alg_aai == OP_ALG_AAI_XTS) {
const char *tfm_name = crypto_tfm_alg_name(&tfm->base);
struct crypto_skcipher *fallback;
fallback = crypto_alloc_skcipher(tfm_name, 0,
CRYPTO_ALG_NEED_FALLBACK);
if (IS_ERR(fallback)) {
dev_err(ctx->dev, "Failed to allocate %s fallback: %ld\n",
tfm_name, PTR_ERR(fallback));
return PTR_ERR(fallback);
}
ctx->fallback = fallback;
crypto_skcipher_set_reqsize(tfm, sizeof(struct caam_request) +
crypto_skcipher_reqsize(fallback));
} else {
crypto_skcipher_set_reqsize(tfm, sizeof(struct caam_request));
}
ret = caam_cra_init(ctx, &caam_alg->caam, false);
if (ret && ctx->fallback)
crypto_free_skcipher(ctx->fallback);
return ret;
}
static int caam_cra_init_aead(struct crypto_aead *tfm)
@ -1562,7 +1650,11 @@ static void caam_exit_common(struct caam_ctx *ctx)
static void caam_cra_exit(struct crypto_skcipher *tfm)
{
caam_exit_common(crypto_skcipher_ctx(tfm));
struct caam_ctx *ctx = crypto_skcipher_ctx(tfm);
if (ctx->fallback)
crypto_free_skcipher(ctx->fallback);
caam_exit_common(ctx);
}
static void caam_cra_exit_aead(struct crypto_aead *tfm)
@ -1665,6 +1757,7 @@ static struct caam_skcipher_alg driver_algs[] = {
.base = {
.cra_name = "xts(aes)",
.cra_driver_name = "xts-aes-caam-qi2",
.cra_flags = CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = AES_BLOCK_SIZE,
},
.setkey = xts_skcipher_setkey,
@ -2912,8 +3005,8 @@ static void caam_skcipher_alg_init(struct caam_skcipher_alg *t_alg)
alg->base.cra_module = THIS_MODULE;
alg->base.cra_priority = CAAM_CRA_PRIORITY;
alg->base.cra_ctxsize = sizeof(struct caam_ctx);
alg->base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY |
CRYPTO_ALG_KERN_DRIVER_ONLY;
alg->base.cra_flags |= (CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY |
CRYPTO_ALG_KERN_DRIVER_ONLY);
alg->init = caam_cra_init_skcipher;
alg->exit = caam_cra_exit;
@ -2951,7 +3044,7 @@ enum hash_optype {
};
/**
* caam_hash_ctx - ahash per-session context
* struct caam_hash_ctx - ahash per-session context
* @flc: Flow Contexts array
* @key: authentication key
* @flc_dma: I/O virtual addresses of the Flow Contexts
@ -5115,8 +5208,7 @@ static int dpaa2_caam_probe(struct fsl_mc_device *dpseci_dev)
/* DPIO */
err = dpaa2_dpseci_dpio_setup(priv);
if (err) {
if (err != -EPROBE_DEFER)
dev_err(dev, "dpaa2_dpseci_dpio_setup() failed\n");
dev_err_probe(dev, err, "dpaa2_dpseci_dpio_setup() failed\n");
goto err_dpio_setup;
}
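
dev_err_probe() replaces the open-coded check for -EPROBE_DEFER: it logs at debug level (and records the deferral reason) when the error is -EPROBE_DEFER, logs at error level otherwise, and returns the error code. A typical probe-path usage sketch, not code from this driver:

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
	struct clk *clk = devm_clk_get(&pdev->dev, NULL);

	/* one call instead of "if (err != -EPROBE_DEFER) dev_err(...); return err;" */
	if (IS_ERR(clk))
		return dev_err_probe(&pdev->dev, PTR_ERR(clk),
				     "failed to get clock\n");

	return 0;
}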


@ -13,6 +13,7 @@
#include <linux/netdevice.h>
#include "dpseci.h"
#include "desc_constr.h"
#include <crypto/skcipher.h>
#define DPAA2_CAAM_STORE_SIZE 16
/* NAPI weight *must* be a multiple of the store size. */
@ -186,6 +187,7 @@ struct caam_request {
void (*cbk)(void *ctx, u32 err);
void *ctx;
void *edesc;
struct skcipher_request fallback_req;
};
/**
