commit 1a076689cd
It has been observed in zero-loss benchmarks that, when a slow traffic rate is being tested, the IRQ timer coalescing parameter was set too high and the Ethernet controller would start dropping packets: the job ring back half would not be executed in time before the Ethernet controller filled its buffers, significantly reducing the zero-loss performance figures.

Empirical testing has shown that the best zero-loss performance is achieved when IRQ coalescing is set to minimum values and/or turned off, since apparently the job ring driver already implements an adequately-performing general-purpose IRQ mitigation strategy in software.

Whilst we could go with minimal count (2-8) and timing (192-256) settings, we prefer turning h/w coalescing off altogether, both to minimize setkey latency (due to split key generation) and for consistent cross-SoC performance (the SEC vs. core clock ratio varies).

Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
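The diff itself is not reproduced here. As a rough illustration of the approach the message describes, the job ring's coalescing defaults could be gated behind a Kconfig option so that, unless a user opts in, the count and timing thresholds resolve to zero and the coalescing-enable bit is never set. This is a minimal sketch only; the Kconfig symbols and macro names below are assumptions modeled on the existing intern.h/regs.h conventions (JRCFG_ICEN being the interrupt-coalescing enable bit defined in regs.h), not the verbatim change.

```c
/*
 * Sketch of Kconfig-gated job ring IRQ coalescing defaults.
 * Symbol and macro names are illustrative assumptions.
 */
#ifdef CONFIG_CRYPTO_DEV_FSL_CAAM_INTC
/* opt-in: enable h/w coalescing with user-selected thresholds */
#define JOBR_INTC		JRCFG_ICEN
#define JOBR_INTC_TIME_THLD	CONFIG_CRYPTO_DEV_FSL_CAAM_INTC_TIME_THLD
#define JOBR_INTC_COUNT_THLD	CONFIG_CRYPTO_DEV_FSL_CAAM_INTC_COUNT_THLD
#else
/* default: hardware IRQ coalescing fully disabled */
#define JOBR_INTC		0
#define JOBR_INTC_TIME_THLD	0
#define JOBR_INTC_COUNT_THLD	0
#endif
```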
drivers/crypto/caam/
caamalg.c
caamhash.c
caamrng.c
compat.h
ctrl.c
desc_constr.h
desc.h
error.c
error.h
intern.h
jr.c
jr.h
Kconfig
key_gen.c
key_gen.h
Makefile
pdb.h
regs.h
sg_sw_sec4.h