linux/include/trace
Jason A. Donenfeld 9c07f57869 random: simplify entropy debiting
Our pool is 256 bits, and we only ever use all of it or don't use it at
all, which is decided by whether or not it has at least 128 bits in it.
So we can drastically simplify the accounting and cmpxchg loop to do
exactly this.  While we're at it, we move the minimum bit size into a
constant so it can be shared between the two places where it matters.
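
Below is a minimal user-space sketch of that all-or-nothing scheme, not the kernel's actual code: C11 atomics stand in for the kernel's cmpxchg(), and the constant and helper names here are illustrative only.

#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative constants mirroring the description above. */
#define POOL_BITS     256                 /* the pool holds 256 bits */
#define POOL_MIN_BITS (POOL_BITS / 2)     /* act only with >= 128 bits */

static _Atomic unsigned int entropy_count;

/* Credit new entropy, capped at the pool size, via a compare-exchange loop. */
static void credit_entropy_bits(unsigned int nbits)
{
    unsigned int orig = atomic_load(&entropy_count);
    unsigned int new;

    do {
        new = orig + nbits;
        if (new > POOL_BITS)
            new = POOL_BITS;
    } while (!atomic_compare_exchange_weak(&entropy_count, &orig, new));

    if (new >= POOL_MIN_BITS) {
        /* The other place the shared threshold matters: once the pool
         * reaches POOL_MIN_BITS, (re)seeding could be triggered here. */
    }
}

/*
 * Debit: either the pool holds at least POOL_MIN_BITS and we take all of
 * it (the count drops to 0), or we take nothing.
 */
static bool drain_entropy(void)
{
    unsigned int orig = atomic_load(&entropy_count);

    do {
        if (orig < POOL_MIN_BITS)
            return false;                 /* not enough entropy yet */
    } while (!atomic_compare_exchange_weak(&entropy_count, &orig, 0));

    return true;                          /* caller may now extract and reseed */
}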

The reason we want any of this is for the case in which an attacker has
compromised the current state, and then bruteforces small amounts of
entropy added to it. By demanding a particular minimum amount of entropy
be present before reseeding, we make that bruteforcing difficult.
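
As a rough illustration of the numbers involved: if the pool were used after only, say, 8 fresh bits had been mixed in, an attacker who already knows the old state could enumerate all 2^8 = 256 candidate inputs and check each against observed output; waiting for 128 bits pushes that search to roughly 2^128 candidates.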

Note that this rationale no longer includes anything about /dev/random
blocking at the right moment, since /dev/random no longer blocks (except
for at ~boot), but rather uses the crng. In a former life, /dev/random
was different and therefore required a more nuanced account(), but this
is no longer the case.

Behaviorally, nothing changes here. This is just a simplification of
the code.

Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-02-21 16:48:06 +01:00
events/         random: simplify entropy debiting                               2022-02-21 16:48:06 +01:00
bpf_probe.h     tracing: Add '__rel_loc' using trace event macros               2021-12-06 15:37:21 -05:00
define_trace.h  tracepoint: Optimize using static_call()                        2020-09-01 09:58:06 +02:00
perf.h          tracing/perf: Avoid -Warray-bounds warning for __rel_loc macro  2022-01-27 19:15:45 -05:00
syscall.h       tracepoints: Migrate to use SYSCALL_WORK flag                   2020-11-16 21:53:15 +01:00
trace_events.h  tracing/perf: Avoid -Warray-bounds warning for __rel_loc macro  2022-01-27 19:15:45 -05:00