drivers/char/mem.c: avoid OOM lockup during large reads from /dev/zero
While running 20 parallel instances of dd as follows:
#!/bin/bash
for i in `seq 1 20`; do
        dd if=/dev/zero of=/export/hda3/dd_$i bs=1073741824 count=1 &
done
wait
on a 16G machine, we noticed that rather than just killing the processes,
the entire kernel went down. Stracing dd reveals that it first does an
mmap2, which creates 1GB worth of zero-page mappings. It then performs a
read from /dev/zero into those pages, and finally performs a write.
The machine died during the reads. Looking at the code, it was noticed
that /dev/zero's read operation had been changed by commit 557ed1fa26
("remove ZERO_PAGE") from giving out zero page mappings to actually
zeroing the page.
The zeroing of the pages causes physical pages to be allocated to the
process. But when the process has exhausted all the memory it can, the
kernel cannot kill it, because it is still in kernel mode allocating more
memory. Consequently, the kernel eventually crashes.
To fix this, I propose that when a fatal signal is pending during a
/dev/zero read operation, we simply return and let the user process die.
Signed-off-by: Salman Qazi <sqazi@google.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
[ Modified error return and comment trivially. - Linus]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit 730c586ad5
parent 2cb7878a3a
@@ -694,6 +694,9 @@ static ssize_t read_zero(struct file * file, char __user * buf,
 		written += chunk - unwritten;
 		if (unwritten)
 			break;
+		/* Consider changing this to just 'signal_pending()' with lots of testing */
+		if (fatal_signal_pending(current))
+			return written ? written : -EINTR;
 		buf += chunk;
 		count -= chunk;
 		cond_resched();
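For context, the hunk above lands in the middle of the chunked zeroing loop in read_zero() in drivers/char/mem.c. The sketch below shows roughly how the patched function reads as a whole; the added three lines are taken from the hunk, while the surrounding body (the access_ok() check, PAGE_SIZE chunking, and the clear_user() call that actually faults in the pages) is an approximation of kernels of that era and may differ in detail from the exact tree at this commit.

/*
 * Approximate shape of read_zero() after this patch (a sketch, not a
 * verbatim copy of the tree at this commit): zero the user buffer one
 * page-sized chunk at a time, and bail out if a fatal signal arrives
 * so SIGKILL (including one sent by the OOM killer) takes effect.
 */
static ssize_t read_zero(struct file *file, char __user *buf,
			 size_t count, loff_t *ppos)
{
	size_t written = 0;

	if (!count)
		return 0;

	if (!access_ok(VERIFY_WRITE, buf, count))
		return -EFAULT;

	while (count) {
		unsigned long unwritten;
		size_t chunk = count;

		if (chunk > PAGE_SIZE)
			chunk = PAGE_SIZE;	/* bound per-iteration latency */

		/* Writing zeroes faults in, and so allocates, the user pages. */
		unwritten = clear_user(buf, chunk);
		written += chunk - unwritten;
		if (unwritten)
			break;

		/* New with this patch: let a killed task die promptly. */
		if (fatal_signal_pending(current))
			return written ? written : -EINTR;

		buf += chunk;
		count -= chunk;
		cond_resched();
	}
	return written ? written : -EFAULT;
}

Using fatal_signal_pending() rather than signal_pending() means ordinary signals still do not interrupt a /dev/zero read, while SIGKILL, which is what the OOM killer delivers, can now terminate the task mid-read; the in-diff comment notes that relaxing this to signal_pending() would need substantial testing.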