jffs2_write_end() is sometimes passing back a "written" length greater
than the length we passed into it, leading to a BUG at mm/filemap.c:1749
when used with unionfs.
It happens because we actually write more than was requested, to reduce
log fragmentation. These "longer" writes are fine, but they shouldn't
get propagated back to the vm/vfs.
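A minimal sketch of the intent (illustrative only; the actual variable
names in jffs2_write_end() may differ):

	/* Sketch: clamp the length reported back to the VFS so it never
	 * exceeds what the caller asked us to write, even though we may
	 * have written a longer node to reduce log fragmentation. */
	if (writtenlen > len)
		writtenlen = len;
	return writtenlen;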
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
[In commit 9ed437c50d we fixed a problem
with standard permissions on newly-created inodes, when POSIX ACLs are
enabled. This cleans it up...]
This patch separates jffs2_init_acl() into two parts.
The first is jffs2_init_acl_pre(), called from jffs2_new_inode().
It computes the ACL-derived inode->i_mode bits and allocates the
in-memory ACL objects associated with the new inode, just before the
inode metadata is written to the medium.
The other is jffs2_init_acl_post(), called from jffs2_symlink(),
jffs2_mkdir(), jffs2_mknod() and jffs2_do_create().
It actually writes the in-memory ACL objects to the medium once the
metadata has been written successfully.
In the current implementation, we have to write the same inode
metadata twice when inode->i_mode is updated by the default ACL.
However, we can avoid that by applying the updated i_mode before the
metadata is written for the first time, which is what
jffs2_init_acl_pre() does.
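As a rough sketch (simplified; the real functions take more arguments
and do full error handling), the call pattern becomes:

	/* In jffs2_new_inode(): derive the ACL-adjusted i_mode and cache
	 * the in-memory ACL objects before the inode metadata is first
	 * written. */
	ret = jffs2_init_acl_pre(dir_i, inode, &mode);

	/* ... write the inode metadata, already carrying the corrected
	 * mode ... */

	/* In jffs2_symlink()/jffs2_mkdir()/jffs2_mknod()/jffs2_do_create():
	 * flush the cached ACLs to the medium once the metadata write
	 * has succeeded. */
	ret = jffs2_init_acl_post(inode);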
Signed-off-by: KaiGai Kohei <kaigai@ak.jp.nec.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
The task_struct->pid member is going to be deprecated, so start
using the helpers (task_pid_nr/task_pid_vnr/task_pid_nr_ns) in
the kernel.
The first thing to start with is the pid printed to dmesg - in
this case we may safely use task_pid_nr(). Besides, printks account
for more (much more) than half of all the explicit pid usage.
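A typical conversion (a generic example, not a specific hunk from this
patch) looks like:

	/* Before: direct use of the soon-to-be-deprecated field. */
	printk(KERN_DEBUG "%s: pid %d\n", __func__, current->pid);

	/* After: print the global pid number, which is what belongs
	 * in dmesg. */
	printk(KERN_DEBUG "%s: pid %d\n", __func__, task_pid_nr(current));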
[akpm@linux-foundation.org: git-drm went and changed lots of stuff]
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Cc: Dave Airlie <airlied@linux.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Slab constructors currently have a flags parameter that is never used,
and the order of the arguments is the opposite of the other slab
functions: the object pointer is placed before the kmem_cache pointer.
Convert
ctor(void *object, struct kmem_cache *s, unsigned long flags)
to
ctor(struct kmem_cache *s, void *object)
throughout the kernel
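For illustration, a converted constructor and its registration might
look like this (struct foo, foo_ctor and foo_cache are made-up names,
not taken from the patch):

	/* New prototype: the kmem_cache pointer comes first, the object
	 * second, and the unused flags argument is gone. */
	static void foo_ctor(struct kmem_cache *cachep, void *obj)
	{
		struct foo *f = obj;

		memset(f, 0, sizeof(*f));
	}

	/* The kmem_cache_create() call is unchanged apart from the
	 * constructor's new prototype: */
	foo_cache = kmem_cache_create("foo", sizeof(struct foo), 0,
				      SLAB_HWCACHE_ALIGN, foo_ctor);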
[akpm@linux-foundation.org: coupla fixes]
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
In three places: summary scan, normal scan, REF_PRISTINE GC.
Just truncate at the NUL, since that was the correct thing to do in the
only case where this (inexplicable) breakage has been seen.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
In OLPC trac #4184 we found a case where a corrupted node didn't
actually get obsoleted when we tried to garbage-collect it. So we wrote
out many million copies of it, in repeated attempts to obsolete it,
until the flash became full. Don't Do That.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Instead of matching resv_blocks_gcmerge, which is only about 3, match
resv_blocks_gctrigger, which includes a proportion of the total
device size.
These ought to become tunable from userspace, at some point.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
With huge amounts of free space, we weren't bothering to GC for a
while, and pathological numbers of obsolete nodes were accumulating,
seriously affecting performance on NAND flash (OLPC trac #3978).
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Fix a couple of instances in JFFS2 where the unpoint() routine is
being called with the wrong length in cases where the point() routine
truncated a request.
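The pattern being fixed is roughly the following (a sketch; the actual
call sites and variable names in JFFS2 differ):

	/* point() may map less than requested and reports the mapped
	 * length in retlen ... */
	ret = c->mtd->point(c->mtd, ofs, len, &retlen, &buffer);
	...
	/* ... so when the request was truncated, the matching unpoint()
	 * must be passed retlen rather than the originally requested
	 * len. */
	c->mtd->unpoint(c->mtd, buffer, ofs, retlen);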
Signed-off-by: Andy Lowe <alowe@mvista.com>
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
I've bisected the deadlock when many small appends are done on jffs2 down to
this commit:
commit 6fe6900e1e
Author: Nick Piggin <npiggin@suse.de>
Date: Sun May 6 14:49:04 2007 -0700
mm: make read_cache_page synchronous
Ensure pages are uptodate after returning from read_cache_page, which allows
us to cut out most of the filesystem-internal PageUptodate calls.
I didn't have a great look down the call chains, but this appears to fix 7
possible use-before-uptodate bugs in hfs, 2 in hfsplus, 1 in jfs, a few in
ecryptfs, 1 in jffs2, and a possible case of cleared data being overwritten
with readpage in block2mtd. All depending on whether the filler is async
and/or can return with a !uptodate page.
It introduced a wait in read_cache_page, as well as a new
read_cache_page_async function (equivalent to the old read_cache_page)
that initially had no callers.
Switching jffs2_gc_fetch_page to read_cache_page_async for the old
behavior makes the deadlocks go away, but maybe reintroduces the
use-before-uptodate problem? I don't understand the mm/fs interaction
well enough to say.
[It's fine. dwmw2.]
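The switch itself is small; in jffs2_gc_fetch_page() it amounts to
something like this (a sketch, assuming the filler and arguments stay
as before):

	/* Use the async variant to keep the old non-waiting behaviour;
	 * the GC code copes with a not-yet-uptodate page itself, and
	 * the synchronous read_cache_page() is what deadlocked here. */
	pg = read_cache_page_async(inode->i_mapping,
				   offset >> PAGE_CACHE_SHIFT,
				   (void *)jffs2_do_readpage_unlock, inode);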
Signed-off-by: Jason Lunz <lunz@falooley.org>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
fs/jffs2/erase.c: In function 'jffs2_block_check_erase':
fs/jffs2/erase.c:355: warning: format '%08x' expects type 'unsigned int', but argument 3 has type 'long unsigned int'
and
fs/jffs2/erase.c: In function 'jffs2_erase_pending_blocks':
fs/jffs2/erase.c:404: warning: 'bad_offset' may be used uninitialized in this function
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
When POSIX ACL support was enabled, we weren't writing correct
legacy modes to the medium on inode creation, or when the ACL was set.
This meant that the permissions would be incorrect after the file system
was remounted.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Commit a491486a20 introduced a locking
problem in JFFS2 -- we up() the alloc_sem when we weren't previously
holding it. This leads to all kinds of fun behaviour later.
There was a _reason_ for the
if (1 /* alternative path needs testing */ ||
which the above-mentioned commit removed :)
Discovered and debugged by Giulio Fedel <giulio.fedel@andorsystems.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Commit a7a6ace140 revamped the OOB
handling but accidentally switched to 12-byte cleanmarkers, which is
incompatible with what 'flash_eraseall -j' will do. So using
flash_eraseall -j and then trying to mount the 'empty' flash will fail,
because the cleanmarkers aren't recognised.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Debugging the hardware problems in OLPC trac #1905 would be a whole lot
easier if the correct node offsets were printed for the offending nodes.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
The try_to_freeze() call was in the wrong place; we need it in the
signal-pending loop now that a pending freeze also makes
signal_pending() return true.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
jffs2_add_physical_node_ref() should never really return an error -- it's
an internal debugging check which triggered. We really need to work out
why and stop it happening. But in the meantime, let's make the failure
mode a little less nasty.
Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Slab destructors were no longer supported after Christoph's
c59def9f22 change. They've been
BUGs for both slab and slub, and slob never supported them
either.
This rips out support for the dtor pointer from kmem_cache_create()
completely and fixes up every single callsite in the kernel (there were
about 224, not including the slab allocator definitions themselves,
or the documentation references).
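For a caller the change is mechanical; a made-up example:

	/* Before: the final argument was the destructor, almost always
	 * NULL anyway. */
	cachep = kmem_cache_create("foo", sizeof(struct foo), 0,
				   SLAB_HWCACHE_ALIGN, foo_ctor, NULL);

	/* After: kmem_cache_create() simply loses the dtor argument. */
	cachep = kmem_cache_create("foo", sizeof(struct foo), 0,
				   SLAB_HWCACHE_ALIGN, foo_ctor);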
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Introduce an is_owner_or_cap() macro in fs.h, and convert the relevant
users over to it. This is done because we want to avoid future bugs
where we check only the effective fsuid of the current task against a
file's owning uid, without simultaneously checking for CAP_FOWNER as
well, thus violating its semantics.
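The macro itself boils down to roughly the following (paraphrasing the
semantics described above):

	/* True if the caller's effective fsuid owns the inode, or if it
	 * holds CAP_FOWNER and may therefore act as the owner. */
	#define is_owner_or_cap(inode) \
		((current->fsuid == (inode)->i_uid) || capable(CAP_FOWNER))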
[ XFS uses special macros and structures, and in general looked ...
untouchable, so we leave it alone -- but it has been looked over. ]
The (current->fsuid != inode->i_uid) check in generic_permission() and
exec_permission_lite() is left alone, because those operations are
covered by CAP_DAC_OVERRIDE and CAP_DAC_READ_SEARCH. Similarly operations
falling under the purview of CAP_CHOWN and CAP_LEASE are also left alone.
Signed-off-by: Satyam Sharma <ssatyam@cse.iitk.ac.in>
Cc: Al Viro <viro@ftp.linux.org.uk>
Acked-by: Serge E. Hallyn <serge@hallyn.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Currently, the freezer treats all tasks as freezable, except for the kernel
threads that explicitly set the PF_NOFREEZE flag for themselves. This
approach is problematic, since it requires every kernel thread to either
set PF_NOFREEZE explicitly, or call try_to_freeze(), even if it doesn't
care about the freezing of tasks at all.
It seems better to only require the kernel threads that want to or need to
be frozen to use some freezer-related code and to remove any
freezer-related code from the other (nonfreezable) kernel threads, which is
done in this patch.
The patch causes all kernel threads to be nonfreezable by default (ie. to
have PF_NOFREEZE set by default) and introduces the set_freezable()
function that should be called by the freezable kernel threads in order to
unset PF_NOFREEZE. It also makes all of the currently freezable kernel
threads call set_freezable(), so it shouldn't cause any (intentional)
change of behaviour to appear. Additionally, it updates documentation to
describe the freezing of tasks more accurately.
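A freezable kernel thread therefore now opts in explicitly; a minimal
sketch (the thread itself is made up):

	static int my_kthread(void *unused)
	{
		set_freezable();	/* clear PF_NOFREEZE: opt in to freezing */

		while (!kthread_should_stop()) {
			try_to_freeze();	/* park here while tasks are frozen */

			/* ... do the thread's real work ... */
		}
		return 0;
	}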
[akpm@linux-foundation.org: build fixes]
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Nigel Cunningham <nigel@nigel.suspend2.net>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
fs/jffs2/compr.c: In function ‘jffs2_compressors_init’:
fs/jffs2/compr.c:320: warning: implicit declaration of function ‘jffs2_lzo_init’
fs/jffs2/compr.c: In function ‘jffs2_compressors_exit’:
fs/jffs2/compr.c:346: warning: implicit declaration of function ‘jffs2_lzo_exit’
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Add a "favourlzo" compression mode to jffs2 which tries to
optimise by size but gives lzo an advantage when comparing sizes.
This means the faster lzo algorithm can be preferred when there
isn't much difference in compressed size (the exact threshold can
be changed).
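The selection logic amounts to something like the sketch below;
FAVOUR_LZO_PERCENT stands in for the tunable threshold and is not
necessarily the real symbol name:

	/* Treat LZO output as if it were only FAVOUR_LZO_PERCENT percent
	 * of its real size before comparing, so the faster algorithm
	 * wins unless another compressor is substantially smaller. */
	if (this->compr == JFFS2_COMPR_LZO)
		this_size = (this_size * FAVOUR_LZO_PERCENT) / 100;
	if (this_size < best_size) {
		best_size = this_size;
		best = this;
	}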
Signed-off-by: Richard Purdie <rpurdie@openedhand.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Add LZO1X compression/decompression support to jffs2.
LZO's interface doesn't entirely match that required by jffs2, so a
buffer and a memcpy are unavoidable.
Signed-off-by: Richard Purdie <rpurdie@openedhand.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
We've seen some evil corruption issues, where the corruption seems to be
introduced after the JFFS2 crc32 is calculated but before the NAND
controller calculates the ECC. So it's in RAM or in the PCI DMA
transfer; not on the flash. Attempt to catch it earlier by (optionally)
reading back from the flash immediately after writing it.
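In outline, the optional check is just a read-back-and-compare straight
after the write; a sketch (the verify flag and buffer names here are
hypothetical):

	ret = c->mtd->write(c->mtd, ofs, len, &retlen, buf);
	if (!ret && verify_writes) {
		/* Read the node straight back and compare it against
		 * what we meant to write, catching corruption that
		 * happens after the CRC is computed but before the data
		 * reaches the flash. */
		c->mtd->read(c->mtd, ofs, len, &retlen, verify_buf);
		if (memcmp(buf, verify_buf, len))
			printk(KERN_WARNING "jffs2: write verify failed at %#08x\n",
			       (unsigned int)ofs);
	}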
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
They can use generic_file_splice_read() instead. Since sys_sendfile() now
prefers that, there should be no change in behaviour.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Debugging the hardware problems in OLPC trac #1905 would be a whole lot
easier if the correct node offsets were printed for the offending nodes.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
We should have stopped returning 1 from read_dnode() to indicate
failure. We can just mark the damn thing obsolete immediately. But I
missed a case where we don't.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
We should have stopped returning 1 from read_dnode() to indicate
failure. We can just mark the damn thing obsolete immediately. But I
missed a case where we don't.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
The try_to_freeze() call was in the wrong place; we need it in the
signal-pending loop now that a pending freeze also makes
signal_pending() return true.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
With the current design, erase_free_sem is locked every time a flash
block is being erased. For NOR flashes, ~1 second is needed to erase a
single flash block. In the worst case, erase_free_sem may be locked
for a couple of seconds when a number of blocks are being erased
(e.g. after a large file was removed). While erase_free_sem is locked,
all read/write operations for the given JFFS2 partition are blocked
too - in effect, from time to time access to the JFFS2 partition is
blocked for several seconds. This fix makes the critical section in
the flash erasing procedure shorter - now erase_free_sem is locked
around the erase_completion_lock spinlock only.
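Schematically, the erase path goes from holding erase_free_sem across
the whole (slow) flash erase to taking it only around the short list
manipulation under the spinlock; a sketch, not the literal diff:

	/* Do the time-consuming erase of the block with erase_free_sem
	 * NOT held, so readers and writers of the partition aren't
	 * blocked for the ~1s the erase takes on NOR flash. */
	jffs2_erase_block(c, jeb);

	/* Take erase_free_sem only for the short critical section
	 * protected by erase_completion_lock. */
	down(&c->erase_free_sem);
	spin_lock(&c->erase_completion_lock);
	list_move_tail(&jeb->list, &c->erase_complete_list);
	spin_unlock(&c->erase_completion_lock);
	up(&c->erase_free_sem);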
Originally from Radoslaw Bisewski
Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
jffs2_add_physical_node_ref() should never really return an error -- it's
an internal debugging check which triggered. We really need to work out
why and stop it happening. But in the meantime, let's make the failure
mode a little less nasty.
Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Faster and won't trash the D-cache.
Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
When pdflush is erasing lots of sectors, drivers calling
mtd->sync will hang until all blocks are erased. Be nicer.
Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
* git://git.infradead.org/mtd-2.6:
[JFFS2] Fix obsoletion of metadata nodes in jffs2_add_tn_to_tree()
[MTD] Fix error checking after get_mtd_device() in get_sb_mtd functions
[JFFS2] Fix buffer length calculations in jffs2_get_inode_nodes()
[JFFS2] Fix potential memory leak of dead xattrs on unmount.
[JFFS2] Fix BUG() caused by failing to discard xattrs on deleted files.
[MTD] generalise the handling of MTD-specific superblocks
[MTD] [MAPS] don't force uclinux mtd map to be root dev
We should keep the mdata node with the higher version number, not just
the one we happen to find last. Doh.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
If we have already read enough bytes, no need to call read_more().
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
An xattr_datum which ends up orphaned should be freed by the GC
thread. But if we umount before the GC thread is finished, or if we
mount read-only and the GC thread never runs, they might never be
freed. Clean them up during unmount, if there are any left.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>