Commit Graph

62779 Commits

Author SHA1 Message Date
Al Viro
5a2d3edd8d do_last(): rejoin the common path earlier in FMODE_{OPENED,CREATED} case
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:09:12 -04:00
Al Viro
59e96e6583 do_last(): don't bother with keeping got_write in FMODE_OPENED case
it's easier to drop it right after lookup_open() and regain it if
needed (i.e. if we will need to truncate).  On the non-FMODE_OPENED
path we do that anyway.  In case of FMODE_CREATED we won't be
needing it.  And it's easier to prove correctness that way,
especially since the initial failure to get write access is not
always fatal; proving that we'll never end up truncating in that
case is rather convoluted.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:09:12 -04:00
Al Viro
3ad5615a07 do_last(): merge the may_open() calls
have the FMODE_OPENED case rejoin the main path at an earlier point

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:09:12 -04:00
Al Viro
7be219b4dc atomic_open(): lift the call of may_open() into do_last()
there we'll be able to merge it with its counterparts in other
cases, and there's no reason to do it before the parent has
been unlocked

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:09:12 -04:00
Al Viro
6fb968cdf9 atomic_open(): return the right dentry in FMODE_OPENED case
->atomic_open() might have used a different alias than the one we'd
passed to it; in "not opened" case we take care of that, in "opened"
one we don't.  Currently we don't care downstream of "opened" case
which alias to return; however, that will change shortly when we
get to unifying may_open() calls.

It's not hard to get right in all cases, anyway.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:09:12 -04:00
Al Viro
9deed3ebca new helper: traverse_mounts()
common guts of follow_down() and follow_managed() taken to a new
helper - traverse_mounts().  The remnants of follow_managed()
are folded into its sole remaining caller (handle_mounts()).
Calling conventions of handle_mounts() slightly sanitized -
instead of the weird "1 for success, -E... for failure" that used
to be imposed by the calling conventions of walk_component() et al.
we can use the normal "0 for success, -E... for failure".

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:09:12 -04:00
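
For illustration, a minimal user-space sketch of that calling-convention change; the
functions below are stand-ins, not the real handle_mounts() or its callers:

    #include <errno.h>
    #include <stdio.h>

    /* old convention: 1 on success, -E... on failure */
    static int old_style(int ok)
    {
        return ok ? 1 : -ENOENT;
    }

    /* new convention: 0 on success, -E... on failure */
    static int new_style(int ok)
    {
        return ok ? 0 : -ENOENT;
    }

    int main(void)
    {
        int err;

        err = old_style(1);
        if (err < 0)            /* callers must special-case the positive value */
            return 1;

        err = new_style(1);
        if (err)                /* the idiomatic "if (err) bail out" now works */
            return 1;

        printf("both calls succeeded\n");
        return 0;
    }
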
Al Viro
ea936aeb3e massage __follow_mount_rcu() a bit
make the loop more similar to that in follow_managed(), with
explicit tracking of flags, etc.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:09:12 -04:00
Al Viro
c108837e06 namei: have link_path_walk() maintain LOOKUP_PARENT
set on entry, clear when we get to the last component.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:09:12 -04:00
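
A tiny stand-alone sketch of that rule - the flag is set before the walk and dropped
once the last component is reached (the flag value and the tokenizing below are
illustrative only):

    #include <stdio.h>
    #include <string.h>

    #define LOOKUP_PARENT 0x0010            /* value is illustrative */

    int main(void)
    {
        unsigned flags = LOOKUP_PARENT;     /* set on entry */
        char path[] = "a/b/c";
        char *p = strtok(path, "/");

        while (p) {
            char *next = strtok(NULL, "/");
            if (!next)
                flags &= ~LOOKUP_PARENT;    /* cleared for the last component */
            printf("component %-2s parent=%d\n", p, !!(flags & LOOKUP_PARENT));
            p = next;
        }
        return 0;
    }
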
Al Viro
d8d4611a4f link_path_walk(): simplify stack handling
We use nd->stack to store two things: pinning down the symlinks
we are resolving and resuming the name traversal when a nested
symlink is finished.

Currently, nd->depth is used to keep track of both.  It's 0 when
we call link_path_walk() for the first time (for the pathname
itself) and 1 on all subsequent calls (for trailing symlinks,
if any).  That's fine, as far as pinning symlinks goes - when
handling a trailing symlink, the string we are interpreting
is the body of symlink pinned down in nd->stack[0].  It's
rather inconvenient with respect to handling nested symlinks,
though - when we run out of a string we are currently interpreting,
we need to decide whether it's a nested symlink (in which case
we need to pick the string saved back when we started to interpret
that nested symlink and resume its traversal) or not (in which
case we are done with link_path_walk()).

Current solution is a bit of a kludge - in handling of trailing symlink
(in lookup_last() and open_last_lookups()) we clear nd->stack[0].name.
That allows link_path_walk() to use the following rules when
running out of a string to interpret:
	* if nd->depth is zero, we are at the end of pathname itself.
	* if nd->depth is positive, check the saved string; for
nested symlink it will be non-NULL, for trailing symlink - NULL.

It works, but it's rather non-obvious.  Note that we have two sets:
the set of symlinks currently being traversed and the set of postponed
pathname tails.  The former is stored in nd->stack[0..nd->depth-1].link
and it's valid throughout the pathname resolution; the latter is valid only
during an individual call of link_path_walk() and it occupies
nd->stack[0..nd->depth-1].name for the first call of link_path_walk() and
nd->stack[1..nd->depth-1].name for subsequent ones.  The kludge is basically
a way to recognize the second set becoming empty.

Things get simpler if we keep track of the second set's size
explicitly and always store it in nd->stack[0..depth-1].name.
We access the second set only inside link_path_walk(), so its
size can live in a local variable; that way the check becomes
trivial without the need of that kludge.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:09:12 -04:00
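
A rough user-space model of that bookkeeping, with every name below invented for the
sketch: the walker keeps the number of postponed tails in a local counter, so running
out of characters while the counter is zero unambiguously means the end of the
original pathname - no NULL-name sentinel needed.

    #include <stdio.h>

    #define MAXNEST 8

    struct saved {
        const char *name;           /* postponed tail of an outer string */
    };

    /* toy walker: '>' in the input stands for "a nested symlink starts here";
     * its body is taken from the bodies[] table */
    static void toy_walk(const char *s, const char **bodies)
    {
        struct saved stack[MAXNEST];
        int tails = 0;              /* size of the postponed-tail set */
        int body = 0;

        for (;;) {
            while (*s && *s != '>')
                putchar(*s++);
            if (*s == '>') {        /* nested symlink: postpone the rest */
                stack[tails++].name = s + 1;
                s = bodies[body++];
                continue;
            }
            if (!tails)             /* nothing postponed: end of pathname */
                break;
            s = stack[--tails].name;   /* resume the enclosing string */
        }
        putchar('\n');
    }

    int main(void)
    {
        const char *bodies[] = { "B1", "B2" };

        toy_walk("a/>c/>e", bodies);   /* prints a/B1c/B2e */
        return 0;
    }
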
Al Viro
b1a8197240 pick_link(): check for WALK_TRAILING, not LOOKUP_PARENT
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:09:12 -04:00
Al Viro
8c4efe22e7 namei: invert the meaning of WALK_FOLLOW
old flags & WALK_FOLLOW <=> new !(flags & WALK_TRAILING)
That's what that flag had really been used for.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:09:09 -04:00
Al Viro
b4c0353693 sanitize handling of nd->last_type, kill LAST_BIND
->last_type values are set in 3 places: path_init() (sets to LAST_ROOT),
link_path_walk (LAST_NORM/DOT/DOTDOT) and pick_link (LAST_BIND).

They are checked in walk_component(), lookup_last() and do_last().
They also get copied to the caller by filename_parentat().  In the last
3 cases the value is what we had at the return from link_path_walk().
In case of walk_component() it's either directly downstream from
assignment in link_path_walk() or, when called by lookup_last(), the
value we have at the return from link_path_walk().

The value at the entry into link_path_walk() can survive to return only
if the pathname contains nothing but slashes.  Note that pick_link()
never returns such - pure jumps are handled directly.  So for the calls
of link_path_walk() for trailing symlinks it does not matter what value
had been there at the entry; the value at the return won't depend upon it.

There are 3 call chains that might have pick_link() storing LAST_BIND:

1) pick_link() from step_into() from walk_component() from
link_path_walk().  In that case we will either be parsing the next
component immediately after return into link_path_walk(), which will
overwrite the ->last_type before anyone has a chance to look at it,
or we'll fail, in which case nobody will be looking at ->last_type at all.

2) pick_link() from step_into() from walk_component() from lookup_last().
The value is never looked at due to the above; it won't affect the value
seen at return from any link_path_walk().

3) pick_link() from step_into() from do_last().  Ditto.

In other words, the assignment in pick_link() is pointless, and so is
LAST_BIND itself; nothing ever looks at that value.  Kill it off.
And make link_path_walk() _always_ assign ->last_type - in the only
case when the value at the entry might survive to the return that value
is always LAST_ROOT, inherited from path_init().  Move that assignment
from path_init() into the beginning of link_path_walk(), to consolidate
the things.

Historical note: LAST_BIND used to be used for the kludge with trailing
pure jump symlinks (extra iteration through the top-level loop).
No point keeping it anymore...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:08:19 -04:00
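
A self-contained sketch of what can end up in ->last_type once LAST_BIND is gone; the
enum names match the ones above, while the classification helper and the walk are
illustrative stand-ins:

    #include <stdio.h>
    #include <string.h>

    enum last_type { LAST_ROOT, LAST_NORM, LAST_DOT, LAST_DOTDOT };

    /* what link_path_walk() conceptually records for each component it parses;
     * LAST_ROOT is only what we start from, before any component is seen */
    static enum last_type classify(const char *component, size_t len)
    {
        if (len == 1 && component[0] == '.')
            return LAST_DOT;
        if (len == 2 && component[0] == '.' && component[1] == '.')
            return LAST_DOTDOT;
        return LAST_NORM;
    }

    int main(void)
    {
        enum last_type t = LAST_ROOT;      /* set once, at the start of the walk */
        const char *names[] = { "usr", ".", "..", "bin" };

        for (int i = 0; i < 4; i++)
            t = classify(names[i], strlen(names[i]));
        printf("final last_type: %d\n", t);   /* LAST_NORM, for "bin" */
        return 0;
    }
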
Al Viro
ad6cc4c338 finally fold get_link() into pick_link()
kill nd->link_inode, while we are at it

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:08:19 -04:00
Al Viro
06708adb99 merging pick_link() with get_link(), part 6
move the only remaining call of get_link() into pick_link()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:08:18 -04:00
Al Viro
b0417d2c72 merging pick_link() with get_link(), part 5
move get_link() call into step_into().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:08:18 -04:00
Al Viro
92d270165c merging pick_link() with get_link(), part 4
Move the call of get_link() into walk_component().  Change the
calling conventions for walk_component() to returning the link
body to follow (if any).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:08:18 -04:00
Al Viro
40fcf5a931 merging pick_link() with get_link(), part 3
After a pure jump ("/" or procfs-style symlink) we don't need to
hold the link anymore.  link_path_walk() dropped it if such case
had been detected, lookup_last/do_last() (i.e. old trailing_symlink())
left it on the stack - it ended up calling terminate_walk() shortly
anyway, which would've purged the entire stack.

Do it in get_link() itself instead.  Simpler logic that way...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:08:18 -04:00
Al Viro
1ccac622f9 merging pick_link() with get_link(), part 2
Fold trailing_symlink() into lookup_last() and do_last(), change
the calling conventions of those two.  Rules change:
	success, we are done => NULL instead of 0
	error	=> ERR_PTR(-E...) instead of -E...
	got a symlink to follow => return the path to be followed instead of 1

The loops calling those (in path_lookupat() and path_openat()) adjusted.

A subtle change of control flow here: originally a pure-jump trailing
symlink ("/" or procfs one) would've passed through the upper level
loop once more, with "" for path to traverse.  That would've brought
us back to the lookup_last/do_last entry and we would've hit LAST_BIND
case (LAST_BIND left from get_link() called by trailing_symlink())
and pretty much skip to the point right after where we'd left the
sucker back when we picked that trailing symlink.

Now we don't bother with that extra pass through the upper level
loop - if get_link() says "I've just done a pure jump, nothing
else to do", we just treat that as non-symlink case.

Boilerplate added on that step will go away shortly - it'll migrate
into walk_component() and then to step_into(), collapsing into the
change of calling conventions for those.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:08:18 -04:00
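
For illustration, a user-space imitation of the new convention and of the loop shape
that consumes it; ERR_PTR()/IS_ERR()/PTR_ERR() are re-created here just for the
sketch, and fake_lookup_last() is a stand-in, not the kernel's code:

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    /* minimal stand-ins for the kernel's ERR_PTR()/IS_ERR()/PTR_ERR() */
    static inline void *ERR_PTR(long err) { return (void *)err; }
    static inline int IS_ERR(const void *p) { return (uintptr_t)p >= (uintptr_t)-4095; }
    static inline long PTR_ERR(const void *p) { return (long)(intptr_t)p; }

    /* NULL => done, ERR_PTR(-E...) => error, string => symlink body to follow */
    static const char *fake_lookup_last(const char *name)
    {
        if (!name[0])
            return ERR_PTR(-ENOENT);
        if (name[0] == '@')             /* pretend '@...' is a symlink */
            return name + 1;
        return NULL;                    /* success, we are done */
    }

    int main(void)
    {
        const char *s = "@@target";
        int err = 0;

        while (!err) {                  /* the shape of the loop in path_*at() */
            const char *link = fake_lookup_last(s);

            if (!link)
                break;                  /* NULL: finished */
            if (IS_ERR(link)) {
                err = (int)PTR_ERR(link);
                break;
            }
            s = link;                   /* got a symlink body: keep walking */
        }
        printf("err = %d\n", err);      /* 0 */
        return 0;
    }
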
Al Viro
43679723d2 merging pick_link() with get_link(), part 1
Move restoring LOOKUP_PARENT and zeroing nd->stack[0].name past
the call of get_link() (nothing _currently_ uses them in there).
That allows moving the call of may_follow_link() into get_link()
as well, since now the presence of LOOKUP_PARENT distinguishes
the callers from each other (link_path_walk() has it, trailing_symlink()
doesn't).

Preparations for folding trailing_symlink() into callers (lookup_last()
and do_last()) and changing the calling conventions of those.  Next
stage after that will have get_link() call migrate into walk_component(),
then - into step_into().  It's tricky enough to warrant doing that
in stages, unfortunately...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:08:17 -04:00
Al Viro
a9dc1494a7 expand the only remaining call of path_lookup_conditional()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:08:17 -04:00
Al Viro
161aff1d93 LOOKUP_MOUNTPOINT: fold path_mountpointat() into path_lookupat()
New LOOKUP flag, telling path_lookupat() to act as path_mountpointat().
IOW, traverse mounts at the final point and skip revalidation of the
location where it ends up.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:08:17 -04:00
Al Viro
cbae4d12ee fold handle_mounts() into step_into()
The following is true:
	* calls of handle_mounts() and step_into() are always
paired in sequences like
	err = handle_mounts(nd, dentry, &path, &inode, &seq);
	if (unlikely(err < 0))
		return err;
	err = step_into(nd, &path, flags, inode, seq);
	* in all such sequences path is uninitialized before and
unused after this pair of calls
	* in all such sequences inode and seq are unused afterwards.

So the call of handle_mounts() can be shifted inside step_into(),
turning 'path' into a local variable in the combined function.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:08:15 -04:00
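
Schematically, the shape after the fold - 'path' becomes a local variable of the
combined function; all types and helpers below are stubs for illustration:

    #include <stdio.h>

    struct path  { int dentry, mnt; };
    struct inode { int i; };

    /* stand-in for the real helper: fills *path, *inode, *seq */
    static int handle_mounts_stub(int dentry, struct path *path,
                                  struct inode **inode, unsigned *seq)
    {
        static struct inode node;

        path->dentry = dentry;
        path->mnt = 1;
        *inode = &node;
        *seq = 0;
        return 0;
    }

    /* after the fold: the mount traversal happens inside the combined function */
    static int step_into_stub(int dentry, int flags)
    {
        struct path path;
        struct inode *inode;
        unsigned seq;
        int err = handle_mounts_stub(dentry, &path, &inode, &seq);

        if (err < 0)
            return err;
        /* ... the rest of the old step_into() works on path/inode/seq ... */
        (void)flags;
        return 0;
    }

    int main(void)
    {
        printf("%d\n", step_into_stub(42, 0));
        return 0;
    }
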
Al Viro
aca2903eef new step_into() flag: WALK_NOFOLLOW
Tells step_into() not to follow symlinks, regardless of LOOKUP_FOLLOW.
Allows switching handle_lookup_down() to the use of step_into(), getting
all follow_managed() and step_into() calls paired.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:06:13 -04:00
Al Viro
56676ec390 step_into() callers: dismiss the symlink earlier
We need to dismiss a symlink when we are done traversing it;
currently that's done when we call step_into() for its last
component.  For the cases when we do not call step_into()
for that component (i.e. when it's . or ..) we do the same
symlink dismissal after the call of handle_dots().

What we need to guarantee is that the symlink won't be dismissed
while we are still using nd->last.name - it's pointing into the
body of said symlink.  step_into() is sufficiently late - by
the time it's called we'd already obtained the dentry, so the
name we'd been looking up is no longer needed.  However, it
turns out to be cleaner to have that ("we are done with that
component now, can dismiss the link") done explicitly - in the
callers of step_into().

In handle_dots() case we won't be using the component string
at all, so for . and .. the corresponding point is actually
_before_ the call of handle_dots(), not after it.

Fix a minor irregularity in do_last(), while we are at it -
if a trailing symlink ended with . or .. we forgot to dismiss
it.  Not a problem, since we are about to be done with the nameidata
(neither . nor .. can be a trailing symlink, so this is the
last iteration through the loop) and terminate_walk() will
clean the stack anyway, but let's keep it more regular.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:00:32 -04:00
Al Viro
20e343571c lookup_fast(): take mount traversal into callers
Current calling conventions: -E... on error, 0 on cache miss,
result of handle_mounts(nd, dentry, path, inode, seqp) on
success.  Turn that into returning ERR_PTR(-E...), NULL and dentry
resp.; deal with handle_mounts() in the callers.  The thing
is, they already do that in cache miss handling case, so we
just need to supply dentry to them and unify the mount traversal
in those cases.  Fewer arguments that way, and we get closer
to merging handle_mounts() and step_into().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:00:32 -04:00
Al Viro
c153007b7b teach handle_mounts() to handle RCU mode
... and make the callers of __follow_mount_rcu() use handle_mounts().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 21:00:30 -04:00
Al Viro
b023e1728b lookup_fast(): consolidate the RCU success case
1) in case of __follow_mount_rcu() failure, lookup_fast() proceeds
to call unlazy_child() and, should it succeed, handle_mounts().
Note that we have status > 0 (or we wouldn't be calling
__follow_mount_rcu() at all), so all stuff conditional upon
non-positive status won't even be touched.

Consolidate just that sequence after the call of __follow_mount_rcu().

2) calling d_is_negative() and keeping its result is pointless -
we either don't get past checking ->d_seq (and don't use the results of
d_is_negative() at all), or we are guaranteed that ->d_inode and
type bits of ->d_flags had been consistent at the time of d_is_negative()
call.  IOW, we could only get to the use of its result if it's
equal to !inode.  The same ->d_seq check guarantees that after that point
this CPU won't observe ->d_flags values older than ->d_inode update.
So the 'negative' variable is completely pointless these days.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13 20:59:49 -04:00
Al Viro
db3c9ade50 handle_mounts(): pass dentry in, turn path into a pure out argument
All callers are equivalent to
	path->dentry = dentry;
	path->mnt = nd->path.mnt;
	err = handle_mounts(path, ...)
Pass dentry as an explicit argument, fill *path in handle_mounts()
itself.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-12 18:15:42 -04:00
Al Viro
e73cabff59 do_last(): collapse the call of path_to_nameidata()
... and shift filling struct path to just before the call of
handle_mounts().  All callers of handle_mounts() are
immediately preceded by path->mnt = nd->path.mnt now.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-12 18:15:42 -04:00
Al Viro
da5ebf5aa6 lookup_open(): saner calling conventions (return dentry on success)
same story as for atomic_open() in the previous commit.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-12 18:09:20 -04:00
Al Viro
239eb98338 atomic_open(): saner calling conventions (return dentry on success)
Currently it either returns -E... or puts (nd->path.mnt,dentry)
into *path and returns 0.  Make it return ERR_PTR(-E...) or
dentry; adjust the caller.  Fewer arguments and it's easier
to keep track of *path contents that way.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-02-27 14:43:56 -05:00
Al Viro
bd7c4b5083 handle_mounts(): start building a sane wrapper for follow_managed()
All callers of follow_managed() follow it on success with the same steps -
d_backing_inode(path->dentry) is calculated and stored into some struct inode *
variable and, in all but one case, an unsigned variable (nd->seq to be) is
zeroed.  The single exception is lookup_fast() and there zeroing is correct
thing to do - not doing it is a pointless microoptimization.

	Add a wrapper for follow_managed() that would do that combination.
It's mostly a vehicle for code massage - it will be changing quite a bit,
and the current calling conventions are by no means final.  Right now it
takes path, nameidata and (as out params) inode and seq, similar to
__follow_mount_rcu().  Which will soon get folded into it...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-02-27 14:43:56 -05:00
Al Viro
31d1726d72 make build_open_flags() treat O_CREAT | O_EXCL as implying O_NOFOLLOW
O_CREAT | O_EXCL means "-EEXIST if we run into a trailing symlink".
As it is, we might or might not have LOOKUP_FOLLOW in op->intent
in that case - that depends upon having O_NOFOLLOW in open flags.
It doesn't matter, since we won't be checking it in that case -
do_last() bails out earlier.

However, making sure it's not set (i.e. acting as if we had an explicit
O_NOFOLLOW) makes the behaviour more explicit and allows reordering the
check for O_CREAT | O_EXCL in do_last() with the call of step_into()
immediately following it.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-02-27 14:43:56 -05:00
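
A hedged sketch of the idea in plain user-space terms; the real change lives in
build_open_flags(), and adjust_open_flags() is an invented name that only shows the
mask logic:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>

    /* O_CREAT|O_EXCL must fail on a trailing symlink anyway, so treat the
     * combination as if the caller had also passed O_NOFOLLOW */
    static int adjust_open_flags(int flags)
    {
        if ((flags & (O_CREAT | O_EXCL)) == (O_CREAT | O_EXCL))
            flags |= O_NOFOLLOW;
        return flags;
    }

    int main(void)
    {
        int f = adjust_open_flags(O_WRONLY | O_CREAT | O_EXCL);

        printf("O_NOFOLLOW %s implied\n", (f & O_NOFOLLOW) ? "is" : "is not");
        return 0;
    }
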
Al Viro
1c9f5e06a6 follow_automount() doesn't need the entire nameidata
Only the address of ->total_link_count and the flags.
And fix an off-by-one in ELOOP detection - make it
consistent with symlink following, where we check whether
the pre-increment value has reached 40, rather than
checking the post-increment one.

[kudos to Christian Brauner for spotting the braino]

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-02-27 14:43:55 -05:00
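
To make the off-by-one concrete, here is a tiny stand-alone comparison of the two
checks; the 40 is the MAXSYMLINKS limit mentioned above, everything else is
illustrative:

    #include <stdio.h>

    #define MAXSYMLINKS 40

    int main(void)
    {
        int pre = 0, post = 0, pre_hops = 0, post_hops = 0;

        /* symlink-style check: fail once the pre-increment value reaches 40 */
        while (!(pre++ >= MAXSYMLINKS))
            pre_hops++;

        /* old automount-style check: fail once the post-increment value reaches 40 */
        while (!(++post >= MAXSYMLINKS))
            post_hops++;

        printf("pre-increment check allows %d traversals\n", pre_hops);   /* 40 */
        printf("post-increment check allows %d traversals\n", post_hops); /* 39 */
        return 0;
    }
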
Al Viro
25e195aa1e follow_automount(): get rid of dead^Wstillborn code
1) no instances of ->d_automount() have ever made use of the "return
ERR_PTR(-EISDIR) if you don't feel like mounting anything" - that's
a rudiment of plans that got superseded before the thing went into
the tree.  Despite the comment in follow_automount(), autofs has
never done that.

2) if there's no ->d_automount() in dentry_operations, filesystems
should not set DCACHE_NEED_AUTOMOUNT in the first place.  None have
ever done so...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-02-27 14:43:55 -05:00
Al Viro
26df6034fd fix automount/automount race properly
Protection against automount/automount races (two threads hitting the same
referral point at the same time) is based upon do_add_mount() prevention of
identical overmounts - trying to overmount the root of a mounted tree with
the same tree fails with -EBUSY.  It's unreliable (the other thread might've
mounted something on top of the automount it has triggered) *and* causes
no end of headache for follow_automount() and its caller, since
finish_automount() behaves like do_new_mount() - if the mountpoint to be is
overmounted, it mounts on top of what's overmounting it.  It's not only wrong
(we want to go into what's overmounting the automount point and quietly
discard what we planned to mount there), it introduces the possibility of
the original parent mount getting dropped.  That's what 8aef188452 (VFS: Fix
vfsmount overput on simultaneous automount) deals with, but it can't do
anything about the reliability of conflict detection - if something has
been mounted on top of the other thread's automount (e.g. by that other thread
having stepped into the automount in mount(2)), we don't get that -EBUSY and
the result is
	 referral point under automounted NFS under explicit overmount
under another copy of automounted NFS

What we need is finish_automount() *NOT* digging into overmounts - if it
finds one, it should just quietly discard the thing it was asked to mount.
And don't bother with actually crossing into the results of finish_automount() -
the same loop that calls follow_automount() will do that just fine on the
next iteration.

IOW, instead of calling lock_mount() have finish_automount() do it manually,
_without_ the "move into overmount and retry" part.  And leave crossing into
the results to the caller of follow_automount(), which simplifies it a lot.

Moral: if you end up with a lot of glue working around the calling conventions
of something, perhaps these calling conventions are simply wrong...

Fixes: 8aef188452 (VFS: Fix vfsmount overput on simultaneous automount)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-02-27 14:40:43 -05:00
Al Viro
8f11538ebe do_add_mount(): lift lock_mount/unlock_mount into callers
preparation for the finish_automount() fix (next commit)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-02-10 11:59:06 -05:00
Linus Torvalds
89a47dd1af Kbuild updates for v5.6 (2nd)
- fix randconfig to generate a sane .config
 
  - rename hostprogs-y / always to hostprogs / always-y, which are
    more natural syntax.
 
  - optimize scripts/kallsyms
 
  - fix yes2modconfig and mod2yesconfig
 
  - make multiple directory targets ('make foo/ bar/') work

Merge tag 'kbuild-v5.6-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild

Pull more Kbuild updates from Masahiro Yamada:

 - fix randconfig to generate a sane .config

 - rename hostprogs-y / always to hostprogs / always-y, which are more
   natural syntax.

 - optimize scripts/kallsyms

 - fix yes2modconfig and mod2yesconfig

 - make multiple directory targets ('make foo/ bar/') work

* tag 'kbuild-v5.6-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
  kbuild: make multiple directory targets work
  kconfig: Invalidate all symbols after changing to y or m.
  kallsyms: fix type of kallsyms_token_table[]
  scripts/kallsyms: change table to store (struct sym_entry *)
  scripts/kallsyms: rename local variables in read_symbol()
  kbuild: rename hostprogs-y/always to hostprogs/always-y
  kbuild: fix the document to use extra-y for vmlinux.lds
  kconfig: fix broken dependency in randconfig-generated .config
2020-02-09 16:05:50 -08:00
Linus Torvalds
380a129eb2 fs: New zonefs file system
Zonefs is a very simple file system exposing each zone of a zoned block
 device as a file.
 
 Unlike a regular file system with native zoned block device support
 (e.g. f2fs or the on-going btrfs effort), zonefs does not hide the
 sequential write constraint of zoned block devices to the user. As a
 result, zonefs is not a POSIX compliant file system. Its goal is to
 simplify the implementation of zoned block devices support in
 applications by replacing raw block device file accesses with a richer
 file based API, avoiding relying on direct block device file ioctls
 which may be more obscure to developers.
 
 One example of this approach is the implementation of LSM
 (log-structured merge) tree structures (such as used in RocksDB and
 LevelDB) on zoned block devices by allowing SSTables to be stored in a
 zone file similarly to a regular file system rather than as a range of
 sectors of a zoned device. The introduction of the higher level
 construct "one file is one zone" can help reducing the amount of changes
 needed in the application while at the same time allowing the use of
 zoned block devices with various programming languages other than C.
 
 Zonefs IO management implementation uses the new iomap generic code.
 Zonefs has been successfully tested using a functional test suite
 (available with zonefs userland format tool on github) and a prototype
 implementation of LevelDB on top of zonefs.
 
 Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>

Merge tag 'zonefs-5.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/zonefs

Pull new zonefs file system from Damien Le Moal:
 "Zonefs is a very simple file system exposing each zone of a zoned
  block device as a file.

  Unlike a regular file system with native zoned block device support
  (e.g. f2fs or the on-going btrfs effort), zonefs does not hide the
  sequential write constraint of zoned block devices to the user. As a
  result, zonefs is not a POSIX compliant file system. Its goal is to
  simplify the implementation of zoned block devices support in
  applications by replacing raw block device file accesses with a richer
  file based API, avoiding relying on direct block device file ioctls
  which may be more obscure to developers.

  One example of this approach is the implementation of LSM
  (log-structured merge) tree structures (such as used in RocksDB and
  LevelDB) on zoned block devices by allowing SSTables to be stored in a
  zone file similarly to a regular file system rather than as a range of
  sectors of a zoned device. The introduction of the higher level
  construct "one file is one zone" can help reducing the amount of
  changes needed in the application while at the same time allowing the
  use of zoned block devices with various programming languages other
  than C.

  Zonefs IO management implementation uses the new iomap generic code.
  Zonefs has been successfully tested using a functional test suite
  (available with zonefs userland format tool on github) and a prototype
  implementation of LevelDB on top of zonefs"

* tag 'zonefs-5.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/zonefs:
  zonefs: Add documentation
  fs: New zonefs file system
2020-02-09 15:51:46 -08:00
Linus Torvalds
d1ea35f4cd 13 cifs/smb3 patches most from testing at the SMB3 plugfest this week

Merge tag '5.6-rc-smb3-plugfest-patches' of git://git.samba.org/sfrench/cifs-2.6

Pull cifs fixes from Steve French:
 "13 cifs/smb3 patches, most from testing at the SMB3 plugfest this week:

   - Important fix for multichannel and for modefromsid mounts.

   - Two reconnect fixes

   - Addition of SMB3 change notify support

   - Backup tools fix

   - A few additional minor debug improvements (tracepoints and
     additional logging found useful during testing this week)"

* tag '5.6-rc-smb3-plugfest-patches' of git://git.samba.org/sfrench/cifs-2.6:
  smb3: Add defines for new information level, FileIdInformation
  smb3: print warning once if posix context returned on open
  smb3: add one more dynamic tracepoint missing from strict fsync path
  cifs: fix mode bits from dir listing when mounted with modefromsid
  cifs: fix channel signing
  cifs: add SMB3 change notification support
  cifs: make multichannel warning more visible
  cifs: fix soft mounts hanging in the reconnect code
  cifs: Add tracepoints for errors on flush or fsync
  cifs: log warning message (once) if out of disk space
  cifs: fail i/o on soft mounts if sessionsetup errors out
  smb3: fix problem with null cifs super block with previous patch
  SMB3: Backup intent flag missing from some more ops
2020-02-09 13:27:17 -08:00
Linus Torvalds
5586c3c1e0 Merge branch 'work.vboxsf' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull vboxfs from Al Viro:
 "This is the VirtualBox guest shared folder support by Hans de Goede,
  with fixups for fs_parse folded in to avoid bisection hazards from
  those API changes..."

* 'work.vboxsf' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  fs: Add VirtualBox guest shared folder (vboxsf) support
2020-02-09 12:41:00 -08:00
Hans de Goede
0fd1695766 fs: Add VirtualBox guest shared folder (vboxsf) support
VirtualBox hosts can share folders with guests, this commit adds a
VFS driver implementing the Linux-guest side of this, allowing folders
exported by the host to be mounted under Linux.

This driver depends on the guest <-> host IPC functions exported by
the vboxguest driver.

Acked-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-02-08 17:34:58 -05:00
Linus Torvalds
b85080c106 compat-ioctl fix for v5.6
One patch in the compat-ioctl series broke 32-bit rootfs for multiple
 people testing on 64-bit kernels. Let's fix it in -rc1 before others
 run into the same issue.

Merge tag 'compat-ioctl-fix' of git://git.kernel.org:/pub/scm/linux/kernel/git/arnd/playground

Pull compat-ioctl fix from Arnd Bergmann:
 "One patch in the compat-ioctl series broke 32-bit rootfs for multiple
  people testing on 64-bit kernels. Let's fix it in -rc1 before others
  run into the same issue"

* tag 'compat-ioctl-fix' of git://git.kernel.org:/pub/scm/linux/kernel/git/arnd/playground:
  compat_ioctl: fix FIONREAD on devices
2020-02-08 13:44:41 -08:00
Linus Torvalds
c9d35ee049 Merge branch 'merge.nfs-fs_parse.1' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull vfs file system parameter updates from Al Viro:
 "Saner fs_parser.c guts and data structures. The system-wide registry
  of syntax types (string/enum/int32/oct32/.../etc.) is gone and so is
  the horror switch() in fs_parse() that would have to grow another case
  every time something got added to that system-wide registry.

  New syntax types can be added by filesystems easily now, and their
  namespace is that of functions - not of system-wide enum members. IOW,
  they can be shared or kept private and if some turn out to be widely
  useful, we can make them common library helpers, etc., without having
  to do anything whatsoever to fs_parse() itself.

  And we already get that kind of requests - the thing that finally
  pushed me into doing that was "oh, and let's add one for timeouts -
  things like 15s or 2h". If some filesystem really wants that, let them
  do it. Without somebody having to play gatekeeper for the variants
  blessed by direct support in fs_parse(), TYVM.

  Quite a bit of boilerplate is gone. And IMO the data structures make a
  lot more sense now. -200LoC, while we are at it"

* 'merge.nfs-fs_parse.1' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (25 commits)
  tmpfs: switch to use of invalfc()
  cgroup1: switch to use of errorfc() et.al.
  procfs: switch to use of invalfc()
  hugetlbfs: switch to use of invalfc()
  cramfs: switch to use of errofc() et.al.
  gfs2: switch to use of errorfc() et.al.
  fuse: switch to use errorfc() et.al.
  ceph: use errorfc() and friends instead of spelling the prefix out
  prefix-handling analogues of errorf() and friends
  turn fs_param_is_... into functions
  fs_parse: handle optional arguments sanely
  fs_parse: fold fs_parameter_desc/fs_parameter_spec
  fs_parser: remove fs_parameter_description name field
  add prefix to fs_context->log
  ceph_parse_param(), ceph_parse_mon_ips(): switch to passing fc_log
  new primitive: __fs_parse()
  switch rbd and libceph to p_log-based primitives
  struct p_log, variants of warnf() et.al. taking that one instead
  teach logfc() to handle prefices, give it saner calling conventions
  get rid of cg_invalf()
  ...
2020-02-08 13:26:41 -08:00
Linus Torvalds
236f453294 Merge branch 'work.misc' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull misc vfs updates from Al Viro:

 - bmap series from cmaiolino

 - getting rid of convolutions in copy_mount_options() (use a couple of
   copy_from_user() instead of the __get_user() crap)

* 'work.misc' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  saner copy_mount_options()
  fibmap: Reject negative block numbers
  fibmap: Use bmap instead of ->bmap method in ioctl_fibmap
  ecryptfs: drop direct calls to ->bmap
  cachefiles: drop direct usage of ->bmap method.
  fs: Enable bmap() function to properly return errors
2020-02-08 13:04:49 -08:00
Linus Torvalds
995933305e Merge branch 'pipe-exclusive-wakeup'
Merge thundering herd avoidance on pipe IO.

This would have been applied for 5.5 already, but got delayed because of
a user-space race condition in the GNU make jobserver code.  Now that
there's a new GNU make 4.3 release, and most distributions seem to have
at least applied the (almost three year old) fix for the problem, let's
see if people notice.

And it might have been just bad random timing luck on my machine.

If you do hit the race condition, things will still work, but the
symptom is that you don't get nearly the expected parallelism when using
"make -j<N>".

The jobserver bug can definitely happen without this patch too, but
seems to be easier to trigger when we no longer wake up pipe waiters
unnecessarily.

* pipe-exclusive-wakeup:
  pipe: use exclusive waits when reading or writing
2020-02-08 11:44:02 -08:00
Linus Torvalds
0ddad21d3e pipe: use exclusive waits when reading or writing
This makes the pipe code use separate wait-queues and exclusive waiting
for readers and writers, avoiding a nasty thundering herd problem when
there are lots of readers waiting for data on a pipe (or, less commonly,
lots of writers waiting for a pipe to have space).

While this isn't a common occurrence in the traditional "use a pipe as a
data transport" case, where you typically only have a single reader and
a single writer process, there is one common special case: using a pipe
as a source of "locking tokens" rather than for data communication.

In particular, the GNU make jobserver code ends up using a pipe as a way
to limit parallelism, where each job consumes a token by reading a byte
from the jobserver pipe, and releases the token by writing a byte back
to the pipe.

This pattern is fairly traditional on Unix, and works very well, but
will waste a lot of time waking up a lot of processes when only a single
reader needs to be woken up when a writer releases a new token.

A simplified test-case of just this pipe interaction is to create 64
processes, and then pass a single token around between them (this
test-case also intentionally passes another token that gets ignored to
test the "wake up next" logic too, in case anybody wonders about it):

    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int fd[2], counters[2];

        pipe(fd);
        counters[0] = 0;
        counters[1] = -1;
        write(fd[1], counters, sizeof(counters));

        /* 64 processes */
        fork(); fork(); fork(); fork(); fork(); fork();

        do {
                int i;
                read(fd[0], &i, sizeof(i));
                if (i < 0)
                        continue;
                counters[0] = i+1;
                write(fd[1], counters, (1+(i & 1)) *sizeof(int));
        } while (counters[0] < 1000000);
        return 0;
    }

and in a perfect world, passing that token around should only cause one
context switch per transfer, when the writer of a token causes a
directed wakeup of just a single reader.

But with the "writer wakes all readers" model we traditionally had, on
my test box the above case causes more than an order of magnitude more
scheduling: instead of the expected ~1M context switches, "perf stat"
shows

        231,852.37 msec task-clock                #   15.857 CPUs utilized
        11,250,961      context-switches          #    0.049 M/sec
           616,304      cpu-migrations            #    0.003 M/sec
             1,648      page-faults               #    0.007 K/sec
 1,097,903,998,514      cycles                    #    4.735 GHz
   120,781,778,352      instructions              #    0.11  insn per cycle
    27,997,056,043      branches                  #  120.754 M/sec
       283,581,233      branch-misses             #    1.01% of all branches

      14.621273891 seconds time elapsed

       0.018243000 seconds user
       3.611468000 seconds sys

before this commit.

After this commit, I get

          5,229.55 msec task-clock                #    3.072 CPUs utilized
         1,212,233      context-switches          #    0.232 M/sec
           103,951      cpu-migrations            #    0.020 M/sec
             1,328      page-faults               #    0.254 K/sec
    21,307,456,166      cycles                    #    4.074 GHz
    12,947,819,999      instructions              #    0.61  insn per cycle
     2,881,985,678      branches                  #  551.096 M/sec
        64,267,015      branch-misses             #    2.23% of all branches

       1.702148350 seconds time elapsed

       0.004868000 seconds user
       0.110786000 seconds sys

instead. Much better.

[ Note! This kernel improvement seems to be very good at triggering a
  race condition in the make jobserver (in GNU make 4.2.1) for me. It's
  a long known bug that was fixed back in June 2017 by GNU make commit
  b552b0525198 ("[SV 51159] Use a non-blocking read with pselect to
  avoid hangs.").

  But there wasn't a new release of GNU make until 4.3 on Jan 19 2020,
  so a number of distributions may still have the buggy version. Some
  have backported the fix to their 4.2.1 release, though, and even
  without the fix it's quite timing-dependent whether the bug actually
  is hit. ]

Josh Triplett says:
 "I've been hammering on your pipe fix patch (switching to exclusive
  wait queues) for a month or so, on several different systems, and I've
  run into no issues with it. The patch *substantially* improves
  parallel build times on large (~100 CPU) systems, both with parallel
  make and with other things that use make's pipe-based jobserver.

  All current distributions (including stable and long-term stable
  distributions) have versions of GNU make that no longer have the
  jobserver bug"

Tested-by: Josh Triplett <josh@joshtriplett.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-02-08 11:39:19 -08:00
Arnd Bergmann
0a061743af compat_ioctl: fix FIONREAD on devices
My final cleanup patch for sys_compat_ioctl() introduced a regression on
the FIONREAD ioctl command, which is used for both regular and special
files, but only works on regular files after my patch, as I had missed
the warning that Al Viro put into a comment right above it.

Change it back so it can work on any file again by moving the implementation
to do_vfs_ioctl() instead.

Fixes: 77b9040195 ("compat_ioctl: simplify the implementation")
Reported-and-tested-by: Christian Zigotzky <chzigotzky@xenosoft.de>
Reported-and-tested-by: youling257 <youling257@gmail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2020-02-08 18:02:54 +01:00
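
For reference, the sort of user-space call that had regressed - FIONREAD on something
other than a regular file (a pipe here stands in for the devices from the report); a
minimal check:

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2], avail = 0;

        if (pipe(fd))
            return 1;
        if (write(fd[1], "hello", 5) != 5)
            return 1;

        /* FIONREAD on a special file (a pipe), not just a regular file */
        if (ioctl(fd[0], FIONREAD, &avail))
            return 1;

        printf("%d bytes readable\n", avail);   /* 5 */
        return 0;
    }
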
Linus Torvalds
f757165705 fuse fixes for 5.6-rc1

Merge tag 'fuse-fixes-5.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse

Pull fuse fixes from Miklos Szeredi:

 - Fix a regression introduced in v5.1 that triggers WARNINGs for some
   fuse filesystems

 - Fix an xfstest failure

 - Allow overlayfs to be used on top of fuse/virtiofs

 - Code and documentation cleanups

* tag 'fuse-fixes-5.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse:
  fuse: use true,false for bool variable
  Documentation: filesystems: convert fuse to RST
  fuse: Support RENAME_WHITEOUT flag
  fuse: don't overflow LLONG_MAX with end offset
  fix up iter on short count in fuse_direct_io()
2020-02-07 17:59:07 -08:00
Linus Torvalds
175787e011 Changes in gfs2:
- Fix a bug in Abhi Das's journal head lookup improvements that can cause a
   valid journal to be rejected.
 - Fix an O_SYNC write handling bug reported by Christoph Hellwig.

Merge tag 'gfs2-for-5.6-2' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2

Pull gfs2 fixes from Andreas Gruenbacher:

 - Fix a bug in Abhi Das's journal head lookup improvements that can
   cause a valid journal to be rejected.

 - Fix an O_SYNC write handling bug reported by Christoph Hellwig.

* tag 'gfs2-for-5.6-2' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2:
  gfs2: fix O_SYNC write handling
  gfs2: move setting current->backing_dev_info
  gfs2: fix gfs2_find_jhead that returns uninitialized jhead with seq 0
2020-02-07 17:54:46 -08:00