mirror of
https://github.com/torvalds/linux.git
7d6beb71da
Merge tag 'idmapped-mounts-v5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux
Pull idmapped mounts from Christian Brauner:
"This introduces idmapped mounts, which have been in the making for some
time. Simply put, different mounts can expose the same file or
directory with different ownership. This initial implementation comes
with ports for fat, ext4 and with Christoph's port for xfs with more
filesystems being actively worked on by independent people and
maintainers.
Idmapped mounts handle a wide range of long-standing use-cases. Here
are just a few:
- Idmapped mounts make it possible to easily share files between
multiple users or multiple machines especially in complex
scenarios. For example, idmapped mounts will be used in the
implementation of portable home directories in
systemd-homed.service(8) where they allow users to move their home
directory to an external storage device and use it on multiple
computers where they are assigned different uids and gids. This
effectively makes it possible to assign random uids and gids at
login time.
- It is possible to share files from the host with unprivileged
containers without having to change ownership permanently through
chown(2).
- It is possible to idmap a container's rootfs without having to
mangle every file. For example, Chromebooks use it to share the
user's Download folder with their unprivileged containers in their
Linux subsystem.
- It is possible to share files between containers with
non-overlapping idmappings.
- Filesystems that lack a proper concept of ownership, such as fat, can
use idmapped mounts to implement discretionary access (DAC)
permission checking.
- They allow users to efficiently change ownership on a per-mount
basis without having to (recursively) chown(2) all files. In
contrast to chown(2), changing ownership of large sets of files is
instantaneous with idmapped mounts. This is especially useful when
ownership of a whole root filesystem of a virtual machine or
container is changed. With idmapped mounts a single
mount_setattr() syscall is sufficient to change the ownership of
all files.
- Idmapped mounts always take the current ownership into account as
idmappings specify what a given uid or gid is supposed to be mapped
to. This contrasts with the chown(2) syscall which cannot by itself
take the current ownership of the files it changes into account. It
simply changes the ownership to the specified uid and gid. This is
especially problematic when recursively chown(2)ing a large set of
files, which is common with the aforementioned portable home
directory and container and VM scenarios.
- Idmapped mounts allow changing ownership locally, restricting it
to specific mounts, and temporarily as the ownership changes only
apply as long as the mount exists.
Several userspace projects have either already put up patches and
pull-requests for this feature or will do so should you decide to pull
this:
- systemd: In a wide variety of scenarios but especially right away
in their implementation of portable home directories.
https://systemd.io/HOME_DIRECTORY/
- container runtimes: containerd, runC, LXD: To share data between
host and unprivileged containers, unprivileged and privileged
containers, etc. The pull request for idmapped mounts support in
containerd, the default Kubernetes runtime, has been up for quite
a while now: https://github.com/containerd/containerd/pull/4734
- The virtio-fs developers and several users have expressed interest
in using this feature with virtual machines once virtio-fs is
ported.
- ChromeOS: Sharing host-directories with unprivileged containers.
I've tightly synced with all those projects and all of those listed
here have also expressed their need/desire for this feature on the
mailing list. For more info on how people use this there's a bunch of
talks about this too. Here's just two recent ones:
https://www.cncf.io/wp-content/uploads/2020/12/Rootless-Containers-in-Gitpod.pdf
https://fosdem.org/2021/schedule/event/containers_idmap/
This comes with an extensive xfstests suite covering both ext4 and
xfs:
https://git.kernel.org/brauner/xfstests-dev/h/idmapped_mounts
It covers truncation, creation, opening, xattrs, vfscaps, setid
execution, setgid inheritance and more both with idmapped and
non-idmapped mounts. It already helped to discover an unrelated xfs
setgid inheritance bug which has since been fixed in mainline. It will
be sent for inclusion with the xfstests project should you decide to
merge this.
In order to support per-mount idmappings vfsmounts are marked with
user namespaces. The idmapping of the user namespace will be used to
map the ids of vfs objects when they are accessed through that mount.
By default all vfsmounts are marked with the initial user namespace.
The initial user namespace is used to indicate that a mount is not
idmapped. All operations behave as before and this is verified in the
testsuite.
Based on prior discussions we want to attach the whole user namespace
and not just a dedicated idmapping struct. This allows us to reuse all
the helpers that already exist for dealing with idmappings instead of
introducing a whole new range of helpers. In addition, if we decide in
the future that we are confident enough to enable unprivileged users
to setup idmapped mounts the permission checking can take into account
whether the caller is privileged in the user namespace the mount is
currently marked with.
The user namespace the mount will be marked with can be specified by
passing a file descriptor referring to the user namespace as an
argument to the new mount_setattr() syscall together with the new
MOUNT_ATTR_IDMAP flag. The system call follows the openat2() pattern
of extensibility.
The following conditions must be met in order to create an idmapped
mount:
- The caller must currently have the CAP_SYS_ADMIN capability in the
user namespace the underlying filesystem has been mounted in.
- The underlying filesystem must support idmapped mounts.
- The mount must not already be idmapped. This also implies that the
idmapping of a mount cannot be altered once it has been idmapped.
- The mount must be a detached/anonymous mount, i.e. it must have
been created by calling open_tree() with the OPEN_TREE_CLONE flag
and it must not already have been visible in the filesystem.
The last two points guarantee easier semantics for userspace and the
kernel and make the implementation significantly simpler.
By default vfsmounts are marked with the initial user namespace and no
behavioral or performance changes are observed.
The manpage with a detailed description can be found here:
1d7b902e28
In order to support idmapped mounts, filesystems need to be changed
and mark themselves with the FS_ALLOW_IDMAP flag in fs_flags. The
patches to convert individual filesystem are not very large or
complicated overall as can be seen from the included fat, ext4, and
xfs ports. Patches for other filesystems are actively worked on and
will be sent out separately. The xfstestsuite can be used to verify
that a port has been done correctly.
The mount_setattr() syscall is motivated independent of the idmapped
mounts patches and it's been around since July 2019. One of the most
valuable features of the new mount api is the ability to perform
mounts based on file descriptors only.
Together with the lookup restrictions available in the openat2()
RESOLVE_* flag namespace which we added in v5.6 this is the first time
we are close to hardened, race-free (e.g. against symlink attacks)
mounting and path resolution.
While userspace has started porting to the new mount api to mount
proper filesystems and create new bind-mounts it is currently not
possible to change mount options of an already existing bind mount in
the new mount api since the mount_setattr() syscall is missing.
With the addition of the mount_setattr() syscall we remove this last
restriction and userspace can now fully port to the new mount api,
covering every use-case the old mount api could. We also add the
crucial ability to recursively change mount options for a whole mount
tree, both removing and adding mount options at the same time. This
syscall has been requested multiple times by various people and
projects.
There is a simple tool available at
https://github.com/brauner/mount-idmapped
that allows creating idmapped mounts so people can play with this
patch series. Should you decide to pull this, I'll add support for
the regular mount binary in the following weeks.
Here's an example of a simple idmapped mount of another user's home
directory:
u1001@f2-vm:/$ sudo ./mount --idmap both:1000:1001:1 /home/ubuntu/ /mnt
u1001@f2-vm:/$ ls -al /home/ubuntu/
total 28
drwxr-xr-x 2 ubuntu ubuntu 4096 Oct 28 22:07 .
drwxr-xr-x 4 root root 4096 Oct 28 04:00 ..
-rw------- 1 ubuntu ubuntu 3154 Oct 28 22:12 .bash_history
-rw-r--r-- 1 ubuntu ubuntu 220 Feb 25 2020 .bash_logout
-rw-r--r-- 1 ubuntu ubuntu 3771 Feb 25 2020 .bashrc
-rw-r--r-- 1 ubuntu ubuntu 807 Feb 25 2020 .profile
-rw-r--r-- 1 ubuntu ubuntu 0 Oct 16 16:11 .sudo_as_admin_successful
-rw------- 1 ubuntu ubuntu 1144 Oct 28 00:43 .viminfo
u1001@f2-vm:/$ ls -al /mnt/
total 28
drwxr-xr-x 2 u1001 u1001 4096 Oct 28 22:07 .
drwxr-xr-x 29 root root 4096 Oct 28 22:01 ..
-rw------- 1 u1001 u1001 3154 Oct 28 22:12 .bash_history
-rw-r--r-- 1 u1001 u1001 220 Feb 25 2020 .bash_logout
-rw-r--r-- 1 u1001 u1001 3771 Feb 25 2020 .bashrc
-rw-r--r-- 1 u1001 u1001 807 Feb 25 2020 .profile
-rw-r--r-- 1 u1001 u1001 0 Oct 16 16:11 .sudo_as_admin_successful
-rw------- 1 u1001 u1001 1144 Oct 28 00:43 .viminfo
u1001@f2-vm:/$ touch /mnt/my-file
u1001@f2-vm:/$ setfacl -m u:1001:rwx /mnt/my-file
u1001@f2-vm:/$ sudo setcap -n 1001 cap_net_raw+ep /mnt/my-file
u1001@f2-vm:/$ ls -al /mnt/my-file
-rw-rwxr--+ 1 u1001 u1001 0 Oct 28 22:14 /mnt/my-file
u1001@f2-vm:/$ ls -al /home/ubuntu/my-file
-rw-rwxr--+ 1 ubuntu ubuntu 0 Oct 28 22:14 /home/ubuntu/my-file
u1001@f2-vm:/$ getfacl /mnt/my-file
getfacl: Removing leading '/' from absolute path names
# file: mnt/my-file
# owner: u1001
# group: u1001
user::rw-
user:u1001:rwx
group::rw-
mask::rwx
other::r--
u1001@f2-vm:/$ getfacl /home/ubuntu/my-file
getfacl: Removing leading '/' from absolute path names
# file: home/ubuntu/my-file
# owner: ubuntu
# group: ubuntu
user::rw-
user:ubuntu:rwx
group::rw-
mask::rwx
other::r--"
* tag 'idmapped-mounts-v5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux: (41 commits)
xfs: remove the possibly unused mp variable in xfs_file_compat_ioctl
xfs: support idmapped mounts
ext4: support idmapped mounts
fat: handle idmapped mounts
tests: add mount_setattr() selftests
fs: introduce MOUNT_ATTR_IDMAP
fs: add mount_setattr()
fs: add attr_flags_to_mnt_flags helper
fs: split out functions to hold writers
namespace: only take read lock in do_reconfigure_mnt()
mount: make {lock,unlock}_mount_hash() static
namespace: take lock_mount_hash() directly when changing flags
nfs: do not export idmapped mounts
overlayfs: do not mount on top of idmapped mounts
ecryptfs: do not mount on top of idmapped mounts
ima: handle idmapped mounts
apparmor: handle idmapped mounts
fs: make helpers idmap mount aware
exec: handle idmapped mounts
would_dump: handle idmapped mounts
...
// SPDX-License-Identifier: GPL-2.0-only
|
|
/*
|
|
*
|
|
* Copyright (C) 2011 Novell Inc.
|
|
*/
|
|
|
|
#include <linux/fs.h>
|
|
#include <linux/slab.h>
|
|
#include <linux/cred.h>
|
|
#include <linux/xattr.h>
|
|
#include <linux/posix_acl.h>
|
|
#include <linux/ratelimit.h>
|
|
#include <linux/fiemap.h>
|
|
#include "overlayfs.h"
|
|
|
|
|
|
int ovl_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
|
|
struct iattr *attr)
|
|
{
|
|
int err;
|
|
bool full_copy_up = false;
|
|
struct dentry *upperdentry;
|
|
const struct cred *old_cred;
|
|
|
|
err = setattr_prepare(&init_user_ns, dentry, attr);
|
|
if (err)
|
|
return err;
|
|
|
|
err = ovl_want_write(dentry);
|
|
if (err)
|
|
goto out;
|
|
|
|
if (attr->ia_valid & ATTR_SIZE) {
|
|
struct inode *realinode = d_inode(ovl_dentry_real(dentry));
|
|
|
|
err = -ETXTBSY;
|
|
if (atomic_read(&realinode->i_writecount) < 0)
|
|
goto out_drop_write;
|
|
|
|
/* Truncate should trigger data copy up as well */
|
|
full_copy_up = true;
|
|
}
|
|
|
|
if (!full_copy_up)
|
|
err = ovl_copy_up(dentry);
|
|
else
|
|
err = ovl_copy_up_with_data(dentry);
|
|
if (!err) {
|
|
struct inode *winode = NULL;
|
|
|
|
upperdentry = ovl_dentry_upper(dentry);
|
|
|
|
if (attr->ia_valid & ATTR_SIZE) {
|
|
winode = d_inode(upperdentry);
|
|
err = get_write_access(winode);
|
|
if (err)
|
|
goto out_drop_write;
|
|
}
|
|
|
|
if (attr->ia_valid & (ATTR_KILL_SUID|ATTR_KILL_SGID))
|
|
attr->ia_valid &= ~ATTR_MODE;
|
|
|
|
/*
|
|
* We might have to translate ovl file into real file object
|
|
* once use cases emerge. For now, simply don't let underlying
|
|
* filesystem rely on attr->ia_file
|
|
*/
|
|
attr->ia_valid &= ~ATTR_FILE;
|
|
|
|
/*
|
|
* If open(O_TRUNC) is done, VFS calls ->setattr with ATTR_OPEN
|
|
* set. Overlayfs does not pass O_TRUNC flag to underlying
|
|
* filesystem during open -> do not pass ATTR_OPEN. This
|
|
* disables optimization in fuse which assumes open(O_TRUNC)
|
|
* already set file size to 0. But we never passed O_TRUNC to
|
|
* fuse. So by clearing ATTR_OPEN, fuse will be forced to send
|
|
* setattr request to server.
|
|
*/
|
|
attr->ia_valid &= ~ATTR_OPEN;
|
|
|
|
inode_lock(upperdentry->d_inode);
|
|
old_cred = ovl_override_creds(dentry->d_sb);
|
|
err = notify_change(&init_user_ns, upperdentry, attr, NULL);
|
|
revert_creds(old_cred);
|
|
if (!err)
|
|
ovl_copyattr(upperdentry->d_inode, dentry->d_inode);
|
|
inode_unlock(upperdentry->d_inode);
|
|
|
|
if (winode)
|
|
put_write_access(winode);
|
|
}
|
|
out_drop_write:
|
|
ovl_drop_write(dentry);
|
|
out:
|
|
return err;
|
|
}
|
|
|
|
static int ovl_map_dev_ino(struct dentry *dentry, struct kstat *stat, int fsid)
|
|
{
|
|
bool samefs = ovl_same_fs(dentry->d_sb);
|
|
unsigned int xinobits = ovl_xino_bits(dentry->d_sb);
|
|
unsigned int xinoshift = 64 - xinobits;
|
|
|
|
if (samefs) {
|
|
/*
|
|
* When all layers are on the same fs, all real inode
|
|
* number are unique, so we use the overlay st_dev,
|
|
* which is friendly to du -x.
|
|
*/
|
|
stat->dev = dentry->d_sb->s_dev;
|
|
return 0;
|
|
} else if (xinobits) {
|
|
/*
|
|
* All inode numbers of underlying fs should not be using the
|
|
* high xinobits, so we use high xinobits to partition the
|
|
* overlay st_ino address space. The high bits holds the fsid
|
|
* (upper fsid is 0). The lowest xinobit is reserved for mapping
|
|
* the non-peresistent inode numbers range in case of overflow.
|
|
* This way all overlay inode numbers are unique and use the
|
|
* overlay st_dev.
|
|
*/
|
|
if (likely(!(stat->ino >> xinoshift))) {
|
|
stat->ino |= ((u64)fsid) << (xinoshift + 1);
|
|
stat->dev = dentry->d_sb->s_dev;
|
|
return 0;
|
|
} else if (ovl_xino_warn(dentry->d_sb)) {
|
|
pr_warn_ratelimited("inode number too big (%pd2, ino=%llu, xinobits=%d)\n",
|
|
dentry, stat->ino, xinobits);
|
|
}
|
|
}
|
|
|
|
/* The inode could not be mapped to a unified st_ino address space */
|
|
if (S_ISDIR(dentry->d_inode->i_mode)) {
|
|
/*
|
|
* Always use the overlay st_dev for directories, so 'find
|
|
* -xdev' will scan the entire overlay mount and won't cross the
|
|
* overlay mount boundaries.
|
|
*
|
|
* If not all layers are on the same fs the pair {real st_ino;
|
|
* overlay st_dev} is not unique, so use the non persistent
|
|
* overlay st_ino for directories.
|
|
*/
|
|
stat->dev = dentry->d_sb->s_dev;
|
|
stat->ino = dentry->d_inode->i_ino;
|
|
} else {
|
|
/*
|
|
* For non-samefs setup, if we cannot map all layers st_ino
|
|
* to a unified address space, we need to make sure that st_dev
|
|
* is unique per underlying fs, so we use the unique anonymous
|
|
* bdev assigned to the underlying fs.
|
|
*/
|
|
stat->dev = OVL_FS(dentry->d_sb)->fs[fsid].pseudo_dev;
|
|
}
|
|
|
|
return 0;
|
|
}
|
|
|
|
int ovl_getattr(struct user_namespace *mnt_userns, const struct path *path,
|
|
struct kstat *stat, u32 request_mask, unsigned int flags)
|
|
{
|
|
struct dentry *dentry = path->dentry;
|
|
enum ovl_path_type type;
|
|
struct path realpath;
|
|
const struct cred *old_cred;
|
|
bool is_dir = S_ISDIR(dentry->d_inode->i_mode);
|
|
int fsid = 0;
|
|
int err;
|
|
bool metacopy_blocks = false;
|
|
|
|
metacopy_blocks = ovl_is_metacopy_dentry(dentry);
|
|
|
|
type = ovl_path_real(dentry, &realpath);
|
|
old_cred = ovl_override_creds(dentry->d_sb);
|
|
err = vfs_getattr(&realpath, stat, request_mask, flags);
|
|
if (err)
|
|
goto out;
|
|
|
|
/*
|
|
* For non-dir or same fs, we use st_ino of the copy up origin.
|
|
* This guaranties constant st_dev/st_ino across copy up.
|
|
* With xino feature and non-samefs, we use st_ino of the copy up
|
|
* origin masked with high bits that represent the layer id.
|
|
*
|
|
* If lower filesystem supports NFS file handles, this also guaranties
|
|
* persistent st_ino across mount cycle.
|
|
*/
|
|
if (!is_dir || ovl_same_dev(dentry->d_sb)) {
|
|
if (!OVL_TYPE_UPPER(type)) {
|
|
fsid = ovl_layer_lower(dentry)->fsid;
|
|
} else if (OVL_TYPE_ORIGIN(type)) {
|
|
struct kstat lowerstat;
|
|
u32 lowermask = STATX_INO | STATX_BLOCKS |
|
|
(!is_dir ? STATX_NLINK : 0);
|
|
|
|
ovl_path_lower(dentry, &realpath);
|
|
err = vfs_getattr(&realpath, &lowerstat,
|
|
lowermask, flags);
|
|
if (err)
|
|
goto out;
|
|
|
|
/*
|
|
* Lower hardlinks may be broken on copy up to different
|
|
* upper files, so we cannot use the lower origin st_ino
|
|
* for those different files, even for the same fs case.
|
|
*
|
|
* Similarly, several redirected dirs can point to the
|
|
* same dir on a lower layer. With the "verify_lower"
|
|
* feature, we do not use the lower origin st_ino, if
|
|
* we haven't verified that this redirect is unique.
|
|
*
|
|
* With inodes index enabled, it is safe to use st_ino
|
|
* of an indexed origin. The index validates that the
|
|
* upper hardlink is not broken and that a redirected
|
|
* dir is the only redirect to that origin.
|
|
*/
|
|
if (ovl_test_flag(OVL_INDEX, d_inode(dentry)) ||
|
|
(!ovl_verify_lower(dentry->d_sb) &&
|
|
(is_dir || lowerstat.nlink == 1))) {
|
|
fsid = ovl_layer_lower(dentry)->fsid;
|
|
stat->ino = lowerstat.ino;
|
|
}
|
|
|
|
/*
|
|
* If we are querying a metacopy dentry and lower
|
|
* dentry is data dentry, then use the blocks we
|
|
* queried just now. We don't have to do additional
|
|
* vfs_getattr(). If lower itself is metacopy, then
|
|
* additional vfs_getattr() is unavoidable.
|
|
*/
|
|
if (metacopy_blocks &&
|
|
realpath.dentry == ovl_dentry_lowerdata(dentry)) {
|
|
stat->blocks = lowerstat.blocks;
|
|
metacopy_blocks = false;
|
|
}
|
|
}
|
|
|
|
if (metacopy_blocks) {
|
|
/*
|
|
* If lower is not same as lowerdata or if there was
|
|
* no origin on upper, we can end up here.
|
|
*/
|
|
struct kstat lowerdatastat;
|
|
u32 lowermask = STATX_BLOCKS;
|
|
|
|
ovl_path_lowerdata(dentry, &realpath);
|
|
err = vfs_getattr(&realpath, &lowerdatastat,
|
|
lowermask, flags);
|
|
if (err)
|
|
goto out;
|
|
stat->blocks = lowerdatastat.blocks;
|
|
}
|
|
}
|
|
|
|
err = ovl_map_dev_ino(dentry, stat, fsid);
|
|
if (err)
|
|
goto out;
|
|
|
|
/*
|
|
* It's probably not worth it to count subdirs to get the
|
|
* correct link count. nlink=1 seems to pacify 'find' and
|
|
* other utilities.
|
|
*/
|
|
if (is_dir && OVL_TYPE_MERGE(type))
|
|
stat->nlink = 1;
|
|
|
|
/*
|
|
* Return the overlay inode nlinks for indexed upper inodes.
|
|
* Overlay inode nlink counts the union of the upper hardlinks
|
|
* and non-covered lower hardlinks. It does not include the upper
|
|
* index hardlink.
|
|
*/
|
|
if (!is_dir && ovl_test_flag(OVL_INDEX, d_inode(dentry)))
|
|
stat->nlink = dentry->d_inode->i_nlink;
|
|
|
|
out:
|
|
revert_creds(old_cred);
|
|
|
|
return err;
|
|
}
|
|
|
|
int ovl_permission(struct user_namespace *mnt_userns,
|
|
struct inode *inode, int mask)
|
|
{
|
|
struct inode *upperinode = ovl_inode_upper(inode);
|
|
struct inode *realinode = upperinode ?: ovl_inode_lower(inode);
|
|
const struct cred *old_cred;
|
|
int err;
|
|
|
|
/* Careful in RCU walk mode */
|
|
if (!realinode) {
|
|
WARN_ON(!(mask & MAY_NOT_BLOCK));
|
|
return -ECHILD;
|
|
}
|
|
|
|
/*
|
|
* Check overlay inode with the creds of task and underlying inode
|
|
* with creds of mounter
|
|
*/
|
|
err = generic_permission(&init_user_ns, inode, mask);
|
|
if (err)
|
|
return err;
|
|
|
|
old_cred = ovl_override_creds(inode->i_sb);
|
|
if (!upperinode &&
|
|
!special_file(realinode->i_mode) && mask & MAY_WRITE) {
|
|
mask &= ~(MAY_WRITE | MAY_APPEND);
|
|
/* Make sure mounter can read file for copy up later */
|
|
mask |= MAY_READ;
|
|
}
|
|
err = inode_permission(&init_user_ns, realinode, mask);
|
|
revert_creds(old_cred);
|
|
|
|
return err;
|
|
}
|
|
|
|
static const char *ovl_get_link(struct dentry *dentry,
|
|
struct inode *inode,
|
|
struct delayed_call *done)
|
|
{
|
|
const struct cred *old_cred;
|
|
const char *p;
|
|
|
|
if (!dentry)
|
|
return ERR_PTR(-ECHILD);
|
|
|
|
old_cred = ovl_override_creds(dentry->d_sb);
|
|
p = vfs_get_link(ovl_dentry_real(dentry), done);
|
|
revert_creds(old_cred);
|
|
return p;
|
|
}
|
|
|
|
bool ovl_is_private_xattr(struct super_block *sb, const char *name)
|
|
{
|
|
struct ovl_fs *ofs = sb->s_fs_info;
|
|
|
|
if (ofs->config.userxattr)
|
|
return strncmp(name, OVL_XATTR_USER_PREFIX,
|
|
sizeof(OVL_XATTR_USER_PREFIX) - 1) == 0;
|
|
else
|
|
return strncmp(name, OVL_XATTR_TRUSTED_PREFIX,
|
|
sizeof(OVL_XATTR_TRUSTED_PREFIX) - 1) == 0;
|
|
}
|
|
|
|
int ovl_xattr_set(struct dentry *dentry, struct inode *inode, const char *name,
|
|
const void *value, size_t size, int flags)
|
|
{
|
|
int err;
|
|
struct dentry *upperdentry = ovl_i_dentry_upper(inode);
|
|
struct dentry *realdentry = upperdentry ?: ovl_dentry_lower(dentry);
|
|
const struct cred *old_cred;
|
|
|
|
err = ovl_want_write(dentry);
|
|
if (err)
|
|
goto out;
|
|
|
|
if (!value && !upperdentry) {
|
|
old_cred = ovl_override_creds(dentry->d_sb);
|
|
err = vfs_getxattr(&init_user_ns, realdentry, name, NULL, 0);
|
|
revert_creds(old_cred);
|
|
if (err < 0)
|
|
goto out_drop_write;
|
|
}
|
|
|
|
if (!upperdentry) {
|
|
err = ovl_copy_up(dentry);
|
|
if (err)
|
|
goto out_drop_write;
|
|
|
|
realdentry = ovl_dentry_upper(dentry);
|
|
}
|
|
|
|
old_cred = ovl_override_creds(dentry->d_sb);
|
|
if (value)
|
|
err = vfs_setxattr(&init_user_ns, realdentry, name, value, size,
|
|
flags);
|
|
else {
|
|
WARN_ON(flags != XATTR_REPLACE);
|
|
err = vfs_removexattr(&init_user_ns, realdentry, name);
|
|
}
|
|
revert_creds(old_cred);
|
|
|
|
/* copy c/mtime */
|
|
ovl_copyattr(d_inode(realdentry), inode);
|
|
|
|
out_drop_write:
|
|
ovl_drop_write(dentry);
|
|
out:
|
|
return err;
|
|
}
|
|
|
|
int ovl_xattr_get(struct dentry *dentry, struct inode *inode, const char *name,
|
|
void *value, size_t size)
|
|
{
|
|
ssize_t res;
|
|
const struct cred *old_cred;
|
|
struct dentry *realdentry =
|
|
ovl_i_dentry_upper(inode) ?: ovl_dentry_lower(dentry);
|
|
|
|
old_cred = ovl_override_creds(dentry->d_sb);
|
|
res = vfs_getxattr(&init_user_ns, realdentry, name, value, size);
|
|
revert_creds(old_cred);
|
|
return res;
|
|
}
|
|
|
|
static bool ovl_can_list(struct super_block *sb, const char *s)
|
|
{
|
|
/* Never list private (.overlay) */
|
|
if (ovl_is_private_xattr(sb, s))
|
|
return false;
|
|
|
|
/* List all non-trusted xatts */
|
|
if (strncmp(s, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN) != 0)
|
|
return true;
|
|
|
|
/* list other trusted for superuser only */
|
|
return ns_capable_noaudit(&init_user_ns, CAP_SYS_ADMIN);
|
|
}
|
|
|
|
ssize_t ovl_listxattr(struct dentry *dentry, char *list, size_t size)
|
|
{
|
|
struct dentry *realdentry = ovl_dentry_real(dentry);
|
|
ssize_t res;
|
|
size_t len;
|
|
char *s;
|
|
const struct cred *old_cred;
|
|
|
|
old_cred = ovl_override_creds(dentry->d_sb);
|
|
res = vfs_listxattr(realdentry, list, size);
|
|
revert_creds(old_cred);
|
|
if (res <= 0 || size == 0)
|
|
return res;
|
|
|
|
/* filter out private xattrs */
|
|
for (s = list, len = res; len;) {
|
|
size_t slen = strnlen(s, len) + 1;
|
|
|
|
/* underlying fs providing us with an broken xattr list? */
|
|
if (WARN_ON(slen > len))
|
|
return -EIO;
|
|
|
|
len -= slen;
|
|
if (!ovl_can_list(dentry->d_sb, s)) {
|
|
res -= slen;
|
|
memmove(s, s + slen, len);
|
|
} else {
|
|
s += slen;
|
|
}
|
|
}
|
|
|
|
return res;
|
|
}
|
|
|
|
struct posix_acl *ovl_get_acl(struct inode *inode, int type)
|
|
{
|
|
struct inode *realinode = ovl_inode_real(inode);
|
|
const struct cred *old_cred;
|
|
struct posix_acl *acl;
|
|
|
|
if (!IS_ENABLED(CONFIG_FS_POSIX_ACL) || !IS_POSIXACL(realinode))
|
|
return NULL;
|
|
|
|
old_cred = ovl_override_creds(inode->i_sb);
|
|
acl = get_acl(realinode, type);
|
|
revert_creds(old_cred);
|
|
|
|
return acl;
|
|
}
|
|
|
|
int ovl_update_time(struct inode *inode, struct timespec64 *ts, int flags)
|
|
{
|
|
if (flags & S_ATIME) {
|
|
struct ovl_fs *ofs = inode->i_sb->s_fs_info;
|
|
struct path upperpath = {
|
|
.mnt = ovl_upper_mnt(ofs),
|
|
.dentry = ovl_upperdentry_dereference(OVL_I(inode)),
|
|
};
|
|
|
|
if (upperpath.dentry) {
|
|
touch_atime(&upperpath);
|
|
inode->i_atime = d_inode(upperpath.dentry)->i_atime;
|
|
}
|
|
}
|
|
return 0;
|
|
}
|
|
|
|
static int ovl_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
|
|
u64 start, u64 len)
|
|
{
|
|
int err;
|
|
struct inode *realinode = ovl_inode_realdata(inode);
|
|
const struct cred *old_cred;
|
|
|
|
if (!realinode->i_op->fiemap)
|
|
return -EOPNOTSUPP;
|
|
|
|
old_cred = ovl_override_creds(inode->i_sb);
|
|
err = realinode->i_op->fiemap(realinode, fieinfo, start, len);
|
|
revert_creds(old_cred);
|
|
|
|
return err;
|
|
}
|
|
|
|
static const struct inode_operations ovl_file_inode_operations = {
|
|
.setattr = ovl_setattr,
|
|
.permission = ovl_permission,
|
|
.getattr = ovl_getattr,
|
|
.listxattr = ovl_listxattr,
|
|
.get_acl = ovl_get_acl,
|
|
.update_time = ovl_update_time,
|
|
.fiemap = ovl_fiemap,
|
|
};
|
|
|
|
static const struct inode_operations ovl_symlink_inode_operations = {
|
|
.setattr = ovl_setattr,
|
|
.get_link = ovl_get_link,
|
|
.getattr = ovl_getattr,
|
|
.listxattr = ovl_listxattr,
|
|
.update_time = ovl_update_time,
|
|
};
|
|
|
|
static const struct inode_operations ovl_special_inode_operations = {
|
|
.setattr = ovl_setattr,
|
|
.permission = ovl_permission,
|
|
.getattr = ovl_getattr,
|
|
.listxattr = ovl_listxattr,
|
|
.get_acl = ovl_get_acl,
|
|
.update_time = ovl_update_time,
|
|
};
|
|
|
|
static const struct address_space_operations ovl_aops = {
|
|
/* For O_DIRECT dentry_open() checks f_mapping->a_ops->direct_IO */
|
|
.direct_IO = noop_direct_IO,
|
|
};
|
|
|
|
/*
|
|
* It is possible to stack overlayfs instance on top of another
|
|
* overlayfs instance as lower layer. We need to annotate the
|
|
* stackable i_mutex locks according to stack level of the super
|
|
* block instance. An overlayfs instance can never be in stack
|
|
* depth 0 (there is always a real fs below it). An overlayfs
|
|
* inode lock will use the lockdep annotaion ovl_i_mutex_key[depth].
|
|
*
|
|
* For example, here is a snip from /proc/lockdep_chains after
|
|
* dir_iterate of nested overlayfs:
|
|
*
|
|
* [...] &ovl_i_mutex_dir_key[depth] (stack_depth=2)
|
|
* [...] &ovl_i_mutex_dir_key[depth]#2 (stack_depth=1)
|
|
* [...] &type->i_mutex_dir_key (stack_depth=0)
|
|
*
|
|
* Locking order w.r.t ovl_want_write() is important for nested overlayfs.
|
|
*
|
|
* This chain is valid:
|
|
* - inode->i_rwsem (inode_lock[2])
|
|
* - upper_mnt->mnt_sb->s_writers (ovl_want_write[0])
|
|
* - OVL_I(inode)->lock (ovl_inode_lock[2])
|
|
* - OVL_I(lowerinode)->lock (ovl_inode_lock[1])
|
|
*
|
|
* And this chain is valid:
|
|
* - inode->i_rwsem (inode_lock[2])
|
|
* - OVL_I(inode)->lock (ovl_inode_lock[2])
|
|
* - lowerinode->i_rwsem (inode_lock[1])
|
|
* - OVL_I(lowerinode)->lock (ovl_inode_lock[1])
|
|
*
|
|
* But lowerinode->i_rwsem SHOULD NOT be acquired while ovl_want_write() is
|
|
* held, because it is in reverse order of the non-nested case using the same
|
|
* upper fs:
|
|
* - inode->i_rwsem (inode_lock[1])
|
|
* - upper_mnt->mnt_sb->s_writers (ovl_want_write[0])
|
|
* - OVL_I(inode)->lock (ovl_inode_lock[1])
|
|
*/
|
|
#define OVL_MAX_NESTING FILESYSTEM_MAX_STACK_DEPTH
|
|
|
|
static inline void ovl_lockdep_annotate_inode_mutex_key(struct inode *inode)
|
|
{
|
|
#ifdef CONFIG_LOCKDEP
|
|
static struct lock_class_key ovl_i_mutex_key[OVL_MAX_NESTING];
|
|
static struct lock_class_key ovl_i_mutex_dir_key[OVL_MAX_NESTING];
|
|
static struct lock_class_key ovl_i_lock_key[OVL_MAX_NESTING];
|
|
|
|
int depth = inode->i_sb->s_stack_depth - 1;
|
|
|
|
if (WARN_ON_ONCE(depth < 0 || depth >= OVL_MAX_NESTING))
|
|
depth = 0;
|
|
|
|
if (S_ISDIR(inode->i_mode))
|
|
lockdep_set_class(&inode->i_rwsem, &ovl_i_mutex_dir_key[depth]);
|
|
else
|
|
lockdep_set_class(&inode->i_rwsem, &ovl_i_mutex_key[depth]);
|
|
|
|
lockdep_set_class(&OVL_I(inode)->lock, &ovl_i_lock_key[depth]);
|
|
#endif
|
|
}
|
|
|
|
static void ovl_next_ino(struct inode *inode)
|
|
{
|
|
struct ovl_fs *ofs = inode->i_sb->s_fs_info;
|
|
|
|
inode->i_ino = atomic_long_inc_return(&ofs->last_ino);
|
|
if (unlikely(!inode->i_ino))
|
|
inode->i_ino = atomic_long_inc_return(&ofs->last_ino);
|
|
}
|
|
|
|
static void ovl_map_ino(struct inode *inode, unsigned long ino, int fsid)
|
|
{
|
|
int xinobits = ovl_xino_bits(inode->i_sb);
|
|
unsigned int xinoshift = 64 - xinobits;
|
|
|
|
/*
|
|
* When d_ino is consistent with st_ino (samefs or i_ino has enough
|
|
* bits to encode layer), set the same value used for st_ino to i_ino,
|
|
* so inode number exposed via /proc/locks and a like will be
|
|
* consistent with d_ino and st_ino values. An i_ino value inconsistent
|
|
* with d_ino also causes nfsd readdirplus to fail.
|
|
*/
|
|
inode->i_ino = ino;
|
|
if (ovl_same_fs(inode->i_sb)) {
|
|
return;
|
|
} else if (xinobits && likely(!(ino >> xinoshift))) {
|
|
inode->i_ino |= (unsigned long)fsid << (xinoshift + 1);
|
|
return;
|
|
}
|
|
|
|
/*
|
|
* For directory inodes on non-samefs with xino disabled or xino
|
|
* overflow, we allocate a non-persistent inode number, to be used for
|
|
* resolving st_ino collisions in ovl_map_dev_ino().
|
|
*
|
|
* To avoid ino collision with legitimate xino values from upper
|
|
* layer (fsid 0), use the lowest xinobit to map the non
|
|
* persistent inode numbers to the unified st_ino address space.
|
|
*/
	if (S_ISDIR(inode->i_mode)) {
		ovl_next_ino(inode);
		if (xinobits) {
			inode->i_ino &= ~0UL >> xinobits;
			inode->i_ino |= 1UL << xinoshift;
		}
	}
}

void ovl_inode_init(struct inode *inode, struct ovl_inode_params *oip,
		    unsigned long ino, int fsid)
{
	struct inode *realinode;

	if (oip->upperdentry)
		OVL_I(inode)->__upperdentry = oip->upperdentry;
	if (oip->lowerpath && oip->lowerpath->dentry)
		OVL_I(inode)->lower = igrab(d_inode(oip->lowerpath->dentry));
	if (oip->lowerdata)
		OVL_I(inode)->lowerdata = igrab(d_inode(oip->lowerdata));

	realinode = ovl_inode_real(inode);
	ovl_copyattr(realinode, inode);
	ovl_copyflags(realinode, inode);
	ovl_map_ino(inode, ino, fsid);
}

static void ovl_fill_inode(struct inode *inode, umode_t mode, dev_t rdev)
{
	inode->i_mode = mode;
	inode->i_flags |= S_NOCMTIME;
#ifdef CONFIG_FS_POSIX_ACL
	inode->i_acl = inode->i_default_acl = ACL_DONT_CACHE;
#endif

	ovl_lockdep_annotate_inode_mutex_key(inode);

	switch (mode & S_IFMT) {
	case S_IFREG:
		inode->i_op = &ovl_file_inode_operations;
		inode->i_fop = &ovl_file_operations;
		inode->i_mapping->a_ops = &ovl_aops;
		break;

	case S_IFDIR:
		inode->i_op = &ovl_dir_inode_operations;
		inode->i_fop = &ovl_dir_operations;
		break;

	case S_IFLNK:
		inode->i_op = &ovl_symlink_inode_operations;
		break;

	default:
		inode->i_op = &ovl_special_inode_operations;
		init_special_inode(inode, mode, rdev);
		break;
	}
}

/*
 * With inodes index enabled, an overlay inode nlink counts the union of upper
 * hardlinks and non-covered lower hardlinks. During the lifetime of a non-pure
 * upper inode, the following nlink modifying operations can happen:
 *
 * 1. Lower hardlink copy up
 * 2. Upper hardlink created, unlinked or renamed over
 * 3. Lower hardlink whiteout or renamed over
 *
 * For the first, copy up case, the union nlink does not change, whether the
 * operation succeeds or fails, but the upper inode nlink may change.
 * Therefore, before copy up, we store the union nlink value relative to the
 * lower inode nlink in the index inode xattr .overlay.nlink.
 *
 * For the second, upper hardlink case, the union nlink should be incremented
 * or decremented IFF the operation succeeds, aligned with nlink change of the
 * upper inode. Therefore, before link/unlink/rename, we store the union nlink
 * value relative to the upper inode nlink in the index inode.
 *
 * For the last, lower cover up case, we simplify things by preceding the
 * whiteout or cover up with copy up. This makes sure that there is an index
 * upper inode where the nlink xattr can be stored before the copied up upper
 * entry is unlinked.
 */
#define OVL_NLINK_ADD_UPPER	(1 << 0)

/*
 * On-disk format for indexed nlink:
 *
 * nlink relative to the upper inode - "U[+-]NUM"
 * nlink relative to the lower inode - "L[+-]NUM"
 */
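/*
 * For example, "U+1" means the union nlink is the current upper inode
 * nlink plus one, and "L-2" means the current lower inode nlink minus
 * two: ovl_get_nlink() below decodes the sign and number and adds the
 * diff to the real inode's i_nlink.
 */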
static int ovl_set_nlink_common(struct dentry *dentry,
				struct dentry *realdentry, const char *format)
{
	struct inode *inode = d_inode(dentry);
	struct inode *realinode = d_inode(realdentry);
	char buf[13];
	int len;

	len = snprintf(buf, sizeof(buf), format,
		       (int) (inode->i_nlink - realinode->i_nlink));

	if (WARN_ON(len >= sizeof(buf)))
		return -EIO;

	return ovl_do_setxattr(OVL_FS(inode->i_sb), ovl_dentry_upper(dentry),
			       OVL_XATTR_NLINK, buf, len);
}

int ovl_set_nlink_upper(struct dentry *dentry)
{
	return ovl_set_nlink_common(dentry, ovl_dentry_upper(dentry), "U%+i");
}

int ovl_set_nlink_lower(struct dentry *dentry)
{
	return ovl_set_nlink_common(dentry, ovl_dentry_lower(dentry), "L%+i");
}

unsigned int ovl_get_nlink(struct ovl_fs *ofs, struct dentry *lowerdentry,
			   struct dentry *upperdentry,
			   unsigned int fallback)
{
	int nlink_diff;
	int nlink;
	char buf[13];
	int err;

	if (!lowerdentry || !upperdentry || d_inode(lowerdentry)->i_nlink == 1)
		return fallback;

	err = ovl_do_getxattr(ofs, upperdentry, OVL_XATTR_NLINK,
			      &buf, sizeof(buf) - 1);
	if (err < 0)
		goto fail;

	buf[err] = '\0';
	if ((buf[0] != 'L' && buf[0] != 'U') ||
	    (buf[1] != '+' && buf[1] != '-'))
		goto fail;

	err = kstrtoint(buf + 1, 10, &nlink_diff);
	if (err < 0)
		goto fail;

	nlink = d_inode(buf[0] == 'L' ? lowerdentry : upperdentry)->i_nlink;
	nlink += nlink_diff;

	if (nlink <= 0)
		goto fail;

	return nlink;

fail:
	pr_warn_ratelimited("failed to get index nlink (%pd2, err=%i)\n",
			    upperdentry, err);
	return fallback;
}

struct inode *ovl_new_inode(struct super_block *sb, umode_t mode, dev_t rdev)
{
	struct inode *inode;

	inode = new_inode(sb);
	if (inode)
		ovl_fill_inode(inode, mode, rdev);

	return inode;
}

static int ovl_inode_test(struct inode *inode, void *data)
{
	return inode->i_private == data;
}

static int ovl_inode_set(struct inode *inode, void *data)
{
	inode->i_private = data;
	return 0;
}

static bool ovl_verify_inode(struct inode *inode, struct dentry *lowerdentry,
			     struct dentry *upperdentry, bool strict)
{
	/*
	 * For directories, @strict verify from lookup path performs consistency
	 * checks, so NULL lower/upper in dentry must match NULL lower/upper in
	 * inode. Non @strict verify from NFS handle decode path passes NULL for
	 * 'unknown' lower/upper.
	 */
	if (S_ISDIR(inode->i_mode) && strict) {
		/* Real lower dir moved to upper layer under us? */
		if (!lowerdentry && ovl_inode_lower(inode))
			return false;

		/* Lookup of an uncovered redirect origin? */
		if (!upperdentry && ovl_inode_upper(inode))
			return false;
	}

	/*
	 * Allow non-NULL lower inode in ovl_inode even if lowerdentry is NULL.
	 * This happens when finding a copied up overlay inode for a renamed
	 * or hardlinked overlay dentry and the lower dentry cannot be followed
	 * by origin because the lower fs does not support file handles.
	 */
	if (lowerdentry && ovl_inode_lower(inode) != d_inode(lowerdentry))
		return false;

	/*
	 * Allow non-NULL __upperdentry in inode even if upperdentry is NULL.
	 * This happens when finding a lower alias for a copied up hard link.
	 */
	if (upperdentry && ovl_inode_upper(inode) != d_inode(upperdentry))
		return false;

	return true;
}

struct inode *ovl_lookup_inode(struct super_block *sb, struct dentry *real,
			       bool is_upper)
{
	struct inode *inode, *key = d_inode(real);

	inode = ilookup5(sb, (unsigned long) key, ovl_inode_test, key);
	if (!inode)
		return NULL;

	if (!ovl_verify_inode(inode, is_upper ? NULL : real,
			      is_upper ? real : NULL, false)) {
		iput(inode);
		return ERR_PTR(-ESTALE);
	}

	return inode;
}

bool ovl_lookup_trap_inode(struct super_block *sb, struct dentry *dir)
{
	struct inode *key = d_inode(dir);
	struct inode *trap;
	bool res;

	trap = ilookup5(sb, (unsigned long) key, ovl_inode_test, key);
	if (!trap)
		return false;

	res = IS_DEADDIR(trap) && !ovl_inode_upper(trap) &&
	      !ovl_inode_lower(trap);

	iput(trap);
	return res;
}

/*
 * Create an inode cache entry for a layer root dir, that will intentionally
 * fail ovl_verify_inode(), so any lookup that finds some layer root will
 * fail.
 */
struct inode *ovl_get_trap_inode(struct super_block *sb, struct dentry *dir)
{
	struct inode *key = d_inode(dir);
	struct inode *trap;

	if (!d_is_dir(dir))
		return ERR_PTR(-ENOTDIR);

	trap = iget5_locked(sb, (unsigned long) key, ovl_inode_test,
			    ovl_inode_set, key);
	if (!trap)
		return ERR_PTR(-ENOMEM);

	if (!(trap->i_state & I_NEW)) {
		/* Conflicting layer roots? */
		iput(trap);
		return ERR_PTR(-ELOOP);
	}

	trap->i_mode = S_IFDIR;
	trap->i_flags = S_DEAD;
	unlock_new_inode(trap);

	return trap;
}

/*
 * Does the overlay inode need to be hashed by lower inode?
 */
static bool ovl_hash_bylower(struct super_block *sb, struct dentry *upper,
			     struct dentry *lower, bool index)
{
	struct ovl_fs *ofs = sb->s_fs_info;

	/* No, if pure upper */
	if (!lower)
		return false;

	/* Yes, if already indexed */
	if (index)
		return true;

	/* Yes, if won't be copied up */
	if (!ovl_upper_mnt(ofs))
		return true;

	/* No, if lower hardlink is or will be broken on copy up */
	if ((upper || !ovl_indexdir(sb)) &&
	    !d_is_dir(lower) && d_inode(lower)->i_nlink > 1)
		return false;

	/* No, if non-indexed upper with NFS export */
	if (sb->s_export_op && upper)
		return false;

	/* Otherwise, hash by lower inode for fsnotify */
	return true;
}

static struct inode *ovl_iget5(struct super_block *sb, struct inode *newinode,
			       struct inode *key)
{
	return newinode ? inode_insert5(newinode, (unsigned long) key,
					ovl_inode_test, ovl_inode_set, key) :
			  iget5_locked(sb, (unsigned long) key,
				       ovl_inode_test, ovl_inode_set, key);
}

struct inode *ovl_get_inode(struct super_block *sb,
			    struct ovl_inode_params *oip)
{
	struct ovl_fs *ofs = OVL_FS(sb);
	struct dentry *upperdentry = oip->upperdentry;
	struct ovl_path *lowerpath = oip->lowerpath;
	struct inode *realinode = upperdentry ? d_inode(upperdentry) : NULL;
	struct inode *inode;
	struct dentry *lowerdentry = lowerpath ? lowerpath->dentry : NULL;
	bool bylower = ovl_hash_bylower(sb, upperdentry, lowerdentry,
					oip->index);
	int fsid = bylower ? lowerpath->layer->fsid : 0;
	bool is_dir;
	unsigned long ino = 0;
	int err = oip->newinode ? -EEXIST : -ENOMEM;

	if (!realinode)
		realinode = d_inode(lowerdentry);

	/*
	 * Copy up origin (lower) may exist for non-indexed upper, but we must
	 * not use lower as hash key if this is a broken hardlink.
	 */
	is_dir = S_ISDIR(realinode->i_mode);
	if (upperdentry || bylower) {
		struct inode *key = d_inode(bylower ? lowerdentry :
						      upperdentry);
		unsigned int nlink = is_dir ? 1 : realinode->i_nlink;

		inode = ovl_iget5(sb, oip->newinode, key);
		if (!inode)
			goto out_err;
		if (!(inode->i_state & I_NEW)) {
			/*
			 * Verify that the underlying files stored in the inode
			 * match those in the dentry.
			 */
			if (!ovl_verify_inode(inode, lowerdentry, upperdentry,
					      true)) {
				iput(inode);
				err = -ESTALE;
				goto out_err;
			}

			dput(upperdentry);
			kfree(oip->redirect);
			goto out;
		}

		/* Recalculate nlink for non-dir due to indexing */
		if (!is_dir)
			nlink = ovl_get_nlink(ofs, lowerdentry, upperdentry,
					      nlink);
		set_nlink(inode, nlink);
		ino = key->i_ino;
	} else {
		/* Lower hardlink that will be broken on copy up */
		inode = new_inode(sb);
		if (!inode) {
			err = -ENOMEM;
			goto out_err;
		}
		ino = realinode->i_ino;
		fsid = lowerpath->layer->fsid;
	}
	ovl_fill_inode(inode, realinode->i_mode, realinode->i_rdev);
	ovl_inode_init(inode, oip, ino, fsid);

	if (upperdentry && ovl_is_impuredir(sb, upperdentry))
		ovl_set_flag(OVL_IMPURE, inode);

	if (oip->index)
		ovl_set_flag(OVL_INDEX, inode);

	OVL_I(inode)->redirect = oip->redirect;

	if (bylower)
		ovl_set_flag(OVL_CONST_INO, inode);

	/* Check for non-merge dir that may have whiteouts */
	if (is_dir) {
		if (((upperdentry && lowerdentry) || oip->numlower > 1) ||
		    ovl_check_origin_xattr(ofs, upperdentry ?: lowerdentry)) {
			ovl_set_flag(OVL_WHITEOUTS, inode);
		}
	}

	if (inode->i_state & I_NEW)
		unlock_new_inode(inode);
out:
	return inode;

out_err:
	pr_warn_ratelimited("failed to get inode (%i)\n", err);
	inode = ERR_PTR(err);
	goto out;
}