Merge branch 'akpm' (patches from Andrew)

Merge updates from Andrew Morton:

 - a few misc things

 - ocfs2 updates

 - v9fs updates

 - MM

 - procfs updates

 - lib/ updates

 - autofs updates

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (118 commits)
  autofs: small cleanup in autofs_getpath()
  autofs: clean up includes
  autofs: comment on selinux changes needed for module autoload
  autofs: update MAINTAINERS entry for autofs
  autofs: use autofs instead of autofs4 in documentation
  autofs: rename autofs documentation files
  autofs: create autofs Kconfig and Makefile
  autofs: delete fs/autofs4 source files
  autofs: update fs/autofs4/Makefile
  autofs: update fs/autofs4/Kconfig
  autofs: copy autofs4 to autofs
  autofs4: use autofs instead of autofs4 everywhere
  autofs4: merge auto_fs.h and auto_fs4.h
  fs/binfmt_misc.c: do not allow offset overflow
  checkpatch: improve patch recognition
  lib/ucs2_string.c: add MODULE_LICENSE()
  lib/mpi: headers cleanup
  lib/percpu_ida.c: use _irqsave() instead of local_irq_save() + spin_lock
  lib/idr.c: remove simple_ida_lock
  lib/bitmap.c: micro-optimization for __bitmap_complement()
  ...
Linus Torvalds 2018-06-07 18:39:37 -07:00
commit 68abbe7295
147 changed files with 2950 additions and 2071 deletions


@ -1001,14 +1001,44 @@ PAGE_SIZE multiple when read back.
The total amount of memory currently being used by the cgroup
and its descendants.
memory.min
A read-write single value file which exists on non-root
cgroups. The default is "0".
Hard memory protection. If the memory usage of a cgroup
is within its effective min boundary, the cgroup's memory
won't be reclaimed under any conditions. If there is no
unprotected reclaimable memory available, OOM killer
is invoked.
Effective min boundary is limited by memory.min values of
all ancestor cgroups. If there is memory.min overcommitment
(child cgroups together claim more protected memory than the
parent allows), then each child cgroup gets a share of the
parent's protection proportional to its actual memory usage
below memory.min.
Putting more memory than generally available under this
protection is discouraged and may lead to constant OOMs.
If a memory cgroup is not populated with processes,
its memory.min is ignored.
memory.low
A read-write single value file which exists on non-root
cgroups. The default is "0".
Best-effort memory protection. If the memory usages of a
cgroup and all its ancestors are below their low boundaries,
the cgroup's memory won't be reclaimed unless memory can be
reclaimed from unprotected cgroups.
Best-effort memory protection. If the memory usage of a
cgroup is within its effective low boundary, the cgroup's
memory won't be reclaimed unless memory can be reclaimed
from unprotected cgroups.
Effective low boundary is limited by memory.low values of
all ancestor cgroups. If there is memory.low overcommitment
(child cgroups together claim more protected memory than the
parent allows), then each child cgroup gets a share of the
parent's protection proportional to its actual memory usage
below memory.low.
Putting more memory than generally available under this
protection is discouraged.
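As a concrete illustration of the proportional split described for both memory.min and memory.low, here is a minimal user-space sketch; it is not the kernel's implementation, and the function name and the MiB figures are made up for the example.

#include <stdio.h>

/*
 * Sketch of the overcommit rule above: when the children together
 * claim more protection than the parent's effective boundary
 * provides, each child receives a share proportional to its actual
 * memory usage below its own memory.min/memory.low.
 */
static unsigned long effective_protection(unsigned long parent_effective,
					  unsigned long own_protected,
					  unsigned long siblings_protected)
{
	if (siblings_protected <= parent_effective)
		return own_protected;	/* no overcommit: full protection */
	/* proportional split of the parent's reserve */
	return parent_effective * own_protected / siblings_protected;
}

int main(void)
{
	/* parent allows 100 MiB; children claim 80 + 120 = 200 MiB */
	printf("child A: %lu MiB\n", effective_protection(100, 80, 200));  /* 40 */
	printf("child B: %lu MiB\n", effective_protection(100, 120, 200)); /* 60 */
	return 0;
}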
@ -1199,6 +1229,27 @@ PAGE_SIZE multiple when read back.
Swap usage hard limit. If a cgroup's swap usage reaches this
limit, anonymous memory of the cgroup will not be swapped out.
memory.swap.events
A read-only flat-keyed file which exists on non-root cgroups.
The following entries are defined. Unless specified
otherwise, a value change in this file generates a file
modified event.
max
The number of times the cgroup's swap usage was about
to go over the max boundary and swap allocation
failed.
fail
The number of times swap allocation failed either
because of running out of swap system-wide or max
limit.
When reduced under the current usage, the existing swap
entries are reclaimed gradually and the swap usage may stay
higher than the limit for an extended period of time. This
reduces the impact on the workload and memory management.
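Since memory.swap.events is a flat-keyed file ("key value" per line), reading it from a program is straightforward; a minimal sketch follows, where the cgroup path "mygroup" is only an example.

#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/sys/fs/cgroup/mygroup/memory.swap.events", "r");
	char key[64];
	unsigned long long val;

	if (!f)
		return 1;
	/* each line of a flat-keyed file is "key value" */
	while (fscanf(f, "%63s %llu", key, &val) == 2) {
		if (!strcmp(key, "max") || !strcmp(key, "fail"))
			printf("%s = %llu\n", key, val);
	}
	fclose(f);
	return 0;
}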
Usage Guidelines
~~~~~~~~~~~~~~~~
@ -1934,17 +1985,8 @@ system performance due to overreclaim, to the point where the feature
becomes self-defeating.
The memory.low boundary on the other hand is a top-down allocated
reserve. A cgroup enjoys reclaim protection when it and all its
ancestors are below their low boundaries, which makes delegation of
subtrees possible. Secondly, new cgroups have no reserve per default
and in the common case most cgroups are eligible for the preferred
reclaim pass. This allows the new low boundary to be efficiently
implemented with just a minor addition to the generic reclaim code,
without the need for out-of-band data structures and reclaim passes.
Because the generic reclaim code considers all cgroups except for the
ones running low in the preferred first reclaim pass, overreclaim of
individual groups is eliminated as well, resulting in much better
overall workload performance.
reserve. A cgroup enjoys reclaim protection when it's within its low,
which makes delegation of subtrees possible.
The original high boundary, the hard limit, is defined as a strict
limit that can not budge, even if the OOM killer has to be called.


@ -218,6 +218,7 @@ line of text and contains the following stats separated by whitespace:
same_pages the number of same element filled pages written to this disk.
No memory is allocated for such pages.
pages_compacted the number of pages freed during compaction
huge_pages the number of incompressible pages
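Given the column order documented above (now ending in pages_compacted followed by the new huge_pages), a minimal mm_stat reader could look like the sketch below; the device path is an example and error handling is trimmed.

#include <stdio.h>

int main(void)
{
	unsigned long long orig, compr, used, limit, max_used, same,
			   compacted, huge;
	FILE *f = fopen("/sys/block/zram0/mm_stat", "r");

	if (!f)
		return 1;
	/* eight whitespace-separated stats, huge_pages last */
	if (fscanf(f, "%llu %llu %llu %llu %llu %llu %llu %llu",
		   &orig, &compr, &used, &limit, &max_used, &same,
		   &compacted, &huge) == 8)
		printf("huge (incompressible) pages: %llu\n", huge);
	fclose(f);
	return 0;
}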
9) Deactivate:
swapoff /dev/zram0
@ -242,5 +243,29 @@ to backing storage rather than keeping it in memory.
User should set up backing device via /sys/block/zramX/backing_dev
before disksize setting.
= memory tracking
With CONFIG_ZRAM_MEMORY_TRACKING, users can get information about
zram blocks. It can be useful for catching cold or incompressible
pages of a process with pagemap.
If you enable the feature, you can see block state via
/sys/kernel/debug/zram/zram0/block_state. The output is as follows:
300 75.033841 .wh
301 63.806904 s..
302 63.806919 ..h
First column is zram's block index.
Second column is access time since the system was booted.
Third column is state of the block:
(s: same page
w: page written to backing store
h: huge page)
The first line of the example above says the 300th block was accessed
at 75.033841 sec and that its state is huge, so it was written back
to the backing storage. It's a debugging feature, so no one should
rely on it to work properly.
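For reference, a sketch of consuming this "index access-time flags" format; the debugfs path matches the example above and the parsing is deliberately simple.

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/kernel/debug/zram/zram0/block_state", "r");
	unsigned long index;
	double ac_time;
	char flags[8];

	if (!f)
		return 1;
	while (fscanf(f, "%lu %lf %7s", &index, &ac_time, flags) == 3) {
		/* third flag character is 'h' for an incompressible page */
		if (flags[2] == 'h')
			printf("block %lu is huge (seen at %.6f sec)\n",
			       index, ac_time);
	}
	fclose(f);
	return 0;
}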
Nitin Gupta
ngupta@vflare.org


@ -1,6 +1,6 @@
#
# Feature name: pte_special
# Kconfig: __HAVE_ARCH_PTE_SPECIAL
# Kconfig: ARCH_HAS_PTE_SPECIAL
# description: arch supports the pte_special()/pte_mkspecial() VM APIs
#
-----------------------


@ -10,8 +10,8 @@ afs.txt
- info and examples for the distributed AFS (Andrew File System) fs.
affs.txt
- info and mount options for the Amiga Fast File System.
autofs4-mount-control.txt
- info on device control operations for autofs4 module.
autofs-mount-control.txt
- info on device control operations for autofs module.
automount-support.txt
- information about filesystem automount support.
befs.txt


@ -1,5 +1,5 @@
Miscellaneous Device control operations for the autofs4 kernel module
Miscellaneous Device control operations for the autofs kernel module
====================================================================
The problem
@ -164,7 +164,7 @@ possibility for future development due to the requirements of the
message bus architecture.
autofs4 Miscellaneous Device mount control interface
autofs Miscellaneous Device mount control interface
====================================================
The control interface is opening a device node, typically /dev/autofs.
@ -244,7 +244,7 @@ The device node ioctl operations implemented by this interface are:
AUTOFS_DEV_IOCTL_VERSION
------------------------
Get the major and minor version of the autofs4 device ioctl kernel module
Get the major and minor version of the autofs device ioctl kernel module
implementation. It requires an initialized struct autofs_dev_ioctl as an
input parameter and sets the version information in the passed in structure.
It returns 0 on success or the error -EINVAL if a version mismatch is
@ -254,7 +254,7 @@ detected.
AUTOFS_DEV_IOCTL_PROTOVER_CMD and AUTOFS_DEV_IOCTL_PROTOSUBVER_CMD
------------------------------------------------------------------
Get the major and minor version of the autofs4 protocol version understood
Get the major and minor version of the autofs protocol version understood
by loaded module. This call requires an initialized struct autofs_dev_ioctl
with the ioctlfd field set to a valid autofs mount point descriptor
and sets the requested version number in version field of struct args_protover
@ -404,4 +404,3 @@ type is also given we are looking for a particular autofs mount and if
a match isn't found a fail is returned. If the located path is the
root of a mount 1 is returned along with the super magic of the mount
or 0 otherwise.
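As a worked example of the fixed header all of these ioctls share, here is a user-space sketch of the AUTOFS_DEV_IOCTL_VERSION call described above. It assumes the uapi <linux/auto_dev-ioctl.h> header and its init_autofs_dev_ioctl() helper; opening /dev/autofs requires CAP_SYS_ADMIN, and error handling is trimmed.

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/auto_dev-ioctl.h>

int main(void)
{
	struct autofs_dev_ioctl param;
	int devfd = open("/dev/autofs", O_RDONLY);

	if (devfd < 0)
		return 1;
	/* fills in ver_major/ver_minor/size and sets ioctlfd = -1 */
	init_autofs_dev_ioctl(&param);
	if (ioctl(devfd, AUTOFS_DEV_IOCTL_VERSION, &param) == 0)
		printf("autofs device ioctl version %u.%u\n",
		       param.ver_major, param.ver_minor);
	close(devfd);
	return 0;
}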


@ -30,15 +30,15 @@ key advantages:
Context
-------
The "autofs4" filesystem module is only one part of an autofs system.
The "autofs" filesystem module is only one part of an autofs system.
There also needs to be a user-space program which looks up names
and mounts filesystems. This will often be the "automount" program,
though other tools including "systemd" can make use of "autofs4".
though other tools including "systemd" can make use of "autofs".
This document describes only the kernel module and the interactions
required with any user-space program. Subsequent text refers to this
as the "automount daemon" or simply "the daemon".
"autofs4" is a Linux kernel module with provides the "autofs"
"autofs" is a Linux kernel module with provides the "autofs"
filesystem type. Several "autofs" filesystems can be mounted and they
can each be managed separately, or all managed by the same daemon.
@ -215,7 +215,7 @@ of expiry.
The VFS also supports "expiry" of mounts using the MNT_EXPIRE flag to
the `umount` system call. Unmounting with MNT_EXPIRE will fail unless
a previous attempt had been made, and the filesystem has been inactive
and untouched since that previous attempt. autofs4 does not depend on
and untouched since that previous attempt. autofs does not depend on
this but has its own internal tracking of whether filesystems were
recently used. This allows individual names in the autofs directory
to expire separately.
@ -415,7 +415,7 @@ which can be used to communicate directly with the autofs filesystem.
It requires CAP_SYS_ADMIN for access.
The `ioctl`s that can be used on this device are described in a separate
document `autofs4-mount-control.txt`, and are summarized briefly here.
document `autofs-mount-control.txt`, and are summarized briefly here.
Each ioctl is passed a pointer to an `autofs_dev_ioctl` structure:
struct autofs_dev_ioctl {


@ -9,7 +9,7 @@ also be requested by userspace.
IN-KERNEL AUTOMOUNTING
======================
See section "Mount Traps" of Documentation/filesystems/autofs4.txt
See section "Mount Traps" of Documentation/filesystems/autofs.txt
Then from userspace, you can just do something like:


@ -460,7 +460,7 @@ this retry process in the next article.
Automount points are locations in the filesystem where an attempt to
lookup a name can trigger changes to how that lookup should be
handled, in particular by mounting a filesystem there. These are
covered in greater detail in autofs4.txt in the Linux documentation
covered in greater detail in autofs.txt in the Linux documentation
tree, but a few notes specifically related to path lookup are in order
here.


@ -7723,11 +7723,11 @@ W: https://linuxtv.org
S: Maintained
F: drivers/media/radio/radio-keene*
KERNEL AUTOMOUNTER v4 (AUTOFS4)
KERNEL AUTOMOUNTER
M: Ian Kent <raven@themaw.net>
L: autofs@vger.kernel.org
S: Maintained
F: fs/autofs4/
F: fs/autofs/
KERNEL BUILD + files below scripts/ (unless maintained elsewhere)
M: Masahiro Yamada <yamada.masahiro@socionext.com>


@ -48,6 +48,7 @@ config ARC
select HAVE_GENERIC_DMA_COHERENT
select HAVE_KERNEL_GZIP
select HAVE_KERNEL_LZMA
select ARCH_HAS_PTE_SPECIAL
config MIGHT_HAVE_PCI
bool


@ -320,8 +320,6 @@ PTE_BIT_FUNC(mkexec, |= (_PAGE_EXECUTE));
PTE_BIT_FUNC(mkspecial, |= (_PAGE_SPECIAL));
PTE_BIT_FUNC(mkhuge, |= (_PAGE_HW_SZ));
#define __HAVE_ARCH_PTE_SPECIAL
static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));


@ -8,6 +8,7 @@ config ARM
select ARCH_HAS_DEVMEM_IS_ALLOWED
select ARCH_HAS_ELF_RANDOMIZE
select ARCH_HAS_FORTIFY_SOURCE
select ARCH_HAS_PTE_SPECIAL if ARM_LPAE
select ARCH_HAS_SET_MEMORY
select ARCH_HAS_PHYS_TO_DMA
select ARCH_HAS_STRICT_KERNEL_RWX if MMU && !XIP_KERNEL


@ -219,7 +219,6 @@ static inline pte_t pte_mkspecial(pte_t pte)
pte_val(pte) |= L_PTE_SPECIAL;
return pte;
}
#define __HAVE_ARCH_PTE_SPECIAL
#define pmd_write(pmd) (pmd_isclear((pmd), L_PMD_SECT_RDONLY))
#define pmd_dirty(pmd) (pmd_isset((pmd), L_PMD_SECT_DIRTY))


@ -17,6 +17,7 @@ config ARM64
select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
select ARCH_HAS_KCOV
select ARCH_HAS_MEMBARRIER_SYNC_CORE
select ARCH_HAS_PTE_SPECIAL
select ARCH_HAS_SET_MEMORY
select ARCH_HAS_SG_CHAIN
select ARCH_HAS_STRICT_KERNEL_RWX


@ -306,8 +306,6 @@ static inline int pte_same(pte_t pte_a, pte_t pte_b)
#define HPAGE_MASK (~(HPAGE_SIZE - 1))
#define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT)
#define __HAVE_ARCH_PTE_SPECIAL
static inline pte_t pgd_pte(pgd_t pgd)
{
return __pte(pgd_val(pgd));


@ -135,6 +135,7 @@ config PPC
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_PHYS_TO_DMA
select ARCH_HAS_PMEM_API if PPC64
select ARCH_HAS_PTE_SPECIAL
select ARCH_HAS_MEMBARRIER_CALLBACKS
select ARCH_HAS_SCALED_CPUTIME if VIRT_CPU_ACCOUNTING_NATIVE
select ARCH_HAS_SG_CHAIN


@ -335,9 +335,6 @@ extern unsigned long pci_io_base;
/* Advertise special mapping type for AGP */
#define HAVE_PAGE_AGP
/* Advertise support for _PAGE_SPECIAL */
#define __HAVE_ARCH_PTE_SPECIAL
#ifndef __ASSEMBLY__
/*


@ -208,9 +208,6 @@ static inline bool pte_user(pte_t pte)
#define PAGE_AGP (PAGE_KERNEL_NC)
#define HAVE_PAGE_AGP
/* Advertise support for _PAGE_SPECIAL */
#define __HAVE_ARCH_PTE_SPECIAL
#ifndef _PAGE_READ
/* if not defined, we should not find _PAGE_WRITE too */
#define _PAGE_READ 0


@ -42,6 +42,7 @@ config RISCV
select THREAD_INFO_IN_TASK
select RISCV_TIMER
select GENERIC_IRQ_MULTI_HANDLER
select ARCH_HAS_PTE_SPECIAL
config MMU
def_bool y


@ -42,7 +42,4 @@
_PAGE_WRITE | _PAGE_EXEC | \
_PAGE_USER | _PAGE_GLOBAL))
/* Advertise support for _PAGE_SPECIAL */
#define __HAVE_ARCH_PTE_SPECIAL
#endif /* _ASM_RISCV_PGTABLE_BITS_H */


@ -65,6 +65,7 @@ config S390
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
select ARCH_HAS_KCOV
select ARCH_HAS_PTE_SPECIAL
select ARCH_HAS_SET_MEMORY
select ARCH_HAS_SG_CHAIN
select ARCH_HAS_STRICT_KERNEL_RWX


@ -171,7 +171,6 @@ static inline int is_module_addr(void *addr)
#define _PAGE_WRITE 0x020 /* SW pte write bit */
#define _PAGE_SPECIAL 0x040 /* SW associated with special page */
#define _PAGE_UNUSED 0x080 /* SW bit for pgste usage state */
#define __HAVE_ARCH_PTE_SPECIAL
#ifdef CONFIG_MEM_SOFT_DIRTY
#define _PAGE_SOFT_DIRTY 0x002 /* SW pte soft dirty bit */


@ -190,14 +190,15 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
if (!list_empty(&mm->context.pgtable_list)) {
page = list_first_entry(&mm->context.pgtable_list,
struct page, lru);
mask = atomic_read(&page->_mapcount);
mask = atomic_read(&page->_refcount) >> 24;
mask = (mask | (mask >> 4)) & 3;
if (mask != 3) {
table = (unsigned long *) page_to_phys(page);
bit = mask & 1; /* =1 -> second 2K */
if (bit)
table += PTRS_PER_PTE;
atomic_xor_bits(&page->_mapcount, 1U << bit);
atomic_xor_bits(&page->_refcount,
1U << (bit + 24));
list_del(&page->lru);
}
}
@ -218,12 +219,12 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
table = (unsigned long *) page_to_phys(page);
if (mm_alloc_pgste(mm)) {
/* Return 4K page table with PGSTEs */
atomic_set(&page->_mapcount, 3);
atomic_xor_bits(&page->_refcount, 3 << 24);
memset64((u64 *)table, _PAGE_INVALID, PTRS_PER_PTE);
memset64((u64 *)table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
} else {
/* Return the first 2K fragment of the page */
atomic_set(&page->_mapcount, 1);
atomic_xor_bits(&page->_refcount, 1 << 24);
memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE);
spin_lock_bh(&mm->context.lock);
list_add(&page->lru, &mm->context.pgtable_list);
@ -242,7 +243,8 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
/* Free 2K page table fragment of a 4K page */
bit = (__pa(table) & ~PAGE_MASK)/(PTRS_PER_PTE*sizeof(pte_t));
spin_lock_bh(&mm->context.lock);
mask = atomic_xor_bits(&page->_mapcount, 1U << bit);
mask = atomic_xor_bits(&page->_refcount, 1U << (bit + 24));
mask >>= 24;
if (mask & 3)
list_add(&page->lru, &mm->context.pgtable_list);
else
@ -253,7 +255,6 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
}
pgtable_page_dtor(page);
atomic_set(&page->_mapcount, -1);
__free_page(page);
}
@ -274,7 +275,8 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
}
bit = (__pa(table) & ~PAGE_MASK) / (PTRS_PER_PTE*sizeof(pte_t));
spin_lock_bh(&mm->context.lock);
mask = atomic_xor_bits(&page->_mapcount, 0x11U << bit);
mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
mask >>= 24;
if (mask & 3)
list_add_tail(&page->lru, &mm->context.pgtable_list);
else
@ -296,12 +298,13 @@ static void __tlb_remove_table(void *_table)
break;
case 1: /* lower 2K of a 4K page table */
case 2: /* higher 2K of a 4K page table */
if (atomic_xor_bits(&page->_mapcount, mask << 4) != 0)
mask = atomic_xor_bits(&page->_refcount, mask << (4 + 24));
mask >>= 24;
if (mask != 0)
break;
/* fallthrough */
case 3: /* 4K page table with pgstes */
pgtable_page_dtor(page);
atomic_set(&page->_mapcount, -1);
__free_page(page);
break;
}
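As a reading aid for the hunks above: the bookkeeping for the two 2K fragments of a 4K page-table page moves from page->_mapcount into the top byte of page->_refcount, leaving the low 24 bits (the ordinary reference count) untouched. A user-space model of the bit arithmetic, for illustration only:

#include <stdio.h>

static unsigned int refcount = 1;	/* low 24 bits: plain refcount */

/* mark one 2K half as in use: bit 0 = lower half, bit 1 = upper */
static void alloc_half(int bit)
{
	refcount ^= 1U << (bit + 24);
}

int main(void)
{
	unsigned int mask;

	alloc_half(0);
	alloc_half(1);
	mask = refcount >> 24;			/* the bookkeeping byte */
	mask = (mask | (mask >> 4)) & 3;	/* allocated or pending bits */
	printf("halves in use: %u, plain refcount: %u\n",
	       mask, refcount & 0xffffff);	/* prints 3 and 1 */
	return 0;
}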


@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
config SUPERH
def_bool y
select ARCH_HAS_PTE_SPECIAL
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_NO_COHERENT_DMA_MMAP if !MMU


@ -156,8 +156,6 @@ extern void page_table_range_init(unsigned long start, unsigned long end,
#define HAVE_ARCH_UNMAPPED_AREA
#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
#define __HAVE_ARCH_PTE_SPECIAL
#include <asm-generic/pgtable.h>
#endif /* __ASM_SH_PGTABLE_H */


@ -88,6 +88,7 @@ config SPARC64
select ARCH_USE_QUEUED_SPINLOCKS
select GENERIC_TIME_VSYSCALL
select ARCH_CLOCKSOURCE_DATA
select ARCH_HAS_PTE_SPECIAL
config ARCH_DEFCONFIG
string


@ -117,9 +117,6 @@ bool kern_addr_valid(unsigned long addr);
#define _PAGE_PMD_HUGE _AC(0x0100000000000000,UL) /* Huge page */
#define _PAGE_PUD_HUGE _PAGE_PMD_HUGE
/* Advertise support for _PAGE_SPECIAL */
#define __HAVE_ARCH_PTE_SPECIAL
/* SUN4U pte bits... */
#define _PAGE_SZ4MB_4U _AC(0x6000000000000000,UL) /* 4MB Page */
#define _PAGE_SZ512K_4U _AC(0x4000000000000000,UL) /* 512K Page */


@ -60,6 +60,7 @@ config X86
select ARCH_HAS_KCOV if X86_64
select ARCH_HAS_MEMBARRIER_SYNC_CORE
select ARCH_HAS_PMEM_API if X86_64
select ARCH_HAS_PTE_SPECIAL
select ARCH_HAS_REFCOUNT
select ARCH_HAS_UACCESS_FLUSHCACHE if X86_64
select ARCH_HAS_UACCESS_MCSAFE if X86_64


@ -65,7 +65,6 @@
#define _PAGE_PKEY_BIT2 (_AT(pteval_t, 0))
#define _PAGE_PKEY_BIT3 (_AT(pteval_t, 0))
#endif
#define __HAVE_ARCH_PTE_SPECIAL
#define _PAGE_PKEY_MASK (_PAGE_PKEY_BIT0 | \
_PAGE_PKEY_BIT1 | \


@ -114,13 +114,12 @@ static inline void pgd_list_del(pgd_t *pgd)
static void pgd_set_mm(pgd_t *pgd, struct mm_struct *mm)
{
BUILD_BUG_ON(sizeof(virt_to_page(pgd)->index) < sizeof(mm));
virt_to_page(pgd)->index = (pgoff_t)mm;
virt_to_page(pgd)->pt_mm = mm;
}
struct mm_struct *pgd_page_get_mm(struct page *page)
{
return (struct mm_struct *)page->index;
return page->pt_mm;
}
static void pgd_ctor(struct mm_struct *mm, pgd_t *pgd)


@ -13,7 +13,7 @@ config ZRAM
It has several use cases, for example: /tmp storage, use as swap
disks and maybe many more.
See zram.txt for more information.
See Documentation/blockdev/zram.txt for more information.
config ZRAM_WRITEBACK
bool "Write back incompressible page to backing device"
@ -25,4 +25,14 @@ config ZRAM_WRITEBACK
For this feature, admin should set up backing device via
/sys/block/zramX/backing_dev.
See zram.txt for more information.
See Documentation/blockdev/zram.txt for more information.
config ZRAM_MEMORY_TRACKING
bool "Track zRam block status"
depends on ZRAM && DEBUG_FS
help
With this feature, admin can track the state of allocated blocks
of zRAM. Admin could see the information via
/sys/kernel/debug/zram/zramX/block_state.
See Documentation/blockdev/zram.txt for more information.


@ -31,6 +31,7 @@
#include <linux/err.h>
#include <linux/idr.h>
#include <linux/sysfs.h>
#include <linux/debugfs.h>
#include <linux/cpuhotplug.h>
#include "zram_drv.h"
@ -52,11 +53,28 @@ static size_t huge_class_size;
static void zram_free_page(struct zram *zram, size_t index);
static void zram_slot_lock(struct zram *zram, u32 index)
{
bit_spin_lock(ZRAM_LOCK, &zram->table[index].value);
}
static void zram_slot_unlock(struct zram *zram, u32 index)
{
bit_spin_unlock(ZRAM_LOCK, &zram->table[index].value);
}
static inline bool init_done(struct zram *zram)
{
return zram->disksize;
}
static inline bool zram_allocated(struct zram *zram, u32 index)
{
return (zram->table[index].value >> (ZRAM_FLAG_SHIFT + 1)) ||
zram->table[index].handle;
}
static inline struct zram *dev_to_zram(struct device *dev)
{
return (struct zram *)dev_to_disk(dev)->private_data;
@ -73,7 +91,7 @@ static void zram_set_handle(struct zram *zram, u32 index, unsigned long handle)
}
/* flag operations require table entry bit_spin_lock() being held */
static int zram_test_flag(struct zram *zram, u32 index,
static bool zram_test_flag(struct zram *zram, u32 index,
enum zram_pageflags flag)
{
return zram->table[index].value & BIT(flag);
@ -600,6 +618,114 @@ static int read_from_bdev(struct zram *zram, struct bio_vec *bvec,
static void zram_wb_clear(struct zram *zram, u32 index) {}
#endif
#ifdef CONFIG_ZRAM_MEMORY_TRACKING
static struct dentry *zram_debugfs_root;
static void zram_debugfs_create(void)
{
zram_debugfs_root = debugfs_create_dir("zram", NULL);
}
static void zram_debugfs_destroy(void)
{
debugfs_remove_recursive(zram_debugfs_root);
}
static void zram_accessed(struct zram *zram, u32 index)
{
zram->table[index].ac_time = ktime_get_boottime();
}
static void zram_reset_access(struct zram *zram, u32 index)
{
zram->table[index].ac_time = 0;
}
static ssize_t read_block_state(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
char *kbuf;
ssize_t index, written = 0;
struct zram *zram = file->private_data;
unsigned long nr_pages = zram->disksize >> PAGE_SHIFT;
struct timespec64 ts;
kbuf = kvmalloc(count, GFP_KERNEL);
if (!kbuf)
return -ENOMEM;
down_read(&zram->init_lock);
if (!init_done(zram)) {
up_read(&zram->init_lock);
kvfree(kbuf);
return -EINVAL;
}
for (index = *ppos; index < nr_pages; index++) {
int copied;
zram_slot_lock(zram, index);
if (!zram_allocated(zram, index))
goto next;
ts = ktime_to_timespec64(zram->table[index].ac_time);
copied = snprintf(kbuf + written, count,
"%12zd %12lld.%06lu %c%c%c\n",
index, (s64)ts.tv_sec,
ts.tv_nsec / NSEC_PER_USEC,
zram_test_flag(zram, index, ZRAM_SAME) ? 's' : '.',
zram_test_flag(zram, index, ZRAM_WB) ? 'w' : '.',
zram_test_flag(zram, index, ZRAM_HUGE) ? 'h' : '.');
if (count < copied) {
zram_slot_unlock(zram, index);
break;
}
written += copied;
count -= copied;
next:
zram_slot_unlock(zram, index);
*ppos += 1;
}
up_read(&zram->init_lock);
if (copy_to_user(buf, kbuf, written))
written = -EFAULT;
kvfree(kbuf);
return written;
}
static const struct file_operations proc_zram_block_state_op = {
.open = simple_open,
.read = read_block_state,
.llseek = default_llseek,
};
static void zram_debugfs_register(struct zram *zram)
{
if (!zram_debugfs_root)
return;
zram->debugfs_dir = debugfs_create_dir(zram->disk->disk_name,
zram_debugfs_root);
debugfs_create_file("block_state", 0400, zram->debugfs_dir,
zram, &proc_zram_block_state_op);
}
static void zram_debugfs_unregister(struct zram *zram)
{
debugfs_remove_recursive(zram->debugfs_dir);
}
#else
static void zram_debugfs_create(void) {};
static void zram_debugfs_destroy(void) {};
static void zram_accessed(struct zram *zram, u32 index) {};
static void zram_reset_access(struct zram *zram, u32 index) {};
static void zram_debugfs_register(struct zram *zram) {};
static void zram_debugfs_unregister(struct zram *zram) {};
#endif
/*
* We switched to per-cpu streams and this attr is not needed anymore.
@ -719,14 +845,15 @@ static ssize_t mm_stat_show(struct device *dev,
max_used = atomic_long_read(&zram->stats.max_used_pages);
ret = scnprintf(buf, PAGE_SIZE,
"%8llu %8llu %8llu %8lu %8ld %8llu %8lu\n",
"%8llu %8llu %8llu %8lu %8ld %8llu %8lu %8llu\n",
orig_size << PAGE_SHIFT,
(u64)atomic64_read(&zram->stats.compr_data_size),
mem_used << PAGE_SHIFT,
zram->limit_pages << PAGE_SHIFT,
max_used << PAGE_SHIFT,
(u64)atomic64_read(&zram->stats.same_pages),
pool_stats.pages_compacted);
pool_stats.pages_compacted,
(u64)atomic64_read(&zram->stats.huge_pages));
up_read(&zram->init_lock);
return ret;
@ -753,16 +880,6 @@ static DEVICE_ATTR_RO(io_stat);
static DEVICE_ATTR_RO(mm_stat);
static DEVICE_ATTR_RO(debug_stat);
static void zram_slot_lock(struct zram *zram, u32 index)
{
bit_spin_lock(ZRAM_ACCESS, &zram->table[index].value);
}
static void zram_slot_unlock(struct zram *zram, u32 index)
{
bit_spin_unlock(ZRAM_ACCESS, &zram->table[index].value);
}
static void zram_meta_free(struct zram *zram, u64 disksize)
{
size_t num_pages = disksize >> PAGE_SHIFT;
@ -805,6 +922,13 @@ static void zram_free_page(struct zram *zram, size_t index)
{
unsigned long handle;
zram_reset_access(zram, index);
if (zram_test_flag(zram, index, ZRAM_HUGE)) {
zram_clear_flag(zram, index, ZRAM_HUGE);
atomic64_dec(&zram->stats.huge_pages);
}
if (zram_wb_enabled(zram) && zram_test_flag(zram, index, ZRAM_WB)) {
zram_wb_clear(zram, index);
atomic64_dec(&zram->stats.pages_stored);
@ -973,6 +1097,7 @@ compress_again:
}
if (unlikely(comp_len >= huge_class_size)) {
comp_len = PAGE_SIZE;
if (zram_wb_enabled(zram) && allow_wb) {
zcomp_stream_put(zram->comp);
ret = write_to_bdev(zram, bvec, index, bio, &element);
@ -984,7 +1109,6 @@ compress_again:
allow_wb = false;
goto compress_again;
}
comp_len = PAGE_SIZE;
}
/*
@ -1046,6 +1170,11 @@ out:
zram_slot_lock(zram, index);
zram_free_page(zram, index);
if (comp_len == PAGE_SIZE) {
zram_set_flag(zram, index, ZRAM_HUGE);
atomic64_inc(&zram->stats.huge_pages);
}
if (flags) {
zram_set_flag(zram, index, flags);
zram_set_element(zram, index, element);
@ -1166,6 +1295,10 @@ static int zram_bvec_rw(struct zram *zram, struct bio_vec *bvec, u32 index,
generic_end_io_acct(q, rw_acct, &zram->disk->part0, start_time);
zram_slot_lock(zram, index);
zram_accessed(zram, index);
zram_slot_unlock(zram, index);
if (unlikely(ret < 0)) {
if (!is_write)
atomic64_inc(&zram->stats.failed_reads);
@ -1577,6 +1710,7 @@ static int zram_add(void)
}
strlcpy(zram->compressor, default_compressor, sizeof(zram->compressor));
zram_debugfs_register(zram);
pr_info("Added device: %s\n", zram->disk->disk_name);
return device_id;
@ -1610,6 +1744,7 @@ static int zram_remove(struct zram *zram)
zram->claim = true;
mutex_unlock(&bdev->bd_mutex);
zram_debugfs_unregister(zram);
/*
* Remove sysfs first, so no one will perform a disksize
* store while we destroy the devices. This also helps during
@ -1712,6 +1847,7 @@ static void destroy_devices(void)
{
class_unregister(&zram_control_class);
idr_for_each(&zram_index_idr, &zram_remove_cb, NULL);
zram_debugfs_destroy();
idr_destroy(&zram_index_idr);
unregister_blkdev(zram_major, "zram");
cpuhp_remove_multi_state(CPUHP_ZCOMP_PREPARE);
@ -1733,6 +1869,7 @@ static int __init zram_init(void)
return ret;
}
zram_debugfs_create();
zram_major = register_blkdev(0, "zram");
if (zram_major <= 0) {
pr_err("Unable to get major number\n");


@ -43,10 +43,11 @@
/* Flags for zram pages (table[page_no].value) */
enum zram_pageflags {
/* Page consists of the same element */
ZRAM_SAME = ZRAM_FLAG_SHIFT,
ZRAM_ACCESS, /* page is now accessed */
/* zram slot is locked */
ZRAM_LOCK = ZRAM_FLAG_SHIFT,
ZRAM_SAME, /* Page consists of the same element */
ZRAM_WB, /* page is stored on backing_device */
ZRAM_HUGE, /* Incompressible page */
__NR_ZRAM_PAGEFLAGS,
};
@ -60,6 +61,9 @@ struct zram_table_entry {
unsigned long element;
};
unsigned long value;
#ifdef CONFIG_ZRAM_MEMORY_TRACKING
ktime_t ac_time;
#endif
};
struct zram_stats {
@ -71,6 +75,7 @@ struct zram_stats {
atomic64_t invalid_io; /* non-page-aligned I/O requests */
atomic64_t notify_free; /* no. of swap slot free notifications */
atomic64_t same_pages; /* no. of same element filled pages */
atomic64_t huge_pages; /* no. of huge pages */
atomic64_t pages_stored; /* no. of pages currently stored */
atomic_long_t max_used_pages; /* no. of maximum pages stored */
atomic64_t writestall; /* no. of write slow paths */
@ -107,5 +112,8 @@ struct zram {
unsigned long nr_pages;
spinlock_t bitmap_lock;
#endif
#ifdef CONFIG_ZRAM_MEMORY_TRACKING
struct dentry *debugfs_dir;
#endif
};
#endif
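For orientation, the flags enumerated above occupy the high bits of table[index].value, next to the compressed object size kept in the low bits. A small stand-alone model follows; the shift mirrors ZRAM_FLAG_SHIFT but is hard-coded here for illustration.

#include <stdio.h>

#define FLAG_SHIFT 24	/* stand-in for ZRAM_FLAG_SHIFT */
enum { F_LOCK = FLAG_SHIFT, F_SAME, F_WB, F_HUGE };

int main(void)
{
	unsigned long value = 0;

	value |= 4096;			/* compressed object size, low bits */
	value |= 1UL << F_HUGE;		/* flag an incompressible page */
	printf("size=%lu huge=%lu\n",
	       value & ((1UL << FLAG_SHIFT) - 1),
	       (value >> F_HUGE) & 1);
	return 0;
}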


@ -210,12 +210,12 @@ static int v9fs_parse_options(struct v9fs_session_info *v9ses, char *opts)
p9_debug(P9_DEBUG_ERROR,
"integer field, but no integer?\n");
ret = r;
continue;
}
v9ses->debug = option;
} else {
v9ses->debug = option;
#ifdef CONFIG_NET_9P_DEBUG
p9_debug_level = option;
p9_debug_level = option;
#endif
}
break;
case Opt_dfltuid:
@ -231,7 +231,6 @@ static int v9fs_parse_options(struct v9fs_session_info *v9ses, char *opts)
p9_debug(P9_DEBUG_ERROR,
"uid field, but not a uid?\n");
ret = -EINVAL;
continue;
}
break;
case Opt_dfltgid:
@ -247,7 +246,6 @@ static int v9fs_parse_options(struct v9fs_session_info *v9ses, char *opts)
p9_debug(P9_DEBUG_ERROR,
"gid field, but not a gid?\n");
ret = -EINVAL;
continue;
}
break;
case Opt_afid:
@ -256,9 +254,9 @@ static int v9fs_parse_options(struct v9fs_session_info *v9ses, char *opts)
p9_debug(P9_DEBUG_ERROR,
"integer field, but no integer?\n");
ret = r;
continue;
} else {
v9ses->afid = option;
}
v9ses->afid = option;
break;
case Opt_uname:
kfree(v9ses->uname);
@ -306,13 +304,12 @@ static int v9fs_parse_options(struct v9fs_session_info *v9ses, char *opts)
"problem allocating copy of cache arg\n");
goto free_and_return;
}
ret = get_cache_mode(s);
if (ret == -EINVAL) {
kfree(s);
goto free_and_return;
}
r = get_cache_mode(s);
if (r < 0)
ret = r;
else
v9ses->cache = r;
v9ses->cache = ret;
kfree(s);
break;
@ -341,14 +338,12 @@ static int v9fs_parse_options(struct v9fs_session_info *v9ses, char *opts)
pr_info("Unknown access argument %s\n",
s);
kfree(s);
goto free_and_return;
continue;
}
v9ses->uid = make_kuid(current_user_ns(), uid);
if (!uid_valid(v9ses->uid)) {
ret = -EINVAL;
pr_info("Uknown uid %s\n", s);
kfree(s);
goto free_and_return;
}
}


@ -108,6 +108,7 @@ source "fs/notify/Kconfig"
source "fs/quota/Kconfig"
source "fs/autofs/Kconfig"
source "fs/autofs4/Kconfig"
source "fs/fuse/Kconfig"
source "fs/overlayfs/Kconfig"
@ -203,6 +204,9 @@ config HUGETLBFS
config HUGETLB_PAGE
def_bool HUGETLBFS
config MEMFD_CREATE
def_bool TMPFS || HUGETLBFS
config ARCH_HAS_GIGANTIC_PAGE
bool


@ -102,6 +102,7 @@ obj-$(CONFIG_AFFS_FS) += affs/
obj-$(CONFIG_ROMFS_FS) += romfs/
obj-$(CONFIG_QNX4FS_FS) += qnx4/
obj-$(CONFIG_QNX6FS_FS) += qnx6/
obj-$(CONFIG_AUTOFS_FS) += autofs/
obj-$(CONFIG_AUTOFS4_FS) += autofs4/
obj-$(CONFIG_ADFS_FS) += adfs/
obj-$(CONFIG_FUSE_FS) += fuse/

fs/autofs/Kconfig (new file, 20 lines)

@ -0,0 +1,20 @@
config AUTOFS_FS
tristate "Kernel automounter support (supports v3, v4 and v5)"
default n
help
The automounter is a tool to automatically mount remote file systems
on demand. This implementation is partially kernel-based to reduce
overhead in the already-mounted case; this is unlike the BSD
automounter (amd), which is a pure user space daemon.
To use the automounter you need the user-space tools from
<https://www.kernel.org/pub/linux/daemons/autofs/>; you also want
to answer Y to "NFS file system support", below.
To compile this support as a module, choose M here: the module will be
called autofs.
If you are not a part of a fairly large, distributed network or
don't have a laptop which needs to dynamically reconfigure to the
local network, you probably do not need an automounter, and can say
N here.

fs/autofs/Makefile (new file, 7 lines)

@ -0,0 +1,7 @@
#
# Makefile for the linux autofs-filesystem routines.
#
obj-$(CONFIG_AUTOFS_FS) += autofs.o
autofs-objs := init.o inode.o root.o symlink.o waitq.o expire.o dev-ioctl.o


@ -9,7 +9,7 @@
/* Internal header file for autofs */
#include <linux/auto_fs4.h>
#include <linux/auto_fs.h>
#include <linux/auto_dev-ioctl.h>
#include <linux/kernel.h>
@ -25,7 +25,7 @@
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/completion.h>
#include <asm/current.h>
#include <linux/file.h>
/* This is the range of ioctl() numbers we claim as ours */
#define AUTOFS_IOC_FIRST AUTOFS_IOC_READY
@ -122,44 +122,44 @@ struct autofs_sb_info {
struct rcu_head rcu;
};
static inline struct autofs_sb_info *autofs4_sbi(struct super_block *sb)
static inline struct autofs_sb_info *autofs_sbi(struct super_block *sb)
{
return (struct autofs_sb_info *)(sb->s_fs_info);
}
static inline struct autofs_info *autofs4_dentry_ino(struct dentry *dentry)
static inline struct autofs_info *autofs_dentry_ino(struct dentry *dentry)
{
return (struct autofs_info *)(dentry->d_fsdata);
}
/* autofs4_oz_mode(): do we see the man behind the curtain? (The
/* autofs_oz_mode(): do we see the man behind the curtain? (The
* processes which do manipulations for us in user space see the raw
* filesystem without "magic".)
*/
static inline int autofs4_oz_mode(struct autofs_sb_info *sbi)
static inline int autofs_oz_mode(struct autofs_sb_info *sbi)
{
return sbi->catatonic || task_pgrp(current) == sbi->oz_pgrp;
}
struct inode *autofs4_get_inode(struct super_block *, umode_t);
void autofs4_free_ino(struct autofs_info *);
struct inode *autofs_get_inode(struct super_block *, umode_t);
void autofs_free_ino(struct autofs_info *);
/* Expiration */
int is_autofs4_dentry(struct dentry *);
int autofs4_expire_wait(const struct path *path, int rcu_walk);
int autofs4_expire_run(struct super_block *, struct vfsmount *,
struct autofs_sb_info *,
struct autofs_packet_expire __user *);
int autofs4_do_expire_multi(struct super_block *sb, struct vfsmount *mnt,
struct autofs_sb_info *sbi, int when);
int autofs4_expire_multi(struct super_block *, struct vfsmount *,
struct autofs_sb_info *, int __user *);
struct dentry *autofs4_expire_direct(struct super_block *sb,
struct vfsmount *mnt,
struct autofs_sb_info *sbi, int how);
struct dentry *autofs4_expire_indirect(struct super_block *sb,
struct vfsmount *mnt,
struct autofs_sb_info *sbi, int how);
int is_autofs_dentry(struct dentry *);
int autofs_expire_wait(const struct path *path, int rcu_walk);
int autofs_expire_run(struct super_block *, struct vfsmount *,
struct autofs_sb_info *,
struct autofs_packet_expire __user *);
int autofs_do_expire_multi(struct super_block *sb, struct vfsmount *mnt,
struct autofs_sb_info *sbi, int when);
int autofs_expire_multi(struct super_block *, struct vfsmount *,
struct autofs_sb_info *, int __user *);
struct dentry *autofs_expire_direct(struct super_block *sb,
struct vfsmount *mnt,
struct autofs_sb_info *sbi, int how);
struct dentry *autofs_expire_indirect(struct super_block *sb,
struct vfsmount *mnt,
struct autofs_sb_info *sbi, int how);
/* Device node initialization */
@ -168,11 +168,11 @@ void autofs_dev_ioctl_exit(void);
/* Operations structures */
extern const struct inode_operations autofs4_symlink_inode_operations;
extern const struct inode_operations autofs4_dir_inode_operations;
extern const struct file_operations autofs4_dir_operations;
extern const struct file_operations autofs4_root_operations;
extern const struct dentry_operations autofs4_dentry_operations;
extern const struct inode_operations autofs_symlink_inode_operations;
extern const struct inode_operations autofs_dir_inode_operations;
extern const struct file_operations autofs_dir_operations;
extern const struct file_operations autofs_root_operations;
extern const struct dentry_operations autofs_dentry_operations;
/* VFS automount flags management functions */
static inline void __managed_dentry_set_managed(struct dentry *dentry)
@ -201,9 +201,9 @@ static inline void managed_dentry_clear_managed(struct dentry *dentry)
/* Initializing function */
int autofs4_fill_super(struct super_block *, void *, int);
struct autofs_info *autofs4_new_ino(struct autofs_sb_info *);
void autofs4_clean_ino(struct autofs_info *);
int autofs_fill_super(struct super_block *, void *, int);
struct autofs_info *autofs_new_ino(struct autofs_sb_info *);
void autofs_clean_ino(struct autofs_info *);
static inline int autofs_prepare_pipe(struct file *pipe)
{
@ -218,25 +218,25 @@ static inline int autofs_prepare_pipe(struct file *pipe)
/* Queue management functions */
int autofs4_wait(struct autofs_sb_info *,
int autofs_wait(struct autofs_sb_info *,
const struct path *, enum autofs_notify);
int autofs4_wait_release(struct autofs_sb_info *, autofs_wqt_t, int);
void autofs4_catatonic_mode(struct autofs_sb_info *);
int autofs_wait_release(struct autofs_sb_info *, autofs_wqt_t, int);
void autofs_catatonic_mode(struct autofs_sb_info *);
static inline u32 autofs4_get_dev(struct autofs_sb_info *sbi)
static inline u32 autofs_get_dev(struct autofs_sb_info *sbi)
{
return new_encode_dev(sbi->sb->s_dev);
}
static inline u64 autofs4_get_ino(struct autofs_sb_info *sbi)
static inline u64 autofs_get_ino(struct autofs_sb_info *sbi)
{
return d_inode(sbi->sb->s_root)->i_ino;
}
static inline void __autofs4_add_expiring(struct dentry *dentry)
static inline void __autofs_add_expiring(struct dentry *dentry)
{
struct autofs_sb_info *sbi = autofs4_sbi(dentry->d_sb);
struct autofs_info *ino = autofs4_dentry_ino(dentry);
struct autofs_sb_info *sbi = autofs_sbi(dentry->d_sb);
struct autofs_info *ino = autofs_dentry_ino(dentry);
if (ino) {
if (list_empty(&ino->expiring))
@ -244,10 +244,10 @@ static inline void __autofs4_add_expiring(struct dentry *dentry)
}
}
static inline void autofs4_add_expiring(struct dentry *dentry)
static inline void autofs_add_expiring(struct dentry *dentry)
{
struct autofs_sb_info *sbi = autofs4_sbi(dentry->d_sb);
struct autofs_info *ino = autofs4_dentry_ino(dentry);
struct autofs_sb_info *sbi = autofs_sbi(dentry->d_sb);
struct autofs_info *ino = autofs_dentry_ino(dentry);
if (ino) {
spin_lock(&sbi->lookup_lock);
@ -257,10 +257,10 @@ static inline void autofs4_add_expiring(struct dentry *dentry)
}
}
static inline void autofs4_del_expiring(struct dentry *dentry)
static inline void autofs_del_expiring(struct dentry *dentry)
{
struct autofs_sb_info *sbi = autofs4_sbi(dentry->d_sb);
struct autofs_info *ino = autofs4_dentry_ino(dentry);
struct autofs_sb_info *sbi = autofs_sbi(dentry->d_sb);
struct autofs_info *ino = autofs_dentry_ino(dentry);
if (ino) {
spin_lock(&sbi->lookup_lock);
@ -270,4 +270,4 @@ static inline void autofs4_del_expiring(struct dentry *dentry)
}
}
void autofs4_kill_sb(struct super_block *);
void autofs_kill_sb(struct super_block *);


@ -7,23 +7,10 @@
* option, any later version, incorporated herein by reference.
*/
#include <linux/module.h>
#include <linux/vmalloc.h>
#include <linux/miscdevice.h>
#include <linux/init.h>
#include <linux/wait.h>
#include <linux/namei.h>
#include <linux/fcntl.h>
#include <linux/file.h>
#include <linux/fdtable.h>
#include <linux/sched.h>
#include <linux/cred.h>
#include <linux/compat.h>
#include <linux/syscalls.h>
#include <linux/magic.h>
#include <linux/dcache.h>
#include <linux/uaccess.h>
#include <linux/slab.h>
#include "autofs_i.h"
@ -166,7 +153,7 @@ static struct autofs_sb_info *autofs_dev_ioctl_sbi(struct file *f)
if (f) {
inode = file_inode(f);
sbi = autofs4_sbi(inode->i_sb);
sbi = autofs_sbi(inode->i_sb);
}
return sbi;
}
@ -236,7 +223,7 @@ static int test_by_dev(const struct path *path, void *p)
static int test_by_type(const struct path *path, void *p)
{
struct autofs_info *ino = autofs4_dentry_ino(path->dentry);
struct autofs_info *ino = autofs_dentry_ino(path->dentry);
return ino && ino->sbi->type & *(unsigned *)p;
}
@ -324,7 +311,7 @@ static int autofs_dev_ioctl_ready(struct file *fp,
autofs_wqt_t token;
token = (autofs_wqt_t) param->ready.token;
return autofs4_wait_release(sbi, token, 0);
return autofs_wait_release(sbi, token, 0);
}
/*
@ -340,7 +327,7 @@ static int autofs_dev_ioctl_fail(struct file *fp,
token = (autofs_wqt_t) param->fail.token;
status = param->fail.status < 0 ? param->fail.status : -ENOENT;
return autofs4_wait_release(sbi, token, status);
return autofs_wait_release(sbi, token, status);
}
/*
@ -412,7 +399,7 @@ static int autofs_dev_ioctl_catatonic(struct file *fp,
struct autofs_sb_info *sbi,
struct autofs_dev_ioctl *param)
{
autofs4_catatonic_mode(sbi);
autofs_catatonic_mode(sbi);
return 0;
}
@ -459,10 +446,10 @@ static int autofs_dev_ioctl_requester(struct file *fp,
if (err)
goto out;
ino = autofs4_dentry_ino(path.dentry);
ino = autofs_dentry_ino(path.dentry);
if (ino) {
err = 0;
autofs4_expire_wait(&path, 0);
autofs_expire_wait(&path, 0);
spin_lock(&sbi->fs_lock);
param->requester.uid =
from_kuid_munged(current_user_ns(), ino->uid);
@ -489,7 +476,7 @@ static int autofs_dev_ioctl_expire(struct file *fp,
how = param->expire.how;
mnt = fp->f_path.mnt;
return autofs4_do_expire_multi(sbi->sb, mnt, sbi, how);
return autofs_do_expire_multi(sbi->sb, mnt, sbi, how);
}
/* Check if autofs mount point is in use */
@ -686,7 +673,7 @@ static int _autofs_dev_ioctl(unsigned int command,
* Admin needs to be able to set the mount catatonic in
* order to be able to perform the re-open.
*/
if (!autofs4_oz_mode(sbi) &&
if (!autofs_oz_mode(sbi) &&
cmd != AUTOFS_DEV_IOCTL_CATATONIC_CMD) {
err = -EACCES;
fput(fp);


@ -13,10 +13,10 @@
static unsigned long now;
/* Check if a dentry can be expired */
static inline int autofs4_can_expire(struct dentry *dentry,
unsigned long timeout, int do_now)
static inline int autofs_can_expire(struct dentry *dentry,
unsigned long timeout, int do_now)
{
struct autofs_info *ino = autofs4_dentry_ino(dentry);
struct autofs_info *ino = autofs_dentry_ino(dentry);
/* dentry in the process of being deleted */
if (ino == NULL)
@ -31,7 +31,7 @@ static inline int autofs4_can_expire(struct dentry *dentry,
}
/* Check a mount point for busyness */
static int autofs4_mount_busy(struct vfsmount *mnt, struct dentry *dentry)
static int autofs_mount_busy(struct vfsmount *mnt, struct dentry *dentry)
{
struct dentry *top = dentry;
struct path path = {.mnt = mnt, .dentry = dentry};
@ -44,8 +44,8 @@ static int autofs4_mount_busy(struct vfsmount *mnt, struct dentry *dentry)
if (!follow_down_one(&path))
goto done;
if (is_autofs4_dentry(path.dentry)) {
struct autofs_sb_info *sbi = autofs4_sbi(path.dentry->d_sb);
if (is_autofs_dentry(path.dentry)) {
struct autofs_sb_info *sbi = autofs_sbi(path.dentry->d_sb);
/* This is an autofs submount, we can't expire it */
if (autofs_type_indirect(sbi->type))
@ -56,7 +56,7 @@ static int autofs4_mount_busy(struct vfsmount *mnt, struct dentry *dentry)
if (!may_umount_tree(path.mnt)) {
struct autofs_info *ino;
ino = autofs4_dentry_ino(top);
ino = autofs_dentry_ino(top);
ino->last_used = jiffies;
goto done;
}
@ -74,7 +74,7 @@ done:
static struct dentry *get_next_positive_subdir(struct dentry *prev,
struct dentry *root)
{
struct autofs_sb_info *sbi = autofs4_sbi(root->d_sb);
struct autofs_sb_info *sbi = autofs_sbi(root->d_sb);
struct list_head *next;
struct dentry *q;
@ -121,7 +121,7 @@ cont:
static struct dentry *get_next_positive_dentry(struct dentry *prev,
struct dentry *root)
{
struct autofs_sb_info *sbi = autofs4_sbi(root->d_sb);
struct autofs_sb_info *sbi = autofs_sbi(root->d_sb);
struct list_head *next;
struct dentry *p, *ret;
@ -184,10 +184,10 @@ again:
* The tree is not busy iff no mountpoints are busy and there are no
* autofs submounts.
*/
static int autofs4_direct_busy(struct vfsmount *mnt,
struct dentry *top,
unsigned long timeout,
int do_now)
static int autofs_direct_busy(struct vfsmount *mnt,
struct dentry *top,
unsigned long timeout,
int do_now)
{
pr_debug("top %p %pd\n", top, top);
@ -195,14 +195,14 @@ static int autofs4_direct_busy(struct vfsmount *mnt,
if (!may_umount_tree(mnt)) {
struct autofs_info *ino;
ino = autofs4_dentry_ino(top);
ino = autofs_dentry_ino(top);
if (ino)
ino->last_used = jiffies;
return 1;
}
/* Timeout of a direct mount is determined by its top dentry */
if (!autofs4_can_expire(top, timeout, do_now))
if (!autofs_can_expire(top, timeout, do_now))
return 1;
return 0;
@ -212,12 +212,12 @@ static int autofs4_direct_busy(struct vfsmount *mnt,
* Check a directory tree of mount points for busyness
* The tree is not busy iff no mountpoints are busy
*/
static int autofs4_tree_busy(struct vfsmount *mnt,
struct dentry *top,
unsigned long timeout,
int do_now)
static int autofs_tree_busy(struct vfsmount *mnt,
struct dentry *top,
unsigned long timeout,
int do_now)
{
struct autofs_info *top_ino = autofs4_dentry_ino(top);
struct autofs_info *top_ino = autofs_dentry_ino(top);
struct dentry *p;
pr_debug("top %p %pd\n", top, top);
@ -237,13 +237,13 @@ static int autofs4_tree_busy(struct vfsmount *mnt,
* If the fs is busy update the expiry counter.
*/
if (d_mountpoint(p)) {
if (autofs4_mount_busy(mnt, p)) {
if (autofs_mount_busy(mnt, p)) {
top_ino->last_used = jiffies;
dput(p);
return 1;
}
} else {
struct autofs_info *ino = autofs4_dentry_ino(p);
struct autofs_info *ino = autofs_dentry_ino(p);
unsigned int ino_count = atomic_read(&ino->count);
/* allow for dget above and top is already dgot */
@ -261,16 +261,16 @@ static int autofs4_tree_busy(struct vfsmount *mnt,
}
/* Timeout of a tree mount is ultimately determined by its top dentry */
if (!autofs4_can_expire(top, timeout, do_now))
if (!autofs_can_expire(top, timeout, do_now))
return 1;
return 0;
}
static struct dentry *autofs4_check_leaves(struct vfsmount *mnt,
struct dentry *parent,
unsigned long timeout,
int do_now)
static struct dentry *autofs_check_leaves(struct vfsmount *mnt,
struct dentry *parent,
unsigned long timeout,
int do_now)
{
struct dentry *p;
@ -282,11 +282,11 @@ static struct dentry *autofs4_check_leaves(struct vfsmount *mnt,
if (d_mountpoint(p)) {
/* Can we umount this guy */
if (autofs4_mount_busy(mnt, p))
if (autofs_mount_busy(mnt, p))
continue;
/* Can we expire this guy */
if (autofs4_can_expire(p, timeout, do_now))
if (autofs_can_expire(p, timeout, do_now))
return p;
}
}
@ -294,10 +294,10 @@ static struct dentry *autofs4_check_leaves(struct vfsmount *mnt,
}
/* Check if we can expire a direct mount (possibly a tree) */
struct dentry *autofs4_expire_direct(struct super_block *sb,
struct vfsmount *mnt,
struct autofs_sb_info *sbi,
int how)
struct dentry *autofs_expire_direct(struct super_block *sb,
struct vfsmount *mnt,
struct autofs_sb_info *sbi,
int how)
{
unsigned long timeout;
struct dentry *root = dget(sb->s_root);
@ -310,9 +310,9 @@ struct dentry *autofs4_expire_direct(struct super_block *sb,
now = jiffies;
timeout = sbi->exp_timeout;
if (!autofs4_direct_busy(mnt, root, timeout, do_now)) {
if (!autofs_direct_busy(mnt, root, timeout, do_now)) {
spin_lock(&sbi->fs_lock);
ino = autofs4_dentry_ino(root);
ino = autofs_dentry_ino(root);
/* No point expiring a pending mount */
if (ino->flags & AUTOFS_INF_PENDING) {
spin_unlock(&sbi->fs_lock);
@ -321,7 +321,7 @@ struct dentry *autofs4_expire_direct(struct super_block *sb,
ino->flags |= AUTOFS_INF_WANT_EXPIRE;
spin_unlock(&sbi->fs_lock);
synchronize_rcu();
if (!autofs4_direct_busy(mnt, root, timeout, do_now)) {
if (!autofs_direct_busy(mnt, root, timeout, do_now)) {
spin_lock(&sbi->fs_lock);
ino->flags |= AUTOFS_INF_EXPIRING;
init_completion(&ino->expire_complete);
@ -350,7 +350,7 @@ static struct dentry *should_expire(struct dentry *dentry,
{
int do_now = how & AUTOFS_EXP_IMMEDIATE;
int exp_leaves = how & AUTOFS_EXP_LEAVES;
struct autofs_info *ino = autofs4_dentry_ino(dentry);
struct autofs_info *ino = autofs_dentry_ino(dentry);
unsigned int ino_count;
/* No point expiring a pending mount */
@ -367,11 +367,11 @@ static struct dentry *should_expire(struct dentry *dentry,
pr_debug("checking mountpoint %p %pd\n", dentry, dentry);
/* Can we umount this guy */
if (autofs4_mount_busy(mnt, dentry))
if (autofs_mount_busy(mnt, dentry))
return NULL;
/* Can we expire this guy */
if (autofs4_can_expire(dentry, timeout, do_now))
if (autofs_can_expire(dentry, timeout, do_now))
return dentry;
return NULL;
}
@ -382,7 +382,7 @@ static struct dentry *should_expire(struct dentry *dentry,
* A symlink can't be "busy" in the usual sense so
* just check last used for expire timeout.
*/
if (autofs4_can_expire(dentry, timeout, do_now))
if (autofs_can_expire(dentry, timeout, do_now))
return dentry;
return NULL;
}
@ -397,7 +397,7 @@ static struct dentry *should_expire(struct dentry *dentry,
if (d_count(dentry) > ino_count)
return NULL;
if (!autofs4_tree_busy(mnt, dentry, timeout, do_now))
if (!autofs_tree_busy(mnt, dentry, timeout, do_now))
return dentry;
/*
* Case 3: pseudo direct mount, expire individual leaves
@ -411,7 +411,7 @@ static struct dentry *should_expire(struct dentry *dentry,
if (d_count(dentry) > ino_count)
return NULL;
expired = autofs4_check_leaves(mnt, dentry, timeout, do_now);
expired = autofs_check_leaves(mnt, dentry, timeout, do_now);
if (expired) {
if (expired == dentry)
dput(dentry);
@ -427,10 +427,10 @@ static struct dentry *should_expire(struct dentry *dentry,
* - it is unused by any user process
* - it has been unused for exp_timeout time
*/
struct dentry *autofs4_expire_indirect(struct super_block *sb,
struct vfsmount *mnt,
struct autofs_sb_info *sbi,
int how)
struct dentry *autofs_expire_indirect(struct super_block *sb,
struct vfsmount *mnt,
struct autofs_sb_info *sbi,
int how)
{
unsigned long timeout;
struct dentry *root = sb->s_root;
@ -450,7 +450,7 @@ struct dentry *autofs4_expire_indirect(struct super_block *sb,
int flags = how;
spin_lock(&sbi->fs_lock);
ino = autofs4_dentry_ino(dentry);
ino = autofs_dentry_ino(dentry);
if (ino->flags & AUTOFS_INF_WANT_EXPIRE) {
spin_unlock(&sbi->fs_lock);
continue;
@ -462,7 +462,7 @@ struct dentry *autofs4_expire_indirect(struct super_block *sb,
continue;
spin_lock(&sbi->fs_lock);
ino = autofs4_dentry_ino(expired);
ino = autofs_dentry_ino(expired);
ino->flags |= AUTOFS_INF_WANT_EXPIRE;
spin_unlock(&sbi->fs_lock);
synchronize_rcu();
@ -498,11 +498,11 @@ found:
return expired;
}
int autofs4_expire_wait(const struct path *path, int rcu_walk)
int autofs_expire_wait(const struct path *path, int rcu_walk)
{
struct dentry *dentry = path->dentry;
struct autofs_sb_info *sbi = autofs4_sbi(dentry->d_sb);
struct autofs_info *ino = autofs4_dentry_ino(dentry);
struct autofs_sb_info *sbi = autofs_sbi(dentry->d_sb);
struct autofs_info *ino = autofs_dentry_ino(dentry);
int status;
int state;
@ -529,7 +529,7 @@ retry:
pr_debug("waiting for expire %p name=%pd\n", dentry, dentry);
status = autofs4_wait(sbi, path, NFY_NONE);
status = autofs_wait(sbi, path, NFY_NONE);
wait_for_completion(&ino->expire_complete);
pr_debug("expire done status=%d\n", status);
@ -545,10 +545,10 @@ retry:
}
/* Perform an expiry operation */
int autofs4_expire_run(struct super_block *sb,
struct vfsmount *mnt,
struct autofs_sb_info *sbi,
struct autofs_packet_expire __user *pkt_p)
int autofs_expire_run(struct super_block *sb,
struct vfsmount *mnt,
struct autofs_sb_info *sbi,
struct autofs_packet_expire __user *pkt_p)
{
struct autofs_packet_expire pkt;
struct autofs_info *ino;
@ -560,7 +560,7 @@ int autofs4_expire_run(struct super_block *sb,
pkt.hdr.proto_version = sbi->version;
pkt.hdr.type = autofs_ptype_expire;
dentry = autofs4_expire_indirect(sb, mnt, sbi, 0);
dentry = autofs_expire_indirect(sb, mnt, sbi, 0);
if (!dentry)
return -EAGAIN;
@ -573,7 +573,7 @@ int autofs4_expire_run(struct super_block *sb,
ret = -EFAULT;
spin_lock(&sbi->fs_lock);
ino = autofs4_dentry_ino(dentry);
ino = autofs_dentry_ino(dentry);
/* avoid rapid-fire expire attempts if expiry fails */
ino->last_used = now;
ino->flags &= ~(AUTOFS_INF_EXPIRING|AUTOFS_INF_WANT_EXPIRE);
@ -583,25 +583,25 @@ int autofs4_expire_run(struct super_block *sb,
return ret;
}
int autofs4_do_expire_multi(struct super_block *sb, struct vfsmount *mnt,
struct autofs_sb_info *sbi, int when)
int autofs_do_expire_multi(struct super_block *sb, struct vfsmount *mnt,
struct autofs_sb_info *sbi, int when)
{
struct dentry *dentry;
int ret = -EAGAIN;
if (autofs_type_trigger(sbi->type))
dentry = autofs4_expire_direct(sb, mnt, sbi, when);
dentry = autofs_expire_direct(sb, mnt, sbi, when);
else
dentry = autofs4_expire_indirect(sb, mnt, sbi, when);
dentry = autofs_expire_indirect(sb, mnt, sbi, when);
if (dentry) {
struct autofs_info *ino = autofs4_dentry_ino(dentry);
struct autofs_info *ino = autofs_dentry_ino(dentry);
const struct path path = { .mnt = mnt, .dentry = dentry };
/* This is synchronous because it makes the daemon a
* little easier
*/
ret = autofs4_wait(sbi, &path, NFY_EXPIRE);
ret = autofs_wait(sbi, &path, NFY_EXPIRE);
spin_lock(&sbi->fs_lock);
/* avoid rapid-fire expire attempts if expiry fails */
@ -619,7 +619,7 @@ int autofs4_do_expire_multi(struct super_block *sb, struct vfsmount *mnt,
* Call repeatedly until it returns -EAGAIN, meaning there's nothing
* more to be done.
*/
int autofs4_expire_multi(struct super_block *sb, struct vfsmount *mnt,
int autofs_expire_multi(struct super_block *sb, struct vfsmount *mnt,
struct autofs_sb_info *sbi, int __user *arg)
{
int do_now = 0;
@ -627,6 +627,5 @@ int autofs4_expire_multi(struct super_block *sb, struct vfsmount *mnt,
if (arg && get_user(do_now, arg))
return -EFAULT;
return autofs4_do_expire_multi(sb, mnt, sbi, do_now);
return autofs_do_expire_multi(sb, mnt, sbi, do_now);
}


@ -13,18 +13,18 @@
static struct dentry *autofs_mount(struct file_system_type *fs_type,
int flags, const char *dev_name, void *data)
{
return mount_nodev(fs_type, flags, data, autofs4_fill_super);
return mount_nodev(fs_type, flags, data, autofs_fill_super);
}
static struct file_system_type autofs_fs_type = {
.owner = THIS_MODULE,
.name = "autofs",
.mount = autofs_mount,
.kill_sb = autofs4_kill_sb,
.kill_sb = autofs_kill_sb,
};
MODULE_ALIAS_FS("autofs");
static int __init init_autofs4_fs(void)
static int __init init_autofs_fs(void)
{
int err;
@ -37,12 +37,12 @@ static int __init init_autofs4_fs(void)
return err;
}
static void __exit exit_autofs4_fs(void)
static void __exit exit_autofs_fs(void)
{
autofs_dev_ioctl_exit();
unregister_filesystem(&autofs_fs_type);
}
module_init(init_autofs4_fs)
module_exit(exit_autofs4_fs)
module_init(init_autofs_fs)
module_exit(exit_autofs_fs)
MODULE_LICENSE("GPL");


@ -7,18 +7,14 @@
* option, any later version, incorporated herein by reference.
*/
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/file.h>
#include <linux/seq_file.h>
#include <linux/pagemap.h>
#include <linux/parser.h>
#include <linux/bitops.h>
#include <linux/magic.h>
#include "autofs_i.h"
#include <linux/module.h>
struct autofs_info *autofs4_new_ino(struct autofs_sb_info *sbi)
#include "autofs_i.h"
struct autofs_info *autofs_new_ino(struct autofs_sb_info *sbi)
{
struct autofs_info *ino;
@ -32,21 +28,21 @@ struct autofs_info *autofs4_new_ino(struct autofs_sb_info *sbi)
return ino;
}
void autofs4_clean_ino(struct autofs_info *ino)
void autofs_clean_ino(struct autofs_info *ino)
{
ino->uid = GLOBAL_ROOT_UID;
ino->gid = GLOBAL_ROOT_GID;
ino->last_used = jiffies;
}
void autofs4_free_ino(struct autofs_info *ino)
void autofs_free_ino(struct autofs_info *ino)
{
kfree(ino);
}
void autofs4_kill_sb(struct super_block *sb)
void autofs_kill_sb(struct super_block *sb)
{
struct autofs_sb_info *sbi = autofs4_sbi(sb);
struct autofs_sb_info *sbi = autofs_sbi(sb);
/*
* In the event of a failure in get_sb_nodev the superblock
@ -56,7 +52,7 @@ void autofs4_kill_sb(struct super_block *sb)
*/
if (sbi) {
/* Free wait queues, close pipe */
autofs4_catatonic_mode(sbi);
autofs_catatonic_mode(sbi);
put_pid(sbi->oz_pgrp);
}
@ -66,9 +62,9 @@ void autofs4_kill_sb(struct super_block *sb)
kfree_rcu(sbi, rcu);
}
static int autofs4_show_options(struct seq_file *m, struct dentry *root)
static int autofs_show_options(struct seq_file *m, struct dentry *root)
{
struct autofs_sb_info *sbi = autofs4_sbi(root->d_sb);
struct autofs_sb_info *sbi = autofs_sbi(root->d_sb);
struct inode *root_inode = d_inode(root->d_sb->s_root);
if (!sbi)
@@ -101,16 +97,16 @@ static int autofs4_show_options(struct seq_file *m, struct dentry *root)
return 0;
}
static void autofs4_evict_inode(struct inode *inode)
static void autofs_evict_inode(struct inode *inode)
{
clear_inode(inode);
kfree(inode->i_private);
}
static const struct super_operations autofs4_sops = {
static const struct super_operations autofs_sops = {
.statfs = simple_statfs,
.show_options = autofs4_show_options,
.evict_inode = autofs4_evict_inode,
.show_options = autofs_show_options,
.evict_inode = autofs_evict_inode,
};
enum {Opt_err, Opt_fd, Opt_uid, Opt_gid, Opt_pgrp, Opt_minproto, Opt_maxproto,
@@ -206,7 +202,7 @@ static int parse_options(char *options, int *pipefd, kuid_t *uid, kgid_t *gid,
return (*pipefd < 0);
}
int autofs4_fill_super(struct super_block *s, void *data, int silent)
int autofs_fill_super(struct super_block *s, void *data, int silent)
{
struct inode *root_inode;
struct dentry *root;
@@ -246,19 +242,19 @@ int autofs4_fill_super(struct super_block *s, void *data, int silent)
s->s_blocksize = 1024;
s->s_blocksize_bits = 10;
s->s_magic = AUTOFS_SUPER_MAGIC;
s->s_op = &autofs4_sops;
s->s_d_op = &autofs4_dentry_operations;
s->s_op = &autofs_sops;
s->s_d_op = &autofs_dentry_operations;
s->s_time_gran = 1;
/*
* Get the root inode and dentry, but defer checking for errors.
*/
ino = autofs4_new_ino(sbi);
ino = autofs_new_ino(sbi);
if (!ino) {
ret = -ENOMEM;
goto fail_free;
}
root_inode = autofs4_get_inode(s, S_IFDIR | 0755);
root_inode = autofs_get_inode(s, S_IFDIR | 0755);
root = d_make_root(root_inode);
if (!root)
goto fail_ino;
@@ -305,8 +301,8 @@ int autofs4_fill_super(struct super_block *s, void *data, int silent)
if (autofs_type_trigger(sbi->type))
__managed_dentry_set_managed(root);
root_inode->i_fop = &autofs4_root_operations;
root_inode->i_op = &autofs4_dir_inode_operations;
root_inode->i_fop = &autofs_root_operations;
root_inode->i_op = &autofs_dir_inode_operations;
pr_debug("pipe fd = %d, pgrp = %u\n", pipefd, pid_nr(sbi->oz_pgrp));
pipe = fget(pipefd);
@@ -340,14 +336,14 @@ fail_dput:
dput(root);
goto fail_free;
fail_ino:
autofs4_free_ino(ino);
autofs_free_ino(ino);
fail_free:
kfree(sbi);
s->s_fs_info = NULL;
return ret;
}
struct inode *autofs4_get_inode(struct super_block *sb, umode_t mode)
struct inode *autofs_get_inode(struct super_block *sb, umode_t mode)
{
struct inode *inode = new_inode(sb);
@@ -364,10 +360,10 @@ struct inode *autofs4_get_inode(struct super_block *sb, umode_t mode)
if (S_ISDIR(mode)) {
set_nlink(inode, 2);
inode->i_op = &autofs4_dir_inode_operations;
inode->i_fop = &autofs4_dir_operations;
inode->i_op = &autofs_dir_inode_operations;
inode->i_fop = &autofs_dir_operations;
} else if (S_ISLNK(mode)) {
inode->i_op = &autofs4_symlink_inode_operations;
inode->i_op = &autofs_symlink_inode_operations;
} else
WARN_ON(1);

@@ -9,72 +9,66 @@
*/
#include <linux/capability.h>
#include <linux/errno.h>
#include <linux/stat.h>
#include <linux/slab.h>
#include <linux/param.h>
#include <linux/time.h>
#include <linux/compat.h>
#include <linux/mutex.h>
#include "autofs_i.h"
static int autofs4_dir_symlink(struct inode *, struct dentry *, const char *);
static int autofs4_dir_unlink(struct inode *, struct dentry *);
static int autofs4_dir_rmdir(struct inode *, struct dentry *);
static int autofs4_dir_mkdir(struct inode *, struct dentry *, umode_t);
static long autofs4_root_ioctl(struct file *, unsigned int, unsigned long);
static int autofs_dir_symlink(struct inode *, struct dentry *, const char *);
static int autofs_dir_unlink(struct inode *, struct dentry *);
static int autofs_dir_rmdir(struct inode *, struct dentry *);
static int autofs_dir_mkdir(struct inode *, struct dentry *, umode_t);
static long autofs_root_ioctl(struct file *, unsigned int, unsigned long);
#ifdef CONFIG_COMPAT
static long autofs4_root_compat_ioctl(struct file *,
unsigned int, unsigned long);
static long autofs_root_compat_ioctl(struct file *,
unsigned int, unsigned long);
#endif
static int autofs4_dir_open(struct inode *inode, struct file *file);
static struct dentry *autofs4_lookup(struct inode *,
struct dentry *, unsigned int);
static struct vfsmount *autofs4_d_automount(struct path *);
static int autofs4_d_manage(const struct path *, bool);
static void autofs4_dentry_release(struct dentry *);
static int autofs_dir_open(struct inode *inode, struct file *file);
static struct dentry *autofs_lookup(struct inode *,
struct dentry *, unsigned int);
static struct vfsmount *autofs_d_automount(struct path *);
static int autofs_d_manage(const struct path *, bool);
static void autofs_dentry_release(struct dentry *);
const struct file_operations autofs4_root_operations = {
const struct file_operations autofs_root_operations = {
.open = dcache_dir_open,
.release = dcache_dir_close,
.read = generic_read_dir,
.iterate_shared = dcache_readdir,
.llseek = dcache_dir_lseek,
.unlocked_ioctl = autofs4_root_ioctl,
.unlocked_ioctl = autofs_root_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = autofs4_root_compat_ioctl,
.compat_ioctl = autofs_root_compat_ioctl,
#endif
};
const struct file_operations autofs4_dir_operations = {
.open = autofs4_dir_open,
const struct file_operations autofs_dir_operations = {
.open = autofs_dir_open,
.release = dcache_dir_close,
.read = generic_read_dir,
.iterate_shared = dcache_readdir,
.llseek = dcache_dir_lseek,
};
const struct inode_operations autofs4_dir_inode_operations = {
.lookup = autofs4_lookup,
.unlink = autofs4_dir_unlink,
.symlink = autofs4_dir_symlink,
.mkdir = autofs4_dir_mkdir,
.rmdir = autofs4_dir_rmdir,
const struct inode_operations autofs_dir_inode_operations = {
.lookup = autofs_lookup,
.unlink = autofs_dir_unlink,
.symlink = autofs_dir_symlink,
.mkdir = autofs_dir_mkdir,
.rmdir = autofs_dir_rmdir,
};
const struct dentry_operations autofs4_dentry_operations = {
.d_automount = autofs4_d_automount,
.d_manage = autofs4_d_manage,
.d_release = autofs4_dentry_release,
const struct dentry_operations autofs_dentry_operations = {
.d_automount = autofs_d_automount,
.d_manage = autofs_d_manage,
.d_release = autofs_dentry_release,
};
static void autofs4_add_active(struct dentry *dentry)
static void autofs_add_active(struct dentry *dentry)
{
struct autofs_sb_info *sbi = autofs4_sbi(dentry->d_sb);
struct autofs_sb_info *sbi = autofs_sbi(dentry->d_sb);
struct autofs_info *ino;
ino = autofs4_dentry_ino(dentry);
ino = autofs_dentry_ino(dentry);
if (ino) {
spin_lock(&sbi->lookup_lock);
if (!ino->active_count) {
@@ -86,12 +80,12 @@ static void autofs4_add_active(struct dentry *dentry)
}
}
static void autofs4_del_active(struct dentry *dentry)
static void autofs_del_active(struct dentry *dentry)
{
struct autofs_sb_info *sbi = autofs4_sbi(dentry->d_sb);
struct autofs_sb_info *sbi = autofs_sbi(dentry->d_sb);
struct autofs_info *ino;
ino = autofs4_dentry_ino(dentry);
ino = autofs_dentry_ino(dentry);
if (ino) {
spin_lock(&sbi->lookup_lock);
ino->active_count--;
@@ -103,14 +97,14 @@ static void autofs4_del_active(struct dentry *dentry)
}
}
static int autofs4_dir_open(struct inode *inode, struct file *file)
static int autofs_dir_open(struct inode *inode, struct file *file)
{
struct dentry *dentry = file->f_path.dentry;
struct autofs_sb_info *sbi = autofs4_sbi(dentry->d_sb);
struct autofs_sb_info *sbi = autofs_sbi(dentry->d_sb);
pr_debug("file=%p dentry=%p %pd\n", file, dentry, dentry);
if (autofs4_oz_mode(sbi))
if (autofs_oz_mode(sbi))
goto out;
/*
@@ -133,10 +127,10 @@ out:
return dcache_dir_open(inode, file);
}
static void autofs4_dentry_release(struct dentry *de)
static void autofs_dentry_release(struct dentry *de)
{
struct autofs_info *ino = autofs4_dentry_ino(de);
struct autofs_sb_info *sbi = autofs4_sbi(de->d_sb);
struct autofs_info *ino = autofs_dentry_ino(de);
struct autofs_sb_info *sbi = autofs_sbi(de->d_sb);
pr_debug("releasing %p\n", de);
@@ -152,12 +146,12 @@ static void autofs4_dentry_release(struct dentry *de)
spin_unlock(&sbi->lookup_lock);
}
autofs4_free_ino(ino);
autofs_free_ino(ino);
}
static struct dentry *autofs4_lookup_active(struct dentry *dentry)
static struct dentry *autofs_lookup_active(struct dentry *dentry)
{
struct autofs_sb_info *sbi = autofs4_sbi(dentry->d_sb);
struct autofs_sb_info *sbi = autofs_sbi(dentry->d_sb);
struct dentry *parent = dentry->d_parent;
const struct qstr *name = &dentry->d_name;
unsigned int len = name->len;
@@ -209,10 +203,10 @@ next:
return NULL;
}
static struct dentry *autofs4_lookup_expiring(struct dentry *dentry,
bool rcu_walk)
static struct dentry *autofs_lookup_expiring(struct dentry *dentry,
bool rcu_walk)
{
struct autofs_sb_info *sbi = autofs4_sbi(dentry->d_sb);
struct autofs_sb_info *sbi = autofs_sbi(dentry->d_sb);
struct dentry *parent = dentry->d_parent;
const struct qstr *name = &dentry->d_name;
unsigned int len = name->len;
@@ -269,17 +263,17 @@ next:
return NULL;
}
static int autofs4_mount_wait(const struct path *path, bool rcu_walk)
static int autofs_mount_wait(const struct path *path, bool rcu_walk)
{
struct autofs_sb_info *sbi = autofs4_sbi(path->dentry->d_sb);
struct autofs_info *ino = autofs4_dentry_ino(path->dentry);
struct autofs_sb_info *sbi = autofs_sbi(path->dentry->d_sb);
struct autofs_info *ino = autofs_dentry_ino(path->dentry);
int status = 0;
if (ino->flags & AUTOFS_INF_PENDING) {
if (rcu_walk)
return -ECHILD;
pr_debug("waiting for mount name=%pd\n", path->dentry);
status = autofs4_wait(sbi, path, NFY_MOUNT);
status = autofs_wait(sbi, path, NFY_MOUNT);
pr_debug("mount wait done status=%d\n", status);
}
ino->last_used = jiffies;
@@ -291,11 +285,11 @@ static int do_expire_wait(const struct path *path, bool rcu_walk)
struct dentry *dentry = path->dentry;
struct dentry *expiring;
expiring = autofs4_lookup_expiring(dentry, rcu_walk);
expiring = autofs_lookup_expiring(dentry, rcu_walk);
if (IS_ERR(expiring))
return PTR_ERR(expiring);
if (!expiring)
return autofs4_expire_wait(path, rcu_walk);
return autofs_expire_wait(path, rcu_walk);
else {
const struct path this = { .mnt = path->mnt, .dentry = expiring };
/*
@@ -303,17 +297,17 @@ static int do_expire_wait(const struct path *path, bool rcu_walk)
* be quite complete, but the directory has been removed
* so it must have been successful, just wait for it.
*/
autofs4_expire_wait(&this, 0);
autofs4_del_expiring(expiring);
autofs_expire_wait(&this, 0);
autofs_del_expiring(expiring);
dput(expiring);
}
return 0;
}
static struct dentry *autofs4_mountpoint_changed(struct path *path)
static struct dentry *autofs_mountpoint_changed(struct path *path)
{
struct dentry *dentry = path->dentry;
struct autofs_sb_info *sbi = autofs4_sbi(dentry->d_sb);
struct autofs_sb_info *sbi = autofs_sbi(dentry->d_sb);
/*
* If this is an indirect mount the dentry could have gone away
@@ -327,7 +321,7 @@ static struct dentry *autofs4_mountpoint_changed(struct path *path)
new = d_lookup(parent, &dentry->d_name);
if (!new)
return NULL;
ino = autofs4_dentry_ino(new);
ino = autofs_dentry_ino(new);
ino->last_used = jiffies;
dput(path->dentry);
path->dentry = new;
@@ -335,17 +329,17 @@ static struct dentry *autofs4_mountpoint_changed(struct path *path)
return path->dentry;
}
static struct vfsmount *autofs4_d_automount(struct path *path)
static struct vfsmount *autofs_d_automount(struct path *path)
{
struct dentry *dentry = path->dentry;
struct autofs_sb_info *sbi = autofs4_sbi(dentry->d_sb);
struct autofs_info *ino = autofs4_dentry_ino(dentry);
struct autofs_sb_info *sbi = autofs_sbi(dentry->d_sb);
struct autofs_info *ino = autofs_dentry_ino(dentry);
int status;
pr_debug("dentry=%p %pd\n", dentry, dentry);
/* The daemon never triggers a mount. */
if (autofs4_oz_mode(sbi))
if (autofs_oz_mode(sbi))
return NULL;
/*
@@ -364,7 +358,7 @@ static struct vfsmount *autofs4_d_automount(struct path *path)
spin_lock(&sbi->fs_lock);
if (ino->flags & AUTOFS_INF_PENDING) {
spin_unlock(&sbi->fs_lock);
status = autofs4_mount_wait(path, 0);
status = autofs_mount_wait(path, 0);
if (status)
return ERR_PTR(status);
goto done;
@@ -405,7 +399,7 @@ static struct vfsmount *autofs4_d_automount(struct path *path)
}
ino->flags |= AUTOFS_INF_PENDING;
spin_unlock(&sbi->fs_lock);
status = autofs4_mount_wait(path, 0);
status = autofs_mount_wait(path, 0);
spin_lock(&sbi->fs_lock);
ino->flags &= ~AUTOFS_INF_PENDING;
if (status) {
@@ -416,24 +410,24 @@ static struct vfsmount *autofs4_d_automount(struct path *path)
spin_unlock(&sbi->fs_lock);
done:
/* Mount succeeded, check if we ended up with a new dentry */
dentry = autofs4_mountpoint_changed(path);
dentry = autofs_mountpoint_changed(path);
if (!dentry)
return ERR_PTR(-ENOENT);
return NULL;
}
static int autofs4_d_manage(const struct path *path, bool rcu_walk)
static int autofs_d_manage(const struct path *path, bool rcu_walk)
{
struct dentry *dentry = path->dentry;
struct autofs_sb_info *sbi = autofs4_sbi(dentry->d_sb);
struct autofs_info *ino = autofs4_dentry_ino(dentry);
struct autofs_sb_info *sbi = autofs_sbi(dentry->d_sb);
struct autofs_info *ino = autofs_dentry_ino(dentry);
int status;
pr_debug("dentry=%p %pd\n", dentry, dentry);
/* The daemon never waits. */
if (autofs4_oz_mode(sbi)) {
if (autofs_oz_mode(sbi)) {
if (!path_is_mountpoint(path))
return -EISDIR;
return 0;
@@ -447,7 +441,7 @@ static int autofs4_d_manage(const struct path *path, bool rcu_walk)
* This dentry may be under construction so wait on mount
* completion.
*/
status = autofs4_mount_wait(path, rcu_walk);
status = autofs_mount_wait(path, rcu_walk);
if (status)
return status;
@@ -500,8 +494,8 @@ static int autofs4_d_manage(const struct path *path, bool rcu_walk)
}
/* Lookups in the root directory */
static struct dentry *autofs4_lookup(struct inode *dir,
struct dentry *dentry, unsigned int flags)
static struct dentry *autofs_lookup(struct inode *dir,
struct dentry *dentry, unsigned int flags)
{
struct autofs_sb_info *sbi;
struct autofs_info *ino;
@@ -513,13 +507,13 @@ static struct dentry *autofs4_lookup(struct inode *dir,
if (dentry->d_name.len > NAME_MAX)
return ERR_PTR(-ENAMETOOLONG);
sbi = autofs4_sbi(dir->i_sb);
sbi = autofs_sbi(dir->i_sb);
pr_debug("pid = %u, pgrp = %u, catatonic = %d, oz_mode = %d\n",
current->pid, task_pgrp_nr(current), sbi->catatonic,
autofs4_oz_mode(sbi));
autofs_oz_mode(sbi));
active = autofs4_lookup_active(dentry);
active = autofs_lookup_active(dentry);
if (active)
return active;
else {
@@ -529,7 +523,7 @@ static struct dentry *autofs4_lookup(struct inode *dir,
* can return fail immediately. The daemon however does need
* to create directories within the file system.
*/
if (!autofs4_oz_mode(sbi) && !IS_ROOT(dentry->d_parent))
if (!autofs_oz_mode(sbi) && !IS_ROOT(dentry->d_parent))
return ERR_PTR(-ENOENT);
/* Mark entries in the root as mount triggers */
@@ -537,24 +531,24 @@ static struct dentry *autofs4_lookup(struct inode *dir,
autofs_type_indirect(sbi->type))
__managed_dentry_set_managed(dentry);
ino = autofs4_new_ino(sbi);
ino = autofs_new_ino(sbi);
if (!ino)
return ERR_PTR(-ENOMEM);
dentry->d_fsdata = ino;
ino->dentry = dentry;
autofs4_add_active(dentry);
autofs_add_active(dentry);
}
return NULL;
}
static int autofs4_dir_symlink(struct inode *dir,
static int autofs_dir_symlink(struct inode *dir,
struct dentry *dentry,
const char *symname)
{
struct autofs_sb_info *sbi = autofs4_sbi(dir->i_sb);
struct autofs_info *ino = autofs4_dentry_ino(dentry);
struct autofs_sb_info *sbi = autofs_sbi(dir->i_sb);
struct autofs_info *ino = autofs_dentry_ino(dentry);
struct autofs_info *p_ino;
struct inode *inode;
size_t size = strlen(symname);
@@ -562,14 +556,14 @@ static int autofs4_dir_symlink(struct inode *dir,
pr_debug("%s <- %pd\n", symname, dentry);
if (!autofs4_oz_mode(sbi))
if (!autofs_oz_mode(sbi))
return -EACCES;
BUG_ON(!ino);
autofs4_clean_ino(ino);
autofs_clean_ino(ino);
autofs4_del_active(dentry);
autofs_del_active(dentry);
cp = kmalloc(size + 1, GFP_KERNEL);
if (!cp)
@@ -577,7 +571,7 @@ static int autofs4_dir_symlink(struct inode *dir,
strcpy(cp, symname);
inode = autofs4_get_inode(dir->i_sb, S_IFLNK | 0555);
inode = autofs_get_inode(dir->i_sb, S_IFLNK | 0555);
if (!inode) {
kfree(cp);
return -ENOMEM;
@@ -588,7 +582,7 @@ static int autofs4_dir_symlink(struct inode *dir,
dget(dentry);
atomic_inc(&ino->count);
p_ino = autofs4_dentry_ino(dentry->d_parent);
p_ino = autofs_dentry_ino(dentry->d_parent);
if (p_ino && !IS_ROOT(dentry))
atomic_inc(&p_ino->count);
@@ -610,20 +604,20 @@ static int autofs4_dir_symlink(struct inode *dir,
* If a process is blocked on the dentry waiting for the expire to finish,
* it will invalidate the dentry and try to mount with a new one.
*
* Also see autofs4_dir_rmdir()..
* Also see autofs_dir_rmdir()..
*/
static int autofs4_dir_unlink(struct inode *dir, struct dentry *dentry)
static int autofs_dir_unlink(struct inode *dir, struct dentry *dentry)
{
struct autofs_sb_info *sbi = autofs4_sbi(dir->i_sb);
struct autofs_info *ino = autofs4_dentry_ino(dentry);
struct autofs_sb_info *sbi = autofs_sbi(dir->i_sb);
struct autofs_info *ino = autofs_dentry_ino(dentry);
struct autofs_info *p_ino;
/* This allows root to remove symlinks */
if (!autofs4_oz_mode(sbi) && !capable(CAP_SYS_ADMIN))
if (!autofs_oz_mode(sbi) && !capable(CAP_SYS_ADMIN))
return -EPERM;
if (atomic_dec_and_test(&ino->count)) {
p_ino = autofs4_dentry_ino(dentry->d_parent);
p_ino = autofs_dentry_ino(dentry->d_parent);
if (p_ino && !IS_ROOT(dentry))
atomic_dec(&p_ino->count);
}
@@ -635,7 +629,7 @@ static int autofs4_dir_unlink(struct inode *dir, struct dentry *dentry)
dir->i_mtime = current_time(dir);
spin_lock(&sbi->lookup_lock);
__autofs4_add_expiring(dentry);
__autofs_add_expiring(dentry);
d_drop(dentry);
spin_unlock(&sbi->lookup_lock);
@@ -692,15 +686,15 @@ static void autofs_clear_leaf_automount_flags(struct dentry *dentry)
managed_dentry_set_managed(parent);
}
static int autofs4_dir_rmdir(struct inode *dir, struct dentry *dentry)
static int autofs_dir_rmdir(struct inode *dir, struct dentry *dentry)
{
struct autofs_sb_info *sbi = autofs4_sbi(dir->i_sb);
struct autofs_info *ino = autofs4_dentry_ino(dentry);
struct autofs_sb_info *sbi = autofs_sbi(dir->i_sb);
struct autofs_info *ino = autofs_dentry_ino(dentry);
struct autofs_info *p_ino;
pr_debug("dentry %p, removing %pd\n", dentry, dentry);
if (!autofs4_oz_mode(sbi))
if (!autofs_oz_mode(sbi))
return -EACCES;
spin_lock(&sbi->lookup_lock);
@@ -708,7 +702,7 @@ static int autofs4_dir_rmdir(struct inode *dir, struct dentry *dentry)
spin_unlock(&sbi->lookup_lock);
return -ENOTEMPTY;
}
__autofs4_add_expiring(dentry);
__autofs_add_expiring(dentry);
d_drop(dentry);
spin_unlock(&sbi->lookup_lock);
@@ -716,7 +710,7 @@ static int autofs4_dir_rmdir(struct inode *dir, struct dentry *dentry)
autofs_clear_leaf_automount_flags(dentry);
if (atomic_dec_and_test(&ino->count)) {
p_ino = autofs4_dentry_ino(dentry->d_parent);
p_ino = autofs_dentry_ino(dentry->d_parent);
if (p_ino && dentry->d_parent != dentry)
atomic_dec(&p_ino->count);
}
@@ -730,26 +724,26 @@ static int autofs4_dir_rmdir(struct inode *dir, struct dentry *dentry)
return 0;
}
static int autofs4_dir_mkdir(struct inode *dir,
struct dentry *dentry, umode_t mode)
static int autofs_dir_mkdir(struct inode *dir,
struct dentry *dentry, umode_t mode)
{
struct autofs_sb_info *sbi = autofs4_sbi(dir->i_sb);
struct autofs_info *ino = autofs4_dentry_ino(dentry);
struct autofs_sb_info *sbi = autofs_sbi(dir->i_sb);
struct autofs_info *ino = autofs_dentry_ino(dentry);
struct autofs_info *p_ino;
struct inode *inode;
if (!autofs4_oz_mode(sbi))
if (!autofs_oz_mode(sbi))
return -EACCES;
pr_debug("dentry %p, creating %pd\n", dentry, dentry);
BUG_ON(!ino);
autofs4_clean_ino(ino);
autofs_clean_ino(ino);
autofs4_del_active(dentry);
autofs_del_active(dentry);
inode = autofs4_get_inode(dir->i_sb, S_IFDIR | mode);
inode = autofs_get_inode(dir->i_sb, S_IFDIR | mode);
if (!inode)
return -ENOMEM;
d_add(dentry, inode);
@@ -759,7 +753,7 @@ static int autofs4_dir_mkdir(struct inode *dir,
dget(dentry);
atomic_inc(&ino->count);
p_ino = autofs4_dentry_ino(dentry->d_parent);
p_ino = autofs_dentry_ino(dentry->d_parent);
if (p_ino && !IS_ROOT(dentry))
atomic_inc(&p_ino->count);
inc_nlink(dir);
@@ -770,7 +764,7 @@ static int autofs4_dir_mkdir(struct inode *dir,
/* Get/set timeout ioctl() operation */
#ifdef CONFIG_COMPAT
static inline int autofs4_compat_get_set_timeout(struct autofs_sb_info *sbi,
static inline int autofs_compat_get_set_timeout(struct autofs_sb_info *sbi,
compat_ulong_t __user *p)
{
unsigned long ntimeout;
@@ -795,7 +789,7 @@ error:
}
#endif
static inline int autofs4_get_set_timeout(struct autofs_sb_info *sbi,
static inline int autofs_get_set_timeout(struct autofs_sb_info *sbi,
unsigned long __user *p)
{
unsigned long ntimeout;
@@ -820,14 +814,14 @@ error:
}
/* Return protocol version */
static inline int autofs4_get_protover(struct autofs_sb_info *sbi,
static inline int autofs_get_protover(struct autofs_sb_info *sbi,
int __user *p)
{
return put_user(sbi->version, p);
}
/* Return protocol sub version */
static inline int autofs4_get_protosubver(struct autofs_sb_info *sbi,
static inline int autofs_get_protosubver(struct autofs_sb_info *sbi,
int __user *p)
{
return put_user(sbi->sub_version, p);
@@ -836,7 +830,7 @@ static inline int autofs4_get_protosubver(struct autofs_sb_info *sbi,
/*
* Tells the daemon whether it can umount the autofs mount.
*/
static inline int autofs4_ask_umount(struct vfsmount *mnt, int __user *p)
static inline int autofs_ask_umount(struct vfsmount *mnt, int __user *p)
{
int status = 0;
@@ -850,14 +844,14 @@ static inline int autofs4_ask_umount(struct vfsmount *mnt, int __user *p)
return status;
}
/* Identify autofs4_dentries - this is so we can tell if there's
/* Identify autofs_dentries - this is so we can tell if there's
* an extra dentry refcount or not. We only hold a refcount on the
* dentry if it's non-negative (i.e., d_inode != NULL)
*/
int is_autofs4_dentry(struct dentry *dentry)
int is_autofs_dentry(struct dentry *dentry)
{
return dentry && d_really_is_positive(dentry) &&
dentry->d_op == &autofs4_dentry_operations &&
dentry->d_op == &autofs_dentry_operations &&
dentry->d_fsdata != NULL;
}
@@ -865,10 +859,10 @@ int is_autofs4_dentry(struct dentry *dentry)
* ioctl()'s on the root directory is the chief method for the daemon to
* generate kernel reactions
*/
static int autofs4_root_ioctl_unlocked(struct inode *inode, struct file *filp,
static int autofs_root_ioctl_unlocked(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg)
{
struct autofs_sb_info *sbi = autofs4_sbi(inode->i_sb);
struct autofs_sb_info *sbi = autofs_sbi(inode->i_sb);
void __user *p = (void __user *)arg;
pr_debug("cmd = 0x%08x, arg = 0x%08lx, sbi = %p, pgrp = %u\n",
@@ -878,64 +872,63 @@ static int autofs4_root_ioctl_unlocked(struct inode *inode, struct file *filp,
_IOC_NR(cmd) - _IOC_NR(AUTOFS_IOC_FIRST) >= AUTOFS_IOC_COUNT)
return -ENOTTY;
if (!autofs4_oz_mode(sbi) && !capable(CAP_SYS_ADMIN))
if (!autofs_oz_mode(sbi) && !capable(CAP_SYS_ADMIN))
return -EPERM;
switch (cmd) {
case AUTOFS_IOC_READY: /* Wait queue: go ahead and retry */
return autofs4_wait_release(sbi, (autofs_wqt_t) arg, 0);
return autofs_wait_release(sbi, (autofs_wqt_t) arg, 0);
case AUTOFS_IOC_FAIL: /* Wait queue: fail with ENOENT */
return autofs4_wait_release(sbi, (autofs_wqt_t) arg, -ENOENT);
return autofs_wait_release(sbi, (autofs_wqt_t) arg, -ENOENT);
case AUTOFS_IOC_CATATONIC: /* Enter catatonic mode (daemon shutdown) */
autofs4_catatonic_mode(sbi);
autofs_catatonic_mode(sbi);
return 0;
case AUTOFS_IOC_PROTOVER: /* Get protocol version */
return autofs4_get_protover(sbi, p);
return autofs_get_protover(sbi, p);
case AUTOFS_IOC_PROTOSUBVER: /* Get protocol sub version */
return autofs4_get_protosubver(sbi, p);
return autofs_get_protosubver(sbi, p);
case AUTOFS_IOC_SETTIMEOUT:
return autofs4_get_set_timeout(sbi, p);
return autofs_get_set_timeout(sbi, p);
#ifdef CONFIG_COMPAT
case AUTOFS_IOC_SETTIMEOUT32:
return autofs4_compat_get_set_timeout(sbi, p);
return autofs_compat_get_set_timeout(sbi, p);
#endif
case AUTOFS_IOC_ASKUMOUNT:
return autofs4_ask_umount(filp->f_path.mnt, p);
return autofs_ask_umount(filp->f_path.mnt, p);
/* return a single thing to expire */
case AUTOFS_IOC_EXPIRE:
return autofs4_expire_run(inode->i_sb,
filp->f_path.mnt, sbi, p);
return autofs_expire_run(inode->i_sb, filp->f_path.mnt, sbi, p);
/* same as above, but can send multiple expires through pipe */
case AUTOFS_IOC_EXPIRE_MULTI:
return autofs4_expire_multi(inode->i_sb,
filp->f_path.mnt, sbi, p);
return autofs_expire_multi(inode->i_sb,
filp->f_path.mnt, sbi, p);
default:
return -EINVAL;
}
}
static long autofs4_root_ioctl(struct file *filp,
static long autofs_root_ioctl(struct file *filp,
unsigned int cmd, unsigned long arg)
{
struct inode *inode = file_inode(filp);
return autofs4_root_ioctl_unlocked(inode, filp, cmd, arg);
return autofs_root_ioctl_unlocked(inode, filp, cmd, arg);
}
#ifdef CONFIG_COMPAT
static long autofs4_root_compat_ioctl(struct file *filp,
static long autofs_root_compat_ioctl(struct file *filp,
unsigned int cmd, unsigned long arg)
{
struct inode *inode = file_inode(filp);
int ret;
if (cmd == AUTOFS_IOC_READY || cmd == AUTOFS_IOC_FAIL)
ret = autofs4_root_ioctl_unlocked(inode, filp, cmd, arg);
ret = autofs_root_ioctl_unlocked(inode, filp, cmd, arg);
else
ret = autofs4_root_ioctl_unlocked(inode, filp, cmd,
ret = autofs_root_ioctl_unlocked(inode, filp, cmd,
(unsigned long) compat_ptr(arg));
return ret;

@@ -8,22 +8,22 @@
#include "autofs_i.h"
static const char *autofs4_get_link(struct dentry *dentry,
struct inode *inode,
struct delayed_call *done)
static const char *autofs_get_link(struct dentry *dentry,
struct inode *inode,
struct delayed_call *done)
{
struct autofs_sb_info *sbi;
struct autofs_info *ino;
if (!dentry)
return ERR_PTR(-ECHILD);
sbi = autofs4_sbi(dentry->d_sb);
ino = autofs4_dentry_ino(dentry);
if (ino && !autofs4_oz_mode(sbi))
sbi = autofs_sbi(dentry->d_sb);
ino = autofs_dentry_ino(dentry);
if (ino && !autofs_oz_mode(sbi))
ino->last_used = jiffies;
return d_inode(dentry)->i_private;
}
const struct inode_operations autofs4_symlink_inode_operations = {
.get_link = autofs4_get_link
const struct inode_operations autofs_symlink_inode_operations = {
.get_link = autofs_get_link
};

@@ -7,19 +7,15 @@
* option, any later version, incorporated herein by reference.
*/
#include <linux/slab.h>
#include <linux/time.h>
#include <linux/signal.h>
#include <linux/sched/signal.h>
#include <linux/file.h>
#include "autofs_i.h"
/* We make this a static variable rather than a part of the superblock; it
* is better if we don't reassign numbers easily even across filesystems
*/
static autofs_wqt_t autofs4_next_wait_queue = 1;
static autofs_wqt_t autofs_next_wait_queue = 1;
void autofs4_catatonic_mode(struct autofs_sb_info *sbi)
void autofs_catatonic_mode(struct autofs_sb_info *sbi)
{
struct autofs_wait_queue *wq, *nwq;
@@ -49,8 +45,8 @@ void autofs4_catatonic_mode(struct autofs_sb_info *sbi)
mutex_unlock(&sbi->wq_mutex);
}
static int autofs4_write(struct autofs_sb_info *sbi,
struct file *file, const void *addr, int bytes)
static int autofs_write(struct autofs_sb_info *sbi,
struct file *file, const void *addr, int bytes)
{
unsigned long sigpipe, flags;
const char *data = (const char *)addr;
@@ -82,7 +78,7 @@ static int autofs4_write(struct autofs_sb_info *sbi,
return bytes == 0 ? 0 : wr < 0 ? wr : -EIO;
}
static void autofs4_notify_daemon(struct autofs_sb_info *sbi,
static void autofs_notify_daemon(struct autofs_sb_info *sbi,
struct autofs_wait_queue *wq,
int type)
{
@@ -167,23 +163,23 @@ static void autofs4_notify_daemon(struct autofs_sb_info *sbi,
mutex_unlock(&sbi->wq_mutex);
switch (ret = autofs4_write(sbi, pipe, &pkt, pktsz)) {
switch (ret = autofs_write(sbi, pipe, &pkt, pktsz)) {
case 0:
break;
case -ENOMEM:
case -ERESTARTSYS:
/* Just fail this one */
autofs4_wait_release(sbi, wq->wait_queue_token, ret);
autofs_wait_release(sbi, wq->wait_queue_token, ret);
break;
default:
autofs4_catatonic_mode(sbi);
autofs_catatonic_mode(sbi);
break;
}
fput(pipe);
}
static int autofs4_getpath(struct autofs_sb_info *sbi,
struct dentry *dentry, char **name)
static int autofs_getpath(struct autofs_sb_info *sbi,
struct dentry *dentry, char *name)
{
struct dentry *root = sbi->sb->s_root;
struct dentry *tmp;
@@ -193,7 +189,7 @@ static int autofs4_getpath(struct autofs_sb_info *sbi,
unsigned seq;
rename_retry:
buf = *name;
buf = name;
len = 0;
seq = read_seqbegin(&rename_lock);
@@ -228,7 +224,7 @@ rename_retry:
}
static struct autofs_wait_queue *
autofs4_find_wait(struct autofs_sb_info *sbi, const struct qstr *qstr)
autofs_find_wait(struct autofs_sb_info *sbi, const struct qstr *qstr)
{
struct autofs_wait_queue *wq;
@@ -263,7 +259,7 @@ static int validate_request(struct autofs_wait_queue **wait,
return -ENOENT;
/* Wait in progress, continue; */
wq = autofs4_find_wait(sbi, qstr);
wq = autofs_find_wait(sbi, qstr);
if (wq) {
*wait = wq;
return 1;
@@ -272,7 +268,7 @@ static int validate_request(struct autofs_wait_queue **wait,
*wait = NULL;
/* If we don't yet have any info this is a new request */
ino = autofs4_dentry_ino(dentry);
ino = autofs_dentry_ino(dentry);
if (!ino)
return 1;
@@ -297,7 +293,7 @@ static int validate_request(struct autofs_wait_queue **wait,
if (sbi->catatonic)
return -ENOENT;
wq = autofs4_find_wait(sbi, qstr);
wq = autofs_find_wait(sbi, qstr);
if (wq) {
*wait = wq;
return 1;
@@ -351,7 +347,7 @@ static int validate_request(struct autofs_wait_queue **wait,
return 1;
}
int autofs4_wait(struct autofs_sb_info *sbi,
int autofs_wait(struct autofs_sb_info *sbi,
const struct path *path, enum autofs_notify notify)
{
struct dentry *dentry = path->dentry;
@@ -399,7 +395,7 @@ int autofs4_wait(struct autofs_sb_info *sbi,
if (IS_ROOT(dentry) && autofs_type_trigger(sbi->type))
qstr.len = sprintf(name, "%p", dentry);
else {
qstr.len = autofs4_getpath(sbi, dentry, &name);
qstr.len = autofs_getpath(sbi, dentry, name);
if (!qstr.len) {
kfree(name);
return -ENOENT;
@@ -430,15 +426,15 @@ int autofs4_wait(struct autofs_sb_info *sbi,
return -ENOMEM;
}
wq->wait_queue_token = autofs4_next_wait_queue;
if (++autofs4_next_wait_queue == 0)
autofs4_next_wait_queue = 1;
wq->wait_queue_token = autofs_next_wait_queue;
if (++autofs_next_wait_queue == 0)
autofs_next_wait_queue = 1;
wq->next = sbi->queues;
sbi->queues = wq;
init_waitqueue_head(&wq->queue);
memcpy(&wq->name, &qstr, sizeof(struct qstr));
wq->dev = autofs4_get_dev(sbi);
wq->ino = autofs4_get_ino(sbi);
wq->dev = autofs_get_dev(sbi);
wq->ino = autofs_get_ino(sbi);
wq->uid = current_uid();
wq->gid = current_gid();
wq->pid = pid;
@@ -467,9 +463,9 @@ int autofs4_wait(struct autofs_sb_info *sbi,
wq->name.name, notify);
/*
* autofs4_notify_daemon() may block; it will unlock ->wq_mutex
* autofs_notify_daemon() may block; it will unlock ->wq_mutex
*/
autofs4_notify_daemon(sbi, wq, type);
autofs_notify_daemon(sbi, wq, type);
} else {
wq->wait_ctr++;
pr_debug("existing wait id = 0x%08lx, name = %.*s, nfy=%d\n",
@@ -500,12 +496,12 @@ int autofs4_wait(struct autofs_sb_info *sbi,
struct dentry *de = NULL;
/* direct mount or browsable map */
ino = autofs4_dentry_ino(dentry);
ino = autofs_dentry_ino(dentry);
if (!ino) {
/* If not lookup actual dentry used */
de = d_lookup(dentry->d_parent, &dentry->d_name);
if (de)
ino = autofs4_dentry_ino(de);
ino = autofs_dentry_ino(de);
}
/* Set mount requester */
@@ -530,7 +526,8 @@ int autofs4_wait(struct autofs_sb_info *sbi,
}
int autofs4_wait_release(struct autofs_sb_info *sbi, autofs_wqt_t wait_queue_token, int status)
int autofs_wait_release(struct autofs_sb_info *sbi,
autofs_wqt_t wait_queue_token, int status)
{
struct autofs_wait_queue *wq, **wql;

@@ -1,5 +1,7 @@
config AUTOFS4_FS
tristate "Kernel automounter version 4 support (also supports v3)"
tristate "Kernel automounter version 4 support (also supports v3 and v5)"
default n
depends on AUTOFS_FS = n
help
The automounter is a tool to automatically mount remote file systems
on demand. This implementation is partially kernel-based to reduce
@@ -7,14 +9,38 @@ config AUTOFS4_FS
automounter (amd), which is a pure user space daemon.
To use the automounter you need the user-space tools from
<https://www.kernel.org/pub/linux/daemons/autofs/v4/>; you also
want to answer Y to "NFS file system support", below.
<https://www.kernel.org/pub/linux/daemons/autofs/>; you also want
to answer Y to "NFS file system support", below.
To compile this support as a module, choose M here: the module will be
called autofs4. You will need to add "alias autofs autofs4" to your
modules configuration file.
This module is in the process of being renamed from autofs4 to
autofs. Since autofs is now the only module that provides the
autofs file system, the module is not version 4 specific.
If you are not a part of a fairly large, distributed network or
don't have a laptop which needs to dynamically reconfigure to the
local network, you probably do not need an automounter, and can say
N here.
The autofs4 module is now built from the source located in
fs/autofs. The autofs4 directory and its configuration entry
will be removed two kernel versions from the inclusion of this
change.
Changes that will need to be made should be limited to:
- source include statements should be changed from auto_fs4.h to
auto_fs.h since these two header files have been merged.
- user space scripts that manually load autofs4.ko should be
changed to load autofs.ko. But since the module directory name
and the module name are now the same as the file system name,
there is no need to manually load the module.
- any "alias autofs autofs4" will need to be removed.
- due to the autofs4 module directory name not being the same as
its file system name, autoloading didn't work properly. Because
of this, kernel configurations would often build the module into
the kernel. This may have resulted in selinux policies that
prevent the autofs module from autoloading and will need to be
updated.
Please configure AUTOFS_FS instead of AUTOFS4_FS from now on.
NOTE: Since the modules autofs and autofs4 use the same file system
type name of "autofs", only one can be built. The "depends"
above means AUTOFS4_FS will not appear in .config for any
setting of AUTOFS_FS other than n; otherwise it will appear
under the AUTOFS_FS entry, which is intended to draw attention
to the module rename change.

@@ -4,4 +4,6 @@
obj-$(CONFIG_AUTOFS4_FS) += autofs4.o
autofs4-objs := init.o inode.o root.o symlink.o waitq.o expire.o dev-ioctl.o
autofs4-objs := ../autofs/init.o ../autofs/inode.o ../autofs/root.o \
../autofs/symlink.o ../autofs/waitq.o ../autofs/expire.o \
../autofs/dev-ioctl.o

@@ -387,8 +387,13 @@ static Node *create_entry(const char __user *buffer, size_t count)
s = strchr(p, del);
if (!s)
goto einval;
*s++ = '\0';
e->offset = simple_strtoul(p, &p, 10);
*s = '\0';
if (p != s) {
int r = kstrtoint(p, 10, &e->offset);
if (r != 0 || e->offset < 0)
goto einval;
}
p = s;
if (*p++)
goto einval;
pr_debug("register: offset: %#x\n", e->offset);
@@ -428,7 +433,8 @@ static Node *create_entry(const char __user *buffer, size_t count)
if (e->mask &&
string_unescape_inplace(e->mask, UNESCAPE_HEX) != e->size)
goto einval;
if (e->size + e->offset > BINPRM_BUF_SIZE)
if (e->size > BINPRM_BUF_SIZE ||
BINPRM_BUF_SIZE - e->size < e->offset)
goto einval;
pr_debug("register: magic/mask length: %i\n", e->size);
if (USE_DEBUG) {
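
The rewritten bounds check is the heart of this fix: the old form computed e->size + e->offset, and a sufficiently large offset makes that sum wrap and compare as small, slipping past the "> BINPRM_BUF_SIZE" test. The new form never adds the two values. A minimal userspace sketch of the hazard (illustrative names; unsigned arithmetic so the wraparound is well defined):

#include <stdio.h>
#include <limits.h>

#define BUF_SIZE 128u                   /* stands in for BINPRM_BUF_SIZE */

/* old-style test: the sum can wrap modulo 2^32 and look small */
static int old_check(unsigned int size, unsigned int offset)
{
        return size + offset > BUF_SIZE;
}

/* new-style test: subtract from the budget instead of adding */
static int new_check(unsigned int size, unsigned int offset)
{
        return size > BUF_SIZE || BUF_SIZE - size < offset;
}

int main(void)
{
        unsigned int size = 16, offset = UINT_MAX - 8;

        printf("old rejects: %d\n", old_check(size, offset)); /* 0: wrapped */
        printf("new rejects: %d\n", new_check(size, offset)); /* 1: caught */
        return 0;
}

The kstrtoint() conversion in the first hunk closes the other half of the hole: a negative or malformed offset is now rejected at parse time instead of being silently truncated by simple_strtoul().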

@@ -38,8 +38,6 @@
#include <linux/ppp-ioctl.h>
#include <linux/if_pppox.h>
#include <linux/mtio.h>
#include <linux/auto_fs.h>
#include <linux/auto_fs4.h>
#include <linux/tty.h>
#include <linux/vt_kern.h>
#include <linux/fb.h>

@@ -905,12 +905,12 @@ out:
* If this page is ever written to we will re-fault and change the mapping to
* point to real DAX storage instead.
*/
static int dax_load_hole(struct address_space *mapping, void *entry,
static vm_fault_t dax_load_hole(struct address_space *mapping, void *entry,
struct vm_fault *vmf)
{
struct inode *inode = mapping->host;
unsigned long vaddr = vmf->address;
int ret = VM_FAULT_NOPAGE;
vm_fault_t ret = VM_FAULT_NOPAGE;
struct page *zero_page;
void *entry2;
pfn_t pfn;
@@ -929,7 +929,7 @@ static int dax_load_hole(struct address_space *mapping, void *entry,
goto out;
}
vm_insert_mixed(vmf->vma, vaddr, pfn);
ret = vmf_insert_mixed(vmf->vma, vaddr, pfn);
out:
trace_dax_load_hole(inode, vmf, ret);
return ret;
@@ -1112,7 +1112,7 @@ dax_iomap_rw(struct kiocb *iocb, struct iov_iter *iter,
}
EXPORT_SYMBOL_GPL(dax_iomap_rw);
static int dax_fault_return(int error)
static vm_fault_t dax_fault_return(int error)
{
if (error == 0)
return VM_FAULT_NOPAGE;
@@ -1132,7 +1132,7 @@ static bool dax_fault_is_synchronous(unsigned long flags,
&& (iomap->flags & IOMAP_F_DIRTY);
}
static int dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
static vm_fault_t dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
int *iomap_errp, const struct iomap_ops *ops)
{
struct vm_area_struct *vma = vmf->vma;
@@ -1145,18 +1145,18 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
int error, major = 0;
bool write = vmf->flags & FAULT_FLAG_WRITE;
bool sync;
int vmf_ret = 0;
vm_fault_t ret = 0;
void *entry;
pfn_t pfn;
trace_dax_pte_fault(inode, vmf, vmf_ret);
trace_dax_pte_fault(inode, vmf, ret);
/*
* Check whether offset isn't beyond end of file now. Caller is supposed
* to hold locks serializing us with truncate / punch hole so this is
* a reliable test.
*/
if (pos >= i_size_read(inode)) {
vmf_ret = VM_FAULT_SIGBUS;
ret = VM_FAULT_SIGBUS;
goto out;
}
@@ -1165,7 +1165,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
entry = grab_mapping_entry(mapping, vmf->pgoff, 0);
if (IS_ERR(entry)) {
vmf_ret = dax_fault_return(PTR_ERR(entry));
ret = dax_fault_return(PTR_ERR(entry));
goto out;
}
@@ -1176,7 +1176,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
* retried.
*/
if (pmd_trans_huge(*vmf->pmd) || pmd_devmap(*vmf->pmd)) {
vmf_ret = VM_FAULT_NOPAGE;
ret = VM_FAULT_NOPAGE;
goto unlock_entry;
}
@@ -1189,7 +1189,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
if (iomap_errp)
*iomap_errp = error;
if (error) {
vmf_ret = dax_fault_return(error);
ret = dax_fault_return(error);
goto unlock_entry;
}
if (WARN_ON_ONCE(iomap.offset + iomap.length < pos + PAGE_SIZE)) {
@@ -1219,9 +1219,9 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
goto error_finish_iomap;
__SetPageUptodate(vmf->cow_page);
vmf_ret = finish_fault(vmf);
if (!vmf_ret)
vmf_ret = VM_FAULT_DONE_COW;
ret = finish_fault(vmf);
if (!ret)
ret = VM_FAULT_DONE_COW;
goto finish_iomap;
}
@@ -1257,23 +1257,20 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
goto error_finish_iomap;
}
*pfnp = pfn;
vmf_ret = VM_FAULT_NEEDDSYNC | major;
ret = VM_FAULT_NEEDDSYNC | major;
goto finish_iomap;
}
trace_dax_insert_mapping(inode, vmf, entry);
if (write)
error = vm_insert_mixed_mkwrite(vma, vaddr, pfn);
ret = vmf_insert_mixed_mkwrite(vma, vaddr, pfn);
else
error = vm_insert_mixed(vma, vaddr, pfn);
ret = vmf_insert_mixed(vma, vaddr, pfn);
/* -EBUSY is fine, somebody else faulted on the same PTE */
if (error == -EBUSY)
error = 0;
break;
goto finish_iomap;
case IOMAP_UNWRITTEN:
case IOMAP_HOLE:
if (!write) {
vmf_ret = dax_load_hole(mapping, entry, vmf);
ret = dax_load_hole(mapping, entry, vmf);
goto finish_iomap;
}
/*FALLTHRU*/
@@ -1284,12 +1281,12 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
}
error_finish_iomap:
vmf_ret = dax_fault_return(error) | major;
ret = dax_fault_return(error);
finish_iomap:
if (ops->iomap_end) {
int copied = PAGE_SIZE;
if (vmf_ret & VM_FAULT_ERROR)
if (ret & VM_FAULT_ERROR)
copied = 0;
/*
* The fault is done by now and there's no way back (other
@@ -1302,12 +1299,12 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
unlock_entry:
put_locked_mapping_entry(mapping, vmf->pgoff);
out:
trace_dax_pte_fault_done(inode, vmf, vmf_ret);
return vmf_ret;
trace_dax_pte_fault_done(inode, vmf, ret);
return ret | major;
}
#ifdef CONFIG_FS_DAX_PMD
static int dax_pmd_load_hole(struct vm_fault *vmf, struct iomap *iomap,
static vm_fault_t dax_pmd_load_hole(struct vm_fault *vmf, struct iomap *iomap,
void *entry)
{
struct address_space *mapping = vmf->vma->vm_file->f_mapping;
@@ -1348,7 +1345,7 @@ fallback:
return VM_FAULT_FALLBACK;
}
static int dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
const struct iomap_ops *ops)
{
struct vm_area_struct *vma = vmf->vma;
@@ -1358,7 +1355,7 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
bool sync;
unsigned int iomap_flags = (write ? IOMAP_WRITE : 0) | IOMAP_FAULT;
struct inode *inode = mapping->host;
int result = VM_FAULT_FALLBACK;
vm_fault_t result = VM_FAULT_FALLBACK;
struct iomap iomap = { 0 };
pgoff_t max_pgoff, pgoff;
void *entry;
@@ -1509,7 +1506,7 @@ out:
return result;
}
#else
static int dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
const struct iomap_ops *ops)
{
return VM_FAULT_FALLBACK;
@@ -1529,7 +1526,7 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
* has done all the necessary locking for page fault to proceed
* successfully.
*/
int dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
vm_fault_t dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
pfn_t *pfnp, int *iomap_errp, const struct iomap_ops *ops)
{
switch (pe_size) {
@@ -1553,14 +1550,14 @@ EXPORT_SYMBOL_GPL(dax_iomap_fault);
* DAX file. It takes care of marking corresponding radix tree entry as dirty
* as well.
*/
static int dax_insert_pfn_mkwrite(struct vm_fault *vmf,
static vm_fault_t dax_insert_pfn_mkwrite(struct vm_fault *vmf,
enum page_entry_size pe_size,
pfn_t pfn)
{
struct address_space *mapping = vmf->vma->vm_file->f_mapping;
void *entry, **slot;
pgoff_t index = vmf->pgoff;
int vmf_ret, error;
vm_fault_t ret;
xa_lock_irq(&mapping->i_pages);
entry = get_unlocked_mapping_entry(mapping, index, &slot);
@@ -1579,21 +1576,20 @@ static int dax_insert_pfn_mkwrite(struct vm_fault *vmf,
xa_unlock_irq(&mapping->i_pages);
switch (pe_size) {
case PE_SIZE_PTE:
error = vm_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
vmf_ret = dax_fault_return(error);
ret = vmf_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
break;
#ifdef CONFIG_FS_DAX_PMD
case PE_SIZE_PMD:
vmf_ret = vmf_insert_pfn_pmd(vmf->vma, vmf->address, vmf->pmd,
ret = vmf_insert_pfn_pmd(vmf->vma, vmf->address, vmf->pmd,
pfn, true);
break;
#endif
default:
vmf_ret = VM_FAULT_FALLBACK;
ret = VM_FAULT_FALLBACK;
}
put_locked_mapping_entry(mapping, index);
trace_dax_insert_pfn_mkwrite(mapping->host, vmf, vmf_ret);
return vmf_ret;
trace_dax_insert_pfn_mkwrite(mapping->host, vmf, ret);
return ret;
}
/**
@@ -1606,8 +1602,8 @@ static int dax_insert_pfn_mkwrite(struct vm_fault *vmf,
* stored persistently on the media and handles inserting of appropriate page
* table entry.
*/
int dax_finish_sync_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
pfn_t pfn)
vm_fault_t dax_finish_sync_fault(struct vm_fault *vmf,
enum page_entry_size pe_size, pfn_t pfn)
{
int err;
loff_t start = ((loff_t)vmf->pgoff) << PAGE_SHIFT;
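
The dax hunks above all apply one pattern from this series: page-fault paths stop carrying VM_FAULT_* codes in a plain int, adopt the dedicated vm_fault_t type, and replace the errno-returning vm_insert_mixed() with vmf_insert_mixed(), which returns VM_FAULT_* directly. A hedged before/after sketch of the call-site change (function names are illustrative, not kernel code; assumes the 4.17-era mm API):

#include <linux/mm.h>           /* vm_fault_t, vmf_insert_mixed() */
#include <linux/pfn_t.h>

/* before: errno-returning insert, translated by hand at each call site */
static int old_style_insert(struct vm_area_struct *vma, unsigned long vaddr,
                            pfn_t pfn)
{
        int error = vm_insert_mixed(vma, vaddr, pfn);

        if (error == -ENOMEM)
                return VM_FAULT_OOM;
        if (error < 0 && error != -EBUSY)
                return VM_FAULT_SIGBUS;
        return VM_FAULT_NOPAGE; /* -EBUSY: someone else faulted it in */
}

/* after: the helper returns VM_FAULT_* itself, and the distinct
 * vm_fault_t type lets the compiler flag any leftover errno return */
static vm_fault_t new_style_insert(struct vm_area_struct *vma,
                                   unsigned long vaddr, pfn_t pfn)
{
        return vmf_insert_mixed(vma, vaddr, pfn);
}

The ocfs2 hunks below use vmf_error() for the same purpose where a generic errno has to be mapped to a fault code.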

@@ -23,7 +23,7 @@
#include <linux/rcupdate.h>
#include <linux/pid_namespace.h>
#include <linux/user_namespace.h>
#include <linux/shmem_fs.h>
#include <linux/memfd.h>
#include <linux/compat.h>
#include <linux/poll.h>

@@ -788,6 +788,23 @@ static inline void ocfs2_add_holder(struct ocfs2_lock_res *lockres,
spin_unlock(&lockres->l_lock);
}
static struct ocfs2_lock_holder *
ocfs2_pid_holder(struct ocfs2_lock_res *lockres,
struct pid *pid)
{
struct ocfs2_lock_holder *oh;
spin_lock(&lockres->l_lock);
list_for_each_entry(oh, &lockres->l_holders, oh_list) {
if (oh->oh_owner_pid == pid) {
spin_unlock(&lockres->l_lock);
return oh;
}
}
spin_unlock(&lockres->l_lock);
return NULL;
}
static inline void ocfs2_remove_holder(struct ocfs2_lock_res *lockres,
struct ocfs2_lock_holder *oh)
{
@@ -798,24 +815,6 @@ static inline void ocfs2_remove_holder(struct ocfs2_lock_res *lockres,
put_pid(oh->oh_owner_pid);
}
static inline int ocfs2_is_locked_by_me(struct ocfs2_lock_res *lockres)
{
struct ocfs2_lock_holder *oh;
struct pid *pid;
/* look in the list of holders for one with the current task as owner */
spin_lock(&lockres->l_lock);
pid = task_pid(current);
list_for_each_entry(oh, &lockres->l_holders, oh_list) {
if (oh->oh_owner_pid == pid) {
spin_unlock(&lockres->l_lock);
return 1;
}
}
spin_unlock(&lockres->l_lock);
return 0;
}
static inline void ocfs2_inc_holders(struct ocfs2_lock_res *lockres,
int level)
@@ -2610,34 +2609,93 @@ void ocfs2_inode_unlock(struct inode *inode,
*
* return < 0 on error, return == 0 if there's no lock holder on the stack
* before this call, return == 1 if this call would be a recursive locking.
* return == -EINVAL if this lock attempt would cause a forbidden upgrade.
*
* When taking lock levels into account, we face several different situations.
*
* 1. no lock is held
* In this case, just lock the inode as requested and return 0
*
* 2. We are holding a lock
* For this situation, things diverge into several cases
*
* wanted      holding      what to do
* ex          ex           see 2.1 below
* ex          pr           see 2.2 below
* pr          ex           see 2.1 below
* pr          pr           see 2.1 below
*
* 2.1 the lock level being held is compatible with the wanted
* level, so no lock action will be taken.
*
* 2.2 Otherwise, an upgrade is needed, but it is forbidden.
*
* The reason an upgrade within a process is forbidden is that a lock
* upgrade may cause deadlock. The following illustrates how it happens.
*
*         thread on node1                     thread on node2
* ocfs2_inode_lock_tracker(ex=0)
*
*                  <======                    ocfs2_inode_lock_tracker(ex=1)
*
* ocfs2_inode_lock_tracker(ex=1)
*/
int ocfs2_inode_lock_tracker(struct inode *inode,
struct buffer_head **ret_bh,
int ex,
struct ocfs2_lock_holder *oh)
{
int status;
int arg_flags = 0, has_locked;
int status = 0;
struct ocfs2_lock_res *lockres;
struct ocfs2_lock_holder *tmp_oh;
struct pid *pid = task_pid(current);
lockres = &OCFS2_I(inode)->ip_inode_lockres;
has_locked = ocfs2_is_locked_by_me(lockres);
/* Just get buffer head if the cluster lock has been taken */
if (has_locked)
arg_flags = OCFS2_META_LOCK_GETBH;
tmp_oh = ocfs2_pid_holder(lockres, pid);
if (likely(!has_locked || ret_bh)) {
status = ocfs2_inode_lock_full(inode, ret_bh, ex, arg_flags);
if (!tmp_oh) {
/*
* This corresponds to the case 1.
* We haven't got any lock before.
*/
status = ocfs2_inode_lock_full(inode, ret_bh, ex, 0);
if (status < 0) {
if (status != -ENOENT)
mlog_errno(status);
return status;
}
oh->oh_ex = ex;
ocfs2_add_holder(lockres, oh);
return 0;
}
if (unlikely(ex && !tmp_oh->oh_ex)) {
/*
* case 2.2 upgrade may cause dead lock, forbid it.
*/
mlog(ML_ERROR, "Recursive locking is not permitted to "
"upgrade to EX level from PR level.\n");
dump_stack();
return -EINVAL;
}
/*
* case 2.1: the OCFS2_META_LOCK_GETBH flag makes ocfs2_inode_lock_full()
* ignore the lock level and just get the buffer head.
*/
if (ret_bh) {
status = ocfs2_inode_lock_full(inode, ret_bh, ex,
OCFS2_META_LOCK_GETBH);
if (status < 0) {
if (status != -ENOENT)
mlog_errno(status);
return status;
}
}
if (!has_locked)
ocfs2_add_holder(lockres, oh);
return has_locked;
return tmp_oh ? 1 : 0;
}
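
A hedged usage sketch of the reworked tracker pair (the caller is illustrative, not ocfs2 code): the first caller on a task's stack takes the DLM lock and sees had_lock == 0; nested callers see had_lock == 1 and pass it back so that only the outermost unlock actually drops the cluster lock, while a PR-holding caller asking for EX now fails instead of deadlocking.

static int example_op(struct inode *inode)
{
        struct ocfs2_lock_holder oh;
        struct buffer_head *bh = NULL;
        int had_lock;

        had_lock = ocfs2_inode_lock_tracker(inode, &bh, 1, &oh);
        if (had_lock < 0)
                return had_lock;  /* error, or forbidden PR->EX upgrade */

        /* ... work under the EX cluster lock ... */

        ocfs2_inode_unlock_tracker(inode, 1, &oh, had_lock);
        brelse(bh);
        return 0;
}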
void ocfs2_inode_unlock_tracker(struct inode *inode,
@@ -2649,12 +2707,13 @@ void ocfs2_inode_unlock_tracker(struct inode *inode,
lockres = &OCFS2_I(inode)->ip_inode_lockres;
/* had_lock means that the current process already took the cluster
* lock previously. If had_lock is 1, we have nothing to do here, and
* it will get unlocked where we got the lock.
* lock previously.
* If had_lock is 1, we have nothing to do here.
* If had_lock is 0, we will release the lock.
*/
if (!had_lock) {
ocfs2_inode_unlock(inode, oh->oh_ex);
ocfs2_remove_holder(lockres, oh);
ocfs2_inode_unlock(inode, ex);
}
}

@@ -96,6 +96,7 @@ struct ocfs2_trim_fs_info {
struct ocfs2_lock_holder {
struct list_head oh_list;
struct pid *oh_owner_pid;
int oh_ex;
};
/* ocfs2_inode_lock_full() 'arg_flags' flags */

@@ -563,8 +563,8 @@ int ocfs2_add_inode_data(struct ocfs2_super *osb,
return ret;
}
static int __ocfs2_extend_allocation(struct inode *inode, u32 logical_start,
u32 clusters_to_add, int mark_unwritten)
static int ocfs2_extend_allocation(struct inode *inode, u32 logical_start,
u32 clusters_to_add, int mark_unwritten)
{
int status = 0;
int restart_func = 0;
@@ -1035,8 +1035,8 @@ int ocfs2_extend_no_holes(struct inode *inode, struct buffer_head *di_bh,
clusters_to_add -= oi->ip_clusters;
if (clusters_to_add) {
ret = __ocfs2_extend_allocation(inode, oi->ip_clusters,
clusters_to_add, 0);
ret = ocfs2_extend_allocation(inode, oi->ip_clusters,
clusters_to_add, 0);
if (ret) {
mlog_errno(ret);
goto out;
@@ -1493,7 +1493,7 @@ static int ocfs2_allocate_unwritten_extents(struct inode *inode,
goto next;
}
ret = __ocfs2_extend_allocation(inode, cpos, alloc_size, 1);
ret = ocfs2_extend_allocation(inode, cpos, alloc_size, 1);
if (ret) {
if (ret != -ENOSPC)
mlog_errno(ret);

@@ -65,8 +65,6 @@ int ocfs2_extend_no_holes(struct inode *inode, struct buffer_head *di_bh,
u64 new_i_size, u64 zero_to);
int ocfs2_zero_extend(struct inode *inode, struct buffer_head *di_bh,
loff_t zero_to);
int ocfs2_extend_allocation(struct inode *inode, u32 logical_start,
u32 clusters_to_add, int mark_unwritten);
int ocfs2_setattr(struct dentry *dentry, struct iattr *attr);
int ocfs2_getattr(const struct path *path, struct kstat *stat,
u32 request_mask, unsigned int flags);

@@ -402,7 +402,7 @@ out_err:
static void o2ffg_update_histogram(struct ocfs2_info_free_chunk_list *hist,
unsigned int chunksize)
{
int index;
u32 index;
index = __ilog2_u32(chunksize);
if (index >= OCFS2_INFO_MAX_HIST)

@@ -44,11 +44,11 @@
#include "ocfs2_trace.h"
static int ocfs2_fault(struct vm_fault *vmf)
static vm_fault_t ocfs2_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
sigset_t oldset;
int ret;
vm_fault_t ret;
ocfs2_block_signals(&oldset);
ret = filemap_fault(vmf);
@@ -59,10 +59,11 @@ static int ocfs2_fault(struct vm_fault *vmf)
return ret;
}
static int __ocfs2_page_mkwrite(struct file *file, struct buffer_head *di_bh,
struct page *page)
static vm_fault_t __ocfs2_page_mkwrite(struct file *file,
struct buffer_head *di_bh, struct page *page)
{
int ret = VM_FAULT_NOPAGE;
int err;
vm_fault_t ret = VM_FAULT_NOPAGE;
struct inode *inode = file_inode(file);
struct address_space *mapping = inode->i_mapping;
loff_t pos = page_offset(page);
@@ -105,15 +106,12 @@ static int __ocfs2_page_mkwrite(struct file *file, struct buffer_head *di_bh,
if (page->index == last_index)
len = ((size - 1) & ~PAGE_MASK) + 1;
ret = ocfs2_write_begin_nolock(mapping, pos, len, OCFS2_WRITE_MMAP,
err = ocfs2_write_begin_nolock(mapping, pos, len, OCFS2_WRITE_MMAP,
&locked_page, &fsdata, di_bh, page);
if (ret) {
if (ret != -ENOSPC)
mlog_errno(ret);
if (ret == -ENOMEM)
ret = VM_FAULT_OOM;
else
ret = VM_FAULT_SIGBUS;
if (err) {
if (err != -ENOSPC)
mlog_errno(err);
ret = vmf_error(err);
goto out;
}
@@ -121,20 +119,21 @@ static int __ocfs2_page_mkwrite(struct file *file, struct buffer_head *di_bh,
ret = VM_FAULT_NOPAGE;
goto out;
}
ret = ocfs2_write_end_nolock(mapping, pos, len, len, fsdata);
BUG_ON(ret != len);
err = ocfs2_write_end_nolock(mapping, pos, len, len, fsdata);
BUG_ON(err != len);
ret = VM_FAULT_LOCKED;
out:
return ret;
}
static int ocfs2_page_mkwrite(struct vm_fault *vmf)
static vm_fault_t ocfs2_page_mkwrite(struct vm_fault *vmf)
{
struct page *page = vmf->page;
struct inode *inode = file_inode(vmf->vma->vm_file);
struct buffer_head *di_bh = NULL;
sigset_t oldset;
int ret;
int err;
vm_fault_t ret;
sb_start_pagefault(inode->i_sb);
ocfs2_block_signals(&oldset);
@@ -144,13 +143,10 @@ static int ocfs2_page_mkwrite(struct vm_fault *vmf)
* node. Taking the data lock will also ensure that we don't
* attempt page truncation as part of a downconvert.
*/
ret = ocfs2_inode_lock(inode, &di_bh, 1);
if (ret < 0) {
mlog_errno(ret);
if (ret == -ENOMEM)
ret = VM_FAULT_OOM;
else
ret = VM_FAULT_SIGBUS;
err = ocfs2_inode_lock(inode, &di_bh, 1);
if (err < 0) {
mlog_errno(err);
ret = vmf_error(err);
goto out;
}

@@ -2332,8 +2332,7 @@ int ocfs2_orphan_del(struct ocfs2_super *osb,
struct buffer_head *orphan_dir_bh,
bool dio)
{
const int namelen = OCFS2_DIO_ORPHAN_PREFIX_LEN + OCFS2_ORPHAN_NAMELEN;
char name[namelen + 1];
char name[OCFS2_DIO_ORPHAN_PREFIX_LEN + OCFS2_ORPHAN_NAMELEN + 1];
struct ocfs2_dinode *orphan_fe;
int status = 0;
struct ocfs2_dir_lookup_result lookup = { NULL, };

@@ -807,11 +807,11 @@ struct ocfs2_dir_block_trailer {
* in this block. (unused) */
/*10*/ __u8 db_signature[8]; /* Signature for verification */
__le64 db_reserved2;
__le64 db_free_next; /* Next block in list (unused) */
/*20*/ __le64 db_blkno; /* Offset on disk, in blocks */
__le64 db_parent_dinode; /* dinode which owns me, in
/*20*/ __le64 db_free_next; /* Next block in list (unused) */
__le64 db_blkno; /* Offset on disk, in blocks */
/*30*/ __le64 db_parent_dinode; /* dinode which owns me, in
blocks */
/*30*/ struct ocfs2_block_check db_check; /* Error checking */
struct ocfs2_block_check db_check; /* Error checking */
/*40*/
};

@@ -268,7 +268,7 @@ static inline void task_sig(struct seq_file *m, struct task_struct *p)
unsigned long flags;
sigset_t pending, shpending, blocked, ignored, caught;
int num_threads = 0;
unsigned long qsize = 0;
unsigned int qsize = 0;
unsigned long qlim = 0;
sigemptyset(&pending);

@@ -213,10 +213,14 @@ static ssize_t proc_pid_cmdline_read(struct file *file, char __user *buf,
char *page;
unsigned long count = _count;
unsigned long arg_start, arg_end, env_start, env_end;
unsigned long len1, len2, len;
unsigned long p;
unsigned long len1, len2;
char __user *buf0 = buf;
struct {
unsigned long p;
unsigned long len;
} cmdline[2];
char c;
ssize_t rv;
int rv;
BUG_ON(*pos < 0);
@@ -239,12 +243,12 @@ static ssize_t proc_pid_cmdline_read(struct file *file, char __user *buf,
goto out_mmput;
}
down_read(&mm->mmap_sem);
spin_lock(&mm->arg_lock);
arg_start = mm->arg_start;
arg_end = mm->arg_end;
env_start = mm->env_start;
env_end = mm->env_end;
up_read(&mm->mmap_sem);
spin_unlock(&mm->arg_lock);
BUG_ON(arg_start > arg_end);
BUG_ON(env_start > env_end);
@@ -253,61 +257,31 @@ static ssize_t proc_pid_cmdline_read(struct file *file, char __user *buf,
len2 = env_end - env_start;
/* Empty ARGV. */
if (len1 == 0) {
rv = 0;
goto out_free_page;
}
if (len1 == 0)
goto end;
/*
* Inherently racy -- command line shares address space
* with code and data.
*/
rv = access_remote_vm(mm, arg_end - 1, &c, 1, FOLL_ANON);
if (rv <= 0)
goto out_free_page;
rv = 0;
if (access_remote_vm(mm, arg_end - 1, &c, 1, FOLL_ANON) != 1)
goto end;
cmdline[0].p = arg_start;
cmdline[0].len = len1;
if (c == '\0') {
/* Command line (set of strings) occupies whole ARGV. */
if (len1 <= *pos)
goto out_free_page;
p = arg_start + *pos;
len = len1 - *pos;
while (count > 0 && len > 0) {
unsigned int _count;
int nr_read;
_count = min3(count, len, PAGE_SIZE);
nr_read = access_remote_vm(mm, p, page, _count, FOLL_ANON);
if (nr_read < 0)
rv = nr_read;
if (nr_read <= 0)
goto out_free_page;
if (copy_to_user(buf, page, nr_read)) {
rv = -EFAULT;
goto out_free_page;
}
p += nr_read;
len -= nr_read;
buf += nr_read;
count -= nr_read;
rv += nr_read;
}
cmdline[1].len = 0;
} else {
/*
* Command line (1 string) occupies ARGV and
* extends into ENVP.
*/
struct {
unsigned long p;
unsigned long len;
} cmdline[2] = {
{ .p = arg_start, .len = len1 },
{ .p = env_start, .len = len2 },
};
cmdline[1].p = env_start;
cmdline[1].len = len2;
}
{
loff_t pos1 = *pos;
unsigned int i;
@@ -317,44 +291,40 @@ static ssize_t proc_pid_cmdline_read(struct file *file, char __user *buf,
i++;
}
while (i < 2) {
unsigned long p;
unsigned long len;
p = cmdline[i].p + pos1;
len = cmdline[i].len - pos1;
while (count > 0 && len > 0) {
unsigned int _count, l;
int nr_read;
bool final;
unsigned int nr_read, nr_write;
_count = min3(count, len, PAGE_SIZE);
nr_read = access_remote_vm(mm, p, page, _count, FOLL_ANON);
if (nr_read < 0)
rv = nr_read;
if (nr_read <= 0)
goto out_free_page;
nr_read = min3(count, len, PAGE_SIZE);
nr_read = access_remote_vm(mm, p, page, nr_read, FOLL_ANON);
if (nr_read == 0)
goto end;
/*
* Command line can be shorter than whole ARGV
* even if last "marker" byte says it is not.
*/
final = false;
l = strnlen(page, nr_read);
if (l < nr_read) {
nr_read = l;
final = true;
}
if (c == '\0')
nr_write = nr_read;
else
nr_write = strnlen(page, nr_read);
if (copy_to_user(buf, page, nr_read)) {
if (copy_to_user(buf, page, nr_write)) {
rv = -EFAULT;
goto out_free_page;
}
p += nr_read;
len -= nr_read;
buf += nr_read;
count -= nr_read;
rv += nr_read;
p += nr_write;
len -= nr_write;
buf += nr_write;
count -= nr_write;
if (final)
goto out_free_page;
if (nr_write < nr_read)
goto end;
}
/* Only first chunk can be read partially. */
@ -363,12 +333,13 @@ static ssize_t proc_pid_cmdline_read(struct file *file, char __user *buf,
}
}
end:
*pos += buf - buf0;
rv = buf - buf0;
out_free_page:
free_page((unsigned long)page);
out_mmput:
mmput(mm);
if (rv > 0)
*pos += rv;
return rv;
}
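
The rewrite above streams the command line out as a sequence of NUL-terminated strings (ARGV, plus the ENVP tail in the single-string case). A minimal userspace sketch of what a consumer of /proc/<pid>/cmdline then sees — hypothetical demo code, not part of the patch:

/* Print the NUL-separated strings produced by the reader above. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	FILE *f = fopen("/proc/self/cmdline", "r");
	char buf[4096];
	size_t n, i, start = 0;

	if (!f)
		return EXIT_FAILURE;
	n = fread(buf, 1, sizeof(buf), f);
	fclose(f);
	for (i = 0; i < n; i++) {
		if (buf[i] == '\0') {
			printf("argv: %s\n", &buf[start]);
			start = i + 1;
		}
	}
	return 0;
}
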
@ -430,7 +401,6 @@ static int proc_pid_stack(struct seq_file *m, struct pid_namespace *ns,
struct stack_trace trace;
unsigned long *entries;
int err;
int i;
entries = kmalloc(MAX_STACK_TRACE_DEPTH * sizeof(*entries), GFP_KERNEL);
if (!entries)
@ -443,6 +413,8 @@ static int proc_pid_stack(struct seq_file *m, struct pid_namespace *ns,
err = lock_trace(task);
if (!err) {
unsigned int i;
save_stack_trace_tsk(task, &trace);
for (i = 0; i < trace.nr_entries; i++) {
@ -927,10 +899,10 @@ static ssize_t environ_read(struct file *file, char __user *buf,
if (!mmget_not_zero(mm))
goto free;
down_read(&mm->mmap_sem);
spin_lock(&mm->arg_lock);
env_start = mm->env_start;
env_end = mm->env_end;
up_read(&mm->mmap_sem);
spin_unlock(&mm->arg_lock);
while (count > 0) {
size_t this_len, max_len;
@ -1784,9 +1756,9 @@ int pid_getattr(const struct path *path, struct kstat *stat,
generic_fillattr(inode, stat);
rcu_read_lock();
stat->uid = GLOBAL_ROOT_UID;
stat->gid = GLOBAL_ROOT_GID;
rcu_read_lock();
task = pid_task(proc_pid(inode), PIDTYPE_PID);
if (task) {
if (!has_pid_permissions(pid, task, HIDEPID_INVISIBLE)) {
@ -1875,7 +1847,7 @@ const struct dentry_operations pid_dentry_operations =
* by stat.
*/
bool proc_fill_cache(struct file *file, struct dir_context *ctx,
const char *name, int len,
const char *name, unsigned int len,
instantiate_t instantiate, struct task_struct *task, const void *ptr)
{
struct dentry *child, *dir = file->f_path.dentry;
@ -3251,7 +3223,7 @@ int proc_pid_readdir(struct file *file, struct dir_context *ctx)
iter.task;
iter.tgid += 1, iter = next_tgid(ns, iter)) {
char name[10 + 1];
int len;
unsigned int len;
cond_resched();
if (!has_pid_permissions(ns, iter.task, HIDEPID_INVISIBLE))
@ -3578,7 +3550,7 @@ static int proc_task_readdir(struct file *file, struct dir_context *ctx)
task;
task = next_tid(task), ctx->pos++) {
char name[10 + 1];
int len;
unsigned int len;
tid = task_pid_nr_ns(task, ns);
len = snprintf(name, sizeof(name), "%u", tid);
if (!proc_fill_cache(file, ctx, name, len,

View File

@ -248,7 +248,7 @@ static int proc_readfd_common(struct file *file, struct dir_context *ctx,
struct file *f;
struct fd_data data;
char name[10 + 1];
int len;
unsigned int len;
f = fcheck_files(files, fd);
if (!f)

View File

@ -163,7 +163,7 @@ extern loff_t mem_lseek(struct file *, loff_t, int);
/* Lookups */
typedef struct dentry *instantiate_t(struct dentry *,
struct task_struct *, const void *);
extern bool proc_fill_cache(struct file *, struct dir_context *, const char *, int,
bool proc_fill_cache(struct file *, struct dir_context *, const char *, unsigned int,
instantiate_t, struct task_struct *, const void *);
/*

View File

@ -154,6 +154,8 @@ u64 stable_page_flags(struct page *page)
if (PageBalloon(page))
u |= 1 << KPF_BALLOON;
if (PageTable(page))
u |= 1 << KPF_PGTABLE;
if (page_is_idle(page))
u |= 1 << KPF_IDLE;

View File

@ -1259,8 +1259,9 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
if (pte_swp_soft_dirty(pte))
flags |= PM_SOFT_DIRTY;
entry = pte_to_swp_entry(pte);
frame = swp_type(entry) |
(swp_offset(entry) << MAX_SWAPFILES_SHIFT);
if (pm->show_pfn)
frame = swp_type(entry) |
(swp_offset(entry) << MAX_SWAPFILES_SHIFT);
flags |= PM_SWAP;
if (is_migration_entry(entry))
page = migration_entry_to_page(entry);
@ -1311,11 +1312,14 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
else if (is_swap_pmd(pmd)) {
swp_entry_t entry = pmd_to_swp_entry(pmd);
unsigned long offset = swp_offset(entry);
unsigned long offset;
offset += (addr & ~PMD_MASK) >> PAGE_SHIFT;
frame = swp_type(entry) |
(offset << MAX_SWAPFILES_SHIFT);
if (pm->show_pfn) {
offset = swp_offset(entry) +
((addr & ~PMD_MASK) >> PAGE_SHIFT);
frame = swp_type(entry) |
(offset << MAX_SWAPFILES_SHIFT);
}
flags |= PM_SWAP;
if (pmd_swp_soft_dirty(pmd))
flags |= PM_SOFT_DIRTY;
@ -1333,10 +1337,12 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
err = add_to_pagemap(addr, &pme, pm);
if (err)
break;
if (pm->show_pfn && (flags & PM_PRESENT))
frame++;
else if (flags & PM_SWAP)
frame += (1 << MAX_SWAPFILES_SHIFT);
if (pm->show_pfn) {
if (flags & PM_PRESENT)
frame++;
else if (flags & PM_SWAP)
frame += (1 << MAX_SWAPFILES_SHIFT);
}
}
spin_unlock(ptl);
return err;
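
Both hunks gate the swap-entry encoding on pm->show_pfn, so unprivileged readers of /proc/<pid>/pagemap now get a zeroed frame field instead of the swap type and offset. For reference, a userspace decoder following the documented pagemap layout (bit 63 present, bit 62 swapped, low 55 bits frame; for swapped entries the low 5 bits hold the swap type) — a sketch, not shipped with the patch:

#include <stdint.h>
#include <stdio.h>

#define PM_PRESENT		(1ULL << 63)
#define PM_SWAP			(1ULL << 62)
#define PM_PFN_MASK		((1ULL << 55) - 1)
#define MAX_SWAPFILES_SHIFT	5

static void decode(uint64_t pme)
{
	if (pme & PM_PRESENT)
		printf("present, pfn %llu\n",
		       (unsigned long long)(pme & PM_PFN_MASK));
	else if (pme & PM_SWAP)
		printf("swapped, type %llu offset %llu\n",
		       (unsigned long long)(pme & ((1 << MAX_SWAPFILES_SHIFT) - 1)),
		       (unsigned long long)((pme & PM_PFN_MASK) >> MAX_SWAPFILES_SHIFT));
	else
		printf("not present\n");
}

int main(void)
{
	decode((1ULL << 63) | 0x1234);	/* present, pfn 0x1234 */
	decode((1ULL << 62) | (0x10 << MAX_SWAPFILES_SHIFT) | 2); /* swapped */
	decode(0);			/* neither bit: not present */
	return 0;
}
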

View File

@ -62,6 +62,8 @@ struct userfaultfd_ctx {
enum userfaultfd_state state;
/* released */
bool released;
/* memory mappings are changing because of non-cooperative event */
bool mmap_changing;
/* mm with one or more vmas attached to this userfaultfd_ctx */
struct mm_struct *mm;
};
@ -641,6 +643,7 @@ static void userfaultfd_event_wait_completion(struct userfaultfd_ctx *ctx,
* already released.
*/
out:
WRITE_ONCE(ctx->mmap_changing, false);
userfaultfd_ctx_put(ctx);
}
@ -686,10 +689,12 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
ctx->state = UFFD_STATE_RUNNING;
ctx->features = octx->features;
ctx->released = false;
ctx->mmap_changing = false;
ctx->mm = vma->vm_mm;
mmgrab(ctx->mm);
userfaultfd_ctx_get(octx);
WRITE_ONCE(octx->mmap_changing, true);
fctx->orig = octx;
fctx->new = ctx;
list_add_tail(&fctx->list, fcs);
@ -732,6 +737,7 @@ void mremap_userfaultfd_prep(struct vm_area_struct *vma,
if (ctx && (ctx->features & UFFD_FEATURE_EVENT_REMAP)) {
vm_ctx->ctx = ctx;
userfaultfd_ctx_get(ctx);
WRITE_ONCE(ctx->mmap_changing, true);
}
}
@ -772,6 +778,7 @@ bool userfaultfd_remove(struct vm_area_struct *vma,
return true;
userfaultfd_ctx_get(ctx);
WRITE_ONCE(ctx->mmap_changing, true);
up_read(&mm->mmap_sem);
msg_init(&ewq.msg);
@ -815,6 +822,7 @@ int userfaultfd_unmap_prep(struct vm_area_struct *vma,
return -ENOMEM;
userfaultfd_ctx_get(ctx);
WRITE_ONCE(ctx->mmap_changing, true);
unmap_ctx->ctx = ctx;
unmap_ctx->start = start;
unmap_ctx->end = end;
@ -1653,6 +1661,10 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx,
user_uffdio_copy = (struct uffdio_copy __user *) arg;
ret = -EAGAIN;
if (READ_ONCE(ctx->mmap_changing))
goto out;
ret = -EFAULT;
if (copy_from_user(&uffdio_copy, user_uffdio_copy,
/* don't copy "copy" last field */
@ -1674,7 +1686,7 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx,
goto out;
if (mmget_not_zero(ctx->mm)) {
ret = mcopy_atomic(ctx->mm, uffdio_copy.dst, uffdio_copy.src,
uffdio_copy.len);
uffdio_copy.len, &ctx->mmap_changing);
mmput(ctx->mm);
} else {
return -ESRCH;
@ -1705,6 +1717,10 @@ static int userfaultfd_zeropage(struct userfaultfd_ctx *ctx,
user_uffdio_zeropage = (struct uffdio_zeropage __user *) arg;
ret = -EAGAIN;
if (READ_ONCE(ctx->mmap_changing))
goto out;
ret = -EFAULT;
if (copy_from_user(&uffdio_zeropage, user_uffdio_zeropage,
/* don't copy "zeropage" last field */
@ -1721,7 +1737,8 @@ static int userfaultfd_zeropage(struct userfaultfd_ctx *ctx,
if (mmget_not_zero(ctx->mm)) {
ret = mfill_zeropage(ctx->mm, uffdio_zeropage.range.start,
uffdio_zeropage.range.len);
uffdio_zeropage.range.len,
&ctx->mmap_changing);
mmput(ctx->mm);
} else {
return -ESRCH;
@ -1900,6 +1917,7 @@ SYSCALL_DEFINE1(userfaultfd, int, flags)
ctx->features = 0;
ctx->state = UFFD_STATE_WAIT_API;
ctx->released = false;
ctx->mmap_changing = false;
ctx->mm = current->mm;
/* prevent the mm struct to be freed */
mmgrab(ctx->mm);
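
With mmap_changing in place, UFFDIO_COPY and UFFDIO_ZEROPAGE fail with -EAGAIN while a fork/mremap/remove/unmap event is still queued, and the flag clears once the monitor reads the event. A hedged userspace sketch of the retry loop a non-cooperative monitor might run (uffd_copy_retry is an illustrative helper, not an API from the patch):

#include <sys/ioctl.h>
#include <linux/userfaultfd.h>
#include <errno.h>
#include <string.h>

static long long uffd_copy_retry(int uffd, unsigned long long dst,
				 unsigned long long src, unsigned long long len)
{
	struct uffdio_copy copy;

	for (;;) {
		memset(&copy, 0, sizeof(copy));
		copy.dst = dst;
		copy.src = src;
		copy.len = len;
		if (ioctl(uffd, UFFDIO_COPY, &copy) == 0)
			return copy.copy;	/* bytes resolved */
		if (errno != EAGAIN)
			return -1;
		/*
		 * An address-space event is pending: read and handle the
		 * uffd events here, which clears mmap_changing, then retry.
		 */
	}
}
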

View File

@ -13,17 +13,14 @@
#ifndef __ASSEMBLY__
typedef signed char s8;
typedef unsigned char u8;
typedef signed short s16;
typedef unsigned short u16;
typedef signed int s32;
typedef unsigned int u32;
typedef signed long long s64;
typedef unsigned long long u64;
typedef __s8 s8;
typedef __u8 u8;
typedef __s16 s16;
typedef __u16 u16;
typedef __s32 s32;
typedef __u32 u32;
typedef __s64 s64;
typedef __u64 u64;
#define S8_C(x) x
#define U8_C(x) x ## U

View File

@ -125,8 +125,8 @@ ssize_t dax_iomap_rw(struct kiocb *iocb, struct iov_iter *iter,
const struct iomap_ops *ops);
int dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
pfn_t *pfnp, int *errp, const struct iomap_ops *ops);
int dax_finish_sync_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
pfn_t pfn);
vm_fault_t dax_finish_sync_fault(struct vm_fault *vmf,
enum page_entry_size pe_size, pfn_t pfn);
int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index);
int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
pgoff_t index);

View File

@ -24,6 +24,7 @@ struct vm_area_struct;
#define ___GFP_HIGH 0x20u
#define ___GFP_IO 0x40u
#define ___GFP_FS 0x80u
#define ___GFP_WRITE 0x100u
#define ___GFP_NOWARN 0x200u
#define ___GFP_RETRY_MAYFAIL 0x400u
#define ___GFP_NOFAIL 0x800u
@ -36,11 +37,10 @@ struct vm_area_struct;
#define ___GFP_THISNODE 0x40000u
#define ___GFP_ATOMIC 0x80000u
#define ___GFP_ACCOUNT 0x100000u
#define ___GFP_DIRECT_RECLAIM 0x400000u
#define ___GFP_WRITE 0x800000u
#define ___GFP_KSWAPD_RECLAIM 0x1000000u
#define ___GFP_DIRECT_RECLAIM 0x200000u
#define ___GFP_KSWAPD_RECLAIM 0x400000u
#ifdef CONFIG_LOCKDEP
#define ___GFP_NOLOCKDEP 0x2000000u
#define ___GFP_NOLOCKDEP 0x800000u
#else
#define ___GFP_NOLOCKDEP 0
#endif
@ -205,7 +205,7 @@ struct vm_area_struct;
#define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
/* Room for N __GFP_FOO bits */
#define __GFP_BITS_SHIFT (25 + IS_ENABLED(CONFIG_LOCKDEP))
#define __GFP_BITS_SHIFT (23 + IS_ENABLED(CONFIG_LOCKDEP))
#define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
/*
@ -343,7 +343,7 @@ static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
* 0x1 => DMA or NORMAL
* 0x2 => HIGHMEM or NORMAL
* 0x3 => BAD (DMA+HIGHMEM)
* 0x4 => DMA32 or DMA or NORMAL
* 0x4 => DMA32 or NORMAL
* 0x5 => BAD (DMA+DMA32)
* 0x6 => BAD (HIGHMEM+DMA32)
* 0x7 => BAD (HIGHMEM+DMA32+DMA)
@ -351,7 +351,7 @@ static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
* 0x9 => DMA or NORMAL (MOVABLE+DMA)
* 0xa => MOVABLE (Movable is valid only if HIGHMEM is set too)
* 0xb => BAD (MOVABLE+HIGHMEM+DMA)
* 0xc => DMA32 (MOVABLE+DMA32)
* 0xc => DMA32 or NORMAL (MOVABLE+DMA32)
* 0xd => BAD (MOVABLE+DMA32+DMA)
* 0xe => BAD (MOVABLE+DMA32+HIGHMEM)
* 0xf => BAD (MOVABLE+DMA32+HIGHMEM+DMA)
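
Moving ___GFP_WRITE down into the hole at 0x100u repacks the flag space: the highest regular flag is now ___GFP_KSWAPD_RECLAIM at bit 22, so 23 bits cover everything and ___GFP_NOLOCKDEP slots in at 0x800000u only when lockdep adds a bit. A quick compile-time check of that arithmetic — standalone sketch, constants copied from the hunk above:

#define GFP_KSWAPD_RECLAIM_BIT	0x400000u	/* bit 22, highest regular flag */
#define GFP_BITS_SHIFT_NOLOCKDEP 23
#define GFP_BITS_MASK_NOLOCKDEP	((1u << GFP_BITS_SHIFT_NOLOCKDEP) - 1)	/* 0x7fffff */

_Static_assert(GFP_KSWAPD_RECLAIM_BIT < (1u << GFP_BITS_SHIFT_NOLOCKDEP),
	       "all regular GFP flags fit below the shift");
_Static_assert((0x800000u & GFP_BITS_MASK_NOLOCKDEP) == 0,
	       "___GFP_NOLOCKDEP sits just above, covered only with lockdep");
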

View File

@ -522,9 +522,7 @@ void hmm_devmem_remove(struct hmm_devmem *devmem);
static inline void hmm_devmem_page_set_drvdata(struct page *page,
unsigned long data)
{
unsigned long *drvdata = (unsigned long *)&page->pgmap;
drvdata[1] = data;
page->hmm_data = data;
}
/*
@ -535,9 +533,7 @@ static inline void hmm_devmem_page_set_drvdata(struct page *page,
*/
static inline unsigned long hmm_devmem_page_get_drvdata(const struct page *page)
{
const unsigned long *drvdata = (const unsigned long *)&page->pgmap;
return drvdata[1];
return page->hmm_data;
}

View File

@ -29,6 +29,7 @@
#define LLONG_MIN (-LLONG_MAX - 1)
#define ULLONG_MAX (~0ULL)
#define SIZE_MAX (~(size_t)0)
#define PHYS_ADDR_MAX (~(phys_addr_t)0)
#define U8_MAX ((u8)~0U)
#define S8_MAX ((s8)(U8_MAX>>1))

View File

@ -37,17 +37,6 @@ static inline void ksm_exit(struct mm_struct *mm)
__ksm_exit(mm);
}
static inline struct stable_node *page_stable_node(struct page *page)
{
return PageKsm(page) ? page_rmapping(page) : NULL;
}
static inline void set_page_stable_node(struct page *page,
struct stable_node *stable_node)
{
page->mapping = (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM);
}
/*
* When do_swap_page() first faults in from swap what used to be a KSM page,
* no problem, it will be assigned to this vma's anon_vma; but thereafter,
@ -89,12 +78,6 @@ static inline struct page *ksm_might_need_to_copy(struct page *page,
return page;
}
static inline int page_referenced_ksm(struct page *page,
struct mem_cgroup *memcg, unsigned long *vm_flags)
{
return 0;
}
static inline void rmap_walk_ksm(struct page *page,
struct rmap_walk_control *rwc)
{

View File

@ -53,9 +53,17 @@ enum memcg_memory_event {
MEMCG_HIGH,
MEMCG_MAX,
MEMCG_OOM,
MEMCG_SWAP_MAX,
MEMCG_SWAP_FAIL,
MEMCG_NR_MEMORY_EVENTS,
};
enum mem_cgroup_protection {
MEMCG_PROT_NONE,
MEMCG_PROT_LOW,
MEMCG_PROT_MIN,
};
struct mem_cgroup_reclaim_cookie {
pg_data_t *pgdat;
int priority;
@ -158,6 +166,15 @@ enum memcg_kmem_state {
KMEM_ONLINE,
};
#if defined(CONFIG_SMP)
struct memcg_padding {
char x[0];
} ____cacheline_internodealigned_in_smp;
#define MEMCG_PADDING(name) struct memcg_padding name;
#else
#define MEMCG_PADDING(name)
#endif
/*
* The memory controller data structure. The memory controller controls both
* page cache and RSS per cgroup. We would eventually like to provide
@ -179,8 +196,7 @@ struct mem_cgroup {
struct page_counter kmem;
struct page_counter tcpmem;
/* Normal memory consumption range */
unsigned long low;
/* Upper bound of normal memory consumption range */
unsigned long high;
/* Range enforcement for interrupt charges */
@ -205,9 +221,11 @@ struct mem_cgroup {
int oom_kill_disable;
/* memory.events */
atomic_long_t memory_events[MEMCG_NR_MEMORY_EVENTS];
struct cgroup_file events_file;
/* handle for "memory.swap.events" */
struct cgroup_file swap_events_file;
/* protect arrays of thresholds */
struct mutex thresholds_lock;
@ -225,19 +243,26 @@ struct mem_cgroup {
* mem_cgroup ? And what type of charges should we move ?
*/
unsigned long move_charge_at_immigrate;
/* taken only while moving_account > 0 */
spinlock_t move_lock;
unsigned long move_lock_flags;
MEMCG_PADDING(_pad1_);
/*
* set > 0 if pages under this cgroup are moving to other cgroup.
*/
atomic_t moving_account;
/* taken only while moving_account > 0 */
spinlock_t move_lock;
struct task_struct *move_lock_task;
unsigned long move_lock_flags;
/* memory.stat */
struct mem_cgroup_stat_cpu __percpu *stat_cpu;
MEMCG_PADDING(_pad2_);
atomic_long_t stat[MEMCG_NR_STAT];
atomic_long_t events[NR_VM_EVENT_ITEMS];
atomic_long_t memory_events[MEMCG_NR_MEMORY_EVENTS];
unsigned long socket_pressure;
@ -285,7 +310,8 @@ static inline bool mem_cgroup_disabled(void)
return !cgroup_subsys_enabled(memory_cgrp_subsys);
}
bool mem_cgroup_low(struct mem_cgroup *root, struct mem_cgroup *memcg);
enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
struct mem_cgroup *memcg);
int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
gfp_t gfp_mask, struct mem_cgroup **memcgp,
@ -462,7 +488,7 @@ unsigned long mem_cgroup_get_zone_lru_size(struct lruvec *lruvec,
void mem_cgroup_handle_over_high(void);
unsigned long mem_cgroup_get_limit(struct mem_cgroup *memcg);
unsigned long mem_cgroup_get_max(struct mem_cgroup *memcg);
void mem_cgroup_print_oom_info(struct mem_cgroup *memcg,
struct task_struct *p);
@ -730,10 +756,10 @@ static inline void memcg_memory_event(struct mem_cgroup *memcg,
{
}
static inline bool mem_cgroup_low(struct mem_cgroup *root,
struct mem_cgroup *memcg)
static inline enum mem_cgroup_protection mem_cgroup_protected(
struct mem_cgroup *root, struct mem_cgroup *memcg)
{
return false;
return MEMCG_PROT_NONE;
}
static inline int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
@ -853,7 +879,7 @@ mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg,
return 0;
}
static inline unsigned long mem_cgroup_get_limit(struct mem_cgroup *memcg)
static inline unsigned long mem_cgroup_get_max(struct mem_cgroup *memcg)
{
return 0;
}
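
mem_cgroup_protected() returning a tri-state lets reclaim tell the hard memory.min guarantee apart from best-effort memory.low. A self-contained stand-in showing the intended ordering of the three answers — the real predicate works on effective boundaries rather than raw values, so this stub is illustrative only:

#include <stdio.h>

enum mem_cgroup_protection { MEMCG_PROT_NONE, MEMCG_PROT_LOW, MEMCG_PROT_MIN };

/* Stand-in for the real predicate, for illustration only. */
static enum mem_cgroup_protection protected_stub(unsigned long usage,
						 unsigned long min,
						 unsigned long low)
{
	if (usage <= min)
		return MEMCG_PROT_MIN;	/* never reclaim from this group */
	if (usage <= low)
		return MEMCG_PROT_LOW;	/* skip unless low is overridden */
	return MEMCG_PROT_NONE;
}

int main(void)
{
	printf("%d\n", protected_stub(10, 20, 40));	/* MEMCG_PROT_MIN */
	printf("%d\n", protected_stub(30, 20, 40));	/* MEMCG_PROT_LOW */
	printf("%d\n", protected_stub(50, 20, 40));	/* MEMCG_PROT_NONE */
	return 0;
}
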
@ -1093,7 +1119,6 @@ static inline void dec_lruvec_page_state(struct page *page,
#ifdef CONFIG_CGROUP_WRITEBACK
struct list_head *mem_cgroup_cgwb_list(struct mem_cgroup *memcg);
struct wb_domain *mem_cgroup_wb_domain(struct bdi_writeback *wb);
void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
unsigned long *pheadroom, unsigned long *pdirty,

include/linux/memfd.h (new file)
View File

@ -0,0 +1,16 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __LINUX_MEMFD_H
#define __LINUX_MEMFD_H
#include <linux/file.h>
#ifdef CONFIG_MEMFD_CREATE
extern long memfd_fcntl(struct file *file, unsigned int cmd, unsigned long arg);
#else
static inline long memfd_fcntl(struct file *f, unsigned int c, unsigned long a)
{
return -EINVAL;
}
#endif
#endif /* __LINUX_MEMFD_H */
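
The new header carries the memfd sealing hook out of shmem.h so non-tmpfs backends can implement it too. From userspace the path into memfd_fcntl() is fcntl() on a memfd; a small demo, assuming a libc that exposes memfd_create() and the sealing constants:

#define _GNU_SOURCE
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
	int fd = memfd_create("demo", MFD_ALLOW_SEALING);

	if (fd < 0) {
		perror("memfd_create");
		return 1;
	}
	if (ftruncate(fd, 4096) < 0 ||
	    fcntl(fd, F_ADD_SEALS, F_SEAL_GROW | F_SEAL_SHRINK) < 0) {
		perror("seal");
		return 1;
	}
	printf("seals: %#x\n", fcntl(fd, F_GET_SEALS));
	close(fd);
	return 0;
}
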

View File

@ -107,7 +107,6 @@ static inline bool movable_node_is_enabled(void)
}
#ifdef CONFIG_MEMORY_HOTREMOVE
extern bool is_pageblock_removable_nolock(struct page *page);
extern int arch_remove_memory(u64 start, u64 size,
struct vmem_altmap *altmap);
extern int __remove_pages(struct zone *zone, unsigned long start_pfn,

View File

@ -1851,6 +1851,7 @@ static inline bool pgtable_page_ctor(struct page *page)
{
if (!ptlock_init(page))
return false;
__SetPageTable(page);
inc_zone_page_state(page, NR_PAGETABLE);
return true;
}
@ -1858,6 +1859,7 @@ static inline bool pgtable_page_ctor(struct page *page)
static inline void pgtable_page_dtor(struct page *page)
{
pte_lock_deinit(page);
__ClearPageTable(page);
dec_zone_page_state(page, NR_PAGETABLE);
}
@ -2303,10 +2305,10 @@ extern void truncate_inode_pages_range(struct address_space *,
extern void truncate_inode_pages_final(struct address_space *);
/* generic vm_area_ops exported for stackable file systems */
extern int filemap_fault(struct vm_fault *vmf);
extern vm_fault_t filemap_fault(struct vm_fault *vmf);
extern void filemap_map_pages(struct vm_fault *vmf,
pgoff_t start_pgoff, pgoff_t end_pgoff);
extern int filemap_page_mkwrite(struct vm_fault *vmf);
extern vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf);
/* mm/page-writeback.c */
int __must_check write_one_page(struct page *page);
@ -2431,8 +2433,8 @@ int vm_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
unsigned long pfn, pgprot_t pgprot);
int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
pfn_t pfn);
int vm_insert_mixed_mkwrite(struct vm_area_struct *vma, unsigned long addr,
pfn_t pfn);
vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
unsigned long addr, pfn_t pfn);
int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len);
static inline vm_fault_t vmf_insert_page(struct vm_area_struct *vma,
@ -2530,12 +2532,10 @@ extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
#ifdef CONFIG_PAGE_POISONING
extern bool page_poisoning_enabled(void);
extern void kernel_poison_pages(struct page *page, int numpages, int enable);
extern bool page_is_poisoned(struct page *page);
#else
static inline bool page_poisoning_enabled(void) { return false; }
static inline void kernel_poison_pages(struct page *page, int numpages,
int enable) { }
static inline bool page_is_poisoned(struct page *page) { return false; }
#endif
#ifdef CONFIG_DEBUG_PAGEALLOC

View File

@ -33,29 +33,27 @@ struct hmm;
* it to keep track of whatever it is we are using the page for at the
* moment. Note that we have no way to track which tasks are using
* a page, though if it is a pagecache page, rmap structures can tell us
* who is mapping it. If you allocate the page using alloc_pages(), you
* can use some of the space in struct page for your own purposes.
* who is mapping it.
*
* Pages that were once in the page cache may be found under the RCU lock
* even after they have been recycled to a different purpose. The page
* cache reads and writes some of the fields in struct page to pin the
* page before checking that it's still in the page cache. It is vital
* that all users of struct page:
* 1. Use the first word as PageFlags.
* 2. Clear or preserve bit 0 of page->compound_head. It is used as
* PageTail for compound pages, and the page cache must not see false
* positives. Some users put a pointer here (guaranteed to be at least
* 4-byte aligned), other users avoid using the field altogether.
* 3. page->_refcount must either not be used, or must be used in such a
* way that other CPUs temporarily incrementing and then decrementing the
* refcount does not cause problems. On receiving the page from
* alloc_pages(), the refcount will be positive.
* 4. Either preserve page->_mapcount or restore it to -1 before freeing it.
* If you allocate the page using alloc_pages(), you can use some of the
* space in struct page for your own purposes. The five words in the main
* union are available, except for bit 0 of the first word which must be
* kept clear. Many users use this word to store a pointer to an object
* which is guaranteed to be aligned. If you use the same storage as
* page->mapping, you must restore it to NULL before freeing the page.
*
* If you allocate pages of order > 0, you can use the fields in the struct
* page associated with each page, but bear in mind that the pages may have
* been inserted individually into the page cache, so you must use the above
* four fields in a compatible way for each struct page.
* If your page will not be mapped to userspace, you can also use the four
* bytes in the mapcount union, but you must call page_mapcount_reset()
* before freeing it.
*
* If you want to use the refcount field, it must be used in such a way
* that other CPUs temporarily incrementing and then decrementing the
* refcount does not cause problems. On receiving the page from
* alloc_pages(), the refcount will be positive.
*
* If you allocate pages of order > 0, you can use some of the fields
* in each subpage, but you may need to restore some of their values
* afterwards.
*
* SLUB uses cmpxchg_double() to atomically update its freelist and
* counters. That requires that freelist & counters be adjacent and
@ -65,135 +63,122 @@ struct hmm;
*/
#ifdef CONFIG_HAVE_ALIGNED_STRUCT_PAGE
#define _struct_page_alignment __aligned(2 * sizeof(unsigned long))
#if defined(CONFIG_HAVE_CMPXCHG_DOUBLE)
#define _slub_counter_t unsigned long
#else
#define _slub_counter_t unsigned int
#endif
#else /* !CONFIG_HAVE_ALIGNED_STRUCT_PAGE */
#define _struct_page_alignment
#define _slub_counter_t unsigned int
#endif /* !CONFIG_HAVE_ALIGNED_STRUCT_PAGE */
#endif
struct page {
/* First double word block */
unsigned long flags; /* Atomic flags, some possibly
* updated asynchronously */
union {
/* See page-flags.h for the definition of PAGE_MAPPING_FLAGS */
struct address_space *mapping;
void *s_mem; /* slab first object */
atomic_t compound_mapcount; /* first tail page */
/* page_deferred_list().next -- second tail page */
};
/* Second double word */
union {
pgoff_t index; /* Our offset within mapping. */
void *freelist; /* sl[aou]b first free object */
/* page_deferred_list().prev -- second tail page */
};
union {
_slub_counter_t counters;
unsigned int active; /* SLAB */
struct { /* SLUB */
unsigned inuse:16;
unsigned objects:15;
unsigned frozen:1;
};
int units; /* SLOB */
struct { /* Page cache */
/*
* Count of ptes mapped in mms, to show when
* page is mapped & limit reverse map searches.
*
* Extra information about page type may be
* stored here for pages that are never mapped,
* in which case the value MUST BE <= -2.
* See page-flags.h for more details.
*/
atomic_t _mapcount;
/*
* Usage count, *USE WRAPPER FUNCTION* when manual
* accounting. See page_ref.h
*/
atomic_t _refcount;
};
};
/*
* WARNING: bit 0 of the first word encode PageTail(). That means
* the rest users of the storage space MUST NOT use the bit to
* Five words (20/40 bytes) are available in this union.
* WARNING: bit 0 of the first word is used for PageTail(). That
* means the other users of this union MUST NOT use the bit to
* avoid collision and false-positive PageTail().
*/
union {
struct list_head lru; /* Pageout list, eg. active_list
* protected by zone_lru_lock !
* Can be used as a generic list
* by the page owner.
*/
struct dev_pagemap *pgmap; /* ZONE_DEVICE pages are never on an
* lru or handled by a slab
* allocator, this points to the
* hosting device page map.
*/
struct { /* slub per cpu partial pages */
struct page *next; /* Next partial slab */
#ifdef CONFIG_64BIT
int pages; /* Nr of partial slabs left */
int pobjects; /* Approximate # of objects */
#else
short int pages;
short int pobjects;
#endif
struct { /* Page cache and anonymous pages */
/**
* @lru: Pageout list, eg. active_list protected by
* zone_lru_lock. Sometimes used as a generic list
* by the page owner.
*/
struct list_head lru;
/* See page-flags.h for PAGE_MAPPING_FLAGS */
struct address_space *mapping;
pgoff_t index; /* Our offset within mapping. */
/**
* @private: Mapping-private opaque data.
* Usually used for buffer_heads if PagePrivate.
* Used for swp_entry_t if PageSwapCache.
* Indicates order in the buddy system if PageBuddy.
*/
unsigned long private;
};
struct rcu_head rcu_head; /* Used by SLAB
* when destroying via RCU
*/
/* Tail pages of compound page */
struct {
unsigned long compound_head; /* If bit zero is set */
struct { /* slab, slob and slub */
union {
struct list_head slab_list; /* uses lru */
struct { /* Partial pages */
struct page *next;
#ifdef CONFIG_64BIT
int pages; /* Nr of pages left */
int pobjects; /* Approximate count */
#else
short int pages;
short int pobjects;
#endif
};
};
struct kmem_cache *slab_cache; /* not slob */
/* Double-word boundary */
void *freelist; /* first free object */
union {
void *s_mem; /* slab: first object */
unsigned long counters; /* SLUB */
struct { /* SLUB */
unsigned inuse:16;
unsigned objects:15;
unsigned frozen:1;
};
};
};
struct { /* Tail pages of compound page */
unsigned long compound_head; /* Bit zero is set */
/* First tail page only */
unsigned char compound_dtor;
unsigned char compound_order;
/* two/six bytes available here */
atomic_t compound_mapcount;
};
#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && USE_SPLIT_PMD_PTLOCKS
struct {
unsigned long __pad; /* do not overlay pmd_huge_pte
* with compound_head to avoid
* possible bit 0 collision.
*/
struct { /* Second tail page of compound page */
unsigned long _compound_pad_1; /* compound_head */
unsigned long _compound_pad_2;
struct list_head deferred_list;
};
struct { /* Page table pages */
unsigned long _pt_pad_1; /* compound_head */
pgtable_t pmd_huge_pte; /* protected by page->ptl */
};
unsigned long _pt_pad_2; /* mapping */
struct mm_struct *pt_mm; /* x86 pgds only */
#if ALLOC_SPLIT_PTLOCKS
spinlock_t *ptl;
#else
spinlock_t ptl;
#endif
};
struct { /* ZONE_DEVICE pages */
/** @pgmap: Points to the hosting device page map. */
struct dev_pagemap *pgmap;
unsigned long hmm_data;
unsigned long _zd_pad_1; /* uses mapping */
};
/** @rcu_head: You can use this to free a page by RCU. */
struct rcu_head rcu_head;
};
union {
union { /* This union is 4 bytes in size. */
/*
* Mapping-private opaque data:
* Usually used for buffer_heads if PagePrivate
* Used for swp_entry_t if PageSwapCache
* Indicates order in the buddy system if PageBuddy
* If the page can be mapped to userspace, encodes the number
* of times this page is referenced by a page table.
*/
unsigned long private;
#if USE_SPLIT_PTE_PTLOCKS
#if ALLOC_SPLIT_PTLOCKS
spinlock_t *ptl;
#else
spinlock_t ptl;
#endif
#endif
struct kmem_cache *slab_cache; /* SL[AU]B: Pointer to slab */
atomic_t _mapcount;
/*
* If the page is neither PageSlab nor mappable to userspace,
* the value stored here may help determine what this page
* is used for. See page-flags.h for a list of page types
* which are currently stored here.
*/
unsigned int page_type;
unsigned int active; /* SLAB */
int units; /* SLOB */
};
/* Usage count. *DO NOT USE DIRECTLY*. See page_ref.h */
atomic_t _refcount;
#ifdef CONFIG_MEMCG
struct mem_cgroup *mem_cgroup;
#endif
@ -413,6 +398,8 @@ struct mm_struct {
unsigned long exec_vm; /* VM_EXEC & ~VM_WRITE & ~VM_STACK */
unsigned long stack_vm; /* VM_STACK */
unsigned long def_flags;
spinlock_t arg_lock; /* protect the below fields */
unsigned long start_code, end_code, start_data, end_data;
unsigned long start_brk, brk, start_stack;
unsigned long arg_start, arg_end, env_start, env_end;
@ -627,9 +614,9 @@ struct vm_special_mapping {
* If non-NULL, then this is called to resolve page faults
* on the special mapping. If used, .pages is not checked.
*/
int (*fault)(const struct vm_special_mapping *sm,
struct vm_area_struct *vma,
struct vm_fault *vmf);
vm_fault_t (*fault)(const struct vm_special_mapping *sm,
struct vm_area_struct *vma,
struct vm_fault *vmf);
int (*mremap)(const struct vm_special_mapping *sm,
struct vm_area_struct *new_vma);

View File

@ -53,93 +53,32 @@ struct gcry_mpi {
typedef struct gcry_mpi *MPI;
#define mpi_get_nlimbs(a) ((a)->nlimbs)
#define mpi_is_neg(a) ((a)->sign)
/*-- mpiutil.c --*/
MPI mpi_alloc(unsigned nlimbs);
MPI mpi_alloc_secure(unsigned nlimbs);
MPI mpi_alloc_like(MPI a);
void mpi_free(MPI a);
int mpi_resize(MPI a, unsigned nlimbs);
int mpi_copy(MPI *copy, const MPI a);
void mpi_clear(MPI a);
int mpi_set(MPI w, MPI u);
int mpi_set_ui(MPI w, ulong u);
MPI mpi_alloc_set_ui(unsigned long u);
void mpi_m_check(MPI a);
void mpi_swap(MPI a, MPI b);
/*-- mpicoder.c --*/
MPI do_encode_md(const void *sha_buffer, unsigned nbits);
MPI mpi_read_raw_data(const void *xbuffer, size_t nbytes);
MPI mpi_read_from_buffer(const void *buffer, unsigned *ret_nread);
MPI mpi_read_raw_from_sgl(struct scatterlist *sgl, unsigned int len);
int mpi_fromstr(MPI val, const char *str);
u32 mpi_get_keyid(MPI a, u32 *keyid);
void *mpi_get_buffer(MPI a, unsigned *nbytes, int *sign);
int mpi_read_buffer(MPI a, uint8_t *buf, unsigned buf_len, unsigned *nbytes,
int *sign);
void *mpi_get_secure_buffer(MPI a, unsigned *nbytes, int *sign);
int mpi_write_to_sgl(MPI a, struct scatterlist *sg, unsigned nbytes,
int *sign);
#define log_mpidump g10_log_mpidump
/*-- mpi-add.c --*/
int mpi_add_ui(MPI w, MPI u, ulong v);
int mpi_add(MPI w, MPI u, MPI v);
int mpi_addm(MPI w, MPI u, MPI v, MPI m);
int mpi_sub_ui(MPI w, MPI u, ulong v);
int mpi_sub(MPI w, MPI u, MPI v);
int mpi_subm(MPI w, MPI u, MPI v, MPI m);
/*-- mpi-mul.c --*/
int mpi_mul_ui(MPI w, MPI u, ulong v);
int mpi_mul_2exp(MPI w, MPI u, ulong cnt);
int mpi_mul(MPI w, MPI u, MPI v);
int mpi_mulm(MPI w, MPI u, MPI v, MPI m);
/*-- mpi-div.c --*/
ulong mpi_fdiv_r_ui(MPI rem, MPI dividend, ulong divisor);
int mpi_fdiv_r(MPI rem, MPI dividend, MPI divisor);
int mpi_fdiv_q(MPI quot, MPI dividend, MPI divisor);
int mpi_fdiv_qr(MPI quot, MPI rem, MPI dividend, MPI divisor);
int mpi_tdiv_r(MPI rem, MPI num, MPI den);
int mpi_tdiv_qr(MPI quot, MPI rem, MPI num, MPI den);
int mpi_tdiv_q_2exp(MPI w, MPI u, unsigned count);
int mpi_divisible_ui(const MPI dividend, ulong divisor);
/*-- mpi-gcd.c --*/
int mpi_gcd(MPI g, const MPI a, const MPI b);
/*-- mpi-pow.c --*/
int mpi_pow(MPI w, MPI u, MPI v);
int mpi_powm(MPI res, MPI base, MPI exp, MPI mod);
/*-- mpi-mpow.c --*/
int mpi_mulpowm(MPI res, MPI *basearray, MPI *exparray, MPI mod);
/*-- mpi-cmp.c --*/
int mpi_cmp_ui(MPI u, ulong v);
int mpi_cmp(MPI u, MPI v);
/*-- mpi-scan.c --*/
int mpi_getbyte(MPI a, unsigned idx);
void mpi_putbyte(MPI a, unsigned idx, int value);
unsigned mpi_trailing_zeros(MPI a);
/*-- mpi-bit.c --*/
void mpi_normalize(MPI a);
unsigned mpi_get_nbits(MPI a);
int mpi_test_bit(MPI a, unsigned n);
int mpi_set_bit(MPI a, unsigned n);
int mpi_set_highbit(MPI a, unsigned n);
void mpi_clear_highbit(MPI a, unsigned n);
void mpi_clear_bit(MPI a, unsigned n);
int mpi_rshift(MPI x, MPI a, unsigned n);
/*-- mpi-inv.c --*/
int mpi_invm(MPI x, MPI u, MPI v);
/* inline functions */

View File

@ -642,49 +642,62 @@ PAGEFLAG_FALSE(DoubleMap)
#endif
/*
* For pages that are never mapped to userspace, page->mapcount may be
* used for storing extra information about page type. Any value used
* for this purpose must be <= -2, but it's better start not too close
* to -2 so that an underflow of the page_mapcount() won't be mistaken
* for a special page.
* For pages that are never mapped to userspace (and aren't PageSlab),
* page_type may be used. Because it is initialised to -1, we invert the
* sense of the bit, so __SetPageFoo *clears* the bit used for PageFoo, and
* __ClearPageFoo *sets* the bit used for PageFoo. We reserve a few high and
* low bits so that an underflow or overflow of page_mapcount() won't be
* mistaken for a page type value.
*/
#define PAGE_MAPCOUNT_OPS(uname, lname) \
#define PAGE_TYPE_BASE 0xf0000000
/* Reserve 0x0000007f to catch underflows of page_mapcount */
#define PG_buddy 0x00000080
#define PG_balloon 0x00000100
#define PG_kmemcg 0x00000200
#define PG_table 0x00000400
#define PageType(page, flag) \
((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
#define PAGE_TYPE_OPS(uname, lname) \
static __always_inline int Page##uname(struct page *page) \
{ \
return atomic_read(&page->_mapcount) == \
PAGE_##lname##_MAPCOUNT_VALUE; \
return PageType(page, PG_##lname); \
} \
static __always_inline void __SetPage##uname(struct page *page) \
{ \
VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page); \
atomic_set(&page->_mapcount, PAGE_##lname##_MAPCOUNT_VALUE); \
VM_BUG_ON_PAGE(!PageType(page, 0), page); \
page->page_type &= ~PG_##lname; \
} \
static __always_inline void __ClearPage##uname(struct page *page) \
{ \
VM_BUG_ON_PAGE(!Page##uname(page), page); \
atomic_set(&page->_mapcount, -1); \
page->page_type |= PG_##lname; \
}
/*
* PageBuddy() indicate that the page is free and in the buddy system
* PageBuddy() indicates that the page is free and in the buddy system
* (see mm/page_alloc.c).
*/
#define PAGE_BUDDY_MAPCOUNT_VALUE (-128)
PAGE_MAPCOUNT_OPS(Buddy, BUDDY)
PAGE_TYPE_OPS(Buddy, buddy)
/*
* PageBalloon() is set on pages that are on the balloon page list
* PageBalloon() is true for pages that are on the balloon page list
* (see mm/balloon_compaction.c).
*/
#define PAGE_BALLOON_MAPCOUNT_VALUE (-256)
PAGE_MAPCOUNT_OPS(Balloon, BALLOON)
PAGE_TYPE_OPS(Balloon, balloon)
/*
* If kmemcg is enabled, the buddy allocator will set PageKmemcg() on
* pages allocated with __GFP_ACCOUNT. It gets cleared on page free.
*/
#define PAGE_KMEMCG_MAPCOUNT_VALUE (-512)
PAGE_MAPCOUNT_OPS(Kmemcg, KMEMCG)
PAGE_TYPE_OPS(Kmemcg, kmemcg)
/*
* Marks pages in use as page tables.
*/
PAGE_TYPE_OPS(Table, table)
extern bool is_free_buddy_page(struct page *page);
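
The inverted encoding means a freshly initialised page (page_type == -1) has no type bit cleared, every __SetPageFoo clears exactly one bit, and PageType() verifies that the high base nibble is intact. A standalone demonstration of the bit arithmetic, using the constants from the hunk above:

#include <assert.h>
#include <stdio.h>

#define PAGE_TYPE_BASE	0xf0000000u
#define PG_buddy	0x00000080u
#define PG_table	0x00000400u

#define PageType(pt, flag) (((pt) & (PAGE_TYPE_BASE | (flag))) == PAGE_TYPE_BASE)

int main(void)
{
	unsigned int page_type = 0xffffffffu;	/* _mapcount == -1 on a free page */

	assert(!PageType(page_type, PG_table));
	page_type &= ~PG_table;			/* __SetPageTable() */
	assert(PageType(page_type, PG_table));
	assert(!PageType(page_type, PG_buddy));	/* other types stay clear */
	page_type |= PG_table;			/* __ClearPageTable() */
	assert(!PageType(page_type, PG_table));
	printf("inverted page_type encoding behaves as expected\n");
	return 0;
}
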

View File

@ -7,10 +7,22 @@
#include <asm/page.h>
struct page_counter {
atomic_long_t count;
unsigned long limit;
atomic_long_t usage;
unsigned long min;
unsigned long low;
unsigned long max;
struct page_counter *parent;
/* effective memory.min and memory.min usage tracking */
unsigned long emin;
atomic_long_t min_usage;
atomic_long_t children_min_usage;
/* effective memory.low and memory.low usage tracking */
unsigned long elow;
atomic_long_t low_usage;
atomic_long_t children_low_usage;
/* legacy */
unsigned long watermark;
unsigned long failcnt;
@ -25,14 +37,14 @@ struct page_counter {
static inline void page_counter_init(struct page_counter *counter,
struct page_counter *parent)
{
atomic_long_set(&counter->count, 0);
counter->limit = PAGE_COUNTER_MAX;
atomic_long_set(&counter->usage, 0);
counter->max = PAGE_COUNTER_MAX;
counter->parent = parent;
}
static inline unsigned long page_counter_read(struct page_counter *counter)
{
return atomic_long_read(&counter->count);
return atomic_long_read(&counter->usage);
}
void page_counter_cancel(struct page_counter *counter, unsigned long nr_pages);
@ -41,7 +53,9 @@ bool page_counter_try_charge(struct page_counter *counter,
unsigned long nr_pages,
struct page_counter **fail);
void page_counter_uncharge(struct page_counter *counter, unsigned long nr_pages);
int page_counter_limit(struct page_counter *counter, unsigned long limit);
void page_counter_set_min(struct page_counter *counter, unsigned long nr_pages);
void page_counter_set_low(struct page_counter *counter, unsigned long nr_pages);
int page_counter_set_max(struct page_counter *counter, unsigned long nr_pages);
int page_counter_memparse(const char *buf, const char *max,
unsigned long *nr_pages);
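
The split usage/min/low tracking feeds the proportional overcommit rule: when children ask for more protection than the parent's effective boundary allows, each child gets a share weighted by its usage actually under protection. A sketch of that calculation under those assumed semantics — simplified, not the exact kernel code, which also computes the parent's effective value recursively:

static unsigned long effective_low(unsigned long low, unsigned long usage,
				   unsigned long parent_elow,
				   unsigned long siblings_low_usage)
{
	unsigned long low_usage = usage < low ? usage : low;
	unsigned long share;

	if (!parent_elow || !siblings_low_usage)
		return 0;
	/* cap by this child's proportional share of the parent's elow */
	share = parent_elow * low_usage / siblings_low_usage;
	return low < share ? low : share;
}
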

View File

@ -122,7 +122,7 @@ pud_t pud_mkdevmap(pud_t pud);
#endif
#endif /* __HAVE_ARCH_PTE_DEVMAP */
#ifdef __HAVE_ARCH_PTE_SPECIAL
#ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
static inline bool pfn_t_special(pfn_t pfn)
{
return (pfn.val & PFN_SPECIAL) == PFN_SPECIAL;
@ -132,5 +132,5 @@ static inline bool pfn_t_special(pfn_t pfn)
{
return false;
}
#endif /* __HAVE_ARCH_PTE_SPECIAL */
#endif /* CONFIG_ARCH_HAS_PTE_SPECIAL */
#endif /* _LINUX_PFN_T_H_ */

View File

@ -163,9 +163,13 @@ static inline gfp_t current_gfp_context(gfp_t flags)
}
#ifdef CONFIG_LOCKDEP
extern void __fs_reclaim_acquire(void);
extern void __fs_reclaim_release(void);
extern void fs_reclaim_acquire(gfp_t gfp_mask);
extern void fs_reclaim_release(gfp_t gfp_mask);
#else
static inline void __fs_reclaim_acquire(void) { }
static inline void __fs_reclaim_release(void) { }
static inline void fs_reclaim_acquire(gfp_t gfp_mask) { }
static inline void fs_reclaim_release(gfp_t gfp_mask) { }
#endif

View File

@ -110,19 +110,6 @@ static inline bool shmem_file(struct file *file)
extern bool shmem_charge(struct inode *inode, long pages);
extern void shmem_uncharge(struct inode *inode, long pages);
#ifdef CONFIG_TMPFS
extern long memfd_fcntl(struct file *file, unsigned int cmd, unsigned long arg);
#else
static inline long memfd_fcntl(struct file *f, unsigned int c, unsigned long a)
{
return -EINVAL;
}
#endif
#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
extern bool shmem_huge_enabled(struct vm_area_struct *vma);
#else

View File

@ -67,9 +67,10 @@ struct kmem_cache {
/*
* If debugging is enabled, then the allocator can add additional
* fields and/or padding to every object. size contains the total
* object size including these internal fields, the following two
* variables contain the offset to the user object and its size.
* fields and/or padding to every object. 'size' contains the total
* object size including these internal fields, while 'obj_offset'
* and 'object_size' contain the offset to the user object and its
* size.
*/
int obj_offset;
#endif /* CONFIG_DEBUG_SLAB */

View File

@ -101,7 +101,6 @@ struct kmem_cache {
void (*ctor)(void *);
unsigned int inuse; /* Offset to metadata */
unsigned int align; /* Alignment */
unsigned int reserved; /* Reserved bytes at the end of slabs */
unsigned int red_left_pad; /* Left redzone padding size */
const char *name; /* Name (only for display!) */
struct list_head list; /* List of slab caches */

View File

@ -10,14 +10,14 @@
#define DECLARE_BITMAP(name,bits) \
unsigned long name[BITS_TO_LONGS(bits)]
typedef __u32 __kernel_dev_t;
typedef u32 __kernel_dev_t;
typedef __kernel_fd_set fd_set;
typedef __kernel_dev_t dev_t;
typedef __kernel_ino_t ino_t;
typedef __kernel_mode_t mode_t;
typedef unsigned short umode_t;
typedef __u32 nlink_t;
typedef u32 nlink_t;
typedef __kernel_off_t off_t;
typedef __kernel_pid_t pid_t;
typedef __kernel_daddr_t daddr_t;
@ -95,29 +95,29 @@ typedef unsigned long ulong;
#ifndef __BIT_TYPES_DEFINED__
#define __BIT_TYPES_DEFINED__
typedef __u8 u_int8_t;
typedef __s8 int8_t;
typedef __u16 u_int16_t;
typedef __s16 int16_t;
typedef __u32 u_int32_t;
typedef __s32 int32_t;
typedef u8 u_int8_t;
typedef s8 int8_t;
typedef u16 u_int16_t;
typedef s16 int16_t;
typedef u32 u_int32_t;
typedef s32 int32_t;
#endif /* !(__BIT_TYPES_DEFINED__) */
typedef __u8 uint8_t;
typedef __u16 uint16_t;
typedef __u32 uint32_t;
typedef u8 uint8_t;
typedef u16 uint16_t;
typedef u32 uint32_t;
#if defined(__GNUC__)
typedef __u64 uint64_t;
typedef __u64 u_int64_t;
typedef __s64 int64_t;
typedef u64 uint64_t;
typedef u64 u_int64_t;
typedef s64 int64_t;
#endif
/* this is a special 64bit data type that is 8-byte aligned */
#define aligned_u64 __u64 __attribute__((aligned(8)))
#define aligned_be64 __be64 __attribute__((aligned(8)))
#define aligned_le64 __le64 __attribute__((aligned(8)))
#define aligned_u64 __aligned_u64
#define aligned_be64 __aligned_be64
#define aligned_le64 __aligned_le64
/**
* The type used for indexing onto a disc or disc partition.

View File

@ -31,10 +31,12 @@
extern int handle_userfault(struct vm_fault *vmf, unsigned long reason);
extern ssize_t mcopy_atomic(struct mm_struct *dst_mm, unsigned long dst_start,
unsigned long src_start, unsigned long len);
unsigned long src_start, unsigned long len,
bool *mmap_changing);
extern ssize_t mfill_zeropage(struct mm_struct *dst_mm,
unsigned long dst_start,
unsigned long len);
unsigned long len,
bool *mmap_changing);
/* mm helpers */
static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,

View File

@ -1,6 +1,8 @@
/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */
/*
* Copyright 1997 Transmeta Corporation - All Rights Reserved
* Copyright 1997 Transmeta Corporation - All Rights Reserved
* Copyright 1999-2000 Jeremy Fitzhardinge <jeremy@goop.org>
* Copyright 2005-2006,2013,2017-2018 Ian Kent <raven@themaw.net>
*
* This file is part of the Linux kernel and is made available under
* the terms of the GNU General Public License, version 2, or at your
@ -8,7 +10,6 @@
*
* ----------------------------------------------------------------------- */
#ifndef _UAPI_LINUX_AUTO_FS_H
#define _UAPI_LINUX_AUTO_FS_H
@ -18,13 +19,11 @@
#include <sys/ioctl.h>
#endif /* __KERNEL__ */
#define AUTOFS_PROTO_VERSION 5
#define AUTOFS_MIN_PROTO_VERSION 3
#define AUTOFS_MAX_PROTO_VERSION 5
/* This file describes autofs v3 */
#define AUTOFS_PROTO_VERSION 3
/* Range of protocol versions defined */
#define AUTOFS_MAX_PROTO_VERSION AUTOFS_PROTO_VERSION
#define AUTOFS_MIN_PROTO_VERSION AUTOFS_PROTO_VERSION
#define AUTOFS_PROTO_SUBVERSION 2
/*
* The wait_queue_token (autofs_wqt_t) is part of a structure which is passed
@ -76,9 +75,155 @@ enum {
#define AUTOFS_IOC_READY _IO(AUTOFS_IOCTL, AUTOFS_IOC_READY_CMD)
#define AUTOFS_IOC_FAIL _IO(AUTOFS_IOCTL, AUTOFS_IOC_FAIL_CMD)
#define AUTOFS_IOC_CATATONIC _IO(AUTOFS_IOCTL, AUTOFS_IOC_CATATONIC_CMD)
#define AUTOFS_IOC_PROTOVER _IOR(AUTOFS_IOCTL, AUTOFS_IOC_PROTOVER_CMD, int)
#define AUTOFS_IOC_SETTIMEOUT32 _IOWR(AUTOFS_IOCTL, AUTOFS_IOC_SETTIMEOUT_CMD, compat_ulong_t)
#define AUTOFS_IOC_SETTIMEOUT _IOWR(AUTOFS_IOCTL, AUTOFS_IOC_SETTIMEOUT_CMD, unsigned long)
#define AUTOFS_IOC_EXPIRE _IOR(AUTOFS_IOCTL, AUTOFS_IOC_EXPIRE_CMD, struct autofs_packet_expire)
#define AUTOFS_IOC_PROTOVER _IOR(AUTOFS_IOCTL, \
AUTOFS_IOC_PROTOVER_CMD, int)
#define AUTOFS_IOC_SETTIMEOUT32 _IOWR(AUTOFS_IOCTL, \
AUTOFS_IOC_SETTIMEOUT_CMD, \
compat_ulong_t)
#define AUTOFS_IOC_SETTIMEOUT _IOWR(AUTOFS_IOCTL, \
AUTOFS_IOC_SETTIMEOUT_CMD, \
unsigned long)
#define AUTOFS_IOC_EXPIRE _IOR(AUTOFS_IOCTL, \
AUTOFS_IOC_EXPIRE_CMD, \
struct autofs_packet_expire)
/* autofs version 4 and later definitions */
/* Mask for expire behaviour */
#define AUTOFS_EXP_IMMEDIATE 1
#define AUTOFS_EXP_LEAVES 2
#define AUTOFS_TYPE_ANY 0U
#define AUTOFS_TYPE_INDIRECT 1U
#define AUTOFS_TYPE_DIRECT 2U
#define AUTOFS_TYPE_OFFSET 4U
static inline void set_autofs_type_indirect(unsigned int *type)
{
*type = AUTOFS_TYPE_INDIRECT;
}
static inline unsigned int autofs_type_indirect(unsigned int type)
{
return (type == AUTOFS_TYPE_INDIRECT);
}
static inline void set_autofs_type_direct(unsigned int *type)
{
*type = AUTOFS_TYPE_DIRECT;
}
static inline unsigned int autofs_type_direct(unsigned int type)
{
return (type == AUTOFS_TYPE_DIRECT);
}
static inline void set_autofs_type_offset(unsigned int *type)
{
*type = AUTOFS_TYPE_OFFSET;
}
static inline unsigned int autofs_type_offset(unsigned int type)
{
return (type == AUTOFS_TYPE_OFFSET);
}
static inline unsigned int autofs_type_trigger(unsigned int type)
{
return (type == AUTOFS_TYPE_DIRECT || type == AUTOFS_TYPE_OFFSET);
}
/*
* This isn't really a type as we use it to say "no type set" to
* indicate we want to search for "any" mount in the
* autofs_dev_ioctl_ismountpoint() device ioctl function.
*/
static inline void set_autofs_type_any(unsigned int *type)
{
*type = AUTOFS_TYPE_ANY;
}
static inline unsigned int autofs_type_any(unsigned int type)
{
return (type == AUTOFS_TYPE_ANY);
}
/* Daemon notification packet types */
enum autofs_notify {
NFY_NONE,
NFY_MOUNT,
NFY_EXPIRE
};
/* Kernel protocol version 4 packet types */
/* Expire entry (umount request) */
#define autofs_ptype_expire_multi 2
/* Kernel protocol version 5 packet types */
/* Indirect mount missing and expire requests. */
#define autofs_ptype_missing_indirect 3
#define autofs_ptype_expire_indirect 4
/* Direct mount missing and expire requests */
#define autofs_ptype_missing_direct 5
#define autofs_ptype_expire_direct 6
/* v4 multi expire (via pipe) */
struct autofs_packet_expire_multi {
struct autofs_packet_hdr hdr;
autofs_wqt_t wait_queue_token;
int len;
char name[NAME_MAX+1];
};
union autofs_packet_union {
struct autofs_packet_hdr hdr;
struct autofs_packet_missing missing;
struct autofs_packet_expire expire;
struct autofs_packet_expire_multi expire_multi;
};
/* autofs v5 common packet struct */
struct autofs_v5_packet {
struct autofs_packet_hdr hdr;
autofs_wqt_t wait_queue_token;
__u32 dev;
__u64 ino;
__u32 uid;
__u32 gid;
__u32 pid;
__u32 tgid;
__u32 len;
char name[NAME_MAX+1];
};
typedef struct autofs_v5_packet autofs_packet_missing_indirect_t;
typedef struct autofs_v5_packet autofs_packet_expire_indirect_t;
typedef struct autofs_v5_packet autofs_packet_missing_direct_t;
typedef struct autofs_v5_packet autofs_packet_expire_direct_t;
union autofs_v5_packet_union {
struct autofs_packet_hdr hdr;
struct autofs_v5_packet v5_packet;
autofs_packet_missing_indirect_t missing_indirect;
autofs_packet_expire_indirect_t expire_indirect;
autofs_packet_missing_direct_t missing_direct;
autofs_packet_expire_direct_t expire_direct;
};
enum {
AUTOFS_IOC_EXPIRE_MULTI_CMD = 0x66, /* AUTOFS_IOC_EXPIRE_CMD + 1 */
AUTOFS_IOC_PROTOSUBVER_CMD,
AUTOFS_IOC_ASKUMOUNT_CMD = 0x70, /* AUTOFS_DEV_IOCTL_VERSION_CMD - 1 */
};
#define AUTOFS_IOC_EXPIRE_MULTI _IOW(AUTOFS_IOCTL, \
AUTOFS_IOC_EXPIRE_MULTI_CMD, int)
#define AUTOFS_IOC_PROTOSUBVER _IOR(AUTOFS_IOCTL, \
AUTOFS_IOC_PROTOSUBVER_CMD, int)
#define AUTOFS_IOC_ASKUMOUNT _IOR(AUTOFS_IOCTL, \
AUTOFS_IOC_ASKUMOUNT_CMD, int)
#endif /* _UAPI_LINUX_AUTO_FS_H */
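
Everything from the "autofs version 4 and later definitions" comment down used to live in auto_fs4.h; merging it here gives daemons one header for all protocol versions. A userspace sketch of the read loop an automount daemon runs against the kernel pipe, dispatching on the v5 packet types defined above — illustrative only:

#include <unistd.h>
#include <linux/limits.h>
#include <linux/auto_fs.h>

static int handle_packet(int pipefd)
{
	union autofs_v5_packet_union pkt;

	if (read(pipefd, &pkt, sizeof(pkt)) < 0)
		return -1;
	switch (pkt.hdr.type) {
	case autofs_ptype_missing_indirect:
	case autofs_ptype_missing_direct:
		/* mount pkt.v5_packet.name, then signal READY or FAIL */
		break;
	case autofs_ptype_expire_indirect:
	case autofs_ptype_expire_direct:
		/* try to umount pkt.v5_packet.name, report the result */
		break;
	}
	return 0;
}
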

View File

@ -7,156 +7,9 @@
* option, any later version, incorporated herein by reference.
*/
#ifndef _LINUX_AUTO_FS4_H
#define _LINUX_AUTO_FS4_H
#ifndef _UAPI_LINUX_AUTO_FS4_H
#define _UAPI_LINUX_AUTO_FS4_H
/* Include common v3 definitions */
#include <linux/types.h>
#include <linux/auto_fs.h>
/* autofs v4 definitions */
#undef AUTOFS_PROTO_VERSION
#undef AUTOFS_MIN_PROTO_VERSION
#undef AUTOFS_MAX_PROTO_VERSION
#define AUTOFS_PROTO_VERSION 5
#define AUTOFS_MIN_PROTO_VERSION 3
#define AUTOFS_MAX_PROTO_VERSION 5
#define AUTOFS_PROTO_SUBVERSION 2
/* Mask for expire behaviour */
#define AUTOFS_EXP_IMMEDIATE 1
#define AUTOFS_EXP_LEAVES 2
#define AUTOFS_TYPE_ANY 0U
#define AUTOFS_TYPE_INDIRECT 1U
#define AUTOFS_TYPE_DIRECT 2U
#define AUTOFS_TYPE_OFFSET 4U
static inline void set_autofs_type_indirect(unsigned int *type)
{
*type = AUTOFS_TYPE_INDIRECT;
}
static inline unsigned int autofs_type_indirect(unsigned int type)
{
return (type == AUTOFS_TYPE_INDIRECT);
}
static inline void set_autofs_type_direct(unsigned int *type)
{
*type = AUTOFS_TYPE_DIRECT;
}
static inline unsigned int autofs_type_direct(unsigned int type)
{
return (type == AUTOFS_TYPE_DIRECT);
}
static inline void set_autofs_type_offset(unsigned int *type)
{
*type = AUTOFS_TYPE_OFFSET;
}
static inline unsigned int autofs_type_offset(unsigned int type)
{
return (type == AUTOFS_TYPE_OFFSET);
}
static inline unsigned int autofs_type_trigger(unsigned int type)
{
return (type == AUTOFS_TYPE_DIRECT || type == AUTOFS_TYPE_OFFSET);
}
/*
* This isn't really a type as we use it to say "no type set" to
* indicate we want to search for "any" mount in the
* autofs_dev_ioctl_ismountpoint() device ioctl function.
*/
static inline void set_autofs_type_any(unsigned int *type)
{
*type = AUTOFS_TYPE_ANY;
}
static inline unsigned int autofs_type_any(unsigned int type)
{
return (type == AUTOFS_TYPE_ANY);
}
/* Daemon notification packet types */
enum autofs_notify {
NFY_NONE,
NFY_MOUNT,
NFY_EXPIRE
};
/* Kernel protocol version 4 packet types */
/* Expire entry (umount request) */
#define autofs_ptype_expire_multi 2
/* Kernel protocol version 5 packet types */
/* Indirect mount missing and expire requests. */
#define autofs_ptype_missing_indirect 3
#define autofs_ptype_expire_indirect 4
/* Direct mount missing and expire requests */
#define autofs_ptype_missing_direct 5
#define autofs_ptype_expire_direct 6
/* v4 multi expire (via pipe) */
struct autofs_packet_expire_multi {
struct autofs_packet_hdr hdr;
autofs_wqt_t wait_queue_token;
int len;
char name[NAME_MAX+1];
};
union autofs_packet_union {
struct autofs_packet_hdr hdr;
struct autofs_packet_missing missing;
struct autofs_packet_expire expire;
struct autofs_packet_expire_multi expire_multi;
};
/* autofs v5 common packet struct */
struct autofs_v5_packet {
struct autofs_packet_hdr hdr;
autofs_wqt_t wait_queue_token;
__u32 dev;
__u64 ino;
__u32 uid;
__u32 gid;
__u32 pid;
__u32 tgid;
__u32 len;
char name[NAME_MAX+1];
};
typedef struct autofs_v5_packet autofs_packet_missing_indirect_t;
typedef struct autofs_v5_packet autofs_packet_expire_indirect_t;
typedef struct autofs_v5_packet autofs_packet_missing_direct_t;
typedef struct autofs_v5_packet autofs_packet_expire_direct_t;
union autofs_v5_packet_union {
struct autofs_packet_hdr hdr;
struct autofs_v5_packet v5_packet;
autofs_packet_missing_indirect_t missing_indirect;
autofs_packet_expire_indirect_t expire_indirect;
autofs_packet_missing_direct_t missing_direct;
autofs_packet_expire_direct_t expire_direct;
};
enum {
AUTOFS_IOC_EXPIRE_MULTI_CMD = 0x66, /* AUTOFS_IOC_EXPIRE_CMD + 1 */
AUTOFS_IOC_PROTOSUBVER_CMD,
AUTOFS_IOC_ASKUMOUNT_CMD = 0x70, /* AUTOFS_DEV_IOCTL_VERSION_CMD - 1 */
};
#define AUTOFS_IOC_EXPIRE_MULTI _IOW(AUTOFS_IOCTL, AUTOFS_IOC_EXPIRE_MULTI_CMD, int)
#define AUTOFS_IOC_PROTOSUBVER _IOR(AUTOFS_IOCTL, AUTOFS_IOC_PROTOSUBVER_CMD, int)
#define AUTOFS_IOC_ASKUMOUNT _IOR(AUTOFS_IOCTL, AUTOFS_IOC_ASKUMOUNT_CMD, int)
#endif /* _LINUX_AUTO_FS4_H */
#endif /* _UAPI_LINUX_AUTO_FS4_H */

View File

@ -35,6 +35,6 @@
#define KPF_BALLOON 23
#define KPF_ZERO_PAGE 24
#define KPF_IDLE 25
#define KPF_PGTABLE 26
#endif /* _UAPILINUX_KERNEL_PAGE_FLAGS_H */

View File

@ -460,6 +460,7 @@ static int __init crash_save_vmcoreinfo_init(void)
VMCOREINFO_NUMBER(PG_hwpoison);
#endif
VMCOREINFO_NUMBER(PG_head_mask);
#define PAGE_BUDDY_MAPCOUNT_VALUE (~PG_buddy)
VMCOREINFO_NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE);
#ifdef CONFIG_HUGETLB_PAGE
VMCOREINFO_NUMBER(HUGETLB_PAGE_DTOR);

View File

@ -899,6 +899,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
mm->pinned_vm = 0;
memset(&mm->rss_stat, 0, sizeof(mm->rss_stat));
spin_lock_init(&mm->page_table_lock);
spin_lock_init(&mm->arg_lock);
mm_init_cpumask(mm);
mm_init_aio(mm);
mm_init_owner(mm, p);

View File

@ -44,6 +44,7 @@ int __read_mostly sysctl_hung_task_warnings = 10;
static int __read_mostly did_panic;
static bool hung_task_show_lock;
static bool hung_task_call_panic;
static struct task_struct *watchdog_task;
@ -127,10 +128,8 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
touch_nmi_watchdog();
if (sysctl_hung_task_panic) {
if (hung_task_show_lock)
debug_show_all_locks();
trigger_all_cpu_backtrace();
panic("hung_task: blocked tasks");
hung_task_show_lock = true;
hung_task_call_panic = true;
}
}
@ -193,6 +192,10 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
rcu_read_unlock();
if (hung_task_show_lock)
debug_show_all_locks();
if (hung_task_call_panic) {
trigger_all_cpu_backtrace();
panic("hung_task: blocked tasks");
}
}
static long hung_timeout_jiffies(unsigned long last_checked,
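
The hunks defer the backtrace and panic until after the RCU-protected task-list walk has finished, instead of panicking from inside it. The shape of that pattern, as a hedged kernel-context fragment (task_is_hung is a hypothetical stand-in for the real timeout check):

struct task_struct *g, *t;
bool need_panic = false;

rcu_read_lock();
for_each_process_thread(g, t)
	if (task_is_hung(t))		/* hypothetical predicate */
		need_panic = true;
rcu_read_unlock();

if (need_panic) {
	trigger_all_cpu_backtrace();	/* safe outside the RCU section */
	panic("hung_task: blocked tasks");
}
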

View File

@ -2018,7 +2018,11 @@ static int prctl_set_mm_map(int opt, const void __user *addr, unsigned long data
return error;
}
down_write(&mm->mmap_sem);
/*
* arg_lock protects concurrent updates but we still need mmap_sem for
* read to exclude races with sys_brk.
*/
down_read(&mm->mmap_sem);
/*
* We don't validate if these members are pointing to
@ -2032,6 +2036,7 @@ static int prctl_set_mm_map(int opt, const void __user *addr, unsigned long data
* to any problem in kernel itself
*/
spin_lock(&mm->arg_lock);
mm->start_code = prctl_map.start_code;
mm->end_code = prctl_map.end_code;
mm->start_data = prctl_map.start_data;
@ -2043,6 +2048,7 @@ static int prctl_set_mm_map(int opt, const void __user *addr, unsigned long data
mm->arg_end = prctl_map.arg_end;
mm->env_start = prctl_map.env_start;
mm->env_end = prctl_map.env_end;
spin_unlock(&mm->arg_lock);
/*
* Note this update of @saved_auxv is lockless thus
@ -2055,7 +2061,7 @@ static int prctl_set_mm_map(int opt, const void __user *addr, unsigned long data
if (prctl_map.auxv_size)
memcpy(mm->saved_auxv, user_auxv, sizeof(user_auxv));
up_write(&mm->mmap_sem);
up_read(&mm->mmap_sem);
return 0;
}
#endif /* CONFIG_CHECKPOINT_RESTORE */
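
prctl_set_mm_map() can now update these fields under mmap_sem held only for read because arg_lock serialises the writers; readers take just the spinlock to get a consistent snapshot. The reader side, matching the fs/proc hunks earlier (kernel-context fragment):

unsigned long arg_start, arg_end;

spin_lock(&mm->arg_lock);
arg_start = mm->arg_start;
arg_end = mm->arg_end;
spin_unlock(&mm->arg_lock);
/* consistent snapshot even while prctl_set_mm_map() runs concurrently */
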

View File

@ -64,12 +64,9 @@ EXPORT_SYMBOL(__bitmap_equal);
void __bitmap_complement(unsigned long *dst, const unsigned long *src, unsigned int bits)
{
unsigned int k, lim = bits/BITS_PER_LONG;
unsigned int k, lim = BITS_TO_LONGS(bits);
for (k = 0; k < lim; ++k)
dst[k] = ~src[k];
if (bits % BITS_PER_LONG)
dst[k] = ~src[k];
}
EXPORT_SYMBOL(__bitmap_complement);
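
BITS_TO_LONGS() rounds up, so the single loop now also covers the partial tail word that the removed branch handled; callers already ignore bits past 'bits', so complementing whole words is safe. A standalone check that both forms produce identical output (assumes a 64-bit long):

#include <assert.h>
#include <limits.h>
#include <string.h>

#define BITS_PER_LONG	(sizeof(long) * CHAR_BIT)
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* new single-loop form */
static void complement_new(unsigned long *dst, const unsigned long *src,
			   unsigned int bits)
{
	unsigned int k, lim = BITS_TO_LONGS(bits);

	for (k = 0; k < lim; ++k)
		dst[k] = ~src[k];
}

/* old form: full words, then an explicit partial-word tail */
static void complement_old(unsigned long *dst, const unsigned long *src,
			   unsigned int bits)
{
	unsigned int k, lim = bits / BITS_PER_LONG;

	for (k = 0; k < lim; ++k)
		dst[k] = ~src[k];
	if (bits % BITS_PER_LONG)
		dst[k] = ~src[k];
}

int main(void)
{
	unsigned long src[2] = { 0x0123456789abcdefUL, 0xfedcba9876543210UL };
	unsigned long a[2], b[2];

	complement_new(a, src, 70);	/* 70 bits: one word + 6-bit tail */
	complement_old(b, src, 70);
	assert(memcmp(a, b, sizeof(a)) == 0);	/* identical results */
	return 0;
}
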

View File

@ -30,10 +30,7 @@ int alloc_bucket_spinlocks(spinlock_t **locks, unsigned int *locks_mask,
}
if (sizeof(spinlock_t) != 0) {
if (gfpflags_allow_blocking(gfp))
tlocks = kvmalloc(size * sizeof(spinlock_t), gfp);
else
tlocks = kmalloc_array(size, sizeof(spinlock_t), gfp);
tlocks = kvmalloc_array(size, sizeof(spinlock_t), gfp);
if (!tlocks)
return -ENOMEM;
for (i = 0; i < size; i++)

View File

@ -4,9 +4,9 @@
#include <linux/idr.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/xarray.h>
DEFINE_PER_CPU(struct ida_bitmap *, ida_bitmap);
static DEFINE_SPINLOCK(simple_ida_lock);
/**
* idr_alloc_u32() - Allocate an ID.
@ -581,7 +581,7 @@ again:
if (!ida_pre_get(ida, gfp_mask))
return -ENOMEM;
spin_lock_irqsave(&simple_ida_lock, flags);
xa_lock_irqsave(&ida->ida_rt, flags);
ret = ida_get_new_above(ida, start, &id);
if (!ret) {
if (id > max) {
@ -591,7 +591,7 @@ again:
ret = id;
}
}
spin_unlock_irqrestore(&simple_ida_lock, flags);
xa_unlock_irqrestore(&ida->ida_rt, flags);
if (unlikely(ret == -EAGAIN))
goto again;
@ -615,8 +615,8 @@ void ida_simple_remove(struct ida *ida, unsigned int id)
unsigned long flags;
BUG_ON((int)id < 0);
spin_lock_irqsave(&simple_ida_lock, flags);
xa_lock_irqsave(&ida->ida_rt, flags);
ida_remove(ida, id);
spin_unlock_irqrestore(&simple_ida_lock, flags);
xa_unlock_irqrestore(&ida->ida_rt, flags);
}
EXPORT_SYMBOL(ida_simple_remove);
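
With the per-tree xa_lock protecting ida_simple_get()/ida_simple_remove(), concurrent users of different IDAs no longer contend on one global lock. The calling convention is unchanged; a minimal kernel-context sketch with a hypothetical user:

static DEFINE_IDA(example_ida);

int example_get_id(void)
{
	/* allocates the lowest free ID >= 0; may sleep with GFP_KERNEL */
	return ida_simple_get(&example_ida, 0, 0, GFP_KERNEL);
}

void example_put_id(int id)
{
	ida_simple_remove(&example_ida, id);
}
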

View File

@ -65,13 +65,6 @@
typedef mpi_limb_t *mpi_ptr_t; /* pointer to a limb */
typedef int mpi_size_t; /* (must be a signed type) */
static inline int RESIZE_IF_NEEDED(MPI a, unsigned b)
{
if (a->alloced < b)
return mpi_resize(a, b);
return 0;
}
/* Copy N limbs from S to D. */
#define MPN_COPY(d, s, n) \
do { \
@ -80,13 +73,6 @@ static inline int RESIZE_IF_NEEDED(MPI a, unsigned b)
(d)[_i] = (s)[_i]; \
} while (0)
#define MPN_COPY_INCR(d, s, n) \
do { \
mpi_size_t _i; \
for (_i = 0; _i < (n); _i++) \
(d)[_i] = (s)[_i]; \
} while (0)
#define MPN_COPY_DECR(d, s, n) \
do { \
mpi_size_t _i; \
@ -111,15 +97,6 @@ static inline int RESIZE_IF_NEEDED(MPI a, unsigned b)
} \
} while (0)
#define MPN_NORMALIZE_NOT_ZERO(d, n) \
do { \
for (;;) { \
if ((d)[(n)-1]) \
break; \
(n)--; \
} \
} while (0)
#define MPN_MUL_N_RECURSE(prodp, up, vp, size, tspace) \
do { \
if ((size) < KARATSUBA_THRESHOLD) \
@ -128,46 +105,11 @@ static inline int RESIZE_IF_NEEDED(MPI a, unsigned b)
mul_n(prodp, up, vp, size, tspace); \
} while (0);
/* Divide the two-limb number in (NH,,NL) by D, with DI being the largest
* limb not larger than (2**(2*BITS_PER_MP_LIMB))/D - (2**BITS_PER_MP_LIMB).
* If this would yield overflow, DI should be the largest possible number
* (i.e., only ones). For correct operation, the most significant bit of D
* has to be set. Put the quotient in Q and the remainder in R.
*/
#define UDIV_QRNND_PREINV(q, r, nh, nl, d, di) \
do { \
mpi_limb_t _q, _ql, _r; \
mpi_limb_t _xh, _xl; \
umul_ppmm(_q, _ql, (nh), (di)); \
_q += (nh); /* DI is 2**BITS_PER_MPI_LIMB too small */ \
umul_ppmm(_xh, _xl, _q, (d)); \
sub_ddmmss(_xh, _r, (nh), (nl), _xh, _xl); \
if (_xh) { \
sub_ddmmss(_xh, _r, _xh, _r, 0, (d)); \
_q++; \
if (_xh) { \
sub_ddmmss(_xh, _r, _xh, _r, 0, (d)); \
_q++; \
} \
} \
if (_r >= (d)) { \
_r -= (d); \
_q++; \
} \
(r) = _r; \
(q) = _q; \
} while (0)
/*-- mpiutil.c --*/
mpi_ptr_t mpi_alloc_limb_space(unsigned nlimbs);
void mpi_free_limb_space(mpi_ptr_t a);
void mpi_assign_limb_space(MPI a, mpi_ptr_t ap, unsigned nlimbs);
/*-- mpi-bit.c --*/
void mpi_rshift_limbs(MPI a, unsigned int count);
int mpi_lshift_limbs(MPI a, unsigned int count);
/*-- mpihelp-add.c --*/
static inline mpi_limb_t mpihelp_add_1(mpi_ptr_t res_ptr, mpi_ptr_t s1_ptr,
mpi_size_t s1_size, mpi_limb_t s2_limb);
mpi_limb_t mpihelp_add_n(mpi_ptr_t res_ptr, mpi_ptr_t s1_ptr,
@ -175,7 +117,6 @@ mpi_limb_t mpihelp_add_n(mpi_ptr_t res_ptr, mpi_ptr_t s1_ptr,
static inline mpi_limb_t mpihelp_add(mpi_ptr_t res_ptr, mpi_ptr_t s1_ptr, mpi_size_t s1_size,
mpi_ptr_t s2_ptr, mpi_size_t s2_size);
/*-- mpihelp-sub.c --*/
static inline mpi_limb_t mpihelp_sub_1(mpi_ptr_t res_ptr, mpi_ptr_t s1_ptr,
mpi_size_t s1_size, mpi_limb_t s2_limb);
mpi_limb_t mpihelp_sub_n(mpi_ptr_t res_ptr, mpi_ptr_t s1_ptr,
@@ -183,10 +124,10 @@ mpi_limb_t mpihelp_sub_n(mpi_ptr_t res_ptr, mpi_ptr_t s1_ptr,
static inline mpi_limb_t mpihelp_sub(mpi_ptr_t res_ptr, mpi_ptr_t s1_ptr, mpi_size_t s1_size,
mpi_ptr_t s2_ptr, mpi_size_t s2_size);
/*-- mpihelp-cmp.c --*/
/*-- mpih-cmp.c --*/
int mpihelp_cmp(mpi_ptr_t op1_ptr, mpi_ptr_t op2_ptr, mpi_size_t size);
/*-- mpihelp-mul.c --*/
/*-- mpih-mul.c --*/
struct karatsuba_ctx {
struct karatsuba_ctx *next;
@@ -202,7 +143,6 @@ mpi_limb_t mpihelp_addmul_1(mpi_ptr_t res_ptr, mpi_ptr_t s1_ptr,
mpi_size_t s1_size, mpi_limb_t s2_limb);
mpi_limb_t mpihelp_submul_1(mpi_ptr_t res_ptr, mpi_ptr_t s1_ptr,
mpi_size_t s1_size, mpi_limb_t s2_limb);
int mpihelp_mul_n(mpi_ptr_t prodp, mpi_ptr_t up, mpi_ptr_t vp, mpi_size_t size);
int mpihelp_mul(mpi_ptr_t prodp, mpi_ptr_t up, mpi_size_t usize,
mpi_ptr_t vp, mpi_size_t vsize, mpi_limb_t *_result);
void mpih_sqr_n_basecase(mpi_ptr_t prodp, mpi_ptr_t up, mpi_size_t size);
@@ -214,21 +154,16 @@ int mpihelp_mul_karatsuba_case(mpi_ptr_t prodp,
mpi_ptr_t vp, mpi_size_t vsize,
struct karatsuba_ctx *ctx);
/*-- mpihelp-mul_1.c (or xxx/cpu/ *.S) --*/
/*-- generic_mpih-mul1.c --*/
mpi_limb_t mpihelp_mul_1(mpi_ptr_t res_ptr, mpi_ptr_t s1_ptr,
mpi_size_t s1_size, mpi_limb_t s2_limb);
/*-- mpihelp-div.c --*/
mpi_limb_t mpihelp_mod_1(mpi_ptr_t dividend_ptr, mpi_size_t dividend_size,
mpi_limb_t divisor_limb);
/*-- mpih-div.c --*/
mpi_limb_t mpihelp_divrem(mpi_ptr_t qp, mpi_size_t qextra_limbs,
mpi_ptr_t np, mpi_size_t nsize,
mpi_ptr_t dp, mpi_size_t dsize);
mpi_limb_t mpihelp_divmod_1(mpi_ptr_t quot_ptr,
mpi_ptr_t dividend_ptr, mpi_size_t dividend_size,
mpi_limb_t divisor_limb);
/*-- mpihelp-shift.c --*/
/*-- generic_mpih-[lr]shift.c --*/
mpi_limb_t mpihelp_lshift(mpi_ptr_t wp, mpi_ptr_t up, mpi_size_t usize,
unsigned cnt);
mpi_limb_t mpihelp_rshift(mpi_ptr_t wp, mpi_ptr_t up, mpi_size_t usize,
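
Most of what this cleanup removes is simply dead: macros and declarations with no remaining users. The deleted UDIV_QRNND_PREINV macro deserves a note, though, since its comment describes a real technique: dividing the two-limb number (NH,NL) by D via a precomputed approximate reciprocal DI, trading a hardware divide for two multiplies plus a few corrective subtractions. Below is a minimal, runnable sketch of the idea, assuming 32-bit limbs with 64-bit intermediates (the correction loop is written straightforwardly where the macro unrolls it; all names here are illustrative, not the kernel's):

	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	/* DI = largest limb <= 2**64/D - 2**32, for a 32-bit D with its top
	 * bit set. Dividing (2**64 - 1) keeps everything in 64 bits; the
	 * result only differs from floor(2**64/D) when D is a power of two,
	 * where it saturates to all ones exactly as the deleted comment
	 * prescribes. */
	static uint32_t preinv(uint32_t d)
	{
		return (uint32_t)(UINT64_MAX / d - ((uint64_t)1 << 32));
	}

	/* Divide (nh,nl) by d using the precomputed di. Requires nh < d so
	 * the quotient fits in one limb. */
	static void udiv_qrnnd_preinv(uint32_t *q, uint32_t *r,
				      uint32_t nh, uint32_t nl,
				      uint32_t d, uint32_t di)
	{
		/* di is 2**32 too small, so add nh back after the multiply */
		uint32_t qhat = (uint32_t)(((uint64_t)nh * di) >> 32) + nh;
		uint64_t n = ((uint64_t)nh << 32) | nl;
		uint64_t rem = n - (uint64_t)qhat * d;

		/* qhat never overshoots and falls short by at most a few;
		 * the deleted macro unrolled these corrections */
		while (rem >= d) {
			rem -= d;
			qhat++;
		}
		*q = qhat;
		*r = (uint32_t)rem;
	}

	int main(void)
	{
		uint32_t d = 0x80000007u;	/* top bit set, as required */
		uint64_t n = 0x123456789abcdef0ull;
		uint32_t q, r;

		udiv_qrnnd_preinv(&q, &r, (uint32_t)(n >> 32), (uint32_t)n,
				  d, preinv(d));
		assert(q == n / d && r == n % d);
		printf("q=%u r=%u\n", (unsigned)q, (unsigned)r);
		return 0;
	}

The same structure, generalized over limb width and with the corrections unrolled, is what the deleted macro expressed in terms of umul_ppmm() and sub_ddmmss().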

lib/percpu_ida.c

@@ -112,18 +112,6 @@ static inline void alloc_global_tags(struct percpu_ida *pool,
min(pool->nr_free, pool->percpu_batch_size));
}
static inline unsigned alloc_local_tag(struct percpu_ida_cpu *tags)
{
int tag = -ENOSPC;
spin_lock(&tags->lock);
if (tags->nr_free)
tag = tags->freelist[--tags->nr_free];
spin_unlock(&tags->lock);
return tag;
}
/**
* percpu_ida_alloc - allocate a tag
* @pool: pool to allocate from
@@ -147,20 +135,22 @@ int percpu_ida_alloc(struct percpu_ida *pool, int state)
DEFINE_WAIT(wait);
struct percpu_ida_cpu *tags;
unsigned long flags;
int tag;
int tag = -ENOSPC;
local_irq_save(flags);
tags = this_cpu_ptr(pool->tag_cpu);
tags = raw_cpu_ptr(pool->tag_cpu);
spin_lock_irqsave(&tags->lock, flags);
/* Fastpath */
tag = alloc_local_tag(tags);
if (likely(tag >= 0)) {
local_irq_restore(flags);
if (likely(tags->nr_free >= 0)) {
tag = tags->freelist[--tags->nr_free];
spin_unlock_irqrestore(&tags->lock, flags);
return tag;
}
spin_unlock_irqrestore(&tags->lock, flags);
while (1) {
spin_lock(&pool->lock);
spin_lock_irqsave(&pool->lock, flags);
tags = this_cpu_ptr(pool->tag_cpu);
/*
* prepare_to_wait() must come before steal_tags(), in case
@@ -184,8 +174,7 @@ int percpu_ida_alloc(struct percpu_ida *pool, int state)
&pool->cpus_have_tags);
}
spin_unlock(&pool->lock);
local_irq_restore(flags);
spin_unlock_irqrestore(&pool->lock, flags);
if (tag >= 0 || state == TASK_RUNNING)
break;
@@ -196,9 +185,6 @@ int percpu_ida_alloc(struct percpu_ida *pool, int state)
}
schedule();
local_irq_save(flags);
tags = this_cpu_ptr(pool->tag_cpu);
}
if (state != TASK_RUNNING)
finish_wait(&pool->wait, &wait);
@@ -222,28 +208,24 @@ void percpu_ida_free(struct percpu_ida *pool, unsigned tag)
BUG_ON(tag >= pool->nr_tags);
local_irq_save(flags);
tags = this_cpu_ptr(pool->tag_cpu);
tags = raw_cpu_ptr(pool->tag_cpu);
spin_lock(&tags->lock);
spin_lock_irqsave(&tags->lock, flags);
tags->freelist[tags->nr_free++] = tag;
nr_free = tags->nr_free;
spin_unlock(&tags->lock);
if (nr_free == 1) {
cpumask_set_cpu(smp_processor_id(),
&pool->cpus_have_tags);
wake_up(&pool->wait);
}
spin_unlock_irqrestore(&tags->lock, flags);
if (nr_free == pool->percpu_max_size) {
spin_lock(&pool->lock);
spin_lock_irqsave(&pool->lock, flags);
spin_lock(&tags->lock);
/*
* Global lock held and irqs disabled, don't need percpu
* lock
*/
if (tags->nr_free == pool->percpu_max_size) {
move_tags(pool->freelist, &pool->nr_free,
tags->freelist, &tags->nr_free,
@@ -251,10 +233,9 @@ void percpu_ida_free(struct percpu_ida *pool, unsigned tag)
wake_up(&pool->wait);
}
spin_unlock(&pool->lock);
spin_unlock(&tags->lock);
spin_unlock_irqrestore(&pool->lock, flags);
}
local_irq_restore(flags);
}
EXPORT_SYMBOL_GPL(percpu_ida_free);
@@ -346,29 +327,27 @@ int percpu_ida_for_each_free(struct percpu_ida *pool, percpu_ida_cb fn,
struct percpu_ida_cpu *remote;
unsigned cpu, i, err = 0;
local_irq_save(flags);
for_each_possible_cpu(cpu) {
remote = per_cpu_ptr(pool->tag_cpu, cpu);
spin_lock(&remote->lock);
spin_lock_irqsave(&remote->lock, flags);
for (i = 0; i < remote->nr_free; i++) {
err = fn(remote->freelist[i], data);
if (err)
break;
}
spin_unlock(&remote->lock);
spin_unlock_irqrestore(&remote->lock, flags);
if (err)
goto out;
}
spin_lock(&pool->lock);
spin_lock_irqsave(&pool->lock, flags);
for (i = 0; i < pool->nr_free; i++) {
err = fn(pool->freelist[i], data);
if (err)
break;
}
spin_unlock(&pool->lock);
spin_unlock_irqrestore(&pool->lock, flags);
out:
local_irq_restore(flags);
return err;
}
EXPORT_SYMBOL_GPL(percpu_ida_for_each_free);
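
The conversion above collapses local_irq_save() followed by a bare spin_lock() into a single spin_lock_irqsave(), and swaps this_cpu_ptr() (which assumes interrupts or preemption are already off) for raw_cpu_ptr(), because exclusion now comes from the per-CPU structure's lock rather than from staying pinned to one CPU. A condensed kernel-context sketch of the before/after shape, assuming a hypothetical tag_cache structure that mirrors, but is not, struct percpu_ida_cpu (kernel code, not standalone-runnable):

	#include <linux/errno.h>
	#include <linux/percpu.h>
	#include <linux/spinlock.h>

	/* Hypothetical per-CPU freelist, shaped like struct percpu_ida_cpu. */
	struct tag_cache {
		spinlock_t lock;
		unsigned int nr_free;
		unsigned int freelist[16];
	};

	/* Before: disable interrupts by hand, then take the lock bare. */
	static int get_tag_old(struct tag_cache __percpu *cache)
	{
		struct tag_cache *tc;
		unsigned long flags;
		int tag = -ENOSPC;

		local_irq_save(flags);		/* also pins us to this CPU */
		tc = this_cpu_ptr(cache);
		spin_lock(&tc->lock);
		if (tc->nr_free)
			tag = tc->freelist[--tc->nr_free];
		spin_unlock(&tc->lock);
		local_irq_restore(flags);
		return tag;
	}

	/* After: one call disables interrupts and takes the lock;
	 * raw_cpu_ptr() suffices because the lock, not CPU pinning,
	 * provides the exclusion. */
	static int get_tag_new(struct tag_cache __percpu *cache)
	{
		struct tag_cache *tc = raw_cpu_ptr(cache);
		unsigned long flags;
		int tag = -ENOSPC;

		spin_lock_irqsave(&tc->lock, flags);
		if (tc->nr_free)
			tag = tc->freelist[--tc->nr_free];
		spin_unlock_irqrestore(&tc->lock, flags);
		return tag;
	}

The trade-off is that the task may migrate between reading raw_cpu_ptr() and acquiring the lock, so it can end up operating on a neighbouring CPU's cache; that is harmless here because every access is lock-protected, which is exactly what makes the _irqsave() form sufficient.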

Some files were not shown because too many files have changed in this diff.