Merge branch 'akpm' (patches from Andrew)

Merge more updates from Andrew Morton:

 - a few block updates that fell in my lap

 - lib/ updates

 - checkpatch

 - autofs

 - ipc

 - a ton of misc other things

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (100 commits)
  mm: split gfp_mask and mapping flags into separate fields
  fs: use mapping_set_error instead of opencoded set_bit
  treewide: remove redundant #include <linux/kconfig.h>
  hung_task: allow hung_task_panic when hung_task_warnings is 0
  kthread: add kerneldoc for kthread_create()
  kthread: better support freezable kthread workers
  kthread: allow to modify delayed kthread work
  kthread: allow to cancel kthread work
  kthread: initial support for delayed kthread work
  kthread: detect when a kthread work is used by more workers
  kthread: add kthread_destroy_worker()
  kthread: add kthread_create_worker*()
  kthread: allow to call __kthread_create_on_node() with va_list args
  kthread/smpboot: do not park in kthread_create_on_cpu()
  kthread: kthread worker API cleanup
  kthread: rename probe_kthread_data() to kthread_probe_data()
  scripts/tags.sh: enable code completion in VIM
  mm: kmemleak: avoid using __va() on addresses that don't have a lowmem mapping
  kdump, vmcoreinfo: report memory sections virtual addresses
  ipc/sem.c: add cond_resched in exit_sem
  ...
Linus Torvalds committed 2016-10-11 17:34:10 -07:00 (commit a379f71a30)
200 changed files with 2299 additions and 1060 deletions
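Several of the kthread commits listed above rename the worker API (init_kthread_worker() becomes kthread_init_worker(), queue_kthread_work() becomes kthread_queue_work(), and so on), and many of the per-file hunks below are mechanical conversions to the new names. The following is a minimal sketch of the renamed API as it is used by the converted call sites (loop, dm-rq, the crypto engine); it is illustrative only, not code from this commit, and error handling is trimmed.

#include <linux/err.h>
#include <linux/kthread.h>

static struct kthread_worker example_worker;
static struct kthread_work example_work;
static struct task_struct *example_task;

/* Runs in the worker thread for each queued work item. */
static void example_work_fn(struct kthread_work *work)
{
	/* ... do the deferred work ... */
}

static int example_start(void)
{
	kthread_init_worker(&example_worker);
	example_task = kthread_run(kthread_worker_fn, &example_worker,
				   "example-worker");
	if (IS_ERR(example_task))
		return PTR_ERR(example_task);

	kthread_init_work(&example_work, example_work_fn);
	kthread_queue_work(&example_worker, &example_work);
	return 0;
}

static void example_stop(void)
{
	/* Wait for queued work, then stop the worker thread. */
	kthread_flush_worker(&example_worker);
	kthread_stop(example_task);
}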


@ -126,3 +126,20 @@ means that we won't try quite as hard to get them.
NOTE: At the moment DMA_ATTR_ALLOC_SINGLE_PAGES is only implemented on ARM,
though ARM64 patches will likely be posted soon.
DMA_ATTR_NO_WARN
----------------
This tells the DMA-mapping subsystem to suppress allocation failure reports
(similarly to __GFP_NOWARN).
On some architectures allocation failures are reported with error messages
to the system logs. Although this can help identify and debug problems,
drivers that handle failures themselves (e.g. by retrying later) gain little
from the reports and, depending on how the retry mechanism is implemented,
can end up flooding the system logs with messages that do not indicate any
real problem.
This attribute therefore gives such drivers a way to suppress the reports on
calls where an allocation failure is expected and should not clutter the logs.
NOTE: At the moment DMA_ATTR_NO_WARN is only implemented on PowerPC.
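As a rough illustration (not part of this commit; the device, buffer and helper names are placeholders), a driver that prefers to back off and retry quietly could pass the attribute like this:

#include <linux/dma-mapping.h>

/* Sketch: map a buffer without logging on mapping failure; the caller is
 * expected to retry later instead of relying on a log message. */
static int example_map_quiet(struct device *dev, void *buf, size_t len,
			     dma_addr_t *handle)
{
	*handle = dma_map_single_attrs(dev, buf, len, DMA_TO_DEVICE,
				       DMA_ATTR_NO_WARN);
	if (dma_mapping_error(dev, *handle))
		return -EAGAIN;		/* back off and retry, no log spam */
	return 0;
}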


@ -57,7 +57,7 @@ Call Trace:
[<ffffffff817db154>] kernel_thread_helper+0x4/0x10
[<ffffffff81066430>] ? finish_task_switch+0x80/0x110
[<ffffffff817d9c04>] ? retint_restore_args+0xe/0xe
[<ffffffff81097510>] ? __init_kthread_worker+0x70/0x70
[<ffffffff81097510>] ? __kthread_init_worker+0x70/0x70
[<ffffffff817db150>] ? gs_change+0xb/0xb
Line 2776 of block/cfq-iosched.c in v3.0-rc5 is as follows:


@ -162,6 +162,15 @@ See the include/linux/kmemleak.h header for the functions prototype.
- ``kmemleak_alloc_recursive`` - as kmemleak_alloc but checks the recursiveness
- ``kmemleak_free_recursive`` - as kmemleak_free but checks the recursiveness
The following functions take a physical address as the object pointer
and only perform the corresponding action if the address has a lowmem
mapping:
- ``kmemleak_alloc_phys``
- ``kmemleak_free_part_phys``
- ``kmemleak_not_leak_phys``
- ``kmemleak_ignore_phys``
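For illustration only (the helper names below are hypothetical, not from this commit), the _phys variants listed above are called exactly like their virtual-address counterparts:

#include <linux/gfp.h>
#include <linux/kmemleak.h>
#include <linux/types.h>

/* Hypothetical helpers: track, or exclude from leak reports, an early
 * allocation known only by its physical address.  Each call becomes a
 * no-op when the address has no lowmem mapping. */
static void example_track_phys(phys_addr_t pa, size_t len)
{
	kmemleak_alloc_phys(pa, len, 0, GFP_KERNEL);
}

static void example_ignore_phys(phys_addr_t pa)
{
	kmemleak_ignore_phys(pa);
}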
Dealing with false positives/negatives
--------------------------------------


@ -179,8 +179,19 @@ struct autofs_dev_ioctl {
* including this struct */
__s32 ioctlfd; /* automount command fd */
__u32 arg1; /* Command parameters */
__u32 arg2;
union {
struct args_protover protover;
struct args_protosubver protosubver;
struct args_openmount openmount;
struct args_ready ready;
struct args_fail fail;
struct args_setpipefd setpipefd;
struct args_timeout timeout;
struct args_requester requester;
struct args_expire expire;
struct args_askumount askumount;
struct args_ismountpoint ismountpoint;
};
char path[0];
};
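For illustration, a hedged user-space sketch of driving one of these ioctls through the new union; the header path, the AUTOFS_DEV_IOCTL_PROTOVER request macro and the version constants are assumptions about the uapi interface, not text from this commit:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/auto_dev-ioctl.h>

/* Sketch: query the autofs protocol version for an open mount-point
 * descriptor.  The result comes back in the protover member of the
 * union shown above. */
static int example_get_protover(int devfd, int ioctlfd, unsigned int *ver)
{
	struct autofs_dev_ioctl param;

	memset(&param, 0, sizeof(param));
	param.ver_major = AUTOFS_DEV_IOCTL_VERSION_MAJOR;
	param.ver_minor = AUTOFS_DEV_IOCTL_VERSION_MINOR;
	param.size = sizeof(param);
	param.ioctlfd = ioctlfd;

	if (ioctl(devfd, AUTOFS_DEV_IOCTL_PROTOVER, &param) == -1)
		return -1;

	*ver = param.protover.version;
	return 0;
}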
@ -192,8 +203,8 @@ optionally be used to check a specific mount corresponding to a given
mount point file descriptor, and when requesting the uid and gid of the
last successful mount on a directory within the autofs file system.
The fields arg1 and arg2 are used to communicate parameters and results of
calls made as described below.
The union is used to communicate parameters and results of calls made
as described below.
The path field is used to pass a path where it is needed and the size field
is used to account for the increased structure length when translating the
@ -245,9 +256,9 @@ AUTOFS_DEV_IOCTL_PROTOVER_CMD and AUTOFS_DEV_IOCTL_PROTOSUBVER_CMD
Get the major and minor version of the autofs4 protocol version understood
by loaded module. This call requires an initialized struct autofs_dev_ioctl
with the ioctlfd field set to a valid autofs mount point descriptor
and sets the requested version number in structure field arg1. These
commands return 0 on success or one of the negative error codes if
validation fails.
and sets the requested version number in the version field of struct args_protover
or the sub_version field of struct args_protosubver. These commands return
0 on success or one of the negative error codes if validation fails.
AUTOFS_DEV_IOCTL_OPENMOUNT and AUTOFS_DEV_IOCTL_CLOSEMOUNT
@ -256,9 +267,9 @@ AUTOFS_DEV_IOCTL_OPENMOUNT and AUTOFS_DEV_IOCTL_CLOSEMOUNT
Obtain and release a file descriptor for an autofs managed mount point
path. The open call requires an initialized struct autofs_dev_ioctl with
the path field set and the size field adjusted appropriately as well
as the arg1 field set to the device number of the autofs mount. The
device number can be obtained from the mount options shown in
/proc/mounts. The close call requires an initialized struct
as the devid field of struct args_openmount set to the device number of
the autofs mount. The device number can be obtained from the mount options
shown in /proc/mounts. The close call requires an initialized struct
autofs_dev_ioctl with the ioctlfd field set to the descriptor obtained
from the open call. The release of the file descriptor can also be done
with close(2) so any open descriptors will also be closed at process exit.
@ -272,10 +283,10 @@ AUTOFS_DEV_IOCTL_READY_CMD and AUTOFS_DEV_IOCTL_FAIL_CMD
Return mount and expire result status from user space to the kernel.
Both of these calls require an initialized struct autofs_dev_ioctl
with the ioctlfd field set to the descriptor obtained from the open
call and the arg1 field set to the wait queue token number, received
by user space in the foregoing mount or expire request. The arg2 field
is set to the status to be returned. For the ready call this is always
0 and for the fail call it is set to the errno of the operation.
call and the token field of struct args_ready or struct args_fail set
to the wait queue token number, received by user space in the foregoing
mount or expire request. The status field of struct args_fail is set to
the errno of the operation. It is set to 0 on success.
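Continuing the sketch above (same assumptions about the uapi macros), the daemon-side acknowledgement using the ready and fail argument structures might look like this:

/* Sketch: report the result of a mount request identified by @token.
 * The error value is only meaningful for the fail call. */
static int example_send_result(int devfd, int ioctlfd,
			       unsigned int token, int error)
{
	struct autofs_dev_ioctl param;

	memset(&param, 0, sizeof(param));
	param.ver_major = AUTOFS_DEV_IOCTL_VERSION_MAJOR;
	param.ver_minor = AUTOFS_DEV_IOCTL_VERSION_MINOR;
	param.size = sizeof(param);
	param.ioctlfd = ioctlfd;

	if (!error) {
		param.ready.token = token;
		return ioctl(devfd, AUTOFS_DEV_IOCTL_READY, &param);
	}
	param.fail.token = token;
	param.fail.status = error;	/* errno of the failed operation */
	return ioctl(devfd, AUTOFS_DEV_IOCTL_FAIL, &param);
}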
AUTOFS_DEV_IOCTL_SETPIPEFD_CMD
@ -290,9 +301,10 @@ mount be catatonic (see next call).
The call requires an initialized struct autofs_dev_ioctl with the
ioctlfd field set to the descriptor obtained from the open call and
the arg1 field set to descriptor of the pipe. On success the call
also sets the process group id used to identify the controlling process
(eg. the owning automount(8) daemon) to the process group of the caller.
the pipefd field of struct args_setpipefd set to the descriptor of the pipe.
On success the call also sets the process group id used to identify the
controlling process (eg. the owning automount(8) daemon) to the process
group of the caller.
AUTOFS_DEV_IOCTL_CATATONIC_CMD
@ -323,9 +335,8 @@ mount on the given path dentry.
The call requires an initialized struct autofs_dev_ioctl with the path
field set to the mount point in question and the size field adjusted
appropriately as well as the arg1 field set to the device number of the
containing autofs mount. Upon return the struct field arg1 contains the
uid and arg2 the gid.
appropriately. Upon return the uid field of struct args_requester contains
the uid and the gid field contains the gid.
When reconstructing an autofs mount tree with active mounts we need to
re-connect to mounts that may have used the original process uid and
@ -343,8 +354,9 @@ this ioctl is called until no further expire candidates are found.
The call requires an initialized struct autofs_dev_ioctl with the
ioctlfd field set to the descriptor obtained from the open call. In
addition an immediate expire, independent of the mount timeout, can be
requested by setting the arg1 field to 1. If no expire candidates can
be found the ioctl returns -1 with errno set to EAGAIN.
requested by setting the how field of struct args_expire to 1. If no
expire candidates can be found the ioctl returns -1 with errno set to
EAGAIN.
This call causes the kernel module to check the mount corresponding
to the given ioctlfd for mounts that can be expired, issues an expire
@ -357,7 +369,8 @@ Checks if an autofs mount point is in use.
The call requires an initialized struct autofs_dev_ioctl with the
ioctlfd field set to the descriptor obtained from the open call and
it returns the result in the arg1 field, 1 for busy and 0 otherwise.
it returns the result in the may_umount field of struct args_askumount,
1 for busy and 0 otherwise.
AUTOFS_DEV_IOCTL_ISMOUNTPOINT_CMD
@ -369,12 +382,12 @@ The call requires an initialized struct autofs_dev_ioctl. There are two
possible variations. Both use the path field set to the path of the mount
point to check and the size field adjusted appropriately. One uses the
ioctlfd field to identify a specific mount point to check while the other
variation uses the path and optionally arg1 set to an autofs mount type.
The call returns 1 if this is a mount point and sets arg1 to the device
number of the mount and field arg2 to the relevant super block magic
number (described below) or 0 if it isn't a mountpoint. In both cases
the the device number (as returned by new_encode_dev()) is returned
in field arg1.
variation uses the path and optionally the in.type field of struct args_ismountpoint
set to an autofs mount type. The call returns 1 if this is a mount point
and sets out.devid field to the device number of the mount and out.magic
field to the relevant super block magic number (described below) or 0 if
it isn't a mountpoint. In both cases the device number (as returned
by new_encode_dev()) is returned in out.devid field.
If supplied with a file descriptor we're looking for a specific mount,
not necessarily at the top of the mounted stack. In this case the path


@ -203,9 +203,9 @@ initiated or is being considered, otherwise it returns 0.
Mountpoint expiry
-----------------
The VFS has a mechansim for automatically expiring unused mounts,
The VFS has a mechanism for automatically expiring unused mounts,
much as it can expire any unused dentry information from the dcache.
This is guided by the MNT_SHRINKABLE flag. This only applies to
This is guided by the MNT_SHRINKABLE flag. This only applies to
mounts that were created by `d_automount()` returning a filesystem to be
mounted. As autofs doesn't return such a filesystem but leaves the
mounting to the automount daemon, it must involve the automount daemon
@ -298,7 +298,7 @@ remove directories and symlinks using normal filesystem operations.
autofs knows whether a process requesting some operation is the daemon
or not based on its process-group id number (see getpgid(1)).
When an autofs filesystem it mounted the pgid of the mounting
When an autofs filesystem is mounted the pgid of the mounting
processes is recorded unless the "pgrp=" option is given, in which
case that number is recorded instead. Any request arriving from a
process in that process group is considered to come from the daemon.
@ -450,7 +450,7 @@ Commands are:
numbers for existing filesystems can be found in
`/proc/self/mountinfo`.
- **AUTOFS_DEV_IOCTL_CLOSEMOUNT_CMD**: same as `close(ioctlfd)`.
- **AUTOFS_DEV_IOCTL_SETPIPEFD_CMD**: if the filesystem is in
- **AUTOFS_DEV_IOCTL_SETPIPEFD_CMD**: if the filesystem is in
catatonic mode, this can provide the write end of a new pipe
in `arg1` to re-establish communication with a daemon. The
process group of the calling process is used to identify the


@ -33,6 +33,37 @@ can also be entered as
Double-quotes can be used to protect spaces in values, e.g.:
param="spaces in here"
cpu lists:
----------
Some kernel parameters take a list of CPUs as a value, e.g. isolcpus,
nohz_full, irqaffinity, rcu_nocbs. The format of this list is:
<cpu number>,...,<cpu number>
or
<cpu number>-<cpu number>
(must be a positive range in ascending order)
or a mixture
<cpu number>,...,<cpu number>-<cpu number>
Note that for the special case of a range one can split the range into equal
sized groups and for each group use some amount from the beginning of that
group:
<cpu number>-<cpu number>:<used size>/<group size>
For example one can add the following parameter to the command line:
isolcpus=1,2,10-20,100-2000:2/25
where the final item represents CPUs 100,101,125,126,150,151,...
This document may not be entirely up to date and comprehensive. The command
"modinfo -p ${modulename}" shows a current list of all parameters of a loadable
module. Loadable modules, after being loaded into the running kernel, also
@ -1789,13 +1820,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
See Documentation/filesystems/nfs/nfsroot.txt.
irqaffinity= [SMP] Set the default irq affinity mask
Format:
<cpu number>,...,<cpu number>
or
<cpu number>-<cpu number>
(must be a positive range in ascending order)
or a mixture
<cpu number>,...,<cpu number>-<cpu number>
The argument is a cpu list, as described above.
irqfixup [HW]
When an interrupt is not handled search all handlers
@ -1812,13 +1837,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
Format: <RDP>,<reset>,<pci_scan>,<verbosity>
isolcpus= [KNL,SMP] Isolate CPUs from the general scheduler.
Format:
<cpu number>,...,<cpu number>
or
<cpu number>-<cpu number>
(must be a positive range in ascending order)
or a mixture
<cpu number>,...,<cpu number>-<cpu number>
The argument is a cpu list, as described above.
This option can be used to specify one or more CPUs
to isolate from the general SMP balancing and scheduling
@ -2680,6 +2699,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
Default: on
nohz_full= [KNL,BOOT]
The argument is a cpu list, as described above.
In kernels built with CONFIG_NO_HZ_FULL=y, set
the specified list of CPUs whose tick will be stopped
whenever possible. The boot CPU will be forced outside
@ -3285,6 +3305,8 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
See Documentation/blockdev/ramdisk.txt.
rcu_nocbs= [KNL]
The argument is a cpu list, as described above.
In kernels built with CONFIG_RCU_NOCB_CPU=y, set
the specified list of CPUs to be no-callback CPUs.
Invocation of these CPUs' RCU callbacks will


@ -26,7 +26,6 @@
#ifndef __ASM_ARM_TRUSTED_FOUNDATIONS_H
#define __ASM_ARM_TRUSTED_FOUNDATIONS_H
#include <linux/kconfig.h>
#include <linux/printk.h>
#include <linux/bug.h>
#include <linux/of.h>


@ -318,8 +318,7 @@ unsigned long get_wchan(struct task_struct *p)
unsigned long arch_randomize_brk(struct mm_struct *mm)
{
unsigned long range_end = mm->brk + 0x02000000;
return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
return randomize_page(mm->brk, 0x02000000);
}
#ifdef CONFIG_MMU


@ -7,7 +7,6 @@
#ifndef __ASSEMBLY__
#include <linux/init.h>
#include <linux/kconfig.h>
#include <linux/types.h>
#include <linux/stddef.h>
#include <linux/stringify.h>


@ -372,12 +372,8 @@ unsigned long arch_align_stack(unsigned long sp)
unsigned long arch_randomize_brk(struct mm_struct *mm)
{
unsigned long range_end = mm->brk;
if (is_compat_task())
range_end += 0x02000000;
return randomize_page(mm->brk, 0x02000000);
else
range_end += 0x40000000;
return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
return randomize_page(mm->brk, 0x40000000);
}


@ -267,6 +267,17 @@ static void octeon_crash_shutdown(struct pt_regs *regs)
default_machine_crash_shutdown(regs);
}
#ifdef CONFIG_SMP
void octeon_crash_smp_send_stop(void)
{
int cpu;
/* disable watchdogs */
for_each_online_cpu(cpu)
cvmx_write_csr(CVMX_CIU_WDOGX(cpu_logical_map(cpu)), 0);
}
#endif
#endif /* CONFIG_KEXEC */
#ifdef CONFIG_CAVIUM_RESERVE32
@ -911,6 +922,9 @@ void __init prom_init(void)
_machine_kexec_shutdown = octeon_shutdown;
_machine_crash_shutdown = octeon_crash_shutdown;
_machine_kexec_prepare = octeon_kexec_prepare;
#ifdef CONFIG_SMP
_crash_smp_send_stop = octeon_crash_smp_send_stop;
#endif
#endif
octeon_user_io_init();


@ -45,6 +45,7 @@ extern const unsigned char kexec_smp_wait[];
extern unsigned long secondary_kexec_args[4];
extern void (*relocated_kexec_smp_wait) (void *);
extern atomic_t kexec_ready_to_reboot;
extern void (*_crash_smp_send_stop)(void);
#endif
#endif


@ -14,7 +14,6 @@
#include <linux/io.h>
#include <linux/init.h>
#include <linux/irq.h>
#include <linux/kconfig.h>
#include <boot_param.h>
/* loongson internal northbridge initialization */


@ -47,9 +47,14 @@ static void crash_shutdown_secondary(void *passed_regs)
static void crash_kexec_prepare_cpus(void)
{
static int cpus_stopped;
unsigned int msecs;
unsigned int ncpus;
unsigned int ncpus = num_online_cpus() - 1;/* Excluding the panic cpu */
if (cpus_stopped)
return;
ncpus = num_online_cpus() - 1;/* Excluding the panic cpu */
dump_send_ipi(crash_shutdown_secondary);
smp_wmb();
@ -64,6 +69,17 @@ static void crash_kexec_prepare_cpus(void)
cpu_relax();
mdelay(1);
}
cpus_stopped = 1;
}
/* Override the weak function in kernel/panic.c */
void crash_smp_send_stop(void)
{
if (_crash_smp_send_stop)
_crash_smp_send_stop();
crash_kexec_prepare_cpus();
}
#else /* !defined(CONFIG_SMP) */


@ -25,6 +25,7 @@ void (*_machine_crash_shutdown)(struct pt_regs *regs) = NULL;
#ifdef CONFIG_SMP
void (*relocated_kexec_smp_wait) (void *);
atomic_t kexec_ready_to_reboot = ATOMIC_INIT(0);
void (*_crash_smp_send_stop)(void) = NULL;
#endif
int


@ -35,7 +35,6 @@
*/
#include <linux/sched.h>
#include <linux/debugfs.h>
#include <linux/kconfig.h>
#include <linux/percpu-defs.h>
#include <linux/perf_event.h>


@ -14,7 +14,6 @@
#include <linux/errno.h>
#include <linux/filter.h>
#include <linux/if_vlan.h>
#include <linux/kconfig.h>
#include <linux/moduleloader.h>
#include <linux/netdevice.h>
#include <linux/string.h>


@ -479,7 +479,8 @@ int ppc_iommu_map_sg(struct device *dev, struct iommu_table *tbl,
/* Handle failure */
if (unlikely(entry == DMA_ERROR_CODE)) {
if (printk_ratelimit())
if (!(attrs & DMA_ATTR_NO_WARN) &&
printk_ratelimit())
dev_info(dev, "iommu_alloc failed, tbl %p "
"vaddr %lx npages %lu\n", tbl, vaddr,
npages);
@ -776,7 +777,8 @@ dma_addr_t iommu_map_page(struct device *dev, struct iommu_table *tbl,
mask >> tbl->it_page_shift, align,
attrs);
if (dma_handle == DMA_ERROR_CODE) {
if (printk_ratelimit()) {
if (!(attrs & DMA_ATTR_NO_WARN) &&
printk_ratelimit()) {
dev_info(dev, "iommu_alloc failed, tbl %p "
"vaddr %p npages %d\n", tbl, vaddr,
npages);


@ -88,6 +88,5 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
unsigned long arch_randomize_brk(struct mm_struct *mm)
{
unsigned long range_end = mm->brk + 0x02000000;
return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
return randomize_page(mm->brk, 0x02000000);
}


@ -295,8 +295,7 @@ unsigned long get_wchan(struct task_struct *p)
unsigned long arch_randomize_brk(struct mm_struct *mm)
{
unsigned long range_end = mm->brk + 0x02000000;
return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
return randomize_page(mm->brk, 0x02000000);
}
/*


@ -210,6 +210,7 @@ struct kexec_entry64_regs {
typedef void crash_vmclear_fn(void);
extern crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss;
extern void kdump_nmi_shootdown_cpus(void);
#endif /* __ASSEMBLY__ */


@ -47,6 +47,7 @@ struct smp_ops {
void (*smp_cpus_done)(unsigned max_cpus);
void (*stop_other_cpus)(int wait);
void (*crash_stop_other_cpus)(void);
void (*smp_send_reschedule)(int cpu);
int (*cpu_up)(unsigned cpu, struct task_struct *tidle);


@ -133,15 +133,31 @@ static void kdump_nmi_callback(int cpu, struct pt_regs *regs)
disable_local_APIC();
}
static void kdump_nmi_shootdown_cpus(void)
void kdump_nmi_shootdown_cpus(void)
{
nmi_shootdown_cpus(kdump_nmi_callback);
disable_local_APIC();
}
/* Override the weak function in kernel/panic.c */
void crash_smp_send_stop(void)
{
static int cpus_stopped;
if (cpus_stopped)
return;
if (smp_ops.crash_stop_other_cpus)
smp_ops.crash_stop_other_cpus();
else
smp_send_stop();
cpus_stopped = 1;
}
#else
static void kdump_nmi_shootdown_cpus(void)
void crash_smp_send_stop(void)
{
/* There are no cpus to shootdown */
}
@ -160,7 +176,7 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
/* The kernel is broken so disable interrupts */
local_irq_disable();
kdump_nmi_shootdown_cpus();
crash_smp_send_stop();
/*
* VMCLEAR VMCSs loaded on this cpu if needed.


@ -337,6 +337,9 @@ void arch_crash_save_vmcoreinfo(void)
#endif
vmcoreinfo_append_str("KERNELOFFSET=%lx\n",
kaslr_offset());
VMCOREINFO_PAGE_OFFSET(PAGE_OFFSET);
VMCOREINFO_VMALLOC_START(VMALLOC_START);
VMCOREINFO_VMEMMAP_START(VMEMMAP_START);
}
/* arch-dependent functionality related to kexec file-based syscall */


@ -509,8 +509,7 @@ unsigned long arch_align_stack(unsigned long sp)
unsigned long arch_randomize_brk(struct mm_struct *mm)
{
unsigned long range_end = mm->brk + 0x02000000;
return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
return randomize_page(mm->brk, 0x02000000);
}
/*


@ -32,6 +32,8 @@
#include <asm/nmi.h>
#include <asm/mce.h>
#include <asm/trace/irq_vectors.h>
#include <asm/kexec.h>
/*
* Some notes on x86 processor bugs affecting SMP operation:
*
@ -342,6 +344,9 @@ struct smp_ops smp_ops = {
.smp_cpus_done = native_smp_cpus_done,
.stop_other_cpus = native_stop_other_cpus,
#if defined(CONFIG_KEXEC_CORE)
.crash_stop_other_cpus = kdump_nmi_shootdown_cpus,
#endif
.smp_send_reschedule = native_smp_send_reschedule,
.cpu_up = native_cpu_up,


@ -101,7 +101,6 @@ static void find_start_end(unsigned long flags, unsigned long *begin,
unsigned long *end)
{
if (!test_thread_flag(TIF_ADDR32) && (flags & MAP_32BIT)) {
unsigned long new_begin;
/* This is usually used needed to map code in small
model, so it needs to be in the first 31bit. Limit
it to that. This means we need to move the
@ -112,9 +111,7 @@ static void find_start_end(unsigned long flags, unsigned long *begin,
*begin = 0x40000000;
*end = 0x80000000;
if (current->flags & PF_RANDOMIZE) {
new_begin = randomize_range(*begin, *begin + 0x02000000, 0);
if (new_begin)
*begin = new_begin;
*begin = randomize_page(*begin, 0x02000000);
}
} else {
*begin = current->mm->mmap_legacy_base;


@ -212,7 +212,7 @@ static void kvm_pit_ack_irq(struct kvm_irq_ack_notifier *kian)
*/
smp_mb();
if (atomic_dec_if_positive(&ps->pending) > 0)
queue_kthread_work(&pit->worker, &pit->expired);
kthread_queue_work(&pit->worker, &pit->expired);
}
void __kvm_migrate_pit_timer(struct kvm_vcpu *vcpu)
@ -233,7 +233,7 @@ void __kvm_migrate_pit_timer(struct kvm_vcpu *vcpu)
static void destroy_pit_timer(struct kvm_pit *pit)
{
hrtimer_cancel(&pit->pit_state.timer);
flush_kthread_work(&pit->expired);
kthread_flush_work(&pit->expired);
}
static void pit_do_work(struct kthread_work *work)
@ -272,7 +272,7 @@ static enum hrtimer_restart pit_timer_fn(struct hrtimer *data)
if (atomic_read(&ps->reinject))
atomic_inc(&ps->pending);
queue_kthread_work(&pt->worker, &pt->expired);
kthread_queue_work(&pt->worker, &pt->expired);
if (ps->is_periodic) {
hrtimer_add_expires_ns(&ps->timer, ps->period);
@ -324,7 +324,7 @@ static void create_pit_timer(struct kvm_pit *pit, u32 val, int is_period)
/* TODO The new value only affected after the retriggered */
hrtimer_cancel(&ps->timer);
flush_kthread_work(&pit->expired);
kthread_flush_work(&pit->expired);
ps->period = interval;
ps->is_periodic = is_period;
@ -667,13 +667,13 @@ struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags)
pid_nr = pid_vnr(pid);
put_pid(pid);
init_kthread_worker(&pit->worker);
kthread_init_worker(&pit->worker);
pit->worker_task = kthread_run(kthread_worker_fn, &pit->worker,
"kvm-pit/%d", pid_nr);
if (IS_ERR(pit->worker_task))
goto fail_kthread;
init_kthread_work(&pit->expired, pit_do_work);
kthread_init_work(&pit->expired, pit_do_work);
pit->kvm = kvm;
@ -730,7 +730,7 @@ void kvm_free_pit(struct kvm *kvm)
kvm_io_bus_unregister_dev(kvm, KVM_PIO_BUS, &pit->speaker_dev);
kvm_pit_set_reinject(pit, false);
hrtimer_cancel(&pit->pit_state.timer);
flush_kthread_work(&pit->expired);
kthread_flush_work(&pit->expired);
kthread_stop(pit->worker_task);
kvm_free_irq_source_id(kvm, pit->irq_source_id);
kfree(pit);


@ -31,6 +31,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
unsigned int granularity;
enum req_op op;
int alignment;
sector_t bs_mask;
if (!q)
return -ENXIO;
@ -50,6 +51,10 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
op = REQ_OP_DISCARD;
}
bs_mask = (bdev_logical_block_size(bdev) >> 9) - 1;
if ((sector | nr_sects) & bs_mask)
return -EINVAL;
/* Zero-sector (unknown) and one-sector granularities are the same. */
granularity = max(q->limits.discard_granularity >> 9, 1U);
alignment = (bdev_discard_alignment(bdev) >> 9) % granularity;
@ -150,10 +155,15 @@ int blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
unsigned int max_write_same_sectors;
struct bio *bio = NULL;
int ret = 0;
sector_t bs_mask;
if (!q)
return -ENXIO;
bs_mask = (bdev_logical_block_size(bdev) >> 9) - 1;
if ((sector | nr_sects) & bs_mask)
return -EINVAL;
/* Ensure that max_write_same_sectors doesn't overflow bi_size */
max_write_same_sectors = UINT_MAX >> 9;
@ -202,6 +212,11 @@ static int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
int ret;
struct bio *bio = NULL;
unsigned int sz;
sector_t bs_mask;
bs_mask = (bdev_logical_block_size(bdev) >> 9) - 1;
if ((sector | nr_sects) & bs_mask)
return -EINVAL;
while (nr_sects != 0) {
bio = next_bio(bio, min(nr_sects, (sector_t)BIO_MAX_PAGES),


@ -225,7 +225,8 @@ static int blk_ioctl_zeroout(struct block_device *bdev, fmode_t mode,
unsigned long arg)
{
uint64_t range[2];
uint64_t start, len;
struct address_space *mapping;
uint64_t start, end, len;
if (!(mode & FMODE_WRITE))
return -EBADF;
@ -235,18 +236,23 @@ static int blk_ioctl_zeroout(struct block_device *bdev, fmode_t mode,
start = range[0];
len = range[1];
end = start + len - 1;
if (start & 511)
return -EINVAL;
if (len & 511)
return -EINVAL;
start >>= 9;
len >>= 9;
if (start + len > (i_size_read(bdev->bd_inode) >> 9))
if (end >= (uint64_t)i_size_read(bdev->bd_inode))
return -EINVAL;
if (end < start)
return -EINVAL;
return blkdev_issue_zeroout(bdev, start, len, GFP_KERNEL, false);
/* Invalidate the page cache, including dirty pages */
mapping = bdev->bd_inode->i_mapping;
truncate_inode_pages_range(mapping, start, end);
return blkdev_issue_zeroout(bdev, start >> 9, len >> 9, GFP_KERNEL,
false);
}
static int put_ushort(unsigned long arg, unsigned short val)


@ -47,7 +47,7 @@ static void crypto_pump_requests(struct crypto_engine *engine,
/* If another context is idling then defer */
if (engine->idling) {
queue_kthread_work(&engine->kworker, &engine->pump_requests);
kthread_queue_work(&engine->kworker, &engine->pump_requests);
goto out;
}
@ -58,7 +58,7 @@ static void crypto_pump_requests(struct crypto_engine *engine,
/* Only do teardown in the thread */
if (!in_kthread) {
queue_kthread_work(&engine->kworker,
kthread_queue_work(&engine->kworker,
&engine->pump_requests);
goto out;
}
@ -189,7 +189,7 @@ int crypto_transfer_cipher_request(struct crypto_engine *engine,
ret = ablkcipher_enqueue_request(&engine->queue, req);
if (!engine->busy && need_pump)
queue_kthread_work(&engine->kworker, &engine->pump_requests);
kthread_queue_work(&engine->kworker, &engine->pump_requests);
spin_unlock_irqrestore(&engine->queue_lock, flags);
return ret;
@ -231,7 +231,7 @@ int crypto_transfer_hash_request(struct crypto_engine *engine,
ret = ahash_enqueue_request(&engine->queue, req);
if (!engine->busy && need_pump)
queue_kthread_work(&engine->kworker, &engine->pump_requests);
kthread_queue_work(&engine->kworker, &engine->pump_requests);
spin_unlock_irqrestore(&engine->queue_lock, flags);
return ret;
@ -284,7 +284,7 @@ void crypto_finalize_cipher_request(struct crypto_engine *engine,
req->base.complete(&req->base, err);
queue_kthread_work(&engine->kworker, &engine->pump_requests);
kthread_queue_work(&engine->kworker, &engine->pump_requests);
}
EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
@ -321,7 +321,7 @@ void crypto_finalize_hash_request(struct crypto_engine *engine,
req->base.complete(&req->base, err);
queue_kthread_work(&engine->kworker, &engine->pump_requests);
kthread_queue_work(&engine->kworker, &engine->pump_requests);
}
EXPORT_SYMBOL_GPL(crypto_finalize_hash_request);
@ -345,7 +345,7 @@ int crypto_engine_start(struct crypto_engine *engine)
engine->running = true;
spin_unlock_irqrestore(&engine->queue_lock, flags);
queue_kthread_work(&engine->kworker, &engine->pump_requests);
kthread_queue_work(&engine->kworker, &engine->pump_requests);
return 0;
}
@ -422,7 +422,7 @@ struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt)
crypto_init_queue(&engine->queue, CRYPTO_ENGINE_MAX_QLEN);
spin_lock_init(&engine->queue_lock);
init_kthread_worker(&engine->kworker);
kthread_init_worker(&engine->kworker);
engine->kworker_task = kthread_run(kthread_worker_fn,
&engine->kworker, "%s",
engine->name);
@ -430,7 +430,7 @@ struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt)
dev_err(dev, "failed to create crypto request pump task\n");
return NULL;
}
init_kthread_work(&engine->pump_requests, crypto_pump_work);
kthread_init_work(&engine->pump_requests, crypto_pump_work);
if (engine->rt) {
dev_info(dev, "will run requests pump with realtime priority\n");
@ -455,7 +455,7 @@ int crypto_engine_exit(struct crypto_engine *engine)
if (ret)
return ret;
flush_kthread_worker(&engine->kworker);
kthread_flush_worker(&engine->kworker);
kthread_stop(engine->kworker_task);
return 0;


@ -840,13 +840,13 @@ static void loop_config_discard(struct loop_device *lo)
static void loop_unprepare_queue(struct loop_device *lo)
{
flush_kthread_worker(&lo->worker);
kthread_flush_worker(&lo->worker);
kthread_stop(lo->worker_task);
}
static int loop_prepare_queue(struct loop_device *lo)
{
init_kthread_worker(&lo->worker);
kthread_init_worker(&lo->worker);
lo->worker_task = kthread_run(kthread_worker_fn,
&lo->worker, "loop%d", lo->lo_number);
if (IS_ERR(lo->worker_task))
@ -1658,7 +1658,7 @@ static int loop_queue_rq(struct blk_mq_hw_ctx *hctx,
break;
}
queue_kthread_work(&lo->worker, &cmd->work);
kthread_queue_work(&lo->worker, &cmd->work);
return BLK_MQ_RQ_QUEUE_OK;
}
@ -1696,7 +1696,7 @@ static int loop_init_request(void *data, struct request *rq,
struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq);
cmd->rq = rq;
init_kthread_work(&cmd->work, loop_queue_work);
kthread_init_work(&cmd->work, loop_queue_work);
return 0;
}


@ -2100,23 +2100,37 @@ unsigned long get_random_long(void)
}
EXPORT_SYMBOL(get_random_long);
/*
* randomize_range() returns a start address such that
/**
* randomize_page - Generate a random, page aligned address
* @start: The smallest acceptable address the caller will take.
* @range: The size of the area, starting at @start, within which the
* random address must fall.
*
* [...... <range> .....]
* start end
* If @start + @range would overflow, @range is capped.
*
* a <range> with size "len" starting at the return value is inside in the
* area defined by [start, end], but is otherwise randomized.
* NOTE: Historical use of randomize_range, which this replaces, presumed that
* @start was already page aligned. We now align it regardless.
*
* Return: A page aligned address within [start, start + range). On error,
* @start is returned.
*/
unsigned long
randomize_range(unsigned long start, unsigned long end, unsigned long len)
randomize_page(unsigned long start, unsigned long range)
{
unsigned long range = end - len - start;
if (!PAGE_ALIGNED(start)) {
range -= PAGE_ALIGN(start) - start;
start = PAGE_ALIGN(start);
}
if (end <= start + len)
return 0;
return PAGE_ALIGN(get_random_int() % range + start);
if (start > ULONG_MAX - range)
range = ULONG_MAX - start;
range >>= PAGE_SHIFT;
if (range == 0)
return start;
return start + (get_random_long() % range << PAGE_SHIFT);
}
/* Interface for in-kernel drivers of true hardware RNGs.


@ -38,7 +38,6 @@
#include <linux/workqueue.h>
#include <linux/module.h>
#include <linux/dma-mapping.h>
#include <linux/kconfig.h>
#include "../tty/hvc/hvc_console.h"
#define is_rproc_enabled IS_ENABLED(CONFIG_REMOTEPROC)


@ -129,7 +129,7 @@ void rvt_cq_enter(struct rvt_cq *cq, struct ib_wc *entry, bool solicited)
if (likely(worker)) {
cq->notify = RVT_CQ_NONE;
cq->triggered++;
queue_kthread_work(worker, &cq->comptask);
kthread_queue_work(worker, &cq->comptask);
}
}
@ -265,7 +265,7 @@ struct ib_cq *rvt_create_cq(struct ib_device *ibdev,
cq->ibcq.cqe = entries;
cq->notify = RVT_CQ_NONE;
spin_lock_init(&cq->lock);
init_kthread_work(&cq->comptask, send_complete);
kthread_init_work(&cq->comptask, send_complete);
cq->queue = wc;
ret = &cq->ibcq;
@ -295,7 +295,7 @@ int rvt_destroy_cq(struct ib_cq *ibcq)
struct rvt_cq *cq = ibcq_to_rvtcq(ibcq);
struct rvt_dev_info *rdi = cq->rdi;
flush_kthread_work(&cq->comptask);
kthread_flush_work(&cq->comptask);
spin_lock(&rdi->n_cqs_lock);
rdi->n_cqs_allocated--;
spin_unlock(&rdi->n_cqs_lock);
@ -514,7 +514,7 @@ int rvt_driver_cq_init(struct rvt_dev_info *rdi)
rdi->worker = kzalloc(sizeof(*rdi->worker), GFP_KERNEL);
if (!rdi->worker)
return -ENOMEM;
init_kthread_worker(rdi->worker);
kthread_init_worker(rdi->worker);
task = kthread_create_on_node(
kthread_worker_fn,
rdi->worker,
@ -547,7 +547,7 @@ void rvt_cq_exit(struct rvt_dev_info *rdi)
/* blocks future queuing from send_complete() */
rdi->worker = NULL;
smp_wmb(); /* See rdi_cq_enter */
flush_kthread_worker(worker);
kthread_flush_worker(worker);
kthread_stop(worker->task);
kfree(worker);
}


@ -9,7 +9,6 @@
#include <linux/kernel.h>
#include <linux/device.h>
#include <linux/kconfig.h>
#include <linux/list.h>
#include <linux/pm.h>
#include <linux/rmi.h>


@ -17,7 +17,6 @@
#include <linux/bitmap.h>
#include <linux/delay.h>
#include <linux/fs.h>
#include <linux/kconfig.h>
#include <linux/pm.h>
#include <linux/slab.h>
#include <linux/of.h>


@ -8,7 +8,6 @@
*/
#include <linux/kernel.h>
#include <linux/kconfig.h>
#include <linux/rmi.h>
#include <linux/slab.h>
#include <linux/uaccess.h>


@ -12,7 +12,6 @@
#include <linux/device.h>
#include <linux/input.h>
#include <linux/input/mt.h>
#include <linux/kconfig.h>
#include <linux/rmi.h>
#include <linux/slab.h>
#include <linux/of.h>


@ -52,7 +52,6 @@
#include <linux/bitops.h>
#include <linux/cpumask.h>
#include <linux/kconfig.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/interrupt.h>


@ -12,7 +12,6 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/bitops.h>
#include <linux/kconfig.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/interrupt.h>


@ -13,7 +13,6 @@
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/kconfig.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <linux/of.h>


@ -18,7 +18,6 @@
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/kconfig.h>
#include <linux/platform_device.h>
#include <linux/spinlock.h>
#include <linux/of.h>


@ -581,7 +581,7 @@ static void init_tio(struct dm_rq_target_io *tio, struct request *rq,
if (!md->init_tio_pdu)
memset(&tio->info, 0, sizeof(tio->info));
if (md->kworker_task)
init_kthread_work(&tio->work, map_tio_request);
kthread_init_work(&tio->work, map_tio_request);
}
static struct dm_rq_target_io *dm_old_prep_tio(struct request *rq,
@ -831,7 +831,7 @@ static void dm_old_request_fn(struct request_queue *q)
tio = tio_from_request(rq);
/* Establish tio->ti before queuing work (map_tio_request) */
tio->ti = ti;
queue_kthread_work(&md->kworker, &tio->work);
kthread_queue_work(&md->kworker, &tio->work);
BUG_ON(!irqs_disabled());
}
}
@ -853,7 +853,7 @@ int dm_old_init_request_queue(struct mapped_device *md)
blk_queue_prep_rq(md->queue, dm_old_prep_fn);
/* Initialize the request-based DM worker thread */
init_kthread_worker(&md->kworker);
kthread_init_worker(&md->kworker);
md->kworker_task = kthread_run(kthread_worker_fn, &md->kworker,
"kdmwork-%s", dm_device_name(md));
if (IS_ERR(md->kworker_task))


@ -1891,7 +1891,7 @@ static void __dm_destroy(struct mapped_device *md, bool wait)
spin_unlock_irq(q->queue_lock);
if (dm_request_based(md) && md->kworker_task)
flush_kthread_worker(&md->kworker);
kthread_flush_worker(&md->kworker);
/*
* Take suspend_lock so that presuspend and postsuspend methods
@ -2147,7 +2147,7 @@ static int __dm_suspend(struct mapped_device *md, struct dm_table *map,
if (dm_request_based(md)) {
dm_stop_queue(md->queue);
if (md->kworker_task)
flush_kthread_worker(&md->kworker);
kthread_flush_worker(&md->kworker);
}
flush_workqueue(md->wq);


@ -25,7 +25,6 @@
#ifndef AF9013_H
#define AF9013_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
/* AF9013/5 GPIOs (mostly guessed)


@ -22,8 +22,6 @@
#ifndef AF9033_H
#define AF9033_H
#include <linux/kconfig.h>
/*
* I2C address (TODO: are these in 8-bit format?)
* 0x38, 0x3a, 0x3c, 0x3e


@ -22,7 +22,6 @@
#ifndef __DVB_ASCOT2E_H__
#define __DVB_ASCOT2E_H__
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
#include <linux/i2c.h>


@ -22,7 +22,6 @@
#ifndef __ATBM8830_H__
#define __ATBM8830_H__
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
#include <linux/i2c.h>


@ -22,7 +22,6 @@
#ifndef __AU8522_H__
#define __AU8522_H__
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
enum au8522_if_freq {


@ -28,7 +28,6 @@
#ifndef CX22702_H
#define CX22702_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
struct cx22702_config {


@ -22,8 +22,6 @@
#ifndef CX24113_H
#define CX24113_H
#include <linux/kconfig.h>
struct dvb_frontend;
struct cx24113_config {


@ -21,7 +21,6 @@
#ifndef CX24116_H
#define CX24116_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
struct cx24116_config {


@ -22,7 +22,6 @@
#ifndef CX24117_H
#define CX24117_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
struct cx24117_config {


@ -20,7 +20,6 @@
#ifndef CX24120_H
#define CX24120_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
#include <linux/firmware.h>


@ -21,7 +21,6 @@
#ifndef CX24123_H
#define CX24123_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
struct cx24123_config {


@ -22,7 +22,6 @@
#ifndef CXD2820R_H
#define CXD2820R_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
#define CXD2820R_GPIO_D (0 << 0) /* disable */


@ -22,7 +22,6 @@
#ifndef CXD2841ER_H
#define CXD2841ER_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
enum cxd2841er_xtal {


@ -13,8 +13,6 @@
#ifndef DIB3000MC_H
#define DIB3000MC_H
#include <linux/kconfig.h>
#include "dibx000_common.h"
struct dib3000mc_config {


@ -1,8 +1,6 @@
#ifndef DIB7000M_H
#define DIB7000M_H
#include <linux/kconfig.h>
#include "dibx000_common.h"
struct dib7000m_config {


@ -1,8 +1,6 @@
#ifndef DIB7000P_H
#define DIB7000P_H
#include <linux/kconfig.h>
#include "dibx000_common.h"
struct dib7000p_config {


@ -24,7 +24,6 @@
#ifndef _DRXD_H_
#define _DRXD_H_
#include <linux/kconfig.h>
#include <linux/types.h>
#include <linux/i2c.h>


@ -1,7 +1,6 @@
#ifndef _DRXK_H_
#define _DRXK_H_
#include <linux/kconfig.h>
#include <linux/types.h>
#include <linux/i2c.h>


@ -22,7 +22,6 @@
#ifndef DS3000_H
#define DS3000_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
struct ds3000_config {


@ -22,7 +22,6 @@
#ifndef DVB_DUMMY_FE_H
#define DVB_DUMMY_FE_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
#include "dvb_frontend.h"


@ -22,7 +22,6 @@
#ifndef EC100_H
#define EC100_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
struct ec100_config {


@ -23,7 +23,6 @@
#ifndef HD29L2_H
#define HD29L2_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
struct hd29l2_config {


@ -21,7 +21,6 @@
#ifndef __DVB_HELENE_H__
#define __DVB_HELENE_H__
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
#include <linux/i2c.h>


@ -22,7 +22,6 @@
#ifndef __DVB_HORUS3A_H__
#define __DVB_HORUS3A_H__
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
#include <linux/i2c.h>


@ -20,7 +20,6 @@
#ifndef DVB_IX2505V_H
#define DVB_IX2505V_H
#include <linux/kconfig.h>
#include <linux/i2c.h>
#include "dvb_frontend.h"


@ -22,7 +22,6 @@
#ifndef _LG2160_H_
#define _LG2160_H_
#include <linux/kconfig.h>
#include <linux/i2c.h>
#include "dvb_frontend.h"


@ -22,7 +22,6 @@
#ifndef _LGDT3305_H_
#define _LGDT3305_H_
#include <linux/kconfig.h>
#include <linux/i2c.h>
#include "dvb_frontend.h"


@ -23,7 +23,6 @@
#ifndef LGS8GL5_H
#define LGS8GL5_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
struct lgs8gl5_config {


@ -26,7 +26,6 @@
#ifndef __LGS8GXX_H__
#define __LGS8GXX_H__
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
#include <linux/i2c.h>


@ -23,8 +23,6 @@
#ifndef _LNBH24_H
#define _LNBH24_H
#include <linux/kconfig.h>
/* system register bits */
#define LNBH24_OLF 0x01
#define LNBH24_OTF 0x02


@ -22,7 +22,6 @@
#define LNBH25_H
#include <linux/i2c.h>
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
/* 22 kHz tone enabled. Tone output controlled by DSQIN pin */


@ -27,8 +27,6 @@
#ifndef _LNBP21_H
#define _LNBP21_H
#include <linux/kconfig.h>
/* system register bits */
/* [RO] 0=OK; 1=over current limit flag */
#define LNBP21_OLF 0x01


@ -28,8 +28,6 @@
#ifndef _LNBP22_H
#define _LNBP22_H
#include <linux/kconfig.h>
/* Enable */
#define LNBP22_EN 0x10
/* Voltage selection */


@ -20,7 +20,6 @@
#ifndef M88RS2000_H
#define M88RS2000_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
#include "dvb_frontend.h"


@ -16,7 +16,6 @@
#ifndef MB86A20S_H
#define MB86A20S_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
/**


@ -22,7 +22,6 @@
#ifndef __S5H1409_H__
#define __S5H1409_H__
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
struct s5h1409_config {


@ -22,7 +22,6 @@
#ifndef __S5H1411_H__
#define __S5H1411_H__
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
#define S5H1411_I2C_TOP_ADDR (0x32 >> 1)


@ -22,7 +22,6 @@
#ifndef __S5H1432_H__
#define __S5H1432_H__
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
#define S5H1432_I2C_TOP_ADDR (0x02 >> 1)


@ -17,7 +17,6 @@
#ifndef S921_H
#define S921_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
struct s921_config {


@ -1,7 +1,6 @@
#ifndef SI21XX_H
#define SI21XX_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
#include "dvb_frontend.h"


@ -17,7 +17,6 @@
#ifndef SP2_H
#define SP2_H
#include <linux/kconfig.h>
#include "dvb_ca_en50221.h"
/*


@ -23,7 +23,6 @@
#ifndef __DVB_STB6000_H__
#define __DVB_STB6000_H__
#include <linux/kconfig.h>
#include <linux/i2c.h>
#include "dvb_frontend.h"


@ -27,7 +27,6 @@
#ifndef STV0288_H
#define STV0288_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
#include "dvb_frontend.h"


@ -26,7 +26,6 @@
#ifndef STV0367_H
#define STV0367_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
#include "dvb_frontend.h"


@ -26,7 +26,6 @@
#ifndef STV0900_H
#define STV0900_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
#include "dvb_frontend.h"


@ -25,7 +25,6 @@
#ifndef __DVB_STV6110_H__
#define __DVB_STV6110_H__
#include <linux/kconfig.h>
#include <linux/i2c.h>
#include "dvb_frontend.h"


@ -22,7 +22,6 @@
#ifndef TDA10048_H
#define TDA10048_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
#include <linux/firmware.h>


@ -1,8 +1,6 @@
#ifndef _TDA18271C2DD_H_
#define _TDA18271C2DD_H_
#include <linux/kconfig.h>
#if IS_REACHABLE(CONFIG_DVB_TDA18271C2DD)
struct dvb_frontend *tda18271c2dd_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c, u8 adr);


@ -22,7 +22,6 @@
#ifndef TS2020_H
#define TS2020_H
#include <linux/kconfig.h>
#include <linux/dvb/frontend.h>
struct ts2020_config {


@ -21,7 +21,6 @@
#ifndef DVB_ZL10036_H
#define DVB_ZL10036_H
#include <linux/kconfig.h>
#include <linux/i2c.h>
#include "dvb_frontend.h"


@ -22,8 +22,6 @@
#ifndef ZL10039_H
#define ZL10039_H
#include <linux/kconfig.h>
#if IS_REACHABLE(CONFIG_DVB_ZL10039)
struct dvb_frontend *zl10039_attach(struct dvb_frontend *fe,
u8 i2c_addr,


@ -20,8 +20,6 @@
#ifndef __ALTERA_CI_H
#define __ALTERA_CI_H
#include <linux/kconfig.h>
#define ALT_DATA 0x000000ff
#define ALT_TDI 0x00008000
#define ALT_TDO 0x00004000


@ -750,7 +750,7 @@ static int ivtv_init_struct1(struct ivtv *itv)
spin_lock_init(&itv->lock);
spin_lock_init(&itv->dma_reg_lock);
init_kthread_worker(&itv->irq_worker);
kthread_init_worker(&itv->irq_worker);
itv->irq_worker_task = kthread_run(kthread_worker_fn, &itv->irq_worker,
"%s", itv->v4l2_dev.name);
if (IS_ERR(itv->irq_worker_task)) {
@ -760,7 +760,7 @@ static int ivtv_init_struct1(struct ivtv *itv)
/* must use the FIFO scheduler as it is realtime sensitive */
sched_setscheduler(itv->irq_worker_task, SCHED_FIFO, &param);
init_kthread_work(&itv->irq_work, ivtv_irq_work_handler);
kthread_init_work(&itv->irq_work, ivtv_irq_work_handler);
/* Initial settings */
itv->cxhdl.port = CX2341X_PORT_MEMORY;
@ -1441,7 +1441,7 @@ static void ivtv_remove(struct pci_dev *pdev)
del_timer_sync(&itv->dma_timer);
/* Kill irq worker */
flush_kthread_worker(&itv->irq_worker);
kthread_flush_worker(&itv->irq_worker);
kthread_stop(itv->irq_worker_task);
ivtv_streams_cleanup(itv);


@ -1062,7 +1062,7 @@ irqreturn_t ivtv_irq_handler(int irq, void *dev_id)
}
if (test_and_clear_bit(IVTV_F_I_HAVE_WORK, &itv->i_flags)) {
queue_kthread_work(&itv->irq_worker, &itv->irq_work);
kthread_queue_work(&itv->irq_worker, &itv->irq_work);
}
spin_unlock(&itv->dma_reg_lock);


@ -1,7 +1,6 @@
#ifndef LINUX_FC0011_H_
#define LINUX_FC0011_H_
#include <linux/kconfig.h>
#include "dvb_frontend.h"

Some files were not shown because too many files have changed in this diff.