Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial

Pull trivial tree from Jiri Kosina:
 "The usual trivial updates all over the tree -- mostly typo fixes and
  documentation updates"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (52 commits)
  doc: Documentation/cputopology.txt fix typo
  treewide: Convert retrun typos to return
  Fix comment typo for init_cma_reserved_pageblock
  Documentation/trace: Correcting and extending tracepoint documentation
  mm/hotplug: fix a typo in Documentation/memory-hotplug.txt
  power: Documentation: Update s2ram link
  doc: fix a typo in Documentation/00-INDEX
  Documentation/printk-formats.txt: No casts needed for u64/s64
  doc: Fix typo "is is" in Documentations
  treewide: Fix printks with 0x%#
  zram: doc fixes
  Documentation/kmemcheck: update kmemcheck documentation
  doc: documentation/hwspinlock.txt fix typo
  PM / Hibernate: add section for resume options
  doc: filesystems : Fix typo in Documentations/filesystems
  scsi/megaraid fixed several typos in comments
  ppc: init_32: Fix error typo "CONFIG_START_KERNEL"
  treewide: Add __GFP_NOWARN to k.alloc calls with v.alloc fallbacks
  page_isolation: Fix a comment typo in test_pages_isolated()
  doc: fix a typo about irq affinity
  ...
commit 2e515bf096
Author: Linus Torvalds
Date:   2013-09-06 09:36:28 -07:00

106 changed files with 269 additions and 204 deletions

CREDITS

@@ -637,14 +637,13 @@ S: 14509 NE 39th Street #1096
 S: Bellevue, Washington 98007
 S: USA
-N: Christopher L. Cheney
-E: ccheney@debian.org
-E: ccheney@cheney.cx
-W: http://www.cheney.cx
+N: Chris Cheney
+E: chris.cheney@gmail.com
+E: ccheney@redhat.com
 P: 1024D/8E384AF2 2D31 1927 87D7 1F24 9FF9 1BC5 D106 5AB3 8E38 4AF2
 D: Vista Imaging usb webcam driver
-S: 314 Prince of Wales
-S: Conroe, TX 77304
+S: 2308 Therrell Way
+S: McKinney, TX 75070
 S: USA

 N: Stuart Cheshire

@@ -40,7 +40,7 @@ IPMI.txt
 IRQ-affinity.txt
 	- how to select which CPU(s) handle which interrupt events on SMP.
 IRQ-domain.txt
-	- info on inerrupt numbering and setting up IRQ domains.
+	- info on interrupt numbering and setting up IRQ domains.
 IRQ.txt
 	- description of what an IRQ is.
 Intel-IOMMU.txt

@@ -5,20 +5,21 @@ Description:
 		The disksize file is read-write and specifies the disk size
 		which represents the limit on the *uncompressed* worth of data
 		that can be stored in this disk.
+		Unit: bytes

 What:		/sys/block/zram<id>/initstate
 Date:		August 2010
 Contact:	Nitin Gupta <ngupta@vflare.org>
 Description:
-		The disksize file is read-only and shows the initialization
+		The initstate file is read-only and shows the initialization
 		state of the device.

 What:		/sys/block/zram<id>/reset
 Date:		August 2010
 Contact:	Nitin Gupta <ngupta@vflare.org>
 Description:
-		The disksize file is write-only and allows resetting the
-		device. The reset operation frees all the memory assocaited
+		The reset file is write-only and allows resetting the
+		device. The reset operation frees all the memory associated
 		with this device.

 What:		/sys/block/zram<id>/num_reads
@@ -48,7 +49,7 @@ Contact:	Nitin Gupta <ngupta@vflare.org>
 Description:
 		The notify_free file is read-only and specifies the number of
 		swap slot free notifications received by this device. These
-		notifications are send to a swap block device when a swap slot
+		notifications are sent to a swap block device when a swap slot
 		is freed. This statistic is applicable only when this disk is
 		being used as a swap disk.

@@ -132,7 +132,7 @@ devices.</para>
 	  <row>
 	    <entry>&v4l2-fract;</entry>
 	    <entry><structfield>timeperframe</structfield></entry>
-	    <entry><para>This is is the desired period between
+	    <entry><para>This is the desired period between
 successive frames captured by the driver, in seconds. The
 field is intended to skip frames on the driver side, saving I/O
 bandwidth.</para><para>Applications store here the desired frame
@@ -193,7 +193,7 @@ applications must set the array to zero.</entry>
 	  <row>
 	    <entry>&v4l2-fract;</entry>
 	    <entry><structfield>timeperframe</structfield></entry>
-	    <entry>This is is the desired period between
+	    <entry>This is the desired period between
 successive frames output by the driver, in seconds.</entry>
 	  </row>
 	  <row>

@@ -57,8 +57,8 @@ i.e counters for the CPU0-3 did not change.
 Here is an example of limiting that same irq (44) to cpus 1024 to 1031:

-[root@moon 44]# echo 1024-1031 > smp_affinity
-[root@moon 44]# cat smp_affinity
+[root@moon 44]# echo 1024-1031 > smp_affinity_list
+[root@moon 44]# cat smp_affinity_list
 1024-1031

 Note that to do this with a bitmask would require 32 bitmasks of zero

@@ -109,6 +109,16 @@ probably didn't even receive earlier versions of the patch.
 If the patch fixes a logged bug entry, refer to that bug entry by
 number and URL.

+If you want to refer to a specific commit, don't just refer to the
+SHA-1 ID of the commit. Please also include the oneline summary of
+the commit, to make it easier for reviewers to know what it is about.
+Example:
+
+	Commit e21d2170f36602ae2708 ("video: remove unnecessary
+	platform_set_drvdata()") removed the unnecessary
+	platform_set_drvdata(), but left the variable "dev" unused,
+	delete it.
+
 3) Separate your changes.
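The reference form recommended in the hunk above can be generated directly with git's pretty-format placeholders. A minimal sketch, using a throwaway repository so it is self-contained (`git` itself is the only assumption; the commit message is taken from the example in the text):

```shell
# Sketch: print a commit reference in the 'Commit <sha> ("<oneline summary>")'
# form suggested by the SubmittingPatches hunk above.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
# Create a commit to reference (identity set inline so the sketch runs anywhere).
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m 'video: remove unnecessary platform_set_drvdata()'
# %h is the abbreviated SHA-1, %s the oneline summary.
git log -1 --format='Commit %h ("%s")'
```

In a real tree you would run the final `git log` line against the actual commit ID you want to cite.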

@@ -207,7 +207,7 @@ passing those. One idea is to return this in _DSM method like:
 		Return (Local0)
 	}

-Then the at25 SPI driver can get this configation by calling _DSM on its
+Then the at25 SPI driver can get this configuration by calling _DSM on its
 ACPI handle like:

 	struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };

@@ -78,7 +78,7 @@ to NULL. Drivers should use the following idiom:
 The most common usage of these functions will probably be to specify
 the maximum time from when an interrupt occurs, to when the device
 becomes accessible. To accomplish this, driver writers should use the
-set_max_mpu_wakeup_lat() function to to constrain the MPU wakeup
+set_max_mpu_wakeup_lat() function to constrain the MPU wakeup
 latency, and the set_max_dev_wakeup_lat() function to constrain the
 device wakeup latency (from clk_enable() to accessibility). For
 example,

@@ -69,7 +69,7 @@ one, this value should be decreased relative to fifo_expire_async.
 group_idle
 -----------
 This parameter forces idling at the CFQ group level instead of CFQ
-queue level. This was introduced after after a bottleneck was observed
+queue level. This was introduced after a bottleneck was observed
 in higher end storage due to idle on sequential queue and allow dispatch
 from a single queue. The idea with this parameter is that it can be run with
 slice_idle=0 and group_idle=8, so that idling does not happen on individual

@@ -57,7 +57,7 @@ changes occur:
 	interface must make sure that any previous page table
 	modifications for the address space 'vma->vm_mm' in the range
 	'start' to 'end-1' will be visible to the cpu. That is, after
-	running, here will be no entries in the TLB for 'mm' for
+	running, there will be no entries in the TLB for 'mm' for
 	virtual addresses in the range 'start' to 'end-1'.

 	The "vma" is the backing store being used for the region.
@@ -375,8 +375,8 @@ maps this page at its virtual address.
   void flush_icache_page(struct vm_area_struct *vma, struct page *page)

 	All the functionality of flush_icache_page can be implemented in
-	flush_dcache_page and update_mmu_cache. In 2.7 the hope is to
-	remove this interface completely.
+	flush_dcache_page and update_mmu_cache. In the future, the hope
+	is to remove this interface completely.

 The final category of APIs is for I/O to deliberately aliased address
 ranges inside the kernel. Such aliases are set up by use of the

@@ -22,7 +22,7 @@ to /proc/cpuinfo.
 4) /sys/devices/system/cpu/cpuX/topology/thread_siblings:

-	internel kernel map of cpuX's hardware threads within the same
+	internal kernel map of cpuX's hardware threads within the same
 	core as cpuX

 5) /sys/devices/system/cpu/cpuX/topology/core_siblings:

@@ -276,7 +276,7 @@ mainline get there via -mm.
 The current -mm patch is available in the "mmotm" (-mm of the moment)
 directory at:

-	http://userweb.kernel.org/~akpm/mmotm/
+	http://www.ozlabs.org/~akpm/mmotm/

 Use of the MMOTM tree is likely to be a frustrating experience, though;
 there is a definite chance that it will not even compile.
@@ -287,7 +287,7 @@ the mainline is expected to look like after the next merge window closes.
 Linux-next trees are announced on the linux-kernel and linux-next mailing
 lists when they are assembled; they can be downloaded from:

-	http://www.kernel.org/pub/linux/kernel/people/sfr/linux-next/
+	http://www.kernel.org/pub/linux/kernel/next/

 Some information about linux-next has been gathered at:

@@ -22,7 +22,7 @@ This contains the board-specific information.
 - compatible: must be "stericsson,s365".
 - vana15-supply: the regulator supplying the 1.5V to drive the
   board.
-- syscon: a pointer to the syscon node so we can acccess the
+- syscon: a pointer to the syscon node so we can access the
   syscon registers to set the board as self-powered.

 Example:

@@ -32,8 +32,8 @@ numbers - see motherboard's TRM for more details.
 The node describing a config device must refer to the sysreg node via
 "arm,vexpress,config-bridge" phandle (can be also defined in the node's
 parent) and relies on the board topology properties - see main vexpress
-node documentation for more details. It must must also define the
-following property:
+node documentation for more details. It must also define the following
+property:

 - arm,vexpress-sysreg,func : must contain two cells:
   - first cell defines function number (eg. 1 for clock generator,
     2 for voltage regulators etc.)

@@ -5,7 +5,7 @@ TI C6X SoCs contain a region of miscellaneous registers which provide various
 function for SoC control or status. Details vary considerably among from SoC
 to SoC with no two being alike.

-In general, the Device State Configuraion Registers (DSCR) will provide one or
+In general, the Device State Configuration Registers (DSCR) will provide one or
 more configuration registers often protected by a lock register where one or
 more key values must be written to a lock register in order to unlock the
 configuration register for writes. These configuration register may be used to

@@ -2,7 +2,7 @@
 The Samsung Audio Subsystem clock controller generates and supplies clocks
 to Audio Subsystem block available in the S5PV210 and Exynos SoCs. The clock
-binding described here is applicable to all SoC's in Exynos family.
+binding described here is applicable to all SoCs in Exynos family.

 Required Properties:

@@ -17,7 +17,7 @@ Optional properties for the SRC node:
 - disable-mxtal: if present this will disable the MXTALO,
   i.e. the driver output for the main (~19.2 MHz) chrystal,
   if the board has its own circuitry for providing this
-  osciallator
+  oscillator

 PLL nodes: these nodes represent the two PLLs on the system,

@@ -18,14 +18,14 @@ dma0: dma@ffffec00 {
 DMA clients connected to the Atmel DMA controller must use the format
 described in the dma.txt file, using a three-cell specifier for each channel:
-a phandle plus two interger cells.
+a phandle plus two integer cells.
 The three cells in order are:

 1. A phandle pointing to the DMA controller.
 2. The memory interface (16 most significant bits), the peripheral interface
    (16 less significant bits).
 3. Parameters for the at91 DMA configuration register which are device
-   dependant:
+   dependent:
    - bit 7-0: peripheral identifier for the hardware handshaking interface. The
      identifier can be different for tx and rx.
    - bit 11-8: FIFO configuration. 0 for half FIFO, 1 for ALAP, 1 for ASAP.

@@ -34,7 +34,7 @@ Clients have to specify the DMA requests with phandles in a list.
 Required properties:
 - dmas: List of one or more DMA request specifiers. One DMA request specifier
 	consists of a phandle to the DMA controller followed by the integer
-	specifiying the request line.
+	specifying the request line.
 - dma-names: List of string identifiers for the DMA requests. For the correct
 	names, have a look at the specific client driver.

@@ -37,14 +37,14 @@ Each dmas request consists of 4 cells:
 1. A phandle pointing to the DMA controller
 2. Device Type
 3. The DMA request line number (only when 'use fixed channel' is set)
-4. A 32bit mask specifying; mode, direction and endianess [NB: This list will grow]
+4. A 32bit mask specifying; mode, direction and endianness [NB: This list will grow]
    0x00000001: Mode:
 	Logical channel when unset
 	Physical channel when set
    0x00000002: Direction:
 	Memory to Device when unset
 	Device to Memory when set
-   0x00000004: Endianess:
+   0x00000004: Endianness:
 	Little endian when unset
 	Big endian when set
    0x00000008: Use fixed channel:

@@ -4,7 +4,7 @@ Google's ChromeOS EC is a Cortex-M device which talks to the AP and
 implements various function such as keyboard and battery charging.

 The EC can be connect through various means (I2C, SPI, LPC) and the
-compatible string used depends on the inteface. Each connection method has
+compatible string used depends on the interface. Each connection method has
 its own driver which connects to the top level interface-agnostic EC driver.
 Other Linux driver (such as cros-ec-keyb for the matrix keyboard) connect to
 the top-level driver.

@@ -8,7 +8,7 @@ Required properties:
 Example:

 	can0: can@f000c000 {
-		compatbile = "atmel,at91sam9x5-can";
+		compatible = "atmel,at91sam9x5-can";
 		reg = <0xf000c000 0x300>;
 		interrupts = <40 4 5>
 	};

@@ -37,7 +37,7 @@ Bank: 3 (A, B and C)
 0xffffffff 0x7fff3ccf	/* pioB */
 0xffffffff 0x007fffff	/* pioC */

-For each peripheral/bank we will descibe in a u32 if a pin can can be
+For each peripheral/bank we will descibe in a u32 if a pin can be
 configured in it by putting 1 to the pin bit (1 << pin)

 Let's take the pioA on peripheral B

@@ -7,7 +7,7 @@ UART node.
 Required properties:
 - rs485-rts-delay: prop-encoded-array <a b> where:
-  * a is the delay beteween rts signal and beginning of data sent in milliseconds.
+  * a is the delay between rts signal and beginning of data sent in milliseconds.
     it corresponds to the delay before sending data.
   * b is the delay between end of data sent and rts signal in milliseconds
     it corresponds to the delay after sending data and actual release of the line.

@@ -321,7 +321,7 @@ Access to a dma_buf from the kernel context involves three steps:
    When the importer is done accessing the range specified in begin_cpu_access,
    it needs to announce this to the exporter (to facilitate cache flushing and
-   unpinning of any pinned resources). The result of of any dma_buf kmap calls
+   unpinning of any pinned resources). The result of any dma_buf kmap calls
    after end_cpu_access is undefined.

    Interface:

@@ -83,8 +83,7 @@ Where's this all leading?
 The klibc distribution contains some of the necessary software to make
 early userspace useful. The klibc distribution is currently
-maintained separately from the kernel, but this may change early in
-the 2.7 era (it missed the boat for 2.5).
+maintained separately from the kernel.

 You can obtain somewhat infrequent snapshots of klibc from
 ftp://ftp.kernel.org/pub/linux/libs/klibc/

@@ -150,7 +150,7 @@ C. Boot options
 C. Attaching, Detaching and Unloading

-Before going on on how to attach, detach and unload the framebuffer console, an
+Before going on how to attach, detach and unload the framebuffer console, an
 illustration of the dependencies may help.

 The console layer, as with most subsystems, needs a driver that interfaces with

@@ -571,7 +571,7 @@ mode "640x480-60"
 #	160 chars	800 lines
 #	Blank Time	4.798 us	0.564 ms
 #	50 chars	28 lines
-#	Polarity	negtive		positive
+#	Polarity	negative	positive
 #
 mode "1280x800-60"
 # D: 83.500 MHz, H: 49.702 kHz, V: 60.00 Hz

@@ -32,7 +32,7 @@
 Start viafb with default settings:
 	#modprobe viafb

-Start viafb with with user options:
+Start viafb with user options:
 	#modprobe viafb viafb_mode=800x600 viafb_bpp=16 viafb_refresh=60
 		viafb_active_dev=CRT+DVI viafb_dvi_port=DVP1
 		viafb_mode1=1024x768 viafb_bpp=16 viafb_refresh1=60

@@ -87,7 +87,7 @@ Unless otherwise specified, all options default to off.
   device=<devicepath>
 	Specify a device during mount so that ioctls on the control device
-	can be avoided. Especialy useful when trying to mount a multi-device
+	can be avoided. Especially useful when trying to mount a multi-device
 	setup as root. May be specified multiple times for multiple devices.

   discard

@@ -2,7 +2,7 @@
 Ext4 Filesystem
 ===============

-Ext4 is an an advanced level of the ext3 filesystem which incorporates
+Ext4 is an advanced level of the ext3 filesystem which incorporates
 scalability and reliability enhancements for supporting large filesystems
 (64 bit) in keeping with increasing disk capacities and state-of-the-art
 feature requirements.

@@ -93,7 +93,7 @@ For a filesystem to be exportable it must:
    2/ make sure that d_splice_alias is used rather than d_add
       when ->lookup finds an inode for a given parent and name.

-      If inode is NULL, d_splice_alias(inode, dentry) is eqivalent to
+      If inode is NULL, d_splice_alias(inode, dentry) is equivalent to
       d_add(dentry, inode), NULL

@@ -12,7 +12,7 @@ struct pnfs_layout_hdr
 ----------------------
 The on-the-wire command LAYOUTGET corresponds to struct
 pnfs_layout_segment, usually referred to by the variable name lseg.
-Each nfs_inode may hold a pointer to a cache of of these layout
+Each nfs_inode may hold a pointer to a cache of these layout
 segments in nfsi->layout, of type struct pnfs_layout_hdr.

 We reference the header for the inode pointing to it, across each

@@ -149,7 +149,7 @@ Bitmap system area
 ------------------
 The bitmap itself is divided into three parts.
-First the system area, that is split into two halfs.
+First the system area, that is split into two halves.
 Then userspace.

 The requirement for a static, fixed preallocated system area comes from how

@@ -31,7 +31,7 @@ Semantics
 Each relay channel has one buffer per CPU, each buffer has one or more
 sub-buffers. Messages are written to the first sub-buffer until it is
-too full to contain a new message, in which case it it is written to
+too full to contain a new message, in which case it is written to
 the next (if available). Messages are never split across sub-buffers.
 At this point, userspace can be notified so it empties the first
 sub-buffer, while the kernel continues writing to the next.

@@ -24,7 +24,7 @@ flag between KOBJ_NS_TYPE_NONE and KOBJ_NS_TYPES, and s_ns will
 point to the namespace to which it belongs.

 Each sysfs superblock's sysfs_super_info contains an array void
-*ns[KOBJ_NS_TYPES]. When a a task in a tagging namespace
+*ns[KOBJ_NS_TYPES]. When a task in a tagging namespace
 kobj_nstype first mounts sysfs, a new superblock is created. It
 will be differentiated from other sysfs mounts by having its
 s_fs_info->ns[kobj_nstype] set to the new namespace. Note that

@@ -135,7 +135,7 @@ default behaviour.
 	If the memory cost of 8 log buffers is too high on small
 	systems, then it may be reduced at some cost to performance
 	on metadata intensive workloads. The logbsize option below
-	controls the size of each buffer and so is also relevent to
+	controls the size of each buffer and so is also relevant to
 	this case.

   logbsize=value

@@ -213,7 +213,7 @@ The individual methods perform the following tasks:
 	methods: for example the SPEC driver may define that its carrier
 	I2C memory is seen at offset 1M and the internal SPI flash is seen
 	at offset 16M. This multiplexing of several flash memories in the
-	same address space is is carrier-specific and should only be used
+	same address space is carrier-specific and should only be used
 	by a driver that has verified the `carrier_name' field.

@@ -299,7 +299,7 @@ Byte 1:
   min threshold (scale as bank 0x26)

-Warning for the adventerous
+Warning for the adventurous
 ===========================

 A word of caution to those who want to experiment and see if they can figure

@@ -1,7 +1,7 @@
 How to Get Your Patch Accepted Into the Hwmon Subsystem
 -------------------------------------------------------

-This text is is a collection of suggestions for people writing patches or
+This text is a collection of suggestions for people writing patches or
 drivers for the hwmon subsystem. Following these suggestions will greatly
 increase the chances of your change being accepted.

@@ -241,7 +241,7 @@ int hwspinlock_example2(void)
      locks).
      Should be called from a process context (this function might sleep).
      Returns the address of hwspinlock on success, or NULL on error (e.g.
-     if the hwspinlock is sill in use).
+     if the hwspinlock is still in use).

 5. Important structs

@@ -196,8 +196,8 @@ static int example_probe(struct i2c_client *i2c_client,
 Update the detach method, by changing the name to _remove and
 to delete the i2c_detach_client call. It is possible that you
-can also remove the ret variable as it is not not needed for
-any of the core functions.
+can also remove the ret variable as it is not needed for any
+of the core functions.

 - static int example_detach(struct i2c_client *client)
 + static int example_remove(struct i2c_client *client)

@@ -91,9 +91,9 @@ information from the kmemcheck warnings, which is extremely valuable in
 debugging a problem. This option is not mandatory, however, because it slows
 down the compilation process and produces a much bigger kernel image.

-Now the kmemcheck menu should be visible (under "Kernel hacking" / "kmemcheck:
-trap use of uninitialized memory"). Here follows a description of the
-kmemcheck configuration variables:
+Now the kmemcheck menu should be visible (under "Kernel hacking" / "Memory
+Debugging" / "kmemcheck: trap use of uninitialized memory"). Here follows
+a description of the kmemcheck configuration variables:

 o CONFIG_KMEMCHECK

@@ -71,7 +71,7 @@ To register the chip at address 0x63 on specific adapter, set the platform data
 according to include/linux/platform_data/leds-lm3556.h, set the i2c board info
 Example:
-static struct i2c_board_info __initdata board_i2c_ch4[] = {
+static struct i2c_board_info board_i2c_ch4[] __initdata = {
 {
 I2C_BOARD_INFO(LM3556_NAME, 0x63),
 .platform_data = &lm3556_pdata,

@@ -37,7 +37,7 @@ registered using the i2c_board_info mechanism.
 To register the chip at address 0x60 on adapter 0, set the platform data
 according to include/linux/leds-lp3944.h, set the i2c board info:
-static struct i2c_board_info __initdata a910_i2c_board_info[] = {
+static struct i2c_board_info a910_i2c_board_info[] __initdata = {
 {
 I2C_BOARD_INFO("lp3944", 0x60),
 .platform_data = &a910_lp3944_leds,

@@ -163,7 +163,7 @@ a recent addition and not present on older kernels.
 at read: contains online/offline state of memory.
 at write: user can specify "online_kernel",
 "online_movable", "online", "offline" command
-which will be performed on al sections in the block.
+which will be performed on all sections in the block.
 'phys_device' : read-only: designed to show the name of physical memory
 device. This is not well implemented now.
 'removable' : read-only: contains an integer value indicating

@@ -543,7 +543,7 @@ THe code within the for loop was changed to:
 }
 As you can see tmppar is used to accumulate the parity within a for
-iteration. In the last 3 statements is is added to par and, if needed,
+iteration. In the last 3 statements is added to par and, if needed,
 to rp12 and rp14.
 While making the changes I also found that I could exploit that tmppar

@@ -179,7 +179,7 @@ use the PM_TRACE mechanism documented in Documentation/power/s2ram.txt .
 To verify that the STR works, it is generally more convenient to use the s2ram
 tool available from http://suspend.sf.net and documented at
-http://en.opensuse.org/SDB:Suspend_to_RAM.
+http://en.opensuse.org/SDB:Suspend_to_RAM (S2RAM_LINK).
 Namely, after writing "freezer", "devices", "platform", "processors", or "core"
 into /sys/power/pm_test (available if the kernel is compiled with
@@ -194,10 +194,10 @@ Among other things, the testing with the help of /sys/power/pm_test may allow
 you to identify drivers that fail to suspend or resume their devices. They
 should be unloaded every time before an STR transition.
-Next, you can follow the instructions at http://en.opensuse.org/s2ram to test
-the system, but if it does not work "out of the box", you may need to boot it
-with "init=/bin/bash" and test s2ram in the minimal configuration. In that
-case, you may be able to search for failing drivers by following the procedure
+Next, you can follow the instructions at S2RAM_LINK to test the system, but if
+it does not work "out of the box", you may need to boot it with
+"init=/bin/bash" and test s2ram in the minimal configuration. In that case,
+you may be able to search for failing drivers by following the procedure
 analogous to the one described in section 1. If you find some failing drivers,
 you will have to unload them every time before an STR transition (ie. before
 you run s2ram), and please report the problems with them.

@@ -50,6 +50,19 @@ echo N > /sys/power/image_size
 before suspend (it is limited to 500 MB by default).
+. The resume process checks for the presence of the resume device,
+if found, it then checks the contents for the hibernation image signature.
+If both are found, it resumes the hibernation image.
+. The resume process may be triggered in two ways:
+  1) During lateinit: If resume=/dev/your_swap_partition is specified on
+     the kernel command line, lateinit runs the resume process. If the
+     resume device has not been probed yet, the resume process fails and
+     bootup continues.
+  2) Manually from an initrd or initramfs: May be run from
+     the init script by using the /sys/power/resume file. It is vital
+     that this be done prior to remounting any filesystems (even as
+     read-only) otherwise data may be corrupted.
 Article about goals and implementation of Software Suspend for Linux
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -326,7 +339,7 @@ Q: How can distributions ship a swsusp-supporting kernel with modular
 disk drivers (especially SATA)?
 A: Well, it can be done, load the drivers, then do echo into
-/sys/power/disk/resume file from initrd. Be sure not to mount
+/sys/power/resume file from initrd. Be sure not to mount
 anything, not even read-only mount, or you are going to lose your
 data.

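The two resume triggers described in the hibernation hunk above can be illustrated with a minimal initramfs fragment. This is a hedged sketch: the 8:2 (/dev/sda2) device numbers are an assumption, and writing to /sys/power/resume on a live system starts an immediate resume attempt, so this belongs only in an early-boot script.

```shell
# From an initramfs /init script, BEFORE mounting any filesystem
# (even read-only), hand the kernel the swap partition that holds
# the hibernation image, by major:minor number.
# 8:2 (/dev/sda2) is an assumed example, not a recommendation:
echo 8:2 > /sys/power/resume

# Alternatively, skip the initramfs step entirely and boot with
#     resume=/dev/sda2
# on the kernel command line, so lateinit runs the resume process.
```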
@@ -97,7 +97,7 @@ IPv4 addresses:
 %pI4 1.2.3.4
 %pi4 001.002.003.004
-%p[Ii][hnbl]
+%p[Ii]4[hnbl]
 For printing IPv4 dot-separated decimal addresses. The 'I4' and 'i4'
 specifiers result in a printed address with ('i4') or without ('I4')
@@ -194,11 +194,11 @@ struct va_format:
 u64 SHOULD be printed with %llu/%llx, (unsigned long long):
-printk("%llu", (unsigned long long)u64_var);
+printk("%llu", u64_var);
 s64 SHOULD be printed with %lld/%llx, (long long):
-printk("%lld", (long long)s64_var);
+printk("%lld", s64_var);
 If <type> is dependent on a config option for its size (e.g., sector_t,
 blkcnt_t) or is architecture-dependent for its size (e.g., tcflag_t), use a

@@ -300,7 +300,7 @@ initialization.
 -------------------------------------------
 RapidIO subsystem code organization allows addition of new enumeration/discovery
-methods as new configuration options without significant impact to to the core
+methods as new configuration options without significant impact to the core
 RapidIO code.
 A new enumeration/discovery method has to be attached to one or more mport

@@ -151,7 +151,7 @@ To send a request to the controller:
 generated.
 - The host read the outbound list copy pointer shadow register and compare
-with previous saved read ponter N. If they are different, the host will
+with previous saved read pointer N. If they are different, the host will
 read the (N+1)th outbound list unit.
 The host get the index of the request from the (N+1)th outbound list

@@ -120,7 +120,7 @@ Mic Phantom+48V: switch for +48V phantom power for electrostatic microphones on
 Make sure this is not turned on while any other source is connected to input 1/2.
 It might damage the source and/or the maya44 card.
-Mic/Line input: if switch is is on, input jack 1/2 is microphone input (mono), otherwise line input (stereo).
+Mic/Line input: if switch is on, input jack 1/2 is microphone input (mono), otherwise line input (stereo).
 Bypass: analogue bypass from ADC input to output for channel 1+2. Same as "Monitor" in the windows driver.
 Bypass 1: same for channel 3+4.

@@ -73,7 +73,7 @@ The main requirements are:
 Design
-The new API shares a number of concepts with with the PCM API for flow
+The new API shares a number of concepts with the PCM API for flow
 control. Start, pause, resume, drain and stop commands have the same
 semantics no matter what the content is.
@@ -130,7 +130,7 @@ the settings should remain the exception.
 The timestamp becomes a multiple field structure. It lists the number
 of bytes transferred, the number of samples processed and the number
 of samples rendered/grabbed. All these values can be used to determine
-the avarage bitrate, figure out if the ring buffer needs to be
+the average bitrate, figure out if the ring buffer needs to be
 refilled or the delay due to decoding/encoding/io on the DSP.
 Note that the list of codecs/profiles/modes was derived from the

@@ -47,7 +47,7 @@ versions of the sysfs interface.
 at device creation and removal
 - the unique key to the device at that point in time
 - the kernel's path to the device directory without the leading
-/sys, and always starting with with a slash
+/sys, and always starting with a slash
 - all elements of a devpath must be real directories. Symlinks
 pointing to /sys/devices must always be resolved to their real
 target and the target path must be used to access the device.

@@ -300,7 +300,7 @@ def tcm_mod_build_configfs(proto_ident, fabric_mod_dir_var, fabric_mod_name):
 buf += " int ret;\n\n"
 buf += " if (strstr(name, \"tpgt_\") != name)\n"
 buf += " return ERR_PTR(-EINVAL);\n"
-buf += " if (strict_strtoul(name + 5, 10, &tpgt) || tpgt > UINT_MAX)\n"
+buf += " if (kstrtoul(name + 5, 10, &tpgt) || tpgt > UINT_MAX)\n"
 buf += " return ERR_PTR(-EINVAL);\n\n"
 buf += " tpg = kzalloc(sizeof(struct " + fabric_mod_name + "_tpg), GFP_KERNEL);\n"
 buf += " if (!tpg) {\n"

@@ -735,7 +735,7 @@ Here are the available options:
 function as well as the function being traced.
 print-parent:
-bash-4000 [01] 1477.606694: simple_strtoul <-strict_strtoul
+bash-4000 [01] 1477.606694: simple_strtoul <-kstrtoul
 noprint-parent:
 bash-4000 [01] 1477.606694: simple_strtoul
@@ -759,7 +759,7 @@ Here are the available options:
 latency-format option is enabled.
 bash 4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \
-(+0.000ms): simple_strtoul (strict_strtoul)
+(+0.000ms): simple_strtoul (kstrtoul)
 raw - This will display raw numbers. This option is best for
 use with user applications that can translate the raw

@@ -40,7 +40,13 @@ Two elements are required for tracepoints :
 In order to use tracepoints, you should include linux/tracepoint.h.
-In include/trace/subsys.h :
+In include/trace/events/subsys.h :
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM subsys
+#if !defined(_TRACE_SUBSYS_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_SUBSYS_H
 #include <linux/tracepoint.h>
@@ -48,10 +54,16 @@ DECLARE_TRACE(subsys_eventname,
 TP_PROTO(int firstarg, struct task_struct *p),
 TP_ARGS(firstarg, p));
+#endif /* _TRACE_SUBSYS_H */
+/* This part must be outside protection */
+#include <trace/define_trace.h>
 In subsys/file.c (where the tracing statement must be added) :
-#include <trace/subsys.h>
+#include <trace/events/subsys.h>
+#define CREATE_TRACE_POINTS
 DEFINE_TRACE(subsys_eventname);
 void somefct(void)
@@ -72,6 +84,9 @@ Where :
 - TP_ARGS(firstarg, p) are the parameters names, same as found in the
 prototype.
+- if you use the header in multiple source files, #define CREATE_TRACE_POINTS
+should appear only in one source file.
 Connecting a function (probe) to a tracepoint is done by providing a
 probe (function to call) for the specific tracepoint through
 register_trace_subsys_eventname(). Removing a probe is done through

@@ -53,7 +53,7 @@ incompatible change are allowed. However, there is an extension
 facility that allows backward-compatible extensions to the API to be
 queried and used.
-The extension mechanism is not based on on the Linux version number.
+The extension mechanism is not based on the Linux version number.
 Instead, kvm defines extension identifiers and a facility to query
 whether a particular extension identifier is available. If it is, a
 set of ioctls is available for application use.

@@ -58,7 +58,7 @@ Protocol 2.11: (Kernel 3.6) Added a field for offset of EFI handover
 protocol entry point.
 Protocol 2.12: (Kernel 3.8) Added the xloadflags field and extension fields
-to struct boot_params for for loading bzImage and ramdisk
+to struct boot_params for loading bzImage and ramdisk
 above 4G in 64bit.
 **** MEMORY LAYOUT

@@ -146,7 +146,7 @@ Majordomo lists of VGER.KERNEL.ORG at:
 <http://vger.kernel.org/vger-lists.html>
 如果改动影响了用户空间和内核之间的接口,请给 MAN-PAGES 的维护者(列在
-MAITAINERS 文件里的)发送一个手册页(man-pages)补丁,或者至少通知一下改
+MAINTAINERS 文件里的)发送一个手册页(man-pages)补丁,或者至少通知一下改
 变,让一些信息有途径进入手册页。
 即使在第四步的时候,维护者没有作出回应,也要确认在修改他们的代码的时候

@@ -78,7 +78,7 @@ restore_sigcontext(struct sigcontext __user *sc, struct pt_regs *regs)
 err |= __copy_from_user(regs->iaoq, sc->sc_iaoq, sizeof(regs->iaoq));
 err |= __copy_from_user(regs->iasq, sc->sc_iasq, sizeof(regs->iasq));
 err |= __get_user(regs->sar, &sc->sc_sar);
-DBG(2,"restore_sigcontext: iaoq is 0x%#lx / 0x%#lx\n",
+DBG(2,"restore_sigcontext: iaoq is %#lx / %#lx\n",
 regs->iaoq[0],regs->iaoq[1]);
 DBG(2,"restore_sigcontext: r28 is %ld\n", regs->gr[28]);
 return err;

@@ -52,7 +52,7 @@
 #if defined(CONFIG_KERNEL_START_BOOL) || defined(CONFIG_LOWMEM_SIZE_BOOL)
 /* The amount of lowmem must be within 0xF0000000 - KERNELBASE. */
 #if (CONFIG_LOWMEM_SIZE > (0xF0000000 - PAGE_OFFSET))
-#error "You must adjust CONFIG_LOWMEM_SIZE or CONFIG_START_KERNEL"
+#error "You must adjust CONFIG_LOWMEM_SIZE or CONFIG_KERNEL_START"
 #endif
 #endif
 #define MAX_LOW_MEM CONFIG_LOWMEM_SIZE

@@ -391,7 +391,7 @@ EXPORT_SYMBOL_GPL(__crypto_alloc_tfm);
 * @mask: Mask for type comparison
 *
 * This function should not be used by new algorithm types.
-* Plesae use crypto_alloc_tfm instead.
+* Please use crypto_alloc_tfm instead.
 *
 * crypto_alloc_base() will first attempt to locate an already loaded
 * algorithm. If that fails and the kernel supports dynamically loadable

@@ -393,7 +393,7 @@ static struct page **bm_realloc_pages(struct drbd_bitmap *b, unsigned long want)
 * we must not block on IO to ourselves.
 * Context is receiver thread or dmsetup. */
 bytes = sizeof(struct page *)*want;
-new_pages = kzalloc(bytes, GFP_NOIO);
+new_pages = kzalloc(bytes, GFP_NOIO | __GFP_NOWARN);
 if (!new_pages) {
 new_pages = __vmalloc(bytes,
 GFP_NOIO | __GFP_HIGHMEM | __GFP_ZERO,

@@ -200,14 +200,14 @@ static int __init init_acpi_pm_clocksource(void)
 if ((value2 < value1) && ((value2) < 0xFFF))
 break;
 printk(KERN_INFO "PM-Timer had inconsistent results:"
-" 0x%#llx, 0x%#llx - aborting.\n",
+" %#llx, %#llx - aborting.\n",
 value1, value2);
 pmtmr_ioport = 0;
 return -EINVAL;
 }
 if (i == ACPI_PM_READ_CHECKS) {
 printk(KERN_INFO "PM-Timer failed consistency check "
-" (0x%#llx) - aborting.\n", value1);
+" (%#llx) - aborting.\n", value1);
 pmtmr_ioport = 0;
 return -ENODEV;
 }

@@ -282,7 +282,7 @@ static int get_empty_message_digest(
 }
 } else {
 dev_dbg(device_data->dev, "[%s] Continue hash "
-"calculation, since hmac key avalable",
+"calculation, since hmac key available",
 __func__);
 }
 }

@@ -1541,7 +1541,7 @@ int r300_init(struct radeon_device *rdev)
 rdev->accel_working = true;
 r = r300_startup(rdev);
 if (r) {
-/* Somethings want wront with the accel init stop accel */
+/* Something went wrong with the accel init, so stop accel */
 dev_err(rdev->dev, "Disabling GPU acceleration\n");
 r100_cp_fini(rdev);
 radeon_wb_fini(rdev);

@@ -222,7 +222,8 @@ int ipz_queue_ctor(struct ehca_pd *pd, struct ipz_queue *queue,
 queue->small_page = NULL;
 /* allocate queue page pointers */
-queue->queue_pages = kzalloc(nr_of_pages * sizeof(void *), GFP_KERNEL);
+queue->queue_pages = kzalloc(nr_of_pages * sizeof(void *),
+			     GFP_KERNEL | __GFP_NOWARN);
 if (!queue->queue_pages) {
 queue->queue_pages = vzalloc(nr_of_pages * sizeof(void *));
 if (!queue->queue_pages) {

@@ -6,7 +6,7 @@
 * Based on the usbvideo vicam driver, which is:
 *
 * Copyright (c) 2002 Joe Burks (jburks@wavicle.org),
-* Christopher L Cheney (ccheney@cheney.cx),
+* Chris Cheney (chris.cheney@gmail.com),
 * Pavel Machek (pavel@ucw.cz),
 * John Tyner (jtyner@cs.ucr.edu),
 * Monroe Williams (monroe@pobox.com)

@@ -1157,7 +1157,7 @@ static void cxgb_redirect(struct dst_entry *old, struct dst_entry *new,
 */
 void *cxgb_alloc_mem(unsigned long size)
 {
-void *p = kzalloc(size, GFP_KERNEL);
+void *p = kzalloc(size, GFP_KERNEL | __GFP_NOWARN);
 if (!p)
 p = vzalloc(size);

@@ -1142,7 +1142,7 @@ out: release_firmware(fw);
 */
 void *t4_alloc_mem(size_t size)
 {
-void *p = kzalloc(size, GFP_KERNEL);
+void *p = kzalloc(size, GFP_KERNEL | __GFP_NOWARN);
 if (!p)
 p = vzalloc(size);

@@ -3148,7 +3148,7 @@ int qlcnic_83xx_set_settings(struct qlcnic_adapter *adapter,
 status = qlcnic_83xx_set_port_config(adapter);
 if (status) {
 dev_info(&adapter->pdev->dev,
-"Faild to Set Link Speed and autoneg.\n");
+"Failed to Set Link Speed and autoneg.\n");
 adapter->ahw->port_config = config;
 }
 return status;

@@ -1781,7 +1781,7 @@ static int qlcnic_83xx_process_rcv_ring(struct qlcnic_host_sds_ring *sds_ring,
 break;
 default:
 dev_info(&adapter->pdev->dev,
-"Unkonwn opcode: 0x%x\n", opcode);
+"Unknown opcode: 0x%x\n", opcode);
 goto skip;
 }

@@ -1709,7 +1709,7 @@ static irqreturn_t sis900_interrupt(int irq, void *dev_instance)
 if(netif_msg_intr(sis_priv))
 printk(KERN_DEBUG "%s: exiting interrupt, "
-"interrupt status = 0x%#8.8x.\n",
+"interrupt status = %#8.8x\n",
 net_dev->name, sr32(isr));
 spin_unlock (&sis_priv->lock);

@@ -1199,7 +1199,7 @@ bool wsm_flush_tx(struct cw1200_common *priv)
 if (priv->bh_error) {
 /* In case of failure do not wait for magic. */
-pr_err("[WSM] Fatal error occured, will not flush TX.\n");
+pr_err("[WSM] Fatal error occurred, will not flush TX.\n");
 return false;
 } else {
 /* Get a timestamp of "oldest" frame */

@@ -199,7 +199,7 @@ static void iwl_mvm_te_handle_notif(struct iwl_mvm *mvm,
 * and know the dtim period.
 */
 iwl_mvm_te_check_disconnect(mvm, te_data->vif,
-"No assocation and the time event is over already...");
+"No association and the time event is over already...");
 iwl_mvm_te_clear_data(mvm, te_data);
 } else if (le32_to_cpu(notif->action) & TE_V2_NOTIF_HOST_EVENT_START) {
 te_data->running = true;

@@ -341,7 +341,7 @@ static void _rtl88e_fill_h2c_command(struct ieee80211_hw *hw,
 wait_h2c_limit--;
 if (wait_h2c_limit == 0) {
 RT_TRACE(rtlpriv, COMP_CMD, DBG_LOUD,
-"Wating too long for FW read "
+"Waiting too long for FW read "
 "clear HMEBox(%d)!\n", boxnum);
 break;
 }
@@ -351,7 +351,7 @@ static void _rtl88e_fill_h2c_command(struct ieee80211_hw *hw,
 isfw_read = _rtl88e_check_fw_read_last_h2c(hw, boxnum);
 u1b_tmp = rtl_read_byte(rtlpriv, 0x130);
 RT_TRACE(rtlpriv, COMP_CMD, DBG_LOUD,
-"Wating for FW read clear HMEBox(%d)!!! "
+"Waiting for FW read clear HMEBox(%d)!!! "
 "0x130 = %2x\n", boxnum, u1b_tmp);
 }

@@ -416,7 +416,7 @@ static void rtl92d_dm_dig(struct ieee80211_hw *hw)
 /* because we will send data pkt when scanning
 * this will cause some ap like gear-3700 wep TP
-* lower if we retrun here, this is the diff of
+* lower if we return here, this is the diff of
 * mac80211 driver vs ieee80211 driver */
 /* if (rtlpriv->mac80211.act_scanning)
 * return; */

@@ -330,7 +330,7 @@ static void _rtl8723ae_fill_h2c_command(struct ieee80211_hw *hw,
 wait_h2c_limmit--;
 if (wait_h2c_limmit == 0) {
 RT_TRACE(rtlpriv, COMP_CMD, DBG_LOUD,
-"Wating too long for FW read clear HMEBox(%d)!\n",
+"Waiting too long for FW read clear HMEBox(%d)!\n",
 boxnum);
 break;
 }
@@ -340,7 +340,7 @@ static void _rtl8723ae_fill_h2c_command(struct ieee80211_hw *hw,
 isfw_rd = rtl8723ae_check_fw_read_last_h2c(hw, boxnum);
 u1tmp = rtl_read_byte(rtlpriv, 0x1BF);
 RT_TRACE(rtlpriv, COMP_CMD, DBG_LOUD,
-"Wating for FW read clear HMEBox(%d)!!! "
+"Waiting for FW read clear HMEBox(%d)!!! "
 "0x1BF = %2x\n", boxnum, u1tmp);
 }

@@ -554,7 +554,7 @@ static irqreturn_t pm860x_vchg_handler(int irq, void *data)
 OVTEMP_AUTORECOVER,
 OVTEMP_AUTORECOVER);
 dev_dbg(info->dev,
-"%s, pm8606 over-temp occure\n", __func__);
+"%s, pm8606 over-temp occurred\n", __func__);
 }
 }
@@ -562,7 +562,7 @@ static irqreturn_t pm860x_vchg_handler(int irq, void *data)
 set_vchg_threshold(info, VCHG_OVP_LOW, 0);
 info->allowed = 0;
 dev_dbg(info->dev,
-"%s,pm8607 over-vchg occure,vchg = %dmv\n",
+"%s,pm8607 over-vchg occurred,vchg = %dmv\n",
 __func__, vchg);
 } else if (vchg < VCHG_OVP_LOW) {
 set_vchg_threshold(info, VCHG_NORMAL_LOW,

@@ -386,7 +386,7 @@ static int pm2_int_reg2(void *pm2_data, int val)
 if (val & (PM2XXX_INT3_ITCHPRECHARGEWD |
 PM2XXX_INT3_ITCHCCWD | PM2XXX_INT3_ITCHCVWD)) {
 dev_dbg(pm2->dev,
-"Watchdog occured for precharge, CC and CV charge\n");
+"Watchdog occurred for precharge, CC and CV charge\n");
 }
 return ret;

@@ -206,7 +206,7 @@ bfad_im_abort_handler(struct scsi_cmnd *cmnd)
 spin_lock_irqsave(&bfad->bfad_lock, flags);
 hal_io = (struct bfa_ioim_s *) cmnd->host_scribble;
 if (!hal_io) {
-/* IO has been completed, retrun success */
+/* IO has been completed, return success */
 rc = SUCCESS;
 goto out;
 }

@@ -658,11 +658,11 @@ static inline u32 cxgbi_tag_nonrsvd_bits(struct cxgbi_tag_format *tformat,
 static inline void *cxgbi_alloc_big_mem(unsigned int size,
 gfp_t gfp)
 {
-void *p = kmalloc(size, gfp);
+void *p = kzalloc(size, gfp | __GFP_NOWARN);
 if (!p)
-p = vmalloc(size);
-if (p)
-memset(p, 0, size);
+p = vzalloc(size);
 return p;
 }

@@ -1054,7 +1054,7 @@ free_and_out:
 }
 /*
-* Lookup bus/target/lun and retrun corresponding struct hpsa_scsi_dev_t *
+* Lookup bus/target/lun and return corresponding struct hpsa_scsi_dev_t *
 * Assume's h->devlock is held.
 */
 static struct hpsa_scsi_dev_t *lookup_hpsa_scsi_dev(struct ctlr_info *h,

@@ -816,7 +816,7 @@ lpfc_issue_reset(struct device *dev, struct device_attribute *attr,
 * the readyness after performing a firmware reset.
 *
 * Returns:
-* zero for success, -EPERM when port does not have privilage to perform the
+* zero for success, -EPERM when port does not have privilege to perform the
 * reset, -EIO when port timeout from recovering from the reset.
 *
 * Note:
@@ -833,7 +833,7 @@ lpfc_sli4_pdev_status_reg_wait(struct lpfc_hba *phba)
 lpfc_readl(phba->sli4_hba.u.if_type2.STATUSregaddr,
 &portstat_reg.word0);
-/* verify if privilaged for the request operation */
+/* verify if privileged for the request operation */
 if (!bf_get(lpfc_sliport_status_rn, &portstat_reg) &&
 !bf_get(lpfc_sliport_status_err, &portstat_reg))
 return -EPERM;
@@ -925,9 +925,9 @@ lpfc_sli4_pdev_reg_request(struct lpfc_hba *phba, uint32_t opcode)
 rc = lpfc_sli4_pdev_status_reg_wait(phba);
 if (rc == -EPERM) {
-/* no privilage for reset */
+/* no privilege for reset */
 lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
-"3150 No privilage to perform the requested "
+"3150 No privilege to perform the requested "
 "access: x%x\n", reg_val);
 } else if (rc == -EIO) {
 /* reset failed, there is nothing more we can do */

View File

@@ -2628,7 +2628,7 @@ err_get_xri_exit:
  * @phba: Pointer to HBA context object
  *
  * This function allocates BSG_MBOX_SIZE (4KB) page size dma buffer and.
- * retruns the pointer to the buffer.
+ * returns the pointer to the buffer.
  **/
 static struct lpfc_dmabuf *
 lpfc_bsg_dma_page_alloc(struct lpfc_hba *phba)

View File

@@ -549,7 +549,7 @@ out_probe_one:
 /**
  * megaraid_detach_one - release framework resources and call LLD release routine
- * @pdev : handle for our PCI cofiguration space
+ * @pdev : handle for our PCI configuration space
  *
  * This routine is called during driver unload. We free all the allocated
  * resources and call the corresponding LLD so that it can also release all
@@ -979,7 +979,7 @@ megaraid_fini_mbox(adapter_t *adapter)
  * @adapter : soft state of the raid controller
  *
  * Allocate and align the shared mailbox. This maibox is used to issue
- * all the commands. For IO based controllers, the mailbox is also regsitered
+ * all the commands. For IO based controllers, the mailbox is also registered
  * with the FW. Allocate memory for all commands as well.
  * This is our big allocator.
  */
@@ -2027,7 +2027,7 @@ megaraid_mbox_prepare_pthru(adapter_t *adapter, scb_t *scb,
  * @scb : scsi control block
  * @scp : scsi command from the mid-layer
  *
- * Prepare a command for the scsi physical devices. This rountine prepares
+ * Prepare a command for the scsi physical devices. This routine prepares
  * commands for devices which can take extended CDBs (>10 bytes).
  */
 static void
@@ -2586,7 +2586,7 @@ megaraid_abort_handler(struct scsi_cmnd *scp)
 }
 /**
- * megaraid_reset_handler - device reset hadler for mailbox based driver
+ * megaraid_reset_handler - device reset handler for mailbox based driver
  * @scp : reference command
  *
  * Reset handler for the mailbox based controller. First try to find out if
@@ -3446,7 +3446,7 @@ megaraid_mbox_display_scb(adapter_t *adapter, scb_t *scb)
  * megaraid_mbox_setup_device_map - manage device ids
  * @adapter : Driver's soft state
  *
- * Manange the device ids to have an appropriate mapping between the kernel
+ * Manage the device ids to have an appropriate mapping between the kernel
  * scsi addresses and megaraid scsi and logical drive addresses. We export
  * scsi devices on their actual addresses, whereas the logical drives are
  * exported on a virtual scsi channel.

View File

@@ -896,7 +896,7 @@ hinfo_to_cinfo(mraid_hba_info_t *hinfo, mcontroller_t *cinfo)
 /**
  * mraid_mm_register_adp - Registration routine for low level drivers
- * @lld_adp : Adapter objejct
+ * @lld_adp : Adapter object
  */
 int
 mraid_mm_register_adp(mraid_mmadp_t *lld_adp)

View File

@@ -88,7 +88,7 @@ enum MR_RAID_FLAGS_IO_SUB_TYPE {
 #define MEGASAS_FUSION_IN_RESET 0
 /*
- * Raid Context structure which describes MegaRAID specific IO Paramenters
+ * Raid Context structure which describes MegaRAID specific IO Parameters
  * This resides at offset 0x60 where the SGL normally starts in MPT IO Frames
  */

View File

@@ -1895,7 +1895,7 @@ done:
 	bsg_job->reply->reply_payload_rcv_len = 0;
 	bsg_job->reply->result = (DID_OK) << 16;
 	bsg_job->job_done(bsg_job);
-	/* Always retrun success, vendor rsp carries correct status */
+	/* Always return success, vendor rsp carries correct status */
 	return 0;
 }

View File

@@ -1865,7 +1865,7 @@ qlafx00_fx_disc(scsi_qla_host_t *vha, fc_port_t *fcport, uint16_t fx_type)
 		p_sysid = utsname();
 		if (!p_sysid) {
 			ql_log(ql_log_warn, vha, 0x303c,
-			    "Not able to get the system informtion\n");
+			    "Not able to get the system information\n");
 			goto done_free_sp;
 		}
 		break;

View File

@@ -40,7 +40,7 @@
  * to glue code. These bitbang setup() and cleanup() routines are always
  * used, though maybe they're called from controller-aware code.
  *
- * chipselect() and friends may use use spi_device->controller_data and
+ * chipselect() and friends may use spi_device->controller_data and
  * controller registers as appropriate.
  *
  *

View File

@@ -45,7 +45,7 @@ static int kgdboc_reset_connect(struct input_handler *handler,
 {
 	input_reset_device(dev);
-	/* Retrun an error - we do not want to bind, just to reset */
+	/* Return an error - we do not want to bind, just to reset */
 	return -ENODEV;
 }

View File

@@ -219,7 +219,7 @@ static int fs_path_ensure_buf(struct fs_path *p, int len)
 	len = PAGE_ALIGN(len);
 	if (p->buf == p->inline_buf) {
-		tmp_buf = kmalloc(len, GFP_NOFS);
+		tmp_buf = kmalloc(len, GFP_NOFS | __GFP_NOWARN);
 		if (!tmp_buf) {
 			tmp_buf = vmalloc(len);
 			if (!tmp_buf)

View File

@@ -162,7 +162,7 @@ void *ext4_kvmalloc(size_t size, gfp_t flags)
 {
 	void *ret;
-	ret = kmalloc(size, flags);
+	ret = kmalloc(size, flags | __GFP_NOWARN);
 	if (!ret)
 		ret = __vmalloc(size, flags, PAGE_KERNEL);
 	return ret;
@@ -172,7 +172,7 @@ void *ext4_kvzalloc(size_t size, gfp_t flags)
 {
 	void *ret;
-	ret = kzalloc(size, flags);
+	ret = kzalloc(size, flags | __GFP_NOWARN);
 	if (!ret)
 		ret = __vmalloc(size, flags | __GFP_ZERO, PAGE_KERNEL);
 	return ret;

View File

@@ -1859,7 +1859,7 @@ static int leaf_dealloc(struct gfs2_inode *dip, u32 index, u32 len,
 	memset(&rlist, 0, sizeof(struct gfs2_rgrp_list));
-	ht = kzalloc(size, GFP_NOFS);
+	ht = kzalloc(size, GFP_NOFS | __GFP_NOWARN);
 	if (ht == NULL)
 		ht = vzalloc(size);
 	if (!ht)

View File

@@ -60,7 +60,6 @@ Mellon the rights to redistribute these changes without encumbrance.
 #if defined(__linux__)
 typedef unsigned long long u_quad_t;
-#else
 #endif
 #include <uapi/linux/coda.h>
 #endif

View File

@@ -69,7 +69,7 @@ typedef union ktime ktime_t; /* Kill this */
  * @secs: seconds to set
  * @nsecs: nanoseconds to set
  *
- * Return the ktime_t representation of the value
+ * Return: The ktime_t representation of the value.
  */
 static inline ktime_t ktime_set(const long secs, const unsigned long nsecs)
 {
@@ -151,7 +151,7 @@ static inline ktime_t ktime_set(const long secs, const unsigned long nsecs)
  * @lhs: minuend
  * @rhs: subtrahend
  *
- * Returns the remainder of the subtraction
+ * Return: The remainder of the subtraction.
  */
 static inline ktime_t ktime_sub(const ktime_t lhs, const ktime_t rhs)
 {
@@ -169,7 +169,7 @@ static inline ktime_t ktime_sub(const ktime_t lhs, const ktime_t rhs)
  * @add1: addend1
  * @add2: addend2
  *
- * Returns the sum of @add1 and @add2.
+ * Return: The sum of @add1 and @add2.
  */
 static inline ktime_t ktime_add(const ktime_t add1, const ktime_t add2)
 {
@@ -195,7 +195,7 @@ static inline ktime_t ktime_add(const ktime_t add1, const ktime_t add2)
  * @kt: addend
  * @nsec: the scalar nsec value to add
  *
- * Returns the sum of @kt and @nsec in ktime_t format
+ * Return: The sum of @kt and @nsec in ktime_t format.
  */
 extern ktime_t ktime_add_ns(const ktime_t kt, u64 nsec);
@@ -204,7 +204,7 @@ extern ktime_t ktime_add_ns(const ktime_t kt, u64 nsec);
  * @kt: minuend
  * @nsec: the scalar nsec value to subtract
  *
- * Returns the subtraction of @nsec from @kt in ktime_t format
+ * Return: The subtraction of @nsec from @kt in ktime_t format.
  */
 extern ktime_t ktime_sub_ns(const ktime_t kt, u64 nsec);
@@ -212,7 +212,7 @@ extern ktime_t ktime_sub_ns(const ktime_t kt, u64 nsec);
  * timespec_to_ktime - convert a timespec to ktime_t format
  * @ts: the timespec variable to convert
  *
- * Returns a ktime_t variable with the converted timespec value
+ * Return: A ktime_t variable with the converted timespec value.
  */
 static inline ktime_t timespec_to_ktime(const struct timespec ts)
 {
@@ -224,7 +224,7 @@ static inline ktime_t timespec_to_ktime(const struct timespec ts)
  * timeval_to_ktime - convert a timeval to ktime_t format
  * @tv: the timeval variable to convert
  *
- * Returns a ktime_t variable with the converted timeval value
+ * Return: A ktime_t variable with the converted timeval value.
  */
 static inline ktime_t timeval_to_ktime(const struct timeval tv)
 {
@@ -237,7 +237,7 @@ static inline ktime_t timeval_to_ktime(const struct timeval tv)
  * ktime_to_timespec - convert a ktime_t variable to timespec format
  * @kt: the ktime_t variable to convert
  *
- * Returns the timespec representation of the ktime value
+ * Return: The timespec representation of the ktime value.
  */
 static inline struct timespec ktime_to_timespec(const ktime_t kt)
 {
@@ -249,7 +249,7 @@ static inline struct timespec ktime_to_timespec(const ktime_t kt)
  * ktime_to_timeval - convert a ktime_t variable to timeval format
  * @kt: the ktime_t variable to convert
  *
- * Returns the timeval representation of the ktime value
+ * Return: The timeval representation of the ktime value.
  */
 static inline struct timeval ktime_to_timeval(const ktime_t kt)
 {
@@ -262,7 +262,7 @@ static inline struct timeval ktime_to_timeval(const ktime_t kt)
  * ktime_to_ns - convert a ktime_t variable to scalar nanoseconds
  * @kt: the ktime_t variable to convert
  *
- * Returns the scalar nanoseconds representation of @kt
+ * Return: The scalar nanoseconds representation of @kt.
  */
 static inline s64 ktime_to_ns(const ktime_t kt)
 {
@@ -276,7 +276,9 @@ static inline s64 ktime_to_ns(const ktime_t kt)
  * @cmp1: comparable1
  * @cmp2: comparable2
  *
- * Compare two ktime_t variables, returns 1 if equal
+ * Compare two ktime_t variables.
+ *
+ * Return: 1 if equal.
  */
 static inline int ktime_equal(const ktime_t cmp1, const ktime_t cmp2)
 {
@@ -288,7 +290,7 @@ static inline int ktime_equal(const ktime_t cmp1, const ktime_t cmp2)
  * @cmp1: comparable1
  * @cmp2: comparable2
  *
- * Returns ...
+ * Return: ...
  *   cmp1  < cmp2: return <0
  *   cmp1 == cmp2: return 0
  *   cmp1  > cmp2: return >0
@@ -342,7 +344,7 @@ extern ktime_t ktime_add_safe(const ktime_t lhs, const ktime_t rhs);
  * @kt: the ktime_t variable to convert
  * @ts: the timespec variable to store the result in
  *
- * Returns true if there was a successful conversion, false if kt was 0.
+ * Return: %true if there was a successful conversion, %false if kt was 0.
  */
 static inline __must_check bool ktime_to_timespec_cond(const ktime_t kt,
 						       struct timespec *ts)

View File

@@ -541,6 +541,8 @@ static int worker_pool_assign_id(struct worker_pool *pool)
  * This must be called either with pwq_lock held or sched RCU read locked.
  * If the pwq needs to be used beyond the locking in effect, the caller is
  * responsible for guaranteeing that the pwq stays online.
+ *
+ * Return: The unbound pool_workqueue for @node.
  */
 static struct pool_workqueue *unbound_pwq_by_node(struct workqueue_struct *wq,
 						  int node)
@@ -639,8 +641,6 @@ static struct pool_workqueue *get_work_pwq(struct work_struct *work)
  * get_work_pool - return the worker_pool a given work was associated with
  * @work: the work item of interest
  *
- * Return the worker_pool @work was last associated with. %NULL if none.
- *
  * Pools are created and destroyed under wq_pool_mutex, and allows read
  * access under sched-RCU read lock. As such, this function should be
  * called under wq_pool_mutex or with preemption disabled.
@@ -649,6 +649,8 @@ static struct pool_workqueue *get_work_pwq(struct work_struct *work)
  * mentioned locking is in effect. If the returned pool needs to be used
  * beyond the critical section, the caller is responsible for ensuring the
  * returned pool is and stays online.
+ *
+ * Return: The worker_pool @work was last associated with. %NULL if none.
  */
 static struct worker_pool *get_work_pool(struct work_struct *work)
 {
@@ -672,7 +674,7 @@ static struct worker_pool *get_work_pool(struct work_struct *work)
  * get_work_pool_id - return the worker pool ID a given work is associated with
  * @work: the work item of interest
  *
- * Return the worker_pool ID @work was last associated with.
+ * Return: The worker_pool ID @work was last associated with.
  * %WORK_OFFQ_POOL_NONE if none.
  */
 static int get_work_pool_id(struct work_struct *work)
@@ -831,7 +833,7 @@ void wq_worker_waking_up(struct task_struct *task, int cpu)
  * CONTEXT:
  * spin_lock_irq(rq->lock)
  *
- * RETURNS:
+ * Return:
  * Worker task on @cpu to wake up, %NULL if none.
  */
 struct task_struct *wq_worker_sleeping(struct task_struct *task, int cpu)
@@ -966,8 +968,8 @@ static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
  * CONTEXT:
  * spin_lock_irq(pool->lock).
  *
- * RETURNS:
- * Pointer to worker which is executing @work if found, NULL
+ * Return:
+ * Pointer to worker which is executing @work if found, %NULL
  * otherwise.
  */
 static struct worker *find_worker_executing_work(struct worker_pool *pool,
@@ -1155,14 +1157,16 @@ out_put:
  * @flags: place to store irq state
  *
  * Try to grab PENDING bit of @work. This function can handle @work in any
- * stable state - idle, on timer or on worklist. Return values are
+ * stable state - idle, on timer or on worklist.
  *
+ * Return:
  *  1 if @work was pending and we successfully stole PENDING
  *  0 if @work was idle and we claimed PENDING
  *  -EAGAIN if PENDING couldn't be grabbed at the moment, safe to busy-retry
  *  -ENOENT if someone else is canceling @work, this state may persist
  *  for arbitrarily long
  *
+ * Note:
  * On >= 0 return, the caller owns @work's PENDING bit. To avoid getting
  * interrupted while holding PENDING and @work off queue, irq must be
  * disabled on entry. This, combined with delayed_work->timer being
@@ -1404,10 +1408,10 @@ retry:
  * @wq: workqueue to use
  * @work: work to queue
  *
- * Returns %false if @work was already on a queue, %true otherwise.
- *
  * We queue the work to a specific CPU, the caller must ensure it
  * can't go away.
+ *
+ * Return: %false if @work was already on a queue, %true otherwise.
  */
 bool queue_work_on(int cpu, struct workqueue_struct *wq,
 		   struct work_struct *work)
@@ -1477,7 +1481,7 @@ static void __queue_delayed_work(int cpu, struct workqueue_struct *wq,
  * @dwork: work to queue
  * @delay: number of jiffies to wait before queueing
  *
- * Returns %false if @work was already on a queue, %true otherwise. If
+ * Return: %false if @work was already on a queue, %true otherwise. If
  * @delay is zero and @dwork is idle, it will be scheduled for immediate
  * execution.
  */
@@ -1513,7 +1517,7 @@ EXPORT_SYMBOL(queue_delayed_work_on);
  * zero, @work is guaranteed to be scheduled immediately regardless of its
  * current state.
  *
- * Returns %false if @dwork was idle and queued, %true if @dwork was
+ * Return: %false if @dwork was idle and queued, %true if @dwork was
  * pending and its timer was modified.
  *
  * This function is safe to call from any context including IRQ handler.
@@ -1628,7 +1632,7 @@ static void worker_leave_idle(struct worker *worker)
  * Might sleep. Called without any lock but returns with pool->lock
  * held.
  *
- * RETURNS:
+ * Return:
  * %true if the associated pool is online (@worker is successfully
  * bound), %false if offline.
  */
@@ -1689,7 +1693,7 @@ static struct worker *alloc_worker(void)
  * CONTEXT:
  * Might sleep. Does GFP_KERNEL allocations.
  *
- * RETURNS:
+ * Return:
  * Pointer to the newly created worker.
  */
 static struct worker *create_worker(struct worker_pool *pool)
@@ -1789,6 +1793,8 @@ static void start_worker(struct worker *worker)
  * @pool: the target pool
  *
  * Grab the managership of @pool and create and start a new worker for it.
+ *
+ * Return: 0 on success. A negative error code otherwise.
  */
 static int create_and_start_worker(struct worker_pool *pool)
 {
@@ -1933,7 +1939,7 @@ static void pool_mayday_timeout(unsigned long __pool)
  * multiple times. Does GFP_KERNEL allocations. Called only from
  * manager.
  *
- * RETURNS:
+ * Return:
  * %false if no action was taken and pool->lock stayed locked, %true
  * otherwise.
  */
@@ -1990,7 +1996,7 @@ restart:
  * spin_lock_irq(pool->lock) which may be released and regrabbed
  * multiple times. Called only from manager.
  *
- * RETURNS:
+ * Return:
  * %false if no action was taken and pool->lock stayed locked, %true
  * otherwise.
  */
@@ -2033,7 +2039,7 @@ static bool maybe_destroy_workers(struct worker_pool *pool)
  * spin_lock_irq(pool->lock) which may be released and regrabbed
  * multiple times. Does GFP_KERNEL allocations.
  *
- * RETURNS:
+ * Return:
  * %false if the pool don't need management and the caller can safely start
  * processing works, %true indicates that the function released pool->lock
  * and reacquired it to perform some management function and that the
@@ -2259,6 +2265,8 @@ static void process_scheduled_works(struct worker *worker)
  * work items regardless of their specific target workqueue. The only
  * exception is work items which belong to workqueues with a rescuer which
  * will be explained in rescuer_thread().
+ *
+ * Return: 0
  */
 static int worker_thread(void *__worker)
 {
@@ -2357,6 +2365,8 @@ sleep:
  * those works so that forward progress can be guaranteed.
  *
  * This should happen rarely.
+ *
+ * Return: 0
  */
 static int rescuer_thread(void *__rescuer)
 {
@@ -2529,7 +2539,7 @@ static void insert_wq_barrier(struct pool_workqueue *pwq,
  * CONTEXT:
  * mutex_lock(wq->mutex).
  *
- * RETURNS:
+ * Return:
  * %true if @flush_color >= 0 and there's something to flush. %false
  * otherwise.
  */
@@ -2850,7 +2860,7 @@ static bool __flush_work(struct work_struct *work)
  * Wait until @work has finished execution. @work is guaranteed to be idle
  * on return if it hasn't been requeued since flush started.
  *
- * RETURNS:
+ * Return:
  * %true if flush_work() waited for the work to finish execution,
  * %false if it was already idle.
  */
@@ -2902,7 +2912,7 @@ static bool __cancel_work_timer(struct work_struct *work, bool is_dwork)
  * The caller must ensure that the workqueue on which @work was last
  * queued can't be destroyed before this function returns.
  *
- * RETURNS:
+ * Return:
  * %true if @work was pending, %false otherwise.
  */
 bool cancel_work_sync(struct work_struct *work)
@@ -2919,7 +2929,7 @@ EXPORT_SYMBOL_GPL(cancel_work_sync);
  * immediate execution. Like flush_work(), this function only
  * considers the last queueing instance of @dwork.
  *
- * RETURNS:
+ * Return:
  * %true if flush_work() waited for the work to finish execution,
  * %false if it was already idle.
  */
@@ -2937,11 +2947,15 @@ EXPORT_SYMBOL(flush_delayed_work);
  * cancel_delayed_work - cancel a delayed work
  * @dwork: delayed_work to cancel
  *
- * Kill off a pending delayed_work. Returns %true if @dwork was pending
- * and canceled; %false if wasn't pending. Note that the work callback
- * function may still be running on return, unless it returns %true and the
- * work doesn't re-arm itself. Explicitly flush or use
- * cancel_delayed_work_sync() to wait on it.
+ * Kill off a pending delayed_work.
+ *
+ * Return: %true if @dwork was pending and canceled; %false if it wasn't
+ * pending.
+ *
+ * Note:
+ * The work callback function may still be running on return, unless
+ * it returns %true and the work doesn't re-arm itself. Explicitly flush or
+ * use cancel_delayed_work_sync() to wait on it.
  *
  * This function is safe to call from any context including IRQ handler.
  */
@@ -2970,7 +2984,7 @@ EXPORT_SYMBOL(cancel_delayed_work);
  *
  * This is cancel_work_sync() for delayed works.
  *
- * RETURNS:
+ * Return:
  * %true if @dwork was pending, %false otherwise.
  */
 bool cancel_delayed_work_sync(struct delayed_work *dwork)
@@ -2987,7 +3001,7 @@ EXPORT_SYMBOL(cancel_delayed_work_sync);
  * system workqueue and blocks until all CPUs have completed.
  * schedule_on_each_cpu() is very slow.
  *
- * RETURNS:
+ * Return:
  * 0 on success, -errno on failure.
  */
 int schedule_on_each_cpu(work_func_t func)
@@ -3055,7 +3069,7 @@ EXPORT_SYMBOL(flush_scheduled_work);
  * Executes the function immediately if process context is available,
  * otherwise schedules the function for delayed execution.
  *
- * Returns: 0 - function was executed
+ * Return: 0 - function was executed
  * 1 - function was scheduled for execution
  */
 int execute_in_process_context(work_func_t fn, struct execute_work *ew)
@@ -3315,7 +3329,7 @@ static void wq_device_release(struct device *dev)
  * apply_workqueue_attrs() may race against userland updating the
  * attributes.
  *
- * Returns 0 on success, -errno on failure.
+ * Return: 0 on success, -errno on failure.
  */
 int workqueue_sysfs_register(struct workqueue_struct *wq)
 {
@@ -3408,7 +3422,9 @@ void free_workqueue_attrs(struct workqueue_attrs *attrs)
  * @gfp_mask: allocation mask to use
  *
  * Allocate a new workqueue_attrs, initialize with default settings and
- * return it. Returns NULL on failure.
+ * return it.
+ *
+ * Return: The allocated new workqueue_attr on success. %NULL on failure.
  */
 struct workqueue_attrs *alloc_workqueue_attrs(gfp_t gfp_mask)
 {
@@ -3467,7 +3483,8 @@ static bool wqattrs_equal(const struct workqueue_attrs *a,
  * @pool: worker_pool to initialize
  *
  * Initiailize a newly zalloc'd @pool. It also allocates @pool->attrs.
- * Returns 0 on success, -errno on failure. Even on failure, all fields
+ *
+ * Return: 0 on success, -errno on failure. Even on failure, all fields
  * inside @pool proper are initialized and put_unbound_pool() can be called
  * on @pool safely to release it.
  */
@@ -3574,9 +3591,12 @@ static void put_unbound_pool(struct worker_pool *pool)
  * Obtain a worker_pool which has the same attributes as @attrs, bump the
  * reference count and return it. If there already is a matching
  * worker_pool, it will be used; otherwise, this function attempts to
- * create a new one. On failure, returns NULL.
+ * create a new one.
  *
  * Should be called with wq_pool_mutex held.
+ *
+ * Return: On success, a worker_pool with the same attributes as @attrs.
+ * On failure, %NULL.
  */
 static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
 {
@@ -3812,9 +3832,7 @@ static void free_unbound_pwq(struct pool_workqueue *pwq)
  *
  * Calculate the cpumask a workqueue with @attrs should use on @node. If
  * @cpu_going_down is >= 0, that cpu is considered offline during
- * calculation. The result is stored in @cpumask. This function returns
- * %true if the resulting @cpumask is different from @attrs->cpumask,
- * %false if equal.
+ * calculation. The result is stored in @cpumask.
  *
  * If NUMA affinity is not enabled, @attrs->cpumask is always used. If
  * enabled and @node has online CPUs requested by @attrs, the returned
@@ -3823,6 +3841,9 @@ static void free_unbound_pwq(struct pool_workqueue *pwq)
  *
  * The caller is responsible for ensuring that the cpumask of @node stays
  * stable.
+ *
+ * Return: %true if the resulting @cpumask is different from @attrs->cpumask,
+ * %false if equal.
  */
 static bool wq_calc_node_cpumask(const struct workqueue_attrs *attrs, int node,
 				 int cpu_going_down, cpumask_t *cpumask)
@@ -3876,8 +3897,9 @@ static struct pool_workqueue *numa_pwq_tbl_install(struct workqueue_struct *wq,
  * items finish. Note that a work item which repeatedly requeues itself
  * back-to-back will stay on its current pwq.
  *
- * Performs GFP_KERNEL allocations. Returns 0 on success and -errno on
- * failure.
+ * Performs GFP_KERNEL allocations.
+ *
+ * Return: 0 on success and -errno on failure.
  */
 int apply_workqueue_attrs(struct workqueue_struct *wq,
 			  const struct workqueue_attrs *attrs)
@@ -4345,6 +4367,8 @@ EXPORT_SYMBOL_GPL(workqueue_set_max_active);
  *
  * Determine whether %current is a workqueue rescuer. Can be used from
* work functions to determine whether it's being run off the rescuer task. * work functions to determine whether it's being run off the rescuer task.
*
* Return: %true if %current is a workqueue rescuer. %false otherwise.
*/ */
bool current_is_workqueue_rescuer(void) bool current_is_workqueue_rescuer(void)
{ {
@ -4368,7 +4392,7 @@ bool current_is_workqueue_rescuer(void)
* workqueue being congested on one CPU doesn't mean the workqueue is also * workqueue being congested on one CPU doesn't mean the workqueue is also
* contested on other CPUs / NUMA nodes. * contested on other CPUs / NUMA nodes.
* *
* RETURNS: * Return:
* %true if congested, %false otherwise. * %true if congested, %false otherwise.
*/ */
bool workqueue_congested(int cpu, struct workqueue_struct *wq) bool workqueue_congested(int cpu, struct workqueue_struct *wq)
@ -4401,7 +4425,7 @@ EXPORT_SYMBOL_GPL(workqueue_congested);
* synchronization around this function and the test result is * synchronization around this function and the test result is
* unreliable and only useful as advisory hints or for debugging. * unreliable and only useful as advisory hints or for debugging.
* *
* RETURNS: * Return:
* OR'd bitmask of WORK_BUSY_* bits. * OR'd bitmask of WORK_BUSY_* bits.
*/ */
unsigned int work_busy(struct work_struct *work) unsigned int work_busy(struct work_struct *work)
@ -4779,9 +4803,10 @@ static void work_for_cpu_fn(struct work_struct *work)
* @fn: the function to run * @fn: the function to run
* @arg: the function arg * @arg: the function arg
* *
* This will return the value @fn returns.
* It is up to the caller to ensure that the cpu doesn't go offline. * It is up to the caller to ensure that the cpu doesn't go offline.
* The caller must not hold any locks which would prevent @fn from completing. * The caller must not hold any locks which would prevent @fn from completing.
*
* Return: The value @fn returns.
*/ */
long work_on_cpu(int cpu, long (*fn)(void *), void *arg) long work_on_cpu(int cpu, long (*fn)(void *), void *arg)
{ {
@ -4853,7 +4878,7 @@ void freeze_workqueues_begin(void)
* CONTEXT: * CONTEXT:
* Grabs and releases wq_pool_mutex. * Grabs and releases wq_pool_mutex.
* *
* RETURNS: * Return:
* %true if some freezable workqueues are still busy. %false if freezing * %true if some freezable workqueues are still busy. %false if freezing
* is complete. * is complete.
*/ */
