In some cases it may be necessary to perform certain setup/cleanup
operations on a device object representing a physical device after
it has been associated with an ACPI companion by acpi_bind_one() or
before disassociating it from that companion by acpi_unbind_one(),
respectively. If there is a struct acpi_bus_type object for the
given device's bus type, the .setup()/.cleanup() callbacks from there
are executed for these purposes. However, an analogous mechanism will
be necessary for devices whose bus types don't have corresponding
struct acpi_bus_type objects and that have specific ACPI scan handlers.
For those devices, add new .bind() and .unbind() callbacks to struct
acpi_scan_handler that will be executed by acpi_platform_notify()
right after the given device has been associated with an ACPI
companion and by acpi_platform_notify_remove() right before calling
acpi_unbind_one() for that device, respectively.
To make that work for scan handlers registering new devices in their
.attach() callbacks, modify acpi_scan_attach_handler() to set the
ACPI device object's handler field before calling .attach() from the
scan handler at hand.
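As a rough illustration only (the my_* names are hypothetical and the
prototypes are inferred from the description above, not copied from the
header), a scan handler using the new callbacks could look like this:
	static void my_handler_bind(struct device *phys_dev)
	{
		/* setup to run right after the ACPI companion has been set */
	}
	static void my_handler_unbind(struct device *phys_dev)
	{
		/* cleanup to run right before acpi_unbind_one() */
	}
	static struct acpi_scan_handler my_scan_handler = {
		.ids	= my_device_ids,
		.attach	= my_handler_attach,
		.detach	= my_handler_detach,
		.bind	= my_handler_bind,
		.unbind	= my_handler_unbind,
	};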
This changeset includes a fix from Mika Westerberg.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Add a new latency tolerance device PM QoS type to be used for
specifying active state (RPM_ACTIVE) memory access (DMA) latency
tolerance requirements for devices. It may be used to prevent
hardware from choosing overly aggressive energy-saving operation
modes (causing too much latency to appear) for the whole platform.
This feature requires hardware support, so it will only be
available for devices having a new .set_latency_tolerance()
callback in struct dev_pm_info populated, in which case the
routine pointed to by it should implement whatever is necessary
to transfer the effective requirement value to the hardware.
Whenever the effective latency tolerance changes for the device,
its .set_latency_tolerance() callback will be executed and the
effective value will be passed to it. If that value is negative,
which means that the list of latency tolerance requirements for
the device is empty, the callback is expected to switch the
underlying hardware latency tolerance control mechanism to an
autonomous mode if available. If that value is PM_QOS_LATENCY_ANY,
in turn, and the hardware supports a special "no requirement"
setting, the callback is expected to use it. That allows software
to prevent the hardware from automatically updating the device's
latency tolerance in response to its power state changes (e.g. during
transitions from D3cold to D0), which generally may be done in the
autonomous latency tolerance control mode.
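For illustration, a driver could populate the callback along these lines
(a sketch following the rules described above; the my_hw_* helpers and the
MY_HW_LTR_MAX_US constant are hypothetical):
	static void my_set_latency_tolerance(struct device *dev, s32 value)
	{
		if (value < 0) {
			/* empty request list: let the hardware manage latency
			 * tolerance autonomously, if it can */
			my_hw_enable_auto_ltr(dev);
		} else if (value == PM_QOS_LATENCY_ANY) {
			/* "no requirement", but keep autonomous updates off */
			my_hw_disable_auto_ltr(dev);
			my_hw_set_ltr_us(dev, MY_HW_LTR_MAX_US);
		} else {
			my_hw_disable_auto_ltr(dev);
			my_hw_set_ltr_us(dev, value);	/* microseconds */
		}
	}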
If .set_latency_tolerance() is present for the device, a new
pm_qos_latency_tolerance_us attribute will be present in the
device's power directory in sysfs. Then, user space can use
that attribute to specify its latency tolerance requirement for
the device, if any. Writing "any" to it means "no requirement, but
do not let the hardware control latency tolerance" and writing
"auto" to it allows the hardware to be switched to the autonomous
mode if there are no other requirements from the kernel side in the
device's list.
This changeset includes a fix from Mika Westerberg.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Add a new field, no_constraints_value, to struct pm_qos_constraints
representing a list of PM QoS constraint requests to be returned by
pm_qos_get_value() when that list of requests is empty.
That field will be equal to default_value for all of the existing
global PM QoS classes and for the resume latency device PM QoS type,
but it will be different from default_value for the new latency
tolerance device PM QoS type introduced by the next changeset.
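Conceptually, the evaluation of a constraint list then becomes something like
the sketch below (simplified, not the literal kernel/power/qos.c code):
	static s32 pm_qos_get_value(struct pm_qos_constraints *c)
	{
		if (plist_head_empty(&c->list))
			return c->no_constraints_value;	/* previously: c->default_value */

		switch (c->type) {
		case PM_QOS_MIN:
			return plist_first(&c->list)->prio;
		case PM_QOS_MAX:
			return plist_last(&c->list)->prio;
		default:
			return c->no_constraints_value;
		}
	}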
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Rename symbols, variables, functions and structure fields related to
the resume latency device PM QoS type so that it is clear where they
belong (in particular, to avoid confusion with the latency tolerance
device PM QoS type introduced by a subsequent changeset).
Update the PM QoS documentation to better reflect its current state.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Pull SELinux fixes from James Morris.
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
SELinux: Fix kernel BUG on empty security contexts.
selinux: add SOCK_DIAG_BY_FAMILY to the list of netlink message types
Pull vfs fixes from Al Viro:
"A couple of fixes, both -stable fodder. The O_SYNC bug is fairly
old..."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
fix a kmap leak in virtio_console
fix O_SYNC|O_APPEND syncing the wrong range on write()
While we are at it, don't do kmap() under kmap_atomic(), *especially*
for a page we'd allocated with GFP_KERNEL. It's spelled "page_address",
and had that been more than that, we'd have real trouble - kmap_high()
can block, and doing that while holding kmap_atomic() is a Bad Idea(tm).
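That is, for a lowmem page allocated with GFP_KERNEL the mapping already
exists, so a sketch of the intended pattern is simply (variable names
illustrative):
	struct page *page = alloc_page(GFP_KERNEL);
	char *buf;

	if (!page)
		return -ENOMEM;
	buf = page_address(page);	/* lowmem page: no kmap()/kunmap() needed */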
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
It actually goes back to 2004 ([PATCH] Concurrent O_SYNC write support)
when sync_page_range() had been introduced; generic_file_write{,v}() correctly
synced
pos_after_write - written .. pos_after_write - 1
but generic_file_aio_write() synced
pos_before_write .. pos_before_write + written - 1
instead. Which is not the same thing with O_APPEND, obviously.
A couple of years later the correct variant was killed off when
everything switched to use of generic_file_aio_write().
All users of generic_file_aio_write() are affected, and the same bug
has been copied into other instances of ->aio_write().
The fix is trivial; the only subtle point is that generic_write_sync()
ought to be inlined to avoid calculations useless for the majority of
calls.
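The shape of the fix is roughly the following (a sketch, not the exact diff):
sync the range derived from the position *after* the write, and keep the
common no-op case cheap by doing the flag checks in an inline helper:
	static inline int generic_write_sync(struct file *file, loff_t pos, loff_t count)
	{
		if (!(file->f_flags & O_DSYNC) && !IS_SYNC(file->f_mapping->host))
			return 0;
		return vfs_fsync_range(file, pos, pos + count - 1,
				       (file->f_flags & __O_SYNC) ? 0 : 1);
	}

	/* caller side, after a successful write of 'written' bytes */
	err = generic_write_sync(file, iocb->ki_pos - written, written);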
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Pull btrfs fixes from Chris Mason:
"This is a small collection of fixes"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs:
Btrfs: fix data corruption when reading/updating compressed extents
Btrfs: don't loop forever if we can't run because of the tree mod log
btrfs: reserve no transaction units in btrfs_ioctl_set_features
btrfs: commit transaction after setting label and features
Btrfs: fix assert screwup for the pending move stuff
Pull perf fixes from Ingo Molnar:
"Tooling fixes, mostly related to the KASLR fallout, but also other
fixes"
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf buildid-cache: Check relocation when checking for existing kcore
perf tools: Adjust kallsyms for relocated kernel
perf tests: No need to set up ref_reloc_sym
perf symbols: Prevent the use of kcore if the kernel has moved
perf record: Get ref_reloc_sym from kernel map
perf machine: Set up ref_reloc_sym in machine__create_kernel_maps()
perf machine: Add machine__get_kallsyms_filename()
perf tools: Add kallsyms__get_function_start()
perf symbols: Fix symbol annotation for relocated kernel
perf tools: Fix include for non x86 architectures
perf tools: Fix AAAAARGH64 memory barriers
perf tools: Demangle kernel and kernel module symbols too
perf/doc: Remove mention of non-existent set_perf_event_pending() from design.txt
When using a mix of compressed file extents and prealloc extents, it
is possible to fill a page of a file with random, garbage data from
some unrelated previous use of the page, instead of a sequence of zeroes.
A simple sequence of steps to get into such case, taken from the test
case I made for xfstests, is:
_scratch_mkfs
_scratch_mount "-o compress-force=lzo"
$XFS_IO_PROG -f -c "pwrite -S 0x06 -b 18670 266978 18670" $SCRATCH_MNT/foobar
$XFS_IO_PROG -c "falloc 26450 665194" $SCRATCH_MNT/foobar
$XFS_IO_PROG -c "truncate 542872" $SCRATCH_MNT/foobar
$XFS_IO_PROG -c "fsync" $SCRATCH_MNT/foobar
This results in the following file items in the fs tree:
item 4 key (257 INODE_ITEM 0) itemoff 15879 itemsize 160
inode generation 6 transid 6 size 542872 block group 0 mode 100600
item 5 key (257 INODE_REF 256) itemoff 15863 itemsize 16
inode ref index 2 namelen 6 name: foobar
item 6 key (257 EXTENT_DATA 0) itemoff 15810 itemsize 53
extent data disk byte 0 nr 0 gen 6
extent data offset 0 nr 24576 ram 266240
extent compression 0
item 7 key (257 EXTENT_DATA 24576) itemoff 15757 itemsize 53
prealloc data disk byte 12849152 nr 241664 gen 6
prealloc data offset 0 nr 241664
item 8 key (257 EXTENT_DATA 266240) itemoff 15704 itemsize 53
extent data disk byte 12845056 nr 4096 gen 6
extent data offset 0 nr 20480 ram 20480
extent compression 2
item 9 key (257 EXTENT_DATA 286720) itemoff 15651 itemsize 53
prealloc data disk byte 13090816 nr 405504 gen 6
prealloc data offset 0 nr 258048
The on disk extent at offset 266240 (which corresponds to 1 single disk block),
contains 5 compressed chunks of file data. Each of the first 4 compress 4096
bytes of file data, while the last one only compresses 3024 bytes of file data.
Therefore a read into the file region [285648 ; 286720[ (length = 4096 - 3024 =
1072 bytes) should always return zeroes (our next extent is a prealloc one).
The solution here is for the compression code path to zero the remaining (untouched)
bytes of the last page it uncompressed data into, as the information about how
much space the file data consumes in the last page is not known in the upper layer
fs/btrfs/extent_io.c:__do_readpage(). In __do_readpage() we were correctly zeroing
the remainder of the page, but only if it corresponds to the last page of the inode
and if the inode's size is not a multiple of the page size.
This would cause not only returning random data on reads, but also permanently
storing random data when updating parts of the region that should be zeroed.
For the example above, it means updating a single byte in the region [285648 ; 286720[
would store that byte correctly but also store random data on disk.
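Schematically, the decompression path now zeroes the tail of the last page it
touched, along these lines (a sketch; the offset/length variable names are
illustrative):
	/* 'bytes' of uncompressed data were copied into 'page' at 'offset' */
	if (offset + bytes < PAGE_CACHE_SIZE) {
		char *kaddr = kmap_atomic(page);

		/* zero the part of the page the compressed extent did not cover */
		memset(kaddr + offset + bytes, 0,
		       PAGE_CACHE_SIZE - offset - bytes);
		kunmap_atomic(kaddr);
	}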
A test case for xfstests follows soon.
Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
Signed-off-by: Chris Mason <clm@fb.com>
A user reported a 100% cpu hang with my new delayed ref code. Turns out I
forgot to increase the count check when we can't run a delayed ref because of
the tree mod log. If we can't run any delayed refs during this there is no
point in continuing to look, and we need to break out. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
The superblock modifications added in the patch "btrfs: add ioctls to
query/change feature bits online" don't need to reserve metadata blocks
when starting a transaction.
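In practice that means the ioctl can start its transaction with zero reserved
units (sketch):
	/* superblock-only update: no metadata items need to be reserved */
	trans = btrfs_start_transaction(root, 0);
	if (IS_ERR(trans))
		return PTR_ERR(trans);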
Signed-off-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Chris Mason <clm@fb.com>
The set_fslabel ioctl uses btrfs_end_transaction, which means it's
possible that the change will be lost if the system crashes, same for
the newly set features. Let's use btrfs_commit_transaction instead.
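That is, the final step of these ioctls becomes (sketch):
	/* make sure the new label/feature bits are on disk before returning */
	ret = btrfs_commit_transaction(trans, root);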
Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Signed-off-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Chris Mason <clm@fb.com>
Wang noticed that he was failing btrfs/030 even though Filipe and I couldn't
reproduce it. Turns out this is because Wang didn't have CONFIG_BTRFS_ASSERT set,
which meant that a key part of Filipe's original patch was not being built in.
This appears to be a mess-up in merging Filipe's patch, as the problem does not
exist in his original patch. Fix this by changing how we make sure del_waiting_dir_move
asserts that it did not error and take the function out of the ifdef check.
This makes btrfs/030 pass with the assert on or off. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Reviewed-by: Filipe Manana <fdmanana@gmail.com>
Signed-off-by: Chris Mason <clm@fb.com>
- Protect pinctrl_list_add() with the proper mutex. This
  was identified by RedHat. It caused nasty locking warnings
  and was root-caused by Stanislaw Gruszka.
- Avoid adding dangerous debugfs files when either half of
the subsystem is unused: pinmux or pinconf.
- Various fixes to various drivers: locking, hardware
particulars, DT parsing, error codes.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIcBAABAgAGBQJS9pQYAAoJEEEQszewGV1zK3QQALzJ5J//W0LOiLG7fhJMJAfI
A2xec10h9T3UV42CvCwaAP6cukjFqeEP/TK6bf14dO7emTnGp/T+JrwgSpiv+9Ht
tMIBDhJ6qhQQJtfzsbgeEiXPu8+OnfSO0uCk3YfvpJTsvyjgqCV6Kqf2s/rmt87c
DRmzYiT73vS6b1m69CKQBSPk5zdHt2lcchTvrjh5Kk/MJ9kkoOH+64RaXt8U3imT
q/VKJHd8e+b4zNCljsxd/gAQ4fBmbCnXWl/rp5Mp0m8X6iAsR24wkn7Z/0L5+ulF
BNl2PPYsTAJvYA2VREMrVjK70x6HqlL2fk8sQWNmwZXQcivN2ab4I6CC7p/yPJFZ
nTu03IY0ryW1367QB2c6TVPFVYUtSeJhjEihoKa9FooXvcs+EP1u51uVcgphqIbL
AnjfLdft4+7s+hPzL2h2tKMBJDmnV4wOc9y9HtbmY7aSbAnncr4WuysnoxznyNDC
Q9e/+P2f54OqzYLmnWQNpmwpFEe/NQS/D79U7vYxpfDIKVkPWs+eTJAb70SWjJM0
WTk+S4/Vxr9xuSqT2TRENp3x7vwwhlP7aXidqOG2A0G3XEqPlAiRalzOS3pTmMhh
j7IK2Lw+dSUx1kK3K/Q9guv4FKvG0JraW8dgryT6jSOEMTAXtYXkV2xblWdzdLWz
iBj5zhnWUlbWxMO9B8qP
=pxHE
-----END PGP SIGNATURE-----
Merge tag 'pinctrl-v3.14-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl
Pull pinctrl fixes from Linus Walleij:
"First round of pin control fixes for v3.14:
- Protect pinctrl_list_add() with the proper mutex. This was
  identified by RedHat. It caused nasty locking warnings and was
  root-caused by Stanislaw Gruszka.
- Avoid adding dangerous debugfs files when either half of the
subsystem is unused: pinmux or pinconf.
- Various fixes to various drivers: locking, hardware particulars, DT
parsing, error codes"
* tag 'pinctrl-v3.14-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl:
pinctrl: tegra: return correct error type
pinctrl: do not init debugfs entries for unimplemented functionalities
pinctrl: protect pinctrl_list add
pinctrl: sirf: correct the pin index of ac97_pins group
pinctrl: imx27: fix offset calculation in imx_read_2bit
pinctrl: vt8500: Change devicetree data parsing
pinctrl: imx27: fix wrong offset to ICONFB
pinctrl: at91: use locked variant of irq_set_handler
Pull x86 fixes from Peter Anvin:
"Quite a varied little collection of fixes. Most of them are
relatively small or isolated; the biggest one is Mel Gorman's fixes
for TLB range flushing.
A couple of AMD-related fixes (including not crashing when given an
invalid microcode image) and a fix for a crash when compiled with gcov"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86, microcode, AMD: Unify valid container checks
x86, hweight: Fix BUG when booting with CONFIG_GCOV_PROFILE_ALL=y
x86/efi: Allow mapping BGRT on x86-32
x86: Fix the initialization of physnode_map
x86, cpu hotplug: Fix stack frame warning in check_irq_vectors_for_cpu_disable()
x86/intel/mid: Fix X86_INTEL_MID dependencies
arch/x86/mm/srat: Skip NUMA_NO_NODE while parsing SLIT
mm, x86: Revisit tlb_flushall_shift tuning for page flushes except on IvyBridge
x86: mm: change tlb_flushall_shift for IvyBridge
x86/mm: Eliminate redundant page table walk during TLB range flushing
x86/mm: Clean up inconsistencies when flushing TLB ranges
mm, x86: Account for TLB flushes only when debugging
x86/AMD/NB: Fix amd_set_subcaches() parameter type
x86/quirks: Add workaround for AMD F16h Erratum792
x86, doc, kconfig: Fix dud URL for Microcode data
I missed a couple of errors in reviewing the patches converting jfs
to use the generic posix ACL function. Setting ACLs currently
fails with -EOPNOTSUPP.
Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com>
Reported-by: Michael L. Semon <mlsemon35@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
On archs like S390 or um this driver can neither build nor work.
Make it depend on HAS_IOMEM to bypass build failures.
drivers/built-in.o: In function `dw_wdt_drv_probe':
drivers/watchdog/dw_wdt.c:302: undefined reference to `devm_ioremap_resource'
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
Here is a single kernfs fix to resolve a much-reported lockdep issue with the
removal of entries in sysfs.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)
iEYEABECAAYFAlL1WqIACgkQMUfUDdst+ykauACeMlmM0Ro0nCjU5e9Dq9qFw0ZJ
u2oAn3qxgNIRKIjZTxDfXmXgFr0sTfTW
=UyO3
-----END PGP SIGNATURE-----
Merge tag 'driver-core-3.14-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core
Pull driver core fix from Greg KH:
"Here is a single kernfs fix to resolve a much-reported lockdep issue
with the removal of entries in sysfs"
* tag 'driver-core-3.14-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core:
kernfs: make kernfs_deactivate() honor KERNFS_LOCKDEP flag
Pull ceph fixes from Sage Weil:
"There is an RBD fix for a crash due to the immutable bio changes, an
error path fix, and a locking fix in the recent redirect support"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client:
libceph: do not dereference a NULL bio pointer
libceph: take map_sem for read in handle_reply()
libceph: factor out logic from ceph_osdc_start_request()
libceph: fix error handling in ceph_osdc_init()
- Relax VDSO alignment requirements so that the kernel-picked one (4K)
  does not conflict with the dynamic linker's one (64K)
- VDSO gettimeofday fix
- Barrier fixes for atomic operations and cache flushing
- TLB invalidation when overriding early page mappings during boot
- Wired up new 32-bit arm (compat) syscalls
- LSM_MMAP_MIN_ADDR when COMPAT is enabled
- defconfig update
- Clean-up (comments, pgd_alloc).
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (GNU/Linux)
iQIcBAABAgAGBQJS9RgeAAoJEGvWsS0AyF7xeOoP/RDrU9spbXz0Kmb/pv8zeta+
LPZXQbf+stTD0nSUp1CoqUVKPMN8maMa6MOOP3KBFD23nya3UOko65dr2hpXz8qh
ripzD7XG9tpr1Npox1PN9O7vVpqKNCNdhwflOwpAZkQSXW7q6rduwFwLrvaFcb4z
ST41oz/tUhm0xerJFGypoj/peHr1WnAKz8mqMtnPBnQY9u0mk/l5+kEcFGH0pQXl
qkuvxDPBPKG/vL8KEHBCpwQ+i+1GWa2Duk8qY7VkJYYYnKfkrqCRiRwFycBF5tE/
sde021ERxyH8V3CD+Y21btAKW3dOn5/3sJq0gU8w+5vo/Axfm0uLOvHy5Ij9YQ9W
D6yFbNuFoqeeSYQjro2x81lwarDSHldsmcsSEoP1mDMflFQS1rw9wdEu+r9+PYxr
Za275GwDkApCSd5p+xgMrezwG1Xf26Nr6AigNXz7I8AhupxUff9IGM72wy/u5w7Z
rqNNF1jjWjiJXvb9Xb00ZdBaQBb5wcYEyCMvStgqplm1FvuLzMBkwZtPblir0ldl
8+3LTKtCcThN2rqjkwTq+d4H1wSL/0WxK9VlRtK1SEVu8mZOwlCQc9sfavFkhT8q
q0TmAVGXFxHrzpgtlJ5DQumUb7QdJTqWmeG5ohQu5kIr7nYZ1OhEXavHdxErWxUk
WpbPGYFsQDcAoIvnhh3P
=jhHX
-----END PGP SIGNATURE-----
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Catalin Marinas:
- Relax VDSO alignment requirements so that the kernel-picked one (4K)
does not conflict with the dynamic linker's one (64K)
- VDSO gettimeofday fix
- Barrier fixes for atomic operations and cache flushing
- TLB invalidation when overriding early page mappings during boot
- Wired up new 32-bit arm (compat) syscalls
- LSM_MMAP_MIN_ADDR when COMPAT is enabled
- defconfig update
- Clean-up (comments, pgd_alloc).
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: defconfig: Expand default enabled features
arm64: asm: remove redundant "cc" clobbers
arm64: atomics: fix use of acquire + release for full barrier semantics
arm64: barriers: allow dsb macro to take option parameter
security: select correct default LSM_MMAP_MIN_ADDR on arm on arm64
arm64: compat: Wire up new AArch32 syscalls
arm64: vdso: update wtm fields for CLOCK_MONOTONIC_COARSE
arm64: vdso: fix coarse clock handling
arm64: simplify pgd_alloc
arm64: fix typo: s/SERRROR/SERROR/
arm64: Invalidate the TLB when replacing pmd entries during boot
arm64: Align CMA sizes to PAGE_SIZE
arm64: add DSB after icache flush in __flush_icache_all()
arm64: vdso: prevent ld from aligning PT_LOAD segments to 64k
Pull MIPS updates from Ralf Baechle:
"hree minor patches. All have sat in -next for a few days"
* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
MIPS: fpu.h: Fix build when CONFIG_BUG is not set
MIPS: Wire up sched_setattr/sched_getattr syscalls
MIPS: Alchemy: Fix DB1100 GPIO registration
Pull media fixes from Mauro Carvalho Chehab:
"A series of small fixes. Mostly driver ones. There is one core
regression fix on a patch that was meant to fix some race issues on
vb2, but that actually caused more harm than good. So, we're just
reverting it for now"
* 'v4l_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media:
[media] adv7842: Composite free-run platfrom-data fix
[media] v4l2-dv-timings: fix GTF calculation
[media] hdpvr: Fix memory leak in debug
[media] af9035: add ID [2040:f900] Hauppauge WinTV-MiniStick 2
[media] mxl111sf: Fix compile when CONFIG_DVB_USB_MXL111SF is unset
[media] mxl111sf: Fix unintentional garbage stack read
[media] cx24117: use a valid dev pointer for dev_err printout
[media] cx24117: remove dead code in always 'false' if statement
[media] update Michael Krufky's email address
[media] vb2: Check if there are buffers before streamon
[media] Revert "[media] videobuf_vm_{open,close} race fixes"
[media] go7007-loader: fix usb_dev leak
[media] media: bt8xx: add missing put_device call
[media] exynos4-is: Compile in fimc-lite runtime PM callbacks conditionally
[media] exynos4-is: Compile in fimc runtime PM callbacks conditionally
[media] exynos4-is: Fix error paths in probe() for !pm_runtime_enabled()
[media] s5p-jpeg: Fix wrong NV12 format parameters
[media] s5k5baf: allow to handle arbitrary long i2c sequences
Fix da9055 interrupt initialization
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
iQIcBAABAgAGBQJS9R1xAAoJEMsfJm/On5mBQBMP/RQIuKe1qDHtT92r0OoiUzuN
WsOFGp0sFvUERrzHPL6GLxhCjiePx9yxNbeL+skRFf4DTdcU9D0souG8tztN7ucH
5FFRmUQJFRwath0aIMBQ03ylaTmbpcAvMNWYDixAiEu4AzjH2HRH9eyRecCG1aEg
Ufo6ssFkFGRrsJyvfUsdz8UUi+A7GjnfZwrx5iemN0bQPX+KjdXBSwj/MLipnowu
D6WTF1Ll/J405Pv2+6Vs40z3eq9B36vcKVUIzRI2BHnp65+iahvew/XbOQRafbQK
RNCw/h0kSsVF+mreiGK3jiOI2JBjy/v+Nu1Bb0y7APHq+sykaWdCcUQ5OXt0Caiw
oTepdT15FzGHYz8JVVOpr7v1kyvEo7V1XvsV0l2fGh5dAzymFTYB0GnwRkcnVcVH
Tmw0YlWlbYQ0d+EN1lXHdNIiJf20jsCeKepPZh984wgOCFTwqQlFnQfCnBbKSDTy
CnZ866ff1bcG+lG24BqKC0V/dFBmoCLXvkPHgsRrs+u1UucZi6mwnlrvB+hb9LUQ
Sbc41cpHCQnomZK4hApOXoKn0Ve5vUJBsMa6fWLMzbPfHrwjSQLwIhvuU33+320u
kXxcbeSKRV6AdvFxi91l0KTFGHrlP3O+Lr0Dz8hfRYaNVoQsJI4uN9PLZL7Ra0OS
MXwd9ZQIDkiRQsXc+Ghp
=iu54
-----END PGP SIGNATURE-----
Merge tag 'hwmon-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging
Pull hwmon fixes from Guenter Roeck:
"Fix PMBus driver problem with some multi-page voltage sensors and fix
da9055 interrupt initialization"
* tag 'hwmon-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging:
hwmon: (da9055) Remove use of regmap_irq_get_virq()
hwmon: (pmbus) Support per-page exponent in linear mode
- Fix for a recent ACPI hotplug regression causing a NULL pointer
dereference to occur while handling ACPI eject notifications for
already ejected devices. From Toshi Kani.
- Four concurrency-related fixes for ACPIPHP. Two of them add
missing locking and the other two fix race conditions related to
reference counting.
- ACPIPHP fix to avoid NULL pointer dereferences during device removal
involving Virtual Functions.
- intel_pstate fix to make it compute the percentage of time the CPU
is busy properly. From Dirk Brandewie.
- Removal of two unnecessary NULL pointer checks in ACPI code and a
fix for sscanf() format string from Dan Carpenter and Luis G.F.
- New ACPI video blacklist entry for HP EliteBook Revolve 810 from
Mika Westerberg.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)
iQIcBAABCAAGBQJS9BjXAAoJEILEb/54YlRxETkP/iypvMbG0LfnTOC3xGE5tkqP
G/E/QR7km3+DKMq9AY/GcQ9B5i1NXqy9ffbwQPmIAy3LMbCkFSb/6GfmIgBKKVpy
LGHOnqe89DqEvYYiXgLlvXgn6QLf3Kh6Dlyenc0WYuFjhefatnxK0WOyDxzgSh2M
+walAqi8Mxu5nNiFFs9qInhV71Wriy0m6PFzCDs5ObbAbJmvRQeBGsyiPW8V+im1
tuPQ4w0p4Kt+oTr1Plq61DMuOBYi2A4ShWU10WsxS37iSK00GdbBycXt3kvdEAKe
RFDBcVNyEMcw3GOXAA9Fz7eXX+S/RxWg3yaeqsy+hr7Ev1haJVAiOvxNU2J5fcyp
RmpI/QHGStePqL+Ua7dYSO31quaclB/HwlEhgFPDzgSQI0qG6HxWlSz5nJLso2+c
ZwDMzek9maIT7/S5Xwq/yCOo0VUB5xx2lLxZa5oUXv65h2e888ilmKwJJvvhrIUL
7zpnQ7PYRaTqYJgXefNKuT04nioNSkNnAIyUgpHKMeMZibyEFHWPGSICTinP1Gjj
uk690wuFKrPawyXLr8mweOkElqa6fT8DgywGwJLfTqObhQghrapNOiM2W5m4yrDN
mFv/uQgwFxgdm+ZM2E2utnD4W3ozo+3GdptCgGMIPP1JMXigX7GElUFCU+RL3D8K
OrvlQEb5jmKNIFzOMGtf
=FSnS
-----END PGP SIGNATURE-----
Merge tag 'pm+acpi-3.14-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull ACPI and power management fixes from Rafael Wysocki:
"These include a fix for a recent ACPI hotplug regression, four
concurrency related fixes and one PCI device removal fix for
ACPI-based PCI hotplug (ACPIPHP), intel_pstate fix that should go into
stable, three simple ACPI cleanups and a new entry for the ACPI video
blacklist.
Specifics:
- Fix for a recent ACPI hotplug regression causing a NULL pointer
dereference to occur while handling ACPI eject notifications for
already ejected devices. From Toshi Kani.
- Four concurrency-related fixes for ACPIPHP. Two of them add
missing locking and the other two fix race conditions related to
reference counting.
- ACPIPHP fix to avoid NULL pointer dereferences during device
removal involving Virtual Functions.
- intel_pstate fix to make it compute the percentage of time the CPU
is busy properly. From Dirk Brandewie.
- Removal of two unnecessary NULL pointer checks in ACPI code and a
fix for sscanf() format string from Dan Carpenter and Luis G.F.
- New ACPI video blacklist entry for HP EliteBook Revolve 810 from
Mika Westerberg"
* tag 'pm+acpi-3.14-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
ACPI / hotplug: Fix panic on eject to ejected device
ACPI / battery: Fix incorrect sscanf() string in acpi_battery_init_alarm()
ACPI / proc: remove unneeded NULL check
ACPI / utils: remove a pointless NULL check
ACPI / video: Add HP EliteBook Revolve 810 to the blacklist
intel_pstate: Take core C0 time into account for core busy calculation
ACPI / hotplug / PCI: Fix bridge removal race vs dock events
ACPI / hotplug / PCI: Fix bridge removal race in handle_hotplug_event()
ACPI / hotplug / PCI: Scan root bus under the PCI rescan-remove lock
ACPI / hotplug / PCI: Move PCI rescan-remove locking to hotplug_event()
ACPI / hotplug / PCI: Remove entries from bus->devices in reverse order
Commit f38a5181d9 ("ceph: Convert to immutable biovecs") introduced
a NULL pointer dereference, which broke rbd in -rc1. Fix it.
Cc: Kent Overstreet <kmo@daterainc.com>
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Handling redirect replies requires both map_sem and request_mutex.
Taking map_sem unconditionally near the top of handle_reply() avoids
possible race conditions that arise from releasing request_mutex to be
able to acquire map_sem in redirect reply case. (Lock ordering is:
map_sem, request_mutex, crush_mutex.)
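The resulting locking pattern in handle_reply() is roughly (sketch; the actual
reply processing and error handling are omitted):
	down_read(&osdc->map_sem);
	mutex_lock(&osdc->request_mutex);
	/* ... look up the request and handle it, including a redirect ... */
	mutex_unlock(&osdc->request_mutex);
	up_read(&osdc->map_sem);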
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Factor out logic from ceph_osdc_start_request() into a new helper,
__ceph_osdc_start_request(). ceph_osdc_start_request() now amounts to
taking locks and calling __ceph_osdc_start_request().
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
FPGA implementations of the Cortex-A57 and Cortex-A53 are now available
in the form of the SMM-A57 and SMM-A53 Soft Macrocell Models (SMMs) for
Versatile Express. As these attach to a Motherboard Express V2M-P1 it
would be useful to have support for some V2M-P1 peripherals enabled by
default.
Additionally a couple of features have been introduced since the last
defconfig update (CMA, jump labels) that would be good to have enabled
by default to ensure they are built and boot tested.
This patch updates the arm64 defconfig to enable support for these
devices and features. The arm64 Kconfig is modified to select
HAVE_PATA_PLATFORM, which is required to enable support for the
CompactFlash controller on the V2M-P1.
A few options which don't need to appear in defconfig are trimmed:
* BLK_DEV - selected by default
* EXPERIMENTAL - otherwise gone from the kernel
* MII - selected by drivers which require it
* USB_SUPPORT - selected by default
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
cbnz/tbnz don't update the condition flags, so remove the "cc" clobbers
from inline asm blocks that only use these instructions to implement
conditional branches.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Linux requires a number of atomic operations to provide full barrier
semantics, that is no memory accesses after the operation can be
observed before any accesses up to and including the operation in
program order.
On arm64, these operations have been incorrectly implemented as follows:
// A, B, C are independent memory locations
<Access [A]>
// atomic_op (B)
1: ldaxr x0, [B] // Exclusive load with acquire
<op(B)>
stlxr w1, x0, [B] // Exclusive store with release
cbnz w1, 1b
<Access [C]>
The assumption here being that two half barriers are equivalent to a
full barrier, so the only permitted ordering would be A -> B -> C
(where B is the atomic operation involving both a load and a store).
Unfortunately, this is not the case by the letter of the architecture
and, in fact, the accesses to A and C are permitted to pass their
nearest half barrier resulting in orderings such as Bl -> A -> C -> Bs
or Bl -> C -> A -> Bs (where Bl is the load-acquire on B and Bs is the
store-release on B). This is a clear violation of the full barrier
requirement.
The simple way to fix this is to implement the same algorithm as ARMv7
using explicit barriers:
<Access [A]>
// atomic_op (B)
dmb ish // Full barrier
1: ldxr x0, [B] // Exclusive load
<op(B)>
stxr w1, x0, [B] // Exclusive store
cbnz w1, 1b
dmb ish // Full barrier
<Access [C]>
but this has the undesirable effect of introducing *two* full barrier
instructions. A better approach is actually the following, non-intuitive
sequence:
<Access [A]>
// atomic_op (B)
1: ldxr x0, [B] // Exclusive load
<op(B)>
stlxr w1, x0, [B] // Exclusive store with release
cbnz w1, 1b
dmb ish // Full barrier
<Access [C]>
The simple observations here are:
- The dmb ensures that no subsequent accesses (e.g. the access to C)
can enter or pass the atomic sequence.
- The dmb also ensures that no prior accesses (e.g. the access to A)
can pass the atomic sequence.
- Therefore, no prior access can pass a subsequent access, or
vice-versa (i.e. A is strictly ordered before C).
- The stlxr ensures that no prior access can pass the store component
of the atomic operation.
The only tricky part remaining is the ordering between the ldxr and the
access to A, since the absence of the first dmb means that we're now
permitting re-ordering between the ldxr and any prior accesses.
From an (arbitrary) observer's point of view, there are two scenarios:
1. We have observed the ldxr. This means that if we perform a store to
[B], the ldxr will still return older data. If we can observe the
ldxr, then we can potentially observe the permitted re-ordering
with the access to A, which is clearly an issue when compared to
the dmb variant of the code. Thankfully, the exclusive monitor will
save us here since it will be cleared as a result of the store and
the ldxr will retry. Notice that any use of a later memory
observation to imply observation of the ldxr will also imply
observation of the access to A, since the stlxr/dmb ensure strict
ordering.
2. We have not observed the ldxr. This means we can perform a store
and influence the later ldxr. However, that doesn't actually tell
us anything about the access to [A], so we've not lost anything
here either when compared to the dmb variant.
This patch implements this solution for our barriered atomic operations,
ensuring that we satisfy the full barrier requirements where they are
needed.
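In C, one of the barriered atomics then ends up looking roughly like this (an
illustrative sketch of the pattern above, not a verbatim copy of the patched
header):
	static inline int atomic_add_return(int i, atomic_t *v)
	{
		unsigned long tmp;
		int result;

		asm volatile(
	"1:	ldxr	%w0, %2\n"		// exclusive load, no acquire
	"	add	%w0, %w0, %w3\n"
	"	stlxr	%w1, %w0, %2\n"		// exclusive store with release
	"	cbnz	%w1, 1b"
		: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
		: "Ir" (i)
		: "memory");

		smp_mb();			/* dmb ish: the trailing full barrier */
		return result;
	}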
Cc: <stable@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Remove use of regmap_irq_get_virq() in driver probe which was
conflicting with use of platform_get_irq_byname().
platform_get_irq_byname() already returns the VIRQ number due
to MFD core translation so using regmap_irq_get_virq() on that
returned value results in an incorrect IRQ being requested.
The driver probes then fail because of this.
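The probe therefore only needs the byname lookup (sketch; the handler and
resource names here are illustrative):
	/* platform_get_irq_byname() already returns a VIRQ via the MFD core */
	hwmon_irq = platform_get_irq_byname(pdev, "HWMON");
	if (hwmon_irq < 0)
		return hwmon_irq;

	/* no regmap_irq_get_virq() on top of this - it is already virtual */
	ret = devm_request_threaded_irq(&pdev->dev, hwmon_irq,
					NULL, my_hwmon_irq_handler,
					IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
					"da9055-hwmon", hwmon);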
Signed-off-by: Adam Thomson <Adam.Thomson.Opensource@diasemi.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Merge a bunch of fixes from Andrew Morton:
"Commit 579f82901f ("swap: add a simple detector for inappropriate
swapin readahead") is a feature. No probs if you decide to defer it
until the next merge window.
It has been sitting in my tree for over a year because of my dislike
of all the magic numbers, but recent discussion with Hugh has made me
give up"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
mm: __set_page_dirty uses spin_lock_irqsave instead of spin_lock_irq
arch/x86/mm/numa.c: fix array index overflow when synchronizing nid to memblock.reserved.
arch/x86/mm/numa.c: initialize numa_kernel_nodes in numa_clear_kernel_node_hotplug()
mm: __set_page_dirty_nobuffers() uses spin_lock_irqsave() instead of spin_lock_irq()
mm/swap: fix race on swap_info reuse between swapoff and swapon
swap: add a simple detector for inappropriate swapin readahead
ocfs2: free allocated clusters if error occurs after ocfs2_claim_clusters
Documentation/kernel-parameters.txt: fix memmap= language
Using spin_{un}lock_irq is dangerous if the caller has interrupts disabled.
During aio buffer migration, we have a possibility to see the following
call stack.
aio_migratepage [disable interrupt]
migrate_page_copy
clear_page_dirty_for_io
set_page_dirty
__set_page_dirty_buffers
__set_page_dirty
spin_lock_irq
This means the current aio migration is deadlockable. spin_lock_irqsave()
is a safer alternative and we should use it.
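The safe pattern for such a helper is the save/restore form (sketch):
	unsigned long flags;

	spin_lock_irqsave(&mapping->tree_lock, flags);
	/* ... mark the page dirty ... */
	spin_unlock_irqrestore(&mapping->tree_lock, flags);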
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reported-by: David Rientjes <rientjes@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The following path will cause an array out-of-bounds access.
memblock_add_region() will always set nid in memblock.reserved to
MAX_NUMNODES. In numa_register_memblks(), after we set all nids to
correct values in memblock.reserved, we call setup_node_data(), and
use memblock_alloc_nid() to allocate memory, with nid set to
MAX_NUMNODES.
The nodemask_t type can be seen as a bit array. And the index is 0 ~
MAX_NUMNODES-1.
After that, when we call node_set() in numa_clear_kernel_node_hotplug(),
the nodemask_t got an index of value MAX_NUMNODES, which is out of [0 ~
MAX_NUMNODES-1].
See below:
numa_init()
|---> numa_register_memblks()
| |---> memblock_set_node(memory) set correct nid in memblock.memory
| |---> memblock_set_node(reserved) set correct nid in memblock.reserved
| |......
| |---> setup_node_data()
| |---> memblock_alloc_nid() here, nid is set to MAX_NUMNODES (1024)
|......
|---> numa_clear_kernel_node_hotplug()
|---> node_set() here, we have an index 1024, and overflowed
This patch moves nid setting to numa_clear_kernel_node_hotplug() to fix
this problem.
Reported-by: Dave Jones <davej@redhat.com>
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Tested-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Reported-by: Dave Jones <davej@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Tested-by: Dave Jones <davej@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
On-stack variable numa_kernel_nodes in numa_clear_kernel_node_hotplug()
was not initialized. So we need to initialize it.
[akpm@linux-foundation.org: use NODE_MASK_NONE, per David]
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Tested-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Reported-by: Dave Jones <davej@redhat.com>
Reported-by: David Rientjes <rientjes@google.com>
Tested-by: Dave Jones <davej@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
During an aio stress test, we observed the following lockdep warning. This
means AIO+numa_balancing is currently deadlockable.
The problem is that aio_migratepage disables interrupts, but
__set_page_dirty_nobuffers unintentionally enables them again.
Generally, all helper functions should use spin_lock_irqsave() instead of
spin_lock_irq() because they don't know anything about their callers.
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&(&ctx->completion_lock)->rlock);
<Interrupt>
lock(&(&ctx->completion_lock)->rlock);
*** DEADLOCK ***
dump_stack+0x19/0x1b
print_usage_bug+0x1f7/0x208
mark_lock+0x21d/0x2a0
mark_held_locks+0xb9/0x140
trace_hardirqs_on_caller+0x105/0x1d0
trace_hardirqs_on+0xd/0x10
_raw_spin_unlock_irq+0x2c/0x50
__set_page_dirty_nobuffers+0x8c/0xf0
migrate_page_copy+0x434/0x540
aio_migratepage+0xb1/0x140
move_to_new_page+0x7d/0x230
migrate_pages+0x5e5/0x700
migrate_misplaced_page+0xbc/0xf0
do_numa_page+0x102/0x190
handle_pte_fault+0x241/0x970
handle_mm_fault+0x265/0x370
__do_page_fault+0x172/0x5a0
do_page_fault+0x1a/0x70
page_fault+0x28/0x30
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Larry Woodman <lwoodman@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
swapoff clears swap_info's SWP_USED flag prematurely and frees its
resources after that. A concurrent swapon can then reuse this swap_info
while its previous resources have not been completely cleared.
These late freed resources are:
- p->percpu_cluster
- swap_cgroup_ctrl[type]
- block_device setting
- inode->i_flags &= ~S_SWAPFILE
This patch clears the SWP_USED flag after all its resources are freed,
so that swapon can reuse this swap_info by alloc_swap_info() safely.
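Schematically, the ordering in swapoff() becomes (a sketch of the ordering
only, not the actual diff):
	/* 1. release everything this swap_info still owns */
	free_percpu(p->percpu_cluster);
	p->percpu_cluster = NULL;
	vfree(swap_map);
	swap_cgroup_swapoff(p->type);
	/* ... reset the block_device and clear S_SWAPFILE on the inode ... */

	/* 2. only now let a concurrent swapon reuse this slot */
	spin_lock(&swap_lock);
	p->flags = 0;			/* clears SWP_USED */
	spin_unlock(&swap_lock);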
[akpm@linux-foundation.org: tidy up code comment]
Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This is a patch to improve the swap readahead algorithm. It's from Hugh and
I slightly changed it.
Hugh's original changelog:
swapin readahead does a blind readahead, whether or not the swapin is
sequential. This may be ok on harddisk, because large reads have
relatively small costs, and if the readahead pages are unneeded they can
be reclaimed easily - though, what if their allocation forced reclaim of
useful pages? But on SSD devices large reads are more expensive than
small ones: if the readahead pages are unneeded, reading them in caused
significant overhead.
This patch adds very simplistic random read detection. Stealing the
PageReadahead technique from Konstantin Khlebnikov's patch, avoiding the
vma/anon_vma sophistications of Shaohua Li's patch, swapin_nr_pages()
simply looks at readahead's current success rate, and narrows or widens
its readahead window accordingly. There is little science to its
heuristic: it's about as stupid as can be whilst remaining effective.
The table below shows elapsed times (in centiseconds) when running a
single repetitive swapping load across a 1000MB mapping in 900MB ram
with 1GB swap (the harddisk tests had taken painfully too long when I
used mem=500M, but SSD shows similar results for that).
Vanilla is the 3.6-rc7 kernel on which I started; Shaohua denotes his
Sep 3 patch in mmotm and linux-next; HughOld denotes my Oct 1 patch
which Shaohua showed to be defective; HughNew this Nov 14 patch, with
page_cluster as usual at default of 3 (8-page reads); HughPC4 this same
patch with page_cluster 4 (16-page reads); HughPC0 with page_cluster 0
(1-page reads: no readahead).
HDD for swapping to harddisk, SSD for swapping to VertexII SSD. Seq for
sequential access to the mapping, cycling five times around; Rand for
the same number of random touches. Anon for a MAP_PRIVATE anon mapping;
Shmem for a MAP_SHARED anon mapping, equivalent to tmpfs.
One weakness of Shaohua's vma/anon_vma approach was that it did not
optimize Shmem: seen below. Konstantin's approach was perhaps mistuned,
50% slower on Seq: did not compete and is not shown below.
HDD          Vanilla  Shaohua  HughOld  HughNew  HughPC4  HughPC0
Seq Anon       73921    76210    75611    76904    78191   121542
Seq Shmem      73601    73176    73855    72947    74543   118322
Rand Anon     895392   831243   871569   845197   846496   841680
Rand Shmem   1058375  1053486   827935   764955   764376   756489
SSD          Vanilla  Shaohua  HughOld  HughNew  HughPC4  HughPC0
Seq Anon       24634    24198    24673    25107    21614    70018
Seq Shmem      24959    24932    25052    25703    22030    69678
Rand Anon      43014    26146    28075    25989    26935    25901
Rand Shmem     45349    45215    28249    24268    24138    24332
These tests are, of course, two extremes of a very simple case: under
heavier mixed loads I've not yet observed any consistent improvement or
degradation, and wider testing would be welcome.
Shaohua Li:
Tests show Vanilla is slightly better in the sequential workload than Hugh's
patch. I observed that with Hugh's patch the readahead size is sometimes
shrunk too fast (from 8 to 1 immediately) in the sequential workload if
there is no hit. And in such a case, continuing to do readahead is actually
good.
I did not prepare a sophisticated algorithm for the sequential workload
because so far we can't guarantee sequentially accessed pages are swapped
out sequentially. So I slightly changed Hugh's heuristic - don't shrink
the readahead size too fast.
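A deliberately simplified sketch of that kind of heuristic (illustrative only;
the real swapin_nr_pages() differs in detail):
	/* pick the next readahead window from how well the last one did */
	static unsigned long swapin_window_sketch(unsigned long hits,
						  unsigned long prev_win,
						  unsigned long max_win)
	{
		unsigned long win;

		if (max_win <= 1)
			return 1;
		if (hits)
			win = prev_win * 2;	/* readahead paid off: widen */
		else
			win = prev_win / 2;	/* misses: shrink by half per step
						 * instead of dropping straight
						 * to 1 (the tweak above) */
		if (win < 1)
			win = 1;
		if (win > max_win)
			win = max_win;
		return win;
	}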
Here is my test result (unit: seconds, average of 3 runs):
            Vanilla   Hugh    New
Seq             356    370    360
Random         4525   2447   2444
The attached graph is the swapin/swapout throughput I collected with 'vmstat
2'. The first part is running a random workload (till around 1200 on
the x-axis) and the second part is running a sequential workload.
swapin and swapout throughput are almost identical in steady state in
both workloads, which is the expected behavior; while in Vanilla, swapin
is much bigger than swapout, especially in the random workload (because
of wrong readahead).
Original patches by: Shaohua Li and Konstantin Khlebnikov.
[fengguang.wu@intel.com: swapin_nr_pages() can be static]
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Konstantin Khlebnikov <khlebnikov@openvz.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Even when using the same jbd2 handle, we cannot roll back a transaction.
So once some error occurs after successfully allocating clusters, the
allocated clusters will never be used, which means they are lost. For
example, ocfs2_claim_clusters succeeds when expanding a file, but
ocfs2_insert_extent then fails. So we need to free the allocated
clusters if they are indeed not going to be used.
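Schematically, the expand path gains a cleanup step in its error handling
(a sketch only; the helper arguments are simplified and partly hypothetical):
	status = ocfs2_claim_clusters(handle, data_ac, clusters_to_add,
				      &bit_off, &num_bits);
	if (status < 0)
		goto leave;

	status = ocfs2_insert_extent(handle, et, cpos, block, num_bits,
				     flags, meta_ac);
	if (status < 0) {
		/* give the just-claimed clusters back instead of leaking them */
		ocfs2_free_clusters(handle, bitmap_inode, bitmap_bh,
				    ocfs2_clusters_to_blocks(sb, bit_off),
				    num_bits);
		goto leave;
	}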
Signed-off-by: Zongxun Wang <wangzongxun@huawei.com>
Signed-off-by: Joseph Qi <joseph.qi@huawei.com>
Acked-by: Joel Becker <jlbec@evilplan.org>
Cc: Mark Fasheh <mfasheh@suse.com>
Cc: Li Zefan <lizefan@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Clean up descriptions of memmap= boot options.
Add periods (full stops), drop commas, change "used" to "reserved" or
"marked".
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Andiry Xu <andiry.xu@gmail.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>