License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to licenses
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
  lines of source.
- File already had some variant of a license header in it (even if <5
  lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
  considered to have no license information in it, and the top-level
  COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0                                               11139
and resulted in the first patch in this series.
If the file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                         930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
  of the */uapi/* ones, the Linux-syscall-note was added if any GPL
  family license was found in the file, or if the file had no licensing
  in it (per the prior point). Results summary:
SPDX license identifier                              # files
-----------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                          270
GPL-2.0+ WITH Linux-syscall-note                         169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
LGPL-2.1+ WITH Linux-syscall-note                         15
GPL-1.0+ WITH Linux-syscall-note                          14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
LGPL-2.0+ WITH Linux-syscall-note                          4
LGPL-2.1 WITH Linux-syscall-note                           3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
  the concluded license(s).
- when there was disagreement between the two scanners (one detected a
  license but the other didn't, or they both detected different
  licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
  resulted in a clear resolution of the license that should apply (and
  which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
  confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
  the file was flagged for further research and to be revisited later
  in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some
cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 files patched in the initial version
of this series, with:
- a full scancode scan run, collecting the matched texts, detected
  license ids and scores
- reviewing anything where there was a license detected (about 500+
  files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
  was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
  SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_BARRIER_H
#define __ASM_BARRIER_H

#ifndef __ASSEMBLY__

#define nop() __asm__ __volatile__("mov\tr0,r0\t@ nop\n\t");

#if __LINUX_ARM_ARCH__ >= 7 ||		\
	(__LINUX_ARM_ARCH__ == 6 && defined(CONFIG_CPU_32v6K))
#define sev()	__asm__ __volatile__ ("sev" : : : "memory")
#define wfe()	__asm__ __volatile__ ("wfe" : : : "memory")
#define wfi()	__asm__ __volatile__ ("wfi" : : : "memory")
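For context on these macros: sev and wfe are designed to pair up - a
waiting CPU can doze in wfe() until another CPU signals an event, which
is how the arch/arm spinlock code uses them (its release path does
dsb(ishst) followed by sev()). A minimal, hypothetical sketch of the
pattern - 'flag' is a made-up shared variable, not code from this file:

	/* waiter: sleep in wfe() between checks instead of spinning */
	while (!READ_ONCE(flag))
		wfe();

	/* releaser: make the store visible, then wake any CPUs in wfe() */
	WRITE_ONCE(flag, 1);
	dsb(ishst);
	sev();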
ARM: avoid Cortex-A9 livelock on tight dmb loops
machine_crash_nonpanic_core() does this:
	while (1)
		cpu_relax();
because the kernel has crashed, and we have no known safe way to deal
with the CPU. So, we place the CPU into an infinite loop which we
expect it to never exit - at least not until the system as a whole is
reset by some method.
In the absence of erratum 754327, this code assembles to:
	b	.
In other words, an infinite loop. When erratum 754327 is enabled,
this becomes:
1:	dmb
	b	1b
It has been observed on some systems (eg, OMAP4) that, if a crash is
triggered, the system tries to kexec into the panic kernel but fails
after taking the secondary CPU down, placing it into one of these
loops. This causes the system to livelock, and the most noticeable
effect is that the system stops after issuing:
	Loading crashdump kernel...
to the system console.
The tested-as-working solution I came up with was to add wfe() to
these infinite loops, thus:
	while (1) {
		cpu_relax();
		wfe();
	}
which, without 754327, builds to:
1:	wfe
	b	1b
or, with 754327 enabled:
1:	dmb
	wfe
	b	1b
Adding "wfe" does two things depending on the environment we're running
under:
- where we're running on bare metal, and the processor implements
  "wfe", it stops us spinning endlessly in a loop where we're never
  going to do any useful work.
- if we're running in a VM, it allows the CPU to be given back to the
  hypervisor and rescheduled for other purposes (maybe a different VM)
  rather than wasting CPU cycles inside a crashed VM.
However, in light of erratum 794072, Will Deacon wanted to see 10 nops
as well - which is reasonable to cover the case where we have erratum
754327 enabled _and_ we have a processor that doesn't implement the
wfe hint.
So, we now end up with:
1:	wfe
	b	1b
when erratum 754327 is disabled, or:
1:	dmb
	nop
	nop
	nop
	nop
	nop
	nop
	nop
	nop
	nop
	nop
	wfe
	b	1b
when erratum 754327 is enabled. We also get the dmb + 10-nop
sequence elsewhere in the kernel, in terminating loops.
This is reasonable - it means we get the workaround for erratum
794072 when erratum 754327 is enabled, but still relinquish the dead
processor - either by placing it in a lower power mode when wfe is
implemented as such, or by returning it to the hypervisor; in the
case where wfe is a no-op, we use the workaround specified in erratum
794072 to avoid the problem.
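(The dmb + 10-nop sequence lives in cpu_relax() itself; a sketch of
roughly how the ARM definition ends up - paraphrased from
arch/arm/include/asm/processor.h, so treat the exact layout as
illustrative:)
	#if __LINUX_ARM_ARCH__ == 6 || defined(CONFIG_ARM_ERRATA_754327)
	#define cpu_relax()						\
		do {							\
			smp_mb();	/* the erratum 754327 dmb */	\
			__asm__ __volatile__(				\
				"nop; nop; nop; nop; nop; "		\
				"nop; nop; nop; nop; nop;");		\
		} while (0)
	#else
	#define cpu_relax()	barrier()
	#endif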
These are two entirely orthogonal problems - the 10 nops address
erratum 794072, and the wfe is an optimisation that makes the system
more efficient when crashed, either in terms of power consumption or
by allowing the host/other VMs to make use of the CPU.
I don't see any reason not to use kexec() inside a VM - it has the
potential to provide automated recovery from a failure of the VM's
kernel, with the opportunity of saving a crashdump of the failure.
A panic() with a reboot timeout won't do that, and reading the
libvirt documentation, setting on_reboot to "preserve" won't either
(the documentation states "The preserve action for an on_reboot event
is treated as a destroy".) Surely it has to be a good thing to avoid
having CPUs spinning inside a VM that is doing no useful work.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
#else
#define wfe()	do { } while (0)
#endif
#if __LINUX_ARM_ARCH__ >= 7

#define isb(option) __asm__ __volatile__ ("isb " #option : : : "memory")
#define dsb(option) __asm__ __volatile__ ("dsb " #option : : : "memory")
#define dmb(option) __asm__ __volatile__ ("dmb " #option : : : "memory")

#ifdef CONFIG_THUMB2_KERNEL
#define CSDB	".inst.w 0xf3af8014"
#else
#define CSDB	".inst	0xe320f014"
#endif
#define csdb() __asm__ __volatile__(CSDB : : : "memory")

#elif defined(CONFIG_CPU_XSC3) || __LINUX_ARM_ARCH__ == 6
#define isb(x) __asm__ __volatile__ ("mcr p15, 0, %0, c7, c5, 4" \
				    : : "r" (0) : "memory")
#define dsb(x) __asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 4" \
				    : : "r" (0) : "memory")
#define dmb(x) __asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 5" \
				    : : "r" (0) : "memory")
#elif defined(CONFIG_CPU_FA526)
#define isb(x) __asm__ __volatile__ ("mcr p15, 0, %0, c7, c5, 4" \
				    : : "r" (0) : "memory")
#define dsb(x) __asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 4" \
				    : : "r" (0) : "memory")
#define dmb(x) __asm__ __volatile__ ("" : : : "memory")
#else
#define isb(x) __asm__ __volatile__ ("" : : : "memory")
#define dsb(x) __asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 4" \
				    : : "r" (0) : "memory")
#define dmb(x) __asm__ __volatile__ ("" : : : "memory")
#endif
#ifndef CSDB
#define CSDB
#endif
#ifndef csdb
#define csdb()
#endif
ARM: move heavy barrier support out of line
The existing memory barrier macro causes a significant amount of code
to be inserted inline at every call site. For example, in
gpio_set_irq_type(), we have this for mb():
c0344c08: f57ff04e dsb st
c0344c0c: e59f8190 ldr r8, [pc, #400] ; c0344da4 <gpio_set_irq_type+0x230>
c0344c10: e3590004 cmp r9, #4
c0344c14: e5983014 ldr r3, [r8, #20]
c0344c18: 0a000054 beq c0344d70 <gpio_set_irq_type+0x1fc>
c0344c1c: e3530000 cmp r3, #0
c0344c20: 0a000004 beq c0344c38 <gpio_set_irq_type+0xc4>
c0344c24: e50b2030 str r2, [fp, #-48] ; 0xffffffd0
c0344c28: e50bc034 str ip, [fp, #-52] ; 0xffffffcc
c0344c2c: e12fff33 blx r3
c0344c30: e51bc034 ldr ip, [fp, #-52] ; 0xffffffcc
c0344c34: e51b2030 ldr r2, [fp, #-48] ; 0xffffffd0
c0344c38: e5963004 ldr r3, [r6, #4]
Moving the outer_cache_sync() call out of line reduces the impact of
the barrier:
c0344968: f57ff04e dsb st
c034496c: e35a0004 cmp sl, #4
c0344970: e50b2030 str r2, [fp, #-48] ; 0xffffffd0
c0344974: 0a000044 beq c0344a8c <gpio_set_irq_type+0x1b8>
c0344978: ebf363dd bl c001d8f4 <arm_heavy_mb>
c034497c: e5953004 ldr r3, [r5, #4]
This should reduce the cache footprint of this code. Overall, this
results in a reduction of around 20K in the kernel size:
    text    data      bss      dec     hex filename
10773970  667392 10369656 21811018 14ccf4a ../build/imx6/vmlinux-old
10754219  667392 10369656 21791267 14c8223 ../build/imx6/vmlinux-new
Another advantage to this approach is that we can finally resolve the
issue of SoCs which have their own memory barrier requirements within
multiplatform kernels (such as OMAP). Here, the bus interconnects
need additional handling to ensure that writes become visible in the
correct order (eg, between dma_map() operations, writes to DMA
coherent memory, and MMIO accesses).
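For reference, the out-of-line helper itself is small - a sketch of its
shape, modelled on the arch/arm implementation (details illustrative):

	void (*soc_mb)(void);

	void arm_heavy_mb(void)
	{
	#ifdef CONFIG_OUTER_CACHE_SYNC
		/* drain the outer cache (e.g. L2C-310) write buffer */
		if (outer_cache.sync)
			outer_cache.sync();
	#endif
		/* then run any SoC-specific barrier hook, e.g. for OMAP */
		if (soc_mb)
			soc_mb();
	}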
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Richard Woodruff <r-woodruff2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
#ifdef CONFIG_ARM_HEAVY_MB
extern void (*soc_mb)(void);
extern void arm_heavy_mb(void);
#define __arm_heavy_mb(x...)	do { dsb(x); arm_heavy_mb(); } while (0)
#else
#define __arm_heavy_mb(x...)	dsb(x)
#endif
#if defined(CONFIG_ARM_DMA_MEM_BUFFERABLE) || defined(CONFIG_SMP)
#define mb()		__arm_heavy_mb()
#define rmb()		dsb()
#define wmb()		__arm_heavy_mb(st)
#define dma_rmb()	dmb(osh)
#define dma_wmb()	dmb(oshst)
#else
#define mb()		barrier()
#define rmb()		barrier()
#define wmb()		barrier()
#define dma_rmb()	barrier()
#define dma_wmb()	barrier()
#endif
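A usage sketch of why the dma_*() variants exist - hypothetical driver
code (my_desc, ring_submit and DESC_DEV_OWNED are made-up names):
dma_wmb() only has to order CPU writes to DMA-coherent memory against
each other, so it can be the cheaper dmb(oshst) rather than the heavy
__arm_heavy_mb():

	struct my_desc {
		u32 addr;
		u32 len;
		u32 flags;	/* device polls this for ownership */
	};

	static void ring_submit(struct my_desc *desc, u32 addr, u32 len,
				void __iomem *doorbell)
	{
		desc->addr = addr;
		desc->len  = len;
		dma_wmb();	/* payload visible before ownership flips */
		desc->flags = DESC_DEV_OWNED;
		writel(1, doorbell);	/* writel() implies the heavier
					 * barrier before the MMIO kick */
	}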
#define __smp_mb()	dmb(ish)
#define __smp_rmb()	__smp_mb()
#define __smp_wmb()	dmb(ishst)
#ifdef CONFIG_CPU_SPECTRE
static inline unsigned long array_index_mask_nospec(unsigned long idx,
						    unsigned long sz)
{
	unsigned long mask;

	asm volatile(
		"cmp	%1, %2\n"
	"	sbc	%0, %1, %1\n"
	CSDB
	: "=r" (mask)
	: "r" (idx), "Ir" (sz)
	: "cc");

	return mask;
}
#define array_index_mask_nospec array_index_mask_nospec
#endif
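The asm above compares idx against sz, then sbc of idx with itself
leaves 0 minus the borrow - so mask is ~0UL when idx < sz and 0
otherwise, with CSDB keeping the CPU from speculating around the
result. Callers use it roughly the way include/linux/nospec.h does - a
simplified sketch (the real macro has extra type handling):

	#define array_index_nospec(idx, sz)				\
	({								\
		unsigned long _i = (idx);				\
		unsigned long _mask = array_index_mask_nospec(_i, (sz));\
		_i & _mask;						\
	})

	/* typical use - 'table' is a hypothetical bounded array:
	 *	idx = array_index_nospec(idx, ARRAY_SIZE(table));
	 *	val = table[idx];  (a misspeculated out-of-bounds idx
	 *			    reads entry 0 instead)
	 */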
#include <asm-generic/barrier.h>

#endif /* !__ASSEMBLY__ */
#endif /* __ASM_BARRIER_H */