License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to licenses
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- The file already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non */uapi/* files, that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, Kate, Philippe, and Thomas logged over 70 hours of manual
review on the spreadsheet to determine the SPDX license identifiers to
apply to the source files, in some cases with confirmation by lawyers
working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based in part on an older version of FOSSology, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors, and have been fixed to
reflect the correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
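For reference, the tag itself is a single comment line; per the
comment-type distinction above, header files take the block-comment form
and .c files the C++-style form:

    /* SPDX-License-Identifier: GPL-2.0 */    (header files)
    // SPDX-License-Identifier: GPL-2.0       (C source files, as at the top of smp.c below)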
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 14:07:57 +00:00
|
|
|
// SPDX-License-Identifier: GPL-2.0
|
2005-04-16 22:20:36 +00:00
|
|
|
/* smp.c: Sparc64 SMP support.
|
|
|
|
*
|
2008-03-26 08:11:55 +00:00
|
|
|
* Copyright (C) 1997, 2007, 2008 David S. Miller (davem@davemloft.net)
|
2005-04-16 22:20:36 +00:00
|
|
|
*/
|
|
|
|
|
2011-07-22 17:18:16 +00:00
|
|
|
#include <linux/export.h>
|
2005-04-16 22:20:36 +00:00
|
|
|
#include <linux/kernel.h>
|
2017-02-01 18:08:20 +00:00
|
|
|
#include <linux/sched/mm.h>
|
2017-02-08 17:51:36 +00:00
|
|
|
#include <linux/sched/hotplug.h>
|
2005-04-16 22:20:36 +00:00
|
|
|
#include <linux/mm.h>
|
|
|
|
#include <linux/pagemap.h>
|
|
|
|
#include <linux/threads.h>
|
|
|
|
#include <linux/smp.h>
|
|
|
|
#include <linux/interrupt.h>
|
|
|
|
#include <linux/kernel_stat.h>
|
|
|
|
#include <linux/delay.h>
|
|
|
|
#include <linux/init.h>
|
|
|
|
#include <linux/spinlock.h>
|
|
|
|
#include <linux/fs.h>
|
|
|
|
#include <linux/seq_file.h>
|
|
|
|
#include <linux/cache.h>
|
|
|
|
#include <linux/jiffies.h>
|
|
|
|
#include <linux/profile.h>
|
2018-10-30 22:09:49 +00:00
|
|
|
#include <linux/memblock.h>
|
2009-04-09 03:32:02 +00:00
|
|
|
#include <linux/vmalloc.h>
|
2010-04-07 11:41:33 +00:00
|
|
|
#include <linux/ftrace.h>
|
2008-10-13 03:55:24 +00:00
|
|
|
#include <linux/cpu.h>
|
include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h, which
in turn includes gfp.h, making everything defined by the two files
universally available and complicating inclusion dependencies.
The percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include
those headers directly instead of assuming their availability. As this
conversion needs to touch a large number of source files, the following
script was used as the basis of the conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following:
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there, i.e. if only gfp is used,
gfp.h; if slab is used, slab.h.
* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surroundings. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered
(alphabetical, Christmas tree, rev-Xmas-tree), or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
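As an illustration (a hypothetical file, not one from the tree): a .c
file that calls kmalloc() but previously reached slab.h only through the
implicit percpu.h chain would gain an explicit include, slotted into its
core kernel include block in the surrounding (here alphabetical) order:

    #include <linux/kernel.h>
    #include <linux/slab.h>    /* added: file uses kmalloc()/kfree() */
    #include <linux/smp.h>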
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition, and for others adding it to an
implementation .h or embedding .c file was more appropriate. This
step added inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs, requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them, as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on the arch to make
things build (like ipr on powerpc/64, which failed due to missing
writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that they could be applied as
a separate patch and serve as a bisection point.
Given that I had only a couple of failures from the tests in step 6,
I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers, which should be easily discoverable on most builds of the
specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
2010-03-24 08:04:11 +00:00
|
|
|
#include <linux/slab.h>
|
2014-05-16 21:26:05 +00:00
|
|
|
#include <linux/kgdb.h>
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
#include <asm/head.h>
|
|
|
|
#include <asm/ptrace.h>
|
2011-07-26 23:09:06 +00:00
|
|
|
#include <linux/atomic.h>
|
2005-04-16 22:20:36 +00:00
|
|
|
#include <asm/tlbflush.h>
|
|
|
|
#include <asm/mmu_context.h>
|
|
|
|
#include <asm/cpudata.h>
|
2007-07-14 07:58:53 +00:00
|
|
|
#include <asm/hvtramp.h>
|
|
|
|
#include <asm/io.h>
|
2008-03-26 08:11:55 +00:00
|
|
|
#include <asm/timer.h>
|
2014-05-16 21:26:07 +00:00
|
|
|
#include <asm/setup.h>
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
#include <asm/irq.h>
|
2006-10-08 12:23:28 +00:00
|
|
|
#include <asm/irq_regs.h>
|
2005-04-16 22:20:36 +00:00
|
|
|
#include <asm/page.h>
|
|
|
|
#include <asm/pgtable.h>
|
|
|
|
#include <asm/oplib.h>
|
2016-12-24 19:46:01 +00:00
|
|
|
#include <linux/uaccess.h>
|
2005-04-16 22:20:36 +00:00
|
|
|
#include <asm/starfire.h>
|
|
|
|
#include <asm/tlb.h>
|
2006-02-27 07:24:22 +00:00
|
|
|
#include <asm/sections.h>
|
2006-06-22 06:34:02 +00:00
|
|
|
#include <asm/prom.h>
|
2007-05-25 22:49:59 +00:00
|
|
|
#include <asm/mdesc.h>
|
[SPARC64]: Initial LDOM cpu hotplug support.
Only adding cpus is supported at the moment; removal
will come next.
When new cpus are configured, the machine description is
updated. When we get the configure request we pass in a
cpu mask of to-be-added cpus to the mdesc CPU node parser
so it only fetches information for those cpus. That code
also proceeds to update the SMT/multi-core scheduling bitmaps.
cpu_up() does all the work and we return the status back
over the DS channel.
CPUs via dr-cpu need to be booted straight out of the
hypervisor, and this requires:
1) A new trampoline mechanism. CPUs are booted straight
out of the hypervisor with MMU disabled and running in
physical addresses with no mappings installed in the TLB.
The new hvtramp.S code sets up the critical cpu state,
installs the locked TLB mappings for the kernel, and
turns the MMU on. It then proceeds to follow the logic
of the existing trampoline.S SMP cpu bringup code.
2) All calls into OBP have to be disallowed when domaining
is enabled. Since cpus boot straight into the kernel from
the hypervisor, OBP has no state about that cpu and therefore
cannot handle being invoked on that cpu.
Luckily it's only a handful of interfaces which can be called
after the OBP device tree is obtained. For example, rebooting,
halting, powering-off, and setting options node variables.
CPU removal support will require some infrastructure changes
here. Namely we'll have to process the requests via a true
kernel thread instead of in a workqueue. workqueues run on
a per-cpu thread, but when unconfiguring we might need to
force the thread to execute on another cpu if the current cpu
is the one being removed. Removal of a cpu also causes the kernel
to destroy that cpu's workqueue running thread.
Another issue on removal is that we may have interrupts still
pointing to the cpu-to-be-removed. So new code will be needed
to walk the active INO list and retarget those cpus as-needed.
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-07-13 23:03:42 +00:00
|
|
|
#include <asm/ldc.h>
|
2007-07-16 10:49:40 +00:00
|
|
|
#include <asm/hypervisor.h>
|
2011-02-15 23:04:07 +00:00
|
|
|
#include <asm/pcr.h>
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2009-06-04 09:10:11 +00:00
|
|
|
#include "cpumap.h"
|
2014-05-16 21:25:57 +00:00
|
|
|
#include "kernel.h"
|
2009-06-04 09:10:11 +00:00
|
|
|
|
2007-10-16 08:24:05 +00:00
|
|
|
DEFINE_PER_CPU(cpumask_t, cpu_sibling_map) = CPU_MASK_NONE;
|
2007-06-05 00:01:39 +00:00
|
|
|
cpumask_t cpu_core_map[NR_CPUS] __read_mostly =
|
|
|
|
{ [0 ... NR_CPUS-1] = CPU_MASK_NONE };
|
[SPARC64]: Initial LDOM cpu hotplug support.
2007-07-13 23:03:42 +00:00
|
|
|
|
2015-04-22 16:28:31 +00:00
|
|
|
cpumask_t cpu_core_sib_map[NR_CPUS] __read_mostly = {
|
|
|
|
[0 ... NR_CPUS-1] = CPU_MASK_NONE };
|
|
|
|
|
2016-10-20 00:33:29 +00:00
|
|
|
cpumask_t cpu_core_sib_cache_map[NR_CPUS] __read_mostly = {
|
|
|
|
[0 ... NR_CPUS - 1] = CPU_MASK_NONE };
|
|
|
|
|
2007-10-16 08:24:05 +00:00
|
|
|
EXPORT_PER_CPU_SYMBOL(cpu_sibling_map);
|
[SPARC64]: Initial LDOM cpu hotplug support.
2007-07-13 23:03:42 +00:00
|
|
|
EXPORT_SYMBOL(cpu_core_map);
|
2015-04-22 16:28:31 +00:00
|
|
|
EXPORT_SYMBOL(cpu_core_sib_map);
|
2016-10-20 00:33:29 +00:00
|
|
|
EXPORT_SYMBOL(cpu_core_sib_cache_map);
|
[SPARC64]: Initial LDOM cpu hotplug support.
2007-07-13 23:03:42 +00:00
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
static cpumask_t smp_commenced_mask;
|
|
|
|
|
2017-07-21 16:23:57 +00:00
|
|
|
static DEFINE_PER_CPU(bool, poke);
|
|
|
|
static bool cpu_poke;
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
void smp_info(struct seq_file *m)
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
|
|
|
|
seq_printf(m, "State:\n");
|
2006-03-23 11:01:05 +00:00
|
|
|
for_each_online_cpu(i)
|
|
|
|
seq_printf(m, "CPU%d:\t\tonline\n", i);
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
void smp_bogo(struct seq_file *m)
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
|
2006-03-23 11:01:05 +00:00
|
|
|
for_each_online_cpu(i)
|
|
|
|
seq_printf(m,
|
|
|
|
"Cpu%dClkTck\t: %016lx\n",
|
|
|
|
i, cpu_data(i).clock_tick);
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
2007-03-05 23:28:37 +00:00
|
|
|
extern void setup_sparc64_timer(void);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
static volatile unsigned long callin_flag = 0;
|
|
|
|
|
sparc: delete __cpuinit/__CPUINIT usage from all users
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch-independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
arch-specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
content into no-ops as early as possible, since that will get rid
of these warnings. In any case, they are temporary and harmless.
This removes all the arch/sparc uses of the __cpuinit macros from
C files and removes __CPUINIT from assembly files. Note that even
though arch/sparc/kernel/trampoline_64.S has instances of ".previous"
in it, they are all paired off against explicit ".section" directives,
and not implicitly paired with __CPUINIT (unlike mips and arm).
[1] https://lkml.org/lkml/2013/5/20/589
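In this file, for instance, the conversion amounts to dropping the
annotation from the CPU bringup entry points; a representative hunk
(reconstructed for illustration, not quoted from the patch) would look
like:

    -void __cpuinit smp_callin(void)
    +void smp_callin(void)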
Cc: "David S. Miller" <davem@davemloft.net>
Cc: sparclinux@vger.kernel.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2013-06-17 19:43:14 +00:00
|
|
|
void smp_callin(void)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
int cpuid = hard_smp_processor_id();
|
|
|
|
|
2006-02-27 07:24:22 +00:00
|
|
|
__local_per_cpu_offset = __per_cpu_offset(cpuid);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2006-02-14 21:49:32 +00:00
|
|
|
if (tlb_type == hypervisor)
|
2006-02-11 22:41:18 +00:00
|
|
|
sun4v_ktsb_register();
|
2006-02-08 05:51:08 +00:00
|
|
|
|
2006-02-27 07:24:22 +00:00
|
|
|
__flush_tlb_all();
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2007-03-05 23:28:37 +00:00
|
|
|
setup_sparc64_timer();
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2005-05-23 22:52:08 +00:00
|
|
|
if (cheetah_pcache_forced_on)
|
|
|
|
cheetah_enable_pcache();
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
callin_flag = 1;
|
|
|
|
__asm__ __volatile__("membar #Sync\n\t"
|
|
|
|
"flush %%g6" : : : "memory");
|
|
|
|
|
|
|
|
/* Clear this or we will die instantly when we
|
|
|
|
* schedule back to this idler...
|
|
|
|
*/
|
2005-07-25 02:36:26 +00:00
|
|
|
current_thread_info()->new_child = 0;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
/* Attach to the address space of init_task. */
|
2017-02-27 22:30:07 +00:00
|
|
|
mmgrab(&init_mm);
|
2005-04-16 22:20:36 +00:00
|
|
|
current->active_mm = &init_mm;
|
|
|
|
|
2008-10-13 03:55:24 +00:00
|
|
|
/* inform the notifiers about the new cpu */
|
|
|
|
notify_cpu_starting(cpuid);
|
|
|
|
|
2011-05-16 20:38:07 +00:00
|
|
|
while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
|
2005-08-29 19:46:22 +00:00
|
|
|
rmb();
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2011-05-16 20:38:07 +00:00
|
|
|
set_cpu_online(cpuid, true);
|
2005-11-09 05:39:01 +00:00
|
|
|
|
|
|
|
/* idle thread is expected to have preempt disabled */
|
|
|
|
preempt_disable();
|
2013-04-11 19:38:50 +00:00
|
|
|
|
2013-12-12 14:09:50 +00:00
|
|
|
local_irq_enable();
|
|
|
|
|
2016-02-26 18:43:40 +00:00
|
|
|
cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
void cpu_panic(void)
|
|
|
|
{
|
|
|
|
printk("CPU[%d]: Returns from cpu_idle!\n", smp_processor_id());
|
|
|
|
panic("SMP bolixed\n");
|
|
|
|
}
|
|
|
|
|
|
|
|
/* This tick register synchronization scheme is taken entirely from
|
|
|
|
* the ia64 port, see arch/ia64/kernel/smpboot.c for details and credit.
|
|
|
|
*
|
|
|
|
* The only change I've made is to rework it so that the master
|
|
|
|
* initiates the synchronization instead of the slave. -DaveM
|
|
|
|
*/
|
|
|
|
|
|
|
|
#define MASTER 0
|
|
|
|
#define SLAVE (SMP_CACHE_BYTES/sizeof(unsigned long))
|
|
|
|
|
|
|
|
#define NUM_ROUNDS 64 /* magic value */
|
|
|
|
#define NUM_ITERS 5 /* likewise */
|
|
|
|
|
2014-04-16 20:45:24 +00:00
|
|
|
static DEFINE_RAW_SPINLOCK(itc_sync_lock);
|
2005-04-16 22:20:36 +00:00
|
|
|
static unsigned long go[SLAVE + 1];
|
|
|
|
|
|
|
|
#define DEBUG_TICK_SYNC 0
|
|
|
|
|
|
|
|
static inline long get_delta (long *rt, long *master)
|
|
|
|
{
|
|
|
|
unsigned long best_t0 = 0, best_t1 = ~0UL, best_tm = 0;
|
|
|
|
unsigned long tcenter, t0, t1, tm;
|
|
|
|
unsigned long i;
|
|
|
|
|
|
|
|
for (i = 0; i < NUM_ITERS; i++) {
|
|
|
|
t0 = tick_ops->get_tick();
|
|
|
|
go[MASTER] = 1;
|
2008-11-15 21:33:25 +00:00
|
|
|
membar_safe("#StoreLoad");
|
2005-04-16 22:20:36 +00:00
|
|
|
while (!(tm = go[SLAVE]))
|
2005-08-29 19:46:22 +00:00
|
|
|
rmb();
|
2005-04-16 22:20:36 +00:00
|
|
|
go[SLAVE] = 0;
|
2005-08-29 19:46:22 +00:00
|
|
|
wmb();
|
2005-04-16 22:20:36 +00:00
|
|
|
t1 = tick_ops->get_tick();
|
|
|
|
|
|
|
|
if (t1 - t0 < best_t1 - best_t0)
|
|
|
|
best_t0 = t0, best_t1 = t1, best_tm = tm;
|
|
|
|
}
|
|
|
|
|
|
|
|
*rt = best_t1 - best_t0;
|
|
|
|
*master = best_tm - best_t0;
|
|
|
|
|
|
|
|
/* average best_t0 and best_t1 without overflow: */
|
|
|
|
tcenter = (best_t0/2 + best_t1/2);
|
|
|
|
if (best_t0 % 2 + best_t1 % 2 == 2)
|
|
|
|
tcenter++;
|
|
|
|
return tcenter - best_tm;
|
|
|
|
}
|
|
|
|
|
|
|
|
void smp_synchronize_tick_client(void)
|
|
|
|
{
|
|
|
|
long i, delta, adj, adjust_latency = 0, done = 0;
|
2011-02-27 07:40:02 +00:00
|
|
|
unsigned long flags, rt, master_time_stamp;
|
2005-04-16 22:20:36 +00:00
|
|
|
#if DEBUG_TICK_SYNC
|
|
|
|
struct {
|
|
|
|
long rt; /* roundtrip time */
|
|
|
|
long master; /* master's timestamp */
|
|
|
|
long diff; /* difference between midpoint and master's timestamp */
|
|
|
|
long lat; /* estimate of itc adjustment latency */
|
|
|
|
} t[NUM_ROUNDS];
|
|
|
|
#endif
|
|
|
|
|
|
|
|
go[MASTER] = 1;
|
|
|
|
|
|
|
|
while (go[MASTER])
|
2005-08-29 19:46:22 +00:00
|
|
|
rmb();
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
local_irq_save(flags);
|
|
|
|
{
|
|
|
|
for (i = 0; i < NUM_ROUNDS; i++) {
|
|
|
|
delta = get_delta(&rt, &master_time_stamp);
|
2011-02-27 07:40:02 +00:00
|
|
|
if (delta == 0)
|
2005-04-16 22:20:36 +00:00
|
|
|
done = 1; /* let's lock on to this... */
|
|
|
|
|
|
|
|
if (!done) {
|
|
|
|
if (i > 0) {
|
|
|
|
adjust_latency += -delta;
|
|
|
|
adj = -delta + adjust_latency/4;
|
|
|
|
} else
|
|
|
|
adj = -delta;
|
|
|
|
|
2007-03-05 23:28:37 +00:00
|
|
|
tick_ops->add_tick(adj);
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
#if DEBUG_TICK_SYNC
|
|
|
|
t[i].rt = rt;
|
|
|
|
t[i].master = master_time_stamp;
|
|
|
|
t[i].diff = delta;
|
|
|
|
t[i].lat = adjust_latency/4;
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
}
|
|
|
|
local_irq_restore(flags);
|
|
|
|
|
|
|
|
#if DEBUG_TICK_SYNC
|
|
|
|
for (i = 0; i < NUM_ROUNDS; i++)
|
|
|
|
printk("rt=%5ld master=%5ld diff=%5ld adjlat=%5ld\n",
|
|
|
|
t[i].rt, t[i].master, t[i].diff, t[i].lat);
|
|
|
|
#endif
|
|
|
|
|
2007-11-20 07:43:00 +00:00
|
|
|
printk(KERN_INFO "CPU %d: synchronized TICK with master CPU "
|
|
|
|
"(last diff %ld cycles, maxerr %lu cycles)\n",
|
|
|
|
smp_processor_id(), delta, rt);
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static void smp_start_sync_tick_client(int cpu);
|
|
|
|
|
|
|
|
static void smp_synchronize_one_tick(int cpu)
|
|
|
|
{
|
|
|
|
unsigned long flags, i;
|
|
|
|
|
|
|
|
go[MASTER] = 0;
|
|
|
|
|
|
|
|
smp_start_sync_tick_client(cpu);
|
|
|
|
|
|
|
|
/* wait for client to be ready */
|
|
|
|
while (!go[MASTER])
|
2005-08-29 19:46:22 +00:00
|
|
|
rmb();
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
/* now let the client proceed into his loop */
|
|
|
|
go[MASTER] = 0;
|
2008-11-15 21:33:25 +00:00
|
|
|
membar_safe("#StoreLoad");
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2014-04-16 20:45:24 +00:00
|
|
|
raw_spin_lock_irqsave(&itc_sync_lock, flags);
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
for (i = 0; i < NUM_ROUNDS*NUM_ITERS; i++) {
|
|
|
|
while (!go[MASTER])
|
2005-08-29 19:46:22 +00:00
|
|
|
rmb();
|
2005-04-16 22:20:36 +00:00
|
|
|
go[MASTER] = 0;
|
2005-08-29 19:46:22 +00:00
|
|
|
wmb();
|
2005-04-16 22:20:36 +00:00
|
|
|
go[SLAVE] = tick_ops->get_tick();
|
2008-11-15 21:33:25 +00:00
|
|
|
membar_safe("#StoreLoad");
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
}
|
2014-04-16 20:45:24 +00:00
|
|
|
raw_spin_unlock_irqrestore(&itc_sync_lock, flags);
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
2007-07-14 07:45:16 +00:00
|
|
|
#if defined(CONFIG_SUN_LDOMS) && defined(CONFIG_HOTPLUG_CPU)
|
sparc: delete __cpuinit/__CPUINIT usage from all users
2013-06-17 19:43:14 +00:00
|
|
|
static void ldom_startcpu_cpuid(unsigned int cpu, unsigned long thread_reg,
|
|
|
|
void **descrp)
|
2007-07-14 07:45:16 +00:00
|
|
|
{
|
|
|
|
extern unsigned long sparc64_ttable_tl0;
|
|
|
|
extern unsigned long kern_locked_tte_data;
|
|
|
|
struct hvtramp_descr *hdesc;
|
|
|
|
unsigned long trampoline_ra;
|
|
|
|
struct trap_per_cpu *tb;
|
|
|
|
u64 tte_vaddr, tte_data;
|
|
|
|
unsigned long hv_err;
|
2008-03-22 00:01:38 +00:00
|
|
|
int i;
|
2007-07-14 07:45:16 +00:00
|
|
|
|
2008-03-22 00:01:38 +00:00
|
|
|
hdesc = kzalloc(sizeof(*hdesc) +
|
|
|
|
(sizeof(struct hvtramp_mapping) *
|
|
|
|
num_kernel_image_mappings - 1),
|
|
|
|
GFP_KERNEL);
|
2007-07-14 07:45:16 +00:00
|
|
|
if (!hdesc) {
|
2007-07-14 07:58:53 +00:00
|
|
|
printk(KERN_ERR "ldom_startcpu_cpuid: Cannot allocate "
|
2007-07-14 07:45:16 +00:00
|
|
|
"hvtramp_descr.\n");
|
|
|
|
return;
|
|
|
|
}
|
2009-04-01 00:15:40 +00:00
|
|
|
*descrp = hdesc;
|
2007-07-14 07:45:16 +00:00
|
|
|
|
|
|
|
hdesc->cpu = cpu;
|
2008-03-22 00:01:38 +00:00
|
|
|
hdesc->num_mappings = num_kernel_image_mappings;
|
2007-07-14 07:45:16 +00:00
|
|
|
|
|
|
|
tb = &trap_block[cpu];
|
|
|
|
|
|
|
|
hdesc->fault_info_va = (unsigned long) &tb->fault_info;
|
|
|
|
hdesc->fault_info_pa = kimage_addr_to_ra(&tb->fault_info);
|
|
|
|
|
|
|
|
hdesc->thread_reg = thread_reg;
|
|
|
|
|
|
|
|
tte_vaddr = (unsigned long) KERNBASE;
|
|
|
|
tte_data = kern_locked_tte_data;
|
|
|
|
|
2008-03-22 00:01:38 +00:00
|
|
|
for (i = 0; i < hdesc->num_mappings; i++) {
|
|
|
|
hdesc->maps[i].vaddr = tte_vaddr;
|
|
|
|
hdesc->maps[i].tte = tte_data;
|
2007-07-14 07:45:16 +00:00
|
|
|
tte_vaddr += 0x400000;
|
|
|
|
tte_data += 0x400000;
|
|
|
|
}
|
|
|
|
|
|
|
|
trampoline_ra = kimage_addr_to_ra(hv_cpu_startup);
|
|
|
|
|
|
|
|
hv_err = sun4v_cpu_start(cpu, trampoline_ra,
|
|
|
|
kimage_addr_to_ra(&sparc64_ttable_tl0),
|
|
|
|
__pa(hdesc));
|
2007-07-16 10:49:40 +00:00
|
|
|
if (hv_err)
|
|
|
|
printk(KERN_ERR "ldom_startcpu_cpuid: sun4v_cpu_start() "
|
|
|
|
"gives error %lu\n", hv_err);
|
2007-07-14 07:45:16 +00:00
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
extern unsigned long sparc64_cpu_startup;
|
|
|
|
|
|
|
|
/* The OBP cpu startup callback truncates the 3rd arg cookie to
|
|
|
|
* 32-bits (I think) so to be safe we have it read the pointer
|
|
|
|
* contained here so we work on >4GB machines. -DaveM
|
|
|
|
*/
|
|
|
|
static struct thread_info *cpu_new_thread = NULL;
|
|
|
|
|
sparc: delete __cpuinit/__CPUINIT usage from all users
2013-06-17 19:43:14 +00:00
|
|
|
static int smp_boot_one_cpu(unsigned int cpu, struct task_struct *idle)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
unsigned long entry =
|
|
|
|
(unsigned long)(&sparc64_cpu_startup);
|
|
|
|
unsigned long cookie =
|
|
|
|
(unsigned long)(&cpu_new_thread);
|
2009-04-01 00:15:40 +00:00
|
|
|
void *descr = NULL;
|
2006-02-15 10:26:54 +00:00
|
|
|
int timeout, ret;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
callin_flag = 0;
|
2012-04-20 13:05:56 +00:00
|
|
|
cpu_new_thread = task_thread_info(idle);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2006-02-15 10:26:54 +00:00
|
|
|
if (tlb_type == hypervisor) {
|
2007-07-14 07:45:16 +00:00
|
|
|
#if defined(CONFIG_SUN_LDOMS) && defined(CONFIG_HOTPLUG_CPU)
|
[SPARC64]: Initial LDOM cpu hotplug support.
2007-07-13 23:03:42 +00:00
|
|
|
if (ldom_domaining_enabled)
|
|
|
|
ldom_startcpu_cpuid(cpu,
|
2009-04-01 00:15:40 +00:00
|
|
|
(unsigned long) cpu_new_thread,
|
|
|
|
&descr);
|
[SPARC64]: Initial LDOM cpu hotplug support.
2007-07-13 23:03:42 +00:00
|
|
|
else
|
|
|
|
#endif
|
|
|
|
prom_startcpu_cpuid(cpu, entry, cookie);
|
2006-02-15 10:26:54 +00:00
|
|
|
} else {
|
2007-05-25 22:49:59 +00:00
|
|
|
struct device_node *dp = of_find_node_by_cpuid(cpu);
|
2006-02-15 10:26:54 +00:00
|
|
|
|
2010-01-28 21:06:53 +00:00
|
|
|
prom_startcpu(dp->phandle, entry, cookie);
|
2006-02-15 10:26:54 +00:00
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
|
[SPARC64]: Initial LDOM cpu hotplug support.
2007-07-13 23:03:42 +00:00
|
|
|
for (timeout = 0; timeout < 50000; timeout++) {
|
2005-04-16 22:20:36 +00:00
|
|
|
if (callin_flag)
|
|
|
|
break;
|
|
|
|
udelay(100);
|
|
|
|
}
|
[SPARC64]: Get SUN4V SMP working.
The sibling cpu bringup is extremely fragile. We can only
perform the most basic calls until we take over the trap
table from the firmware/hypervisor on the new cpu.
This means no accesses to %g4, %g5, %g6 since those can't be
TLB translated without our trap handlers.
In order to achieve this:
1) Change sun4v_init_mondo_queues() so that it can operate in
several modes.
It can allocate the queues, or install them in the current
processor, or both.
The boot cpu does both in it's call early on.
Later, the boot cpu allocates the sibling cpu queue, starts
the sibling cpu, then the sibling cpu loads them in.
2) init_cur_cpu_trap() is changed to take the current_thread_info()
as an argument instead of reading %g6 directly on the current
cpu.
3) Create a trampoline stack for the sibling cpus. We do our basic
kernel calls using this stack, which is locked into the kernel
image, then go to our proper thread stack after taking over the
trap table.
4) While we are in this delicate startup state, we put 0xdeadbeef
into %g4/%g5/%g6 in order to catch accidental accesses.
5) On the final prom_set_trap_table*() call, we put &init_thread_union
into %g6. This is a hack to make prom_world(0) work. All that
wants to do is restore the %asi register using
get_thread_current_ds().
Longer term we should just do the OBP calls to set the trap table by
hand just like we do for everything else. This would avoid that silly
prom_world(0) issue, then we can remove the init_thread_union hack.
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-02-17 09:29:17 +00:00
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
if (callin_flag) {
|
|
|
|
ret = 0;
|
|
|
|
} else {
|
|
|
|
printk("Processor %d is stuck.\n", cpu);
|
|
|
|
ret = -ENODEV;
|
|
|
|
}
|
|
|
|
cpu_new_thread = NULL;
|
|
|
|
|
2009-04-01 00:15:40 +00:00
|
|
|
kfree(descr);
|
2007-07-15 08:08:03 +00:00
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void spitfire_xcall_helper(u64 data0, u64 data1, u64 data2, u64 pstate, unsigned long cpu)
|
|
|
|
{
|
|
|
|
u64 result, target;
|
|
|
|
int stuck, tmp;
|
|
|
|
|
|
|
|
if (this_is_starfire) {
|
|
|
|
/* map to real upaid */
|
|
|
|
cpu = (((cpu & 0x3c) << 1) |
|
|
|
|
((cpu & 0x40) >> 4) |
|
|
|
|
(cpu & 0x3));
|
|
|
|
}
|
|
|
|
|
|
|
|
target = (cpu << 14) | 0x70;
|
|
|
|
again:
|
|
|
|
/* Ok, this is the real Spitfire Errata #54.
|
|
|
|
* One must read back from a UDB internal register
|
|
|
|
* after writes to the UDB interrupt dispatch, but
|
|
|
|
* before the membar Sync for that write.
|
|
|
|
* So we use the high UDB control register (ASI 0x7f,
|
|
|
|
* ADDR 0x20) for the dummy read. -DaveM
|
|
|
|
*/
|
|
|
|
tmp = 0x40;
|
|
|
|
__asm__ __volatile__(
|
|
|
|
"wrpr %1, %2, %%pstate\n\t"
|
|
|
|
"stxa %4, [%0] %3\n\t"
|
|
|
|
"stxa %5, [%0+%8] %3\n\t"
|
|
|
|
"add %0, %8, %0\n\t"
|
|
|
|
"stxa %6, [%0+%8] %3\n\t"
|
|
|
|
"membar #Sync\n\t"
|
|
|
|
"stxa %%g0, [%7] %3\n\t"
|
|
|
|
"membar #Sync\n\t"
|
|
|
|
"mov 0x20, %%g1\n\t"
|
|
|
|
"ldxa [%%g1] 0x7f, %%g0\n\t"
|
|
|
|
"membar #Sync"
|
|
|
|
: "=r" (tmp)
|
|
|
|
: "r" (pstate), "i" (PSTATE_IE), "i" (ASI_INTR_W),
|
|
|
|
"r" (data0), "r" (data1), "r" (data2), "r" (target),
|
|
|
|
"r" (0x10), "0" (tmp)
|
|
|
|
: "g1");
|
|
|
|
|
|
|
|
/* NOTE: PSTATE_IE is still clear. */
|
|
|
|
stuck = 100000;
|
|
|
|
do {
|
|
|
|
__asm__ __volatile__("ldxa [%%g0] %1, %0"
|
|
|
|
: "=r" (result)
|
|
|
|
: "i" (ASI_INTR_DISPATCH_STAT));
|
|
|
|
if (result == 0) {
|
|
|
|
__asm__ __volatile__("wrpr %0, 0x0, %%pstate"
|
|
|
|
: : "r" (pstate));
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
stuck -= 1;
|
|
|
|
if (stuck == 0)
|
|
|
|
break;
|
|
|
|
} while (result & 0x1);
|
|
|
|
__asm__ __volatile__("wrpr %0, 0x0, %%pstate"
|
|
|
|
: : "r" (pstate));
|
|
|
|
if (stuck == 0) {
|
2009-01-06 21:19:28 +00:00
|
|
|
printk("CPU[%d]: mondo stuckage result[%016llx]\n",
|
2005-04-16 22:20:36 +00:00
|
|
|
smp_processor_id(), result);
|
|
|
|
} else {
|
|
|
|
udelay(2);
|
|
|
|
goto again;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2008-08-04 23:42:58 +00:00
|
|
|
static void spitfire_xcall_deliver(struct trap_per_cpu *tb, int cnt)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2008-08-04 23:42:58 +00:00
|
|
|
u64 *mondo, data0, data1, data2;
|
|
|
|
u16 *cpu_list;
|
2005-04-16 22:20:36 +00:00
|
|
|
u64 pstate;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
__asm__ __volatile__("rdpr %%pstate, %0" : "=r" (pstate));
|
2008-08-04 23:42:58 +00:00
|
|
|
cpu_list = __va(tb->cpu_list_pa);
|
|
|
|
mondo = __va(tb->cpu_mondo_block_pa);
|
|
|
|
data0 = mondo[0];
|
|
|
|
data1 = mondo[1];
|
|
|
|
data2 = mondo[2];
|
|
|
|
for (i = 0; i < cnt; i++)
|
|
|
|
spitfire_xcall_helper(data0, data1, data2, pstate, cpu_list[i]);
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Cheetah now allows sending the whole 64 bytes of data in the interrupt
|
|
|
|
* packet, but we have no use for that. However we do take advantage of
|
|
|
|
* the new pipelining feature (ie. dispatch to multiple cpus simultaneously).
|
|
|
|
*/
|
2008-08-04 23:42:58 +00:00
|
|
|
static void cheetah_xcall_deliver(struct trap_per_cpu *tb, int cnt)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2007-05-26 08:14:43 +00:00
|
|
|
int nack_busy_id, is_jbus, need_more;
|
2008-08-04 23:42:58 +00:00
|
|
|
u64 *mondo, pstate, ver, busy_mask;
|
|
|
|
u16 *cpu_list;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2008-08-04 23:42:58 +00:00
|
|
|
cpu_list = __va(tb->cpu_list_pa);
|
|
|
|
mondo = __va(tb->cpu_mondo_block_pa);
|
2008-08-04 06:24:26 +00:00
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
/* Unfortunately, someone at Sun had the brilliant idea to make the
|
|
|
|
* busy/nack fields hard-coded by ITID number for this Ultra-III
|
|
|
|
* derivative processor.
|
|
|
|
*/
|
|
|
|
__asm__ ("rdpr %%ver, %0" : "=r" (ver));
|
2006-02-27 07:27:19 +00:00
|
|
|
is_jbus = ((ver >> 32) == __JALAPENO_ID ||
|
|
|
|
(ver >> 32) == __SERRANO_ID);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
__asm__ __volatile__("rdpr %%pstate, %0" : "=r" (pstate));
|
|
|
|
|
|
|
|
retry:
|
2007-05-26 08:14:43 +00:00
|
|
|
need_more = 0;
|
2005-04-16 22:20:36 +00:00
|
|
|
__asm__ __volatile__("wrpr %0, %1, %%pstate\n\t"
|
|
|
|
: : "r" (pstate), "i" (PSTATE_IE));
|
|
|
|
|
|
|
|
/* Setup the dispatch data registers. */
|
|
|
|
__asm__ __volatile__("stxa %0, [%3] %6\n\t"
|
|
|
|
"stxa %1, [%4] %6\n\t"
|
|
|
|
"stxa %2, [%5] %6\n\t"
|
|
|
|
"membar #Sync\n\t"
|
|
|
|
: /* no outputs */
|
2008-08-04 23:42:58 +00:00
|
|
|
: "r" (mondo[0]), "r" (mondo[1]), "r" (mondo[2]),
|
2005-04-16 22:20:36 +00:00
|
|
|
"r" (0x40), "r" (0x50), "r" (0x60),
|
|
|
|
"i" (ASI_INTR_W));
|
|
|
|
|
|
|
|
nack_busy_id = 0;
|
2007-12-12 15:31:46 +00:00
|
|
|
busy_mask = 0;
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
int i;
|
|
|
|
|
2008-08-04 23:42:58 +00:00
|
|
|
for (i = 0; i < cnt; i++) {
|
|
|
|
u64 target, nr;
|
|
|
|
|
|
|
|
nr = cpu_list[i];
|
|
|
|
if (nr == 0xffff)
|
|
|
|
continue;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2008-08-04 23:42:58 +00:00
|
|
|
target = (nr << 14) | 0x70;
|
2007-12-12 15:31:46 +00:00
|
|
|
if (is_jbus) {
|
2008-08-04 23:42:58 +00:00
|
|
|
busy_mask |= (0x1UL << (nr * 2));
|
2007-12-12 15:31:46 +00:00
|
|
|
} else {
|
2005-04-16 22:20:36 +00:00
|
|
|
target |= (nack_busy_id << 24);
|
2007-12-12 15:31:46 +00:00
|
|
|
busy_mask |= (0x1UL <<
|
|
|
|
(nack_busy_id * 2));
|
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
__asm__ __volatile__(
|
|
|
|
"stxa %%g0, [%0] %1\n\t"
|
|
|
|
"membar #Sync\n\t"
|
|
|
|
: /* no outputs */
|
|
|
|
: "r" (target), "i" (ASI_INTR_W));
|
|
|
|
nack_busy_id++;
|
2007-05-26 08:14:43 +00:00
|
|
|
if (nack_busy_id == 32) {
|
|
|
|
need_more = 1;
|
|
|
|
break;
|
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Now, poll for completion. */
|
|
|
|
{
|
2007-12-12 15:31:46 +00:00
|
|
|
u64 dispatch_stat, nack_mask;
|
2005-04-16 22:20:36 +00:00
|
|
|
long stuck;
|
|
|
|
|
|
|
|
stuck = 100000 * nack_busy_id;
|
2007-12-12 15:31:46 +00:00
|
|
|
nack_mask = busy_mask << 1;
|
2005-04-16 22:20:36 +00:00
|
|
|
do {
|
|
|
|
__asm__ __volatile__("ldxa [%%g0] %1, %0"
|
|
|
|
: "=r" (dispatch_stat)
|
|
|
|
: "i" (ASI_INTR_DISPATCH_STAT));
|
2007-12-12 15:31:46 +00:00
|
|
|
if (!(dispatch_stat & (busy_mask | nack_mask))) {
|
2005-04-16 22:20:36 +00:00
|
|
|
__asm__ __volatile__("wrpr %0, 0x0, %%pstate"
|
|
|
|
: : "r" (pstate));
|
2007-05-26 08:14:43 +00:00
|
|
|
if (unlikely(need_more)) {
|
2008-08-04 23:42:58 +00:00
|
|
|
int i, this_cnt = 0;
|
|
|
|
for (i = 0; i < cnt; i++) {
|
|
|
|
if (cpu_list[i] == 0xffff)
|
|
|
|
continue;
|
|
|
|
cpu_list[i] = 0xffff;
|
|
|
|
this_cnt++;
|
|
|
|
if (this_cnt == 32)
|
2007-05-26 08:14:43 +00:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
goto retry;
|
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
if (!--stuck)
|
|
|
|
break;
|
2007-12-12 15:31:46 +00:00
|
|
|
} while (dispatch_stat & busy_mask);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
__asm__ __volatile__("wrpr %0, 0x0, %%pstate"
|
|
|
|
: : "r" (pstate));
|
|
|
|
|
2007-12-12 15:31:46 +00:00
|
|
|
if (dispatch_stat & busy_mask) {
|
2005-04-16 22:20:36 +00:00
|
|
|
/* Busy bits will not clear, continue instead
|
|
|
|
* of freezing up on this cpu.
|
|
|
|
*/
|
2009-01-06 21:19:28 +00:00
|
|
|
printk("CPU[%d]: mondo stuckage result[%016llx]\n",
|
2005-04-16 22:20:36 +00:00
|
|
|
smp_processor_id(), dispatch_stat);
|
|
|
|
} else {
|
|
|
|
int i, this_busy_nack = 0;
|
|
|
|
|
|
|
|
/* Delay some random time with interrupts enabled
|
|
|
|
* to prevent deadlock.
|
|
|
|
*/
|
|
|
|
udelay(2 * nack_busy_id);
|
|
|
|
|
|
|
|
/* Clear out the mask bits for cpus which did not
|
|
|
|
* NACK us.
|
|
|
|
*/
|
2008-08-04 23:42:58 +00:00
|
|
|
for (i = 0; i < cnt; i++) {
|
|
|
|
u64 check_mask, nr;
|
|
|
|
|
|
|
|
nr = cpu_list[i];
|
|
|
|
if (nr == 0xffff)
|
|
|
|
continue;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2006-02-27 07:27:19 +00:00
|
|
|
if (is_jbus)
|
2008-08-04 23:42:58 +00:00
|
|
|
check_mask = (0x2UL << (2*nr));
|
2005-04-16 22:20:36 +00:00
|
|
|
else
|
|
|
|
check_mask = (0x2UL <<
|
|
|
|
this_busy_nack);
|
|
|
|
if ((dispatch_stat & check_mask) == 0)
|
2008-08-04 23:42:58 +00:00
|
|
|
cpu_list[i] = 0xffff;
|
2005-04-16 22:20:36 +00:00
|
|
|
this_busy_nack += 2;
|
2007-05-26 08:14:43 +00:00
|
|
|
if (this_busy_nack == 64)
|
|
|
|
break;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
goto retry;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
sparc64: Measure receiver forward progress to avoid send mondo timeout
A large sun4v SPARC system may have moments of intensive xcall activities,
usually caused by unmapping many pages on many CPUs concurrently. This can
flood receivers with CPU mondo interrupts for an extended period, causing
some unlucky senders to hit send-mondo timeout. This problem gets worse
as cpu count increases because sometimes mappings must be invalidated on
all CPUs, and sometimes all CPUs may gang up on a single CPU.
But a busy system is not a broken system. In the above scenario, as long
as the receiver is making forward progress processing mondo interrupts,
the sender should continue to retry.
This patch implements the receiver's forward progress meter by introducing
a per cpu counter 'cpu_mondo_counter[cpu]' where 'cpu' is in the range
of 0..NR_CPUS. The receiver increments its counter as soon as it receives
a mondo and the sender tracks the receiver's counter. If the receiver has
stopped making forward progress when the retry limit is reached, the sender
declares send-mondo-timeout and panics; otherwise, the receiver is allowed
to keep making forward progress.
In addition, it's been observed that PCIe hotplug events generate Correctable
Errors that are handled by hypervisor and then OS. Hypervisor 'borrows'
a guest cpu strand briefly to provide the service. If the cpu strand is
simultaneously the only cpu targeted by a mondo, it may not be available
for the mondo in 20msec, causing SUN4V mondo timeout. It appears that 1 second
is the agreed wait time between hypervisor and guest OS, this patch makes
the adjustment.
Orabug: 25476541
Orabug: 26417466
Signed-off-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Steve Sistare <steven.sistare@oracle.com>
Reviewed-by: Anthony Yznaga <anthony.yznaga@oracle.com>
Reviewed-by: Rob Gardner <rob.gardner@oracle.com>
Reviewed-by: Thomas Tai <thomas.tai@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-07-11 18:00:54 +00:00
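
To make the retry policy above concrete, here is a minimal, self-contained sketch of the forward-progress meter (the names mondo_counter and send_one_mondo are illustrative stand-ins, not the kernel's; the real logic lives in hypervisor_xcall_deliver() below). The sender resets its retry budget whenever the tracked receiver's counter advances, and declares a timeout only when the budget is exhausted with no movement.

/* Sketch only: a receiver bumps mondo_counter for every mondo it
 * handles; the sender resets its retry budget whenever that counter
 * moves.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RETRY_LIMIT 500000

static uint64_t mondo_counter;		/* per-cpu array in the real code */

/* Hypothetical stand-in: the receiver is busy handling other mondos
 * for a while before it finally takes ours.
 */
static bool send_one_mondo(void)
{
	static long busy = 1000000;

	if (busy-- > 0) {
		mondo_counter++;	/* busy, but making progress */
		return false;
	}
	return true;			/* mondo accepted */
}

int main(void)
{
	uint64_t seen = mondo_counter;
	int retries = 0;

	while (!send_one_mondo()) {
		if (mondo_counter != seen) {
			seen = mondo_counter;	/* progress: reset budget */
			retries = 0;
		} else if (++retries > RETRY_LIMIT) {
			puts("send-mondo timeout");	/* kernel panics here */
			return 1;
		}
	}
	puts("delivered");
	return 0;
}
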
#define CPU_MONDO_COUNTER(cpuid)	(cpu_mondo_counter[cpuid])
#define MONDO_USEC_WAIT_MIN		2
#define MONDO_USEC_WAIT_MAX		100
#define MONDO_RETRY_LIMIT		500000

/* Multi-cpu list version.
 *
 * Deliver xcalls to 'cnt' number of cpus in 'cpu_list'.
 * Sometimes not all cpus receive the mondo, requiring us to re-send
 * the mondo until all cpus have received it, or until cpus are truly
 * stuck and unable to receive the mondo, in which case we time out.
 * Occasionally a target cpu strand is borrowed briefly by the hypervisor
 * to perform a guest service, such as PCIe error handling.  Considering
 * the service time, an overall wait of 1 second is reasonable for 1 cpu.
 * Two in-between mondo check wait times are defined: 2 usec for a quick
 * turn around on a single cpu, and up to 100 usec for a large cpu count.
 * Delivering a mondo to a large number of cpus can take longer, so we
 * adjust the retry count as long as the target cpus are making forward
 * progress.
 */
static void hypervisor_xcall_deliver(struct trap_per_cpu *tb, int cnt)
{
	int this_cpu, tot_cpus, prev_sent, i, rem;
	int usec_wait, retries, tot_retries;
	u16 first_cpu = 0xffff;
	unsigned long xc_rcvd = 0;
	unsigned long status;
	int ecpuerror_id = 0;
	int enocpu_id = 0;
	u16 *cpu_list;
	u16 cpu;

	this_cpu = smp_processor_id();
	cpu_list = __va(tb->cpu_list_pa);

	usec_wait = cnt * MONDO_USEC_WAIT_MIN;
	if (usec_wait > MONDO_USEC_WAIT_MAX)
		usec_wait = MONDO_USEC_WAIT_MAX;
	retries = tot_retries = 0;
	tot_cpus = cnt;
	prev_sent = 0;

	do {
		int n_sent, mondo_delivered, target_cpu_busy;

		status = sun4v_cpu_mondo_send(cnt,
					      tb->cpu_list_pa,
					      tb->cpu_mondo_block_pa);

		/* HV_EOK means all cpus received the xcall, we're done.  */
		if (likely(status == HV_EOK))
			goto xcall_done;

		/* If not one of these non-fatal errors, panic. */
		if (unlikely((status != HV_EWOULDBLOCK) &&
			     (status != HV_ECPUERROR) &&
			     (status != HV_ENOCPU)))
			goto fatal_errors;

		/* First, see if we made any forward progress.
		 *
		 * Go through the cpu_list, count the target cpus that have
		 * received our mondo (n_sent), and those that did not (rem).
		 * Re-pack cpu_list with the cpus that remain to be retried
		 * at the front - this simplifies tracking the truly stalled
		 * cpus.
		 *
		 * The hypervisor indicates successful sends by setting
		 * cpu list entries to the value 0xffff.
		 *
		 * EWOULDBLOCK means some target cpus did not receive the
		 * mondo and retrying usually helps.
		 *
		 * ECPUERROR means at least one target cpu is in the error
		 * state; it's usually safe to skip the faulty cpu and retry.
		 *
		 * ENOCPU means one of the target cpus doesn't belong to the
		 * domain, perhaps because it was offlined, which is
		 * unexpected but not fatal; it's okay to skip the offlined
		 * cpu.
		 */
		rem = 0;
		n_sent = 0;
		for (i = 0; i < cnt; i++) {
			cpu = cpu_list[i];
			if (likely(cpu == 0xffff)) {
				n_sent++;
			} else if ((status == HV_ECPUERROR) &&
				   (sun4v_cpu_state(cpu) == HV_CPU_STATE_ERROR)) {
				ecpuerror_id = cpu + 1;
			} else if (status == HV_ENOCPU && !cpu_online(cpu)) {
				enocpu_id = cpu + 1;
			} else {
				cpu_list[rem++] = cpu;
			}
		}

		/* No cpu remained, we're done. */
		if (rem == 0)
			break;

		/* Otherwise, update the cpu count for retry. */
		cnt = rem;

		/* Record the overall number of mondos received by the
		 * first of the remaining cpus.
		 */
		if (first_cpu != cpu_list[0]) {
			first_cpu = cpu_list[0];
			xc_rcvd = CPU_MONDO_COUNTER(first_cpu);
		}

		/* Was any mondo delivered successfully? */
		mondo_delivered = (n_sent > prev_sent);
		prev_sent = n_sent;

		/* Or, was any target cpu busy processing other mondos? */
		target_cpu_busy = (xc_rcvd < CPU_MONDO_COUNTER(first_cpu));
		xc_rcvd = CPU_MONDO_COUNTER(first_cpu);

		/* The retry count counts rounds with no progress.  If we're
		 * making progress, reset the retry count.
		 */
		if (likely(mondo_delivered || target_cpu_busy)) {
			tot_retries += retries;
			retries = 0;
		} else if (unlikely(retries > MONDO_RETRY_LIMIT)) {
			goto fatal_mondo_timeout;
		}

		/* Delay a little bit to let other cpus catch up on
		 * their cpu mondo queue work.
		 */
		if (!mondo_delivered)
			udelay(usec_wait);

		retries++;
	} while (1);
|
2006-02-28 23:10:26 +00:00
|
|
|
|
sparc64: Measure receiver forward progress to avoid send mondo timeout
A large sun4v SPARC system may have moments of intensive xcall activities,
usually caused by unmapping many pages on many CPUs concurrently. This can
flood receivers with CPU mondo interrupts for an extended period, causing
some unlucky senders to hit send-mondo timeout. This problem gets worse
as cpu count increases because sometimes mappings must be invalidated on
all CPUs, and sometimes all CPUs may gang up on a single CPU.
But a busy system is not a broken system. In the above scenario, as long
as the receiver is making forward progress processing mondo interrupts,
the sender should continue to retry.
This patch implements the receiver's forward progress meter by introducing
a per cpu counter 'cpu_mondo_counter[cpu]' where 'cpu' is in the range
of 0..NR_CPUS. The receiver increments its counter as soon as it receives
a mondo and the sender tracks the receiver's counter. If the receiver has
stopped making forward progress when the retry limit is reached, the sender
declares send-mondo-timeout and panic; otherwise, the receiver is allowed
to keep making forward progress.
In addition, it's been observed that PCIe hotplug events generate Correctable
Errors that are handled by hypervisor and then OS. Hypervisor 'borrows'
a guest cpu strand briefly to provide the service. If the cpu strand is
simultaneously the only cpu targeted by a mondo, it may not be available
for the mondo in 20msec, causing SUN4V mondo timeout. It appears that 1 second
is the agreed wait time between hypervisor and guest OS, this patch makes
the adjustment.
Orabug: 25476541
Orabug: 26417466
Signed-off-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Steve Sistare <steven.sistare@oracle.com>
Reviewed-by: Anthony Yznaga <anthony.yznaga@oracle.com>
Reviewed-by: Rob Gardner <rob.gardner@oracle.com>
Reviewed-by: Thomas Tai <thomas.tai@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-07-11 18:00:54 +00:00
|
|
|
xcall_done:
|
|
|
|
if (unlikely(ecpuerror_id > 0)) {
|
|
|
|
pr_crit("CPU[%d]: SUN4V mondo cpu error, target cpu(%d) was in error state\n",
|
|
|
|
this_cpu, ecpuerror_id - 1);
|
|
|
|
} else if (unlikely(enocpu_id > 0)) {
|
|
|
|
pr_crit("CPU[%d]: SUN4V mondo cpu error, target cpu(%d) does not belong to the domain\n",
|
|
|
|
this_cpu, enocpu_id - 1);
|
|
|
|
}
|
2006-02-28 23:10:26 +00:00
|
|
|
return;
|
|
|
|
|
sparc64: Measure receiver forward progress to avoid send mondo timeout
A large sun4v SPARC system may have moments of intensive xcall activities,
usually caused by unmapping many pages on many CPUs concurrently. This can
flood receivers with CPU mondo interrupts for an extended period, causing
some unlucky senders to hit send-mondo timeout. This problem gets worse
as cpu count increases because sometimes mappings must be invalidated on
all CPUs, and sometimes all CPUs may gang up on a single CPU.
But a busy system is not a broken system. In the above scenario, as long
as the receiver is making forward progress processing mondo interrupts,
the sender should continue to retry.
This patch implements the receiver's forward progress meter by introducing
a per cpu counter 'cpu_mondo_counter[cpu]' where 'cpu' is in the range
of 0..NR_CPUS. The receiver increments its counter as soon as it receives
a mondo and the sender tracks the receiver's counter. If the receiver has
stopped making forward progress when the retry limit is reached, the sender
declares send-mondo-timeout and panic; otherwise, the receiver is allowed
to keep making forward progress.
In addition, it's been observed that PCIe hotplug events generate Correctable
Errors that are handled by hypervisor and then OS. Hypervisor 'borrows'
a guest cpu strand briefly to provide the service. If the cpu strand is
simultaneously the only cpu targeted by a mondo, it may not be available
for the mondo in 20msec, causing SUN4V mondo timeout. It appears that 1 second
is the agreed wait time between hypervisor and guest OS, this patch makes
the adjustment.
Orabug: 25476541
Orabug: 26417466
Signed-off-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Steve Sistare <steven.sistare@oracle.com>
Reviewed-by: Anthony Yznaga <anthony.yznaga@oracle.com>
Reviewed-by: Rob Gardner <rob.gardner@oracle.com>
Reviewed-by: Thomas Tai <thomas.tai@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-07-11 18:00:54 +00:00
|
|
|
fatal_errors:
|
|
|
|
/* fatal errors include bad alignment, etc */
|
|
|
|
pr_crit("CPU[%d]: Args were cnt(%d) cpulist_pa(%lx) mondo_block_pa(%lx)\n",
|
|
|
|
this_cpu, tot_cpus, tb->cpu_list_pa, tb->cpu_mondo_block_pa);
|
|
|
|
panic("Unexpected SUN4V mondo error %lu\n", status);
|
|
|
|
|
2006-02-28 23:10:26 +00:00
|
|
|
fatal_mondo_timeout:
|
2017-07-11 18:00:54 +00:00
|
|
|
/* some cpus being non-responsive to the cpu mondo */
|
|
|
|
pr_crit("CPU[%d]: SUN4V mondo timeout, cpu(%d) made no forward progress after %d retries. Total target cpus(%d).\n",
|
|
|
|
this_cpu, first_cpu, (tot_retries + retries), tot_cpus);
|
|
|
|
panic("SUN4V mondo timeout panic\n");
|
2006-02-09 00:41:20 +00:00
|
|
|
}
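As an aside, the forward-progress scheme described in the send-mondo timeout
commit above can be sketched as follows. This is a reconstruction from the
commit message, not the actual patch: try_send_one_mondo() is a hypothetical
stand-in for the hypervisor send, and the real retry bookkeeping differs.

/* Illustrative sketch only -- reconstructed from the commit message.
 * The receiver bumps its slot in cpu_mondo_counter[] for every mondo
 * it handles; the sender treats a moving counter as forward progress.
 */
static unsigned long cpu_mondo_counter[NR_CPUS];

static void sender_retry_sketch(int cpu, int retry_limit)
{
	unsigned long seen = cpu_mondo_counter[cpu];
	int retries = 0;

	while (try_send_one_mondo(cpu) != 0) {	/* hypothetical send */
		if (cpu_mondo_counter[cpu] != seen) {
			/* Receiver made forward progress: reset the
			 * retry budget rather than counting toward a
			 * timeout.
			 */
			seen = cpu_mondo_counter[cpu];
			retries = 0;
		} else if (++retries >= retry_limit) {
			panic("SUN4V mondo timeout panic\n");
		}
		udelay(1);	/* %stick-based, safe with IRQs off */
	}
}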
|
2006-02-04 11:10:53 +00:00
|
|
|
|
2008-08-04 23:42:58 +00:00
|
|
|
static void (*xcall_deliver_impl)(struct trap_per_cpu *, int);
|
2008-08-04 23:16:20 +00:00
|
|
|
|
|
|
|
static void xcall_deliver(u64 data0, u64 data1, u64 data2, const cpumask_t *mask)
|
|
|
|
{
|
2008-08-04 23:42:58 +00:00
|
|
|
struct trap_per_cpu *tb;
|
|
|
|
int this_cpu, i, cnt;
|
2008-08-04 23:18:40 +00:00
|
|
|
unsigned long flags;
|
2008-08-04 23:42:58 +00:00
|
|
|
u16 *cpu_list;
|
|
|
|
u64 *mondo;
|
2008-08-04 23:18:40 +00:00
|
|
|
|
|
|
|
/* We have to do this whole thing with interrupts fully disabled.
|
|
|
|
* Otherwise if we send an xcall from interrupt context it will
|
|
|
|
* corrupt both our mondo block and cpu list state.
|
|
|
|
*
|
|
|
|
* One consequence of this is that we cannot use timeout mechanisms
|
|
|
|
* that depend upon interrupts being delivered locally. So, for
|
|
|
|
* example, we cannot sample jiffies and expect it to advance.
|
|
|
|
*
|
|
|
|
* Fortunately, udelay() uses %stick/%tick so we can use that.
|
|
|
|
*/
|
|
|
|
local_irq_save(flags);
|
2008-08-04 23:42:58 +00:00
|
|
|
|
|
|
|
this_cpu = smp_processor_id();
|
|
|
|
tb = &trap_block[this_cpu];
|
|
|
|
|
|
|
|
mondo = __va(tb->cpu_mondo_block_pa);
|
|
|
|
mondo[0] = data0;
|
|
|
|
mondo[1] = data1;
|
|
|
|
mondo[2] = data2;
|
|
|
|
wmb();
|
|
|
|
|
|
|
|
cpu_list = __va(tb->cpu_list_pa);
|
|
|
|
|
|
|
|
/* Setup the initial cpu list. */
|
|
|
|
cnt = 0;
|
2008-12-08 09:10:08 +00:00
|
|
|
for_each_cpu(i, mask) {
|
2008-08-04 23:42:58 +00:00
|
|
|
if (i == this_cpu || !cpu_online(i))
|
|
|
|
continue;
|
|
|
|
cpu_list[cnt++] = i;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (cnt)
|
|
|
|
xcall_deliver_impl(tb, cnt);
|
|
|
|
|
2008-08-04 23:18:40 +00:00
|
|
|
local_irq_restore(flags);
|
2008-08-04 23:16:20 +00:00
|
|
|
}
|
2008-08-04 05:52:41 +00:00
|
|
|
|
2008-08-04 07:51:18 +00:00
|
|
|
/* Send cross call to all processors mentioned in MASK_P
|
|
|
|
* except self. Really, there are only two cases currently,
|
2011-05-16 20:38:07 +00:00
|
|
|
* "cpu_online_mask" and "mm_cpumask(mm)".
|
2005-04-16 22:20:36 +00:00
|
|
|
*/
|
2008-08-04 23:56:15 +00:00
|
|
|
static void smp_cross_call_masked(unsigned long *func, u32 ctx, u64 data1, u64 data2, const cpumask_t *mask)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
u64 data0 = (((u64)ctx)<<32 | (((u64)func) & 0xffffffff));
|
|
|
|
|
2008-08-04 23:56:15 +00:00
|
|
|
xcall_deliver(data0, data1, data2, mask);
|
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2008-08-04 23:56:15 +00:00
|
|
|
/* Send cross call to all processors except self. */
|
|
|
|
static void smp_cross_call(unsigned long *func, u32 ctx, u64 data1, u64 data2)
|
|
|
|
{
|
2011-05-16 20:38:07 +00:00
|
|
|
smp_cross_call_masked(func, ctx, data1, data2, cpu_online_mask);
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
extern unsigned long xcall_sync_tick;
|
|
|
|
|
|
|
|
static void smp_start_sync_tick_client(int cpu)
|
|
|
|
{
|
2008-08-04 07:02:31 +00:00
|
|
|
xcall_deliver((u64) &xcall_sync_tick, 0, 0,
|
2011-05-16 20:38:07 +00:00
|
|
|
cpumask_of(cpu));
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
extern unsigned long xcall_call_function;
|
|
|
|
|
2009-03-16 04:10:22 +00:00
|
|
|
void arch_send_call_function_ipi_mask(const struct cpumask *mask)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2009-03-16 04:10:22 +00:00
|
|
|
xcall_deliver((u64) &xcall_call_function, 0, 0, mask);
|
2008-07-18 06:44:50 +00:00
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2008-07-18 06:44:50 +00:00
|
|
|
extern unsigned long xcall_call_function_single;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2008-07-18 06:44:50 +00:00
|
|
|
void arch_send_call_function_single_ipi(int cpu)
|
|
|
|
{
|
2008-08-04 06:56:28 +00:00
|
|
|
xcall_deliver((u64) &xcall_call_function_single, 0, 0,
|
2011-05-16 20:38:07 +00:00
|
|
|
cpumask_of(cpu));
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
2010-04-07 11:41:33 +00:00
|
|
|
void __irq_entry smp_call_function_client(int irq, struct pt_regs *regs)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2008-07-18 06:44:50 +00:00
|
|
|
clear_softint(1 << irq);
|
2014-11-07 17:50:48 +00:00
|
|
|
irq_enter();
|
2008-07-18 06:44:50 +00:00
|
|
|
generic_smp_call_function_interrupt();
|
2014-11-07 17:50:48 +00:00
|
|
|
irq_exit();
|
2008-07-18 06:44:50 +00:00
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2010-04-07 11:41:33 +00:00
|
|
|
void __irq_entry smp_call_function_single_client(int irq, struct pt_regs *regs)
|
2008-07-18 06:44:50 +00:00
|
|
|
{
|
2005-04-16 22:20:36 +00:00
|
|
|
clear_softint(1 << irq);
|
2014-11-07 17:50:48 +00:00
|
|
|
irq_enter();
|
2008-07-18 06:44:50 +00:00
|
|
|
generic_smp_call_function_single_interrupt();
|
2014-11-07 17:50:48 +00:00
|
|
|
irq_exit();
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
2006-02-01 02:31:38 +00:00
|
|
|
static void tsb_sync(void *info)
|
|
|
|
{
|
2006-03-28 21:29:26 +00:00
|
|
|
struct trap_per_cpu *tp = &trap_block[raw_smp_processor_id()];
|
2006-02-01 02:31:38 +00:00
|
|
|
struct mm_struct *mm = info;
|
|
|
|
|
2011-11-29 04:31:00 +00:00
|
|
|
/* It is not valid to test "current->active_mm == mm" here.
|
2006-03-28 21:29:26 +00:00
|
|
|
*
|
|
|
|
* The value of "current" is not changed atomically with
|
|
|
|
* switch_mm(). But that's OK, we just need to check the
|
|
|
|
* current cpu's trap block PGD physical address.
|
|
|
|
*/
|
|
|
|
if (tp->pgd_paddr == __pa(mm->pgd))
|
2006-02-01 02:31:38 +00:00
|
|
|
tsb_context_switch(mm);
|
|
|
|
}
|
|
|
|
|
|
|
|
void smp_tsb_sync(struct mm_struct *mm)
|
|
|
|
{
|
2009-03-16 04:10:39 +00:00
|
|
|
smp_call_function_many(mm_cpumask(mm), tsb_sync, mm, 1);
|
2006-02-01 02:31:38 +00:00
|
|
|
}
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
extern unsigned long xcall_flush_tlb_mm;
|
sparc64: Fix race in TLB batch processing.
As reported by Dave Kleikamp, when we emit cross calls to do batched
TLB flush processing we have a race because we do not synchronize on
the sibling cpus completing the cross call.
So meanwhile the TLB batch can be reset (tb->tlb_nr set to zero, etc.)
and either flushes are missed or flushes will flush the wrong
addresses.
Fix this by using generic infrastructure to synchronize on the
completion of the cross call.
This first required getting the flush_tlb_pending() call out from
switch_to() which operates with locks held and interrupts disabled.
The problem is that smp_call_function_many() cannot be invoked with
IRQs disabled and this is explicitly checked for with WARN_ON_ONCE().
We get the batch processing outside of locked IRQ disabled sections by
using some ideas from the powerpc port. Namely, we only batch inside
of arch_{enter,leave}_lazy_mmu_mode() calls. If we're not in such a
region, we flush TLBs synchronously.
1) Get rid of xcall_flush_tlb_pending and per-cpu type
implementations.
2) Do TLB batch cross calls instead via:
smp_call_function_many()
tlb_pending_func()
__flush_tlb_pending()
3) Batch only in lazy mmu sequences:
a) Add 'active' member to struct tlb_batch
b) Define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
c) Set 'active' in arch_enter_lazy_mmu_mode()
d) Run batch and clear 'active' in arch_leave_lazy_mmu_mode()
e) Check 'active' in tlb_batch_add_one() and do a synchronous
flush if it's clear.
4) Add infrastructure for synchronous TLB page flushes.
a) Implement __flush_tlb_page and per-cpu variants, patch
as needed.
b) Likewise for xcall_flush_tlb_page.
c) Implement smp_flush_tlb_page() to invoke the cross-call.
d) Wire up global_flush_tlb_page() to the right routine based
upon CONFIG_SMP
5) It turns out that singleton batches are very common: 2 out of every
3 batch flushes have only a single entry in them.
The batch flush waiting is very expensive, both because of the poll
on sibling cpu completion and because passing the tlb batch
pointer to the sibling cpus invokes a shared memory dereference.
Therefore, in flush_tlb_pending(), if there is only one entry in
the batch, perform a completely asynchronous global_flush_tlb_page()
instead.
Reported-by: Dave Kleikamp <dave.kleikamp@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Dave Kleikamp <dave.kleikamp@oracle.com>
2013-04-19 21:26:26 +00:00
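Item 3 of the list above (batch only in lazy mmu sequences) can be
illustrated with a simplified sketch. Names follow the commit message; the
bodies and signatures are pared down, so treat this as an outline of the
'active' gating rather than the actual sparc64 diff.

/* Simplified sketch of the lazy-mmu gating described in 3a-3e above. */
struct tlb_batch {
	bool active;			/* set between enter/leave lazy mmu */
	struct mm_struct *mm;
	unsigned long tlb_nr;
	unsigned long vaddrs[TLB_BATCH_NR];
};
static DEFINE_PER_CPU(struct tlb_batch, tlb_batch);

void arch_enter_lazy_mmu_mode(void)
{
	this_cpu_ptr(&tlb_batch)->active = true;	/* 3c */
}

void arch_leave_lazy_mmu_mode(void)
{
	struct tlb_batch *tb = this_cpu_ptr(&tlb_batch);

	if (tb->tlb_nr)
		flush_tlb_pending();	/* 3d: run the accumulated batch */
	tb->active = false;
}

static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr)
{
	struct tlb_batch *tb = this_cpu_ptr(&tlb_batch);

	if (!tb->active) {
		/* 3e: outside a lazy-mmu region, flush synchronously. */
		global_flush_tlb_page(mm, vaddr);
		return;
	}
	/* ... otherwise append vaddr to tb->vaddrs[], flushing when full ... */
}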
|
|
|
extern unsigned long xcall_flush_tlb_page;
|
2005-04-16 22:20:36 +00:00
|
|
|
extern unsigned long xcall_flush_tlb_kernel_range;
|
2008-05-20 06:46:00 +00:00
|
|
|
extern unsigned long xcall_fetch_glob_regs;
|
2012-10-16 16:34:01 +00:00
|
|
|
extern unsigned long xcall_fetch_glob_pmu;
|
|
|
|
extern unsigned long xcall_fetch_glob_pmu_n4;
|
2005-04-16 22:20:36 +00:00
|
|
|
extern unsigned long xcall_receive_signal;
|
2006-03-07 06:50:44 +00:00
|
|
|
extern unsigned long xcall_new_mmu_context_version;
|
2008-04-29 09:38:50 +00:00
|
|
|
#ifdef CONFIG_KGDB
|
|
|
|
extern unsigned long xcall_kgdb_capture;
|
|
|
|
#endif
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
#ifdef DCACHE_ALIASING_POSSIBLE
|
|
|
|
extern unsigned long xcall_flush_dcache_page_cheetah;
|
|
|
|
#endif
|
|
|
|
extern unsigned long xcall_flush_dcache_page_spitfire;
|
|
|
|
|
2007-10-27 07:13:04 +00:00
|
|
|
static inline void __local_flush_dcache_page(struct page *page)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
#ifdef DCACHE_ALIASING_POSSIBLE
|
|
|
|
__flush_dcache_page(page_address(page),
|
|
|
|
((tlb_type == spitfire) &&
|
mm: fix races between swapoff and flush dcache
Thanks to commit 4b3ef9daa4fc ("mm/swap: split swap cache into 64MB
trunks"), after swapoff the address_space associated with the swap
device will be freed. So page_mapping() users which may touch the
address_space need some kind of mechanism to prevent the address_space
from being freed during accessing.
The dcache flushing functions (flush_dcache_page(), etc) in architecture
specific code may access the address_space of swap device for anonymous
pages in swap cache via page_mapping() function. But in some cases
there are no mechanisms to prevent the swap device from being swapped off,
for example,
CPU1 CPU2
__get_user_pages() swapoff()
flush_dcache_page()
mapping = page_mapping()
... exit_swap_address_space()
... kvfree(spaces)
mapping_mapped(mapping)
The address space may be accessed after being freed.
But from cachetlb.txt and Russell King, flush_dcache_page() only cares
about file cache pages; for anonymous pages, flush_anon_page() should be
used. The implementation of flush_dcache_page() in all architectures
follows this too. They will check whether page_mapping() is NULL and
whether mapping_mapped() is true to determine whether to flush the
dcache immediately. And they will use the interval tree (mapping->i_mmap)
to find all user space mappings. But mapping_mapped() and
mapping->i_mmap aren't used by anonymous pages in swap cache at all.
So, to fix the race between swapoff and flush dcache, __page_mapping()
is added to return the address_space for file cache pages and NULL
otherwise. All page_mapping() invocations in flush dcache functions are
replaced with page_mapping_file().
[akpm@linux-foundation.org: simplify page_mapping_file(), per Mike]
Link: http://lkml.kernel.org/r/20180305083634.15174-1-ying.huang@intel.com
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Chen Liqin <liqin.linux@gmail.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Chris Zankel <chris@zankel.net>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-04-05 23:24:39 +00:00
|
|
|
page_mapping_file(page) != NULL));
|
2005-04-16 22:20:36 +00:00
|
|
|
#else
|
2018-04-05 23:24:39 +00:00
|
|
|
if (page_mapping_file(page) != NULL &&
|
2005-04-16 22:20:36 +00:00
|
|
|
tlb_type == spitfire)
|
|
|
|
__flush_icache_page(__pa(page_address(page)));
|
|
|
|
#endif
|
|
|
|
}
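The page_mapping_file() calls in the function above are the heart of the
swapoff fix quoted earlier: unlike page_mapping(), the helper refuses to
return the swap device's address_space for anonymous pages. A minimal
sketch, consistent with the simplified form mentioned in the akpm note:

static inline struct address_space *page_mapping_file(struct page *page)
{
	/* An anonymous page in swap cache has no file mapping to flush;
	 * returning NULL here keeps the dcache paths off the swap
	 * device's address_space, which swapoff may free.
	 */
	if (unlikely(PageSwapCache(page)))
		return NULL;
	return page_mapping(page);
}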
|
|
|
|
|
|
|
|
void smp_flush_dcache_page_impl(struct page *page, int cpu)
|
|
|
|
{
|
2006-02-04 11:10:53 +00:00
|
|
|
int this_cpu;
|
|
|
|
|
|
|
|
if (tlb_type == hypervisor)
|
|
|
|
return;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
#ifdef CONFIG_DEBUG_DCFLUSH
|
|
|
|
atomic_inc(&dcpage_flushes);
|
|
|
|
#endif
|
2006-02-04 11:10:53 +00:00
|
|
|
|
|
|
|
this_cpu = get_cpu();
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
if (cpu == this_cpu) {
|
|
|
|
__local_flush_dcache_page(page);
|
|
|
|
} else if (cpu_online(cpu)) {
|
|
|
|
void *pg_addr = page_address(page);
|
2008-08-04 06:07:18 +00:00
|
|
|
u64 data0 = 0;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
if (tlb_type == spitfire) {
|
2008-08-04 06:07:18 +00:00
|
|
|
data0 = ((u64)&xcall_flush_dcache_page_spitfire);
|
2018-04-05 23:24:39 +00:00
|
|
|
if (page_mapping_file(page) != NULL)
|
2005-04-16 22:20:36 +00:00
|
|
|
data0 |= ((u64)1 << 32);
|
2006-02-04 11:10:53 +00:00
|
|
|
} else if (tlb_type == cheetah || tlb_type == cheetah_plus) {
|
2005-04-16 22:20:36 +00:00
|
|
|
#ifdef DCACHE_ALIASING_POSSIBLE
|
2008-08-04 06:07:18 +00:00
|
|
|
data0 = ((u64)&xcall_flush_dcache_page_cheetah);
|
2005-04-16 22:20:36 +00:00
|
|
|
#endif
|
|
|
|
}
|
2008-08-04 06:07:18 +00:00
|
|
|
if (data0) {
|
|
|
|
xcall_deliver(data0, __pa(pg_addr),
|
2011-05-16 20:38:07 +00:00
|
|
|
(u64) pg_addr, cpumask_of(cpu));
|
2005-04-16 22:20:36 +00:00
|
|
|
#ifdef CONFIG_DEBUG_DCFLUSH
|
2008-08-04 06:07:18 +00:00
|
|
|
atomic_inc(&dcpage_flushes_xcall);
|
2005-04-16 22:20:36 +00:00
|
|
|
#endif
|
2008-08-04 06:07:18 +00:00
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
put_cpu();
|
|
|
|
}
|
|
|
|
|
|
|
|
void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
|
|
|
|
{
|
2008-08-04 06:07:18 +00:00
|
|
|
void *pg_addr;
|
|
|
|
u64 data0;
|
2006-02-04 11:10:53 +00:00
|
|
|
|
|
|
|
if (tlb_type == hypervisor)
|
|
|
|
return;
|
|
|
|
|
2011-02-27 07:40:02 +00:00
|
|
|
preempt_disable();
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
#ifdef CONFIG_DEBUG_DCFLUSH
|
|
|
|
atomic_inc(&dcpage_flushes);
|
|
|
|
#endif
|
2008-08-04 06:07:18 +00:00
|
|
|
data0 = 0;
|
|
|
|
pg_addr = page_address(page);
|
2005-04-16 22:20:36 +00:00
|
|
|
if (tlb_type == spitfire) {
|
|
|
|
data0 = ((u64)&xcall_flush_dcache_page_spitfire);
|
2018-04-05 23:24:39 +00:00
|
|
|
if (page_mapping_file(page) != NULL)
|
2005-04-16 22:20:36 +00:00
|
|
|
data0 |= ((u64)1 << 32);
|
2006-02-04 11:10:53 +00:00
|
|
|
} else if (tlb_type == cheetah || tlb_type == cheetah_plus) {
|
2005-04-16 22:20:36 +00:00
|
|
|
#ifdef DCACHE_ALIASING_POSSIBLE
|
|
|
|
data0 = ((u64)&xcall_flush_dcache_page_cheetah);
|
|
|
|
#endif
|
|
|
|
}
|
2008-08-04 06:07:18 +00:00
|
|
|
if (data0) {
|
|
|
|
xcall_deliver(data0, __pa(pg_addr),
|
2011-05-16 20:38:07 +00:00
|
|
|
(u64) pg_addr, cpu_online_mask);
|
2005-04-16 22:20:36 +00:00
|
|
|
#ifdef CONFIG_DEBUG_DCFLUSH
|
2008-08-04 06:07:18 +00:00
|
|
|
atomic_inc(&dcpage_flushes_xcall);
|
2005-04-16 22:20:36 +00:00
|
|
|
#endif
|
2008-08-04 06:07:18 +00:00
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
__local_flush_dcache_page(page);
|
|
|
|
|
2011-02-27 07:40:02 +00:00
|
|
|
preempt_enable();
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
2008-04-29 09:38:50 +00:00
|
|
|
#ifdef CONFIG_KGDB
|
2018-12-05 03:38:25 +00:00
|
|
|
void kgdb_roundup_cpus(void)
|
2008-04-29 09:38:50 +00:00
|
|
|
{
|
|
|
|
smp_cross_call(&xcall_kgdb_capture, 0, 0, 0);
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2008-05-20 06:46:00 +00:00
|
|
|
void smp_fetch_global_regs(void)
|
|
|
|
{
|
|
|
|
smp_cross_call(&xcall_fetch_glob_regs, 0, 0, 0);
|
|
|
|
}
|
|
|
|
|
2012-10-16 16:34:01 +00:00
|
|
|
void smp_fetch_global_pmu(void)
|
|
|
|
{
|
|
|
|
if (tlb_type == hypervisor &&
|
|
|
|
sun4v_chip_type >= SUN4V_CHIP_NIAGARA4)
|
|
|
|
smp_cross_call(&xcall_fetch_glob_pmu_n4, 0, 0, 0);
|
|
|
|
else
|
|
|
|
smp_cross_call(&xcall_fetch_glob_pmu, 0, 0, 0);
|
|
|
|
}
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
/* We know that the window frames of the user have been flushed
|
|
|
|
* to the stack before we get here because all callers of us
|
|
|
|
* are flush_tlb_*() routines, and these run after flush_cache_*()
|
|
|
|
* which performs the flushw.
|
|
|
|
*
|
|
|
|
* The SMP TLB coherency scheme we use works as follows:
|
|
|
|
*
|
|
|
|
* 1) mm->cpu_vm_mask is a bit mask of which cpus an address
|
|
|
|
* space has (potentially) executed on, this is the heuristic
|
|
|
|
* we use to avoid doing cross calls.
|
|
|
|
*
|
|
|
|
* Also, for flushing from kswapd and also for clones, we
|
|
|
|
* use cpu_vm_mask as the list of cpus to make run the TLB.
|
|
|
|
*
|
|
|
|
* 2) TLB context numbers are shared globally across all processors
|
|
|
|
* in the system, this allows us to play several games to avoid
|
|
|
|
* cross calls.
|
|
|
|
*
|
|
|
|
* One invariant is that when a cpu switches to a process, and
|
|
|
|
* that process's tsk->active_mm->cpu_vm_mask does not have the
|
|
|
|
* current cpu's bit set, that tlb context is flushed locally.
|
|
|
|
*
|
|
|
|
* If the address space is non-shared (ie. mm->count == 1) we avoid
|
|
|
|
* cross calls when we want to flush the currently running process's
|
|
|
|
* tlb state. This is done by clearing all cpu bits except the current
|
sparc64: Fix MM refcount check in smp_flush_tlb_pending().
As explained by Benjamin Herrenschmidt:
> CPU 0 is running the context, task->mm == task->active_mm == your
> context. The CPU is in userspace happily churning things.
>
> CPU 1 used to run it, not anymore, it's now running fancyfsd which
> is a kernel thread, but current->active_mm still points to that
> same context.
>
> Because there's only one "real" user, mm_users is 1 (but mm_count is
> elevated; it's just that the presence on CPU 1 as active_mm has no
> effect on mm_count).
>
> At this point, fancyfsd decides to invalidate a mapping currently mapped
> by that context, for example because a networked file has changed
> remotely or something like that, using unmap_mapping_ranges().
>
> So CPU 1 goes into the zapping code, which eventually ends up calling
> flush_tlb_pending(). Your test will succeed, as current->active_mm is
> indeed the target mm for the flush, and mm_users is indeed 1. So you
> will -not- send an IPI to the other CPU, and CPU 0 will continue happily
> accessing the pages that should have been unmapped.
To fix this problem, check ->mm instead of ->active_mm, and this
means:
> So if you test current->mm, you effectively account for mm_users == 1,
> so the only way the mm can be active on another processor is as a lazy
> mm for a kernel thread. So your test should work properly as long
> as you don't have a HW that will do speculative TLB reloads into the
> TLB on that other CPU (and even if you do, you flush-on-switch-in should
> get rid of any crap here).
And therefore we should be OK.
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-03-27 08:09:17 +00:00
|
|
|
* processor's in current->mm->cpu_vm_mask and performing the
|
2005-04-16 22:20:36 +00:00
|
|
|
* flush locally only. This will force any subsequent cpus which run
|
|
|
|
* this task to flush the context from the local tlb if the process
|
|
|
|
* migrates to another cpu (again).
|
|
|
|
*
|
|
|
|
* 3) For shared address spaces (threads) and swapping we bite the
|
|
|
|
* bullet for most cases and perform the cross call (but only to
|
|
|
|
* the cpus listed in cpu_vm_mask).
|
|
|
|
*
|
|
|
|
* The performance gain from "optimizing" away the cross call for threads is
|
|
|
|
* questionable (in theory the big win for threads is the massive sharing of
|
|
|
|
* address space state across processors).
|
|
|
|
*/
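The "non-shared address space" shortcut described above is exactly what the
MM refcount fix (quoted earlier) tightened. Reconstructed from that message,
the difference is a single predicate; the corrected form is what the flush
routines below actually test.

	/* Old check -- can be fooled by a lazy active_mm on another cpu: */
	if (mm == current->active_mm && atomic_read(&mm->mm_users) == 1)
		/* skip the cross call ... */;

	/* Fixed check -- mm_users == 1 plus a real ->mm match means the
	 * only other appearances of this mm are lazy kernel-thread
	 * borrows, which the flush-on-switch-in handles:
	 */
	if (mm == current->mm && atomic_read(&mm->mm_users) == 1)
		cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));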
|
2005-11-07 22:09:58 +00:00
|
|
|
|
|
|
|
/* This currently is only used by the hugetlb arch pre-fault
|
|
|
|
* hook on UltraSPARC-III+ and later when changing the pagesize
|
|
|
|
* bits of the context register for an address space.
|
|
|
|
*/
|
2005-04-16 22:20:36 +00:00
|
|
|
void smp_flush_tlb_mm(struct mm_struct *mm)
|
|
|
|
{
|
2005-11-07 22:09:58 +00:00
|
|
|
u32 ctx = CTX_HWBITS(mm->context);
|
|
|
|
int cpu = get_cpu();
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2005-11-07 22:09:58 +00:00
|
|
|
if (atomic_read(&mm->mm_users) == 1) {
|
2009-03-16 04:10:39 +00:00
|
|
|
cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
|
2005-11-07 22:09:58 +00:00
|
|
|
goto local_flush_and_out;
|
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2005-11-07 22:09:58 +00:00
|
|
|
smp_cross_call_masked(&xcall_flush_tlb_mm,
|
|
|
|
ctx, 0, 0,
|
2009-03-16 04:10:39 +00:00
|
|
|
mm_cpumask(mm));
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2005-11-07 22:09:58 +00:00
|
|
|
local_flush_and_out:
|
|
|
|
__flush_tlb_mm(ctx, SECONDARY_CONTEXT);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2005-11-07 22:09:58 +00:00
|
|
|
put_cpu();
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
2013-04-19 21:26:26 +00:00
|
|
|
struct tlb_pending_info {
|
|
|
|
unsigned long ctx;
|
|
|
|
unsigned long nr;
|
|
|
|
unsigned long *vaddrs;
|
|
|
|
};
|
|
|
|
|
|
|
|
static void tlb_pending_func(void *info)
|
|
|
|
{
|
|
|
|
struct tlb_pending_info *t = info;
|
|
|
|
|
|
|
|
__flush_tlb_pending(t->ctx, t->nr, t->vaddrs);
|
|
|
|
}
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long *vaddrs)
|
|
|
|
{
|
|
|
|
u32 ctx = CTX_HWBITS(mm->context);
|
2013-04-19 21:26:26 +00:00
|
|
|
struct tlb_pending_info info;
|
2005-04-16 22:20:36 +00:00
|
|
|
int cpu = get_cpu();
|
|
|
|
|
2013-04-19 21:26:26 +00:00
|
|
|
info.ctx = ctx;
|
|
|
|
info.nr = nr;
|
|
|
|
info.vaddrs = vaddrs;
|
|
|
|
|
2009-03-27 08:09:17 +00:00
|
|
|
if (mm == current->mm && atomic_read(&mm->mm_users) == 1)
|
2009-03-16 04:10:39 +00:00
|
|
|
cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
|
[SPARC64] mm: context switch ptlock
sparc64 is unique among architectures in taking the page_table_lock in
its context switch (well, cris does too, but erroneously, and it's not
yet SMP anyway).
This seems to be a private affair between switch_mm and activate_mm,
using page_table_lock as a per-mm lock, without any relation to its uses
elsewhere. That's fine, but comment it as such; and unlock sooner in
switch_mm, more like in activate_mm (preemption is disabled here).
There is a block of "if (0)"ed code in smp_flush_tlb_pending which would
have liked to rely on the page_table_lock, in switch_mm and elsewhere;
but its comment explains how dup_mmap's flush_tlb_mm defeated it. And
though that could have been changed at any time over the past few years,
now the chance vanishes as we push the page_table_lock downwards, and
perhaps split it per page table page. Just delete that block of code.
Which leaves the mysterious spin_unlock_wait(&oldmm->page_table_lock)
in kernel/fork.c copy_mm. Textual analysis (supported by Nick Piggin)
suggests that the comment was written by DaveM, and that it relates to
the defeated approach in the sparc64 smp_flush_tlb_pending. Just delete
this block too.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-11-07 22:09:01 +00:00
|
|
|
else
|
2013-04-19 21:26:26 +00:00
|
|
|
smp_call_function_many(mm_cpumask(mm), tlb_pending_func,
|
|
|
|
&info, 1);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
__flush_tlb_pending(ctx, nr, vaddrs);
|
|
|
|
|
|
|
|
put_cpu();
|
|
|
|
}
|
|
|
|
|
2013-04-19 21:26:26 +00:00
|
|
|
void smp_flush_tlb_page(struct mm_struct *mm, unsigned long vaddr)
|
|
|
|
{
|
|
|
|
unsigned long context = CTX_HWBITS(mm->context);
|
|
|
|
int cpu = get_cpu();
|
|
|
|
|
|
|
|
if (mm == current->mm && atomic_read(&mm->mm_users) == 1)
|
|
|
|
cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
|
|
|
|
else
|
|
|
|
smp_cross_call_masked(&xcall_flush_tlb_page,
|
|
|
|
context, vaddr, 0,
|
|
|
|
mm_cpumask(mm));
|
|
|
|
__flush_tlb_page(context, vaddr);
|
|
|
|
|
|
|
|
put_cpu();
|
|
|
|
}
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
void smp_flush_tlb_kernel_range(unsigned long start, unsigned long end)
|
|
|
|
{
|
|
|
|
start &= PAGE_MASK;
|
|
|
|
end = PAGE_ALIGN(end);
|
|
|
|
if (start != end) {
|
|
|
|
smp_cross_call(&xcall_flush_tlb_kernel_range,
|
|
|
|
0, start, end);
|
|
|
|
|
|
|
|
__flush_tlb_kernel_range(start, end);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* CPU capture. */
|
|
|
|
/* #define CAPTURE_DEBUG */
|
|
|
|
extern unsigned long xcall_capture;
|
|
|
|
|
|
|
|
static atomic_t smp_capture_depth = ATOMIC_INIT(0);
|
|
|
|
static atomic_t smp_capture_registry = ATOMIC_INIT(0);
|
|
|
|
static unsigned long penguins_are_doing_time;
|
|
|
|
|
|
|
|
void smp_capture(void)
|
|
|
|
{
|
2014-03-26 17:29:28 +00:00
|
|
|
int result = atomic_add_return(1, &smp_capture_depth);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
if (result == 1) {
|
|
|
|
int ncpus = num_online_cpus();
|
|
|
|
|
|
|
|
#ifdef CAPTURE_DEBUG
|
|
|
|
printk("CPU[%d]: Sending penguins to jail...",
|
|
|
|
smp_processor_id());
|
|
|
|
#endif
|
|
|
|
penguins_are_doing_time = 1;
|
|
|
|
atomic_inc(&smp_capture_registry);
|
|
|
|
smp_cross_call(&xcall_capture, 0, 0, 0);
|
|
|
|
while (atomic_read(&smp_capture_registry) != ncpus)
|
2005-08-29 19:46:22 +00:00
|
|
|
rmb();
|
2005-04-16 22:20:36 +00:00
|
|
|
#ifdef CAPTURE_DEBUG
|
|
|
|
printk("done\n");
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
void smp_release(void)
|
|
|
|
{
|
|
|
|
if (atomic_dec_and_test(&smp_capture_depth)) {
|
|
|
|
#ifdef CAPTURE_DEBUG
|
|
|
|
printk("CPU[%d]: Giving pardon to "
|
|
|
|
"imprisoned penguins\n",
|
|
|
|
smp_processor_id());
|
|
|
|
#endif
|
|
|
|
penguins_are_doing_time = 0;
|
2008-11-15 21:33:25 +00:00
|
|
|
membar_safe("#StoreLoad");
|
2005-04-16 22:20:36 +00:00
|
|
|
atomic_dec(&smp_capture_registry);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2008-11-24 05:55:29 +00:00
|
|
|
/* Imprisoned penguins run with %pil == PIL_NORMAL_MAX, but PSTATE_IE
|
|
|
|
* set, so they can service tlb flush xcalls...
|
2005-04-16 22:20:36 +00:00
|
|
|
*/
|
|
|
|
extern void prom_world(int);
|
2006-02-01 02:32:29 +00:00
|
|
|
|
2010-04-07 11:41:33 +00:00
|
|
|
void __irq_entry smp_penguin_jailcell(int irq, struct pt_regs *regs)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
clear_softint(1 << irq);
|
|
|
|
|
|
|
|
preempt_disable();
|
|
|
|
|
|
|
|
__asm__ __volatile__("flushw");
|
|
|
|
prom_world(1);
|
|
|
|
atomic_inc(&smp_capture_registry);
|
2008-11-15 21:33:25 +00:00
|
|
|
membar_safe("#StoreLoad");
|
2005-04-16 22:20:36 +00:00
|
|
|
while (penguins_are_doing_time)
|
2005-08-29 19:46:22 +00:00
|
|
|
rmb();
|
2005-04-16 22:20:36 +00:00
|
|
|
atomic_dec(&smp_capture_registry);
|
|
|
|
prom_world(0);
|
|
|
|
|
|
|
|
preempt_enable();
|
|
|
|
}
|
|
|
|
|
|
|
|
/* /proc/profile writes can call this, don't __init it please. */
|
|
|
|
int setup_profiling_timer(unsigned int multiplier)
|
|
|
|
{
|
2007-02-22 14:24:10 +00:00
|
|
|
return -EINVAL;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
void __init smp_prepare_cpus(unsigned int max_cpus)
|
|
|
|
{
|
|
|
|
}
|
|
|
|
|
2012-12-21 22:03:26 +00:00
|
|
|
void smp_prepare_boot_cpu(void)
|
2006-02-25 21:39:56 +00:00
|
|
|
{
|
|
|
|
}
|
|
|
|
|
2008-08-04 05:52:41 +00:00
|
|
|
void __init smp_setup_processor_id(void)
|
|
|
|
{
|
|
|
|
if (tlb_type == spitfire)
|
2008-08-04 23:16:20 +00:00
|
|
|
xcall_deliver_impl = spitfire_xcall_deliver;
|
2008-08-04 05:52:41 +00:00
|
|
|
else if (tlb_type == cheetah || tlb_type == cheetah_plus)
|
2008-08-04 23:16:20 +00:00
|
|
|
xcall_deliver_impl = cheetah_xcall_deliver;
|
2008-08-04 05:52:41 +00:00
|
|
|
else
|
2008-08-04 23:16:20 +00:00
|
|
|
xcall_deliver_impl = hypervisor_xcall_deliver;
|
2008-08-04 05:52:41 +00:00
|
|
|
}
|
|
|
|
|
2016-09-15 20:54:40 +00:00
|
|
|
void __init smp_fill_in_cpu_possible_map(void)
|
|
|
|
{
|
|
|
|
int possible_cpus = num_possible_cpus();
|
|
|
|
int i;
|
|
|
|
|
|
|
|
if (possible_cpus > nr_cpu_ids)
|
|
|
|
possible_cpus = nr_cpu_ids;
|
|
|
|
|
|
|
|
for (i = 0; i < possible_cpus; i++)
|
|
|
|
set_cpu_possible(i, true);
|
|
|
|
for (; i < NR_CPUS; i++)
|
|
|
|
set_cpu_possible(i, false);
|
|
|
|
}
|
|
|
|
|
2012-12-21 22:03:26 +00:00
|
|
|
void smp_fill_in_sib_core_maps(void)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2007-05-25 22:49:59 +00:00
|
|
|
unsigned int i;
|
|
|
|
|
2007-07-16 10:49:40 +00:00
|
|
|
for_each_present_cpu(i) {
|
2007-05-25 22:49:59 +00:00
|
|
|
unsigned int j;
|
|
|
|
|
2011-05-16 20:38:07 +00:00
|
|
|
cpumask_clear(&cpu_core_map[i]);
|
2007-05-25 22:49:59 +00:00
|
|
|
if (cpu_data(i).core_id == 0) {
|
2011-05-16 20:38:07 +00:00
|
|
|
cpumask_set_cpu(i, &cpu_core_map[i]);
|
2007-05-25 22:49:59 +00:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2007-07-16 10:49:40 +00:00
|
|
|
for_each_present_cpu(j) {
|
2007-05-25 22:49:59 +00:00
|
|
|
if (cpu_data(i).core_id ==
|
|
|
|
cpu_data(j).core_id)
|
2011-05-16 20:38:07 +00:00
|
|
|
cpumask_set_cpu(j, &cpu_core_map[i]);
|
2007-06-05 00:01:39 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2015-04-22 16:28:31 +00:00
|
|
|
for_each_present_cpu(i) {
|
|
|
|
unsigned int j;
|
|
|
|
|
|
|
|
for_each_present_cpu(j) {
|
2016-10-20 00:33:29 +00:00
|
|
|
if (cpu_data(i).max_cache_id ==
|
|
|
|
cpu_data(j).max_cache_id)
|
|
|
|
cpumask_set_cpu(j, &cpu_core_sib_cache_map[i]);
|
|
|
|
|
2015-04-22 16:28:31 +00:00
|
|
|
if (cpu_data(i).sock_id == cpu_data(j).sock_id)
|
|
|
|
cpumask_set_cpu(j, &cpu_core_sib_map[i]);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2007-07-16 10:49:40 +00:00
|
|
|
for_each_present_cpu(i) {
|
2007-06-05 00:01:39 +00:00
|
|
|
unsigned int j;
|
|
|
|
|
2011-05-16 20:38:07 +00:00
|
|
|
cpumask_clear(&per_cpu(cpu_sibling_map, i));
|
2007-06-05 00:01:39 +00:00
|
|
|
if (cpu_data(i).proc_id == -1) {
|
2011-05-16 20:38:07 +00:00
|
|
|
cpumask_set_cpu(i, &per_cpu(cpu_sibling_map, i));
|
2007-06-05 00:01:39 +00:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2007-07-16 10:49:40 +00:00
|
|
|
for_each_present_cpu(j) {
|
2007-06-05 00:01:39 +00:00
|
|
|
if (cpu_data(i).proc_id ==
|
|
|
|
cpu_data(j).proc_id)
|
2011-05-16 20:38:07 +00:00
|
|
|
cpumask_set_cpu(j, &per_cpu(cpu_sibling_map, i));
|
2007-05-25 22:49:59 +00:00
|
|
|
}
|
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
sparc: delete __cpuinit/__CPUINIT usage from all users
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up(), which are arch independent
(kernel/cpu.c), are flagged as __cpuinit -- so if we remove the __cpuinit
from arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
content into no-ops as early as possible, since that will get rid
of these warnings. In any case, they are temporary and harmless.
This removes all the arch/sparc uses of the __cpuinit macros from
C files and removes __CPUINIT from assembly files. Note that even
though arch/sparc/kernel/trampoline_64.S has instances of ".previous"
in it, they are all paired off against explicit ".section" directives,
and not implicitly paired with __CPUINIT (unlike mips and arm were).
[1] https://lkml.org/lkml/2013/5/20/589
Cc: "David S. Miller" <davem@davemloft.net>
Cc: sparclinux@vger.kernel.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2013-06-17 19:43:14 +00:00
|
|
|
int __cpu_up(unsigned int cpu, struct task_struct *tidle)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2012-04-20 13:05:56 +00:00
|
|
|
int ret = smp_boot_one_cpu(cpu, tidle);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
if (!ret) {
|
2011-05-16 20:38:07 +00:00
|
|
|
cpumask_set_cpu(cpu, &smp_commenced_mask);
|
|
|
|
while (!cpu_online(cpu))
|
2005-04-16 22:20:36 +00:00
|
|
|
mb();
|
2011-05-16 20:38:07 +00:00
|
|
|
if (!cpu_online(cpu)) {
|
2005-04-16 22:20:36 +00:00
|
|
|
ret = -ENODEV;
|
|
|
|
} else {
|
2006-02-12 07:22:47 +00:00
|
|
|
/* On SUN4V, writes to %tick and %stick are
|
|
|
|
* not allowed.
|
|
|
|
*/
|
|
|
|
if (tlb_type != hypervisor)
|
|
|
|
smp_synchronize_one_tick(cpu);
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
[SPARC64]: Initial LDOM cpu hotplug support.
Only adding cpus is supported at the moment; removal
will come next.
When new cpus are configured, the machine description is
updated. When we get the configure request we pass in a
cpu mask of to-be-added cpus to the mdesc CPU node parser
so it only fetches information for those cpus. That code
also proceeds to update the SMT/multi-core scheduling bitmaps.
cpu_up() does all the work and we return the status back
over the DS channel.
CPUs via dr-cpu need to be booted straight out of the
hypervisor, and this requires:
1) A new trampoline mechanism. CPUs are booted straight
out of the hypervisor with MMU disabled and running in
physical addresses with no mappings installed in the TLB.
The new hvtramp.S code sets up the critical cpu state,
installs the locked TLB mappings for the kernel, and
turns the MMU on. It then proceeds to follow the logic
of the existing trampoline.S SMP cpu bringup code.
2) All calls into OBP have to be disallowed when domaining
is enabled. Since cpus boot straight into the kernel from
the hypervisor, OBP has no state about that cpu and therefore
cannot handle being invoked on that cpu.
Luckily it's only a handful of interfaces which can be called
after the OBP device tree is obtained. For example, rebooting,
halting, powering-off, and setting options node variables.
CPU removal support will require some infrastructure changes
here. Namely we'll have to process the requests via a true
kernel thread instead of in a workqueue. Workqueues run on
a per-cpu thread, but when unconfiguring we might need to
force the thread to execute on another cpu if the current cpu
is the one being removed. Removal of a cpu also causes the kernel
to destroy that cpu's workqueue running thread.
Another issue on removal is that we may have interrupts still
pointing to the cpu-to-be-removed. So new code will be needed
to walk the active INO list and retarget those interrupts as needed.
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-07-13 23:03:42 +00:00
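As a rough illustration of the configure flow described above, a sketch of driving cpu_up() over a request mask; dr_cpu_configure_sketch and its shape are illustrative, not the function added by this patch:

/* Illustrative only: bring up each cpu in a dr-cpu configure
 * request and report the first failure. */
static int dr_cpu_configure_sketch(const cpumask_t *mask)
{
	int cpu, err = 0;

	for_each_cpu(cpu, mask) {
		err = cpu_up(cpu);	/* boots the cpu out of the hypervisor */
		if (err)
			break;		/* status goes back over the DS channel */
	}
	return err;
}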

#ifdef CONFIG_HOTPLUG_CPU
void cpu_play_dead(void)
{
	int cpu = smp_processor_id();
	unsigned long pstate;

	idle_task_exit();

	if (tlb_type == hypervisor) {
		struct trap_per_cpu *tb = &trap_block[cpu];

		sun4v_cpu_qconf(HV_CPU_QUEUE_CPU_MONDO,
				tb->cpu_mondo_pa, 0);
		sun4v_cpu_qconf(HV_CPU_QUEUE_DEVICE_MONDO,
				tb->dev_mondo_pa, 0);
		sun4v_cpu_qconf(HV_CPU_QUEUE_RES_ERROR,
				tb->resum_mondo_pa, 0);
		sun4v_cpu_qconf(HV_CPU_QUEUE_NONRES_ERROR,
				tb->nonresum_mondo_pa, 0);
	}

	cpumask_clear_cpu(cpu, &smp_commenced_mask);
	membar_safe("#Sync");

	local_irq_disable();

	__asm__ __volatile__(
		"rdpr %%pstate, %0\n\t"
		"wrpr %0, %1, %%pstate"
		: "=r" (pstate)
		: "i" (PSTATE_IE));

	while (1)
		barrier();
}


int __cpu_disable(void)
{
	int cpu = smp_processor_id();
	cpuinfo_sparc *c;
	int i;

	for_each_cpu(i, &cpu_core_map[cpu])
		cpumask_clear_cpu(cpu, &cpu_core_map[i]);
	cpumask_clear(&cpu_core_map[cpu]);

	for_each_cpu(i, &per_cpu(cpu_sibling_map, cpu))
		cpumask_clear_cpu(cpu, &per_cpu(cpu_sibling_map, i));
	cpumask_clear(&per_cpu(cpu_sibling_map, cpu));

	c = &cpu_data(cpu);

	c->core_id = 0;
	c->proc_id = -1;

	smp_wmb();

	/* Make sure no interrupts point to this cpu. */
	fixup_irqs();

	local_irq_enable();
	mdelay(1);
	local_irq_disable();

	set_cpu_online(cpu, false);

	cpu_map_rebuild();

	return 0;
}

void __cpu_die(unsigned int cpu)
{
	int i;

	for (i = 0; i < 100; i++) {
		smp_rmb();
		if (!cpumask_test_cpu(cpu, &smp_commenced_mask))
			break;
		msleep(100);
	}
	if (cpumask_test_cpu(cpu, &smp_commenced_mask)) {
		printk(KERN_ERR "CPU %u didn't die...\n", cpu);
	} else {
#if defined(CONFIG_SUN_LDOMS)
		unsigned long hv_err;
		int limit = 100;

		do {
			hv_err = sun4v_cpu_stop(cpu);
			if (hv_err == HV_EOK) {
				set_cpu_present(cpu, false);
				break;
			}
		} while (--limit > 0);
		if (limit <= 0) {
			printk(KERN_ERR "sun4v_cpu_stop() fails err=%lu\n",
			       hv_err);
		}
#endif
	}
}

#endif
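Taken together, cpu_play_dead() and __cpu_die() form a handshake over smp_commenced_mask: the dying cpu unhooks its mondo queues and clears its bit, while a surviving cpu polls the mask for roughly ten seconds (100 iterations of msleep(100)) before asking the hypervisor to stop it. A generic userspace sketch of the same bounded-poll handshake, with illustrative names (not kernel code):

#include <stdatomic.h>
#include <stdbool.h>
#include <unistd.h>

static _Atomic bool commenced = true;	/* stands in for smp_commenced_mask */

static void dying_side(void)
{
	/* ... tear down per-cpu state first ... */
	atomic_store(&commenced, false);	/* signal "I'm gone" */
	for (;;)
		;				/* spin until stopped externally */
}

static bool wait_for_death(void)
{
	for (int i = 0; i < 100; i++) {		/* bounded poll: ~10s total */
		if (!atomic_load(&commenced))
			return true;		/* peer acknowledged */
		usleep(100 * 1000);		/* msleep(100) in the kernel */
	}
	return false;				/* the "CPU didn't die" path */
}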

void __init smp_cpus_done(unsigned int max_cpus)
{
}

static void send_cpu_ipi(int cpu)
{
	xcall_deliver((u64) &xcall_receive_signal,
		      0, 0, cpumask_of(cpu));
}

void scheduler_poke(void)
{
	if (!cpu_poke)
		return;

	if (!__this_cpu_read(poke))
		return;

	__this_cpu_write(poke, false);
	set_softint(1 << PIL_SMP_RECEIVE_SIGNAL);
}

static unsigned long send_cpu_poke(int cpu)
{
	unsigned long hv_err;

	per_cpu(poke, cpu) = true;
	hv_err = sun4v_cpu_poke(cpu);
	if (hv_err != HV_EOK) {
		per_cpu(poke, cpu) = false;
		pr_err_ratelimited("%s: sun4v_cpu_poke() fails err=%lu\n",
				   __func__, hv_err);
	}

	return hv_err;
}
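The poke path above is a small two-sided protocol: the sender arms the target's per-cpu poke flag before asking the hypervisor to wake it, and the woken cpu consumes the flag in scheduler_poke() and raises the reschedule softint on itself. A minimal sketch of this "armed doorbell" pattern under illustrative names (not the kernel code):

#include <stdatomic.h>
#include <stdbool.h>

static _Atomic bool doorbell;		/* stands in for per_cpu(poke, cpu) */

/* Sender: arm the flag first, so the wakeup can never be observed
 * with the flag still clear; roll back if the wakeup itself fails. */
static int ring_doorbell(void)
{
	atomic_store(&doorbell, true);
	int err = 0;			/* imagine: err = wake_peer(); */
	if (err)
		atomic_store(&doorbell, false);
	return err;
}

/* Receiver: consume the flag exactly once, then act on it. */
static void on_wakeup(void)
{
	if (!atomic_exchange(&doorbell, false))
		return;			/* spurious wakeup, nothing to do */
	/* act() -- set_softint(1 << PIL_SMP_RECEIVE_SIGNAL) in the kernel */
}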

void smp_send_reschedule(int cpu)
{
	if (cpu == smp_processor_id()) {
		WARN_ON_ONCE(preemptible());
		set_softint(1 << PIL_SMP_RECEIVE_SIGNAL);
		return;
	}

	/* Use cpu poke to resume idle cpu if supported. */
	if (cpu_poke && idle_cpu(cpu)) {
		unsigned long ret;

		ret = send_cpu_poke(cpu);
		if (ret == HV_EOK)
			return;
	}

	/* Use IPI in following cases:
	 * - cpu poke not supported
	 * - cpu not idle
	 * - send_cpu_poke() returns with error
	 */
	send_cpu_ipi(cpu);
}

void smp_init_cpu_poke(void)
{
	unsigned long major;
	unsigned long minor;
	int ret;

	if (tlb_type != hypervisor)
		return;

	ret = sun4v_hvapi_get(HV_GRP_CORE, &major, &minor);
	if (ret) {
		pr_debug("HV_GRP_CORE is not registered\n");
		return;
	}

	if (major == 1 && minor >= 6) {
		/* CPU POKE is registered. */
		cpu_poke = true;
		return;
	}

	pr_debug("CPU_POKE not supported\n");
}

void __irq_entry smp_receive_signal_client(int irq, struct pt_regs *regs)
{
	clear_softint(1 << irq);
	scheduler_ipi();
}

static void stop_this_cpu(void *dummy)
{
	set_cpu_online(smp_processor_id(), false);
	prom_stopself();
}

void smp_send_stop(void)
{
	int cpu;

	if (tlb_type == hypervisor) {
		int this_cpu = smp_processor_id();
#ifdef CONFIG_SERIAL_SUNHV
		sunhv_migrate_hvcons_irq(this_cpu);
#endif
		for_each_online_cpu(cpu) {
			if (cpu == this_cpu)
				continue;

			set_cpu_online(cpu, false);
#ifdef CONFIG_SUN_LDOMS
			if (ldom_domaining_enabled) {
				unsigned long hv_err;
				hv_err = sun4v_cpu_stop(cpu);
				if (hv_err)
					printk(KERN_ERR "sun4v_cpu_stop() "
					       "failed err=%lu\n", hv_err);
			} else
#endif
				prom_stopcpu_cpuid(cpu);
		}
	} else
		smp_call_function(stop_this_cpu, NULL, 0);
}

/**
 * pcpu_alloc_bootmem - NUMA friendly alloc_bootmem wrapper for percpu
 * @cpu: cpu to allocate for
 * @size: size allocation in bytes
 * @align: alignment
 *
 * Allocate @size bytes aligned at @align for cpu @cpu. This wrapper
 * does the right thing for NUMA regardless of the current
 * configuration.
 *
 * RETURNS:
 * Pointer to the allocated area on success, NULL on failure.
 */
static void * __init pcpu_alloc_bootmem(unsigned int cpu, size_t size,
					size_t align)
{
	const unsigned long goal = __pa(MAX_DMA_ADDRESS);
#ifdef CONFIG_NEED_MULTIPLE_NODES
	int node = cpu_to_node(cpu);
	void *ptr;

	if (!node_online(node) || !NODE_DATA(node)) {
		ptr = memblock_alloc_from(size, align, goal);
		pr_info("cpu %d has no node %d or node-local memory\n",
			cpu, node);
		pr_debug("per cpu data for cpu%d %lu bytes at %016lx\n",
			 cpu, size, __pa(ptr));
	} else {
		ptr = memblock_alloc_try_nid(size, align, goal,
					     MEMBLOCK_ALLOC_ACCESSIBLE, node);
		pr_debug("per cpu data for cpu%d %lu bytes on node%d at "
			 "%016lx\n", cpu, size, node, __pa(ptr));
	}
	return ptr;
#else
	return memblock_alloc_from(size, align, goal);
#endif
}

static void __init pcpu_free_bootmem(void *ptr, size_t size)
{
	memblock_free(__pa(ptr), size);
}

static int __init pcpu_cpu_distance(unsigned int from, unsigned int to)
{
	if (cpu_to_node(from) == cpu_to_node(to))
		return LOCAL_DISTANCE;
	else
		return REMOTE_DISTANCE;
}
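pcpu_cpu_distance() only has to distinguish "same node" from "different node"; the embed-first-chunk allocator uses these pairwise distances to bucket cpus into groups whose percpu units can share an allocation. A simplified, illustrative grouping pass under assumed names (this is not the kernel's actual algorithm):

#define SKETCH_NCPUS	8
#define SKETCH_LOCAL	10	/* plays the role of LOCAL_DISTANCE */
#define SKETCH_REMOTE	20	/* plays the role of REMOTE_DISTANCE */

static int sketch_node_of(unsigned int cpu)
{
	return cpu / 4;		/* pretend topology: two 4-cpu nodes */
}

static int sketch_distance(unsigned int from, unsigned int to)
{
	return sketch_node_of(from) == sketch_node_of(to) ?
		SKETCH_LOCAL : SKETCH_REMOTE;
}

/* Bucket each cpu with the first earlier cpu it is "local" to. */
static int sketch_group_cpus(int group_of[SKETCH_NCPUS])
{
	int ngroups = 0;

	for (unsigned int cpu = 0; cpu < SKETCH_NCPUS; cpu++) {
		int g = -1;

		for (unsigned int prev = 0; prev < cpu; prev++) {
			if (sketch_distance(cpu, prev) == SKETCH_LOCAL) {
				g = group_of[prev];
				break;
			}
		}
		group_of[cpu] = (g >= 0) ? g : ngroups++;
	}
	return ngroups;		/* 2 groups for this pretend topology */
}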

static void __init pcpu_populate_pte(unsigned long addr)
{
	pgd_t *pgd = pgd_offset_k(addr);
	pud_t *pud;
	pmd_t *pmd;

	if (pgd_none(*pgd)) {
		pud_t *new;

		new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE, PAGE_SIZE);
		if (!new)
			goto err_alloc;
		pgd_populate(&init_mm, pgd, new);
	}

	pud = pud_offset(pgd, addr);
	if (pud_none(*pud)) {
		pmd_t *new;

		new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE, PAGE_SIZE);
		if (!new)
			goto err_alloc;
		pud_populate(&init_mm, pud, new);
	}

	pmd = pmd_offset(pud, addr);
	if (!pmd_present(*pmd)) {
		pte_t *new;

		new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE, PAGE_SIZE);
		if (!new)
			goto err_alloc;
		pmd_populate_kernel(&init_mm, pmd, new);
	}

	return;

err_alloc:
	panic("%s: Failed to allocate %lu bytes align=%lx from=%lx\n",
	      __func__, PAGE_SIZE, PAGE_SIZE, PAGE_SIZE);
}

void __init setup_per_cpu_areas(void)
{
	unsigned long delta;
	unsigned int cpu;
	int rc = -EINVAL;

	if (pcpu_chosen_fc != PCPU_FC_PAGE) {
		rc = pcpu_embed_first_chunk(PERCPU_MODULE_RESERVE,
					    PERCPU_DYNAMIC_RESERVE, 4 << 20,
					    pcpu_cpu_distance,
					    pcpu_alloc_bootmem,
					    pcpu_free_bootmem);
		if (rc)
			pr_warning("PERCPU: %s allocator failed (%d), "
				   "falling back to page size\n",
				   pcpu_fc_names[pcpu_chosen_fc], rc);
	}
	if (rc < 0)
		rc = pcpu_page_first_chunk(PERCPU_MODULE_RESERVE,
					   pcpu_alloc_bootmem,
					   pcpu_free_bootmem,
					   pcpu_populate_pte);
	if (rc < 0)
		panic("cannot initialize percpu area (err=%d)", rc);

	delta = (unsigned long)pcpu_base_addr - (unsigned long)__per_cpu_start;
	for_each_possible_cpu(cpu)
		__per_cpu_offset(cpu) = delta + pcpu_unit_offsets[cpu];

	/* Setup %g5 for the boot cpu. */
	__local_per_cpu_offset = __per_cpu_offset(smp_processor_id());

	of_fill_in_cpu_data();
	if (tlb_type == hypervisor)
		mdesc_fill_in_cpu_data(cpu_all_mask);
}