powerpc updates for 4.7
Merge tag 'powerpc-4.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:

 "Highlights:

   - Support for Power ISA 3.0 (Power9) Radix Tree MMU from Aneesh Kumar K.V

   - Live patching support for ppc64le (also merged via livepatching.git)

  Various cleanups & minor fixes from:
   - Aaro Koskinen, Alexey Kardashevskiy, Andrew Donnellan, Aneesh Kumar
     K.V, Chris Smart, Daniel Axtens, Frederic Barrat, Gavin Shan, Ian
     Munsie, Lennart Sorensen, Madhavan Srinivasan, Mahesh Salgaonkar,
     Markus Elfring, Michael Ellerman, Oliver O'Halloran, Paul Gortmaker,
     Paul Mackerras, Rashmica Gupta, Russell Currey, Suraj Jitindar Singh,
     Thiago Jung Bauermann, Valentin Rothberg, Vipin K Parashar.

  General:
   - Update LMB associativity index during DLPAR add/remove from Nathan Fontenot
   - Fix branching to OOL handlers in relocatable kernel from Hari Bathini
   - Add support for userspace Power9 copy/paste from Chris Smart
   - Always use STRICT_MM_TYPECHECKS from Michael Ellerman
   - Add mask of possible MMU features from Michael Ellerman

  PCI:
   - Enable pass through of NVLink to guests from Alexey Kardashevskiy
   - Cleanups in preparation for powernv PCI hotplug from Gavin Shan
   - Don't report error in eeh_pe_reset_and_recover() from Gavin Shan
   - Restore initial state in eeh_pe_reset_and_recover() from Gavin Shan
   - Revert "powerpc/eeh: Fix crash in eeh_add_device_early() on Cell" from Guilherme G. Piccoli
   - Remove the dependency on EEH struct in DDW mechanism from Guilherme G. Piccoli

  selftests:
   - Test cp_abort during context switch from Chris Smart
   - Add several tests for transactional memory support from Rashmica Gupta

  perf:
   - Add support for sampling interrupt register state from Anju T
   - Add support for unwinding perf-stackdump from Chandan Kumar

  cxl:
   - Configure the PSL for two CAPI ports on POWER8NVL from Philippe Bergheaud
   - Allow initialization on timebase sync failures from Frederic Barrat
   - Increase timeout for detection of AFU mmio hang from Frederic Barrat
   - Handle num_of_processes larger than can fit in the SPA from Ian Munsie
   - Ensure PSL interrupt is configured for contexts with no AFU IRQs from Ian Munsie
   - Add kernel API to allow a context to operate with relocate disabled from Ian Munsie
   - Check periodically the coherent platform function's state from Christophe Lombard

  Freescale:
   - Updates from Scott: "Contains 86xx fixes, minor device tree fixes,
     an erratum workaround, and a kconfig dependency fix.""

* tag 'powerpc-4.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (192 commits)
  powerpc/86xx: Fix PCI interrupt map definition
  powerpc/86xx: Move pci1 definition to the include file
  powerpc/fsl: Fix build of the dtb embedded kernel images
  powerpc/fsl: Fix rcpm compatible string
  powerpc/fsl: Remove FSL_SOC dependency from FSL_LBC
  powerpc/fsl-pci: Add a workaround for PCI 5 errata
  powerpc/fsl: Fix SPI compatible on t208xrdb and t1040rdb
  powerpc/powernv/npu: Add PE to PHB's list
  powerpc/powernv: Fix insufficient memory allocation
  powerpc/iommu: Remove the dependency on EEH struct in DDW mechanism
  Revert "powerpc/eeh: Fix crash in eeh_add_device_early() on Cell"
  powerpc/eeh: Drop unnecessary label in eeh_pe_change_owner()
  powerpc/eeh: Ignore handlers in eeh_pe_reset_and_recover()
  powerpc/eeh: Restore initial state in eeh_pe_reset_and_recover()
  powerpc/eeh: Don't report error in eeh_pe_reset_and_recover()
  Revert "powerpc/powernv: Exclude root bus in pnv_pci_reset_secondary_bus()"
  powerpc/powernv/npu: Enable NVLink pass through
  powerpc/powernv/npu: Rework TCE Kill handling
  powerpc/powernv/npu: Add set/unset window helpers
  powerpc/powernv/ioda2: Export debug helper pe_level_printk()
  ...
commit c04a588029
@@ -233,3 +233,11 @@ Description:    read/write
                0 = don't trust, the image may be different (default)
                1 = trust that the image will not change.
 Users:         https://github.com/ibm-capi/libcxl
+
+What:           /sys/class/cxl/<card>/psl_timebase_synced
+Date:           March 2016
+Contact:        linuxppc-dev@lists.ozlabs.org
+Description:    read only
+                Returns 1 if the psl timebase register is synchronized
+                with the core timebase register, 0 otherwise.
+Users:          https://github.com/ibm-capi/libcxl
@@ -27,7 +27,7 @@
 | nios2: | TODO |
 | openrisc: | TODO |
 | parisc: | TODO |
-| powerpc: | TODO |
+| powerpc: | ok |
 | s390: | TODO |
 | score: | TODO |
 | sh: | TODO |
@@ -27,7 +27,7 @@
 | nios2: | TODO |
 | openrisc: | TODO |
 | parisc: | TODO |
-| powerpc: | TODO |
+| powerpc: | ok |
 | s390: | TODO |
 | score: | TODO |
 | sh: | TODO |
@@ -12,7 +12,7 @@ Overview:
 The IBM POWER-based pSeries and iSeries computers include PCI bus
 controller chips that have extended capabilities for detecting and
 reporting a large variety of PCI bus error conditions.  These features
-go under the name of "EEH", for "Extended Error Handling".  The EEH
+go under the name of "EEH", for "Enhanced Error Handling".  The EEH
 hardware features allow PCI bus errors to be cleared and a PCI
 card to be "rebooted", without also having to reboot the operating
 system.
MAINTAINERS
@@ -6675,6 +6675,19 @@ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git
 S:	Supported
 F:	Documentation/powerpc/
 F:	arch/powerpc/
+F:	drivers/char/tpm/tpm_ibmvtpm*
+F:	drivers/crypto/nx/
+F:	drivers/crypto/vmx/
+F:	drivers/net/ethernet/ibm/ibmveth.*
+F:	drivers/net/ethernet/ibm/ibmvnic.*
+F:	drivers/pci/hotplug/rpa*
+F:	drivers/scsi/ibmvscsi/
+N:	opal
+N:	/pmac
+N:	powermac
+N:	powernv
+N:	[^a-z0-9]ps3
+N:	pseries
 
 LINUX FOR POWER MACINTOSH
 M:	Benjamin Herrenschmidt <benh@kernel.crashing.org>
@@ -116,6 +116,8 @@ config PPC
 	select GENERIC_ATOMIC64 if PPC32
 	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
 	select HAVE_PERF_EVENTS
+	select HAVE_PERF_REGS
+	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_HW_BREAKPOINT if PERF_EVENTS && PPC_BOOK3S_64
 	select ARCH_WANT_IPC_PARSE_VERSION
@@ -606,9 +608,9 @@ endchoice
 
 config FORCE_MAX_ZONEORDER
 	int "Maximum zone order"
-	range 9 64 if PPC64 && PPC_64K_PAGES
+	range 8 9 if PPC64 && PPC_64K_PAGES
 	default "9" if PPC64 && PPC_64K_PAGES
-	range 13 64 if PPC64 && !PPC_64K_PAGES
+	range 9 13 if PPC64 && !PPC_64K_PAGES
 	default "13" if PPC64 && !PPC_64K_PAGES
 	range 9 64 if PPC32 && PPC_16K_PAGES
 	default "9" if PPC32 && PPC_16K_PAGES
@@ -795,7 +797,6 @@ config 4xx_SOC
 
 config FSL_LBC
 	bool "Freescale Local Bus support"
-	depends on FSL_SOC
 	help
 	  Enables reporting of errors from the Freescale local bus
 	  controller.  Also contains some common code used by
@@ -19,14 +19,6 @@ config PPC_WERROR
 	depends on !PPC_DISABLE_WERROR
 	default y
 
-config STRICT_MM_TYPECHECKS
-	bool "Do extra type checking on mm types"
-	default n
-	help
-	  This option turns on extra type checking for some mm related types.
-
-	  If you don't know what this means, say N.
-
 config PRINT_STACK_DEPTH
 	int "Stack depth to print" if DEBUG_KERNEL
 	default 64
@@ -362,9 +362,6 @@ $(obj)/cuImage.initrd.%: vmlinux $(obj)/%.dtb $(wrapperbits)
 $(obj)/cuImage.%: vmlinux $(obj)/%.dtb $(wrapperbits)
 	$(call if_changed,wrap,cuboot-$*,,$(obj)/$*.dtb)
 
-$(obj)/cuImage.%: vmlinux $(obj)/fsl/%.dtb $(wrapperbits)
-	$(call if_changed,wrap,cuboot-$*,,$(obj)/fsl/$*.dtb)
-
 $(obj)/simpleImage.initrd.%: vmlinux $(obj)/%.dtb $(wrapperbits)
 	$(call if_changed,wrap,simpleboot-$*,,$(obj)/$*.dtb,$(obj)/ramdisk.image.gz)
 
@@ -381,6 +378,9 @@ $(obj)/treeImage.%: vmlinux $(obj)/%.dtb $(wrapperbits)
 $(obj)/%.dtb: $(src)/dts/%.dts FORCE
 	$(call if_changed_dep,dtc)
 
+$(obj)/%.dtb: $(src)/dts/fsl/%.dts FORCE
+	$(call if_changed_dep,dtc)
+
 # If there isn't a platform selected then just strip the vmlinux.
 ifeq (,$(image-y))
 image-y := vmlinux.strip
@@ -211,6 +211,10 @@
 			  0x0 0x00400000>;
 		};
 	};
+
+	pci1: pcie@fef09000 {
+		status = "disabled";
+	};
 };
 
 /include/ "mpc8641si-post.dtsi"
@@ -24,10 +24,6 @@
 	model = "GEF_SBC310";
 	compatible = "gef,sbc310";
 
-	aliases {
-		pci1 = &pci1;
-	};
-
 	memory {
 		device_type = "memory";
 		reg = <0x0 0x40000000>; // set by uboot
@@ -223,29 +219,11 @@
 	};
 
 	pci1: pcie@fef09000 {
-		compatible = "fsl,mpc8641-pcie";
-		device_type = "pci";
-		#size-cells = <2>;
-		#address-cells = <3>;
-		reg = <0xfef09000 0x1000>;
-		bus-range = <0x0 0xff>;
-		ranges = <0x02000000 0x0 0xc0000000 0xc0000000 0x0 0x20000000
-			  0x01000000 0x0 0x00000000 0xfe400000 0x0 0x00400000>;
-		clock-frequency = <100000000>;
-		interrupts = <0x19 0x2 0 0>;
-		interrupt-map-mask = <0xf800 0x0 0x0 0x7>;
-		interrupt-map = <
-			0x0000 0x0 0x0 0x1 &mpic 0x4 0x2
-			0x0000 0x0 0x0 0x2 &mpic 0x5 0x2
-			0x0000 0x0 0x0 0x3 &mpic 0x6 0x2
-			0x0000 0x0 0x0 0x4 &mpic 0x7 0x2
-			>;
-
 		pcie@0 {
 			reg = <0 0 0 0 0>;
 			#size-cells = <2>;
 			#address-cells = <3>;
 			device_type = "pci";
 			ranges = <0x02000000 0x0 0xc0000000
 				  0x02000000 0x0 0xc0000000
 				  0x0 0x20000000
@@ -209,6 +209,10 @@
 			  0x0 0x00400000>;
 		};
 	};
+
+	pci1: pcie@fef09000 {
+		status = "disabled";
+	};
 };
 
 /include/ "mpc8641si-post.dtsi"
@@ -15,10 +15,6 @@
 	model = "MPC8641HPCN";
 	compatible = "fsl,mpc8641hpcn";
 
-	aliases {
-		pci1 = &pci1;
-	};
-
 	memory {
 		device_type = "memory";
 		reg = <0x00000000 0x40000000>;	// 1G at 0x0
@@ -359,29 +355,11 @@
 	};
 
 	pci1: pcie@ffe09000 {
-		compatible = "fsl,mpc8641-pcie";
-		device_type = "pci";
-		#size-cells = <2>;
-		#address-cells = <3>;
-		reg = <0xffe09000 0x1000>;
-		bus-range = <0 0xff>;
-		ranges = <0x02000000 0x0 0xa0000000 0xa0000000 0x0 0x20000000
-			  0x01000000 0x0 0x00000000 0xffc10000 0x0 0x00010000>;
-		clock-frequency = <100000000>;
-		interrupts = <25 2 0 0>;
-		interrupt-map-mask = <0xf800 0 0 7>;
-		interrupt-map = <
-			/* IDSEL 0x0 */
-			0x0000 0 0 1 &mpic 4 1
-			0x0000 0 0 2 &mpic 5 1
-			0x0000 0 0 3 &mpic 6 1
-			0x0000 0 0 4 &mpic 7 1
-			>;
 		pcie@0 {
 			reg = <0 0 0 0 0>;
 			#size-cells = <2>;
 			#address-cells = <3>;
 			device_type = "pci";
 			ranges = <0x02000000 0x0 0xa0000000
 				  0x02000000 0x0 0xa0000000
 				  0x0 0x20000000
@@ -17,10 +17,6 @@
 	#address-cells = <2>;
 	#size-cells = <2>;
 
-	aliases {
-		pci1 = &pci1;
-	};
-
 	memory {
 		device_type = "memory";
 		reg = <0x0 0x00000000 0x0 0x40000000>;	// 1G at 0x0
@@ -326,29 +322,11 @@
 	};
 
 	pci1: pcie@fffe09000 {
-		compatible = "fsl,mpc8641-pcie";
-		device_type = "pci";
-		#size-cells = <2>;
-		#address-cells = <3>;
-		reg = <0x0f 0xffe09000 0x0 0x1000>;
-		bus-range = <0x0 0xff>;
-		ranges = <0x02000000 0x0 0xe0000000 0x0c 0x20000000 0x0 0x20000000
-			  0x01000000 0x0 0x00000000 0x0f 0xffc10000 0x0 0x00010000>;
-		clock-frequency = <100000000>;
-		interrupts = <25 2 0 0>;
-		interrupt-map-mask = <0xf800 0 0 7>;
-		interrupt-map = <
-			/* IDSEL 0x0 */
-			0x0000 0 0 1 &mpic 4 1
-			0x0000 0 0 2 &mpic 5 1
-			0x0000 0 0 3 &mpic 6 1
-			0x0000 0 0 4 &mpic 7 1
-			>;
 		pcie@0 {
 			reg = <0 0 0 0 0>;
 			#size-cells = <2>;
 			#address-cells = <3>;
 			device_type = "pci";
 			ranges = <0x02000000 0x0 0xe0000000
 				  0x02000000 0x0 0xe0000000
 				  0x0 0x20000000
@@ -102,19 +102,46 @@
 		bus-range = <0x0 0xff>;
 		clock-frequency = <100000000>;
 		interrupts = <24 2 0 0>;
-		interrupt-map-mask = <0xf800 0x0 0x0 0x7>;
-		interrupt-map = <
-			0x0000 0x0 0x0 0x1 &mpic 0x0 0x1
-			0x0000 0x0 0x0 0x2 &mpic 0x1 0x1
-			0x0000 0x0 0x0 0x3 &mpic 0x2 0x1
-			0x0000 0x0 0x0 0x4 &mpic 0x3 0x1
-			>;
 
 		pcie@0 {
 			reg = <0 0 0 0 0>;
+			#interrupt-cells = <1>;
 			#size-cells = <2>;
 			#address-cells = <3>;
 			device_type = "pci";
+			interrupts = <24 2 0 0>;
+			interrupt-map-mask = <0xf800 0x0 0x0 0x7>;
+			interrupt-map = <
+				0x0000 0x0 0x0 0x1 &mpic 0x0 0x1 0x0 0x0
+				0x0000 0x0 0x0 0x2 &mpic 0x1 0x1 0x0 0x0
+				0x0000 0x0 0x0 0x3 &mpic 0x2 0x1 0x0 0x0
+				0x0000 0x0 0x0 0x4 &mpic 0x3 0x1 0x0 0x0
+				>;
 		};
 	};
 
+&pci1 {
+	compatible = "fsl,mpc8641-pcie";
+	device_type = "pci";
+	#size-cells = <2>;
+	#address-cells = <3>;
+	bus-range = <0x0 0xff>;
+	clock-frequency = <100000000>;
+	interrupts = <25 2 0 0>;
+
+	pcie@0 {
+		reg = <0 0 0 0 0>;
+		#interrupt-cells = <1>;
+		#size-cells = <2>;
+		#address-cells = <3>;
+		device_type = "pci";
+		interrupts = <25 2 0 0>;
+		interrupt-map-mask = <0xf800 0x0 0x0 0x7>;
+		interrupt-map = <
+			0x0000 0x0 0x0 0x1 &mpic 0x4 0x1 0x0 0x0
+			0x0000 0x0 0x0 0x2 &mpic 0x5 0x1 0x0 0x0
+			0x0000 0x0 0x0 0x3 &mpic 0x6 0x1 0x0 0x0
+			0x0000 0x0 0x0 0x4 &mpic 0x7 0x1 0x0 0x0
+			>;
+	};
+};
@@ -25,6 +25,7 @@
 		serial0 = &serial0;
 		serial1 = &serial1;
 		pci0 = &pci0;
+		pci1 = &pci1;
 	};
 
 	cpus {
@@ -19,10 +19,6 @@
 	model = "SBC8641D";
 	compatible = "wind,sbc8641";
 
-	aliases {
-		pci1 = &pci1;
-	};
-
 	memory {
 		device_type = "memory";
 		reg = <0x00000000 0x20000000>;	// 512M at 0x0
@@ -165,30 +161,11 @@
 	};
 
 	pci1: pcie@f8009000 {
-		compatible = "fsl,mpc8641-pcie";
-		device_type = "pci";
-		#size-cells = <2>;
-		#address-cells = <3>;
-		reg = <0xf8009000 0x1000>;
-		bus-range = <0 0xff>;
-		ranges = <0x02000000 0x0 0xa0000000 0xa0000000 0x0 0x20000000
-			  0x01000000 0x0 0x00000000 0xe3000000 0x0 0x00100000>;
-		clock-frequency = <100000000>;
-		interrupts = <25 2 0 0>;
-		interrupt-map-mask = <0xf800 0 0 7>;
-		interrupt-map = <
-			/* IDSEL 0x0 */
-			0x0000 0 0 1 &mpic 4 1
-			0x0000 0 0 2 &mpic 5 1
-			0x0000 0 0 3 &mpic 6 1
-			0x0000 0 0 4 &mpic 7 1
-			>;
-
 		pcie@0 {
 			reg = <0 0 0 0 0>;
 			#size-cells = <2>;
 			#address-cells = <3>;
 			device_type = "pci";
 			ranges = <0x02000000 0x0 0xa0000000
 				  0x02000000 0x0 0xa0000000
 				  0x0 0x20000000
@@ -263,7 +263,7 @@
 	};
 
 	rcpm: global-utilities@e2000 {
-		compatible = "fsl,t1023-rcpm", "fsl,qoriq-rcpm-2.0";
+		compatible = "fsl,t1023-rcpm", "fsl,qoriq-rcpm-2.1";
 		reg = <0xe2000 0x1000>;
 	};
 
@@ -472,7 +472,7 @@
 	};
 
 	rcpm: global-utilities@e2000 {
-		compatible = "fsl,t1040-rcpm", "fsl,qoriq-rcpm-2.0";
+		compatible = "fsl,t1040-rcpm", "fsl,qoriq-rcpm-2.1";
 		reg = <0xe2000 0x1000>;
 	};
 
@@ -109,7 +109,7 @@
 		flash@0 {
 			#address-cells = <1>;
 			#size-cells = <1>;
-			compatible = "micron,n25q512a", "jedec,spi-nor";
+			compatible = "micron,n25q512ax3", "jedec,spi-nor";
 			reg = <0>;
 			spi-max-frequency = <10000000>; /* input clock */
 		};
@@ -113,7 +113,7 @@
 		flash@0 {
 			#address-cells = <1>;
 			#size-cells = <1>;
-			compatible = "micron,n25q512a", "jedec,spi-nor";
+			compatible = "micron,n25q512ax3", "jedec,spi-nor";
 			reg = <0>;
 			spi-max-frequency = <10000000>; /* input clock */
 		};
@@ -39,8 +39,5 @@
 #define _PMD_PRESENT_MASK (PAGE_MASK)
 #define _PMD_BAD	(~PAGE_MASK)
 
-/* Hash table based platforms need atomic updates of the linux PTE */
-#define PTE_ATOMIC_UPDATES	1
-
 #endif /* __KERNEL__ */
 #endif /* _ASM_POWERPC_BOOK3S_32_HASH_H */
@@ -1,5 +1,5 @@
-#ifndef _ASM_POWERPC_MMU_HASH32_H_
-#define _ASM_POWERPC_MMU_HASH32_H_
+#ifndef _ASM_POWERPC_BOOK3S_32_MMU_HASH_H_
+#define _ASM_POWERPC_BOOK3S_32_MMU_HASH_H_
 /*
  * 32-bit hash table MMU support
  */
@@ -90,4 +90,4 @@ typedef struct {
 #define mmu_virtual_psize	MMU_PAGE_4K
 #define mmu_linear_psize	MMU_PAGE_256M
 
-#endif /* _ASM_POWERPC_MMU_HASH32_H_ */
+#endif /* _ASM_POWERPC_BOOK3S_32_MMU_HASH_H_ */
arch/powerpc/include/asm/book3s/32/pgalloc.h (new file, 109 lines)
@@ -0,0 +1,109 @@
+#ifndef _ASM_POWERPC_BOOK3S_32_PGALLOC_H
+#define _ASM_POWERPC_BOOK3S_32_PGALLOC_H
+
+#include <linux/threads.h>
+
+/* For 32-bit, all levels of page tables are just drawn from get_free_page() */
+#define MAX_PGTABLE_INDEX_SIZE	0
+
+extern void __bad_pte(pmd_t *pmd);
+
+extern pgd_t *pgd_alloc(struct mm_struct *mm);
+extern void pgd_free(struct mm_struct *mm, pgd_t *pgd);
+
+/*
+ * We don't have any real pmd's, and this code never triggers because
+ * the pgd will always be present..
+ */
+/* #define pmd_alloc_one(mm,address)       ({ BUG(); ((pmd_t *)2); }) */
+#define pmd_free(mm, x)			do { } while (0)
+#define __pmd_free_tlb(tlb,x,a)		do { } while (0)
+/* #define pgd_populate(mm, pmd, pte)      BUG() */
+
+#ifndef CONFIG_BOOKE
+
+static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp,
+				       pte_t *pte)
+{
+	*pmdp = __pmd(__pa(pte) | _PMD_PRESENT);
+}
+
+static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmdp,
+				pgtable_t pte_page)
+{
+	*pmdp = __pmd((page_to_pfn(pte_page) << PAGE_SHIFT) | _PMD_PRESENT);
+}
+
+#define pmd_pgtable(pmd) pmd_page(pmd)
+#else
+
+static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp,
+				       pte_t *pte)
+{
+	*pmdp = __pmd((unsigned long)pte | _PMD_PRESENT);
+}
+
+static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmdp,
+				pgtable_t pte_page)
+{
+	*pmdp = __pmd((unsigned long)lowmem_page_address(pte_page) | _PMD_PRESENT);
+}
+
+#define pmd_pgtable(pmd) pmd_page(pmd)
+#endif
+
+extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long addr);
+extern pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long addr);
+
+static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
+{
+	free_page((unsigned long)pte);
+}
+
+static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
+{
+	pgtable_page_dtor(ptepage);
+	__free_page(ptepage);
+}
+
+static inline void pgtable_free(void *table, unsigned index_size)
+{
+	BUG_ON(index_size); /* 32-bit doesn't use this */
+	free_page((unsigned long)table);
+}
+
+#define check_pgt_cache()	do { } while (0)
+
+#ifdef CONFIG_SMP
+static inline void pgtable_free_tlb(struct mmu_gather *tlb,
+				    void *table, int shift)
+{
+	unsigned long pgf = (unsigned long)table;
+	BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
+	pgf |= shift;
+	tlb_remove_table(tlb, (void *)pgf);
+}
+
+static inline void __tlb_remove_table(void *_table)
+{
+	void *table = (void *)((unsigned long)_table & ~MAX_PGTABLE_INDEX_SIZE);
+	unsigned shift = (unsigned long)_table & MAX_PGTABLE_INDEX_SIZE;
+
+	pgtable_free(table, shift);
+}
+#else
+static inline void pgtable_free_tlb(struct mmu_gather *tlb,
+				    void *table, int shift)
+{
+	pgtable_free(table, shift);
+}
+#endif
+
+static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
+				  unsigned long address)
+{
+	tlb_flush_pgtable(tlb, address);
+	pgtable_page_dtor(table);
+	pgtable_free_tlb(tlb, page_address(table), 0);
+}
+#endif /* _ASM_POWERPC_BOOK3S_32_PGALLOC_H */
@@ -5,58 +5,31 @@
  * for each page table entry. The PMD and PGD level use a 32b record for
  * each entry by assuming that each entry is page aligned.
  */
-#define PTE_INDEX_SIZE  9
-#define PMD_INDEX_SIZE  7
-#define PUD_INDEX_SIZE  9
-#define PGD_INDEX_SIZE  9
+#define H_PTE_INDEX_SIZE  9
+#define H_PMD_INDEX_SIZE  7
+#define H_PUD_INDEX_SIZE  9
+#define H_PGD_INDEX_SIZE  9
 
 #ifndef __ASSEMBLY__
-#define PTE_TABLE_SIZE	(sizeof(pte_t) << PTE_INDEX_SIZE)
-#define PMD_TABLE_SIZE	(sizeof(pmd_t) << PMD_INDEX_SIZE)
-#define PUD_TABLE_SIZE	(sizeof(pud_t) << PUD_INDEX_SIZE)
-#define PGD_TABLE_SIZE	(sizeof(pgd_t) << PGD_INDEX_SIZE)
-#endif	/* __ASSEMBLY__ */
-
-#define PTRS_PER_PTE	(1 << PTE_INDEX_SIZE)
-#define PTRS_PER_PMD	(1 << PMD_INDEX_SIZE)
-#define PTRS_PER_PUD	(1 << PUD_INDEX_SIZE)
-#define PTRS_PER_PGD	(1 << PGD_INDEX_SIZE)
-
-/* PMD_SHIFT determines what a second-level page table entry can map */
-#define PMD_SHIFT	(PAGE_SHIFT + PTE_INDEX_SIZE)
-#define PMD_SIZE	(1UL << PMD_SHIFT)
-#define PMD_MASK	(~(PMD_SIZE-1))
-
-/* With 4k base page size, hugepage PTEs go at the PMD level */
-#define MIN_HUGEPTE_SHIFT	PMD_SHIFT
-
-/* PUD_SHIFT determines what a third-level page table entry can map */
-#define PUD_SHIFT	(PMD_SHIFT + PMD_INDEX_SIZE)
-#define PUD_SIZE	(1UL << PUD_SHIFT)
-#define PUD_MASK	(~(PUD_SIZE-1))
-
-/* PGDIR_SHIFT determines what a fourth-level page table entry can map */
-#define PGDIR_SHIFT	(PUD_SHIFT + PUD_INDEX_SIZE)
-#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
-#define PGDIR_MASK	(~(PGDIR_SIZE-1))
-
-/* Bits to mask out from a PMD to get to the PTE page */
-#define PMD_MASKED_BITS		0
-/* Bits to mask out from a PUD to get to the PMD page */
-#define PUD_MASKED_BITS		0
-/* Bits to mask out from a PGD to get to the PUD page */
-#define PGD_MASKED_BITS		0
-
-/* PTE flags to conserve for HPTE identification */
-#define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_HASHPTE | \
-			 _PAGE_F_SECOND | _PAGE_F_GIX)
-
-/* shift to put page number into pte */
-#define PTE_RPN_SHIFT	(12)
-#define PTE_RPN_SIZE	(45)	/* gives 57-bit real addresses */
-
-#define _PAGE_4K_PFN	0
-#ifndef __ASSEMBLY__
+#define H_PTE_TABLE_SIZE	(sizeof(pte_t) << H_PTE_INDEX_SIZE)
+#define H_PMD_TABLE_SIZE	(sizeof(pmd_t) << H_PMD_INDEX_SIZE)
+#define H_PUD_TABLE_SIZE	(sizeof(pud_t) << H_PUD_INDEX_SIZE)
+#define H_PGD_TABLE_SIZE	(sizeof(pgd_t) << H_PGD_INDEX_SIZE)
+
+#define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_HASHPTE | \
+			 H_PAGE_F_SECOND | H_PAGE_F_GIX)
+/*
+ * Not supported by 4k linux page size
+ */
+#define H_PAGE_4K_PFN	0x0
+#define H_PAGE_THP_HUGE	0x0
+#define H_PAGE_COMBO	0x0
+#define H_PTE_FRAG_NR	0
+#define H_PTE_FRAG_SIZE_SHIFT	0
 /*
  * On all 4K setups, remap_4k_pfn() equates to remap_pfn_range()
  */
@@ -64,26 +37,7 @@
 	remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE, (prot))
 
 #ifdef CONFIG_HUGETLB_PAGE
-/*
- * For 4k page size, we support explicit hugepage via hugepd
- */
-static inline int pmd_huge(pmd_t pmd)
-{
-	return 0;
-}
-
-static inline int pud_huge(pud_t pud)
-{
-	return 0;
-}
-
-static inline int pgd_huge(pgd_t pgd)
-{
-	return 0;
-}
-#define pgd_huge pgd_huge
-
-static inline int hugepd_ok(hugepd_t hpd)
+static inline int hash__hugepd_ok(hugepd_t hpd)
 {
 	/*
 	 * if it is not a pte and have hugepd shift mask
@@ -94,7 +48,65 @@ static inline int hugepd_ok(hugepd_t hpd)
 		return true;
 	return false;
 }
-#define is_hugepd(hpd)		(hugepd_ok(hpd))
 #endif
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+
+static inline char *get_hpte_slot_array(pmd_t *pmdp)
+{
+	BUG();
+	return NULL;
+}
+
+static inline unsigned int hpte_valid(unsigned char *hpte_slot_array, int index)
+{
+	BUG();
+	return 0;
+}
+
+static inline unsigned int hpte_hash_index(unsigned char *hpte_slot_array,
+					   int index)
+{
+	BUG();
+	return 0;
+}
+
+static inline void mark_hpte_slot_valid(unsigned char *hpte_slot_array,
+					unsigned int index, unsigned int hidx)
+{
+	BUG();
+}
+
+static inline int hash__pmd_trans_huge(pmd_t pmd)
+{
+	return 0;
+}
+
+static inline int hash__pmd_same(pmd_t pmd_a, pmd_t pmd_b)
+{
+	BUG();
+	return 0;
+}
+
+static inline pmd_t hash__pmd_mkhuge(pmd_t pmd)
+{
+	BUG();
+	return pmd;
+}
+
+extern unsigned long hash__pmd_hugepage_update(struct mm_struct *mm,
+					       unsigned long addr, pmd_t *pmdp,
+					       unsigned long clr, unsigned long set);
+extern pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma,
+				       unsigned long address, pmd_t *pmdp);
+extern void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
+					     pgtable_t pgtable);
+extern pgtable_t hash__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
+extern void hash__pmdp_huge_split_prepare(struct vm_area_struct *vma,
+					  unsigned long address, pmd_t *pmdp);
+extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
+					   unsigned long addr, pmd_t *pmdp);
+extern int hash__has_transparent_hugepage(void);
 #endif
 
 #endif /* !__ASSEMBLY__ */
@ -1,73 +1,44 @@
|
||||
#ifndef _ASM_POWERPC_BOOK3S_64_HASH_64K_H
#define _ASM_POWERPC_BOOK3S_64_HASH_64K_H

#define PTE_INDEX_SIZE 8
#define PMD_INDEX_SIZE 5
#define PUD_INDEX_SIZE 5
#define PGD_INDEX_SIZE 12

#define PTRS_PER_PTE (1 << PTE_INDEX_SIZE)
#define PTRS_PER_PMD (1 << PMD_INDEX_SIZE)
#define PTRS_PER_PUD (1 << PUD_INDEX_SIZE)
#define PTRS_PER_PGD (1 << PGD_INDEX_SIZE)
#define H_PTE_INDEX_SIZE 8
#define H_PMD_INDEX_SIZE 5
#define H_PUD_INDEX_SIZE 5
#define H_PGD_INDEX_SIZE 12

/* With 4k base page size, hugepage PTEs go at the PMD level */
#define MIN_HUGEPTE_SHIFT PAGE_SHIFT

/* PMD_SHIFT determines what a second-level page table entry can map */
#define PMD_SHIFT (PAGE_SHIFT + PTE_INDEX_SIZE)
#define PMD_SIZE (1UL << PMD_SHIFT)
#define PMD_MASK (~(PMD_SIZE-1))

/* PUD_SHIFT determines what a third-level page table entry can map */
#define PUD_SHIFT (PMD_SHIFT + PMD_INDEX_SIZE)
#define PUD_SIZE (1UL << PUD_SHIFT)
#define PUD_MASK (~(PUD_SIZE-1))

/* PGDIR_SHIFT determines what a fourth-level page table entry can map */
#define PGDIR_SHIFT (PUD_SHIFT + PUD_INDEX_SIZE)
#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
#define PGDIR_MASK (~(PGDIR_SIZE-1))

#define _PAGE_COMBO 0x00001000 /* this is a combo 4k page */
#define _PAGE_4K_PFN 0x00002000 /* PFN is for a single 4k page */
#define H_PAGE_COMBO 0x00001000 /* this is a combo 4k page */
#define H_PAGE_4K_PFN 0x00002000 /* PFN is for a single 4k page */
/*
 * Used to track subpage group valid if _PAGE_COMBO is set
 * This overloads _PAGE_F_GIX and _PAGE_F_SECOND
 * We need to differentiate between explicit huge page and THP huge
 * page, since a THP huge page also needs to track real subpage details
 */
#define _PAGE_COMBO_VALID (_PAGE_F_GIX | _PAGE_F_SECOND)
#define H_PAGE_THP_HUGE H_PAGE_4K_PFN

/*
 * Used to track subpage group valid if H_PAGE_COMBO is set
 * This overloads H_PAGE_F_GIX and H_PAGE_F_SECOND
 */
#define H_PAGE_COMBO_VALID (H_PAGE_F_GIX | H_PAGE_F_SECOND)

/* PTE flags to conserve for HPTE identification */
#define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_F_SECOND | \
			 _PAGE_F_GIX | _PAGE_HASHPTE | _PAGE_COMBO)

/* Shift to put page number into pte.
 *
 * That gives us a max RPN of 41 bits, which means a max of 57 bits
 * of addressable physical space, or 53 bits for the special 4k PFNs.
 */
#define PTE_RPN_SHIFT (16)
#define PTE_RPN_SIZE (41)

#define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_F_SECOND | \
			 H_PAGE_F_GIX | H_PAGE_HASHPTE | H_PAGE_COMBO)
/*
 * we support 16 fragments per PTE page of 64K size.
 */
#define PTE_FRAG_NR 16
#define H_PTE_FRAG_NR 16
/*
 * We use a 2K PTE page fragment and another 2K for storing
 * real_pte_t hash index
 */
#define PTE_FRAG_SIZE_SHIFT 12
#define H_PTE_FRAG_SIZE_SHIFT 12
#define PTE_FRAG_SIZE (1UL << PTE_FRAG_SIZE_SHIFT)

/* Bits to mask out from a PMD to get to the PTE page */
#define PMD_MASKED_BITS 0xc0000000000000ffUL
/* Bits to mask out from a PUD to get to the PMD page */
#define PUD_MASKED_BITS 0xc0000000000000ffUL
/* Bits to mask out from a PGD to get to the PUD page */
#define PGD_MASKED_BITS 0xc0000000000000ffUL

#ifndef __ASSEMBLY__
#include <asm/errno.h>

/*
 * With 64K pages on hash table, we have a special PTE format that
@@ -83,9 +54,9 @@ static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep)

	rpte.pte = pte;
	rpte.hidx = 0;
	if (pte_val(pte) & _PAGE_COMBO) {
	if (pte_val(pte) & H_PAGE_COMBO) {
		/*
		 * Make sure we order the hidx load against the _PAGE_COMBO
		 * Make sure we order the hidx load against the H_PAGE_COMBO
		 * check. The store side ordering is done in __hash_page_4K
		 */
		smp_rmb();
@@ -97,9 +68,9 @@ static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep)

static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index)
{
	if ((pte_val(rpte.pte) & _PAGE_COMBO))
	if ((pte_val(rpte.pte) & H_PAGE_COMBO))
		return (rpte.hidx >> (index<<2)) & 0xf;
	return (pte_val(rpte.pte) >> _PAGE_F_GIX_SHIFT) & 0xf;
	return (pte_val(rpte.pte) >> H_PAGE_F_GIX_SHIFT) & 0xf;
}

#define __rpte_to_pte(r) ((r).pte)
@@ -122,79 +93,32 @@ extern bool __rpte_sub_valid(real_pte_t rpte, unsigned long index);
#define pte_iterate_hashed_end() } while(0); } } while(0)

#define pte_pagesize_index(mm, addr, pte) \
	(((pte) & _PAGE_COMBO)? MMU_PAGE_4K: MMU_PAGE_64K)
	(((pte) & H_PAGE_COMBO)? MMU_PAGE_4K: MMU_PAGE_64K)

#define remap_4k_pfn(vma, addr, pfn, prot) \
	(WARN_ON(((pfn) >= (1UL << PTE_RPN_SIZE))) ? -EINVAL : \
		remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE, \
			__pgprot(pgprot_val((prot)) | _PAGE_4K_PFN)))
extern int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
			   unsigned long pfn, unsigned long size, pgprot_t);
static inline int hash__remap_4k_pfn(struct vm_area_struct *vma, unsigned long addr,
				     unsigned long pfn, pgprot_t prot)
{
	if (pfn > (PTE_RPN_MASK >> PAGE_SHIFT)) {
		WARN(1, "remap_4k_pfn called with wrong pfn value\n");
		return -EINVAL;
	}
	return remap_pfn_range(vma, addr, pfn, PAGE_SIZE,
			       __pgprot(pgprot_val(prot) | H_PAGE_4K_PFN));
}

#define PTE_TABLE_SIZE PTE_FRAG_SIZE
#define H_PTE_TABLE_SIZE PTE_FRAG_SIZE
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define PMD_TABLE_SIZE ((sizeof(pmd_t) << PMD_INDEX_SIZE) + (sizeof(unsigned long) << PMD_INDEX_SIZE))
#define H_PMD_TABLE_SIZE ((sizeof(pmd_t) << PMD_INDEX_SIZE) + \
			  (sizeof(unsigned long) << PMD_INDEX_SIZE))
#else
#define PMD_TABLE_SIZE (sizeof(pmd_t) << PMD_INDEX_SIZE)
#define H_PMD_TABLE_SIZE (sizeof(pmd_t) << PMD_INDEX_SIZE)
#endif
#define PUD_TABLE_SIZE (sizeof(pud_t) << PUD_INDEX_SIZE)
#define PGD_TABLE_SIZE (sizeof(pgd_t) << PGD_INDEX_SIZE)
#ifdef CONFIG_HUGETLB_PAGE
/*
 * We have PGD_INDEX_SIZE = 12 and PTE_INDEX_SIZE = 8, so that we can have
 * 16GB hugepage pte in PGD and 16MB hugepage pte at PMD;
 *
 * Defined in such a way that we can optimize away code block at build time
 * if CONFIG_HUGETLB_PAGE=n.
 */
static inline int pmd_huge(pmd_t pmd)
{
	/*
	 * leaf pte for huge page
	 */
	return !!(pmd_val(pmd) & _PAGE_PTE);
}

static inline int pud_huge(pud_t pud)
{
	/*
	 * leaf pte for huge page
	 */
	return !!(pud_val(pud) & _PAGE_PTE);
}

static inline int pgd_huge(pgd_t pgd)
{
	/*
	 * leaf pte for huge page
	 */
	return !!(pgd_val(pgd) & _PAGE_PTE);
}
#define pgd_huge pgd_huge

#ifdef CONFIG_DEBUG_VM
extern int hugepd_ok(hugepd_t hpd);
#define is_hugepd(hpd) (hugepd_ok(hpd))
#else
/*
 * With 64k page size, we have hugepage ptes in the pgd and pmd entries. We don't
 * need to setup hugepage directory for them. Our pte and page directory format
 * enables us to have this enabled.
 */
static inline int hugepd_ok(hugepd_t hpd)
{
	return 0;
}
#define is_hugepd(pdep) 0
#endif /* CONFIG_DEBUG_VM */

#endif /* CONFIG_HUGETLB_PAGE */
#define H_PUD_TABLE_SIZE (sizeof(pud_t) << PUD_INDEX_SIZE)
#define H_PGD_TABLE_SIZE (sizeof(pgd_t) << PGD_INDEX_SIZE)
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
extern unsigned long pmd_hugepage_update(struct mm_struct *mm,
					 unsigned long addr,
					 pmd_t *pmdp,
					 unsigned long clr,
					 unsigned long set);
static inline char *get_hpte_slot_array(pmd_t *pmdp)
{
	/*
@@ -253,50 +177,35 @@ static inline void mark_hpte_slot_valid(unsigned char *hpte_slot_array,
 * that for explicit huge pages.
 *
 */
static inline int pmd_trans_huge(pmd_t pmd)
static inline int hash__pmd_trans_huge(pmd_t pmd)
{
	return !!((pmd_val(pmd) & (_PAGE_PTE | _PAGE_THP_HUGE)) ==
		  (_PAGE_PTE | _PAGE_THP_HUGE));
	return !!((pmd_val(pmd) & (_PAGE_PTE | H_PAGE_THP_HUGE)) ==
		  (_PAGE_PTE | H_PAGE_THP_HUGE));
}

static inline int pmd_large(pmd_t pmd)
static inline int hash__pmd_same(pmd_t pmd_a, pmd_t pmd_b)
{
	return !!(pmd_val(pmd) & _PAGE_PTE);
	return (((pmd_raw(pmd_a) ^ pmd_raw(pmd_b)) & ~cpu_to_be64(_PAGE_HPTEFLAGS)) == 0);
}

static inline pmd_t pmd_mknotpresent(pmd_t pmd)
static inline pmd_t hash__pmd_mkhuge(pmd_t pmd)
{
	return __pmd(pmd_val(pmd) & ~_PAGE_PRESENT);
}

#define __HAVE_ARCH_PMD_SAME
static inline int pmd_same(pmd_t pmd_a, pmd_t pmd_b)
{
	return (((pmd_val(pmd_a) ^ pmd_val(pmd_b)) & ~_PAGE_HPTEFLAGS) == 0);
}

static inline int __pmdp_test_and_clear_young(struct mm_struct *mm,
					      unsigned long addr, pmd_t *pmdp)
{
	unsigned long old;

	if ((pmd_val(*pmdp) & (_PAGE_ACCESSED | _PAGE_HASHPTE)) == 0)
		return 0;
	old = pmd_hugepage_update(mm, addr, pmdp, _PAGE_ACCESSED, 0);
	return ((old & _PAGE_ACCESSED) != 0);
}

#define __HAVE_ARCH_PMDP_SET_WRPROTECT
static inline void pmdp_set_wrprotect(struct mm_struct *mm, unsigned long addr,
				      pmd_t *pmdp)
{

	if ((pmd_val(*pmdp) & _PAGE_RW) == 0)
		return;

	pmd_hugepage_update(mm, addr, pmdp, _PAGE_RW, 0);
	return __pmd(pmd_val(pmd) | (_PAGE_PTE | H_PAGE_THP_HUGE));
}

extern unsigned long hash__pmd_hugepage_update(struct mm_struct *mm,
					       unsigned long addr, pmd_t *pmdp,
					       unsigned long clr, unsigned long set);
extern pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma,
				       unsigned long address, pmd_t *pmdp);
extern void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
					     pgtable_t pgtable);
extern pgtable_t hash__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
extern void hash__pmdp_huge_split_prepare(struct vm_area_struct *vma,
					  unsigned long address, pmd_t *pmdp);
extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
					   unsigned long addr, pmd_t *pmdp);
extern int hash__has_transparent_hugepage(void);
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
#endif /* __ASSEMBLY__ */
@@ -13,48 +13,12 @@
 * We could create separate kernel read-only if we used the 3 PP bits
 * combinations that newer processors provide but we currently don't.
 */
#define _PAGE_BIT_SWAP_TYPE 0

#define _PAGE_EXEC 0x00001 /* execute permission */
#define _PAGE_RW 0x00002 /* read & write access allowed */
#define _PAGE_READ 0x00004 /* read access allowed */
#define _PAGE_USER 0x00008 /* page may be accessed by userspace */
#define _PAGE_GUARDED 0x00010 /* G: guarded (side-effect) page */
/* M (memory coherence) is always set in the HPTE, so we don't need it here */
#define _PAGE_COHERENT 0x0
#define _PAGE_NO_CACHE 0x00020 /* I: cache inhibit */
#define _PAGE_WRITETHRU 0x00040 /* W: cache write-through */
#define _PAGE_DIRTY 0x00080 /* C: page changed */
#define _PAGE_ACCESSED 0x00100 /* R: page referenced */
#define _PAGE_SPECIAL 0x00400 /* software: special page */
#define _PAGE_BUSY 0x00800 /* software: PTE & hash are busy */

#ifdef CONFIG_MEM_SOFT_DIRTY
#define _PAGE_SOFT_DIRTY 0x200 /* software: software dirty tracking */
#else
#define _PAGE_SOFT_DIRTY 0x000
#endif

#define _PAGE_F_GIX_SHIFT 57
#define _PAGE_F_GIX (7ul << 57) /* HPTE index within HPTEG */
#define _PAGE_F_SECOND (1ul << 60) /* HPTE is in 2ndary HPTEG */
#define _PAGE_HASHPTE (1ul << 61) /* PTE has associated HPTE */
#define _PAGE_PTE (1ul << 62) /* distinguishes PTEs from pointers */
#define _PAGE_PRESENT (1ul << 63) /* pte contains a translation */

/*
 * We need to differentiate between explicit huge page and THP huge
 * page, since a THP huge page also needs to track real subpage details
 */
#define _PAGE_THP_HUGE _PAGE_4K_PFN

/*
 * set of bits not changed in pmd_modify.
 */
#define _HPAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
			 _PAGE_ACCESSED | _PAGE_THP_HUGE | _PAGE_PTE | \
			 _PAGE_SOFT_DIRTY)

#define H_PAGE_BUSY 0x00800 /* software: PTE & hash are busy */
#define H_PTE_NONE_MASK _PAGE_HPTEFLAGS
#define H_PAGE_F_GIX_SHIFT 57
#define H_PAGE_F_GIX (7ul << 57) /* HPTE index within HPTEG */
#define H_PAGE_F_SECOND (1ul << 60) /* HPTE is in 2ndary HPTEG */
#define H_PAGE_HASHPTE (1ul << 61) /* PTE has associated HPTE */

#ifdef CONFIG_PPC_64K_PAGES
#include <asm/book3s/64/hash-64k.h>
@@ -65,29 +29,33 @@
/*
 * Size of EA range mapped by our pagetables.
 */
#define PGTABLE_EADDR_SIZE (PTE_INDEX_SIZE + PMD_INDEX_SIZE + \
			    PUD_INDEX_SIZE + PGD_INDEX_SIZE + PAGE_SHIFT)
#define PGTABLE_RANGE (ASM_CONST(1) << PGTABLE_EADDR_SIZE)
#define H_PGTABLE_EADDR_SIZE (H_PTE_INDEX_SIZE + H_PMD_INDEX_SIZE + \
			      H_PUD_INDEX_SIZE + H_PGD_INDEX_SIZE + PAGE_SHIFT)
#define H_PGTABLE_RANGE (ASM_CONST(1) << H_PGTABLE_EADDR_SIZE)

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define PMD_CACHE_INDEX (PMD_INDEX_SIZE + 1)
/*
 * only with hash we need to use the second half of pmd page table
 * to store pointer to deposited pgtable_t
 */
#define H_PMD_CACHE_INDEX (H_PMD_INDEX_SIZE + 1)
#else
#define PMD_CACHE_INDEX PMD_INDEX_SIZE
#define H_PMD_CACHE_INDEX H_PMD_INDEX_SIZE
#endif
/*
 * Define the address range of the kernel non-linear virtual area
 */
#define KERN_VIRT_START ASM_CONST(0xD000000000000000)
#define KERN_VIRT_SIZE ASM_CONST(0x0000100000000000)
#define H_KERN_VIRT_START ASM_CONST(0xD000000000000000)
#define H_KERN_VIRT_SIZE ASM_CONST(0x0000100000000000)

/*
 * The vmalloc space starts at the beginning of that region, and
 * occupies half of it on hash CPUs and a quarter of it on Book3E
 * (we keep a quarter for the virtual memmap)
 */
#define VMALLOC_START KERN_VIRT_START
#define VMALLOC_SIZE (KERN_VIRT_SIZE >> 1)
#define VMALLOC_END (VMALLOC_START + VMALLOC_SIZE)
#define H_VMALLOC_START H_KERN_VIRT_START
#define H_VMALLOC_SIZE (H_KERN_VIRT_SIZE >> 1)
#define H_VMALLOC_END (H_VMALLOC_START + H_VMALLOC_SIZE)

/*
 * Region IDs
@@ -96,7 +64,7 @@
#define REGION_MASK (0xfUL << REGION_SHIFT)
#define REGION_ID(ea) (((unsigned long)(ea)) >> REGION_SHIFT)

#define VMALLOC_REGION_ID (REGION_ID(VMALLOC_START))
#define VMALLOC_REGION_ID (REGION_ID(H_VMALLOC_START))
#define KERNEL_REGION_ID (REGION_ID(PAGE_OFFSET))
#define VMEMMAP_REGION_ID (0xfUL) /* Server only */
#define USER_REGION_ID (0UL)
@@ -105,381 +73,97 @@
 * Defines the address of the vmemmap area, in its own region on
 * hash table CPUs.
 */
#define VMEMMAP_BASE (VMEMMAP_REGION_ID << REGION_SHIFT)
#define H_VMEMMAP_BASE (VMEMMAP_REGION_ID << REGION_SHIFT)

#ifdef CONFIG_PPC_MM_SLICES
#define HAVE_ARCH_UNMAPPED_AREA
#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
#endif /* CONFIG_PPC_MM_SLICES */

/* No separate kernel read-only */
#define _PAGE_KERNEL_RW (_PAGE_RW | _PAGE_DIRTY) /* user access blocked by key */
#define _PAGE_KERNEL_RO _PAGE_KERNEL_RW
#define _PAGE_KERNEL_RWX (_PAGE_DIRTY | _PAGE_RW | _PAGE_EXEC)

/* Strong Access Ordering */
#define _PAGE_SAO (_PAGE_WRITETHRU | _PAGE_NO_CACHE | _PAGE_COHERENT)

/* No page size encoding in the linux PTE */
#define _PAGE_PSIZE 0

/* PTEIDX nibble */
#define _PTEIDX_SECONDARY 0x8
#define _PTEIDX_GROUP_IX 0x7

/* Hash table based platforms need atomic updates of the linux PTE */
#define PTE_ATOMIC_UPDATES 1
#define _PTE_NONE_MASK _PAGE_HPTEFLAGS
/*
 * The mask covered by the RPN must be a ULL on 32-bit platforms with
 * 64-bit PTEs
 */
#define PTE_RPN_MASK (((1UL << PTE_RPN_SIZE) - 1) << PTE_RPN_SHIFT)
/*
 * _PAGE_CHG_MASK masks of bits that are to be preserved across
 * pgprot changes
 */
#define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
			_PAGE_ACCESSED | _PAGE_SPECIAL | _PAGE_PTE | \
			_PAGE_SOFT_DIRTY)
/*
 * Mask of bits returned by pte_pgprot()
 */
#define PAGE_PROT_BITS (_PAGE_GUARDED | _PAGE_COHERENT | _PAGE_NO_CACHE | \
			_PAGE_WRITETHRU | _PAGE_4K_PFN | \
			_PAGE_USER | _PAGE_ACCESSED | \
			_PAGE_RW | _PAGE_DIRTY | _PAGE_EXEC | \
			_PAGE_SOFT_DIRTY)
/*
 * We define 2 sets of base prot bits, one for basic pages (ie,
 * cacheable kernel and user pages) and one for non cacheable
 * pages. We always set _PAGE_COHERENT when SMP is enabled or
 * the processor might need it for DMA coherency.
 */
#define _PAGE_BASE_NC (_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_PSIZE)
#define _PAGE_BASE (_PAGE_BASE_NC | _PAGE_COHERENT)

/* Permission masks used to generate the __P and __S table,
 *
 * Note: __pgprot is defined in arch/powerpc/include/asm/page.h
 *
 * Write permissions imply read permissions for now (we could make write-only
 * pages on BookE but we don't bother for now). Execute permission control is
 * possible on platforms that define _PAGE_EXEC
 *
 * Note due to the way vm flags are laid out, the bits are XWR
 */
#define PAGE_NONE __pgprot(_PAGE_BASE)
#define PAGE_SHARED __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW)
#define PAGE_SHARED_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW | \
			       _PAGE_EXEC)
#define PAGE_COPY __pgprot(_PAGE_BASE | _PAGE_USER)
#define PAGE_COPY_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
#define PAGE_READONLY __pgprot(_PAGE_BASE | _PAGE_USER)
#define PAGE_READONLY_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)

#define __P000 PAGE_NONE
#define __P001 PAGE_READONLY
#define __P010 PAGE_COPY
#define __P011 PAGE_COPY
#define __P100 PAGE_READONLY_X
#define __P101 PAGE_READONLY_X
#define __P110 PAGE_COPY_X
#define __P111 PAGE_COPY_X

#define __S000 PAGE_NONE
#define __S001 PAGE_READONLY
#define __S010 PAGE_SHARED
#define __S011 PAGE_SHARED
#define __S100 PAGE_READONLY_X
#define __S101 PAGE_READONLY_X
#define __S110 PAGE_SHARED_X
#define __S111 PAGE_SHARED_X

/* Permission masks used for kernel mappings */
#define PAGE_KERNEL __pgprot(_PAGE_BASE | _PAGE_KERNEL_RW)
#define PAGE_KERNEL_NC __pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
				_PAGE_NO_CACHE)
#define PAGE_KERNEL_NCG __pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
				 _PAGE_NO_CACHE | _PAGE_GUARDED)
#define PAGE_KERNEL_X __pgprot(_PAGE_BASE | _PAGE_KERNEL_RWX)
#define PAGE_KERNEL_RO __pgprot(_PAGE_BASE | _PAGE_KERNEL_RO)
#define PAGE_KERNEL_ROX __pgprot(_PAGE_BASE | _PAGE_KERNEL_ROX)

/* Protection used for kernel text. We want the debuggers to be able to
 * set breakpoints anywhere, so don't write protect the kernel text
 * on platforms where such control is possible.
 */
#if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) ||\
	defined(CONFIG_KPROBES) || defined(CONFIG_DYNAMIC_FTRACE)
#define PAGE_KERNEL_TEXT PAGE_KERNEL_X
#else
#define PAGE_KERNEL_TEXT PAGE_KERNEL_ROX
#endif

/* Make modules code happy. We don't set RO yet */
#define PAGE_KERNEL_EXEC PAGE_KERNEL_X
#define PAGE_AGP (PAGE_KERNEL_NC)

#define PMD_BAD_BITS (PTE_TABLE_SIZE-1)
#define PUD_BAD_BITS (PMD_TABLE_SIZE-1)
#define H_PMD_BAD_BITS (PTE_TABLE_SIZE-1)
#define H_PUD_BAD_BITS (PMD_TABLE_SIZE-1)
#ifndef __ASSEMBLY__
#define pmd_bad(pmd) (pmd_val(pmd) & PMD_BAD_BITS)
#define pmd_page_vaddr(pmd) __va(pmd_val(pmd) & ~PMD_MASKED_BITS)

#define pud_bad(pud) (pud_val(pud) & PUD_BAD_BITS)
#define pud_page_vaddr(pud) __va(pud_val(pud) & ~PUD_MASKED_BITS)

/* Pointers in the page table tree are physical addresses */
#define __pgtable_ptr_val(ptr) __pa(ptr)

#define pgd_index(address) (((address) >> (PGDIR_SHIFT)) & (PTRS_PER_PGD - 1))
#define pud_index(address) (((address) >> (PUD_SHIFT)) & (PTRS_PER_PUD - 1))
#define pmd_index(address) (((address) >> (PMD_SHIFT)) & (PTRS_PER_PMD - 1))
#define pte_index(address) (((address) >> (PAGE_SHIFT)) & (PTRS_PER_PTE - 1))
#define hash__pmd_bad(pmd) (pmd_val(pmd) & H_PMD_BAD_BITS)
#define hash__pud_bad(pud) (pud_val(pud) & H_PUD_BAD_BITS)
static inline int hash__pgd_bad(pgd_t pgd)
{
	return (pgd_val(pgd) == 0);
}

extern void hpte_need_flush(struct mm_struct *mm, unsigned long addr,
			    pte_t *ptep, unsigned long pte, int huge);
extern unsigned long htab_convert_pte_flags(unsigned long pteflags);
/* Atomic PTE updates */
static inline unsigned long pte_update(struct mm_struct *mm,
				       unsigned long addr,
				       pte_t *ptep, unsigned long clr,
				       unsigned long set,
				       int huge)
static inline unsigned long hash__pte_update(struct mm_struct *mm,
					     unsigned long addr,
					     pte_t *ptep, unsigned long clr,
					     unsigned long set,
					     int huge)
{
	unsigned long old, tmp;
	__be64 old_be, tmp_be;
	unsigned long old;

	__asm__ __volatile__(
	"1:	ldarx	%0,0,%3		# pte_update\n\
	andi.	%1,%0,%6\n\
	and.	%1,%0,%6\n\
	bne-	1b \n\
	andc	%1,%0,%4 \n\
	or	%1,%1,%7\n\
	stdcx.	%1,0,%3 \n\
	bne-	1b"
	: "=&r" (old), "=&r" (tmp), "=m" (*ptep)
	: "r" (ptep), "r" (clr), "m" (*ptep), "i" (_PAGE_BUSY), "r" (set)
	: "=&r" (old_be), "=&r" (tmp_be), "=m" (*ptep)
	: "r" (ptep), "r" (cpu_to_be64(clr)), "m" (*ptep),
	  "r" (cpu_to_be64(H_PAGE_BUSY)), "r" (cpu_to_be64(set))
	: "cc" );
	/* huge pages use the old page table lock */
	if (!huge)
		assert_pte_locked(mm, addr);

	if (old & _PAGE_HASHPTE)
	old = be64_to_cpu(old_be);
	if (old & H_PAGE_HASHPTE)
		hpte_need_flush(mm, addr, ptep, old, huge);

	return old;
}

static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
					      unsigned long addr, pte_t *ptep)
{
	unsigned long old;

	if ((pte_val(*ptep) & (_PAGE_ACCESSED | _PAGE_HASHPTE)) == 0)
		return 0;
	old = pte_update(mm, addr, ptep, _PAGE_ACCESSED, 0, 0);
	return (old & _PAGE_ACCESSED) != 0;
}
#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
#define ptep_test_and_clear_young(__vma, __addr, __ptep)		\
({									\
	int __r;							\
	__r = __ptep_test_and_clear_young((__vma)->vm_mm, __addr, __ptep); \
	__r;								\
})

#define __HAVE_ARCH_PTEP_SET_WRPROTECT
static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
				      pte_t *ptep)
{

	if ((pte_val(*ptep) & _PAGE_RW) == 0)
		return;

	pte_update(mm, addr, ptep, _PAGE_RW, 0, 0);
}

static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
					   unsigned long addr, pte_t *ptep)
{
	if ((pte_val(*ptep) & _PAGE_RW) == 0)
		return;

	pte_update(mm, addr, ptep, _PAGE_RW, 0, 1);
}

/*
 * We currently remove entries from the hashtable regardless of whether
 * the entry was young or dirty. The generic routines only flush if the
 * entry was young or dirty which is not good enough.
 *
 * We should be more intelligent about this but for the moment we override
 * these functions and force a tlb flush unconditionally
 */
#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
#define ptep_clear_flush_young(__vma, __address, __ptep)		\
({									\
	int __young = __ptep_test_and_clear_young((__vma)->vm_mm, __address, \
						  __ptep);		\
	__young;							\
})

#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
				       unsigned long addr, pte_t *ptep)
{
	unsigned long old = pte_update(mm, addr, ptep, ~0UL, 0, 0);
	return __pte(old);
}

static inline void pte_clear(struct mm_struct *mm, unsigned long addr,
			     pte_t *ptep)
{
	pte_update(mm, addr, ptep, ~0UL, 0, 0);
}


/* Set the dirty and/or accessed bits atomically in a linux PTE, this
 * function doesn't need to flush the hash entry
 */
static inline void __ptep_set_access_flags(pte_t *ptep, pte_t entry)
static inline void hash__ptep_set_access_flags(pte_t *ptep, pte_t entry)
{
	unsigned long bits = pte_val(entry) &
		(_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC |
		 _PAGE_SOFT_DIRTY);
	__be64 old, tmp, val, mask;

	unsigned long old, tmp;
	mask = cpu_to_be64(_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_READ | _PAGE_WRITE |
			   _PAGE_EXEC | _PAGE_SOFT_DIRTY);

	val = pte_raw(entry) & mask;

	__asm__ __volatile__(
	"1:	ldarx	%0,0,%4\n\
	andi.	%1,%0,%6\n\
	and.	%1,%0,%6\n\
	bne-	1b \n\
	or	%0,%3,%0\n\
	stdcx.	%0,0,%4\n\
	bne-	1b"
	:"=&r" (old), "=&r" (tmp), "=m" (*ptep)
	:"r" (bits), "r" (ptep), "m" (*ptep), "i" (_PAGE_BUSY)
	:"r" (val), "r" (ptep), "m" (*ptep), "r" (cpu_to_be64(H_PAGE_BUSY))
	:"cc");
}

static inline int pgd_bad(pgd_t pgd)
static inline int hash__pte_same(pte_t pte_a, pte_t pte_b)
{
	return (pgd_val(pgd) == 0);
	return (((pte_raw(pte_a) ^ pte_raw(pte_b)) & ~cpu_to_be64(_PAGE_HPTEFLAGS)) == 0);
}

#define __HAVE_ARCH_PTE_SAME
#define pte_same(A,B) (((pte_val(A) ^ pte_val(B)) & ~_PAGE_HPTEFLAGS) == 0)
static inline unsigned long pgd_page_vaddr(pgd_t pgd)
static inline int hash__pte_none(pte_t pte)
{
	return (unsigned long)__va(pgd_val(pgd) & ~PGD_MASKED_BITS);
}


/* Generic accessors to PTE bits */
static inline int pte_write(pte_t pte) { return !!(pte_val(pte) & _PAGE_RW);}
static inline int pte_dirty(pte_t pte) { return !!(pte_val(pte) & _PAGE_DIRTY); }
static inline int pte_young(pte_t pte) { return !!(pte_val(pte) & _PAGE_ACCESSED); }
static inline int pte_special(pte_t pte) { return !!(pte_val(pte) & _PAGE_SPECIAL); }
static inline int pte_none(pte_t pte) { return (pte_val(pte) & ~_PTE_NONE_MASK) == 0; }
static inline pgprot_t pte_pgprot(pte_t pte) { return __pgprot(pte_val(pte) & PAGE_PROT_BITS); }

#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
static inline bool pte_soft_dirty(pte_t pte)
{
	return !!(pte_val(pte) & _PAGE_SOFT_DIRTY);
}
static inline pte_t pte_mksoft_dirty(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_SOFT_DIRTY);
}

static inline pte_t pte_clear_soft_dirty(pte_t pte)
{
	return __pte(pte_val(pte) & ~_PAGE_SOFT_DIRTY);
}
#endif /* CONFIG_HAVE_ARCH_SOFT_DIRTY */

#ifdef CONFIG_NUMA_BALANCING
/*
 * These work without NUMA balancing but the kernel does not care. See the
 * comment in include/asm-generic/pgtable.h . On powerpc, this will only
 * work for user pages and always return true for kernel pages.
 */
static inline int pte_protnone(pte_t pte)
{
	return (pte_val(pte) &
		(_PAGE_PRESENT | _PAGE_USER)) == _PAGE_PRESENT;
}
#endif /* CONFIG_NUMA_BALANCING */

static inline int pte_present(pte_t pte)
{
	return !!(pte_val(pte) & _PAGE_PRESENT);
}

/* Conversion functions: convert a page and protection to a page entry,
 * and a page entry and page directory to the page they refer to.
 *
 * Even if PTEs can be unsigned long long, a PFN is always an unsigned
 * long for now.
 */
static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot)
{
	return __pte((((pte_basic_t)(pfn) << PTE_RPN_SHIFT) & PTE_RPN_MASK) |
		     pgprot_val(pgprot));
}

static inline unsigned long pte_pfn(pte_t pte)
{
	return (pte_val(pte) & PTE_RPN_MASK) >> PTE_RPN_SHIFT;
}

/* Generic modifiers for PTE bits */
static inline pte_t pte_wrprotect(pte_t pte)
{
	return __pte(pte_val(pte) & ~_PAGE_RW);
}

static inline pte_t pte_mkclean(pte_t pte)
{
	return __pte(pte_val(pte) & ~_PAGE_DIRTY);
}

static inline pte_t pte_mkold(pte_t pte)
{
	return __pte(pte_val(pte) & ~_PAGE_ACCESSED);
}

static inline pte_t pte_mkwrite(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_RW);
}

static inline pte_t pte_mkdirty(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
}

static inline pte_t pte_mkyoung(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_ACCESSED);
}

static inline pte_t pte_mkspecial(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_SPECIAL);
}

static inline pte_t pte_mkhuge(pte_t pte)
{
	return pte;
}

static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
	return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
	return (pte_val(pte) & ~H_PTE_NONE_MASK) == 0;
}

/* This low level function performs the actual PTE insertion
@@ -487,8 +171,8 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 * a horrible mess that I'm not going to try to clean up now but
 * I'm keeping it in one place rather than spread around
 */
static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
				pte_t *ptep, pte_t pte, int percpu)
static inline void hash__set_pte_at(struct mm_struct *mm, unsigned long addr,
				    pte_t *ptep, pte_t pte, int percpu)
{
	/*
	 * Anything else just stores the PTE normally. That covers all 64-bit
@@ -497,53 +181,6 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
	*ptep = pte;
}

/*
 * Macro to mark a page protection value as "uncacheable".
 */

#define _PAGE_CACHE_CTL (_PAGE_COHERENT | _PAGE_GUARDED | _PAGE_NO_CACHE | \
			 _PAGE_WRITETHRU)

#define pgprot_noncached pgprot_noncached
static inline pgprot_t pgprot_noncached(pgprot_t prot)
{
	return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
			_PAGE_NO_CACHE | _PAGE_GUARDED);
}

#define pgprot_noncached_wc pgprot_noncached_wc
static inline pgprot_t pgprot_noncached_wc(pgprot_t prot)
{
	return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
			_PAGE_NO_CACHE);
}

#define pgprot_cached pgprot_cached
static inline pgprot_t pgprot_cached(pgprot_t prot)
{
	return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
			_PAGE_COHERENT);
}

#define pgprot_cached_wthru pgprot_cached_wthru
static inline pgprot_t pgprot_cached_wthru(pgprot_t prot)
{
	return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
			_PAGE_COHERENT | _PAGE_WRITETHRU);
}

#define pgprot_cached_noncoherent pgprot_cached_noncoherent
static inline pgprot_t pgprot_cached_noncoherent(pgprot_t prot)
{
	return __pgprot(pgprot_val(prot) & ~_PAGE_CACHE_CTL);
}

#define pgprot_writecombine pgprot_writecombine
static inline pgprot_t pgprot_writecombine(pgprot_t prot)
{
	return pgprot_noncached_wc(prot);
}

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
extern void hpte_do_hugepage_flush(struct mm_struct *mm, unsigned long addr,
				   pmd_t *pmdp, unsigned long old_pmd);
@@ -556,6 +193,14 @@ static inline void hpte_do_hugepage_flush(struct mm_struct *mm,
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */


extern int hash__map_kernel_page(unsigned long ea, unsigned long pa,
				 unsigned long flags);
extern int __meminit hash__vmemmap_create_mapping(unsigned long start,
						  unsigned long page_size,
						  unsigned long phys);
extern void hash__vmemmap_remove_mapping(unsigned long start,
					 unsigned long page_size);
#endif /* !__ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_BOOK3S_64_HASH_H */
|
arch/powerpc/include/asm/book3s/64/hugetlb-radix.h (new file, 14 lines)
@@ -0,0 +1,14 @@
#ifndef _ASM_POWERPC_BOOK3S_64_HUGETLB_RADIX_H
#define _ASM_POWERPC_BOOK3S_64_HUGETLB_RADIX_H
/*
 * For radix we want generic code to handle hugetlb. But then if we want
 * both hash and radix to be enabled together we need to work around the
 * limitations.
 */
void radix__flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
void radix__local_flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
extern unsigned long
radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
				 unsigned long len, unsigned long pgoff,
				 unsigned long flags);
#endif
@@ -1,5 +1,5 @@
#ifndef _ASM_POWERPC_MMU_HASH64_H_
#define _ASM_POWERPC_MMU_HASH64_H_
#ifndef _ASM_POWERPC_BOOK3S_64_MMU_HASH_H_
#define _ASM_POWERPC_BOOK3S_64_MMU_HASH_H_
/*
 * PowerPC64 memory management structures
 *
@@ -78,6 +78,10 @@
#define HPTE_V_SECONDARY	ASM_CONST(0x0000000000000002)
#define HPTE_V_VALID		ASM_CONST(0x0000000000000001)

/*
 * ISA 3.0 has a different HPTE format.
 */
#define HPTE_R_3_0_SSIZE_SHIFT	58
#define HPTE_R_PP0		ASM_CONST(0x8000000000000000)
#define HPTE_R_TS		ASM_CONST(0x4000000000000000)
#define HPTE_R_KEY_HI		ASM_CONST(0x3000000000000000)
@@ -115,6 +119,7 @@
#define POWER7_TLB_SETS		128	/* # sets in POWER7 TLB */
#define POWER8_TLB_SETS		512	/* # sets in POWER8 TLB */
#define POWER9_TLB_SETS_HASH	256	/* # sets in POWER9 TLB Hash mode */
#define POWER9_TLB_SETS_RADIX	128	/* # sets in POWER9 TLB Radix mode */

#ifndef __ASSEMBLY__

@@ -127,24 +132,6 @@ extern struct hash_pte *htab_address;
extern unsigned long htab_size_bytes;
extern unsigned long htab_hash_mask;

/*
 * Page size definition
 *
 *    shift : is the "PAGE_SHIFT" value for that page size
 *    sllp  : is a bit mask with the value of SLB L || LP to be or'ed
 *            directly to a slbmte "vsid" value
 *    penc  : is the HPTE encoding mask for the "LP" field:
 *
 */
struct mmu_psize_def
{
	unsigned int	shift;	/* number of bits */
	int		penc[MMU_PAGE_COUNT];	/* HPTE encoding */
	unsigned int	tlbiel;	/* tlbiel supported for that page size */
	unsigned long	avpnm;	/* bits to mask out in AVPN in the HPTE */
	unsigned long	sllp;	/* SLB L||LP (exact mask to use in slbmte) */
};
extern struct mmu_psize_def mmu_psize_defs[MMU_PAGE_COUNT];

static inline int shift_to_mmu_psize(unsigned int shift)
{
@@ -210,11 +197,6 @@ static inline int segment_shift(int ssize)
/*
 * The current system page and segment sizes
 */
extern int mmu_linear_psize;
extern int mmu_virtual_psize;
extern int mmu_vmalloc_psize;
extern int mmu_vmemmap_psize;
extern int mmu_io_psize;
extern int mmu_kernel_ssize;
extern int mmu_highuser_ssize;
extern u16 mmu_slb_size;
@@ -247,7 +229,8 @@ static inline unsigned long hpte_encode_avpn(unsigned long vpn, int psize,
	 */
	v = (vpn >> (23 - VPN_SHIFT)) & ~(mmu_psize_defs[psize].avpnm);
	v <<= HPTE_V_AVPN_SHIFT;
	v |= ((unsigned long) ssize) << HPTE_V_SSIZE_SHIFT;
	if (!cpu_has_feature(CPU_FTR_ARCH_300))
		v |= ((unsigned long) ssize) << HPTE_V_SSIZE_SHIFT;
	return v;
}

@@ -271,8 +254,12 @@ static inline unsigned long hpte_encode_v(unsigned long vpn, int base_psize,
 * aligned for the requested page size
 */
static inline unsigned long hpte_encode_r(unsigned long pa, int base_psize,
					  int actual_psize)
					  int actual_psize, int ssize)
{

	if (cpu_has_feature(CPU_FTR_ARCH_300))
		pa |= ((unsigned long) ssize) << HPTE_R_3_0_SSIZE_SHIFT;

	/* A 4K page needs no special encoding */
	if (actual_psize == MMU_PAGE_4K)
		return pa & HPTE_R_RPN;
@@ -476,7 +463,7 @@ extern void slb_set_size(u16 size);
	add	rt,rt,rx

/* 4 bits per slice and we have one slice per 1TB */
#define SLICE_ARRAY_SIZE	(PGTABLE_RANGE >> 41)
#define SLICE_ARRAY_SIZE	(H_PGTABLE_RANGE >> 41)

#ifndef __ASSEMBLY__

@@ -512,38 +499,6 @@ static inline void subpage_prot_free(struct mm_struct *mm) {}
static inline void subpage_prot_init_new_context(struct mm_struct *mm) { }
#endif /* CONFIG_PPC_SUBPAGE_PROT */

typedef unsigned long mm_context_id_t;
struct spinlock;

typedef struct {
	mm_context_id_t id;
	u16 user_psize;		/* page size index */

#ifdef CONFIG_PPC_MM_SLICES
	u64 low_slices_psize;	/* SLB page size encodings */
	unsigned char high_slices_psize[SLICE_ARRAY_SIZE];
#else
	u16 sllp;		/* SLB page size encoding */
#endif
	unsigned long vdso_base;
#ifdef CONFIG_PPC_SUBPAGE_PROT
	struct subpage_prot_table spt;
#endif /* CONFIG_PPC_SUBPAGE_PROT */
#ifdef CONFIG_PPC_ICSWX
	struct spinlock *cop_lockp;	/* guard acop and cop_pid */
	unsigned long acop;		/* mask of enabled coprocessor types */
	unsigned int cop_pid;		/* pid value used with coprocessors */
#endif /* CONFIG_PPC_ICSWX */
#ifdef CONFIG_PPC_64K_PAGES
	/* for 4K PTE fragment support */
	void *pte_frag;
#endif
#ifdef CONFIG_SPAPR_TCE_IOMMU
	struct list_head iommu_group_mem_list;
#endif
} mm_context_t;


#if 0
/*
 * The code below is equivalent to this function for arguments
@@ -579,7 +534,7 @@ static inline unsigned long get_vsid(unsigned long context, unsigned long ea,
	/*
	 * Bad address. We return VSID 0 for that
	 */
	if ((ea & ~REGION_MASK) >= PGTABLE_RANGE)
	if ((ea & ~REGION_MASK) >= H_PGTABLE_RANGE)
		return 0;

	if (ssize == MMU_SEGSIZE_256M)
@@ -613,4 +568,4 @@ unsigned htab_shift_for_mem_size(unsigned long mem_size);

#endif /* __ASSEMBLY__ */

#endif /* _ASM_POWERPC_MMU_HASH64_H_ */
#endif /* _ASM_POWERPC_BOOK3S_64_MMU_HASH_H_ */
arch/powerpc/include/asm/book3s/64/mmu.h (new file, 137 lines)
@@ -0,0 +1,137 @@
#ifndef _ASM_POWERPC_BOOK3S_64_MMU_H_
#define _ASM_POWERPC_BOOK3S_64_MMU_H_

#ifndef __ASSEMBLY__
/*
 * Page size definition
 *
 *    shift : is the "PAGE_SHIFT" value for that page size
 *    sllp  : is a bit mask with the value of SLB L || LP to be or'ed
 *            directly to a slbmte "vsid" value
 *    penc  : is the HPTE encoding mask for the "LP" field:
 *
 */
struct mmu_psize_def {
	unsigned int	shift;	/* number of bits */
	int		penc[MMU_PAGE_COUNT];	/* HPTE encoding */
	unsigned int	tlbiel;	/* tlbiel supported for that page size */
	unsigned long	avpnm;	/* bits to mask out in AVPN in the HPTE */
	union {
		unsigned long	sllp;	/* SLB L||LP (exact mask to use in slbmte) */
		unsigned long	ap;	/* Ap encoding used by PowerISA 3.0 */
	};
};
extern struct mmu_psize_def mmu_psize_defs[MMU_PAGE_COUNT];

#define radix_enabled() mmu_has_feature(MMU_FTR_RADIX)

#endif /* __ASSEMBLY__ */

/* 64-bit classic hash table MMU */
#include <asm/book3s/64/mmu-hash.h>

#ifndef __ASSEMBLY__
/*
 * ISA 3.0 partition and process table entry format
 */
struct prtb_entry {
	__be64 prtb0;
	__be64 prtb1;
};
extern struct prtb_entry *process_tb;

struct patb_entry {
	__be64 patb0;
	__be64 patb1;
};
extern struct patb_entry *partition_tb;

#define PATB_HR		(1UL << 63)
#define PATB_GR		(1UL << 63)
#define RPDB_MASK	0x0ffffffffffff00fUL
#define RPDB_SHIFT	(1UL << 8)
/*
 * Limit the process table to one PAGE_SIZE table. This
 * also limits the max pid we can support.
 * MAX_USER_CONTEXT * 16 bytes of space.
 */
#define PRTB_SIZE_SHIFT	(CONTEXT_BITS + 4)
/*
 * Power9 currently only supports a 64K partition table size.
 */
#define PATB_SIZE_SHIFT	16

typedef unsigned long mm_context_id_t;
struct spinlock;

typedef struct {
	mm_context_id_t id;
	u16 user_psize;		/* page size index */

#ifdef CONFIG_PPC_MM_SLICES
	u64 low_slices_psize;	/* SLB page size encodings */
	unsigned char high_slices_psize[SLICE_ARRAY_SIZE];
#else
	u16 sllp;		/* SLB page size encoding */
#endif
	unsigned long vdso_base;
#ifdef CONFIG_PPC_SUBPAGE_PROT
	struct subpage_prot_table spt;
#endif /* CONFIG_PPC_SUBPAGE_PROT */
#ifdef CONFIG_PPC_ICSWX
	struct spinlock *cop_lockp;	/* guard acop and cop_pid */
	unsigned long acop;		/* mask of enabled coprocessor types */
	unsigned int cop_pid;		/* pid value used with coprocessors */
#endif /* CONFIG_PPC_ICSWX */
#ifdef CONFIG_PPC_64K_PAGES
	/* for 4K PTE fragment support */
	void *pte_frag;
#endif
#ifdef CONFIG_SPAPR_TCE_IOMMU
	struct list_head iommu_group_mem_list;
#endif
} mm_context_t;

/*
 * The current system page and segment sizes
 */
extern int mmu_linear_psize;
extern int mmu_virtual_psize;
extern int mmu_vmalloc_psize;
extern int mmu_vmemmap_psize;
extern int mmu_io_psize;

/* MMU initialization */
extern void radix_init_native(void);
extern void hash__early_init_mmu(void);
extern void radix__early_init_mmu(void);
static inline void early_init_mmu(void)
{
	if (radix_enabled())
		return radix__early_init_mmu();
	return hash__early_init_mmu();
}
extern void hash__early_init_mmu_secondary(void);
extern void radix__early_init_mmu_secondary(void);
static inline void early_init_mmu_secondary(void)
{
	if (radix_enabled())
		return radix__early_init_mmu_secondary();
	return hash__early_init_mmu_secondary();
}

extern void hash__setup_initial_memory_limit(phys_addr_t first_memblock_base,
					     phys_addr_t first_memblock_size);
extern void radix__setup_initial_memory_limit(phys_addr_t first_memblock_base,
					      phys_addr_t first_memblock_size);
static inline void setup_initial_memory_limit(phys_addr_t first_memblock_base,
					      phys_addr_t first_memblock_size)
{
	if (radix_enabled())
		return radix__setup_initial_memory_limit(first_memblock_base,
							 first_memblock_size);
	return hash__setup_initial_memory_limit(first_memblock_base,
						first_memblock_size);
}
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_BOOK3S_64_MMU_H_ */
arch/powerpc/include/asm/book3s/64/pgalloc.h (new file, 207 lines)
@@ -0,0 +1,207 @@
#ifndef _ASM_POWERPC_BOOK3S_64_PGALLOC_H
#define _ASM_POWERPC_BOOK3S_64_PGALLOC_H
/*
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version
 * 2 of the License, or (at your option) any later version.
 */

#include <linux/slab.h>
#include <linux/cpumask.h>
#include <linux/percpu.h>

struct vmemmap_backing {
	struct vmemmap_backing *list;
	unsigned long phys;
	unsigned long virt_addr;
};
extern struct vmemmap_backing *vmemmap_list;

/*
 * Functions that deal with pagetables that could be at any level of
 * the table need to be passed an "index_size" so they know how to
 * handle allocation.  For PTE pages (which are linked to a struct
 * page for now, and drawn from the main get_free_pages() pool), the
 * allocation size will be (2^index_size * sizeof(pointer)) and
 * allocations are drawn from the kmem_cache in PGT_CACHE(index_size).
 *
 * The maximum index size needs to be big enough to allow any
 * pagetable sizes we need, but small enough to fit in the low bits of
 * any page table pointer.  In other words all pagetables, even tiny
 * ones, must be aligned to allow at least enough low 0 bits to
 * contain this value.  This value is also used as a mask, so it must
 * be one less than a power of two.
 */
#define MAX_PGTABLE_INDEX_SIZE	0xf

extern struct kmem_cache *pgtable_cache[];
#define PGT_CACHE(shift) ({				\
			BUG_ON(!(shift));		\
			pgtable_cache[(shift) - 1];	\
		})

#define PGALLOC_GFP GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO

extern pte_t *pte_fragment_alloc(struct mm_struct *, unsigned long, int);
extern void pte_fragment_free(unsigned long *, int);
extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift);
#ifdef CONFIG_SMP
extern void __tlb_remove_table(void *_table);
#endif

static inline pgd_t *radix__pgd_alloc(struct mm_struct *mm)
{
#ifdef CONFIG_PPC_64K_PAGES
	return (pgd_t *)__get_free_page(PGALLOC_GFP);
#else
	struct page *page;
	page = alloc_pages(PGALLOC_GFP, 4);
	if (!page)
		return NULL;
	return (pgd_t *) page_address(page);
#endif
}

static inline void radix__pgd_free(struct mm_struct *mm, pgd_t *pgd)
{
#ifdef CONFIG_PPC_64K_PAGES
	free_page((unsigned long)pgd);
#else
	free_pages((unsigned long)pgd, 4);
#endif
}

static inline pgd_t *pgd_alloc(struct mm_struct *mm)
{
	if (radix_enabled())
		return radix__pgd_alloc(mm);
	return kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE), GFP_KERNEL);
}

static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
{
	if (radix_enabled())
		return radix__pgd_free(mm, pgd);
	kmem_cache_free(PGT_CACHE(PGD_INDEX_SIZE), pgd);
}

static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgd, pud_t *pud)
{
	pgd_set(pgd, __pgtable_ptr_val(pud) | PGD_VAL_BITS);
}

static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
{
	return kmem_cache_alloc(PGT_CACHE(PUD_INDEX_SIZE),
				GFP_KERNEL|__GFP_REPEAT);
}

static inline void pud_free(struct mm_struct *mm, pud_t *pud)
{
	kmem_cache_free(PGT_CACHE(PUD_INDEX_SIZE), pud);
}

static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
{
	pud_set(pud, __pgtable_ptr_val(pmd) | PUD_VAL_BITS);
}

static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pud,
				  unsigned long address)
{
	pgtable_free_tlb(tlb, pud, PUD_INDEX_SIZE);
}

static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
{
	return kmem_cache_alloc(PGT_CACHE(PMD_CACHE_INDEX),
				GFP_KERNEL|__GFP_REPEAT);
}

static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
{
	kmem_cache_free(PGT_CACHE(PMD_CACHE_INDEX), pmd);
}

static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd,
				  unsigned long address)
{
	return pgtable_free_tlb(tlb, pmd, PMD_CACHE_INDEX);
}

static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
				       pte_t *pte)
{
	pmd_set(pmd, __pgtable_ptr_val(pte) | PMD_VAL_BITS);
}

static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
				pgtable_t pte_page)
{
	pmd_set(pmd, __pgtable_ptr_val(pte_page) | PMD_VAL_BITS);
}

static inline pgtable_t pmd_pgtable(pmd_t pmd)
{
	return (pgtable_t)pmd_page_vaddr(pmd);
}

#ifdef CONFIG_PPC_4K_PAGES
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
					  unsigned long address)
{
	return (pte_t *)__get_free_page(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
}

static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
				      unsigned long address)
{
	struct page *page;
	pte_t *pte;

	pte = pte_alloc_one_kernel(mm, address);
	if (!pte)
		return NULL;
	page = virt_to_page(pte);
	if (!pgtable_page_ctor(page)) {
		__free_page(page);
		return NULL;
	}
	return pte;
}
#else /* if CONFIG_PPC_64K_PAGES */

static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
					  unsigned long address)
{
	return (pte_t *)pte_fragment_alloc(mm, address, 1);
}

static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
				      unsigned long address)
{
	return (pgtable_t)pte_fragment_alloc(mm, address, 0);
}
#endif

static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
{
	pte_fragment_free((unsigned long *)pte, 1);
}

static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
{
	pte_fragment_free((unsigned long *)ptepage, 0);
}

static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
				  unsigned long address)
{
	tlb_flush_pgtable(tlb, address);
	pgtable_free_tlb(tlb, table, 0);
}

#define check_pgt_cache()	do { } while (0)

#endif /* _ASM_POWERPC_BOOK3S_64_PGALLOC_H */
arch/powerpc/include/asm/book3s/64/pgtable-4k.h (new file, 53 lines)
@@ -0,0 +1,53 @@
#ifndef _ASM_POWERPC_BOOK3S_64_PGTABLE_4K_H
#define _ASM_POWERPC_BOOK3S_64_PGTABLE_4K_H
/*
 * hash 4k can't share hugetlb and also doesn't support THP
 */
#ifndef __ASSEMBLY__
#ifdef CONFIG_HUGETLB_PAGE
static inline int pmd_huge(pmd_t pmd)
{
	/*
	 * leaf pte for huge page
	 */
	if (radix_enabled())
		return !!(pmd_val(pmd) & _PAGE_PTE);
	return 0;
}

static inline int pud_huge(pud_t pud)
{
	/*
	 * leaf pte for huge page
	 */
	if (radix_enabled())
		return !!(pud_val(pud) & _PAGE_PTE);
	return 0;
}

static inline int pgd_huge(pgd_t pgd)
{
	/*
	 * leaf pte for huge page
	 */
	if (radix_enabled())
		return !!(pgd_val(pgd) & _PAGE_PTE);
	return 0;
}
#define pgd_huge pgd_huge
/*
 * With radix, we have hugepage ptes in the pud and pmd entries. We don't
 * need to set up a hugepage directory for them. Our pte and page directory
 * format enables us to have this enabled.
 */
static inline int hugepd_ok(hugepd_t hpd)
{
	if (radix_enabled())
		return 0;
	return hash__hugepd_ok(hpd);
}
#define is_hugepd(hpd)	(hugepd_ok(hpd))
#endif /* CONFIG_HUGETLB_PAGE */
#endif /* __ASSEMBLY__ */

#endif /*_ASM_POWERPC_BOOK3S_64_PGTABLE_4K_H */
arch/powerpc/include/asm/book3s/64/pgtable-64k.h (new file, 64 lines)
@@ -0,0 +1,64 @@
#ifndef _ASM_POWERPC_BOOK3S_64_PGTABLE_64K_H
#define _ASM_POWERPC_BOOK3S_64_PGTABLE_64K_H

#ifndef __ASSEMBLY__
#ifdef CONFIG_HUGETLB_PAGE
/*
 * We have PGD_INDEX_SIZE = 12 and PTE_INDEX_SIZE = 8, so that we can have
 * a 16GB hugepage pte in the PGD and a 16MB hugepage pte at the PMD;
 *
 * Defined in such a way that we can optimize away the code block at build
 * time if CONFIG_HUGETLB_PAGE=n.
 */
static inline int pmd_huge(pmd_t pmd)
{
	/*
	 * leaf pte for huge page
	 */
	return !!(pmd_val(pmd) & _PAGE_PTE);
}

static inline int pud_huge(pud_t pud)
{
	/*
	 * leaf pte for huge page
	 */
	return !!(pud_val(pud) & _PAGE_PTE);
}

static inline int pgd_huge(pgd_t pgd)
{
	/*
	 * leaf pte for huge page
	 */
	return !!(pgd_val(pgd) & _PAGE_PTE);
}
#define pgd_huge pgd_huge

#ifdef CONFIG_DEBUG_VM
extern int hugepd_ok(hugepd_t hpd);
#define is_hugepd(hpd)	(hugepd_ok(hpd))
#else
/*
 * With 64k page size, we have hugepage ptes in the pgd and pmd entries. We
 * don't need to set up a hugepage directory for them. Our pte and page
 * directory format enables us to have this enabled.
 */
static inline int hugepd_ok(hugepd_t hpd)
{
	return 0;
}
#define is_hugepd(pdep)	0
#endif /* CONFIG_DEBUG_VM */

#endif /* CONFIG_HUGETLB_PAGE */

static inline int remap_4k_pfn(struct vm_area_struct *vma, unsigned long addr,
			       unsigned long pfn, pgprot_t prot)
{
	if (radix_enabled())
		BUG();
	return hash__remap_4k_pfn(vma, addr, pfn, prot);
}
#endif /* __ASSEMBLY__ */
#endif /*_ASM_POWERPC_BOOK3S_64_PGTABLE_64K_H */
@ -1,13 +1,247 @@
|
||||
#ifndef _ASM_POWERPC_BOOK3S_64_PGTABLE_H_
|
||||
#define _ASM_POWERPC_BOOK3S_64_PGTABLE_H_
|
||||
|
||||
/*
|
||||
* This file contains the functions and defines necessary to modify and use
|
||||
* the ppc64 hashed page table.
|
||||
* Common bits between hash and Radix page table
|
||||
*/
|
||||
#define _PAGE_BIT_SWAP_TYPE 0
|
||||
|
||||
#define _PAGE_EXEC 0x00001 /* execute permission */
|
||||
#define _PAGE_WRITE 0x00002 /* write access allowed */
|
||||
#define _PAGE_READ 0x00004 /* read access allowed */
|
||||
#define _PAGE_RW (_PAGE_READ | _PAGE_WRITE)
|
||||
#define _PAGE_RWX (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)
|
||||
#define _PAGE_PRIVILEGED 0x00008 /* kernel access only */
|
||||
#define _PAGE_SAO 0x00010 /* Strong access order */
|
||||
#define _PAGE_NON_IDEMPOTENT 0x00020 /* non idempotent memory */
|
||||
#define _PAGE_TOLERANT 0x00030 /* tolerant memory, cache inhibited */
|
||||
#define _PAGE_DIRTY 0x00080 /* C: page changed */
|
||||
#define _PAGE_ACCESSED 0x00100 /* R: page referenced */
|
||||
/*
|
||||
* Software bits
|
||||
*/
|
||||
#define _RPAGE_SW0 0x2000000000000000UL
|
||||
#define _RPAGE_SW1 0x00800
|
||||
#define _RPAGE_SW2 0x00400
|
||||
#define _RPAGE_SW3 0x00200
|
||||
#ifdef CONFIG_MEM_SOFT_DIRTY
|
||||
#define _PAGE_SOFT_DIRTY _RPAGE_SW3 /* software: software dirty tracking */
|
||||
#else
|
||||
#define _PAGE_SOFT_DIRTY 0x00000
|
||||
#endif
|
||||
#define _PAGE_SPECIAL _RPAGE_SW2 /* software: special page */
|
||||
|
||||
|
||||
#define _PAGE_PTE (1ul << 62) /* distinguishes PTEs from pointers */
|
||||
#define _PAGE_PRESENT (1ul << 63) /* pte contains a translation */
|
||||
/*
|
||||
* Drivers request for cache inhibited pte mapping using _PAGE_NO_CACHE
|
||||
* Instead of fixing all of them, add an alternate define which
|
||||
* maps CI pte mapping.
|
||||
*/
|
||||
#define _PAGE_NO_CACHE _PAGE_TOLERANT
|
||||
/*
|
||||
* We support 57 bit real address in pte. Clear everything above 57, and
|
||||
* every thing below PAGE_SHIFT;
|
||||
*/
|
||||
#define PTE_RPN_MASK (((1UL << 57) - 1) & (PAGE_MASK))
|
||||
/*
|
||||
* set of bits not changed in pmd_modify. Even though we have hash specific bits
|
||||
* in here, on radix we expect them to be zero.
|
||||
*/
|
||||
#define _HPAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
|
||||
_PAGE_ACCESSED | H_PAGE_THP_HUGE | _PAGE_PTE | \
|
||||
_PAGE_SOFT_DIRTY)
|
||||
/*
|
||||
* user access blocked by key
|
||||
*/
|
||||
#define _PAGE_KERNEL_RW (_PAGE_PRIVILEGED | _PAGE_RW | _PAGE_DIRTY)
|
||||
#define _PAGE_KERNEL_RO (_PAGE_PRIVILEGED | _PAGE_READ)
|
||||
#define _PAGE_KERNEL_RWX (_PAGE_PRIVILEGED | _PAGE_DIRTY | \
|
||||
_PAGE_RW | _PAGE_EXEC)
|
||||
/*
|
||||
* No page size encoding in the linux PTE
|
||||
*/
|
||||
#define _PAGE_PSIZE 0
|
||||
/*
|
||||
* _PAGE_CHG_MASK masks of bits that are to be preserved across
|
||||
* pgprot changes
|
||||
*/
|
||||
#define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
|
||||
_PAGE_ACCESSED | _PAGE_SPECIAL | _PAGE_PTE | \
|
||||
_PAGE_SOFT_DIRTY)
|
||||
/*
|
||||
* Mask of bits returned by pte_pgprot()
|
||||
*/
|
||||
#define PAGE_PROT_BITS (_PAGE_SAO | _PAGE_NON_IDEMPOTENT | _PAGE_TOLERANT | \
|
||||
H_PAGE_4K_PFN | _PAGE_PRIVILEGED | _PAGE_ACCESSED | \
|
||||
_PAGE_READ | _PAGE_WRITE | _PAGE_DIRTY | _PAGE_EXEC | \
|
||||
_PAGE_SOFT_DIRTY)
|
||||
/*
|
||||
* We define 2 sets of base prot bits, one for basic pages (ie,
|
||||
* cacheable kernel and user pages) and one for non cacheable
|
||||
* pages. We always set _PAGE_COHERENT when SMP is enabled or
|
||||
* the processor might need it for DMA coherency.
|
||||
*/
|
||||
#define _PAGE_BASE_NC (_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_PSIZE)
|
||||
#define _PAGE_BASE (_PAGE_BASE_NC)
|
||||
|
||||
/* Permission masks used to generate the __P and __S table,
|
||||
*
|
||||
* Note:__pgprot is defined in arch/powerpc/include/asm/page.h
|
||||
*
|
||||
* Write permissions imply read permissions for now (we could make write-only
|
||||
* pages on BookE but we don't bother for now). Execute permission control is
|
||||
* possible on platforms that define _PAGE_EXEC
|
||||
*
|
||||
* Note due to the way vm flags are laid out, the bits are XWR
|
||||
*/
|
||||
#define PAGE_NONE __pgprot(_PAGE_BASE | _PAGE_PRIVILEGED)
|
||||
#define PAGE_SHARED __pgprot(_PAGE_BASE | _PAGE_RW)
|
||||
#define PAGE_SHARED_X __pgprot(_PAGE_BASE | _PAGE_RW | _PAGE_EXEC)
|
||||
#define PAGE_COPY __pgprot(_PAGE_BASE | _PAGE_READ)
|
||||
#define PAGE_COPY_X __pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_EXEC)
|
||||
#define PAGE_READONLY __pgprot(_PAGE_BASE | _PAGE_READ)
|
||||
#define PAGE_READONLY_X __pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_EXEC)
|
||||
|
||||
#define __P000 PAGE_NONE
|
||||
#define __P001 PAGE_READONLY
|
||||
#define __P010 PAGE_COPY
|
||||
#define __P011 PAGE_COPY
|
||||
#define __P100 PAGE_READONLY_X
|
||||
#define __P101 PAGE_READONLY_X
|
||||
#define __P110 PAGE_COPY_X
|
||||
#define __P111 PAGE_COPY_X
|
||||
|
||||
#define __S000 PAGE_NONE
|
||||
#define __S001 PAGE_READONLY
|
||||
#define __S010 PAGE_SHARED
|
||||
#define __S011 PAGE_SHARED
|
||||
#define __S100 PAGE_READONLY_X
|
||||
#define __S101 PAGE_READONLY_X
|
||||
#define __S110 PAGE_SHARED_X
|
||||
#define __S111 PAGE_SHARED_X
|
||||
|
||||
/* Permission masks used for kernel mappings */
|
||||
#define PAGE_KERNEL __pgprot(_PAGE_BASE | _PAGE_KERNEL_RW)
|
||||
#define PAGE_KERNEL_NC __pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
|
||||
_PAGE_TOLERANT)
|
||||
#define PAGE_KERNEL_NCG __pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
|
||||
_PAGE_NON_IDEMPOTENT)
|
||||
#define PAGE_KERNEL_X __pgprot(_PAGE_BASE | _PAGE_KERNEL_RWX)
|
||||
#define PAGE_KERNEL_RO __pgprot(_PAGE_BASE | _PAGE_KERNEL_RO)
|
||||
#define PAGE_KERNEL_ROX __pgprot(_PAGE_BASE | _PAGE_KERNEL_ROX)
|
||||
|
||||
/*
|
||||
* Protection used for kernel text. We want the debuggers to be able to
|
||||
* set breakpoints anywhere, so don't write protect the kernel text
|
||||
* on platforms where such control is possible.
|
||||
*/
|
||||
#if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) || \
|
||||
defined(CONFIG_KPROBES) || defined(CONFIG_DYNAMIC_FTRACE)
|
||||
#define PAGE_KERNEL_TEXT PAGE_KERNEL_X
|
||||
#else
|
||||
#define PAGE_KERNEL_TEXT PAGE_KERNEL_ROX
|
||||
#endif
|
||||
|
||||
/* Make modules code happy. We don't set RO yet */
|
||||
#define PAGE_KERNEL_EXEC PAGE_KERNEL_X
|
||||
#define PAGE_AGP (PAGE_KERNEL_NC)
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
/*
|
||||
* page table defines
|
||||
*/
|
||||
extern unsigned long __pte_index_size;
|
||||
extern unsigned long __pmd_index_size;
|
||||
extern unsigned long __pud_index_size;
|
||||
extern unsigned long __pgd_index_size;
|
||||
extern unsigned long __pmd_cache_index;
|
||||
#define PTE_INDEX_SIZE __pte_index_size
|
||||
#define PMD_INDEX_SIZE __pmd_index_size
|
||||
#define PUD_INDEX_SIZE __pud_index_size
|
||||
#define PGD_INDEX_SIZE __pgd_index_size
|
||||
#define PMD_CACHE_INDEX __pmd_cache_index
|
||||
/*
|
||||
* Because of use of pte fragments and THP, size of page table
|
||||
* are not always derived out of index size above.
|
||||
*/
|
||||
extern unsigned long __pte_table_size;
|
||||
extern unsigned long __pmd_table_size;
|
||||
extern unsigned long __pud_table_size;
|
||||
extern unsigned long __pgd_table_size;
|
||||
#define PTE_TABLE_SIZE __pte_table_size
|
||||
#define PMD_TABLE_SIZE __pmd_table_size
|
||||
#define PUD_TABLE_SIZE __pud_table_size
|
||||
#define PGD_TABLE_SIZE __pgd_table_size
|
||||
|
extern unsigned long __pmd_val_bits;
extern unsigned long __pud_val_bits;
extern unsigned long __pgd_val_bits;
#define PMD_VAL_BITS	__pmd_val_bits
#define PUD_VAL_BITS	__pud_val_bits
#define PGD_VAL_BITS	__pgd_val_bits

extern unsigned long __pte_frag_nr;
#define PTE_FRAG_NR	__pte_frag_nr
extern unsigned long __pte_frag_size_shift;
#define PTE_FRAG_SIZE_SHIFT	__pte_frag_size_shift
#define PTE_FRAG_SIZE	(1UL << PTE_FRAG_SIZE_SHIFT)
/*
 * Pgtable size used by swapper, init in asm code
 */
#define MAX_PGD_TABLE_SIZE	(sizeof(pgd_t) << RADIX_PGD_INDEX_SIZE)

#define PTRS_PER_PTE	(1 << PTE_INDEX_SIZE)
#define PTRS_PER_PMD	(1 << PMD_INDEX_SIZE)
#define PTRS_PER_PUD	(1 << PUD_INDEX_SIZE)
#define PTRS_PER_PGD	(1 << PGD_INDEX_SIZE)

/* PMD_SHIFT determines what a second-level page table entry can map */
#define PMD_SHIFT	(PAGE_SHIFT + PTE_INDEX_SIZE)
#define PMD_SIZE	(1UL << PMD_SHIFT)
#define PMD_MASK	(~(PMD_SIZE-1))

/* PUD_SHIFT determines what a third-level page table entry can map */
#define PUD_SHIFT	(PMD_SHIFT + PMD_INDEX_SIZE)
#define PUD_SIZE	(1UL << PUD_SHIFT)
#define PUD_MASK	(~(PUD_SIZE-1))

/* PGDIR_SHIFT determines what a fourth-level page table entry can map */
#define PGDIR_SHIFT	(PUD_SHIFT + PUD_INDEX_SIZE)
#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
#define PGDIR_MASK	(~(PGDIR_SIZE-1))

/* Bits to mask out from a PMD to get to the PTE page */
#define PMD_MASKED_BITS		0xc0000000000000ffUL
/* Bits to mask out from a PUD to get to the PMD page */
#define PUD_MASKED_BITS		0xc0000000000000ffUL
/* Bits to mask out from a PGD to get to the PUD page */
#define PGD_MASKED_BITS		0xc0000000000000ffUL

extern unsigned long __vmalloc_start;
extern unsigned long __vmalloc_end;
#define VMALLOC_START	__vmalloc_start
#define VMALLOC_END	__vmalloc_end

extern unsigned long __kernel_virt_start;
extern unsigned long __kernel_virt_size;
#define KERN_VIRT_START	__kernel_virt_start
#define KERN_VIRT_SIZE	__kernel_virt_size
extern struct page *vmemmap;
extern unsigned long ioremap_bot;
#endif /* __ASSEMBLY__ */

#include <asm/book3s/64/hash.h>
#include <asm/barrier.h>
#include <asm/book3s/64/radix.h>

#ifdef CONFIG_PPC_64K_PAGES
#include <asm/book3s/64/pgtable-64k.h>
#else
#include <asm/book3s/64/pgtable-4k.h>
#endif
/*
 * The second half of the kernel virtual space is used for IO mappings,
 * it's itself carved into the PIO region (ISA and PHB IO space) and
@@ -26,8 +260,6 @@
#define IOREMAP_BASE	(PHB_IO_END)
#define IOREMAP_END	(KERN_VIRT_START + KERN_VIRT_SIZE)

#define vmemmap			((struct page *)VMEMMAP_BASE)

/* Advertise special mapping type for AGP */
#define HAVE_PAGE_AGP

@@ -45,7 +277,7 @@

#define __real_pte(e,p)		((real_pte_t){(e)})
#define __rpte_to_pte(r)	((r).pte)
#define __rpte_to_hidx(r,index)	(pte_val(__rpte_to_pte(r)) >> H_PAGE_F_GIX_SHIFT)

#define pte_iterate_hashed_subpages(rpte, psize, va, index, shift)	\
	do {								\
@@ -62,6 +294,327 @@

#endif /* __real_pte */

static inline unsigned long pte_update(struct mm_struct *mm, unsigned long addr,
				       pte_t *ptep, unsigned long clr,
				       unsigned long set, int huge)
{
	if (radix_enabled())
		return radix__pte_update(mm, addr, ptep, clr, set, huge);
	return hash__pte_update(mm, addr, ptep, clr, set, huge);
}
/*
 * For hash even if we have _PAGE_ACCESSED = 0, we do a pte_update.
 * We currently remove entries from the hashtable regardless of whether
 * the entry was young or dirty.
 *
 * We should be more intelligent about this but for the moment we override
 * these functions and force a tlb flush unconditionally.
 * For radix: H_PAGE_HASHPTE should be zero. Hence we can use the same
 * function for both hash and radix.
 */
static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
					      unsigned long addr, pte_t *ptep)
{
	unsigned long old;

	if ((pte_val(*ptep) & (_PAGE_ACCESSED | H_PAGE_HASHPTE)) == 0)
		return 0;
	old = pte_update(mm, addr, ptep, _PAGE_ACCESSED, 0, 0);
	return (old & _PAGE_ACCESSED) != 0;
}

#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
#define ptep_test_and_clear_young(__vma, __addr, __ptep)		\
({									\
	int __r;							\
	__r = __ptep_test_and_clear_young((__vma)->vm_mm, __addr, __ptep); \
	__r;								\
})

#define __HAVE_ARCH_PTEP_SET_WRPROTECT
static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
				      pte_t *ptep)
{
	if ((pte_val(*ptep) & _PAGE_WRITE) == 0)
		return;

	pte_update(mm, addr, ptep, _PAGE_WRITE, 0, 0);
}

static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
					   unsigned long addr, pte_t *ptep)
{
	if ((pte_val(*ptep) & _PAGE_WRITE) == 0)
		return;

	pte_update(mm, addr, ptep, _PAGE_WRITE, 0, 1);
}

#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
				       unsigned long addr, pte_t *ptep)
{
	unsigned long old = pte_update(mm, addr, ptep, ~0UL, 0, 0);
	return __pte(old);
}

static inline void pte_clear(struct mm_struct *mm, unsigned long addr,
			     pte_t *ptep)
{
	pte_update(mm, addr, ptep, ~0UL, 0, 0);
}
|
||||
static inline int pte_write(pte_t pte) { return !!(pte_val(pte) & _PAGE_WRITE);}
|
||||
static inline int pte_dirty(pte_t pte) { return !!(pte_val(pte) & _PAGE_DIRTY); }
|
||||
static inline int pte_young(pte_t pte) { return !!(pte_val(pte) & _PAGE_ACCESSED); }
|
||||
static inline int pte_special(pte_t pte) { return !!(pte_val(pte) & _PAGE_SPECIAL); }
|
||||
static inline pgprot_t pte_pgprot(pte_t pte) { return __pgprot(pte_val(pte) & PAGE_PROT_BITS); }
|
||||
|
||||
#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
|
||||
static inline bool pte_soft_dirty(pte_t pte)
|
||||
{
|
||||
return !!(pte_val(pte) & _PAGE_SOFT_DIRTY);
|
||||
}
|
||||
static inline pte_t pte_mksoft_dirty(pte_t pte)
|
||||
{
|
||||
return __pte(pte_val(pte) | _PAGE_SOFT_DIRTY);
|
||||
}
|
||||
|
||||
static inline pte_t pte_clear_soft_dirty(pte_t pte)
|
||||
{
|
||||
return __pte(pte_val(pte) & ~_PAGE_SOFT_DIRTY);
|
||||
}
|
||||
#endif /* CONFIG_HAVE_ARCH_SOFT_DIRTY */
|
||||
|
||||
#ifdef CONFIG_NUMA_BALANCING
|
||||
/*
|
||||
* These work without NUMA balancing but the kernel does not care. See the
|
||||
* comment in include/asm-generic/pgtable.h . On powerpc, this will only
|
||||
* work for user pages and always return true for kernel pages.
|
||||
*/
|
||||
static inline int pte_protnone(pte_t pte)
|
||||
{
|
||||
return (pte_val(pte) & (_PAGE_PRESENT | _PAGE_PRIVILEGED)) ==
|
||||
(_PAGE_PRESENT | _PAGE_PRIVILEGED);
|
||||
}
|
||||
#endif /* CONFIG_NUMA_BALANCING */
|
||||
|
||||
static inline int pte_present(pte_t pte)
|
||||
{
|
||||
return !!(pte_val(pte) & _PAGE_PRESENT);
|
||||
}
|
||||
/*
|
||||
* Conversion functions: convert a page and protection to a page entry,
|
||||
* and a page entry and page directory to the page they refer to.
|
||||
*
|
||||
* Even if PTEs can be unsigned long long, a PFN is always an unsigned
|
||||
* long for now.
|
||||
*/
|
||||
static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot)
|
||||
{
|
||||
return __pte((((pte_basic_t)(pfn) << PAGE_SHIFT) & PTE_RPN_MASK) |
|
||||
pgprot_val(pgprot));
|
||||
}
|
static inline unsigned long pte_pfn(pte_t pte)
{
	return (pte_val(pte) & PTE_RPN_MASK) >> PAGE_SHIFT;
}

/* Generic modifiers for PTE bits */
static inline pte_t pte_wrprotect(pte_t pte)
{
	return __pte(pte_val(pte) & ~_PAGE_WRITE);
}

static inline pte_t pte_mkclean(pte_t pte)
{
	return __pte(pte_val(pte) & ~_PAGE_DIRTY);
}

static inline pte_t pte_mkold(pte_t pte)
{
	return __pte(pte_val(pte) & ~_PAGE_ACCESSED);
}

static inline pte_t pte_mkwrite(pte_t pte)
{
	/*
	 * write implies read, hence set both
	 */
	return __pte(pte_val(pte) | _PAGE_RW);
}

static inline pte_t pte_mkdirty(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
}

static inline pte_t pte_mkyoung(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_ACCESSED);
}

static inline pte_t pte_mkspecial(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_SPECIAL);
}

static inline pte_t pte_mkhuge(pte_t pte)
{
	return pte;
}

static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
	/* FIXME!! check whether this need to be a conditional */
	return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
}

static inline bool pte_user(pte_t pte)
{
	return !(pte_val(pte) & _PAGE_PRIVILEGED);
}

/* Encode and de-code a swap entry */
#define MAX_SWAPFILES_CHECK() do { \
	BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS); \
	/* \
	 * Don't have overlapping bits with _PAGE_HPTEFLAGS. \
	 * We filter HPTEFLAGS on set_pte. \
	 */ \
	BUILD_BUG_ON(_PAGE_HPTEFLAGS & (0x1f << _PAGE_BIT_SWAP_TYPE)); \
	BUILD_BUG_ON(_PAGE_HPTEFLAGS & _PAGE_SWP_SOFT_DIRTY); \
	} while (0)
/*
 * For a pte we don't need to handle RADIX_TREE_EXCEPTIONAL_SHIFT.
 */
#define SWP_TYPE_BITS 5
#define __swp_type(x)		(((x).val >> _PAGE_BIT_SSWAP_TYPE_UNUSED_PLACEHOLDER) /* see below */
#define __swp_type(x)		(((x).val >> _PAGE_BIT_SWAP_TYPE) \
				& ((1UL << SWP_TYPE_BITS) - 1))
#define __swp_offset(x)		(((x).val & PTE_RPN_MASK) >> PAGE_SHIFT)
#define __swp_entry(type, offset)	((swp_entry_t) { \
				((type) << _PAGE_BIT_SWAP_TYPE) \
				| (((offset) << PAGE_SHIFT) & PTE_RPN_MASK)})
/*
 * swp_entry_t must be independent of pte bits. We build a swp_entry_t from
 * swap type and offset we get from swap and convert that to pte to find a
 * matching pte in linux page table.
 * Clear bits not found in swap entries here.
 */
#define __pte_to_swp_entry(pte)	((swp_entry_t) { pte_val((pte)) & ~_PAGE_PTE })
#define __swp_entry_to_pte(x)	__pte((x).val | _PAGE_PTE)

#ifdef CONFIG_MEM_SOFT_DIRTY
#define _PAGE_SWP_SOFT_DIRTY	(1UL << (SWP_TYPE_BITS + _PAGE_BIT_SWAP_TYPE))
#else
#define _PAGE_SWP_SOFT_DIRTY	0UL
#endif /* CONFIG_MEM_SOFT_DIRTY */

#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
static inline pte_t pte_swp_mksoft_dirty(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_SWP_SOFT_DIRTY);
}
static inline bool pte_swp_soft_dirty(pte_t pte)
{
	return !!(pte_val(pte) & _PAGE_SWP_SOFT_DIRTY);
}
static inline pte_t pte_swp_clear_soft_dirty(pte_t pte)
{
	return __pte(pte_val(pte) & ~_PAGE_SWP_SOFT_DIRTY);
}
#endif /* CONFIG_HAVE_ARCH_SOFT_DIRTY */

static inline bool check_pte_access(unsigned long access, unsigned long ptev)
{
	/*
	 * This checks for the _PAGE_RWX and _PAGE_PRESENT bits.
	 */
	if (access & ~ptev)
		return false;
	/*
	 * This checks for access to privileged space.
	 */
	if ((access & _PAGE_PRIVILEGED) != (ptev & _PAGE_PRIVILEGED))
		return false;

	return true;
}
/*
 * Generic functions with hash/radix callbacks
 */

static inline void __ptep_set_access_flags(pte_t *ptep, pte_t entry)
{
	if (radix_enabled())
		return radix__ptep_set_access_flags(ptep, entry);
	return hash__ptep_set_access_flags(ptep, entry);
}

#define __HAVE_ARCH_PTE_SAME
static inline int pte_same(pte_t pte_a, pte_t pte_b)
{
	if (radix_enabled())
		return radix__pte_same(pte_a, pte_b);
	return hash__pte_same(pte_a, pte_b);
}

static inline int pte_none(pte_t pte)
{
	if (radix_enabled())
		return radix__pte_none(pte);
	return hash__pte_none(pte);
}

static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
				pte_t *ptep, pte_t pte, int percpu)
{
	if (radix_enabled())
		return radix__set_pte_at(mm, addr, ptep, pte, percpu);
	return hash__set_pte_at(mm, addr, ptep, pte, percpu);
}

#define _PAGE_CACHE_CTL	(_PAGE_NON_IDEMPOTENT | _PAGE_TOLERANT)

#define pgprot_noncached pgprot_noncached
static inline pgprot_t pgprot_noncached(pgprot_t prot)
{
	return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
			_PAGE_NON_IDEMPOTENT);
}

#define pgprot_noncached_wc pgprot_noncached_wc
static inline pgprot_t pgprot_noncached_wc(pgprot_t prot)
{
	return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
			_PAGE_TOLERANT);
}

#define pgprot_cached pgprot_cached
static inline pgprot_t pgprot_cached(pgprot_t prot)
{
	return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL));
}

#define pgprot_writecombine pgprot_writecombine
static inline pgprot_t pgprot_writecombine(pgprot_t prot)
{
	return pgprot_noncached_wc(prot);
}
/*
 * Check whether a pte mapping has the cache-inhibited property.
 */
static inline bool pte_ci(pte_t pte)
{
	unsigned long pte_v = pte_val(pte);

	if (((pte_v & _PAGE_CACHE_CTL) == _PAGE_TOLERANT) ||
	    ((pte_v & _PAGE_CACHE_CTL) == _PAGE_NON_IDEMPOTENT))
		return true;
	return false;
}

static inline void pmd_set(pmd_t *pmdp, unsigned long val)
{
	*pmdp = __pmd(val);
@@ -75,6 +628,13 @@ static inline void pmd_clear(pmd_t *pmdp)
#define pmd_none(pmd)		(!pmd_val(pmd))
#define pmd_present(pmd)	(!pmd_none(pmd))

static inline int pmd_bad(pmd_t pmd)
{
	if (radix_enabled())
		return radix__pmd_bad(pmd);
	return hash__pmd_bad(pmd);
}

static inline void pud_set(pud_t *pudp, unsigned long val)
{
	*pudp = __pud(val);
@@ -100,6 +660,15 @@ static inline pud_t pte_pud(pte_t pte)
	return __pud(pte_val(pte));
}
#define pud_write(pud)		pte_write(pud_pte(pud))

static inline int pud_bad(pud_t pud)
{
	if (radix_enabled())
		return radix__pud_bad(pud);
	return hash__pud_bad(pud);
}

#define pgd_write(pgd)		pte_write(pgd_pte(pgd))
static inline void pgd_set(pgd_t *pgdp, unsigned long val)
{
@@ -124,8 +693,27 @@ static inline pgd_t pte_pgd(pte_t pte)
	return __pgd(pte_val(pte));
}

static inline int pgd_bad(pgd_t pgd)
{
	if (radix_enabled())
		return radix__pgd_bad(pgd);
	return hash__pgd_bad(pgd);
}

extern struct page *pgd_page(pgd_t pgd);

/* Pointers in the page table tree are physical addresses */
#define __pgtable_ptr_val(ptr)	__pa(ptr)

#define pmd_page_vaddr(pmd)	__va(pmd_val(pmd) & ~PMD_MASKED_BITS)
#define pud_page_vaddr(pud)	__va(pud_val(pud) & ~PUD_MASKED_BITS)
#define pgd_page_vaddr(pgd)	__va(pgd_val(pgd) & ~PGD_MASKED_BITS)

#define pgd_index(address)	(((address) >> (PGDIR_SHIFT)) & (PTRS_PER_PGD - 1))
#define pud_index(address)	(((address) >> (PUD_SHIFT)) & (PTRS_PER_PUD - 1))
#define pmd_index(address)	(((address) >> (PMD_SHIFT)) & (PTRS_PER_PMD - 1))
#define pte_index(address)	(((address) >> (PAGE_SHIFT)) & (PTRS_PER_PTE - 1))

/*
 * Find an entry in a page-table-directory. We combine the address region
 * (the high order N bits) and the pgd portion of the address.
@@ -156,74 +744,42 @@ extern struct page *pgd_page(pgd_t pgd);
#define pgd_ERROR(e) \
	pr_err("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))

void pgtable_cache_add(unsigned shift, void (*ctor)(void *));
void pgtable_cache_init(void);

static inline int map_kernel_page(unsigned long ea, unsigned long pa,
				  unsigned long flags)
{
	if (radix_enabled()) {
#if defined(CONFIG_PPC_RADIX_MMU) && defined(DEBUG_VM)
		unsigned long page_size = 1 << mmu_psize_defs[mmu_io_psize].shift;
		WARN((page_size != PAGE_SIZE), "I/O page size != PAGE_SIZE");
#endif
		return radix__map_kernel_page(ea, pa, __pgprot(flags), PAGE_SIZE);
	}
	return hash__map_kernel_page(ea, pa, flags);
}

static inline int __meminit vmemmap_create_mapping(unsigned long start,
						   unsigned long page_size,
						   unsigned long phys)
{
	if (radix_enabled())
		return radix__vmemmap_create_mapping(start, page_size, phys);
	return hash__vmemmap_create_mapping(start, page_size, phys);
}

#ifdef CONFIG_MEMORY_HOTPLUG
static inline void vmemmap_remove_mapping(unsigned long start,
					  unsigned long page_size)
{
	if (radix_enabled())
		return radix__vmemmap_remove_mapping(start, page_size);
	return hash__vmemmap_remove_mapping(start, page_size);
}
#endif
struct page *realmode_pfn_to_page(unsigned long pfn);

static inline pte_t pmd_pte(pmd_t pmd)
{
	return __pte(pmd_val(pmd));
@@ -238,7 +794,6 @@ static inline pte_t *pmdp_ptep(pmd_t *pmd)
{
	return (pte_t *)pmd;
}

#define pmd_pfn(pmd)		pte_pfn(pmd_pte(pmd))
#define pmd_dirty(pmd)		pte_dirty(pmd_pte(pmd))
#define pmd_young(pmd)		pte_young(pmd_pte(pmd))
@@ -265,9 +820,87 @@ static inline int pmd_protnone(pmd_t pmd)
#define __HAVE_ARCH_PMD_WRITE
#define pmd_write(pmd)		pte_write(pmd_pte(pmd))

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
extern pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot);
extern pmd_t mk_pmd(struct page *page, pgprot_t pgprot);
extern pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot);
extern void set_pmd_at(struct mm_struct *mm, unsigned long addr,
		       pmd_t *pmdp, pmd_t pmd);
extern void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
				 pmd_t *pmd);
extern int hash__has_transparent_hugepage(void);
static inline int has_transparent_hugepage(void)
{
	if (radix_enabled())
		return radix__has_transparent_hugepage();
	return hash__has_transparent_hugepage();
}
#define has_transparent_hugepage has_transparent_hugepage

static inline unsigned long
pmd_hugepage_update(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp,
		    unsigned long clr, unsigned long set)
{
	if (radix_enabled())
		return radix__pmd_hugepage_update(mm, addr, pmdp, clr, set);
	return hash__pmd_hugepage_update(mm, addr, pmdp, clr, set);
}

static inline int pmd_large(pmd_t pmd)
{
	return !!(pmd_val(pmd) & _PAGE_PTE);
}

static inline pmd_t pmd_mknotpresent(pmd_t pmd)
{
	return __pmd(pmd_val(pmd) & ~_PAGE_PRESENT);
}
/*
 * For radix we should always find H_PAGE_HASHPTE zero. Hence
 * the below will work for radix too.
 */
static inline int __pmdp_test_and_clear_young(struct mm_struct *mm,
					      unsigned long addr, pmd_t *pmdp)
{
	unsigned long old;

	if ((pmd_val(*pmdp) & (_PAGE_ACCESSED | H_PAGE_HASHPTE)) == 0)
		return 0;
	old = pmd_hugepage_update(mm, addr, pmdp, _PAGE_ACCESSED, 0);
	return ((old & _PAGE_ACCESSED) != 0);
}

#define __HAVE_ARCH_PMDP_SET_WRPROTECT
static inline void pmdp_set_wrprotect(struct mm_struct *mm, unsigned long addr,
				      pmd_t *pmdp)
{
	if ((pmd_val(*pmdp) & _PAGE_WRITE) == 0)
		return;

	pmd_hugepage_update(mm, addr, pmdp, _PAGE_WRITE, 0);
}

static inline int pmd_trans_huge(pmd_t pmd)
{
	if (radix_enabled())
		return radix__pmd_trans_huge(pmd);
	return hash__pmd_trans_huge(pmd);
}

#define __HAVE_ARCH_PMD_SAME
static inline int pmd_same(pmd_t pmd_a, pmd_t pmd_b)
{
	if (radix_enabled())
		return radix__pmd_same(pmd_a, pmd_b);
	return hash__pmd_same(pmd_a, pmd_b);
}

static inline pmd_t pmd_mkhuge(pmd_t pmd)
{
	if (radix_enabled())
		return radix__pmd_mkhuge(pmd);
	return hash__pmd_mkhuge(pmd);
}

#define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
@@ -278,37 +911,63 @@ extern int pmdp_set_access_flags(struct vm_area_struct *vma,
#define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
extern int pmdp_test_and_clear_young(struct vm_area_struct *vma,
				     unsigned long address, pmd_t *pmdp);
#define __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH
extern int pmdp_clear_flush_young(struct vm_area_struct *vma,
				  unsigned long address, pmd_t *pmdp);

#define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR
static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
					    unsigned long addr, pmd_t *pmdp)
{
	if (radix_enabled())
		return radix__pmdp_huge_get_and_clear(mm, addr, pmdp);
	return hash__pmdp_huge_get_and_clear(mm, addr, pmdp);
}

static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
					unsigned long address, pmd_t *pmdp)
{
	if (radix_enabled())
		return radix__pmdp_collapse_flush(vma, address, pmdp);
	return hash__pmdp_collapse_flush(vma, address, pmdp);
}
#define pmdp_collapse_flush pmdp_collapse_flush

#define __HAVE_ARCH_PGTABLE_DEPOSIT
static inline void pgtable_trans_huge_deposit(struct mm_struct *mm,
					      pmd_t *pmdp, pgtable_t pgtable)
{
	if (radix_enabled())
		return radix__pgtable_trans_huge_deposit(mm, pmdp, pgtable);
	return hash__pgtable_trans_huge_deposit(mm, pmdp, pgtable);
}

#define __HAVE_ARCH_PGTABLE_WITHDRAW
static inline pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm,
						    pmd_t *pmdp)
{
	if (radix_enabled())
		return radix__pgtable_trans_huge_withdraw(mm, pmdp);
	return hash__pgtable_trans_huge_withdraw(mm, pmdp);
}

#define __HAVE_ARCH_PMDP_INVALIDATE
extern void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
			    pmd_t *pmdp);

#define __HAVE_ARCH_PMDP_HUGE_SPLIT_PREPARE
static inline void pmdp_huge_split_prepare(struct vm_area_struct *vma,
					   unsigned long address, pmd_t *pmdp)
{
	if (radix_enabled())
		return radix__pmdp_huge_split_prepare(vma, address, pmdp);
	return hash__pmdp_huge_split_prepare(vma, address, pmdp);
}

#define pmd_move_must_withdraw pmd_move_must_withdraw
struct spinlock;
static inline int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
					 struct spinlock *old_pmd_ptl)
{
	if (radix_enabled())
		return false;
	/*
	 * Archs like ppc64 use pgtable to store per pmd
	 * specific information. So when we switch the pmd,
@@ -316,5 +975,6 @@ static inline int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
	 */
	return true;
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_BOOK3S_64_PGTABLE_H_ */

arch/powerpc/include/asm/book3s/64/radix-4k.h (new file, 12 lines)
@@ -0,0 +1,12 @@
#ifndef _ASM_POWERPC_PGTABLE_RADIX_4K_H
#define _ASM_POWERPC_PGTABLE_RADIX_4K_H

/*
 * For 4K page size the supported index sizes are 13/9/9/9.
 */
#define RADIX_PTE_INDEX_SIZE  9  /* 2MB huge page */
#define RADIX_PMD_INDEX_SIZE  9  /* 1G huge page */
#define RADIX_PUD_INDEX_SIZE	 9
#define RADIX_PGD_INDEX_SIZE  13

#endif /* _ASM_POWERPC_PGTABLE_RADIX_4K_H */
arch/powerpc/include/asm/book3s/64/radix-64k.h (new file, 12 lines)
@@ -0,0 +1,12 @@
#ifndef _ASM_POWERPC_PGTABLE_RADIX_64K_H
#define _ASM_POWERPC_PGTABLE_RADIX_64K_H

/*
 * For 64K page size the supported index sizes are 13/9/9/5.
 */
#define RADIX_PTE_INDEX_SIZE  5  /* 2MB huge page */
#define RADIX_PMD_INDEX_SIZE  9  /* 1G huge page */
#define RADIX_PUD_INDEX_SIZE	 9
#define RADIX_PGD_INDEX_SIZE  13

#endif /* _ASM_POWERPC_PGTABLE_RADIX_64K_H */
arch/powerpc/include/asm/book3s/64/radix.h (new file, 232 lines)
@@ -0,0 +1,232 @@
#ifndef _ASM_POWERPC_PGTABLE_RADIX_H
#define _ASM_POWERPC_PGTABLE_RADIX_H

#ifndef __ASSEMBLY__
#include <asm/cmpxchg.h>
#endif

#ifdef CONFIG_PPC_64K_PAGES
#include <asm/book3s/64/radix-64k.h>
#else
#include <asm/book3s/64/radix-4k.h>
#endif

/* An empty PTE can still have a R or C writeback */
#define RADIX_PTE_NONE_MASK	(_PAGE_DIRTY | _PAGE_ACCESSED)

/* Bits to set in a RPMD/RPUD/RPGD */
#define RADIX_PMD_VAL_BITS	(0x8000000000000000UL | RADIX_PTE_INDEX_SIZE)
#define RADIX_PUD_VAL_BITS	(0x8000000000000000UL | RADIX_PMD_INDEX_SIZE)
#define RADIX_PGD_VAL_BITS	(0x8000000000000000UL | RADIX_PUD_INDEX_SIZE)

/* Don't have anything in the reserved bits and leaf bits */
#define RADIX_PMD_BAD_BITS	0x60000000000000e0UL
#define RADIX_PUD_BAD_BITS	0x60000000000000e0UL
#define RADIX_PGD_BAD_BITS	0x60000000000000e0UL

/*
 * Size of EA range mapped by our pagetables.
 */
#define RADIX_PGTABLE_EADDR_SIZE (RADIX_PTE_INDEX_SIZE + RADIX_PMD_INDEX_SIZE + \
				  RADIX_PUD_INDEX_SIZE + RADIX_PGD_INDEX_SIZE + PAGE_SHIFT)
#define RADIX_PGTABLE_RANGE	(ASM_CONST(1) << RADIX_PGTABLE_EADDR_SIZE)

/*
 * We support a 52-bit address space. Use the top bits for the kernel
 * virtual mapping, and make sure the kernel fits in the top quadrant.
 *
 *           +------------------+
 *           +------------------+  Kernel virtual map (0xc008000000000000)
 *           |                  |
 *           |                  |
 *           |                  |
 * 0b11......+------------------+  Kernel linear map (0xc....)
 *           |                  |
 *           |    2 quadrant    |
 *           |                  |
 * 0b10......+------------------+
 *           |                  |
 *           |    1 quadrant    |
 *           |                  |
 * 0b01......+------------------+
 *           |                  |
 *           |    0 quadrant    |
 *           |                  |
 * 0b00......+------------------+
 *
 *
 * 3rd quadrant expanded:
 * +------------------------------+
 * |                              |
 * |                              |
 * |                              |
 * +------------------------------+  Kernel IO map end (0xc010000000000000)
 * |                              |
 * |                              |
 * |      1/2 of virtual map      |
 * |                              |
 * |                              |
 * +------------------------------+  Kernel IO map start
 * |                              |
 * |      1/4 of virtual map      |
 * |                              |
 * +------------------------------+  Kernel vmemap start
 * |                              |
 * |      1/4 of virtual map      |
 * |                              |
 * +------------------------------+  Kernel virt start (0xc008000000000000)
 * |                              |
 * |                              |
 * |                              |
 * +------------------------------+  Kernel linear (0xc.....)
 */

#define RADIX_KERN_VIRT_START	ASM_CONST(0xc008000000000000)
#define RADIX_KERN_VIRT_SIZE	ASM_CONST(0x0008000000000000)

/*
 * The vmalloc space starts at the beginning of that region, and
 * occupies a quarter of it on radix config.
 * (we keep a quarter for the virtual memmap)
 */
#define RADIX_VMALLOC_START	RADIX_KERN_VIRT_START
#define RADIX_VMALLOC_SIZE	(RADIX_KERN_VIRT_SIZE >> 2)
#define RADIX_VMALLOC_END	(RADIX_VMALLOC_START + RADIX_VMALLOC_SIZE)
/*
 * Defines the address of the vmemap area, in its own region on
 * hash table CPUs.
 */
#define RADIX_VMEMMAP_BASE	(RADIX_VMALLOC_END)

#ifndef __ASSEMBLY__
#define RADIX_PTE_TABLE_SIZE	(sizeof(pte_t) << RADIX_PTE_INDEX_SIZE)
#define RADIX_PMD_TABLE_SIZE	(sizeof(pmd_t) << RADIX_PMD_INDEX_SIZE)
#define RADIX_PUD_TABLE_SIZE	(sizeof(pud_t) << RADIX_PUD_INDEX_SIZE)
#define RADIX_PGD_TABLE_SIZE	(sizeof(pgd_t) << RADIX_PGD_INDEX_SIZE)

static inline unsigned long radix__pte_update(struct mm_struct *mm,
					      unsigned long addr,
					      pte_t *ptep, unsigned long clr,
					      unsigned long set,
					      int huge)
{
	pte_t pte;
	unsigned long old_pte, new_pte;

	do {
		pte = READ_ONCE(*ptep);
		old_pte = pte_val(pte);
		new_pte = (old_pte | set) & ~clr;

	} while (!pte_xchg(ptep, __pte(old_pte), __pte(new_pte)));

	/* We already do a sync in cmpxchg, is ptesync needed ? */
	asm volatile("ptesync" : : : "memory");
	/* huge pages use the old page table lock */
	if (!huge)
		assert_pte_locked(mm, addr);

	return old_pte;
}

/*
 * Set the dirty and/or accessed bits atomically in a linux PTE; this
 * function doesn't need to invalidate the TLB.
 */
static inline void radix__ptep_set_access_flags(pte_t *ptep, pte_t entry)
{
	pte_t pte;
	unsigned long old_pte, new_pte;
	unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_ACCESSED |
					      _PAGE_RW | _PAGE_EXEC);
	do {
		pte = READ_ONCE(*ptep);
		old_pte = pte_val(pte);
		new_pte = old_pte | set;

	} while (!pte_xchg(ptep, __pte(old_pte), __pte(new_pte)));

	/* We already do a sync in cmpxchg, is ptesync needed ? */
	asm volatile("ptesync" : : : "memory");
}

static inline int radix__pte_same(pte_t pte_a, pte_t pte_b)
{
	return ((pte_raw(pte_a) ^ pte_raw(pte_b)) == 0);
}

static inline int radix__pte_none(pte_t pte)
{
	return (pte_val(pte) & ~RADIX_PTE_NONE_MASK) == 0;
}

static inline void radix__set_pte_at(struct mm_struct *mm, unsigned long addr,
				     pte_t *ptep, pte_t pte, int percpu)
{
	*ptep = pte;
	asm volatile("ptesync" : : : "memory");
}

static inline int radix__pmd_bad(pmd_t pmd)
{
	return !!(pmd_val(pmd) & RADIX_PMD_BAD_BITS);
}

static inline int radix__pmd_same(pmd_t pmd_a, pmd_t pmd_b)
{
	return ((pmd_raw(pmd_a) ^ pmd_raw(pmd_b)) == 0);
}
||||
|
||||
static inline int radix__pud_bad(pud_t pud)
|
||||
{
|
||||
return !!(pud_val(pud) & RADIX_PUD_BAD_BITS);
|
||||
}
|
||||
|
||||
|
||||
static inline int radix__pgd_bad(pgd_t pgd)
|
||||
{
|
||||
return !!(pgd_val(pgd) & RADIX_PGD_BAD_BITS);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
|
||||
|
||||
static inline int radix__pmd_trans_huge(pmd_t pmd)
|
||||
{
|
||||
return !!(pmd_val(pmd) & _PAGE_PTE);
|
||||
}
|
||||
|
||||
static inline pmd_t radix__pmd_mkhuge(pmd_t pmd)
|
||||
{
|
||||
return __pmd(pmd_val(pmd) | _PAGE_PTE);
|
||||
}
|
||||
static inline void radix__pmdp_huge_split_prepare(struct vm_area_struct *vma,
|
||||
unsigned long address, pmd_t *pmdp)
|
||||
{
|
||||
/* Nothing to do for radix. */
|
||||
return;
|
||||
}
|
||||
|
||||
extern unsigned long radix__pmd_hugepage_update(struct mm_struct *mm, unsigned long addr,
|
||||
pmd_t *pmdp, unsigned long clr,
|
||||
unsigned long set);
|
||||
extern pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma,
|
||||
unsigned long address, pmd_t *pmdp);
|
||||
extern void radix__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
|
||||
pgtable_t pgtable);
|
||||
extern pgtable_t radix__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
|
||||
extern pmd_t radix__pmdp_huge_get_and_clear(struct mm_struct *mm,
|
||||
unsigned long addr, pmd_t *pmdp);
|
||||
extern int radix__has_transparent_hugepage(void);
|
||||
#endif
|
||||
|
||||
extern int __meminit radix__vmemmap_create_mapping(unsigned long start,
|
||||
unsigned long page_size,
|
||||
unsigned long phys);
|
||||
extern void radix__vmemmap_remove_mapping(unsigned long start,
|
||||
unsigned long page_size);
|
||||
|
||||
extern int radix__map_kernel_page(unsigned long ea, unsigned long pa,
|
||||
pgprot_t flags, unsigned int psz);
|
||||
#endif /* __ASSEMBLY__ */
|
||||
#endif
|
@@ -1,8 +1,6 @@
 #ifndef _ASM_POWERPC_BOOK3S_64_TLBFLUSH_HASH_H
 #define _ASM_POWERPC_BOOK3S_64_TLBFLUSH_HASH_H
 
-#define MMU_NO_CONTEXT	0
-
 /*
  * TLB flushing for 64-bit hash-MMU CPUs
  */
@@ -29,14 +27,21 @@ extern void __flush_tlb_pending(struct ppc64_tlb_batch *batch);
 
 static inline void arch_enter_lazy_mmu_mode(void)
 {
-	struct ppc64_tlb_batch *batch = this_cpu_ptr(&ppc64_tlb_batch);
+	struct ppc64_tlb_batch *batch;
 
+	if (radix_enabled())
+		return;
+	batch = this_cpu_ptr(&ppc64_tlb_batch);
 	batch->active = 1;
 }
 
 static inline void arch_leave_lazy_mmu_mode(void)
 {
-	struct ppc64_tlb_batch *batch = this_cpu_ptr(&ppc64_tlb_batch);
+	struct ppc64_tlb_batch *batch;
 
+	if (radix_enabled())
+		return;
+	batch = this_cpu_ptr(&ppc64_tlb_batch);
 
 	if (batch->index)
 		__flush_tlb_pending(batch);
@@ -52,40 +57,42 @@ extern void flush_hash_range(unsigned long number, int local);
 extern void flush_hash_hugepage(unsigned long vsid, unsigned long addr,
 				pmd_t *pmdp, unsigned int psize, int ssize,
 				unsigned long flags);
 
-static inline void local_flush_tlb_mm(struct mm_struct *mm)
+static inline void hash__local_flush_tlb_mm(struct mm_struct *mm)
 {
 }
 
-static inline void flush_tlb_mm(struct mm_struct *mm)
+static inline void hash__flush_tlb_mm(struct mm_struct *mm)
 {
 }
 
-static inline void local_flush_tlb_page(struct vm_area_struct *vma,
-					unsigned long vmaddr)
+static inline void hash__local_flush_tlb_page(struct vm_area_struct *vma,
+					      unsigned long vmaddr)
 {
 }
 
-static inline void flush_tlb_page(struct vm_area_struct *vma,
-				  unsigned long vmaddr)
+static inline void hash__flush_tlb_page(struct vm_area_struct *vma,
+					unsigned long vmaddr)
 {
 }
 
-static inline void flush_tlb_page_nohash(struct vm_area_struct *vma,
-					 unsigned long vmaddr)
+static inline void hash__flush_tlb_page_nohash(struct vm_area_struct *vma,
+					       unsigned long vmaddr)
 {
 }
 
-static inline void flush_tlb_range(struct vm_area_struct *vma,
-				   unsigned long start, unsigned long end)
+static inline void hash__flush_tlb_range(struct vm_area_struct *vma,
+					 unsigned long start, unsigned long end)
 {
 }
 
-static inline void flush_tlb_kernel_range(unsigned long start,
-					  unsigned long end)
+static inline void hash__flush_tlb_kernel_range(unsigned long start,
						unsigned long end)
 {
 }
 
 
 struct mmu_gather;
 extern void hash__tlb_flush(struct mmu_gather *tlb);
 /* Private function for use by PCI IO mapping code */
 extern void __flush_hash_table_range(struct mm_struct *mm, unsigned long start,
				      unsigned long end);
 
arch/powerpc/include/asm/book3s/64/tlbflush-radix.h (new file, 33 lines)
@@ -0,0 +1,33 @@
+#ifndef _ASM_POWERPC_TLBFLUSH_RADIX_H
+#define _ASM_POWERPC_TLBFLUSH_RADIX_H
+
+struct vm_area_struct;
+struct mm_struct;
+struct mmu_gather;
+
+static inline int mmu_get_ap(int psize)
+{
+	return mmu_psize_defs[psize].ap;
+}
+
+extern void radix__flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+				   unsigned long end);
+extern void radix__flush_tlb_kernel_range(unsigned long start, unsigned long end);
+
+extern void radix__local_flush_tlb_mm(struct mm_struct *mm);
+extern void radix__local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
+extern void radix___local_flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
+					 unsigned long ap, int nid);
+extern void radix__tlb_flush(struct mmu_gather *tlb);
+#ifdef CONFIG_SMP
+extern void radix__flush_tlb_mm(struct mm_struct *mm);
+extern void radix__flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
+extern void radix___flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
+				   unsigned long ap, int nid);
+#else
+#define radix__flush_tlb_mm(mm)		radix__local_flush_tlb_mm(mm)
+#define radix__flush_tlb_page(vma,addr)	radix__local_flush_tlb_page(vma,addr)
+#define radix___flush_tlb_page(mm,addr,p,i)	radix___local_flush_tlb_page(mm,addr,p,i)
+#endif
+
+#endif
arch/powerpc/include/asm/book3s/64/tlbflush.h (new file, 76 lines)
@@ -0,0 +1,76 @@
+#ifndef _ASM_POWERPC_BOOK3S_64_TLBFLUSH_H
+#define _ASM_POWERPC_BOOK3S_64_TLBFLUSH_H
+
+#define MMU_NO_CONTEXT	~0UL
+
+
+#include <asm/book3s/64/tlbflush-hash.h>
+#include <asm/book3s/64/tlbflush-radix.h>
+
+static inline void flush_tlb_range(struct vm_area_struct *vma,
+				   unsigned long start, unsigned long end)
+{
+	if (radix_enabled())
+		return radix__flush_tlb_range(vma, start, end);
+	return hash__flush_tlb_range(vma, start, end);
+}
+
+static inline void flush_tlb_kernel_range(unsigned long start,
+					  unsigned long end)
+{
+	if (radix_enabled())
+		return radix__flush_tlb_kernel_range(start, end);
+	return hash__flush_tlb_kernel_range(start, end);
+}
+
+static inline void local_flush_tlb_mm(struct mm_struct *mm)
+{
+	if (radix_enabled())
+		return radix__local_flush_tlb_mm(mm);
+	return hash__local_flush_tlb_mm(mm);
+}
+
+static inline void local_flush_tlb_page(struct vm_area_struct *vma,
+					unsigned long vmaddr)
+{
+	if (radix_enabled())
+		return radix__local_flush_tlb_page(vma, vmaddr);
+	return hash__local_flush_tlb_page(vma, vmaddr);
+}
+
+static inline void flush_tlb_page_nohash(struct vm_area_struct *vma,
+					 unsigned long vmaddr)
+{
+	if (radix_enabled())
+		return radix__flush_tlb_page(vma, vmaddr);
+	return hash__flush_tlb_page_nohash(vma, vmaddr);
+}
+
+static inline void tlb_flush(struct mmu_gather *tlb)
+{
+	if (radix_enabled())
+		return radix__tlb_flush(tlb);
+	return hash__tlb_flush(tlb);
+}
+
+#ifdef CONFIG_SMP
+static inline void flush_tlb_mm(struct mm_struct *mm)
+{
+	if (radix_enabled())
+		return radix__flush_tlb_mm(mm);
+	return hash__flush_tlb_mm(mm);
+}
+
+static inline void flush_tlb_page(struct vm_area_struct *vma,
+				  unsigned long vmaddr)
+{
+	if (radix_enabled())
+		return radix__flush_tlb_page(vma, vmaddr);
+	return hash__flush_tlb_page(vma, vmaddr);
+}
+#else
+#define flush_tlb_mm(mm)		local_flush_tlb_mm(mm)
+#define flush_tlb_page(vma, addr)	local_flush_tlb_page(vma, addr)
+#endif /* CONFIG_SMP */
+
+#endif /* _ASM_POWERPC_BOOK3S_64_TLBFLUSH_H */
arch/powerpc/include/asm/book3s/pgalloc.h (new file, 19 lines)
@@ -0,0 +1,19 @@
+#ifndef _ASM_POWERPC_BOOK3S_PGALLOC_H
+#define _ASM_POWERPC_BOOK3S_PGALLOC_H
+
+#include <linux/mm.h>
+
+extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
+static inline void tlb_flush_pgtable(struct mmu_gather *tlb,
+				     unsigned long address)
+{
+
+}
+
+#ifdef CONFIG_PPC64
+#include <asm/book3s/64/pgalloc.h>
+#else
+#include <asm/book3s/32/pgalloc.h>
+#endif
+
+#endif /* _ASM_POWERPC_BOOK3S_PGALLOC_H */
@@ -8,6 +8,8 @@
 extern struct kmem_cache *hugepte_cache;
 
 #ifdef CONFIG_PPC_BOOK3S_64
+
+#include <asm/book3s/64/hugetlb-radix.h>
 /*
  * This should work for other subarchs too. But right now we use the
  * new format only for 64bit book3s
@@ -31,7 +33,19 @@ static inline unsigned int hugepd_shift(hugepd_t hpd)
 {
	return mmu_psize_to_shift(hugepd_mmu_psize(hpd));
 }
+static inline void flush_hugetlb_page(struct vm_area_struct *vma,
+				      unsigned long vmaddr)
+{
+	if (radix_enabled())
+		return radix__flush_hugetlb_page(vma, vmaddr);
+}
+
+static inline void __local_flush_hugetlb_page(struct vm_area_struct *vma,
+					      unsigned long vmaddr)
+{
+	if (radix_enabled())
+		return radix__local_flush_hugetlb_page(vma, vmaddr);
+}
 #else
 
 static inline pte_t *hugepd_page(hugepd_t hpd)
@@ -276,19 +276,24 @@ static inline unsigned long hpte_make_readonly(unsigned long ptel)
	return ptel;
 }
 
-static inline int hpte_cache_flags_ok(unsigned long ptel, unsigned long io_type)
+static inline bool hpte_cache_flags_ok(unsigned long hptel, bool is_ci)
 {
-	unsigned int wimg = ptel & HPTE_R_WIMG;
+	unsigned int wimg = hptel & HPTE_R_WIMG;
 
	/* Handle SAO */
	if (wimg == (HPTE_R_W | HPTE_R_I | HPTE_R_M) &&
	    cpu_has_feature(CPU_FTR_ARCH_206))
		wimg = HPTE_R_M;
 
-	if (!io_type)
+	if (!is_ci)
		return wimg == HPTE_R_M;
-
-	return (wimg & (HPTE_R_W | HPTE_R_I)) == io_type;
+	/*
+	 * if host is mapped cache inhibited, make sure hptel also have
+	 * cache inhibited.
+	 */
+	if (wimg & HPTE_R_W) /* FIXME!! is this ok for all guest. ? */
+		return false;
+	return !!(wimg & HPTE_R_I);
 }
 
 /*
@@ -305,9 +310,9 @@ static inline pte_t kvmppc_read_update_linux_pte(pte_t *ptep, int writing)
	 */
	old_pte = READ_ONCE(*ptep);
	/*
-	 * wait until _PAGE_BUSY is clear then set it atomically
+	 * wait until H_PAGE_BUSY is clear then set it atomically
	 */
-	if (unlikely(pte_val(old_pte) & _PAGE_BUSY)) {
+	if (unlikely(pte_val(old_pte) & H_PAGE_BUSY)) {
		cpu_relax();
		continue;
	}
@@ -319,27 +324,12 @@ static inline pte_t kvmppc_read_update_linux_pte(pte_t *ptep, int writing)
		if (writing && pte_write(old_pte))
			new_pte = pte_mkdirty(new_pte);
 
-		if (pte_val(old_pte) == __cmpxchg_u64((unsigned long *)ptep,
-						      pte_val(old_pte),
-						      pte_val(new_pte))) {
+		if (pte_xchg(ptep, old_pte, new_pte))
			break;
-		}
	}
	return new_pte;
 }
 
-
-/* Return HPTE cache control bits corresponding to Linux pte bits */
-static inline unsigned long hpte_cache_bits(unsigned long pte_val)
-{
-#if _PAGE_NO_CACHE == HPTE_R_I && _PAGE_WRITETHRU == HPTE_R_W
-	return pte_val & (HPTE_R_W | HPTE_R_I);
-#else
-	return ((pte_val & _PAGE_NO_CACHE) ? HPTE_R_I : 0) +
-		((pte_val & _PAGE_WRITETHRU) ? HPTE_R_W : 0);
-#endif
-}
-
 static inline bool hpte_read_permission(unsigned long pp, unsigned long key)
 {
	if (key)
@@ -256,6 +256,7 @@ struct machdep_calls {
 #ifdef CONFIG_ARCH_RANDOM
	int (*get_random_seed)(unsigned long *v);
 #endif
+	int (*update_partition_table)(u64);
 };
 
 extern void e500_idle(void);
@@ -88,6 +88,11 @@
  */
 #define MMU_FTR_1T_SEGMENT		ASM_CONST(0x40000000)
 
+/*
+ * Radix page table available
+ */
+#define MMU_FTR_RADIX			ASM_CONST(0x80000000)
+
 /* MMU feature bit sets for various CPUs */
 #define MMU_FTRS_DEFAULT_HPTE_ARCH_V2	\
	MMU_FTR_HPTE_TABLE | MMU_FTR_PPCAS_ARCH_V2
@@ -110,9 +115,25 @@
 DECLARE_PER_CPU(int, next_tlbcam_idx);
 #endif
 
+enum {
+	MMU_FTRS_POSSIBLE = MMU_FTR_HPTE_TABLE | MMU_FTR_TYPE_8xx |
+		MMU_FTR_TYPE_40x | MMU_FTR_TYPE_44x | MMU_FTR_TYPE_FSL_E |
+		MMU_FTR_TYPE_47x | MMU_FTR_USE_HIGH_BATS | MMU_FTR_BIG_PHYS |
+		MMU_FTR_USE_TLBIVAX_BCAST | MMU_FTR_USE_TLBILX |
+		MMU_FTR_LOCK_BCAST_INVAL | MMU_FTR_NEED_DTLB_SW_LRU |
+		MMU_FTR_USE_TLBRSRV | MMU_FTR_USE_PAIRED_MAS |
+		MMU_FTR_NO_SLBIE_B | MMU_FTR_16M_PAGE | MMU_FTR_TLBIEL |
+		MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_CI_LARGE_PAGE |
+		MMU_FTR_1T_SEGMENT |
+#ifdef CONFIG_PPC_RADIX_MMU
+		MMU_FTR_RADIX |
+#endif
+		0,
+};
+
 static inline int mmu_has_feature(unsigned long feature)
 {
-	return (cur_cpu_spec->mmu_features & feature);
+	return (MMU_FTRS_POSSIBLE & cur_cpu_spec->mmu_features & feature);
 }
 
 static inline void mmu_clear_feature(unsigned long feature)
@@ -122,13 +143,6 @@ static inline void mmu_clear_feature(unsigned long feature)
 
 extern unsigned int __start___mmu_ftr_fixup, __stop___mmu_ftr_fixup;
 
-/* MMU initialization */
-extern void early_init_mmu(void);
-extern void early_init_mmu_secondary(void);
-
-extern void setup_initial_memory_limit(phys_addr_t first_memblock_base,
-				       phys_addr_t first_memblock_size);
-
 #ifdef CONFIG_PPC64
 /* This is our real memory area size on ppc64 server, on embedded, we
  * make it match the size our of bolted TLB area
@@ -181,10 +195,20 @@ static inline void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
 
 #define MMU_PAGE_COUNT	15
 
-#if defined(CONFIG_PPC_STD_MMU_64)
-/* 64-bit classic hash table MMU */
-#include <asm/book3s/64/mmu-hash.h>
-#elif defined(CONFIG_PPC_STD_MMU_32)
+#ifdef CONFIG_PPC_BOOK3S_64
+#include <asm/book3s/64/mmu.h>
+#else /* CONFIG_PPC_BOOK3S_64 */
+
+#ifndef __ASSEMBLY__
+/* MMU initialization */
+extern void early_init_mmu(void);
+extern void early_init_mmu_secondary(void);
+extern void setup_initial_memory_limit(phys_addr_t first_memblock_base,
+				       phys_addr_t first_memblock_size);
+#endif /* __ASSEMBLY__ */
+#endif
+
+#if defined(CONFIG_PPC_STD_MMU_32)
 /* 32-bit classic hash table MMU */
 #include <asm/book3s/32/mmu-hash.h>
 #elif defined(CONFIG_40x)
@@ -201,6 +225,9 @@ static inline void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
 # include <asm/mmu-8xx.h>
 #endif
 
+#ifndef radix_enabled
+#define radix_enabled() (0)
+#endif
 
 #endif /* __KERNEL__ */
 #endif /* _ASM_POWERPC_MMU_H_ */
@@ -33,16 +33,27 @@ extern long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
 extern long mm_iommu_mapped_inc(struct mm_iommu_table_group_mem_t *mem);
 extern void mm_iommu_mapped_dec(struct mm_iommu_table_group_mem_t *mem);
 #endif
 
-extern void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next);
 extern void switch_slb(struct task_struct *tsk, struct mm_struct *mm);
 extern void set_context(unsigned long id, pgd_t *pgd);
 
 #ifdef CONFIG_PPC_BOOK3S_64
+extern void radix__switch_mmu_context(struct mm_struct *prev,
+				      struct mm_struct *next);
+static inline void switch_mmu_context(struct mm_struct *prev,
+				      struct mm_struct *next,
+				      struct task_struct *tsk)
+{
+	if (radix_enabled())
+		return radix__switch_mmu_context(prev, next);
+	return switch_slb(tsk, next);
+}
+
 extern int __init_new_context(void);
 extern void __destroy_context(int context_id);
 static inline void mmu_context_init(void) { }
 #else
+extern void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next,
+			       struct task_struct *tsk);
 extern unsigned long __init_new_context(void);
 extern void __destroy_context(unsigned long context_id);
 extern void mmu_context_init(void);
@@ -88,17 +99,11 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
	if (cpu_has_feature(CPU_FTR_ALTIVEC))
		asm volatile ("dssall");
 #endif /* CONFIG_ALTIVEC */
-
-	/* The actual HW switching method differs between the various
-	 * sub architectures.
+	/*
+	 * The actual HW switching method differs between the various
+	 * sub architectures. Out of line for now
	 */
-#ifdef CONFIG_PPC_STD_MMU_64
-	switch_slb(tsk, next);
-#else
-	/* Out of line for now */
-	switch_mmu_context(prev, next);
-#endif
-
+	switch_mmu_context(prev, next, tsk);
 }
 
 #define deactivate_mm(tsk,mm)	do { } while (0)
@@ -53,7 +53,7 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 
 #ifndef CONFIG_PPC_64K_PAGES
 
-#define pgd_populate(MM, PGD, PUD)	pgd_set(PGD, __pgtable_ptr_val(PUD))
+#define pgd_populate(MM, PGD, PUD)	pgd_set(PGD, (unsigned long)PUD)
 
 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
 {
@@ -68,19 +68,19 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud)
 
 static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
 {
-	pud_set(pud, __pgtable_ptr_val(pmd));
+	pud_set(pud, (unsigned long)pmd);
 }
 
 static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
				        pte_t *pte)
 {
-	pmd_set(pmd, __pgtable_ptr_val(pte));
+	pmd_set(pmd, (unsigned long)pte);
 }
 
 static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
				 pgtable_t pte_page)
 {
-	pmd_set(pmd, __pgtable_ptr_val(page_address(pte_page)));
+	pmd_set(pmd, (unsigned long)page_address(pte_page));
 }
 
 #define pmd_pgtable(pmd) pmd_page(pmd)
@@ -119,119 +119,65 @@ static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
	__free_page(ptepage);
 }
 
-static inline void pgtable_free(void *table, unsigned index_size)
-{
-	if (!index_size)
-		free_page((unsigned long)table);
-	else {
-		BUG_ON(index_size > MAX_PGTABLE_INDEX_SIZE);
-		kmem_cache_free(PGT_CACHE(index_size), table);
-	}
-}
-
+extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift);
 #ifdef CONFIG_SMP
-static inline void pgtable_free_tlb(struct mmu_gather *tlb,
-				    void *table, int shift)
-{
-	unsigned long pgf = (unsigned long)table;
-	BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
-	pgf |= shift;
-	tlb_remove_table(tlb, (void *)pgf);
-}
-
-static inline void __tlb_remove_table(void *_table)
-{
-	void *table = (void *)((unsigned long)_table & ~MAX_PGTABLE_INDEX_SIZE);
-	unsigned shift = (unsigned long)_table & MAX_PGTABLE_INDEX_SIZE;
-
-	pgtable_free(table, shift);
-}
-#else /* !CONFIG_SMP */
-static inline void pgtable_free_tlb(struct mmu_gather *tlb,
-				    void *table, int shift)
-{
-	pgtable_free(table, shift);
-}
-#endif /* CONFIG_SMP */
-
+extern void __tlb_remove_table(void *_table);
+#endif
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
				   unsigned long address)
 {
	tlb_flush_pgtable(tlb, address);
	pgtable_page_dtor(table);
	pgtable_free_tlb(tlb, page_address(table), 0);
 }
 
 #else /* if CONFIG_PPC_64K_PAGES */
 
-extern pte_t *page_table_alloc(struct mm_struct *, unsigned long, int);
-extern void page_table_free(struct mm_struct *, unsigned long *, int);
+extern pte_t *pte_fragment_alloc(struct mm_struct *, unsigned long, int);
+extern void pte_fragment_free(unsigned long *, int);
 extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift);
 #ifdef CONFIG_SMP
 extern void __tlb_remove_table(void *_table);
 #endif
 
-#ifndef __PAGETABLE_PUD_FOLDED
-/* book3s 64 is 4 level page table */
-static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgd, pud_t *pud)
-{
-	pgd_set(pgd, __pgtable_ptr_val(pud));
-}
-
-static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
-{
-	return kmem_cache_alloc(PGT_CACHE(PUD_INDEX_SIZE),
-				GFP_KERNEL|__GFP_REPEAT);
-}
-
-static inline void pud_free(struct mm_struct *mm, pud_t *pud)
-{
-	kmem_cache_free(PGT_CACHE(PUD_INDEX_SIZE), pud);
-}
-#endif
-
-static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
-{
-	pud_set(pud, __pgtable_ptr_val(pmd));
-}
+#define pud_populate(mm, pud, pmd)	pud_set(pud, (unsigned long)pmd)
 
 static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
				        pte_t *pte)
 {
-	pmd_set(pmd, __pgtable_ptr_val(pte));
+	pmd_set(pmd, (unsigned long)pte);
 }
 
 static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
				 pgtable_t pte_page)
 {
-	pmd_set(pmd, __pgtable_ptr_val(pte_page));
+	pmd_set(pmd, (unsigned long)pte_page);
 }
 
 static inline pgtable_t pmd_pgtable(pmd_t pmd)
 {
-	return (pgtable_t)pmd_page_vaddr(pmd);
+	return (pgtable_t)(pmd_val(pmd) & ~PMD_MASKED_BITS);
 }
 
 static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
					   unsigned long address)
 {
-	return (pte_t *)page_table_alloc(mm, address, 1);
+	return (pte_t *)pte_fragment_alloc(mm, address, 1);
 }
 
 static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
				       unsigned long address)
 {
-	return (pgtable_t)page_table_alloc(mm, address, 0);
+	return (pgtable_t)pte_fragment_alloc(mm, address, 0);
 }
 
 static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
 {
-	page_table_free(mm, (unsigned long *)pte, 1);
+	pte_fragment_free((unsigned long *)pte, 1);
 }
 
 static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
 {
-	page_table_free(mm, (unsigned long *)ptepage, 0);
+	pte_fragment_free((unsigned long *)ptepage, 0);
 }
 
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
@@ -255,11 +201,11 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 
 #define __pmd_free_tlb(tlb, pmd, addr)		      \
	pgtable_free_tlb(tlb, pmd, PMD_CACHE_INDEX)
-#ifndef __PAGETABLE_PUD_FOLDED
+#ifndef CONFIG_PPC_64K_PAGES
 #define __pud_free_tlb(tlb, pud, addr)		      \
	pgtable_free_tlb(tlb, pud, PUD_INDEX_SIZE)
 
-#endif /* __PAGETABLE_PUD_FOLDED */
+#endif /* CONFIG_PPC_64K_PAGES */
 
 #define check_pgt_cache()	do { } while (0)
@@ -108,9 +108,6 @@
 #ifndef __ASSEMBLY__
 /* pte_clear moved to later in this file */
 
-/* Pointers in the page table tree are virtual addresses */
-#define __pgtable_ptr_val(ptr)	((unsigned long)(ptr))
-
 #define PMD_BAD_BITS		(PTE_TABLE_SIZE-1)
 #define PUD_BAD_BITS		(PMD_TABLE_SIZE-1)
 
@@ -362,6 +359,13 @@ static inline void __ptep_set_access_flags(pte_t *ptep, pte_t entry)
 
 void pgtable_cache_add(unsigned shift, void (*ctor)(void *));
 void pgtable_cache_init(void);
+extern int map_kernel_page(unsigned long ea, unsigned long pa,
+			   unsigned long flags);
+extern int __meminit vmemmap_create_mapping(unsigned long start,
+					    unsigned long page_size,
+					    unsigned long phys);
+extern void vmemmap_remove_mapping(unsigned long start,
+				   unsigned long page_size);
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_NOHASH_64_PGTABLE_H */
arch/powerpc/include/asm/nohash/pgalloc.h (new file, 23 lines)
@@ -0,0 +1,23 @@
+#ifndef _ASM_POWERPC_NOHASH_PGALLOC_H
+#define _ASM_POWERPC_NOHASH_PGALLOC_H
+
+#include <linux/mm.h>
+
+extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
+#ifdef CONFIG_PPC64
+extern void tlb_flush_pgtable(struct mmu_gather *tlb, unsigned long address);
+#else
+/* 44x etc which is BOOKE not BOOK3E */
+static inline void tlb_flush_pgtable(struct mmu_gather *tlb,
+				     unsigned long address)
+{
+
+}
+#endif /* !CONFIG_PPC_BOOK3E */
+
+#ifdef CONFIG_PPC64
+#include <asm/nohash/64/pgalloc.h>
+#else
+#include <asm/nohash/32/pgalloc.h>
+#endif
+#endif /* _ASM_POWERPC_NOHASH_PGALLOC_H */
@@ -368,16 +368,16 @@ enum OpalLPCAddressType {
 };
 
 enum opal_msg_type {
-	OPAL_MSG_ASYNC_COMP = 0,	/* params[0] = token, params[1] = rc,
+	OPAL_MSG_ASYNC_COMP	= 0,	/* params[0] = token, params[1] = rc,
					 * additional params function-specific
					 */
-	OPAL_MSG_MEM_ERR,
-	OPAL_MSG_EPOW,
-	OPAL_MSG_SHUTDOWN,		/* params[0] = 1 reboot, 0 shutdown */
-	OPAL_MSG_HMI_EVT,
-	OPAL_MSG_DPO,
-	OPAL_MSG_PRD,
-	OPAL_MSG_OCC,
+	OPAL_MSG_MEM_ERR	= 1,
+	OPAL_MSG_EPOW		= 2,
+	OPAL_MSG_SHUTDOWN	= 3,	/* params[0] = 1 reboot, 0 shutdown */
+	OPAL_MSG_HMI_EVT	= 4,
+	OPAL_MSG_DPO		= 5,
+	OPAL_MSG_PRD		= 6,
+	OPAL_MSG_OCC		= 7,
	OPAL_MSG_TYPE_MAX,
 };
@@ -288,7 +288,11 @@ extern long long virt_phys_offset;
 
 #ifndef __ASSEMBLY__
 
+#ifdef CONFIG_PPC_BOOK3S_64
+#include <asm/pgtable-be-types.h>
+#else
 #include <asm/pgtable-types.h>
+#endif
 
 typedef struct { signed long pd; } hugepd_t;
 
@@ -312,12 +316,20 @@ void arch_free_page(struct page *page, int order);
 #endif
 
 struct vm_area_struct;
 
+#ifdef CONFIG_PPC_BOOK3S_64
+/*
+ * For BOOK3s 64 with 4k and 64K linux page size
+ * we want to use pointers, because the page table
+ * actually store pfn
+ */
+typedef pte_t *pgtable_t;
+#else
 #if defined(CONFIG_PPC_64K_PAGES) && defined(CONFIG_PPC64)
 typedef pte_t *pgtable_t;
 #else
 typedef struct page *pgtable_t;
 #endif
+#endif
 
 #include <asm-generic/memory_model.h>
 #endif /* __ASSEMBLY__ */
@@ -93,7 +93,7 @@ extern u64 ppc64_pft_size;
 
 #define SLICE_LOW_TOP		(0x100000000ul)
 #define SLICE_NUM_LOW		(SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
-#define SLICE_NUM_HIGH		(PGTABLE_RANGE >> SLICE_HIGH_SHIFT)
+#define SLICE_NUM_HIGH		(H_PGTABLE_RANGE >> SLICE_HIGH_SHIFT)
 
 #define GET_LOW_SLICE_INDEX(addr)	((addr) >> SLICE_LOW_SHIFT)
 #define GET_HIGH_SLICE_INDEX(addr)	((addr) >> SLICE_HIGH_SHIFT)
@@ -128,8 +128,6 @@ extern void slice_set_user_psize(struct mm_struct *mm, unsigned int psize);
 extern void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
				   unsigned long len, unsigned int psize);
 
-#define slice_mm_new_context(mm)	((mm)->context.id == MMU_NO_CONTEXT)
-
 #endif /* __ASSEMBLY__ */
 #else
 #define slice_init()
@@ -151,7 +149,6 @@ do { \
 
 #define slice_set_range_psize(mm, start, len, psize)	\
	slice_set_user_psize((mm), (psize))
-#define slice_mm_new_context(mm)	1
 #endif /* CONFIG_PPC_MM_SLICES */
 
 #ifdef CONFIG_HUGETLB_PAGE
@@ -17,33 +17,34 @@ struct device_node;
 * PCI controller operations
 */
struct pci_controller_ops {
-	void		(*dma_dev_setup)(struct pci_dev *dev);
+	void		(*dma_dev_setup)(struct pci_dev *pdev);
	void		(*dma_bus_setup)(struct pci_bus *bus);

-	int		(*probe_mode)(struct pci_bus *);
+	int		(*probe_mode)(struct pci_bus *bus);

	/* Called when pci_enable_device() is called. Returns true to
	 * allow assignment/enabling of the device. */
-	bool		(*enable_device_hook)(struct pci_dev *);
+	bool		(*enable_device_hook)(struct pci_dev *pdev);

-	void		(*disable_device)(struct pci_dev *);
+	void		(*disable_device)(struct pci_dev *pdev);

-	void		(*release_device)(struct pci_dev *);
+	void		(*release_device)(struct pci_dev *pdev);

	/* Called during PCI resource reassignment */
-	resource_size_t (*window_alignment)(struct pci_bus *, unsigned long type);
-	void		(*reset_secondary_bus)(struct pci_dev *dev);
+	resource_size_t (*window_alignment)(struct pci_bus *bus,
+					    unsigned long type);
+	void		(*reset_secondary_bus)(struct pci_dev *pdev);

#ifdef CONFIG_PCI_MSI
-	int		(*setup_msi_irqs)(struct pci_dev *dev,
+	int		(*setup_msi_irqs)(struct pci_dev *pdev,
					  int nvec, int type);
-	void		(*teardown_msi_irqs)(struct pci_dev *dev);
+	void		(*teardown_msi_irqs)(struct pci_dev *pdev);
#endif

-	int		(*dma_set_mask)(struct pci_dev *dev, u64 dma_mask);
-	u64		(*dma_get_required_mask)(struct pci_dev *dev);
+	int		(*dma_set_mask)(struct pci_dev *pdev, u64 dma_mask);
+	u64		(*dma_get_required_mask)(struct pci_dev *pdev);

-	void		(*shutdown)(struct pci_controller *);
+	void		(*shutdown)(struct pci_controller *hose);
};

/*
@@ -208,14 +209,14 @@ struct pci_dn {
#ifdef CONFIG_EEH
	struct eeh_dev *edev;		/* eeh device */
#endif
-#define IODA_INVALID_PE		(-1)
+#define IODA_INVALID_PE		0xFFFFFFFF
#ifdef CONFIG_PPC_POWERNV
-	int	pe_number;
+	unsigned int pe_number;
	int	vf_index;		/* VF index in the PF */
#ifdef CONFIG_PCI_IOV
	u16	vfs_expanded;		/* number of VFs IOV BAR expanded */
	u16	num_vfs;		/* number of VFs enabled*/
-	int	*pe_num_map;		/* PE# for the first VF PE or array */
+	unsigned int *pe_num_map;	/* PE# for the first VF PE or array */
	bool	m64_single_mode;	/* Use M64 BAR in Single Mode */
#define IODA_INVALID_M64	(-1)
	int	(*m64_map)[PCI_SRIOV_NUM_BARS];
@@ -234,7 +235,9 @@ extern struct pci_dn *pci_get_pdn_by_devfn(struct pci_bus *bus,
extern struct pci_dn *pci_get_pdn(struct pci_dev *pdev);
extern struct pci_dn *add_dev_pci_data(struct pci_dev *pdev);
extern void remove_dev_pci_data(struct pci_dev *pdev);
-extern void *update_dn_pci_info(struct device_node *dn, void *data);
+extern struct pci_dn *pci_add_device_node_info(struct pci_controller *hose,
+					       struct device_node *dn);
+extern void pci_remove_device_node_info(struct device_node *dn);

static inline int pci_device_from_OF_node(struct device_node *np,
					  u8 *bus, u8 *devfn)
@@ -256,13 +259,13 @@ static inline struct eeh_dev *pdn_to_eeh_dev(struct pci_dn *pdn)
#endif

/** Find the bus corresponding to the indicated device node */
-extern struct pci_bus *pcibios_find_pci_bus(struct device_node *dn);
+extern struct pci_bus *pci_find_bus_by_node(struct device_node *dn);

/** Remove all of the PCI devices under this bus */
-extern void pcibios_remove_pci_devices(struct pci_bus *bus);
+extern void pci_hp_remove_devices(struct pci_bus *bus);

/** Discover new pci devices under this bus, and add them */
-extern void pcibios_add_pci_devices(struct pci_bus *bus);
+extern void pci_hp_add_devices(struct pci_bus *bus);


extern void isa_bridge_find_early(struct pci_controller *hose);
@@ -1,25 +1,12 @@
#ifndef _ASM_POWERPC_PGALLOC_H
#define _ASM_POWERPC_PGALLOC_H
#ifdef __KERNEL__

#include <linux/mm.h>

-#ifdef CONFIG_PPC_BOOK3E
-extern void tlb_flush_pgtable(struct mmu_gather *tlb, unsigned long address);
-#else /* CONFIG_PPC_BOOK3E */
-static inline void tlb_flush_pgtable(struct mmu_gather *tlb,
-				     unsigned long address)
-{
-}
-#endif /* !CONFIG_PPC_BOOK3E */
-
-extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
-
-#ifdef CONFIG_PPC64
-#include <asm/pgalloc-64.h>
+#ifdef CONFIG_PPC_BOOK3S
+#include <asm/book3s/pgalloc.h>
#else
-#include <asm/pgalloc-32.h>
+#include <asm/nohash/pgalloc.h>
#endif

#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_PGALLOC_H */
arch/powerpc/include/asm/pgtable-be-types.h (new file, 92 lines)
@@ -0,0 +1,92 @@
#ifndef _ASM_POWERPC_PGTABLE_BE_TYPES_H
#define _ASM_POWERPC_PGTABLE_BE_TYPES_H

#include <asm/cmpxchg.h>

/* PTE level */
typedef struct { __be64 pte; } pte_t;
#define __pte(x)	((pte_t) { cpu_to_be64(x) })
static inline unsigned long pte_val(pte_t x)
{
	return be64_to_cpu(x.pte);
}

static inline __be64 pte_raw(pte_t x)
{
	return x.pte;
}

/* PMD level */
#ifdef CONFIG_PPC64
typedef struct { __be64 pmd; } pmd_t;
#define __pmd(x)	((pmd_t) { cpu_to_be64(x) })
static inline unsigned long pmd_val(pmd_t x)
{
	return be64_to_cpu(x.pmd);
}

static inline __be64 pmd_raw(pmd_t x)
{
	return x.pmd;
}

/*
 * 64 bit hash always use 4 level table. Everybody else use 4 level
 * only for 4K page size.
 */
#if defined(CONFIG_PPC_BOOK3S_64) || !defined(CONFIG_PPC_64K_PAGES)
typedef struct { __be64 pud; } pud_t;
#define __pud(x)	((pud_t) { cpu_to_be64(x) })
static inline unsigned long pud_val(pud_t x)
{
	return be64_to_cpu(x.pud);
}
#endif /* CONFIG_PPC_BOOK3S_64 || !CONFIG_PPC_64K_PAGES */
#endif /* CONFIG_PPC64 */

/* PGD level */
typedef struct { __be64 pgd; } pgd_t;
#define __pgd(x)	((pgd_t) { cpu_to_be64(x) })
static inline unsigned long pgd_val(pgd_t x)
{
	return be64_to_cpu(x.pgd);
}

/* Page protection bits */
typedef struct { unsigned long pgprot; } pgprot_t;
#define pgprot_val(x)	((x).pgprot)
#define __pgprot(x)	((pgprot_t) { (x) })

/*
 * With hash config 64k pages additionally define a bigger "real PTE" type that
 * gathers the "second half" part of the PTE for pseudo 64k pages
 */
#if defined(CONFIG_PPC_64K_PAGES) && defined(CONFIG_PPC_STD_MMU_64)
typedef struct { pte_t pte; unsigned long hidx; } real_pte_t;
#else
typedef struct { pte_t pte; } real_pte_t;
#endif

static inline bool pte_xchg(pte_t *ptep, pte_t old, pte_t new)
{
	unsigned long *p = (unsigned long *)ptep;
	__be64 prev;

	prev = (__force __be64)__cmpxchg_u64(p, (__force unsigned long)pte_raw(old),
					     (__force unsigned long)pte_raw(new));

	return pte_raw(old) == prev;
}

static inline bool pmd_xchg(pmd_t *pmdp, pmd_t old, pmd_t new)
{
	unsigned long *p = (unsigned long *)pmdp;
	__be64 prev;

	prev = (__force __be64)__cmpxchg_u64(p, (__force unsigned long)pmd_raw(old),
					     (__force unsigned long)pmd_raw(new));

	return pmd_raw(old) == prev;
}

#endif /* _ASM_POWERPC_PGTABLE_BE_TYPES_H */
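The new header above stores page-table entries big-endian in memory but byte-swaps only in the `*_val()` accessors; `pte_xchg()` compares raw big-endian images so the compare-and-swap hot path never swaps at all. A minimal userspace sketch of that pattern (modeling `cpu_to_be64` and `__cmpxchg_u64` with GCC builtins; not the kernel's actual implementation):

```c
#include <stdint.h>
#include <stdbool.h>

/* PTEs are stored big-endian; the struct wrapper gives C type checking,
 * and pte_raw() exposes the stored representation for atomic updates. */
typedef struct { uint64_t pte; } pte_t;	/* stored big-endian */

static inline uint64_t cpu_to_be64(uint64_t x)
{
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
	return __builtin_bswap64(x);
#else
	return x;
#endif
}
#define be64_to_cpu cpu_to_be64		/* the swap is its own inverse */

static inline pte_t __pte(uint64_t x) { return (pte_t){ cpu_to_be64(x) }; }
static inline uint64_t pte_val(pte_t x) { return be64_to_cpu(x.pte); }
static inline uint64_t pte_raw(pte_t x) { return x.pte; }

/* Atomically replace *ptep with 'new' iff it still holds 'old'.  The
 * comparison is on the raw (big-endian) image, as in the kernel version,
 * so no byte swap happens here. */
static inline bool pte_xchg(pte_t *ptep, pte_t old, pte_t new)
{
	uint64_t prev = __sync_val_compare_and_swap(&ptep->pte,
						    pte_raw(old),
						    pte_raw(new));
	return prev == pte_raw(old);
}
```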
@@ -1,9 +1,6 @@
#ifndef _ASM_POWERPC_PGTABLE_TYPES_H
#define _ASM_POWERPC_PGTABLE_TYPES_H

-#ifdef CONFIG_STRICT_MM_TYPECHECKS
-/* These are used to make use of C type-checking. */
-
/* PTE level */
typedef struct { pte_basic_t pte; } pte_t;
#define __pte(x)	((pte_t) { (x) })
@@ -48,49 +45,6 @@ typedef struct { unsigned long pgprot; } pgprot_t;
#define pgprot_val(x)	((x).pgprot)
#define __pgprot(x)	((pgprot_t) { (x) })

-#else
-
-/*
- * .. while these make it easier on the compiler
- */
-
-typedef pte_basic_t pte_t;
-#define __pte(x)	(x)
-static inline pte_basic_t pte_val(pte_t pte)
-{
-	return pte;
-}
-
-#ifdef CONFIG_PPC64
-typedef unsigned long pmd_t;
-#define __pmd(x)	(x)
-static inline unsigned long pmd_val(pmd_t pmd)
-{
-	return pmd;
-}
-
-#if defined(CONFIG_PPC_BOOK3S_64) || !defined(CONFIG_PPC_64K_PAGES)
-typedef unsigned long pud_t;
-#define __pud(x)	(x)
-static inline unsigned long pud_val(pud_t pud)
-{
-	return pud;
-}
-#endif /* CONFIG_PPC_BOOK3S_64 || !CONFIG_PPC_64K_PAGES */
-#endif /* CONFIG_PPC64 */
-
-typedef unsigned long pgd_t;
-#define __pgd(x)	(x)
-static inline unsigned long pgd_val(pgd_t pgd)
-{
-	return pgd;
-}
-
-typedef unsigned long pgprot_t;
-#define pgprot_val(x)	(x)
-#define __pgprot(x)	(x)
-
-#endif /* CONFIG_STRICT_MM_TYPECHECKS */
/*
 * With hash config 64k pages additionally define a bigger "real PTE" type that
 * gathers the "second half" part of the PTE for pseudo 64k pages
@@ -100,4 +54,16 @@ typedef struct { pte_t pte; unsigned long hidx; } real_pte_t;
#else
typedef struct { pte_t pte; } real_pte_t;
#endif

+#ifdef CONFIG_PPC_STD_MMU_64
+#include <asm/cmpxchg.h>
+
+static inline bool pte_xchg(pte_t *ptep, pte_t old, pte_t new)
+{
+	unsigned long *p = (unsigned long *)ptep;
+
+	return pte_val(old) == __cmpxchg_u64(p, pte_val(old), pte_val(new));
+}
+#endif

#endif /* _ASM_POWERPC_PGTABLE_TYPES_H */
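With STRICT_MM_TYPECHECKS now unconditional, every table level is a distinct struct type, so mixing levels fails to compile instead of silently converting between integers. A hedged sketch of the idea outside the kernel (names modeled on the header above, not the real build):

```c
#include <stdint.h>

/* Struct-wrapped page-table types: pte_t and pmd_t are distinct C types,
 * so passing a pmd_t where a pte_t is expected is a compile error rather
 * than a silent integer conversion. */
typedef uint64_t pte_basic_t;
typedef struct { pte_basic_t pte; } pte_t;
typedef struct { unsigned long pmd; } pmd_t;

#define __pte(x)	((pte_t) { (x) })
#define __pmd(x)	((pmd_t) { (x) })

static inline pte_basic_t pte_val(pte_t pte) { return pte.pte; }
static inline unsigned long pmd_val(pmd_t pmd) { return pmd.pmd; }

/* pte_val(__pmd(1)) would now fail to compile: incompatible struct type. */
```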
@@ -131,6 +131,7 @@
/* sorted alphabetically */
#define PPC_INST_BHRBE			0x7c00025c
#define PPC_INST_CLRBHRB		0x7c00035c
+#define PPC_INST_CP_ABORT		0x7c00068c
#define PPC_INST_DCBA			0x7c0005ec
#define PPC_INST_DCBA_MASK		0xfc0007fe
#define PPC_INST_DCBAL			0x7c2005ec
@@ -285,6 +286,7 @@
#endif

/* Deal with instructions that older assemblers aren't aware of */
+#define	PPC_CP_ABORT		stringify_in_c(.long PPC_INST_CP_ABORT)
#define	PPC_DCBAL(a, b)		stringify_in_c(.long PPC_INST_DCBAL | \
					__PPC_RA(a) | __PPC_RB(b))
#define	PPC_DCBZL(a, b)		stringify_in_c(.long PPC_INST_DCBZL | \
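`PPC_CP_ABORT` is emitted as a raw `.long` because older assemblers don't know the Power9 `cp_abort` mnemonic. The constant can be sanity-checked against the ISA 3.0 encoding (X-form: primary opcode 31, extended opcode 838); the field extraction below is an illustrative sketch, not kernel code:

```c
#include <stdint.h>

#define PPC_INST_CP_ABORT	0x7c00068cu

/* The primary opcode occupies bits 0-5 in the ISA's big-endian bit
 * numbering, i.e. the top 6 bits of the 32-bit instruction word. */
static inline uint32_t primary_opcode(uint32_t insn)
{
	return insn >> 26;
}

/* The extended opcode of an X-form instruction occupies bits 21-30,
 * i.e. bits 10 down to 1 counted from the least-significant bit. */
static inline uint32_t x_form_xo(uint32_t insn)
{
	return (insn >> 1) & 0x3ff;
}
```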
@@ -33,9 +33,9 @@ extern struct pci_dev *isa_bridge_pcidev;	/* may be NULL if no ISA bus */
struct device_node;
struct pci_dn;

-typedef void *(*traverse_func)(struct device_node *me, void *data);
-void *traverse_pci_devices(struct device_node *start, traverse_func pre,
-			   void *data);
+void *pci_traverse_device_nodes(struct device_node *start,
+				void *(*fn)(struct device_node *, void *),
+				void *data);
void *traverse_pci_dn(struct pci_dn *root,
		      void *(*fn)(struct pci_dn *, void *),
		      void *data);
@@ -427,7 +427,10 @@ END_FTR_SECTION_IFCLR(CPU_FTR_601)
	li	r4,1024;			\
	mtctr	r4;				\
	lis	r4,KERNELBASE@h;		\
+	.machine push;				\
+	.machine "power4";			\
0:	tlbie	r4;				\
+	.machine pop;				\
	addi	r4,r4,0x1000;			\
	bdnz	0b
#endif
@@ -76,6 +76,16 @@
 */
#ifndef __ASSEMBLY__
extern unsigned long bad_call_to_PMD_PAGE_SIZE(void);
+
+/*
+ * Don't just check for any non zero bits in __PAGE_USER, since for book3e
+ * and PTE_64BIT, PAGE_KERNEL_X contains _PAGE_BAP_SR which is also in
+ * _PAGE_USER. Need to explicitly match _PAGE_BAP_UR bit in that case too.
+ */
+static inline bool pte_user(pte_t pte)
+{
+	return (pte_val(pte) & _PAGE_USER) == _PAGE_USER;
+}
#endif /* __ASSEMBLY__ */

/* Location of the PFN in the PTE. Most 32-bit platforms use the same
@@ -184,13 +194,6 @@ extern unsigned long bad_call_to_PMD_PAGE_SIZE(void);
/* Make modules code happy. We don't set RO yet */
#define PAGE_KERNEL_EXEC	PAGE_KERNEL_X

-/*
- * Don't just check for any non zero bits in __PAGE_USER, since for book3e
- * and PTE_64BIT, PAGE_KERNEL_X contains _PAGE_BAP_SR which is also in
- * _PAGE_USER. Need to explicitly match _PAGE_BAP_UR bit in that case too.
- */
-#define pte_user(val)		((val & _PAGE_USER) == _PAGE_USER)

/* Advertise special mapping type for AGP */
#define PAGE_AGP		(PAGE_KERNEL_NC)
#define HAVE_PAGE_AGP
@@ -198,3 +201,12 @@ extern unsigned long bad_call_to_PMD_PAGE_SIZE(void);
/* Advertise support for _PAGE_SPECIAL */
#define __HAVE_ARCH_PTE_SPECIAL

+#ifndef _PAGE_READ
+/* if not defined, we should not find _PAGE_WRITE too */
+#define _PAGE_READ	0
+#define _PAGE_WRITE	_PAGE_RW
+#endif
+
+#ifndef H_PAGE_4K_PFN
+#define H_PAGE_4K_PFN	0
+#endif
@@ -347,6 +347,7 @@
#define   LPCR_LPES_SH		2
#define   LPCR_RMI		0x00000002	/* real mode is cache inhibit */
#define   LPCR_HDICE		0x00000001	/* Hyp Decr enable (HV,PR,EE) */
+#define   LPCR_UPRT		0x00400000	/* Use Process Table (ISA 3) */
#ifndef SPRN_LPID
#define SPRN_LPID	0x13F	/* Logical Partition Identifier */
#endif
@@ -587,6 +588,7 @@
#define SPRN_PIR	0x3FF	/* Processor Identification Register */
#endif
#define SPRN_TIR	0x1BE	/* Thread Identification Register */
+#define SPRN_PTCR	0x1D0	/* Partition table control Register */
#define SPRN_PSPB	0x09F	/* Problem State Priority Boost reg */
#define SPRN_PTEHI	0x3D5	/* 981 7450 PTE HI word (S/W TLB load) */
#define SPRN_PTELO	0x3D6	/* 982 7450 PTE LO word (S/W TLB load) */
@@ -1182,6 +1184,7 @@
#define PVR_970GX	0x0045
#define PVR_POWER7p	0x004A
#define PVR_POWER8E	0x004B
+#define PVR_POWER8NVL	0x004C
#define PVR_POWER8	0x004D
#define PVR_BE		0x0070
#define PVR_PA6T	0x0090
@@ -58,6 +58,7 @@ extern void __flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,

#elif defined(CONFIG_PPC_STD_MMU_32)

+#define MMU_NO_CONTEXT	(0)
/*
 * TLB flushing for "classic" hash-MMU 32-bit CPUs, 6xx, 7xx, 7xxx
 */
@@ -78,7 +79,7 @@ static inline void local_flush_tlb_mm(struct mm_struct *mm)
}

#elif defined(CONFIG_PPC_STD_MMU_64)
-#include <asm/book3s/64/tlbflush-hash.h>
+#include <asm/book3s/64/tlbflush.h>
#else
#error Unsupported MMU type
#endif
arch/powerpc/include/uapi/asm/perf_regs.h (new file, 50 lines)
@@ -0,0 +1,50 @@
#ifndef _UAPI_ASM_POWERPC_PERF_REGS_H
#define _UAPI_ASM_POWERPC_PERF_REGS_H

enum perf_event_powerpc_regs {
	PERF_REG_POWERPC_R0,
	PERF_REG_POWERPC_R1,
	PERF_REG_POWERPC_R2,
	PERF_REG_POWERPC_R3,
	PERF_REG_POWERPC_R4,
	PERF_REG_POWERPC_R5,
	PERF_REG_POWERPC_R6,
	PERF_REG_POWERPC_R7,
	PERF_REG_POWERPC_R8,
	PERF_REG_POWERPC_R9,
	PERF_REG_POWERPC_R10,
	PERF_REG_POWERPC_R11,
	PERF_REG_POWERPC_R12,
	PERF_REG_POWERPC_R13,
	PERF_REG_POWERPC_R14,
	PERF_REG_POWERPC_R15,
	PERF_REG_POWERPC_R16,
	PERF_REG_POWERPC_R17,
	PERF_REG_POWERPC_R18,
	PERF_REG_POWERPC_R19,
	PERF_REG_POWERPC_R20,
	PERF_REG_POWERPC_R21,
	PERF_REG_POWERPC_R22,
	PERF_REG_POWERPC_R23,
	PERF_REG_POWERPC_R24,
	PERF_REG_POWERPC_R25,
	PERF_REG_POWERPC_R26,
	PERF_REG_POWERPC_R27,
	PERF_REG_POWERPC_R28,
	PERF_REG_POWERPC_R29,
	PERF_REG_POWERPC_R30,
	PERF_REG_POWERPC_R31,
	PERF_REG_POWERPC_NIP,
	PERF_REG_POWERPC_MSR,
	PERF_REG_POWERPC_ORIG_R3,
	PERF_REG_POWERPC_CTR,
	PERF_REG_POWERPC_LINK,
	PERF_REG_POWERPC_XER,
	PERF_REG_POWERPC_CCR,
	PERF_REG_POWERPC_SOFTE,
	PERF_REG_POWERPC_TRAP,
	PERF_REG_POWERPC_DAR,
	PERF_REG_POWERPC_DSISR,
	PERF_REG_POWERPC_MAX,
};
#endif /* _UAPI_ASM_POWERPC_PERF_REGS_H */
@@ -438,7 +438,11 @@ int main(void)
	DEFINE(BUG_ENTRY_SIZE, sizeof(struct bug_entry));
#endif

+#ifdef MAX_PGD_TABLE_SIZE
+	DEFINE(PGD_TABLE_SIZE, MAX_PGD_TABLE_SIZE);
+#else
	DEFINE(PGD_TABLE_SIZE, PGD_TABLE_SIZE);
+#endif
	DEFINE(PTE_SIZE, sizeof(pte_t));

#ifdef CONFIG_KVM
@@ -162,7 +162,7 @@ void btext_map(void)
	offset = ((unsigned long) dispDeviceBase) - base;
	size = dispDeviceRowBytes * dispDeviceRect[3] + offset
		+ dispDeviceRect[0];
-	vbase = __ioremap(base, size, _PAGE_NO_CACHE);
+	vbase = __ioremap(base, size, pgprot_val(pgprot_noncached_wc(__pgprot(0))));
	if (vbase == 0)
		return;
	logicalDisplayBase = vbase + offset;
@@ -63,7 +63,6 @@ extern void __setup_cpu_745x(unsigned long offset, struct cpu_spec* spec);
extern void __setup_cpu_ppc970(unsigned long offset, struct cpu_spec* spec);
extern void __setup_cpu_ppc970MP(unsigned long offset, struct cpu_spec* spec);
extern void __setup_cpu_pa6t(unsigned long offset, struct cpu_spec* spec);
-extern void __setup_cpu_a2(unsigned long offset, struct cpu_spec* spec);
extern void __restore_cpu_pa6t(void);
extern void __restore_cpu_ppc970(void);
extern void __setup_cpu_power7(unsigned long offset, struct cpu_spec* spec);
@@ -72,7 +71,6 @@ extern void __setup_cpu_power8(unsigned long offset, struct cpu_spec* spec);
extern void __restore_cpu_power8(void);
extern void __setup_cpu_power9(unsigned long offset, struct cpu_spec* spec);
extern void __restore_cpu_power9(void);
-extern void __restore_cpu_a2(void);
extern void __flush_tlb_power7(unsigned int action);
extern void __flush_tlb_power8(unsigned int action);
extern void __flush_tlb_power9(unsigned int action);
@@ -48,7 +48,7 @@


/** Overview:
- *  EEH, or "Extended Error Handling" is a PCI bridge technology for
+ *  EEH, or "Enhanced Error Handling" is a PCI bridge technology for
 *  dealing with PCI bus errors that can't be dealt with within the
 *  usual PCI framework, except by check-stopping the CPU.  Systems
 *  that are designed for high-availability/reliability cannot afford
@@ -1068,7 +1068,7 @@ void eeh_add_device_early(struct pci_dn *pdn)
	struct pci_controller *phb;
	struct eeh_dev *edev = pdn_to_eeh_dev(pdn);

-	if (!edev || !eeh_enabled())
+	if (!edev)
		return;

	if (!eeh_has_flag(EEH_PROBE_MODE_DEVTREE))
@@ -1336,14 +1336,11 @@ static int eeh_pe_change_owner(struct eeh_pe *pe)
			    id->subdevice != pdev->subsystem_device)
				continue;

-			goto reset;
+			return eeh_pe_reset_and_recover(pe);
		}
	}

	return eeh_unfreeze_pe(pe, true);
-
-reset:
-	return eeh_pe_reset_and_recover(pe);
}

/**
@@ -171,6 +171,16 @@ static void *eeh_dev_save_state(void *data, void *userdata)
	if (!edev)
		return NULL;

+	/*
+	 * We cannot access the config space on some adapters.
+	 * Otherwise, it will cause fenced PHB. We don't save
+	 * the content in their config space and will restore
+	 * from the initial config space saved when the EEH
+	 * device is created.
+	 */
+	if (edev->pe && (edev->pe->state & EEH_PE_CFG_RESTRICTED))
+		return NULL;
+
	pdev = eeh_dev_to_pci_dev(edev);
	if (!pdev)
		return NULL;
@@ -312,6 +322,19 @@ static void *eeh_dev_restore_state(void *data, void *userdata)
	if (!edev)
		return NULL;

+	/*
+	 * The content in the config space isn't saved because
+	 * the blocked config space on some adapters. We have
+	 * to restore the initial saved config space when the
+	 * EEH device is created.
+	 */
+	if (edev->pe && (edev->pe->state & EEH_PE_CFG_RESTRICTED)) {
+		if (list_is_last(&edev->list, &edev->pe->edevs))
+			eeh_pe_restore_bars(edev->pe);
+
+		return NULL;
+	}
+
	pdev = eeh_dev_to_pci_dev(edev);
	if (!pdev)
		return NULL;
@@ -552,7 +575,7 @@ static int eeh_clear_pe_frozen_state(struct eeh_pe *pe,

int eeh_pe_reset_and_recover(struct eeh_pe *pe)
{
-	int result, ret;
+	int ret;

	/* Bail if the PE is being recovered */
	if (pe->state & EEH_PE_RECOVERING)
@@ -564,9 +587,6 @@ int eeh_pe_reset_and_recover(struct eeh_pe *pe)
	/* Save states */
	eeh_pe_dev_traverse(pe, eeh_dev_save_state, NULL);

-	/* Report error */
-	eeh_pe_dev_traverse(pe, eeh_report_error, &result);
-
	/* Issue reset */
	ret = eeh_reset_pe(pe);
	if (ret) {
@@ -581,15 +601,9 @@ int eeh_pe_reset_and_recover(struct eeh_pe *pe)
		return ret;
	}

-	/* Notify completion of reset */
-	eeh_pe_dev_traverse(pe, eeh_report_reset, &result);
-
	/* Restore device state */
	eeh_pe_dev_traverse(pe, eeh_dev_restore_state, NULL);

-	/* Resume */
-	eeh_pe_dev_traverse(pe, eeh_report_resume, NULL);
-
	/* Clear recovery mode */
	eeh_pe_state_clear(pe, EEH_PE_RECOVERING);

@@ -621,7 +635,7 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
	 * We don't remove the corresponding PE instances because
	 * we need the information afterwords. The attached EEH
	 * devices are expected to be attached soon when calling
-	 * into pcibios_add_pci_devices().
+	 * into pci_hp_add_devices().
	 */
	eeh_pe_state_mark(pe, EEH_PE_KEEP);
	if (bus) {
@@ -630,7 +644,7 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
		} else {
			eeh_pe_state_clear(pe, EEH_PE_PRI_BUS);
			pci_lock_rescan_remove();
-			pcibios_remove_pci_devices(bus);
+			pci_hp_remove_devices(bus);
			pci_unlock_rescan_remove();
		}
	} else if (frozen_bus) {
@@ -681,7 +695,7 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
		if (pe->type & EEH_PE_VF)
			eeh_add_virt_device(edev, NULL);
		else
-			pcibios_add_pci_devices(bus);
+			pci_hp_add_devices(bus);
	} else if (frozen_bus && rmv_data->removed) {
		pr_info("EEH: Sleep 5s ahead of partial hotplug\n");
		ssleep(5);
@@ -691,7 +705,7 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
		if (pe->type & EEH_PE_VF)
			eeh_add_virt_device(edev, NULL);
		else
-			pcibios_add_pci_devices(frozen_bus);
+			pci_hp_add_devices(frozen_bus);
	}
	eeh_pe_state_clear(pe, EEH_PE_KEEP);

@@ -896,7 +910,7 @@ perm_error:
		eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);

		pci_lock_rescan_remove();
-		pcibios_remove_pci_devices(frozen_bus);
+		pci_hp_remove_devices(frozen_bus);
		pci_unlock_rescan_remove();
	}
}
@@ -981,7 +995,7 @@ static void eeh_handle_special_event(void)
				bus = eeh_pe_bus_get(phb_pe);
				eeh_pe_dev_traverse(pe,
					eeh_report_failure, NULL);
-				pcibios_remove_pci_devices(bus);
+				pci_hp_remove_devices(bus);
			}
			pci_unlock_rescan_remove();
		}
@@ -36,7 +36,7 @@

static DEFINE_SPINLOCK(eeh_eventlist_lock);
static struct semaphore eeh_eventlist_sem;
-LIST_HEAD(eeh_eventlist);
+static LIST_HEAD(eeh_eventlist);

/**
 * eeh_event_handler - Dispatch EEH events.
@@ -249,7 +249,7 @@ static void *__eeh_pe_get(void *data, void *flag)
	} else {
		if (edev->pe_config_addr &&
		    (edev->pe_config_addr == pe->addr))
-			return pe;
+		return pe;
	}

	/* Try BDF address */
@@ -37,6 +37,7 @@
#include <asm/hw_irq.h>
#include <asm/context_tracking.h>
#include <asm/tm.h>
+#include <asm/ppc-opcode.h>

/*
 * System calls.
@@ -509,6 +510,14 @@ BEGIN_FTR_SECTION
	ldarx	r6,0,r1
END_FTR_SECTION_IFSET(CPU_FTR_STCX_CHECKS_ADDRESS)

+BEGIN_FTR_SECTION
+/*
+ * A cp_abort (copy paste abort) here ensures that when context switching, a
+ * copy from one process can't leak into the paste of another.
+ */
+	PPC_CP_ABORT
+END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
+
#ifdef CONFIG_PPC_BOOK3S
/* Cancel all explict user streams as they will have no use after context
 * switch and will stop the HW from creating streams itself
@@ -520,7 +529,10 @@ END_FTR_SECTION_IFSET(CPU_FTR_STCX_CHECKS_ADDRESS)
	std	r6,PACACURRENT(r13)	/* Set new 'current' */

	ld	r8,KSP(r4)	/* new stack pointer */
-#ifdef CONFIG_PPC_BOOK3S
+#ifdef CONFIG_PPC_STD_MMU_64
+BEGIN_MMU_FTR_SECTION
+	b	2f
+END_MMU_FTR_SECTION_IFSET(MMU_FTR_RADIX)
BEGIN_FTR_SECTION
	clrrdi	r6,r8,28	/* get its ESID */
	clrrdi	r9,r1,28	/* get current sp ESID */
@@ -566,7 +578,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
	slbmte	r7,r0
	isync
2:
-#endif /* !CONFIG_PPC_BOOK3S */
+#endif /* CONFIG_PPC_STD_MMU_64 */

	CURRENT_THREAD_INFO(r7, r8)  /* base of new stack */
	/* Note: this uses SWITCH_FRAME_SIZE rather than INT_FRAME_SIZE
@ -189,7 +189,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
|
||||
#endif /* CONFIG_PPC_P7_NAP */
|
||||
EXCEPTION_PROLOG_0(PACA_EXMC)
|
||||
BEGIN_FTR_SECTION
|
||||
b machine_check_pSeries_early
|
||||
b machine_check_powernv_early
|
||||
FTR_SECTION_ELSE
|
||||
b machine_check_pSeries_0
|
||||
ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
|
||||
@ -209,11 +209,6 @@ data_access_slb_pSeries:
|
||||
EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST, 0x380)
|
||||
std r3,PACA_EXSLB+EX_R3(r13)
|
||||
mfspr r3,SPRN_DAR
|
||||
#ifdef __DISABLED__
|
||||
/* Keep that around for when we re-implement dynamic VSIDs */
|
||||
cmpdi r3,0
|
||||
bge slb_miss_user_pseries
|
||||
#endif /* __DISABLED__ */
|
||||
mfspr r12,SPRN_SRR1
|
||||
#ifndef CONFIG_RELOCATABLE
|
||||
b slb_miss_realmode
|
||||
@ -240,11 +235,6 @@ instruction_access_slb_pSeries:
|
||||
EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST, 0x480)
|
||||
std r3,PACA_EXSLB+EX_R3(r13)
|
||||
mfspr r3,SPRN_SRR0 /* SRR0 is faulting address */
|
||||
#ifdef __DISABLED__
|
||||
/* Keep that around for when we re-implement dynamic VSIDs */
|
||||
cmpdi r3,0
|
||||
bge slb_miss_user_pseries
|
||||
#endif /* __DISABLED__ */
|
||||
mfspr r12,SPRN_SRR1
|
||||
#ifndef CONFIG_RELOCATABLE
|
||||
b slb_miss_realmode
|
||||
@ -443,7 +433,7 @@ denorm_exception_hv:
|
||||
|
||||
.align 7
|
||||
/* moved from 0x200 */
|
||||
machine_check_pSeries_early:
|
||||
machine_check_powernv_early:
|
||||
BEGIN_FTR_SECTION
|
||||
EXCEPTION_PROLOG_1(PACA_EXMC, NOTEST, 0x200)
|
||||
/*
|
||||
@ -709,34 +699,6 @@ system_reset_fwnmi:
|
||||
|
||||
#endif /* CONFIG_PPC_PSERIES */
|
||||
|
||||
#ifdef __DISABLED__
|
||||
/*
|
||||
* This is used for when the SLB miss handler has to go virtual,
|
||||
* which doesn't happen for now anymore but will once we re-implement
|
||||
* dynamic VSIDs for shared page tables
|
||||
*/
|
||||
slb_miss_user_pseries:
|
||||
std r10,PACA_EXGEN+EX_R10(r13)
|
||||
std r11,PACA_EXGEN+EX_R11(r13)
|
||||
std r12,PACA_EXGEN+EX_R12(r13)
|
||||
GET_SCRATCH0(r10)
|
||||
ld r11,PACA_EXSLB+EX_R9(r13)
|
||||
ld r12,PACA_EXSLB+EX_R3(r13)
|
||||
std r10,PACA_EXGEN+EX_R13(r13)
|
||||
std r11,PACA_EXGEN+EX_R9(r13)
|
||||
std r12,PACA_EXGEN+EX_R3(r13)
|
||||
clrrdi r12,r13,32
|
||||
mfmsr r10
|
||||
mfspr r11,SRR0 /* save SRR0 */
|
||||
ori r12,r12,slb_miss_user_common@l /* virt addr of handler */
|
||||
ori r10,r10,MSR_IR|MSR_DR|MSR_RI
|
||||
mtspr SRR0,r12
|
||||
mfspr r12,SRR1 /* and SRR1 */
|
||||
mtspr SRR1,r10
|
||||
rfid
|
||||
b . /* prevent spec. execution */
|
||||
#endif /* __DISABLED__ */
|
||||
|
||||
#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
|
||||
kvmppc_skip_interrupt:
|
||||
/*
|
||||
@ -764,11 +726,10 @@ kvmppc_skip_Hinterrupt:
|
||||
#endif
|
||||
|
||||
/*
|
||||
* Code from here down to __end_handlers is invoked from the
|
||||
* exception prologs above. Because the prologs assemble the
|
||||
* addresses of these handlers using the LOAD_HANDLER macro,
|
||||
* which uses an ori instruction, these handlers must be in
|
||||
* the first 64k of the kernel image.
|
||||
* Ensure that any handlers that get invoked from the exception prologs
|
||||
* above are below the first 64KB (0x10000) of the kernel image because
|
||||
* the prologs assemble the addresses of these handlers using the
|
||||
* LOAD_HANDLER macro, which uses an ori instruction.
|
||||
*/
|
||||
|
||||
/*** Common interrupt handlers ***/
|
||||
@ -953,11 +914,6 @@ hv_facility_unavailable_relon_trampoline:
|
||||
#endif
|
||||
STD_RELON_EXCEPTION_PSERIES(0x5700, 0x1700, altivec_assist)
|
||||
|
||||
/* Other future vectors */
|
||||
.align 7
|
||||
.globl __end_interrupts
|
||||
__end_interrupts:
|
||||
|
||||
.align 7
|
||||
system_call_entry:
|
||||
b system_call_common
|
||||
@ -983,7 +939,13 @@ data_access_common:
|
||||
ld r3,PACA_EXGEN+EX_DAR(r13)
|
||||
lwz r4,PACA_EXGEN+EX_DSISR(r13)
|
||||
li r5,0x300
|
||||
std r3,_DAR(r1)
|
||||
std r4,_DSISR(r1)
|
||||
BEGIN_MMU_FTR_SECTION
|
||||
b do_hash_page /* Try to handle as hpte fault */
|
||||
MMU_FTR_SECTION_ELSE
|
||||
b handle_page_fault
|
||||
ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_RADIX)
|
||||
|
||||
.align 7
|
||||
.globl h_data_storage_common
|
||||
@ -1008,74 +970,16 @@ instruction_access_common:
|
||||
ld r3,_NIP(r1)
|
||||
andis. r4,r12,0x5820
|
||||
li r5,0x400
|
||||
std r3,_DAR(r1)
|
||||
std r4,_DSISR(r1)
|
||||
BEGIN_MMU_FTR_SECTION
|
||||
b do_hash_page /* Try to handle as hpte fault */
|
||||
MMU_FTR_SECTION_ELSE
|
||||
b handle_page_fault
|
||||
ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_RADIX)
|
||||
|
||||
STD_EXCEPTION_COMMON(0xe20, h_instr_storage, unknown_exception)
|
||||
|
||||
/*
|
||||
* Here is the common SLB miss user that is used when going to virtual
|
||||
* mode for SLB misses, that is currently not used
|
||||
*/
|
||||
#ifdef __DISABLED__
|
||||
.align 7
|
||||
.globl slb_miss_user_common
|
||||
slb_miss_user_common:
|
||||
mflr r10
|
||||
std r3,PACA_EXGEN+EX_DAR(r13)
|
||||
stw r9,PACA_EXGEN+EX_CCR(r13)
|
||||
std r10,PACA_EXGEN+EX_LR(r13)
|
||||
std r11,PACA_EXGEN+EX_SRR0(r13)
|
||||
bl slb_allocate_user
|
||||
|
||||
	ld	r10,PACA_EXGEN+EX_LR(r13)
	ld	r3,PACA_EXGEN+EX_R3(r13)
	lwz	r9,PACA_EXGEN+EX_CCR(r13)
	ld	r11,PACA_EXGEN+EX_SRR0(r13)
	mtlr	r10
	beq-	slb_miss_fault

	andi.	r10,r12,MSR_RI		/* check for unrecoverable exception */
	beq-	unrecov_user_slb
	mfmsr	r10

.machine push
.machine "power4"
	mtcrf	0x80,r9
.machine pop

	clrrdi	r10,r10,2		/* clear RI before setting SRR0/1 */
	mtmsrd	r10,1

	mtspr	SRR0,r11
	mtspr	SRR1,r12

	ld	r9,PACA_EXGEN+EX_R9(r13)
	ld	r10,PACA_EXGEN+EX_R10(r13)
	ld	r11,PACA_EXGEN+EX_R11(r13)
	ld	r12,PACA_EXGEN+EX_R12(r13)
	ld	r13,PACA_EXGEN+EX_R13(r13)
	rfid
	b	.

slb_miss_fault:
	EXCEPTION_PROLOG_COMMON(0x380, PACA_EXGEN)
	ld	r4,PACA_EXGEN+EX_DAR(r13)
	li	r5,0
	std	r4,_DAR(r1)
	std	r5,_DSISR(r1)
	b	handle_page_fault

unrecov_user_slb:
	EXCEPTION_PROLOG_COMMON(0x4200, PACA_EXGEN)
	RECONCILE_IRQ_STATE(r10, r11)
	bl	save_nvgprs
1:	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	unrecoverable_exception
	b	1b

#endif /* __DISABLED__ */


/*
 * Machine check is different because we use a different
 * save area: PACA_EXMC instead of PACA_EXGEN.
@@ -1230,10 +1134,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
	STD_EXCEPTION_COMMON(0xf60, facility_unavailable, facility_unavailable_exception)
	STD_EXCEPTION_COMMON(0xf80, hv_facility_unavailable, facility_unavailable_exception)

	.align	7
	.globl	__end_handlers
__end_handlers:

	/* Equivalents to the above handlers for relocation-on interrupt vectors */
	STD_RELON_EXCEPTION_HV_OOL(0xe40, emulation_assist)
	MASKABLE_RELON_EXCEPTION_HV_OOL(0xe80, h_doorbell)
@@ -1244,6 +1144,17 @@ __end_handlers:
	STD_RELON_EXCEPTION_PSERIES_OOL(0xf60, facility_unavailable)
	STD_RELON_EXCEPTION_HV_OOL(0xf80, hv_facility_unavailable)

/*
 * The __end_interrupts marker must be past the out-of-line (OOL)
 * handlers, so that they are copied to real address 0x100 when running
 * a relocatable kernel. This ensures they can be reached from the short
 * trampoline handlers (like 0x4f00, 0x4f20, etc.) which branch
 * directly, without using LOAD_HANDLER().
 */
	.align	7
	.globl	__end_interrupts
__end_interrupts:

#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV)
/*
 * Data area reserved for FWNMI option.
@@ -1476,8 +1387,11 @@ slb_miss_realmode:
	stw	r9,PACA_EXSLB+EX_CCR(r13)	/* save CR in exc. frame */
	std	r10,PACA_EXSLB+EX_LR(r13)	/* save LR */

#ifdef CONFIG_PPC_STD_MMU_64
BEGIN_MMU_FTR_SECTION
	bl	slb_allocate_realmode

END_MMU_FTR_SECTION_IFCLR(MMU_FTR_RADIX)
#endif
	/* All done -- return from exception. */

	ld	r10,PACA_EXSLB+EX_LR(r13)
@@ -1485,7 +1399,9 @@ slb_miss_realmode:
	lwz	r9,PACA_EXSLB+EX_CCR(r13)	/* get saved CR */

	mtlr	r10

BEGIN_MMU_FTR_SECTION
	b	2f
END_MMU_FTR_SECTION_IFSET(MMU_FTR_RADIX)
	andi.	r10,r12,MSR_RI	/* check for unrecoverable exception */
	beq-	2f

@@ -1536,9 +1452,7 @@ power4_fixup_nap:
 */
	.align	7
do_hash_page:
	std	r3,_DAR(r1)
	std	r4,_DSISR(r1)

#ifdef CONFIG_PPC_STD_MMU_64
	andis.	r0,r4,0xa410		/* weird error? */
	bne-	handle_page_fault	/* if not, try to insert a HPTE */
	andis.	r0,r4,DSISR_DABRMATCH@h
@@ -1566,6 +1480,7 @@ do_hash_page:

	/* Error */
	blt-	13f
#endif /* CONFIG_PPC_STD_MMU_64 */

	/* Here we have a page fault that hash_page can't handle. */
handle_page_fault:
@@ -1592,6 +1507,7 @@ handle_dabr_fault:
12:	b	ret_from_except_lite


#ifdef CONFIG_PPC_STD_MMU_64
/* We have a page fault that hash_page could handle but HV refused
 * the PTE insertion
 */
@@ -1601,6 +1517,7 @@ handle_dabr_fault:
	ld	r4,_DAR(r1)
	bl	low_hash_fault
	b	ret_from_except
#endif

/*
 * We come here as a result of a DSI at a point where we don't want
@@ -607,3 +607,13 @@ unsigned long __init arch_syscall_addr(int nr)
	return sys_call_table[nr*2];
}
#endif /* CONFIG_FTRACE_SYSCALLS && CONFIG_PPC64 */

#if defined(CONFIG_PPC64) && (!defined(_CALL_ELF) || _CALL_ELF != 2)
char *arch_ftrace_match_adjust(char *str, const char *search)
{
	if (str[0] == '.' && search[0] != '.')
		return str + 1;
	else
		return str;
}
#endif /* defined(CONFIG_PPC64) && (!defined(_CALL_ELF) || _CALL_ELF != 2) */
@@ -973,13 +973,16 @@ start_here_common:
 * This stuff goes at the beginning of the bss, which is page-aligned.
 */
	.section ".bss"

	.align	PAGE_SHIFT

	.globl	empty_zero_page
empty_zero_page:
	.space	PAGE_SIZE
/*
 * pgd dir should be aligned to PGD_TABLE_SIZE which is 64K.
 * We will need to find a better way to fix this
 */
	.align	16

	.globl	swapper_pg_dir
swapper_pg_dir:
	.space	PGD_TABLE_SIZE

	.globl	empty_zero_page
empty_zero_page:
	.space	PAGE_SIZE
@@ -408,7 +408,7 @@ static ssize_t modalias_show(struct device *dev,
	return len+1;
}

struct device_attribute ibmebus_bus_device_attrs[] = {
static struct device_attribute ibmebus_bus_device_attrs[] = {
	__ATTR_RO(devspec),
	__ATTR_RO(name),
	__ATTR_RO(modalias),
@@ -109,14 +109,14 @@ static void pci_process_ISA_OF_ranges(struct device_node *isa_node,
		size = 0x10000;

	__ioremap_at(phb_io_base_phys, (void *)ISA_IO_BASE,
		     size, _PAGE_NO_CACHE|_PAGE_GUARDED);
		     size, pgprot_val(pgprot_noncached(__pgprot(0))));
	return;

inval_range:
	printk(KERN_ERR "no ISA IO ranges or unexpected isa range, "
	       "mapping 64k\n");
	__ioremap_at(phb_io_base_phys, (void *)ISA_IO_BASE,
		     0x10000, _PAGE_NO_CACHE|_PAGE_GUARDED);
		     0x10000, pgprot_val(pgprot_noncached(__pgprot(0))));
}

@@ -228,17 +228,12 @@ static struct property memory_limit_prop = {

static void __init export_crashk_values(struct device_node *node)
{
	struct property *prop;

	/* There might be existing crash kernel properties, but we can't
	 * be sure what's in them, so remove them. */
	prop = of_find_property(node, "linux,crashkernel-base", NULL);
	if (prop)
		of_remove_property(node, prop);

	prop = of_find_property(node, "linux,crashkernel-size", NULL);
	if (prop)
		of_remove_property(node, prop);
	of_remove_property(node, of_find_property(node,
				"linux,crashkernel-base", NULL));
	of_remove_property(node, of_find_property(node,
				"linux,crashkernel-size", NULL));

	if (crashk_res.start != 0) {
		crashk_base = cpu_to_be_ulong(crashk_res.start),
@@ -258,16 +253,13 @@ static void __init export_crashk_values(struct device_node *node)
static int __init kexec_setup(void)
{
	struct device_node *node;
	struct property *prop;

	node = of_find_node_by_path("/chosen");
	if (!node)
		return -ENOENT;

	/* remove any stale properties so ours can be found */
	prop = of_find_property(node, kernel_end_prop.name, NULL);
	if (prop)
		of_remove_property(node, prop);
	of_remove_property(node, of_find_property(node, kernel_end_prop.name, NULL));

	/* information needed by userspace when using default_machine_kexec */
	kernel_end = cpu_to_be_ulong(__pa(_end));
@@ -76,6 +76,7 @@ int default_machine_kexec_prepare(struct kimage *image)
	 * end of the blocked region (begin >= high).  Use the
	 * boolean identity !(a || b)  === (!a && !b).
	 */
#ifdef CONFIG_PPC_STD_MMU_64
	if (htab_address) {
		low = __pa(htab_address);
		high = low + htab_size_bytes;
@@ -88,6 +89,7 @@ int default_machine_kexec_prepare(struct kimage *image)
			return -ETXTBSY;
		}
	}
#endif /* CONFIG_PPC_STD_MMU_64 */

	/* We also should not overwrite the tce tables */
	for_each_node_by_type(node, "pci") {
@@ -381,7 +383,7 @@ void default_machine_kexec(struct kimage *image)
	/* NOTREACHED */
}

#ifndef CONFIG_PPC_BOOK3E
#ifdef CONFIG_PPC_STD_MMU_64
/* Values we need to export to the second kernel via the device tree. */
static unsigned long htab_base;
static unsigned long htab_size;
@@ -401,7 +403,6 @@ static struct property htab_size_prop = {
static int __init export_htab_values(void)
{
	struct device_node *node;
	struct property *prop;

	/* On machines with no htab htab_address is NULL */
	if (!htab_address)
@@ -412,12 +413,8 @@ static int __init export_htab_values(void)
		return -ENODEV;

	/* remove any stale propertys so ours can be found */
	prop = of_find_property(node, htab_base_prop.name, NULL);
	if (prop)
		of_remove_property(node, prop);
	prop = of_find_property(node, htab_size_prop.name, NULL);
	if (prop)
		of_remove_property(node, prop);
	of_remove_property(node, of_find_property(node, htab_base_prop.name, NULL));
	of_remove_property(node, of_find_property(node, htab_size_prop.name, NULL));

	htab_base = cpu_to_be64(__pa(htab_address));
	of_add_property(node, &htab_base_prop);
@@ -428,4 +425,4 @@ static int __init export_htab_values(void)
	return 0;
}
late_initcall(export_htab_values);
#endif /* !CONFIG_PPC_BOOK3E */
#endif /* CONFIG_PPC_STD_MMU_64 */
@@ -37,7 +37,7 @@ static DEFINE_PER_CPU(int, mce_queue_count);
static DEFINE_PER_CPU(struct machine_check_event[MAX_MC_EVT], mce_event_queue);

static void machine_check_process_queued_event(struct irq_work *work);
struct irq_work mce_event_process_work = {
static struct irq_work mce_event_process_work = {
	.func = machine_check_process_queued_event,
};

@@ -72,11 +72,15 @@ void __flush_tlb_power8(unsigned int action)

void __flush_tlb_power9(unsigned int action)
{
	if (radix_enabled())
		flush_tlb_206(POWER9_TLB_SETS_RADIX, action);

	flush_tlb_206(POWER9_TLB_SETS_HASH, action);
}


/* flush SLBs and reload */
#ifdef CONFIG_PPC_STD_MMU_64
static void flush_and_reload_slb(void)
{
	struct slb_shadow *slb;
@@ -110,6 +114,7 @@ static void flush_and_reload_slb(void)
		asm volatile("slbmte %0,%1" : : "r" (rs), "r" (rb));
	}
}
#endif

static long mce_handle_derror(uint64_t dsisr, uint64_t slb_error_bits)
{
@@ -120,6 +125,7 @@ static long mce_handle_derror(uint64_t dsisr, uint64_t slb_error_bits)
	 * reset the error bits whenever we handle them so that at the end
	 * we can check whether we handled all of them or not.
	 * */
#ifdef CONFIG_PPC_STD_MMU_64
	if (dsisr & slb_error_bits) {
		flush_and_reload_slb();
		/* reset error bits */
@@ -131,6 +137,7 @@ static long mce_handle_derror(uint64_t dsisr, uint64_t slb_error_bits)
		/* reset error bits */
		dsisr &= ~P7_DSISR_MC_TLB_MULTIHIT_MFTLB;
	}
#endif
	/* Any other errors we don't understand? */
	if (dsisr & 0xffffffffUL)
		handled = 0;
@@ -150,6 +157,7 @@ static long mce_handle_common_ierror(uint64_t srr1)
	switch (P7_SRR1_MC_IFETCH(srr1)) {
	case 0:
		break;
#ifdef CONFIG_PPC_STD_MMU_64
	case P7_SRR1_MC_IFETCH_SLB_PARITY:
	case P7_SRR1_MC_IFETCH_SLB_MULTIHIT:
		/* flush and reload SLBs for SLB errors. */
@@ -162,6 +170,7 @@ static long mce_handle_common_ierror(uint64_t srr1)
			handled = 1;
		}
		break;
#endif
	default:
		break;
	}
@@ -175,10 +184,12 @@ static long mce_handle_ierror_p7(uint64_t srr1)

	handled = mce_handle_common_ierror(srr1);

#ifdef CONFIG_PPC_STD_MMU_64
	if (P7_SRR1_MC_IFETCH(srr1) == P7_SRR1_MC_IFETCH_SLB_BOTH) {
		flush_and_reload_slb();
		handled = 1;
	}
#endif
	return handled;
}

@@ -321,10 +332,12 @@ static long mce_handle_ierror_p8(uint64_t srr1)

	handled = mce_handle_common_ierror(srr1);

#ifdef CONFIG_PPC_STD_MMU_64
	if (P7_SRR1_MC_IFETCH(srr1) == P8_SRR1_MC_IFETCH_ERAT_MULTIHIT) {
		flush_and_reload_slb();
		handled = 1;
	}
#endif
	return handled;
}

@@ -599,12 +599,6 @@ _GLOBAL(__bswapdi2)
	mr	r4,r10
	blr

_GLOBAL(abs)
	srawi	r4,r3,31
	xor	r3,r3,r4
	sub	r3,r3,r4
	blr

#ifdef CONFIG_SMP
_GLOBAL(start_secondary_resume)
	/* Reset stack */
@@ -15,8 +15,6 @@
 * parsing code.
 */

#include <linux/module.h>

#include <linux/types.h>
#include <linux/errno.h>
#include <linux/fs.h>
@@ -1231,12 +1229,4 @@ static int __init nvram_init(void)

	return rc;
}

static void __exit nvram_cleanup(void)
{
	misc_deregister( &nvram_dev );
}

module_init(nvram_init);
module_exit(nvram_cleanup);
MODULE_LICENSE("GPL");
device_initcall(nvram_init);
@@ -21,6 +21,35 @@
#include <asm/firmware.h>
#include <asm/eeh.h>

static struct pci_bus *find_bus_among_children(struct pci_bus *bus,
					       struct device_node *dn)
{
	struct pci_bus *child = NULL;
	struct pci_bus *tmp;

	if (pci_bus_to_OF_node(bus) == dn)
		return bus;

	list_for_each_entry(tmp, &bus->children, node) {
		child = find_bus_among_children(tmp, dn);
		if (child)
			break;
	}

	return child;
}

struct pci_bus *pci_find_bus_by_node(struct device_node *dn)
{
	struct pci_dn *pdn = PCI_DN(dn);

	if (!pdn || !pdn->phb || !pdn->phb->bus)
		return NULL;

	return find_bus_among_children(pdn->phb->bus, dn);
}
EXPORT_SYMBOL_GPL(pci_find_bus_by_node);

/**
 * pcibios_release_device - release PCI device
 * @dev: PCI device
@@ -38,20 +67,20 @@ void pcibios_release_device(struct pci_dev *dev)
}

/**
 * pcibios_remove_pci_devices - remove all devices under this bus
 * pci_hp_remove_devices - remove all devices under this bus
 * @bus: the indicated PCI bus
 *
 * Remove all of the PCI devices under this bus both from the
 * linux pci device tree, and from the powerpc EEH address cache.
 */
void pcibios_remove_pci_devices(struct pci_bus *bus)
void pci_hp_remove_devices(struct pci_bus *bus)
{
	struct pci_dev *dev, *tmp;
	struct pci_bus *child_bus;

	/* First go down child busses */
	list_for_each_entry(child_bus, &bus->children, node)
		pcibios_remove_pci_devices(child_bus);
		pci_hp_remove_devices(child_bus);

	pr_debug("PCI: Removing devices on bus %04x:%02x\n",
		 pci_domain_nr(bus), bus->number);
@@ -60,11 +89,10 @@ void pcibios_remove_pci_devices(struct pci_bus *bus)
		pci_stop_and_remove_bus_device(dev);
	}
}

EXPORT_SYMBOL_GPL(pcibios_remove_pci_devices);
EXPORT_SYMBOL_GPL(pci_hp_remove_devices);

/**
 * pcibios_add_pci_devices - adds new pci devices to bus
 * pci_hp_add_devices - adds new pci devices to bus
 * @bus: the indicated PCI bus
 *
 * This routine will find and fixup new pci devices under
@@ -74,7 +102,7 @@ EXPORT_SYMBOL_GPL(pcibios_remove_pci_devices);
 * is how this routine differs from other, similar pcibios
 * routines.)
 */
void pcibios_add_pci_devices(struct pci_bus * bus)
void pci_hp_add_devices(struct pci_bus *bus)
{
	int slotno, mode, pass, max;
	struct pci_dev *dev;
@@ -92,7 +120,8 @@ void pcibios_add_pci_devices(struct pci_bus * bus)
	if (mode == PCI_PROBE_DEVTREE) {
		/* use ofdt-based probe */
		of_rescan_bus(dn, bus);
	} else if (mode == PCI_PROBE_NORMAL) {
	} else if (mode == PCI_PROBE_NORMAL &&
		   dn->child && PCI_DN(dn->child)) {
		/*
		 * Use legacy probe. In the partial hotplug case, we
		 * probably have grandchildren devices unplugged. So
@@ -114,4 +143,4 @@ void pcibios_add_pci_devices(struct pci_bus * bus)
	}
	pcibios_finish_adding_to_bus(bus);
}
EXPORT_SYMBOL_GPL(pcibios_add_pci_devices);
EXPORT_SYMBOL_GPL(pci_hp_add_devices);
@@ -38,7 +38,7 @@
 * ISA drivers use hard coded offsets.  If no ISA bus exists nothing
 * is mapped on the first 64K of IO space
 */
unsigned long pci_io_base = ISA_IO_BASE;
unsigned long pci_io_base;
EXPORT_SYMBOL(pci_io_base);

static int __init pcibios_init(void)
@@ -47,6 +47,7 @@ static int __init pcibios_init(void)

	printk(KERN_INFO "PCI: Probing PCI hardware\n");

	pci_io_base = ISA_IO_BASE;
	/* For now, override phys_mem_access_prot. If we need it,g
	 * later, we may move that initialization to each ppc_md
	 */
@@ -159,7 +160,7 @@ static int pcibios_map_phb_io_space(struct pci_controller *hose)

	/* Establish the mapping */
	if (__ioremap_at(phys_page, area->addr, size_page,
			 _PAGE_NO_CACHE | _PAGE_GUARDED) == NULL)
			 pgprot_val(pgprot_noncached(__pgprot(0)))) == NULL)
		return -ENOMEM;

	/* Fixup hose IO resource */
@@ -282,13 +282,9 @@ void remove_dev_pci_data(struct pci_dev *pdev)
#endif /* CONFIG_PCI_IOV */
}

/*
 * Traverse_func that inits the PCI fields of the device node.
 * NOTE: this *must* be done before read/write config to the device.
 */
void *update_dn_pci_info(struct device_node *dn, void *data)
struct pci_dn *pci_add_device_node_info(struct pci_controller *hose,
					struct device_node *dn)
{
	struct pci_controller *phb = data;
	const __be32 *type = of_get_property(dn, "ibm,pci-config-space-type", NULL);
	const __be32 *regs;
	struct device_node *parent;
@@ -299,7 +295,7 @@ void *update_dn_pci_info(struct device_node *dn, void *data)
		return NULL;
	dn->data = pdn;
	pdn->node = dn;
	pdn->phb = phb;
	pdn->phb = hose;
#ifdef CONFIG_PPC_POWERNV
	pdn->pe_number = IODA_INVALID_PE;
#endif
@@ -331,8 +327,32 @@ void *update_dn_pci_info(struct device_node *dn, void *data)
	if (pdn->parent)
		list_add_tail(&pdn->list, &pdn->parent->child_list);

	return NULL;
	return pdn;
}
EXPORT_SYMBOL_GPL(pci_add_device_node_info);

void pci_remove_device_node_info(struct device_node *dn)
{
	struct pci_dn *pdn = dn ? PCI_DN(dn) : NULL;
#ifdef CONFIG_EEH
	struct eeh_dev *edev = pdn_to_eeh_dev(pdn);

	if (edev)
		edev->pdn = NULL;
#endif

	if (!pdn)
		return;

	WARN_ON(!list_empty(&pdn->child_list));
	list_del(&pdn->list);
	if (pdn->parent)
		of_node_put(pdn->parent->node);

	dn->data = NULL;
	kfree(pdn);
}
EXPORT_SYMBOL_GPL(pci_remove_device_node_info);

/*
 * Traverse a device tree stopping each PCI device in the tree.
@@ -352,8 +372,9 @@ void *update_dn_pci_info(struct device_node *dn, void *data)
 * one of these nodes we also assume its siblings are non-pci for
 * performance.
 */
void *traverse_pci_devices(struct device_node *start, traverse_func pre,
			   void *data)
void *pci_traverse_device_nodes(struct device_node *start,
				void *(*fn)(struct device_node *, void *),
				void *data)
{
	struct device_node *dn, *nextdn;
	void *ret;
@@ -368,8 +389,11 @@ void *traverse_pci_devices(struct device_node *start, traverse_func pre,
		if (classp)
			class = of_read_number(classp, 1);

		if (pre && ((ret = pre(dn, data)) != NULL))
			return ret;
		if (fn) {
			ret = fn(dn, data);
			if (ret)
				return ret;
		}

		/* If we are a PCI bridge, go down */
		if (dn->child && ((class >> 8) == PCI_CLASS_BRIDGE_PCI ||
@@ -391,6 +415,7 @@ void *traverse_pci_devices(struct device_node *start, traverse_func pre,
	}
	return NULL;
}
EXPORT_SYMBOL_GPL(pci_traverse_device_nodes);

static struct pci_dn *pci_dn_next_one(struct pci_dn *root,
				      struct pci_dn *pdn)
@@ -432,6 +457,18 @@ void *traverse_pci_dn(struct pci_dn *root,
	return NULL;
}

static void *add_pdn(struct device_node *dn, void *data)
{
	struct pci_controller *hose = data;
	struct pci_dn *pdn;

	pdn = pci_add_device_node_info(hose, dn);
	if (!pdn)
		return ERR_PTR(-ENOMEM);

	return NULL;
}

/**
 * pci_devs_phb_init_dynamic - setup pci devices under this PHB
 * phb: pci-to-host bridge (top-level bridge connecting to cpu)
@@ -446,8 +483,7 @@ void pci_devs_phb_init_dynamic(struct pci_controller *phb)
	struct pci_dn *pdn;

	/* PHB nodes themselves must not match */
	update_dn_pci_info(dn, phb);
	pdn = dn->data;
	pdn = pci_add_device_node_info(phb, dn);
	if (pdn) {
		pdn->devfn = pdn->busno = -1;
		pdn->vendor_id = pdn->device_id = pdn->class_code = 0;
@@ -456,7 +492,7 @@ void pci_devs_phb_init_dynamic(struct pci_controller *phb)
	}

	/* Update dn->phb ptrs for new phb and children devices */
	traverse_pci_devices(dn, update_dn_pci_info, phb);
	pci_traverse_device_nodes(dn, add_pdn, phb);
}

/**
@@ -38,6 +38,7 @@
#include <linux/random.h>
#include <linux/hw_breakpoint.h>
#include <linux/uaccess.h>
#include <linux/elf-randomize.h>

#include <asm/pgtable.h>
#include <asm/io.h>
@@ -55,6 +56,7 @@
#include <asm/firmware.h>
#endif
#include <asm/code-patching.h>
#include <asm/exec.h>
#include <asm/livepatch.h>

#include <linux/kprobes.h>
@@ -1077,7 +1079,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
	}
#endif /* CONFIG_PPC64 */

#ifdef CONFIG_PPC_BOOK3S_64
#ifdef CONFIG_PPC_STD_MMU_64
	batch = this_cpu_ptr(&ppc64_tlb_batch);
	if (batch->active) {
		current_thread_info()->local_flags |= _TLF_LAZY_MMU;
@@ -1085,7 +1087,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
		__flush_tlb_pending(batch);
		batch->active = 0;
	}
#endif /* CONFIG_PPC_BOOK3S_64 */
#endif /* CONFIG_PPC_STD_MMU_64 */

#ifdef CONFIG_PPC_ADV_DEBUG_REGS
	switch_booke_debug_regs(&new->thread.debug);
@@ -1131,7 +1133,7 @@ struct task_struct *__switch_to(struct task_struct *prev,

	last = _switch(old_thread, new_thread);

#ifdef CONFIG_PPC_BOOK3S_64
#ifdef CONFIG_PPC_STD_MMU_64
	if (current_thread_info()->local_flags & _TLF_LAZY_MMU) {
		current_thread_info()->local_flags &= ~_TLF_LAZY_MMU;
		batch = this_cpu_ptr(&ppc64_tlb_batch);
@@ -1140,8 +1142,7 @@ struct task_struct *__switch_to(struct task_struct *prev,

	if (current_thread_info()->task->thread.regs)
		restore_math(current_thread_info()->task->thread.regs);

#endif /* CONFIG_PPC_BOOK3S_64 */
#endif /* CONFIG_PPC_STD_MMU_64 */

	return last;
}
@@ -1376,6 +1377,9 @@ static void setup_ksp_vsid(struct task_struct *p, unsigned long sp)
	unsigned long sp_vsid;
	unsigned long llp = mmu_psize_defs[mmu_linear_psize].sllp;

	if (radix_enabled())
		return;

	if (mmu_has_feature(MMU_FTR_1T_SEGMENT))
		sp_vsid = get_kernel_vsid(sp, MMU_SEGSIZE_1T)
			<< SLB_VSID_SHIFT_1T;
@@ -1924,7 +1928,8 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
	 * the heap, we can put it above 1TB so it is backed by a 1TB
	 * segment. Otherwise the heap will be in the bottom 1TB
	 * which always uses 256MB segments and this may result in a
	 * performance penalty.
	 * performance penalty. We don't need to worry about radix. For
	 * radix, mmu_highuser_ssize remains unchanged from 256MB.
	 */
	if (!is_32bit_task() && (mmu_highuser_ssize == MMU_SEGSIZE_1T))
		base = max_t(unsigned long, mm->brk, 1UL << SID_SHIFT_1T);
@@ -34,6 +34,7 @@
#include <linux/of.h>
#include <linux/of_fdt.h>
#include <linux/libfdt.h>
#include <linux/cpu.h>

#include <asm/prom.h>
#include <asm/rtas.h>
@@ -167,6 +168,7 @@ static struct ibm_pa_feature {
	 */
	{CPU_FTR_TM_COMP, 0, 0,
	 PPC_FEATURE2_HTM_COMP|PPC_FEATURE2_HTM_NOSC_COMP, 22, 0, 0},
	{0, MMU_FTR_RADIX, 0, 0, 40, 0, 0},
};

static void __init scan_features(unsigned long node, const unsigned char *ftrs,

@@ -442,7 +442,7 @@ static void do_event_scan(void)
}

static void rtas_event_scan(struct work_struct *w);
DECLARE_DELAYED_WORK(event_scan_work, rtas_event_scan);
static DECLARE_DELAYED_WORK(event_scan_work, rtas_event_scan);

/*
 * Delay should be at least one second since some machines have problems if
@@ -128,9 +128,7 @@ void machine_restart(char *cmd)
	machine_shutdown();
	if (ppc_md.restart)
		ppc_md.restart(cmd);
#ifdef CONFIG_SMP
	smp_send_stop();
#endif
	printk(KERN_EMERG "System Halted, OK to turn off power\n");
	local_irq_disable();
	while (1) ;
@@ -141,9 +139,7 @@ void machine_power_off(void)
	machine_shutdown();
	if (pm_power_off)
		pm_power_off();
#ifdef CONFIG_SMP
	smp_send_stop();
#endif
	printk(KERN_EMERG "System Halted, OK to turn off power\n");
	local_irq_disable();
	while (1) ;
@@ -159,9 +155,7 @@ void machine_halt(void)
	machine_shutdown();
	if (ppc_md.halt)
		ppc_md.halt();
#ifdef CONFIG_SMP
	smp_send_stop();
#endif
	printk(KERN_EMERG "System Halted, OK to turn off power\n");
	local_irq_disable();
	while (1) ;
@@ -31,6 +31,6 @@ void save_processor_state(void)
void restore_processor_state(void)
{
#ifdef CONFIG_PPC32
	switch_mmu_context(current->active_mm, current->active_mm);
	switch_mmu_context(current->active_mm, current->active_mm, NULL);
#endif
}

@@ -55,6 +55,7 @@
#include <linux/delay.h>
#include <linux/irq_work.h>
#include <linux/clk-provider.h>
#include <linux/suspend.h>
#include <asm/trace.h>

#include <asm/io.h>
@@ -87,7 +87,7 @@ struct vio_cmo_dev_entry {
 * @curr: bytes currently allocated
 * @high: high water mark for IO data usage
 */
struct vio_cmo {
static struct vio_cmo {
	spinlock_t lock;
	struct delayed_work balance_q;
	struct list_head device_list;
@@ -615,7 +615,7 @@ static u64 vio_dma_get_required_mask(struct device *dev)
	return dma_iommu_ops.get_required_mask(dev);
}

struct dma_map_ops vio_dma_mapping_ops = {
static struct dma_map_ops vio_dma_mapping_ops = {
	.alloc = vio_dma_iommu_alloc_coherent,
	.free = vio_dma_iommu_free_coherent,
	.mmap = dma_direct_mmap_coherent,
@@ -447,7 +447,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
	struct revmap_entry *rev;
	struct page *page, *pages[1];
	long index, ret, npages;
	unsigned long is_io;
	bool is_ci;
	unsigned int writing, write_ok;
	struct vm_area_struct *vma;
	unsigned long rcbits;
@@ -503,7 +503,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
	smp_rmb();

	ret = -EFAULT;
	is_io = 0;
	is_ci = false;
	pfn = 0;
	page = NULL;
	pte_size = PAGE_SIZE;
@@ -521,7 +521,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
			pfn = vma->vm_pgoff +
				((hva - vma->vm_start) >> PAGE_SHIFT);
			pte_size = psize;
			is_io = hpte_cache_bits(pgprot_val(vma->vm_page_prot));
			is_ci = pte_ci(__pte((pgprot_val(vma->vm_page_prot))));
			write_ok = vma->vm_flags & VM_WRITE;
		}
		up_read(&current->mm->mmap_sem);
@@ -558,10 +558,9 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
		goto out_put;

	/* Check WIMG vs. the actual page we're accessing */
	if (!hpte_cache_flags_ok(r, is_io)) {
		if (is_io)
	if (!hpte_cache_flags_ok(r, is_ci)) {
		if (is_ci)
			goto out_put;

		/*
		 * Allow guest to map emulated device memory as
		 * uncacheable, but actually make it cacheable.
@@ -3272,6 +3272,12 @@ static int kvmppc_core_check_processor_compat_hv(void)
	if (!cpu_has_feature(CPU_FTR_HVMODE) ||
	    !cpu_has_feature(CPU_FTR_ARCH_206))
		return -EIO;
	/*
	 * Disable KVM for Power9, untill the required bits merged.
	 */
	if (cpu_has_feature(CPU_FTR_ARCH_300))
		return -EIO;

	return 0;
}

@@ -175,7 +175,7 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
	unsigned long g_ptel;
	struct kvm_memory_slot *memslot;
	unsigned hpage_shift;
	unsigned long is_io;
	bool is_ci;
	unsigned long *rmap;
	pte_t *ptep;
	unsigned int writing;
@@ -199,7 +199,7 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
	gfn = gpa >> PAGE_SHIFT;
	memslot = __gfn_to_memslot(kvm_memslots_raw(kvm), gfn);
	pa = 0;
	is_io = ~0ul;
	is_ci = false;
	rmap = NULL;
	if (!(memslot && !(memslot->flags & KVM_MEMSLOT_INVALID))) {
		/* Emulated MMIO - mark this with key=31 */
@@ -250,7 +250,7 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
		if (writing && !pte_write(pte))
			/* make the actual HPTE be read-only */
			ptel = hpte_make_readonly(ptel);
		is_io = hpte_cache_bits(pte_val(pte));
		is_ci = pte_ci(pte);
		pa = pte_pfn(pte) << PAGE_SHIFT;
		pa |= hva & (host_pte_size - 1);
		pa |= gpa & ~PAGE_MASK;
@@ -267,9 +267,9 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
	else
		pteh |= HPTE_V_ABSENT;

	/* Check WIMG */
	if (is_io != ~0ul && !hpte_cache_flags_ok(ptel, is_io)) {
		if (is_io)
	/* If we had host pte mapping then check WIMG */
	if (ptep && !hpte_cache_flags_ok(ptel, is_ci)) {
		if (is_ci)
			return H_PARAMETER;
		/*
		 * Allow guest to map emulated device memory as
@@ -1713,7 +1713,11 @@ static void kvmppc_core_destroy_vm_pr(struct kvm *kvm)

static int kvmppc_core_check_processor_compat_pr(void)
{
	/* we are always compatible */
	/*
	 * Disable KVM for Power9 untill the required bits merged.
	 */
	if (cpu_has_feature(CPU_FTR_ARCH_300))
		return -EIO;
	return 0;
}

@@ -217,7 +217,7 @@ _GLOBAL(memcpy)
	bdnz	40b
65:	blr

_GLOBAL(generic_memcpy)
generic_memcpy:
	srwi.	r7,r5,3
	addi	r6,r3,-4
	addi	r4,r4,-4
@@ -925,6 +925,7 @@ int __kprobes analyse_instr(struct instruction_op *op, struct pt_regs *regs,
			}
		}
#endif
		break;	/* illegal instruction */

	case 31:
		switch ((instr >> 1) & 0x3ff) {
@@ -1818,9 +1819,11 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
		case 4:
			__get_user_asmx(val, op.ea, err, "lwarx");
			break;
#ifdef __powerpc64__
		case 8:
			__get_user_asmx(val, op.ea, err, "ldarx");
			break;
#endif
		default:
			return 0;
		}
@@ -1841,9 +1844,11 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
		case 4:
			__put_user_asmx(op.val, op.ea, err, "stwcx.", cr);
			break;
#ifdef __powerpc64__
		case 8:
			__put_user_asmx(op.val, op.ea, err, "stdcx.", cr);
			break;
#endif
		default:
			return 0;
		}
@@ -17,7 +17,17 @@
 *
 * Author: Anton Blanchard <anton@au.ibm.com>
 */

/*
 * Sparse (as at v0.5.0) gets very, very confused by this file.
 * Make it a bit simpler for it.
 */
#if !defined(__CHECKER__)
#include <altivec.h>
#else
#define vec_xor(a, b) a ^ b
#define vector __attribute__((vector_size(16)))
#endif

#include <linux/preempt.h>
#include <linux/export.h>
@@ -13,10 +13,11 @@ obj-$(CONFIG_PPC_MMU_NOHASH)	+= mmu_context_nohash.o tlb_nohash.o \
				   tlb_nohash_low.o
obj-$(CONFIG_PPC_BOOK3E)	+= tlb_low_$(CONFIG_WORD_SIZE)e.o
hash64-$(CONFIG_PPC_NATIVE)	:= hash_native_64.o
obj-$(CONFIG_PPC_STD_MMU_64)	+= hash_utils_64.o slb_low.o slb.o $(hash64-y)
obj-$(CONFIG_PPC_STD_MMU_32)	+= ppc_mmu_32.o hash_low_32.o
obj-$(CONFIG_PPC_STD_MMU)	+= tlb_hash$(CONFIG_WORD_SIZE).o \
				   mmu_context_hash$(CONFIG_WORD_SIZE).o
obj-$(CONFIG_PPC_BOOK3E_64)	+= pgtable-book3e.o
obj-$(CONFIG_PPC_STD_MMU_64)	+= pgtable-hash64.o hash_utils_64.o slb_low.o slb.o $(hash64-y) mmu_context_book3s64.o pgtable-book3s64.o
obj-$(CONFIG_PPC_RADIX_MMU)	+= pgtable-radix.o tlb-radix.o
obj-$(CONFIG_PPC_STD_MMU_32)	+= ppc_mmu_32.o hash_low_32.o mmu_context_hash32.o
obj-$(CONFIG_PPC_STD_MMU)	+= tlb_hash$(CONFIG_WORD_SIZE).o
ifeq ($(CONFIG_PPC_STD_MMU_64),y)
obj-$(CONFIG_PPC_4K_PAGES)	+= hash64_4k.o
obj-$(CONFIG_PPC_64K_PAGES)	+= hash64_64k.o
@@ -33,6 +34,7 @@ obj-$(CONFIG_PPC_MM_SLICES)	+= slice.o
obj-y				+= hugetlbpage.o
ifeq ($(CONFIG_HUGETLB_PAGE),y)
obj-$(CONFIG_PPC_STD_MMU_64)	+= hugetlbpage-hash64.o
obj-$(CONFIG_PPC_RADIX_MMU)	+= hugetlbpage-radix.o
obj-$(CONFIG_PPC_BOOK3E_MMU)	+= hugetlbpage-book3e.o
endif
obj-$(CONFIG_TRANSPARENT_HUGEPAGE)	+= hugepage-hash64.o