Merge branch 'x86-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 core updates from Ingo Molnar:
 "There were so many changes in the x86/asm, x86/apic and x86/mm topics
  in this cycle that the topical separation of -tip broke down somewhat -
  so the result is a more traditional architecture pull request,
  collected into the 'x86/core' topic. The topics were still maintained
  separately as far as possible, so bisectability and conceptual
  separation should still be pretty good - but there were a handful of
  merge points to avoid excessive dependencies (and conflicts) that
  would have been poorly tested in the end. The next cycle will
  hopefully be much more quiet (or at least will have fewer
  dependencies).

  The main changes in this cycle were:

   * x86/apic changes, with related IRQ core changes (Jiang Liu, Thomas
     Gleixner):

      - This is the second and most intrusive part of changes to the
        x86 interrupt handling - full conversion to hierarchical
        interrupt domains:

                   [IOAPIC domain]  -----
                                         |
                   [MSI domain]     --------[Remapping domain] ----- [ Vector domain ]
                                         |      (optional)                 |
                   [HPET MSI domain] ----                                  |
                                         |                                 |
                   [DMAR domain]     --------------------------------------
                                         |
                   [Legacy domain]   --------------------------------------

        This now reflects the actual hardware and allowed us to
        disentangle the domain specific code from the underlying parent
        domain, which can be optional in the case of interrupt
        remapping. It's a clear separation of functionality and removes
        quite some duct tape constructs which plugged the remap code
        between ioapic/msi/hpet and the vector management.

      - Intel IOMMU IRQ remapping enhancements, to allow direct
        interrupt injection into guests (Feng Wu)

   * x86/asm changes:

      - Tons of cleanups and small speedups, micro-optimizations. This
        is in preparation to move a good chunk of the low level entry
        code from assembly to C code (Denys Vlasenko, Andy Lutomirski,
        Brian Gerst)

      - Moved all system entry related code to a new home under
        arch/x86/entry/ (Ingo Molnar)

      - Removal of the fragile and ugly CFI dwarf debuginfo
        annotations. Conversion to C will reintroduce many of them -
        but meanwhile they are only getting in the way, and the
        upstream kernel does not rely on them (Ingo Molnar)

      - NOP handling refinements (Borislav Petkov)

   * x86/mm changes:

      - Big PAT and MTRR rework: making the code more robust and
        preparing to phase out exposing direct MTRR interfaces to
        drivers - in favor of using PAT driven interfaces (Toshi Kani,
        Luis R Rodriguez, Borislav Petkov)

      - New ioremap_wt()/set_memory_wt() interfaces to support
        Write-Through cached memory mappings. This is especially
        important for good performance on NVDIMM hardware (Toshi Kani)

   * x86/ras changes:

      - Add support for deferred errors on AMD (Aravind Gopalakrishnan)

        This is an important RAS feature which adds hardware support
        for poisoned data. That means roughly that the hardware marks
        data which it has detected as corrupted but wasn't able to
        correct, as poisoned data and raises an APIC interrupt to
        signal that in the form of a deferred error. It is the OS's
        responsibility then to take proper recovery action and thus
        prolong system lifetime as far as possible.

      - Add support for Intel "Local MCE"s: upcoming CPUs will support
        CPU-local MCE interrupts, as opposed to the traditional
        system-wide broadcasted MCE interrupts (Ashok Raj)

      - Misc cleanups (Borislav Petkov)

   * x86/platform changes:

      - Intel Atom SoC updates
  ... and lots of other cleanups, fixlets and other changes - see the
  shortlog and the Git log for details"

* 'x86-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (222 commits)
  x86/hpet: Use proper hpet device number for MSI allocation
  x86/hpet: Check for irq==0 when allocating hpet MSI interrupts
  x86/mm/pat, drivers/infiniband/ipath: Use arch_phys_wc_add() and require PAT disabled
  x86/mm/pat, drivers/media/ivtv: Use arch_phys_wc_add() and require PAT disabled
  x86/platform/intel/baytrail: Add comments about why we disabled HPET on Baytrail
  genirq: Prevent crash in irq_move_irq()
  genirq: Enhance irq_data_to_desc() to support hierarchy irqdomain
  iommu, x86: Properly handle posted interrupts for IOMMU hotplug
  iommu, x86: Provide irq_remapping_cap() interface
  iommu, x86: Setup Posted-Interrupts capability for Intel iommu
  iommu, x86: Add cap_pi_support() to detect VT-d PI capability
  iommu, x86: Avoid migrating VT-d posted interrupts
  iommu, x86: Save the mode (posted or remapped) of an IRTE
  iommu, x86: Implement irq_set_vcpu_affinity for intel_ir_chip
  iommu: dmar: Provide helper to copy shared irte fields
  iommu: dmar: Extend struct irte for VT-d Posted-Interrupts
  iommu: Add new member capability to struct irq_remap_ops
  x86/asm/entry/64: Disentangle error_entry/exit gsbase/ebx/usermode code
  x86/asm/entry/32: Shorten __audit_syscall_entry() args preparation
  x86/asm/entry/32: Explain reloading of registers after __audit_syscall_entry()
  ...
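The hierarchy pictured above can be read as a chain of allocators: a request made against a leaf domain (IOAPIC, MSI, HPET) is first delegated up through the optional parent (remapping) down to the vector domain, and each level then programs only its own piece of hardware state. The following is a toy sketch of that idea in plain C - the struct and function names here are invented for illustration and are not the kernel's irqdomain API:

#include <stdio.h>

/* Toy model of a hierarchical interrupt domain: each level owns one piece
 * of hardware state and delegates everything else to its parent. */
struct domain_model {
        const char *name;
        struct domain_model *parent;    /* optional, e.g. remapping */
        int (*alloc)(struct domain_model *d, int virq);
};

/* Satisfy the parent chain first, then do this level's own setup. */
static int domain_alloc(struct domain_model *d, int virq)
{
        if (d->parent) {
                int ret = domain_alloc(d->parent, virq);
                if (ret)
                        return ret;
        }
        return d->alloc(d, virq);
}

static int level_alloc(struct domain_model *d, int virq)
{
        printf("%s: program state for virq %d\n", d->name, virq);
        return 0;
}

int main(void)
{
        struct domain_model vector = { "vector",    NULL,    level_alloc };
        struct domain_model remap  = { "remapping", &vector, level_alloc };
        struct domain_model msi    = { "msi",       &remap,  level_alloc };

        return domain_alloc(&msi, 42);  /* vector -> remapping -> msi */
}

The payoff named in the log follows directly from this shape: the leaf code no longer needs special-case glue ("duct tape") to reach around an optional remapping layer, because the parent pointer either is or isn't there.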
commit d70b3ef54c
@ -746,6 +746,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 	cpuidle.off=1	[CPU_IDLE]
 			disable the cpuidle sub-system
 
+	cpu_init_udelay=N
+			[X86] Delay for N microsec between assert and de-assert
+			of APIC INIT to start processors. This delay occurs
+			on every CPU online, such as boot, and resume from suspend.
+			Default: 10000
+
 	cpcihp_generic=	[HW,PCI] Generic port I/O CompactPCI driver
 			Format:
 			<first_slot>,<last_slot>,<port>,<enum_bit>[,<debug>]
@ -18,10 +18,10 @@ Some of these entries are:
 
 - system_call: syscall instruction from 64-bit code.
 
-- ia32_syscall: int 0x80 from 32-bit or 64-bit code; compat syscall
+- entry_INT80_compat: int 0x80 from 32-bit or 64-bit code; compat syscall
   either way.
 
-- ia32_syscall, ia32_sysenter: syscall and sysenter from 32-bit
+- entry_INT80_compat, ia32_sysenter: syscall and sysenter from 32-bit
   code
 
 - interrupt: An array of entries. Every IDT vector that doesn't
@ -1,7 +1,19 @@
 MTRR (Memory Type Range Register) control
-3 Jun 1999
-Richard Gooch
-<rgooch@atnf.csiro.au>
+
+Richard Gooch <rgooch@atnf.csiro.au> - 3 Jun 1999
+Luis R. Rodriguez <mcgrof@do-not-panic.com> - April 9, 2015
+
+===============================================================================
+Phasing out MTRR use
+
+MTRR use is replaced on modern x86 hardware with PAT. Over time the only type
+of effective MTRR that is expected to be supported will be for write-combining.
+As MTRR use is phased out device drivers should use arch_phys_wc_add() to make
+MTRR effective on non-PAT systems while a no-op on PAT enabled systems.
+
+For details refer to Documentation/x86/pat.txt.
+
+===============================================================================
 
 On Intel P6 family processors (Pentium Pro, Pentium II and later)
 the Memory Type Range Registers (MTRRs) may be used to control
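The pattern this documentation hunk recommends - pair an ioremap_wc() mapping with arch_phys_wc_add() so the WC effect also materializes on non-PAT systems - looks roughly like the following. This is a minimal sketch only; the "my_dev" structure and its fields are hypothetical, and error handling is trimmed:

#include <linux/io.h>

struct my_dev {                         /* hypothetical example device */
        phys_addr_t fb_phys;
        size_t fb_len;
        void __iomem *fb;
        int wc_cookie;
};

static int my_dev_map_framebuffer(struct my_dev *dev)
{
        dev->fb = ioremap_wc(dev->fb_phys, dev->fb_len);  /* PAT path */
        if (!dev->fb)
                return -ENOMEM;

        /* On non-PAT systems this adds a write-combining MTRR; on PAT
         * systems it is a no-op, and arch_phys_wc_del() safely ignores
         * the returned cookie either way. */
        dev->wc_cookie = arch_phys_wc_add(dev->fb_phys, dev->fb_len);
        return 0;
}

static void my_dev_unmap_framebuffer(struct my_dev *dev)
{
        arch_phys_wc_del(dev->wc_cookie);
        iounmap(dev->fb);
}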
@ -12,7 +12,7 @@ virtual addresses.
 
 PAT allows for different types of memory attributes. The most commonly used
 ones that will be supported at this time are Write-back, Uncached,
-Write-combined and Uncached Minus.
+Write-combined, Write-through and Uncached Minus.
 
 
 PAT APIs
@ -34,16 +34,23 @@ ioremap                |    --    |    UC-   |       UC-        |
                        |          |          |                  |
 ioremap_cache          |    --    |    WB    |       WB         |
                        |          |          |                  |
+ioremap_uc             |    --    |    UC    |       UC         |
+                       |          |          |                  |
 ioremap_nocache        |    --    |    UC-   |       UC-        |
                        |          |          |                  |
 ioremap_wc             |    --    |    --    |       WC         |
                        |          |          |                  |
+ioremap_wt             |    --    |    --    |       WT         |
+                       |          |          |                  |
 set_memory_uc          |    UC-   |    --    |       --         |
  set_memory_wb         |          |          |                  |
                        |          |          |                  |
 set_memory_wc          |    WC    |    --    |       --         |
  set_memory_wb         |          |          |                  |
                        |          |          |                  |
+set_memory_wt          |    WT    |    --    |       --         |
+ set_memory_wb         |          |          |                  |
+                       |          |          |                  |
 pci sysfs resource     |    --    |    --    |       UC-        |
                        |          |          |                  |
 pci sysfs resource_wc  |    --    |    --    |       WC         |
@ -102,7 +109,38 @@ wants to export a RAM region, it has to do set_memory_uc() or set_memory_wc()
 as step 0 above and also track the usage of those pages and use set_memory_wb()
 before the page is freed to free pool.
 
+MTRR effects on PAT / non-PAT systems
+-------------------------------------
+
+The following table provides the effects of using write-combining MTRRs when
+using ioremap*() calls on x86 for both non-PAT and PAT systems. Ideally
+mtrr_add() usage will be phased out in favor of arch_phys_wc_add() which will
+be a no-op on PAT enabled systems. The region over which an arch_phys_wc_add()
+call is made should already have been ioremapped with WC attributes or PAT
+entries; this can be done by using ioremap_wc() / set_memory_wc(). Devices
+which combine areas of IO memory desired to remain uncacheable with areas where
+write-combining is desirable should consider use of ioremap_uc() followed by
+set_memory_wc() to white-list effective write-combined areas. Such use is
+nevertheless discouraged as the effective memory type is considered
+implementation defined, yet this strategy can be used as last resort on devices
+with size-constrained regions where MTRR write-combining would otherwise not be
+effective.
+
+----------------------------------------------------------------------
+MTRR Non-PAT   PAT    Linux ioremap value        Effective memory type
+----------------------------------------------------------------------
+                                                  Non-PAT |  PAT
+     PAT
+     |PCD
+     ||PWT
+     |||
+WC   000      WB      _PAGE_CACHE_MODE_WB            WC   |   WC
+WC   001      WC      _PAGE_CACHE_MODE_WC            WC*  |   WC
+WC   010      UC-     _PAGE_CACHE_MODE_UC_MINUS      WC*  |   UC
+WC   011      UC      _PAGE_CACHE_MODE_UC            UC   |   UC
+----------------------------------------------------------------------
+
+(*) denotes implementation defined and is discouraged
+
 Notes:
 
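The "last resort" technique the new text describes - map the whole region uncached, then white-list a write-combining window inside it - could look like the sketch below. This is illustrative only: the names and offsets are hypothetical, it assumes the WC window is page-aligned, and at the time of this merge the set_memory_*() prototypes on x86 lived in <asm/cacheflush.h>:

#include <linux/io.h>
#include <asm/cacheflush.h>     /* set_memory_wc()/set_memory_wb() */

static void __iomem *regs, *wc_area;

static int my_map_mixed_bar(phys_addr_t bar, size_t len,
                            size_t wc_off, size_t wc_len)
{
        regs = ioremap_uc(bar, len);    /* whole BAR stays uncacheable */
        if (!regs)
                return -ENOMEM;

        /* White-list only the window where write-combining is wanted;
         * wc_off/wc_len are assumed page-aligned. */
        wc_area = regs + wc_off;
        return set_memory_wc((unsigned long)wc_area, wc_len >> PAGE_SHIFT);
}

As the table notes, the resulting memory type is implementation defined, which is why the document discourages this outside of size-constrained regions.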
@ -115,8 +153,8 @@ can be more restrictive, in case of any existing aliasing for that address.
 For example: If there is an existing uncached mapping, a new ioremap_wc can
 return uncached mapping in place of write-combine requested.
 
-set_memory_[uc|wc] and set_memory_wb should be used in pairs, where driver will
-first make a region uc or wc and switch it back to wb after use.
+set_memory_[uc|wc|wt] and set_memory_wb should be used in pairs, where driver
+will first make a region uc, wc or wt and switch it back to wb after use.
 
 Over time writes to /proc/mtrr will be deprecated in favor of using PAT based
 interfaces. Users writing to /proc/mtrr are suggested to use above interfaces.
@ -124,7 +162,7 @@ interfaces. Users writing to /proc/mtrr are suggested to use above interfaces.
 Drivers should use ioremap_[uc|wc] to access PCI BARs with [uc|wc] access
 types.
 
-Drivers should use set_memory_[uc|wc] to set access type for RAM ranges.
+Drivers should use set_memory_[uc|wc|wt] to set access type for RAM ranges.
 
 
 PAT debugging
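The pairing rule above ("first make a region uc, wc or wt and switch it back to wb after use") is easy to get wrong when a buffer is freed early. A minimal sketch of the intended discipline for a RAM range - the vaddr/npages parameters are hypothetical, and the prototypes are taken from <asm/cacheflush.h> as they were at the time:

#include <asm/cacheflush.h>     /* set_memory_wc()/set_memory_wb() */

static int use_buffer_as_wc(unsigned long vaddr, int npages)
{
        int ret = set_memory_wc(vaddr, npages); /* make the range WC */
        if (ret)
                return ret;

        /* ... hand the pages to the device and do the I/O ... */

        /* Restore WB before the pages go back to the free pool. */
        return set_memory_wb(vaddr, npages);
}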
@ -31,6 +31,9 @@ Machine check
 		(e.g. BIOS or hardware monitoring applications), conflicting
 		with OS's error handling, and you cannot deactivate the agent,
 		then this option will be a help.
+   mce=no_lmce
+		Do not opt-in to Local MCE delivery. Use legacy method
+		to broadcast MCEs.
    mce=bootlog
 		Enable logging of machine checks left over from booting.
 		Disabled by default on AMD because some BIOS leave bogus ones.
@ -10894,7 +10894,7 @@ M:	Andy Lutomirski <luto@amacapital.net>
 L:	linux-kernel@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/vdso
 S:	Maintained
-F:	arch/x86/vdso/
+F:	arch/x86/entry/vdso/
 
 XC2028/3028 TUNER DRIVER
 M:	Mauro Carvalho Chehab <mchehab@osg.samsung.com>
@ -20,6 +20,7 @@ extern void iounmap(const void __iomem *addr);
 
 #define ioremap_nocache(phy, sz)	ioremap(phy, sz)
 #define ioremap_wc(phy, sz)		ioremap(phy, sz)
+#define ioremap_wt(phy, sz)		ioremap(phy, sz)
 
 /* Change struct page to physical address */
 #define page_to_phys(page)		(page_to_pfn(page) << PAGE_SHIFT)
@ -336,6 +336,7 @@ extern void _memset_io(volatile void __iomem *, int, size_t);
 #define ioremap_nocache(cookie,size)	__arm_ioremap((cookie), (size), MT_DEVICE)
 #define ioremap_cache(cookie,size)	__arm_ioremap((cookie), (size), MT_DEVICE_CACHED)
 #define ioremap_wc(cookie,size)		__arm_ioremap((cookie), (size), MT_DEVICE_WC)
+#define ioremap_wt(cookie,size)		__arm_ioremap((cookie), (size), MT_DEVICE)
 #define iounmap				__arm_iounmap
 
 /*
@ -170,6 +170,7 @@ extern void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size);
 #define ioremap(addr, size)		__ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE))
 #define ioremap_nocache(addr, size)	__ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE))
 #define ioremap_wc(addr, size)		__ioremap((addr), (size), __pgprot(PROT_NORMAL_NC))
+#define ioremap_wt(addr, size)		__ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE))
 #define iounmap				__iounmap
 
 /*
@ -296,6 +296,7 @@ extern void __iounmap(void __iomem *addr);
 	__iounmap(addr)
 
 #define ioremap_wc ioremap_nocache
+#define ioremap_wt ioremap_nocache
 
 #define cached(addr) P1SEGADDR(addr)
 #define uncached(addr) P2SEGADDR(addr)
@ -17,6 +17,8 @@
 
 #ifdef __KERNEL__
 
+#define ARCH_HAS_IOREMAP_WT
+
 #include <linux/types.h>
 #include <asm/virtconvert.h>
 #include <asm/string.h>
@ -265,7 +267,7 @@ static inline void __iomem *ioremap_nocache(unsigned long physaddr, unsigned lon
 	return __ioremap(physaddr, size, IOMAP_NOCACHE_SER);
 }
 
-static inline void __iomem *ioremap_writethrough(unsigned long physaddr, unsigned long size)
+static inline void __iomem *ioremap_wt(unsigned long physaddr, unsigned long size)
 {
 	return __ioremap(physaddr, size, IOMAP_WRITETHROUGH);
 }
@ -1,6 +1,4 @@
 #ifndef __IA64_INTR_REMAPPING_H
 #define __IA64_INTR_REMAPPING_H
 #define irq_remapping_enabled 0
-#define dmar_alloc_hwirq	create_irq
-#define dmar_free_hwirq		destroy_irq
 #endif
@ -165,7 +165,7 @@ static struct irq_chip dmar_msi_type = {
 	.irq_retrigger = ia64_msi_retrigger_irq,
 };
 
-static int
+static void
 msi_compose_msg(struct pci_dev *pdev, unsigned int irq, struct msi_msg *msg)
 {
 	struct irq_cfg *cfg = irq_cfg + irq;
@ -186,21 +186,29 @@ msi_compose_msg(struct pci_dev *pdev, unsigned int irq, struct msi_msg *msg)
 		MSI_DATA_LEVEL_ASSERT |
 		MSI_DATA_DELIVERY_FIXED |
 		MSI_DATA_VECTOR(cfg->vector);
-	return 0;
 }
 
-int arch_setup_dmar_msi(unsigned int irq)
+int dmar_alloc_hwirq(int id, int node, void *arg)
 {
-	int ret;
+	int irq;
 	struct msi_msg msg;
 
-	ret = msi_compose_msg(NULL, irq, &msg);
-	if (ret < 0)
-		return ret;
-	dmar_msi_write(irq, &msg);
-	irq_set_chip_and_handler_name(irq, &dmar_msi_type, handle_edge_irq,
-				      "edge");
-	return 0;
+	irq = create_irq();
+	if (irq > 0) {
+		irq_set_handler_data(irq, arg);
+		irq_set_chip_and_handler_name(irq, &dmar_msi_type,
+					      handle_edge_irq, "edge");
+		msi_compose_msg(NULL, irq, &msg);
+		dmar_msi_write(irq, &msg);
+	}
+
+	return irq;
+}
+
+void dmar_free_hwirq(int irq)
+{
+	irq_set_handler_data(irq, NULL);
+	destroy_irq(irq);
 }
 #endif /* CONFIG_INTEL_IOMMU */
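The new dmar_alloc_hwirq()/dmar_free_hwirq() pair turns what used to be per-arch macro glue into a real allocation interface. A condensed sketch of how a caller might consume it - the handler, the "unit" argument and the header carrying the declaration are assumptions for illustration, not code from this commit:

#include <linux/interrupt.h>
#include <linux/dmar.h>         /* assumed home of dmar_alloc_hwirq() */

static int setup_dmar_fault_irq(int unit_id, int node, void *unit,
                                irq_handler_t handler)
{
        int irq = dmar_alloc_hwirq(unit_id, node, unit);

        if (irq <= 0)
                return -EINVAL;         /* allocation failed */

        if (request_irq(irq, handler, 0, "dmar_fault", unit)) {
                dmar_free_hwirq(irq);   /* release on failure */
                return -EBUSY;
        }
        return irq;
}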
@ -68,6 +68,7 @@ static inline void __iomem *ioremap(unsigned long offset, unsigned long size)
 extern void iounmap(volatile void __iomem *addr);
 #define ioremap_nocache(off,size) ioremap(off,size)
 #define ioremap_wc ioremap_nocache
+#define ioremap_wt ioremap_nocache
 
 /*
  * IO bus memory addresses are also 1:1 with the physical address
@ -20,6 +20,8 @@
 
 #ifdef __KERNEL__
 
+#define ARCH_HAS_IOREMAP_WT
+
 #include <linux/compiler.h>
 #include <asm/raw_io.h>
 #include <asm/virtconvert.h>
@ -465,7 +467,7 @@ static inline void __iomem *ioremap_nocache(unsigned long physaddr, unsigned lon
 {
 	return __ioremap(physaddr, size, IOMAP_NOCACHE_SER);
 }
-static inline void __iomem *ioremap_writethrough(unsigned long physaddr,
+static inline void __iomem *ioremap_wt(unsigned long physaddr,
 					 unsigned long size)
 {
 	return __ioremap(physaddr, size, IOMAP_WRITETHROUGH);
@ -3,6 +3,8 @@
 
 #ifdef __KERNEL__
 
+#define ARCH_HAS_IOREMAP_WT
+
 #include <asm/virtconvert.h>
 #include <asm-generic/iomap.h>
 
@ -153,7 +155,7 @@ static inline void *ioremap_nocache(unsigned long physaddr, unsigned long size)
 {
 	return __ioremap(physaddr, size, IOMAP_NOCACHE_SER);
 }
-static inline void *ioremap_writethrough(unsigned long physaddr, unsigned long size)
+static inline void *ioremap_wt(unsigned long physaddr, unsigned long size)
 {
 	return __ioremap(physaddr, size, IOMAP_WRITETHROUGH);
 }
@ -160,6 +160,9 @@ extern void __iounmap(void __iomem *addr);
 #define ioremap_wc(offset, size)		\
 	__ioremap((offset), (size), _PAGE_WR_COMBINE)
 
+#define ioremap_wt(offset, size)		\
+	__ioremap((offset), (size), 0)
+
 #define iounmap(addr)				\
 	__iounmap(addr)
 
@ -39,10 +39,10 @@ extern resource_size_t isa_mem_base;
 extern void iounmap(void __iomem *addr);
 
 extern void __iomem *ioremap(phys_addr_t address, unsigned long size);
-#define ioremap_writethrough(addr, size) ioremap((addr), (size))
 #define ioremap_nocache(addr, size)      ioremap((addr), (size))
 #define ioremap_fullcache(addr, size)    ioremap((addr), (size))
 #define ioremap_wc(addr, size)           ioremap((addr), (size))
+#define ioremap_wt(addr, size)           ioremap((addr), (size))
 
 #endif /* CONFIG_MMU */
 
|
@ -282,6 +282,7 @@ static inline void __iomem *ioremap_nocache(unsigned long offset, unsigned long
|
||||
}
|
||||
|
||||
#define ioremap_wc ioremap_nocache
|
||||
#define ioremap_wt ioremap_nocache
|
||||
|
||||
static inline void iounmap(void __iomem *addr)
|
||||
{
|
||||
|
@ -46,6 +46,7 @@ static inline void iounmap(void __iomem *addr)
 }
 
 #define ioremap_wc ioremap_nocache
+#define ioremap_wt ioremap_nocache
 
 /* Pages to physical address... */
 #define page_to_phys(page)	virt_to_phys(page_to_virt(page))
@ -29,6 +29,7 @@ void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr);
 
 #define ioremap_nocache(addr, size)	ioremap(addr, size)
 #define ioremap_wc			ioremap_nocache
+#define ioremap_wt			ioremap_nocache
 
 static inline void __iomem *ioremap(unsigned long offset, unsigned long size)
 {
@ -129,6 +129,7 @@ static inline void sbus_memcpy_toio(volatile void __iomem *dst,
 void __iomem *ioremap(unsigned long offset, unsigned long size);
 #define ioremap_nocache(X,Y)	ioremap((X),(Y))
 #define ioremap_wc(X,Y)		ioremap((X),(Y))
+#define ioremap_wt(X,Y)		ioremap((X),(Y))
 void iounmap(volatile void __iomem *addr);
 
 /* Create a virtual mapping cookie for an IO port range */
@ -402,6 +402,7 @@ static inline void __iomem *ioremap(unsigned long offset, unsigned long size)
 
 #define ioremap_nocache(X,Y)		ioremap((X),(Y))
 #define ioremap_wc(X,Y)			ioremap((X),(Y))
+#define ioremap_wt(X,Y)			ioremap((X),(Y))
 
 static inline void iounmap(volatile void __iomem *addr)
 {
@ -54,7 +54,7 @@ extern void iounmap(volatile void __iomem *addr);
 
 #define ioremap_nocache(physaddr, size)		ioremap(physaddr, size)
 #define ioremap_wc(physaddr, size)		ioremap(physaddr, size)
-#define ioremap_writethrough(physaddr, size)	ioremap(physaddr, size)
+#define ioremap_wt(physaddr, size)		ioremap(physaddr, size)
 #define ioremap_fullcache(physaddr, size)	ioremap(physaddr, size)
 
 #define mmiowb()
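All of the per-architecture hunks above follow one pattern: ioremap_wt() is given a real write-through implementation where the hardware supports it, and is otherwise aliased to the strongest available type (usually uncached). The practical consequence is that portable code can request WT unconditionally. A minimal sketch, with hypothetical base/length parameters:

#include <linux/io.h>

/* Request a write-through mapping; on architectures without WT support
 * this transparently degrades to an uncached mapping, per the hunks
 * above. The nvdimm_* names are hypothetical. */
static void __iomem *map_persistent_window(phys_addr_t nvdimm_base,
                                           size_t nvdimm_len)
{
        /* WT: reads can be cached, writes go straight to the media -
         * the property the pull request calls out for NVDIMMs. */
        return ioremap_wt(nvdimm_base, nvdimm_len);
}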
@ -1,3 +1,6 @@
+
+obj-y += entry/
+
 obj-$(CONFIG_KVM) += kvm/
 
 # Xen paravirtualization support
@ -11,7 +14,7 @@ obj-y += kernel/
 obj-y += mm/
 
 obj-y += crypto/
-obj-y += vdso/
+
 obj-$(CONFIG_IA32_EMULATION) += ia32/
 
 obj-y += platform/
arch/x86/Kconfig (235 lines changed)
@ -9,141 +9,141 @@ config 64BIT
(old and new orderings of the select list are shown interleaved, as rendered)
config X86_32
	def_bool y
	depends on !64BIT
	select CLKSRC_I8253
	select HAVE_UID16

config X86_64
	def_bool y
	depends on 64BIT
	select X86_DEV_DMA_OPS
	select ARCH_USE_CMPXCHG_LOCKREF
	select HAVE_LIVEPATCH

### Arch settings
config X86
	def_bool y
	select ACPI_SYSTEM_POWER_STATES_SUPPORT if ACPI
	select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI
	select ACPI_LEGACY_TABLES_LOOKUP if ACPI
	select ACPI_SYSTEM_POWER_STATES_SUPPORT if ACPI
	select ANON_INODES
	select ARCH_CLOCKSOURCE_DATA
	select ARCH_DISCARD_MEMBLOCK
	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
	select ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS
	select ARCH_HAS_ELF_RANDOMIZE
	select ARCH_HAS_FAST_MULTIPLIER
	select ARCH_HAS_GCOV_PROFILE_ALL
	select ARCH_HAS_SG_CHAIN
	select ARCH_HAVE_NMI_SAFE_CMPXCHG
	select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI
	select ARCH_MIGHT_HAVE_PC_PARPORT
	select ARCH_MIGHT_HAVE_PC_SERIO
	select HAVE_AOUT if X86_32
	select HAVE_UNSTABLE_SCHED_CLOCK
	select ARCH_SUPPORTS_NUMA_BALANCING if X86_64
	select ARCH_SUPPORTS_INT128 if X86_64
	select HAVE_IDE
	select HAVE_OPROFILE
	select HAVE_PCSPKR_PLATFORM
	select HAVE_PERF_EVENTS
	select HAVE_IOREMAP_PROT
	select HAVE_KPROBES
	select HAVE_MEMBLOCK
	select HAVE_MEMBLOCK_NODE_MAP
	select ARCH_DISCARD_MEMBLOCK
	select ARCH_WANT_OPTIONAL_GPIOLIB
	select ARCH_SUPPORTS_ATOMIC_RMW
	select ARCH_SUPPORTS_INT128 if X86_64
	select ARCH_SUPPORTS_NUMA_BALANCING if X86_64
	select ARCH_USE_BUILTIN_BSWAP
	select ARCH_USE_CMPXCHG_LOCKREF if X86_64
	select ARCH_USE_QUEUED_RWLOCKS
	select ARCH_USE_QUEUED_SPINLOCKS
	select ARCH_WANT_FRAME_POINTERS
	select ARCH_WANT_IPC_PARSE_VERSION if X86_32
	select ARCH_WANT_OPTIONAL_GPIOLIB
	select BUILDTIME_EXTABLE_SORT
	select CLKEVT_I8253
	select CLKSRC_I8253 if X86_32
	select CLOCKSOURCE_VALIDATE_LAST_CYCLE
	select CLOCKSOURCE_WATCHDOG
	select CLONE_BACKWARDS if X86_32
	select COMPAT_OLD_SIGACTION if IA32_EMULATION
	select DCACHE_WORD_ACCESS
	select GENERIC_CLOCKEVENTS
	select GENERIC_CLOCKEVENTS_BROADCAST if X86_64 || (X86_32 && X86_LOCAL_APIC)
	select GENERIC_CLOCKEVENTS_MIN_ADJUST
	select GENERIC_CMOS_UPDATE
	select GENERIC_CPU_AUTOPROBE
	select GENERIC_EARLY_IOREMAP
	select GENERIC_FIND_FIRST_BIT
	select GENERIC_IOMAP
	select GENERIC_IRQ_PROBE
	select GENERIC_IRQ_SHOW
	select GENERIC_PENDING_IRQ if SMP
	select GENERIC_SMP_IDLE_THREAD
	select GENERIC_STRNCPY_FROM_USER
	select GENERIC_STRNLEN_USER
	select GENERIC_TIME_VSYSCALL
	select HAVE_ACPI_APEI if ACPI
	select HAVE_ACPI_APEI_NMI if ACPI
	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
	select HAVE_AOUT if X86_32
	select HAVE_ARCH_AUDITSYSCALL
	select HAVE_ARCH_HUGE_VMAP if X86_64 || X86_PAE
	select HAVE_ARCH_JUMP_LABEL
	select HAVE_ARCH_KASAN if X86_64 && SPARSEMEM_VMEMMAP
	select HAVE_ARCH_KGDB
	select HAVE_ARCH_KMEMCHECK
	select HAVE_ARCH_SECCOMP_FILTER
	select HAVE_ARCH_SOFT_DIRTY if X86_64
	select HAVE_ARCH_TRACEHOOK
	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
	select HAVE_BPF_JIT if X86_64
	select HAVE_CC_STACKPROTECTOR
	select HAVE_CMPXCHG_DOUBLE
	select HAVE_CMPXCHG_LOCAL
	select HAVE_CONTEXT_TRACKING if X86_64
	select HAVE_C_RECORDMCOUNT
	select HAVE_DEBUG_KMEMLEAK
	select HAVE_DEBUG_STACKOVERFLOW
	select HAVE_DMA_API_DEBUG
	select HAVE_DMA_ATTRS
	select HAVE_DMA_CONTIGUOUS
	select HAVE_KRETPROBES
	select GENERIC_EARLY_IOREMAP
	select HAVE_OPTPROBES
	select HAVE_KPROBES_ON_FTRACE
	select HAVE_FTRACE_MCOUNT_RECORD
	select HAVE_FENTRY if X86_64
	select HAVE_C_RECORDMCOUNT
	select HAVE_DYNAMIC_FTRACE
	select HAVE_DYNAMIC_FTRACE_WITH_REGS
	select HAVE_FUNCTION_TRACER
	select HAVE_FUNCTION_GRAPH_TRACER
	select HAVE_FUNCTION_GRAPH_FP_TEST
	select HAVE_SYSCALL_TRACEPOINTS
	select SYSCTL_EXCEPTION_TRACE
	select HAVE_KVM
	select HAVE_ARCH_KGDB
	select HAVE_ARCH_TRACEHOOK
	select HAVE_GENERIC_DMA_COHERENT if X86_32
	select HAVE_EFFICIENT_UNALIGNED_ACCESS
	select USER_STACKTRACE_SUPPORT
	select HAVE_REGS_AND_STACK_ACCESS_API
	select HAVE_DMA_API_DEBUG
	select HAVE_KERNEL_GZIP
	select HAVE_KERNEL_BZIP2
	select HAVE_KERNEL_LZMA
	select HAVE_KERNEL_XZ
	select HAVE_KERNEL_LZO
	select HAVE_KERNEL_LZ4
	select HAVE_FENTRY if X86_64
	select HAVE_FTRACE_MCOUNT_RECORD
	select HAVE_FUNCTION_GRAPH_FP_TEST
	select HAVE_FUNCTION_GRAPH_TRACER
	select HAVE_FUNCTION_TRACER
	select HAVE_GENERIC_DMA_COHERENT if X86_32
	select HAVE_HW_BREAKPOINT
	select HAVE_IDE
	select HAVE_IOREMAP_PROT
	select HAVE_IRQ_EXIT_ON_IRQ_STACK if X86_64
	select HAVE_IRQ_TIME_ACCOUNTING
	select HAVE_KERNEL_BZIP2
	select HAVE_KERNEL_GZIP
	select HAVE_KERNEL_LZ4
	select HAVE_KERNEL_LZMA
	select HAVE_KERNEL_LZO
	select HAVE_KERNEL_XZ
	select HAVE_KPROBES
	select HAVE_KPROBES_ON_FTRACE
	select HAVE_KRETPROBES
	select HAVE_KVM
	select HAVE_LIVEPATCH if X86_64
	select HAVE_MEMBLOCK
	select HAVE_MEMBLOCK_NODE_MAP
	select HAVE_MIXED_BREAKPOINTS_REGS
	select PERF_EVENTS
	select HAVE_OPROFILE
	select HAVE_OPTPROBES
	select HAVE_PCSPKR_PLATFORM
	select HAVE_PERF_EVENTS
	select HAVE_PERF_EVENTS_NMI
	select HAVE_PERF_REGS
	select HAVE_PERF_USER_STACK_DUMP
	select HAVE_DEBUG_KMEMLEAK
	select ANON_INODES
	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
	select HAVE_CMPXCHG_LOCAL
	select HAVE_CMPXCHG_DOUBLE
	select HAVE_ARCH_KMEMCHECK
	select HAVE_ARCH_KASAN if X86_64 && SPARSEMEM_VMEMMAP
	select HAVE_REGS_AND_STACK_ACCESS_API
	select HAVE_SYSCALL_TRACEPOINTS
	select HAVE_UID16 if X86_32
	select HAVE_UNSTABLE_SCHED_CLOCK
	select HAVE_USER_RETURN_NOTIFIER
	select ARCH_HAS_ELF_RANDOMIZE
	select HAVE_ARCH_JUMP_LABEL
	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
	select SPARSE_IRQ
	select GENERIC_FIND_FIRST_BIT
	select GENERIC_IRQ_PROBE
	select GENERIC_PENDING_IRQ if SMP
	select GENERIC_IRQ_SHOW
	select GENERIC_CLOCKEVENTS_MIN_ADJUST
	select IRQ_FORCED_THREADING
	select HAVE_BPF_JIT if X86_64
	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
	select HAVE_ARCH_HUGE_VMAP if X86_64 || (X86_32 && X86_PAE)
	select ARCH_HAS_SG_CHAIN
	select CLKEVT_I8253
	select ARCH_HAVE_NMI_SAFE_CMPXCHG
	select GENERIC_IOMAP
	select DCACHE_WORD_ACCESS
	select GENERIC_SMP_IDLE_THREAD
	select ARCH_WANT_IPC_PARSE_VERSION if X86_32
	select HAVE_ARCH_SECCOMP_FILTER
	select BUILDTIME_EXTABLE_SORT
	select GENERIC_CMOS_UPDATE
	select HAVE_ARCH_SOFT_DIRTY if X86_64
	select CLOCKSOURCE_WATCHDOG
	select GENERIC_CLOCKEVENTS
	select ARCH_CLOCKSOURCE_DATA
	select CLOCKSOURCE_VALIDATE_LAST_CYCLE
	select GENERIC_CLOCKEVENTS_BROADCAST if X86_64 || (X86_32 && X86_LOCAL_APIC)
	select GENERIC_TIME_VSYSCALL
	select GENERIC_STRNCPY_FROM_USER
	select GENERIC_STRNLEN_USER
	select HAVE_CONTEXT_TRACKING if X86_64
	select HAVE_IRQ_TIME_ACCOUNTING
	select VIRT_TO_BUS
	select MODULES_USE_ELF_REL if X86_32
	select MODULES_USE_ELF_RELA if X86_64
	select CLONE_BACKWARDS if X86_32
	select ARCH_USE_BUILTIN_BSWAP
	select ARCH_USE_QUEUED_SPINLOCKS
	select ARCH_USE_QUEUED_RWLOCKS
	select OLD_SIGSUSPEND3 if X86_32 || IA32_EMULATION
	select OLD_SIGACTION if X86_32
	select COMPAT_OLD_SIGACTION if IA32_EMULATION
	select MODULES_USE_ELF_RELA if X86_64
	select MODULES_USE_ELF_REL if X86_32
	select OLD_SIGACTION if X86_32
	select OLD_SIGSUSPEND3 if X86_32 || IA32_EMULATION
	select PERF_EVENTS
	select RTC_LIB
	select HAVE_DEBUG_STACKOVERFLOW
	select HAVE_IRQ_EXIT_ON_IRQ_STACK if X86_64
	select HAVE_CC_STACKPROTECTOR
	select GENERIC_CPU_AUTOPROBE
	select HAVE_ARCH_AUDITSYSCALL
	select ARCH_SUPPORTS_ATOMIC_RMW
	select HAVE_ACPI_APEI if ACPI
	select HAVE_ACPI_APEI_NMI if ACPI
	select ACPI_LEGACY_TABLES_LOOKUP if ACPI
	select X86_FEATURE_NAMES if PROC_FS
	select SPARSE_IRQ
	select SRCU
	select SYSCTL_EXCEPTION_TRACE
	select USER_STACKTRACE_SUPPORT
	select VIRT_TO_BUS
	select X86_DEV_DMA_OPS if X86_64
	select X86_FEATURE_NAMES if PROC_FS

config INSTRUCTION_DECODER
	def_bool y
@ -261,10 +261,6 @@ config X86_64_SMP
 	def_bool y
 	depends on X86_64 && SMP
 
-config X86_HT
-	def_bool y
-	depends on SMP
-
 config X86_32_LAZY_GS
 	def_bool y
 	depends on X86_32 && !CC_STACKPROTECTOR
@ -342,7 +338,7 @@ config X86_FEATURE_NAMES
 
 config X86_X2APIC
 	bool "Support x2apic"
-	depends on X86_LOCAL_APIC && X86_64 && IRQ_REMAP
+	depends on X86_LOCAL_APIC && X86_64 && (IRQ_REMAP || HYPERVISOR_GUEST)
 	---help---
 	  This enables x2apic support on CPUs that have this feature.
 
@ -442,6 +438,7 @@ config X86_UV
 	depends on X86_EXTENDED_PLATFORM
 	depends on NUMA
 	depends on X86_X2APIC
+	depends on PCI
 	---help---
 	  This option is needed in order to support SGI Ultraviolet systems.
 	  If you don't have one of these, you should say N here.
@ -467,7 +464,6 @@ config X86_INTEL_CE
 	select X86_REBOOTFIXUPS
 	select OF
 	select OF_EARLY_FLATTREE
-	select IRQ_DOMAIN
 	---help---
 	  Select for the Intel CE media processor (CE4100) SOC.
 	  This option compiles in support for the CE4100 SOC for settop
@ -852,11 +848,12 @@ config NR_CPUS
 	default "1" if !SMP
 	default "8192" if MAXSMP
 	default "32" if SMP && X86_BIGSMP
-	default "8" if SMP
+	default "8" if SMP && X86_32
+	default "64" if SMP
 	---help---
 	  This allows you to specify the maximum number of CPUs which this
 	  kernel will support.  If CPUMASK_OFFSTACK is enabled, the maximum
-	  supported value is 4096, otherwise the maximum value is 512.  The
+	  supported value is 8192, otherwise the maximum value is 512.  The
 	  minimum value which makes sense is 2.
 
 	  This is purely to save memory - each supported CPU adds
@ -864,7 +861,7 @@ config NR_CPUS
 
 config SCHED_SMT
 	bool "SMT (Hyperthreading) scheduler support"
-	depends on X86_HT
+	depends on SMP
 	---help---
 	  SMT scheduler support improves the CPU scheduler's decision making
 	  when dealing with Intel Pentium 4 chips with HyperThreading at a
|
||||
config SCHED_MC
|
||||
def_bool y
|
||||
prompt "Multi-core scheduler support"
|
||||
depends on X86_HT
|
||||
depends on SMP
|
||||
---help---
|
||||
Multi-core scheduler support improves the CPU scheduler's decision
|
||||
making when dealing with multi-core CPU chips at a cost of slightly
|
||||
@ -915,12 +912,12 @@ config X86_UP_IOAPIC
|
||||
config X86_LOCAL_APIC
|
||||
def_bool y
|
||||
depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_APIC || PCI_MSI
|
||||
select GENERIC_IRQ_LEGACY_ALLOC_HWIRQ
|
||||
select IRQ_DOMAIN_HIERARCHY
|
||||
select PCI_MSI_IRQ_DOMAIN if PCI_MSI
|
||||
|
||||
config X86_IO_APIC
|
||||
def_bool y
|
||||
depends on X86_LOCAL_APIC || X86_UP_IOAPIC
|
||||
select IRQ_DOMAIN
|
||||
|
||||
config X86_REROUTE_FOR_BROKEN_BOOT_IRQS
|
||||
bool "Reroute for broken boot IRQs"
|
||||
|
@ -344,4 +344,15 @@ config X86_DEBUG_FPU
 
 	  If unsure, say N.
 
+config PUNIT_ATOM_DEBUG
+	tristate "ATOM Punit debug driver"
+	select DEBUG_FS
+	select IOSF_MBI
+	---help---
+	  This is a debug driver, which gets the power states
+	  of all Punit North Complex devices. The power states of
+	  each device is exposed as part of the debugfs interface.
+	  The current power state can be read from
+	  /sys/kernel/debug/punit_atom/dev_power_state
+
 endmenu
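The help text names the debugfs file directly, so reading the Punit power states needs no special tooling. A minimal userspace sketch in C, assuming debugfs is mounted at its usual location:

#include <stdio.h>

int main(void)
{
        char line[256];
        FILE *f = fopen("/sys/kernel/debug/punit_atom/dev_power_state", "r");

        if (!f) {                       /* driver absent or debugfs unmounted */
                perror("punit_atom");
                return 1;
        }
        while (fgets(line, sizeof(line), f))
                fputs(line, stdout);    /* dump the per-device power states */
        fclose(f);
        return 0;
}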
@ -77,6 +77,12 @@ else
         KBUILD_AFLAGS += -m64
         KBUILD_CFLAGS += -m64
 
+        # Align jump targets to 1 byte, not the default 16 bytes:
+        KBUILD_CFLAGS += -falign-jumps=1
+
+        # Pack loops tightly as well:
+        KBUILD_CFLAGS += -falign-loops=1
+
         # Don't autogenerate traditional x87 instructions
         KBUILD_CFLAGS += $(call cc-option,-mno-80387)
         KBUILD_CFLAGS += $(call cc-option,-mno-fp-ret-in-387)
@ -84,6 +90,9 @@ else
         # Use -mpreferred-stack-boundary=3 if supported.
         KBUILD_CFLAGS += $(call cc-option,-mpreferred-stack-boundary=3)
 
+        # Use -mskip-rax-setup if supported.
+        KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
+
         # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
@ -140,12 +149,6 @@ endif
 sp-$(CONFIG_X86_32) := esp
 sp-$(CONFIG_X86_64) := rsp
 
-# do binutils support CFI?
-cfi := $(call as-instr,.cfi_startproc\n.cfi_rel_offset $(sp-y)$(comma)0\n.cfi_endproc,-DCONFIG_AS_CFI=1)
-# is .cfi_signal_frame supported too?
-cfi-sigframe := $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1)
-cfi-sections := $(call as-instr,.cfi_sections .debug_frame,-DCONFIG_AS_CFI_SECTIONS=1)
-
 # does binutils support specific instructions?
 asinstr := $(call as-instr,fxsaveq (%rax),-DCONFIG_AS_FXSAVEQ=1)
 asinstr += $(call as-instr,pshufb %xmm0$(comma)%xmm0,-DCONFIG_AS_SSSE3=1)
@ -153,8 +156,8 @@ asinstr += $(call as-instr,crc32l %eax$(comma)%eax,-DCONFIG_AS_CRC32=1)
 avx_instr := $(call as-instr,vxorps %ymm0$(comma)%ymm1$(comma)%ymm2,-DCONFIG_AS_AVX=1)
 avx2_instr :=$(call as-instr,vpbroadcastb %xmm0$(comma)%ymm1,-DCONFIG_AS_AVX2=1)
 
-KBUILD_AFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr)
-KBUILD_CFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr)
+KBUILD_AFLAGS += $(asinstr) $(avx_instr) $(avx2_instr)
+KBUILD_CFLAGS += $(asinstr) $(avx_instr) $(avx2_instr)
 
 LDFLAGS := -m elf_$(UTS_MACHINE)
 
@ -178,7 +181,7 @@ archscripts: scripts_basic
 # Syscall table generation
 
 archheaders:
-	$(Q)$(MAKE) $(build)=arch/x86/syscalls all
+	$(Q)$(MAKE) $(build)=arch/x86/entry/syscalls all
 
 archprepare:
 ifeq ($(CONFIG_KEXEC_FILE),y)
@ -241,7 +244,7 @@ install:
 
 PHONY += vdso_install
 vdso_install:
-	$(Q)$(MAKE) $(build)=arch/x86/vdso $@
+	$(Q)$(MAKE) $(build)=arch/x86/entry/vdso $@
 
 archclean:
 	$(Q)rm -rf $(objtree)/arch/i386
arch/x86/entry/Makefile (new file, 10 lines)
@ -0,0 +1,10 @@
#
# Makefile for the x86 low level entry code
#
obj-y := entry_$(BITS).o thunk_$(BITS).o syscall_$(BITS).o

obj-y += vdso/
obj-y += vsyscall/

obj-$(CONFIG_IA32_EMULATION) += entry_64_compat.o syscall_32.o

@ -46,8 +46,6 @@ For 32-bit we have the following conventions - kernel is built with
 
 */
 
-#include <asm/dwarf2.h>
-
 #ifdef CONFIG_X86_64
 
 /*
@ -91,28 +89,27 @@ For 32-bit we have the following conventions - kernel is built with
 #define SIZEOF_PTREGS	21*8
 
 	.macro ALLOC_PT_GPREGS_ON_STACK addskip=0
-	subq	$15*8+\addskip, %rsp
-	CFI_ADJUST_CFA_OFFSET 15*8+\addskip
+	addq	$-(15*8+\addskip), %rsp
 	.endm
 
 	.macro SAVE_C_REGS_HELPER offset=0 rax=1 rcx=1 r8910=1 r11=1
 	.if \r11
-	movq_cfi r11, 6*8+\offset
+	movq %r11, 6*8+\offset(%rsp)
 	.endif
 	.if \r8910
-	movq_cfi r10, 7*8+\offset
-	movq_cfi r9,  8*8+\offset
-	movq_cfi r8,  9*8+\offset
+	movq %r10, 7*8+\offset(%rsp)
+	movq %r9,  8*8+\offset(%rsp)
+	movq %r8,  9*8+\offset(%rsp)
 	.endif
 	.if \rax
-	movq_cfi rax, 10*8+\offset
+	movq %rax, 10*8+\offset(%rsp)
 	.endif
 	.if \rcx
-	movq_cfi rcx, 11*8+\offset
+	movq %rcx, 11*8+\offset(%rsp)
 	.endif
-	movq_cfi rdx, 12*8+\offset
-	movq_cfi rsi, 13*8+\offset
-	movq_cfi rdi, 14*8+\offset
+	movq %rdx, 12*8+\offset(%rsp)
+	movq %rsi, 13*8+\offset(%rsp)
+	movq %rdi, 14*8+\offset(%rsp)
 	.endm
 	.macro SAVE_C_REGS offset=0
 	SAVE_C_REGS_HELPER \offset, 1, 1, 1, 1
@ -131,24 +128,24 @@ For 32-bit we have the following conventions - kernel is built with
 	.endm
 
 	.macro SAVE_EXTRA_REGS offset=0
-	movq_cfi r15, 0*8+\offset
-	movq_cfi r14, 1*8+\offset
-	movq_cfi r13, 2*8+\offset
-	movq_cfi r12, 3*8+\offset
-	movq_cfi rbp, 4*8+\offset
-	movq_cfi rbx, 5*8+\offset
+	movq %r15, 0*8+\offset(%rsp)
+	movq %r14, 1*8+\offset(%rsp)
+	movq %r13, 2*8+\offset(%rsp)
+	movq %r12, 3*8+\offset(%rsp)
+	movq %rbp, 4*8+\offset(%rsp)
+	movq %rbx, 5*8+\offset(%rsp)
 	.endm
 	.macro SAVE_EXTRA_REGS_RBP offset=0
-	movq_cfi rbp, 4*8+\offset
+	movq %rbp, 4*8+\offset(%rsp)
 	.endm
 
 	.macro RESTORE_EXTRA_REGS offset=0
-	movq_cfi_restore 0*8+\offset, r15
-	movq_cfi_restore 1*8+\offset, r14
-	movq_cfi_restore 2*8+\offset, r13
-	movq_cfi_restore 3*8+\offset, r12
-	movq_cfi_restore 4*8+\offset, rbp
-	movq_cfi_restore 5*8+\offset, rbx
+	movq 0*8+\offset(%rsp), %r15
+	movq 1*8+\offset(%rsp), %r14
+	movq 2*8+\offset(%rsp), %r13
+	movq 3*8+\offset(%rsp), %r12
+	movq 4*8+\offset(%rsp), %rbp
+	movq 5*8+\offset(%rsp), %rbx
 	.endm
 
 	.macro ZERO_EXTRA_REGS
@ -162,24 +159,24 @@ For 32-bit we have the following conventions - kernel is built with
 
 	.macro RESTORE_C_REGS_HELPER rstor_rax=1, rstor_rcx=1, rstor_r11=1, rstor_r8910=1, rstor_rdx=1
 	.if \rstor_r11
-	movq_cfi_restore 6*8, r11
+	movq 6*8(%rsp), %r11
 	.endif
 	.if \rstor_r8910
-	movq_cfi_restore 7*8, r10
-	movq_cfi_restore 8*8, r9
-	movq_cfi_restore 9*8, r8
+	movq 7*8(%rsp), %r10
+	movq 8*8(%rsp), %r9
+	movq 9*8(%rsp), %r8
 	.endif
 	.if \rstor_rax
-	movq_cfi_restore 10*8, rax
+	movq 10*8(%rsp), %rax
 	.endif
 	.if \rstor_rcx
-	movq_cfi_restore 11*8, rcx
+	movq 11*8(%rsp), %rcx
 	.endif
 	.if \rstor_rdx
-	movq_cfi_restore 12*8, rdx
+	movq 12*8(%rsp), %rdx
 	.endif
-	movq_cfi_restore 13*8, rsi
-	movq_cfi_restore 14*8, rdi
+	movq 13*8(%rsp), %rsi
+	movq 14*8(%rsp), %rdi
 	.endm
 	.macro RESTORE_C_REGS
 	RESTORE_C_REGS_HELPER 1,1,1,1,1
@ -204,8 +201,7 @@ For 32-bit we have the following conventions - kernel is built with
 	.endm
 
 	.macro REMOVE_PT_GPREGS_FROM_STACK addskip=0
-	addq $15*8+\addskip, %rsp
-	CFI_ADJUST_CFA_OFFSET -(15*8+\addskip)
+	subq $-(15*8+\addskip), %rsp
 	.endm
 
 	.macro icebp
@ -224,23 +220,23 @@ For 32-bit we have the following conventions - kernel is built with
 */
 
 	.macro SAVE_ALL
-	pushl_cfi_reg eax
-	pushl_cfi_reg ebp
-	pushl_cfi_reg edi
-	pushl_cfi_reg esi
-	pushl_cfi_reg edx
-	pushl_cfi_reg ecx
-	pushl_cfi_reg ebx
+	pushl %eax
+	pushl %ebp
+	pushl %edi
+	pushl %esi
+	pushl %edx
+	pushl %ecx
+	pushl %ebx
 	.endm
 
 	.macro RESTORE_ALL
-	popl_cfi_reg ebx
-	popl_cfi_reg ecx
-	popl_cfi_reg edx
-	popl_cfi_reg esi
-	popl_cfi_reg edi
-	popl_cfi_reg ebp
-	popl_cfi_reg eax
+	popl %ebx
+	popl %ecx
+	popl %edx
+	popl %esi
+	popl %edi
+	popl %ebp
+	popl %eax
 	.endm
 
 #endif /* CONFIG_X86_64 */
arch/x86/entry/entry_32.S (new file, 1248 lines)
File diff suppressed because it is too large.
arch/x86/entry/entry_64_compat.S (new file, 556 lines)
@ -0,0 +1,556 @@
|
||||
/*
|
||||
* Compatibility mode system call entry point for x86-64.
|
||||
*
|
||||
* Copyright 2000-2002 Andi Kleen, SuSE Labs.
|
||||
*/
|
||||
#include "calling.h"
|
||||
#include <asm/asm-offsets.h>
|
||||
#include <asm/current.h>
|
||||
#include <asm/errno.h>
|
||||
#include <asm/ia32_unistd.h>
|
||||
#include <asm/thread_info.h>
|
||||
#include <asm/segment.h>
|
||||
#include <asm/irqflags.h>
|
||||
#include <asm/asm.h>
|
||||
#include <asm/smap.h>
|
||||
#include <linux/linkage.h>
|
||||
#include <linux/err.h>
|
||||
|
||||
/* Avoid __ASSEMBLER__'ifying <linux/audit.h> just for this. */
|
||||
#include <linux/elf-em.h>
|
||||
#define AUDIT_ARCH_I386 (EM_386|__AUDIT_ARCH_LE)
|
||||
#define __AUDIT_ARCH_LE 0x40000000
|
||||
|
||||
#ifndef CONFIG_AUDITSYSCALL
|
||||
# define sysexit_audit ia32_ret_from_sys_call
|
||||
# define sysretl_audit ia32_ret_from_sys_call
|
||||
#endif
|
||||
|
||||
.section .entry.text, "ax"
|
||||
|
||||
#ifdef CONFIG_PARAVIRT
|
||||
ENTRY(native_usergs_sysret32)
|
||||
swapgs
|
||||
sysretl
|
||||
ENDPROC(native_usergs_sysret32)
|
||||
#endif
|
||||
|
||||
/*
|
||||
* 32-bit SYSENTER instruction entry.
|
||||
*
|
||||
* SYSENTER loads ss, rsp, cs, and rip from previously programmed MSRs.
|
||||
* IF and VM in rflags are cleared (IOW: interrupts are off).
|
||||
* SYSENTER does not save anything on the stack,
|
||||
* and does not save old rip (!!!) and rflags.
|
||||
*
|
||||
* Arguments:
|
||||
* eax system call number
|
||||
* ebx arg1
|
||||
* ecx arg2
|
||||
* edx arg3
|
||||
* esi arg4
|
||||
* edi arg5
|
||||
* ebp user stack
|
||||
* 0(%ebp) arg6
|
||||
*
|
||||
* This is purely a fast path. For anything complicated we use the int 0x80
|
||||
* path below. We set up a complete hardware stack frame to share code
|
||||
* with the int 0x80 path.
|
||||
*/
|
||||
ENTRY(entry_SYSENTER_compat)
|
||||
/*
|
||||
* Interrupts are off on entry.
|
||||
* We do not frame this tiny irq-off block with TRACE_IRQS_OFF/ON,
|
||||
* it is too small to ever cause noticeable irq latency.
|
||||
*/
|
||||
SWAPGS_UNSAFE_STACK
|
||||
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
|
||||
ENABLE_INTERRUPTS(CLBR_NONE)
|
||||
|
||||
/* Zero-extending 32-bit regs, do not remove */
|
||||
movl %ebp, %ebp
|
||||
movl %eax, %eax
|
||||
|
||||
movl ASM_THREAD_INFO(TI_sysenter_return, %rsp, 0), %r10d
|
||||
|
||||
/* Construct struct pt_regs on stack */
|
||||
pushq $__USER32_DS /* pt_regs->ss */
|
||||
pushq %rbp /* pt_regs->sp */
|
||||
pushfq /* pt_regs->flags */
|
||||
pushq $__USER32_CS /* pt_regs->cs */
|
||||
pushq %r10 /* pt_regs->ip = thread_info->sysenter_return */
|
||||
pushq %rax /* pt_regs->orig_ax */
|
||||
pushq %rdi /* pt_regs->di */
|
||||
pushq %rsi /* pt_regs->si */
|
||||
pushq %rdx /* pt_regs->dx */
|
||||
pushq %rcx /* pt_regs->cx */
|
||||
pushq $-ENOSYS /* pt_regs->ax */
|
||||
cld
|
||||
sub $(10*8), %rsp /* pt_regs->r8-11, bp, bx, r12-15 not saved */
|
||||
|
||||
/*
|
||||
* no need to do an access_ok check here because rbp has been
|
||||
* 32-bit zero extended
|
||||
*/
|
||||
ASM_STAC
|
||||
1: movl (%rbp), %ebp
|
||||
_ASM_EXTABLE(1b, ia32_badarg)
|
||||
ASM_CLAC
|
||||
|
||||
/*
|
||||
* Sysenter doesn't filter flags, so we need to clear NT
|
||||
* ourselves. To save a few cycles, we can check whether
|
||||
* NT was set instead of doing an unconditional popfq.
|
||||
*/
|
||||
testl $X86_EFLAGS_NT, EFLAGS(%rsp)
|
||||
jnz sysenter_fix_flags
|
||||
sysenter_flags_fixed:
|
||||
|
||||
orl $TS_COMPAT, ASM_THREAD_INFO(TI_status, %rsp, SIZEOF_PTREGS)
|
||||
testl $_TIF_WORK_SYSCALL_ENTRY, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
|
||||
jnz sysenter_tracesys
|
||||
|
||||
sysenter_do_call:
|
||||
/* 32-bit syscall -> 64-bit C ABI argument conversion */
|
||||
movl %edi, %r8d /* arg5 */
|
||||
movl %ebp, %r9d /* arg6 */
|
||||
xchg %ecx, %esi /* rsi:arg2, rcx:arg4 */
|
||||
movl %ebx, %edi /* arg1 */
|
||||
movl %edx, %edx /* arg3 (zero extension) */
|
||||
sysenter_dispatch:
|
||||
cmpq $(IA32_NR_syscalls-1), %rax
|
||||
ja 1f
|
||||
call *ia32_sys_call_table(, %rax, 8)
|
||||
movq %rax, RAX(%rsp)
|
||||
1:
|
||||
DISABLE_INTERRUPTS(CLBR_NONE)
|
||||
TRACE_IRQS_OFF
|
||||
testl $_TIF_ALLWORK_MASK, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
|
||||
jnz sysexit_audit
|
||||
sysexit_from_sys_call:
|
||||
/*
|
||||
* NB: SYSEXIT is not obviously safe for 64-bit kernels -- an
|
||||
* NMI between STI and SYSEXIT has poorly specified behavior,
|
||||
* and and NMI followed by an IRQ with usergs is fatal. So
|
||||
* we just pretend we're using SYSEXIT but we really use
|
||||
* SYSRETL instead.
|
||||
*
|
||||
* This code path is still called 'sysexit' because it pairs
|
||||
* with 'sysenter' and it uses the SYSENTER calling convention.
|
||||
*/
|
||||
andl $~TS_COMPAT, ASM_THREAD_INFO(TI_status, %rsp, SIZEOF_PTREGS)
|
||||
movl RIP(%rsp), %ecx /* User %eip */
|
||||
RESTORE_RSI_RDI
|
||||
xorl %edx, %edx /* Do not leak kernel information */
|
||||
xorq %r8, %r8
|
||||
xorq %r9, %r9
|
||||
xorq %r10, %r10
|
||||
movl EFLAGS(%rsp), %r11d /* User eflags */
|
||||
TRACE_IRQS_ON
|
||||
|
||||
/*
|
||||
* SYSRETL works even on Intel CPUs. Use it in preference to SYSEXIT,
|
||||
* since it avoids a dicey window with interrupts enabled.
|
||||
*/
|
||||
movl RSP(%rsp), %esp
|
||||
|
||||
/*
|
||||
* USERGS_SYSRET32 does:
|
||||
* gsbase = user's gs base
|
||||
* eip = ecx
|
||||
* rflags = r11
|
||||
* cs = __USER32_CS
|
||||
* ss = __USER_DS
|
||||
*
|
||||
* The prologue set RIP(%rsp) to VDSO32_SYSENTER_RETURN, which does:
|
||||
*
|
||||
* pop %ebp
|
||||
* pop %edx
|
||||
* pop %ecx
|
||||
*
|
||||
* Therefore, we invoke SYSRETL with EDX and R8-R10 zeroed to
|
||||
* avoid info leaks. R11 ends up with VDSO32_SYSENTER_RETURN's
|
||||
* address (already known to user code), and R12-R15 are
|
||||
* callee-saved and therefore don't contain any interesting
|
||||
* kernel data.
|
||||
*/
|
||||
USERGS_SYSRET32
|
||||
|
||||
#ifdef CONFIG_AUDITSYSCALL
|
||||
.macro auditsys_entry_common
|
||||
/*
|
||||
* At this point, registers hold syscall args in the 32-bit syscall ABI:
|
||||
* EAX is syscall number, the 6 args are in EBX,ECX,EDX,ESI,EDI,EBP.
|
||||
*
|
||||
* We want to pass them to __audit_syscall_entry(), which is a 64-bit
|
||||
* C function with 5 parameters, so shuffle them to match what
|
||||
* the function expects: RDI,RSI,RDX,RCX,R8.
|
||||
*/
|
||||
movl %esi, %r8d /* arg5 (R8 ) <= 4th syscall arg (ESI) */
|
||||
xchg %ecx, %edx /* arg4 (RCX) <= 3rd syscall arg (EDX) */
|
||||
/* arg3 (RDX) <= 2nd syscall arg (ECX) */
|
||||
movl %ebx, %esi /* arg2 (RSI) <= 1st syscall arg (EBX) */
|
||||
movl %eax, %edi /* arg1 (RDI) <= syscall number (EAX) */
|
||||
call __audit_syscall_entry
|
||||
|
||||
/*
|
||||
* We are going to jump back to the syscall dispatch code.
|
||||
* Prepare syscall args as required by the 64-bit C ABI.
|
||||
* Registers clobbered by __audit_syscall_entry() are
|
||||
* loaded from pt_regs on stack:
|
||||
*/
|
||||
movl ORIG_RAX(%rsp), %eax /* syscall number */
|
||||
movl %ebx, %edi /* arg1 */
|
||||
movl RCX(%rsp), %esi /* arg2 */
|
||||
movl RDX(%rsp), %edx /* arg3 */
|
||||
movl RSI(%rsp), %ecx /* arg4 */
|
||||
movl RDI(%rsp), %r8d /* arg5 */
|
||||
movl %ebp, %r9d /* arg6 */
|
||||
.endm
|
||||
|
||||
.macro auditsys_exit exit
|
||||
testl $(_TIF_ALLWORK_MASK & ~_TIF_SYSCALL_AUDIT), ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
|
||||
jnz ia32_ret_from_sys_call
|
||||
TRACE_IRQS_ON
|
||||
ENABLE_INTERRUPTS(CLBR_NONE)
|
||||
movl %eax, %esi /* second arg, syscall return value */
|
||||
cmpl $-MAX_ERRNO, %eax /* is it an error ? */
|
||||
jbe 1f
|
||||
movslq %eax, %rsi /* if error sign extend to 64 bits */
|
||||
1: setbe %al /* 1 if error, 0 if not */
|
||||
movzbl %al, %edi /* zero-extend that into %edi */
|
||||
call __audit_syscall_exit
|
||||
movq RAX(%rsp), %rax /* reload syscall return value */
|
||||
movl $(_TIF_ALLWORK_MASK & ~_TIF_SYSCALL_AUDIT), %edi
|
||||
DISABLE_INTERRUPTS(CLBR_NONE)
|
||||
TRACE_IRQS_OFF
|
||||
testl %edi, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
|
||||
jz \exit
|
||||
xorl %eax, %eax /* Do not leak kernel information */
|
||||
movq %rax, R11(%rsp)
|
||||
movq %rax, R10(%rsp)
|
||||
movq %rax, R9(%rsp)
|
||||
movq %rax, R8(%rsp)
|
||||
jmp int_with_check
|
||||
.endm
|
||||
|
||||
sysenter_auditsys:
|
||||
auditsys_entry_common
|
||||
jmp sysenter_dispatch
|
||||
|
||||
sysexit_audit:
|
||||
auditsys_exit sysexit_from_sys_call
|
||||
#endif
|
||||
|
||||
sysenter_fix_flags:
|
||||
pushq $(X86_EFLAGS_IF|X86_EFLAGS_FIXED)
|
||||
popfq
|
||||
jmp sysenter_flags_fixed
|
||||
|
||||
sysenter_tracesys:
|
||||
#ifdef CONFIG_AUDITSYSCALL
|
||||
testl $(_TIF_WORK_SYSCALL_ENTRY & ~_TIF_SYSCALL_AUDIT), ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
|
||||
jz sysenter_auditsys
|
||||
#endif
|
||||
SAVE_EXTRA_REGS
|
||||
xorl %eax, %eax /* Do not leak kernel information */
|
||||
movq %rax, R11(%rsp)
|
||||
movq %rax, R10(%rsp)
|
||||
movq %rax, R9(%rsp)
|
||||
movq %rax, R8(%rsp)
|
||||
movq %rsp, %rdi /* &pt_regs -> arg1 */
|
||||
call syscall_trace_enter
|
||||
|
||||
/* Reload arg registers from stack. (see sysenter_tracesys) */
|
||||
movl RCX(%rsp), %ecx
|
||||
movl RDX(%rsp), %edx
|
||||
movl RSI(%rsp), %esi
|
||||
movl RDI(%rsp), %edi
|
||||
movl %eax, %eax /* zero extension */
|
||||
|
||||
RESTORE_EXTRA_REGS
|
||||
jmp sysenter_do_call
|
||||
ENDPROC(entry_SYSENTER_compat)
|
||||
|
||||
/*
|
||||
* 32-bit SYSCALL instruction entry.
|
||||
*
|
||||
* 32-bit SYSCALL saves rip to rcx, clears rflags.RF, then saves rflags to r11,
|
||||
* then loads new ss, cs, and rip from previously programmed MSRs.
|
||||
* rflags gets masked by a value from another MSR (so CLD and CLAC
|
||||
* are not needed). SYSCALL does not save anything on the stack
|
||||
* and does not change rsp.
|
||||
*
|
||||
* Note: rflags saving+masking-with-MSR happens only in Long mode
|
||||
* (in legacy 32-bit mode, IF, RF and VM bits are cleared and that's it).
|
||||
* Don't get confused: rflags saving+masking depends on Long Mode Active bit
|
||||
* (EFER.LMA=1), NOT on bitness of userspace where SYSCALL executes
|
||||
* or target CS descriptor's L bit (SYSCALL does not read segment descriptors).
|
||||
*
|
||||
* Arguments:
|
||||
* eax system call number
|
||||
* ecx return address
|
||||
* ebx arg1
|
||||
* ebp arg2 (note: not saved in the stack frame, should not be touched)
|
||||
* edx arg3
|
||||
* esi arg4
|
||||
* edi arg5
|
||||
* esp user stack
|
||||
* 0(%esp) arg6
|
||||
*
|
||||
* This is purely a fast path. For anything complicated we use the int 0x80
|
||||
* path below. We set up a complete hardware stack frame to share code
|
||||
* with the int 0x80 path.
|
||||
*/
|
||||
ENTRY(entry_SYSCALL_compat)
|
||||
/*
|
||||
* Interrupts are off on entry.
|
||||
	 * We do not frame this tiny irq-off block with TRACE_IRQS_OFF/ON,
	 * it is too small to ever cause noticeable irq latency.
	 */
	SWAPGS_UNSAFE_STACK
	movl	%esp, %r8d
	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
	ENABLE_INTERRUPTS(CLBR_NONE)

	/* Zero-extending 32-bit regs, do not remove */
	movl	%eax, %eax

	/* Construct struct pt_regs on stack */
	pushq	$__USER32_DS		/* pt_regs->ss */
	pushq	%r8			/* pt_regs->sp */
	pushq	%r11			/* pt_regs->flags */
	pushq	$__USER32_CS		/* pt_regs->cs */
	pushq	%rcx			/* pt_regs->ip */
	pushq	%rax			/* pt_regs->orig_ax */
	pushq	%rdi			/* pt_regs->di */
	pushq	%rsi			/* pt_regs->si */
	pushq	%rdx			/* pt_regs->dx */
	pushq	%rbp			/* pt_regs->cx */
	movl	%ebp, %ecx
	pushq	$-ENOSYS		/* pt_regs->ax */
	sub	$(10*8), %rsp		/* pt_regs->r8-11, bp, bx, r12-15 not saved */

	/*
	 * No need to do an access_ok check here because r8 has been
	 * 32-bit zero extended:
	 */
	ASM_STAC
1:	movl	(%r8), %ebp
	_ASM_EXTABLE(1b, ia32_badarg)
	ASM_CLAC
	orl	$TS_COMPAT, ASM_THREAD_INFO(TI_status, %rsp, SIZEOF_PTREGS)
	testl	$_TIF_WORK_SYSCALL_ENTRY, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
	jnz	cstar_tracesys

cstar_do_call:
	/* 32-bit syscall -> 64-bit C ABI argument conversion */
	movl	%edi, %r8d		/* arg5 */
	movl	%ebp, %r9d		/* arg6 */
	xchg	%ecx, %esi		/* rsi:arg2, rcx:arg4 */
	movl	%ebx, %edi		/* arg1 */
	movl	%edx, %edx		/* arg3 (zero extension) */

cstar_dispatch:
	cmpq	$(IA32_NR_syscalls-1), %rax
	ja	1f

	call	*ia32_sys_call_table(, %rax, 8)
	movq	%rax, RAX(%rsp)
1:
	movl	RCX(%rsp), %ebp
	DISABLE_INTERRUPTS(CLBR_NONE)
	TRACE_IRQS_OFF
	testl	$_TIF_ALLWORK_MASK, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
	jnz	sysretl_audit

sysretl_from_sys_call:
	andl	$~TS_COMPAT, ASM_THREAD_INFO(TI_status, %rsp, SIZEOF_PTREGS)
	RESTORE_RSI_RDI_RDX
	movl	RIP(%rsp), %ecx
	movl	EFLAGS(%rsp), %r11d
	xorq	%r10, %r10
	xorq	%r9, %r9
	xorq	%r8, %r8
	TRACE_IRQS_ON
	movl	RSP(%rsp), %esp
	/*
	 * 64-bit->32-bit SYSRET restores eip from ecx,
	 * eflags from r11 (but RF and VM bits are forced to 0),
	 * cs and ss are loaded from MSRs.
	 * (Note: 32-bit->32-bit SYSRET is different: since r11
	 * does not exist, it merely sets eflags.IF=1).
	 *
	 * NB: On AMD CPUs with the X86_BUG_SYSRET_SS_ATTRS bug, the ss
	 * descriptor is not reinitialized. This means that we must
	 * avoid SYSRET with SS == NULL, which could happen if we schedule,
	 * exit the kernel, and re-enter using an interrupt vector. (All
	 * interrupt entries on x86_64 set SS to NULL.) We prevent that
	 * from happening by reloading SS in __switch_to.
	 */
	USERGS_SYSRET32

#ifdef CONFIG_AUDITSYSCALL
cstar_auditsys:
	auditsys_entry_common
	jmp	cstar_dispatch

sysretl_audit:
	auditsys_exit sysretl_from_sys_call
#endif

cstar_tracesys:
#ifdef CONFIG_AUDITSYSCALL
	testl	$(_TIF_WORK_SYSCALL_ENTRY & ~_TIF_SYSCALL_AUDIT), ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
	jz	cstar_auditsys
#endif
	SAVE_EXTRA_REGS
	xorl	%eax, %eax		/* Do not leak kernel information */
	movq	%rax, R11(%rsp)
	movq	%rax, R10(%rsp)
	movq	%rax, R9(%rsp)
	movq	%rax, R8(%rsp)
	movq	%rsp, %rdi		/* &pt_regs -> arg1 */
	call	syscall_trace_enter

	/* Reload arg registers from stack. (see sysenter_tracesys) */
	movl	RCX(%rsp), %ecx
	movl	RDX(%rsp), %edx
	movl	RSI(%rsp), %esi
	movl	RDI(%rsp), %edi
	movl	%eax, %eax		/* zero extension */

	RESTORE_EXTRA_REGS
	jmp	cstar_do_call
END(entry_SYSCALL_compat)
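The "reloading SS in __switch_to" mentioned in the comment above is, roughly, the C fragment below. This is a from-memory sketch of the X86_BUG_SYSRET_SS_ATTRS workaround in the context-switch path; treat the exact placement and form as an assumption, though savesegment()/loadsegment() and static_cpu_has_bug() are real kernel primitives:

	/*
	 * Sketch: if this CPU leaves SS's cached descriptor stale after
	 * SYSRET, make sure SS holds __KERNEL_DS again before returning
	 * into the scheduled-in task.
	 */
	if (static_cpu_has_bug(X86_BUG_SYSRET_SS_ATTRS)) {
		unsigned short ss_sel;

		savesegment(ss, ss_sel);	/* read current SS selector */
		if (ss_sel != __KERNEL_DS)
			loadsegment(ss, __KERNEL_DS);
	}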
ia32_badarg:
	ASM_CLAC
	movq	$-EFAULT, RAX(%rsp)
ia32_ret_from_sys_call:
	xorl	%eax, %eax		/* Do not leak kernel information */
	movq	%rax, R11(%rsp)
	movq	%rax, R10(%rsp)
	movq	%rax, R9(%rsp)
	movq	%rax, R8(%rsp)
	jmp	int_ret_from_sys_call

/*
 * Emulated IA32 system calls via int 0x80.
 *
 * Arguments:
 * eax  system call number
 * ebx  arg1
 * ecx  arg2
 * edx  arg3
 * esi  arg4
 * edi  arg5
 * ebp  arg6	(note: not saved in the stack frame, should not be touched)
 *
 * Notes:
 * Uses the same stack frame as the x86-64 version.
 * All registers except eax must be saved (but ptrace may violate that).
 * Arguments are zero extended. For system calls that want sign extension and
 * take long arguments a wrapper is needed. Most calls can just be called
 * directly.
 * Assumes it is only called from user space and entered with interrupts off.
 */
ENTRY(entry_INT80_compat)
	/*
	 * Interrupts are off on entry.
	 * We do not frame this tiny irq-off block with TRACE_IRQS_OFF/ON,
	 * it is too small to ever cause noticeable irq latency.
	 */
	PARAVIRT_ADJUST_EXCEPTION_FRAME
	SWAPGS
	ENABLE_INTERRUPTS(CLBR_NONE)

	/* Zero-extending 32-bit regs, do not remove */
	movl	%eax, %eax

	/* Construct struct pt_regs on stack (iret frame is already on stack) */
	pushq	%rax			/* pt_regs->orig_ax */
	pushq	%rdi			/* pt_regs->di */
	pushq	%rsi			/* pt_regs->si */
	pushq	%rdx			/* pt_regs->dx */
	pushq	%rcx			/* pt_regs->cx */
	pushq	$-ENOSYS		/* pt_regs->ax */
	pushq	$0			/* pt_regs->r8 */
	pushq	$0			/* pt_regs->r9 */
	pushq	$0			/* pt_regs->r10 */
	pushq	$0			/* pt_regs->r11 */
	cld
	sub	$(6*8), %rsp		/* pt_regs->bp, bx, r12-15 not saved */

	orl	$TS_COMPAT, ASM_THREAD_INFO(TI_status, %rsp, SIZEOF_PTREGS)
	testl	$_TIF_WORK_SYSCALL_ENTRY, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
	jnz	ia32_tracesys

ia32_do_call:
	/* 32-bit syscall -> 64-bit C ABI argument conversion */
	movl	%edi, %r8d		/* arg5 */
	movl	%ebp, %r9d		/* arg6 */
	xchg	%ecx, %esi		/* rsi:arg2, rcx:arg4 */
	movl	%ebx, %edi		/* arg1 */
	movl	%edx, %edx		/* arg3 (zero extension) */
	cmpq	$(IA32_NR_syscalls-1), %rax
	ja	1f

	call	*ia32_sys_call_table(, %rax, 8)
	movq	%rax, RAX(%rsp)
1:
	jmp	int_ret_from_sys_call

ia32_tracesys:
	SAVE_EXTRA_REGS
	movq	%rsp, %rdi		/* &pt_regs -> arg1 */
	call	syscall_trace_enter
	/*
	 * Reload arg registers from stack in case ptrace changed them.
	 * Don't reload %eax because syscall_trace_enter() returned
	 * the %rax value we should see. But do truncate it to 32 bits.
	 * If it's -1 to make us punt the syscall, then (u32)-1 is still
	 * an appropriately invalid value.
	 */
	movl	RCX(%rsp), %ecx
	movl	RDX(%rsp), %edx
	movl	RSI(%rsp), %esi
	movl	RDI(%rsp), %edi
	movl	%eax, %eax		/* zero extension */
	RESTORE_EXTRA_REGS
	jmp	ia32_do_call
END(entry_INT80_compat)
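To make the register convention documented above concrete, here is a hypothetical 32-bit userspace snippet (build with -m32) that issues write(2) through int $0x80. The syscall number 4 is __NR_write on i386; everything else follows the eax/ebx/ecx/edx table in the comment:

	/* Minimal demo of the int 0x80 calling convention. */
	int main(void)
	{
		const char msg[] = "hello via int 0x80\n";
		long ret;

		asm volatile ("int $0x80"
			      : "=a" (ret)		/* eax: return value */
			      : "a" (4),		/* eax: __NR_write   */
				"b" (1),		/* ebx: arg1 = fd    */
				"c" (msg),		/* ecx: arg2 = buf   */
				"d" (sizeof(msg) - 1)	/* edx: arg3 = count */
			      : "memory");
		return ret != sizeof(msg) - 1;
	}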
.macro PTREGSCALL label, func
	ALIGN
GLOBAL(\label)
	leaq	\func(%rip), %rax
	jmp	ia32_ptregs_common
.endm

PTREGSCALL stub32_rt_sigreturn, sys32_rt_sigreturn
PTREGSCALL stub32_sigreturn, sys32_sigreturn
PTREGSCALL stub32_fork, sys_fork
PTREGSCALL stub32_vfork, sys_vfork

	ALIGN
GLOBAL(stub32_clone)
	leaq	sys_clone(%rip), %rax
	/*
	 * The 32-bit clone ABI is: clone(..., int tls_val, int *child_tidptr).
	 * The 64-bit clone ABI is: clone(..., int *child_tidptr, int tls_val).
	 *
	 * The native 64-bit kernel's sys_clone() implements the latter,
	 * so we need to swap arguments here before calling it:
	 */
	xchg	%r8, %rcx
	jmp	ia32_ptregs_common

	ALIGN
ia32_ptregs_common:
	SAVE_EXTRA_REGS 8
	call	*%rax
	RESTORE_EXTRA_REGS 8
	ret
END(ia32_ptregs_common)
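In C terms, the xchg in stub32_clone performs the reordering sketched below. This is a hypothetical wrapper written for illustration only; sys_clone()'s exact prototype varies with kernel config, so treat the signature as an assumption:

	/* Sketch: what stub32_clone's argument swap does, expressed in C. */
	long clone_compat_order(unsigned long flags, unsigned long newsp,
				int *parent_tidptr, int tls_val,
				int *child_tidptr)
	{
		/* native 64-bit order: (..., child_tidptr, tls_val) */
		return sys_clone(flags, newsp, parent_tidptr,
				 child_tidptr, tls_val);
	}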
@@ -10,7 +10,7 @@
#else
#define SYM(sym, compat) sym
#define ia32_sys_call_table sys_call_table
#define __NR_ia32_syscall_max __NR_syscall_max
#define __NR_syscall_compat_max __NR_syscall_max
#endif

#define __SYSCALL_I386(nr, sym, compat) extern asmlinkage void SYM(sym, compat)(void) ;
@@ -23,11 +23,11 @@ typedef asmlinkage void (*sys_call_ptr_t)(void);

extern asmlinkage void sys_ni_syscall(void);

__visible const sys_call_ptr_t ia32_sys_call_table[__NR_ia32_syscall_max+1] = {
__visible const sys_call_ptr_t ia32_sys_call_table[__NR_syscall_compat_max+1] = {
	/*
	 * Smells like a compiler bug -- it doesn't work
	 * when the & below is removed.
	 */
	[0 ... __NR_ia32_syscall_max] = &sys_ni_syscall,
	[0 ... __NR_syscall_compat_max] = &sys_ni_syscall,
#include <asm/syscalls_32.h>
};
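The [0 ... N] = value form used above is a GCC designated range initializer: every slot is pre-filled with sys_ni_syscall and the subsequent #include overrides the implemented entries, since later initializers win. A standalone illustration (hypothetical demo code, GCC extension, not from the kernel):

	#include <stdio.h>

	typedef void (*handler_t)(void);
	static void dflt(void)     { puts("ENOSYS"); }
	static void handler3(void) { puts("syscall 3"); }

	static handler_t table[16] = {
		[0 ... 15] = dflt,	/* fill the whole table first... */
		[3] = handler3,		/* ...later entries override */
	};

	int main(void) { table[3](); table[4](); return 0; }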
@@ -1,5 +1,5 @@
out := $(obj)/../include/generated/asm
uapi := $(obj)/../include/generated/uapi/asm
out := $(obj)/../../include/generated/asm
uapi := $(obj)/../../include/generated/uapi/asm

# Create output directory if not already present
_dummy := $(shell [ -d '$(out)' ] || mkdir -p '$(out)') \
@@ -6,16 +6,14 @@
 */
#include <linux/linkage.h>
#include <asm/asm.h>
#include <asm/dwarf2.h>

	/* put return address in eax (arg1) */
	.macro THUNK name, func, put_ret_addr_in_eax=0
	.globl \name
\name:
	CFI_STARTPROC
	pushl_cfi_reg eax
	pushl_cfi_reg ecx
	pushl_cfi_reg edx
	pushl %eax
	pushl %ecx
	pushl %edx

	.if \put_ret_addr_in_eax
	/* Place EIP in the arg1 */
@@ -23,11 +21,10 @@
	.endif

	call \func
	popl_cfi_reg edx
	popl_cfi_reg ecx
	popl_cfi_reg eax
	popl %edx
	popl %ecx
	popl %eax
	ret
	CFI_ENDPROC
	_ASM_NOKPROBE(\name)
	.endm
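For context, the THUNK macro above is instantiated elsewhere in this file along the lines of the following (illustrative; the instantiations themselves are outside the hunk shown, so treat the exact lines as an assumption):

	#ifdef CONFIG_TRACE_IRQFLAGS
		/* put_ret_addr_in_eax=1: the caller's EIP becomes arg1 */
		THUNK trace_hardirqs_on_thunk, trace_hardirqs_on_caller, 1
		THUNK trace_hardirqs_off_thunk, trace_hardirqs_off_caller, 1
	#endif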
@@ -6,35 +6,32 @@
 * Subject to the GNU public license, v.2. No warranty of any kind.
 */
#include <linux/linkage.h>
#include <asm/dwarf2.h>
#include <asm/calling.h>
#include "calling.h"
#include <asm/asm.h>

	/* rdi: arg1 ... normal C conventions. rax is saved/restored. */
	.macro THUNK name, func, put_ret_addr_in_rdi=0
	.globl \name
\name:
	CFI_STARTPROC

	/* this one pushes 9 elems, the next one would be %rIP */
	pushq_cfi_reg rdi
	pushq_cfi_reg rsi
	pushq_cfi_reg rdx
	pushq_cfi_reg rcx
	pushq_cfi_reg rax
	pushq_cfi_reg r8
	pushq_cfi_reg r9
	pushq_cfi_reg r10
	pushq_cfi_reg r11
	pushq %rdi
	pushq %rsi
	pushq %rdx
	pushq %rcx
	pushq %rax
	pushq %r8
	pushq %r9
	pushq %r10
	pushq %r11

	.if \put_ret_addr_in_rdi
	/* 9*8(%rsp) is return addr on stack */
	movq_cfi_restore 9*8, rdi
	movq 9*8(%rsp), %rdi
	.endif

	call \func
	jmp  restore
	CFI_ENDPROC
	_ASM_NOKPROBE(\name)
	.endm

@@ -55,19 +52,16 @@
#if defined(CONFIG_TRACE_IRQFLAGS) \
 || defined(CONFIG_DEBUG_LOCK_ALLOC) \
 || defined(CONFIG_PREEMPT)
	CFI_STARTPROC
	CFI_ADJUST_CFA_OFFSET 9*8
restore:
	popq_cfi_reg r11
	popq_cfi_reg r10
	popq_cfi_reg r9
	popq_cfi_reg r8
	popq_cfi_reg rax
	popq_cfi_reg rcx
	popq_cfi_reg rdx
	popq_cfi_reg rsi
	popq_cfi_reg rdi
	popq %r11
	popq %r10
	popq %r9
	popq %r8
	popq %rax
	popq %rcx
	popq %rdx
	popq %rsi
	popq %rdi
	ret
	CFI_ENDPROC
	_ASM_NOKPROBE(restore)
#endif
arch/x86/entry/vsyscall/Makefile (new file, 7 lines)
@@ -0,0 +1,7 @@
#
# Makefile for the x86 low level vsyscall code
#
obj-y					:= vsyscall_gtod.o

obj-$(CONFIG_X86_VSYSCALL_EMULATION)	+= vsyscall_64.o vsyscall_emu_64.o
@@ -24,6 +24,6 @@ TRACE_EVENT(emulate_vsyscall,
#endif

#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH ../../arch/x86/kernel
#define TRACE_INCLUDE_PATH ../../arch/x86/entry/vsyscall/
#define TRACE_INCLUDE_FILE vsyscall_trace
#include <trace/define_trace.h>
@@ -2,7 +2,7 @@
# Makefile for the ia32 kernel emulation subsystem.
#

obj-$(CONFIG_IA32_EMULATION) := ia32entry.o sys_ia32.o ia32_signal.o
obj-$(CONFIG_IA32_EMULATION) := sys_ia32.o ia32_signal.o

obj-$(CONFIG_IA32_AOUT) += ia32_aout.o
@@ -1,611 +0,0 @@
/*
 * Compatibility mode system call entry point for x86-64.
 *
 * Copyright 2000-2002 Andi Kleen, SuSE Labs.
 */

#include <asm/dwarf2.h>
#include <asm/calling.h>
#include <asm/asm-offsets.h>
#include <asm/current.h>
#include <asm/errno.h>
#include <asm/ia32_unistd.h>
#include <asm/thread_info.h>
#include <asm/segment.h>
#include <asm/irqflags.h>
#include <asm/asm.h>
#include <asm/smap.h>
#include <linux/linkage.h>
#include <linux/err.h>

/* Avoid __ASSEMBLER__'ifying <linux/audit.h> just for this. */
#include <linux/elf-em.h>
#define AUDIT_ARCH_I386		(EM_386|__AUDIT_ARCH_LE)
#define __AUDIT_ARCH_LE		0x40000000

#ifndef CONFIG_AUDITSYSCALL
#define sysexit_audit		ia32_ret_from_sys_call
#define sysretl_audit		ia32_ret_from_sys_call
#endif

	.section .entry.text, "ax"

	/* clobbers %rax */
	.macro CLEAR_RREGS _r9=rax
	xorl	%eax,%eax
	movq	%rax,R11(%rsp)
	movq	%rax,R10(%rsp)
	movq	%\_r9,R9(%rsp)
	movq	%rax,R8(%rsp)
	.endm

	/*
	 * Reload arg registers from stack in case ptrace changed them.
	 * We don't reload %eax because syscall_trace_enter() returned
	 * the %rax value we should see. Instead, we just truncate that
	 * value to 32 bits again as we did on entry from user mode.
	 * If it's a new value set by user_regset during entry tracing,
	 * this matches the normal truncation of the user-mode value.
	 * If it's -1 to make us punt the syscall, then (u32)-1 is still
	 * an appropriately invalid value.
	 */
	.macro LOAD_ARGS32 _r9=0
	.if \_r9
	movl	R9(%rsp),%r9d
	.endif
	movl	RCX(%rsp),%ecx
	movl	RDX(%rsp),%edx
	movl	RSI(%rsp),%esi
	movl	RDI(%rsp),%edi
	movl	%eax,%eax	/* zero extension */
	.endm

	.macro CFI_STARTPROC32 simple
	CFI_STARTPROC	\simple
	CFI_UNDEFINED	r8
	CFI_UNDEFINED	r9
	CFI_UNDEFINED	r10
	CFI_UNDEFINED	r11
	CFI_UNDEFINED	r12
	CFI_UNDEFINED	r13
	CFI_UNDEFINED	r14
	CFI_UNDEFINED	r15
	.endm

#ifdef CONFIG_PARAVIRT
ENTRY(native_usergs_sysret32)
	swapgs
	sysretl
ENDPROC(native_usergs_sysret32)

ENTRY(native_irq_enable_sysexit)
	swapgs
	sti
	sysexit
ENDPROC(native_irq_enable_sysexit)
#endif
/*
 * 32bit SYSENTER instruction entry.
 *
 * SYSENTER loads ss, rsp, cs, and rip from previously programmed MSRs.
 * IF and VM in rflags are cleared (IOW: interrupts are off).
 * SYSENTER does not save anything on the stack,
 * and does not save old rip (!!!) and rflags.
 *
 * Arguments:
 * eax  system call number
 * ebx  arg1
 * ecx  arg2
 * edx  arg3
 * esi  arg4
 * edi  arg5
 * ebp  user stack
 * 0(%ebp) arg6
 *
 * This is purely a fast path. For anything complicated we use the int 0x80
 * path below. We set up a complete hardware stack frame to share code
 * with the int 0x80 path.
 */
ENTRY(ia32_sysenter_target)
	CFI_STARTPROC32	simple
	CFI_SIGNAL_FRAME
	CFI_DEF_CFA	rsp,0
	CFI_REGISTER	rsp,rbp

	/*
	 * Interrupts are off on entry.
	 * We do not frame this tiny irq-off block with TRACE_IRQS_OFF/ON,
	 * it is too small to ever cause noticeable irq latency.
	 */
	SWAPGS_UNSAFE_STACK
	movq	PER_CPU_VAR(cpu_tss + TSS_sp0), %rsp
	ENABLE_INTERRUPTS(CLBR_NONE)

	/* Zero-extending 32-bit regs, do not remove */
	movl	%ebp, %ebp
	movl	%eax, %eax

	movl	ASM_THREAD_INFO(TI_sysenter_return, %rsp, 0), %r10d
	CFI_REGISTER rip,r10

	/* Construct struct pt_regs on stack */
	pushq_cfi	$__USER32_DS		/* pt_regs->ss */
	pushq_cfi	%rbp			/* pt_regs->sp */
	CFI_REL_OFFSET rsp,0
	pushfq_cfi				/* pt_regs->flags */
	pushq_cfi	$__USER32_CS		/* pt_regs->cs */
	pushq_cfi	%r10			/* pt_regs->ip = thread_info->sysenter_return */
	CFI_REL_OFFSET rip,0
	pushq_cfi_reg	rax			/* pt_regs->orig_ax */
	pushq_cfi_reg	rdi			/* pt_regs->di */
	pushq_cfi_reg	rsi			/* pt_regs->si */
	pushq_cfi_reg	rdx			/* pt_regs->dx */
	pushq_cfi_reg	rcx			/* pt_regs->cx */
	pushq_cfi_reg	rax			/* pt_regs->ax */
	cld
	sub	$(10*8),%rsp	/* pt_regs->r8-11,bp,bx,r12-15 not saved */
	CFI_ADJUST_CFA_OFFSET 10*8

	/*
	 * no need to do an access_ok check here because rbp has been
	 * 32bit zero extended
	 */
	ASM_STAC
1:	movl	(%rbp),%ebp
	_ASM_EXTABLE(1b,ia32_badarg)
	ASM_CLAC

	/*
	 * Sysenter doesn't filter flags, so we need to clear NT
	 * ourselves. To save a few cycles, we can check whether
	 * NT was set instead of doing an unconditional popfq.
	 */
	testl	$X86_EFLAGS_NT,EFLAGS(%rsp)
	jnz	sysenter_fix_flags
sysenter_flags_fixed:

	orl	$TS_COMPAT, ASM_THREAD_INFO(TI_status, %rsp, SIZEOF_PTREGS)
	testl	$_TIF_WORK_SYSCALL_ENTRY, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
	CFI_REMEMBER_STATE
	jnz	sysenter_tracesys
	cmpq	$(IA32_NR_syscalls-1),%rax
	ja	ia32_badsys
sysenter_do_call:
	/* 32bit syscall -> 64bit C ABI argument conversion */
	movl	%edi,%r8d	/* arg5 */
	movl	%ebp,%r9d	/* arg6 */
	xchg	%ecx,%esi	/* rsi:arg2, rcx:arg4 */
	movl	%ebx,%edi	/* arg1 */
	movl	%edx,%edx	/* arg3 (zero extension) */
sysenter_dispatch:
	call	*ia32_sys_call_table(,%rax,8)
	movq	%rax,RAX(%rsp)
	DISABLE_INTERRUPTS(CLBR_NONE)
	TRACE_IRQS_OFF
	testl	$_TIF_ALLWORK_MASK, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
	jnz	sysexit_audit
sysexit_from_sys_call:
	/*
	 * NB: SYSEXIT is not obviously safe for 64-bit kernels -- an
	 * NMI between STI and SYSEXIT has poorly specified behavior,
	 * and an NMI followed by an IRQ with usergs is fatal. So
	 * we just pretend we're using SYSEXIT but we really use
	 * SYSRETL instead.
	 *
	 * This code path is still called 'sysexit' because it pairs
	 * with 'sysenter' and it uses the SYSENTER calling convention.
	 */
	andl	$~TS_COMPAT,ASM_THREAD_INFO(TI_status, %rsp, SIZEOF_PTREGS)
	movl	RIP(%rsp),%ecx		/* User %eip */
	CFI_REGISTER rip,rcx
	RESTORE_RSI_RDI
	xorl	%edx,%edx		/* avoid info leaks */
	xorq	%r8,%r8
	xorq	%r9,%r9
	xorq	%r10,%r10
	movl	EFLAGS(%rsp),%r11d	/* User eflags */
	/*CFI_RESTORE rflags*/
	TRACE_IRQS_ON

	/*
	 * SYSRETL works even on Intel CPUs. Use it in preference to SYSEXIT,
	 * since it avoids a dicey window with interrupts enabled.
	 */
	movl	RSP(%rsp),%esp

	/*
	 * USERGS_SYSRET32 does:
	 * gsbase = user's gs base
	 * eip = ecx
	 * rflags = r11
	 * cs = __USER32_CS
	 * ss = __USER_DS
	 *
	 * The prologue set RIP(%rsp) to VDSO32_SYSENTER_RETURN, which does:
	 *
	 * pop %ebp
	 * pop %edx
	 * pop %ecx
	 *
	 * Therefore, we invoke SYSRETL with EDX and R8-R10 zeroed to
	 * avoid info leaks. R11 ends up with VDSO32_SYSENTER_RETURN's
	 * address (already known to user code), and R12-R15 are
	 * callee-saved and therefore don't contain any interesting
	 * kernel data.
	 */
	USERGS_SYSRET32

	CFI_RESTORE_STATE

#ifdef CONFIG_AUDITSYSCALL
	.macro auditsys_entry_common
	movl	%esi,%r8d		/* 5th arg: 4th syscall arg */
	movl	%ecx,%r9d		/*swap with edx*/
	movl	%edx,%ecx		/* 4th arg: 3rd syscall arg */
	movl	%r9d,%edx		/* 3rd arg: 2nd syscall arg */
	movl	%ebx,%esi		/* 2nd arg: 1st syscall arg */
	movl	%eax,%edi		/* 1st arg: syscall number */
	call	__audit_syscall_entry
	movl	RAX(%rsp),%eax		/* reload syscall number */
	cmpq	$(IA32_NR_syscalls-1),%rax
	ja	ia32_badsys
	movl	%ebx,%edi		/* reload 1st syscall arg */
	movl	RCX(%rsp),%esi		/* reload 2nd syscall arg */
	movl	RDX(%rsp),%edx		/* reload 3rd syscall arg */
	movl	RSI(%rsp),%ecx		/* reload 4th syscall arg */
	movl	RDI(%rsp),%r8d		/* reload 5th syscall arg */
	.endm

	.macro auditsys_exit exit
	testl	$(_TIF_ALLWORK_MASK & ~_TIF_SYSCALL_AUDIT), ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
	jnz	ia32_ret_from_sys_call
	TRACE_IRQS_ON
	ENABLE_INTERRUPTS(CLBR_NONE)
	movl	%eax,%esi		/* second arg, syscall return value */
	cmpl	$-MAX_ERRNO,%eax	/* is it an error ? */
	jbe	1f
	movslq	%eax, %rsi		/* if error sign extend to 64 bits */
1:	setbe	%al			/* 1 if error, 0 if not */
	movzbl	%al,%edi		/* zero-extend that into %edi */
	call	__audit_syscall_exit
	movq	RAX(%rsp),%rax		/* reload syscall return value */
	movl	$(_TIF_ALLWORK_MASK & ~_TIF_SYSCALL_AUDIT),%edi
	DISABLE_INTERRUPTS(CLBR_NONE)
	TRACE_IRQS_OFF
	testl	%edi, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
	jz	\exit
	CLEAR_RREGS
	jmp	int_with_check
	.endm

sysenter_auditsys:
	auditsys_entry_common
	movl	%ebp,%r9d		/* reload 6th syscall arg */
	jmp	sysenter_dispatch

sysexit_audit:
	auditsys_exit sysexit_from_sys_call
#endif

sysenter_fix_flags:
	pushq_cfi $(X86_EFLAGS_IF|X86_EFLAGS_FIXED)
	popfq_cfi
	jmp	sysenter_flags_fixed

sysenter_tracesys:
#ifdef CONFIG_AUDITSYSCALL
	testl	$(_TIF_WORK_SYSCALL_ENTRY & ~_TIF_SYSCALL_AUDIT), ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
	jz	sysenter_auditsys
#endif
	SAVE_EXTRA_REGS
	CLEAR_RREGS
	movq	$-ENOSYS,RAX(%rsp)	/* ptrace can change this for a bad syscall */
	movq	%rsp,%rdi		/* &pt_regs -> arg1 */
	call	syscall_trace_enter
	LOAD_ARGS32			/* reload args from stack in case ptrace changed it */
	RESTORE_EXTRA_REGS
	cmpq	$(IA32_NR_syscalls-1),%rax
	ja	int_ret_from_sys_call	/* sysenter_tracesys has set RAX(%rsp) */
	jmp	sysenter_do_call
	CFI_ENDPROC
ENDPROC(ia32_sysenter_target)
/*
 * 32bit SYSCALL instruction entry.
 *
 * 32bit SYSCALL saves rip to rcx, clears rflags.RF, then saves rflags to r11,
 * then loads new ss, cs, and rip from previously programmed MSRs.
 * rflags gets masked by a value from another MSR (so CLD and CLAC
 * are not needed). SYSCALL does not save anything on the stack
 * and does not change rsp.
 *
 * Note: rflags saving+masking-with-MSR happens only in Long mode
 * (in legacy 32bit mode, IF, RF and VM bits are cleared and that's it).
 * Don't get confused: rflags saving+masking depends on Long Mode Active bit
 * (EFER.LMA=1), NOT on bitness of userspace where SYSCALL executes
 * or target CS descriptor's L bit (SYSCALL does not read segment descriptors).
 *
 * Arguments:
 * eax  system call number
 * ecx  return address
 * ebx  arg1
 * ebp  arg2	(note: not saved in the stack frame, should not be touched)
 * edx  arg3
 * esi  arg4
 * edi  arg5
 * esp  user stack
 * 0(%esp) arg6
 *
 * This is purely a fast path. For anything complicated we use the int 0x80
 * path below. We set up a complete hardware stack frame to share code
 * with the int 0x80 path.
 */
ENTRY(ia32_cstar_target)
	CFI_STARTPROC32	simple
	CFI_SIGNAL_FRAME
	CFI_DEF_CFA	rsp,0
	CFI_REGISTER	rip,rcx
	/*CFI_REGISTER	rflags,r11*/

	/*
	 * Interrupts are off on entry.
	 * We do not frame this tiny irq-off block with TRACE_IRQS_OFF/ON,
	 * it is too small to ever cause noticeable irq latency.
	 */
	SWAPGS_UNSAFE_STACK
	movl	%esp,%r8d
	CFI_REGISTER	rsp,r8
	movq	PER_CPU_VAR(kernel_stack),%rsp
	ENABLE_INTERRUPTS(CLBR_NONE)

	/* Zero-extending 32-bit regs, do not remove */
	movl	%eax,%eax

	/* Construct struct pt_regs on stack */
	pushq_cfi	$__USER32_DS		/* pt_regs->ss */
	pushq_cfi	%r8			/* pt_regs->sp */
	CFI_REL_OFFSET rsp,0
	pushq_cfi	%r11			/* pt_regs->flags */
	pushq_cfi	$__USER32_CS		/* pt_regs->cs */
	pushq_cfi	%rcx			/* pt_regs->ip */
	CFI_REL_OFFSET rip,0
	pushq_cfi_reg	rax			/* pt_regs->orig_ax */
	pushq_cfi_reg	rdi			/* pt_regs->di */
	pushq_cfi_reg	rsi			/* pt_regs->si */
	pushq_cfi_reg	rdx			/* pt_regs->dx */
	pushq_cfi_reg	rbp			/* pt_regs->cx */
	movl	%ebp,%ecx
	pushq_cfi_reg	rax			/* pt_regs->ax */
	sub	$(10*8),%rsp	/* pt_regs->r8-11,bp,bx,r12-15 not saved */
	CFI_ADJUST_CFA_OFFSET 10*8

	/*
	 * no need to do an access_ok check here because r8 has been
	 * 32bit zero extended
	 */
	ASM_STAC
1:	movl	(%r8),%r9d
	_ASM_EXTABLE(1b,ia32_badarg)
	ASM_CLAC
	orl	$TS_COMPAT, ASM_THREAD_INFO(TI_status, %rsp, SIZEOF_PTREGS)
	testl	$_TIF_WORK_SYSCALL_ENTRY, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
	CFI_REMEMBER_STATE
	jnz	cstar_tracesys
	cmpq	$IA32_NR_syscalls-1,%rax
	ja	ia32_badsys
cstar_do_call:
	/* 32bit syscall -> 64bit C ABI argument conversion */
	movl	%edi,%r8d	/* arg5 */
	/* r9 already loaded */	/* arg6 */
	xchg	%ecx,%esi	/* rsi:arg2, rcx:arg4 */
	movl	%ebx,%edi	/* arg1 */
	movl	%edx,%edx	/* arg3 (zero extension) */
cstar_dispatch:
	call	*ia32_sys_call_table(,%rax,8)
	movq	%rax,RAX(%rsp)
	DISABLE_INTERRUPTS(CLBR_NONE)
	TRACE_IRQS_OFF
	testl	$_TIF_ALLWORK_MASK, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
	jnz	sysretl_audit
sysretl_from_sys_call:
	andl	$~TS_COMPAT, ASM_THREAD_INFO(TI_status, %rsp, SIZEOF_PTREGS)
	RESTORE_RSI_RDI_RDX
	movl	RIP(%rsp),%ecx
	CFI_REGISTER	rip,rcx
	movl	EFLAGS(%rsp),%r11d
	/*CFI_REGISTER	rflags,r11*/
	xorq	%r10,%r10
	xorq	%r9,%r9
	xorq	%r8,%r8
	TRACE_IRQS_ON
	movl	RSP(%rsp),%esp
	CFI_RESTORE rsp
	/*
	 * 64bit->32bit SYSRET restores eip from ecx,
	 * eflags from r11 (but RF and VM bits are forced to 0),
	 * cs and ss are loaded from MSRs.
	 * (Note: 32bit->32bit SYSRET is different: since r11
	 * does not exist, it merely sets eflags.IF=1).
	 *
	 * NB: On AMD CPUs with the X86_BUG_SYSRET_SS_ATTRS bug, the ss
	 * descriptor is not reinitialized. This means that we must
	 * avoid SYSRET with SS == NULL, which could happen if we schedule,
	 * exit the kernel, and re-enter using an interrupt vector. (All
	 * interrupt entries on x86_64 set SS to NULL.) We prevent that
	 * from happening by reloading SS in __switch_to.
	 */
	USERGS_SYSRET32

#ifdef CONFIG_AUDITSYSCALL
cstar_auditsys:
	CFI_RESTORE_STATE
	movl	%r9d,R9(%rsp)	/* register to be clobbered by call */
	auditsys_entry_common
	movl	R9(%rsp),%r9d	/* reload 6th syscall arg */
	jmp	cstar_dispatch

sysretl_audit:
	auditsys_exit sysretl_from_sys_call
#endif

cstar_tracesys:
#ifdef CONFIG_AUDITSYSCALL
	testl	$(_TIF_WORK_SYSCALL_ENTRY & ~_TIF_SYSCALL_AUDIT), ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
	jz	cstar_auditsys
#endif
	xchgl	%r9d,%ebp
	SAVE_EXTRA_REGS
	CLEAR_RREGS r9
	movq	$-ENOSYS,RAX(%rsp)	/* ptrace can change this for a bad syscall */
	movq	%rsp,%rdi		/* &pt_regs -> arg1 */
	call	syscall_trace_enter
	LOAD_ARGS32 1			/* reload args from stack in case ptrace changed it */
	RESTORE_EXTRA_REGS
	xchgl	%ebp,%r9d
	cmpq	$(IA32_NR_syscalls-1),%rax
	ja	int_ret_from_sys_call	/* cstar_tracesys has set RAX(%rsp) */
	jmp	cstar_do_call
END(ia32_cstar_target)
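The "masked by a value from another MSR" note above refers to MSR_SYSCALL_MASK (IA32_FMASK). Roughly, boot-time entry setup programs it along these lines; this is a from-memory sketch of syscall_init(), so treat the exact flag set as an assumption:

	/* Any rflags bit set here is cleared by hardware on SYSCALL entry. */
	wrmsrl(MSR_SYSCALL_MASK,
	       X86_EFLAGS_TF | X86_EFLAGS_DF | X86_EFLAGS_IF |
	       X86_EFLAGS_IOPL | X86_EFLAGS_AC | X86_EFLAGS_NT);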
ia32_badarg:
	ASM_CLAC
	movq	$-EFAULT,%rax
	jmp	ia32_sysret
	CFI_ENDPROC

/*
 * Emulated IA32 system calls via int 0x80.
 *
 * Arguments:
 * eax  system call number
 * ebx  arg1
 * ecx  arg2
 * edx  arg3
 * esi  arg4
 * edi  arg5
 * ebp  arg6	(note: not saved in the stack frame, should not be touched)
 *
 * Notes:
 * Uses the same stack frame as the x86-64 version.
 * All registers except eax must be saved (but ptrace may violate that).
 * Arguments are zero extended. For system calls that want sign extension and
 * take long arguments a wrapper is needed. Most calls can just be called
 * directly.
 * Assumes it is only called from user space and entered with interrupts off.
 */

ENTRY(ia32_syscall)
	CFI_STARTPROC32	simple
	CFI_SIGNAL_FRAME
	CFI_DEF_CFA	rsp,5*8
	/*CFI_REL_OFFSET	ss,4*8 */
	CFI_REL_OFFSET	rsp,3*8
	/*CFI_REL_OFFSET	rflags,2*8 */
	/*CFI_REL_OFFSET	cs,1*8 */
	CFI_REL_OFFSET	rip,0*8

	/*
	 * Interrupts are off on entry.
	 * We do not frame this tiny irq-off block with TRACE_IRQS_OFF/ON,
	 * it is too small to ever cause noticeable irq latency.
	 */
	PARAVIRT_ADJUST_EXCEPTION_FRAME
	SWAPGS
	ENABLE_INTERRUPTS(CLBR_NONE)

	/* Zero-extending 32-bit regs, do not remove */
	movl	%eax,%eax

	/* Construct struct pt_regs on stack (iret frame is already on stack) */
	pushq_cfi_reg	rax			/* pt_regs->orig_ax */
	pushq_cfi_reg	rdi			/* pt_regs->di */
	pushq_cfi_reg	rsi			/* pt_regs->si */
	pushq_cfi_reg	rdx			/* pt_regs->dx */
	pushq_cfi_reg	rcx			/* pt_regs->cx */
	pushq_cfi_reg	rax			/* pt_regs->ax */
	cld
	sub	$(10*8),%rsp	/* pt_regs->r8-11,bp,bx,r12-15 not saved */
	CFI_ADJUST_CFA_OFFSET 10*8

	orl	$TS_COMPAT, ASM_THREAD_INFO(TI_status, %rsp, SIZEOF_PTREGS)
	testl	$_TIF_WORK_SYSCALL_ENTRY, ASM_THREAD_INFO(TI_flags, %rsp, SIZEOF_PTREGS)
	jnz	ia32_tracesys
	cmpq	$(IA32_NR_syscalls-1),%rax
	ja	ia32_badsys
ia32_do_call:
	/* 32bit syscall -> 64bit C ABI argument conversion */
	movl	%edi,%r8d	/* arg5 */
	movl	%ebp,%r9d	/* arg6 */
	xchg	%ecx,%esi	/* rsi:arg2, rcx:arg4 */
	movl	%ebx,%edi	/* arg1 */
	movl	%edx,%edx	/* arg3 (zero extension) */
	call	*ia32_sys_call_table(,%rax,8)	# xxx: rip relative
ia32_sysret:
	movq	%rax,RAX(%rsp)
ia32_ret_from_sys_call:
	CLEAR_RREGS
	jmp	int_ret_from_sys_call

ia32_tracesys:
	SAVE_EXTRA_REGS
	CLEAR_RREGS
	movq	$-ENOSYS,RAX(%rsp)	/* ptrace can change this for a bad syscall */
	movq	%rsp,%rdi		/* &pt_regs -> arg1 */
	call	syscall_trace_enter
	LOAD_ARGS32			/* reload args from stack in case ptrace changed it */
	RESTORE_EXTRA_REGS
	cmpq	$(IA32_NR_syscalls-1),%rax
	ja	int_ret_from_sys_call	/* ia32_tracesys has set RAX(%rsp) */
	jmp	ia32_do_call
END(ia32_syscall)

ia32_badsys:
	movq	$0,ORIG_RAX(%rsp)
	movq	$-ENOSYS,%rax
	jmp	ia32_sysret

	CFI_ENDPROC

	.macro PTREGSCALL label, func
	ALIGN
GLOBAL(\label)
	leaq	\func(%rip),%rax
	jmp	ia32_ptregs_common
	.endm

	CFI_STARTPROC32

	PTREGSCALL stub32_rt_sigreturn, sys32_rt_sigreturn
	PTREGSCALL stub32_sigreturn, sys32_sigreturn
	PTREGSCALL stub32_fork, sys_fork
	PTREGSCALL stub32_vfork, sys_vfork

	ALIGN
GLOBAL(stub32_clone)
	leaq	sys_clone(%rip),%rax
	mov	%r8, %rcx
	jmp	ia32_ptregs_common

	ALIGN
ia32_ptregs_common:
	CFI_ENDPROC
	CFI_STARTPROC32	simple
	CFI_SIGNAL_FRAME
	CFI_DEF_CFA	rsp,SIZEOF_PTREGS
	CFI_REL_OFFSET	rax,RAX
	CFI_REL_OFFSET	rcx,RCX
	CFI_REL_OFFSET	rdx,RDX
	CFI_REL_OFFSET	rsi,RSI
	CFI_REL_OFFSET	rdi,RDI
	CFI_REL_OFFSET	rip,RIP
/*	CFI_REL_OFFSET	cs,CS*/
/*	CFI_REL_OFFSET	rflags,EFLAGS*/
	CFI_REL_OFFSET	rsp,RSP
/*	CFI_REL_OFFSET	ss,SS*/
	SAVE_EXTRA_REGS 8
	call	*%rax
	RESTORE_EXTRA_REGS 8
	ret
	CFI_ENDPROC
END(ia32_ptregs_common)
@@ -18,6 +18,12 @@
	.endm
#endif

/*
 * Issue one struct alt_instr descriptor entry (need to put it into
 * the section .altinstructions, see below). This entry contains
 * enough information for the alternatives patching code to patch an
 * instruction. See apply_alternatives().
 */
.macro altinstruction_entry orig alt feature orig_len alt_len pad_len
	.long \orig - .
	.long \alt - .
@@ -27,6 +33,12 @@
	.byte \pad_len
.endm

/*
 * Define an alternative between two instructions. If @feature is
 * present, early code in apply_alternatives() replaces @oldinstr with
 * @newinstr. ".skip" directive takes care of proper instruction padding
 * in case @newinstr is longer than @oldinstr.
 */
.macro ALTERNATIVE oldinstr, newinstr, feature
140:
	\oldinstr
@@ -55,6 +67,12 @@
 */
#define alt_max_short(a, b)	((a) ^ (((a) ^ (b)) & -(-((a) < (b)))))


/*
 * Same as ALTERNATIVE macro above but for two alternatives. If CPU
 * has @feature1, it replaces @oldinstr with @newinstr1. If CPU has
 * @feature2, it replaces @oldinstr with @newinstr2.
 */
.macro ALTERNATIVE_2 oldinstr, newinstr1, feature1, newinstr2, feature2
140:
	\oldinstr
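A note on alt_max_short(): it is the classic branchless max from the bit-twiddling-hacks collection. In GAS expressions a true comparison evaluates to -1 rather than C's 1, so (a < b) is already an all-ones mask and the double negation preserves it: when a < b the mask selects a ^ (a ^ b) == b, otherwise the mask is 0 and the result stays a. A C rendering of the same idea needs only a single negation to build the mask (hypothetical demo, not kernel code):

	#include <assert.h>

	/* -(a < b) is 0 or ~0 in C; the mask picks b or leaves a. */
	#define MAX_BRANCHLESS(a, b) ((a) ^ (((a) ^ (b)) & -((a) < (b))))

	int main(void)
	{
		assert(MAX_BRANCHLESS(3, 7) == 7);
		assert(MAX_BRANCHLESS(7, 3) == 7);
		assert(MAX_BRANCHLESS(5, 5) == 5);
		return 0;
	}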
@@ -644,6 +644,12 @@ static inline void entering_ack_irq(void)
	entering_irq();
}

static inline void ipi_entering_ack_irq(void)
{
	ack_APIC_irq();
	irq_enter();
}

static inline void exiting_irq(void)
{
	irq_exit();
@@ -63,6 +63,31 @@
	_ASM_ALIGN ;						\
	_ASM_PTR (entry);					\
	.popsection

.macro ALIGN_DESTINATION
	/* check for bad alignment of destination */
	movl %edi,%ecx
	andl $7,%ecx
	jz 102f				/* already aligned */
	subl $8,%ecx
	negl %ecx
	subl %ecx,%edx
100:	movb (%rsi),%al
101:	movb %al,(%rdi)
	incq %rsi
	incq %rdi
	decl %ecx
	jnz 100b
102:
	.section .fixup,"ax"
103:	addl %ecx,%edx			/* ecx is zerorest also */
	jmp copy_user_handle_tail
	.previous

	_ASM_EXTABLE(100b,103b)
	_ASM_EXTABLE(101b,103b)
.endm

#else
# define _ASM_EXTABLE(from,to)					\
	" .pushsection \"__ex_table\",\"a\"\n"			\
@@ -22,7 +22,7 @@
 *
 * Atomically reads the value of @v.
 */
static inline int atomic_read(const atomic_t *v)
static __always_inline int atomic_read(const atomic_t *v)
{
	return ACCESS_ONCE((v)->counter);
}
@@ -34,7 +34,7 @@ static inline int atomic_read(const atomic_t *v)
 *
 * Atomically sets the value of @v to @i.
 */
static inline void atomic_set(atomic_t *v, int i)
static __always_inline void atomic_set(atomic_t *v, int i)
{
	v->counter = i;
}
@@ -46,7 +46,7 @@ static inline void atomic_set(atomic_t *v, int i)
 *
 * Atomically adds @i to @v.
 */
static inline void atomic_add(int i, atomic_t *v)
static __always_inline void atomic_add(int i, atomic_t *v)
{
	asm volatile(LOCK_PREFIX "addl %1,%0"
		     : "+m" (v->counter)
@@ -60,7 +60,7 @@ static inline void atomic_add(int i, atomic_t *v)
 *
 * Atomically subtracts @i from @v.
 */
static inline void atomic_sub(int i, atomic_t *v)
static __always_inline void atomic_sub(int i, atomic_t *v)
{
	asm volatile(LOCK_PREFIX "subl %1,%0"
		     : "+m" (v->counter)
@@ -76,7 +76,7 @@ static inline void atomic_sub(int i, atomic_t *v)
 * true if the result is zero, or false for all
 * other cases.
 */
static inline int atomic_sub_and_test(int i, atomic_t *v)
static __always_inline int atomic_sub_and_test(int i, atomic_t *v)
{
	GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, "er", i, "%0", "e");
}
@@ -87,7 +87,7 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
 *
 * Atomically increments @v by 1.
 */
static inline void atomic_inc(atomic_t *v)
static __always_inline void atomic_inc(atomic_t *v)
{
	asm volatile(LOCK_PREFIX "incl %0"
		     : "+m" (v->counter));
@@ -99,7 +99,7 @@ static inline void atomic_inc(atomic_t *v)
 *
 * Atomically decrements @v by 1.
 */
static inline void atomic_dec(atomic_t *v)
static __always_inline void atomic_dec(atomic_t *v)
{
	asm volatile(LOCK_PREFIX "decl %0"
		     : "+m" (v->counter));
@@ -113,7 +113,7 @@ static inline void atomic_dec(atomic_t *v)
 * returns true if the result is 0, or false for all other
 * cases.
 */
static inline int atomic_dec_and_test(atomic_t *v)
static __always_inline int atomic_dec_and_test(atomic_t *v)
{
	GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", "e");
}
@@ -126,7 +126,7 @@ static inline int atomic_dec_and_test(atomic_t *v)
 * and returns true if the result is zero, or false for all
 * other cases.
 */
static inline int atomic_inc_and_test(atomic_t *v)
static __always_inline int atomic_inc_and_test(atomic_t *v)
{
	GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, "%0", "e");
}
@@ -140,7 +140,7 @@ static inline int atomic_inc_and_test(atomic_t *v)
 * if the result is negative, or false when
 * result is greater than or equal to zero.
 */
static inline int atomic_add_negative(int i, atomic_t *v)
static __always_inline int atomic_add_negative(int i, atomic_t *v)
{
	GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, "er", i, "%0", "s");
}
@@ -152,7 +152,7 @@ static inline int atomic_add_negative(int i, atomic_t *v)
 *
 * Atomically adds @i to @v and returns @i + @v
 */
static inline int atomic_add_return(int i, atomic_t *v)
static __always_inline int atomic_add_return(int i, atomic_t *v)
{
	return i + xadd(&v->counter, i);
}
@@ -164,7 +164,7 @@ static inline int atomic_add_return(int i, atomic_t *v)
 *
 * Atomically subtracts @i from @v and returns @v - @i
 */
static inline int atomic_sub_return(int i, atomic_t *v)
static __always_inline int atomic_sub_return(int i, atomic_t *v)
{
	return atomic_add_return(-i, v);
}
@@ -172,7 +172,7 @@ static inline int atomic_sub_return(int i, atomic_t *v)
#define atomic_inc_return(v)  (atomic_add_return(1, v))
#define atomic_dec_return(v)  (atomic_sub_return(1, v))

static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
{
	return cmpxchg(&v->counter, old, new);
}
@@ -191,7 +191,7 @@ static inline int atomic_xchg(atomic_t *v, int new)
 * Atomically adds @a to @v, so long as @v was not already @u.
 * Returns the old value of @v.
 */
static inline int __atomic_add_unless(atomic_t *v, int a, int u)
static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
{
	int c, old;
	c = atomic_read(v);
@@ -213,7 +213,7 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u)
 * Atomically adds 1 to @v
 * Returns the new value of @u
 */
static inline short int atomic_inc_short(short int *v)
static __always_inline short int atomic_inc_short(short int *v)
{
	asm(LOCK_PREFIX "addw $1, %0" : "+m" (*v));
	return *v;
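These hunks convert the atomics from plain inline to __always_inline so the compiler cannot out-of-line them on entry paths. The __atomic_add_unless() hunk above shows only the head of the function; for reference, the classic cmpxchg loop it implements looks like this (a sketch of the well-known pattern, written from memory rather than copied from the hunk):

	static __always_inline int __atomic_add_unless_sketch(atomic_t *v,
							      int a, int u)
	{
		int c, old;

		c = atomic_read(v);
		for (;;) {
			if (unlikely(c == u))
				break;			/* hit the "unless" value */
			old = atomic_cmpxchg(v, c, c + a);
			if (likely(old == c))
				break;			/* our update won the race */
			c = old;			/* lost the race: retry */
		}
		return c;				/* old value of @v */
	}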
@@ -40,7 +40,7 @@ static inline void atomic64_set(atomic64_t *v, long i)
 *
 * Atomically adds @i to @v.
 */
static inline void atomic64_add(long i, atomic64_t *v)
static __always_inline void atomic64_add(long i, atomic64_t *v)
{
	asm volatile(LOCK_PREFIX "addq %1,%0"
		     : "=m" (v->counter)
@@ -81,7 +81,7 @@ static inline int atomic64_sub_and_test(long i, atomic64_t *v)
 *
 * Atomically increments @v by 1.
 */
static inline void atomic64_inc(atomic64_t *v)
static __always_inline void atomic64_inc(atomic64_t *v)
{
	asm volatile(LOCK_PREFIX "incq %0"
		     : "=m" (v->counter)
@@ -94,7 +94,7 @@ static inline void atomic64_inc(atomic64_t *v)
 *
 * Atomically decrements @v by 1.
 */
static inline void atomic64_dec(atomic64_t *v)
static __always_inline void atomic64_dec(atomic64_t *v)
{
	asm volatile(LOCK_PREFIX "decq %0"
		     : "=m" (v->counter)
@@ -148,7 +148,7 @@ static inline int atomic64_add_negative(long i, atomic64_t *v)
 *
 * Atomically adds @i to @v and returns @i + @v
 */
static inline long atomic64_add_return(long i, atomic64_t *v)
static __always_inline long atomic64_add_return(long i, atomic64_t *v)
{
	return i + xadd(&v->counter, i);
}
@@ -8,7 +8,7 @@
/*
 * The set_memory_* API can be used to change various attributes of a virtual
 * address range. The attributes include:
 * Cachability   : UnCached, WriteCombining, WriteBack
 * Cachability   : UnCached, WriteCombining, WriteThrough, WriteBack
 * Executability : eXecutable, NoteXecutable
 * Read/Write    : ReadOnly, ReadWrite
 * Presence      : NotPresent
@@ -35,9 +35,11 @@

int _set_memory_uc(unsigned long addr, int numpages);
int _set_memory_wc(unsigned long addr, int numpages);
int _set_memory_wt(unsigned long addr, int numpages);
int _set_memory_wb(unsigned long addr, int numpages);
int set_memory_uc(unsigned long addr, int numpages);
int set_memory_wc(unsigned long addr, int numpages);
int set_memory_wt(unsigned long addr, int numpages);
int set_memory_wb(unsigned long addr, int numpages);
int set_memory_x(unsigned long addr, int numpages);
int set_memory_nx(unsigned long addr, int numpages);
@@ -48,10 +50,12 @@ int set_memory_4k(unsigned long addr, int numpages);

int set_memory_array_uc(unsigned long *addr, int addrinarray);
int set_memory_array_wc(unsigned long *addr, int addrinarray);
int set_memory_array_wt(unsigned long *addr, int addrinarray);
int set_memory_array_wb(unsigned long *addr, int addrinarray);

int set_pages_array_uc(struct page **pages, int addrinarray);
int set_pages_array_wc(struct page **pages, int addrinarray);
int set_pages_array_wt(struct page **pages, int addrinarray);
int set_pages_array_wb(struct page **pages, int addrinarray);

/*
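A quick sketch of how a driver might use the new Write-Through entry points added above, for something like an NVDIMM aperture where reads should stay cached but writes should reach the medium promptly. This is a hypothetical driver fragment; the helper and parameter names are made up, and ioremap_wt() is taken from the declaration introduced in this series:

	#include <linux/io.h>

	static void __iomem *nvdimm_map_wt(resource_size_t base,
					   unsigned long size)
	{
		/* WT mapping: cacheable reads, writes go through to media */
		return ioremap_wt(base, size);
	}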
@@ -1,170 +0,0 @@
#ifndef _ASM_X86_DWARF2_H
#define _ASM_X86_DWARF2_H

#ifndef __ASSEMBLY__
#warning "asm/dwarf2.h should be only included in pure assembly files"
#endif

/*
 * Macros for dwarf2 CFI unwind table entries.
 * See "as.info" for details on these pseudo ops. Unfortunately
 * they are only supported in very new binutils, so define them
 * away for older version.
 */

#ifdef CONFIG_AS_CFI

#define CFI_STARTPROC		.cfi_startproc
#define CFI_ENDPROC		.cfi_endproc
#define CFI_DEF_CFA		.cfi_def_cfa
#define CFI_DEF_CFA_REGISTER	.cfi_def_cfa_register
#define CFI_DEF_CFA_OFFSET	.cfi_def_cfa_offset
#define CFI_ADJUST_CFA_OFFSET	.cfi_adjust_cfa_offset
#define CFI_OFFSET		.cfi_offset
#define CFI_REL_OFFSET		.cfi_rel_offset
#define CFI_REGISTER		.cfi_register
#define CFI_RESTORE		.cfi_restore
#define CFI_REMEMBER_STATE	.cfi_remember_state
#define CFI_RESTORE_STATE	.cfi_restore_state
#define CFI_UNDEFINED		.cfi_undefined
#define CFI_ESCAPE		.cfi_escape

#ifdef CONFIG_AS_CFI_SIGNAL_FRAME
#define CFI_SIGNAL_FRAME	.cfi_signal_frame
#else
#define CFI_SIGNAL_FRAME
#endif

#if defined(CONFIG_AS_CFI_SECTIONS) && defined(__ASSEMBLY__)
/*
 * Emit CFI data in .debug_frame sections, not .eh_frame sections.
 * The latter we currently just discard since we don't do DWARF
 * unwinding at runtime. So only the offline DWARF information is
 * useful to anyone. Note we should not use this directive if this
 * file is used in the vDSO assembly, or if vmlinux.lds.S gets
 * changed so it doesn't discard .eh_frame.
 */
	.cfi_sections .debug_frame
#endif

#else

/*
 * Due to the structure of pre-existing code, don't use assembler line
 * comment character # to ignore the arguments. Instead, use a dummy macro.
 */
.macro cfi_ignore a=0, b=0, c=0, d=0
.endm

#define CFI_STARTPROC		cfi_ignore
#define CFI_ENDPROC		cfi_ignore
#define CFI_DEF_CFA		cfi_ignore
#define CFI_DEF_CFA_REGISTER	cfi_ignore
#define CFI_DEF_CFA_OFFSET	cfi_ignore
#define CFI_ADJUST_CFA_OFFSET	cfi_ignore
#define CFI_OFFSET		cfi_ignore
#define CFI_REL_OFFSET		cfi_ignore
#define CFI_REGISTER		cfi_ignore
#define CFI_RESTORE		cfi_ignore
#define CFI_REMEMBER_STATE	cfi_ignore
#define CFI_RESTORE_STATE	cfi_ignore
#define CFI_UNDEFINED		cfi_ignore
#define CFI_ESCAPE		cfi_ignore
#define CFI_SIGNAL_FRAME	cfi_ignore

#endif

/*
 * An attempt to make CFI annotations more or less
 * correct and shorter. It is implied that you know
 * what you're doing if you use them.
 */
#ifdef __ASSEMBLY__
#ifdef CONFIG_X86_64
	.macro pushq_cfi reg
	pushq \reg
	CFI_ADJUST_CFA_OFFSET 8
	.endm

	.macro pushq_cfi_reg reg
	pushq %\reg
	CFI_ADJUST_CFA_OFFSET 8
	CFI_REL_OFFSET \reg, 0
	.endm

	.macro popq_cfi reg
	popq \reg
	CFI_ADJUST_CFA_OFFSET -8
	.endm

	.macro popq_cfi_reg reg
	popq %\reg
	CFI_ADJUST_CFA_OFFSET -8
	CFI_RESTORE \reg
	.endm

	.macro pushfq_cfi
	pushfq
	CFI_ADJUST_CFA_OFFSET 8
	.endm

	.macro popfq_cfi
	popfq
	CFI_ADJUST_CFA_OFFSET -8
	.endm

	.macro movq_cfi reg offset=0
	movq %\reg, \offset(%rsp)
	CFI_REL_OFFSET \reg, \offset
	.endm

	.macro movq_cfi_restore offset reg
	movq \offset(%rsp), %\reg
	CFI_RESTORE \reg
	.endm
#else /*!CONFIG_X86_64*/
	.macro pushl_cfi reg
	pushl \reg
	CFI_ADJUST_CFA_OFFSET 4
	.endm

	.macro pushl_cfi_reg reg
	pushl %\reg
	CFI_ADJUST_CFA_OFFSET 4
	CFI_REL_OFFSET \reg, 0
	.endm

	.macro popl_cfi reg
	popl \reg
	CFI_ADJUST_CFA_OFFSET -4
	.endm

	.macro popl_cfi_reg reg
	popl %\reg
	CFI_ADJUST_CFA_OFFSET -4
	CFI_RESTORE \reg
	.endm

	.macro pushfl_cfi
	pushfl
	CFI_ADJUST_CFA_OFFSET 4
	.endm

	.macro popfl_cfi
	popfl
	CFI_ADJUST_CFA_OFFSET -4
	.endm

	.macro movl_cfi reg offset=0
	movl %\reg, \offset(%esp)
	CFI_REL_OFFSET \reg, \offset
	.endm

	.macro movl_cfi_restore offset reg
	movl \offset(%esp), %\reg
	CFI_RESTORE \reg
	.endm
#endif /*!CONFIG_X86_64*/
#endif /*__ASSEMBLY__*/

#endif /* _ASM_X86_DWARF2_H */
@@ -23,6 +23,8 @@ BUILD_INTERRUPT(x86_platform_ipi, X86_PLATFORM_IPI_VECTOR)
#ifdef CONFIG_HAVE_KVM
BUILD_INTERRUPT3(kvm_posted_intr_ipi, POSTED_INTR_VECTOR,
		 smp_kvm_posted_intr_ipi)
BUILD_INTERRUPT3(kvm_posted_intr_wakeup_ipi, POSTED_INTR_WAKEUP_VECTOR,
		 smp_kvm_posted_intr_wakeup_ipi)
#endif

/*
@@ -50,4 +52,7 @@ BUILD_INTERRUPT(thermal_interrupt,THERMAL_APIC_VECTOR)
BUILD_INTERRUPT(threshold_interrupt,THRESHOLD_APIC_VECTOR)
#endif

#ifdef CONFIG_X86_MCE_AMD
BUILD_INTERRUPT(deferred_error_interrupt, DEFERRED_ERROR_VECTOR)
#endif
#endif
@@ -1,20 +1,17 @@
#ifdef __ASSEMBLY__

#include <asm/asm.h>
#include <asm/dwarf2.h>

/* The annotation hides the frame from the unwinder and makes it look
   like an ordinary ebp save/restore. This avoids some special cases for
   frame pointer later */
#ifdef CONFIG_FRAME_POINTER
	.macro FRAME
	__ASM_SIZE(push,_cfi)	%__ASM_REG(bp)
	CFI_REL_OFFSET		__ASM_REG(bp), 0
	__ASM_SIZE(push,)	%__ASM_REG(bp)
	__ASM_SIZE(mov)	%__ASM_REG(sp), %__ASM_REG(bp)
	.endm
	.macro ENDFRAME
	__ASM_SIZE(pop,_cfi)	%__ASM_REG(bp)
	CFI_RESTORE		__ASM_REG(bp)
	__ASM_SIZE(pop,)	%__ASM_REG(bp)
	.endm
#else
	.macro FRAME
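For orientation, a consumer of these macros brackets the body of an assembly helper roughly like this (an illustrative sketch only; the helper name is made up and no specific in-tree user is implied):

	ENTRY(example_helper)
		FRAME		/* ebp save + mov: looks like a normal frame */
		/* ... body that frame-pointer unwinders can walk ... */
		ENDFRAME
		ret
	ENDPROC(example_helper)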
@@ -14,6 +14,7 @@ typedef struct {
#endif
#ifdef CONFIG_HAVE_KVM
	unsigned int kvm_posted_intr_ipis;
	unsigned int kvm_posted_intr_wakeup_ipis;
#endif
	unsigned int x86_platform_ipis;	/* arch dependent */
	unsigned int apic_perf_irqs;
@@ -33,6 +34,9 @@ typedef struct {
#ifdef CONFIG_X86_MCE_THRESHOLD
	unsigned int irq_threshold_count;
#endif
#ifdef CONFIG_X86_MCE_AMD
	unsigned int irq_deferred_error_count;
#endif
#if IS_ENABLED(CONFIG_HYPERV) || defined(CONFIG_XEN)
	unsigned int irq_hv_callback_count;
#endif
@@ -74,20 +74,16 @@ extern unsigned int hpet_readl(unsigned int a);
extern void force_hpet_resume(void);

struct irq_data;
struct hpet_dev;
struct irq_domain;

extern void hpet_msi_unmask(struct irq_data *data);
extern void hpet_msi_mask(struct irq_data *data);
struct hpet_dev;
extern void hpet_msi_write(struct hpet_dev *hdev, struct msi_msg *msg);
extern void hpet_msi_read(struct hpet_dev *hdev, struct msi_msg *msg);

#ifdef CONFIG_PCI_MSI
extern int default_setup_hpet_msi(unsigned int irq, unsigned int id);
#else
static inline int default_setup_hpet_msi(unsigned int irq, unsigned int id)
{
	return -EINVAL;
}
#endif
extern struct irq_domain *hpet_create_irq_domain(int hpet_id);
extern int hpet_assign_irq(struct irq_domain *domain,
			   struct hpet_dev *dev, int dev_num);

#ifdef CONFIG_HPET_EMULATE_RTC

@ -29,6 +29,7 @@
|
||||
extern asmlinkage void apic_timer_interrupt(void);
|
||||
extern asmlinkage void x86_platform_ipi(void);
|
||||
extern asmlinkage void kvm_posted_intr_ipi(void);
|
||||
extern asmlinkage void kvm_posted_intr_wakeup_ipi(void);
|
||||
extern asmlinkage void error_interrupt(void);
|
||||
extern asmlinkage void irq_work_interrupt(void);
|
||||
|
||||
@ -36,43 +37,10 @@ extern asmlinkage void spurious_interrupt(void);
|
||||
extern asmlinkage void thermal_interrupt(void);
|
||||
extern asmlinkage void reschedule_interrupt(void);
|
||||
|
||||
extern asmlinkage void invalidate_interrupt(void);
|
||||
extern asmlinkage void invalidate_interrupt0(void);
|
||||
extern asmlinkage void invalidate_interrupt1(void);
|
||||
extern asmlinkage void invalidate_interrupt2(void);
|
||||
extern asmlinkage void invalidate_interrupt3(void);
|
||||
extern asmlinkage void invalidate_interrupt4(void);
|
||||
extern asmlinkage void invalidate_interrupt5(void);
|
||||
extern asmlinkage void invalidate_interrupt6(void);
|
||||
extern asmlinkage void invalidate_interrupt7(void);
|
||||
extern asmlinkage void invalidate_interrupt8(void);
|
||||
extern asmlinkage void invalidate_interrupt9(void);
|
||||
extern asmlinkage void invalidate_interrupt10(void);
|
||||
extern asmlinkage void invalidate_interrupt11(void);
|
||||
extern asmlinkage void invalidate_interrupt12(void);
|
||||
extern asmlinkage void invalidate_interrupt13(void);
|
||||
extern asmlinkage void invalidate_interrupt14(void);
|
||||
extern asmlinkage void invalidate_interrupt15(void);
|
||||
extern asmlinkage void invalidate_interrupt16(void);
|
||||
extern asmlinkage void invalidate_interrupt17(void);
|
||||
extern asmlinkage void invalidate_interrupt18(void);
|
||||
extern asmlinkage void invalidate_interrupt19(void);
|
||||
extern asmlinkage void invalidate_interrupt20(void);
|
||||
extern asmlinkage void invalidate_interrupt21(void);
|
||||
extern asmlinkage void invalidate_interrupt22(void);
|
||||
extern asmlinkage void invalidate_interrupt23(void);
|
||||
extern asmlinkage void invalidate_interrupt24(void);
|
||||
extern asmlinkage void invalidate_interrupt25(void);
|
||||
extern asmlinkage void invalidate_interrupt26(void);
|
||||
extern asmlinkage void invalidate_interrupt27(void);
|
||||
extern asmlinkage void invalidate_interrupt28(void);
|
||||
extern asmlinkage void invalidate_interrupt29(void);
|
||||
extern asmlinkage void invalidate_interrupt30(void);
|
||||
extern asmlinkage void invalidate_interrupt31(void);
|
||||
|
||||
extern asmlinkage void irq_move_cleanup_interrupt(void);
|
||||
extern asmlinkage void reboot_interrupt(void);
|
||||
extern asmlinkage void threshold_interrupt(void);
|
||||
extern asmlinkage void deferred_error_interrupt(void);
|
||||
|
||||
extern asmlinkage void call_function_interrupt(void);
|
||||
extern asmlinkage void call_function_single_interrupt(void);
|
||||
@ -87,60 +55,93 @@ extern void trace_spurious_interrupt(void);
|
||||
extern void trace_thermal_interrupt(void);
|
||||
extern void trace_reschedule_interrupt(void);
|
||||
extern void trace_threshold_interrupt(void);
|
||||
extern void trace_deferred_error_interrupt(void);
|
||||
extern void trace_call_function_interrupt(void);
|
||||
extern void trace_call_function_single_interrupt(void);
|
||||
#define trace_irq_move_cleanup_interrupt irq_move_cleanup_interrupt
|
||||
#define trace_reboot_interrupt reboot_interrupt
|
||||
#define trace_kvm_posted_intr_ipi kvm_posted_intr_ipi
|
||||
#define trace_kvm_posted_intr_wakeup_ipi kvm_posted_intr_wakeup_ipi
|
||||
#endif /* CONFIG_TRACING */
|
||||
|
||||
#ifdef CONFIG_IRQ_REMAP
|
||||
/* Intel specific interrupt remapping information */
|
||||
struct irq_2_iommu {
|
||||
struct intel_iommu *iommu;
|
||||
u16 irte_index;
|
||||
u16 sub_handle;
|
||||
u8 irte_mask;
|
||||
};
|
||||
|
||||
/* AMD specific interrupt remapping information */
|
||||
struct irq_2_irte {
|
||||
u16 devid; /* Device ID for IRTE table */
|
||||
u16 index; /* Index into IRTE table*/
|
||||
};
|
||||
#endif /* CONFIG_IRQ_REMAP */
|
arch/x86/include/asm/hw_irq.h
 #ifdef CONFIG_X86_LOCAL_APIC
 struct irq_data;
+struct pci_dev;
+struct msi_desc;

-struct irq_cfg {
-	cpumask_var_t		domain;
-	cpumask_var_t		old_domain;
-	u8			vector;
-	u8			move_in_progress : 1;
-#ifdef CONFIG_IRQ_REMAP
-	u8			remapped : 1;
-	union {
-		struct irq_2_iommu irq_2_iommu;
-		struct irq_2_irte  irq_2_irte;
-	};
-#endif
-	union {
-#ifdef CONFIG_X86_IO_APIC
-		struct {
-			struct list_head	irq_2_pin;
-		};
-#endif
-	};
-};
+enum irq_alloc_type {
+	X86_IRQ_ALLOC_TYPE_IOAPIC = 1,
+	X86_IRQ_ALLOC_TYPE_HPET,
+	X86_IRQ_ALLOC_TYPE_MSI,
+	X86_IRQ_ALLOC_TYPE_MSIX,
+	X86_IRQ_ALLOC_TYPE_DMAR,
+	X86_IRQ_ALLOC_TYPE_UV,
+};
+
+struct irq_alloc_info {
+	enum irq_alloc_type	type;
+	u32			flags;
+	const struct cpumask	*mask;	/* CPU mask for vector allocation */
+	union {
+		int		unused;
+#ifdef	CONFIG_HPET_TIMER
+		struct {
+			int		hpet_id;
+			int		hpet_index;
+			void		*hpet_data;
+		};
+#endif
+#ifdef	CONFIG_PCI_MSI
+		struct {
+			struct pci_dev	*msi_dev;
+			irq_hw_number_t	msi_hwirq;
+		};
+#endif
+#ifdef	CONFIG_X86_IO_APIC
+		struct {
+			int		ioapic_id;
+			int		ioapic_pin;
+			int		ioapic_node;
+			u32		ioapic_trigger : 1;
+			u32		ioapic_polarity : 1;
+			u32		ioapic_valid : 1;
+			struct IO_APIC_route_entry *ioapic_entry;
+		};
+#endif
+#ifdef	CONFIG_DMAR_TABLE
+		struct {
+			int		dmar_id;
+			void		*dmar_data;
+		};
+#endif
+#ifdef	CONFIG_HT_IRQ
+		struct {
+			int		ht_pos;
+			int		ht_idx;
+			struct pci_dev	*ht_dev;
+			void		*ht_update;
+		};
+#endif
+#ifdef	CONFIG_X86_UV
+		struct {
+			int		uv_limit;
+			int		uv_blade;
+			unsigned long	uv_offset;
+			char		*uv_name;
+		};
+#endif
+	};
+};
+
+struct irq_cfg {
+	unsigned int		dest_apicid;
+	u8			vector;
+};

 extern struct irq_cfg *irq_cfg(unsigned int irq);
 extern struct irq_cfg *irqd_cfg(struct irq_data *irq_data);
-extern struct irq_cfg *alloc_irq_and_cfg_at(unsigned int at, int node);
 extern void lock_vector_lock(void);
 extern void unlock_vector_lock(void);
-extern int assign_irq_vector(int, struct irq_cfg *, const struct cpumask *);
-extern void clear_irq_vector(int irq, struct irq_cfg *cfg);
 extern void setup_vector_irq(int cpu);
 #ifdef CONFIG_SMP
 extern void send_cleanup_vector(struct irq_cfg *);
@@ -150,10 +151,7 @@ static inline void send_cleanup_vector(struct irq_cfg *c) { }
 static inline void irq_complete_move(struct irq_cfg *c) { }
 #endif

-extern int apic_retrigger_irq(struct irq_data *data);
 extern void apic_ack_edge(struct irq_data *data);
-extern int apic_set_affinity(struct irq_data *data, const struct cpumask *mask,
-			     unsigned int *dest_id);
 #else /* CONFIG_X86_LOCAL_APIC */
 static inline void lock_vector_lock(void) {}
 static inline void unlock_vector_lock(void) {}
@@ -163,8 +161,7 @@ static inline void unlock_vector_lock(void) {}
 extern atomic_t irq_err_count;
 extern atomic_t irq_mis_count;

-/* EISA */
-extern void eisa_set_level_irq(unsigned int irq);
+extern void elcr_set_level_irq(unsigned int irq);

 /* SMP */
 extern __visible void smp_apic_timer_interrupt(struct pt_regs *);
@@ -178,7 +175,6 @@ extern asmlinkage void smp_irq_move_cleanup_interrupt(void);
 extern __visible void smp_reschedule_interrupt(struct pt_regs *);
 extern __visible void smp_call_function_interrupt(struct pt_regs *);
 extern __visible void smp_call_function_single_interrupt(struct pt_regs *);
-extern __visible void smp_invalidate_interrupt(struct pt_regs *);
 #endif

 extern char irq_entries_start[];
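The new irq_alloc_info is a tagged union: the type field says which of the anonymous per-domain structs is valid, so one argument can carry IOAPIC pin attributes, an MSI device pointer, or UV parameters down the same hierarchical allocation path. A minimal user-space sketch of the same idiom, with simplified invented types rather than the kernel's definitions:

#include <stdio.h>

enum alloc_type { ALLOC_IOAPIC = 1, ALLOC_MSI };

struct alloc_info {
	enum alloc_type type;
	union {
		struct { int ioapic_id, ioapic_pin; } ioapic; /* IOAPIC request */
		struct { void *msi_dev; } msi;                /* MSI request */
	};
};

static void handle(const struct alloc_info *info)
{
	switch (info->type) {
	case ALLOC_IOAPIC:	/* only the ioapic member is valid here */
		printf("IOAPIC %d pin %d\n", info->ioapic.ioapic_id,
		       info->ioapic.ioapic_pin);
		break;
	case ALLOC_MSI:		/* only the msi member is valid here */
		printf("MSI for device %p\n", info->msi.msi_dev);
		break;
	}
}

int main(void)
{
	struct alloc_info info = { .type = ALLOC_IOAPIC,
				   .ioapic = { .ioapic_id = 2, .ioapic_pin = 11 } };
	handle(&info);
	return 0;
}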
arch/x86/include/asm/io.h
@@ -35,11 +35,13 @@
  */

 #define ARCH_HAS_IOREMAP_WC
+#define ARCH_HAS_IOREMAP_WT

 #include <linux/string.h>
 #include <linux/compiler.h>
 #include <asm/page.h>
+#include <asm/early_ioremap.h>
 #include <asm/pgtable_types.h>

 #define build_mmio_read(name, size, type, reg, barrier) \
 static inline type name(const volatile void __iomem *addr) \
@@ -177,6 +179,7 @@ static inline unsigned int isa_virt_to_bus(volatile void *address)
  * look at pci_iomap().
  */
 extern void __iomem *ioremap_nocache(resource_size_t offset, unsigned long size);
+extern void __iomem *ioremap_uc(resource_size_t offset, unsigned long size);
 extern void __iomem *ioremap_cache(resource_size_t offset, unsigned long size);
 extern void __iomem *ioremap_prot(resource_size_t offset, unsigned long size,
 				unsigned long prot_val);
@@ -197,8 +200,6 @@ extern void set_iounmap_nonlazy(void);

 #include <asm-generic/iomap.h>

-#include <linux/vmalloc.h>
-
 /*
  * Convert a virtual cached pointer to an uncached pointer
  */
@@ -320,6 +321,7 @@ extern void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr);
 extern int ioremap_change_attr(unsigned long vaddr, unsigned long size,
 				enum page_cache_mode pcm);
 extern void __iomem *ioremap_wc(resource_size_t offset, unsigned long size);
+extern void __iomem *ioremap_wt(resource_size_t offset, unsigned long size);

 extern bool is_early_ioremap_ptep(pte_t *ptep);

@@ -338,6 +340,9 @@ extern bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
 #define IO_SPACE_LIMIT 0xffff

 #ifdef CONFIG_MTRR
+extern int __must_check arch_phys_wc_index(int handle);
+#define arch_phys_wc_index arch_phys_wc_index
+
 extern int __must_check arch_phys_wc_add(unsigned long base,
 					 unsigned long size);
 extern void arch_phys_wc_del(int handle);
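ioremap_wt() is the new interface here: reads may hit the cache, but every store goes straight through to the mapped range, which is the behaviour wanted for NVDIMM-style memory. A hedged kernel-style sketch of how a driver might use it; the nvdimm_base name, the physical range, and the register write are invented for illustration:

#include <linux/errno.h>
#include <linux/io.h>

static void __iomem *nvdimm_base;

static int example_map(resource_size_t phys, unsigned long size)
{
	/* ask for a write-through mapping instead of UC or WC */
	nvdimm_base = ioremap_wt(phys, size);
	if (!nvdimm_base)
		return -ENOMEM;

	writel(0x1, nvdimm_base);	/* the store is not held back in cache */
	return 0;
}

static void example_unmap(void)
{
	iounmap(nvdimm_base);
}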
arch/x86/include/asm/io_apic.h
@@ -95,9 +95,22 @@ struct IR_IO_APIC_route_entry {
 		index		: 15;
 } __attribute__ ((packed));

-#define IOAPIC_AUTO			-1
-#define IOAPIC_EDGE			0
-#define IOAPIC_LEVEL			1
+struct irq_alloc_info;
+struct ioapic_domain_cfg;
+
+#define IOAPIC_AUTO			-1
+#define IOAPIC_EDGE			0
+#define IOAPIC_LEVEL			1
+
+#define IOAPIC_MASKED			1
+#define IOAPIC_UNMASKED			0
+
+#define IOAPIC_POL_HIGH			0
+#define IOAPIC_POL_LOW			1
+
+#define IOAPIC_DEST_MODE_PHYSICAL	0
+#define IOAPIC_DEST_MODE_LOGICAL	1
+
+#define IOAPIC_MAP_ALLOC		0x1
+#define IOAPIC_MAP_CHECK		0x2
@@ -110,9 +123,6 @@ extern int nr_ioapics;

 extern int mpc_ioapic_id(int ioapic);
 extern unsigned int mpc_ioapic_addr(int ioapic);
-extern struct mp_ioapic_gsi *mp_ioapic_gsi_routing(int ioapic);
-
-#define MP_MAX_IOAPIC_PIN 127

 /* # of MP IRQ source entries */
 extern int mp_irq_entries;
@@ -120,9 +130,6 @@ extern int mp_irq_entries;
 /* MP IRQ source entries */
 extern struct mpc_intsrc mp_irqs[MAX_IRQ_SOURCES];

-/* Older SiS APIC requires we rewrite the index register */
-extern int sis_apic_bug;
-
 /* 1 if "noapic" boot option passed */
 extern int skip_ioapic_setup;

@@ -132,6 +139,8 @@ extern int noioapicquirk;
 /* -1 if "noapic" boot option passed */
 extern int noioapicreroute;

+extern u32 gsi_top;
+
 extern unsigned long io_apic_irqs;

 #define IO_APIC_IRQ(x) (((x) >= NR_IRQS_LEGACY) || ((1 << (x)) & io_apic_irqs))
@@ -147,13 +156,6 @@ struct irq_cfg;
 extern void ioapic_insert_resources(void);
 extern int arch_early_ioapic_init(void);

-extern int native_setup_ioapic_entry(int, struct IO_APIC_route_entry *,
-				     unsigned int, int,
-				     struct io_apic_irq_attr *);
-extern void eoi_ioapic_irq(unsigned int irq, struct irq_cfg *cfg);
-
-extern void native_eoi_ioapic_pin(int apic, int pin, int vector);
-
 extern int save_ioapic_entries(void);
 extern void mask_ioapic_entries(void);
 extern int restore_ioapic_entries(void);
@@ -161,82 +163,32 @@ extern int restore_ioapic_entries(void);
 extern void setup_ioapic_ids_from_mpc(void);
 extern void setup_ioapic_ids_from_mpc_nocheck(void);

-struct io_apic_irq_attr {
-	int ioapic;
-	int ioapic_pin;
-	int trigger;
-	int polarity;
-};
-
-enum ioapic_domain_type {
-	IOAPIC_DOMAIN_INVALID,
-	IOAPIC_DOMAIN_LEGACY,
-	IOAPIC_DOMAIN_STRICT,
-	IOAPIC_DOMAIN_DYNAMIC,
-};
-
-struct device_node;
-struct irq_domain;
-struct irq_domain_ops;
-
-struct ioapic_domain_cfg {
-	enum ioapic_domain_type type;
-	const struct irq_domain_ops *ops;
-	struct device_node *dev;
-};
-
-struct mp_ioapic_gsi{
-	u32 gsi_base;
-	u32 gsi_end;
-};
-extern u32 gsi_top;
-
 extern int mp_find_ioapic(u32 gsi);
 extern int mp_find_ioapic_pin(int ioapic, u32 gsi);
 extern u32 mp_pin_to_gsi(int ioapic, int pin);
-extern int mp_map_gsi_to_irq(u32 gsi, unsigned int flags);
+extern int mp_map_gsi_to_irq(u32 gsi, unsigned int flags,
+			     struct irq_alloc_info *info);
 extern void mp_unmap_irq(int irq);
 extern int mp_register_ioapic(int id, u32 address, u32 gsi_base,
 			      struct ioapic_domain_cfg *cfg);
 extern int mp_unregister_ioapic(u32 gsi_base);
 extern int mp_ioapic_registered(u32 gsi_base);
-extern int mp_irqdomain_map(struct irq_domain *domain, unsigned int virq,
-			    irq_hw_number_t hwirq);
-extern void mp_irqdomain_unmap(struct irq_domain *domain, unsigned int virq);
-extern int mp_set_gsi_attr(u32 gsi, int trigger, int polarity, int node);
-extern void __init pre_init_apic_IRQ0(void);
+
+extern void ioapic_set_alloc_attr(struct irq_alloc_info *info,
+				  int node, int trigger, int polarity);

 extern void mp_save_irq(struct mpc_intsrc *m);

 extern void disable_ioapic_support(void);

-extern void __init native_io_apic_init_mappings(void);
+extern void __init io_apic_init_mappings(void);
 extern unsigned int native_io_apic_read(unsigned int apic, unsigned int reg);
-extern void native_io_apic_write(unsigned int apic, unsigned int reg, unsigned int val);
-extern void native_io_apic_modify(unsigned int apic, unsigned int reg, unsigned int val);
 extern void native_disable_io_apic(void);
-extern void native_io_apic_print_entries(unsigned int apic, unsigned int nr_entries);
-extern void intel_ir_io_apic_print_entries(unsigned int apic, unsigned int nr_entries);
-extern int native_ioapic_set_affinity(struct irq_data *,
-				      const struct cpumask *,
-				      bool);

 static inline unsigned int io_apic_read(unsigned int apic, unsigned int reg)
 {
 	return x86_io_apic_ops.read(apic, reg);
 }

-static inline void io_apic_write(unsigned int apic, unsigned int reg, unsigned int value)
-{
-	x86_io_apic_ops.write(apic, reg, value);
-}
-static inline void io_apic_modify(unsigned int apic, unsigned int reg, unsigned int value)
-{
-	x86_io_apic_ops.modify(apic, reg, value);
-}
-
-extern void io_apic_eoi(unsigned int apic, unsigned int vector);
-
 extern void setup_IO_APIC(void);
 extern void enable_IO_APIC(void);
 extern void disable_IO_APIC(void);
@@ -253,8 +205,12 @@ static inline int arch_early_ioapic_init(void) { return 0; }
 static inline void print_IO_APICs(void) {}
 #define gsi_top (NR_IRQS_LEGACY)
 static inline int mp_find_ioapic(u32 gsi) { return 0; }
 static inline u32 mp_pin_to_gsi(int ioapic, int pin) { return UINT_MAX; }
-static inline int mp_map_gsi_to_irq(u32 gsi, unsigned int flags) { return gsi; }
+static inline int mp_map_gsi_to_irq(u32 gsi, unsigned int flags,
+				    struct irq_alloc_info *info)
+{
+	return gsi;
+}
+
 static inline void mp_unmap_irq(int irq) { }

 static inline int save_ioapic_entries(void)
@@ -268,17 +224,11 @@ static inline int restore_ioapic_entries(void)
 	return -ENOMEM;
 }

-static inline void mp_save_irq(struct mpc_intsrc *m) { };
+static inline void mp_save_irq(struct mpc_intsrc *m) { }
 static inline void disable_ioapic_support(void) { }
-#define native_io_apic_init_mappings	NULL
+static inline void io_apic_init_mappings(void) { }
 #define native_io_apic_read		NULL
-#define native_io_apic_write		NULL
-#define native_io_apic_modify		NULL
 #define native_disable_io_apic		NULL
-#define native_io_apic_print_entries	NULL
-#define native_ioapic_set_affinity	NULL
-#define native_setup_ioapic_entry	NULL
-#define native_eoi_ioapic_pin		NULL

 static inline void setup_IO_APIC(void) { }
 static inline void enable_IO_APIC(void) { }
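With the hierarchy in place, callers no longer push trigger/polarity through a separate mp_set_gsi_attr() call; they describe the pin in an irq_alloc_info and hand it to mp_map_gsi_to_irq(). A hedged sketch of the new calling convention, using only interfaces visible in this diff; the function name, node value, and flag combination are illustrative:

#include <asm/irqdomain.h>
#include <asm/io_apic.h>

static int example_map_gsi(u32 gsi, int node)
{
	struct irq_alloc_info info;

	init_irq_alloc_info(&info, NULL);
	/* edge-triggered, active-high (IOAPIC_EDGE / IOAPIC_POL_HIGH) */
	ioapic_set_alloc_attr(&info, node, 0, 0);

	/* allocate (or look up) the Linux irq number for this GSI */
	return mp_map_gsi_to_irq(gsi, IOAPIC_MAP_ALLOC | IOAPIC_MAP_CHECK, &info);
}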
arch/x86/include/asm/irq.h
@@ -30,6 +30,10 @@ extern void fixup_irqs(void);
 extern void irq_force_complete_move(int);
 #endif

+#ifdef CONFIG_HAVE_KVM
+extern void kvm_set_posted_intr_wakeup_handler(void (*handler)(void));
+#endif
+
 extern void (*x86_platform_ipi_callback)(void);
 extern void native_init_IRQ(void);
 extern bool handle_irq(unsigned irq, struct pt_regs *regs);
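kvm_set_posted_intr_wakeup_handler() lets KVM hook the wakeup vector without the core kernel linking against the KVM module. A hedged sketch of the registration pattern; the example_* names and the handler body are invented:

#ifdef CONFIG_HAVE_KVM
static void example_wakeup_handler(void)
{
	/* wake up vCPUs that blocked while a posted interrupt arrived */
}

static void example_register(void)
{
	kvm_set_posted_intr_wakeup_handler(example_wakeup_handler);
}

static void example_unregister(void)
{
	/* passing NULL restores the default no-op handler */
	kvm_set_posted_intr_wakeup_handler(NULL);
}
#endif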
arch/x86/include/asm/irq_remapping.h
@@ -22,84 +22,72 @@
 #ifndef __X86_IRQ_REMAPPING_H
 #define __X86_IRQ_REMAPPING_H

+#include <asm/irqdomain.h>
+#include <asm/hw_irq.h>
 #include <asm/io_apic.h>

-struct IO_APIC_route_entry;
-struct io_apic_irq_attr;
-struct irq_chip;
 struct msi_msg;
-struct pci_dev;
-struct irq_cfg;
+struct irq_alloc_info;
+
+enum irq_remap_cap {
+	IRQ_POSTING_CAP = 0,
+};

 #ifdef CONFIG_IRQ_REMAP

+extern bool irq_remapping_cap(enum irq_remap_cap cap);
 extern void set_irq_remapping_broken(void);
 extern int irq_remapping_prepare(void);
 extern int irq_remapping_enable(void);
 extern void irq_remapping_disable(void);
 extern int irq_remapping_reenable(int);
 extern int irq_remap_enable_fault_handling(void);
-extern int setup_ioapic_remapped_entry(int irq,
-				       struct IO_APIC_route_entry *entry,
-				       unsigned int destination,
-				       int vector,
-				       struct io_apic_irq_attr *attr);
-extern void free_remapped_irq(int irq);
-extern void compose_remapped_msi_msg(struct pci_dev *pdev,
-				     unsigned int irq, unsigned int dest,
-				     struct msi_msg *msg, u8 hpet_id);
-extern int setup_hpet_msi_remapped(unsigned int irq, unsigned int id);
 extern void panic_if_irq_remap(const char *msg);
-extern bool setup_remapped_irq(int irq,
-			       struct irq_cfg *cfg,
-			       struct irq_chip *chip);
-
-void irq_remap_modify_chip_defaults(struct irq_chip *chip);
+
+extern struct irq_domain *
+irq_remapping_get_ir_irq_domain(struct irq_alloc_info *info);
+extern struct irq_domain *
+irq_remapping_get_irq_domain(struct irq_alloc_info *info);
+
+/* Create PCI MSI/MSIx irqdomain, use @parent as the parent irqdomain. */
+extern struct irq_domain *arch_create_msi_irq_domain(struct irq_domain *parent);
+
+/* Get parent irqdomain for interrupt remapping irqdomain */
+static inline struct irq_domain *arch_get_ir_parent_domain(void)
+{
+	return x86_vector_domain;
+}
+
+struct vcpu_data {
+	u64 pi_desc_addr;	/* Physical address of PI Descriptor */
+	u32 vector;		/* Guest vector of the interrupt */
+};

 #else /* CONFIG_IRQ_REMAP */

+static inline bool irq_remapping_cap(enum irq_remap_cap cap) { return 0; }
 static inline void set_irq_remapping_broken(void) { }
 static inline int irq_remapping_prepare(void) { return -ENODEV; }
 static inline int irq_remapping_enable(void) { return -ENODEV; }
 static inline void irq_remapping_disable(void) { }
 static inline int irq_remapping_reenable(int eim) { return -ENODEV; }
 static inline int irq_remap_enable_fault_handling(void) { return -ENODEV; }
-static inline int setup_ioapic_remapped_entry(int irq,
-					      struct IO_APIC_route_entry *entry,
-					      unsigned int destination,
-					      int vector,
-					      struct io_apic_irq_attr *attr)
-{
-	return -ENODEV;
-}
-static inline void free_remapped_irq(int irq) { }
-static inline void compose_remapped_msi_msg(struct pci_dev *pdev,
-					    unsigned int irq, unsigned int dest,
-					    struct msi_msg *msg, u8 hpet_id)
-{
-}
-static inline int setup_hpet_msi_remapped(unsigned int irq, unsigned int id)
-{
-	return -ENODEV;
-}

 static inline void panic_if_irq_remap(const char *msg)
 {
 }

-static inline void irq_remap_modify_chip_defaults(struct irq_chip *chip)
+static inline struct irq_domain *
+irq_remapping_get_ir_irq_domain(struct irq_alloc_info *info)
 {
+	return NULL;
 }

-static inline bool setup_remapped_irq(int irq,
-				      struct irq_cfg *cfg,
-				      struct irq_chip *chip)
+static inline struct irq_domain *
+irq_remapping_get_irq_domain(struct irq_alloc_info *info)
 {
-	return false;
+	return NULL;
 }

 #endif /* CONFIG_IRQ_REMAP */
-
-#define dmar_alloc_hwirq()	irq_alloc_hwirq(-1)
-#define dmar_free_hwirq		irq_free_hwirq
-
 #endif /* __X86_IRQ_REMAPPING_H */
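struct vcpu_data is the contract for VT-d posted interrupts: the hypervisor side publishes where the guest's posted-interrupt descriptor lives and which guest vector to post. A hedged sketch of how a caller might hand that to the remapping driver; irq_set_vcpu_affinity() is assumed to be the generic-IRQ entry point that routes to the remapping chip's callback, and the function name and parameters are invented:

#include <linux/interrupt.h>
#include <asm/irq_remapping.h>

static int example_post_to_vcpu(unsigned int irq, u64 pi_desc_pa, u32 guest_vector)
{
	struct vcpu_data vcpu_info = {
		.pi_desc_addr	= pi_desc_pa,		/* PA of the PI descriptor */
		.vector		= guest_vector,		/* vector posted to the guest */
	};

	/* assumed to reach the remapping driver for this irq */
	return irq_set_vcpu_affinity(irq, &vcpu_info);
}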
arch/x86/include/asm/irq_vectors.h
@@ -47,31 +47,12 @@
 #define IRQ_MOVE_CLEANUP_VECTOR		FIRST_EXTERNAL_VECTOR

 #define IA32_SYSCALL_VECTOR		0x80
-#ifdef CONFIG_X86_32
-# define SYSCALL_VECTOR			0x80
-#endif

 /*
  * Vectors 0x30-0x3f are used for ISA interrupts.
  *   round up to the next 16-vector boundary
  */
-#define IRQ0_VECTOR			((FIRST_EXTERNAL_VECTOR + 16) & ~15)
-
-#define IRQ1_VECTOR			(IRQ0_VECTOR +  1)
-#define IRQ2_VECTOR			(IRQ0_VECTOR +  2)
-#define IRQ3_VECTOR			(IRQ0_VECTOR +  3)
-#define IRQ4_VECTOR			(IRQ0_VECTOR +  4)
-#define IRQ5_VECTOR			(IRQ0_VECTOR +  5)
-#define IRQ6_VECTOR			(IRQ0_VECTOR +  6)
-#define IRQ7_VECTOR			(IRQ0_VECTOR +  7)
-#define IRQ8_VECTOR			(IRQ0_VECTOR +  8)
-#define IRQ9_VECTOR			(IRQ0_VECTOR +  9)
-#define IRQ10_VECTOR			(IRQ0_VECTOR + 10)
-#define IRQ11_VECTOR			(IRQ0_VECTOR + 11)
-#define IRQ12_VECTOR			(IRQ0_VECTOR + 12)
-#define IRQ13_VECTOR			(IRQ0_VECTOR + 13)
-#define IRQ14_VECTOR			(IRQ0_VECTOR + 14)
-#define IRQ15_VECTOR			(IRQ0_VECTOR + 15)
+#define ISA_IRQ_VECTOR(irq)		(((FIRST_EXTERNAL_VECTOR + 16) & ~15) + irq)

 /*
  * Special IRQ vectors used by the SMP architecture, 0xf0-0xff
@@ -102,21 +83,23 @@
  */
 #define X86_PLATFORM_IPI_VECTOR		0xf7

-/* Vector for KVM to deliver posted interrupt IPI */
-#ifdef CONFIG_HAVE_KVM
-#define POSTED_INTR_VECTOR		0xf2
-#endif
-
+#define POSTED_INTR_WAKEUP_VECTOR	0xf1
 /*
  * IRQ work vector:
  */
 #define IRQ_WORK_VECTOR			0xf6

 #define UV_BAU_MESSAGE			0xf5
+#define DEFERRED_ERROR_VECTOR		0xf4

 /* Vector on which hypervisor callbacks will be delivered */
 #define HYPERVISOR_CALLBACK_VECTOR	0xf3

+/* Vector for KVM to deliver posted interrupt IPI */
+#ifdef CONFIG_HAVE_KVM
+#define POSTED_INTR_VECTOR		0xf2
+#endif
+
 /*
  * Local APIC timer IRQ vector is on a different priority level,
  * to work around the 'lost local interrupt if more than 2 IRQ
@@ -155,18 +138,22 @@ static inline int invalid_vm86_irq(int irq)
  * static arrays.
  */

-#define NR_IRQS_LEGACY			  16
+#define NR_IRQS_LEGACY			16

-#define IO_APIC_VECTOR_LIMIT		( 32 * MAX_IO_APICS )
+#define CPU_VECTOR_LIMIT		(64 * NR_CPUS)
+#define IO_APIC_VECTOR_LIMIT		(32 * MAX_IO_APICS)

-#ifdef CONFIG_X86_IO_APIC
-# define CPU_VECTOR_LIMIT		(64 * NR_CPUS)
-# define NR_IRQS					\
+#if defined(CONFIG_X86_IO_APIC) && defined(CONFIG_PCI_MSI)
+#define NR_IRQS						\
 	(CPU_VECTOR_LIMIT > IO_APIC_VECTOR_LIMIT ?	\
 		(NR_VECTORS + CPU_VECTOR_LIMIT)  :	\
 		(NR_VECTORS + IO_APIC_VECTOR_LIMIT))
-#else /* !CONFIG_X86_IO_APIC: */
-# define NR_IRQS			NR_IRQS_LEGACY
+#elif defined(CONFIG_X86_IO_APIC)
+#define NR_IRQS				(NR_VECTORS + IO_APIC_VECTOR_LIMIT)
+#elif defined(CONFIG_PCI_MSI)
+#define NR_IRQS				(NR_VECTORS + CPU_VECTOR_LIMIT)
+#else
+#define NR_IRQS				NR_IRQS_LEGACY
+#endif

 #endif /* _ASM_X86_IRQ_VECTORS_H */
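The NR_IRQS selection above is plain max() arithmetic once the config options are fixed. A small stand-alone program that mirrors the calculation for one sample configuration; the NR_CPUS and MAX_IO_APICS values are examples only:

#include <stdio.h>

#define NR_VECTORS		256
#define NR_IRQS_LEGACY		16
#define NR_CPUS			64	/* example value */
#define MAX_IO_APICS		128	/* example value */

#define CPU_VECTOR_LIMIT	(64 * NR_CPUS)
#define IO_APIC_VECTOR_LIMIT	(32 * MAX_IO_APICS)

int main(void)
{
	/* CONFIG_X86_IO_APIC && CONFIG_PCI_MSI: take the larger limit */
	int nr_both = CPU_VECTOR_LIMIT > IO_APIC_VECTOR_LIMIT ?
			NR_VECTORS + CPU_VECTOR_LIMIT :
			NR_VECTORS + IO_APIC_VECTOR_LIMIT;

	printf("ioapic+msi:  %d\n", nr_both);
	printf("ioapic only: %d\n", NR_VECTORS + IO_APIC_VECTOR_LIMIT);
	printf("msi only:    %d\n", NR_VECTORS + CPU_VECTOR_LIMIT);
	printf("neither:     %d\n", NR_IRQS_LEGACY);
	return 0;
}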
arch/x86/include/asm/irqdomain.h (new file, 63 lines)
@@ -0,0 +1,63 @@
+#ifndef _ASM_IRQDOMAIN_H
+#define _ASM_IRQDOMAIN_H
+
+#include <linux/irqdomain.h>
+#include <asm/hw_irq.h>
+
+#ifdef CONFIG_X86_LOCAL_APIC
+enum {
+	/* Allocate contiguous CPU vectors */
+	X86_IRQ_ALLOC_CONTIGUOUS_VECTORS = 0x1,
+};
+
+extern struct irq_domain *x86_vector_domain;
+
+extern void init_irq_alloc_info(struct irq_alloc_info *info,
+				const struct cpumask *mask);
+extern void copy_irq_alloc_info(struct irq_alloc_info *dst,
+				struct irq_alloc_info *src);
+#endif /* CONFIG_X86_LOCAL_APIC */
+
+#ifdef CONFIG_X86_IO_APIC
+struct device_node;
+struct irq_data;
+
+enum ioapic_domain_type {
+	IOAPIC_DOMAIN_INVALID,
+	IOAPIC_DOMAIN_LEGACY,
+	IOAPIC_DOMAIN_STRICT,
+	IOAPIC_DOMAIN_DYNAMIC,
+};
+
+struct ioapic_domain_cfg {
+	enum ioapic_domain_type		type;
+	const struct irq_domain_ops	*ops;
+	struct device_node		*dev;
+};
+
+extern const struct irq_domain_ops mp_ioapic_irqdomain_ops;
+
+extern int mp_irqdomain_alloc(struct irq_domain *domain, unsigned int virq,
+			      unsigned int nr_irqs, void *arg);
+extern void mp_irqdomain_free(struct irq_domain *domain, unsigned int virq,
+			      unsigned int nr_irqs);
+extern void mp_irqdomain_activate(struct irq_domain *domain,
+				  struct irq_data *irq_data);
+extern void mp_irqdomain_deactivate(struct irq_domain *domain,
+				    struct irq_data *irq_data);
+extern int mp_irqdomain_ioapic_idx(struct irq_domain *domain);
+#endif /* CONFIG_X86_IO_APIC */
+
+#ifdef CONFIG_PCI_MSI
+extern void arch_init_msi_domain(struct irq_domain *domain);
+#else
+static inline void arch_init_msi_domain(struct irq_domain *domain) { }
+#endif
+
+#ifdef CONFIG_HT_IRQ
+extern void arch_init_htirq_domain(struct irq_domain *domain);
+#else
+static inline void arch_init_htirq_domain(struct irq_domain *domain) { }
+#endif
+
+#endif
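A new IOAPIC is plugged into the hierarchy by describing its irqdomain in an ioapic_domain_cfg and registering it; mp_irqdomain_alloc()/mp_irqdomain_free() then run as the domain's ops and obtain parent vectors transparently. A hedged sketch using only interfaces from this diff; the id, MMIO address, and GSI base are example values:

#include <asm/irqdomain.h>
#include <asm/io_apic.h>

static int example_register_ioapic(void)
{
	struct ioapic_domain_cfg cfg = {
		.type	= IOAPIC_DOMAIN_DYNAMIC,	/* irqs allocated on demand */
		.ops	= &mp_ioapic_irqdomain_ops,
	};

	/* example values: IOAPIC id 2 at the usual MMIO address, GSI base 24 */
	return mp_register_ioapic(2, 0xfec00000, 24, &cfg);
}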
arch/x86/include/asm/mce.h
@@ -17,11 +17,16 @@
 #define MCG_EXT_CNT(c)		(((c) & MCG_EXT_CNT_MASK) >> MCG_EXT_CNT_SHIFT)
 #define MCG_SER_P		(1ULL<<24)   /* MCA recovery/new status bits */
 #define MCG_ELOG_P		(1ULL<<26)   /* Extended error log supported */
+#define MCG_LMCE_P		(1ULL<<27)   /* Local machine check supported */

 /* MCG_STATUS register defines */
 #define MCG_STATUS_RIPV		(1ULL<<0)    /* restart ip valid */
 #define MCG_STATUS_EIPV		(1ULL<<1)    /* ip points to correct instruction */
 #define MCG_STATUS_MCIP		(1ULL<<2)    /* machine check in progress */
+#define MCG_STATUS_LMCES	(1ULL<<3)    /* LMCE signaled */
+
+/* MCG_EXT_CTL register defines */
+#define MCG_EXT_CTL_LMCE_EN	(1ULL<<0)    /* Enable LMCE */

 /* MCi_STATUS register defines */
 #define MCI_STATUS_VAL		(1ULL<<63)   /* valid error */
@@ -104,6 +109,7 @@ struct mce_log {
 struct mca_config {
 	bool dont_log_ce;
 	bool cmci_disabled;
+	bool lmce_disabled;
 	bool ignore_ce;
 	bool disabled;
 	bool ser;
@@ -117,8 +123,19 @@ struct mca_config {
 };

 struct mce_vendor_flags {
-	__u64	overflow_recov	: 1, /* cpuid_ebx(80000007) */
-		__reserved_0	: 63;
+	/*
+	 * overflow recovery cpuid bit indicates that overflow
+	 * conditions are not fatal
+	 */
+	__u64	overflow_recov	: 1,
+
+	/*
+	 * SUCCOR stands for S/W UnCorrectable error COntainment
+	 * and Recovery. It indicates support for data poisoning
+	 * in HW and deferred error interrupts.
+	 */
+		succor		: 1,
+
+		__reserved_0	: 62;
 };
 extern struct mce_vendor_flags mce_flags;

@@ -168,12 +185,16 @@ void cmci_clear(void);
 void cmci_reenable(void);
 void cmci_rediscover(void);
 void cmci_recheck(void);
+void lmce_clear(void);
+void lmce_enable(void);
 #else
 static inline void mce_intel_feature_init(struct cpuinfo_x86 *c) { }
 static inline void cmci_clear(void) {}
 static inline void cmci_reenable(void) {}
 static inline void cmci_rediscover(void) {}
 static inline void cmci_recheck(void) {}
+static inline void lmce_clear(void) {}
+static inline void lmce_enable(void) {}
 #endif

 #ifdef CONFIG_X86_MCE_AMD
@@ -223,6 +244,9 @@ void do_machine_check(struct pt_regs *, long);
 extern void (*mce_threshold_vector)(void);
 extern void (*threshold_cpu_callback)(unsigned long action, unsigned int cpu);

+/* Deferred error interrupt handler */
+extern void (*deferred_error_int_vector)(void);
+
 /*
  * Thermal handler
  */
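The reworked mce_vendor_flags packs the CPUID-derived capability bits into one 64-bit word. A stand-alone illustration of the same bitfield layout, as a simplified re-declaration rather than the kernel's type:

#include <stdio.h>
#include <stdint.h>

struct vendor_flags {
	uint64_t overflow_recov	: 1,	/* overflowed errors are recoverable */
		 succor		: 1,	/* data poisoning + deferred error irq */
		 reserved	: 62;
};

int main(void)
{
	struct vendor_flags f = { .overflow_recov = 1, .succor = 1 };

	/* on common ABIs all 64 flag bits still fit in a single quadword */
	printf("sizeof = %zu, succor = %u\n", sizeof(f), (unsigned)f.succor);
	return 0;
}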
arch/x86/include/asm/msi.h (new file, 7 lines)
@@ -0,0 +1,7 @@
+#ifndef _ASM_X86_MSI_H
+#define _ASM_X86_MSI_H
+#include <asm/hw_irq.h>
+
+typedef struct irq_alloc_info msi_alloc_info_t;
+
+#endif /* _ASM_X86_MSI_H */
arch/x86/include/asm/msr-index.h
@@ -56,6 +56,7 @@
 #define MSR_IA32_MCG_CAP		0x00000179
 #define MSR_IA32_MCG_STATUS		0x0000017a
 #define MSR_IA32_MCG_CTL		0x0000017b
+#define MSR_IA32_MCG_EXT_CTL		0x000004d0

 #define MSR_OFFCORE_RSP_0		0x000001a6
 #define MSR_OFFCORE_RSP_1		0x000001a7
@@ -380,6 +381,7 @@
 #define FEATURE_CONTROL_LOCKED				(1<<0)
 #define FEATURE_CONTROL_VMXON_ENABLED_INSIDE_SMX	(1<<1)
 #define FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX	(1<<2)
+#define FEATURE_CONTROL_LMCE				(1<<20)

 #define MSR_IA32_APICBASE		0x0000001b
 #define MSR_IA32_APICBASE_BSP		(1<<8)
arch/x86/include/asm/msr.h
@@ -1,13 +1,14 @@
 #ifndef _ASM_X86_MSR_H
 #define _ASM_X86_MSR_H

-#include <uapi/asm/msr.h>
+#include "msr-index.h"

 #ifndef __ASSEMBLY__

 #include <asm/asm.h>
 #include <asm/errno.h>
 #include <asm/cpumask.h>
+#include <uapi/asm/msr.h>

 struct msr {
 	union {
@@ -205,8 +206,13 @@ do {                                                            \

 #endif /* !CONFIG_PARAVIRT */

-#define wrmsrl_safe(msr, val) wrmsr_safe((msr), (u32)(val),		\
-					     (u32)((val) >> 32))
+/*
+ * 64-bit version of wrmsr_safe():
+ */
+static inline int wrmsrl_safe(u32 msr, u64 val)
+{
+	return wrmsr_safe(msr, (u32)val, (u32)(val >> 32));
+}

 #define write_tsc(low, high) wrmsr(MSR_IA32_TSC, (low), (high))
|
||||
* arch_phys_wc_add and arch_phys_wc_del.
|
||||
*/
|
||||
# ifdef CONFIG_MTRR
|
||||
extern u8 mtrr_type_lookup(u64 addr, u64 end);
|
||||
extern u8 mtrr_type_lookup(u64 addr, u64 end, u8 *uniform);
|
||||
extern void mtrr_save_fixed_ranges(void *);
|
||||
extern void mtrr_save_state(void);
|
||||
extern int mtrr_add(unsigned long base, unsigned long size,
|
||||
@ -48,14 +48,13 @@ extern void mtrr_aps_init(void);
|
||||
extern void mtrr_bp_restore(void);
|
||||
extern int mtrr_trim_uncached_memory(unsigned long end_pfn);
|
||||
extern int amd_special_default_mtrr(void);
|
||||
extern int phys_wc_to_mtrr_index(int handle);
|
||||
# else
|
||||
static inline u8 mtrr_type_lookup(u64 addr, u64 end)
|
||||
static inline u8 mtrr_type_lookup(u64 addr, u64 end, u8 *uniform)
|
||||
{
|
||||
/*
|
||||
* Return no-MTRRs:
|
||||
*/
|
||||
return 0xff;
|
||||
return MTRR_TYPE_INVALID;
|
||||
}
|
||||
#define mtrr_save_fixed_ranges(arg) do {} while (0)
|
||||
#define mtrr_save_state() do {} while (0)
|
||||
@ -84,10 +83,6 @@ static inline int mtrr_trim_uncached_memory(unsigned long end_pfn)
|
||||
static inline void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi)
|
||||
{
|
||||
}
|
||||
static inline int phys_wc_to_mtrr_index(int handle)
|
||||
{
|
||||
return -1;
|
||||
}
|
||||
|
||||
#define mtrr_ap_init() do {} while (0)
|
||||
#define mtrr_bp_init() do {} while (0)
|
||||
@ -127,4 +122,8 @@ struct mtrr_gentry32 {
|
||||
_IOW(MTRR_IOCTL_BASE, 9, struct mtrr_sentry32)
|
||||
#endif /* CONFIG_COMPAT */
|
||||
|
||||
/* Bit fields for enabled in struct mtrr_state_type */
|
||||
#define MTRR_STATE_MTRR_FIXED_ENABLED 0x01
|
||||
#define MTRR_STATE_MTRR_ENABLED 0x02
|
||||
|
||||
#endif /* _ASM_X86_MTRR_H */
|
||||
|
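The new u8 *uniform out-parameter reports whether the whole queried range is covered by a single MTRR type, which the PAT code needs before it can trust the result for a large mapping. A hedged kernel-style sketch of the calling pattern; the function name and range handling are invented:

#include <asm/mtrr.h>

static void example_check_range(u64 start, u64 end)
{
	u8 uniform;
	u8 type = mtrr_type_lookup(start, end, &uniform);

	if (type != MTRR_TYPE_INVALID && uniform) {
		/* one MTRR type covers [start, end), safe to honor it */
	}
}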
arch/x86/include/asm/paravirt_types.h
@@ -160,13 +160,14 @@ struct pv_cpu_ops {
 	u64 (*read_pmc)(int counter);
 	unsigned long long (*read_tscp)(unsigned int *aux);

+#ifdef CONFIG_X86_32
 	/*
 	 * Atomically enable interrupts and return to userspace.  This
-	 * is only ever used to return to 32-bit processes; in a
-	 * 64-bit kernel, it's used for 32-on-64 compat processes, but
-	 * never native 64-bit processes.  (Jump, not call.)
+	 * is only used in 32-bit kernels.  64-bit kernels use
+	 * usergs_sysret32 instead.
 	 */
 	void (*irq_enable_sysexit)(void);
+#endif

 	/*
 	 * Switch to usermode gs and return to 64-bit usermode using
|
||||
#include <linux/types.h>
|
||||
#include <asm/pgtable_types.h>
|
||||
|
||||
#ifdef CONFIG_X86_PAT
|
||||
extern int pat_enabled;
|
||||
#else
|
||||
static const int pat_enabled;
|
||||
#endif
|
||||
|
||||
bool pat_enabled(void);
|
||||
extern void pat_init(void);
|
||||
void pat_init_cache_modes(void);
|
||||
void pat_init_cache_modes(u64);
|
||||
|
||||
extern int reserve_memtype(u64 start, u64 end,
|
||||
enum page_cache_mode req_pcm, enum page_cache_mode *ret_pcm);
|
||||
|
@ -96,15 +96,10 @@ extern void pci_iommu_alloc(void);
|
||||
#ifdef CONFIG_PCI_MSI
|
||||
/* implemented in arch/x86/kernel/apic/io_apic. */
|
||||
struct msi_desc;
|
||||
void native_compose_msi_msg(struct pci_dev *pdev, unsigned int irq,
|
||||
unsigned int dest, struct msi_msg *msg, u8 hpet_id);
|
||||
int native_setup_msi_irqs(struct pci_dev *dev, int nvec, int type);
|
||||
void native_teardown_msi_irq(unsigned int irq);
|
||||
void native_restore_msi_irqs(struct pci_dev *dev);
|
||||
int setup_msi_irq(struct pci_dev *dev, struct msi_desc *msidesc,
|
||||
unsigned int irq_base, unsigned int irq_offset);
|
||||
#else
|
||||
#define native_compose_msi_msg NULL
|
||||
#define native_setup_msi_irqs NULL
|
||||
#define native_teardown_msi_irq NULL
|
||||
#endif
|
||||
|
arch/x86/include/asm/pgtable.h
@@ -398,11 +398,17 @@ static inline int is_new_memtype_allowed(u64 paddr, unsigned long size,
 	 * requested memtype:
 	 * - request is uncached, return cannot be write-back
 	 * - request is write-combine, return cannot be write-back
+	 * - request is write-through, return cannot be write-back
+	 * - request is write-through, return cannot be write-combine
 	 */
 	if ((pcm == _PAGE_CACHE_MODE_UC_MINUS &&
 	     new_pcm == _PAGE_CACHE_MODE_WB) ||
 	    (pcm == _PAGE_CACHE_MODE_WC &&
-	     new_pcm == _PAGE_CACHE_MODE_WB)) {
+	     new_pcm == _PAGE_CACHE_MODE_WB) ||
+	    (pcm == _PAGE_CACHE_MODE_WT &&
+	     new_pcm == _PAGE_CACHE_MODE_WB) ||
+	    (pcm == _PAGE_CACHE_MODE_WT &&
+	     new_pcm == _PAGE_CACHE_MODE_WC)) {
 		return 0;
 	}
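The two new clauses encode the write-through rules: a WT request must not come back as WB or WC, since either would let writes linger in the cache that WT was chosen to push out. A stand-alone re-statement of the check, with a simplified enum rather than the kernel's page_cache_mode:

#include <stdio.h>
#include <stdbool.h>

enum cache_mode { WB, WC, WT, UC_MINUS };

/* mirrors the check above: returns false for forbidden downgrades */
static bool new_memtype_allowed(enum cache_mode req, enum cache_mode ret)
{
	if ((req == UC_MINUS && ret == WB) ||
	    (req == WC && ret == WB) ||
	    (req == WT && ret == WB) ||
	    (req == WT && ret == WC))
		return false;
	return true;
}

int main(void)
{
	printf("WT->WB allowed?  %d\n", new_memtype_allowed(WT, WB));	/* 0 */
	printf("WT->WC allowed?  %d\n", new_memtype_allowed(WT, WC));	/* 0 */
	printf("WT->UC- allowed? %d\n", new_memtype_allowed(WT, UC_MINUS)); /* 1 */
	return 0;
}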
arch/x86/include/asm/pgtable_types.h
@@ -367,6 +367,9 @@ extern int nx_enabled;
 #define pgprot_writecombine	pgprot_writecombine
 extern pgprot_t pgprot_writecombine(pgprot_t prot);

+#define pgprot_writethrough	pgprot_writethrough
+extern pgprot_t pgprot_writethrough(pgprot_t prot);
+
 /* Indicate that x86 has its own track and untrack pfn vma functions */
 #define __HAVE_PFNMAP_TRACKING
arch/x86/include/asm/proto.h
@@ -5,12 +5,14 @@

 /* misc architecture specific prototypes */

-void system_call(void);
 void syscall_init(void);

-void ia32_syscall(void);
-void ia32_cstar_target(void);
-void ia32_sysenter_target(void);
+void entry_SYSCALL_64(void);
+void entry_SYSCALL_compat(void);
+void entry_INT80_32(void);
+void entry_INT80_compat(void);
+void entry_SYSENTER_32(void);
+void entry_SYSENTER_compat(void);

 void x86_configure_nx(void);
 void x86_report_nx(void);