Commit Graph

4401 Commits

Author SHA1 Message Date
Peter Oruba
b6cffde1a2 x86, microcode rework, v2, renaming
Renaming based on patch from Dmitry Adamushko.

Made the code more readable by renaming defines and variables related
to the microcode _container_file_ header, to make it distinguishable from
the microcode _patch_ header.

Signed-off-by: Peter Oruba <peter.oruba@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-23 16:12:59 +02:00
Srinivasa Ds
da654b74bd signals: demultiplexing SIGTRAP signal
Currently a SIGTRAP can denote any one of the reasons below:
	- Breakpoint hit
	- H/W debug register hit
	- Single step
	- Signal sent through kill() or raise()

Architectures like powerpc/parisc provide infrastructure to demultiplex
the SIGTRAP signal by passing down the reason for the SIGTRAP through the
si_code field of the siginfo_t structure. Here is an attempt to generalise
this infrastructure by extending it to the x86 and x86_64 architectures.

Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com>
Cc: Roland McGrath <roland@redhat.com>
Cc: akpm@linux-foundation.org
Cc: paulus@samba.org
Cc: linuxppc-dev@ozlabs.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-23 13:26:52 +02:00
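To illustrate what this demultiplexing enables for user space, here is a minimal
sketch (not part of the patch; handler and message strings are invented, and the
exact si_code values delivered depend on the architecture support described above)
of a SIGTRAP handler inspecting si_code:

	#define _GNU_SOURCE
	#include <signal.h>
	#include <string.h>
	#include <unistd.h>

	static void trap_handler(int sig, siginfo_t *info, void *uctx)
	{
		const char *why;

		switch (info->si_code) {
		case TRAP_BRKPT:	/* breakpoint hit */
			why = "breakpoint\n";
			break;
		case TRAP_TRACE:	/* single step */
			why = "single step\n";
			break;
		case SI_USER:		/* kill()/raise() from user space */
			why = "user-sent signal\n";
			break;
		default:		/* e.g. H/W debug register hit */
			why = "other SIGTRAP source\n";
			break;
		}
		write(STDERR_FILENO, why, strlen(why));
	}

	int main(void)
	{
		struct sigaction sa;

		memset(&sa, 0, sizeof(sa));
		sa.sa_sigaction = trap_handler;
		sa.sa_flags = SA_SIGINFO;
		sigaction(SIGTRAP, &sa, NULL);

		raise(SIGTRAP);		/* delivered with si_code == SI_USER */
		return 0;
	}
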
Ingo Molnar
101d5b7137 Merge branch 'x86/signal' into core/signal
Conflicts:
	arch/x86/kernel/cpu/feature_names.c
	arch/x86/kernel/setup.c
	drivers/pci/intel-iommu.c
	include/asm-x86/cpufeature.h

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-23 13:26:27 +02:00
Dmitry Adamushko
18dbc91605 x86: moved microcode.c to microcode_intel.c
Combine both generic and arch-specific parts of microcode into a
single module (arch-specific parts are config-dependent).

Also while we are at it, move arch-specific parts from microcode.h
into their respective arch-specific .c files.

Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
Cc: "Peter Oruba" <peter.oruba@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-23 12:21:42 +02:00
Andreas Herrmann
09bfeea13c x86: c1e_idle: don't mark TSC unstable if CPU has invariant TSC
Impact: Functional TSC is marked unstable on AMD family 0x10 and 0x11 CPUs.

This would be wrong because for those CPUs "invariant TSC" means:

   "The TSC counts at the same rate in all P-states, all C states, S0,
   or S1"

(See "Processor BIOS and Kernel Developer's Guides" for those CPUs.)

[ tglx: Changed C1E to AMD C1E in the printks to avoid confusion 
	with Intel C1E ]

Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-09-23 11:38:53 +02:00
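For reference, the property quoted above is advertised by AMD CPUs in CPUID leaf
0x80000007 (EDX bit 8, "TscInvariant"); a hedged sketch of such a check, not taken
from the patch itself:

	#include <linux/types.h>
	#include <asm/processor.h>

	/* sketch: true if the TSC counts at the same rate in all P-/C-states */
	static bool cpu_has_invariant_tsc(void)
	{
		if (boot_cpu_data.extended_cpuid_level < 0x80000007)
			return false;

		return cpuid_edx(0x80000007) & (1 << 8);
	}
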
Thomas Gleixner
a8d6829044 x86: prevent C-states hang on AMD C1E enabled machines
Impact: System hang when AMD C1E machines switch into C2/C3

AMD C1E enabled systems do not work with normal ACPI C-states 
even if the BIOS is advertising them. Limit the C-states to 
C1 for the ACPI processor idle code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-09-23 11:38:53 +02:00
Thomas Gleixner
4faac97d44 x86: prevent stale state of c1e_mask across CPU offline/online
Impact: hang which happens across CPU offline/online on AMD C1E systems.

When a CPU goes offline then the corresponding bit in the broadcast
mask is cleared. For AMD C1E enabled CPUs we do not reenable the
broadcast when the CPU comes online again as we do not clear the
corresponding bit in the c1e_mask, which keeps track of which CPUs
have been switched to broadcast already. So on those !$@#& machines
we never switch back to broadcasting after a CPU offline/online cycle.

Clear the bit when the CPU plays dead.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-09-23 11:38:52 +02:00
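A minimal sketch of the fix described above (helper and mask names are
illustrative): when a CPU goes down, drop it from the tracking mask so broadcast
mode is re-entered on the next online.

	/* called from the CPU-offline (play_dead) path of the dying CPU */
	static void c1e_remove_cpu(int cpu)
	{
		cpumask_clear_cpu(cpu, &c1e_mask);
	}
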
Ravikiran G Thirumalai
05e12e1c4c x86: fix 27-rc crash on vsmp due to paravirt during module load
27-rc fails to boot up if configured to use modules.

It turns out vsmp_patch was marked __init; since vsmp_patch is the
pvops 'patch' routine for vsmp, a call to vsmp_patch ends up executing
a code page filled with a series of 0xcc bytes (POISON_FREE_INITMEM -- int3).

vsmp_patch has been marked __init ever since pvops; however,
apply_paravirt can be called during module load, causing calls into a
freed memory location.

Since apply_paravirt can only be called during init/module load, mark
vsmp_patch "__init_or_module".

Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-23 10:31:26 +02:00
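A hedged sketch of the annotation being described (the parameter list follows the
usual pvops patch callback shape and is illustrative): code reachable from both
early init and module load must not live in init-only sections.

	#include <linux/init.h>

	/*
	 * __init text is freed after boot; __init_or_module keeps it around
	 * when CONFIG_MODULES is set, because apply_paravirt() may call this
	 * again at module load time.
	 */
	static unsigned __init_or_module vsmp_patch(u8 type, u16 clobbers,
						    void *ibuf,
						    unsigned long addr,
						    unsigned len)
	{
		/* ... patch the pvops call site ... */
		return 0;
	}
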
Yinghai Lu
a8b71a2810 x86: fix macro with bad_bios_dmi_table
DMI tables need a blank NULL tail.

This fixes the crash on Ingo's test box.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-23 10:13:29 +02:00
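For reference, a correctly terminated DMI match table looks roughly like this
(entry contents are illustrative); dmi_check_system() walks the array until it
finds an entry with an empty ident, so the all-zero sentinel at the end is
mandatory:

	#include <linux/dmi.h>

	static int __init low_mem_quirk(const struct dmi_system_id *d)
	{
		/* apply the workaround for the matched system */
		return 0;
	}

	static struct dmi_system_id __initdata bad_bios_dmi_table[] = {
		{
			.callback = low_mem_quirk,
			.ident = "Example BIOS",
			.matches = {
				DMI_MATCH(DMI_BIOS_VENDOR, "Example Vendor"),
			},
		},
		{ }	/* the blank NULL tail this commit is about */
	};
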
FUJITA Tomonori
afa9fdc2f5 iommu: remove fullflush and nofullflush in IOMMU generic option
This patch against tip/x86/iommu virtually reverts
2842e5bf31. But just reverting the
commit breaks AMD IOMMU so this patch also includes some fixes.

The above commit adds two new options to the x86 IOMMU generic kernel boot
options, fullflush and nofullflush. But a change that affects all
the IOMMUs needs more discussion (all IOMMU parties need the chance to
discuss it):

http://lkml.org/lkml/2008/9/19/106

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-22 20:43:37 +02:00
Aristeu Rozanski
b3e15bdef6 x86, NMI watchdog: setup before enabling NMI watchdog
There's a small window while the NMI watchdog is being set up during which,
if any NMIs are triggered, the NMI code will make use of uninitialized wd_ops
elements:
	void setup_apic_nmi_watchdog(void *unused)
	{
		if (__get_cpu_var(wd_enabled))
			return;

		/* cheap hack to support suspend/resume */
		/* if cpu0 is not active neither should the other cpus */
		if (smp_processor_id() != 0 && atomic_read(&nmi_active) <= 0)
			return;

		switch (nmi_watchdog) {
		case NMI_LOCAL_APIC:
			/* enable it before to avoid race with handler */
-->			__get_cpu_var(wd_enabled) = 1;
-->			if (lapic_watchdog_init(nmi_hz) < 0) {
(...)
	asmlinkage notrace __kprobes void default_do_nmi(struct pt_regs *regs)
	{
	(...)
			if (nmi_watchdog_tick(regs, reason))
				return;
(...)
	notrace __kprobes int
	nmi_watchdog_tick(struct pt_regs *regs, unsigned reason)
	{
	(...)
		if (!__get_cpu_var(wd_enabled))
			return rc;
		switch (nmi_watchdog) {
		case NMI_LOCAL_APIC:
			rc |= lapic_wd_event(nmi_hz);
(...)
int lapic_wd_event(unsigned nmi_hz)
{
	struct nmi_watchdog_ctlblk *wd = &__get_cpu_var(nmi_watchdog_ctlblk);
	u64 ctr;

-->	rdmsrl(wd->perfctr_msr, ctr);

and wd->*_msr will be initialized in each processor-type-specific setup, after
enabling NMIs for PMIs. Since the counter was just set, the chances of a
performance-counter-generated NMI are minimal, but any other unknown NMI would
trigger the problem. This patch fixes the problem by setting everything up
before enabling performance-counter-generated NMIs and sets wd_enabled
using a callback function.

Signed-off-by: Aristeu Rozanski <aris@redhat.com>
Acked-by: Don Zickus <dzickus@redhat.com>
Acked-by: Prarit Bhargava <prarit@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-22 19:48:19 +02:00
Aristeu Rozanski
28b166a700 x86, NMI watchdog: when booting with reset_devices, clear the performance counters
P4s have a quirk that makes it necessary to clear the P4_CCCR_OVF bit on the
CCCR every time the PMI is triggered. When booting the kernel with reset_devices
(more specifically the kdump case), the counters reach zero and the PMI will be
generated. This is not a problem on other processors, but on P4s it will
continue to generate NMIs until that bit is cleared. Since there may be
other users of the performance counters, clear and disable all of them
when booting with the reset_devices option.

We have a P4 box here that crashes because of this problem. Since the kdump
kernel usually boots with only one processor active, the second logical
unit won't be set up, therefore, MSR_P4_IQ_CCCR1 (and other performance
counter registers) won't be cleared and P4_CCCR_OVF may still be set because
the previous kernel was using this register. An NMI is triggered because of
MSR_P4_IQ_CCCR1 right after NMI delivery is enabled, triggering the
race fixed by my previous patch.

Signed-off-by: Aristeu Rozanski <aris@redhat.com>
Acked-by: Don Zickus <dzickus@redhat.com>
Acked-by: Prarit Bhargava <prarit@redhat.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-22 19:48:18 +02:00
FUJITA Tomonori
d26dbc5cf9 iommu: export iommu_area_reserve helper function
x86 has set_bit_string() that does the exact same thing that
set_bit_area() in lib/iommu-helper.c does.

This patch exports set_bit_area() in lib/iommu-helper.c as
iommu_area_reserve(), converts GART, Calgary, and AMD IOMMU to use it.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-22 16:47:50 +02:00
Yinghai Lu
16dc552f35 x86: use WARN_ONCE in workaround for mtrr mask
so it can help draw attention to BIOS bugs in MTRR mask setup.

WARN_ONCE got into mainline already, so let's use it.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-22 13:09:56 +02:00
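The pattern, sketched with an invented condition and message (not the actual
hunk): WARN_ONCE() prints the message plus a backtrace exactly once, which is
enough to flag the firmware problem without spamming the log.

	#include <linux/kernel.h>

	/* 'mask_is_sane' stands in for the real MTRR mask validity check */
	WARN_ONCE(!mask_is_sane,
		  "mtrr: BIOS has set up an incorrect MTRR mask, fixing it up\n");
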
Ingo Molnar
0b88641f1b Merge commit 'v2.6.27-rc7' into x86/debug 2008-09-22 13:08:57 +02:00
Akinobu Mita
153dab77e2 x86: use platform_device_register_simple()
Clean up pcspeaker.c

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-22 12:58:36 +02:00
Akinobu Mita
af2d237bf5 x86: check for ioremap() failure in copy_oldmem_page()
Add a check for ioremap() failure in copy_oldmem_page().
This patch also includes small coding style fixes.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-22 12:15:33 +02:00
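A hedged sketch of the shape of the check (variable names follow the usual
copy_oldmem_page() body and are illustrative): ioremap() can fail, and its return
value must be tested before the copy.

	void *vaddr;

	vaddr = ioremap(pfn << PAGE_SHIFT, PAGE_SIZE);
	if (!vaddr)
		return -ENOMEM;		/* previously this NULL went unchecked */

	if (userbuf) {
		if (copy_to_user(buf, vaddr + offset, csize)) {
			iounmap(vaddr);
			return -EFAULT;
		}
	} else {
		memcpy(buf, vaddr + offset, csize);
	}
	iounmap(vaddr);
	return csize;
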
Yinghai Lu
2216d199b1 x86: fix CONFIG_X86_RESERVE_LOW_64K=y
The bad_bios_dmi_table() quirk never triggered because we do DMI setup
too late. Move it a bit earlier.

Also change the CONFIG_X86_RESERVE_LOW_64K quirk to operate on the e820
table directly instead of messing with early reservations - this handles
overlaps (which do occur in this low range of RAM) more gracefully.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-22 12:04:38 +02:00
Andrea Righi
b61e06f258 x86, oprofile: BUG scheduling while atomic
nmi_shutdown() calls unregister_die_notifier() from an atomic context,
after disabling preemption via get_cpu_var():

[ 1049.404154] BUG: scheduling while atomic: oprofiled/7796/0x00000002
[ 1049.404171] INFO: lockdep is turned off.
[ 1049.404176] Modules linked in: oprofile af_packet rfcomm l2cap kvm_intel kvm i915 drm acpi_cpufreq cpufreq_userspace cpufreq_conservative cpufreq_ondemand cpufreq_powersave freq_table container sbs sbshc dm_mod arc4 ecb cryptomgr aead snd_hda_intel crypto_blkcipher snd_pcm_oss crypto_algapi snd_pcm iwlagn iwlcore snd_timer iTCO_wdt led_class btusb iTCO_vendor_support snd psmouse bluetooth mac80211 soundcore cfg80211 snd_page_alloc intel_agp video output button battery ac dcdbas evdev ext3 jbd mbcache sg sd_mod piix ata_piix libata scsi_mod dock tg3 libphy ehci_hcd uhci_hcd usbcore thermal processor fan fuse
[ 1049.404362] Pid: 7796, comm: oprofiled Not tainted 2.6.27-rc5-mm1 #30
[ 1049.404368] Call Trace:
[ 1049.404384]  [<ffffffff804769fd>] thread_return+0x4a0/0x7d3
[ 1049.404396]  [<ffffffff8026ad92>] generic_exec_single+0x52/0xe0
[ 1049.404405]  [<ffffffff8026ae1a>] generic_exec_single+0xda/0xe0
[ 1049.404414]  [<ffffffff8026aee3>] smp_call_function_single+0x73/0x150
[ 1049.404423]  [<ffffffff804770c5>] schedule_timeout+0x95/0xd0
[ 1049.404430]  [<ffffffff80476083>] wait_for_common+0x43/0x180
[ 1049.404438]  [<ffffffff80476154>] wait_for_common+0x114/0x180
[ 1049.404448]  [<ffffffff80236980>] default_wake_function+0x0/0x10
[ 1049.404457]  [<ffffffff8024f810>] synchronize_rcu+0x30/0x40
[ 1049.404463]  [<ffffffff8024f890>] wakeme_after_rcu+0x0/0x10
[ 1049.404472]  [<ffffffff80479ca0>] _spin_unlock_irqrestore+0x40/0x80
[ 1049.404482]  [<ffffffff80256def>] atomic_notifier_chain_unregister+0x3f/0x60
[ 1049.404501]  [<ffffffffa03d8801>] nmi_shutdown+0x51/0x90 [oprofile]
[ 1049.404517]  [<ffffffffa03d6134>] oprofile_shutdown+0x34/0x70 [oprofile]
[ 1049.404532]  [<ffffffffa03d721e>] event_buffer_release+0xe/0x40 [oprofile]
[ 1049.404543]  [<ffffffff802bdcdd>] __fput+0xcd/0x240
[ 1049.404551]  [<ffffffff802baa74>] filp_close+0x54/0x90
[ 1049.404560]  [<ffffffff8023e1d1>] put_files_struct+0xb1/0xd0
[ 1049.404568]  [<ffffffff8023f82f>] do_exit+0x18f/0x930
[ 1049.404576]  [<ffffffff8020be03>] restore_args+0x0/0x30
[ 1049.404584]  [<ffffffff80240006>] do_group_exit+0x36/0xa0
[ 1049.404592]  [<ffffffff8020b7cb>] system_call_fastpath+0x16/0x1b

This can be easily triggered with 'opcontrol --shutdown'.

Simply move get_cpu_var() above unregister_die_notifier().

Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
Acked-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-22 11:54:24 +02:00
Arjan van de Ven
5871c6b0a5 x86: use round_jiffies() for the corruption check timer
The exact timing of the corruption check isn't too important (it's a
once-a-minute timer), so use round_jiffies() to align it and avoid extra wakeups.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-22 10:29:25 +02:00
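The pattern, as a minimal sketch (timer name invented): rounding the expiry to a
whole jiffy-second lets this wakeup coalesce with other low-precision timers.

	#include <linux/timer.h>
	#include <linux/jiffies.h>

	/* re-arm the roughly once-a-minute check; exact expiry is unimportant */
	mod_timer(&periodic_check_timer,
		  round_jiffies(jiffies + 60 * HZ));
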
Linus Torvalds
06d4a22be3 Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: completely disable NOPL on 32 bits
  x86/paravirt: Remove duplicate paravirt_pagetable_setup_{start, done}()
  xen: fix for xen guest with mem > 3.7G
  x86: fix possible x86_64 and EFI regression
  arch/x86/kernel/kdebugfs.c: introduce missing kfree
2008-09-19 16:11:09 -07:00
Joerg Roedel
832a90c304 AMD IOMMU: use coherent_dma_mask in alloc_coherent
The alloc_coherent implementation for the AMD IOMMU currently uses
*dev->dma_mask by default. This patch changes it to prefer
dev->coherent_dma_mask if it is set.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:34 +02:00
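The mask selection being described, as a minimal sketch (not the driver's literal
code):

	u64 dma_mask;

	/* coherent allocations should honour coherent_dma_mask when set */
	if (dev->coherent_dma_mask)
		dma_mask = dev->coherent_dma_mask;
	else
		dma_mask = *dev->dma_mask;
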
Joerg Roedel
23c1713fe9 AMD IOMMU: use cmd_buf_size when freeing the command buffer
The command buffer release function uses the CMD_BUF_SIZE macro for
get_order. Replace this with iommu->cmd_buf_size, which more reliably
reflects the actual size of the buffer.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:31 +02:00
Joerg Roedel
b514e55569 AMD IOMMU: calculate IVHD size with a function
The current calculation of the IVHD entry size is hard to read. So move
this code to a separate function to make it clearer what this
calculation does.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:30 +02:00
Joerg Roedel
199d0d5012 AMD IOMMU: remove unnecessary cast to u64 in the init code
The ctrl variable is only u32 and readl also returns a 32 bit value. So
the cast to u64 is pointless. Remove it with this patch.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:29 +02:00
Joerg Roedel
d58befd3a0 AMD IOMMU: free domain bitmap with its allocation order
The amd_iommu_pd_alloc_bitmap is allocated with a calculated order and
freed with order 1. This is not a bug since the calculated order always
evaluates to 1, but it's unclean code. So replace the 1 with the
calculation in the release path.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:27 +02:00
Joerg Roedel
6754086ce6 AMD IOMMU: simplify dma_mask_to_pages
The current calculation is very complicated. This patch replaces it with
a much simpler version.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:26 +02:00
Joerg Roedel
c97ac5359e AMD IOMMU: replace memset with __GFP_ZERO in alloc_coherent
Remove the memset and use __GFP_ZERO at allocation time instead.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:25 +02:00
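The transformation, sketched on an illustrative allocation site: request zeroed
pages from the allocator instead of zeroing them afterwards.

	/* before:
	 *	virt_addr = (void *)__get_free_pages(flag, get_order(size));
	 *	memset(virt_addr, 0, size);
	 *
	 * after:
	 */
	virt_addr = (void *)__get_free_pages(flag | __GFP_ZERO,
					     get_order(size));
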
FUJITA Tomonori
13d9fead3d AMD IOMMU: avoid unnecessary low zone allocation in alloc_coherent
x86's common alloc_coherent (dma_alloc_coherent in dma-mapping.h) sets
up the gfp flag according to the device dma_mask but AMD IOMMU doesn't
need it for devices that the IOMMU can do virtual mappings for. This
patch avoids unnecessary low zone allocation.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:24 +02:00
Joerg Roedel
38ddf41b19 AMD IOMMU: some set_device_domain cleanups
Remove some magic numbers and split the pte_root using standard
functions.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:22 +02:00
Joerg Roedel
bd60b735c6 AMD IOMMU: don't assign preallocated protection domains to devices
In isolation mode the protection domains for the devices are
preallocated and preassigned. This is bad if a device should be passed
to a virtualization guest because the IOMMU code does not know if it is
in use by a driver. This patch changes the code to assign the device to
the preallocated domain only if there are dma mapping requests for it.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:21 +02:00
Joerg Roedel
b39ba6ad00 AMD IOMMU: add dma_supported callback
This function determines if the AMD IOMMU implementation is responsible
for a given device. So the DMA layer can get this information from the
driver.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:20 +02:00
Joerg Roedel
a22131a223 AMD IOMMU: allow IO page faults from devices
There is a bit in the device entry to suppress all IO page faults
generated by a device. This bit was set until now because there was no
event logging. Now that there is event logging, this patch allows IO page
faults from devices, so they can be seen in the kernel log.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:19 +02:00
Joerg Roedel
126c52be4b AMD IOMMU: enable event logging
The code to log IOMMU events is in place now. So enable event logging
with this patch.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:17 +02:00
Joerg Roedel
90008ee4b8 AMD IOMMU: add event handling code
This patch adds code for polling and printing out events generated by
the AMD IOMMU.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:16 +02:00
Joerg Roedel
a80dc3e0e0 AMD IOMMU: add MSI interrupt support
The AMD IOMMU can generate interrupts for various reasons. This patch
adds the basic interrupt enabling infrastructure to the driver.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:15 +02:00
Joerg Roedel
3eaf28a1cd AMD IOMMU: save pci_dev instead of devid
We need the pci_dev later anyway to enable MSI for the IOMMU hardware.
So remove the devid pointing to the BDF and replace it with the pci_dev
structure where the IOMMU is implemented.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:13 +02:00
Joerg Roedel
ee893c24ed AMD IOMMU: save pci segment from ACPI tables
This patch adds the pci_seg field to the amd_iommu structure and fills
it with the corresponding value from the ACPI table.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:12 +02:00
Joerg Roedel
335503e57b AMD IOMMU: add event buffer allocation
This patch adds the allocation of an event buffer for each AMD IOMMU in
the system. The hardware will log events like device page faults and
other errors to this buffer once this is enabled.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:11 +02:00
Joerg Roedel
6d4f343f84 AMD IOMMU: align alloc_coherent addresses properly
The API definition for dma_alloc_coherent states that the bus address
has to be aligned to the next power-of-2 boundary greater than the
allocation size. The AMD IOMMU has violated this so far, and this patch
fixes it.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:10 +02:00
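For reference, the required alignment can be derived from the allocation size; a
hedged sketch with an invented helper name:

	#include <linux/kernel.h>
	#include <linux/log2.h>
	#include <linux/types.h>

	/* round a bus address up to the next power-of-two boundary that is
	 * at least as large as the allocation size */
	static dma_addr_t align_coherent_addr(dma_addr_t addr, size_t size)
	{
		u64 boundary = roundup_pow_of_two(size);

		return ALIGN(addr, boundary);
	}
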
Joerg Roedel
5507eef835 AMD IOMMU: add branch hints to completion wait checks
This patch adds branch hints to the checks for whether a completion_wait is
necessary. The completion_waits in the mapping paths are unlikely because
they will only happen on software implementations of the AMD IOMMU, which
don't exist today, or with lazy IO/TLB flushing when the allocator wraps
around the address space. With lazy IO/TLB flushing the completion_wait
in the unmapping path is unlikely too.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:08 +02:00
Joerg Roedel
1c65577398 AMD IOMMU: implement lazy IO/TLB flushing
The IO/TLB flushing on every unmapping operation is the most expensive
part of the AMD IOMMU code and is not strictly necessary. It is sufficient to
do the flush before any entries are reused. This patch implements
lazy IO/TLB flushing, which does exactly this.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:07 +02:00
Joerg Roedel
2842e5bf31 x86: move GART TLB flushing options to generic code
The GART currently implements the iommu=[no]fullflush command line
parameters which influence its IO/TLB flushing strategy. This patch
makes these parameters generic so that they can be used by the AMD IOMMU
too.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:06 +02:00
Joerg Roedel
270cab2426 AMD IOMMU: move TLB flushing to the map/unmap helper functions
This patch moves the invocation of the flushing functions to the
map/unmap helpers because it is common code in all dma_ops-relevant
mapping/unmapping code.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:04 +02:00
Joerg Roedel
dbcc112e3b AMD IOMMU: check for invalid device pointers
Currently AMD IOMMU code triggers a BUG_ON if NULL is passed as the
device. This is inconsistent with other IOMMU implementations.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 12:59:03 +02:00
Yinghai Lu
279b0bbba2 x86: fix arch/x86/kernel/cpu/mtrr/main.c warning
fix this warning reported by Andrew Morton:

> arch/x86/kernel/cpu/mtrr/main.c: In function 'mtrr_bp_init':
> arch/x86/kernel/cpu/mtrr/main.c:1170: warning: 'extra_remove_base' may be used uninitialized in this function

the warning is bogus but the logic that prevents uninitialized use
is a bit convoluted so simplify it all.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-19 09:16:06 +02:00
Ingo Molnar
5e51900be6 Merge commit 'v2.6.27-rc6' into x86/cleanups 2008-09-19 09:15:50 +02:00
Joerg Roedel
7e4f88da7b AMD IOMMU: protect completion wait loop with iommu lock
The unlocked polling of the ComWaitInt bit in the IOMMU completion wait
path is racy. Protect it with the iommu lock.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-18 09:25:44 +02:00
Joerg Roedel
ee2fa7435b AMD IOMMU: set iommu sync flag after command queuing
The iommu->need_sync flag must be set after the command is queued to
avoid race conditions.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-18 09:25:04 +02:00
Arjan van de Ven
90f7d25c6b x86: print DMI information in the oops trace
In order to diagnose hard, system-specific issues, it's useful to
have the system name in the oops (as provided by DMI).

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-17 11:53:03 +02:00
H. Peter Anvin
97fc0555da x86 setup: handle more than 8 CPU flag words
Checkin e38e05a858 added a 9th CPU flag
word, but didn't adjust the boot code to match.  This patch adds the
necessary boot code support.

Note: due to a typo in an #if statement, this didn't trigger the #error
it was supposed to.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-16 15:09:26 -07:00
Paul Bolle
9985647891 x86 setup: drop SWAP_DEV
Impact: None (cleanup)

SWAP_DEV is unused since 2.6.23-rc1. The comment was already incorrect
since (at least) 2.6.12.

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-16 09:57:36 -07:00
H. Peter Anvin
ba0593bf55 x86: completely disable NOPL on 32 bits
Completely disable NOPL on 32 bits.  It turns out that Microsoft
Virtual PC is so broken it can't even reliably *fail* in the presence
of NOPL.

This leaves the infrastructure in place but disables it
unconditionally.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-16 09:33:57 -07:00
Ingo Molnar
fc38151947 x86: add X86_RESERVE_LOW_64K
This bugzilla:

  http://bugzilla.kernel.org/show_bug.cgi?id=11237

documents a wide range of systems where the BIOS utilizes the first
64K of physical memory during suspend/resume and other hardware events.

Currently we reserve this memory on all AMI and Phoenix BIOS systems.
Life is too short to hunt subtle memory corruption problems like this,
so we try to be robust by default.

Still, allow this to be overridden: users who want that first 64K
of memory to be available to the kernel can disable the quirk via
CONFIG_X86_RESERVE_LOW_64K=n.

Also, allow the early reservation to overlap with other
early reservations.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-16 12:16:07 +02:00
Ingo Molnar
1e22436eba x86: reserve low 64K on AMI and Phoenix BIOS boxen
There are multiple reports about suspend/resume-related low memory
corruption in this bugzilla:

  http://bugzilla.kernel.org/show_bug.cgi?id=11237

The common pattern is that the corruption is caused by the BIOS,
and that it affects some portion of the first 64K of physical RAM.

So add a DMI quirk.

This will waste 64K RAM on 'good' systems too, but without knowing
the exact nature of this BIOS memory corruption this is the safest
approach.

This might as well solve a wide range of suspend/resume breakages
under Linux.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-16 09:58:02 +02:00
Ingo Molnar
5649b7c303 x86: add DMI quirk for AMI BIOS which corrupts address 0xc000 during resume
Alan Jenkins and Andy Wettstein reported a suspend/resume memory
corruption bug and extensively documented it here:

   http://bugzilla.kernel.org/show_bug.cgi?id=11237

The bug is that the BIOS overwrites 1K of memory at 0xc000 physical,
without registering it in e820 as reserved or giving the kernel any
idea about this.

Detect AMI BIOSen and reserve that 1K.

We work around this bug with a very broad brush (reserving that 1K on all
AMI BIOS systems), as the bug was extremely hard to find and needed several
weeks and lots of debugging and patching.

The bug was found via the CONFIG_X86_CHECK_BIOS_CORRUPTION=y debug feature;
if similar bugs are suspected then this feature can be enabled on other
systems as well to scan low memory for corruption.

Reported-by: Alan Jenkins <alan-jenkins@tuffmail.co.uk>
Reported-by: Andy Wettstein <ajw1980@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-16 09:43:07 +02:00
Ingo Molnar
e3bbaa3cb6 Merge commit 'v2.6.27-rc6' into x86/memory-corruption-check 2008-09-16 09:34:23 +02:00
Alex Nixon
5132895f14 x86/paravirt: Remove duplicate paravirt_pagetable_setup_{start, done}()
They were already called once in arch/x86/kernel/setup.c - we don't need to call them again.

fixes:

  http://bugzilla.kernel.org/show_bug.cgi?id=11485

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 18:10:01 +02:00
Ingo Molnar
f81b691a3d Merge commit 'v2.6.27-rc6' into x86/pat 2008-09-14 17:26:53 +02:00
Jeremy Fitzhardinge
5670a43d71 xen: fix for xen guest with mem > 3.7G
PFN_PHYS() can truncate large addresses unless it's passed a suitably
large type.  This is fixed more generally in the patch series
introducing phys_addr_t, but we need a short-term fix to solve a
Xen regression reported by Roberto De Ioris.

Reported-by: Roberto De Ioris <roberto@unbit.it>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 16:46:34 +02:00
FUJITA Tomonori
f6a32a36ab x86: gart alloc_coherent does virtual mappings only when necessary
gart alloc_coherent needs to do virtual mappings only when an
allocated buffer is not DMA-capable for a device.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 16:43:58 +02:00
FUJITA Tomonori
f10ac8a232 x86: avoid unnecessary low zone allocation in Calgary's alloc_coherent
x86's common alloc_coherent (dma_alloc_coherent in dma-mapping.h) sets
up the gfp flag according to the device dma_mask but Calgary doesn't
need it because of virtual mappings. This patch avoids unnecessary low
zone allocation.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Muli Ben-Yehuda <muli@il.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 16:43:58 +02:00
FUJITA Tomonori
bee44f294e x86: make GART to respect device's dma_mask about virtual mappings
Currently, the GART IOMMU ignores the device's dma_mask when it does virtual
mappings. So it could give a device a virtual address that the device
can't access.

This patch fixes the above problem.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 16:42:37 +02:00
Hiroshi Shimamoto
e6babb6b7f x86: signal: introduce do_rt_sigreturn()
Introduce do_rt_sigreturn() to collect the common part of sys_rt_sigreturn().

No change in functionality intended.

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 15:35:52 +02:00
Hiroshi Shimamoto
2ba48e16e7 x86: signal: remove unneeded err handling
This patch eliminates unused or unneeded variable handling.

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 15:35:51 +02:00
Hiroshi Shimamoto
3d0aedd953 x86: signal: put give_sigsegv of setup frames together
When setting up a frame fails, force_sigsegv() is called and -EFAULT is returned.
There is similar code in ia32_setup_frame(), ia32_setup_rt_frame(),
__setup_frame() and __setup_rt_frame().

Make them identical.

No change in functionality intended.

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 15:35:34 +02:00
Alexey Dobriyan
b899219572 x86: simpler SYSVIPC_COMPAT definition
X86_64 part is entirely redundant.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 14:57:15 +02:00
Ingo Molnar
a1c75cc501 x86, microcode rework, v2, fix
Based on a patch from Dmitry Adamushko.

- add missing vfree()
- update debug printks

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 14:53:00 +02:00
Ingo Molnar
a9853dd6d2 x86: cpuid, fix typo
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 14:46:58 +02:00
Yinghai Lu
afae865613 x86: move transmeta cap read to early_init_transmeta()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 14:09:14 +02:00
Yinghai Lu
aef93c8bd5 x86: identify_cpu_without_cpuid v2
Krzysztof found some old Cyrix CPUs where an MTRR-alike CPU feature was
not detected properly.

This one is based on Krzysztof's patch, and we call ->c_identify() in
early_identify_cpu.

We need to call ->c_identify() for CPUs without cpuid even earlier ...

v2: Krzysztof pointed out that we need to give Cyrix another chance at cpuid
    checking, after ->c_identify() enables cpuid for it

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Cc: Krzysztof Helt <krzysztof.h1@wp.pl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 14:09:13 +02:00
Ingo Molnar
6e03f99803 Merge branch 'linus' into x86/iommu
Conflicts:
	lib/swiotlb.c

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-14 14:07:00 +02:00
Dmitry Adamushko
a0a29b62a9 x86, microcode rework, v2
this is a rework of the microcode splitup in tip/x86/microcode

(1) I think this new interface is cleaner (look at the changes
    in 'struct microcode_ops' in microcode.h);

(2) it's -64 lines of code;

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-12 12:20:27 +02:00
Jeremy Fitzhardinge
0ad5bce740 x86: fix possible x86_64 and EFI regression
Russ Anderson reported a boot crash with EFI and latest mainline:

 BIOS-e820: 00000000fffa0000 - 00000000fffac000 (reserved)
Pid: 0, comm: swapper Not tainted 2.6.27-rc5-00100-gec0c15a-dirty #5

Call Trace:
 [<ffffffff80849195>] early_idt_handler+0x55/0x69
 [<ffffffff80313e52>] __memcpy+0x12/0xa4
 [<ffffffff80859015>] efi_init+0xce/0x932
 [<ffffffff80869c83>] setup_early_serial8250_console+0x2d/0x36a
 [<ffffffff80238688>] __insert_resource+0x18/0xc8
 [<ffffffff8084f6de>] setup_arch+0x3a7/0x632
 [<ffffffff808499ed>] start_kernel+0x91/0x367
 [<ffffffff80849393>] x86_64_start_kernel+0xe3/0xe7
 [<ffffffff808492b0>] x86_64_start_kernel+0x0/0xe7

 RIP 0x10

Such a crash is possible if the CPU in this system is a 64-bit
processor which doesn't support NX (i.e., old Intel P4-based 64-bit
processors).

Certainly, if we support such processors, then we should start with
_PAGE_NX initially clear in __supported_pte_flags, and then set it once
we've established that the processor does indeed support NX.  That will
prevent early_ioremap - or anything else - from trying to set it.

The simple fix is to simply call check_efer() earlier.

Reported-by: Russ Anderson <rja@sgi.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-12 11:40:57 +02:00
Sheng Yang
534e38b447 KVM: VMX: Always return old for clear_flush_young() when using EPT
Also discard the fake accessed bit and dirty bit of EPT.

Signed-off-by: Sheng Yang <sheng.yang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
2008-09-11 11:48:19 +03:00
Joerg Roedel
e5eab0cede KVM: SVM: fix guest global tlb flushes with NPT
Accesses to CR4 are intercepted even with Nested Paging enabled. But the code
does not check if the guest wants to do a global TLB flush. So this flush gets
lost. This patch adds the check and the flush to svm_set_cr4.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
2008-09-11 11:39:25 +03:00
Joerg Roedel
44874f8491 KVM: SVM: fix random segfaults with NPT enabled
This patch introduces a guest TLB flush on every NPF exit in KVM. This fixes
random segfaults and #UD exceptions in the guest seen under some workloads
(e.g. long-running compile workloads or tbench). A kernbench run with and
without that fix showed that it has a slowdown lower than 0.5%.

Cc: stable@kernel.org
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@qumranet.com>
2008-09-11 11:31:53 +03:00
Jeremy Fitzhardinge
6a9e91846b xen: fix pinning when not using split pte locks
We only pin PTE pages when using split PTE locks, so don't do the
pin/unpin when attaching/detaching pte pages to a pinned pagetable.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 14:05:53 +02:00
Ingo Molnar
3ce9bcb583 Merge branch 'core/xen' into x86/xen 2008-09-10 14:05:45 +02:00
Jeremy Fitzhardinge
f7d0b926ac mm: define USE_SPLIT_PTLOCKS rather than repeating expression
Define USE_SPLIT_PTLOCKS as a constant expression rather than repeating
"NR_CPUS >= CONFIG_SPLIT_PTLOCK_CPUS" all over the place.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Acked-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 14:04:59 +02:00
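Per the description above, the new define boils down to the following, with users
switching from the open-coded comparison to the macro:

	/* true when page-table pages get their own spinlock */
	#define USE_SPLIT_PTLOCKS	(NR_CPUS >= CONFIG_SPLIT_PTLOCK_CPUS)

	#if USE_SPLIT_PTLOCKS
	/* split-ptlock variant */
	#else
	/* mm->page_table_lock variant */
	#endif
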
Julia Lawall
f461a1d80c arch/x86/kernel/kdebugfs.c: introduce missing kfree
Error handling code following a kmalloc should free the allocated data.
Note that at the point of the change, node has not yet been stored in d, so
it is not affected by the existing cleanup code.

The semantic match that finds the problem is as follows:
(http://www.emn.fr/x-info/coccinelle/)

// <smpl>
@r exists@
local idexpression x;
statement S;
expression E;
identifier f,l;
position p1,p2;
expression *ptr != NULL;
@@

(
if ((x@p1 = \(kmalloc\|kzalloc\|kcalloc\)(...)) == NULL) S
|
x@p1 = \(kmalloc\|kzalloc\|kcalloc\)(...);
...
if (x == NULL) S
)
<... when != x
     when != if (...) { <+...x...+> }
x->f = E
...>
(
 return \(0\|<+...x...+>\|ptr\);
|
 return@p2 ...;
)

@script:python@
p1 << r.p1;
p2 << r.p2;
@@

print "* file: %s kmalloc %s return %s" % (p1[0].file,p1[0].line,p2[0].line)
// </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 14:03:49 +02:00
Sheng Yang
e38e05a858 x86: extended "flags" to show virtualization HW feature in /proc/cpuinfo
Hardware virtualization technology evolves very fast. But currently
it's hard to tell if your CPU supports a certain kind of HW technology
without digging into the source code.

The patch adds a new category in "flags" under /proc/cpuinfo. Now "flags"
can indicate the (important) HW virtualization features the CPU supports
as well.

The current implementation just covers the Intel VMX side.

Signed-off-by: Sheng Yang <sheng.yang@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 14:00:56 +02:00
Sheng Yang
315a6558f3 x86: move VMX MSRs to msr-index.h
They are hardware-specific MSRs, and we will use them in virtualization
feature detection later.

Signed-off-by: Sheng Yang <sheng.yang@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 14:00:55 +02:00
Ingo Molnar
59c37bf892 Merge commit 'v2.6.27-rc6' into x86/unify-cpu-detect
Conflicts:
	arch/x86/kernel/cpu/amd.c
	arch/x86/kernel/cpu/common.c
	arch/x86/kernel/cpu/common_64.c
	arch/x86/kernel/cpu/feature_names.c
	include/asm-x86/cpufeature.h

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 14:00:45 +02:00
FUJITA Tomonori
49fbf4e9f9 x86: convert pci-nommu to use is_buffer_dma_capable helper function
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 11:33:44 +02:00
FUJITA Tomonori
ac4ff656c0 x86: convert gart to use is_buffer_dma_capable helper function
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 11:33:44 +02:00
Ingo Molnar
e92b4fdacc Merge commit 'v2.6.27-rc6' into x86/iommu 2008-09-10 11:32:52 +02:00
Hiroshi Shimamoto
764e8d128f x86: signal_64.c: make handle_signal() similar
Make handle_signal() same as 32bit.

No change in functionality intended.

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 08:28:24 +02:00
Hiroshi Shimamoto
0c40ed7173 x86: signal_64.c: arg for restore_i387_xstate() is void __user *
restore_i387_xstate() is declared as:

  int restore_i387_xstate(void __user *buf);

so, make the variable buf void __user *.

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 08:28:23 +02:00
Hiroshi Shimamoto
b2994ef0de x86: signal_64.c: clean up signal_fault()
clean up and make signal_fault() same as 32bit.

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 08:28:23 +02:00
Yinghai Lu
ec70cae869 x86: centaur_64.c remove duplicated setting of CONSTANT_TSC
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 08:21:06 +02:00
Yinghai Lu
4052704d92 x86: intel.c put workaround for old cpus together
consolidate the code some more.

No change in functionality intended.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 08:21:06 +02:00
Yinghai Lu
879d792b66 x86: let intel 64-bit use intel.c
now that arch/x86/kernel/cpu/intel_64.c and
arch/x86/kernel/cpu/intel.c are equal, drop
arch/x86/kernel/cpu/intel_64.c and fix up
the glue.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 08:21:05 +02:00
Yinghai Lu
58602c1681 x86: make intel_64.c the same as intel.c
No change in functionality intended - this only adds the 32-bit side.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 08:21:04 +02:00
Yinghai Lu
185f3b9da2 x86: make intel.c have 64-bit support code
prepare for unification.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 08:21:03 +02:00
Ingo Molnar
81faaae457 Merge branch 'x86/pebs' into x86/unify-cpu-detect
Conflicts:
	arch/x86/Kconfig.cpu
	include/asm-x86/ds.h

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-10 08:20:51 +02:00
Linus Torvalds
93811d94f7 Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: fix memmap=exactmap boot argument
  x86: disable static NOPLs on 32 bits
  xen: fix 2.6.27-rc5 xen balloon driver warnings
2008-09-09 12:23:41 -07:00
Prarit Bhargava
d6be118a97 x86: fix memmap=exactmap boot argument
When using kdump, modifying the e820 map yields strange results.

For example starting with

 BIOS-provided physical RAM map:
 BIOS-e820: 0000000000000100 - 0000000000093400 (usable)
 BIOS-e820: 0000000000093400 - 00000000000a0000 (reserved)
 BIOS-e820: 0000000000100000 - 000000003fee0000 (usable)
 BIOS-e820: 000000003fee0000 - 000000003fef3000 (ACPI data)
 BIOS-e820: 000000003fef3000 - 000000003ff80000 (ACPI NVS)
 BIOS-e820: 000000003ff80000 - 0000000040000000 (reserved)
 BIOS-e820: 00000000e0000000 - 00000000f0000000 (reserved)
 BIOS-e820: 00000000fec00000 - 00000000fec10000 (reserved)
 BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved)
 BIOS-e820: 00000000ff000000 - 0000000100000000 (reserved)

and booting with args

memmap=exactmap memmap=640K@0K memmap=5228K@16384K memmap=125188K@22252K memmap=76K#1047424K memmap=564K#1047500K

resulted in:

 user-defined physical RAM map:
 user: 0000000000000000 - 0000000000093400 (usable)
 user: 0000000000093400 - 00000000000a0000 (reserved)
 user: 0000000000100000 - 000000003fee0000 (usable)
 user: 000000003fee0000 - 000000003fef3000 (ACPI data)
 user: 000000003fef3000 - 000000003ff80000 (ACPI NVS)
 user: 000000003ff80000 - 0000000040000000 (reserved)
 user: 00000000e0000000 - 00000000f0000000 (reserved)
 user: 00000000fec00000 - 00000000fec10000 (reserved)
 user: 00000000fee00000 - 00000000fee01000 (reserved)
 user: 00000000ff000000 - 0000000100000000 (reserved)

But should have resulted in:

 user-defined physical RAM map:
 user: 0000000000000000 - 00000000000a0000 (usable)
 user: 0000000001000000 - 000000000151b000 (usable)
 user: 00000000015bb000 - 0000000008ffc000 (usable)
 user: 000000003fee0000 - 000000003ff80000 (ACPI data)

This is happening because of an improper usage of strcmp() in the
e820 parsing code.  The strcmp() always returns !0, so it never resets
e820.nr_map, and an incorrect user-defined map is returned.

This patch fixes the problem.

Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-09 11:54:53 -07:00
H. Peter Anvin
28f7e66fc1 x86: prevent binutils from being "smart" and generating NOPLs for us
binutils, contrary to documented behaviour, will generate long NOPs (a
P6-or-higher instruction which is broken on at least some VIA chips,
Virtual PC/Virtual Server, and some versions of Qemu) depending on the
-mtune= option, which is not supposed to change architectural
behaviour.

Pass an explicit override to the assembler, in case gcc ends up passing
the -mtune= parameter to gas (gcc 4.3.0 does not appear to).

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-09 11:52:41 -07:00
Alexey Dobriyan
9c0bbee8a6 seccomp: drop now bogus dependency on PROC_FS
seccomp is prctl(2)-driven now.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-09 09:09:51 +02:00
Linus Torvalds
14469a8dd2 x86: disable static NOPLs on 32 bits
On 32-bit, at least the generic nops are fairly reasonable, but the
default nops for 64-bit really look pretty sad, and the P6 nops really do
look better.

So I would suggest perhaps moving the static P6 nop selection into the
CONFIG_X86_64 thing.

The alternative is to just get rid of that static nop selection, and just
have two cases: 32-bit and 64-bit, and just pick obviously safe cases for
them.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-08 11:35:43 -07:00
Manfred Spraul
e545a6140b kernel/cpu.c: create a CPU_STARTING cpu_chain notifier
Right now, there is no notifier that is called on a new cpu, before the new
cpu begins processing interrupts/softirqs.
Various kernel functions would need that notification; e.g. kvm works around
this by calling smp_call_function_single(), and rcu polls cpu_online_map.

The patch adds a CPU_STARTING notification. It also adds a helper function
that sends the message to all cpu_chain handlers.

Tested on x86-64.
All other archs are untested. Especially on sparc, I'm not sure if I got
it right.

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 19:25:24 +02:00
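A hedged sketch of how a subsystem would consume the new notification with the
cpu notifier interface of that era (handler and names are illustrative):

	#include <linux/cpu.h>
	#include <linux/notifier.h>

	static int my_cpu_callback(struct notifier_block *nb,
				   unsigned long action, void *hcpu)
	{
		switch (action) {
		case CPU_STARTING:
			/* runs on the incoming CPU (number in hcpu) before it
			 * starts handling interrupts/softirqs */
			break;
		}
		return NOTIFY_OK;
	}

	static struct notifier_block my_cpu_notifier = {
		.notifier_call = my_cpu_callback,
	};

	/* register_cpu_notifier(&my_cpu_notifier); */
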
Alex Nixon
26fd10517e xen: make CPU hotplug functions static
There's no need for these functions to be accessed from outside of xen/smp.c

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 19:12:24 +02:00
Alex Nixon
2737146b3a x86, xen: fix build when !CONFIG_HOTPLUG_CPU
Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 19:12:23 +02:00
FUJITA Tomonori
823e7e8c6e x86: dma_alloc_coherent sets gfp flags properly
Non-real IOMMU implementations (which don't do virtual mappings,
e.g. swiotlb, pci-nommu, etc.) need to use proper gfp flags and the
dma_mask to allocate pages in their own dma_alloc_coherent()
(the allocated page needs to be suitable for the device's coherent_dma_mask).

This patch makes dma_alloc_coherent do this job so that IOMMUs don't
need to take care of it any more.

Real IOMMU implementations can simply ignore the gfp flags.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:50:07 +02:00
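The kind of gfp selection being centralized, as a minimal sketch (function name
invented; the boundaries follow the usual x86 DMA zones):

	#include <linux/dma-mapping.h>
	#include <linux/gfp.h>

	static gfp_t coherent_gfp_flags(struct device *dev, gfp_t gfp)
	{
		u64 mask = dev->coherent_dma_mask;

		if (mask <= DMA_BIT_MASK(24))
			gfp |= GFP_DMA;		/* ISA-style 24-bit devices */
		else if (mask <= DMA_BIT_MASK(32))
			gfp |= GFP_DMA32;	/* 32-bit-limited devices */

		return gfp;
	}
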
FUJITA Tomonori
8a53ad675f x86: fix nommu_alloc_coherent allocation with NULL device argument
We need to use __GFP_DMA for a NULL device argument (fallback_dev) with
pci-nommu. It's a hack for ISA (and some old code), so we need to use
GFP_DMA.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:50:06 +02:00
FUJITA Tomonori
de9f521fb7 x86: move pci-nommu's dma_mask check to common code
The check to see if dev->dma_mask is NULL in pci-nommu is more
appropriate for dma_alloc_coherent().

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:50:06 +02:00
Yinghai Lu
f69feff720 x86: little clean up of intel.c/intel_64.c
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:46:03 +02:00
Yinghai Lu
ff73152ced x86: make 64 bit to use amd.c
arch/x86/kernel/cpu/amd.c is now 100% identical to
arch/x86/kernel/cpu/amd_64.c, so use amd.c on 64-bit too
and fix up the namespace impact.

Simplify the Kconfig glue as well.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:32:06 +02:00
Yinghai Lu
2a02505055 x86: make amd_64 have 32 bit code
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Cc: Yinghai Lu <yhlu.kernel@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:32:03 +02:00
Yinghai Lu
6c62aa4a3c x86: make amd.c have 64bit support code
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:32:02 +02:00
Yinghai Lu
8d71a2ea0a x86: merge header in amd_64.c
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:32:01 +02:00
Yinghai Lu
2b86473604 x86: add srat_detect_node for amd64
separate that from amd_detect_cmp()

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:32:00 +02:00
Yinghai Lu
c58606ad55 x86: remove duplicated force_mwait
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:31:59 +02:00
Yinghai Lu
11fdd252bb x86: cpu make amd.c more like amd_64.c v2
1. make 32-bit have early_init_amd_mc and amd_detect_cmp
2. separate init_amd_k5/k6/k7 ...

v2: fix compiling for !CONFIG_SMP

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-08 15:31:58 +02:00
Jeremy Fitzhardinge
c885df50f5 x86: default corruption check to off, but put parameter default in Kconfig
Default the low memory corruption check to off, but make the default setting of
the memory_corruption_check kernel parameter a config parameter.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-07 17:40:02 +02:00
Jeremy Fitzhardinge
9f077871ce x86: clean up memory corruption check and add more kernel parameters
The corruption check is enabled in Kconfig by default, but disabled at runtime.

This patch adds several kernel parameters to control the corruption
check's behaviour; these are documented in kernel-parameters.txt.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-07 17:40:01 +02:00
Hugh Dickins
bb577f980e x86: add periodic corruption check
Periodically check for corruption in low physical memory.  Don't bother
checking at fault time, since it won't show anything useful.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-07 17:40:00 +02:00
Jeremy Fitzhardinge
5394f80f92 x86: check for and defend against BIOS memory corruption
Some BIOSes have been observed to corrupt memory in the low 64k.  This
change:
 - Reserves all memory which does not have to be in that area, to
   prevent it from being used as general memory by the kernel.  Things
   like the SMP trampoline are still in the memory, however.
 - Clears the reserved memory so we can observe changes to it.
 - Adds a function check_for_bios_corruption() which checks and reports on
   memory becoming unexpectedly non-zero.  Currently it's called in the
   x86 fault handler, and the power management debug output.

Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-07 17:39:59 +02:00
Linus Torvalds
64f996f670 Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: cpu_init(): fix memory leak when using CPU hotplug
  x86: pda_init(): fix memory leak when using CPU hotplug
  x86, xen: Use native_pte_flags instead of native_pte_val for .pte_flags
  x86: move mtrr cpu cap setting early in early_init_xxxx
  x86: delay early cpu initialization until cpuid is done
  x86: use X86_FEATURE_NOPL in alternatives
  x86: add NOPL as a synthetic CPU feature bit
  x86: boot: stub out unimplemented CPU feature words
2008-09-06 19:36:23 -07:00
Linus Torvalds
f532522565 Merge branch 'timers-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'timers-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  clocksource, acpi_pm.c: check for monotonicity
  clocksource, acpi_pm.c: use proper read function also in errata mode
  ntp: fix calculation of the next jiffie to trigger RTC sync
  x86: HPET: read back compare register before reading counter
  x86: HPET fix moronic 32/64bit thinko
  clockevents: broadcast fixup possible waiters
  HPET: make minimum reprogramming delta useful
  clockevents: prevent endless loop lockup
  clockevents: prevent multiple init/shutdown
  clockevents: enforce reprogram in oneshot setup
  clockevents: prevent endless loop in periodic broadcast handler
  clockevents: prevent clockevent event_handler ending up handler_noop
2008-09-06 19:33:26 -07:00
Ingo Molnar
5df4551551 x86, tsc calibration: fix
my brown paperbag day ...

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 23:55:40 +02:00
Andreas Herrmann
23952a96ae x86: cpu_init(): fix memory leak when using CPU hotplug
Exception stacks are allocated each time a CPU is set online.
But the allocated space is never freed. Thus with one CPU hotplug
offline/online cycle there is a memory leak of 24K (6 pages) for
a CPU.

Fix is to allocate exception stacks only once -- when the CPU is
set online for the first time.

Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Cc: akpm@linux-foundation.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 20:48:16 +02:00
Andreas Herrmann
d04ec773d7 x86: pda_init(): fix memory leak when using CPU hotplug
pda->irqstackptr is allocated whenever a CPU is set online.
But it is never freed. This results in a memory leak of 16K
for each CPU offline/online cycle.

Fix is to allocate pda->irqstackptr only once.

Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
Cc: akpm@linux-foundation.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 20:48:02 +02:00
Eduardo Habkost
e4a6be4d28 x86, xen: Use native_pte_flags instead of native_pte_val for .pte_flags
Using native_pte_val triggers the BUG_ON() in the paravirt_ops
version of pte_flags().

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 20:13:58 +02:00
Jan Beulich
2d9cd6c27f x86-64: add two __cpuinit annotations
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 19:50:41 +02:00
Jan Beulich
cc643d4687 x86: adjust vmalloc_sync_all() for Xen (2nd try)
Since the fourth PDPT entry cannot be shared under Xen,
vmalloc_sync_all() must iterate over pmd-s rather than pgd-s here.
Luckily, the code isn't used for native PAE (SHARED_KERNEL_PMD is 1)
and the change is benign to non-PAE.

Also do a little more cleanup in that function.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
2008-09-06 19:48:01 +02:00
Jan Beulich
17b746278d x86: pgd_{c,d}tor() cleanup
Giving pgd_ctor() a properly typed parameter allows eliminating a local
variable. Adjust pgd_dtor() to match.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: "Jeremy Fitzhardinge" <jeremy@goop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 19:47:09 +02:00
Jan Beulich
30cec97967 x86: init annotations in early_printk() setup
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 19:41:31 +02:00
Alexey Dobriyan
a19aac8548 x86: make setup_xstate_init() __init
WARNING: vmlinux.o(.text+0x22453): Section mismatch in reference from the function setup_xstate_init() to the function .init.text:__alloc_bootmem()
The function setup_xstate_init() references the function __init __alloc_bootmem().
This is often because setup_xstate_init lacks a __init annotation or the annotation of __alloc_bootmem is wrong.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 19:01:18 +02:00
Christoph Hellwig
0722bba8f1 x86: kill sys32_pause
It's an unused duplicate of the generic sys_pause.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 18:44:47 +02:00
Yinghai Lu
dd786dd12c x86: move mtrr cpu cap setting early in early_init_xxxx
Krzysztof Helt found that MTRR is not detected on a K6-2.

root cause:
	we moved mtrr_bp_init() early for MTRR trimming, and in
early_detect we only read the CPU capability bits from cpuid, so some
CPUs do not have the MTRR bit set there.

So we need to add early_init_xxxx() to preset those bits before
mtrr_bp_init() for those earlier CPUs.

This patch is for v2.6.27.

Reported-by: Krzysztof Helt <krzysztof.h1@wp.pl>
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 17:50:55 +02:00
Krzysztof Helt
12cf105cd6 x86: delay early cpu initialization until cpuid is done
Move early CPU initialization to after the early CPU capability read, so
that the early CPU initialization can fix up the CPU capability bits.

Signed-off-by: Krzysztof Helt <krzysztof.h1@wp.pl>
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 17:50:38 +02:00
Andrey Borzenkov
464f04c9e9 x86: fix ghost EDD devices in /sys again
> This is regression but old enough. Apparently I had for whatever reasons
> EDD turned off till recently. This is 2.6.27-rc5 just in case.
>
> In 2006 I fixed ghost devices due to buggy BIOS:
>
> http://marc.info/?l=linux-kernel&m=114087765422490&w=2
>
> Later edd.S has been rewritten in C, and apparently this patch has been
> lost:
>
> {pts/1}% ls /sys/firmware/edd
> int13_dev80/  int13_dev84/  int13_dev88/  int13_dev8c/
> int13_dev81/  int13_dev85/  int13_dev89/  int13_dev8d/
> int13_dev82/  int13_dev86/  int13_dev8a/  int13_dev8e/
> int13_dev83/  int13_dev87/  int13_dev8b/  int13_dev8f/
>
> But I have just a single disk. This is the same system BTW.

Some BIOSes do not always set CF on error before returning from int13.
The patch adds an additional check that the status is zero (AH == 0).

This was fixed for edd.S in
http://marc.info/?l=linux-kernel&m=114087765422490&w=2, but the fix was
lost when edd.S was rewritten in C.
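
A sketch of the strengthened probe check (the register-struct fields follow
the x86 boot code's conventions, but treat the exact identifiers as
assumptions rather than the literal patch):

    /* A drive is only considered present when the BIOS neither set CF
     * nor returned a non-zero status byte in AH. */
    if ((oreg.eflags & X86_EFLAGS_CF) || oreg.ah != 0)
            return -1;      /* no such int13 device */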

Signed-off-by: Andrey Borzenkov <arvidjaar@mail.ru>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 16:30:36 +02:00
Hiroshi Shimamoto
13ad7725e9 x86_32: signal: move signal number conversion to upper layer
Bring signal number conversion in __setup_frame() and __setup_rt_frame()
up into the common part setup_frame().

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 14:54:04 +02:00
Hiroshi Shimamoto
1d13024e62 x86: signal: split out frame setups
Add setup_rt_frame() and split the frame setup out of handle_signal().
This is for cosmetic unification of handle_signal().

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 14:54:03 +02:00
Hiroshi Shimamoto
8fcd8e20f3 x86: signal: make NR_restart_syscall
Add an NR_restart_syscall macro for cosmetic unification of handle_signal().

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 14:54:02 +02:00
Hiroshi Shimamoto
72fa50f4ef x86_32: signal: introduce signal_fault()
Implement signal_fault() for 32-bit.

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 14:54:02 +02:00
Hiroshi Shimamoto
bb57925f50 x86_32: signal: use syscall_get_nr and error
Use asm/syscall.h interfaces that do the same things.

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 14:54:01 +02:00
Ingo Molnar
f12e6a451a Merge branch 'x86/cleanups' into x86/signal
Conflicts:
	arch/x86/kernel/signal_64.c

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 14:53:20 +02:00
Ingo Molnar
046fd53773 Merge branches 'x86/tracehook', 'x86/xsave' and 'x86/prototypes' into x86/signal
Conflicts:
	arch/x86/kernel/signal_64.c

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 14:53:01 +02:00
Yinghai Lu
e322423471 x86, cpu init: call early_init_xxx in init_xxx
so we:

 1. can set some capability bits on the APs
 2. can restore some capability bits after the memset in identify_cpu for the boot CPU

especially for CONSTANT_TSC this matters, as:

before this patch:
 flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow rep_good nopl pni monitor cx16 lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs

after this patch:
 flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl pni monitor cx16 lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs

so constant_tsc is back...

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 14:09:14 +02:00
Yinghai Lu
1b05d60d60 x86: remove duplicated get_model_name() calling
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-06 14:09:12 +02:00
Thomas Gleixner
72d43d9bc9 x86: HPET: read back compare register before reading counter
After fixing the u32 thinko I still had occasional hiccups on ATI chipsets
with small deltas. There seems to be a delay between writing the compare
register and the transfer to the internal register which triggers the
interrupt. Reading back the value makes sure that it has hit the internal
match register before we compare against the counter value.
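
Roughly the shape of the resulting reprogramming path (hpet_readl(),
hpet_writel() and the HPET_* offsets are the kernel's existing accessors;
the surrounding code is simplified here):

    static int hpet_next_event(unsigned long delta, int timer)
    {
            u32 cnt;

            cnt = hpet_readl(HPET_COUNTER);
            cnt += (u32) delta;
            hpet_writel(cnt, HPET_Tn_CMP(timer));
            hpet_readl(HPET_Tn_CMP(timer));  /* read back: ensure the write has
                                                reached the internal match
                                                register before comparing */

            return (s32)(hpet_readl(HPET_COUNTER) - cnt) >= 0 ? -ETIME : 0;
    }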

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-09-06 07:21:17 +02:00
Thomas Gleixner
f7676254f1 x86: HPET fix moronic 32/64bit thinko
We use the HPET only in 32bit mode because:
1) some HPETs are 32bit only
2) on i386 there is no way to read/write the HPET atomically 64 bits wide

The HPET code unification done by the "moron of the year" did
not take into account that unsigned long is different on 32 and
64 bit.

This thinko results in a possible endless loop in the clockevents
code, when the return comparison fails due to the 64bit/32bit
unawareness.

unsigned long cnt = (u32) hpet_read() + delta can wrap over 32 bits,
but the final compare will then fail and return -ETIME, causing endless
loops.
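
A short sketch of the type fix, under the assumption that the comparator
math must wrap exactly like the 32-bit HPET register does:

    u32 cnt;                /* deliberately u32, not unsigned long */

    cnt = hpet_readl(HPET_COUNTER) + (u32) delta;
    /* On 64-bit, an unsigned long would not wrap at 32 bits while the
     * 32-bit hardware comparator does, so the final counter-vs-cnt check
     * would always look "not reached yet" and keep returning -ETIME. */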

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-09-06 07:21:17 +02:00
H. Peter Anvin
d2f37384fc x86: when building image.iso, use isohybrid if it exists
When building image.iso (make isoimage), use the isohybrid tool if it
exists.  isohybrid is a script included with Syslinux 3.72 and higher,
which creates an image that can be booted either as a hard disk
(including removable, e.g. USB disk) or as a CD-ROM.

If isohybrid doesn't exist, then this has no effect.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-05 22:11:25 -07:00
H. Peter Anvin
f31d731e44 x86: use X86_FEATURE_NOPL in alternatives
Use X86_FEATURE_NOPL to determine if it is safe to use P6 NOPs in
alternatives.  Also, replace table and loop with simple if statement.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-05 16:14:01 -07:00
H. Peter Anvin
b6734c35af x86: add NOPL as a synthetic CPU feature bit
The long noops ("NOPL") are supposed to be detected by family >= 6.
Unfortunately, several non-Intel x86 implementations, both hardware
and software, don't obey this dictum.  Instead, probe for NOPL
directly by executing a NOPL instruction and see if we get #UD.
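
A userspace-flavoured illustration of the probe idea (not the kernel's
detection code): execute the 3-byte NOPL encoding and see whether the CPU
raises #UD, which reaches a process as SIGILL:

    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>

    static sigjmp_buf jb;
    static void on_sigill(int sig) { siglongjmp(jb, 1); }

    int main(void)
    {
            signal(SIGILL, on_sigill);
            if (sigsetjmp(jb, 1) == 0) {
                    asm volatile(".byte 0x0f, 0x1f, 0x00");  /* NOPL (%eax) */
                    puts("NOPL executed fine");
            } else {
                    puts("NOPL raised #UD");
            }
            return 0;
    }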

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-05 16:13:52 -07:00
H. Peter Anvin
b74b06c5f6 x86: boot: stub out unimplemented CPU feature words
The CPU feature detection code in the boot code is somewhat minimal,
and doesn't include all possible CPUID words.  In particular, it
doesn't contain the code for CPU feature words 2 (Transmeta),
3 (Linux-specific), 5 (VIA), or 7 (scattered).  Zero them out, so we
can still set those bits as known at compile time; in particular, this
allows creating a Linux-specific NOPL flag and have it required (and
therefore resolvable at compile time) in 64-bit mode.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-05 16:13:44 -07:00
Linus Torvalds
1c402c8cd1 Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: add io delay quirk for Presario F700
2008-09-05 14:36:21 -07:00
David Woodhouse
e51af66308 x86: blacklist DMAR on Intel G31/G33 chipsets
Some BIOSes (the Intel DG33BU, for example) wrongly claim to have DMAR
when they don't. Avoid the resulting crashes when it doesn't work as
expected.

I'd still be grateful if someone could test it on a DG33BU with the old
BIOS though, since I've killed mine. I tested the DMI version, but not
this one.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 20:20:25 +02:00
Joerg Roedel
cf169702ba x86, gart: add detection of AMD family 0x11 northbridges
This patch adds the detection of the northbridges in the AMD family 0x11
processors. It also fixes the magic numbers there while changing this code.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 19:11:44 +02:00
H. Peter Anvin
5b7e41ff37 x86: additional defconfig updates
Additional updates to the x86 defconfigs.  The goals are, as before:

- Make them usable to testers, more so than distributors or end users,
  both of which are likely to have their own config already.
- Keep 32 and 64 bits as similar as is practical.

Changes:

- Use a more generic CPU type (ppro and generic, respectively).
- Bump number of CPUs to 64 (few if any NR_CPUS arrays left).
- Enable PAT.
- Enable OPTIMIZE_INLINE.
- Enable microcode update support.
- Build SMT scheduler support (in addition to MC).

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 18:57:14 +02:00
Ingo Molnar
616ad8c442 Merge branch 'linus' into x86/defconfig 2008-09-05 18:56:57 +02:00
Ingo Molnar
28c3cfd5fb Merge branch 'linus' into x86/tracehook 2008-09-05 17:53:05 +02:00
Alex Nixon
efd327a2d4 x86/paravirt: Remove duplicate paravirt_pagetable_setup_{start, done}()
They were already called once in arch/x86/kernel/setup.c - we don't need to call them again.

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 17:46:44 +02:00
Alex Nixon
913da64b54 x86: build fix for !CONFIG_SMP
Move reset_lazy_tlbstate into tlb_32.c, and define noop versions of
play_dead() in process_{32,64}.c when !CONFIG_SMP.

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 17:44:08 +02:00
Jeremy Fitzhardinge
110e0358e7 x86: make sure the CPA test code's use of _PAGE_UNUSED1 is obvious
The CPA test code uses _PAGE_UNUSED1, so make sure it's obvious.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 17:09:57 +02:00
FUJITA Tomonori
551b4545bf x86: gart alloc_coherent doesn't need to check NULL device argument
asm/dma-mapping.h guarantees that gart alloc_coherent doesn't get a NULL
device argument.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 12:48:13 +02:00
Thomas Gleixner
7cfb043533 HPET: make minimum reprogramming delta useful
The minimum reprogramming delta was hardcoded in HPET ticks,
which is stupid as it does not work with faster running HPETs.
The C1E idle patches made this prominent on AMD/RS690 chipsets,
where the HPET runs at 25MHz. Set it to 5us, which seems to be
a reasonable value and fixes the problems on the bug reporters'
machines. We have a further sanity check now in the clockevents
code, which increases the delta when it is not sufficient.
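
As a quick sanity check of the numbers, a hypothetical helper that converts
a time-based minimum delta into ticks for the actual HPET frequency gives
5000 ns * 25 MHz / 1e9 = 125 ticks on the affected AMD/RS690 boards:

    #include <stdint.h>

    static uint64_t hpet_min_delta_ticks(uint64_t min_delta_ns, uint64_t hpet_freq_hz)
    {
            /* e.g. 5000 ns at a 25 MHz HPET -> 125 ticks */
            return min_delta_ns * hpet_freq_hz / 1000000000ULL;
    }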

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Luiz Fernando N. Capitulino <lcapitulino@mandriva.com.br>
Tested-by: Dmitry Nezhevenko <dion@inhex.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 11:11:54 +02:00
Ingo Molnar
deed05b7c0 x86, init_64.c: cleanup
Clean up comments.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 10:23:47 +02:00
Yinghai Lu
bd220a24a9 x86: move nonx_setup etc from common.c to init_64.c
Just like 32-bit, which puts it in init_32.c.

Signed-off-by: Yinghai <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 10:23:47 +02:00
Yinghai Lu
f5017cfa35 x86: use cpu/common.c on 64 bit
Use cpu/common.c on both 64-bit and 32-bit and remove cpu/common_64.c.

We started out with this linecount:

  816  arch/x86/kernel/cpu/common_64.c
  805  arch/x86/kernel/cpu/common.c

and the resulting common.c is 1197 lines long, so there's already
424 lines of code eliminated in this phase of the unification.

Signed-off-by: Yinghai <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:57 +02:00
Ingo Molnar
143b604a2d x86: cpu/common*.c, merge whitespaces
Merge leftover whitespaces, to make arch/x86/kernel/cpu/common_64.c
exactly identical to arch/x86/kernel/cpu/common.c.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:56 +02:00
Yinghai Lu
102bbe3ab8 x86: cpu/common*.c, merge identify_cpu()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:56 +02:00
Yinghai Lu
b89d3b3e2c x86: cpu/common*.c, merge generic_identify()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:55 +02:00
Yinghai Lu
56f0d033be x86: cpu/common*.c: merge print_cpu_info()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:54 +02:00
Yinghai Lu
6627d24230 x86: cpu/common*.c, merge early_identify_cpu()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:54 +02:00
Yinghai Lu
5122c890ba x86: cpu/common.c: merge get_cpu_cap()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:53 +02:00
Yinghai Lu
1cd78776c7 x86: cpu/common*.c, merge detect_ht()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:52 +02:00
Yinghai Lu
140fc72709 x86: cpu/common*.c, merge display_cacheinfo()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:51 +02:00
Yinghai Lu
b9e67f0042 x86: cpu/common.c, merge default_init()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:50 +02:00
Yinghai Lu
fab334c1d5 x86: cpu/common*.c, merge switch_to_new_gdt()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:50 +02:00
Yinghai Lu
1ba76586f7 x86: cpu/common*.c have same cpu_init(), with copying and #ifdef
hard to merge by lines... (as here we have material differences between
32-bit and 64-bit mode) - will try to do it later.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:49 +02:00
Yinghai Lu
d5494d4f51 x86: cpu/common*.c, make 32-bit have 64-bit only functions
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:48 +02:00
Yinghai Lu
ba51dced0b x86: cpu/common.c, let 64-bit code have 32-bit only functions
No effect on 64-bit.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:47 +02:00
Yinghai Lu
950ad7ff6e x86: same gdt_page with macro
Move the 32-bit and 64-bit gdt_page definitions next to each
other, separated with an #ifdef.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:47 +02:00
Yinghai Lu
f0fc4aff1f x86: make header file the same in arch/x86/kernel/cpu/common_xx.c
Make the files more similar in preparation to unification, no
code changed.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:46 +02:00
Yinghai Lu
97e4db7c87 x86: make detect_ht depend on CONFIG_X86_HT
64-bit has X86_HT set too, so use that instead of SMP.

This also removes a include/asm-x86/processor.h ifdef.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:45 +02:00
Ingo Molnar
0c8c708a7e Merge branch 'x86/core' into x86/unify-cpu-detect 2008-09-05 09:27:23 +02:00
Ingo Molnar
d3d0ba7b8f Merge commit '63cc8c75156462d4b42cbdd76c293b7eee7ddbfe':
"percpu: introduce DEFINE_PER_CPU_PAGE_ALIGNED() macro"

into x86/core

Conflicts:
	arch/x86/kernel/cpu/common.c

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:24:30 +02:00
Ingo Molnar
9042763808 Merge branch 'x86/x2apic' into x86/core
Conflicts:
	arch/x86/kernel/cpu/common_64.c

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:21:21 +02:00
Ingo Molnar
446d27338d Merge branch 'x86/cpu' into x86/core 2008-09-05 09:19:50 +02:00
Ingo Molnar
accf0fa697 Merge branch 'x86/xsave' into x86/core 2008-09-05 09:18:39 +02:00
Ingo Molnar
4156e9a8ef x86: quick TSC calibration, improve
- make sure the final TSC timestamp is reliable too

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 23:21:57 +02:00
Linus Torvalds
6ac40ed041 x86: quick TSC calibration
Introduce a fast TSC-calibration method on sane hardware.

It only uses 17920 PIT timer ticks to calibrate the TSC, plus 256 ticks on
each side to make sure the TSC values were very close to the tick, so the
whole calibration takes 15ms. Yet, despite only taking 15ms,
we can actually give pretty stringent guarantees of accuracy:

 - the code requires that we hit each 256-counter block at least 50 times,
   so the TSC error is basically at *MOST* just a few PIT cycles off in
   any direction. In practice, it's going to be about one microsecond
   off (which is how long it takes to read the counter)

 - so over 17920 PIT cycles, we can pretty much guarantee that the
   calibration error is less than one half of a percent.

My testing bears this out: on my machine, the quick-calibration reports
2934.085kHz, while the slow one reports 2933.415.

Yes, the slower calibration is still more precise. For me, the slow
calibration is stable to within about one hundredth of a percent, so it's
(at a guess) roughly an order-and-a-half of magnitude more precise. The
longer you wait, the more precise you can be.

However, the nice thing about the fast TSC PIT synchronization is that
it's pretty much _guaranteed_ to give that 0.5% precision, and fail
gracefully (and very quickly) if it doesn't get it. And it really is
fairly simple (even if there's a lot of _details_ there, and I didn't get
all of those right on the first try or even the second ;)

The patch says "110 insertions", but 63 of those new lines are actually
comments.
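
A back-of-the-envelope check of the quoted window, using the standard PIT
input clock of 1193182 Hz (nothing here is taken from the patch itself):

    #include <stdio.h>

    int main(void)
    {
            const double pit_hz = 1193182.0;
            printf("main window: %.2f ms\n", 17920.0 / pit_hz * 1e3);   /* ~15.0 ms */
            printf("guard ticks: %.2f ms\n", 2 * 256.0 / pit_hz * 1e3); /* ~0.43 ms */
            return 0;
    }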

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 arch/x86/kernel/tsc.c |  111 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 110 insertions(+), 1 deletions(-)
2008-09-04 22:54:50 +02:00
Yinghai Lu
0a488a53d7 x86: move 32bit related functions together
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:47 +02:00
Yinghai Lu
01b2e16a7a x86: make get_mode_name of 64bit the same as 32bit
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:46 +02:00
Yinghai Lu
a0854a46c5 x86: make 32bit support show_msr like 64 bit
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:46 +02:00
Yinghai Lu
10a434fcb2 x86: remove cpu_vendor_dev
1. add c_x86_vendor into cpu_dev
2. change cpu_devs to static
3. check c_x86_vendor before putting that cpu_dev into the array
4. remove alignment for 64bit
5. order the entries in cpu_devs according to link order...
   so Intel can be put first, then AMD...

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:45 +02:00
Yinghai Lu
9d31d35b5f x86: order functions in cpu/common.c and cpu/common_64.c v2
v2: make 64 bit get c->x86_cache_alignment = c->x86_clfush_size

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:44 +02:00
Yinghai Lu
3da99c9776 x86: make (early)_identify_cpu more the same between 32bit and 64 bit
1. add extended_cpuid_level for 32bit
 2. add generic_identify for 64bit
 3. add early_identify_cpu for 32bit
 4. early_identify_cpu is not called by identify_cpu any more
 5. remove early in get_cpu_vendor for 32bit
 6. add get_cpu_cap
 7. add cpu_detect for 64bit

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:44 +02:00
Krzysztof Helt
5031088dbc x86: delay early cpu initialization until cpuid is done
Move early CPU initialization to after the early CPU capability read, so
that the early CPU initialization can fix up the CPU capability bits.

Signed-off-by: Krzysztof Helt <krzysztof.h1@wp.pl>
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:43 +02:00
Yinghai Lu
5fef55fddb x86: move mtrr cpu cap setting early in early_init_xxxx
Krzysztof Helt found that MTRR is not detected on a K6-2.

root cause:
	we moved mtrr_bp_init() early for MTRR trimming, and in
early_detect we only read the CPU capability bits from cpuid, so some
CPUs do not have the MTRR bit set there.

So we need to add early_init_xxxx() to preset those bits before
mtrr_bp_init() for those earlier CPUs.

This patch is for v2.6.27.

Reported-by: Krzysztof Helt <krzysztof.h1@wp.pl>
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:43 +02:00
Ingo Molnar
62b3f98188 Merge branch 'x86/debug' into x86/cpu 2008-09-04 21:08:09 +02:00
Yinghai Lu
ebd60cd64f x86: unify using pci_mmcfg_insert_resource
Even with a known bridge, insert them late too.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:04:32 +02:00
Yinghai Lu
fac8f1e4f9 x86: split e820 reserved entries record to late, v7
Try insert_resource() a second time, expanding the resource...

This is for the case where an e820 reserved entry partially overlaps with
a BAR resource...

Hopefully that will never happen.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:04:25 +02:00
Andi Kleen
dc44e65943 x86: capitalize function call interrupts consistently
Impact: aesthetic

Capitalize function call interrupts consistently.

All other descriptions in /proc/interrupts are capitalized except
for "function call interrupts". Capitalize it too for consistency.

While that's technically a published ABI I think the risk of anyone
relying on that text to stay the same is negligible.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-04 10:51:36 -07:00
H. Peter Anvin
aa3341a168 Merge branch 'x86/cpu' into x86/x2apic
Conflicts:

	arch/x86/kernel/cpu/feature_names.c
	include/asm-x86/cpufeature.h
2008-09-04 09:21:21 -07:00
H. Peter Anvin
fe47784ba5 Merge branch 'x86/cpu' into x86/xsave
Conflicts:

	arch/x86/kernel/cpu/feature_names.c
	include/asm-x86/cpufeature.h
2008-09-04 09:04:45 -07:00
Andi Kleen
fb481dd56a x86: drop -funroll-loops for csum_partial_64.c
Impact: performance optimization

I did some rebenchmarking with modern compilers and dropping
-funroll-loops makes the function consistently go faster by a few
percent.  So drop that flag.

Thanks to Richard Guenther for a hint.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-04 08:42:06 -07:00
Ingo Molnar
a5444d15b6 x86: split e820 reserved entries record to late v4
this one replaces:

| commit a2bd7274b4
| Author: Yinghai Lu <yhlu.kernel@gmail.com>
| Date:   Mon Aug 25 00:56:08 2008 -0700
|
|    x86: fix HPET regression in 2.6.26 versus 2.6.25, check hpet against BAR, v3

v2: insert e820 reserve resources before pnp_system_init
v3: fix merging problem in tip/x86/core
v4: address Linus's review about comments and condition in _late()

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 08:39:25 -07:00
Yinghai Lu
58f7c98850 x86: split e820 reserved entries record to late v2
So BAR resources can be registered first, or even PnP.

v2: insert e820 reserve resources before pnp_system_init

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 08:37:57 -07:00
Thomas Gleixner
a977c40095 x86: TSC make the calibration loop smarter
The last changes made the calibration loop 250ms long, which is far
too much. Try to do that more cleverly.

Experiments have shown that using a 10ms delay for the PIT based calibration
gives us a good enough value. If we have a reference (HPET/PMTIMER) and the
result of the PIT and the reference is close enough, then we can break out of
the calibration loop on a match right away and use the reference value.

Otherwise we just loop 3 times and decide then, which value to take.

One caveat is that for virtualized environments the PIT calibration often does
not work at all and I found out that 10us is a bit too short as well for the
reference to give a sane result. The solution here is to make the last loop
longer when the first two PIT calibrations failed.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 17:35:35 +02:00
Thomas Gleixner
827014be05 x86: TSC: use one set of reference variables
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 17:35:34 +02:00
Thomas Gleixner
d683ef7afe x86: TSC: separate hpet/pmtimer calculation out
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 17:35:33 +02:00
Thomas Gleixner
cce3e05724 x86: TSC: define the PIT latch value separate
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 17:35:33 +02:00
H. Peter Anvin
0ccd8c39bc Merge branch 'linus' into x86/core 2008-09-04 08:09:09 -07:00
Yinghai Lu
1625324d22 x86: move dir es7000 to es7000_32.c
to be aligned with numaq, summit.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 08:08:53 -07:00
H. Peter Anvin
7203781c98 Merge branch 'x86/cpu' into x86/core
Conflicts:

	arch/x86/kernel/cpu/feature_names.c
	include/asm-x86/cpufeature.h
2008-09-04 08:08:42 -07:00
H. Peter Anvin
7f16a33978 x86: boot/compressed/Makefile: fix "make clean"
The Kbuild variable "targets" is supposed to be
configuration-independent and reflect "all possible targets".  This is
required to make "make clean" work properly.

Therefore, move all manipulation of "targets" as well as custom rules
out of the x86-32 ifdef statement.  Only leave inside the ifdefs the
things that are genuinely configuration-dependent.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-04 06:19:45 -07:00
Ingo Molnar
42390cdec5 Merge branch 'linus' into x86/x2apic
Conflicts:
	arch/x86/kernel/cpu/cyrix.c
	include/asm-x86/cpufeature.h

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 13:02:35 +02:00
Alok N Kataria
de014d6176 x86: Change warning message in TSC calibration.
When calibration against PIT fails, the warning that we print is misleading.
In a virtualized environment the VM may get descheduled during calibration,
or the check in PIT calibration may fail due to other virtualization
overheads.

The warning message explicitly assumes that calibration failed due to SMIs,
which may not be the case. Change that to something proper.

Signed-off-by: Alok N Kataria <akataria@vmware.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-09-03 20:10:37 -07:00
Chuck Ebbert
e6a5652fd1 x86: add io delay quirk for Presario F700
Manually adding "io_delay=0xed" fixes system lockups in ioapic
mode on this machine.

System Information
	Manufacturer: Hewlett-Packard
	Product Name: Presario F700 (KA695EA#ABF)

Base Board Information
	Manufacturer: Quanta
	Product Name: 30D3

Reference:
https://bugzilla.redhat.com/show_bug.cgi?id=459546
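
The quirk boils down to a DMI-keyed table entry; a sketch of its shape,
using the board strings quoted above (the callback name and exact table
layout are assumptions, not the literal patch):

    {
            .callback = dmi_io_delay_0xed_port,   /* assumed existing helper */
            .ident    = "Presario F700",
            .matches  = {
                    DMI_MATCH(DMI_BOARD_VENDOR, "Quanta"),
                    DMI_MATCH(DMI_BOARD_NAME,   "30D3"),
            },
    },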

Signed-off-by: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-03 16:42:51 -07:00
Alexey Dobriyan
a57a5c2e8d x86 setup: remove remnants of CONFIG_VIDEO_SELECT (read: vga=)
Impact: cleanup

Video mode selection became always possible in 2.6.23-rc1 after the i386
setup code rewrite in C.

Regardless, VIDEO_SELECT is a stupid config option because it affects only
the kernel setup code, not code which always stays in memory.

vga= is always possible now, which is good.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-03 14:12:39 -07:00
Linus Torvalds
ec0c15afb4 Split up PIT part of TSC calibration from native_calibrate_tsc
The TSC calibration function is still very complicated, but this makes
it at least a little bit less so by moving the PIT part out into a
helper function of its own.

Tested-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-09-03 07:30:13 -07:00
Thomas Gleixner
fbb16e2438 [x86] Fix TSC calibration issues
Larry Finger reported at http://lkml.org/lkml/2008/9/1/90:
An ancient laptop of mine started throwing errors from b43legacy when
I started using 2.6.27 on it. This has been bisected to commit bfc0f59
"x86: merge tsc calibration".

The unification of the TSC code adopted mostly the 64bit code, which
prefers PMTIMER/HPET over the PIT calibration.

Larry's system has an AMD K6 CPU. Such systems are known to have
PMTIMER incarnations which run at double speed. This results in a
miscalibration of the TSC by a factor of 0.5. So the resulting calibrated
CPU/TSC speed is half of the real CPU speed, which means that the TSC
based delay loop will run half the time it should run. That might
explain why the b43legacy driver went berserk.

On the other hand we know about systems, where the PIT based
calibration results in random crap due to heavy SMI/SMM
disturbance. On those systems the PMTIMER/HPET based calibration logic
with SMI detection shows better results.

According to Alok, virtualized systems also suffer from the PIT
calibration method.

The solution is to use a more wreckage-aware approach than the current
either/or decision.

1) reimplement the retry loop which was dropped from the 32bit code
during the merge. It repeats the calibration and selects the lowest
frequency value as this is probably the closest estimate to the real
frequency

2) Monitor the delta of the TSC values in the delay loop which waits
for the PIT counter to reach zero. If the maximum value is
significantly different from the minimum, then we have a pretty safe
indicator that the loop was disturbed by an SMI (see the sketch after
this list).

3) keep the pmtimer/hpet reference as a backup solution for systems
where the SMI disturbance is a permanent point of failure for PIT
based calibration

4) do the loop iteration for both methods, record the lowest value and
decide after all iterations finished.

5) Set a clear preference to PIT based calibration when the result
makes sense.
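
A compile-able sketch of the disturbance check from point 2 (rdtsc() and
pit_expired() are stand-ins for the real kernel primitives, and the 10x
threshold is illustrative):

    #include <stdint.h>

    extern uint64_t rdtsc(void);        /* stand-in: read the TSC */
    extern int pit_expired(void);       /* stand-in: has the PIT counted down? */

    int pit_sample_is_clean(void)
    {
            uint64_t prev = rdtsc(), now, d;
            uint64_t dmin = UINT64_MAX, dmax = 0;

            while (!pit_expired()) {
                    now = rdtsc();
                    d = now - prev;
                    prev = now;
                    if (d < dmin) dmin = d;
                    if (d > dmax) dmax = d;
            }
            /* A max delta far above the min means the wait loop was
             * interrupted (SMI/SMM, hypervisor preemption): discard it. */
            return dmax < 10 * dmin;
    }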

The implementation does the reference calibration based on
HPET/PMTIMER around the delay, which is necessary for the PIT anyway,
but keeps separate TSC values to ensure the "independence" of the
resulting calibration values.

Tested on various 32bit/64bit machines including Geode 266Mhz, AMD K6
(affected machine with a double speed pmtimer which I grabbed out of
the dump), Pentium class machines and AMD/Intel 64 bit boxen.

Bisected-by:  Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-09-02 20:35:56 -07:00
Linus Torvalds
011fec7486 Un-break printk strings in x86 PCI probing code
Breaking lines due to some imaginary problem with a long line length is
often stupid and wrong, but never more so than when it splits a string that
is printed out into multiple lines.  This really ended up making it much
harder to find where some error strings were printed out, because a
simple 'grep' didn't work.

I'm sure there is tons more of this particular idiocy hiding in other
places, but this particular case hit me once more last week. So fix it.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-09-02 10:38:28 -07:00
Linus Torvalds
b460947211 Revert "x86: fix HPET regression in 2.6.26 versus 2.6.25, check hpet against BAR, v3"
This reverts commit a2bd7274b4.

It wasn't really right to begin with (there's a better fix for the
problem with e820 reservations clashing with PCI BAR's pending), but it
also actually causes more regressions, so it should be reverted even
before the better fix is finalized.

Rafael reports that this commit broke AHCI detection, and thus causes
the kernel to not boot on his quad core test box.

Reported-and-bisected-by: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Yinghai Lu <yhlu.kernel@gmail.com>
Cc: David Witbrodt <dawitbro@sbcglobal.net>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-08-29 14:46:05 -07:00
Austin Zhang
8cb51ba8e0 crypto: crc32c - Use Intel CRC32 instruction
From the NHM (Nehalem) processor onward, Intel processors support a
hardware-accelerated CRC32c algorithm via the new CRC32 instruction in the
SSE 4.2 instruction set.
The patch detects the availability of the feature, and chooses the most
appropriate way to calculate the CRC32c checksum.
Byte-code instructions are used for compiler compatibility.
No MMX / XMM registers are involved in the implementation.
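
A minimal sketch of the hardware path; the in-kernel driver emits the
instruction as raw byte codes for old assemblers, while this illustration
uses the mnemonic and processes one byte per step (the SSE4.2 CPUID bit
must be checked before calling anything like this):

    #include <stddef.h>
    #include <stdint.h>

    static uint32_t crc32c_hw(uint32_t crc, const uint8_t *data, size_t len)
    {
            while (len--)
                    asm("crc32b %1, %0" : "+r"(crc) : "rm"(*data++));
            return crc;
    }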

Signed-off-by: Austin Zhang <austin.zhang@intel.com>
Signed-off-by: Kent Liu <kent.liu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-08-29 15:49:50 +10:00
Linus Torvalds
e52c8857e0 Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: update defconfigs
  x86: msr: fix bogus return values from rdmsr_safe/wrmsr_safe
  x86: cpuid: correct return value on partial operations
  x86: msr: correct return value on partial operations
  x86: cpuid: propagate error from smp_call_function_single()
  x86: msr: propagate errors from smp_call_function_single()
  smp: have smp_call_function_single() detect invalid CPUs
2008-08-28 12:30:59 -07:00
Joe Korty
2c7e9fd4c6 x86: make poll_idle behave more like the other idle methods
Make poll_idle() behave more like the other idle methods.

Currently, poll_idle() returns immediately.  The other
idle methods all wait indefinitely for some condition
to come true before returning.  poll_idle should emulate
these other methods and also wait for a return condition,
in this case, for need_resched() to become 'true'.

Without this delay the idle loop spends all of its time
in the outer loop that calls poll_idle.  This outer loop,
these days, does real work, some of it under rcu locks.
That work should only be done when idle is entered and
when idle exits, not continuously while idle is spinning.
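
A sketch of the intended shape of the loop (the real poll_idle() lives in
the x86 process code; details here are simplified):

    static void poll_idle(void)
    {
            local_irq_enable();
            while (!need_resched())
                    cpu_relax();    /* spin inside the idle method until
                                       there is actually work to do */
    }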

Signed-off-by: Joe Korty <joe.korty@ccur.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-28 11:29:48 +02:00
H. Peter Anvin
7414aa41a6 x86: generate names for /proc/cpuinfo from <asm/cpufeature.h>
We have had a number of cases where <asm/cpufeature.h> (and its
predecessors) have diverged substantially from the names list in
/proc/cpuinfo.  This patch generates the latter from the former.

It retains the option for explicitly overriding the strings, but by
making that require a separate action it should at least be less
likely to happen.

It would be good to do a future pass and rename strings that are
gratuitously different in the kernel (/proc/cpuinfo is a userspace
interface and must remain constant.)

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-08-27 19:23:22 -07:00
H. Peter Anvin
b30a72a7ed Merge branch 'x86/urgent' into x86/cpu
Conflicts:

	arch/x86/kernel/cpu/cyrix.c
2008-08-27 19:17:07 -07:00
Suresh Siddha
83b8e28b14 x86: xsave: restore xcr0 during resume
Add the missing XCR0(XFEATURE_ENABLED_MASK) restore during resume.
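
For reference, XCR0 is written with the privileged XSETBV instruction
(encoding 0f 01 d1); a sketch of the kind of wrapper the resume path needs
to invoke again with the saved feature mask (the kernel has its own helper
for this; the names and fixed-width types here are illustrative):

    #include <stdint.h>

    static inline void xsetbv(uint32_t index, uint64_t value)
    {
            uint32_t eax = (uint32_t)value, edx = (uint32_t)(value >> 32);

            asm volatile(".byte 0x0f, 0x01, 0xd1"        /* xsetbv */
                         : : "a"(eax), "d"(edx), "c"(index));
    }

    /* on resume: xsetbv(0, xfeature_mask);   index 0 is XCR0 */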

Reported-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-08-27 15:23:23 -07:00
Suresh Siddha
11c231a962 x86: use x2apic id reported by cpuid during topology discovery, fix
v2: Fix for !SMP build

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-27 09:02:19 +02:00
Hiroshi Shimamoto
a817260874 x86: acpi: move acpi_mcfg_64bit_base_addr into CONFIG_PCI_MMCONFIG
acpi_mcfg_64bit_base_addr is used when CONFIG_PCI_MMCONFIG is enabled.

Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-27 08:25:06 +02:00
H. Peter Anvin
c1b362e3b4 x86: update defconfigs
Enable some options commonly used by testers in defconfig, including
some very common device drivers and network boot support.  defconfig
is still not meant to be a kitchen-sink configuration.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-08-27 08:14:17 +02:00
H. Peter Anvin
bdd314616f x86: msr-on-cpu: remove unnecessary level of abstraction
Remove an unnecessary level of abstraction in the msr-on-cpu library.
Although this duplicates some code, the duplicated code is less than
the additional code, and this way should be faster.

Additionally, change the order of the functions to make the regular
structure of this file more obvious.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-08-25 22:45:50 -07:00
H. Peter Anvin
94d4ac2f4a Merge branch 'x86/urgent' into x86/cleanups 2008-08-25 22:45:37 -07:00
H. Peter Anvin
9ea2b82ed6 x86: cpuid: correct return value on partial operations
Return the correct return value when the CPUID driver partially
completes a request (we should return the number of bytes actually
read or written, instead of the error code.)
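
In sketch form, the convention being restored is the usual partial-I/O
rule (illustrative, not the literal driver code):

    /* Report how much was actually transferred; only a request that
     * transferred nothing at all propagates the error code. */
    return bytes_done ? bytes_done : err;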

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-08-25 17:46:12 -07:00
H. Peter Anvin
85f1cb6015 x86: msr: correct return value on partial operations
Return the correct return value when the MSR driver partially
completes a request (we should return the number of bytes actually
read or written, instead of the error code.)

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-08-25 17:46:12 -07:00
H. Peter Anvin
4b46ca701b x86: cpuid: propagate error from smp_call_function_single()
Propagate error (-ENXIO) from smp_call_function_single() in the CPUID
driver.  This can happen when a CPU is unplugged while the CPUID
driver is open.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-08-25 17:45:48 -07:00
H. Peter Anvin
c6f31932d0 x86: msr: propagate errors from smp_call_function_single()
Propagate error (-ENXIO) from smp_call_function_single().  These
errors can happen when a CPU is unplugged while the MSR driver is
open.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-08-25 17:45:48 -07:00
Linus Torvalds
d25e26b61d [x86] Clean up MAXSMP Kconfig, and limit NR_CPUS to 512
This fixes a regression that was indirectly caused by commit
1184dc2ffe ("x86: modify Kconfig to allow
up to 4096 cpus").

Allowing 4k CPUs is not practical at this time, because we still have a
number of places that have several 'cpumask_t's on the stack, and a
4k-bit cpumask is 512 bytes of stack space for each such variable.  This
literally caused functions like 'smp_call_function_mask' to have a 2.5kB
stack frame, and several functions to have 2kB stackframes.

With an 8kB stack total, smashing the stack was simply much too likely.
At least bugzilla entry

	http://bugzilla.kernel.org/show_bug.cgi?id=11342

was due to this.

The earlier commit to not inline load_module() into sys_init_module()
fixed the particular symptoms of this that Alan Brunelle saw in that
bugzilla entry, but the huge stack waste by cpumask_t's was the more
direct cause.

Some day we'll have allocation helpers that allocate large CPU masks
dynamically, but in the meantime we simply cannot allow cpumasks this
large.

Cc: Alan D. Brunelle <Alan.Brunelle@hp.com>
Cc: Mike Travis <travis@sgi.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-08-25 14:15:38 -07:00
Linus Torvalds
ec73adba51 Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: add X86_FEATURE_XMM4_2 definitions
  x86: fix cpufreq + sched_clock() regression
  x86: fix HPET regression in 2.6.26 versus 2.6.25, check hpet against BAR, v3
  x86: do not enable TSC notifier if we don't need it
  x86 MCE: Fix CPU hotplug problem with multiple multicore AMD CPUs
  x86: fix: make PCI ECS for AMD CPUs hotplug capable
  x86: fix: do not run code in amd_bus.c on non-AMD CPUs
2008-08-25 11:26:33 -07:00
Avi Kivity
cd5998ebfb KVM: MMU: Fix torn shadow pte
The shadow code assigns a pte directly in one place, which is nonatomic on
i386 and can cause random memory references.  Fix by using an atomic setter.
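
A sketch of the difference on i386 PAE, where a shadow pte is 64 bits wide
(set_64bit() is the kernel's atomic 64-bit store helper; the variable names
are illustrative):

    *sptep = new_spte;                   /* compiles to two 32-bit stores on
                                            i386: a concurrent reader can see
                                            a torn half-old/half-new pte */

    set_64bit((u64 *)sptep, new_spte);   /* single atomic 64-bit update */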

Signed-off-by: Avi Kivity <avi@qumranet.com>
2008-08-25 17:24:27 +03:00
Peter Zijlstra
52a8968ce9 x86: fix cpufreq + sched_clock() regression
I noticed that my sched_clock() was slow on a number of machines, so I
started looking at cpufreq.

The below seems to fix the problem for me.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-25 14:39:19 +02:00
Ingo Molnar
f58899bb02 Merge branch 'linus' into x86/urgent 2008-08-25 14:39:12 +02:00
Avi Kivity
c7ffa6c262 x86: default to reboot via ACPI
Triple-fault and keyboard reset may assert INIT instead of RESET; however
INIT is blocked when Intel VT is enabled.  This leads to a partially reset
machine when invoking emergency_restart via sysrq-b: the processor is still
working but other parts of the system are dead.

Default to rebooting via ACPI, which correctly asserts RESET and reboots the
machine.

This is safe since we will fall back to keyboard reset and triple fault if
acpi is not enabled or if the reset is not successful.

Signed-off-by: Avi Kivity <avi@qumranet.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-25 12:31:32 +02:00
Alex Nixon
d68d82afd4 xen: implement CPU hotplugging
Note the changes from 2.6.18-xen CPU hotplugging:

A vcpu_down request from the remote admin via Xenbus both hotunplugs the
CPU, and disables it by removing it from the cpu_present map, and removing
its entry in /sys.

A vcpu_up request from the remote admin only re-enables the CPU, and does
not immediately bring the CPU up. A udev event is emitted, which can be
caught by the user if he wishes to automatically re-up CPUs when available,
or implement a more complex policy.

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-25 11:25:14 +02:00
Robert Richter
ed21763e7b x86: cleanup in amd_cpu_notify()
small coding style fix.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-25 11:11:59 +02:00
Ingo Molnar
ea1c9de45e Merge branch 'x86/urgent' into x86/cleanups 2008-08-25 11:10:42 +02:00
Alex Nixon
8227dce7dc x86: separate generic cpu disabling code from APIC writes in cpu_disable
It allows paravirt implementations of cpu_disable to share the
cpu_disable_common code, without having to take on board APIC
writes, which may not be appropriate.

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-25 10:59:20 +02:00
Alex Nixon
a21f5d88c1 x86: unify x86_32 and x86_64 play_dead into one function
Add the new play_dead into smpboot.c, as it fits more cleanly in there
alongside other CONFIG_HOTPLUG functions.

Separate out the common code into its own function.

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-25 10:59:19 +02:00
Alex Nixon
3790025863 x86_32: clean up play_dead
The removal of the CPU from the various maps was redundant as it already
happened in cpu_disable.

After cleaning this up, cpu_uninit only resets the tlb state, so rename
it and create a noop version for the X86_64 case (so the two play_deads
can be unified later).

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-25 10:59:18 +02:00
Alex Nixon
93be71b672 x86: add cpu hotplug hooks into smp_ops
Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-25 10:59:18 +02:00
Ingo Molnar
e4f807c2b4 Merge branch 'linus' into x86/xen
Conflicts:
	arch/x86/kernel/paravirt.c

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-25 10:54:07 +02:00
Yinghai Lu
a2bd7274b4 x86: fix HPET regression in 2.6.26 versus 2.6.25, check hpet against BAR, v3
David Witbrodt tracked down (and bisected) a hpet bootup hang on his
system to the following problem: a BIOS bug made the hpet device
visible as a generic PCI device. If e820 reserved entries happen to
be registered first in the resource tree [which v2.6.26 started doing],
then the PCI code will reallocate that device's BAR to some other
address - breaking timer IRQs and hanging the system.

( Normally hpet devices are hidden by the BIOS from the OS's PCI
  discovery via chipset magic. Sometimes the hpet is not a PCI device
  at all. )

Solve this fundamental fragility by making non-PCI platform drivers
insert resources into the resource tree even if it overlaps the e820
reserved entry, to keep the resource manager from updating the BAR.

Also do these checks for the ioapic and mmconfig addresses, and emit
a warning if this happens.

Bisected-by: David Witbrodt <dawitbro@sbcglobal.net>
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Tested-by: David Witbrodt <dawitbro@sbcglobal.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-08-25 10:02:03 +02:00
Linus Torvalds
060700b571 x86: do not enable TSC notifier if we don't need it
Impact: crash on non-TSC-equipped CPUs

Don't enable the TSC notifier if we *either*:

1. don't have a TSC, or
2. have a CPU with a constant TSC.

In either of those cases, the notifier is either damaging (1) or useless (2).

From: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-08-24 17:16:28 -07:00