ia64: SMP: Remove call to ipi_call_lock_irq()/ipi_call_unlock_irq()
ipi_call_lock_irq() and ipi_call_unlock_irq() lock and unlock call_function.lock, respectively. That lock protects only the call_function data structure itself and is completely unrelated to cpu_online_mask: the mask to which the IPIs are sent is calculated before call_function.lock is taken in smp_call_function_many(), so the locking around set_cpu_online() is pointless and can be removed.

[ tglx: Massaged changelog ]

Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Cc: ralf@linux-mips.org
Cc: sshtylyov@mvista.com
Cc: david.daney@cavium.com
Cc: nikunj@linux.vnet.ibm.com
Cc: paulmck@linux.vnet.ibm.com
Cc: axboe@kernel.dk
Cc: peterz@infradead.org
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: linux-ia64@vger.kernel.org
Link: http://lkml.kernel.org/r/1338275765-3217-8-git-send-email-yong.zhang0@gmail.com
Acked-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
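To make the ordering argument above concrete, here is a minimal user-space model of the pattern the changelog describes. It is an illustration only, not kernel/smp.c source: online_mask, call_lock, sender() and newly_booted_cpu() are hypothetical stand-ins for cpu_online_mask, call_function.lock, smp_call_function_many() and smp_callin(). The point it models is that the destination mask is snapshotted before the lock is taken, so wrapping the "set online" step in the same lock cannot serialize the two.

/*
 * Sketch of the ordering: the sender snapshots the online mask with no lock
 * held (deliberately, mirroring smp_call_function_many()), then takes the
 * lock only to queue the call.  Taking the same lock around the "set online"
 * step therefore provides no protection for the snapshot.
 */
#include <pthread.h>
#include <stdio.h>

static unsigned long online_mask = 0x7;                       /* stand-in for cpu_online_mask (CPUs 0-2 online) */
static pthread_mutex_t call_lock = PTHREAD_MUTEX_INITIALIZER; /* stand-in for call_function.lock */

/* Models smp_call_function_many(): destination mask computed BEFORE the lock. */
static void *sender(void *unused)
{
	unsigned long targets = online_mask;   /* snapshot taken with no lock held */

	pthread_mutex_lock(&call_lock);        /* protects only the call data, not the snapshot */
	printf("queueing call, sending IPIs to mask %#lx\n", targets);
	pthread_mutex_unlock(&call_lock);
	return NULL;
}

/* Models the old smp_callin() path: the removed ipi_call_lock_irq() region
 * wrapped the "set online" step in the same lock, which cannot order it
 * against the unlocked snapshot above. */
static void *newly_booted_cpu(void *unused)
{
	pthread_mutex_lock(&call_lock);
	online_mask |= 1UL << 3;               /* stand-in for set_cpu_online(cpuid, true) */
	pthread_mutex_unlock(&call_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, sender, NULL);
	pthread_create(&b, NULL, newly_booted_cpu, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Compile with cc -pthread; whichever thread wins, call_lock never decides whether the snapshot in sender() sees the new CPU, which is exactly why the lock/unlock pair around set_cpu_online() could be dropped.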
commit 459165e250 (parent 3b6f70fd7d)
@@ -382,7 +382,6 @@ smp_callin (void)
 	set_numa_node(cpu_to_node_map[cpuid]);
 	set_numa_mem(local_memory_node(cpu_to_node_map[cpuid]));
 
-	ipi_call_lock_irq();
 	spin_lock(&vector_lock);
 	/* Setup the per cpu irq handling data structures */
 	__setup_vector_irq(cpuid);
@@ -390,7 +389,6 @@ smp_callin (void)
 	set_cpu_online(cpuid, true);
 	per_cpu(cpu_state, cpuid) = CPU_ONLINE;
 	spin_unlock(&vector_lock);
-	ipi_call_unlock_irq();
 
 	smp_setup_percpu_timer();