commit 67d93ffc0f

The PTP vclocks were using spinlocks to protect access to their
timecounter and cyclecounter. Access to the timecounter/cyclecounter is
backed by the same driver callbacks that are used for non-virtual PHCs,
but the use of the spinlock imposes a new limitation that didn't exist
previously: the callbacks are now invoked in atomic context, so they
must not sleep.
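
For context, the vclock's cycle counter read looks roughly like the
sketch below (simplified from drivers/ptp/ptp_vclock.c; the
gettimex64/gettime64 fallback and other details are omitted). It calls
straight into the parent PHC driver's gettime callback, and it is
reached through timecounter_read() while the vclock's lock is held:

/* Sketch: reading the vclock's cycle counter goes through the parent
 * (physical) PHC driver's gettime callback. Before this patch it was
 * reached with the vclock's spinlock held, i.e. in atomic context.
 */
#define cc_to_vclock(d) container_of((d), struct ptp_vclock, cc)

static u64 ptp_vclock_read(const struct cyclecounter *cc)
{
	struct ptp_vclock *vclock = cc_to_vclock(cc);
	struct ptp_clock *ptp = vclock->pclock;
	struct timespec64 ts = {};

	ptp->info->gettimex64(ptp->info, &ts, NULL);

	return timespec64_to_ns(&ts);
}
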
Some drivers, such as sfc or ice, may sleep in these callbacks, causing
errors like "BUG: scheduling while atomic: ptp5/25223/0x00000002".

Fix it by replacing the vclock's spinlock with a mutex. This fixes the
mentioned bug and doesn't introduce longer delays.
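
As a rough illustration of the change (a simplified sketch, not the
literal hunk from this patch), the vclock's gettime path ends up looking
something like this, with struct ptp_vclock's lock now being a mutex.
Using mutex_lock_interruptible() here is an assumption about the exact
locking call; the point is simply that the driver callback reached
through timecounter_read() is now allowed to sleep:

#define info_to_vclock(d) container_of((d), struct ptp_vclock, info)

static int ptp_vclock_gettime(struct ptp_clock_info *ptp,
			      struct timespec64 *ts)
{
	struct ptp_vclock *vclock = info_to_vclock(ptp);
	u64 ns;

	/* A mutex instead of a spinlock: callers are process context,
	 * so sleeping in the underlying driver callback is fine.
	 */
	if (mutex_lock_interruptible(&vclock->lock))
		return -EINTR;
	ns = timecounter_read(&vclock->tc);
	mutex_unlock(&vclock->lock);
	*ts = ns_to_timespec64(ns);

	return 0;
}
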
I've tested synchronization of various combinations of clocks:
- vclock->sysclock
- sysclock->vclock
- vclock->vclock
- hardware PHC in different NIC -> vclock
- created 4 vclocks and launched 4 parallel phc2sys processes with
  lockdep enabled

In all cases, the delays reported by phc2sys are in the same range of
values as before applying the patch.

Link: https://lore.kernel.org/netdev/69d0ff33-bd32-6aa5-d36c-fbdc3c01337c@redhat.com/
Fixes: