Merge tag 'timers-core-2021-04-26' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull timer updates from Thomas Gleixner:
 "The time and timers updates contain:

  Core changes:

   - Allow runtime power management when the clocksource is changed.

   - A correctness fix for clock_adjtime32() so that the return value
     on success is not overwritten by the result of the copy to user.

   - Allow late installment of broadcast clockevent devices which was
     broken because nothing switched them over to oneshot mode. This
     went unnoticed so far because clockevent devices used to be built
     in, but now people started to make them modular.

   - Debugfs related simplifications

   - Small cleanups and improvements here and there

  Driver changes:

   - The usual set of device tree binding updates for a wide range of
     drivers/devices.

   - The usual updates and improvements for drivers all over the place
     but nothing outstanding.

   - No new clocksource/event drivers. They'll come back next time"

* tag 'timers-core-2021-04-26' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (24 commits)
  posix-timers: Preserve return value in clock_adjtime32()
  tick/broadcast: Allow late registered device to enter oneshot mode
  tick: Use tick_check_replacement() instead of open coding it
  time/timecounter: Mark 1st argument of timecounter_cyc2time() as const
  dt-bindings: timer: nuvoton,npcm7xx: Add wpcm450-timer
  clocksource/drivers/arm_arch_timer: Add __ro_after_init and __init
  clocksource/drivers/timer-ti-dm: Handle dra7 timer wrap errata i940
  clocksource/drivers/timer-ti-dm: Prepare to handle dra7 timer wrap issue
  clocksource/drivers/dw_apb_timer_of: Add handling for potential memory leak
  clocksource/drivers/npcm: Add support for WPCM450
  clocksource/drivers/sh_cmt: Don't use CMTOUT_IE with R-Car Gen2/3
  clocksource/drivers/pistachio: Fix trivial typo
  clocksource/drivers/ingenic_ost: Fix return value check in ingenic_ost_probe()
  clocksource/drivers/timer-ti-dm: Add missing set_state_oneshot_stopped
  clocksource/drivers/timer-ti-dm: Fix posted mode status check order
  dt-bindings: timer: renesas,cmt: Document R8A77961
  dt-bindings: timer: renesas,cmt: Add r8a779a0 CMT support
  clocksource/drivers/ingenic-ost: Add support for the JZ4760B
  clocksource/drivers/ingenic: Add support for the JZ4760
  dt-bindings: timer: ingenic: Add compatible strings for JZ4760(B)
  ...
commit 87dcebff92
@@ -20,6 +20,8 @@ select:
enum:
- ingenic,jz4740-tcu
- ingenic,jz4725b-tcu
- ingenic,jz4760-tcu
- ingenic,jz4760b-tcu
- ingenic,jz4770-tcu
- ingenic,jz4780-tcu
- ingenic,x1000-tcu
@@ -52,12 +54,15 @@ properties:
- enum:
- ingenic,jz4740-tcu
- ingenic,jz4725b-tcu
- ingenic,jz4770-tcu
- ingenic,jz4760-tcu
- ingenic,x1000-tcu
- const: simple-mfd
- items:
- const: ingenic,jz4780-tcu
- const: ingenic,jz4770-tcu
- enum:
- ingenic,jz4780-tcu
- ingenic,jz4770-tcu
- ingenic,jz4760b-tcu
- const: ingenic,jz4760-tcu
- const: simple-mfd

reg:
@@ -118,6 +123,8 @@ patternProperties:
- items:
- enum:
- ingenic,jz4770-watchdog
- ingenic,jz4760b-watchdog
- ingenic,jz4760-watchdog
- ingenic,jz4725b-watchdog
- const: ingenic,jz4740-watchdog

@@ -147,6 +154,8 @@ patternProperties:
- ingenic,jz4725b-pwm
- items:
- enum:
- ingenic,jz4760-pwm
- ingenic,jz4760b-pwm
- ingenic,jz4770-pwm
- ingenic,jz4780-pwm
- const: ingenic,jz4740-pwm
@@ -183,10 +192,15 @@ patternProperties:
oneOf:
- enum:
- ingenic,jz4725b-ost
- ingenic,jz4770-ost
- ingenic,jz4760b-ost
- items:
- const: ingenic,jz4780-ost
- const: ingenic,jz4770-ost
- const: ingenic,jz4760-ost
- const: ingenic,jz4725b-ost
- items:
- enum:
- ingenic,jz4780-ost
- ingenic,jz4770-ost
- const: ingenic,jz4760b-ost

reg:
maxItems: 1
@@ -226,7 +240,7 @@ examples:
#include <dt-bindings/clock/jz4770-cgu.h>
#include <dt-bindings/clock/ingenic,tcu.h>
tcu: timer@10002000 {
compatible = "ingenic,jz4770-tcu", "simple-mfd";
compatible = "ingenic,jz4770-tcu", "ingenic,jz4760-tcu", "simple-mfd";
reg = <0x10002000 0x1000>;
#address-cells = <1>;
#size-cells = <1>;
@@ -272,7 +286,7 @@ examples:
};

ost: timer@e0 {
compatible = "ingenic,jz4770-ost";
compatible = "ingenic,jz4770-ost", "ingenic,jz4760b-ost";
reg = <0xe0 0x20>;

clocks = <&tcu TCU_CLK_OST>;

@@ -4,7 +4,8 @@ Nuvoton NPCM7xx have three timer modules, each timer module provides five 24-bit
timer counters.

Required properties:
- compatible : "nuvoton,npcm750-timer" for Poleg NPCM750.
- compatible : "nuvoton,npcm750-timer" for Poleg NPCM750, or
"nuvoton,wpcm450-timer" for Hermon WPCM450.
- reg : Offset and length of the register set for the device.
- interrupts : Contain the timer interrupt of timer 0.
- clocks : phandle of timer reference clock (usually a 25 MHz clock).

@@ -74,11 +74,13 @@ properties:
- renesas,r8a774e1-cmt0 # 32-bit CMT0 on RZ/G2H
- renesas,r8a7795-cmt0 # 32-bit CMT0 on R-Car H3
- renesas,r8a7796-cmt0 # 32-bit CMT0 on R-Car M3-W
- renesas,r8a77961-cmt0 # 32-bit CMT0 on R-Car M3-W+
- renesas,r8a77965-cmt0 # 32-bit CMT0 on R-Car M3-N
- renesas,r8a77970-cmt0 # 32-bit CMT0 on R-Car V3M
- renesas,r8a77980-cmt0 # 32-bit CMT0 on R-Car V3H
- renesas,r8a77990-cmt0 # 32-bit CMT0 on R-Car E3
- renesas,r8a77995-cmt0 # 32-bit CMT0 on R-Car D3
- renesas,r8a779a0-cmt0 # 32-bit CMT0 on R-Car V3U
- const: renesas,rcar-gen3-cmt0 # 32-bit CMT0 on R-Car Gen3 and RZ/G2

- items:
@@ -89,11 +91,13 @@ properties:
- renesas,r8a774e1-cmt1 # 48-bit CMT on RZ/G2H
- renesas,r8a7795-cmt1 # 48-bit CMT on R-Car H3
- renesas,r8a7796-cmt1 # 48-bit CMT on R-Car M3-W
- renesas,r8a77961-cmt1 # 48-bit CMT on R-Car M3-W+
- renesas,r8a77965-cmt1 # 48-bit CMT on R-Car M3-N
- renesas,r8a77970-cmt1 # 48-bit CMT on R-Car V3M
- renesas,r8a77980-cmt1 # 48-bit CMT on R-Car V3H
- renesas,r8a77990-cmt1 # 48-bit CMT on R-Car E3
- renesas,r8a77995-cmt1 # 48-bit CMT on R-Car D3
- renesas,r8a779a0-cmt1 # 48-bit CMT on R-Car V3U
- const: renesas,rcar-gen3-cmt1 # 48-bit CMT on R-Car Gen3 and RZ/G2

reg:

@@ -28,8 +28,14 @@ properties:
- renesas,tmu-r8a774e1 # RZ/G2H
- renesas,tmu-r8a7778 # R-Car M1A
- renesas,tmu-r8a7779 # R-Car H1
- renesas,tmu-r8a7795 # R-Car H3
- renesas,tmu-r8a7796 # R-Car M3-W
- renesas,tmu-r8a77961 # R-Car M3-W+
- renesas,tmu-r8a77965 # R-Car M3-N
- renesas,tmu-r8a77970 # R-Car V3M
- renesas,tmu-r8a77980 # R-Car V3H
- renesas,tmu-r8a77990 # R-Car E3
- renesas,tmu-r8a77995 # R-Car D3
- const: renesas,tmu

reg:

@@ -1168,7 +1168,7 @@
};
};

target-module@34000 { /* 0x48034000, ap 7 46.0 */
timer3_target: target-module@34000 { /* 0x48034000, ap 7 46.0 */
compatible = "ti,sysc-omap4-timer", "ti,sysc";
reg = <0x34000 0x4>,
<0x34010 0x4>;
@@ -1195,7 +1195,7 @@
};
};

target-module@36000 { /* 0x48036000, ap 9 4e.0 */
timer4_target: target-module@36000 { /* 0x48036000, ap 9 4e.0 */
compatible = "ti,sysc-omap4-timer", "ti,sysc";
reg = <0x36000 0x4>,
<0x36010 0x4>;

@@ -46,6 +46,7 @@

timer {
compatible = "arm,armv7-timer";
status = "disabled"; /* See ARM architected timer wrap erratum i940 */
interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
<GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
<GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
@@ -1241,3 +1242,22 @@
assigned-clock-parents = <&sys_32k_ck>;
};
};

/* Local timers, see ARM architected timer wrap erratum i940 */
&timer3_target {
ti,no-reset-on-init;
ti,no-idle;
timer@0 {
assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER3_CLKCTRL 24>;
assigned-clock-parents = <&timer_sys_clk_div>;
};
};

&timer4_target {
ti,no-reset-on-init;
ti,no-idle;
timer@0 {
assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER4_CLKCTRL 24>;
assigned-clock-parents = <&timer_sys_clk_div>;
};
};

@@ -51,7 +51,7 @@

static unsigned arch_timers_present __initdata;

static void __iomem *arch_counter_base;
static void __iomem *arch_counter_base __ro_after_init;

struct arch_timer {
void __iomem *base;
@@ -60,15 +60,16 @@ struct arch_timer {

#define to_arch_timer(e) container_of(e, struct arch_timer, evt)

static u32 arch_timer_rate;
static int arch_timer_ppi[ARCH_TIMER_MAX_TIMER_PPI];
static u32 arch_timer_rate __ro_after_init;
u32 arch_timer_rate1 __ro_after_init;
static int arch_timer_ppi[ARCH_TIMER_MAX_TIMER_PPI] __ro_after_init;

static struct clock_event_device __percpu *arch_timer_evt;

static enum arch_timer_ppi_nr arch_timer_uses_ppi = ARCH_TIMER_VIRT_PPI;
static bool arch_timer_c3stop;
static bool arch_timer_mem_use_virtual;
static bool arch_counter_suspend_stop;
static enum arch_timer_ppi_nr arch_timer_uses_ppi __ro_after_init = ARCH_TIMER_VIRT_PPI;
static bool arch_timer_c3stop __ro_after_init;
static bool arch_timer_mem_use_virtual __ro_after_init;
static bool arch_counter_suspend_stop __ro_after_init;
#ifdef CONFIG_GENERIC_GETTIMEOFDAY
static enum vdso_clock_mode vdso_default = VDSO_CLOCKMODE_ARCHTIMER;
#else
@@ -76,7 +77,7 @@ static enum vdso_clock_mode vdso_default = VDSO_CLOCKMODE_NONE;
#endif /* CONFIG_GENERIC_GETTIMEOFDAY */

static cpumask_t evtstrm_available = CPU_MASK_NONE;
static bool evtstrm_enable = IS_ENABLED(CONFIG_ARM_ARCH_TIMER_EVTSTREAM);
static bool evtstrm_enable __ro_after_init = IS_ENABLED(CONFIG_ARM_ARCH_TIMER_EVTSTREAM);

static int __init early_evtstrm_cfg(char *buf)
{
@@ -176,7 +177,7 @@ static notrace u64 arch_counter_get_cntvct(void)
* to exist on arm64. arm doesn't use this before DT is probed so even
* if we don't have the cp15 accessors we won't have a problem.
*/
u64 (*arch_timer_read_counter)(void) = arch_counter_get_cntvct;
u64 (*arch_timer_read_counter)(void) __ro_after_init = arch_counter_get_cntvct;
EXPORT_SYMBOL_GPL(arch_timer_read_counter);

static u64 arch_counter_read(struct clocksource *cs)
@@ -925,7 +926,7 @@ static int validate_timer_rate(void)
* rate was probed first, and don't verify that others match. If the first node
* probed has a clock-frequency property, this overrides the HW register.
*/
static void arch_timer_of_configure_rate(u32 rate, struct device_node *np)
static void __init arch_timer_of_configure_rate(u32 rate, struct device_node *np)
{
/* Who has more than one independent system counter? */
if (arch_timer_rate)
@@ -939,7 +940,7 @@ static void arch_timer_of_configure_rate(u32 rate, struct device_node *np)
pr_warn("frequency not available\n");
}

static void arch_timer_banner(unsigned type)
static void __init arch_timer_banner(unsigned type)
{
pr_info("%s%s%s timer(s) running at %lu.%02luMHz (%s%s%s).\n",
type & ARCH_TIMER_TYPE_CP15 ? "cp15" : "",

@@ -18,7 +18,7 @@

#define RATE_32K 32768

#define TIMER_MODE_CONTINOUS 0x1
#define TIMER_MODE_CONTINUOUS 0x1
#define TIMER_DOWNCOUNT_VAL 0xffffffff

#define PRCMU_TIMER_REF 0
@@ -55,13 +55,13 @@ static int __init clksrc_dbx500_prcmu_init(struct device_node *node)

/*
* The A9 sub system expects the timer to be configured as
* a continous looping timer.
* a continuous looping timer.
* The PRCMU should configure it but if it for some reason
* don't we do it here.
*/
if (readl(clksrc_dbx500_timer_base + PRCMU_TIMER_MODE) !=
TIMER_MODE_CONTINOUS) {
writel(TIMER_MODE_CONTINOUS,
TIMER_MODE_CONTINUOUS) {
writel(TIMER_MODE_CONTINUOUS,
clksrc_dbx500_timer_base + PRCMU_TIMER_MODE);
writel(TIMER_DOWNCOUNT_VAL,
clksrc_dbx500_timer_base + PRCMU_TIMER_REF);

@@ -38,7 +38,7 @@ static int __init timer_get_base_and_rate(struct device_node *np,
}

/*
* Not all implementations use a periphal clock, so don't panic
* Not all implementations use a peripheral clock, so don't panic
* if it's not present
*/
pclk = of_clk_get_by_name(np, "pclk");
@@ -52,18 +52,34 @@ static int __init timer_get_base_and_rate(struct device_node *np,
return 0;

timer_clk = of_clk_get_by_name(np, "timer");
if (IS_ERR(timer_clk))
return PTR_ERR(timer_clk);
if (IS_ERR(timer_clk)) {
ret = PTR_ERR(timer_clk);
goto out_pclk_disable;
}

ret = clk_prepare_enable(timer_clk);
if (ret)
return ret;
goto out_timer_clk_put;

*rate = clk_get_rate(timer_clk);
if (!(*rate))
return -EINVAL;
if (!(*rate)) {
ret = -EINVAL;
goto out_timer_clk_disable;
}

return 0;

out_timer_clk_disable:
clk_disable_unprepare(timer_clk);
out_timer_clk_put:
clk_put(timer_clk);
out_pclk_disable:
if (!IS_ERR(pclk)) {
clk_disable_unprepare(pclk);
clk_put(pclk);
}
iounmap(*base);
return ret;
}

static int __init add_clockevent(struct device_node *event_timer)

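The dw_apb_timer_of hunk above plugs the leak by adding a reverse-order unwind for the resources acquired earlier in timer_get_base_and_rate(). A generic, self-contained C sketch of that idiom follows; plain malloc() stands in for the driver's clk/ioremap resources, so the names and types here are illustrative only:

#include <stdlib.h>

/* Each acquisition gets a matching label; failures jump to the label
 * that releases everything obtained so far, in reverse order. */
int acquire_three(char **a, char **b, char **c)
{
	int ret = -1;

	*a = malloc(16);
	if (!*a)
		return -1;

	*b = malloc(16);
	if (!*b)
		goto out_free_a;

	*c = malloc(16);
	if (!*c)
		goto out_free_b;

	return 0; /* success: caller owns all three */

out_free_b:
	free(*b);
out_free_a:
	free(*a);
	return ret;
}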
@@ -457,7 +457,7 @@ void __init hv_init_clocksource(void)
{
/*
* Try to set up the TSC page clocksource. If it succeeds, we're
* done. Otherwise, set up the MSR clocksoruce. At least one of
* done. Otherwise, set up the MSR clocksource. At least one of
* these will always be available except on very old versions of
* Hyper-V on x86. In that case we won't have a Hyper-V
* clocksource, but Linux will still run with a clocksource based

@@ -88,9 +88,9 @@ static int __init ingenic_ost_probe(struct platform_device *pdev)
return PTR_ERR(ost->regs);

map = device_node_to_regmap(dev->parent->of_node);
if (!map) {
if (IS_ERR(map)) {
dev_err(dev, "regmap not found");
return -EINVAL;
return PTR_ERR(map);
}

ost->clk = devm_clk_get(dev, "ost");
@@ -167,13 +167,14 @@ static const struct ingenic_ost_soc_info jz4725b_ost_soc_info = {
.is64bit = false,
};

static const struct ingenic_ost_soc_info jz4770_ost_soc_info = {
static const struct ingenic_ost_soc_info jz4760b_ost_soc_info = {
.is64bit = true,
};

static const struct of_device_id ingenic_ost_of_match[] = {
{ .compatible = "ingenic,jz4725b-ost", .data = &jz4725b_ost_soc_info, },
{ .compatible = "ingenic,jz4770-ost", .data = &jz4770_ost_soc_info, },
{ .compatible = "ingenic,jz4760b-ost", .data = &jz4760b_ost_soc_info, },
{ .compatible = "ingenic,jz4770-ost", .data = &jz4760b_ost_soc_info, },
{ }
};

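The ingenic_ost fix above works because device_node_to_regmap() reports failure through an error pointer, never NULL, so the old `if (!map)` check could not fire. A small userspace model of the kernel's ERR_PTR convention shows why (the helpers mirror include/linux/err.h; main() is illustrative):

#include <stdint.h>
#include <stdio.h>

/* Error codes are encoded in the top 4095 values of the address
 * space, so a failed lookup returns a non-NULL "pointer". */
#define MAX_ERRNO 4095
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

int main(void)
{
	void *map = ERR_PTR(-22); /* pretend the lookup failed with -EINVAL */

	if (IS_ERR(map))          /* correct: catches the failure */
		printf("error %ld\n", PTR_ERR(map));
	if (!map)                 /* the old NULL check never fires here */
		printf("never reached for error pointers\n");
	return 0;
}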
@@ -264,6 +264,7 @@ static const struct ingenic_soc_info jz4725b_soc_info = {
static const struct of_device_id ingenic_tcu_of_match[] = {
{ .compatible = "ingenic,jz4740-tcu", .data = &jz4740_soc_info, },
{ .compatible = "ingenic,jz4725b-tcu", .data = &jz4725b_soc_info, },
{ .compatible = "ingenic,jz4760-tcu", .data = &jz4740_soc_info, },
{ .compatible = "ingenic,jz4770-tcu", .data = &jz4740_soc_info, },
{ .compatible = "ingenic,x1000-tcu", .data = &jz4740_soc_info, },
{ /* sentinel */ }
@@ -358,6 +359,7 @@ err_free_ingenic_tcu:

TIMER_OF_DECLARE(jz4740_tcu_intc, "ingenic,jz4740-tcu", ingenic_tcu_init);
TIMER_OF_DECLARE(jz4725b_tcu_intc, "ingenic,jz4725b-tcu", ingenic_tcu_init);
TIMER_OF_DECLARE(jz4760_tcu_intc, "ingenic,jz4760-tcu", ingenic_tcu_init);
TIMER_OF_DECLARE(jz4770_tcu_intc, "ingenic,jz4770-tcu", ingenic_tcu_init);
TIMER_OF_DECLARE(x1000_tcu_intc, "ingenic,x1000-tcu", ingenic_tcu_init);

@@ -339,8 +339,9 @@ static int sh_cmt_enable(struct sh_cmt_channel *ch)
sh_cmt_write_cmcsr(ch, SH_CMT16_CMCSR_CMIE |
SH_CMT16_CMCSR_CKS512);
} else {
sh_cmt_write_cmcsr(ch, SH_CMT32_CMCSR_CMM |
SH_CMT32_CMCSR_CMTOUT_IE |
u32 cmtout = ch->cmt->info->model <= SH_CMT_48BIT ?
SH_CMT32_CMCSR_CMTOUT_IE : 0;
sh_cmt_write_cmcsr(ch, cmtout | SH_CMT32_CMCSR_CMM |
SH_CMT32_CMCSR_CMR_IRQ |
SH_CMT32_CMCSR_CKS_RCLK8);
}

@@ -455,9 +455,9 @@ static int __init tcb_clksrc_init(struct device_node *node)
tcaddr = tc.regs;

if (bits == 32) {
/* use apropriate function to read 32 bit counter */
/* use appropriate function to read 32 bit counter */
clksrc.read = tc_get_cycles32;
/* setup ony channel 0 */
/* setup only channel 0 */
tcb_setup_single_chan(&tc, best_divisor_idx);
tc_sched_clock = tc_sched_clock_read32;
tc_delay_timer.read_current_timer = tc_delay_timer_read32;

@@ -116,7 +116,7 @@ static int ftm_set_next_event(unsigned long delta,
* to the MOD register latches the value into a buffer. The MOD
* register is updated with the value of its write buffer with
* the following scenario:
* a, the counter source clock is diabled.
* a, the counter source clock is disabled.
*/
ftm_counter_disable(priv->clkevt_base);

@@ -237,7 +237,7 @@ static void __init mchp_pit64b_pres_compute(u32 *pres, u32 clk_rate,
break;
}

/* Use the bigest prescaler if we didn't match one. */
/* Use the biggest prescaler if we didn't match one. */
if (*pres == MCHP_PIT64B_PRES_MAX)
*pres = MCHP_PIT64B_PRES_MAX - 1;
}

@@ -208,5 +208,6 @@ static int __init npcm7xx_timer_init(struct device_node *np)
return 0;
}

TIMER_OF_DECLARE(wpcm450, "nuvoton,wpcm450-timer", npcm7xx_timer_init);
TIMER_OF_DECLARE(npcm7xx, "nuvoton,npcm750-timer", npcm7xx_timer_init);

@@ -211,10 +211,10 @@ out_fail:
}

/**
* timer_of_cleanup - release timer_of ressources
* timer_of_cleanup - release timer_of resources
* @to: timer_of structure
*
* Release the ressources that has been used in timer_of_init().
* Release the resources that has been used in timer_of_init().
* This function should be called in init error cases
*/
void __init timer_of_cleanup(struct timer_of *to)

@@ -71,7 +71,7 @@ static u64 notrace
pistachio_clocksource_read_cycles(struct clocksource *cs)
{
struct pistachio_clocksource *pcs = to_pistachio_clocksource(cs);
u32 counter, overflw;
u32 counter, overflow;
unsigned long flags;

/*
@@ -80,7 +80,7 @@ pistachio_clocksource_read_cycles(struct clocksource *cs)
*/

raw_spin_lock_irqsave(&pcs->lock, flags);
overflw = gpt_readl(pcs->base, TIMER_CURRENT_OVERFLOW_VALUE, 0);
overflow = gpt_readl(pcs->base, TIMER_CURRENT_OVERFLOW_VALUE, 0);
counter = gpt_readl(pcs->base, TIMER_CURRENT_VALUE, 0);
raw_spin_unlock_irqrestore(&pcs->lock, flags);

@@ -2,6 +2,7 @@
#include <linux/clk.h>
#include <linux/clocksource.h>
#include <linux/clockchips.h>
#include <linux/cpuhotplug.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/iopoll.h>
@@ -449,13 +450,13 @@ static int dmtimer_set_next_event(unsigned long cycles,
struct dmtimer_systimer *t = &clkevt->t;
void __iomem *pend = t->base + t->pend;

writel_relaxed(0xffffffff - cycles, t->base + t->counter);
while (readl_relaxed(pend) & WP_TCRR)
cpu_relax();
writel_relaxed(0xffffffff - cycles, t->base + t->counter);

writel_relaxed(OMAP_TIMER_CTRL_ST, t->base + t->ctrl);
while (readl_relaxed(pend) & WP_TCLR)
cpu_relax();
writel_relaxed(OMAP_TIMER_CTRL_ST, t->base + t->ctrl);

return 0;
}
@@ -490,18 +491,18 @@ static int dmtimer_set_periodic(struct clock_event_device *evt)
dmtimer_clockevent_shutdown(evt);

/* Looks like we need to first set the load value separately */
writel_relaxed(clkevt->period, t->base + t->load);
while (readl_relaxed(pend) & WP_TLDR)
cpu_relax();
writel_relaxed(clkevt->period, t->base + t->load);

writel_relaxed(clkevt->period, t->base + t->counter);
while (readl_relaxed(pend) & WP_TCRR)
cpu_relax();
writel_relaxed(clkevt->period, t->base + t->counter);

writel_relaxed(OMAP_TIMER_CTRL_AR | OMAP_TIMER_CTRL_ST,
t->base + t->ctrl);
while (readl_relaxed(pend) & WP_TCLR)
cpu_relax();
writel_relaxed(OMAP_TIMER_CTRL_AR | OMAP_TIMER_CTRL_ST,
t->base + t->ctrl);

return 0;
}
@@ -530,17 +531,17 @@ static void omap_clockevent_unidle(struct clock_event_device *evt)
writel_relaxed(OMAP_TIMER_INT_OVERFLOW, t->base + t->wakeup);
}

static int __init dmtimer_clockevent_init(struct device_node *np)
static int __init dmtimer_clkevt_init_common(struct dmtimer_clockevent *clkevt,
struct device_node *np,
unsigned int features,
const struct cpumask *cpumask,
const char *name,
int rating)
{
struct dmtimer_clockevent *clkevt;
struct clock_event_device *dev;
struct dmtimer_systimer *t;
int error;

clkevt = kzalloc(sizeof(*clkevt), GFP_KERNEL);
if (!clkevt)
return -ENOMEM;

t = &clkevt->t;
dev = &clkevt->dev;

@@ -548,24 +549,23 @@ static int __init dmtimer_clockevent_init(struct device_node *np)
* We mostly use cpuidle_coupled with ARM local timers for runtime,
* so there's probably no use for CLOCK_EVT_FEAT_DYNIRQ here.
*/
dev->features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT;
dev->rating = 300;
dev->features = features;
dev->rating = rating;
dev->set_next_event = dmtimer_set_next_event;
dev->set_state_shutdown = dmtimer_clockevent_shutdown;
dev->set_state_periodic = dmtimer_set_periodic;
dev->set_state_oneshot = dmtimer_clockevent_shutdown;
dev->set_state_oneshot_stopped = dmtimer_clockevent_shutdown;
dev->tick_resume = dmtimer_clockevent_shutdown;
dev->cpumask = cpu_possible_mask;
dev->cpumask = cpumask;

dev->irq = irq_of_parse_and_map(np, 0);
if (!dev->irq) {
error = -ENXIO;
goto err_out_free;
}
if (!dev->irq)
return -ENXIO;

error = dmtimer_systimer_setup(np, &clkevt->t);
if (error)
goto err_out_free;
return error;

clkevt->period = 0xffffffff - DIV_ROUND_CLOSEST(t->rate, HZ);

@@ -577,38 +577,132 @@ static int __init dmtimer_clockevent_init(struct device_node *np)
writel_relaxed(OMAP_TIMER_CTRL_POSTED, t->base + t->ifctrl);

error = request_irq(dev->irq, dmtimer_clockevent_interrupt,
IRQF_TIMER, "clockevent", clkevt);
IRQF_TIMER, name, clkevt);
if (error)
goto err_out_unmap;

writel_relaxed(OMAP_TIMER_INT_OVERFLOW, t->base + t->irq_ena);
writel_relaxed(OMAP_TIMER_INT_OVERFLOW, t->base + t->wakeup);

pr_info("TI gptimer clockevent: %s%lu Hz at %pOF\n",
of_find_property(np, "ti,timer-alwon", NULL) ?
pr_info("TI gptimer %s: %s%lu Hz at %pOF\n",
name, of_find_property(np, "ti,timer-alwon", NULL) ?
"always-on " : "", t->rate, np->parent);

clockevents_config_and_register(dev, t->rate,
3, /* Timer internal resynch latency */
0xffffffff);

if (of_machine_is_compatible("ti,am33xx") ||
of_machine_is_compatible("ti,am43")) {
dev->suspend = omap_clockevent_idle;
dev->resume = omap_clockevent_unidle;
}

return 0;

err_out_unmap:
iounmap(t->base);

return error;
}

static int __init dmtimer_clockevent_init(struct device_node *np)
{
struct dmtimer_clockevent *clkevt;
int error;

clkevt = kzalloc(sizeof(*clkevt), GFP_KERNEL);
if (!clkevt)
return -ENOMEM;

error = dmtimer_clkevt_init_common(clkevt, np,
CLOCK_EVT_FEAT_PERIODIC |
CLOCK_EVT_FEAT_ONESHOT,
cpu_possible_mask, "clockevent",
300);
if (error)
goto err_out_free;

clockevents_config_and_register(&clkevt->dev, clkevt->t.rate,
3, /* Timer internal resync latency */
0xffffffff);

if (of_machine_is_compatible("ti,am33xx") ||
of_machine_is_compatible("ti,am43")) {
clkevt->dev.suspend = omap_clockevent_idle;
clkevt->dev.resume = omap_clockevent_unidle;
}

return 0;

err_out_free:
kfree(clkevt);

return error;
}

/* Dmtimer as percpu timer. See dra7 ARM architected timer wrap erratum i940 */
static DEFINE_PER_CPU(struct dmtimer_clockevent, dmtimer_percpu_timer);

static int __init dmtimer_percpu_timer_init(struct device_node *np, int cpu)
{
struct dmtimer_clockevent *clkevt;
int error;

if (!cpu_possible(cpu))
return -EINVAL;

if (!of_property_read_bool(np->parent, "ti,no-reset-on-init") ||
!of_property_read_bool(np->parent, "ti,no-idle"))
pr_warn("Incomplete dtb for percpu dmtimer %pOF\n", np->parent);

clkevt = per_cpu_ptr(&dmtimer_percpu_timer, cpu);

error = dmtimer_clkevt_init_common(clkevt, np, CLOCK_EVT_FEAT_ONESHOT,
cpumask_of(cpu), "percpu-dmtimer",
500);
if (error)
return error;

return 0;
}

/* See TRM for timer internal resynch latency */
static int omap_dmtimer_starting_cpu(unsigned int cpu)
{
struct dmtimer_clockevent *clkevt = per_cpu_ptr(&dmtimer_percpu_timer, cpu);
struct clock_event_device *dev = &clkevt->dev;
struct dmtimer_systimer *t = &clkevt->t;

clockevents_config_and_register(dev, t->rate, 3, ULONG_MAX);
irq_force_affinity(dev->irq, cpumask_of(cpu));

return 0;
}

static int __init dmtimer_percpu_timer_startup(void)
{
struct dmtimer_clockevent *clkevt = per_cpu_ptr(&dmtimer_percpu_timer, 0);
struct dmtimer_systimer *t = &clkevt->t;

if (t->sysc) {
cpuhp_setup_state(CPUHP_AP_TI_GP_TIMER_STARTING,
"clockevents/omap/gptimer:starting",
omap_dmtimer_starting_cpu, NULL);
}

return 0;
}
subsys_initcall(dmtimer_percpu_timer_startup);

static int __init dmtimer_percpu_quirk_init(struct device_node *np, u32 pa)
{
struct device_node *arm_timer;

arm_timer = of_find_compatible_node(NULL, NULL, "arm,armv7-timer");
if (of_device_is_available(arm_timer)) {
pr_warn_once("ARM architected timer wrap issue i940 detected\n");
return 0;
}

if (pa == 0x48034000) /* dra7 dmtimer3 */
return dmtimer_percpu_timer_init(np, 0);
else if (pa == 0x48036000) /* dra7 dmtimer4 */
return dmtimer_percpu_timer_init(np, 1);

return 0;
}

/* Clocksource */
static struct dmtimer_clocksource *
to_dmtimer_clocksource(struct clocksource *cs)
@@ -742,6 +836,9 @@ static int __init dmtimer_systimer_init(struct device_node *np)
if (clockevent == pa)
return dmtimer_clockevent_init(np);

if (of_machine_is_compatible("ti,dra7"))
return dmtimer_percpu_quirk_init(np, pa);

return 0;
}

@@ -136,7 +136,7 @@ static int __init pit_clockevent_init(unsigned long rate, int irq)
/*
* The value for the LDVAL register trigger is calculated as:
* LDVAL trigger = (period / clock period) - 1
* The pit is a 32-bit down count timer, when the conter value
* The pit is a 32-bit down count timer, when the counter value
* reaches 0, it will generate an interrupt, thus the minimal
* LDVAL trigger value is 1. And then the min_delta is
* minimal LDVAL trigger value + 1, and the max_delta is full 32-bit.

@@ -70,7 +70,7 @@ struct module;
* @mark_unstable: Optional function to inform the clocksource driver that
* the watchdog marked the clocksource unstable
* @tick_stable: Optional function called periodically from the watchdog
* code to provide stable syncrhonization points
* code to provide stable synchronization points
* @wd_list: List head to enqueue into the watchdog list (internal)
* @cs_last: Last clocksource value for clocksource watchdog
* @wd_last: Last watchdog value corresponding to @cs_last

@@ -135,6 +135,7 @@ enum cpuhp_state {
CPUHP_AP_RISCV_TIMER_STARTING,
CPUHP_AP_CLINT_TIMER_STARTING,
CPUHP_AP_CSKY_TIMER_STARTING,
CPUHP_AP_TI_GP_TIMER_STARTING,
CPUHP_AP_HYPERV_TIMER_STARTING,
CPUHP_AP_KVM_STARTING,
CPUHP_AP_KVM_ARM_VGIC_INIT_STARTING,

@@ -124,7 +124,7 @@ extern u64 timecounter_read(struct timecounter *tc);
* This allows conversion of cycle counter values which were generated
* in the past.
*/
extern u64 timecounter_cyc2time(struct timecounter *tc,
extern u64 timecounter_cyc2time(const struct timecounter *tc,
u64 cycle_tstamp);

#endif

@@ -133,7 +133,7 @@

/*
* kernel variables
* Note: maximum error = NTP synch distance = dispersion + delay / 2;
* Note: maximum error = NTP sync distance = dispersion + delay / 2;
* estimated error = NTP dispersion.
*/
extern unsigned long tick_usec; /* USER_HZ period (usec) */

@@ -2,13 +2,13 @@
/*
* Alarmtimer interface
*
* This interface provides a timer which is similarto hrtimers,
* This interface provides a timer which is similar to hrtimers,
* but triggers a RTC alarm if the box is suspend.
*
* This interface is influenced by the Android RTC Alarm timer
* interface.
*
* Copyright (C) 2010 IBM Corperation
* Copyright (C) 2010 IBM Corporation
*
* Author: John Stultz <john.stultz@linaro.org>
*/
@@ -811,7 +811,7 @@ static long __sched alarm_timer_nsleep_restart(struct restart_block *restart)
/**
* alarm_timer_nsleep - alarmtimer nanosleep
* @which_clock: clockid
* @flags: determins abstime or relative
* @flags: determines abstime or relative
* @tsreq: requested sleep time (abs or rel)
*
* Handles clock_nanosleep calls against _ALARM clockids

@@ -38,7 +38,7 @@
* calculated mult and shift factors. This guarantees that no 64bit
* overflow happens when the input value of the conversion is
* multiplied with the calculated mult factor. Larger ranges may
* reduce the conversion accuracy by chosing smaller mult and shift
* reduce the conversion accuracy by choosing smaller mult and shift
* factors.
*/
void
@@ -518,7 +518,7 @@ static void clocksource_suspend_select(bool fallback)
* the suspend time when resuming system.
*
* This function is called late in the suspend process from timekeeping_suspend(),
* that means processes are freezed, non-boot cpus and interrupts are disabled
* that means processes are frozen, non-boot cpus and interrupts are disabled
* now. It is therefore possible to start the suspend timer without taking the
* clocksource mutex.
*/

@@ -683,7 +683,7 @@ hrtimer_force_reprogram(struct hrtimer_cpu_base *cpu_base, int skip_equal)
* T1 is removed, so this code is called and would reprogram
* the hardware to 5s from now. Any hrtimer_start after that
* will not reprogram the hardware due to hang_detected being
* set. So we'd effectivly block all timers until the T2 event
* set. So we'd effectively block all timers until the T2 event
* fires.
*/
if (!__hrtimer_hres_active(cpu_base) || cpu_base->hang_detected)
@@ -1019,7 +1019,7 @@ static void __remove_hrtimer(struct hrtimer *timer,
* cpu_base->next_timer. This happens when we remove the first
* timer on a remote cpu. No harm as we never dereference
* cpu_base->next_timer. So the worst thing what can happen is
* an superflous call to hrtimer_force_reprogram() on the
* an superfluous call to hrtimer_force_reprogram() on the
* remote cpu later on if the same timer gets enqueued again.
*/
if (reprogram && timer == cpu_base->next_timer)
@@ -1212,7 +1212,7 @@ static void hrtimer_cpu_base_unlock_expiry(struct hrtimer_cpu_base *base)
* The counterpart to hrtimer_cancel_wait_running().
*
* If there is a waiter for cpu_base->expiry_lock, then it was waiting for
* the timer callback to finish. Drop expiry_lock and reaquire it. That
* the timer callback to finish. Drop expiry_lock and reacquire it. That
* allows the waiter to acquire the lock and make progress.
*/
static void hrtimer_sync_wait_running(struct hrtimer_cpu_base *cpu_base,
@@ -1398,7 +1398,7 @@ static void __hrtimer_init(struct hrtimer *timer, clockid_t clock_id,
int base;

/*
* On PREEMPT_RT enabled kernels hrtimers which are not explicitely
* On PREEMPT_RT enabled kernels hrtimers which are not explicitly
* marked for hard interrupt expiry mode are moved into soft
* interrupt context for latency reasons and because the callbacks
* can invoke functions which might sleep on RT, e.g. spin_lock().
@@ -1430,7 +1430,7 @@ static void __hrtimer_init(struct hrtimer *timer, clockid_t clock_id,
* hrtimer_init - initialize a timer to the given clock
* @timer: the timer to be initialized
* @clock_id: the clock to be used
* @mode: The modes which are relevant for intitialization:
* @mode: The modes which are relevant for initialization:
* HRTIMER_MODE_ABS, HRTIMER_MODE_REL, HRTIMER_MODE_ABS_SOFT,
* HRTIMER_MODE_REL_SOFT
*
@@ -1487,7 +1487,7 @@ EXPORT_SYMBOL_GPL(hrtimer_active);
* insufficient for that.
*
* The sequence numbers are required because otherwise we could still observe
* a false negative if the read side got smeared over multiple consequtive
* a false negative if the read side got smeared over multiple consecutive
* __run_hrtimer() invocations.
*/

@@ -1588,7 +1588,7 @@ static void __hrtimer_run_queues(struct hrtimer_cpu_base *cpu_base, ktime_t now,
* minimizing wakeups, not running timers at the
* earliest interrupt after their soft expiration.
* This allows us to avoid using a Priority Search
* Tree, which can answer a stabbing querry for
* Tree, which can answer a stabbing query for
* overlapping intervals and instead use the simple
* BST we already have.
* We don't add extra wakeups by delaying timers that
@@ -1822,7 +1822,7 @@ static void __hrtimer_init_sleeper(struct hrtimer_sleeper *sl,
clockid_t clock_id, enum hrtimer_mode mode)
{
/*
* On PREEMPT_RT enabled kernels hrtimers which are not explicitely
* On PREEMPT_RT enabled kernels hrtimers which are not explicitly
* marked for hard interrupt expiry mode are moved into soft
* interrupt context either for latency reasons or because the
* hrtimer callback takes regular spinlocks or invokes other
@@ -1835,7 +1835,7 @@ static void __hrtimer_init_sleeper(struct hrtimer_sleeper *sl,
* the same CPU. That causes a latency spike due to the wakeup of
* a gazillion threads.
*
* OTOH, priviledged real-time user space applications rely on the
* OTOH, privileged real-time user space applications rely on the
* low latency of hard interrupt wakeups. If the current task is in
* a real-time scheduling class, mark the mode for hard interrupt
* expiry.

@@ -44,7 +44,7 @@ static u64 jiffies_read(struct clocksource *cs)
* the timer interrupt frequency HZ and it suffers
* inaccuracies caused by missed or lost timer
* interrupts and the inability for the timer
* interrupt hardware to accuratly tick at the
* interrupt hardware to accurately tick at the
* requested HZ value. It is also not recommended
* for "tick-less" systems.
*/

@@ -544,7 +544,7 @@ static inline bool rtc_tv_nsec_ok(unsigned long set_offset_nsec,
struct timespec64 *to_set,
const struct timespec64 *now)
{
/* Allowed error in tv_nsec, arbitarily set to 5 jiffies in ns. */
/* Allowed error in tv_nsec, arbitrarily set to 5 jiffies in ns. */
const unsigned long TIME_SET_NSEC_FUZZ = TICK_NSEC * 5;
struct timespec64 delay = {.tv_sec = -1,
.tv_nsec = set_offset_nsec};

@@ -279,7 +279,7 @@ void thread_group_sample_cputime(struct task_struct *tsk, u64 *samples)
* @tsk: Task for which cputime needs to be started
* @samples: Storage for time samples
*
* The thread group cputime accouting is avoided when there are no posix
* The thread group cputime accounting is avoided when there are no posix
* CPU timers armed. Before starting a timer it's required to check whether
* the time accounting is active. If not, a full update of the atomic
* accounting store needs to be done and the accounting enabled.
@@ -390,7 +390,7 @@ static int posix_cpu_timer_create(struct k_itimer *new_timer)
/*
* If posix timer expiry is handled in task work context then
* timer::it_lock can be taken without disabling interrupts as all
* other locking happens in task context. This requires a seperate
* other locking happens in task context. This requires a separate
* lock class key otherwise regular posix timer expiry would record
* the lock class being taken in interrupt context and generate a
* false positive warning.
@@ -1216,7 +1216,7 @@ static void handle_posix_cpu_timers(struct task_struct *tsk)
check_process_timers(tsk, &firing);

/*
* The above timer checks have updated the exipry cache and
* The above timer checks have updated the expiry cache and
* because nothing can have queued or modified timers after
* sighand lock was taken above it is guaranteed to be
* consistent. So the next timer interrupt fastpath check

@@ -1191,8 +1191,8 @@ SYSCALL_DEFINE2(clock_adjtime32, clockid_t, which_clock,

err = do_clock_adjtime(which_clock, &ktx);

if (err >= 0)
err = put_old_timex32(utp, &ktx);
if (err >= 0 && put_old_timex32(utp, &ktx))
return -EFAULT;

return err;
}

@@ -21,7 +21,6 @@

#define DEBUGFS_FILENAME "udelay_test"

static DEFINE_MUTEX(udelay_test_lock);
static struct dentry *udelay_test_debugfs_file;
static int udelay_test_usecs;
static int udelay_test_iterations = DEFAULT_ITERATIONS;

@@ -138,8 +137,8 @@ static const struct file_operations udelay_test_debugfs_ops = {
static int __init udelay_test_init(void)
{
mutex_lock(&udelay_test_lock);
udelay_test_debugfs_file = debugfs_create_file(DEBUGFS_FILENAME,
S_IRUSR, NULL, NULL, &udelay_test_debugfs_ops);
debugfs_create_file(DEBUGFS_FILENAME, S_IRUSR, NULL, NULL,
&udelay_test_debugfs_ops);
mutex_unlock(&udelay_test_lock);

return 0;
@@ -150,7 +149,7 @@ module_init(udelay_test_init);
static void __exit udelay_test_exit(void)
{
mutex_lock(&udelay_test_lock);
debugfs_remove(udelay_test_debugfs_file);
debugfs_remove(debugfs_lookup(DEBUGFS_FILENAME, NULL));
mutex_unlock(&udelay_test_lock);
}

|
||||
* reasons.
|
||||
*
|
||||
* Each caller tries to arm the hrtimer on its own CPU, but if the
|
||||
* hrtimer callbback function is currently running, then
|
||||
* hrtimer callback function is currently running, then
|
||||
* hrtimer_start() cannot move it and the timer stays on the CPU on
|
||||
* which it is assigned at the moment.
|
||||
*
|
||||
|
@ -107,6 +107,19 @@ void tick_install_broadcast_device(struct clock_event_device *dev)
|
||||
tick_broadcast_device.evtdev = dev;
|
||||
if (!cpumask_empty(tick_broadcast_mask))
|
||||
tick_broadcast_start_periodic(dev);
|
||||
|
||||
if (!(dev->features & CLOCK_EVT_FEAT_ONESHOT))
|
||||
return;
|
||||
|
||||
/*
|
||||
* If the system already runs in oneshot mode, switch the newly
|
||||
* registered broadcast device to oneshot mode explicitly.
|
||||
*/
|
||||
if (tick_broadcast_oneshot_active()) {
|
||||
tick_broadcast_switch_to_oneshot();
|
||||
return;
|
||||
}
|
||||
|
||||
/*
|
||||
* Inform all cpus about this. We might be in a situation
|
||||
* where we did not switch to oneshot mode because the per cpu
|
||||
@ -115,8 +128,7 @@ void tick_install_broadcast_device(struct clock_event_device *dev)
|
||||
* notification the systems stays stuck in periodic mode
|
||||
* forever.
|
||||
*/
|
||||
if (dev->features & CLOCK_EVT_FEAT_ONESHOT)
|
||||
tick_clock_notify();
|
||||
tick_clock_notify();
|
||||
}
|
||||
|
||||
/*
|
||||
@ -157,7 +169,7 @@ static void tick_device_setup_broadcast_func(struct clock_event_device *dev)
|
||||
}
|
||||
|
||||
/*
|
||||
* Check, if the device is disfunctional and a place holder, which
|
||||
* Check, if the device is dysfunctional and a placeholder, which
|
||||
* needs to be handled by the broadcast device.
|
||||
*/
|
||||
int tick_device_uses_broadcast(struct clock_event_device *dev, int cpu)
|
||||
@ -391,7 +403,7 @@ void tick_broadcast_control(enum tick_broadcast_mode mode)
|
||||
* - the broadcast device exists
|
||||
* - the broadcast device is not a hrtimer based one
|
||||
* - the broadcast device is in periodic mode to
|
||||
* avoid a hickup during switch to oneshot mode
|
||||
* avoid a hiccup during switch to oneshot mode
|
||||
*/
|
||||
if (bc && !(bc->features & CLOCK_EVT_FEAT_HRTIMER) &&
|
||||
tick_broadcast_device.mode == TICKDEV_MODE_PERIODIC)
|
||||
|
@ -348,12 +348,7 @@ void tick_check_new_device(struct clock_event_device *newdev)
|
||||
td = &per_cpu(tick_cpu_device, cpu);
|
||||
curdev = td->evtdev;
|
||||
|
||||
/* cpu local device ? */
|
||||
if (!tick_check_percpu(curdev, newdev, cpu))
|
||||
goto out_bc;
|
||||
|
||||
/* Preference decision */
|
||||
if (!tick_check_preferred(curdev, newdev))
|
||||
if (!tick_check_replacement(curdev, newdev))
|
||||
goto out_bc;
|
||||
|
||||
if (!try_module_get(newdev->owner))
|
||||
|
@ -45,7 +45,7 @@ int tick_program_event(ktime_t expires, int force)
|
||||
}
|
||||
|
||||
/**
|
||||
* tick_resume_onshot - resume oneshot mode
|
||||
* tick_resume_oneshot - resume oneshot mode
|
||||
*/
|
||||
void tick_resume_oneshot(void)
|
||||
{
|
||||
|
@ -751,7 +751,7 @@ static ktime_t tick_nohz_next_event(struct tick_sched *ts, int cpu)
|
||||
* Aside of that check whether the local timer softirq is
|
||||
* pending. If so its a bad idea to call get_next_timer_interrupt()
|
||||
* because there is an already expired timer, so it will request
|
||||
* immeditate expiry, which rearms the hardware timer with a
|
||||
* immediate expiry, which rearms the hardware timer with a
|
||||
* minimal delta which brings us back to this place
|
||||
* immediately. Lather, rinse and repeat...
|
||||
*/
|
||||
|
@ -29,7 +29,7 @@ enum tick_nohz_mode {
|
||||
* @inidle: Indicator that the CPU is in the tick idle mode
|
||||
* @tick_stopped: Indicator that the idle tick has been stopped
|
||||
* @idle_active: Indicator that the CPU is actively in the tick idle mode;
|
||||
* it is resetted during irq handling phases.
|
||||
* it is reset during irq handling phases.
|
||||
* @do_timer_lst: CPU was the last one doing do_timer before going idle
|
||||
* @got_idle_tick: Tick timer function has run with @inidle set
|
||||
* @last_tick: Store the last tick expiry time when the tick
|
||||
|
@ -571,7 +571,7 @@ EXPORT_SYMBOL(__usecs_to_jiffies);
|
||||
/*
|
||||
* The TICK_NSEC - 1 rounds up the value to the next resolution. Note
|
||||
* that a remainder subtract here would not do the right thing as the
|
||||
* resolution values don't fall on second boundries. I.e. the line:
|
||||
* resolution values don't fall on second boundaries. I.e. the line:
|
||||
* nsec -= nsec % TICK_NSEC; is NOT a correct resolution rounding.
|
||||
* Note that due to the small error in the multiplier here, this
|
||||
* rounding is incorrect for sufficiently large values of tv_nsec, but
|
||||
|
@ -76,7 +76,7 @@ static u64 cc_cyc2ns_backwards(const struct cyclecounter *cc,
|
||||
return ns;
|
||||
}
|
||||
|
||||
u64 timecounter_cyc2time(struct timecounter *tc,
|
||||
u64 timecounter_cyc2time(const struct timecounter *tc,
|
||||
u64 cycle_tstamp)
|
||||
{
|
||||
u64 delta = (cycle_tstamp - tc->cycle_last) & tc->cc->mask;
|
||||
|
@ -596,14 +596,14 @@ EXPORT_SYMBOL_GPL(ktime_get_real_fast_ns);
|
||||
* careful cache layout of the timekeeper because the sequence count and
|
||||
* struct tk_read_base would then need two cache lines instead of one.
|
||||
*
|
||||
* Access to the time keeper clock source is disabled accross the innermost
|
||||
* Access to the time keeper clock source is disabled across the innermost
|
||||
* steps of suspend/resume. The accessors still work, but the timestamps
|
||||
* are frozen until time keeping is resumed which happens very early.
|
||||
*
|
||||
* For regular suspend/resume there is no observable difference vs. sched
|
||||
* clock, but it might affect some of the nasty low level debug printks.
|
||||
*
|
||||
* OTOH, access to sched clock is not guaranteed accross suspend/resume on
|
||||
* OTOH, access to sched clock is not guaranteed across suspend/resume on
|
||||
* all systems either so it depends on the hardware in use.
|
||||
*
|
||||
* If that turns out to be a real problem then this could be mitigated by
|
||||
@ -899,7 +899,7 @@ ktime_t ktime_get_coarse_with_offset(enum tk_offsets offs)
|
||||
EXPORT_SYMBOL_GPL(ktime_get_coarse_with_offset);
|
||||
|
||||
/**
|
||||
* ktime_mono_to_any() - convert mononotic time to any other time
|
||||
* ktime_mono_to_any() - convert monotonic time to any other time
|
||||
* @tmono: time to convert.
|
||||
* @offs: which offset to use
|
||||
*/
|
||||
@ -1427,35 +1427,45 @@ static void __timekeeping_set_tai_offset(struct timekeeper *tk, s32 tai_offset)
|
||||
static int change_clocksource(void *data)
|
||||
{
|
||||
struct timekeeper *tk = &tk_core.timekeeper;
|
||||
struct clocksource *new, *old;
|
||||
struct clocksource *new, *old = NULL;
|
||||
unsigned long flags;
|
||||
bool change = false;
|
||||
|
||||
new = (struct clocksource *) data;
|
||||
|
||||
raw_spin_lock_irqsave(&timekeeper_lock, flags);
|
||||
write_seqcount_begin(&tk_core.seq);
|
||||
|
||||
timekeeping_forward_now(tk);
|
||||
/*
|
||||
* If the cs is in module, get a module reference. Succeeds
|
||||
* for built-in code (owner == NULL) as well.
|
||||
*/
|
||||
if (try_module_get(new->owner)) {
|
||||
if (!new->enable || new->enable(new) == 0) {
|
||||
old = tk->tkr_mono.clock;
|
||||
tk_setup_internals(tk, new);
|
||||
if (old->disable)
|
||||
old->disable(old);
|
||||
module_put(old->owner);
|
||||
} else {
|
||||
if (!new->enable || new->enable(new) == 0)
|
||||
change = true;
|
||||
else
|
||||
module_put(new->owner);
|
||||
}
|
||||
}
|
||||
|
||||
raw_spin_lock_irqsave(&timekeeper_lock, flags);
|
||||
write_seqcount_begin(&tk_core.seq);
|
||||
|
||||
timekeeping_forward_now(tk);
|
||||
|
||||
if (change) {
|
||||
old = tk->tkr_mono.clock;
|
||||
tk_setup_internals(tk, new);
|
||||
}
|
||||
|
||||
timekeeping_update(tk, TK_CLEAR_NTP | TK_MIRROR | TK_CLOCK_WAS_SET);
|
||||
|
||||
write_seqcount_end(&tk_core.seq);
|
||||
raw_spin_unlock_irqrestore(&timekeeper_lock, flags);
|
||||
|
||||
if (old) {
|
||||
if (old->disable)
|
||||
old->disable(old);
|
||||
|
||||
module_put(old->owner);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
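The change_clocksource() rework above is what enables runtime power management on clocksource switch: the calls that may sleep (module refcount, enable/disable) are moved outside the seqcount-protected region, and only the pointer swap happens under the lock. A minimal pthread-based C sketch of that shape, with illustrative names standing in for the timekeeper internals:

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct source {
	int (*enable)(struct source *);
	void (*disable)(struct source *);
};

static pthread_mutex_t hot_lock = PTHREAD_MUTEX_INITIALIZER;
static struct source *cur_source;

int swap_source(struct source *new)
{
	struct source *old = NULL;
	bool change = false;

	if (!new->enable || new->enable(new) == 0) /* may sleep: outside the lock */
		change = true;

	pthread_mutex_lock(&hot_lock);
	if (change) {
		old = cur_source;
		cur_source = new;                  /* the only work under the lock */
	}
	pthread_mutex_unlock(&hot_lock);

	if (old && old->disable)
		old->disable(old);                 /* may sleep: after unlock */

	return change ? 0 : -1;
}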
@@ -1948,7 +1958,7 @@ static __always_inline void timekeeping_apply_adjustment(struct timekeeper *tk,
* xtime_nsec_1 = offset + xtime_nsec_2
* Which gives us:
* xtime_nsec_2 = xtime_nsec_1 - offset
* Which simplfies to:
* Which simplifies to:
* xtime_nsec -= offset
*/
if ((mult_adj > 0) && (tk->tkr_mono.mult + mult_adj < mult_adj)) {
@@ -2336,7 +2346,7 @@ static int timekeeping_validate_timex(const struct __kernel_timex *txc)

/*
* Validate if a timespec/timeval used to inject a time
* offset is valid. Offsets can be postive or negative, so
* offset is valid. Offsets can be positive or negative, so
* we don't check tv_sec. The value of the timeval/timespec
* is the sum of its fields,but *NOTE*:
* The field tv_usec/tv_nsec must always be non-negative and

@@ -894,7 +894,7 @@ static inline void forward_timer_base(struct timer_base *base)
/*
* No need to forward if we are close enough below jiffies.
* Also while executing timers, base->clk is 1 offset ahead
* of jiffies to avoid endless requeuing to current jffies.
* of jiffies to avoid endless requeuing to current jiffies.
*/
if ((long)(jnow - base->clk) < 1)
return;
@@ -1271,7 +1271,7 @@ static inline void timer_base_unlock_expiry(struct timer_base *base)
* The counterpart to del_timer_wait_running().
*
* If there is a waiter for base->expiry_lock, then it was waiting for the
* timer callback to finish. Drop expiry_lock and reaquire it. That allows
* timer callback to finish. Drop expiry_lock and reacquire it. That allows
* the waiter to acquire the lock and make progress.
*/
static void timer_sync_wait_running(struct timer_base *base)

@@ -108,7 +108,7 @@ void update_vsyscall(struct timekeeper *tk)

/*
* If the current clocksource is not VDSO capable, then spare the
* update of the high reolution parts.
* update of the high resolution parts.
*/
if (clock_mode != VDSO_CLOCKMODE_NONE)
update_vdso_data(vdata, tk);

@@ -3,7 +3,7 @@
* (C) Copyright IBM 2012
* Licensed under the GPLv2
*
* NOTE: This is a meta-test which quickly changes the clocksourc and
* NOTE: This is a meta-test which quickly changes the clocksource and
* then uses other tests to detect problems. Thus this test requires
* that the inconsistency-check and nanosleep tests be present in the
* same directory it is run from.
@@ -134,7 +134,7 @@ int main(int argv, char **argc)
return -1;
}

/* Check everything is sane before we start switching asyncrhonously */
/* Check everything is sane before we start switching asynchronously */
for (i = 0; i < count; i++) {
printf("Validating clocksource %s\n", clocksource_list[i]);
if (change_clocksource(clocksource_list[i])) {

@@ -5,7 +5,7 @@
* Licensed under the GPLv2
*
* This test signals the kernel to insert a leap second
* every day at midnight GMT. This allows for stessing the
* every day at midnight GMT. This allows for stressing the
* kernel's leap-second behavior, as well as how well applications
* handle the leap-second discontinuity.
*

@@ -4,10 +4,10 @@
* (C) Copyright 2013, 2015 Linaro Limited
* Licensed under the GPL
*
* This test demonstrates leapsecond deadlock that is possibe
* This test demonstrates leapsecond deadlock that is possible
* on kernels from 2.6.26 to 3.3.
*
* WARNING: THIS WILL LIKELY HARDHANG SYSTEMS AND MAY LOSE DATA
* WARNING: THIS WILL LIKELY HARD HANG SYSTEMS AND MAY LOSE DATA
* RUN AT YOUR OWN RISK!
* To build:
* $ gcc leapcrash.c -o leapcrash -lrt

@@ -76,7 +76,7 @@ void checklist(struct timespec *list, int size)

/* The shared thread shares a global list
* that each thread fills while holding the lock.
* This stresses clock syncronization across cpus.
* This stresses clock synchronization across cpus.
*/
void *shared_thread(void *arg)
{