Mirror of https://github.com/torvalds/linux.git (synced 2024-11-11 06:31:49 +00:00)
Power management updates for 5.3-rc1
 - Improve the handling of shared ACPI power resources in the PCI bus
   type layer (Mika Westerberg).

 - Make the PCI layer take link delays required by the PCIe spec into
   account as appropriate and avoid polling devices in D3cold for PME
   (Mika Westerberg).

 - Fix some corner case issues in ACPI device power management and in
   the PCI bus type layer, optimize and clean up the handling of
   runtime-suspended PCI devices during system-wide transitions to
   sleep states (Rafael Wysocki).

 - Rework hibernation handling in the ACPI core and the PCI bus type to
   resume runtime-suspended devices before hibernation (which allows
   some functional problems to be avoided) and fix some ACPI power
   management issues related to hibernation (Rafael Wysocki).

 - Extend the operating performance points (OPP) framework to support a
   wider range of devices (Rajendra Nayak, Stephen Boyd).

 - Fix issues related to genpd_virt_devs and issues with platforms
   using the set_opp() callback in the OPP framework (Viresh Kumar,
   Dmitry Osipenko).

 - Add new cpufreq driver for Raspberry Pi (Nicolas Saenz Julienne).

 - Add new cpufreq driver for imx8m and imx7d chips (Leonard Crestez).

 - Fix and clean up the pcc-cpufreq, brcmstb-avs-cpufreq, s5pv210, and
   armada-37xx cpufreq drivers (David Arcari, Florian Fainelli, Paweł
   Chmiel, YueHaibing).

 - Clean up and fix the cpufreq core (Viresh Kumar, Daniel Lezcano).

 - Fix minor issue in the ACPI system sleep support code and export one
   function from it (Lenny Szubowicz, Dexuan Cui).

 - Clean up assorted pieces of PM code and documentation (Kefeng Wang,
   Andy Shevchenko, Bart Van Assche, Greg Kroah-Hartman, Fuqian Huang,
   Geert Uytterhoeven, Mathieu Malaterre, Rafael Wysocki).

 - Update the pm-graph utility to v5.4 (Todd Brandt).

 - Fix and clean up the cpupower utility (Abhishek Goel, Nick Black).
-----BEGIN PGP SIGNATURE-----

iQJGBAABCAAwFiEE4fcc61cGeeHD/fCwgsRv/nhiVHEFAl0jK18SHHJqd0Byand5
c29ja2kubmV0AAoJEILEb/54YlRxgEAP/RbPe71Y9ufKu64L3EtgV6mS9iuLEhux
/Ad9laLNeM1b0oceT3QxGk7xQCacnZcBlcaqXVWI4NRsn4RBZp1cYZngpgJ9DP6E
ONr8hzyzDOMVReba3XJIF8H+WoTKjywMYtFutjdx6dRe2ZJLutqnuZ0JbH1YSSK7
IxOt0mJVALf2M4Zz7F17d+n3yGE/4xAPBVbj/rBRcTEsGYlR/Hoxs7iF6EBau7fy
R5drUH6XSrWk8adc+z7l3BTGqMMYj9deRSfAWB3wpM4YK7Fv7msX/amBoGINkdn6
xP/ZcrHvhKKzE89MS8OUGP4rGVwq+7tu6mktnYL/tpKgutJqqx5LVvrLsGDSWr+W
/aJExN8Eb4Jh98C6vog3XUJoqBxkVGbU8qoCBU3jlFsaznFEWjW9IKhBHs5CIaqz
MXte6AsJ8lvFzxILjvx0m2206wNpRJRXYLX3a/BSBxa4OgOESjIpBTmbPfOwbxwj
8z9hIVlDTTDtnF6BEyDQr1fjPi3Mxl7ibGnoqRrJm36VKBy9VZNNwG/0Y2oSvm6k
Es8CiTWA3A/46dCZxGr18/9Vbfxn1Yvg9QZ1lCE5Fqij+0F2yRApbHZgUro+1rji
6J8OWC5r5JccdKuGHh4RH/asMFhD0cAR/gUsRzS4dz/wz2jVIN1FstdD1aKN5GBy
d0lchx/AKR5H
=aBN3
-----END PGP SIGNATURE-----

Merge tag 'pm-5.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:

 "These update PCI and ACPI power management (improved handling of ACPI
  power resources and PCIe link delays, fixes related to corner cases,
  hibernation handling rework), fix and extend the operating
  performance points (OPP) framework, add new cpufreq drivers for
  Raspberry Pi and imx8m chips, update some other cpufreq drivers,
  clean up assorted pieces of PM code and documentation and update
  tools."

* tag 'pm-5.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (57 commits)
  ACPI: PM: Make acpi_sleep_state_supported() non-static
  PM: sleep: Drop dev_pm_skip_next_resume_phases()
  ACPI: PM: Unexport acpi_device_get_power()
  Documentation: ABI: power: Add missing newline at end of file
  ACPI: PM: Drop unused function and function header
  ACPI: PM: Introduce "poweroff" callbacks for ACPI PM domain and LPSS
  ACPI: PM: Simplify and fix PM domain hibernation callbacks
  PCI: PM: Simplify bus-level hibernation callbacks
  PM: ACPI/PCI: Resume all devices during hibernation
  cpufreq: Avoid calling cpufreq_verify_current_freq() from handle_update()
  cpufreq: Consolidate cpufreq_update_current_freq() and __cpufreq_get()
  kernel: power: swap: use kzalloc() instead of kmalloc() followed by memset()
  cpufreq: Don't skip frequency validation for has_target() drivers
  PCI: PM/ACPI: Refresh all stale power state data in pci_pm_complete()
  PCI / ACPI: Add _PR0 dependent devices
  ACPI / PM: Introduce concept of a _PR0 dependent device
  PCI / ACPI: Use cached ACPI device state to get PCI device power state
  ACPI: PM: Allow transitions to D0 to occur in special cases
  ACPI: PM: Avoid evaluating _PS3 on transitions from D3hot to D3cold
  cpufreq: Use has_target() instead of !setpolicy
  ...
This commit is contained in commit cf2d213e49.
@@ -300,4 +300,4 @@ Description:
 		attempt.

 		Using this sysfs file will override any values that were
-		set using the kernel command line for disk offset.
\ No newline at end of file
+		set using the kernel command line for disk offset.
37	Documentation/devicetree/bindings/cpufreq/imx-cpufreq-dt.txt (new file)

@@ -0,0 +1,37 @@
+i.MX CPUFreq-DT OPP bindings
+================================
+
+Certain i.MX SoCs support different OPPs depending on the "market segment" and
+"speed grading" value which are written in fuses. These bits are combined with
+the opp-supported-hw values for each OPP to check if the OPP is allowed.
+
+Required properties:
+--------------------
+
+For each opp entry in 'operating-points-v2' table:
+- opp-supported-hw: Two bitmaps indicating:
+	- Supported speed grade mask
+	- Supported market segment mask
+		0: Consumer
+		1: Extended Consumer
+		2: Industrial
+		3: Automotive
+
+Example:
+--------
+
+opp_table {
+	compatible = "operating-points-v2";
+
+	opp-1000000000 {
+		opp-hz = /bits/ 64 <1000000000>;
+		/* grade >= 0, consumer only */
+		opp-supported-hw = <0xf>, <0x3>;
+	};
+
+	opp-1300000000 {
+		opp-hz = /bits/ 64 <1300000000>;
+		opp-microvolt = <1000000>;
+		/* grade >= 1, all segments */
+		opp-supported-hw = <0xe>, <0x7>;
+	};
+}
@@ -7,6 +7,7 @@
  */

 #include <linux/mm.h>
+#include <linux/suspend.h>
 #include <asm/page.h>
 #include <asm/sections.h>

@@ -63,7 +63,6 @@ void __init startup_init(void);
 void die(struct pt_regs *regs, const char *str);
 int setup_profiling_timer(unsigned int multiplier);
 void __init time_init(void);
-int pfn_is_nosave(unsigned long);
 void s390_early_resume(void);
 unsigned long prepare_ftrace_return(unsigned long parent, unsigned long sp, unsigned long ip);

@@ -129,7 +129,7 @@ static void lpit_update_residency(struct lpit_residency_info *info,

 static void lpit_process(u64 begin, u64 end)
 {
-	while (begin + sizeof(struct acpi_lpit_native) < end) {
+	while (begin + sizeof(struct acpi_lpit_native) <= end) {
 		struct acpi_lpit_native *lpit_native = (struct acpi_lpit_native *)begin;

 		if (!lpit_native->header.type && !lpit_native->header.flags) {
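The `<` to `<=` change above matters when the last LPIT entry ends exactly at `end`: with `<`, that final entry is silently skipped. A standalone sketch (illustrative names, not the kernel function) of the bound:

```c
#include <stdint.h>

/*
 * With N fixed-size entries packed into [begin, begin + N*size), the
 * loop condition must be "begin + size <= end", otherwise the entry
 * that ends exactly at "end" is never visited.
 */
static int count_entries(uint64_t begin, uint64_t end, uint64_t size)
{
	int n = 0;

	while (begin + size <= end) {	/* "<" here would miss the last entry */
		n++;
		begin += size;
	}
	return n;
}
```

For example, three 10-byte entries in a 30-byte window: `count_entries(0, 30, 10)` visits all three, while the old `<` condition would stop after two.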
@@ -148,7 +148,6 @@ static void lpit_process(u64 begin, u64 end)
 void acpi_init_lpit(void)
 {
 	acpi_status status;
-	u64 lpit_begin;
 	struct acpi_table_lpit *lpit;

 	status = acpi_get_table(ACPI_SIG_LPIT, 0, (struct acpi_table_header **)&lpit);
@@ -156,6 +155,6 @@ void acpi_init_lpit(void)
 	if (ACPI_FAILURE(status))
 		return;

-	lpit_begin = (u64)lpit + sizeof(*lpit);
-	lpit_process(lpit_begin, lpit_begin + lpit->header.length);
+	lpit_process((u64)lpit + sizeof(*lpit),
+		     (u64)lpit + lpit->header.length);
 }
@@ -1061,6 +1061,13 @@ static int acpi_lpss_suspend_noirq(struct device *dev)
 	int ret;

 	if (pdata->dev_desc->resume_from_noirq) {
+		/*
+		 * The driver's ->suspend_late callback will be invoked by
+		 * acpi_lpss_do_suspend_late(), with the assumption that the
+		 * driver really wanted to run that code in ->suspend_noirq, but
+		 * it could not run after acpi_dev_suspend() and the driver
+		 * expected the latter to be called in the "late" phase.
+		 */
 		ret = acpi_lpss_do_suspend_late(dev);
 		if (ret)
 			return ret;
@@ -1091,16 +1098,99 @@ static int acpi_lpss_resume_noirq(struct device *dev)
 	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
 	int ret;

-	ret = acpi_subsys_resume_noirq(dev);
+	/* Follow acpi_subsys_resume_noirq(). */
+	if (dev_pm_may_skip_resume(dev))
+		return 0;
+
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		pm_runtime_set_active(dev);
+
+	ret = pm_generic_resume_noirq(dev);
 	if (ret)
 		return ret;

-	if (!dev_pm_may_skip_resume(dev) && pdata->dev_desc->resume_from_noirq)
-		ret = acpi_lpss_do_resume_early(dev);
+	if (!pdata->dev_desc->resume_from_noirq)
+		return 0;

-	return ret;
+	/*
+	 * The driver's ->resume_early callback will be invoked by
+	 * acpi_lpss_do_resume_early(), with the assumption that the driver
+	 * really wanted to run that code in ->resume_noirq, but it could not
+	 * run before acpi_dev_resume() and the driver expected the latter to be
+	 * called in the "early" phase.
+	 */
+	return acpi_lpss_do_resume_early(dev);
+}
+
+static int acpi_lpss_do_restore_early(struct device *dev)
+{
+	int ret = acpi_lpss_resume(dev);
+
+	return ret ? ret : pm_generic_restore_early(dev);
+}
+
+static int acpi_lpss_restore_early(struct device *dev)
+{
+	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+
+	if (pdata->dev_desc->resume_from_noirq)
+		return 0;
+
+	return acpi_lpss_do_restore_early(dev);
+}
+
+static int acpi_lpss_restore_noirq(struct device *dev)
+{
+	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+	int ret;
+
+	ret = pm_generic_restore_noirq(dev);
+	if (ret)
+		return ret;
+
+	if (!pdata->dev_desc->resume_from_noirq)
+		return 0;
+
+	/* This is analogous to what happens in acpi_lpss_resume_noirq(). */
+	return acpi_lpss_do_restore_early(dev);
+}
+
+static int acpi_lpss_do_poweroff_late(struct device *dev)
+{
+	int ret = pm_generic_poweroff_late(dev);
+
+	return ret ? ret : acpi_lpss_suspend(dev, device_may_wakeup(dev));
+}
+
+static int acpi_lpss_poweroff_late(struct device *dev)
+{
+	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
+
+	if (pdata->dev_desc->resume_from_noirq)
+		return 0;
+
+	return acpi_lpss_do_poweroff_late(dev);
+}
+
+static int acpi_lpss_poweroff_noirq(struct device *dev)
+{
+	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+
+	if (dev_pm_smart_suspend_and_suspended(dev))
+		return 0;
+
+	if (pdata->dev_desc->resume_from_noirq) {
+		/* This is analogous to the acpi_lpss_suspend_noirq() case. */
+		int ret = acpi_lpss_do_poweroff_late(dev);
+
+		if (ret)
+			return ret;
+	}
+
+	return pm_generic_poweroff_noirq(dev);
 }
 #endif /* CONFIG_PM_SLEEP */

 static int acpi_lpss_runtime_suspend(struct device *dev)
@@ -1134,14 +1224,11 @@ static struct dev_pm_domain acpi_lpss_pm_domain = {
 		.resume_noirq = acpi_lpss_resume_noirq,
 		.resume_early = acpi_lpss_resume_early,
 		.freeze = acpi_subsys_freeze,
-		.freeze_late = acpi_subsys_freeze_late,
-		.freeze_noirq = acpi_subsys_freeze_noirq,
-		.thaw_noirq = acpi_subsys_thaw_noirq,
-		.poweroff = acpi_subsys_suspend,
-		.poweroff_late = acpi_lpss_suspend_late,
-		.poweroff_noirq = acpi_lpss_suspend_noirq,
-		.restore_noirq = acpi_lpss_resume_noirq,
-		.restore_early = acpi_lpss_resume_early,
+		.poweroff = acpi_subsys_poweroff,
+		.poweroff_late = acpi_lpss_poweroff_late,
+		.poweroff_noirq = acpi_lpss_poweroff_noirq,
+		.restore_noirq = acpi_lpss_restore_noirq,
+		.restore_early = acpi_lpss_restore_early,
 #endif
 		.runtime_suspend = acpi_lpss_runtime_suspend,
 		.runtime_resume = acpi_lpss_runtime_resume,
@@ -45,6 +45,19 @@ const char *acpi_power_state_string(int state)
 	}
 }

+static int acpi_dev_pm_explicit_get(struct acpi_device *device, int *state)
+{
+	unsigned long long psc;
+	acpi_status status;
+
+	status = acpi_evaluate_integer(device->handle, "_PSC", NULL, &psc);
+	if (ACPI_FAILURE(status))
+		return -ENODEV;
+
+	*state = psc;
+	return 0;
+}
+
 /**
  * acpi_device_get_power - Get power state of an ACPI device.
  * @device: Device to get the power state of.
@@ -53,10 +66,16 @@ const char *acpi_power_state_string(int state)
  * This function does not update the device's power.state field, but it may
  * update its parent's power.state field (when the parent's power state is
  * unknown and the device's power state turns out to be D0).
+ *
+ * Also, it does not update power resource reference counters to ensure that
+ * the power state returned by it will be persistent and it may return a power
+ * state shallower than previously set by acpi_device_set_power() for @device
+ * (if that power state depends on any power resources).
  */
 int acpi_device_get_power(struct acpi_device *device, int *state)
 {
 	int result = ACPI_STATE_UNKNOWN;
+	int error;

 	if (!device || !state)
 		return -EINVAL;
@@ -73,18 +92,16 @@ int acpi_device_get_power(struct acpi_device *device, int *state)
 	 * if available.
 	 */
 	if (device->power.flags.power_resources) {
-		int error = acpi_power_get_inferred_state(device, &result);
+		error = acpi_power_get_inferred_state(device, &result);
 		if (error)
 			return error;
 	}
 	if (device->power.flags.explicit_get) {
-		acpi_handle handle = device->handle;
-		unsigned long long psc;
-		acpi_status status;
+		int psc;

-		status = acpi_evaluate_integer(handle, "_PSC", NULL, &psc);
-		if (ACPI_FAILURE(status))
-			return -ENODEV;
+		error = acpi_dev_pm_explicit_get(device, &psc);
+		if (error)
+			return error;

 		/*
 		 * The power resources settings may indicate a power state
@@ -118,7 +135,6 @@ int acpi_device_get_power(struct acpi_device *device, int *state)

 	return 0;
 }
-EXPORT_SYMBOL(acpi_device_get_power);

 static int acpi_dev_pm_explicit_set(struct acpi_device *adev, int state)
 {
@@ -152,7 +168,8 @@ int acpi_device_set_power(struct acpi_device *device, int state)

 	/* Make sure this is a valid target state */

-	if (state == device->power.state) {
+	/* There is a special case for D0 addressed below. */
+	if (state > ACPI_STATE_D0 && state == device->power.state) {
 		ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Device [%s] already in %s\n",
 				  device->pnp.bus_id,
 				  acpi_power_state_string(state)));
@@ -202,9 +219,15 @@ int acpi_device_set_power(struct acpi_device *device, int state)
 			return -ENODEV;
 		}

-		result = acpi_dev_pm_explicit_set(device, state);
-		if (result)
-			goto end;
+		/*
+		 * If the device goes from D3hot to D3cold, _PS3 has been
+		 * evaluated for it already, so skip it in that case.
+		 */
+		if (device->power.state < ACPI_STATE_D3_HOT) {
+			result = acpi_dev_pm_explicit_set(device, state);
+			if (result)
+				goto end;
+		}

 		if (device->power.flags.power_resources)
 			result = acpi_power_transition(device, target_state);
@@ -214,6 +237,30 @@ int acpi_device_set_power(struct acpi_device *device, int state)
 			if (result)
 				goto end;
 		}
+
+		if (device->power.state == ACPI_STATE_D0) {
+			int psc;
+
+			/* Nothing to do here if _PSC is not present. */
+			if (!device->power.flags.explicit_get)
+				return 0;
+
+			/*
+			 * The power state of the device was set to D0 last
+			 * time, but that might have happened before a
+			 * system-wide transition involving the platform
+			 * firmware, so it may be necessary to evaluate _PS0
+			 * for the device here.  However, use extra care here
+			 * and evaluate _PSC to check the device's current power
+			 * state, and only invoke _PS0 if the evaluation of _PSC
+			 * is successful and it returns a power state different
+			 * from D0.
+			 */
+			result = acpi_dev_pm_explicit_get(device, &psc);
+			if (result || psc == ACPI_STATE_D0)
+				return 0;
+		}
+
 		result = acpi_dev_pm_explicit_set(device, ACPI_STATE_D0);
 	}

@@ -1073,7 +1120,7 @@ EXPORT_SYMBOL_GPL(acpi_subsys_suspend_noirq);
  * acpi_subsys_resume_noirq - Run the device driver's "noirq" resume callback.
  * @dev: Device to handle.
  */
-int acpi_subsys_resume_noirq(struct device *dev)
+static int acpi_subsys_resume_noirq(struct device *dev)
 {
 	if (dev_pm_may_skip_resume(dev))
 		return 0;
@@ -1088,7 +1135,6 @@ int acpi_subsys_resume_noirq(struct device *dev)

 	return pm_generic_resume_noirq(dev);
 }
-EXPORT_SYMBOL_GPL(acpi_subsys_resume_noirq);

 /**
  * acpi_subsys_resume_early - Resume device using ACPI.
@@ -1098,12 +1144,11 @@ EXPORT_SYMBOL_GPL(acpi_subsys_resume_noirq);
  * generic early resume procedure for it during system transition into the
  * working state.
  */
-int acpi_subsys_resume_early(struct device *dev)
+static int acpi_subsys_resume_early(struct device *dev)
 {
 	int ret = acpi_dev_resume(dev);
 	return ret ? ret : pm_generic_resume_early(dev);
 }
-EXPORT_SYMBOL_GPL(acpi_subsys_resume_early);

 /**
  * acpi_subsys_freeze - Run the device driver's freeze callback.
@@ -1112,65 +1157,81 @@ EXPORT_SYMBOL_GPL(acpi_subsys_resume_early);
 int acpi_subsys_freeze(struct device *dev)
 {
 	/*
-	 * This used to be done in acpi_subsys_prepare() for all devices and
-	 * some drivers may depend on it, so do it here.  Ideally, however,
-	 * runtime-suspended devices should not be touched during freeze/thaw
-	 * transitions.
+	 * Resume all runtime-suspended devices before creating a snapshot
+	 * image of system memory, because the restore kernel generally cannot
+	 * be expected to always handle them consistently and they need to be
+	 * put into the runtime-active metastate during system resume anyway,
+	 * so it is better to ensure that the state saved in the image will be
+	 * always consistent with that.
 	 */
-	if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND))
-		pm_runtime_resume(dev);
+	pm_runtime_resume(dev);

 	return pm_generic_freeze(dev);
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_freeze);

 /**
- * acpi_subsys_freeze_late - Run the device driver's "late" freeze callback.
- * @dev: Device to handle.
+ * acpi_subsys_restore_early - Restore device using ACPI.
+ * @dev: Device to restore.
  */
-int acpi_subsys_freeze_late(struct device *dev)
+int acpi_subsys_restore_early(struct device *dev)
 {
+	int ret = acpi_dev_resume(dev);
+	return ret ? ret : pm_generic_restore_early(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_restore_early);
+
+/**
+ * acpi_subsys_poweroff - Run the device driver's poweroff callback.
+ * @dev: Device to handle.
+ *
+ * Follow PCI and resume devices from runtime suspend before running their
+ * system poweroff callbacks, unless the driver can cope with runtime-suspended
+ * devices during system suspend and there are no ACPI-specific reasons for
+ * resuming them.
+ */
+int acpi_subsys_poweroff(struct device *dev)
+{
+	if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) ||
+	    acpi_dev_needs_resume(dev, ACPI_COMPANION(dev)))
+		pm_runtime_resume(dev);
+
+	return pm_generic_poweroff(dev);
+}
+EXPORT_SYMBOL_GPL(acpi_subsys_poweroff);
+
+/**
+ * acpi_subsys_poweroff_late - Run the device driver's poweroff callback.
+ * @dev: Device to handle.
+ *
+ * Carry out the generic late poweroff procedure for @dev and use ACPI to put
+ * it into a low-power state during system transition into a sleep state.
+ */
+static int acpi_subsys_poweroff_late(struct device *dev)
+{
+	int ret;
+
 	if (dev_pm_smart_suspend_and_suspended(dev))
 		return 0;

-	return pm_generic_freeze_late(dev);
+	ret = pm_generic_poweroff_late(dev);
+	if (ret)
+		return ret;
+
+	return acpi_dev_suspend(dev, device_may_wakeup(dev));
 }
-EXPORT_SYMBOL_GPL(acpi_subsys_freeze_late);

 /**
- * acpi_subsys_freeze_noirq - Run the device driver's "noirq" freeze callback.
- * @dev: Device to handle.
+ * acpi_subsys_poweroff_noirq - Run the driver's "noirq" poweroff callback.
+ * @dev: Device to suspend.
  */
-int acpi_subsys_freeze_noirq(struct device *dev)
+static int acpi_subsys_poweroff_noirq(struct device *dev)
 {
 	if (dev_pm_smart_suspend_and_suspended(dev))
 		return 0;

-	return pm_generic_freeze_noirq(dev);
+	return pm_generic_poweroff_noirq(dev);
 }
-EXPORT_SYMBOL_GPL(acpi_subsys_freeze_noirq);
-
-/**
- * acpi_subsys_thaw_noirq - Run the device driver's "noirq" thaw callback.
- * @dev: Device to handle.
- */
-int acpi_subsys_thaw_noirq(struct device *dev)
-{
-	/*
-	 * If the device is in runtime suspend, the "thaw" code may not work
-	 * correctly with it, so skip the driver callback and make the PM core
-	 * skip all of the subsequent "thaw" callbacks for the device.
-	 */
-	if (dev_pm_smart_suspend_and_suspended(dev)) {
-		dev_pm_skip_next_resume_phases(dev);
-		return 0;
-	}
-
-	return pm_generic_thaw_noirq(dev);
-}
-EXPORT_SYMBOL_GPL(acpi_subsys_thaw_noirq);
 #endif /* CONFIG_PM_SLEEP */

 static struct dev_pm_domain acpi_general_pm_domain = {
@@ -1186,14 +1247,10 @@ static struct dev_pm_domain acpi_general_pm_domain = {
 		.resume_noirq = acpi_subsys_resume_noirq,
 		.resume_early = acpi_subsys_resume_early,
 		.freeze = acpi_subsys_freeze,
-		.freeze_late = acpi_subsys_freeze_late,
-		.freeze_noirq = acpi_subsys_freeze_noirq,
-		.thaw_noirq = acpi_subsys_thaw_noirq,
-		.poweroff = acpi_subsys_suspend,
-		.poweroff_late = acpi_subsys_suspend_late,
-		.poweroff_noirq = acpi_subsys_suspend_noirq,
-		.restore_noirq = acpi_subsys_resume_noirq,
-		.restore_early = acpi_subsys_resume_early,
+		.poweroff = acpi_subsys_poweroff,
+		.poweroff_late = acpi_subsys_poweroff_late,
+		.poweroff_noirq = acpi_subsys_poweroff_noirq,
+		.restore_early = acpi_subsys_restore_early,
 #endif
 	},
 };
@@ -139,8 +139,15 @@ int acpi_power_get_inferred_state(struct acpi_device *device, int *state);
 int acpi_power_on_resources(struct acpi_device *device, int state);
 int acpi_power_transition(struct acpi_device *device, int state);

+/* --------------------------------------------------------------------------
+                          Device Power Management
+   -------------------------------------------------------------------------- */
+int acpi_device_get_power(struct acpi_device *device, int *state);
 int acpi_wakeup_device_init(void);

+/* --------------------------------------------------------------------------
+                                  Processor
+   -------------------------------------------------------------------------- */
 #ifdef CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC
 void acpi_early_processor_set_pdc(void);
 #else
@@ -42,6 +42,11 @@ ACPI_MODULE_NAME("power");
 #define ACPI_POWER_RESOURCE_STATE_ON	0x01
 #define ACPI_POWER_RESOURCE_STATE_UNKNOWN 0xFF

+struct acpi_power_dependent_device {
+	struct device *dev;
+	struct list_head node;
+};
+
 struct acpi_power_resource {
 	struct acpi_device device;
 	struct list_head list_node;
@ -51,6 +56,7 @@ struct acpi_power_resource {
|
|||||||
unsigned int ref_count;
|
unsigned int ref_count;
|
||||||
bool wakeup_enabled;
|
bool wakeup_enabled;
|
||||||
struct mutex resource_lock;
|
struct mutex resource_lock;
|
||||||
|
struct list_head dependents;
|
||||||
};
|
};
|
||||||
|
|
||||||
struct acpi_power_resource_entry {
|
struct acpi_power_resource_entry {
|
||||||
@@ -232,8 +238,121 @@ static int acpi_power_get_list_state(struct list_head *list, int *state)
 	return 0;
 }
 
+static int
+acpi_power_resource_add_dependent(struct acpi_power_resource *resource,
+				  struct device *dev)
+{
+	struct acpi_power_dependent_device *dep;
+	int ret = 0;
+
+	mutex_lock(&resource->resource_lock);
+	list_for_each_entry(dep, &resource->dependents, node) {
+		/* Only add it once */
+		if (dep->dev == dev)
+			goto unlock;
+	}
+
+	dep = kzalloc(sizeof(*dep), GFP_KERNEL);
+	if (!dep) {
+		ret = -ENOMEM;
+		goto unlock;
+	}
+
+	dep->dev = dev;
+	list_add_tail(&dep->node, &resource->dependents);
+	dev_dbg(dev, "added power dependency to [%s]\n", resource->name);
+
+unlock:
+	mutex_unlock(&resource->resource_lock);
+	return ret;
+}
+
+static void
+acpi_power_resource_remove_dependent(struct acpi_power_resource *resource,
+				     struct device *dev)
+{
+	struct acpi_power_dependent_device *dep;
+
+	mutex_lock(&resource->resource_lock);
+	list_for_each_entry(dep, &resource->dependents, node) {
+		if (dep->dev == dev) {
+			list_del(&dep->node);
+			kfree(dep);
+			dev_dbg(dev, "removed power dependency to [%s]\n",
+				resource->name);
+			break;
+		}
+	}
+	mutex_unlock(&resource->resource_lock);
+}
+
+/**
+ * acpi_device_power_add_dependent - Add dependent device of this ACPI device
+ * @adev: ACPI device pointer
+ * @dev: Dependent device
+ *
+ * If @adev has non-empty _PR0 the @dev is added as dependent device to all
+ * power resources returned by it. This means that whenever these power
+ * resources are turned _ON the dependent devices get runtime resumed. This
+ * is needed for devices such as PCI to allow its driver to re-initialize
+ * it after it went to D0uninitialized.
+ *
+ * If @adev does not have _PR0 this does nothing.
+ *
+ * Returns %0 in case of success and negative errno otherwise.
+ */
+int acpi_device_power_add_dependent(struct acpi_device *adev,
+				    struct device *dev)
+{
+	struct acpi_power_resource_entry *entry;
+	struct list_head *resources;
+	int ret;
+
+	if (!adev->flags.power_manageable)
+		return 0;
+
+	resources = &adev->power.states[ACPI_STATE_D0].resources;
+	list_for_each_entry(entry, resources, node) {
+		ret = acpi_power_resource_add_dependent(entry->resource, dev);
+		if (ret)
+			goto err;
+	}
+
+	return 0;
+
+err:
+	list_for_each_entry(entry, resources, node)
+		acpi_power_resource_remove_dependent(entry->resource, dev);
+
+	return ret;
+}
+
+/**
+ * acpi_device_power_remove_dependent - Remove dependent device
+ * @adev: ACPI device pointer
+ * @dev: Dependent device
+ *
+ * Does the opposite of acpi_device_power_add_dependent() and removes the
+ * dependent device if it is found. Can be called to @adev that does not
+ * have _PR0 as well.
+ */
+void acpi_device_power_remove_dependent(struct acpi_device *adev,
+					struct device *dev)
+{
+	struct acpi_power_resource_entry *entry;
+	struct list_head *resources;
+
+	if (!adev->flags.power_manageable)
+		return;
+
+	resources = &adev->power.states[ACPI_STATE_D0].resources;
+	list_for_each_entry_reverse(entry, resources, node)
+		acpi_power_resource_remove_dependent(entry->resource, dev);
+}
+
 static int __acpi_power_on(struct acpi_power_resource *resource)
 {
+	struct acpi_power_dependent_device *dep;
 	acpi_status status = AE_OK;
 
 	status = acpi_evaluate_object(resource->device.handle, "_ON", NULL, NULL);
@@ -243,6 +362,21 @@ static int __acpi_power_on(struct acpi_power_resource *resource)
 	ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Power resource [%s] turned on\n",
 			  resource->name));
 
+	/*
+	 * If there are other dependents on this power resource we need to
+	 * resume them now so that their drivers can re-initialize the
+	 * hardware properly after it went back to D0.
+	 */
+	if (list_empty(&resource->dependents) ||
+	    list_is_singular(&resource->dependents))
+		return 0;
+
+	list_for_each_entry(dep, &resource->dependents, node) {
+		dev_dbg(dep->dev, "runtime resuming because [%s] turned on\n",
+			resource->name);
+		pm_request_resume(dep->dev);
+	}
+
 	return 0;
 }
@@ -810,6 +944,7 @@ int acpi_add_power_resource(acpi_handle handle)
 				ACPI_STA_DEFAULT);
 	mutex_init(&resource->resource_lock);
 	INIT_LIST_HEAD(&resource->list_node);
+	INIT_LIST_HEAD(&resource->dependents);
 	resource->name = device->pnp.bus_id;
 	strcpy(acpi_device_name(device), ACPI_POWER_DEVICE_NAME);
 	strcpy(acpi_device_class(device), ACPI_POWER_CLASS);
@@ -77,7 +77,7 @@ static int acpi_sleep_prepare(u32 acpi_state)
 	return 0;
 }
 
-static bool acpi_sleep_state_supported(u8 sleep_state)
+bool acpi_sleep_state_supported(u8 sleep_state)
 {
 	acpi_status status;
 	u8 type_a, type_b;
@@ -452,14 +452,6 @@ static int acpi_pm_prepare(void)
 	return error;
 }
 
-static int find_powerf_dev(struct device *dev, void *data)
-{
-	struct acpi_device *device = to_acpi_device(dev);
-	const char *hid = acpi_device_hid(device);
-
-	return !strcmp(hid, ACPI_BUTTON_HID_POWERF);
-}
-
 /**
  * acpi_pm_finish - Instruct the platform to leave a sleep state.
  *
@@ -468,7 +460,7 @@ static int find_powerf_dev(struct device *dev, void *data)
  */
 static void acpi_pm_finish(void)
 {
-	struct device *pwr_btn_dev;
+	struct acpi_device *pwr_btn_adev;
 	u32 acpi_state = acpi_target_sleep_state;
 
 	acpi_ec_unblock_transactions();
@@ -499,11 +491,11 @@ static void acpi_pm_finish(void)
 		return;
 
 	pwr_btn_event_pending = false;
-	pwr_btn_dev = bus_find_device(&acpi_bus_type, NULL, NULL,
-				      find_powerf_dev);
-	if (pwr_btn_dev) {
-		pm_wakeup_event(pwr_btn_dev, 0);
-		put_device(pwr_btn_dev);
+	pwr_btn_adev = acpi_dev_get_first_match_dev(ACPI_BUTTON_HID_POWERF,
+						    NULL, -1);
+	if (pwr_btn_adev) {
+		pm_wakeup_event(&pwr_btn_adev->dev, 0);
+		acpi_dev_put(pwr_btn_adev);
 	}
 }
@@ -12,6 +12,7 @@
 #include <linux/pm_clock.h>
 #include <linux/clk.h>
 #include <linux/clkdev.h>
+#include <linux/of_clk.h>
 #include <linux/slab.h>
 #include <linux/err.h>
 #include <linux/pm_domain.h>
@@ -92,8 +93,6 @@ static int __pm_clk_add(struct device *dev, const char *con_id,
 	if (con_id) {
 		ce->con_id = kstrdup(con_id, GFP_KERNEL);
 		if (!ce->con_id) {
-			dev_err(dev,
-				"Not enough memory for clock connection ID.\n");
 			kfree(ce);
 			return -ENOMEM;
 		}
@@ -195,8 +194,7 @@ int of_pm_clk_add_clks(struct device *dev)
 	if (!dev || !dev->of_node)
 		return -EINVAL;
 
-	count = of_count_phandle_with_args(dev->of_node, "clocks",
-					   "#clock-cells");
+	count = of_clk_get_parent_count(dev->of_node);
 	if (count <= 0)
 		return -ENODEV;
@@ -529,21 +529,6 @@ static void dpm_watchdog_clear(struct dpm_watchdog *wd)
 
 /*------------------------- Resume routines -------------------------*/
 
-/**
- * dev_pm_skip_next_resume_phases - Skip next system resume phases for device.
- * @dev: Target device.
- *
- * Make the core skip the "early resume" and "resume" phases for @dev.
- *
- * This function can be called by middle-layer code during the "noirq" phase of
- * system resume if necessary, but not by device drivers.
- */
-void dev_pm_skip_next_resume_phases(struct device *dev)
-{
-	dev->power.is_late_suspended = false;
-	dev->power.is_suspended = false;
-}
-
 /**
  * suspend_event - Return a "suspend" message for given "resume" one.
  * @resume_msg: PM message representing a system-wide resume transition.
@@ -681,6 +666,9 @@ Skip:
 	dev->power.is_noirq_suspended = false;
 
 	if (skip_resume) {
+		/* Make the next phases of resume skip the device. */
+		dev->power.is_late_suspended = false;
+		dev->power.is_suspended = false;
 		/*
 		 * The device is going to be left in suspend, but it might not
 		 * have been in runtime suspend before the system suspended, so
@@ -689,7 +677,6 @@ Skip:
 		 * device again.
 		 */
 		pm_runtime_set_suspended(dev);
-		dev_pm_skip_next_resume_phases(dev);
 	}
 
 Out:
@@ -1631,17 +1618,20 @@ int dpm_suspend_late(pm_message_t state)
  */
 int dpm_suspend_end(pm_message_t state)
 {
-	int error = dpm_suspend_late(state);
+	ktime_t starttime = ktime_get();
+	int error;
+
+	error = dpm_suspend_late(state);
 	if (error)
-		return error;
+		goto out;
 
 	error = dpm_suspend_noirq(state);
-	if (error) {
+	if (error)
 		dpm_resume_early(resume_event(state));
-		return error;
-	}
 
-	return 0;
+out:
+	dpm_show_time(starttime, state, error, "end");
+	return error;
 }
 EXPORT_SYMBOL_GPL(dpm_suspend_end);
 
@@ -2034,6 +2024,7 @@ int dpm_prepare(pm_message_t state)
  */
 int dpm_suspend_start(pm_message_t state)
 {
+	ktime_t starttime = ktime_get();
 	int error;
 
 	error = dpm_prepare(state);
@@ -2042,6 +2033,7 @@ int dpm_suspend_start(pm_message_t state)
 		dpm_save_failed_step(SUSPEND_PREPARE);
 	} else
 		error = dpm_suspend(state);
+	dpm_show_time(starttime, state, error, "start");
 	return error;
 }
 EXPORT_SYMBOL_GPL(dpm_suspend_start);
@@ -968,8 +968,6 @@ void pm_wakep_autosleep_enabled(bool set)
 }
 #endif /* CONFIG_PM_AUTOSLEEP */
 
-static struct dentry *wakeup_sources_stats_dentry;
-
 /**
  * print_wakeup_source_stats - Print wakeup source statistics information.
  * @m: seq_file to print the statistics into.
@@ -1099,8 +1097,8 @@ static const struct file_operations wakeup_sources_stats_fops = {
 
 static int __init wakeup_sources_debugfs_init(void)
 {
-	wakeup_sources_stats_dentry = debugfs_create_file("wakeup_sources",
-			S_IRUGO, NULL, NULL, &wakeup_sources_stats_fops);
+	debugfs_create_file("wakeup_sources", S_IRUGO, NULL, NULL,
+			    &wakeup_sources_stats_fops);
 	return 0;
 }
@@ -93,6 +93,15 @@ config ARM_IMX6Q_CPUFREQ
 
 	  If in doubt, say N.
 
+config ARM_IMX_CPUFREQ_DT
+	tristate "Freescale i.MX8M cpufreq support"
+	depends on ARCH_MXC && CPUFREQ_DT
+	help
+	  This adds cpufreq driver support for Freescale i.MX8M series SoCs,
+	  based on cpufreq-dt.
+
+	  If in doubt, say N.
+
 config ARM_KIRKWOOD_CPUFREQ
 	def_bool MACH_KIRKWOOD
 	help
@@ -133,6 +142,14 @@ config ARM_QCOM_CPUFREQ_HW
 	  The driver implements the cpufreq interface for this HW engine.
 	  Say Y if you want to support CPUFreq HW.
 
+config ARM_RASPBERRYPI_CPUFREQ
+	tristate "Raspberry Pi cpufreq support"
+	depends on CLK_RASPBERRYPI || COMPILE_TEST
+	help
+	  This adds the CPUFreq driver for Raspberry Pi
+
+	  If in doubt, say N.
+
 config ARM_S3C_CPUFREQ
 	bool
 	help
@@ -56,6 +56,7 @@ obj-$(CONFIG_ACPI_CPPC_CPUFREQ) += cppc_cpufreq.o
 obj-$(CONFIG_ARCH_DAVINCI)		+= davinci-cpufreq.o
 obj-$(CONFIG_ARM_HIGHBANK_CPUFREQ)	+= highbank-cpufreq.o
 obj-$(CONFIG_ARM_IMX6Q_CPUFREQ)		+= imx6q-cpufreq.o
+obj-$(CONFIG_ARM_IMX_CPUFREQ_DT)	+= imx-cpufreq-dt.o
 obj-$(CONFIG_ARM_KIRKWOOD_CPUFREQ)	+= kirkwood-cpufreq.o
 obj-$(CONFIG_ARM_MEDIATEK_CPUFREQ)	+= mediatek-cpufreq.o
 obj-$(CONFIG_MACH_MVEBU_V7)		+= mvebu-cpufreq.o
@@ -64,6 +65,7 @@ obj-$(CONFIG_ARM_PXA2xx_CPUFREQ) += pxa2xx-cpufreq.o
 obj-$(CONFIG_PXA3xx)			+= pxa3xx-cpufreq.o
 obj-$(CONFIG_ARM_QCOM_CPUFREQ_HW)	+= qcom-cpufreq-hw.o
 obj-$(CONFIG_ARM_QCOM_CPUFREQ_KRYO)	+= qcom-cpufreq-kryo.o
+obj-$(CONFIG_ARM_RASPBERRYPI_CPUFREQ)	+= raspberrypi-cpufreq.o
 obj-$(CONFIG_ARM_S3C2410_CPUFREQ)	+= s3c2410-cpufreq.o
 obj-$(CONFIG_ARM_S3C2412_CPUFREQ)	+= s3c2412-cpufreq.o
 obj-$(CONFIG_ARM_S3C2416_CPUFREQ)	+= s3c2416-cpufreq.o
@@ -257,7 +257,7 @@ static void __init armada37xx_cpufreq_avs_configure(struct regmap *base,
 static void __init armada37xx_cpufreq_avs_setup(struct regmap *base,
 						struct armada_37xx_dvfs *dvfs)
 {
-	unsigned int avs_val = 0, freq;
+	unsigned int avs_val = 0;
 	int load_level = 0;
 
 	if (base == NULL)
@@ -275,8 +275,6 @@ static void __init armada37xx_cpufreq_avs_setup(struct regmap *base,
 
 	for (load_level = 1; load_level < LOAD_LEVEL_NR; load_level++) {
-		freq = dvfs->cpu_freq_max / dvfs->divider[load_level];
-
 		avs_val = dvfs->avs[load_level];
 		regmap_update_bits(base, ARMADA_37XX_AVS_VSET(load_level-1),
 		    ARMADA_37XX_AVS_VDD_MASK << ARMADA_37XX_AVS_HIGH_VDD_LIMIT |
@@ -384,12 +384,12 @@ static int brcm_avs_set_pstate(struct private_data *priv, unsigned int pstate)
 	return __issue_avs_command(priv, AVS_CMD_SET_PSTATE, true, args);
 }
 
-static unsigned long brcm_avs_get_voltage(void __iomem *base)
+static u32 brcm_avs_get_voltage(void __iomem *base)
 {
 	return readl(base + AVS_MBOX_VOLTAGE1);
 }
 
-static unsigned long brcm_avs_get_frequency(void __iomem *base)
+static u32 brcm_avs_get_frequency(void __iomem *base)
 {
 	return readl(base + AVS_MBOX_FREQUENCY) * 1000;	/* in kHz */
 }
@@ -446,8 +446,8 @@ static bool brcm_avs_is_firmware_loaded(struct private_data *priv)
 	rc = brcm_avs_get_pmap(priv, NULL);
 	magic = readl(priv->base + AVS_MBOX_MAGIC);
 
-	return (magic == AVS_FIRMWARE_MAGIC) && (rc != -ENOTSUPP) &&
-		(rc != -EINVAL);
+	return (magic == AVS_FIRMWARE_MAGIC) && ((rc != -ENOTSUPP) ||
+		(rc != -EINVAL));
 }
 
 static unsigned int brcm_avs_cpufreq_get(unsigned int cpu)
@@ -653,14 +653,14 @@ static ssize_t show_brcm_avs_voltage(struct cpufreq_policy *policy, char *buf)
 {
 	struct private_data *priv = policy->driver_data;
 
-	return sprintf(buf, "0x%08lx\n", brcm_avs_get_voltage(priv->base));
+	return sprintf(buf, "0x%08x\n", brcm_avs_get_voltage(priv->base));
 }
 
 static ssize_t show_brcm_avs_frequency(struct cpufreq_policy *policy, char *buf)
 {
 	struct private_data *priv = policy->driver_data;
 
-	return sprintf(buf, "0x%08lx\n", brcm_avs_get_frequency(priv->base));
+	return sprintf(buf, "0x%08x\n", brcm_avs_get_frequency(priv->base));
 }
 
 cpufreq_freq_attr_ro(brcm_avs_pstate);
@@ -37,7 +37,6 @@ static const struct of_device_id whitelist[] __initconst = {
 	{ .compatible = "fsl,imx27", },
 	{ .compatible = "fsl,imx51", },
 	{ .compatible = "fsl,imx53", },
-	{ .compatible = "fsl,imx7d", },
 
 	{ .compatible = "marvell,berlin", },
 	{ .compatible = "marvell,pxa250", },
@@ -105,6 +104,10 @@ static const struct of_device_id blacklist[] __initconst = {
 	{ .compatible = "calxeda,highbank", },
 	{ .compatible = "calxeda,ecx-2000", },
 
+	{ .compatible = "fsl,imx7d", },
+	{ .compatible = "fsl,imx8mq", },
+	{ .compatible = "fsl,imx8mm", },
+
 	{ .compatible = "marvell,armadaxp", },
 
 	{ .compatible = "mediatek,mt2701", },
@@ -356,12 +356,10 @@ static void cpufreq_notify_transition(struct cpufreq_policy *policy,
 		 * which is not equal to what the cpufreq core thinks is
 		 * "old frequency".
 		 */
-		if (!(cpufreq_driver->flags & CPUFREQ_CONST_LOOPS)) {
-			if (policy->cur && (policy->cur != freqs->old)) {
-				pr_debug("Warning: CPU frequency is %u, cpufreq assumed %u kHz\n",
-					 freqs->old, policy->cur);
-				freqs->old = policy->cur;
-			}
-		}
+		if (policy->cur && policy->cur != freqs->old) {
+			pr_debug("Warning: CPU frequency is %u, cpufreq assumed %u kHz\n",
+				 freqs->old, policy->cur);
+			freqs->old = policy->cur;
+		}
 	}
 
 		srcu_notifier_call_chain(&cpufreq_transition_notifier_list,
@@ -631,7 +629,7 @@ static int cpufreq_parse_policy(char *str_governor,
 }
 
 /**
- * cpufreq_parse_governor - parse a governor string only for !setpolicy
+ * cpufreq_parse_governor - parse a governor string only for has_target()
  */
 static int cpufreq_parse_governor(char *str_governor,
 				  struct cpufreq_policy *policy)
@@ -1114,13 +1112,25 @@ static int cpufreq_add_policy_cpu(struct cpufreq_policy *policy, unsigned int cp
 	return ret;
 }
 
+static void refresh_frequency_limits(struct cpufreq_policy *policy)
+{
+	struct cpufreq_policy new_policy = *policy;
+
+	pr_debug("updating policy for CPU %u\n", policy->cpu);
+
+	new_policy.min = policy->user_policy.min;
+	new_policy.max = policy->user_policy.max;
+
+	cpufreq_set_policy(policy, &new_policy);
+}
+
 static void handle_update(struct work_struct *work)
 {
 	struct cpufreq_policy *policy =
 		container_of(work, struct cpufreq_policy, update);
-	unsigned int cpu = policy->cpu;
-	pr_debug("handle_update for cpu %u called\n", cpu);
-	cpufreq_update_policy(cpu);
+
+	pr_debug("handle_update for cpu %u called\n", policy->cpu);
+	refresh_frequency_limits(policy);
 }
 
 static struct cpufreq_policy *cpufreq_policy_alloc(unsigned int cpu)
@@ -1300,7 +1310,7 @@ static int cpufreq_online(unsigned int cpu)
 		policy->max = policy->user_policy.max;
 	}
 
-	if (cpufreq_driver->get && !cpufreq_driver->setpolicy) {
+	if (cpufreq_driver->get && has_target()) {
 		policy->cur = cpufreq_driver->get(policy->cpu);
 		if (!policy->cur) {
 			pr_err("%s: ->get() failed\n", __func__);
@@ -1375,8 +1385,7 @@ static int cpufreq_online(unsigned int cpu)
 	if (cpufreq_driver->ready)
 		cpufreq_driver->ready(policy);
 
-	if (IS_ENABLED(CONFIG_CPU_THERMAL) &&
-	    cpufreq_driver->flags & CPUFREQ_IS_COOLING_DEV)
+	if (cpufreq_thermal_control_enabled(cpufreq_driver))
 		policy->cdev = of_cpufreq_cooling_register(policy);
 
 	pr_debug("initialization complete\n");
@@ -1466,8 +1475,7 @@ static int cpufreq_offline(unsigned int cpu)
 		goto unlock;
 	}
 
-	if (IS_ENABLED(CONFIG_CPU_THERMAL) &&
-	    cpufreq_driver->flags & CPUFREQ_IS_COOLING_DEV) {
+	if (cpufreq_thermal_control_enabled(cpufreq_driver)) {
 		cpufreq_cooling_unregister(policy->cdev);
 		policy->cdev = NULL;
 	}
@@ -1546,6 +1554,30 @@ static void cpufreq_out_of_sync(struct cpufreq_policy *policy,
 	cpufreq_freq_transition_end(policy, &freqs, 0);
 }
 
+static unsigned int cpufreq_verify_current_freq(struct cpufreq_policy *policy, bool update)
+{
+	unsigned int new_freq;
+
+	new_freq = cpufreq_driver->get(policy->cpu);
+	if (!new_freq)
+		return 0;
+
+	/*
+	 * If fast frequency switching is used with the given policy, the check
+	 * against policy->cur is pointless, so skip it in that case.
+	 */
+	if (policy->fast_switch_enabled || !has_target())
+		return new_freq;
+
+	if (policy->cur != new_freq) {
+		cpufreq_out_of_sync(policy, new_freq);
+		if (update)
+			schedule_work(&policy->update);
+	}
+
+	return new_freq;
+}
+
 /**
  * cpufreq_quick_get - get the CPU frequency (in kHz) from policy->cur
  * @cpu: CPU number
@@ -1601,31 +1633,10 @@ EXPORT_SYMBOL(cpufreq_quick_get_max);
 
 static unsigned int __cpufreq_get(struct cpufreq_policy *policy)
 {
-	unsigned int ret_freq = 0;
-
 	if (unlikely(policy_is_inactive(policy)))
-		return ret_freq;
-
-	ret_freq = cpufreq_driver->get(policy->cpu);
-
-	/*
-	 * If fast frequency switching is used with the given policy, the check
-	 * against policy->cur is pointless, so skip it in that case too.
-	 */
-	if (policy->fast_switch_enabled)
-		return ret_freq;
-
-	if (ret_freq && policy->cur &&
-		!(cpufreq_driver->flags & CPUFREQ_CONST_LOOPS)) {
-		/* verify no discrepancy between actual and
-					saved value exists */
-		if (unlikely(ret_freq != policy->cur)) {
-			cpufreq_out_of_sync(policy, ret_freq);
-			schedule_work(&policy->update);
-		}
-	}
+		return 0;
 
-	return ret_freq;
+	return cpufreq_verify_current_freq(policy, true);
 }
 
 /**
@@ -1652,24 +1663,6 @@ unsigned int cpufreq_get(unsigned int cpu)
 }
 EXPORT_SYMBOL(cpufreq_get);
 
-static unsigned int cpufreq_update_current_freq(struct cpufreq_policy *policy)
-{
-	unsigned int new_freq;
-
-	new_freq = cpufreq_driver->get(policy->cpu);
-	if (!new_freq)
-		return 0;
-
-	if (!policy->cur) {
-		pr_debug("cpufreq: Driver did not initialize current freq\n");
-		policy->cur = new_freq;
-	} else if (policy->cur != new_freq && has_target()) {
-		cpufreq_out_of_sync(policy, new_freq);
-	}
-
-	return new_freq;
-}
-
 static struct subsys_interface cpufreq_interface = {
 	.name		= "cpufreq",
 	.subsys		= &cpu_subsys,
@@ -2150,8 +2143,8 @@ static int cpufreq_start_governor(struct cpufreq_policy *policy)
 
 	pr_debug("%s: for CPU %u\n", __func__, policy->cpu);
 
-	if (cpufreq_driver->get && !cpufreq_driver->setpolicy)
-		cpufreq_update_current_freq(policy);
+	if (cpufreq_driver->get)
+		cpufreq_verify_current_freq(policy, false);
 
 	if (policy->governor->start) {
 		ret = policy->governor->start(policy);
@@ -2392,7 +2385,6 @@ int cpufreq_set_policy(struct cpufreq_policy *policy,
 void cpufreq_update_policy(unsigned int cpu)
 {
 	struct cpufreq_policy *policy = cpufreq_cpu_acquire(cpu);
-	struct cpufreq_policy new_policy;
 
 	if (!policy)
 		return;
@@ -2401,16 +2393,11 @@ void cpufreq_update_policy(unsigned int cpu)
 	 * BIOS might change freq behind our back
 	 * -> ask driver for current freq and notify governors about a change
 	 */
-	if (cpufreq_driver->get && !cpufreq_driver->setpolicy &&
-	    (cpufreq_suspended || WARN_ON(!cpufreq_update_current_freq(policy))))
+	if (cpufreq_driver->get && has_target() &&
+	    (cpufreq_suspended || WARN_ON(!cpufreq_verify_current_freq(policy, false))))
 		goto unlock;
 
-	pr_debug("updating policy for CPU %u\n", cpu);
-	memcpy(&new_policy, policy, sizeof(*policy));
-	new_policy.min = policy->user_policy.min;
-	new_policy.max = policy->user_policy.max;
-
-	cpufreq_set_policy(policy, &new_policy);
+	refresh_frequency_limits(policy);
 
 unlock:
 	cpufreq_cpu_release(policy);
 drivers/cpufreq/imx-cpufreq-dt.c (new file, 97 lines)
@@ -0,0 +1,97 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 NXP
+ */
+
+#include <linux/cpu.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/nvmem-consumer.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/pm_opp.h>
+#include <linux/slab.h>
+
+#define OCOTP_CFG3_SPEED_GRADE_SHIFT	8
+#define OCOTP_CFG3_SPEED_GRADE_MASK	(0x3 << 8)
+#define OCOTP_CFG3_MKT_SEGMENT_SHIFT	6
+#define OCOTP_CFG3_MKT_SEGMENT_MASK	(0x3 << 6)
+
+/* cpufreq-dt device registered by imx-cpufreq-dt */
+static struct platform_device *cpufreq_dt_pdev;
+static struct opp_table *cpufreq_opp_table;
+
+static int imx_cpufreq_dt_probe(struct platform_device *pdev)
+{
+	struct device *cpu_dev = get_cpu_device(0);
+	u32 cell_value, supported_hw[2];
+	int speed_grade, mkt_segment;
+	int ret;
+
+	ret = nvmem_cell_read_u32(cpu_dev, "speed_grade", &cell_value);
+	if (ret)
+		return ret;
+
+	speed_grade = (cell_value & OCOTP_CFG3_SPEED_GRADE_MASK) >> OCOTP_CFG3_SPEED_GRADE_SHIFT;
+	mkt_segment = (cell_value & OCOTP_CFG3_MKT_SEGMENT_MASK) >> OCOTP_CFG3_MKT_SEGMENT_SHIFT;
+
+	/*
+	 * Early samples without fuses written report "0 0" which means
+	 * consumer segment and minimum speed grading.
+	 *
+	 * According to datasheet minimum speed grading is not supported for
+	 * consumer parts so clamp to 1 to avoid warning for "no OPPs"
+	 *
+	 * Applies to 8mq and 8mm.
+	 */
+	if (mkt_segment == 0 && speed_grade == 0 && (
+			of_machine_is_compatible("fsl,imx8mm") ||
+			of_machine_is_compatible("fsl,imx8mq")))
+		speed_grade = 1;
+
+	supported_hw[0] = BIT(speed_grade);
+	supported_hw[1] = BIT(mkt_segment);
+	dev_info(&pdev->dev, "cpu speed grade %d mkt segment %d supported-hw %#x %#x\n",
+			speed_grade, mkt_segment, supported_hw[0], supported_hw[1]);
+
+	cpufreq_opp_table = dev_pm_opp_set_supported_hw(cpu_dev, supported_hw, 2);
+	if (IS_ERR(cpufreq_opp_table)) {
+		ret = PTR_ERR(cpufreq_opp_table);
+		dev_err(&pdev->dev, "Failed to set supported opp: %d\n", ret);
+		return ret;
+	}
+
+	cpufreq_dt_pdev = platform_device_register_data(
+			&pdev->dev, "cpufreq-dt", -1, NULL, 0);
+	if (IS_ERR(cpufreq_dt_pdev)) {
+		dev_pm_opp_put_supported_hw(cpufreq_opp_table);
+		ret = PTR_ERR(cpufreq_dt_pdev);
+		dev_err(&pdev->dev, "Failed to register cpufreq-dt: %d\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int imx_cpufreq_dt_remove(struct platform_device *pdev)
+{
+	platform_device_unregister(cpufreq_dt_pdev);
+	dev_pm_opp_put_supported_hw(cpufreq_opp_table);
+
+	return 0;
+}
+
+static struct platform_driver imx_cpufreq_dt_driver = {
+	.probe = imx_cpufreq_dt_probe,
+	.remove = imx_cpufreq_dt_remove,
+	.driver = {
+		.name = "imx-cpufreq-dt",
+	},
+};
+module_platform_driver(imx_cpufreq_dt_driver);
+
+MODULE_ALIAS("platform:imx-cpufreq-dt");
+MODULE_DESCRIPTION("Freescale i.MX cpufreq speed grading driver");
+MODULE_LICENSE("GPL v2");
@@ -582,10 +582,10 @@ static int __init pcc_cpufreq_init(void)
 
 	/* Skip initialization if another cpufreq driver is there. */
 	if (cpufreq_get_current_driver())
-		return 0;
+		return -EEXIST;
 
 	if (acpi_disabled)
-		return 0;
+		return -ENODEV;
 
 	ret = pcc_cpufreq_probe();
 	if (ret) {
 drivers/cpufreq/raspberrypi-cpufreq.c (new file, 97 lines)
@@ -0,0 +1,97 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Raspberry Pi cpufreq driver
+ *
+ * Copyright (C) 2019, Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
+ */
+
+#include <linux/clk.h>
+#include <linux/cpu.h>
+#include <linux/cpufreq.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/pm_opp.h>
+
+#define RASPBERRYPI_FREQ_INTERVAL	100000000
+
+static struct platform_device *cpufreq_dt;
+
+static int raspberrypi_cpufreq_probe(struct platform_device *pdev)
+{
+	struct device *cpu_dev;
+	unsigned long min, max;
+	unsigned long rate;
+	struct clk *clk;
+	int ret;
+
+	cpu_dev = get_cpu_device(0);
+	if (!cpu_dev) {
+		pr_err("Cannot get CPU for cpufreq driver\n");
+		return -ENODEV;
+	}
+
+	clk = clk_get(cpu_dev, NULL);
+	if (IS_ERR(clk)) {
+		dev_err(cpu_dev, "Cannot get clock for CPU0\n");
+		return PTR_ERR(clk);
+	}
+
+	/*
+	 * The max and min frequencies are configurable in the Raspberry Pi
+	 * firmware, so we query them at runtime.
+	 */
+	min = roundup(clk_round_rate(clk, 0), RASPBERRYPI_FREQ_INTERVAL);
+	max = roundup(clk_round_rate(clk, ULONG_MAX), RASPBERRYPI_FREQ_INTERVAL);
+	clk_put(clk);
+
+	for (rate = min; rate <= max; rate += RASPBERRYPI_FREQ_INTERVAL) {
+		ret = dev_pm_opp_add(cpu_dev, rate, 0);
+		if (ret)
+			goto remove_opp;
+	}
+
+	cpufreq_dt = platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
+	ret = PTR_ERR_OR_ZERO(cpufreq_dt);
+	if (ret) {
+		dev_err(cpu_dev, "Failed to create platform device, %d\n", ret);
+		goto remove_opp;
+	}
+
+	return 0;
+
+remove_opp:
+	dev_pm_opp_remove_all_dynamic(cpu_dev);
+
+	return ret;
+}
+
+static int raspberrypi_cpufreq_remove(struct platform_device *pdev)
+{
+	struct device *cpu_dev;
+
+	cpu_dev = get_cpu_device(0);
+	if (cpu_dev)
+		dev_pm_opp_remove_all_dynamic(cpu_dev);
+
+	platform_device_unregister(cpufreq_dt);
+
+	return 0;
+}
+
+/*
+ * Since the driver depends on clk-raspberrypi, which may return EPROBE_DEFER,
+ * all the activity is performed in the probe, which may be deferred as well.
+ */
+static struct platform_driver raspberrypi_cpufreq_driver = {
+	.driver = {
+		.name = "raspberrypi-cpufreq",
+	},
+	.probe = raspberrypi_cpufreq_probe,
+	.remove = raspberrypi_cpufreq_remove,
+};
+module_platform_driver(raspberrypi_cpufreq_driver);
+
+MODULE_AUTHOR("Nicolas Saenz Julienne <nsaenzjulienne@suse.de>");
+MODULE_DESCRIPTION("Raspberry Pi cpufreq driver");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS("platform:raspberrypi-cpufreq");
@@ -478,7 +478,7 @@ static int s5pv210_target(struct cpufreq_policy *policy, unsigned int index)
 				arm_volt, arm_volt_max);
 	}
 
-	printk(KERN_DEBUG "Perf changed[L%d]\n", index);
+	pr_debug("Perf changed[L%d]\n", index);
 
 exit:
 	mutex_unlock(&set_freq_lock);
 
@@ -1406,7 +1406,7 @@ static void __init i8042_register_ports(void)
 		 * behavior on many platforms using suspend-to-RAM (ACPI S3)
 		 * by default.
 		 */
-		if (pm_suspend_via_s2idle() && i == I8042_KBD_PORT_NO)
+		if (pm_suspend_default_s2idle() && i == I8042_KBD_PORT_NO)
 			device_set_wakeup_enable(&serio->dev, true);
 	}
 }
@@ -682,7 +682,7 @@ static int _set_opp_custom(const struct opp_table *opp_table,
 
 	data->old_opp.rate = old_freq;
 	size = sizeof(*old_supply) * opp_table->regulator_count;
-	if (IS_ERR(old_supply))
+	if (!old_supply)
 		memset(data->old_opp.supplies, 0, size);
 	else
 		memcpy(data->old_opp.supplies, old_supply, size);
@@ -708,7 +708,7 @@ static int _set_required_opps(struct device *dev,
 
 	/* Single genpd case */
 	if (!genpd_virt_devs) {
-		pstate = opp->required_opps[0]->pstate;
+		pstate = likely(opp) ? opp->required_opps[0]->pstate : 0;
 		ret = dev_pm_genpd_set_performance_state(dev, pstate);
 		if (ret) {
 			dev_err(dev, "Failed to set performance state of %s: %d (%d)\n",
@@ -726,7 +726,7 @@ static int _set_required_opps(struct device *dev,
 	mutex_lock(&opp_table->genpd_virt_dev_lock);
 
 	for (i = 0; i < opp_table->required_opp_count; i++) {
-		pstate = opp->required_opps[i]->pstate;
+		pstate = likely(opp) ? opp->required_opps[i]->pstate : 0;
 
 		if (!genpd_virt_devs[i])
 			continue;
@@ -748,29 +748,37 @@ static int _set_required_opps(struct device *dev,
  * @dev: device for which we do this operation
  * @target_freq: frequency to achieve
  *
- * This configures the power-supplies and clock source to the levels specified
- * by the OPP corresponding to the target_freq.
+ * This configures the power-supplies to the levels specified by the OPP
+ * corresponding to the target_freq, and programs the clock to a value <=
+ * target_freq, as rounded by clk_round_rate(). Device wanting to run at fmax
+ * provided by the opp, should have already rounded to the target OPP's
+ * frequency.
  */
 int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
 {
 	struct opp_table *opp_table;
-	unsigned long freq, old_freq;
+	unsigned long freq, old_freq, temp_freq;
 	struct dev_pm_opp *old_opp, *opp;
 	struct clk *clk;
 	int ret;
 
-	if (unlikely(!target_freq)) {
-		dev_err(dev, "%s: Invalid target frequency %lu\n", __func__,
-			target_freq);
-		return -EINVAL;
-	}
-
 	opp_table = _find_opp_table(dev);
 	if (IS_ERR(opp_table)) {
 		dev_err(dev, "%s: device opp doesn't exist\n", __func__);
 		return PTR_ERR(opp_table);
 	}
 
+	if (unlikely(!target_freq)) {
+		if (opp_table->required_opp_tables) {
+			ret = _set_required_opps(dev, opp_table, NULL);
+		} else {
+			dev_err(dev, "target frequency can't be 0\n");
+			ret = -EINVAL;
+		}
+
+		goto put_opp_table;
+	}
+
 	clk = opp_table->clk;
 	if (IS_ERR(clk)) {
 		dev_err(dev, "%s: No clock available for the device\n",
@@ -793,13 +801,15 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
 		goto put_opp_table;
 	}
 
-	old_opp = _find_freq_ceil(opp_table, &old_freq);
+	temp_freq = old_freq;
+	old_opp = _find_freq_ceil(opp_table, &temp_freq);
 	if (IS_ERR(old_opp)) {
 		dev_err(dev, "%s: failed to find current OPP for freq %lu (%ld)\n",
 			__func__, old_freq, PTR_ERR(old_opp));
 	}
 
-	opp = _find_freq_ceil(opp_table, &freq);
+	temp_freq = freq;
+	opp = _find_freq_ceil(opp_table, &temp_freq);
 	if (IS_ERR(opp)) {
 		ret = PTR_ERR(opp);
 		dev_err(dev, "%s: failed to find OPP for freq %lu (%d)\n",
@@ -1741,91 +1751,137 @@ void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table)
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_unregister_set_opp_helper);
 
+static void _opp_detach_genpd(struct opp_table *opp_table)
+{
+	int index;
+
+	for (index = 0; index < opp_table->required_opp_count; index++) {
+		if (!opp_table->genpd_virt_devs[index])
+			continue;
+
+		dev_pm_domain_detach(opp_table->genpd_virt_devs[index], false);
+		opp_table->genpd_virt_devs[index] = NULL;
+	}
+
+	kfree(opp_table->genpd_virt_devs);
+	opp_table->genpd_virt_devs = NULL;
+}
+
 /**
- * dev_pm_opp_set_genpd_virt_dev - Set virtual genpd device for an index
- * @dev: Consumer device for which the genpd device is getting set.
- * @virt_dev: virtual genpd device.
- * @index: index.
+ * dev_pm_opp_attach_genpd - Attach genpd(s) for the device and save virtual device pointer
+ * @dev: Consumer device for which the genpd is getting attached.
+ * @names: Null terminated array of pointers containing names of genpd to attach.
  *
  * Multiple generic power domains for a device are supported with the help of
  * virtual genpd devices, which are created for each consumer device - genpd
  * pair. These are the device structures which are attached to the power domain
  * and are required by the OPP core to set the performance state of the genpd.
+ * The same API also works for the case where single genpd is available and so
+ * we don't need to support that separately.
  *
  * This helper will normally be called by the consumer driver of the device
- * "dev", as only that has details of the genpd devices.
+ * "dev", as only that has details of the genpd names.
  *
- * This helper needs to be called once for each of those virtual devices, but
- * only if multiple domains are available for a device. Otherwise the original
- * device structure will be used instead by the OPP core.
+ * This helper needs to be called once with a list of all genpd to attach.
+ * Otherwise the original device structure will be used instead by the OPP core.
  */
-struct opp_table *dev_pm_opp_set_genpd_virt_dev(struct device *dev,
-						struct device *virt_dev,
-						int index)
+struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names)
 {
 	struct opp_table *opp_table;
+	struct device *virt_dev;
+	int index, ret = -EINVAL;
+	const char **name = names;
 
 	opp_table = dev_pm_opp_get_opp_table(dev);
 	if (!opp_table)
 		return ERR_PTR(-ENOMEM);
 
-	mutex_lock(&opp_table->genpd_virt_dev_lock);
-
-	if (unlikely(!opp_table->genpd_virt_devs ||
-		     index >= opp_table->required_opp_count ||
-		     opp_table->genpd_virt_devs[index])) {
-		dev_err(dev, "Invalid request to set required device\n");
-		dev_pm_opp_put_opp_table(opp_table);
-		mutex_unlock(&opp_table->genpd_virt_dev_lock);
-
-		return ERR_PTR(-EINVAL);
+	/*
+	 * If the genpd's OPP table isn't already initialized, parsing of the
+	 * required-opps fail for dev. We should retry this after genpd's OPP
+	 * table is added.
+	 */
+	if (!opp_table->required_opp_count) {
+		ret = -EPROBE_DEFER;
+		goto put_table;
+	}
+
+	mutex_lock(&opp_table->genpd_virt_dev_lock);
+
+	opp_table->genpd_virt_devs = kcalloc(opp_table->required_opp_count,
+					     sizeof(*opp_table->genpd_virt_devs),
+					     GFP_KERNEL);
+	if (!opp_table->genpd_virt_devs)
+		goto unlock;
+
+	while (*name) {
+		index = of_property_match_string(dev->of_node,
+						 "power-domain-names", *name);
+		if (index < 0) {
+			dev_err(dev, "Failed to find power domain: %s (%d)\n",
+				*name, index);
+			goto err;
+		}
+
+		if (index >= opp_table->required_opp_count) {
+			dev_err(dev, "Index can't be greater than required-opp-count - 1, %s (%d : %d)\n",
+				*name, opp_table->required_opp_count, index);
+			goto err;
+		}
+
+		if (opp_table->genpd_virt_devs[index]) {
+			dev_err(dev, "Genpd virtual device already set %s\n",
+				*name);
+			goto err;
+		}
+
+		virt_dev = dev_pm_domain_attach_by_name(dev, *name);
+		if (IS_ERR(virt_dev)) {
+			ret = PTR_ERR(virt_dev);
+			dev_err(dev, "Couldn't attach to pm_domain: %d\n", ret);
+			goto err;
+		}
+
+		opp_table->genpd_virt_devs[index] = virt_dev;
+		name++;
 	}
 
-	opp_table->genpd_virt_devs[index] = virt_dev;
 	mutex_unlock(&opp_table->genpd_virt_dev_lock);
 
 	return opp_table;
+
+err:
+	_opp_detach_genpd(opp_table);
+unlock:
+	mutex_unlock(&opp_table->genpd_virt_dev_lock);
+
+put_table:
+	dev_pm_opp_put_opp_table(opp_table);
+
+	return ERR_PTR(ret);
 }
+EXPORT_SYMBOL_GPL(dev_pm_opp_attach_genpd);
 
 /**
- * dev_pm_opp_put_genpd_virt_dev() - Releases resources blocked for genpd device.
- * @opp_table: OPP table returned by dev_pm_opp_set_genpd_virt_dev().
- * @virt_dev: virtual genpd device.
+ * dev_pm_opp_detach_genpd() - Detach genpd(s) from the device.
+ * @opp_table: OPP table returned by dev_pm_opp_attach_genpd().
  *
- * This releases the resource previously acquired with a call to
- * dev_pm_opp_set_genpd_virt_dev(). The consumer driver shall call this helper
- * if it doesn't want OPP core to update performance state of a power domain
- * anymore.
+ * This detaches the genpd(s), resets the virtual device pointers, and puts the
+ * OPP table.
  */
-void dev_pm_opp_put_genpd_virt_dev(struct opp_table *opp_table,
-				   struct device *virt_dev)
+void dev_pm_opp_detach_genpd(struct opp_table *opp_table)
 {
-	int i;
-
 	/*
 	 * Acquire genpd_virt_dev_lock to make sure virt_dev isn't getting
 	 * used in parallel.
 	 */
 	mutex_lock(&opp_table->genpd_virt_dev_lock);
-	for (i = 0; i < opp_table->required_opp_count; i++) {
-		if (opp_table->genpd_virt_devs[i] != virt_dev)
-			continue;
-
-		opp_table->genpd_virt_devs[i] = NULL;
-		dev_pm_opp_put_opp_table(opp_table);
-
-		/* Drop the vote */
-		dev_pm_genpd_set_performance_state(virt_dev, 0);
-		break;
-	}
+	_opp_detach_genpd(opp_table);
 	mutex_unlock(&opp_table->genpd_virt_dev_lock);
 
-	if (unlikely(i == opp_table->required_opp_count))
-		dev_err(virt_dev, "Failed to find required device entry\n");
+	dev_pm_opp_put_opp_table(opp_table);
 }
+EXPORT_SYMBOL_GPL(dev_pm_opp_detach_genpd);
 
 /**
  * dev_pm_opp_xlate_performance_state() - Find required OPP's pstate for src_table.
@@ -138,7 +138,6 @@ err:
 static void _opp_table_free_required_tables(struct opp_table *opp_table)
 {
 	struct opp_table **required_opp_tables = opp_table->required_opp_tables;
-	struct device **genpd_virt_devs = opp_table->genpd_virt_devs;
 	int i;
 
 	if (!required_opp_tables)
@@ -152,10 +151,8 @@ static void _opp_table_free_required_tables(struct opp_table *opp_table)
 	}
 
 	kfree(required_opp_tables);
-	kfree(genpd_virt_devs);
 
 	opp_table->required_opp_count = 0;
-	opp_table->genpd_virt_devs = NULL;
 	opp_table->required_opp_tables = NULL;
 }
 
@@ -168,9 +165,8 @@ static void _opp_table_alloc_required_tables(struct opp_table *opp_table,
 					     struct device_node *opp_np)
 {
 	struct opp_table **required_opp_tables;
-	struct device **genpd_virt_devs = NULL;
 	struct device_node *required_np, *np;
-	int count, count_pd, i;
+	int count, i;
 
 	/* Traversing the first OPP node is all we need */
 	np = of_get_next_available_child(opp_np, NULL);
@@ -183,33 +179,11 @@ static void _opp_table_alloc_required_tables(struct opp_table *opp_table,
 	if (!count)
 		goto put_np;
 
-	/*
-	 * Check the number of power-domains to know if we need to deal
-	 * with virtual devices. In some cases we have devices with multiple
-	 * power domains but with only one of them being scalable, hence
-	 * 'count' could be 1, but we still have to deal with multiple genpds
-	 * and virtual devices.
-	 */
-	count_pd = of_count_phandle_with_args(dev->of_node, "power-domains",
-					      "#power-domain-cells");
-	if (!count_pd)
-		goto put_np;
-
-	if (count_pd > 1) {
-		genpd_virt_devs = kcalloc(count, sizeof(*genpd_virt_devs),
-					  GFP_KERNEL);
-		if (!genpd_virt_devs)
-			goto put_np;
-	}
-
 	required_opp_tables = kcalloc(count, sizeof(*required_opp_tables),
 				      GFP_KERNEL);
-	if (!required_opp_tables) {
-		kfree(genpd_virt_devs);
+	if (!required_opp_tables)
 		goto put_np;
-	}
 
-	opp_table->genpd_virt_devs = genpd_virt_devs;
 	opp_table->required_opp_tables = required_opp_tables;
 	opp_table->required_opp_count = count;
@@ -685,12 +685,21 @@ static pci_power_t acpi_pci_get_power_state(struct pci_dev *dev)
 	if (!adev || !acpi_device_power_manageable(adev))
 		return PCI_UNKNOWN;
 
-	if (acpi_device_get_power(adev, &state) || state == ACPI_STATE_UNKNOWN)
+	state = adev->power.state;
+	if (state == ACPI_STATE_UNKNOWN)
 		return PCI_UNKNOWN;
 
 	return state_conv[state];
 }
 
+static void acpi_pci_refresh_power_state(struct pci_dev *dev)
+{
+	struct acpi_device *adev = ACPI_COMPANION(&dev->dev);
+
+	if (adev && acpi_device_power_manageable(adev))
+		acpi_device_update_power(adev, NULL);
+}
+
 static int acpi_pci_propagate_wakeup(struct pci_bus *bus, bool enable)
 {
 	while (bus->parent) {
@@ -748,6 +757,7 @@ static const struct pci_platform_pm_ops acpi_pci_platform_pm = {
 	.is_manageable = acpi_pci_power_manageable,
 	.set_state = acpi_pci_set_power_state,
 	.get_state = acpi_pci_get_power_state,
+	.refresh_state = acpi_pci_refresh_power_state,
 	.choose_state = acpi_pci_choose_state,
 	.set_wakeup = acpi_pci_wakeup,
 	.need_resume = acpi_pci_need_resume,
@@ -901,6 +911,7 @@ static void pci_acpi_setup(struct device *dev)
 		device_wakeup_enable(dev);
 
 	acpi_pci_wakeup(pci_dev, false);
+	acpi_device_power_add_dependent(adev, dev);
 }
 
 static void pci_acpi_cleanup(struct device *dev)
@@ -913,6 +924,7 @@ static void pci_acpi_cleanup(struct device *dev)
 
 	pci_acpi_remove_pm_notifier(adev);
 	if (adev->wakeup.flags.valid) {
+		acpi_device_power_remove_dependent(adev, dev);
 		if (pci_dev->bridge_d3)
 			device_wakeup_disable(dev);
@ -678,6 +678,7 @@ static bool pci_has_legacy_pm_support(struct pci_dev *pci_dev)
|
|||||||
static int pci_pm_prepare(struct device *dev)
|
static int pci_pm_prepare(struct device *dev)
|
||||||
{
|
{
|
||||||
struct device_driver *drv = dev->driver;
|
struct device_driver *drv = dev->driver;
|
||||||
|
struct pci_dev *pci_dev = to_pci_dev(dev);
|
||||||
|
|
||||||
if (drv && drv->pm && drv->pm->prepare) {
|
if (drv && drv->pm && drv->pm->prepare) {
|
||||||
int error = drv->pm->prepare(dev);
|
int error = drv->pm->prepare(dev);
|
||||||
@ -687,7 +688,15 @@ static int pci_pm_prepare(struct device *dev)
|
|||||||
if (!error && dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_PREPARE))
|
if (!error && dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_PREPARE))
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
return pci_dev_keep_suspended(to_pci_dev(dev));
|
if (pci_dev_need_resume(pci_dev))
|
||||||
|
return 0;
|
||||||
|
|
||||||
|
/*
|
||||||
|
* The PME setting needs to be adjusted here in case the direct-complete
|
||||||
|
* optimization is used with respect to this device.
|
||||||
|
*/
|
||||||
|
pci_dev_adjust_pme(pci_dev);
|
||||||
|
return 1;
|
||||||
}
|
}
|
||||||
|
|
||||||
static void pci_pm_complete(struct device *dev)
|
static void pci_pm_complete(struct device *dev)
|
||||||
@ -701,7 +710,14 @@ static void pci_pm_complete(struct device *dev)
|
|||||||
if (pm_runtime_suspended(dev) && pm_resume_via_firmware()) {
|
if (pm_runtime_suspended(dev) && pm_resume_via_firmware()) {
|
||||||
pci_power_t pre_sleep_state = pci_dev->current_state;
|
pci_power_t pre_sleep_state = pci_dev->current_state;
|
||||||
|
|
||||||
pci_update_current_state(pci_dev, pci_dev->current_state);
|
pci_refresh_power_state(pci_dev);
|
||||||
|
/*
|
||||||
|
* On platforms with ACPI this check may also trigger for
|
||||||
|
* devices sharing power resources if one of those power
|
||||||
|
* resources has been activated as a result of a change of the
|
||||||
|
* power state of another device sharing it. However, in that
|
||||||
|
* case it is also better to resume the device, in general.
|
||||||
|
*/
|
||||||
if (pci_dev->current_state < pre_sleep_state)
|
if (pci_dev->current_state < pre_sleep_state)
|
||||||
pm_request_resume(dev);
|
pm_request_resume(dev);
|
||||||
}
|
}
|
||||||
@@ -757,9 +773,11 @@ static int pci_pm_suspend(struct device *dev)
 	 * better to resume the device from runtime suspend here.
 	 */
 	if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) ||
-	    !pci_dev_keep_suspended(pci_dev)) {
+	    pci_dev_need_resume(pci_dev)) {
 		pm_runtime_resume(dev);
 		pci_dev->state_saved = false;
+	} else {
+		pci_dev_adjust_pme(pci_dev);
 	}
 
 	if (pm->suspend) {
@@ -994,15 +1012,15 @@ static int pci_pm_freeze(struct device *dev)
 	}
 
 	/*
-	 * This used to be done in pci_pm_prepare() for all devices and some
-	 * drivers may depend on it, so do it here. Ideally, runtime-suspended
-	 * devices should not be touched during freeze/thaw transitions,
-	 * however.
+	 * Resume all runtime-suspended devices before creating a snapshot
+	 * image of system memory, because the restore kernel generally cannot
+	 * be expected to always handle them consistently and they need to be
+	 * put into the runtime-active metastate during system resume anyway,
+	 * so it is better to ensure that the state saved in the image will be
+	 * always consistent with that.
 	 */
-	if (!dev_pm_smart_suspend_and_suspended(dev)) {
-		pm_runtime_resume(dev);
-		pci_dev->state_saved = false;
-	}
+	pm_runtime_resume(dev);
+	pci_dev->state_saved = false;
 
 	if (pm->freeze) {
 		int error;
@@ -1016,22 +1034,11 @@ static int pci_pm_freeze(struct device *dev)
 	return 0;
 }
 
-static int pci_pm_freeze_late(struct device *dev)
-{
-	if (dev_pm_smart_suspend_and_suspended(dev))
-		return 0;
-
-	return pm_generic_freeze_late(dev);
-}
-
 static int pci_pm_freeze_noirq(struct device *dev)
 {
 	struct pci_dev *pci_dev = to_pci_dev(dev);
 	struct device_driver *drv = dev->driver;
 
-	if (dev_pm_smart_suspend_and_suspended(dev))
-		return 0;
-
 	if (pci_has_legacy_pm_support(pci_dev))
 		return pci_legacy_suspend_late(dev, PMSG_FREEZE);
 
@@ -1061,16 +1068,6 @@ static int pci_pm_thaw_noirq(struct device *dev)
 	struct device_driver *drv = dev->driver;
 	int error = 0;
 
-	/*
-	 * If the device is in runtime suspend, the code below may not work
-	 * correctly with it, so skip that code and make the PM core skip all of
-	 * the subsequent "thaw" callbacks for the device.
-	 */
-	if (dev_pm_smart_suspend_and_suspended(dev)) {
-		dev_pm_skip_next_resume_phases(dev);
-		return 0;
-	}
-
 	if (pcibios_pm_ops.thaw_noirq) {
 		error = pcibios_pm_ops.thaw_noirq(dev);
 		if (error)
@@ -1130,10 +1127,13 @@ static int pci_pm_poweroff(struct device *dev)
 
 	/* The reason to do that is the same as in pci_pm_suspend(). */
 	if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) ||
-	    !pci_dev_keep_suspended(pci_dev))
+	    pci_dev_need_resume(pci_dev)) {
 		pm_runtime_resume(dev);
+		pci_dev->state_saved = false;
+	} else {
+		pci_dev_adjust_pme(pci_dev);
+	}
 
-	pci_dev->state_saved = false;
 	if (pm->poweroff) {
 		int error;
 
@@ -1205,10 +1205,6 @@ static int pci_pm_restore_noirq(struct device *dev)
 	struct device_driver *drv = dev->driver;
 	int error = 0;
 
-	/* This is analogous to the pci_pm_resume_noirq() case. */
-	if (dev_pm_smart_suspend_and_suspended(dev))
-		pm_runtime_set_active(dev);
-
 	if (pcibios_pm_ops.restore_noirq) {
 		error = pcibios_pm_ops.restore_noirq(dev);
 		if (error)
@@ -1258,7 +1254,6 @@ static int pci_pm_restore(struct device *dev)
 #else /* !CONFIG_HIBERNATE_CALLBACKS */
 
 #define pci_pm_freeze		NULL
-#define pci_pm_freeze_late	NULL
 #define pci_pm_freeze_noirq	NULL
 #define pci_pm_thaw		NULL
 #define pci_pm_thaw_noirq	NULL
@@ -1384,7 +1379,6 @@ static const struct dev_pm_ops pci_dev_pm_ops = {
 	.suspend_late = pci_pm_suspend_late,
 	.resume = pci_pm_resume,
 	.freeze = pci_pm_freeze,
-	.freeze_late = pci_pm_freeze_late,
 	.thaw = pci_pm_thaw,
 	.poweroff = pci_pm_poweroff,
 	.poweroff_late = pci_pm_poweroff_late,
@@ -777,6 +777,12 @@ static inline pci_power_t platform_pci_get_power_state(struct pci_dev *dev)
 	return pci_platform_pm ? pci_platform_pm->get_state(dev) : PCI_UNKNOWN;
 }
 
+static inline void platform_pci_refresh_power_state(struct pci_dev *dev)
+{
+	if (pci_platform_pm && pci_platform_pm->refresh_state)
+		pci_platform_pm->refresh_state(dev);
+}
+
 static inline pci_power_t platform_pci_choose_state(struct pci_dev *dev)
 {
 	return pci_platform_pm ?
@@ -937,6 +943,21 @@ void pci_update_current_state(struct pci_dev *dev, pci_power_t state)
 	}
 }
 
+/**
+ * pci_refresh_power_state - Refresh the given device's power state data
+ * @dev: Target PCI device.
+ *
+ * Ask the platform to refresh the device's power state information and invoke
+ * pci_update_current_state() to update its current PCI power state.
+ */
+void pci_refresh_power_state(struct pci_dev *dev)
+{
+	if (platform_pci_power_manageable(dev))
+		platform_pci_refresh_power_state(dev);
+
+	pci_update_current_state(dev, dev->current_state);
+}
+
 /**
  * pci_power_up - Put the given device into D0 forcibly
  * @dev: PCI device to power up
@@ -1004,15 +1025,10 @@ static void __pci_start_power_transition(struct pci_dev *dev, pci_power_t state)
 	if (state == PCI_D0) {
 		pci_platform_power_transition(dev, PCI_D0);
 		/*
-		 * Mandatory power management transition delays, see
-		 * PCI Express Base Specification Revision 2.0 Section
-		 * 6.6.1: Conventional Reset. Do not delay for
-		 * devices powered on/off by corresponding bridge,
-		 * because have already delayed for the bridge.
+		 * Mandatory power management transition delays are
+		 * handled in the PCIe portdrv resume hooks.
 		 */
 		if (dev->runtime_d3cold) {
-			if (dev->d3cold_delay && !dev->imm_ready)
-				msleep(dev->d3cold_delay);
 			/*
 			 * When powering on a bridge from D3cold, the
 			 * whole hierarchy may be powered on into
@@ -2065,6 +2081,13 @@ static void pci_pme_list_scan(struct work_struct *work)
 			 */
 			if (bridge && bridge->current_state != PCI_D0)
 				continue;
+			/*
+			 * If the device is in D3cold it should not be
+			 * polled either.
+			 */
+			if (pme_dev->dev->current_state == PCI_D3cold)
+				continue;
+
 			pci_pme_wakeup(pme_dev->dev, NULL);
 		} else {
 			list_del(&pme_dev->list);
@@ -2459,45 +2482,56 @@ bool pci_dev_run_wake(struct pci_dev *dev)
 EXPORT_SYMBOL_GPL(pci_dev_run_wake);
 
 /**
- * pci_dev_keep_suspended - Check if the device can stay in the suspended state.
+ * pci_dev_need_resume - Check if it is necessary to resume the device.
  * @pci_dev: Device to check.
  *
- * Return 'true' if the device is runtime-suspended, it doesn't have to be
+ * Return 'true' if the device is not runtime-suspended or it has to be
  * reconfigured due to wakeup settings difference between system and runtime
- * suspend and the current power state of it is suitable for the upcoming
- * (system) transition.
- *
- * If the device is not configured for system wakeup, disable PME for it before
- * returning 'true' to prevent it from waking up the system unnecessarily.
+ * suspend, or the current power state of it is not suitable for the upcoming
+ * (system-wide) transition.
  */
-bool pci_dev_keep_suspended(struct pci_dev *pci_dev)
+bool pci_dev_need_resume(struct pci_dev *pci_dev)
 {
 	struct device *dev = &pci_dev->dev;
-	bool wakeup = device_may_wakeup(dev);
+	pci_power_t target_state;
 
-	if (!pm_runtime_suspended(dev)
-	    || pci_target_state(pci_dev, wakeup) != pci_dev->current_state
-	    || platform_pci_need_resume(pci_dev))
-		return false;
+	if (!pm_runtime_suspended(dev) || platform_pci_need_resume(pci_dev))
+		return true;
+
+	target_state = pci_target_state(pci_dev, device_may_wakeup(dev));
 
 	/*
-	 * At this point the device is good to go unless it's been configured
-	 * to generate PME at the runtime suspend time, but it is not supposed
-	 * to wake up the system. In that case, simply disable PME for it
-	 * (it will have to be re-enabled on exit from system resume).
-	 *
-	 * If the device's power state is D3cold and the platform check above
-	 * hasn't triggered, the device's configuration is suitable and we don't
-	 * need to manipulate it at all.
+	 * If the earlier platform check has not triggered, D3cold is just power
+	 * removal on top of D3hot, so no need to resume the device in that
+	 * case.
 	 */
+	return target_state != pci_dev->current_state &&
+		target_state != PCI_D3cold &&
+		pci_dev->current_state != PCI_D3hot;
+}
+
+/**
+ * pci_dev_adjust_pme - Adjust PME setting for a suspended device.
+ * @pci_dev: Device to check.
+ *
+ * If the device is suspended and it is not configured for system wakeup,
+ * disable PME for it to prevent it from waking up the system unnecessarily.
+ *
+ * Note that if the device's power state is D3cold and the platform check in
+ * pci_dev_need_resume() has not triggered, the device's configuration need not
+ * be changed.
+ */
+void pci_dev_adjust_pme(struct pci_dev *pci_dev)
+{
+	struct device *dev = &pci_dev->dev;
 
 	spin_lock_irq(&dev->power.lock);
 
-	if (pm_runtime_suspended(dev) && pci_dev->current_state < PCI_D3cold &&
-	    !wakeup)
+	if (pm_runtime_suspended(dev) && !device_may_wakeup(dev) &&
+	    pci_dev->current_state < PCI_D3cold)
 		__pci_pme_active(pci_dev, false);
 
 	spin_unlock_irq(&dev->power.lock);
-
-	return true;
 }
 
 /**
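The hunk above replaces pci_dev_keep_suspended() with the inverted pci_dev_need_resume() predicate. As a rough illustration of that decision logic, here is a minimal user-space sketch; `struct dev_state` and its fields are hypothetical stand-ins for the kernel-side queries (pm_runtime_suspended(), platform_pci_need_resume(), pci_target_state()), not real kernel API:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified PCI power states, ordered as in the kernel's pci_power_t. */
enum pci_power { PCI_D0, PCI_D1, PCI_D2, PCI_D3hot, PCI_D3cold };

/* Hypothetical stand-in for the per-device state the kernel queries. */
struct dev_state {
	bool runtime_suspended;     /* pm_runtime_suspended() */
	bool platform_need_resume;  /* platform_pci_need_resume() */
	enum pci_power current_state;
	enum pci_power target_state; /* pci_target_state() result */
};

/* Mirrors the shape of pci_dev_need_resume(): resume unless the device is
 * runtime-suspended in a state that also fits the system-wide transition,
 * treating D3cold as mere power removal on top of D3hot. */
static bool need_resume(const struct dev_state *s)
{
	if (!s->runtime_suspended || s->platform_need_resume)
		return true;

	return s->target_state != s->current_state &&
	       s->target_state != PCI_D3cold &&
	       s->current_state != PCI_D3hot;
}
```

For example, a device runtime-suspended in D3hot with a target state of D3cold does not need a resume (power removal only), while one sitting in D0 with a D3hot target does.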
@@ -4568,14 +4602,16 @@ static int pci_pm_reset(struct pci_dev *dev, int probe)
 
 	return pci_dev_wait(dev, "PM D3->D0", PCIE_RESET_READY_POLL_MS);
 }
 
 /**
- * pcie_wait_for_link - Wait until link is active or inactive
+ * pcie_wait_for_link_delay - Wait until link is active or inactive
  * @pdev: Bridge device
  * @active: waiting for active or inactive?
+ * @delay: Delay to wait after link has become active (in ms)
  *
  * Use this to wait till link becomes active or inactive.
  */
-bool pcie_wait_for_link(struct pci_dev *pdev, bool active)
+bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active, int delay)
 {
 	int timeout = 1000;
 	bool ret;
@@ -4612,13 +4648,25 @@ bool pcie_wait_for_link(struct pci_dev *pdev, bool active)
 		timeout -= 10;
 	}
 	if (active && ret)
-		msleep(100);
+		msleep(delay);
 	else if (ret != active)
 		pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n",
 			active ? "set" : "cleared");
 	return ret == active;
 }
 
+/**
+ * pcie_wait_for_link - Wait until link is active or inactive
+ * @pdev: Bridge device
+ * @active: waiting for active or inactive?
+ *
+ * Use this to wait till link becomes active or inactive.
+ */
+bool pcie_wait_for_link(struct pci_dev *pdev, bool active)
+{
+	return pcie_wait_for_link_delay(pdev, active, 100);
+}
+
 void pci_reset_secondary_bus(struct pci_dev *dev)
 {
 	u16 ctrl;
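The hunks above keep the 10 ms polling loop but turn the post-training settle time into a parameter, with pcie_wait_for_link() becoming a thin wrapper that passes the old 100 ms default. A rough user-space sketch of that poll-then-settle shape, assuming a hypothetical `struct link` in place of the Data Link Layer Link Active status bit and an output parameter in place of the real msleep():

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical link model: reports "trained" after a given poll count. */
struct link { int polls_until_up; };

static bool link_active(struct link *l)
{
	if (l->polls_until_up > 0) {
		l->polls_until_up--;
		return false;
	}
	return true;
}

/* Sketch of the pcie_wait_for_link_delay() shape: poll in 10 ms steps
 * against a 1000 ms budget, then (conceptually) sleep @delay once the
 * link is up. Returns true when the observed state matches @active. */
static bool wait_for_link_delay(struct link *l, bool active, int delay,
				int *slept_ms)
{
	int timeout = 1000;
	bool ret = false;

	while (timeout > 0) {
		ret = link_active(l);
		if (ret == active)
			break;
		timeout -= 10; /* stands in for msleep(10) between reads */
	}
	if (active && ret)
		*slept_ms = delay; /* stands in for msleep(delay) */
	return ret == active;
}
```

The wrapper-with-default pattern lets existing callers keep the spec-mandated 100 ms while portdrv can pass a bus-specific delay.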
@@ -51,6 +51,8 @@ int pci_bus_error_reset(struct pci_dev *dev);
  *
  * @get_state: queries the platform firmware for a device's current power state
  *
+ * @refresh_state: asks the platform to refresh the device's power state data
+ *
  * @choose_state: returns PCI power state of given device preferred by the
  *                platform; to be used during system-wide transitions from a
  *                sleeping state to the working state and vice versa
@@ -69,6 +71,7 @@ struct pci_platform_pm_ops {
 	bool (*is_manageable)(struct pci_dev *dev);
 	int (*set_state)(struct pci_dev *dev, pci_power_t state);
 	pci_power_t (*get_state)(struct pci_dev *dev);
+	void (*refresh_state)(struct pci_dev *dev);
 	pci_power_t (*choose_state)(struct pci_dev *dev);
 	int (*set_wakeup)(struct pci_dev *dev, bool enable);
 	bool (*need_resume)(struct pci_dev *dev);
@@ -76,13 +79,15 @@ struct pci_platform_pm_ops {
 
 int pci_set_platform_pm(const struct pci_platform_pm_ops *ops);
 void pci_update_current_state(struct pci_dev *dev, pci_power_t state);
+void pci_refresh_power_state(struct pci_dev *dev);
 void pci_power_up(struct pci_dev *dev);
 void pci_disable_enabled_device(struct pci_dev *dev);
 int pci_finish_runtime_suspend(struct pci_dev *dev);
 void pcie_clear_root_pme_status(struct pci_dev *dev);
 int __pci_pme_wakeup(struct pci_dev *dev, void *ign);
 void pci_pme_restore(struct pci_dev *dev);
-bool pci_dev_keep_suspended(struct pci_dev *dev);
+bool pci_dev_need_resume(struct pci_dev *dev);
+void pci_dev_adjust_pme(struct pci_dev *dev);
 void pci_dev_complete_resume(struct pci_dev *pci_dev);
 void pci_config_pm_runtime_get(struct pci_dev *dev);
 void pci_config_pm_runtime_put(struct pci_dev *dev);
@@ -493,6 +498,7 @@ static inline int pci_dev_specific_disable_acs_redir(struct pci_dev *dev)
 void pcie_do_recovery(struct pci_dev *dev, enum pci_channel_state state,
 		      u32 service);
 
+bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active, int delay);
 bool pcie_wait_for_link(struct pci_dev *pdev, bool active);
 #ifdef CONFIG_PCIEASPM
 void pcie_aspm_init_link_state(struct pci_dev *pdev);
@@ -9,6 +9,7 @@
 #include <linux/module.h>
 #include <linux/pci.h>
 #include <linux/kernel.h>
+#include <linux/delay.h>
 #include <linux/errno.h>
 #include <linux/pm.h>
 #include <linux/pm_runtime.h>
@@ -378,6 +379,67 @@ static int pm_iter(struct device *dev, void *data)
 	return 0;
 }
 
+static int get_downstream_delay(struct pci_bus *bus)
+{
+	struct pci_dev *pdev;
+	int min_delay = 100;
+	int max_delay = 0;
+
+	list_for_each_entry(pdev, &bus->devices, bus_list) {
+		if (!pdev->imm_ready)
+			min_delay = 0;
+		else if (pdev->d3cold_delay < min_delay)
+			min_delay = pdev->d3cold_delay;
+		if (pdev->d3cold_delay > max_delay)
+			max_delay = pdev->d3cold_delay;
+	}
+
+	return max(min_delay, max_delay);
+}
+
+/*
+ * wait_for_downstream_link - Wait for downstream link to establish
+ * @pdev: PCIe port whose downstream link is waited
+ *
+ * Handle delays according to PCIe 4.0 section 6.6.1 before configuration
+ * access to the downstream component is permitted.
+ *
+ * This blocks PCI core resume of the hierarchy below this port until the
+ * link is trained. Should be called before resuming port services to
+ * prevent pciehp from starting to tear-down the hierarchy too soon.
+ */
+static void wait_for_downstream_link(struct pci_dev *pdev)
+{
+	int delay;
+
+	if (pci_pcie_type(pdev) != PCI_EXP_TYPE_ROOT_PORT &&
+	    pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM)
+		return;
+
+	if (pci_dev_is_disconnected(pdev))
+		return;
+
+	if (!pdev->subordinate || list_empty(&pdev->subordinate->devices) ||
+	    !pdev->bridge_d3)
+		return;
+
+	delay = get_downstream_delay(pdev->subordinate);
+	if (!delay)
+		return;
+
+	dev_dbg(&pdev->dev, "waiting downstream link for %d ms\n", delay);
+
+	/*
+	 * If downstream port does not support speeds greater than 5 GT/s
+	 * need to wait 100ms. For higher speeds (gen3) we need to wait
+	 * first for the data link layer to become active.
+	 */
+	if (pcie_get_speed_cap(pdev) <= PCIE_SPEED_5_0GT)
+		msleep(delay);
+	else
+		pcie_wait_for_link_delay(pdev, true, delay);
+}
+
 /**
  * pcie_port_device_suspend - suspend port services associated with a PCIe port
  * @dev: PCI Express port to handle
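The get_downstream_delay() helper added above picks the delay to honour before configuration access to the devices below a port: start from the 100 ms spec minimum, let any device without Immediate Readiness drop the minimum to 0, and never undercut the largest per-device requirement. A user-space sketch of the same min/max computation, using a hypothetical `struct child` array in place of the `struct pci_bus` device list:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-device view of the fields get_downstream_delay() reads. */
struct child {
	bool imm_ready;   /* device is ready immediately after power-on */
	int d3cold_delay; /* required D3cold-to-D0 delay in ms */
};

/* Mirrors get_downstream_delay(): start from the spec-mandated 100 ms
 * minimum, let any device without Immediate Readiness drop it to 0, and
 * honour the largest per-device delay on the bus. */
static int downstream_delay(const struct child *devs, size_t n)
{
	int min_delay = 100;
	int max_delay = 0;

	for (size_t i = 0; i < n; i++) {
		if (!devs[i].imm_ready)
			min_delay = 0;
		else if (devs[i].d3cold_delay < min_delay)
			min_delay = devs[i].d3cold_delay;
		if (devs[i].d3cold_delay > max_delay)
			max_delay = devs[i].d3cold_delay;
	}

	return max_delay > min_delay ? max_delay : min_delay;
}
```

So a bus whose every child declares Immediate Readiness and a zero delay needs no wait at all, while one slow device stretches the wait for the whole bus.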
@@ -391,6 +453,8 @@ int pcie_port_device_suspend(struct device *dev)
 int pcie_port_device_resume_noirq(struct device *dev)
 {
 	size_t off = offsetof(struct pcie_port_service_driver, resume_noirq);
+
+	wait_for_downstream_link(to_pci_dev(dev));
 	return device_for_each_child(dev, &off, pm_iter);
 }
 
@@ -421,6 +485,8 @@ int pcie_port_device_runtime_suspend(struct device *dev)
 int pcie_port_device_runtime_resume(struct device *dev)
 {
 	size_t off = offsetof(struct pcie_port_service_driver, runtime_resume);
+
+	wait_for_downstream_link(to_pci_dev(dev));
 	return device_for_each_child(dev, &off, pm_iter);
 }
 #endif /* PM */
@@ -899,38 +899,19 @@ static int omap_sr_probe(struct platform_device *pdev)
 	}
 
 	dev_info(&pdev->dev, "%s: SmartReflex driver initialized\n", __func__);
-	if (!sr_dbg_dir) {
+	if (!sr_dbg_dir)
 		sr_dbg_dir = debugfs_create_dir("smartreflex", NULL);
-		if (IS_ERR_OR_NULL(sr_dbg_dir)) {
-			ret = PTR_ERR(sr_dbg_dir);
-			pr_err("%s:sr debugfs dir creation failed(%d)\n",
-				__func__, ret);
-			goto err_list_del;
-		}
-	}
 
 	sr_info->dbg_dir = debugfs_create_dir(sr_info->name, sr_dbg_dir);
-	if (IS_ERR_OR_NULL(sr_info->dbg_dir)) {
-		dev_err(&pdev->dev, "%s: Unable to create debugfs directory\n",
-			__func__);
-		ret = PTR_ERR(sr_info->dbg_dir);
-		goto err_debugfs;
-	}
 
-	(void) debugfs_create_file("autocomp", S_IRUGO | S_IWUSR,
-			sr_info->dbg_dir, (void *)sr_info, &pm_sr_fops);
-	(void) debugfs_create_x32("errweight", S_IRUGO, sr_info->dbg_dir,
+	debugfs_create_file("autocomp", S_IRUGO | S_IWUSR, sr_info->dbg_dir,
+			    (void *)sr_info, &pm_sr_fops);
+	debugfs_create_x32("errweight", S_IRUGO, sr_info->dbg_dir,
 			   &sr_info->err_weight);
-	(void) debugfs_create_x32("errmaxlimit", S_IRUGO, sr_info->dbg_dir,
+	debugfs_create_x32("errmaxlimit", S_IRUGO, sr_info->dbg_dir,
 			   &sr_info->err_maxlimit);
 
 	nvalue_dir = debugfs_create_dir("nvalue", sr_info->dbg_dir);
-	if (IS_ERR_OR_NULL(nvalue_dir)) {
-		dev_err(&pdev->dev, "%s: Unable to create debugfs directory for n-values\n",
-			__func__);
-		ret = PTR_ERR(nvalue_dir);
-		goto err_debugfs;
-	}
 
 	if (sr_info->nvalue_count == 0 || !sr_info->nvalue_table) {
 		dev_warn(&pdev->dev, "%s: %s: No Voltage table for the corresponding vdd. Cannot create debugfs entries for n-values\n",
@@ -945,12 +926,12 @@ static int omap_sr_probe(struct platform_device *pdev)
 
 		snprintf(name, sizeof(name), "volt_%lu",
 			 sr_info->nvalue_table[i].volt_nominal);
-		(void) debugfs_create_x32(name, S_IRUGO | S_IWUSR, nvalue_dir,
+		debugfs_create_x32(name, S_IRUGO | S_IWUSR, nvalue_dir,
 				&(sr_info->nvalue_table[i].nvalue));
 		snprintf(name, sizeof(name), "errminlimit_%lu",
 			 sr_info->nvalue_table[i].volt_nominal);
-		(void) debugfs_create_x32(name, S_IRUGO | S_IWUSR, nvalue_dir,
+		debugfs_create_x32(name, S_IRUGO | S_IWUSR, nvalue_dir,
 				&(sr_info->nvalue_table[i].errminlimit));
 
 	}
 
@@ -103,6 +103,9 @@ static int __init imx8_soc_init(void)
 	if (IS_ERR(soc_dev))
 		goto free_rev;
 
+	if (IS_ENABLED(CONFIG_ARM_IMX_CPUFREQ_DT))
+		platform_device_register_simple("imx-cpufreq-dt", -1, NULL, 0);
+
 	return 0;
 
 free_rev:
@@ -506,13 +506,16 @@ int acpi_bus_get_status(struct acpi_device *device);
 
 int acpi_bus_set_power(acpi_handle handle, int state);
 const char *acpi_power_state_string(int state);
-int acpi_device_get_power(struct acpi_device *device, int *state);
 int acpi_device_set_power(struct acpi_device *device, int state);
 int acpi_bus_init_power(struct acpi_device *device);
 int acpi_device_fix_up_power(struct acpi_device *device);
 int acpi_bus_update_power(acpi_handle handle, int *state_p);
 int acpi_device_update_power(struct acpi_device *device, int *state_p);
 bool acpi_bus_power_manageable(acpi_handle handle);
+int acpi_device_power_add_dependent(struct acpi_device *adev,
+				    struct device *dev);
+void acpi_device_power_remove_dependent(struct acpi_device *adev,
+					struct device *dev);
 
 #ifdef CONFIG_PM
 bool acpi_bus_can_wakeup(acpi_handle handle);
@@ -651,6 +654,12 @@ static inline int acpi_pm_set_bridge_wakeup(struct device *dev, bool enable)
 }
 #endif
 
+#ifdef CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT
+bool acpi_sleep_state_supported(u8 sleep_state);
+#else
+static inline bool acpi_sleep_state_supported(u8 sleep_state) { return false; }
+#endif
+
 #ifdef CONFIG_ACPI_SLEEP
 u32 acpi_target_system_state(void);
 #else
@@ -920,31 +920,21 @@ static inline int acpi_dev_pm_attach(struct device *dev, bool power_on)
 #endif
 
 #if defined(CONFIG_ACPI) && defined(CONFIG_PM_SLEEP)
-int acpi_dev_suspend_late(struct device *dev);
 int acpi_subsys_prepare(struct device *dev);
 void acpi_subsys_complete(struct device *dev);
 int acpi_subsys_suspend_late(struct device *dev);
 int acpi_subsys_suspend_noirq(struct device *dev);
-int acpi_subsys_resume_noirq(struct device *dev);
-int acpi_subsys_resume_early(struct device *dev);
 int acpi_subsys_suspend(struct device *dev);
 int acpi_subsys_freeze(struct device *dev);
-int acpi_subsys_freeze_late(struct device *dev);
-int acpi_subsys_freeze_noirq(struct device *dev);
-int acpi_subsys_thaw_noirq(struct device *dev);
+int acpi_subsys_poweroff(struct device *dev);
 #else
-static inline int acpi_dev_resume_early(struct device *dev) { return 0; }
 static inline int acpi_subsys_prepare(struct device *dev) { return 0; }
 static inline void acpi_subsys_complete(struct device *dev) {}
 static inline int acpi_subsys_suspend_late(struct device *dev) { return 0; }
 static inline int acpi_subsys_suspend_noirq(struct device *dev) { return 0; }
-static inline int acpi_subsys_resume_noirq(struct device *dev) { return 0; }
-static inline int acpi_subsys_resume_early(struct device *dev) { return 0; }
 static inline int acpi_subsys_suspend(struct device *dev) { return 0; }
 static inline int acpi_subsys_freeze(struct device *dev) { return 0; }
-static inline int acpi_subsys_freeze_late(struct device *dev) { return 0; }
-static inline int acpi_subsys_freeze_noirq(struct device *dev) { return 0; }
-static inline int acpi_subsys_thaw_noirq(struct device *dev) { return 0; }
+static inline int acpi_subsys_poweroff(struct device *dev) { return 0; }
 #endif
 
 #ifdef CONFIG_ACPI
@@ -406,6 +406,12 @@ int cpufreq_unregister_driver(struct cpufreq_driver *driver_data);
 const char *cpufreq_get_current_driver(void);
 void *cpufreq_get_driver_data(void);
 
+static inline int cpufreq_thermal_control_enabled(struct cpufreq_driver *drv)
+{
+	return IS_ENABLED(CONFIG_CPU_THERMAL) &&
+		(drv->flags & CPUFREQ_IS_COOLING_DEV);
+}
+
 static inline void cpufreq_verify_within_limits(struct cpufreq_policy *policy,
 		unsigned int min, unsigned int max)
 {
@@ -760,7 +760,6 @@ extern int pm_generic_poweroff_late(struct device *dev);
 extern int pm_generic_poweroff(struct device *dev);
 extern void pm_generic_complete(struct device *dev);
 
-extern void dev_pm_skip_next_resume_phases(struct device *dev);
 extern bool dev_pm_may_skip_resume(struct device *dev);
 extern bool dev_pm_smart_suspend_and_suspended(struct device *dev);
 
@@ -128,8 +128,8 @@ struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const char * name);
 void dev_pm_opp_put_clkname(struct opp_table *opp_table);
 struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data));
 void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table);
-struct opp_table *dev_pm_opp_set_genpd_virt_dev(struct device *dev, struct device *virt_dev, int index);
-void dev_pm_opp_put_genpd_virt_dev(struct opp_table *opp_table, struct device *virt_dev);
+struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names);
+void dev_pm_opp_detach_genpd(struct opp_table *opp_table);
 int dev_pm_opp_xlate_performance_state(struct opp_table *src_table, struct opp_table *dst_table, unsigned int pstate);
 int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq);
 int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, const struct cpumask *cpumask);
@@ -292,12 +292,12 @@ static inline struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const
 
 static inline void dev_pm_opp_put_clkname(struct opp_table *opp_table) {}
 
-static inline struct opp_table *dev_pm_opp_set_genpd_virt_dev(struct device *dev, struct device *virt_dev, int index)
+static inline struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names)
 {
 	return ERR_PTR(-ENOTSUPP);
 }
 
-static inline void dev_pm_opp_put_genpd_virt_dev(struct opp_table *opp_table, struct device *virt_dev) {}
+static inline void dev_pm_opp_detach_genpd(struct opp_table *opp_table) {}
 
 static inline int dev_pm_opp_xlate_performance_state(struct opp_table *src_table, struct opp_table *dst_table, unsigned int pstate)
 {
@@ -36,7 +36,7 @@ struct wake_irq;
  * @expire_count: Number of times the wakeup source's timeout has expired.
  * @wakeup_count: Number of times the wakeup source might abort suspend.
  * @active: Status of the wakeup source.
- * @has_timeout: The wakeup source has been activated with a timeout.
+ * @autosleep_enabled: Autosleep is active, so update @prevent_sleep_time.
  */
 struct wakeup_source {
 	const char *name;
@@ -304,7 +304,7 @@ static inline bool idle_should_enter_s2idle(void)
 	return unlikely(s2idle_state == S2IDLE_STATE_ENTER);
 }
 
-extern bool pm_suspend_via_s2idle(void);
+extern bool pm_suspend_default_s2idle(void);
 extern void __init pm_states_init(void);
 extern void s2idle_set_ops(const struct platform_s2idle_ops *ops);
 extern void s2idle_wake(void);
@@ -336,7 +336,7 @@ static inline void pm_set_suspend_via_firmware(void) {}
 static inline void pm_set_resume_via_firmware(void) {}
 static inline bool pm_suspend_via_firmware(void) { return false; }
 static inline bool pm_resume_via_firmware(void) { return false; }
-static inline bool pm_suspend_via_s2idle(void) { return false; }
+static inline bool pm_suspend_default_s2idle(void) { return false; }
 
 static inline void suspend_set_ops(const struct platform_suspend_ops *ops) {}
 static inline int pm_suspend(suspend_state_t state) { return -ENOSYS; }
@@ -448,6 +448,7 @@ extern bool system_entering_hibernation(void);
 extern bool hibernation_available(void);
 asmlinkage int swsusp_save(void);
 extern struct pbe *restore_pblist;
+int pfn_is_nosave(unsigned long pfn);
 #else /* CONFIG_HIBERNATION */
 static inline void register_nosave_region(unsigned long b, unsigned long e) {}
 static inline void register_nosave_region_late(unsigned long b, unsigned long e) {}
@@ -75,8 +75,6 @@ static inline void hibernate_reserved_size_init(void) {}
 static inline void hibernate_image_size_init(void) {}
 #endif /* !CONFIG_HIBERNATION */
 
-extern int pfn_is_nosave(unsigned long);
-
 #define power_attr(_name) \
 static struct kobj_attribute _name##_attr = {	\
 	.attr	= {				\
@@ -62,16 +62,16 @@ enum s2idle_states __read_mostly s2idle_state;
 static DEFINE_RAW_SPINLOCK(s2idle_lock);
 
 /**
- * pm_suspend_via_s2idle - Check if suspend-to-idle is the default suspend.
+ * pm_suspend_default_s2idle - Check if suspend-to-idle is the default suspend.
  *
  * Return 'true' if suspend-to-idle has been selected as the default system
  * suspend method.
  */
-bool pm_suspend_via_s2idle(void)
+bool pm_suspend_default_s2idle(void)
 {
 	return mem_sleep_current == PM_SUSPEND_TO_IDLE;
 }
-EXPORT_SYMBOL_GPL(pm_suspend_via_s2idle);
+EXPORT_SYMBOL_GPL(pm_suspend_default_s2idle);
 
 void s2idle_set_ops(const struct platform_s2idle_ops *ops)
 {
@@ -974,12 +974,11 @@ static int get_swap_reader(struct swap_map_handle *handle,
 	last = handle->maps = NULL;
 	offset = swsusp_header->image;
 	while (offset) {
-		tmp = kmalloc(sizeof(*handle->maps), GFP_KERNEL);
+		tmp = kzalloc(sizeof(*handle->maps), GFP_KERNEL);
 		if (!tmp) {
 			release_swap_reader(handle);
 			return -ENOMEM;
 		}
-		memset(tmp, 0, sizeof(*tmp));
 		if (!handle->maps)
 			handle->maps = tmp;
 		if (last)
@@ -61,7 +61,7 @@ Only display specific monitors. Use the monitor string(s) provided by \-l option
 .PP
 \-i seconds
 .RS 4
-Measure intervall.
+Measure interval.
 .RE
 .PP
 \-c
@@ -98,7 +98,7 @@ msgstr ""
 
 #: utils/idle_monitor/cpupower-monitor.c:74
 #, c-format
-msgid "\t -i: time intervall to measure for in seconds (default 1)\n"
+msgid "\t -i: time interval to measure for in seconds (default 1)\n"
 msgstr ""
 
 #: utils/idle_monitor/cpupower-monitor.c:75
@@ -95,7 +95,7 @@ msgstr ""
 
 #: utils/idle_monitor/cpupower-monitor.c:74
 #, c-format
-msgid "\t -i: time intervall to measure for in seconds (default 1)\n"
+msgid "\t -i: time interval to measure for in seconds (default 1)\n"
 msgstr ""
 
 #: utils/idle_monitor/cpupower-monitor.c:75
@@ -95,7 +95,7 @@ msgstr ""
 
 #: utils/idle_monitor/cpupower-monitor.c:74
 #, c-format
-msgid "\t -i: time intervall to measure for in seconds (default 1)\n"
+msgid "\t -i: time interval to measure for in seconds (default 1)\n"
 msgstr ""
 
 #: utils/idle_monitor/cpupower-monitor.c:75
@@ -95,7 +95,7 @@ msgstr ""
 
 #: utils/idle_monitor/cpupower-monitor.c:74
 #, c-format
-msgid "\t -i: time intervall to measure for in seconds (default 1)\n"
+msgid "\t -i: time interval to measure for in seconds (default 1)\n"
 msgstr ""
 
 #: utils/idle_monitor/cpupower-monitor.c:75
@@ -93,7 +93,7 @@ msgstr ""
 
 #: utils/idle_monitor/cpupower-monitor.c:74
 #, c-format
-msgid "\t -i: time intervall to measure for in seconds (default 1)\n"
+msgid "\t -i: time interval to measure for in seconds (default 1)\n"
 msgstr ""
 
 #: utils/idle_monitor/cpupower-monitor.c:75
@@ -305,6 +305,8 @@ int cmd_freq_set(int argc, char **argv)
 			bitmask_setbit(cpus_chosen, cpus->cpu);
 			cpus = cpus->next;
 		}
+		/* Set the last cpu in related cpus list */
+		bitmask_setbit(cpus_chosen, cpus->cpu);
 		cpufreq_put_related_cpus(cpus);
 	}
 }
tools/power/pm-graph/README (new file, 552 lines)
@@ -0,0 +1,552 @@
                          p m - g r a p h

   pm-graph: suspend/resume/boot timing analysis tools
    Version: 5.4
    Author: Todd Brandt <todd.e.brandt@intel.com>
    Home Page: https://01.org/pm-graph

 Report bugs/issues at bugzilla.kernel.org Tools/pm-graph
  - https://bugzilla.kernel.org/buglist.cgi?component=pm-graph&product=Tools

 Full documentation available online & in man pages
  - Getting Started:
    https://01.org/pm-graph/documentation/getting-started

  - Config File Format:
    https://01.org/pm-graph/documentation/3-config-file-format

  - upstream version in git:
    https://github.com/intel/pm-graph/

 Table of Contents
  - Overview
  - Setup
  - Usage
    - Basic Usage
    - Dev Mode Usage
    - Proc Mode Usage
  - Configuration Files
    - Usage Examples
    - Config File Options
  - Custom Timeline Entries
    - Adding/Editing Timeline Functions
    - Adding/Editing Dev Timeline Source Functions
    - Verifying your Custom Functions
  - Testing on consumer linux Operating Systems
    - Android

 ------------------------------------------------------------------
 |                            OVERVIEW                            |
 ------------------------------------------------------------------

 This tool suite is designed to assist kernel and OS developers in optimizing
 their linux stack's suspend/resume & boot time. Using a kernel image built
 with a few extra options enabled, the tools will execute a suspend or boot,
 and will capture dmesg and ftrace data. This data is transformed into a set of
 timelines and a callgraph to give a quick and detailed view of which devices
 and kernel processes are taking the most time in suspend/resume & boot.

 ------------------------------------------------------------------
 |                             SETUP                              |
 ------------------------------------------------------------------

 These packages are required to execute the scripts
    - python
    - python-requests

 Ubuntu:
    sudo apt-get install python python-requests

 Fedora:
    sudo dnf install python python-requests

 The tools can most easily be installed via git clone and make install

    $> git clone http://github.com/intel/pm-graph.git
    $> cd pm-graph
    $> sudo make install
    $> man sleepgraph ; man bootgraph

 Setup involves some minor kernel configuration

 The following kernel build options are required for all kernels:
    CONFIG_DEVMEM=y
    CONFIG_PM_DEBUG=y
    CONFIG_PM_SLEEP_DEBUG=y
    CONFIG_FTRACE=y
    CONFIG_FUNCTION_TRACER=y
    CONFIG_FUNCTION_GRAPH_TRACER=y
    CONFIG_KPROBES=y
    CONFIG_KPROBES_ON_FTRACE=y
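As a quick sanity check, the option list above can be compared against a kernel config file. This is a hedged sketch, not part of pm-graph; the helper name and the idea of reading a `.config`-style text are illustrative only:

```python
# Sketch: report which of the kernel options required by pm-graph
# (per the list above) are not enabled in a kernel .config text.
REQUIRED = [
    "CONFIG_DEVMEM", "CONFIG_PM_DEBUG", "CONFIG_PM_SLEEP_DEBUG",
    "CONFIG_FTRACE", "CONFIG_FUNCTION_TRACER",
    "CONFIG_FUNCTION_GRAPH_TRACER", "CONFIG_KPROBES",
    "CONFIG_KPROBES_ON_FTRACE",
]

def missing_options(config_text, required=REQUIRED):
    """Return the required options not set to 'y' in kernel config text."""
    enabled = set()
    for line in config_text.splitlines():
        line = line.strip()
        # skip comments like "# CONFIG_FOO is not set"
        if "=" in line and not line.startswith("#"):
            name, _, value = line.partition("=")
            if value == "y":
                enabled.add(name)
    return [opt for opt in required if opt not in enabled]

# Example with an inline config snippet (on a real system you might
# read /boot/config-$(uname -r) instead -- path varies by distro):
print(missing_options("CONFIG_DEVMEM=y\n# CONFIG_KPROBES is not set\n"))
```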
 In kernel 3.15.0, two patches were upstreamed which enable the
 v3.0 behavior. These patches allow the tool to read all the
 data from trace events instead of from dmesg. You can enable
 this behavior on earlier kernels with these patches:

    (kernel/pre-3.15/enable_trace_events_suspend_resume.patch)
    (kernel/pre-3.15/enable_trace_events_device_pm_callback.patch)

 If you're using a kernel older than 3.15.0, the following
 additional kernel parameters are required:
    (e.g. in file /etc/default/grub)
    GRUB_CMDLINE_LINUX_DEFAULT="... initcall_debug log_buf_len=32M ..."

 If you're using a kernel older than 3.11-rc2, the following simple
 patch must be applied to enable ftrace data:
    in file: kernel/power/suspend.c
    in function: int suspend_devices_and_enter(suspend_state_t state)
    remove call to "ftrace_stop();"
    remove call to "ftrace_start();"

 There is a patch which does this for kernel v3.8.0:
    (kernel/pre-3.11-rc2/enable_ftrace_in_suspendresume.patch)
 ------------------------------------------------------------------
 |                             USAGE                              |
 ------------------------------------------------------------------

 Basic Usage
 ___________

 1) First configure a kernel using the instructions from the previous sections.
    Then build, install, and boot with it.
 2) Open up a terminal window and execute the mode list command:

    %> sudo ./sleepgraph.py -modes
    ['freeze', 'mem', 'disk']

 Execute a test using one of the available power modes, e.g. mem (S3):

    %> sudo ./sleepgraph.py -m mem -rtcwake 15

    or with a config file

    %> sudo ./sleepgraph.py -config config/suspend.cfg

 When the system comes back you'll see the script finishing up and
 creating the output files in the test subdir. It generates output
 files in subdirectory: suspend-mmddyy-HHMMSS. The ftrace file can
 be used to regenerate the html timeline with different options.

    HTML output:         <hostname>_<mode>.html
    raw dmesg output:    <hostname>_<mode>_dmesg.txt
    raw ftrace output:   <hostname>_<mode>_ftrace.txt

 View the html in firefox or chrome.
 Dev Mode Usage
 ______________

 Developer mode adds information on low level source calls to the timeline.
 The tool sets kprobes on all delay and mutex calls to see which devices
 are waiting for something and when. It also sets a suite of kprobes on
 subsystem dependent calls to better fill out the timeline.

 The tool will also expose kernel threads that don't normally show up in the
 timeline. This is useful in discovering dependent threads to get a better
 idea of what each device is waiting for. For instance, the scsi_eh thread,
 a.k.a. the scsi resume error handler, is what each SATA disk device waits for
 before it can continue resume.

 The timeline will be much larger if run with dev mode, so it can be useful
 to set the -mindev option to clip out any device blocks that are too small
 to see easily. The following command will give a nice dev mode run:

    %> sudo ./sleepgraph.py -m mem -rtcwake 15 -mindev 1 -dev

    or with a config file

    %> sudo ./sleepgraph.py -config config/suspend-dev.cfg


 Proc Mode Usage
 _______________

 Proc mode adds user process info to the timeline. This is done in a manner
 similar to the bootchart utility, which graphs init processes and their
 execution as the system boots. This tool option does the same thing but for
 the period before and after suspend/resume.

 In order to see any process info, there needs to be some delay before or
 after resume since processes are frozen in suspend_prepare and thawed in
 resume_complete. The predelay and postdelay args allow you to do this. It
 can also be useful to run in x2 mode with an x2 delay; this way you can
 see process activity before and after resume, and in between two
 successive suspend/resumes.

 The command can be run like this:

    %> sudo ./sleepgraph.py -m mem -rtcwake 15 -x2 -x2delay 1000 -predelay 1000 -postdelay 1000 -proc

    or with a config file

    %> sudo ./sleepgraph.py -config config/suspend-proc.cfg
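The proc-mode command line above maps directly onto config file settings. As a sketch only (the actual contents of config/suspend-proc.cfg may differ), an equivalent config built from the options documented in the Config File Options section would look like:

```ini
; hypothetical equivalent of the -x2/-predelay/-postdelay/-proc command line
[Settings]
mode: mem
rtcwake: 15
x2: true
x2delay: 1000
predelay: 1000
postdelay: 1000
proc: true
```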
 ------------------------------------------------------------------
 |                      CONFIGURATION FILES                       |
 ------------------------------------------------------------------

 Since 4.0 we've moved to using config files in lieu of command line options.
 The config folder contains a collection of typical use cases.
 There are corresponding configs for other power modes:

    Simple suspend/resume with basic timeline (mem/freeze/standby)
        config/suspend.cfg
        config/freeze.cfg
        config/standby.cfg

    Dev mode suspend/resume with dev timeline (mem/freeze/standby)
        config/suspend-dev.cfg
        config/freeze-dev.cfg
        config/standby-dev.cfg

    Simple suspend/resume with timeline and callgraph (mem/freeze/standby)
        config/suspend-callgraph.cfg
        config/freeze-callgraph.cfg
        config/standby-callgraph.cfg

    Sample proc mode x2 run using mem suspend
        config/suspend-x2-proc.cfg

    Sample for editing timeline funcs (moves internal functions into config)
        config/custom-timeline-functions.cfg

    Sample debug config for serio subsystem
        config/debug-serio-suspend.cfg


 Usage Examples
 ______________

 Run a simple mem suspend:
    %> sudo ./sleepgraph.py -config config/suspend.cfg

 Run a mem suspend with callgraph data:
    %> sudo ./sleepgraph.py -config config/suspend-callgraph.cfg

 Run a mem suspend with dev mode detail:
    %> sudo ./sleepgraph.py -config config/suspend-dev.cfg
 Config File Options
 ___________________

 [Settings]

 # Verbosity: print verbose messages (def: false)
 verbose: false

 # Suspend Mode: e.g. standby, mem, freeze, disk (def: mem)
 mode: mem

 # Output Directory Format: {hostname}, {date}, {time} give current values
 output-dir: suspend-{hostname}-{date}-{time}

 # Automatic Wakeup: use rtcwake to wakeup after X seconds (def: infinity)
 rtcwake: 15

 # Add Logs: add the dmesg and ftrace log to the html output (def: false)
 addlogs: false

 # Sus/Res Gap: insert a gap between sus & res in the timeline (def: false)
 srgap: false

 # Custom Command: Command to execute in lieu of suspend (def: "")
 command: echo mem > /sys/power/state

 # Proc mode: graph user processes and cpu usage in the timeline (def: false)
 proc: false

 # Dev mode: graph source functions in the timeline (def: false)
 dev: false

 # Suspend/Resume x2: run 2 suspend/resumes back to back (def: false)
 x2: false

 # x2 Suspend Delay: time delay between the two test runs in ms (def: 0 ms)
 x2delay: 0

 # Pre Suspend Delay: include an N ms delay before (1st) suspend (def: 0 ms)
 predelay: 0

 # Post Resume Delay: include an N ms delay after (last) resume (def: 0 ms)
 postdelay: 0

 # Min Device Length: graph only dev callbacks longer than min (def: 0.001 ms)
 mindev: 0.001

 # Callgraph: gather ftrace callgraph data on all timeline events (def: false)
 callgraph: false

 # Expand Callgraph: pre-expand the callgraph treeviews in html (def: false)
 expandcg: false

 # Min Callgraph Length: show callgraphs only if longer than min (def: 1 ms)
 mincg: 1

 # Timestamp Precision: number of sig digits in timestamps (0:S, [3:ms], 6:us)
 timeprec: 3

 # Device Filter: show only devs whose name/driver includes one of these strings
 devicefilter: _cpu_up,_cpu_down,i915,usb

 # Override default timeline entries:
 # Do not use the internal default functions for timeline entries (def: false)
 # Set this to true if you intend to only use the ones defined in the config
 override-timeline-functions: true

 # Override default dev timeline entries:
 # Do not use the internal default functions for dev timeline entries (def: false)
 # Set this to true if you intend to only use the ones defined in the config
 override-dev-timeline-functions: true

 # Call Loop Max Gap (dev mode only)
 # merge loops of the same call if each is less than maxgap apart (def: 100us)
 callloop-maxgap: 0.0001

 # Call Loop Max Length (dev mode only)
 # merge loops of the same call if each is less than maxlen in length (def: 5ms)
 callloop-maxlen: 0.005
 ------------------------------------------------------------------
 |                    CUSTOM TIMELINE ENTRIES                     |
 ------------------------------------------------------------------

 Adding or Editing Timeline Functions
 ____________________________________

 The tool uses an array of function names to fill out empty spaces in the
 timeline where device callbacks don't appear. For instance, in suspend_prepare
 the tool adds the sys_sync and freeze_processes calls as virtual device blocks
 in the timeline to show you where the time is going. These calls should fill
 the timeline with contiguous data so that most kernel execution is covered.

 It is possible to add new function calls to the timeline by adding them to
 the config. It's also possible to copy the internal timeline functions into
 the config so that you can override and edit them. Place them in the
 timeline_functions_ARCH section with the name of your architecture appended,
 i.e. for x86_64: [timeline_functions_x86_64]

 Use the override-timeline-functions option if you only want to use your
 custom calls, or leave it false to append them to the internal ones.

 This section includes a list of functions (set using kprobes) which use both
 symbol data and function arg data. The args are pulled directly from the
 stack using this architecture's registers and stack formatting. Each entry
 can include up to four pieces of info: the function name, a format string,
 an argument list, and a color. Only the function name is required.

 For a full example config, see config/custom-timeline-functions.cfg. It pulls
 all the internal timeline functions into the config and allows you to edit
 them.

 Entry format:

   function: format{fn_arg1}_{fn_arg2} fn_arg1 fn_arg2 ... [color=purple]

 Required Arguments:

   function: The symbol name for the function you want probed; this is the
             minimum required for an entry. It will show up as the function
             name with no arguments.

       example: _cpu_up:

 Optional Arguments:

   format: The format to display the data on the timeline in. Use braces to
           enclose the arg names.

       example: CPU_ON[{cpu}]

   color: The color of the entry block in the timeline. The default color is
          transparent, so the entry shares the phase color. The color is an
          html color string, either a word, or an RGB.

       example: [color=#CC00CC]

   arglist: A list of arguments from registers/stack addresses. See URL:
            https://www.kernel.org/doc/Documentation/trace/kprobetrace.txt

       example: cpu=%di:s32

 Here is a full example entry. It displays cpu resume calls in the timeline
 in orange. They will appear as CPU_ON[0], CPU_ON[1], etc.

   [timeline_functions_x86_64]
   _cpu_up: CPU_ON[{cpu}] cpu=%di:s32 [color=orange]
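To make the entry format concrete, here is a hedged sketch of how such a line breaks down into its four pieces. This is not part of pm-graph; the parsing rules are inferred only from the format description above:

```python
import re

def parse_timeline_entry(name, value):
    """Split a 'function: format args [color=...]' entry into its parts.

    Grammar inferred from the README entry format: an optional trailing
    [color=...], space-separated name=address kprobe args, and whatever
    remains is the display format string.
    """
    parts = {"name": name, "format": None, "args": {}, "color": None}
    # optional trailing [color=...]
    m = re.search(r"\[color=([^\]]+)\]\s*$", value)
    if m:
        parts["color"] = m.group(1)
        value = value[:m.start()].strip()
    fmt = []
    for tok in value.split():
        if "=" in tok and not tok.startswith("{"):
            arg, _, addr = tok.partition("=")   # e.g. cpu=%di:s32
            parts["args"][arg] = addr
        else:
            fmt.append(tok)                     # e.g. CPU_ON[{cpu}]
    parts["format"] = " ".join(fmt) or None
    return parts

entry = parse_timeline_entry("_cpu_up", "CPU_ON[{cpu}] cpu=%di:s32 [color=orange]")
```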
|
|
||||||
|
Adding or Editing Dev Mode Timeline Source Functions
|
||||||
|
____________________________________________________
|
||||||
|
|
||||||
|
In dev mode, the tool uses an array of function names to monitor source
execution within the timeline entries.

The function calls are displayed inside the main device/call blocks in the
timeline. However, if a function call is not within a main timeline event,
it will spawn an entirely new event named after the caller's kernel thread.
These asynchronous kernel threads will populate in a separate section
beneath the main device/call section.

The tool has a set of hard coded calls which focus on the most common use
cases: msleep, udelay, schedule_timeout, mutex_lock_slowpath, etc. These are
the functions that add a hardcoded time delay to the suspend/resume path.
The tool also includes some common functions native to important
subsystems: ata, i915, ACPI, etc.

It is possible to add new function calls to the dev timeline by adding them
to the config. It's also possible to copy the internal dev timeline
functions into the config so that you can override and edit them. Place them
in the dev_timeline_functions_ARCH section, with the name of your architecture
appended, e.g. for x86_64: [dev_timeline_functions_x86_64]

Set the override-dev-timeline-functions option to true if you only want to
use your custom calls, or leave it false to append them to the internal ones.

The format is the same as the timeline_functions_x86_64 section. It's a
list of functions (set using kprobes) which use both symbol data and function
arg data. The args are pulled directly from the stack using this
architecture's registers and stack formatting. Each entry can include up
to four pieces of info: the function name, a format string, an argument list,
and a color. Only the function name is required.

For a full example config, see config/custom-timeline-functions.cfg. It pulls
all the internal dev timeline functions into the config and allows you to edit
them.

Here is a full example entry. It displays the ATA port reset calls as
ataN_port_reset in the timeline. This is where most of the SATA disk resume
time goes, so it can be helpful to see the low level call.

[dev_timeline_functions_x86_64]
ata_eh_recover: ata{port}_port_reset port=+36(%di):s32 [color=#CC00CC]
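For illustration, an entry like the one above ultimately becomes an ftrace
kprobe. Below is a minimal sketch of that translation; the helper name is an
assumption, and sleepgraph's internal registration code differs, but the
kprobe_events definition syntax shown is the standard ftrace one:

```python
def kprobe_event_def(symbol, fetchargs=''):
    # ftrace kprobe_events syntax: "p:<probe-name> <symbol> <fetch-args>".
    # A real tool would write this string to
    # /sys/kernel/debug/tracing/kprobe_events to register the probe.
    return ('p:%s %s %s' % (symbol, symbol, fetchargs)).rstrip()

# The ata_eh_recover entry above maps to roughly this probe definition:
print(kprobe_event_def('ata_eh_recover', 'port=+36(%di):s32'))
# → p:ata_eh_recover ata_eh_recover port=+36(%di):s32
```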
Verifying your custom functions
_______________________________

Once you have a set of functions (kprobes) defined, it can be useful to
perform a quick check to see if you formatted them correctly and if the system
actually supports them. To do this, run the tool with your config file
and the -status option. The tool will go through all the kprobes (both
custom and internal, if you haven't overridden them) and actually attempt
to set them in ftrace. It will then print out success or fail for you.

Note that kprobes which don't actually exist in the kernel won't stop the
tool; they just won't show up.

For example:
sudo ./sleepgraph.py -config config/custom-timeline-functions.cfg -status
Checking this system (myhostname)...
    have root access: YES
    is sysfs mounted: YES
    is "mem" a valid power mode: YES
    is ftrace supported: YES
    are kprobes supported: YES
    timeline data source: FTRACE (all trace events found)
    is rtcwake supported: YES
    verifying timeline kprobes work:
        _cpu_down: YES
        _cpu_up: YES
        acpi_pm_finish: YES
        acpi_pm_prepare: YES
        freeze_kernel_threads: YES
        freeze_processes: YES
        sys_sync: YES
        thaw_processes: YES
    verifying dev kprobes work:
        __const_udelay: YES
        __mutex_lock_slowpath: YES
        acpi_os_stall: YES
        acpi_ps_parse_aml: YES
        intel_opregion_init: NO
        intel_opregion_register: NO
        intel_opregion_setup: NO
        msleep: YES
        schedule_timeout: YES
        schedule_timeout_uninterruptible: YES
        usleep_range: YES
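The check above can be approximated offline by comparing candidate symbols
against the kernel's available_filter_functions listing. This is a hedged
sketch (the function name is an assumption, and the real tool verifies by
actually setting each probe in ftrace, which is more authoritative):

```python
def verify_kprobes(functions, available_listing):
    # available_listing: contents of
    # /sys/kernel/debug/tracing/available_filter_functions
    # (one symbol per line)
    avail = set(available_listing.split())
    return {f: ('YES' if f in avail else 'NO') for f in functions}

# made-up partial listing for demonstration
listing = 'msleep\nschedule_timeout\nusleep_range\n__const_udelay\n'
print(verify_kprobes(['msleep', 'intel_opregion_init'], listing))
# → {'msleep': 'YES', 'intel_opregion_init': 'NO'}
```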
------------------------------------------------------------------
|        TESTING ON CONSUMER LINUX OPERATING SYSTEMS             |
------------------------------------------------------------------

Android
_______

The easiest way to execute on an android device is to run the android.sh
script on the device, then pull the ftrace log back to the host and run
sleepgraph.py on it.
Here are the steps:

[download and install the tool on the device]

host%> wget https://raw.githubusercontent.com/intel/pm-graph/master/tools/android.sh
host%> adb connect 192.168.1.6
host%> adb root
# push the script to a writeable location
host%> adb push android.sh /sdcard/

[check whether the tool will run on your device]

host%> adb shell
dev%> cd /sdcard
dev%> sh android.sh status
host : asus_t100
kernel : 3.14.0-i386-dirty
modes : freeze mem
rtcwake : supported
ftrace : supported
trace events {
    suspend_resume: found
    device_pm_callback_end: found
    device_pm_callback_start: found
}
# the above is what you see on a system that's properly patched

[execute the suspend]

# NOTE: The suspend will only work if the screen isn't timed out,
# so you have to press some keys first to wake it up before suspend
dev%> sh android.sh suspend mem
------------------------------------
Suspend/Resume timing test initiated
------------------------------------
hostname : asus_t100
kernel : 3.14.0-i386-dirty
mode : mem
ftrace out : /mnt/shell/emulated/0/ftrace.txt
dmesg out : /mnt/shell/emulated/0/dmesg.txt
log file : /mnt/shell/emulated/0/log.txt
------------------------------------
INITIALIZING FTRACE........DONE
STARTING FTRACE
SUSPEND START @ 21:24:02 (rtcwake in 10 seconds)
<adb connection will now terminate>

[retrieve the data from the device]

# I find that you have to actually kill the adb process and
# reconnect sometimes in order for the connection to work post-suspend
host%> adb connect 192.168.1.6
# (required) get the ftrace data, this is the most important piece
host%> adb pull /sdcard/ftrace.txt
# (optional) get the dmesg data, this is for debugging
host%> adb pull /sdcard/dmesg.txt
# (optional) get the log, which just lists some test times for comparison
host%> adb pull /sdcard/log.txt

[create an output html file using sleepgraph.py]

host%> sleepgraph.py -ftrace ftrace.txt

You should now have an output.html with the android data, enjoy!
@@ -325,9 +325,9 @@ def parseKernelLog():
 if(not sysvals.stamp['kernel']):
 sysvals.stamp['kernel'] = sysvals.kernelVersion(msg)
 continue
-m = re.match('.* setting system clock to (?P<t>.*) UTC.*', msg)
+m = re.match('.* setting system clock to (?P<d>[0-9\-]*)[ A-Z](?P<t>[0-9:]*) UTC.*', msg)
 if(m):
-bt = datetime.strptime(m.group('t'), '%Y-%m-%d %H:%M:%S')
+bt = datetime.strptime(m.group('d')+' '+m.group('t'), '%Y-%m-%d %H:%M:%S')
 bt = bt - timedelta(seconds=int(ktime))
 data.boottime = bt.strftime('%Y-%m-%d_%H:%M:%S')
 sysvals.stamp['time'] = bt.strftime('%B %d %Y, %I:%M:%S %p')
@@ -348,7 +348,7 @@ def parseKernelLog():
 data.newAction(phase, f, pid, start, ktime, int(r), int(t))
 del devtemp[f]
 continue
-if(re.match('^Freeing unused kernel memory.*', msg)):
+if(re.match('^Freeing unused kernel .*', msg)):
 data.tUserMode = ktime
 data.dmesg['kernel']['end'] = ktime
 data.dmesg['user']['start'] = ktime
@@ -1008,7 +1008,7 @@ if __name__ == '__main__':
 updateKernelParams()
 elif cmd == 'flistall':
 for f in sysvals.getBootFtraceFilterFunctions():
-print f
+print(f)
 elif cmd == 'checkbl':
 sysvals.getBootLoader()
 pprint('Boot Loader: %s\n%s' % (sysvals.bootloader, sysvals.blexec))
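The regex change in the first hunk can be exercised directly. The dmesg line
below is made up for illustration; it shows why the new pattern was needed:
the date and time are captured separately, so an ISO-8601 'T' between them is
tolerated, whereas the old single-group pattern handed the whole
"date T time" string to strptime unparsed.

```python
import re
from datetime import datetime

new = r'.* setting system clock to (?P<d>[0-9\-]*)[ A-Z](?P<t>[0-9:]*) UTC.*'
# hypothetical dmesg line with an ISO-8601 timestamp
msg = 'rtc_cmos 00:01: setting system clock to 2019-07-08T21:24:02 UTC (1562621042)'

m = re.match(new, msg)
# date and time captured separately, so the 'T' separator is skipped over
bt = datetime.strptime(m.group('d') + ' ' + m.group('t'), '%Y-%m-%d %H:%M:%S')
print(bt)  # → 2019-07-08 21:24:02
```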
@@ -98,12 +98,34 @@ postdelay: 0
 # graph only devices longer than min in the timeline (default: 0.001 ms)
 mindev: 0.001
 
+# Call Loop Max Gap (dev mode only)
+# merge loops of the same call if each is less than maxgap apart (def: 100us)
+callloop-maxgap: 0.0001
+
+# Call Loop Max Length (dev mode only)
+# merge loops of the same call if each is less than maxlen in length (def: 5ms)
+callloop-maxlen: 0.005
+
+# Override default timeline entries:
+# Do not use the internal default functions for timeline entries (def: false)
+# Set this to true if you intend to only use the ones defined in the config
+override-timeline-functions: true
+
+# Override default dev timeline entries:
+# Do not use the internal default functions for dev timeline entries (def: false)
+# Set this to true if you intend to only use the ones defined in the config
+override-dev-timeline-functions: true
+
 # ---- Debug Options ----
 
 # Callgraph
 # gather detailed ftrace callgraph data on all timeline events (default: false)
 callgraph: false
 
+# Max graph depth
+# limit the callgraph trace to this depth (default: 0 = all)
+maxdepth: 2
+
 # Callgraph phase filter
 # Only enable callgraphs for one phase, i.e. resume_noirq (default: all)
 cgphase: suspend
@@ -131,3 +153,7 @@ timeprec: 6
 # Add kprobe functions to the timeline
 # Add functions to the timeline from a text file (default: no-action)
 # fadd: file.txt
+
+# Ftrace buffer size
+# Set trace buffer size to N kilo-bytes (default: all of free memory up to 3GB)
+# bufsize: 1000
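The two callloop options added above describe a merge rule for repeated calls.
Here is a minimal sketch of that rule under the stated defaults; the function
name is an assumption and the tool's actual merging logic may differ in
detail:

```python
def merge_call_loops(times, maxgap=0.0001, maxlen=0.005):
    # Collapse a run of (start, end) calls into one span when the call is
    # shorter than maxlen and the gap to the previous span is under maxgap.
    merged = []
    for start, end in times:
        if (merged and end - start < maxlen
                and start - merged[-1][1] < maxgap):
            merged[-1] = (merged[-1][0], end)  # extend the previous span
        else:
            merged.append((start, end))
    return merged

# three 1us calls spaced 10us apart collapse into a single span
calls = [(0.0, 0.000001), (0.000011, 0.000012), (0.000022, 0.000023)]
print(merge_call_loops(calls))
```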
@@ -53,6 +53,11 @@ disable rtcwake and require a user keypress to resume.
 Add the dmesg and ftrace logs to the html output. They will be viewable by
 clicking buttons in the timeline.
 .TP
+\fB-turbostat\fR
+Use turbostat to execute the command in freeze mode (default: disabled). This
+will provide turbostat output in the log which will tell you which actual
+power modes were entered.
+.TP
 \fB-result \fIfile\fR
 Export a results table to a text file for parsing.
 .TP
@@ -121,6 +126,10 @@ be created in a new subdirectory with a summary page: suspend-xN-{date}-{time}.
 Use ftrace to create device callgraphs (default: disabled). This can produce
 very large outputs, i.e. 10MB - 100MB.
 .TP
+\fB-ftop\fR
+Use ftrace on the top level call: "suspend_devices_and_enter" only (default: disabled).
+This option implies -f and creates a single callgraph covering all of suspend/resume.
+.TP
 \fB-maxdepth \fIlevel\fR
 limit the callgraph trace depth to \fIlevel\fR (default: 0=all). This is
 the best way to limit the output size when using callgraphs via -f.
@@ -138,8 +147,8 @@ which are barely visible in the timeline.
 The value is a float: e.g. 0.001 represents 1 us.
 .TP
 \fB-cgfilter \fI"func1,func2,..."\fR
-Reduce callgraph output in the timeline by limiting it to a list of calls. The
-argument can be a single function name or a comma delimited list.
+Reduce callgraph output in the timeline by limiting it to certain devices. The
+argument can be a single device name or a comma delimited list.
 (default: none)
 .TP
 \fB-cgskip \fIfile\fR
@@ -183,6 +192,9 @@ Print out the contents of the ACPI Firmware Performance Data Table.
 \fB-battery\fR
 Print out battery status and current charge.
 .TP
+\fB-wifi\fR
+Print out wifi status and connection details.
+.TP
 \fB-xon/-xoff/-xstandby/-xsuspend\fR
 Test xset by attempting to switch the display to the given mode. This
 is the same command which will be issued by \fB-display \fImode\fR.