Power management updates for 5.7-rc1

  - Clean up and rework the PM QoS API to simplify the code and
    reduce the size of it (Rafael Wysocki).
 
  - Fix a suspend-to-idle wakeup regression on Dell XPS13 9370
    and similar platforms where the USB plug/unplug events are
    handled by the EC (Rafael Wysocki).
 
  - Clean up the intel_idle and PSCI cpuidle drivers (Rafael Wysocki,
    Ulf Hansson).
 
  - Extend the haltpoll cpuidle driver so that it can be forced to
    run on some systems where it refused to load (Maciej Szmigiero).
 
  - Convert several cpufreq documents to the .rst format and move the
    legacy driver documentation into one common file (Mauro Carvalho
    Chehab, Rafael Wysocki).
 
  - Update several cpufreq drivers:
 
    * Extend and fix the imx-cpufreq-dt driver (Anson Huang).
 
    * Improve the -EPROBE_DEFER handling and fix unwanted CPU
      overclocking on i.MX6ULL in imx6q-cpufreq (Anson Huang,
      Christoph Niedermaier).
 
    * Add support for Krait based SoCs to the qcom driver (Ansuel
      Smith).
 
    * Add support for OPP_PLUS to ti-cpufreq (Lokesh Vutla).
 
    * Add platform specific intermediate callbacks support to
      cpufreq-dt and update the imx6q driver (Peng Fan).
 
    * Simplify and consolidate some pieces of the intel_pstate driver
      and update its documentation (Rafael Wysocki, Alex Hung).
 
  - Fix several devfreq issues:
 
    * Remove unneeded extern keyword from a devfreq header file
      and use the DEVFREQ_GOV_UPDATE_INTERVAL event name instead of
      DEVFREQ_GOV_INTERVAL (Chanwoo Choi).
 
    * Fix the handling of dev_pm_qos_remove_request() result (Leonard
      Crestez).
 
    * Use constant name for userspace governor (Pierre Kuo).
 
    * Get rid of doc warnings and fix a typo (Christophe JAILLET).
 
  - Use built-in RCU list checking in some places in the PM core to
    avoid false-positive RCU usage warnings (Madhuparna Bhowmik).
 
  - Add explicit READ_ONCE()/WRITE_ONCE() annotations to low-level
    PM QoS routines (Qian Cai).
 
  - Fix removal of wakeup sources to avoid NULL pointer dereferences
    in a corner case (Neeraj Upadhyay).
 
  - Clean up the handling of hibernate compat ioctls and fix the
    related documentation (Eric Biggers).
 
  - Update the idle_inject power capping driver to use variable-length
    arrays instead of zero-length arrays (Gustavo Silva).
 
  - Fix list format in a PM QoS document (Randy Dunlap).
 
  - Make the cpufreq stats module use scnprintf() to avoid potential
    buffer overflows (Takashi Iwai).
 
  - Add pm_runtime_get_if_active() to PM-runtime API (Sakari Ailus).
 
  - Allow no domain-idle-states DT property in generic PM domains (Ulf
    Hansson).
 
  - Fix a broken y-axis scale in the intel_pstate_tracer utility (Doug
    Smythies).
 -----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEE4fcc61cGeeHD/fCwgsRv/nhiVHEFAl6B/YkSHHJqd0Byand5
 c29ja2kubmV0AAoJEILEb/54YlRxEjIP/jXoO1pAxq7BMx7naZnZL7pzemJfAGR7
 HVnRLDo0IlsSwI7Jvuy13a0eI+EcGPA6pRo5qnBM4TZCIFsHoO5Yle47ndNGsi8r
 Jd3T89oT3I+fXI4KTfWO0n+K/F6mv8/CTZDz/E7Z6zirpFxyyZQxgIsAT76RcZom
 xhWna9vygOlBnFsQaAeph+GzoXBWIylaMZfylUeT3v4c4DLH6FzcbnINPkgJsZCw
 Ayt1bmE0L9yiqCizEto91eaDObkxTHVFGr2OVNa/Y/SVW+VUThUJrXqV28opQxPZ
 h4TiQorpTX1CwMmiXZwmoeqqsiVXrm0KyhK0lwc5tZ9FnZWiW4qjJ487Eu6TjOmh
 gecT+M2Yexy0BvUGN0wIdaCLtfmf2Hjxk0trxM2blAh3uoFjf3UJ9SLNkRjlu2/b
 QqWmIRRPljD5fEUid5lVV4EAXuITUzWMJeia+FiAsgx1SF3pZPar80f+FGrYfaJN
 wL2BTwBx1aXpPpAkEX0kM9Rkf6oJsFATR3p7DNzyZ1bMrQUxiToWRlQBID5H6G4v
 /kAkSTQjNQVwkkylUzTLOlcmL56sCvc0YPdybH62OsLXs9K4gyC8v6tEdtdA5qtw
 0Up9DrYbNKKv6GrSXf8eyk2Q2CEqfRXHv2ACNnkLRXZ6fWnFiTfMgNj7zqtrfna7
 tJBvrV9/ACXE
 =cBQd
 -----END PGP SIGNATURE-----

Merge tag 'pm-5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
 "These clean up and rework the PM QoS API, address a suspend-to-idle
  wakeup regression on some ACPI-based platforms, clean up and extend a
  few cpuidle drivers, update multiple cpufreq drivers and cpufreq
  documentation, and fix a number of issues in devfreq and several other
  things all over.

  Specifics:

   - Clean up and rework the PM QoS API to simplify the code and reduce
     the size of it (Rafael Wysocki).

   - Fix a suspend-to-idle wakeup regression on Dell XPS13 9370 and
     similar platforms where the USB plug/unplug events are handled by
     the EC (Rafael Wysocki).

   - Clean up the intel_idle and PSCI cpuidle drivers (Rafael Wysocki,
     Ulf Hansson).

   - Extend the haltpoll cpuidle driver so that it can be forced to run
     on some systems where it refused to load (Maciej Szmigiero).

   - Convert several cpufreq documents to the .rst format and move the
     legacy driver documentation into one common file (Mauro Carvalho
     Chehab, Rafael Wysocki).

   - Update several cpufreq drivers:

        * Extend and fix the imx-cpufreq-dt driver (Anson Huang).

        * Improve the -EPROBE_DEFER handling and fix unwanted CPU
          overclocking on i.MX6ULL in imx6q-cpufreq (Anson Huang,
          Christoph Niedermaier).

        * Add support for Krait based SoCs to the qcom driver (Ansuel
          Smith).

        * Add support for OPP_PLUS to ti-cpufreq (Lokesh Vutla).

        * Add platform specific intermediate callbacks support to
          cpufreq-dt and update the imx6q driver (Peng Fan).

        * Simplify and consolidate some pieces of the intel_pstate
          driver and update its documentation (Rafael Wysocki, Alex
          Hung).

   - Fix several devfreq issues:

        * Remove unneeded extern keyword from a devfreq header file and
          use the DEVFREQ_GOV_UPDATE_INTERVAL event name instead of
          DEVFREQ_GOV_INTERVAL (Chanwoo Choi).

        * Fix the handling of dev_pm_qos_remove_request() result
          (Leonard Crestez).

        * Use constant name for userspace governor (Pierre Kuo).

        * Get rid of doc warnings and fix a typo (Christophe JAILLET).

   - Use built-in RCU list checking in some places in the PM core to
     avoid false-positive RCU usage warnings (Madhuparna Bhowmik).

   - Add explicit READ_ONCE()/WRITE_ONCE() annotations to low-level PM
     QoS routines (Qian Cai).

   - Fix removal of wakeup sources to avoid NULL pointer dereferences in
     a corner case (Neeraj Upadhyay).

   - Clean up the handling of hibernate compat ioctls and fix the
     related documentation (Eric Biggers).

   - Update the idle_inject power capping driver to use variable-length
     arrays instead of zero-length arrays (Gustavo Silva).

   - Fix list format in a PM QoS document (Randy Dunlap).

   - Make the cpufreq stats module use scnprintf() to avoid potential
     buffer overflows (Takashi Iwai).

   - Add pm_runtime_get_if_active() to PM-runtime API (Sakari Ailus).

   - Allow no domain-idle-states DT property in generic PM domains (Ulf
     Hansson).

   - Fix a broken y-axis scale in the intel_pstate_tracer utility (Doug
     Smythies)"

* tag 'pm-5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (78 commits)
  cpufreq: intel_pstate: Simplify intel_pstate_cpu_init()
  tools/power/x86/intel_pstate_tracer: fix a broken y-axis scale
  ACPI: PM: s2idle: Refine active GPEs check
  ACPICA: Allow acpi_any_gpe_status_set() to skip one GPE
  PM: sleep: wakeup: Skip wakeup_source_sysfs_remove() if device is not there
  PM / devfreq: Get rid of some doc warnings
  PM / devfreq: Fix handling dev_pm_qos_remove_request result
  PM / devfreq: Fix a typo in a comment
  PM / devfreq: Change to DEVFREQ_GOV_UPDATE_INTERVAL event name
  PM / devfreq: Remove unneeded extern keyword
  PM / devfreq: Use constant name of userspace governor
  ACPI: PM: s2idle: Fix comment in acpi_s2idle_prepare_late()
  cpufreq: qcom: Add support for krait based socs
  cpufreq: imx6q-cpufreq: Improve the logic of -EPROBE_DEFER handling
  cpufreq: Use scnprintf() for avoiding potential buffer overflow
  cpuidle: psci: Split psci_dt_cpu_init_idle()
  PM / Domains: Allow no domain-idle-states DT property in genpd when parsing
  PM / hibernate: Remove unnecessary compat ioctl overrides
  PM: hibernate: fix docs for ioctls that return loff_t via pointer
  Documentation: intel_pstate: update links for references
  ...

@@ -0,0 +1,274 @@
.. SPDX-License-Identifier: GPL-2.0

=======================================================
Legacy Documentation of CPU Performance Scaling Drivers
=======================================================

Included below are historic documents describing assorted
:doc:`CPU performance scaling <cpufreq>` drivers. They are reproduced verbatim,
with the original white space formatting and indentation preserved, except for
the added leading space character in every line of text.

AMD PowerNow! Drivers
=====================

::
PowerNow! and Cool'n'Quiet are AMD names for frequency
management capabilities in AMD processors. As the hardware
implementation changes in new generations of the processors,
there is a different cpu-freq driver for each generation.
Note that the drivers will not load on the "wrong" hardware,
so it is safe to try each driver in turn when in doubt as to
which is the correct driver.
Note that the functionality to change frequency (and voltage)
is not available in all processors. The drivers will refuse
to load on processors without this capability. The capability
is detected with the cpuid instruction.
The drivers use BIOS supplied tables to obtain frequency and
voltage information appropriate for a particular platform.
Frequency transitions will be unavailable if the BIOS does
not supply these tables.
6th Generation: powernow-k6
7th Generation: powernow-k7: Athlon, Duron, Geode.
8th Generation: powernow-k8: Athlon, Athlon 64, Opteron, Sempron.
Documentation on this functionality in 8th generation processors
is available in the "BIOS and Kernel Developer's Guide", publication
26094, in chapter 9, available for download from www.amd.com.
BIOS supplied data, for powernow-k7 and for powernow-k8, may be
from either the PSB table or from ACPI objects. The ACPI support
is only available if the kernel config sets CONFIG_ACPI_PROCESSOR.
The powernow-k8 driver will attempt to use ACPI if so configured,
and fall back to PST if that fails.
The powernow-k7 driver will try to use the PSB support first, and
fall back to ACPI if the PSB support fails. A module parameter,
acpi_force, is provided to force ACPI support to be used instead
of PSB support.
``cpufreq-nforce2``
===================

::
The cpufreq-nforce2 driver changes the FSB on nVidia nForce2 platforms.
This works better than on other platforms, because the FSB of the CPU
can be controlled independently from the PCI/AGP clock.
The module has two options:
fid: multiplier * 10 (for example 8.5 = 85)
min_fsb: minimum FSB
If not set, fid is calculated from the current CPU speed and the FSB.
min_fsb defaults to FSB at boot time - 50 MHz.
IMPORTANT: The available range is limited downwards!
Also the minimum available FSB can differ, for systems
booting with 200 MHz, 150 should always work.
``pcc-cpufreq``
===============

::
/*
* pcc-cpufreq.txt - PCC interface documentation
*
* Copyright (C) 2009 Red Hat, Matthew Garrett <mjg@redhat.com>
* Copyright (C) 2009 Hewlett-Packard Development Company, L.P.
* Nagananda Chumbalkar <nagananda.chumbalkar@hp.com>
*/
Processor Clocking Control Driver
---------------------------------
Contents:
---------
1. Introduction
1.1 PCC interface
1.1.1 Get Average Frequency
1.1.2 Set Desired Frequency
1.2 Platforms affected
2. Driver and /sys details
2.1 scaling_available_frequencies
2.2 cpuinfo_transition_latency
2.3 cpuinfo_cur_freq
2.4 related_cpus
3. Caveats
1. Introduction:
----------------
Processor Clocking Control (PCC) is an interface between the platform
firmware and OSPM. It is a mechanism for coordinating processor
performance (ie: frequency) between the platform firmware and the OS.
The PCC driver (pcc-cpufreq) allows OSPM to take advantage of the PCC
interface.
OS utilizes the PCC interface to inform platform firmware what frequency the
OS wants for a logical processor. The platform firmware attempts to achieve
the requested frequency. If the request for the target frequency could not be
satisfied by platform firmware, then it usually means that power budget
conditions are in place, and "power capping" is taking place.
1.1 PCC interface:
------------------
The complete PCC specification is available here:
https://acpica.org/sites/acpica/files/Processor-Clocking-Control-v1p0.pdf
PCC relies on a shared memory region that provides a channel for communication
between the OS and platform firmware. PCC also implements a "doorbell" that
is used by the OS to inform the platform firmware that a command has been
sent.
The ACPI PCCH() method is used to discover the location of the PCC shared
memory region. The shared memory region header contains the "command" and
"status" interface. PCCH() also contains details on how to access the platform
doorbell.
The following commands are supported by the PCC interface:
* Get Average Frequency
* Set Desired Frequency
The ACPI PCCP() method is implemented for each logical processor and is
used to discover the offsets for the input and output buffers in the shared
memory region.
When PCC mode is enabled, the platform will not expose processor performance
or throttle states (_PSS, _TSS and related ACPI objects) to OSPM. Therefore,
the native P-state driver (such as acpi-cpufreq for Intel, powernow-k8 for
AMD) will not load.
However, OSPM remains in control of policy. The governor (eg: "ondemand")
computes the required performance for each processor based on server workload.
The PCC driver fills in the command interface, and the input buffer and
communicates the request to the platform firmware. The platform firmware is
responsible for delivering the requested performance.
Each PCC command is "global" in scope and can affect all the logical CPUs in
the system. Therefore, PCC is capable of performing "group" updates. With PCC
the OS is capable of getting/setting the frequency of all the logical CPUs in
the system with a single call to the BIOS.
1.1.1 Get Average Frequency:
----------------------------
This command is used by the OSPM to query the running frequency of the
processor since the last time this command was completed. The output buffer
indicates the average unhalted frequency of the logical processor expressed as
a percentage of the nominal (ie: maximum) CPU frequency. The output buffer
also signifies if the CPU frequency is limited by a power budget condition.
1.1.2 Set Desired Frequency:
----------------------------
This command is used by the OSPM to communicate to the platform firmware the
desired frequency for a logical processor. The output buffer is currently
ignored by OSPM. The next invocation of "Get Average Frequency" will inform
OSPM if the desired frequency was achieved or not.
1.2 Platforms affected:
-----------------------
The PCC driver will load on any system where the platform firmware:
* supports the PCC interface, and the associated PCCH() and PCCP() methods
* assumes responsibility for managing the hardware clocking controls in order
to deliver the requested processor performance
Currently, certain HP ProLiant platforms implement the PCC interface. On those
platforms PCC is the "default" choice.
However, it is possible to disable this interface via a BIOS setting. In
such an instance, as is also the case on platforms where the PCC interface
is not implemented, the PCC driver will fail to load silently.
2. Driver and /sys details:
---------------------------
When the driver loads, it merely prints the lowest and the highest CPU
frequencies supported by the platform firmware.
The PCC driver loads with a message such as:
pcc-cpufreq: (v1.00.00) driver loaded with frequency limits: 1600 MHz, 2933
MHz
This means that the OSPM can request the CPU to run at any frequency in
between the limits (1600 MHz, and 2933 MHz) specified in the message.
Internally, there is no need for the driver to convert the "target" frequency
to a corresponding P-state.
The VERSION number for the driver will be of the format v.xy.ab.

	eg: 1.00.02
	    -----  --
	      |     |
	      |     -- this will increase with bug fixes/enhancements to the driver
	      |-- this is the version of the PCC specification the driver adheres to
The following is a brief discussion on some of the fields exported via the
/sys filesystem and how their values are affected by the PCC driver:
2.1 scaling_available_frequencies:
----------------------------------
scaling_available_frequencies is not created in /sys. No intermediate
frequencies need to be listed because the BIOS will try to achieve any
frequency, within limits, requested by the governor. A frequency does not have
to be strictly associated with a P-state.
2.2 cpuinfo_transition_latency:
-------------------------------
The cpuinfo_transition_latency field is 0. The PCC specification does
not include a field to expose this value currently.
2.3 cpuinfo_cur_freq:
---------------------
A) Often cpuinfo_cur_freq will show a value different than what is declared
in the scaling_available_frequencies or scaling_cur_freq, or scaling_max_freq.
This is due to "turbo boost" available on recent Intel processors. If certain
conditions are met the BIOS can achieve a slightly higher speed than requested
by OSPM. An example:
scaling_cur_freq : 2933000
cpuinfo_cur_freq : 3196000
B) There is a round-off error associated with the cpuinfo_cur_freq value.
Since the driver obtains the current frequency as a "percentage" (%) of the
nominal frequency from the BIOS, sometimes, the values displayed by
scaling_cur_freq and cpuinfo_cur_freq may not match. An example:
scaling_cur_freq : 1600000
cpuinfo_cur_freq : 1583000
In this example, the nominal frequency is 2933 MHz. The driver obtains the
current frequency, cpuinfo_cur_freq, as 54% of the nominal frequency:
54% of 2933 MHz = 1583 MHz
Nominal frequency is the maximum frequency of the processor, and it usually
corresponds to the frequency of the P0 P-state.
2.4 related_cpus:
-----------------
The related_cpus field is identical to affected_cpus.
affected_cpus : 4
related_cpus : 4
Currently, the PCC driver does not evaluate _PSD. The platforms that support
PCC do not implement SW_ALL. So OSPM doesn't need to perform any coordination
to ensure that the same frequency is requested of all dependent CPUs.
3. Caveats:
-----------
The "cpufreq_stats" module in its present form cannot be loaded and
expected to work with the PCC driver. Since the "cpufreq_stats" module
provides information wrt each P-state, it is not applicable to the PCC driver.


@@ -583,20 +583,17 @@ Power Management Quality of Service for CPUs

The power management quality of service (PM QoS) framework in the Linux kernel
allows kernel code and user space processes to set constraints on various
energy-efficiency features of the kernel to prevent performance from dropping
below a required level.

CPU idle time management can be affected by PM QoS in two ways, through the
global CPU latency limit and through the resume latency constraints for
individual CPUs. Kernel code (e.g. device drivers) can set both of them with
the help of special internal interfaces provided by the PM QoS framework. User
space can modify the former by opening the :file:`cpu_dma_latency` special
device file under :file:`/dev/` and writing a binary value (interpreted as a
signed 32-bit integer) to it. In turn, the resume latency constraint for a CPU
can be modified from user space by writing a string (representing a signed
32-bit integer) to the :file:`power/pm_qos_resume_latency_us` file under
:file:`/sys/devices/system/cpu/cpu<N>/` in ``sysfs``, where the CPU number
``<N>`` is allocated at the system initialization time. Negative values
will be rejected in both cases and, also in both cases, the written integer
@@ -605,32 +602,34 @@ number will be interpreted as a requested PM QoS constraint in microseconds.

The requested value is not automatically applied as a new constraint, however,
as it may be less restrictive (greater in this particular case) than another
constraint previously requested by someone else. For this reason, the PM QoS
framework maintains a list of requests that have been made so far for the
global CPU latency limit and for each individual CPU, aggregates them and
applies the effective (minimum in this particular case) value as the new
constraint.

In fact, opening the :file:`cpu_dma_latency` special device file causes a new
PM QoS request to be created and added to a global priority list of CPU latency
limit requests and the file descriptor coming from the "open" operation
represents that request. If that file descriptor is then used for writing, the
number written to it will be associated with the PM QoS request represented by
it as a new requested limit value. Next, the priority list mechanism will be
used to determine the new effective value of the entire list of requests and
that effective value will be set as a new CPU latency limit. Thus requesting a
new limit value will only change the real limit if the effective "list" value is
affected by it, which is the case if it is the minimum of the requested values
in the list.

The process holding a file descriptor obtained by opening the
:file:`cpu_dma_latency` special device file controls the PM QoS request
associated with that file descriptor, but it controls this particular PM QoS
request only.

Closing the :file:`cpu_dma_latency` special device file or, more precisely, the
file descriptor obtained while opening it, causes the PM QoS request associated
with that file descriptor to be removed from the global priority list of CPU
latency limit requests and destroyed. If that happens, the priority list
mechanism will be used again, to determine the new effective value for the whole
list and that value will become the new limit.

In turn, for each CPU there is one resume latency PM QoS request associated with
the :file:`power/pm_qos_resume_latency_us` file under
@@ -647,10 +646,10 @@ CPU in question every time the list of requests is updated this way or another
(there may be other requests coming from kernel code in that list).

CPU idle time governors are expected to regard the minimum of the global
(effective) CPU latency limit and the effective resume latency constraint for
the given CPU as the upper limit for the exit latency of the idle states that
they are allowed to select for that CPU. They should never select any idle
states with exit latency beyond that limit.
Idle States Control Via Kernel Command Line


@@ -734,10 +734,10 @@ References
==========

.. [1] Kristen Accardi, *Balancing Power and Performance in the Linux Kernel*,
       https://events.static.linuxfound.org/sites/events/files/slides/LinuxConEurope_2015.pdf

.. [2] *Intel® 64 and IA-32 Architectures Software Developer's Manual Volume 3: System Programming Guide*,
       https://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-software-developer-system-programming-manual-325384.html

.. [3] *Advanced Configuration and Power Interface Specification*,
       https://uefi.org/sites/default/files/resources/ACPI_6_3_final_Jan30.pdf


@@ -11,4 +11,5 @@ Working-State Power Management

   intel_idle
   cpufreq
   intel_pstate
   cpufreq_drivers
   intel_epb


@@ -1,38 +0,0 @@
PowerNow! and Cool'n'Quiet are AMD names for frequency
management capabilities in AMD processors. As the hardware
implementation changes in new generations of the processors,
there is a different cpu-freq driver for each generation.
Note that the drivers will not load on the "wrong" hardware,
so it is safe to try each driver in turn when in doubt as to
which is the correct driver.
Note that the functionality to change frequency (and voltage)
is not available in all processors. The drivers will refuse
to load on processors without this capability. The capability
is detected with the cpuid instruction.
The drivers use BIOS supplied tables to obtain frequency and
voltage information appropriate for a particular platform.
Frequency transitions will be unavailable if the BIOS does
not supply these tables.
6th Generation: powernow-k6
7th Generation: powernow-k7: Athlon, Duron, Geode.
8th Generation: powernow-k8: Athlon, Athlon 64, Opteron, Sempron.
Documentation on this functionality in 8th generation processors
is available in the "BIOS and Kernel Developer's Guide", publication
26094, in chapter 9, available for download from www.amd.com.
BIOS supplied data, for powernow-k7 and for powernow-k8, may be
from either the PSB table or from ACPI objects. The ACPI support
is only available if the kernel config sets CONFIG_ACPI_PROCESSOR.
The powernow-k8 driver will attempt to use ACPI if so configured,
and fall back to PST if that fails.
The powernow-k7 driver will try to use the PSB support first, and
fall back to ACPI if the PSB support fails. A module parameter,
acpi_force, is provided to force ACPI support to be used instead
of PSB support.


@@ -1,31 +1,23 @@
.. SPDX-License-Identifier: GPL-2.0

=============================================================
General description of the CPUFreq core and CPUFreq notifiers
=============================================================

Authors:
	- Dominik Brodowski <linux@brodo.de>
	- David Kimdon <dwhedon@debian.org>
	- Rafael J. Wysocki <rafael.j.wysocki@intel.com>
	- Viresh Kumar <viresh.kumar@linaro.org>

.. Contents:

   1. CPUFreq core and interfaces
   2. CPUFreq notifiers
   3. CPUFreq Table Generation with Operating Performance Point (OPP)

1. General Information
======================

The CPUFreq core code is located in drivers/cpufreq/cpufreq.c. This
cpufreq code offers a standardized interface for the CPUFreq
@@ -63,7 +55,7 @@ The phase is specified in the second argument to the notifier. The phase is
CPUFREQ_CREATE_POLICY when the policy is first created and it is
CPUFREQ_REMOVE_POLICY when the policy is removed.

The third argument, a ``void *pointer``, points to a struct cpufreq_policy
consisting of several values, including min, max (the lower and upper
frequencies (in kHz) of the new policy).
@@ -80,10 +72,13 @@ CPUFREQ_POSTCHANGE.

The third argument is a struct cpufreq_freqs with the following
values:

=====  ===========================
cpu    number of the affected CPU
old    old frequency
new    new frequency
flags  flags of the cpufreq driver
=====  ===========================
3. CPUFreq Table Generation with Operating Performance Point (OPP)
==================================================================
@@ -94,9 +89,12 @@ dev_pm_opp_init_cpufreq_table -
	the OPP layer's internal information about the available frequencies
	into a format readily providable to cpufreq.

	.. Warning::

	   Do not use this function in interrupt context.

	Example::

	 soc_pm_init()
	 {
		/* Do things */
@@ -106,7 +104,10 @@ dev_pm_opp_init_cpufreq_table -
		/* Do other things */
	 }

	.. note::

	   This function is available only if CONFIG_CPU_FREQ is enabled in
	   addition to CONFIG_PM_OPP.

dev_pm_opp_free_cpufreq_table
	Free up the table allocated by dev_pm_opp_init_cpufreq_table


@@ -1,35 +1,27 @@
.. SPDX-License-Identifier: GPL-2.0

===============================================
How to Implement a new CPUFreq Processor Driver
===============================================

Authors:

	- Dominik Brodowski <linux@brodo.de>
	- Rafael J. Wysocki <rafael.j.wysocki@intel.com>
	- Viresh Kumar <viresh.kumar@linaro.org>

.. Contents

   1.   What To Do?
   1.1  Initialization
   1.2  Per-CPU Initialization
   1.3  verify
   1.4  target/target_index or setpolicy?
   1.5  target/target_index
   1.6  setpolicy
   1.7  get_intermediate and target_intermediate
   2.   Frequency Table Helpers
@@ -49,7 +41,7 @@ function check whether this kernel runs on the right CPU and the right
chipset. If so, register a struct cpufreq_driver with the CPUfreq core
using cpufreq_register_driver()

What shall this struct cpufreq_driver contain?

.name - The name of this driver.
@@ -108,37 +100,42 @@ Whenever a new CPU is registered with the device model, or after the
cpufreq driver registers itself, the per-policy initialization function
cpufreq_driver.init is called if no cpufreq policy existed for the CPU.
Note that the .init() and .exit() routines are called only once for the
policy and not for each CPU managed by the policy. It takes a ``struct
cpufreq_policy *policy`` as argument. What to do now?

If necessary, activate the CPUfreq support on your CPU.

Then, the driver must fill in the following values:

+-----------------------------------+--------------------------------------+
|policy->cpuinfo.min_freq _and_     |                                      |
|policy->cpuinfo.max_freq           | the minimum and maximum frequency    |
|                                   | (in kHz) which is supported by       |
|                                   | this CPU                             |
+-----------------------------------+--------------------------------------+
|policy->cpuinfo.transition_latency | the time it takes on this CPU to     |
|                                   | switch between two frequencies in    |
|                                   | nanoseconds (if appropriate, else    |
|                                   | specify CPUFREQ_ETERNAL)             |
+-----------------------------------+--------------------------------------+
|policy->cur                        | The current operating frequency of   |
|                                   | this CPU (if appropriate)            |
+-----------------------------------+--------------------------------------+
|policy->min,                       |                                      |
|policy->max,                       |                                      |
|policy->policy and, if necessary,  |                                      |
|policy->governor                   | must contain the "default policy" for|
|                                   | this CPU. A few moments later,       |
|                                   | cpufreq_driver.verify and either     |
|                                   | cpufreq_driver.setpolicy or          |
|                                   | cpufreq_driver.target/target_index is|
|                                   | called with these values.            |
+-----------------------------------+--------------------------------------+
|policy->cpus                       | Update this with the masks of the    |
|                                   | (online + offline) CPUs that do DVFS |
|                                   | along with this CPU (i.e. that share |
|                                   | clock/voltage rails with it).        |
+-----------------------------------+--------------------------------------+
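
As a rough sketch (the demo_* helper and the frequencies are invented for the
example; error handling omitted), an .init() callback filling in these values
might look like::

	static int demo_cpufreq_init(struct cpufreq_policy *policy)
	{
		/* Hardware limits, in kHz. */
		policy->cpuinfo.min_freq = 200000;
		policy->cpuinfo.max_freq = 800000;
		/* Worst-case switching time, in nanoseconds. */
		policy->cpuinfo.transition_latency = 300 * 1000;

		/* Default policy limits and current frequency. */
		policy->min = policy->cpuinfo.min_freq;
		policy->max = policy->cpuinfo.max_freq;
		policy->cur = demo_get_current_freq();	/* invented helper */

		/* Assume all CPUs share one clock/voltage rail here. */
		cpumask_setall(policy->cpus);
		return 0;
	}
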
For setting some of these values (cpuinfo.min[max]_freq, policy->min[max]), the
frequency table helpers might be helpful. See section 2 for more information
@@ -151,8 +148,8 @@ on them.

When the user decides a new policy (consisting of
"policy,governor,min,max") shall be set, this policy must be validated
so that incompatible values can be corrected. For verifying these
values cpufreq_verify_within_limits(``struct cpufreq_policy *policy``,
``unsigned int min_freq``, ``unsigned int max_freq``) function might be helpful.
See section 2 for details on frequency table helpers.

You need to make sure that at least one valid frequency (or operating
@@ -163,7 +160,7 @@ policy->max first, and only if this is no solution, decrease policy->min.

1.4 target or target_index or setpolicy or fast_switch?
-------------------------------------------------------

Most cpufreq drivers or even most cpu frequency scaling algorithms
only allow the CPU frequency to be set to predefined fixed values. For
these, you use the ->target(), ->target_index() or ->fast_switch()
callbacks.
@@ -175,8 +172,8 @@ limits on their own. These shall use the ->setpolicy() callback.

1.5. target/target_index
------------------------

The target_index call has two arguments: ``struct cpufreq_policy *policy``,
and ``unsigned int`` index (into the exposed frequency table).

The CPUfreq driver must set the new frequency when called here. The
actual frequency must be determined by freq_table[index].frequency.
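
For instance, a minimal ->target_index() along these lines might be the
following sketch (demo_set_rate() is an invented hardware-specific helper)::

	static int demo_target_index(struct cpufreq_policy *policy,
				     unsigned int index)
	{
		/* The frequency to program comes from the exposed table. */
		unsigned int new_freq = policy->freq_table[index].frequency;

		return demo_set_rate(new_freq);
	}
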
@@ -184,9 +181,9 @@ actual frequency must be determined by freq_table[index].frequency.

It should always restore to earlier frequency (i.e. policy->restore_freq) in
case of errors, even if we switched to intermediate frequency earlier.

Deprecated
----------

The target call has three arguments: ``struct cpufreq_policy *policy``,
unsigned int target_frequency, unsigned int relation.

The CPUfreq driver must set the new frequency when called here. The
@@ -210,14 +207,14 @@ Not all drivers are expected to implement it, as sleeping from within
this callback isn't allowed. This callback must be highly optimized to
do switching as fast as possible.

This function has two arguments: ``struct cpufreq_policy *policy`` and
``unsigned int target_frequency``.

1.7 setpolicy
-------------

The setpolicy call only takes a ``struct cpufreq_policy *policy`` as
argument. You need to set the lower limit of the in-processor or
in-chipset dynamic frequency switching to policy->min, the upper limit
to policy->max, and -if supported- select a performance-oriented
@@ -278,10 +275,10 @@ table.

cpufreq_for_each_valid_entry(pos, table) - iterates over all entries,
excluding CPUFREQ_ENTRY_INVALID frequencies.
Use arguments "pos" - a ``cpufreq_frequency_table *`` as a loop cursor and
"table" - the ``cpufreq_frequency_table *`` you want to iterate over.

For example::

	struct cpufreq_frequency_table *pos, *driver_freq_table;
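
	/*
	 * A complete loop following this pattern might look like the
	 * sketch below; the printout is only illustrative.
	 */
	cpufreq_for_each_valid_entry(pos, driver_freq_table)
		pr_debug("valid frequency: %u kHz\n", pos->frequency);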


@@ -1,19 +0,0 @@
The cpufreq-nforce2 driver changes the FSB on nVidia nForce2 platforms.
This works better than on other platforms, because the FSB of the CPU
can be controlled independently from the PCI/AGP clock.
The module has two options:
fid: multiplier * 10 (for example 8.5 = 85)
min_fsb: minimum FSB
If not set, fid is calculated from the current CPU speed and the FSB.
min_fsb defaults to FSB at boot time - 50 MHz.
IMPORTANT: The available range is limited downwards!
Also the minimum available FSB can differ, for systems
booting with 200 MHz, 150 should always work.


@@ -1,21 +1,23 @@
.. SPDX-License-Identifier: GPL-2.0

==========================================
General Description of sysfs CPUFreq Stats
==========================================

information for users

Author: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>

.. Contents

   1. Introduction
   2. Statistics Provided (with example)
   3. Configuring cpufreq-stats

1. Introduction
===============

cpufreq-stats is a driver that provides CPU frequency statistics for each CPU.
These statistics are provided in /sysfs as a bunch of read_only interfaces. This
@@ -28,8 +30,10 @@ that may be running on your CPU. So, it will work with any cpufreq_driver.

2. Statistics Provided (with example)
=====================================

cpufreq stats provides following statistics (explained in detail below).

- time_in_state
- total_trans
- trans_table
@@ -39,53 +43,57 @@ All the statistics will be from the time the stats driver has been inserted
statistic is done. Obviously, stats driver will not have any information
about the frequency transitions before the stats driver insertion.

::

    <mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # ls -l
    total 0
    drwxr-xr-x 2 root root    0 May 14 16:06 .
    drwxr-xr-x 3 root root    0 May 14 15:58 ..
    --w------- 1 root root 4096 May 14 16:06 reset
    -r--r--r-- 1 root root 4096 May 14 16:06 time_in_state
    -r--r--r-- 1 root root 4096 May 14 16:06 total_trans
    -r--r--r-- 1 root root 4096 May 14 16:06 trans_table

- **reset**

Write-only attribute that can be used to reset the stat counters. This can be
useful for evaluating system behaviour under different governors without the
need for a reboot.

- **time_in_state**

This gives the amount of time spent in each of the frequencies supported by
this CPU. The cat output will have "<frequency> <time>" pair in each line, which
will mean this CPU spent <time> usertime units of time at <frequency>. Output
will have one line for each of the supported frequencies. usertime units here
is 10mS (similar to other time exported in /proc).

::

    <mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # cat time_in_state
    3600000 2089
    3400000 136
    3200000 34
    3000000 67
    2800000 172488

- **total_trans**

This gives the total number of frequency transitions on this CPU. The cat
output will have a single count which is the total number of frequency
transitions.

::

    <mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # cat total_trans
    20

- **trans_table**

This will give a fine grained information about all the CPU frequency
transitions. The cat output here is a two dimensional matrix, where an entry
<i,j> (row i, column j) represents the count of number of transitions from
Freq_i to Freq_j. Freq_i rows and Freq_j columns follow the sorting order in
which the driver has provided the frequency table initially to the cpufreq core
and so can be sorted (ascending or descending) or unsorted. The output here
@@ -95,26 +103,27 @@ readability.

If the transition table is bigger than PAGE_SIZE, reading this will
return an -EFBIG error.

::

    <mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # cat trans_table
       From  :    To
             :   3600000   3400000   3200000   3000000   2800000
      3600000:         0         5         0         0         0
      3400000:         4         0         2         0         0
      3200000:         0         1         0         2         0
      3000000:         0         0         1         0         3
      2800000:         0         0         0         2         0

3. Configuring cpufreq-stats
============================

To configure cpufreq-stats in your kernel::

	Config Main Menu
		Power management options (ACPI, APM)  --->
			CPU Frequency scaling  --->
				[*] CPU Frequency scaling
				[*]   CPU frequency translation statistics

"CPU Frequency scaling" (CONFIG_CPU_FREQ) should be enabled to configure


@@ -0,0 +1,39 @@
.. SPDX-License-Identifier: GPL-2.0

==============================================================================
Linux CPUFreq - CPU frequency and voltage scaling code in the Linux(TM) kernel
==============================================================================

Author: Dominik Brodowski <linux@brodo.de>

Clock scaling allows you to change the clock speed of the CPUs on the
fly. This is a nice method to save battery power, because the lower
the clock speed, the less power the CPU consumes.

.. toctree::
   :maxdepth: 1

   core
   cpu-drivers
   cpufreq-stats

Mailing List
------------
There is a CPU frequency changing CVS commit and general list where
you can report bugs, problems or submit patches. To post a message,
send an email to linux-pm@vger.kernel.org.
Links
-----
the FTP archives:
* ftp://ftp.linux.org.uk/pub/linux/cpufreq/
how to access the CVS repository:
* http://cvs.arm.linux.org.uk/
the CPUFreq Mailing list:
* http://vger.kernel.org/vger-lists.html#linux-pm
Clock and voltage scaling for the SA-1100:
* http://www.lartmaker.nl/projects/scaling


@@ -1,56 +0,0 @@
CPU frequency and voltage scaling code in the Linux(TM) kernel
L i n u x C P U F r e q
Dominik Brodowski <linux@brodo.de>
Clock scaling allows you to change the clock speed of the CPUs on the
fly. This is a nice method to save battery power, because the lower
the clock speed, the less power the CPU consumes.
Documents in this directory:
----------------------------
amd-powernow.txt - AMD powernow driver specific file.
core.txt - General description of the CPUFreq core and
of CPUFreq notifiers.
cpu-drivers.txt - How to implement a new cpufreq processor driver.
cpufreq-nforce2.txt - nVidia nForce2 platform specific file.
cpufreq-stats.txt - General description of sysfs cpufreq stats.
index.txt - File index, Mailing list and Links (this document)
pcc-cpufreq.txt - PCC cpufreq driver specific file.
Mailing List
------------
There is a CPU frequency changing CVS commit and general list where
you can report bugs, problems or submit patches. To post a message,
send an email to linux-pm@vger.kernel.org.
Links
-----
the FTP archives:
* ftp://ftp.linux.org.uk/pub/linux/cpufreq/
how to access the CVS repository:
* http://cvs.arm.linux.org.uk/
the CPUFreq Mailing list:
* http://vger.kernel.org/vger-lists.html#linux-pm
Clock and voltage scaling for the SA-1100:
* http://www.lartmaker.nl/projects/scaling


@ -1,207 +0,0 @@
/*
* pcc-cpufreq.txt - PCC interface documentation
*
* Copyright (C) 2009 Red Hat, Matthew Garrett <mjg@redhat.com>
* Copyright (C) 2009 Hewlett-Packard Development Company, L.P.
* Nagananda Chumbalkar <nagananda.chumbalkar@hp.com>
*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; version 2 of the License.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or NON
* INFRINGEMENT. See the GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 675 Mass Ave, Cambridge, MA 02139, USA.
*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*/
Processor Clocking Control Driver
---------------------------------
Contents:
---------
1. Introduction
1.1 PCC interface
1.1.1 Get Average Frequency
1.1.2 Set Desired Frequency
1.2 Platforms affected
2. Driver and /sys details
2.1 scaling_available_frequencies
2.2 cpuinfo_transition_latency
2.3 cpuinfo_cur_freq
2.4 related_cpus
3. Caveats
1. Introduction:
----------------
Processor Clocking Control (PCC) is an interface between the platform
firmware and OSPM. It is a mechanism for coordinating processor
performance (ie: frequency) between the platform firmware and the OS.
The PCC driver (pcc-cpufreq) allows OSPM to take advantage of the PCC
interface.
OS utilizes the PCC interface to inform platform firmware what frequency the
OS wants for a logical processor. The platform firmware attempts to achieve
the requested frequency. If the request for the target frequency could not be
satisfied by platform firmware, then it usually means that power budget
conditions are in place, and "power capping" is taking place.
1.1 PCC interface:
------------------
The complete PCC specification is available here:
http://www.acpica.org/download/Processor-Clocking-Control-v1p0.pdf
PCC relies on a shared memory region that provides a channel for communication
between the OS and platform firmware. PCC also implements a "doorbell" that
is used by the OS to inform the platform firmware that a command has been
sent.
The ACPI PCCH() method is used to discover the location of the PCC shared
memory region. The shared memory region header contains the "command" and
"status" interface. PCCH() also contains details on how to access the platform
doorbell.
The following commands are supported by the PCC interface:
* Get Average Frequency
* Set Desired Frequency
The ACPI PCCP() method is implemented for each logical processor and is
used to discover the offsets for the input and output buffers in the shared
memory region.
When PCC mode is enabled, the platform will not expose processor performance
or throttle states (_PSS, _TSS and related ACPI objects) to OSPM. Therefore,
the native P-state driver (such as acpi-cpufreq for Intel, powernow-k8 for
AMD) will not load.
However, OSPM remains in control of policy. The governor (eg: "ondemand")
computes the required performance for each processor based on server workload.
The PCC driver fills in the command interface, and the input buffer and
communicates the request to the platform firmware. The platform firmware is
responsible for delivering the requested performance.
Each PCC command is "global" in scope and can affect all the logical CPUs in
the system. Therefore, PCC is capable of performing "group" updates. With PCC
the OS is capable of getting/setting the frequency of all the logical CPUs in
the system with a single call to the BIOS.
1.1.1 Get Average Frequency:
----------------------------
This command is used by the OSPM to query the running frequency of the
processor since the last time this command was completed. The output buffer
indicates the average unhalted frequency of the logical processor expressed as
a percentage of the nominal (ie: maximum) CPU frequency. The output buffer
also signifies if the CPU frequency is limited by a power budget condition.
1.1.2 Set Desired Frequency:
----------------------------
This command is used by the OSPM to communicate to the platform firmware the
desired frequency for a logical processor. The output buffer is currently
ignored by OSPM. The next invocation of "Get Average Frequency" will inform
OSPM if the desired frequency was achieved or not.
1.2 Platforms affected:
-----------------------
The PCC driver will load on any system where the platform firmware:
* supports the PCC interface, and the associated PCCH() and PCCP() methods
* assumes responsibility for managing the hardware clocking controls in order
to deliver the requested processor performance
Currently, certain HP ProLiant platforms implement the PCC interface. On those
platforms PCC is the "default" choice.
However, it is possible to disable this interface via a BIOS setting. In
such an instance, as is also the case on platforms where the PCC interface
is not implemented, the PCC driver will silently fail to load.
2. Driver and /sys details:
---------------------------
When the driver loads, it merely prints the lowest and the highest CPU
frequencies supported by the platform firmware.
The PCC driver loads with a message such as:
pcc-cpufreq: (v1.00.00) driver loaded with frequency limits: 1600 MHz, 2933 MHz

This means that the OSPM can request the CPU to run at any frequency in
between the limits (1600 MHz, and 2933 MHz) specified in the message.
Internally, there is no need for the driver to convert the "target" frequency
to a corresponding P-state.
The VERSION number for the driver will be of the format v.xy.ab.
eg: 1.00.02
    ----- --
      |    |
      |    -- this will increase with bug fixes/enhancements to the driver
      |-- this is the version of the PCC specification the driver adheres to
The following is a brief discussion on some of the fields exported via the
/sys filesystem and how their values are affected by the PCC driver:
2.1 scaling_available_frequencies:
----------------------------------
scaling_available_frequencies is not created in /sys. No intermediate
frequencies need to be listed because the BIOS will try to achieve any
frequency, within limits, requested by the governor. A frequency does not have
to be strictly associated with a P-state.
2.2 cpuinfo_transition_latency:
-------------------------------
The cpuinfo_transition_latency field is 0. The PCC specification does
not include a field to expose this value currently.
2.3 cpuinfo_cur_freq:
---------------------
A) Often cpuinfo_cur_freq will show a value different from what is declared
in scaling_available_frequencies, scaling_cur_freq, or scaling_max_freq.
This is due to "turbo boost" available on recent Intel processors. If certain
conditions are met the BIOS can achieve a slightly higher speed than requested
by OSPM. An example:
scaling_cur_freq : 2933000
cpuinfo_cur_freq : 3196000
B) There is a round-off error associated with the cpuinfo_cur_freq value.
Since the driver obtains the current frequency as a "percentage" (%) of the
nominal frequency from the BIOS, sometimes, the values displayed by
scaling_cur_freq and cpuinfo_cur_freq may not match. An example:
scaling_cur_freq : 1600000
cpuinfo_cur_freq : 1583000
In this example, the nominal frequency is 2933 MHz. The driver obtains the
current frequency, cpuinfo_cur_freq, as 54% of the nominal frequency:
54% of 2933 MHz = 1583 MHz
Nominal frequency is the maximum frequency of the processor, and it usually
corresponds to the frequency of the P0 P-state.
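
To make the round-off concrete, a small user-space sketch of the arithmetic
described above (values taken from the example)::

	/* The BIOS reports the current frequency as an integer percentage
	 * of the nominal (P0) frequency; the kHz value shown by
	 * cpuinfo_cur_freq is derived from it. */
	#include <stdio.h>

	int main(void)
	{
		unsigned int nominal_khz = 2933000; /* nominal (P0) frequency */
		unsigned int pct = 54;              /* percentage from the BIOS */

		/* 54% of 2933000 kHz = 1583820 kHz, i.e. ~1583 MHz, even
		 * though 1600000 kHz was requested via scaling_cur_freq. */
		printf("cpuinfo_cur_freq: %u kHz\n", nominal_khz * pct / 100);
		return 0;
	}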
2.4 related_cpus:
-----------------
The related_cpus field is identical to affected_cpus.
affected_cpus : 4
related_cpus : 4
Currently, the PCC driver does not evaluate _PSD. The platforms that support
PCC do not implement SW_ALL. So OSPM doesn't need to perform any coordination
to ensure that the same frequency is requested of all dependent CPUs.
3. Caveats:
-----------
The "cpufreq_stats" module in its present form cannot be loaded and
expected to work with the PCC driver. Since the "cpufreq_stats" module
provides information wrt each P-state, it is not applicable to the PCC driver.


@@ -19,7 +19,8 @@ In 'cpu' nodes:

 In 'operating-points-v2' table:
 - compatible: Should be
-	- 'operating-points-v2-kryo-cpu' for apq8096 and msm8996.
+	- 'operating-points-v2-kryo-cpu' for apq8096, msm8996, msm8974,
+	  apq8064, ipq8064, msm8960 and ipq8074.

 Optional properties:
 --------------------


@@ -99,6 +99,7 @@ needed).
    accounting/index
    block/index
    cdrom/index
+   cpu-freq/index
    ide/index
    fb/index
    fpga/index


@@ -7,86 +7,78 @@ performance expectations by drivers, subsystems and user space applications on
 one of the parameters.

 Two different PM QoS frameworks are available:
-1. PM QoS classes for cpu_dma_latency
-2. The per-device PM QoS framework provides the API to manage the
+ * CPU latency QoS.
+ * The per-device PM QoS framework provides the API to manage the
    per-device latency constraints and PM QoS flags.

-Each parameters have defined units:
-
- * latency: usec
- * timeout: usec
- * throughput: kbs (kilo bit / sec)
- * memory bandwidth: mbs (mega bit / sec)
+The latency unit used in the PM QoS framework is the microsecond (usec).


 1. PM QoS framework
 ===================

-The infrastructure exposes multiple misc device nodes one per implemented
-parameter.  The set of parameters implement is defined by pm_qos_power_init()
-and pm_qos_params.h.  This is done because having the available parameters
-being runtime configurable or changeable from a driver was seen as too easy to
-abuse.
-
-For each parameter a list of performance requests is maintained along with
-an aggregated target value.  The aggregated target value is updated with
-changes to the request list or elements of the list.  Typically the
-aggregated target value is simply the max or min of the request values held
-in the parameter list elements.
+A global list of CPU latency QoS requests is maintained along with an aggregated
+(effective) target value.  The aggregated target value is updated with changes
+to the request list or elements of the list.  For CPU latency QoS, the
+aggregated target value is simply the min of the request values held in the list
+elements.

 Note: the aggregated target value is implemented as an atomic variable so that
 reading the aggregated value does not require any locking mechanism.

+From kernel space the use of this interface is simple:

-From kernel mode the use of this interface is simple:
+void cpu_latency_qos_add_request(handle, target_value):
+Will insert an element into the CPU latency QoS list with the target value.
+Upon change to this list the new target is recomputed and any registered
+notifiers are called only if the target value is now different.
+Clients of PM QoS need to save the returned handle for future use in other
+PM QoS API functions.

-void pm_qos_add_request(handle, param_class, target_value):
-Will insert an element into the list for that identified PM QoS class with the
-target value.  Upon change to this list the new target is recomputed and any
-registered notifiers are called only if the target value is now different.
-Clients of pm_qos need to save the returned handle for future use in other
-pm_qos API functions.
-
-void pm_qos_update_request(handle, new_target_value):
+void cpu_latency_qos_update_request(handle, new_target_value):
 Will update the list element pointed to by the handle with the new target
 value and recompute the new aggregated target, calling the notification tree
 if the target is changed.

-void pm_qos_remove_request(handle):
+void cpu_latency_qos_remove_request(handle):
 Will remove the element.  After removal it will update the aggregate target
 and call the notification tree if the target was changed as a result of
 removing the request.

-int pm_qos_request(param_class):
-Returns the aggregated value for a given PM QoS class.
+int cpu_latency_qos_limit():
+Returns the aggregated value for the CPU latency QoS.

-int pm_qos_request_active(handle):
-Returns if the request is still active, i.e. it has not been removed from a
-PM QoS class constraints list.
+int cpu_latency_qos_request_active(handle):
+Returns if the request is still active, i.e. it has not been removed from the
+CPU latency QoS list.

-int pm_qos_add_notifier(param_class, notifier):
-Adds a notification callback function to the PM QoS class. The callback is
-called when the aggregated value for the PM QoS class is changed.
+int cpu_latency_qos_add_notifier(notifier):
+Adds a notification callback function to the CPU latency QoS. The callback is
+called when the aggregated value for the CPU latency QoS is changed.

-int pm_qos_remove_notifier(int param_class, notifier):
-Removes the notification callback function for the PM QoS class.
+int cpu_latency_qos_remove_notifier(notifier):
+Removes the notification callback function from the CPU latency QoS.


-From user mode:
-Only processes can register a pm_qos request.  To provide for automatic
+From user space:
+
+The infrastructure exposes one device node, /dev/cpu_dma_latency, for the CPU
+latency QoS.
+
+Only processes can register a PM QoS request.  To provide for automatic
 cleanup of a process, the interface requires the process to register its
-parameter requests in the following way:
+parameter requests as follows.

-To register the default pm_qos target for the specific parameter, the process
-must open /dev/cpu_dma_latency
+To register the default PM QoS target for the CPU latency QoS, the process must
+open /dev/cpu_dma_latency.

 As long as the device node is held open that process has a registered
 request on the parameter.

-To change the requested target value the process needs to write an s32 value to
-the open device node.  Alternatively the user mode program could write a hex
-string for the value using 10 char long format e.g. "0x12345678".  This
-translates to a pm_qos_update_request call.
+To change the requested target value, the process needs to write an s32 value to
+the open device node.  Alternatively, it can write a hex string for the value
+using the 10 char long format e.g. "0x12345678".  This translates to a
+cpu_latency_qos_update_request() call.

 To remove the user mode request for a target value simply close the device
 node.
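
For illustration, a minimal user-space sketch of the procedure the new text
describes (error handling trimmed; the request lasts for as long as the file
descriptor stays open)::

	#include <fcntl.h>
	#include <stdint.h>
	#include <unistd.h>

	int main(void)
	{
		int32_t latency_us = 30;  /* requested CPU latency target */
		int fd = open("/dev/cpu_dma_latency", O_RDWR);

		if (fd < 0)
			return 1;
		/* Writing an s32 translates to cpu_latency_qos_update_request(). */
		write(fd, &latency_us, sizeof(latency_us));
		pause();   /* do the latency-sensitive work here instead */
		close(fd); /* closing the node removes the request */
		return 0;
	}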


@@ -382,6 +382,12 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
      nonzero, increment the counter and return 1; otherwise return 0 without
      changing the counter

+`int pm_runtime_get_if_active(struct device *dev, bool ign_usage_count);`
+    - return -EINVAL if 'power.disable_depth' is nonzero; otherwise, if the
+      runtime PM status is RPM_ACTIVE, and either ign_usage_count is true
+      or the device's usage_count is non-zero, increment the counter and
+      return 1; otherwise return 0 without changing the counter
+
 `void pm_runtime_put_noidle(struct device *dev);`
     - decrement the device's usage counter
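
A hedged usage sketch (the foo_* names are illustrative, not from this
series): the new helper lets a driver touch the hardware only when the device
is already powered up, without resuming it::

	/* Poke a register only if the device is currently RPM_ACTIVE. */
	static void foo_hw_flush(struct foo_device *foo)
	{
		/* ign_usage_count == true: any RPM_ACTIVE device qualifies. */
		if (pm_runtime_get_if_active(foo->dev, true) <= 0)
			return; /* 0: not active; -EINVAL: runtime PM disabled */

		writel(FOO_FLUSH, foo->regs + FOO_CTRL);

		pm_runtime_put(foo->dev); /* put the usage count taken above */
	}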


@@ -69,11 +69,13 @@ SNAPSHOT_PREF_IMAGE_SIZE
 SNAPSHOT_GET_IMAGE_SIZE
 	return the actual size of the hibernation image
+	(the last argument should be a pointer to a loff_t variable that
+	will contain the result if the call is successful)

 SNAPSHOT_AVAIL_SWAP_SIZE
-	return the amount of available swap in bytes (the
-	last argument should be a pointer to an unsigned int variable that will
-	contain the result if the call is successful).
+	return the amount of available swap in bytes
+	(the last argument should be a pointer to a loff_t variable that
+	will contain the result if the call is successful)

 SNAPSHOT_ALLOC_SWAP_PAGE
 	allocate a swap page from the resume partition
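
A minimal user-space sketch of the corrected calling convention, assuming the
ioctl definitions from <linux/suspend_ioctls.h> (privileges and error handling
trimmed)::

	#include <stdio.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/suspend_ioctls.h>

	int main(void)
	{
		long long image = 0, swap = 0; /* both results are loff_t */
		int fd = open("/dev/snapshot", O_RDONLY);

		if (fd < 0)
			return 1;
		ioctl(fd, SNAPSHOT_GET_IMAGE_SIZE, &image);
		ioctl(fd, SNAPSHOT_AVAIL_SWAP_SIZE, &swap);
		printf("image: %lld bytes, free swap: %lld bytes\n", image, swap);
		close(fd);
		return 0;
	}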


@@ -73,16 +73,6 @@ The second parameter is the power domain target state.
 ================
 The PM QoS events are used for QoS add/update/remove request and for
 target/flags update.
-::
-
-  pm_qos_add_request                "pm_qos_class=%s value=%d"
-  pm_qos_update_request             "pm_qos_class=%s value=%d"
-  pm_qos_remove_request             "pm_qos_class=%s value=%d"
-  pm_qos_update_request_timeout     "pm_qos_class=%s value=%d, timeout_us=%ld"
-
-The first parameter gives the QoS class name (e.g. "CPU_DMA_LATENCY").
-The second parameter is value to be added/updated/removed.
-The third parameter is timeout value in usec.
 ::

   pm_qos_update_target              "action=%s prev_value=%d curr_value=%d"
@@ -92,7 +82,7 @@ The first parameter gives the QoS action name (e.g. "ADD_REQ").
 The second parameter is the previous QoS value.
 The third parameter is the current QoS value to update.

-And, there are also events used for device PM QoS add/update/remove request.
+There are also events used for device PM QoS add/update/remove request.
 ::

   dev_pm_qos_add_request            "device=%s type=%s new_value=%d"
@@ -103,3 +93,12 @@ The first parameter gives the device name which tries to add/update/remove
 QoS requests.
 The second parameter gives the request type (e.g. "DEV_PM_QOS_RESUME_LATENCY").
 The third parameter is value to be added/updated/removed.
+
+And, there are events used for CPU latency QoS add/update/remove request.
+::
+
+  pm_qos_add_request                "value=%d"
+  pm_qos_update_request             "value=%d"
+  pm_qos_remove_request             "value=%d"
+
+The parameter is the value to be added/updated/removed.


@@ -265,7 +265,7 @@ static void iosf_mbi_reset_semaphore(void)
 			iosf_mbi_sem_address, 0, PUNIT_SEMAPHORE_BIT))
 		dev_err(&mbi_pdev->dev, "Error P-Unit semaphore reset failed\n");

-	pm_qos_update_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_update_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE);

 	blocking_notifier_call_chain(&iosf_mbi_pmic_bus_access_notifier,
 				     MBI_PMIC_BUS_ACCESS_END, NULL);
@@ -301,8 +301,8 @@ static void iosf_mbi_reset_semaphore(void)
  * 4) When CPU cores enter C6 or C7 the P-Unit needs to talk to the PMIC
  *    if this happens while the kernel itself is accessing the PMIC I2C bus
  *    the SoC hangs.
- *    As the third step we call pm_qos_update_request() to disallow the CPU
- *    to enter C6 or C7.
+ *    As the third step we call cpu_latency_qos_update_request() to disallow the
+ *    CPU to enter C6 or C7.
  *
  * 5) The P-Unit has a PMIC bus semaphore which we can request to stop
  *    autonomous P-Unit tasks from accessing the PMIC I2C bus while we hold it.
@@ -338,7 +338,7 @@ int iosf_mbi_block_punit_i2c_access(void)
 	 * requires the P-Unit to talk to the PMIC and if this happens while
 	 * we're holding the semaphore, the SoC hangs.
 	 */
-	pm_qos_update_request(&iosf_mbi_pm_qos, 0);
+	cpu_latency_qos_update_request(&iosf_mbi_pm_qos, 0);

 	/* host driver writes to side band semaphore register */
 	ret = iosf_mbi_write(BT_MBI_UNIT_PMC, MBI_REG_WRITE,
@@ -547,8 +547,7 @@ static int __init iosf_mbi_init(void)
 {
 	iosf_debugfs_init();

-	pm_qos_add_request(&iosf_mbi_pm_qos, PM_QOS_CPU_DMA_LATENCY,
-			   PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_add_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE);

 	return pci_register_driver(&iosf_mbi_pci_driver);
 }
@@ -561,7 +560,7 @@ static void __exit iosf_mbi_exit(void)
 	pci_dev_put(mbi_pdev);
 	mbi_pdev = NULL;

-	pm_qos_remove_request(&iosf_mbi_pm_qos);
+	cpu_latency_qos_remove_request(&iosf_mbi_pm_qos);
 }

 module_init(iosf_mbi_init);


@@ -101,7 +101,7 @@ acpi_status acpi_hw_enable_all_runtime_gpes(void);

 acpi_status acpi_hw_enable_all_wakeup_gpes(void);

-u8 acpi_hw_check_all_gpes(void);
+u8 acpi_hw_check_all_gpes(acpi_handle gpe_skip_device, u32 gpe_skip_number);

 acpi_status
 acpi_hw_enable_runtime_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,


@@ -799,17 +799,19 @@ ACPI_EXPORT_SYMBOL(acpi_enable_all_wakeup_gpes)
  *
  * FUNCTION:    acpi_any_gpe_status_set
  *
- * PARAMETERS:  None
+ * PARAMETERS:  gpe_skip_number     - Number of the GPE to skip
  *
  * RETURN:      Whether or not the status bit is set for any GPE
  *
- * DESCRIPTION: Check the status bits of all enabled GPEs and return TRUE if any
- *              of them is set or FALSE otherwise.
+ * DESCRIPTION: Check the status bits of all enabled GPEs, except for the one
+ *              represented by the "skip" argument, and return TRUE if any of
+ *              them is set or FALSE otherwise.
  *
  ******************************************************************************/
-u32 acpi_any_gpe_status_set(void)
+u32 acpi_any_gpe_status_set(u32 gpe_skip_number)
 {
 	acpi_status status;
+	acpi_handle gpe_device;
 	u8 ret;

 	ACPI_FUNCTION_TRACE(acpi_any_gpe_status_set);
@@ -819,7 +821,12 @@ u32 acpi_any_gpe_status_set(void)
 		return (FALSE);
 	}

-	ret = acpi_hw_check_all_gpes();
+	status = acpi_get_gpe_device(gpe_skip_number, &gpe_device);
+	if (ACPI_FAILURE(status)) {
+		gpe_device = NULL;
+	}
+
+	ret = acpi_hw_check_all_gpes(gpe_device, gpe_skip_number);
 	(void)acpi_ut_release_mutex(ACPI_MTX_EVENTS);

 	return (ret);


@@ -444,12 +444,19 @@ acpi_hw_enable_wakeup_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
 	return (AE_OK);
 }

+struct acpi_gpe_block_status_context {
+	struct acpi_gpe_register_info *gpe_skip_register_info;
+	u8 gpe_skip_mask;
+	u8 retval;
+};
+
 /******************************************************************************
  *
  * FUNCTION:    acpi_hw_get_gpe_block_status
  *
  * PARAMETERS:  gpe_xrupt_info      - GPE Interrupt info
  *              gpe_block           - Gpe Block info
+ *              context             - GPE list walk context data
  *
  * RETURN:      Success
  *
@@ -460,12 +467,13 @@ acpi_hw_enable_wakeup_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
 static acpi_status
 acpi_hw_get_gpe_block_status(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
 			     struct acpi_gpe_block_info *gpe_block,
-			     void *ret_ptr)
+			     void *context)
 {
+	struct acpi_gpe_block_status_context *c = context;
 	struct acpi_gpe_register_info *gpe_register_info;
 	u64 in_enable, in_status;
 	acpi_status status;
-	u8 *ret = ret_ptr;
+	u8 ret_mask;
 	u32 i;

 	/* Examine each GPE Register within the block */
@@ -485,7 +493,11 @@ acpi_hw_get_gpe_block_status(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
 			continue;
 		}

-		*ret |= in_enable & in_status;
+		ret_mask = in_enable & in_status;
+		if (ret_mask && c->gpe_skip_register_info == gpe_register_info) {
+			ret_mask &= ~c->gpe_skip_mask;
+		}
+		c->retval |= ret_mask;
 	}

 	return (AE_OK);
@@ -561,24 +573,41 @@ acpi_status acpi_hw_enable_all_wakeup_gpes(void)
  *
  * FUNCTION:    acpi_hw_check_all_gpes
  *
- * PARAMETERS:  None
+ * PARAMETERS:  gpe_skip_device     - GPE device of the GPE to skip
+ *              gpe_skip_number     - Number of the GPE to skip
  *
  * RETURN:      Combined status of all GPEs
  *
- * DESCRIPTION: Check all enabled GPEs in all GPE blocks and return TRUE if the
+ * DESCRIPTION: Check all enabled GPEs in all GPE blocks, except for the one
+ *              represented by the "skip" arguments, and return TRUE if the
  *              status bit is set for at least one of them or FALSE otherwise.
  *
  ******************************************************************************/
-u8 acpi_hw_check_all_gpes(void)
+u8 acpi_hw_check_all_gpes(acpi_handle gpe_skip_device, u32 gpe_skip_number)
 {
-	u8 ret = 0;
+	struct acpi_gpe_block_status_context context = {
+		.gpe_skip_register_info = NULL,
+		.retval = 0,
+	};
+	struct acpi_gpe_event_info *gpe_event_info;
+	acpi_cpu_flags flags;

 	ACPI_FUNCTION_TRACE(acpi_hw_check_all_gpes);

-	(void)acpi_ev_walk_gpe_list(acpi_hw_get_gpe_block_status, &ret);
+	flags = acpi_os_acquire_lock(acpi_gbl_gpe_lock);
+
+	gpe_event_info = acpi_ev_get_gpe_event_info(gpe_skip_device,
+						    gpe_skip_number);
+	if (gpe_event_info) {
+		context.gpe_skip_register_info = gpe_event_info->register_info;
+		context.gpe_skip_mask = acpi_hw_get_gpe_register_bit(gpe_event_info);
+	}
+
+	acpi_os_release_lock(acpi_gbl_gpe_lock, flags);

-	return (ret != 0);
+	(void)acpi_ev_walk_gpe_list(acpi_hw_get_gpe_block_status, &context);
+	return (context.retval != 0);
 }

 #endif				/* !ACPI_REDUCED_HARDWARE */


@@ -2037,6 +2037,11 @@ void acpi_ec_set_gpe_wake_mask(u8 action)
 		acpi_set_gpe_wake_mask(NULL, first_ec->gpe, action);
 }

+bool acpi_ec_other_gpes_active(void)
+{
+	return acpi_any_gpe_status_set(first_ec ? first_ec->gpe : U32_MAX);
+}
+
 bool acpi_ec_dispatch_gpe(void)
 {
 	u32 ret;


@@ -202,6 +202,7 @@ void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit);

 #ifdef CONFIG_PM_SLEEP
 void acpi_ec_flush_work(void);
+bool acpi_ec_other_gpes_active(void);
 bool acpi_ec_dispatch_gpe(void);
 #endif


@@ -982,10 +982,7 @@ static int acpi_s2idle_prepare_late(void)

 static void acpi_s2idle_sync(void)
 {
-	/*
-	 * The EC driver uses the system workqueue and an additional special
-	 * one, so those need to be flushed too.
-	 */
+	/* The EC driver uses special workqueues that need to be flushed. */
 	acpi_ec_flush_work();
 	acpi_os_wait_events_complete(); /* synchronize Notify handling */
 }
@@ -1013,18 +1010,19 @@ static bool acpi_s2idle_wake(void)
 			return true;

 		/*
-		 * If there are no EC events to process and at least one of the
-		 * other enabled GPEs is active, the wakeup is regarded as a
-		 * genuine one.
-		 *
-		 * Note that the checks below must be carried out in this order
-		 * to avoid returning prematurely due to a change of the EC GPE
-		 * status bit from unset to set between the checks with the
-		 * status bits of all the other GPEs unset.
+		 * If the status bit is set for any enabled GPE other than the
+		 * EC one, the wakeup is regarded as a genuine one.
 		 */
-		if (acpi_any_gpe_status_set() && !acpi_ec_dispatch_gpe())
+		if (acpi_ec_other_gpes_active())
 			return true;

+		/*
+		 * If the EC GPE status bit has not been set, the wakeup is
+		 * regarded as a spurious one.
+		 */
+		if (!acpi_ec_dispatch_gpe())
+			return false;
+
 		/*
 		 * Cancel the wakeup and process all pending events in case
 		 * there are any wakeup ones in there.


@@ -2653,7 +2653,7 @@ static int genpd_iterate_idle_states(struct device_node *dn,

 	ret = of_count_phandle_with_args(dn, "domain-idle-states", NULL);
 	if (ret <= 0)
-		return ret;
+		return ret == -ENOENT ? 0 : ret;

 	/* Loop over the phandles until all the requested entry is found */
 	of_for_each_phandle(&it, ret, dn, "domain-idle-states", NULL, 0) {


@@ -40,6 +40,10 @@

 typedef int (*pm_callback_t)(struct device *);

+#define list_for_each_entry_rcu_locked(pos, head, member) \
+	list_for_each_entry_rcu(pos, head, member, \
+			device_links_read_lock_held())
+
 /*
  * The entries in the dpm_list list are in a depth first order, simply
  * because children are guaranteed to be discovered after parents, and
@@ -266,7 +270,7 @@ static void dpm_wait_for_suppliers(struct device *dev, bool async)
 	 * callbacks freeing the link objects for the links in the list we're
 	 * walking.
 	 */
-	list_for_each_entry_rcu(link, &dev->links.suppliers, c_node)
+	list_for_each_entry_rcu_locked(link, &dev->links.suppliers, c_node)
 		if (READ_ONCE(link->status) != DL_STATE_DORMANT)
 			dpm_wait(link->supplier, async);

@@ -323,7 +327,7 @@ static void dpm_wait_for_consumers(struct device *dev, bool async)
 	 * continue instead of trying to continue in parallel with its
 	 * unregistration).
 	 */
-	list_for_each_entry_rcu(link, &dev->links.consumers, s_node)
+	list_for_each_entry_rcu_locked(link, &dev->links.consumers, s_node)
 		if (READ_ONCE(link->status) != DL_STATE_DORMANT)
 			dpm_wait(link->consumer, async);

@@ -1235,7 +1239,7 @@ static void dpm_superior_set_must_resume(struct device *dev)
 	idx = device_links_read_lock();

-	list_for_each_entry_rcu(link, &dev->links.suppliers, c_node)
+	list_for_each_entry_rcu_locked(link, &dev->links.suppliers, c_node)
 		link->supplier->power.must_resume = true;

 	device_links_read_unlock(idx);
@@ -1695,7 +1699,7 @@ static void dpm_clear_superiors_direct_complete(struct device *dev)
 	idx = device_links_read_lock();

-	list_for_each_entry_rcu(link, &dev->links.suppliers, c_node) {
+	list_for_each_entry_rcu_locked(link, &dev->links.suppliers, c_node) {
 		spin_lock_irq(&link->supplier->power.lock);
 		link->supplier->power.direct_complete = false;
 		spin_unlock_irq(&link->supplier->power.lock);
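
A note on the pattern: list_for_each_entry_rcu() accepts an optional lockdep
expression when built-in RCU list checking (CONFIG_PROVE_RCU_LIST) is enabled.
Wrapping it with device_links_read_lock_held() documents the lock that
protects these walks, which silences the false-positive RCU-usage warnings
without changing behavior.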


@@ -1087,29 +1087,47 @@ int __pm_runtime_resume(struct device *dev, int rpmflags)
 EXPORT_SYMBOL_GPL(__pm_runtime_resume);

 /**
- * pm_runtime_get_if_in_use - Conditionally bump up the device's usage counter.
+ * pm_runtime_get_if_active - Conditionally bump up the device's usage counter.
  * @dev: Device to handle.
  *
  * Return -EINVAL if runtime PM is disabled for the device.
  *
- * If that's not the case and if the device's runtime PM status is RPM_ACTIVE
- * and the runtime PM usage counter is nonzero, increment the counter and
- * return 1. Otherwise return 0 without changing the counter.
+ * Otherwise, if the device's runtime PM status is RPM_ACTIVE and either
+ * ign_usage_count is true or the device's usage_count is non-zero, increment
+ * the counter and return 1. Otherwise return 0 without changing the counter.
+ *
+ * If ign_usage_count is true, the function can be used to prevent suspending
+ * the device when its runtime PM status is RPM_ACTIVE.
+ *
+ * If ign_usage_count is false, the function can be used to prevent suspending
+ * the device when both its runtime PM status is RPM_ACTIVE and its usage_count
+ * is non-zero.
+ *
+ * The caller is responsible for putting the device's usage count when the
+ * return value is greater than zero.
  */
-int pm_runtime_get_if_in_use(struct device *dev)
+int pm_runtime_get_if_active(struct device *dev, bool ign_usage_count)
 {
 	unsigned long flags;
 	int retval;

 	spin_lock_irqsave(&dev->power.lock, flags);
-	retval = dev->power.disable_depth > 0 ? -EINVAL :
-		dev->power.runtime_status == RPM_ACTIVE
-			&& atomic_inc_not_zero(&dev->power.usage_count);
+	if (dev->power.disable_depth > 0) {
+		retval = -EINVAL;
+	} else if (dev->power.runtime_status != RPM_ACTIVE) {
+		retval = 0;
+	} else if (ign_usage_count) {
+		retval = 1;
+		atomic_inc(&dev->power.usage_count);
+	} else {
+		retval = atomic_inc_not_zero(&dev->power.usage_count);
+	}
 	trace_rpm_usage_rcuidle(dev, 0);
 	spin_unlock_irqrestore(&dev->power.lock, flags);
 	return retval;
 }
-EXPORT_SYMBOL_GPL(pm_runtime_get_if_in_use);
+EXPORT_SYMBOL_GPL(pm_runtime_get_if_active);

 /**
  * __pm_runtime_set_status - Set runtime PM status of a device.


@@ -24,6 +24,9 @@ suspend_state_t pm_suspend_target_state;
 #define pm_suspend_target_state	(PM_SUSPEND_ON)
 #endif

+#define list_for_each_entry_rcu_locked(pos, head, member) \
+	list_for_each_entry_rcu(pos, head, member, \
+		srcu_read_lock_held(&wakeup_srcu))
+
 /*
  * If set, the suspend/hibernate code will abort transitions to a sleep state
  * if wakeup events are registered during or immediately before the transition.
@@ -241,7 +244,9 @@ void wakeup_source_unregister(struct wakeup_source *ws)
 {
 	if (ws) {
 		wakeup_source_remove(ws);
-		wakeup_source_sysfs_remove(ws);
+		if (ws->dev)
+			wakeup_source_sysfs_remove(ws);
+
 		wakeup_source_destroy(ws);
 	}
 }
@@ -405,7 +410,7 @@ void device_wakeup_arm_wake_irqs(void)
 	int srcuidx;

 	srcuidx = srcu_read_lock(&wakeup_srcu);
-	list_for_each_entry_rcu(ws, &wakeup_sources, entry)
+	list_for_each_entry_rcu_locked(ws, &wakeup_sources, entry)
 		dev_pm_arm_wake_irq(ws->wakeirq);
 	srcu_read_unlock(&wakeup_srcu, srcuidx);
 }
@@ -421,7 +426,7 @@ void device_wakeup_disarm_wake_irqs(void)
 	int srcuidx;

 	srcuidx = srcu_read_lock(&wakeup_srcu);
-	list_for_each_entry_rcu(ws, &wakeup_sources, entry)
+	list_for_each_entry_rcu_locked(ws, &wakeup_sources, entry)
 		dev_pm_disarm_wake_irq(ws->wakeirq);
 	srcu_read_unlock(&wakeup_srcu, srcuidx);
 }
@@ -874,7 +879,7 @@ void pm_print_active_wakeup_sources(void)
 	struct wakeup_source *last_activity_ws = NULL;

 	srcuidx = srcu_read_lock(&wakeup_srcu);
-	list_for_each_entry_rcu(ws, &wakeup_sources, entry) {
+	list_for_each_entry_rcu_locked(ws, &wakeup_sources, entry) {
 		if (ws->active) {
 			pm_pr_dbg("active wakeup source: %s\n", ws->name);
 			active = 1;
@@ -1025,7 +1030,7 @@ void pm_wakep_autosleep_enabled(bool set)
 	int srcuidx;

 	srcuidx = srcu_read_lock(&wakeup_srcu);
-	list_for_each_entry_rcu(ws, &wakeup_sources, entry) {
+	list_for_each_entry_rcu_locked(ws, &wakeup_sources, entry) {
 		spin_lock_irq(&ws->lock);
 		if (ws->autosleep_enabled != set) {
 			ws->autosleep_enabled = set;
@@ -1104,7 +1109,7 @@ static void *wakeup_sources_stats_seq_start(struct seq_file *m,
 	}

 	*srcuidx = srcu_read_lock(&wakeup_srcu);
-	list_for_each_entry_rcu(ws, &wakeup_sources, entry) {
+	list_for_each_entry_rcu_locked(ws, &wakeup_sources, entry) {
 		if (n-- <= 0)
 			return ws;
 	}


@@ -128,7 +128,7 @@ config ARM_OMAP2PLUS_CPUFREQ

 config ARM_QCOM_CPUFREQ_NVMEM
 	tristate "Qualcomm nvmem based CPUFreq"
-	depends on ARM64
+	depends on ARCH_QCOM
 	depends on QCOM_QFPROM
 	depends on QCOM_SMEM
 	select PM_OPP


@@ -25,7 +25,7 @@ config X86_PCC_CPUFREQ
 	  This driver adds support for the PCC interface.

 	  For details, take a look at:
-	  <file:Documentation/cpu-freq/pcc-cpufreq.txt>.
+	  <file:Documentation/admin-guide/pm/cpufreq_drivers.rst>.

 	  To compile this driver as a module, choose M here: the
 	  module will be called pcc-cpufreq.


@@ -141,6 +141,11 @@ static const struct of_device_id blacklist[] __initconst = {
 	{ .compatible = "ti,dra7", },
 	{ .compatible = "ti,omap3", },

+	{ .compatible = "qcom,ipq8064", },
+	{ .compatible = "qcom,apq8064", },
+	{ .compatible = "qcom,msm8974", },
+	{ .compatible = "qcom,msm8960", },
+
 	{ }
 };
}; };


@@ -363,6 +363,10 @@ static int dt_cpufreq_probe(struct platform_device *pdev)
 			dt_cpufreq_driver.resume = data->resume;
 		if (data->suspend)
 			dt_cpufreq_driver.suspend = data->suspend;
+		if (data->get_intermediate) {
+			dt_cpufreq_driver.target_intermediate = data->target_intermediate;
+			dt_cpufreq_driver.get_intermediate = data->get_intermediate;
+		}
 	}

 	ret = cpufreq_register_driver(&dt_cpufreq_driver);


@@ -14,6 +14,10 @@ struct cpufreq_policy;

 struct cpufreq_dt_platform_data {
 	bool have_governor_per_policy;
+	unsigned int (*get_intermediate)(struct cpufreq_policy *policy,
+					 unsigned int index);
+	int (*target_intermediate)(struct cpufreq_policy *policy,
+				   unsigned int index);

 	int (*suspend)(struct cpufreq_policy *policy);
 	int (*resume)(struct cpufreq_policy *policy);
 };
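
A hedged sketch of how a platform might fill these hooks (the foo_* names and
the frequency value are illustrative, not from this series): cpufreq-dt copies
them into its cpufreq_driver, so the core routes each transition through the
intermediate frequency first::

	#define FOO_INTERMEDIATE_FREQ_KHZ	792000 /* illustrative */

	/* Called by the cpufreq core before each transition; returning 0
	 * would skip the intermediate step for this target index. */
	static unsigned int foo_get_intermediate(struct cpufreq_policy *policy,
						 unsigned int index)
	{
		return FOO_INTERMEDIATE_FREQ_KHZ;
	}

	/* Switch the CPU clock to the intermediate rate (elided). */
	static int foo_target_intermediate(struct cpufreq_policy *policy,
					   unsigned int index)
	{
		return 0;
	}

	static struct cpufreq_dt_platform_data foo_pdata = {
		.get_intermediate	= foo_get_intermediate,
		.target_intermediate	= foo_target_intermediate,
	};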


@@ -90,35 +90,35 @@ static ssize_t show_trans_table(struct cpufreq_policy *policy, char *buf)
 	if (policy->fast_switch_enabled)
 		return 0;

-	len += snprintf(buf + len, PAGE_SIZE - len, "   From  :    To\n");
-	len += snprintf(buf + len, PAGE_SIZE - len, "         : ");
+	len += scnprintf(buf + len, PAGE_SIZE - len, "   From  :    To\n");
+	len += scnprintf(buf + len, PAGE_SIZE - len, "         : ");
 	for (i = 0; i < stats->state_num; i++) {
 		if (len >= PAGE_SIZE)
 			break;
-		len += snprintf(buf + len, PAGE_SIZE - len, "%9u ",
+		len += scnprintf(buf + len, PAGE_SIZE - len, "%9u ",
 				stats->freq_table[i]);
 	}
 	if (len >= PAGE_SIZE)
 		return PAGE_SIZE;

-	len += snprintf(buf + len, PAGE_SIZE - len, "\n");
+	len += scnprintf(buf + len, PAGE_SIZE - len, "\n");

 	for (i = 0; i < stats->state_num; i++) {
 		if (len >= PAGE_SIZE)
 			break;

-		len += snprintf(buf + len, PAGE_SIZE - len, "%9u: ",
+		len += scnprintf(buf + len, PAGE_SIZE - len, "%9u: ",
 				stats->freq_table[i]);

 		for (j = 0; j < stats->state_num; j++) {
 			if (len >= PAGE_SIZE)
 				break;
-			len += snprintf(buf + len, PAGE_SIZE - len, "%9u ",
+			len += scnprintf(buf + len, PAGE_SIZE - len, "%9u ",
 					stats->trans_table[i*stats->max_state+j]);
 		}
 		if (len >= PAGE_SIZE)
 			break;
-		len += snprintf(buf + len, PAGE_SIZE - len, "\n");
+		len += scnprintf(buf + len, PAGE_SIZE - len, "\n");
 	}

 	if (len >= PAGE_SIZE) {
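
The difference the conversion addresses, shown with the user-space analogue:
snprintf() returns the length the output would have had, so accumulating its
return value can push 'len' past the end of the buffer, while the kernel's
scnprintf() returns the number of characters actually written::

	#include <stdio.h>

	int main(void)
	{
		char buf[8];
		int len = snprintf(buf, sizeof(buf), "%s", "0123456789");

		/* Prints 10, larger than the buffer: using 'buf + len' as
		 * the next write position would point out of bounds. */
		printf("%d\n", len);
		return 0;
	}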


@@ -19,6 +19,8 @@
 #define IMX8MN_OCOTP_CFG3_SPEED_GRADE_MASK	(0xf << 8)
 #define OCOTP_CFG3_MKT_SEGMENT_SHIFT	6
 #define OCOTP_CFG3_MKT_SEGMENT_MASK	(0x3 << 6)
+#define IMX8MP_OCOTP_CFG3_MKT_SEGMENT_SHIFT	5
+#define IMX8MP_OCOTP_CFG3_MKT_SEGMENT_MASK	(0x3 << 5)

 /* cpufreq-dt device registered by imx-cpufreq-dt */
 static struct platform_device *cpufreq_dt_pdev;
@@ -31,6 +33,9 @@ static int imx_cpufreq_dt_probe(struct platform_device *pdev)
 	int speed_grade, mkt_segment;
 	int ret;

+	if (!of_find_property(cpu_dev->of_node, "cpu-supply", NULL))
+		return -ENODEV;
+
 	ret = nvmem_cell_read_u32(cpu_dev, "speed_grade", &cell_value);
 	if (ret)
 		return ret;
@@ -42,7 +47,13 @@ static int imx_cpufreq_dt_probe(struct platform_device *pdev)
 	else
 		speed_grade = (cell_value & OCOTP_CFG3_SPEED_GRADE_MASK)
 			      >> OCOTP_CFG3_SPEED_GRADE_SHIFT;
-	mkt_segment = (cell_value & OCOTP_CFG3_MKT_SEGMENT_MASK) >> OCOTP_CFG3_MKT_SEGMENT_SHIFT;
+
+	if (of_machine_is_compatible("fsl,imx8mp"))
+		mkt_segment = (cell_value & IMX8MP_OCOTP_CFG3_MKT_SEGMENT_MASK)
+			      >> IMX8MP_OCOTP_CFG3_MKT_SEGMENT_SHIFT;
+	else
+		mkt_segment = (cell_value & OCOTP_CFG3_MKT_SEGMENT_MASK)
+			      >> OCOTP_CFG3_MKT_SEGMENT_SHIFT;

 	/*
 	 * Early samples without fuses written report "0 0" which may NOT


@@ -216,31 +216,41 @@ static struct cpufreq_driver imx6q_cpufreq_driver = {
 #define OCOTP_CFG3_SPEED_996MHZ		0x2
 #define OCOTP_CFG3_SPEED_852MHZ		0x1

-static void imx6q_opp_check_speed_grading(struct device *dev)
+static int imx6q_opp_check_speed_grading(struct device *dev)
 {
 	struct device_node *np;
 	void __iomem *base;
 	u32 val;
+	int ret;

-	np = of_find_compatible_node(NULL, NULL, "fsl,imx6q-ocotp");
-	if (!np)
-		return;
+	if (of_find_property(dev->of_node, "nvmem-cells", NULL)) {
+		ret = nvmem_cell_read_u32(dev, "speed_grade", &val);
+		if (ret)
+			return ret;
+	} else {
+		np = of_find_compatible_node(NULL, NULL, "fsl,imx6q-ocotp");
+		if (!np)
+			return -ENOENT;

-	base = of_iomap(np, 0);
-	if (!base) {
-		dev_err(dev, "failed to map ocotp\n");
-		goto put_node;
+		base = of_iomap(np, 0);
+		of_node_put(np);
+		if (!base) {
+			dev_err(dev, "failed to map ocotp\n");
+			return -EFAULT;
+		}
+
+		/*
+		 * SPEED_GRADING[1:0] defines the max speed of ARM:
+		 * 2b'11: 1200000000Hz;
+		 * 2b'10: 996000000Hz;
+		 * 2b'01: 852000000Hz; -- i.MX6Q Only, exclusive with 996MHz.
+		 * 2b'00: 792000000Hz;
+		 * We need to set the max speed of ARM according to fuse map.
+		 */
+		val = readl_relaxed(base + OCOTP_CFG3);
+		iounmap(base);
 	}

-	/*
-	 * SPEED_GRADING[1:0] defines the max speed of ARM:
-	 * 2b'11: 1200000000Hz;
-	 * 2b'10: 996000000Hz;
-	 * 2b'01: 852000000Hz; -- i.MX6Q Only, exclusive with 996MHz.
-	 * 2b'00: 792000000Hz;
-	 * We need to set the max speed of ARM according to fuse map.
-	 */
-	val = readl_relaxed(base + OCOTP_CFG3);
 	val >>= OCOTP_CFG3_SPEED_SHIFT;
 	val &= 0x3;

@@ -257,9 +267,8 @@ static void imx6q_opp_check_speed_grading(struct device *dev)
 		if (dev_pm_opp_disable(dev, 1200000000))
 			dev_warn(dev, "failed to disable 1.2GHz OPP\n");
 	}
-	iounmap(base);
-put_node:
-	of_node_put(np);
+
+	return 0;
 }

 #define OCOTP_CFG3_6UL_SPEED_696MHZ	0x2
@@ -280,6 +289,9 @@ static int imx6ul_opp_check_speed_grading(struct device *dev)
 	void __iomem *base;

 	np = of_find_compatible_node(NULL, NULL, "fsl,imx6ul-ocotp");
+	if (!np)
+		np = of_find_compatible_node(NULL, NULL,
+					     "fsl,imx6ull-ocotp");
 	if (!np)
 		return -ENOENT;

@@ -378,23 +390,22 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
 		goto put_reg;
 	}

+	/* Because we have added the OPPs here, we must free them */
+	free_opp = true;
+
 	if (of_machine_is_compatible("fsl,imx6ul") ||
 	    of_machine_is_compatible("fsl,imx6ull")) {
 		ret = imx6ul_opp_check_speed_grading(cpu_dev);
-		if (ret) {
-			if (ret == -EPROBE_DEFER)
-				goto put_node;
+	} else {
+		ret = imx6q_opp_check_speed_grading(cpu_dev);
+	}

+	if (ret) {
+		if (ret != -EPROBE_DEFER)
 			dev_err(cpu_dev, "failed to read ocotp: %d\n",
 				ret);
-			goto put_node;
-		}
-	} else {
-		imx6q_opp_check_speed_grading(cpu_dev);
+		goto out_free_opp;
 	}

-	/* Because we have added the OPPs here, we must free them */
-	free_opp = true;
 	num = dev_pm_opp_get_opp_count(cpu_dev);
 	if (num < 0) {
 		ret = num;


@@ -2155,15 +2155,19 @@ static void intel_pstate_adjust_policy_max(struct cpudata *cpu,
 	}
 }

-static int intel_pstate_verify_policy(struct cpufreq_policy_data *policy)
+static void intel_pstate_verify_cpu_policy(struct cpudata *cpu,
+					   struct cpufreq_policy_data *policy)
 {
-	struct cpudata *cpu = all_cpu_data[policy->cpu];
-
 	update_turbo_state();
 	cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq,
 				     intel_pstate_get_max_freq(cpu));

 	intel_pstate_adjust_policy_max(cpu, policy);
+}
+
+static int intel_pstate_verify_policy(struct cpufreq_policy_data *policy)
+{
+	intel_pstate_verify_cpu_policy(all_cpu_data[policy->cpu], policy);

 	return 0;
 }
@@ -2243,10 +2247,11 @@ static int intel_pstate_cpu_init(struct cpufreq_policy *policy)
 	if (ret)
 		return ret;

-	if (IS_ENABLED(CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE))
-		policy->policy = CPUFREQ_POLICY_PERFORMANCE;
-	else
-		policy->policy = CPUFREQ_POLICY_POWERSAVE;
+	/*
+	 * Set the policy to powersave to provide a valid fallback value in case
+	 * the default cpufreq governor is neither powersave nor performance.
+	 */
+	policy->policy = CPUFREQ_POLICY_POWERSAVE;

 	return 0;
 }
@@ -2268,12 +2273,7 @@ static int intel_cpufreq_verify_policy(struct cpufreq_policy_data *policy)
 {
 	struct cpudata *cpu = all_cpu_data[policy->cpu];

-	update_turbo_state();
-	cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq,
-				     intel_pstate_get_max_freq(cpu));
-
-	intel_pstate_adjust_policy_max(cpu, policy);
-
+	intel_pstate_verify_cpu_policy(cpu, policy);
 	intel_pstate_update_perf_limits(cpu, policy->min, policy->max);

 	return 0;


@ -49,12 +49,14 @@ struct qcom_cpufreq_drv;
struct qcom_cpufreq_match_data { struct qcom_cpufreq_match_data {
int (*get_version)(struct device *cpu_dev, int (*get_version)(struct device *cpu_dev,
struct nvmem_cell *speedbin_nvmem, struct nvmem_cell *speedbin_nvmem,
char **pvs_name,
struct qcom_cpufreq_drv *drv); struct qcom_cpufreq_drv *drv);
const char **genpd_names; const char **genpd_names;
}; };
struct qcom_cpufreq_drv { struct qcom_cpufreq_drv {
struct opp_table **opp_tables; struct opp_table **names_opp_tables;
struct opp_table **hw_opp_tables;
struct opp_table **genpd_opp_tables; struct opp_table **genpd_opp_tables;
u32 versions; u32 versions;
const struct qcom_cpufreq_match_data *data; const struct qcom_cpufreq_match_data *data;
@ -62,6 +64,84 @@ struct qcom_cpufreq_drv {
static struct platform_device *cpufreq_dt_pdev, *cpufreq_pdev; static struct platform_device *cpufreq_dt_pdev, *cpufreq_pdev;
static void get_krait_bin_format_a(struct device *cpu_dev,
int *speed, int *pvs, int *pvs_ver,
struct nvmem_cell *pvs_nvmem, u8 *buf)
{
u32 pte_efuse;
pte_efuse = *((u32 *)buf);
*speed = pte_efuse & 0xf;
if (*speed == 0xf)
*speed = (pte_efuse >> 4) & 0xf;
if (*speed == 0xf) {
*speed = 0;
dev_warn(cpu_dev, "Speed bin: Defaulting to %d\n", *speed);
} else {
dev_dbg(cpu_dev, "Speed bin: %d\n", *speed);
}
*pvs = (pte_efuse >> 10) & 0x7;
if (*pvs == 0x7)
*pvs = (pte_efuse >> 13) & 0x7;
if (*pvs == 0x7) {
*pvs = 0;
dev_warn(cpu_dev, "PVS bin: Defaulting to %d\n", *pvs);
} else {
dev_dbg(cpu_dev, "PVS bin: %d\n", *pvs);
}
}
static void get_krait_bin_format_b(struct device *cpu_dev,
int *speed, int *pvs, int *pvs_ver,
struct nvmem_cell *pvs_nvmem, u8 *buf)
{
u32 pte_efuse, redundant_sel;
pte_efuse = *((u32 *)buf);
redundant_sel = (pte_efuse >> 24) & 0x7;
*pvs_ver = (pte_efuse >> 4) & 0x3;
switch (redundant_sel) {
case 1:
*pvs = ((pte_efuse >> 28) & 0x8) | ((pte_efuse >> 6) & 0x7);
*speed = (pte_efuse >> 27) & 0xf;
break;
case 2:
*pvs = (pte_efuse >> 27) & 0xf;
*speed = pte_efuse & 0x7;
break;
default:
/* 4 bits of PVS are in efuse register bits 31, 8-6. */
*pvs = ((pte_efuse >> 28) & 0x8) | ((pte_efuse >> 6) & 0x7);
*speed = pte_efuse & 0x7;
}
/* Check SPEED_BIN_BLOW_STATUS */
if (pte_efuse & BIT(3)) {
dev_dbg(cpu_dev, "Speed bin: %d\n", *speed);
} else {
dev_warn(cpu_dev, "Speed bin not set. Defaulting to 0!\n");
*speed = 0;
}
/* Check PVS_BLOW_STATUS */
pte_efuse = *(((u32 *)buf) + 4);
pte_efuse &= BIT(21);
if (pte_efuse) {
dev_dbg(cpu_dev, "PVS bin: %d\n", *pvs);
} else {
dev_warn(cpu_dev, "PVS bin not set. Defaulting to 0!\n");
*pvs = 0;
}
dev_dbg(cpu_dev, "PVS version: %d\n", *pvs_ver);
}
static enum _msm8996_version qcom_cpufreq_get_msm_id(void) static enum _msm8996_version qcom_cpufreq_get_msm_id(void)
{ {
size_t len; size_t len;
@ -93,11 +173,13 @@ static enum _msm8996_version qcom_cpufreq_get_msm_id(void)
static int qcom_cpufreq_kryo_name_version(struct device *cpu_dev, static int qcom_cpufreq_kryo_name_version(struct device *cpu_dev,
struct nvmem_cell *speedbin_nvmem, struct nvmem_cell *speedbin_nvmem,
char **pvs_name,
struct qcom_cpufreq_drv *drv) struct qcom_cpufreq_drv *drv)
{ {
size_t len; size_t len;
u8 *speedbin; u8 *speedbin;
enum _msm8996_version msm8996_version; enum _msm8996_version msm8996_version;
*pvs_name = NULL;
msm8996_version = qcom_cpufreq_get_msm_id(); msm8996_version = qcom_cpufreq_get_msm_id();
if (NUM_OF_MSM8996_VERSIONS == msm8996_version) { if (NUM_OF_MSM8996_VERSIONS == msm8996_version) {
@ -125,10 +207,51 @@ static int qcom_cpufreq_kryo_name_version(struct device *cpu_dev,
return 0; return 0;
} }
static int qcom_cpufreq_krait_name_version(struct device *cpu_dev,
struct nvmem_cell *speedbin_nvmem,
char **pvs_name,
struct qcom_cpufreq_drv *drv)
{
int speed = 0, pvs = 0, pvs_ver = 0;
u8 *speedbin;
size_t len;
speedbin = nvmem_cell_read(speedbin_nvmem, &len);
if (IS_ERR(speedbin))
return PTR_ERR(speedbin);
switch (len) {
case 4:
get_krait_bin_format_a(cpu_dev, &speed, &pvs, &pvs_ver,
speedbin_nvmem, speedbin);
break;
case 8:
get_krait_bin_format_b(cpu_dev, &speed, &pvs, &pvs_ver,
speedbin_nvmem, speedbin);
break;
default:
dev_err(cpu_dev, "Unable to read nvmem data. Defaulting to 0!\n");
return -ENODEV;
}
snprintf(*pvs_name, sizeof("speedXX-pvsXX-vXX"), "speed%d-pvs%d-v%d",
speed, pvs, pvs_ver);
drv->versions = (1 << speed);
kfree(speedbin);
return 0;
}
static const struct qcom_cpufreq_match_data match_data_kryo = { static const struct qcom_cpufreq_match_data match_data_kryo = {
.get_version = qcom_cpufreq_kryo_name_version, .get_version = qcom_cpufreq_kryo_name_version,
}; };
static const struct qcom_cpufreq_match_data match_data_krait = {
.get_version = qcom_cpufreq_krait_name_version,
};
static const char *qcs404_genpd_names[] = { "cpr", NULL }; static const char *qcs404_genpd_names[] = { "cpr", NULL };
static const struct qcom_cpufreq_match_data match_data_qcs404 = { static const struct qcom_cpufreq_match_data match_data_qcs404 = {
@ -141,6 +264,7 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
struct nvmem_cell *speedbin_nvmem; struct nvmem_cell *speedbin_nvmem;
struct device_node *np; struct device_node *np;
struct device *cpu_dev; struct device *cpu_dev;
char *pvs_name = "speedXX-pvsXX-vXX";
unsigned cpu; unsigned cpu;
const struct of_device_id *match; const struct of_device_id *match;
int ret; int ret;
@ -153,7 +277,7 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
if (!np) if (!np)
return -ENOENT; return -ENOENT;
ret = of_device_is_compatible(np, "operating-points-v2-kryo-cpu"); ret = of_device_is_compatible(np, "operating-points-v2-qcom-cpu");
if (!ret) { if (!ret) {
of_node_put(np); of_node_put(np);
return -ENOENT; return -ENOENT;
@ -181,7 +305,8 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
goto free_drv; goto free_drv;
} }
ret = drv->data->get_version(cpu_dev, speedbin_nvmem, drv); ret = drv->data->get_version(cpu_dev,
speedbin_nvmem, &pvs_name, drv);
if (ret) { if (ret) {
nvmem_cell_put(speedbin_nvmem); nvmem_cell_put(speedbin_nvmem);
goto free_drv; goto free_drv;
@ -190,12 +315,20 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
} }
of_node_put(np); of_node_put(np);
drv->opp_tables = kcalloc(num_possible_cpus(), sizeof(*drv->opp_tables), drv->names_opp_tables = kcalloc(num_possible_cpus(),
-				  GFP_KERNEL);
-	if (!drv->opp_tables) {
+					sizeof(*drv->names_opp_tables),
+					GFP_KERNEL);
+	if (!drv->names_opp_tables) {
 		ret = -ENOMEM;
 		goto free_drv;
 	}

+	drv->hw_opp_tables = kcalloc(num_possible_cpus(),
+				     sizeof(*drv->hw_opp_tables),
+				     GFP_KERNEL);
+	if (!drv->hw_opp_tables) {
+		ret = -ENOMEM;
+		goto free_opp_names;
+	}
+
 	drv->genpd_opp_tables = kcalloc(num_possible_cpus(),
 					sizeof(*drv->genpd_opp_tables),
@@ -213,11 +346,23 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
 	}

 	if (drv->data->get_version) {
-		drv->opp_tables[cpu] =
-			dev_pm_opp_set_supported_hw(cpu_dev,
-						    &drv->versions, 1);
-		if (IS_ERR(drv->opp_tables[cpu])) {
-			ret = PTR_ERR(drv->opp_tables[cpu]);
+		if (pvs_name) {
+			drv->names_opp_tables[cpu] = dev_pm_opp_set_prop_name(
+								cpu_dev,
+								pvs_name);
+			if (IS_ERR(drv->names_opp_tables[cpu])) {
+				ret = PTR_ERR(drv->names_opp_tables[cpu]);
+				dev_err(cpu_dev, "Failed to add OPP name %s\n",
+					pvs_name);
+				goto free_opp;
+			}
+		}
+
+		drv->hw_opp_tables[cpu] = dev_pm_opp_set_supported_hw(
+						cpu_dev, &drv->versions, 1);
+		if (IS_ERR(drv->hw_opp_tables[cpu])) {
+			ret = PTR_ERR(drv->hw_opp_tables[cpu]);
 			dev_err(cpu_dev,
 				"Failed to set supported hardware\n");
 			goto free_genpd_opp;
@@ -259,11 +404,18 @@ free_genpd_opp:
 	kfree(drv->genpd_opp_tables);
 free_opp:
 	for_each_possible_cpu(cpu) {
-		if (IS_ERR_OR_NULL(drv->opp_tables[cpu]))
+		if (IS_ERR_OR_NULL(drv->names_opp_tables[cpu]))
 			break;
-		dev_pm_opp_put_supported_hw(drv->opp_tables[cpu]);
+		dev_pm_opp_put_prop_name(drv->names_opp_tables[cpu]);
 	}
-	kfree(drv->opp_tables);
+	for_each_possible_cpu(cpu) {
+		if (IS_ERR_OR_NULL(drv->hw_opp_tables[cpu]))
+			break;
+		dev_pm_opp_put_supported_hw(drv->hw_opp_tables[cpu]);
+	}
+	kfree(drv->hw_opp_tables);
+free_opp_names:
+	kfree(drv->names_opp_tables);
 free_drv:
 	kfree(drv);
@@ -278,13 +430,16 @@ static int qcom_cpufreq_remove(struct platform_device *pdev)
 	platform_device_unregister(cpufreq_dt_pdev);

 	for_each_possible_cpu(cpu) {
-		if (drv->opp_tables[cpu])
-			dev_pm_opp_put_supported_hw(drv->opp_tables[cpu]);
+		if (drv->names_opp_tables[cpu])
+			dev_pm_opp_put_supported_hw(drv->names_opp_tables[cpu]);
+		if (drv->hw_opp_tables[cpu])
+			dev_pm_opp_put_supported_hw(drv->hw_opp_tables[cpu]);
 		if (drv->genpd_opp_tables[cpu])
 			dev_pm_opp_detach_genpd(drv->genpd_opp_tables[cpu]);
 	}

-	kfree(drv->opp_tables);
+	kfree(drv->names_opp_tables);
+	kfree(drv->hw_opp_tables);
 	kfree(drv->genpd_opp_tables);
 	kfree(drv);
@@ -303,6 +458,10 @@ static const struct of_device_id qcom_cpufreq_match_list[] __initconst = {
 	{ .compatible = "qcom,apq8096", .data = &match_data_kryo },
 	{ .compatible = "qcom,msm8996", .data = &match_data_kryo },
 	{ .compatible = "qcom,qcs404", .data = &match_data_qcs404 },
+	{ .compatible = "qcom,ipq8064", .data = &match_data_krait },
+	{ .compatible = "qcom,apq8064", .data = &match_data_krait },
+	{ .compatible = "qcom,msm8974", .data = &match_data_krait },
+	{ .compatible = "qcom,msm8960", .data = &match_data_krait },
 	{},
 };
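
An aside on the dual-table bookkeeping above: each OPP helper returns its own handle, which is why probe/remove now track names_opp_tables[] and hw_opp_tables[] separately. dev_pm_opp_set_prop_name() selects the OPP properties suffixed with the PVS name, while dev_pm_opp_set_supported_hw() filters OPPs against the fused version. A minimal sketch of the pairing (locals hypothetical, error handling elided):

	struct opp_table *names, *hw;
	u32 version = 1;	/* stand-in for drv->versions */

	names = dev_pm_opp_set_prop_name(cpu_dev, pvs_name);	/* per-PVS properties */
	hw = dev_pm_opp_set_supported_hw(cpu_dev, &version, 1);	/* speedbin filter */

	/* teardown mirrors setup, one put per handle */
	dev_pm_opp_put_supported_hw(hw);
	dev_pm_opp_put_prop_name(names);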


@@ -25,11 +25,14 @@
 #define DRA7_EFUSE_HAS_OD_MPU_OPP		11
 #define DRA7_EFUSE_HAS_HIGH_MPU_OPP		15
+#define DRA76_EFUSE_HAS_PLUS_MPU_OPP		18
 #define DRA7_EFUSE_HAS_ALL_MPU_OPP		23
+#define DRA76_EFUSE_HAS_ALL_MPU_OPP		24

 #define DRA7_EFUSE_NOM_MPU_OPP			BIT(0)
 #define DRA7_EFUSE_OD_MPU_OPP			BIT(1)
 #define DRA7_EFUSE_HIGH_MPU_OPP			BIT(2)
+#define DRA76_EFUSE_PLUS_MPU_OPP		BIT(3)

 #define OMAP3_CONTROL_DEVICE_STATUS		0x4800244C
 #define OMAP3_CONTROL_IDCODE			0x4830A204
@@ -80,6 +83,10 @@ static unsigned long dra7_efuse_xlate(struct ti_cpufreq_data *opp_data,
 	 */
 	switch (efuse) {
+	case DRA76_EFUSE_HAS_PLUS_MPU_OPP:
+	case DRA76_EFUSE_HAS_ALL_MPU_OPP:
+		calculated_efuse |= DRA76_EFUSE_PLUS_MPU_OPP;
+		/* Fall through */
 	case DRA7_EFUSE_HAS_ALL_MPU_OPP:
 	case DRA7_EFUSE_HAS_HIGH_MPU_OPP:
 		calculated_efuse |= DRA7_EFUSE_HIGH_MPU_OPP;
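
To make the new fallthrough concrete: a DRA76 part fused with DRA76_EFUSE_HAS_PLUS_MPU_OPP (efuse value 18) takes the new case first and then falls through the existing ones, so, assuming the OD and NOM cases below this hunk keep falling through as in the existing code, the translated mask accumulates all four OPP bits:

	calculated_efuse |= DRA76_EFUSE_PLUS_MPU_OPP	/* BIT(3) */
			 |  DRA7_EFUSE_HIGH_MPU_OPP	/* BIT(2) */
			 |  DRA7_EFUSE_OD_MPU_OPP	/* BIT(1) */
			 |  DRA7_EFUSE_NOM_MPU_OPP;	/* BIT(0) */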


@@ -18,6 +18,10 @@
 #include <linux/kvm_para.h>
 #include <linux/cpuidle_haltpoll.h>

+static bool force __read_mostly;
+module_param(force, bool, 0444);
+MODULE_PARM_DESC(force, "Load unconditionally");
+
 static struct cpuidle_device __percpu *haltpoll_cpuidle_devices;
 static enum cpuhp_state haltpoll_hp_state;
@@ -90,6 +94,11 @@ static void haltpoll_uninit(void)
 	haltpoll_cpuidle_devices = NULL;
 }

+static bool haltpool_want(void)
+{
+	return kvm_para_has_hint(KVM_HINTS_REALTIME) || force;
+}
+
 static int __init haltpoll_init(void)
 {
 	int ret;
@@ -101,8 +110,7 @@ static int __init haltpoll_init(void)
 	cpuidle_poll_state_init(drv);

-	if (!kvm_para_available() ||
-	    !kvm_para_has_hint(KVM_HINTS_REALTIME))
+	if (!kvm_para_available() || !haltpool_want())
 		return -ENODEV;

 	ret = cpuidle_register_driver(drv);
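
In other words, loading is still refused when KVM paravirt is unavailable, but on a guest that does not advertise KVM_HINTS_REALTIME the driver can now be forced in with the new parameter, e.g. "modprobe cpuidle-haltpoll force=1"; without the parameter the behavior is unchanged.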


@@ -160,6 +160,29 @@ int __init psci_dt_parse_state_node(struct device_node *np, u32 *state)
 	return 0;
 }

+static int __init psci_dt_cpu_init_topology(struct cpuidle_driver *drv,
+					    struct psci_cpuidle_data *data,
+					    unsigned int state_count, int cpu)
+{
+	/* Currently limit the hierarchical topology to be used in OSI mode. */
+	if (!psci_has_osi_support())
+		return 0;
+
+	data->dev = psci_dt_attach_cpu(cpu);
+	if (IS_ERR_OR_NULL(data->dev))
+		return PTR_ERR_OR_ZERO(data->dev);
+
+	/*
+	 * Using the deepest state for the CPU to trigger a potential selection
+	 * of a shared state for the domain, assumes the domain states are all
+	 * deeper states.
+	 */
+	drv->states[state_count - 1].enter = psci_enter_domain_idle_state;
+	psci_cpuidle_use_cpuhp = true;
+
+	return 0;
+}
+
 static int __init psci_dt_cpu_init_idle(struct cpuidle_driver *drv,
 					struct device_node *cpu_node,
 					unsigned int state_count, int cpu)
@@ -193,25 +216,10 @@ static int __init psci_dt_cpu_init_idle(struct cpuidle_driver *drv,
 		goto free_mem;
 	}

-	/* Currently limit the hierarchical topology to be used in OSI mode. */
-	if (psci_has_osi_support()) {
-		data->dev = psci_dt_attach_cpu(cpu);
-		if (IS_ERR(data->dev)) {
-			ret = PTR_ERR(data->dev);
-			goto free_mem;
-		}
-
-		/*
-		 * Using the deepest state for the CPU to trigger a potential
-		 * selection of a shared state for the domain, assumes the
-		 * domain states are all deeper states.
-		 */
-		if (data->dev) {
-			drv->states[state_count - 1].enter =
-				psci_enter_domain_idle_state;
-			psci_cpuidle_use_cpuhp = true;
-		}
-	}
+	/* Initialize optional data, used for the hierarchical topology. */
+	ret = psci_dt_cpu_init_topology(drv, data, state_count, cpu);
+	if (ret < 0)
+		goto free_mem;

 	/* Idle states parsed correctly, store them in the per-cpu struct. */
 	data->psci_states = psci_states;


@@ -736,53 +736,15 @@ int cpuidle_register(struct cpuidle_driver *drv,
 }
 EXPORT_SYMBOL_GPL(cpuidle_register);

-#ifdef CONFIG_SMP
-
-/*
- * This function gets called when a part of the kernel has a new latency
- * requirement. This means we need to get all processors out of their C-state,
- * and then recalculate a new suitable C-state. Just do a cross-cpu IPI; that
- * wakes them all right up.
- */
-static int cpuidle_latency_notify(struct notifier_block *b,
-		unsigned long l, void *v)
-{
-	wake_up_all_idle_cpus();
-	return NOTIFY_OK;
-}
-
-static struct notifier_block cpuidle_latency_notifier = {
-	.notifier_call = cpuidle_latency_notify,
-};
-
-static inline void latency_notifier_init(struct notifier_block *n)
-{
-	pm_qos_add_notifier(PM_QOS_CPU_DMA_LATENCY, n);
-}
-
-#else /* CONFIG_SMP */
-
-#define latency_notifier_init(x) do { } while (0)
-
-#endif /* CONFIG_SMP */
-
 /**
  * cpuidle_init - core initializer
  */
 static int __init cpuidle_init(void)
 {
-	int ret;
-
 	if (cpuidle_disabled())
 		return -ENODEV;

-	ret = cpuidle_add_interface(cpu_subsys.dev_root);
-	if (ret)
-		return ret;
-
-	latency_notifier_init(&cpuidle_latency_notifier);
-
-	return 0;
+	return cpuidle_add_interface(cpu_subsys.dev_root);
 }

 module_param(off, int, 0444);
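
The removed notifier is not lost functionality: as far as these patches show, with the PM QoS rework in this series the job of waking up idle CPUs when a new CPU latency constraint arrives is handled by the CPU latency QoS infrastructure itself, so cpuidle no longer needs to register its own PM_QOS_CPU_DMA_LATENCY notifier.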


@@ -109,9 +109,9 @@ int cpuidle_register_governor(struct cpuidle_governor *gov)
  */
 s64 cpuidle_governor_latency_req(unsigned int cpu)
 {
-	int global_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
 	struct device *device = get_cpu_device(cpu);
 	int device_req = dev_pm_qos_raw_resume_latency(device);
+	int global_req = cpu_latency_qos_limit();

 	if (device_req > global_req)
 		device_req = global_req;
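
The driver hunks below all apply the same mechanical conversion from the old PM_QOS_CPU_DMA_LATENCY class API to the new CPU latency QoS helpers (including PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE becoming PM_QOS_CPU_LATENCY_DEFAULT_VALUE in the OMAP serial drivers). As read off these diffs, the correspondence is:

	struct pm_qos_request req;	/* request type is unchanged */

	/* before: the CPU DMA latency class is named explicitly */
	pm_qos_add_request(&req, PM_QOS_CPU_DMA_LATENCY, usec);
	pm_qos_update_request(&req, usec);
	if (pm_qos_request_active(&req))
		pm_qos_remove_request(&req);
	limit = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);

	/* after: the class is implicit in the helper */
	cpu_latency_qos_add_request(&req, usec);
	cpu_latency_qos_update_request(&req, usec);
	if (cpu_latency_qos_request_active(&req))
		cpu_latency_qos_remove_request(&req);
	limit = cpu_latency_qos_limit();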


@@ -550,14 +550,14 @@ out:
 EXPORT_SYMBOL(devfreq_monitor_resume);

 /**
- * devfreq_interval_update() - Update device devfreq monitoring interval
+ * devfreq_update_interval() - Update device devfreq monitoring interval
  * @devfreq:	the devfreq instance.
  * @delay:	new polling interval to be set.
  *
  * Helper function to set new load monitoring polling interval. Function
- * to be called from governor in response to DEVFREQ_GOV_INTERVAL event.
+ * to be called from governor in response to DEVFREQ_GOV_UPDATE_INTERVAL event.
  */
-void devfreq_interval_update(struct devfreq *devfreq, unsigned int *delay)
+void devfreq_update_interval(struct devfreq *devfreq, unsigned int *delay)
 {
 	unsigned int cur_delay = devfreq->profile->polling_ms;
 	unsigned int new_delay = *delay;
@@ -597,7 +597,7 @@ void devfreq_interval_update(struct devfreq *devfreq, unsigned int *delay)
 out:
 	mutex_unlock(&devfreq->lock);
 }
-EXPORT_SYMBOL(devfreq_interval_update);
+EXPORT_SYMBOL(devfreq_update_interval);

 /**
  * devfreq_notifier_call() - Notify that the device frequency requirements
@@ -705,13 +705,13 @@ static void devfreq_dev_release(struct device *dev)
 	if (dev_pm_qos_request_active(&devfreq->user_max_freq_req)) {
 		err = dev_pm_qos_remove_request(&devfreq->user_max_freq_req);
-		if (err)
+		if (err < 0)
 			dev_warn(dev->parent,
 				"Failed to remove max_freq request: %d\n", err);
 	}
 	if (dev_pm_qos_request_active(&devfreq->user_min_freq_req)) {
 		err = dev_pm_qos_remove_request(&devfreq->user_min_freq_req);
-		if (err)
+		if (err < 0)
 			dev_warn(dev->parent,
 				"Failed to remove min_freq request: %d\n", err);
 	}
@@ -1424,7 +1424,7 @@ static ssize_t polling_interval_store(struct device *dev,
 	if (ret != 1)
 		return -EINVAL;

-	df->governor->event_handler(df, DEVFREQ_GOV_INTERVAL, &value);
+	df->governor->event_handler(df, DEVFREQ_GOV_UPDATE_INTERVAL, &value);
 	ret = count;

 	return ret;


@@ -18,7 +18,7 @@
 /* Devfreq events */
 #define DEVFREQ_GOV_START			0x1
 #define DEVFREQ_GOV_STOP			0x2
-#define DEVFREQ_GOV_INTERVAL			0x3
+#define DEVFREQ_GOV_UPDATE_INTERVAL		0x3
 #define DEVFREQ_GOV_SUSPEND			0x4
 #define DEVFREQ_GOV_RESUME			0x5
@@ -30,7 +30,7 @@
  * @node:		list node - contains registered devfreq governors
  * @name:		Governor's name
  * @immutable:		Immutable flag for governor. If the value is 1,
- *			this govenror is never changeable to other governor.
+ *			this governor is never changeable to other governor.
  * @interrupt_driven:	Devfreq core won't schedule polling work for this
  *			governor if value is set to 1.
  * @get_target_freq:	Returns desired operating frequency for the device.
@@ -57,17 +57,16 @@ struct devfreq_governor {
 			unsigned int event, void *data);
 };

-extern void devfreq_monitor_start(struct devfreq *devfreq);
-extern void devfreq_monitor_stop(struct devfreq *devfreq);
-extern void devfreq_monitor_suspend(struct devfreq *devfreq);
-extern void devfreq_monitor_resume(struct devfreq *devfreq);
-extern void devfreq_interval_update(struct devfreq *devfreq,
-					unsigned int *delay);
+void devfreq_monitor_start(struct devfreq *devfreq);
+void devfreq_monitor_stop(struct devfreq *devfreq);
+void devfreq_monitor_suspend(struct devfreq *devfreq);
+void devfreq_monitor_resume(struct devfreq *devfreq);
+void devfreq_update_interval(struct devfreq *devfreq, unsigned int *delay);

-extern int devfreq_add_governor(struct devfreq_governor *governor);
-extern int devfreq_remove_governor(struct devfreq_governor *governor);
+int devfreq_add_governor(struct devfreq_governor *governor);
+int devfreq_remove_governor(struct devfreq_governor *governor);

-extern int devfreq_update_status(struct devfreq *devfreq, unsigned long freq);
+int devfreq_update_status(struct devfreq *devfreq, unsigned long freq);

 static inline int devfreq_update_stats(struct devfreq *df)
 {


@@ -96,8 +96,8 @@ static int devfreq_simple_ondemand_handler(struct devfreq *devfreq,
 		devfreq_monitor_stop(devfreq);
 		break;

-	case DEVFREQ_GOV_INTERVAL:
-		devfreq_interval_update(devfreq, (unsigned int *)data);
+	case DEVFREQ_GOV_UPDATE_INTERVAL:
+		devfreq_update_interval(devfreq, (unsigned int *)data);
 		break;

 	case DEVFREQ_GOV_SUSPEND:
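
For governor code outside this series, the event rename and the helper rename pair up exactly as in the handler above; a minimal sketch of the relevant arm of a hypothetical governor's event handler:

	static int my_governor_event_handler(struct devfreq *devfreq,
					     unsigned int event, void *data)
	{
		switch (event) {
		case DEVFREQ_GOV_UPDATE_INTERVAL:
			/* *data carries the new polling interval in ms */
			devfreq_update_interval(devfreq, (unsigned int *)data);
			break;
		default:
			break;
		}
		return 0;
	}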


@@ -131,7 +131,7 @@ static int devfreq_userspace_handler(struct devfreq *devfreq,
 }

 static struct devfreq_governor devfreq_userspace = {
-	.name = "userspace",
+	.name = DEVFREQ_GOV_USERSPACE,
 	.get_target_freq = devfreq_userspace_func,
 	.event_handler = devfreq_userspace_handler,
 };


@@ -734,7 +734,7 @@ static int tegra_governor_event_handler(struct devfreq *devfreq,
 		devfreq_monitor_stop(devfreq);
 		break;

-	case DEVFREQ_GOV_INTERVAL:
+	case DEVFREQ_GOV_UPDATE_INTERVAL:
 		/*
 		 * ACTMON hardware supports up to 256 milliseconds for the
 		 * sampling period.
@@ -745,7 +745,7 @@ static int tegra_governor_event_handler(struct devfreq *devfreq,
 		}

 		tegra_actmon_pause(tegra);
-		devfreq_interval_update(devfreq, new_delay);
+		devfreq_update_interval(devfreq, new_delay);
 		ret = tegra_actmon_resume(tegra);
 		break;


@@ -1360,7 +1360,7 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
 	 * lowest possible wakeup latency and so prevent the cpu from going into
 	 * deep sleep states.
 	 */
-	pm_qos_update_request(&i915->pm_qos, 0);
+	cpu_latency_qos_update_request(&i915->pm_qos, 0);

 	intel_dp_check_edp(intel_dp);
@@ -1488,7 +1488,7 @@ done:
 	ret = recv_bytes;
 out:
-	pm_qos_update_request(&i915->pm_qos, PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_update_request(&i915->pm_qos, PM_QOS_DEFAULT_VALUE);

 	if (vdd)
 		edp_panel_vdd_off(intel_dp, false);


@@ -505,8 +505,7 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv)
 	mutex_init(&dev_priv->backlight_lock);

 	mutex_init(&dev_priv->sb_lock);
-	pm_qos_add_request(&dev_priv->sb_qos,
-			   PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_add_request(&dev_priv->sb_qos, PM_QOS_DEFAULT_VALUE);

 	mutex_init(&dev_priv->av_mutex);
 	mutex_init(&dev_priv->wm.wm_mutex);
@@ -571,7 +570,7 @@ static void i915_driver_late_release(struct drm_i915_private *dev_priv)
 	vlv_free_s0ix_state(dev_priv);
 	i915_workqueues_cleanup(dev_priv);

-	pm_qos_remove_request(&dev_priv->sb_qos);
+	cpu_latency_qos_remove_request(&dev_priv->sb_qos);
 	mutex_destroy(&dev_priv->sb_lock);
 }
@@ -1229,8 +1228,7 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
 		}
 	}

-	pm_qos_add_request(&dev_priv->pm_qos, PM_QOS_CPU_DMA_LATENCY,
-			   PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_add_request(&dev_priv->pm_qos, PM_QOS_DEFAULT_VALUE);

 	intel_gt_init_workarounds(dev_priv);
@@ -1276,7 +1274,7 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
 err_msi:
 	if (pdev->msi_enabled)
 		pci_disable_msi(pdev);
-	pm_qos_remove_request(&dev_priv->pm_qos);
+	cpu_latency_qos_remove_request(&dev_priv->pm_qos);
 err_mem_regions:
 	intel_memory_regions_driver_release(dev_priv);
 err_ggtt:
@@ -1299,7 +1297,7 @@ static void i915_driver_hw_remove(struct drm_i915_private *dev_priv)
 	if (pdev->msi_enabled)
 		pci_disable_msi(pdev);

-	pm_qos_remove_request(&dev_priv->pm_qos);
+	cpu_latency_qos_remove_request(&dev_priv->pm_qos);
 }

 /**
/** /**


@@ -60,7 +60,7 @@ static void __vlv_punit_get(struct drm_i915_private *i915)
 	 * to the Valleyview P-unit and not all sideband communications.
 	 */
 	if (IS_VALLEYVIEW(i915)) {
-		pm_qos_update_request(&i915->sb_qos, 0);
+		cpu_latency_qos_update_request(&i915->sb_qos, 0);
 		on_each_cpu(ping, NULL, 1);
 	}
 }
@@ -68,7 +68,8 @@ static void __vlv_punit_get(struct drm_i915_private *i915)
 static void __vlv_punit_put(struct drm_i915_private *i915)
 {
 	if (IS_VALLEYVIEW(i915))
-		pm_qos_update_request(&i915->sb_qos, PM_QOS_DEFAULT_VALUE);
+		cpu_latency_qos_update_request(&i915->sb_qos,
+					       PM_QOS_DEFAULT_VALUE);

 	iosf_mbi_punit_release();
 }


@@ -965,14 +965,13 @@ static int cs_hsi_buf_config(struct cs_hsi_iface *hi,
 	if (old_state != hi->iface_state) {
 		if (hi->iface_state == CS_STATE_CONFIGURED) {
-			pm_qos_add_request(&hi->pm_qos_req,
-				PM_QOS_CPU_DMA_LATENCY,
+			cpu_latency_qos_add_request(&hi->pm_qos_req,
 				CS_QOS_LATENCY_FOR_DATA_USEC);
 			local_bh_disable();
 			cs_hsi_read_on_data(hi);
 			local_bh_enable();
 		} else if (old_state == CS_STATE_CONFIGURED) {
-			pm_qos_remove_request(&hi->pm_qos_req);
+			cpu_latency_qos_remove_request(&hi->pm_qos_req);
 		}
 	}
 	return r;
@@ -1075,8 +1074,8 @@ static void cs_hsi_stop(struct cs_hsi_iface *hi)
 	WARN_ON(!cs_state_idle(hi->control_state));
 	WARN_ON(!cs_state_idle(hi->data_state));

-	if (pm_qos_request_active(&hi->pm_qos_req))
-		pm_qos_remove_request(&hi->pm_qos_req);
+	if (cpu_latency_qos_request_active(&hi->pm_qos_req))
+		cpu_latency_qos_remove_request(&hi->pm_qos_req);

 	spin_lock_bh(&hi->lock);
 	cs_hsi_free_data(hi);


@@ -2,8 +2,9 @@
 /*
  * intel_idle.c - native hardware idle loop for modern Intel processors
  *
- * Copyright (c) 2013, Intel Corporation.
+ * Copyright (c) 2013 - 2020, Intel Corporation.
  * Len Brown <len.brown@intel.com>
+ * Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  */

 /*
@@ -25,11 +26,6 @@
 /*
  * Known limitations
  *
- * The driver currently initializes for_each_online_cpu() upon modprobe.
- * It it unaware of subsequent processors hot-added to the system.
- * This means that if you boot with maxcpus=n and later online
- * processors above n, those processors will use C1 only.
- *
 * ACPI has a .suspend hack to turn off deep c-statees during suspend
 * to avoid complications with the lapic timer workaround.
 * Have not seen issues with suspend, but may need same workaround here.
@@ -55,7 +51,7 @@
 #include <asm/mwait.h>
 #include <asm/msr.h>

-#define INTEL_IDLE_VERSION "0.4.1"
+#define INTEL_IDLE_VERSION "0.5.1"

 static struct cpuidle_driver intel_idle_driver = {
 	.name = "intel_idle",
@@ -65,11 +61,12 @@ static struct cpuidle_driver intel_idle_driver = {
 static int max_cstate = CPUIDLE_STATE_MAX - 1;
 static unsigned int disabled_states_mask;

-static unsigned int mwait_substates;
+static struct cpuidle_device __percpu *intel_idle_cpuidle_devices;
+
+static unsigned long auto_demotion_disable_flags;
+static bool disable_promotion_to_c1e;

-#define LAPIC_TIMER_ALWAYS_RELIABLE 0xFFFFFFFF
-/* Reliable LAPIC Timer States, bit 1 for C1 etc. */
-static unsigned int lapic_timer_reliable_states = (1 << 1); /* Default to only C1 */
+static bool lapic_timer_always_reliable;

 struct idle_cpu {
 	struct cpuidle_state *state_table;
@@ -84,13 +81,10 @@ struct idle_cpu {
 	bool use_acpi;
 };

-static const struct idle_cpu *icpu;
-static struct cpuidle_device __percpu *intel_idle_cpuidle_devices;
-static int intel_idle(struct cpuidle_device *dev,
-			struct cpuidle_driver *drv, int index);
-static void intel_idle_s2idle(struct cpuidle_device *dev,
-			      struct cpuidle_driver *drv, int index);
-static struct cpuidle_state *cpuidle_state_table;
+static const struct idle_cpu *icpu __initdata;
+static struct cpuidle_state *cpuidle_state_table __initdata;
+
+static unsigned int mwait_substates __initdata;

 /*
  * Enable this state by default even if the ACPI _CST does not list it.
@@ -103,7 +97,7 @@ static struct cpuidle_state *cpuidle_state_table;
  * If this flag is set, SW flushes the TLB, so even if the
  * HW doesn't do the flushing, this flag is safe to use.
  */
-#define CPUIDLE_FLAG_TLB_FLUSHED	0x10000
+#define CPUIDLE_FLAG_TLB_FLUSHED	BIT(16)

 /*
  * MWAIT takes an 8-bit "hint" in EAX "suggesting"
@@ -115,12 +109,87 @@
 #define flg2MWAIT(flags) (((flags) >> 24) & 0xFF)
 #define MWAIT2flg(eax) ((eax & 0xFF) << 24)

+/**
+ * intel_idle - Ask the processor to enter the given idle state.
+ * @dev: cpuidle device of the target CPU.
+ * @drv: cpuidle driver (assumed to point to intel_idle_driver).
+ * @index: Target idle state index.
+ *
+ * Use the MWAIT instruction to notify the processor that the CPU represented by
+ * @dev is idle and it can try to enter the idle state corresponding to @index.
+ *
+ * If the local APIC timer is not known to be reliable in the target idle state,
+ * enable one-shot tick broadcasting for the target CPU before executing MWAIT.
+ *
+ * Optionally call leave_mm() for the target CPU upfront to avoid wakeups due to
+ * flushing user TLBs.
+ *
+ * Must be called under local_irq_disable().
+ */
+static __cpuidle int intel_idle(struct cpuidle_device *dev,
+				struct cpuidle_driver *drv, int index)
+{
+	struct cpuidle_state *state = &drv->states[index];
+	unsigned long eax = flg2MWAIT(state->flags);
+	unsigned long ecx = 1; /* break on interrupt flag */
+	bool uninitialized_var(tick);
+	int cpu = smp_processor_id();
+
+	/*
+	 * leave_mm() to avoid costly and often unnecessary wakeups
+	 * for flushing the user TLB's associated with the active mm.
+	 */
+	if (state->flags & CPUIDLE_FLAG_TLB_FLUSHED)
+		leave_mm(cpu);
+
+	if (!static_cpu_has(X86_FEATURE_ARAT) && !lapic_timer_always_reliable) {
+		/*
+		 * Switch over to one-shot tick broadcast if the target C-state
+		 * is deeper than C1.
+		 */
+		if ((eax >> MWAIT_SUBSTATE_SIZE) & MWAIT_CSTATE_MASK) {
+			tick = true;
+			tick_broadcast_enter();
+		} else {
+			tick = false;
+		}
+	}
+
+	mwait_idle_with_hints(eax, ecx);
+
+	if (!static_cpu_has(X86_FEATURE_ARAT) && tick)
+		tick_broadcast_exit();
+
+	return index;
+}
+
+/**
+ * intel_idle_s2idle - Ask the processor to enter the given idle state.
+ * @dev: cpuidle device of the target CPU.
+ * @drv: cpuidle driver (assumed to point to intel_idle_driver).
+ * @index: Target idle state index.
+ *
+ * Use the MWAIT instruction to notify the processor that the CPU represented by
+ * @dev is idle and it can try to enter the idle state corresponding to @index.
+ *
+ * Invoked as a suspend-to-idle callback routine with frozen user space, frozen
+ * scheduler tick and suspended scheduler clock on the target CPU.
+ */
+static __cpuidle void intel_idle_s2idle(struct cpuidle_device *dev,
+					struct cpuidle_driver *drv, int index)
+{
+	unsigned long eax = flg2MWAIT(drv->states[index].flags);
+	unsigned long ecx = 1; /* break on interrupt flag */
+
+	mwait_idle_with_hints(eax, ecx);
+}
+
 /*
  * States are indexed by the cstate number,
  * which is also the index into the MWAIT hint array.
  * Thus C0 is a dummy.
  */
-static struct cpuidle_state nehalem_cstates[] = {
+static struct cpuidle_state nehalem_cstates[] __initdata = {
 	{
 		.name = "C1",
 		.desc = "MWAIT 0x00",
@@ -157,7 +226,7 @@ static struct cpuidle_state nehalem_cstates[] = {
 		.enter = NULL }
 };

-static struct cpuidle_state snb_cstates[] = {
+static struct cpuidle_state snb_cstates[] __initdata = {
 	{
 		.name = "C1",
 		.desc = "MWAIT 0x00",
@@ -202,7 +271,7 @@ static struct cpuidle_state snb_cstates[] = {
 		.enter = NULL }
 };

-static struct cpuidle_state byt_cstates[] = {
+static struct cpuidle_state byt_cstates[] __initdata = {
 	{
 		.name = "C1",
 		.desc = "MWAIT 0x00",
@@ -247,7 +316,7 @@ static struct cpuidle_state byt_cstates[] = {
 		.enter = NULL }
 };

-static struct cpuidle_state cht_cstates[] = {
+static struct cpuidle_state cht_cstates[] __initdata = {
 	{
 		.name = "C1",
 		.desc = "MWAIT 0x00",
@@ -292,7 +361,7 @@ static struct cpuidle_state cht_cstates[] = {
 		.enter = NULL }
 };

-static struct cpuidle_state ivb_cstates[] = {
+static struct cpuidle_state ivb_cstates[] __initdata = {
 	{
 		.name = "C1",
 		.desc = "MWAIT 0x00",
@@ -337,7 +406,7 @@ static struct cpuidle_state ivb_cstates[] = {
 		.enter = NULL }
 };

-static struct cpuidle_state ivt_cstates[] = {
+static struct cpuidle_state ivt_cstates[] __initdata = {
 	{
 		.name = "C1",
 		.desc = "MWAIT 0x00",
@@ -374,7 +443,7 @@ static struct cpuidle_state ivt_cstates[] = {
 		.enter = NULL }
 };

-static struct cpuidle_state ivt_cstates_4s[] = {
+static struct cpuidle_state ivt_cstates_4s[] __initdata = {
 	{
 		.name = "C1",
 		.desc = "MWAIT 0x00",
@@ -411,7 +480,7 @@ static struct cpuidle_state ivt_cstates_4s[] = {
 		.enter = NULL }
 };

-static struct cpuidle_state ivt_cstates_8s[] = {
+static struct cpuidle_state ivt_cstates_8s[] __initdata = {
 	{
 		.name = "C1",
 		.desc = "MWAIT 0x00",
@@ -448,7 +517,7 @@ static struct cpuidle_state ivt_cstates_8s[] = {
 		.enter = NULL }
 };

-static struct cpuidle_state hsw_cstates[] = {
+static struct cpuidle_state hsw_cstates[] __initdata = {
 	{
 		.name = "C1",
 		.desc = "MWAIT 0x00",
@@ -516,7 +585,7 @@ static struct cpuidle_state hsw_cstates[] = {
 	{
 		.enter = NULL }
 };
-static struct cpuidle_state bdw_cstates[] = {
+static struct cpuidle_state bdw_cstates[] __initdata = {
 	{
 		.name = "C1",
 		.desc = "MWAIT 0x00",
@@ -585,7 +654,7 @@ static struct cpuidle_state bdw_cstates[] = {
 		.enter = NULL }
 };

-static struct cpuidle_state skl_cstates[] = {
+static struct cpuidle_state skl_cstates[] __initdata = {
 	{
 		.name = "C1",
 		.desc = "MWAIT 0x00",
@@ -654,7 +723,7 @@ static struct cpuidle_state skl_cstates[] = {
 		.enter = NULL }
 };

-static struct cpuidle_state skx_cstates[] = {
+static struct cpuidle_state skx_cstates[] __initdata = {
 	{
 		.name = "C1",
 		.desc = "MWAIT 0x00",
@@ -683,7 +752,7 @@ static struct cpuidle_state skx_cstates[] = {
 		.enter = NULL }
 };

-static struct cpuidle_state atom_cstates[] = {
+static struct cpuidle_state atom_cstates[] __initdata = {
 	{
 		.name = "C1E",
 		.desc = "MWAIT 0x00",
@@ -719,7 +788,7 @@ static struct cpuidle_state atom_cstates[] = {
 	{
 		.enter = NULL }
 };
-static struct cpuidle_state tangier_cstates[] = {
+static struct cpuidle_state tangier_cstates[] __initdata = {
 	{
 		.name = "C1",
 		.desc = "MWAIT 0x00",
@@ -763,7 +832,7 @@ static struct cpuidle_state tangier_cstates[] = {
 	{
 		.enter = NULL }
 };
-static struct cpuidle_state avn_cstates[] = {
+static struct cpuidle_state avn_cstates[] __initdata = {
 	{
 		.name = "C1",
 		.desc = "MWAIT 0x00",
@@ -783,7 +852,7 @@ static struct cpuidle_state avn_cstates[] = {
 	{
 		.enter = NULL }
 };
-static struct cpuidle_state knl_cstates[] = {
+static struct cpuidle_state knl_cstates[] __initdata = {
 	{
 		.name = "C1",
 		.desc = "MWAIT 0x00",
@@ -804,7 +873,7 @@ static struct cpuidle_state knl_cstates[] = {
 		.enter = NULL }
 };

-static struct cpuidle_state bxt_cstates[] = {
+static struct cpuidle_state bxt_cstates[] __initdata = {
 	{
 		.name = "C1",
 		.desc = "MWAIT 0x00",
@@ -865,7 +934,7 @@ static struct cpuidle_state bxt_cstates[] = {
 		.enter = NULL }
 };

-static struct cpuidle_state dnv_cstates[] = {
+static struct cpuidle_state dnv_cstates[] __initdata = {
 	{
 		.name = "C1",
 		.desc = "MWAIT 0x00",
@@ -894,174 +963,116 @@ static struct cpuidle_state dnv_cstates[] = {
 		.enter = NULL }
 };

-/**
- * intel_idle
- * @dev: cpuidle_device
- * @drv: cpuidle driver
- * @index: index of cpuidle state
- *
- * Must be called under local_irq_disable().
- */
-static __cpuidle int intel_idle(struct cpuidle_device *dev,
-				struct cpuidle_driver *drv, int index)
-{
-	unsigned long ecx = 1; /* break on interrupt flag */
-	struct cpuidle_state *state = &drv->states[index];
-	unsigned long eax = flg2MWAIT(state->flags);
-	unsigned int cstate;
-	bool uninitialized_var(tick);
-	int cpu = smp_processor_id();
-
-	/*
-	 * leave_mm() to avoid costly and often unnecessary wakeups
-	 * for flushing the user TLB's associated with the active mm.
-	 */
-	if (state->flags & CPUIDLE_FLAG_TLB_FLUSHED)
-		leave_mm(cpu);
-
-	if (!static_cpu_has(X86_FEATURE_ARAT)) {
-		cstate = (((eax) >> MWAIT_SUBSTATE_SIZE) &
-				MWAIT_CSTATE_MASK) + 1;
-		tick = false;
-		if (!(lapic_timer_reliable_states & (1 << (cstate)))) {
-			tick = true;
-			tick_broadcast_enter();
-		}
-	}
-
-	mwait_idle_with_hints(eax, ecx);
-
-	if (!static_cpu_has(X86_FEATURE_ARAT) && tick)
-		tick_broadcast_exit();
-
-	return index;
-}
-
-/**
- * intel_idle_s2idle - simplified "enter" callback routine for suspend-to-idle
- * @dev: cpuidle_device
- * @drv: cpuidle driver
- * @index: state index
- */
-static void intel_idle_s2idle(struct cpuidle_device *dev,
-			      struct cpuidle_driver *drv, int index)
-{
-	unsigned long ecx = 1; /* break on interrupt flag */
-	unsigned long eax = flg2MWAIT(drv->states[index].flags);
-
-	mwait_idle_with_hints(eax, ecx);
-}
-
-static const struct idle_cpu idle_cpu_nehalem = {
+static const struct idle_cpu idle_cpu_nehalem __initconst = {
 	.state_table = nehalem_cstates,
 	.auto_demotion_disable_flags = NHM_C1_AUTO_DEMOTE | NHM_C3_AUTO_DEMOTE,
 	.disable_promotion_to_c1e = true,
 };

-static const struct idle_cpu idle_cpu_nhx = {
+static const struct idle_cpu idle_cpu_nhx __initconst = {
 	.state_table = nehalem_cstates,
 	.auto_demotion_disable_flags = NHM_C1_AUTO_DEMOTE | NHM_C3_AUTO_DEMOTE,
 	.disable_promotion_to_c1e = true,
 	.use_acpi = true,
 };

-static const struct idle_cpu idle_cpu_atom = {
+static const struct idle_cpu idle_cpu_atom __initconst = {
 	.state_table = atom_cstates,
 };

-static const struct idle_cpu idle_cpu_tangier = {
+static const struct idle_cpu idle_cpu_tangier __initconst = {
 	.state_table = tangier_cstates,
 };

-static const struct idle_cpu idle_cpu_lincroft = {
+static const struct idle_cpu idle_cpu_lincroft __initconst = {
 	.state_table = atom_cstates,
 	.auto_demotion_disable_flags = ATM_LNC_C6_AUTO_DEMOTE,
 };

-static const struct idle_cpu idle_cpu_snb = {
+static const struct idle_cpu idle_cpu_snb __initconst = {
 	.state_table = snb_cstates,
 	.disable_promotion_to_c1e = true,
 };

-static const struct idle_cpu idle_cpu_snx = {
+static const struct idle_cpu idle_cpu_snx __initconst = {
 	.state_table = snb_cstates,
 	.disable_promotion_to_c1e = true,
 	.use_acpi = true,
 };

-static const struct idle_cpu idle_cpu_byt = {
+static const struct idle_cpu idle_cpu_byt __initconst = {
 	.state_table = byt_cstates,
 	.disable_promotion_to_c1e = true,
 	.byt_auto_demotion_disable_flag = true,
 };

-static const struct idle_cpu idle_cpu_cht = {
+static const struct idle_cpu idle_cpu_cht __initconst = {
 	.state_table = cht_cstates,
 	.disable_promotion_to_c1e = true,
 	.byt_auto_demotion_disable_flag = true,
 };

-static const struct idle_cpu idle_cpu_ivb = {
+static const struct idle_cpu idle_cpu_ivb __initconst = {
 	.state_table = ivb_cstates,
 	.disable_promotion_to_c1e = true,
 };

-static const struct idle_cpu idle_cpu_ivt = {
+static const struct idle_cpu idle_cpu_ivt __initconst = {
 	.state_table = ivt_cstates,
 	.disable_promotion_to_c1e = true,
 	.use_acpi = true,
 };

-static const struct idle_cpu idle_cpu_hsw = {
+static const struct idle_cpu idle_cpu_hsw __initconst = {
 	.state_table = hsw_cstates,
 	.disable_promotion_to_c1e = true,
 };

-static const struct idle_cpu idle_cpu_hsx = {
+static const struct idle_cpu idle_cpu_hsx __initconst = {
 	.state_table = hsw_cstates,
 	.disable_promotion_to_c1e = true,
 	.use_acpi = true,
 };

-static const struct idle_cpu idle_cpu_bdw = {
+static const struct idle_cpu idle_cpu_bdw __initconst = {
 	.state_table = bdw_cstates,
 	.disable_promotion_to_c1e = true,
 };

-static const struct idle_cpu idle_cpu_bdx = {
+static const struct idle_cpu idle_cpu_bdx __initconst = {
 	.state_table = bdw_cstates,
 	.disable_promotion_to_c1e = true,
 	.use_acpi = true,
 };

-static const struct idle_cpu idle_cpu_skl = {
+static const struct idle_cpu idle_cpu_skl __initconst = {
 	.state_table = skl_cstates,
 	.disable_promotion_to_c1e = true,
 };

-static const struct idle_cpu idle_cpu_skx = {
+static const struct idle_cpu idle_cpu_skx __initconst = {
 	.state_table = skx_cstates,
 	.disable_promotion_to_c1e = true,
 	.use_acpi = true,
 };

-static const struct idle_cpu idle_cpu_avn = {
+static const struct idle_cpu idle_cpu_avn __initconst = {
 	.state_table = avn_cstates,
 	.disable_promotion_to_c1e = true,
 	.use_acpi = true,
 };

-static const struct idle_cpu idle_cpu_knl = {
+static const struct idle_cpu idle_cpu_knl __initconst = {
 	.state_table = knl_cstates,
 	.use_acpi = true,
 };

-static const struct idle_cpu idle_cpu_bxt = {
+static const struct idle_cpu idle_cpu_bxt __initconst = {
 	.state_table = bxt_cstates,
 	.disable_promotion_to_c1e = true,
 };

-static const struct idle_cpu idle_cpu_dnv = {
+static const struct idle_cpu idle_cpu_dnv __initconst = {
 	.state_table = dnv_cstates,
 	.disable_promotion_to_c1e = true,
 	.use_acpi = true,
@@ -1273,11 +1284,11 @@ static inline void intel_idle_init_cstates_acpi(struct cpuidle_driver *drv) { }
 static inline bool intel_idle_off_by_default(u32 mwait_hint) { return false; }
 #endif /* !CONFIG_ACPI_PROCESSOR_CSTATE */

-/*
- * ivt_idle_state_table_update(void)
+/**
+ * ivt_idle_state_table_update - Tune the idle states table for Ivy Town.
  *
- * Tune IVT multi-socket targets
- * Assumption: num_sockets == (max_package_num + 1)
+ * Tune IVT multi-socket targets.
+ * Assumption: num_sockets == (max_package_num + 1).
  */
 static void __init ivt_idle_state_table_update(void)
 {
@@ -1323,11 +1334,11 @@ static unsigned long long __init irtl_2_usec(unsigned long long irtl)
 	return div_u64((irtl & 0x3FF) * ns, NSEC_PER_USEC);
 }

-/*
- * bxt_idle_state_table_update(void)
+/**
+ * bxt_idle_state_table_update - Fix up the Broxton idle states table.
  *
- * On BXT, we trust the IRTL to show the definitive maximum latency
- * We use the same value for target_residency.
+ * On BXT, trust the IRTL (Interrupt Response Time Limit) MSR to show the
+ * definitive maximum latency and use the same value for target_residency.
  */
 static void __init bxt_idle_state_table_update(void)
 {
@@ -1370,11 +1381,11 @@ static void __init bxt_idle_state_table_update(void)
 	}
 }

-/*
- * sklh_idle_state_table_update(void)
+/**
+ * sklh_idle_state_table_update - Fix up the Sky Lake idle states table.
  *
- * On SKL-H (model 0x5e) disable C8 and C9 if:
- * C10 is enabled and SGX disabled
+ * On SKL-H (model 0x5e) skip C8 and C9 if C10 is enabled and SGX disabled.
  */
 static void __init sklh_idle_state_table_update(void)
 {
@@ -1485,9 +1496,9 @@ static void __init intel_idle_init_cstates_icpu(struct cpuidle_driver *drv)
 	}
 }

-/*
- * intel_idle_cpuidle_driver_init()
- * allocate, initialize cpuidle_states
+/**
+ * intel_idle_cpuidle_driver_init - Create the list of available idle states.
+ * @drv: cpuidle driver structure to initialize.
  */
 static void __init intel_idle_cpuidle_driver_init(struct cpuidle_driver *drv)
 {
@@ -1509,7 +1520,7 @@ static void auto_demotion_disable(void)
 	unsigned long long msr_bits;

 	rdmsrl(MSR_PKG_CST_CONFIG_CONTROL, msr_bits);
-	msr_bits &= ~(icpu->auto_demotion_disable_flags);
+	msr_bits &= ~auto_demotion_disable_flags;
 	wrmsrl(MSR_PKG_CST_CONFIG_CONTROL, msr_bits);
 }
@@ -1522,10 +1533,12 @@ static void c1e_promotion_disable(void)
 	wrmsrl(MSR_IA32_POWER_CTL, msr_bits);
 }

-/*
- * intel_idle_cpu_init()
- * allocate, initialize, register cpuidle_devices
- * @cpu: cpu/core to initialize
+/**
+ * intel_idle_cpu_init - Register the target CPU with the cpuidle core.
+ * @cpu: CPU to initialize.
+ *
+ * Register a cpuidle device object for @cpu and update its MSRs in accordance
+ * with the processor model flags.
  */
 static int intel_idle_cpu_init(unsigned int cpu)
 {
@@ -1539,13 +1552,10 @@ static int intel_idle_cpu_init(unsigned int cpu)
 		return -EIO;
 	}

-	if (!icpu)
-		return 0;
-
-	if (icpu->auto_demotion_disable_flags)
+	if (auto_demotion_disable_flags)
 		auto_demotion_disable();

-	if (icpu->disable_promotion_to_c1e)
+	if (disable_promotion_to_c1e)
 		c1e_promotion_disable();

 	return 0;
@@ -1555,7 +1565,7 @@ static int intel_idle_cpu_online(unsigned int cpu)
 {
 	struct cpuidle_device *dev;

-	if (lapic_timer_reliable_states != LAPIC_TIMER_ALWAYS_RELIABLE)
+	if (!lapic_timer_always_reliable)
 		tick_broadcast_enable();

 	/*
@@ -1623,6 +1633,8 @@ static int __init intel_idle_init(void)
 	icpu = (const struct idle_cpu *)id->driver_data;
 	if (icpu) {
 		cpuidle_state_table = icpu->state_table;
+		auto_demotion_disable_flags = icpu->auto_demotion_disable_flags;
+		disable_promotion_to_c1e = icpu->disable_promotion_to_c1e;
 		if (icpu->use_acpi || force_use_acpi)
 			intel_idle_acpi_cst_extract();
 	} else if (!intel_idle_acpi_cst_extract()) {
@@ -1647,15 +1659,15 @@ static int __init intel_idle_init(void)
 	}

 	if (boot_cpu_has(X86_FEATURE_ARAT))	/* Always Reliable APIC Timer */
-		lapic_timer_reliable_states = LAPIC_TIMER_ALWAYS_RELIABLE;
+		lapic_timer_always_reliable = true;

 	retval = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "idle/intel:online",
 				   intel_idle_cpu_online, NULL);
 	if (retval < 0)
 		goto hp_setup_fail;

-	pr_debug("lapic_timer_reliable_states 0x%x\n",
-		 lapic_timer_reliable_states);
+	pr_debug("Local APIC timer is reliable in %s\n",
+		 lapic_timer_always_reliable ? "all C-states" : "C1");

 	return 0;
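
A worked example of the hint check in the new intel_idle() above, assuming the usual MWAIT encoding (MWAIT_SUBSTATE_SIZE == 4, MWAIT_CSTATE_MASK == 0xf):

	/*
	 * C1, "MWAIT 0x00":  (0x00 >> 4) & 0xf == 0 -> the LAPIC timer keeps
	 *	ticking, no broadcast needed.
	 * C6, "MWAIT 0x20":  (0x20 >> 4) & 0xf == 2 -> deeper than C1, so
	 *	one-shot tick broadcast is entered first, unless the CPU has
	 *	X86_FEATURE_ARAT or lapic_timer_always_reliable is set.
	 */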


@@ -1008,8 +1008,7 @@ int saa7134_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
 	 */
 	if ((dmaq == &dev->video_q && !vb2_is_streaming(&dev->vbi_vbq)) ||
 	    (dmaq == &dev->vbi_q && !vb2_is_streaming(&dev->video_vbq)))
-		pm_qos_add_request(&dev->qos_request,
-			PM_QOS_CPU_DMA_LATENCY, 20);
+		cpu_latency_qos_add_request(&dev->qos_request, 20);
 	dmaq->seq_nr = 0;

 	return 0;
@@ -1024,7 +1023,7 @@ void saa7134_vb2_stop_streaming(struct vb2_queue *vq)
 	if ((dmaq == &dev->video_q && !vb2_is_streaming(&dev->vbi_vbq)) ||
 	    (dmaq == &dev->vbi_q && !vb2_is_streaming(&dev->video_vbq)))
-		pm_qos_remove_request(&dev->qos_request);
+		cpu_latency_qos_remove_request(&dev->qos_request);
 }

 static const struct vb2_ops vb2_qops = {


@@ -646,7 +646,7 @@ static int viacam_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
 	 * requirement which will keep the CPU out of the deeper sleep
 	 * states.
 	 */
-	pm_qos_add_request(&cam->qos_request, PM_QOS_CPU_DMA_LATENCY, 50);
+	cpu_latency_qos_add_request(&cam->qos_request, 50);
 	viacam_start_engine(cam);
 	return 0;
 out:
@@ -662,7 +662,7 @@ static void viacam_vb2_stop_streaming(struct vb2_queue *vq)
 	struct via_camera *cam = vb2_get_drv_priv(vq);
 	struct via_buffer *buf, *tmp;

-	pm_qos_remove_request(&cam->qos_request);
+	cpu_latency_qos_remove_request(&cam->qos_request);
 	viacam_stop_engine(cam);

 	list_for_each_entry_safe(buf, tmp, &cam->buffer_queue, queue) {


@@ -1452,8 +1452,7 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
 		pdev->id_entry->driver_data;

 	if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-		pm_qos_add_request(&imx_data->pm_qos_req,
-			PM_QOS_CPU_DMA_LATENCY, 0);
+		cpu_latency_qos_add_request(&imx_data->pm_qos_req, 0);

 	imx_data->clk_ipg = devm_clk_get(&pdev->dev, "ipg");
 	if (IS_ERR(imx_data->clk_ipg)) {
@@ -1572,7 +1571,7 @@ disable_per_clk:
 	clk_disable_unprepare(imx_data->clk_per);
 free_sdhci:
 	if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-		pm_qos_remove_request(&imx_data->pm_qos_req);
+		cpu_latency_qos_remove_request(&imx_data->pm_qos_req);
 	sdhci_pltfm_free(pdev);
 	return err;
 }
@@ -1595,7 +1594,7 @@ static int sdhci_esdhc_imx_remove(struct platform_device *pdev)
 	clk_disable_unprepare(imx_data->clk_ahb);

 	if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-		pm_qos_remove_request(&imx_data->pm_qos_req);
+		cpu_latency_qos_remove_request(&imx_data->pm_qos_req);

 	sdhci_pltfm_free(pdev);
@@ -1667,7 +1666,7 @@ static int sdhci_esdhc_runtime_suspend(struct device *dev)
 	clk_disable_unprepare(imx_data->clk_ahb);

 	if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-		pm_qos_remove_request(&imx_data->pm_qos_req);
+		cpu_latency_qos_remove_request(&imx_data->pm_qos_req);

 	return ret;
 }
@@ -1680,8 +1679,7 @@ static int sdhci_esdhc_runtime_resume(struct device *dev)
 	int err;

 	if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-		pm_qos_add_request(&imx_data->pm_qos_req,
-			PM_QOS_CPU_DMA_LATENCY, 0);
+		cpu_latency_qos_add_request(&imx_data->pm_qos_req, 0);

 	err = clk_prepare_enable(imx_data->clk_ahb);
 	if (err)
@@ -1714,7 +1712,7 @@ disable_ahb_clk:
 	clk_disable_unprepare(imx_data->clk_ahb);
 remove_pm_qos_request:
 	if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-		pm_qos_remove_request(&imx_data->pm_qos_req);
+		cpu_latency_qos_remove_request(&imx_data->pm_qos_req);
 	return err;
 }
 #endif


@@ -3280,10 +3280,10 @@ static void e1000_configure_rx(struct e1000_adapter *adapter)
 			dev_info(&adapter->pdev->dev,
 				 "Some CPU C-states have been disabled in order to enable jumbo frames\n");
-		pm_qos_update_request(&adapter->pm_qos_req, lat);
+		cpu_latency_qos_update_request(&adapter->pm_qos_req, lat);
 	} else {
-		pm_qos_update_request(&adapter->pm_qos_req,
-				      PM_QOS_DEFAULT_VALUE);
+		cpu_latency_qos_update_request(&adapter->pm_qos_req,
+					       PM_QOS_DEFAULT_VALUE);
 	}

 	/* Enable Receives */
@@ -4636,8 +4636,7 @@ int e1000e_open(struct net_device *netdev)
 		e1000_update_mng_vlan(adapter);

 	/* DMA latency requirement to workaround jumbo issue */
-	pm_qos_add_request(&adapter->pm_qos_req, PM_QOS_CPU_DMA_LATENCY,
-			   PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_add_request(&adapter->pm_qos_req, PM_QOS_DEFAULT_VALUE);

 	/* before we allocate an interrupt, we must be ready to handle it.
 	 * Setting DEBUG_SHIRQ in the kernel makes it fire an interrupt
@@ -4679,7 +4678,7 @@ int e1000e_open(struct net_device *netdev)
 	return 0;

 err_req_irq:
-	pm_qos_remove_request(&adapter->pm_qos_req);
+	cpu_latency_qos_remove_request(&adapter->pm_qos_req);
 	e1000e_release_hw_control(adapter);
 	e1000_power_down_phy(adapter);
 	e1000e_free_rx_resources(adapter->rx_ring);
@@ -4743,7 +4742,7 @@ int e1000e_close(struct net_device *netdev)
 	    !test_bit(__E1000_TESTING, &adapter->state))
 		e1000e_release_hw_control(adapter);

-	pm_qos_remove_request(&adapter->pm_qos_req);
+	cpu_latency_qos_remove_request(&adapter->pm_qos_req);

 	pm_runtime_put_sync(&pdev->dev);


@@ -1052,11 +1052,11 @@ static int ath10k_download_fw(struct ath10k *ar)
 	}

 	memset(&latency_qos, 0, sizeof(latency_qos));
-	pm_qos_add_request(&latency_qos, PM_QOS_CPU_DMA_LATENCY, 0);
+	cpu_latency_qos_add_request(&latency_qos, 0);

 	ret = ath10k_bmi_fast_download(ar, address, data, data_len);

-	pm_qos_remove_request(&latency_qos);
+	cpu_latency_qos_remove_request(&latency_qos);

 	return ret;
 }


@@ -1730,7 +1730,7 @@ static int ipw2100_up(struct ipw2100_priv *priv, int deferred)
 	/* the ipw2100 hardware really doesn't want power management delays
 	 * longer than 175usec
 	 */
-	pm_qos_update_request(&ipw2100_pm_qos_req, 175);
+	cpu_latency_qos_update_request(&ipw2100_pm_qos_req, 175);

 	/* If the interrupt is enabled, turn it off... */
 	spin_lock_irqsave(&priv->low_lock, flags);
@@ -1875,7 +1875,8 @@ static void ipw2100_down(struct ipw2100_priv *priv)
 	ipw2100_disable_interrupts(priv);
 	spin_unlock_irqrestore(&priv->low_lock, flags);

-	pm_qos_update_request(&ipw2100_pm_qos_req, PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_update_request(&ipw2100_pm_qos_req,
+				       PM_QOS_DEFAULT_VALUE);

 	/* We have to signal any supplicant if we are disassociating */
 	if (associated)
@@ -6566,8 +6567,7 @@ static int __init ipw2100_init(void)
 	printk(KERN_INFO DRV_NAME ": %s, %s\n", DRV_DESCRIPTION, DRV_VERSION);
 	printk(KERN_INFO DRV_NAME ": %s\n", DRV_COPYRIGHT);

-	pm_qos_add_request(&ipw2100_pm_qos_req, PM_QOS_CPU_DMA_LATENCY,
-			   PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_add_request(&ipw2100_pm_qos_req, PM_QOS_DEFAULT_VALUE);

 	ret = pci_register_driver(&ipw2100_pci_driver);
 	if (ret)
@@ -6594,7 +6594,7 @@ static void __exit ipw2100_exit(void)
 			   &driver_attr_debug_level);
 #endif
 	pci_unregister_driver(&ipw2100_pci_driver);
-	pm_qos_remove_request(&ipw2100_pm_qos_req);
+	cpu_latency_qos_remove_request(&ipw2100_pm_qos_req);
 }

 module_init(ipw2100_init);


@@ -67,7 +67,7 @@ struct idle_inject_device {
 	struct hrtimer timer;
 	unsigned int idle_duration_us;
 	unsigned int run_duration_us;
-	unsigned long int cpumask[0];
+	unsigned long cpumask[];
 };

 static DEFINE_PER_CPU(struct idle_inject_thread, idle_inject_thread);
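
The [0] to [] change converts a GNU zero-length array into a C99 flexible array member; the allocation pattern is unchanged and, assuming the registration path allocates the mask together with the structure as such code usually does, looks like:

	/* one allocation covers the struct and the trailing cpumask */
	ii_dev = kzalloc(sizeof(*ii_dev) + cpumask_size(), GFP_KERNEL);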


@@ -484,7 +484,7 @@ static int fsl_qspi_clk_prep_enable(struct fsl_qspi *q)
 	}

 	if (needs_wakeup_wait_mode(q))
-		pm_qos_add_request(&q->pm_qos_req, PM_QOS_CPU_DMA_LATENCY, 0);
+		cpu_latency_qos_add_request(&q->pm_qos_req, 0);

 	return 0;
 }
@@ -492,7 +492,7 @@ static int fsl_qspi_clk_prep_enable(struct fsl_qspi *q)
 static void fsl_qspi_clk_disable_unprep(struct fsl_qspi *q)
 {
 	if (needs_wakeup_wait_mode(q))
-		pm_qos_remove_request(&q->pm_qos_req);
+		cpu_latency_qos_remove_request(&q->pm_qos_req);

 	clk_disable_unprepare(q->clk);
 	clk_disable_unprepare(q->clk_en);


@@ -569,7 +569,7 @@ static void omap8250_uart_qos_work(struct work_struct *work)
 	struct omap8250_priv *priv;

 	priv = container_of(work, struct omap8250_priv, qos_work);
-	pm_qos_update_request(&priv->pm_qos_request, priv->latency);
+	cpu_latency_qos_update_request(&priv->pm_qos_request, priv->latency);
 }

 #ifdef CONFIG_SERIAL_8250_DMA
@@ -1222,10 +1222,9 @@ static int omap8250_probe(struct platform_device *pdev)
 			 DEFAULT_CLK_SPEED);
 	}

-	priv->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
-	priv->calc_latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
-	pm_qos_add_request(&priv->pm_qos_request, PM_QOS_CPU_DMA_LATENCY,
-			   priv->latency);
+	priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+	priv->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+	cpu_latency_qos_add_request(&priv->pm_qos_request, priv->latency);
 	INIT_WORK(&priv->qos_work, omap8250_uart_qos_work);

 	spin_lock_init(&priv->rx_dma_lock);
@@ -1295,7 +1294,7 @@ static int omap8250_remove(struct platform_device *pdev)
 	pm_runtime_put_sync(&pdev->dev);
 	pm_runtime_disable(&pdev->dev);
 	serial8250_unregister_port(priv->line);
-	pm_qos_remove_request(&priv->pm_qos_request);
+	cpu_latency_qos_remove_request(&priv->pm_qos_request);
 	device_init_wakeup(&pdev->dev, false);
 	return 0;
 }
@@ -1445,7 +1444,7 @@ static int omap8250_runtime_suspend(struct device *dev)
 	if (up->dma && up->dma->rxchan)
 		omap_8250_rx_dma_flush(up);

-	priv->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
+	priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
 	schedule_work(&priv->qos_work);

 	return 0;


@@ -831,7 +831,7 @@ static void serial_omap_uart_qos_work(struct work_struct *work)
 	struct uart_omap_port *up = container_of(work, struct uart_omap_port,
 						qos_work);

-	pm_qos_update_request(&up->pm_qos_request, up->latency);
+	cpu_latency_qos_update_request(&up->pm_qos_request, up->latency);
 }

 static void
@@ -1722,10 +1722,9 @@ static int serial_omap_probe(struct platform_device *pdev)
 			 DEFAULT_CLK_SPEED);
 	}

-	up->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
-	up->calc_latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
-	pm_qos_add_request(&up->pm_qos_request,
-		PM_QOS_CPU_DMA_LATENCY, up->latency);
+	up->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+	up->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+	cpu_latency_qos_add_request(&up->pm_qos_request, up->latency);
 	INIT_WORK(&up->qos_work, serial_omap_uart_qos_work);

 	platform_set_drvdata(pdev, up);
@@ -1759,7 +1758,7 @@ err_add_port:
 	pm_runtime_dont_use_autosuspend(&pdev->dev);
 	pm_runtime_put_sync(&pdev->dev);
 	pm_runtime_disable(&pdev->dev);
-	pm_qos_remove_request(&up->pm_qos_request);
+	cpu_latency_qos_remove_request(&up->pm_qos_request);
 	device_init_wakeup(up->dev, false);
 err_rs485:
 err_port_line:
@@ -1777,7 +1776,7 @@ static int serial_omap_remove(struct platform_device *dev)
 	pm_runtime_dont_use_autosuspend(up->dev);
 	pm_runtime_put_sync(up->dev);
 	pm_runtime_disable(up->dev);
-	pm_qos_remove_request(&up->pm_qos_request);
+	cpu_latency_qos_remove_request(&up->pm_qos_request);
 	device_init_wakeup(&dev->dev, false);

 	return 0;
@@ -1869,7 +1868,7 @@ static int serial_omap_runtime_suspend(struct device *dev)
 	serial_omap_enable_wakeup(up, true);

-	up->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
+	up->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
 	schedule_work(&up->qos_work);

 	return 0;


@@ -393,8 +393,7 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
 	}

 	if (pdata.flags & CI_HDRC_PMQOS)
-		pm_qos_add_request(&data->pm_qos_req,
-			PM_QOS_CPU_DMA_LATENCY, 0);
+		cpu_latency_qos_add_request(&data->pm_qos_req, 0);

 	ret = imx_get_clks(dev);
 	if (ret)
@@ -478,7 +477,7 @@ disable_hsic_regulator:
 	/* don't overwrite original ret (cf. EPROBE_DEFER) */
 	regulator_disable(data->hsic_pad_regulator);
 	if (pdata.flags & CI_HDRC_PMQOS)
-		pm_qos_remove_request(&data->pm_qos_req);
+		cpu_latency_qos_remove_request(&data->pm_qos_req);
 	data->ci_pdev = NULL;
 	return ret;
 }
@@ -499,7 +498,7 @@ static int ci_hdrc_imx_remove(struct platform_device *pdev)
 	if (data->ci_pdev) {
 		imx_disable_unprepare_clks(&pdev->dev);
 		if (data->plat_data->flags & CI_HDRC_PMQOS)
-			pm_qos_remove_request(&data->pm_qos_req);
+			cpu_latency_qos_remove_request(&data->pm_qos_req);
 		if (data->hsic_pad_regulator)
 			regulator_disable(data->hsic_pad_regulator);
 	}
@@ -527,7 +526,7 @@ static int __maybe_unused imx_controller_suspend(struct device *dev)
 	imx_disable_unprepare_clks(dev);
 	if (data->plat_data->flags & CI_HDRC_PMQOS)
-		pm_qos_remove_request(&data->pm_qos_req);
+		cpu_latency_qos_remove_request(&data->pm_qos_req);

 	data->in_lpm = true;
@@ -547,8 +546,7 @@ static int __maybe_unused imx_controller_resume(struct device *dev)
 	}

 	if (data->plat_data->flags & CI_HDRC_PMQOS)
-		pm_qos_add_request(&data->pm_qos_req,
-			PM_QOS_CPU_DMA_LATENCY, 0);
+		cpu_latency_qos_add_request(&data->pm_qos_req, 0);

 	ret = imx_prepare_enable_clks(dev);
 	if (ret)

--- a/include/acpi/acpixf.h
+++ b/include/acpi/acpixf.h

@@ -752,7 +752,7 @@ ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi_dispatch_gpe(acpi_handle gpe_device, u3
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_disable_all_gpes(void))
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_runtime_gpes(void))
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_wakeup_gpes(void))
-ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi_any_gpe_status_set(void))
+ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi_any_gpe_status_set(u32 gpe_skip_number))
 ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi_any_fixed_event_status_set(void))
 ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status

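The acpi_any_gpe_status_set() signature change supports the suspend-to-idle wakeup fix mentioned in the summary: a caller can now ask whether any GPE other than a given one has its status bit set, so spurious EC activity can be ignored. A hedged sketch of such a caller; ec_gpe_number is assumed to be obtained from the EC driver and is not part of this diff:

#include <linux/acpi.h>

static bool my_wakeup_is_genuine(u32 ec_gpe_number)
{
        /*
         * Treat the wakeup as genuine only if some GPE other than the
         * EC's (which also fires for battery/thermal chatter) is active.
         */
        return acpi_any_gpe_status_set(ec_gpe_number) != 0;
}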
--- a/include/linux/devfreq.h
+++ b/include/linux/devfreq.h

@@ -158,7 +158,7 @@ struct devfreq_stats {
  * functions except for the context of callbacks defined in struct
  * devfreq_governor, the governor should protect its access with the
  * struct mutex lock in struct devfreq. A governor may use this mutex
- * to protect its own private data in void *data as well.
+ * to protect its own private data in ``void *data`` as well.
  */
 struct devfreq {
         struct list_head node;
@@ -201,24 +201,23 @@ struct devfreq_freqs {
 };

 #if defined(CONFIG_PM_DEVFREQ)
-extern struct devfreq *devfreq_add_device(struct device *dev,
+struct devfreq *devfreq_add_device(struct device *dev,
                                 struct devfreq_dev_profile *profile,
                                 const char *governor_name,
                                 void *data);
-extern int devfreq_remove_device(struct devfreq *devfreq);
-extern struct devfreq *devm_devfreq_add_device(struct device *dev,
+int devfreq_remove_device(struct devfreq *devfreq);
+struct devfreq *devm_devfreq_add_device(struct device *dev,
                                 struct devfreq_dev_profile *profile,
                                 const char *governor_name,
                                 void *data);
-extern void devm_devfreq_remove_device(struct device *dev,
-                                struct devfreq *devfreq);
+void devm_devfreq_remove_device(struct device *dev, struct devfreq *devfreq);

 /* Supposed to be called by PM callbacks */
-extern int devfreq_suspend_device(struct devfreq *devfreq);
-extern int devfreq_resume_device(struct devfreq *devfreq);
+int devfreq_suspend_device(struct devfreq *devfreq);
+int devfreq_resume_device(struct devfreq *devfreq);

-extern void devfreq_suspend(void);
-extern void devfreq_resume(void);
+void devfreq_suspend(void);
+void devfreq_resume(void);

 /**
  * update_devfreq() - Reevaluate the device and configure frequency
@@ -226,39 +225,38 @@ extern void devfreq_resume(void);
  *
  * Note: devfreq->lock must be held
  */
-extern int update_devfreq(struct devfreq *devfreq);
+int update_devfreq(struct devfreq *devfreq);

 /* Helper functions for devfreq user device driver with OPP. */
-extern struct dev_pm_opp *devfreq_recommended_opp(struct device *dev,
+struct dev_pm_opp *devfreq_recommended_opp(struct device *dev,
                                 unsigned long *freq, u32 flags);
-extern int devfreq_register_opp_notifier(struct device *dev,
+int devfreq_register_opp_notifier(struct device *dev,
                                 struct devfreq *devfreq);
-extern int devfreq_unregister_opp_notifier(struct device *dev,
+int devfreq_unregister_opp_notifier(struct device *dev,
                                 struct devfreq *devfreq);
-extern int devm_devfreq_register_opp_notifier(struct device *dev,
+int devm_devfreq_register_opp_notifier(struct device *dev,
                                 struct devfreq *devfreq);
-extern void devm_devfreq_unregister_opp_notifier(struct device *dev,
+void devm_devfreq_unregister_opp_notifier(struct device *dev,
                                 struct devfreq *devfreq);
-extern int devfreq_register_notifier(struct devfreq *devfreq,
+int devfreq_register_notifier(struct devfreq *devfreq,
                                 struct notifier_block *nb,
                                 unsigned int list);
-extern int devfreq_unregister_notifier(struct devfreq *devfreq,
+int devfreq_unregister_notifier(struct devfreq *devfreq,
                                 struct notifier_block *nb,
                                 unsigned int list);
-extern int devm_devfreq_register_notifier(struct device *dev,
+int devm_devfreq_register_notifier(struct device *dev,
                                 struct devfreq *devfreq,
                                 struct notifier_block *nb,
                                 unsigned int list);
-extern void devm_devfreq_unregister_notifier(struct device *dev,
+void devm_devfreq_unregister_notifier(struct device *dev,
                                 struct devfreq *devfreq,
                                 struct notifier_block *nb,
                                 unsigned int list);
-extern struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev,
-                        int index);
+struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev, int index);

 #if IS_ENABLED(CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND)
 /**
- * struct devfreq_simple_ondemand_data - void *data fed to struct devfreq
+ * struct devfreq_simple_ondemand_data - ``void *data`` fed to struct devfreq
  *      and devfreq_add_device
  * @upthreshold:        If the load is over this value, the frequency jumps.
  *                      Specify 0 to use the default. Valid value = 0 to 100.
@@ -278,7 +276,7 @@ struct devfreq_simple_ondemand_data {
 #if IS_ENABLED(CONFIG_DEVFREQ_GOV_PASSIVE)
 /**
- * struct devfreq_passive_data - void *data fed to struct devfreq
+ * struct devfreq_passive_data - ``void *data`` fed to struct devfreq
  *      and devfreq_add_device
  * @parent:     the devfreq instance of parent device.
  * @get_target_freq:    Optional callback, Returns desired operating frequency
@@ -311,9 +309,9 @@ struct devfreq_passive_data {
 #else /* !CONFIG_PM_DEVFREQ */
 static inline struct devfreq *devfreq_add_device(struct device *dev,
                                 struct devfreq_dev_profile *profile,
                                 const char *governor_name,
                                 void *data)
 {
         return ERR_PTR(-ENOSYS);
 }
@@ -350,31 +348,31 @@ static inline void devfreq_suspend(void) {}
 static inline void devfreq_resume(void) {}

 static inline struct dev_pm_opp *devfreq_recommended_opp(struct device *dev,
                                 unsigned long *freq, u32 flags)
 {
         return ERR_PTR(-EINVAL);
 }

 static inline int devfreq_register_opp_notifier(struct device *dev,
                                 struct devfreq *devfreq)
 {
         return -EINVAL;
 }

 static inline int devfreq_unregister_opp_notifier(struct device *dev,
                                 struct devfreq *devfreq)
 {
         return -EINVAL;
 }

 static inline int devm_devfreq_register_opp_notifier(struct device *dev,
                                 struct devfreq *devfreq)
 {
         return -EINVAL;
 }

 static inline void devm_devfreq_unregister_opp_notifier(struct device *dev,
                                 struct devfreq *devfreq)
 {
 }
@@ -393,22 +391,22 @@ static inline int devfreq_unregister_notifier(struct devfreq *devfreq,
 }

 static inline int devm_devfreq_register_notifier(struct device *dev,
                                 struct devfreq *devfreq,
                                 struct notifier_block *nb,
                                 unsigned int list)
 {
         return 0;
 }

 static inline void devm_devfreq_unregister_notifier(struct device *dev,
                                 struct devfreq *devfreq,
                                 struct notifier_block *nb,
                                 unsigned int list)
 {
 }

 static inline struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev,
                                 int index)
 {
         return ERR_PTR(-ENODEV);
 }

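Since the devfreq declarations above keep their signatures and only drop the redundant extern keyword, existing callers are unaffected. For orientation, a sketch of how a driver registers against these declarations; the profile contents, the stub callbacks and the "simple_ondemand" governor choice are illustrative assumptions, not part of this diff:

#include <linux/devfreq.h>
#include <linux/err.h>

static int my_target(struct device *dev, unsigned long *freq, u32 flags)
{
        /* Program the hardware to the closest supported rate here. */
        return 0;
}

static int my_get_status(struct device *dev, struct devfreq_dev_status *stat)
{
        /* Fill in stat->busy_time / stat->total_time from counters here. */
        return 0;
}

static struct devfreq_dev_profile my_profile = {
        .polling_ms     = 100,  /* sample the load every 100 ms */
        .target         = my_target,
        .get_dev_status = my_get_status,
};

static int my_probe(struct device *dev)
{
        struct devfreq *df = devm_devfreq_add_device(dev, &my_profile,
                                                     "simple_ondemand", NULL);

        return PTR_ERR_OR_ZERO(df);
}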
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h

@@ -1,22 +1,20 @@
 /* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Definitions related to Power Management Quality of Service (PM QoS).
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ * Authors:
+ *      Mark Gross <mgross@linux.intel.com>
+ *      Rafael J. Wysocki <rafael.j.wysocki@intel.com>
+ */
+
 #ifndef _LINUX_PM_QOS_H
 #define _LINUX_PM_QOS_H
-/* interface for the pm_qos_power infrastructure of the linux kernel.
- *
- * Mark Gross <mgross@linux.intel.com>
- */
+
 #include <linux/plist.h>
 #include <linux/notifier.h>
 #include <linux/device.h>
-#include <linux/workqueue.h>

-enum {
-        PM_QOS_RESERVED = 0,
-        PM_QOS_CPU_DMA_LATENCY,
-
-        /* insert new class ID */
-        PM_QOS_NUM_CLASSES,
-};

 enum pm_qos_flags_status {
         PM_QOS_FLAGS_UNDEFINED = -1,
@@ -29,7 +27,7 @@ enum pm_qos_flags_status {
 #define PM_QOS_LATENCY_ANY      S32_MAX
 #define PM_QOS_LATENCY_ANY_NS   ((s64)PM_QOS_LATENCY_ANY * NSEC_PER_USEC)

-#define PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE        (2000 * USEC_PER_SEC)
+#define PM_QOS_CPU_LATENCY_DEFAULT_VALUE        (2000 * USEC_PER_SEC)
 #define PM_QOS_RESUME_LATENCY_DEFAULT_VALUE     PM_QOS_LATENCY_ANY
 #define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT     PM_QOS_LATENCY_ANY
 #define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS  PM_QOS_LATENCY_ANY_NS
@@ -40,22 +38,10 @@ enum pm_qos_flags_status {
 #define PM_QOS_FLAG_NO_POWER_OFF        (1 << 0)

-struct pm_qos_request {
-        struct plist_node node;
-        int pm_qos_class;
-        struct delayed_work work; /* for pm_qos_update_request_timeout */
-};
-
-struct pm_qos_flags_request {
-        struct list_head node;
-        s32 flags;      /* Do not change to 64 bit */
-};
-
 enum pm_qos_type {
         PM_QOS_UNITIALIZED,
         PM_QOS_MAX,             /* return the largest value */
         PM_QOS_MIN,             /* return the smallest value */
-        PM_QOS_SUM              /* return the sum */
 };

 /*
@@ -72,6 +58,16 @@ struct pm_qos_constraints {
         struct blocking_notifier_head *notifiers;
 };

+struct pm_qos_request {
+        struct plist_node node;
+        struct pm_qos_constraints *qos;
+};
+
+struct pm_qos_flags_request {
+        struct list_head node;
+        s32 flags;      /* Do not change to 64 bit */
+};
+
 struct pm_qos_flags {
         struct list_head list;
         s32 effective_flags;    /* Do not change to 64 bit */
@@ -140,24 +136,31 @@ static inline int dev_pm_qos_request_active(struct dev_pm_qos_request *req)
         return req->dev != NULL;
 }

+s32 pm_qos_read_value(struct pm_qos_constraints *c);
 int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
                          enum pm_qos_req_action action, int value);
 bool pm_qos_update_flags(struct pm_qos_flags *pqf,
                          struct pm_qos_flags_request *req,
                          enum pm_qos_req_action action, s32 val);

-void pm_qos_add_request(struct pm_qos_request *req, int pm_qos_class,
-                        s32 value);
-void pm_qos_update_request(struct pm_qos_request *req,
-                           s32 new_value);
-void pm_qos_update_request_timeout(struct pm_qos_request *req,
-                                   s32 new_value, unsigned long timeout_us);
-void pm_qos_remove_request(struct pm_qos_request *req);
-int pm_qos_request(int pm_qos_class);
-int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier);
-int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier);
-int pm_qos_request_active(struct pm_qos_request *req);
-s32 pm_qos_read_value(struct pm_qos_constraints *c);
+#ifdef CONFIG_CPU_IDLE
+s32 cpu_latency_qos_limit(void);
+bool cpu_latency_qos_request_active(struct pm_qos_request *req);
+void cpu_latency_qos_add_request(struct pm_qos_request *req, s32 value);
+void cpu_latency_qos_update_request(struct pm_qos_request *req, s32 new_value);
+void cpu_latency_qos_remove_request(struct pm_qos_request *req);
+#else
+static inline s32 cpu_latency_qos_limit(void) { return INT_MAX; }
+static inline bool cpu_latency_qos_request_active(struct pm_qos_request *req)
+{
+        return false;
+}
+static inline void cpu_latency_qos_add_request(struct pm_qos_request *req,
+                                               s32 value) {}
+static inline void cpu_latency_qos_update_request(struct pm_qos_request *req,
+                                                  s32 new_value) {}
+static inline void cpu_latency_qos_remove_request(struct pm_qos_request *req) {}
+#endif

 #ifdef CONFIG_PM
 enum pm_qos_flags_status __dev_pm_qos_flags(struct device *dev, s32 mask);

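The replacement API visible above reduces the old class-based calls to a handful of CPU-latency-specific ones. A minimal driver-side sketch of their intended use; the names and the 20 us bound are made up for illustration:

#include <linux/pm_qos.h>

static struct pm_qos_request my_qos_req;

static void my_start_low_latency_io(void)
{
        /* Bound CPU wakeup latency to 20 us while I/O is in flight. */
        cpu_latency_qos_add_request(&my_qos_req, 20);
}

static void my_stop_low_latency_io(void)
{
        if (cpu_latency_qos_request_active(&my_qos_req))
                cpu_latency_qos_remove_request(&my_qos_req);
}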
--- a/include/linux/pm_runtime.h
+++ b/include/linux/pm_runtime.h

@@ -38,7 +38,7 @@ extern int pm_runtime_force_resume(struct device *dev);
 extern int __pm_runtime_idle(struct device *dev, int rpmflags);
 extern int __pm_runtime_suspend(struct device *dev, int rpmflags);
 extern int __pm_runtime_resume(struct device *dev, int rpmflags);
-extern int pm_runtime_get_if_in_use(struct device *dev);
+extern int pm_runtime_get_if_active(struct device *dev, bool ign_usage_count);
 extern int pm_schedule_suspend(struct device *dev, unsigned int delay);
 extern int __pm_runtime_set_status(struct device *dev, unsigned int status);
 extern int pm_runtime_barrier(struct device *dev);
@@ -60,6 +60,11 @@ extern void pm_runtime_put_suppliers(struct device *dev);
 extern void pm_runtime_new_link(struct device *dev);
 extern void pm_runtime_drop_link(struct device *dev);

+static inline int pm_runtime_get_if_in_use(struct device *dev)
+{
+        return pm_runtime_get_if_active(dev, false);
+}
+
 static inline void pm_suspend_ignore_children(struct device *dev, bool enable)
 {
         dev->power.ignore_children = enable;
@@ -143,6 +148,11 @@ static inline int pm_runtime_get_if_in_use(struct device *dev)
 {
         return -EINVAL;
 }
+static inline int pm_runtime_get_if_active(struct device *dev,
+                                           bool ign_usage_count)
+{
+        return -EINVAL;
+}
 static inline int __pm_runtime_set_status(struct device *dev,
                                           unsigned int status) { return 0; }
 static inline int pm_runtime_barrier(struct device *dev) { return 0; }

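pm_runtime_get_if_in_use() is now a wrapper around the new pm_runtime_get_if_active(): per the inline above, passing false restores the old behavior, while true takes the reference whenever the device is RPM_ACTIVE even if its usage counter is zero. A sketch of a caller that wants to touch the hardware only if it is already powered, without resuming it (names invented):

#include <linux/pm_runtime.h>

static int my_read_counters(struct device *dev)
{
        /* Grab a usage reference only if the device is RPM_ACTIVE. */
        int ret = pm_runtime_get_if_active(dev, true);

        if (ret <= 0)
                return -EAGAIN; /* suspended (0) or RPM disabled (-EINVAL) */

        /* ... safe to access registers here ... */

        pm_runtime_put(dev);
        return 0;
}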
--- a/include/trace/events/power.h
+++ b/include/trace/events/power.h

@@ -359,75 +359,50 @@ DEFINE_EVENT(power_domain, power_domain_target,
 );

 /*
- * The pm qos events are used for pm qos update
+ * CPU latency QoS events used for global CPU latency QoS list updates
  */
-DECLARE_EVENT_CLASS(pm_qos_request,
+DECLARE_EVENT_CLASS(cpu_latency_qos_request,

-        TP_PROTO(int pm_qos_class, s32 value),
+        TP_PROTO(s32 value),

-        TP_ARGS(pm_qos_class, value),
+        TP_ARGS(value),

         TP_STRUCT__entry(
-                __field( int, pm_qos_class )
                 __field( s32, value )
         ),

         TP_fast_assign(
-                __entry->pm_qos_class = pm_qos_class;
                 __entry->value = value;
         ),

-        TP_printk("pm_qos_class=%s value=%d",
-                  __print_symbolic(__entry->pm_qos_class,
-                        { PM_QOS_CPU_DMA_LATENCY, "CPU_DMA_LATENCY" }),
+        TP_printk("CPU_DMA_LATENCY value=%d",
                   __entry->value)
 );

-DEFINE_EVENT(pm_qos_request, pm_qos_add_request,
+DEFINE_EVENT(cpu_latency_qos_request, pm_qos_add_request,

-        TP_PROTO(int pm_qos_class, s32 value),
+        TP_PROTO(s32 value),

-        TP_ARGS(pm_qos_class, value)
+        TP_ARGS(value)
 );

-DEFINE_EVENT(pm_qos_request, pm_qos_update_request,
+DEFINE_EVENT(cpu_latency_qos_request, pm_qos_update_request,

-        TP_PROTO(int pm_qos_class, s32 value),
+        TP_PROTO(s32 value),

-        TP_ARGS(pm_qos_class, value)
+        TP_ARGS(value)
 );

-DEFINE_EVENT(pm_qos_request, pm_qos_remove_request,
+DEFINE_EVENT(cpu_latency_qos_request, pm_qos_remove_request,

-        TP_PROTO(int pm_qos_class, s32 value),
+        TP_PROTO(s32 value),

-        TP_ARGS(pm_qos_class, value)
-);
-
-TRACE_EVENT(pm_qos_update_request_timeout,
-
-        TP_PROTO(int pm_qos_class, s32 value, unsigned long timeout_us),
-
-        TP_ARGS(pm_qos_class, value, timeout_us),
-
-        TP_STRUCT__entry(
-                __field( int, pm_qos_class )
-                __field( s32, value )
-                __field( unsigned long, timeout_us )
-        ),
-
-        TP_fast_assign(
-                __entry->pm_qos_class = pm_qos_class;
-                __entry->value = value;
-                __entry->timeout_us = timeout_us;
-        ),
-
-        TP_printk("pm_qos_class=%s value=%d, timeout_us=%ld",
-                  __print_symbolic(__entry->pm_qos_class,
-                        { PM_QOS_CPU_DMA_LATENCY, "CPU_DMA_LATENCY" }),
-                  __entry->value, __entry->timeout_us)
+        TP_ARGS(value)
 );

+/*
+ * General PM QoS events used for updates of PM QoS request lists
+ */
 DECLARE_EVENT_CLASS(pm_qos_update,

         TP_PROTO(enum pm_qos_req_action action, int prev_value, int curr_value),

--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c

@@ -1,31 +1,21 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * This module exposes the interface to kernel space for specifying
- * QoS dependencies.  It provides infrastructure for registration of:
- *
- * Dependents on a QoS value : register requests
- * Watchers of QoS value : get notified when target QoS value changes
+ * Power Management Quality of Service (PM QoS) support base.
  *
- * This QoS design is best effort based.  Dependents register their QoS needs.
- * Watchers register to keep track of the current QoS needs of the system.
+ * Copyright (C) 2020 Intel Corporation
  *
- * There are 3 basic classes of QoS parameter: latency, timeout, throughput
- * each have defined units:
- * latency: usec
- * timeout: usec <-- currently not used.
- * throughput: kbs (kilo byte / sec)
+ * Authors:
+ *      Mark Gross <mgross@linux.intel.com>
+ *      Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  *
- * There are lists of pm_qos_objects each one wrapping requests, notifiers
+ * Provided here is an interface for specifying PM QoS dependencies.  It allows
+ * entities depending on QoS constraints to register their requests which are
+ * aggregated as appropriate to produce effective constraints (target values)
+ * that can be monitored by entities needing to respect them, either by polling
+ * or through a built-in notification mechanism.
  *
- * User mode requests on a QOS parameter register themselves to the
- * subsystem by opening the device node /dev/... and writing there request to
- * the node.  As long as the process holds a file handle open to the node the
- * client continues to be accounted for.  Upon file release the usermode
- * request is removed and a new qos target is computed.  This way when the
- * request that the application has is cleaned up when closes the file
- * pointer or exits the pm_qos_object will get an opportunity to clean up.
- *
- * Mark Gross <mgross@linux.intel.com>
+ * In addition to the basic functionality, more specific interfaces for managing
+ * global CPU latency QoS requests and frequency QoS requests are provided.
  */

 /*#define DEBUG*/

@@ -54,56 +44,19 @@
  * or pm_qos_object list and pm_qos_objects need to happen with pm_qos_lock
  * held, taken with _irqsave.  One lock to rule them all
  */
-struct pm_qos_object {
-        struct pm_qos_constraints *constraints;
-        struct miscdevice pm_qos_power_miscdev;
-        char *name;
-};
-
 static DEFINE_SPINLOCK(pm_qos_lock);

-static struct pm_qos_object null_pm_qos;
-
-static BLOCKING_NOTIFIER_HEAD(cpu_dma_lat_notifier);
-static struct pm_qos_constraints cpu_dma_constraints = {
-        .list = PLIST_HEAD_INIT(cpu_dma_constraints.list),
-        .target_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
-        .default_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
-        .no_constraint_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
-        .type = PM_QOS_MIN,
-        .notifiers = &cpu_dma_lat_notifier,
-};
-static struct pm_qos_object cpu_dma_pm_qos = {
-        .constraints = &cpu_dma_constraints,
-        .name = "cpu_dma_latency",
-};
-
-static struct pm_qos_object *pm_qos_array[] = {
-        &null_pm_qos,
-        &cpu_dma_pm_qos,
-};
-
-static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
-                size_t count, loff_t *f_pos);
-static ssize_t pm_qos_power_read(struct file *filp, char __user *buf,
-                size_t count, loff_t *f_pos);
-static int pm_qos_power_open(struct inode *inode, struct file *filp);
-static int pm_qos_power_release(struct inode *inode, struct file *filp);
-
-static const struct file_operations pm_qos_power_fops = {
-        .write = pm_qos_power_write,
-        .read = pm_qos_power_read,
-        .open = pm_qos_power_open,
-        .release = pm_qos_power_release,
-        .llseek = noop_llseek,
-};
-
-/* unlocked internal variant */
-static inline int pm_qos_get_value(struct pm_qos_constraints *c)
+/**
+ * pm_qos_read_value - Return the current effective constraint value.
+ * @c: List of PM QoS constraint requests.
+ */
+s32 pm_qos_read_value(struct pm_qos_constraints *c)
 {
-        struct plist_node *node;
-        int total_value = 0;
+        return READ_ONCE(c->target_value);
+}

+static int pm_qos_get_value(struct pm_qos_constraints *c)
+{
         if (plist_head_empty(&c->list))
                 return c->no_constraint_value;

@@ -114,111 +67,42 @@ static inline int pm_qos_get_value(struct pm_qos_constraints *c)
         case PM_QOS_MAX:
                 return plist_last(&c->list)->prio;

-        case PM_QOS_SUM:
-                plist_for_each(node, &c->list)
-                        total_value += node->prio;
-
-                return total_value;
-
         default:
-                /* runtime check for not using enum */
-                BUG();
+                WARN(1, "Unknown PM QoS type in %s\n", __func__);
                 return PM_QOS_DEFAULT_VALUE;
         }
 }

-s32 pm_qos_read_value(struct pm_qos_constraints *c)
-{
-        return c->target_value;
-}
-
-static inline void pm_qos_set_value(struct pm_qos_constraints *c, s32 value)
-{
-        c->target_value = value;
-}
-
-static int pm_qos_debug_show(struct seq_file *s, void *unused)
-{
-        struct pm_qos_object *qos = (struct pm_qos_object *)s->private;
-        struct pm_qos_constraints *c;
-        struct pm_qos_request *req;
-        char *type;
-        unsigned long flags;
-        int tot_reqs = 0;
-        int active_reqs = 0;
-
-        if (IS_ERR_OR_NULL(qos)) {
-                pr_err("%s: bad qos param!\n", __func__);
-                return -EINVAL;
-        }
-        c = qos->constraints;
-        if (IS_ERR_OR_NULL(c)) {
-                pr_err("%s: Bad constraints on qos?\n", __func__);
-                return -EINVAL;
-        }
-
-        /* Lock to ensure we have a snapshot */
-        spin_lock_irqsave(&pm_qos_lock, flags);
-        if (plist_head_empty(&c->list)) {
-                seq_puts(s, "Empty!\n");
-                goto out;
-        }
-
-        switch (c->type) {
-        case PM_QOS_MIN:
-                type = "Minimum";
-                break;
-        case PM_QOS_MAX:
-                type = "Maximum";
-                break;
-        case PM_QOS_SUM:
-                type = "Sum";
-                break;
-        default:
-                type = "Unknown";
-        }
-
-        plist_for_each_entry(req, &c->list, node) {
-                char *state = "Default";
-
-                if ((req->node).prio != c->default_value) {
-                        active_reqs++;
-                        state = "Active";
-                }
-                tot_reqs++;
-                seq_printf(s, "%d: %d: %s\n", tot_reqs,
-                           (req->node).prio, state);
-        }
-
-        seq_printf(s, "Type=%s, Value=%d, Requests: active=%d / total=%d\n",
-                   type, pm_qos_get_value(c), active_reqs, tot_reqs);
-
-out:
-        spin_unlock_irqrestore(&pm_qos_lock, flags);
-        return 0;
-}
-DEFINE_SHOW_ATTRIBUTE(pm_qos_debug);
+static void pm_qos_set_value(struct pm_qos_constraints *c, s32 value)
+{
+        WRITE_ONCE(c->target_value, value);
+}

 /**
- * pm_qos_update_target - manages the constraints list and calls the notifiers
- *  if needed
- * @c: constraints data struct
- * @node: request to add to the list, to update or to remove
- * @action: action to take on the constraints list
- * @value: value of the request to add or update
+ * pm_qos_update_target - Update a list of PM QoS constraint requests.
+ * @c: List of PM QoS requests.
+ * @node: Target list entry.
+ * @action: Action to carry out (add, update or remove).
+ * @value: New request value for the target list entry.
  *
- * This function returns 1 if the aggregated constraint value has changed, 0
- *  otherwise.
+ * Update the given list of PM QoS constraint requests, @c, by carrying an
+ * @action involving the @node list entry and @value on it.
+ *
+ * The recognized values of @action are PM_QOS_ADD_REQ (store @value in @node
+ * and add it to the list), PM_QOS_UPDATE_REQ (remove @node from the list, store
+ * @value in it and add it to the list again), and PM_QOS_REMOVE_REQ (remove
+ * @node from the list, ignore @value).
+ *
+ * Return: 1 if the aggregate constraint value has changed, 0 otherwise.
  */
 int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
                          enum pm_qos_req_action action, int value)
 {
-        unsigned long flags;
         int prev_value, curr_value, new_value;
-        int ret;
+        unsigned long flags;

         spin_lock_irqsave(&pm_qos_lock, flags);
+
         prev_value = pm_qos_get_value(c);
         if (value == PM_QOS_DEFAULT_VALUE)
                 new_value = c->default_value;
@@ -231,9 +115,8 @@ int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
                 break;
         case PM_QOS_UPDATE_REQ:
                 /*
-                 * to change the list, we atomically remove, reinit
-                 * with new value and add, then see if the extremal
-                 * changed
+                 * To change the list, atomically remove, reinit with new value
+                 * and add, then see if the aggregate has changed.
                  */
                 plist_del(node, &c->list);
                 /* fall through */
@@ -252,16 +135,14 @@ int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
         spin_unlock_irqrestore(&pm_qos_lock, flags);

         trace_pm_qos_update_target(action, prev_value, curr_value);
-        if (prev_value != curr_value) {
-                ret = 1;
-                if (c->notifiers)
-                        blocking_notifier_call_chain(c->notifiers,
-                                                     (unsigned long)curr_value,
-                                                     NULL);
-        } else {
-                ret = 0;
-        }
-        return ret;
+
+        if (prev_value == curr_value)
+                return 0;
+
+        if (c->notifiers)
+                blocking_notifier_call_chain(c->notifiers, curr_value, NULL);
+
+        return 1;
 }

 /**
@@ -283,14 +164,12 @@ static void pm_qos_flags_remove_req(struct pm_qos_flags *pqf,

 /**
  * pm_qos_update_flags - Update a set of PM QoS flags.
- * @pqf: Set of flags to update.
+ * @pqf: Set of PM QoS flags to update.
  * @req: Request to add to the set, to modify, or to remove from the set.
  * @action: Action to take on the set.
  * @val: Value of the request to add or modify.
  *
- * Update the given set of PM QoS flags and call notifiers if the aggregate
- * value has changed.  Returns 1 if the aggregate constraint value has changed,
- * 0 otherwise.
+ * Return: 1 if the aggregate constraint value has changed, 0 otherwise.
  */
 bool pm_qos_update_flags(struct pm_qos_flags *pqf,
                          struct pm_qos_flags_request *req,
@@ -326,288 +205,180 @@ bool pm_qos_update_flags(struct pm_qos_flags *pqf,
         spin_unlock_irqrestore(&pm_qos_lock, irqflags);

         trace_pm_qos_update_flags(action, prev_value, curr_value);
+
         return prev_value != curr_value;
 }

+#ifdef CONFIG_CPU_IDLE
+/* Definitions related to the CPU latency QoS. */
+
+static struct pm_qos_constraints cpu_latency_constraints = {
+        .list = PLIST_HEAD_INIT(cpu_latency_constraints.list),
+        .target_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
+        .default_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
+        .no_constraint_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
+        .type = PM_QOS_MIN,
+};
+
 /**
- * pm_qos_request - returns current system wide qos expectation
- * @pm_qos_class: identification of which qos value is requested
- *
- * This function returns the current target value.
+ * cpu_latency_qos_limit - Return current system-wide CPU latency QoS limit.
  */
-int pm_qos_request(int pm_qos_class)
+s32 cpu_latency_qos_limit(void)
 {
-        return pm_qos_read_value(pm_qos_array[pm_qos_class]->constraints);
+        return pm_qos_read_value(&cpu_latency_constraints);
 }
-EXPORT_SYMBOL_GPL(pm_qos_request);

-int pm_qos_request_active(struct pm_qos_request *req)
-{
-        return req->pm_qos_class != 0;
-}
-EXPORT_SYMBOL_GPL(pm_qos_request_active);
-
-static void __pm_qos_update_request(struct pm_qos_request *req,
-                                    s32 new_value)
-{
-        trace_pm_qos_update_request(req->pm_qos_class, new_value);
-
-        if (new_value != req->node.prio)
-                pm_qos_update_target(
-                        pm_qos_array[req->pm_qos_class]->constraints,
-                        &req->node, PM_QOS_UPDATE_REQ, new_value);
-}
-
 /**
- * pm_qos_work_fn - the timeout handler of pm_qos_update_request_timeout
- * @work: work struct for the delayed work (timeout)
+ * cpu_latency_qos_request_active - Check the given PM QoS request.
+ * @req: PM QoS request to check.
  *
- * This cancels the timeout request by falling back to the default at timeout.
+ * Return: 'true' if @req has been added to the CPU latency QoS list, 'false'
+ * otherwise.
  */
-static void pm_qos_work_fn(struct work_struct *work)
+bool cpu_latency_qos_request_active(struct pm_qos_request *req)
 {
-        struct pm_qos_request *req = container_of(to_delayed_work(work),
-                                                  struct pm_qos_request,
-                                                  work);
-
-        __pm_qos_update_request(req, PM_QOS_DEFAULT_VALUE);
+        return req->qos == &cpu_latency_constraints;
+}
+EXPORT_SYMBOL_GPL(cpu_latency_qos_request_active);
+
+static void cpu_latency_qos_apply(struct pm_qos_request *req,
+                                  enum pm_qos_req_action action, s32 value)
+{
+        int ret = pm_qos_update_target(req->qos, &req->node, action, value);
+
+        if (ret > 0)
+                wake_up_all_idle_cpus();
 }

 /**
- * pm_qos_add_request - inserts new qos request into the list
- * @req: pointer to a preallocated handle
- * @pm_qos_class: identifies which list of qos request to use
- * @value: defines the qos request
+ * cpu_latency_qos_add_request - Add new CPU latency QoS request.
+ * @req: Pointer to a preallocated handle.
+ * @value: Requested constraint value.
  *
- * This function inserts a new entry in the pm_qos_class list of requested qos
- * performance characteristics.  It recomputes the aggregate QoS expectations
- * for the pm_qos_class of parameters and initializes the pm_qos_request
- * handle.  Caller needs to save this handle for later use in updates and
- * removal.
+ * Use @value to initialize the request handle pointed to by @req, insert it as
+ * a new entry to the CPU latency QoS list and recompute the effective QoS
+ * constraint for that list.
+ *
+ * Callers need to save the handle for later use in updates and removal of the
+ * QoS request represented by it.
  */
-void pm_qos_add_request(struct pm_qos_request *req,
-                        int pm_qos_class, s32 value)
+void cpu_latency_qos_add_request(struct pm_qos_request *req, s32 value)
 {
-        if (!req) /*guard against callers passing in null */
+        if (!req)
                 return;

-        if (pm_qos_request_active(req)) {
-                WARN(1, KERN_ERR "pm_qos_add_request() called for already added request\n");
+        if (cpu_latency_qos_request_active(req)) {
+                WARN(1, KERN_ERR "%s called for already added request\n", __func__);
                 return;
         }
-        req->pm_qos_class = pm_qos_class;
-        INIT_DELAYED_WORK(&req->work, pm_qos_work_fn);
-        trace_pm_qos_add_request(pm_qos_class, value);
-        pm_qos_update_target(pm_qos_array[pm_qos_class]->constraints,
-                             &req->node, PM_QOS_ADD_REQ, value);
+
+        trace_pm_qos_add_request(value);
+
+        req->qos = &cpu_latency_constraints;
+        cpu_latency_qos_apply(req, PM_QOS_ADD_REQ, value);
 }
-EXPORT_SYMBOL_GPL(pm_qos_add_request);
+EXPORT_SYMBOL_GPL(cpu_latency_qos_add_request);

 /**
- * pm_qos_update_request - modifies an existing qos request
- * @req : handle to list element holding a pm_qos request to use
- * @value: defines the qos request
+ * cpu_latency_qos_update_request - Modify existing CPU latency QoS request.
+ * @req : QoS request to update.
+ * @new_value: New requested constraint value.
  *
- * Updates an existing qos request for the pm_qos_class of parameters along
- * with updating the target pm_qos_class value.
- *
- * Attempts are made to make this code callable on hot code paths.
+ * Use @new_value to update the QoS request represented by @req in the CPU
+ * latency QoS list along with updating the effective constraint value for that
+ * list.
  */
-void pm_qos_update_request(struct pm_qos_request *req,
-                           s32 new_value)
+void cpu_latency_qos_update_request(struct pm_qos_request *req, s32 new_value)
 {
-        if (!req) /*guard against callers passing in null */
+        if (!req)
                 return;

-        if (!pm_qos_request_active(req)) {
-                WARN(1, KERN_ERR "pm_qos_update_request() called for unknown object\n");
+        if (!cpu_latency_qos_request_active(req)) {
+                WARN(1, KERN_ERR "%s called for unknown object\n", __func__);
                 return;
         }

-        cancel_delayed_work_sync(&req->work);
-        __pm_qos_update_request(req, new_value);
+        trace_pm_qos_update_request(new_value);
+
+        if (new_value == req->node.prio)
+                return;
+
+        cpu_latency_qos_apply(req, PM_QOS_UPDATE_REQ, new_value);
 }
-EXPORT_SYMBOL_GPL(pm_qos_update_request);
+EXPORT_SYMBOL_GPL(cpu_latency_qos_update_request);

-/**
- * pm_qos_update_request_timeout - modifies an existing qos request temporarily.
- * @req : handle to list element holding a pm_qos request to use
- * @new_value: defines the temporal qos request
- * @timeout_us: the effective duration of this qos request in usecs.
- *
- * After timeout_us, this qos request is cancelled automatically.
- */
-void pm_qos_update_request_timeout(struct pm_qos_request *req, s32 new_value,
-                                   unsigned long timeout_us)
-{
-        if (!req)
-                return;
-        if (WARN(!pm_qos_request_active(req),
-                 "%s called for unknown object.", __func__))
-                return;
-
-        cancel_delayed_work_sync(&req->work);
-
-        trace_pm_qos_update_request_timeout(req->pm_qos_class,
-                                            new_value, timeout_us);
-        if (new_value != req->node.prio)
-                pm_qos_update_target(
-                        pm_qos_array[req->pm_qos_class]->constraints,
-                        &req->node, PM_QOS_UPDATE_REQ, new_value);
-
-        schedule_delayed_work(&req->work, usecs_to_jiffies(timeout_us));
-}
-
 /**
- * pm_qos_remove_request - modifies an existing qos request
- * @req: handle to request list element
+ * cpu_latency_qos_remove_request - Remove existing CPU latency QoS request.
+ * @req: QoS request to remove.
  *
- * Will remove pm qos request from the list of constraints and
- * recompute the current target value for the pm_qos_class.  Call this
- * on slow code paths.
+ * Remove the CPU latency QoS request represented by @req from the CPU latency
+ * QoS list along with updating the effective constraint value for that list.
  */
-void pm_qos_remove_request(struct pm_qos_request *req)
+void cpu_latency_qos_remove_request(struct pm_qos_request *req)
 {
-        if (!req) /*guard against callers passing in null */
+        if (!req)
                 return;
-                /* silent return to keep pcm code cleaner */

-        if (!pm_qos_request_active(req)) {
-                WARN(1, KERN_ERR "pm_qos_remove_request() called for unknown object\n");
+        if (!cpu_latency_qos_request_active(req)) {
+                WARN(1, KERN_ERR "%s called for unknown object\n", __func__);
                 return;
         }

-        cancel_delayed_work_sync(&req->work);
-        trace_pm_qos_remove_request(req->pm_qos_class, PM_QOS_DEFAULT_VALUE);
-        pm_qos_update_target(pm_qos_array[req->pm_qos_class]->constraints,
-                             &req->node, PM_QOS_REMOVE_REQ,
-                             PM_QOS_DEFAULT_VALUE);
+        trace_pm_qos_remove_request(PM_QOS_DEFAULT_VALUE);
+
+        cpu_latency_qos_apply(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
         memset(req, 0, sizeof(*req));
 }
-EXPORT_SYMBOL_GPL(pm_qos_remove_request);
+EXPORT_SYMBOL_GPL(cpu_latency_qos_remove_request);

-/**
- * pm_qos_add_notifier - sets notification entry for changes to target value
- * @pm_qos_class: identifies which qos target changes should be notified.
- * @notifier: notifier block managed by caller.
- *
- * will register the notifier into a notification chain that gets called
- * upon changes to the pm_qos_class target value.
- */
-int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier)
-{
-        int retval;
-
-        retval = blocking_notifier_chain_register(
-                        pm_qos_array[pm_qos_class]->constraints->notifiers,
-                        notifier);
-
-        return retval;
-}
-EXPORT_SYMBOL_GPL(pm_qos_add_notifier);
-
-/**
- * pm_qos_remove_notifier - deletes notification entry from chain.
- * @pm_qos_class: identifies which qos target changes are notified.
- * @notifier: notifier block to be removed.
- *
- * will remove the notifier from the notification chain that gets called
- * upon changes to the pm_qos_class target value.
- */
-int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier)
-{
-        int retval;
-
-        retval = blocking_notifier_chain_unregister(
-                        pm_qos_array[pm_qos_class]->constraints->notifiers,
-                        notifier);
-
-        return retval;
-}
-EXPORT_SYMBOL_GPL(pm_qos_remove_notifier);
-
-/* User space interface to PM QoS classes via misc devices */
-static int register_pm_qos_misc(struct pm_qos_object *qos, struct dentry *d)
-{
-        qos->pm_qos_power_miscdev.minor = MISC_DYNAMIC_MINOR;
-        qos->pm_qos_power_miscdev.name = qos->name;
-        qos->pm_qos_power_miscdev.fops = &pm_qos_power_fops;
-
-        debugfs_create_file(qos->name, S_IRUGO, d, (void *)qos,
-                            &pm_qos_debug_fops);
-
-        return misc_register(&qos->pm_qos_power_miscdev);
-}
-
-static int find_pm_qos_object_by_minor(int minor)
-{
-        int pm_qos_class;
-
-        for (pm_qos_class = PM_QOS_CPU_DMA_LATENCY;
-                pm_qos_class < PM_QOS_NUM_CLASSES; pm_qos_class++) {
-                if (minor ==
-                        pm_qos_array[pm_qos_class]->pm_qos_power_miscdev.minor)
-                        return pm_qos_class;
-        }
-        return -1;
-}
-
-static int pm_qos_power_open(struct inode *inode, struct file *filp)
-{
-        long pm_qos_class;
-
-        pm_qos_class = find_pm_qos_object_by_minor(iminor(inode));
-        if (pm_qos_class >= PM_QOS_CPU_DMA_LATENCY) {
-                struct pm_qos_request *req = kzalloc(sizeof(*req), GFP_KERNEL);
-                if (!req)
-                        return -ENOMEM;
-
-                pm_qos_add_request(req, pm_qos_class, PM_QOS_DEFAULT_VALUE);
-                filp->private_data = req;
-
-                return 0;
-        }
-        return -EPERM;
-}
+/* User space interface to the CPU latency QoS via misc device. */
+
+static int cpu_latency_qos_open(struct inode *inode, struct file *filp)
+{
+        struct pm_qos_request *req;
+
+        req = kzalloc(sizeof(*req), GFP_KERNEL);
+        if (!req)
+                return -ENOMEM;
+
+        cpu_latency_qos_add_request(req, PM_QOS_DEFAULT_VALUE);
+        filp->private_data = req;
+
+        return 0;
+}

-static int pm_qos_power_release(struct inode *inode, struct file *filp)
+static int cpu_latency_qos_release(struct inode *inode, struct file *filp)
 {
-        struct pm_qos_request *req;
+        struct pm_qos_request *req = filp->private_data;

-        req = filp->private_data;
-        pm_qos_remove_request(req);
+        filp->private_data = NULL;
+
+        cpu_latency_qos_remove_request(req);
         kfree(req);

         return 0;
 }

-static ssize_t pm_qos_power_read(struct file *filp, char __user *buf,
-                size_t count, loff_t *f_pos)
+static ssize_t cpu_latency_qos_read(struct file *filp, char __user *buf,
+                                    size_t count, loff_t *f_pos)
 {
-        s32 value;
-        unsigned long flags;
         struct pm_qos_request *req = filp->private_data;
+        unsigned long flags;
+        s32 value;

-        if (!req)
-                return -EINVAL;
-        if (!pm_qos_request_active(req))
+        if (!req || !cpu_latency_qos_request_active(req))
                 return -EINVAL;

         spin_lock_irqsave(&pm_qos_lock, flags);
-        value = pm_qos_get_value(pm_qos_array[req->pm_qos_class]->constraints);
+        value = pm_qos_get_value(&cpu_latency_constraints);
         spin_unlock_irqrestore(&pm_qos_lock, flags);

         return simple_read_from_buffer(buf, count, f_pos, &value, sizeof(s32));
 }

-static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
-                size_t count, loff_t *f_pos)
+static ssize_t cpu_latency_qos_write(struct file *filp, const char __user *buf,
+                                     size_t count, loff_t *f_pos)
 {
         s32 value;
-        struct pm_qos_request *req;

         if (count == sizeof(s32)) {
                 if (copy_from_user(&value, buf, sizeof(s32)))
@@ -620,36 +391,38 @@ static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
                 return ret;
         }

-        req = filp->private_data;
-        pm_qos_update_request(req, value);
+        cpu_latency_qos_update_request(filp->private_data, value);

         return count;
 }
+
+static const struct file_operations cpu_latency_qos_fops = {
+        .write = cpu_latency_qos_write,
+        .read = cpu_latency_qos_read,
+        .open = cpu_latency_qos_open,
+        .release = cpu_latency_qos_release,
+        .llseek = noop_llseek,
+};

-static int __init pm_qos_power_init(void)
+static struct miscdevice cpu_latency_qos_miscdev = {
+        .minor = MISC_DYNAMIC_MINOR,
+        .name = "cpu_dma_latency",
+        .fops = &cpu_latency_qos_fops,
+};
+
+static int __init cpu_latency_qos_init(void)
 {
-        int ret = 0;
-        int i;
-        struct dentry *d;
+        int ret;

-        BUILD_BUG_ON(ARRAY_SIZE(pm_qos_array) != PM_QOS_NUM_CLASSES);
-
-        d = debugfs_create_dir("pm_qos", NULL);
-
-        for (i = PM_QOS_CPU_DMA_LATENCY; i < PM_QOS_NUM_CLASSES; i++) {
-                ret = register_pm_qos_misc(pm_qos_array[i], d);
-                if (ret < 0) {
-                        pr_err("%s: %s setup failed\n",
-                               __func__, pm_qos_array[i]->name);
-                        return ret;
-                }
-        }
+        ret = misc_register(&cpu_latency_qos_miscdev);
+        if (ret < 0)
+                pr_err("%s: %s setup failed\n", __func__,
+                       cpu_latency_qos_miscdev.name);

         return ret;
 }
-
-late_initcall(pm_qos_power_init);
+late_initcall(cpu_latency_qos_init);
+
+#endif /* CONFIG_CPU_IDLE */

 /* Definitions related to the frequency QoS below. */

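The character device preserved above keeps the long-standing user-space contract: a process opens /dev/cpu_dma_latency, writes a 32-bit value, and the constraint stays in force until the file descriptor is closed (the release handler removes the request). A small user-space sketch of that contract; the 20 us value is arbitrary:

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

int main(void)
{
        int32_t us = 20;        /* requested CPU latency bound, in usecs */
        int fd = open("/dev/cpu_dma_latency", O_RDWR);

        if (fd < 0)
                return 1;
        if (write(fd, &us, sizeof(us)) != sizeof(us))
                return 1;

        pause();        /* constraint stays active while the fd is open */
        return 0;
}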
--- a/kernel/power/user.c
+++ b/kernel/power/user.c

@@ -409,21 +409,7 @@ snapshot_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
         switch (cmd) {
         case SNAPSHOT_GET_IMAGE_SIZE:
         case SNAPSHOT_AVAIL_SWAP_SIZE:
-        case SNAPSHOT_ALLOC_SWAP_PAGE: {
-                compat_loff_t __user *uoffset = compat_ptr(arg);
-                loff_t offset;
-                mm_segment_t old_fs;
-                int err;
-
-                old_fs = get_fs();
-                set_fs(KERNEL_DS);
-                err = snapshot_ioctl(file, cmd, (unsigned long) &offset);
-                set_fs(old_fs);
-                if (!err && put_user(offset, uoffset))
-                        err = -EFAULT;
-
-                return err;
-        }
+        case SNAPSHOT_ALLOC_SWAP_PAGE:
         case SNAPSHOT_CREATE_IMAGE:
                 return snapshot_ioctl(file, cmd,
                                       (unsigned long) compat_ptr(arg));

--- a/sound/core/pcm_native.c
+++ b/sound/core/pcm_native.c

@@ -748,11 +748,11 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
         snd_pcm_timer_resolution_change(substream);
         snd_pcm_set_state(substream, SNDRV_PCM_STATE_SETUP);

-        if (pm_qos_request_active(&substream->latency_pm_qos_req))
-                pm_qos_remove_request(&substream->latency_pm_qos_req);
+        if (cpu_latency_qos_request_active(&substream->latency_pm_qos_req))
+                cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
         if ((usecs = period_to_usecs(runtime)) >= 0)
-                pm_qos_add_request(&substream->latency_pm_qos_req,
-                                   PM_QOS_CPU_DMA_LATENCY, usecs);
+                cpu_latency_qos_add_request(&substream->latency_pm_qos_req,
+                                            usecs);
         return 0;
 _error:
         /* hardware might be unusable from this time,
@@ -821,7 +821,7 @@ static int snd_pcm_hw_free(struct snd_pcm_substream *substream)
                 return -EBADFD;
         result = do_hw_free(substream);
         snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
-        pm_qos_remove_request(&substream->latency_pm_qos_req);
+        cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
         return result;
 }
@@ -2599,8 +2599,8 @@ void snd_pcm_release_substream(struct snd_pcm_substream *substream)
                 substream->ops->close(substream);
                 substream->hw_opened = 0;
         }
-        if (pm_qos_request_active(&substream->latency_pm_qos_req))
-                pm_qos_remove_request(&substream->latency_pm_qos_req);
+        if (cpu_latency_qos_request_active(&substream->latency_pm_qos_req))
+                cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
         if (substream->pcm_release) {
                 substream->pcm_release(substream);
                 substream->pcm_release = NULL;

--- a/sound/soc/intel/atom/sst/sst.c
+++ b/sound/soc/intel/atom/sst/sst.c

@@ -325,8 +325,7 @@ int sst_context_init(struct intel_sst_drv *ctx)
                 ret = -ENOMEM;
                 goto do_free_mem;
         }
-        pm_qos_add_request(ctx->qos, PM_QOS_CPU_DMA_LATENCY,
-                           PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_add_request(ctx->qos, PM_QOS_DEFAULT_VALUE);

         dev_dbg(ctx->dev, "Requesting FW %s now...\n", ctx->firmware_name);
         ret = request_firmware_nowait(THIS_MODULE, true, ctx->firmware_name,
@@ -364,7 +363,7 @@ void sst_context_cleanup(struct intel_sst_drv *ctx)
         sysfs_remove_group(&ctx->dev->kobj, &sst_fw_version_attr_group);
         flush_scheduled_work();
         destroy_workqueue(ctx->post_msg_wq);
-        pm_qos_remove_request(ctx->qos);
+        cpu_latency_qos_remove_request(ctx->qos);
         kfree(ctx->fw_sg_list.src);
         kfree(ctx->fw_sg_list.dst);
         ctx->fw_sg_list.list_len = 0;

--- a/sound/soc/intel/atom/sst/sst_loader.c
+++ b/sound/soc/intel/atom/sst/sst_loader.c

@@ -412,7 +412,7 @@ int sst_load_fw(struct intel_sst_drv *sst_drv_ctx)
                 return -ENOMEM;

         /* Prevent C-states beyond C6 */
-        pm_qos_update_request(sst_drv_ctx->qos, 0);
+        cpu_latency_qos_update_request(sst_drv_ctx->qos, 0);

         sst_drv_ctx->sst_state = SST_FW_LOADING;
@@ -442,7 +442,7 @@ int sst_load_fw(struct intel_sst_drv *sst_drv_ctx)
 restore:
         /* Re-enable Deeper C-states beyond C6 */
-        pm_qos_update_request(sst_drv_ctx->qos, PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_update_request(sst_drv_ctx->qos, PM_QOS_DEFAULT_VALUE);
         sst_free_block(sst_drv_ctx, block);

         dev_dbg(sst_drv_ctx->dev, "fw load successful!!!\n");

--- a/sound/soc/ti/omap-dmic.c
+++ b/sound/soc/ti/omap-dmic.c

@@ -112,7 +112,7 @@ static void omap_dmic_dai_shutdown(struct snd_pcm_substream *substream,
         mutex_lock(&dmic->mutex);

-        pm_qos_remove_request(&dmic->pm_qos_req);
+        cpu_latency_qos_remove_request(&dmic->pm_qos_req);

         if (!dai->active)
                 dmic->active = 0;
@@ -230,8 +230,9 @@ static int omap_dmic_dai_prepare(struct snd_pcm_substream *substream,
         struct omap_dmic *dmic = snd_soc_dai_get_drvdata(dai);
         u32 ctrl;

-        if (pm_qos_request_active(&dmic->pm_qos_req))
-                pm_qos_update_request(&dmic->pm_qos_req, dmic->latency);
+        if (cpu_latency_qos_request_active(&dmic->pm_qos_req))
+                cpu_latency_qos_update_request(&dmic->pm_qos_req,
+                                               dmic->latency);

         /* Configure uplink threshold */
         omap_dmic_write(dmic, OMAP_DMIC_FIFO_CTRL_REG, dmic->threshold);

--- a/sound/soc/ti/omap-mcbsp.c
+++ b/sound/soc/ti/omap-mcbsp.c

@@ -836,10 +836,10 @@ static void omap_mcbsp_dai_shutdown(struct snd_pcm_substream *substream,
         int stream2 = tx ? SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK;

         if (mcbsp->latency[stream2])
-                pm_qos_update_request(&mcbsp->pm_qos_req,
+                cpu_latency_qos_update_request(&mcbsp->pm_qos_req,
                                       mcbsp->latency[stream2]);
         else if (mcbsp->latency[stream1])
-                pm_qos_remove_request(&mcbsp->pm_qos_req);
+                cpu_latency_qos_remove_request(&mcbsp->pm_qos_req);

         mcbsp->latency[stream1] = 0;
@@ -863,10 +863,10 @@ static int omap_mcbsp_dai_prepare(struct snd_pcm_substream *substream,
         if (!latency || mcbsp->latency[stream1] < latency)
                 latency = mcbsp->latency[stream1];

-        if (pm_qos_request_active(pm_qos_req))
-                pm_qos_update_request(pm_qos_req, latency);
+        if (cpu_latency_qos_request_active(pm_qos_req))
+                cpu_latency_qos_update_request(pm_qos_req, latency);
         else if (latency)
-                pm_qos_add_request(pm_qos_req, PM_QOS_CPU_DMA_LATENCY, latency);
+                cpu_latency_qos_add_request(pm_qos_req, latency);

         return 0;
 }
@@ -1434,8 +1434,8 @@ static int asoc_mcbsp_remove(struct platform_device *pdev)
         if (mcbsp->pdata->ops && mcbsp->pdata->ops->free)
                 mcbsp->pdata->ops->free(mcbsp->id);

-        if (pm_qos_request_active(&mcbsp->pm_qos_req))
-                pm_qos_remove_request(&mcbsp->pm_qos_req);
+        if (cpu_latency_qos_request_active(&mcbsp->pm_qos_req))
+                cpu_latency_qos_remove_request(&mcbsp->pm_qos_req);

         if (mcbsp->pdata->buffer_size)
                 sysfs_remove_group(&mcbsp->dev->kobj, &additional_attr_group);

--- a/sound/soc/ti/omap-mcpdm.c
+++ b/sound/soc/ti/omap-mcpdm.c

@@ -281,10 +281,10 @@ static void omap_mcpdm_dai_shutdown(struct snd_pcm_substream *substream,
         }

         if (mcpdm->latency[stream2])
-                pm_qos_update_request(&mcpdm->pm_qos_req,
+                cpu_latency_qos_update_request(&mcpdm->pm_qos_req,
                                       mcpdm->latency[stream2]);
         else if (mcpdm->latency[stream1])
-                pm_qos_remove_request(&mcpdm->pm_qos_req);
+                cpu_latency_qos_remove_request(&mcpdm->pm_qos_req);

         mcpdm->latency[stream1] = 0;
@@ -386,10 +386,10 @@ static int omap_mcpdm_prepare(struct snd_pcm_substream *substream,
         if (!latency || mcpdm->latency[stream1] < latency)
                 latency = mcpdm->latency[stream1];

-        if (pm_qos_request_active(pm_qos_req))
-                pm_qos_update_request(pm_qos_req, latency);
+        if (cpu_latency_qos_request_active(pm_qos_req))
+                cpu_latency_qos_update_request(pm_qos_req, latency);
         else if (latency)
-                pm_qos_add_request(pm_qos_req, PM_QOS_CPU_DMA_LATENCY, latency);
+                cpu_latency_qos_add_request(pm_qos_req, latency);

         if (!omap_mcpdm_active(mcpdm)) {
                 omap_mcpdm_start(mcpdm);
@@ -451,8 +451,8 @@ static int omap_mcpdm_remove(struct snd_soc_dai *dai)
         free_irq(mcpdm->irq, (void *)mcpdm);
         pm_runtime_disable(mcpdm->dev);

-        if (pm_qos_request_active(&mcpdm->pm_qos_req))
-                pm_qos_remove_request(&mcpdm->pm_qos_req);
+        if (cpu_latency_qos_request_active(&mcpdm->pm_qos_req))
+                cpu_latency_qos_remove_request(&mcpdm->pm_qos_req);

         return 0;
 }

--- a/tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py
+++ b/tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py

@@ -235,7 +235,6 @@ def plot_duration_cpu():
     output_png = 'all_cpu_durations.png'
     g_plot = common_all_gnuplot_settings(output_png)
     # autoscale this one, no set y range
-    g_plot('set ytics 0, 500')
     g_plot('set ylabel "Timer Duration (MilliSeconds)"')
     g_plot('set title "{} : cpu durations : {:%F %H:%M}"'.format(testname, datetime.now()))