Merge tag 'pm-5.1-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull more power management updates from Rafael Wysocki:
 "These are mostly fixes and cleanups on top of the previously merged
  power management material for 5.1-rc1 with one cpupower utility update
  that wasn't pushed earlier due to unfortunate timing.

  Specifics:

   - Fix the registration of new cpuidle governors, which was partially
     broken by mistake during the 5.0 development cycle (Rafael Wysocki).

   - Avoid integer overflows in the menu cpuidle governor by making it
     discard the overflowing data points upfront (Rafael Wysocki).

   - Fix minor mistake in the recent update of the iowait boost
     computation in the intel_pstate driver (Rafael Wysocki).

   - Drop incorrect __init annotation from one function in the pxa2xx
     cpufreq driver (Arnd Bergmann).

   - Fix the operating performance points (OPP) framework initialization
     for devices in multiple power domains if only one of them is
     scalable (Rajendra Nayak).

   - Fix a mistake in dev_pm_opp_set_rate() that causes it to skip
     updating the performance state when the new frequency is the same as
     the old one (Viresh Kumar).

   - Rework the cancellation of wakeup source timers to avoid potential
     issues with it and do some cleanups unlocked by that change (Viresh
     Kumar, Rafael Wysocki).

   - Clean up the code computing the active/suspended time of devices in
     the PM-runtime framework after recent changes (Ulf Hansson).

   - Make the power management infrastructure code use pr_fmt()
     consistently (Joe Perches).

   - Clean up the generic power domains (genpd) framework somewhat
     (Aisheng Dong).

   - Improve kerneldoc comments for two functions in the cpufreq core
     (Rafael Wysocki).

   - Fix typo in a PM QoS file description comment (Aisheng Dong).

   - Update the handling of CPU boost frequencies in the cpupower
     utility (Abhishek Goel)"

* tag 'pm-5.1-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  cpuidle: governor: Add new governors to cpuidle_governors again
  cpufreq: intel_pstate: Fix up iowait_boost computation
  PM / OPP: Update performance state when freq == old_freq
  PM / wakeup: Drop wakeup_source_drop()
  PM / wakeup: Rework wakeup source timer cancellation
  PM / domains: Remove one unnecessary blank line
  PM / Domains: Return early for all errors in _genpd_power_off()
  PM / Domains: Improve warn for multiple states but no governor
  OPP: Fix handling of multiple power domains
  PM / QoS: Fix typo in file description
  cpufreq: pxa2xx: remove incorrect __init annotation
  PM-runtime: Call pm_runtime_active|suspended_time() from sysfs
  PM-runtime: Consolidate code to get active/suspended time
  PM: Add and use pr_fmt()
  cpufreq: Improve kerneldoc comments for cpufreq_cpu_get/put()
  cpuidle: menu: Avoid overflows when computing variance
  tools/power/cpupower: Display boost frequency separately
commit 9352ca585b
Author: Linus Torvalds
Date:   2019-03-14 10:30:06 -07:00

21 changed files with 131 additions and 112 deletions

drivers/base/power/domain.c

@@ -6,6 +6,8 @@
* This file is released under the GPLv2.
*/
#define pr_fmt(fmt) "PM: " fmt
#include <linux/delay.h>
#include <linux/kernel.h>
#include <linux/io.h>
@@ -457,19 +459,19 @@ static int _genpd_power_off(struct generic_pm_domain *genpd, bool timed)
time_start = ktime_get();
ret = genpd->power_off(genpd);
if (ret == -EBUSY)
if (ret)
return ret;
elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
if (elapsed_ns <= genpd->states[state_idx].power_off_latency_ns)
return ret;
return 0;
genpd->states[state_idx].power_off_latency_ns = elapsed_ns;
genpd->max_off_time_changed = true;
pr_debug("%s: Power-%s latency exceeded, new value %lld ns\n",
genpd->name, "off", elapsed_ns);
return ret;
return 0;
}
/**
@@ -1657,8 +1659,8 @@ int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
genpd_lock_nested(genpd, SINGLE_DEPTH_NESTING);
if (!list_empty(&subdomain->master_links) || subdomain->device_count) {
pr_warn("%s: unable to remove subdomain %s\n", genpd->name,
subdomain->name);
pr_warn("%s: unable to remove subdomain %s\n",
genpd->name, subdomain->name);
ret = -EBUSY;
goto out;
}
@@ -1766,8 +1768,8 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
ret = genpd_set_default_power_state(genpd);
if (ret)
return ret;
} else if (!gov) {
pr_warn("%s : no governor for states\n", genpd->name);
} else if (!gov && genpd->state_count > 1) {
pr_warn("%s: no governor for states\n", genpd->name);
}
device_initialize(&genpd->dev);

drivers/base/power/domain_governor.c

@@ -128,7 +128,6 @@ static bool __default_power_down_ok(struct dev_pm_domain *pd,
off_on_time_ns = genpd->states[state].power_off_latency_ns +
genpd->states[state].power_on_latency_ns;
min_off_time_ns = -1;
/*
* Check if subdomains can be off for enough time.

drivers/base/power/main.c

@@ -17,6 +17,8 @@
* subsystem list maintains.
*/
#define pr_fmt(fmt) "PM: " fmt
#include <linux/device.h>
#include <linux/export.h>
#include <linux/mutex.h>
@@ -128,7 +130,7 @@ void device_pm_add(struct device *dev)
if (device_pm_not_required(dev))
return;
pr_debug("PM: Adding info for %s:%s\n",
pr_debug("Adding info for %s:%s\n",
dev->bus ? dev->bus->name : "No Bus", dev_name(dev));
device_pm_check_callbacks(dev);
mutex_lock(&dpm_list_mtx);
@@ -149,7 +151,7 @@ void device_pm_remove(struct device *dev)
if (device_pm_not_required(dev))
return;
pr_debug("PM: Removing info for %s:%s\n",
pr_debug("Removing info for %s:%s\n",
dev->bus ? dev->bus->name : "No Bus", dev_name(dev));
complete_all(&dev->power.completion);
mutex_lock(&dpm_list_mtx);
@@ -168,7 +170,7 @@ void device_pm_remove(struct device *dev)
*/
void device_pm_move_before(struct device *deva, struct device *devb)
{
pr_debug("PM: Moving %s:%s before %s:%s\n",
pr_debug("Moving %s:%s before %s:%s\n",
deva->bus ? deva->bus->name : "No Bus", dev_name(deva),
devb->bus ? devb->bus->name : "No Bus", dev_name(devb));
/* Delete deva from dpm_list and reinsert before devb. */
@@ -182,7 +184,7 @@ void device_pm_move_before(struct device *deva, struct device *devb)
*/
void device_pm_move_after(struct device *deva, struct device *devb)
{
pr_debug("PM: Moving %s:%s after %s:%s\n",
pr_debug("Moving %s:%s after %s:%s\n",
deva->bus ? deva->bus->name : "No Bus", dev_name(deva),
devb->bus ? devb->bus->name : "No Bus", dev_name(devb));
/* Delete deva from dpm_list and reinsert after devb. */
@@ -195,7 +197,7 @@ void device_pm_move_after(struct device *deva, struct device *devb)
*/
void device_pm_move_last(struct device *dev)
{
pr_debug("PM: Moving %s:%s to end of list\n",
pr_debug("Moving %s:%s to end of list\n",
dev->bus ? dev->bus->name : "No Bus", dev_name(dev));
list_move_tail(&dev->power.entry, &dpm_list);
}
@@ -418,7 +420,7 @@ static void pm_dev_dbg(struct device *dev, pm_message_t state, const char *info)
static void pm_dev_err(struct device *dev, pm_message_t state, const char *info,
int error)
{
printk(KERN_ERR "PM: Device %s failed to %s%s: error %d\n",
pr_err("Device %s failed to %s%s: error %d\n",
dev_name(dev), pm_verb(state.event), info, error);
}
@@ -2022,8 +2024,8 @@ int dpm_prepare(pm_message_t state)
error = 0;
continue;
}
printk(KERN_INFO "PM: Device %s not prepared "
"for power transition: code %d\n",
pr_info("Device %s not prepared for power transition: code %d\n",
dev_name(dev), error);
put_device(dev);
break;
@@ -2062,7 +2063,7 @@ EXPORT_SYMBOL_GPL(dpm_suspend_start);
void __suspend_report_result(const char *function, void *fn, int ret)
{
if (ret)
printk(KERN_ERR "%s(): %pF returns %d\n", function, fn, ret);
pr_err("%s(): %pF returns %d\n", function, fn, ret);
}
EXPORT_SYMBOL_GPL(__suspend_report_result);

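For readers unfamiliar with the pr_fmt() convention used in the hunks above: the kernel's pr_err()/pr_debug() wrappers expand pr_fmt(fmt) around the caller's format string at compile time, which is why the literal "PM: " prefixes can be dropped from the individual messages. A minimal userspace sketch of the same idea (the printf-based stand-ins below are illustrative, not the kernel's definitions):

    #include <stdio.h>

    /* Each source file may define its own prefix before the wrappers are used. */
    #define pr_fmt(fmt) "PM: " fmt

    /* Illustrative stand-ins for the kernel's pr_err()/pr_info() macros, which
     * prepend pr_fmt(fmt) to the format string via string-literal concatenation. */
    #define pr_err(fmt, ...)  fprintf(stderr, pr_fmt(fmt), ##__VA_ARGS__)
    #define pr_info(fmt, ...) printf(pr_fmt(fmt), ##__VA_ARGS__)

    int main(void)
    {
        /* Prints "PM: Device foo failed to suspend: error -16" */
        pr_err("Device %s failed to %s: error %d\n", "foo", "suspend", -16);
        return 0;
    }
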
drivers/base/power/power.h

@@ -21,6 +21,7 @@ static inline void pm_runtime_early_init(struct device *dev)
extern void pm_runtime_init(struct device *dev);
extern void pm_runtime_reinit(struct device *dev);
extern void pm_runtime_remove(struct device *dev);
extern u64 pm_runtime_active_time(struct device *dev);
#define WAKE_IRQ_DEDICATED_ALLOCATED BIT(0)
#define WAKE_IRQ_DEDICATED_MANAGED BIT(1)

drivers/base/power/qos.c

@@ -22,7 +22,7 @@
* per-device constraint data struct.
*
* Note about the per-device constraint data struct allocation:
* . The per-device constraints data struct ptr is tored into the device
* . The per-device constraints data struct ptr is stored into the device
* dev_pm_info.
* . To minimize the data usage by the per-device constraints, the data struct
* is only allocated at the first call to dev_pm_qos_add_request.

drivers/base/power/runtime.c

@@ -64,7 +64,7 @@ static int rpm_suspend(struct device *dev, int rpmflags);
* runtime_status field is updated, to account the time in the old state
* correctly.
*/
void update_pm_runtime_accounting(struct device *dev)
static void update_pm_runtime_accounting(struct device *dev)
{
u64 now, last, delta;
@@ -98,7 +98,7 @@ static void __update_runtime_status(struct device *dev, enum rpm_status status)
dev->power.runtime_status = status;
}
u64 pm_runtime_suspended_time(struct device *dev)
static u64 rpm_get_accounted_time(struct device *dev, bool suspended)
{
u64 time;
unsigned long flags;
@@ -106,12 +106,22 @@ u64 pm_runtime_suspended_time(struct device *dev)
spin_lock_irqsave(&dev->power.lock, flags);
update_pm_runtime_accounting(dev);
time = dev->power.suspended_time;
time = suspended ? dev->power.suspended_time : dev->power.active_time;
spin_unlock_irqrestore(&dev->power.lock, flags);
return time;
}
u64 pm_runtime_active_time(struct device *dev)
{
return rpm_get_accounted_time(dev, false);
}
u64 pm_runtime_suspended_time(struct device *dev)
{
return rpm_get_accounted_time(dev, true);
}
EXPORT_SYMBOL_GPL(pm_runtime_suspended_time);
/**

drivers/base/power/sysfs.c

@@ -125,13 +125,9 @@ static ssize_t runtime_active_time_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
int ret;
u64 tmp;
spin_lock_irq(&dev->power.lock);
update_pm_runtime_accounting(dev);
tmp = dev->power.active_time;
u64 tmp = pm_runtime_active_time(dev);
do_div(tmp, NSEC_PER_MSEC);
ret = sprintf(buf, "%llu\n", tmp);
spin_unlock_irq(&dev->power.lock);
return ret;
}
@@ -141,13 +137,9 @@ static ssize_t runtime_suspended_time_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
int ret;
u64 tmp;
spin_lock_irq(&dev->power.lock);
update_pm_runtime_accounting(dev);
tmp = dev->power.suspended_time;
u64 tmp = pm_runtime_suspended_time(dev);
do_div(tmp, NSEC_PER_MSEC);
ret = sprintf(buf, "%llu\n", tmp);
spin_unlock_irq(&dev->power.lock);
return ret;
}

drivers/base/power/trace.c

@@ -7,6 +7,8 @@
* devices may be working.
*/
#define pr_fmt(fmt) "PM: " fmt
#include <linux/pm-trace.h>
#include <linux/export.h>
#include <linux/rtc.h>

drivers/base/power/wakeup.c

@@ -6,6 +6,8 @@
* This file is released under the GPLv2.
*/
#define pr_fmt(fmt) "PM: " fmt
#include <linux/device.h>
#include <linux/slab.h>
#include <linux/sched/signal.h>
@@ -106,23 +108,6 @@ struct wakeup_source *wakeup_source_create(const char *name)
}
EXPORT_SYMBOL_GPL(wakeup_source_create);
/**
* wakeup_source_drop - Prepare a struct wakeup_source object for destruction.
* @ws: Wakeup source to prepare for destruction.
*
* Callers must ensure that __pm_stay_awake() or __pm_wakeup_event() will never
* be run in parallel with this function for the same wakeup source object.
*/
void wakeup_source_drop(struct wakeup_source *ws)
{
if (!ws)
return;
del_timer_sync(&ws->timer);
__pm_relax(ws);
}
EXPORT_SYMBOL_GPL(wakeup_source_drop);
/*
* Record wakeup_source statistics being deleted into a dummy wakeup_source.
*/
@@ -162,7 +147,7 @@ void wakeup_source_destroy(struct wakeup_source *ws)
if (!ws)
return;
wakeup_source_drop(ws);
__pm_relax(ws);
wakeup_source_record(ws);
kfree_const(ws->name);
kfree(ws);
@@ -205,6 +190,13 @@ void wakeup_source_remove(struct wakeup_source *ws)
list_del_rcu(&ws->entry);
raw_spin_unlock_irqrestore(&events_lock, flags);
synchronize_srcu(&wakeup_srcu);
del_timer_sync(&ws->timer);
/*
* Clear timer.function to make wakeup_source_not_registered() treat
* this wakeup source as not registered.
*/
ws->timer.function = NULL;
}
EXPORT_SYMBOL_GPL(wakeup_source_remove);
@@ -853,7 +845,7 @@ bool pm_wakeup_pending(void)
raw_spin_unlock_irqrestore(&events_lock, flags);
if (ret) {
pr_debug("PM: Wakeup pending, aborting suspend\n");
pr_debug("Wakeup pending, aborting suspend\n");
pm_print_active_wakeup_sources();
}

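The comment added to wakeup_source_remove() above refers to wakeup_source_not_registered(); that helper decides whether a wakeup source is registered by looking at the timer callback that wakeup_source_add() installs, so resetting timer.function to NULL makes the check fail again after removal. Roughly (an approximation for illustration, not a verbatim copy of the file):

    /* Approximate shape of the check in drivers/base/power/wakeup.c:
     * wakeup_source_add() arms ws->timer with pm_wakeup_timer_fn, and
     * wakeup_source_remove() now resets timer.function to NULL, so a
     * removed source is reported as not registered once more. */
    static bool wakeup_source_not_registered(struct wakeup_source *ws)
    {
        return ws->timer.function != pm_wakeup_timer_fn;
    }
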
drivers/cpufreq/cpufreq.c

@@ -206,17 +206,15 @@ unsigned int cpufreq_generic_get(unsigned int cpu)
EXPORT_SYMBOL_GPL(cpufreq_generic_get);
/**
* cpufreq_cpu_get: returns policy for a cpu and marks it busy.
* cpufreq_cpu_get - Return policy for a CPU and mark it as busy.
* @cpu: CPU to find the policy for.
*
* @cpu: cpu to find policy for.
* Call cpufreq_cpu_get_raw() to obtain a cpufreq policy for @cpu and increment
* the kobject reference counter of that policy. Return a valid policy on
* success or NULL on failure.
*
* This returns policy for 'cpu', returns NULL if it doesn't exist.
* It also increments the kobject reference count to mark it busy and so would
* require a corresponding call to cpufreq_cpu_put() to decrement it back.
* If corresponding call cpufreq_cpu_put() isn't made, the policy wouldn't be
* freed as that depends on the kobj count.
*
* Return: A valid policy on success, otherwise NULL on failure.
* The policy returned by this function has to be released with the help of
* cpufreq_cpu_put() to balance its kobject reference counter properly.
*/
struct cpufreq_policy *cpufreq_cpu_get(unsigned int cpu)
{
@@ -243,12 +241,8 @@ struct cpufreq_policy *cpufreq_cpu_get(unsigned int cpu)
EXPORT_SYMBOL_GPL(cpufreq_cpu_get);
/**
* cpufreq_cpu_put: Decrements the usage count of a policy
*
* @policy: policy earlier returned by cpufreq_cpu_get().
*
* This decrements the kobject reference count incremented earlier by calling
* cpufreq_cpu_get().
* cpufreq_cpu_put - Decrement kobject usage counter for cpufreq policy.
* @policy: cpufreq policy returned by cpufreq_cpu_get().
*/
void cpufreq_cpu_put(struct cpufreq_policy *policy)
{

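The rewritten kerneldoc spells out the usual get/put pairing; a short illustrative caller (the helper below is hypothetical, while cpufreq_cpu_get(), cpufreq_cpu_put() and the policy min/max fields are the existing cpufreq API):

    #include <linux/cpufreq.h>

    /* Hypothetical example: print a CPU's current frequency limits while
     * holding a reference to its policy, then drop the reference again. */
    static void example_show_limits(unsigned int cpu)
    {
        struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);

        if (!policy)
            return;     /* no policy registered for this CPU */

        pr_info("cpu%u: %u..%u kHz\n", cpu, policy->min, policy->max);

        /* Balance the kobject reference taken by cpufreq_cpu_get(). */
        cpufreq_cpu_put(policy);
    }
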
drivers/cpufreq/intel_pstate.c

@@ -1762,7 +1762,7 @@ static void intel_pstate_update_util(struct update_util_data *data, u64 time,
/* Start over if the CPU may have been idle. */
if (delta_ns > TICK_NSEC) {
cpu->iowait_boost = ONE_EIGHTH_FP;
} else if (cpu->iowait_boost) {
} else if (cpu->iowait_boost >= ONE_EIGHTH_FP) {
cpu->iowait_boost <<= 1;
if (cpu->iowait_boost > int_tofp(1))
cpu->iowait_boost = int_tofp(1);

drivers/cpufreq/pxa2xx-cpufreq.c

@@ -143,7 +143,7 @@ static int pxa_cpufreq_change_voltage(const struct pxa_freqs *pxa_freq)
return ret;
}
static void __init pxa_cpufreq_init_voltages(void)
static void pxa_cpufreq_init_voltages(void)
{
vcc_core = regulator_get(NULL, "vcc_core");
if (IS_ERR(vcc_core)) {
@@ -159,7 +159,7 @@ static int pxa_cpufreq_change_voltage(const struct pxa_freqs *pxa_freq)
return 0;
}
static void __init pxa_cpufreq_init_voltages(void) { }
static void pxa_cpufreq_init_voltages(void) { }
#endif
static void find_freq_tables(struct cpufreq_frequency_table **freq_table,

drivers/cpuidle/governor.c

@@ -89,6 +89,7 @@ int cpuidle_register_governor(struct cpuidle_governor *gov)
mutex_lock(&cpuidle_lock);
if (__cpuidle_find_governor(gov->name) == NULL) {
ret = 0;
list_add_tail(&gov->governor_list, &cpuidle_governors);
if (!cpuidle_curr_governor ||
!strncasecmp(param_governor, gov->name, CPUIDLE_NAME_LEN) ||
(cpuidle_curr_governor->rating < gov->rating &&

drivers/cpuidle/governors/menu.c

@@ -186,7 +186,7 @@ static unsigned int get_typical_interval(struct menu_device *data,
unsigned int min, max, thresh, avg;
uint64_t sum, variance;
thresh = UINT_MAX; /* Discard outliers above this value */
thresh = INT_MAX; /* Discard outliers above this value */
again:

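Why discarding samples above INT_MAX is enough: get_typical_interval() sums squared deviations from the average of the recent intervals (eight of them at the time of this change) in a 64-bit accumulator, and a single squared deviation close to UINT_MAX already exceeds what a signed 64-bit multiply can hold. A back-of-the-envelope check of the worst case, assuming the eight-sample buffer with half of the samples at zero and half at the maximum:

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    /* For eight samples bounded by M, the worst-case sum of squared
     * deviations is 8 * (M/2)^2 = 2 * M^2 (half at 0, half at M). */
    int main(void)
    {
        /* M = UINT_MAX needs about 2^65 -- it cannot fit in 64 bits. */
        long double worst_uint = 2.0L * (long double)UINT_MAX * UINT_MAX;
        /* M = INT_MAX stays just under 2^63 and fits comfortably. */
        long double worst_int = 2.0L * (long double)INT_MAX * INT_MAX;

        printf("bound with UINT_MAX: %.3Le (UINT64_MAX is %.3Le)\n",
               worst_uint, (long double)UINT64_MAX);
        printf("bound with INT_MAX:  %.3Le\n", worst_int);
        return 0;
    }
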
drivers/opp/core.c

@@ -760,7 +760,7 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
old_freq, freq);
/* Scaling up? Configure required OPPs before frequency */
if (freq > old_freq) {
if (freq >= old_freq) {
ret = _set_required_opps(dev, opp_table, opp);
if (ret)
goto put_opp;

drivers/opp/of.c

@@ -173,7 +173,7 @@ static void _opp_table_alloc_required_tables(struct opp_table *opp_table,
struct opp_table **required_opp_tables;
struct device **genpd_virt_devs = NULL;
struct device_node *required_np, *np;
int count, i;
int count, count_pd, i;
/* Traversing the first OPP node is all we need */
np = of_get_next_available_child(opp_np, NULL);
@@ -186,7 +186,19 @@ static void _opp_table_alloc_required_tables(struct opp_table *opp_table,
if (!count)
goto put_np;
if (count > 1) {
/*
* Check the number of power-domains to know if we need to deal
* with virtual devices. In some cases we have devices with multiple
* power domains but with only one of them being scalable, hence
* 'count' could be 1, but we still have to deal with multiple genpds
* and virtual devices.
*/
count_pd = of_count_phandle_with_args(dev->of_node, "power-domains",
"#power-domain-cells");
if (!count_pd)
goto put_np;
if (count_pd > 1) {
genpd_virt_devs = kcalloc(count, sizeof(*genpd_virt_devs),
GFP_KERNEL);
if (!genpd_virt_devs)

include/linux/pm.h

@@ -643,7 +643,6 @@ struct dev_pm_info {
struct dev_pm_qos *qos;
};
extern void update_pm_runtime_accounting(struct device *dev);
extern int dev_pm_get_subsys_data(struct device *dev);
extern void dev_pm_put_subsys_data(struct device *dev);

include/linux/pm_wakeup.h

@@ -96,7 +96,6 @@ static inline void device_set_wakeup_path(struct device *dev)
/* drivers/base/power/wakeup.c */
extern void wakeup_source_prepare(struct wakeup_source *ws, const char *name);
extern struct wakeup_source *wakeup_source_create(const char *name);
extern void wakeup_source_drop(struct wakeup_source *ws);
extern void wakeup_source_destroy(struct wakeup_source *ws);
extern void wakeup_source_add(struct wakeup_source *ws);
extern void wakeup_source_remove(struct wakeup_source *ws);
@@ -134,8 +133,6 @@ static inline struct wakeup_source *wakeup_source_create(const char *name)
return NULL;
}
static inline void wakeup_source_drop(struct wakeup_source *ws) {}
static inline void wakeup_source_destroy(struct wakeup_source *ws) {}
static inline void wakeup_source_add(struct wakeup_source *ws) {}
@@ -204,12 +201,6 @@ static inline void wakeup_source_init(struct wakeup_source *ws,
wakeup_source_add(ws);
}
static inline void wakeup_source_trash(struct wakeup_source *ws)
{
wakeup_source_remove(ws);
wakeup_source_drop(ws);
}
static inline void __pm_wakeup_event(struct wakeup_source *ws, unsigned int msec)
{
return pm_wakeup_ws_event(ws, msec, false);

tools/power/cpupower/lib/cpufreq.c

@@ -333,17 +333,20 @@ void cpufreq_put_available_governors(struct cpufreq_available_governors *any)
}
struct cpufreq_available_frequencies
*cpufreq_get_available_frequencies(unsigned int cpu)
struct cpufreq_frequencies
*cpufreq_get_frequencies(const char *type, unsigned int cpu)
{
struct cpufreq_available_frequencies *first = NULL;
struct cpufreq_available_frequencies *current = NULL;
struct cpufreq_frequencies *first = NULL;
struct cpufreq_frequencies *current = NULL;
char one_value[SYSFS_PATH_MAX];
char linebuf[MAX_LINE_LEN];
char fname[MAX_LINE_LEN];
unsigned int pos, i;
unsigned int len;
len = sysfs_cpufreq_read_file(cpu, "scaling_available_frequencies",
snprintf(fname, MAX_LINE_LEN, "scaling_%s_frequencies", type);
len = sysfs_cpufreq_read_file(cpu, fname,
linebuf, sizeof(linebuf));
if (len == 0)
return NULL;
@@ -389,9 +392,9 @@ struct cpufreq_available_frequencies
return NULL;
}
void cpufreq_put_available_frequencies(struct cpufreq_available_frequencies
*any) {
struct cpufreq_available_frequencies *tmp, *next;
void cpufreq_put_frequencies(struct cpufreq_frequencies *any)
{
struct cpufreq_frequencies *tmp, *next;
if (!any)
return;

tools/power/cpupower/lib/cpufreq.h

@@ -28,10 +28,10 @@ struct cpufreq_available_governors {
struct cpufreq_available_governors *first;
};
struct cpufreq_available_frequencies {
struct cpufreq_frequencies {
unsigned long frequency;
struct cpufreq_available_frequencies *next;
struct cpufreq_available_frequencies *first;
struct cpufreq_frequencies *next;
struct cpufreq_frequencies *first;
};
@@ -129,14 +129,14 @@ void cpufreq_put_available_governors(
*
* Only present on _some_ ->target() cpufreq drivers. For information purposes
* only. Please free allocated memory by calling
* cpufreq_put_available_frequencies after use.
* cpufreq_put_frequencies after use.
*/
struct cpufreq_available_frequencies
*cpufreq_get_available_frequencies(unsigned int cpu);
struct cpufreq_frequencies
*cpufreq_get_frequencies(const char *type, unsigned int cpu);
void cpufreq_put_available_frequencies(
struct cpufreq_available_frequencies *first);
void cpufreq_put_frequencies(
struct cpufreq_frequencies *first);
/* determine affected CPUs

tools/power/cpupower/utils/cpufreq-info.c

@@ -161,19 +161,12 @@ static void print_duration(unsigned long duration)
return;
}
/* --boost / -b */
static int get_boost_mode(unsigned int cpu)
static int get_boost_mode_x86(unsigned int cpu)
{
int support, active, b_states = 0, ret, pstate_no, i;
/* ToDo: Make this more global */
unsigned long pstates[MAX_HW_PSTATES] = {0,};
if (cpupower_cpu_info.vendor != X86_VENDOR_AMD &&
cpupower_cpu_info.vendor != X86_VENDOR_HYGON &&
cpupower_cpu_info.vendor != X86_VENDOR_INTEL)
return 0;
ret = cpufreq_has_boost_support(cpu, &support, &active, &b_states);
if (ret) {
printf(_("Error while evaluating Boost Capabilities"
@@ -248,6 +241,33 @@ static int get_boost_mode(unsigned int cpu)
return 0;
}
/* --boost / -b */
static int get_boost_mode(unsigned int cpu)
{
struct cpufreq_frequencies *freqs;
if (cpupower_cpu_info.vendor == X86_VENDOR_AMD ||
cpupower_cpu_info.vendor == X86_VENDOR_HYGON ||
cpupower_cpu_info.vendor == X86_VENDOR_INTEL)
return get_boost_mode_x86(cpu);
freqs = cpufreq_get_frequencies("boost", cpu);
if (freqs) {
printf(_(" boost frequency steps: "));
while (freqs->next) {
print_speed(freqs->frequency);
printf(", ");
freqs = freqs->next;
}
print_speed(freqs->frequency);
printf("\n");
cpufreq_put_frequencies(freqs);
}
return 0;
}
/* --freq / -f */
static int get_freq_kernel(unsigned int cpu, unsigned int human)
@@ -456,7 +476,7 @@ static int get_latency(unsigned int cpu, unsigned int human)
static void debug_output_one(unsigned int cpu)
{
struct cpufreq_available_frequencies *freqs;
struct cpufreq_frequencies *freqs;
get_driver(cpu);
get_related_cpus(cpu);
@@ -464,7 +484,7 @@ static void debug_output_one(unsigned int cpu)
get_latency(cpu, 1);
get_hardware_limits(cpu, 1);
freqs = cpufreq_get_available_frequencies(cpu);
freqs = cpufreq_get_frequencies("available", cpu);
if (freqs) {
printf(_(" available frequency steps: "));
while (freqs->next) {
@@ -474,7 +494,7 @@ static void debug_output_one(unsigned int cpu)
}
print_speed(freqs->frequency);
printf("\n");
cpufreq_put_available_frequencies(freqs);
cpufreq_put_frequencies(freqs);
}
get_available_governors(cpu);