Merge branch 'pm-cpufreq'
* pm-cpufreq: (40 commits)
  thermal: exynos: boost: Automatic enable/disable of BOOST feature (at Exynos4412)
  cpufreq: exynos4x12: Change L0 driver data to CPUFREQ_BOOST_FREQ
  Documentation: cpufreq / boost: Update BOOST documentation
  cpufreq: exynos: Extend Exynos cpufreq driver to support boost
  cpufreq / boost: Kconfig: Support for software-managed BOOST
  acpi-cpufreq: Adjust the code to use the common boost attribute
  cpufreq: Add boost frequency support in core
  intel_pstate: Add trace point to report internal state.
  cpufreq: introduce cpufreq_generic_get() routine
  ARM: SA1100: Create dummy clk_get_rate() to avoid build failures
  cpufreq: stats: create sysfs entries when cpufreq_stats is a module
  cpufreq: stats: free table and remove sysfs entry in a single routine
  cpufreq: stats: remove hotplug notifiers
  cpufreq: stats: handle cpufreq_unregister_driver() and suspend/resume properly
  cpufreq: speedstep: remove unused speedstep_get_state
  powernow-k6: reorder frequencies
  powernow-k6: correctly initialize default parameters
  powernow-k6: disable cache when changing frequency
  Documentation: add ABI entry for intel_pstate
  cpufreq: exynos: Convert exynos-cpufreq to platform driver
  ...
commit 7744064731
@@ -200,3 +200,27 @@ Description: address and size of the percpu note.
note of cpu#.

crash_notes_size: size of the note of cpu#.


What: /sys/devices/system/cpu/intel_pstate/max_perf_pct
/sys/devices/system/cpu/intel_pstate/min_perf_pct
/sys/devices/system/cpu/intel_pstate/no_turbo
Date: February 2013
Contact: linux-pm@vger.kernel.org
Description: Parameters for the Intel P-state driver

Logic for selecting the current P-state in Intel
Sandybridge+ processors. The three knobs control
limits for the P-state that will be requested by the
driver.

max_perf_pct: limits the maximum P state that will be requested by
the driver stated as a percentage of the available performance.

min_perf_pct: limits the minimum P state that will be requested by
the driver stated as a percentage of the available performance.

no_turbo: limits the driver to selecting P states below the turbo
frequency range.

More details can be found in Documentation/cpu-freq/intel-pstate.txt
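[Editor's note] The three knobs documented above are ordinary sysfs attributes and can be inspected with plain file I/O. The following user-space sketch is illustrative only and not part of the commit; it assumes nothing beyond the path given in the ABI entry:

#include <stdio.h>

/* Read one intel_pstate knob and return its value, or -1 on failure. */
static int read_pstate_knob(const char *name)
{
        char path[128];
        FILE *f;
        int val = -1;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/intel_pstate/%s", name);
        f = fopen(path, "r");
        if (!f)
                return -1;
        if (fscanf(f, "%d", &val) != 1)
                val = -1;
        fclose(f);
        return val;
}

int main(void)
{
        printf("max_perf_pct: %d\n", read_pstate_knob("max_perf_pct"));
        printf("min_perf_pct: %d\n", read_pstate_knob("min_perf_pct"));
        printf("no_turbo:     %d\n", read_pstate_knob("no_turbo"));
        return 0;
}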
@@ -17,8 +17,8 @@ Introduction
Some CPUs support a functionality to raise the operating frequency of
some cores in a multi-core package if certain conditions apply, mostly
if the whole chip is not fully utilized and below it's intended thermal
budget. This is done without operating system control by a combination
of hardware and firmware.
budget. The decision about boost disable/enable is made either at hardware
(e.g. x86) or software (e.g ARM).
On Intel CPUs this is called "Turbo Boost", AMD calls it "Turbo-Core",
in technical documentation "Core performance boost". In Linux we use
the term "boost" for convenience.

@@ -48,24 +48,24 @@ be desirable:
User controlled switch
----------------------

To allow the user to toggle the boosting functionality, the acpi-cpufreq
driver exports a sysfs knob to disable it. There is a file:
To allow the user to toggle the boosting functionality, the cpufreq core
driver exports a sysfs knob to enable or disable it. There is a file:
/sys/devices/system/cpu/cpufreq/boost
which can either read "0" (boosting disabled) or "1" (boosting enabled).
Reading the file is always supported, even if the processor does not
support boosting. In this case the file will be read-only and always
reads as "0". Explicitly changing the permissions and writing to that
file anyway will return EINVAL.
The file is exported only when cpufreq driver supports boosting.
Explicitly changing the permissions and writing to that file anyway will
return EINVAL.

On supported CPUs one can write either a "0" or a "1" into this file.
This will either disable the boost functionality on all cores in the
whole system (0) or will allow the hardware to boost at will (1).
whole system (0) or will allow the software or hardware to boost at will
(1).

Writing a "1" does not explicitly boost the system, but just allows the
CPU (and the firmware) to boost at their discretion. Some implementations
take external factors like the chip's temperature into account, so
boosting once does not necessarily mean that it will occur every time
even using the exact same software setup.
CPU to boost at their discretion. Some implementations take external
factors like the chip's temperature into account, so boosting once does
not necessarily mean that it will occur every time even using the exact
same software setup.


AMD legacy cpb switch
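[Editor's note] A small user-space sketch of the knob described above. It is not part of the commit and only relies on the /sys/devices/system/cpu/cpufreq/boost path and the 0/1 semantics documented in this file; as noted in the text, the write fails with EINVAL when the active cpufreq driver does not support boosting:

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Enable (1) or disable (0) boosting system-wide via the cpufreq core knob. */
static int set_boost(int enable)
{
        FILE *f = fopen("/sys/devices/system/cpu/cpufreq/boost", "w");
        int ret = 0;

        if (!f) {
                fprintf(stderr, "boost knob not available: %s\n", strerror(errno));
                return -1;
        }
        if (fprintf(f, "%d\n", enable ? 1 : 0) < 0)
                ret = -1;
        if (fclose(f) != 0)
                ret = -1;       /* EINVAL surfaces here if boost is unsupported */
        if (ret)
                fprintf(stderr, "failed to write boost knob: %s\n", strerror(errno));
        return ret;
}

int main(void)
{
        return set_boost(1) ? 1 : 0;
}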
Documentation/cpu-freq/intel-pstate.txt (new file, 40 lines added)

@@ -0,0 +1,40 @@
Intel P-state driver
--------------------

This driver implements a scaling driver with an internal governor for
Intel Core processors. The driver follows the same model as the
Transmeta scaling driver (longrun.c) and implements the setpolicy()
instead of target(). Scaling drivers that implement setpolicy() are
assumed to implement internal governors by the cpufreq core. All the
logic for selecting the current P state is contained within the
driver; no external governor is used by the cpufreq core.

Intel SandyBridge+ processors are supported.

New sysfs files for controlling P state selection have been added to
/sys/devices/system/cpu/intel_pstate/

max_perf_pct: limits the maximum P state that will be requested by
the driver stated as a percentage of the available performance.

min_perf_pct: limits the minimum P state that will be requested by
the driver stated as a percentage of the available performance.

no_turbo: limits the driver to selecting P states below the turbo
frequency range.

For contemporary Intel processors, the frequency is controlled by the
processor itself and the P-states exposed to software are related to
performance levels. The idea that frequency can be set to a single
frequency is fiction for Intel Core processors. Even if the scaling
driver selects a single P state the actual frequency the processor
will run at is selected by the processor itself.

New debugfs files have also been added to /sys/kernel/debug/pstate_snb/

deadband
d_gain_pct
i_gain_pct
p_gain_pct
sample_rate_ms
setpoint
@@ -303,6 +303,11 @@ void __init exynos_cpuidle_init(void)
platform_device_register(&exynos_cpuidle);
}

void __init exynos_cpufreq_init(void)
{
platform_device_register_simple("exynos-cpufreq", -1, NULL, 0);
}

void __init exynos_init_late(void)
{
if (of_machine_is_compatible("samsung,exynos5440"))

@@ -22,6 +22,7 @@ void exynos_init_io(void);
void exynos4_restart(enum reboot_mode mode, const char *cmd);
void exynos5_restart(enum reboot_mode mode, const char *cmd);
void exynos_cpuidle_init(void);
void exynos_cpufreq_init(void);
void exynos_init_late(void);

void exynos_firmware_init(void);

@@ -22,6 +22,7 @@
static void __init exynos4_dt_machine_init(void)
{
exynos_cpuidle_init();
exynos_cpufreq_init();

of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
}

@@ -44,6 +44,7 @@ static void __init exynos5_dt_machine_init(void)
}

exynos_cpuidle_init();
exynos_cpufreq_init();

of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
}

@@ -33,6 +33,13 @@ struct clk clk_##_name = { \

static DEFINE_SPINLOCK(clocks_lock);

/* Dummy clk routine to build generic kernel parts that may be using them */
unsigned long clk_get_rate(struct clk *clk)
{
return 0;
}
EXPORT_SYMBOL(clk_get_rate);

static void clk_gpio27_enable(struct clk *clk)
{
/*
@@ -20,6 +20,10 @@ if CPU_FREQ
config CPU_FREQ_GOV_COMMON
bool

config CPU_FREQ_BOOST_SW
bool
depends on THERMAL

config CPU_FREQ_STAT
tristate "CPU frequency translation statistics"
default y

@@ -181,7 +185,8 @@ config CPU_FREQ_GOV_CONSERVATIVE

config GENERIC_CPUFREQ_CPU0
tristate "Generic CPU0 cpufreq driver"
depends on HAVE_CLK && REGULATOR && PM_OPP && OF
depends on HAVE_CLK && REGULATOR && OF
select PM_OPP
help
This adds a generic cpufreq driver for CPU0 frequency management.
It supports both uniprocessor (UP) and symmetric multiprocessor (SMP)

@@ -4,7 +4,8 @@

config ARM_BIG_LITTLE_CPUFREQ
tristate "Generic ARM big LITTLE CPUfreq driver"
depends on ARM_CPU_TOPOLOGY && PM_OPP && HAVE_CLK
depends on ARM && BIG_LITTLE && ARM_CPU_TOPOLOGY && HAVE_CLK
select PM_OPP
help
This enables the Generic CPUfreq driver for ARM big.LITTLE platforms.

@@ -54,7 +55,8 @@ config ARM_EXYNOS5250_CPUFREQ
config ARM_EXYNOS5440_CPUFREQ
bool "SAMSUNG EXYNOS5440"
depends on SOC_EXYNOS5440
depends on HAVE_CLK && PM_OPP && OF
depends on HAVE_CLK && OF
select PM_OPP
default y
help
This adds the CPUFreq driver for Samsung EXYNOS5440

@@ -64,6 +66,21 @@ config ARM_EXYNOS5440_CPUFREQ

If in doubt, say N.

config ARM_EXYNOS_CPU_FREQ_BOOST_SW
bool "EXYNOS Frequency Overclocking - Software"
depends on ARM_EXYNOS_CPUFREQ
select CPU_FREQ_BOOST_SW
select EXYNOS_THERMAL
help
This driver supports software managed overclocking (BOOST).
It allows usage of special frequencies for Samsung Exynos
processors if thermal conditions are appropriate.

It reguires, for safe operation, thermal framework with properly
defined trip points.

If in doubt, say N.

config ARM_HIGHBANK_CPUFREQ
tristate "Calxeda Highbank-based"
depends on ARCH_HIGHBANK

@@ -79,11 +96,11 @@ config ARM_HIGHBANK_CPUFREQ
If in doubt, say N.

config ARM_IMX6Q_CPUFREQ
tristate "Freescale i.MX6Q cpufreq support"
depends on SOC_IMX6Q
tristate "Freescale i.MX6 cpufreq support"
depends on ARCH_MXC
depends on REGULATOR_ANATOP
help
This adds cpufreq driver support for Freescale i.MX6Q SOC.
This adds cpufreq driver support for Freescale i.MX6 series SoCs.

If in doubt, say N.
@@ -80,7 +80,6 @@ static struct acpi_processor_performance __percpu *acpi_perf_data;
static struct cpufreq_driver acpi_cpufreq_driver;

static unsigned int acpi_pstate_strict;
static bool boost_enabled, boost_supported;
static struct msr __percpu *msrs;

static bool boost_state(unsigned int cpu)

@@ -133,49 +132,16 @@ static void boost_set_msrs(bool enable, const struct cpumask *cpumask)
wrmsr_on_cpus(cpumask, msr_addr, msrs);
}

static ssize_t _store_boost(const char *buf, size_t count)
static int _store_boost(int val)
{
int ret;
unsigned long val = 0;

if (!boost_supported)
return -EINVAL;

ret = kstrtoul(buf, 10, &val);
if (ret || (val > 1))
return -EINVAL;

if ((val && boost_enabled) || (!val && !boost_enabled))
return count;

get_online_cpus();

boost_set_msrs(val, cpu_online_mask);

put_online_cpus();

boost_enabled = val;
pr_debug("Core Boosting %sabled.\n", val ? "en" : "dis");

return count;
return 0;
}

static ssize_t store_global_boost(struct kobject *kobj, struct attribute *attr,
const char *buf, size_t count)
{
return _store_boost(buf, count);
}

static ssize_t show_global_boost(struct kobject *kobj,
struct attribute *attr, char *buf)
{
return sprintf(buf, "%u\n", boost_enabled);
}

static struct global_attr global_boost = __ATTR(boost, 0644,
show_global_boost,
store_global_boost);

static ssize_t show_freqdomain_cpus(struct cpufreq_policy *policy, char *buf)
{
struct acpi_cpufreq_data *data = per_cpu(acfreq_data, policy->cpu);

@@ -186,15 +152,32 @@ static ssize_t show_freqdomain_cpus(struct cpufreq_policy *policy, char *buf)
cpufreq_freq_attr_ro(freqdomain_cpus);

#ifdef CONFIG_X86_ACPI_CPUFREQ_CPB
static ssize_t store_boost(const char *buf, size_t count)
{
int ret;
unsigned long val = 0;

if (!acpi_cpufreq_driver.boost_supported)
return -EINVAL;

ret = kstrtoul(buf, 10, &val);
if (ret || (val > 1))
return -EINVAL;

_store_boost((int) val);

return count;
}

static ssize_t store_cpb(struct cpufreq_policy *policy, const char *buf,
size_t count)
{
return _store_boost(buf, count);
return store_boost(buf, count);
}

static ssize_t show_cpb(struct cpufreq_policy *policy, char *buf)
{
return sprintf(buf, "%u\n", boost_enabled);
return sprintf(buf, "%u\n", acpi_cpufreq_driver.boost_enabled);
}

cpufreq_freq_attr_rw(cpb);

@@ -554,7 +537,7 @@ static int boost_notify(struct notifier_block *nb, unsigned long action,
switch (action) {
case CPU_UP_PREPARE:
case CPU_UP_PREPARE_FROZEN:
boost_set_msrs(boost_enabled, cpumask);
boost_set_msrs(acpi_cpufreq_driver.boost_enabled, cpumask);
break;

case CPU_DOWN_PREPARE:

@@ -911,6 +894,7 @@ static struct cpufreq_driver acpi_cpufreq_driver = {
.resume = acpi_cpufreq_resume,
.name = "acpi-cpufreq",
.attr = acpi_cpufreq_attr,
.set_boost = _store_boost,
};

static void __init acpi_cpufreq_boost_init(void)

@@ -921,33 +905,22 @@ static void __init acpi_cpufreq_boost_init(void)
if (!msrs)
return;

boost_supported = true;
boost_enabled = boost_state(0);

acpi_cpufreq_driver.boost_supported = true;
acpi_cpufreq_driver.boost_enabled = boost_state(0);
get_online_cpus();

/* Force all MSRs to the same value */
boost_set_msrs(boost_enabled, cpu_online_mask);
boost_set_msrs(acpi_cpufreq_driver.boost_enabled,
cpu_online_mask);

register_cpu_notifier(&boost_nb);

put_online_cpus();
} else
global_boost.attr.mode = 0444;

/* We create the boost file in any case, though for systems without
* hardware support it will be read-only and hardwired to return 0.
*/
if (cpufreq_sysfs_create_file(&(global_boost.attr)))
pr_warn(PFX "could not register global boost sysfs file\n");
else
pr_debug("registered global boost sysfs file\n");
}
}

static void __exit acpi_cpufreq_boost_exit(void)
{
cpufreq_sysfs_remove_file(&(global_boost.attr));

if (msrs) {
unregister_cpu_notifier(&boost_nb);

@@ -993,12 +966,11 @@ static int __init acpi_cpufreq_init(void)
*iter = &cpb;
}
#endif
acpi_cpufreq_boost_init();

ret = cpufreq_register_driver(&acpi_cpufreq_driver);
if (ret)
free_acpi_perf_data();
else
acpi_cpufreq_boost_init();

return ret;
}
@@ -488,7 +488,8 @@ static int bL_cpufreq_exit(struct cpufreq_policy *policy)
static struct cpufreq_driver bL_cpufreq_driver = {
.name = "arm-big-little",
.flags = CPUFREQ_STICKY |
CPUFREQ_HAVE_GOVERNOR_PER_POLICY,
CPUFREQ_HAVE_GOVERNOR_PER_POLICY |
CPUFREQ_NEED_INITIAL_FREQ_CHECK,
.verify = cpufreq_generic_frequency_table_verify,
.target_index = bL_cpufreq_set_target,
.get = bL_cpufreq_get_rate,
@@ -21,17 +21,8 @@
#include <linux/export.h>
#include <linux/slab.h>

static struct clk *cpuclk;
static struct cpufreq_frequency_table *freq_table;

static unsigned int at32_get_speed(unsigned int cpu)
{
/* No SMP support */
if (cpu)
return 0;
return (unsigned int)((clk_get_rate(cpuclk) + 500) / 1000);
}

static unsigned int ref_freq;
static unsigned long loops_per_jiffy_ref;

@@ -39,7 +30,7 @@ static int at32_set_target(struct cpufreq_policy *policy, unsigned int index)
{
unsigned int old_freq, new_freq;

old_freq = at32_get_speed(0);
old_freq = policy->cur;
new_freq = freq_table[index].frequency;

if (!ref_freq) {

@@ -50,7 +41,7 @@ static int at32_set_target(struct cpufreq_policy *policy, unsigned int index)
if (old_freq < new_freq)
boot_cpu_data.loops_per_jiffy = cpufreq_scale(
loops_per_jiffy_ref, ref_freq, new_freq);
clk_set_rate(cpuclk, new_freq * 1000);
clk_set_rate(policy->clk, new_freq * 1000);
if (new_freq < old_freq)
boot_cpu_data.loops_per_jiffy = cpufreq_scale(
loops_per_jiffy_ref, ref_freq, new_freq);

@@ -61,6 +52,7 @@ static int at32_set_target(struct cpufreq_policy *policy, unsigned int index)
static int at32_cpufreq_driver_init(struct cpufreq_policy *policy)
{
unsigned int frequency, rate, min_freq;
static struct clk *cpuclk;
int retval, steps, i;

if (policy->cpu != 0)

@@ -103,6 +95,7 @@ static int at32_cpufreq_driver_init(struct cpufreq_policy *policy)
frequency /= 2;
}

policy->clk = cpuclk;
freq_table[steps - 1].frequency = CPUFREQ_TABLE_END;

retval = cpufreq_table_validate_and_show(policy, freq_table);

@@ -123,7 +116,7 @@ static struct cpufreq_driver at32_driver = {
.init = at32_cpufreq_driver_init,
.verify = cpufreq_generic_frequency_table_verify,
.target_index = at32_set_target,
.get = at32_get_speed,
.get = cpufreq_generic_get,
.flags = CPUFREQ_STICKY,
};
@@ -30,11 +30,6 @@ static struct clk *cpu_clk;
static struct regulator *cpu_reg;
static struct cpufreq_frequency_table *freq_table;

static unsigned int cpu0_get_speed(unsigned int cpu)
{
return clk_get_rate(cpu_clk) / 1000;
}

static int cpu0_set_target(struct cpufreq_policy *policy, unsigned int index)
{
struct dev_pm_opp *opp;

@@ -44,7 +39,7 @@ static int cpu0_set_target(struct cpufreq_policy *policy, unsigned int index)
int ret;

freq_Hz = clk_round_rate(cpu_clk, freq_table[index].frequency * 1000);
if (freq_Hz < 0)
if (freq_Hz <= 0)
freq_Hz = freq_table[index].frequency * 1000;

freq_exact = freq_Hz;

@@ -100,6 +95,7 @@ static int cpu0_set_target(struct cpufreq_policy *policy, unsigned int index)

static int cpu0_cpufreq_init(struct cpufreq_policy *policy)
{
policy->clk = cpu_clk;
return cpufreq_generic_init(policy, freq_table, transition_latency);
}

@@ -107,7 +103,7 @@ static struct cpufreq_driver cpu0_cpufreq_driver = {
.flags = CPUFREQ_STICKY,
.verify = cpufreq_generic_frequency_table_verify,
.target_index = cpu0_set_target,
.get = cpu0_get_speed,
.get = cpufreq_generic_get,
.init = cpu0_cpufreq_init,
.exit = cpufreq_generic_exit,
.name = "generic_cpu0",
@@ -39,7 +39,7 @@ static struct cpufreq_driver *cpufreq_driver;
static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data);
static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data_fallback);
static DEFINE_RWLOCK(cpufreq_driver_lock);
static DEFINE_MUTEX(cpufreq_governor_lock);
DEFINE_MUTEX(cpufreq_governor_lock);
static LIST_HEAD(cpufreq_policy_list);

#ifdef CONFIG_HOTPLUG_CPU

@@ -176,6 +176,20 @@ int cpufreq_generic_init(struct cpufreq_policy *policy,
}
EXPORT_SYMBOL_GPL(cpufreq_generic_init);

unsigned int cpufreq_generic_get(unsigned int cpu)
{
struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu);

if (!policy || IS_ERR(policy->clk)) {
pr_err("%s: No %s associated to cpu: %d\n", __func__,
policy ? "clk" : "policy", cpu);
return 0;
}

return clk_get_rate(policy->clk) / 1000;
}
EXPORT_SYMBOL_GPL(cpufreq_generic_get);

struct cpufreq_policy *cpufreq_cpu_get(unsigned int cpu)
{
struct cpufreq_policy *policy = NULL;
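[Editor's note] The new helper above replaces the near-identical ->get() callbacks that individual drivers used to carry (the at32ap, cpu0, davinci, dbx500 and exynos conversions further down in this merge all switch to it). The pattern is simply to publish the CPU clock in policy->clk from ->init(). Below is a hedged sketch of such a driver, not code from the commit; the my_* names are illustrative placeholders:

#include <linux/clk.h>
#include <linux/cpufreq.h>

static struct clk *my_clk;                              /* obtained via clk_get() during probe */
static struct cpufreq_frequency_table *my_freq_table;   /* built during probe */

static int my_target_index(struct cpufreq_policy *policy, unsigned int index)
{
        /* A real driver would also adjust regulators, dividers, etc. here. */
        return clk_set_rate(policy->clk, my_freq_table[index].frequency * 1000);
}

static int my_cpufreq_init(struct cpufreq_policy *policy)
{
        policy->clk = my_clk;   /* this is what lets cpufreq_generic_get() work */
        return cpufreq_generic_init(policy, my_freq_table, 100000); /* latency in ns */
}

static struct cpufreq_driver my_cpufreq_driver = {
        .flags          = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
        .verify         = cpufreq_generic_frequency_table_verify,
        .target_index   = my_target_index,
        .get            = cpufreq_generic_get, /* clk_get_rate(policy->clk) / 1000 */
        .init           = my_cpufreq_init,
        .exit           = cpufreq_generic_exit,
        .name           = "my-cpufreq",
        .attr           = cpufreq_generic_attr,
};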
@@ -320,10 +334,51 @@ void cpufreq_notify_transition(struct cpufreq_policy *policy,
}
EXPORT_SYMBOL_GPL(cpufreq_notify_transition);

/* Do post notifications when there are chances that transition has failed */
void cpufreq_notify_post_transition(struct cpufreq_policy *policy,
struct cpufreq_freqs *freqs, int transition_failed)
{
cpufreq_notify_transition(policy, freqs, CPUFREQ_POSTCHANGE);
if (!transition_failed)
return;

swap(freqs->old, freqs->new);
cpufreq_notify_transition(policy, freqs, CPUFREQ_PRECHANGE);
cpufreq_notify_transition(policy, freqs, CPUFREQ_POSTCHANGE);
}
EXPORT_SYMBOL_GPL(cpufreq_notify_post_transition);


/*********************************************************************
* SYSFS INTERFACE *
*********************************************************************/
ssize_t show_boost(struct kobject *kobj,
struct attribute *attr, char *buf)
{
return sprintf(buf, "%d\n", cpufreq_driver->boost_enabled);
}

static ssize_t store_boost(struct kobject *kobj, struct attribute *attr,
const char *buf, size_t count)
{
int ret, enable;

ret = sscanf(buf, "%d", &enable);
if (ret != 1 || enable < 0 || enable > 1)
return -EINVAL;

if (cpufreq_boost_trigger_state(enable)) {
pr_err("%s: Cannot %s BOOST!\n", __func__,
enable ? "enable" : "disable");
return -EINVAL;
}

pr_debug("%s: cpufreq BOOST %s\n", __func__,
enable ? "enabled" : "disabled");

return count;
}
define_one_global_rw(boost);

static struct cpufreq_governor *__find_governor(const char *str_governor)
{
@@ -929,6 +984,9 @@ static void cpufreq_policy_put_kobj(struct cpufreq_policy *policy)
struct kobject *kobj;
struct completion *cmp;

blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
CPUFREQ_REMOVE_POLICY, policy);

down_read(&policy->rwsem);
kobj = &policy->kobj;
cmp = &policy->kobj_unregister;

@@ -1051,6 +1109,11 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
goto err_set_policy_cpu;
}

write_lock_irqsave(&cpufreq_driver_lock, flags);
for_each_cpu(j, policy->cpus)
per_cpu(cpufreq_cpu_data, j) = policy;
write_unlock_irqrestore(&cpufreq_driver_lock, flags);

if (cpufreq_driver->get) {
policy->cur = cpufreq_driver->get(policy->cpu);
if (!policy->cur) {

@@ -1059,6 +1122,46 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
}
}

/*
* Sometimes boot loaders set CPU frequency to a value outside of
* frequency table present with cpufreq core. In such cases CPU might be
* unstable if it has to run on that frequency for long duration of time
* and so its better to set it to a frequency which is specified in
* freq-table. This also makes cpufreq stats inconsistent as
* cpufreq-stats would fail to register because current frequency of CPU
* isn't found in freq-table.
*
* Because we don't want this change to effect boot process badly, we go
* for the next freq which is >= policy->cur ('cur' must be set by now,
* otherwise we will end up setting freq to lowest of the table as 'cur'
* is initialized to zero).
*
* We are passing target-freq as "policy->cur - 1" otherwise
* __cpufreq_driver_target() would simply fail, as policy->cur will be
* equal to target-freq.
*/
if ((cpufreq_driver->flags & CPUFREQ_NEED_INITIAL_FREQ_CHECK)
&& has_target()) {
/* Are we running at unknown frequency ? */
ret = cpufreq_frequency_table_get_index(policy, policy->cur);
if (ret == -EINVAL) {
/* Warn user and fix it */
pr_warn("%s: CPU%d: Running at unlisted freq: %u KHz\n",
__func__, policy->cpu, policy->cur);
ret = __cpufreq_driver_target(policy, policy->cur - 1,
CPUFREQ_RELATION_L);

/*
* Reaching here after boot in a few seconds may not
* mean that system will remain stable at "unknown"
* frequency for longer duration. Hence, a BUG_ON().
*/
BUG_ON(ret);
pr_warn("%s: CPU%d: Unlisted initial frequency changed to: %u KHz\n",
__func__, policy->cpu, policy->cur);
}
}

/* related cpus should atleast have policy->cpus */
cpumask_or(policy->related_cpus, policy->related_cpus, policy->cpus);

@@ -1085,15 +1188,12 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
}
#endif

write_lock_irqsave(&cpufreq_driver_lock, flags);
for_each_cpu(j, policy->cpus)
per_cpu(cpufreq_cpu_data, j) = policy;
write_unlock_irqrestore(&cpufreq_driver_lock, flags);

if (!frozen) {
ret = cpufreq_add_dev_interface(policy, dev);
if (ret)
goto err_out_unregister;
blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
CPUFREQ_CREATE_POLICY, policy);
}

write_lock_irqsave(&cpufreq_driver_lock, flags);

@@ -1115,12 +1215,12 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
return 0;

err_out_unregister:
err_get_freq:
write_lock_irqsave(&cpufreq_driver_lock, flags);
for_each_cpu(j, policy->cpus)
per_cpu(cpufreq_cpu_data, j) = NULL;
write_unlock_irqrestore(&cpufreq_driver_lock, flags);

err_get_freq:
if (cpufreq_driver->exit)
cpufreq_driver->exit(policy);
err_set_policy_cpu:

@@ -1725,17 +1825,8 @@ int __cpufreq_driver_target(struct cpufreq_policy *policy,
pr_err("%s: Failed to change cpu frequency: %d\n",
__func__, retval);

if (notify) {
/*
* Notify with old freq in case we failed to change
* frequency
*/
if (retval)
freqs.new = freqs.old;

cpufreq_notify_transition(policy, &freqs,
CPUFREQ_POSTCHANGE);
}
if (notify)
cpufreq_notify_post_transition(policy, &freqs, retval);
}

out:
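[Editor's note] The hunk above folds the old "re-notify with the old frequency on failure" dance into the cpufreq_notify_post_transition() helper introduced earlier in this diff. For a driver that issues its own notifications, the call pattern looks roughly like the sketch below; it is not from the commit, and my_program_hardware() is a made-up placeholder for the actual frequency switch:

#include <linux/cpufreq.h>

/* Placeholder for the hardware-specific part of the switch. */
static int my_program_hardware(unsigned int new_khz)
{
        return 0;
}

static int my_switch_freq(struct cpufreq_policy *policy, unsigned int new_khz)
{
        struct cpufreq_freqs freqs = {
                .old = policy->cur,
                .new = new_khz,
        };
        int ret;

        cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
        ret = my_program_hardware(new_khz);
        /* On failure this swaps old/new and re-notifies, keeping listeners consistent. */
        cpufreq_notify_post_transition(policy, &freqs, ret);
        return ret;
}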
@@ -2119,6 +2210,73 @@ static struct notifier_block __refdata cpufreq_cpu_notifier = {
.notifier_call = cpufreq_cpu_callback,
};

/*********************************************************************
* BOOST *
*********************************************************************/
static int cpufreq_boost_set_sw(int state)
{
struct cpufreq_frequency_table *freq_table;
struct cpufreq_policy *policy;
int ret = -EINVAL;

list_for_each_entry(policy, &cpufreq_policy_list, policy_list) {
freq_table = cpufreq_frequency_get_table(policy->cpu);
if (freq_table) {
ret = cpufreq_frequency_table_cpuinfo(policy,
freq_table);
if (ret) {
pr_err("%s: Policy frequency update failed\n",
__func__);
break;
}
policy->user_policy.max = policy->max;
__cpufreq_governor(policy, CPUFREQ_GOV_LIMITS);
}
}

return ret;
}

int cpufreq_boost_trigger_state(int state)
{
unsigned long flags;
int ret = 0;

if (cpufreq_driver->boost_enabled == state)
return 0;

write_lock_irqsave(&cpufreq_driver_lock, flags);
cpufreq_driver->boost_enabled = state;
write_unlock_irqrestore(&cpufreq_driver_lock, flags);

ret = cpufreq_driver->set_boost(state);
if (ret) {
write_lock_irqsave(&cpufreq_driver_lock, flags);
cpufreq_driver->boost_enabled = !state;
write_unlock_irqrestore(&cpufreq_driver_lock, flags);

pr_err("%s: Cannot %s BOOST\n", __func__,
state ? "enable" : "disable");
}

return ret;
}

int cpufreq_boost_supported(void)
{
if (likely(cpufreq_driver))
return cpufreq_driver->boost_supported;

return 0;
}
EXPORT_SYMBOL_GPL(cpufreq_boost_supported);

int cpufreq_boost_enabled(void)
{
return cpufreq_driver->boost_enabled;
}
EXPORT_SYMBOL_GPL(cpufreq_boost_enabled);

/*********************************************************************
* REGISTER / UNREGISTER CPUFREQ DRIVER *
*********************************************************************/
@@ -2159,9 +2317,25 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
cpufreq_driver = driver_data;
write_unlock_irqrestore(&cpufreq_driver_lock, flags);

if (cpufreq_boost_supported()) {
/*
* Check if driver provides function to enable boost -
* if not, use cpufreq_boost_set_sw as default
*/
if (!cpufreq_driver->set_boost)
cpufreq_driver->set_boost = cpufreq_boost_set_sw;

ret = cpufreq_sysfs_create_file(&boost.attr);
if (ret) {
pr_err("%s: cannot register global BOOST sysfs file\n",
__func__);
goto err_null_driver;
}
}

ret = subsys_interface_register(&cpufreq_interface);
if (ret)
goto err_null_driver;
goto err_boost_unreg;

if (!(cpufreq_driver->flags & CPUFREQ_STICKY)) {
int i;

@@ -2188,6 +2362,9 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
return 0;
err_if_unreg:
subsys_interface_unregister(&cpufreq_interface);
err_boost_unreg:
if (cpufreq_boost_supported())
cpufreq_sysfs_remove_file(&boost.attr);
err_null_driver:
write_lock_irqsave(&cpufreq_driver_lock, flags);
cpufreq_driver = NULL;

@@ -2214,6 +2391,9 @@ int cpufreq_unregister_driver(struct cpufreq_driver *driver)
pr_debug("unregistering driver %s\n", driver->name);

subsys_interface_unregister(&cpufreq_interface);
if (cpufreq_boost_supported())
cpufreq_sysfs_remove_file(&boost.attr);

unregister_hotcpu_notifier(&cpufreq_cpu_notifier);

down_write(&cpufreq_rwsem);
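[Editor's note] For a driver, opting into this machinery takes little more than marking boost support and tagging the over-clocked table entries; if no ->set_boost() callback is supplied, the core wires in cpufreq_boost_set_sw() as shown above (the Exynos changes later in this merge follow that pattern). The sketch below is illustrative only, not from the commit; my_freq_table and my_cpufreq_driver are placeholder names, and guarding the flag with the generic CPU_FREQ_BOOST_SW option is an assumption made here for brevity:

#include <linux/cpufreq.h>

/* Boost entries are skipped by the core while the global boost knob reads "0". */
static struct cpufreq_frequency_table my_freq_table[] = {
        { CPUFREQ_BOOST_FREQ, 1500000 },        /* driver_data marks this as a boost freq */
        { 0, 1400000 },                         /* frequencies in kHz */
        { 1, 1300000 },
        { 0, CPUFREQ_TABLE_END },
};

static struct cpufreq_driver my_cpufreq_driver = {
        .name           = "my-cpufreq",
        .verify         = cpufreq_generic_frequency_table_verify,
        .attr           = cpufreq_generic_attr, /* includes scaling_boost_frequencies */
#ifdef CONFIG_CPU_FREQ_BOOST_SW
        .boost_supported = true,                /* exposes /sys/devices/system/cpu/cpufreq/boost;
                                                 * ->set_boost defaults to cpufreq_boost_set_sw() */
#endif
        /* .init, .target_index, etc. omitted from this sketch */
};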
@@ -119,8 +119,9 @@ void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
{
int i;

mutex_lock(&cpufreq_governor_lock);
if (!policy->governor_enabled)
return;
goto out_unlock;

if (!all_cpus) {
/*

@@ -135,6 +136,9 @@ void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
for_each_cpu(i, policy->cpus)
__gov_queue_work(i, dbs_data, delay);
}

out_unlock:
mutex_unlock(&cpufreq_governor_lock);
}
EXPORT_SYMBOL_GPL(gov_queue_work);

@@ -257,6 +257,8 @@ static ssize_t show_sampling_rate_min_gov_pol \
return sprintf(buf, "%u\n", dbs_data->min_sampling_rate); \
}

extern struct mutex cpufreq_governor_lock;

void dbs_check_cpu(struct dbs_data *dbs_data, int cpu);
bool need_load_eval(struct cpu_dbs_common_info *cdbs,
unsigned int sampling_rate);
@@ -151,44 +151,36 @@ static int freq_table_get_index(struct cpufreq_stats *stat, unsigned int freq)
return -1;
}

/* should be called late in the CPU removal sequence so that the stats
* memory is still available in case someone tries to use it.
*/
static void cpufreq_stats_free_table(unsigned int cpu)
static void __cpufreq_stats_free_table(struct cpufreq_policy *policy)
{
struct cpufreq_stats *stat = per_cpu(cpufreq_stats_table, cpu);
struct cpufreq_stats *stat = per_cpu(cpufreq_stats_table, policy->cpu);

if (stat) {
pr_debug("%s: Free stat table\n", __func__);
kfree(stat->time_in_state);
kfree(stat);
per_cpu(cpufreq_stats_table, cpu) = NULL;
}
if (!stat)
return;

pr_debug("%s: Free stat table\n", __func__);

sysfs_remove_group(&policy->kobj, &stats_attr_group);
kfree(stat->time_in_state);
kfree(stat);
per_cpu(cpufreq_stats_table, policy->cpu) = NULL;
}

/* must be called early in the CPU removal sequence (before
* cpufreq_remove_dev) so that policy is still valid.
*/
static void cpufreq_stats_free_sysfs(unsigned int cpu)
static void cpufreq_stats_free_table(unsigned int cpu)
{
struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
struct cpufreq_policy *policy;

policy = cpufreq_cpu_get(cpu);
if (!policy)
return;

if (!cpufreq_frequency_get_table(cpu))
goto put_ref;
if (cpufreq_frequency_get_table(policy->cpu))
__cpufreq_stats_free_table(policy);

if (!policy_is_shared(policy)) {
pr_debug("%s: Free sysfs stat\n", __func__);
sysfs_remove_group(&policy->kobj, &stats_attr_group);
}

put_ref:
cpufreq_cpu_put(policy);
}

static int cpufreq_stats_create_table(struct cpufreq_policy *policy,
static int __cpufreq_stats_create_table(struct cpufreq_policy *policy,
struct cpufreq_frequency_table *table)
{
unsigned int i, j, count = 0, ret = 0;

@@ -261,6 +253,26 @@ error_get_fail:
return ret;
}

static void cpufreq_stats_create_table(unsigned int cpu)
{
struct cpufreq_policy *policy;
struct cpufreq_frequency_table *table;

/*
* "likely(!policy)" because normally cpufreq_stats will be registered
* before cpufreq driver
*/
policy = cpufreq_cpu_get(cpu);
if (likely(!policy))
return;

table = cpufreq_frequency_get_table(policy->cpu);
if (likely(table))
__cpufreq_stats_create_table(policy, table);

cpufreq_cpu_put(policy);
}

static void cpufreq_stats_update_policy_cpu(struct cpufreq_policy *policy)
{
struct cpufreq_stats *stat = per_cpu(cpufreq_stats_table,

@@ -277,7 +289,7 @@ static void cpufreq_stats_update_policy_cpu(struct cpufreq_policy *policy)
static int cpufreq_stat_notifier_policy(struct notifier_block *nb,
unsigned long val, void *data)
{
int ret;
int ret = 0;
struct cpufreq_policy *policy = data;
struct cpufreq_frequency_table *table;
unsigned int cpu = policy->cpu;

@@ -287,15 +299,16 @@ static int cpufreq_stat_notifier_policy(struct notifier_block *nb,
return 0;
}

if (val != CPUFREQ_NOTIFY)
return 0;
table = cpufreq_frequency_get_table(cpu);
if (!table)
return 0;
ret = cpufreq_stats_create_table(policy, table);
if (ret)
return ret;
return 0;

if (val == CPUFREQ_CREATE_POLICY)
ret = __cpufreq_stats_create_table(policy, table);
else if (val == CPUFREQ_REMOVE_POLICY)
__cpufreq_stats_free_table(policy);

return ret;
}

static int cpufreq_stat_notifier_trans(struct notifier_block *nb,

@@ -334,29 +347,6 @@ static int cpufreq_stat_notifier_trans(struct notifier_block *nb,
return 0;
}

static int cpufreq_stat_cpu_callback(struct notifier_block *nfb,
unsigned long action,
void *hcpu)
{
unsigned int cpu = (unsigned long)hcpu;

switch (action) {
case CPU_DOWN_PREPARE:
cpufreq_stats_free_sysfs(cpu);
break;
case CPU_DEAD:
cpufreq_stats_free_table(cpu);
break;
}
return NOTIFY_OK;
}

/* priority=1 so this will get called before cpufreq_remove_dev */
static struct notifier_block cpufreq_stat_cpu_notifier __refdata = {
.notifier_call = cpufreq_stat_cpu_callback,
.priority = 1,
};

static struct notifier_block notifier_policy_block = {
.notifier_call = cpufreq_stat_notifier_policy
};

@@ -376,14 +366,14 @@ static int __init cpufreq_stats_init(void)
if (ret)
return ret;

register_hotcpu_notifier(&cpufreq_stat_cpu_notifier);
for_each_online_cpu(cpu)
cpufreq_stats_create_table(cpu);

ret = cpufreq_register_notifier(&notifier_trans_block,
CPUFREQ_TRANSITION_NOTIFIER);
if (ret) {
cpufreq_unregister_notifier(&notifier_policy_block,
CPUFREQ_POLICY_NOTIFIER);
unregister_hotcpu_notifier(&cpufreq_stat_cpu_notifier);
for_each_online_cpu(cpu)
cpufreq_stats_free_table(cpu);
return ret;

@@ -399,11 +389,8 @@ static void __exit cpufreq_stats_exit(void)
CPUFREQ_POLICY_NOTIFIER);
cpufreq_unregister_notifier(&notifier_trans_block,
CPUFREQ_TRANSITION_NOTIFIER);
unregister_hotcpu_notifier(&cpufreq_stat_cpu_notifier);
for_each_online_cpu(cpu) {
for_each_online_cpu(cpu)
cpufreq_stats_free_table(cpu);
cpufreq_stats_free_sysfs(cpu);
}
}

MODULE_AUTHOR("Zou Nan hai <nanhai.zou@intel.com>");
@@ -58,14 +58,6 @@ static int davinci_verify_speed(struct cpufreq_policy *policy)
return 0;
}

static unsigned int davinci_getspeed(unsigned int cpu)
{
if (cpu)
return 0;

return clk_get_rate(cpufreq.armclk) / 1000;
}

static int davinci_target(struct cpufreq_policy *policy, unsigned int idx)
{
struct davinci_cpufreq_config *pdata = cpufreq.dev->platform_data;

@@ -73,7 +65,7 @@ static int davinci_target(struct cpufreq_policy *policy, unsigned int idx)
unsigned int old_freq, new_freq;
int ret = 0;

old_freq = davinci_getspeed(0);
old_freq = policy->cur;
new_freq = pdata->freq_table[idx].frequency;

/* if moving to higher frequency, up the voltage beforehand */

@@ -116,6 +108,8 @@ static int davinci_cpu_init(struct cpufreq_policy *policy)
return result;
}

policy->clk = cpufreq.armclk;

/*
* Time measurement across the target() function yields ~1500-1800us
* time taken with no drivers on notification list.

@@ -126,10 +120,10 @@ static int davinci_cpu_init(struct cpufreq_policy *policy)
}

static struct cpufreq_driver davinci_driver = {
.flags = CPUFREQ_STICKY,
.flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
.verify = davinci_verify_speed,
.target_index = davinci_target,
.get = davinci_getspeed,
.get = cpufreq_generic_get,
.init = davinci_cpu_init,
.exit = cpufreq_generic_exit,
.name = "davinci",
@@ -26,32 +26,18 @@ static int dbx500_cpufreq_target(struct cpufreq_policy *policy,
return clk_set_rate(armss_clk, freq_table[index].frequency * 1000);
}

static unsigned int dbx500_cpufreq_getspeed(unsigned int cpu)
{
int i = 0;
unsigned long freq = clk_get_rate(armss_clk) / 1000;

/* The value is rounded to closest frequency in the defined table. */
while (freq_table[i + 1].frequency != CPUFREQ_TABLE_END) {
if (freq < freq_table[i].frequency +
(freq_table[i + 1].frequency - freq_table[i].frequency) / 2)
return freq_table[i].frequency;
i++;
}

return freq_table[i].frequency;
}

static int dbx500_cpufreq_init(struct cpufreq_policy *policy)
{
policy->clk = armss_clk;
return cpufreq_generic_init(policy, freq_table, 20 * 1000);
}

static struct cpufreq_driver dbx500_cpufreq_driver = {
.flags = CPUFREQ_STICKY | CPUFREQ_CONST_LOOPS,
.flags = CPUFREQ_STICKY | CPUFREQ_CONST_LOOPS |
CPUFREQ_NEED_INITIAL_FREQ_CHECK,
.verify = cpufreq_generic_frequency_table_verify,
.target_index = dbx500_cpufreq_target,
.get = dbx500_cpufreq_getspeed,
.get = cpufreq_generic_get,
.init = dbx500_cpufreq_init,
.name = "DBX500",
.attr = cpufreq_generic_attr,
@@ -17,6 +17,7 @@
#include <linux/regulator/consumer.h>
#include <linux/cpufreq.h>
#include <linux/suspend.h>
#include <linux/platform_device.h>

#include <plat/cpu.h>

@@ -30,11 +31,6 @@ static unsigned int locking_frequency;
static bool frequency_locked;
static DEFINE_MUTEX(cpufreq_lock);

static unsigned int exynos_getspeed(unsigned int cpu)
{
return clk_get_rate(exynos_info->cpu_clk) / 1000;
}

static int exynos_cpufreq_get_index(unsigned int freq)
{
struct cpufreq_frequency_table *freq_table = exynos_info->freq_table;

@@ -214,25 +210,29 @@ static struct notifier_block exynos_cpufreq_nb = {

static int exynos_cpufreq_cpu_init(struct cpufreq_policy *policy)
{
policy->clk = exynos_info->cpu_clk;
return cpufreq_generic_init(policy, exynos_info->freq_table, 100000);
}

static struct cpufreq_driver exynos_driver = {
.flags = CPUFREQ_STICKY,
.flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
.verify = cpufreq_generic_frequency_table_verify,
.target_index = exynos_target,
.get = exynos_getspeed,
.get = cpufreq_generic_get,
.init = exynos_cpufreq_cpu_init,
.exit = cpufreq_generic_exit,
.name = "exynos_cpufreq",
.attr = cpufreq_generic_attr,
#ifdef CONFIG_ARM_EXYNOS_CPU_FREQ_BOOST_SW
.boost_supported = true,
#endif
#ifdef CONFIG_PM
.suspend = exynos_cpufreq_suspend,
.resume = exynos_cpufreq_resume,
#endif
};

static int __init exynos_cpufreq_init(void)
static int exynos_cpufreq_probe(struct platform_device *pdev)
{
int ret = -EINVAL;

@@ -263,7 +263,7 @@ static int __init exynos_cpufreq_init(void)
goto err_vdd_arm;
}

locking_frequency = exynos_getspeed(0);
locking_frequency = clk_get_rate(exynos_info->cpu_clk) / 1000;

register_pm_notifier(&exynos_cpufreq_nb);

@@ -281,4 +281,12 @@ err_vdd_arm:
kfree(exynos_info);
return -EINVAL;
}
late_initcall(exynos_cpufreq_init);

static struct platform_driver exynos_cpufreq_platdrv = {
.driver = {
.name = "exynos-cpufreq",
.owner = THIS_MODULE,
},
.probe = exynos_cpufreq_probe,
};
module_platform_driver(exynos_cpufreq_platdrv);
@@ -32,7 +32,7 @@ static unsigned int exynos4x12_volt_table[] = {
};

static struct cpufreq_frequency_table exynos4x12_freq_table[] = {
{L0, CPUFREQ_ENTRY_INVALID},
{CPUFREQ_BOOST_FREQ, 1500 * 1000},
{L1, 1400 * 1000},
{L2, 1300 * 1000},
{L3, 1200 * 1000},
@@ -102,12 +102,12 @@ static void set_clkdiv(unsigned int div_index)
cpu_relax();
}

static void set_apll(unsigned int new_index,
unsigned int old_index)
static void set_apll(unsigned int index)
{
unsigned int tmp, pdiv;
unsigned int tmp;
unsigned int freq = apll_freq_5250[index].freq;

/* 1. MUX_CORE_SEL = MPLL, ARMCLK uses MPLL for lock time */
/* MUX_CORE_SEL = MPLL, ARMCLK uses MPLL for lock time */
clk_set_parent(moutcore, mout_mpll);

do {

@@ -116,24 +116,9 @@ static void set_apll(unsigned int new_index,
tmp &= 0x7;
} while (tmp != 0x2);

/* 2. Set APLL Lock time */
pdiv = ((apll_freq_5250[new_index].mps >> 8) & 0x3f);
clk_set_rate(mout_apll, freq * 1000);

__raw_writel((pdiv * 250), EXYNOS5_APLL_LOCK);

/* 3. Change PLL PMS values */
tmp = __raw_readl(EXYNOS5_APLL_CON0);
tmp &= ~((0x3ff << 16) | (0x3f << 8) | (0x7 << 0));
tmp |= apll_freq_5250[new_index].mps;
__raw_writel(tmp, EXYNOS5_APLL_CON0);

/* 4. wait_lock_time */
do {
cpu_relax();
tmp = __raw_readl(EXYNOS5_APLL_CON0);
} while (!(tmp & (0x1 << 29)));

/* 5. MUX_CORE_SEL = APLL */
/* MUX_CORE_SEL = APLL */
clk_set_parent(moutcore, mout_apll);

do {

@@ -141,55 +126,17 @@ static void set_apll(unsigned int new_index,
tmp = __raw_readl(EXYNOS5_CLKMUX_STATCPU);
tmp &= (0x7 << 16);
} while (tmp != (0x1 << 16));

}

static bool exynos5250_pms_change(unsigned int old_index, unsigned int new_index)
{
unsigned int old_pm = apll_freq_5250[old_index].mps >> 8;
unsigned int new_pm = apll_freq_5250[new_index].mps >> 8;

return (old_pm == new_pm) ? 0 : 1;
}

static void exynos5250_set_frequency(unsigned int old_index,
unsigned int new_index)
{
unsigned int tmp;

if (old_index > new_index) {
if (!exynos5250_pms_change(old_index, new_index)) {
/* 1. Change the system clock divider values */
set_clkdiv(new_index);
/* 2. Change just s value in apll m,p,s value */
tmp = __raw_readl(EXYNOS5_APLL_CON0);
tmp &= ~(0x7 << 0);
tmp |= apll_freq_5250[new_index].mps & 0x7;
__raw_writel(tmp, EXYNOS5_APLL_CON0);

} else {
/* Clock Configuration Procedure */
/* 1. Change the system clock divider values */
set_clkdiv(new_index);
/* 2. Change the apll m,p,s value */
set_apll(new_index, old_index);
}
set_clkdiv(new_index);
set_apll(new_index);
} else if (old_index < new_index) {
if (!exynos5250_pms_change(old_index, new_index)) {
/* 1. Change just s value in apll m,p,s value */
tmp = __raw_readl(EXYNOS5_APLL_CON0);
tmp &= ~(0x7 << 0);
tmp |= apll_freq_5250[new_index].mps & 0x7;
__raw_writel(tmp, EXYNOS5_APLL_CON0);
/* 2. Change the system clock divider values */
set_clkdiv(new_index);
} else {
/* Clock Configuration Procedure */
/* 1. Change the apll m,p,s value */
set_apll(new_index, old_index);
/* 2. Change the system clock divider values */
set_clkdiv(new_index);
}
set_apll(new_index);
set_clkdiv(new_index);
}
}

@@ -222,7 +169,6 @@ int exynos5250_cpufreq_init(struct exynos_dvfs_info *info)
info->volt_table = exynos5250_volt_table;
info->freq_table = exynos5250_freq_table;
info->set_freq = exynos5250_set_frequency;
info->need_apll_change = exynos5250_pms_change;

return 0;
@@ -100,7 +100,6 @@ struct exynos_dvfs_data {
struct resource *mem;
int irq;
struct clk *cpu_clk;
unsigned int cur_frequency;
unsigned int latency;
struct cpufreq_frequency_table *freq_table;
unsigned int freq_count;

@@ -165,7 +164,7 @@ static int init_div_table(void)
return 0;
}

static void exynos_enable_dvfs(void)
static void exynos_enable_dvfs(unsigned int cur_frequency)
{
unsigned int tmp, i, cpu;
struct cpufreq_frequency_table *freq_table = dvfs_info->freq_table;

@@ -184,18 +183,18 @@ static void exynos_enable_dvfs(void)

/* Set initial performance index */
for (i = 0; freq_table[i].frequency != CPUFREQ_TABLE_END; i++)
if (freq_table[i].frequency == dvfs_info->cur_frequency)
if (freq_table[i].frequency == cur_frequency)
break;

if (freq_table[i].frequency == CPUFREQ_TABLE_END) {
dev_crit(dvfs_info->dev, "Boot up frequency not supported\n");
/* Assign the highest frequency */
i = 0;
dvfs_info->cur_frequency = freq_table[i].frequency;
cur_frequency = freq_table[i].frequency;
}

dev_info(dvfs_info->dev, "Setting dvfs initial frequency = %uKHZ",
dvfs_info->cur_frequency);
cur_frequency);

for (cpu = 0; cpu < CONFIG_NR_CPUS; cpu++) {
tmp = __raw_readl(dvfs_info->base + XMU_C0_3_PSTATE + cpu * 4);

@@ -209,11 +208,6 @@ static void exynos_enable_dvfs(void)
dvfs_info->base + XMU_DVFS_CTRL);
}

static unsigned int exynos_getspeed(unsigned int cpu)
{
return dvfs_info->cur_frequency;
}

static int exynos_target(struct cpufreq_policy *policy, unsigned int index)
{
unsigned int tmp;

@@ -222,7 +216,7 @@ static int exynos_target(struct cpufreq_policy *policy, unsigned int index)

mutex_lock(&cpufreq_lock);

freqs.old = dvfs_info->cur_frequency;
freqs.old = policy->cur;
freqs.new = freq_table[index].frequency;

cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);

@@ -250,7 +244,7 @@ static void exynos_cpufreq_work(struct work_struct *work)
goto skip_work;

mutex_lock(&cpufreq_lock);
freqs.old = dvfs_info->cur_frequency;
freqs.old = policy->cur;

cur_pstate = __raw_readl(dvfs_info->base + XMU_P_STATUS);
if (cur_pstate >> C0_3_PSTATE_VALID_SHIFT & 0x1)

@@ -260,10 +254,9 @@ static void exynos_cpufreq_work(struct work_struct *work)

if (likely(index < dvfs_info->freq_count)) {
freqs.new = freq_table[index].frequency;
dvfs_info->cur_frequency = freqs.new;
} else {
dev_crit(dvfs_info->dev, "New frequency out of range\n");
freqs.new = dvfs_info->cur_frequency;
freqs.new = freqs.old;
}
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);

@@ -307,15 +300,17 @@ static void exynos_sort_descend_freq_table(void)

static int exynos_cpufreq_cpu_init(struct cpufreq_policy *policy)
{
policy->clk = dvfs_info->cpu_clk;
return cpufreq_generic_init(policy, dvfs_info->freq_table,
dvfs_info->latency);
}

static struct cpufreq_driver exynos_driver = {
.flags = CPUFREQ_STICKY | CPUFREQ_ASYNC_NOTIFICATION,
.flags = CPUFREQ_STICKY | CPUFREQ_ASYNC_NOTIFICATION |
CPUFREQ_NEED_INITIAL_FREQ_CHECK,
.verify = cpufreq_generic_frequency_table_verify,
.target_index = exynos_target,
.get = exynos_getspeed,
.get = cpufreq_generic_get,
.init = exynos_cpufreq_cpu_init,
.exit = cpufreq_generic_exit,
.name = CPUFREQ_NAME,

@@ -335,6 +330,7 @@ static int exynos_cpufreq_probe(struct platform_device *pdev)
int ret = -EINVAL;
struct device_node *np;
struct resource res;
unsigned int cur_frequency;

np = pdev->dev.of_node;
if (!np)

@@ -391,13 +387,13 @@ static int exynos_cpufreq_probe(struct platform_device *pdev)
goto err_free_table;
}

dvfs_info->cur_frequency = clk_get_rate(dvfs_info->cpu_clk);
if (!dvfs_info->cur_frequency) {
cur_frequency = clk_get_rate(dvfs_info->cpu_clk);
if (!cur_frequency) {
dev_err(dvfs_info->dev, "Failed to get clock rate\n");
ret = -EINVAL;
goto err_free_table;
}
dvfs_info->cur_frequency /= 1000;
cur_frequency /= 1000;

INIT_WORK(&dvfs_info->irq_work, exynos_cpufreq_work);
ret = devm_request_irq(dvfs_info->dev, dvfs_info->irq,

@@ -414,7 +410,7 @@ static int exynos_cpufreq_probe(struct platform_device *pdev)
goto err_free_table;
}

exynos_enable_dvfs();
exynos_enable_dvfs(cur_frequency);
ret = cpufreq_register_driver(&exynos_driver);
if (ret) {
dev_err(dvfs_info->dev,
@@ -32,6 +32,10 @@ int cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy,

continue;
}
if (!cpufreq_boost_enabled()
&& table[i].driver_data == CPUFREQ_BOOST_FREQ)
continue;

pr_debug("table entry %u: %u kHz, %u driver_data\n",
i, freq, table[i].driver_data);
if (freq < min_freq)

@@ -178,11 +182,34 @@ int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
}
EXPORT_SYMBOL_GPL(cpufreq_frequency_table_target);

int cpufreq_frequency_table_get_index(struct cpufreq_policy *policy,
unsigned int freq)
{
struct cpufreq_frequency_table *table;
int i;

table = cpufreq_frequency_get_table(policy->cpu);
if (unlikely(!table)) {
pr_debug("%s: Unable to find frequency table\n", __func__);
return -ENOENT;
}

for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) {
if (table[i].frequency == freq)
return i;
}

return -EINVAL;
}
EXPORT_SYMBOL_GPL(cpufreq_frequency_table_get_index);

static DEFINE_PER_CPU(struct cpufreq_frequency_table *, cpufreq_show_table);

/**
* show_available_freqs - show available frequencies for the specified CPU
*/
static ssize_t show_available_freqs(struct cpufreq_policy *policy, char *buf)
static ssize_t show_available_freqs(struct cpufreq_policy *policy, char *buf,
bool show_boost)
{
unsigned int i = 0;
unsigned int cpu = policy->cpu;

@@ -197,6 +224,20 @@ static ssize_t show_available_freqs(struct cpufreq_policy *policy, char *buf)
for (i = 0; (table[i].frequency != CPUFREQ_TABLE_END); i++) {
if (table[i].frequency == CPUFREQ_ENTRY_INVALID)
continue;
/*
* show_boost = true and driver_data = BOOST freq
* display BOOST freqs
*
* show_boost = false and driver_data = BOOST freq
* show_boost = true and driver_data != BOOST freq
* continue - do not display anything
*
* show_boost = false and driver_data != BOOST freq
* display NON BOOST freqs
*/
if (show_boost ^ (table[i].driver_data == CPUFREQ_BOOST_FREQ))
continue;

count += sprintf(&buf[count], "%d ", table[i].frequency);
}
count += sprintf(&buf[count], "\n");

@@ -205,16 +246,39 @@ static ssize_t show_available_freqs(struct cpufreq_policy *policy, char *buf)

}

struct freq_attr cpufreq_freq_attr_scaling_available_freqs = {
.attr = { .name = "scaling_available_frequencies",
.mode = 0444,
},
.show = show_available_freqs,
};
#define cpufreq_attr_available_freq(_name) \
struct freq_attr cpufreq_freq_attr_##_name##_freqs = \
__ATTR_RO(_name##_frequencies)

/**
* show_scaling_available_frequencies - show available normal frequencies for
* the specified CPU
*/
static ssize_t scaling_available_frequencies_show(struct cpufreq_policy *policy,
char *buf)
{
return show_available_freqs(policy, buf, false);
}
cpufreq_attr_available_freq(scaling_available);
EXPORT_SYMBOL_GPL(cpufreq_freq_attr_scaling_available_freqs);

/**
* show_available_boost_freqs - show available boost frequencies for
* the specified CPU
*/
static ssize_t scaling_boost_frequencies_show(struct cpufreq_policy *policy,
char *buf)
{
return show_available_freqs(policy, buf, true);
}
cpufreq_attr_available_freq(scaling_boost);
EXPORT_SYMBOL_GPL(cpufreq_freq_attr_scaling_boost_freqs);

struct freq_attr *cpufreq_generic_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
#ifdef CONFIG_CPU_FREQ_BOOST_SW
&cpufreq_freq_attr_scaling_boost_freqs,
#endif
NULL,
};
EXPORT_SYMBOL_GPL(cpufreq_generic_attr);
@ -35,10 +35,8 @@ static struct device *cpu_dev;
|
||||
static struct cpufreq_frequency_table *freq_table;
|
||||
static unsigned int transition_latency;
|
||||
|
||||
static unsigned int imx6q_get_speed(unsigned int cpu)
|
||||
{
|
||||
return clk_get_rate(arm_clk) / 1000;
|
||||
}
|
||||
static u32 *imx6_soc_volt;
|
||||
static u32 soc_opp_count;
|
||||
|
||||
static int imx6q_set_target(struct cpufreq_policy *policy, unsigned int index)
|
||||
{
|
||||
@ -69,23 +67,22 @@ static int imx6q_set_target(struct cpufreq_policy *policy, unsigned int index)
|
||||
|
||||
/* scaling up? scale voltage before frequency */
|
||||
if (new_freq > old_freq) {
|
||||
ret = regulator_set_voltage_tol(pu_reg, imx6_soc_volt[index], 0);
|
||||
if (ret) {
|
||||
dev_err(cpu_dev, "failed to scale vddpu up: %d\n", ret);
|
||||
return ret;
|
||||
}
|
||||
ret = regulator_set_voltage_tol(soc_reg, imx6_soc_volt[index], 0);
|
||||
if (ret) {
|
||||
dev_err(cpu_dev, "failed to scale vddsoc up: %d\n", ret);
|
||||
return ret;
|
||||
}
|
||||
ret = regulator_set_voltage_tol(arm_reg, volt, 0);
|
||||
if (ret) {
|
||||
dev_err(cpu_dev,
|
||||
"failed to scale vddarm up: %d\n", ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* Need to increase vddpu and vddsoc for safety
|
||||
* if we are about to run at 1.2 GHz.
|
||||
*/
|
||||
if (new_freq == FREQ_1P2_GHZ / 1000) {
|
||||
regulator_set_voltage_tol(pu_reg,
|
||||
PU_SOC_VOLTAGE_HIGH, 0);
|
||||
regulator_set_voltage_tol(soc_reg,
|
||||
PU_SOC_VOLTAGE_HIGH, 0);
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
@ -120,12 +117,15 @@ static int imx6q_set_target(struct cpufreq_policy *policy, unsigned int index)
|
||||
"failed to scale vddarm down: %d\n", ret);
|
||||
ret = 0;
|
||||
}
|
||||
|
||||
if (old_freq == FREQ_1P2_GHZ / 1000) {
|
||||
regulator_set_voltage_tol(pu_reg,
|
||||
PU_SOC_VOLTAGE_NORMAL, 0);
|
||||
regulator_set_voltage_tol(soc_reg,
|
||||
PU_SOC_VOLTAGE_NORMAL, 0);
|
||||
ret = regulator_set_voltage_tol(soc_reg, imx6_soc_volt[index], 0);
|
||||
if (ret) {
|
||||
dev_warn(cpu_dev, "failed to scale vddsoc down: %d\n", ret);
|
||||
ret = 0;
|
||||
}
|
||||
ret = regulator_set_voltage_tol(pu_reg, imx6_soc_volt[index], 0);
|
||||
if (ret) {
|
||||
dev_warn(cpu_dev, "failed to scale vddpu down: %d\n", ret);
|
||||
ret = 0;
|
||||
}
|
||||
}
|
||||
|
||||
@ -134,13 +134,15 @@ static int imx6q_set_target(struct cpufreq_policy *policy, unsigned int index)
|
||||
|
||||
static int imx6q_cpufreq_init(struct cpufreq_policy *policy)
|
||||
{
|
||||
policy->clk = arm_clk;
|
||||
return cpufreq_generic_init(policy, freq_table, transition_latency);
|
||||
}
|
||||
|
||||
static struct cpufreq_driver imx6q_cpufreq_driver = {
|
||||
.flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK,
|
||||
.verify = cpufreq_generic_frequency_table_verify,
|
||||
.target_index = imx6q_set_target,
|
||||
.get = imx6q_get_speed,
|
||||
.get = cpufreq_generic_get,
|
||||
.init = imx6q_cpufreq_init,
|
||||
.exit = cpufreq_generic_exit,
|
||||
.name = "imx6q-cpufreq",
|
||||
@ -153,6 +155,9 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
|
||||
struct dev_pm_opp *opp;
|
||||
unsigned long min_volt, max_volt;
|
||||
int num, ret;
|
||||
const struct property *prop;
|
||||
const __be32 *val;
|
||||
u32 nr, i, j;
|
||||
|
||||
cpu_dev = get_cpu_device(0);
|
||||
if (!cpu_dev) {
|
||||
@ -187,12 +192,25 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
|
||||
goto put_node;
|
||||
}
|
||||
|
||||
/* We expect an OPP table supplied by platform */
/*
* We expect an OPP table supplied by platform.
* Just in case the platform did not supply the OPP
* table, it will try to get it.
*/
num = dev_pm_opp_get_opp_count(cpu_dev);
if (num < 0) {
ret = num;
dev_err(cpu_dev, "no OPP table is found: %d\n", ret);
goto put_node;
ret = of_init_opp_table(cpu_dev);
if (ret < 0) {
dev_err(cpu_dev, "failed to init OPP table: %d\n", ret);
goto put_node;
}

num = dev_pm_opp_get_opp_count(cpu_dev);
if (num < 0) {
ret = num;
dev_err(cpu_dev, "no OPP table is found: %d\n", ret);
goto put_node;
}
}
|
||||
ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
|
||||
@ -201,9 +219,61 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
|
||||
goto put_node;
|
||||
}
|
||||
|
||||
/* Make imx6_soc_volt array's size same as arm opp number */
|
||||
imx6_soc_volt = devm_kzalloc(cpu_dev, sizeof(*imx6_soc_volt) * num, GFP_KERNEL);
|
||||
if (imx6_soc_volt == NULL) {
|
||||
ret = -ENOMEM;
|
||||
goto free_freq_table;
|
||||
}
|
||||
|
||||
prop = of_find_property(np, "fsl,soc-operating-points", NULL);
|
||||
if (!prop || !prop->value)
|
||||
goto soc_opp_out;
|
||||
|
||||
/*
|
||||
* Each OPP is a set of tuples consisting of frequency and
|
||||
* voltage like <freq-kHz vol-uV>.
|
||||
*/
|
||||
nr = prop->length / sizeof(u32);
|
||||
if (nr % 2 || (nr / 2) < num)
|
||||
goto soc_opp_out;
|
||||
|
||||
for (j = 0; j < num; j++) {
|
||||
val = prop->value;
|
||||
for (i = 0; i < nr / 2; i++) {
|
||||
unsigned long freq = be32_to_cpup(val++);
|
||||
unsigned long volt = be32_to_cpup(val++);
|
||||
if (freq_table[j].frequency == freq) {
|
||||
imx6_soc_volt[soc_opp_count++] = volt;
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
soc_opp_out:
|
||||
/* use fixed soc opp volt if no valid soc opp info found in dtb */
|
||||
if (soc_opp_count != num) {
|
||||
dev_warn(cpu_dev, "can NOT find valid fsl,soc-operating-points property in dtb, use default value!\n");
|
||||
for (j = 0; j < num; j++)
|
||||
imx6_soc_volt[j] = PU_SOC_VOLTAGE_NORMAL;
|
||||
if (freq_table[num - 1].frequency * 1000 == FREQ_1P2_GHZ)
|
||||
imx6_soc_volt[num - 1] = PU_SOC_VOLTAGE_HIGH;
|
||||
}
|
||||
|
||||
if (of_property_read_u32(np, "clock-latency", &transition_latency))
|
||||
transition_latency = CPUFREQ_ETERNAL;
|
||||
|
||||
/*
|
||||
* Calculate the ramp time for max voltage change in the
|
||||
* VDDSOC and VDDPU regulators.
|
||||
*/
|
||||
ret = regulator_set_voltage_time(soc_reg, imx6_soc_volt[0], imx6_soc_volt[num - 1]);
|
||||
if (ret > 0)
|
||||
transition_latency += ret * 1000;
|
||||
ret = regulator_set_voltage_time(pu_reg, imx6_soc_volt[0], imx6_soc_volt[num - 1]);
|
||||
if (ret > 0)
|
||||
transition_latency += ret * 1000;
|
||||
|
||||
/*
|
||||
* OPP is maintained in order of increasing frequency, and
|
||||
* freq_table initialised from OPP is therefore sorted in the
|
||||
@ -221,18 +291,6 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
|
||||
if (ret > 0)
|
||||
transition_latency += ret * 1000;
|
||||
|
||||
/* Count vddpu and vddsoc latency in for 1.2 GHz support */
|
||||
if (freq_table[num].frequency == FREQ_1P2_GHZ / 1000) {
|
||||
ret = regulator_set_voltage_time(pu_reg, PU_SOC_VOLTAGE_NORMAL,
|
||||
PU_SOC_VOLTAGE_HIGH);
|
||||
if (ret > 0)
|
||||
transition_latency += ret * 1000;
|
||||
ret = regulator_set_voltage_time(soc_reg, PU_SOC_VOLTAGE_NORMAL,
|
||||
PU_SOC_VOLTAGE_HIGH);
|
||||
if (ret > 0)
|
||||
transition_latency += ret * 1000;
|
||||
}
|
||||
|
||||
ret = cpufreq_register_driver(&imx6q_cpufreq_driver);
|
||||
if (ret) {
|
||||
dev_err(cpu_dev, "failed register driver: %d\n", ret);
|
||||
|
@ -190,6 +190,7 @@ static int integrator_cpufreq_init(struct cpufreq_policy *policy)
|
||||
}
|
||||
|
||||
static struct cpufreq_driver integrator_driver = {
|
||||
.flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK,
|
||||
.verify = integrator_verify_policy,
|
||||
.target = integrator_set_target,
|
||||
.get = integrator_get,
|
||||
|
@ -35,6 +35,7 @@
|
||||
#define SAMPLE_COUNT 3
|
||||
|
||||
#define BYT_RATIOS 0x66a
|
||||
#define BYT_VIDS 0x66b
|
||||
|
||||
#define FRAC_BITS 8
|
||||
#define int_tofp(X) ((int64_t)(X) << FRAC_BITS)
|
||||
@ -50,6 +51,8 @@ static inline int32_t div_fp(int32_t x, int32_t y)
|
||||
return div_s64((int64_t)x << FRAC_BITS, (int64_t)y);
|
||||
}
|
||||
|
||||
static u64 energy_divisor;
|
||||
|
||||
struct sample {
|
||||
int32_t core_pct_busy;
|
||||
u64 aperf;
|
||||
@ -64,6 +67,12 @@ struct pstate_data {
|
||||
int turbo_pstate;
|
||||
};
|
||||
|
||||
struct vid_data {
|
||||
int32_t min;
|
||||
int32_t max;
|
||||
int32_t ratio;
|
||||
};
|
||||
|
||||
struct _pid {
|
||||
int setpoint;
|
||||
int32_t integral;
|
||||
@ -82,10 +91,9 @@ struct cpudata {
|
||||
struct timer_list timer;
|
||||
|
||||
struct pstate_data pstate;
|
||||
struct vid_data vid;
|
||||
struct _pid pid;
|
||||
|
||||
int min_pstate_count;
|
||||
|
||||
u64 prev_aperf;
|
||||
u64 prev_mperf;
|
||||
int sample_ptr;
|
||||
@ -106,7 +114,8 @@ struct pstate_funcs {
|
||||
int (*get_max)(void);
|
||||
int (*get_min)(void);
|
||||
int (*get_turbo)(void);
|
||||
void (*set)(int pstate);
|
||||
void (*set)(struct cpudata*, int pstate);
|
||||
void (*get_vid)(struct cpudata *);
|
||||
};
|
||||
|
||||
struct cpu_defaults {
|
||||
@ -358,6 +367,42 @@ static int byt_get_max_pstate(void)
|
||||
return (value >> 16) & 0xFF;
|
||||
}
|
||||
|
||||
static void byt_set_pstate(struct cpudata *cpudata, int pstate)
|
||||
{
|
||||
u64 val;
|
||||
int32_t vid_fp;
|
||||
u32 vid;
|
||||
|
||||
val = pstate << 8;
|
||||
if (limits.no_turbo)
|
||||
val |= (u64)1 << 32;
|
||||
|
||||
vid_fp = cpudata->vid.min + mul_fp(
|
||||
int_tofp(pstate - cpudata->pstate.min_pstate),
|
||||
cpudata->vid.ratio);
|
||||
|
||||
vid_fp = clamp_t(int32_t, vid_fp, cpudata->vid.min, cpudata->vid.max);
|
||||
vid = fp_toint(vid_fp);
|
||||
|
||||
val |= vid;
|
||||
|
||||
wrmsrl(MSR_IA32_PERF_CTL, val);
|
||||
}
|
||||
|
||||
static void byt_get_vid(struct cpudata *cpudata)
|
||||
{
|
||||
u64 value;
|
||||
|
||||
rdmsrl(BYT_VIDS, value);
|
||||
cpudata->vid.min = int_tofp((value >> 8) & 0x7f);
|
||||
cpudata->vid.max = int_tofp((value >> 16) & 0x7f);
|
||||
cpudata->vid.ratio = div_fp(
|
||||
cpudata->vid.max - cpudata->vid.min,
|
||||
int_tofp(cpudata->pstate.max_pstate -
|
||||
cpudata->pstate.min_pstate));
|
||||
}
|
||||
|
||||
|
||||
static int core_get_min_pstate(void)
|
||||
{
|
||||
u64 value;
|
||||
@ -384,7 +429,7 @@ static int core_get_turbo_pstate(void)
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void core_set_pstate(int pstate)
|
||||
static void core_set_pstate(struct cpudata *cpudata, int pstate)
|
||||
{
|
||||
u64 val;
|
||||
|
||||
@ -425,7 +470,8 @@ static struct cpu_defaults byt_params = {
|
||||
.get_max = byt_get_max_pstate,
|
||||
.get_min = byt_get_min_pstate,
|
||||
.get_turbo = byt_get_max_pstate,
|
||||
.set = core_set_pstate,
|
||||
.set = byt_set_pstate,
|
||||
.get_vid = byt_get_vid,
|
||||
},
|
||||
};
|
||||
|
||||
@ -462,7 +508,7 @@ static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate)
|
||||
|
||||
cpu->pstate.current_pstate = pstate;
|
||||
|
||||
pstate_funcs.set(pstate);
|
||||
pstate_funcs.set(cpu, pstate);
|
||||
}
|
||||
|
||||
static inline void intel_pstate_pstate_increase(struct cpudata *cpu, int steps)
|
||||
@ -488,6 +534,9 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
|
||||
cpu->pstate.max_pstate = pstate_funcs.get_max();
|
||||
cpu->pstate.turbo_pstate = pstate_funcs.get_turbo();
|
||||
|
||||
if (pstate_funcs.get_vid)
|
||||
pstate_funcs.get_vid(cpu);
|
||||
|
||||
/*
|
||||
* goto max pstate so we don't slow up boot if we are built-in if we are
|
||||
* a module we will take care of it during normal operation
|
||||
@ -512,6 +561,7 @@ static inline void intel_pstate_sample(struct cpudata *cpu)
|
||||
|
||||
rdmsrl(MSR_IA32_APERF, aperf);
|
||||
rdmsrl(MSR_IA32_MPERF, mperf);
|
||||
|
||||
cpu->sample_ptr = (cpu->sample_ptr + 1) % SAMPLE_COUNT;
|
||||
cpu->samples[cpu->sample_ptr].aperf = aperf;
|
||||
cpu->samples[cpu->sample_ptr].mperf = mperf;
|
||||
@ -556,6 +606,7 @@ static inline void intel_pstate_adjust_busy_pstate(struct cpudata *cpu)
|
||||
ctl = pid_calc(pid, busy_scaled);
|
||||
|
||||
steps = abs(ctl);
|
||||
|
||||
if (ctl < 0)
|
||||
intel_pstate_pstate_increase(cpu, steps);
|
||||
else
|
||||
@ -565,17 +616,23 @@ static inline void intel_pstate_adjust_busy_pstate(struct cpudata *cpu)
|
||||
static void intel_pstate_timer_func(unsigned long __data)
|
||||
{
|
||||
struct cpudata *cpu = (struct cpudata *) __data;
|
||||
struct sample *sample;
|
||||
u64 energy;
|
||||
|
||||
intel_pstate_sample(cpu);
|
||||
|
||||
sample = &cpu->samples[cpu->sample_ptr];
|
||||
rdmsrl(MSR_PKG_ENERGY_STATUS, energy);
|
||||
|
||||
intel_pstate_adjust_busy_pstate(cpu);
|
||||
|
||||
if (cpu->pstate.current_pstate == cpu->pstate.min_pstate) {
|
||||
cpu->min_pstate_count++;
|
||||
if (!(cpu->min_pstate_count % 5)) {
|
||||
intel_pstate_set_pstate(cpu, cpu->pstate.max_pstate);
|
||||
}
|
||||
} else
|
||||
cpu->min_pstate_count = 0;
|
||||
trace_pstate_sample(fp_toint(sample->core_pct_busy),
|
||||
fp_toint(intel_pstate_get_scaled_busy(cpu)),
|
||||
cpu->pstate.current_pstate,
|
||||
sample->mperf,
|
||||
sample->aperf,
|
||||
div64_u64(energy, energy_divisor),
|
||||
sample->freq);
|
||||
|
||||
intel_pstate_set_sample_time(cpu);
|
||||
}
|
||||
@ -782,6 +839,7 @@ static void copy_cpu_funcs(struct pstate_funcs *funcs)
|
||||
pstate_funcs.get_min = funcs->get_min;
|
||||
pstate_funcs.get_turbo = funcs->get_turbo;
|
||||
pstate_funcs.set = funcs->set;
|
||||
pstate_funcs.get_vid = funcs->get_vid;
|
||||
}
|
||||
|
||||
#if IS_ENABLED(CONFIG_ACPI)
|
||||
@ -855,6 +913,7 @@ static int __init intel_pstate_init(void)
|
||||
int cpu, rc = 0;
|
||||
const struct x86_cpu_id *id;
|
||||
struct cpu_defaults *cpu_info;
|
||||
u64 units;
|
||||
|
||||
if (no_load)
|
||||
return -ENODEV;
|
||||
@ -888,8 +947,12 @@ static int __init intel_pstate_init(void)
|
||||
if (rc)
|
||||
goto out;
|
||||
|
||||
rdmsrl(MSR_RAPL_POWER_UNIT, units);
|
||||
energy_divisor = 1 << ((units >> 8) & 0x1f); /* bits{12:8} */
|
||||
|
||||
intel_pstate_debug_expose_params();
|
||||
intel_pstate_sysfs_expose_params();
|
||||
|
||||
return rc;
|
||||
out:
|
||||
get_online_cpus();
|
||||
|
@ -97,6 +97,7 @@ static int kirkwood_cpufreq_cpu_init(struct cpufreq_policy *policy)
|
||||
}
|
||||
|
||||
static struct cpufreq_driver kirkwood_cpufreq_driver = {
|
||||
.flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK,
|
||||
.get = kirkwood_cpufreq_get_cpu_frequency,
|
||||
.verify = cpufreq_generic_frequency_table_verify,
|
||||
.target_index = kirkwood_cpufreq_target,
|
||||
|
@ -24,8 +24,6 @@
|
||||
|
||||
static uint nowait;
|
||||
|
||||
static struct clk *cpuclk;
|
||||
|
||||
static void (*saved_cpu_wait) (void);
|
||||
|
||||
static int loongson2_cpu_freq_notifier(struct notifier_block *nb,
|
||||
@ -44,11 +42,6 @@ static int loongson2_cpu_freq_notifier(struct notifier_block *nb,
|
||||
return 0;
|
||||
}
|
||||
|
||||
static unsigned int loongson2_cpufreq_get(unsigned int cpu)
|
||||
{
|
||||
return clk_get_rate(cpuclk);
|
||||
}
|
||||
|
||||
/*
|
||||
* Here we notify other drivers of the proposed change and the final change.
|
||||
*/
|
||||
@ -69,13 +62,14 @@ static int loongson2_cpufreq_target(struct cpufreq_policy *policy,
|
||||
set_cpus_allowed_ptr(current, &cpus_allowed);
|
||||
|
||||
/* setting the cpu frequency */
|
||||
clk_set_rate(cpuclk, freq);
|
||||
clk_set_rate(policy->clk, freq);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int loongson2_cpufreq_cpu_init(struct cpufreq_policy *policy)
|
||||
{
|
||||
static struct clk *cpuclk;
|
||||
int i;
|
||||
unsigned long rate;
|
||||
int ret;
|
||||
@ -104,13 +98,14 @@ static int loongson2_cpufreq_cpu_init(struct cpufreq_policy *policy)
|
||||
return ret;
|
||||
}
|
||||
|
||||
policy->clk = cpuclk;
|
||||
return cpufreq_generic_init(policy, &loongson2_clockmod_table[0], 0);
|
||||
}
|
||||
|
||||
static int loongson2_cpufreq_exit(struct cpufreq_policy *policy)
|
||||
{
|
||||
cpufreq_frequency_table_put_attr(policy->cpu);
|
||||
clk_put(cpuclk);
|
||||
clk_put(policy->clk);
|
||||
return 0;
|
||||
}
|
||||
|
||||
@ -119,7 +114,7 @@ static struct cpufreq_driver loongson2_cpufreq_driver = {
|
||||
.init = loongson2_cpufreq_cpu_init,
|
||||
.verify = cpufreq_generic_frequency_table_verify,
|
||||
.target_index = loongson2_cpufreq_target,
|
||||
.get = loongson2_cpufreq_get,
|
||||
.get = cpufreq_generic_get,
|
||||
.exit = loongson2_cpufreq_exit,
|
||||
.attr = cpufreq_generic_attr,
|
||||
};
|
||||
|
@ -36,21 +36,9 @@
|
||||
|
||||
static struct cpufreq_frequency_table *freq_table;
|
||||
static atomic_t freq_table_users = ATOMIC_INIT(0);
|
||||
static struct clk *mpu_clk;
|
||||
static struct device *mpu_dev;
|
||||
static struct regulator *mpu_reg;
|
||||
|
||||
static unsigned int omap_getspeed(unsigned int cpu)
|
||||
{
|
||||
unsigned long rate;
|
||||
|
||||
if (cpu >= NR_CPUS)
|
||||
return 0;
|
||||
|
||||
rate = clk_get_rate(mpu_clk) / 1000;
|
||||
return rate;
|
||||
}
|
||||
|
||||
static int omap_target(struct cpufreq_policy *policy, unsigned int index)
|
||||
{
|
||||
int r, ret;
|
||||
@ -58,11 +46,11 @@ static int omap_target(struct cpufreq_policy *policy, unsigned int index)
|
||||
unsigned long freq, volt = 0, volt_old = 0, tol = 0;
|
||||
unsigned int old_freq, new_freq;
|
||||
|
||||
old_freq = omap_getspeed(policy->cpu);
|
||||
old_freq = policy->cur;
|
||||
new_freq = freq_table[index].frequency;
|
||||
|
||||
freq = new_freq * 1000;
|
||||
ret = clk_round_rate(mpu_clk, freq);
|
||||
ret = clk_round_rate(policy->clk, freq);
|
||||
if (IS_ERR_VALUE(ret)) {
|
||||
dev_warn(mpu_dev,
|
||||
"CPUfreq: Cannot find matching frequency for %lu\n",
|
||||
@ -100,7 +88,7 @@ static int omap_target(struct cpufreq_policy *policy, unsigned int index)
|
||||
}
|
||||
}
|
||||
|
||||
ret = clk_set_rate(mpu_clk, new_freq * 1000);
|
||||
ret = clk_set_rate(policy->clk, new_freq * 1000);
|
||||
|
||||
/* scaling down? scale voltage after frequency */
|
||||
if (mpu_reg && (new_freq < old_freq)) {
|
||||
@ -108,7 +96,7 @@ static int omap_target(struct cpufreq_policy *policy, unsigned int index)
|
||||
if (r < 0) {
|
||||
dev_warn(mpu_dev, "%s: unable to scale voltage down.\n",
|
||||
__func__);
|
||||
clk_set_rate(mpu_clk, old_freq * 1000);
|
||||
clk_set_rate(policy->clk, old_freq * 1000);
|
||||
return r;
|
||||
}
|
||||
}
|
||||
@ -126,9 +114,9 @@ static int omap_cpu_init(struct cpufreq_policy *policy)
|
||||
{
|
||||
int result;
|
||||
|
||||
mpu_clk = clk_get(NULL, "cpufreq_ck");
|
||||
if (IS_ERR(mpu_clk))
|
||||
return PTR_ERR(mpu_clk);
|
||||
policy->clk = clk_get(NULL, "cpufreq_ck");
|
||||
if (IS_ERR(policy->clk))
|
||||
return PTR_ERR(policy->clk);
|
||||
|
||||
if (!freq_table) {
|
||||
result = dev_pm_opp_init_cpufreq_table(mpu_dev, &freq_table);
|
||||
@ -149,7 +137,7 @@ static int omap_cpu_init(struct cpufreq_policy *policy)
|
||||
|
||||
freq_table_free();
|
||||
fail:
|
||||
clk_put(mpu_clk);
|
||||
clk_put(policy->clk);
|
||||
return result;
|
||||
}
|
||||
|
||||
@ -157,15 +145,15 @@ static int omap_cpu_exit(struct cpufreq_policy *policy)
|
||||
{
|
||||
cpufreq_frequency_table_put_attr(policy->cpu);
|
||||
freq_table_free();
|
||||
clk_put(mpu_clk);
|
||||
clk_put(policy->clk);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static struct cpufreq_driver omap_driver = {
|
||||
.flags = CPUFREQ_STICKY,
|
||||
.flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
|
||||
.verify = cpufreq_generic_frequency_table_verify,
|
||||
.target_index = omap_target,
|
||||
.get = omap_getspeed,
|
||||
.get = cpufreq_generic_get,
|
||||
.init = omap_cpu_init,
|
||||
.exit = omap_cpu_exit,
|
||||
.name = "omap",
|
||||
|
@ -213,6 +213,7 @@ static int pcc_cpufreq_target(struct cpufreq_policy *policy,
|
||||
cpu, target_freq,
|
||||
(pcch_virt_addr + pcc_cpu_data->input_offset));
|
||||
|
||||
freqs.old = policy->cur;
|
||||
freqs.new = target_freq;
|
||||
cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
|
||||
|
||||
@ -228,25 +229,20 @@ static int pcc_cpufreq_target(struct cpufreq_policy *policy,
|
||||
memset_io((pcch_virt_addr + pcc_cpu_data->input_offset), 0, BUF_SZ);
|
||||
|
||||
status = ioread16(&pcch_hdr->status);
|
||||
iowrite16(0, &pcch_hdr->status);
|
||||
|
||||
cpufreq_notify_post_transition(policy, &freqs, status != CMD_COMPLETE);
|
||||
spin_unlock(&pcc_lock);
|
||||
|
||||
if (status != CMD_COMPLETE) {
|
||||
pr_debug("target: FAILED for cpu %d, with status: 0x%x\n",
|
||||
cpu, status);
|
||||
goto cmd_incomplete;
|
||||
return -EINVAL;
|
||||
}
|
||||
iowrite16(0, &pcch_hdr->status);
|
||||
|
||||
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
|
||||
pr_debug("target: was SUCCESSFUL for cpu %d\n", cpu);
|
||||
spin_unlock(&pcc_lock);
|
||||
|
||||
return 0;
|
||||
|
||||
cmd_incomplete:
|
||||
freqs.new = freqs.old;
|
||||
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
|
||||
iowrite16(0, &pcch_hdr->status);
|
||||
spin_unlock(&pcc_lock);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
static int pcc_get_offset(int cpu)
|
||||
|
@ -26,41 +26,108 @@
|
||||
static unsigned int busfreq; /* FSB, in 10 kHz */
|
||||
static unsigned int max_multiplier;
|
||||
|
||||
static unsigned int param_busfreq = 0;
|
||||
static unsigned int param_max_multiplier = 0;
|
||||
|
||||
module_param_named(max_multiplier, param_max_multiplier, uint, S_IRUGO);
|
||||
MODULE_PARM_DESC(max_multiplier, "Maximum multiplier (allowed values: 20 30 35 40 45 50 55 60)");
|
||||
|
||||
module_param_named(bus_frequency, param_busfreq, uint, S_IRUGO);
|
||||
MODULE_PARM_DESC(bus_frequency, "Bus frequency in kHz");
|
||||
|
||||
/* Clock ratio multiplied by 10 - see table 27 in AMD#23446 */
|
||||
static struct cpufreq_frequency_table clock_ratio[] = {
|
||||
{45, /* 000 -> 4.5x */ 0},
|
||||
{50, /* 001 -> 5.0x */ 0},
|
||||
{40, /* 010 -> 4.0x */ 0},
|
||||
{55, /* 011 -> 5.5x */ 0},
|
||||
{20, /* 100 -> 2.0x */ 0},
|
||||
{30, /* 101 -> 3.0x */ 0},
|
||||
{60, /* 110 -> 6.0x */ 0},
|
||||
{55, /* 011 -> 5.5x */ 0},
|
||||
{50, /* 001 -> 5.0x */ 0},
|
||||
{45, /* 000 -> 4.5x */ 0},
|
||||
{40, /* 010 -> 4.0x */ 0},
|
||||
{35, /* 111 -> 3.5x */ 0},
|
||||
{30, /* 101 -> 3.0x */ 0},
|
||||
{20, /* 100 -> 2.0x */ 0},
|
||||
{0, CPUFREQ_TABLE_END}
|
||||
};
|
||||
|
||||
static const u8 index_to_register[8] = { 6, 3, 1, 0, 2, 7, 5, 4 };
|
||||
static const u8 register_to_index[8] = { 3, 2, 4, 1, 7, 6, 0, 5 };
|
||||
|
||||
static const struct {
|
||||
unsigned freq;
|
||||
unsigned mult;
|
||||
} usual_frequency_table[] = {
|
||||
{ 400000, 40 }, // 100 * 4
|
||||
{ 450000, 45 }, // 100 * 4.5
|
||||
{ 475000, 50 }, // 95 * 5
|
||||
{ 500000, 50 }, // 100 * 5
|
||||
{ 506250, 45 }, // 112.5 * 4.5
|
||||
{ 533500, 55 }, // 97 * 5.5
|
||||
{ 550000, 55 }, // 100 * 5.5
|
||||
{ 562500, 50 }, // 112.5 * 5
|
||||
{ 570000, 60 }, // 95 * 6
|
||||
{ 600000, 60 }, // 100 * 6
|
||||
{ 618750, 55 }, // 112.5 * 5.5
|
||||
{ 660000, 55 }, // 120 * 5.5
|
||||
{ 675000, 60 }, // 112.5 * 6
|
||||
{ 720000, 60 }, // 120 * 6
|
||||
};
|
||||
|
||||
#define FREQ_RANGE 3000
|
||||
|
||||
/**
|
||||
* powernow_k6_get_cpu_multiplier - returns the current FSB multiplier
|
||||
*
|
||||
* Returns the current setting of the frequency multiplier. Core clock
|
||||
* Returns the current setting of the frequency multiplier. Core clock
|
||||
* speed is frequency of the Front-Side Bus multiplied with this value.
|
||||
*/
|
||||
static int powernow_k6_get_cpu_multiplier(void)
|
||||
{
|
||||
u64 invalue = 0;
|
||||
unsigned long invalue = 0;
|
||||
u32 msrval;
|
||||
|
||||
local_irq_disable();
|
||||
|
||||
msrval = POWERNOW_IOPORT + 0x1;
|
||||
wrmsr(MSR_K6_EPMR, msrval, 0); /* enable the PowerNow port */
|
||||
invalue = inl(POWERNOW_IOPORT + 0x8);
|
||||
msrval = POWERNOW_IOPORT + 0x0;
|
||||
wrmsr(MSR_K6_EPMR, msrval, 0); /* disable it again */
|
||||
|
||||
return clock_ratio[(invalue >> 5)&7].driver_data;
|
||||
local_irq_enable();
|
||||
|
||||
return clock_ratio[register_to_index[(invalue >> 5)&7]].driver_data;
|
||||
}
|
||||
|
||||
static void powernow_k6_set_cpu_multiplier(unsigned int best_i)
|
||||
{
|
||||
unsigned long outvalue, invalue;
|
||||
unsigned long msrval;
|
||||
unsigned long cr0;
|
||||
|
||||
/* we now need to transform best_i to the BVC format, see AMD#23446 */

/*
* The processor doesn't respond to inquiry cycles while changing the
* frequency, so we must disable cache.
*/
local_irq_disable();
cr0 = read_cr0();
write_cr0(cr0 | X86_CR0_CD);
wbinvd();
|
||||
outvalue = (1<<12) | (1<<10) | (1<<9) | (index_to_register[best_i]<<5);
|
||||
|
||||
msrval = POWERNOW_IOPORT + 0x1;
|
||||
wrmsr(MSR_K6_EPMR, msrval, 0); /* enable the PowerNow port */
|
||||
invalue = inl(POWERNOW_IOPORT + 0x8);
|
||||
invalue = invalue & 0x1f;
|
||||
outvalue = outvalue | invalue;
|
||||
outl(outvalue, (POWERNOW_IOPORT + 0x8));
|
||||
msrval = POWERNOW_IOPORT + 0x0;
|
||||
wrmsr(MSR_K6_EPMR, msrval, 0); /* disable it again */
|
||||
|
||||
write_cr0(cr0);
|
||||
local_irq_enable();
|
||||
}
|
||||
|
||||
/**
|
||||
* powernow_k6_target - set the PowerNow! multiplier
|
||||
@ -71,8 +138,6 @@ static int powernow_k6_get_cpu_multiplier(void)
|
||||
static int powernow_k6_target(struct cpufreq_policy *policy,
|
||||
unsigned int best_i)
|
||||
{
|
||||
unsigned long outvalue = 0, invalue = 0;
|
||||
unsigned long msrval;
|
||||
struct cpufreq_freqs freqs;
|
||||
|
||||
if (clock_ratio[best_i].driver_data > max_multiplier) {
|
||||
@ -85,35 +150,63 @@ static int powernow_k6_target(struct cpufreq_policy *policy,
|
||||
|
||||
cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
|
||||
|
||||
/* we now need to transform best_i to the BVC format, see AMD#23446 */
|
||||
|
||||
outvalue = (1<<12) | (1<<10) | (1<<9) | (best_i<<5);
|
||||
|
||||
msrval = POWERNOW_IOPORT + 0x1;
|
||||
wrmsr(MSR_K6_EPMR, msrval, 0); /* enable the PowerNow port */
|
||||
invalue = inl(POWERNOW_IOPORT + 0x8);
|
||||
invalue = invalue & 0xf;
|
||||
outvalue = outvalue | invalue;
|
||||
outl(outvalue , (POWERNOW_IOPORT + 0x8));
|
||||
msrval = POWERNOW_IOPORT + 0x0;
|
||||
wrmsr(MSR_K6_EPMR, msrval, 0); /* disable it again */
|
||||
powernow_k6_set_cpu_multiplier(best_i);
|
||||
|
||||
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
||||
static int powernow_k6_cpu_init(struct cpufreq_policy *policy)
|
||||
{
|
||||
unsigned int i, f;
|
||||
unsigned khz;
|
||||
|
||||
if (policy->cpu != 0)
|
||||
return -ENODEV;
|
||||
|
||||
/* get frequencies */
|
||||
max_multiplier = powernow_k6_get_cpu_multiplier();
|
||||
busfreq = cpu_khz / max_multiplier;
|
||||
max_multiplier = 0;
|
||||
khz = cpu_khz;
|
||||
for (i = 0; i < ARRAY_SIZE(usual_frequency_table); i++) {
|
||||
if (khz >= usual_frequency_table[i].freq - FREQ_RANGE &&
|
||||
khz <= usual_frequency_table[i].freq + FREQ_RANGE) {
|
||||
khz = usual_frequency_table[i].freq;
|
||||
max_multiplier = usual_frequency_table[i].mult;
|
||||
break;
|
||||
}
|
||||
}
|
||||
if (param_max_multiplier) {
|
||||
for (i = 0; (clock_ratio[i].frequency != CPUFREQ_TABLE_END); i++) {
|
||||
if (clock_ratio[i].driver_data == param_max_multiplier) {
|
||||
max_multiplier = param_max_multiplier;
|
||||
goto have_max_multiplier;
|
||||
}
|
||||
}
|
||||
printk(KERN_ERR "powernow-k6: invalid max_multiplier parameter, valid parameters 20, 30, 35, 40, 45, 50, 55, 60\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (!max_multiplier) {
|
||||
printk(KERN_WARNING "powernow-k6: unknown frequency %u, cannot determine current multiplier\n", khz);
|
||||
printk(KERN_WARNING "powernow-k6: use module parameters max_multiplier and bus_frequency\n");
|
||||
return -EOPNOTSUPP;
|
||||
}
|
||||
|
||||
have_max_multiplier:
|
||||
param_max_multiplier = max_multiplier;
|
||||
|
||||
if (param_busfreq) {
|
||||
if (param_busfreq >= 50000 && param_busfreq <= 150000) {
|
||||
busfreq = param_busfreq / 10;
|
||||
goto have_busfreq;
|
||||
}
|
||||
printk(KERN_ERR "powernow-k6: invalid bus_frequency parameter, allowed range 50000 - 150000 kHz\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
busfreq = khz / max_multiplier;
|
||||
have_busfreq:
|
||||
param_busfreq = busfreq * 10;
|
||||
|
||||
/* table init */
|
||||
for (i = 0; (clock_ratio[i].frequency != CPUFREQ_TABLE_END); i++) {
|
||||
@ -125,7 +218,7 @@ static int powernow_k6_cpu_init(struct cpufreq_policy *policy)
|
||||
}
|
||||
|
||||
/* cpuinfo and default policy values */
|
||||
policy->cpuinfo.transition_latency = 200000;
|
||||
policy->cpuinfo.transition_latency = 500000;
|
||||
|
||||
return cpufreq_table_validate_and_show(policy, clock_ratio);
|
||||
}
|
||||
|
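
Illustrative note (not part of the merge diff): the reworked powernow-k6 probe above infers the current multiplier by matching the measured CPU clock (cpu_khz) against a table of common bus-frequency/multiplier products within a +/- 3 MHz window, and asks for the max_multiplier/bus_frequency module parameters when nothing matches. A minimal user-space sketch of that matching step follows; the sample reading and the reduced table are assumptions for illustration only.

/* Illustrative sketch only: frequency -> multiplier matching as in the probe above. */
#include <stdio.h>

#define FREQ_RANGE 3000   /* kHz tolerance, same value as in the diff */

static const struct { unsigned int freq; unsigned int mult; } usual[] = {
	{ 400000, 40 }, { 450000, 45 }, { 500000, 50 },
	{ 550000, 55 }, { 600000, 60 },          /* subset for illustration */
};

/* Returns the multiplier (times 10, so 45 means 4.5x), or 0 if unknown. */
static unsigned int guess_multiplier(unsigned int khz)
{
	for (unsigned int i = 0; i < sizeof(usual) / sizeof(usual[0]); i++)
		if (khz >= usual[i].freq - FREQ_RANGE &&
		    khz <= usual[i].freq + FREQ_RANGE)
			return usual[i].mult;
	return 0;   /* caller must fall back to the module parameters */
}

int main(void)
{
	unsigned int khz = 451000;   /* hypothetical cpu_khz reading */
	unsigned int mult = guess_multiplier(khz);

	if (mult)
		printf("multiplier %u.%u, bus %u kHz\n",
		       mult / 10, mult % 10, khz * 10 / mult);
	else
		printf("unknown frequency, need max_multiplier/bus_frequency\n");
	return 0;
}
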
@ -964,14 +964,9 @@ static int transition_frequency_fidvid(struct powernow_k8_data *data,
|
||||
cpufreq_cpu_put(policy);
|
||||
|
||||
cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
|
||||
|
||||
res = transition_fid_vid(data, fid, vid);
|
||||
if (res)
|
||||
freqs.new = freqs.old;
|
||||
else
|
||||
freqs.new = find_khz_freq_from_fid(data->currfid);
|
||||
cpufreq_notify_post_transition(policy, &freqs, res);
|
||||
|
||||
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
|
||||
return res;
|
||||
}
|
||||
|
||||
|
@ -24,12 +24,10 @@
|
||||
|
||||
/**
|
||||
* struct cpu_data - per CPU data struct
|
||||
* @clk: the clk of CPU
|
||||
* @parent: the parent node of cpu clock
|
||||
* @table: frequency table
|
||||
*/
|
||||
struct cpu_data {
|
||||
struct clk *clk;
|
||||
struct device_node *parent;
|
||||
struct cpufreq_frequency_table *table;
|
||||
};
|
||||
@ -81,13 +79,6 @@ static inline const struct cpumask *cpu_core_mask(int cpu)
|
||||
}
|
||||
#endif
|
||||
|
||||
static unsigned int corenet_cpufreq_get_speed(unsigned int cpu)
|
||||
{
|
||||
struct cpu_data *data = per_cpu(cpu_data, cpu);
|
||||
|
||||
return clk_get_rate(data->clk) / 1000;
|
||||
}
|
||||
|
||||
/* reduce the duplicated frequencies in frequency table */
|
||||
static void freq_table_redup(struct cpufreq_frequency_table *freq_table,
|
||||
int count)
|
||||
@ -158,8 +149,8 @@ static int corenet_cpufreq_cpu_init(struct cpufreq_policy *policy)
|
||||
goto err_np;
|
||||
}
|
||||
|
||||
data->clk = of_clk_get(np, 0);
|
||||
if (IS_ERR(data->clk)) {
|
||||
policy->clk = of_clk_get(np, 0);
|
||||
if (IS_ERR(policy->clk)) {
|
||||
pr_err("%s: no clock information\n", __func__);
|
||||
goto err_nomem2;
|
||||
}
|
||||
@ -255,7 +246,7 @@ static int corenet_cpufreq_target(struct cpufreq_policy *policy,
|
||||
struct cpu_data *data = per_cpu(cpu_data, policy->cpu);
|
||||
|
||||
parent = of_clk_get(data->parent, data->table[index].driver_data);
|
||||
return clk_set_parent(data->clk, parent);
|
||||
return clk_set_parent(policy->clk, parent);
|
||||
}
|
||||
|
||||
static struct cpufreq_driver ppc_corenet_cpufreq_driver = {
|
||||
@ -265,7 +256,7 @@ static struct cpufreq_driver ppc_corenet_cpufreq_driver = {
|
||||
.exit = __exit_p(corenet_cpufreq_cpu_exit),
|
||||
.verify = cpufreq_generic_frequency_table_verify,
|
||||
.target_index = corenet_cpufreq_target,
|
||||
.get = corenet_cpufreq_get_speed,
|
||||
.get = cpufreq_generic_get,
|
||||
.attr = cpufreq_generic_attr,
|
||||
};
|
||||
|
||||
|
@ -423,6 +423,7 @@ static int pxa_cpufreq_init(struct cpufreq_policy *policy)
|
||||
}
|
||||
|
||||
static struct cpufreq_driver pxa_cpufreq_driver = {
|
||||
.flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK,
|
||||
.verify = cpufreq_generic_frequency_table_verify,
|
||||
.target_index = pxa_set_target,
|
||||
.init = pxa_cpufreq_init,
|
||||
|
@ -201,6 +201,7 @@ static int pxa3xx_cpufreq_init(struct cpufreq_policy *policy)
|
||||
}
|
||||
|
||||
static struct cpufreq_driver pxa3xx_cpufreq_driver = {
|
||||
.flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK,
|
||||
.verify = cpufreq_generic_frequency_table_verify,
|
||||
.target_index = pxa3xx_cpufreq_set,
|
||||
.init = pxa3xx_cpufreq_init,
|
||||
|
@ -481,7 +481,7 @@ err_hclk:
|
||||
}
|
||||
|
||||
static struct cpufreq_driver s3c2416_cpufreq_driver = {
|
||||
.flags = 0,
|
||||
.flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK,
|
||||
.verify = cpufreq_generic_frequency_table_verify,
|
||||
.target_index = s3c2416_cpufreq_set_target,
|
||||
.get = s3c2416_cpufreq_get_speed,
|
||||
|
@ -22,8 +22,6 @@
|
||||
#include <linux/err.h>
|
||||
#include <linux/io.h>
|
||||
|
||||
#include <mach/hardware.h>
|
||||
|
||||
#include <asm/mach/arch.h>
|
||||
#include <asm/mach/map.h>
|
||||
|
||||
@ -55,7 +53,7 @@ static inline int within_khz(unsigned long a, unsigned long b)
|
||||
* specified in @cfg. The values are stored in @cfg for later use
|
||||
* by the relevant set routine if the request settings can be reached.
|
||||
*/
|
||||
int s3c2440_cpufreq_calcdivs(struct s3c_cpufreq_config *cfg)
|
||||
static int s3c2440_cpufreq_calcdivs(struct s3c_cpufreq_config *cfg)
|
||||
{
|
||||
unsigned int hdiv, pdiv;
|
||||
unsigned long hclk, fclk, armclk;
|
||||
@ -242,7 +240,7 @@ static int s3c2440_cpufreq_calctable(struct s3c_cpufreq_config *cfg,
|
||||
return ret;
|
||||
}
|
||||
|
||||
struct s3c_cpufreq_info s3c2440_cpufreq_info = {
|
||||
static struct s3c_cpufreq_info s3c2440_cpufreq_info = {
|
||||
.max = {
|
||||
.fclk = 400000000,
|
||||
.hclk = 133333333,
|
||||
|
@ -355,11 +355,6 @@ static int s3c_cpufreq_target(struct cpufreq_policy *policy,
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
static unsigned int s3c_cpufreq_get(unsigned int cpu)
|
||||
{
|
||||
return clk_get_rate(clk_arm) / 1000;
|
||||
}
|
||||
|
||||
struct clk *s3c_cpufreq_clk_get(struct device *dev, const char *name)
|
||||
{
|
||||
struct clk *clk;
|
||||
@ -373,6 +368,7 @@ struct clk *s3c_cpufreq_clk_get(struct device *dev, const char *name)
|
||||
|
||||
static int s3c_cpufreq_init(struct cpufreq_policy *policy)
|
||||
{
|
||||
policy->clk = clk_arm;
|
||||
return cpufreq_generic_init(policy, ftab, cpu_cur.info->latency);
|
||||
}
|
||||
|
||||
@ -408,7 +404,7 @@ static int s3c_cpufreq_suspend(struct cpufreq_policy *policy)
|
||||
{
|
||||
suspend_pll.frequency = clk_get_rate(_clk_mpll);
|
||||
suspend_pll.driver_data = __raw_readl(S3C2410_MPLLCON);
|
||||
suspend_freq = s3c_cpufreq_get(0) * 1000;
|
||||
suspend_freq = clk_get_rate(clk_arm);
|
||||
|
||||
return 0;
|
||||
}
|
||||
@ -448,9 +444,9 @@ static int s3c_cpufreq_resume(struct cpufreq_policy *policy)
|
||||
#endif
|
||||
|
||||
static struct cpufreq_driver s3c24xx_driver = {
|
||||
.flags = CPUFREQ_STICKY,
|
||||
.flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
|
||||
.target = s3c_cpufreq_target,
|
||||
.get = s3c_cpufreq_get,
|
||||
.get = cpufreq_generic_get,
|
||||
.init = s3c_cpufreq_init,
|
||||
.suspend = s3c_cpufreq_suspend,
|
||||
.resume = s3c_cpufreq_resume,
|
||||
@ -509,7 +505,7 @@ int __init s3c_cpufreq_setboard(struct s3c_cpufreq_board *board)
|
||||
return 0;
|
||||
}
|
||||
|
||||
int __init s3c_cpufreq_auto_io(void)
|
||||
static int __init s3c_cpufreq_auto_io(void)
|
||||
{
|
||||
int ret;
|
||||
|
||||
|
@ -19,7 +19,6 @@
|
||||
#include <linux/regulator/consumer.h>
|
||||
#include <linux/module.h>
|
||||
|
||||
static struct clk *armclk;
|
||||
static struct regulator *vddarm;
|
||||
static unsigned long regulator_latency;
|
||||
|
||||
@ -54,14 +53,6 @@ static struct cpufreq_frequency_table s3c64xx_freq_table[] = {
|
||||
};
|
||||
#endif
|
||||
|
||||
static unsigned int s3c64xx_cpufreq_get_speed(unsigned int cpu)
|
||||
{
|
||||
if (cpu != 0)
|
||||
return 0;
|
||||
|
||||
return clk_get_rate(armclk) / 1000;
|
||||
}
|
||||
|
||||
static int s3c64xx_cpufreq_set_target(struct cpufreq_policy *policy,
|
||||
unsigned int index)
|
||||
{
|
||||
@ -69,7 +60,7 @@ static int s3c64xx_cpufreq_set_target(struct cpufreq_policy *policy,
|
||||
unsigned int old_freq, new_freq;
|
||||
int ret;
|
||||
|
||||
old_freq = clk_get_rate(armclk) / 1000;
|
||||
old_freq = clk_get_rate(policy->clk) / 1000;
|
||||
new_freq = s3c64xx_freq_table[index].frequency;
|
||||
dvfs = &s3c64xx_dvfs_table[s3c64xx_freq_table[index].driver_data];
|
||||
|
||||
@ -86,7 +77,7 @@ static int s3c64xx_cpufreq_set_target(struct cpufreq_policy *policy,
|
||||
}
|
||||
#endif
|
||||
|
||||
ret = clk_set_rate(armclk, new_freq * 1000);
|
||||
ret = clk_set_rate(policy->clk, new_freq * 1000);
|
||||
if (ret < 0) {
|
||||
pr_err("Failed to set rate %dkHz: %d\n",
|
||||
new_freq, ret);
|
||||
@ -101,7 +92,7 @@ static int s3c64xx_cpufreq_set_target(struct cpufreq_policy *policy,
|
||||
if (ret != 0) {
|
||||
pr_err("Failed to set VDDARM for %dkHz: %d\n",
|
||||
new_freq, ret);
|
||||
if (clk_set_rate(armclk, old_freq * 1000) < 0)
|
||||
if (clk_set_rate(policy->clk, old_freq * 1000) < 0)
|
||||
pr_err("Failed to restore original clock rate\n");
|
||||
|
||||
return ret;
|
||||
@ -110,7 +101,7 @@ static int s3c64xx_cpufreq_set_target(struct cpufreq_policy *policy,
|
||||
#endif
|
||||
|
||||
pr_debug("Set actual frequency %lukHz\n",
|
||||
clk_get_rate(armclk) / 1000);
|
||||
clk_get_rate(policy->clk) / 1000);
|
||||
|
||||
return 0;
|
||||
}
|
||||
@ -169,11 +160,11 @@ static int s3c64xx_cpufreq_driver_init(struct cpufreq_policy *policy)
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
armclk = clk_get(NULL, "armclk");
|
||||
if (IS_ERR(armclk)) {
|
||||
policy->clk = clk_get(NULL, "armclk");
|
||||
if (IS_ERR(policy->clk)) {
|
||||
pr_err("Unable to obtain ARMCLK: %ld\n",
|
||||
PTR_ERR(armclk));
|
||||
return PTR_ERR(armclk);
|
||||
PTR_ERR(policy->clk));
|
||||
return PTR_ERR(policy->clk);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_REGULATOR
|
||||
@ -193,7 +184,7 @@ static int s3c64xx_cpufreq_driver_init(struct cpufreq_policy *policy)
|
||||
unsigned long r;
|
||||
|
||||
/* Check for frequencies we can generate */
|
||||
r = clk_round_rate(armclk, freq->frequency * 1000);
|
||||
r = clk_round_rate(policy->clk, freq->frequency * 1000);
|
||||
r /= 1000;
|
||||
if (r != freq->frequency) {
|
||||
pr_debug("%dkHz unsupported by clock\n",
|
||||
@ -203,7 +194,7 @@ static int s3c64xx_cpufreq_driver_init(struct cpufreq_policy *policy)
|
||||
|
||||
/* If we have no regulator then assume startup
|
||||
* frequency is the maximum we can support. */
|
||||
if (!vddarm && freq->frequency > s3c64xx_cpufreq_get_speed(0))
|
||||
if (!vddarm && freq->frequency > clk_get_rate(policy->clk) / 1000)
|
||||
freq->frequency = CPUFREQ_ENTRY_INVALID;
|
||||
|
||||
freq++;
|
||||
@ -219,17 +210,17 @@ static int s3c64xx_cpufreq_driver_init(struct cpufreq_policy *policy)
|
||||
pr_err("Failed to configure frequency table: %d\n",
|
||||
ret);
|
||||
regulator_put(vddarm);
|
||||
clk_put(armclk);
|
||||
clk_put(policy->clk);
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static struct cpufreq_driver s3c64xx_cpufreq_driver = {
|
||||
.flags = 0,
|
||||
.flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK,
|
||||
.verify = cpufreq_generic_frequency_table_verify,
|
||||
.target_index = s3c64xx_cpufreq_set_target,
|
||||
.get = s3c64xx_cpufreq_get_speed,
|
||||
.get = cpufreq_generic_get,
|
||||
.init = s3c64xx_cpufreq_driver_init,
|
||||
.name = "s3c",
|
||||
};
|
||||
|
@ -23,7 +23,6 @@
|
||||
#include <mach/map.h>
|
||||
#include <mach/regs-clock.h>
|
||||
|
||||
static struct clk *cpu_clk;
|
||||
static struct clk *dmc0_clk;
|
||||
static struct clk *dmc1_clk;
|
||||
static DEFINE_MUTEX(set_freq_lock);
|
||||
@ -164,14 +163,6 @@ static void s5pv210_set_refresh(enum s5pv210_dmc_port ch, unsigned long freq)
|
||||
__raw_writel(tmp1, reg);
|
||||
}
|
||||
|
||||
static unsigned int s5pv210_getspeed(unsigned int cpu)
|
||||
{
|
||||
if (cpu)
|
||||
return 0;
|
||||
|
||||
return clk_get_rate(cpu_clk) / 1000;
|
||||
}
|
||||
|
||||
static int s5pv210_target(struct cpufreq_policy *policy, unsigned int index)
|
||||
{
|
||||
unsigned long reg;
|
||||
@ -193,7 +184,7 @@ static int s5pv210_target(struct cpufreq_policy *policy, unsigned int index)
|
||||
goto exit;
|
||||
}
|
||||
|
||||
old_freq = s5pv210_getspeed(0);
|
||||
old_freq = policy->cur;
|
||||
new_freq = s5pv210_freq_table[index].frequency;
|
||||
|
||||
/* Finding current running level index */
|
||||
@ -471,9 +462,9 @@ static int __init s5pv210_cpu_init(struct cpufreq_policy *policy)
|
||||
unsigned long mem_type;
|
||||
int ret;
|
||||
|
||||
cpu_clk = clk_get(NULL, "armclk");
|
||||
if (IS_ERR(cpu_clk))
|
||||
return PTR_ERR(cpu_clk);
|
||||
policy->clk = clk_get(NULL, "armclk");
|
||||
if (IS_ERR(policy->clk))
|
||||
return PTR_ERR(policy->clk);
|
||||
|
||||
dmc0_clk = clk_get(NULL, "sclk_dmc0");
|
||||
if (IS_ERR(dmc0_clk)) {
|
||||
@ -516,7 +507,7 @@ static int __init s5pv210_cpu_init(struct cpufreq_policy *policy)
|
||||
out_dmc1:
|
||||
clk_put(dmc0_clk);
|
||||
out_dmc0:
|
||||
clk_put(cpu_clk);
|
||||
clk_put(policy->clk);
|
||||
return ret;
|
||||
}
|
||||
|
||||
@ -560,10 +551,10 @@ static int s5pv210_cpufreq_reboot_notifier_event(struct notifier_block *this,
|
||||
}
|
||||
|
||||
static struct cpufreq_driver s5pv210_driver = {
|
||||
.flags = CPUFREQ_STICKY,
|
||||
.flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
|
||||
.verify = cpufreq_generic_frequency_table_verify,
|
||||
.target_index = s5pv210_target,
|
||||
.get = s5pv210_getspeed,
|
||||
.get = cpufreq_generic_get,
|
||||
.init = s5pv210_cpu_init,
|
||||
.name = "s5pv210",
|
||||
#ifdef CONFIG_PM
|
||||
|
@ -201,7 +201,7 @@ static int __init sa1100_cpu_init(struct cpufreq_policy *policy)
|
||||
}
|
||||
|
||||
static struct cpufreq_driver sa1100_driver __refdata = {
|
||||
.flags = CPUFREQ_STICKY,
|
||||
.flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
|
||||
.verify = cpufreq_generic_frequency_table_verify,
|
||||
.target_index = sa1100_target,
|
||||
.get = sa11x0_getspeed,
|
||||
|
@ -312,7 +312,7 @@ static int __init sa1110_cpu_init(struct cpufreq_policy *policy)
|
||||
/* sa1110_driver needs __refdata because it must remain after init registers
|
||||
* it with cpufreq_register_driver() */
|
||||
static struct cpufreq_driver sa1110_driver __refdata = {
|
||||
.flags = CPUFREQ_STICKY,
|
||||
.flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
|
||||
.verify = cpufreq_generic_frequency_table_verify,
|
||||
.target_index = sa1110_target,
|
||||
.get = sa11x0_getspeed,
|
||||
|
@ -30,11 +30,6 @@ static struct {
|
||||
u32 cnt;
|
||||
} spear_cpufreq;
|
||||
|
||||
static unsigned int spear_cpufreq_get(unsigned int cpu)
|
||||
{
|
||||
return clk_get_rate(spear_cpufreq.clk) / 1000;
|
||||
}
|
||||
|
||||
static struct clk *spear1340_cpu_get_possible_parent(unsigned long newfreq)
|
||||
{
|
||||
struct clk *sys_pclk;
|
||||
@ -138,7 +133,7 @@ static int spear_cpufreq_target(struct cpufreq_policy *policy,
|
||||
}
|
||||
|
||||
newfreq = clk_round_rate(srcclk, newfreq * mult);
|
||||
if (newfreq < 0) {
|
||||
if (newfreq <= 0) {
|
||||
pr_err("clk_round_rate failed for cpu src clock\n");
|
||||
return newfreq;
|
||||
}
|
||||
@ -156,16 +151,17 @@ static int spear_cpufreq_target(struct cpufreq_policy *policy,
|
||||
|
||||
static int spear_cpufreq_init(struct cpufreq_policy *policy)
|
||||
{
|
||||
policy->clk = spear_cpufreq.clk;
|
||||
return cpufreq_generic_init(policy, spear_cpufreq.freq_tbl,
|
||||
spear_cpufreq.transition_latency);
|
||||
}
|
||||
|
||||
static struct cpufreq_driver spear_cpufreq_driver = {
|
||||
.name = "cpufreq-spear",
|
||||
.flags = CPUFREQ_STICKY,
|
||||
.flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
|
||||
.verify = cpufreq_generic_frequency_table_verify,
|
||||
.target_index = spear_cpufreq_target,
|
||||
.get = spear_cpufreq_get,
|
||||
.get = cpufreq_generic_get,
|
||||
.init = spear_cpufreq_init,
|
||||
.exit = cpufreq_generic_exit,
|
||||
.attr = cpufreq_generic_attr,
|
||||
|
@ -140,38 +140,6 @@ static int speedstep_smi_get_freqs(unsigned int *low, unsigned int *high)
|
||||
return result;
|
||||
}
|
||||
|
||||
/**
|
||||
* speedstep_get_state - set the SpeedStep state
|
||||
* @state: processor frequency state (SPEEDSTEP_LOW or SPEEDSTEP_HIGH)
|
||||
*
|
||||
*/
|
||||
static int speedstep_get_state(void)
|
||||
{
|
||||
u32 function = GET_SPEEDSTEP_STATE;
|
||||
u32 result, state, edi, command, dummy;
|
||||
|
||||
command = (smi_sig & 0xffffff00) | (smi_cmd & 0xff);
|
||||
|
||||
pr_debug("trying to determine current setting with command %x "
|
||||
"at port %x\n", command, smi_port);
|
||||
|
||||
__asm__ __volatile__(
|
||||
"push %%ebp\n"
|
||||
"out %%al, (%%dx)\n"
|
||||
"pop %%ebp\n"
|
||||
: "=a" (result),
|
||||
"=b" (state), "=D" (edi),
|
||||
"=c" (dummy), "=d" (dummy), "=S" (dummy)
|
||||
: "a" (command), "b" (function), "c" (0),
|
||||
"d" (smi_port), "S" (0), "D" (0)
|
||||
);
|
||||
|
||||
pr_debug("state is %x, result is %x\n", state, result);
|
||||
|
||||
return state & 1;
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* speedstep_set_state - set the SpeedStep state
|
||||
* @state: new processor frequency state (SPEEDSTEP_LOW or SPEEDSTEP_HIGH)
|
||||
|
@ -47,21 +47,9 @@ static struct clk *pll_x_clk;
|
||||
static struct clk *pll_p_clk;
|
||||
static struct clk *emc_clk;
|
||||
|
||||
static unsigned long target_cpu_speed[NUM_CPUS];
|
||||
static DEFINE_MUTEX(tegra_cpu_lock);
|
||||
static bool is_suspended;
|
||||
|
||||
static unsigned int tegra_getspeed(unsigned int cpu)
|
||||
{
|
||||
unsigned long rate;
|
||||
|
||||
if (cpu >= NUM_CPUS)
|
||||
return 0;
|
||||
|
||||
rate = clk_get_rate(cpu_clk) / 1000;
|
||||
return rate;
|
||||
}
|
||||
|
||||
static int tegra_cpu_clk_set_rate(unsigned long rate)
|
||||
{
|
||||
int ret;
|
||||
@ -103,9 +91,6 @@ static int tegra_update_cpu_speed(struct cpufreq_policy *policy,
|
||||
{
|
||||
int ret = 0;
|
||||
|
||||
if (tegra_getspeed(0) == rate)
|
||||
return ret;
|
||||
|
||||
/*
|
||||
* Vote on memory bus frequency based on cpu frequency
|
||||
* This sets the minimum frequency, display or avp may request higher
|
||||
@ -125,33 +110,16 @@ static int tegra_update_cpu_speed(struct cpufreq_policy *policy,
|
||||
return ret;
|
||||
}
|
||||
|
||||
static unsigned long tegra_cpu_highest_speed(void)
|
||||
{
|
||||
unsigned long rate = 0;
|
||||
int i;
|
||||
|
||||
for_each_online_cpu(i)
|
||||
rate = max(rate, target_cpu_speed[i]);
|
||||
return rate;
|
||||
}
|
||||
|
||||
static int tegra_target(struct cpufreq_policy *policy, unsigned int index)
|
||||
{
|
||||
unsigned int freq;
|
||||
int ret = 0;
|
||||
int ret = -EBUSY;
|
||||
|
||||
mutex_lock(&tegra_cpu_lock);
|
||||
|
||||
if (is_suspended)
|
||||
goto out;
|
||||
if (!is_suspended)
|
||||
ret = tegra_update_cpu_speed(policy,
|
||||
freq_table[index].frequency);
|
||||
|
||||
freq = freq_table[index].frequency;
|
||||
|
||||
target_cpu_speed[policy->cpu] = freq;
|
||||
|
||||
ret = tegra_update_cpu_speed(policy, tegra_cpu_highest_speed());
|
||||
|
||||
out:
|
||||
mutex_unlock(&tegra_cpu_lock);
|
||||
return ret;
|
||||
}
|
||||
@ -165,7 +133,8 @@ static int tegra_pm_notify(struct notifier_block *nb, unsigned long event,
|
||||
is_suspended = true;
|
||||
pr_info("Tegra cpufreq suspend: setting frequency to %d kHz\n",
|
||||
freq_table[0].frequency);
|
||||
tegra_update_cpu_speed(policy, freq_table[0].frequency);
|
||||
if (clk_get_rate(cpu_clk) / 1000 != freq_table[0].frequency)
|
||||
tegra_update_cpu_speed(policy, freq_table[0].frequency);
|
||||
cpufreq_cpu_put(policy);
|
||||
} else if (event == PM_POST_SUSPEND) {
|
||||
is_suspended = false;
|
||||
@ -189,8 +158,6 @@ static int tegra_cpu_init(struct cpufreq_policy *policy)
|
||||
clk_prepare_enable(emc_clk);
|
||||
clk_prepare_enable(cpu_clk);
|
||||
|
||||
target_cpu_speed[policy->cpu] = tegra_getspeed(policy->cpu);
|
||||
|
||||
/* FIXME: what's the actual transition time? */
|
||||
ret = cpufreq_generic_init(policy, freq_table, 300 * 1000);
|
||||
if (ret) {
|
||||
@ -202,6 +169,7 @@ static int tegra_cpu_init(struct cpufreq_policy *policy)
|
||||
if (policy->cpu == 0)
|
||||
register_pm_notifier(&tegra_cpu_pm_notifier);
|
||||
|
||||
policy->clk = cpu_clk;
|
||||
return 0;
|
||||
}
|
||||
|
||||
@ -214,9 +182,10 @@ static int tegra_cpu_exit(struct cpufreq_policy *policy)
|
||||
}
|
||||
|
||||
static struct cpufreq_driver tegra_cpufreq_driver = {
|
||||
.flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK,
|
||||
.verify = cpufreq_generic_frequency_table_verify,
|
||||
.target_index = tegra_target,
|
||||
.get = tegra_getspeed,
|
||||
.get = cpufreq_generic_get,
|
||||
.init = tegra_cpu_init,
|
||||
.exit = tegra_cpu_exit,
|
||||
.name = "tegra",
|
||||
|
@ -11,6 +11,7 @@
|
||||
* published by the Free Software Foundation.
|
||||
*/
|
||||
|
||||
#include <linux/err.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/init.h>
|
||||
@ -33,42 +34,34 @@ static int ucv2_verify_speed(struct cpufreq_policy *policy)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static unsigned int ucv2_getspeed(unsigned int cpu)
|
||||
{
|
||||
struct clk *mclk = clk_get(NULL, "MAIN_CLK");
|
||||
|
||||
if (cpu)
|
||||
return 0;
|
||||
return clk_get_rate(mclk)/1000;
|
||||
}
|
||||
|
||||
static int ucv2_target(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq,
|
||||
unsigned int relation)
|
||||
{
|
||||
unsigned int cur = ucv2_getspeed(0);
|
||||
struct cpufreq_freqs freqs;
|
||||
struct clk *mclk = clk_get(NULL, "MAIN_CLK");
|
||||
int ret;
|
||||
|
||||
freqs.old = policy->cur;
|
||||
freqs.new = target_freq;
|
||||
|
||||
cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
|
||||
ret = clk_set_rate(policy->mclk, target_freq * 1000);
|
||||
cpufreq_notify_post_transition(policy, &freqs, ret);
|
||||
|
||||
if (!clk_set_rate(mclk, target_freq * 1000)) {
|
||||
freqs.old = cur;
|
||||
freqs.new = target_freq;
|
||||
}
|
||||
|
||||
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
|
||||
|
||||
return 0;
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int __init ucv2_cpu_init(struct cpufreq_policy *policy)
|
||||
{
|
||||
if (policy->cpu != 0)
|
||||
return -EINVAL;
|
||||
|
||||
policy->min = policy->cpuinfo.min_freq = 250000;
|
||||
policy->max = policy->cpuinfo.max_freq = 1000000;
|
||||
policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
|
||||
policy->clk = clk_get(NULL, "MAIN_CLK");
|
||||
if (IS_ERR(policy->clk))
|
||||
return PTR_ERR(policy->clk);
|
||||
return 0;
|
||||
}
|
||||
|
||||
@ -76,7 +69,7 @@ static struct cpufreq_driver ucv2_driver = {
|
||||
.flags = CPUFREQ_STICKY,
|
||||
.verify = ucv2_verify_speed,
|
||||
.target = ucv2_target,
|
||||
.get = ucv2_getspeed,
|
||||
.get = cpufreq_generic_get,
|
||||
.init = ucv2_cpu_init,
|
||||
.name = "UniCore-II",
|
||||
};
|
||||
|
@ -131,8 +131,8 @@ static const struct exynos_tmu_registers exynos4412_tmu_registers = {
|
||||
|
||||
#define EXYNOS4412_TMU_DATA \
|
||||
.threshold_falling = 10, \
|
||||
.trigger_levels[0] = 85, \
|
||||
.trigger_levels[1] = 103, \
|
||||
.trigger_levels[0] = 70, \
|
||||
.trigger_levels[1] = 95, \
|
||||
.trigger_levels[2] = 110, \
|
||||
.trigger_levels[3] = 120, \
|
||||
.trigger_enable[0] = true, \
|
||||
@ -155,12 +155,12 @@ static const struct exynos_tmu_registers exynos4412_tmu_registers = {
|
||||
.second_point_trim = 85, \
|
||||
.default_temp_offset = 50, \
|
||||
.freq_tab[0] = { \
|
||||
.freq_clip_max = 800 * 1000, \
|
||||
.temp_level = 85, \
|
||||
.freq_clip_max = 1400 * 1000, \
|
||||
.temp_level = 70, \
|
||||
}, \
|
||||
.freq_tab[1] = { \
|
||||
.freq_clip_max = 200 * 1000, \
|
||||
.temp_level = 103, \
|
||||
.freq_clip_max = 400 * 1000, \
|
||||
.temp_level = 95, \
|
||||
}, \
|
||||
.freq_tab_count = 2, \
|
||||
.registers = &exynos4412_tmu_registers, \
|
||||
|
@ -11,6 +11,7 @@
|
||||
#ifndef _LINUX_CPUFREQ_H
|
||||
#define _LINUX_CPUFREQ_H
|
||||
|
||||
#include <linux/clk.h>
|
||||
#include <linux/cpumask.h>
|
||||
#include <linux/completion.h>
|
||||
#include <linux/kobject.h>
|
||||
@ -66,6 +67,7 @@ struct cpufreq_policy {
|
||||
unsigned int cpu; /* cpu nr of CPU managing this policy */
|
||||
unsigned int last_cpu; /* cpu nr of previous CPU that managed
|
||||
* this policy */
|
||||
struct clk *clk;
|
||||
struct cpufreq_cpuinfo cpuinfo;/* see above */
|
||||
|
||||
unsigned int min; /* in kHz */
|
||||
@ -225,6 +227,11 @@ struct cpufreq_driver {
|
||||
int (*suspend) (struct cpufreq_policy *policy);
|
||||
int (*resume) (struct cpufreq_policy *policy);
|
||||
struct freq_attr **attr;
|
||||
|
||||
/* platform specific boost support code */
|
||||
bool boost_supported;
|
||||
bool boost_enabled;
|
||||
int (*set_boost) (int state);
|
||||
};
|
||||
|
||||
/* flags */
|
||||
@ -252,6 +259,15 @@ struct cpufreq_driver {
|
||||
*/
|
||||
#define CPUFREQ_ASYNC_NOTIFICATION (1 << 4)
|
||||
|
||||
/*
* Set by drivers which want cpufreq core to check if CPU is running at a
* frequency present in freq-table exposed by the driver. For these drivers if
* CPU is found running at an out of table freq, we will try to set it to a freq
* from the table. And if that fails, we will stop further boot process by
* issuing a BUG_ON().
*/
#define CPUFREQ_NEED_INITIAL_FREQ_CHECK (1 << 5)

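Illustrative note (not part of the merge diff): many drivers in this series opt in by OR-ing the new flag into their existing .flags. The sketch below mirrors that pattern with placeholder "foo" names and a made-up two-entry table; it is not taken from any driver in this merge, and a real driver would also set policy->clk so that cpufreq_generic_get() can report the current frequency.

/* Sketch of the pattern applied to many drivers in this diff; "foo" names
 * are placeholders, not functions from any real driver in this merge. */
#include <linux/cpufreq.h>

static struct cpufreq_frequency_table foo_freq_table[] = {
	{ 0,  500000 },                     /* hypothetical entries, in kHz */
	{ 0, 1000000 },
	{ 0, CPUFREQ_TABLE_END },
};

static int foo_set_target(struct cpufreq_policy *policy, unsigned int index)
{
	return 0;	/* a real driver would program the clock here */
}

static int foo_cpufreq_init(struct cpufreq_policy *policy)
{
	/* a real driver would set policy->clk here before relying on
	 * cpufreq_generic_get() */
	return cpufreq_generic_init(policy, foo_freq_table, CPUFREQ_ETERNAL);
}

static struct cpufreq_driver foo_cpufreq_driver = {
	.flags		= CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
	.verify		= cpufreq_generic_frequency_table_verify,
	.target_index	= foo_set_target,
	.get		= cpufreq_generic_get,
	.init		= foo_cpufreq_init,
	.attr		= cpufreq_generic_attr,
};
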
int cpufreq_register_driver(struct cpufreq_driver *driver_data);
|
||||
int cpufreq_unregister_driver(struct cpufreq_driver *driver_data);
|
||||
|
||||
@ -299,6 +315,8 @@ cpufreq_verify_within_cpu_limits(struct cpufreq_policy *policy)
|
||||
#define CPUFREQ_NOTIFY (2)
|
||||
#define CPUFREQ_START (3)
|
||||
#define CPUFREQ_UPDATE_POLICY_CPU (4)
|
||||
#define CPUFREQ_CREATE_POLICY (5)
|
||||
#define CPUFREQ_REMOVE_POLICY (6)
|
||||
|
||||
#ifdef CONFIG_CPU_FREQ
|
||||
int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list);
|
||||
@ -306,6 +324,8 @@ int cpufreq_unregister_notifier(struct notifier_block *nb, unsigned int list);
|
||||
|
||||
void cpufreq_notify_transition(struct cpufreq_policy *policy,
|
||||
struct cpufreq_freqs *freqs, unsigned int state);
|
||||
void cpufreq_notify_post_transition(struct cpufreq_policy *policy,
|
||||
struct cpufreq_freqs *freqs, int transition_failed);
|
||||
|
||||
#else /* CONFIG_CPU_FREQ */
|
||||
static inline int cpufreq_register_notifier(struct notifier_block *nb,
|
||||
@ -420,6 +440,7 @@ extern struct cpufreq_governor cpufreq_gov_conservative;
|
||||
|
||||
#define CPUFREQ_ENTRY_INVALID ~0
|
||||
#define CPUFREQ_TABLE_END ~1
|
||||
#define CPUFREQ_BOOST_FREQ ~2
|
||||
|
||||
struct cpufreq_frequency_table {
|
||||
unsigned int driver_data; /* driver specific data, not used by core */
|
||||
@ -439,10 +460,30 @@ int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq,
|
||||
unsigned int relation,
|
||||
unsigned int *index);
|
||||
int cpufreq_frequency_table_get_index(struct cpufreq_policy *policy,
|
||||
unsigned int freq);
|
||||
|
||||
void cpufreq_frequency_table_update_policy_cpu(struct cpufreq_policy *policy);
|
||||
ssize_t cpufreq_show_cpus(const struct cpumask *mask, char *buf);
|
||||
|
||||
#ifdef CONFIG_CPU_FREQ
|
||||
int cpufreq_boost_trigger_state(int state);
|
||||
int cpufreq_boost_supported(void);
|
||||
int cpufreq_boost_enabled(void);
|
||||
#else
|
||||
static inline int cpufreq_boost_trigger_state(int state)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
static inline int cpufreq_boost_supported(void)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
static inline int cpufreq_boost_enabled(void)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
#endif
|
||||
/* the following funtion is for cpufreq core use only */
|
||||
struct cpufreq_frequency_table *cpufreq_frequency_get_table(unsigned int cpu);
|
||||
|
||||
@ -455,6 +496,7 @@ void cpufreq_frequency_table_put_attr(unsigned int cpu);
|
||||
int cpufreq_table_validate_and_show(struct cpufreq_policy *policy,
|
||||
struct cpufreq_frequency_table *table);
|
||||
|
||||
unsigned int cpufreq_generic_get(unsigned int cpu);
|
||||
int cpufreq_generic_init(struct cpufreq_policy *policy,
|
||||
struct cpufreq_frequency_table *table,
|
||||
unsigned int transition_latency);
|
||||
|
@ -35,6 +35,59 @@ DEFINE_EVENT(cpu, cpu_idle,
|
||||
TP_ARGS(state, cpu_id)
|
||||
);
|
||||
|
||||
TRACE_EVENT(pstate_sample,
|
||||
|
||||
TP_PROTO(u32 core_busy,
|
||||
u32 scaled_busy,
|
||||
u32 state,
|
||||
u64 mperf,
|
||||
u64 aperf,
|
||||
u32 energy,
|
||||
u32 freq
|
||||
),
|
||||
|
||||
TP_ARGS(core_busy,
|
||||
scaled_busy,
|
||||
state,
|
||||
mperf,
|
||||
aperf,
|
||||
energy,
|
||||
freq
|
||||
),
|
||||
|
||||
TP_STRUCT__entry(
|
||||
__field(u32, core_busy)
|
||||
__field(u32, scaled_busy)
|
||||
__field(u32, state)
|
||||
__field(u64, mperf)
|
||||
__field(u64, aperf)
|
||||
__field(u32, energy)
|
||||
__field(u32, freq)
|
||||
|
||||
),
|
||||
|
||||
TP_fast_assign(
|
||||
__entry->core_busy = core_busy;
|
||||
__entry->scaled_busy = scaled_busy;
|
||||
__entry->state = state;
|
||||
__entry->mperf = mperf;
|
||||
__entry->aperf = aperf;
|
||||
__entry->energy = energy;
|
||||
__entry->freq = freq;
|
||||
),
|
||||
|
||||
TP_printk("core_busy=%lu scaled=%lu state=%lu mperf=%llu aperf=%llu energy=%lu freq=%lu ",
|
||||
(unsigned long)__entry->core_busy,
|
||||
(unsigned long)__entry->scaled_busy,
|
||||
(unsigned long)__entry->state,
|
||||
(unsigned long long)__entry->mperf,
|
||||
(unsigned long long)__entry->aperf,
|
||||
(unsigned long)__entry->energy,
|
||||
(unsigned long)__entry->freq
|
||||
)
|
||||
|
||||
);
|
||||
|
||||
/* This file can get included multiple times, TRACE_HEADER_MULTI_READ at top */
|
||||
#ifndef _PWR_EVENT_AVOID_DOUBLE_DEFINING
|
||||
#define _PWR_EVENT_AVOID_DOUBLE_DEFINING