author		Linus Torvalds	2019-09-17 19:15:14 -0700
committer	Linus Torvalds	2019-09-17 19:15:14 -0700
commit		77dcfe2b9edc98286cf18e03c243c9b999f955d9 (patch)
tree		0ba3c4002b6c26c715bf03fac81d63de13c01d96 /drivers/acpi
parent		04cbfba6208592999d7bfe6609ec01dc3fde73f5 (diff)
parent		fc6763a2d7e0a7f49ccec97a46e92e9fb1f3f9dd (diff)
Merge tag 'pm-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael Wysocki:
"These include a rework of the main suspend-to-idle code flow (related
to the handling of spurious wakeups), a switch over of several users
of cpufreq notifiers to QoS-based limits, a new devfreq driver for
Tegra20, a new cpuidle driver and governor for virtualized guests, an
extension of the wakeup sources framework to expose wakeup sources as
device objects in sysfs, and more.
Specifics:
- Rework the main suspend-to-idle control flow to avoid repeating
"noirq" device resume and suspend operations in case of spurious
wakeups from the ACPI EC and decouple the ACPI EC wakeups support
from the LPS0 _DSM support (Rafael Wysocki).
- Extend the wakeup sources framework to expose wakeup sources as
device objects in sysfs (Tri Vo, Stephen Boyd); a usage sketch follows
this message.
- Expose system suspend statistics in sysfs (Kalesh Singh).
- Introduce a new haltpoll cpuidle driver and a new matching governor
for virtualized guests wanting to do guest-side polling in the idle
loop (Marcelo Tosatti, Joao Martins, Wanpeng Li, Stephen Rothwell).
- Fix the menu and teo cpuidle governors to allow the scheduler tick
to be stopped if PM QoS is used to limit the CPU idle state exit
latency in some cases (Rafael Wysocki); a latency-request sketch
follows this message.
- Increase the resolution of the play_idle() argument to microseconds
for more fine-grained injection of CPU idle cycles (Daniel Lezcano);
a call sketch follows this message.
- Switch over some users of cpufreq notifiers to the new QoS-based
frequency limits and drop the CPUFREQ_ADJUST and CPUFREQ_NOTIFY
policy notifier events (Viresh Kumar); a sketch of the new pattern
follows this message.
- Add new cpufreq driver based on nvmem for sun50i (Yangtao Li).
- Add support for MT8183 and MT8516 to the mediatek cpufreq driver
(Andrew-sh.Cheng, Fabien Parent).
- Add i.MX8MN support to the imx-cpufreq-dt cpufreq driver (Anson
Huang).
- Add qcs404 to cpufreq-dt-platdev blacklist (Jorge Ramirez-Ortiz).
- Update the qcom cpufreq driver (among other things, to make it
easier to extend and to use kryo cpufreq for other nvmem-based
SoCs) and add qcs404 support to it (Niklas Cassel, Douglas
RAILLARD, Sibi Sankar, Sricharan R).
- Fix assorted issues and make assorted minor improvements in the
cpufreq code (Colin Ian King, Douglas RAILLARD, Florian Fainelli,
Gustavo Silva, Hariprasad Kelam).
- Add new devfreq driver for NVidia Tegra20 (Dmitry Osipenko, Arnd
Bergmann).
- Add new Exynos PPMU events to devfreq events and extend that
mechanism (Lukasz Luba).
- Fix and clean up the exynos-bus devfreq driver (Kamil Konieczny).
- Improve devfreq documentation and governor code, fix spelling typos
in devfreq (Ezequiel Garcia, Krzysztof Kozlowski, Leonard Crestez,
MyungJoo Ham, Gaël PORTAY).
- Add regulators enable and disable to the OPP (operating performance
points) framework (Kamil Konieczny).
- Update the OPP framework to support multiple opp-suspend properties
(Anson Huang).
- Fix assorted issues and make assorted minor improvements in the OPP
code (Niklas Cassel, Viresh Kumar, Yue Hu).
- Clean up the generic power domains (genpd) framework (Ulf Hansson).
- Clean up assorted pieces of power management code and documentation
(Akinobu Mita, Amit Kucheria, Chuhong Yuan).
- Update the pm-graph tool to version 5.5 including multiple fixes
and improvements (Todd Brandt).
- Update the cpupower utility (Benjamin Weis, Geert Uytterhoeven,
Sébastien Szymanski)"
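
The wakeup-source rework referenced above changes wakeup_source_register()
to take the associated struct device, so registered sources show up as
device objects in sysfs (e.g. under /sys/class/wakeup/) in addition to the
traditional debugfs listing. A minimal sketch of the new call pattern,
using a hypothetical platform driver (the probe/remove names and the
wakeup-source pointer are illustrative only):

#include <linux/platform_device.h>
#include <linux/pm_wakeup.h>

static struct wakeup_source *example_ws;

static int example_probe(struct platform_device *pdev)
{
	/*
	 * Passing the device associates the wakeup source with it, which
	 * is what makes it visible as a device object in sysfs.
	 */
	example_ws = wakeup_source_register(&pdev->dev, dev_name(&pdev->dev));
	if (!example_ws)
		return -ENOMEM;

	return 0;
}

static int example_remove(struct platform_device *pdev)
{
	wakeup_source_unregister(example_ws);
	return 0;
}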
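
The governor fix above concerns the case where a PM QoS request caps the
CPU idle-state exit latency. For reference, a minimal sketch of how kernel
code imposed such a cap at the time of this merge, via the CPU/DMA latency
QoS class (the request variable and the 20 us value are illustrative;
userspace can do the same through /dev/cpu_dma_latency):

#include <linux/pm_qos.h>

static struct pm_qos_request example_latency_req;

static void example_limit_exit_latency(void)
{
	/* Ask for no more than 20 us of CPU wakeup latency. */
	pm_qos_add_request(&example_latency_req, PM_QOS_CPU_DMA_LATENCY, 20);
}

static void example_drop_latency_limit(void)
{
	pm_qos_remove_request(&example_latency_req);
}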
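
The play_idle() change referenced above switches the duration argument
from milliseconds to microseconds. A minimal sketch, assuming the
post-change prototype play_idle(unsigned int duration_us) from
<linux/cpu.h>; the caller and the 500 us value are illustrative:

#include <linux/cpu.h>

static void example_inject_idle(void)
{
	/* Force roughly 500 us of idle time on the calling CPU. */
	play_idle(500);
}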
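
The QoS-based frequency limits mentioned above (and used by the ACPI
processor code in the diff below) replace the CPUFREQ_ADJUST notifier
dance with per-CPU-device frequency constraints. A minimal sketch of the
pattern, with illustrative function names; values are in kHz, matching the
core_frequency * 1000 conversion visible in the diff:

#include <linux/cpu.h>
#include <linux/kernel.h>
#include <linux/pm_qos.h>

static struct dev_pm_qos_request example_max_freq_req;

static int example_add_freq_limit(int cpu)
{
	/* Start unconstrained (INT_MAX), as the ACPI code below does. */
	return dev_pm_qos_add_request(get_cpu_device(cpu),
				      &example_max_freq_req,
				      DEV_PM_QOS_MAX_FREQUENCY, INT_MAX);
}

static void example_update_freq_limit(s32 max_khz)
{
	/* cpufreq picks the new limit up without any policy notifier. */
	dev_pm_qos_update_request(&example_max_freq_req, max_khz);
}

static void example_remove_freq_limit(void)
{
	dev_pm_qos_remove_request(&example_max_freq_req);
}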
* tag 'pm-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (126 commits)
cpuidle-haltpoll: Enable kvm guest polling when dedicated physical CPUs are available
cpuidle-haltpoll: do not set an owner to allow modunload
cpuidle-haltpoll: return -ENODEV on modinit failure
cpuidle-haltpoll: set haltpoll as preferred governor
cpuidle: allow governor switch on cpuidle_register_driver()
PM: runtime: Documentation: add runtime_status ABI document
pm-graph: make setVal unbuffered again for python2 and python3
powercap: idle_inject: Use higher resolution for idle injection
cpuidle: play_idle: Increase the resolution to usec
cpuidle-haltpoll: vcpu hotplug support
cpufreq: Add qcs404 to cpufreq-dt-platdev blacklist
cpufreq: qcom: Add support for qcs404 on nvmem driver
cpufreq: qcom: Refactor the driver to make it easier to extend
cpufreq: qcom: Re-organise kryo cpufreq to use it for other nvmem based qcom socs
dt-bindings: opp: Add qcom-opp bindings with properties needed for CPR
dt-bindings: opp: qcom-nvmem: Support pstates provided by a power domain
Documentation: cpufreq: Update policy notifier documentation
cpufreq: Remove CPUFREQ_ADJUST and CPUFREQ_NOTIFY policy notifier events
PM / Domains: Verify PM domain type in dev_pm_genpd_set_performance_state()
PM / Domains: Simplify genpd_lookup_dev()
...
Diffstat (limited to 'drivers/acpi')
-rw-r--r--	drivers/acpi/acpica/evxfgpe.c		|   6
-rw-r--r--	drivers/acpi/device_pm.c		|   7
-rw-r--r--	drivers/acpi/ec.c			|  57
-rw-r--r--	drivers/acpi/internal.h			|   6
-rw-r--r--	drivers/acpi/processor_driver.c		|  39
-rw-r--r--	drivers/acpi/processor_perflib.c	| 100
-rw-r--r--	drivers/acpi/processor_thermal.c	|  84
-rw-r--r--	drivers/acpi/sleep.c			| 165
8 files changed, 245 insertions, 219 deletions
diff --git a/drivers/acpi/acpica/evxfgpe.c b/drivers/acpi/acpica/evxfgpe.c index 710488ec59e9..04a40d563dd6 100644 --- a/drivers/acpi/acpica/evxfgpe.c +++ b/drivers/acpi/acpica/evxfgpe.c @@ -644,17 +644,17 @@ ACPI_EXPORT_SYMBOL(acpi_get_gpe_status) * PARAMETERS: gpe_device - Parent GPE Device. NULL for GPE0/GPE1 * gpe_number - GPE level within the GPE block * - * RETURN: None + * RETURN: INTERRUPT_HANDLED or INTERRUPT_NOT_HANDLED * * DESCRIPTION: Detect and dispatch a General Purpose Event to either a function * (e.g. EC) or method (e.g. _Lxx/_Exx) handler. * ******************************************************************************/ -void acpi_dispatch_gpe(acpi_handle gpe_device, u32 gpe_number) +u32 acpi_dispatch_gpe(acpi_handle gpe_device, u32 gpe_number) { ACPI_FUNCTION_TRACE(acpi_dispatch_gpe); - acpi_ev_detect_gpe(gpe_device, NULL, gpe_number); + return acpi_ev_detect_gpe(gpe_device, NULL, gpe_number); } ACPI_EXPORT_SYMBOL(acpi_dispatch_gpe) diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c index f616b16c1f0b..08bb9f2f2d23 100644 --- a/drivers/acpi/device_pm.c +++ b/drivers/acpi/device_pm.c @@ -166,6 +166,10 @@ int acpi_device_set_power(struct acpi_device *device, int state) || (state < ACPI_STATE_D0) || (state > ACPI_STATE_D3_COLD)) return -EINVAL; + acpi_handle_debug(device->handle, "Power state change: %s -> %s\n", + acpi_power_state_string(device->power.state), + acpi_power_state_string(state)); + /* Make sure this is a valid target state */ /* There is a special case for D0 addressed below. */ @@ -497,7 +501,8 @@ acpi_status acpi_add_pm_notifier(struct acpi_device *adev, struct device *dev, goto out; mutex_lock(&acpi_pm_notifier_lock); - adev->wakeup.ws = wakeup_source_register(dev_name(&adev->dev)); + adev->wakeup.ws = wakeup_source_register(&adev->dev, + dev_name(&adev->dev)); adev->wakeup.context.dev = dev; adev->wakeup.context.func = func; adev->wakeup.flags.notifier_present = true; diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c index c33756ed3304..da1e5c5ce150 100644 --- a/drivers/acpi/ec.c +++ b/drivers/acpi/ec.c @@ -25,6 +25,7 @@ #include <linux/list.h> #include <linux/spinlock.h> #include <linux/slab.h> +#include <linux/suspend.h> #include <linux/acpi.h> #include <linux/dmi.h> #include <asm/io.h> @@ -1048,24 +1049,6 @@ void acpi_ec_unblock_transactions(void) acpi_ec_start(first_ec, true); } -void acpi_ec_mark_gpe_for_wake(void) -{ - if (first_ec && !ec_no_wakeup) - acpi_mark_gpe_for_wake(NULL, first_ec->gpe); -} - -void acpi_ec_set_gpe_wake_mask(u8 action) -{ - if (first_ec && !ec_no_wakeup) - acpi_set_gpe_wake_mask(NULL, first_ec->gpe, action); -} - -void acpi_ec_dispatch_gpe(void) -{ - if (first_ec) - acpi_dispatch_gpe(NULL, first_ec->gpe); -} - /* -------------------------------------------------------------------------- Event Management -------------------------------------------------------------------------- */ @@ -1931,7 +1914,7 @@ static int acpi_ec_suspend(struct device *dev) struct acpi_ec *ec = acpi_driver_data(to_acpi_device(dev)); - if (acpi_sleep_no_ec_events() && ec_freeze_events) + if (!pm_suspend_no_platform() && ec_freeze_events) acpi_ec_disable_event(ec); return 0; } @@ -1948,8 +1931,7 @@ static int acpi_ec_suspend_noirq(struct device *dev) ec->reference_count >= 1) acpi_set_gpe(NULL, ec->gpe, ACPI_GPE_DISABLE); - if (acpi_sleep_no_ec_events()) - acpi_ec_enter_noirq(ec); + acpi_ec_enter_noirq(ec); return 0; } @@ -1958,8 +1940,7 @@ static int acpi_ec_resume_noirq(struct device *dev) { struct acpi_ec *ec = 
acpi_driver_data(to_acpi_device(dev)); - if (acpi_sleep_no_ec_events()) - acpi_ec_leave_noirq(ec); + acpi_ec_leave_noirq(ec); if (ec_no_wakeup && test_bit(EC_FLAGS_STARTED, &ec->flags) && ec->reference_count >= 1) @@ -1976,7 +1957,35 @@ static int acpi_ec_resume(struct device *dev) acpi_ec_enable_event(ec); return 0; } -#endif + +void acpi_ec_mark_gpe_for_wake(void) +{ + if (first_ec && !ec_no_wakeup) + acpi_mark_gpe_for_wake(NULL, first_ec->gpe); +} +EXPORT_SYMBOL_GPL(acpi_ec_mark_gpe_for_wake); + +void acpi_ec_set_gpe_wake_mask(u8 action) +{ + if (pm_suspend_no_platform() && first_ec && !ec_no_wakeup) + acpi_set_gpe_wake_mask(NULL, first_ec->gpe, action); +} + +bool acpi_ec_dispatch_gpe(void) +{ + u32 ret; + + if (!first_ec) + return false; + + ret = acpi_dispatch_gpe(NULL, first_ec->gpe); + if (ret == ACPI_INTERRUPT_HANDLED) { + pm_pr_dbg("EC GPE dispatched\n"); + return true; + } + return false; +} +#endif /* CONFIG_PM_SLEEP */ static const struct dev_pm_ops acpi_ec_pm = { SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(acpi_ec_suspend_noirq, acpi_ec_resume_noirq) diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h index f4c2fe6be4f2..afe6636f9ad3 100644 --- a/drivers/acpi/internal.h +++ b/drivers/acpi/internal.h @@ -194,9 +194,6 @@ void acpi_ec_ecdt_probe(void); void acpi_ec_dsdt_probe(void); void acpi_ec_block_transactions(void); void acpi_ec_unblock_transactions(void); -void acpi_ec_mark_gpe_for_wake(void); -void acpi_ec_set_gpe_wake_mask(u8 action); -void acpi_ec_dispatch_gpe(void); int acpi_ec_add_query_handler(struct acpi_ec *ec, u8 query_bit, acpi_handle handle, acpi_ec_query_func func, void *data); @@ -204,6 +201,7 @@ void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit); #ifdef CONFIG_PM_SLEEP void acpi_ec_flush_work(void); +bool acpi_ec_dispatch_gpe(void); #endif @@ -212,11 +210,9 @@ void acpi_ec_flush_work(void); -------------------------------------------------------------------------- */ #ifdef CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT extern bool acpi_s2idle_wakeup(void); -extern bool acpi_sleep_no_ec_events(void); extern int acpi_sleep_init(void); #else static inline bool acpi_s2idle_wakeup(void) { return false; } -static inline bool acpi_sleep_no_ec_events(void) { return true; } static inline int acpi_sleep_init(void) { return -ENXIO; } #endif diff --git a/drivers/acpi/processor_driver.c b/drivers/acpi/processor_driver.c index aea8d674a33d..08da9c29f1e9 100644 --- a/drivers/acpi/processor_driver.c +++ b/drivers/acpi/processor_driver.c @@ -284,6 +284,29 @@ static int acpi_processor_stop(struct device *dev) return 0; } +bool acpi_processor_cpufreq_init; + +static int acpi_processor_notifier(struct notifier_block *nb, + unsigned long event, void *data) +{ + struct cpufreq_policy *policy = data; + int cpu = policy->cpu; + + if (event == CPUFREQ_CREATE_POLICY) { + acpi_thermal_cpufreq_init(cpu); + acpi_processor_ppc_init(cpu); + } else if (event == CPUFREQ_REMOVE_POLICY) { + acpi_processor_ppc_exit(cpu); + acpi_thermal_cpufreq_exit(cpu); + } + + return 0; +} + +static struct notifier_block acpi_processor_notifier_block = { + .notifier_call = acpi_processor_notifier, +}; + /* * We keep the driver loaded even when ACPI is not running. 
* This is needed for the powernow-k8 driver, that works even without @@ -310,8 +333,12 @@ static int __init acpi_processor_driver_init(void) cpuhp_setup_state_nocalls(CPUHP_ACPI_CPUDRV_DEAD, "acpi/cpu-drv:dead", NULL, acpi_soft_cpu_dead); - acpi_thermal_cpufreq_init(); - acpi_processor_ppc_init(); + if (!cpufreq_register_notifier(&acpi_processor_notifier_block, + CPUFREQ_POLICY_NOTIFIER)) { + acpi_processor_cpufreq_init = true; + acpi_processor_ignore_ppc_init(); + } + acpi_processor_throttling_init(); return 0; err: @@ -324,8 +351,12 @@ static void __exit acpi_processor_driver_exit(void) if (acpi_disabled) return; - acpi_processor_ppc_exit(); - acpi_thermal_cpufreq_exit(); + if (acpi_processor_cpufreq_init) { + cpufreq_unregister_notifier(&acpi_processor_notifier_block, + CPUFREQ_POLICY_NOTIFIER); + acpi_processor_cpufreq_init = false; + } + cpuhp_remove_state_nocalls(hp_online); cpuhp_remove_state_nocalls(CPUHP_ACPI_CPUDRV_DEAD); driver_unregister(&acpi_processor_driver); diff --git a/drivers/acpi/processor_perflib.c b/drivers/acpi/processor_perflib.c index ee87cb6f6e59..2261713d1aec 100644 --- a/drivers/acpi/processor_perflib.c +++ b/drivers/acpi/processor_perflib.c @@ -50,57 +50,13 @@ module_param(ignore_ppc, int, 0644); MODULE_PARM_DESC(ignore_ppc, "If the frequency of your machine gets wrongly" \ "limited by BIOS, this should help"); -#define PPC_REGISTERED 1 -#define PPC_IN_USE 2 - -static int acpi_processor_ppc_status; - -static int acpi_processor_ppc_notifier(struct notifier_block *nb, - unsigned long event, void *data) -{ - struct cpufreq_policy *policy = data; - struct acpi_processor *pr; - unsigned int ppc = 0; - - if (ignore_ppc < 0) - ignore_ppc = 0; - - if (ignore_ppc) - return 0; - - if (event != CPUFREQ_ADJUST) - return 0; - - mutex_lock(&performance_mutex); - - pr = per_cpu(processors, policy->cpu); - if (!pr || !pr->performance) - goto out; - - ppc = (unsigned int)pr->performance_platform_limit; - - if (ppc >= pr->performance->state_count) - goto out; - - cpufreq_verify_within_limits(policy, 0, - pr->performance->states[ppc]. 
- core_frequency * 1000); - - out: - mutex_unlock(&performance_mutex); - - return 0; -} - -static struct notifier_block acpi_ppc_notifier_block = { - .notifier_call = acpi_processor_ppc_notifier, -}; +static bool acpi_processor_ppc_in_use; static int acpi_processor_get_platform_limit(struct acpi_processor *pr) { acpi_status status = 0; unsigned long long ppc = 0; - + int ret; if (!pr) return -EINVAL; @@ -112,7 +68,7 @@ static int acpi_processor_get_platform_limit(struct acpi_processor *pr) status = acpi_evaluate_integer(pr->handle, "_PPC", NULL, &ppc); if (status != AE_NOT_FOUND) - acpi_processor_ppc_status |= PPC_IN_USE; + acpi_processor_ppc_in_use = true; if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) { ACPI_EXCEPTION((AE_INFO, status, "Evaluating _PPC")); @@ -124,6 +80,17 @@ static int acpi_processor_get_platform_limit(struct acpi_processor *pr) pr->performance_platform_limit = (int)ppc; + if (ppc >= pr->performance->state_count || + unlikely(!dev_pm_qos_request_active(&pr->perflib_req))) + return 0; + + ret = dev_pm_qos_update_request(&pr->perflib_req, + pr->performance->states[ppc].core_frequency * 1000); + if (ret < 0) { + pr_warn("Failed to update perflib freq constraint: CPU%d (%d)\n", + pr->id, ret); + } + return 0; } @@ -184,23 +151,32 @@ int acpi_processor_get_bios_limit(int cpu, unsigned int *limit) } EXPORT_SYMBOL(acpi_processor_get_bios_limit); -void acpi_processor_ppc_init(void) +void acpi_processor_ignore_ppc_init(void) { - if (!cpufreq_register_notifier - (&acpi_ppc_notifier_block, CPUFREQ_POLICY_NOTIFIER)) - acpi_processor_ppc_status |= PPC_REGISTERED; - else - printk(KERN_DEBUG - "Warning: Processor Platform Limit not supported.\n"); + if (ignore_ppc < 0) + ignore_ppc = 0; +} + +void acpi_processor_ppc_init(int cpu) +{ + struct acpi_processor *pr = per_cpu(processors, cpu); + int ret; + + ret = dev_pm_qos_add_request(get_cpu_device(cpu), + &pr->perflib_req, DEV_PM_QOS_MAX_FREQUENCY, + INT_MAX); + if (ret < 0) { + pr_err("Failed to add freq constraint for CPU%d (%d)\n", cpu, + ret); + return; + } } -void acpi_processor_ppc_exit(void) +void acpi_processor_ppc_exit(int cpu) { - if (acpi_processor_ppc_status & PPC_REGISTERED) - cpufreq_unregister_notifier(&acpi_ppc_notifier_block, - CPUFREQ_POLICY_NOTIFIER); + struct acpi_processor *pr = per_cpu(processors, cpu); - acpi_processor_ppc_status &= ~PPC_REGISTERED; + dev_pm_qos_remove_request(&pr->perflib_req); } static int acpi_processor_get_performance_control(struct acpi_processor *pr) @@ -477,7 +453,7 @@ int acpi_processor_notify_smm(struct module *calling_module) static int is_done = 0; int result; - if (!(acpi_processor_ppc_status & PPC_REGISTERED)) + if (!acpi_processor_cpufreq_init) return -EBUSY; if (!try_module_get(calling_module)) @@ -513,7 +489,7 @@ int acpi_processor_notify_smm(struct module *calling_module) * we can allow the cpufreq driver to be rmmod'ed. 
*/ is_done = 1; - if (!(acpi_processor_ppc_status & PPC_IN_USE)) + if (!acpi_processor_ppc_in_use) module_put(calling_module); return 0; @@ -742,7 +718,7 @@ acpi_processor_register_performance(struct acpi_processor_performance { struct acpi_processor *pr; - if (!(acpi_processor_ppc_status & PPC_REGISTERED)) + if (!acpi_processor_cpufreq_init) return -EINVAL; mutex_lock(&performance_mutex); diff --git a/drivers/acpi/processor_thermal.c b/drivers/acpi/processor_thermal.c index 50fb0107375e..ec2638f1df4f 100644 --- a/drivers/acpi/processor_thermal.c +++ b/drivers/acpi/processor_thermal.c @@ -35,7 +35,6 @@ ACPI_MODULE_NAME("processor_thermal"); #define CPUFREQ_THERMAL_MAX_STEP 3 static DEFINE_PER_CPU(unsigned int, cpufreq_thermal_reduction_pctg); -static unsigned int acpi_thermal_cpufreq_is_init = 0; #define reduction_pctg(cpu) \ per_cpu(cpufreq_thermal_reduction_pctg, phys_package_first_cpu(cpu)) @@ -61,35 +60,11 @@ static int phys_package_first_cpu(int cpu) static int cpu_has_cpufreq(unsigned int cpu) { struct cpufreq_policy policy; - if (!acpi_thermal_cpufreq_is_init || cpufreq_get_policy(&policy, cpu)) + if (!acpi_processor_cpufreq_init || cpufreq_get_policy(&policy, cpu)) return 0; return 1; } -static int acpi_thermal_cpufreq_notifier(struct notifier_block *nb, - unsigned long event, void *data) -{ - struct cpufreq_policy *policy = data; - unsigned long max_freq = 0; - - if (event != CPUFREQ_ADJUST) - goto out; - - max_freq = ( - policy->cpuinfo.max_freq * - (100 - reduction_pctg(policy->cpu) * 20) - ) / 100; - - cpufreq_verify_within_limits(policy, 0, max_freq); - - out: - return 0; -} - -static struct notifier_block acpi_thermal_cpufreq_notifier_block = { - .notifier_call = acpi_thermal_cpufreq_notifier, -}; - static int cpufreq_get_max_state(unsigned int cpu) { if (!cpu_has_cpufreq(cpu)) @@ -108,7 +83,10 @@ static int cpufreq_get_cur_state(unsigned int cpu) static int cpufreq_set_cur_state(unsigned int cpu, int state) { - int i; + struct cpufreq_policy *policy; + struct acpi_processor *pr; + unsigned long max_freq; + int i, ret; if (!cpu_has_cpufreq(cpu)) return 0; @@ -121,33 +99,53 @@ static int cpufreq_set_cur_state(unsigned int cpu, int state) * frequency. 
*/ for_each_online_cpu(i) { - if (topology_physical_package_id(i) == + if (topology_physical_package_id(i) != topology_physical_package_id(cpu)) - cpufreq_update_policy(i); + continue; + + pr = per_cpu(processors, i); + + if (unlikely(!dev_pm_qos_request_active(&pr->thermal_req))) + continue; + + policy = cpufreq_cpu_get(i); + if (!policy) + return -EINVAL; + + max_freq = (policy->cpuinfo.max_freq * (100 - reduction_pctg(i) * 20)) / 100; + + cpufreq_cpu_put(policy); + + ret = dev_pm_qos_update_request(&pr->thermal_req, max_freq); + if (ret < 0) { + pr_warn("Failed to update thermal freq constraint: CPU%d (%d)\n", + pr->id, ret); + } } return 0; } -void acpi_thermal_cpufreq_init(void) +void acpi_thermal_cpufreq_init(int cpu) { - int i; - - i = cpufreq_register_notifier(&acpi_thermal_cpufreq_notifier_block, - CPUFREQ_POLICY_NOTIFIER); - if (!i) - acpi_thermal_cpufreq_is_init = 1; + struct acpi_processor *pr = per_cpu(processors, cpu); + int ret; + + ret = dev_pm_qos_add_request(get_cpu_device(cpu), + &pr->thermal_req, DEV_PM_QOS_MAX_FREQUENCY, + INT_MAX); + if (ret < 0) { + pr_err("Failed to add freq constraint for CPU%d (%d)\n", cpu, + ret); + return; + } } -void acpi_thermal_cpufreq_exit(void) +void acpi_thermal_cpufreq_exit(int cpu) { - if (acpi_thermal_cpufreq_is_init) - cpufreq_unregister_notifier - (&acpi_thermal_cpufreq_notifier_block, - CPUFREQ_POLICY_NOTIFIER); + struct acpi_processor *pr = per_cpu(processors, cpu); - acpi_thermal_cpufreq_is_init = 0; + dev_pm_qos_remove_request(&pr->thermal_req); } - #else /* ! CONFIG_CPU_FREQ */ static int cpufreq_get_max_state(unsigned int cpu) { diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c index f0fe7c15d657..9fa77d72ef27 100644 --- a/drivers/acpi/sleep.c +++ b/drivers/acpi/sleep.c @@ -89,6 +89,10 @@ bool acpi_sleep_state_supported(u8 sleep_state) } #ifdef CONFIG_ACPI_SLEEP +static bool sleep_no_lps0 __read_mostly; +module_param(sleep_no_lps0, bool, 0644); +MODULE_PARM_DESC(sleep_no_lps0, "Do not use the special LPS0 device interface"); + static u32 acpi_target_sleep_state = ACPI_STATE_S0; u32 acpi_target_system_state(void) @@ -158,11 +162,11 @@ static int __init init_nvs_nosave(const struct dmi_system_id *d) return 0; } -static bool acpi_sleep_no_lps0; +static bool acpi_sleep_default_s3; -static int __init init_no_lps0(const struct dmi_system_id *d) +static int __init init_default_s3(const struct dmi_system_id *d) { - acpi_sleep_no_lps0 = true; + acpi_sleep_default_s3 = true; return 0; } @@ -363,7 +367,7 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = { * S0 Idle firmware interface. */ { - .callback = init_no_lps0, + .callback = init_default_s3, .ident = "Dell XPS13 9360", .matches = { DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), @@ -376,7 +380,7 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = { * https://bugzilla.kernel.org/show_bug.cgi?id=199057). 
*/ { - .callback = init_no_lps0, + .callback = init_default_s3, .ident = "ThinkPad X1 Tablet(2016)", .matches = { DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), @@ -524,8 +528,9 @@ static void acpi_pm_end(void) acpi_sleep_tts_switch(acpi_target_sleep_state); } #else /* !CONFIG_ACPI_SLEEP */ +#define sleep_no_lps0 (1) #define acpi_target_sleep_state ACPI_STATE_S0 -#define acpi_sleep_no_lps0 (false) +#define acpi_sleep_default_s3 (1) static inline void acpi_sleep_dmi_check(void) {} #endif /* CONFIG_ACPI_SLEEP */ @@ -691,7 +696,6 @@ static const struct platform_suspend_ops acpi_suspend_ops_old = { .recover = acpi_pm_finish, }; -static bool s2idle_in_progress; static bool s2idle_wakeup; /* @@ -904,42 +908,43 @@ static int lps0_device_attach(struct acpi_device *adev, if (lps0_device_handle) return 0; - if (acpi_sleep_no_lps0) { - acpi_handle_info(adev->handle, - "Low Power S0 Idle interface disabled\n"); - return 0; - } - if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0)) return 0; guid_parse(ACPI_LPS0_DSM_UUID, &lps0_dsm_guid); /* Check if the _DSM is present and as expected. */ out_obj = acpi_evaluate_dsm(adev->handle, &lps0_dsm_guid, 1, 0, NULL); - if (out_obj && out_obj->type == ACPI_TYPE_BUFFER) { - char bitmask = *(char *)out_obj->buffer.pointer; - - lps0_dsm_func_mask = bitmask; - lps0_device_handle = adev->handle; - /* - * Use suspend-to-idle by default if the default - * suspend mode was not set from the command line. - */ - if (mem_sleep_default > PM_SUSPEND_MEM) - mem_sleep_current = PM_SUSPEND_TO_IDLE; - - acpi_handle_debug(adev->handle, "_DSM function mask: 0x%x\n", - bitmask); - - acpi_ec_mark_gpe_for_wake(); - } else { + if (!out_obj || out_obj->type != ACPI_TYPE_BUFFER) { acpi_handle_debug(adev->handle, "_DSM function 0 evaluation failed\n"); + return 0; } + + lps0_dsm_func_mask = *(char *)out_obj->buffer.pointer; + ACPI_FREE(out_obj); + acpi_handle_debug(adev->handle, "_DSM function mask: 0x%x\n", + lps0_dsm_func_mask); + + lps0_device_handle = adev->handle; + lpi_device_get_constraints(); + /* + * Use suspend-to-idle by default if the default suspend mode was not + * set from the command line. + */ + if (mem_sleep_default > PM_SUSPEND_MEM && !acpi_sleep_default_s3) + mem_sleep_current = PM_SUSPEND_TO_IDLE; + + /* + * Some LPS0 systems, like ASUS Zenbook UX430UNR/i7-8550U, require the + * EC GPE to be enabled while suspended for certain wakeup devices to + * work, so mark it as wakeup-capable. + */ + acpi_ec_mark_gpe_for_wake(); + return 0; } @@ -951,98 +956,110 @@ static struct acpi_scan_handler lps0_handler = { static int acpi_s2idle_begin(void) { acpi_scan_lock_acquire(); - s2idle_in_progress = true; return 0; } static int acpi_s2idle_prepare(void) { - if (lps0_device_handle) { - acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF); - acpi_sleep_run_lps0_dsm(ACPI_LPS0_ENTRY); - + if (acpi_sci_irq_valid()) { + enable_irq_wake(acpi_sci_irq); acpi_ec_set_gpe_wake_mask(ACPI_GPE_ENABLE); } - if (acpi_sci_irq_valid()) - enable_irq_wake(acpi_sci_irq); - acpi_enable_wakeup_devices(ACPI_STATE_S0); /* Change the configuration of GPEs to avoid spurious wakeup. 
*/ acpi_enable_all_wakeup_gpes(); acpi_os_wait_events_complete(); + + s2idle_wakeup = true; return 0; } -static void acpi_s2idle_wake(void) +static int acpi_s2idle_prepare_late(void) { - if (!lps0_device_handle) - return; + if (!lps0_device_handle || sleep_no_lps0) + return 0; if (pm_debug_messages_on) lpi_check_constraints(); + acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF); + acpi_sleep_run_lps0_dsm(ACPI_LPS0_ENTRY); + + return 0; +} + +static void acpi_s2idle_wake(void) +{ + /* + * If IRQD_WAKEUP_ARMED is set for the SCI at this point, the SCI has + * not triggered while suspended, so bail out. + */ + if (!acpi_sci_irq_valid() || + irqd_is_wakeup_armed(irq_get_irq_data(acpi_sci_irq))) + return; + /* - * If IRQD_WAKEUP_ARMED is not set for the SCI at this point, it means - * that the SCI has triggered while suspended, so cancel the wakeup in - * case it has not been a wakeup event (the GPEs will be checked later). + * If there are EC events to process, the wakeup may be a spurious one + * coming from the EC. */ - if (acpi_sci_irq_valid() && - !irqd_is_wakeup_armed(irq_get_irq_data(acpi_sci_irq))) { + if (acpi_ec_dispatch_gpe()) { + /* + * Cancel the wakeup and process all pending events in case + * there are any wakeup ones in there. + * + * Note that if any non-EC GPEs are active at this point, the + * SCI will retrigger after the rearming below, so no events + * should be missed by canceling the wakeup here. + */ pm_system_cancel_wakeup(); - s2idle_wakeup = true; /* - * On some platforms with the LPS0 _DSM device noirq resume - * takes too much time for EC wakeup events to survive, so look - * for them now. + * The EC driver uses the system workqueue and an additional + * special one, so those need to be flushed too. */ - acpi_ec_dispatch_gpe(); + acpi_os_wait_events_complete(); /* synchronize EC GPE processing */ + acpi_ec_flush_work(); + acpi_os_wait_events_complete(); /* synchronize Notify handling */ + + rearm_wake_irq(acpi_sci_irq); } } -static void acpi_s2idle_sync(void) +static void acpi_s2idle_restore_early(void) { - /* - * Process all pending events in case there are any wakeup ones. - * - * The EC driver uses the system workqueue and an additional special - * one, so those need to be flushed too. 
- */ - acpi_os_wait_events_complete(); /* synchronize SCI IRQ handling */ - acpi_ec_flush_work(); - acpi_os_wait_events_complete(); /* synchronize Notify handling */ - s2idle_wakeup = false; + if (!lps0_device_handle || sleep_no_lps0) + return; + + acpi_sleep_run_lps0_dsm(ACPI_LPS0_EXIT); + acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_ON); } static void acpi_s2idle_restore(void) { + s2idle_wakeup = false; + acpi_enable_all_runtime_gpes(); acpi_disable_wakeup_devices(ACPI_STATE_S0); - if (acpi_sci_irq_valid()) - disable_irq_wake(acpi_sci_irq); - - if (lps0_device_handle) { + if (acpi_sci_irq_valid()) { acpi_ec_set_gpe_wake_mask(ACPI_GPE_DISABLE); - - acpi_sleep_run_lps0_dsm(ACPI_LPS0_EXIT); - acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_ON); + disable_irq_wake(acpi_sci_irq); } } static void acpi_s2idle_end(void) { - s2idle_in_progress = false; acpi_scan_lock_release(); } static const struct platform_s2idle_ops acpi_s2idle_ops = { .begin = acpi_s2idle_begin, .prepare = acpi_s2idle_prepare, + .prepare_late = acpi_s2idle_prepare_late, .wake = acpi_s2idle_wake, - .sync = acpi_s2idle_sync, + .restore_early = acpi_s2idle_restore_early, .restore = acpi_s2idle_restore, .end = acpi_s2idle_end, }; @@ -1063,7 +1080,6 @@ static void acpi_sleep_suspend_setup(void) } #else /* !CONFIG_SUSPEND */ -#define s2idle_in_progress (false) #define s2idle_wakeup (false) #define lps0_device_handle (NULL) static inline void acpi_sleep_suspend_setup(void) {} @@ -1074,11 +1090,6 @@ bool acpi_s2idle_wakeup(void) return s2idle_wakeup; } -bool acpi_sleep_no_ec_events(void) -{ - return !s2idle_in_progress || !lps0_device_handle; -} - #ifdef CONFIG_PM_SLEEP static u32 saved_bm_rld; |