From b02f6695f7601c4f8442b9cf4636802e7fa8d550 Mon Sep 17 00:00:00 2001 From: Rafael J. Wysocki Date: Tue, 11 Feb 2014 00:35:23 +0100 Subject: PM / QoS: Rename device resume latency QoS items Rename symbols, variables, functions and structure fields related to the resume latency device PM QoS type so that it is clear where they belong (in particular, to avoid confusion with the latency tolerance device PM QoS type introduced by a subsequent changeset). Update the PM QoS documentation to better reflect its current state. Signed-off-by: Rafael J. Wysocki --- Documentation/power/pm_qos_interface.txt | 37 ++++++++++++++------------------ Documentation/trace/events-power.txt | 2 +- 2 files changed, 17 insertions(+), 22 deletions(-) (limited to 'Documentation') diff --git a/Documentation/power/pm_qos_interface.txt b/Documentation/power/pm_qos_interface.txt index 483632087788..22cb8f51182a 100644 --- a/Documentation/power/pm_qos_interface.txt +++ b/Documentation/power/pm_qos_interface.txt @@ -89,16 +89,16 @@ node. 2. PM QoS per-device latency and flags framework For each device, there are two lists of PM QoS requests. One is maintained -along with the aggregated target of latency value and the other is for PM QoS -flags. Values are updated in response to changes of the request list. +along with the aggregated target of resume latency value and the other is for +PM QoS flags. Values are updated in response to changes of the request list. -Target latency value is simply the minimum of the request values held in the -parameter list elements. The PM QoS flags aggregate value is a gather (bitwise -OR) of all list elements' values. Two device PM QoS flags are defined currently: -PM_QOS_FLAG_NO_POWER_OFF and PM_QOS_FLAG_REMOTE_WAKEUP. +Target resume latency value is simply the minimum of the request values held in +the parameter list elements. The PM QoS flags aggregate value is a gather +(bitwise OR) of all list elements' values. Two device PM QoS flags are defined +currently: PM_QOS_FLAG_NO_POWER_OFF and PM_QOS_FLAG_REMOTE_WAKEUP. -Note: the aggregated target value is implemented as an atomic variable so that -reading the aggregated value does not require any locking mechanism. +Note: the aggregated target value is implemented in such a way that reading the +aggregated value does not require any locking mechanism. From kernel mode the use of this interface is the following: @@ -137,14 +137,14 @@ Add a PM QoS request for the first direct ancestor of the given device whose power.ignore_children flag is unset. int dev_pm_qos_expose_latency_limit(device, value) -Add a request to the device's PM QoS list of latency constraints and create -a sysfs attribute pm_qos_resume_latency_us under the device's power directory -allowing user space to manipulate that request. +Add a request to the device's PM QoS list of resume latency constraints and +create a sysfs attribute pm_qos_resume_latency_us under the device's power +directory allowing user space to manipulate that request. void dev_pm_qos_hide_latency_limit(device) Drop the request added by dev_pm_qos_expose_latency_limit() from the device's -PM QoS list of latency constraints and remove sysfs attribute pm_qos_resume_latency_us -from the device's power directory. +PM QoS list of resume latency constraints and remove sysfs attribute +pm_qos_resume_latency_us from the device's power directory. 
int dev_pm_qos_expose_flags(device, value) Add a request to the device's PM QoS list of flags and create sysfs attributes @@ -163,7 +163,7 @@ a per-device notification tree and a global notification tree. int dev_pm_qos_add_notifier(device, notifier): Adds a notification callback function for the device. The callback is called when the aggregated value of the device constraints list -is changed. +is changed (for resume latency device PM QoS only). int dev_pm_qos_remove_notifier(device, notifier): Removes the notification callback function for the device. @@ -171,14 +171,9 @@ Removes the notification callback function for the device. int dev_pm_qos_add_global_notifier(notifier): Adds a notification callback function in the global notification tree of the framework. -The callback is called when the aggregated value for any device is changed. +The callback is called when the aggregated value for any device is changed +(for resume latency device PM QoS only). int dev_pm_qos_remove_global_notifier(notifier): Removes the notification callback function from the global notification tree of the framework. - - -From user mode: -No API for user space access to the per-device latency constraints is provided -yet - still under discussion. - diff --git a/Documentation/trace/events-power.txt b/Documentation/trace/events-power.txt index 3bd33b8dc7c4..21d514ced212 100644 --- a/Documentation/trace/events-power.txt +++ b/Documentation/trace/events-power.txt @@ -92,5 +92,5 @@ dev_pm_qos_remove_request "device=%s type=%s new_value=%d" The first parameter gives the device name which tries to add/update/remove QoS requests. -The second parameter gives the request type (e.g. "DEV_PM_QOS_LATENCY"). +The second parameter gives the request type (e.g. "DEV_PM_QOS_RESUME_LATENCY"). The third parameter is value to be added/updated/removed. -- cgit v1.2.3 From 2d984ad132a87ca2112f81f21039493176a8bca0 Mon Sep 17 00:00:00 2001 From: Rafael J. Wysocki Date: Tue, 11 Feb 2014 00:35:38 +0100 Subject: PM / QoS: Introduce latency tolerance device PM QoS type Add a new latency tolerance device PM QoS type to be used for specifying active state (RPM_ACTIVE) memory access (DMA) latency tolerance requirements for devices. It may be used to prevent hardware from choosing overly aggressive energy-saving operation modes (causing too much latency to appear) for the whole platform. This feature requires hardware support, so it will only be available for devices having a new .set_latency_tolerance() callback in struct dev_pm_info populated, in which case the routine pointed to by it should implement whatever is necessary to transfer the effective requirement value to the hardware. Whenever the effective latency tolerance changes for the device, its .set_latency_tolerance() callback will be executed and the effective value will be passed to it. If that value is negative, which means that the list of latency tolerance requirements for the device is empty, the callback is expected to switch the underlying hardware latency tolerance control mechanism to an autonomous mode if available. If that value is PM_QOS_LATENCY_ANY, in turn, and the hardware supports a special "no requirement" setting, the callback is expected to use it. That allows software to prevent the hardware from automatically updating the device's latency tolerance in response to its power state changes (e.g. during transitions from D3cold to D0), which generally may be done in the autonomous latency tolerance control mode. 
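For illustration, a platform driver with such hardware control might populate the callback roughly as follows (a minimal sketch only; the foo_*() helpers, register values and probe hook are hypothetical and not part of this changeset):

#include <linux/device.h>
#include <linux/pm_qos.h>

#define FOO_LTR_AUTONOMOUS	0xffffffffU	/* hypothetical "hardware decides" setting */
#define FOO_LTR_NO_REQ		0x7fffffffU	/* hypothetical "no requirement" setting */

/* Hypothetical hardware-specific register write. */
static void foo_write_ltr(struct device *dev, u32 val)
{
	/* program the device's latency tolerance register here */
}

static void foo_set_latency_tolerance(struct device *dev, s32 value)
{
	if (value < 0)				/* empty requirement list: go autonomous */
		foo_write_ltr(dev, FOO_LTR_AUTONOMOUS);
	else if (value == PM_QOS_LATENCY_ANY)	/* "no requirement", keep control in software */
		foo_write_ltr(dev, FOO_LTR_NO_REQ);
	else					/* fixed tolerance, in microseconds */
		foo_write_ltr(dev, value);
}

/* At probe time: dev->power.set_latency_tolerance = foo_set_latency_tolerance; */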
If .set_latency_tolerance() is present for the device, a new pm_qos_latency_tolerance_us attribute will be present in the device's power directory in sysfs. Then, user space can use that attribute to specify its latency tolerance requirement for the device, if any. Writing "any" to it means "no requirement, but do not let the hardware control latency tolerance" and writing "auto" to it allows the hardware to be switched to the autonomous mode if there are no other requirements from the kernel side in the device's list. This changeset includes a fix from Mika Westerberg. Signed-off-by: Rafael J. Wysocki --- Documentation/ABI/testing/sysfs-devices-power | 27 ++++- Documentation/power/pm_qos_interface.txt | 59 +++++++++-- drivers/base/power/qos.c | 144 ++++++++++++++++++++++---- drivers/base/power/sysfs.c | 65 ++++++++++-- include/linux/pm.h | 1 + include/linux/pm_qos.h | 12 +++ kernel/power/qos.c | 13 ++- 7 files changed, 277 insertions(+), 44 deletions(-) (limited to 'Documentation') diff --git a/Documentation/ABI/testing/sysfs-devices-power b/Documentation/ABI/testing/sysfs-devices-power index efe449bdf811..7dbf96b724ed 100644 --- a/Documentation/ABI/testing/sysfs-devices-power +++ b/Documentation/ABI/testing/sysfs-devices-power @@ -187,7 +187,7 @@ Description: Not all drivers support this attribute. If it isn't supported, attempts to read or write it will yield I/O errors. -What: /sys/devices/.../power/pm_qos_latency_us +What: /sys/devices/.../power/pm_qos_resume_latency_us Date: March 2012 Contact: Rafael J. Wysocki Description: @@ -205,6 +205,31 @@ Description: This attribute has no effect on system-wide suspend/resume and hibernation. +What: /sys/devices/.../power/pm_qos_latency_tolerance_us +Date: January 2014 +Contact: Rafael J. Wysocki +Description: + The /sys/devices/.../power/pm_qos_latency_tolerance_us attribute + contains the PM QoS active state latency tolerance limit for the + given device in microseconds. That is the maximum memory access + latency the device can suffer without any visible adverse + effects on user space functionality. If that value is the + string "any", the latency does not matter to user space at all, + but hardware should not be allowed to set the latency tolerance + for the device automatically. + + Reading "auto" from this file means that the maximum memory + access latency for the device may be determined automatically + by the hardware as needed. Writing "auto" to it allows the + hardware to be switched to this mode if there are no other + latency tolerance requirements from the kernel side. + + This attribute is only present if the feature controlled by it + is supported by the hardware. + + This attribute has no effect on runtime suspend and resume of + devices and on system-wide suspend/resume and hibernation. + What: /sys/devices/.../power/pm_qos_no_power_off Date: September 2012 Contact: Rafael J. Wysocki diff --git a/Documentation/power/pm_qos_interface.txt b/Documentation/power/pm_qos_interface.txt index 22cb8f51182a..ed743bbad87c 100644 --- a/Documentation/power/pm_qos_interface.txt +++ b/Documentation/power/pm_qos_interface.txt @@ -88,17 +88,19 @@ node. 2. PM QoS per-device latency and flags framework -For each device, there are two lists of PM QoS requests. One is maintained -along with the aggregated target of resume latency value and the other is for -PM QoS flags. Values are updated in response to changes of the request list. +For each device, there are three lists of PM QoS requests. 
Two of them are +maintained along with the aggregated targets of resume latency and active +state latency tolerance (in microseconds) and the third one is for PM QoS flags. +Values are updated in response to changes of the request list. -Target resume latency value is simply the minimum of the request values held in -the parameter list elements. The PM QoS flags aggregate value is a gather -(bitwise OR) of all list elements' values. Two device PM QoS flags are defined -currently: PM_QOS_FLAG_NO_POWER_OFF and PM_QOS_FLAG_REMOTE_WAKEUP. +The target values of resume latency and active state latency tolerance are +simply the minimum of the request values held in the parameter list elements. +The PM QoS flags aggregate value is a gather (bitwise OR) of all list elements' +values. Two device PM QoS flags are defined currently: PM_QOS_FLAG_NO_POWER_OFF +and PM_QOS_FLAG_REMOTE_WAKEUP. -Note: the aggregated target value is implemented in such a way that reading the -aggregated value does not require any locking mechanism. +Note: The aggregated target values are implemented in such a way that reading +the aggregated value does not require any locking mechanism. From kernel mode the use of this interface is the following: @@ -177,3 +179,42 @@ The callback is called when the aggregated value for any device is changed int dev_pm_qos_remove_global_notifier(notifier): Removes the notification callback function from the global notification tree of the framework. + + +Active state latency tolerance + +This device PM QoS type is used to support systems in which hardware may switch +to energy-saving operation modes on the fly. In those systems, if the operation +mode chosen by the hardware attempts to save energy in an overly aggressive way, +it may cause excess latencies to be visible to software, causing it to miss +certain protocol requirements or target frame or sample rates etc. + +If there is a latency tolerance control mechanism for a given device available +to software, the .set_latency_tolerance callback in that device's dev_pm_info +structure should be populated. The routine pointed to by it should implement +whatever is necessary to transfer the effective requirement value to the +hardware. + +Whenever the effective latency tolerance changes for the device, its +.set_latency_tolerance() callback will be executed and the effective value will +be passed to it. If that value is negative, which means that the list of +latency tolerance requirements for the device is empty, the callback is expected +to switch the underlying hardware latency tolerance control mechanism to an +autonomous mode if available. If that value is PM_QOS_LATENCY_ANY, in turn, and +the hardware supports a special "no requirement" setting, the callback is +expected to use it. That allows software to prevent the hardware from +automatically updating the device's latency tolerance in response to its power +state changes (e.g. during transitions from D3cold to D0), which generally may +be done in the autonomous latency tolerance control mode. + +If .set_latency_tolerance() is present for the device, sysfs attribute +pm_qos_latency_tolerance_us will be present in the device's power directory. +Then, user space can use that attribute to specify its latency tolerance +requirement for the device, if any. 
Writing "any" to it means "no requirement, +but do not let the hardware control latency tolerance" and writing "auto" to it +allows the hardware to be switched to the autonomous mode if there are no other +requirements from the kernel side in the device's list. + +Kernel code can use the functions described above along with the +DEV_PM_QOS_LATENCY_TOLERANCE device PM QoS type to add, remove and update +latency tolerance requirements for devices. diff --git a/drivers/base/power/qos.c b/drivers/base/power/qos.c index c754e55f9dcb..84756f7f09d9 100644 --- a/drivers/base/power/qos.c +++ b/drivers/base/power/qos.c @@ -151,6 +151,14 @@ static int apply_constraint(struct dev_pm_qos_request *req, req); } break; + case DEV_PM_QOS_LATENCY_TOLERANCE: + ret = pm_qos_update_target(&qos->latency_tolerance, + &req->data.pnode, action, value); + if (ret) { + value = pm_qos_read_value(&qos->latency_tolerance); + req->dev->power.set_latency_tolerance(req->dev, value); + } + break; case DEV_PM_QOS_FLAGS: ret = pm_qos_update_flags(&qos->flags, &req->data.flr, action, value); @@ -194,6 +202,13 @@ static int dev_pm_qos_constraints_allocate(struct device *dev) c->type = PM_QOS_MIN; c->notifiers = n; + c = &qos->latency_tolerance; + plist_head_init(&c->list); + c->target_value = PM_QOS_LATENCY_TOLERANCE_DEFAULT_VALUE; + c->default_value = PM_QOS_LATENCY_TOLERANCE_DEFAULT_VALUE; + c->no_constraint_value = PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT; + c->type = PM_QOS_MIN; + INIT_LIST_HEAD(&qos->flags.list); spin_lock_irq(&dev->power.lock); @@ -247,6 +262,11 @@ void dev_pm_qos_constraints_destroy(struct device *dev) apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE); memset(req, 0, sizeof(*req)); } + c = &qos->latency_tolerance; + plist_for_each_entry_safe(req, tmp, &c->list, data.pnode) { + apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE); + memset(req, 0, sizeof(*req)); + } f = &qos->flags; list_for_each_entry_safe(req, tmp, &f->list, data.flr.node) { apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE); @@ -266,6 +286,40 @@ void dev_pm_qos_constraints_destroy(struct device *dev) mutex_unlock(&dev_pm_qos_sysfs_mtx); } +static bool dev_pm_qos_invalid_request(struct device *dev, + struct dev_pm_qos_request *req) +{ + return !req || (req->type == DEV_PM_QOS_LATENCY_TOLERANCE + && !dev->power.set_latency_tolerance); +} + +static int __dev_pm_qos_add_request(struct device *dev, + struct dev_pm_qos_request *req, + enum dev_pm_qos_req_type type, s32 value) +{ + int ret = 0; + + if (!dev || dev_pm_qos_invalid_request(dev, req)) + return -EINVAL; + + if (WARN(dev_pm_qos_request_active(req), + "%s() called for already added request\n", __func__)) + return -EINVAL; + + if (IS_ERR(dev->power.qos)) + ret = -ENODEV; + else if (!dev->power.qos) + ret = dev_pm_qos_constraints_allocate(dev); + + trace_dev_pm_qos_add_request(dev_name(dev), type, value); + if (!ret) { + req->dev = dev; + req->type = type; + ret = apply_constraint(req, PM_QOS_ADD_REQ, value); + } + return ret; +} + /** * dev_pm_qos_add_request - inserts new qos request into the list * @dev: target device for the constraint @@ -291,31 +345,11 @@ void dev_pm_qos_constraints_destroy(struct device *dev) int dev_pm_qos_add_request(struct device *dev, struct dev_pm_qos_request *req, enum dev_pm_qos_req_type type, s32 value) { - int ret = 0; - - if (!dev || !req) /*guard against callers passing in null */ - return -EINVAL; - - if (WARN(dev_pm_qos_request_active(req), - "%s() called for already added request\n", __func__)) - return -EINVAL; + 
int ret; mutex_lock(&dev_pm_qos_mtx); - - if (IS_ERR(dev->power.qos)) - ret = -ENODEV; - else if (!dev->power.qos) - ret = dev_pm_qos_constraints_allocate(dev); - - trace_dev_pm_qos_add_request(dev_name(dev), type, value); - if (!ret) { - req->dev = dev; - req->type = type; - ret = apply_constraint(req, PM_QOS_ADD_REQ, value); - } - + ret = __dev_pm_qos_add_request(dev, req, type, value); mutex_unlock(&dev_pm_qos_mtx); - return ret; } EXPORT_SYMBOL_GPL(dev_pm_qos_add_request); @@ -343,6 +377,7 @@ static int __dev_pm_qos_update_request(struct dev_pm_qos_request *req, switch(req->type) { case DEV_PM_QOS_RESUME_LATENCY: + case DEV_PM_QOS_LATENCY_TOLERANCE: curr_value = req->data.pnode.prio; break; case DEV_PM_QOS_FLAGS: @@ -563,6 +598,10 @@ static void __dev_pm_qos_drop_user_request(struct device *dev, req = dev->power.qos->resume_latency_req; dev->power.qos->resume_latency_req = NULL; break; + case DEV_PM_QOS_LATENCY_TOLERANCE: + req = dev->power.qos->latency_tolerance_req; + dev->power.qos->latency_tolerance_req = NULL; + break; case DEV_PM_QOS_FLAGS: req = dev->power.qos->flags_req; dev->power.qos->flags_req = NULL; @@ -768,6 +807,67 @@ int dev_pm_qos_update_flags(struct device *dev, s32 mask, bool set) pm_runtime_put(dev); return ret; } + +/** + * dev_pm_qos_get_user_latency_tolerance - Get user space latency tolerance. + * @dev: Device to obtain the user space latency tolerance for. + */ +s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev) +{ + s32 ret; + + mutex_lock(&dev_pm_qos_mtx); + ret = IS_ERR_OR_NULL(dev->power.qos) + || !dev->power.qos->latency_tolerance_req ? + PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT : + dev->power.qos->latency_tolerance_req->data.pnode.prio; + mutex_unlock(&dev_pm_qos_mtx); + return ret; +} + +/** + * dev_pm_qos_update_user_latency_tolerance - Update user space latency tolerance. + * @dev: Device to update the user space latency tolerance for. + * @val: New user space latency tolerance for @dev (negative values disable). 
+ */ +int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val) +{ + int ret; + + mutex_lock(&dev_pm_qos_mtx); + + if (IS_ERR_OR_NULL(dev->power.qos) + || !dev->power.qos->latency_tolerance_req) { + struct dev_pm_qos_request *req; + + if (val < 0) { + ret = -EINVAL; + goto out; + } + req = kzalloc(sizeof(*req), GFP_KERNEL); + if (!req) { + ret = -ENOMEM; + goto out; + } + ret = __dev_pm_qos_add_request(dev, req, DEV_PM_QOS_LATENCY_TOLERANCE, val); + if (ret < 0) { + kfree(req); + goto out; + } + dev->power.qos->latency_tolerance_req = req; + } else { + if (val < 0) { + __dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY_TOLERANCE); + ret = 0; + } else { + ret = __dev_pm_qos_update_request(dev->power.qos->latency_tolerance_req, val); + } + } + + out: + mutex_unlock(&dev_pm_qos_mtx); + return ret; +} #else /* !CONFIG_PM_RUNTIME */ static void __dev_pm_qos_hide_latency_limit(struct device *dev) {} static void __dev_pm_qos_hide_flags(struct device *dev) {} diff --git a/drivers/base/power/sysfs.c b/drivers/base/power/sysfs.c index 4e24955aac8a..95b181d1ca6d 100644 --- a/drivers/base/power/sysfs.c +++ b/drivers/base/power/sysfs.c @@ -246,6 +246,40 @@ static ssize_t pm_qos_resume_latency_store(struct device *dev, static DEVICE_ATTR(pm_qos_resume_latency_us, 0644, pm_qos_resume_latency_show, pm_qos_resume_latency_store); +static ssize_t pm_qos_latency_tolerance_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + s32 value = dev_pm_qos_get_user_latency_tolerance(dev); + + if (value < 0) + return sprintf(buf, "auto\n"); + else if (value == PM_QOS_LATENCY_ANY) + return sprintf(buf, "any\n"); + + return sprintf(buf, "%d\n", value); +} + +static ssize_t pm_qos_latency_tolerance_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t n) +{ + s32 value; + int ret; + + if (kstrtos32(buf, 0, &value)) { + if (!strcmp(buf, "auto") || !strcmp(buf, "auto\n")) + value = PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT; + else if (!strcmp(buf, "any") || !strcmp(buf, "any\n")) + value = PM_QOS_LATENCY_ANY; + } + ret = dev_pm_qos_update_user_latency_tolerance(dev, value); + return ret < 0 ? 
ret : n; +} + +static DEVICE_ATTR(pm_qos_latency_tolerance_us, 0644, + pm_qos_latency_tolerance_show, pm_qos_latency_tolerance_store); + static ssize_t pm_qos_no_power_off_show(struct device *dev, struct device_attribute *attr, char *buf) @@ -631,6 +665,17 @@ static struct attribute_group pm_qos_resume_latency_attr_group = { .attrs = pm_qos_resume_latency_attrs, }; +static struct attribute *pm_qos_latency_tolerance_attrs[] = { +#ifdef CONFIG_PM_RUNTIME + &dev_attr_pm_qos_latency_tolerance_us.attr, +#endif /* CONFIG_PM_RUNTIME */ + NULL, +}; +static struct attribute_group pm_qos_latency_tolerance_attr_group = { + .name = power_group_name, + .attrs = pm_qos_latency_tolerance_attrs, +}; + static struct attribute *pm_qos_flags_attrs[] = { #ifdef CONFIG_PM_RUNTIME &dev_attr_pm_qos_no_power_off.attr, @@ -656,18 +701,23 @@ int dpm_sysfs_add(struct device *dev) if (rc) goto err_out; } - if (device_can_wakeup(dev)) { rc = sysfs_merge_group(&dev->kobj, &pm_wakeup_attr_group); - if (rc) { - if (pm_runtime_callbacks_present(dev)) - sysfs_unmerge_group(&dev->kobj, - &pm_runtime_attr_group); - goto err_out; - } + if (rc) + goto err_runtime; + } + if (dev->power.set_latency_tolerance) { + rc = sysfs_merge_group(&dev->kobj, + &pm_qos_latency_tolerance_attr_group); + if (rc) + goto err_wakeup; } return 0; + err_wakeup: + sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group); + err_runtime: + sysfs_unmerge_group(&dev->kobj, &pm_runtime_attr_group); err_out: sysfs_remove_group(&dev->kobj, &pm_attr_group); return rc; @@ -710,6 +760,7 @@ void rpm_sysfs_remove(struct device *dev) void dpm_sysfs_remove(struct device *dev) { + sysfs_unmerge_group(&dev->kobj, &pm_qos_latency_tolerance_attr_group); dev_pm_qos_constraints_destroy(dev); rpm_sysfs_remove(dev); sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group); diff --git a/include/linux/pm.h b/include/linux/pm.h index 8c6583a53a06..db2be5f3e030 100644 --- a/include/linux/pm.h +++ b/include/linux/pm.h @@ -582,6 +582,7 @@ struct dev_pm_info { unsigned long accounting_timestamp; #endif struct pm_subsys_data *subsys_data; /* Owned by the subsystem. 
*/ + void (*set_latency_tolerance)(struct device *, s32); struct dev_pm_qos *qos; }; diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h index 2d8ce50877d8..0b476019be55 100644 --- a/include/linux/pm_qos.h +++ b/include/linux/pm_qos.h @@ -33,6 +33,9 @@ enum pm_qos_flags_status { #define PM_QOS_NETWORK_LAT_DEFAULT_VALUE (2000 * USEC_PER_SEC) #define PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE 0 #define PM_QOS_RESUME_LATENCY_DEFAULT_VALUE 0 +#define PM_QOS_LATENCY_TOLERANCE_DEFAULT_VALUE 0 +#define PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT (-1) +#define PM_QOS_LATENCY_ANY ((s32)(~(__u32)0 >> 1)) #define PM_QOS_FLAG_NO_POWER_OFF (1 << 0) #define PM_QOS_FLAG_REMOTE_WAKEUP (1 << 1) @@ -50,6 +53,7 @@ struct pm_qos_flags_request { enum dev_pm_qos_req_type { DEV_PM_QOS_RESUME_LATENCY = 1, + DEV_PM_QOS_LATENCY_TOLERANCE, DEV_PM_QOS_FLAGS, }; @@ -89,8 +93,10 @@ struct pm_qos_flags { struct dev_pm_qos { struct pm_qos_constraints resume_latency; + struct pm_qos_constraints latency_tolerance; struct pm_qos_flags flags; struct dev_pm_qos_request *resume_latency_req; + struct dev_pm_qos_request *latency_tolerance_req; struct dev_pm_qos_request *flags_req; }; @@ -196,6 +202,8 @@ void dev_pm_qos_hide_latency_limit(struct device *dev); int dev_pm_qos_expose_flags(struct device *dev, s32 value); void dev_pm_qos_hide_flags(struct device *dev); int dev_pm_qos_update_flags(struct device *dev, s32 mask, bool set); +s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev); +int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val); static inline s32 dev_pm_qos_requested_resume_latency(struct device *dev) { @@ -215,6 +223,10 @@ static inline int dev_pm_qos_expose_flags(struct device *dev, s32 value) static inline void dev_pm_qos_hide_flags(struct device *dev) {} static inline int dev_pm_qos_update_flags(struct device *dev, s32 m, bool set) { return 0; } +static inline s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev) + { return PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT; } +static inline int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val) + { return 0; } static inline s32 dev_pm_qos_requested_resume_latency(struct device *dev) { return 0; } static inline s32 dev_pm_qos_requested_flags(struct device *dev) { return 0; } diff --git a/kernel/power/qos.c b/kernel/power/qos.c index e23ae38e647f..884b77058864 100644 --- a/kernel/power/qos.c +++ b/kernel/power/qos.c @@ -173,6 +173,7 @@ int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node, { unsigned long flags; int prev_value, curr_value, new_value; + int ret; spin_lock_irqsave(&pm_qos_lock, flags); prev_value = pm_qos_get_value(c); @@ -208,13 +209,15 @@ int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node, trace_pm_qos_update_target(action, prev_value, curr_value); if (prev_value != curr_value) { - blocking_notifier_call_chain(c->notifiers, - (unsigned long)curr_value, - NULL); - return 1; + ret = 1; + if (c->notifiers) + blocking_notifier_call_chain(c->notifiers, + (unsigned long)curr_value, + NULL); } else { - return 0; + ret = 0; } + return ret; } /** -- cgit v1.2.3 From 71d821fdaec08afcbfb3cf258c0d64ea0e336ff3 Mon Sep 17 00:00:00 2001 From: Rafael J. 
Wysocki Date: Tue, 11 Feb 2014 00:36:00 +0100 Subject: PM / QoS: Add type to dev_pm_qos_add_ancestor_request() arguments Rework dev_pm_qos_add_ancestor_request() so that device PM QoS type is passed to it as the third argument and make it support the DEV_PM_QOS_LATENCY_TOLERANCE device PM QoS type (in addition to DEV_PM_QOS_RESUME_LATENCY). That will allow the drivers of devices without latency tolerance hardware support to use their ancestors having it as proxies for their latency tolerance requirements. Signed-off-by: Rafael J. Wysocki --- Documentation/power/pm_qos_interface.txt | 6 ++++-- drivers/base/power/qos.c | 22 +++++++++++++++++----- drivers/input/touchscreen/st1232.c | 3 ++- include/linux/pm_qos.h | 7 +++++-- 4 files changed, 28 insertions(+), 10 deletions(-) (limited to 'Documentation') diff --git a/Documentation/power/pm_qos_interface.txt b/Documentation/power/pm_qos_interface.txt index ed743bbad87c..a5da5c7e7128 100644 --- a/Documentation/power/pm_qos_interface.txt +++ b/Documentation/power/pm_qos_interface.txt @@ -134,9 +134,11 @@ The meaning of the return values is as follows: PM_QOS_FLAGS_UNDEFINED: The device's PM QoS structure has not been initialized or the list of requests is empty. -int dev_pm_qos_add_ancestor_request(dev, handle, value) +int dev_pm_qos_add_ancestor_request(dev, handle, type, value) Add a PM QoS request for the first direct ancestor of the given device whose -power.ignore_children flag is unset. +power.ignore_children flag is unset (for DEV_PM_QOS_RESUME_LATENCY requests) +or whose power.set_latency_tolerance callback pointer is not NULL (for +DEV_PM_QOS_LATENCY_TOLERANCE requests). int dev_pm_qos_expose_latency_limit(device, value) Add a request to the device's PM QoS list of resume latency constraints and diff --git a/drivers/base/power/qos.c b/drivers/base/power/qos.c index 84756f7f09d9..36b9eb4862cb 100644 --- a/drivers/base/power/qos.c +++ b/drivers/base/power/qos.c @@ -565,20 +565,32 @@ EXPORT_SYMBOL_GPL(dev_pm_qos_remove_global_notifier); * dev_pm_qos_add_ancestor_request - Add PM QoS request for device's ancestor. * @dev: Device whose ancestor to add the request for. * @req: Pointer to the preallocated handle. + * @type: Type of the request. * @value: Constraint latency value. */ int dev_pm_qos_add_ancestor_request(struct device *dev, - struct dev_pm_qos_request *req, s32 value) + struct dev_pm_qos_request *req, + enum dev_pm_qos_req_type type, s32 value) { struct device *ancestor = dev->parent; int ret = -ENODEV; - while (ancestor && !ancestor->power.ignore_children) - ancestor = ancestor->parent; + switch (type) { + case DEV_PM_QOS_RESUME_LATENCY: + while (ancestor && !ancestor->power.ignore_children) + ancestor = ancestor->parent; + break; + case DEV_PM_QOS_LATENCY_TOLERANCE: + while (ancestor && !ancestor->power.set_latency_tolerance) + ancestor = ancestor->parent; + + break; + default: + ancestor = NULL; + } if (ancestor) - ret = dev_pm_qos_add_request(ancestor, req, - DEV_PM_QOS_RESUME_LATENCY, value); + ret = dev_pm_qos_add_request(ancestor, req, type, value); if (ret < 0) req->dev = NULL; diff --git a/drivers/input/touchscreen/st1232.c b/drivers/input/touchscreen/st1232.c index 5c342b3139e8..3c0f57efe7b1 100644 --- a/drivers/input/touchscreen/st1232.c +++ b/drivers/input/touchscreen/st1232.c @@ -134,7 +134,8 @@ static irqreturn_t st1232_ts_irq_handler(int irq, void *dev_id) } else if (!ts->low_latency_req.dev) { /* First contact, request 100 us latency. 
*/ dev_pm_qos_add_ancestor_request(&ts->client->dev, - &ts->low_latency_req, 100); + &ts->low_latency_req, + DEV_PM_QOS_RESUME_LATENCY, 100); } /* SYN_REPORT */ diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h index 0b476019be55..9ab4bf7c4646 100644 --- a/include/linux/pm_qos.h +++ b/include/linux/pm_qos.h @@ -149,7 +149,8 @@ int dev_pm_qos_remove_global_notifier(struct notifier_block *notifier); void dev_pm_qos_constraints_init(struct device *dev); void dev_pm_qos_constraints_destroy(struct device *dev); int dev_pm_qos_add_ancestor_request(struct device *dev, - struct dev_pm_qos_request *req, s32 value); + struct dev_pm_qos_request *req, + enum dev_pm_qos_req_type type, s32 value); #else static inline enum pm_qos_flags_status __dev_pm_qos_flags(struct device *dev, s32 mask) @@ -192,7 +193,9 @@ static inline void dev_pm_qos_constraints_destroy(struct device *dev) dev->power.power_state = PMSG_INVALID; } static inline int dev_pm_qos_add_ancestor_request(struct device *dev, - struct dev_pm_qos_request *req, s32 value) + struct dev_pm_qos_request *req, + enum dev_pm_qos_req_type type, + s32 value) { return 0; } #endif -- cgit v1.2.3 From 4dde507fc1984435f28862ddd1beb90822aa116c Mon Sep 17 00:00:00 2001 From: Lv Zheng Date: Tue, 11 Feb 2014 11:01:52 +0800 Subject: ACPICA: Add boot option to disable auto return object repair Sometimes, there might be bugs caused by unexpected AML which is compliant with Windows but not with the Linux implementation. There is a predefined validation mechanism implemented in ACPICA to repair the unexpected AML evaluation results that are caused by the unexpected AMLs. For example, the BIOS may return a misordered _CST result, and the repair mechanism can sort the returned _CST package object in ascending order based on the C-state type. This mechanism is quite useful for implementing an AML interpreter with better compliance with the real world, where Windows is the de-facto standard and BIOS code is often only tested on one platform and thus not compliant with the ACPI specification. But if a compliance issue hasn't been figured out yet, it will be difficult for developers to identify whether the unexpected evaluation result is caused by this mechanism or by the AML interpreter. For example, _PR0 is expected to be a control method, but BIOS may use Package: "Name(_PR0, Package(1) {P1PR})". This boot option can disable the predefined validation mechanism so that developers can make sure the root cause comes from the parser/executer. This patch adds a new kernel parameter to disable this feature. A build test has been made on a Dell Inspiron mini 1100 (i386 z530) machine with this patch applied, and the corresponding boot test was performed w/ or w/o the new kernel parameter specified. References: https://bugzilla.kernel.org/show_bug.cgi?id=67901 Tested-by: Fabian Wehning Signed-off-by: Lv Zheng Signed-off-by: Rafael J. Wysocki --- Documentation/kernel-parameters.txt | 8 ++++++++ drivers/acpi/osl.c | 11 +++++++++++ 2 files changed, 19 insertions(+) (limited to 'Documentation') diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt index 7116fda7077f..bf0fda0125f0 100644 --- a/Documentation/kernel-parameters.txt +++ b/Documentation/kernel-parameters.txt @@ -231,6 +231,14 @@ bytes respectively. Such letter suffixes can also be entirely omitted. 
acpi_no_auto_ssdt [HW,ACPI] Disable automatic loading of SSDT + acpica_no_return_repair [HW, ACPI] + Disable AML predefined validation mechanism + This mechanism can repair the evaluation result to make + the return objects more ACPI specification compliant. + This option is useful for developers to identify the + root cause of an AML interpreter issue when the issue + has something to do with the repair mechanism. + acpi_os_name= [HW,ACPI] Tell ACPI BIOS the name of the OS Format: To spoof as Windows 98: ="Microsoft Windows" diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c index fc1aa7909690..0d7b7145399e 100644 --- a/drivers/acpi/osl.c +++ b/drivers/acpi/osl.c @@ -1780,6 +1780,17 @@ static int __init acpi_no_auto_ssdt_setup(char *s) __setup("acpi_no_auto_ssdt", acpi_no_auto_ssdt_setup); +static int __init acpi_disable_return_repair(char *s) +{ + printk(KERN_NOTICE PREFIX + "ACPI: Predefined validation mechanism disabled\n"); + acpi_gbl_disable_auto_repair = TRUE; + + return 1; +} + +__setup("acpica_no_return_repair", acpi_disable_return_repair); + acpi_status __init acpi_os_initialize(void) { acpi_os_map_generic_address(&acpi_gbl_FADT.xpm1a_event_block); -- cgit v1.2.3 From af02b5fdb1fb3c5d5f8d71f7f84e4fb243e1ae31 Mon Sep 17 00:00:00 2001 From: Geert Uytterhoeven Date: Tue, 11 Mar 2014 12:08:11 +0100 Subject: PM: Add missing "freeze" state Fix descriptions of /sys/power/state in the documentation and in a code comment. Signed-off-by: Geert Uytterhoeven Reviewed-by: Srivatsa S. Bhat Acked-by: Pavel Machek [rjw: Changelog] Signed-off-by: Rafael J. Wysocki --- Documentation/ABI/testing/sysfs-power | 5 +++-- kernel/power/main.c | 4 ++-- 2 files changed, 5 insertions(+), 4 deletions(-) (limited to 'Documentation') diff --git a/Documentation/ABI/testing/sysfs-power b/Documentation/ABI/testing/sysfs-power index 205a73878441..64c9276e9421 100644 --- a/Documentation/ABI/testing/sysfs-power +++ b/Documentation/ABI/testing/sysfs-power @@ -12,8 +12,9 @@ Contact: Rafael J. Wysocki Description: The /sys/power/state file controls the system power state. Reading from this file returns what states are supported, - which is hard-coded to 'standby' (Power-On Suspend), 'mem' - (Suspend-to-RAM), and 'disk' (Suspend-to-Disk). + which is hard-coded to 'freeze' (Low-Power Idle), 'standby' + (Power-On Suspend), 'mem' (Suspend-to-RAM), and 'disk' + (Suspend-to-Disk). Writing to this file one of these strings causes the system to transition into that state. Please see the file diff --git a/kernel/power/main.c b/kernel/power/main.c index 1d1bf630e6e9..6271bc4073ef 100644 --- a/kernel/power/main.c +++ b/kernel/power/main.c @@ -282,8 +282,8 @@ struct kobject *power_kobj; * state - control system power state. * * show() returns what states are supported, which is hard-coded to - * 'standby' (Power-On Suspend), 'mem' (Suspend-to-RAM), and - * 'disk' (Suspend-to-Disk). + * 'freeze' (Low-Power Idle), 'standby' (Power-On Suspend), + * 'mem' (Suspend-to-RAM), and 'disk' (Suspend-to-Disk). * * store() accepts one of those strings, translates it into the * proper enumerated value, and initiates a suspend transition. -- cgit v1.2.3 From 0b443ead714f0cba797a7f2476dd756f22b5421e Mon Sep 17 00:00:00 2001 From: Viresh Kumar Date: Wed, 19 Mar 2014 11:24:58 +0530 Subject: cpufreq: remove unused notifier: CPUFREQ_{SUSPENDCHANGE|RESUMECHANGE} Two cpufreq notifiers CPUFREQ_RESUMECHANGE and CPUFREQ_SUSPENDCHANGE have not been used for some time, so remove them to clean up code a bit. 
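After this cleanup a transition notifier only ever sees CPUFREQ_PRECHANGE and CPUFREQ_POSTCHANGE, so a callback reduces to something like the following minimal sketch (the foo_* names are hypothetical and not taken from any of the drivers touched below):

#include <linux/cpufreq.h>
#include <linux/notifier.h>

static int foo_cpufreq_transition(struct notifier_block *nb,
				  unsigned long val, void *data)
{
	struct cpufreq_freqs *freq = data;

	if (val == CPUFREQ_POSTCHANGE && freq->old != freq->new) {
		/* rescale any frequency-dependent per-CPU state for freq->new */
	}

	return NOTIFY_OK;
}

static struct notifier_block foo_cpufreq_nb = {
	.notifier_call = foo_cpufreq_transition,
};

/* registered with cpufreq_register_notifier(&foo_cpufreq_nb, CPUFREQ_TRANSITION_NOTIFIER) */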
Signed-off-by: Viresh Kumar Reviewed-by: Srivatsa S. Bhat [rjw: Changelog] Signed-off-by: Rafael J. Wysocki --- Documentation/cpu-freq/core.txt | 4 ---- arch/arm/kernel/smp.c | 3 +-- arch/arm/kernel/smp_twd.c | 2 +- arch/arm/mach-pxa/viper.c | 3 --- arch/powerpc/oprofile/op_model_cell.c | 3 +-- arch/sparc/kernel/time_64.c | 3 +-- arch/x86/kernel/tsc.c | 3 +-- drivers/cpufreq/cpufreq.c | 3 +-- drivers/pcmcia/sa11xx_base.c | 3 --- drivers/tty/serial/sh-sci.c | 3 +-- include/linux/cpufreq.h | 2 -- 11 files changed, 7 insertions(+), 25 deletions(-) (limited to 'Documentation') diff --git a/Documentation/cpu-freq/core.txt b/Documentation/cpu-freq/core.txt index ce0666e51036..0060d76b445f 100644 --- a/Documentation/cpu-freq/core.txt +++ b/Documentation/cpu-freq/core.txt @@ -92,7 +92,3 @@ values: cpu - number of the affected CPU old - old frequency new - new frequency - -If the cpufreq core detects the frequency has changed while the system -was suspended, these notifiers are called with CPUFREQ_RESUMECHANGE as -second argument. diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c index b7b4c86e338b..7c4fada440f0 100644 --- a/arch/arm/kernel/smp.c +++ b/arch/arm/kernel/smp.c @@ -674,8 +674,7 @@ static int cpufreq_callback(struct notifier_block *nb, } if ((val == CPUFREQ_PRECHANGE && freq->old < freq->new) || - (val == CPUFREQ_POSTCHANGE && freq->old > freq->new) || - (val == CPUFREQ_RESUMECHANGE || val == CPUFREQ_SUSPENDCHANGE)) { + (val == CPUFREQ_POSTCHANGE && freq->old > freq->new)) { loops_per_jiffy = cpufreq_scale(global_l_p_j_ref, global_l_p_j_ref_freq, freq->new); diff --git a/arch/arm/kernel/smp_twd.c b/arch/arm/kernel/smp_twd.c index 6591e26fc13f..dfc32130bc44 100644 --- a/arch/arm/kernel/smp_twd.c +++ b/arch/arm/kernel/smp_twd.c @@ -166,7 +166,7 @@ static int twd_cpufreq_transition(struct notifier_block *nb, * frequency. The timer is local to a cpu, so cross-call to the * changing cpu. 
*/ - if (state == CPUFREQ_POSTCHANGE || state == CPUFREQ_RESUMECHANGE) + if (state == CPUFREQ_POSTCHANGE) smp_call_function_single(freqs->cpu, twd_update_frequency, NULL, 1); diff --git a/arch/arm/mach-pxa/viper.c b/arch/arm/mach-pxa/viper.c index 29905b127ad9..41f27f667ca8 100644 --- a/arch/arm/mach-pxa/viper.c +++ b/arch/arm/mach-pxa/viper.c @@ -885,9 +885,6 @@ static int viper_cpufreq_notifier(struct notifier_block *nb, viper_set_core_cpu_voltage(freq->new, 0); } break; - case CPUFREQ_RESUMECHANGE: - viper_set_core_cpu_voltage(freq->new, 0); - break; default: /* ignore */ break; diff --git a/arch/powerpc/oprofile/op_model_cell.c b/arch/powerpc/oprofile/op_model_cell.c index 1f0ebdeea5f7..863d89386f60 100644 --- a/arch/powerpc/oprofile/op_model_cell.c +++ b/arch/powerpc/oprofile/op_model_cell.c @@ -1121,8 +1121,7 @@ oprof_cpufreq_notify(struct notifier_block *nb, unsigned long val, void *data) int ret = 0; struct cpufreq_freqs *frq = data; if ((val == CPUFREQ_PRECHANGE && frq->old < frq->new) || - (val == CPUFREQ_POSTCHANGE && frq->old > frq->new) || - (val == CPUFREQ_RESUMECHANGE || val == CPUFREQ_SUSPENDCHANGE)) + (val == CPUFREQ_POSTCHANGE && frq->old > frq->new)) set_spu_profiling_frequency(frq->new, spu_cycle_reset); return ret; } diff --git a/arch/sparc/kernel/time_64.c b/arch/sparc/kernel/time_64.c index c3d82b5f54ca..b397e053b872 100644 --- a/arch/sparc/kernel/time_64.c +++ b/arch/sparc/kernel/time_64.c @@ -659,8 +659,7 @@ static int sparc64_cpufreq_notifier(struct notifier_block *nb, unsigned long val ft->clock_tick_ref = cpu_data(cpu).clock_tick; } if ((val == CPUFREQ_PRECHANGE && freq->old < freq->new) || - (val == CPUFREQ_POSTCHANGE && freq->old > freq->new) || - (val == CPUFREQ_RESUMECHANGE)) { + (val == CPUFREQ_POSTCHANGE && freq->old > freq->new)) { cpu_data(cpu).clock_tick = cpufreq_scale(ft->clock_tick_ref, ft->ref_freq, diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c index cfbe99f88830..7a9296ab8834 100644 --- a/arch/x86/kernel/tsc.c +++ b/arch/x86/kernel/tsc.c @@ -914,8 +914,7 @@ static int time_cpufreq_notifier(struct notifier_block *nb, unsigned long val, tsc_khz_ref = tsc_khz; } if ((val == CPUFREQ_PRECHANGE && freq->old < freq->new) || - (val == CPUFREQ_POSTCHANGE && freq->old > freq->new) || - (val == CPUFREQ_RESUMECHANGE)) { + (val == CPUFREQ_POSTCHANGE && freq->old > freq->new)) { *lpj = cpufreq_scale(loops_per_jiffy_ref, ref_freq, freq->new); tsc_khz = cpufreq_scale(tsc_khz_ref, ref_freq, freq->new); diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 24cd6ff9bb0b..e3aa9deb40d9 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -264,8 +264,7 @@ static void adjust_jiffies(unsigned long val, struct cpufreq_freqs *ci) pr_debug("saving %lu as reference value for loops_per_jiffy; freq is %u kHz\n", l_p_j_ref, l_p_j_ref_freq); } - if ((val == CPUFREQ_POSTCHANGE && ci->old != ci->new) || - (val == CPUFREQ_RESUMECHANGE || val == CPUFREQ_SUSPENDCHANGE)) { + if (val == CPUFREQ_POSTCHANGE && ci->old != ci->new) { loops_per_jiffy = cpufreq_scale(l_p_j_ref, l_p_j_ref_freq, ci->new); pr_debug("scaling loops_per_jiffy to %lu for frequency %u kHz\n", diff --git a/drivers/pcmcia/sa11xx_base.c b/drivers/pcmcia/sa11xx_base.c index 6eecd7cddf57..54d3089d157b 100644 --- a/drivers/pcmcia/sa11xx_base.c +++ b/drivers/pcmcia/sa11xx_base.c @@ -125,9 +125,6 @@ sa1100_pcmcia_frequency_change(struct soc_pcmcia_socket *skt, if (freqs->new < freqs->old) sa1100_pcmcia_set_mecr(skt, freqs->new); break; - case CPUFREQ_RESUMECHANGE: - 
sa1100_pcmcia_set_mecr(skt, freqs->new); - break; } return 0; diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c index be33d2b0613b..7e0b62602632 100644 --- a/drivers/tty/serial/sh-sci.c +++ b/drivers/tty/serial/sh-sci.c @@ -1041,8 +1041,7 @@ static int sci_notifier(struct notifier_block *self, sci_port = container_of(self, struct sci_port, freq_transition); - if ((phase == CPUFREQ_POSTCHANGE) || - (phase == CPUFREQ_RESUMECHANGE)) { + if (phase == CPUFREQ_POSTCHANGE) { struct uart_port *port = &sci_port->port; spin_lock_irqsave(&port->lock, flags); diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h index 31c431e150a7..70929bcf1a9d 100644 --- a/include/linux/cpufreq.h +++ b/include/linux/cpufreq.h @@ -318,8 +318,6 @@ static inline void cpufreq_resume(void) {} /* Transition notifiers */ #define CPUFREQ_PRECHANGE (0) #define CPUFREQ_POSTCHANGE (1) -#define CPUFREQ_RESUMECHANGE (8) -#define CPUFREQ_SUSPENDCHANGE (9) /* Policy Notifiers */ #define CPUFREQ_ADJUST (0) -- cgit v1.2.3 From 367dc4aa932bfb33a3189064d33f7890a8ec1ca8 Mon Sep 17 00:00:00 2001 From: Dirk Brandewie Date: Wed, 19 Mar 2014 08:45:53 -0700 Subject: cpufreq: Add stop CPU callback to cpufreq_driver interface This callback allows the driver to do clean up before the CPU is completely down and its state cannot be modified. This is used by the intel_pstate driver to reduce the requested P state prior to the core going away. This is required because the requested P state of the offline core is used to select the package P state. This effectively sets the floor package P state to the requested P state on the offline core. Signed-off-by: Dirk Brandewie [rjw: Minor modifications] Signed-off-by: Rafael J. Wysocki --- Documentation/cpu-freq/cpu-drivers.txt | 8 +++++++- drivers/cpufreq/cpufreq.c | 2 ++ include/linux/cpufreq.h | 1 + 3 files changed, 10 insertions(+), 1 deletion(-) (limited to 'Documentation') diff --git a/Documentation/cpu-freq/cpu-drivers.txt b/Documentation/cpu-freq/cpu-drivers.txt index 8b1a4451422e..48da5fdcb9f1 100644 --- a/Documentation/cpu-freq/cpu-drivers.txt +++ b/Documentation/cpu-freq/cpu-drivers.txt @@ -61,7 +61,13 @@ target_index - See below on the differences. And optionally -cpufreq_driver.exit - A pointer to a per-CPU cleanup function. +cpufreq_driver.exit - A pointer to a per-CPU cleanup + function called during CPU_POST_DEAD + phase of cpu hotplug process. + +cpufreq_driver.stop_cpu - A pointer to a per-CPU stop function + called during CPU_DOWN_PREPARE phase of + cpu hotplug process. 
cpufreq_driver.resume - A pointer to a per-CPU resume function which is called with interrupts disabled diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 3c56c4759aa1..3aa7a7a226b3 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -1337,6 +1337,8 @@ static int __cpufreq_remove_dev_prepare(struct device *dev, pr_debug("%s: policy Kobject moved to cpu: %d from: %d\n", __func__, new_cpu, cpu); } + } else if (cpufreq_driver->stop_cpu && cpufreq_driver->setpolicy) { + cpufreq_driver->stop_cpu(policy); } return 0; diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h index 70929bcf1a9d..2d2e62c8666a 100644 --- a/include/linux/cpufreq.h +++ b/include/linux/cpufreq.h @@ -227,6 +227,7 @@ struct cpufreq_driver { int (*bios_limit) (int cpu, unsigned int *limit); int (*exit) (struct cpufreq_policy *policy); + void (*stop_cpu) (struct cpufreq_policy *policy); int (*suspend) (struct cpufreq_policy *policy); int (*resume) (struct cpufreq_policy *policy); struct freq_attr **attr; -- cgit v1.2.3
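To illustrate the new hook, a setpolicy-style driver might wire it up along these lines (a minimal sketch; the foo_* functions are hypothetical and stand in for driver-specific P-state handling such as intel_pstate's):

#include <linux/cpufreq.h>

static int foo_cpufreq_set_policy(struct cpufreq_policy *policy);	/* assumed to exist elsewhere in the driver */

/* Hypothetical helper: drop the departing CPU's request to its minimum P-state. */
static void foo_set_min_pstate(unsigned int cpu)
{
	/* write the lowest performance request for this CPU here */
}

static void foo_cpufreq_stop_cpu(struct cpufreq_policy *policy)
{
	/* runs during CPU_DOWN_PREPARE, while the offline CPU's request still counts */
	foo_set_min_pstate(policy->cpu);
}

static struct cpufreq_driver foo_cpufreq_driver = {
	.name		= "foo_cpufreq",
	.setpolicy	= foo_cpufreq_set_policy,	/* stop_cpu is only called for setpolicy drivers */
	.stop_cpu	= foo_cpufreq_stop_cpu,
	/* .init, .verify and the rest are omitted from this sketch */
};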