From 8be3f96ceddb911539a53d87a66da84a04502366 Mon Sep 17 00:00:00 2001 From: ye xingchen Date: Wed, 12 Oct 2022 01:26:29 +0000 Subject: timers: Replace in_irq() with in_hardirq() Replace the obsolete and ambiguous macro in_irq() with new macro in_hardirq(). Signed-off-by: ye xingchen Signed-off-by: Thomas Gleixner Acked-by: John Stultz Link: https://lore.kernel.org/r/20221012012629.334966-1-ye.xingchen@zte.com.cn --- kernel/time/timer.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/time/timer.c b/kernel/time/timer.c index 717fcb9fb14a..f40c88c156fe 100644 --- a/kernel/time/timer.c +++ b/kernel/time/timer.c @@ -1422,7 +1422,7 @@ int del_timer_sync(struct timer_list *timer) * don't use it in hardirq context, because it * could lead to deadlock. */ - WARN_ON(in_irq() && !(timer->flags & TIMER_IRQSAFE)); + WARN_ON(in_hardirq() && !(timer->flags & TIMER_IRQSAFE)); /* * Must be able to sleep on PREEMPT_RT because of the slowpath in -- cgit v1.2.3 From 2f117484329b233455ee278f2d9b0a4356835060 Mon Sep 17 00:00:00 2001 From: Barnabás Pőcze Date: Mon, 14 Nov 2022 19:54:23 +0000 Subject: timerqueue: Use rb_entry_safe() in timerqueue_getnext() When `timerqueue_getnext()` is called on an empty timer queue, it will use `rb_entry()` on a NULL pointer, which is invalid. Fix that by using `rb_entry_safe()` which handles NULL pointers. This has not caused any issues so far because the offset of the `rb_node` member in `timerqueue_node` is 0, so `rb_entry()` is essentially a no-op. Fixes: 511885d7061e ("lib/timerqueue: Rely on rbtree semantics for next timer") Signed-off-by: Barnabás Pőcze Signed-off-by: Thomas Gleixner Link: https://lore.kernel.org/r/20221114195421.342929-1-pobrn@protonmail.com --- include/linux/timerqueue.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/linux/timerqueue.h b/include/linux/timerqueue.h index 93884086f392..adc80e29168e 100644 --- a/include/linux/timerqueue.h +++ b/include/linux/timerqueue.h @@ -35,7 +35,7 @@ struct timerqueue_node *timerqueue_getnext(struct timerqueue_head *head) { struct rb_node *leftmost = rb_first_cached(&head->rb_root); - return rb_entry(leftmost, struct timerqueue_node, node); + return rb_entry_safe(leftmost, struct timerqueue_node, node); } static inline void timerqueue_init(struct timerqueue_node *node) -- cgit v1.2.3 From b0b0aa5d858d4d2fe39a5e4486e0550e858108f6 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 23 Nov 2022 21:18:31 +0100 Subject: Documentation: Remove bogus claim about del_timer_sync() del_timer_sync() does not return the number of times it tried to delete the timer which rearms itself. It's clearly documented: The function returns whether it has deactivated a pending timer or not. This part of the documentation is from 2003 where del_timer_sync() really returned the number of deletion attempts for unknown reasons. The code was rewritten in 2005, but the documentation was not updated. 
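For illustration, a minimal sketch of how the boolean return value reads
in a hypothetical driver teardown path. struct foo, its stopping flag and
the foo_stop() helper are made up for this example; only del_timer_sync()
and its 0/1 return value come from the actual API:

  #include <linux/timer.h>
  #include <linux/printk.h>

  /* Hypothetical driver context, for illustration only. */
  struct foo {
          struct timer_list timer;
          bool stopping;
  };

  static void foo_stop(struct foo *priv)
  {
          /* The caller must prevent rearming by its own means. */
          priv->stopping = true;

          /*
           * 1: a pending timer was deactivated, 0: the timer was not
           * pending. It has never been a retry count since the 2005
           * rewrite.
           */
          if (del_timer_sync(&priv->timer))
                  pr_debug("foo: timer was still pending at stop\n");
  }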
Signed-off-by: Thomas Gleixner Reviewed-by: Jacob Keller Reviewed-by: Anna-Maria Behnsen Link: https://lore.kernel.org/r/20221123201624.452282769@linutronix.de --- Documentation/kernel-hacking/locking.rst | 3 +-- Documentation/translations/it_IT/kernel-hacking/locking.rst | 4 +--- 2 files changed, 2 insertions(+), 5 deletions(-) diff --git a/Documentation/kernel-hacking/locking.rst b/Documentation/kernel-hacking/locking.rst index 6805ae6e86e6..b26e4a3a9b7e 100644 --- a/Documentation/kernel-hacking/locking.rst +++ b/Documentation/kernel-hacking/locking.rst @@ -1006,8 +1006,7 @@ Another common problem is deleting timers which restart themselves (by calling add_timer() at the end of their timer function). Because this is a fairly common case which is prone to races, you should use del_timer_sync() (``include/linux/timer.h``) to -handle this case. It returns the number of times the timer had to be -deleted before we finally stopped it from adding itself back in. +handle this case. Locking Speed ============= diff --git a/Documentation/translations/it_IT/kernel-hacking/locking.rst b/Documentation/translations/it_IT/kernel-hacking/locking.rst index 51af37f2d621..eddfba806e13 100644 --- a/Documentation/translations/it_IT/kernel-hacking/locking.rst +++ b/Documentation/translations/it_IT/kernel-hacking/locking.rst @@ -1027,9 +1027,7 @@ Un altro problema è l'eliminazione dei temporizzatori che si riavviano da soli (chiamando add_timer() alla fine della loro esecuzione). Dato che questo è un problema abbastanza comune con una propensione alle corse critiche, dovreste usare del_timer_sync() -(``include/linux/timer.h``) per gestire questo caso. Questa ritorna il -numero di volte che il temporizzatore è stato interrotto prima che -fosse in grado di fermarlo senza che si riavviasse. +(``include/linux/timer.h``) per gestire questo caso. Velocità della sincronizzazione =============================== -- cgit v1.2.3 From 80b55772d41d8afec68dbc4ff0368a9fe5d1f390 Mon Sep 17 00:00:00 2001 From: Steven Rostedt (Google) Date: Wed, 23 Nov 2022 21:18:32 +0100 Subject: ARM: spear: Do not use timer namespace for timer_shutdown() function A new "shutdown" timer state is being added to the generic timer code. One of the functions to change the timer into the state is called "timer_shutdown()". This means that there can not be other functions called "timer_shutdown()" as the timer code owns the "timer_*" name space. Rename timer_shutdown() to spear_timer_shutdown() to avoid this conflict. 
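To make the conflict concrete, a sketch follows. The prototype of the
generic function is an assumption derived from the naming described
above (it is only introduced by a later patch in this series); the
driver-local helper is the one renamed by this patch:

  /* Assumed shape of the upcoming generic interface in <linux/timer.h>: */
  int timer_shutdown(struct timer_list *timer);

  /*
   * A file-local helper with the same name, such as the old
   *
   *      static inline void timer_shutdown(struct clock_event_device *evt);
   *
   * in arch/arm/mach-spear/time.c, clashes with that declaration as soon
   * as both end up in the same translation unit, hence the rename to
   * spear_timer_shutdown().
   */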
Signed-off-by: Steven Rostedt (Google) Signed-off-by: Thomas Gleixner Tested-by: Guenter Roeck Reviewed-by: Guenter Roeck Reviewed-by: Jacob Keller Reviewed-by: Anna-Maria Behnsen Acked-by: Arnd Bergmann Acked-by: Viresh Kumar Link: https://lkml.kernel.org/r/20221106212701.822440504@goodmis.org Link: https://lore.kernel.org/all/20221105060155.228348078@goodmis.org/ Link: https://lore.kernel.org/r/20221110064146.810953418@goodmis.org Link: https://lore.kernel.org/r/20221123201624.513863211@linutronix.de --- arch/arm/mach-spear/time.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/arm/mach-spear/time.c b/arch/arm/mach-spear/time.c index e979e2197f8e..5371c824786d 100644 --- a/arch/arm/mach-spear/time.c +++ b/arch/arm/mach-spear/time.c @@ -90,7 +90,7 @@ static void __init spear_clocksource_init(void) 200, 16, clocksource_mmio_readw_up); } -static inline void timer_shutdown(struct clock_event_device *evt) +static inline void spear_timer_shutdown(struct clock_event_device *evt) { u16 val = readw(gpt_base + CR(CLKEVT)); @@ -101,7 +101,7 @@ static inline void timer_shutdown(struct clock_event_device *evt) static int spear_shutdown(struct clock_event_device *evt) { - timer_shutdown(evt); + spear_timer_shutdown(evt); return 0; } @@ -111,7 +111,7 @@ static int spear_set_oneshot(struct clock_event_device *evt) u16 val; /* stop the timer */ - timer_shutdown(evt); + spear_timer_shutdown(evt); val = readw(gpt_base + CR(CLKEVT)); val |= CTRL_ONE_SHOT; @@ -126,7 +126,7 @@ static int spear_set_periodic(struct clock_event_device *evt) u16 val; /* stop the timer */ - timer_shutdown(evt); + spear_timer_shutdown(evt); period = clk_get_rate(gpt_clk) / HZ; period >>= CTRL_PRESCALER16; -- cgit v1.2.3 From 73737a5833ace25a8408b0d3b783637cb6bf29d1 Mon Sep 17 00:00:00 2001 From: Steven Rostedt (Google) Date: Wed, 23 Nov 2022 21:18:34 +0100 Subject: clocksource/drivers/arm_arch_timer: Do not use timer namespace for timer_shutdown() function A new "shutdown" timer state is being added to the generic timer code. One of the functions to change the timer into the state is called "timer_shutdown()". This means that there can not be other functions called "timer_shutdown()" as the timer code owns the "timer_*" name space. Rename timer_shutdown() to arch_timer_shutdown() to avoid this conflict. 
Signed-off-by: Steven Rostedt (Google) Signed-off-by: Thomas Gleixner Tested-by: Guenter Roeck Reviewed-by: Guenter Roeck Reviewed-by: Jacob Keller Reviewed-by: Anna-Maria Behnsen Acked-by: Marc Zyngier Link: https://lkml.kernel.org/r/20221106212702.002251651@goodmis.org Link: https://lore.kernel.org/all/20221105060155.409832154@goodmis.org/ Link: https://lore.kernel.org/r/20221110064146.981725531@goodmis.org Link: https://lore.kernel.org/r/20221123201624.574672568@linutronix.de --- drivers/clocksource/arm_arch_timer.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c index a7ff77550e17..9c3420a0d19d 100644 --- a/drivers/clocksource/arm_arch_timer.c +++ b/drivers/clocksource/arm_arch_timer.c @@ -687,8 +687,8 @@ static irqreturn_t arch_timer_handler_virt_mem(int irq, void *dev_id) return timer_handler(ARCH_TIMER_MEM_VIRT_ACCESS, evt); } -static __always_inline int timer_shutdown(const int access, - struct clock_event_device *clk) +static __always_inline int arch_timer_shutdown(const int access, + struct clock_event_device *clk) { unsigned long ctrl; @@ -701,22 +701,22 @@ static __always_inline int timer_shutdown(const int access, static int arch_timer_shutdown_virt(struct clock_event_device *clk) { - return timer_shutdown(ARCH_TIMER_VIRT_ACCESS, clk); + return arch_timer_shutdown(ARCH_TIMER_VIRT_ACCESS, clk); } static int arch_timer_shutdown_phys(struct clock_event_device *clk) { - return timer_shutdown(ARCH_TIMER_PHYS_ACCESS, clk); + return arch_timer_shutdown(ARCH_TIMER_PHYS_ACCESS, clk); } static int arch_timer_shutdown_virt_mem(struct clock_event_device *clk) { - return timer_shutdown(ARCH_TIMER_MEM_VIRT_ACCESS, clk); + return arch_timer_shutdown(ARCH_TIMER_MEM_VIRT_ACCESS, clk); } static int arch_timer_shutdown_phys_mem(struct clock_event_device *clk) { - return timer_shutdown(ARCH_TIMER_MEM_PHYS_ACCESS, clk); + return arch_timer_shutdown(ARCH_TIMER_MEM_PHYS_ACCESS, clk); } static __always_inline void set_next_event(const int access, unsigned long evt, -- cgit v1.2.3 From 6e1fc2591f116dfb20b65cf27356475461d61bd8 Mon Sep 17 00:00:00 2001 From: Steven Rostedt (Google) Date: Wed, 23 Nov 2022 21:18:36 +0100 Subject: clocksource/drivers/sp804: Do not use timer namespace for timer_shutdown() function A new "shutdown" timer state is being added to the generic timer code. One of the functions to change the timer into the state is called "timer_shutdown()". This means that there can not be other functions called "timer_shutdown()" as the timer code owns the "timer_*" name space. Rename timer_shutdown() to evt_timer_shutdown() to avoid this conflict. 
Signed-off-by: Steven Rostedt (Google) Signed-off-by: Thomas Gleixner Tested-by: Guenter Roeck Reviewed-by: Guenter Roeck Reviewed-by: Jacob Keller Reviewed-by: Anna-Maria Behnsen Link: https://lkml.kernel.org/r/20221106212702.182883323@goodmis.org Link: https://lore.kernel.org/all/20221105060155.592778858@goodmis.org/ Link: https://lore.kernel.org/r/20221110064147.158230501@goodmis.org Link: https://lore.kernel.org/r/20221123201624.634354813@linutronix.de --- drivers/clocksource/timer-sp804.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/clocksource/timer-sp804.c b/drivers/clocksource/timer-sp804.c index e6a87f4af2b5..cd1916c05325 100644 --- a/drivers/clocksource/timer-sp804.c +++ b/drivers/clocksource/timer-sp804.c @@ -155,14 +155,14 @@ static irqreturn_t sp804_timer_interrupt(int irq, void *dev_id) return IRQ_HANDLED; } -static inline void timer_shutdown(struct clock_event_device *evt) +static inline void evt_timer_shutdown(struct clock_event_device *evt) { writel(0, common_clkevt->ctrl); } static int sp804_shutdown(struct clock_event_device *evt) { - timer_shutdown(evt); + evt_timer_shutdown(evt); return 0; } @@ -171,7 +171,7 @@ static int sp804_set_periodic(struct clock_event_device *evt) unsigned long ctrl = TIMER_CTRL_32BIT | TIMER_CTRL_IE | TIMER_CTRL_PERIODIC | TIMER_CTRL_ENABLE; - timer_shutdown(evt); + evt_timer_shutdown(evt); writel(common_clkevt->reload, common_clkevt->load); writel(ctrl, common_clkevt->ctrl); return 0; -- cgit v1.2.3 From 9a5a305686971f4be10c6d7251c8348d74b3e014 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 23 Nov 2022 21:18:37 +0100 Subject: timers: Get rid of del_singleshot_timer_sync() del_singleshot_timer_sync() used to be an optimization for deleting timers which are not rearmed from the timer callback function. This optimization turned out to be broken and got mapped to del_timer_sync() about 17 years ago. Get rid of the undocumented indirection and use del_timer_sync() directly. No functional change. 
Signed-off-by: Thomas Gleixner Tested-by: Guenter Roeck Reviewed-by: Jacob Keller Reviewed-by: Anna-Maria Behnsen Link: https://lore.kernel.org/r/20221123201624.706987932@linutronix.de --- drivers/char/tpm/tpm-dev-common.c | 4 ++-- drivers/staging/wlan-ng/hfa384x_usb.c | 4 ++-- drivers/staging/wlan-ng/prism2usb.c | 6 +++--- include/linux/timer.h | 2 -- kernel/time/timer.c | 2 +- net/sunrpc/xprt.c | 2 +- 6 files changed, 9 insertions(+), 11 deletions(-) diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c index dc4c0a0a5129..30b4c288c1bb 100644 --- a/drivers/char/tpm/tpm-dev-common.c +++ b/drivers/char/tpm/tpm-dev-common.c @@ -155,7 +155,7 @@ ssize_t tpm_common_read(struct file *file, char __user *buf, out: if (!priv->response_length) { *off = 0; - del_singleshot_timer_sync(&priv->user_read_timer); + del_timer_sync(&priv->user_read_timer); flush_work(&priv->timeout_work); } mutex_unlock(&priv->buffer_mutex); @@ -262,7 +262,7 @@ __poll_t tpm_common_poll(struct file *file, poll_table *wait) void tpm_common_release(struct file *file, struct file_priv *priv) { flush_work(&priv->async_work); - del_singleshot_timer_sync(&priv->user_read_timer); + del_timer_sync(&priv->user_read_timer); flush_work(&priv->timeout_work); file->private_data = NULL; priv->response_length = 0; diff --git a/drivers/staging/wlan-ng/hfa384x_usb.c b/drivers/staging/wlan-ng/hfa384x_usb.c index 02fdef7a16c8..c7cd54171d99 100644 --- a/drivers/staging/wlan-ng/hfa384x_usb.c +++ b/drivers/staging/wlan-ng/hfa384x_usb.c @@ -1116,8 +1116,8 @@ cleanup: if (ctlx == get_active_ctlx(hw)) { spin_unlock_irqrestore(&hw->ctlxq.lock, flags); - del_singleshot_timer_sync(&hw->reqtimer); - del_singleshot_timer_sync(&hw->resptimer); + del_timer_sync(&hw->reqtimer); + del_timer_sync(&hw->resptimer); hw->req_timer_done = 1; hw->resp_timer_done = 1; usb_kill_urb(&hw->ctlx_urb); diff --git a/drivers/staging/wlan-ng/prism2usb.c b/drivers/staging/wlan-ng/prism2usb.c index e13da7fadfff..c13f1699e5a2 100644 --- a/drivers/staging/wlan-ng/prism2usb.c +++ b/drivers/staging/wlan-ng/prism2usb.c @@ -170,9 +170,9 @@ static void prism2sta_disconnect_usb(struct usb_interface *interface) */ prism2sta_ifstate(wlandev, P80211ENUM_ifstate_disable); - del_singleshot_timer_sync(&hw->throttle); - del_singleshot_timer_sync(&hw->reqtimer); - del_singleshot_timer_sync(&hw->resptimer); + del_timer_sync(&hw->throttle); + del_timer_sync(&hw->reqtimer); + del_timer_sync(&hw->resptimer); /* Unlink all the URBs. This "removes the wheels" * from the entire CTLX handling mechanism. 
diff --git a/include/linux/timer.h b/include/linux/timer.h index 648f00105f58..ea35d00123f0 100644 --- a/include/linux/timer.h +++ b/include/linux/timer.h @@ -190,8 +190,6 @@ extern int try_to_del_timer_sync(struct timer_list *timer); # define del_timer_sync(t) del_timer(t) #endif -#define del_singleshot_timer_sync(t) del_timer_sync(t) - extern void init_timers(void); struct hrtimer; extern enum hrtimer_restart it_real_fn(struct hrtimer *); diff --git a/kernel/time/timer.c b/kernel/time/timer.c index f40c88c156fe..190e06980d87 100644 --- a/kernel/time/timer.c +++ b/kernel/time/timer.c @@ -1933,7 +1933,7 @@ signed long __sched schedule_timeout(signed long timeout) timer_setup_on_stack(&timer.timer, process_timeout, 0); __mod_timer(&timer.timer, expire, MOD_TIMER_NOTPENDING); schedule(); - del_singleshot_timer_sync(&timer.timer); + del_timer_sync(&timer.timer); /* Remove the timer from the object tracker */ destroy_timer_on_stack(&timer.timer); diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c index 656cec208371..ab453ede54f0 100644 --- a/net/sunrpc/xprt.c +++ b/net/sunrpc/xprt.c @@ -1164,7 +1164,7 @@ xprt_request_enqueue_receive(struct rpc_task *task) spin_unlock(&xprt->queue_lock); /* Turn off autodisconnect */ - del_singleshot_timer_sync(&xprt->timer); + del_timer_sync(&xprt->timer); return 0; } -- cgit v1.2.3 From 82ed6f7ef58f9634fe4462dd721902c580f01569 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 23 Nov 2022 21:18:39 +0100 Subject: timers: Replace BUG_ON()s The timer code still has a few BUG_ON()s left which are crashing the kernel in situations where it still can recover or simply refuse to take an action. Remove the one in the hotplug callback which checks for the CPU being offline. If that happens then the whole hotplug machinery will explode in colourful ways. Replace the rest with WARN_ON_ONCE() and conditional returns where appropriate. 
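As an illustration of the resulting behaviour for callers, consider a
double add_timer(); priv->timer stands in for any timer_list and is not
part of this patch:

  /* Hypothetical caller, for illustration only. */
  priv->timer.expires = jiffies + HZ;
  add_timer(&priv->timer);

  /*
   * Before this change the BUG_ON(timer_pending(timer)) in add_timer()
   * crashed the kernel here. Now a one-time WARN_ON_ONCE() fires and the
   * second call is ignored, so the first expiry time stays in effect.
   * mod_timer() remains the correct way to change a pending timer.
   */
  add_timer(&priv->timer);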
Signed-off-by: Thomas Gleixner Tested-by: Guenter Roeck Reviewed-by: Jacob Keller Reviewed-by: Anna-Maria Behnsen Link: https://lore.kernel.org/r/20221123201624.769128888@linutronix.de --- kernel/time/timer.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/kernel/time/timer.c b/kernel/time/timer.c index 190e06980d87..cec37b17a853 100644 --- a/kernel/time/timer.c +++ b/kernel/time/timer.c @@ -1193,7 +1193,8 @@ EXPORT_SYMBOL(timer_reduce); */ void add_timer(struct timer_list *timer) { - BUG_ON(timer_pending(timer)); + if (WARN_ON_ONCE(timer_pending(timer))) + return; __mod_timer(timer, timer->expires, MOD_TIMER_NOTPENDING); } EXPORT_SYMBOL(add_timer); @@ -1210,7 +1211,8 @@ void add_timer_on(struct timer_list *timer, int cpu) struct timer_base *new_base, *base; unsigned long flags; - BUG_ON(timer_pending(timer) || !timer->function); + if (WARN_ON_ONCE(timer_pending(timer) || !timer->function)) + return; new_base = get_timer_cpu_base(timer->flags, cpu); @@ -2017,8 +2019,6 @@ int timers_dead_cpu(unsigned int cpu) struct timer_base *new_base; int b, i; - BUG_ON(cpu_online(cpu)); - for (b = 0; b < NR_BASES; b++) { old_base = per_cpu_ptr(&timer_bases[b], cpu); new_base = get_cpu_ptr(&timer_bases[b]); @@ -2035,7 +2035,8 @@ int timers_dead_cpu(unsigned int cpu) */ forward_timer_base(new_base); - BUG_ON(old_base->running_timer); + WARN_ON_ONCE(old_base->running_timer); + old_base->running_timer = NULL; for (i = 0; i < WHEEL_SIZE; i++) migrate_timer_list(new_base, old_base->vectors + i); -- cgit v1.2.3 From 14f043f1340bf30bc60af127bff39f55889fef26 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 23 Nov 2022 21:18:40 +0100 Subject: timers: Update kernel-doc for various functions The kernel-doc of timer related functions is partially uncomprehensible word salad. Rewrite it to make it useful. Signed-off-by: Thomas Gleixner Tested-by: Guenter Roeck Reviewed-by: Jacob Keller Reviewed-by: Anna-Maria Behnsen Link: https://lore.kernel.org/r/20221123201624.828703870@linutronix.de --- kernel/time/timer.c | 148 ++++++++++++++++++++++++++++++++-------------------- 1 file changed, 90 insertions(+), 58 deletions(-) diff --git a/kernel/time/timer.c b/kernel/time/timer.c index cec37b17a853..ea0150c366fe 100644 --- a/kernel/time/timer.c +++ b/kernel/time/timer.c @@ -1121,14 +1121,16 @@ out_unlock: } /** - * mod_timer_pending - modify a pending timer's timeout - * @timer: the pending timer to be modified - * @expires: new timeout in jiffies + * mod_timer_pending - Modify a pending timer's timeout + * @timer: The pending timer to be modified + * @expires: New absolute timeout in jiffies * - * mod_timer_pending() is the same for pending timers as mod_timer(), - * but will not re-activate and modify already deleted timers. + * mod_timer_pending() is the same for pending timers as mod_timer(), but + * will not activate inactive timers. * - * It is useful for unserialized use of timers. 
+ * Return: + * * %0 - The timer was inactive and not modified + * * %1 - The timer was active and requeued to expire at @expires */ int mod_timer_pending(struct timer_list *timer, unsigned long expires) { @@ -1137,24 +1139,27 @@ int mod_timer_pending(struct timer_list *timer, unsigned long expires) EXPORT_SYMBOL(mod_timer_pending); /** - * mod_timer - modify a timer's timeout - * @timer: the timer to be modified - * @expires: new timeout in jiffies - * - * mod_timer() is a more efficient way to update the expire field of an - * active timer (if the timer is inactive it will be activated) + * mod_timer - Modify a timer's timeout + * @timer: The timer to be modified + * @expires: New absolute timeout in jiffies * * mod_timer(timer, expires) is equivalent to: * * del_timer(timer); timer->expires = expires; add_timer(timer); * + * mod_timer() is more efficient than the above open coded sequence. In + * case that the timer is inactive, the del_timer() part is a NOP. The + * timer is in any case activated with the new expiry time @expires. + * * Note that if there are multiple unserialized concurrent users of the * same timer, then mod_timer() is the only safe way to modify the timeout, * since add_timer() cannot modify an already running timer. * - * The function returns whether it has modified a pending timer or not. - * (ie. mod_timer() of an inactive timer returns 0, mod_timer() of an - * active timer returns 1.) + * Return: + * * %0 - The timer was inactive and started + * * %1 - The timer was active and requeued to expire at @expires or + * the timer was active and not modified because @expires did + * not change the effective expiry time */ int mod_timer(struct timer_list *timer, unsigned long expires) { @@ -1165,11 +1170,18 @@ EXPORT_SYMBOL(mod_timer); /** * timer_reduce - Modify a timer's timeout if it would reduce the timeout * @timer: The timer to be modified - * @expires: New timeout in jiffies + * @expires: New absolute timeout in jiffies * * timer_reduce() is very similar to mod_timer(), except that it will only - * modify a running timer if that would reduce the expiration time (it will - * start a timer that isn't running). + * modify an enqueued timer if that would reduce the expiration time. If + * @timer is not enqueued it starts the timer. + * + * Return: + * * %0 - The timer was inactive and started + * * %1 - The timer was active and requeued to expire at @expires or + * the timer was active and not modified because @expires + * did not change the effective expiry time such that the + * timer would expire earlier than already scheduled */ int timer_reduce(struct timer_list *timer, unsigned long expires) { @@ -1178,18 +1190,21 @@ int timer_reduce(struct timer_list *timer, unsigned long expires) EXPORT_SYMBOL(timer_reduce); /** - * add_timer - start a timer - * @timer: the timer to be added + * add_timer - Start a timer + * @timer: The timer to be started * - * The kernel will do a ->function(@timer) callback from the - * timer interrupt at the ->expires point in the future. The - * current time is 'jiffies'. + * Start @timer to expire at @timer->expires in the future. @timer->expires + * is the absolute expiry time measured in 'jiffies'. When the timer expires + * timer->function(timer) will be invoked from soft interrupt context. * - * The timer's ->expires, ->function fields must be set prior calling this - * function. + * The @timer->expires and @timer->function fields must be set prior + * to calling this function. 
+ * + * If @timer->expires is already in the past @timer will be queued to + * expire at the next timer tick. * - * Timers with an ->expires field in the past will be executed in the next - * timer tick. + * This can only operate on an inactive timer. Attempts to invoke this on + * an active timer are rejected with a warning. */ void add_timer(struct timer_list *timer) { @@ -1200,11 +1215,13 @@ void add_timer(struct timer_list *timer) EXPORT_SYMBOL(add_timer); /** - * add_timer_on - start a timer on a particular CPU - * @timer: the timer to be added - * @cpu: the CPU to start it on + * add_timer_on - Start a timer on a particular CPU + * @timer: The timer to be started + * @cpu: The CPU to start it on + * + * Same as add_timer() except that it starts the timer on the given CPU. * - * This is not very scalable on SMP. Double adds are not possible. + * See add_timer() for further details. */ void add_timer_on(struct timer_list *timer, int cpu) { @@ -1240,15 +1257,18 @@ void add_timer_on(struct timer_list *timer, int cpu) EXPORT_SYMBOL_GPL(add_timer_on); /** - * del_timer - deactivate a timer. - * @timer: the timer to be deactivated - * - * del_timer() deactivates a timer - this works on both active and inactive - * timers. - * - * The function returns whether it has deactivated a pending timer or not. - * (ie. del_timer() of an inactive timer returns 0, del_timer() of an - * active timer returns 1.) + * del_timer - Deactivate a timer. + * @timer: The timer to be deactivated + * + * The function only deactivates a pending timer, but contrary to + * del_timer_sync() it does not take into account whether the timer's + * callback function is concurrently executed on a different CPU or not. + * It neither prevents rearming of the timer. If @timer can be rearmed + * concurrently then the return value of this function is meaningless. + * + * Return: + * * %0 - The timer was not pending + * * %1 - The timer was pending and deactivated */ int del_timer(struct timer_list *timer) { @@ -1270,10 +1290,19 @@ EXPORT_SYMBOL(del_timer); /** * try_to_del_timer_sync - Try to deactivate a timer - * @timer: timer to delete + * @timer: Timer to deactivate + * + * This function tries to deactivate a timer. On success the timer is not + * queued and the timer callback function is not running on any CPU. * - * This function tries to deactivate a timer. Upon successful (ret >= 0) - * exit the timer is not queued and the handler is not running on any CPU. + * This function does not guarantee that the timer cannot be rearmed right + * after dropping the base lock. That needs to be prevented by the calling + * code if necessary. + * + * Return: + * * %0 - The timer was not pending + * * %1 - The timer was pending and deactivated + * * %-1 - The timer callback function is running on a different CPU */ int try_to_del_timer_sync(struct timer_list *timer) { @@ -1369,23 +1398,19 @@ static inline void del_timer_wait_running(struct timer_list *timer) { } #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT) /** - * del_timer_sync - deactivate a timer and wait for the handler to finish. - * @timer: the timer to be deactivated - * - * This function only differs from del_timer() on SMP: besides deactivating - * the timer it also makes sure the handler has finished executing on other - * CPUs. + * del_timer_sync - Deactivate a timer and wait for the handler to finish. + * @timer: The timer to be deactivated * * Synchronization rules: Callers must prevent restarting of the timer, * otherwise this function is meaningless. 
It must not be called from * interrupt contexts unless the timer is an irqsafe one. The caller must - * not hold locks which would prevent completion of the timer's - * handler. The timer's handler must not call add_timer_on(). Upon exit the - * timer is not queued and the handler is not running on any CPU. + * not hold locks which would prevent completion of the timer's callback + * function. The timer's handler must not call add_timer_on(). Upon exit + * the timer is not queued and the handler is not running on any CPU. * - * Note: For !irqsafe timers, you must not hold locks that are held in - * interrupt context while calling this function. Even if the lock has - * nothing to do with the timer in question. Here's why:: + * For !irqsafe timers, the caller must not hold locks that are held in + * interrupt context. Even if the lock has nothing to do with the timer in + * question. Here's why:: * * CPU0 CPU1 * ---- ---- @@ -1399,10 +1424,17 @@ static inline void del_timer_wait_running(struct timer_list *timer) { } * while (base->running_timer == mytimer); * * Now del_timer_sync() will never return and never release somelock. - * The interrupt on the other CPU is waiting to grab somelock but - * it has interrupted the softirq that CPU0 is waiting to finish. + * The interrupt on the other CPU is waiting to grab somelock but it has + * interrupted the softirq that CPU0 is waiting to finish. + * + * This function cannot guarantee that the timer is not rearmed again by + * some concurrent or preempting code, right after it dropped the base + * lock. If there is the possibility of a concurrent rearm then the return + * value of the function is meaningless. * - * The function returns whether it has deactivated a pending timer or not. + * Return: + * * %0 - The timer was not pending + * * %1 - The timer was pending and deactivated */ int del_timer_sync(struct timer_list *timer) { -- cgit v1.2.3 From 168f6b6ffbeec0b9333f3582e4cf637300858db5 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 23 Nov 2022 21:18:42 +0100 Subject: timers: Use del_timer_sync() even on UP del_timer_sync() is assumed to be pointless on uniprocessor systems and can be mapped to del_timer() because in theory del_timer() can never be invoked while the timer callback function is executed. This is not entirely true because del_timer() can be invoked from interrupt context and therefore hit in the middle of a running timer callback. Contrary to that del_timer_sync() is not allowed to be invoked from interrupt context unless the affected timer is marked with TIMER_IRQSAFE. del_timer_sync() has proper checks in place to detect such a situation. Give up on the UP optimization and make del_timer_sync() unconditionally available. 
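A sketch of the problem; the foo driver and its interrupt handler are
hypothetical and only illustrate why the UP mapping was unsound:

  #include <linux/interrupt.h>
  #include <linux/timer.h>

  /* Hypothetical driver, for illustration only. */
  struct foo {
          struct timer_list timer;
  };

  static void foo_timer_fn(struct timer_list *t)
  {
          /* periodic work elided; runs in softirq (timer) context */
  }

  static irqreturn_t foo_irq(int irq, void *dev_id)
  {
          struct foo *priv = dev_id;

          /*
           * With del_timer_sync() mapped to del_timer() on UP, this
           * "synchronous" deletion could return while foo_timer_fn() is
           * interrupted halfway through on this very CPU. The real
           * del_timer_sync() detects this and warns unless the timer is
           * marked TIMER_IRQSAFE.
           */
          del_timer_sync(&priv->timer);
          return IRQ_HANDLED;
  }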
Co-developed-by: Steven Rostedt Signed-off-by: Steven Rostedt Signed-off-by: Thomas Gleixner Tested-by: Guenter Roeck Reviewed-by: Jacob Keller Reviewed-by: Anna-Maria Behnsen Link: https://lore.kernel.org/all/20220407161745.7d6754b3@gandalf.local.home Link: https://lore.kernel.org/all/20221110064101.429013735@goodmis.org Link: https://lore.kernel.org/r/20221123201624.888306160@linutronix.de --- include/linux/timer.h | 7 +------ kernel/time/timer.c | 2 -- 2 files changed, 1 insertion(+), 8 deletions(-) diff --git a/include/linux/timer.h b/include/linux/timer.h index ea35d00123f0..31b8b6018f0d 100644 --- a/include/linux/timer.h +++ b/include/linux/timer.h @@ -183,12 +183,7 @@ extern int timer_reduce(struct timer_list *timer, unsigned long expires); extern void add_timer(struct timer_list *timer); extern int try_to_del_timer_sync(struct timer_list *timer); - -#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT) - extern int del_timer_sync(struct timer_list *timer); -#else -# define del_timer_sync(t) del_timer(t) -#endif +extern int del_timer_sync(struct timer_list *timer); extern void init_timers(void); struct hrtimer; diff --git a/kernel/time/timer.c b/kernel/time/timer.c index ea0150c366fe..550f0df3a2aa 100644 --- a/kernel/time/timer.c +++ b/kernel/time/timer.c @@ -1396,7 +1396,6 @@ static inline void timer_sync_wait_running(struct timer_base *base) { } static inline void del_timer_wait_running(struct timer_list *timer) { } #endif -#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT) /** * del_timer_sync - Deactivate a timer and wait for the handler to finish. * @timer: The timer to be deactivated @@ -1477,7 +1476,6 @@ int del_timer_sync(struct timer_list *timer) return ret; } EXPORT_SYMBOL(del_timer_sync); -#endif static void call_timer_fn(struct timer_list *timer, void (*fn)(struct timer_list *), -- cgit v1.2.3 From 9b13df3fb64ee95e2397585404e442afee2c7d4f Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 23 Nov 2022 21:18:44 +0100 Subject: timers: Rename del_timer_sync() to timer_delete_sync() The timer related functions do not have a strict timer_ prefixed namespace which is really annoying. Rename del_timer_sync() to timer_delete_sync() and provide del_timer_sync() as a wrapper. Document that del_timer_sync() is not for new code. Signed-off-by: Thomas Gleixner Tested-by: Guenter Roeck Reviewed-by: Steven Rostedt (Google) Reviewed-by: Jacob Keller Reviewed-by: Anna-Maria Behnsen Link: https://lore.kernel.org/r/20221123201624.954785441@linutronix.de --- include/linux/timer.h | 15 ++++++++++++++- kernel/time/timer.c | 18 +++++++++--------- 2 files changed, 23 insertions(+), 10 deletions(-) diff --git a/include/linux/timer.h b/include/linux/timer.h index 31b8b6018f0d..551fa467726f 100644 --- a/include/linux/timer.h +++ b/include/linux/timer.h @@ -183,7 +183,20 @@ extern int timer_reduce(struct timer_list *timer, unsigned long expires); extern void add_timer(struct timer_list *timer); extern int try_to_del_timer_sync(struct timer_list *timer); -extern int del_timer_sync(struct timer_list *timer); +extern int timer_delete_sync(struct timer_list *timer); + +/** + * del_timer_sync - Delete a pending timer and wait for a running callback + * @timer: The timer to be deleted + * + * See timer_delete_sync() for detailed explanation. + * + * Do not use in new code. Use timer_delete_sync() instead. 
+ */ +static inline int del_timer_sync(struct timer_list *timer) +{ + return timer_delete_sync(timer); +} extern void init_timers(void); struct hrtimer; diff --git a/kernel/time/timer.c b/kernel/time/timer.c index 550f0df3a2aa..60e538b566e3 100644 --- a/kernel/time/timer.c +++ b/kernel/time/timer.c @@ -1083,7 +1083,7 @@ __mod_timer(struct timer_list *timer, unsigned long expires, unsigned int option /* * We are trying to schedule the timer on the new base. * However we can't change timer's base while it is running, - * otherwise del_timer_sync() can't detect that the timer's + * otherwise timer_delete_sync() can't detect that the timer's * handler yet has not finished. This also guarantees that the * timer is serialized wrt itself. */ @@ -1261,7 +1261,7 @@ EXPORT_SYMBOL_GPL(add_timer_on); * @timer: The timer to be deactivated * * The function only deactivates a pending timer, but contrary to - * del_timer_sync() it does not take into account whether the timer's + * timer_delete_sync() it does not take into account whether the timer's * callback function is concurrently executed on a different CPU or not. * It neither prevents rearming of the timer. If @timer can be rearmed * concurrently then the return value of this function is meaningless. @@ -1397,7 +1397,7 @@ static inline void del_timer_wait_running(struct timer_list *timer) { } #endif /** - * del_timer_sync - Deactivate a timer and wait for the handler to finish. + * timer_delete_sync - Deactivate a timer and wait for the handler to finish. * @timer: The timer to be deactivated * * Synchronization rules: Callers must prevent restarting of the timer, @@ -1419,10 +1419,10 @@ static inline void del_timer_wait_running(struct timer_list *timer) { } * spin_lock_irq(somelock); * * spin_lock(somelock); - * del_timer_sync(mytimer); + * timer_delete_sync(mytimer); * while (base->running_timer == mytimer); * - * Now del_timer_sync() will never return and never release somelock. + * Now timer_delete_sync() will never return and never release somelock. * The interrupt on the other CPU is waiting to grab somelock but it has * interrupted the softirq that CPU0 is waiting to finish. * @@ -1435,7 +1435,7 @@ static inline void del_timer_wait_running(struct timer_list *timer) { } * * %0 - The timer was not pending * * %1 - The timer was pending and deactivated */ -int del_timer_sync(struct timer_list *timer) +int timer_delete_sync(struct timer_list *timer) { int ret; @@ -1475,7 +1475,7 @@ int del_timer_sync(struct timer_list *timer) return ret; } -EXPORT_SYMBOL(del_timer_sync); +EXPORT_SYMBOL(timer_delete_sync); static void call_timer_fn(struct timer_list *timer, void (*fn)(struct timer_list *), @@ -1497,8 +1497,8 @@ static void call_timer_fn(struct timer_list *timer, #endif /* * Couple the lock chain with the lock chain at - * del_timer_sync() by acquiring the lock_map around the fn() - * call here and in del_timer_sync(). + * timer_delete_sync() by acquiring the lock_map around the fn() + * call here and in timer_delete_sync(). */ lock_map_acquire(&lockdep_map); -- cgit v1.2.3 From bb663f0f3c396c6d05f6c5eeeea96ced20ff112e Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 23 Nov 2022 21:18:45 +0100 Subject: timers: Rename del_timer() to timer_delete() The timer related functions do not have a strict timer_ prefixed namespace which is really annoying. Rename del_timer() to timer_delete() and provide del_timer() as a wrapper. Document that del_timer() is not for new code. 
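A minimal before/after sketch of the new spellings; &priv->timer is just
a placeholder timer_list:

  /* Old names, kept as wrappers so existing callers keep compiling: */
  del_timer(&priv->timer);
  del_timer_sync(&priv->timer);

  /* Preferred names for new code after this series: */
  timer_delete(&priv->timer);
  timer_delete_sync(&priv->timer);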
Signed-off-by: Thomas Gleixner Tested-by: Guenter Roeck Reviewed-by: Steven Rostedt (Google) Reviewed-by: Jacob Keller Reviewed-by: Anna-Maria Behnsen Link: https://lore.kernel.org/r/20221123201625.015535022@linutronix.de --- include/linux/timer.h | 15 ++++++++++++++- kernel/time/timer.c | 6 +++--- 2 files changed, 17 insertions(+), 4 deletions(-) diff --git a/include/linux/timer.h b/include/linux/timer.h index 551fa467726f..e338e173ce8b 100644 --- a/include/linux/timer.h +++ b/include/linux/timer.h @@ -169,7 +169,6 @@ static inline int timer_pending(const struct timer_list * timer) } extern void add_timer_on(struct timer_list *timer, int cpu); -extern int del_timer(struct timer_list * timer); extern int mod_timer(struct timer_list *timer, unsigned long expires); extern int mod_timer_pending(struct timer_list *timer, unsigned long expires); extern int timer_reduce(struct timer_list *timer, unsigned long expires); @@ -184,6 +183,7 @@ extern void add_timer(struct timer_list *timer); extern int try_to_del_timer_sync(struct timer_list *timer); extern int timer_delete_sync(struct timer_list *timer); +extern int timer_delete(struct timer_list *timer); /** * del_timer_sync - Delete a pending timer and wait for a running callback @@ -198,6 +198,19 @@ static inline int del_timer_sync(struct timer_list *timer) return timer_delete_sync(timer); } +/** + * del_timer - Delete a pending timer + * @timer: The timer to be deleted + * + * See timer_delete() for detailed explanation. + * + * Do not use in new code. Use timer_delete() instead. + */ +static inline int del_timer(struct timer_list *timer) +{ + return timer_delete(timer); +} + extern void init_timers(void); struct hrtimer; extern enum hrtimer_restart it_real_fn(struct hrtimer *); diff --git a/kernel/time/timer.c b/kernel/time/timer.c index 60e538b566e3..2b5e8c26b697 100644 --- a/kernel/time/timer.c +++ b/kernel/time/timer.c @@ -1257,7 +1257,7 @@ void add_timer_on(struct timer_list *timer, int cpu) EXPORT_SYMBOL_GPL(add_timer_on); /** - * del_timer - Deactivate a timer. + * timer_delete - Deactivate a timer * @timer: The timer to be deactivated * * The function only deactivates a pending timer, but contrary to @@ -1270,7 +1270,7 @@ EXPORT_SYMBOL_GPL(add_timer_on); * * %0 - The timer was not pending * * %1 - The timer was pending and deactivated */ -int del_timer(struct timer_list *timer) +int timer_delete(struct timer_list *timer) { struct timer_base *base; unsigned long flags; @@ -1286,7 +1286,7 @@ int del_timer(struct timer_list *timer) return ret; } -EXPORT_SYMBOL(del_timer); +EXPORT_SYMBOL(timer_delete); /** * try_to_del_timer_sync - Try to deactivate a timer -- cgit v1.2.3 From 87bdd932e85881895d4720255b40ac28749c4e32 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 23 Nov 2022 21:18:47 +0100 Subject: Documentation: Replace del_timer/del_timer_sync() Adjust to the new preferred function names. 
Suggested-by: Steven Rostedt Signed-off-by: Thomas Gleixner Reviewed-by: Jacob Keller Reviewed-by: Anna-Maria Behnsen Link: https://lore.kernel.org/r/20221123201625.075320635@linutronix.de --- Documentation/RCU/Design/Requirements/Requirements.rst | 2 +- Documentation/core-api/local_ops.rst | 2 +- Documentation/kernel-hacking/locking.rst | 11 +++++------ Documentation/timers/hrtimers.rst | 2 +- Documentation/translations/it_IT/kernel-hacking/locking.rst | 10 +++++----- Documentation/translations/zh_CN/core-api/local_ops.rst | 2 +- 6 files changed, 14 insertions(+), 15 deletions(-) diff --git a/Documentation/RCU/Design/Requirements/Requirements.rst b/Documentation/RCU/Design/Requirements/Requirements.rst index a0f8164c8513..546f23abeca3 100644 --- a/Documentation/RCU/Design/Requirements/Requirements.rst +++ b/Documentation/RCU/Design/Requirements/Requirements.rst @@ -1858,7 +1858,7 @@ unloaded. After a given module has been unloaded, any attempt to call one of its functions results in a segmentation fault. The module-unload functions must therefore cancel any delayed calls to loadable-module functions, for example, any outstanding mod_timer() must be dealt -with via del_timer_sync() or similar. +with via timer_delete_sync() or similar. Unfortunately, there is no way to cancel an RCU callback; once you invoke call_rcu(), the callback function is eventually going to be diff --git a/Documentation/core-api/local_ops.rst b/Documentation/core-api/local_ops.rst index 2ac3f9f29845..a84f8b0c7ab2 100644 --- a/Documentation/core-api/local_ops.rst +++ b/Documentation/core-api/local_ops.rst @@ -191,7 +191,7 @@ Here is a sample module which implements a basic per cpu counter using static void __exit test_exit(void) { - del_timer_sync(&test_timer); + timer_delete_sync(&test_timer); } module_init(test_init); diff --git a/Documentation/kernel-hacking/locking.rst b/Documentation/kernel-hacking/locking.rst index b26e4a3a9b7e..c5b8678ed232 100644 --- a/Documentation/kernel-hacking/locking.rst +++ b/Documentation/kernel-hacking/locking.rst @@ -967,7 +967,7 @@ you might do the following:: while (list) { struct foo *next = list->next; - del_timer(&list->timer); + timer_delete(&list->timer); kfree(list); list = next; } @@ -981,7 +981,7 @@ the lock after we spin_unlock_bh(), and then try to free the element (which has already been freed!). This can be avoided by checking the result of -del_timer(): if it returns 1, the timer has been deleted. +timer_delete(): if it returns 1, the timer has been deleted. If 0, it means (in this case) that it is currently running, so we can do:: @@ -990,7 +990,7 @@ do:: while (list) { struct foo *next = list->next; - if (!del_timer(&list->timer)) { + if (!timer_delete(&list->timer)) { /* Give timer a chance to delete this */ spin_unlock_bh(&list_lock); goto retry; @@ -1005,8 +1005,7 @@ do:: Another common problem is deleting timers which restart themselves (by calling add_timer() at the end of their timer function). Because this is a fairly common case which is prone to races, you should -use del_timer_sync() (``include/linux/timer.h``) to -handle this case. +use timer_delete_sync() (``include/linux/timer.h``) to handle this case. Locking Speed ============= @@ -1334,7 +1333,7 @@ lock. 
- kfree() -- add_timer() and del_timer() +- add_timer() and timer_delete() Mutex API reference =================== diff --git a/Documentation/timers/hrtimers.rst b/Documentation/timers/hrtimers.rst index c1c20a693e8f..7ac448908d1f 100644 --- a/Documentation/timers/hrtimers.rst +++ b/Documentation/timers/hrtimers.rst @@ -118,7 +118,7 @@ existing timer wheel code, as it is mature and well suited. Sharing code was not really a win, due to the different data structures. Also, the hrtimer functions now have clearer behavior and clearer names - such as hrtimer_try_to_cancel() and hrtimer_cancel() [which are roughly -equivalent to del_timer() and del_timer_sync()] - so there's no direct +equivalent to timer_delete() and timer_delete_sync()] - so there's no direct 1:1 mapping between them on the algorithmic level, and thus no real potential for code sharing either. diff --git a/Documentation/translations/it_IT/kernel-hacking/locking.rst b/Documentation/translations/it_IT/kernel-hacking/locking.rst index eddfba806e13..b8ecf41273c5 100644 --- a/Documentation/translations/it_IT/kernel-hacking/locking.rst +++ b/Documentation/translations/it_IT/kernel-hacking/locking.rst @@ -990,7 +990,7 @@ potreste fare come segue:: while (list) { struct foo *next = list->next; - del_timer(&list->timer); + timer_delete(&list->timer); kfree(list); list = next; } @@ -1003,7 +1003,7 @@ e prenderà il *lock* solo dopo spin_unlock_bh(), e cercherà di eliminare il suo oggetto (che però è già stato eliminato). Questo può essere evitato controllando il valore di ritorno di -del_timer(): se ritorna 1, il temporizzatore è stato già +timer_delete(): se ritorna 1, il temporizzatore è stato già rimosso. Se 0, significa (in questo caso) che il temporizzatore è in esecuzione, quindi possiamo fare come segue:: @@ -1012,7 +1012,7 @@ esecuzione, quindi possiamo fare come segue:: while (list) { struct foo *next = list->next; - if (!del_timer(&list->timer)) { + if (!timer_delete(&list->timer)) { /* Give timer a chance to delete this */ spin_unlock_bh(&list_lock); goto retry; @@ -1026,7 +1026,7 @@ esecuzione, quindi possiamo fare come segue:: Un altro problema è l'eliminazione dei temporizzatori che si riavviano da soli (chiamando add_timer() alla fine della loro esecuzione). Dato che questo è un problema abbastanza comune con una propensione -alle corse critiche, dovreste usare del_timer_sync() +alle corse critiche, dovreste usare timer_delete_sync() (``include/linux/timer.h``) per gestire questo caso. Velocità della sincronizzazione @@ -1372,7 +1372,7 @@ contesto, o trattenendo un qualsiasi *lock*. - kfree() -- add_timer() e del_timer() +- add_timer() e timer_delete() Riferimento per l'API dei Mutex =============================== diff --git a/Documentation/translations/zh_CN/core-api/local_ops.rst b/Documentation/translations/zh_CN/core-api/local_ops.rst index 41e4525038e8..22493b9b829c 100644 --- a/Documentation/translations/zh_CN/core-api/local_ops.rst +++ b/Documentation/translations/zh_CN/core-api/local_ops.rst @@ -185,7 +185,7 @@ UP之间没有不同的行为,在你的架构的 ``local.h`` 中包括 ``asm-g static void __exit test_exit(void) { - del_timer_sync(&test_timer); + timer_delete_sync(&test_timer); } module_init(test_init); -- cgit v1.2.3 From d02e382cef06cc73561dd32dfdc171c00dcc416d Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Thu, 24 Nov 2022 09:22:36 +0100 Subject: timers: Silently ignore timers with a NULL function Tearing down timers which have circular dependencies to other functionality, e.g. 
workqueues, where the timer can schedule work and work can arm timers, is not trivial. In those cases it is desired to shutdown the timer in a way which prevents rearming of the timer. The mechanism to do so is to set timer->function to NULL and use this as an indicator for the timer arming functions to ignore the (re)arm request. In preparation for that replace the warnings in the relevant code paths with checks for timer->function == NULL. If the pointer is NULL, then discard the rearm request silently. Add debug_assert_init() instead of the WARN_ON_ONCE(!timer->function) checks so that debug objects can warn about non-initialized timers. The warning of debug objects does not warn if timer->function == NULL. It warns when timer was not initialized using timer_setup[_on_stack]() or via DEFINE_TIMER(). If developers fail to enable debug objects and then waste lots of time to figure out why their non-initialized timer is not firing, they deserve it. Same for initializing a timer with a NULL function. Co-developed-by: Steven Rostedt Signed-off-by: Steven Rostedt Signed-off-by: Thomas Gleixner Tested-by: Guenter Roeck Reviewed-by: Jacob Keller Reviewed-by: Anna-Maria Behnsen Link: https://lore.kernel.org/all/20220407161745.7d6754b3@gandalf.local.home Link: https://lore.kernel.org/all/20221110064101.429013735@goodmis.org Link: https://lore.kernel.org/r/87wn7kdann.ffs@tglx --- kernel/time/timer.c | 57 ++++++++++++++++++++++++++++++++++++++++++++++++----- 1 file changed, 52 insertions(+), 5 deletions(-) diff --git a/kernel/time/timer.c b/kernel/time/timer.c index 2b5e8c26b697..e4fcf5643b6f 100644 --- a/kernel/time/timer.c +++ b/kernel/time/timer.c @@ -1017,7 +1017,7 @@ __mod_timer(struct timer_list *timer, unsigned long expires, unsigned int option unsigned int idx = UINT_MAX; int ret = 0; - BUG_ON(!timer->function); + debug_assert_init(timer); /* * This is a common optimization triggered by the networking code - if @@ -1044,6 +1044,14 @@ __mod_timer(struct timer_list *timer, unsigned long expires, unsigned int option * dequeue/enqueue dance. */ base = lock_timer_base(timer, &flags); + /* + * Has @timer been shutdown? This needs to be evaluated + * while holding base lock to prevent a race against the + * shutdown code. + */ + if (!timer->function) + goto out_unlock; + forward_timer_base(base); if (timer_pending(timer) && (options & MOD_TIMER_REDUCE) && @@ -1070,6 +1078,14 @@ __mod_timer(struct timer_list *timer, unsigned long expires, unsigned int option } } else { base = lock_timer_base(timer, &flags); + /* + * Has @timer been shutdown? This needs to be evaluated + * while holding base lock to prevent a race against the + * shutdown code. + */ + if (!timer->function) + goto out_unlock; + forward_timer_base(base); } @@ -1128,8 +1144,12 @@ out_unlock: * mod_timer_pending() is the same for pending timers as mod_timer(), but * will not activate inactive timers. * + * If @timer->function == NULL then the start operation is silently + * discarded. + * * Return: - * * %0 - The timer was inactive and not modified + * * %0 - The timer was inactive and not modified or was in + * shutdown state and the operation was discarded * * %1 - The timer was active and requeued to expire at @expires */ int mod_timer_pending(struct timer_list *timer, unsigned long expires) @@ -1155,8 +1175,12 @@ EXPORT_SYMBOL(mod_timer_pending); * same timer, then mod_timer() is the only safe way to modify the timeout, * since add_timer() cannot modify an already running timer. 
* + * If @timer->function == NULL then the start operation is silently + * discarded. In this case the return value is 0 and meaningless. + * * Return: - * * %0 - The timer was inactive and started + * * %0 - The timer was inactive and started or was in shutdown + * state and the operation was discarded * * %1 - The timer was active and requeued to expire at @expires or * the timer was active and not modified because @expires did * not change the effective expiry time @@ -1176,8 +1200,12 @@ EXPORT_SYMBOL(mod_timer); * modify an enqueued timer if that would reduce the expiration time. If * @timer is not enqueued it starts the timer. * + * If @timer->function == NULL then the start operation is silently + * discarded. + * * Return: - * * %0 - The timer was inactive and started + * * %0 - The timer was inactive and started or was in shutdown + * state and the operation was discarded * * %1 - The timer was active and requeued to expire at @expires or * the timer was active and not modified because @expires * did not change the effective expiry time such that the @@ -1200,6 +1228,9 @@ EXPORT_SYMBOL(timer_reduce); * The @timer->expires and @timer->function fields must be set prior * to calling this function. * + * If @timer->function == NULL then the start operation is silently + * discarded. + * * If @timer->expires is already in the past @timer will be queued to * expire at the next timer tick. * @@ -1228,7 +1259,9 @@ void add_timer_on(struct timer_list *timer, int cpu) struct timer_base *new_base, *base; unsigned long flags; - if (WARN_ON_ONCE(timer_pending(timer) || !timer->function)) + debug_assert_init(timer); + + if (WARN_ON_ONCE(timer_pending(timer))) return; new_base = get_timer_cpu_base(timer->flags, cpu); @@ -1239,6 +1272,13 @@ void add_timer_on(struct timer_list *timer, int cpu) * wrong base locked. See lock_timer_base(). */ base = lock_timer_base(timer, &flags); + /* + * Has @timer been shutdown? This needs to be evaluated while + * holding base lock to prevent a race against the shutdown code. + */ + if (!timer->function) + goto out_unlock; + if (base != new_base) { timer->flags |= TIMER_MIGRATING; @@ -1252,6 +1292,7 @@ void add_timer_on(struct timer_list *timer, int cpu) debug_timer_activate(timer); internal_add_timer(base, timer); +out_unlock: raw_spin_unlock_irqrestore(&base->lock, flags); } EXPORT_SYMBOL_GPL(add_timer_on); @@ -1541,6 +1582,12 @@ static void expire_timers(struct timer_base *base, struct hlist_head *head) fn = timer->function; + if (WARN_ON_ONCE(!fn)) { + /* Should never happen. Emphasis on should! */ + base->running_timer = NULL; + continue; + } + if (timer->flags & TIMER_IRQSAFE) { raw_spin_unlock(&base->lock); call_timer_fn(timer, fn, baseclk); -- cgit v1.2.3 From 8553b5f2774a66b1f293b7d783934210afb8f23c Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 23 Nov 2022 21:18:50 +0100 Subject: timers: Split [try_to_]del_timer[_sync]() to prepare for shutdown mode Tearing down timers which have circular dependencies to other functionality, e.g. workqueues, where the timer can schedule work and work can arm timers, is not trivial. In those cases it is desired to shutdown the timer in a way which prevents rearming of the timer. The mechanism to do so is to set timer->function to NULL and use this as an indicator for the timer arming functions to ignore the (re)arm request. Split the inner workings of try_do_del_timer_sync(), del_timer_sync() and del_timer() into helper functions to prepare for implementing the shutdown functionality. No functional change. 
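The kind of circular dependency this prepares for can be sketched as
follows; the foo driver, its work item and the one second period are
hypothetical:

  #include <linux/timer.h>
  #include <linux/workqueue.h>

  /* Hypothetical circular dependency, for illustration only. */
  struct foo {
          struct timer_list timer;
          struct work_struct work;
  };

  static void foo_timer_fn(struct timer_list *t)
  {
          struct foo *priv = from_timer(priv, t, timer);

          schedule_work(&priv->work);             /* the timer schedules work ... */
  }

  static void foo_work_fn(struct work_struct *work)
  {
          struct foo *priv = container_of(work, struct foo, work);

          mod_timer(&priv->timer, jiffies + HZ);  /* ... and the work rearms the timer */
  }

  /*
   * timer_delete_sync() plus cancel_work_sync() cannot by themselves
   * reliably break this cycle: whichever side is cancelled first can be
   * re-armed by the other before that one is cancelled too. The shutdown
   * mode prepared here sets timer->function to NULL so that any later
   * (re)arm attempt is silently ignored.
   */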
Co-developed-by: Steven Rostedt Signed-off-by: Steven Rostedt Signed-off-by: Thomas Gleixner Tested-by: Guenter Roeck Reviewed-by: Jacob Keller Reviewed-by: Anna-Maria Behnsen Link: https://lore.kernel.org/all/20220407161745.7d6754b3@gandalf.local.home Link: https://lore.kernel.org/all/20221110064101.429013735@goodmis.org Link: https://lore.kernel.org/r/20221123201625.195147423@linutronix.de --- kernel/time/timer.c | 143 +++++++++++++++++++++++++++++++++------------------- 1 file changed, 92 insertions(+), 51 deletions(-) diff --git a/kernel/time/timer.c b/kernel/time/timer.c index e4fcf5643b6f..e635bb5e49b9 100644 --- a/kernel/time/timer.c +++ b/kernel/time/timer.c @@ -1298,20 +1298,14 @@ out_unlock: EXPORT_SYMBOL_GPL(add_timer_on); /** - * timer_delete - Deactivate a timer + * __timer_delete - Internal function: Deactivate a timer * @timer: The timer to be deactivated * - * The function only deactivates a pending timer, but contrary to - * timer_delete_sync() it does not take into account whether the timer's - * callback function is concurrently executed on a different CPU or not. - * It neither prevents rearming of the timer. If @timer can be rearmed - * concurrently then the return value of this function is meaningless. - * * Return: * * %0 - The timer was not pending * * %1 - The timer was pending and deactivated */ -int timer_delete(struct timer_list *timer) +static int __timer_delete(struct timer_list *timer) { struct timer_base *base; unsigned long flags; @@ -1327,25 +1321,37 @@ int timer_delete(struct timer_list *timer) return ret; } -EXPORT_SYMBOL(timer_delete); /** - * try_to_del_timer_sync - Try to deactivate a timer - * @timer: Timer to deactivate + * timer_delete - Deactivate a timer + * @timer: The timer to be deactivated * - * This function tries to deactivate a timer. On success the timer is not - * queued and the timer callback function is not running on any CPU. + * The function only deactivates a pending timer, but contrary to + * timer_delete_sync() it does not take into account whether the timer's + * callback function is concurrently executed on a different CPU or not. + * It neither prevents rearming of the timer. If @timer can be rearmed + * concurrently then the return value of this function is meaningless. * - * This function does not guarantee that the timer cannot be rearmed right - * after dropping the base lock. That needs to be prevented by the calling - * code if necessary. + * Return: + * * %0 - The timer was not pending + * * %1 - The timer was pending and deactivated + */ +int timer_delete(struct timer_list *timer) +{ + return __timer_delete(timer); +} +EXPORT_SYMBOL(timer_delete); + +/** + * __try_to_del_timer_sync - Internal function: Try to deactivate a timer + * @timer: Timer to deactivate * * Return: * * %0 - The timer was not pending * * %1 - The timer was pending and deactivated * * %-1 - The timer callback function is running on a different CPU */ -int try_to_del_timer_sync(struct timer_list *timer) +static int __try_to_del_timer_sync(struct timer_list *timer) { struct timer_base *base; unsigned long flags; @@ -1362,6 +1368,27 @@ int try_to_del_timer_sync(struct timer_list *timer) return ret; } + +/** + * try_to_del_timer_sync - Try to deactivate a timer + * @timer: Timer to deactivate + * + * This function tries to deactivate a timer. On success the timer is not + * queued and the timer callback function is not running on any CPU. + * + * This function does not guarantee that the timer cannot be rearmed right + * after dropping the base lock. 
That needs to be prevented by the calling + * code if necessary. + * + * Return: + * * %0 - The timer was not pending + * * %1 - The timer was pending and deactivated + * * %-1 - The timer callback function is running on a different CPU + */ +int try_to_del_timer_sync(struct timer_list *timer) +{ + return __try_to_del_timer_sync(timer); +} EXPORT_SYMBOL(try_to_del_timer_sync); #ifdef CONFIG_PREEMPT_RT @@ -1438,45 +1465,15 @@ static inline void del_timer_wait_running(struct timer_list *timer) { } #endif /** - * timer_delete_sync - Deactivate a timer and wait for the handler to finish. + * __timer_delete_sync - Internal function: Deactivate a timer and wait + * for the handler to finish. * @timer: The timer to be deactivated * - * Synchronization rules: Callers must prevent restarting of the timer, - * otherwise this function is meaningless. It must not be called from - * interrupt contexts unless the timer is an irqsafe one. The caller must - * not hold locks which would prevent completion of the timer's callback - * function. The timer's handler must not call add_timer_on(). Upon exit - * the timer is not queued and the handler is not running on any CPU. - * - * For !irqsafe timers, the caller must not hold locks that are held in - * interrupt context. Even if the lock has nothing to do with the timer in - * question. Here's why:: - * - * CPU0 CPU1 - * ---- ---- - * - * call_timer_fn(); - * base->running_timer = mytimer; - * spin_lock_irq(somelock); - * - * spin_lock(somelock); - * timer_delete_sync(mytimer); - * while (base->running_timer == mytimer); - * - * Now timer_delete_sync() will never return and never release somelock. - * The interrupt on the other CPU is waiting to grab somelock but it has - * interrupted the softirq that CPU0 is waiting to finish. - * - * This function cannot guarantee that the timer is not rearmed again by - * some concurrent or preempting code, right after it dropped the base - * lock. If there is the possibility of a concurrent rearm then the return - * value of the function is meaningless. - * * Return: * * %0 - The timer was not pending * * %1 - The timer was pending and deactivated */ -int timer_delete_sync(struct timer_list *timer) +static int __timer_delete_sync(struct timer_list *timer) { int ret; @@ -1506,7 +1503,7 @@ int timer_delete_sync(struct timer_list *timer) lockdep_assert_preemption_enabled(); do { - ret = try_to_del_timer_sync(timer); + ret = __try_to_del_timer_sync(timer); if (unlikely(ret < 0)) { del_timer_wait_running(timer); @@ -1516,6 +1513,50 @@ int timer_delete_sync(struct timer_list *timer) return ret; } + +/** + * timer_delete_sync - Deactivate a timer and wait for the handler to finish. + * @timer: The timer to be deactivated + * + * Synchronization rules: Callers must prevent restarting of the timer, + * otherwise this function is meaningless. It must not be called from + * interrupt contexts unless the timer is an irqsafe one. The caller must + * not hold locks which would prevent completion of the timer's callback + * function. The timer's handler must not call add_timer_on(). Upon exit + * the timer is not queued and the handler is not running on any CPU. + * + * For !irqsafe timers, the caller must not hold locks that are held in + * interrupt context. Even if the lock has nothing to do with the timer in + * question. 
Here's why:: + * + * CPU0 CPU1 + * ---- ---- + * + * call_timer_fn(); + * base->running_timer = mytimer; + * spin_lock_irq(somelock); + * + * spin_lock(somelock); + * timer_delete_sync(mytimer); + * while (base->running_timer == mytimer); + * + * Now timer_delete_sync() will never return and never release somelock. + * The interrupt on the other CPU is waiting to grab somelock but it has + * interrupted the softirq that CPU0 is waiting to finish. + * + * This function cannot guarantee that the timer is not rearmed again by + * some concurrent or preempting code, right after it dropped the base + * lock. If there is the possibility of a concurrent rearm then the return + * value of the function is meaningless. + * + * Return: + * * %0 - The timer was not pending + * * %1 - The timer was pending and deactivated + */ +int timer_delete_sync(struct timer_list *timer) +{ + return __timer_delete_sync(timer); +} EXPORT_SYMBOL(timer_delete_sync); static void call_timer_fn(struct timer_list *timer, -- cgit v1.2.3 From 0cc04e80458a822300b93f82ed861a513edde194 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 23 Nov 2022 21:18:52 +0100 Subject: timers: Add shutdown mechanism to the internal functions Tearing down timers which have circular dependencies to other functionality, e.g. workqueues, where the timer can schedule work and work can arm timers, is not trivial. In those cases it is desired to shutdown the timer in a way which prevents rearming of the timer. The mechanism to do so is to set timer->function to NULL and use this as an indicator for the timer arming functions to ignore the (re)arm request. Add a shutdown argument to the relevant internal functions which makes the actual deactivation code set timer->function to NULL which in turn prevents rearming of the timer. Co-developed-by: Steven Rostedt Signed-off-by: Steven Rostedt Signed-off-by: Thomas Gleixner Tested-by: Guenter Roeck Reviewed-by: Jacob Keller Reviewed-by: Anna-Maria Behnsen Link: https://lore.kernel.org/all/20220407161745.7d6754b3@gandalf.local.home Link: https://lore.kernel.org/all/20221110064101.429013735@goodmis.org Link: https://lore.kernel.org/r/20221123201625.253883224@linutronix.de --- kernel/time/timer.c | 62 ++++++++++++++++++++++++++++++++++++++++++++++------- 1 file changed, 54 insertions(+), 8 deletions(-) diff --git a/kernel/time/timer.c b/kernel/time/timer.c index e635bb5e49b9..167e43c9451c 100644 --- a/kernel/time/timer.c +++ b/kernel/time/timer.c @@ -1300,12 +1300,19 @@ EXPORT_SYMBOL_GPL(add_timer_on); /** * __timer_delete - Internal function: Deactivate a timer * @timer: The timer to be deactivated + * @shutdown: If true, this indicates that the timer is about to be + * shutdown permanently. + * + * If @shutdown is true then @timer->function is set to NULL under the + * timer base lock which prevents further rearming of the time. In that + * case any attempt to rearm @timer after this function returns will be + * silently ignored. 
* * Return: * * %0 - The timer was not pending * * %1 - The timer was pending and deactivated */ -static int __timer_delete(struct timer_list *timer) +static int __timer_delete(struct timer_list *timer, bool shutdown) { struct timer_base *base; unsigned long flags; @@ -1313,9 +1320,22 @@ static int __timer_delete(struct timer_list *timer) debug_assert_init(timer); - if (timer_pending(timer)) { + /* + * If @shutdown is set then the lock has to be taken whether the + * timer is pending or not to protect against a concurrent rearm + * which might hit between the lockless pending check and the lock + * aquisition. By taking the lock it is ensured that such a newly + * enqueued timer is dequeued and cannot end up with + * timer->function == NULL in the expiry code. + * + * If timer->function is currently executed, then this makes sure + * that the callback cannot requeue the timer. + */ + if (timer_pending(timer) || shutdown) { base = lock_timer_base(timer, &flags); ret = detach_if_pending(timer, base, true); + if (shutdown) + timer->function = NULL; raw_spin_unlock_irqrestore(&base->lock, flags); } @@ -1338,20 +1358,31 @@ static int __timer_delete(struct timer_list *timer) */ int timer_delete(struct timer_list *timer) { - return __timer_delete(timer); + return __timer_delete(timer, false); } EXPORT_SYMBOL(timer_delete); /** * __try_to_del_timer_sync - Internal function: Try to deactivate a timer * @timer: Timer to deactivate + * @shutdown: If true, this indicates that the timer is about to be + * shutdown permanently. + * + * If @shutdown is true then @timer->function is set to NULL under the + * timer base lock which prevents further rearming of the timer. Any + * attempt to rearm @timer after this function returns will be silently + * ignored. + * + * This function cannot guarantee that the timer cannot be rearmed + * right after dropping the base lock if @shutdown is false. That + * needs to be prevented by the calling code if necessary. * * Return: * * %0 - The timer was not pending * * %1 - The timer was pending and deactivated * * %-1 - The timer callback function is running on a different CPU */ -static int __try_to_del_timer_sync(struct timer_list *timer) +static int __try_to_del_timer_sync(struct timer_list *timer, bool shutdown) { struct timer_base *base; unsigned long flags; @@ -1363,6 +1394,8 @@ static int __try_to_del_timer_sync(struct timer_list *timer) if (base->running_timer != timer) ret = detach_if_pending(timer, base, true); + if (shutdown) + timer->function = NULL; raw_spin_unlock_irqrestore(&base->lock, flags); @@ -1387,7 +1420,7 @@ static int __try_to_del_timer_sync(struct timer_list *timer) */ int try_to_del_timer_sync(struct timer_list *timer) { - return __try_to_del_timer_sync(timer); + return __try_to_del_timer_sync(timer, false); } EXPORT_SYMBOL(try_to_del_timer_sync); @@ -1468,12 +1501,25 @@ static inline void del_timer_wait_running(struct timer_list *timer) { } * __timer_delete_sync - Internal function: Deactivate a timer and wait * for the handler to finish. * @timer: The timer to be deactivated + * @shutdown: If true, @timer->function will be set to NULL under the + * timer base lock which prevents rearming of @timer + * + * If @shutdown is not set the timer can be rearmed later. If the timer can + * be rearmed concurrently, i.e. after dropping the base lock then the + * return value is meaningless. + * + * If @shutdown is set then @timer->function is set to NULL under timer + * base lock which prevents rearming of the timer. 
Any attempt to rearm + * a shutdown timer is silently ignored. + * + * If the timer should be reused after shutdown it has to be initialized + * again. * * Return: * * %0 - The timer was not pending * * %1 - The timer was pending and deactivated */ -static int __timer_delete_sync(struct timer_list *timer) +static int __timer_delete_sync(struct timer_list *timer, bool shutdown) { int ret; @@ -1503,7 +1549,7 @@ static int __timer_delete_sync(struct timer_list *timer) lockdep_assert_preemption_enabled(); do { - ret = __try_to_del_timer_sync(timer); + ret = __try_to_del_timer_sync(timer, shutdown); if (unlikely(ret < 0)) { del_timer_wait_running(timer); @@ -1555,7 +1601,7 @@ static int __timer_delete_sync(struct timer_list *timer) */ int timer_delete_sync(struct timer_list *timer) { - return __timer_delete_sync(timer); + return __timer_delete_sync(timer, false); } EXPORT_SYMBOL(timer_delete_sync); -- cgit v1.2.3 From f571faf6e443b6011ccb585d57866177af1f643c Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 23 Nov 2022 21:18:53 +0100 Subject: timers: Provide timer_shutdown[_sync]() Tearing down timers which have circular dependencies to other functionality, e.g. workqueues, where the timer can schedule work and work can arm timers, is not trivial. In those cases it is desired to shutdown the timer in a way which prevents rearming of the timer. The mechanism to do so is to set timer->function to NULL and use this as an indicator for the timer arming functions to ignore the (re)arm request. Expose new interfaces for this: timer_shutdown_sync() and timer_shutdown(). timer_shutdown_sync() has the same functionality as timer_delete_sync() plus the NULL-ification of the timer function. timer_shutdown() has the same functionality as timer_delete() plus the NULL-ification of the timer function. In both cases the rearming of the timer is prevented by silently discarding rearm attempts due to timer->function being NULL. 
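As an illustration only (not part of this patch), a teardown path in a hypothetical driver which pairs a self-rearming timer with a workqueue could then be written as sketched below. The names mydrv_priv, priv->timer and priv->wq are invented for the example; only timer_shutdown_sync(), timer_shutdown() and destroy_workqueue() are real interfaces:

    /* Sketch under the assumptions above; not a definitive implementation. */
    #include <linux/timer.h>
    #include <linux/workqueue.h>
    #include <linux/slab.h>

    struct mydrv_priv {
            struct timer_list timer;
            struct workqueue_struct *wq;
    };

    static void mydrv_teardown(struct mydrv_priv *priv)
    {
            /*
             * Disarm the timer, wait for a concurrently running callback
             * and block further rearming. Any mod_timer()/add_timer()
             * issued by still-pending work items afterwards is silently
             * discarded because timer->function is now NULL.
             */
            timer_shutdown_sync(&priv->timer);

            /* Safe: draining the workqueue can no longer rearm the timer. */
            destroy_workqueue(priv->wq);

            kfree(priv);
    }

Where the calling context cannot sleep, timer_shutdown() gives the same rearm protection without waiting for a running callback to finish.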
Co-developed-by: Steven Rostedt Signed-off-by: Steven Rostedt Signed-off-by: Thomas Gleixner Tested-by: Guenter Roeck Reviewed-by: Jacob Keller Reviewed-by: Anna-Maria Behnsen Link: https://lore.kernel.org/all/20220407161745.7d6754b3@gandalf.local.home Link: https://lore.kernel.org/all/20221110064101.429013735@goodmis.org Link: https://lore.kernel.org/r/20221123201625.314230270@linutronix.de --- include/linux/timer.h | 2 ++ kernel/time/timer.c | 66 +++++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 68 insertions(+) diff --git a/include/linux/timer.h b/include/linux/timer.h index e338e173ce8b..9162f275819a 100644 --- a/include/linux/timer.h +++ b/include/linux/timer.h @@ -184,6 +184,8 @@ extern void add_timer(struct timer_list *timer); extern int try_to_del_timer_sync(struct timer_list *timer); extern int timer_delete_sync(struct timer_list *timer); extern int timer_delete(struct timer_list *timer); +extern int timer_shutdown_sync(struct timer_list *timer); +extern int timer_shutdown(struct timer_list *timer); /** * del_timer_sync - Delete a pending timer and wait for a running callback diff --git a/kernel/time/timer.c b/kernel/time/timer.c index 167e43c9451c..63a8ce7177dd 100644 --- a/kernel/time/timer.c +++ b/kernel/time/timer.c @@ -1362,6 +1362,27 @@ int timer_delete(struct timer_list *timer) } EXPORT_SYMBOL(timer_delete); +/** + * timer_shutdown - Deactivate a timer and prevent rearming + * @timer: The timer to be deactivated + * + * The function does not wait for an eventually running timer callback on a + * different CPU but it prevents rearming of the timer. Any attempt to arm + * @timer after this function returns will be silently ignored. + * + * This function is useful for teardown code and should only be used when + * timer_shutdown_sync() cannot be invoked due to locking or context constraints. + * + * Return: + * * %0 - The timer was not pending + * * %1 - The timer was pending + */ +int timer_shutdown(struct timer_list *timer) +{ + return __timer_delete(timer, true); +} +EXPORT_SYMBOL_GPL(timer_shutdown); + /** * __try_to_del_timer_sync - Internal function: Try to deactivate a timer * @timer: Timer to deactivate @@ -1595,6 +1616,9 @@ static int __timer_delete_sync(struct timer_list *timer, bool shutdown) * lock. If there is the possibility of a concurrent rearm then the return * value of the function is meaningless. * + * If such a guarantee is needed, e.g. for teardown situations then use + * timer_shutdown_sync() instead. + * * Return: * * %0 - The timer was not pending * * %1 - The timer was pending and deactivated @@ -1605,6 +1629,48 @@ int timer_delete_sync(struct timer_list *timer) } EXPORT_SYMBOL(timer_delete_sync); +/** + * timer_shutdown_sync - Shutdown a timer and prevent rearming + * @timer: The timer to be shutdown + * + * When the function returns it is guaranteed that: + * - @timer is not queued + * - The callback function of @timer is not running + * - @timer cannot be enqueued again. Any attempt to rearm + * @timer is silently ignored. + * + * See timer_delete_sync() for synchronization rules. + * + * This function is useful for final teardown of an infrastructure where + * the timer is subject to a circular dependency problem. + * + * A common pattern for this is a timer and a workqueue where the timer can + * schedule work and work can arm the timer. On shutdown the workqueue must + * be destroyed and the timer must be prevented from rearming. 
Unless the + * code has conditionals like 'if (mything->in_shutdown)' to prevent that + * there is no way to get this correct with timer_delete_sync(). + * + * timer_shutdown_sync() is solving the problem. The correct ordering of + * calls in this case is: + * + * timer_shutdown_sync(&mything->timer); + * workqueue_destroy(&mything->workqueue); + * + * After this 'mything' can be safely freed. + * + * This obviously implies that the timer is not required to be functional + * for the rest of the shutdown operation. + * + * Return: + * * %0 - The timer was not pending + * * %1 - The timer was pending + */ +int timer_shutdown_sync(struct timer_list *timer) +{ + return __timer_delete_sync(timer, true); +} +EXPORT_SYMBOL_GPL(timer_shutdown_sync); + static void call_timer_fn(struct timer_list *timer, void (*fn)(struct timer_list *), unsigned long baseclk) -- cgit v1.2.3 From a31323bef2b66455920d054b160c17d4240f8fd4 Mon Sep 17 00:00:00 2001 From: Steven Rostedt (Google) Date: Wed, 23 Nov 2022 21:18:55 +0100 Subject: timers: Update the documentation to reflect on the new timer_shutdown() API In order to make sure that a timer is not re-armed after it is stopped before freeing, a new shutdown state is added to the timer code. The API timer_shutdown_sync() and timer_shutdown() must be called before the object that holds the timer can be freed. Update the documentation to reflect this new workflow. [ tglx: Updated to the new semantics and updated the zh_CN version ] Signed-off-by: Steven Rostedt (Google) Signed-off-by: Thomas Gleixner Tested-by: Guenter Roeck Reviewed-by: Jacob Keller Reviewed-by: Anna-Maria Behnsen Link: https://lore.kernel.org/r/20221110064147.712934793@goodmis.org Link: https://lore.kernel.org/r/20221123201625.375284489@linutronix.de --- Documentation/RCU/Design/Requirements/Requirements.rst | 2 +- Documentation/core-api/local_ops.rst | 2 +- Documentation/kernel-hacking/locking.rst | 5 +++++ Documentation/translations/zh_CN/core-api/local_ops.rst | 2 +- 4 files changed, 8 insertions(+), 3 deletions(-) diff --git a/Documentation/RCU/Design/Requirements/Requirements.rst b/Documentation/RCU/Design/Requirements/Requirements.rst index 546f23abeca3..49387d823619 100644 --- a/Documentation/RCU/Design/Requirements/Requirements.rst +++ b/Documentation/RCU/Design/Requirements/Requirements.rst @@ -1858,7 +1858,7 @@ unloaded. After a given module has been unloaded, any attempt to call one of its functions results in a segmentation fault. The module-unload functions must therefore cancel any delayed calls to loadable-module functions, for example, any outstanding mod_timer() must be dealt -with via timer_delete_sync() or similar. +with via timer_shutdown_sync() or similar. 
Unfortunately, there is no way to cancel an RCU callback; once you invoke call_rcu(), the callback function is eventually going to be diff --git a/Documentation/core-api/local_ops.rst b/Documentation/core-api/local_ops.rst index a84f8b0c7ab2..0b42ceaaf3c4 100644 --- a/Documentation/core-api/local_ops.rst +++ b/Documentation/core-api/local_ops.rst @@ -191,7 +191,7 @@ Here is a sample module which implements a basic per cpu counter using static void __exit test_exit(void) { - timer_delete_sync(&test_timer); + timer_shutdown_sync(&test_timer); } module_init(test_init); diff --git a/Documentation/kernel-hacking/locking.rst b/Documentation/kernel-hacking/locking.rst index c5b8678ed232..c756786e17ae 100644 --- a/Documentation/kernel-hacking/locking.rst +++ b/Documentation/kernel-hacking/locking.rst @@ -1007,6 +1007,11 @@ calling add_timer() at the end of their timer function). Because this is a fairly common case which is prone to races, you should use timer_delete_sync() (``include/linux/timer.h``) to handle this case. +Before freeing a timer, timer_shutdown() or timer_shutdown_sync() should be +called which will keep it from being rearmed. Any subsequent attempt to +rearm the timer will be silently ignored by the core code. + + Locking Speed ============= diff --git a/Documentation/translations/zh_CN/core-api/local_ops.rst b/Documentation/translations/zh_CN/core-api/local_ops.rst index 22493b9b829c..eb5423f60f17 100644 --- a/Documentation/translations/zh_CN/core-api/local_ops.rst +++ b/Documentation/translations/zh_CN/core-api/local_ops.rst @@ -185,7 +185,7 @@ UP之间没有不同的行为,在你的架构的 ``local.h`` 中包括 ``asm-g static void __exit test_exit(void) { - timer_delete_sync(&test_timer); + timer_shutdown_sync(&test_timer); } module_init(test_init); -- cgit v1.2.3 From e0d3da982c96aeddc1bbf1cf9469dbb9ebdca657 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 23 Nov 2022 21:18:57 +0100 Subject: Bluetooth: hci_qca: Fix the teardown problem for real While discussing solutions for the teardown problem which results from circular dependencies between timers and workqueues, where timers schedule work from their timer callback and workqueues arm the timers from work items, it was discovered that the recent fix to the QCA code is incorrect. That commit fixes the obvious problem of using del_timer() instead of del_timer_sync() and reorders the teardown calls to destroy_workqueue(wq); del_timer_sync(t); This makes it less likely to explode, but it's still broken: destroy_workqueue(wq); /* After this point @wq cannot be touched anymore */ ---> timer expires queue_work(wq) <---- Results in a NULL pointer dereference deep in the work queue core code. del_timer_sync(t); Use the new timer_shutdown_sync() function to ensure that the timers are disarmed, no timer callbacks are running and the timers cannot be armed again. This restores the original teardown sequence: timer_shutdown_sync(t); destroy_workqueue(wq); which is now correct because the timer core silently ignores potential rearming attempts which can happen when destroy_workqueue() drains pending work before mopping up the workqueue. 
Fixes: 72ef98445aca ("Bluetooth: hci_qca: Use del_timer_sync() before freeing") Signed-off-by: Thomas Gleixner Tested-by: Guenter Roeck Reviewed-by: Jacob Keller Reviewed-by: Anna-Maria Behnsen Acked-by: Luiz Augusto von Dentz Link: https://lore.kernel.org/all/87iljhsftt.ffs@tglx Link: https://lore.kernel.org/r/20221123201625.435907114@linutronix.de --- drivers/bluetooth/hci_qca.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c index 8df11016fd51..ba8be8e1bebd 100644 --- a/drivers/bluetooth/hci_qca.c +++ b/drivers/bluetooth/hci_qca.c @@ -696,9 +696,15 @@ static int qca_close(struct hci_uart *hu) skb_queue_purge(&qca->tx_wait_q); skb_queue_purge(&qca->txq); skb_queue_purge(&qca->rx_memdump_q); + /* + * Shut the timers down so they can't be rearmed when + * destroy_workqueue() drains pending work which in turn might try + * to arm a timer. After shutdown rearm attempts are silently + * ignored by the timer core code. + */ + timer_shutdown_sync(&qca->tx_idle_timer); + timer_shutdown_sync(&qca->wake_retrans_timer); destroy_workqueue(qca->workqueue); - del_timer_sync(&qca->tx_idle_timer); - del_timer_sync(&qca->wake_retrans_timer); qca->hu = NULL; kfree_skb(qca->rx_skb); -- cgit v1.2.3 From d6c494e8ee932b2b21ff4b718eebb378e91b3da0 Mon Sep 17 00:00:00 2001 From: Jann Horn Date: Wed, 30 Nov 2022 12:53:20 +0100 Subject: vdso/timens: Refactor copy-pasted find_timens_vvar_page() helper into one copy find_timens_vvar_page() is not architecture-specific, as can be seen from how all five per-architecture versions of it are the same. (arm64, powerpc and riscv are exactly the same; x86 and s390 have two characters difference inside a comment, less blank lines, and mark the !CONFIG_TIME_NS version as inline.) Refactor the five copies into a central copy in kernel/time/namespace.c. Signed-off-by: Jann Horn Signed-off-by: Thomas Gleixner Link: https://lore.kernel.org/r/20221130115320.2918447-1-jannh@google.com --- arch/arm64/kernel/vdso.c | 22 ---------------------- arch/powerpc/kernel/vdso.c | 22 ---------------------- arch/riscv/kernel/vdso.c | 22 ---------------------- arch/s390/kernel/vdso.c | 20 -------------------- arch/x86/entry/vdso/vma.c | 23 ----------------------- include/linux/time_namespace.h | 6 ++++++ kernel/time/namespace.c | 18 ++++++++++++++++++ 7 files changed, 24 insertions(+), 109 deletions(-) diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c index 99ae81ab91a7..e59a32aa0c49 100644 --- a/arch/arm64/kernel/vdso.c +++ b/arch/arm64/kernel/vdso.c @@ -151,28 +151,6 @@ int vdso_join_timens(struct task_struct *task, struct time_namespace *ns) mmap_read_unlock(mm); return 0; } - -static struct page *find_timens_vvar_page(struct vm_area_struct *vma) -{ - if (likely(vma->vm_mm == current->mm)) - return current->nsproxy->time_ns->vvar_page; - - /* - * VM_PFNMAP | VM_IO protect .fault() handler from being called - * through interfaces like /proc/$pid/mem or - * process_vm_{readv,writev}() as long as there's no .access() - * in special_mapping_vmops. 
- * For more details check_vma_flags() and __access_remote_vm() - */ - WARN(1, "vvar_page accessed remotely"); - - return NULL; -} -#else -static struct page *find_timens_vvar_page(struct vm_area_struct *vma) -{ - return NULL; -} #endif static vm_fault_t vvar_fault(const struct vm_special_mapping *sm, diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c index 4abc01949702..507f8228f983 100644 --- a/arch/powerpc/kernel/vdso.c +++ b/arch/powerpc/kernel/vdso.c @@ -129,28 +129,6 @@ int vdso_join_timens(struct task_struct *task, struct time_namespace *ns) return 0; } - -static struct page *find_timens_vvar_page(struct vm_area_struct *vma) -{ - if (likely(vma->vm_mm == current->mm)) - return current->nsproxy->time_ns->vvar_page; - - /* - * VM_PFNMAP | VM_IO protect .fault() handler from being called - * through interfaces like /proc/$pid/mem or - * process_vm_{readv,writev}() as long as there's no .access() - * in special_mapping_vmops. - * For more details check_vma_flags() and __access_remote_vm() - */ - WARN(1, "vvar_page accessed remotely"); - - return NULL; -} -#else -static struct page *find_timens_vvar_page(struct vm_area_struct *vma) -{ - return NULL; -} #endif static vm_fault_t vvar_fault(const struct vm_special_mapping *sm, diff --git a/arch/riscv/kernel/vdso.c b/arch/riscv/kernel/vdso.c index 123d05255fcf..e410275918ac 100644 --- a/arch/riscv/kernel/vdso.c +++ b/arch/riscv/kernel/vdso.c @@ -137,28 +137,6 @@ int vdso_join_timens(struct task_struct *task, struct time_namespace *ns) mmap_read_unlock(mm); return 0; } - -static struct page *find_timens_vvar_page(struct vm_area_struct *vma) -{ - if (likely(vma->vm_mm == current->mm)) - return current->nsproxy->time_ns->vvar_page; - - /* - * VM_PFNMAP | VM_IO protect .fault() handler from being called - * through interfaces like /proc/$pid/mem or - * process_vm_{readv,writev}() as long as there's no .access() - * in special_mapping_vmops. - * For more details check_vma_flags() and __access_remote_vm() - */ - WARN(1, "vvar_page accessed remotely"); - - return NULL; -} -#else -static struct page *find_timens_vvar_page(struct vm_area_struct *vma) -{ - return NULL; -} #endif static vm_fault_t vvar_fault(const struct vm_special_mapping *sm, diff --git a/arch/s390/kernel/vdso.c b/arch/s390/kernel/vdso.c index 3105ca5bd470..d6df7169c01f 100644 --- a/arch/s390/kernel/vdso.c +++ b/arch/s390/kernel/vdso.c @@ -44,21 +44,6 @@ struct vdso_data *arch_get_vdso_data(void *vvar_page) return (struct vdso_data *)(vvar_page); } -static struct page *find_timens_vvar_page(struct vm_area_struct *vma) -{ - if (likely(vma->vm_mm == current->mm)) - return current->nsproxy->time_ns->vvar_page; - /* - * VM_PFNMAP | VM_IO protect .fault() handler from being called - * through interfaces like /proc/$pid/mem or - * process_vm_{readv,writev}() as long as there's no .access() - * in special_mapping_vmops(). - * For more details check_vma_flags() and __access_remote_vm() - */ - WARN(1, "vvar_page accessed remotely"); - return NULL; -} - /* * The VVAR page layout depends on whether a task belongs to the root or * non-root time namespace. 
Whenever a task changes its namespace, the VVAR @@ -84,11 +69,6 @@ int vdso_join_timens(struct task_struct *task, struct time_namespace *ns) mmap_read_unlock(mm); return 0; } -#else -static inline struct page *find_timens_vvar_page(struct vm_area_struct *vma) -{ - return NULL; -} #endif static vm_fault_t vvar_fault(const struct vm_special_mapping *sm, diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c index 311eae30e089..6b36485054e8 100644 --- a/arch/x86/entry/vdso/vma.c +++ b/arch/x86/entry/vdso/vma.c @@ -98,24 +98,6 @@ static int vdso_mremap(const struct vm_special_mapping *sm, } #ifdef CONFIG_TIME_NS -static struct page *find_timens_vvar_page(struct vm_area_struct *vma) -{ - if (likely(vma->vm_mm == current->mm)) - return current->nsproxy->time_ns->vvar_page; - - /* - * VM_PFNMAP | VM_IO protect .fault() handler from being called - * through interfaces like /proc/$pid/mem or - * process_vm_{readv,writev}() as long as there's no .access() - * in special_mapping_vmops(). - * For more details check_vma_flags() and __access_remote_vm() - */ - - WARN(1, "vvar_page accessed remotely"); - - return NULL; -} - /* * The vvar page layout depends on whether a task belongs to the root or * non-root time namespace. Whenever a task changes its namespace, the VVAR @@ -140,11 +122,6 @@ int vdso_join_timens(struct task_struct *task, struct time_namespace *ns) return 0; } -#else -static inline struct page *find_timens_vvar_page(struct vm_area_struct *vma) -{ - return NULL; -} #endif static vm_fault_t vvar_fault(const struct vm_special_mapping *sm, diff --git a/include/linux/time_namespace.h b/include/linux/time_namespace.h index 3146f1c056c9..bb9d3f5542f8 100644 --- a/include/linux/time_namespace.h +++ b/include/linux/time_namespace.h @@ -45,6 +45,7 @@ struct time_namespace *copy_time_ns(unsigned long flags, void free_time_ns(struct time_namespace *ns); void timens_on_fork(struct nsproxy *nsproxy, struct task_struct *tsk); struct vdso_data *arch_get_vdso_data(void *vvar_page); +struct page *find_timens_vvar_page(struct vm_area_struct *vma); static inline void put_time_ns(struct time_namespace *ns) { @@ -141,6 +142,11 @@ static inline void timens_on_fork(struct nsproxy *nsproxy, return; } +static inline struct page *find_timens_vvar_page(struct vm_area_struct *vma) +{ + return NULL; +} + static inline void timens_add_monotonic(struct timespec64 *ts) { } static inline void timens_add_boottime(struct timespec64 *ts) { } diff --git a/kernel/time/namespace.c b/kernel/time/namespace.c index aec832801c26..0775b9ec952a 100644 --- a/kernel/time/namespace.c +++ b/kernel/time/namespace.c @@ -192,6 +192,24 @@ static void timens_setup_vdso_data(struct vdso_data *vdata, offset[CLOCK_BOOTTIME_ALARM] = boottime; } +struct page *find_timens_vvar_page(struct vm_area_struct *vma) +{ + if (likely(vma->vm_mm == current->mm)) + return current->nsproxy->time_ns->vvar_page; + + /* + * VM_PFNMAP | VM_IO protect .fault() handler from being called + * through interfaces like /proc/$pid/mem or + * process_vm_{readv,writev}() as long as there's no .access() + * in special_mapping_vmops(). + * For more details check_vma_flags() and __access_remote_vm() + */ + + WARN(1, "vvar_page accessed remotely"); + + return NULL; +} + /* * Protects possibly multiple offsets writers racing each other * and tasks entering the namespace. 
-- cgit v1.2.3 From 3f44f7156f59cae06e9160eafb5d8b2dfd09e639 Mon Sep 17 00:00:00 2001 From: Wolfram Sang Date: Wed, 30 Nov 2022 22:06:09 +0100 Subject: clocksource/drivers/sh_cmt: Access registers according to spec Documentation for most CMTs say that it takes two input clocks before changes propagate to the timer. This is especially relevant when the timer is stopped to change further settings. Implement the delays according to the spec. To avoid unnecessary delays in atomic mode, also check if the to-be-written value actually differs. CMCNT is a bit special because testing showed that it requires 3 cycles to propagate, which affects all CMTs. Also, the WRFLAG needs to be checked before writing. This fixes "cannot clear CMCNT" messages which occur often on R-Car Gen4 SoCs, but only very rarely on older SoCs for some reason. Fixes: 81b3b2711072 ("clocksource: sh_cmt: Add support for multiple channels per device") Signed-off-by: Wolfram Sang Signed-off-by: Thomas Gleixner Link: https://lore.kernel.org/r/20221130210609.7718-1-wsa+renesas@sang-engineering.com --- drivers/clocksource/sh_cmt.c | 88 +++++++++++++++++++++++++++----------------- 1 file changed, 55 insertions(+), 33 deletions(-) diff --git a/drivers/clocksource/sh_cmt.c b/drivers/clocksource/sh_cmt.c index 64dcb082d4cf..7b952aa52c0b 100644 --- a/drivers/clocksource/sh_cmt.c +++ b/drivers/clocksource/sh_cmt.c @@ -13,6 +13,7 @@ #include #include #include +#include #include #include #include @@ -116,6 +117,7 @@ struct sh_cmt_device { void __iomem *mapbase; struct clk *clk; unsigned long rate; + unsigned int reg_delay; raw_spinlock_t lock; /* Protect the shared start/stop register */ @@ -247,10 +249,17 @@ static inline u32 sh_cmt_read_cmstr(struct sh_cmt_channel *ch) static inline void sh_cmt_write_cmstr(struct sh_cmt_channel *ch, u32 value) { - if (ch->iostart) - ch->cmt->info->write_control(ch->iostart, 0, value); - else - ch->cmt->info->write_control(ch->cmt->mapbase, 0, value); + u32 old_value = sh_cmt_read_cmstr(ch); + + if (value != old_value) { + if (ch->iostart) { + ch->cmt->info->write_control(ch->iostart, 0, value); + udelay(ch->cmt->reg_delay); + } else { + ch->cmt->info->write_control(ch->cmt->mapbase, 0, value); + udelay(ch->cmt->reg_delay); + } + } } static inline u32 sh_cmt_read_cmcsr(struct sh_cmt_channel *ch) @@ -260,7 +269,12 @@ static inline u32 sh_cmt_read_cmcsr(struct sh_cmt_channel *ch) static inline void sh_cmt_write_cmcsr(struct sh_cmt_channel *ch, u32 value) { - ch->cmt->info->write_control(ch->ioctrl, CMCSR, value); + u32 old_value = sh_cmt_read_cmcsr(ch); + + if (value != old_value) { + ch->cmt->info->write_control(ch->ioctrl, CMCSR, value); + udelay(ch->cmt->reg_delay); + } } static inline u32 sh_cmt_read_cmcnt(struct sh_cmt_channel *ch) @@ -268,14 +282,33 @@ static inline u32 sh_cmt_read_cmcnt(struct sh_cmt_channel *ch) return ch->cmt->info->read_count(ch->ioctrl, CMCNT); } -static inline void sh_cmt_write_cmcnt(struct sh_cmt_channel *ch, u32 value) +static inline int sh_cmt_write_cmcnt(struct sh_cmt_channel *ch, u32 value) { + /* Tests showed that we need to wait 3 clocks here */ + unsigned int cmcnt_delay = DIV_ROUND_UP(3 * ch->cmt->reg_delay, 2); + u32 reg; + + if (ch->cmt->info->model > SH_CMT_16BIT) { + int ret = read_poll_timeout_atomic(sh_cmt_read_cmcsr, reg, + !(reg & SH_CMT32_CMCSR_WRFLG), + 1, cmcnt_delay, false, ch); + if (ret < 0) + return ret; + } + ch->cmt->info->write_count(ch->ioctrl, CMCNT, value); + udelay(cmcnt_delay); + return 0; } static inline void sh_cmt_write_cmcor(struct 
sh_cmt_channel *ch, u32 value) { - ch->cmt->info->write_count(ch->ioctrl, CMCOR, value); + u32 old_value = ch->cmt->info->read_count(ch->ioctrl, CMCOR); + + if (value != old_value) { + ch->cmt->info->write_count(ch->ioctrl, CMCOR, value); + udelay(ch->cmt->reg_delay); + } } static u32 sh_cmt_get_counter(struct sh_cmt_channel *ch, u32 *has_wrapped) @@ -319,7 +352,7 @@ static void sh_cmt_start_stop_ch(struct sh_cmt_channel *ch, int start) static int sh_cmt_enable(struct sh_cmt_channel *ch) { - int k, ret; + int ret; dev_pm_syscore_device(&ch->cmt->pdev->dev, true); @@ -347,26 +380,9 @@ static int sh_cmt_enable(struct sh_cmt_channel *ch) } sh_cmt_write_cmcor(ch, 0xffffffff); - sh_cmt_write_cmcnt(ch, 0); - - /* - * According to the sh73a0 user's manual, as CMCNT can be operated - * only by the RCLK (Pseudo 32 kHz), there's one restriction on - * modifying CMCNT register; two RCLK cycles are necessary before - * this register is either read or any modification of the value - * it holds is reflected in the LSI's actual operation. - * - * While at it, we're supposed to clear out the CMCNT as of this - * moment, so make sure it's processed properly here. This will - * take RCLKx2 at maximum. - */ - for (k = 0; k < 100; k++) { - if (!sh_cmt_read_cmcnt(ch)) - break; - udelay(1); - } + ret = sh_cmt_write_cmcnt(ch, 0); - if (sh_cmt_read_cmcnt(ch)) { + if (ret || sh_cmt_read_cmcnt(ch)) { dev_err(&ch->cmt->pdev->dev, "ch%u: cannot clear CMCNT\n", ch->index); ret = -ETIMEDOUT; @@ -995,8 +1011,8 @@ MODULE_DEVICE_TABLE(of, sh_cmt_of_table); static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev) { - unsigned int mask; - unsigned int i; + unsigned int mask, i; + unsigned long rate; int ret; cmt->pdev = pdev; @@ -1032,10 +1048,16 @@ static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev) if (ret < 0) goto err_clk_unprepare; - if (cmt->info->width == 16) - cmt->rate = clk_get_rate(cmt->clk) / 512; - else - cmt->rate = clk_get_rate(cmt->clk) / 8; + rate = clk_get_rate(cmt->clk); + if (!rate) { + ret = -EINVAL; + goto err_clk_disable; + } + + /* We shall wait 2 input clks after register writes */ + if (cmt->info->model >= SH_CMT_48BIT) + cmt->reg_delay = DIV_ROUND_UP(2UL * USEC_PER_SEC, rate); + cmt->rate = rate / (cmt->info->width == 16 ? 512 : 8); /* Map the memory resource(s). */ ret = sh_cmt_map_memory(cmt); -- cgit v1.2.3 From 035629547999d0e9095886f248c2580dc56f36c6 Mon Sep 17 00:00:00 2001 From: Lukas Bulwahn Date: Wed, 23 Nov 2022 09:31:59 +0100 Subject: clocksource/drivers/ingenic-ost: Define pm functions properly in platform_driver struct Commit ca7b72b5a5f2 ("clocksource: Add driver for the Ingenic JZ47xx OST") adds the struct platform_driver ingenic_ost_driver, with the definition of pm functions under the non-existing config PM_SUSPEND, which means the intended pm functions were never actually included in any build. As the only callbacks are .suspend_noirq and .resume_noirq, we can assume that it is intended to be CONFIG_PM_SLEEP. Since commit 1a3c7bb08826 ("PM: core: Add new *_PM_OPS macros, deprecate old ones"), the default pattern for platform_driver definitions conditional for CONFIG_PM_SLEEP is to use pm_sleep_ptr(). As __maybe_unused annotations on the dev_pm_ops structure and its callbacks are not needed anymore, remove these as well. 
Suggested-by: Paul Cercueil Signed-off-by: Lukas Bulwahn Signed-off-by: Thomas Gleixner Reviewed-by: Paul Cercueil Link: https://lore.kernel.org/r/20221123083159.22821-1-lukas.bulwahn@gmail.com --- drivers/clocksource/ingenic-ost.c | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/drivers/clocksource/ingenic-ost.c b/drivers/clocksource/ingenic-ost.c index 06d25754e606..9f7c280a1336 100644 --- a/drivers/clocksource/ingenic-ost.c +++ b/drivers/clocksource/ingenic-ost.c @@ -141,7 +141,7 @@ static int __init ingenic_ost_probe(struct platform_device *pdev) return 0; } -static int __maybe_unused ingenic_ost_suspend(struct device *dev) +static int ingenic_ost_suspend(struct device *dev) { struct ingenic_ost *ost = dev_get_drvdata(dev); @@ -150,14 +150,14 @@ static int __maybe_unused ingenic_ost_suspend(struct device *dev) return 0; } -static int __maybe_unused ingenic_ost_resume(struct device *dev) +static int ingenic_ost_resume(struct device *dev) { struct ingenic_ost *ost = dev_get_drvdata(dev); return clk_enable(ost->clk); } -static const struct dev_pm_ops __maybe_unused ingenic_ost_pm_ops = { +static const struct dev_pm_ops ingenic_ost_pm_ops = { /* _noirq: We want the OST clock to be gated last / ungated first */ .suspend_noirq = ingenic_ost_suspend, .resume_noirq = ingenic_ost_resume, @@ -181,9 +181,7 @@ static const struct of_device_id ingenic_ost_of_match[] = { static struct platform_driver ingenic_ost_driver = { .driver = { .name = "ingenic-ost", -#ifdef CONFIG_PM_SUSPEND - .pm = &ingenic_ost_pm_ops, -#endif + .pm = pm_sleep_ptr(&ingenic_ost_pm_ops), .of_match_table = ingenic_ost_of_match, }, }; -- cgit v1.2.3 From ebe11732838f39bd10bddafd4dfe2f97010fde62 Mon Sep 17 00:00:00 2001 From: Lukas Bulwahn Date: Wed, 2 Nov 2022 10:10:48 +0100 Subject: clockevents: Repair kernel-doc for clockevent_delta2ns() Since the introduction of clockevents, i.e., commit d316c57ff6bf ("clockevents: add core functionality"), there has been a mismatch between the function and the kernel-doc comment for clockevent_delta2ns(). Hence, ./scripts/kernel-doc -none kernel/time/clockevents.c warns about it. Adjust the kernel-doc comment for clockevent_delta2ns() for make W=1 happiness. Signed-off-by: Lukas Bulwahn Signed-off-by: Thomas Gleixner Link: https://lore.kernel.org/r/20221102091048.15068-1-lukas.bulwahn@gmail.com --- kernel/time/clockevents.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/time/clockevents.c b/kernel/time/clockevents.c index 5d85014d59b5..960143b183cd 100644 --- a/kernel/time/clockevents.c +++ b/kernel/time/clockevents.c @@ -76,7 +76,7 @@ static u64 cev_delta2ns(unsigned long latch, struct clock_event_device *evt, } /** - * clockevents_delta2ns - Convert a latch value (device ticks) to nanoseconds + * clockevent_delta2ns - Convert a latch value (device ticks) to nanoseconds * @latch: value to convert * @evt: pointer to clock event device descriptor * -- cgit v1.2.3 From 9ffa5e6b8f93ccf1a217dd12e204c1a5e211abef Mon Sep 17 00:00:00 2001 From: Johan Jonker Date: Fri, 28 Oct 2022 16:41:30 +0200 Subject: dt-bindings: timer: rockchip: Add rockchip,rk3128-timer Add rockchip,rk3128-timer compatible string. 
Signed-off-by: Johan Jonker Acked-by: Krzysztof Kozlowski Reviewed-by: Heiko Stuebner Signed-off-by: Daniel Lezcano Link: https://lore.kernel.org/r/0e57f38f-bace-8556-7258-aa0b3c0ac103@gmail.com --- Documentation/devicetree/bindings/timer/rockchip,rk-timer.yaml | 1 + 1 file changed, 1 insertion(+) diff --git a/Documentation/devicetree/bindings/timer/rockchip,rk-timer.yaml b/Documentation/devicetree/bindings/timer/rockchip,rk-timer.yaml index dc3bc1e62fe9..b61ed1a431bb 100644 --- a/Documentation/devicetree/bindings/timer/rockchip,rk-timer.yaml +++ b/Documentation/devicetree/bindings/timer/rockchip,rk-timer.yaml @@ -18,6 +18,7 @@ properties: - enum: - rockchip,rv1108-timer - rockchip,rk3036-timer + - rockchip,rk3128-timer - rockchip,rk3188-timer - rockchip,rk3228-timer - rockchip,rk3229-timer -- cgit v1.2.3 From aa3f72ea9410f8c9394a5d25bbf40a4cfb56f5a0 Mon Sep 17 00:00:00 2001 From: Jonathan Neuschäfer Date: Fri, 4 Nov 2022 17:18:45 +0100 Subject: dt-bindings: timer: nuvoton,npcm7xx-timer: Allow specifying all clocks The timer module contains multiple timers. In the WPCM450 SoC, each timer runs off a clock can be gated individually. To model this correctly, the timer node in the devicetree needs to take multiple clock inputs. Signed-off-by: Jonathan Neuschäfer Reviewed-by: Rob Herring Reviewed-by: Joel Stanley Link: https://lore.kernel.org/r/20221104161850.2889894-2-j.neuschaefer@gmx.net Signed-off-by: Daniel Lezcano --- .../devicetree/bindings/timer/nuvoton,npcm7xx-timer.yaml | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/Documentation/devicetree/bindings/timer/nuvoton,npcm7xx-timer.yaml b/Documentation/devicetree/bindings/timer/nuvoton,npcm7xx-timer.yaml index 737af78ad70c..d53e1bb98b8a 100644 --- a/Documentation/devicetree/bindings/timer/nuvoton,npcm7xx-timer.yaml +++ b/Documentation/devicetree/bindings/timer/nuvoton,npcm7xx-timer.yaml @@ -25,7 +25,13 @@ properties: - description: The timer interrupt of timer 0 clocks: - maxItems: 1 + items: + - description: The reference clock for timer 0 + - description: The reference clock for timer 1 + - description: The reference clock for timer 2 + - description: The reference clock for timer 3 + - description: The reference clock for timer 4 + minItems: 1 required: - compatible -- cgit v1.2.3 From db78539fc95cf62b0b8f274368fcd8202eac91f9 Mon Sep 17 00:00:00 2001 From: Jonathan Neuschäfer Date: Fri, 4 Nov 2022 17:18:46 +0100 Subject: clocksource/drivers/timer-npcm7xx: Enable timer 1 clock before use In the WPCM450 SoC, the clocks for each timer can be gated individually. To prevent the timer 1 clock from being gated, enable it explicitly. 
Signed-off-by: Jonathan Neuschäfer Reviewed-by: Joel Stanley Link: https://lore.kernel.org/r/20221104161850.2889894-3-j.neuschaefer@gmx.net Signed-off-by: Daniel Lezcano --- drivers/clocksource/timer-npcm7xx.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/drivers/clocksource/timer-npcm7xx.c b/drivers/clocksource/timer-npcm7xx.c index a00520cbb660..9af30af5f989 100644 --- a/drivers/clocksource/timer-npcm7xx.c +++ b/drivers/clocksource/timer-npcm7xx.c @@ -188,6 +188,7 @@ static void __init npcm7xx_clocksource_init(void) static int __init npcm7xx_timer_init(struct device_node *np) { + struct clk *clk; int ret; ret = timer_of_init(np, &npcm7xx_to); @@ -199,6 +200,15 @@ static int __init npcm7xx_timer_init(struct device_node *np) npcm7xx_to.of_clk.rate = npcm7xx_to.of_clk.rate / (NPCM7XX_Tx_MIN_PRESCALE + 1); + /* Enable the clock for timer1, if it exists */ + clk = of_clk_get(np, 1); + if (clk) { + if (!IS_ERR(clk)) + clk_prepare_enable(clk); + else + pr_warn("%pOF: Failed to get clock for timer1: %pe", np, clk); + } + npcm7xx_clocksource_init(); npcm7xx_clockevents_init(); -- cgit v1.2.3 From 45ae272a948a03a7d55748bf52d2f47d3b4e1d5a Mon Sep 17 00:00:00 2001 From: Joe Korty Date: Mon, 21 Nov 2022 14:53:43 +0000 Subject: clocksource/drivers/arm_arch_timer: Fix XGene-1 TVAL register math error The TVAL register is 32 bit signed. Thus only the lower 31 bits are available to specify when an interrupt is to occur at some time in the near future. Attempting to specify a larger interval with TVAL results in a negative time delta which means the timer fires immediately upon being programmed, rather than firing at that expected future time. The solution is for Linux to declare that TVAL is a 31 bit register rather than give its true size of 32 bits. This prevents Linux from programming TVAL with a too-large value. Note that, prior to 5.16, this little trick was the standard way to handle TVAL in Linux, so there is nothing new happening here on that front. The softlockup detector hides the issue, because it keeps generating short timer deadlines that are within the scope of the broken timer. Disable it, and you start using NO_HZ with much longer timer deadlines, which turns into an interrupt flood: 11: 1124855130 949168462 758009394 76417474 104782230 30210281 310890 1734323687 GICv2 29 Level arch_timer And "much longer" isn't that long: it takes less than 43s to underflow TVAL at 50MHz (the frequency of the counter on XGene-1). Some comments on the v1 version of this patch by Marc Zyngier: XGene implements CVAL (a 64bit comparator) in terms of TVAL (a countdown register) instead of the other way around. TVAL being a 32bit register, the width of the counter should equally be 32. However, TVAL is a *signed* value, and keeps counting down in the negative range once the timer fires. It means that any TVAL value with bit 31 set will fire immediately, as it cannot be distinguished from an already expired timer. Reducing the timer range back to a paltry 31 bits papers over the issue. Another problem cannot be fixed though, which is that the timer interrupt *must* be handled within the negative countdown period, or the interrupt will be lost (TVAL will rollover to a positive value, indicative of a new timer deadline). 
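For reference, the arithmetic behind that figure: with only 31 usable magnitude bits, the largest delta that can be programmed is 2^31 - 1 counter ticks, and at the 50 MHz counter frequency of XGene-1 that is (2^31 - 1) / 50,000,000 Hz ≈ 42.9 seconds, which is where the "less than 43s" underflow limit above comes from.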
Cc: stable@vger.kernel.org # 5.16+ Fixes: 012f18850452 ("clocksource/drivers/arm_arch_timer: Work around broken CVAL implementations") Signed-off-by: Joe Korty Reviewed-by: Marc Zyngier [maz: revamped the commit message] Signed-off-by: Marc Zyngier Link: https://lore.kernel.org/r/20221024165422.GA51107@zipoli.concurrent-rt.com Link: https://lore.kernel.org/r/20221121145343.896018-1-maz@kernel.org Signed-off-by: Daniel Lezcano --- drivers/clocksource/arm_arch_timer.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c index 9c3420a0d19d..e2920da18ea1 100644 --- a/drivers/clocksource/arm_arch_timer.c +++ b/drivers/clocksource/arm_arch_timer.c @@ -806,6 +806,9 @@ static u64 __arch_timer_check_delta(void) /* * XGene-1 implements CVAL in terms of TVAL, meaning * that the maximum timer range is 32bit. Shame on them. + * + * Note that TVAL is signed, thus has only 31 of its + * 32 bits to express magnitude. */ MIDR_ALL_VERSIONS(MIDR_CPU_MODEL(ARM_CPU_IMP_APM, APM_CPU_PART_POTENZA)), @@ -813,8 +816,8 @@ static u64 __arch_timer_check_delta(void) }; if (is_midr_in_range_list(read_cpuid_id(), broken_cval_midrs)) { - pr_warn_once("Broken CNTx_CVAL_EL1, limiting width to 32bits"); - return CLOCKSOURCE_MASK(32); + pr_warn_once("Broken CNTx_CVAL_EL1, using 32 bit TVAL instead.\n"); + return CLOCKSOURCE_MASK(31); } #endif return CLOCKSOURCE_MASK(arch_counter_get_width()); -- cgit v1.2.3 From 9688498b1648aa98a3ee45d9f07763c099f6fb12 Mon Sep 17 00:00:00 2001 From: Tony Lindgren Date: Fri, 28 Oct 2022 13:35:26 +0300 Subject: clocksource/drivers/timer-ti-dm: Fix warning for omap_timer_match We can now get a warning for 'omap_timer_match' defined but not used. Let's fix this by dropping of_match_ptr for omap_timer_match. Reported-by: kernel test robot Fixes: ab0bbef3ae0f ("clocksource/drivers/timer-ti-dm: Make timer selectable for ARCH_K3") Signed-off-by: Tony Lindgren Link: https://lore.kernel.org/r/20221028103526.40319-1-tony@atomide.com Signed-off-by: Daniel Lezcano --- drivers/clocksource/timer-ti-dm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/clocksource/timer-ti-dm.c b/drivers/clocksource/timer-ti-dm.c index cad29ded3a48..00af1a8e34fb 100644 --- a/drivers/clocksource/timer-ti-dm.c +++ b/drivers/clocksource/timer-ti-dm.c @@ -1258,7 +1258,7 @@ static struct platform_driver omap_dm_timer_driver = { .remove = omap_dm_timer_remove, .driver = { .name = "omap_timer", - .of_match_table = of_match_ptr(omap_timer_match), + .of_match_table = omap_timer_match, .pm = &omap_dm_timer_pm_ops, }, }; -- cgit v1.2.3 From dedb2aced3e958c6f4811d3e6b392652ff0eea01 Mon Sep 17 00:00:00 2001 From: Tony Lindgren Date: Fri, 28 Oct 2022 13:36:04 +0300 Subject: clocksource/drivers/timer-ti-dm: Make timer_get_irq static We can make timer_get_irq() static as noted by Janusz. It is only used by omap_rproc_get_timer_irq() via platform data. 
Reported-by: Janusz Krzysztofik Signed-off-by: Tony Lindgren Link: https://lore.kernel.org/r/20221028103604.40385-1-tony@atomide.com Signed-off-by: Daniel Lezcano --- drivers/clocksource/timer-ti-dm.c | 2 +- include/clocksource/timer-ti-dm.h | 2 -- 2 files changed, 1 insertion(+), 3 deletions(-) diff --git a/drivers/clocksource/timer-ti-dm.c b/drivers/clocksource/timer-ti-dm.c index 00af1a8e34fb..eeeeb3c08c91 100644 --- a/drivers/clocksource/timer-ti-dm.c +++ b/drivers/clocksource/timer-ti-dm.c @@ -643,7 +643,7 @@ static int omap_dm_timer_free(struct omap_dm_timer *cookie) return 0; } -int omap_dm_timer_get_irq(struct omap_dm_timer *cookie) +static int omap_dm_timer_get_irq(struct omap_dm_timer *cookie) { struct dmtimer *timer = to_dmtimer(cookie); if (timer) diff --git a/include/clocksource/timer-ti-dm.h b/include/clocksource/timer-ti-dm.h index 77eceeae708c..dcc1712f75e7 100644 --- a/include/clocksource/timer-ti-dm.h +++ b/include/clocksource/timer-ti-dm.h @@ -62,8 +62,6 @@ struct omap_dm_timer { }; -int omap_dm_timer_get_irq(struct omap_dm_timer *timer); - u32 omap_dm_timer_modify_idlect_mask(u32 inputmask); /* -- cgit v1.2.3 From 822963b96dfdb72ec4fb1395fbdfa778656b49d1 Mon Sep 17 00:00:00 2001 From: Tony Lindgren Date: Fri, 28 Oct 2022 13:38:13 +0300 Subject: clocksource/drivers/timer-ti-dm: Clear settings on probe and free Clear the timer control register on driver probe and omap_dm_timer_free(). Otherwise we assume the consumer driver takes care of properly initializing timer interrupts on PWM driver module reload for example. AFAIK this is not currently needed as a fix, I just happened to run into this while cleaning up things. Signed-off-by: Tony Lindgren Link: https://lore.kernel.org/r/20221028103813.40783-1-tony@atomide.com Signed-off-by: Daniel Lezcano --- drivers/clocksource/timer-ti-dm.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/drivers/clocksource/timer-ti-dm.c b/drivers/clocksource/timer-ti-dm.c index eeeeb3c08c91..b24b903a8822 100644 --- a/drivers/clocksource/timer-ti-dm.c +++ b/drivers/clocksource/timer-ti-dm.c @@ -633,6 +633,8 @@ static struct omap_dm_timer *omap_dm_timer_request_by_node(struct device_node *n static int omap_dm_timer_free(struct omap_dm_timer *cookie) { struct dmtimer *timer; + struct device *dev; + int rc; timer = to_dmtimer(cookie); if (unlikely(!timer)) @@ -640,6 +642,17 @@ static int omap_dm_timer_free(struct omap_dm_timer *cookie) WARN_ON(!timer->reserved); timer->reserved = 0; + + dev = &timer->pdev->dev; + rc = pm_runtime_resume_and_get(dev); + if (rc) + return rc; + + /* Clear timer configuration */ + dmtimer_write(timer, OMAP_TIMER_CTRL_REG, 0); + + pm_runtime_put_sync(dev); + return 0; } @@ -1135,6 +1148,10 @@ static int omap_dm_timer_probe(struct platform_device *pdev) goto err_disable; } __omap_dm_timer_init_regs(timer); + + /* Clear timer configuration */ + dmtimer_write(timer, OMAP_TIMER_CTRL_REG, 0); + pm_runtime_put(dev); } -- cgit v1.2.3 From 180d35a7c05d520314a590c99ad8643d0213f28b Mon Sep 17 00:00:00 2001 From: Yang Yingliang Date: Sat, 29 Oct 2022 19:44:27 +0800 Subject: clocksource/drivers/timer-ti-dm: Fix missing clk_disable_unprepare in dmtimer_systimer_init_clock() If clk_get_rate() fails which is called after clk_prepare_enable(), clk_disable_unprepare() need be called in error path to disable the clock in dmtimer_systimer_init_clock(). 
Fixes: 52762fbd1c47 ("clocksource/drivers/timer-ti-dm: Add clockevent and clocksource support") Signed-off-by: Yang Yingliang Reviewed-by: Tony Lindgren Link: https://lore.kernel.org/r/20221029114427.946520-1-yangyingliang@huawei.com Signed-off-by: Daniel Lezcano --- drivers/clocksource/timer-ti-dm-systimer.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c index 2737407ff069..632523c1232f 100644 --- a/drivers/clocksource/timer-ti-dm-systimer.c +++ b/drivers/clocksource/timer-ti-dm-systimer.c @@ -345,8 +345,10 @@ static int __init dmtimer_systimer_init_clock(struct dmtimer_systimer *t, return error; r = clk_get_rate(clock); - if (!r) + if (!r) { + clk_disable_unprepare(clock); return -ENODEV; + } if (is_ick) t->ick = clock; -- cgit v1.2.3 From 4238568744c0a150d8901e7847092a0f871c938d Mon Sep 17 00:00:00 2001 From: Christophe JAILLET Date: Tue, 1 Nov 2022 22:13:58 +0100 Subject: clocksource/drivers/arm_arch_timer: Use kstrtobool() instead of strtobool() strtobool() is the same as kstrtobool(). However, the latter is more used within the kernel. In order to remove strtobool() and slightly simplify kstrtox.h, switch to the other function name. While at it, include the corresponding header file () Signed-off-by: Christophe JAILLET Acked-by: Mark Rutland Link: https://lore.kernel.org/r/f430bb12e12eb225ab1206db0be64b755ddafbdc.1667336095.git.christophe.jaillet@wanadoo.fr Signed-off-by: Daniel Lezcano --- drivers/clocksource/arm_arch_timer.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c index e2920da18ea1..1695c56a2aae 100644 --- a/drivers/clocksource/arm_arch_timer.c +++ b/drivers/clocksource/arm_arch_timer.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include #include @@ -97,7 +98,7 @@ static bool evtstrm_enable __ro_after_init = IS_ENABLED(CONFIG_ARM_ARCH_TIMER_EV static int __init early_evtstrm_cfg(char *buf) { - return strtobool(buf, &evtstrm_enable); + return kstrtobool(buf, &evtstrm_enable); } early_param("clocksource.arm_arch_timer.evtstrm", early_evtstrm_cfg); -- cgit v1.2.3 From bbf687daab58ef8d09916d69537ff6fa2c849e88 Mon Sep 17 00:00:00 2001 From: Wolfram Sang Date: Thu, 3 Nov 2022 21:48:58 +0100 Subject: dt-bindings: timer: renesas,tmu: Add r8a779g0 support Signed-off-by: Wolfram Sang Reviewed-by: Geert Uytterhoeven Acked-by: Krzysztof Kozlowski Link: https://lore.kernel.org/r/20221103204859.24667-1-wsa+renesas@sang-engineering.com Signed-off-by: Daniel Lezcano --- Documentation/devicetree/bindings/timer/renesas,tmu.yaml | 1 + 1 file changed, 1 insertion(+) diff --git a/Documentation/devicetree/bindings/timer/renesas,tmu.yaml b/Documentation/devicetree/bindings/timer/renesas,tmu.yaml index 60f4c059bcff..a67e427a9e7e 100644 --- a/Documentation/devicetree/bindings/timer/renesas,tmu.yaml +++ b/Documentation/devicetree/bindings/timer/renesas,tmu.yaml @@ -38,6 +38,7 @@ properties: - renesas,tmu-r8a77995 # R-Car D3 - renesas,tmu-r8a779a0 # R-Car V3U - renesas,tmu-r8a779f0 # R-Car S4-8 + - renesas,tmu-r8a779g0 # R-Car V4H - const: renesas,tmu reg: -- cgit v1.2.3 From 83571a4389039b1be2d77655b2ce47543d407e41 Mon Sep 17 00:00:00 2001 From: Wolfram Sang Date: Fri, 4 Nov 2022 16:06:42 +0100 Subject: dt-bindings: timer: renesas,cmt: Add r8a779g0 CMT support Signed-off-by: Wolfram Sang Acked-by: Rob Herring Reviewed-by: Geert Uytterhoeven Link: 
https://lore.kernel.org/r/20221104150642.4587-1-wsa+renesas@sang-engineering.com Signed-off-by: Daniel Lezcano --- Documentation/devicetree/bindings/timer/renesas,cmt.yaml | 2 ++ 1 file changed, 2 insertions(+) diff --git a/Documentation/devicetree/bindings/timer/renesas,cmt.yaml b/Documentation/devicetree/bindings/timer/renesas,cmt.yaml index bde6c9b66bf4..a0be1755ea28 100644 --- a/Documentation/devicetree/bindings/timer/renesas,cmt.yaml +++ b/Documentation/devicetree/bindings/timer/renesas,cmt.yaml @@ -102,12 +102,14 @@ properties: - enum: - renesas,r8a779a0-cmt0 # 32-bit CMT0 on R-Car V3U - renesas,r8a779f0-cmt0 # 32-bit CMT0 on R-Car S4-8 + - renesas,r8a779g0-cmt0 # 32-bit CMT0 on R-Car V4H - const: renesas,rcar-gen4-cmt0 # 32-bit CMT0 on R-Car Gen4 - items: - enum: - renesas,r8a779a0-cmt1 # 48-bit CMT on R-Car V3U - renesas,r8a779f0-cmt1 # 48-bit CMT on R-Car S4-8 + - renesas,r8a779g0-cmt1 # 48-bit CMT on R-Car V4H - const: renesas,rcar-gen4-cmt1 # 48-bit CMT on R-Car Gen4 reg: -- cgit v1.2.3