author    | Greg Kroah-Hartman | 2018-09-19 07:41:46 +0200
committer | Greg Kroah-Hartman | 2018-09-19 07:41:46 +0200
commit    | f21f7fa263ac005713f0a7a43179c5aea0fabe85
tree      | e260aa43863177bee64c14fe9d012a8f4022a730 /kernel
parent    | eba2d6b34a32bdc3585c5810633ec38f9472380c
parent    | 83f365554e47997ec68dc4eca3f5dce525cd15c3
Merge tag 'trace-v4.19-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Steven writes:
"Vaibhav Nagarnaik found that modifying the ring buffer size could cause
a huge latency in the system because it does a while loop to free pages
without releasing the CPU (on non-preempt kernels). In a case where there
are hundreds of thousands of pages to free, it could actually cause a system
stall. A properly placed cond_resched() solves this issue."
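For context, the pattern described above (and applied in the diff below) is the usual kernel idiom for long-running loops: call cond_resched() each iteration so that, on non-preempt kernels, the scheduler gets a chance to run other tasks instead of stalling the CPU until the loop finishes. The sketch below is only illustrative; free_all_entries() and struct my_entry are hypothetical names, not part of the ring-buffer code (the actual two-line change is shown in the diff further down).

```c
#include <linux/list.h>
#include <linux/sched.h>	/* cond_resched() */
#include <linux/slab.h>

/*
 * Hypothetical example of the pattern: tear down a potentially huge list.
 * Without the cond_resched(), a non-preempt kernel would keep this CPU
 * busy until every entry is freed, which is the kind of latency reported
 * when shrinking a very large ring buffer.
 */
struct my_entry {
	struct list_head list;
	/* ... payload ... */
};

static void free_all_entries(struct list_head *head)
{
	struct my_entry *entry, *tmp;

	list_for_each_entry_safe(entry, tmp, head, list) {
		list_del(&entry->list);
		kfree(entry);

		/*
		 * Voluntarily yield the CPU if another task is waiting;
		 * this is the same call the ring-buffer fix adds inside
		 * its page-removal loop.
		 */
		cond_resched();
	}
}
```

Because cond_resched() may sleep, this pattern is only valid in process context with no spinlocks held; that the fix is a simple two-line addition suggests the ring buffer's page-removal loop already runs in such a context.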
Diffstat (limited to 'kernel')
-rw-r--r-- | kernel/trace/ring_buffer.c | 2 |
1 file changed, 2 insertions, 0 deletions
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 1d92d4a982fd..65bd4616220d 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1546,6 +1546,8 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
 	tmp_iter_page = first_page;
 
 	do {
+		cond_resched();
+
 		to_remove_page = tmp_iter_page;
 		rb_inc_page(cpu_buffer, &tmp_iter_page);