author     Linus Torvalds  2014-08-04 11:50:00 -0700
committer  Linus Torvalds  2014-08-04 11:50:00 -0700
commit     b8c0aa46b3e86083721b57ed2eec6bd2c29ebfba (patch)
tree       45e349bf8a14aa99279d323fdc515e849fd349f3 /arch/sh
parent     c7ed326fa7cafb83ced5a8b02517a61672fe9e90 (diff)
parent     dc6f03f26f570104a2bb03f9d1deb588026d7c75 (diff)
Merge tag 'trace-3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull tracing updates from Steven Rostedt:
"This pull request has a lot of work done. The main thing is the
changes to the ftrace function callback infrastructure. It's
introducing a way to allow different functions to call directly
different trampolines instead of all calling the same "mcount" one.
The only user of this for now is the function graph tracer, which
always had a different trampoline, but the function tracer trampoline
was called and did basically nothing, and then the function graph
tracer trampoline was called. The difference now, is that the
function graph tracer trampoline can be called directly if a function
is only being traced by the function graph trampoline. If function
tracing is also happening on the same function, the old way is still
done.
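As a concrete illustration of that dispatch, here is a minimal C
sketch under invented names (ops_sketch, pick_trampoline, and the
rest are not the real ftrace internals): a function record with a
single attached ops can jump straight to that ops' own trampoline,
while anything else keeps the old shared loop.

    /* Illustrative only; not the actual ftrace data structures. */
    typedef void (*tramp_fn)(unsigned long ip);

    struct ops_sketch {
            tramp_fn trampoline;        /* ops-specific trampoline */
            struct ops_sketch *next;
    };

    static struct ops_sketch *attached_ops; /* ops tracing this function */

    /* Old path: one shared trampoline walks every attached ops. */
    static void shared_trampoline(unsigned long ip)
    {
            struct ops_sketch *op;

            for (op = attached_ops; op; op = op->next)
                    op->trampoline(ip);
    }

    /* New path: a sole user (e.g. the function graph tracer) is
     * called directly, skipping the shared loop entirely. */
    static tramp_fn pick_trampoline(void)
    {
            if (attached_ops && !attached_ops->next)
                    return attached_ops->trampoline;
            return shared_trampoline;
    }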
The accounting for this takes up more memory when function graph
tracing is activated, as it needs to keep track of which functions it
uses. I have a new way that won't take as much memory, but it's not
ready yet for this merge window, and will have to wait for the next
one.
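The extra memory is per-ops bookkeeping: to know when an ops is the
sole tracer of a function, the core must record every function
address that ops covers. A rough sketch under invented names (the
real structure is the trampoline hash touched by the "tramp hash"
commits listed below); a flat array stands in for the kernel's hash
table:

    #include <stdbool.h>

    /* One of these per ftrace_ops, so memory grows with the number
     * of functions being graph-traced. Names are made up. */
    struct tramp_hash_sketch {
            unsigned long *ips;   /* addresses of traced functions */
            unsigned long count;
    };

    static bool ops_traces_ip(struct tramp_hash_sketch *h,
                              unsigned long ip)
    {
            unsigned long i;

            for (i = 0; i < h->count; i++)
                    if (h->ips[i] == ip)
                            return true;
            return false;
    }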
Another big change was the removal of the ftrace_start/stop() calls
that the suspend/resume code used to stop function tracing when
entering the suspend and resume paths. Ftrace was stopped because
there was some function that would crash the system if it called
smp_processor_id()! The stop/start was a big hammer to solve the
issue at the time, which was when ftrace was first introduced into
Linux. Now ftrace has better infrastructure to debug such issues; I
found the problem function and labeled it with "notrace", and
function tracing can now safely be activated all the way down into
the guts of suspend and resume.
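The fix pattern is the kernel's notrace annotation, which on gcc
builds expands to the no_instrument_function attribute so no
mcount/fentry call is emitted for that function. A self-contained
sketch; the stand-in #define mirrors the kernel's, and the helper
function below is hypothetical:

    /* Stand-in for the kernel's definition (include/linux/compiler.h). */
    #define notrace __attribute__((no_instrument_function))

    /* Hypothetical example: a helper that runs so early in resume
     * that entering the tracer (and thus smp_processor_id()) would
     * crash, so it must never be instrumented. */
    static notrace unsigned long early_resume_helper(void)
    {
            return 0;
    }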
Other changes include cleanups of the uprobe code, cleanup of the
trace_seq() code, and various other small fixes and cleanups to
ftrace and tracing"
* tag 'trace-3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (57 commits)
ftrace: Add warning if tramp hash does not match nr_trampolines
ftrace: Fix trampoline hash update check on rec->flags
ring-buffer: Use rb_page_size() instead of open coded head_page size
ftrace: Rename ftrace_ops field from trampolines to nr_trampolines
tracing: Convert local function_graph functions to static
ftrace: Do not copy old hash when resetting
tracing: let user specify tracing_thresh after selecting function_graph
ring-buffer: Always run per-cpu ring buffer resize with schedule_work_on()
tracing: Remove function_trace_stop and HAVE_FUNCTION_TRACE_MCOUNT_TEST
s390/ftrace: remove check of obsolete variable function_trace_stop
arm64, ftrace: Remove check of obsolete variable function_trace_stop
Blackfin: ftrace: Remove check of obsolete variable function_trace_stop
metag: ftrace: Remove check of obsolete variable function_trace_stop
microblaze: ftrace: Remove check of obsolete variable function_trace_stop
MIPS: ftrace: Remove check of obsolete variable function_trace_stop
parisc: ftrace: Remove check of obsolete variable function_trace_stop
sh: ftrace: Remove check of obsolete variable function_trace_stop
sparc64,ftrace: Remove check of obsolete variable function_trace_stop
tile: ftrace: Remove check of obsolete variable function_trace_stop
ftrace: x86: Remove check of obsolete variable function_trace_stop
...
Diffstat (limited to 'arch/sh')
-rw-r--r--  arch/sh/Kconfig         |  1
-rw-r--r--  arch/sh/kernel/ftrace.c |  3
-rw-r--r--  arch/sh/lib/mcount.S    | 24
3 files changed, 5 insertions(+), 23 deletions(-)
diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
index 834b67c4db5a..aa2df3eaeb29 100644
--- a/arch/sh/Kconfig
+++ b/arch/sh/Kconfig
@@ -57,7 +57,6 @@ config SUPERH32
         select HAVE_FUNCTION_TRACER
         select HAVE_FTRACE_MCOUNT_RECORD
         select HAVE_DYNAMIC_FTRACE
-        select HAVE_FUNCTION_TRACE_MCOUNT_TEST
         select HAVE_FTRACE_NMI_ENTER if DYNAMIC_FTRACE
         select ARCH_WANT_IPC_PARSE_VERSION
         select HAVE_FUNCTION_GRAPH_TRACER
diff --git a/arch/sh/kernel/ftrace.c b/arch/sh/kernel/ftrace.c
index 3c74f53db6db..079d70e6d74b 100644
--- a/arch/sh/kernel/ftrace.c
+++ b/arch/sh/kernel/ftrace.c
@@ -344,6 +344,9 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr)
         struct ftrace_graph_ent trace;
         unsigned long return_hooker = (unsigned long)&return_to_handler;
 
+        if (unlikely(ftrace_graph_is_dead()))
+                return;
+
         if (unlikely(atomic_read(&current->tracing_graph_pause)))
                 return;
 
diff --git a/arch/sh/lib/mcount.S b/arch/sh/lib/mcount.S
index 52aa2011d753..7a8572f9d58b 100644
--- a/arch/sh/lib/mcount.S
+++ b/arch/sh/lib/mcount.S
@@ -92,13 +92,6 @@ mcount:
         rts
         nop
 #else
-#ifndef CONFIG_DYNAMIC_FTRACE
-        mov.l   .Lfunction_trace_stop, r0
-        mov.l   @r0, r0
-        tst     r0, r0
-        bf      ftrace_stub
-#endif
-
         MCOUNT_ENTER()
 
 #ifdef CONFIG_DYNAMIC_FTRACE
@@ -174,11 +167,6 @@ ftrace_graph_call:
 
         .globl ftrace_caller
 ftrace_caller:
-        mov.l   .Lfunction_trace_stop, r0
-        mov.l   @r0, r0
-        tst     r0, r0
-        bf      ftrace_stub
-
         MCOUNT_ENTER()
 
         .globl ftrace_call
@@ -196,8 +184,6 @@ ftrace_call:
 #endif /* CONFIG_DYNAMIC_FTRACE */
 
         .align 2
-.Lfunction_trace_stop:
-        .long   function_trace_stop
 
 /*
  * NOTE: From here on the locations of the .Lftrace_stub label and
@@ -217,12 +203,7 @@ ftrace_stub:
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
         .globl ftrace_graph_caller
 ftrace_graph_caller:
-        mov.l   2f, r0
-        mov.l   @r0, r0
-        tst     r0, r0
-        bt      1f
-
-        mov.l   3f, r1
+        mov.l   2f, r1
         jmp     @r1
         nop
 1:
@@ -242,8 +223,7 @@ ftrace_graph_caller:
 
         MCOUNT_LEAVE()
 
         .align 2
-2:      .long   function_trace_stop
-3:      .long   skip_trace
+2:      .long   skip_trace
 .Lprepare_ftrace_return:
         .long   prepare_ftrace_return
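The prepare_ftrace_return() hunk above adds a bail-out through
ftrace_graph_is_dead(). That call is the real kernel API; the core
keeps a one-way flag that it sets when the graph tracer hits a fatal
problem, roughly as in this sketch (the flag and setter names follow
the 3.17-era kernel/trace code, but treat them as an assumption):

    #include <stdbool.h>

    static bool kill_ftrace_graph; /* set once on a fatal error, never cleared */

    /* Fast-path check; each arch's prepare_ftrace_return() (as in
     * the arch/sh hunk above) returns early when this is true. */
    bool ftrace_graph_is_dead(void)
    {
            return kill_ftrace_graph;
    }

    /* Called by the tracing core when the graph tracer has gone
     * bad; tracing of function returns stops from then on. */
    void ftrace_graph_stop(void)
    {
            kill_ftrace_graph = true;
    }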