author	Namhyung Kim	2023-01-03 22:44:01 -0800
committer	Arnaldo Carvalho de Melo	2023-01-04 10:52:07 -0300
commit	2d656b0f81b22101db0447f890e39fdd736b745e (patch)
tree	2b1f2cc4bc78d71efd225f5bfd20c643ccc33187
parent	fb710ddee75fb96f50ee6d004ef777a0cf7ad5a3 (diff)
perf stat: Fix handling of unsupported cgroup events when using BPF counters
When the --for-each-cgroup option is used, it fails and exits
immediately when any of the events is not supported.  This is not how
'perf stat' handles unsupported events.

Let's ignore the failure and proceed with the others so that the
output is similar to when BPF counters are not used:
Before:
$ sudo ./perf stat -a --bpf-counters -e L1-icache-loads,L1-dcache-loads --for-each-cgroup system.slice,user.slice sleep 1
Failed to open first cgroup events
$
After (it shows output similar to when --bpf-counters isn't specified):
$ sudo ./perf stat -a --bpf-counters -e L1-icache-loads,L1-dcache-loads --for-each-cgroup system.slice,user.slice sleep 1
Performance counter stats for 'system wide':
<not supported> L1-icache-loads system.slice
29,892,418 L1-dcache-loads system.slice
<not supported> L1-icache-loads user.slice
52,497,220 L1-dcache-loads user.slice
$
Fixes: 944138f048f7d759 ("perf stat: Enable BPF counter with --for-each-cgroup")
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/r/20230104064402.1551516-4-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-rw-r--r--	tools/perf/util/bpf_counter_cgroup.c	14
1 file changed, 3 insertions(+), 11 deletions(-)
diff --git a/tools/perf/util/bpf_counter_cgroup.c b/tools/perf/util/bpf_counter_cgroup.c
index 3c2df7522f6f..1c82377ed78b 100644
--- a/tools/perf/util/bpf_counter_cgroup.c
+++ b/tools/perf/util/bpf_counter_cgroup.c
@@ -116,27 +116,19 @@ static int bperf_load_program(struct evlist *evlist)
 
 			/* open single copy of the events w/o cgroup */
 			err = evsel__open_per_cpu(evsel, evsel->core.cpus, -1);
-			if (err) {
-				pr_err("Failed to open first cgroup events\n");
-				goto out;
-			}
+			if (err == 0)
+				evsel->supported = true;
 
 			map_fd = bpf_map__fd(skel->maps.events);
 			perf_cpu_map__for_each_cpu(cpu, j, evsel->core.cpus) {
 				int fd = FD(evsel, j);
 				__u32 idx = evsel->core.idx * total_cpus + cpu.cpu;
 
-				err = bpf_map_update_elem(map_fd, &idx, &fd,
-							  BPF_ANY);
-				if (err < 0) {
-					pr_err("Failed to update perf_event fd\n");
-					goto out;
-				}
+				bpf_map_update_elem(map_fd, &idx, &fd, BPF_ANY);
 			}
 
 			evsel->cgrp = leader_cgrp;
 		}
-		evsel->supported = true;
 
 		if (evsel->cgrp == cgrp)
 			continue;
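
The pattern the fix adopts is the usual 'perf stat' behaviour: a failed open only
marks that event as unsupported, and the loop keeps going so the remaining counters
are still programmed and printed.  Below is a minimal, self-contained sketch of that
pattern for illustration only; it is not the perf code, open_event() merely stands in
for evsel__open_per_cpu(), and the event names and counts are taken from the example
output above:

#include <stdbool.h>
#include <stdio.h>

struct event {
	const char *name;
	unsigned long long count;
	bool supported;
};

/* Stand-in for evsel__open_per_cpu(): pretend that events with a zero
 * count are the ones the kernel/hardware rejects. */
static int open_event(const struct event *ev)
{
	return ev->count ? 0 : -1;
}

int main(void)
{
	struct event events[] = {
		{ "L1-icache-loads", 0,           false },	/* "unsupported" */
		{ "L1-dcache-loads", 29892418ULL, false },
	};
	const unsigned int nr = sizeof(events) / sizeof(events[0]);

	for (unsigned int i = 0; i < nr; i++) {
		/* The fix in a nutshell: record whether the open worked
		 * instead of treating a failure as fatal and bailing out. */
		events[i].supported = (open_event(&events[i]) == 0);
	}

	for (unsigned int i = 0; i < nr; i++) {
		if (events[i].supported)
			printf("%20llu  %s\n", events[i].count, events[i].name);
		else
			printf("%20s  %s\n", "<not supported>", events[i].name);
	}
	return 0;
}

Only the per-event 'supported' flag changes when an open fails; the surrounding loop
continues, which is what produces the '<not supported>' rows shown in the output above.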