path: root/tools
2023-05-11selftests/bpf: Fix leaked bpf_link in get_stackid_cannot_attachSong Liu
[ Upstream commit c1e07a80cf23d3a6e96172bc9a73bfa912a9fcbc ] skel->links.oncpu is leaked in one case. This causes the perf_branches test to fail when it runs after get_stackid_cannot_attach: ./test_progs -t get_stackid_cannot_attach,perf_branches 84 get_stackid_cannot_attach:OK test_perf_branches_common:PASS:test_perf_branches_load 0 nsec test_perf_branches_common:PASS:attach_perf_event 0 nsec test_perf_branches_common:PASS:set_affinity 0 nsec check_good_sample:FAIL:output not valid no valid sample from prog 146/1 perf_branches/perf_branches_hw:FAIL 146/2 perf_branches/perf_branches_no_hw:OK 146 perf_branches:FAIL All error logs: test_perf_branches_common:PASS:test_perf_branches_load 0 nsec test_perf_branches_common:PASS:attach_perf_event 0 nsec test_perf_branches_common:PASS:set_affinity 0 nsec check_good_sample:FAIL:output not valid no valid sample from prog 146/1 perf_branches/perf_branches_hw:FAIL 146 perf_branches:FAIL Summary: 1/1 PASSED, 0 SKIPPED, 1 FAILED Fix this by adding the missing bpf_link__destroy(). Fixes: 346938e9380c ("selftests/bpf: Add get_stackid_cannot_attach") Signed-off-by: Song Liu <song@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20230412210423.900851-3-song@kernel.org Signed-off-by: Sasha Levin <sashal@kernel.org>
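A minimal sketch of the fix, assuming the skeleton field names used by the test (skel->links.oncpu, pfd):

```c
/* sketch: the attached link must be destroyed before the test exits,
 * otherwise the perf_event stays active and breaks later tests;
 * bpf_link__destroy() is safe to call on error pointers */
skel->links.oncpu = bpf_program__attach_perf_event(skel->progs.oncpu, pfd);
/* ... assertions on the attach result ... */
bpf_link__destroy(skel->links.oncpu);	/* the missing cleanup */
close(pfd);
```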
2023-05-11selftests/bpf: Use read_perf_max_sample_freq() in perf_event_stackmapSong Liu
[ Upstream commit de6d014a09bf12a9a8959d60c0a1d4a41d394a89 ] Currently, the perf_event sample period in perf_event_stackmap is set so low that the test fails randomly. Fix this by using the max sample frequency from read_perf_max_sample_freq(). Move read_perf_max_sample_freq() to testing_helpers.c. Replace the CHECK() with if-printf, as CHECK is not available in testing_helpers.c. Fixes: 1da4864c2b20 ("selftests/bpf: Add callchain_stackid") Signed-off-by: Song Liu <song@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20230412210423.900851-2-song@kernel.org Signed-off-by: Sasha Levin <sashal@kernel.org>
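A sketch of the moved helper, assuming it reads the standard procfs knob and falls back to a conservative default on error; the if-printf pattern stands in for CHECK():

```c
int read_perf_max_sample_freq(void)
{
	int sample_freq = 5000;	/* assumed fallback when the knob is unreadable */
	FILE *f;

	f = fopen("/proc/sys/kernel/perf_event_max_sample_rate", "r");
	if (f == NULL) {
		printf("Failed to open /proc/sys/kernel/perf_event_max_sample_rate\n");
		return sample_freq;
	}
	if (fscanf(f, "%d", &sample_freq) != 1)
		printf("Failed to parse /proc/sys/kernel/perf_event_max_sample_rate\n");
	fclose(f);
	return sample_freq;
}
```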
2023-05-11bpftool: Fix bug for long instructions in program CFG dumpsQuentin Monnet
[ Upstream commit 67cf52cdb6c8fa6365d29106555dacf95c9fd374 ] When dumping the control flow graphs for programs using the 16-byte long load instruction, we need to skip the second part of this instruction when looking for the next instruction to process. Otherwise, we end up printing "BUG_ld_00" from the kernel disassembler in the CFG. Fixes: efcef17a6d65 ("tools: bpftool: generate .dot graph from CFG information") Signed-off-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/r/20230405132120.59886-3-quentin@isovalent.com Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
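The gist, sketched: a BPF_LD | BPF_IMM | BPF_DW instruction occupies two 8-byte slots, so the CFG walk must advance past both; landing on the second half is what produced the bogus "BUG_ld_00" opcode:

```c
/* sketch of the instruction walk in the CFG code (cursor names are
 * illustrative, not bpftool's actual variables) */
for (cur = insn_start; cur <= insn_end; cur += len) {
	if (cur->code == (BPF_LD | BPF_IMM | BPF_DW))
		len = 2;	/* skip the pseudo second half of ld_imm64 */
	else
		len = 1;
	/* ... emit *cur into the .dot graph ... */
}
```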
2023-05-11selftests/bpf: Wait for receive in cg_storage_multi testYiFei Zhu
[ Upstream commit 5af607a861d43ffff830fc1890033e579ec44799 ] In some cases the loopback latency might be large enough that the assertion on invocations runs before the ingress prog gets executed. The assertion would fail and the test would flake. This can be reliably reproduced by arbitrarily increasing the loopback latency (thanks to [1]): tc qdisc add dev lo root handle 1: htb default 12 tc class add dev lo parent 1:1 classid 1:12 htb rate 20kbps ceil 20kbps tc qdisc add dev lo parent 1:12 netem delay 100ms Fix this by waiting on the receive end instead of instantly running the assert. The call to read() will wait for the default SO_RCVTIMEO timeout of 3 seconds provided by start_server(). [1] https://gist.github.com/kstevens715/4598301 Reported-by: Martin KaFai Lau <martin.lau@linux.dev> Link: https://lore.kernel.org/bpf/9c5c8b7e-1d89-a3af-5400-14fde81f4429@linux.dev/ Fixes: 3573f384014f ("selftests/bpf: Test CGROUP_STORAGE behavior on shared egress + ingress") Acked-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: YiFei Zhu <zhuyifei@google.com> Link: https://lore.kernel.org/r/20230405193354.1956209-1-zhuyifei@google.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
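A sketch of the receive-side wait, assuming the fd belongs to the connection created via start_server() and thus inherits its 3-second SO_RCVTIMEO:

```c
char buf;

/* block until the ingress prog has actually seen the packet; the
 * 3s SO_RCVTIMEO set up by start_server() bounds the wait */
if (read(fd, &buf, sizeof(buf)) < 0)
	perror("read");
/* only now is it safe to assert on the invocation counters */
```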
2023-05-11selftests: xsk: Deflakify STATS_RX_DROPPED testKal Conley
[ Upstream commit 68e7322142f5e731af222892d384d311835db0f1 ] Fix flaky STATS_RX_DROPPED test. The receiver calls getsockopt after receiving the last (valid) packet which is not the final packet sent in the test (valid and invalid packets are sent in alternating fashion with the final packet being invalid). Since the last packet may or may not have been dropped already, both outcomes must be allowed. This issue could also be fixed by making sure the last packet sent is valid. This alternative is left as an exercise to the reader (or the benevolent maintainers of this file). This problem was quite visible on certain setups. On one machine this failure was observed 50% of the time. Also, remove a redundant assignment of pkt_stream->nb_pkts. This field is already initialized by __pkt_stream_alloc. Fixes: 27e934bec35b ("selftests: xsk: make stat tests not spin on getsockopt") Signed-off-by: Kal Conley <kal.conley@dectris.com> Acked-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230403120400.31018-1-kal.conley@dectris.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
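The relaxed assertion could look like this sketch, where expected_drops is an assumed name for the number of invalid packets sent:

```c
/* the final (invalid) packet races with getsockopt(), so both
 * "already dropped" and "not yet dropped" are correct outcomes */
if (stats.rx_dropped == expected_drops ||
    stats.rx_dropped == expected_drops - 1)
	return TEST_PASS;
return TEST_FAILURE;
```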
2023-05-11selftests: xsk: Disable IPv6 on VETH1Kal Conley
[ Upstream commit f2b50f17268390567bc0e95642170d88f336c8f4 ] This change fixes flakiness in the BIDIRECTIONAL test: # [is_pkt_valid] expected length [60], got length [90] not ok 1 FAIL: SKB BUSY-POLL BIDIRECTIONAL When IPv6 is enabled, the interface will periodically send MLDv1 and MLDv2 packets. These packets can cause the BIDIRECTIONAL test to fail since it uses VETH0 for RX. For other tests, this was not a problem since they only receive on VETH1 and IPv6 was already disabled on VETH0. Fixes: a89052572ebb ("selftests/bpf: Xsk selftests framework") Signed-off-by: Kal Conley <kal.conley@dectris.com> Link: https://lore.kernel.org/r/20230405082905.6303-1-kal.conley@dectris.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-05-11selftests: xsk: Use correct UMEM size in testapp_invalid_descKal Conley
[ Upstream commit 7a2050df244e2c9a4e90882052b7907450ad10ed ] Avoid UMEM_SIZE macro in testapp_invalid_desc which is incorrect when the frame size is not XSK_UMEM__DEFAULT_FRAME_SIZE. Also remove the macro since it's no longer being used. Fixes: 909f0e28207c ("selftests: xsk: Add tests for 2K frame size") Signed-off-by: Kal Conley <kal.conley@dectris.com> Acked-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230403145047.33065-2-kal.conley@dectris.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
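The idea in one line, with assumed struct names: derive the size from the configured frame size rather than a macro pinned to the default:

```c
/* was: UMEM_SIZE, which hardcoded XSK_UMEM__DEFAULT_FRAME_SIZE */
u64 umem_size = umem->num_frames * umem->frame_size;
```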
2023-05-11bpf: Fix __reg_bound_offset 64->32 var_off subreg propagationDaniel Borkmann
[ Upstream commit 7be14c1c9030f73cc18b4ff23b78a0a081f16188 ] Xu reports that after commit 3f50f132d840 ("bpf: Verifier, do explicit ALU32 bounds tracking"), the following BPF program is rejected by the verifier: 0: (61) r2 = *(u32 *)(r1 +0) ; R2_w=pkt(off=0,r=0,imm=0) 1: (61) r3 = *(u32 *)(r1 +4) ; R3_w=pkt_end(off=0,imm=0) 2: (bf) r1 = r2 3: (07) r1 += 1 4: (2d) if r1 > r3 goto pc+8 5: (71) r1 = *(u8 *)(r2 +0) ; R1_w=scalar(umax=255,var_off=(0x0; 0xff)) 6: (18) r0 = 0x7fffffffffffff10 8: (0f) r1 += r0 ; R1_w=scalar(umin=0x7fffffffffffff10,umax=0x800000000000000f) 9: (18) r0 = 0x8000000000000000 11: (07) r0 += 1 12: (ad) if r0 < r1 goto pc-2 13: (b7) r0 = 0 14: (95) exit And the verifier log says: func#0 @0 0: R1=ctx(off=0,imm=0) R10=fp0 0: (61) r2 = *(u32 *)(r1 +0) ; R1=ctx(off=0,imm=0) R2_w=pkt(off=0,r=0,imm=0) 1: (61) r3 = *(u32 *)(r1 +4) ; R1=ctx(off=0,imm=0) R3_w=pkt_end(off=0,imm=0) 2: (bf) r1 = r2 ; R1_w=pkt(off=0,r=0,imm=0) R2_w=pkt(off=0,r=0,imm=0) 3: (07) r1 += 1 ; R1_w=pkt(off=1,r=0,imm=0) 4: (2d) if r1 > r3 goto pc+8 ; R1_w=pkt(off=1,r=1,imm=0) R3_w=pkt_end(off=0,imm=0) 5: (71) r1 = *(u8 *)(r2 +0) ; R1_w=scalar(umax=255,var_off=(0x0; 0xff)) R2_w=pkt(off=0,r=1,imm=0) 6: (18) r0 = 0x7fffffffffffff10 ; R0_w=9223372036854775568 8: (0f) r1 += r0 ; R0_w=9223372036854775568 R1_w=scalar(umin=9223372036854775568,umax=9223372036854775823,s32_min=-240,s32_max=15) 9: (18) r0 = 0x8000000000000000 ; R0_w=-9223372036854775808 11: (07) r0 += 1 ; R0_w=-9223372036854775807 12: (ad) if r0 < r1 goto pc-2 ; R0_w=-9223372036854775807 R1_w=scalar(umin=9223372036854775568,umax=9223372036854775809) 13: (b7) r0 = 0 ; R0_w=0 14: (95) exit from 12 to 11: R0_w=-9223372036854775807 R1_w=scalar(umin=9223372036854775810,umax=9223372036854775823,var_off=(0x8000000000000000; 0xffffffff)) R2_w=pkt(off=0,r=1,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0 11: (07) r0 += 1 ; R0_w=-9223372036854775806 12: (ad) if r0 < r1 goto pc-2 ; R0_w=-9223372036854775806 R1_w=scalar(umin=9223372036854775810,umax=9223372036854775810,var_off=(0x8000000000000000; 0xffffffff)) 13: safe [...] from 12 to 11: R0_w=-9223372036854775795 R1=scalar(umin=9223372036854775822,umax=9223372036854775823,var_off=(0x8000000000000000; 0xffffffff)) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0 11: (07) r0 += 1 ; R0_w=-9223372036854775794 12: (ad) if r0 < r1 goto pc-2 ; R0_w=-9223372036854775794 R1=scalar(umin=9223372036854775822,umax=9223372036854775822,var_off=(0x8000000000000000; 0xffffffff)) 13: safe from 12 to 11: R0_w=-9223372036854775794 R1=scalar(umin=9223372036854775823,umax=9223372036854775823,var_off=(0x8000000000000000; 0xffffffff)) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0 11: (07) r0 += 1 ; R0_w=-9223372036854775793 12: (ad) if r0 < r1 goto pc-2 ; R0_w=-9223372036854775793 R1=scalar(umin=9223372036854775823,umax=9223372036854775823,var_off=(0x8000000000000000; 0xffffffff)) 13: safe from 12 to 11: R0_w=-9223372036854775793 R1=scalar(umin=9223372036854775824,umax=9223372036854775823,var_off=(0x8000000000000000; 0xffffffff)) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0 11: (07) r0 += 1 ; R0_w=-9223372036854775792 12: (ad) if r0 < r1 goto pc-2 ; R0_w=-9223372036854775792 R1=scalar(umin=9223372036854775824,umax=9223372036854775823,var_off=(0x8000000000000000; 0xffffffff)) 13: safe [...] The 64bit umin=9223372036854775810 bound continuously bumps by +1 while umax=9223372036854775823 stays as-is until the verifier complexity limit is reached and the program gets finally rejected. 
During this simulation, the umin also eventually surpasses umax. Looking at the first 'from 12 to 11' output line from the loop, R1 has the following state: R1_w=scalar(umin=0x8000000000000002 (9223372036854775810), umax=0x800000000000000f (9223372036854775823), var_off=(0x8000000000000000; 0xffffffff)) The var_off is technically not in an inconsistent state, but it is very imprecise, far surpassing the 64bit umax bounds, whereas the expected output with refined known bits in var_off should have been: R1_w=scalar(umin=0x8000000000000002 (9223372036854775810), umax=0x800000000000000f (9223372036854775823), var_off=(0x8000000000000000; 0xf)) In the above log, var_off stays as var_off=(0x8000000000000000; 0xffffffff) and does not converge into a narrower mask where more bits become known, eventually transforming R1 into a constant in the umin=9223372036854775823, umax=9223372036854775823 case where the verifier would have terminated and let the program pass. The __reg_combine_64_into_32() marks the subregister unknown and propagates 64bit {s,u}min/{s,u}max bounds to their 32bit equivalents iff they are within the 32bit universe. The question came up whether __reg_combine_64_into_32() should special-case the situation where 64bit {s,u}min bounds have the same value as 64bit {s,u}max bounds to then assign the latter as well to the 32bit reg->{s,u}32_{min,max}_value. As can be seen from the above example however, that is just /one/ special case and not a /generic/ solution given the above example would still not be addressed this way and would remain at an imprecise var_off=(0x8000000000000000; 0xffffffff). The improvement is needed in __reg_bound_offset() to refine var32_off with the updated var64_off instead of the prior reg->var_off. The reg_bounds_sync() code first refines information about the register's min/max bounds via __update_reg_bounds() from the current var_off, then in __reg_deduce_bounds() from sign bit and with the potentially learned bits from bounds it'll update the var_off tnum in __reg_bound_offset(). For example, intersecting with the old var_off might have improved bounds slightly, e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc), then the new var_off will result in (0; 0x7f...fc). The intersected var64_off then holds the universe which is a superset of var32_off. The point for the latter is not to broaden, but to further refine known bits based on the intersection of var_off with 32 bit bounds, so that we later construct the final var_off from upper and lower 32 bits. The final __update_reg_bounds() can then potentially still slightly refine bounds if more bits became known from the new var_off. 
After the improvement, we can see R1 converging successively: func#0 @0 0: R1=ctx(off=0,imm=0) R10=fp0 0: (61) r2 = *(u32 *)(r1 +0) ; R1=ctx(off=0,imm=0) R2_w=pkt(off=0,r=0,imm=0) 1: (61) r3 = *(u32 *)(r1 +4) ; R1=ctx(off=0,imm=0) R3_w=pkt_end(off=0,imm=0) 2: (bf) r1 = r2 ; R1_w=pkt(off=0,r=0,imm=0) R2_w=pkt(off=0,r=0,imm=0) 3: (07) r1 += 1 ; R1_w=pkt(off=1,r=0,imm=0) 4: (2d) if r1 > r3 goto pc+8 ; R1_w=pkt(off=1,r=1,imm=0) R3_w=pkt_end(off=0,imm=0) 5: (71) r1 = *(u8 *)(r2 +0) ; R1_w=scalar(umax=255,var_off=(0x0; 0xff)) R2_w=pkt(off=0,r=1,imm=0) 6: (18) r0 = 0x7fffffffffffff10 ; R0_w=9223372036854775568 8: (0f) r1 += r0 ; R0_w=9223372036854775568 R1_w=scalar(umin=9223372036854775568,umax=9223372036854775823,s32_min=-240,s32_max=15) 9: (18) r0 = 0x8000000000000000 ; R0_w=-9223372036854775808 11: (07) r0 += 1 ; R0_w=-9223372036854775807 12: (ad) if r0 < r1 goto pc-2 ; R0_w=-9223372036854775807 R1_w=scalar(umin=9223372036854775568,umax=9223372036854775809) 13: (b7) r0 = 0 ; R0_w=0 14: (95) exit from 12 to 11: R0_w=-9223372036854775807 R1_w=scalar(umin=9223372036854775810,umax=9223372036854775823,var_off=(0x8000000000000000; 0xf),s32_min=0,s32_max=15,u32_max=15) R2_w=pkt(off=0,r=1,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0 11: (07) r0 += 1 ; R0_w=-9223372036854775806 12: (ad) if r0 < r1 goto pc-2 ; R0_w=-9223372036854775806 R1_w=-9223372036854775806 13: safe from 12 to 11: R0_w=-9223372036854775806 R1_w=scalar(umin=9223372036854775811,umax=9223372036854775823,var_off=(0x8000000000000000; 0xf),s32_min=0,s32_max=15,u32_max=15) R2_w=pkt(off=0,r=1,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0 11: (07) r0 += 1 ; R0_w=-9223372036854775805 12: (ad) if r0 < r1 goto pc-2 ; R0_w=-9223372036854775805 R1_w=-9223372036854775805 13: safe [...] from 12 to 11: R0_w=-9223372036854775798 R1=scalar(umin=9223372036854775819,umax=9223372036854775823,var_off=(0x8000000000000008; 0x7),s32_min=8,s32_max=15,u32_min=8,u32_max=15) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0 11: (07) r0 += 1 ; R0_w=-9223372036854775797 12: (ad) if r0 < r1 goto pc-2 ; R0_w=-9223372036854775797 R1=-9223372036854775797 13: safe from 12 to 11: R0_w=-9223372036854775797 R1=scalar(umin=9223372036854775820,umax=9223372036854775823,var_off=(0x800000000000000c; 0x3),s32_min=12,s32_max=15,u32_min=12,u32_max=15) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0 11: (07) r0 += 1 ; R0_w=-9223372036854775796 12: (ad) if r0 < r1 goto pc-2 ; R0_w=-9223372036854775796 R1=-9223372036854775796 13: safe from 12 to 11: R0_w=-9223372036854775796 R1=scalar(umin=9223372036854775821,umax=9223372036854775823,var_off=(0x800000000000000c; 0x3),s32_min=12,s32_max=15,u32_min=12,u32_max=15) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0 11: (07) r0 += 1 ; R0_w=-9223372036854775795 12: (ad) if r0 < r1 goto pc-2 ; R0_w=-9223372036854775795 R1=-9223372036854775795 13: safe from 12 to 11: R0_w=-9223372036854775795 R1=scalar(umin=9223372036854775822,umax=9223372036854775823,var_off=(0x800000000000000e; 0x1),s32_min=14,s32_max=15,u32_min=14,u32_max=15) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0 11: (07) r0 += 1 ; R0_w=-9223372036854775794 12: (ad) if r0 < r1 goto pc-2 ; R0_w=-9223372036854775794 R1=-9223372036854775794 13: safe from 12 to 11: R0_w=-9223372036854775794 R1=-9223372036854775793 R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0 11: (07) r0 += 1 ; R0_w=-9223372036854775793 12: (ad) if r0 < r1 goto pc-2 last_idx 12 first_idx 12 parent didn't have regs=1 stack=0 marks: R0_rw=P-9223372036854775801 
R1_r=scalar(umin=9223372036854775815,umax=9223372036854775823,var_off=(0x8000000000000000; 0xf),s32_min=0,s32_max=15,u32_max=15) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0 last_idx 11 first_idx 11 regs=1 stack=0 before 11: (07) r0 += 1 parent didn't have regs=1 stack=0 marks: R0_rw=P-9223372036854775805 R1_rw=scalar(umin=9223372036854775812,umax=9223372036854775823,var_off=(0x8000000000000000; 0xf),s32_min=0,s32_max=15,u32_max=15) R2_w=pkt(off=0,r=1,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0 last_idx 12 first_idx 0 regs=1 stack=0 before 12: (ad) if r0 < r1 goto pc-2 regs=1 stack=0 before 11: (07) r0 += 1 regs=1 stack=0 before 12: (ad) if r0 < r1 goto pc-2 regs=1 stack=0 before 11: (07) r0 += 1 regs=1 stack=0 before 12: (ad) if r0 < r1 goto pc-2 regs=1 stack=0 before 11: (07) r0 += 1 regs=1 stack=0 before 9: (18) r0 = 0x8000000000000000 last_idx 12 first_idx 12 parent didn't have regs=2 stack=0 marks: R0_rw=P-9223372036854775801 R1_r=Pscalar(umin=9223372036854775815,umax=9223372036854775823,var_off=(0x8000000000000000; 0xf),s32_min=0,s32_max=15,u32_max=15) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0 last_idx 11 first_idx 11 regs=2 stack=0 before 11: (07) r0 += 1 parent didn't have regs=2 stack=0 marks: R0_rw=P-9223372036854775805 R1_rw=Pscalar(umin=9223372036854775812,umax=9223372036854775823,var_off=(0x8000000000000000; 0xf),s32_min=0,s32_max=15,u32_max=15) R2_w=pkt(off=0,r=1,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0 last_idx 12 first_idx 0 regs=2 stack=0 before 12: (ad) if r0 < r1 goto pc-2 regs=2 stack=0 before 11: (07) r0 += 1 regs=2 stack=0 before 12: (ad) if r0 < r1 goto pc-2 regs=2 stack=0 before 11: (07) r0 += 1 regs=2 stack=0 before 12: (ad) if r0 < r1 goto pc-2 regs=2 stack=0 before 11: (07) r0 += 1 regs=2 stack=0 before 9: (18) r0 = 0x8000000000000000 regs=2 stack=0 before 8: (0f) r1 += r0 regs=3 stack=0 before 6: (18) r0 = 0x7fffffffffffff10 regs=2 stack=0 before 5: (71) r1 = *(u8 *)(r2 +0) 13: safe from 4 to 13: safe verification time 322 usec stack depth 0 processed 56 insns (limit 1000000) max_states_per_insn 1 total_states 3 peak_states 3 mark_read 1 This also fixes up a test case along with this improvement where we match on the verifier log. The updated log now has a refined var_off, too. Fixes: 3f50f132d840 ("bpf: Verifier, do explicit ALU32 bounds tracking") Reported-by: Xu Kuohai <xukuohai@huaweicloud.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Reviewed-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20230314203424.4015351-2-xukuohai@huaweicloud.com Link: https://lore.kernel.org/bpf/20230322213056.2470-1-daniel@iogearbox.net Signed-off-by: Sasha Levin <sashal@kernel.org>
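The fixed helper as the commit describes it: intersect the 32-bit range with the freshly intersected var64_off rather than the stale reg->var_off. A sketch of kernel/bpf/verifier.c:

```c
static void __reg_bound_offset(struct bpf_reg_state *reg)
{
	struct tnum var64_off = tnum_intersect(reg->var_off,
					       tnum_range(reg->umin_value,
							  reg->umax_value));
	/* key change: refine the subreg from var64_off, not reg->var_off */
	struct tnum var32_off = tnum_intersect(tnum_subreg(var64_off),
					       tnum_range(reg->u32_min_value,
							  reg->u32_max_value));

	reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off);
}
```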
2023-05-11libbpf: Fix ld_imm64 copy logic for ksym in light skeleton.Alexei Starovoitov
[ Upstream commit a506d6ce1dd184051037dc9d26c3eb187c9fe625 ] Unlike normal libbpf, the light skeleton 'loader' program does a btf_find_by_name_kind() call at run-time to find a ksym in the kernel and populate its {btf_id, btf_obj_fd} pair in the ld_imm64 insn. To avoid doing the search multiple times for the same ksym, it remembers the first patched ld_imm64 insn and copies {btf_id, btf_obj_fd} from it into subsequent ld_imm64 insns. Fix a bug in the copying logic, since it may incorrectly clear the BPF_PSEUDO_BTF_ID flag. Also replace the always-true if (btf_obj_fd >= 0) check with an unconditional JMP_JA to clarify the code. Fixes: d995816b77eb ("libbpf: Avoid reload of imm for weak, unresolved, repeating ksym") Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20230319203014.55866-1-alexei.starovoitov@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-05-11selftests/bpf: Fix a fd leak in an error path in network_helpers.cMartin KaFai Lau
[ Upstream commit 226efec2b0efad60d4a6c4b2c3a8710dafc4dc21 ] In __start_server, it leaks a fd when setsockopt(SO_REUSEPORT) fails. This patch fixes it. Fixes: eed92afdd14c ("bpf: selftest: Test batching and bpf_(get|set)sockopt in bpf tcp iter") Reported-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20230316000726.1016773-2-martin.lau@linux.dev Signed-off-by: Sasha Levin <sashal@kernel.org>
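A sketch of the fixed error path: once the socket exists, every failure must funnel through close():

```c
fd = socket(family, type, 0);
if (fd < 0)
	return -1;

if (reuseport &&
    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on))) {
	log_err("Failed to set SO_REUSEPORT");
	goto error_close;	/* was: return -1, leaking fd */
}
/* ... bind(), listen() ... */
return fd;

error_close:
close(fd);
return -1;
```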
2023-05-11tools: bpftool: Remove invalid \' json escapeLuis Gerhorst
[ Upstream commit c679bbd611c08b0559ffae079330bc4e5574696a ] RFC8259 ("The JavaScript Object Notation (JSON) Data Interchange Format") only specifies \", \\, \/, \b, \f, \n, \r, and \t as valid two-character escape sequences. This does not include \', which is not required in JSON because it exclusively uses double quotes as string separators. Solidus (/) may be escaped, but does not have to be. Only reverse solidus (\), double quotes ("), and the control characters have to be escaped. Therefore, with this fix, bpftool correctly supports all valid two-character escape sequences (but still does not support characters that require multi-character escape sequences). Without this fix, attempting to load a JSON file generated by bpftool using Python 3.10.6's default json.load() may fail with the error "Invalid \escape" if the file contains the invalid escaped single quote (\'). Fixes: b66e907cfee2 ("tools: bpftool: copy JSON writer from iproute2 repository") Signed-off-by: Luis Gerhorst <gerhorst@cs.fau.de> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20230227150853.16863-1-gerhorst@cs.fau.de Signed-off-by: Sasha Levin <sashal@kernel.org>
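A sketch of the corrected writer loop in json_writer.c style; the only behavioral change is dropping the \' case:

```c
static void jsonw_puts(json_writer_t *self, const char *str)
{
	fputc('"', self->out);
	for (; *str; ++str)
		switch (*str) {
		case '\t': fputs("\\t", self->out); break;
		case '\n': fputs("\\n", self->out); break;
		case '\r': fputs("\\r", self->out); break;
		case '\f': fputs("\\f", self->out); break;
		case '\b': fputs("\\b", self->out); break;
		case '\\': fputs("\\\\", self->out); break;
		case '"':  fputs("\\\"", self->out); break;
		/* case '\'' removed: \' is not a valid JSON escape */
		default:   putc(*str, self->out);
		}
	fputc('"', self->out);
}
```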
2023-05-11tools/x86/kcpuid: Fix avx512bw and avx512vl fields in Fn00000007Terry Bowman
[ Upstream commit 4e347bdf44c1fd4296a7b9657a2c0e1bd900fa50 ] Leaf Fn00000007 contains avx512bw at bit 26 and avx512vl at bit 28. This is incorrect per the SDM. Correct avx512bw to be bit 30 and avx512vl to be bit 31. Fixes: c6b2f240bf8d ("tools/x86: Add a kcpuid tool to show raw CPU features") Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Feng Tang <feng.tang@intel.com> Link: https://lore.kernel.org/r/20230206141832.4162264-2-terry.bowman@amd.com Signed-off-by: Sasha Levin <sashal@kernel.org>
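For illustration only (not kcpuid's actual table format), the corrected positions per the SDM:

```c
/* Fn00000007 sub-leaf 0, EBX */
#define AVX512BW_BIT	30	/* was wrongly listed at bit 26 */
#define AVX512VL_BIT	31	/* was wrongly listed at bit 28 */
```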
2023-05-11selftests/resctrl: Check for return value after write_schemata()Ilpo Järvinen
[ Upstream commit 0d45c83b95da414e98ad333e723141a94f6e2c64 ] The MBA test case writes schemata but does not check whether the write succeeded. Add the error check and return the error properly. Fixes: 01fee6b4d1f9 ("selftests/resctrl: Add MBA test") Co-developed-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-05-11selftests/resctrl: Allow ->setup() to return errorsIlpo Järvinen
[ Upstream commit fa10366cc6f4cc862871f8938426d85c2481f084 ] resctrl_val() assumes ->setup() always returns either 0 to continue tests or < 0 in case of the normal termination of tests after x runs. The latter overlaps with normal error returns. Define END_OF_TESTS (=1) to differentiate the normal termination of tests and return errors as negative values. Alter callers of ->setup() to handle errors properly. Fixes: 790bf585b0ee ("selftests/resctrl: Add Cache Allocation Technology (CAT) selftest") Fixes: ecdbb911f22d ("selftests/resctrl: Add MBM test") Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
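A sketch of the new convention (the callback signature is abbreviated; the real harness passes more context):

```c
#define END_OF_TESTS	1	/* normal termination after the configured runs */

	ret = param->setup(param);
	if (ret == END_OF_TESTS) {
		ret = 0;	/* not an error: the test simply finished */
		break;
	}
	if (ret < 0)
		break;		/* a real error from ->setup(), propagate it */
```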
2023-05-11selftests/resctrl: Move ->setup() call outside of test specific branchesIlpo Järvinen
[ Upstream commit c90b3b588e369c20087699316259fa5ebbb16f2d ] resctrl_val() function is called only by MBM, MBA, and CMT tests which means the else branch is never used. Both test branches call param->setup(). Remove the unused else branch and place the ->setup() call outside of the test specific branches reducing code duplication. Co-developed-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org> Stable-dep-of: fa10366cc6f4 ("selftests/resctrl: Allow ->setup() to return errors") Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-05-11selftests/resctrl: Return NULL if malloc_and_init_memory() did not alloc memIlpo Järvinen
[ Upstream commit 22a8be280383812235131dda18a8212a59fadd2d ] malloc_and_init_memory() in fill_buf doesn't check whether memalign() successfully allocated memory before accessing it. Check the return value of memalign() and return NULL if allocating aligned memory fails. Fixes: a2561b12fe39 ("selftests/resctrl: Add built in benchmark") Co-developed-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
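The shape of the fix, with an assumed initialization pattern:

```c
static unsigned char *malloc_and_init_memory(size_t s)
{
	unsigned char *p = memalign(PAGE_SIZE, s);	/* memalign() from <malloc.h> */

	if (!p)
		return NULL;	/* never touch memory that was not allocated */

	memset(p, 1, s);	/* assumed init pattern, for illustration */
	return p;
}
```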
2023-05-11selftests mount: Fix mount_setattr_test build failuresAnh Tuan Phan
[ Upstream commit f1594bc676579133a3cd906d7d27733289edfb86 ] When compiling selftests with target mount_setattr I encountered some errors with the below messages: mount_setattr_test.c: In function ‘mount_setattr_thread’: mount_setattr_test.c:343:16: error: variable ‘attr’ has initializer but incomplete type 343 | struct mount_attr attr = { | ^~~~~~~~~~ These errors occur because linux/mount.h is not included. This patch resolves that issue. Signed-off-by: Anh Tuan Phan <tuananhlfc@gmail.com> Acked-by: Christian Brauner <brauner@kernel.org> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
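The fix amounts to including the UAPI header that defines the type:

```c
#include <linux/mount.h>	/* struct mount_attr and MOUNT_ATTR_* flags */
```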
2023-04-26tools/mm/page_owner_sort.c: fix TGID output when cull=tg is usedSteve Chou
commit 9235756885e865070c4be2facda75262dbd85967 upstream. When using the cull option with the 'tg' flag, the fprintf prints the pid instead of the tgid. It should use the tgid. Link: https://lkml.kernel.org/r/20230411034929.2071501-1-steve_chou@pesi.com.tw Fixes: 9c8a0a8e599f4a ("tools/vm/page_owner_sort.c: support for user-defined culling rules") Signed-off-by: Steve Chou <steve_chou@pesi.com.tw> Cc: Jiajian Ye <yejiajian2018@email.szu.edu.cn> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
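A sketch of the corrected output line (the field and stream names are assumptions):

```c
/* cull=tg: print the thread-group id, not the pid */
fprintf(fout, "tgid %d ", list[i].tgid);	/* was: list[i].pid */
```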
2023-04-26selftests: sigaltstack: fix -WuninitializedNick Desaulniers
[ Upstream commit 05107edc910135d27fe557267dc45be9630bf3dd ] Building sigaltstack with clang via: $ ARCH=x86 make LLVM=1 -C tools/testing/selftests/sigaltstack/ produces the following warning: warning: variable 'sp' is uninitialized when used here [-Wuninitialized] if (sp < (unsigned long)sstack || ^~ Clang expects these to be declared at global scope; we've fixed this in the kernel proper by using the macro `current_stack_pointer`. This is defined in different headers for different target architectures, so just create a new header that defines the arch-specific register names for the stack pointer register, and define it for more targets (at least the ones that support current_stack_pointer/ARCH_HAS_CURRENT_STACK_POINTER). Reported-by: Linux Kernel Functional Testing <lkft@linaro.org> Link: https://lore.kernel.org/lkml/CA+G9fYsi3OOu7yCsMutpzKDnBMAzJBCPimBp86LhGBa0eCnEpA@mail.gmail.com/ Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Tested-by: Linux Kernel Functional Testing <lkft@linaro.org> Tested-by: Anders Roxell <anders.roxell@linaro.org> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
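A trimmed sketch of such a header: a file-scope register variable aliased to each architecture's stack pointer, which clang accepts where the uninitialized local did not (subset of architectures shown):

```c
/* current_stack_pointer.h, sketch */
#if __x86_64__
register unsigned long sp asm("rsp");
#elif __i386__
register unsigned long sp asm("esp");
#elif __aarch64__ || __arm__ || __riscv
register unsigned long sp asm("sp");
#else
#error "implement current_stack_pointer for this architecture"
#endif
```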
2023-04-20maple_tree: fix write memory barrier of nodes once dead for RCU modeLiam R. Howlett
[ Upstream commit c13af03de46ba27674dd9fb31a17c0d480081139 ] During the development of the maple tree, the strategy of freeing multiple nodes changed and, in the process, the pivots were reused to store pointers to dead nodes. To ensure the readers see accurate pivots, the writers need to mark the nodes as dead and call smp_wmb() to ensure any readers can identify the node as dead before using the pivot values. There were two places where the old method of marking the node as dead without smp_wmb() was being used, which resulted in RCU readers seeing the wrong pivot value before seeing the node was dead. Fix this race condition by using mte_set_node_dead(), which has the smp_wmb() call to ensure the race is closed. Add a WARN_ON() to the ma_free_rcu() call to ensure all nodes being freed are marked as dead, to ensure there are no other call paths besides the two updated paths. This is necessary for the RCU mode of the maple tree. Link: https://lkml.kernel.org/r/20230227173632.3292573-6-surenb@google.com Fixes: 54a611b60590 ("Maple Tree: add new data structure") Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com> Signed-off-by: Suren Baghdasaryan <surenb@google.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
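A sketch of the helper, under the assumption that a dead node is marked by pointing its parent at itself:

```c
static inline void mte_set_node_dead(struct maple_enode *mn)
{
	mte_to_node(mn)->parent = ma_parent_ptr(mte_to_node(mn));
	smp_wmb();	/* readers must see "dead" before any reused pivots */
}
```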
2023-04-20x86/hyperv: KVM: Rename "hv_enlightenments" to "hv_vmcb_enlightenments"Sean Christopherson
[ Upstream commit 26b516bb39215cf60aa1fb55d0a6fd73058698fa ] Now that KVM isn't littered with "struct hv_enlightenments" casts, rename the struct to "hv_vmcb_enlightenments" to highlight the fact that the struct is specifically for SVM's VMCB. No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Reviewed-by: Michael Kelley <mikelley@microsoft.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20221101145426.251680-5-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Stable-dep-of: e5c972c1fada ("KVM: SVM: Flush Hyper-V TLB when required") Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-04-20KVM: SVM: Add a proper field for Hyper-V VMCB enlightenmentsSean Christopherson
[ Upstream commit 68ae7c7bc56a4504ed5efde7c2f8d6024148a35e ] Add a union to provide hv_enlightenments side-by-side with the sw_reserved bytes that Hyper-V's enlightenments overlay. Casting sw_reserved everywhere is messy, confusing, and unnecessarily unsafe. No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20221101145426.251680-4-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Stable-dep-of: e5c972c1fada ("KVM: SVM: Flush Hyper-V TLB when required") Signed-off-by: Sasha Levin <sashal@kernel.org>
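The resulting layout, sketched: the enlightenments sit in an anonymous union over the reserved software bytes, so no casts are needed:

```c
struct vmcb_control_area {
	/* ... hardware-defined fields elided ... */
	union {
		struct hv_vmcb_enlightenments hv_enlightenments;
		u8 reserved_sw[32];
	};
};
```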
2023-04-20KVM: selftests: Move "struct hv_enlightenments" to x86_64/svm.hSean Christopherson
[ Upstream commit 381fc63ac0754e05d3921e9d399b89dfdfd2b2e5 ] Move Hyper-V's VMCB "struct hv_enlightenments" to the svm.h header so that the struct can be referenced in "struct vmcb_control_area". Alternatively, a dedicated header for SVM+Hyper-V could be added, a la x86_64/evmcs.h, but it doesn't appear that Hyper-V will end up needing a wholesale replacement for the VMCB. No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20221101145426.251680-3-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Stable-dep-of: e5c972c1fada ("KVM: SVM: Flush Hyper-V TLB when required") Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-04-20x86/hyperv: Move VMCB enlightenment definitions to hyperv-tlfs.hSean Christopherson
[ Upstream commit 089fe572a2e0a89e36a455d299d801770293d08f ] Move Hyper-V's VMCB enlightenment definitions to the TLFS header; the definitions come directly from the TLFS[*], not from KVM. No functional change intended. [*] https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/tlfs/datatypes/hv_svm_enlightened_vmcb_fields [vitaly: rename VMCB_HV_ -> HV_VMCB_ to match the rest of hyperv-tlfs.h, keep svm/hyperv.h] Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20221101145426.251680-2-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Stable-dep-of: e5c972c1fada ("KVM: SVM: Flush Hyper-V TLB when required") Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-04-20selftests/bpf: Fix progs/find_vma_fail1.c build error.Alexei Starovoitov
[ Upstream commit 32513d40d908b267508d37994753d9bd1600914b ] The commit 11e456cae91e ("selftests/bpf: Fix compilation errors: Assign a value to a constant") fixed the issue cleanly in bpf-next. This is an alternative fix in bpf tree to avoid merge conflict between bpf and bpf-next. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-04-20libbpf: Fix single-line struct definition output in btf_dumpAndrii Nakryiko
[ Upstream commit 872aec4b5f635d94111d48ec3c57fbe078d64e7d ] btf_dump APIs emit unnecessary tabs when emitting a struct/union definition that fits on a single line. Before this patch we'd get: struct blah {<tab>}; This patch fixes this and makes sure that we get the more natural: struct blah {}; Fixes: 44a726c3f23c ("bpftool: Print newline before '}' for struct with padding only fields") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20221212211505.558851-2-andrii@kernel.org Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-04-20selftests: openvswitch: adjust datapath NL message declarationAaron Conole
[ Upstream commit 306dc21361993f4fe50a15d4db6b1a4de5d0adb0 ] The netlink message for creating a new datapath takes an array of ports for the PID creation. This shouldn't cause many issues, but correct it for future cases where we need to decode datapath information that could include the per-cpu PID map. Fixes: 25f16c873fb1 ("selftests: add openvswitch selftest suite") Signed-off-by: Aaron Conole <aconole@redhat.com> Link: https://lore.kernel.org/r/20230412115828.3991806-1-aconole@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-04-13bpftool: Print newline before '}' for struct with padding only fieldsEduard Zingerman
[ Upstream commit 44a726c3f23cf762ef4ce3c1709aefbcbe97f62c ] btf_dump_emit_struct_def attempts to print empty structures on a single line, e.g. `struct empty {}`. However, it has to account for a case when there are no regular but some padding fields in the struct. In such a case `vlen` would be zero, but the size would be non-zero. E.g. here is struct bpf_timer from vmlinux.h before this patch: struct bpf_timer { long: 64; long: 64;}; And after this patch: struct bpf_timer { long: 64; long: 64; }; Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221001104425.415768-1-eddyz87@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-04-13maple_tree: remove GFP_ZERO from kmem_cache_alloc() and kmem_cache_alloc_bulk()Liam R. Howlett
commit 541e06b772c1aaffb3b6a245ccface36d7107af2 upstream. Preallocations are common in the VMA code to avoid allocating under certain locking conditions. The preallocations must also cover the worst-case scenario. Removing the GFP_ZERO flag from the kmem_cache_alloc() (and bulk variant) calls will reduce the amount of time spent zeroing memory that may not be used. Only zero out the necessary area to keep track of the allocations in the maple state. Zero the entire node prior to using it in the tree. This required internal changes to node counting on allocation, so the test code is also updated. This restores some micro-benchmark performance: up to +9% in mmtests mmap1 by my testing; +10% to +20% in mmap, mmapaddr, mmapmany tests reported by Red Hat. Link: https://bugzilla.redhat.com/show_bug.cgi?id=2149636 Link: https://lkml.kernel.org/r/20230105160427.2988454-1-Liam.Howlett@oracle.com Cc: stable@vger.kernel.org Fixes: 54a611b60590 ("Maple Tree: add new data structure") Signed-off-by: Liam Howlett <Liam.Howlett@oracle.com> Reported-by: Jirka Hladky <jhladky@redhat.com> Suggested-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-04-06libbpf: Fix btf_dump's packed struct determinationAndrii Nakryiko
[ Upstream commit 4fb877aaa179dcdb1676d55216482febaada457e ] Fix a bug in btf_dump's logic for determining whether a given struct type is packed. The notion of "natural alignment" is not needed and is even harmful in this case, so drop it altogether. The biggest difference in btf_is_struct_packed() compared to its original implementation is that we don't really use btf__align_of() to determine the overall alignment of a struct type (because it could be 1 for both packed and non-packed structs, depending on specific field definitions), and just use each field's actual alignment to calculate whether any field requires packing or the struct's overall size necessitates packing. Add two simple test cases that demonstrate the difference this change would make. Fixes: ea2ce1ba99aa ("libbpf: Fix BTF-to-C converter's padding logic") Reported-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/bpf/20221215183605.4149488-1-andrii@kernel.org Signed-off-by: Sasha Levin <sashal@kernel.org>
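A simplified sketch of the new determination using libbpf's member helpers: packing is required iff some non-bitfield member is misaligned at its offset, or the struct size is not a multiple of the largest member alignment:

```c
static bool btf_is_struct_packed(const struct btf *btf, __u32 id,
				 const struct btf_type *t)
{
	const struct btf_member *m = btf_members(t);
	int align, max_align = 1, i;

	for (i = 0; i < btf_vlen(t); i++, m++) {
		align = btf__align_of(btf, m->type);
		if (!align)
			return true;	/* be conservative on errors */
		/* a non-bitfield member away from its natural alignment
		 * can only be expressed with packing */
		if (!btf_member_bitfield_size(t, i) &&
		    btf_member_bit_offset(t, i) % (8 * align))
			return true;
		if (align > max_align)
			max_align = align;
	}
	/* an unpacked struct is padded to a multiple of its alignment */
	return t->size % max_align != 0;
}
```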
2023-04-06selftests/bpf: Add few corner cases to test padding handling of btf_dumpAndrii Nakryiko
[ Upstream commit b148c8b9b926e257a59c8eb2cd6fa3adfd443254 ] Add few hand-crafted cases and few randomized cases found using script from [0] that tests btf_dump's padding logic. [0] https://lore.kernel.org/bpf/85f83c333f5355c8ac026f835b18d15060725fcb.camel@ericsson.com/ Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20221212211505.558851-7-andrii@kernel.org Stable-dep-of: 4fb877aaa179 ("libbpf: Fix btf_dump's packed struct determination") Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-04-06libbpf: Fix BTF-to-C converter's padding logicAndrii Nakryiko
[ Upstream commit ea2ce1ba99aa6a60c8d8a706e3abadf3de372163 ] Turns out that the btf_dump API doesn't handle a bunch of tricky corner cases, as reported by Per, and further discovered using his testing Python script ([0]). This patch revamps btf_dump's padding logic significantly, making it more correct and also avoiding unnecessary explicit padding where the compiler would pad naturally. This overall topic turned out to be very tricky and subtle; there are lots of corner cases. The comments in the code try to give some clues, but the comments themselves are supposed to be paired with a good understanding of C alignment and padding rules. Plus some experimentation to figure out subtle things like whether `long :0;` means that the struct is now forced to be long-aligned (no, it's not, turns out). Anyways, Per's script, while not completely correct in some known situations, doesn't show any obvious cases where this logic breaks, so this is a nice improvement over the previous state of this logic. Some selftests had to be adjusted to accommodate better use of natural alignment rules, eliminating some unnecessary padding, or changing it to `type: 0;` alignment markers. Note also that when we are in between bitfields, we emit an explicit bit size, while otherwise we use `: 0`; this feels much more natural in practice. The next patch will add a few more test cases, found through randomized runs of Per's script. [0] https://lore.kernel.org/bpf/85f83c333f5355c8ac026f835b18d15060725fcb.camel@ericsson.com/ Reported-by: Per Sundström XP <per.xp.sundstrom@ericsson.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20221212211505.558851-6-andrii@kernel.org Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-04-06selftests/bpf: Test btf dump for struct with padding only fieldsEduard Zingerman
[ Upstream commit d503f1176b14f722a40ea5110312614982f9a80b ] Structures with zero regular fields but some padding constitute a special case in btf_dump.c:btf_dump_emit_struct_def with regards to newline before closing '}'. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221001104425.415768-2-eddyz87@gmail.com Stable-dep-of: ea2ce1ba99aa ("libbpf: Fix BTF-to-C converter's padding logic") Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-04-06tools/power turbostat: fix decoding of HWP_STATUSAntti Laakso
[ Upstream commit 92c25393586ac799b9b7d9e50434f3c44a7622c4 ] The "excursion to minimum" information is in bit 2 of the HWP_STATUS MSR. Fix the bitmask used for decoding the register. Signed-off-by: Antti Laakso <antti.laakso@intel.com> Reviewed-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com> Signed-off-by: Len Brown <len.brown@intel.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
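The corrected decode, sketched:

```c
/* IA32_HWP_STATUS: bit 0 is "guaranteed performance change",
 * bit 2 is "excursion to minimum" - the fix tests bit 2 */
if (msr & (1ULL << 2))
	fprintf(outf, "excursion to minimum\n");
```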
2023-04-06tools/power turbostat: Fix /dev/cpu_dma_latency warningsPrarit Bhargava
[ Upstream commit 40aafc7d58d3544f152a863a0e9863014b6d5d8c ] When running as non-root the following error is seen in turbostat: turbostat: fopen /dev/cpu_dma_latency : Permission denied turbostat and the man page have information on how to avoid other permission errors, so these can be fixed the same way. Provide better /dev/cpu_dma_latency warnings that provide instructions on how to avoid the error, and update the man page. Signed-off-by: Prarit Bhargava <prarit@redhat.com> Cc: linux-pm@vger.kernel.org Signed-off-by: Len Brown <len.brown@intel.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-04-06ACPI: tools: pfrut: Check if the input of level and type is in the right numeric rangeChen Yu
[ Upstream commit 0bc23d8b2237a104d7f8379d687aa4cb82e2968b ] The user can provide an arbitrary non-numeric value for level and type, which could bring unexpected behavior. In this case the expected behavior would be to throw an error. pfrut -h usage: pfrut [OPTIONS] code injection: -l, --load -s, --stage -a, --activate -u, --update [stage and activate] -q, --query -d, --revid update telemetry: -G, --getloginfo -T, --type(0:execution, 1:history) -L, --level(0, 1, 2, 4) -R, --read -D, --revid log pfrut -T A pfrut -G log_level:0 log_type:0 log_revid:2 max_data_size:65536 chunk1_size:0 chunk2_size:1530 rollover_cnt:0 reset_cnt:17 Fix this by restricting the input to be in the expected range. Reported-by: Hariganesh Govindarajulu <hariganesh.govindarajulu@intel.com> Suggested-by: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com> Signed-off-by: Chen Yu <yu.c.chen@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
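A sketch of the validation, with assumed helper names:

```c
/* accept only the values the usage text documents */
static bool valid_log_level(int level)
{
	return level == 0 || level == 1 || level == 2 || level == 4;
}

static bool valid_log_type(int type)
{
	return type == 0 || type == 1;
}
```
The option-parsing code then rejects anything for which these return false, instead of passing the raw atoi() result to the driver.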
2023-03-30selftests/x86/amx: Add a ptrace testChang S. Bae
commit 62faca1ca10cc84e99ae7f38aa28df2bc945369b upstream. Include a test case to validate the XTILEDATA injection to the target. Also, it ensures the kernel's ability to copy states between different XSAVE formats. Refactor the memcmp() code to be usable for the state validation. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/all/20230227210504.18520-3-chang.seok.bae%40intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-03-30act_mirred: use the backlog for nested calls to mirred ingressDavide Caratti
[ Upstream commit ca22da2fbd693b54dc8e3b7b54ccc9f7e9ba3640 ] William reports kernel soft-lockups on some OVS topologies when TC mirred egress->ingress action is hit by local TCP traffic [1]. The same can also be reproduced with SCTP (thanks Xin for verifying), when client and server reach themselves through mirred egress to ingress, and one of the two peers sends a "heartbeat" packet (from within a timer). Enqueueing to backlog proved to fix this soft lockup; however, as Cong noticed [2], we should preserve - when possible - the current mirred behavior that counts as "overlimits" any eventual packet drop subsequent to the mirred forwarding action [3]. A compromise solution might use the backlog only when tcf_mirred_act() has a nest level greater than one: change tcf_mirred_forward() accordingly. Also, add a kselftest that can reproduce the lockup and verifies TC mirred ability to account for further packet drops after TC mirred egress->ingress (when the nest level is 1). [1] https://lore.kernel.org/netdev/33dc43f587ec1388ba456b4915c75f02a8aae226.1663945716.git.dcaratti@redhat.com/ [2] https://lore.kernel.org/netdev/Y0w%2FWWY60gqrtGLp@pop-os.localdomain/ [3] such behavior is not guaranteed: for example, if RPS or skb RX timestamping is enabled on the mirred target device, the kernel can defer receiving the skb and return NET_RX_SUCCESS inside tcf_mirred_forward(). Reported-by: William Zhao <wizhao@redhat.com> CC: Xin Long <lucien.xin@gmail.com> Signed-off-by: Davide Caratti <dcaratti@redhat.com> Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
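The compromise, sketched against tcf_mirred_forward(); the nest-level accounting details are assumptions:

```c
static int tcf_mirred_forward(bool want_ingress, struct sk_buff *skb)
{
	int err;

	if (!want_ingress)
		err = tcf_dev_queue_xmit(skb, dev_queue_xmit);
	else if (__this_cpu_read(mirred_nest_level) > 1)
		err = netif_rx(skb);		/* nested: defer via backlog */
	else
		err = netif_receive_skb(skb);	/* level 1: keep overlimits accounting */
	return err;
}
```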
2023-03-30selftests/bpf: check that modifier resolves after pointerLorenz Bauer
[ Upstream commit dfdd608c3b365f0fd49d7e13911ebcde06b9865b ] Add a regression test that ensures that a VAR pointing at a modifier which follows a PTR (or STRUCT or ARRAY) is resolved correctly by the datasec validator. Signed-off-by: Lorenz Bauer <lmb@isovalent.com> Link: https://lore.kernel.org/r/20230306112138.155352-3-lmb@isovalent.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-03-30bootconfig: Fix testcase to increase max nodeMasami Hiramatsu (Google)
[ Upstream commit b69245126a48e50882021180fa5d264dc7149ccc ] Since commit 6c40624930c5 ("bootconfig: Increase max nodes of bootconfig from 1024 to 8192 for DCC support") increased the max number of bootconfig nodes to 8192, the bootconfig testcase of the max number of nodes fails. To fix this issue, we cannot simply increase the number in the test script because the test bootconfig file becomes too big (>32KB). To fix that, we can use a combination of three letters (26^3 = 17576). But with that, we cannot express 8193 (just one above the limit) because it also exceeds the max size of bootconfig. So, the first 26 nodes will just use one letter. With this fix, test-bootconfig.sh passes all tests. Link: https://lore.kernel.org/all/167888844790.791176.670805252426835131.stgit@devnote2/ Reported-by: Heinz Wiesinger <pprkut@slackware.com> Link: https://lore.kernel.org/all/2463802.XAFRqVoOGU@amaterasu.liwjatan.org Fixes: 6c40624930c5 ("bootconfig: Increase max nodes of bootconfig from 1024 to 8192 for DCC support") Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-03-22selftests: net: devlink_port_split.py: skip test if no suitable device availablePo-Hsu Lin
[ Upstream commit 24994513ad13ff2c47ba91d2b5df82c3d496c370 ] The `devlink -j port show` command output may not contain the "flavour" key; an example from an Ubuntu 22.10 s390x LPAR (5.19.0-37-generic) with the mlx4 driver and iproute2-5.15.0: {"port":{"pci/0001:00:00.0/1":{"type":"eth","netdev":"ens301"}, "pci/0001:00:00.0/2":{"type":"eth","netdev":"ens301d1"}, "pci/0002:00:00.0/1":{"type":"eth","netdev":"ens317"}, "pci/0002:00:00.0/2":{"type":"eth","netdev":"ens317d1"}}} This will cause a KeyError exception. Create a validate_devlink_output() to check for the "flavour" key in the devlink command output and avoid the KeyError exception. Also let it handle the check for `devlink -j dev show` output in main(). Apart from this, if the test was not started because the max lanes of the designated device is 0, the script would still return 0, causing a false-negative test result. Use a found_max_lanes flag to determine whether the tests were skipped for this reason and return KSFT_SKIP to make this clearer. Link: https://bugs.launchpad.net/bugs/1937133 Fixes: f3348a82e727 ("selftests: net: Add port split test") Signed-off-by: Po-Hsu Lin <po-hsu.lin@canonical.com> Link: https://lore.kernel.org/r/20230315165353.229590-1-po-hsu.lin@canonical.com Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-03-22selftests: fix LLVM build for i386 and x86_64Guillaume Tucker
[ Upstream commit 624c60f326c6e5a80b008e8a5c7feffe8c27dc72 ] Add missing cases for the i386 and x86_64 architectures when determining the LLVM target for building kselftest. Fixes: 795285ef2425 ("selftests: Fix clang cross compilation") Signed-off-by: Guillaume Tucker <guillaume.tucker@collabora.com> Reviewed-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-03-17Revert "bpf, test_run: fix &xdp_frame misplacement for LIVE_FRAMES"Martin KaFai Lau
commit 181127fb76e62d06ab17a75fd610129688612343 upstream. This reverts commit 6c20822fada1b8adb77fa450d03a0d449686a4a9. build bot failed on arch with different cache line size: https://lore.kernel.org/bpf/50c35055-afa9-d01e-9a05-ea5351280e4f@intel.com/ Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-03-17bpf, test_run: fix &xdp_frame misplacement for LIVE_FRAMESAlexander Lobakin
[ Upstream commit 6c20822fada1b8adb77fa450d03a0d449686a4a9 ] &xdp_buff and &xdp_frame are bound in a way that xdp_buff->data_hard_start == xdp_frame. It's always the case and e.g. xdp_convert_buff_to_frame() relies on this. IOW, the following: for (u32 i = 0; i < 0xdead; i++) { xdpf = xdp_convert_buff_to_frame(&xdp); xdp_convert_frame_to_buff(xdpf, &xdp); } shouldn't ever modify @xdpf's contents or the pointer itself. However, "live packet" code wrongly treats &xdp_frame as part of its context placed *before* the data_hard_start. With such a flow, data_hard_start is sizeof(*xdpf) off to the right and no longer points to the XDP frame. Instead of replacing `sizeof(ctx)` with `offsetof(ctx, xdpf)` in several places and praying that there are no more miscalcs left somewhere in the code, unionize ::frm with ::data in a flex array, so that both start pointing to the actual data_hard_start and the XDP frame actually starts being a part of it, i.e. a part of the headroom, not the context. A nice side effect is that the maximum frame size for this mode gets increased by 40 bytes, as xdp_buff::frame_sz includes everything from data_hard_start (-> includes xdpf already) to the end of XDP/skb shared info. Also update %MAX_PKT_SIZE accordingly in the selftests code. Leave it hardcoded for 64 bit && 4k pages; it can be made more flexible later on. Minor: align `&head->data` with how `head->frm` is assigned for consistency. Minor #2: rename 'frm' to 'frame' in &xdp_page_head while at it for clarity. (This was found while testing the XDP traffic generator on ice, which calls xdp_convert_frame_to_buff() for each XDP frame.) Fixes: b530e9e1063e ("bpf: Add "live packet" mode for XDP in BPF_PROG_RUN") Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Link: https://lore.kernel.org/r/20230215185440.4126672-1-aleksander.lobakin@intel.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
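The resulting context layout, per the commit: the frame is unionized with the data flex array so data_hard_start and the XDP frame coincide:

```c
struct xdp_page_head {
	struct xdp_buff orig_ctx;
	struct xdp_buff ctx;
	union {
		/* ::data_hard_start starts here */
		DECLARE_FLEX_ARRAY(struct xdp_frame, frame);
		DECLARE_FLEX_ARRAY(u8, data);
	};
};
```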
2023-03-17perf stat: Fix counting when initial delay configuredChangbin Du
[ Upstream commit 25f69c69bc3ca8c781a94473f28d443d745768e3 ] When creating counters with initial delay configured, the enable_on_exec field is not set. So we need to enable the counters later. The problem is that when a workload is specified, target__none() is true. So we also need to check stat_config.initial_delay. In this change, we add a new field 'initial_delay' to struct target, which could be shared by other subcommands, and define target__enable_on_exec(), which returns whether enable_on_exec should be set in normal cases. Before this fix the event is not counted: $ ./perf stat -e instructions -D 100 sleep 2 Events disabled Events enabled Performance counter stats for 'sleep 2': <not counted> instructions 1.901661124 seconds time elapsed 0.001602000 seconds user 0.000000000 seconds sys After the fix it works: $ ./perf stat -e instructions -D 100 sleep 2 Events disabled Events enabled Performance counter stats for 'sleep 2': 404,214 instructions 1.901743475 seconds time elapsed 0.001617000 seconds user 0.000000000 seconds sys Fixes: c587e77e100fa40e ("perf stat: Do not delay the workload with --delay") Signed-off-by: Changbin Du <changbin.du@huawei.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Hui Wang <hw.huiwang@huawei.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20230302031146.2801588-2-changbin.du@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
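One plausible shape of the new helper, derived from the commit's description (the exact condition is an assumption):

```c
/* enable_on_exec only applies when perf forks the workload itself
 * (no pid/cpu target) and no initial delay was requested */
static inline bool target__enable_on_exec(struct target *target)
{
	return target__none(target) && !target->initial_delay;
}
```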
2023-03-17selftests: nft_nat: ensuring the listening side is up before starting the clientHangbin Liu
[ Upstream commit 2067e7a00aa604b94de31d64f29b8893b1696f26 ] The test_local_dnat_portonly() function initiates the client-side as soon as it sets the listening side to the background. This could lead to a race condition where the server may not be ready to listen. To ensure that the server-side is up and running before initiating the client-side, a delay is introduced to the test_local_dnat_portonly() function. Before the fix: # ./nft_nat.sh PASS: netns routing/connectivity: ns0-rthlYrBU can reach ns1-rthlYrBU and ns2-rthlYrBU PASS: ping to ns1-rthlYrBU was ip NATted to ns2-rthlYrBU PASS: ping to ns1-rthlYrBU OK after ip nat output chain flush PASS: ipv6 ping to ns1-rthlYrBU was ip6 NATted to ns2-rthlYrBU 2023/02/27 04:11:03 socat[6055] E connect(5, AF=2 10.0.1.99:2000, 16): Connection refused ERROR: inet port rewrite After the fix: # ./nft_nat.sh PASS: netns routing/connectivity: ns0-9sPJV6JJ can reach ns1-9sPJV6JJ and ns2-9sPJV6JJ PASS: ping to ns1-9sPJV6JJ was ip NATted to ns2-9sPJV6JJ PASS: ping to ns1-9sPJV6JJ OK after ip nat output chain flush PASS: ipv6 ping to ns1-9sPJV6JJ was ip6 NATted to ns2-9sPJV6JJ PASS: inet port rewrite without l3 address Fixes: 282e5f8fe907 ("netfilter: nat: really support inet nat without l3 address") Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-03-17perf inject: Fix --buildid-all not to eat up MMAP2Namhyung Kim
commit ce9f1c05d2edfa6cdf2c1a510495d333e11810a8 upstream. When MMAP2 has the PERF_RECORD_MISC_MMAP_BUILD_ID flag, it means the record already has the build-id info. So it marks the DSO as hit, to skip if the same DSO is not processed if it happens to miss the build-id later. But it missed to copy the MMAP2 record itself so it'd fail to symbolize samples for those regions. For example, the following generates 249 MMAP2 events. $ perf record --buildid-mmap -o- true | perf report --stat -i- | grep MMAP2 MMAP2 events: 249 (86.8%) Adding perf inject should not change the number of events like this $ perf record --buildid-mmap -o- true | perf inject -b | \ > perf report --stat -i- | grep MMAP2 MMAP2 events: 249 (86.5%) But when --buildid-all is used, it eats most of the MMAP2 events. $ perf record --buildid-mmap -o- true | perf inject -b --buildid-all | \ > perf report --stat -i- | grep MMAP2 MMAP2 events: 1 ( 2.5%) With this patch, it shows the original number now. $ perf record --buildid-mmap -o- true | perf inject -b --buildid-all | \ > perf report --stat -i- | grep MMAP2 MMAP2 events: 249 (86.5%) Committer testing: Before: $ perf record --buildid-mmap -o- perf stat --null sleep 1 2> /dev/null | perf inject -b | perf report --stat -i- | grep MMAP2 MMAP2 events: 58 (36.2%) $ perf record --buildid-mmap -o- perf stat --null sleep 1 2> /dev/null | perf report --stat -i- | grep MMAP2 MMAP2 events: 58 (36.2%) $ perf record --buildid-mmap -o- perf stat --null sleep 1 2> /dev/null | perf inject -b --buildid-all | perf report --stat -i- | grep MMAP2 MMAP2 events: 2 ( 1.9%) $ After: $ perf record --buildid-mmap -o- perf stat --null sleep 1 2> /dev/null | perf inject -b | perf report --stat -i- | grep MMAP2 MMAP2 events: 58 (29.3%) $ perf record --buildid-mmap -o- perf stat --null sleep 1 2> /dev/null | perf report --stat -i- | grep MMAP2 MMAP2 events: 58 (34.3%) $ perf record --buildid-mmap -o- perf stat --null sleep 1 2> /dev/null | perf inject -b --buildid-all | perf report --stat -i- | grep MMAP2 MMAP2 events: 58 (38.4%) $ Fixes: f7fc0d1c915a74ff ("perf inject: Do not inject BUILD_ID record if MMAP2 has it") Signed-off-by: Namhyung Kim <namhyung@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230223070155.54251-1-namhyung@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
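The essence of the fix, sketched (the handler shape is assumed): mark the DSO hit, but still forward the record:

```c
if (event->header.misc & PERF_RECORD_MISC_MMAP_BUILD_ID) {
	/* build-id already present: record it, mark the dso hit ... */
	dso->hit = 1;
	/* ... and repipe the MMAP2 record itself (this was missing) */
	err = perf_event__repipe(tool, event, sample, machine);
}
```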
2023-03-11tools/iio/iio_utils: fix memory leakYulong Zhang
[ Upstream commit f2edf0c819a4823cd6c288801ce737e8d4fcde06 ] 1. A sysfs file is fopen()ed without a matching fclose(). 2. An asprintf()ed filename is never free()d. 3. If asprintf() returns an error, the buffer does not need to be freed. Signed-off-by: Yulong Zhang <yulong.zhang@metoak.net> Link: https://lore.kernel.org/r/20230117025147.69890-1-yulong.zhang@metoak.net Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
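The shape of all three fixes, sketched:

```c
if (asprintf(&filename, "%s/%s", basedir, name) < 0)
	return -ENOMEM;		/* asprintf() failed: nothing to free */

sysfsfp = fopen(filename, "r");
if (!sysfsfp) {
	ret = -errno;
	free(filename);		/* was leaked */
	return ret;
}
/* ... read the attribute ... */
fclose(sysfsfp);		/* was leaked */
free(filename);
return ret;
```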
2023-03-11netfilter: ip6t_rpfilter: Fix regression with VRF interfacesPhil Sutter
[ Upstream commit efb056e5f1f0036179b2f92c1c15f5ea7a891d70 ] When calling ip6_route_lookup() for the packet arriving on the VRF interface, the result is always the real (slave) interface. Expect this when validating the result. Fixes: acc641ab95b66 ("netfilter: rpfilter/fib: Populate flowic_l3mdev field") Signed-off-by: Phil Sutter <phil@nwl.cc> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-03-11objtool: Fix memory leak in create_static_call_sections()Miaoqian Lin
[ Upstream commit 3da73f102309fe29150e5c35acd20dd82063ff67 ] strdup() allocates memory for key_name. We need to release the memory in the following error paths. Add free() to avoid memory leak. Fixes: 1e7e47883830 ("x86/static_call: Add inline static call implementation for x86-64") Signed-off-by: Miaoqian Lin <linmq006@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20221205080642.558583-1-linmq006@gmail.com Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
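A sketch of the fixed error paths in create_static_call_sections() (identifier details are assumptions):

```c
key_name = strdup(insn->call_dest->name);
if (!key_name)
	return -1;
if (strncmp(key_name, STATIC_CALL_TRAMP_PREFIX_STR,
	    STATIC_CALL_TRAMP_PREFIX_LEN)) {
	WARN("static_call: trampoline name malformed: %s", key_name);
	free(key_name);		/* added: release on the error path */
	return -1;
}
/* ... look up or create the static call key symbol ... */
free(key_name);			/* key_name is no longer needed */
```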