author     David S. Miller  2021-02-16 13:14:06 -0800
committer  David S. Miller  2021-02-16 13:14:06 -0800
commit     b8af417e4d93caeefb89bbfbd56ec95dedd8dab5 (patch)
tree       1c8d22e1aec330238830a43cc8aee0cf768ae1c7 /Documentation
parent     9ec5eea5b6acfae7279203097eeec5d02d01d9b7 (diff)
parent     45159b27637b0fef6d5ddb86fc7c46b13c77960f (diff)
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Daniel Borkmann says:
====================
pull-request: bpf-next 2021-02-16
The following pull-request contains BPF updates for your *net-next* tree.
There's a small merge conflict between 7eeba1706eba ("tcp: Add receive timestamp
support for receive zerocopy.") from net-next tree and 9cacf81f8161 ("bpf: Remove
extra lock_sock for TCP_ZEROCOPY_RECEIVE") from bpf-next tree. Resolve as follows:
  [...]
  lock_sock(sk);
  err = tcp_zerocopy_receive(sk, &zc, &tss);
  err = BPF_CGROUP_RUN_PROG_GETSOCKOPT_KERN(sk, level, optname,
                                            &zc, &len, err);
  release_sock(sk);
  [...]
We've added 116 non-merge commits during the last 27 day(s) which contain
a total of 156 files changed, 5662 insertions(+), 1489 deletions(-).
The main changes are:
1) Add support for pointers to types with known size among global function
args to overcome the limit on the max number of allowed args, from Dmitrii Banshchikov.
2) Add bpf_iter for task_vma which can be used to generate information similar
to /proc/pid/maps (a small sketch follows this list), from Song Liu.
3) Enable bpf_{g,s}etsockopt() from all sock_addr related program hooks. Allow
rewriting bind user ports from BPF side below the ip_unprivileged_port_start
range, both from Stanislav Fomichev.
4) Prevent recursion on fentry/fexit & sleepable programs and allow map-in-map
as well as per-cpu maps for the latter, from Alexei Starovoitov.
5) Add selftest script to run BPF CI locally. Also enable BPF ringbuffer
for sleepable programs, both from KP Singh.
6) Extend verifier to enable variable offset read/write access to the BPF
program stack (also sketched after this list), from Andrei Matei.
7) Improve tc & XDP MTU handling and add a new bpf_check_mtu() helper to
query device MTU from programs (see the sketch after this list), from Jesper Dangaard Brouer.
8) Allow the bpf_get_socket_cookie() helper to also be called from [sleepable] BPF
tracing programs (sketched after this list), from Florent Revest.
9) Extend the x86 JIT to pad JMPs with NOPs to help the image converge when
too many passes would otherwise be required, from Gary Lin.
10) Verifier fixes on atomics with BPF_FETCH as well as on function-by-function
verification, both related to zero-extension handling, from Ilya Leoshkevich.
11) Better kernel build integration of resolve_btfids tool, from Jiri Olsa.
12) Batch of AF_XDP selftest cleanups and small performance improvement
for libbpf's xsk map redirect for newer kernels, from Björn Töpel.
13) Follow-up BPF doc and verifier improvements around atomics with
BPF_FETCH, from Brendan Jackman.
14) Permit zero-sized data sections e.g. if ELF .rodata section contains
read-only data from local variables, from Yonghong Song.
15) veth driver skb bulk-allocation for ndo_xdp_xmit, from Lorenzo Bianconi.
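
The sketches below are not part of the quoted pull request; they are rough
illustrations of a few of the new pieces, written against libbpf's
bpf_helpers.h and a bpftool-generated vmlinux.h, with all file, program and
variable names made up. First, for the task_vma iterator from 2):

  /* task_vma_sketch.bpf.c -- minimal sketch, not taken from this series */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  char LICENSE[] SEC("license") = "GPL";

  SEC("iter/task_vma")
  int dump_task_vmas(struct bpf_iter__task_vma *ctx)
  {
      struct seq_file *seq = ctx->meta->seq;
      struct task_struct *task = ctx->task;
      struct vm_area_struct *vma = ctx->vma;
      char fmt[] = "%d: %08llx-%08llx\n";
      __u64 args[3];

      /* A NULL task/vma marks the end of the walk. */
      if (!task || !vma)
          return 0;

      args[0] = task->pid;
      args[1] = vma->vm_start;
      args[2] = vma->vm_end;

      /* Emit one /proc/<pid>/maps-style line per VMA into the seq_file
       * that backs the iterator's read() interface. */
      bpf_seq_printf(seq, fmt, sizeof(fmt), args, sizeof(args));
      return 0;
  }

Pinned with something like "bpftool iter pin ./task_vma_sketch.o
/sys/fs/bpf/task_vmas", reading the pinned file should then produce one such
line per VMA of every task.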
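For the verifier change in 6), the kind of bounded but non-constant stack
index that is now accepted for privileged programs (tracepoint and constants
chosen arbitrarily):

  /* varstack_sketch.bpf.c -- illustrative only */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  char LICENSE[] SEC("license") = "GPL";

  SEC("tracepoint/syscalls/sys_enter_getpid")
  int var_off_stack(void *ctx)
  {
      char buf[64] = {};                        /* initialized stack region */
      __u32 idx = bpf_get_prandom_u32() & 63;   /* bounded, but not constant */

      /* A store at a variable (bounded) offset into the stack; previously
       * the verifier rejected this access pattern. */
      buf[idx] = 0x2a;

      return buf[0];
  }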
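For the bpf_check_mtu() helper from 7), a minimal MTU guard in XDP; len_diff
and flags are left at zero and any non-success return is simply treated as a
drop:

  /* check_mtu_sketch.bpf.c -- rough sketch of basic bpf_check_mtu() usage */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  char LICENSE[] SEC("license") = "GPL";

  SEC("xdp")
  int xdp_mtu_guard(struct xdp_md *ctx)
  {
      __u32 mtu_len = 0;   /* filled in with the device MTU on return */

      /* ifindex 0: check against the device this xdp_buff is bound to. */
      if (bpf_check_mtu(ctx, 0, &mtu_len, 0, 0))
          return XDP_DROP;   /* packet would exceed the MTU */

      return XDP_PASS;
  }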
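And for 8), calling bpf_get_socket_cookie() on a struct sock pointer from a
tracing program; the fexit attach point and the bpf_printk output are only
for illustration:

  /* socket_cookie_sketch.bpf.c -- illustrative only */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char LICENSE[] SEC("license") = "GPL";

  SEC("fexit/inet_stream_connect")
  int BPF_PROG(trace_connect, struct socket *sock, struct sockaddr *uaddr,
               int addr_len, int flags, int ret)
  {
      /* With this series the helper also accepts a struct sock pointer
       * from (sleepable) tracing programs, not just socket-local hooks. */
      __u64 cookie = bpf_get_socket_cookie(sock->sk);

      bpf_printk("inet_stream_connect(ret=%d) cookie=%llu", ret, cookie);
      return 0;
  }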
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/bpf/bpf_design_QA.rst  |  6
-rw-r--r--  Documentation/bpf/bpf_devel_QA.rst   | 11
-rw-r--r--  Documentation/networking/filter.rst  | 28
3 files changed, 28 insertions(+), 17 deletions(-)
diff --git a/Documentation/bpf/bpf_design_QA.rst b/Documentation/bpf/bpf_design_QA.rst
index 2df7b067ab93..0e15f9b05c9d 100644
--- a/Documentation/bpf/bpf_design_QA.rst
+++ b/Documentation/bpf/bpf_design_QA.rst
@@ -208,6 +208,12 @@ data structures and compile with kernel internal headers. Both of these
 kernel internals are subject to change and can break with newer kernels
 such that the program needs to be adapted accordingly.
 
+Q: Are tracepoints part of the stable ABI?
+------------------------------------------
+A: NO. Tracepoints are tied to internal implementation details hence they are
+subject to change and can break with newer kernels. BPF programs need to change
+accordingly when this happens.
+
 Q: How much stack space a BPF program uses?
 -------------------------------------------
 A: Currently all program types are limited to 512 bytes of stack
diff --git a/Documentation/bpf/bpf_devel_QA.rst b/Documentation/bpf/bpf_devel_QA.rst
index 5b613d2a5f1a..2ed89abbf9a4 100644
--- a/Documentation/bpf/bpf_devel_QA.rst
+++ b/Documentation/bpf/bpf_devel_QA.rst
@@ -501,16 +501,19 @@ All LLVM releases can be found at: http://releases.llvm.org/
 
 Q: Got it, so how do I build LLVM manually anyway?
 --------------------------------------------------
-A: You need cmake and gcc-c++ as build requisites for LLVM. Once you have
-that set up, proceed with building the latest LLVM and clang version
+A: We recommend that developers who want the fastest incremental builds
+use the Ninja build system, you can find it in your system's package
+manager, usually the package is ninja or ninja-build.
+
+You need ninja, cmake and gcc-c++ as build requisites for LLVM. Once you
+have that set up, proceed with building the latest LLVM and clang version
 from the git repositories::
 
   $ git clone https://github.com/llvm/llvm-project.git
-  $ mkdir -p llvm-project/llvm/build/install
+  $ mkdir -p llvm-project/llvm/build
  $ cd llvm-project/llvm/build
  $ cmake .. -G "Ninja" -DLLVM_TARGETS_TO_BUILD="BPF;X86" \
             -DLLVM_ENABLE_PROJECTS="clang" \
-            -DBUILD_SHARED_LIBS=OFF \
             -DCMAKE_BUILD_TYPE=Release \
             -DLLVM_BUILD_RUNTIME=OFF
  $ ninja
diff --git a/Documentation/networking/filter.rst b/Documentation/networking/filter.rst
index f6d8f90e9a56..251c6bd73d15 100644
--- a/Documentation/networking/filter.rst
+++ b/Documentation/networking/filter.rst
@@ -1048,12 +1048,12 @@ Unlike classic BPF instruction set, eBPF has generic load/store operations::
 Where size is one of: BPF_B or BPF_H or BPF_W or BPF_DW.
 
 It also includes atomic operations, which use the immediate field for extra
-encoding.
+encoding::
 
    .imm = BPF_ADD, .code = BPF_ATOMIC | BPF_W  | BPF_STX: lock xadd *(u32 *)(dst_reg + off16) += src_reg
    .imm = BPF_ADD, .code = BPF_ATOMIC | BPF_DW | BPF_STX: lock xadd *(u64 *)(dst_reg + off16) += src_reg
 
-The basic atomic operations supported are:
+The basic atomic operations supported are::
 
     BPF_ADD
     BPF_AND
@@ -1066,33 +1066,35 @@ memory location addresed by ``dst_reg + off`` is atomically modified, with
 immediate, then these operations also overwrite ``src_reg`` with the value
 that was in memory before it was modified.
 
-The more special operations are:
+The more special operations are::
 
     BPF_XCHG
 
 This atomically exchanges ``src_reg`` with the value addressed by ``dst_reg +
-off``.
+off``. ::
 
     BPF_CMPXCHG
 
 This atomically compares the value addressed by ``dst_reg + off`` with
-``R0``. If they match it is replaced with ``src_reg``, The value that was there
-before is loaded back to ``R0``.
+``R0``. If they match it is replaced with ``src_reg``. In either case, the
+value that was there before is zero-extended and loaded back to ``R0``.
 
 Note that 1 and 2 byte atomic operations are not supported.
 
-Except ``BPF_ADD`` _without_ ``BPF_FETCH`` (for legacy reasons), all 4 byte
-atomic operations require alu32 mode. Clang enables this mode by default in
-architecture v3 (``-mcpu=v3``). For older versions it can be enabled with
+Clang can generate atomic instructions by default when ``-mcpu=v3`` is
+enabled. If a lower version for ``-mcpu`` is set, the only atomic instruction
+Clang can generate is ``BPF_ADD`` *without* ``BPF_FETCH``. If you need to enable
+the atomics features, while keeping a lower ``-mcpu`` version, you can use
 ``-Xclang -target-feature -Xclang +alu32``.
 
-You may encounter BPF_XADD - this is a legacy name for BPF_ATOMIC, referring to
-the exclusive-add operation encoded when the immediate field is zero.
+You may encounter ``BPF_XADD`` - this is a legacy name for ``BPF_ATOMIC``,
+referring to the exclusive-add operation encoded when the immediate field is
+zero.
 
-eBPF has one 16-byte instruction: BPF_LD | BPF_DW | BPF_IMM which consists
+eBPF has one 16-byte instruction: ``BPF_LD | BPF_DW | BPF_IMM`` which consists
 of two consecutive ``struct bpf_insn`` 8-byte blocks and interpreted as single
 instruction that loads 64-bit immediate value into a dst_reg.
 
-Classic BPF has similar instruction: BPF_LD | BPF_W | BPF_IMM which loads
+Classic BPF has similar instruction: ``BPF_LD | BPF_W | BPF_IMM`` which loads
 32-bit immediate value into a register.
 
 eBPF verifier
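
As a rough illustration of the -mcpu discussion in the filter.rst hunk above
(not part of this commit; file name and program are made up): a global
counter bumped through a __sync builtin, which Clang only lowers to the
fetching BPF_ATOMIC form when built with -mcpu=v3 or with the alu32 target
feature forced as described in the documentation text.

  /* atomic_sketch.bpf.c -- illustrative only
   * build: clang -O2 -target bpf -mcpu=v3 -c atomic_sketch.bpf.c -o atomic_sketch.o
   */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  char LICENSE[] SEC("license") = "GPL";

  long counter = 0;   /* global datum, backed by an internal data map */

  SEC("xdp")
  int count_packets(struct xdp_md *ctx)
  {
      /* With -mcpu=v3, Clang lowers this to an atomic add with BPF_FETCH
       * set in the immediate, since the old value is used below; with older
       * -mcpu levels only the plain atomic add without BPF_FETCH (the
       * legacy BPF_XADD encoding) is available. */
      long old = __sync_fetch_and_add(&counter, 1);

      return old < 0 ? XDP_DROP : XDP_PASS;
  }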