author     Linus Torvalds    2023-04-26 16:07:23 -0700
committer  Linus Torvalds    2023-04-26 16:07:23 -0700
commit     6e98b09da931a00bf4e0477d0fa52748bf28fcce
tree       9c658ed95add5693f42f29f63df80a2ede3f6ec2  /include
parent     b68ee1c6131c540a62ecd443be89c406401df091
parent     9b78d919632b7149d311aaad5a977e4b48b10321
Merge tag 'net-next-6.4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Paolo Abeni:
"Core:
- Introduce a config option to tweak MAX_SKB_FRAGS. Increasing the
default value allows for better BIG TCP performance
- Reduce compound page head access for zero-copy data transfers
- RPS/RFS improvements, avoiding unneeded NET_RX_SOFTIRQ when
possible
- Threaded NAPI improvements, adding defer skb free support and
unneeded softirq avoidance
- Address dst_entry reference count scalability issues via false
sharing avoidance and optimized refcount tracking
- Add lockless access annotations to sk_err[_soft] (see the sketch
right after this list)
- Optimize the skb struct layout again
- Extend the skb drop reasons to make them usable by multiple
subsystems
- Better const qualifier awareness for socket casts
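The sk_err[_soft] annotation item above refers to the usual data-race annotation pattern: fields that are read or written without the socket lock get READ_ONCE()/WRITE_ONCE() wrappers. A minimal sketch of that pattern follows; the two helpers are hypothetical and not taken from the series:

```c
#include <linux/compiler.h>	/* READ_ONCE() / WRITE_ONCE() */
#include <net/sock.h>

/* Hypothetical reader: sk_err can change under us without the socket lock,
 * so annotate the load instead of relying on a plain access.
 */
static inline int peek_sock_error(const struct sock *sk)
{
	return READ_ONCE(sk->sk_err);
}

/* Hypothetical writer: pair lockless readers with an annotated store. */
static inline void set_sock_error(struct sock *sk, int err)
{
	WRITE_ONCE(sk->sk_err, err);
}
```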
BPF:
- Add skb and XDP typed dynptrs which allow BPF programs more
ergonomic and less brittle iteration through data and
variable-sized accesses (see the sketch after this list)
- Add a new BPF netfilter program type and minimal support to hook
BPF programs to netfilter hooks such as prerouting or forward
- Add more precise memory usage reporting for all BPF map types
- Add support for using {FOU,GUE} encap with an ipip device
operating in collect_md mode and add a set of BPF kfuncs for
controlling encap params
- Allow BPF programs to detect at load time whether a particular
kfunc exists or not, and also add support for this in light
skeleton
- Bigger batch of BPF verifier improvements to prepare for upcoming
BPF open-coded iterators allowing for less restrictive looping
capabilities
- Rework RCU enforcement in the verifier, add kptr_rcu and require
BPF programs to NULL-check before passing such pointers into kfuncs
- Add support for kptrs in percpu hashmaps, percpu LRU hashmaps and
in local storage maps
- Enable RCU semantics for task BPF kptrs and allow referenced kptr
tasks to be stored in BPF maps
- Add support for refcounted local kptrs to the verifier for allowing
shared ownership, useful for adding a node to both the BPF list and
rbtree
- Add BPF verifier support for ST instructions in
convert_ctx_access() which will help new -mcpu=v4 clang flag to
start emitting them
- Add ARM32 USDT support to libbpf
- Improve bpftool's visual program dump which produces the control
flow graph in a DOT format by adding C source inline annotations
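As a concrete illustration of the skb dynptr and load-time kfunc detection items above, here is a minimal, hypothetical tc classifier (not part of this series) that reads the IPv4 header through an skb dynptr. The kfunc declarations follow the style of the kernel selftests, and bpf_ksym_exists() is assumed to come from a recent libbpf:

```c
// SPDX-License-Identifier: GPL-2.0
/* Hypothetical example: parse the IPv4 header of ingress packets via an
 * skb dynptr instead of manual data/data_end pointer arithmetic.
 */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/in.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

/* kfuncs from this cycle, declared weak so the object still loads on older
 * kernels; bpf_ksym_exists() gates the call at run time.
 */
extern int bpf_dynptr_from_skb(struct __sk_buff *skb, __u64 flags,
			       struct bpf_dynptr *ptr__uninit) __ksym __weak;
extern void *bpf_dynptr_slice(const struct bpf_dynptr *ptr, __u32 offset,
			      void *buffer, __u32 buffer__szk) __ksym __weak;

SEC("tc")
int log_tcp(struct __sk_buff *skb)
{
	struct bpf_dynptr ptr;
	struct iphdr iph_buf;
	const struct iphdr *iph;

	if (!bpf_ksym_exists(bpf_dynptr_from_skb))
		return TC_ACT_OK;	/* kernel lacks skb dynptrs */

	if (bpf_dynptr_from_skb(skb, 0, &ptr))
		return TC_ACT_OK;

	/* Returns a direct pointer when the bytes are linear, otherwise the
	 * data is copied into iph_buf; either way iph is safe to read.
	 */
	iph = bpf_dynptr_slice(&ptr, sizeof(struct ethhdr), &iph_buf, sizeof(iph_buf));
	if (iph && iph->protocol == IPPROTO_TCP)
		bpf_printk("tcp, tot_len=%u", bpf_ntohs(iph->tot_len));

	return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";
```

The same shape works for XDP programs via bpf_dynptr_from_xdp(), and bpf_dynptr_slice_rdwr() when the program needs to modify the bytes.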
Protocols:
- IPv4: Allow adding a 'protocol' tag to an IPv4 address. Such a
value indicates the provenance of the IP address
- IPv6: optimize route lookup, dropping unneeded R/W lock acquisition
- Add the handshake upcall mechanism, allowing user space to
implement a generic TLS handshake on the kernel's behalf
- Bridge: support per-{Port, VLAN} neighbor suppression, increasing
resilience to node failures
- SCTP: add support for Fair Capacity and Weighted Fair Queueing
schedulers
- MPTCP: delay the first subflow allocation until its first use. This
will later allow for better LSM interaction
- xfrm: Remove inner/outer modes from input/output path. These are
not needed anymore
- WiFi:
- reduced neighbor report (RNR) handling for AP mode
- HW timestamping support
- support for randomized auth/deauth TA for PASN privacy
- per-link debugfs for multi-link
- TC offload support for mac80211 drivers
- mac80211 mesh fast-xmit and fast-rx support
- enable Wi-Fi 7 (EHT) mesh support
Netfilter:
- Add nf_tables 'brouting' support, to force a packet to be routed
instead of being bridged
- Update bridge netfilter and ovs conntrack helpers to handle IPv6
Jumbo packets properly, i.e. fetch the packet length from
hop-by-hop extension header. This is needed for BIG TCP support
- The iptables 32bit compat interface isn't compiled in by default
anymore
- Move ip(6)tables builtin icmp matches to the udptcp one. This has
the advantage that icmp/icmpv6 match doesn't load the
iptables/ip6tables modules anymore when iptables-nft is used
- Extend netlink error reporting for netdevices in flowtables and
netdev chains. Allow incrementally adding/deleting devices to a
netdev basechain, and allow creating a netdev chain without a device
Driver API:
- Remove redundant Device Control Error Reporting Enable, as the PCI
core already enables error reporting at enumeration time
- Move Multicast DB netlink handlers to core, allowing devices other
than bridge to use them
- Allow the page_pool to directly recycle the pages from safely
localized NAPI
- Implement lockless TX queue stop/wake combo macros, allowing for
further code de-duplication and sanitization
- Add YNL support for user headers and struct attrs
- Add partial YNL specification for devlink
- Add partial YNL specification for ethtool
- Add tc-mqprio and tc-taprio support for preemptible traffic classes
- Add a tx push buf len param to ethtool, which specifies the maximum
number of bytes of a transmitted packet a driver can push directly
to the underlying device
- Add basic LED support for switch/phy
- Add NAPI documentation, stop relying on external links
- Convert dsa_master_ioctl() to netdev notifier. This is a
preparatory work to make the hardware timestamping layer selectable
by user space
- Add transceiver support and improve the error messages for CAN-FD
controllers
New hardware / drivers:
- Ethernet:
- AMD/Pensando core device support
- MediaTek MT7981 SoC
- MediaTek MT7988 SoC
- Broadcom BCM53134 embedded switch
- Texas Instruments CPSW9G ethernet switch
- Qualcomm EMAC3 DWMAC ethernet
- StarFive JH7110 SoC
- NXP CBTX ethernet PHY
- WiFi:
- Apple M1 Pro/Max devices
- RealTek rtl8710bu/rtl8188gu
- RealTek rtl8822bs, rtl8822cs and rtl8821cs SDIO chipset
- Bluetooth:
- Realtek RTL8821CS, RTL8851B, RTL8852BS
- Mediatek MT7663, MT7922
- NXP w8997
- Actions Semi ATS2851
- QTI WCN6855
- Marvell 88W8997
- Can:
- STMicroelectronics bxcan stm32f429
Drivers:
- Ethernet NICs:
- Intel (1G, igc):
- add tracking and reporting of QBV config errors
- add support for configuring max SDU for each Tx queue
- Intel (100G, ice):
- refactor mailbox overflow detection to support Scalable IOV
- GNSS interface optimization
- Intel (i40e):
- support XDP multi-buffer
- nVidia/Mellanox:
- add the support for linux bridge multicast offload
- enable TC offload for egress and ingress MACVLAN over bond
- add support for VxLAN GBP encap/decap flows offload
- extend packet offload to fully support libreswan
- support tunnel mode in mlx5 IPsec packet offload
- extend XDP multi-buffer support
- support MACsec VLAN offload
- add support for dynamic msix vectors allocation
- drop RX page_cache and fully use page_pool
- implement thermal zone to report NIC temperature
- Netronome/Corigine:
- add support for multi-zone conntrack offload
- Solarflare/Xilinx:
- support offloading TC VLAN push/pop actions to the MAE
- support TC decap rules
- support unicast PTP
- Other NICs:
- Broadcom (bnxt): enforce software based freq adjustments only on
shared PHC NIC
- RealTek (r8169): refactor to address ASPM issues during NAPI poll
- Micrel (lan8841): add support for PTP_PF_PEROUT
- Cadence (macb): enable PTP unicast
- Engleder (tsnep): add XDP socket zero-copy support
- virtio-net: implement exact header length guest feature
- veth: add page_pool support for page recycling
- vxlan: add MDB data path support
- gve: add XDP support for GQI-QPL format
- geneve: accept every ethertype
- macvlan: allow some packets to bypass broadcast queue
- mana: add support for jumbo frame
- Ethernet high-speed switches:
- Microchip (sparx5): Add support for TC flower templates
- Ethernet embedded switches:
- Broadcom (b53):
- configure 6318 and 63268 RGMII ports
- Marvell (mv88e6xxx):
- faster C45 bus scan
- Microchip:
- lan966x:
- add support for IS1 VCAP
- better TX/RX from/to CPU performance
- ksz9477: add ETS Qdisc support
- ksz8: enhance static MAC table operations and error handling
- sama7g5: add PTP capability
- NXP (ocelot):
- add support for external ports
- add support for preemptible traffic classes
- Texas Instruments:
- add CPSWxG SGMII support for J7200 and J721E
- Intel WiFi (iwlwifi):
- preparation for Wi-Fi 7 EHT and multi-link support
- EHT (Wi-Fi 7) sniffer support
- hardware timestamping support for some devices/firmwares
- TX beacon protection on newer hardware
- Qualcomm 802.11ax WiFi (ath11k):
- MU-MIMO parameters support
- ack signal support for management packets
- RealTek WiFi (rtw88):
- SDIO bus support
- better support for some SDIO devices (e.g. MAC address from
efuse)
- RealTek WiFi (rtw89):
- HW scan support for 8852b
- better support for 6 GHz scanning
- support for various newer firmware APIs
- framework firmware backwards compatibility
- MediaTek WiFi (mt76):
- P2P support
- mesh A-MSDU support
- EHT (Wi-Fi 7) support
- coredump support"
* tag 'net-next-6.4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (2078 commits)
net: phy: hide the PHYLIB_LEDS knob
net: phy: marvell-88x2222: remove unnecessary (void*) conversions
tcp/udp: Fix memleaks of sk and zerocopy skbs with TX timestamp.
net: amd: Fix link leak when verifying config failed
net: phy: marvell: Fix inconsistent indenting in led_blink_set
lan966x: Don't use xdp_frame when action is XDP_TX
tsnep: Add XDP socket zero-copy TX support
tsnep: Add XDP socket zero-copy RX support
tsnep: Move skb receive action to separate function
tsnep: Add functions for queue enable/disable
tsnep: Rework TX/RX queue initialization
tsnep: Replace modulo operation with mask
net: phy: dp83867: Add led_brightness_set support
net: phy: Fix reading LED reg property
drivers: nfc: nfcsim: remove return value check of `dev_dir`
net: phy: dp83867: Remove unnecessary (void*) conversions
net: ethtool: coalesce: try to make user settings stick twice
net: mana: Check if netdev/napi_alloc_frag returns single page
net: mana: Rename mana_refill_rxoob and remove some empty lines
net: veth: add page_pool stats
...
Diffstat (limited to 'include')
138 files changed, 5029 insertions, 1164 deletions
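The include/ diff below opens with the atomic fallback headers, which gain acquire/release/relaxed orderings for atomic_add_negative() (and the atomic64/atomic_long counterparts). A small, hypothetical usage sketch of choosing an ordering — not code from this merge:

```c
#include <linux/atomic.h>

/* Hypothetical budget counter: callers subtract their cost and back off once
 * the balance would go negative. The relaxed variant is sufficient while only
 * the counter value matters; the fully ordered atomic_add_negative() or the
 * _release form would be the choice if the update must also publish prior
 * stores to other CPUs.
 */
static atomic_t budget = ATOMIC_INIT(128);

static bool consume_budget(int cost)
{
	if (atomic_add_negative_relaxed(-cost, &budget)) {
		atomic_add(cost, &budget);	/* overshot: undo and fail */
		return false;
	}
	return true;
}
```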
diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h index 77bc5522e61c..4226379a232d 100644 --- a/include/linux/atomic/atomic-arch-fallback.h +++ b/include/linux/atomic/atomic-arch-fallback.h @@ -1208,15 +1208,21 @@ arch_atomic_inc_and_test(atomic_t *v) #define arch_atomic_inc_and_test arch_atomic_inc_and_test #endif +#ifndef arch_atomic_add_negative_relaxed +#ifdef arch_atomic_add_negative +#define arch_atomic_add_negative_acquire arch_atomic_add_negative +#define arch_atomic_add_negative_release arch_atomic_add_negative +#define arch_atomic_add_negative_relaxed arch_atomic_add_negative +#endif /* arch_atomic_add_negative */ + #ifndef arch_atomic_add_negative /** - * arch_atomic_add_negative - add and test if negative + * arch_atomic_add_negative - Add and test if negative * @i: integer value to add * @v: pointer of type atomic_t * - * Atomically adds @i to @v and returns true - * if the result is negative, or false when - * result is greater than or equal to zero. + * Atomically adds @i to @v and returns true if the result is negative, + * or false when the result is greater than or equal to zero. */ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v) @@ -1226,6 +1232,95 @@ arch_atomic_add_negative(int i, atomic_t *v) #define arch_atomic_add_negative arch_atomic_add_negative #endif +#ifndef arch_atomic_add_negative_acquire +/** + * arch_atomic_add_negative_acquire - Add and test if negative + * @i: integer value to add + * @v: pointer of type atomic_t + * + * Atomically adds @i to @v and returns true if the result is negative, + * or false when the result is greater than or equal to zero. + */ +static __always_inline bool +arch_atomic_add_negative_acquire(int i, atomic_t *v) +{ + return arch_atomic_add_return_acquire(i, v) < 0; +} +#define arch_atomic_add_negative_acquire arch_atomic_add_negative_acquire +#endif + +#ifndef arch_atomic_add_negative_release +/** + * arch_atomic_add_negative_release - Add and test if negative + * @i: integer value to add + * @v: pointer of type atomic_t + * + * Atomically adds @i to @v and returns true if the result is negative, + * or false when the result is greater than or equal to zero. + */ +static __always_inline bool +arch_atomic_add_negative_release(int i, atomic_t *v) +{ + return arch_atomic_add_return_release(i, v) < 0; +} +#define arch_atomic_add_negative_release arch_atomic_add_negative_release +#endif + +#ifndef arch_atomic_add_negative_relaxed +/** + * arch_atomic_add_negative_relaxed - Add and test if negative + * @i: integer value to add + * @v: pointer of type atomic_t + * + * Atomically adds @i to @v and returns true if the result is negative, + * or false when the result is greater than or equal to zero. 
+ */ +static __always_inline bool +arch_atomic_add_negative_relaxed(int i, atomic_t *v) +{ + return arch_atomic_add_return_relaxed(i, v) < 0; +} +#define arch_atomic_add_negative_relaxed arch_atomic_add_negative_relaxed +#endif + +#else /* arch_atomic_add_negative_relaxed */ + +#ifndef arch_atomic_add_negative_acquire +static __always_inline bool +arch_atomic_add_negative_acquire(int i, atomic_t *v) +{ + bool ret = arch_atomic_add_negative_relaxed(i, v); + __atomic_acquire_fence(); + return ret; +} +#define arch_atomic_add_negative_acquire arch_atomic_add_negative_acquire +#endif + +#ifndef arch_atomic_add_negative_release +static __always_inline bool +arch_atomic_add_negative_release(int i, atomic_t *v) +{ + __atomic_release_fence(); + return arch_atomic_add_negative_relaxed(i, v); +} +#define arch_atomic_add_negative_release arch_atomic_add_negative_release +#endif + +#ifndef arch_atomic_add_negative +static __always_inline bool +arch_atomic_add_negative(int i, atomic_t *v) +{ + bool ret; + __atomic_pre_full_fence(); + ret = arch_atomic_add_negative_relaxed(i, v); + __atomic_post_full_fence(); + return ret; +} +#define arch_atomic_add_negative arch_atomic_add_negative +#endif + +#endif /* arch_atomic_add_negative_relaxed */ + #ifndef arch_atomic_fetch_add_unless /** * arch_atomic_fetch_add_unless - add unless the number is already a given value @@ -2329,15 +2424,21 @@ arch_atomic64_inc_and_test(atomic64_t *v) #define arch_atomic64_inc_and_test arch_atomic64_inc_and_test #endif +#ifndef arch_atomic64_add_negative_relaxed +#ifdef arch_atomic64_add_negative +#define arch_atomic64_add_negative_acquire arch_atomic64_add_negative +#define arch_atomic64_add_negative_release arch_atomic64_add_negative +#define arch_atomic64_add_negative_relaxed arch_atomic64_add_negative +#endif /* arch_atomic64_add_negative */ + #ifndef arch_atomic64_add_negative /** - * arch_atomic64_add_negative - add and test if negative + * arch_atomic64_add_negative - Add and test if negative * @i: integer value to add * @v: pointer of type atomic64_t * - * Atomically adds @i to @v and returns true - * if the result is negative, or false when - * result is greater than or equal to zero. + * Atomically adds @i to @v and returns true if the result is negative, + * or false when the result is greater than or equal to zero. */ static __always_inline bool arch_atomic64_add_negative(s64 i, atomic64_t *v) @@ -2347,6 +2448,95 @@ arch_atomic64_add_negative(s64 i, atomic64_t *v) #define arch_atomic64_add_negative arch_atomic64_add_negative #endif +#ifndef arch_atomic64_add_negative_acquire +/** + * arch_atomic64_add_negative_acquire - Add and test if negative + * @i: integer value to add + * @v: pointer of type atomic64_t + * + * Atomically adds @i to @v and returns true if the result is negative, + * or false when the result is greater than or equal to zero. + */ +static __always_inline bool +arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v) +{ + return arch_atomic64_add_return_acquire(i, v) < 0; +} +#define arch_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire +#endif + +#ifndef arch_atomic64_add_negative_release +/** + * arch_atomic64_add_negative_release - Add and test if negative + * @i: integer value to add + * @v: pointer of type atomic64_t + * + * Atomically adds @i to @v and returns true if the result is negative, + * or false when the result is greater than or equal to zero. 
+ */ +static __always_inline bool +arch_atomic64_add_negative_release(s64 i, atomic64_t *v) +{ + return arch_atomic64_add_return_release(i, v) < 0; +} +#define arch_atomic64_add_negative_release arch_atomic64_add_negative_release +#endif + +#ifndef arch_atomic64_add_negative_relaxed +/** + * arch_atomic64_add_negative_relaxed - Add and test if negative + * @i: integer value to add + * @v: pointer of type atomic64_t + * + * Atomically adds @i to @v and returns true if the result is negative, + * or false when the result is greater than or equal to zero. + */ +static __always_inline bool +arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v) +{ + return arch_atomic64_add_return_relaxed(i, v) < 0; +} +#define arch_atomic64_add_negative_relaxed arch_atomic64_add_negative_relaxed +#endif + +#else /* arch_atomic64_add_negative_relaxed */ + +#ifndef arch_atomic64_add_negative_acquire +static __always_inline bool +arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v) +{ + bool ret = arch_atomic64_add_negative_relaxed(i, v); + __atomic_acquire_fence(); + return ret; +} +#define arch_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire +#endif + +#ifndef arch_atomic64_add_negative_release +static __always_inline bool +arch_atomic64_add_negative_release(s64 i, atomic64_t *v) +{ + __atomic_release_fence(); + return arch_atomic64_add_negative_relaxed(i, v); +} +#define arch_atomic64_add_negative_release arch_atomic64_add_negative_release +#endif + +#ifndef arch_atomic64_add_negative +static __always_inline bool +arch_atomic64_add_negative(s64 i, atomic64_t *v) +{ + bool ret; + __atomic_pre_full_fence(); + ret = arch_atomic64_add_negative_relaxed(i, v); + __atomic_post_full_fence(); + return ret; +} +#define arch_atomic64_add_negative arch_atomic64_add_negative +#endif + +#endif /* arch_atomic64_add_negative_relaxed */ + #ifndef arch_atomic64_fetch_add_unless /** * arch_atomic64_fetch_add_unless - add unless the number is already a given value @@ -2456,4 +2646,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v) #endif #endif /* _LINUX_ATOMIC_FALLBACK_H */ -// b5e87bdd5ede61470c29f7a7e4de781af3770f09 +// 00071fffa021cec66f6290d706d69c91df87bade diff --git a/include/linux/atomic/atomic-instrumented.h b/include/linux/atomic/atomic-instrumented.h index 7a139ec030b0..0496816738ca 100644 --- a/include/linux/atomic/atomic-instrumented.h +++ b/include/linux/atomic/atomic-instrumented.h @@ -592,6 +592,28 @@ atomic_add_negative(int i, atomic_t *v) return arch_atomic_add_negative(i, v); } +static __always_inline bool +atomic_add_negative_acquire(int i, atomic_t *v) +{ + instrument_atomic_read_write(v, sizeof(*v)); + return arch_atomic_add_negative_acquire(i, v); +} + +static __always_inline bool +atomic_add_negative_release(int i, atomic_t *v) +{ + kcsan_release(); + instrument_atomic_read_write(v, sizeof(*v)); + return arch_atomic_add_negative_release(i, v); +} + +static __always_inline bool +atomic_add_negative_relaxed(int i, atomic_t *v) +{ + instrument_atomic_read_write(v, sizeof(*v)); + return arch_atomic_add_negative_relaxed(i, v); +} + static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { @@ -1211,6 +1233,28 @@ atomic64_add_negative(s64 i, atomic64_t *v) return arch_atomic64_add_negative(i, v); } +static __always_inline bool +atomic64_add_negative_acquire(s64 i, atomic64_t *v) +{ + instrument_atomic_read_write(v, sizeof(*v)); + return arch_atomic64_add_negative_acquire(i, v); +} + +static __always_inline bool +atomic64_add_negative_release(s64 i, atomic64_t *v) +{ + 
kcsan_release(); + instrument_atomic_read_write(v, sizeof(*v)); + return arch_atomic64_add_negative_release(i, v); +} + +static __always_inline bool +atomic64_add_negative_relaxed(s64 i, atomic64_t *v) +{ + instrument_atomic_read_write(v, sizeof(*v)); + return arch_atomic64_add_negative_relaxed(i, v); +} + static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { @@ -1830,6 +1874,28 @@ atomic_long_add_negative(long i, atomic_long_t *v) return arch_atomic_long_add_negative(i, v); } +static __always_inline bool +atomic_long_add_negative_acquire(long i, atomic_long_t *v) +{ + instrument_atomic_read_write(v, sizeof(*v)); + return arch_atomic_long_add_negative_acquire(i, v); +} + +static __always_inline bool +atomic_long_add_negative_release(long i, atomic_long_t *v) +{ + kcsan_release(); + instrument_atomic_read_write(v, sizeof(*v)); + return arch_atomic_long_add_negative_release(i, v); +} + +static __always_inline bool +atomic_long_add_negative_relaxed(long i, atomic_long_t *v) +{ + instrument_atomic_read_write(v, sizeof(*v)); + return arch_atomic_long_add_negative_relaxed(i, v); +} + static __always_inline long atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { @@ -2083,4 +2149,4 @@ atomic_long_dec_if_positive(atomic_long_t *v) }) #endif /* _LINUX_ATOMIC_INSTRUMENTED_H */ -// 764f741eb77a7ad565dc8d99ce2837d5542e8aee +// 1b485de9cbaa4900de59e14ee2084357eaeb1c3a diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h index 800b8c35992d..2fc51ba66beb 100644 --- a/include/linux/atomic/atomic-long.h +++ b/include/linux/atomic/atomic-long.h @@ -479,6 +479,24 @@ arch_atomic_long_add_negative(long i, atomic_long_t *v) return arch_atomic64_add_negative(i, v); } +static __always_inline bool +arch_atomic_long_add_negative_acquire(long i, atomic_long_t *v) +{ + return arch_atomic64_add_negative_acquire(i, v); +} + +static __always_inline bool +arch_atomic_long_add_negative_release(long i, atomic_long_t *v) +{ + return arch_atomic64_add_negative_release(i, v); +} + +static __always_inline bool +arch_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic64_add_negative_relaxed(i, v); +} + static __always_inline long arch_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { @@ -973,6 +991,24 @@ arch_atomic_long_add_negative(long i, atomic_long_t *v) return arch_atomic_add_negative(i, v); } +static __always_inline bool +arch_atomic_long_add_negative_acquire(long i, atomic_long_t *v) +{ + return arch_atomic_add_negative_acquire(i, v); +} + +static __always_inline bool +arch_atomic_long_add_negative_release(long i, atomic_long_t *v) +{ + return arch_atomic_add_negative_release(i, v); +} + +static __always_inline bool +arch_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) +{ + return arch_atomic_add_negative_relaxed(i, v); +} + static __always_inline long arch_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { @@ -1011,4 +1047,4 @@ arch_atomic_long_dec_if_positive(atomic_long_t *v) #endif /* CONFIG_64BIT */ #endif /* _LINUX_ATOMIC_LONG_H */ -// e8f0e08ff072b74d180eabe2ad001282b38c2c88 +// a194c07d7d2f4b0e178d3c118c919775d5d65f50 diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 520b238abd5a..e53ceee1df37 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -96,11 +96,11 @@ struct bpf_map_ops { /* funcs callable from userspace and from eBPF programs */ void *(*map_lookup_elem)(struct bpf_map *map, void *key); - int (*map_update_elem)(struct bpf_map *map, void 
*key, void *value, u64 flags); - int (*map_delete_elem)(struct bpf_map *map, void *key); - int (*map_push_elem)(struct bpf_map *map, void *value, u64 flags); - int (*map_pop_elem)(struct bpf_map *map, void *value); - int (*map_peek_elem)(struct bpf_map *map, void *value); + long (*map_update_elem)(struct bpf_map *map, void *key, void *value, u64 flags); + long (*map_delete_elem)(struct bpf_map *map, void *key); + long (*map_push_elem)(struct bpf_map *map, void *value, u64 flags); + long (*map_pop_elem)(struct bpf_map *map, void *value); + long (*map_peek_elem)(struct bpf_map *map, void *value); void *(*map_lookup_percpu_elem)(struct bpf_map *map, void *key, u32 cpu); /* funcs called by prog_array and perf_event_array map */ @@ -139,7 +139,7 @@ struct bpf_map_ops { struct bpf_local_storage __rcu ** (*map_owner_storage_ptr)(void *owner); /* Misc helpers.*/ - int (*map_redirect)(struct bpf_map *map, u64 key, u64 flags); + long (*map_redirect)(struct bpf_map *map, u64 key, u64 flags); /* map_meta_equal must be implemented for maps that can be * used as an inner map. It is a runtime check to ensure @@ -157,10 +157,12 @@ struct bpf_map_ops { int (*map_set_for_each_callback_args)(struct bpf_verifier_env *env, struct bpf_func_state *caller, struct bpf_func_state *callee); - int (*map_for_each_callback)(struct bpf_map *map, + long (*map_for_each_callback)(struct bpf_map *map, bpf_callback_t callback_fn, void *callback_ctx, u64 flags); + u64 (*map_mem_usage)(const struct bpf_map *map); + /* BTF id of struct allocated by map_alloc */ int *map_btf_id; @@ -185,11 +187,17 @@ enum btf_field_type { BPF_RB_NODE = (1 << 7), BPF_GRAPH_NODE_OR_ROOT = BPF_LIST_NODE | BPF_LIST_HEAD | BPF_RB_NODE | BPF_RB_ROOT, + BPF_REFCOUNT = (1 << 8), }; +typedef void (*btf_dtor_kfunc_t)(void *); + struct btf_field_kptr { struct btf *btf; struct module *module; + /* dtor used if btf_is_kernel(btf), otherwise the type is + * program-allocated, dtor is NULL, and __bpf_obj_drop_impl is used + */ btf_dtor_kfunc_t dtor; u32 btf_id; }; @@ -203,6 +211,7 @@ struct btf_field_graph_root { struct btf_field { u32 offset; + u32 size; enum btf_field_type type; union { struct btf_field_kptr kptr; @@ -215,15 +224,10 @@ struct btf_record { u32 field_mask; int spin_lock_off; int timer_off; + int refcount_off; struct btf_field fields[]; }; -struct btf_field_offs { - u32 cnt; - u32 field_off[BTF_FIELDS_MAX]; - u8 field_sz[BTF_FIELDS_MAX]; -}; - struct bpf_map { /* The first two cachelines with read-mostly members of which some * are also accessed in fast-path (e.g. ops, max_entries). @@ -250,7 +254,6 @@ struct bpf_map { struct obj_cgroup *objcg; #endif char name[BPF_OBJ_NAME_LEN]; - struct btf_field_offs *field_offs; /* The 3rd and 4th cacheline with misc members to avoid false sharing * particularly with refcounting. 
*/ @@ -292,6 +295,8 @@ static inline const char *btf_field_type_name(enum btf_field_type type) return "bpf_rb_root"; case BPF_RB_NODE: return "bpf_rb_node"; + case BPF_REFCOUNT: + return "bpf_refcount"; default: WARN_ON_ONCE(1); return "unknown"; @@ -316,6 +321,8 @@ static inline u32 btf_field_type_size(enum btf_field_type type) return sizeof(struct bpf_rb_root); case BPF_RB_NODE: return sizeof(struct bpf_rb_node); + case BPF_REFCOUNT: + return sizeof(struct bpf_refcount); default: WARN_ON_ONCE(1); return 0; @@ -340,12 +347,42 @@ static inline u32 btf_field_type_align(enum btf_field_type type) return __alignof__(struct bpf_rb_root); case BPF_RB_NODE: return __alignof__(struct bpf_rb_node); + case BPF_REFCOUNT: + return __alignof__(struct bpf_refcount); default: WARN_ON_ONCE(1); return 0; } } +static inline void bpf_obj_init_field(const struct btf_field *field, void *addr) +{ + memset(addr, 0, field->size); + + switch (field->type) { + case BPF_REFCOUNT: + refcount_set((refcount_t *)addr, 1); + break; + case BPF_RB_NODE: + RB_CLEAR_NODE((struct rb_node *)addr); + break; + case BPF_LIST_HEAD: + case BPF_LIST_NODE: + INIT_LIST_HEAD((struct list_head *)addr); + break; + case BPF_RB_ROOT: + /* RB_ROOT_CACHED 0-inits, no need to do anything after memset */ + case BPF_SPIN_LOCK: + case BPF_TIMER: + case BPF_KPTR_UNREF: + case BPF_KPTR_REF: + break; + default: + WARN_ON_ONCE(1); + return; + } +} + static inline bool btf_record_has_field(const struct btf_record *rec, enum btf_field_type type) { if (IS_ERR_OR_NULL(rec)) @@ -353,14 +390,14 @@ static inline bool btf_record_has_field(const struct btf_record *rec, enum btf_f return rec->field_mask & type; } -static inline void bpf_obj_init(const struct btf_field_offs *foffs, void *obj) +static inline void bpf_obj_init(const struct btf_record *rec, void *obj) { int i; - if (!foffs) + if (IS_ERR_OR_NULL(rec)) return; - for (i = 0; i < foffs->cnt; i++) - memset(obj + foffs->field_off[i], 0, foffs->field_sz[i]); + for (i = 0; i < rec->cnt; i++) + bpf_obj_init_field(&rec->fields[i], obj + rec->fields[i].offset); } /* 'dst' must be a temporary buffer and should not point to memory that is being @@ -372,7 +409,7 @@ static inline void bpf_obj_init(const struct btf_field_offs *foffs, void *obj) */ static inline void check_and_init_map_value(struct bpf_map *map, void *dst) { - bpf_obj_init(map->field_offs, dst); + bpf_obj_init(map->record, dst); } /* memcpy that is used with 8-byte aligned pointers, power-of-8 size and @@ -392,14 +429,14 @@ static inline void bpf_long_memcpy(void *dst, const void *src, u32 size) } /* copy everything but bpf_spin_lock, bpf_timer, and kptrs. There could be one of each. 
*/ -static inline void bpf_obj_memcpy(struct btf_field_offs *foffs, +static inline void bpf_obj_memcpy(struct btf_record *rec, void *dst, void *src, u32 size, bool long_memcpy) { u32 curr_off = 0; int i; - if (likely(!foffs)) { + if (IS_ERR_OR_NULL(rec)) { if (long_memcpy) bpf_long_memcpy(dst, src, round_up(size, 8)); else @@ -407,49 +444,49 @@ static inline void bpf_obj_memcpy(struct btf_field_offs *foffs, return; } - for (i = 0; i < foffs->cnt; i++) { - u32 next_off = foffs->field_off[i]; + for (i = 0; i < rec->cnt; i++) { + u32 next_off = rec->fields[i].offset; u32 sz = next_off - curr_off; memcpy(dst + curr_off, src + curr_off, sz); - curr_off += foffs->field_sz[i] + sz; + curr_off += rec->fields[i].size + sz; } memcpy(dst + curr_off, src + curr_off, size - curr_off); } static inline void copy_map_value(struct bpf_map *map, void *dst, void *src) { - bpf_obj_memcpy(map->field_offs, dst, src, map->value_size, false); + bpf_obj_memcpy(map->record, dst, src, map->value_size, false); } static inline void copy_map_value_long(struct bpf_map *map, void *dst, void *src) { - bpf_obj_memcpy(map->field_offs, dst, src, map->value_size, true); + bpf_obj_memcpy(map->record, dst, src, map->value_size, true); } -static inline void bpf_obj_memzero(struct btf_field_offs *foffs, void *dst, u32 size) +static inline void bpf_obj_memzero(struct btf_record *rec, void *dst, u32 size) { u32 curr_off = 0; int i; - if (likely(!foffs)) { + if (IS_ERR_OR_NULL(rec)) { memset(dst, 0, size); return; } - for (i = 0; i < foffs->cnt; i++) { - u32 next_off = foffs->field_off[i]; + for (i = 0; i < rec->cnt; i++) { + u32 next_off = rec->fields[i].offset; u32 sz = next_off - curr_off; memset(dst + curr_off, 0, sz); - curr_off += foffs->field_sz[i] + sz; + curr_off += rec->fields[i].size + sz; } memset(dst + curr_off, 0, size - curr_off); } static inline void zero_map_value(struct bpf_map *map, void *dst) { - bpf_obj_memzero(map->field_offs, dst, map->value_size); + bpf_obj_memzero(map->record, dst, map->value_size); } void copy_map_value_locked(struct bpf_map *map, void *dst, void *src, @@ -607,11 +644,18 @@ enum bpf_type_flag { */ NON_OWN_REF = BIT(14 + BPF_BASE_TYPE_BITS), + /* DYNPTR points to sk_buff */ + DYNPTR_TYPE_SKB = BIT(15 + BPF_BASE_TYPE_BITS), + + /* DYNPTR points to xdp_buff */ + DYNPTR_TYPE_XDP = BIT(16 + BPF_BASE_TYPE_BITS), + __BPF_TYPE_FLAG_MAX, __BPF_TYPE_LAST_FLAG = __BPF_TYPE_FLAG_MAX - 1, }; -#define DYNPTR_TYPE_FLAG_MASK (DYNPTR_TYPE_LOCAL | DYNPTR_TYPE_RINGBUF) +#define DYNPTR_TYPE_FLAG_MASK (DYNPTR_TYPE_LOCAL | DYNPTR_TYPE_RINGBUF | DYNPTR_TYPE_SKB \ + | DYNPTR_TYPE_XDP) /* Max number of base types. */ #define BPF_BASE_TYPE_LIMIT (1UL << BPF_BASE_TYPE_BITS) @@ -879,8 +923,7 @@ struct bpf_verifier_ops { struct bpf_prog *prog, u32 *target_size); int (*btf_struct_access)(struct bpf_verifier_log *log, const struct bpf_reg_state *reg, - int off, int size, enum bpf_access_type atype, - u32 *next_btf_id, enum bpf_type_flag *flag); + int off, int size); }; struct bpf_prog_offload_ops { @@ -1089,6 +1132,7 @@ struct bpf_trampoline { struct bpf_attach_target_info { struct btf_func_model fmodel; long tgt_addr; + struct module *tgt_mod; const char *tgt_name; const struct btf_type *tgt_type; }; @@ -1124,6 +1168,37 @@ static __always_inline __nocfi unsigned int bpf_dispatcher_nop_func( return bpf_func(ctx, insnsi); } +/* the implementation of the opaque uapi struct bpf_dynptr */ +struct bpf_dynptr_kern { + void *data; + /* Size represents the number of usable bytes of dynptr data. 
+ * If for example the offset is at 4 for a local dynptr whose data is + * of type u64, the number of usable bytes is 4. + * + * The upper 8 bits are reserved. It is as follows: + * Bits 0 - 23 = size + * Bits 24 - 30 = dynptr type + * Bit 31 = whether dynptr is read-only + */ + u32 size; + u32 offset; +} __aligned(8); + +enum bpf_dynptr_type { + BPF_DYNPTR_TYPE_INVALID, + /* Points to memory that is local to the bpf program */ + BPF_DYNPTR_TYPE_LOCAL, + /* Underlying data is a ringbuf record */ + BPF_DYNPTR_TYPE_RINGBUF, + /* Underlying data is a sk_buff */ + BPF_DYNPTR_TYPE_SKB, + /* Underlying data is a xdp_buff */ + BPF_DYNPTR_TYPE_XDP, +}; + +int bpf_dynptr_check_size(u32 size); +u32 bpf_dynptr_get_size(const struct bpf_dynptr_kern *ptr); + #ifdef CONFIG_BPF_JIT int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr); int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr); @@ -1361,6 +1436,7 @@ struct bpf_prog_aux { * main prog always has linfo_idx == 0 */ u32 linfo_idx; + struct module *mod; u32 num_exentries; struct exception_table_entry *extable; union { @@ -1429,6 +1505,8 @@ struct bpf_link_ops { void (*show_fdinfo)(const struct bpf_link *link, struct seq_file *seq); int (*fill_link_info)(const struct bpf_link *link, struct bpf_link_info *info); + int (*update_map)(struct bpf_link *link, struct bpf_map *new_map, + struct bpf_map *old_map); }; struct bpf_tramp_link { @@ -1471,6 +1549,8 @@ struct bpf_struct_ops { void *kdata, const void *udata); int (*reg)(void *kdata); void (*unreg)(void *kdata); + int (*update)(void *kdata, void *old_kdata); + int (*validate)(void *kdata); const struct btf_type *type; const struct btf_type *value_type; const char *name; @@ -1505,6 +1585,7 @@ static inline void bpf_module_put(const void *data, struct module *owner) else module_put(owner); } +int bpf_struct_ops_link_create(union bpf_attr *attr); #ifdef CONFIG_NET /* Define it here to avoid the use of forward declaration */ @@ -1545,6 +1626,11 @@ static inline int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, { return -EINVAL; } +static inline int bpf_struct_ops_link_create(union bpf_attr *attr) +{ + return -EOPNOTSUPP; +} + #endif #if defined(CONFIG_CGROUP_BPF) && defined(CONFIG_BPF_LSM) @@ -1577,8 +1663,12 @@ struct bpf_array { #define BPF_COMPLEXITY_LIMIT_INSNS 1000000 /* yes. 1M insns */ #define MAX_TAIL_CALL_CNT 33 -/* Maximum number of loops for bpf_loop */ -#define BPF_MAX_LOOPS BIT(23) +/* Maximum number of loops for bpf_loop and bpf_iter_num. + * It's enum to expose it (and thus make it discoverable) through BTF. 
+ */ +enum { + BPF_MAX_LOOPS = 8 * 1024 * 1024, +}; #define BPF_F_ACCESS_MASK (BPF_F_RDONLY | \ BPF_F_RDONLY_PROG | \ @@ -1881,7 +1971,7 @@ void bpf_prog_free_id(struct bpf_prog *prog); void bpf_map_free_id(struct bpf_map *map); struct btf_field *btf_record_find(const struct btf_record *rec, - u32 offset, enum btf_field_type type); + u32 offset, u32 field_mask); void btf_record_free(struct btf_record *rec); void bpf_map_free_record(struct bpf_map *map); struct btf_record *btf_record_dup(const struct btf_record *rec); @@ -1894,6 +1984,7 @@ struct bpf_map *bpf_map_get_with_uref(u32 ufd); struct bpf_map *__bpf_map_get(struct fd f); void bpf_map_inc(struct bpf_map *map); void bpf_map_inc_with_uref(struct bpf_map *map); +struct bpf_map *__bpf_map_inc_not_zero(struct bpf_map *map, bool uref); struct bpf_map * __must_check bpf_map_inc_not_zero(struct bpf_map *map); void bpf_map_put_with_uref(struct bpf_map *map); void bpf_map_put(struct bpf_map *map); @@ -2114,7 +2205,7 @@ int bpf_check_uarg_tail_zero(bpfptr_t uaddr, size_t expected_size, size_t actual_size); /* verify correctness of eBPF program */ -int bpf_check(struct bpf_prog **fp, union bpf_attr *attr, bpfptr_t uattr); +int bpf_check(struct bpf_prog **fp, union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size); #ifndef CONFIG_BPF_JIT_ALWAYS_ON void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth); @@ -2173,6 +2264,9 @@ int bpf_prog_test_run_raw_tp(struct bpf_prog *prog, int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kattr, union bpf_attr __user *uattr); +int bpf_prog_test_run_nf(struct bpf_prog *prog, + const union bpf_attr *kattr, + union bpf_attr __user *uattr); bool btf_ctx_access(int off, int size, enum bpf_access_type type, const struct bpf_prog *prog, struct bpf_insn_access_aux *info); @@ -2202,7 +2296,7 @@ static inline bool bpf_tracing_btf_ctx_access(int off, int size, int btf_struct_access(struct bpf_verifier_log *log, const struct bpf_reg_state *reg, int off, int size, enum bpf_access_type atype, - u32 *next_btf_id, enum bpf_type_flag *flag); + u32 *next_btf_id, enum bpf_type_flag *flag, const char **field_name); bool btf_struct_ids_match(struct bpf_verifier_log *log, const struct btf *btf, u32 id, int off, const struct btf *need_btf, u32 need_type_id, @@ -2234,6 +2328,9 @@ bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog); const struct btf_func_model * bpf_jit_find_kfunc_model(const struct bpf_prog *prog, const struct bpf_insn *insn); +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, + u16 btf_fd_idx, u8 **func_addr); + struct bpf_core_ctx { struct bpf_verifier_log *log; const struct btf *btf; @@ -2241,7 +2338,7 @@ struct bpf_core_ctx { bool btf_nested_type_is_trusted(struct bpf_verifier_log *log, const struct bpf_reg_state *reg, - int off); + const char *field_name, u32 btf_id, const char *suffix); bool btf_type_ids_nocast_alias(struct bpf_verifier_log *log, const struct btf *reg_btf, u32 reg_id, @@ -2266,6 +2363,11 @@ static inline bool has_current_bpf_ctx(void) } void notrace bpf_prog_inc_misses_counter(struct bpf_prog *prog); + +void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data, + enum bpf_dynptr_type type, u32 offset, u32 size); +void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr); +void bpf_dynptr_set_rdonly(struct bpf_dynptr_kern *ptr); #else /* !CONFIG_BPF_SYSCALL */ static inline struct bpf_prog *bpf_prog_get(u32 ufd) { @@ -2451,7 +2553,8 @@ static inline struct bpf_prog *bpf_prog_by_id(u32 id) static inline int btf_struct_access(struct 
bpf_verifier_log *log, const struct bpf_reg_state *reg, int off, int size, enum bpf_access_type atype, - u32 *next_btf_id, enum bpf_type_flag *flag) + u32 *next_btf_id, enum bpf_type_flag *flag, + const char **field_name) { return -EACCES; } @@ -2478,6 +2581,13 @@ bpf_jit_find_kfunc_model(const struct bpf_prog *prog, return NULL; } +static inline int +bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, + u16 btf_fd_idx, u8 **func_addr) +{ + return -ENOTSUPP; +} + static inline bool unprivileged_ebpf_enabled(void) { return false; @@ -2495,6 +2605,19 @@ static inline void bpf_prog_inc_misses_counter(struct bpf_prog *prog) static inline void bpf_cgrp_storage_free(struct cgroup *cgroup) { } + +static inline void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data, + enum bpf_dynptr_type type, u32 offset, u32 size) +{ +} + +static inline void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr) +{ +} + +static inline void bpf_dynptr_set_rdonly(struct bpf_dynptr_kern *ptr) +{ +} #endif /* CONFIG_BPF_SYSCALL */ void __bpf_free_used_btfs(struct bpf_prog_aux *aux, @@ -2566,6 +2689,7 @@ static inline bool bpf_map_is_offloaded(struct bpf_map *map) struct bpf_map *bpf_map_offload_map_alloc(union bpf_attr *attr); void bpf_map_offload_map_free(struct bpf_map *map); +u64 bpf_map_offload_map_mem_usage(const struct bpf_map *map); int bpf_prog_test_run_syscall(struct bpf_prog *prog, const union bpf_attr *kattr, union bpf_attr __user *uattr); @@ -2637,6 +2761,11 @@ static inline void bpf_map_offload_map_free(struct bpf_map *map) { } +static inline u64 bpf_map_offload_map_mem_usage(const struct bpf_map *map) +{ + return 0; +} + static inline int bpf_prog_test_run_syscall(struct bpf_prog *prog, const union bpf_attr *kattr, union bpf_attr __user *uattr) @@ -2801,6 +2930,8 @@ u32 bpf_sock_convert_ctx_access(enum bpf_access_type type, struct bpf_insn *insn_buf, struct bpf_prog *prog, u32 *target_size); +int bpf_dynptr_from_skb_rdonly(struct sk_buff *skb, u64 flags, + struct bpf_dynptr_kern *ptr); #else static inline bool bpf_sock_common_is_valid_access(int off, int size, enum bpf_access_type type, @@ -2822,6 +2953,11 @@ static inline u32 bpf_sock_convert_ctx_access(enum bpf_access_type type, { return 0; } +static inline int bpf_dynptr_from_skb_rdonly(struct sk_buff *skb, u64 flags, + struct bpf_dynptr_kern *ptr) +{ + return -EOPNOTSUPP; +} #endif #ifdef CONFIG_INET @@ -2913,36 +3049,6 @@ int bpf_bprintf_prepare(char *fmt, u32 fmt_size, const u64 *raw_args, u32 num_args, struct bpf_bprintf_data *data); void bpf_bprintf_cleanup(struct bpf_bprintf_data *data); -/* the implementation of the opaque uapi struct bpf_dynptr */ -struct bpf_dynptr_kern { - void *data; - /* Size represents the number of usable bytes of dynptr data. - * If for example the offset is at 4 for a local dynptr whose data is - * of type u64, the number of usable bytes is 4. - * - * The upper 8 bits are reserved. 
It is as follows: - * Bits 0 - 23 = size - * Bits 24 - 30 = dynptr type - * Bit 31 = whether dynptr is read-only - */ - u32 size; - u32 offset; -} __aligned(8); - -enum bpf_dynptr_type { - BPF_DYNPTR_TYPE_INVALID, - /* Points to memory that is local to the bpf program */ - BPF_DYNPTR_TYPE_LOCAL, - /* Underlying data is a kernel-produced ringbuf record */ - BPF_DYNPTR_TYPE_RINGBUF, -}; - -void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data, - enum bpf_dynptr_type type, u32 offset, u32 size); -void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr); -int bpf_dynptr_check_size(u32 size); -u32 bpf_dynptr_get_size(const struct bpf_dynptr_kern *ptr); - #ifdef CONFIG_BPF_LSM void bpf_cgroup_atype_get(u32 attach_btf_id, int cgroup_atype); void bpf_cgroup_atype_put(int cgroup_atype); diff --git a/include/linux/bpf_local_storage.h b/include/linux/bpf_local_storage.h index 6d37a40cd90e..173ec7f43ed1 100644 --- a/include/linux/bpf_local_storage.h +++ b/include/linux/bpf_local_storage.h @@ -13,6 +13,7 @@ #include <linux/list.h> #include <linux/hash.h> #include <linux/types.h> +#include <linux/bpf_mem_alloc.h> #include <uapi/linux/btf.h> #define BPF_LOCAL_STORAGE_CACHE_SIZE 16 @@ -55,6 +56,9 @@ struct bpf_local_storage_map { u32 bucket_log; u16 elem_size; u16 cache_idx; + struct bpf_mem_alloc selem_ma; + struct bpf_mem_alloc storage_ma; + bool bpf_ma; }; struct bpf_local_storage_data { @@ -83,6 +87,7 @@ struct bpf_local_storage_elem { struct bpf_local_storage { struct bpf_local_storage_data __rcu *cache[BPF_LOCAL_STORAGE_CACHE_SIZE]; + struct bpf_local_storage_map __rcu *smap; struct hlist_head list; /* List of bpf_local_storage_elem */ void *owner; /* The object that owns the above "list" of * bpf_local_storage_elem. @@ -121,14 +126,15 @@ int bpf_local_storage_map_alloc_check(union bpf_attr *attr); struct bpf_map * bpf_local_storage_map_alloc(union bpf_attr *attr, - struct bpf_local_storage_cache *cache); + struct bpf_local_storage_cache *cache, + bool bpf_ma); struct bpf_local_storage_data * bpf_local_storage_lookup(struct bpf_local_storage *local_storage, struct bpf_local_storage_map *smap, bool cacheit_lockit); -bool bpf_local_storage_unlink_nolock(struct bpf_local_storage *local_storage); +void bpf_local_storage_destroy(struct bpf_local_storage *local_storage); void bpf_local_storage_map_free(struct bpf_map *map, struct bpf_local_storage_cache *cache, @@ -142,17 +148,19 @@ int bpf_local_storage_map_check_btf(const struct bpf_map *map, void bpf_selem_link_storage_nolock(struct bpf_local_storage *local_storage, struct bpf_local_storage_elem *selem); -void bpf_selem_unlink(struct bpf_local_storage_elem *selem, bool use_trace_rcu); +void bpf_selem_unlink(struct bpf_local_storage_elem *selem, bool reuse_now); void bpf_selem_link_map(struct bpf_local_storage_map *smap, struct bpf_local_storage_elem *selem); -void bpf_selem_unlink_map(struct bpf_local_storage_elem *selem); - struct bpf_local_storage_elem * bpf_selem_alloc(struct bpf_local_storage_map *smap, void *owner, void *value, bool charge_mem, gfp_t gfp_flags); +void bpf_selem_free(struct bpf_local_storage_elem *selem, + struct bpf_local_storage_map *smap, + bool reuse_now); + int bpf_local_storage_alloc(void *owner, struct bpf_local_storage_map *smap, @@ -163,6 +171,6 @@ struct bpf_local_storage_data * bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap, void *value, u64 map_flags, gfp_t gfp_flags); -void bpf_local_storage_free_rcu(struct rcu_head *rcu); +u64 bpf_local_storage_map_mem_usage(const struct bpf_map *map); 
#endif /* _BPF_LOCAL_STORAGE_H */ diff --git a/include/linux/bpf_mem_alloc.h b/include/linux/bpf_mem_alloc.h index 3e164b8efaa9..3929be5743f4 100644 --- a/include/linux/bpf_mem_alloc.h +++ b/include/linux/bpf_mem_alloc.h @@ -14,6 +14,13 @@ struct bpf_mem_alloc { struct work_struct work; }; +/* 'size != 0' is for bpf_mem_alloc which manages fixed-size objects. + * Alloc and free are done with bpf_mem_cache_{alloc,free}(). + * + * 'size = 0' is for bpf_mem_alloc which manages many fixed-size objects. + * Alloc and free are done with bpf_mem_{alloc,free}() and the size of + * the returned object is given by the size argument of bpf_mem_alloc(). + */ int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu); void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma); @@ -24,5 +31,7 @@ void bpf_mem_free(struct bpf_mem_alloc *ma, void *ptr); /* kmem_cache_alloc/free equivalent: */ void *bpf_mem_cache_alloc(struct bpf_mem_alloc *ma); void bpf_mem_cache_free(struct bpf_mem_alloc *ma, void *ptr); +void bpf_mem_cache_raw_free(void *ptr); +void *bpf_mem_cache_alloc_flags(struct bpf_mem_alloc *ma, gfp_t flags); #endif /* _BPF_MEM_ALLOC_H */ diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h index d4ee3ccd3753..fc0d6f32c687 100644 --- a/include/linux/bpf_types.h +++ b/include/linux/bpf_types.h @@ -79,6 +79,10 @@ BPF_PROG_TYPE(BPF_PROG_TYPE_LSM, lsm, #endif BPF_PROG_TYPE(BPF_PROG_TYPE_SYSCALL, bpf_syscall, void *, void *) +#ifdef CONFIG_NETFILTER_BPF_LINK +BPF_PROG_TYPE(BPF_PROG_TYPE_NETFILTER, netfilter, + struct bpf_nf_ctx, struct bpf_nf_ctx) +#endif BPF_MAP_TYPE(BPF_MAP_TYPE_ARRAY, array_map_ops) BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_ARRAY, percpu_array_map_ops) diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index cf1bb1cf4a7b..3dd29a53b711 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -59,6 +59,14 @@ struct bpf_active_lock { u32 id; }; +#define ITER_PREFIX "bpf_iter_" + +enum bpf_iter_state { + BPF_ITER_STATE_INVALID, /* for non-first slot */ + BPF_ITER_STATE_ACTIVE, + BPF_ITER_STATE_DRAINED, +}; + struct bpf_reg_state { /* Ordering of fields matters. See states_equal() */ enum bpf_reg_type type; @@ -103,6 +111,18 @@ struct bpf_reg_state { bool first_slot; } dynptr; + /* For bpf_iter stack slots */ + struct { + /* BTF container and BTF type ID describing + * struct bpf_iter_<type> of an iterator state + */ + struct btf *btf; + u32 btf_id; + /* packing following two fields to fit iter state into 16 bytes */ + enum bpf_iter_state state:2; + int depth:30; + } iter; + /* Max size from any of the above. */ struct { unsigned long raw1; @@ -141,6 +161,8 @@ struct bpf_reg_state { * same reference to the socket, to determine proper reference freeing. * For stack slots that are dynptrs, this is used to track references to * the dynptr to determine proper reference freeing. + * Similarly to dynptrs, we use ID to track "belonging" of a reference + * to a specific instance of bpf_iter. 
*/ u32 id; /* PTR_TO_SOCKET and PTR_TO_TCP_SOCK could be a ptr returned @@ -211,9 +233,11 @@ enum bpf_stack_slot_type { * is stored in bpf_stack_state->spilled_ptr.dynptr.type */ STACK_DYNPTR, + STACK_ITER, }; #define BPF_REG_SIZE 8 /* size of eBPF register in bytes */ + #define BPF_DYNPTR_SIZE sizeof(struct bpf_dynptr_kern) #define BPF_DYNPTR_NR_SLOTS (BPF_DYNPTR_SIZE / BPF_REG_SIZE) @@ -440,7 +464,12 @@ struct bpf_insn_aux_data { */ struct bpf_loop_inline_state loop_inline_state; }; - u64 obj_new_size; /* remember the size of type passed to bpf_obj_new to rewrite R1 */ + union { + /* remember the size of type passed to bpf_obj_new to rewrite R1 */ + u64 obj_new_size; + /* remember the offset of node field within type to rewrite */ + u64 insert_off; + }; struct btf_struct_meta *kptr_struct_meta; u64 map_key_state; /* constant (32 bit) key tracking for maps */ int ctx_field_size; /* the ctx field size for load insn, maybe 0 */ @@ -448,12 +477,17 @@ struct bpf_insn_aux_data { bool sanitize_stack_spill; /* subject to Spectre v4 sanitation */ bool zext_dst; /* this insn zero extends dst reg */ bool storage_get_func_atomic; /* bpf_*_storage_get() with atomic memory alloc */ + bool is_iter_next; /* bpf_iter_<type>_next() kfunc call */ u8 alu_state; /* used in combination with alu_limit */ /* below fields are initialized once */ unsigned int orig_idx; /* original instruction index */ - bool prune_point; bool jmp_point; + bool prune_point; + /* ensure we check state equivalence and save state checkpoint and + * this instruction, regardless of any heuristics + */ + bool force_checkpoint; }; #define MAX_USED_MAPS 64 /* max number of maps accessed by one eBPF program */ @@ -462,39 +496,36 @@ struct bpf_insn_aux_data { #define BPF_VERIFIER_TMP_LOG_SIZE 1024 struct bpf_verifier_log { - u32 level; - char kbuf[BPF_VERIFIER_TMP_LOG_SIZE]; + /* Logical start and end positions of a "log window" of the verifier log. + * start_pos == 0 means we haven't truncated anything. + * Once truncation starts to happen, start_pos + len_total == end_pos, + * except during log reset situations, in which (end_pos - start_pos) + * might get smaller than len_total (see bpf_vlog_reset()). + * Generally, (end_pos - start_pos) gives number of useful data in + * user log buffer. 
+ */ + u64 start_pos; + u64 end_pos; char __user *ubuf; - u32 len_used; + u32 level; u32 len_total; + u32 len_max; + char kbuf[BPF_VERIFIER_TMP_LOG_SIZE]; }; -static inline bool bpf_verifier_log_full(const struct bpf_verifier_log *log) -{ - return log->len_used >= log->len_total - 1; -} - #define BPF_LOG_LEVEL1 1 #define BPF_LOG_LEVEL2 2 #define BPF_LOG_STATS 4 +#define BPF_LOG_FIXED 8 #define BPF_LOG_LEVEL (BPF_LOG_LEVEL1 | BPF_LOG_LEVEL2) -#define BPF_LOG_MASK (BPF_LOG_LEVEL | BPF_LOG_STATS) +#define BPF_LOG_MASK (BPF_LOG_LEVEL | BPF_LOG_STATS | BPF_LOG_FIXED) #define BPF_LOG_KERNEL (BPF_LOG_MASK + 1) /* kernel internal flag */ #define BPF_LOG_MIN_ALIGNMENT 8U #define BPF_LOG_ALIGNMENT 40U static inline bool bpf_verifier_log_needed(const struct bpf_verifier_log *log) { - return log && - ((log->level && log->ubuf && !bpf_verifier_log_full(log)) || - log->level == BPF_LOG_KERNEL); -} - -static inline bool -bpf_verifier_log_attr_valid(const struct bpf_verifier_log *log) -{ - return log->len_total >= 128 && log->len_total <= UINT_MAX >> 2 && - log->level && log->ubuf && !(log->level & ~BPF_LOG_MASK); + return log && log->level; } #define BPF_MAX_SUBPROGS 256 @@ -537,7 +568,6 @@ struct bpf_verifier_env { bool bypass_spec_v1; bool bypass_spec_v4; bool seen_direct_write; - bool rcu_tag_supported; struct bpf_insn_aux_data *insn_aux_data; /* array of per-insn state */ const struct bpf_line_info *prev_linfo; struct bpf_verifier_log log; @@ -575,7 +605,7 @@ struct bpf_verifier_env { u32 scratched_regs; /* Same as scratched_regs but for stack slots */ u64 scratched_stack_slots; - u32 prev_log_len, prev_insn_print_len; + u64 prev_log_pos, prev_insn_print_pos; /* buffer used in reg_type_str() to generate reg_type string */ char type_str_buf[TYPE_STR_BUF_LEN]; }; @@ -586,6 +616,10 @@ __printf(2, 3) void bpf_verifier_log_write(struct bpf_verifier_env *env, const char *fmt, ...); __printf(2, 3) void bpf_log(struct bpf_verifier_log *log, const char *fmt, ...); +int bpf_vlog_init(struct bpf_verifier_log *log, u32 log_level, + char __user *log_buf, u32 log_size); +void bpf_vlog_reset(struct bpf_verifier_log *log, u64 new_pos); +int bpf_vlog_finalize(struct bpf_verifier_log *log, u32 *log_size_actual); static inline struct bpf_func_state *cur_func(struct bpf_verifier_env *env) { @@ -616,9 +650,6 @@ int check_func_arg_reg_off(struct bpf_verifier_env *env, enum bpf_arg_type arg_type); int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno, u32 mem_size); -struct bpf_call_arg_meta; -int process_dynptr_func(struct bpf_verifier_env *env, int regno, - enum bpf_arg_type arg_type, struct bpf_call_arg_meta *meta); /* this lives here instead of in bpf.h because it needs to dereference tgt_prog */ static inline u64 bpf_trampoline_compute_key(const struct bpf_prog *tgt_prog, diff --git a/include/linux/btf.h b/include/linux/btf.h index 49e0fe6d8274..508199e38415 100644 --- a/include/linux/btf.h +++ b/include/linux/btf.h @@ -18,7 +18,6 @@ #define KF_ACQUIRE (1 << 0) /* kfunc is an acquire function */ #define KF_RELEASE (1 << 1) /* kfunc is a release function */ #define KF_RET_NULL (1 << 2) /* kfunc returns a pointer that may be NULL */ -#define KF_KPTR_GET (1 << 3) /* kfunc returns reference to a kptr */ /* Trusted arguments are those which are guaranteed to be valid when passed to * the kfunc. 
It is used to enforce that pointers obtained from either acquire * kfuncs, or from the main kernel on a tracepoint or struct_ops callback @@ -70,7 +69,11 @@ #define KF_TRUSTED_ARGS (1 << 4) /* kfunc only takes trusted pointer arguments */ #define KF_SLEEPABLE (1 << 5) /* kfunc may sleep */ #define KF_DESTRUCTIVE (1 << 6) /* kfunc performs destructive actions */ -#define KF_RCU (1 << 7) /* kfunc only takes rcu pointer arguments */ +#define KF_RCU (1 << 7) /* kfunc takes either rcu or trusted pointer arguments */ +/* only one of KF_ITER_{NEW,NEXT,DESTROY} could be specified per kfunc */ +#define KF_ITER_NEW (1 << 8) /* kfunc implements BPF iter constructor */ +#define KF_ITER_NEXT (1 << 9) /* kfunc implements BPF iter next method */ +#define KF_ITER_DESTROY (1 << 10) /* kfunc implements BPF iter destructor */ /* * Tag marking a kernel function as a kfunc. This is meant to minimize the @@ -109,7 +112,6 @@ struct btf_id_dtor_kfunc { struct btf_struct_meta { u32 btf_id; struct btf_record *record; - struct btf_field_offs *field_offs; }; struct btf_struct_metas { @@ -117,13 +119,11 @@ struct btf_struct_metas { struct btf_struct_meta types[]; }; -typedef void (*btf_dtor_kfunc_t)(void *); - extern const struct file_operations btf_fops; void btf_get(struct btf *btf); void btf_put(struct btf *btf); -int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr); +int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr, u32 uattr_sz); struct btf *btf_get_by_fd(int fd); int btf_get_info_by_fd(const struct btf *btf, const union bpf_attr *attr, @@ -205,7 +205,6 @@ int btf_find_timer(const struct btf *btf, const struct btf_type *t); struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type *t, u32 field_mask, u32 value_size); int btf_check_and_fixup_fields(const struct btf *btf, struct btf_record *rec); -struct btf_field_offs *btf_parse_field_offs(struct btf_record *rec); bool btf_type_is_void(const struct btf_type *t); s32 btf_find_by_name_kind(const struct btf *btf, const char *name, u8 kind); const struct btf_type *btf_type_skip_modifiers(const struct btf *btf, diff --git a/include/linux/btf_ids.h b/include/linux/btf_ids.h index 3a4f7cd882ca..00950cc03bff 100644 --- a/include/linux/btf_ids.h +++ b/include/linux/btf_ids.h @@ -204,7 +204,7 @@ extern struct btf_id_set8 name; #else -#define BTF_ID_LIST(name) static u32 __maybe_unused name[16]; +#define BTF_ID_LIST(name) static u32 __maybe_unused name[64]; #define BTF_ID(prefix, name) #define BTF_ID_FLAGS(prefix, name, ...) 
#define BTF_ID_UNUSED diff --git a/include/linux/cpu_rmap.h b/include/linux/cpu_rmap.h index be8aea04d023..cae324d10965 100644 --- a/include/linux/cpu_rmap.h +++ b/include/linux/cpu_rmap.h @@ -16,14 +16,13 @@ * struct cpu_rmap - CPU affinity reverse-map * @refcount: kref for object * @size: Number of objects to be reverse-mapped - * @used: Number of objects added * @obj: Pointer to array of object pointers * @near: For each CPU, the index and distance to the nearest object, * based on affinity masks */ struct cpu_rmap { struct kref refcount; - u16 size, used; + u16 size; void **obj; struct { u16 index; @@ -61,6 +60,7 @@ static inline struct cpu_rmap *alloc_irq_cpu_rmap(unsigned int size) } extern void free_irq_cpu_rmap(struct cpu_rmap *rmap); +int irq_cpu_rmap_remove(struct cpu_rmap *rmap, int irq); extern int irq_cpu_rmap_add(struct cpu_rmap *rmap, int irq); #endif /* __LINUX_CPU_RMAP_H */ diff --git a/include/linux/dccp.h b/include/linux/dccp.h index 07e547c02fd8..325af611909f 100644 --- a/include/linux/dccp.h +++ b/include/linux/dccp.h @@ -305,10 +305,8 @@ struct dccp_sock { struct timer_list dccps_xmit_timer; }; -static inline struct dccp_sock *dccp_sk(const struct sock *sk) -{ - return (struct dccp_sock *)sk; -} +#define dccp_sk(ptr) container_of_const(ptr, struct dccp_sock, \ + dccps_inet_connection.icsk_inet.sk) static inline const char *dccp_role(const struct sock *sk) { diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h index 2792185dda22..62b61527bcc4 100644 --- a/include/linux/ethtool.h +++ b/include/linux/ethtool.h @@ -75,6 +75,8 @@ enum { * @tx_push: The flag of tx push mode * @rx_push: The flag of rx push mode * @cqe_size: Size of TX/RX completion queue event + * @tx_push_buf_len: Size of TX push buffer + * @tx_push_buf_max_len: Maximum allowed size of TX push buffer */ struct kernel_ethtool_ringparam { u32 rx_buf_len; @@ -82,6 +84,8 @@ struct kernel_ethtool_ringparam { u8 tx_push; u8 rx_push; u32 cqe_size; + u32 tx_push_buf_len; + u32 tx_push_buf_max_len; }; /** @@ -90,12 +94,14 @@ struct kernel_ethtool_ringparam { * @ETHTOOL_RING_USE_CQE_SIZE: capture for setting cqe_size * @ETHTOOL_RING_USE_TX_PUSH: capture for setting tx_push * @ETHTOOL_RING_USE_RX_PUSH: capture for setting rx_push + * @ETHTOOL_RING_USE_TX_PUSH_BUF_LEN: capture for setting tx_push_buf_len */ enum ethtool_supported_ring_param { - ETHTOOL_RING_USE_RX_BUF_LEN = BIT(0), - ETHTOOL_RING_USE_CQE_SIZE = BIT(1), - ETHTOOL_RING_USE_TX_PUSH = BIT(2), - ETHTOOL_RING_USE_RX_PUSH = BIT(3), + ETHTOOL_RING_USE_RX_BUF_LEN = BIT(0), + ETHTOOL_RING_USE_CQE_SIZE = BIT(1), + ETHTOOL_RING_USE_TX_PUSH = BIT(2), + ETHTOOL_RING_USE_RX_PUSH = BIT(3), + ETHTOOL_RING_USE_TX_PUSH_BUF_LEN = BIT(4), }; #define __ETH_RSS_HASH_BIT(bit) ((u32)1 << (bit)) @@ -705,6 +711,7 @@ struct ethtool_mm_stats { * @get_dump_data: Get dump data. * @set_dump: Set dump specific flags to the device. * @get_ts_info: Get the time stamping and PTP hardware clock capabilities. + * It may be called with RCU, or rtnl or reference on the device. * Drivers supporting transmit time stamps in software should set this to * ethtool_op_get_ts_info(). 
* @get_module_info: Get the size and type of the eeprom contained within diff --git a/include/linux/ethtool_netlink.h b/include/linux/ethtool_netlink.h index 17003b385756..fae0dfb9a9c8 100644 --- a/include/linux/ethtool_netlink.h +++ b/include/linux/ethtool_netlink.h @@ -39,6 +39,7 @@ void ethtool_aggregate_pause_stats(struct net_device *dev, struct ethtool_pause_stats *pause_stats); void ethtool_aggregate_rmon_stats(struct net_device *dev, struct ethtool_rmon_stats *rmon_stats); +bool ethtool_dev_mm_supported(struct net_device *dev); #else static inline int ethnl_cable_test_alloc(struct phy_device *phydev, u8 cmd) @@ -112,5 +113,10 @@ ethtool_aggregate_rmon_stats(struct net_device *dev, { } +static inline bool ethtool_dev_mm_supported(struct net_device *dev) +{ + return false; +} + #endif /* IS_ENABLED(CONFIG_ETHTOOL_NETLINK) */ #endif /* _LINUX_ETHTOOL_NETLINK_H_ */ diff --git a/include/linux/filter.h b/include/linux/filter.h index 1727898f1641..bbce89937fde 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -571,8 +571,7 @@ DECLARE_STATIC_KEY_FALSE(bpf_stats_enabled_key); extern struct mutex nf_conn_btf_access_lock; extern int (*nfct_btf_struct_access)(struct bpf_verifier_log *log, const struct bpf_reg_state *reg, - int off, int size, enum bpf_access_type atype, - u32 *next_btf_id, enum bpf_type_flag *flag); + int off, int size); typedef unsigned int (*bpf_dispatcher_fn)(const void *ctx, const struct bpf_insn *insnsi, @@ -921,6 +920,7 @@ void bpf_jit_compile(struct bpf_prog *prog); bool bpf_jit_needs_zext(void); bool bpf_jit_supports_subprog_tailcalls(void); bool bpf_jit_supports_kfunc_call(void); +bool bpf_jit_supports_far_kfunc_call(void); bool bpf_helper_changes_pkt_data(void *func); static inline bool bpf_dump_raw_ok(const struct cred *cred) @@ -1504,9 +1504,9 @@ static inline bool bpf_sk_lookup_run_v6(struct net *net, int protocol, } #endif /* IS_ENABLED(CONFIG_IPV6) */ -static __always_inline int __bpf_xdp_redirect_map(struct bpf_map *map, u64 index, - u64 flags, const u64 flag_mask, - void *lookup_elem(struct bpf_map *map, u32 key)) +static __always_inline long __bpf_xdp_redirect_map(struct bpf_map *map, u64 index, + u64 flags, const u64 flag_mask, + void *lookup_elem(struct bpf_map *map, u32 key)) { struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); const u64 action_mask = XDP_ABORTED | XDP_DROP | XDP_PASS | XDP_TX; @@ -1542,4 +1542,50 @@ static __always_inline int __bpf_xdp_redirect_map(struct bpf_map *map, u64 index return XDP_REDIRECT; } +#ifdef CONFIG_NET +int __bpf_skb_load_bytes(const struct sk_buff *skb, u32 offset, void *to, u32 len); +int __bpf_skb_store_bytes(struct sk_buff *skb, u32 offset, const void *from, + u32 len, u64 flags); +int __bpf_xdp_load_bytes(struct xdp_buff *xdp, u32 offset, void *buf, u32 len); +int __bpf_xdp_store_bytes(struct xdp_buff *xdp, u32 offset, void *buf, u32 len); +void *bpf_xdp_pointer(struct xdp_buff *xdp, u32 offset, u32 len); +void bpf_xdp_copy_buf(struct xdp_buff *xdp, unsigned long off, + void *buf, unsigned long len, bool flush); +#else /* CONFIG_NET */ +static inline int __bpf_skb_load_bytes(const struct sk_buff *skb, u32 offset, + void *to, u32 len) +{ + return -EOPNOTSUPP; +} + +static inline int __bpf_skb_store_bytes(struct sk_buff *skb, u32 offset, + const void *from, u32 len, u64 flags) +{ + return -EOPNOTSUPP; +} + +static inline int __bpf_xdp_load_bytes(struct xdp_buff *xdp, u32 offset, + void *buf, u32 len) +{ + return -EOPNOTSUPP; +} + +static inline int __bpf_xdp_store_bytes(struct xdp_buff 
*xdp, u32 offset, + void *buf, u32 len) +{ + return -EOPNOTSUPP; +} + +static inline void *bpf_xdp_pointer(struct xdp_buff *xdp, u32 offset, u32 len) +{ + return NULL; +} + +static inline void *bpf_xdp_copy_buf(struct xdp_buff *xdp, unsigned long off, void *buf, + unsigned long len, bool flush) +{ + return NULL; +} +#endif /* CONFIG_NET */ + #endif /* __LINUX_FILTER_H__ */ diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h index 2463bdd2a382..c4cf296e7eaf 100644 --- a/include/linux/ieee80211.h +++ b/include/linux/ieee80211.h @@ -9,7 +9,7 @@ * Copyright (c) 2006, Michael Wu <flamingice@sourmilk.net> * Copyright (c) 2013 - 2014 Intel Mobile Communications GmbH * Copyright (c) 2016 - 2017 Intel Deutschland GmbH - * Copyright (c) 2018 - 2022 Intel Corporation + * Copyright (c) 2018 - 2023 Intel Corporation */ #ifndef LINUX_IEEE80211_H @@ -783,20 +783,6 @@ static inline bool ieee80211_is_any_nullfunc(__le16 fc) } /** - * ieee80211_is_bufferable_mmpdu - check if frame is bufferable MMPDU - * @fc: frame control field in little-endian byteorder - */ -static inline bool ieee80211_is_bufferable_mmpdu(__le16 fc) -{ - /* IEEE 802.11-2012, definition of "bufferable management frame"; - * note that this ignores the IBSS special case. */ - return ieee80211_is_mgmt(fc) && - (ieee80211_is_action(fc) || - ieee80211_is_disassoc(fc) || - ieee80211_is_deauth(fc)); -} - -/** * ieee80211_is_first_frag - check if IEEE80211_SCTL_FRAG is not set * @seq_ctrl: frame sequence control bytes in little-endian byteorder */ @@ -3557,11 +3543,6 @@ enum ieee80211_unprotected_wnm_actioncode { WLAN_UNPROTECTED_WNM_ACTION_TIMING_MEASUREMENT_RESPONSE = 1, }; -/* Public action codes */ -enum ieee80211_public_actioncode { - WLAN_PUBLIC_ACTION_FTM_RESPONSE = 33, -}; - /* Security key length */ enum ieee80211_key_len { WLAN_KEY_LEN_WEP40 = 5, @@ -3653,7 +3634,7 @@ enum ieee80211_pub_actioncode { WLAN_PUB_ACTION_NETWORK_CHANNEL_CONTROL = 30, WLAN_PUB_ACTION_WHITE_SPACE_MAP_ANN = 31, WLAN_PUB_ACTION_FTM_REQUEST = 32, - WLAN_PUB_ACTION_FTM = 33, + WLAN_PUB_ACTION_FTM_RESPONSE = 33, WLAN_PUB_ACTION_FILS_DISCOVERY = 34, }; @@ -4138,6 +4119,44 @@ static inline u8 *ieee80211_get_DA(struct ieee80211_hdr *hdr) } /** + * ieee80211_is_bufferable_mmpdu - check if frame is bufferable MMPDU + * @skb: the skb to check, starting with the 802.11 header + */ +static inline bool ieee80211_is_bufferable_mmpdu(struct sk_buff *skb) +{ + struct ieee80211_mgmt *mgmt = (void *)skb->data; + __le16 fc = mgmt->frame_control; + + /* + * IEEE 802.11 REVme D2.0 definition of bufferable MMPDU; + * note that this ignores the IBSS special case. 
+ */ + if (!ieee80211_is_mgmt(fc)) + return false; + + if (ieee80211_is_disassoc(fc) || ieee80211_is_deauth(fc)) + return true; + + if (!ieee80211_is_action(fc)) + return false; + + if (skb->len < offsetofend(typeof(*mgmt), u.action.u.ftm.action_code)) + return true; + + /* action frame - additionally check for non-bufferable FTM */ + + if (mgmt->u.action.category != WLAN_CATEGORY_PUBLIC && + mgmt->u.action.category != WLAN_CATEGORY_PROTECTED_DUAL_OF_ACTION) + return true; + + if (mgmt->u.action.u.ftm.action_code == WLAN_PUB_ACTION_FTM_REQUEST || + mgmt->u.action.u.ftm.action_code == WLAN_PUB_ACTION_FTM_RESPONSE) + return false; + + return true; +} + +/** * _ieee80211_is_robust_mgmt_frame - check if frame is a robust management frame * @hdr: the frame (buffer must include at least the first octet of payload) */ @@ -4383,7 +4402,7 @@ static inline bool ieee80211_is_ftm(struct sk_buff *skb) return false; if (mgmt->u.action.u.ftm.action_code == - WLAN_PUBLIC_ACTION_FTM_RESPONSE && + WLAN_PUB_ACTION_FTM_RESPONSE && skb->len >= offsetofend(typeof(*mgmt), u.action.u.ftm)) return true; diff --git a/include/linux/if_bridge.h b/include/linux/if_bridge.h index 1668ac4d7adc..3ff96ae31bf6 100644 --- a/include/linux/if_bridge.h +++ b/include/linux/if_bridge.h @@ -60,6 +60,7 @@ struct br_ip_list { #define BR_TX_FWD_OFFLOAD BIT(20) #define BR_PORT_LOCKED BIT(21) #define BR_PORT_MAB BIT(22) +#define BR_NEIGH_VLAN_SUPPRESS BIT(23) #define BR_DEFAULT_AGEING_TIME (300 * HZ) diff --git a/include/linux/if_vlan.h b/include/linux/if_vlan.h index 6864b89ef868..0f40f379d75c 100644 --- a/include/linux/if_vlan.h +++ b/include/linux/if_vlan.h @@ -62,6 +62,14 @@ static inline struct vlan_ethhdr *vlan_eth_hdr(const struct sk_buff *skb) return (struct vlan_ethhdr *)skb_mac_header(skb); } +/* Prefer this version in TX path, instead of + * skb_reset_mac_header() + vlan_eth_hdr() + */ +static inline struct vlan_ethhdr *skb_vlan_eth_hdr(const struct sk_buff *skb) +{ + return (struct vlan_ethhdr *)skb->data; +} + #define VLAN_PRIO_MASK 0xe000 /* Priority Code Point */ #define VLAN_PRIO_SHIFT 13 #define VLAN_CFI_MASK 0x1000 /* Canonical Format Indicator / Drop Eligible Indicator */ @@ -351,7 +359,8 @@ static inline int __vlan_insert_inner_tag(struct sk_buff *skb, /* Move the mac header sans proto to the beginning of the new header. */ if (likely(mac_len > ETH_TLEN)) memmove(skb->data, skb->data + VLAN_HLEN, mac_len - ETH_TLEN); - skb->mac_header -= VLAN_HLEN; + if (skb_mac_header_was_set(skb)) + skb->mac_header -= VLAN_HLEN; veth = (struct vlan_ethhdr *)(skb->data + mac_len - ETH_HLEN); @@ -528,7 +537,7 @@ static inline void __vlan_hwaccel_put_tag(struct sk_buff *skb, */ static inline int __vlan_get_tag(const struct sk_buff *skb, u16 *vlan_tci) { - struct vlan_ethhdr *veth = (struct vlan_ethhdr *)skb->data; + struct vlan_ethhdr *veth = skb_vlan_eth_hdr(skb); if (!eth_type_vlan(veth->h_vlan_proto)) return -EINVAL; @@ -677,6 +686,27 @@ static inline void vlan_set_encap_proto(struct sk_buff *skb, } /** + * vlan_remove_tag - remove outer VLAN tag from payload + * @skb: skbuff to remove tag from + * @vlan_tci: buffer to store value + * + * Expects the skb to contain a VLAN tag in the payload, and to have skb->data + * pointing at the MAC header. + * + * Returns a new pointer to skb->data, or NULL on failure to pull. 
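/*
 * Illustrative sketch (not from this series): with the helper now taking
 * the skb instead of the frame control field, callers can rely on it to
 * also inspect the action category and FTM action code.
 * foo_sta_buffer_frame is a hypothetical driver-side helper.
 */
#include <linux/ieee80211.h>
#include <linux/skbuff.h>

static bool foo_sta_buffer_frame(struct sk_buff *skb)
{
	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;

	/* data frames and bufferable MMPDUs may be held for a dozing STA */
	return ieee80211_is_data(hdr->frame_control) ||
	       ieee80211_is_bufferable_mmpdu(skb);
}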
+ */ +static inline void *vlan_remove_tag(struct sk_buff *skb, u16 *vlan_tci) +{ + struct vlan_hdr *vhdr = (struct vlan_hdr *)(skb->data + ETH_HLEN); + + *vlan_tci = ntohs(vhdr->h_vlan_TCI); + + memmove(skb->data + VLAN_HLEN, skb->data, 2 * ETH_ALEN); + vlan_set_encap_proto(skb, vhdr); + return __skb_pull(skb, VLAN_HLEN); +} + +/** * skb_vlan_tagged - check if skb is vlan tagged. * @skb: skbuff to query * @@ -712,7 +742,7 @@ static inline bool skb_vlan_tagged_multi(struct sk_buff *skb) if (unlikely(!pskb_may_pull(skb, VLAN_ETH_HLEN))) return false; - veh = (struct vlan_ethhdr *)skb->data; + veh = skb_vlan_eth_hdr(skb); protocol = veh->h_vlan_encapsulated_proto; } diff --git a/include/linux/igmp.h b/include/linux/igmp.h index b19d3284551f..ebf4349a53af 100644 --- a/include/linux/igmp.h +++ b/include/linux/igmp.h @@ -122,7 +122,7 @@ extern int ip_mc_msfget(struct sock *sk, struct ip_msfilter *msf, sockptr_t optval, sockptr_t optlen); extern int ip_mc_gsfget(struct sock *sk, struct group_filter *gsf, sockptr_t optval, size_t offset); -extern int ip_mc_sf_allow(struct sock *sk, __be32 local, __be32 rmt, +extern int ip_mc_sf_allow(const struct sock *sk, __be32 local, __be32 rmt, int dif, int sdif); extern void ip_mc_init_dev(struct in_device *); extern void ip_mc_destroy_dev(struct in_device *); diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h index 37dfdcfcdd54..839247a4f48e 100644 --- a/include/linux/ipv6.h +++ b/include/linux/ipv6.h @@ -336,10 +336,7 @@ static inline struct ipv6_pinfo *inet6_sk(const struct sock *__sk) return sk_fullsock(__sk) ? inet_sk(__sk)->pinet6 : NULL; } -static inline struct raw6_sock *raw6_sk(const struct sock *sk) -{ - return (struct raw6_sock *)sk; -} +#define raw6_sk(ptr) container_of_const(ptr, struct raw6_sock, inet.sk) #define ipv6_only_sock(sk) (sk->sk_ipv6only) #define ipv6_sk_rxinfo(sk) ((sk)->sk_family == PF_INET6 && \ diff --git a/include/linux/leds.h b/include/linux/leds.h index d71201a968b6..aa48e643f655 100644 --- a/include/linux/leds.h +++ b/include/linux/leds.h @@ -82,7 +82,15 @@ struct led_init_data { bool devname_mandatory; }; +#if IS_ENABLED(CONFIG_NEW_LEDS) enum led_default_state led_init_default_state_get(struct fwnode_handle *fwnode); +#else +static inline enum led_default_state +led_init_default_state_get(struct fwnode_handle *fwnode) +{ + return LEDS_DEFSTATE_OFF; +} +#endif struct led_hw_trigger_type { int dummy; @@ -217,9 +225,19 @@ static inline int led_classdev_register(struct device *parent, return led_classdev_register_ext(parent, led_cdev, NULL); } +#if IS_ENABLED(CONFIG_LEDS_CLASS) int devm_led_classdev_register_ext(struct device *parent, struct led_classdev *led_cdev, struct led_init_data *init_data); +#else +static inline int +devm_led_classdev_register_ext(struct device *parent, + struct led_classdev *led_cdev, + struct led_init_data *init_data) +{ + return 0; +} +#endif static inline int devm_led_classdev_register(struct device *parent, struct led_classdev *led_cdev) diff --git a/include/linux/mlx5/device.h b/include/linux/mlx5/device.h index 1db19a9d26e3..c0af74efd3cb 100644 --- a/include/linux/mlx5/device.h +++ b/include/linux/mlx5/device.h @@ -443,6 +443,8 @@ enum { MLX5_OPCODE_UMR = 0x25, + MLX5_OPCODE_FLOW_TBL_ACCESS = 0x2c, + MLX5_OPCODE_ACCESS_ASO = 0x2d, }; @@ -1367,6 +1369,12 @@ enum mlx5_qcam_feature_groups { #define MLX5_CAP_ESW_INGRESS_ACL_MAX(mdev, cap) \ MLX5_CAP_ESW_FLOWTABLE_MAX(mdev, flow_table_properties_esw_acl_ingress.cap) +#define MLX5_CAP_ESW_FT_FIELD_SUPPORT_2(mdev, cap) \ + 
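/*
 * Illustrative sketch (not from this series): stripping an outer VLAN
 * header on receive and moving the tag into the skb metadata, using the
 * skb_vlan_eth_hdr() and vlan_remove_tag() helpers added above.
 * foo_rx_strip_vlan is hypothetical; the frame is assumed to start at
 * skb->data with the MAC header.
 */
#include <linux/if_vlan.h>

static void foo_rx_strip_vlan(struct sk_buff *skb)
{
	__be16 vlan_proto;
	u16 vlan_tci;

	if (unlikely(!pskb_may_pull(skb, VLAN_ETH_HLEN)))
		return;

	vlan_proto = skb_vlan_eth_hdr(skb)->h_vlan_proto;
	if (!eth_type_vlan(vlan_proto))
		return;

	vlan_remove_tag(skb, &vlan_tci);
	__vlan_hwaccel_put_tag(skb, vlan_proto, vlan_tci);
}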
MLX5_CAP_ESW_FLOWTABLE(mdev, ft_field_support_2_esw_fdb.cap) + +#define MLX5_CAP_ESW_FT_FIELD_SUPPORT_2_MAX(mdev, cap) \ + MLX5_CAP_ESW_FLOWTABLE_MAX(mdev, ft_field_support_2_esw_fdb.cap) + #define MLX5_CAP_ESW(mdev, cap) \ MLX5_GET(e_switch_cap, \ mdev->caps.hca[MLX5_CAP_ESWITCH]->cur, cap) diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h index 7e225e41d55b..a4c4f737f9c1 100644 --- a/include/linux/mlx5/driver.h +++ b/include/linux/mlx5/driver.h @@ -134,6 +134,7 @@ enum { MLX5_REG_PCAM = 0x507f, MLX5_REG_NODE_DESC = 0x6001, MLX5_REG_HOST_ENDIANNESS = 0x7004, + MLX5_REG_MTMP = 0x900A, MLX5_REG_MCIA = 0x9014, MLX5_REG_MFRL = 0x9028, MLX5_REG_MLCR = 0x902b, @@ -438,6 +439,7 @@ struct mlx5_core_health { struct work_struct report_work; struct devlink_health_reporter *fw_reporter; struct devlink_health_reporter *fw_fatal_reporter; + struct devlink_health_reporter *vnic_reporter; struct delayed_work update_fw_log_ts_work; }; @@ -731,6 +733,7 @@ struct mlx5_fw_tracer; struct mlx5_vxlan; struct mlx5_geneve; struct mlx5_hv_vhca; +struct mlx5_thermal; #define MLX5_LOG_SW_ICM_BLOCK_SIZE(dev) (MLX5_CAP_DEV_MEM(dev, log_sw_icm_alloc_granularity)) #define MLX5_SW_ICM_BLOCK_SIZE(dev) (1 << MLX5_LOG_SW_ICM_BLOCK_SIZE(dev)) @@ -749,6 +752,7 @@ enum { struct mlx5_profile { u64 mask; u8 log_max_qp; + u8 num_cmd_caches; struct { int size; int limit; @@ -808,6 +812,7 @@ struct mlx5_core_dev { struct mlx5_rsc_dump *rsc_dump; u32 vsc_addr; struct mlx5_hv_vhca *hv_vhca; + struct mlx5_thermal *thermal; }; struct mlx5_db { @@ -1303,4 +1308,10 @@ enum { MLX5_OCTWORD = 16, }; +struct msi_map mlx5_msix_alloc(struct mlx5_core_dev *dev, + irqreturn_t (*handler)(int, void *), + const struct irq_affinity_desc *affdesc, + const char *name); +void mlx5_msix_free(struct mlx5_core_dev *dev, struct msi_map map); + #endif /* MLX5_DRIVER_H */ diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index 66d76e97a087..b42696d74c9f 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -69,7 +69,7 @@ enum { MLX5_SET_HCA_CAP_OP_MOD_ATOMIC = 0x3, MLX5_SET_HCA_CAP_OP_MOD_ROCE = 0x4, MLX5_SET_HCA_CAP_OP_MOD_GENERAL_DEVICE2 = 0x20, - MLX5_SET_HCA_CAP_OP_MODE_PORT_SELECTION = 0x25, + MLX5_SET_HCA_CAP_OP_MOD_PORT_SELECTION = 0x25, }; enum { @@ -78,12 +78,15 @@ enum { enum { MLX5_OBJ_TYPE_SW_ICM = 0x0008, + MLX5_OBJ_TYPE_HEADER_MODIFY_ARGUMENT = 0x23, }; enum { MLX5_GENERAL_OBJ_TYPES_CAP_SW_ICM = (1ULL << MLX5_OBJ_TYPE_SW_ICM), MLX5_GENERAL_OBJ_TYPES_CAP_GENEVE_TLV_OPT = (1ULL << 11), MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_NET_Q = (1ULL << 13), + MLX5_GENERAL_OBJ_TYPES_CAP_HEADER_MODIFY_ARGUMENT = + (1ULL << MLX5_OBJ_TYPE_HEADER_MODIFY_ARGUMENT), MLX5_GENERAL_OBJ_TYPES_CAP_MACSEC_OFFLOAD = (1ULL << 39), }; @@ -321,6 +324,10 @@ enum { MLX5_FT_NIC_TX_RDMA_2_NIC_TX = BIT(1), }; +enum { + MLX5_CMD_OP_MOD_UPDATE_HEADER_MODIFY_ARGUMENT = 0x1, +}; + struct mlx5_ifc_flow_table_fields_supported_bits { u8 outer_dmac[0x1]; u8 outer_smac[0x1]; @@ -404,10 +411,13 @@ struct mlx5_ifc_flow_table_fields_supported_bits { u8 metadata_reg_c_0[0x1]; }; +/* Table 2170 - Flow Table Fields Supported 2 Format */ struct mlx5_ifc_flow_table_fields_supported_2_bits { u8 reserved_at_0[0xe]; u8 bth_opcode[0x1]; - u8 reserved_at_f[0x11]; + u8 reserved_at_f[0x1]; + u8 tunnel_header_0_1[0x1]; + u8 reserved_at_11[0xf]; u8 reserved_at_20[0x60]; }; @@ -453,9 +463,11 @@ struct mlx5_ifc_flow_table_prop_layout_bits { u8 max_ft_level[0x8]; u8 reformat_add_esp_trasport[0x1]; - u8 reserved_at_41[0x2]; + u8 
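/*
 * Illustrative sketch (not from this series): requesting and releasing a
 * dynamically managed MSI-X vector through the mlx5_msix_alloc()/
 * mlx5_msix_free() entry points declared above.  foo_irq_handler and
 * foo_setup_irq are hypothetical.
 */
#include <linux/interrupt.h>
#include <linux/msi.h>
#include <linux/mlx5/driver.h>

static irqreturn_t foo_irq_handler(int irq, void *data)
{
	return IRQ_HANDLED;
}

static int foo_setup_irq(struct mlx5_core_dev *mdev, struct msi_map *map)
{
	struct irq_affinity_desc affd = {};

	cpumask_set_cpu(0, &affd.mask);

	*map = mlx5_msix_alloc(mdev, foo_irq_handler, &affd, "foo-vec");
	if (map->index < 0)
		return map->index;

	return 0;
	/* on teardown: mlx5_msix_free(mdev, *map); */
}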
reformat_l2_to_l3_esp_tunnel[0x1]; + u8 reserved_at_42[0x1]; u8 reformat_del_esp_trasport[0x1]; - u8 reserved_at_44[0x2]; + u8 reformat_l3_esp_tunnel_to_l2[0x1]; + u8 reserved_at_45[0x1]; u8 execute_aso[0x1]; u8 reserved_at_47[0x19]; @@ -877,7 +889,12 @@ enum { struct mlx5_ifc_flow_table_eswitch_cap_bits { u8 fdb_to_vport_reg_c_id[0x8]; - u8 reserved_at_8[0xd]; + u8 reserved_at_8[0x5]; + u8 fdb_uplink_hairpin[0x1]; + u8 fdb_multi_path_any_table_limit_regc[0x1]; + u8 reserved_at_f[0x3]; + u8 fdb_multi_path_any_table[0x1]; + u8 reserved_at_13[0x2]; u8 fdb_modify_header_fwd_to_table[0x1]; u8 fdb_ipv4_ttl_modify[0x1]; u8 flow_source[0x1]; @@ -895,7 +912,13 @@ struct mlx5_ifc_flow_table_eswitch_cap_bits { struct mlx5_ifc_flow_table_prop_layout_bits flow_table_properties_esw_acl_egress; - u8 reserved_at_800[0x1000]; + u8 reserved_at_800[0xC00]; + + struct mlx5_ifc_flow_table_fields_supported_2_bits ft_field_support_2_esw_fdb; + + struct mlx5_ifc_flow_table_fields_supported_2_bits ft_field_bitmask_support_2_esw_fdb; + + u8 reserved_at_1500[0x300]; u8 sw_steering_fdb_action_drop_icm_address_rx[0x40]; @@ -1913,7 +1936,14 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 reserved_at_750[0x4]; u8 max_dynamic_vf_msix_table_size[0xc]; - u8 reserved_at_760[0x20]; + u8 reserved_at_760[0x3]; + u8 log_max_num_header_modify_argument[0x5]; + u8 reserved_at_768[0x4]; + u8 log_header_modify_argument_granularity[0x4]; + u8 reserved_at_770[0x3]; + u8 log_header_modify_argument_max_alloc[0x5]; + u8 reserved_at_778[0x8]; + u8 vhca_tunnel_commands[0x40]; u8 match_definer_format_supported[0x40]; }; @@ -6347,6 +6377,18 @@ struct mlx5_ifc_general_obj_out_cmd_hdr_bits { u8 reserved_at_60[0x20]; }; +struct mlx5_ifc_modify_header_arg_bits { + u8 reserved_at_0[0x80]; + + u8 reserved_at_80[0x8]; + u8 access_pd[0x18]; +}; + +struct mlx5_ifc_create_modify_header_arg_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_modify_header_arg_bits arg; +}; + struct mlx5_ifc_create_match_definer_in_bits { struct mlx5_ifc_general_obj_in_cmd_hdr_bits general_obj_in_cmd_hdr; @@ -6590,7 +6632,9 @@ enum mlx5_reformat_ctx_type { MLX5_REFORMAT_TYPE_L3_TUNNEL_TO_L2 = 0x3, MLX5_REFORMAT_TYPE_L2_TO_L3_TUNNEL = 0x4, MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_OVER_IPV4 = 0x5, + MLX5_REFORMAT_TYPE_L2_TO_L3_ESP_TUNNEL = 0x6, MLX5_REFORMAT_TYPE_DEL_ESP_TRANSPORT = 0x8, + MLX5_REFORMAT_TYPE_L3_ESP_TUNNEL_TO_L2 = 0x9, MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_OVER_IPV6 = 0xb, MLX5_REFORMAT_TYPE_INSERT_HDR = 0xf, MLX5_REFORMAT_TYPE_REMOVE_HDR = 0x10, @@ -10869,6 +10913,31 @@ struct mlx5_ifc_mrtc_reg_bits { u8 time_l[0x20]; }; +struct mlx5_ifc_mtmp_reg_bits { + u8 reserved_at_0[0x14]; + u8 sensor_index[0xc]; + + u8 reserved_at_20[0x10]; + u8 temperature[0x10]; + + u8 mte[0x1]; + u8 mtr[0x1]; + u8 reserved_at_42[0xe]; + u8 max_temperature[0x10]; + + u8 tee[0x2]; + u8 reserved_at_62[0xe]; + u8 temp_threshold_hi[0x10]; + + u8 reserved_at_80[0x10]; + u8 temp_threshold_lo[0x10]; + + u8 reserved_at_a0[0x20]; + + u8 sensor_name_hi[0x20]; + u8 sensor_name_lo[0x20]; +}; + union mlx5_ifc_ports_control_registers_document_bits { struct mlx5_ifc_bufferx_reg_bits bufferx_reg; struct mlx5_ifc_eth_2819_cntrs_grp_data_layout_bits eth_2819_cntrs_grp_data_layout; @@ -10931,6 +11000,7 @@ union mlx5_ifc_ports_control_registers_document_bits { struct mlx5_ifc_mfrl_reg_bits mfrl_reg; struct mlx5_ifc_mtutc_reg_bits mtutc_reg; struct mlx5_ifc_mrtc_reg_bits mrtc_reg; + struct mlx5_ifc_mtmp_reg_bits mtmp_reg; u8 reserved_at_0[0x60e0]; }; diff --git 
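/*
 * Illustrative sketch (not from this series): reading a temperature sample
 * through the new MTMP access register.  Field names follow the
 * mlx5_ifc_mtmp_reg_bits layout above; the scaling of "temperature" is
 * device defined.  foo_read_temp is hypothetical.
 */
#include <linux/mlx5/driver.h>
#include <linux/mlx5/mlx5_ifc.h>

static int foo_read_temp(struct mlx5_core_dev *mdev, u32 sensor, u16 *temp)
{
	u32 in[MLX5_ST_SZ_DW(mtmp_reg)] = {};
	u32 out[MLX5_ST_SZ_DW(mtmp_reg)] = {};
	int err;

	MLX5_SET(mtmp_reg, in, sensor_index, sensor);
	err = mlx5_core_access_reg(mdev, in, sizeof(in), out, sizeof(out),
				   MLX5_REG_MTMP, 0, 0);
	if (err)
		return err;

	*temp = MLX5_GET(mtmp_reg, out, temperature);
	return 0;
}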
a/include/linux/mlx5/port.h b/include/linux/mlx5/port.h index e96ee1e348cb..98b2e1e149f9 100644 --- a/include/linux/mlx5/port.h +++ b/include/linux/mlx5/port.h @@ -141,6 +141,12 @@ enum mlx5_ptys_width { MLX5_PTYS_WIDTH_12X = 1 << 4, }; +struct mlx5_port_eth_proto { + u32 cap; + u32 admin; + u32 oper; +}; + #define MLX5E_PROT_MASK(link_mode) (1U << link_mode) #define MLX5_GET_ETH_PROTO(reg, out, ext, field) \ (ext ? MLX5_GET(reg, out, ext_##field) : \ @@ -218,4 +224,14 @@ int mlx5_set_trust_state(struct mlx5_core_dev *mdev, u8 trust_state); int mlx5_query_trust_state(struct mlx5_core_dev *mdev, u8 *trust_state); int mlx5_set_dscp2prio(struct mlx5_core_dev *mdev, u8 dscp, u8 prio); int mlx5_query_dscp2prio(struct mlx5_core_dev *mdev, u8 *dscp2prio); + +int mlx5_port_query_eth_proto(struct mlx5_core_dev *dev, u8 port, bool ext, + struct mlx5_port_eth_proto *eproto); +bool mlx5_ptys_ext_supported(struct mlx5_core_dev *mdev); +u32 mlx5_port_ptys2speed(struct mlx5_core_dev *mdev, u32 eth_proto_oper, + bool force_legacy); +u32 mlx5_port_speed2linkmodes(struct mlx5_core_dev *mdev, u32 speed, + bool force_legacy); +int mlx5_port_max_linkspeed(struct mlx5_core_dev *mdev, u32 *speed); + #endif /* __MLX5_PORT_H__ */ diff --git a/include/linux/mlx5/qp.h b/include/linux/mlx5/qp.h index df55fbb65717..bd53cf4be7bd 100644 --- a/include/linux/mlx5/qp.h +++ b/include/linux/mlx5/qp.h @@ -499,6 +499,16 @@ struct mlx5_stride_block_ctrl_seg { __be16 num_entries; }; +struct mlx5_wqe_flow_update_ctrl_seg { + __be32 flow_idx_update; + __be32 dest_handle; + u8 reserved0[40]; +}; + +struct mlx5_wqe_header_modify_argument_update_seg { + u8 argument_list[64]; +}; + struct mlx5_core_qp { struct mlx5_core_rsc_common common; /* must be first */ void (*event) (struct mlx5_core_qp *, int); diff --git a/include/linux/mmc/sdio_ids.h b/include/linux/mmc/sdio_ids.h index 0e4ef9c5127a..c653accdc7fd 100644 --- a/include/linux/mmc/sdio_ids.h +++ b/include/linux/mmc/sdio_ids.h @@ -74,10 +74,13 @@ #define SDIO_DEVICE_ID_BROADCOM_43362 0xa962 #define SDIO_DEVICE_ID_BROADCOM_43364 0xa9a4 #define SDIO_DEVICE_ID_BROADCOM_43430 0xa9a6 -#define SDIO_DEVICE_ID_BROADCOM_CYPRESS_43439 0xa9af +#define SDIO_DEVICE_ID_BROADCOM_43439 0xa9af #define SDIO_DEVICE_ID_BROADCOM_43455 0xa9bf #define SDIO_DEVICE_ID_BROADCOM_CYPRESS_43752 0xaae8 +#define SDIO_VENDOR_ID_CYPRESS 0x04b4 +#define SDIO_DEVICE_ID_BROADCOM_CYPRESS_43439 0xbd3d + #define SDIO_VENDOR_ID_MARVELL 0x02df #define SDIO_DEVICE_ID_MARVELL_LIBERTAS 0x9103 #define SDIO_DEVICE_ID_MARVELL_8688_WLAN 0x9104 @@ -112,6 +115,15 @@ #define SDIO_VENDOR_ID_MICROCHIP_WILC 0x0296 #define SDIO_DEVICE_ID_MICROCHIP_WILC1000 0x5347 +#define SDIO_VENDOR_ID_REALTEK 0x024c +#define SDIO_DEVICE_ID_REALTEK_RTW8723BS 0xb723 +#define SDIO_DEVICE_ID_REALTEK_RTW8821BS 0xb821 +#define SDIO_DEVICE_ID_REALTEK_RTW8822BS 0xb822 +#define SDIO_DEVICE_ID_REALTEK_RTW8821CS 0xc821 +#define SDIO_DEVICE_ID_REALTEK_RTW8822CS 0xc822 +#define SDIO_DEVICE_ID_REALTEK_RTW8723DS 0xd723 +#define SDIO_DEVICE_ID_REALTEK_RTW8821DS 0xd821 + #define SDIO_VENDOR_ID_SIANO 0x039a #define SDIO_DEVICE_ID_SIANO_NOVA_B0 0x0201 #define SDIO_DEVICE_ID_SIANO_NICE 0x0202 diff --git a/include/linux/module.h b/include/linux/module.h index e65a71b04490..3730ed99e5f7 100644 --- a/include/linux/module.h +++ b/include/linux/module.h @@ -608,14 +608,6 @@ static inline bool within_module(unsigned long addr, const struct module *mod) /* Search for module by name: must be in a RCU-sched critical section. 
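/*
 * Illustrative sketch (not from this series): querying the PTYS
 * capability/admin/oper masks of a port with the helpers newly exported
 * from mlx5/port.h above.  foo_query_port_speed is hypothetical.
 */
#include <linux/mlx5/driver.h>
#include <linux/mlx5/port.h>

static int foo_query_port_speed(struct mlx5_core_dev *mdev, u32 *speed)
{
	struct mlx5_port_eth_proto eproto;
	bool ext = mlx5_ptys_ext_supported(mdev);
	int err;

	err = mlx5_port_query_eth_proto(mdev, 1, ext, &eproto);
	if (err)
		return err;

	/* map the operational proto mask to a link speed value */
	*speed = mlx5_port_ptys2speed(mdev, eproto.oper, !ext);
	return 0;
}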
*/ struct module *find_module(const char *name); -/* Returns 0 and fills in value, defined and namebuf, or -ERANGE if - symnum out of range. */ -int module_get_kallsym(unsigned int symnum, unsigned long *value, char *type, - char *name, char *module_name, int *exported); - -/* Look for this name: can be of form module:name. */ -unsigned long module_kallsyms_lookup_name(const char *name); - extern void __noreturn __module_put_and_kthread_exit(struct module *mod, long code); #define module_put_and_kthread_exit(code) __module_put_and_kthread_exit(THIS_MODULE, code) @@ -662,17 +654,6 @@ static inline void __module_get(struct module *module) /* Dereference module function descriptor */ void *dereference_module_function_descriptor(struct module *mod, void *ptr); -/* For kallsyms to ask for address resolution. namebuf should be at - * least KSYM_NAME_LEN long: a pointer to namebuf is returned if - * found, otherwise NULL. */ -const char *module_address_lookup(unsigned long addr, - unsigned long *symbolsize, - unsigned long *offset, - char **modname, const unsigned char **modbuildid, - char *namebuf); -int lookup_module_symbol_name(unsigned long addr, char *symname); -int lookup_module_symbol_attrs(unsigned long addr, unsigned long *size, unsigned long *offset, char *modname, char *name); - int register_module_notifier(struct notifier_block *nb); int unregister_module_notifier(struct notifier_block *nb); @@ -763,39 +744,6 @@ static inline void module_put(struct module *module) #define module_name(mod) "kernel" -/* For kallsyms to ask for address resolution. NULL means not found. */ -static inline const char *module_address_lookup(unsigned long addr, - unsigned long *symbolsize, - unsigned long *offset, - char **modname, - const unsigned char **modbuildid, - char *namebuf) -{ - return NULL; -} - -static inline int lookup_module_symbol_name(unsigned long addr, char *symname) -{ - return -ERANGE; -} - -static inline int lookup_module_symbol_attrs(unsigned long addr, unsigned long *size, unsigned long *offset, char *modname, char *name) -{ - return -ERANGE; -} - -static inline int module_get_kallsym(unsigned int symnum, unsigned long *value, - char *type, char *name, - char *module_name, int *exported) -{ - return -ERANGE; -} - -static inline unsigned long module_kallsyms_lookup_name(const char *name) -{ - return 0; -} - static inline int register_module_notifier(struct notifier_block *nb) { /* no events will happen anyway, so this can always succeed */ @@ -891,7 +839,36 @@ int module_kallsyms_on_each_symbol(const char *modname, int (*fn)(void *, const char *, struct module *, unsigned long), void *data); -#else + +/* For kallsyms to ask for address resolution. namebuf should be at + * least KSYM_NAME_LEN long: a pointer to namebuf is returned if + * found, otherwise NULL. + */ +const char *module_address_lookup(unsigned long addr, + unsigned long *symbolsize, + unsigned long *offset, + char **modname, const unsigned char **modbuildid, + char *namebuf); +int lookup_module_symbol_name(unsigned long addr, char *symname); +int lookup_module_symbol_attrs(unsigned long addr, + unsigned long *size, + unsigned long *offset, + char *modname, + char *name); + +/* Returns 0 and fills in value, defined and namebuf, or -ERANGE if + * symnum out of range. + */ +int module_get_kallsym(unsigned int symnum, unsigned long *value, char *type, + char *name, char *module_name, int *exported); + +/* Look for this name: can be of form module:name. 
*/ +unsigned long module_kallsyms_lookup_name(const char *name); + +unsigned long find_kallsyms_symbol_value(struct module *mod, const char *name); + +#else /* CONFIG_MODULES && CONFIG_KALLSYMS */ + static inline int module_kallsyms_on_each_symbol(const char *modname, int (*fn)(void *, const char *, struct module *, unsigned long), @@ -899,6 +876,50 @@ static inline int module_kallsyms_on_each_symbol(const char *modname, { return -EOPNOTSUPP; } + +/* For kallsyms to ask for address resolution. NULL means not found. */ +static inline const char *module_address_lookup(unsigned long addr, + unsigned long *symbolsize, + unsigned long *offset, + char **modname, + const unsigned char **modbuildid, + char *namebuf) +{ + return NULL; +} + +static inline int lookup_module_symbol_name(unsigned long addr, char *symname) +{ + return -ERANGE; +} + +static inline int lookup_module_symbol_attrs(unsigned long addr, + unsigned long *size, + unsigned long *offset, + char *modname, + char *name) +{ + return -ERANGE; +} + +static inline int module_get_kallsym(unsigned int symnum, unsigned long *value, + char *type, char *name, + char *module_name, int *exported) +{ + return -ERANGE; +} + +static inline unsigned long module_kallsyms_lookup_name(const char *name) +{ + return 0; +} + +static inline unsigned long find_kallsyms_symbol_value(struct module *mod, + const char *name) +{ + return 0; +} + #endif /* CONFIG_MODULES && CONFIG_KALLSYMS */ #endif /* _LINUX_MODULE_H */ diff --git a/include/linux/net_tstamp.h b/include/linux/net_tstamp.h new file mode 100644 index 000000000000..fd67f3cc0c4b --- /dev/null +++ b/include/linux/net_tstamp.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef _LINUX_NET_TIMESTAMPING_H_ +#define _LINUX_NET_TIMESTAMPING_H_ + +#include <uapi/linux/net_tstamp.h> + +/** + * struct kernel_hwtstamp_config - Kernel copy of struct hwtstamp_config + * + * @flags: see struct hwtstamp_config + * @tx_type: see struct hwtstamp_config + * @rx_filter: see struct hwtstamp_config + * + * Prefer using this structure for in-kernel processing of hardware + * timestamping configuration, over the inextensible struct hwtstamp_config + * exposed to the %SIOCGHWTSTAMP and %SIOCSHWTSTAMP ioctl UAPI. 
+ */ +struct kernel_hwtstamp_config { + int flags; + int tx_type; + int rx_filter; +}; + +static inline void hwtstamp_config_to_kernel(struct kernel_hwtstamp_config *kernel_cfg, + const struct hwtstamp_config *cfg) +{ + kernel_cfg->flags = cfg->flags; + kernel_cfg->tx_type = cfg->tx_type; + kernel_cfg->rx_filter = cfg->rx_filter; +} + +#endif /* _LINUX_NET_TIMESTAMPING_H_ */ diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index c35f04f636f1..08fbd4622ccf 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -52,6 +52,7 @@ #include <linux/rbtree.h> #include <net/net_trackers.h> #include <net/net_debug.h> +#include <net/dropreason-core.h> struct netpoll_info; struct device; @@ -359,18 +360,22 @@ struct napi_struct { unsigned long gro_bitmask; int (*poll)(struct napi_struct *, int); #ifdef CONFIG_NETPOLL + /* CPU actively polling if netpoll is configured */ int poll_owner; #endif + /* CPU on which NAPI has been scheduled for processing */ + int list_owner; struct net_device *dev; struct gro_list gro_hash[GRO_HASH_BUCKETS]; struct sk_buff *skb; struct list_head rx_list; /* Pending GRO_NORMAL skbs */ int rx_count; /* length of rx_list */ + unsigned int napi_id; struct hrtimer timer; + struct task_struct *thread; + /* control-path-only fields follow */ struct list_head dev_list; struct hlist_node napi_hash_node; - unsigned int napi_id; - struct task_struct *thread; }; enum { @@ -508,15 +513,18 @@ static inline bool napi_reschedule(struct napi_struct *napi) return false; } -bool napi_complete_done(struct napi_struct *n, int work_done); /** - * napi_complete - NAPI processing complete - * @n: NAPI context + * napi_complete_done - NAPI processing complete + * @n: NAPI context + * @work_done: number of packets processed * - * Mark NAPI processing as complete. - * Consider using napi_complete_done() instead. + * Mark NAPI processing as complete. Should only be called if poll budget + * has not been completely consumed. + * Prefer over napi_complete(). * Return false if device should avoid rearming interrupts. */ +bool napi_complete_done(struct napi_struct *n, int work_done); + static inline bool napi_complete(struct napi_struct *n) { return napi_complete_done(n, 0); @@ -1308,6 +1316,17 @@ struct netdev_net_notifier { * Used to add FDB entries to dump requests. Implementers should add * entries to skb and update idx with the number of entries. * + * int (*ndo_mdb_add)(struct net_device *dev, struct nlattr *tb[], + * u16 nlmsg_flags, struct netlink_ext_ack *extack); + * Adds an MDB entry to dev. + * int (*ndo_mdb_del)(struct net_device *dev, struct nlattr *tb[], + * struct netlink_ext_ack *extack); + * Deletes the MDB entry from dev. + * int (*ndo_mdb_dump)(struct net_device *dev, struct sk_buff *skb, + * struct netlink_callback *cb); + * Dumps MDB entries from dev. The first argument (marker) in the netlink + * callback is used by core rtnetlink code. 
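/*
 * Illustrative sketch (not from this series): converting the user-supplied
 * SIOCSHWTSTAMP argument into the extensible in-kernel representation
 * introduced above.  foo_hwtstamp_set is a hypothetical ndo_eth_ioctl-style
 * handler.
 */
#include <linux/net_tstamp.h>
#include <linux/netdevice.h>
#include <linux/uaccess.h>

static int foo_hwtstamp_set(struct net_device *dev, struct ifreq *ifr)
{
	struct kernel_hwtstamp_config kernel_cfg;
	struct hwtstamp_config cfg;

	if (copy_from_user(&cfg, ifr->ifr_data, sizeof(cfg)))
		return -EFAULT;

	hwtstamp_config_to_kernel(&kernel_cfg, &cfg);

	/* ... validate kernel_cfg.tx_type / rx_filter and program the HW ... */

	return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ? -EFAULT : 0;
}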
+ * * int (*ndo_bridge_setlink)(struct net_device *dev, struct nlmsghdr *nlh, * u16 flags, struct netlink_ext_ack *extack) * int (*ndo_bridge_getlink)(struct sk_buff *skb, u32 pid, u32 seq, @@ -1570,6 +1589,16 @@ struct net_device_ops { const unsigned char *addr, u16 vid, u32 portid, u32 seq, struct netlink_ext_ack *extack); + int (*ndo_mdb_add)(struct net_device *dev, + struct nlattr *tb[], + u16 nlmsg_flags, + struct netlink_ext_ack *extack); + int (*ndo_mdb_del)(struct net_device *dev, + struct nlattr *tb[], + struct netlink_ext_ack *extack); + int (*ndo_mdb_dump)(struct net_device *dev, + struct sk_buff *skb, + struct netlink_callback *cb); int (*ndo_bridge_setlink)(struct net_device *dev, struct nlmsghdr *nlh, u16 flags, @@ -2463,6 +2492,7 @@ static inline struct netdev_queue *netdev_get_tx_queue(const struct net_device *dev, unsigned int index) { + DEBUG_NET_WARN_ON_ONCE(index >= dev->num_tx_queues); return &dev->_tx[index]; } @@ -2958,7 +2988,8 @@ netdev_notifier_info_to_extack(const struct netdev_notifier_info *info) } int call_netdevice_notifiers(unsigned long val, struct net_device *dev); - +int call_netdevice_notifiers_info(unsigned long val, + struct netdev_notifier_info *info); extern rwlock_t dev_base_lock; /* Device list lock */ @@ -3163,6 +3194,10 @@ struct softnet_data { #ifdef CONFIG_RPS struct softnet_data *rps_ipi_list; #endif + + bool in_net_rx_action; + bool in_napi_threaded_poll; + #ifdef CONFIG_NET_FLOW_LIMIT struct sd_flow_limit __rcu *flow_limit; #endif @@ -3307,6 +3342,7 @@ static inline void netif_tx_wake_all_queues(struct net_device *dev) static __always_inline void netif_tx_stop_queue(struct netdev_queue *dev_queue) { + /* Must be an atomic op see netif_txq_try_stop() */ set_bit(__QUEUE_STATE_DRV_XOFF, &dev_queue->state); } @@ -3503,7 +3539,7 @@ static inline void netdev_tx_completed_queue(struct netdev_queue *dev_queue, * netdev_tx_sent_queue will miss the update and cause the queue to * be stopped forever */ - smp_mb(); + smp_mb(); /* NOTE: netdev_txq_completed_mb() assumes this exists */ if (unlikely(dql_avail(&dev_queue->dql) < 0)) return; @@ -3807,13 +3843,8 @@ static inline unsigned int get_netdev_rx_queue_index( int netif_get_num_default_rss_queues(void); -enum skb_free_reason { - SKB_REASON_CONSUMED, - SKB_REASON_DROPPED, -}; - -void __dev_kfree_skb_irq(struct sk_buff *skb, enum skb_free_reason reason); -void __dev_kfree_skb_any(struct sk_buff *skb, enum skb_free_reason reason); +void dev_kfree_skb_irq_reason(struct sk_buff *skb, enum skb_drop_reason reason); +void dev_kfree_skb_any_reason(struct sk_buff *skb, enum skb_drop_reason reason); /* * It is not allowed to call kfree_skb() or consume_skb() from hardware @@ -3836,22 +3867,22 @@ void __dev_kfree_skb_any(struct sk_buff *skb, enum skb_free_reason reason); */ static inline void dev_kfree_skb_irq(struct sk_buff *skb) { - __dev_kfree_skb_irq(skb, SKB_REASON_DROPPED); + dev_kfree_skb_irq_reason(skb, SKB_DROP_REASON_NOT_SPECIFIED); } static inline void dev_consume_skb_irq(struct sk_buff *skb) { - __dev_kfree_skb_irq(skb, SKB_REASON_CONSUMED); + dev_kfree_skb_irq_reason(skb, SKB_CONSUMED); } static inline void dev_kfree_skb_any(struct sk_buff *skb) { - __dev_kfree_skb_any(skb, SKB_REASON_DROPPED); + dev_kfree_skb_any_reason(skb, SKB_DROP_REASON_NOT_SPECIFIED); } static inline void dev_consume_skb_any(struct sk_buff *skb) { - __dev_kfree_skb_any(skb, SKB_REASON_CONSUMED); + dev_kfree_skb_any_reason(skb, SKB_CONSUMED); } u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp, diff --git 
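/*
 * Illustrative sketch (not from this series): a TX completion path freeing
 * skbs with an explicit drop reason now that the SKB_REASON_CONSUMED/
 * SKB_REASON_DROPPED pair is gone.  foo_tx_complete is hypothetical;
 * SKB_DROP_REASON_DEV_HDR is one of the generic reasons from
 * net/dropreason-core.h.
 */
#include <linux/netdevice.h>

static void foo_tx_complete(struct sk_buff *skb, bool error)
{
	if (unlikely(error))
		dev_kfree_skb_any_reason(skb, SKB_DROP_REASON_DEV_HDR);
	else
		dev_consume_skb_any(skb);
}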
a/include/linux/netfilter.h b/include/linux/netfilter.h index c8e03bcaecaa..0762444e3767 100644 --- a/include/linux/netfilter.h +++ b/include/linux/netfilter.h @@ -80,6 +80,7 @@ typedef unsigned int nf_hookfn(void *priv, enum nf_hook_ops_type { NF_HOOK_OP_UNDEFINED, NF_HOOK_OP_NF_TABLES, + NF_HOOK_OP_BPF, }; struct nf_hook_ops { diff --git a/include/linux/netfilter/nfnetlink.h b/include/linux/netfilter/nfnetlink.h index 241e005f290a..e9a9ab34a7cc 100644 --- a/include/linux/netfilter/nfnetlink.h +++ b/include/linux/netfilter/nfnetlink.h @@ -45,7 +45,6 @@ struct nfnetlink_subsystem { int (*commit)(struct net *net, struct sk_buff *skb); int (*abort)(struct net *net, struct sk_buff *skb, enum nfnl_abort_action action); - void (*cleanup)(struct net *net); bool (*valid_genid)(struct net *net, u32 genid); }; diff --git a/include/linux/netfilter_ipv6.h b/include/linux/netfilter_ipv6.h index 48314ade1506..7834c0be2831 100644 --- a/include/linux/netfilter_ipv6.h +++ b/include/linux/netfilter_ipv6.h @@ -197,6 +197,8 @@ static inline int nf_cookie_v6_check(const struct ipv6hdr *iph, __sum16 nf_ip6_checksum(struct sk_buff *skb, unsigned int hook, unsigned int dataoff, u_int8_t protocol); +int nf_ip6_check_hbh_len(struct sk_buff *skb, u32 *plen); + int ipv6_netfilter_init(void); void ipv6_netfilter_fini(void); diff --git a/include/linux/netlink.h b/include/linux/netlink.h index c43ac7690eca..19c0791ed9d5 100644 --- a/include/linux/netlink.h +++ b/include/linux/netlink.h @@ -50,7 +50,6 @@ struct netlink_kernel_cfg { struct mutex *cb_mutex; int (*bind)(struct net *net, int group); void (*unbind)(struct net *net, int group); - bool (*compare)(struct net *net, struct sock *sk); }; struct sock *__netlink_kernel_create(struct net *net, int unit, @@ -162,9 +161,31 @@ struct netlink_ext_ack { } \ } while (0) +#define NL_SET_ERR_MSG_ATTR_POL_FMT(extack, attr, pol, fmt, args...) do { \ + struct netlink_ext_ack *__extack = (extack); \ + \ + if (!__extack) \ + break; \ + \ + if (snprintf(__extack->_msg_buf, NETLINK_MAX_FMTMSG_LEN, \ + "%s" fmt "%s", "", ##args, "") >= \ + NETLINK_MAX_FMTMSG_LEN) \ + net_warn_ratelimited("%s" fmt "%s", "truncated extack: ", \ + ##args, "\n"); \ + \ + do_trace_netlink_extack(__extack->_msg_buf); \ + \ + __extack->_msg = __extack->_msg_buf; \ + __extack->bad_attr = (attr); \ + __extack->policy = (pol); \ +} while (0) + #define NL_SET_ERR_MSG_ATTR(extack, attr, msg) \ NL_SET_ERR_MSG_ATTR_POL(extack, attr, NULL, msg) +#define NL_SET_ERR_MSG_ATTR_FMT(extack, attr, msg, args...) 
\ + NL_SET_ERR_MSG_ATTR_POL_FMT(extack, attr, NULL, msg, ##args) + #define NL_SET_ERR_ATTR_MISS(extack, nest, type) do { \ struct netlink_ext_ack *__extack = (extack); \ \ diff --git a/include/linux/pcs/pcs-mtk-lynxi.h b/include/linux/pcs/pcs-mtk-lynxi.h new file mode 100644 index 000000000000..be3b4ab32f4a --- /dev/null +++ b/include/linux/pcs/pcs-mtk-lynxi.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __LINUX_PCS_MTK_LYNXI_H +#define __LINUX_PCS_MTK_LYNXI_H + +#include <linux/phylink.h> +#include <linux/regmap.h> + +#define MTK_SGMII_FLAG_PN_SWAP BIT(0) +struct phylink_pcs *mtk_pcs_lynxi_create(struct device *dev, + struct regmap *regmap, + u32 ana_rgc3, u32 flags); +void mtk_pcs_lynxi_destroy(struct phylink_pcs *pcs); +#endif diff --git a/include/linux/pds/pds_adminq.h b/include/linux/pds/pds_adminq.h new file mode 100644 index 000000000000..98a60ce87b92 --- /dev/null +++ b/include/linux/pds/pds_adminq.h @@ -0,0 +1,647 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright(c) 2023 Advanced Micro Devices, Inc */ + +#ifndef _PDS_CORE_ADMINQ_H_ +#define _PDS_CORE_ADMINQ_H_ + +#define PDSC_ADMINQ_MAX_POLL_INTERVAL 256 + +enum pds_core_adminq_flags { + PDS_AQ_FLAG_FASTPOLL = BIT(1), /* completion poll at 1ms */ +}; + +/* + * enum pds_core_adminq_opcode - AdminQ command opcodes + * These commands are only processed on AdminQ, not available in devcmd + */ +enum pds_core_adminq_opcode { + PDS_AQ_CMD_NOP = 0, + + /* Client control */ + PDS_AQ_CMD_CLIENT_REG = 6, + PDS_AQ_CMD_CLIENT_UNREG = 7, + PDS_AQ_CMD_CLIENT_CMD = 8, + + /* LIF commands */ + PDS_AQ_CMD_LIF_IDENTIFY = 20, + PDS_AQ_CMD_LIF_INIT = 21, + PDS_AQ_CMD_LIF_RESET = 22, + PDS_AQ_CMD_LIF_GETATTR = 23, + PDS_AQ_CMD_LIF_SETATTR = 24, + PDS_AQ_CMD_LIF_SETPHC = 25, + + PDS_AQ_CMD_RX_MODE_SET = 30, + PDS_AQ_CMD_RX_FILTER_ADD = 31, + PDS_AQ_CMD_RX_FILTER_DEL = 32, + + /* Queue commands */ + PDS_AQ_CMD_Q_IDENTIFY = 39, + PDS_AQ_CMD_Q_INIT = 40, + PDS_AQ_CMD_Q_CONTROL = 41, + + /* SR/IOV commands */ + PDS_AQ_CMD_VF_GETATTR = 60, + PDS_AQ_CMD_VF_SETATTR = 61, +}; + +/* + * enum pds_core_notifyq_opcode - NotifyQ event codes + */ +enum pds_core_notifyq_opcode { + PDS_EVENT_LINK_CHANGE = 1, + PDS_EVENT_RESET = 2, + PDS_EVENT_XCVR = 5, + PDS_EVENT_CLIENT = 6, +}; + +#define PDS_COMP_COLOR_MASK 0x80 + +/** + * struct pds_core_notifyq_event - Generic event reporting structure + * @eid: event number + * @ecode: event code + * + * This is the generic event report struct from which the other + * actual events will be formed. + */ +struct pds_core_notifyq_event { + __le64 eid; + __le16 ecode; +}; + +/** + * struct pds_core_link_change_event - Link change event notification + * @eid: event number + * @ecode: event code = PDS_EVENT_LINK_CHANGE + * @link_status: link up/down, with error bits + * @link_speed: speed of the network link + * + * Sent when the network link state changes between UP and DOWN + */ +struct pds_core_link_change_event { + __le64 eid; + __le16 ecode; + __le16 link_status; + __le32 link_speed; /* units of 1Mbps: e.g. 10000 = 10Gbps */ +}; + +/** + * struct pds_core_reset_event - Reset event notification + * @eid: event number + * @ecode: event code = PDS_EVENT_RESET + * @reset_code: reset type + * @state: 0=pending, 1=complete, 2=error + * + * Sent when the NIC or some subsystem is going to be or + * has been reset. 
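/*
 * Illustrative sketch (not from this series): the new formatted extack
 * macros let a validation error carry the offending value while pointing
 * at the attribute.  FOO_ATTR_LIMIT, FOO_LIMIT_MAX and foo_validate are
 * hypothetical.
 */
#include <linux/netlink.h>
#include <net/netlink.h>

enum { FOO_ATTR_LIMIT = 1 };		/* hypothetical attribute */
#define FOO_LIMIT_MAX	100		/* hypothetical maximum */

static int foo_validate(struct nlattr *tb[], struct netlink_ext_ack *extack)
{
	u32 limit = nla_get_u32(tb[FOO_ATTR_LIMIT]);

	if (limit > FOO_LIMIT_MAX) {
		NL_SET_ERR_MSG_ATTR_FMT(extack, tb[FOO_ATTR_LIMIT],
					"limit %u exceeds maximum %u",
					limit, FOO_LIMIT_MAX);
		return -EINVAL;
	}
	return 0;
}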
+ */ +struct pds_core_reset_event { + __le64 eid; + __le16 ecode; + u8 reset_code; + u8 state; +}; + +/** + * struct pds_core_client_event - Client event notification + * @eid: event number + * @ecode: event code = PDS_EVENT_CLIENT + * @client_id: client to sent event to + * @client_event: wrapped event struct for the client + * + * Sent when an event needs to be passed on to a client + */ +struct pds_core_client_event { + __le64 eid; + __le16 ecode; + __le16 client_id; + u8 client_event[54]; +}; + +/** + * struct pds_core_notifyq_cmd - Placeholder for building qcq + * @data: anonymous field for building the qcq + */ +struct pds_core_notifyq_cmd { + __le32 data; /* Not used but needed for qcq structure */ +}; + +/* + * union pds_core_notifyq_comp - Overlay of notifyq event structures + */ +union pds_core_notifyq_comp { + struct { + __le64 eid; + __le16 ecode; + }; + struct pds_core_notifyq_event event; + struct pds_core_link_change_event link_change; + struct pds_core_reset_event reset; + u8 data[64]; +}; + +#define PDS_DEVNAME_LEN 32 +/** + * struct pds_core_client_reg_cmd - Register a new client with DSC + * @opcode: opcode PDS_AQ_CMD_CLIENT_REG + * @rsvd: word boundary padding + * @devname: text name of client device + * @vif_type: what type of device (enum pds_core_vif_types) + * + * Tell the DSC of the new client, and receive a client_id from DSC. + */ +struct pds_core_client_reg_cmd { + u8 opcode; + u8 rsvd[3]; + char devname[PDS_DEVNAME_LEN]; + u8 vif_type; +}; + +/** + * struct pds_core_client_reg_comp - Client registration completion + * @status: Status of the command (enum pdc_core_status_code) + * @rsvd: Word boundary padding + * @comp_index: Index in the descriptor ring for which this is the completion + * @client_id: New id assigned by DSC + * @rsvd1: Word boundary padding + * @color: Color bit + */ +struct pds_core_client_reg_comp { + u8 status; + u8 rsvd; + __le16 comp_index; + __le16 client_id; + u8 rsvd1[9]; + u8 color; +}; + +/** + * struct pds_core_client_unreg_cmd - Unregister a client from DSC + * @opcode: opcode PDS_AQ_CMD_CLIENT_UNREG + * @rsvd: word boundary padding + * @client_id: id of client being removed + * + * Tell the DSC this client is going away and remove its context + * This uses the generic completion. + */ +struct pds_core_client_unreg_cmd { + u8 opcode; + u8 rsvd; + __le16 client_id; +}; + +/** + * struct pds_core_client_request_cmd - Pass along a wrapped client AdminQ cmd + * @opcode: opcode PDS_AQ_CMD_CLIENT_CMD + * @rsvd: word boundary padding + * @client_id: id of client being removed + * @client_cmd: the wrapped client command + * + * Proxy post an adminq command for the client. + * This uses the generic completion. 
+ */ +struct pds_core_client_request_cmd { + u8 opcode; + u8 rsvd; + __le16 client_id; + u8 client_cmd[60]; +}; + +#define PDS_CORE_MAX_FRAGS 16 + +#define PDS_CORE_QCQ_F_INITED BIT(0) +#define PDS_CORE_QCQ_F_SG BIT(1) +#define PDS_CORE_QCQ_F_INTR BIT(2) +#define PDS_CORE_QCQ_F_TX_STATS BIT(3) +#define PDS_CORE_QCQ_F_RX_STATS BIT(4) +#define PDS_CORE_QCQ_F_NOTIFYQ BIT(5) +#define PDS_CORE_QCQ_F_CMB_RINGS BIT(6) +#define PDS_CORE_QCQ_F_CORE BIT(7) + +enum pds_core_lif_type { + PDS_CORE_LIF_TYPE_DEFAULT = 0, +}; + +/** + * union pds_core_lif_config - LIF configuration + * @state: LIF state (enum pds_core_lif_state) + * @rsvd: Word boundary padding + * @name: LIF name + * @rsvd2: Word boundary padding + * @features: LIF features active (enum pds_core_hw_features) + * @queue_count: Queue counts per queue-type + * @words: Full union buffer size + */ +union pds_core_lif_config { + struct { + u8 state; + u8 rsvd[3]; + char name[PDS_CORE_IFNAMSIZ]; + u8 rsvd2[12]; + __le64 features; + __le32 queue_count[PDS_CORE_QTYPE_MAX]; + } __packed; + __le32 words[64]; +}; + +/** + * struct pds_core_lif_status - LIF status register + * @eid: most recent NotifyQ event id + * @rsvd: full struct size + */ +struct pds_core_lif_status { + __le64 eid; + u8 rsvd[56]; +}; + +/** + * struct pds_core_lif_info - LIF info structure + * @config: LIF configuration structure + * @status: LIF status structure + */ +struct pds_core_lif_info { + union pds_core_lif_config config; + struct pds_core_lif_status status; +}; + +/** + * struct pds_core_lif_identity - LIF identity information (type-specific) + * @features: LIF features (see enum pds_core_hw_features) + * @version: Identify structure version + * @hw_index: LIF hardware index + * @rsvd: Word boundary padding + * @max_nb_sessions: Maximum number of sessions supported + * @rsvd2: buffer padding + * @config: LIF config struct with features, q counts + */ +struct pds_core_lif_identity { + __le64 features; + u8 version; + u8 hw_index; + u8 rsvd[2]; + __le32 max_nb_sessions; + u8 rsvd2[120]; + union pds_core_lif_config config; +}; + +/** + * struct pds_core_lif_identify_cmd - Get LIF identity info command + * @opcode: Opcode PDS_AQ_CMD_LIF_IDENTIFY + * @type: LIF type (enum pds_core_lif_type) + * @client_id: Client identifier + * @ver: Version of identify returned by device + * @rsvd: Word boundary padding + * @ident_pa: DMA address to receive identity info + * + * Firmware will copy LIF identity data (struct pds_core_lif_identity) + * into the buffer address given. 
+ */ +struct pds_core_lif_identify_cmd { + u8 opcode; + u8 type; + __le16 client_id; + u8 ver; + u8 rsvd[3]; + __le64 ident_pa; +}; + +/** + * struct pds_core_lif_identify_comp - LIF identify command completion + * @status: Status of the command (enum pds_core_status_code) + * @ver: Version of identify returned by device + * @bytes: Bytes copied into the buffer + * @rsvd: Word boundary padding + * @color: Color bit + */ +struct pds_core_lif_identify_comp { + u8 status; + u8 ver; + __le16 bytes; + u8 rsvd[11]; + u8 color; +}; + +/** + * struct pds_core_lif_init_cmd - LIF init command + * @opcode: Opcode PDS_AQ_CMD_LIF_INIT + * @type: LIF type (enum pds_core_lif_type) + * @client_id: Client identifier + * @rsvd: Word boundary padding + * @info_pa: Destination address for LIF info (struct pds_core_lif_info) + */ +struct pds_core_lif_init_cmd { + u8 opcode; + u8 type; + __le16 client_id; + __le32 rsvd; + __le64 info_pa; +}; + +/** + * struct pds_core_lif_init_comp - LIF init command completion + * @status: Status of the command (enum pds_core_status_code) + * @rsvd: Word boundary padding + * @hw_index: Hardware index of the initialized LIF + * @rsvd1: Word boundary padding + * @color: Color bit + */ +struct pds_core_lif_init_comp { + u8 status; + u8 rsvd; + __le16 hw_index; + u8 rsvd1[11]; + u8 color; +}; + +/** + * struct pds_core_lif_reset_cmd - LIF reset command + * Will reset only the specified LIF. + * @opcode: Opcode PDS_AQ_CMD_LIF_RESET + * @rsvd: Word boundary padding + * @client_id: Client identifier + */ +struct pds_core_lif_reset_cmd { + u8 opcode; + u8 rsvd; + __le16 client_id; +}; + +/** + * enum pds_core_lif_attr - List of LIF attributes + * @PDS_CORE_LIF_ATTR_STATE: LIF state attribute + * @PDS_CORE_LIF_ATTR_NAME: LIF name attribute + * @PDS_CORE_LIF_ATTR_FEATURES: LIF features attribute + * @PDS_CORE_LIF_ATTR_STATS_CTRL: LIF statistics control attribute + */ +enum pds_core_lif_attr { + PDS_CORE_LIF_ATTR_STATE = 0, + PDS_CORE_LIF_ATTR_NAME = 1, + PDS_CORE_LIF_ATTR_FEATURES = 4, + PDS_CORE_LIF_ATTR_STATS_CTRL = 6, +}; + +/** + * struct pds_core_lif_setattr_cmd - Set LIF attributes on the NIC + * @opcode: Opcode PDS_AQ_CMD_LIF_SETATTR + * @attr: Attribute type (enum pds_core_lif_attr) + * @client_id: Client identifier + * @state: LIF state (enum pds_core_lif_state) + * @name: The name string, 0 terminated + * @features: Features (enum pds_core_hw_features) + * @stats_ctl: Stats control commands (enum pds_core_stats_ctl_cmd) + * @rsvd: Command Buffer padding + */ +struct pds_core_lif_setattr_cmd { + u8 opcode; + u8 attr; + __le16 client_id; + union { + u8 state; + char name[PDS_CORE_IFNAMSIZ]; + __le64 features; + u8 stats_ctl; + u8 rsvd[60]; + } __packed; +}; + +/** + * struct pds_core_lif_setattr_comp - LIF set attr command completion + * @status: Status of the command (enum pds_core_status_code) + * @rsvd: Word boundary padding + * @comp_index: Index in the descriptor ring for which this is the completion + * @features: Features (enum pds_core_hw_features) + * @rsvd2: Word boundary padding + * @color: Color bit + */ +struct pds_core_lif_setattr_comp { + u8 status; + u8 rsvd; + __le16 comp_index; + union { + __le64 features; + u8 rsvd2[11]; + } __packed; + u8 color; +}; + +/** + * struct pds_core_lif_getattr_cmd - Get LIF attributes from the NIC + * @opcode: Opcode PDS_AQ_CMD_LIF_GETATTR + * @attr: Attribute type (enum pds_core_lif_attr) + * @client_id: Client identifier + */ +struct pds_core_lif_getattr_cmd { + u8 opcode; + u8 attr; + __le16 client_id; +}; + +/** + * struct 
pds_core_lif_getattr_comp - LIF get attr command completion + * @status: Status of the command (enum pds_core_status_code) + * @rsvd: Word boundary padding + * @comp_index: Index in the descriptor ring for which this is the completion + * @state: LIF state (enum pds_core_lif_state) + * @name: LIF name string, 0 terminated + * @features: Features (enum pds_core_hw_features) + * @rsvd2: Word boundary padding + * @color: Color bit + */ +struct pds_core_lif_getattr_comp { + u8 status; + u8 rsvd; + __le16 comp_index; + union { + u8 state; + __le64 features; + u8 rsvd2[11]; + } __packed; + u8 color; +}; + +/** + * union pds_core_q_identity - Queue identity information + * @version: Queue type version that can be used with FW + * @supported: Bitfield of queue versions, first bit = ver 0 + * @rsvd: Word boundary padding + * @features: Queue features + * @desc_sz: Descriptor size + * @comp_sz: Completion descriptor size + * @rsvd2: Word boundary padding + */ +struct pds_core_q_identity { + u8 version; + u8 supported; + u8 rsvd[6]; +#define PDS_CORE_QIDENT_F_CQ 0x01 /* queue has completion ring */ + __le64 features; + __le16 desc_sz; + __le16 comp_sz; + u8 rsvd2[6]; +}; + +/** + * struct pds_core_q_identify_cmd - queue identify command + * @opcode: Opcode PDS_AQ_CMD_Q_IDENTIFY + * @type: Logical queue type (enum pds_core_logical_qtype) + * @client_id: Client identifier + * @ver: Highest queue type version that the driver supports + * @rsvd: Word boundary padding + * @ident_pa: DMA address to receive the data (struct pds_core_q_identity) + */ +struct pds_core_q_identify_cmd { + u8 opcode; + u8 type; + __le16 client_id; + u8 ver; + u8 rsvd[3]; + __le64 ident_pa; +}; + +/** + * struct pds_core_q_identify_comp - queue identify command completion + * @status: Status of the command (enum pds_core_status_code) + * @rsvd: Word boundary padding + * @comp_index: Index in the descriptor ring for which this is the completion + * @ver: Queue type version that can be used with FW + * @rsvd1: Word boundary padding + * @color: Color bit + */ +struct pds_core_q_identify_comp { + u8 status; + u8 rsvd; + __le16 comp_index; + u8 ver; + u8 rsvd1[10]; + u8 color; +}; + +/** + * struct pds_core_q_init_cmd - Queue init command + * @opcode: Opcode PDS_AQ_CMD_Q_INIT + * @type: Logical queue type + * @client_id: Client identifier + * @ver: Queue type version + * @rsvd: Word boundary padding + * @index: (LIF, qtype) relative admin queue index + * @intr_index: Interrupt control register index, or Event queue index + * @pid: Process ID + * @flags: + * IRQ: Interrupt requested on completion + * ENA: Enable the queue. If ENA=0 the queue is initialized + * but remains disabled, to be later enabled with the + * Queue Enable command. If ENA=1, then queue is + * initialized and then enabled. + * @cos: Class of service for this queue + * @ring_size: Queue ring size, encoded as a log2(size), in + * number of descriptors. The actual ring size is + * (1 << ring_size). For example, to select a ring size + * of 64 descriptors write ring_size = 6. The minimum + * ring_size value is 2 for a ring of 4 descriptors. + * The maximum ring_size value is 12 for a ring of 4k + * descriptors. Values of ring_size <2 and >12 are + * reserved. 
+ * @ring_base: Queue ring base address + * @cq_ring_base: Completion queue ring base address + */ +struct pds_core_q_init_cmd { + u8 opcode; + u8 type; + __le16 client_id; + u8 ver; + u8 rsvd[3]; + __le32 index; + __le16 pid; + __le16 intr_index; + __le16 flags; +#define PDS_CORE_QINIT_F_IRQ 0x01 /* Request interrupt on completion */ +#define PDS_CORE_QINIT_F_ENA 0x02 /* Enable the queue */ + u8 cos; +#define PDS_CORE_QSIZE_MIN_LG2 2 +#define PDS_CORE_QSIZE_MAX_LG2 12 + u8 ring_size; + __le64 ring_base; + __le64 cq_ring_base; +} __packed; + +/** + * struct pds_core_q_init_comp - Queue init command completion + * @status: Status of the command (enum pds_core_status_code) + * @rsvd: Word boundary padding + * @comp_index: Index in the descriptor ring for which this is the completion + * @hw_index: Hardware Queue ID + * @hw_type: Hardware Queue type + * @rsvd2: Word boundary padding + * @color: Color + */ +struct pds_core_q_init_comp { + u8 status; + u8 rsvd; + __le16 comp_index; + __le32 hw_index; + u8 hw_type; + u8 rsvd2[6]; + u8 color; +}; + +union pds_core_adminq_cmd { + u8 opcode; + u8 bytes[64]; + + struct pds_core_client_reg_cmd client_reg; + struct pds_core_client_unreg_cmd client_unreg; + struct pds_core_client_request_cmd client_request; + + struct pds_core_lif_identify_cmd lif_ident; + struct pds_core_lif_init_cmd lif_init; + struct pds_core_lif_reset_cmd lif_reset; + struct pds_core_lif_setattr_cmd lif_setattr; + struct pds_core_lif_getattr_cmd lif_getattr; + + struct pds_core_q_identify_cmd q_ident; + struct pds_core_q_init_cmd q_init; +}; + +union pds_core_adminq_comp { + struct { + u8 status; + u8 rsvd; + __le16 comp_index; + u8 rsvd2[11]; + u8 color; + }; + u32 words[4]; + + struct pds_core_client_reg_comp client_reg; + + struct pds_core_lif_identify_comp lif_ident; + struct pds_core_lif_init_comp lif_init; + struct pds_core_lif_setattr_comp lif_setattr; + struct pds_core_lif_getattr_comp lif_getattr; + + struct pds_core_q_identify_comp q_ident; + struct pds_core_q_init_comp q_init; +}; + +#ifndef __CHECKER__ +static_assert(sizeof(union pds_core_adminq_cmd) == 64); +static_assert(sizeof(union pds_core_adminq_comp) == 16); +static_assert(sizeof(union pds_core_notifyq_comp) == 64); +#endif /* __CHECKER__ */ + +/* The color bit is a 'done' bit for the completion descriptors + * where the meaning alternates between '1' and '0' for alternating + * passes through the completion descriptor ring. 
+ */ +static inline bool pdsc_color_match(u8 color, bool done_color) +{ + return (!!(color & PDS_COMP_COLOR_MASK)) == done_color; +} + +struct pdsc; +int pdsc_adminq_post(struct pdsc *pdsc, + union pds_core_adminq_cmd *cmd, + union pds_core_adminq_comp *comp, + bool fast_poll); + +#endif /* _PDS_CORE_ADMINQ_H_ */ diff --git a/include/linux/pds/pds_auxbus.h b/include/linux/pds/pds_auxbus.h new file mode 100644 index 000000000000..214ef12302d0 --- /dev/null +++ b/include/linux/pds/pds_auxbus.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright(c) 2023 Advanced Micro Devices, Inc */ + +#ifndef _PDSC_AUXBUS_H_ +#define _PDSC_AUXBUS_H_ + +#include <linux/auxiliary_bus.h> + +struct pds_auxiliary_dev { + struct auxiliary_device aux_dev; + struct pci_dev *vf_pdev; + u16 client_id; +}; + +int pds_client_adminq_cmd(struct pds_auxiliary_dev *padev, + union pds_core_adminq_cmd *req, + size_t req_len, + union pds_core_adminq_comp *resp, + u64 flags); +#endif /* _PDSC_AUXBUS_H_ */ diff --git a/include/linux/pds/pds_common.h b/include/linux/pds/pds_common.h new file mode 100644 index 000000000000..060331486d50 --- /dev/null +++ b/include/linux/pds/pds_common.h @@ -0,0 +1,68 @@ +/* SPDX-License-Identifier: (GPL-2.0 OR Linux-OpenIB) OR BSD-2-Clause */ +/* Copyright(c) 2023 Advanced Micro Devices, Inc. */ + +#ifndef _PDS_COMMON_H_ +#define _PDS_COMMON_H_ + +#define PDS_CORE_DRV_NAME "pds_core" + +/* the device's internal addressing uses up to 52 bits */ +#define PDS_CORE_ADDR_LEN 52 +#define PDS_CORE_ADDR_MASK (BIT_ULL(PDS_ADDR_LEN) - 1) +#define PDS_PAGE_SIZE 4096 + +enum pds_core_driver_type { + PDS_DRIVER_LINUX = 1, + PDS_DRIVER_WIN = 2, + PDS_DRIVER_DPDK = 3, + PDS_DRIVER_FREEBSD = 4, + PDS_DRIVER_IPXE = 5, + PDS_DRIVER_ESXI = 6, +}; + +enum pds_core_vif_types { + PDS_DEV_TYPE_CORE = 0, + PDS_DEV_TYPE_VDPA = 1, + PDS_DEV_TYPE_VFIO = 2, + PDS_DEV_TYPE_ETH = 3, + PDS_DEV_TYPE_RDMA = 4, + PDS_DEV_TYPE_LM = 5, + + /* new ones added before this line */ + PDS_DEV_TYPE_MAX = 16 /* don't change - used in struct size */ +}; + +#define PDS_DEV_TYPE_CORE_STR "Core" +#define PDS_DEV_TYPE_VDPA_STR "vDPA" +#define PDS_DEV_TYPE_VFIO_STR "VFio" +#define PDS_DEV_TYPE_ETH_STR "Eth" +#define PDS_DEV_TYPE_RDMA_STR "RDMA" +#define PDS_DEV_TYPE_LM_STR "LM" + +#define PDS_CORE_IFNAMSIZ 16 + +/** + * enum pds_core_logical_qtype - Logical Queue Types + * @PDS_CORE_QTYPE_ADMINQ: Administrative Queue + * @PDS_CORE_QTYPE_NOTIFYQ: Notify Queue + * @PDS_CORE_QTYPE_RXQ: Receive Queue + * @PDS_CORE_QTYPE_TXQ: Transmit Queue + * @PDS_CORE_QTYPE_EQ: Event Queue + * @PDS_CORE_QTYPE_MAX: Max queue type supported + */ +enum pds_core_logical_qtype { + PDS_CORE_QTYPE_ADMINQ = 0, + PDS_CORE_QTYPE_NOTIFYQ = 1, + PDS_CORE_QTYPE_RXQ = 2, + PDS_CORE_QTYPE_TXQ = 3, + PDS_CORE_QTYPE_EQ = 4, + + PDS_CORE_QTYPE_MAX = 16 /* don't change - used in struct size */ +}; + +int pdsc_register_notify(struct notifier_block *nb); +void pdsc_unregister_notify(struct notifier_block *nb); +void *pdsc_get_pf_struct(struct pci_dev *vf_pdev); +int pds_client_register(struct pci_dev *pf_pdev, char *devname); +int pds_client_unregister(struct pci_dev *pf_pdev, u16 client_id); +#endif /* _PDS_COMMON_H_ */ diff --git a/include/linux/pds/pds_core_if.h b/include/linux/pds/pds_core_if.h new file mode 100644 index 000000000000..e838a2b90440 --- /dev/null +++ b/include/linux/pds/pds_core_if.h @@ -0,0 +1,571 @@ +/* SPDX-License-Identifier: (GPL-2.0 OR Linux-OpenIB) OR BSD-2-Clause */ +/* Copyright(c) 2023 Advanced Micro Devices, Inc. 
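/*
 * Illustrative sketch (not from this series): a hypothetical PDS client
 * driver issuing a LIF identify over the proxied AdminQ interface, built
 * only from the structures introduced above.  foo_lif_identify is
 * hypothetical; ident_pa is assumed to be a DMA-mapped buffer for
 * struct pds_core_lif_identity.
 */
#include <linux/types.h>
#include <linux/pds/pds_common.h>
#include <linux/pds/pds_adminq.h>
#include <linux/pds/pds_auxbus.h>

static int foo_lif_identify(struct pds_auxiliary_dev *padev, dma_addr_t ident_pa)
{
	union pds_core_adminq_cmd cmd = {
		.lif_ident = {
			.opcode    = PDS_AQ_CMD_LIF_IDENTIFY,
			.type      = PDS_CORE_LIF_TYPE_DEFAULT,
			.client_id = cpu_to_le16(padev->client_id),
			.ident_pa  = cpu_to_le64(ident_pa),
		},
	};
	union pds_core_adminq_comp comp = {};

	/* poll quickly for the completion, per PDS_AQ_FLAG_FASTPOLL */
	return pds_client_adminq_cmd(padev, &cmd, sizeof(cmd), &comp,
				     PDS_AQ_FLAG_FASTPOLL);
}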
*/ + +#ifndef _PDS_CORE_IF_H_ +#define _PDS_CORE_IF_H_ + +#define PCI_VENDOR_ID_PENSANDO 0x1dd8 +#define PCI_DEVICE_ID_PENSANDO_CORE_PF 0x100c +#define PCI_DEVICE_ID_VIRTIO_NET_TRANS 0x1000 +#define PCI_DEVICE_ID_PENSANDO_IONIC_ETH_VF 0x1003 +#define PCI_DEVICE_ID_PENSANDO_VDPA_VF 0x100b +#define PDS_CORE_BARS_MAX 4 +#define PDS_CORE_PCI_BAR_DBELL 1 + +/* Bar0 */ +#define PDS_CORE_DEV_INFO_SIGNATURE 0x44455649 /* 'DEVI' */ +#define PDS_CORE_BAR0_SIZE 0x8000 +#define PDS_CORE_BAR0_DEV_INFO_REGS_OFFSET 0x0000 +#define PDS_CORE_BAR0_DEV_CMD_REGS_OFFSET 0x0800 +#define PDS_CORE_BAR0_DEV_CMD_DATA_REGS_OFFSET 0x0c00 +#define PDS_CORE_BAR0_INTR_STATUS_OFFSET 0x1000 +#define PDS_CORE_BAR0_INTR_CTRL_OFFSET 0x2000 +#define PDS_CORE_DEV_CMD_DONE 0x00000001 + +#define PDS_CORE_DEVCMD_TIMEOUT 5 + +#define PDS_CORE_CLIENT_ID 0 +#define PDS_CORE_ASIC_TYPE_CAPRI 0 + +/* + * enum pds_core_cmd_opcode - Device commands + */ +enum pds_core_cmd_opcode { + /* Core init */ + PDS_CORE_CMD_NOP = 0, + PDS_CORE_CMD_IDENTIFY = 1, + PDS_CORE_CMD_RESET = 2, + PDS_CORE_CMD_INIT = 3, + + PDS_CORE_CMD_FW_DOWNLOAD = 4, + PDS_CORE_CMD_FW_CONTROL = 5, + + /* SR/IOV commands */ + PDS_CORE_CMD_VF_GETATTR = 60, + PDS_CORE_CMD_VF_SETATTR = 61, + PDS_CORE_CMD_VF_CTRL = 62, + + /* Add commands before this line */ + PDS_CORE_CMD_MAX, + PDS_CORE_CMD_COUNT +}; + +/* + * enum pds_core_status_code - Device command return codes + */ +enum pds_core_status_code { + PDS_RC_SUCCESS = 0, /* Success */ + PDS_RC_EVERSION = 1, /* Incorrect version for request */ + PDS_RC_EOPCODE = 2, /* Invalid cmd opcode */ + PDS_RC_EIO = 3, /* I/O error */ + PDS_RC_EPERM = 4, /* Permission denied */ + PDS_RC_EQID = 5, /* Bad qid */ + PDS_RC_EQTYPE = 6, /* Bad qtype */ + PDS_RC_ENOENT = 7, /* No such element */ + PDS_RC_EINTR = 8, /* operation interrupted */ + PDS_RC_EAGAIN = 9, /* Try again */ + PDS_RC_ENOMEM = 10, /* Out of memory */ + PDS_RC_EFAULT = 11, /* Bad address */ + PDS_RC_EBUSY = 12, /* Device or resource busy */ + PDS_RC_EEXIST = 13, /* object already exists */ + PDS_RC_EINVAL = 14, /* Invalid argument */ + PDS_RC_ENOSPC = 15, /* No space left or alloc failure */ + PDS_RC_ERANGE = 16, /* Parameter out of range */ + PDS_RC_BAD_ADDR = 17, /* Descriptor contains a bad ptr */ + PDS_RC_DEV_CMD = 18, /* Device cmd attempted on AdminQ */ + PDS_RC_ENOSUPP = 19, /* Operation not supported */ + PDS_RC_ERROR = 29, /* Generic error */ + PDS_RC_ERDMA = 30, /* Generic RDMA error */ + PDS_RC_EVFID = 31, /* VF ID does not exist */ + PDS_RC_BAD_FW = 32, /* FW file is invalid or corrupted */ + PDS_RC_ECLIENT = 33, /* No such client id */ +}; + +/** + * struct pds_core_drv_identity - Driver identity information + * @drv_type: Driver type (enum pds_core_driver_type) + * @os_dist: OS distribution, numeric format + * @os_dist_str: OS distribution, string format + * @kernel_ver: Kernel version, numeric format + * @kernel_ver_str: Kernel version, string format + * @driver_ver_str: Driver version, string format + */ +struct pds_core_drv_identity { + __le32 drv_type; + __le32 os_dist; + char os_dist_str[128]; + __le32 kernel_ver; + char kernel_ver_str[32]; + char driver_ver_str[32]; +}; + +#define PDS_DEV_TYPE_MAX 16 +/** + * struct pds_core_dev_identity - Device identity information + * @version: Version of device identify + * @type: Identify type (0 for now) + * @state: Device state + * @rsvd: Word boundary padding + * @nlifs: Number of LIFs provisioned + * @nintrs: Number of interrupts provisioned + * @ndbpgs_per_lif: Number of doorbell pages per LIF + * 
@intr_coal_mult: Interrupt coalescing multiplication factor + * Scale user-supplied interrupt coalescing + * value in usecs to device units using: + * device units = usecs * mult / div + * @intr_coal_div: Interrupt coalescing division factor + * Scale user-supplied interrupt coalescing + * value in usecs to device units using: + * device units = usecs * mult / div + * @vif_types: How many of each VIF device type is supported + */ +struct pds_core_dev_identity { + u8 version; + u8 type; + u8 state; + u8 rsvd; + __le32 nlifs; + __le32 nintrs; + __le32 ndbpgs_per_lif; + __le32 intr_coal_mult; + __le32 intr_coal_div; + __le16 vif_types[PDS_DEV_TYPE_MAX]; +}; + +#define PDS_CORE_IDENTITY_VERSION_1 1 + +/** + * struct pds_core_dev_identify_cmd - Driver/device identify command + * @opcode: Opcode PDS_CORE_CMD_IDENTIFY + * @ver: Highest version of identify supported by driver + * + * Expects to find driver identification info (struct pds_core_drv_identity) + * in cmd_regs->data. Driver should keep the devcmd interface locked + * while preparing the driver info. + */ +struct pds_core_dev_identify_cmd { + u8 opcode; + u8 ver; +}; + +/** + * struct pds_core_dev_identify_comp - Device identify command completion + * @status: Status of the command (enum pds_core_status_code) + * @ver: Version of identify returned by device + * + * Device identification info (struct pds_core_dev_identity) can be found + * in cmd_regs->data. Driver should keep the devcmd interface locked + * while reading the results. + */ +struct pds_core_dev_identify_comp { + u8 status; + u8 ver; +}; + +/** + * struct pds_core_dev_reset_cmd - Device reset command + * @opcode: Opcode PDS_CORE_CMD_RESET + * + * Resets and clears all LIFs, VDevs, and VIFs on the device. + */ +struct pds_core_dev_reset_cmd { + u8 opcode; +}; + +/** + * struct pds_core_dev_reset_comp - Reset command completion + * @status: Status of the command (enum pds_core_status_code) + */ +struct pds_core_dev_reset_comp { + u8 status; +}; + +/* + * struct pds_core_dev_init_data - Pointers and info needed for the Core + * initialization PDS_CORE_CMD_INIT command. The in and out structs are + * overlays on the pds_core_dev_cmd_regs.data space for passing data down + * to the firmware on init, and then returning initialization results. + */ +struct pds_core_dev_init_data_in { + __le64 adminq_q_base; + __le64 adminq_cq_base; + __le64 notifyq_cq_base; + __le32 flags; + __le16 intr_index; + u8 adminq_ring_size; + u8 notifyq_ring_size; +}; + +struct pds_core_dev_init_data_out { + __le32 core_hw_index; + __le32 adminq_hw_index; + __le32 notifyq_hw_index; + u8 adminq_hw_type; + u8 notifyq_hw_type; +}; + +/** + * struct pds_core_dev_init_cmd - Core device initialize + * @opcode: opcode PDS_CORE_CMD_INIT + * + * Initializes the core device and sets up the AdminQ and NotifyQ. + * Expects to find initialization data (struct pds_core_dev_init_data_in) + * in cmd_regs->data. Driver should keep the devcmd interface locked + * while preparing the driver info. + */ +struct pds_core_dev_init_cmd { + u8 opcode; +}; + +/** + * struct pds_core_dev_init_comp - Core init completion + * @status: Status of the command (enum pds_core_status_code) + * + * Initialization result data (struct pds_core_dev_init_data_in) + * is found in cmd_regs->data. 
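+ *
+ * A minimal sketch of pulling the returned indices back out, assuming
+ * cmd_regs is an ioremapped struct pds_core_dev_cmd_regs pointer and the
+ * devcmd interface is still locked (local names are illustrative only):
+ *
+ *	struct pds_core_dev_init_data_out out;
+ *
+ *	memcpy_fromio(&out, cmd_regs->data, sizeof(out));
+ *	adminq_id = le32_to_cpu(out.adminq_hw_index);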
+ */ +struct pds_core_dev_init_comp { + u8 status; +}; + +/** + * struct pds_core_fw_download_cmd - Firmware download command + * @opcode: opcode + * @rsvd: Word boundary padding + * @addr: DMA address of the firmware buffer + * @offset: offset of the firmware buffer within the full image + * @length: number of valid bytes in the firmware buffer + */ +struct pds_core_fw_download_cmd { + u8 opcode; + u8 rsvd[3]; + __le32 offset; + __le64 addr; + __le32 length; +}; + +/** + * struct pds_core_fw_download_comp - Firmware download completion + * @status: Status of the command (enum pds_core_status_code) + */ +struct pds_core_fw_download_comp { + u8 status; +}; + +/** + * enum pds_core_fw_control_oper - FW control operations + * @PDS_CORE_FW_INSTALL_ASYNC: Install firmware asynchronously + * @PDS_CORE_FW_INSTALL_STATUS: Firmware installation status + * @PDS_CORE_FW_ACTIVATE_ASYNC: Activate firmware asynchronously + * @PDS_CORE_FW_ACTIVATE_STATUS: Firmware activate status + * @PDS_CORE_FW_UPDATE_CLEANUP: Cleanup any firmware update leftovers + * @PDS_CORE_FW_GET_BOOT: Return current active firmware slot + * @PDS_CORE_FW_SET_BOOT: Set active firmware slot for next boot + * @PDS_CORE_FW_GET_LIST: Return list of installed firmware images + */ +enum pds_core_fw_control_oper { + PDS_CORE_FW_INSTALL_ASYNC = 0, + PDS_CORE_FW_INSTALL_STATUS = 1, + PDS_CORE_FW_ACTIVATE_ASYNC = 2, + PDS_CORE_FW_ACTIVATE_STATUS = 3, + PDS_CORE_FW_UPDATE_CLEANUP = 4, + PDS_CORE_FW_GET_BOOT = 5, + PDS_CORE_FW_SET_BOOT = 6, + PDS_CORE_FW_GET_LIST = 7, +}; + +enum pds_core_fw_slot { + PDS_CORE_FW_SLOT_INVALID = 0, + PDS_CORE_FW_SLOT_A = 1, + PDS_CORE_FW_SLOT_B = 2, + PDS_CORE_FW_SLOT_GOLD = 3, +}; + +/** + * struct pds_core_fw_control_cmd - Firmware control command + * @opcode: opcode + * @rsvd: Word boundary padding + * @oper: firmware control operation (enum pds_core_fw_control_oper) + * @slot: slot to operate on (enum pds_core_fw_slot) + */ +struct pds_core_fw_control_cmd { + u8 opcode; + u8 rsvd[3]; + u8 oper; + u8 slot; +}; + +/** + * struct pds_core_fw_control_comp - Firmware control copletion + * @status: Status of the command (enum pds_core_status_code) + * @rsvd: Word alignment space + * @slot: Slot number (enum pds_core_fw_slot) + * @rsvd1: Struct padding + * @color: Color bit + */ +struct pds_core_fw_control_comp { + u8 status; + u8 rsvd[3]; + u8 slot; + u8 rsvd1[10]; + u8 color; +}; + +struct pds_core_fw_name_info { +#define PDS_CORE_FWSLOT_BUFLEN 8 +#define PDS_CORE_FWVERS_BUFLEN 32 + char slotname[PDS_CORE_FWSLOT_BUFLEN]; + char fw_version[PDS_CORE_FWVERS_BUFLEN]; +}; + +struct pds_core_fw_list_info { +#define PDS_CORE_FWVERS_LIST_LEN 16 + u8 num_fw_slots; + struct pds_core_fw_name_info fw_names[PDS_CORE_FWVERS_LIST_LEN]; +} __packed; + +enum pds_core_vf_attr { + PDS_CORE_VF_ATTR_SPOOFCHK = 1, + PDS_CORE_VF_ATTR_TRUST = 2, + PDS_CORE_VF_ATTR_MAC = 3, + PDS_CORE_VF_ATTR_LINKSTATE = 4, + PDS_CORE_VF_ATTR_VLAN = 5, + PDS_CORE_VF_ATTR_RATE = 6, + PDS_CORE_VF_ATTR_STATSADDR = 7, +}; + +/** + * enum pds_core_vf_link_status - Virtual Function link status + * @PDS_CORE_VF_LINK_STATUS_AUTO: Use link state of the uplink + * @PDS_CORE_VF_LINK_STATUS_UP: Link always up + * @PDS_CORE_VF_LINK_STATUS_DOWN: Link always down + */ +enum pds_core_vf_link_status { + PDS_CORE_VF_LINK_STATUS_AUTO = 0, + PDS_CORE_VF_LINK_STATUS_UP = 1, + PDS_CORE_VF_LINK_STATUS_DOWN = 2, +}; + +/** + * struct pds_core_vf_setattr_cmd - Set VF attributes on the NIC + * @opcode: Opcode + * @attr: Attribute type (enum pds_core_vf_attr) + * @vf_index: VF 
index + * @macaddr: mac address + * @vlanid: vlan ID + * @maxrate: max Tx rate in Mbps + * @spoofchk: enable address spoof checking + * @trust: enable VF trust + * @linkstate: set link up or down + * @stats: stats addr struct + * @stats.pa: set DMA address for VF stats + * @stats.len: length of VF stats space + * @pad: force union to specific size + */ +struct pds_core_vf_setattr_cmd { + u8 opcode; + u8 attr; + __le16 vf_index; + union { + u8 macaddr[6]; + __le16 vlanid; + __le32 maxrate; + u8 spoofchk; + u8 trust; + u8 linkstate; + struct { + __le64 pa; + __le32 len; + } stats; + u8 pad[60]; + } __packed; +}; + +struct pds_core_vf_setattr_comp { + u8 status; + u8 attr; + __le16 vf_index; + __le16 comp_index; + u8 rsvd[9]; + u8 color; +}; + +/** + * struct pds_core_vf_getattr_cmd - Get VF attributes from the NIC + * @opcode: Opcode + * @attr: Attribute type (enum pds_core_vf_attr) + * @vf_index: VF index + */ +struct pds_core_vf_getattr_cmd { + u8 opcode; + u8 attr; + __le16 vf_index; +}; + +struct pds_core_vf_getattr_comp { + u8 status; + u8 attr; + __le16 vf_index; + union { + u8 macaddr[6]; + __le16 vlanid; + __le32 maxrate; + u8 spoofchk; + u8 trust; + u8 linkstate; + __le64 stats_pa; + u8 pad[11]; + } __packed; + u8 color; +}; + +enum pds_core_vf_ctrl_opcode { + PDS_CORE_VF_CTRL_START_ALL = 0, + PDS_CORE_VF_CTRL_START = 1, +}; + +/** + * struct pds_core_vf_ctrl_cmd - VF control command + * @opcode: Opcode for the command + * @ctrl_opcode: VF control operation type + * @vf_index: VF Index. It is unused if op START_ALL is used. + */ + +struct pds_core_vf_ctrl_cmd { + u8 opcode; + u8 ctrl_opcode; + __le16 vf_index; +}; + +/** + * struct pds_core_vf_ctrl_comp - VF_CTRL command completion. + * @status: Status of the command (enum pds_core_status_code) + */ +struct pds_core_vf_ctrl_comp { + u8 status; +}; + +/* + * union pds_core_dev_cmd - Overlay of core device command structures + */ +union pds_core_dev_cmd { + u8 opcode; + u32 words[16]; + + struct pds_core_dev_identify_cmd identify; + struct pds_core_dev_init_cmd init; + struct pds_core_dev_reset_cmd reset; + struct pds_core_fw_download_cmd fw_download; + struct pds_core_fw_control_cmd fw_control; + + struct pds_core_vf_setattr_cmd vf_setattr; + struct pds_core_vf_getattr_cmd vf_getattr; + struct pds_core_vf_ctrl_cmd vf_ctrl; +}; + +/* + * union pds_core_dev_comp - Overlay of core device completion structures + */ +union pds_core_dev_comp { + u8 status; + u8 bytes[16]; + + struct pds_core_dev_identify_comp identify; + struct pds_core_dev_reset_comp reset; + struct pds_core_dev_init_comp init; + struct pds_core_fw_download_comp fw_download; + struct pds_core_fw_control_comp fw_control; + + struct pds_core_vf_setattr_comp vf_setattr; + struct pds_core_vf_getattr_comp vf_getattr; + struct pds_core_vf_ctrl_comp vf_ctrl; +}; + +/** + * struct pds_core_dev_hwstamp_regs - Hardware current timestamp registers + * @tick_low: Low 32 bits of hardware timestamp + * @tick_high: High 32 bits of hardware timestamp + */ +struct pds_core_dev_hwstamp_regs { + u32 tick_low; + u32 tick_high; +}; + +/** + * struct pds_core_dev_info_regs - Device info register format (read-only) + * @signature: Signature value of 0x44455649 ('DEVI') + * @version: Current version of info + * @asic_type: Asic type + * @asic_rev: Asic revision + * @fw_status: Firmware status + * bit 0 - 1 = fw running + * bit 4-7 - 4 bit generation number, changes on fw restart + * @fw_heartbeat: Firmware heartbeat counter + * @serial_num: Serial number + * @fw_version: Firmware version + * 
@oprom_regs: oprom_regs to store oprom debug enable/disable and bmp + * @rsvd_pad1024: Struct padding + * @hwstamp: Hardware current timestamp registers + * @rsvd_pad2048: Struct padding + */ +struct pds_core_dev_info_regs { +#define PDS_CORE_DEVINFO_FWVERS_BUFLEN 32 +#define PDS_CORE_DEVINFO_SERIAL_BUFLEN 32 + u32 signature; + u8 version; + u8 asic_type; + u8 asic_rev; +#define PDS_CORE_FW_STS_F_STOPPED 0x00 +#define PDS_CORE_FW_STS_F_RUNNING 0x01 +#define PDS_CORE_FW_STS_F_GENERATION 0xF0 + u8 fw_status; + __le32 fw_heartbeat; + char fw_version[PDS_CORE_DEVINFO_FWVERS_BUFLEN]; + char serial_num[PDS_CORE_DEVINFO_SERIAL_BUFLEN]; + u8 oprom_regs[32]; /* reserved */ + u8 rsvd_pad1024[916]; + struct pds_core_dev_hwstamp_regs hwstamp; /* on 1k boundary */ + u8 rsvd_pad2048[1016]; +} __packed; + +/** + * struct pds_core_dev_cmd_regs - Device command register format (read-write) + * @doorbell: Device Cmd Doorbell, write-only + * Write a 1 to signal device to process cmd + * @done: Command completed indicator, poll for completion + * bit 0 == 1 when command is complete + * @cmd: Opcode-specific command bytes + * @comp: Opcode-specific response bytes + * @rsvd: Struct padding + * @data: Opcode-specific side-data + */ +struct pds_core_dev_cmd_regs { + u32 doorbell; + u32 done; + union pds_core_dev_cmd cmd; + union pds_core_dev_comp comp; + u8 rsvd[48]; + u32 data[478]; +} __packed; + +/** + * struct pds_core_dev_regs - Device register format for bar 0 page 0 + * @info: Device info registers + * @devcmd: Device command registers + */ +struct pds_core_dev_regs { + struct pds_core_dev_info_regs info; + struct pds_core_dev_cmd_regs devcmd; +} __packed; + +#ifndef __CHECKER__ +static_assert(sizeof(struct pds_core_drv_identity) <= 1912); +static_assert(sizeof(struct pds_core_dev_identity) <= 1912); +static_assert(sizeof(union pds_core_dev_cmd) == 64); +static_assert(sizeof(union pds_core_dev_comp) == 16); +static_assert(sizeof(struct pds_core_dev_info_regs) == 2048); +static_assert(sizeof(struct pds_core_dev_cmd_regs) == 2048); +static_assert(sizeof(struct pds_core_dev_regs) == 4096); +#endif /* __CHECKER__ */ + +#endif /* _PDS_CORE_IF_H_ */ diff --git a/include/linux/pds/pds_intr.h b/include/linux/pds/pds_intr.h new file mode 100644 index 000000000000..56277c37248c --- /dev/null +++ b/include/linux/pds/pds_intr.h @@ -0,0 +1,163 @@ +/* SPDX-License-Identifier: (GPL-2.0 OR Linux-OpenIB) OR BSD-2-Clause */ +/* Copyright(c) 2023 Advanced Micro Devices, Inc. */ + +#ifndef _PDS_INTR_H_ +#define _PDS_INTR_H_ + +/* + * Interrupt control register + * @coal_init: Coalescing timer initial value, in + * device units. Use @identity->intr_coal_mult + * and @identity->intr_coal_div to convert from + * usecs to device units: + * + * coal_init = coal_usecs * coal_mutl / coal_div + * + * When an interrupt is sent the interrupt + * coalescing timer current value + * (@coalescing_curr) is initialized with this + * value and begins counting down. No more + * interrupts are sent until the coalescing + * timer reaches 0. When @coalescing_init=0 + * interrupt coalescing is effectively disabled + * and every interrupt assert results in an + * interrupt. Reset value: 0 + * @mask: Interrupt mask. When @mask=1 the interrupt + * resource will not send an interrupt. When + * @mask=0 the interrupt resource will send an + * interrupt if an interrupt event is pending + * or on the next interrupt assertion event. + * Reset value: 1 + * @credits: Interrupt credits. 
This register indicates + * how many interrupt events the hardware has + * sent. When written by software this + * register atomically decrements @int_credits + * by the value written. When @int_credits + * becomes 0 then the "pending interrupt" bit + * in the Interrupt Status register is cleared + * by the hardware and any pending but unsent + * interrupts are cleared. + * !!!IMPORTANT!!! This is a signed register. + * @flags: Interrupt control flags + * @unmask -- When this bit is written with a 1 + * the interrupt resource will set mask=0. + * @coal_timer_reset -- When this + * bit is written with a 1 the + * @coalescing_curr will be reloaded with + * @coalescing_init to reset the coalescing + * timer. + * @mask_on_assert: Automatically mask on assertion. When + * @mask_on_assert=1 the interrupt resource + * will set @mask=1 whenever an interrupt is + * sent. When using interrupts in Legacy + * Interrupt mode the driver must select + * @mask_on_assert=0 for proper interrupt + * operation. + * @coalescing_curr: Coalescing timer current value, in + * microseconds. When this value reaches 0 + * the interrupt resource is again eligible to + * send an interrupt. If an interrupt event + * is already pending when @coalescing_curr + * reaches 0 the pending interrupt will be + * sent, otherwise an interrupt will be sent + * on the next interrupt assertion event. + */ +struct pds_core_intr { + u32 coal_init; + u32 mask; + u16 credits; + u16 flags; +#define PDS_CORE_INTR_F_UNMASK 0x0001 +#define PDS_CORE_INTR_F_TIMER_RESET 0x0002 + u32 mask_on_assert; + u32 coalescing_curr; + u32 rsvd6[3]; +}; + +#ifndef __CHECKER__ +static_assert(sizeof(struct pds_core_intr) == 32); +#endif /* __CHECKER__ */ + +#define PDS_CORE_INTR_CTRL_REGS_MAX 2048 +#define PDS_CORE_INTR_CTRL_COAL_MAX 0x3F +#define PDS_CORE_INTR_INDEX_NOT_ASSIGNED -1 + +struct pds_core_intr_status { + u32 status[2]; +}; + +/** + * enum pds_core_intr_mask_vals - valid values for mask and mask_assert. + * @PDS_CORE_INTR_MASK_CLEAR: unmask interrupt. + * @PDS_CORE_INTR_MASK_SET: mask interrupt. + */ +enum pds_core_intr_mask_vals { + PDS_CORE_INTR_MASK_CLEAR = 0, + PDS_CORE_INTR_MASK_SET = 1, +}; + +/** + * enum pds_core_intr_credits_bits - Bitwise composition of credits values. + * @PDS_CORE_INTR_CRED_COUNT: bit mask of credit count, no shift needed. + * @PDS_CORE_INTR_CRED_COUNT_SIGNED: bit mask of credit count, including sign bit. + * @PDS_CORE_INTR_CRED_UNMASK: unmask the interrupt. + * @PDS_CORE_INTR_CRED_RESET_COALESCE: reset the coalesce timer. + * @PDS_CORE_INTR_CRED_REARM: unmask the and reset the timer. 
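+ *
+ * For example, a completion handler that has processed nwork events might
+ * return those credits and rearm the interrupt in a single write (nwork and
+ * the intr_ctrl pointer are illustrative, not part of this API):
+ *
+ *	pds_core_intr_credits(intr_ctrl, nwork, PDS_CORE_INTR_CRED_REARM);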
+ */ +enum pds_core_intr_credits_bits { + PDS_CORE_INTR_CRED_COUNT = 0x7fffu, + PDS_CORE_INTR_CRED_COUNT_SIGNED = 0xffffu, + PDS_CORE_INTR_CRED_UNMASK = 0x10000u, + PDS_CORE_INTR_CRED_RESET_COALESCE = 0x20000u, + PDS_CORE_INTR_CRED_REARM = (PDS_CORE_INTR_CRED_UNMASK | + PDS_CORE_INTR_CRED_RESET_COALESCE), +}; + +static inline void +pds_core_intr_coal_init(struct pds_core_intr __iomem *intr_ctrl, u32 coal) +{ + iowrite32(coal, &intr_ctrl->coal_init); +} + +static inline void +pds_core_intr_mask(struct pds_core_intr __iomem *intr_ctrl, u32 mask) +{ + iowrite32(mask, &intr_ctrl->mask); +} + +static inline void +pds_core_intr_credits(struct pds_core_intr __iomem *intr_ctrl, + u32 cred, u32 flags) +{ + if (WARN_ON_ONCE(cred > PDS_CORE_INTR_CRED_COUNT)) { + cred = ioread32(&intr_ctrl->credits); + cred &= PDS_CORE_INTR_CRED_COUNT_SIGNED; + } + + iowrite32(cred | flags, &intr_ctrl->credits); +} + +static inline void +pds_core_intr_clean_flags(struct pds_core_intr __iomem *intr_ctrl, u32 flags) +{ + u32 cred; + + cred = ioread32(&intr_ctrl->credits); + cred &= PDS_CORE_INTR_CRED_COUNT_SIGNED; + cred |= flags; + iowrite32(cred, &intr_ctrl->credits); +} + +static inline void +pds_core_intr_clean(struct pds_core_intr __iomem *intr_ctrl) +{ + pds_core_intr_clean_flags(intr_ctrl, PDS_CORE_INTR_CRED_RESET_COALESCE); +} + +static inline void +pds_core_intr_mask_assert(struct pds_core_intr __iomem *intr_ctrl, u32 mask) +{ + iowrite32(mask, &intr_ctrl->mask_on_assert); +} + +#endif /* _PDS_INTR_H_ */ diff --git a/include/linux/phy.h b/include/linux/phy.h index db7c0bd67559..c5a0dc829714 100644 --- a/include/linux/phy.h +++ b/include/linux/phy.h @@ -14,6 +14,7 @@ #include <linux/compiler.h> #include <linux/spinlock.h> #include <linux/ethtool.h> +#include <linux/leds.h> #include <linux/linkmode.h> #include <linux/netlink.h> #include <linux/mdio.h> @@ -600,6 +601,7 @@ struct macsec_ops; * @phy_num_led_triggers: Number of triggers in @phy_led_triggers * @led_link_trigger: LED trigger for link up/down * @last_triggered: last LED trigger for link speed + * @leds: list of PHY LED structures * @master_slave_set: User requested master/slave configuration * @master_slave_get: Current master/slave advertisement * @master_slave_state: Current master/slave configuration @@ -699,6 +701,7 @@ struct phy_device { struct phy_led_trigger *led_link_trigger; #endif + struct list_head leds; /* * Interrupt number for this PHY @@ -835,6 +838,23 @@ struct phy_plca_status { }; /** + * struct phy_led: An LED driven by the PHY + * + * @list: List of LEDs + * @phydev: PHY this LED is attached to + * @led_cdev: Standard LED class structure + * @index: Number of the LED + */ +struct phy_led { + struct list_head list; + struct phy_device *phydev; + struct led_classdev led_cdev; + u8 index; +}; + +#define to_phy_led(d) container_of(d, struct phy_led, led_cdev) + +/** * struct phy_driver - Driver structure for a particular PHY type * * @mdiodrv: Data common to all MDIO devices @@ -1056,6 +1076,27 @@ struct phy_driver { /** @get_plca_status: Return the current PLCA status info */ int (*get_plca_status)(struct phy_device *dev, struct phy_plca_status *plca_st); + + /** + * @led_brightness_set: Set a PHY LED brightness. Index + * indicates which of the PHYs led should be set. Value + * follows the standard LED class meaning, e.g. LED_OFF, + * LED_HALF, LED_FULL. + */ + int (*led_brightness_set)(struct phy_device *dev, + u8 index, enum led_brightness value); + + /** + * @led_blink_set: Set a PHY LED brightness. 
Index indicates + * which of the PHYs led should be configured to blink. Delays + * are in milliseconds and if both are zero then a sensible + * default should be chosen. The call should adjust the + * timings in that case and if it can't match the values + * specified exactly. + */ + int (*led_blink_set)(struct phy_device *dev, u8 index, + unsigned long *delay_on, + unsigned long *delay_off); }; #define to_phy_driver(d) container_of(to_mdio_common_driver(d), \ struct phy_driver, mdiodrv) @@ -1130,16 +1171,15 @@ static inline int phy_read(struct phy_device *phydev, u32 regnum) #define phy_read_poll_timeout(phydev, regnum, val, cond, sleep_us, \ timeout_us, sleep_before_read) \ ({ \ - int __ret = read_poll_timeout(phy_read, val, (cond) || val < 0, \ + int __ret = read_poll_timeout(phy_read, val, val < 0 || (cond), \ sleep_us, timeout_us, sleep_before_read, phydev, regnum); \ - if (val < 0) \ + if (val < 0) \ __ret = val; \ if (__ret) \ phydev_err(phydev, "%s failed: %d\n", __func__, __ret); \ __ret; \ }) - /** * __phy_read - convenience function for reading a given PHY register * @phydev: the phy_device struct diff --git a/include/linux/phylink.h b/include/linux/phylink.h index 637698ed5cb6..71755c66c162 100644 --- a/include/linux/phylink.h +++ b/include/linux/phylink.h @@ -93,7 +93,6 @@ static inline bool phylink_autoneg_inband(unsigned int mode) * the medium link mode (@speed and @duplex) and the speed/duplex of the phy * interface mode (@interface) are different. * @link: true if the link is up. - * @an_enabled: true if autonegotiation is enabled/desired. * @an_complete: true if autonegotiation has completed. */ struct phylink_link_state { @@ -105,7 +104,6 @@ struct phylink_link_state { int pause; int rate_matching; unsigned int link:1; - unsigned int an_enabled:1; unsigned int an_complete:1; }; diff --git a/include/linux/platform_data/nfcmrvl.h b/include/linux/platform_data/nfcmrvl.h deleted file mode 100644 index 9e75ac8d19be..000000000000 --- a/include/linux/platform_data/nfcmrvl.h +++ /dev/null @@ -1,48 +0,0 @@ -/* - * Copyright (C) 2015, Marvell International Ltd. - * - * This software file (the "File") is distributed by Marvell International - * Ltd. under the terms of the GNU General Public License Version 2, June 1991 - * (the "License"). You may use, redistribute and/or modify this File in - * accordance with the terms and conditions of the License, a copy of which - * is available on the worldwide web at - * http://www.gnu.org/licenses/old-licenses/gpl-2.0.txt. - * - * THE FILE IS DISTRIBUTED AS-IS, WITHOUT WARRANTY OF ANY KIND, AND THE - * IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE - * ARE EXPRESSLY DISCLAIMED. The License provides additional details about - * this warranty disclaimer. 
- */ - -#ifndef _NFCMRVL_PTF_H_ -#define _NFCMRVL_PTF_H_ - -struct nfcmrvl_platform_data { - /* - * Generic - */ - - /* GPIO that is wired to RESET_N signal */ - int reset_n_io; - /* Tell if transport is muxed in HCI one */ - unsigned int hci_muxed; - - /* - * UART specific - */ - - /* Tell if UART needs flow control at init */ - unsigned int flow_control; - /* Tell if firmware supports break control for power management */ - unsigned int break_control; - - - /* - * I2C specific - */ - - unsigned int irq; - unsigned int irq_polarity; -}; - -#endif /* _NFCMRVL_PTF_H_ */ diff --git a/include/linux/ptp_kvm.h b/include/linux/ptp_kvm.h index c2e28deef33a..746fd67c3480 100644 --- a/include/linux/ptp_kvm.h +++ b/include/linux/ptp_kvm.h @@ -14,6 +14,7 @@ struct timespec64; struct clocksource; int kvm_arch_ptp_init(void); +void kvm_arch_ptp_exit(void); int kvm_arch_ptp_get_clock(struct timespec64 *ts); int kvm_arch_ptp_get_crosststamp(u64 *cycle, struct timespec64 *tspec, struct clocksource **cs); diff --git a/include/linux/rcuref.h b/include/linux/rcuref.h new file mode 100644 index 000000000000..2c8bfd0f1b6b --- /dev/null +++ b/include/linux/rcuref.h @@ -0,0 +1,155 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef _LINUX_RCUREF_H +#define _LINUX_RCUREF_H + +#include <linux/atomic.h> +#include <linux/bug.h> +#include <linux/limits.h> +#include <linux/lockdep.h> +#include <linux/preempt.h> +#include <linux/rcupdate.h> + +#define RCUREF_ONEREF 0x00000000U +#define RCUREF_MAXREF 0x7FFFFFFFU +#define RCUREF_SATURATED 0xA0000000U +#define RCUREF_RELEASED 0xC0000000U +#define RCUREF_DEAD 0xE0000000U +#define RCUREF_NOREF 0xFFFFFFFFU + +/** + * rcuref_init - Initialize a rcuref reference count with the given reference count + * @ref: Pointer to the reference count + * @cnt: The initial reference count typically '1' + */ +static inline void rcuref_init(rcuref_t *ref, unsigned int cnt) +{ + atomic_set(&ref->refcnt, cnt - 1); +} + +/** + * rcuref_read - Read the number of held reference counts of a rcuref + * @ref: Pointer to the reference count + * + * Return: The number of held references (0 ... N) + */ +static inline unsigned int rcuref_read(rcuref_t *ref) +{ + unsigned int c = atomic_read(&ref->refcnt); + + /* Return 0 if within the DEAD zone. */ + return c >= RCUREF_RELEASED ? 0 : c + 1; +} + +extern __must_check bool rcuref_get_slowpath(rcuref_t *ref); + +/** + * rcuref_get - Acquire one reference on a rcuref reference count + * @ref: Pointer to the reference count + * + * Similar to atomic_inc_not_zero() but saturates at RCUREF_MAXREF. + * + * Provides no memory ordering, it is assumed the caller has guaranteed the + * object memory to be stable (RCU, etc.). It does provide a control dependency + * and thereby orders future stores. See documentation in lib/rcuref.c + * + * Return: + * False if the attempt to acquire a reference failed. This happens + * when the last reference has been put already + * + * True if a reference was successfully acquired + */ +static inline __must_check bool rcuref_get(rcuref_t *ref) +{ + /* + * Unconditionally increase the reference count. The saturation and + * dead zones provide enough tolerance for this. + */ + if (likely(!atomic_add_negative_relaxed(1, &ref->refcnt))) + return true; + + /* Handle the cases inside the saturation and dead zones */ + return rcuref_get_slowpath(ref); +} + +extern __must_check bool rcuref_put_slowpath(rcuref_t *ref); + +/* + * Internal helper. Do not invoke directly. 
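+ *
+ * Use the rcuref_put() / rcuref_put_rcusafe() wrappers below instead.
+ * A minimal lifecycle sketch, assuming a hypothetical object that embeds
+ * an rcuref_t 'ref' and a struct rcu_head 'rcu':
+ *
+ *	rcuref_init(&obj->ref, 1);		/* one initial reference    */
+ *	...
+ *	if (rcuref_get(&obj->ref))		/* extra reference for user */
+ *		use(obj);
+ *	...
+ *	if (rcuref_put(&obj->ref))		/* true only for the final  */
+ *		kfree_rcu(obj, rcu);		/* put; free after a GP     */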
+ */ +static __always_inline __must_check bool __rcuref_put(rcuref_t *ref) +{ + RCU_LOCKDEP_WARN(!rcu_read_lock_held() && preemptible(), + "suspicious rcuref_put_rcusafe() usage"); + /* + * Unconditionally decrease the reference count. The saturation and + * dead zones provide enough tolerance for this. + */ + if (likely(!atomic_add_negative_release(-1, &ref->refcnt))) + return false; + + /* + * Handle the last reference drop and cases inside the saturation + * and dead zones. + */ + return rcuref_put_slowpath(ref); +} + +/** + * rcuref_put_rcusafe -- Release one reference for a rcuref reference count RCU safe + * @ref: Pointer to the reference count + * + * Provides release memory ordering, such that prior loads and stores are done + * before, and provides an acquire ordering on success such that free() + * must come after. + * + * Can be invoked from contexts, which guarantee that no grace period can + * happen which would free the object concurrently if the decrement drops + * the last reference and the slowpath races against a concurrent get() and + * put() pair. rcu_read_lock()'ed and atomic contexts qualify. + * + * Return: + * True if this was the last reference with no future references + * possible. This signals the caller that it can safely release the + * object which is protected by the reference counter. + * + * False if there are still active references or the put() raced + * with a concurrent get()/put() pair. Caller is not allowed to + * release the protected object. + */ +static inline __must_check bool rcuref_put_rcusafe(rcuref_t *ref) +{ + return __rcuref_put(ref); +} + +/** + * rcuref_put -- Release one reference for a rcuref reference count + * @ref: Pointer to the reference count + * + * Can be invoked from any context. + * + * Provides release memory ordering, such that prior loads and stores are done + * before, and provides an acquire ordering on success such that free() + * must come after. + * + * Return: + * + * True if this was the last reference with no future references + * possible. This signals the caller that it can safely schedule the + * object, which is protected by the reference counter, for + * deconstruction. + * + * False if there are still active references or the put() raced + * with a concurrent get()/put() pair. Caller is not allowed to + * deconstruct the protected object. + */ +static inline __must_check bool rcuref_put(rcuref_t *ref) +{ + bool released; + + preempt_disable(); + released = __rcuref_put(ref); + preempt_enable(); + return released; +} + +#endif diff --git a/include/linux/rtnetlink.h b/include/linux/rtnetlink.h index b6e6378dcbbd..3d6cf306cd55 100644 --- a/include/linux/rtnetlink.h +++ b/include/linux/rtnetlink.h @@ -63,16 +63,6 @@ static inline bool lockdep_rtnl_is_held(void) rcu_dereference_check(p, lockdep_rtnl_is_held()) /** - * rcu_dereference_bh_rtnl - rcu_dereference_bh with debug checking - * @p: The pointer to read, prior to dereference - * - * Do an rcu_dereference_bh(p), but check caller either holds rcu_read_lock_bh() - * or RTNL. 
Note : Please prefer rtnl_dereference() or rcu_dereference_bh() - */ -#define rcu_dereference_bh_rtnl(p) \ - rcu_dereference_bh_check(p, lockdep_rtnl_is_held()) - -/** * rtnl_dereference - fetch RCU pointer when updates are prevented by RTNL * @p: The pointer to read, prior to dereferencing * diff --git a/include/linux/sched.h b/include/linux/sched.h index e1e605b1255b..3f5395ae86bc 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1318,11 +1318,6 @@ struct task_struct { struct tlbflush_unmap_batch tlb_ubc; - union { - refcount_t rcu_users; - struct rcu_head rcu; - }; - /* Cache last used pipe for splice(): */ struct pipe_inode_info *splice_pipe; @@ -1459,6 +1454,8 @@ struct task_struct { unsigned long saved_state_change; # endif #endif + struct rcu_head rcu; + refcount_t rcu_users; int pagefault_disabled; #ifdef CONFIG_MMU struct task_struct *oom_reaper_list; diff --git a/include/linux/sctp.h b/include/linux/sctp.h index 358dc08e0831..836a7e200f39 100644 --- a/include/linux/sctp.h +++ b/include/linux/sctp.h @@ -222,7 +222,7 @@ struct sctp_datahdr { __be16 stream; __be16 ssn; __u32 ppid; - __u8 payload[]; + /* __u8 payload[]; */ }; struct sctp_data_chunk { @@ -270,7 +270,7 @@ struct sctp_inithdr { __be16 num_outbound_streams; __be16 num_inbound_streams; __be32 initial_tsn; - __u8 params[]; + /* __u8 params[]; */ }; struct sctp_init_chunk { @@ -385,7 +385,7 @@ struct sctp_sackhdr { __be32 a_rwnd; __be16 num_gap_ack_blocks; __be16 num_dup_tsns; - union sctp_sack_variable variable[]; + /* union sctp_sack_variable variable[]; */ }; struct sctp_sack_chunk { @@ -443,7 +443,7 @@ struct sctp_shutdown_chunk { struct sctp_errhdr { __be16 cause; __be16 length; - __u8 variable[]; + /* __u8 variable[]; */ }; struct sctp_operr_chunk { @@ -603,7 +603,7 @@ struct sctp_fwdtsn_skip { struct sctp_fwdtsn_hdr { __be32 new_cum_tsn; - struct sctp_fwdtsn_skip skip[]; + /* struct sctp_fwdtsn_skip skip[]; */ }; struct sctp_fwdtsn_chunk { @@ -620,7 +620,7 @@ struct sctp_ifwdtsn_skip { struct sctp_ifwdtsn_hdr { __be32 new_cum_tsn; - struct sctp_ifwdtsn_skip skip[]; + /* struct sctp_ifwdtsn_skip skip[]; */ }; struct sctp_ifwdtsn_chunk { @@ -667,7 +667,7 @@ struct sctp_addip_param { struct sctp_addiphdr { __be32 serial; - __u8 params[]; + /* __u8 params[]; */ }; struct sctp_addip_chunk { @@ -727,7 +727,7 @@ struct sctp_addip_chunk { struct sctp_authhdr { __be16 shkey_id; __be16 hmac_id; - __u8 hmac[]; + /* __u8 hmac[]; */ }; struct sctp_auth_chunk { @@ -742,7 +742,7 @@ struct sctp_infox { struct sctp_reconf_chunk { struct sctp_chunkhdr chunk_hdr; - __u8 params[]; + /* __u8 params[]; */ }; struct sctp_strreset_outreq { diff --git a/include/linux/serdev.h b/include/linux/serdev.h index 5f6bfe4f6d95..f5f97fa25e8a 100644 --- a/include/linux/serdev.h +++ b/include/linux/serdev.h @@ -93,6 +93,7 @@ struct serdev_controller_ops { void (*wait_until_sent)(struct serdev_controller *, long); int (*get_tiocm)(struct serdev_controller *); int (*set_tiocm)(struct serdev_controller *, unsigned int, unsigned int); + int (*break_ctl)(struct serdev_controller *ctrl, unsigned int break_state); }; /** @@ -203,6 +204,7 @@ int serdev_device_write_buf(struct serdev_device *, const unsigned char *, size_ void serdev_device_wait_until_sent(struct serdev_device *, long); int serdev_device_get_tiocm(struct serdev_device *); int serdev_device_set_tiocm(struct serdev_device *, int, int); +int serdev_device_break_ctl(struct serdev_device *serdev, int break_state); void serdev_device_write_wakeup(struct serdev_device *); int 
serdev_device_write(struct serdev_device *, const unsigned char *, size_t, long); void serdev_device_write_flush(struct serdev_device *); @@ -250,11 +252,15 @@ static inline int serdev_device_write_buf(struct serdev_device *serdev, static inline void serdev_device_wait_until_sent(struct serdev_device *sdev, long timeout) {} static inline int serdev_device_get_tiocm(struct serdev_device *serdev) { - return -ENOTSUPP; + return -EOPNOTSUPP; } static inline int serdev_device_set_tiocm(struct serdev_device *serdev, int set, int clear) { - return -ENOTSUPP; + return -EOPNOTSUPP; +} +static inline int serdev_device_break_ctl(struct serdev_device *serdev, int break_state) +{ + return -EOPNOTSUPP; } static inline int serdev_device_write(struct serdev_device *sdev, const unsigned char *buf, size_t count, unsigned long timeout) diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index dbcaac8b6966..738776ab8838 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -37,7 +37,7 @@ #include <linux/netfilter/nf_conntrack_common.h> #endif #include <net/net_debug.h> -#include <net/dropreason.h> +#include <net/dropreason-core.h> /** * DOC: skb checksums @@ -346,18 +346,12 @@ struct sk_buff_head { struct sk_buff; -/* To allow 64K frame to be packed as single skb without frag_list we - * require 64K/PAGE_SIZE pages plus 1 additional page to allow for - * buffers which do not start on a page boundary. - * - * Since GRO uses frags we allocate at least 16 regardless of page - * size. - */ -#if (65536/PAGE_SIZE + 1) < 16 -#define MAX_SKB_FRAGS 16UL -#else -#define MAX_SKB_FRAGS (65536/PAGE_SIZE + 1) +#ifndef CONFIG_MAX_SKB_FRAGS +# define CONFIG_MAX_SKB_FRAGS 17 #endif + +#define MAX_SKB_FRAGS CONFIG_MAX_SKB_FRAGS + extern int sysctl_max_skb_frags; /* Set skb_shinfo(skb)->gso_size to this in case you want skb_segment to @@ -811,7 +805,6 @@ typedef unsigned char *sk_buff_data_t; * @csum_level: indicates the number of consecutive checksums found in * the packet minus one that have been verified as * CHECKSUM_UNNECESSARY (max 3) - * @scm_io_uring: SKB holds io_uring registered files * @dst_pending_confirm: need to confirm neighbour * @decrypted: Decrypted SKB * @slow_gro: state present at GRO time, slower prepare step required @@ -942,38 +935,44 @@ struct sk_buff { /* public: */ __u8 pkt_type:3; /* see PKT_TYPE_MAX */ __u8 ignore_df:1; - __u8 nf_trace:1; + __u8 dst_pending_confirm:1; __u8 ip_summed:2; __u8 ooo_okay:1; + /* private: */ + __u8 __mono_tc_offset[0]; + /* public: */ + __u8 mono_delivery_time:1; /* See SKB_MONO_DELIVERY_TIME_MASK */ +#ifdef CONFIG_NET_CLS_ACT + __u8 tc_at_ingress:1; /* See TC_AT_INGRESS_MASK */ + __u8 tc_skip_classify:1; +#endif + __u8 remcsum_offload:1; + __u8 csum_complete_sw:1; + __u8 csum_level:2; + __u8 inner_protocol_type:1; + __u8 l4_hash:1; __u8 sw_hash:1; +#ifdef CONFIG_WIRELESS __u8 wifi_acked_valid:1; __u8 wifi_acked:1; +#endif __u8 no_fcs:1; /* Indicates the inner headers are valid in the skbuff. 
*/ __u8 encapsulation:1; __u8 encap_hdr_csum:1; __u8 csum_valid:1; - - /* private: */ - __u8 __pkt_vlan_present_offset[0]; - /* public: */ - __u8 remcsum_offload:1; - __u8 csum_complete_sw:1; - __u8 csum_level:2; - __u8 dst_pending_confirm:1; - __u8 mono_delivery_time:1; /* See SKB_MONO_DELIVERY_TIME_MASK */ -#ifdef CONFIG_NET_CLS_ACT - __u8 tc_skip_classify:1; - __u8 tc_at_ingress:1; /* See TC_AT_INGRESS_MASK */ -#endif #ifdef CONFIG_IPV6_NDISC_NODETYPE __u8 ndisc_nodetype:2; #endif +#if IS_ENABLED(CONFIG_IP_VS) __u8 ipvs_property:1; - __u8 inner_protocol_type:1; +#endif +#if IS_ENABLED(CONFIG_NETFILTER_XT_TARGET_TRACE) || IS_ENABLED(CONFIG_NF_TABLES) + __u8 nf_trace:1; +#endif #ifdef CONFIG_NET_SWITCHDEV __u8 offload_fwd_mark:1; __u8 offload_l3_fwd_mark:1; @@ -989,13 +988,16 @@ struct sk_buff { __u8 decrypted:1; #endif __u8 slow_gro:1; +#if IS_ENABLED(CONFIG_IP_SCTP) __u8 csum_not_inet:1; - __u8 scm_io_uring:1; +#endif #ifdef CONFIG_NET_SCHED __u16 tc_index; /* traffic control index */ #endif + u16 alloc_cpu; + union { __wsum csum; struct { @@ -1019,7 +1021,6 @@ struct sk_buff { unsigned int sender_cpu; }; #endif - u16 alloc_cpu; #ifdef CONFIG_NETWORK_SECMARK __u32 secmark; #endif @@ -1075,13 +1076,13 @@ struct sk_buff { * around, you also must adapt these constants. */ #ifdef __BIG_ENDIAN_BITFIELD -#define TC_AT_INGRESS_MASK (1 << 0) -#define SKB_MONO_DELIVERY_TIME_MASK (1 << 2) +#define SKB_MONO_DELIVERY_TIME_MASK (1 << 7) +#define TC_AT_INGRESS_MASK (1 << 6) #else -#define TC_AT_INGRESS_MASK (1 << 7) -#define SKB_MONO_DELIVERY_TIME_MASK (1 << 5) +#define SKB_MONO_DELIVERY_TIME_MASK (1 << 0) +#define TC_AT_INGRESS_MASK (1 << 1) #endif -#define PKT_VLAN_PRESENT_OFFSET offsetof(struct sk_buff, __pkt_vlan_present_offset) +#define SKB_BF_MONO_TC_OFFSET offsetof(struct sk_buff, __mono_tc_offset) #ifdef __KERNEL__ /* @@ -1196,6 +1197,15 @@ static inline unsigned int skb_napi_id(const struct sk_buff *skb) #endif } +static inline bool skb_wifi_acked_valid(const struct sk_buff *skb) +{ +#ifdef CONFIG_WIRELESS + return skb->wifi_acked_valid; +#else + return 0; +#endif +} + /** * skb_unref - decrement the skb's reference count * @skb: buffer @@ -3243,7 +3253,7 @@ static inline struct sk_buff *napi_alloc_skb(struct napi_struct *napi, void napi_consume_skb(struct sk_buff *skb, int budget); void napi_skb_free_stolen_head(struct sk_buff *skb); -void __kfree_skb_defer(struct sk_buff *skb); +void __napi_kfree_skb(struct sk_buff *skb, enum skb_drop_reason reason); /** * __dev_alloc_pages - allocate page for network Rx @@ -3395,6 +3405,18 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f) __skb_frag_ref(&skb_shinfo(skb)->frags[f]); } +static inline void +napi_frag_unref(skb_frag_t *frag, bool recycle, bool napi_safe) +{ + struct page *page = skb_frag_page(frag); + +#ifdef CONFIG_PAGE_POOL + if (recycle && page_pool_return_skb_page(page, napi_safe)) + return; +#endif + put_page(page); +} + /** * __skb_frag_unref - release a reference on a paged fragment. 
* @frag: the paged fragment @@ -3405,13 +3427,7 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f) */ static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle) { - struct page *page = skb_frag_page(frag); - -#ifdef CONFIG_PAGE_POOL - if (recycle && page_pool_return_skb_page(page)) - return; -#endif - put_page(page); + napi_frag_unref(frag, recycle, false); } /** @@ -5050,9 +5066,30 @@ static inline void skb_reset_redirect(struct sk_buff *skb) skb->redirected = 0; } +static inline void skb_set_redirected_noclear(struct sk_buff *skb, + bool from_ingress) +{ + skb->redirected = 1; +#ifdef CONFIG_NET_REDIRECT + skb->from_ingress = from_ingress; +#endif +} + static inline bool skb_csum_is_sctp(struct sk_buff *skb) { +#if IS_ENABLED(CONFIG_IP_SCTP) return skb->csum_not_inet; +#else + return 0; +#endif +} + +static inline void skb_reset_csum_not_inet(struct sk_buff *skb) +{ + skb->ip_summed = CHECKSUM_NONE; +#if IS_ENABLED(CONFIG_IP_SCTP) + skb->csum_not_inet = 0; +#endif } static inline void skb_set_kcov_handle(struct sk_buff *skb, @@ -5072,12 +5109,12 @@ static inline u64 skb_get_kcov_handle(struct sk_buff *skb) #endif } -#ifdef CONFIG_PAGE_POOL static inline void skb_mark_for_recycle(struct sk_buff *skb) { +#ifdef CONFIG_PAGE_POOL skb->pp_recycle = 1; -} #endif +} #endif /* __KERNEL__ */ #endif /* _LINUX_SKBUFF_H */ diff --git a/include/linux/smscphy.h b/include/linux/smscphy.h index 1a136271ba6a..e1c88627755a 100644 --- a/include/linux/smscphy.h +++ b/include/linux/smscphy.h @@ -28,4 +28,14 @@ #define MII_LAN83C185_MODE_POWERDOWN 0xC0 /* Power Down mode */ #define MII_LAN83C185_MODE_ALL 0xE0 /* All capable mode */ +int smsc_phy_config_intr(struct phy_device *phydev); +irqreturn_t smsc_phy_handle_interrupt(struct phy_device *phydev); +int smsc_phy_config_init(struct phy_device *phydev); +int lan87xx_read_status(struct phy_device *phydev); +int smsc_phy_get_tunable(struct phy_device *phydev, + struct ethtool_tunable *tuna, void *data); +int smsc_phy_set_tunable(struct phy_device *phydev, + struct ethtool_tunable *tuna, const void *data); +int smsc_phy_probe(struct phy_device *phydev); + #endif /* __LINUX_SMSCPHY_H__ */ diff --git a/include/linux/soc/mediatek/mtk_wed.h b/include/linux/soc/mediatek/mtk_wed.h index fd0b0605cf90..b2b28180dff7 100644 --- a/include/linux/soc/mediatek/mtk_wed.h +++ b/include/linux/soc/mediatek/mtk_wed.h @@ -6,6 +6,7 @@ #include <linux/regmap.h> #include <linux/pci.h> #include <linux/skbuff.h> +#include <linux/netdevice.h> #define MTK_WED_TX_QUEUES 2 #define MTK_WED_RX_QUEUES 2 @@ -179,6 +180,8 @@ struct mtk_wed_ops { u32 (*irq_get)(struct mtk_wed_device *dev, u32 mask); void (*irq_set_mask)(struct mtk_wed_device *dev, u32 mask); + int (*setup_tc)(struct mtk_wed_device *wed, struct net_device *dev, + enum tc_setup_type type, void *type_data); }; extern const struct mtk_wed_ops __rcu *mtk_soc_wed_ops; @@ -237,6 +240,8 @@ mtk_wed_get_rx_capa(struct mtk_wed_device *dev) (_dev)->ops->msg_update(_dev, _id, _msg, _len) #define mtk_wed_device_stop(_dev) (_dev)->ops->stop(_dev) #define mtk_wed_device_dma_reset(_dev) (_dev)->ops->reset_dma(_dev) +#define mtk_wed_device_setup_tc(_dev, _netdev, _type, _type_data) \ + (_dev)->ops->setup_tc(_dev, _netdev, _type, _type_data) #else static inline bool mtk_wed_device_active(struct mtk_wed_device *dev) { @@ -255,6 +260,7 @@ static inline bool mtk_wed_device_active(struct mtk_wed_device *dev) #define mtk_wed_device_update_msg(_dev, _id, _msg, _len) -ENODEV #define mtk_wed_device_stop(_dev) do {} while (0) #define 
mtk_wed_device_dma_reset(_dev) do {} while (0) +#define mtk_wed_device_setup_tc(_dev, _netdev, _type, _type_data) -EOPNOTSUPP #endif #endif diff --git a/include/linux/stmmac.h b/include/linux/stmmac.h index a2414c187483..225751a8fd8e 100644 --- a/include/linux/stmmac.h +++ b/include/linux/stmmac.h @@ -186,6 +186,24 @@ struct stmmac_safety_feature_cfg { u32 tmouten; }; +/* Addresses that may be customized by a platform */ +struct dwmac4_addrs { + u32 dma_chan; + u32 dma_chan_offset; + u32 mtl_chan; + u32 mtl_chan_offset; + u32 mtl_ets_ctrl; + u32 mtl_ets_ctrl_offset; + u32 mtl_txq_weight; + u32 mtl_txq_weight_offset; + u32 mtl_send_slp_cred; + u32 mtl_send_slp_cred_offset; + u32 mtl_high_cred; + u32 mtl_high_cred_offset; + u32 mtl_low_cred; + u32 mtl_low_cred_offset; +}; + struct plat_stmmacenet_data { int bus_id; int phy_addr; @@ -223,6 +241,7 @@ struct plat_stmmacenet_data { struct stmmac_rxq_cfg rx_queues_cfg[MTL_MAX_RX_QUEUES]; struct stmmac_txq_cfg tx_queues_cfg[MTL_MAX_TX_QUEUES]; void (*fix_mac_speed)(void *priv, unsigned int speed); + int (*fix_soc_reset)(void *priv, void __iomem *ioaddr); int (*serdes_powerup)(struct net_device *ndev, void *priv); void (*serdes_powerdown)(struct net_device *ndev, void *priv); void (*speed_mode_2500)(struct net_device *ndev, void *priv); @@ -273,5 +292,6 @@ struct plat_stmmacenet_data { bool use_phy_wol; bool sph_disable; bool serdes_up_after_phy_linkup; + const struct dwmac4_addrs *dwmac4_addrs; }; #endif diff --git a/include/linux/tcp.h b/include/linux/tcp.h index ca7f05a130d2..b4c08ac86983 100644 --- a/include/linux/tcp.h +++ b/include/linux/tcp.h @@ -472,10 +472,12 @@ enum tsq_flags { TCPF_MTU_REDUCED_DEFERRED = (1UL << TCP_MTU_REDUCED_DEFERRED), }; -static inline struct tcp_sock *tcp_sk(const struct sock *sk) -{ - return (struct tcp_sock *)sk; -} +#define tcp_sk(ptr) container_of_const(ptr, struct tcp_sock, inet_conn.icsk_inet.sk) + +/* Variant of tcp_sk() upgrading a const sock to a read/write tcp socket. + * Used in context of (lockless) tcp listeners. 
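+ *
+ * Sketch of the difference (example() is hypothetical): tcp_sk() now
+ * preserves the constness of its argument via container_of_const(), while
+ * tcp_sk_rw() deliberately drops it:
+ *
+ *	static void example(const struct sock *sk)
+ *	{
+ *		const struct tcp_sock *tp = tcp_sk(sk);	/* const kept    */
+ *		struct tcp_sock *wtp = tcp_sk_rw(sk);	/* const dropped */
+ *	}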
+ */ +#define tcp_sk_rw(ptr) container_of(ptr, struct tcp_sock, inet_conn.icsk_inet.sk) struct tcp_timewait_sock { struct inet_timewait_sock tw_sk; diff --git a/include/linux/types.h b/include/linux/types.h index ea8cf60a8a79..688fb943556a 100644 --- a/include/linux/types.h +++ b/include/linux/types.h @@ -175,6 +175,12 @@ typedef struct { } atomic64_t; #endif +typedef struct { + atomic_t refcnt; +} rcuref_t; + +#define RCUREF_INIT(i) { .refcnt = ATOMIC_INIT(i - 1) } + struct list_head { struct list_head *next, *prev; }; diff --git a/include/linux/udp.h b/include/linux/udp.h index a2892e151644..43c1fb2d2c21 100644 --- a/include/linux/udp.h +++ b/include/linux/udp.h @@ -97,10 +97,7 @@ struct udp_sock { #define UDP_MAX_SEGMENTS (1 << 6UL) -static inline struct udp_sock *udp_sk(const struct sock *sk) -{ - return (struct udp_sock *)sk; -} +#define udp_sk(ptr) container_of_const(ptr, struct udp_sock, inet.sk) static inline void udp_set_no_check6_tx(struct sock *sk, bool val) { diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h index 3f9c16611306..c58453699ee9 100644 --- a/include/linux/virtio_vsock.h +++ b/include/linux/virtio_vsock.h @@ -245,4 +245,5 @@ u32 virtio_transport_get_credit(struct virtio_vsock_sock *vvs, u32 wanted); void virtio_transport_put_credit(struct virtio_vsock_sock *vvs, u32 credit); void virtio_transport_deliver_tap_pkt(struct sk_buff *skb); int virtio_transport_purge_skbs(void *vsk, struct sk_buff_head *list); +int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t read_actor); #endif /* _LINUX_VIRTIO_VSOCK_H */ diff --git a/include/linux/wwan.h b/include/linux/wwan.h index 24d76500b1cc..01fa15506286 100644 --- a/include/linux/wwan.h +++ b/include/linux/wwan.h @@ -64,11 +64,21 @@ struct wwan_port_ops { poll_table *wait); }; +/** struct wwan_port_caps - The WWAN port capbilities + * @frag_len: WWAN port TX fragments length + * @headroom_len: WWAN port TX fragments reserved headroom length + */ +struct wwan_port_caps { + size_t frag_len; + unsigned int headroom_len; +}; + /** * wwan_create_port - Add a new WWAN port * @parent: Device to use as parent and shared by all WWAN ports * @type: WWAN port type * @ops: WWAN port operations + * @caps: WWAN port capabilities * @drvdata: Pointer to caller driver data * * Allocate and register a new WWAN port. 
The port will be automatically exposed @@ -86,6 +96,7 @@ struct wwan_port_ops { struct wwan_port *wwan_create_port(struct device *parent, enum wwan_port_type type, const struct wwan_port_ops *ops, + struct wwan_port_caps *caps, void *drvdata); /** diff --git a/include/net/addrconf.h b/include/net/addrconf.h index c04f359655b8..82da55101b5a 100644 --- a/include/net/addrconf.h +++ b/include/net/addrconf.h @@ -223,7 +223,7 @@ int ipv6_sock_mc_drop(struct sock *sk, int ifindex, const struct in6_addr *addr); void __ipv6_sock_mc_close(struct sock *sk); void ipv6_sock_mc_close(struct sock *sk); -bool inet6_mc_check(struct sock *sk, const struct in6_addr *mc_addr, +bool inet6_mc_check(const struct sock *sk, const struct in6_addr *mc_addr, const struct in6_addr *src_addr); int ipv6_dev_mc_inc(struct net_device *dev, const struct in6_addr *addr); diff --git a/include/net/af_rxrpc.h b/include/net/af_rxrpc.h index ba717eac0229..01a35e113ab9 100644 --- a/include/net/af_rxrpc.h +++ b/include/net/af_rxrpc.h @@ -57,7 +57,8 @@ int rxrpc_kernel_recv_data(struct socket *, struct rxrpc_call *, struct iov_iter *, size_t *, bool, u32 *, u16 *); bool rxrpc_kernel_abort_call(struct socket *, struct rxrpc_call *, u32, int, enum rxrpc_abort_reason); -void rxrpc_kernel_end_call(struct socket *, struct rxrpc_call *); +void rxrpc_kernel_shutdown_call(struct socket *sock, struct rxrpc_call *call); +void rxrpc_kernel_put_call(struct socket *sock, struct rxrpc_call *call); void rxrpc_kernel_get_peer(struct socket *, struct rxrpc_call *, struct sockaddr_rxrpc *); bool rxrpc_kernel_get_srtt(struct socket *, struct rxrpc_call *, u32 *); diff --git a/include/net/af_unix.h b/include/net/af_unix.h index 480fa579787e..824c258143a3 100644 --- a/include/net/af_unix.h +++ b/include/net/af_unix.h @@ -11,6 +11,7 @@ void unix_inflight(struct user_struct *user, struct file *fp); void unix_notinflight(struct user_struct *user, struct file *fp); void unix_destruct_scm(struct sk_buff *skb); +void io_uring_destruct_scm(struct sk_buff *skb); void unix_gc(void); void wait_for_unix_gc(void); struct sock *unix_get_socket(struct file *filp); @@ -73,10 +74,7 @@ struct unix_sock { #endif }; -static inline struct unix_sock *unix_sk(const struct sock *sk) -{ - return (struct unix_sock *)sk; -} +#define unix_sk(ptr) container_of_const(ptr, struct unix_sock, sk) #define peer_wait peer_wq.wait diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h index 568a87c5e0d0..0e7504a42925 100644 --- a/include/net/af_vsock.h +++ b/include/net/af_vsock.h @@ -75,6 +75,7 @@ struct vsock_sock { void *trans; }; +s64 vsock_connectible_has_data(struct vsock_sock *vsk); s64 vsock_stream_has_data(struct vsock_sock *vsk); s64 vsock_stream_has_space(struct vsock_sock *vsk); struct sock *vsock_create_connected(struct sock *parent); @@ -173,6 +174,9 @@ struct vsock_transport { /* Addressing. 
*/ u32 (*get_local_cid)(void); + + /* Read a single skb */ + int (*read_skb)(struct vsock_sock *, skb_read_actor_t); }; /**** CORE ****/ @@ -225,5 +229,18 @@ int vsock_init_tap(void); int vsock_add_tap(struct vsock_tap *vt); int vsock_remove_tap(struct vsock_tap *vt); void vsock_deliver_tap(struct sk_buff *build_skb(void *opaque), void *opaque); +int vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, + int flags); +int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg, + size_t len, int flags); + +#ifdef CONFIG_BPF_SYSCALL +extern struct proto vsock_proto; +int vsock_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore); +void __init vsock_bpf_build_proto(void); +#else +static inline void __init vsock_bpf_build_proto(void) +{} +#endif #endif /* __AF_VSOCK_H__ */ diff --git a/include/net/arp.h b/include/net/arp.h index d7ef4ec71dfe..e8747e0713c7 100644 --- a/include/net/arp.h +++ b/include/net/arp.h @@ -38,11 +38,11 @@ static inline struct neighbour *__ipv4_neigh_lookup(struct net_device *dev, u32 { struct neighbour *n; - rcu_read_lock_bh(); + rcu_read_lock(); n = __ipv4_neigh_lookup_noref(dev, key); if (n && !refcount_inc_not_zero(&n->refcnt)) n = NULL; - rcu_read_unlock_bh(); + rcu_read_unlock(); return n; } @@ -51,10 +51,10 @@ static inline void __ipv4_confirm_neigh(struct net_device *dev, u32 key) { struct neighbour *n; - rcu_read_lock_bh(); + rcu_read_lock(); n = __ipv4_neigh_lookup_noref(dev, key); neigh_confirm(n); - rcu_read_unlock_bh(); + rcu_read_unlock(); } void arp_init(void); diff --git a/include/net/ax25.h b/include/net/ax25.h index f8cf3629a419..0d939e5aee4e 100644 --- a/include/net/ax25.h +++ b/include/net/ax25.h @@ -260,10 +260,7 @@ struct ax25_sock { struct ax25_cb *cb; }; -static inline struct ax25_sock *ax25_sk(const struct sock *sk) -{ - return (struct ax25_sock *) sk; -} +#define ax25_sk(ptr) container_of_const(ptr, struct ax25_sock, sk) static inline struct ax25_cb *sk_to_ax25(const struct sock *sk) { diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h index bcc5a4cd2c17..1b4230cd42a3 100644 --- a/include/net/bluetooth/bluetooth.h +++ b/include/net/bluetooth/bluetooth.h @@ -1,6 +1,7 @@ /* BlueZ - Bluetooth protocol stack for Linux Copyright (C) 2000-2001 Qualcomm Incorporated + Copyright 2023 NXP Written 2000,2001 by Maxim Krasnyansky <maxk@qualcomm.com> @@ -171,23 +172,39 @@ struct bt_iso_io_qos { __u8 rtn; }; -struct bt_iso_qos { - union { - __u8 cig; - __u8 big; - }; - union { - __u8 cis; - __u8 bis; - }; - union { - __u8 sca; - __u8 sync_interval; - }; +struct bt_iso_ucast_qos { + __u8 cig; + __u8 cis; + __u8 sca; + __u8 packing; + __u8 framing; + struct bt_iso_io_qos in; + struct bt_iso_io_qos out; +}; + +struct bt_iso_bcast_qos { + __u8 big; + __u8 bis; + __u8 sync_interval; __u8 packing; __u8 framing; struct bt_iso_io_qos in; struct bt_iso_io_qos out; + __u8 encryption; + __u8 bcode[16]; + __u8 options; + __u16 skip; + __u16 sync_timeout; + __u8 sync_cte_type; + __u8 mse; + __u16 timeout; +}; + +struct bt_iso_qos { + union { + struct bt_iso_ucast_qos ucast; + struct bt_iso_bcast_qos bcast; + }; }; #define BT_ISO_PHY_1M 0x01 diff --git a/include/net/bluetooth/coredump.h b/include/net/bluetooth/coredump.h new file mode 100644 index 000000000000..72f51b587a04 --- /dev/null +++ b/include/net/bluetooth/coredump.h @@ -0,0 +1,116 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2022 Google Corporation + */ + +#ifndef __COREDUMP_H +#define __COREDUMP_H + +#define 
DEVCOREDUMP_TIMEOUT msecs_to_jiffies(10000) /* 10 sec */ + +typedef void (*coredump_t)(struct hci_dev *hdev); +typedef void (*dmp_hdr_t)(struct hci_dev *hdev, struct sk_buff *skb); +typedef void (*notify_change_t)(struct hci_dev *hdev, int state); + +/* struct hci_devcoredump - Devcoredump state + * + * @supported: Indicates if FW dump collection is supported by driver + * @state: Current state of dump collection + * @timeout: Indicates a timeout for collecting the devcoredump + * + * @alloc_size: Total size of the dump + * @head: Start of the dump + * @tail: Pointer to current end of dump + * @end: head + alloc_size for easy comparisons + * + * @dump_q: Dump queue for state machine to process + * @dump_rx: Devcoredump state machine work + * @dump_timeout: Devcoredump timeout work + * + * @coredump: Called from the driver's .coredump() function. + * @dmp_hdr: Create a dump header to identify controller/fw/driver info + * @notify_change: Notify driver when devcoredump state has changed + */ +struct hci_devcoredump { + bool supported; + + enum devcoredump_state { + HCI_DEVCOREDUMP_IDLE, + HCI_DEVCOREDUMP_ACTIVE, + HCI_DEVCOREDUMP_DONE, + HCI_DEVCOREDUMP_ABORT, + HCI_DEVCOREDUMP_TIMEOUT, + } state; + + unsigned long timeout; + + size_t alloc_size; + char *head; + char *tail; + char *end; + + struct sk_buff_head dump_q; + struct work_struct dump_rx; + struct delayed_work dump_timeout; + + coredump_t coredump; + dmp_hdr_t dmp_hdr; + notify_change_t notify_change; +}; + +#ifdef CONFIG_DEV_COREDUMP + +void hci_devcd_reset(struct hci_dev *hdev); +void hci_devcd_rx(struct work_struct *work); +void hci_devcd_timeout(struct work_struct *work); + +int hci_devcd_register(struct hci_dev *hdev, coredump_t coredump, + dmp_hdr_t dmp_hdr, notify_change_t notify_change); +int hci_devcd_init(struct hci_dev *hdev, u32 dump_size); +int hci_devcd_append(struct hci_dev *hdev, struct sk_buff *skb); +int hci_devcd_append_pattern(struct hci_dev *hdev, u8 pattern, u32 len); +int hci_devcd_complete(struct hci_dev *hdev); +int hci_devcd_abort(struct hci_dev *hdev); + +#else + +static inline void hci_devcd_reset(struct hci_dev *hdev) {} +static inline void hci_devcd_rx(struct work_struct *work) {} +static inline void hci_devcd_timeout(struct work_struct *work) {} + +static inline int hci_devcd_register(struct hci_dev *hdev, coredump_t coredump, + dmp_hdr_t dmp_hdr, + notify_change_t notify_change) +{ + return -EOPNOTSUPP; +} + +static inline int hci_devcd_init(struct hci_dev *hdev, u32 dump_size) +{ + return -EOPNOTSUPP; +} + +static inline int hci_devcd_append(struct hci_dev *hdev, struct sk_buff *skb) +{ + return -EOPNOTSUPP; +} + +static inline int hci_devcd_append_pattern(struct hci_dev *hdev, + u8 pattern, u32 len) +{ + return -EOPNOTSUPP; +} + +static inline int hci_devcd_complete(struct hci_dev *hdev) +{ + return -EOPNOTSUPP; +} + +static inline int hci_devcd_abort(struct hci_dev *hdev) +{ + return -EOPNOTSUPP; +} + +#endif /* CONFIG_DEV_COREDUMP */ + +#endif /* __COREDUMP_H */ diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h index 400f8a7d0c3f..07df96c47ef4 100644 --- a/include/net/bluetooth/hci.h +++ b/include/net/bluetooth/hci.h @@ -294,6 +294,21 @@ enum { * during the hdev->setup vendor callback. */ HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG, + + /* When this quirk is set, max_page for local extended features + * is set to 1, even if controller reports higher number. Some + * controllers (e.g. RTL8723CS) report more pages, but they + * don't actually support features declared there. 
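+ *
+ * As with the other quirks, a driver would typically flag an affected
+ * controller from its setup path, e.g. (sketch only):
+ *
+ *	set_bit(HCI_QUIRK_BROKEN_LOCAL_EXT_FEATURES_PAGE_2, &hdev->quirks);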
+ */ + HCI_QUIRK_BROKEN_LOCAL_EXT_FEATURES_PAGE_2, + + /* + * When this quirk is set, the HCI_OP_LE_SET_RPA_TIMEOUT command is + * skipped during initialization. This is required for the Actions + * Semiconductor ATS2851 based controllers, which erroneously claims + * to support it. + */ + HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT, }; /* HCI device flags */ diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h index d5311ceb21c6..a6c8aee2f256 100644 --- a/include/net/bluetooth/hci_core.h +++ b/include/net/bluetooth/hci_core.h @@ -1,6 +1,7 @@ /* BlueZ - Bluetooth protocol stack for Linux Copyright (c) 2000-2001, 2010, Code Aurora Forum. All rights reserved. + Copyright 2023 NXP Written 2000,2001 by Maxim Krasnyansky <maxk@qualcomm.com> @@ -32,6 +33,7 @@ #include <net/bluetooth/hci.h> #include <net/bluetooth/hci_sync.h> #include <net/bluetooth/hci_sock.h> +#include <net/bluetooth/coredump.h> /* HCI priority */ #define HCI_PRIO_MAX 7 @@ -590,6 +592,10 @@ struct hci_dev { const char *fw_info; struct dentry *debugfs; +#ifdef CONFIG_DEV_COREDUMP + struct hci_devcoredump dump; +#endif + struct device dev; struct rfkill *rfkill; @@ -764,7 +770,10 @@ struct hci_conn { void *iso_data; struct amp_mgr *amp_mgr; - struct hci_conn *link; + struct list_head link_list; + struct hci_conn *parent; + struct hci_link *link; + struct bt_codec codec; void (*connect_cfm_cb) (struct hci_conn *conn, u8 status); @@ -774,6 +783,11 @@ struct hci_conn { void (*cleanup)(struct hci_conn *conn); }; +struct hci_link { + struct list_head list; + struct hci_conn *conn; +}; + struct hci_chan { struct list_head list; __u16 handle; @@ -979,7 +993,7 @@ static inline bool hci_conn_sc_enabled(struct hci_conn *conn) static inline void hci_conn_hash_add(struct hci_dev *hdev, struct hci_conn *c) { struct hci_conn_hash *h = &hdev->conn_hash; - list_add_rcu(&c->list, &h->list); + list_add_tail_rcu(&c->list, &h->list); switch (c->type) { case ACL_LINK: h->acl_num++; @@ -1091,7 +1105,7 @@ static inline struct hci_conn *hci_conn_hash_lookup_bis(struct hci_dev *hdev, if (bacmp(&c->dst, ba) || c->type != ISO_LINK) continue; - if (c->iso_qos.big == big && c->iso_qos.bis == bis) { + if (c->iso_qos.bcast.big == big && c->iso_qos.bcast.bis == bis) { rcu_read_unlock(); return c; } @@ -1166,7 +1180,9 @@ static inline struct hci_conn *hci_conn_hash_lookup_le(struct hci_dev *hdev, static inline struct hci_conn *hci_conn_hash_lookup_cis(struct hci_dev *hdev, bdaddr_t *ba, - __u8 ba_type) + __u8 ba_type, + __u8 cig, + __u8 id) { struct hci_conn_hash *h = &hdev->conn_hash; struct hci_conn *c; @@ -1177,6 +1193,14 @@ static inline struct hci_conn *hci_conn_hash_lookup_cis(struct hci_dev *hdev, if (c->type != ISO_LINK) continue; + /* Match CIG ID if set */ + if (cig != BT_ISO_QOS_CIG_UNSET && cig != c->iso_qos.ucast.cig) + continue; + + /* Match CIS ID if set */ + if (id != BT_ISO_QOS_CIS_UNSET && id != c->iso_qos.ucast.cis) + continue; + if (ba_type == c->dst_type && !bacmp(&c->dst, ba)) { rcu_read_unlock(); return c; @@ -1200,7 +1224,7 @@ static inline struct hci_conn *hci_conn_hash_lookup_cig(struct hci_dev *hdev, if (c->type != ISO_LINK) continue; - if (handle == c->iso_qos.cig) { + if (handle == c->iso_qos.ucast.cig) { rcu_read_unlock(); return c; } @@ -1223,7 +1247,7 @@ static inline struct hci_conn *hci_conn_hash_lookup_big(struct hci_dev *hdev, if (bacmp(&c->dst, BDADDR_ANY) || c->type != ISO_LINK) continue; - if (handle == c->iso_qos.big) { + if (handle == c->iso_qos.bcast.big) { rcu_read_unlock(); return c; } @@ -1332,7 
+1356,7 @@ struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst, __u8 dst_type, struct bt_iso_qos *qos, __u8 data_len, __u8 *data); int hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst, __u8 dst_type, - __u8 sid); + __u8 sid, struct bt_iso_qos *qos); int hci_le_big_create_sync(struct hci_dev *hdev, struct bt_iso_qos *qos, __u16 sync_handle, __u8 num_bis, __u8 bis[]); int hci_conn_check_link_mode(struct hci_conn *conn); @@ -1377,12 +1401,14 @@ static inline void hci_conn_put(struct hci_conn *conn) put_device(&conn->dev); } -static inline void hci_conn_hold(struct hci_conn *conn) +static inline struct hci_conn *hci_conn_hold(struct hci_conn *conn) { BT_DBG("hcon %p orig refcnt %d", conn, atomic_read(&conn->refcnt)); atomic_inc(&conn->refcnt); cancel_delayed_work(&conn->disc_work); + + return conn; } static inline void hci_conn_drop(struct hci_conn *conn) @@ -1497,6 +1523,15 @@ static inline void hci_set_aosp_capable(struct hci_dev *hdev) #endif } +static inline void hci_devcd_setup(struct hci_dev *hdev) +{ +#ifdef CONFIG_DEV_COREDUMP + INIT_WORK(&hdev->dump.dump_rx, hci_devcd_rx); + INIT_DELAYED_WORK(&hdev->dump.dump_timeout, hci_devcd_timeout); + skb_queue_head_init(&hdev->dump.dump_q); +#endif +} + int hci_dev_open(__u16 dev); int hci_dev_close(__u16 dev); int hci_dev_do_close(struct hci_dev *hdev); @@ -1668,9 +1703,13 @@ void hci_conn_del_sysfs(struct hci_conn *conn); #define scan_1m(dev) (((dev)->le_tx_def_phys & HCI_LE_SET_PHY_1M) || \ ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_1M)) +#define le_2m_capable(dev) (((dev)->le_features[1] & HCI_LE_PHY_2M)) + #define scan_2m(dev) (((dev)->le_tx_def_phys & HCI_LE_SET_PHY_2M) || \ ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_2M)) +#define le_coded_capable(dev) (((dev)->le_features[1] & HCI_LE_PHY_CODED)) + #define scan_coded(dev) (((dev)->le_tx_def_phys & HCI_LE_SET_PHY_CODED) || \ ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_CODED)) diff --git a/include/net/bluetooth/hci_sync.h b/include/net/bluetooth/hci_sync.h index 17f5a4c32f36..2495be4d8b82 100644 --- a/include/net/bluetooth/hci_sync.h +++ b/include/net/bluetooth/hci_sync.h @@ -41,6 +41,8 @@ void hci_cmd_sync_clear(struct hci_dev *hdev); void hci_cmd_sync_cancel(struct hci_dev *hdev, int err); void __hci_cmd_sync_cancel(struct hci_dev *hdev, int err); +int hci_cmd_sync_submit(struct hci_dev *hdev, hci_cmd_sync_work_func_t func, + void *data, hci_cmd_sync_work_destroy_t destroy); int hci_cmd_sync_queue(struct hci_dev *hdev, hci_cmd_sync_work_func_t func, void *data, hci_cmd_sync_work_destroy_t destroy); @@ -122,6 +124,8 @@ int hci_abort_conn_sync(struct hci_dev *hdev, struct hci_conn *conn, u8 reason); int hci_le_create_conn_sync(struct hci_dev *hdev, struct hci_conn *conn); +int hci_le_create_cis_sync(struct hci_dev *hdev, struct hci_conn *conn); + int hci_le_remove_cig_sync(struct hci_dev *hdev, u8 handle); int hci_le_terminate_big_sync(struct hci_dev *hdev, u8 handle, u8 reason); diff --git a/include/net/bluetooth/l2cap.h b/include/net/bluetooth/l2cap.h index 2f766e3437ce..cf393e72d6ed 100644 --- a/include/net/bluetooth/l2cap.h +++ b/include/net/bluetooth/l2cap.h @@ -694,7 +694,7 @@ struct l2cap_conn { struct sk_buff_head pending_rx; struct work_struct pending_rx_work; - struct work_struct id_addr_update_work; + struct delayed_work id_addr_timer; __u8 disc_reason; diff --git a/include/net/bluetooth/mgmt.h b/include/net/bluetooth/mgmt.h index e18a927669c0..a5801649f619 100644 --- a/include/net/bluetooth/mgmt.h +++ b/include/net/bluetooth/mgmt.h @@ -91,26 +91,26 @@ struct 
mgmt_rp_read_index_list { #define MGMT_MAX_NAME_LENGTH (HCI_MAX_NAME_LENGTH + 1) #define MGMT_MAX_SHORT_NAME_LENGTH (HCI_MAX_SHORT_NAME_LENGTH + 1) -#define MGMT_SETTING_POWERED 0x00000001 -#define MGMT_SETTING_CONNECTABLE 0x00000002 -#define MGMT_SETTING_FAST_CONNECTABLE 0x00000004 -#define MGMT_SETTING_DISCOVERABLE 0x00000008 -#define MGMT_SETTING_BONDABLE 0x00000010 -#define MGMT_SETTING_LINK_SECURITY 0x00000020 -#define MGMT_SETTING_SSP 0x00000040 -#define MGMT_SETTING_BREDR 0x00000080 -#define MGMT_SETTING_HS 0x00000100 -#define MGMT_SETTING_LE 0x00000200 -#define MGMT_SETTING_ADVERTISING 0x00000400 -#define MGMT_SETTING_SECURE_CONN 0x00000800 -#define MGMT_SETTING_DEBUG_KEYS 0x00001000 -#define MGMT_SETTING_PRIVACY 0x00002000 -#define MGMT_SETTING_CONFIGURATION 0x00004000 -#define MGMT_SETTING_STATIC_ADDRESS 0x00008000 -#define MGMT_SETTING_PHY_CONFIGURATION 0x00010000 -#define MGMT_SETTING_WIDEBAND_SPEECH 0x00020000 -#define MGMT_SETTING_CIS_CENTRAL 0x00040000 -#define MGMT_SETTING_CIS_PERIPHERAL 0x00080000 +#define MGMT_SETTING_POWERED BIT(0) +#define MGMT_SETTING_CONNECTABLE BIT(1) +#define MGMT_SETTING_FAST_CONNECTABLE BIT(2) +#define MGMT_SETTING_DISCOVERABLE BIT(3) +#define MGMT_SETTING_BONDABLE BIT(4) +#define MGMT_SETTING_LINK_SECURITY BIT(5) +#define MGMT_SETTING_SSP BIT(6) +#define MGMT_SETTING_BREDR BIT(7) +#define MGMT_SETTING_HS BIT(8) +#define MGMT_SETTING_LE BIT(9) +#define MGMT_SETTING_ADVERTISING BIT(10) +#define MGMT_SETTING_SECURE_CONN BIT(11) +#define MGMT_SETTING_DEBUG_KEYS BIT(12) +#define MGMT_SETTING_PRIVACY BIT(13) +#define MGMT_SETTING_CONFIGURATION BIT(14) +#define MGMT_SETTING_STATIC_ADDRESS BIT(15) +#define MGMT_SETTING_PHY_CONFIGURATION BIT(16) +#define MGMT_SETTING_WIDEBAND_SPEECH BIT(17) +#define MGMT_SETTING_CIS_CENTRAL BIT(18) +#define MGMT_SETTING_CIS_PERIPHERAL BIT(19) #define MGMT_OP_READ_INFO 0x0004 #define MGMT_READ_INFO_SIZE 0 @@ -635,21 +635,21 @@ struct mgmt_rp_get_phy_configuration { } __packed; #define MGMT_GET_PHY_CONFIGURATION_SIZE 0 -#define MGMT_PHY_BR_1M_1SLOT 0x00000001 -#define MGMT_PHY_BR_1M_3SLOT 0x00000002 -#define MGMT_PHY_BR_1M_5SLOT 0x00000004 -#define MGMT_PHY_EDR_2M_1SLOT 0x00000008 -#define MGMT_PHY_EDR_2M_3SLOT 0x00000010 -#define MGMT_PHY_EDR_2M_5SLOT 0x00000020 -#define MGMT_PHY_EDR_3M_1SLOT 0x00000040 -#define MGMT_PHY_EDR_3M_3SLOT 0x00000080 -#define MGMT_PHY_EDR_3M_5SLOT 0x00000100 -#define MGMT_PHY_LE_1M_TX 0x00000200 -#define MGMT_PHY_LE_1M_RX 0x00000400 -#define MGMT_PHY_LE_2M_TX 0x00000800 -#define MGMT_PHY_LE_2M_RX 0x00001000 -#define MGMT_PHY_LE_CODED_TX 0x00002000 -#define MGMT_PHY_LE_CODED_RX 0x00004000 +#define MGMT_PHY_BR_1M_1SLOT BIT(0) +#define MGMT_PHY_BR_1M_3SLOT BIT(1) +#define MGMT_PHY_BR_1M_5SLOT BIT(2) +#define MGMT_PHY_EDR_2M_1SLOT BIT(3) +#define MGMT_PHY_EDR_2M_3SLOT BIT(4) +#define MGMT_PHY_EDR_2M_5SLOT BIT(5) +#define MGMT_PHY_EDR_3M_1SLOT BIT(6) +#define MGMT_PHY_EDR_3M_3SLOT BIT(7) +#define MGMT_PHY_EDR_3M_5SLOT BIT(8) +#define MGMT_PHY_LE_1M_TX BIT(9) +#define MGMT_PHY_LE_1M_RX BIT(10) +#define MGMT_PHY_LE_2M_TX BIT(11) +#define MGMT_PHY_LE_2M_RX BIT(12) +#define MGMT_PHY_LE_CODED_TX BIT(13) +#define MGMT_PHY_LE_CODED_RX BIT(14) #define MGMT_PHY_BREDR_MASK (MGMT_PHY_BR_1M_1SLOT | MGMT_PHY_BR_1M_3SLOT | \ MGMT_PHY_BR_1M_5SLOT | MGMT_PHY_EDR_2M_1SLOT | \ @@ -974,11 +974,11 @@ struct mgmt_ev_auth_failed { __u8 status; } __packed; -#define MGMT_DEV_FOUND_CONFIRM_NAME 0x01 -#define MGMT_DEV_FOUND_LEGACY_PAIRING 0x02 -#define MGMT_DEV_FOUND_NOT_CONNECTABLE 0x04 -#define MGMT_DEV_FOUND_INITIATED_CONN 
0x08 -#define MGMT_DEV_FOUND_NAME_REQUEST_FAILED 0x10 +#define MGMT_DEV_FOUND_CONFIRM_NAME BIT(0) +#define MGMT_DEV_FOUND_LEGACY_PAIRING BIT(1) +#define MGMT_DEV_FOUND_NOT_CONNECTABLE BIT(2) +#define MGMT_DEV_FOUND_INITIATED_CONN BIT(3) +#define MGMT_DEV_FOUND_NAME_REQUEST_FAILED BIT(4) #define MGMT_EV_DEVICE_FOUND 0x0012 struct mgmt_ev_device_found { diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h index f115b2550309..9e04f69712b1 100644 --- a/include/net/cfg80211.h +++ b/include/net/cfg80211.h @@ -828,6 +828,18 @@ struct cfg80211_fils_aad { }; /** + * struct cfg80211_set_hw_timestamp - enable/disable HW timestamping + * @macaddr: peer MAC address. NULL to enable/disable HW timestamping for all + * addresses. + * @enable: if set, enable HW timestamping for the specified MAC address. + * Otherwise disable HW timestamping for the specified MAC address. + */ +struct cfg80211_set_hw_timestamp { + const u8 *macaddr; + bool enable; +}; + +/** * cfg80211_get_chandef_type - return old channel type from chandef * @chandef: the channel definition * @@ -939,6 +951,15 @@ int cfg80211_chandef_dfs_required(struct wiphy *wiphy, enum nl80211_iftype iftype); /** + * nl80211_send_chandef - sends the channel definition. + * @msg: the msg to send channel definition + * @chandef: the channel definition to check + * + * Returns: 0 if sent the channel definition to msg, < 0 on error + **/ +int nl80211_send_chandef(struct sk_buff *msg, const struct cfg80211_chan_def *chandef); + +/** * ieee80211_chanwidth_rate_flags - return rate flags for channel width * @width: the channel width of the channel * @@ -1167,6 +1188,23 @@ struct cfg80211_mbssid_elems { }; /** + * struct cfg80211_rnr_elems - Reduced neighbor report (RNR) elements + * + * @cnt: Number of elements in array %elems. + * + * @elem: Array of RNR element(s) to be added into Beacon frames. + * @elem.data: Data for RNR elements. + * @elem.len: Length of data. + */ +struct cfg80211_rnr_elems { + u8 cnt; + struct { + const u8 *data; + size_t len; + } elem[]; +}; + +/** * struct cfg80211_beacon_data - beacon data * @link_id: the link ID for the AP MLD link sending this beacon * @head: head portion of beacon (before TIM IE) @@ -1186,6 +1224,7 @@ struct cfg80211_mbssid_elems { * @probe_resp_len: length of probe response template (@probe_resp) * @probe_resp: probe response template (AP mode only) * @mbssid_ies: multiple BSSID elements + * @rnr_ies: reduced neighbor report elements * @ftm_responder: enable FTM responder functionality; -1 for no change * (which also implies no change in LCI/civic location data) * @lci: Measurement Report element content, starting with Measurement Token @@ -1209,6 +1248,7 @@ struct cfg80211_beacon_data { const u8 *lci; const u8 *civicloc; struct cfg80211_mbssid_elems *mbssid_ies; + struct cfg80211_rnr_elems *rnr_ies; s8 ftm_responder; size_t head_len, tail_len; @@ -4330,6 +4370,8 @@ struct mgmt_frame_regs { * @add_link_station: Add a link to a station. * @mod_link_station: Modify a link of a station. * @del_link_station: Remove a link of a station. + * + * @set_hw_timestamp: Enable/disable HW timestamping of TM/FTM frames. 
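struct cfg80211_rnr_elems above ends in a flexible array sized by @cnt, so a caller sizes the allocation by the element count. The helper below is a minimal sketch of that pattern with a single prebuilt element blob; the foo_* name is a placeholder and this is not how nl80211 itself builds the array from netlink attributes.

#include <linux/overflow.h>
#include <linux/slab.h>
#include <net/cfg80211.h>

/* Hypothetical helper: wrap one prebuilt RNR element. The caller keeps
 * @blob alive for as long as the returned structure is in use. */
static struct cfg80211_rnr_elems *foo_wrap_rnr(const u8 *blob, size_t len)
{
        struct cfg80211_rnr_elems *rnr;

        rnr = kzalloc(struct_size(rnr, elem, 1), GFP_KERNEL);
        if (!rnr)
                return NULL;

        rnr->cnt = 1;
        rnr->elem[0].data = blob;
        rnr->elem[0].len = len;
        return rnr;
}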
*/ struct cfg80211_ops { int (*suspend)(struct wiphy *wiphy, struct cfg80211_wowlan *wow); @@ -4683,6 +4725,8 @@ struct cfg80211_ops { struct link_station_parameters *params); int (*del_link_station)(struct wiphy *wiphy, struct net_device *dev, struct link_station_del_parameters *params); + int (*set_hw_timestamp)(struct wiphy *wiphy, struct net_device *dev, + struct cfg80211_set_hw_timestamp *hwts); }; /* @@ -5139,6 +5183,8 @@ struct wiphy_iftype_akm_suites { int n_akm_suites; }; +#define CFG80211_HW_TIMESTAMP_ALL_PEERS 0xffff + /** * struct wiphy - wireless hardware description * @mtx: mutex for the data (structures) of this device @@ -5348,6 +5394,13 @@ struct wiphy_iftype_akm_suites { * NL80211_MAX_NR_AKM_SUITES in order to avoid compatibility issues with * legacy userspace and maximum allowed value is * CFG80211_MAX_NUM_AKM_SUITES. + * + * @hw_timestamp_max_peers: maximum number of peers that the driver supports + * enabling HW timestamping for concurrently. Setting this field to a + * non-zero value indicates that the driver supports HW timestamping. + * A value of %CFG80211_HW_TIMESTAMP_ALL_PEERS indicates the driver + * supports enabling HW timestamping for all peers (i.e. no need to + * specify a mac address). */ struct wiphy { struct mutex mtx; @@ -5496,6 +5549,8 @@ struct wiphy { u8 ema_max_profile_periodicity; u16 max_num_akm_suites; + u16 hw_timestamp_max_peers; + char priv[] __aligned(NETDEV_ALIGN); }; @@ -6247,10 +6302,13 @@ static inline int ieee80211_data_to_8023(struct sk_buff *skb, const u8 *addr, * mesh control field. * * @skb: The input A-MSDU frame without any headers. - * @mesh_hdr: use standard compliant mesh A-MSDU subframe header + * @mesh_hdr: the type of mesh header to test + * 0: non-mesh A-MSDU length field + * 1: big-endian mesh A-MSDU length field + * 2: little-endian mesh A-MSDU length field * Returns: true if subframe header lengths are valid for the @mesh_hdr mode */ -bool ieee80211_is_valid_amsdu(struct sk_buff *skb, bool mesh_hdr); +bool ieee80211_is_valid_amsdu(struct sk_buff *skb, u8 mesh_hdr); /** * ieee80211_amsdu_to_8023s - decode an IEEE 802.11n A-MSDU frame @@ -6267,13 +6325,13 @@ bool ieee80211_is_valid_amsdu(struct sk_buff *skb, bool mesh_hdr); * @extra_headroom: The hardware extra headroom for SKBs in the @list. * @check_da: DA to check in the inner ethernet header, or NULL * @check_sa: SA to check in the inner ethernet header, or NULL - * @mesh_control: A-MSDU subframe header includes the mesh control field + * @mesh_control: see mesh_hdr in ieee80211_is_valid_amsdu */ void ieee80211_amsdu_to_8023s(struct sk_buff *skb, struct sk_buff_head *list, const u8 *addr, enum nl80211_iftype iftype, const unsigned int extra_headroom, const u8 *check_da, const u8 *check_sa, - bool mesh_control); + u8 mesh_control); /** * ieee80211_get_8023_tunnel_proto - get RFC1042 or bridge tunnel encap protocol @@ -6814,13 +6872,11 @@ enum cfg80211_bss_frame_type { * @ie: IEs * @ielen: length of IEs * @band: enum nl80211_band of the channel - * @ftype: frame type * * Returns the channel number, or -1 if none could be determined. */ int cfg80211_get_ies_channel_number(const u8 *ie, size_t ielen, - enum nl80211_band band, - enum cfg80211_bss_frame_type ftype); + enum nl80211_band band); /** * cfg80211_inform_bss_data - inform cfg80211 of a new BSS @@ -8101,6 +8157,7 @@ void cfg80211_control_port_tx_status(struct wireless_dev *wdev, u64 cookie, * responsible for any cleanup. The caller must also ensure that * skb->protocol is set appropriately. 
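A driver that advertises wiphy->hw_timestamp_max_peers gets the new .set_hw_timestamp op called with the structure defined above, where a NULL macaddr means "all peers" per the kernel-doc. The handler below is an assumed driver-side sketch (foo_priv and its field are placeholders), not a real driver implementation.

#include <net/cfg80211.h>

struct foo_priv {
        bool ts_all_peers;      /* illustrative driver state */
};

static int foo_set_hw_timestamp(struct wiphy *wiphy, struct net_device *dev,
                                struct cfg80211_set_hw_timestamp *hwts)
{
        struct foo_priv *priv = wiphy_priv(wiphy);

        if (!hwts->macaddr) {
                /* NULL macaddr: enable/disable for all peers */
                priv->ts_all_peers = hwts->enable;
                return 0;
        }

        /* ...program a per-peer timestamping filter for hwts->macaddr... */
        return 0;
}

/* Wired up next to the other ops, with the capability advertised, e.g.:
 *      .set_hw_timestamp = foo_set_hw_timestamp,
 *      wiphy->hw_timestamp_max_peers = CFG80211_HW_TIMESTAMP_ALL_PEERS;
 */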
* @unencrypted: Whether the frame was received unencrypted + * @link_id: the link the frame was received on, -1 if not applicable or unknown * * This function is used to inform userspace about a received control port * frame. It should only be used if userspace indicated it wants to receive @@ -8111,8 +8168,8 @@ void cfg80211_control_port_tx_status(struct wireless_dev *wdev, u64 cookie, * * Return: %true if the frame was passed to userspace */ -bool cfg80211_rx_control_port(struct net_device *dev, - struct sk_buff *skb, bool unencrypted); +bool cfg80211_rx_control_port(struct net_device *dev, struct sk_buff *skb, + bool unencrypted, int link_id); /** * cfg80211_cqm_rssi_notify - connection quality monitoring rssi event diff --git a/include/net/dropreason-core.h b/include/net/dropreason-core.h new file mode 100644 index 000000000000..a2b953b57689 --- /dev/null +++ b/include/net/dropreason-core.h @@ -0,0 +1,370 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ + +#ifndef _LINUX_DROPREASON_CORE_H +#define _LINUX_DROPREASON_CORE_H + +#define DEFINE_DROP_REASON(FN, FNe) \ + FN(NOT_SPECIFIED) \ + FN(NO_SOCKET) \ + FN(PKT_TOO_SMALL) \ + FN(TCP_CSUM) \ + FN(SOCKET_FILTER) \ + FN(UDP_CSUM) \ + FN(NETFILTER_DROP) \ + FN(OTHERHOST) \ + FN(IP_CSUM) \ + FN(IP_INHDR) \ + FN(IP_RPFILTER) \ + FN(UNICAST_IN_L2_MULTICAST) \ + FN(XFRM_POLICY) \ + FN(IP_NOPROTO) \ + FN(SOCKET_RCVBUFF) \ + FN(PROTO_MEM) \ + FN(TCP_MD5NOTFOUND) \ + FN(TCP_MD5UNEXPECTED) \ + FN(TCP_MD5FAILURE) \ + FN(SOCKET_BACKLOG) \ + FN(TCP_FLAGS) \ + FN(TCP_ZEROWINDOW) \ + FN(TCP_OLD_DATA) \ + FN(TCP_OVERWINDOW) \ + FN(TCP_OFOMERGE) \ + FN(TCP_RFC7323_PAWS) \ + FN(TCP_INVALID_SEQUENCE) \ + FN(TCP_RESET) \ + FN(TCP_INVALID_SYN) \ + FN(TCP_CLOSE) \ + FN(TCP_FASTOPEN) \ + FN(TCP_OLD_ACK) \ + FN(TCP_TOO_OLD_ACK) \ + FN(TCP_ACK_UNSENT_DATA) \ + FN(TCP_OFO_QUEUE_PRUNE) \ + FN(TCP_OFO_DROP) \ + FN(IP_OUTNOROUTES) \ + FN(BPF_CGROUP_EGRESS) \ + FN(IPV6DISABLED) \ + FN(NEIGH_CREATEFAIL) \ + FN(NEIGH_FAILED) \ + FN(NEIGH_QUEUEFULL) \ + FN(NEIGH_DEAD) \ + FN(TC_EGRESS) \ + FN(QDISC_DROP) \ + FN(CPU_BACKLOG) \ + FN(XDP) \ + FN(TC_INGRESS) \ + FN(UNHANDLED_PROTO) \ + FN(SKB_CSUM) \ + FN(SKB_GSO_SEG) \ + FN(SKB_UCOPY_FAULT) \ + FN(DEV_HDR) \ + FN(DEV_READY) \ + FN(FULL_RING) \ + FN(NOMEM) \ + FN(HDR_TRUNC) \ + FN(TAP_FILTER) \ + FN(TAP_TXFILTER) \ + FN(ICMP_CSUM) \ + FN(INVALID_PROTO) \ + FN(IP_INADDRERRORS) \ + FN(IP_INNOROUTES) \ + FN(PKT_TOO_BIG) \ + FN(DUP_FRAG) \ + FN(FRAG_REASM_TIMEOUT) \ + FN(FRAG_TOO_FAR) \ + FN(TCP_MINTTL) \ + FN(IPV6_BAD_EXTHDR) \ + FN(IPV6_NDISC_FRAG) \ + FN(IPV6_NDISC_HOP_LIMIT) \ + FN(IPV6_NDISC_BAD_CODE) \ + FN(IPV6_NDISC_BAD_OPTIONS) \ + FN(IPV6_NDISC_NS_OTHERHOST) \ + FNe(MAX) + +/** + * enum skb_drop_reason - the reasons of skb drops + * + * The reason of skb drop, which is used in kfree_skb_reason(). 
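DEFINE_DROP_REASON() above is an X-macro: a caller supplies FN and FNe and gets one expansion per reason, with FNe used only for the final MAX entry. Below is a minimal sketch of the usual instantiation, a string table indexed by the enum; it mirrors the pattern the core uses for its reason names, not the exact in-tree source.

#include <net/dropreason-core.h>

#define FN(reason)  [SKB_DROP_REASON_##reason] = #reason,
#define FNe(reason) [SKB_DROP_REASON_##reason] = #reason

static const char * const foo_core_reason_names[] = {
        [SKB_NOT_DROPPED_YET] = "NOT_DROPPED_YET",
        [SKB_CONSUMED]        = "CONSUMED",
        DEFINE_DROP_REASON(FN, FNe)
};

#undef FN
#undef FNe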
+ */ +enum skb_drop_reason { + /** + * @SKB_NOT_DROPPED_YET: skb is not dropped yet (used for no-drop case) + */ + SKB_NOT_DROPPED_YET = 0, + /** @SKB_CONSUMED: packet has been consumed */ + SKB_CONSUMED, + /** @SKB_DROP_REASON_NOT_SPECIFIED: drop reason is not specified */ + SKB_DROP_REASON_NOT_SPECIFIED, + /** @SKB_DROP_REASON_NO_SOCKET: socket not found */ + SKB_DROP_REASON_NO_SOCKET, + /** @SKB_DROP_REASON_PKT_TOO_SMALL: packet size is too small */ + SKB_DROP_REASON_PKT_TOO_SMALL, + /** @SKB_DROP_REASON_TCP_CSUM: TCP checksum error */ + SKB_DROP_REASON_TCP_CSUM, + /** @SKB_DROP_REASON_SOCKET_FILTER: dropped by socket filter */ + SKB_DROP_REASON_SOCKET_FILTER, + /** @SKB_DROP_REASON_UDP_CSUM: UDP checksum error */ + SKB_DROP_REASON_UDP_CSUM, + /** @SKB_DROP_REASON_NETFILTER_DROP: dropped by netfilter */ + SKB_DROP_REASON_NETFILTER_DROP, + /** + * @SKB_DROP_REASON_OTHERHOST: packet don't belong to current host + * (interface is in promisc mode) + */ + SKB_DROP_REASON_OTHERHOST, + /** @SKB_DROP_REASON_IP_CSUM: IP checksum error */ + SKB_DROP_REASON_IP_CSUM, + /** + * @SKB_DROP_REASON_IP_INHDR: there is something wrong with IP header (see + * IPSTATS_MIB_INHDRERRORS) + */ + SKB_DROP_REASON_IP_INHDR, + /** + * @SKB_DROP_REASON_IP_RPFILTER: IP rpfilter validate failed. see the + * document for rp_filter in ip-sysctl.rst for more information + */ + SKB_DROP_REASON_IP_RPFILTER, + /** + * @SKB_DROP_REASON_UNICAST_IN_L2_MULTICAST: destination address of L2 is + * multicast, but L3 is unicast. + */ + SKB_DROP_REASON_UNICAST_IN_L2_MULTICAST, + /** @SKB_DROP_REASON_XFRM_POLICY: xfrm policy check failed */ + SKB_DROP_REASON_XFRM_POLICY, + /** @SKB_DROP_REASON_IP_NOPROTO: no support for IP protocol */ + SKB_DROP_REASON_IP_NOPROTO, + /** @SKB_DROP_REASON_SOCKET_RCVBUFF: socket receive buff is full */ + SKB_DROP_REASON_SOCKET_RCVBUFF, + /** + * @SKB_DROP_REASON_PROTO_MEM: proto memory limition, such as udp packet + * drop out of udp_memory_allocated. 
+ */ + SKB_DROP_REASON_PROTO_MEM, + /** + * @SKB_DROP_REASON_TCP_MD5NOTFOUND: no MD5 hash and one expected, + * corresponding to LINUX_MIB_TCPMD5NOTFOUND + */ + SKB_DROP_REASON_TCP_MD5NOTFOUND, + /** + * @SKB_DROP_REASON_TCP_MD5UNEXPECTED: MD5 hash and we're not expecting + * one, corresponding to LINUX_MIB_TCPMD5UNEXPECTED + */ + SKB_DROP_REASON_TCP_MD5UNEXPECTED, + /** + * @SKB_DROP_REASON_TCP_MD5FAILURE: MD5 hash and its wrong, corresponding + * to LINUX_MIB_TCPMD5FAILURE + */ + SKB_DROP_REASON_TCP_MD5FAILURE, + /** + * @SKB_DROP_REASON_SOCKET_BACKLOG: failed to add skb to socket backlog ( + * see LINUX_MIB_TCPBACKLOGDROP) + */ + SKB_DROP_REASON_SOCKET_BACKLOG, + /** @SKB_DROP_REASON_TCP_FLAGS: TCP flags invalid */ + SKB_DROP_REASON_TCP_FLAGS, + /** + * @SKB_DROP_REASON_TCP_ZEROWINDOW: TCP receive window size is zero, + * see LINUX_MIB_TCPZEROWINDOWDROP + */ + SKB_DROP_REASON_TCP_ZEROWINDOW, + /** + * @SKB_DROP_REASON_TCP_OLD_DATA: the TCP data reveived is already + * received before (spurious retrans may happened), see + * LINUX_MIB_DELAYEDACKLOST + */ + SKB_DROP_REASON_TCP_OLD_DATA, + /** + * @SKB_DROP_REASON_TCP_OVERWINDOW: the TCP data is out of window, + * the seq of the first byte exceed the right edges of receive + * window + */ + SKB_DROP_REASON_TCP_OVERWINDOW, + /** + * @SKB_DROP_REASON_TCP_OFOMERGE: the data of skb is already in the ofo + * queue, corresponding to LINUX_MIB_TCPOFOMERGE + */ + SKB_DROP_REASON_TCP_OFOMERGE, + /** + * @SKB_DROP_REASON_TCP_RFC7323_PAWS: PAWS check, corresponding to + * LINUX_MIB_PAWSESTABREJECTED + */ + SKB_DROP_REASON_TCP_RFC7323_PAWS, + /** @SKB_DROP_REASON_TCP_INVALID_SEQUENCE: Not acceptable SEQ field */ + SKB_DROP_REASON_TCP_INVALID_SEQUENCE, + /** @SKB_DROP_REASON_TCP_RESET: Invalid RST packet */ + SKB_DROP_REASON_TCP_RESET, + /** + * @SKB_DROP_REASON_TCP_INVALID_SYN: Incoming packet has unexpected + * SYN flag + */ + SKB_DROP_REASON_TCP_INVALID_SYN, + /** @SKB_DROP_REASON_TCP_CLOSE: TCP socket in CLOSE state */ + SKB_DROP_REASON_TCP_CLOSE, + /** @SKB_DROP_REASON_TCP_FASTOPEN: dropped by FASTOPEN request socket */ + SKB_DROP_REASON_TCP_FASTOPEN, + /** @SKB_DROP_REASON_TCP_OLD_ACK: TCP ACK is old, but in window */ + SKB_DROP_REASON_TCP_OLD_ACK, + /** @SKB_DROP_REASON_TCP_TOO_OLD_ACK: TCP ACK is too old */ + SKB_DROP_REASON_TCP_TOO_OLD_ACK, + /** + * @SKB_DROP_REASON_TCP_ACK_UNSENT_DATA: TCP ACK for data we haven't + * sent yet + */ + SKB_DROP_REASON_TCP_ACK_UNSENT_DATA, + /** @SKB_DROP_REASON_TCP_OFO_QUEUE_PRUNE: pruned from TCP OFO queue */ + SKB_DROP_REASON_TCP_OFO_QUEUE_PRUNE, + /** @SKB_DROP_REASON_TCP_OFO_DROP: data already in receive queue */ + SKB_DROP_REASON_TCP_OFO_DROP, + /** @SKB_DROP_REASON_IP_OUTNOROUTES: route lookup failed */ + SKB_DROP_REASON_IP_OUTNOROUTES, + /** + * @SKB_DROP_REASON_BPF_CGROUP_EGRESS: dropped by BPF_PROG_TYPE_CGROUP_SKB + * eBPF program + */ + SKB_DROP_REASON_BPF_CGROUP_EGRESS, + /** @SKB_DROP_REASON_IPV6DISABLED: IPv6 is disabled on the device */ + SKB_DROP_REASON_IPV6DISABLED, + /** @SKB_DROP_REASON_NEIGH_CREATEFAIL: failed to create neigh entry */ + SKB_DROP_REASON_NEIGH_CREATEFAIL, + /** @SKB_DROP_REASON_NEIGH_FAILED: neigh entry in failed state */ + SKB_DROP_REASON_NEIGH_FAILED, + /** @SKB_DROP_REASON_NEIGH_QUEUEFULL: arp_queue for neigh entry is full */ + SKB_DROP_REASON_NEIGH_QUEUEFULL, + /** @SKB_DROP_REASON_NEIGH_DEAD: neigh entry is dead */ + SKB_DROP_REASON_NEIGH_DEAD, + /** @SKB_DROP_REASON_TC_EGRESS: dropped in TC egress HOOK */ + SKB_DROP_REASON_TC_EGRESS, + /** + * @SKB_DROP_REASON_QDISC_DROP: 
dropped by qdisc when packet outputting ( + * failed to enqueue to current qdisc) + */ + SKB_DROP_REASON_QDISC_DROP, + /** + * @SKB_DROP_REASON_CPU_BACKLOG: failed to enqueue the skb to the per CPU + * backlog queue. This can be caused by backlog queue full (see + * netdev_max_backlog in net.rst) or RPS flow limit + */ + SKB_DROP_REASON_CPU_BACKLOG, + /** @SKB_DROP_REASON_XDP: dropped by XDP in input path */ + SKB_DROP_REASON_XDP, + /** @SKB_DROP_REASON_TC_INGRESS: dropped in TC ingress HOOK */ + SKB_DROP_REASON_TC_INGRESS, + /** @SKB_DROP_REASON_UNHANDLED_PROTO: protocol not implemented or not supported */ + SKB_DROP_REASON_UNHANDLED_PROTO, + /** @SKB_DROP_REASON_SKB_CSUM: sk_buff checksum computation error */ + SKB_DROP_REASON_SKB_CSUM, + /** @SKB_DROP_REASON_SKB_GSO_SEG: gso segmentation error */ + SKB_DROP_REASON_SKB_GSO_SEG, + /** + * @SKB_DROP_REASON_SKB_UCOPY_FAULT: failed to copy data from user space, + * e.g., via zerocopy_sg_from_iter() or skb_orphan_frags_rx() + */ + SKB_DROP_REASON_SKB_UCOPY_FAULT, + /** @SKB_DROP_REASON_DEV_HDR: device driver specific header/metadata is invalid */ + SKB_DROP_REASON_DEV_HDR, + /** + * @SKB_DROP_REASON_DEV_READY: the device is not ready to xmit/recv due to + * any of its data structure that is not up/ready/initialized, + * e.g., the IFF_UP is not set, or driver specific tun->tfiles[txq] + * is not initialized + */ + SKB_DROP_REASON_DEV_READY, + /** @SKB_DROP_REASON_FULL_RING: ring buffer is full */ + SKB_DROP_REASON_FULL_RING, + /** @SKB_DROP_REASON_NOMEM: error due to OOM */ + SKB_DROP_REASON_NOMEM, + /** + * @SKB_DROP_REASON_HDR_TRUNC: failed to trunc/extract the header from + * networking data, e.g., failed to pull the protocol header from + * frags via pskb_may_pull() + */ + SKB_DROP_REASON_HDR_TRUNC, + /** + * @SKB_DROP_REASON_TAP_FILTER: dropped by (ebpf) filter directly attached + * to tun/tap, e.g., via TUNSETFILTEREBPF + */ + SKB_DROP_REASON_TAP_FILTER, + /** + * @SKB_DROP_REASON_TAP_TXFILTER: dropped by tx filter implemented at + * tun/tap, e.g., check_filter() + */ + SKB_DROP_REASON_TAP_TXFILTER, + /** @SKB_DROP_REASON_ICMP_CSUM: ICMP checksum error */ + SKB_DROP_REASON_ICMP_CSUM, + /** + * @SKB_DROP_REASON_INVALID_PROTO: the packet doesn't follow RFC 2211, + * such as a broadcasts ICMP_TIMESTAMP + */ + SKB_DROP_REASON_INVALID_PROTO, + /** + * @SKB_DROP_REASON_IP_INADDRERRORS: host unreachable, corresponding to + * IPSTATS_MIB_INADDRERRORS + */ + SKB_DROP_REASON_IP_INADDRERRORS, + /** + * @SKB_DROP_REASON_IP_INNOROUTES: network unreachable, corresponding to + * IPSTATS_MIB_INADDRERRORS + */ + SKB_DROP_REASON_IP_INNOROUTES, + /** + * @SKB_DROP_REASON_PKT_TOO_BIG: packet size is too big (maybe exceed the + * MTU) + */ + SKB_DROP_REASON_PKT_TOO_BIG, + /** @SKB_DROP_REASON_DUP_FRAG: duplicate fragment */ + SKB_DROP_REASON_DUP_FRAG, + /** @SKB_DROP_REASON_FRAG_REASM_TIMEOUT: fragment reassembly timeout */ + SKB_DROP_REASON_FRAG_REASM_TIMEOUT, + /** + * @SKB_DROP_REASON_FRAG_TOO_FAR: ipv4 fragment too far. + * (/proc/sys/net/ipv4/ipfrag_max_dist) + */ + SKB_DROP_REASON_FRAG_TOO_FAR, + /** + * @SKB_DROP_REASON_TCP_MINTTL: ipv4 ttl or ipv6 hoplimit below + * the threshold (IP_MINTTL or IPV6_MINHOPCOUNT). + */ + SKB_DROP_REASON_TCP_MINTTL, + /** @SKB_DROP_REASON_IPV6_BAD_EXTHDR: Bad IPv6 extension header. */ + SKB_DROP_REASON_IPV6_BAD_EXTHDR, + /** @SKB_DROP_REASON_IPV6_NDISC_FRAG: invalid frag (suppress_frag_ndisc). */ + SKB_DROP_REASON_IPV6_NDISC_FRAG, + /** @SKB_DROP_REASON_IPV6_NDISC_HOP_LIMIT: invalid hop limit. 
*/ + SKB_DROP_REASON_IPV6_NDISC_HOP_LIMIT, + /** @SKB_DROP_REASON_IPV6_NDISC_BAD_CODE: invalid NDISC icmp6 code. */ + SKB_DROP_REASON_IPV6_NDISC_BAD_CODE, + /** @SKB_DROP_REASON_IPV6_NDISC_BAD_OPTIONS: invalid NDISC options. */ + SKB_DROP_REASON_IPV6_NDISC_BAD_OPTIONS, + /** + * @SKB_DROP_REASON_IPV6_NDISC_NS_OTHERHOST: NEIGHBOUR SOLICITATION + * for another host. + */ + SKB_DROP_REASON_IPV6_NDISC_NS_OTHERHOST, + /** + * @SKB_DROP_REASON_MAX: the maximum of core drop reasons, which + * shouldn't be used as a real 'reason' - only for tracing code gen + */ + SKB_DROP_REASON_MAX, + + /** + * @SKB_DROP_REASON_SUBSYS_MASK: subsystem mask in drop reasons, + * see &enum skb_drop_reason_subsys + */ + SKB_DROP_REASON_SUBSYS_MASK = 0xffff0000, +}; + +#define SKB_DROP_REASON_SUBSYS_SHIFT 16 + +#define SKB_DR_INIT(name, reason) \ + enum skb_drop_reason name = SKB_DROP_REASON_##reason +#define SKB_DR(name) \ + SKB_DR_INIT(name, NOT_SPECIFIED) +#define SKB_DR_SET(name, reason) \ + (name = SKB_DROP_REASON_##reason) +#define SKB_DR_OR(name, reason) \ + do { \ + if (name == SKB_DROP_REASON_NOT_SPECIFIED || \ + name == SKB_NOT_DROPPED_YET) \ + SKB_DR_SET(name, reason); \ + } while (0) + +#endif diff --git a/include/net/dropreason.h b/include/net/dropreason.h index c0a3ea806cd5..685fb37df8e8 100644 --- a/include/net/dropreason.h +++ b/include/net/dropreason.h @@ -2,362 +2,42 @@ #ifndef _LINUX_DROPREASON_H #define _LINUX_DROPREASON_H - -#define DEFINE_DROP_REASON(FN, FNe) \ - FN(NOT_SPECIFIED) \ - FN(NO_SOCKET) \ - FN(PKT_TOO_SMALL) \ - FN(TCP_CSUM) \ - FN(SOCKET_FILTER) \ - FN(UDP_CSUM) \ - FN(NETFILTER_DROP) \ - FN(OTHERHOST) \ - FN(IP_CSUM) \ - FN(IP_INHDR) \ - FN(IP_RPFILTER) \ - FN(UNICAST_IN_L2_MULTICAST) \ - FN(XFRM_POLICY) \ - FN(IP_NOPROTO) \ - FN(SOCKET_RCVBUFF) \ - FN(PROTO_MEM) \ - FN(TCP_MD5NOTFOUND) \ - FN(TCP_MD5UNEXPECTED) \ - FN(TCP_MD5FAILURE) \ - FN(SOCKET_BACKLOG) \ - FN(TCP_FLAGS) \ - FN(TCP_ZEROWINDOW) \ - FN(TCP_OLD_DATA) \ - FN(TCP_OVERWINDOW) \ - FN(TCP_OFOMERGE) \ - FN(TCP_RFC7323_PAWS) \ - FN(TCP_INVALID_SEQUENCE) \ - FN(TCP_RESET) \ - FN(TCP_INVALID_SYN) \ - FN(TCP_CLOSE) \ - FN(TCP_FASTOPEN) \ - FN(TCP_OLD_ACK) \ - FN(TCP_TOO_OLD_ACK) \ - FN(TCP_ACK_UNSENT_DATA) \ - FN(TCP_OFO_QUEUE_PRUNE) \ - FN(TCP_OFO_DROP) \ - FN(IP_OUTNOROUTES) \ - FN(BPF_CGROUP_EGRESS) \ - FN(IPV6DISABLED) \ - FN(NEIGH_CREATEFAIL) \ - FN(NEIGH_FAILED) \ - FN(NEIGH_QUEUEFULL) \ - FN(NEIGH_DEAD) \ - FN(TC_EGRESS) \ - FN(QDISC_DROP) \ - FN(CPU_BACKLOG) \ - FN(XDP) \ - FN(TC_INGRESS) \ - FN(UNHANDLED_PROTO) \ - FN(SKB_CSUM) \ - FN(SKB_GSO_SEG) \ - FN(SKB_UCOPY_FAULT) \ - FN(DEV_HDR) \ - FN(DEV_READY) \ - FN(FULL_RING) \ - FN(NOMEM) \ - FN(HDR_TRUNC) \ - FN(TAP_FILTER) \ - FN(TAP_TXFILTER) \ - FN(ICMP_CSUM) \ - FN(INVALID_PROTO) \ - FN(IP_INADDRERRORS) \ - FN(IP_INNOROUTES) \ - FN(PKT_TOO_BIG) \ - FN(DUP_FRAG) \ - FN(FRAG_REASM_TIMEOUT) \ - FN(FRAG_TOO_FAR) \ - FN(TCP_MINTTL) \ - FN(IPV6_BAD_EXTHDR) \ - FN(IPV6_NDISC_FRAG) \ - FN(IPV6_NDISC_HOP_LIMIT) \ - FN(IPV6_NDISC_BAD_CODE) \ - FN(IPV6_NDISC_BAD_OPTIONS) \ - FN(IPV6_NDISC_NS_OTHERHOST) \ - FNe(MAX) +#include <net/dropreason-core.h> /** - * enum skb_drop_reason - the reasons of skb drops - * - * The reason of skb drop, which is used in kfree_skb_reason(). 
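The SKB_DR_* helpers at the end of dropreason-core.h above keep a per-function reason variable tidy before it is handed to kfree_skb_reason(). The receive-path fragment below is purely illustrative (the 4-byte pull stands in for a real protocol header check); it shows the intended usage, not code from any particular protocol.

#include <linux/if_packet.h>
#include <linux/skbuff.h>
#include <net/dropreason-core.h>

static int foo_rcv(struct sk_buff *skb)
{
        SKB_DR(reason);                 /* starts as NOT_SPECIFIED */

        if (!pskb_may_pull(skb, 4)) {
                SKB_DR_SET(reason, PKT_TOO_SMALL);
                goto drop;
        }

        if (skb->pkt_type == PACKET_OTHERHOST) {
                SKB_DR_SET(reason, OTHERHOST);
                goto drop;
        }

        consume_skb(skb);
        return 0;

drop:
        /* SKB_DR_OR() would only overwrite a still-unspecified reason */
        kfree_skb_reason(skb, reason);
        return -1;
}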
+ * enum skb_drop_reason_subsys - subsystem tag for (extended) drop reasons */ -enum skb_drop_reason { - /** - * @SKB_NOT_DROPPED_YET: skb is not dropped yet (used for no-drop case) - */ - SKB_NOT_DROPPED_YET = 0, - /** @SKB_CONSUMED: packet has been consumed */ - SKB_CONSUMED, - /** @SKB_DROP_REASON_NOT_SPECIFIED: drop reason is not specified */ - SKB_DROP_REASON_NOT_SPECIFIED, - /** @SKB_DROP_REASON_NO_SOCKET: socket not found */ - SKB_DROP_REASON_NO_SOCKET, - /** @SKB_DROP_REASON_PKT_TOO_SMALL: packet size is too small */ - SKB_DROP_REASON_PKT_TOO_SMALL, - /** @SKB_DROP_REASON_TCP_CSUM: TCP checksum error */ - SKB_DROP_REASON_TCP_CSUM, - /** @SKB_DROP_REASON_SOCKET_FILTER: dropped by socket filter */ - SKB_DROP_REASON_SOCKET_FILTER, - /** @SKB_DROP_REASON_UDP_CSUM: UDP checksum error */ - SKB_DROP_REASON_UDP_CSUM, - /** @SKB_DROP_REASON_NETFILTER_DROP: dropped by netfilter */ - SKB_DROP_REASON_NETFILTER_DROP, - /** - * @SKB_DROP_REASON_OTHERHOST: packet don't belong to current host - * (interface is in promisc mode) - */ - SKB_DROP_REASON_OTHERHOST, - /** @SKB_DROP_REASON_IP_CSUM: IP checksum error */ - SKB_DROP_REASON_IP_CSUM, - /** - * @SKB_DROP_REASON_IP_INHDR: there is something wrong with IP header (see - * IPSTATS_MIB_INHDRERRORS) - */ - SKB_DROP_REASON_IP_INHDR, - /** - * @SKB_DROP_REASON_IP_RPFILTER: IP rpfilter validate failed. see the - * document for rp_filter in ip-sysctl.rst for more information - */ - SKB_DROP_REASON_IP_RPFILTER, - /** - * @SKB_DROP_REASON_UNICAST_IN_L2_MULTICAST: destination address of L2 is - * multicast, but L3 is unicast. - */ - SKB_DROP_REASON_UNICAST_IN_L2_MULTICAST, - /** @SKB_DROP_REASON_XFRM_POLICY: xfrm policy check failed */ - SKB_DROP_REASON_XFRM_POLICY, - /** @SKB_DROP_REASON_IP_NOPROTO: no support for IP protocol */ - SKB_DROP_REASON_IP_NOPROTO, - /** @SKB_DROP_REASON_SOCKET_RCVBUFF: socket receive buff is full */ - SKB_DROP_REASON_SOCKET_RCVBUFF, - /** - * @SKB_DROP_REASON_PROTO_MEM: proto memory limition, such as udp packet - * drop out of udp_memory_allocated. 
- */ - SKB_DROP_REASON_PROTO_MEM, - /** - * @SKB_DROP_REASON_TCP_MD5NOTFOUND: no MD5 hash and one expected, - * corresponding to LINUX_MIB_TCPMD5NOTFOUND - */ - SKB_DROP_REASON_TCP_MD5NOTFOUND, - /** - * @SKB_DROP_REASON_TCP_MD5UNEXPECTED: MD5 hash and we're not expecting - * one, corresponding to LINUX_MIB_TCPMD5UNEXPECTED - */ - SKB_DROP_REASON_TCP_MD5UNEXPECTED, - /** - * @SKB_DROP_REASON_TCP_MD5FAILURE: MD5 hash and its wrong, corresponding - * to LINUX_MIB_TCPMD5FAILURE - */ - SKB_DROP_REASON_TCP_MD5FAILURE, - /** - * @SKB_DROP_REASON_SOCKET_BACKLOG: failed to add skb to socket backlog ( - * see LINUX_MIB_TCPBACKLOGDROP) - */ - SKB_DROP_REASON_SOCKET_BACKLOG, - /** @SKB_DROP_REASON_TCP_FLAGS: TCP flags invalid */ - SKB_DROP_REASON_TCP_FLAGS, - /** - * @SKB_DROP_REASON_TCP_ZEROWINDOW: TCP receive window size is zero, - * see LINUX_MIB_TCPZEROWINDOWDROP - */ - SKB_DROP_REASON_TCP_ZEROWINDOW, - /** - * @SKB_DROP_REASON_TCP_OLD_DATA: the TCP data reveived is already - * received before (spurious retrans may happened), see - * LINUX_MIB_DELAYEDACKLOST - */ - SKB_DROP_REASON_TCP_OLD_DATA, - /** - * @SKB_DROP_REASON_TCP_OVERWINDOW: the TCP data is out of window, - * the seq of the first byte exceed the right edges of receive - * window - */ - SKB_DROP_REASON_TCP_OVERWINDOW, - /** - * @SKB_DROP_REASON_TCP_OFOMERGE: the data of skb is already in the ofo - * queue, corresponding to LINUX_MIB_TCPOFOMERGE - */ - SKB_DROP_REASON_TCP_OFOMERGE, - /** - * @SKB_DROP_REASON_TCP_RFC7323_PAWS: PAWS check, corresponding to - * LINUX_MIB_PAWSESTABREJECTED - */ - SKB_DROP_REASON_TCP_RFC7323_PAWS, - /** @SKB_DROP_REASON_TCP_INVALID_SEQUENCE: Not acceptable SEQ field */ - SKB_DROP_REASON_TCP_INVALID_SEQUENCE, - /** @SKB_DROP_REASON_TCP_RESET: Invalid RST packet */ - SKB_DROP_REASON_TCP_RESET, - /** - * @SKB_DROP_REASON_TCP_INVALID_SYN: Incoming packet has unexpected - * SYN flag - */ - SKB_DROP_REASON_TCP_INVALID_SYN, - /** @SKB_DROP_REASON_TCP_CLOSE: TCP socket in CLOSE state */ - SKB_DROP_REASON_TCP_CLOSE, - /** @SKB_DROP_REASON_TCP_FASTOPEN: dropped by FASTOPEN request socket */ - SKB_DROP_REASON_TCP_FASTOPEN, - /** @SKB_DROP_REASON_TCP_OLD_ACK: TCP ACK is old, but in window */ - SKB_DROP_REASON_TCP_OLD_ACK, - /** @SKB_DROP_REASON_TCP_TOO_OLD_ACK: TCP ACK is too old */ - SKB_DROP_REASON_TCP_TOO_OLD_ACK, - /** - * @SKB_DROP_REASON_TCP_ACK_UNSENT_DATA: TCP ACK for data we haven't - * sent yet - */ - SKB_DROP_REASON_TCP_ACK_UNSENT_DATA, - /** @SKB_DROP_REASON_TCP_OFO_QUEUE_PRUNE: pruned from TCP OFO queue */ - SKB_DROP_REASON_TCP_OFO_QUEUE_PRUNE, - /** @SKB_DROP_REASON_TCP_OFO_DROP: data already in receive queue */ - SKB_DROP_REASON_TCP_OFO_DROP, - /** @SKB_DROP_REASON_IP_OUTNOROUTES: route lookup failed */ - SKB_DROP_REASON_IP_OUTNOROUTES, - /** - * @SKB_DROP_REASON_BPF_CGROUP_EGRESS: dropped by BPF_PROG_TYPE_CGROUP_SKB - * eBPF program - */ - SKB_DROP_REASON_BPF_CGROUP_EGRESS, - /** @SKB_DROP_REASON_IPV6DISABLED: IPv6 is disabled on the device */ - SKB_DROP_REASON_IPV6DISABLED, - /** @SKB_DROP_REASON_NEIGH_CREATEFAIL: failed to create neigh entry */ - SKB_DROP_REASON_NEIGH_CREATEFAIL, - /** @SKB_DROP_REASON_NEIGH_FAILED: neigh entry in failed state */ - SKB_DROP_REASON_NEIGH_FAILED, - /** @SKB_DROP_REASON_NEIGH_QUEUEFULL: arp_queue for neigh entry is full */ - SKB_DROP_REASON_NEIGH_QUEUEFULL, - /** @SKB_DROP_REASON_NEIGH_DEAD: neigh entry is dead */ - SKB_DROP_REASON_NEIGH_DEAD, - /** @SKB_DROP_REASON_TC_EGRESS: dropped in TC egress HOOK */ - SKB_DROP_REASON_TC_EGRESS, - /** - * @SKB_DROP_REASON_QDISC_DROP: 
dropped by qdisc when packet outputting ( - * failed to enqueue to current qdisc) - */ - SKB_DROP_REASON_QDISC_DROP, - /** - * @SKB_DROP_REASON_CPU_BACKLOG: failed to enqueue the skb to the per CPU - * backlog queue. This can be caused by backlog queue full (see - * netdev_max_backlog in net.rst) or RPS flow limit - */ - SKB_DROP_REASON_CPU_BACKLOG, - /** @SKB_DROP_REASON_XDP: dropped by XDP in input path */ - SKB_DROP_REASON_XDP, - /** @SKB_DROP_REASON_TC_INGRESS: dropped in TC ingress HOOK */ - SKB_DROP_REASON_TC_INGRESS, - /** @SKB_DROP_REASON_UNHANDLED_PROTO: protocol not implemented or not supported */ - SKB_DROP_REASON_UNHANDLED_PROTO, - /** @SKB_DROP_REASON_SKB_CSUM: sk_buff checksum computation error */ - SKB_DROP_REASON_SKB_CSUM, - /** @SKB_DROP_REASON_SKB_GSO_SEG: gso segmentation error */ - SKB_DROP_REASON_SKB_GSO_SEG, - /** - * @SKB_DROP_REASON_SKB_UCOPY_FAULT: failed to copy data from user space, - * e.g., via zerocopy_sg_from_iter() or skb_orphan_frags_rx() - */ - SKB_DROP_REASON_SKB_UCOPY_FAULT, - /** @SKB_DROP_REASON_DEV_HDR: device driver specific header/metadata is invalid */ - SKB_DROP_REASON_DEV_HDR, - /** - * @SKB_DROP_REASON_DEV_READY: the device is not ready to xmit/recv due to - * any of its data structure that is not up/ready/initialized, - * e.g., the IFF_UP is not set, or driver specific tun->tfiles[txq] - * is not initialized - */ - SKB_DROP_REASON_DEV_READY, - /** @SKB_DROP_REASON_FULL_RING: ring buffer is full */ - SKB_DROP_REASON_FULL_RING, - /** @SKB_DROP_REASON_NOMEM: error due to OOM */ - SKB_DROP_REASON_NOMEM, - /** - * @SKB_DROP_REASON_HDR_TRUNC: failed to trunc/extract the header from - * networking data, e.g., failed to pull the protocol header from - * frags via pskb_may_pull() - */ - SKB_DROP_REASON_HDR_TRUNC, - /** - * @SKB_DROP_REASON_TAP_FILTER: dropped by (ebpf) filter directly attached - * to tun/tap, e.g., via TUNSETFILTEREBPF - */ - SKB_DROP_REASON_TAP_FILTER, - /** - * @SKB_DROP_REASON_TAP_TXFILTER: dropped by tx filter implemented at - * tun/tap, e.g., check_filter() - */ - SKB_DROP_REASON_TAP_TXFILTER, - /** @SKB_DROP_REASON_ICMP_CSUM: ICMP checksum error */ - SKB_DROP_REASON_ICMP_CSUM, - /** - * @SKB_DROP_REASON_INVALID_PROTO: the packet doesn't follow RFC 2211, - * such as a broadcasts ICMP_TIMESTAMP - */ - SKB_DROP_REASON_INVALID_PROTO, - /** - * @SKB_DROP_REASON_IP_INADDRERRORS: host unreachable, corresponding to - * IPSTATS_MIB_INADDRERRORS - */ - SKB_DROP_REASON_IP_INADDRERRORS, - /** - * @SKB_DROP_REASON_IP_INNOROUTES: network unreachable, corresponding to - * IPSTATS_MIB_INADDRERRORS - */ - SKB_DROP_REASON_IP_INNOROUTES, - /** - * @SKB_DROP_REASON_PKT_TOO_BIG: packet size is too big (maybe exceed the - * MTU) - */ - SKB_DROP_REASON_PKT_TOO_BIG, - /** @SKB_DROP_REASON_DUP_FRAG: duplicate fragment */ - SKB_DROP_REASON_DUP_FRAG, - /** @SKB_DROP_REASON_FRAG_REASM_TIMEOUT: fragment reassembly timeout */ - SKB_DROP_REASON_FRAG_REASM_TIMEOUT, - /** - * @SKB_DROP_REASON_FRAG_TOO_FAR: ipv4 fragment too far. - * (/proc/sys/net/ipv4/ipfrag_max_dist) - */ - SKB_DROP_REASON_FRAG_TOO_FAR, +enum skb_drop_reason_subsys { + /** @SKB_DROP_REASON_SUBSYS_CORE: core drop reasons defined above */ + SKB_DROP_REASON_SUBSYS_CORE, + /** - * @SKB_DROP_REASON_TCP_MINTTL: ipv4 ttl or ipv6 hoplimit below - * the threshold (IP_MINTTL or IPV6_MINHOPCOUNT). - */ - SKB_DROP_REASON_TCP_MINTTL, - /** @SKB_DROP_REASON_IPV6_BAD_EXTHDR: Bad IPv6 extension header. 
*/ - SKB_DROP_REASON_IPV6_BAD_EXTHDR, - /** @SKB_DROP_REASON_IPV6_NDISC_FRAG: invalid frag (suppress_frag_ndisc). */ - SKB_DROP_REASON_IPV6_NDISC_FRAG, - /** @SKB_DROP_REASON_IPV6_NDISC_HOP_LIMIT: invalid hop limit. */ - SKB_DROP_REASON_IPV6_NDISC_HOP_LIMIT, - /** @SKB_DROP_REASON_IPV6_NDISC_BAD_CODE: invalid NDISC icmp6 code. */ - SKB_DROP_REASON_IPV6_NDISC_BAD_CODE, - /** @SKB_DROP_REASON_IPV6_NDISC_BAD_OPTIONS: invalid NDISC options. */ - SKB_DROP_REASON_IPV6_NDISC_BAD_OPTIONS, - /** @SKB_DROP_REASON_IPV6_NDISC_NS_OTHERHOST: NEIGHBOUR SOLICITATION - * for another host. + * @SKB_DROP_REASON_SUBSYS_MAC80211_UNUSABLE: mac80211 drop reasons + * for unusable frames, see net/mac80211/drop.h */ - SKB_DROP_REASON_IPV6_NDISC_NS_OTHERHOST, + SKB_DROP_REASON_SUBSYS_MAC80211_UNUSABLE, + /** - * @SKB_DROP_REASON_MAX: the maximum of drop reason, which shouldn't be - * used as a real 'reason' + * @SKB_DROP_REASON_SUBSYS_MAC80211_MONITOR: mac80211 drop reasons + * for frames still going to monitor, see net/mac80211/drop.h */ - SKB_DROP_REASON_MAX, + SKB_DROP_REASON_SUBSYS_MAC80211_MONITOR, + + /** @SKB_DROP_REASON_SUBSYS_NUM: number of subsystems defined */ + SKB_DROP_REASON_SUBSYS_NUM +}; + +struct drop_reason_list { + const char * const *reasons; + size_t n_reasons; }; -#define SKB_DR_INIT(name, reason) \ - enum skb_drop_reason name = SKB_DROP_REASON_##reason -#define SKB_DR(name) \ - SKB_DR_INIT(name, NOT_SPECIFIED) -#define SKB_DR_SET(name, reason) \ - (name = SKB_DROP_REASON_##reason) -#define SKB_DR_OR(name, reason) \ - do { \ - if (name == SKB_DROP_REASON_NOT_SPECIFIED || \ - name == SKB_NOT_DROPPED_YET) \ - SKB_DR_SET(name, reason); \ - } while (0) +/* Note: due to dynamic registrations, access must be under RCU */ +extern const struct drop_reason_list __rcu * +drop_reasons_by_subsys[SKB_DROP_REASON_SUBSYS_NUM]; -extern const char * const drop_reasons[]; +void drop_reasons_register_subsys(enum skb_drop_reason_subsys subsys, + const struct drop_reason_list *list); +void drop_reasons_unregister_subsys(enum skb_drop_reason_subsys subsys); #endif diff --git a/include/net/dsa.h b/include/net/dsa.h index a15f17a38eca..8903053fa5aa 100644 --- a/include/net/dsa.h +++ b/include/net/dsa.h @@ -109,16 +109,6 @@ struct dsa_device_ops { bool promisc_on_master; }; -/* This structure defines the control interfaces that are overlayed by the - * DSA layer on top of the DSA CPU/management net_device instance. This is - * used by the core net_device layer while calling various net_device_ops - * function pointers. - */ -struct dsa_netdevice_ops { - int (*ndo_eth_ioctl)(struct net_device *dev, struct ifreq *ifr, - int cmd); -}; - struct dsa_lag { struct net_device *dev; unsigned int id; @@ -317,11 +307,6 @@ struct dsa_port { */ const struct ethtool_ops *orig_ethtool_ops; - /* - * Original copy of the master netdev net_device_ops - */ - const struct dsa_netdevice_ops *netdev_ops; - /* List of MAC addresses that must be forwarded on this port. * These are only valid on CPU ports and DSA links. 
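With the core reasons moved out, dropreason.h above now carries only the subsystem plumbing: a subsystem ID lives in the upper 16 bits of the reason value (SKB_DROP_REASON_SUBSYS_MASK/SHIFT) and each subsystem registers a string table for its lower bits. The sketch below shows the registration side; the reason strings are placeholders and the mac80211 slot is used only as an example of an existing subsystem ID.

#include <linux/init.h>
#include <linux/kernel.h>
#include <net/dropreason.h>

static const char * const foo_reason_names[] = {
        "FRAME_UNUSABLE",               /* placeholder strings */
        "BAD_SIGNATURE",
};

static const struct drop_reason_list foo_reason_list = {
        .reasons   = foo_reason_names,
        .n_reasons = ARRAY_SIZE(foo_reason_names),
};

static int __init foo_init(void)
{
        /* A full reason value is then composed as
         * (subsys << SKB_DROP_REASON_SUBSYS_SHIFT) | local_index. */
        drop_reasons_register_subsys(SKB_DROP_REASON_SUBSYS_MAC80211_UNUSABLE,
                                     &foo_reason_list);
        return 0;
}

static void __exit foo_exit(void)
{
        drop_reasons_unregister_subsys(SKB_DROP_REASON_SUBSYS_MAC80211_UNUSABLE);
}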
*/ @@ -1339,42 +1324,6 @@ static inline void dsa_tag_generic_flow_dissect(const struct sk_buff *skb, #endif } -#if IS_ENABLED(CONFIG_NET_DSA) -static inline int __dsa_netdevice_ops_check(struct net_device *dev) -{ - int err = -EOPNOTSUPP; - - if (!dev->dsa_ptr) - return err; - - if (!dev->dsa_ptr->netdev_ops) - return err; - - return 0; -} - -static inline int dsa_ndo_eth_ioctl(struct net_device *dev, struct ifreq *ifr, - int cmd) -{ - const struct dsa_netdevice_ops *ops; - int err; - - err = __dsa_netdevice_ops_check(dev); - if (err) - return err; - - ops = dev->dsa_ptr->netdev_ops; - - return ops->ndo_eth_ioctl(dev, ifr, cmd); -} -#else -static inline int dsa_ndo_eth_ioctl(struct net_device *dev, struct ifreq *ifr, - int cmd) -{ - return -EOPNOTSUPP; -} -#endif - void dsa_unregister_switch(struct dsa_switch *ds); int dsa_register_switch(struct dsa_switch *ds); void dsa_switch_shutdown(struct dsa_switch *ds); diff --git a/include/net/dsa_stubs.h b/include/net/dsa_stubs.h new file mode 100644 index 000000000000..361811750a54 --- /dev/null +++ b/include/net/dsa_stubs.h @@ -0,0 +1,48 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * include/net/dsa_stubs.h - Stubs for the Distributed Switch Architecture framework + */ + +#include <linux/mutex.h> +#include <linux/netdevice.h> +#include <linux/net_tstamp.h> +#include <net/dsa.h> + +#if IS_ENABLED(CONFIG_NET_DSA) + +extern const struct dsa_stubs *dsa_stubs; + +struct dsa_stubs { + int (*master_hwtstamp_validate)(struct net_device *dev, + const struct kernel_hwtstamp_config *config, + struct netlink_ext_ack *extack); +}; + +static inline int dsa_master_hwtstamp_validate(struct net_device *dev, + const struct kernel_hwtstamp_config *config, + struct netlink_ext_ack *extack) +{ + if (!netdev_uses_dsa(dev)) + return 0; + + /* rtnl_lock() is a sufficient guarantee, because as long as + * netdev_uses_dsa() returns true, the dsa_core module is still + * registered, and so, dsa_unregister_stubs() couldn't have run. + * For netdev_uses_dsa() to start returning false, it would imply that + * dsa_master_teardown() has executed, which requires rtnl_lock(). + */ + ASSERT_RTNL(); + + return dsa_stubs->master_hwtstamp_validate(dev, config, extack); +} + +#else + +static inline int dsa_master_hwtstamp_validate(struct net_device *dev, + const struct kernel_hwtstamp_config *config, + struct netlink_ext_ack *extack) +{ + return 0; +} + +#endif diff --git a/include/net/dst.h b/include/net/dst.h index d67fda89cd0f..78884429deed 100644 --- a/include/net/dst.h +++ b/include/net/dst.h @@ -16,6 +16,7 @@ #include <linux/bug.h> #include <linux/jiffies.h> #include <linux/refcount.h> +#include <linux/rcuref.h> #include <net/neighbour.h> #include <asm/processor.h> #include <linux/indirect_call_wrapper.h> @@ -61,23 +62,36 @@ struct dst_entry { unsigned short trailer_len; /* space to reserve at tail */ /* - * __refcnt wants to be on a different cache line from + * __rcuref wants to be on a different cache line from * input/output/ops or performance tanks badly */ #ifdef CONFIG_64BIT - atomic_t __refcnt; /* 64-bit offset 64 */ + rcuref_t __rcuref; /* 64-bit offset 64 */ #endif int __use; unsigned long lastuse; - struct lwtunnel_state *lwtstate; struct rcu_head rcu_head; short error; short __pad; __u32 tclassid; #ifndef CONFIG_64BIT - atomic_t __refcnt; /* 32-bit offset 64 */ + struct lwtunnel_state *lwtstate; + rcuref_t __rcuref; /* 32-bit offset 64 */ #endif netdevice_tracker dev_tracker; + + /* + * Used by rtable and rt6_info. 
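dsa_stubs.h above replaces the removed dsa_netdevice_ops indirection: the core now calls a single stub, dsa_master_hwtstamp_validate(), before applying a timestamping config on a DSA master, and the stub itself asserts that rtnl is held. The fragment below is an assumed caller-side sketch (not the actual net/core ioctl code) showing where the check slots in.

#include <linux/net_tstamp.h>
#include <linux/netdevice.h>
#include <net/dsa_stubs.h>

/* Hypothetical ioctl-path helper, called under rtnl_lock(): reject
 * configs that would break DSA ports behind this master before the
 * master's driver ever sees them. */
static int foo_dev_set_hwtstamp(struct net_device *dev,
                                struct kernel_hwtstamp_config *cfg,
                                struct netlink_ext_ack *extack)
{
        int err;

        err = dsa_master_hwtstamp_validate(dev, cfg, extack);
        if (err)
                return err;

        /* ...hand cfg to the driver's hwtstamp ioctl as before... */
        return 0;
}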
Moves lwtstate into the next cache + * line on 64bit so that lwtstate does not cause false sharing with + * __rcuref under contention of __rcuref. This also puts the + * frequently accessed members of rtable and rt6_info out of the + * __rcuref cache line. + */ + struct list_head rt_uncached; + struct uncached_list *rt_uncached_list; +#ifdef CONFIG_64BIT + struct lwtunnel_state *lwtstate; +#endif }; struct dst_metrics { @@ -225,10 +239,10 @@ static inline void dst_hold(struct dst_entry *dst) { /* * If your kernel compilation stops here, please check - * the placement of __refcnt in struct dst_entry + * the placement of __rcuref in struct dst_entry */ - BUILD_BUG_ON(offsetof(struct dst_entry, __refcnt) & 63); - WARN_ON(atomic_inc_not_zero(&dst->__refcnt) == 0); + BUILD_BUG_ON(offsetof(struct dst_entry, __rcuref) & 63); + WARN_ON(!rcuref_get(&dst->__rcuref)); } static inline void dst_use_noref(struct dst_entry *dst, unsigned long time) @@ -292,7 +306,7 @@ static inline void skb_dst_copy(struct sk_buff *nskb, const struct sk_buff *oskb */ static inline bool dst_hold_safe(struct dst_entry *dst) { - return atomic_inc_not_zero(&dst->__refcnt); + return rcuref_get(&dst->__rcuref); } /** diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h index 5ccf52ef8809..85b2281576ed 100644 --- a/include/net/flow_dissector.h +++ b/include/net/flow_dissector.h @@ -14,7 +14,9 @@ struct sk_buff; /** * struct flow_dissector_key_control: - * @thoff: Transport header offset + * @thoff: Transport header offset + * @addr_type: Type of key. One of FLOW_DISSECTOR_KEY_* + * @flags: Key flags. Any of FLOW_DIS_(IS_FRAGMENT|FIRST_FRAGENCAPSULATION) */ struct flow_dissector_key_control { u16 thoff; @@ -36,8 +38,9 @@ enum flow_dissect_ret { /** * struct flow_dissector_key_basic: - * @n_proto: Network header protocol (eg. IPv4/IPv6) + * @n_proto: Network header protocol (eg. IPv4/IPv6) * @ip_proto: Transport header protocol (eg. 
TCP/UDP) + * @padding: Unused */ struct flow_dissector_key_basic { __be16 n_proto; @@ -135,6 +138,7 @@ struct flow_dissector_key_tipc { * struct flow_dissector_key_addrs: * @v4addrs: IPv4 addresses * @v6addrs: IPv6 addresses + * @tipckey: TIPC key */ struct flow_dissector_key_addrs { union { @@ -145,14 +149,12 @@ struct flow_dissector_key_addrs { }; /** - * flow_dissector_key_arp: - * @ports: Operation, source and target addresses for an ARP header - * for Ethernet hardware addresses and IPv4 protocol addresses - * sip: Sender IP address - * tip: Target IP address - * op: Operation - * sha: Sender hardware address - * tpa: Target hardware address + * struct flow_dissector_key_arp: + * @sip: Sender IP address + * @tip: Target IP address + * @op: Operation + * @sha: Sender hardware address + * @tha: Target hardware address */ struct flow_dissector_key_arp { __u32 sip; @@ -163,10 +165,10 @@ struct flow_dissector_key_arp { }; /** - * flow_dissector_key_tp_ports: - * @ports: port numbers of Transport header - * src: source port number - * dst: destination port number + * struct flow_dissector_key_ports: + * @ports: port numbers of Transport header + * @src: source port number + * @dst: destination port number */ struct flow_dissector_key_ports { union { @@ -195,10 +197,10 @@ struct flow_dissector_key_ports_range { }; /** - * flow_dissector_key_icmp: - * type: ICMP type - * code: ICMP code - * id: session identifier + * struct flow_dissector_key_icmp: + * @type: ICMP type + * @code: ICMP code + * @id: Session identifier */ struct flow_dissector_key_icmp { struct { diff --git a/include/net/fou.h b/include/net/fou.h index 80f56e275b08..824eb4b231fd 100644 --- a/include/net/fou.h +++ b/include/net/fou.h @@ -17,4 +17,6 @@ int __fou_build_header(struct sk_buff *skb, struct ip_tunnel_encap *e, int __gue_build_header(struct sk_buff *skb, struct ip_tunnel_encap *e, u8 *protocol, __be16 *sport, int type); +int register_fou_bpf(void); + #endif diff --git a/include/net/handshake.h b/include/net/handshake.h new file mode 100644 index 000000000000..3352b1ab43b3 --- /dev/null +++ b/include/net/handshake.h @@ -0,0 +1,43 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Generic netlink HANDSHAKE service. + * + * Author: Chuck Lever <chuck.lever@oracle.com> + * + * Copyright (c) 2023, Oracle and/or its affiliates. 
+ */ + +#ifndef _NET_HANDSHAKE_H +#define _NET_HANDSHAKE_H + +enum { + TLS_NO_KEYRING = 0, + TLS_NO_PEERID = 0, + TLS_NO_CERT = 0, + TLS_NO_PRIVKEY = 0, +}; + +typedef void (*tls_done_func_t)(void *data, int status, + key_serial_t peerid); + +struct tls_handshake_args { + struct socket *ta_sock; + tls_done_func_t ta_done; + void *ta_data; + unsigned int ta_timeout_ms; + key_serial_t ta_keyring; + key_serial_t ta_my_cert; + key_serial_t ta_my_privkey; + unsigned int ta_num_peerids; + key_serial_t ta_my_peerids[5]; +}; + +int tls_client_hello_anon(const struct tls_handshake_args *args, gfp_t flags); +int tls_client_hello_x509(const struct tls_handshake_args *args, gfp_t flags); +int tls_client_hello_psk(const struct tls_handshake_args *args, gfp_t flags); +int tls_server_hello_x509(const struct tls_handshake_args *args, gfp_t flags); +int tls_server_hello_psk(const struct tls_handshake_args *args, gfp_t flags); + +bool tls_handshake_cancel(struct sock *sk); + +#endif /* _NET_HANDSHAKE_H */ diff --git a/include/net/ieee80211_radiotap.h b/include/net/ieee80211_radiotap.h index 598f53d2a3a0..f980a72f2ce6 100644 --- a/include/net/ieee80211_radiotap.h +++ b/include/net/ieee80211_radiotap.h @@ -1,6 +1,6 @@ /* * Copyright (c) 2017 Intel Deutschland GmbH - * Copyright (c) 2018-2019, 2021 Intel Corporation + * Copyright (c) 2018-2019, 2021-2022 Intel Corporation * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above @@ -82,11 +82,14 @@ enum ieee80211_radiotap_presence { IEEE80211_RADIOTAP_HE_MU = 24, IEEE80211_RADIOTAP_ZERO_LEN_PSDU = 26, IEEE80211_RADIOTAP_LSIG = 27, + IEEE80211_RADIOTAP_TLV = 28, /* valid in every it_present bitmap, even vendor namespaces */ IEEE80211_RADIOTAP_RADIOTAP_NAMESPACE = 29, IEEE80211_RADIOTAP_VENDOR_NAMESPACE = 30, - IEEE80211_RADIOTAP_EXT = 31 + IEEE80211_RADIOTAP_EXT = 31, + IEEE80211_RADIOTAP_EHT_USIG = 33, + IEEE80211_RADIOTAP_EHT = 34, }; /* for IEEE80211_RADIOTAP_FLAGS */ @@ -360,6 +363,214 @@ enum ieee80211_radiotap_zero_len_psdu_type { IEEE80211_RADIOTAP_ZERO_LEN_PSDU_VENDOR = 0xff, }; +struct ieee80211_radiotap_tlv { + __le16 type; + __le16 len; + u8 data[]; +} __packed; + +/** + * struct ieee80211_radiotap_vendor_content - radiotap vendor data content + * @oui: radiotap vendor namespace OUI + * @oui_subtype: radiotap vendor sub namespace + * @vendor_type: radiotap vendor type + * @reserved: should always be set to zero (to avoid leaking memory) + * @data: the actual vendor namespace data + */ +struct ieee80211_radiotap_vendor_content { + u8 oui[3]; + u8 oui_subtype; + __le16 vendor_type; + __le16 reserved; + u8 data[]; +} __packed; + +/** + * struct ieee80211_radiotap_vendor_tlv - vendor radiotap data information + * @type: should always be set to IEEE80211_RADIOTAP_VENDOR_NAMESPACE + * @len: length of data + * @content: vendor content see @ieee80211_radiotap_vendor_content + */ +struct ieee80211_radiotap_vendor_tlv { + __le16 type; /* IEEE80211_RADIOTAP_VENDOR_NAMESPACE */ + __le16 len; + struct ieee80211_radiotap_vendor_content content; +}; + +/* ieee80211_radiotap_eht_usig - content of U-SIG tlv (type 33) + * see www.radiotap.org/fields/U-SIG.html for details + */ +struct ieee80211_radiotap_eht_usig { + __le32 common; + __le32 value; + __le32 mask; +} __packed; + +/* ieee80211_radiotap_eht - content of EHT tlv (type 34) + * see www.radiotap.org/fields/EHT.html for details + */ +struct ieee80211_radiotap_eht { + __le32 known; + __le32 data[9]; + __le32 user_info[]; 
+} __packed; + +/* Known field for EHT TLV + * The ending defines for what the field applies as following + * O - OFDMA (including TB), M - MU-MIMO, S - EHT sounding. + */ +enum ieee80211_radiotap_eht_known { + IEEE80211_RADIOTAP_EHT_KNOWN_SPATIAL_REUSE = 0x00000002, + IEEE80211_RADIOTAP_EHT_KNOWN_GI = 0x00000004, + IEEE80211_RADIOTAP_EHT_KNOWN_EHT_LTF = 0x00000010, + IEEE80211_RADIOTAP_EHT_KNOWN_LDPC_EXTRA_SYM_OM = 0x00000020, + IEEE80211_RADIOTAP_EHT_KNOWN_PRE_PADD_FACOR_OM = 0x00000040, + IEEE80211_RADIOTAP_EHT_KNOWN_PE_DISAMBIGUITY_OM = 0x00000080, + IEEE80211_RADIOTAP_EHT_KNOWN_DISREGARD_O = 0x00000100, + IEEE80211_RADIOTAP_EHT_KNOWN_DISREGARD_S = 0x00000200, + IEEE80211_RADIOTAP_EHT_KNOWN_CRC1 = 0x00002000, + IEEE80211_RADIOTAP_EHT_KNOWN_TAIL1 = 0x00004000, + IEEE80211_RADIOTAP_EHT_KNOWN_CRC2_O = 0x00008000, + IEEE80211_RADIOTAP_EHT_KNOWN_TAIL2_O = 0x00010000, + IEEE80211_RADIOTAP_EHT_KNOWN_NSS_S = 0x00020000, + IEEE80211_RADIOTAP_EHT_KNOWN_BEAMFORMED_S = 0x00040000, + IEEE80211_RADIOTAP_EHT_KNOWN_NR_NON_OFDMA_USERS_M = 0x00080000, + IEEE80211_RADIOTAP_EHT_KNOWN_ENCODING_BLOCK_CRC_M = 0x00100000, + IEEE80211_RADIOTAP_EHT_KNOWN_ENCODING_BLOCK_TAIL_M = 0x00200000, + IEEE80211_RADIOTAP_EHT_KNOWN_RU_MRU_SIZE_OM = 0x00400000, + IEEE80211_RADIOTAP_EHT_KNOWN_RU_MRU_INDEX_OM = 0x00800000, + IEEE80211_RADIOTAP_EHT_KNOWN_RU_ALLOC_TB_FMT = 0x01000000, + IEEE80211_RADIOTAP_EHT_KNOWN_PRIMARY_80 = 0x02000000, +}; + +enum ieee80211_radiotap_eht_data { + /* Data 0 */ + IEEE80211_RADIOTAP_EHT_DATA0_SPATIAL_REUSE = 0x00000078, + IEEE80211_RADIOTAP_EHT_DATA0_GI = 0x00000180, + IEEE80211_RADIOTAP_EHT_DATA0_LTF = 0x00000600, + IEEE80211_RADIOTAP_EHT_DATA0_EHT_LTF = 0x00003800, + IEEE80211_RADIOTAP_EHT_DATA0_LDPC_EXTRA_SYM_OM = 0x00004000, + IEEE80211_RADIOTAP_EHT_DATA0_PRE_PADD_FACOR_OM = 0x00018000, + IEEE80211_RADIOTAP_EHT_DATA0_PE_DISAMBIGUITY_OM = 0x00020000, + IEEE80211_RADIOTAP_EHT_DATA0_DISREGARD_S = 0x000c0000, + IEEE80211_RADIOTAP_EHT_DATA0_DISREGARD_O = 0x003c0000, + IEEE80211_RADIOTAP_EHT_DATA0_CRC1_O = 0x03c00000, + IEEE80211_RADIOTAP_EHT_DATA0_TAIL1_O = 0xfc000000, + /* Data 1 */ + IEEE80211_RADIOTAP_EHT_DATA1_RU_SIZE = 0x0000001f, + IEEE80211_RADIOTAP_EHT_DATA1_RU_INDEX = 0x00001fe0, + IEEE80211_RADIOTAP_EHT_DATA1_RU_ALLOC_CC_1_1_1 = 0x003fe000, + IEEE80211_RADIOTAP_EHT_DATA1_RU_ALLOC_CC_1_1_1_KNOWN = 0x00400000, + IEEE80211_RADIOTAP_EHT_DATA1_PRIMARY_80 = 0xc0000000, + /* Data 2 */ + IEEE80211_RADIOTAP_EHT_DATA2_RU_ALLOC_CC_2_1_1 = 0x000001ff, + IEEE80211_RADIOTAP_EHT_DATA2_RU_ALLOC_CC_2_1_1_KNOWN = 0x00000200, + IEEE80211_RADIOTAP_EHT_DATA2_RU_ALLOC_CC_1_1_2 = 0x0007fc00, + IEEE80211_RADIOTAP_EHT_DATA2_RU_ALLOC_CC_1_1_2_KNOWN = 0x00080000, + IEEE80211_RADIOTAP_EHT_DATA2_RU_ALLOC_CC_2_1_2 = 0x1ff00000, + IEEE80211_RADIOTAP_EHT_DATA2_RU_ALLOC_CC_2_1_2_KNOWN = 0x20000000, + /* Data 3 */ + IEEE80211_RADIOTAP_EHT_DATA3_RU_ALLOC_CC_1_2_1 = 0x000001ff, + IEEE80211_RADIOTAP_EHT_DATA3_RU_ALLOC_CC_1_2_1_KNOWN = 0x00000200, + IEEE80211_RADIOTAP_EHT_DATA3_RU_ALLOC_CC_2_2_1 = 0x0007fc00, + IEEE80211_RADIOTAP_EHT_DATA3_RU_ALLOC_CC_2_2_1_KNOWN = 0x00080000, + IEEE80211_RADIOTAP_EHT_DATA3_RU_ALLOC_CC_1_2_2 = 0x1ff00000, + IEEE80211_RADIOTAP_EHT_DATA3_RU_ALLOC_CC_1_2_2_KNOWN = 0x20000000, + /* Data 4 */ + IEEE80211_RADIOTAP_EHT_DATA4_RU_ALLOC_CC_2_2_2 = 0x000001ff, + IEEE80211_RADIOTAP_EHT_DATA4_RU_ALLOC_CC_2_2_2_KNOWN = 0x00000200, + IEEE80211_RADIOTAP_EHT_DATA4_RU_ALLOC_CC_1_2_3 = 0x0007fc00, + IEEE80211_RADIOTAP_EHT_DATA4_RU_ALLOC_CC_1_2_3_KNOWN = 0x00080000, + 
IEEE80211_RADIOTAP_EHT_DATA4_RU_ALLOC_CC_2_2_3 = 0x1ff00000, + IEEE80211_RADIOTAP_EHT_DATA4_RU_ALLOC_CC_2_2_3_KNOWN = 0x20000000, + /* Data 5 */ + IEEE80211_RADIOTAP_EHT_DATA5_RU_ALLOC_CC_1_2_4 = 0x000001ff, + IEEE80211_RADIOTAP_EHT_DATA5_RU_ALLOC_CC_1_2_4_KNOWN = 0x00000200, + IEEE80211_RADIOTAP_EHT_DATA5_RU_ALLOC_CC_2_2_4 = 0x0007fc00, + IEEE80211_RADIOTAP_EHT_DATA5_RU_ALLOC_CC_2_2_4_KNOWN = 0x00080000, + IEEE80211_RADIOTAP_EHT_DATA5_RU_ALLOC_CC_1_2_5 = 0x1ff00000, + IEEE80211_RADIOTAP_EHT_DATA5_RU_ALLOC_CC_1_2_5_KNOWN = 0x20000000, + /* Data 6 */ + IEEE80211_RADIOTAP_EHT_DATA6_RU_ALLOC_CC_2_2_5 = 0x000001ff, + IEEE80211_RADIOTAP_EHT_DATA6_RU_ALLOC_CC_2_2_5_KNOWN = 0x00000200, + IEEE80211_RADIOTAP_EHT_DATA6_RU_ALLOC_CC_1_2_6 = 0x0007fc00, + IEEE80211_RADIOTAP_EHT_DATA6_RU_ALLOC_CC_1_2_6_KNOWN = 0x00080000, + IEEE80211_RADIOTAP_EHT_DATA6_RU_ALLOC_CC_2_2_6 = 0x1ff00000, + IEEE80211_RADIOTAP_EHT_DATA6_RU_ALLOC_CC_2_2_6_KNOWN = 0x20000000, + /* Data 7 */ + IEEE80211_RADIOTAP_EHT_DATA7_CRC2_O = 0x0000000f, + IEEE80211_RADIOTAP_EHT_DATA7_TAIL_2_O = 0x000003f0, + IEEE80211_RADIOTAP_EHT_DATA7_NSS_S = 0x0000f000, + IEEE80211_RADIOTAP_EHT_DATA7_BEAMFORMED_S = 0x00010000, + IEEE80211_RADIOTAP_EHT_DATA7_NUM_OF_NON_OFDMA_USERS = 0x000e0000, + IEEE80211_RADIOTAP_EHT_DATA7_USER_ENCODING_BLOCK_CRC = 0x00f00000, + IEEE80211_RADIOTAP_EHT_DATA7_USER_ENCODING_BLOCK_TAIL = 0x3f000000, + /* Data 8 */ + IEEE80211_RADIOTAP_EHT_DATA8_RU_ALLOC_TB_FMT_PS_160 = 0x00000001, + IEEE80211_RADIOTAP_EHT_DATA8_RU_ALLOC_TB_FMT_B0 = 0x00000002, + IEEE80211_RADIOTAP_EHT_DATA8_RU_ALLOC_TB_FMT_B7_B1 = 0x000001fc, +}; + +enum ieee80211_radiotap_eht_user_info { + IEEE80211_RADIOTAP_EHT_USER_INFO_STA_ID_KNOWN = 0x00000001, + IEEE80211_RADIOTAP_EHT_USER_INFO_MCS_KNOWN = 0x00000002, + IEEE80211_RADIOTAP_EHT_USER_INFO_CODING_KNOWN = 0x00000004, + IEEE80211_RADIOTAP_EHT_USER_INFO_NSS_KNOWN_O = 0x00000010, + IEEE80211_RADIOTAP_EHT_USER_INFO_BEAMFORMING_KNOWN_O = 0x00000020, + IEEE80211_RADIOTAP_EHT_USER_INFO_SPATIAL_CONFIG_KNOWN_M = 0x00000040, + IEEE80211_RADIOTAP_EHT_USER_INFO_DATA_FOR_USER = 0x00000080, + IEEE80211_RADIOTAP_EHT_USER_INFO_STA_ID = 0x0007ff00, + IEEE80211_RADIOTAP_EHT_USER_INFO_CODING = 0x00080000, + IEEE80211_RADIOTAP_EHT_USER_INFO_MCS = 0x00f00000, + IEEE80211_RADIOTAP_EHT_USER_INFO_NSS_O = 0x0f000000, + IEEE80211_RADIOTAP_EHT_USER_INFO_BEAMFORMING_O = 0x20000000, + IEEE80211_RADIOTAP_EHT_USER_INFO_SPATIAL_CONFIG_M = 0x3f000000, + IEEE80211_RADIOTAP_EHT_USER_INFO_RESEVED_c0000000 = 0xc0000000, +}; + +enum ieee80211_radiotap_eht_usig_common { + IEEE80211_RADIOTAP_EHT_USIG_COMMON_PHY_VER_KNOWN = 0x00000001, + IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW_KNOWN = 0x00000002, + IEEE80211_RADIOTAP_EHT_USIG_COMMON_UL_DL_KNOWN = 0x00000004, + IEEE80211_RADIOTAP_EHT_USIG_COMMON_BSS_COLOR_KNOWN = 0x00000008, + IEEE80211_RADIOTAP_EHT_USIG_COMMON_TXOP_KNOWN = 0x00000010, + IEEE80211_RADIOTAP_EHT_USIG_COMMON_BAD_USIG_CRC = 0x00000020, + IEEE80211_RADIOTAP_EHT_USIG_COMMON_PHY_VER = 0x00007000, + IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW = 0x00038000, + IEEE80211_RADIOTAP_EHT_USIG_COMMON_UL_DL = 0x00040000, + IEEE80211_RADIOTAP_EHT_USIG_COMMON_BSS_COLOR = 0x01f80000, + IEEE80211_RADIOTAP_EHT_USIG_COMMON_TXOP = 0xfe000000, +}; + +enum ieee80211_radiotap_eht_usig_mu { + /* MU-USIG-1 */ + IEEE80211_RADIOTAP_EHT_USIG1_MU_B20_B24_DISREGARD = 0x0000001f, + IEEE80211_RADIOTAP_EHT_USIG1_MU_B25_VALIDATE = 0x00000020, + /* MU-USIG-2 */ + IEEE80211_RADIOTAP_EHT_USIG2_MU_B0_B1_PPDU_TYPE = 0x000000c0, + IEEE80211_RADIOTAP_EHT_USIG2_MU_B2_VALIDATE = 
0x00000100, + IEEE80211_RADIOTAP_EHT_USIG2_MU_B3_B7_PUNCTURED_INFO = 0x00003e00, + IEEE80211_RADIOTAP_EHT_USIG2_MU_B8_VALIDATE = 0x00004000, + IEEE80211_RADIOTAP_EHT_USIG2_MU_B9_B10_SIG_MCS = 0x00018000, + IEEE80211_RADIOTAP_EHT_USIG2_MU_B11_B15_EHT_SIG_SYMBOLS = 0x003e0000, + IEEE80211_RADIOTAP_EHT_USIG2_MU_B16_B19_CRC = 0x03c00000, + IEEE80211_RADIOTAP_EHT_USIG2_MU_B20_B25_TAIL = 0xfc000000, +}; + +enum ieee80211_radiotap_eht_usig_tb { + /* TB-USIG-1 */ + IEEE80211_RADIOTAP_EHT_USIG1_TB_B20_B25_DISREGARD = 0x0000001f, + + /* TB-USIG-2 */ + IEEE80211_RADIOTAP_EHT_USIG2_TB_B0_B1_PPDU_TYPE = 0x000000c0, + IEEE80211_RADIOTAP_EHT_USIG2_TB_B2_VALIDATE = 0x00000100, + IEEE80211_RADIOTAP_EHT_USIG2_TB_B3_B6_SPATIAL_REUSE_1 = 0x00001e00, + IEEE80211_RADIOTAP_EHT_USIG2_TB_B7_B10_SPATIAL_REUSE_2 = 0x0001e000, + IEEE80211_RADIOTAP_EHT_USIG2_TB_B11_B15_DISREGARD = 0x003e0000, + IEEE80211_RADIOTAP_EHT_USIG2_TB_B16_B19_CRC = 0x03c00000, + IEEE80211_RADIOTAP_EHT_USIG2_TB_B20_B25_TAIL = 0xfc000000, +}; + /** * ieee80211_get_radiotap_len - get radiotap header length */ diff --git a/include/net/inet_frag.h b/include/net/inet_frag.h index b23ddec3cd5c..325ad893f624 100644 --- a/include/net/inet_frag.h +++ b/include/net/inet_frag.h @@ -7,7 +7,7 @@ #include <linux/in6.h> #include <linux/rbtree_types.h> #include <linux/refcount.h> -#include <net/dropreason.h> +#include <net/dropreason-core.h> /* Per netns frag queues directory */ struct fqdir { diff --git a/include/net/inet_sock.h b/include/net/inet_sock.h index 51857117ac09..caa20a905531 100644 --- a/include/net/inet_sock.h +++ b/include/net/inet_sock.h @@ -305,10 +305,7 @@ static inline struct sock *skb_to_full_sk(const struct sk_buff *skb) return sk_to_full_sk(skb->sk); } -static inline struct inet_sock *inet_sk(const struct sock *sk) -{ - return (struct inet_sock *)sk; -} +#define inet_sk(ptr) container_of_const(ptr, struct inet_sock, sk) static inline void __inet_sk_copy_descendant(struct sock *sk_to, const struct sock *sk_from, diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h index 6268963d9599..05e6f756feaf 100644 --- a/include/net/ip6_fib.h +++ b/include/net/ip6_fib.h @@ -217,9 +217,6 @@ struct rt6_info { struct inet6_dev *rt6i_idev; u32 rt6i_flags; - struct list_head rt6i_uncached; - struct uncached_list *rt6i_uncached_list; - /* more non-fragment space at head required */ unsigned short rt6i_nfheader_len; }; @@ -472,13 +469,10 @@ void rt6_get_prefsrc(const struct rt6_info *rt, struct in6_addr *addr) rcu_read_lock(); from = rcu_dereference(rt->from); - if (from) { + if (from) *addr = from->fib6_prefsrc.addr; - } else { - struct in6_addr in6_zero = {}; - - *addr = in6_zero; - } + else + *addr = in6addr_any; rcu_read_unlock(); } diff --git a/include/net/ip6_route.h b/include/net/ip6_route.h index 81ee387a1fc4..3556595ce59a 100644 --- a/include/net/ip6_route.h +++ b/include/net/ip6_route.h @@ -100,7 +100,7 @@ static inline struct dst_entry *ip6_route_output(struct net *net, static inline void ip6_rt_put_flags(struct rt6_info *rt, int flags) { if (!(flags & RT6_LOOKUP_F_DST_NOREF) || - !list_empty(&rt->rt6i_uncached)) + !list_empty(&rt->dst.rt_uncached)) ip6_rt_put(rt); } diff --git a/include/net/ip_tunnels.h b/include/net/ip_tunnels.h index fca357679816..ed4b6ad3fcac 100644 --- a/include/net/ip_tunnels.h +++ b/include/net/ip_tunnels.h @@ -57,6 +57,13 @@ struct ip_tunnel_key { __u8 flow_flags; }; +struct ip_tunnel_encap { + u16 type; + u16 flags; + __be16 sport; + __be16 dport; +}; + /* Flags for ip_tunnel_info mode. 
*/ #define IP_TUNNEL_INFO_TX 0x01 /* represents tx tunnel parameters */ #define IP_TUNNEL_INFO_IPV6 0x02 /* key contains IPv6 addresses */ @@ -67,8 +74,15 @@ struct ip_tunnel_key { GENMASK((sizeof_field(struct ip_tunnel_info, \ options_len) * BITS_PER_BYTE) - 1, 0) +#define ip_tunnel_info_opts(info) \ + _Generic(info, \ + const struct ip_tunnel_info * : ((const void *)((info) + 1)),\ + struct ip_tunnel_info * : ((void *)((info) + 1))\ + ) + struct ip_tunnel_info { struct ip_tunnel_key key; + struct ip_tunnel_encap encap; #ifdef CONFIG_DST_CACHE struct dst_cache dst_cache; #endif @@ -86,13 +100,6 @@ struct ip_tunnel_6rd_parm { }; #endif -struct ip_tunnel_encap { - u16 type; - u16 flags; - __be16 sport; - __be16 dport; -}; - struct ip_tunnel_prl_entry { struct ip_tunnel_prl_entry __rcu *next; __be32 addr; @@ -293,6 +300,7 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn, __be32 remote, __be32 local, __be32 key); +void ip_tunnel_md_udp_encap(struct sk_buff *skb, struct ip_tunnel_info *info); int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb, const struct tnl_ptk_info *tpi, struct metadata_dst *tun_dst, bool log_ecn_error); @@ -371,22 +379,23 @@ static inline int ip_encap_hlen(struct ip_tunnel_encap *e) return hlen; } -static inline int ip_tunnel_encap(struct sk_buff *skb, struct ip_tunnel *t, +static inline int ip_tunnel_encap(struct sk_buff *skb, + struct ip_tunnel_encap *e, u8 *protocol, struct flowi4 *fl4) { const struct ip_tunnel_encap_ops *ops; int ret = -EINVAL; - if (t->encap.type == TUNNEL_ENCAP_NONE) + if (e->type == TUNNEL_ENCAP_NONE) return 0; - if (t->encap.type >= MAX_IPTUN_ENCAP_OPS) + if (e->type >= MAX_IPTUN_ENCAP_OPS) return -EINVAL; rcu_read_lock(); - ops = rcu_dereference(iptun_encaps[t->encap.type]); + ops = rcu_dereference(iptun_encaps[e->type]); if (likely(ops && ops->build_header)) - ret = ops->build_header(skb, &t->encap, protocol, fl4); + ret = ops->build_header(skb, e, protocol, fl4); rcu_read_unlock(); return ret; @@ -485,11 +494,6 @@ static inline void iptunnel_xmit_stats(struct net_device *dev, int pkt_len) } } -static inline void *ip_tunnel_info_opts(struct ip_tunnel_info *info) -{ - return info + 1; -} - static inline void ip_tunnel_info_opts_get(void *to, const struct ip_tunnel_info *info) { diff --git a/include/net/ip_vs.h b/include/net/ip_vs.h index 6d71a5ff52df..ff406ef4fd4a 100644 --- a/include/net/ip_vs.h +++ b/include/net/ip_vs.h @@ -265,26 +265,6 @@ static inline const char *ip_vs_dbg_addr(int af, char *buf, size_t buf_len, pr_err(msg, ##__VA_ARGS__); \ } while (0) -#ifdef CONFIG_IP_VS_DEBUG -#define EnterFunction(level) \ - do { \ - if (level <= ip_vs_get_debug_level()) \ - printk(KERN_DEBUG \ - pr_fmt("Enter: %s, %s line %i\n"), \ - __func__, __FILE__, __LINE__); \ - } while (0) -#define LeaveFunction(level) \ - do { \ - if (level <= ip_vs_get_debug_level()) \ - printk(KERN_DEBUG \ - pr_fmt("Leave: %s, %s line %i\n"), \ - __func__, __FILE__, __LINE__); \ - } while (0) -#else -#define EnterFunction(level) do {} while (0) -#define LeaveFunction(level) do {} while (0) -#endif - /* The port number of FTP service (in network order). 
*/ #define FTPPORT cpu_to_be16(21) #define FTPDATA cpu_to_be16(20) @@ -604,7 +584,7 @@ struct ip_vs_conn { spinlock_t lock; /* lock for state transition */ volatile __u16 state; /* state info */ volatile __u16 old_state; /* old state, to be used for - * state transition triggerd + * state transition triggered * synchronization */ __u32 fwmark; /* Fire wall mark from skb */ @@ -630,8 +610,10 @@ struct ip_vs_conn { */ struct ip_vs_app *app; /* bound ip_vs_app object */ void *app_data; /* Application private data */ - struct ip_vs_seq in_seq; /* incoming seq. struct */ - struct ip_vs_seq out_seq; /* outgoing seq. struct */ + struct_group(sync_conn_opt, + struct ip_vs_seq in_seq; /* incoming seq. struct */ + struct ip_vs_seq out_seq; /* outgoing seq. struct */ + ); const struct ip_vs_pe *pe; char *pe_data; @@ -653,7 +635,7 @@ struct ip_vs_service_user_kern { u16 protocol; union nf_inet_addr addr; /* virtual ip address */ __be16 port; - u32 fwmark; /* firwall mark of service */ + u32 fwmark; /* firewall mark of service */ /* virtual service options */ char *sched_name; @@ -1054,7 +1036,7 @@ struct netns_ipvs { struct ipvs_sync_daemon_cfg bcfg; /* Backup Configuration */ /* net name space ptr */ struct net *net; /* Needed by timer routines */ - /* Number of heterogeneous destinations, needed becaus heterogeneous + /* Number of heterogeneous destinations, needed because heterogeneous * are not supported when synchronization is enabled. */ unsigned int mixed_address_family_dests; diff --git a/include/net/mac80211.h b/include/net/mac80211.h index 219fd15893b0..ac0370e76874 100644 --- a/include/net/mac80211.h +++ b/include/net/mac80211.h @@ -534,6 +534,7 @@ struct ieee80211_fils_discovery { * This structure keeps information about a BSS (and an association * to that BSS) that can change during the lifetime of the BSS. * + * @vif: reference to owning VIF * @addr: (link) address used locally * @link_id: link ID, or 0 for non-MLO * @htc_trig_based_pkt_ext: default PE in 4us units, if BSS supports HE @@ -656,6 +657,9 @@ struct ieee80211_fils_discovery { * write-protected by sdata_lock and local->mtx so holding either is fine * for read access. * @color_change_color: the bss color that will be used after the change. + * @ht_ldpc: in AP mode, indicates interface has HT LDPC capability. + * @vht_ldpc: in AP mode, indicates interface has VHT LDPC capability. + * @he_ldpc: in AP mode, indicates interface has HE LDPC capability. 
* @vht_su_beamformer: in AP mode, does this BSS support operation as an VHT SU * beamformer * @vht_su_beamformee: in AP mode, does this BSS support operation as an VHT SU @@ -673,8 +677,16 @@ struct ieee80211_fils_discovery { * @he_full_ul_mumimo: does this BSS support the reception (AP) or transmission * (non-AP STA) of an HE TB PPDU on an RU that spans the entire PPDU * bandwidth + * @eht_su_beamformer: in AP-mode, does this BSS enable operation as an EHT SU + * beamformer + * @eht_su_beamformee: in AP-mode, does this BSS enable operation as an EHT SU + * beamformee + * @eht_mu_beamformer: in AP-mode, does this BSS enable operation as an EHT MU + * beamformer */ struct ieee80211_bss_conf { + struct ieee80211_vif *vif; + const u8 *bssid; unsigned int link_id; u8 addr[ETH_ALEN] __aligned(2); @@ -750,6 +762,9 @@ struct ieee80211_bss_conf { bool color_change_active; u8 color_change_color; + bool ht_ldpc; + bool vht_ldpc; + bool he_ldpc; bool vht_su_beamformer; bool vht_su_beamformee; bool vht_mu_beamformer; @@ -758,6 +773,9 @@ struct ieee80211_bss_conf { bool he_su_beamformee; bool he_mu_beamformer; bool he_full_ul_mumimo; + bool eht_su_beamformer; + bool eht_su_beamformee; + bool eht_mu_beamformer; }; /** @@ -1372,9 +1390,12 @@ ieee80211_tx_info_clear_status(struct ieee80211_tx_info *info) * subframes share the same sequence number. Reported subframes can be * either regular MSDU or singly A-MSDUs. Subframes must not be * interleaved with other frames. - * @RX_FLAG_RADIOTAP_VENDOR_DATA: This frame contains vendor-specific - * radiotap data in the skb->data (before the frame) as described by - * the &struct ieee80211_vendor_radiotap. + * @RX_FLAG_RADIOTAP_TLV_AT_END: This frame contains radiotap TLVs in the + * skb->data (before the 802.11 header). + * If used, the SKB's mac_header pointer must be set to point + * to the 802.11 header after the TLVs, and any padding added after TLV + * data to align to 4 must be cleared by the driver putting the TLVs + * in the skb. * @RX_FLAG_ALLOW_SAME_PN: Allow the same PN as same packet before. * This is used for AMSDU subframes which can have the same PN as * the first subframe. @@ -1426,7 +1447,7 @@ enum mac80211_rx_flags { RX_FLAG_ONLY_MONITOR = BIT(17), RX_FLAG_SKIP_MONITOR = BIT(18), RX_FLAG_AMSDU_MORE = BIT(19), - RX_FLAG_RADIOTAP_VENDOR_DATA = BIT(20), + RX_FLAG_RADIOTAP_TLV_AT_END = BIT(20), RX_FLAG_MIC_STRIPPED = BIT(21), RX_FLAG_ALLOW_SAME_PN = BIT(22), RX_FLAG_ICV_STRIPPED = BIT(23), @@ -1567,39 +1588,6 @@ ieee80211_rx_status_to_khz(struct ieee80211_rx_status *rx_status) } /** - * struct ieee80211_vendor_radiotap - vendor radiotap data information - * @present: presence bitmap for this vendor namespace - * (this could be extended in the future if any vendor needs more - * bits, the radiotap spec does allow for that) - * @align: radiotap vendor namespace alignment. This defines the needed - * alignment for the @data field below, not for the vendor namespace - * description itself (which has a fixed 2-byte alignment) - * Must be a power of two, and be set to at least 1! - * @oui: radiotap vendor namespace OUI - * @subns: radiotap vendor sub namespace - * @len: radiotap vendor sub namespace skip length, if alignment is done - * then that's added to this, i.e. this is only the length of the - * @data field. 
- * @pad: number of bytes of padding after the @data, this exists so that - * the skb data alignment can be preserved even if the data has odd - * length - * @data: the actual vendor namespace data - * - * This struct, including the vendor data, goes into the skb->data before - * the 802.11 header. It's split up in mac80211 using the align/oui/subns - * data. - */ -struct ieee80211_vendor_radiotap { - u32 present; - u8 align; - u8 oui[3]; - u8 subns; - u8 pad; - u16 len; - u8 data[]; -} __packed; - -/** * enum ieee80211_conf_flags - configuration flags * * Flags to define PHY configuration options @@ -3841,6 +3829,12 @@ struct ieee80211_prep_tx_info { * the station. See @sta_pre_rcu_remove if needed. * This callback can sleep. * + * @link_add_debugfs: Drivers can use this callback to add debugfs files + * when a link is added to a mac80211 vif. This callback should be within + * a CONFIG_MAC80211_DEBUGFS conditional. This callback can sleep. + * For non-MLO the callback will be called once for the default bss_conf + * with the vif's directory rather than a separate subdirectory. + * * @sta_add_debugfs: Drivers can use this callback to add debugfs files * when a station is added to mac80211's station list. This callback * should be within a CONFIG_MAC80211_DEBUGFS conditional. This @@ -3956,6 +3950,10 @@ struct ieee80211_prep_tx_info { * Note that vif can be NULL. * The callback can sleep. * + * @flush_sta: Flush or drop all pending frames from the hardware queue(s) for + * the given station, as it's about to be removed. + * The callback can sleep. + * * @channel_switch: Drivers that need (or want) to offload the channel * switch operation for CSAs received from the AP may implement this * callback. They must then call ieee80211_chswitch_done() to indicate @@ -4230,6 +4228,13 @@ struct ieee80211_prep_tx_info { * Note that a sta can also be inserted or removed with valid links, * i.e. passed to @sta_add/@sta_state with sta->valid_links not zero. * In fact, cannot change from having valid_links and not having them. + * @set_hw_timestamp: Enable/disable HW timestamping of TM/FTM frames. This is + * not restored at HW reset by mac80211 so drivers need to take care of + * that. + * @net_setup_tc: Called from .ndo_setup_tc in order to prepare hardware + * flow offloading for flows originating from the vif. + * Note that the driver must not assume that the vif driver_data is valid + * at this point, since the callback can be called during netdev teardown. 
*/ struct ieee80211_ops { void (*tx)(struct ieee80211_hw *hw, @@ -4319,6 +4324,10 @@ struct ieee80211_ops { int (*sta_remove)(struct ieee80211_hw *hw, struct ieee80211_vif *vif, struct ieee80211_sta *sta); #ifdef CONFIG_MAC80211_DEBUGFS + void (*link_add_debugfs)(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct ieee80211_bss_conf *link_conf, + struct dentry *dir); void (*sta_add_debugfs)(struct ieee80211_hw *hw, struct ieee80211_vif *vif, struct ieee80211_sta *sta, @@ -4410,6 +4419,8 @@ struct ieee80211_ops { #endif void (*flush)(struct ieee80211_hw *hw, struct ieee80211_vif *vif, u32 queues, bool drop); + void (*flush_sta)(struct ieee80211_hw *hw, struct ieee80211_vif *vif, + struct ieee80211_sta *sta); void (*channel_switch)(struct ieee80211_hw *hw, struct ieee80211_vif *vif, struct ieee80211_channel_switch *ch_switch); @@ -4589,6 +4600,14 @@ struct ieee80211_ops { struct ieee80211_vif *vif, struct ieee80211_sta *sta, u16 old_links, u16 new_links); + int (*set_hw_timestamp)(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct cfg80211_set_hw_timestamp *hwts); + int (*net_setup_tc)(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct net_device *dev, + enum tc_setup_type type, + void *type_data); }; /** @@ -5197,26 +5216,6 @@ void ieee80211_tx_status_irqsafe(struct ieee80211_hw *hw, struct sk_buff *skb); /** - * ieee80211_tx_status_8023 - transmit status callback for 802.3 frame format - * - * Call this function for all transmitted data frames after their transmit - * completion. This callback should only be called for data frames which - * are using driver's (or hardware's) offload capability of encap/decap - * 802.11 frames. - * - * This function may not be called in IRQ context. Calls to this function - * for a single hardware must be synchronized against each other and all - * calls in the same tx status family. - * - * @hw: the hardware the frame was transmitted by - * @vif: the interface for which the frame was transmitted - * @skb: the frame that was transmitted, owned by mac80211 after this call - */ -void ieee80211_tx_status_8023(struct ieee80211_hw *hw, - struct ieee80211_vif *vif, - struct sk_buff *skb); - -/** * ieee80211_report_low_ack - report non-responding station * * When operating in AP-mode, call this function to report a non-responding @@ -5273,6 +5272,74 @@ ieee80211_beacon_get_template(struct ieee80211_hw *hw, unsigned int link_id); /** + * ieee80211_beacon_get_template_ema_index - EMA beacon template generation + * @hw: pointer obtained from ieee80211_alloc_hw(). + * @vif: &struct ieee80211_vif pointer from the add_interface callback. + * @offs: &struct ieee80211_mutable_offsets pointer to struct that will + * receive the offsets that may be updated by the driver. + * @link_id: the link id to which the beacon belongs (or 0 for a non-MLD AP). + * @ema_index: index of the beacon in the EMA set. + * + * This function follows the same rules as ieee80211_beacon_get_template() + * but returns a beacon template which includes multiple BSSID element at the + * requested index. + * + * Return: The beacon template. %NULL indicates the end of EMA templates. + */ +struct sk_buff * +ieee80211_beacon_get_template_ema_index(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + struct ieee80211_mutable_offsets *offs, + unsigned int link_id, u8 ema_index); + +/** + * struct ieee80211_ema_beacons - List of EMA beacons + * @cnt: count of EMA beacons. + * + * @bcn: array of EMA beacons. 
+ * @bcn.skb: the skb containing this specific beacon + * @bcn.offs: &struct ieee80211_mutable_offsets pointer to struct that will + * receive the offsets that may be updated by the driver. + */ +struct ieee80211_ema_beacons { + u8 cnt; + struct { + struct sk_buff *skb; + struct ieee80211_mutable_offsets offs; + } bcn[]; +}; + +/** + * ieee80211_beacon_get_template_ema_list - EMA beacon template generation + * @hw: pointer obtained from ieee80211_alloc_hw(). + * @vif: &struct ieee80211_vif pointer from the add_interface callback. + * @link_id: the link id to which the beacon belongs (or 0 for a non-MLD AP) + * + * This function follows the same rules as ieee80211_beacon_get_template() + * but allocates and returns a pointer to list of all beacon templates required + * to cover all profiles in the multiple BSSID set. Each template includes only + * one multiple BSSID element. + * + * Driver must call ieee80211_beacon_free_ema_list() to free the memory. + * + * Return: EMA beacon templates of type struct ieee80211_ema_beacons *. + * %NULL on error. + */ +struct ieee80211_ema_beacons * +ieee80211_beacon_get_template_ema_list(struct ieee80211_hw *hw, + struct ieee80211_vif *vif, + unsigned int link_id); + +/** + * ieee80211_beacon_free_ema_list - free an EMA beacon template list + * @ema_beacons: list of EMA beacons of type &struct ieee80211_ema_beacons pointers. + * + * This function will free a list previously acquired by calling + * ieee80211_beacon_get_template_ema_list() + */ +void ieee80211_beacon_free_ema_list(struct ieee80211_ema_beacons *ema_beacons); + +/** * ieee80211_beacon_get_tim - beacon generation function * @hw: pointer obtained from ieee80211_alloc_hw(). * @vif: &struct ieee80211_vif pointer from the add_interface callback. @@ -5985,6 +6052,20 @@ void ieee80211_queue_delayed_work(struct ieee80211_hw *hw, unsigned long delay); /** + * ieee80211_refresh_tx_agg_session_timer - Refresh a tx agg session timer. + * @sta: the station for which to start a BA session + * @tid: the TID to BA on. + * + * This function allows low level driver to refresh tx agg session timer + * to maintain BA session, the session level will still be managed by the + * mac80211. + * + * Note: must be called in an RCU critical section. + */ +void ieee80211_refresh_tx_agg_session_timer(struct ieee80211_sta *sta, + u16 tid); + +/** * ieee80211_start_tx_ba_session - Start a tx Block Ack session. * @sta: the station for which to start a BA session * @tid: the TID to BA on. diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h index 56189e4252da..96c120160f15 100644 --- a/include/net/mana/gdma.h +++ b/include/net/mana/gdma.h @@ -145,6 +145,7 @@ struct gdma_general_req { }; /* HW DATA */ #define GDMA_MESSAGE_V1 1 +#define GDMA_MESSAGE_V2 2 struct gdma_general_resp { struct gdma_resp_hdr hdr; @@ -354,6 +355,9 @@ struct gdma_context { struct gdma_resource msix_resource; struct gdma_irq_context *irq_contexts; + /* L2 MTU */ + u16 adapter_mtu; + /* This maps a CQ index to the queue structure. 
*/ unsigned int max_num_cqs; struct gdma_queue **cq_table; diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h index 3bb579962a14..cd386aa7c7cc 100644 --- a/include/net/mana/mana.h +++ b/include/net/mana/mana.h @@ -36,10 +36,8 @@ enum TRI_STATE { #define COMP_ENTRY_SIZE 64 -#define ADAPTER_MTU_SIZE 1500 -#define MAX_FRAME_SIZE (ADAPTER_MTU_SIZE + 14) - #define RX_BUFFERS_PER_QUEUE 512 +#define MANA_RX_DATA_ALIGN 64 #define MAX_SEND_BUFFERS_PER_QUEUE 256 @@ -48,6 +46,10 @@ enum TRI_STATE { #define MAX_PORTS_IN_MANA_DEV 256 +/* Update this count whenever the respective structures are changed */ +#define MANA_STATS_RX_COUNT 5 +#define MANA_STATS_TX_COUNT 11 + struct mana_stats_rx { u64 packets; u64 bytes; @@ -61,6 +63,14 @@ struct mana_stats_tx { u64 packets; u64 bytes; u64 xdp_xmit; + u64 tso_packets; + u64 tso_bytes; + u64 tso_inner_packets; + u64 tso_inner_bytes; + u64 short_pkt_fmt; + u64 long_pkt_fmt; + u64 csum_partial; + u64 mana_map_err; struct u64_stats_sync syncp; }; @@ -270,7 +280,6 @@ struct mana_recv_buf_oob { struct gdma_wqe_request wqe_req; void *buf_va; - dma_addr_t buf_dma_addr; /* SGL of the buffer going to be sent has part of the work request. */ u32 num_sge; @@ -283,6 +292,11 @@ struct mana_recv_buf_oob { struct gdma_posted_wqe_info wqe_inf; }; +#define MANA_RXBUF_PAD (SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) \ + + ETH_HLEN) + +#define MANA_XDP_MTU_MAX (PAGE_SIZE - MANA_RXBUF_PAD - XDP_PACKET_HEADROOM) + struct mana_rxq { struct gdma_queue *gdma_rq; /* Cache the gdma receive queue id */ @@ -292,6 +306,8 @@ struct mana_rxq { u32 rxq_idx; u32 datasize; + u32 alloc_size; + u32 headroom; mana_handle_t rxobj; @@ -310,7 +326,7 @@ struct mana_rxq { struct bpf_prog __rcu *bpf_prog; struct xdp_rxq_info xdp_rxq; - struct page *xdp_save_page; + void *xdp_save_va; /* for reusing */ bool xdp_flush; int xdp_rc; /* XDP redirect return code */ @@ -331,6 +347,12 @@ struct mana_tx_qp { struct mana_ethtool_stats { u64 stop_queue; u64 wake_queue; + u64 tx_cqes; + u64 tx_cqe_err; + u64 tx_cqe_unknown_type; + u64 rx_cqes; + u64 rx_coalesced_err; + u64 rx_cqe_unknown_type; }; struct mana_context { @@ -369,6 +391,14 @@ struct mana_port_context { /* This points to an array of num_queues of RQ pointers. */ struct mana_rxq **rxqs; + /* pre-allocated rx buffer array */ + void **rxbufs_pre; + dma_addr_t *das_pre; + int rxbpre_total; + u32 rxbpre_datasize; + u32 rxbpre_alloc_size; + u32 rxbpre_headroom; + struct bpf_prog *bpf_prog; /* Create num_queues EQs, SQs, SQ-CQs, RQs and RQ-CQs, respectively. 
*/ @@ -468,6 +498,11 @@ struct mana_query_device_cfg_resp { u16 max_num_vports; u16 reserved; u32 max_num_eqs; + + /* response v2: */ + u16 adapter_mtu; + u16 reserved2; + u32 reserved3; }; /* HW DATA */ /* Query vPort Configuration */ diff --git a/include/net/ndisc.h b/include/net/ndisc.h index 07e5168cdaf9..52eae0943433 100644 --- a/include/net/ndisc.h +++ b/include/net/ndisc.h @@ -395,11 +395,11 @@ static inline struct neighbour *__ipv6_neigh_lookup(struct net_device *dev, cons { struct neighbour *n; - rcu_read_lock_bh(); + rcu_read_lock(); n = __ipv6_neigh_lookup_noref(dev, pkey); if (n && !refcount_inc_not_zero(&n->refcnt)) n = NULL; - rcu_read_unlock_bh(); + rcu_read_unlock(); return n; } @@ -409,10 +409,10 @@ static inline void __ipv6_confirm_neigh(struct net_device *dev, { struct neighbour *n; - rcu_read_lock_bh(); + rcu_read_lock(); n = __ipv6_neigh_lookup_noref(dev, pkey); neigh_confirm(n); - rcu_read_unlock_bh(); + rcu_read_unlock(); } static inline void __ipv6_confirm_neigh_stub(struct net_device *dev, @@ -420,10 +420,10 @@ static inline void __ipv6_confirm_neigh_stub(struct net_device *dev, { struct neighbour *n; - rcu_read_lock_bh(); + rcu_read_lock(); n = __ipv6_neigh_lookup_noref_stub(dev, pkey); neigh_confirm(n); - rcu_read_unlock_bh(); + rcu_read_unlock(); } /* uses ipv6_stub and is meant for use outside of IPv6 core */ diff --git a/include/net/neighbour.h b/include/net/neighbour.h index 2f2a6023fb0e..3fa5774bddac 100644 --- a/include/net/neighbour.h +++ b/include/net/neighbour.h @@ -299,14 +299,14 @@ static inline struct neighbour *___neigh_lookup_noref( const void *pkey, struct net_device *dev) { - struct neigh_hash_table *nht = rcu_dereference_bh(tbl->nht); + struct neigh_hash_table *nht = rcu_dereference(tbl->nht); struct neighbour *n; u32 hash_val; hash_val = hash(pkey, dev, nht->hash_rnd) >> (32 - nht->hash_shift); - for (n = rcu_dereference_bh(nht->hash_buckets[hash_val]); + for (n = rcu_dereference(nht->hash_buckets[hash_val]); n != NULL; - n = rcu_dereference_bh(n->next)) { + n = rcu_dereference(n->next)) { if (n->dev == dev && key_eq(n, pkey)) return n; } @@ -336,8 +336,6 @@ void neigh_table_init(int index, struct neigh_table *tbl); int neigh_table_clear(int index, struct neigh_table *tbl); struct neighbour *neigh_lookup(struct neigh_table *tbl, const void *pkey, struct net_device *dev); -struct neighbour *neigh_lookup_nodev(struct neigh_table *tbl, struct net *net, - const void *pkey); struct neighbour *__neigh_create(struct neigh_table *tbl, const void *pkey, struct net_device *dev, bool want_ref); static inline struct neighbour *neigh_create(struct neigh_table *tbl, @@ -466,7 +464,7 @@ static __always_inline int neigh_event_send_probe(struct neighbour *neigh, if (READ_ONCE(neigh->used) != now) WRITE_ONCE(neigh->used, now); - if (!(neigh->nud_state & (NUD_CONNECTED | NUD_DELAY | NUD_PROBE))) + if (!(READ_ONCE(neigh->nud_state) & (NUD_CONNECTED | NUD_DELAY | NUD_PROBE))) return __neigh_event_send(neigh, skb, immediate_ok); return 0; } diff --git a/include/net/netdev_queues.h b/include/net/netdev_queues.h new file mode 100644 index 000000000000..d68b0a483431 --- /dev/null +++ b/include/net/netdev_queues.h @@ -0,0 +1,173 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _LINUX_NET_QUEUES_H +#define _LINUX_NET_QUEUES_H + +#include <linux/netdevice.h> + +/** + * DOC: Lockless queue stopping / waking helpers. 
+ * + * The netif_txq_maybe_stop() and __netif_txq_completed_wake() + * macros are designed to safely implement stopping + * and waking netdev queues without full lock protection. + * + * We assume that there can be no concurrent stop attempts and no concurrent + * wake attempts. The try-stop should happen from the xmit handler, + * while wake up should be triggered from NAPI poll context. + * The two may run concurrently (single producer, single consumer). + * + * The try-stop side is expected to run from the xmit handler and therefore + * it does not reschedule Tx (netif_tx_start_queue() instead of + * netif_tx_wake_queue()). Uses of the ``stop`` macros outside of the xmit + * handler may lead to xmit queue being enabled but not run. + * The waking side does not have similar context restrictions. + * + * The macros guarantee that rings will not remain stopped if there's + * space available, but they do *not* prevent false wake ups when + * the ring is full! Drivers should check for ring full at the start + * for the xmit handler. + * + * All descriptor ring indexes (and other relevant shared state) must + * be updated before invoking the macros. + */ + +#define netif_txq_try_stop(txq, get_desc, start_thrs) \ + ({ \ + int _res; \ + \ + netif_tx_stop_queue(txq); \ + /* Producer index and stop bit must be visible \ + * to consumer before we recheck. \ + * Pairs with a barrier in __netif_txq_completed_wake(). \ + */ \ + smp_mb__after_atomic(); \ + \ + /* We need to check again in a case another \ + * CPU has just made room available. \ + */ \ + _res = 0; \ + if (unlikely(get_desc >= start_thrs)) { \ + netif_tx_start_queue(txq); \ + _res = -1; \ + } \ + _res; \ + }) \ + +/** + * netif_txq_maybe_stop() - locklessly stop a Tx queue, if needed + * @txq: struct netdev_queue to stop/start + * @get_desc: get current number of free descriptors (see requirements below!) + * @stop_thrs: minimal number of available descriptors for queue to be left + * enabled + * @start_thrs: minimal number of descriptors to re-enable the queue, can be + * equal to @stop_thrs or higher to avoid frequent waking + * + * All arguments may be evaluated multiple times, beware of side effects. + * @get_desc must be a formula or a function call, it must always + * return up-to-date information when evaluated! + * Expected to be used from ndo_start_xmit, see the comment on top of the file. + * + * Returns: + * 0 if the queue was stopped + * 1 if the queue was left enabled + * -1 if the queue was re-enabled (raced with waking) + */ +#define netif_txq_maybe_stop(txq, get_desc, stop_thrs, start_thrs) \ + ({ \ + int _res; \ + \ + _res = 1; \ + if (unlikely(get_desc < stop_thrs)) \ + _res = netif_txq_try_stop(txq, get_desc, start_thrs); \ + _res; \ + }) \ + +/* Variant of netdev_tx_completed_queue() which guarantees smp_mb() if + * @bytes != 0, regardless of kernel config. + */ +static inline void +netdev_txq_completed_mb(struct netdev_queue *dev_queue, + unsigned int pkts, unsigned int bytes) +{ + if (IS_ENABLED(CONFIG_BQL)) + netdev_tx_completed_queue(dev_queue, pkts, bytes); + else if (bytes) + smp_mb(); +} + +/** + * __netif_txq_completed_wake() - locklessly wake a Tx queue, if needed + * @txq: struct netdev_queue to stop/start + * @pkts: number of packets completed + * @bytes: number of bytes completed + * @get_desc: get current number of free descriptors (see requirements below!) 
+ * @start_thrs: minimal number of descriptors to re-enable the queue + * @down_cond: down condition, predicate indicating that the queue should + * not be woken up even if descriptors are available + * + * All arguments may be evaluated multiple times. + * @get_desc must be a formula or a function call, it must always + * return up-to-date information when evaluated! + * Reports completed pkts/bytes to BQL. + * + * Returns: + * 0 if the queue was woken up + * 1 if the queue was already enabled (or disabled but @down_cond is true) + * -1 if the queue was left unchanged (@start_thrs not reached) + */ +#define __netif_txq_completed_wake(txq, pkts, bytes, \ + get_desc, start_thrs, down_cond) \ + ({ \ + int _res; \ + \ + /* Report to BQL and piggy back on its barrier. \ + * Barrier makes sure that anybody stopping the queue \ + * after this point sees the new consumer index. \ + * Pairs with barrier in netif_txq_try_stop(). \ + */ \ + netdev_txq_completed_mb(txq, pkts, bytes); \ + \ + _res = -1; \ + if (pkts && likely(get_desc > start_thrs)) { \ + _res = 1; \ + if (unlikely(netif_tx_queue_stopped(txq)) && \ + !(down_cond)) { \ + netif_tx_wake_queue(txq); \ + _res = 0; \ + } \ + } \ + _res; \ + }) + +#define netif_txq_completed_wake(txq, pkts, bytes, get_desc, start_thrs) \ + __netif_txq_completed_wake(txq, pkts, bytes, get_desc, start_thrs, false) + +/* subqueue variants follow */ + +#define netif_subqueue_try_stop(dev, idx, get_desc, start_thrs) \ + ({ \ + struct netdev_queue *txq; \ + \ + txq = netdev_get_tx_queue(dev, idx); \ + netif_txq_try_stop(txq, get_desc, start_thrs); \ + }) + +#define netif_subqueue_maybe_stop(dev, idx, get_desc, stop_thrs, start_thrs) \ + ({ \ + struct netdev_queue *txq; \ + \ + txq = netdev_get_tx_queue(dev, idx); \ + netif_txq_maybe_stop(txq, get_desc, stop_thrs, start_thrs); \ + }) + +#define netif_subqueue_completed_wake(dev, idx, pkts, bytes, \ + get_desc, start_thrs) \ + ({ \ + struct netdev_queue *txq; \ + \ + txq = netdev_get_tx_queue(dev, idx); \ + netif_txq_completed_wake(txq, pkts, bytes, \ + get_desc, start_thrs); \ + }) + +#endif diff --git a/include/net/netfilter/nf_bpf_link.h b/include/net/netfilter/nf_bpf_link.h new file mode 100644 index 000000000000..6c984b0ea838 --- /dev/null +++ b/include/net/netfilter/nf_bpf_link.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +struct bpf_nf_ctx { + const struct nf_hook_state *state; + struct sk_buff *skb; +}; + +#if IS_ENABLED(CONFIG_NETFILTER_BPF_LINK) +int bpf_nf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog); +#else +static inline int bpf_nf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog) +{ + return -EOPNOTSUPP; +} +#endif diff --git a/include/net/netfilter/nf_conntrack_core.h b/include/net/netfilter/nf_conntrack_core.h index 71d1269fe4d4..3384859a8921 100644 --- a/include/net/netfilter/nf_conntrack_core.h +++ b/include/net/netfilter/nf_conntrack_core.h @@ -89,7 +89,11 @@ static inline void __nf_ct_set_timeout(struct nf_conn *ct, u64 timeout) { if (timeout > INT_MAX) timeout = INT_MAX; - WRITE_ONCE(ct->timeout, nfct_time_stamp + (u32)timeout); + + if (nf_ct_is_confirmed(ct)) + WRITE_ONCE(ct->timeout, nfct_time_stamp + (u32)timeout); + else + ct->timeout = (u32)timeout; } int __nf_ct_change_timeout(struct nf_conn *ct, u64 cta_timeout); diff --git a/include/net/netfilter/nf_nat_redirect.h b/include/net/netfilter/nf_nat_redirect.h index 2418653a66db..279380de904c 100644 --- a/include/net/netfilter/nf_nat_redirect.h +++ b/include/net/netfilter/nf_nat_redirect.h @@ 
-6,8 +6,7 @@ #include <uapi/linux/netfilter/nf_nat.h> unsigned int -nf_nat_redirect_ipv4(struct sk_buff *skb, - const struct nf_nat_ipv4_multi_range_compat *mr, +nf_nat_redirect_ipv4(struct sk_buff *skb, const struct nf_nat_range2 *range, unsigned int hooknum); unsigned int nf_nat_redirect_ipv6(struct sk_buff *skb, const struct nf_nat_range2 *range, diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h index 1b8e305bb54a..3ed21d2d5659 100644 --- a/include/net/netfilter/nf_tables.h +++ b/include/net/netfilter/nf_tables.h @@ -1046,6 +1046,18 @@ struct nft_rule_dp { __attribute__((aligned(__alignof__(struct nft_expr)))); }; +struct nft_rule_dp_last { + struct nft_rule_dp end; /* end of nft_rule_blob marker */ + struct rcu_head h; /* call_rcu head */ + struct nft_rule_blob *blob; /* ptr to free via call_rcu */ + const struct nft_chain *chain; /* for nftables tracing */ +}; + +static inline const struct nft_rule_dp *nft_rule_next(const struct nft_rule_dp *rule) +{ + return (void *)rule + sizeof(*rule) + rule->dlen; +} + struct nft_rule_blob { unsigned long size; unsigned char data[] @@ -1197,6 +1209,7 @@ unsigned int nft_do_chain(struct nft_pktinfo *pkt, void *priv); * @genmask: generation mask * @afinfo: address family info * @name: name of the table + * @validate_state: internal, set when transaction adds jumps */ struct nft_table { struct list_head list; @@ -1215,6 +1228,7 @@ struct nft_table { char *name; u16 udlen; u8 *udata; + u8 validate_state; }; static inline bool nft_table_has_owner(const struct nft_table *table) @@ -1394,11 +1408,7 @@ void nft_unregister_flowtable_type(struct nf_flowtable_type *type); * @type: event type (enum nft_trace_types) * @skbid: hash of skb to be used as trace id * @packet_dumped: packet headers sent in a previous traceinfo message - * @pkt: pktinfo currently processed * @basechain: base chain currently processed - * @chain: chain currently processed - * @rule: rule that was evaluated - * @verdict: verdict given by rule */ struct nft_traceinfo { bool trace; @@ -1406,18 +1416,16 @@ struct nft_traceinfo { bool packet_dumped; enum nft_trace_types type:8; u32 skbid; - const struct nft_pktinfo *pkt; const struct nft_base_chain *basechain; - const struct nft_chain *chain; - const struct nft_rule_dp *rule; - const struct nft_verdict *verdict; }; void nft_trace_init(struct nft_traceinfo *info, const struct nft_pktinfo *pkt, - const struct nft_verdict *verdict, const struct nft_chain *basechain); -void nft_trace_notify(struct nft_traceinfo *info); +void nft_trace_notify(const struct nft_pktinfo *pkt, + const struct nft_verdict *verdict, + const struct nft_rule_dp *rule, + struct nft_traceinfo *info); #define MODULE_ALIAS_NFT_CHAIN(family, name) \ MODULE_ALIAS("nft-chain-" __stringify(family) "-" name) @@ -1601,6 +1609,8 @@ struct nft_trans_chain { struct nft_stats __percpu *stats; u8 policy; u32 chain_id; + struct nft_base_chain *basechain; + struct list_head hook_list; }; #define nft_trans_chain_update(trans) \ @@ -1613,6 +1623,10 @@ struct nft_trans_chain { (((struct nft_trans_chain *)trans->data)->policy) #define nft_trans_chain_id(trans) \ (((struct nft_trans_chain *)trans->data)->chain_id) +#define nft_trans_basechain(trans) \ + (((struct nft_trans_chain *)trans->data)->basechain) +#define nft_trans_chain_hooks(trans) \ + (((struct nft_trans_chain *)trans->data)->hook_list) struct nft_trans_table { bool update; @@ -1688,7 +1702,6 @@ struct nftables_pernet { struct mutex commit_mutex; u64 table_handle; unsigned int base_seq; - u8 
validate_state; }; extern unsigned int nf_tables_net_id; diff --git a/include/net/netns/ipv6.h b/include/net/netns/ipv6.h index b4af4837d80b..3cceb3e9320b 100644 --- a/include/net/netns/ipv6.h +++ b/include/net/netns/ipv6.h @@ -55,6 +55,7 @@ struct netns_sysctl_ipv6 { u64 ioam6_id_wide; bool skip_notify_on_dev_down; u8 fib_notify_on_flag_change; + u8 icmpv6_error_anycast_as_unicast; }; struct netns_ipv6 { diff --git a/include/net/nexthop.h b/include/net/nexthop.h index 28085b995ddc..9fa291a04621 100644 --- a/include/net/nexthop.h +++ b/include/net/nexthop.h @@ -498,7 +498,7 @@ static inline struct fib6_nh *nexthop_fib6_nh(struct nexthop *nh) } /* Variant of nexthop_fib6_nh(). - * Caller should either hold rcu_read_lock_bh(), or RTNL. + * Caller should either hold rcu_read_lock(), or RTNL. */ static inline struct fib6_nh *nexthop_fib6_nh_bh(struct nexthop *nh) { @@ -507,13 +507,13 @@ static inline struct fib6_nh *nexthop_fib6_nh_bh(struct nexthop *nh) if (nh->is_group) { struct nh_group *nh_grp; - nh_grp = rcu_dereference_bh_rtnl(nh->nh_grp); + nh_grp = rcu_dereference_rtnl(nh->nh_grp); nh = nexthop_mpath_select(nh_grp, 0); if (!nh) return NULL; } - nhi = rcu_dereference_bh_rtnl(nh->nh_info); + nhi = rcu_dereference_rtnl(nh->nh_info); if (nhi->family == AF_INET6) return &nhi->fib6_nh; diff --git a/include/net/page_pool.h b/include/net/page_pool.h index ddfa0b328677..c8ec2f34722b 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -77,6 +77,7 @@ struct page_pool_params { unsigned int pool_size; int nid; /* Numa node id to allocate from pages from */ struct device *dev; /* device, for DMA pre-mapping purposes */ + struct napi_struct *napi; /* Sole consumer of pages, otherwise NULL */ enum dma_data_direction dma_dir; /* DMA mapping direction */ unsigned int max_len; /* max DMA sync memory size */ unsigned int offset; /* DMA addr offset */ @@ -239,13 +240,14 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool) return pool->p.dma_dir; } -bool page_pool_return_skb_page(struct page *page); +bool page_pool_return_skb_page(struct page *page, bool napi_safe); struct page_pool *page_pool_create(const struct page_pool_params *params); struct xdp_mem_info; #ifdef CONFIG_PAGE_POOL +void page_pool_unlink_napi(struct page_pool *pool); void page_pool_destroy(struct page_pool *pool); void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *), struct xdp_mem_info *mem); @@ -253,6 +255,10 @@ void page_pool_release_page(struct page_pool *pool, struct page *page); void page_pool_put_page_bulk(struct page_pool *pool, void **data, int count); #else +static inline void page_pool_unlink_napi(struct page_pool *pool) +{ +} + static inline void page_pool_destroy(struct page_pool *pool) { } diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h index 2016839991a4..f436688b6efc 100644 --- a/include/net/pkt_sched.h +++ b/include/net/pkt_sched.h @@ -64,7 +64,6 @@ static inline psched_time_t psched_get_time(void) } struct qdisc_watchdog { - u64 last_expires; struct hrtimer timer; struct Qdisc *qdisc; }; @@ -167,11 +166,13 @@ struct tc_mqprio_caps { struct tc_mqprio_qopt_offload { /* struct tc_mqprio_qopt must always be the first element */ struct tc_mqprio_qopt qopt; + struct netlink_ext_ack *extack; u16 mode; u16 shaper; u32 flags; u64 min_rate[TC_QOPT_MAX_QUEUE]; u64 max_rate[TC_QOPT_MAX_QUEUE]; + unsigned long preemptible_tcs; }; struct tc_taprio_caps { @@ -194,6 +195,7 @@ struct tc_taprio_sched_entry { struct tc_taprio_qopt_offload { struct 
tc_mqprio_qopt_offload mqprio; + struct netlink_ext_ack *extack; u8 enable; ktime_t base_time; u64 cycle_time; diff --git a/include/net/raw.h b/include/net/raw.h index 3af5289fdead..32a61481a253 100644 --- a/include/net/raw.h +++ b/include/net/raw.h @@ -22,7 +22,7 @@ extern struct proto raw_prot; extern struct raw_hashinfo raw_v4_hashinfo; -bool raw_v4_match(struct net *net, struct sock *sk, unsigned short num, +bool raw_v4_match(struct net *net, const struct sock *sk, unsigned short num, __be32 raddr, __be32 laddr, int dif, int sdif); int raw_abort(struct sock *sk, int err); @@ -83,10 +83,7 @@ struct raw_sock { u32 ipmr_table; }; -static inline struct raw_sock *raw_sk(const struct sock *sk) -{ - return (struct raw_sock *)sk; -} +#define raw_sk(ptr) container_of_const(ptr, struct raw_sock, inet.sk) static inline bool raw_sk_bound_dev_eq(struct net *net, int bound_dev_if, int dif, int sdif) diff --git a/include/net/rawv6.h b/include/net/rawv6.h index bc70909625f6..82810cbe3798 100644 --- a/include/net/rawv6.h +++ b/include/net/rawv6.h @@ -6,7 +6,7 @@ #include <net/raw.h> extern struct raw_hashinfo raw_v6_hashinfo; -bool raw_v6_match(struct net *net, struct sock *sk, unsigned short num, +bool raw_v6_match(struct net *net, const struct sock *sk, unsigned short num, const struct in6_addr *loc_addr, const struct in6_addr *rmt_addr, int dif, int sdif); diff --git a/include/net/route.h b/include/net/route.h index fe00b0a2e475..bcc367cf3aa2 100644 --- a/include/net/route.h +++ b/include/net/route.h @@ -78,9 +78,6 @@ struct rtable { /* Miscellaneous cached information */ u32 rt_mtu_locked:1, rt_pmtu:31; - - struct list_head rt_uncached; - struct uncached_list *rt_uncached_list; }; static inline bool rt_is_input_route(const struct rtable *rt) diff --git a/include/net/scm.h b/include/net/scm.h index 1ce365f4c256..585adc1346bd 100644 --- a/include/net/scm.h +++ b/include/net/scm.h @@ -105,16 +105,27 @@ static inline void scm_passec(struct socket *sock, struct msghdr *msg, struct sc } } } + +static inline bool scm_has_secdata(struct socket *sock) +{ + return test_bit(SOCK_PASSSEC, &sock->flags); +} #else static inline void scm_passec(struct socket *sock, struct msghdr *msg, struct scm_cookie *scm) { } + +static inline bool scm_has_secdata(struct socket *sock) +{ + return false; +} #endif /* CONFIG_SECURITY_NETWORK */ static __inline__ void scm_recv(struct socket *sock, struct msghdr *msg, struct scm_cookie *scm, int flags) { if (!msg->msg_control) { - if (test_bit(SOCK_PASSCRED, &sock->flags) || scm->fp) + if (test_bit(SOCK_PASSCRED, &sock->flags) || scm->fp || + scm_has_secdata(sock)) msg->msg_flags |= MSG_CTRUNC; scm_destroy(scm); return; diff --git a/include/net/sctp/sctp.h b/include/net/sctp/sctp.h index c335dd01a597..2a67100b2a17 100644 --- a/include/net/sctp/sctp.h +++ b/include/net/sctp/sctp.h @@ -425,11 +425,11 @@ static inline bool sctp_chunk_pending(const struct sctp_chunk *chunk) * the chunk length to indicate when to stop. Make sure * there is room for a param header too. 
*/ -#define sctp_walk_params(pos, chunk, member)\ -_sctp_walk_params((pos), (chunk), ntohs((chunk)->chunk_hdr.length), member) +#define sctp_walk_params(pos, chunk)\ +_sctp_walk_params((pos), (chunk), ntohs((chunk)->chunk_hdr.length)) -#define _sctp_walk_params(pos, chunk, end, member)\ -for (pos.v = chunk->member;\ +#define _sctp_walk_params(pos, chunk, end)\ +for (pos.v = (u8 *)(chunk + 1);\ (pos.v + offsetof(struct sctp_paramhdr, length) + sizeof(pos.p->length) <=\ (void *)chunk + end) &&\ pos.v <= (void *)chunk + end - ntohs(pos.p->length) &&\ @@ -452,8 +452,8 @@ for (err = (struct sctp_errhdr *)((void *)chunk_hdr + \ _sctp_walk_fwdtsn((pos), (chunk), ntohs((chunk)->chunk_hdr->length) - sizeof(struct sctp_fwdtsn_chunk)) #define _sctp_walk_fwdtsn(pos, chunk, end)\ -for (pos = chunk->subh.fwdtsn_hdr->skip;\ - (void *)pos <= (void *)chunk->subh.fwdtsn_hdr->skip + end - sizeof(struct sctp_fwdtsn_skip);\ +for (pos = (void *)(chunk->subh.fwdtsn_hdr + 1);\ + (void *)pos <= (void *)(chunk->subh.fwdtsn_hdr + 1) + end - sizeof(struct sctp_fwdtsn_skip);\ pos++) /* External references. */ diff --git a/include/net/sctp/stream_sched.h b/include/net/sctp/stream_sched.h index fa00dc20a0d7..572d73fdcd5e 100644 --- a/include/net/sctp/stream_sched.h +++ b/include/net/sctp/stream_sched.h @@ -58,5 +58,7 @@ void sctp_sched_ops_register(enum sctp_sched_type sched, struct sctp_sched_ops *sched_ops); void sctp_sched_ops_prio_init(void); void sctp_sched_ops_rr_init(void); +void sctp_sched_ops_fc_init(void); +void sctp_sched_ops_wfq_init(void); #endif /* __sctp_stream_sched_h__ */ diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h index e1f6e7fc2b11..5c72d1864dd6 100644 --- a/include/net/sctp/structs.h +++ b/include/net/sctp/structs.h @@ -332,7 +332,7 @@ struct sctp_cookie { * the association TCB is re-constructed from the cookie. */ __u32 raw_addr_list_len; - struct sctp_init_chunk peer_init[]; + /* struct sctp_init_chunk peer_init[]; */ }; @@ -1429,6 +1429,11 @@ struct sctp_stream_out_ext { struct { struct list_head rr_list; }; + struct { + struct list_head fc_list; + __u32 fc_length; + __u16 fc_weight; + }; }; }; @@ -1475,6 +1480,9 @@ struct sctp_stream { /* The next stream in line */ struct sctp_stream_out_ext *rr_next; }; + struct { + struct list_head fc_list; + }; }; struct sctp_stream_interleave *si; }; @@ -1703,7 +1711,6 @@ struct sctp_association { __u16 ecn_capable:1, /* Can peer do ECN? */ ipv4_address:1, /* Peer understands IPv4 addresses? */ ipv6_address:1, /* Peer understands IPv6 addresses? */ - hostname_address:1, /* Peer understands DNS addresses? */ asconf_capable:1, /* Does peer support ADDIP? */ prsctp_capable:1, /* Can peer do PR-SCTP? */ reconf_capable:1, /* Can peer do RE-CONFIG? 
*/ diff --git a/include/net/smc.h b/include/net/smc.h index 597cb9381182..a002552be29c 100644 --- a/include/net/smc.h +++ b/include/net/smc.h @@ -67,6 +67,7 @@ struct smcd_ops { int (*move_data)(struct smcd_dev *dev, u64 dmb_tok, unsigned int idx, bool sf, unsigned int offset, void *data, unsigned int size); + int (*supports_v2)(void); u8* (*get_system_eid)(void); u64 (*get_local_gid)(struct smcd_dev *dev); u16 (*get_chid)(struct smcd_dev *dev); diff --git a/include/net/sock.h b/include/net/sock.h index 573f2bf7e0de..8b7ed7167243 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -2131,7 +2131,7 @@ sk_dst_get(struct sock *sk) rcu_read_lock(); dst = rcu_dereference(sk->sk_dst_cache); - if (dst && !atomic_inc_not_zero(&dst->__refcnt)) + if (dst && !rcuref_get(&dst->__rcuref)) dst = NULL; rcu_read_unlock(); return dst; @@ -2697,7 +2697,7 @@ sock_recv_timestamp(struct msghdr *msg, struct sock *sk, struct sk_buff *skb) else sock_write_timestamp(sk, kt); - if (sock_flag(sk, SOCK_WIFI_STATUS) && skb->wifi_acked_valid) + if (sock_flag(sk, SOCK_WIFI_STATUS) && skb_wifi_acked_valid(skb)) __sock_recv_wifi_status(msg, sk, skb); } diff --git a/include/net/tcp.h b/include/net/tcp.h index db9f828e9d1e..04a31643cda3 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -529,7 +529,7 @@ static inline void tcp_synq_overflow(const struct sock *sk) last_overflow = READ_ONCE(tcp_sk(sk)->rx_opt.ts_recent_stamp); if (!time_between32(now, last_overflow, last_overflow + HZ)) - WRITE_ONCE(tcp_sk(sk)->rx_opt.ts_recent_stamp, now); + WRITE_ONCE(tcp_sk_rw(sk)->rx_opt.ts_recent_stamp, now); } /* syncookies: no recent synqueue overflow on this listening socket? */ @@ -1117,6 +1117,9 @@ struct tcp_congestion_ops { int tcp_register_congestion_control(struct tcp_congestion_ops *type); void tcp_unregister_congestion_control(struct tcp_congestion_ops *type); +int tcp_update_congestion_control(struct tcp_congestion_ops *type, + struct tcp_congestion_ops *old_type); +int tcp_validate_congestion_control(struct tcp_congestion_ops *ca); void tcp_assign_congestion_control(struct sock *sk); void tcp_init_congestion_control(struct sock *sk); diff --git a/include/net/vxlan.h b/include/net/vxlan.h index bca5b01af247..20bd7d893e10 100644 --- a/include/net/vxlan.h +++ b/include/net/vxlan.h @@ -3,6 +3,7 @@ #define __NET_VXLAN_H 1 #include <linux/if_vlan.h> +#include <linux/rhashtable-types.h> #include <net/udp_tunnel.h> #include <net/dst_metadata.h> #include <net/rtnetlink.h> @@ -302,6 +303,10 @@ struct vxlan_dev { struct vxlan_vni_group __rcu *vnigrp; struct hlist_head fdb_head[FDB_HASH_SIZE]; + + struct rhashtable mdb_tbl; + struct hlist_head mdb_list; + unsigned int mdb_seq; }; #define VXLAN_F_LEARN 0x01 @@ -322,6 +327,7 @@ struct vxlan_dev { #define VXLAN_F_IPV6_LINKLOCAL 0x8000 #define VXLAN_F_TTL_INHERIT 0x10000 #define VXLAN_F_VNIFILTER 0x20000 +#define VXLAN_F_MDB 0x40000 /* Flags that are used in the receive path. 
These flags must match in * order for a socket to be shareable @@ -566,4 +572,23 @@ static inline bool vxlan_fdb_nh_path_select(struct nexthop *nh, return true; } +static inline void vxlan_build_gbp_hdr(struct vxlanhdr *vxh, const struct vxlan_metadata *md) +{ + struct vxlanhdr_gbp *gbp; + + if (!md->gbp) + return; + + gbp = (struct vxlanhdr_gbp *)vxh; + vxh->vx_flags |= VXLAN_HF_GBP; + + if (md->gbp & VXLAN_GBP_DONT_LEARN) + gbp->dont_learn = 1; + + if (md->gbp & VXLAN_GBP_POLICY_APPLIED) + gbp->policy_applied = 1; + + gbp->policy_id = htons(md->gbp & VXLAN_GBP_ID_MASK); +} + #endif diff --git a/include/net/x25.h b/include/net/x25.h index d7d6c2b4ffa7..597eb53c471e 100644 --- a/include/net/x25.h +++ b/include/net/x25.h @@ -177,10 +177,7 @@ struct x25_forward { atomic_t refcnt; }; -static inline struct x25_sock *x25_sk(const struct sock *sk) -{ - return (struct x25_sock *)sk; -} +#define x25_sk(ptr) container_of_const(ptr, struct x25_sock, sk) /* af_x25.c */ extern int sysctl_x25_restart_request_timeout; diff --git a/include/net/xdp.h b/include/net/xdp.h index 76aa748e7923..d1c5381fc95f 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -318,35 +318,6 @@ void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq); void xdp_return_frame_bulk(struct xdp_frame *xdpf, struct xdp_frame_bulk *bq); -/* When sending xdp_frame into the network stack, then there is no - * return point callback, which is needed to release e.g. DMA-mapping - * resources with page_pool. Thus, have explicit function to release - * frame resources. - */ -void __xdp_release_frame(void *data, struct xdp_mem_info *mem); -static inline void xdp_release_frame(struct xdp_frame *xdpf) -{ - struct xdp_mem_info *mem = &xdpf->mem; - struct skb_shared_info *sinfo; - int i; - - /* Curr only page_pool needs this */ - if (mem->type != MEM_TYPE_PAGE_POOL) - return; - - if (likely(!xdp_frame_has_frags(xdpf))) - goto out; - - sinfo = xdp_get_shared_info_from_frame(xdpf); - for (i = 0; i < sinfo->nr_frags; i++) { - struct page *page = skb_frag_page(&sinfo->frags[i]); - - __xdp_release_frame(page_address(page), mem); - } -out: - __xdp_release_frame(xdpf->data, mem); -} - static __always_inline unsigned int xdp_get_frame_len(struct xdp_frame *xdpf) { struct skb_shared_info *sinfo; diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h index 3057e1a4a11c..e96a1151ec75 100644 --- a/include/net/xdp_sock.h +++ b/include/net/xdp_sock.h @@ -38,6 +38,7 @@ struct xdp_umem { struct xsk_map { struct bpf_map map; spinlock_t lock; /* Synchronize map updates */ + atomic_t count; struct xdp_sock __rcu *xsk_map[]; }; diff --git a/include/net/xfrm.h b/include/net/xfrm.h index 3e1f70e8e424..33ee3f5936e6 100644 --- a/include/net/xfrm.h +++ b/include/net/xfrm.h @@ -138,6 +138,10 @@ enum { XFRM_DEV_OFFLOAD_PACKET, }; +enum { + XFRM_DEV_OFFLOAD_FLAG_ACQ = 1, +}; + struct xfrm_dev_offload { struct net_device *dev; netdevice_tracker dev_tracker; @@ -145,6 +149,7 @@ struct xfrm_dev_offload { unsigned long offload_handle; u8 dir : 2; u8 type : 2; + u8 flags : 2; }; struct xfrm_mode { diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h index 3e952e569418..d318c769b445 100644 --- a/include/net/xsk_buff_pool.h +++ b/include/net/xsk_buff_pool.h @@ -180,13 +180,8 @@ static inline bool xp_desc_crosses_non_contig_pg(struct xsk_buff_pool *pool, if (likely(!cross_pg)) return false; - if (pool->dma_pages_cnt) { - return !(pool->dma_pages[addr >> PAGE_SHIFT] & - XSK_NEXT_PG_CONTIG_MASK); - } - - /* skb path */ - return addr + len > pool->addrs_cnt; + 
return pool->dma_pages_cnt && + !(pool->dma_pages[addr >> PAGE_SHIFT] & XSK_NEXT_PG_CONTIG_MASK); } static inline u64 xp_aligned_extract_addr(struct xsk_buff_pool *pool, u64 addr) diff --git a/include/soc/mscc/ocelot.h b/include/soc/mscc/ocelot.h index 2080879e4134..cb8fbb241879 100644 --- a/include/soc/mscc/ocelot.h +++ b/include/soc/mscc/ocelot.h @@ -11,6 +11,8 @@ #include <linux/regmap.h> #include <net/dsa.h> +struct tc_mqprio_qopt_offload; + /* Port Group IDs (PGID) are masks of destination ports. * * For L2 forwarding, the switch performs 3 lookups in the PGID table for each @@ -644,6 +646,7 @@ enum ocelot_tag_prefix { }; struct ocelot; +struct device_node; struct ocelot_ops { struct net_device *(*port_to_netdev)(struct ocelot *ocelot, int port); @@ -743,9 +746,11 @@ struct ocelot_mirror { }; struct ocelot_mm_state { - struct mutex lock; enum ethtool_mm_verify_status verify_status; + bool tx_enabled; bool tx_active; + u8 preemptible_tcs; + u8 active_preemptible_tcs; }; struct ocelot_port; @@ -939,15 +944,17 @@ struct ocelot_policer { __ocelot_target_write_ix(ocelot, target, val, reg, 0) /* I/O */ -u32 ocelot_port_readl(struct ocelot_port *port, u32 reg); -void ocelot_port_writel(struct ocelot_port *port, u32 val, u32 reg); -void ocelot_port_rmwl(struct ocelot_port *port, u32 val, u32 mask, u32 reg); -int __ocelot_bulk_read_ix(struct ocelot *ocelot, u32 reg, u32 offset, void *buf, - int count); -u32 __ocelot_read_ix(struct ocelot *ocelot, u32 reg, u32 offset); -void __ocelot_write_ix(struct ocelot *ocelot, u32 val, u32 reg, u32 offset); -void __ocelot_rmw_ix(struct ocelot *ocelot, u32 val, u32 mask, u32 reg, - u32 offset); +u32 ocelot_port_readl(struct ocelot_port *port, enum ocelot_reg reg); +void ocelot_port_writel(struct ocelot_port *port, u32 val, enum ocelot_reg reg); +void ocelot_port_rmwl(struct ocelot_port *port, u32 val, u32 mask, + enum ocelot_reg reg); +int __ocelot_bulk_read_ix(struct ocelot *ocelot, enum ocelot_reg reg, + u32 offset, void *buf, int count); +u32 __ocelot_read_ix(struct ocelot *ocelot, enum ocelot_reg reg, u32 offset); +void __ocelot_write_ix(struct ocelot *ocelot, u32 val, enum ocelot_reg reg, + u32 offset); +void __ocelot_rmw_ix(struct ocelot *ocelot, u32 val, u32 mask, + enum ocelot_reg reg, u32 offset); u32 __ocelot_target_read_ix(struct ocelot *ocelot, enum ocelot_target target, u32 reg, u32 offset); void __ocelot_target_write_ix(struct ocelot *ocelot, enum ocelot_target target, @@ -1111,6 +1118,12 @@ int ocelot_sb_occ_tc_port_bind_get(struct ocelot *ocelot, int port, enum devlink_sb_pool_type pool_type, u32 *p_cur, u32 *p_max); +int ocelot_port_configure_serdes(struct ocelot *ocelot, int port, + struct device_node *portnp); + +void ocelot_phylink_mac_config(struct ocelot *ocelot, int port, + unsigned int link_an_mode, + const struct phylink_link_state *state); void ocelot_phylink_mac_link_down(struct ocelot *ocelot, int port, unsigned int link_an_mode, phy_interface_t interface, @@ -1139,12 +1152,15 @@ int ocelot_vcap_policer_add(struct ocelot *ocelot, u32 pol_ix, struct ocelot_policer *pol); int ocelot_vcap_policer_del(struct ocelot *ocelot, u32 pol_ix); -void ocelot_port_mm_irq(struct ocelot *ocelot, int port); +void ocelot_mm_irq(struct ocelot *ocelot); int ocelot_port_set_mm(struct ocelot *ocelot, int port, struct ethtool_mm_cfg *cfg, struct netlink_ext_ack *extack); int ocelot_port_get_mm(struct ocelot *ocelot, int port, struct ethtool_mm_state *state); +int ocelot_port_mqprio(struct ocelot *ocelot, int port, + struct tc_mqprio_qopt_offload 
*mqprio); +void ocelot_port_update_preemptible_tcs(struct ocelot *ocelot, int port); #if IS_ENABLED(CONFIG_BRIDGE_MRP) int ocelot_mrp_add(struct ocelot *ocelot, int port, @@ -1183,4 +1199,6 @@ ocelot_mrp_del_ring_role(struct ocelot *ocelot, int port, } #endif +void ocelot_pll5_init(struct ocelot *ocelot); + #endif diff --git a/include/trace/events/fib.h b/include/trace/events/fib.h index c2300c407f58..76297ecd4935 100644 --- a/include/trace/events/fib.h +++ b/include/trace/events/fib.h @@ -36,7 +36,6 @@ TRACE_EVENT(fib_table_lookup, ), TP_fast_assign( - struct in6_addr in6_zero = {}; struct net_device *dev; struct in6_addr *in6; __be32 *p32; @@ -74,7 +73,7 @@ TRACE_EVENT(fib_table_lookup, *p32 = nhc->nhc_gw.ipv4; in6 = (struct in6_addr *)__entry->gw6; - *in6 = in6_zero; + *in6 = in6addr_any; } else if (nhc->nhc_gw_family == AF_INET6) { p32 = (__be32 *) __entry->gw4; *p32 = 0; @@ -87,7 +86,7 @@ TRACE_EVENT(fib_table_lookup, *p32 = 0; in6 = (struct in6_addr *)__entry->gw6; - *in6 = in6_zero; + *in6 = in6addr_any; } ), diff --git a/include/trace/events/fib6.h b/include/trace/events/fib6.h index 6e821eb79450..4d3e607b3cde 100644 --- a/include/trace/events/fib6.h +++ b/include/trace/events/fib6.h @@ -68,11 +68,8 @@ TRACE_EVENT(fib6_table_lookup, strcpy(__entry->name, "-"); } if (res->f6i == net->ipv6.fib6_null_entry) { - struct in6_addr in6_zero = {}; - in6 = (struct in6_addr *)__entry->gw; - *in6 = in6_zero; - + *in6 = in6addr_any; } else if (res->nh) { in6 = (struct in6_addr *)__entry->gw; *in6 = res->nh->fib_nh_gw6; diff --git a/include/trace/events/handshake.h b/include/trace/events/handshake.h new file mode 100644 index 000000000000..8dadcab5f12a --- /dev/null +++ b/include/trace/events/handshake.h @@ -0,0 +1,159 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#undef TRACE_SYSTEM +#define TRACE_SYSTEM handshake + +#if !defined(_TRACE_HANDSHAKE_H) || defined(TRACE_HEADER_MULTI_READ) +#define _TRACE_HANDSHAKE_H + +#include <linux/net.h> +#include <linux/tracepoint.h> + +DECLARE_EVENT_CLASS(handshake_event_class, + TP_PROTO( + const struct net *net, + const struct handshake_req *req, + const struct sock *sk + ), + TP_ARGS(net, req, sk), + TP_STRUCT__entry( + __field(const void *, req) + __field(const void *, sk) + __field(unsigned int, netns_ino) + ), + TP_fast_assign( + __entry->req = req; + __entry->sk = sk; + __entry->netns_ino = net->ns.inum; + ), + TP_printk("req=%p sk=%p", + __entry->req, __entry->sk + ) +); +#define DEFINE_HANDSHAKE_EVENT(name) \ + DEFINE_EVENT(handshake_event_class, name, \ + TP_PROTO( \ + const struct net *net, \ + const struct handshake_req *req, \ + const struct sock *sk \ + ), \ + TP_ARGS(net, req, sk)) + +DECLARE_EVENT_CLASS(handshake_fd_class, + TP_PROTO( + const struct net *net, + const struct handshake_req *req, + const struct sock *sk, + int fd + ), + TP_ARGS(net, req, sk, fd), + TP_STRUCT__entry( + __field(const void *, req) + __field(const void *, sk) + __field(int, fd) + __field(unsigned int, netns_ino) + ), + TP_fast_assign( + __entry->req = req; + __entry->sk = req->hr_sk; + __entry->fd = fd; + __entry->netns_ino = net->ns.inum; + ), + TP_printk("req=%p sk=%p fd=%d", + __entry->req, __entry->sk, __entry->fd + ) +); +#define DEFINE_HANDSHAKE_FD_EVENT(name) \ + DEFINE_EVENT(handshake_fd_class, name, \ + TP_PROTO( \ + const struct net *net, \ + const struct handshake_req *req, \ + const struct sock *sk, \ + int fd \ + ), \ + TP_ARGS(net, req, sk, fd)) + +DECLARE_EVENT_CLASS(handshake_error_class, + TP_PROTO( + const struct net *net, + const struct handshake_req 
*req, + const struct sock *sk, + int err + ), + TP_ARGS(net, req, sk, err), + TP_STRUCT__entry( + __field(const void *, req) + __field(const void *, sk) + __field(int, err) + __field(unsigned int, netns_ino) + ), + TP_fast_assign( + __entry->req = req; + __entry->sk = sk; + __entry->err = err; + __entry->netns_ino = net->ns.inum; + ), + TP_printk("req=%p sk=%p err=%d", + __entry->req, __entry->sk, __entry->err + ) +); +#define DEFINE_HANDSHAKE_ERROR(name) \ + DEFINE_EVENT(handshake_error_class, name, \ + TP_PROTO( \ + const struct net *net, \ + const struct handshake_req *req, \ + const struct sock *sk, \ + int err \ + ), \ + TP_ARGS(net, req, sk, err)) + + +/* + * Request lifetime events + */ + +DEFINE_HANDSHAKE_EVENT(handshake_submit); +DEFINE_HANDSHAKE_ERROR(handshake_submit_err); +DEFINE_HANDSHAKE_EVENT(handshake_cancel); +DEFINE_HANDSHAKE_EVENT(handshake_cancel_none); +DEFINE_HANDSHAKE_EVENT(handshake_cancel_busy); +DEFINE_HANDSHAKE_EVENT(handshake_destruct); + + +TRACE_EVENT(handshake_complete, + TP_PROTO( + const struct net *net, + const struct handshake_req *req, + const struct sock *sk, + int status + ), + TP_ARGS(net, req, sk, status), + TP_STRUCT__entry( + __field(const void *, req) + __field(const void *, sk) + __field(int, status) + __field(unsigned int, netns_ino) + ), + TP_fast_assign( + __entry->req = req; + __entry->sk = sk; + __entry->status = status; + __entry->netns_ino = net->ns.inum; + ), + TP_printk("req=%p sk=%p status=%d", + __entry->req, __entry->sk, __entry->status + ) +); + +/* + * Netlink events + */ + +DEFINE_HANDSHAKE_ERROR(handshake_notify_err); +DEFINE_HANDSHAKE_FD_EVENT(handshake_cmd_accept); +DEFINE_HANDSHAKE_ERROR(handshake_cmd_accept_err); +DEFINE_HANDSHAKE_FD_EVENT(handshake_cmd_done); +DEFINE_HANDSHAKE_ERROR(handshake_cmd_done_err); + +#endif /* _TRACE_HANDSHAKE_H */ + +#include <trace/define_trace.h> diff --git a/include/trace/events/qrtr.h b/include/trace/events/qrtr.h index b1de14c3bb93..441132c67133 100644 --- a/include/trace/events/qrtr.h +++ b/include/trace/events/qrtr.h @@ -10,15 +10,16 @@ TRACE_EVENT(qrtr_ns_service_announce_new, - TP_PROTO(__le32 service, __le32 instance, __le32 node, __le32 port), + TP_PROTO(unsigned int service, unsigned int instance, + unsigned int node, unsigned int port), TP_ARGS(service, instance, node, port), TP_STRUCT__entry( - __field(__le32, service) - __field(__le32, instance) - __field(__le32, node) - __field(__le32, port) + __field(unsigned int, service) + __field(unsigned int, instance) + __field(unsigned int, node) + __field(unsigned int, port) ), TP_fast_assign( @@ -36,15 +37,16 @@ TRACE_EVENT(qrtr_ns_service_announce_new, TRACE_EVENT(qrtr_ns_service_announce_del, - TP_PROTO(__le32 service, __le32 instance, __le32 node, __le32 port), + TP_PROTO(unsigned int service, unsigned int instance, + unsigned int node, unsigned int port), TP_ARGS(service, instance, node, port), TP_STRUCT__entry( - __field(__le32, service) - __field(__le32, instance) - __field(__le32, node) - __field(__le32, port) + __field(unsigned int, service) + __field(unsigned int, instance) + __field(unsigned int, node) + __field(unsigned int, port) ), TP_fast_assign( @@ -62,15 +64,16 @@ TRACE_EVENT(qrtr_ns_service_announce_del, TRACE_EVENT(qrtr_ns_server_add, - TP_PROTO(__le32 service, __le32 instance, __le32 node, __le32 port), + TP_PROTO(unsigned int service, unsigned int instance, + unsigned int node, unsigned int port), TP_ARGS(service, instance, node, port), TP_STRUCT__entry( - __field(__le32, service) - __field(__le32, instance) - 
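/*
 * For reference (sketch, not part of the patch): each
 * DEFINE_HANDSHAKE_EVENT(name) used above is shorthand for a DEFINE_EVENT
 * based on handshake_event_class, e.g. handshake_submit expands roughly to:
 *
 *	DEFINE_EVENT(handshake_event_class, handshake_submit,
 *		TP_PROTO(const struct net *net,
 *			 const struct handshake_req *req,
 *			 const struct sock *sk),
 *		TP_ARGS(net, req, sk));
 *
 * which provides trace_handshake_submit(net, req, sk) to the handshake code
 * and exposes the event as handshake/handshake_submit under tracefs.
 */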
__field(__le32, node) - __field(__le32, port) + __field(unsigned int, service) + __field(unsigned int, instance) + __field(unsigned int, node) + __field(unsigned int, port) ), TP_fast_assign( diff --git a/include/trace/events/sock.h b/include/trace/events/sock.h index 03d19fc562f8..fd206a6ab5b8 100644 --- a/include/trace/events/sock.h +++ b/include/trace/events/sock.h @@ -158,7 +158,7 @@ TRACE_EVENT(inet_sock_set_state, ), TP_fast_assign( - struct inet_sock *inet = inet_sk(sk); + const struct inet_sock *inet = inet_sk(sk); struct in6_addr *pin6; __be32 *p32; @@ -222,7 +222,7 @@ TRACE_EVENT(inet_sk_error_report, ), TP_fast_assign( - struct inet_sock *inet = inet_sk(sk); + const struct inet_sock *inet = inet_sk(sk); struct in6_addr *pin6; __be32 *p32; diff --git a/include/trace/events/tcp.h b/include/trace/events/tcp.h index 901b440238d5..bf06db8d2046 100644 --- a/include/trace/events/tcp.h +++ b/include/trace/events/tcp.h @@ -67,7 +67,7 @@ DECLARE_EVENT_CLASS(tcp_event_sk_skb, ), TP_fast_assign( - struct inet_sock *inet = inet_sk(sk); + const struct inet_sock *inet = inet_sk(sk); __be32 *p32; __entry->skbaddr = skb; diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 62ce1f5d1b1d..1bb11a6ee667 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -986,6 +986,7 @@ enum bpf_prog_type { BPF_PROG_TYPE_LSM, BPF_PROG_TYPE_SK_LOOKUP, BPF_PROG_TYPE_SYSCALL, /* a program that can execute syscalls */ + BPF_PROG_TYPE_NETFILTER, }; enum bpf_attach_type { @@ -1033,6 +1034,7 @@ enum bpf_attach_type { BPF_PERF_EVENT, BPF_TRACE_KPROBE_MULTI, BPF_LSM_CGROUP, + BPF_STRUCT_OPS, __MAX_BPF_ATTACH_TYPE }; @@ -1049,6 +1051,7 @@ enum bpf_link_type { BPF_LINK_TYPE_PERF_EVENT = 7, BPF_LINK_TYPE_KPROBE_MULTI = 8, BPF_LINK_TYPE_STRUCT_OPS = 9, + BPF_LINK_TYPE_NETFILTER = 10, MAX_BPF_LINK_TYPE, }; @@ -1108,7 +1111,7 @@ enum bpf_link_type { */ #define BPF_F_STRICT_ALIGNMENT (1U << 0) -/* If BPF_F_ANY_ALIGNMENT is used in BPF_PROF_LOAD command, the +/* If BPF_F_ANY_ALIGNMENT is used in BPF_PROG_LOAD command, the * verifier will allow any alignment whatsoever. On platforms * with strict alignment requirements for loads ands stores (such * as sparc and mips) the verifier validates that all loads and @@ -1266,6 +1269,9 @@ enum { /* Create a map that is suitable to be an inner map with dynamic max entries */ BPF_F_INNER_MAP = (1U << 12), + +/* Create a map that will be registered/unregesitered by the backed bpf_link */ + BPF_F_LINK = (1U << 13), }; /* Flags for BPF_PROG_QUERY. */ @@ -1403,6 +1409,11 @@ union bpf_attr { __aligned_u64 fd_array; /* array of FDs */ __aligned_u64 core_relos; __u32 core_relo_rec_size; /* sizeof(struct bpf_core_relo) */ + /* output: actual total log contents size (including termintaing zero). + * It could be both larger than original log_size (if log was + * truncated), or smaller (if log buffer wasn't filled completely). + */ + __u32 log_true_size; }; struct { /* anonymous struct used by BPF_OBJ_* commands */ @@ -1488,6 +1499,11 @@ union bpf_attr { __u32 btf_size; __u32 btf_log_size; __u32 btf_log_level; + /* output: actual total log contents size (including termintaing zero). + * It could be both larger than original log_size (if log was + * truncated), or smaller (if log buffer wasn't filled completely). 
+ */ + __u32 btf_log_true_size; }; struct { @@ -1507,7 +1523,10 @@ union bpf_attr { } task_fd_query; struct { /* struct used by BPF_LINK_CREATE command */ - __u32 prog_fd; /* eBPF program to attach */ + union { + __u32 prog_fd; /* eBPF program to attach */ + __u32 map_fd; /* struct_ops to attach */ + }; union { __u32 target_fd; /* object to attach to */ __u32 target_ifindex; /* target ifindex */ @@ -1543,17 +1562,34 @@ union bpf_attr { */ __u64 cookie; } tracing; + struct { + __u32 pf; + __u32 hooknum; + __s32 priority; + __u32 flags; + } netfilter; }; } link_create; struct { /* struct used by BPF_LINK_UPDATE command */ __u32 link_fd; /* link fd */ - /* new program fd to update link with */ - __u32 new_prog_fd; + union { + /* new program fd to update link with */ + __u32 new_prog_fd; + /* new struct_ops map fd to update link with */ + __u32 new_map_fd; + }; __u32 flags; /* extra flags */ - /* expected link's program fd; is specified only if - * BPF_F_REPLACE flag is set in flags */ - __u32 old_prog_fd; + union { + /* expected link's program fd; is specified only if + * BPF_F_REPLACE flag is set in flags. + */ + __u32 old_prog_fd; + /* expected link's map fd; is specified only + * if BPF_F_REPLACE flag is set. + */ + __u32 old_map_fd; + }; } link_update; struct { @@ -1647,17 +1683,17 @@ union bpf_attr { * Description * This helper is a "printk()-like" facility for debugging. It * prints a message defined by format *fmt* (of size *fmt_size*) - * to file *\/sys/kernel/debug/tracing/trace* from DebugFS, if + * to file *\/sys/kernel/tracing/trace* from TraceFS, if * available. It can take up to three additional **u64** * arguments (as an eBPF helpers, the total number of arguments is * limited to five). * * Each time the helper is called, it appends a line to the trace. - * Lines are discarded while *\/sys/kernel/debug/tracing/trace* is - * open, use *\/sys/kernel/debug/tracing/trace_pipe* to avoid this. + * Lines are discarded while *\/sys/kernel/tracing/trace* is + * open, use *\/sys/kernel/tracing/trace_pipe* to avoid this. * The format of the trace is customizable, and the exact output * one will get depends on the options set in - * *\/sys/kernel/debug/tracing/trace_options* (see also the + * *\/sys/kernel/tracing/trace_options* (see also the * *README* file under the same directory). However, it usually * defaults to something like: * @@ -4969,6 +5005,12 @@ union bpf_attr { * different maps if key/value layout matches across maps. * Every bpf_timer_set_callback() can have different callback_fn. * + * *flags* can be one of: + * + * **BPF_F_TIMER_ABS** + * Start the timer in absolute expire value instead of the + * default relative one. + * * Return * 0 on success. * **-EINVAL** if *timer* was not initialized with bpf_timer_init() earlier @@ -5325,11 +5367,22 @@ union bpf_attr { * Description * Write *len* bytes from *src* into *dst*, starting from *offset* * into *dst*. - * *flags* is currently unused. + * + * *flags* must be 0 except for skb-type dynptrs. + * + * For skb-type dynptrs: + * * All data slices of the dynptr are automatically + * invalidated after **bpf_dynptr_write**\ (). This is + * because writing may pull the skb and change the + * underlying packet buffer. + * + * * For *flags*, please see the flags accepted by + * **bpf_skb_store_bytes**\ (). * Return * 0 on success, -E2BIG if *offset* + *len* exceeds the length * of *dst*'s data, -EINVAL if *dst* is an invalid dynptr or if *dst* - * is a read-only dynptr or if *flags* is not 0. 
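/*
 * Illustrative user-space sketch (not part of the patch): creating a link
 * for an already loaded BPF_PROG_TYPE_NETFILTER program via BPF_LINK_CREATE
 * using the new link_create.netfilter fields above. attach_netfilter_prog()
 * and the chosen hook/priority values are hypothetical; all other bpf_attr
 * fields are simply left zeroed here.
 */
#include <linux/bpf.h>
#include <linux/netfilter.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int attach_netfilter_prog(int prog_fd)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.link_create.prog_fd = prog_fd;
	attr.link_create.netfilter.pf = NFPROTO_IPV4;
	attr.link_create.netfilter.hooknum = NF_INET_PRE_ROUTING;
	attr.link_create.netfilter.priority = -128;	/* arbitrary example */

	return syscall(__NR_bpf, BPF_LINK_CREATE, &attr, sizeof(attr));
}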
+ * is a read-only dynptr or if *flags* is not correct. For skb-type dynptrs, + * other errors correspond to errors returned by **bpf_skb_store_bytes**\ (). * * void *bpf_dynptr_data(const struct bpf_dynptr *ptr, u32 offset, u32 len) * Description @@ -5337,6 +5390,9 @@ union bpf_attr { * * *len* must be a statically known value. The returned data slice * is invalidated whenever the dynptr is invalidated. + * + * skb and xdp type dynptrs may not use bpf_dynptr_data. They should + * instead use bpf_dynptr_slice and bpf_dynptr_slice_rdwr. * Return * Pointer to the underlying dynptr data, NULL if the dynptr is * read-only, if the dynptr is invalid, or if the offset and length @@ -6359,6 +6415,15 @@ struct bpf_link_info { struct { __u32 ifindex; } xdp; + struct { + __u32 map_id; + } struct_ops; + struct { + __u32 pf; + __u32 hooknum; + __s32 priority; + __u32 flags; + } netfilter; }; } __attribute__((aligned(8))); @@ -6934,6 +6999,10 @@ struct bpf_rb_node { __u64 :64; } __attribute__((aligned(8))); +struct bpf_refcount { + __u32 :32; +} __attribute__((aligned(4))); + struct bpf_sysctl { __u32 write; /* Sysctl is being read (= 0) or written (= 1). * Allows 1,2,4-byte read, but no write. @@ -7083,4 +7152,21 @@ struct bpf_core_relo { enum bpf_core_relo_kind kind; }; +/* + * Flags to control bpf_timer_start() behaviour. + * - BPF_F_TIMER_ABS: Timeout passed is absolute time, by default it is + * relative to current time. + */ +enum { + BPF_F_TIMER_ABS = (1ULL << 0), +}; + +/* BPF numbers iterator state */ +struct bpf_iter_num { + /* opaque iterator state; having __u64 here allows to preserve correct + * alignment requirements in vmlinux.h, generated from BTF + */ + __u64 __opaque[1]; +} __attribute__((aligned(8))); + #endif /* _UAPI__LINUX_BPF_H__ */ diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index d39ce21381c5..1ebf8d455f07 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -357,6 +357,8 @@ enum { ETHTOOL_A_RINGS_CQE_SIZE, /* u32 */ ETHTOOL_A_RINGS_TX_PUSH, /* u8 */ ETHTOOL_A_RINGS_RX_PUSH, /* u8 */ + ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN, /* u32 */ + ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX, /* u32 */ /* add new constants above here */ __ETHTOOL_A_RINGS_CNT, diff --git a/include/uapi/linux/handshake.h b/include/uapi/linux/handshake.h new file mode 100644 index 000000000000..1de4d0b95325 --- /dev/null +++ b/include/uapi/linux/handshake.h @@ -0,0 +1,73 @@ +/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */ +/* Do not edit directly, auto-generated from: */ +/* Documentation/netlink/specs/handshake.yaml */ +/* YNL-GEN uapi header */ + +#ifndef _UAPI_LINUX_HANDSHAKE_H +#define _UAPI_LINUX_HANDSHAKE_H + +#define HANDSHAKE_FAMILY_NAME "handshake" +#define HANDSHAKE_FAMILY_VERSION 1 + +enum handshake_handler_class { + HANDSHAKE_HANDLER_CLASS_NONE, + HANDSHAKE_HANDLER_CLASS_TLSHD, + HANDSHAKE_HANDLER_CLASS_MAX, +}; + +enum handshake_msg_type { + HANDSHAKE_MSG_TYPE_UNSPEC, + HANDSHAKE_MSG_TYPE_CLIENTHELLO, + HANDSHAKE_MSG_TYPE_SERVERHELLO, +}; + +enum handshake_auth { + HANDSHAKE_AUTH_UNSPEC, + HANDSHAKE_AUTH_UNAUTH, + HANDSHAKE_AUTH_PSK, + HANDSHAKE_AUTH_X509, +}; + +enum { + HANDSHAKE_A_X509_CERT = 1, + HANDSHAKE_A_X509_PRIVKEY, + + __HANDSHAKE_A_X509_MAX, + HANDSHAKE_A_X509_MAX = (__HANDSHAKE_A_X509_MAX - 1) +}; + +enum { + HANDSHAKE_A_ACCEPT_SOCKFD = 1, + HANDSHAKE_A_ACCEPT_HANDLER_CLASS, + HANDSHAKE_A_ACCEPT_MESSAGE_TYPE, + HANDSHAKE_A_ACCEPT_TIMEOUT, + HANDSHAKE_A_ACCEPT_AUTH_MODE, + 
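/*
 * Illustrative BPF program sketch (not part of the patch): arming a timer
 * at an absolute expiry with the new BPF_F_TIMER_ABS flag. The map layout
 * and arm_absolute() are hypothetical; the absolute value is interpreted
 * against the clock chosen at bpf_timer_init() time (bpf_ktime_get_ns()
 * here assumes CLOCK_MONOTONIC).
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct timer_val {
	struct bpf_timer t;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, struct timer_val);
} timer_map SEC(".maps");

static long arm_absolute(struct timer_val *val)
{
	__u64 deadline = bpf_ktime_get_ns() + 500000000ULL;	/* now + 500 ms */

	return bpf_timer_start(&val->t, deadline, BPF_F_TIMER_ABS);
}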
HANDSHAKE_A_ACCEPT_PEER_IDENTITY, + HANDSHAKE_A_ACCEPT_CERTIFICATE, + + __HANDSHAKE_A_ACCEPT_MAX, + HANDSHAKE_A_ACCEPT_MAX = (__HANDSHAKE_A_ACCEPT_MAX - 1) +}; + +enum { + HANDSHAKE_A_DONE_STATUS = 1, + HANDSHAKE_A_DONE_SOCKFD, + HANDSHAKE_A_DONE_REMOTE_AUTH, + + __HANDSHAKE_A_DONE_MAX, + HANDSHAKE_A_DONE_MAX = (__HANDSHAKE_A_DONE_MAX - 1) +}; + +enum { + HANDSHAKE_CMD_READY = 1, + HANDSHAKE_CMD_ACCEPT, + HANDSHAKE_CMD_DONE, + + __HANDSHAKE_CMD_MAX, + HANDSHAKE_CMD_MAX = (__HANDSHAKE_CMD_MAX - 1) +}; + +#define HANDSHAKE_MCGRP_NONE "none" +#define HANDSHAKE_MCGRP_TLSHD "tlshd" + +#endif /* _UAPI_LINUX_HANDSHAKE_H */ diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h index d60c456710b3..f95326fce6bb 100644 --- a/include/uapi/linux/if_bridge.h +++ b/include/uapi/linux/if_bridge.h @@ -525,6 +525,7 @@ enum { BRIDGE_VLANDB_ENTRY_MCAST_ROUTER, BRIDGE_VLANDB_ENTRY_MCAST_N_GROUPS, BRIDGE_VLANDB_ENTRY_MCAST_MAX_GROUPS, + BRIDGE_VLANDB_ENTRY_NEIGH_SUPPRESS, __BRIDGE_VLANDB_ENTRY_MAX, }; #define BRIDGE_VLANDB_ENTRY_MAX (__BRIDGE_VLANDB_ENTRY_MAX - 1) @@ -633,6 +634,11 @@ enum { MDBA_MDB_EATTR_GROUP_MODE, MDBA_MDB_EATTR_SOURCE, MDBA_MDB_EATTR_RTPROT, + MDBA_MDB_EATTR_DST, + MDBA_MDB_EATTR_DST_PORT, + MDBA_MDB_EATTR_VNI, + MDBA_MDB_EATTR_IFINDEX, + MDBA_MDB_EATTR_SRC_VNI, __MDBA_MDB_EATTR_MAX }; #define MDBA_MDB_EATTR_MAX (__MDBA_MDB_EATTR_MAX - 1) @@ -728,6 +734,11 @@ enum { MDBE_ATTR_SRC_LIST, MDBE_ATTR_GROUP_MODE, MDBE_ATTR_RTPROT, + MDBE_ATTR_DST, + MDBE_ATTR_DST_PORT, + MDBE_ATTR_VNI, + MDBE_ATTR_IFINDEX, + MDBE_ATTR_SRC_VNI, __MDBE_ATTR_MAX, }; #define MDBE_ATTR_MAX (__MDBE_ATTR_MAX - 1) diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h index 57ceb788250f..4ac1000b0ef2 100644 --- a/include/uapi/linux/if_link.h +++ b/include/uapi/linux/if_link.h @@ -569,6 +569,7 @@ enum { IFLA_BRPORT_MAB, IFLA_BRPORT_MCAST_N_GROUPS, IFLA_BRPORT_MCAST_MAX_GROUPS, + IFLA_BRPORT_NEIGH_VLAN_SUPPRESS, __IFLA_BRPORT_MAX }; #define IFLA_BRPORT_MAX (__IFLA_BRPORT_MAX - 1) @@ -635,6 +636,7 @@ enum { IFLA_MACVLAN_MACADDR_COUNT, IFLA_MACVLAN_BC_QUEUE_LEN, IFLA_MACVLAN_BC_QUEUE_LEN_USED, + IFLA_MACVLAN_BC_CUTOFF, __IFLA_MACVLAN_MAX, }; diff --git a/include/uapi/linux/if_packet.h b/include/uapi/linux/if_packet.h index 78c981d6a9d4..9efc42382fdb 100644 --- a/include/uapi/linux/if_packet.h +++ b/include/uapi/linux/if_packet.h @@ -59,6 +59,7 @@ struct sockaddr_ll { #define PACKET_ROLLOVER_STATS 21 #define PACKET_FANOUT_DATA 22 #define PACKET_IGNORE_OUTGOING 23 +#define PACKET_VNET_HDR_SZ 24 #define PACKET_FANOUT_HASH 0 #define PACKET_FANOUT_LB 1 diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h index ff677f3a6cad..c4d4d8e42dc8 100644 --- a/include/uapi/linux/netfilter/nf_tables.h +++ b/include/uapi/linux/netfilter/nf_tables.h @@ -685,7 +685,7 @@ enum nft_range_ops { * enum nft_range_attributes - nf_tables range expression netlink attributes * * @NFTA_RANGE_SREG: source register of data to compare (NLA_U32: nft_registers) - * @NFTA_RANGE_OP: cmp operation (NLA_U32: nft_cmp_ops) + * @NFTA_RANGE_OP: cmp operation (NLA_U32: nft_range_ops) * @NFTA_RANGE_FROM_DATA: data range from (NLA_NESTED: nft_data_attributes) * @NFTA_RANGE_TO_DATA: data range to (NLA_NESTED: nft_data_attributes) */ @@ -878,7 +878,7 @@ enum nft_exthdr_op { * @NFTA_EXTHDR_LEN: extension header length (NLA_U32) * @NFTA_EXTHDR_FLAGS: extension header flags (NLA_U32) * @NFTA_EXTHDR_OP: option match type (NLA_U32) - * @NFTA_EXTHDR_SREG: option match type (NLA_U32) + * 
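/*
 * Illustrative user-space sketch (not part of the patch): the new
 * PACKET_VNET_HDR_SZ option above lets a packet socket choose which
 * virtio_net header length to use. enable_vnet_hdr() is hypothetical and
 * the set of accepted lengths is an assumption; sizeof(struct
 * virtio_net_hdr_mrg_rxbuf) is used only as an example value.
 */
#include <sys/socket.h>
#include <linux/if_packet.h>
#include <linux/virtio_net.h>

static int enable_vnet_hdr(int pkt_fd)
{
	int hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf);

	return setsockopt(pkt_fd, SOL_PACKET, PACKET_VNET_HDR_SZ,
			  &hdr_len, sizeof(hdr_len));
}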
@NFTA_EXTHDR_SREG: source register (NLA_U32: nft_registers) */ enum nft_exthdr_attributes { NFTA_EXTHDR_UNSPEC, @@ -931,6 +931,7 @@ enum nft_exthdr_attributes { * @NFT_META_TIME_HOUR: hour of day (in seconds) * @NFT_META_SDIF: slave device interface index * @NFT_META_SDIFNAME: slave device interface name + * @NFT_META_BRI_BROUTE: packet br_netfilter_broute bit */ enum nft_meta_keys { NFT_META_LEN, @@ -969,6 +970,7 @@ enum nft_meta_keys { NFT_META_TIME_HOUR, NFT_META_SDIF, NFT_META_SDIFNAME, + NFT_META_BRI_BROUTE, __NFT_META_IIFTYPE, }; @@ -1260,10 +1262,10 @@ enum nft_last_attributes { /** * enum nft_log_attributes - nf_tables log expression netlink attributes * - * @NFTA_LOG_GROUP: netlink group to send messages to (NLA_U32) + * @NFTA_LOG_GROUP: netlink group to send messages to (NLA_U16) * @NFTA_LOG_PREFIX: prefix to prepend to log messages (NLA_STRING) * @NFTA_LOG_SNAPLEN: length of payload to include in netlink message (NLA_U32) - * @NFTA_LOG_QTHRESHOLD: queue threshold (NLA_U32) + * @NFTA_LOG_QTHRESHOLD: queue threshold (NLA_U16) * @NFTA_LOG_LEVEL: log level (NLA_U32) * @NFTA_LOG_FLAGS: logging flags (NLA_U32) */ diff --git a/include/uapi/linux/netfilter/nfnetlink_hook.h b/include/uapi/linux/netfilter/nfnetlink_hook.h index bbcd285b22e1..84a561a74b98 100644 --- a/include/uapi/linux/netfilter/nfnetlink_hook.h +++ b/include/uapi/linux/netfilter/nfnetlink_hook.h @@ -32,8 +32,12 @@ enum nfnl_hook_attributes { /** * enum nfnl_hook_chain_info_attributes - chain description * - * NFNLA_HOOK_INFO_DESC: nft chain and table name (enum nft_table_attributes) (NLA_NESTED) - * NFNLA_HOOK_INFO_TYPE: chain type (enum nfnl_hook_chaintype) (NLA_U32) + * @NFNLA_HOOK_INFO_DESC: nft chain and table name (NLA_NESTED) + * @NFNLA_HOOK_INFO_TYPE: chain type (enum nfnl_hook_chaintype) (NLA_U32) + * + * NFNLA_HOOK_INFO_DESC depends on NFNLA_HOOK_INFO_TYPE value: + * NFNL_HOOK_TYPE_NFTABLES: enum nft_table_attributes + * NFNL_HOOK_TYPE_BPF: enum nfnl_hook_bpf_attributes */ enum nfnl_hook_chain_info_attributes { NFNLA_HOOK_INFO_UNSPEC, @@ -55,10 +59,24 @@ enum nfnl_hook_chain_desc_attributes { /** * enum nfnl_hook_chaintype - chain type * - * @NFNL_HOOK_TYPE_NFTABLES nf_tables base chain + * @NFNL_HOOK_TYPE_NFTABLES: nf_tables base chain + * @NFNL_HOOK_TYPE_BPF: bpf program */ enum nfnl_hook_chaintype { NFNL_HOOK_TYPE_NFTABLES = 0x1, + NFNL_HOOK_TYPE_BPF, +}; + +/** + * enum nfnl_hook_bpf_attributes - bpf prog description + * + * @NFNLA_HOOK_BPF_ID: bpf program id (NLA_U32) + */ +enum nfnl_hook_bpf_attributes { + NFNLA_HOOK_BPF_UNSPEC, + NFNLA_HOOK_BPF_ID, + __NFNLA_HOOK_BPF_MAX, }; +#define NFNLA_HOOK_BPF_MAX (__NFNLA_HOOK_BPF_MAX - 1) #endif /* _NFNL_HOOK_H */ diff --git a/include/uapi/linux/netfilter/nfnetlink_queue.h b/include/uapi/linux/netfilter/nfnetlink_queue.h index ef7c97f21a15..efcb7c044a74 100644 --- a/include/uapi/linux/netfilter/nfnetlink_queue.h +++ b/include/uapi/linux/netfilter/nfnetlink_queue.h @@ -62,6 +62,7 @@ enum nfqnl_attr_type { NFQA_VLAN, /* nested attribute: packet vlan info */ NFQA_L2HDR, /* full L2 header */ NFQA_PRIORITY, /* skb->priority */ + NFQA_CGROUP_CLASSID, /* __u32 cgroup classid */ __NFQA_MAX }; diff --git a/include/uapi/linux/nl80211.h b/include/uapi/linux/nl80211.h index f14621a954e1..c59fec406da5 100644 --- a/include/uapi/linux/nl80211.h +++ b/include/uapi/linux/nl80211.h @@ -1299,6 +1299,16 @@ * @NL80211_CMD_MODIFY_LINK_STA: Modify a link of an MLD station * @NL80211_CMD_REMOVE_LINK_STA: Remove a link of an MLD station * + * @NL80211_CMD_SET_HW_TIMESTAMP: Enable/disable 
HW timestamping of Timing + * measurement and Fine timing measurement frames. If %NL80211_ATTR_MAC + * is included, enable/disable HW timestamping only for frames to/from the + * specified MAC address. Otherwise enable/disable HW timestamping for + * all TM/FTM frames (including ones that were enabled with specific MAC + * address). If %NL80211_ATTR_HW_TIMESTAMP_ENABLED is not included, disable + * HW timestamping. + * The number of peers that HW timestamping can be enabled for concurrently + * is indicated by %NL80211_ATTR_MAX_HW_TIMESTAMP_PEERS. + * * @NL80211_CMD_MAX: highest used command number * @__NL80211_CMD_AFTER_LAST: internal use */ @@ -1550,6 +1560,8 @@ enum nl80211_commands { NL80211_CMD_MODIFY_LINK_STA, NL80211_CMD_REMOVE_LINK_STA, + NL80211_CMD_SET_HW_TIMESTAMP, + /* add new commands above here */ /* used to define NL80211_CMD_MAX below */ @@ -2775,6 +2787,24 @@ enum nl80211_commands { * indicates that the sub-channel is punctured. Higher 16 bits are * reserved. * + * @NL80211_ATTR_MAX_HW_TIMESTAMP_PEERS: Maximum number of peers that HW + * timestamping can be enabled for concurrently (u16), a wiphy attribute. + * A value of 0xffff indicates setting for all peers (i.e. not specifying + * an address with %NL80211_CMD_SET_HW_TIMESTAMP) is supported. + * @NL80211_ATTR_HW_TIMESTAMP_ENABLED: Indicates whether HW timestamping should + * be enabled or not (flag attribute). + * + * @NL80211_ATTR_EMA_RNR_ELEMS: Optional nested attribute for + * reduced neighbor report (RNR) elements. This attribute can be used + * only when NL80211_MBSSID_CONFIG_ATTR_EMA is enabled. + * Userspace is responsible for splitting the RNR into multiple + * elements such that each element excludes the non-transmitting + * profiles already included in the MBSSID element + * (%NL80211_ATTR_MBSSID_ELEMS) at the same index. Each EMA beacon + * will be generated by adding MBSSID and RNR elements at the same + * index. If the userspace includes more RNR elements than number of + * MBSSID elements then these will be added in every EMA beacon. + * * @NUM_NL80211_ATTR: total number of nl80211_attrs available * @NL80211_ATTR_MAX: highest attribute number currently defined * @__NL80211_ATTR_AFTER_LAST: internal use @@ -3306,6 +3336,11 @@ enum nl80211_attrs { NL80211_ATTR_PUNCT_BITMAP, + NL80211_ATTR_MAX_HW_TIMESTAMP_PEERS, + NL80211_ATTR_HW_TIMESTAMP_ENABLED, + + NL80211_ATTR_EMA_RNR_ELEMS, + /* add attributes here, update the policy in nl80211.c */ __NL80211_ATTR_AFTER_LAST, @@ -4026,6 +4061,10 @@ enum nl80211_band_iftype_attr { * @NL80211_BAND_ATTR_EDMG_BW_CONFIG: Channel BW Configuration subfield encodes * the allowed channel bandwidth configurations. * Defined by IEEE P802.11ay/D4.0 section 9.4.2.251, Table 13. 
+ * @NL80211_BAND_ATTR_S1G_MCS_NSS_SET: S1G capabilities, supported S1G-MCS and NSS + * set subfield, as in the S1G information IE, 5 bytes + * @NL80211_BAND_ATTR_S1G_CAPA: S1G capabilities information subfield as in the + * S1G information IE, 10 bytes * @NL80211_BAND_ATTR_MAX: highest band attribute currently defined * @__NL80211_BAND_ATTR_AFTER_LAST: internal use */ @@ -4046,6 +4085,9 @@ enum nl80211_band_attr { NL80211_BAND_ATTR_EDMG_CHANNELS, NL80211_BAND_ATTR_EDMG_BW_CONFIG, + NL80211_BAND_ATTR_S1G_MCS_NSS_SET, + NL80211_BAND_ATTR_S1G_CAPA, + /* keep last */ __NL80211_BAND_ATTR_AFTER_LAST, NL80211_BAND_ATTR_MAX = __NL80211_BAND_ATTR_AFTER_LAST - 1 @@ -6326,6 +6368,10 @@ enum nl80211_feature_flags { * @NL80211_EXT_FEATURE_SECURE_NAN: Device supports NAN Pairing which enables * authentication, data encryption and message integrity. * + * @NL80211_EXT_FEATURE_AUTH_AND_DEAUTH_RANDOM_TA: Device supports randomized TA + * in authentication and deauthentication frames sent to unassociated peer + * using @NL80211_CMD_FRAME. + * * @NUM_NL80211_EXT_FEATURES: number of extended features. * @MAX_NL80211_EXT_FEATURES: highest extended feature index. */ @@ -6396,6 +6442,7 @@ enum nl80211_ext_feature_index { NL80211_EXT_FEATURE_POWERED_ADDR_CHANGE, NL80211_EXT_FEATURE_PUNCT, NL80211_EXT_FEATURE_SECURE_NAN, + NL80211_EXT_FEATURE_AUTH_AND_DEAUTH_RANDOM_TA, /* add new features before the definition below */ NUM_NL80211_EXT_FEATURES, @@ -6510,8 +6557,16 @@ enum nl80211_timeout_reason { * @NL80211_SCAN_FLAG_FREQ_KHZ: report scan results with * %NL80211_ATTR_SCAN_FREQ_KHZ. This also means * %NL80211_ATTR_SCAN_FREQUENCIES will not be included. - * @NL80211_SCAN_FLAG_COLOCATED_6GHZ: scan for colocated APs reported by - * 2.4/5 GHz APs + * @NL80211_SCAN_FLAG_COLOCATED_6GHZ: scan for collocated APs reported by + * 2.4/5 GHz APs. When the flag is set, the scan logic will use the + * information from the RNR element found in beacons/probe responses + * received on the 2.4/5 GHz channels to actively scan only the 6GHz + * channels on which APs are expected to be found. Note that when not set, + * the scan logic would scan all 6GHz channels, but since transmission of + * probe requests on non PSC channels is limited, it is highly likely that + * these channels would passively be scanned. Also note that when the flag + * is set, in addition to the colocated APs, PSC channels would also be + * scanned if the user space has asked for it. 
*/ enum nl80211_scan_flags { NL80211_SCAN_FLAG_LOW_PRIORITY = 1<<0, diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h index 000eec106856..51a7addc56c6 100644 --- a/include/uapi/linux/pkt_sched.h +++ b/include/uapi/linux/pkt_sched.h @@ -719,6 +719,11 @@ enum { #define __TC_MQPRIO_SHAPER_MAX (__TC_MQPRIO_SHAPER_MAX - 1) +enum { + TC_FP_EXPRESS = 1, + TC_FP_PREEMPTIBLE = 2, +}; + struct tc_mqprio_qopt { __u8 num_tc; __u8 prio_tc_map[TC_QOPT_BITMASK + 1]; @@ -733,11 +738,22 @@ struct tc_mqprio_qopt { #define TC_MQPRIO_F_MAX_RATE 0x8 enum { + TCA_MQPRIO_TC_ENTRY_UNSPEC, + TCA_MQPRIO_TC_ENTRY_INDEX, /* u32 */ + TCA_MQPRIO_TC_ENTRY_FP, /* u32 */ + + /* add new constants above here */ + __TCA_MQPRIO_TC_ENTRY_CNT, + TCA_MQPRIO_TC_ENTRY_MAX = (__TCA_MQPRIO_TC_ENTRY_CNT - 1) +}; + +enum { TCA_MQPRIO_UNSPEC, TCA_MQPRIO_MODE, TCA_MQPRIO_SHAPER, TCA_MQPRIO_MIN_RATE64, TCA_MQPRIO_MAX_RATE64, + TCA_MQPRIO_TC_ENTRY, __TCA_MQPRIO_MAX, }; @@ -1236,6 +1252,7 @@ enum { TCA_TAPRIO_TC_ENTRY_UNSPEC, TCA_TAPRIO_TC_ENTRY_INDEX, /* u32 */ TCA_TAPRIO_TC_ENTRY_MAX_SDU, /* u32 */ + TCA_TAPRIO_TC_ENTRY_FP, /* u32 */ /* add new constants above here */ __TCA_TAPRIO_TC_ENTRY_CNT, diff --git a/include/uapi/linux/sctp.h b/include/uapi/linux/sctp.h index ed7d4ecbf53d..b7d91d4cf0db 100644 --- a/include/uapi/linux/sctp.h +++ b/include/uapi/linux/sctp.h @@ -1211,7 +1211,9 @@ enum sctp_sched_type { SCTP_SS_DEFAULT = SCTP_SS_FCFS, SCTP_SS_PRIO, SCTP_SS_RR, - SCTP_SS_MAX = SCTP_SS_RR + SCTP_SS_FC, + SCTP_SS_WFQ, + SCTP_SS_MAX = SCTP_SS_WFQ }; /* Probe Interval socket option */ diff --git a/include/uapi/linux/tc_act/tc_tunnel_key.h b/include/uapi/linux/tc_act/tc_tunnel_key.h index 49ad4033951b..37c6f612f161 100644 --- a/include/uapi/linux/tc_act/tc_tunnel_key.h +++ b/include/uapi/linux/tc_act/tc_tunnel_key.h @@ -34,6 +34,7 @@ enum { */ TCA_TUNNEL_KEY_ENC_TOS, /* u8 */ TCA_TUNNEL_KEY_ENC_TTL, /* u8 */ + TCA_TUNNEL_KEY_NO_FRAG, /* flag */ __TCA_TUNNEL_KEY_MAX, }; diff --git a/include/uapi/linux/virtio_net.h b/include/uapi/linux/virtio_net.h index b4062bed186a..12c1c9699935 100644 --- a/include/uapi/linux/virtio_net.h +++ b/include/uapi/linux/virtio_net.h @@ -61,6 +61,7 @@ #define VIRTIO_NET_F_GUEST_USO6 55 /* Guest can handle USOv6 in. */ #define VIRTIO_NET_F_HOST_USO 56 /* Host can handle USO in. */ #define VIRTIO_NET_F_HASH_REPORT 57 /* Supports hash report */ +#define VIRTIO_NET_F_GUEST_HDRLEN 59 /* Guest provides the exact hdr_len value. */ #define VIRTIO_NET_F_RSS 60 /* Supports RSS RX steering */ #define VIRTIO_NET_F_RSC_EXT 61 /* extended coalescing info */ #define VIRTIO_NET_F_STANDBY 62 /* Act as standby for another device |
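/*
 * Illustrative kernel-side sketch (not part of the patch): how a driver
 * might fold the per-traffic-class TC_FP_* values above into a
 * preemptible-TC bitmask such as the new ocelot_mm_state::preemptible_tcs
 * field. tc_to_fp[] and build_preemptible_mask() are hypothetical.
 */
static u8 build_preemptible_mask(const u32 tc_to_fp[], int num_tc)
{
	u8 mask = 0;
	int tc;

	for (tc = 0; tc < num_tc; tc++)
		if (tc_to_fp[tc] == TC_FP_PREEMPTIBLE)
			mask |= BIT(tc);

	return mask;
}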