author		David S. Miller	2021-03-14 14:48:26 -0700
committer	David S. Miller	2021-03-14 14:48:26 -0700
commit		c6baf7eeb0cf82f6a90a703f6548250fc85cfdcc (patch)
tree		c6bf285f9bd54f1a0b57d604226f34895f4034f2 /net/psample
parent		3f79eb3c3a6abaa8f9900b5e40994060d7341cbc (diff)
parent		d206121faf8bb2239cd970af0bd32f5203780427 (diff)
Merge branch 'skbuff-micro-optimize-flow-dissection'
Alexander Lobakin says:
====================
skbuff: micro-optimize flow dissection
This little series makes all of the flow dissection functions take
the raw input data pointer as const (patches 1-5) and shuffles the
branches in __skb_header_pointer() according to their hit probability.
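For reference, a minimal sketch of the reordered helper (close to,
but not necessarily verbatim, the resulting include/linux/skbuff.h
code): the in-linear-data fast path is tested first and annotated
likely(), keeping the skb_copy_bits() fallback off the hot path.

	static inline void * __must_check
	__skb_header_pointer(const struct sk_buff *skb, int offset, int len,
			     const void *data, int hlen, void *buffer)
	{
		/* Common case: the requested header already sits in the
		 * linear data area -- return a pointer straight into it.
		 */
		if (likely(hlen - offset >= len))
			return (void *)data + offset;

		/* Slow path: linearize the header into the caller's buffer. */
		if (!skb || unlikely(skb_copy_bits(skb, offset, buffer, len) < 0))
			return NULL;

		return buffer;
	}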
The result is +20 Mbps per flow/core with one Flow Dissector pass
per packet. This benefits RPS (with software hashing), drivers that
use eth_get_headlen() on their Rx path, and so on.
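As a usage sketch (a hypothetical driver fragment, not part of this
series), this is roughly how an Rx path spends its single Flow
Dissector pass: eth_get_headlen() sizes the header copy into the skb
linear area while the payload stays in the page fragment. RX_HDR_SIZE
is an assumed driver-local cap, not a kernel define.

	/* One flow-dissector pass per packet to find the header length. */
	headlen = size;
	if (headlen > RX_HDR_SIZE)
		headlen = eth_get_headlen(netdev, va, RX_HDR_SIZE);

	/* Copy only the headers; the payload is attached as a frag. */
	memcpy(__skb_put(skb, headlen), va, ALIGN(headlen, sizeof(long)));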
From v2 [1]:
- reword some commit messages as a potential fix for NIPA;
- no functional changes.
From v1 [0]:
- rebase on top of the latest net-next. This was super-weird: I
  double-checked that the series applied locally with no conflicts,
  yet on Patchwork it didn't apply;
- no other changes.
[0] https://lore.kernel.org/netdev/20210312194538.337504-1-alobakin@pm.me
[1] https://lore.kernel.org/netdev/20210313113645.5949-1-alobakin@pm.me
====================
Signed-off-by: David S. Miller <davem@davemloft.net>