| author | Willy Tarreau | 2012-12-02 11:49:27 +0000 |
|---|---|---|
| committer | David S. Miller | 2012-12-02 20:23:01 -0500 |
| commit | 02275a2ee7c0ea475b6f4a6428f5df592bc9d30b | (patch) |
| tree | 820c92949d326bcf394eca339453a0389a998300 | /net |
| parent | 077b393d05915f04e2629bfc47c6fce95cae7d3f | (diff) |
tcp: don't abort splice() after small transfers
TCP coalescing added a regression in splice(socket->pipe) performance for
some workloads, because of the way tcp_read_sock() is implemented.
The reason for this is the break when (offset + 1 != skb->len).
Since we released the socket lock, this condition can occur if the TCP
stack added a fragment to the skb, which can happen with TCP coalescing.
So let's go back to the beginning of the loop when this happens,
to give a chance to splice more frags per system call.
Doing so fixes the issue and makes GRO 10% faster than LRO
on CPU-bound splice() workloads instead of the opposite.
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
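For readers unfamiliar with the affected workload, the sketch below shows the general shape of a CPU-bound splice(socket->pipe) consumer of the kind the message refers to: data is moved from a TCP socket into a pipe and then drained, without copying payload through user memory. It is illustrative only and not part of the patch; the assumption that fd 0 is already a connected TCP socket, the 256 KB chunk size, and the /dev/null sink are arbitrary choices for the example. Before this fix, a splice() call could return after a small transfer whenever coalescing grew the skb under tcp_read_sock(), multiplying system calls; with the fix, more data is moved per call.

```c
/* Illustrative sketch only, not part of the patch.
 * Assumes fd 0 is already a connected TCP socket (e.g. via socket
 * activation); chunk size and /dev/null sink are arbitrary.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

#define CHUNK (256 * 1024)

int main(void)
{
	int sock = STDIN_FILENO;	/* assumed: connected TCP socket */
	int sink = open("/dev/null", O_WRONLY);
	int pipefd[2];

	if (sink < 0 || pipe(pipefd) < 0)
		return 1;

	for (;;) {
		/* Socket -> pipe without copying through user memory.
		 * A short return here is the symptom the patch addresses:
		 * each small transfer costs an extra system call.
		 */
		ssize_t in = splice(sock, NULL, pipefd[1], NULL, CHUNK,
				    SPLICE_F_MOVE | SPLICE_F_MORE);
		if (in <= 0)
			break;

		/* Drain the pipe so the next splice() has room. */
		while (in > 0) {
			ssize_t out = splice(pipefd[0], NULL, sink, NULL,
					     (size_t)in, SPLICE_F_MOVE);
			if (out <= 0)
				return 1;
			in -= out;
		}
	}
	return 0;
}
```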
Diffstat (limited to 'net')
-rw-r--r-- | net/ipv4/tcp.c | 12 |
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 1aca02c9911e..8fc5b3bd6075 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1494,15 +1494,19 @@ int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
 				copied += used;
 				offset += used;
 			}
-			/*
-			 * If recv_actor drops the lock (e.g. TCP splice
+			/* If recv_actor drops the lock (e.g. TCP splice
 			 * receive) the skb pointer might be invalid when
 			 * getting here: tcp_collapse might have deleted it
 			 * while aggregating skbs from the socket queue.
 			 */
-			skb = tcp_recv_skb(sk, seq-1, &offset);
-			if (!skb || (offset+1 != skb->len))
+			skb = tcp_recv_skb(sk, seq - 1, &offset);
+			if (!skb)
 				break;
+			/* TCP coalescing might have appended data to the skb.
+			 * Try to splice more frags
+			 */
+			if (offset + 1 != skb->len)
+				continue;
 		}
 		if (tcp_hdr(skb)->fin) {
 			sk_eat_skb(sk, skb, false);