| author | Eric Dumazet | 2012-12-11 15:54:33 +0000 |
|---|---|---|
| committer | David S. Miller | 2012-12-12 00:16:47 -0500 |
| commit | 1abbe1394a84c10919e32242318e715b04d7e33b (patch) | |
| tree | ec3930413339cd3c77498c133b41475737494279 /include | |
| parent | cae49ede00ec3d0cda290b03fee55b72b49efc11 (diff) | |
pkt_sched: avoid requeues if possible
With BQL being deployed, the following behavior becomes more likely:
we dequeue a packet from the qdisc in dequeue_skb(), then realize the target
tx queue is in XOFF state in sch_direct_xmit(), and we have to hold the
skb in gso_skb for later.
This shows up in the stats (tc -s qdisc dev eth0) as requeues.
The problem with these requeues is that high priority packets cannot be
dequeued as long as this (possibly low priority, big TSO) packet is not
removed from gso_skb.
At 1 Gbps, a full size TSO packet represents about 500 us of extra latency.
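As a back-of-the-envelope check (assuming the usual 64 KB GSO size limit, which is not stated in the commit itself), that figure is simply the serialization time of one maximal TSO burst at line rate:

$$ \frac{65536\ \text{bytes} \times 8\ \text{bits/byte}}{10^9\ \text{bits/s}} \approx 524\ \mu\text{s} $$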
In some cases, we know that all packets dequeued from a qdisc are
for a particular, known txq:
- If the device is not multiqueue
- For all MQ/MQPRIO slave qdiscs
This patch introduces a new qdisc flag, TCQ_F_ONETXQUEUE, to mark
this capability, so that dequeue_skb() dequeues a packet
only if the associated txq is not stopped.
This indeed reduces latencies for high priority packets (or improves fairness
with sfq/fq_codel), and almost eliminates qdisc 'requeues'.
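The dequeue side of the mechanism can be sketched as below. This is a simplified illustration, not the literal net/sched/sch_generic.c hunk (the diff shown on this page is limited to include/); the identifiers used (q->dev_queue, q->gso_skb, netdev_get_tx_queue(), netif_xmit_frozen_or_stopped()) all exist in the tree at this point.

```c
/* Simplified sketch: with TCQ_F_ONETXQUEUE set, every skb in this qdisc is
 * destined for q->dev_queue, so dequeue_skb() can check that txq first and
 * avoid pulling a packet it would only have to park in gso_skb.
 */
static struct sk_buff *dequeue_skb(struct Qdisc *q)
{
	struct sk_buff *skb = q->gso_skb;
	const struct netdev_queue *txq = q->dev_queue;

	if (unlikely(skb)) {
		/* A previously requeued skb: retry only if its txq woke up. */
		txq = netdev_get_tx_queue(txq->dev, skb_get_queue_mapping(skb));
		if (!netif_xmit_frozen_or_stopped(txq)) {
			q->gso_skb = NULL;
			q->q.qlen--;
		} else {
			skb = NULL;
		}
	} else {
		/* The new check: skip the dequeue entirely while the single,
		 * known txq is frozen or stopped, instead of dequeueing and
		 * then requeueing in sch_direct_xmit().
		 */
		if (!(q->flags & TCQ_F_ONETXQUEUE) ||
		    !netif_xmit_frozen_or_stopped(txq))
			skb = q->dequeue(q);
	}
	return skb;
}
```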
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include')
-rw-r--r-- | include/net/sch_generic.h | 7 |
1 file changed, 7 insertions, 0 deletions
```diff
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 4616f468d599..1540f9c2fcf4 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -50,6 +50,13 @@ struct Qdisc {
 #define TCQ_F_INGRESS		2
 #define TCQ_F_CAN_BYPASS	4
 #define TCQ_F_MQROOT		8
+#define TCQ_F_ONETXQUEUE	0x10 /* dequeue_skb() can assume all skbs are for
+					* q->dev_queue : It can test
+					* netif_xmit_frozen_or_stopped() before
+					* dequeueing next packet.
+					* Its true for MQ/MQPRIO slaves, or non
+					* multiqueue device.
+					*/
 #define TCQ_F_WARN_NONWC	(1 << 16)
 	int			padded;
 	const struct Qdisc_ops	*ops;
```
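Where the flag gets set is outside this include/-limited diff. A hedged sketch of the two cases named above (non-multiqueue device, MQ/MQPRIO slaves) could look like the helpers below; these helper names are hypothetical, the actual commit sets the bit directly in the relevant setup paths, but netif_is_multiqueue() is an existing helper.

```c
/* Case 1: root qdisc of a device with a single tx queue. */
static void mark_if_single_queue(struct Qdisc *qdisc, struct net_device *dev)
{
	if (!netif_is_multiqueue(dev))
		qdisc->flags |= TCQ_F_ONETXQUEUE;
}

/* Case 2: an MQ/MQPRIO slave is bound to exactly one tx queue by design,
 * so the flag can be set unconditionally when the slave is created.
 */
static void mark_mq_slave(struct Qdisc *qdisc)
{
	qdisc->flags |= TCQ_F_ONETXQUEUE;
}
```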