Diffstat (limited to 'Documentation/networking')
 -rw-r--r--  Documentation/networking/dpaa.txt    | 194
 -rw-r--r--  Documentation/networking/scaling.txt |   2
 -rw-r--r--  Documentation/networking/tcp.txt     |  31
3 files changed, 208 insertions, 19 deletions
diff --git a/Documentation/networking/dpaa.txt b/Documentation/networking/dpaa.txt
new file mode 100644
index 000000000000..76e016d4d344
--- /dev/null
+++ b/Documentation/networking/dpaa.txt
@@ -0,0 +1,194 @@
+The QorIQ DPAA Ethernet Driver
+==============================
+
+Authors:
+Madalin Bucur <madalin.bucur@nxp.com>
+Camelia Groza <camelia.groza@nxp.com>
+
+Contents
+========
+
+	- DPAA Ethernet Overview
+	- DPAA Ethernet Supported SoCs
+	- Configuring DPAA Ethernet in your kernel
+	- DPAA Ethernet Frame Processing
+	- DPAA Ethernet Features
+	- Debugging
+
+DPAA Ethernet Overview
+======================
+
+DPAA stands for Data Path Acceleration Architecture. It is a set of
+networking acceleration IPs available on several generations of SoCs,
+both PowerPC and ARM64.
+
+The Freescale DPAA architecture consists of a series of hardware blocks
+that support Ethernet connectivity. The Ethernet driver depends upon the
+following drivers in the Linux kernel:
+
+ - Peripheral Access Memory Unit (PAMU) (* needed only for PPC platforms)
+    drivers/iommu/fsl_*
+ - Frame Manager (FMan)
+    drivers/net/ethernet/freescale/fman
+ - Queue Manager (QMan), Buffer Manager (BMan)
+    drivers/soc/fsl/qbman
+
+A simplified view of the dpaa_eth interfaces mapped to FMan MACs:
+
+  dpaa_eth       /eth0\     ...       /ethN\
+  driver        |      |             |      |
+  -------------   ----   -----------   ----   -------------
+       -Ports  / Tx  Rx \    ...    / Tx  Rx \
+  FMan        |          |         |          |
+       -MACs  |   MAC0   |         |   MACN   |
+             /   dtsec0   \  ...  /   dtsecN   \ (or tgec)
+            /              \     /              \(or memac)
+  ---------  --------------  ---  --------------  ---------
+      FMan, FMan Port, FMan SP, FMan MURAM drivers
+  ---------------------------------------------------------
+      FMan HW blocks: MURAM, MACs, Ports, SP
+  ---------------------------------------------------------
+
+The dpaa_eth relation to the QMan, BMan and FMan:
+              ________________________________
+  dpaa_eth   /            eth0                \
+  driver    /                                  \
+  ---------   -^-   -^-   -^-   ---   ---------
+  QMan driver / \   / \   / \  \   /  | BMan    |
+             |Rx | |Rx | |Tx | |Tx |  | driver  |
+  ---------  |Dfl| |Err| |Cnf| |FQs|  |         |
+  QMan HW    |FQ | |FQ | |FQs| |   |  |         |
+             /   \ /   \ /   \  \ /   |         |
+  ---------   ---   ---   ---   -v-   ---------
+            |        FMan QMI         |         |
+            | FMan HW       FMan BMI  | BMan HW |
+              -----------------------   --------
+
+where the acronyms used above (and in the code) are:
+DPAA = Data Path Acceleration Architecture
+FMan = DPAA Frame Manager
+QMan = DPAA Queue Manager
+BMan = DPAA Buffer Manager
+QMI = QMan interface in FMan
+BMI = BMan interface in FMan
+FMan SP = FMan Storage Profiles
+MURAM = Multi-user RAM in FMan
+FQ = QMan Frame Queue
+Rx Dfl FQ = default reception FQ
+Rx Err FQ = Rx error frames FQ
+Tx Cnf FQ = Tx confirmation FQs
+Tx FQs = transmission frame queues
+dtsec = datapath three speed Ethernet controller (10/100/1000 Mbps)
+tgec = ten gigabit Ethernet controller (10 Gbps)
+memac = multirate Ethernet MAC (10/100/1000/10000 Mbps)
+
+DPAA Ethernet Supported SoCs
+============================
+
+The DPAA drivers enable the Ethernet controllers present on the following
+SoCs:
+
+# PPC
+P1023
+P2041
+P3041
+P4080
+P5020
+P5040
+T1023
+T1024
+T1040
+T1042
+T2080
+T4240
+B4860
+
+# ARM
+LS1043A
+LS1046A
+
+Configuring DPAA Ethernet in your kernel
+========================================
+
+To enable the DPAA Ethernet driver, the following Kconfig options are
+required:
+
+# common for arch/arm64 and arch/powerpc platforms
+CONFIG_FSL_DPAA=y
+CONFIG_FSL_FMAN=y
+CONFIG_FSL_DPAA_ETH=y
+CONFIG_FSL_XGMAC_MDIO=y
+
+# for arch/powerpc only
+CONFIG_FSL_PAMU=y
+
+# common options needed for the PHYs used on the RDBs
+CONFIG_VITESSE_PHY=y
+CONFIG_REALTEK_PHY=y
+CONFIG_AQUANTIA_PHY=y
+
+DPAA Ethernet Frame Processing
+==============================
+
+On Rx, buffers for the incoming frames are retrieved from one of three
+existing buffer pools. The driver initializes and seeds these pools, one
+for each buffer size: 1KB, 2KB and 4KB.
+
+On Tx, all transmitted frames are returned to the driver through Tx
+confirmation frame queues. The driver is then responsible for freeing the
+buffers. To do this properly, the driver adds a backpointer to the skb in
+the buffer before transmission. When the buffer returns to the driver on a
+confirmation FQ, the skb can be correctly consumed.
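+
+The following is a minimal sketch of the backpointer idea. It is
+illustrative only: the helper names and the exact buffer layout are
+assumptions, not the driver's actual code.
+
+	#include <linux/skbuff.h>
+
+	/* Before enqueueing for Tx: stash the skb pointer at the start
+	 * of the buffer so it can be recovered on the confirmation path.
+	 */
+	static void example_tx_store(void *buf_start, struct sk_buff *skb)
+	{
+		*(struct sk_buff **)buf_start = skb;
+	}
+
+	/* On Tx confirmation: recover the backpointer, free the skb. */
+	static void example_tx_confirm(void *buf_start)
+	{
+		struct sk_buff *skb = *(struct sk_buff **)buf_start;
+
+		dev_kfree_skb(skb);
+	}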
+
+DPAA Ethernet Features
+======================
+
+Currently the DPAA Ethernet driver enables the basic features required for
+a Linux Ethernet driver. Support for advanced features will be added
+gradually.
+
+The driver supports Rx and Tx checksum offloading for UDP and TCP. Currently
+the Rx checksum offload feature is enabled by default and cannot be
+controlled through ethtool.
+
+The driver supports multiple prioritized Tx traffic classes. Priorities
+range from 0 (lowest) to 3 (highest). These are mapped to HW workqueues
+with strict priority levels. Each traffic class contains NR_CPU Tx queues.
+By default, only one traffic class is enabled and the lowest priority Tx
+queues are used. Higher priority traffic classes can be enabled with the
+mqprio qdisc, as shown in the example command after this list. skb priority
+levels are mapped to traffic classes as follows:
+
+	* priorities 0 to 3 - traffic class 0 (low priority)
+	* priorities 4 to 7 - traffic class 1 (medium-low priority)
+	* priorities 8 to 11 - traffic class 2 (medium-high priority)
+	* priorities 12 to 15 - traffic class 3 (high priority)
+
+For example, the following command enables all four traffic classes on an
+interface:
+
+tc qdisc add dev <int> root handle 1: \
+	 mqprio num_tc 4 map 0 0 0 0 1 1 1 1 2 2 2 2 3 3 3 3 hw 1
+
+Debugging
+=========
+
+The following statistics are exported for each interface through ethtool:
+
+	- interrupt count per CPU
+	- Rx packets count per CPU
+	- Tx packets count per CPU
+	- Tx confirmed packets count per CPU
+	- Tx S/G frames count per CPU
+	- Tx error count per CPU
+	- Rx error count per CPU
+	- Rx error count per type
+	- congestion related statistics:
+		- congestion status
+		- time spent in congestion
+		- number of times the device entered congestion
+		- dropped packets count per cause
+
+The driver also exports the following information in sysfs:
+
+	- the FQ IDs for each FQ type
+	/sys/devices/platform/dpaa-ethernet.0/net/<int>/fqids
+
+	- the IDs of the buffer pools in use
+	/sys/devices/platform/dpaa-ethernet.0/net/<int>/bpids
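+
+For example, assuming an interface named eth0 on the first DPAA platform
+device (both names are illustrative), the counters and IDs above can be
+read with:
+
+	# per-CPU counters and congestion related statistics
+	ethtool -S eth0
+
+	# frame queue IDs and buffer pool IDs used by the interface
+	cat /sys/devices/platform/dpaa-ethernet.0/net/eth0/fqids
+	cat /sys/devices/platform/dpaa-ethernet.0/net/eth0/bpids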
diff --git a/Documentation/networking/scaling.txt b/Documentation/networking/scaling.txt
index 59f4db2a0c85..f55639d71d35 100644
--- a/Documentation/networking/scaling.txt
+++ b/Documentation/networking/scaling.txt
@@ -122,7 +122,7 @@ associated flow of the packet. The hash is either provided by hardware
 or will be computed in the stack. Capable hardware can pass the hash in
 the receive descriptor for the packet; this would usually be the same
 hash used for RSS (e.g. computed Toeplitz hash). The hash is saved in
-skb->rx_hash and can be used elsewhere in the stack as a hash of the
+skb->hash and can be used elsewhere in the stack as a hash of the
 packet’s flow.
 
 Each receive hardware queue has an associated list of CPUs to which
diff --git a/Documentation/networking/tcp.txt b/Documentation/networking/tcp.txt
index bdc4c0db51e1..9c7139d57e57 100644
--- a/Documentation/networking/tcp.txt
+++ b/Documentation/networking/tcp.txt
@@ -1,7 +1,7 @@
 TCP protocol
 ============
 
-Last updated: 9 February 2008
+Last updated: 3 June 2017
 
 Contents
 ========
@@ -29,18 +29,19 @@ As of 2.6.13, Linux supports pluggable congestion control algorithms.
 A congestion control mechanism can be registered through functions in
 tcp_cong.c. The functions used by the congestion control mechanism are
 registered via passing a tcp_congestion_ops struct to
-tcp_register_congestion_control. As a minimum name, ssthresh,
-cong_avoid must be valid.
+tcp_register_congestion_control. As a minimum, the congestion control
+mechanism must provide a valid name and must implement either ssthresh,
+cong_avoid and undo_cwnd hooks or the "omnipotent" cong_control hook.
 
 Private data for a congestion control mechanism is stored in tp->ca_priv.
 tcp_ca(tp) returns a pointer to this space.  This is preallocated space - it
 is important to check the size of your private data will fit this space, or
-alternatively space could be allocated elsewhere and a pointer to it could
+alternatively, space could be allocated elsewhere and a pointer to it could
 be stored here.
 
 There are three kinds of congestion control algorithms currently: The
 simplest ones are derived from TCP reno (highspeed, scalable) and just
-provide an alternative the congestion window calculation. More complex
+provide an alternative congestion window calculation. More complex
 ones like BIC try to look at other events to provide better
 heuristics.  There are also round trip time based algorithms like
 Vegas and Westwood+.
@@ -49,21 +50,15 @@ Good TCP congestion control is a complex problem because the algorithm
 needs to maintain fairness and performance. Please review current
 research and RFC's before developing new modules.
 
-The method that is used to determine which congestion control mechanism is
-determined by the setting of the sysctl net.ipv4.tcp_congestion_control.
-The default congestion control will be the last one registered (LIFO);
-so if you built everything as modules, the default will be reno. If you
-build with the defaults from Kconfig, then CUBIC will be builtin (not a
-module) and it will end up the default.
+The default congestion control mechanism is chosen based on the
+DEFAULT_TCP_CONG Kconfig parameter. If you really want a particular default
+value then you can set it using sysctl net.ipv4.tcp_congestion_control. The
+module will be autoloaded if needed and you will get the expected protocol. If
+you ask for an unknown congestion method, then the sysctl attempt will fail.
 
-If you really want a particular default value then you will need
-to set it with the sysctl. If you use a sysctl, the module will be autoloaded
-if needed and you will get the expected protocol. If you ask for an
-unknown congestion method, then the sysctl attempt will fail.
-
-If you remove a tcp congestion control module, then you will get the next
+If you remove a TCP congestion control module, then you will get the next
 available one. Since reno cannot be built as a module, and cannot be
-deleted, it will always be available.
+removed, it will always be available.
 
 How the new TCP output machine [nyi] works.
 ===========================================
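
As a rough illustration of the registration interface described by the
tcp.txt changes above, a minimal congestion control module could look like
the sketch below. This is not code from this commit: the module name
"example" and the function names are made up, and the hooks simply mirror
Reno-style behavior (the ssthresh, cong_avoid and undo_cwnd hooks, rather
than the "omnipotent" cong_control hook).

#include <linux/module.h>
#include <net/tcp.h>

/* Halve the congestion window on loss, as Reno does. */
static u32 example_ssthresh(struct sock *sk)
{
	const struct tcp_sock *tp = tcp_sk(sk);

	return max(tp->snd_cwnd >> 1U, 2U);
}

/* Standard slow start and congestion avoidance window growth. */
static void example_cong_avoid(struct sock *sk, u32 ack, u32 acked)
{
	struct tcp_sock *tp = tcp_sk(sk);

	if (!tcp_is_cwnd_limited(sk))
		return;

	if (tcp_in_slow_start(tp)) {
		acked = tcp_slow_start(tp, acked);
		if (!acked)
			return;
	}
	tcp_cong_avoid_ai(tp, tp->snd_cwnd, acked);
}

/* Restore the window recorded before the last loss event. */
static u32 example_undo_cwnd(struct sock *sk)
{
	const struct tcp_sock *tp = tcp_sk(sk);

	return max(tp->snd_cwnd, tp->prior_cwnd);
}

static struct tcp_congestion_ops example_cc __read_mostly = {
	.name		= "example",
	.owner		= THIS_MODULE,
	.ssthresh	= example_ssthresh,
	.cong_avoid	= example_cong_avoid,
	.undo_cwnd	= example_undo_cwnd,
};

static int __init example_cc_register(void)
{
	return tcp_register_congestion_control(&example_cc);
}

static void __exit example_cc_unregister(void)
{
	tcp_unregister_congestion_control(&example_cc);
}

module_init(example_cc_register);
module_exit(example_cc_unregister);
MODULE_LICENSE("GPL");

With such a module available, sysctl net.ipv4.tcp_congestion_control=example
would select it, autoloading the module if needed, per the behavior the
updated text describes.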