path: root/drivers
2021-06-10  net: ipa: don't assume mem array indexed by ID  (Alex Elder)

Change ipa_mem_valid() to iterate over the entries using a u32 index variable rather than using a memory region ID. Use the ID found inside the memory descriptor rather than the loop index.

Change ipa_mem_size_valid() to iterate over the entries, but without assuming the array index is the memory region ID. "Empty" entries will have zero size, and we'll temporarily assume such entries have zero offset as well (they all do, currently).

Similarly, don't assume the mem[] array is indexed by ID in ipa_mem_config(). There, "empty" entries will have a zero canary count, so no special assumptions are needed to handle them correctly.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

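For illustration, a minimal sketch of the resulting iteration pattern (the mem_count, mem[] and id names follow the commit text; ipa_mem_config_one() is a hypothetical helper):

    u32 i;

    /* Walk the array by position; each entry carries its own region ID,
     * so the loop index no longer doubles as the ID.
     */
    for (i = 0; i < ipa->mem_count; i++) {
            const struct ipa_mem *mem = &ipa->mem[i];

            if (!mem->canary_count)
                    continue;       /* "empty" entry: nothing to configure */

            ipa_mem_config_one(ipa, mem->id, mem);  /* hypothetical helper */
    }
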
2021-06-10  ibmvnic: Allow device probe if the device is not ready at boot  (Cristobal Forno)

Allow the device to be initialized at a later time if it is not available at boot. The device will be allowed to probe but will be given a "down" state. After completing device probe and registering the net device, the driver will await an interrupt signal from its partner device, indicating that it is ready for boot. The driver will schedule a work event to perform the necessary procedure and begin operation.

Co-developed-by: Thomas Falcon <tlfalcon@linux.ibm.com>
Signed-off-by: Thomas Falcon <tlfalcon@linux.ibm.com>
Signed-off-by: Cristobal Forno <cforno12@linux.ibm.com>
Acked-by: Lijun Pan <lijunp213@gmail.com>
Reviewed-by: Dany Madden <drt@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-10  net: marvell: prestera: add LAG support  (Serhiy Boiko)

The following features are supported:

  - LAG basic operations
      - create/delete LAG
      - add/remove a member to LAG
      - enable/disable member in LAG
  - LAG Bridge support
  - LAG VLAN support
  - LAG FDB support

Limitations:

  - Only HASH lag tx type is supported.
  - The hash parameters are not configurable. They are applied during the LAG creation stage.
  - Enslaving a port to the LAG device that already has an upper device is not supported.

Co-developed-by: Andrii Savka <andrii.savka@plvision.eu>
Signed-off-by: Andrii Savka <andrii.savka@plvision.eu>
Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu>
Co-developed-by: Vadym Kochan <vkochan@marvell.com>
Signed-off-by: Vadym Kochan <vkochan@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-10  net: marvell: prestera: do not propagate netdev events to prestera_switchdev.c  (Vadym Kochan)

Replace prestera_bridge_port_event(...) with prestera_bridge_port_join(...) and prestera_bridge_port_leave(). This simplifies the code by keeping the netdev-event-specific handling in one place, prestera_main.c.

Signed-off-by: Vadym Kochan <vkochan@marvell.com>
CC: Vladimir Oltean <olteanv@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-10  net: marvell: prestera: move netdev topology validation to prestera_main  (Vadym Kochan)

Move handling of the PRECHANGEUPPER event from prestera_switchdev to prestera_main, which is responsible for basic netdev event handling and for routing events to the related module.

Signed-off-by: Vadym Kochan <vkochan@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-10  soc: qcom: ipa: Remove superfluous error message around platform_get_irq()  (Tan Zhongjun)

platform_get_irq() already prints an error message telling that the interrupt is missing, hence there is no need to duplicate that message in the drivers.

Signed-off-by: Tan Zhongjun <tanzhongjun@yulong.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

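The resulting pattern, as a generic sketch (not the exact hunk from this commit):

    irq = platform_get_irq(pdev, 0);
    if (irq < 0)
            return irq;     /* platform_get_irq() has already logged the error */
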
2021-06-10  ibmvnic: Use list_for_each_entry() to simplify code in ibmvnic.c  (Wang Hai)

Convert list_for_each() to list_for_each_entry() where applicable. This simplifies the code.

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wang Hai <wanghai38@huawei.com>
Acked-by: Lijun Pan <lijunp213@gmail.com>
Reviewed-by: Dany Madden <drt@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

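The shape of such a conversion, on a hypothetical list (handle(), struct error_buff and the list name are illustrative, not the driver's actual names):

    /* Before: iterate raw list_head nodes, then recover the entry */
    struct list_head *pos;

    list_for_each(pos, &adapter->errors) {
            struct error_buff *err = list_entry(pos, struct error_buff, list);

            handle(err);
    }

    /* After: the entry-aware iterator hides the list_entry() boilerplate */
    struct error_buff *err;

    list_for_each_entry(err, &adapter->errors, list)
            handle(err);
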
2021-06-10  mt76: mt7615: Use devm_platform_get_and_ioremap_resource()  (Yang Yingliang)

Use devm_platform_get_and_ioremap_resource() to simplify code.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-10  net: mido: mdio-mux-bcm-iproc: Use devm_platform_get_and_ioremap_resource()  (Yang Yingliang)

Use devm_platform_get_and_ioremap_resource() to simplify code and avoid a null-ptr-deref by checking 'res' in it.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

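What the helper replaces, as a generic platform-driver sketch:

    /* Before: separate lookup and remap; drivers that touch 'res'
     * directly must NULL-check it first.
     */
    res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
    base = devm_ioremap_resource(&pdev->dev, res);
    if (IS_ERR(base))
            return PTR_ERR(base);

    /* After: one call that validates the resource and returns it via 'res' */
    base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
    if (IS_ERR(base))
            return PTR_ERR(base);
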
2021-06-10  net: stmmac: Fix mixed enum type warning  (Wong Vee Khee)

The commit 5a5586112b92 ("net: stmmac: support FPE link partner hand-shaking procedure") introduced the following coverity warning:

  "Parse warning (PW.MIXED_ENUM_TYPE)"
  "1. mixed_enum_type: enumerated type mixed with another type"

This is caused by "lo_state" and "lp_state", both of type enum stmmac_fpe_state, being assigned FPE_EVENT_UNKNOWN, which is a macro defined as 0. Fix this by assigning both variables the correct enum value.

Fixes: 5a5586112b92 ("net: stmmac: support FPE link partner hand-shaking procedure")
Signed-off-by: Wong Vee Khee <vee.khee.wong@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

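A minimal sketch of the mismatch and the fix; the enum members below follow the stmmac driver, but treat the exact replacement value as an assumption:

    #define FPE_EVENT_UNKNOWN 0            /* plain int macro */

    enum stmmac_fpe_state {
            FPE_STATE_OFF = 0,
            FPE_STATE_CAPABLE,
            FPE_STATE_ENTERING_ON,
            FPE_STATE_ON,
    };

    enum stmmac_fpe_state lo_state = FPE_EVENT_UNKNOWN;  /* warning: int macro mixed with enum */
    enum stmmac_fpe_state lp_state = FPE_STATE_OFF;      /* fix: use the enum's own zero value */
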
2021-06-10  fjes: check return value after calling platform_get_resource()  (Yang Yingliang)

platform_get_resource() may return NULL, which would lead to a null-ptr-deref when the resource is used; we need to check the return value.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

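The generic shape of the check (a sketch, not the exact fjes hunk):

    res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
    if (!res)
            return -EINVAL;   /* bail out instead of dereferencing NULL */

    /* only now is it safe to touch res->start, resource_size(res), ... */
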
2021-06-10  net: axienet: Use devm_platform_get_and_ioremap_resource()  (Yang Yingliang)

Use devm_platform_get_and_ioremap_resource() to simplify code.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-10  net: w5100: Use devm_platform_get_and_ioremap_resource()  (Yang Yingliang)

Use devm_platform_get_and_ioremap_resource() to simplify code.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-10  net: ixp4xx_hss: add braces {} to all arms of the statement  (Peng Li)

Braces {} should be used on all arms of this statement.

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

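The rule on a generic if/else (all names hypothetical):

    /* Before: one arm braced, the other not */
    if (port->mode == MODE_HDLC) {
            setup_hdlc(port);
    } else
            setup_raw(port);

    /* After: braces on every arm once any arm needs them */
    if (port->mode == MODE_HDLC) {
            setup_hdlc(port);
    } else {
            setup_raw(port);
    }
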
2021-06-10  net: ixp4xx_hss: fix the comments style issue  (Peng Li)

This patch fixes the following comment style issues:

  - Networking block comments don't use an empty /* line; use "/* Comment...".
  - Block comments use * on subsequent lines.
  - Block comments use a trailing */ on a separate line.

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

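The preferred networking style, for reference:

    /* Networking style: text starts on the opening line,
     * continuation lines align on '*',
     * and the closing marker sits on its own line.
     */
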
2021-06-10  net: ixp4xx_hss: remove redundant spaces  (Peng Li)

According to checkpatch.pl, a space is prohibited after an open parenthesis '('.

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-10  net: ixp4xx_hss: add some required spaces  (Peng Li)

Add the space required before an open parenthesis '('. Add the space required after a close brace '}'.

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-10  net: ixp4xx_hss: move out assignment in if condition  (Peng Li)

Assignments should not be used inside an if condition.

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

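The transformation on a generic example (hss_prepare() is a hypothetical helper):

    /* Before: assignment buried in the condition */
    if ((err = hss_prepare(port)) != 0)
            return err;

    /* After: assignment moved out; the condition only tests */
    err = hss_prepare(port);
    if (err)
            return err;
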
2021-06-10  net: ixp4xx_hss: fix the code style issue about "foo* bar"  (Peng Li)

Fix the checkpatch error: "foo* bar" should be "foo *bar", and "(foo*)" should be "(foo *)".

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-10  net: ixp4xx_hss: add blank line after declarations  (Peng Li)

This patch fixes the checkpatch error about missing a blank line after declarations.

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-10  net: ixp4xx_hss: remove redundant blank lines  (Peng Li)

This patch removes some redundant blank lines.

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-10  Merge tag 'mlx5-updates-2021-06-09' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux  (David S. Miller)
Saeed Mahameed says:

====================
mlx5-updates-2021-06-09

Introduce steering header insert/remove and switchdev bridge offloads

1) From Yevgeny, steering header insert/remove support

ConnectX supports offloading of various encapsulations and decapsulations (e.g. VXLAN), which are performed by the 'Packet Reformat' action. Starting with ConnectX-6 DX, a new reformat type is supported - INSERT_HEADER. This reformat allows inserting an arbitrary size buffer at a selected location in the packet on RX flows.

The insert/remove header support is needed as a prerequisite for the bridge offloads vlan pop/push support, see below.

2) From Vlad, support for bridge offloads for switchdev mode

This change implements bridge offloads with VLAN support that works on top of mlx5 representors in switchdev mode.

HIGH-LEVEL OVERVIEW

Hardware supported by the mlx5 driver doesn't provide dynamic learning or aging functionality and requires the driver to emulate all switch-like behavior in software. As such, all packets by default go through the miss path, appear on the representor and get to the software bridge, if it is the upper device of the representor. This causes the bridge to process the packet in software, learn the MAC address to FDB and send a SWITCHDEV_FDB_ADD_TO_DEVICE event to all subscribers.

Upon reception of a SWITCHDEV_FDB_ADD_TO_DEVICE notification, mlx5 bridge offloads the FDB to hardware and sends back a SWITCHDEV_FDB_ADD_TO_BRIDGE notification to prevent such entries from being aged out by the kernel bridge. Leaving aging to the kernel bridge would result in deletion of offloaded dynamic FDB entries every aging_time period, due to packets being processed by hardware and, consequently, the 'used' timestamp for the FDB entry not being updated. Hardware aging is emulated in the driver by running a periodic workqueue task that manually updates the rules according to their hardware counter:

  - If the hardware counter has changed since the last update, the handler updates the 'used' timestamp in the kernel bridge dynamic entry by sending a SWITCHDEV_FDB_ADD_TO_BRIDGE notification for the entry.
  - If the FDB entry wasn't updated for a user-controllable aging_time period, then the FDB entry is unoffloaded from hardware and a corresponding SWITCHDEV_FDB_DEL_TO_BRIDGE notification is sent to the kernel bridge.

The mlx5 bridge offload implementation fully supports port VLAN objects, including PVID (vlan push) and "Egress Untagged" (vlan pop).

SOFTWARE ARCHITECTURE

Mlx5_eswitch is extended with a pointer to a new mlx5_esw_bridge_offloads structure which has a linked list of mlx5_esw_bridge objects. Struct mlx5_esw_bridge is the main switch object in mlx5 that holds all data for offloaded FDB entries and metadata for bridge ports and their vlans. The mlx5_esw_bridge object is created when the first representor of an eswitch vport is added to a bridge and deleted when the last representor is detached from it.

Bridge FDB entries are saved in a linked list (to iterate over all FDB entries in the aging workqueue task) and in a hashtable for quick lookup by MAC+VLAN tuple. Port metadata is stored in struct mlx5_esw_bridge_port, which is saved in an xarray to allow quick lookup by vport number. Part of the port metadata is the set of port vlans, represented by the mlx5_esw_bridge_vlan structure. The vlan structure points to all FDBs on the vlan/port via the fdb_list linked list.
Simplified diagram of mlx5 bridge objects:

[ASCII diagram: mlx5_eswitch's br_offloads pointer leads to mlx5_esw_bridge_offloads, which keeps the list of mlx5_esw_bridge objects; each bridge holds a vports collection and an fdb_ht hashtable; each mlx5_esw_bridge_port holds its mlx5_esw_bridge_vlan structures, whose fdb_list links the mlx5_esw_bridge_fdb_entry objects that fdb_ht also indexes]

HARDWARE REPRESENTATION

In order to adhere to the kernel software datapath model, bridge offloads must come after TC and NF FDBs. However, since netfilter offload in mlx5 is implemented with unmanaged tables, its miss path is not automatically connected to the next priority and requires the code to manually connect it with the slow table. To keep bridge offloads encapsulated and not mixed with eswitch offloads, a new FDB_TC_MISS priority is created between FDB_FT_OFFLOAD and FDB_SLOW_PATH, which allows bridge offloads to be created without exposing their internal tables to any other modules, since the miss path of the managed TC-miss table is automatically wired to the next priority.

The bridge tables are created with the new priority FDB_BR_OFFLOAD in the FDB namespace. The new priority is between the tc-miss and slow path priorities. The priority consists of two levels: the ingress table, which is global per eswitch and matches incoming packets by src_mac/vid and redirects them to the next level (the egress table), which is chosen according to ingress port bridge membership and matches on dst_mac/vid in order to redirect the packet to a vport, per the following diagram:

[ASCII diagram: vertical FDB priority chain FDB_TC_OFFLOAD -> FDB_FT_OFFLOAD -> FDB_TC_MISS -> FDB_BR_OFFLOAD -> FDB_SLOW_PATH; inside FDB_BR_OFFLOAD, INGRESS_TABLE redirects matching packets to an EGRESS_TABLE, which redirects matches to the destination vport, while misses fall through to FDB_SLOW_PATH]

PATCHES OVERVIEW

1-3  - Miscellaneous refactorings and infrastructure changes.
4    - Mlx5 bridge offload infrastructure and dedicated fs_core namespace/tables implementation.
5    - FDB entry offload.
6    - Dynamic FDB entry aging.
7-10 - VLAN filtering offload.
11   - Tracepoints for main mlx5 bridge offload events (FDB entry offload/unoffload, VLAN add/delete, etc.)
====================

Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-10  net: davinci_emac: Use devm_platform_get_and_ioremap_resource()  (Yang Yingliang)

Use devm_platform_get_and_ioremap_resource() to simplify code and avoid a null-ptr-deref by checking 'res' in it.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-10  net: ethernet: ti: cpsw: Use devm_platform_get_and_ioremap_resource()  (Yang Yingliang)

Use devm_platform_get_and_ioremap_resource() to simplify code.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-10  net: phy: probe for C45 PHYs that return PHY ID of zero in C22 space  (Wong Vee Khee)

PHY devices such as the Marvell Alaska 88E2110 do not return a valid PHY ID when probed using Clause 22. The current implementation treats a PHY ID of zero as a valid, non-error PHY ID, which causes such PHY devices to fail to bind to the Marvell driver.

For such devices, do an additional probe in the Clause 45 space; if a valid PHY ID is returned, proceed to attach the PHY device to the matching PHY ID driver.

Signed-off-by: Wong Vee Khee <vee.khee.wong@linux.intel.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>

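A hedged sketch of the fallback in the shape of phy_device.c's get_phy_device() path (get_phy_c22_id(), get_phy_c45_ids() and probe_capabilities mirror the driver; the exact placement and control flow are assumptions):

    r = get_phy_c22_id(bus, addr, &phy_id);
    if (r)
            return ERR_PTR(r);

    /* A C22 PHY ID of zero likely means a C45-only PHY such as the
     * 88E2110; retry in C45 space when the MDIO bus can do C45.
     */
    if (!is_c45 && phy_id == 0 && bus->probe_capabilities >= MDIOBUS_C45) {
            is_c45 = true;
            r = get_phy_c45_ids(bus, addr, &c45_ids);   /* assumed helper reuse */
    }
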
2021-06-09  net/mlx5: Bridge, add tracepoints  (Vlad Buslov)

Move private bridge structures to dedicated headers that are accessible to the bridge tracepoint header. Implement the following tracepoints:

  - Initialize FDB entry.
  - Refresh FDB entry.
  - Cleanup FDB entry.
  - Create VLAN.
  - Cleanup VLAN.
  - Attach port to bridge.
  - Detach port from bridge.

Usage example:

  ># cd /sys/kernel/debug/tracing
  ># echo mlx5:mlx5_esw_bridge_fdb_entry_init >> set_event
  ># cat trace
  ...
  kworker/u20:1-96 [001] .... 231.892503: mlx5_esw_bridge_fdb_entry_init: net_device=enp8s0f0_0 addr=e4:fd:05:08:00:02 vid=3 flags=0 lastuse=4294895695

Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Jianbo Liu <jianbol@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-06-09  net/mlx5: Bridge, filter tagged packets that didn't match tagged fg  (Vlad Buslov)

With support for pvid vlans in the mlx5 bridge, it is possible to have rules in the untagged flow group when vlan filtering is enabled. However, such rules can also match tagged packets that didn't match anything in the tagged flow group. Filter such packets by introducing an additional flow group between the tagged and untagged groups. When filtering is enabled on the bridge, create an additional flow in the vlan filtering flow group that matches tagged packets with the specified source MAC address and redirects them to the new "skip" table. The skip table is a new lowest-level empty table that is used to skip all further processing on the packet in the bridge priority.

Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Jianbo Liu <jianbol@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-06-09  net/mlx5: Bridge, support pvid and untagged vlan configurations  (Vlad Buslov)

Implement support for pushing a vlan header into untagged packets on ingress of a port that has pvid configured, and support for popping the vlan on egress of a port that has the matching vlan configured as untagged.

To support such configurations, packet reformat contexts of the {INSERT|REMOVE}_HEADER types are created per such vlan and saved to struct mlx5_esw_bridge_vlan, which allows all FDB entries on a particular vlan to share a single packet reformat instance. When initializing FDB entries with pvid or untagged vlan type, set their mlx5_flow_act->pkt_reformat action accordingly.

Flush all flows when removing a vlan from a port. This is necessary because even though the software bridge removes all FDB entries before removing their vlan, the mlx5 bridge implementation deletes their corresponding flow entries from hardware in an asynchronous workqueue task, which would cause a firmware error if the vlan packet reformat context were deleted before all the flows that point to it.

Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Jianbo Liu <jianbol@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-06-09  net/mlx5: Bridge, match FDB entry vlan tag  (Vlad Buslov)

Add support for FDB vlan-tagged entries. Extend the ingress and egress flow tables with flow groups to match the packet vlan tag. Modify the flow creation code to include the vlan tag, if a vlan is configured on the port and vlan configuration is supported for offload.

Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Jianbo Liu <jianbol@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-06-09  net/mlx5: Bridge, implement infrastructure for vlans  (Vlad Buslov)

Establish all the necessary infrastructure for implementing vlan matching and vlan push/pop in the following patches:

  - Add a new per-vport struct mlx5_esw_bridge_port that is used to store metadata for all port vlans. Initialize and cleanup the instance of the structure when a port representor is linked/unlinked to/from the bridge. Use an xarray to allow quick vport metadata lookup by vport number.
  - Add a new per-port-vlan struct mlx5_esw_bridge_vlan that is used to store vlan-specific data (vid, flags). Handle the SWITCHDEV_PORT_OBJ_{ADD|DEL} switchdev blocking event for the SWITCHDEV_OBJ_ID_PORT_VLAN object by creating/deleting the vlan structure and saving it in the per-vport xarray for quick lookup.
  - Implement support for the SWITCHDEV_ATTR_ID_BRIDGE_VLAN_FILTERING object attribute that is used to toggle vlan filtering. Remove all FDB entries from hardware when the vlan filtering state is changed.

Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Jianbo Liu <jianbol@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-06-09  net/mlx5: Bridge, dynamic entry ageing  (Vlad Buslov)

Dynamic FDB entries require the capability to age out unused entries. Such entries are either aged out by the kernel software bridge implementation or by the hardware switch that offloaded them (which notified the kernel to mark them as SWITCHDEV_FDB_ADD_TO_BRIDGE). Leaving ageing to the kernel bridge would result in it deleting offloaded dynamic FDB entries every ageing_time period, due to packets being processed by hardware and, consequently, the 'used' timestamp for the FDB entry not being updated. However, since the hardware doesn't support ageing, a software solution inside the driver is required.

In order to emulate hardware ageing in the driver, extend bridge FDB ingress flows with a counter and create a delayed br_offloads->update_work task on the bridge offloads workqueue. Run the task every second and update the 'used' timestamp in the software bridge dynamic entry by sending SWITCHDEV_FDB_ADD_TO_BRIDGE for the entry, if its flow hardware counter's lastuse field has changed since the last update. If lastuse wasn't changed for an ageing_time period, then delete the FDB entry and notify the kernel bridge by sending a SWITCHDEV_FDB_DEL_TO_BRIDGE notification.

Register a blocking switchdev notifier callback and handle the attribute set SWITCHDEV_ATTR_ID_BRIDGE_AGEING_TIME event to allow the user to dynamically configure the bridge FDB entry ageing timeout. Save the value per-bridge in struct mlx5_esw_bridge. Silently ignore SWITCHDEV_ATTR_ID_PORT_{PRE_}BRIDGE_FLAGS switchdev events, since the mlx5 bridge implementation relies on the software bridge for implementing the necessary behavior for all of these flags.

Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Jianbo Liu <jianbol@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

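The self-rearming delayed-work pattern the ageing emulation relies on, as a minimal sketch (update_work and wq are named in the commit texts; everything else is assumed):

    static void mlx5_esw_bridge_update_work(struct work_struct *work)
    {
            struct mlx5_esw_bridge_offloads *br_offloads =
                    container_of(work, struct mlx5_esw_bridge_offloads,
                                 update_work.work);

            /* walk fdb_list, compare each entry's counter lastuse,
             * refresh or age out entries as described above ...
             */

            /* re-arm: run again in one second */
            queue_delayed_work(br_offloads->wq, &br_offloads->update_work,
                               msecs_to_jiffies(1000));
    }

    /* setup, e.g. at offloads init time */
    INIT_DELAYED_WORK(&br_offloads->update_work, mlx5_esw_bridge_update_work);
    queue_delayed_work(br_offloads->wq, &br_offloads->update_work,
                       msecs_to_jiffies(1000));
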
2021-06-09  net/mlx5: Bridge, handle FDB events  (Vlad Buslov)

Hardware supported by the mlx5 driver doesn't provide learning and requires the driver to emulate all switch-like behavior in software. As such, all packets by default go through the miss path, appear on the representor and get to the software bridge, if it is the upper device of the representor. This causes the bridge to process the packet in software, learn the MAC address to FDB and send a SWITCHDEV_FDB_ADD_TO_DEVICE event to all subscribers.

In order to offload FDB entries in mlx5, register a switchdev notifier callback and implement support for both 'added_by_user' and dynamic FDB entry SWITCHDEV_FDB_ADD_TO_DEVICE events asynchronously, using the new mlx5_esw_bridge_offloads->wq ordered workqueue. In the workqueue callback, offload the ingress rule (matching the FDB entry MAC as the packet source MAC) and the egress table rule (matching the FDB entry MAC as the destination MAC). For the ingress table rule, also match the source vport to ensure that only traffic coming from the expected bridge port is matched by the offloaded rule. Save all the relevant FDB entry data in a struct mlx5_esw_bridge_fdb_entry instance and insert the instance into the new mlx5_esw_bridge->fdb_list list (for traversing all entries by the software ageing implementation in the following patch) and into the new mlx5_esw_bridge->fdb_ht hash table for fast retrieval. Notify the bridge that the FDB entry has been offloaded by sending a SWITCHDEV_FDB_OFFLOADED notification.

Delete the FDB entry on reception of a SWITCHDEV_FDB_DEL_TO_DEVICE event.

Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Jianbo Liu <jianbol@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-06-09  net/mlx5: Bridge, add offload infrastructure  (Vlad Buslov)

Create new files bridge.{c|h} in the en/rep directory that implement bridge interaction with representor netdevices and handle required events/notifications, and bridge.{c|h} in the esw directory that implement all the necessary eswitch offloading infrastructure and work on the vport/eswitch level. Provide a new kconfig MLX5_BRIDGE which is automatically selected when both the kernel bridge and mlx5 eswitch configs are enabled.

Provide basic infrastructure for bridge offloads:

  - struct mlx5_esw_bridge_offloads - a per-eswitch bridge offload structure that encapsulates generic bridge-offloads data (notifier blocks, ingress flow table/group, etc.) that is created/deleted on enable/disable of eswitch offloads.
  - struct mlx5_esw_bridge - a per-bridge structure that encapsulates per-bridge data (reference counter, FDB, egress flow table/group, etc.) that is created when the first eswitch representor is attached to a new bridge and deleted when the last representor is removed from the bridge as a result of a NETDEV_CHANGEUPPER event.

The bridge tables are created with the new priority FDB_BR_OFFLOAD in the FDB namespace. The new priority is between the tc-miss and slow path priorities. The priority consists of two levels: the ingress table, which is global per eswitch and matches incoming packets by src_mac/vid and redirects them to the next level (the egress table), which is chosen according to ingress port bridge membership and matches on dst_mac/vid in order to redirect the packet to a vport, per the following diagram:

[ASCII diagram: vertical FDB priority chain FDB_TC_OFFLOAD -> FDB_FT_OFFLOAD -> FDB_TC_MISS -> FDB_BR_OFFLOAD -> FDB_SLOW_PATH; inside FDB_BR_OFFLOAD, INGRESS_TABLE redirects matching packets to an EGRESS_TABLE, which redirects matches to the destination vport, while misses fall through to FDB_SLOW_PATH]

Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Jianbo Liu <jianbol@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-06-09  net/mlx5e: Refactor mlx5e_eswitch_{*}rep() helpers  (Vlad Buslov)

Change the helper functions to accept a constant pointer to struct net_device. This is necessary for following patches in the series that pass mlx5e_eswitch_rep() as a callback to kernel bridge infrastructure code.

Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Jianbo Liu <jianbol@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-06-09  net/mlx5: Create TC-miss priority and table  (Vlad Buslov)

In order to adhere to the kernel software datapath model, bridge offloads must come after TC and NF FDBs. Following patches in this series add a new FDB priority for the bridge after FDB_FT_OFFLOAD. However, since netfilter offload is implemented with unmanaged tables, its miss path is not automatically connected to the next priority and requires the code to manually connect it with the slow table. To keep bridge offloads encapsulated and not mixed with eswitch offloads, create a new FDB_TC_MISS priority between FDB_FT_OFFLOAD and FDB_SLOW_PATH:

              +
              |
    +---------v----------+
    |                    |
    |   FDB_TC_OFFLOAD   |
    |                    |
    +---------+----------+
              |
    +---------v----------+
    |                    |
    |   FDB_FT_OFFLOAD   |
    |                    |
    +---------+----------+
              |
    +---------v----------+
    |                    |
    |    FDB_TC_MISS     |
    |                    |
    +---------+----------+
              |
    +---------v----------+
    |                    |
    |   FDB_SLOW_PATH    |
    |                    |
    +---------+----------+
              |
              v

Initialize the new priority with a single default empty managed table and use that table as the TC/NF miss path instead of the slow table. This approach allows bridge offloads to be created as a new FDB namespace priority between FDB_TC_MISS and FDB_SLOW_PATH without exposing its internal tables to any other modules, since the miss path of the managed TC-miss table is automatically wired to the next priority.

Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Jianbo Liu <jianbol@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-06-09  net/mlx5: DR, Support EMD tag in modify header for STEv1  (Yevgeny Kliteynik)

Add support for the EMD tag in modify header set/copy actions on devices that support STEv1.

Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-06-09  net/mlx5: DR, Added support for INSERT_HEADER reformat type  (Yevgeny Kliteynik)

Add support for the INSERT_HEADER packet reformat context type.

Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-06-09  net/mlx5: Added new parameters to reformat context  (Yevgeny Kliteynik)

Adding the new reformat context type (INSERT_HEADER) requires adding two new parameters to the reformat context - reformat_param_0 and reformat_param_1. As defined by the HW spec, these parameters have different meanings for different reformat context types.

The first parameter (reformat_param_0) is not new to the HW spec, but it wasn't used by any of the supported reformats. The second parameter (reformat_param_1) is new to the HW spec - it was added to allow supporting INSERT_HEADER.

For INSERT_HEADER, reformat_param_0 indicates the header used to reference the location of the inserted header, and reformat_param_1 indicates the offset of the inserted header from the reference point defined by reformat_param_0.

Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-06-09  net/mlx5: DR, Allow encap action for RX for supporting devices  (Yevgeny Kliteynik)

Encap actions on the RX flow were not supported on older devices. However, this is no longer the case for devices that support STEv1. This patch adds support for encap l3/l2 on the RX flow for supported devices: update the actions state machine by adding the newly supported transitions, and add the required support in the STEv0/1 files.

The new transitions that are supported are:

  - from decap/modify-header/pop-vlan to encap
  - from encap to termination table

Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-06-09  net/mlx5: DR, Split reformat state to Encap and Decap  (Yevgeny Kliteynik)

Split the single reformat state into two separate states for encap and decap. This will allow adding actions to the specific domain, such as encap on RX.

Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-06-09  net/mlx5: mlx5_ifc support for header insert/remove  (Yevgeny Kliteynik)

Add support for HCA caps 2 that contains capabilities for the new insert/remove header actions. Added the required definitions for supporting the new reformat type: added packet reformat parameters, reformat anchors and definitions to allow copy/set into the inserted EMD (Embedded MetaData) tag.

Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Jianbo Liu <jianbol@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-06-09  net: ipa: use bitmap to check for missing regions  (Alex Elder)

In ipa_mem_valid(), wait until regions have been marked in the memory region bitmap, and check all that are not found there to ensure they are not required.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-09  net: ipa: flag duplicate memory regions  (Alex Elder)

Add a test in ipa_mem_valid() to ensure no memory region is defined more than once, using a bitmap to record each defined memory region. Skip over undefined regions when checking (we can have any number of those).

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

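The duplicate check can be sketched with a standard kernel bitmap (DECLARE_BITMAP and the bit helpers are the normal kernel API; the struct fields and the "undefined region" test are assumed from the commit texts):

    static bool ipa_mem_ids_unique(struct ipa *ipa)
    {
            DECLARE_BITMAP(regions, IPA_MEM_COUNT) = { };
            u32 i;

            for (i = 0; i < ipa->mem_count; i++) {
                    const struct ipa_mem *mem = &ipa->mem[i];

                    if (!mem->size && !mem->canary_count)
                            continue;       /* undefined region: any number allowed */

                    if (test_bit(mem->id, regions))
                            return false;   /* region defined more than once */
                    __set_bit(mem->id, regions);
            }

            return true;
    }
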
2021-06-09  net: ipa: validate memory regions based on version  (Alex Elder)

Introduce ipa_mem_id_valid(), and use it to check defined memory regions to ensure they are valid for a given version of IPA.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-09  net: ipa: introduce ipa_mem_id_optional()  (Alex Elder)

Introduce a new function that indicates whether a given memory region is required for a given version of IPA hardware. Use it to verify that all required regions are present during initialization.

Reorder the definitions of the memory region IDs to be based on the version in which they're first defined. Use "+" rather than "and above" when defining the IPA versions in which memory IDs are used, and indicate which regions are optional (many are not).

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-09  net: ipa: pass memory configuration data to ipa_mem_valid()  (Alex Elder)

Pass the memory configuration data array to ipa_mem_valid() for validation, and use that rather than assuming it's already been recorded in the IPA structure. Move the memory data array size check into ipa_mem_valid().

Call ipa_mem_valid() early in ipa_mem_init(), and only proceed with assigning the memory array pointer and size if it is found to be valid.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-09  net: ipa: validate memory regions at init time  (Alex Elder)

Move the memory region validation check so it happens earlier when initializing the driver, at init time rather than config time (i.e., before access to hardware is required).

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-09  net: ipa: separate region range check from other validation  (Alex Elder)

The only thing done by ipa_mem_valid_one() that requires hardware access is the check for whether all regions fit within the size of IPA local memory specified by an IPA register.

Introduce ipa_mem_size_valid() to implement this verification and stop doing so in ipa_mem_valid_one(). Call the new function from ipa_mem_config() (which is also the caller of ipa_mem_valid()).

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

2021-06-09  net: ipa: separate memory validation from initialization  (Alex Elder)

Currently, memory regions are validated in the loop that initializes them. Instead, validate them separately.

Rename ipa_mem_valid() to be ipa_mem_valid_one(). Define a *new* function named ipa_mem_valid() that performs validation of the array of memory regions provided. This function calls ipa_mem_valid_one() for each region in turn.

Skip validation for any "empty" region descriptors, which have zero size and are not preceded by any canary values. Issue a warning for such descriptors if the offset is non-zero.

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

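Roughly the shape of the new validation pass, as a hedged sketch (IPA_MEM_COUNT and the size/canary_count/offset fields follow the commit texts; the device pointer and warning text are assumptions):

    static bool ipa_mem_valid(struct ipa *ipa)
    {
            struct device *dev = &ipa->pdev->dev;   /* assumed location */
            enum ipa_mem_id mem_id;

            for (mem_id = 0; mem_id < IPA_MEM_COUNT; mem_id++) {
                    const struct ipa_mem *mem = &ipa->mem[mem_id];

                    /* Skip "empty" slots: zero size, no leading canaries */
                    if (!mem->size && !mem->canary_count) {
                            if (mem->offset)
                                    dev_warn(dev, "empty region %u has offset\n",
                                             mem_id);
                            continue;
                    }

                    if (!ipa_mem_valid_one(ipa, mem))
                            return false;
            }

            return true;
    }
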
2021-06-09  net: ipa: validate memory regions unconditionally  (Alex Elder)

Do memory region descriptor validation unconditionally, rather than having it depend on IPA_VALIDATION being defined.

Pass the address of a memory region descriptor rather than a memory ID to ipa_mem_valid().

Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>