path: root/drivers/net/ethernet
Age  Commit message  Author
2018-11-09  net: mvneta: correct typo  (Alexandre Belloni)
The reserved variable should be named reserved1. Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-09  net: qualcomm: rmnet: Fix incorrect assignment of real_dev  (Subash Abhinov Kasiviswanathan)
A null dereference was observed when a sysctl was being set from userspace and rmnet was stuck trying to complete some actions in the NETDEV_REGISTER callback. This is because the real_dev is set only after the device registration handler completes.

sysctl call stack -
<6> Unable to handle kernel NULL pointer dereference at virtual address 00000108
<2> pc : rmnet_vnd_get_iflink+0x1c/0x28
<2> lr : dev_get_iflink+0x2c/0x40
<2>  rmnet_vnd_get_iflink+0x1c/0x28
<2>  inet6_fill_ifinfo+0x15c/0x234
<2>  inet6_ifinfo_notify+0x68/0xd4
<2>  ndisc_ifinfo_sysctl_change+0x1b8/0x234
<2>  proc_sys_call_handler+0xac/0x100
<2>  proc_sys_write+0x3c/0x4c
<2>  __vfs_write+0x54/0x14c
<2>  vfs_write+0xcc/0x188
<2>  SyS_write+0x60/0xc0
<2>  el0_svc_naked+0x34/0x38

device register call stack -
<2>  notifier_call_chain+0x84/0xbc
<2>  raw_notifier_call_chain+0x38/0x48
<2>  call_netdevice_notifiers_info+0x40/0x70
<2>  call_netdevice_notifiers+0x38/0x60
<2>  register_netdevice+0x29c/0x3d8
<2>  rmnet_vnd_newlink+0x68/0xe8
<2>  rmnet_newlink+0xa0/0x160
<2>  rtnl_newlink+0x57c/0x6c8
<2>  rtnetlink_rcv_msg+0x1dc/0x328
<2>  netlink_rcv_skb+0xac/0x118
<2>  rtnetlink_rcv+0x24/0x30
<2>  netlink_unicast+0x158/0x1f0
<2>  netlink_sendmsg+0x32c/0x338
<2>  sock_sendmsg+0x44/0x60
<2>  SyS_sendto+0x150/0x1ac
<2>  el0_svc_naked+0x34/0x38

Fixes: b752eff5be24 ("net: qualcomm: rmnet: Implement ndo_get_iflink") Signed-off-by: Sean Tranchetti <stranche@codeaurora.org> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org> Signed-off-by: David S. Miller <davem@davemloft.net>
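The direction of the fix can be sketched as follows: populate priv->real_dev before register_netdevice() runs, because NETDEV_REGISTER notifiers fired from inside it may already call ndo_get_iflink() and dereference that pointer. This is a minimal sketch under that assumption; the helper name and error handling are illustrative, not the driver's exact code.

  /* Sketch only: illustrates the ordering fix, not the exact rmnet patch. */
  static int rmnet_vnd_newlink_sketch(struct net_device *rmnet_dev,
                                      struct net_device *real_dev)
  {
          struct rmnet_priv *priv = netdev_priv(rmnet_dev);
          int rc;

          /* Assign real_dev BEFORE registering: NETDEV_REGISTER notifiers
           * may call ndo_get_iflink() and dereference priv->real_dev.
           */
          priv->real_dev = real_dev;

          rc = register_netdevice(rmnet_dev);
          if (rc)
                  priv->real_dev = NULL;  /* undo on failure */

          return rc;
  }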
2018-11-09  net: aquantia: allow rx checksum offload configuration  (Dmitry Bogdanov)
RX checksum offloads could not be configured: the driver ignored the netdev features flag for checksumming. Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com> Signed-off-by: Dmitry Bogdanov <dmitry.bogdanov@aquantia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-09  net: aquantia: invalid checksum offload implementation  (Dmitry Bogdanov)
Packets marked with invalid IP/UDP/TCP checksums were considered good by the driver. The error was in the logic that processes the offload bits in the RX descriptor. Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com> Signed-off-by: Dmitry Bogdanov <dmitry.bogdanov@aquantia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-09  net: aquantia: fixed enable unicast on 32 macvlan  (Igor Russkikh)
Fixed a broken condition due to which the 32nd macvlan unicast entry was not added to the unicast filter. As a consequence, when exactly 32 macvlans are created on the NIC, the last created macvlan receives no traffic because its MAC address was never registered in hardware. Fixes: 94b3b542303f ("net: aquantia: vlan unicast address list correct handling") Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com> Tested-by: Nikita Danilov <nikita.danilov@aquantia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-09  net: aquantia: fix potential IOMMU fault after driver unbind  (Dmitry Bogdanov)
An IOMMU fault may occur on an unbind/bind or if_down/if_up sequence. Although the driver disables the rings on down, this is not enough. Due to internal HW design, during subsequent initialization the NIC may sometimes reuse its RX descriptor cache and write to host memory from the cached descriptors. That gets caught by the IOMMU on the host. This patch invalidates the descriptor cache in the NIC on interface down to prevent writes to the cached descriptors and to the memory they point to. Signed-off-by: Dmitry Bogdanov <dmitry.bogdanov@aquantia.com> Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-09  net: aquantia: synchronized flow control between mac/phy  (Igor Russkikh)
Flow control status was not synchronized between blocks, which caused packet/link drops in some corner cases where the MAC sent PFC frames the PHY was not expecting. The driver should read out the negotiated flow control from the PHY and configure the RX block accordingly. This is done on each link change event using information from the FW. Fixes: 288551de45aa ("net: aquantia: Implement rx/tx flow control ethtools callback") Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-08  net: stmmac: Fix RX packet size > 8191  (Thor Thayer)
Ping problems with packets > 8191 as shown:

PING 192.168.1.99 (192.168.1.99) 8150(8178) bytes of data.
8158 bytes from 192.168.1.99: icmp_seq=1 ttl=64 time=0.669 ms
wrong data byte 8144 should be 0xd0 but was 0x0
  16 10 11 12 13 14 15 16 17 18 19 1a 1b 1c 1d 1e 1f 20 21 22 23 24 25 26 27 28 29 2a 2b 2c 2d 2e 2f
%< ---------------snip--------------------------------------
8112 b0 b1 b2 b3 b4 b5 b6 b7 b8 b9 ba bb bc bd be bf c0 c1 c2 c3 c4 c5 c6 c7 c8 c9 ca cb cc cd ce cf
8144  0  0  0  0 d0 d1
     ^^^^^^^^^^^

Notice the 4 bytes of 0 before the expected byte of d0. Databook notes that the RX buffer must be a multiple of 4/8/16 bytes [1]. Update the DMA Buffer size define to 8188 instead of 8192. Remove the -1 from the RX buffer size allocations and use the new DMA Buffer size directly. [1] Synopsys DesignWare Cores Ethernet MAC Universal v3.70a [section 8.4.2 - Table 8-24] Tested on SoCFPGA Stratix10 with ping sweep from 100 to 8300 byte packets. Fixes: 286a83721720 ("stmmac: add CHAINED descriptor mode support (V4)") Suggested-by: Jose Abreu <jose.abreu@synopsys.com> Signed-off-by: Thor Thayer <thor.thayer@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
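A sketch of the sizing change: the macro name follows the stmmac driver's jumbo-buffer define, and the call site is illustrative; the point is the 8188 value and dropping the "- 1".

  /* RX buffers must be a multiple of 4/8/16 bytes per the databook, so cap
   * the jumbo DMA buffer at 8188 rather than 8192 (sketch; macro name assumed).
   */
  #define BUF_SIZE_8KiB 8188                    /* previously 8192 */

  /* before (illustrative): bfsize = BUF_SIZE_8KiB - 1;  -> 8191, not aligned   */
  /* after  (illustrative): bfsize = BUF_SIZE_8KiB;      -> 8188, multiple of 4 */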
2018-11-08  qed: Fix potential memory corruption  (Sagiv Ozeri)
A stuck ramrod should be deleted from the completion_pending list, otherwise it will be added again in the future and corrupt the list. Return an error value to inform the caller that the ramrod is stuck and should be deleted. Signed-off-by: Sagiv Ozeri <sagiv.ozeri@cavium.com> Signed-off-by: Denis Bolotin <denis.bolotin@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-08  qed: Fix SPQ entries not returned to pool in error flows  (Denis Bolotin)
qed_sp_destroy_request() API was added for SPQ users that need to free/return the entry they acquired in their error flows. Signed-off-by: Denis Bolotin <denis.bolotin@cavium.com> Signed-off-by: Michal Kalderon <michal.kalderon@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-08  qed: Fix blocking/unlimited SPQ entries leak  (Denis Bolotin)
When there are no SPQ entries left in the free_pool, new entries are allocated and are added to the unlimited list. When an entry in the pool is available, the content is copied from the original entry, and the new entry is sent to the device. qed_spq_post() is not aware of that, so the additional entry is stored in the original entry as p_post_ent, which can later be returned to the pool. Signed-off-by: Denis Bolotin <denis.bolotin@cavium.com> Signed-off-by: Michal Kalderon <michal.kalderon@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-08  qed: Fix memory/entry leak in qed_init_sp_request()  (Denis Bolotin)
Free the allocated SPQ entry or return the acquired SPQ entry to the free list in error flows. Signed-off-by: Denis Bolotin <denis.bolotin@cavium.com> Signed-off-by: Michal Kalderon <michal.kalderon@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-08  net: hns3: bugfix for not checking return value  (Huazhong Tan)
hns3_reset_notify_init_enet() should return early with an error when hns3_restore_vlan() returns a non-zero value, but the return value was not checked. This patch adds the missing check of hns3_restore_vlan()'s return value. Fixes: 7fa6be4fd2f6 ("net: hns3: fix incorrect return value/type of some functions") Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
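The essence of the fix is the usual error-propagation pattern inside hns3_reset_notify_init_enet(); a minimal sketch, with the surrounding code omitted:

  /* sketch: inside hns3_reset_notify_init_enet() */
  ret = hns3_restore_vlan(netdev);
  if (ret)
          return ret;     /* previously the return value was dropped */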
2018-11-07  qlcnic: remove assumption that vlan_tci != 0  (Michał Mirosław)
VLAN.TCI == 0 is perfectly valid (802.1p), so allow it to be accelerated. Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-07  ibmvnic: fix accelerated VLAN handling  (Michał Mirosław)
Don't request VLAN tag insertion when no tag is present in the outgoing skb. Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl> Signed-off-by: David S. Miller <davem@davemloft.net>
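A minimal sketch of the intended behaviour: skb_vlan_tag_present()/skb_vlan_tag_get() are the generic kernel helpers, while the descriptor fields shown are illustrative placeholders rather than the exact ibmvnic structures.

  /* Only ask the adapter to insert a VLAN tag if the skb carries one. */
  if (skb_vlan_tag_present(skb)) {
          tx_desc.vlan_insert = 1;                                /* illustrative field */
          tx_desc.vlan_id = cpu_to_be16(skb_vlan_tag_get(skb));   /* illustrative field */
  }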
2018-11-07  i40e: enable NETIF_F_NTUPLE and NETIF_F_HW_TC at driver load  (Jacob Keller)
The assignment of the feature flag NETIF_F_NTUPLE and NETIF_F_HW_TC occurs prior to the initial setup of the local hw_features variable. This means the features are set as user-changeable, but are not set in the currently active feature list. This results in the features being disabled at the driver's initial load. Move the assignment after the initial assignment of hw_features, and assign to the local variable. This ensures that NETIF_F_NTUPLE and NETIF_F_HW_TC are marked as user-changeable, and also enables them by default when the driver loads. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
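The shape of the fix is to set the flags on the local hw_features variable after its initial assignment, so they end up both user-toggleable and enabled by default. The surrounding feature list and the MFP gating below are illustrative, not the exact i40e code.

  hw_features = hw_enc_features | NETIF_F_HW_VLAN_CTAG_TX |
                NETIF_F_HW_VLAN_CTAG_RX;        /* initial setup (illustrative) */

  /* Add NTUPLE/HW_TC here, after the initial assignment, not before it. */
  if (!(pf->flags & I40E_FLAG_MFP_ENABLED))
          hw_features |= NETIF_F_NTUPLE | NETIF_F_HW_TC;

  netdev->hw_features |= hw_features;   /* user-changeable */
  netdev->features    |= hw_features;   /* and enabled by default */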
2018-11-07  i40e: restore NETIF_F_GSO_IPXIP[46] to netdev features  (Jacob Keller)
Since commit bacd75cfac8a ("i40e/i40evf: Add capability exchange for outer checksum", 2017-04-06) the i40e driver has not reported support for IP-in-IP offloads. This likely occurred due to a bad rebase, as the commit extracts hw_enc_features into its own variable. As part of this change, it dropped the NETIF_F_GSO_IPXIP flags from netdev->hw_enc_features. This was unfortunately not caught during code review. Fix this by adding back the missing feature flags. For reference, NETIF_F_GSO_IPXIP4 was added in commit 7e13318daa4a ("net: define gso types for IPx over IPv4 and IPv6", 2016-05-20), replacing NETIF_F_GSO_IPIP and NETIF_F_GSO_SIT. NETIF_F_GSO_IPXIP6 was added in commit bf2d1df39502 ("intel: Add support for IPv6 IP-in-IP offload", 2016-05-20). Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-11-07  ice: Change req_speeds to be u16  (Chinh T Cao)
Since the req_speeds field in struct ice_link_status is a u8, req_speeds & ICE_AQ_LINK_SPEED_40GB always returns 0. This was caught by a coverity scan. Fix this by changing req_speeds to be u16. Reported-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
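The reason the u8 failed is that the 40GB speed bit sits above bit 7. Assuming a BIT(8)-style definition (the exact value is an illustration, not taken from the driver headers), any mask test against a u8 field is always 0:

  #define ICE_AQ_LINK_SPEED_40GB  BIT(8)  /* 0x100; value assumed for illustration */

  struct ice_link_status {
          /* ... */
          u16 req_speeds;  /* was u8: (req_speeds & ICE_AQ_LINK_SPEED_40GB) == 0 always */
  };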
2018-11-06  igb: shorten maximum PHC timecounter update interval  (Miroslav Lichvar)
The timecounter needs to be updated at least once per ~550 seconds in order to avoid the 40-bit SYSTIM timestamp being misinterpreted as an old timestamp. Since commit 500462a9de65 ("timers: Switch to a non-cascading wheel"), scheduling of delayed work seems to be less accurate and a requested delay of 540 seconds may actually be longer than 550 seconds. Also, the PHC may be adjusted to run up to 6% faster than real time and the system clock up to 10% slower. Shorten the delay to 360 seconds to be sure the timecounter is updated in time. This fixes an issue with HW timestamps on 82580/I350/I354 being off by ~1100 seconds for a few seconds every ~9 minutes. Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Miroslav Lichvar <mlichvar@redhat.com> Acked-by: Jacob Keller <jacob.e.keller@intel.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
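The arithmetic behind the 550-second and 360-second figures, spelled out as a comment; the macro name is an assumption for illustration, not necessarily the driver's symbol.

  /*
   * 40-bit SYSTIM span at ~1 ns per tick:  2^40 ns ~= 1099.5 s
   * The timecounter must be refreshed within half a span:  ~550 s
   * The PHC may run ~6% fast, the scheduling clock ~10% slow, and delayed
   * work may fire late, so a 360 s period leaves a comfortable margin.
   */
  #define IGB_SYSTIM_OVERFLOW_PERIOD      (HZ * 60 * 6)   /* 360 s; name assumed */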
2018-11-06  ice: Fix the bytecount sent to netdev_tx_sent_queue  (Brett Creeley)
Currently if the driver does a TSO offload the bytecount sent to netdev_tx_sent_queue will be incorrect. This is because in ice_tso we overwrite the initial value that we set in ice_tx_map. This creates a mismatch between the Tx and Tx clean flow. In the Tx clean flow we calculate the bytecount (called total_bytes) as we clean the descriptors so the value used in the Tx clean path is correct. Fix this by using += in ice_tso instead of =. This fixes the mismatch in bytecount mentioned above. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
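The one-character nature of the fix can be sketched as follows; the field names follow the commit text (bytecount, gso_segs), while the header-length accounting is illustrative of ice_tso() rather than an exact copy.

  /* The transmit path seeds the totals for the non-TSO case: */
  first->bytecount = skb->len;
  first->gso_segs = 1;

  /* ice_tso() must add the replicated headers, not overwrite the total: */
  first->gso_segs = skb_shinfo(skb)->gso_segs;
  first->bytecount += (first->gso_segs - 1) * hdr_len;  /* was '=', causing the mismatch */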
2018-11-06  ice: Fix tx_timeout in PF driver  (Brett Creeley)
Prior to this commit the driver was running into tx_timeouts when a queue was stressed enough. This was happening because the HW tail and SW tail (NTU) were incorrectly out of sync. Consequently this was causing the HW head to collide with the HW tail, which to the hardware means that all descriptors posted for Tx have been processed.

Due to the Tx logic used in the driver, SW tail and HW tail are allowed to be out of sync. This is done as an optimization because it allows the driver to write HW tail as infrequently as possible, while still updating the SW tail index to keep track. However, there are situations where this results in the tail never getting updated, resulting in Tx timeouts.

Tx HW tail write condition:
    if (netif_xmit_stopped(txring_txq(tx_ring)) || !skb->xmit_more)
            writel(sw_tail, tx_ring->tail);

An issue was found in the Tx logic that was causing the aforementioned condition for updating HW tail to never happen, causing tx_timeouts. In ice_xmit_frame_ring we calculate how many descriptors we need for the Tx transaction based on the skb the kernel hands us. This is then passed into ice_maybe_stop_tx along with some extra padding to determine if we have enough descriptors available for this transaction. If we don't, we return -EBUSY to the stack; otherwise we move on and eventually prepare the Tx descriptors in ice_tx_map and set next_to_watch. In ice_tx_map we make another call to ice_maybe_stop_tx, with a value of MAX_SKB_FRAGS + 4. The key here is that this value is possibly less than the value we sent in the first call to ice_maybe_stop_tx in ice_xmit_frame_ring.

Now, if the number of unused descriptors is between MAX_SKB_FRAGS + 4 and the value used in the first call to ice_maybe_stop_tx, we do not update the HW tail because of the "Tx HW tail write condition" above. This is because ice_maybe_stop_tx returns success instead of calling __ice_maybe_stop_tx and subsequently netif_stop_subqueue, which sets the __QUEUE_STATE_DEV_XOFF bit. That bit is what netif_xmit_stopped checks in the "Tx HW tail write condition", so the HW tail is only written when the bit is set.

In ice_clean_tx_irq, if next_to_watch is not NULL, we clean the descriptors on which HW has set the DD bit, as long as we have the budget. The HW head will eventually run into the HW tail, as described above. The next time through ice_xmit_frame_ring we make the initial call to ice_maybe_stop_tx with another skb from the stack. This time we do not have enough descriptors available, so we return NETDEV_TX_BUSY to the stack and end up setting next_to_watch to NULL.

This is where we are stuck: in ice_clean_tx_irq we never clean anything because next_to_watch is always NULL, and in ice_xmit_frame_ring we never update HW tail because we already returned NETDEV_TX_BUSY to the stack. Eventually we hit a tx_timeout.

This issue was fixed by making sure that the second call to ice_maybe_stop_tx in ice_tx_map is passed a value that is >= the value used in the initial call to ice_maybe_stop_tx in ice_xmit_frame_ring. This was done by adding the following defines to make the logic clearer and to reduce the chance of getting this wrong again:

    ICE_CACHE_LINE_BYTES        64
    ICE_DESCS_PER_CACHE_LINE    (ICE_CACHE_LINE_BYTES / sizeof(struct ice_tx_desc))
    ICE_DESCS_FOR_CTX_DESC      1
    ICE_DESCS_FOR_SKB_DATA_PTR  1

ICE_CACHE_LINE_BYTES being 64 is an assumption made so we don't have to determine the cache line size on every pass through the Tx path. Instead, a sanity check was added in ice_probe to verify the cache line size read from the GLPCI_CNF2 register and print a message if it is not 64 bytes; this will make it easier to file issues seen on systems where the cache line size is not 64 bytes. See the sketch after this entry for how the defines fit together.

Signed-off-by: Brett Creeley <brett.creeley@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
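The defines quoted above, plus an assumed worst-case budget macro (the DESCS_NEEDED name and expression are illustrative), show how the two ice_maybe_stop_tx() calls are kept consistent:

  #define ICE_CACHE_LINE_BYTES            64
  #define ICE_DESCS_PER_CACHE_LINE        (ICE_CACHE_LINE_BYTES / \
                                           sizeof(struct ice_tx_desc))
  #define ICE_DESCS_FOR_CTX_DESC          1
  #define ICE_DESCS_FOR_SKB_DATA_PTR      1

  /* Worst-case descriptors a single transmit may consume (name illustrative): */
  #define DESCS_NEEDED    (MAX_SKB_FRAGS + ICE_DESCS_FOR_CTX_DESC + \
                           ICE_DESCS_PER_CACHE_LINE + ICE_DESCS_FOR_SKB_DATA_PTR)

  /* ice_xmit_frame_ring():  if (ice_maybe_stop_tx(tx_ring, DESCS_NEEDED)) ...  */
  /* ice_tx_map():           ice_maybe_stop_tx(tx_ring, DESCS_NEEDED);          */
  /* Using the same (>=) budget in both places guarantees the subqueue gets     */
  /* stopped, so netif_xmit_stopped() later forces the HW tail write.           */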
2018-11-06  ice: Fix napi delete calls for remove  (Dave Ertman)
In the remove path, vsi->netdev is being set to NULL before the call to free vectors. This causes the netif_napi_del call to never be made. Add a call to ice_napi_del at the same location as the calls to unregister_netdev, just prior to them. This uses the reverse order of the register_netdev and netif_napi_add calls. Signed-off-by: Dave Ertman <david.m.ertman@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-11-06  ice: Fix typo in error message  (Anirudh Venkataramanan)
Print should say "Enabling" instead of "Enaabling" Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-11-06  ice: Fix flags for port VLAN  (Md Fahad Iqbal Polash)
According to the spec, whenever the insert PVID field is set, the VLAN driver insertion mode should be set to 01b, which isn't done currently. Fix it. Signed-off-by: Md Fahad Iqbal Polash <md.fahad.iqbal.polash@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-11-06  ice: Remove duplicate addition of VLANs in replay path  (Anirudh Venkataramanan)
ice_restore_vlan and active_vlans were originally put in place to reprogram VLAN filters in the replay path. This is now done as part of the much broader VSI rebuild/replay framework, so remove both ice_restore_vlan and active_vlans. Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-11-06  ice: Free VSI contexts during unload  (Victor Raj)
In the unload path, all VSIs are freed. Also free the related VSI contexts to prevent memory leaks. Signed-off-by: Victor Raj <victor.raj@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-11-06  ice: Fix dead device link issue with flow control  (Akeem G Abodunrin)
Setting Rx or Tx pause parameter currently results in link loss on the interface, requiring the platform/host to be cold power cycled. Fix it. Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-11-06  ice: Check for reset in progress during remove  (Anirudh Venkataramanan)
The remove path does not currently check to see if a reset is in progress before proceeding. This can cause a resource collision resulting in various types of errors. Check for reset in progress and wait for a reasonable amount of time before allowing the remove to progress. Signed-off-by: Dave Ertman <david.m.ertman@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-11-06  ice: Set carrier state and start/stop queues in rebuild  (Anirudh Venkataramanan)
Set the carrier state post rebuild by querying the link status. Also start/stop queues based on link status. Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2018-11-05  net: alx: make alx_drv_name static  (Rasmus Villemoes)
alx_drv_name is not used outside main.c, so there's no reason for it to have external linkage. Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-03  mlxsw: spectrum: Fix IP2ME CPU policer configuration  (Shalom Toledo)
The CPU policer used to police packets being trapped via a local route (IP2ME) was incorrectly configured to police based on bytes per second instead of packets per second. Change the policer to police based on packets per second and avoid packet loss under certain circumstances. Fixes: 9148e7cf73ce ("mlxsw: spectrum: Add policers for trap groups") Signed-off-by: Shalom Toledo <shalomt@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-03  qed: fix link config error handling  (Arnd Bergmann)
gcc-8 notices that qed_mcp_get_transceiver_data() may fail to return a result to the caller: drivers/net/ethernet/qlogic/qed/qed_mcp.c: In function 'qed_mcp_trans_speed_mask': drivers/net/ethernet/qlogic/qed/qed_mcp.c:1955:2: error: 'transceiver_type' may be used uninitialized in this function [-Werror=maybe-uninitialized] When an error is returned by qed_mcp_get_transceiver_data(), we should propagate that to the caller of qed_mcp_trans_speed_mask() rather than continuing with uninitialized data. Fixes: c56a8be7e7aa ("qed: Add supported link and advertise link to display in ethtool.") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: David S. Miller <davem@davemloft.net>
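A sketch of the error propagation gcc-8 is asking for; the signatures are simplified from the function names in the commit text and should not be read as the exact qed code.

  static int qed_mcp_trans_speed_mask_sketch(struct qed_hwfn *p_hwfn,
                                             struct qed_ptt *p_ptt,
                                             u32 *p_speed_mask)
  {
          u32 transceiver_state, transceiver_type;
          int rc;

          rc = qed_mcp_get_transceiver_data(p_hwfn, p_ptt,
                                            &transceiver_state,
                                            &transceiver_type);
          if (rc)
                  return rc;      /* don't derive the mask from uninitialized data */

          /* ... translate transceiver_type into *p_speed_mask ... */
          return 0;
  }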
2018-11-03  net: hns3: Fix for out-of-bounds access when setting pfc back pressure  (Yunsheng Lin)
The vport should be initialized to hdev->vport for each bp group, otherwise it will cause an out-of-bounds access and incorrect back pressure settings.

[   35.254124] BUG: KASAN: slab-out-of-bounds in hclge_pause_setup_hw+0x2a0/0x3f8 [hclge]
[   35.254126] Read of size 2 at addr ffff803b6651581a by task kworker/0:1/14
[   35.254132] CPU: 0 PID: 14 Comm: kworker/0:1 Not tainted 4.19.0-rc7-hulk+ #85
[   35.254133] Hardware name: Huawei D06/D06, BIOS Hisilicon D06 UEFI RC0 - B052 (V0.52) 09/14/2018
[   35.254141] Workqueue: events work_for_cpu_fn
[   35.254144] Call trace:
[   35.254147]  dump_backtrace+0x0/0x2f0
[   35.254149]  show_stack+0x24/0x30
[   35.254154]  dump_stack+0x110/0x184
[   35.254157]  print_address_description+0x168/0x2b0
[   35.254160]  kasan_report+0x184/0x310
[   35.254162]  __asan_load2+0x7c/0xa0
[   35.254170]  hclge_pause_setup_hw+0x2a0/0x3f8 [hclge]
[   35.254177]  hclge_tm_init_hw+0x794/0x9f0 [hclge]
[   35.254184]  hclge_tm_schd_init+0x48/0x58 [hclge]
[   35.254191]  hclge_init_ae_dev+0x778/0x1168 [hclge]
[   35.254196]  hnae3_register_ae_dev+0x14c/0x298 [hnae3]
[   35.254206]  hns3_probe+0x88/0xa8 [hns3]
[   35.254210]  local_pci_probe+0x7c/0xf0
[   35.254212]  work_for_cpu_fn+0x34/0x50
[   35.254214]  process_one_work+0x4d4/0xa38
[   35.254216]  worker_thread+0x55c/0x8d8
[   35.254219]  kthread+0x1b0/0x1b8
[   35.254222]  ret_from_fork+0x10/0x1c
[   35.254224] The buggy address belongs to the page:
[   35.254228] page:ffff7e00ed994400 count:1 mapcount:0 mapping:0000000000000000 index:0x0 compound_mapcount: 0
[   35.273835] flags: 0xfffff8000008000(head)
[   35.282007] raw: 0fffff8000008000 dead000000000100 dead000000000200 0000000000000000
[   35.282010] raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
[   35.282012] page dumped because: kasan: bad access detected
[   35.282014] Memory state around the buggy address:
[   35.282017]  ffff803b66515700: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   35.282019]  ffff803b66515780: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   35.282021] >ffff803b66515800: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   35.282022]                          ^
[   35.282024]  ffff803b66515880: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   35.282026]  ffff803b66515900: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
[   35.282028] ==================================================================
[   35.282029] Disabling lock debugging due to kernel taint
[   35.282747] hclge driver initialization finished.

Fixes: 67bf2541f4b9 ("net: hns3: Fixes the back pressure setting when sriov is enabled") Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-11-03  net/mlx4_en: use __netdev_tx_sent_queue()  (Eric Dumazet)
The doorbell only depends on xmit_more and netif_tx_queue_stopped(). Using __netdev_tx_sent_queue() avoids messing with the BQL stop flag and is more generic. This patch increases performance on GSO workloads by keeping doorbells to the minimum required. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
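The usage pattern of __netdev_tx_sent_queue() described above: the helper performs the BQL accounting and tells the caller whether a doorbell is required. The mlx4-specific names below are illustrative.

  bool send_doorbell;

  send_doorbell = __netdev_tx_sent_queue(ring->tx_queue,
                                         tx_info->nr_bytes,
                                         skb->xmit_more && !stop_queue);
  if (send_doorbell)
          mlx4_en_xmit_doorbell(ring);    /* illustrative helper */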
2018-11-03  net: systemport: Protect stop from timeout  (Florian Fainelli)
A timing hazard exists when the network interface is stopped that allows a watchdog timeout to be processed by a separate core in parallel. This creates the potential for the timeout handler to wake the queues while the driver is shutting down, or access registers after their clocks have been removed. The more common case is that the watchdog timeout will produce a warning message which doesn't lead to a crash. The chances of this are greatly increased by the fact that bcm_sysport_netif_stop stops the transmit queues, which can easily precipitate a watchdog timeout because of stale trans_start data in the queues.

This commit corrects the behavior by ensuring that the watchdog timeout is disabled before entering bcm_sysport_netif_stop. There are currently only two users of the bcm_sysport_netif_stop function: close and suspend. The close case already handles the issue by exiting the RUNNING state before invoking the driver close service. The suspend case now performs the netif_device_detach to exit the PRESENT state before the call to bcm_sysport_netif_stop rather than after it. These behaviors prevent any future scheduling of the driver timeout service during the window.

The netif_tx_stop_all_queues function in bcm_sysport_netif_stop is replaced with netif_tx_disable to ensure synchronization with any transmit or timeout threads that may already be executing on other cores. For symmetry, the netif_device_attach call upon resume is moved to after the call to bcm_sysport_netif_start. Since it wakes the transmit queues, it is not necessary to invoke netif_tx_start_all_queues from bcm_sysport_netif_start, so it is moved into the driver open service.

Fixes: 40755a0fce17 ("net: systemport: add suspend and resume support") Fixes: 80105befdb4b ("net: systemport: add Broadcom SYSTEMPORT Ethernet MAC driver") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
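A sketch of the suspend-path ordering this describes, with the driver-specific teardown elided; the helper names follow the commit text, the function shown is illustrative rather than the exact patch.

  static int bcm_sysport_suspend_sketch(struct net_device *dev)
  {
          if (!netif_running(dev))
                  return 0;

          /* Detach first: clears the PRESENT state so no new watchdog
           * timeout can be scheduled while the interface is torn down.
           */
          netif_device_detach(dev);

          bcm_sysport_netif_stop(dev);    /* now uses netif_tx_disable() internally */

          /* ... stop DMA, PHY, clocks, etc. ... */
          return 0;
  }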
2018-11-03  net: bcmgenet: protect stop from timeout  (Doug Berger)
A timing hazard exists when the network interface is stopped that allows a watchdog timeout to be processed by a separate core in parallel. This creates the potential for the timeout handler to wake the queues while the driver is shutting down, or access registers after their clocks have been removed. The more common case is that the watchdog timeout will produce a warning message which doesn't lead to a crash. The chances of this are greatly increased by the fact that bcmgenet_netif_stop stops the transmit queues, which can easily precipitate a watchdog timeout because of stale trans_start data in the queues.

This commit corrects the behavior by ensuring that the watchdog timeout is disabled before entering bcmgenet_netif_stop. There are currently only two users of the bcmgenet_netif_stop function: close and suspend. The close case already handles the issue by exiting the RUNNING state before invoking the driver close service. The suspend case now performs the netif_device_detach to exit the PRESENT state before the call to bcmgenet_netif_stop rather than after it. These behaviors prevent any future scheduling of the driver timeout service during the window.

The netif_tx_stop_all_queues function in bcmgenet_netif_stop is replaced with netif_tx_disable to ensure synchronization with any transmit or timeout threads that may already be executing on other cores. For symmetry, the netif_device_attach call upon resume is moved to after the call to bcmgenet_netif_start. Since it wakes the transmit queues, it is not necessary to invoke netif_tx_start_all_queues from bcmgenet_netif_start, so it is moved into the driver open service.

Fixes: 1c1008c793fa ("net: bcmgenet: add main driver file") Signed-off-by: Doug Berger <opendmb@gmail.com> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-31  net: stmmac: Fix stmmac_mdio_reset() when building stmmac as modules  (Niklas Cassel)
When building stmmac, it is only possible to select CONFIG_DWMAC_GENERIC, or any of the glue drivers, when CONFIG_STMMAC_PLATFORM is set. The only exception is CONFIG_STMMAC_PCI. When calling of_mdiobus_register(), it will call our ->reset() callback, which is set to stmmac_mdio_reset(). Most of the code in stmmac_mdio_reset() is protected by a "#if defined(CONFIG_STMMAC_PLATFORM)", which will evaluate to false when CONFIG_STMMAC_PLATFORM=m. Because of this, the phy reset gpio will only be pulled when stmmac is built as built-in, but not when built as modules. Fix this by using "#if IS_ENABLED()" instead of "#if defined()". Signed-off-by: Niklas Cassel <niklas.cassel@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
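The gist of the fix: IS_ENABLED() evaluates to 1 for both =y and =m, while defined() only catches the built-in case.

  #if IS_ENABLED(CONFIG_STMMAC_PLATFORM)  /* was: #if defined(CONFIG_STMMAC_PLATFORM) */
          /* ... pull the PHY reset GPIO ... */
  #endif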
2018-10-31  Merge branch '10GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-queue  (David S. Miller)
Jeff Kirsher says:

====================
Intel Wired LAN Driver Updates 2018-10-31

This series contains a varied collection of fixes. Miroslav Lichvar from Red Hat (or should I say IBM now?) updates the PHC timecounter interval for igb so that it gets updated at least once every 550 seconds. Ngai-Mint provides a fix for fm10k to prevent a soft lockup or system crash by adding a new condition to determine if the SM mailbox is in the correct state before proceeding. Jake provides several fm10k fixes; the first one marks completer aborts as non-fatal, since on some platforms they trigger machine check errors. Added missing device IDs to the in-kernel driver. Due to the recent fixes, bumped the driver version. I (Jeff Kirsher) fixed a XFRM_ALGO dependency for both ixgbe and ixgbevf. This fix was based on the original work from Arnd Bergmann, which only fixed ixgbe. Mitch provides a fix for i40e/avf to update the status codes, which resolves a mismatch between i40e and the iavf driver, which also supports the ice LAN driver. Radoslaw fixes ixgbe, where the driver was logging a message about spoofed packets detected when a VF is restarted with a different MAC address.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-31  mlxsw: spectrum: Set minimum shaper on MC TCs  (Petr Machata)
An MC-aware mode was introduced in commit 7b8195306694 ("mlxsw: spectrum: Configure MC-aware mode on mlxsw ports"). In MC-aware mode, BUM traffic gets a special treatment by being assigned to a separate set of traffic classes 8..15. Pairs of TCs 0 and 8, 1 and 9, etc., are then configured to strictly prioritize the lower-numbered ones. The intention is to prevent BUM traffic from flooding the switch and push out all UC traffic, which would otherwise happen, and instead give UC traffic precedence. However strictly prioritizing UC traffic has the effect that UC overload pushes out all BUM traffic, such as legitimate ARP queries. These packets are kept in queues for a while, but under sustained UC overload, their lifetime eventually expires and these packets are dropped. That is detrimental to network performance as well. Therefore configure the MC TCs (8..15) with minimum shaper of 200Mbps (a minimum permitted value) to allow a trickle of necessary control traffic to get through. Fixes: 7b8195306694 ("mlxsw: spectrum: Configure MC-aware mode on mlxsw ports") Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-31  mlxsw: reg: QEEC: Add minimum shaper fields  (Petr Machata)
Add QEEC.mise (minimum shaper enable) and QEEC.min_shaper_rate to enable configuration of minimum shaper. Increase the QEEC length to 0x20 as well: that's the length that the register has had for a long time now, but with the configurations that mlxsw typically exercises, the firmware tolerated 0x1C-sized packets. With mise=true however, FW rejects packets unless they have the full required length. Fixes: b9b7cee40579 ("mlxsw: reg: Add QoS ETS Element Configuration register") Signed-off-by: Petr Machata <petrm@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-31  net: hns3: bugfix for rtnl_lock's range in the hclgevf_reset()  (Huazhong Tan)
Since hclgevf_reset_wait() is used to wait for the hardware to complete the reset, it is not necessary to hold the rtnl_lock during hclgevf_reset_wait(). So this patch releases the lock for the duration of hclgevf_reset_wait(). Fixes: 6988eb2a9b77 ("net: hns3: Add support to reset the enet/ring mgmt layer") Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-31  net: hns3: bugfix for rtnl_lock's range in the hclge_reset()  (Huazhong Tan)
Since hclge_reset_wait() is used to wait for the hardware to complete the reset, it is not necessary to hold the rtnl_lock during hclge_reset_wait(). So this patch releases the lock for the duration of hclge_reset_wait(). Fixes: 6d4fab39533f ("net: hns3: Reset net device with rtnl_lock") Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
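The scope change can be sketched like this; the notifier steps around the wait are illustrative placeholders for the real reset sequence, and only the unlock/relock around the wait is the point being made.

  rtnl_lock();
  hclge_notify_client(hdev, HNAE3_DOWN_CLIENT);   /* illustrative step */
  rtnl_unlock();

  /* Polling for hardware reset completion does not need the RTNL lock. */
  if (hclge_reset_wait(hdev))
          dev_err(&hdev->pdev->dev, "reset wait timed out\n");

  rtnl_lock();
  hclge_notify_client(hdev, HNAE3_UP_CLIENT);     /* illustrative step */
  rtnl_unlock();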
2018-10-31  net: hns3: bugfix for handling mailbox while the command queue is reinitialized  (Huazhong Tan)
In a multi-core machine, the mailbox service and reset service will be executed at the same time. The reset service will re-initialize the command queue, before that, the mailbox handler can only get some invalid messages. The HCLGE_STATE_CMD_DISABLE flag means that the command queue is not available and needs to be reinitialized. Therefore, when the mailbox handler recognizes this flag, it should not process the command. Fixes: dde1a86e93ca ("net: hns3: Add mailbox support to PF driver") Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-31  net: hns3: fix incorrect return value/type of some functions  (Huazhong Tan)
There are some functions that, when they fail to send the command, need to return the corresponding error value to their callers. Fixes: 46a3df9f9718 ("net: hns3: Add HNS3 Acceleration Engine & Compatibility Layer Support") Fixes: 681ec3999b3d ("net: hns3: fix for vlan table lost problem when resetting") Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-31  net: hns3: bugfix for hclge_mdio_write and hclge_mdio_read  (Huazhong Tan)
When there is a PHY, the driver needs to complete some operations through MDIO during reset reinitialization, so HCLGE_STATE_CMD_DISABLE is more suitable than HCLGE_STATE_RST_HANDLING for preventing MDIO operations from being sent during the hardware reset. Fixes: b50ae26c57cb ("net: hns3: never send command queue message to IMP when reset") Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-31  net: hns3: bugfix for is_valid_csq_clean_head()  (Huazhong Tan)
The HEAD pointer of the hardware command queue may be equal to the command queue's next_to_use in the driver; this is not an invalid HEAD value, since the hardware may simply not have processed the commands yet and the HEAD pointer is just late to update. The variable names in this function were unreadable, so give them more readable ones. Fixes: 3ff504908f95 ("net: hns3: fix a dead loop in hclge_cmd_csq_clean") Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
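The validity condition described above, as a sketch of the renamed helper: the HEAD reported by hardware is acceptable anywhere in the ring interval from next_to_clean up to and including next_to_use (head == next_to_use simply means the hardware has not consumed more commands yet).

  /* Sketch of the check; struct/field names follow the hns3 command-queue code. */
  static bool is_valid_csq_clean_head(struct hclge_cmq_ring *ring, int head)
  {
          int ntu = ring->next_to_use;
          int ntc = ring->next_to_clean;

          if (ntu > ntc)
                  return head >= ntc && head <= ntu;

          /* the ring has wrapped */
          return head >= ntc || head <= ntu;
  }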
2018-10-31  net: hns3: remove unnecessary queue reset in the hns3_uninit_all_ring()  (Huazhong Tan)
It is not necessary to reset the queue in hns3_uninit_all_ring(), since the queue is stopped in the down operation and will be reset in the up operation. Also, the check of the HCLGE_STATE_RST_HANDLING flag in hclge_reset_tqp() is not correct, because we need to reset the tqp during a PF reset; otherwise the queue may not be reset back to the working state. Fixes: 76ad4f0ee747 ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC") Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-31  net: hns3: bugfix for the initialization of command queue's spin lock  (Huazhong Tan)
The spin lock of the command queue only needs to be initialized once, when the driver initializes the command queue. It is not necessary to initialize the spin lock when resetting. At the same time, the modification of queue members should be performed after acquiring the lock. Fixes: 3efb960f056d ("net: hns3: Refactor the initialization of command queue") Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-31  net: hns3: bugfix for reporting unknown vector0 interrupt repeatedly  (Huazhong Tan)
The current driver supports handling two vector0 interrupt sources, reset and mailbox. When the hardware reports an interrupt from another source, if the driver does not process the interrupt but re-enables it, the hardware will repeatedly report the unknown interrupt. Therefore, the driver now enables the vector0 interrupt only after clearing a known type of interrupt source; in other cases the interrupt is left disabled. Fixes: cd8c5c269b1d ("net: hns3: Fix for hclge_reset running repeatly problem") Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-31  net: hns3: bugfix for buffer not freed problem during resetting  (Huazhong Tan)
When hns3_get_ring_config(), hns3_queue_to_ring() or hns3_get_vector_ring_chain() fails during resetting, the memory allocated so far is not freed before these three functions return. This patch adds error handling to these functions to fix it. Fixes: 76ad4f0ee747 ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC") Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>