Rasmus Villemoes [Tue, 14 Jun 2022 08:46:10 +0000 (10:46 +0200)]
dt-bindings: dp83867: add binding for io_impedance_ctrl nvmem cell
We have a board where measurements indicate that the current three
options - leaving IO_IMPEDANCE_CTRL at the reset value (which is
factory calibrated to a value corresponding to approximately 50 ohms)
or using one of the two boolean properties to set it to the min/max
value - are too coarse.
There is no fixed mapping from register values to values in the range
35-70 ohms; it varies from chip to chip, and even that target range is
approximate. So add a DT binding for an nvmem cell which can be
populated during production with a value suitable for each specific
board.
Reviewed-by: Rob Herring <robh@kernel.org> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Linus Torvalds [Thu, 16 Jun 2022 18:51:32 +0000 (11:51 -0700)]
Merge tag 'net-5.19-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Pull networking fixes from Jakub Kicinski:
"Mostly driver fixes.
Current release - regressions:
- Revert "net: Add a second bind table hashed by port and address",
needs more work
- amd-xgbe: use platform_irq_count(), static setup of IRQ resources
had been removed from DT core
- dts: at91: ksz9477_evb: add phy-mode to fix port/phy validation
Current release - new code bugs:
- hns3: modify the ring param print info
Previous releases - always broken:
- axienet: make the 64b addressable DMA depends on 64b architectures
- iavf: fix issue with MAC address of VF shown as zero
- ice: fix PTP TX timestamp offset calculation
- usb: ax88179_178a needs FLAG_SEND_ZLP
Misc:
- document some net.sctp.* sysctls"
* tag 'net-5.19-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (31 commits)
net: axienet: add missing error return code in axienet_probe()
Revert "net: Add a second bind table hashed by port and address"
net: ax25: Fix deadlock caused by skb_recv_datagram in ax25_recvmsg
net: usb: ax88179_178a needs FLAG_SEND_ZLP
MAINTAINERS: add include/dt-bindings/net to NETWORKING DRIVERS
ARM: dts: at91: ksz9477_evb: fix port/phy validation
net: bgmac: Fix an erroneous kfree() in bgmac_remove()
ice: Fix memory corruption in VF driver
ice: Fix queue config fail handling
ice: Sync VLAN filtering features for DVM
ice: Fix PTP TX timestamp offset calculation
mlxsw: spectrum_cnt: Reorder counter pools
docs: networking: phy: Fix a typo
amd-xgbe: Use platform_irq_count()
octeontx2-vf: Add support for adaptive interrupt coalescing
xilinx: Fix build on x86.
net: axienet: Use iowrite64 to write all 64b descriptor pointers
net: axienet: make the 64b addresable DMA depends on 64b archectures
net: hns3: fix tm port shapping of fibre port is incorrect after driver initialization
net: hns3: fix PF rss size initialization bug
...
Yang Yingliang [Thu, 16 Jun 2022 06:29:17 +0000 (14:29 +0800)]
net: axienet: add missing error return code in axienet_probe()
It should return an error code in the error path in axienet_probe().
Fixes: 0909279f6be4 ("net: axienet: make the 64b addresable DMA depends on 64b archectures") Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20220616062917.3601-1-yangyingliang@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Joanne Koong [Wed, 15 Jun 2022 19:32:13 +0000 (12:32 -0700)]
Revert "net: Add a second bind table hashed by port and address"
This reverts:
commit 9da23f76f146 ("net: Add a second bind table hashed by port and address")
commit bcfa16c3ccf7 ("selftests: Add test for timing a bind request to a port with a populated bhash entry") Link: https://lore.kernel.org/netdev/20220520001834.2247810-1-kuba@kernel.org/
There are a few things that need to be fixed here:
* Updating bhash2 in cases where the socket's rcv saddr changes
* Adding bhash2 hashbucket locks
Fixes: 9da23f76f146 ("net: Add a second bind table hashed by port and address") Reported-by: syzbot+015d756bbd1f8b5c8f09@syzkaller.appspotmail.com Reported-by: syzbot+98fd2d1422063b0f8c44@syzkaller.appspotmail.com Reported-by: syzbot+0a847a982613c6438fba@syzkaller.appspotmail.com Signed-off-by: Joanne Koong <joannelkoong@gmail.com> Link: https://lore.kernel.org/r/20220615193213.2419568-1-joannelkoong@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Haiyang Zhang [Tue, 14 Jun 2022 20:28:55 +0000 (13:28 -0700)]
net: mana: Add support of XDP_REDIRECT action
Add a handler for the XDP_REDIRECT return code from an XDP program. The
packets will be flushed at the end of each RX/CQ NAPI poll cycle.
ndo_xdp_xmit() is implemented by sharing the code in mana_xdp_tx().
Ethtool per queue counters are added for XDP redirect and xmit operations.
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
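A rough sketch of the generic pattern behind such a handler is shown below
(illustrative only; the helper names and the redirect_pending flag are
stand-ins, not the actual mana driver code):

    #include <linux/filter.h>
    #include <linux/netdevice.h>
    #include <net/xdp.h>

    /* Illustrative sketch: handle XDP_REDIRECT in the Rx path and flush the
     * deferred redirects once per RX/CQ NAPI poll cycle. */
    static u32 rx_run_xdp(struct net_device *ndev, struct bpf_prog *prog,
                          struct xdp_buff *xdp, bool *redirect_pending)
    {
            u32 act = bpf_prog_run_xdp(prog, xdp);

            switch (act) {
            case XDP_REDIRECT:
                    if (xdp_do_redirect(ndev, xdp, prog))
                            return XDP_DROP;        /* redirect failed, drop */
                    *redirect_pending = true;       /* flush at end of poll */
                    return XDP_REDIRECT;
            default:
                    return act;
            }
    }

    /* Called once when the RX/CQ NAPI poll cycle completes. */
    static void rx_poll_done(bool redirect_pending)
    {
            if (redirect_pending)
                    xdp_do_flush();                 /* kick deferred redirects */
    }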
net: ethernet: stmmac: reset force speed bit for ipq806x
Some bootloaders may set the force speed regs even if the actual
interface should use autonegotiation between the PCS and PHY.
This causes complete malfunction of the interface.
To fix this, correctly reset the force speed regs if a fixed-link is not
defined in the DTS. With a fixed-link node, correctly configure the
forced speed regs to handle any misconfiguration by the bootloader.
Reported-by: Mark Mentovai <mark@moxienet.com> Co-developed-by: Mark Mentovai <mark@moxienet.com> Signed-off-by: Mark Mentovai <mark@moxienet.com> Signed-off-by: Christian 'Ansuel' Marangi <ansuelsmth@gmail.com> Link: https://lore.kernel.org/r/20220614112228.1998-2-ansuelsmth@gmail.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
net: ethernet: stmmac: add missing sgmii configure for ipq806x
The different gmac ids require different configuration based on the SoC
and on the gmac id. Add these missing configurations, taken from the
original driver.
Linus Torvalds [Wed, 15 Jun 2022 21:20:26 +0000 (14:20 -0700)]
Merge tag 'hardening-v5.19-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux
Pull hardening fixes from Kees Cook:
- Correctly handle vm_map areas in hardened usercopy (Matthew Wilcox)
- Adjust CFI RCU usage to avoid boot splats with cpuidle (Sami Tolvanen)
* tag 'hardening-v5.19-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
usercopy: Make usercopy resilient against ridiculously large copies
usercopy: Cast pointer to an integer once
usercopy: Handle vm_map_ram() areas
cfi: Fix __cfi_slowpath_diag RCU usage with cpuidle
Linus Torvalds [Wed, 15 Jun 2022 19:34:19 +0000 (12:34 -0700)]
Merge tag 'tpmdd-next-v5.19-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd
Pull tpm fixes from Jarkko Sakkinen:
"Two fixes for this merge window"
* tag 'tpmdd-next-v5.19-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd:
certs: fix and refactor CONFIG_SYSTEM_BLACKLIST_HASH_LIST build
certs/blacklist_hashes.c: fix const confusion in certs blacklist
Linus Torvalds [Wed, 15 Jun 2022 16:04:55 +0000 (09:04 -0700)]
Merge tag 'fs.fixes.v5.19-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux
Pull vfs idmapping fix from Christian Brauner:
"This fixes an issue where we fail to change the group of a file when
the caller owns the file and is a member of the group to change to.
This is only relevant on idmapped mounts.
There's a detailed description in the commit message and regression
tests have been added to xfstests"
* tag 'fs.fixes.v5.19-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux:
fs: account for group membership
Casper Andersson [Tue, 14 Jun 2022 09:25:32 +0000 (11:25 +0200)]
net: sparx5: Allow mdb entries to both CPU and ports
Allow mdb entries to be forwarded to the CPU and switched at the same
time. Only remove the entry when neither any port nor the CPU is part of
the group anymore.
Signed-off-by: Casper Andersson <casper.casan@gmail.com> Acked-by: Steen Hegelund <Steen.Hegelund@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Duoming Zhou [Tue, 14 Jun 2022 09:25:57 +0000 (17:25 +0800)]
net: ax25: Fix deadlock caused by skb_recv_datagram in ax25_recvmsg
The skb_recv_datagram() call in ax25_recvmsg() holds the socket lock
and blocks until it receives a packet from the remote end. If the client
doesn't connect to the server and calls read() directly, it will never
receive any packets, so a deadlock occurs.
This patch replaces skb_recv_datagram() with an open-coded variant that
releases the socket lock before the __skb_wait_for_more_packets() call
and re-acquires it after that call, so that other functions that need
the socket lock can be executed.
What's more, the socket lock is now released only when recvmsg() would
block, which should produce nicer overall behavior.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Suggested-by: Thomas Osterried <thomas@osterried.de> Signed-off-by: Duoming Zhou <duoming@zju.edu.cn> Reported-by: Thomas Habets <thomas@@habets.se> Acked-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
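The general shape of the change is to drop the socket lock around the
blocking wait. A minimal sketch of that pattern, using the public
skb_recv_datagram() API instead of the open-coded variant the patch
actually adds:

    #include <linux/skbuff.h>
    #include <net/sock.h>

    /* Illustrative only: release the socket lock while waiting for data so
     * other operations on the same socket can make progress, and re-acquire
     * it before touching socket state again. */
    static struct sk_buff *recv_one_unlocked(struct sock *sk, unsigned int flags,
                                             int *err)
    {
            struct sk_buff *skb;

            release_sock(sk);                        /* let other lock users run */
            skb = skb_recv_datagram(sk, flags, err); /* may sleep for a packet */
            lock_sock(sk);                           /* caller expects the lock */

            return skb;
    }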
bcm63xx_enet: switch to napi_build_skb() to reuse skbuff_heads
napi_build_skb() reuses the per-CPU NAPI skbuff_head cache in order to
save some cycles on freeing/allocating skbuff_heads on every new Rx or
completed Tx.
Use napi_consume_skb() to feed the cache with the skbuff_heads of
completed Tx so it's never empty.
Signed-off-by: Sieng Piaw Liew <liew.s.piaw@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
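A short sketch of how the two APIs pair up in a NAPI poll loop
(illustrative, not the bcm63xx_enet code; buffer management is simplified):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Rx: wrap an already DMA-filled buffer, taking the skbuff_head from the
     * per-CPU NAPI cache instead of the slab allocator. */
    static struct sk_buff *rx_build_skb(void *buf, unsigned int frag_size,
                                        unsigned int headroom, unsigned int len)
    {
            struct sk_buff *skb = napi_build_skb(buf, frag_size);

            if (!skb)
                    return NULL;            /* caller recycles the buffer */
            skb_reserve(skb, headroom);     /* skip hardware/DMA headroom */
            skb_put(skb, len);              /* expose the received payload */
            return skb;
    }

    /* Tx completion, run in the same NAPI context: refills the NAPI cache. */
    static void tx_complete_skb(struct sk_buff *skb, int budget)
    {
            napi_consume_skb(skb, budget);  /* budget 0 falls back to non-NAPI free */
    }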
Casper Andersson [Tue, 14 Jun 2022 06:32:23 +0000 (08:32 +0200)]
net: bridge: allow add/remove permanent mdb entries on disabled ports
Adding mdb entries on disabled ports allows you to do setup before
accepting any traffic, avoiding any time where the port is not in the
multicast group.
Signed-off-by: Casper Andersson <casper.casan@gmail.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Problems observed:
======================================================================
1) Using ssh/sshfs. The remote sshd daemon can abort with the message:
"message authentication code incorrect"
This happens because the TCP message sent is corrupted during the
USB "Bulk out". The device calculates the TCP checksum and sends a
valid TCP message to the remote sshd. Then the encryption detects
the error and aborts.
2) NETDEV WATCHDOG: ... (ax88179_178a): transmit queue 0 timed out
3) Normal operation stops without any log message.
The "Bulk in" continues receiving packets normally.
The host sends "Bulk out" and the device responds with -ECONNRESET.
(The usbnet.c tx_complete code ignores -ECONNRESET.)
Under normal conditions these errors take days to happen and in
intense usage take hours.
A test with ping gives packet loss, showing that something is wrong:
ping -4 -s 462 {destination} # 462 = 512 - 42 - 8
Not all packets fail.
My guess is that the device tries to find another packet starting
at the extra byte and will fail or not depending on the next
bytes (old buffer content).
======================================================================
Signed-off-by: Jose Alonso <joalonsof@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Lorenzo Bianconi [Mon, 13 Jun 2022 16:51:50 +0000 (09:51 -0700)]
i40e: add xdp frags support to ndo_xdp_xmit
Add the capability to map non-linear xdp frames in the XDP_TX and
ndo_xdp_xmit callbacks.
Tested-by: Sarkar Tirthendu <tirthendu.sarkar@intel.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Add a basic description for the xilinx emaclite driver DT bindings.
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 15 Jun 2022 08:15:33 +0000 (09:15 +0100)]
Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue
Tony Nguyen says:
====================
Intel Wired LAN Driver Updates 2022-06-14
This series contains updates to ice driver only.
Michal fixes incorrect Tx timestamp offset calculation for E822 devices.
Roman enforces required VLAN filtering settings for double VLAN mode.
Przemyslaw fixes memory corruption issues with VFs by ensuring queues
are disabled in the error path of VF queue configuration and by disabling
VFs during reset.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The first patch in this series makes the name used for variables
representing a TRE ring be consistent everywhere. The second
renames two structure fields to better represent their purpose.
The last four rework a little code that manages some transaction and
byte transfer statistics maintained mainly for TX endpoints. For
the most part this series is refactoring. The last one also
includes the first step toward no longer assuming an event ring is
dedicated to a single channel.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Mon, 13 Jun 2022 17:17:59 +0000 (12:17 -0500)]
net: ipa: rework gsi_channel_tx_update()
Rename gsi_channel_tx_update() to be gsi_trans_tx_completed(), and
pass it just the transaction pointer, deriving the channel from the
transaction. Update the comments above the function to provide a
more concise description of how statistics for TX endpoints are
maintained and used.
Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Mon, 13 Jun 2022 17:17:58 +0000 (12:17 -0500)]
net: ipa: stop counting total RX bytes and transactions
In gsi_evt_ring_rx_update(), we update each transaction so its len
field reflects the actual number of bytes received. In the process,
the total number of transactions and bytes processed on the channel
are summed, and added to a running total for the channel.
But we don't actually use those running totals for RX endpoints.
They're maintained for TX channels to support CoDel when they are
associated with a "real" network device.
So stop maintaining these totals for RX endpoints, and update the
comment where the fields are defined to make it clear they're only
valid for TX channels.
Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Mon, 13 Jun 2022 17:17:57 +0000 (12:17 -0500)]
net: ipa: simplify TX completion statistics
When a TX request is issued, its channel's accumulated byte and
transaction counts are recorded. This currently does *not* take
into account the transaction being committed.
Later, when the transaction completes, the number of bytes and
transactions that have completed since the transaction was committed
are reported to the network stack. The transaction and its byte
count are accounted for at that time.
Instead, record the transaction and its bytes in the counts recorded
at commit time. This avoids the need to do so when the transaction
completes, and provides a (small) simplification of that code.
Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Mon, 13 Jun 2022 17:17:55 +0000 (12:17 -0500)]
net: ipa: rename two transaction fields
There are two fields in a GSI transaction that keep track of TRE
counts. The first represents the number of TREs reserved for the
transaction in the TRE ring; that's currently named "tre_count".
The second is the number of TREs that are actually *used* by the
transaction at the time it is committed.
Rename the "tre_count" field to be "rsvd_count", to make its meaning
a little more specific. The "_count" is present in the name mainly
to avoid interpreting it as a reserved (not-to-be-used) field. This
name also distinguishes it from the "tre_count" field associated
with a channel.
Rename the "used" field to be "used_count", to match the convention
used for reserved TREs.
Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Mon, 13 Jun 2022 17:17:54 +0000 (12:17 -0500)]
net: ipa: use "tre_ring" for all TRE ring local variables
All local variables that represent event rings are named "ring".
All but two functions that represent a channel's TRE ring with a
local variable use the name "tre_ring". For consistency, use that
name in the two functions that don't fit the pattern.
Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Wed, 15 Jun 2022 05:35:18 +0000 (22:35 -0700)]
Merge branch 'support-mt7531-on-bpi-r2-pro'
Frank Wunderlich says:
====================
Support mt7531 on BPI-R2 Pro
This series adds support for the mt7531 switch on the Bananapi R2 Pro board.
This board uses port 5 of the switch to connect to gmac0 of the
rk3568 SoC.
Currently the CPU port is hardcoded to port 6 in the mt7530 driver.
Compared to v1, the reset patch was dropped as it was not needed, and
the CPU port changes are completely rewritten based on suggestions/code from
Vladimir Oltean (many thanks for this).
In the DTS patch I only dropped the status property, which was not
needed and was ignored by the driver.
Due to the changes I also ran a regression test on the mt7623 bpi-r2
(mt7623 SoC + mt7530) and bpi-r64 (mt7622 SoC + mt7531) with CPU
port 6. Tests were done directly (IPv4 config on the DSA user port)
and with a VLAN-aware bridge, including a VLAN that was tagged outgoing
on the DSA user port.
====================
Frank Wunderlich [Fri, 10 Jun 2022 17:05:40 +0000 (19:05 +0200)]
dt-bindings: net: dsa: make reset optional and add rgmii-mode to mt7531
A board may have no independent reset line, so a reset cannot be used
inside the switch driver.
E.g. on the Bananapi-R2 Pro, the switch and gmac are connected to the same
reset line.
A reset can be acquired by only one device/driver. This prevents the reset
from being bound to the switch driver if it is already used for the gmac.
If the reset is used only by the switch driver, it resets the switch *and*
the gmac after the mdio bus comes up, resulting in the mdio bus going down
again. It takes some time until everything is up again; the switch driver
tries to read from the mdio bus, fails and defers the probe. On the next
try the reset does the same again.
Make the reset optional for such boards.
Allow port 5 as CPU port and phy-mode rgmii for mt7531.
- MT7530 supports RGMII on port 5 and RGMII/TRGMII on port 6.
- MT7531 supports RGMII and SGMII (dual-sgmii) on port 5, and SGMII on
port 6.
Signed-off-by: Frank Wunderlich <frank-w@public-files.de> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Frank Wunderlich [Fri, 10 Jun 2022 17:05:39 +0000 (19:05 +0200)]
net: dsa: mt7530: get cpu-port via dp->cpu_dp instead of constant
Replace the last occurrences of the hardcoded CPU port with the cpu_dp
member of the dsa_port struct.
Now the constant can be dropped.
Suggested-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Frank Wunderlich <frank-w@public-files.de> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Frank Wunderlich [Fri, 10 Jun 2022 17:05:38 +0000 (19:05 +0200)]
net: dsa: mt7530: rework mt753[01]_setup
Enumerate the available CPU ports instead of using a hardcoded constant.
Suggested-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Frank Wunderlich <frank-w@public-files.de> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Frank Wunderlich [Fri, 10 Jun 2022 17:05:37 +0000 (19:05 +0200)]
net: dsa: mt7530: rework mt7530_hw_vlan_{add,del}
Rework the vlan_add/vlan_del functions in preparation for a dynamic CPU
port.
Currently BIT(MT7530_CPU_PORT) is added to new_members, even though
mt7530_port_vlan_add() will be called on the CPU port too.
Let the DSA core decide when to call port_vlan_add for the CPU port,
rather than doing it implicitly.
We can do autonomous forwarding in a certain VLAN, but not add br0 to that
VLAN and avoid flooding the CPU with those packets, if software knows it
doesn't need to process them.
Suggested-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Frank Wunderlich <frank-w@public-files.de> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Wed, 15 Jun 2022 04:51:07 +0000 (21:51 -0700)]
Merge branch 'mlxsw-remove-xm-support'
Ido Schimmel says:
====================
mlxsw: Remove XM support
The XM was supposed to be an external device connected to the
Spectrum-{2,3} ASICs using dedicated Ethernet ports. Its purpose was to
increase the number of routes that can be offloaded to hardware. This was
achieved by having the ASIC act as a cache that refers cache misses to the
XM where the FIB is stored and LPM lookup is performed.
Testing was done over an emulator and dedicated setups in the lab, but
the product was discontinued before shipping to customers.
Therefore, in order to remove dead code and reduce complexity of the
code base, revert the three patchsets that added XM support.
====================
1) Updated HW bits and definitions for upcoming features
1.1) vport debug counters
1.2) flow meter
1.3) Execute ASO action for flow entry
1.4) enhanced CQE compression
2) Add ICM header-modify-pattern RDMA API
Leon Says
=========
SW steering manipulates a packet's header using "modify header" actions.
Many of these actions do the same operation, but use different data each time.
Currently we create and keep every one of these actions, which uses expensive
and limited resources.
Now we introduce a new mechanism - pattern and argument - which splits
a modifying action into two parts:
1. action pattern: contains the operations to be applied to the packet's
header, mainly set/add/copy of fields in the packet
2. action data/argument: contains the data to be used by each operation
in the pattern.
This way we reuse the same patterns with different arguments to create new
modifying actions, and since many actions share the same operations, we end
up creating a small number of patterns that we keep in a dedicated cache.
These modify header patterns are implemented as a new type of ICM memory,
so the following kernel patch series adds support for this new ICM type.
==========
* 'mlx5-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux:
net/mlx5: Add bits and fields to support enhanced CQE compression
net/mlx5: Remove not used MLX5_CAP_BITS_RW_MASK
net/mlx5: group fdb cleanup to single function
net/mlx5: Add support EXECUTE_ASO action for flow entry
net/mlx5: Add HW definitions of vport debug counters
net/mlx5: Add IFC bits and enums for flow meter
RDMA/mlx5: Support handling of modify-header pattern ICM area
net/mlx5: Manage ICM of type modify-header pattern
net/mlx5: Introduce header-modify-pattern ICM properties
====================
Linus Torvalds [Tue, 14 Jun 2022 17:36:11 +0000 (10:36 -0700)]
netfs: fix up netfs_inode_init() docbook comment
Commit e8e91508b493 ("netfs: Further cleanups after struct netfs_inode
wrapper introduced") changed the argument types and names, and actually
updated the comment too (although that was thanks to David Howells, not
me: my original patch only changed the code).
But the comment fixup didn't go quite far enough, and didn't change the
argument name in the comment, resulting in
include/linux/netfs.h:314: warning: Function parameter or member 'ctx' not described in 'netfs_inode_init'
include/linux/netfs.h:314: warning: Excess function parameter 'inode' description in 'netfs_inode_init'
during htmldoc generation.
Fixes: e8e91508b493 ("netfs: Further cleanups after struct netfs_inode wrapper introduced") Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Fixes: 823dac566650 ("ice: Check if VF is disabled for Opcode and other operations") Signed-off-by: Przemyslaw Patynowski <przemyslawx.patynowski@intel.com> Co-developed-by: Slawomir Laba <slawomirx.laba@intel.com> Signed-off-by: Slawomir Laba <slawomirx.laba@intel.com> Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Disable the VF's RX/TX queues when VIRTCHNL_OP_CONFIG_VSI_QUEUES fails.
Not disabling them might lead to a scenario where the PF driver leaves the
VF queues enabled even though the VF's VSI failed queue config.
In this scenario the VF should not have RX/TX queues enabled. If the PF
failed to set up the VF's queues, the VF will reset due to TX timeouts in
the VF driver.
Initialize the iterator 'i' to -1, so that if an error happens prior to
configuring queues the error path code will not disable queue 0. The loop
that configures queues uses the same iterator, so the error path code will
only disable queues that were actually configured.
Fixes: 519d5f19ed82 ("ice: add support for virtchnl_queue_select.[tx|rx]_queues bitmap") Suggested-by: Slawomir Laba <slawomirx.laba@intel.com> Signed-off-by: Przemyslaw Patynowski <przemyslawx.patynowski@intel.com> Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
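The iterator trick reads roughly as follows (generic sketch; struct vf and
the validate/configure/disable helpers are hypothetical stand-ins, not the
ice virtchnl code):

    #include <linux/errno.h>

    struct vf;                                      /* hypothetical */
    int vf_request_valid(struct vf *vf);            /* hypothetical */
    int vf_configure_queue(struct vf *vf, int q);   /* hypothetical */
    void vf_disable_queue(struct vf *vf, int q);    /* hypothetical */

    static int vf_config_vsi_queues(struct vf *vf, int num_queues)
    {
            int i = -1;                     /* nothing configured yet */

            if (!vf_request_valid(vf))
                    goto error;             /* fails before any queue: i == -1 */

            for (i = 0; i < num_queues; i++) {
                    if (vf_configure_queue(vf, i))
                            goto error;     /* stop at the first failure */
            }
            return 0;

    error:
            /* only walk back over queues that were actually touched */
            for (; i >= 0; i--)
                    vf_disable_queue(vf, i);
            return -EINVAL;
    }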
The VLAN filtering features, that is C-Tag and S-Tag, must in DVM mode be
either both enabled or both disabled.
If only one of the features is turned off/on, the other feature must be
turned off/on automatically, with an appropriate message issued to the
kernel log.
Fixes: 65f13e0c2a7b ("ice: Advertise 802.1ad VLAN filtering and offloads for PF netdev") Signed-off-by: Roman Storozhenko <roman.storozhenko@intel.com> Co-developed-by: Anatolii Gerasymenko <anatolii.gerasymenko@intel.com> Signed-off-by: Anatolii Gerasymenko <anatolii.gerasymenko@intel.com> Tested-by: Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Michal Michalik [Tue, 10 May 2022 11:03:43 +0000 (13:03 +0200)]
ice: Fix PTP TX timestamp offset calculation
The offset was being incorrectly calculated for E822 - that led to
collisions in choosing the TX timestamp register location when more than
one port was trying to use the timestamping mechanism.
In E822 one quad is logically split between ports, so quad 0 has trackers
for ports 0-3, quad 1 for ports 4-7, etc. Each port should have a separate
memory location for tracking timestamps. Due to the error, for example,
ports 1 and 2 had been assigned to quad 0 with the same offset (0), while
port 1 should have offset 0 and port 2 offset 16.
Fix it by correctly calculating the quad offset.
Fixes: ae5cb05910db ("ice: implement basic E822 PTP support") Signed-off-by: Michal Michalik <michal.michalik@intel.com> Tested-by: Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Linus Torvalds [Tue, 14 Jun 2022 14:57:18 +0000 (07:57 -0700)]
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm fixes from Paolo Bonzini:
"While last week's pull request contained miscellaneous fixes for x86,
this one covers other architectures, selftests changes, and a bigger
series for APIC virtualization bugs that were discovered during 5.20
development. The idea is to base 5.20 development for KVM on top of
this tag.
ARM64:
- Properly reset the SVE/SME flags on vcpu load
- Fix a vgic-v2 regression regarding accessing the pending state of a
HW interrupt from userspace (and make the code common with vgic-v3)
- Fix access to the idreg range for protected guests
- Ignore 'kvm-arm.mode=protected' when using VHE
- Return an error from kvm_arch_init_vm() on allocation failure
- A bunch of small cleanups (comments, annotations, indentation)
RISC-V:
- Typo fix in arch/riscv/kvm/vmid.c
- Remove broken reference pattern from MAINTAINERS entry
x86-64:
- Fix error in page tables with MKTME enabled
- Dirty page tracking performance test extended to running a nested
guest
- Disable APICv/AVIC in cases that it cannot implement correctly"
[ This merge also fixes a misplaced end parenthesis bug introduced in
commit 36e91d4510ec ("KVM: x86: inhibit APICv/AVIC on changes to APIC
ID or APIC base") pointed out by Sean Christopherson ]
Link: https://lore.kernel.org/all/20220610191813.371682-1-seanjc@google.com/
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (34 commits)
KVM: selftests: Restrict test region to 48-bit physical addresses when using nested
KVM: selftests: Add option to run dirty_log_perf_test vCPUs in L2
KVM: selftests: Clean up LIBKVM files in Makefile
KVM: selftests: Link selftests directly with lib object files
KVM: selftests: Drop unnecessary rule for STATIC_LIBS
KVM: selftests: Add a helper to check EPT/VPID capabilities
KVM: selftests: Move VMX_EPT_VPID_CAP_AD_BITS to vmx.h
KVM: selftests: Refactor nested_map() to specify target level
KVM: selftests: Drop stale function parameter comment for nested_map()
KVM: selftests: Add option to create 2M and 1G EPT mappings
KVM: selftests: Replace x86_page_size with PG_LEVEL_XX
KVM: x86: SVM: fix nested PAUSE filtering when L0 intercepts PAUSE
KVM: x86: SVM: drop preempt-safe wrappers for avic_vcpu_load/put
KVM: x86: disable preemption around the call to kvm_arch_vcpu_{un|}blocking
KVM: x86: disable preemption while updating apicv inhibition
KVM: x86: SVM: fix avic_kick_target_vcpus_fast
KVM: x86: SVM: remove avic's broken code that updated APIC ID
KVM: x86: inhibit APICv/AVIC on changes to APIC ID or APIC base
KVM: x86: document AVIC/APICv inhibit reasons
KVM: x86/mmu: Set memory encryption "value", not "mask", in shadow PDPTRs
...
Linus Torvalds [Tue, 14 Jun 2022 14:43:15 +0000 (07:43 -0700)]
Merge tag 'x86-bugs-2022-06-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 MMIO stale data fixes from Thomas Gleixner:
"Yet another hw vulnerability with a software mitigation: Processor
MMIO Stale Data.
They are a class of MMIO-related weaknesses which can expose stale
data by propagating it into core fill buffers. Data which can then be
leaked using the usual speculative execution methods.
Mitigations include this set along with microcode updates and are
similar to MDS and TAA vulnerabilities: VERW now clears those buffers
too"
* tag 'x86-bugs-2022-06-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/speculation/mmio: Print SMT warning
KVM: x86/speculation: Disable Fill buffer clear within guests
x86/speculation/mmio: Reuse SRBDS mitigation for SBDS
x86/speculation/srbds: Update SRBDS mitigation selection
x86/speculation/mmio: Add sysfs reporting for Processor MMIO Stale Data
x86/speculation/mmio: Enable CPU Fill buffer clearing on idle
x86/bugs: Group MDS, TAA & Processor MMIO Stale Data mitigations
x86/speculation/mmio: Add mitigation for Processor MMIO Stale Data
x86/speculation: Add a common function for MD_CLEAR mitigation update
x86/speculation/mmio: Enumerate Processor MMIO Stale Data bug
Documentation: Add documentation for Processor MMIO Stale Data
Petr Machata [Mon, 13 Jun 2022 12:50:17 +0000 (15:50 +0300)]
mlxsw: spectrum_cnt: Reorder counter pools
Both RIF and ACL flow counters use a 24-bit SW-managed counter address to
communicate which counter they want to bind.
In a number of Spectrum FW releases, binding a RIF counter is broken and
slices the counter index to 16 bits. As a result, on Spectrum-2 and above,
no more than about 410 RIF counters can be effectively used. This
translates to 205 netdevices for which L3 HW stats can be enabled. (This
does not happen on Spectrum-1, because there are fewer counters available
overall and the counter index never exceeds 16 bits.)
Binding counters to ACLs does not have this issue. Therefore, reorder the
counter allocation scheme so that RIF counters come first and thus get
lower indices that are below the 16-bit barrier.
When calling setattr_prepare() to determine the validity of the
attributes the ia_{g,u}id fields contain the value that will be written
to inode->i_{g,u}id. This is exactly the same for idmapped and
non-idmapped mounts and allows callers to pass in the values they want
to see written to inode->i_{g,u}id.
When group ownership is changed a caller whose fsuid owns the inode can
change the group of the inode to any group they are a member of. When
searching through the caller's groups we need to use the gid mapped
according to the idmapped mount otherwise we will fail to change
ownership for unprivileged users.
Consider a caller running with fsuid and fsgid 1000 using an idmapped
mount that maps id 65534 to 1000 and 65535 to 1001. Consequently, a file
owned by 65534:65535 in the filesystem will be owned by 1000:1001 in the
idmapped mount.
The caller now requests the gid of the file to be changed to 1000 going
through the idmapped mount. In the vfs we will immediately map the
requested gid to the value that will need to be written to inode->i_gid
and place it in attr->ia_gid. Since this idmapped mount maps 65534 to
1000 we place 65534 in attr->ia_gid.
When we check whether the caller is allowed to change group ownership we
first validate that their fsuid matches the inode's uid. The
inode->i_uid is 65534 which is mapped to uid 1000 in the idmapped mount.
Since the caller's fsuid is 1000 we pass the check.
We now check whether the caller is allowed to change inode->i_gid to the
requested gid by calling in_group_p(). This will compare the passed in
gid to the caller's fsgid and search the caller's additional groups.
Since we're dealing with an idmapped mount we need to pass in the gid
mapped according to the idmapped mount. This is akin to checking whether
a caller is privileged over the future group the inode is owned by. And
that needs to take the idmapped mount into account. Note, all helpers
are nops without idmapped mounts.
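Conceptually, the check ends up looking like the sketch below (simplified;
gid_as_caller_sees() and uid_as_caller_sees() are hypothetical stand-ins for
the kernel's idmapped-mount mapping helpers, and the capability fallback is
omitted):

    #include <linux/cred.h>
    #include <linux/fs.h>
    #include <linux/uidgid.h>

    /* Hypothetical helpers: translate filesystem ids into the ids the caller
     * sees through the idmapped mount. */
    kgid_t gid_as_caller_sees(struct user_namespace *mnt_userns, kgid_t gid);
    kuid_t uid_as_caller_sees(struct user_namespace *mnt_userns, kuid_t uid);

    /* attr_gid is the value destined for inode->i_gid (filesystem view); the
     * group-membership test must use the caller's view of that gid. */
    static bool caller_may_chgrp(struct user_namespace *mnt_userns,
                                 const struct inode *inode, kgid_t attr_gid)
    {
            kgid_t mapped_gid = gid_as_caller_sees(mnt_userns, attr_gid);

            /* the caller's fsuid must own the file (compared in its own view) */
            if (!uid_eq(current_fsuid(),
                        uid_as_caller_sees(mnt_userns, inode->i_uid)))
                    return false;

            /* keep the current group, or be a member of the *mapped* group */
            return gid_eq(attr_gid, inode->i_gid) || in_group_p(mapped_gid);
    }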
The AMD XGbE driver currently counts the number of interrupts assigned
to the device by inspecting the pdev->resource array. Since commit 55098b87fc23 ("of/platform: Drop static setup of IRQ resource from DT
core") removed IRQs from this array, the driver now attempts to get all
interrupts from 1 to -1U and gives up probing once it reaches an invalid
interrupt index.
Obtain the number of IRQs with platform_irq_count() instead.
Fixes: 55098b87fc23 ("of/platform: Drop static setup of IRQ resource from DT core") Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Acked-by: Rob Herring <robh@kernel.org> Acked-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lore.kernel.org/r/20220609161457.69614-1-jean-philippe@linaro.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
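A minimal sketch of the replacement (illustrative, not the exact xgbe
probe hunk):

    #include <linux/errno.h>
    #include <linux/platform_device.h>

    /* Ask the platform core how many IRQs the device has instead of walking
     * pdev->resource, which no longer lists IRQs. */
    static int count_platform_irqs(struct platform_device *pdev)
    {
            int nr_irqs = platform_irq_count(pdev);

            if (nr_irqs < 0)
                    return nr_irqs;         /* may be -EPROBE_DEFER */
            if (nr_irqs == 0)
                    return -ENXIO;          /* no interrupts described */

            return nr_irqs;
    }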
Marco Bonelli [Thu, 9 Jun 2022 13:49:01 +0000 (15:49 +0200)]
ethtool: Fix and simplify ethtool_convert_link_mode_to_legacy_u32()
Fix the implementation of ethtool_convert_link_mode_to_legacy_u32(), which
is supposed to return false if src has bits higher than 31 set. The current
implementation uses the complement of bitmap_fill(ext, 32) to test high
bits of src, which is wrong as bitmap_fill() fills _with long granularity_,
and sizeof(long) can be > 4. No users of this function currently check the
return value, so the bug was dormant.
Also remove the check for __ETHTOOL_LINK_MODE_MASK_NBITS > 32, as the enum
ethtool_link_mode_bit_indices contains far beyond 32 values. Using
find_next_bit() to test the src bitmask works regardless of this anyway.
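The simplified conversion then reads roughly like this (a sketch of the
approach described above, not necessarily the exact upstream function):

    #include <linux/bitops.h>
    #include <linux/ethtool.h>

    /* Copy the low 32 link-mode bits and report whether any higher bit was
     * set, using find_next_bit() so the check is independent of sizeof(long). */
    static bool link_mode_to_legacy_u32(u32 *legacy_u32, const unsigned long *src)
    {
            *legacy_u32 = src[0];
            return find_next_bit(src, __ETHTOOL_LINK_MODE_MASK_NBITS, 32) ==
                   __ETHTOOL_LINK_MODE_MASK_NBITS;
    }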
Ofer Levi [Wed, 8 Jun 2022 20:04:52 +0000 (13:04 -0700)]
net/mlx5: Add bits and fields to support enhanced CQE compression
Expose ifc bits and add needed structure fields and methods to
support enhanced CQE compression feature.
The enhanced CQE compression feature improves CPU utilization and provides
better packet latency from NIC to host.
Signed-off-by: Ofer Levi <oferle@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Shay Drory [Wed, 8 Jun 2022 20:04:51 +0000 (13:04 -0700)]
net/mlx5: Remove not used MLX5_CAP_BITS_RW_MASK
Remove not used MLX5_CAP_BITS_RW_MASK.
While at it, remove CAP_MASK, MLX5_CAP_OFF_CMDIF_CSUM
and MLX5_DEV_CAP_FLAG_*, since MLX5_CAP_BITS_RW_MASK
was their only user.
Shay Drory [Wed, 8 Jun 2022 20:04:50 +0000 (13:04 -0700)]
net/mlx5: group fdb cleanup to single function
Currently, the allocation of fdb software objects is done in a single
function, as opposed to their cleanup.
Group the cleanup of fdb software objects into a single function.
usercopy: Make usercopy resilient against ridiculously large copies
If 'n' is so large that it's negative, we might wrap around and mistakenly
think that the copy is OK when it's not. Such a copy would probably
crash, but just doing the arithmetic in a more simple way lets us detect
and refuse this case.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com> Tested-by: Zorro Lang <zlang@redhat.com> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220612213227.3881769-4-willy@infradead.org
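The arithmetic change amounts to the pattern below (generic sketch, not
the exact mm/usercopy.c hunk):

    #include <linux/types.h>

    /* Bounds-check a copy of n bytes at ptr against a region [start, end).
     * Comparing n with the remaining space cannot wrap, unlike "ptr + n". */
    static bool copy_fits_region(unsigned long ptr, unsigned long n,
                                 unsigned long start, unsigned long end)
    {
            if (ptr < start || ptr >= end)
                    return false;           /* starts outside the region */

            return n <= end - ptr;          /* safe even for huge n */
    }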
vmalloc does not allocate a vm_struct for vm_map_ram() areas. That causes
us to deny usercopies from those areas. This affects XFS which uses
vm_map_ram() for its directories.
Fix this by calling find_vmap_area() instead of find_vm_area().
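In sketch form the check becomes something like the following (conceptual
only; the real code in mm/usercopy.c reports failures via usercopy_abort()
and handles more cases):

    #include <linux/vmalloc.h>

    /* vm_map_ram() areas have a struct vmap_area but no struct vm_struct, so
     * validating against find_vmap_area() covers them while find_vm_area()
     * does not. */
    static bool vmap_copy_ok(const void *ptr, unsigned long n)
    {
            unsigned long addr = (unsigned long)ptr;
            struct vmap_area *area = find_vmap_area(addr);

            if (!area)
                    return true;             /* not a vmalloc/vmap address */

            return n <= area->va_end - addr; /* must stay inside the mapping */
    }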
Eric Dumazet [Sat, 11 Jun 2022 03:30:16 +0000 (20:30 -0700)]
tcp: sk_forced_mem_schedule() optimization
sk_memory_allocated_add() has three callers, and returns
to them @memory_allocated.
sk_forced_mem_schedule() is one of them, and ignores
the returned value.
Change sk_memory_allocated_add() to return void.
Change sock_reserve_memory() and __sk_mem_raise_allocated()
to call sk_memory_allocated().
This removes one cache line miss [1] for RPC workloads,
as first skbs in TCP write queue and receive queue go through
sk_forced_mem_schedule().
[1] Cache line holding tcp_memory_allocated.
Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Reviewed-by: Shakeel Butt <shakeelb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
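In simplified form the interface change looks like this (sketch only; the
real helper also handles per-protocol accounting details omitted here):

    #include <net/sock.h>

    /* Before (simplified): returning the new total forces every caller to
     * read the hot memory_allocated cache line, even if it ignores it. */
    static long memory_allocated_add_old(struct sock *sk, int amt)
    {
            return atomic_long_add_return(amt, sk->sk_prot->memory_allocated);
    }

    /* After (simplified): return void; callers that need the total, such as
     * __sk_mem_raise_allocated(), read it via sk_memory_allocated() instead. */
    static void memory_allocated_add_new(struct sock *sk, int amt)
    {
            atomic_long_add(amt, sk->sk_prot->memory_allocated);
    }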
net: smsc95xx: add support for Microchip EVB-LAN8670-USB
This patch adds support for Microchip's EVB-LAN8670-USB 10BASE-T1S
ethernet device to the existing smsc95xx driver by adding the new
USB VID/PID pairs.
Signed-off-by: Parthiban Veerasooran <Parthiban.Veerasooran@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Yinjun Zhang [Mon, 13 Jun 2022 09:58:31 +0000 (11:58 +0200)]
nfp: support 48-bit DMA addressing for NFP3800
48-bit DMA addressing is supported in NFP3800 HW and implemented
in NFDK firmware, so enable this feature in driver now. Note that
with this change, NFD3 firmware, which doesn't implement 48-bit
DMA, cannot be used for NFP3800 any more.
The RX free list descriptor, used by both NFD3 and NFDK, is also modified
to support 48-bit DMA. That's OK because the top bits always get set to 0
when assigned a 40-bit address.
Based on initial work of Jakub Kicinski <jakub.kicinski@netronome.com>.
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com> Signed-off-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
We ran into multiple DMA TX errors while writing files over a network
block device running on top of a DMA-connected AXI Ethernet device on
64-bit RISC-V machines. The errors indicated that the DMA had fetched a
null descriptor, and we found that the reason for this is that the AXI DMA
had unexpectedly processed a partially updated tail descriptor pointer. To
fix it, we suggest that the driver should use one 64-bit write instead
of two 32-bit writes to perform such an update if possible. For those
architectures where double-word load/stores are unavailable, e.g. 32-bit
architectures, force a driver probe failure if the driver finds 64-bit
capability on the DMA.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Andy Chiu [Mon, 13 Jun 2022 03:42:02 +0000 (11:42 +0800)]
net: axienet: Use iowrite64 to write all 64b descriptor pointers
According to commit d75c3557c289 ("net: axienet: Autodetect 64-bit DMA
capability") and the AXI-DMA spec (pg021), on a 64-bit capable DMA, only
writing the MSB part of the tail descriptor pointer causes the DMA engine
to start fetching descriptors. However, we found that this is true only if
the DMA is in the idle state. In other words, the DMA would use a tail
pointer even if only its LSB has been updated, when the DMA is running.
The non-atomicity of this behavior could be problematic if enough
delay were introduced between the two writes. For example, if an
interrupt comes right after the LSB write and the CPU spends long
enough time in the handler for the DMA to get back into the idle state by
completing descriptors, then the second write, to the MSB, would cause the
DMA to start fetching descriptors again. Since the descriptor next to the
one pointed to by the current tail pointer is not yet filled by the kernel,
fetching a null descriptor here causes a DMA internal error and halts
the DMA engine.
We suggest that the DMA engine should start processing a 64-bit MMIO write
to the descriptor pointer only when ONE designated 32-bit part of it is
written, in all states. Alternatively, we should restrict the use of 64-bit
addressable DMA on 32-bit platforms, since those machines have no
instruction to guarantee that the writes to the LSB and MSB parts of the
tail pointer reach the DMA atomically.
Signed-off-by: Andy Chiu <andy.chiu@sifive.com> Reported-by: Max Hsu <max.hsu@sifive.com> Reviewed-by: Greentime Hu <greentime.hu@sifive.com> Signed-off-by: David S. Miller <davem@davemloft.net>
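The tail-pointer update then boils down to the following sketch
(illustrative; the driver wraps this in its own DMA register helpers):

    #include <linux/io.h>
    #include <linux/kernel.h>
    #include <linux/types.h>

    /* Publish a 64-bit tail descriptor pointer. On 64-bit kernels a single
     * doubleword MMIO store keeps the DMA engine from seeing a torn value. */
    static void dma_write_tail_ptr(void __iomem *reg, dma_addr_t addr)
    {
    #ifdef CONFIG_64BIT
            iowrite64(addr, reg);
    #else
            /* Two 32-bit stores are not atomic w.r.t. the DMA engine; the
             * companion patch refuses to probe 64-bit DMA on 32-bit kernels. */
            iowrite32(lower_32_bits(addr), reg);
            iowrite32(upper_32_bits(addr), reg + 4);
    #endif
    }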
Andy Chiu [Mon, 13 Jun 2022 03:42:01 +0000 (11:42 +0800)]
net: axienet: make the 64b addresable DMA depends on 64b archectures
Currently it is not safe to configure the IP as 64-bit addressable on
32-bit architectures, which cannot perform a double-word store on the
descriptor pointers. The pointer is 64 bits wide if the IP is configured
as 64-bit, and the device would process a partially updated pointer in
some states if the pointer was updated via two word stores. To prevent
such a condition, we force a probe failure if we discover that the IP has
64-bit capability but is not running on a 64-bit kernel.
This is patch 1/2 of a series. The next patch must be applied in order to
make 64b DMA safe on 64b architectures.
Signed-off-by: Andy Chiu <andy.chiu@sifive.com> Reported-by: Max Hsu <max.hsu@sifive.com> Reviewed-by: Greentime Hu <greentime.hu@sifive.com> Signed-off-by: David S. Miller <davem@davemloft.net>
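The guard itself is a small probe-time check along these lines
(illustrative pattern, not the exact axienet_probe() hunk):

    #include <linux/device.h>
    #include <linux/errno.h>
    #include <linux/kernel.h>

    /* Refuse to probe a 64-bit addressable DMA on a 32-bit kernel, where the
     * tail-pointer update cannot be done with a single doubleword store. */
    static int check_dma_addr_width(struct device *dev, unsigned int addr_width)
    {
            if (addr_width > 32 && !IS_ENABLED(CONFIG_64BIT)) {
                    dev_err(dev, "%u-bit DMA needs a 64-bit kernel\n", addr_width);
                    return -EINVAL;
            }
            return 0;
    }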
David S. Miller [Mon, 13 Jun 2022 11:01:58 +0000 (12:01 +0100)]
Merge branch 'ipa-refactoring'
Alex Elder says:
====================
net: ipa: simple refactoring
This series contains some minor code improvements.
The first patch verifies that the configuration is compatible with a
recently-defined limit. The second and third rename two fields so
they better reflect their use in the code. The next gets rid of an
empty function by reworking its only caller.
The last two begin to remove the assumption that an event ring is
associated with a single channel. Eventually we'll support having
multiple channels share an event ring but some more needs to be done
before that can happen.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Fri, 10 Jun 2022 15:46:15 +0000 (10:46 -0500)]
net: ipa: derive channel from transaction
In gsi_channel_tx_queued(), we report when a transaction gets passed
to hardware. Change that function so it takes a transaction rather
than a channel as its argument, and derive the channel from the
transaction. Rename the function accordingly.
Delete the header comments above the function definition; the ones
above the declaration in "gsi_private.h" should suffice. In
addition, the comments above gsi_channel_tx_update() do a fine job
of explaining what's going on.
Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Fri, 10 Jun 2022 15:46:14 +0000 (10:46 -0500)]
net: ipa: determine channel from event
Each event in an event ring describes the TRE whose completion
caused the event. Currently, every event ring is dedicated to a
single channel, so the channel is easily derived from the event
ring.
An event ring can actually be shared by more than one channel
though, and to distinguish events for one channel from another, the
event structure contains a field indicating which channel the event
is associated with.
In gsi_event_trans(), use the channel ID in an event to determine
which channel the event is for. This makes the channel pointer now
passed to that function irrelevant; pass the GSI pointer to that
function instead.
And although it shouldn't happen, warn if an event arrives that
records a channel ID that's not in use, or if the event does not
have a transaction associated with it.
Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
When a GSI transaction completes, ipa_endpoint_trans_complete() is
eventually called. That handles TX and RX completions separately,
but ipa_endpoint_tx_complete() is a no-op.
Instead, have ipa_endpoint_trans_complete() return immediately for a
TX transaction, and incorporate code from ipa_endpoint_rx_complete()
to handle RX transactions.
Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Fri, 10 Jun 2022 15:46:12 +0000 (10:46 -0500)]
net: ipa: rename endpoint->trans_tre_max
The trans_tre_max field of the IPA endpoint structure is only used
to limit the number of fragments allowed for an SKB being prepared
for transmission. Recognizing that, rename the field skb_frag_max,
and reduce its value by 1.
Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Fri, 10 Jun 2022 15:46:11 +0000 (10:46 -0500)]
net: ipa: rename channel->tlv_count
Each GSI channel has a TLV FIFO of a certain size, specified in the
configuration data for an AP channel. That size dictates the
maximum number of TREs that are allowed in a single transaction.
The only way that value is used after initialization is as a limit
on the number of TREs in a transaction; calling it "tlv_count"
isn't helpful, and in fact gsi_channel_trans_tre_max() exists to
sort of abstract it.
Instead, rename the channel->tlv_count field trans_tre_max, and get
rid of the helper function. Update a couple of comments as well.
Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Fri, 10 Jun 2022 15:46:10 +0000 (10:46 -0500)]
net: ipa: verify command channel TLV count
In commit 49d594ced0ae2 ("net: ipa: remove command info pool"), the
maximum number of IPA commands that would be sent in a single
transaction was defined. That number can't exceed the size of the
TLV FIFO on the command channel, and we can check that at runtime.
To add this check, pass a new flag to gsi_channel_data_valid() to
indicate the channel being checked is being used for IPA commands.
Knowing that we can also verify the channel direction is correct.
Use a new local variable that refers to the command-specific portion
of the data being checked.
Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Guangbin Huang [Sat, 11 Jun 2022 12:25:29 +0000 (20:25 +0800)]
net: hns3: fix tm port shapping of fibre port is incorrect after driver initialization
Currently, in the driver initialization process, the driver sets the
shaping parameters of the tm port to the default speed read from the
firmware. However, the speed of the SFP module may not be the default
speed, so the shaping parameters of the tm port may be incorrect.
To fix this problem, the driver sets new shaping parameters for the tm
port after getting the exact speed of the SFP module in this case.
Fixes: 59b0d93afd6c ("net: hns3: add support for multiple media type") Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jie Wang [Sat, 11 Jun 2022 12:25:28 +0000 (20:25 +0800)]
net: hns3: fix PF rss size initialization bug
Currently the hns3 driver misuses the VF rss size to initialize the PF rss
size in hclge_tm_vport_tc_info_update. This patch fixes it by checking the
vport id before initialization.
Fixes: 3a377817f335 ("net: hns3: refactor PF rss get APIs with new common rss get APIs") Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Guangbin Huang [Sat, 11 Jun 2022 12:25:27 +0000 (20:25 +0800)]
net: hns3: restore tm priority/qset to default settings when tc disabled
Currently, the parameters of schedule mode, dwrr and shaper of the tm
priority or qset of a tc are only set when the tc is enabled; they are not
restored to the default settings when the tc is disabled. This confuses
users when they cat the tm_priority or tm_qset files in debugfs. So this
patch fixes it.
Fixes: c701070a20f5 ("net: hns3: Add support of TX Scheduler & Shaper to HNS3 driver") Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jie Wang [Sat, 11 Jun 2022 12:25:26 +0000 (20:25 +0800)]
net: hns3: modify the ring param print info
Currently tx push is also a ring param. So the original ring param print
info in hns3_is_ringparam_changed should be adjusted.
Fixes: c9f8750fb780 ("net: hns3: refactor hns3_set_ringparam()") Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jian Shen [Sat, 11 Jun 2022 12:25:25 +0000 (20:25 +0800)]
net: hns3: don't push link state to VF if unalive
It's unnecessary to push the link state to an unalive VF; the VF will
query the link state from the PF when it starts working.
Fixes: 2d9269b81d97 ("net: hns3: PF add support for pushing link status to VFs") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Guangbin Huang [Sat, 11 Jun 2022 12:25:24 +0000 (20:25 +0800)]
net: hns3: set port base vlan tbl_sta to false before removing old vlan
When modifying the port base vlan, the port base vlan tbl_sta needs to be
set to false before removing the old vlan, to indicate this operation is
not finished.
Fixes: 1d81e34465b3 ("net: hns3: fix port base vlan add fail when concurrent with reset") Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Torvalds [Sun, 12 Jun 2022 18:33:42 +0000 (11:33 -0700)]
Merge tag 'platform-drivers-x86-v5.19-2' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86
Pull x86 platform driver fixes from Hans de Goede:
"Highlights:
- Fix hp-wmi regression on HP Omen laptops introduced in 5.18
- Several hardware-id additions
- A couple of other tiny fixes"
* tag 'platform-drivers-x86-v5.19-2' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86:
platform/x86/intel: hid: Add Surface Go to VGBS allow list
platform/x86: hp-wmi: Use zero insize parameter only when supported
platform/x86: hp-wmi: Resolve WMI query failures on some devices
platform/x86: gigabyte-wmi: Add support for B450M DS3H-CF
platform/x86: gigabyte-wmi: Add Z690M AORUS ELITE AX DDR4 support
platform/x86: barco-p50-gpio: Add check for platform_driver_register
platform/x86/intel: pmc: Support Intel Raptorlake P
platform/x86/intel: Fix pmt_crashlog array reference
platform/mellanox: Add static in struct declaration.
platform/mellanox: Spelling s/platfom/platform/