Florian Fainelli [Sat, 13 Feb 2021 03:46:31 +0000 (19:46 -0800)]
net: phy: broadcom: Remove unused flags
We have a number of unused flags defined today, and since we are scarce
on space and may need to introduce new flags in the future, remove the
unused flags and shift every remaining flag down into a contiguous
assignment.
PHY_BCM_FLAGS_MODE_1000BX was only used internally for the BCM54616S
PHY, so allocate a driver-private structure to store that flag instead
of cannibalizing one from phydev->dev_flags for that purpose.
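A minimal sketch of what such a driver-private structure can look like, assuming the usual phylib probe pattern (struct and field names here are illustrative, not necessarily the ones in the patch):

	/* Driver-private storage for the 1000BASE-X mode, instead of a
	 * phydev->dev_flags bit (sketch; names are illustrative).
	 */
	struct bcm54616s_phy_priv {
		bool mode_1000bx_en;
	};

	static int bcm54616s_probe(struct phy_device *phydev)
	{
		struct bcm54616s_phy_priv *priv;

		priv = devm_kzalloc(&phydev->mdio.dev, sizeof(*priv), GFP_KERNEL);
		if (!priv)
			return -ENOMEM;

		phydev->priv = priv;
		return 0;
	}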
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Fainelli [Sat, 13 Feb 2021 03:46:30 +0000 (19:46 -0800)]
net: phy: broadcom: Avoid forward for bcm54xx_config_clock_delay()
Avoid a forward declaration by moving the callers of
bcm54xx_config_clock_delay() below its body.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Lijun Pan [Sat, 13 Feb 2021 02:36:10 +0000 (20:36 -0600)]
ibmvnic: simplify reset_long_term_buff function
The only thing reset_long_term_buff() should do is set the buffer to
zero. After doing that, it is not necessary to send_request_map again
to VIOS, since the mapping does not actually change. So keep the
memset and remove everything else.
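In essence, the simplified function reduces to a sketch like this (the exact signature in the driver may differ):

	static int reset_long_term_buff(struct ibmvnic_long_term_buff *ltb)
	{
		/* Only zero the buffer; the existing VIOS mapping is
		 * unchanged, so no new send_request_map is needed.
		 */
		memset(ltb->buff, 0, ltb->size);
		return 0;
	}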
Signed-off-by: Lijun Pan <ljp@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Geliang Tang [Sat, 13 Feb 2021 00:52:02 +0000 (16:52 -0800)]
mptcp: add local addr info in mptcp_info
Add mptcpi_local_addr_used and mptcpi_local_addr_max in struct mptcp_info.
Signed-off-by: Geliang Tang <geliangtang@gmail.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
i40e: Fix incorrect argument in call to ipv6_addr_any()
It seems that the right argument to be passed is &tcp_ip6_spec->ip6dst,
not &tcp_ip6_spec->ip6src, when calling function ipv6_addr_any().
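A sketch of the corrected call; the surrounding flow director logic is omitted and assumed:

	/* Validate the destination address using ip6dst, not the
	 * copy-pasted ip6src (sketch).
	 */
	if (ipv6_addr_any((struct in6_addr *)&tcp_ip6_spec->ip6dst)) {
		/* destination is unspecified (::) */
	}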
Addresses-Coverity-ID: 1501734 ("Copy-paste error") Fixes: efca91e89b67 ("i40e: Add flow director support for IPv6") Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Rafał Miłecki [Fri, 12 Feb 2021 15:21:35 +0000 (16:21 +0100)]
net: broadcom: bcm4908_enet: set MTU on open & on request
Hardware comes up with the default max frame size set to 1518. When
used with a switch, this results in an actual Ethernet MTU of 1492:
1518 - 14 (Ethernet header) - 4 (Broadcom's tag) - 4 (802.1q) - 4 (FCS)
This means that in its default state the hardware can't handle standard
Ethernet traffic (MTU 1500).
Define the maximum possible Ethernet overhead and always set the MAC max
frame length accordingly. This change fixes handling of Ethernet frames
of length 1506 - 1514.
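A sketch of how the overhead can be expressed; the macro name and exact constants used by the driver are assumptions, chosen to match the numbers above:

	/* Worst-case overhead on top of the 1500-byte MTU:
	 * Ethernet header + Broadcom tag + 802.1Q tag + FCS = 26 bytes.
	 */
	#define ENET_MAX_ETH_OVERHEAD	(ETH_HLEN + 4 + VLAN_HLEN + ETH_FCS_LEN)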
Signed-off-by: Rafał Miłecki <rafal@milecki.pl> Signed-off-by: David S. Miller <davem@davemloft.net>
This series is the ethernet driver for Toshiba's ARM SoC, Visconti[0].
This provides DT binding documentation, device driver, MAINTAINER files,
and updates to DT files.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Add dwmac-visconti support to the stmmac driver for Toshiba Visconti ARM SoCs.
This patch contains only the basic functionality of the device. There is no
clock control, PM, etc. yet; these will be added in the future.
Signed-off-by: Nobuhiro Iwamatsu <nobuhiro1.iwamatsu@toshiba.co.jp> Signed-off-by: David S. Miller <davem@davemloft.net>
Stefan Chulski [Mon, 15 Feb 2021 15:23:42 +0000 (17:23 +0200)]
net: mvpp2: Add TX flow control support for jumbo frames
With an MTU of less than 1500B on all ports, the driver uses per-CPU pool mode.
If one of the ports is set to a jumbo frame MTU size, all ports move
to shared pools mode.
In that case, buffer manager TX Flow Control is reconfigured on all ports.
Signed-off-by: Stefan Chulski <stefanc@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Sat, 13 Feb 2021 20:46:32 +0000 (22:46 +0200)]
net: dsa: sja1105: make devlink property best_effort_vlan_filtering true by default
The sja1105 driver has a limitation, extensively described under
Documentation/networking/dsa/sja1105.rst and
Documentation/networking/devlink/sja1105.rst, which says that when the
ports are under a bridge with vlan_filtering=1, traffic to and from
the network stack is not possible, unless the driver-specific
best_effort_vlan_filtering devlink parameter is enabled.
For users, this creates a 'wtf' moment. They need to go to the
documentation and find out about the existence of this property, then
maybe install devlink and set it to true.
Having best_effort_vlan_filtering enabled by the kernel by default
delays that 'wtf' moment (maybe up to the point that it never even
happens). The user doesn't need to care that the driver supports
addressing the ports individually by retagging VLAN IDs until he/she
needs to use more than 32 VLAN IDs (since there can be at most 32
retagging rules). Only then do they need to think whether they need the
full VLAN table, at the expense of no individual port addressing, or
not.
But the odds that an sja1105 user will need more than 32 VLANs
terminated by the CPU are probably low. And, if we were to follow the
principle that more advanced use cases should require more advanced
preparation steps, then it makes more sense for ping to 'just work'
while CPU termination of > 32 VLAN IDs requires a bit more forethought
and possibly a driver-specific devlink param.
So we should be able to safely change the default here, and make this
driver act just a little bit more sanely out of the box.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 13 Feb 2021 14:26:34 +0000 (06:26 -0800)]
tcp: tcp_data_ready() must look at SOCK_DONE
My prior cleanup missed that tcp_data_ready() has to look at SOCK_DONE.
Otherwise, an application using SO_RCVLOWAT will not get EPOLLIN event
if a FIN is received in the middle of expected payload.
The reason SOCK_DONE is not examined in tcp_epollin_ready()
is that tcp_poll() catches the FIN anyway, because tcp_fin()
also sets RCV_SHUTDOWN in sk->sk_shutdown.
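The resulting check in tcp_data_ready() is roughly of this shape (a sketch based on the description above, not a verbatim copy of the patch):

	void tcp_data_ready(struct sock *sk)
	{
		if (tcp_epollin_ready(sk, sk->sk_rcvlowat) ||
		    sock_flag(sk, SOCK_DONE))
			sk->sk_data_ready(sk);
	}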
Fixes: 05dc72aba364 ("tcp: factorize logic into tcp_epollin_ready()") Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Wei Wang <weiwan@google.com> Cc: Arjun Roy <arjunroy@google.com> Reviewed-by: Wei Wang <weiwan@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 15 Feb 2021 21:15:10 +0000 (13:15 -0800)]
Merge branch 'br-next-fixes'
Vladimir Oltean says:
====================
Fix buggy brport flags offload for SJA1105 DSA
While testing the "Software fallback for bridging in DSA" on sja1105, I
discovered that I managed to introduce two bugs in a single patch
submitted recently to net-next.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Mon, 15 Feb 2021 21:09:12 +0000 (23:09 +0200)]
net: bridge: fix br_vlan_filter_toggle stub when CONFIG_BRIDGE_VLAN_FILTERING=n
The prototype of br_vlan_filter_toggle was updated to include a netlink
extack, but the stub definition wasn't, which results in a build error
when CONFIG_BRIDGE_VLAN_FILTERING=n.
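The stub has to grow the same extack argument, along these lines (sketch):

	static inline int br_vlan_filter_toggle(struct net_bridge *br,
						unsigned long val,
						struct netlink_ext_ack *extack)
	{
		return -EOPNOTSUPP;
	}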
Fixes: 9e781401cbfc ("net: bridge: propagate extack through store_bridge_parm") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Mon, 15 Feb 2021 21:09:11 +0000 (23:09 +0200)]
net: bridge: fix switchdev_port_attr_set stub when CONFIG_SWITCHDEV=n
The switchdev_port_attr_set function prototype was updated only for the
case where CONFIG_SWITCHDEV=y|m, leaving a prototype mismatch with the
stub definition for the disabled case. This results in a build error, so
update that function too.
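Likewise, the CONFIG_SWITCHDEV=n stub ends up shaped roughly like this (sketch):

	static inline int switchdev_port_attr_set(struct net_device *dev,
						  const struct switchdev_attr *attr,
						  struct netlink_ext_ack *extack)
	{
		return -EOPNOTSUPP;
	}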
Fixes: dcbdf1350e33 ("net: bridge: propagate extack through switchdev_port_attr_set") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Mon, 15 Feb 2021 13:31:43 +0000 (15:31 +0200)]
net: mscc: ocelot: avoid type promotion when calling ocelot_ifh_set_dest
Smatch is confused by the fact that a 32-bit BIT(port) macro is passed
as argument to the ocelot_ifh_set_dest function and warns:
ocelot_xmit() warn: should '(((1))) << (dp->index)' be a 64 bit type?
seville_xmit() warn: should '(((1))) << (dp->index)' be a 64 bit type?
The destination port mask is copied into a 12-bit field of the packet,
starting at bit offset 67 and ending at 56.
So this DSA tagging protocol supports at most 12 bits, which is clearly
less than 32. Attempting to send to a port number > 12 will cause the
packing() call to truncate way before there will be 32-bit truncation
due to type promotion of the BIT(port) argument towards u64.
Therefore, smatch's fears that BIT(port) will do the wrong thing and
cause unexpected truncation for "port" values >= 32 are unfounded.
Nonetheless, let's silence the warning by explicitly passing an u64
value to ocelot_ifh_set_dest, such that the compiler does not need to do
a questionable type promotion.
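In practice the change boils down to something like this (sketch):

	/* Pass an explicitly 64-bit destination mask so the compiler does
	 * not have to promote a 32-bit BIT() value (sketch).
	 */
	ocelot_ifh_set_dest(injection, BIT_ULL(dp->index));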
Reported-by: kernel test robot <lkp@intel.com> Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Sun, 14 Feb 2021 16:38:30 +0000 (17:38 +0100)]
r8169: fix resuming from suspend on RTL8105e if machine runs on battery
Armin reported that after the referenced commit his RTL8105e is dead
when resuming from suspend while the machine runs on battery. This patch
has been confirmed to fix the issue.
Fixes: e80bd76fbf56 ("r8169: work around power-saving bug on some chip versions") Reported-by: Armin Wolf <W_Armin@gmx.de> Tested-by: Armin Wolf <W_Armin@gmx.de> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
====================
Propagate extack for switchdev VLANs from DSA
This series moves the restriction messages printed by the DSA core, and
by some individual device drivers, into the netlink extended ack
structure, to be communicated to user space where possible, or still
printed to the kernel log from the bridge layer.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Sat, 13 Feb 2021 20:43:19 +0000 (22:43 +0200)]
net: dsa: propagate extack to .port_vlan_filtering
Some drivers can't dynamically change the VLAN filtering option, or
impose some restrictions; it would be nice to propagate this info
through netlink instead of printing it to a kernel log that might never
be read. Also, the netlink extack includes the module that emitted the
message, which makes it easier to tell which errors are
driver-generated as opposed to command misuse.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Sat, 13 Feb 2021 20:43:18 +0000 (22:43 +0200)]
net: dsa: propagate extack to .port_vlan_add
Allow drivers to communicate their restrictions to user space directly,
instead of printing to the kernel log. Where the conversion would have
been lossy and things like VLAN ID could no longer be conveyed (due to
the lack of support for printf format specifier in netlink extack), I
chose to keep the messages in full form to the kernel log only, and
leave it up to individual driver maintainers to move more messages to
extack.
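With the extack plumbed through, a driver can report its restriction directly to user space; a sketch of a hypothetical .port_vlan_add doing so ("foo" and the VID limit are made up for illustration):

	static int foo_port_vlan_add(struct dsa_switch *ds, int port,
				     const struct switchdev_obj_port_vlan *vlan,
				     struct netlink_ext_ack *extack)
	{
		if (vlan->vid >= 4000) {	/* hypothetical hardware limit */
			NL_SET_ERR_MSG_MOD(extack,
					   "VLAN IDs above 4000 not supported");
			return -EOPNOTSUPP;
		}

		return 0;
	}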
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Sat, 13 Feb 2021 20:43:17 +0000 (22:43 +0200)]
net: bridge: propagate extack through switchdev_port_attr_set
The benefit is the ability to propagate errors from switchdev drivers
for the SWITCHDEV_ATTR_ID_BRIDGE_VLAN_FILTERING and
SWITCHDEV_ATTR_ID_BRIDGE_VLAN_PROTOCOL attributes.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Sat, 13 Feb 2021 20:43:16 +0000 (22:43 +0200)]
net: bridge: propagate extack through store_bridge_parm
The bridge sysfs interface stores parameters for the STP, VLAN,
multicast etc subsystems using a predefined function prototype.
Sometimes the underlying function being called supports a netlink
extended ack message, and we ignore it.
Let's expand the store_bridge_parm function prototype to include the
extack, and just print it to console, but at least propagate it where
applicable. Where not applicable, create a shim function in the
br_sysfs_br.c file that discards the extra function argument.
This patch allows us to propagate the extack argument to
br_vlan_set_default_pvid, br_vlan_set_proto and br_vlan_filter_toggle,
and from there, further up in br_changelink from br_netlink.c.
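A sketch of what one of the converted setters passed to store_bridge_parm() looks like once it carries the extack (here a shim that simply ignores it; the example is illustrative):

	static int set_flush(struct net_bridge *br, unsigned long val,
			     struct netlink_ext_ack *extack)
	{
		br_fdb_flush(br);
		return 0;
	}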
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Sat, 13 Feb 2021 20:43:15 +0000 (22:43 +0200)]
net: bridge: remove __br_vlan_filter_toggle
This function is identical with br_vlan_filter_toggle.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Acked-by: Nikolay Aleksandrov <nikolay@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 15 Feb 2021 01:31:44 +0000 (17:31 -0800)]
Merge branch 'PTP-for-DSA-tag_ocelot_8021q'
Vladimir Oltean says:
====================
PTP for DSA tag_ocelot_8021q
Changes in v2:
Add stub definition for ocelot_port_inject_frame when switch driver is
not compiled in.
This is part two of the errata workaround begun here:
https://patchwork.kernel.org/project/netdevbpf/cover/20210129010009.3959398-1-olteanv@gmail.com/
Now that we have basic traffic support when we operate the Ocelot DSA
switches without an NPI port, it would be nice to regain some of the
features lost due to the lack of the NPI port functionality. An
important one is PTP timestamping, which is intimately tied to the DSA
frame header added by the NPI port: on TX, we put a "timestamp request
ID" in the Injection Frame Header, while on RX, the Extraction Frame
Header contains a partial 32-bit PTP timestamp. Get rid of the NPI port
and replace it with a VLAN-based tagger, and you lose PTP, right?
Well, not quite, this is what this patch series is about. The NPI port
is basically a regular Ethernet port configured to service the packets
in and out of the switch's CPU port module (which has other non-DSA I/O
mechanisms too, such as register-based MMIO and DMA). If we disable the
NPI port, we can in theory still access the packets delivered to the CPU
port module by doing exactly what the ocelot switchdev driver does:
extracting Ethernet packets through registers (yes, it is as icky as it
sounds).
However, there's a catch. The Felix switch was integrated into NXP
LS1028A with the idea in mind that it will operate as DSA, i.e. using
the CPU port module connected to the NPI port, not having I/O over
register-based MMIO which is painfully slow and CPU intensive. So
register-based packet I/O is not supposed to work - those registers aren't
even documented in the hardware reference manual for Felix. However
they kinda do, with the exception of the fact that an RX interrupt was
really not wired to the CPU cores - so we don't know when the CPU port
module receives a new packet. But we can hack even around that, by
replicating every packet that goes to the CPU port module and making it
also go to a plain internal Ethernet port. Then drop the Ethernet packet
and read the other copy of it from the CPU port module, this time
annotated with the much-wanted RX timestamp.
This is all fine and it works, but it does raise some questions about
what DSA even is anymore, if we start having switches that inject some
of their packets over Ethernet and some through registers, where do we
draw the line. In principle I believe these concerns are founded, but at
the same time, the way that the Felix driver uses register-MMIO-based
packet I/O is fundamentally the same as the way any other PTP-capable
DSA driver makes use of a side channel for timestamps, like a FIFO
(just that this one is a lot more complicated, and comes with the entire
actual packet, not just the timestamp).
Nonetheless, I tried to keep the extra pressure added by this ERR
workaround upon the DSA subsystem as small as possible, so some of the
patches are just a revisit of some of Andrew's complaints w.r.t. the
fact that tag_ocelot already violates any driver <-> tagger boundary,
and as a consequence, is not able to be used on testbeds such as
dsa_loop (which it now can). So now, the tag_ocelot and tag_ocelot_8021q
drivers should be dsa_loop-clean, and have the ERR workarounds as
self-contained as possible, using all the designated features for PTP
timestamping and nothing more.
Comments appreciated.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Sat, 13 Feb 2021 22:38:01 +0000 (00:38 +0200)]
net: dsa: tag_ocelot_8021q: add support for PTP timestamping
For TX timestamping, we use the felix_txtstamp method which is common
with the regular (non-8021q) ocelot tagger. This method says that skb
deferral is needed, prepares a timestamp request ID, and puts a clone of
the skb in a queue waiting for the timestamp IRQ.
felix_txtstamp is called by dsa_skb_tx_timestamp() just before the
tagger's xmit method. In the tagger xmit, we divert the packets
classified by dsa_skb_tx_timestamp() as PTP towards the MMIO-based
injection registers, and we declare them as dead towards dsa_slave_xmit.
If not PTP, we proceed with normal tag_8021q stuff.
Then the timestamp IRQ fires, the clone queued up from felix_txtstamp is
matched to the TX timestamp retrieved from the switch's FIFO based on
the timestamp request ID, and the clone is delivered to the stack.
On RX, thanks to the VCAP IS2 rule that redirects the frames with an
EtherType for 1588 towards two destinations:
- the CPU port module (for MMIO based extraction) and
- if the "no XTR IRQ" workaround is in place, the dsa_8021q CPU port
the relevant data path processing starts in the ptp_classify_raw BPF
classifier installed by DSA in the RX data path (post tagger, which is
completely unaware that it saw a PTP packet).
This time we can't reuse the same implementation of .port_rxtstamp that
also works with the default ocelot tagger. That is because felix_rxtstamp
is given an skb with a freshly stripped DSA header, and it says "I don't
need deferral for its RX timestamp, it's right in it, let me show you";
and it just points to the header right behind skb->data, from where it
unpacks the timestamp and annotates the skb with it.
The same thing cannot happen with tag_ocelot_8021q, because for one
thing, the skb did not have an extraction frame header in the first
place, but a VLAN tag with no timestamp information. So the code paths
in felix_rxtstamp for the regular and 8021q tagger are completely
independent. With tag_8021q, the timestamp must come from the packet's
duplicate delivered to the CPU port module, but there is potentially
complex logic to be handled [ and prone to reordering ] if we were to
just start reading packets from the CPU port module, and try to match
them to the one we received over Ethernet and which needs an RX
timestamp. So we do something simple: we tell DSA "give me some time to
think" (we request skb deferral by returning false from .port_rxtstamp)
and we just drop the frame we got over Ethernet with no attempt to match
it to anything - we just treat it as a notification that there's data to
be processed from the CPU port module's queues. Then we proceed to read
the packets from those, one by one, which we deliver up the stack,
timestamped, using netif_rx - the same function that any driver would
use anyway if it needed RX timestamp deferral. So the assumption is that
we'll come across the PTP packet that triggered the CPU extraction
notification eventually, but we don't know when exactly. Thanks to the
VCAP IS2 trap/redirect rule and the exclusion of the CPU port module
from the flooding replicators, only PTP frames should be present in the
CPU port module's RX queues anyway.
There is just one conflict between the VCAP IS2 trapping rule and the
semantics of the BPF classifier. Namely, ptp_classify_raw() deems
general messages as non-timestampable, but still, those are trapped to
the CPU port module since they have an EtherType of ETH_P_1588. So, if
the "no XTR IRQ" workaround is in place, we need to run another BPF
classifier on the frames extracted over MMIO, to avoid duplicates being
sent to the stack (once over Ethernet, once over MMIO). It doesn't look
like it's possible to install VCAP IS2 rules based on keys extracted
from the 1588 frame headers.
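Conceptually, the extra classification on the MMIO path amounts to a check like this (a sketch; the function name and the surrounding extraction loop are assumptions):

	#include <linux/ptp_classify.h>

	/* Frames trapped by EtherType include PTP general messages, which
	 * ptp_classify_raw() deems non-timestampable; skip those here so
	 * they are not delivered twice (once over Ethernet, once over MMIO).
	 */
	static bool ocelot_8021q_ptp_skb_is_event(struct sk_buff *skb)
	{
		return ptp_classify_raw(skb) != PTP_CLASS_NONE;
	}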
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Sat, 13 Feb 2021 22:38:00 +0000 (00:38 +0200)]
net: dsa: felix: setup MMIO filtering rules for PTP when using tag_8021q
Since the tag_8021q tagger is software-defined, it has no means by
itself for retrieving hardware timestamps of PTP event messages.
Because we do want to support PTP on ocelot even with tag_8021q, we need
to use the CPU port module for that. The RX timestamp is present in the
Extraction Frame Header. And because we can't use NPI mode which redirects
the CPU queues to an "external CPU" (meaning the ARM CPU running Linux),
then we need to poll the CPU port module through the MMIO registers to
retrieve TX and RX timestamps.
Sadly, on NXP LS1028A, the Felix switch was integrated into the SoC
without wiring the extraction IRQ line to the ARM GIC. So, if we want to
be notified of any PTP packets received on the CPU port module, we have
a problem.
There is a possible workaround, which is to use the Ethernet CPU port as
a notification channel that packets are available on the CPU port module
as well. When a PTP packet is received by the DSA tagger (without timestamp,
of course), we go to the CPU extraction queues, poll for it there, then
we drop the original Ethernet packet and masquerade the packet retrieved
over MMIO (plus the timestamp) as the original when we inject it up the
stack.
Create a quirk in struct felix that is selected by the Felix driver (but not
by Seville, since that doesn't support PTP at all). We want to do this
such that the workaround is minimally invasive for future switches that
don't require this workaround.
The only traffic for which we need timestamps is PTP traffic, so add a
redirection rule to the CPU port module for this. Currently we only have
the need for PTP over L2, so redirection rules for UDP ports 319 and 320
are TBD for now.
Note that for the workaround of matching of PTP-over-Ethernet-port with
PTP-over-MMIO queues to work properly, both channels need to be
absolutely lossless. There are two parts to achieving that:
- We keep flow control enabled on the tag_8021q CPU port
- We put the DSA master interface in promiscuous mode, so it will never
drop a PTP frame (for the profiles we are interested in, these are
sent to the multicast MAC addresses of 01-80-c2-00-00-0e and
01-1b-19-00-00-00).
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Sat, 13 Feb 2021 22:37:59 +0000 (00:37 +0200)]
net: mscc: ocelot: refactor ocelot_xtr_irq_handler into ocelot_xtr_poll
Since the felix DSA driver will need to poll the CPU port module for
extracted frames as well, let's create some common functions that read
an Extraction Frame Header, and then an skb, from a CPU extraction
group.
We abuse the struct ocelot_ops :: port_to_netdev function a little bit,
in order to retrieve the DSA port net_device or the ocelot switchdev
net_device based on the source port information from the Extraction
Frame Header, but it's all in the benefit of code simplification -
netdev_alloc_skb needs it. Originally, the port_to_netdev method was
intended for parsing act->dev from tc flower offload code.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Sat, 13 Feb 2021 22:37:58 +0000 (00:37 +0200)]
net: dsa: tag_ocelot: create separate tagger for Seville
The ocelot tagger is a hot mess currently, it relies on memory
initialized by the attached driver for basic frame transmission.
This is against all that DSA tagging protocols stand for, which is that
the transmission and reception of a DSA-tagged frame, the data path,
should be independent from the switch control path, because the tag
protocol is in principle hot-pluggable and reusable across switches
(even if in practice it wasn't until very recently). But if another
driver like dsa_loop wants to make use of tag_ocelot, it couldn't.
This was done to have common code between Felix and Ocelot, which have
one bit difference in the frame header format. Quoting from commit 67c2404922c2 ("net: dsa: felix: create a template for the DSA tags on
xmit"):
Other alternatives have been analyzed, such as:
- Create a separate tag_seville.c: too much code duplication for just 1
bit field difference.
- Create a separate DSA_TAG_PROTO_SEVILLE under tag_ocelot.c, just like
tag_brcm.c, which would have a separate .xmit function. Again, too
much code duplication for just 1 bit field difference.
- Allocate the template from the init function of the tag_ocelot.c
module, instead of from the driver: couldn't figure out a method of
accessing the correct port template corresponding to the correct
tagger in the .xmit function.
The really interesting part is that Seville should have had its own
tagging protocol defined - it is not compatible on the wire with Ocelot,
even for that single bit. In principle, a packet generated by
DSA_TAG_PROTO_OCELOT when booted on NXP LS1028A would look in a certain
way, but when booted on NXP T1040 it would look differently. The reverse
is also true: a packet generated by a Seville switch would be
interpreted incorrectly by Wireshark if it was told it was generated by
an Ocelot switch.
Actually things are a bit more nuanced. If we concentrate only on the
DSA tag, what I said above is true, but Ocelot/Seville also support an
optional DSA tag prefix, which can be short or long, and it is possible
to distinguish the two taggers based on an integer constant put in that
prefix. Nonetheless, creating a separate tagger is still justified,
since the tag prefix is optional, and without it, there is again no way
to distinguish.
Claiming backwards binary compatibility is a bit more tough, since I've
already changed the format of tag_ocelot once, in commit 5124197ce58b
("net: dsa: tag_ocelot: use a short prefix on both ingress and egress").
Therefore I am not very concerned with treating this as a bugfix and
backporting it to stable kernels (which would be another mess due to the
fact that there would be lots of conflicts with the other DSA_TAG_PROTO*
definitions). It's just simpler to say that the string values of the
taggers have ABI value starting with kernel 5.12, which will be when the
changing of tag protocol via /sys/class/net/<dsa-master>/dsa/tagging
goes live.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Sat, 13 Feb 2021 22:37:57 +0000 (00:37 +0200)]
net: dsa: tag_ocelot: single out PTP-related transmit tag processing
There is one place where we cannot avoid accessing driver data, and that
is 2-step PTP TX timestamping, since the switch wants us to provide a
timestamp request ID through the injection header, which naturally must
come from a sequence number kept by the driver (it is generated by the
.port_txtstamp method prior to the tagger's xmit).
However, since other drivers like dsa_loop do not claim PTP support
anyway, the DSA_SKB_CB(skb)->clone will always be NULL anyway, so if we
move all PTP-related dereferences of struct ocelot and struct ocelot_port
into a separate function, we can effectively ensure that this is dead
code when the ocelot tagger is attached to non-ocelot switches, and the
stateful portion of the tagger is more self-contained.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Sat, 13 Feb 2021 22:37:56 +0000 (00:37 +0200)]
net: mscc: ocelot: use common tag parsing code with DSA
The Injection Frame Header and Extraction Frame Header that the switch
prepends to frames over the NPI port is also prepended to frames
delivered over the CPU port module's queues.
Let's unify the handling of the frame headers by making the ocelot
driver call some helpers exported by the DSA tagger. Among other things,
this allows us to get rid of the strange cpu_to_be32 when transmitting
the Injection Frame Header on ocelot, since the packing API uses
network byte order natively (when "quirks" is 0).
The comments above ocelot_gen_ifh talk about setting pop_cnt to 3, and
the cpu extraction queue mask to something, but the code doesn't do it,
so we don't do it either.
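For reference, the shared field accessors are built on top of the generic packing() helper, whose network-byte-order behaviour with quirks == 0 is what removes the need for the cpu_to_be32; a usage sketch (the bit offsets passed by callers are placeholders, not the real IFH layout):

	#include <linux/packing.h>

	/* Pack a value into bits [startbit:endbit] of the 16-byte frame
	 * header in network byte order (quirks == 0). Sketch only.
	 */
	static int ifh_set_field(void *ifh, u64 val, int startbit, int endbit)
	{
		return packing(ifh, &val, startbit, endbit, 16, PACK, 0);
	}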
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Sat, 13 Feb 2021 22:37:55 +0000 (00:37 +0200)]
net: dsa: tag_ocelot: avoid accessing ds->priv in ocelot_rcv
Taggers should be written to do something valid irrespective of the
switch driver that they are attached to. This is even more true now,
because since the introduction of the .change_tag_protocol method, a
certain tagger is not necessarily strictly associated with a driver any
longer, and I would like to be able to test all taggers with dsa_loop in
the future.
In the case of ocelot, it needs to move the classified VLAN from the DSA
tag into the skb if the port is VLAN-aware. We can allow it to do that
by looking at the dp->vlan_filtering property, no need to invoke
structures which are specific to ocelot.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Sat, 13 Feb 2021 22:37:54 +0000 (00:37 +0200)]
net: mscc: ocelot: refactor ocelot_port_inject_frame out of ocelot_port_xmit
The felix DSA driver will inject some frames through register MMIO, same
as ocelot switchdev currently does. So we need to be able to reuse the
common code.
Also create some shim definitions, since the DSA tagger can be compiled
without support for the switch driver.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Sat, 13 Feb 2021 22:37:53 +0000 (00:37 +0200)]
net: mscc: ocelot: use DIV_ROUND_UP helper in ocelot_port_inject_frame
This looks a bit nicer than the open-coded "(x + 3) / 4" idiom.
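I.e. something of this shape (sketch):

	/* number of 32-bit words needed to cover the frame, rounded up */
	count = DIV_ROUND_UP(skb->len, 4);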
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Sat, 13 Feb 2021 22:37:52 +0000 (00:37 +0200)]
net: mscc: ocelot: better error handling in ocelot_xtr_irq_handler
The ocelot_rx_frame_word() function can return a negative error code,
however this isn't being checked for consistently. Errors being ignored
have not been seen in practice though.
Also, some constructs can be simplified by using "goto" instead of
repeated "break" statements.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Sat, 13 Feb 2021 22:37:51 +0000 (00:37 +0200)]
net: mscc: ocelot: only drain extraction queue on error
It appears that the intention of this snippet of code is to not exit
ocelot_xtr_irq_handler() while in the middle of extracting a frame.
The problem in extracting it word by word is that future extraction
attempts are really easy to get desynchronized, since the IRQ handler
assumes that the first 16 bytes are the IFH, which give further
information about the frame, such as frame length.
But during normal operation, "err" will not be 0, but 4, set from here:
	for (i = 0; i < OCELOT_TAG_LEN / 4; i++) {
		err = ocelot_rx_frame_word(ocelot, grp, true, &ifh[i]);
		if (err != 4)
			break;
	}

	if (err != 4)
		break;
In that case, draining the extraction queue is a no-op. So explicitly
make this code execute only on negative err.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Sat, 13 Feb 2021 22:37:50 +0000 (00:37 +0200)]
net: mscc: ocelot: stop returning IRQ_NONE in ocelot_xtr_irq_handler
Since the xtr (extraction) IRQ of the ocelot switch is not shared, then
if it fired, it means that some data must be present in the queues of
the CPU port module. So simplify the code.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
This series implements some optimizations to error recovery. One
patch adds an echo/reply mechanism with the firmware to enhance error
detection. The other patches speed up the recovery process by
polling config space earlier and by selectively initializing
context memory during re-initialization.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Feb 2021 23:05:01 +0000 (18:05 -0500)]
bnxt_en: Improve logging of error recovery settings information.
We currently only log the error recovery settings if it is enabled.
In some cases, firmware disables error recovery after it was
initially enabled. Without logging anything, the user will not be
aware of this change in setting.
Log it when error recovery is disabled. Also, change the reset count
value from hexadecimal to decimal.
Reviewed-by: Edwin Peer <edwin.peer@broadcom.com> Reviewed-by: Pavan Chebbi <pavan.chebbi@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Feb 2021 23:05:00 +0000 (18:05 -0500)]
bnxt_en: Reply to firmware's echo request async message.
This is a new async message that the firmware can send to check if it
can communicate with the driver. This is an added error detection
scheme that firmware can use if it suspects errors in the PCIe
interface. When the driver receives this async message, it will reply
back echoing some data in the async message. If the firmware is not
getting the reply with the proper data after some retries, error
recovery will kick in.
Reviewed-by: Andy Gospodarek <gospo@broadcom.com> Reviewed-by: Edwin Peer <edwin.peer@broadcom.com> Reviewed-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Feb 2021 23:04:59 +0000 (18:04 -0500)]
bnxt_en: Initialize "context kind" field for context memory blocks.
If firmware provides the offset to the "context kind" field of the
relevant context memory blocks, we'll initialize just that field for
each block instead of initializing all of context memory.
Populate the bnxt_mem_init structure with the proper offset returned
by firmware. If it is older firmware and the information is not
available, we set the offset to an invalid value and fall back to
the old behavior of initializing every byte. Otherwise, we initialize
only the "context kind" byte at the offset.
Reviewed-by: Edwin Peer <edwin.peer@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Currently, the driver calls memset() to set all relevant context memory
used by the chip to the initial value. This can take many milliseconds
with the potentially large number of context pages allocated for the
chip.
To make this faster, we only need to initialize the "context kind" field
of each block of context memory. This patch sets up the infrastructure
to do that with the bnxt_mem_init structure. In the next patch, we'll
add the logic to obtain the offset of the "context kind" from the
firmware. This patch is not changing the current behavior of calling
memset() to initialize all relevant context memory.
Reviewed-by: Pavan Chebbi <pavan.chebbi@broadcom.com> Reviewed-by: Edwin Peer <edwin.peer@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Sun, 14 Feb 2021 23:04:57 +0000 (18:04 -0500)]
bnxt_en: Implement faster recovery for firmware fatal error.
During some fatal firmware error conditions, the PCI config space
register 0x2e which normally contains the subsystem ID will become
0xffff. This register will revert back to the normal value after
the chip has completed core reset. If we detect this condition,
we can poll this config register immediately for the value to revert.
Because we use config read cycles to poll this register, there is no
possibility of Master Abort if we happen to read it during core reset.
This speeds up recovery significantly as we don't have to wait for the
conservative min_time before polling MMIO to see if the firmware has
come out of reset. As soon as this register changes value we can
proceed to re-initialize the device.
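Conceptually, the poll looks something like this (a sketch; the retry budget and sleep interval are assumptions, not taken from the patch):

	/* Register 0x2e (PCI_SUBSYSTEM_ID) reads 0xffff while the chip is
	 * in core reset and reverts once reset completes. Config reads
	 * cannot Master Abort, so polling here is safe.
	 */
	static bool bnxt_subsys_id_restored(struct pci_dev *pdev, int retries)
	{
		u16 val;

		while (retries--) {
			pci_read_config_word(pdev, PCI_SUBSYSTEM_ID, &val);
			if (val != 0xffff)
				return true;
			msleep(50);	/* assumed polling interval */
		}

		return false;
	}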
Reviewed-by: Edwin Peer <edwin.peer@broadcom.com> Reviewed-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com> Reviewed-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Edwin Peer [Sun, 14 Feb 2021 23:04:56 +0000 (18:04 -0500)]
bnxt_en: selectively allocate context memories
Newer devices may have local context memory instead of relying on the
host for backing store. In these cases, HWRM_FUNC_BACKING_STORE_QCAPS
will return a zero entry size to indicate contexts for which the host
should not allocate backing store.
Selectively allocate context memory based on device capabilities and
only enable backing store for the appropriate contexts.
Signed-off-by: Edwin Peer <edwin.peer@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
====================
skbuff: introduce skbuff_heads bulking and reusing
Currently, all sorts of skb allocation always allocate
skbuff_heads one by one via kmem_cache_alloc().
On the other hand, we have percpu napi_alloc_cache to store
skbuff_heads queued up for freeing and flush them by bulks.
We can use this cache not only for bulk-wiping, but also to obtain
heads for new skbs and avoid unconditional allocations, as well as
for bulk-allocating (like XDP's cpumap code and veth driver already
do).
As this might affect latencies, cache pressure and lots of hardware-
and driver-dependent stuff, this new feature is mostly optional and
can be enabled via any of the following (a short usage sketch follows
the list below):
- a new napi_build_skb() function (as a replacement for build_skb());
- existing {,__}napi_alloc_skb() and napi_get_frags() functions;
- __alloc_skb() with passing SKB_ALLOC_NAPI in flags.
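For a driver, opting in from the Rx path is a one-line affair; a minimal usage sketch (rx_buf and its fields are hypothetical driver state):

	/* In the NAPI poll routine: obtain the skbuff_head from the per-CPU
	 * NAPI cache instead of a fresh kmem_cache_alloc().
	 */
	skb = napi_build_skb(rx_buf->data, rx_buf->truesize);
	if (unlikely(!skb))
		goto drop;	/* recycle the buffer */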
iperf3 showed 35-70 Mbps bumps for both TCP and UDP while performing
VLAN NAT on a 1.2 GHz MIPS board. The boost is likely to be bigger
on more powerful hosts and NICs with tens of Mpps.
Note on skbuff_heads from distant slabs or pfmemalloc'ed slabs:
- kmalloc()/kmem_cache_alloc() itself allows by default allocating
memory from the remote nodes to defragment their slabs. This is
controlled by sysctl, but according to this, skbuff_head from a
remote node is an OK case;
- The easiest way to check if the slab of skbuff_head is remote or
pfmemalloc'ed is:
if (!dev_page_is_reusable(virt_to_head_page(skb)))
/* drop it */;
...*but*, given that most slabs are built of compound pages,
virt_to_head_page() will hit the unlikely branch on every single call.
This check cost at least 20 Mbps in test scenarios, so it seems
better to _not_ do it.
Since v5 [4]:
- revert flags-to-bool conversion and simplify flags testing in
__alloc_skb() (Alexander Duyck).
Since v4 [3]:
- rebase on top of net-next and address kernel build robot issue;
- reorder checks a bit in __alloc_skb() to make new condition even
more harmless.
Since v3 [2]:
- make the feature mostly optional, so driver developers could
decide whether to use it or not (Paolo Abeni).
This reuses the old flag for __alloc_skb() and introduces
a new napi_build_skb();
- reduce bulk-allocation size from 32 to 16 elements (also Paolo).
This equals to the value of XDP's devmap and veth batch processing
(which were tested a lot) and should be sane enough;
- don't waste cycles on explicit in_serving_softirq() check.
Since v2 [1]:
- also cover {,__}alloc_skb() and {,__}build_skb() cases (became handy
after the changes that pass tiny skbs requests to kmalloc layer);
- cover the cache with KASAN instrumentation (suggested by Eric
Dumazet, help of Dmitry Vyukov);
- completely drop redundant __kfree_skb_flush() (also Eric);
- lots of code cleanups;
- expand the commit message with NUMA and pfmemalloc points (Jakub).
Since v1 [0]:
- use one unified cache instead of two separate to greatly simplify
the logics and reduce hotpath overhead (Edward Cree);
- new: recycle also GRO_MERGED_FREE skbs instead of immediate
freeing;
- correct performance numbers after optimizations and performing
lots of tests for different use cases.
skbuff: queue NAPI_MERGED_FREE skbs into NAPI cache instead of freeing
napi_frags_finish() and napi_skb_finish() can only be called inside
NAPI Rx context, so we can feed NAPI cache with skbuff_heads that
got NAPI_MERGED_FREE verdict instead of immediate freeing.
Replace __kfree_skb() with __kfree_skb_defer() in napi_skb_finish()
and move napi_skb_free_stolen_head() to skbuff.c, so it can drop skbs
to NAPI cache.
As many drivers call napi_alloc_skb()/napi_get_frags() on their
receive path, this becomes especially useful.
Signed-off-by: Alexander Lobakin <alobakin@pm.me> Signed-off-by: David S. Miller <davem@davemloft.net>
skbuff: allow to use NAPI cache from __napi_alloc_skb()
{,__}napi_alloc_skb() is mostly used for optional non-linear
receive methods (usually controlled via Ethtool private flags and off
by default) and/or for Rx copybreaks.
Use __napi_build_skb() here for obtaining skbuff_heads from NAPI cache
instead of inplace allocations. This includes both kmalloc and page
frag paths.
Signed-off-by: Alexander Lobakin <alobakin@pm.me> Signed-off-by: David S. Miller <davem@davemloft.net>
skbuff: allow to optionally use NAPI cache from __alloc_skb()
Reuse the old and forgotten SKB_ALLOC_NAPI to add an option to get
an skbuff_head from the NAPI cache instead of inplace allocation
inside __alloc_skb().
This implies that the function is called from softirq or BH-off
context, not for allocating a clone or from a distant node.
Cc: Alexander Duyck <alexander.duyck@gmail.com> # Simplified flags check Signed-off-by: Alexander Lobakin <alobakin@pm.me> Signed-off-by: David S. Miller <davem@davemloft.net>
skbuff: introduce {,__}napi_build_skb() which reuses NAPI cache heads
Instead of just bulk-flushing skbuff_heads queued up through
napi_consume_skb() or __kfree_skb_defer(), try to reuse them
on allocation path.
If the cache is empty on allocation, bulk-allocate the first
16 elements, which is more efficient than per-skb allocation.
If the cache is full on freeing, bulk-wipe the second half of
the cache (32 elements).
This also includes custom KASAN poisoning/unpoisoning to be
double sure there are no use-after-free cases.
To not change current behaviour, introduce a new function,
napi_build_skb(), to optionally use a new approach later
in drivers.
Note on selected bulk size, 16:
- this equals to XDP_BULK_QUEUE_SIZE, DEV_MAP_BULK_SIZE
and especially VETH_XDP_BATCH, which is also used to
bulk-allocate skbuff_heads and was tested on powerful
setups;
- this also showed the best performance in the actual
test series (from the array of {8, 16, 32}).
Suggested-by: Edward Cree <ecree.xilinx@gmail.com> # Divide on two halves Suggested-by: Eric Dumazet <edumazet@google.com> # KASAN poisoning Cc: Dmitry Vyukov <dvyukov@google.com> # Help with KASAN Cc: Paolo Abeni <pabeni@redhat.com> # Reduced batch size Signed-off-by: Alexander Lobakin <alobakin@pm.me> Signed-off-by: David S. Miller <davem@davemloft.net>
This function isn't much needed, as the NAPI skb queue gets bulk-freed
anyway when there's no more room, and it may even reduce the efficiency
of bulk operations.
It will be even less needed after reusing skb cache on allocation path,
so remove it and this way lighten network softirqs a bit.
Suggested-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Alexander Lobakin <alobakin@pm.me> Signed-off-by: David S. Miller <davem@davemloft.net>
Ever since the introduction of __kmalloc_reserve(), the "ip" argument
hasn't been used. _RET_IP_ is embedded inside
kmalloc_node_track_caller().
Remove the redundant macro and rename the function after it.
Signed-off-by: Alexander Lobakin <alobakin@pm.me> Signed-off-by: David S. Miller <davem@davemloft.net>
skbuff: move __alloc_skb() next to the other skb allocation functions
In preparation before reusing several functions in all three skb
allocation variants, move __alloc_skb() next to the
__netdev_alloc_skb() and __napi_alloc_skb().
No functional changes.
Signed-off-by: Alexander Lobakin <alobakin@pm.me> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 13 Feb 2021 01:38:53 +0000 (17:38 -0800)]
Merge branch 'Xilinx-axienet-updates'
Robert Hancock says:
====================
Xilinx axienet updates
Updates to the Xilinx AXI Ethernet driver to add support for an additional
ethtool operation, and to support dynamic switching between 1000BaseX and
SGMII interface modes.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Robert Hancock [Sat, 13 Feb 2021 00:23:56 +0000 (18:23 -0600)]
net: axienet: Support dynamic switching between 1000BaseX and SGMII
Newer versions of the Xilinx AXI Ethernet core (specifically version 7.2 or
later) allow the core to be configured with a PHY interface mode of "Both",
allowing either 1000BaseX or SGMII modes to be selected at runtime. Add
support for this in the driver to allow better support for applications
which can use both fiber and copper SFP modules.
Signed-off-by: Robert Hancock <robert.hancock@calian.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Document the new xlnx,switch-x-sgmii attribute which is used to indicate
that the Ethernet core supports dynamic switching between 1000BaseX and
SGMII.
Signed-off-by: Robert Hancock <robert.hancock@calian.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Fri, 12 Feb 2021 23:22:14 +0000 (15:22 -0800)]
tcp: factorize logic into tcp_epollin_ready()
Both tcp_data_ready() and tcp_stream_is_readable() share the same logic.
Add tcp_epollin_ready() helper to avoid duplication.
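The shared helper is roughly of this shape (a sketch reconstructed from the description; the exact conditions in the tree may differ slightly):

	static inline bool tcp_epollin_ready(const struct sock *sk, int target)
	{
		const struct tcp_sock *tp = tcp_sk(sk);
		int avail = (int)(READ_ONCE(tp->rcv_nxt) - READ_ONCE(tp->copied_seq));

		if (avail <= 0)
			return false;

		return (avail >= target) || tcp_rmem_pressure(sk) ||
		       (tcp_receive_window(tp) <= inet_csk(sk)->icsk_ack.rcv_mss);
	}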
Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Arjun Roy <arjunroy@google.com> Cc: Wei Wang <weiwan@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Fri, 12 Feb 2021 23:22:13 +0000 (15:22 -0800)]
tcp: fix SO_RCVLOWAT related hangs under mem pressure
While commit 24adbc1676af ("tcp: fix SO_RCVLOWAT hangs with fat skbs")
fixed an issue vs too small sk_rcvbuf for given sk_rcvlowat constraint,
it missed addressing the issue caused by memory pressure.
1) If we are under memory pressure and the socket receive queue is empty,
the first incoming packet is allowed to be queued, per commit 76dfa6082032 ("tcp: allow one skb to be received per socket under memory pressure").
But we do not send EPOLLIN yet, in case tcp_data_ready() sees that sk_rcvlowat
is bigger than the skb length.
2) Then, when the next packet comes, it is dropped, and we directly
call sk->sk_data_ready().
3) If the application is using poll(), tcp_poll() will then use
tcp_stream_is_readable() and decide the socket receive queue is
not yet filled, so nothing will happen.
Even when sender retransmits packets, phases 2) & 3) repeat
and flow is effectively frozen, until memory pressure is off.
The fix is to take tcp_under_memory_pressure() into account, covering
both global memory pressure and memcg pressure.
Fixes: 24adbc1676af ("tcp: fix SO_RCVLOWAT hangs with fat skbs") Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Arjun Roy <arjunroy@google.com> Suggested-by: Wei Wang <weiwan@google.com> Reviewed-by: Wei Wang <weiwan@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 13 Feb 2021 01:24:47 +0000 (17:24 -0800)]
Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
Tony Nguyen says:
====================
40GbE Intel Wired LAN Driver Updates 2021-02-12
This series contains updates to i40e, ice, and ixgbe drivers.
Maciej does cleanups on the following drivers.
For i40e, removes redundant check for XDP prog, cleans up no longer
relevant information, and removes an unused function argument.
For ice, removes local variable use, instead returning values directly.
Moves skb pointer from buffer to ring and removes an unneeded check for
xdp_prog in zero copy path. Also removes a redundant MTU check when
changing it.
For i40e, ice, and ixgbe, stores the rx_offset in the Rx ring as
the value is constant so there's no need for continual calls.
Bjorn folds a decrement into a while statement.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 13 Feb 2021 01:13:53 +0000 (17:13 -0800)]
Merge branch 'tc-mpls-selftests'
Guillaume Nault says:
====================
selftests: tc: Test tc-flower's MPLS features
A couple of patches for exercising the MPLS filters of tc-flower.
Patch 1 tests basic MPLS matching features: those that only work on the
first label stack entry (that is, the mpls_label, mpls_tc, mpls_bos and
mpls_ttl options).
Patch 2 tests the more generic "mpls" and "lse" options, which allow
matching MPLS fields beyond the first stack entry.
In both patches, special care is taken to skip these new tests for
incompatible versions of tc.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Guillaume Nault [Fri, 12 Feb 2021 19:05:43 +0000 (20:05 +0100)]
selftests: tc: Add generic mpls matching support for tc-flower
Add tests in tc_flower.sh for generic matching on MPLS Label Stack
Entries. The label, tc, bos and ttl fields are tested for the first
and second labels. For each field, the minimal and maximal values are
tested (the former at depth 1 and the latter at depth 2).
There are also tests for matching the presence of a label stack entry
at a given depth.
In order to reduce the amount of code, all "lse" subcommands are tested
in match_mpls_lse_test(). Action "continue" is used, so that test
packets are evaluated by all filters. Then, we can verify if each
filter matched the expected number of packets.
Some versions of tc-flower produced invalid json output when dumping
MPLS filters with depth > 1. Skip the test if tc isn't recent enough.
Signed-off-by: Guillaume Nault <gnault@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 13 Feb 2021 01:08:05 +0000 (17:08 -0800)]
Merge branch 'brport-flags'
Vladimir Oltean says:
====================
Cleanup in brport flags switchdev offload for DSA
The initial goal of this series was to have better support for
standalone ports mode on the DSA drivers like ocelot/felix and sja1105.
This turned out to require some API adjustments in both directions:
to the information presented to and by the switchdev notifier, and to
the API presented to the switch drivers by the DSA layer.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Fri, 12 Feb 2021 15:16:00 +0000 (17:16 +0200)]
net: dsa: sja1105: offload bridge port flags to device
The chip can configure unicast flooding, broadcast flooding and learning.
Learning is per port, while flooding is per {ingress, egress} port pair
and we need to configure the same value for all possible ingress ports
towards the requested one.
While multicast flooding is not officially supported, we can hack it by
using a feature of the second generation (P/Q/R/S) devices, which is that
FDB entries are maskable, and multicast addresses always have an odd
first octet. So by putting a match-all for 00:01:00:00:00:00 addr and
00:01:00:00:00:00 mask at the end of the FDB, we make sure that it is
always checked last, and does not take precedence in front of any other
MDB. So it behaves effectively as an unknown multicast entry.
For the first generation switches, this feature is not available, so
unknown multicast will always be treated the same as unknown unicast.
So the only thing we can do is request the user to offload the settings
for these 2 flags in tandem, i.e.
ip link set swp2 type bridge_slave flood off
Error: sja1105: This chip cannot configure multicast flooding independently of unicast.
ip link set swp2 type bridge_slave flood off mcast_flood off
ip link set swp2 type bridge_slave mcast_flood on
Error: sja1105: This chip cannot configure multicast flooding independently of unicast.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Fri, 12 Feb 2021 15:15:59 +0000 (17:15 +0200)]
net: mscc: ocelot: offload bridge port flags to device
We should not be unconditionally enabling address learning, since doing
that is actively detrimental when a port is standalone and not offloading
a bridge. Namely, if a port in the switch is standalone and others are
offloading the bridge, then we could enter a situation where we learn an
address towards the standalone port, but the bridged ports could not
forward the packet there, because the CPU is the only path between the
standalone and the bridged ports. The solution of course is to not
enable address learning unless the bridge asks for it.
We need to set up the initial port flags for no learning and flooding
everything, and also when the port joins and leaves the bridge.
The flood configuration was already configured ok for standalone mode
in ocelot_init, we just need to disable learning in ocelot_init_port.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Alexandre Belloni <alexandre.belloni@bootlin.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Fri, 12 Feb 2021 15:15:58 +0000 (17:15 +0200)]
net: mscc: ocelot: use separate flooding PGID for broadcast
In preparation of offloading the bridge port flags which have
independent settings for unknown multicast and for broadcast, we should
also start reserving one destination Port Group ID for the flooding of
broadcast packets, to allow configuring it individually.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Fri, 12 Feb 2021 15:15:57 +0000 (17:15 +0200)]
net: dsa: felix: restore multicast flood to CPU when NPI tagger reinitializes
ocelot_init sets up PGID_MC to include the CPU port module, and that is
fine, but the ocelot-8021q tagger removes the CPU port module from the
unknown multicast replicator. So after a transition from the default
ocelot tagger towards ocelot-8021q and then again towards ocelot,
multicast flooding towards the CPU port module will be disabled.
Fixes: e21268efbe26 ("net: dsa: felix: perform switch setup for tag_8021q") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Fri, 12 Feb 2021 15:15:56 +0000 (17:15 +0200)]
net: dsa: act as passthrough for bridge port flags
There are multiple ways in which a PORT_BRIDGE_FLAGS attribute can be
expressed by the bridge through switchdev, and not all of them can be
emulated by the DSA mid-layer API at the same time.
One possible configuration is when the bridge offloads the port flags
using a mask that has a single bit set - therefore only one feature
should change. However, DSA currently groups together unicast and
multicast flooding in the .port_egress_floods method, which limits our
options when we try to add support for turning off broadcast flooding:
do we extend .port_egress_floods with a third parameter which b53 and
mv88e6xxx will ignore? But that means that the DSA layer, which
currently implements the PRE_BRIDGE_FLAGS attribute all by itself, will
see that .port_egress_floods is implemented, and will report that all 3
types of flooding are supported - which is not necessarily true.
Another configuration is when the user specifies more than one flag at
the same time, in the same netlink message. If we were to create one
individual function per offloadable bridge port flag, we would limit the
switch driver's ability to refuse certain combinations of flag values.
For example, a switch may not have an explicit knob for
flooding of unknown multicast, just for flooding in general. In that
case, the only correct thing to do is to allow changes to BR_FLOOD and
BR_MCAST_FLOOD in tandem, and never allow mismatched values. But having
a separate .port_set_unicast_flood and .port_set_multicast_flood would
not give the driver a chance to reject that.
Also, DSA doesn't consider it necessary to inform the driver that a
SWITCHDEV_ATTR_ID_BRIDGE_MROUTER attribute was offloaded, because it
just calls .port_egress_floods for the CPU port. When we add support
for the plain SWITCHDEV_ATTR_ID_PORT_MROUTER, that will become a real
problem because the flood settings will need to be held statefully in
the DSA middle layer, otherwise changing the mrouter port attribute will
impact the flooding attribute. And that's _assuming_ that the underlying
hardware doesn't have anything else to do when a multicast router
attaches to a port than flood unknown traffic to it. If it does, there
will need to be a dedicated .port_set_mrouter anyway.
So we need to let the DSA drivers see the exact form that the bridge
passes this switchdev attribute in, otherwise we are standing in the
way. Therefore we also need to use this form of language when
communicating to the driver that it needs to configure its initial
(before bridge join) and final (after bridge leave) port flags.
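The shape of the resulting passthrough API, sketched here with the caveat
that the exact prototypes may differ slightly from the final code:

  /* The bridge's view of the change: new values plus a mask saying
   * which bits are actually being changed.
   */
  struct switchdev_brport_flags {
      unsigned long val;
      unsigned long mask;
  };

  /* New dsa_switch_ops hooks that see this form directly: */
  int (*port_pre_bridge_flags)(struct dsa_switch *ds, int port,
                               struct switchdev_brport_flags flags,
                               struct netlink_ext_ack *extack);
  int (*port_bridge_flags)(struct dsa_switch *ds, int port,
                           struct switchdev_brport_flags flags,
                           struct netlink_ext_ack *extack);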
The b53 and mv88e6xxx drivers are converted to the passthrough API and
their implementation of .port_egress_floods is split into two: a
function that configures unicast flooding and another for multicast.
The mv88e6xxx implementation is quite hairy, and it turns out that
the implementations of unknown unicast flooding are actually the same
for 6185 and for 6352, so there was no reason to entangle them in the
first place: the 6185 writes to MV88E6185_PORT_CTL0_FORWARD_UNKNOWN in
PORT_CTL0, which sits at the exact same bit index as the corresponding
6352 setting. I have left the implementations separate though, for the
only reason that the names are different enough to confuse me, since I
am not able to double-check with a user manual. The multicast flooding
setting for 6185 is in a different register than for 6352, though.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Fri, 12 Feb 2021 15:15:55 +0000 (17:15 +0200)]
net: switchdev: pass flags and mask to both {PRE_,}BRIDGE_FLAGS attributes
This switchdev attribute offers a counterproductive API for a driver
writer, because although br_switchdev_set_port_flag gets passed a
"flags" and a "mask", those are passed piecemeal to the driver, so while
the PRE_BRIDGE_FLAGS listener knows what changed because it has the
"mask", the BRIDGE_FLAGS listener doesn't, because it only has the final
value. But certain drivers can offload only certain combinations of
settings, like for example they cannot change unicast flooding
independently of multicast flooding - they must be both on or both off.
The way the information is passed to switchdev leaves drivers unable to
express such constraints and unable to reject the request ahead of time,
in the PRE_BRIDGE_FLAGS notifier, so they are forced to reject it during
the deferred BRIDGE_FLAGS attribute, where the rejection is currently
ignored.
This patch also changes drivers to make use of the "mask" field for edge
detection when possible.
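With both pieces of information available, a driver's BRIDGE_FLAGS handler
can do simple edge detection; a hedged sketch, where the chip_set_*()
helpers are hypothetical stand-ins for whatever the hardware needs:

  /* Only touch the settings whose bit is set in the mask. */
  if (flags.mask & BR_FLOOD)
      chip_set_ucast_flood(priv, port, !!(flags.val & BR_FLOOD));
  if (flags.mask & BR_MCAST_FLOOD)
      chip_set_mcast_flood(priv, port, !!(flags.val & BR_MCAST_FLOOD));
  if (flags.mask & BR_LEARNING)
      chip_set_learning(priv, port, !!(flags.val & BR_LEARNING));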
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Fri, 12 Feb 2021 15:15:54 +0000 (17:15 +0200)]
net: dsa: configure better brport flags when ports leave the bridge
For a DSA switch port operating in standalone mode, address learning
doesn't make much sense since that is a bridge function. In fact,
address learning even breaks some setups: if the switch has a single
FDB (and can therefore offload only a single bridge),
then source address learning on swp3 can "steal" the source MAC address
of swp2 from br0's FDB, because learning frames coming from swp2 will be
done twice: first on the swp1 ingress port, second on the swp3 ingress
port. So the hardware FDB will become out of sync with the software
bridge, and when swp2 tries to send one more packet towards swp1, the
ASIC will attempt to short-circuit the forwarding path and send it
directly to swp3 (since that's the last port it learned that address on),
which it obviously can't, because swp3 operates in standalone mode.
So DSA should still configure a sane set of bridge port flags even for
ports operating in standalone mode. Currently DSA
attempts to call dsa_port_bridge_flags with 0, which disables egress
flooding of unknown unicast and multicast, something which doesn't make
much sense. For the switches that implement .port_egress_floods - b53
and mv88e6xxx, it probably doesn't matter too much either, since they
can possibly inject traffic from the CPU into a standalone port,
regardless of MAC DA, even if egress flooding is turned off for that
port, but certainly not all DSA switches can do that - sja1105, for
example, can't. So it makes sense to use a better common default there,
such as "flood everything".
It should also be noted that what DSA calls "dsa_port_bridge_flags()"
is a degenerate name for just calling .port_egress_floods(), since
nothing else is implemented - not learning, in particular. But disabling
address learning, something that this series is also coding up for, will
be supported by individual drivers once .port_egress_floods is replaced
with a more generic .port_bridge_flags.
Previous attempts to code up this logic have been in the common bridge
layer, but as pointed out by Ido Schimmel, there are corner cases that
are missed when doing that:
https://patchwork.kernel.org/project/netdevbpf/patch/20210209151936.97382-5-olteanv@gmail.com/
So, at least for now, let's leave DSA in charge of setting port flags
before and after the bridge join and leave.
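As a sketch of what "flood everything, no learning" looks like when expressed
through the brport flags path (the helper name and the exact
dsa_port_bridge_flags() signature are assumptions based on this series):

  /* Restore standalone defaults when leaving a bridge (and use the
   * same values before joining one). Error handling omitted.
   */
  static void dsa_port_clear_brport_flags(struct dsa_port *dp)
  {
      struct switchdev_brport_flags flags = {
          .val  = BR_FLOOD | BR_MCAST_FLOOD,
          .mask = BR_LEARNING | BR_FLOOD | BR_MCAST_FLOOD,
      };

      dsa_port_bridge_flags(dp, flags, NULL);
  }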
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Fri, 12 Feb 2021 15:15:53 +0000 (17:15 +0200)]
net: bridge: don't print in br_switchdev_set_port_flag
For the netlink interface, propagate errors through extack rather than
simply printing them to the console. For the sysfs interface, we still
print to the console, but at least that's one layer higher than in
switchdev, which also allows us to silently ignore the offloading of
flags if that is ever needed in the future.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Acked-by: Nikolay Aleksandrov <nikolay@nvidia.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Fri, 12 Feb 2021 15:15:52 +0000 (17:15 +0200)]
net: bridge: offload all port flags at once in br_setport
If for example this command:
ip link set swp0 type bridge_slave flood off mcast_flood off learning off
succeeded at configuring BR_FLOOD and BR_MCAST_FLOOD but not at
BR_LEARNING, there would be no attempt to revert the partial state in
any way. Arguably, if the user changes more than one flag through the
same netlink command, this one _should_ be all or nothing, which means
it should be passed through switchdev as all or nothing.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Acked-by: Nikolay Aleksandrov <nikolay@nvidia.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Fri, 12 Feb 2021 15:15:51 +0000 (17:15 +0200)]
net: switchdev: propagate extack to port attributes
When a struct switchdev_attr is notified through switchdev, there is no
way to report informational messages, unlike for struct switchdev_obj.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Nikolay Aleksandrov <nikolay@nvidia.com> Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 13 Feb 2021 00:54:17 +0000 (16:54 -0800)]
Merge branch 'ipa-cleanups'
Alex Elder says:
====================
net: ipa: some more cleanup
Version 3 of this series uses dev_err_probe() in the second patch,
as suggested by Heiner Kallweit.
Version 2 was sent to ensure the series was based on current
net-next/master, and added copyright updates to files touched.
The original introduction is below.
This is another fairly innocuous set of cleanup patches.
The first was motivated by a bug found that would affect IPA v4.5.
It maintains a second GSI address pointer: one is the "raw" (original
mapped) address, and the other will have been adjusted if necessary
for use on newer platforms.
The second just quiets some unnecessary noise during early probe.
The third fixes some errors that show up when IPA_VALIDATION is
enabled.
The last two just create helper functions to improve readability.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Fri, 12 Feb 2021 14:34:01 +0000 (08:34 -0600)]
net: ipa: introduce ipa_table_hash_support()
Introduce a new function to abstract the knowledge of whether hashed
routing and filter tables are supported for a given IPA instance.
IPA v4.2 is the only one that doesn't support hashed tables (now
and for the foreseeable future), but the name of the helper function
is better for explaining what's going on.
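A plausible shape for the helper, shown here as a sketch rather than the
exact upstream code:

  /* Hashed route/filter tables are supported on everything except
   * IPA v4.2.
   */
  static inline bool ipa_table_hash_support(struct ipa *ipa)
  {
      return ipa->version != IPA_VERSION_4_2;
  }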
Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Fri, 12 Feb 2021 14:34:00 +0000 (08:34 -0600)]
net: ipa: fix register write command validation
In ipa_cmd_register_write_valid() we verify that values we will
supply to a REGISTER_WRITE IPA immediate command will fit in
the fields that need to hold them. This patch fixes some issues
in that function and ipa_cmd_register_write_offset_valid().
The dev_err() call in ipa_cmd_register_write_offset_valid() has
some printf format errors:
- The name of the register (corresponding to the string format
specifier) was not supplied.
- The IPA base offset and offset need to be supplied separately to
match the other format specifiers.
Also make the ~0 constant used there to compute the maximum
supported offset value explicitly unsigned.
There are two other issues in ipa_cmd_register_write_valid():
- There's no need to check the hash flush register for platforms
(like IPA v4.2) that do not support hashed tables
- The highest possible endpoint number, whose status register
offset is computed, is COUNT - 1, not COUNT.
Fix these problems, and add some additional commentary.
Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Fri, 12 Feb 2021 14:33:59 +0000 (08:33 -0600)]
net: ipa: use dev_err_probe() in ipa_clock.c
When initializing the IPA core clock and interconnects, it's
possible we'll get an EPROBE_DEFER error. This isn't really an
error; it just means we need to be re-probed later.
Use dev_err_probe() to report the error rather than dev_err().
This avoids polluting the log with these "error" messages.
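dev_err_probe() returns the error it is given, logs EPROBE_DEFER quietly
(and records it as the deferral reason) while logging anything else as a
real error, so call sites collapse to roughly this; the clock name and
message below are illustrative, not the driver's exact strings:

  /* Illustrative call site */
  clk = clk_get(dev, "core");
  if (IS_ERR(clk))
      return dev_err_probe(dev, PTR_ERR(clk),
                           "error getting core clock\n");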
Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Alex Elder [Fri, 12 Feb 2021 14:33:58 +0000 (08:33 -0600)]
net: ipa: use a separate pointer for adjusted GSI memory
This patch actually fixes a bug, though it doesn't affect the two
platforms supported currently. The fix implements GSI memory
pointers a bit differently.
For IPA version 4.5 and above, the address space for almost all GSI
registers is adjusted downward by a fixed amount. This is currently
handled by adjusting the I/O virtual address pointer after it has
been mapped. The bug is that the pointer is not "de-adjusted" as it
should be when it's unmapped.
This patch fixes that error, but it does so by maintaining one "raw"
pointer for the mapped memory range. This is assigned when the
memory is mapped and used to unmap the memory. This pointer is also
used to access the two registers that do *not* sit in the "adjusted"
memory space.
Rather than adjusting *that* pointer, we maintain a separate pointer
that's an adjusted copy of the "raw" pointer, and that is used for
most GSI register accesses.
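In sketch form (field names assumed to mirror the description above),
mapping and teardown become symmetric:

  /* Map once and remember the raw pointer... */
  gsi->virt_raw = ioremap(res->start, size);
  if (!gsi->virt_raw)
      return -ENOMEM;

  /* ...derive the adjusted base used for most register accesses... */
  gsi->virt = gsi->virt_raw - adjust;

  /* ...and at teardown, unmap exactly what was mapped. */
  iounmap(gsi->virt_raw);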
Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 13 Feb 2021 00:48:52 +0000 (16:48 -0800)]
Merge tag 'mac80211-next-for-net-next-2021-02-12' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next
Johannes Berg says:
====================
Last set of updates:
* more minstrel work from Felix to reduce the
probing overhead
* QoS for nl80211 control port frames
* STBC injection support
* and a couple of small fixes
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix this by properly indexing fec[] with rsp->fwdata.supported_fec - 1.
In this case the proper indexes 0 to 3 are used when
rsp->fwdata.supported_fec evaluates to 1 through 4, respectively.
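A sketch of the corrected lookup; the surrounding variable names are
assumptions, only rsp->fwdata.supported_fec comes from the text above:

  /* supported_fec is 1-based, fec[] is 0-based: shift the index down. */
  if (rsp->fwdata.supported_fec)
      fecparam->fec = fec[rsp->fwdata.supported_fec - 1];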
Fixes: d0cf9503e908 ("octeontx2-pf: ethtool fec mode support")
Addresses-Coverity-ID: 1501722 ("Out-of-bounds read") Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 13 Feb 2021 00:43:13 +0000 (16:43 -0800)]
Merge tag 'wireless-drivers-next-2021-02-12' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next
Kalle Valo says:
====================
wireless-drivers-next patches for v5.12
Second set of patches for v5.12. Last time there was a smaller pull
request so unsurprisingly this time we have a big one. mt76 has new
hardware support and lots of new features, iwlwifi is getting new
features, and rtw88 got NAPI support. And the usual cleanups and fixes
all over.
Major changes:
ath10k
* support setting SAR limits via nl80211
rtw88
* support 8821 RFE type2 devices
* NAPI support
iwlwifi
* add new FW API support
* support for new So devices
* support for RF interference mitigation (RFI)
* support for PNVM (Platform Non-Volatile Memory, a firmware data
file) from BIOS
mt76
* add new mt7921e driver
* 802.11 encap offload support
* support for multiple pcie gen1 host interfaces on 7915
* 7915 testmode support
* 7915 txbf support
brcmfmac
* support for CQM RSSI notifications
wil6210
* support for extended DMG MCS 12.1 rate
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Vadim Fedorenko [Fri, 12 Feb 2021 10:48:14 +0000 (13:48 +0300)]
rxrpc: Fix dependency on IPv6 in udp tunnel config
Since the members of struct udp_port_cfg depend on the IPv6
configuration, the code in rxrpc should also check for IPv6.
Fixes: 1a9b86c9fd95 ("rxrpc: use udp tunnel APIs instead of open code in rxrpc_open_socket") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Vadim Fedorenko <vfedorenko@novek.ru> Acked-by: David Howells <dhowells@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 13 Feb 2021 00:31:46 +0000 (16:31 -0800)]
Merge branch 'mptcp-genl-events'
Mat Martineau says:
====================
mptcp: Add genl events for connection info
This series from the MPTCP tree adds genl multicast events that are
important for implementing a userspace path manager. In MPTCP, a path
manager is responsible for adding or removing additional subflows on
each MPTCP connection. The in-kernel path manager (already part of the
kernel) is a better fit for many server use cases, but the additional
flexibility of userspace path managers is often useful for client
devices.
Patches 1, 2, 4, 5, and 6 do some refactoring to streamline the netlink
event implementation in the final patch.
Patch 3 improves the timeliness of subflow destruction to ensure the
'subflow closed' event will be sent soon enough.
Patch 7 allows use of the GENL_UNS_ADMIN_PERM flag on genl mcast groups
to mandate CAP_NET_ADMIN, which is important to protect token information
in the MPTCP events. This is a genetlink change.
Patch 8 adds the MPTCP netlink events.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>