And, as Dmitry rightly assessed, that is because we can drop the
reference and then touch it when the underlying recvmsg calls return
some packets and then hit an error, which will make recvmmsg set
sock->sk->sk_err. Oops, fix it.
Reported-and-Tested-by: Dmitry Vyukov <dvyukov@google.com> Cc: Alexander Potapenko <glider@google.com> Cc: Eric Dumazet <edumazet@google.com> Cc: Kostya Serebryany <kcc@google.com> Cc: Sasha Levin <sasha.levin@oracle.com> Fixes: a2e2725541fa ("net: Introduce recvmmsg socket syscall")
http://lkml.kernel.org/r/20160122211644.GC2470@redhat.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
The patches below attempt to improve performance by reducing the
number of atomic operations while allocating new receive buffers,
and by reducing cache misses by adjusting the nicvf structure elements.
Changes from v1:
No changes, resubmitting afresh as per David's suggestion.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Sunil Goutham [Mon, 14 Mar 2016 11:06:15 +0000 (16:36 +0530)]
net: thunderx: Adjust nicvf structure to reduce cache misses
Adjusted the nicvf structure such that all elements used in the hot
path (napi, xmit, etc.) fall into the same cache line. This reduced the
number of cache misses and resulted in a ~2% increase in the number of
packets handled on a core.
Also modified elements with :1 notation to boolean, to be
consistent with other element definitions.
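As a rough sketch of the idea (struct and field names here are illustrative, not the actual nicvf layout), the members touched by NAPI poll and xmit are packed together while slow-path members start on their own cache line:

#include <linux/cache.h>
#include <linux/netdevice.h>
#include <linux/pci.h>
#include <linux/workqueue.h>

/* Illustrative only: group everything the hot path touches so it shares a
 * cache line, and start the rarely used configuration fields on a new one. */
struct example_vf {
	/* hot path */
	void __iomem		*reg_base;
	struct napi_struct	napi;
	bool			rb_alloc_fail;		/* was a ":1" bitfield */
	bool			loopback_supported;	/* was a ":1" bitfield */

	/* cold path starts on its own cache line */
	struct work_struct	reset_task ____cacheline_aligned_in_smp;
	struct pci_dev		*pdev;
};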
Signed-off-by: Sunil Goutham <sgoutham@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Richard Alpe [Mon, 14 Mar 2016 08:43:52 +0000 (09:43 +0100)]
tipc: make sure IPv6 header fits in skb headroom
Expand the headroom further in order to be able to fit the larger IPv6
header. Prior to this patch, this caused an skb_under_panic for certain
tipc packets when using IPv6 UDP bearer(s).
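A minimal sketch of the idea, assuming an illustrative headroom constant (the exact value and name used by the tipc UDP bearer code may differ):

#include <linux/ipv6.h>
#include <linux/skbuff.h>
#include <linux/udp.h>

/* Assumed for illustration: headroom large enough for UDP plus the larger
 * IPv6 header, with some slack for the lower layers. */
#define UDP6_BEARER_HEADROOM (sizeof(struct udphdr) + sizeof(struct ipv6hdr) + 16)

static int reserve_bearer_headroom(struct sk_buff *skb)
{
	/* skb_cow_head() reallocates the header area if headroom is short,
	 * avoiding the skb_under_panic seen with IPv6 UDP bearers. */
	return skb_cow_head(skb, UDP6_BEARER_HEADROOM);
}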
Signed-off-by: Richard Alpe <richard.alpe@ericsson.com> Acked-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 14 Mar 2016 16:19:47 +0000 (12:19 -0400)]
Merge branch 'mvneta-hwbm'
Gregory CLEMENT says:
====================
API set for HW Buffer management
This is the sixth version of the API set for HW Buffer management (that was
initially submitted here:
http://thread.gmane.org/gmane.linux.kernel/2125152).
This version is just a rebase onto the latest net-next. I also added
the Tested-by tag from Sebastian Careba: "The patch set applies
successfully and it works well, no more Samba issues any longer".
For the record in the previous versions I made the following changes:
v4 -> v5:
- Added a field with the size of the pool's buffers. This then allows
fixing some misused sizes in the mvneta_bm code when using the new
framework.
- Added a new patch from Marcin for SRAM, allowing non-bufferable access
to the memory to be requested. It was needed for the hardware buffer
management of mvneta.
- Fixed the build issue reported by the 0-day builder when building the
drivers as modules.
v3 -> v4
- Fix build issue when HWBM is not selected
v2 -> v3
- Made a HWBM and a SWBM version of the mvneta_rx() function in order
to reduce the conditional code. Kept a condition inside mvneta_poll()
because specializing this function would have meant duplicating 95% of
the code.
- Put back the register_netdev() call at the end of the mvneta_probe()
function. In order to have a unique ID for each port, just used a
global variable in the driver.
- Added a fix from Marcin in the "net: mvneta: bm: add support for
hardware buffer management" patch: "when dropping packets, only
buffer pointers passed from BM to descriptors have to be returned to
the pool. In submitted version after closing the port and
mvneta_rxq_deinit(), it was very likely that a lot of fake buffers
are added to the pool, because all descriptors took part in
iteration."
- Removed the select MVNETA_BM from the Kconfig; this leaves users the
choice of whether or not to use it.
v1 -> v2
- The hardware buffer management helpers are no longer built by default
and now depend on a hidden config symbol which has to be selected
by the driver if needed
- The hwbm_pool_refill() and hwbm_pool_add() now receive a gfp_t as an
argument, allowing the caller to specify the flags it needs.
- buf_num is now tested to ensure there is no wrapping
- A spinlock has been added to protect the hwbm_pool_add() function in
SMP or irq context.
- Used pr_warn instead of pr_debug in case of errors.
- Fixed the mvneta implementation by returning the buffer to the pool
at various places instead of ignoring it.
- Squashed "bus: mvenus-mbus: Fix size test for
mvebu_mbus_get_dram_win_info" into bus: mvebu-mbus: provide api for
obtaining IO and DRAM window information.
- Added my signed-otf-by on all the patches as submitter of the series.
- Renamed the dts patches with the pattern "ARM: dts: platform:"
- Removed the patch "ARM: mvebu: enable SRAM support in
mvebu_v7_defconfig" from this series since it was already applied.
- Modified the order of the patches.
In order to ease testing, the branch mvneta-BM-framework-v6 is
available at git@github.com:MISL-EBU-System-SW/mainline-public.git.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Gregory CLEMENT [Mon, 14 Mar 2016 08:39:05 +0000 (09:39 +0100)]
net: mvneta: Use the new hwbm framework
Now that the hardware buffer management framework has been introduced,
let's use it.
Tested-by: Sebastian Careba <nitroshift@yahoo.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Gregory CLEMENT [Mon, 14 Mar 2016 08:39:04 +0000 (09:39 +0100)]
net: add a hardware buffer management helper API
This basic implementation allows code to be shared between drivers using
hardware buffer management. As the code is hardware agnostic, there are
only a few helpers; most of the optimization brought by a HW BM has to be
done at the driver level.
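A rough usage sketch under stated assumptions: the construct callback and the gfp_t argument to hwbm_pool_add() follow the cover letter above, but the exact struct hwbm_pool fields should be checked against include/net/hwbm.h.

#include <linux/gfp.h>
#include <net/hwbm.h>

/* Assumed callback shape: hand one freshly allocated buffer to the HW unit. */
static int my_bm_construct(struct hwbm_pool *pool, void *buf)
{
	/* driver-specific: push the buffer pointer into the hardware pool */
	return 0;
}

static int my_bm_pool_setup(struct hwbm_pool *pool, unsigned int buf_num)
{
	pool->construct = my_bm_construct;	/* assumed field name */

	/* Probe context may sleep, so GFP_KERNEL is fine; BH/irq callers
	 * would pass GFP_ATOMIC instead (see the v1 -> v2 changes above). */
	return hwbm_pool_add(pool, buf_num, GFP_KERNEL);
}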
Tested-by: Sebastian Careba <nitroshift@yahoo.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Marcin Wojtas [Mon, 14 Mar 2016 08:39:03 +0000 (09:39 +0100)]
net: mvneta: bm: add support for hardware buffer management
The buffer manager (BM) is a dedicated hardware unit that can be used by
all ethernet ports of the Armada XP and 38x SoCs. It allows offloading
the CPU on the RX path by sparing DRAM accesses when refilling the buffer
pool, by hardware-based filling of descriptor ring data, and by better
memory utilization due to HW arbitration for using 'short' pools for
small packets.
Tests performed with an A388 SoC working as a network bridge between two
packet generators showed an increase in maximum processed 64B packets by
~20k (~555k packets with BM enabled vs ~535k packets without BM). Also,
when pushing 1500B packets at line rate, CPU load decreased from around
25% without BM to 20% with BM.
BM comprises up to 4 buffer pointer (BP) rings kept in DRAM, which
are called external BP pools - BPPE. Allocating and releasing buffer
pointers (BP) to/from a BPPE is performed indirectly by write/read access
to a dedicated internal SRAM, where internal BP pools (BPPI) are placed.
The BM hardware controls the status of the BPPE automatically, as well as
assigning proper buffers to RX descriptors. For more details please refer
to the Functional Specification of the Armada XP or 38x SoC.
In order to enable support for this separate hardware block, common to
all ports, a new driver has to be implemented ('mvneta_bm'). It provides
the initialization sequence for the address space, clocks, registers,
SRAM and empty pools' structures, and also obtains optional configuration
from DT (please refer to the device tree binding documentation). mvneta_bm
also exposes the necessary API to the mvneta driver, as well as a dedicated
structure with BM information (bm_priv), whose presence is used as a
flag notifying of BM usage by a port. It has to be ensured that the
mvneta_bm probe is executed prior to the ones in the ports' driver. In
case BM is not used or its probe fails, mvneta falls back to using
software buffer management.
The sequence executed in the mvneta_probe function is modified in order to
have access to the needed resources before a possible port BM
initialization is done. According to the port-pools mapping provided by DT,
the appropriate registers are configured and the buffer pools are filled.
The RX path is modified accordingly. Because the hardware allows a wide
variety of configuration options, the following assumptions are made:
* using the BM mechanisms can be selectively disabled/enabled per port,
based on the DT configuration
* a 'long' pool's single buffer size is tied to the port's MTU
* use of a 'long' pool by a port is obligatory and it cannot be shared
* use of a 'short' pool for smaller packets is optional
* one 'short' pool can be shared among all ports
This commit enables hardware buffer management operation cooperating with
the existing mvneta driver. New device tree binding documentation is added
and that of mvneta is updated accordingly.
[gregory.clement@free-electrons.com: removed the suspend/resume part]
Signed-off-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Marcin Wojtas [Mon, 14 Mar 2016 08:39:02 +0000 (09:39 +0100)]
bus: mvebu-mbus: provide api for obtaining IO and DRAM window information
This commit enables finding the appropriate mbus window and obtaining its
target id and attribute for a given physical address, in two separate
routines, for both IO and DRAM windows. This functionality
is needed for the Armada XP/38x Network Controller's Buffer Manager and
PnC configuration.
[gregory.clement@free-electrons.com: Fix size test for
mvebu_mbus_get_dram_win_info]
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
[DRAM window information reference in LKv3.10] Signed-off-by: Evan Wang <xswang@marvell.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Marcin Wojtas [Mon, 14 Mar 2016 08:39:00 +0000 (09:39 +0100)]
ARM: dts: armada-xp: enable buffer manager support on Armada XP boards
Since the mvneta driver supports using hardware buffer management (BM),
board files have to be adjusted accordingly in order to use it. This commit
enables BM on AXP-DB and AXP-GP in the same manner - because the number of
ports on those boards is the same as the number of possible pools, each
port is supposed to use a single pool for all kinds of packets.
Moreover, an appropriate entry is added to the 'soc' node ranges, as well
as "okay" status for the 'bm' and 'bm-bppi' (internal SRAM) nodes.
Signed-off-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Marcin Wojtas [Mon, 14 Mar 2016 08:38:59 +0000 (09:38 +0100)]
ARM: dts: armada-xp: add buffer manager nodes
The Armada XP network controller supports hardware buffer management (BM).
Since it is now enabled in the mvneta driver, appropriate nodes can be
added to armada-xp.dtsi - for the actual common BM unit (bm@c0000) and its
internal SRAM (bm-bppi), which is used for indirect access to the buffer
pointer ring residing in DRAM.
The pool-to-port mapping, the bm-bppi entry in the 'soc' node's ranges and
optional parameters are supposed to be set in board files.
Signed-off-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Marcin Wojtas [Mon, 14 Mar 2016 08:38:58 +0000 (09:38 +0100)]
ARM: dts: armada-38x: enable buffer manager support on Armada 38x boards
Since the mvneta driver supports using hardware buffer management (BM),
board files have to be adjusted accordingly in order to use it. This commit
enables BM on:
* A385-DB-AP - each port has its own pool for long packets and a common
pool for short packets,
* A388-ClearFog - same as above,
* A388-DB - unique 'short' and 'long' pools are mapped to each port,
* A388-GP - same as above.
Moreover, an appropriate entry is added to the 'soc' node ranges, as well
as "okay" status for the 'bm' and 'bm-bppi' (internal SRAM) nodes.
[gregory.clement@free-electrons.com: add support for the ClearFog board]
Signed-off-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
Marcin Wojtas [Mon, 14 Mar 2016 08:38:57 +0000 (09:38 +0100)]
ARM: dts: armada-38x: add buffer manager nodes
The Armada 38x network controller supports hardware buffer management (BM).
Since it is now enabled in the mvneta driver, appropriate nodes can be
added to armada-38x.dtsi - for the actual common BM unit (bm@c8000) and its
internal SRAM (bm-bppi), which is used for indirect access to the buffer
pointer ring residing in DRAM.
The pool-to-port mapping, the bm-bppi entry in the 'soc' node's ranges and
optional parameters are supposed to be set in board files.
Signed-off-by: Marcin Wojtas <mw@semihalf.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 14 Mar 2016 03:55:14 +0000 (23:55 -0400)]
Merge branch 'ipv4-ipv6-csums'
Alexander Duyck says:
====================
Fix differences between IPv4 and IPv6 TCP/UDP checksum calculation
This patch series is meant to address the differences that exist between
IPv4 and IPv6 in terms of checksum calculation. Specifically the IPv6
function csum_ipv6_magic treated length as a value that could be greater
than 64K, while csum_tcpudp_magic was truncating the length at 16 bits.
After looking over the code and giving it some thought I decided it would
be best to update the IPv4 function so that it worked the same way the IPv6
one did. This allows us to get the same results given the same inputs for
both functions. As a result we can use the same processes to reverse the
calculation in the event we need to do something like remove the length of
the pseudo-header checksum.
I also took the opportunity to standardize things so that the parameters
for these functions all use the correct types. IPv4 addresses are __be32,
length should always be __u32, and protocol is a __u8.
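For reference, the resulting prototypes look roughly as follows (a sketch of the intended types, not copied from any particular architecture's header):

#include <linux/in6.h>
#include <linux/types.h>

/* IPv4 pseudo-header checksum: length widened to __u32, protocol as __u8. */
__sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
			  __u32 len, __u8 proto, __wsum sum);
__wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
			  __u32 len, __u8 proto, __wsum sum);

/* IPv6 counterpart, which already treated length as a full 32-bit value. */
__sum16 csum_ipv6_magic(const struct in6_addr *saddr,
			const struct in6_addr *daddr,
			__u32 len, __u8 proto, __wsum sum);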
With this change in place it corrects an issue with UDP tunnels in which we
were getting a checksum that was off by 1 when performing fragmentation on
inner UDP packets.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Fri, 11 Mar 2016 22:05:47 +0000 (14:05 -0800)]
GSO/UDP: Use skb->len instead of udph->len to determine length of original skb
It is possible for tunnels to end up generating IP or IPv6 datagrams that
are larger than 64K and expecting to be segmented. As such we need to deal
with length values greater than 64K. In order to accommodate this we need
to update the code to work with a 32b length value instead of a 16b one.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Fri, 11 Mar 2016 22:05:41 +0000 (14:05 -0800)]
ipv6: Pass proto to csum_ipv6_magic as __u8 instead of unsigned short
This patch updates csum_ipv6_magic so that it correctly recognizes that
the protocol is an unsigned 8-bit value.
This will allow us to better understand what limitations may or may not be
present in how we handle the data. For example there are a number of
places that call htonl on the protocol value. This is likely not necessary
and can be replaced with a multiplication by ntohl(1) which will be
converted to a shift by the compiler.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Fri, 11 Mar 2016 22:05:34 +0000 (14:05 -0800)]
ipv4: Update parameters for csum_tcpudp_magic to their original types
This patch updates all instances of csum_tcpudp_magic and
csum_tcpudp_nofold to reflect the types that are usually used as the source
inputs. For example the protocol field is populated based on nexthdr which
is actually an unsigned 8 bit value. The length is usually populated based
on skb->len which is an unsigned integer.
This addresses an issue in which the IPv6 function csum_ipv6_magic was
generating a checksum using the full 32b of skb->len while
csum_tcpudp_magic was only using the lower 16 bits. As a result we could
run into issues when attempting to adjust the checksum as there was no
protocol agnostic way to update it.
With this change the value is still truncated as many architectures use
"(len + proto) << 8", however this truncation only occurs for values
greater than 16776960 in length and as such is unlikely to occur as we stop
the inner headers at ~64K in size.
I did have to make a few minor changes in the arm, mn10300, nios2, and
score versions of the function in order to support these changes as they
were either using things such as an OR to combine the protocol and length,
or were using ntohs to convert the length which would have truncated the
value.
I also updated a few spots in terms of whitespace and type differences for
the addresses. Most of this was just to make sure all of the definitions
were in sync going forward.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 14 Mar 2016 03:28:00 +0000 (23:28 -0400)]
ipv4: Don't do expensive useless work during inetdev destroy.
When an inetdev is destroyed, every address assigned to the interface
is removed. And in this scenario we do two pointless things which can
be very expensive if the number of assigned addresses is large:
1) Address promotion. We are deleting all addresses, so there is no
point in doing this.
2) A full nf conntrack table purge for every address. We only need to
do this once, as is already caught by the existing
masq_dev_notifier so masq_inet_event() can skip this.
Reported-by: Solar Designer <solar@openwall.com> Signed-off-by: David S. Miller <davem@davemloft.net> Tested-by: Cyrill Gorcunov <gorcunov@openvz.org>
David S. Miller [Mon, 14 Mar 2016 02:43:01 +0000 (22:43 -0400)]
Merge tag 'nfc-next-4.6-1' of git://git.kernel.org/pub/scm/linux/kernel/git/sameo/nfc-next
Samuel Ortiz says:
====================
NFC 4.6 pull request
This is a very small one this time, with only 5 patches.
There are a couple of big items that could not be merged/finished
on time.
We have:
- 2 LLCP fixes for a race and a potential OOM.
- 2 cleanups for the pn544 and microread drivers.
- 1 Maintainer addition for the s3fwrn5 driver.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
MACsec (IEEE 802.1AE [0]) is a protocol that provides security for
wired ethernet LANs. MACsec offers two protection modes:
authentication only, or authenticated encryption.
MACsec defines "secure channels" that allow transmission from one node
to one or more others. Communication on a channel is done over a
succession of "secure associations", that each use a specific key.
Secure associations are identified by their "association number" in
the range 0..3. A secure association is retired when its 32-bit
packet number would wrap, and the same association number can later be
reused with a new key and packet number.
The standard mode of encryption is GCM AES with 128 bits keys,
although an extension allows 256 bits keys [1] (not implemented in
this submission).
When using MACsec, an extra header, called "SecTAG", is added between
the ethernet header and the original payload.
The ethertype for the packet is set to 0x88E5, and the original
ethertype becomes part of the secure payload, which may be encrypted.
The ethernet header and the SecTAG are always transmitted in the
clear, but are integrity-protected.
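As a small illustration (a sketch, not code from the driver), a receiver can recognize MACsec frames purely from the cleartext ethertype; the ETH_P_MACSEC value used here is the 0x88E5 constant added by the uapi patch in this series:

#include <linux/if_ether.h>
#include <linux/skbuff.h>

#ifndef ETH_P_MACSEC
#define ETH_P_MACSEC	0x88E5	/* added by the uapi patch in this series */
#endif

/* Frames whose outer ethertype is 0x88E5 carry a SecTAG; the original
 * ethertype sits inside the (possibly encrypted) secure payload. */
static bool is_macsec_frame(const struct sk_buff *skb)
{
	return eth_hdr(skb)->h_proto == htons(ETH_P_MACSEC);
}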
MACsec supports optional replay protection with a configurable replay
window.
MACsec is designed to be used with the MKA extension to 802.1X (MACsec
Key Agreement protocol) [2], which provides channel attribution and
key distribution to the nodes, but can also be used with static keys
getting fed manually by an administrator.
Optional (not supported yet) features:
- confidentiality offset: in encryption mode, part of the payload may
be left unencrypted.
- choice of cipher suite: GCM AES with 256 bits has been standardised
[1].
Implementation
A netdevice is created on top of a real device for each TX secure
channel, like we do for VLANs. Multiple TX channels can be created on
top of the same underlying device.
Several other approaches were considered for the RX path:
- dev_add_pack: doesn't work, because we want to filter out
unprotected packets
- transparent mode: MACsec would be enabled directly on the real
netdevice. For this, we cannot use a rx_handler directly because
MACsec must be available for underlying devices enslaved in a
bridge or in a bond, so we need a hook directly in
__netif_receive_skb_core. This approach makes it harder to filter
non-encrypted packets on RX without forcing the user to setup some
rules, so the "transparent" mode is not so transparent after all.
It also makes TX more complex than with a dedicated netdevice.
One issue with the proposed implementation is that the qdisc layer for
the real device operates on already encrypted packets.
Netlink API
This is currently a mix of rtnetlink (to create the device and set up
the TX channel) and genl (for RX channels, secure associations and
their keys). genl provides clean demultiplexing of the {TX,RX}{SC,SA}
commands.
Use cases
The normal use case is wired LANs, including veth and slave devices
for bonding/teaming or bridges.
MACsec can also be used on any device that makes a full ethernet
header visible, for example VXLAN.
The VXLAN+MACsec setup would be:
| eth | IP | UDP | VXLAN | eth | MACsec | IP | ... | MACsec ICV |
One benefit of this approach to encryption in the cloud is that the
payload is encrypted by the tenant, not by the tunnel provider, thus
the tenant has full control over the keys.
Changes from v1:
- rework netlink API after discussion with Johannes Berg
- nest attributes, rename
- export stats as separate attributes
- add some comments
- misc small fixes (rcu, constants, struct organization)
Changes from RFCv2:
- fix ENCODING_SA param validation
- add parent link to netlink ifdumps
Changes from RFCv1:
- addressed comments from Florian and Paolo + kbuild robot
- also perform post-decrypt handling after crypto callback
- fixed ->dellink behavior
Future plans:
- offload to hardware, on nics that support it
- implement optional features
Sabrina Dubroca [Fri, 11 Mar 2016 17:07:33 +0000 (18:07 +0100)]
macsec: introduce IEEE 802.1AE driver
This is an implementation of MACsec/IEEE 802.1AE. This driver
provides authentication and encryption of traffic in a LAN, typically
with GCM-AES-128, and optional replay protection.
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net> Reviewed-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Sabrina Dubroca [Fri, 11 Mar 2016 17:07:32 +0000 (18:07 +0100)]
net: add MACsec netdevice priv_flags and helper
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net> Reviewed-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Sabrina Dubroca [Fri, 11 Mar 2016 17:07:31 +0000 (18:07 +0100)]
uapi: add MACsec bits
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net> Reviewed-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Zefir Kurtisi [Fri, 11 Mar 2016 14:31:53 +0000 (15:31 +0100)]
at803x: fix suspend/resume for SGMII link
When operating the at803x in SGMII mode, resuming the chip
from power down brings up the copper-side link but leaves
the SGMII link in unconnected state (tested with at8031
attached to gianfar). In effect, this caused a permanent
link loss once the related interface was put down.
This patch ensures that the power down handling in suspend()
and resume() is also applied to the SGMII link.
Signed-off-by: Zefir Kurtisi <zefir.kurtisi@neratec.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 14 Mar 2016 02:35:36 +0000 (22:35 -0400)]
Merge branch 'net-more-bulk-free-users'
Jesper Dangaard Brouer says:
====================
net: bulk free adjustment and two driver use-cases
I've split out the bulk free adjustments from the bulk alloc patches,
as I want the adjustment to napi_consume_skb to be in the same kernel
cycle the API was introduced in.
Adjustments based on discussion:
Subj: "mlx4: use napi_consume_skb API to get bulk free operations"
http://thread.gmane.org/gmane.linux.network/402503/focus=403386
mlx5: use napi_consume_skb API to get bulk free operations
Bulk freeing of SKBs happens transparently via the API call napi_consume_skb().
The napi budget parameter is needed by napi_consume_skb() to detect
whether it is called from netpoll.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
mlx4: use napi_consume_skb API to get bulk free operations
Bulk freeing of SKBs happens transparently via the API call napi_consume_skb().
The napi budget parameter is usually needed by napi_consume_skb()
to detect whether it is called from netpoll. In this patch it has an extra
meaning.
For the mlx4 driver, the mlx4_en_stop_port() call is done outside
NAPI/softirq context, and cleans up the entire TX ring via
mlx4_en_free_tx_buf(). The code in mlx4_en_free_tx_desc() for
freeing SKBs is shared with NAPI calls.
To handle this shared use, the zero budget indication is reused and
handled appropriately in napi_consume_skb(). To reflect this, the
variable is called napi_mode for the function call that needed
this distinction.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: adjust napi_consume_skb to handle non-NAPI callers
Some drivers reuse/share code paths that free SKBs between NAPI
and non-NAPI calls. Adjust napi_consume_skb to handle this
use-case.
Before, calls from netpoll (with IRQs disabled) were handled and
indicated with a zero budget. Use the same zero indication to handle
calls not originating from NAPI/softirq context; these are simply
handled by using dev_consume_skb_any().
This adds an extra branch+call for the netpoll case (checking
in_irq() + irqs_disabled()), but that is okay as this is a slowpath.
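Conceptually, the added branch looks like this (a sketch of the logic described above, not the full function body):

#include <linux/skbuff.h>

void napi_consume_skb(struct sk_buff *skb, int budget)
{
	if (unlikely(!skb))
		return;

	/* Zero budget indicates a non-NAPI context: netpoll with IRQs
	 * disabled, or process-context TX ring cleanup as in mlx4/mlx5. */
	if (unlikely(!budget)) {
		dev_consume_skb_any(skb);
		return;
	}

	/* ... normal NAPI path: defer the skb into the per-CPU bulk-free
	 * list and flush it in batches ... */
}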
Suggested-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Pirko [Thu, 10 Mar 2016 22:10:21 +0000 (23:10 +0100)]
mlxsw: pci: Implement reset done check
Firmware now tells us that the reset is done by passing a magic value
via register. Use it to shorten the wait in case this is supported.
With old firmware, we still wait until the timeout is reached.
Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
sctp: allow sctp_transmit_packet and others to use gfp
Currently sctp_sendmsg() triggers some calls that will allocate memory
with GFP_ATOMIC even when not necessary. In the case of
sctp_packet_transmit it will allocate a linear skb that will be used to
construct the packet, and this may cause sends to fail due to ENOMEM more
often than anticipated, especially with big MTUs.
This patch thus allows it to inherit gfp flags from the upper calls so that
it can use GFP_KERNEL if it was triggered by an sctp_sendmsg call or
similar. All others, like retransmits or flushes started from BH, are
still allocated using GFP_ATOMIC.
In netperf tests this didn't result in any performance drawbacks when
memory is not too fragmented and made it trigger ENOMEM way less often.
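A minimal sketch of the pattern (illustrative helper, not the actual sctp call chain): the allocation site takes a gfp_t from its caller instead of hard-coding GFP_ATOMIC, so the sendmsg path can pass GFP_KERNEL while BH-driven retransmits keep passing GFP_ATOMIC.

#include <linux/gfp.h>
#include <linux/skbuff.h>

/* Illustrative: the linear skb for the outgoing packet inherits the caller's
 * gfp flags rather than always using GFP_ATOMIC. */
static struct sk_buff *alloc_transmit_skb(unsigned int size, gfp_t gfp)
{
	return alloc_skb(size, gfp);
}

/* sendmsg path (may sleep):   alloc_transmit_skb(len, GFP_KERNEL); */
/* retransmit/flush from BH:   alloc_transmit_skb(len, GFP_ATOMIC); */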
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Samuel Gauthier [Thu, 10 Mar 2016 16:14:59 +0000 (17:14 +0100)]
ovs: allow nl 'flow set' to use ufid without flow key
When we want to change a flow using netlink, we have to identify it to
be able to perform a lookup. Both the flow key and unique flow ID
(ufid) are valid identifiers, but we always have to specify the flow
key in the netlink message. When both attributes are there, the ufid
is used. The flow key is used to validate the actions provided by
the userland.
This commit allows to use the ufid without having to provide the flow
key, as it is already done in the netlink 'flow get' and 'flow del'
path. The flow key remains mandatory when an action is provided.
Signed-off-by: Samuel Gauthier <samuel.gauthier@6wind.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Acked-by: Pravin B Shelar <pshelar@ovn.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Nicolas Ferre [Thu, 10 Mar 2016 15:44:32 +0000 (16:44 +0100)]
net: macb: fix default configuration for GMAC on AT91
On AT91 SoCs, the User Register (USRIO) exposes a switch to configure the
"Reduced" or "Traditional" version of the Media Independent Interface
(RMII vs. MII or RGMII vs. GMII).
As on the older EMAC version, on GMAC, this switch is set by default to the
non-reduced type of interface, so use the existing capability and extend it to
GMII as well. We then keep the current logic in the macb_init() function.
The capabilities of sama5d2, sama5d4 and sama5d3 GEM interface are updated in
the macb_config structure to be able to properly enable them with a traditional
interface (GMII or MII).
Reported-by: Romain HENRIET <romain.henriet@l-acoustics.com> Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
LABBE Corentin [Thu, 10 Mar 2016 12:58:58 +0000 (13:58 +0100)]
phy: remove documentation of removed members of phy_device structure
Commit e5a03bfd873c ("phy: Add an mdio_device structure") removed addr,
bus and dev member of the phy_device structure.
This patch remove the documentation about those members.
Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
====================
xen-netback: fix multiple extra info handling
If a frontend passes multiple extra info fragments to netback on the guest
transmit side, only a single ack response will be sent because xen-netback
does not account for this properly. This will eventually cause processing
of the shared ring to wedge.
This series re-imports the canonical netif.h from Xen, where the ring
protocol documentation has been updated, fixes this issue in xen-netback
and also adds a patch to reduce log spam.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Paul Durrant [Thu, 10 Mar 2016 12:30:28 +0000 (12:30 +0000)]
xen-netback: reduce log spam
Remove the "prepare for reconnect" pr_info in xenbus.c. It's largely
uninteresting and the states of the frontend and backend can easily be
observed by watching the (o)xenstored log.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Cc: Wei Liu <wei.liu2@citrix.com> Acked-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paul Durrant [Thu, 10 Mar 2016 12:30:27 +0000 (12:30 +0000)]
xen-netback: support multiple extra info fragments passed from frontend
The code does not currently support a frontend passing multiple extra info
fragments to the backend in a tx request. The xenvif_get_extras() function
handles multiple extra_info fragments but make_tx_response() assumes there
is only ever a single extra info fragment.
This patch modifies xenvif_get_extras() to pass back a count of extra
info fragments, which is then passed to make_tx_response() (after
possibly being stashed in pending_tx_info for deferred responses).
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Cc: Wei Liu <wei.liu2@citrix.com> Acked-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Paul Durrant [Thu, 10 Mar 2016 12:30:26 +0000 (12:30 +0000)]
xen-netback: re-import canonical netif header
The canonical netif header (in the Xen source repo) and the Linux variant
have diverged significantly. Recently much documentation has been added to
the canonical header which is highly useful for developers making
modifications to either xen-netfront or xen-netback. This patch therefore
re-imports the canonical header in its entirety.
To maintain compatibility and some style consistency with the old Linux
variant, the header was stripped of its emacs boilerplate, and
post-processed and copied into place with the following commands:
ed -s netif.h << EOF
H
,s/NETTXF_/XEN_NETTXF_/g
,s/NETRXF_/XEN_NETRXF_/g
,s/NETIF_/XEN_NETIF_/g
,s/XEN_XEN_/XEN_/g
,s/netif/xen_netif/g
,s/xen_xen_/xen_/g
,s/^typedef.*$//g
,s/^ /${TAB}/g
w
$
w
EOF
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: David Vrabel <david.vrabel@citrix.com> Cc: Wei Liu <wei.liu2@citrix.com> Acked-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Xin Long [Thu, 10 Mar 2016 07:31:57 +0000 (15:31 +0800)]
sctp: fix the transports round robin issue when init is retransmitted
Prior to this patch, if at the beginning we have two paths in one assoc,
they may have the same params other than last_time_heard, and it will try
the paths like this:
1st cycle:
try trans1, fail.
then trans2 is selected (because its last_time_heard is after trans1's).
2nd cycle:
try trans2, fail.
then trans2 is selected (because its last_time_heard is after trans1's).
3rd cycle:
try trans2, fail.
then trans2 is selected (because its last_time_heard is after trans1's).
....
trans1 will never have a chance to be selected, which is not what we expect.
We should keep round-robining all the paths if they were just added at the
beginning.
So at first every transport's last_time_heard should be initialized to 0, so
that we ensure they have the same value at the beginning; only by this can
all the transports get an equal chance to be selected.
Then, for sctp_trans_elect_best, it should return trans_next when
*trans == *trans_next, so that we can try the next one if it fails; but now
it always returns trans. We can fix this by exchanging these two params
when we call sctp_trans_elect_tie().
Fixes: 4c47af4d5eb2 ('net: sctp: rework multihoming retransmission path selection to rfc4960') Signed-off-by: Xin Long <lucien.xin@gmail.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David Howells [Wed, 9 Mar 2016 23:22:56 +0000 (23:22 +0000)]
rxrpc: Replace all unsigned with unsigned int
Replace all "unsigned" types with "unsigned int" types.
Reported-by: David Miller <davem@davemloft.net> Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 13 Mar 2016 19:03:34 +0000 (15:03 -0400)]
Merge tag 'wireless-drivers-next-for-davem-2016-03-09' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next
Kalle Valo says:
====================
wireless-drivers patches for 4.6
Major changes:
ath10k
* dt: add bindings for ipq4019 wifi block
* start adding support for qca4019 chip
ath9k
* add device ID for Toshiba WLM-20U2/GN-1080
* allow more than one interface on DFS channels
bcma
* move flash detection code to ChipCommon core driver
brcmfmac
* IPv6 Neighbor discovery offload
* driver settings that can be populated from different sources
* country code setting in firmware
* length checks to validate firmware events
* new way to determine device memory size needed for BCM4366
* various offloads during Wake on Wireless LAN (WoWLAN)
* full Management Frame Protection (MFP) support
iwlwifi
* add support for thermal device / cooling device
* improvements in scheduled scan without profiles
* new firmware support (-21.ucode)
* add MSIX support for 9000 devices
* enable MU-MIMO and take care of firmware restart
* add support for large SKBs in mvm to reach A-MSDU
* add support for filtering frames from a BA session
* start implementing the new Rx path for 9000 devices
* enable the new Radio Resource Management (RRM) nl80211 feature flag
* add a new module parameter to disable VHT
* build infrastructure for Dynamic Queue Allocation
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
====================
A couple of minor clean-ups and optimizations
This patch series is basically just a v2 of a couple patches I recently
submitted.
The two patches aren't technically related; they are just items I found
while cleaning up and prepping some further work to enable Tx checksums for
tunnels.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Wed, 9 Mar 2016 17:25:26 +0000 (09:25 -0800)]
csum: Update csum_block_add to use rotate instead of byteswap
The code for csum_block_add was doing a funky byteswap to swap the even and
odd bytes of the checksum if the offset was odd. Instead of doing this we
can save ourselves some trouble and just shift by 8 as this should have the
same effect in terms of the final checksum value and only requires one
instruction.
In addition we can update csum_block_sub to just use csum_block_add with an
inverse value for csum2. This way we follow the same code path as
csum_block_add without having to duplicate it.
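A sketch of the resulting helpers (mirroring the description above; the real definitions live in include/net/checksum.h):

#include <linux/bitops.h>
#include <net/checksum.h>

static inline __wsum example_csum_block_add(__wsum csum, __wsum csum2, int offset)
{
	u32 sum = (__force u32)csum2;

	/* Rotating by 8 swaps the even/odd byte lanes of the folded sum,
	 * which is all the old byteswap achieved, in a single instruction. */
	if (offset & 1)
		sum = ror32(sum, 8);

	return csum_add(csum, (__force __wsum)sum);
}

static inline __wsum example_csum_block_sub(__wsum csum, __wsum csum2, int offset)
{
	/* Subtraction is addition of the one's-complement inverse. */
	return example_csum_block_add(csum, ~csum2, offset);
}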
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Wed, 9 Mar 2016 17:24:23 +0000 (09:24 -0800)]
gro: Defer clearing of flush bit in tunnel paths
This patch updates the GRO handlers for GRE, VXLAN, GENEVE, and FOU so that
we do not clear the flush bit until after we have called the next level GRO
handler. Previously this was being cleared before parsing through the list
of frames, however this resulted in several paths where either the bit
needed to be reset but wasn't as in the case of FOU, or cases where it was
being set as in GENEVE. By just deferring the clearing of the bit until
after the next level protocol has been parsed we can avoid any unnecessary
bit twiddling and avoid bugs.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
This series contains several changes to driver interaction with the
management fw.
The biggest [& most significant] change here is a change in the locking
scheme and re-definition of the 'critical section' when accessing shared
resources toward the goal of interacting with the management firmware.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Yuval Mintz [Wed, 9 Mar 2016 07:16:26 +0000 (09:16 +0200)]
qed: Enlarge the drain timeout
In the scenario where slowpath configuration isn't passing due to
various pause configurations affecting the chip, the theoretical time
required in the worst-case scenario to empty the hw fifos sufficiently to
guarantee that slowpath configuration would flow is currently
insufficient.
This increases such a drain request to the theoretical maximum.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Zvi Nachmani [Wed, 9 Mar 2016 07:16:25 +0000 (09:16 +0200)]
qed: Notify of transceiver changes
Handle a new message from the MFW, one that indicates that the transceiver
state has changed, and log that into the system logs.
Signed-off-by: Zvi Nachmani <Zvi.Nachmani@qlogic.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Tomer Tayar [Wed, 9 Mar 2016 07:16:24 +0000 (09:16 +0200)]
qed: Major changes to MB locking
Driver interaction with the management firmware is done via mailbox
commands which the management firmware periodically samples, as well
as by placing additional data in set places in the shared memory.
Each PF has a single designated mailbox address, and all flows that
require messaging to the management should use it.
This patch does 2 things:
1. It re-defines the critical section surrounding the mailbox sending -
that section should include the setting of the shared memory as well as
the sending of the command [otherwise a race might send a command with
the data of a different command].
2. It moves the locking scheme from using mutexes to using spinlocks.
This lays the groundwork for sending MFW commands from non-sleepable
contexts.
Signed-off-by: Tomer Tayar <Tomer.Tayar@qlogic.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
When device is configured for Multi-function mode, some older management
firmware might incorrectly notify interfaces of link changes while they
haven't requested the physical link configuration to be set.
This can create bizarre race conditions where unloading interfaces are
getting notified that the link is up.
Let the driver compensate - store the logical requested state of the link
and don't propagate notifications after protocol driver explicitly
requires the link to be unset.
Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 11 Mar 2016 20:14:27 +0000 (15:14 -0500)]
Merge branch 'bpf-flow-labels'
Daniel Borkmann says:
====================
BPF support for flow labels
This set adds support for tunnel key flow labels for vxlan
and geneve devices in collect meta data mode and eBPF support
for managing these. For details please see individual patches.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Wed, 9 Mar 2016 02:00:05 +0000 (03:00 +0100)]
bpf: support flow label for bpf_skb_{set, get}_tunnel_key
This patch extends bpf_tunnel_key with a tunnel_label member, that maps
to ip_tunnel_key's label so underlying backends like vxlan and geneve
can propagate the label to udp_tunnel6_xmit_skb(), where it's being set
in the IPv6 header. It allows for having 20 more bits to encode/decode
flow related meta information programmatically. Tested with vxlan and
geneve.
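A sketch of how a tc eBPF program might set the new member (helper stub declared in the samples/bpf style; the values are placeholders, and a real program would also fill key.remote_ipv6[]):

#include <linux/bpf.h>
#include <linux/pkt_cls.h>

/* Helper stub in the samples/bpf style; resolves to the in-kernel helper. */
static int (*bpf_skb_set_tunnel_key)(void *ctx, const void *key, int size,
				     int flags) = (void *)BPF_FUNC_skb_set_tunnel_key;

__attribute__((section("action"), used))
int set_tunnel_label(struct __sk_buff *skb)
{
	struct bpf_tunnel_key key = {};

	key.tunnel_id    = 42;		/* placeholder VNI */
	key.tunnel_label = 0xabcde;	/* 20-bit IPv6 flow label */

	/* BPF_F_TUNINFO_IPV6 selects the IPv6 remote address / flow label path */
	if (bpf_skb_set_tunnel_key(skb, &key, sizeof(key), BPF_F_TUNINFO_IPV6) < 0)
		return TC_ACT_SHOT;

	return TC_ACT_OK;
}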
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Wed, 9 Mar 2016 02:00:04 +0000 (03:00 +0100)]
geneve: support setting IPv6 flow label
This work adds support for setting the IPv6 flow label for geneve per
device and through collect metadata (ip_tunnel_key) frontends. Also here,
the geneve dst cache does not need any special considerations; for the
cases where caches can be used, the label is static per cache.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Wed, 9 Mar 2016 02:00:03 +0000 (03:00 +0100)]
vxlan: support setting IPv6 flow label
This work adds support for setting the IPv6 flow label for vxlan per
device and through collect metadata (ip_tunnel_key) frontends. The
vxlan dst cache does not need any special considerations here; for
the cases where caches can be used, the label is static per cache.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Wed, 9 Mar 2016 02:00:02 +0000 (03:00 +0100)]
ip_tunnel: add support for setting flow label via collect metadata
This patch extends udp_tunnel6_xmit_skb() to pass in the IPv6 flow label
from call sites. Currently, there's no such option and it's always set to
zero when writing ip6_flow_hdr(). Add a label member to ip_tunnel_key, so
that flow-based tunnels via collect metadata frontends can make use of it.
vxlan and geneve will be converted to add flow label support separately.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
This fixes a regression in the bridge ageing time caused by:
commit c62987bbd8a1 ("bridge: push bridge setting ageing_time down to switchdev")
There are users of Linux bridge which use the feature that if ageing time
is set to 0 it causes entries to never expire. See:
https://www.linuxfoundation.org/collaborate/workgroups/networking/bridge
For a pure software bridge, it is unnecessary for the code to have
arbitrary restrictions on what values are allowable.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Tue, 8 Mar 2016 20:59:34 +0000 (12:59 -0800)]
rocker: set FDB cleanup timer according to lowest ageing time
In rocker, ageing time is a per-port attribute, so the next time the FDB
cleanup timer fires should be set according to the lowest ageing time.
This will later allow us to delete the BR_MIN_AGEING_TIME macro, which was
added to guarantee minimum ageing time in the bridge layer, thereby breaking
existing behavior.
Signed-off-by: Ido Schimmel <idosch@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Tue, 8 Mar 2016 20:59:33 +0000 (12:59 -0800)]
mlxsw: spectrum: Check requested ageing time is valid
Commit c62987bbd8a1 ("bridge: push bridge setting ageing_time down to
switchdev") added a check for minimum and maximum ageing time, but this
breaks existing behaviour where one can set ageing time to 0 for a
non-learning bridge.
Push this check down to the driver and allow the check in the bridge
layer to be removed. Currently ageing time 0 is refused by the driver,
but we can later add support for this functionality.
Signed-off-by: Ido Schimmel <idosch@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
The stack expects link layer headers in the skb linear section.
Macvtap can create skbs with llheader in frags in edge cases:
when (IFF_VNET_HDR is off or vnet_hdr.hdr_len < ETH_HLEN) and
prepad + len > PAGE_SIZE and vnet_hdr.flags has no or bad csum.
Add checks to ensure linear is always at least ETH_HLEN.
At this point, len is already ensured to be >= ETH_HLEN.
For backwards compatibility, this rounds up short vnet_hdr.hdr_len.
This differs from tap and packet, which return an error.
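The clamp itself is tiny; a sketch of the idea (the helper name here is hypothetical, not the macvtap code):

#include <linux/if_ether.h>

/* Hypothetical helper: whatever vnet_hdr.hdr_len requested, never copy less
 * than a full ethernet header into the linear area (len >= ETH_HLEN already). */
static unsigned int macvtap_linear_len(unsigned int vnet_hdr_len,
				       unsigned int copylen)
{
	unsigned int linear = vnet_hdr_len ? vnet_hdr_len : copylen;

	return linear < ETH_HLEN ? ETH_HLEN : linear;
}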
Fixes: b9fb9ee07e67 ("macvtap: add GSO/csum offload support") Signed-off-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Amir Vadai [Fri, 11 Mar 2016 09:08:45 +0000 (11:08 +0200)]
net/flower: Fix pointer cast
Cast pointer to unsigned long instead of u64, to fix compilation warning
on 32 bit arch, spotted by 0day build.
Fixes: 5b33f48 ("net/flower: Introduce hardware offload support") Signed-off-by: Amir Vadai <amir@vadai.me> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 10 Mar 2016 21:24:03 +0000 (16:24 -0500)]
Merge branch 'flower-offload'
Amir Vadai says:
====================
cls_flower hardware offload support
Please see changes from V2 at the bottom.
This patchset introduces cls_flower hardware offload support over ConnectX-4
driver, more hardware vendors are welcome to use it too.
This patchset is based on John's infrastructure for tc offloading [2] to add
hardware offload support to the flower filter. It also extends the support to
an additional tc action - skbedit mark operation.
The NIC driver that was used is ConnectX-4. The feature is off by default
and can be turned on using ethtool.
$TC filter add dev $ETH protocol ip prio 30 parent ffff: \
flower ip_proto 6 \
indev $ETH \
action skbedit mark 0x1234
$TC filter add dev $ETH protocol ip prio 10 parent ffff: \
handle 0x1234 fw action pass
The code was tested and applied on top of commit 3ebeac1 ("Merge branch
'cxgb4-next'")
Changes from V2:
- patch 1/10 ("net/flower: Introduce hardware offload support")
- Remove unused variable [Dave]
- Don't fail command when HW can't offload filter [John]
- patch 3/10 ("net/sched: Macro instead of CONFIG_NET_CLS_ACT ifdef")
- Mention in changelog that struct tc_action is now exposed out of the ifdef.
- patch 4/10 ("net/act_skbedit: Utility functions for mark action")
- Document clearly that is_tcf_skbedit_mark() is returning true if and only
if the only action is mark [Dave]
- patch 8/10 ("net/mlx5e: Introduce tc offload support")
- make mlx5e_tc_add_flow() static
Changes from V1:
- patch 3/10 ("net/sched: Macro instead of CONFIG_NET_CLS_ACT ifdef")
- fixed return value of tc_no_actions
Changes from V0:
- Use tc_no_actions and tc_for_each_action instead of ifdef CONFIG_NET_CLS_ACT
- Replace ENOTSUPP (and some EINVAL) with EOPNOTSUPP
- Name the flower command enum
- fl_hw_destroy_filter() to return void - nobody uses the return value
- mlx5e_tc_init() and mlx5e_tc_cleanup() to be called from the right places.
- When adding HW rule fails - fail the command
- Rules are added to be processed both by HW and SW unless SKIP_HW is given
- Adding patch 6/10 ("net/mlx5e: Relax ndo_setup_tc handle restriction")
Main changes from the RFC [1]:
- API
- Using ndo_setup_tc() instead of switchdev
- act_skbedit, act_gact
- Actions are not serialized to NIC driver, instead using access functions.
- cls_flower
- prevent double classification by software by not adding
successfully offloaded filters to the hashtable
- Fixed some bugs in original RFC with rule delete
- mlx5
- Adding flow table to kernel namespace instead of a new namespace
- s/offload/tc/ in many places
- no need for a special kconfig since switchdev is not used
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Amir Vadai [Tue, 8 Mar 2016 10:42:36 +0000 (12:42 +0200)]
net/mlx5e: Introduce tc offload support
Extend ndo_setup_tc() to support ingress tc offloading. Will be used by
later patches to offload tc flower filter.
Feature is off by default and could be enabled by issuing:
# ethtool -K eth0 hw-tc-offload on
The offload flow table is dynamically created when the first filter is
added.
Rules are saved in a hash table that is maintained by the consumer (for
example - the flower offload in the next patch).
When the last filter is removed and no filters exist in the hash table, the
offload flow table is destroyed.
Signed-off-by: Amir Vadai <amir@vadai.me> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Amir Vadai [Tue, 8 Mar 2016 10:42:35 +0000 (12:42 +0200)]
net/mlx5e: Add a new priority for kernel flow tables
Move the vlan and main flow tables to use priority 1. This will allow
the upcoming TC offload logic to use a higher priority (0) for the
offload steering table.
Signed-off-by: Amir Vadai <amir@vadai.me> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Amir Vadai [Tue, 8 Mar 2016 10:42:33 +0000 (12:42 +0200)]
net/mlx5_core: Set flow steering dest only for forward rules
We need to handle flow table entry destinations only if the action
associated with the rule is forwarding (MLX5_FLOW_CONTEXT_ACTION_FWD_DEST).
Fixes: 26a8145390b3 ('net/mlx5_core: Introduce flow steering firmware commands') Signed-off-by: Amir Vadai <amir@vadai.me> Signed-off-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Amir Vadai [Tue, 8 Mar 2016 10:42:31 +0000 (12:42 +0200)]
net/sched: Macro instead of CONFIG_NET_CLS_ACT ifdef
Introduce the macros tc_no_actions and tc_for_each_action to make code
clearer.
Extracted struct tc_action out of the ifdef to make calls to
is_tcf_gact_shot() and similar functions valid, even when it is a nop.
Acked-by: Jiri Pirko <jiri@mellanox.com> Acked-by: John Fastabend <john.r.fastabend@intel.com> Suggested-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Amir Vadai <amir@vadai.me> Signed-off-by: David S. Miller <davem@davemloft.net>
Amir Vadai [Tue, 8 Mar 2016 10:42:30 +0000 (12:42 +0200)]
net/flow_dissector: Make dissector_uses_key() and skb_flow_dissector_target() public
Will be used in a following patch to query whether a key is being used, and
what its value is in the target object.
Acked-by: John Fastabend <john.r.fastabend@intel.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Amir Vadai <amir@vadai.me> Signed-off-by: David S. Miller <davem@davemloft.net>
Amir Vadai [Tue, 8 Mar 2016 10:42:29 +0000 (12:42 +0200)]
net/flower: Introduce hardware offload support
This patch is based on a patch made by John Fastabend.
It adds support for offloading cls_flower.
When NETIF_F_HW_TC is on:
flags = 0 => the rule will be processed twice - by hardware, and if
still relevant, by software.
flags = SKIP_HW => the rule will be processed by software only.
If the hardware fails or is not capable of applying the rule, the operation
will NOT fail; the filter will just be processed by SW only.
Acked-by: Jiri Pirko <jiri@mellanox.com> Suggested-by: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: Amir Vadai <amir@vadai.me> Signed-off-by: David S. Miller <davem@davemloft.net>
This series adds support for the Mediatek ethernet core found on current ARM
based SoCs. The driver works on MT2701 and MT7623 SoCs.
Instead of trying to upstream everything at once I decided to concentrate on
the important parts required to make current generation silicon work. The V3
series only includes the code required to make dual MAC setups work and only
supports the newer QDMA engine.
Changes in V5
* reduce the mdio timeout to HZ
* add a call to usleep_range() which schedules in the background.
Changes in V4
* remove ugly _FE macro, use offsetof() instead
Changes in V3
* only include code for MT2701/7623 support
* drop support for PDMA and older MIPS based SoCs
* drop switch support
Changes in V2
* change the namespace of the functions from fe_* to mtk_*
* add support for the latest generation of ARM SoCs
* add dual MAC support
* remove the swconfig specific bits
* remove most of the magic values and replace them with defines
* add verbose descriptions to the patches
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
John Crispin [Tue, 8 Mar 2016 10:29:55 +0000 (11:29 +0100)]
net-next: mediatek: add support for MT7623 ethernet
Add ethernet support for MediaTek SoCs from the MT7623 family. These have
dual GMAC. Depending on the exact version, there might be a built-in
Gigabit switch (MT7530). The core does not have the typical DMA ring setup.
Instead there is a linked list that we add descriptors to. There is only
one linked list that both MACs use together. There is a special field
inside the TX descriptors called the VQID. This allows us to assign packets
to different internal queues. By using a separate id for each MAC we are
able to get deterministic results for BQL. Additionally we need to
provide the core with a block of scratch memory that is the same size as
the RX ring and data buffer. This is really needed to make the HW datapath
work. Although the driver does not support this yet, we still need to
assign the memory and tell the core about it for RX to work.
Signed-off-by: Felix Fietkau <nbd@openwrt.org> Signed-off-by: Michael Lee <igvtee@gmail.com> Signed-off-by: John Crispin <blogic@openwrt.org> Signed-off-by: David S. Miller <davem@davemloft.net>
This adds the binding documentation for the MediaTek Ethernet
controller.
Signed-off-by: John Crispin <blogic@openwrt.org> Acked-by: Rob Herring <robh@kernel.org> Cc: devicetree@vger.kernel.org Signed-off-by: David S. Miller <davem@davemloft.net>
Neil Armstrong [Tue, 8 Mar 2016 09:36:20 +0000 (10:36 +0100)]
net: dsa: Fix cleanup resources upon module removal
The initial commit was badly merged into the dsa_resume method instead
of the dsa_remove_dst method.
As a consequence, dst->master_netdev->dsa_ptr is not set to NULL on
removal, and re-binding of the dsa device fails with error -17.
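In other words, the clearing of dsa_ptr belongs in the teardown path; roughly (a sketch abridged from the description above, not the full function):

#include <linux/netdevice.h>
#include <net/dsa.h>

static void dsa_remove_dst(struct dsa_switch_tree *dst)
{
	/* ... tear down the switches attached to this tree ... */

	/* Clearing dsa_ptr here (rather than in dsa_resume) lets the master
	 * device be re-bound later instead of failing with -EEXIST (-17). */
	dst->master_netdev->dsa_ptr = NULL;
}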
Fixes: b0dc635d923c ("net: dsa: cleanup resources upon module removal ") Signed-off-by: Neil Armstrong <narmstrong@baylibre.com> Acked-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
Manish Chopra [Tue, 8 Mar 2016 09:09:44 +0000 (04:09 -0500)]
qede: Fix net-next "make ARCH=x86_64"
'commit 55482edc25f0606851de42e73618f813f310d009
("qede: Add slowpath/fastpath support and enable hardware GRO")'
introduces the following error when compiling net-next with "make ARCH=x86_64":
drivers/built-in.o: In function `qede_rx_int':
qede_main.c:(.text+0x6101a0): undefined reference to `tcp_gro_complete'
Signed-off-by: Manish Chopra <manish.chopra@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 10 Mar 2016 21:15:54 +0000 (16:15 -0500)]
Merge branch 'qlcnic-next'
Rajesh Borundia says:
====================
qlcnic fixes
This series adds the following fixes:
o While processing a mailbox, if the driver gets a spurious mailbox
interrupt it leads to premature completion of the next
mailbox request. Added a guard against this by checking the current
state of the mailbox and ignoring the spurious interrupt.
Added a stats counter to record this condition.
v2:
o Added a patch that removes the usage of atomic_t as we are not implementing
atomicity by using an atomic_t value.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Rajesh Borundia [Tue, 8 Mar 2016 07:39:58 +0000 (02:39 -0500)]
qlcnic: Fix mailbox completion handling during spurious interrupt
o While the driver is in the middle of MB completion processing
and it receives a spurious MB interrupt, it is mistaken as a good MB
completion interrupt leading to premature completion of the next MB
request. Fix the driver to guard against this by checking the current
state of MB processing and ignore the spurious interrupt.
Also added a stats counter to record this condition.
Signed-off-by: Rajesh Borundia <rajesh.borundia@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 10 Mar 2016 21:12:25 +0000 (16:12 -0500)]
Merge branch 'cxgb4-next'
Hariprasad Shenai says:
====================
cxgb4vf: Interrupt and queue configuration changes
This series fixes some issues and makes some changes in the queue and
interrupt configuration for the cxgb4vf driver. We need to enable interrupts
before we register our network device, so that we don't lose link up
interrupts. Allocate rx queues based on interrupt type. Set the number of
tx/rx queues in the probe function only. Also add checks for some invalid
configurations.
This patch series has been created against net-next tree and includes
patches on cxgb4vf driver.
We have included all the maintainers of respective drivers. Kindly review
the change and let us know in case of any review comments.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
cxgb4vf: Configure queue based on resource and interrupt type
The Queue Set Configuration code was always reserving room for a
Forwarded interrupt Queue even in the cases where we weren't using it.
Figure out how many Ports and Queue Sets we can support. This depends on
knowing our Virtual Function Resources and may be called a second time
if we fall back from MSI-X to MSI Interrupt Mode. This change fixes that
problem.
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
cxgb4vf: Enable interrupts before we register our network devices
This avoids a race condition where a system has network devices set up
to be automatically configured and we get the first Port Link Status
message from the firmware on the Asynchronous Firmware Event Queue before
we've enabled interrupts. If that happens, we end up losing the interrupt
and never realizing that the link has actually come up.
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Tested-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
Vivien Didelot [Mon, 7 Mar 2016 23:24:39 +0000 (18:24 -0500)]
net: dsa: mv88e6xxx: read then write PVID
The port register 0x07 contains more options than just the default VID,
even though they are not used yet. So prefer a read then write operation
over a direct write.
This also allows keeping track of the change through dynamic debug.
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Tested-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
Vivien Didelot [Mon, 7 Mar 2016 23:24:17 +0000 (18:24 -0500)]
net: dsa: mv88e6xxx: rework port state setter
Apply a few non-functional changes on the port state setter:
* add a dynamic debug message with state names to track changes
* explicit states checking instead of assuming their numeric values
* lock mutex only once when changing several port states
* use bitmap macros to declare and access port_state_update_mask
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Tested-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
Sergei Shtylyov [Mon, 7 Mar 2016 22:37:09 +0000 (01:37 +0300)]
sh_eth: advance 'rxdesc' later in sh_eth_ring_format()
Iff dma_map_single() fails, 'rxdesc' should point to the last filled RX
descriptor, so that it can be marked as the last one, however the driver
would have already advanced it by that time. In order to fix that, only
fill an RX descriptor once all the data for it is ready.
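A generic sketch of the reordering (the descriptor layout and flag names here are illustrative, not sh_eth's): map the buffer first, and only fill and hand off the descriptor once the mapping has succeeded.

#include <linux/bitops.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/skbuff.h>
#include <linux/types.h>

struct example_rxdesc {		/* illustrative descriptor layout */
	__le32 status;
	__le32 len;
	__le32 addr;
	__le32 pad;
};

#define RX_DESC_ACTIVE	BIT(31)	/* illustrative "owned by hardware" flag */

static int fill_rx_desc(struct device *dev, struct example_rxdesc *rxdesc,
			struct sk_buff *skb, unsigned int buf_len)
{
	dma_addr_t dma_addr;

	dma_addr = dma_map_single(dev, skb->data, buf_len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, dma_addr)) {
		dev_kfree_skb(skb);
		return -ENOMEM;
	}

	/* Only now is the descriptor touched, so "last filled descriptor"
	 * stays well defined even when mapping fails. */
	rxdesc->addr   = cpu_to_le32(dma_addr);
	rxdesc->len    = cpu_to_le32(buf_len);
	rxdesc->status = cpu_to_le32(RX_DESC_ACTIVE);
	return 0;
}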
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sergei Shtylyov [Mon, 7 Mar 2016 22:36:28 +0000 (01:36 +0300)]
sh_eth: fix NULL pointer dereference in sh_eth_ring_format()
In a low memory situation, if netdev_alloc_skb() fails on the first RX ring
loop iteration in sh_eth_ring_format(), 'rxdesc' is still NULL. Avoid a
kernel oops by adding a 'rxdesc' check after the loop.
Reported-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>