====================
Remove rtnl lock dependency from all action implementations
Currently, all netlink protocol handlers for updating rules, actions and
qdiscs are protected with a single global rtnl lock, which removes any
possibility for parallelism. This patch set is the second step towards removing
the rtnl lock dependency from the TC rules update path.
Recently, a new rtnl registration flag, RTNL_FLAG_DOIT_UNLOCKED, was added.
Handlers registered with this flag are called without RTNL taken. The end
goal is to have the rule update handlers (RTM_NEWTFILTER, RTM_DELTFILTER,
etc.) registered with the UNLOCKED flag to allow parallel execution.
However, there is no intention to completely remove or split the rtnl lock
itself. This patch set addresses specific problems in the implementation of
tc actions that prevent their control path from being executed
concurrently. Additional changes are required to refactor the classifiers
API and individual classifiers for parallel execution. This patch set
lays the groundwork to eventually register the rule update handlers as
rtnl-unlocked.
The action API is already prepared for parallel execution by the previous
patch set, which means that action ops that use the action API for their
implementation (delete, search, etc.) do not require additional
modifications. The action API implements concurrency-safe reference counting
and guarantees that cleanup/delete is called only once, after the last
reference to the action is released.
The goal of this change is to update the specific action APIs that access
action private state directly, so that they become independent from external
locking. The general approach is to re-use the existing tcf_lock spinlock
(used by some action implementations to synchronize the control path with the
data path) to protect action private state from concurrent modification. If
an action has an rcu-protected pointer, the tcf spinlock is used to protect
its update code, instead of relying on the rtnl lock.
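As a rough C sketch of that pattern (illustrative only; the struct, field and
function names below are invented for the example and do not come from any
specific action):
/* Control path takes the action's spinlock; the data path keeps reading the
 * parameters through an RCU-protected pointer. */
struct foo_params {
	struct rcu_head rcu;
	/* ... action configuration ... */
};

struct foo_action {
	spinlock_t tcf_lock;			/* control-path lock */
	struct foo_params __rcu *params;	/* read by the data path */
};

static void foo_update_params(struct foo_action *a, struct foo_params *new)
{
	struct foo_params *old;

	spin_lock_bh(&a->tcf_lock);
	old = rcu_dereference_protected(a->params,
					lockdep_is_held(&a->tcf_lock));
	rcu_assign_pointer(a->params, new);
	spin_unlock_bh(&a->tcf_lock);

	if (old)
		kfree_rcu(old, rcu);	/* freed after concurrent readers finish */
}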
Some actions need to know whether the rtnl mutex is held in order to release
it. For example, the ife action can load additional kernel modules (meta ops)
and must make sure that no locks are held during module load. In such cases
the 'rtnl_held' argument is used to conditionally release the rtnl mutex.
Changes from V1 to V2:
- Patch 12:
- new patch
- Patch 14:
- refactor gen_new_estimator() to reuse stats_lock when re-assigning
rate estimator statistics pointer
- Remove mirred and tunnel_key helper function changes (to be submitted
as a standalone patch)
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Fri, 10 Aug 2018 17:51:55 +0000 (20:51 +0300)]
net: sched: act_police: remove dependency on rtnl lock
Use tcf spinlock to protect police action private data from concurrent
modification during dump. (init already uses tcf spinlock when changing
police action state)
Pass tcf spinlock as estimator lock argument to gen_replace_estimator()
during action init.
Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Fri, 10 Aug 2018 17:51:53 +0000 (20:51 +0300)]
net: sched: act_mirred: remove dependency on rtnl lock
Re-introduce the mirred list spinlock, which was removed some time ago, in
order to protect the list from concurrent modification instead of relying on
the rtnl lock.
Use tcf spinlock to protect mirred action private data from concurrent
modification in init and dump. Rearrange access to mirred data in order to
be performed only while holding the lock.
Rearrange net dev access to always hold a reference while working with the
device, instead of relying on the rtnl lock.
Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Fri, 10 Aug 2018 17:51:52 +0000 (20:51 +0300)]
net: sched: extend action ops with put_dev callback
As a preparation for removing dependency on rtnl lock from rules update
path, all users of shared objects must take reference while working with
them.
Extend action ops with put_dev() API to be used on net device returned by
get_dev().
Modify the mirred action (the only action that implements the get_dev
callback), as sketched below:
- Take reference to net device in get_dev.
- Implement put_dev API that releases reference to net device.
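A hedged sketch of the ops pair described above (signatures approximate; the
to_mirred() helper and tcfm_dev field are assumptions for illustration):
/* get_dev() returns the target device with a reference held; put_dev()
 * releases that reference once the caller is done with the device. */
static struct net_device *tcf_mirred_get_dev(const struct tc_action *a)
{
	struct tcf_mirred *m = to_mirred(a);	/* helper name assumed */
	struct net_device *dev;

	rcu_read_lock();
	dev = rcu_dereference(m->tcfm_dev);	/* field name assumed */
	if (dev)
		dev_hold(dev);			/* keep it alive past the rcu section */
	rcu_read_unlock();

	return dev;
}

static void tcf_mirred_put_dev(struct net_device *dev)
{
	dev_put(dev);
}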
Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Fri, 10 Aug 2018 17:51:51 +0000 (20:51 +0300)]
net: sched: act_vlan: remove dependency on rtnl lock
Use tcf spinlock to protect vlan action private data from concurrent
modification during dump and init. Use rcu swap operation to reassign
params pointer under protection of the tcf lock (the old params value is not
used by init, so there is no need for a standalone rcu dereference step).
Remove rtnl assertion that is no longer necessary.
Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Fri, 10 Aug 2018 17:51:50 +0000 (20:51 +0300)]
net: sched: act_tunnel_key: remove dependency on rtnl lock
Use tcf lock to protect tunnel key action struct private data from
concurrent modification in init and dump. Use rcu swap operation to
reassign params pointer under protection of the tcf lock (the old params
value is not used by init, so there is no need for a standalone rcu
dereference step).
Remove rtnl lock assertion that is no longer required.
Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Fri, 10 Aug 2018 17:51:49 +0000 (20:51 +0300)]
net: sched: act_skbmod: remove dependency on rtnl lock
Move read of skbmod_p rcu pointer to be protected by tcf spinlock. Use tcf
spinlock to protect private skbmod data from concurrent modification during
dump.
Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Fri, 10 Aug 2018 17:51:48 +0000 (20:51 +0300)]
net: sched: act_simple: remove dependency on rtnl lock
Use tcf spinlock to protect private simple action data from concurrent
modification during dump. (simple init already uses tcf spinlock when
changing action state)
Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Fri, 10 Aug 2018 17:51:46 +0000 (20:51 +0300)]
net: sched: act_pedit: remove dependency on rtnl lock
Rearrange pedit init code to only access pedit action data while holding
tcf spinlock. Change keys allocation type to atomic to allow it to execute
while holding tcf spinlock. Take tcf spinlock in dump function when
accessing pedit action data.
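A minimal sketch of why the allocation flag matters (assuming the keys are
copied while the spinlock is held; variable names are illustrative):
spin_lock_bh(&p->tcf_lock);
keys = kmalloc(ksize, GFP_ATOMIC);	/* must not sleep under a spinlock */
if (!keys) {
	spin_unlock_bh(&p->tcf_lock);
	return -ENOMEM;
}
memcpy(keys, parm->keys, ksize);
/* ... publish the new keys in the action's private state ... */
spin_unlock_bh(&p->tcf_lock);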
Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Fri, 10 Aug 2018 17:51:45 +0000 (20:51 +0300)]
net: sched: act_ipt: remove dependency on rtnl lock
Use tcf spinlock to protect ipt action private data from concurrent
modification during dump. Ipt init already takes tcf spinlock when
modifying ipt state.
Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Fri, 10 Aug 2018 17:51:44 +0000 (20:51 +0300)]
net: sched: act_ife: remove dependency on rtnl lock
Use tcf spinlock and rcu to protect params pointer from concurrent
modification during dump and init. Use rcu swap operation to reassign
params pointer under protection of the tcf lock (the old params value is not
used by init, so there is no need for a standalone rcu dereference step).
Ife action has meta-actions that are compiled as standalone modules. Rtnl
mutex must be released while loading a kernel module. In order to support
execution without rtnl mutex, propagate 'rtnl_held' argument to meta action
loading functions. When requesting meta action module, conditionally
release rtnl lock depending on 'rtnl_held' argument.
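A hedged sketch of that conditional release (function name, argument list and
module alias are assumptions for illustration):
static int load_metaops_and_vet(u32 metaid, bool rtnl_held)
{
	if (rtnl_held)
		rtnl_unlock();		/* no locks may be held across module load */

	request_module("ife-meta-%u", metaid);	/* alias format assumed */

	if (rtnl_held)
		rtnl_lock();

	/* ... re-lookup the meta ops now that the module may be loaded ... */
	return 0;
}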
Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Fri, 10 Aug 2018 17:51:43 +0000 (20:51 +0300)]
net: sched: act_gact: remove dependency on rtnl lock
Use tcf spinlock to protect gact action private state from concurrent
modification during dump and init. Remove rtnl assertion that is no longer
necessary.
Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Fri, 10 Aug 2018 17:51:42 +0000 (20:51 +0300)]
net: sched: act_csum: remove dependency on rtnl lock
Use tcf lock to protect csum action struct private data from concurrent
modification in init and dump. Use rcu swap operation to reassign params
pointer under protection of the tcf lock (the old params value is not used
by init, so there is no need for a standalone rcu dereference step).
Remove rtnl assertion that is no longer necessary.
Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vlad Buslov [Fri, 10 Aug 2018 17:51:41 +0000 (20:51 +0300)]
net: sched: act_bpf: remove dependency on rtnl lock
Use tcf spinlock to protect bpf action private data from concurrent
modification during dump and init. Remove rtnl lock assertion that is no
longer necessary.
Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
====================
net/sctp: Avoid allocating high order memory with kmalloc()
Each SCTP association can have up to 65535 input and output streams.
For each stream type an array of sctp_stream_in or sctp_stream_out
structures is allocated using kmalloc_array() function. This function
allocates physically contiguous memory regions, so this can lead
to allocation of memory regions of very high order, i.e.:
sizeof(struct sctp_stream_out) == 24,
((65535 * 24) / 4096) == 383 memory pages (4096 bytes per page),
which means an order-9 allocation.
This can lead to memory allocation failures on systems
under memory stress.
We actually do not need these arrays of memory to be physically
contiguous. A possible simple solution would be to use kvmalloc()
instead of kmalloc(), as kvmalloc() can allocate physically scattered
pages if contiguous pages are not available. But the problem
is that the allocation can happen in softirq context with the
GFP_ATOMIC flag set, and kvmalloc() cannot be used in this scenario.
So the other possible solution is to use flexible arrays instead of
contiguous arrays of memory, so that the memory is allocated
on a per-page basis.
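A hedged sketch of the per-page allocation with the kernel's flex_array API
(not the exact SCTP code; the helper name is illustrative and error handling
is trimmed):
#include <linux/flex_array.h>

static struct flex_array *alloc_stream_array(size_t elem_size, __u16 count,
					     gfp_t gfp)
{
	struct flex_array *fa;

	fa = flex_array_alloc(elem_size, count, gfp);	/* per-page backing */
	if (!fa)
		return NULL;

	/* Preallocate the element pages so later accesses cannot fail. */
	if (flex_array_prealloc(fa, 0, count, gfp)) {
		flex_array_free(fa);
		return NULL;
	}

	return fa;
}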
This patchset replaces kmalloc_array() with flex_array usage.
It consists of two parts:
* First patch is preparatory - it mechanically wraps all direct
access to assoc->stream.out[] and assoc->stream.in[] arrays
with SCTP_SO() and SCTP_SI() wrappers so that later a direct
array access could be easily changed to an access to a
flex_array (or any other possible alternative).
* Second patch replaces kmalloc_array() with flex_array usage.
v2 changes:
sctp_stream_in() users are updated to provide stream as an argument,
sctp_stream_{in,out}_ptr() are now just sctp_stream_{in,out}().
v3 changes:
Move the type changes (struct sctp_stream_out -> flex_array) to the next patch.
Make sctp_stream_{in,out}() static inline and move them to a header.
Performance results (single stream):
====================================
* Kernel: v4.18-rc6 - stock and with 2 patches from Oleg (earlier in this thread)
* Node: CPU (8 cores): Intel(R) Xeon(R) CPU E31230 @ 3.20GHz
RAM: 32 Gb
* netperf: taken from https://github.com/HewlettPackard/netperf.git,
compiled from sources with sctp support
* netperf server and client are run on the same node
* ip link set lo mtu 1500
The script used to run tests:
# cat run_tests.sh
#!/bin/bash
for test in SCTP_STREAM SCTP_STREAM_MANY SCTP_RR SCTP_RR_MANY; do
echo "TEST: $test";
for i in `seq 1 3`; do
echo "Iteration: $i";
set -x
netperf -t $test -H localhost -p 22222 -S 200000,200000 -s 200000,200000 \
-l 60 -- -m 1452;
set +x
done
done
================================================
Results (a bit reformatted to be more readable):
Columns: Recv Socket Size (bytes), Send Socket Size (bytes),
Send Message Size (bytes), Elapsed Time (secs.), Throughput (10^6 bits/sec)
Performance results for many streams:
=====================================
* Kernel: v4.18-rc8 - stock and with 2 patches v3
* Node: CPU (8 cores): Intel(R) Xeon(R) CPU E31230 @ 3.20GHz
RAM: 32 Gb
* sctp_test: https://github.com/sctp/lksctp-tools
* both server and client are run on the same node
* ip link set lo mtu 1500
* sysctl -w vm.max_map_count=65530000 (need it to make memory fragmented)
The script used to run tests:
=============================
# cat run_sctp_test.sh
#!/bin/bash
1) ms stock kernel v4.18-rc8, no memory fragmentation
test 1 test 2 test 3
real 0m14.715s 0m14.593s 0m15.954s
user 0m0.954s 0m0.955s 0m0.854s
sys 0m13.388s 0m12.537s 0m13.749s
2) kernel with fixes, no memory fragmentation
test 1 test 2 test 3
real 0m14.959s 0m14.693s 0m14.762s
user 0m0.948s 0m0.921s 0m0.929s
sys 0m13.538s 0m13.225s 0m13.217s
net/sctp: Replace in/out stream arrays with flex_array
This patch replaces physically contiguous memory arrays
allocated using kmalloc_array() with flexible arrays.
This makes it possible to avoid memory allocation failures
on systems under memory stress.
Signed-off-by: Oleg Babin <obabin@virtuozzo.com> Signed-off-by: Konstantin Khorenko <khorenko@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net/sctp: Make wrappers for accessing in/out streams
This patch introduces wrappers for accessing the in/out streams indirectly.
This will make it possible to replace the physically contiguous memory arrays
of streams with flexible arrays (or perhaps any other appropriate
mechanism) which do memory allocation on a per-page basis.
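A hedged sketch of the wrappers once both patches are applied (approximate;
the real helpers live in the SCTP headers):
/* Callers use SCTP_SO()/SCTP_SI() instead of indexing stream->out[] and
 * stream->in[] directly, so the backing storage can be a flex_array. */
static inline struct sctp_stream_out *SCTP_SO(struct sctp_stream *stream,
					      __u16 sid)
{
	return flex_array_get(stream->out, sid);
}

static inline struct sctp_stream_in *SCTP_SI(struct sctp_stream *stream,
					     __u16 sid)
{
	return flex_array_get(stream->in, sid);
}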
Signed-off-by: Oleg Babin <obabin@virtuozzo.com> Signed-off-by: Konstantin Khorenko <khorenko@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Keara Leibovitz [Fri, 10 Aug 2018 14:09:41 +0000 (10:09 -0400)]
tc: Update README and add config
Updated README.
Added config file that contains the minimum required features enabled to
run the tests currently present in the kernel.
This must be updated when new unittests are created and require their own
modules.
Signed-off-by: Keara Leibovitz <kleib@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
The current ioctl() handling code can be simplified. It tests for
irrelevant conditions and uselessly holds sockets. Once the useless
code is removed, it becomes even simpler to let pppol2tp_ioctl() handle
commands directly, rather than dispatching them to pppol2tp_tunnel_ioctl()
or pppol2tp_session_ioctl(). That is the approach taken by this series.
Patches #1 and #2 define helper functions aimed at simplifying the rest
of the patch set.
Patch #3 drops useless tests in pppol2tp_ioctl() and avoids holding a
refcount on the socket.
Patches #4, #5 and #6 are the core of the series. They let
pppol2tp_ioctl() handle all ioctls and drop the tunnel and session
specific functions.
Then patch #6 brings a little bit of consolidation.
Finally, patch #7 takes advantage of the simplified code to make
pppol2tp sockets compatible with dev_ioctl(). Certainly not a killer
feature, but it is trivial and it is always nice to see l2tp getting
better integration with the rest of the stack.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Guillaume Nault [Fri, 10 Aug 2018 11:22:03 +0000 (13:22 +0200)]
l2tp: let pppol2tp_ioctl() fallback to dev_ioctl()
Return -ENOIOCTLCMD for unknown ioctl commands. This lets dev_ioctl()
handle generic socket ioctls like SIOCGIFNAME or SIOCGIFINDEX.
PF_PPPOX/PX_PROTO_OL2TP was one of the few socket types not honouring
this mechanism.
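Roughly, the fallback amounts to the following sketch (the surrounding
function and any other command cases are omitted):
switch (cmd) {
case PPPIOCGL2TPSTATS:
	/* ... handled by the l2tp code ... */
	break;
default:
	/* Not ours: report -ENOIOCTLCMD so the socket layer can hand the
	 * command to dev_ioctl() (SIOCGIFNAME, SIOCGIFINDEX, ...). */
	return -ENOIOCTLCMD;
}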
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
Guillaume Nault [Fri, 10 Aug 2018 11:22:00 +0000 (13:22 +0200)]
l2tp: remove pppol2tp_tunnel_ioctl()
Handle PPPIOCGL2TPSTATS in pppol2tp_ioctl() if the socket represents a
tunnel. This one is a bit special because the caller may use the tunnel
socket to retrieve statistics of one of its sessions. If the session_id
is set, the corresponding session's statistics are returned, instead of
those of the tunnel. This is handled by the new
pppol2tp_tunnel_copy_stats() helper function.
Set ->tunnel_id and ->using_ipsec outside of the conditional, so
that they can be used by the 'else' branch in the following patch.
We cannot do that for ->session_id, because tunnel sockets have to
report the value that was originally passed in 'stats.session_id',
while session sockets have to report their own session_id.
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
Guillaume Nault [Fri, 10 Aug 2018 11:21:58 +0000 (13:21 +0200)]
l2tp: simplify pppol2tp_ioctl()
* Drop test on 'sk': sock->sk cannot be NULL, or pppox_ioctl() could
not have called us.
* Drop test on 'SOCK_DEAD' state: if this flag was set, the socket
would be in the process of being released and no ioctl could be
running anymore.
* Drop test on 'PPPOX_*' state: we depend on ->sk_user_data to get
the session structure. If it is non-NULL, then the socket is
connected. Testing for PPPOX_* is redundant.
* Retrieve session using ->sk_user_data directly, instead of going
through pppol2tp_sock_to_session(). This avoids grabbing a useless
reference on the socket.
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
Guillaume Nault [Fri, 10 Aug 2018 11:21:57 +0000 (13:21 +0200)]
l2tp: split l2tp_session_get()
l2tp_session_get() is used for two different purposes. If 'tunnel' is
NULL, the session is searched globally in the supplied network
namespace. Otherwise it is searched exclusively in the tunnel context.
Callers always know the context in which they need to search the
session. But some of them do provide both a namespace and a tunnel,
making the semantic of the call unclear.
This patch defines l2tp_tunnel_get_session() for lookups done in a
tunnel and restricts l2tp_session_get() to namespace searches.
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 11 Aug 2018 19:11:36 +0000 (12:11 -0700)]
Merge branch 'netsec-driver-improvements'
Ilias Apalodimas says:
====================
netsec driver improvements
This patchset introduces some improvements on socionext netsec driver.
- patch 1/2 avoids unneeded MMIO reads on the Rx path
- patch 2/2 adjusts the number of descriptors used
Changes since v1:
- Move dma_rmb() to protect descriptor accesses until the device
has updated the NETSEC_RX_PKT_OWN_FIELD bit
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ilias Apalodimas [Fri, 10 Aug 2018 06:12:38 +0000 (09:12 +0300)]
net: socionext: Use descriptor info instead of MMIO reads on Rx
MMIO reads for remaining packets in the queue occur (at least) twice per
invocation of netsec_process_rx(). We can use the packet descriptor to
identify if it's owned by the hardware and break out, avoiding the more
expensive MMIO read operations. This gives a ~2% increase in the pps of the
Rx path when tested with 64-byte packets.
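A hedged sketch of the resulting Rx loop (descriptor layout, helper and
constant names are assumptions for illustration):
while (done < budget) {
	struct netsec_de *de = dring->vaddr + DESC_SZ * idx;

	/* The ownership field in the descriptor tells us whether it still
	 * belongs to the hardware; if so, stop without reading the MMIO
	 * packet-count register. */
	if (netsec_desc_owned_by_hw(de))	/* hypothetical helper */
		break;

	/* Only read the rest of the descriptor after the ownership check. */
	dma_rmb();

	/* ... process the received packet ... */
	done++;
	idx = (idx + 1) % DESC_NUM;
}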
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
YueHaibing [Fri, 10 Aug 2018 06:08:37 +0000 (14:08 +0800)]
vxge: remove set but not used variable 'req_out', 'status' and 'ret'
Fixes gcc '-Wunused-but-set-variable' warning:
drivers/net/ethernet/neterion/vxge/vxge-config.c:1097:6: warning:
variable 'ret' set but not used [-Wunused-but-set-variable]
drivers/net/ethernet/neterion/vxge/vxge-config.c:2263:6: warning:
variable 'req_out' set but not used [-Wunused-but-set-variable]
drivers/net/ethernet/neterion/vxge/vxge-config.c:2262:22: warning:
variable 'status' set but not used [-Wunused-but-set-variable]
drivers/net/ethernet/neterion/vxge/vxge-config.c:2360:22: warning:
variable 'status' set but not used [-Wunused-but-set-variable]
enum vxge_hw_status status = VXGE_HW_OK;
Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
====================
virtio_net: Expand affinity to arbitrary numbers of cpu and vq
Virtio-net tries to pin each virtual queue rx and tx interrupt to a cpu if
there are as many queues as cpus.
Expand this heuristic to configure a reasonable affinity setting also
when the number of cpus != the number of virtual queues.
Patch 1 allows vqs to take an affinity mask with more than 1 cpu.
Patch 2 generalizes the algorithm in virtnet_set_affinity beyond
the case where #cpus == #vqs.
v2 changes:
Renamed "virtio_net: Make vp_set_vq_affinity() take a mask." to
"virtio: Make vp_set_vq_affinity() take a mask."
Caleb Raitto [Fri, 10 Aug 2018 00:28:40 +0000 (17:28 -0700)]
virtio_net: Stripe queue affinities across cores.
Always set the affinity hint, even if #cpu != #vq.
Handle the case where #cpu > #vq (including when #cpu % #vq != 0) and
when #vq > #cpu (including when #vq % #cpu != 0).
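One way to stripe the CPUs across the queues is sketched below (this is not
the driver's exact algorithm; the helper is invented for illustration):
static void stripe_affinity(unsigned int num_vqs, unsigned int num_cpus,
			    struct cpumask **masks)
{
	unsigned int i, cpu = 0;

	for (i = 0; i < num_vqs; i++) {
		/* Each vq gets at least one CPU; remainders are spread over
		 * the first vqs, and with #vq > #cpu the CPU list wraps. */
		unsigned int take = max(1U, num_cpus / num_vqs +
					    (i < num_cpus % num_vqs ? 1 : 0));

		while (take--) {
			cpumask_set_cpu(cpu % num_cpus, masks[i]);
			cpu++;
		}
	}
}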
Signed-off-by: Caleb Raitto <caraitto@google.com> Signed-off-by: Willem de Bruijn <willemb@google.com> Acked-by: Jon Olson <jonolson@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Caleb Raitto [Fri, 10 Aug 2018 01:18:28 +0000 (18:18 -0700)]
virtio: Make vp_set_vq_affinity() take a mask.
Make vp_set_vq_affinity() take a cpumask instead of taking a single CPU.
If there are fewer queues than cores, queue affinity should be able to
map to multiple cores.
Link: https://patchwork.ozlabs.org/patch/948149/ Suggested-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Caleb Raitto <caraitto@google.com> Acked-by: Gonglei <arei.gonglei@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
====================
tcp: new mechanism to ACK immediately
This patch is a follow-up feature improvement to the recent fixes on
the performance issues in ECN (delayed) ACKs. Many of the fixes use
tcp_enter_quickack_mode routine to force immediate ACKs. However, the
routine also resets the tracking of interactive sessions. This is not ideal
because these immediate ACKs are required by protocol specifics
unrelated to the interactive nature of the application.
This patch set introduces a new flag to send a one-time immediate ACK
without changing the status of interactive session tracking. With this
patch set the immediate ACKs are generated upon these protocol states:
1) When a hole is repaired
2) When CE status changes between subsequent data packets received
3) When a data packet carries CWR flag
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Thu, 9 Aug 2018 16:38:12 +0000 (09:38 -0700)]
tcp: avoid resetting ACK timer upon receiving packet with ECN CWR flag
Previously, commit 56534410cf3c ("tcp: ack immediately when a cwr
packet arrives") called tcp_enter_quickack_mode to force sending
two immediate ACKs upon receiving a packet w/ CWR flag. The side
effect is that it also resets the delayed ACK timer and interactive
session tracking. This patch removes that side effect by using the
new ACK_NOW flag to force an immediate ACK.
Fixes: 56534410cf3c ("tcp: ack immediately when a cwr packet arrives") Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Thu, 9 Aug 2018 16:38:11 +0000 (09:38 -0700)]
tcp: always ACK immediately on hole repairs
RFC 5681 sec 4.2:
To provide feedback to senders recovering from losses, the receiver
SHOULD send an immediate ACK when it receives a data segment that
fills in all or part of a gap in the sequence space.
When a gap is partially filled, __tcp_ack_snd_check already checks
the out-of-order queue and correctly sends an immediate ACK. However,
when a gap is fully filled, the previous implementation only reset
pingpong mode, which does not guarantee an immediate ACK because the
quick ACK counter may be zero. This patch addresses this issue by
marking the one-time immediate ACK flag instead.
Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Wei Wang <weiwan@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Thu, 9 Aug 2018 16:38:10 +0000 (09:38 -0700)]
tcp: avoid resetting ACK timer in DCTCP
The recent fix of acking immediately in DCTCP on CE status change
has an undesirable side-effect: it also resets TCP ack timer and
disables pingpong mode (interactive session). But the CE status
change has nothing to do with them. This patch addresses that by
using the new one-time immediate ACK flag instead of calling
tcp_enter_quickack_mode().
Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Wei Wang <weiwan@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Thu, 9 Aug 2018 16:38:09 +0000 (09:38 -0700)]
tcp: mandate a one-time immediate ACK
Add a new flag to indicate a one-time immediate ACK. This flag is
occasionally set under specific TCP protocol states in addition to
the more common quickack mechanism for interactive applications.
In several cases in the TCP code we want to force an immediate ACK
but do not want to call tcp_enter_quickack_mode() because we do
not want to forget the icsk_ack.pingpong or icsk_ack.ato state.
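Roughly, the new mechanism amounts to the sketch below (the flag name is an
assumption for illustration; the ACK-sending path would treat it as a forced
immediate ACK and clear it once the ACK goes out):
/* Request a one-time immediate ACK without touching pingpong or ato state. */
static inline void tcp_mark_ack_now(struct sock *sk)
{
	inet_csk(sk)->icsk_ack.pending |= ICSK_ACK_NOW;	/* flag name assumed */
}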
Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Wei Wang <weiwan@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
In preparation to enabling -Wimplicit-fallthrough, mark switch cases
where we are expecting to fall through.
Notice that in this particular case, I placed the "fall through"
annotation at the bottom of the case, which is what GCC is expecting
to find.
Addresses-Coverity-ID: 115075 ("Missing break in switch") Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: David S. Miller <davem@davemloft.net>
In preparation to enabling -Wimplicit-fallthrough, mark switch cases
where we are expecting to fall through.
Notice that in this particular case, I placed the "fall through"
annotation at the bottom of the case, which is what GCC is expecting
to find.
Addresses-Coverity-ID: 1369529 ("Missing break in switch") Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: David S. Miller <davem@davemloft.net>
In preparation to enabling -Wimplicit-fallthrough, mark switch cases
where we are expecting to fall through.
Notice that in this particular case, I replaced the code comment at the
top of the switch statement with a proper "fall through" annotation for
each case, which is what GCC is expecting to find.
Addresses-Coverity-ID: 1056542 ("Missing break in switch")
Addresses-Coverity-ID: 1339579 ("Missing break in switch")
Addresses-Coverity-ID: 1369526 ("Missing break in switch") Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Colin Ian King [Thu, 9 Aug 2018 11:00:49 +0000 (12:00 +0100)]
rxrpc: remove redundant static int 'zero'
The static int 'zero' is defined but is never used hence it is
redundant and can be removed. The use of this variable was removed
with commit df9ebef74965 ("rxrpc: Fix call timeouts").
Cleans up clang warning:
warning: 'zero' defined but not used [-Wunused-const-variable=]
Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vasundhara Volam [Fri, 10 Aug 2018 22:24:43 +0000 (18:24 -0400)]
bnxt_en: Fix strcpy() warnings in bnxt_ethtool.c
This patch fixes following smatch warnings:
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c:2826 bnxt_fill_coredump_seg_hdr() error: strcpy() '"sEgM"' too large for 'seg_hdr->signature' (5 vs 4)
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c:2858 bnxt_fill_coredump_record() error: strcpy() '"cOrE"' too large for 'record->signature' (5 vs 4)
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c:2879 bnxt_fill_coredump_record() error: strcpy() 'utsname()->sysname' too large for 'record->os_name' (65 vs 32)
Fixes: c9e9192abe61 ("bnxt_en: Add support for ethtool get dump.") Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Vasundhara Volam <vasundhara-v.volam@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Raghu Vatsavayi [Thu, 9 Aug 2018 20:54:12 +0000 (13:54 -0700)]
liquidio: copperhead LED identification
Add LED identification support for liquidio TP copperhead cards.
Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@cavium.com> Acked-by: Derek Chickles <derek.chickles@cavium.com> Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Fixes: 3264b410c70d ("qed/qede: Multi CoS support.") Signed-off-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
YueHaibing [Fri, 10 Aug 2018 02:37:30 +0000 (10:37 +0800)]
mlxsw: core: remove unnecessary function mlxsw_core_driver_put
The function mlxsw_core_driver_put only traverses mlxsw_core_driver_list
to find the matching mlxsw_driver, but never uses it.
So it can be removed safely.
Signed-off-by: YueHaibing <yuehaibing@huawei.com> Reviewed-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jisheng Zhang [Fri, 10 Aug 2018 03:36:27 +0000 (11:36 +0800)]
net: mvneta: fix mvneta_config_rss on armada 3700
The mvneta Ethernet driver is used on a few different Marvell SoCs.
Some SoCs have per-CPU interrupts for Ethernet events; the driver uses
a per-CPU napi structure for this case. Some SoCs, such as Armada 3700,
have a single interrupt for Ethernet events; the driver uses a global
napi structure for this case.
Currently mvneta_config_rss() always operates on the per-CPU napi structures.
Fix it by operating on the global napi in the "single interrupt" case, and
on the per-CPU napi structures in the remaining cases.
Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com> Fixes: ebd369332715 ("net: mvneta: Add network support for Armada 3700 SoC") Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Fri, 10 Aug 2018 20:40:37 +0000 (22:40 +0200)]
r8169: don't configure max jumbo frame size per chip version
We don't have to configure the max jumbo frame size per chip
(sub-)version. It can be easily determined based on the chip family.
And new members of the RTL8168 family (if there are any) should be
automatically covered.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Fri, 10 Aug 2018 20:39:29 +0000 (22:39 +0200)]
r8169: don't configure csum function per chip version
We don't have to configure the csum function per chip (sub-)version.
The distinction is simple, versions RTL8102e and from RTL8168c onwards
support csum_v2.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Heiner Kallweit [Fri, 10 Aug 2018 19:27:55 +0000 (21:27 +0200)]
r8169: remove version info
The version number hasn't changed for ages and in general I doubt it
provides any benefit. The message in rtl_init_one() may even be
misleading because it's printed also if something fails in probe.
Therefore let's remove the version information.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
Here's one more (most likely last) bluetooth-next pull request for the
4.19 kernel.
- Added support for MediaTek serial Bluetooth devices
- Initial skeleton for controller-side address resolution support
- Fix BT_HCIUART_RTL related Kconfig dependencies
- A few other minor fixes/cleanups
Please let me know if there are any issues pulling. Thanks.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
====================
Netfilter updates for net-next
The following batch contains netfilter updates for your net-next tree:
1) Expose NFT_OSF_MAXGENRELEN maximum OS name length from the new OS
passive fingerprint matching extension, from Fernando Fernandez.
2) Add an extension to support fine-grained conntrack timeout policies
from nf_tables. As preparatory work, this patchset moves
nf_ct_untimeout() to nf_conntrack_timeout and it also decouples the
timeout policy from the ctnl_timeout object, most work done by
Harsha Sharma.
3) Enable connection tracking when conntrack helper is in place.
4) Missing enumeration in uapi header when splitting original xt_osf
to nfnetlink_osf, also from Fernando.
5) Fix a sparse warning due to incorrect typing in the nf_osf_find(),
from Wei Yongjun.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ganesh Goudar [Fri, 10 Aug 2018 09:17:01 +0000 (14:47 +0530)]
cxgb4: add support to display DCB info
Display Data Center Bridging information in debugfs.
Signed-off-by: Casey Leedom <leedom@chelsio.com> Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Colin Ian King [Fri, 10 Aug 2018 07:53:28 +0000 (08:53 +0100)]
mlxsw: remove unused arrays mlxsw_i2c_driver_name and mlxsw_pci_driver_name
Arrays mlxsw_i2c_driver_name and mlxsw_pci_driver_name are defined
but never used hence they are redundant and can be removed.
Cleans up clang warnings:
warning: 'mlxsw_i2c_driver_name' defined but not used
warning: 'mlxsw_pci_driver_name' defined but not used
Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: Provide stub for __netif_set_xps_queue if there is no CONFIG_XPS
Building virtio_net driver without CONFIG_XPS fails with:
drivers/net/virtio_net.c: In function ‘virtnet_set_affinity’:
drivers/net/virtio_net.c:1910:3: error: implicit declaration of function ‘__netif_set_xps_queue’ [-Werror=implicit-function-declaration]
__netif_set_xps_queue(vi->dev, mask, i, false);
^ Fixes: c6963d260158 ("net: allow to call netif_reset_xps_queues() under cpus_read_lock") Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
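A stub along these lines fixes the build (hedged sketch in netdevice.h style;
the signature is assumed to mirror the CONFIG_XPS version):
#ifndef CONFIG_XPS
static inline int __netif_set_xps_queue(struct net_device *dev,
					const unsigned long *mask,
					u16 index, bool is_rxqs_map)
{
	return 0;	/* XPS disabled: nothing to configure */
}
#endif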
Ankit Navik [Tue, 7 Aug 2018 07:46:35 +0000 (13:16 +0530)]
Bluetooth: Add definitions for LE set address resolution
Add the definitions for LE address resolution enable HCI commands.
When the LE address resolution enable gets changed via HCI commands,
make sure that the flag gets updated.
Andrei Vagin [Thu, 9 Aug 2018 03:07:35 +0000 (20:07 -0700)]
net: allow to call netif_reset_xps_queues() under cpus_read_lock
The definition of static_key_slow_inc() has cpus_read_lock in place. In the
virtio_net driver, XPS queues are initialized after setting the queue:cpu
affinity in virtnet_set_affinity(), which is already protected within
cpus_read_lock. Lockdep prints a warning when we try to acquire
cpus_read_lock when it is already held.
This patch adds the ability to call __netif_set_xps_queue() under
cpus_read_lock(). Acked-by: Jason Wang <jasowang@redhat.com>
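A hedged sketch of the resulting split (signatures assumed): the public helper
takes cpus_read_lock() itself, while callers that already hold it, such as
virtnet_set_affinity(), call __netif_set_xps_queue() directly.
int netif_set_xps_queue(struct net_device *dev, const struct cpumask *mask,
			u16 index)
{
	int ret;

	cpus_read_lock();
	ret = __netif_set_xps_queue(dev, cpumask_bits(mask), index, false);
	cpus_read_unlock();

	return ret;
}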
============================================
WARNING: possible recursive locking detected
4.18.0-rc3-next-20180703+ #1 Not tainted
--------------------------------------------
swapper/0/1 is trying to acquire lock: 00000000cf973d46 (cpu_hotplug_lock.rw_sem){++++}, at: static_key_slow_inc+0xe/0x20
but task is already holding lock: 00000000cf973d46 (cpu_hotplug_lock.rw_sem){++++}, at: init_vqs+0x513/0x5a0
other info that might help us debug this:
Possible unsafe locking scenario:
3 locks held by swapper/0/1:
#0: 00000000244bc7da (&dev->mutex){....}, at: __driver_attach+0x5a/0x110
#1: 00000000cf973d46 (cpu_hotplug_lock.rw_sem){++++}, at: init_vqs+0x513/0x5a0
#2: 000000005cd8463f (xps_map_mutex){+.+.}, at: __netif_set_xps_queue+0x8d/0xc60
v2: move cpus_read_lock() out of __netif_set_xps_queue()
Cc: "Nambiar, Amritha" <amritha.nambiar@intel.com> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: Jason Wang <jasowang@redhat.com> Fixes: 6b82a0c3004e ("net-sysfs: Add interface for Rx queue(s) map per Tx queue") Signed-off-by: Andrei Vagin <avagin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Walleij [Wed, 8 Aug 2018 12:38:55 +0000 (14:38 +0200)]
net: dsa: rtl8366rb: Support port 4 (WAN)
The totally undocumented IO mode needs to be set to enumerator
0 to enable port 4 (also known as WAN in most configurations)
for ordinary traffic. The 3 bits in the register come up as
010 after reset, but need to be set to 000.
The Realtek source code contains a name for these bits, but
no explanation of what the 8 different IO modes may be.
Set it to zero for the time being and leave a comment so
people know what is going on if they run into trouble. This
"mode zero" works fine with the D-Link DIR-685 with
RTL8366RB.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
YueHaibing [Wed, 8 Aug 2018 11:59:32 +0000 (19:59 +0800)]
decnet: fix using plain integer as NULL warning
Fixes the following sparse warning:
net/decnet/dn_route.c:407:30: warning: Using plain integer as NULL pointer
net/decnet/dn_route.c:1923:22: warning: Using plain integer as NULL pointer
Signed-off-by: YueHaibing <yuehaibing@huawei.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Maria Pasechnik [Wed, 8 Aug 2018 08:46:30 +0000 (11:46 +0300)]
net: ipv6_gre: Fix GRO to work on IPv6 over GRE tap
IPv6 GRO over GRE tap is not working when GRO is not set
on the native interface.
gro_list_prepare function updates the same_flow variable
of existing sessions to 1 if their mac headers match the one
of the incoming packet.
same_flow is used to filter out non-matching sessions and keep
potential ones for aggregation.
The number of bytes to compare should be the number of bytes
in the mac headers. In gro_list_prepare this number is set to
be skb->dev->hard_header_len. For GRE interfaces this hard_header_len
should remain as it is set in the initialization process (when the GRE
device is created); it should not be overridden. But currently it is being
overridden by the value that is actually supposed to represent the
needed_headroom. Therefore, the number of bytes compared in order to decide
whether the mac headers are the same is greater than the length of the headers.
As it's documented in netdevice.h, hard_header_len is the maximum
hardware header length, and needed_headroom is the extra headroom
the hardware may need.
hard_header_len basically covers all the bytes received on the physical
medium up to the layer 3 header of the packet received by the interface.
For example, if the interface is a GRE tap then the needed_headroom
should be the total length of the following headers:
the IP header of the physical interface, the GRE header, and the mac header of the GRE device.
It is often used to calculate the MTU of the created interface.
This patch removes the override of the hard_header_len, and
assigns the calculated value to needed_headroom.
This way, the comparison in gro_list_prepare is really of
the mac headers, and if the packets have the same mac headers
the same_flow will be set to 1.
Performance testing: 45% higher bandwidth.
Measuring bandwidth of single-stream IPv4 TCP traffic over IPv6
GRE tap while GRO is not set on the native interface.
NIC: ConnectX-4LX
Before (GRO not working) : 7.2 Gbits/sec
After (GRO working): 10.5 Gbits/sec
Signed-off-by: Maria Pasechnik <mariap@mellanox.com> Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
The main motive of this patch is to put the driver's
tc offload infrastructure in place.
With these changes tc can offload various supported flow
profiles (4 tuples, src-ip, dst-ip, l4 port) for the drop
action. The dropped-flows statistic is a global counter across
all the offloaded flows with the drop action and is reported
in ethtool statistics as the common "gft_filter_drop" counter.
Examples -
tc qdisc add dev p4p1 ingress
tc filter add dev p4p1 protocol ipv4 parent ffff: flower \
skip_sw ip_proto tcp dst_ip 192.168.40.200 action drop
tc filter add dev p4p1 protocol ipv4 parent ffff: flower \
skip_sw ip_proto udp src_ip 192.168.40.100 action drop
tc filter add dev p4p1 protocol ipv4 parent ffff: flower \
skip_sw ip_proto tcp src_ip 192.168.40.100 dst_ip 192.168.40.200 \
src_port 453 dst_port 876 action drop
tc filter add dev p4p1 protocol ipv4 parent ffff: flower \
skip_sw ip_proto tcp dst_port 98 action drop
Signed-off-by: Manish Chopra <manish.chopra@cavium.com> Signed-off-by: Ariel Elior <ariel.elior@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Manish Chopra <manish.chopra@cavium.com> Signed-off-by: Ariel Elior <ariel.elior@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Manish Chopra [Thu, 9 Aug 2018 18:13:49 +0000 (11:13 -0700)]
qed/qede: Multi CoS support.
This patch adds support for tc mqprio offload.
Using this, different traffic classes on the adapter
can be utilized based on the configured priority-to-tc map.
This will cause SKBs with priorities 0,1,2,3 to be transmitted
over the tc 0,1,2,3 hardware queues respectively.
Signed-off-by: Manish Chopra <manish.chopra@cavium.com> Signed-off-by: Ariel Elior <ariel.elior@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
one more set of patches for net-next. This is all sorts of cleanups and
more refactoring on the way to using netdev_priv. Please apply.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Return statements in functions returning bool should use true or false
instead of an integer value.
This issue was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Wiedmann [Thu, 9 Aug 2018 12:48:03 +0000 (14:48 +0200)]
s390/qeth: don't restrict qeth_card to DMA memory
Allocating the main qeth_card struct with GFP_DMA blocks us from moving
it into netdev_priv(). But the only reason why we need DMA memory is the
ccw1 structs embedded into each ccw channel. So extract those into
separate allocations, like we already do for the cmd buffers.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Julian Wiedmann [Thu, 9 Aug 2018 12:48:01 +0000 (14:48 +0200)]
s390/qeth: do basic setup for data channel
The data channel currently doesn't need a setup operation, because we
don't use pre-allocated cmd buffers for its IO. But subsequent changes
will introduce further setup that also applies to the data channel.
This refactors things a bit, so that the new stuff can then be
automatically applied to all channels.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 9 Aug 2018 18:16:29 +0000 (11:16 -0700)]
Merge branch 'Add-support-for-XGMAC2-in-stmmac'
Jose Abreu says:
====================
Add support for XGMAC2 in stmmac
This series adds support for 10Gigabit IP in stmmac. The IP is called XGMAC2
and has many similarities with GMAC4. Due to this, it's relatively easy to
incorporate this new IP into the stmmac driver by adding a new block and
filling in the necessary callbacks.
The functionality added by this series is still reduced, but it's only a
starting point which will later be expanded.
I split the patches by functionality and to ease the review. Only
patch 8/9 really enables the XGMAC2 block by adding a new compatible string.
Version 4 addresses review comments of Florian Fainelli and Rob Herring.
NOTE: Although the IP supports 10G, for now it was only possible to test it
at 1G speed due to 10G PHY HW shipping problems. Here follows iperf3
results at 1G:
Jose Abreu [Wed, 8 Aug 2018 08:04:37 +0000 (09:04 +0100)]
dt-bindings: net: stmmac: Add the bindings documentation for XGMAC2.
Adds the documentation for XGMAC2 DT bindings.
Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Cc: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Cc: devicetree@vger.kernel.org Cc: Rob Herring <robh+dt@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Jose Abreu [Wed, 8 Aug 2018 08:04:36 +0000 (09:04 +0100)]
net: stmmac: Add the bindings parsing for XGMAC2
Add the bindings parsing for XGMAC2 IP block.
Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jose Abreu [Wed, 8 Aug 2018 08:04:35 +0000 (09:04 +0100)]
net: stmmac: Integrate XGMAC into main driver flow
Now that we have all the XGMAC related callbacks, let's start integrating
this IP block into the main driver.
Also, we corrected the initialization flow to only start the DMA after
setting the descriptor length.
Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Cc: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
Jose Abreu [Wed, 8 Aug 2018 08:04:34 +0000 (09:04 +0100)]
net: stmmac: Add PTP support for XGMAC2
XGMAC2 uses the same timestamping engine as GMAC4. Let's use the same
callbacks.
Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jose Abreu [Wed, 8 Aug 2018 08:04:33 +0000 (09:04 +0100)]
net: stmmac: Add MDIO related functions for XGMAC2
Add the MDIO related functionality for the new IP block XGMAC2.
Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Cc: Andrew Lunn <andrew@lunn.ch> Cc: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jose Abreu [Wed, 8 Aug 2018 08:04:32 +0000 (09:04 +0100)]
net: stmmac: Add descriptor related callbacks for XGMAC2
Add the descriptor related callbacks for the new IP block XGMAC2.
Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jose Abreu [Wed, 8 Aug 2018 08:04:31 +0000 (09:04 +0100)]
net: stmmac: Add DMA related callbacks for XGMAC2
Add the DMA related callbacks for the new IP block XGMAC2.
Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Cc: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jose Abreu [Wed, 8 Aug 2018 08:04:30 +0000 (09:04 +0100)]
net: stmmac: Add MAC related callbacks for XGMAC2
Add the MAC related callbacks for the new IP block XGMAC2.
Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jose Abreu [Wed, 8 Aug 2018 08:04:29 +0000 (09:04 +0100)]
net: stmmac: Add XGMAC 2.10 HWIF entry
Add a new entry to the HWIF table for XGMAC 2.10. For now we fill it with
empty callbacks, which will be added in subsequent patches.
Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Joao Pinto <jpinto@synopsys.com> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>