net-zerocopy: use vm_insert_pages() for tcp rcv zerocopy
author	Arjun Roy <arjunroy@google.com>
	Mon, 8 Jun 2020 01:54:41 +0000 (18:54 -0700)
committer	David S. Miller <davem@davemloft.net>
	Tue, 9 Jun 2020 02:08:17 +0000 (19:08 -0700)
commit	f419578cc6a3af260340666574477ad02a38e35e
tree	9f841e8d82490893edf6d827a4a9bbc53f647d76
parent	f0b85055f4e5c0237d99e45812680a9c3399af17
net-zerocopy: use vm_insert_pages() for tcp rcv zerocopy

Use vm_insert_pages() for tcp receive zerocopy.  Spin lock cycles (as
reported by perf) drop from a couple of percentage points to a fraction of
a percent.  This results in a roughly 6% increase in efficiency, measured
roughly as zerocopy receive count divided by CPU utilization.

The intention of this patchset is to reduce atomic ops for tcp zerocopy
receives, which normally hit the same spinlock multiple times
consecutively.
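
A minimal sketch of the batching idea follows, for illustration only: the
helper name map_page_batch() and the surrounding batch handling are
assumptions, not the helper this patch actually adds to net/ipv4/tcp.c.
The point is that one vm_insert_pages() call maps a whole run of pages,
where the previous code called vm_insert_page() once per page.

#include <linux/errno.h>
#include <linux/mm.h>

/*
 * Illustrative helper (hypothetical name): map a batch of receive-queue
 * frag pages into the caller's VMA with a single vm_insert_pages() call,
 * rather than taking the page table lock once per page via
 * vm_insert_page().
 */
static int map_page_batch(struct vm_area_struct *vma, unsigned long *addr,
			  struct page **pages, unsigned long nr)
{
	/* On return, vm_insert_pages() leaves the count of pages it did
	 * not manage to insert in pages_remaining.
	 */
	unsigned long pages_remaining = nr;
	int err;

	err = vm_insert_pages(vma, *addr, pages, &pages_remaining);
	if (err)
		return err;
	if (pages_remaining)
		return -EFAULT;

	*addr += nr * PAGE_SIZE;
	return 0;
}

The single vm_insert_pages() call is where the spin lock savings come
from: the lock is taken once per batch instead of once per page, which
is what drives the drop in atomic ops described above.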

[akpm@linux-foundation.org: suppress gcc-7.2.0 warning]
Link: http://lkml.kernel.org/r/20200128025958.43490-3-arjunroy.kdev@gmail.com
Signed-off-by: Arjun Roy <arjunroy@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Cc: David Miller <davem@davemloft.net>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
net/ipv4/tcp.c