bpf: Remove bpf_lsm_file_mprotect from sleepable list.
Technically bpf programs can sleep while attached to bpf_lsm_file_mprotect,
but such programs need to access user memory, so they are in the might_fault()
category. That means they cannot be called from the file_mprotect LSM hook,
which takes the write lock on mm->mmap_lock.
Adjust the test accordingly.
Also add might_fault() to __bpf_prog_enter_sleepable() to catch such deadlocks early.
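A rough sketch of that second change, assuming __bpf_prog_enter_sleepable() only takes the tasks-trace RCU read lock at this point:

	void notrace __bpf_prog_enter_sleepable(void)
	{
		rcu_read_lock_trace();
		/* New: with CONFIG_DEBUG_ATOMIC_SLEEP/lockdep this flags a
		 * sleepable prog entered from a context that must not fault,
		 * e.g. with mm->mmap_lock held for write as in file_mprotect. */
		might_fault();
	}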
Weqaar Janjua [Fri, 28 Aug 2020 16:17:17 +0000 (00:17 +0800)]
samples/bpf: Fix to xdpsock to avoid recycling frames
The txpush program in the xdpsock sample application is supposed
to send out all packets in the umem in a round-robin fashion.
The problem is that it only cycled through the first BATCH_SIZE
worth of packets. Fixed this so that it cycles through all buffers
in the umem as intended.
Fixes: 4cb659ee6cb1 ("samples/bpf: convert xdpsock to use libbpf for AF_XDP access") Signed-off-by: Weqaar Janjua <weqaar.a.janjua@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Björn Töpel <bjorn.topel@intel.com> Link: https://lore.kernel.org/bpf/20200828161717.42705-1-weqaar.a.janjua@intel.com
Magnus Karlsson [Fri, 28 Aug 2020 12:51:05 +0000 (14:51 +0200)]
samples/bpf: Optimize l2fwd performance in xdpsock
Optimize the throughput performance of the l2fwd sub-app in the
xdpsock sample application by removing a duplicate syscall and
increasing the size of the fill ring.
The latter needs some further explanation. We recommend that you set
the fill ring size >= HW RX ring size + AF_XDP RX ring size. Make sure
you fill up the fill ring with buffers at regular intervals, and you
will with this setting avoid allocation failures in the driver. These
are usually quite expensive since drivers have not been written to
assume that allocation failures are common. For regular sockets,
kernel allocated memory is used that only runs out in OOM situations
that should be rare.
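A hedged libbpf sketch of that sizing advice; HW_RX_RING_SIZE stands in for the driver's RX ring size and is not a real constant:

	struct xsk_umem_config cfg = {
		/* fill ring >= HW RX ring + AF_XDP RX ring, per the guidance above */
		.fill_size = HW_RX_RING_SIZE + XSK_RING_CONS__DEFAULT_NUM_DESCS,
		.comp_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
		.frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE,
		.frame_headroom = XSK_UMEM__DEFAULT_FRAME_HEADROOM,
	};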
These two performance optimizations together lead to a 6%
improvement for the l2fwd app on my machine.
This sample code illustrates the packet forwarding between multiple
AF_XDP sockets in multi-threading environment. All the threads and
sockets are sharing a common buffer pool, with each socket having
its own private buffer cache. The sockets are created with the
xsk_socket__create_shared() function, which allows multiple AF_XDP
sockets to share the same UMEM object.
Example 1: Single thread handling two sockets. Packets received
from socket A (on top of interface IFA, queue QA) are forwarded
to socket B (on top of interface IFB, queue QB) and vice-versa.
The thread is affinitized to CPU core C:
./xsk_fwd -i IFA -q QA -i IFB -q QB -c C
Example 2: Two threads, each handling two sockets. Packets from
socket A are sent to socket B (by thread X), packets
from socket B are sent to socket A (by thread X); packets from
socket C are sent to socket D (by thread Y), packets from socket
D are sent to socket C (by thread Y). The two threads are bound
to CPU cores CX and CY:
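Following the pattern of Example 1, a plausible invocation (not verified against the sample's option parser) would be:
./xsk_fwd -i IFA -q QA -i IFB -q QB -i IFC -q QC -i IFD -q QD -c CX -c CY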
Magnus Karlsson [Fri, 28 Aug 2020 08:26:27 +0000 (10:26 +0200)]
libbpf: Support shared umems between queues and devices
Add support for shared umems between hardware queues and devices to
the AF_XDP part of libbpf. This so that zero-copy can be achieved in
applications that want to send and receive packets between HW queues
on one device or between different devices/netdevs.
In order to create sockets that share a umem between hardware queues
and devices, a new function has been added called
xsk_socket__create_shared(). It takes the same arguments as
xsk_socket__create() plus references to a fill ring and a completion
ring. So for every socket that shares a umem, you need one more
set of fill and completion rings. This is in order to maintain the
single-producer single-consumer semantics of the rings.
You can create all the sockets via the new xsk_socket__create_shared()
call, or create the first one with xsk_socket__create() and the rest
with xsk_socket__create_shared(). Both methods work.
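A minimal sketch of the second method (umem, cfg and the first socket's rings are assumed to exist already):

	struct xsk_ring_cons rx2, cq2;
	struct xsk_ring_prod tx2, fq2;
	struct xsk_socket *xsk2;
	int err;

	/* Extra socket on queue 1 of the same umem: it brings its own
	 * fill/completion ring pair to keep the rings single-producer
	 * single-consumer. */
	err = xsk_socket__create_shared(&xsk2, "eth0", 1, umem,
					&rx2, &tx2, &fq2, &cq2, &cfg);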
Magnus Karlsson [Fri, 28 Aug 2020 08:26:26 +0000 (10:26 +0200)]
xsk: Add shared umem support between devices
Add support to share a umem between different devices. This mode
can be invoked with the XDP_SHARED_UMEM bind flag. Previously,
sharing was only supported within the same device. Note that when
sharing a umem between devices, just as in the case of sharing a
umem between queue ids, you need to create a fill ring and a
completion ring and tie them to the socket (with two setsockopts,
one for each ring) before you do the bind with the
XDP_SHARED_UMEM flag. This so that the single-producer
single-consumer semantics of the rings can be upheld.
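A raw-socket sketch of that sequence (fd names and ring sizes are illustrative only):

	int entries = 2048;
	struct sockaddr_xdp sxdp = {
		.sxdp_family = AF_XDP,
		.sxdp_ifindex = ifindex_b,           /* may now be a different netdev */
		.sxdp_queue_id = queue_b,
		.sxdp_flags = XDP_SHARED_UMEM,
		.sxdp_shared_umem_fd = first_xsk_fd, /* socket that registered the umem */
	};

	/* Own fill/completion rings first, then bind against the shared umem. */
	setsockopt(new_xsk_fd, SOL_XDP, XDP_UMEM_FILL_RING, &entries, sizeof(entries));
	setsockopt(new_xsk_fd, SOL_XDP, XDP_UMEM_COMPLETION_RING, &entries, sizeof(entries));
	bind(new_xsk_fd, (struct sockaddr *)&sxdp, sizeof(sxdp));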
Magnus Karlsson [Fri, 28 Aug 2020 08:26:25 +0000 (10:26 +0200)]
xsk: Add shared umem support between queue ids
Add support to share a umem between queue ids on the same
device. This mode can be invoked with the XDP_SHARED_UMEM bind
flag. Previously, sharing was only supported within the same
queue id and device, and you shared one set of fill and
completion rings. However, note that when sharing a umem between
queue ids, you need to create a fill ring and a completion ring
and tie them to the socket before you do the bind with the
XDP_SHARED_UMEM flag. This so that the single-producer
single-consumer semantics can be upheld.
Magnus Karlsson [Fri, 28 Aug 2020 08:26:24 +0000 (10:26 +0200)]
xsk: i40e: ice: ixgbe: mlx5: Test for dma_need_sync earlier for better performance
Test for dma_need_sync earlier to increase
performance. xsk_buff_dma_sync_for_cpu() takes an xdp_buff as
parameter and from that the xsk_buff_pool reference is dug out. Perf
shows that this dereference causes a lot of cache misses. But as the
buffer pool is now sent down to the driver at zero-copy initialization
time, we might as well use this pointer directly, instead of going via
the xsk_buff and we can do so already in xsk_buff_dma_sync_for_cpu()
instead of in xp_dma_sync_for_cpu. This gets rid of these cache
misses.
Throughput increases with 3% for the xdpsock l2fwd sample application
on my machine.
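A sketch (not verbatim kernel code) of what the reworked inline helper looks like once the driver passes the pool in directly:

	static inline void xsk_buff_dma_sync_for_cpu(struct xdp_buff *xdp,
						     struct xsk_buff_pool *pool)
	{
		struct xdp_buff_xsk *xskb = container_of(xdp, struct xdp_buff_xsk, xdp);

		/* Cheap test first: no pool dereference through the xdp_buff needed. */
		if (!pool->dma_need_sync)
			return;

		xp_dma_sync_for_cpu(xskb);
	}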
Magnus Karlsson [Fri, 28 Aug 2020 08:26:23 +0000 (10:26 +0200)]
xsk: Rearrange internal structs for better performance
Rearrange the xdp_sock, xdp_umem and xsk_buff_pool structures so
that they get smaller and align better to the cache lines. In the
previous commits of this patch set, these structs have been
reordered with the focus on functionality and simplicity, not
performance. This patch improves throughput performance by around
3%.
Magnus Karlsson [Fri, 28 Aug 2020 08:26:22 +0000 (10:26 +0200)]
xsk: Enable sharing of dma mappings
Enable the sharing of dma mappings by moving them out from the buffer
pool. Instead we put each dma mapped umem region in a list in the umem
structure. If dma has already been mapped for this umem and device, it
is not mapped again and the existing dma mappings are reused.
Magnus Karlsson [Fri, 28 Aug 2020 08:26:21 +0000 (10:26 +0200)]
xsk: Move addrs from buffer pool to umem
Replicate the addrs pointer in the buffer pool to the umem. This mapping
will be the same for all buffer pools sharing the same umem. In the
buffer pool we leave the addrs pointer for performance reasons.
Magnus Karlsson [Fri, 28 Aug 2020 08:26:20 +0000 (10:26 +0200)]
xsk: Move xsk_tx_list and its lock to buffer pool
Move the xsk_tx_list and the xsk_tx_list_lock from the umem to
the buffer pool. This so that we in a later commit can share the
umem between multiple HW queues. There is one xsk_tx_list per
device and queue id, so it should be located in the buffer pool.
Magnus Karlsson [Fri, 28 Aug 2020 08:26:19 +0000 (10:26 +0200)]
xsk: Move queue_id, dev and need_wakeup to buffer pool
Move queue_id, dev, and need_wakeup from the umem to the
buffer pool. This so that we in a later commit can share the umem
between multiple HW queues. There is one buffer pool per dev and
queue id, so these variables should belong to the buffer pool, not
the umem. Need_wakeup is also something that is set on a per napi
level, so there is usually one per device and queue id. So move
this to the buffer pool too.
Magnus Karlsson [Fri, 28 Aug 2020 08:26:18 +0000 (10:26 +0200)]
xsk: Move fill and completion rings to buffer pool
Move the fill and completion rings from the umem to the buffer
pool. This so that we in a later commit can share the umem
between multiple HW queue ids. In this case, we need one fill and
completion ring per queue id. As the buffer pool is per queue id
and napi id, this is a natural place for it and one umem
structure can be shared between these buffer pools.
Magnus Karlsson [Fri, 28 Aug 2020 08:26:17 +0000 (10:26 +0200)]
xsk: Create and free buffer pool independently from umem
Create and free the buffer pool independently from the umem. Move
these operations that are performed on the buffer pool from the
umem create and destroy functions to new create and destroy
functions just for the buffer pool. This so that in later commits
we can instantiate multiple buffer pools per umem when sharing a
umem between HW queues and/or devices. We also eradicate the
back pointer from the umem to the buffer pool as this will not
work when we introduce the possibility to have multiple buffer
pools per umem.
Rename the AF_XDP zero-copy driver interface functions to better
reflect what they do after the replacement of umems with buffer
pools in the previous commit. Mostly it is about replacing the
umem name in the function names with xsk_buff and also having
them take a buffer pool pointer instead of a umem. The
various ring functions have also been renamed in the process so
that they have the same naming convention as the internal
functions in xsk_queue.h. This so that it will be clearer what
they do and also for consistency.
Magnus Karlsson [Fri, 28 Aug 2020 08:26:15 +0000 (10:26 +0200)]
xsk: i40e: ice: ixgbe: mlx5: Pass buffer pool to driver instead of umem
Replace the explicit umem reference passed to the driver in AF_XDP
zero-copy mode with the buffer pool instead. This in preparation for
extending the functionality of the zero-copy mode so that umems can be
shared between queues on the same netdev and also between netdevs. In
this commit, only a umem reference has been added to the buffer pool
struct. But later commits will add other entities to it. These are
going to be entities that are different between different queue ids
and netdevs even though the umem is shared between them.
bpf: Fix build without BPF_SYSCALL, but with BPF_JIT.
When CONFIG_BPF_SYSCALL is not set, but CONFIG_BPF_JIT=y
the kernel build fails:
In file included from ../kernel/bpf/trampoline.c:11:
../kernel/bpf/trampoline.c: In function ‘bpf_trampoline_update’:
../kernel/bpf/trampoline.c:220:39: error: ‘call_rcu_tasks_trace’ undeclared
../kernel/bpf/trampoline.c: In function ‘__bpf_prog_enter_sleepable’:
../kernel/bpf/trampoline.c:411:2: error: implicit declaration of function ‘rcu_read_lock_trace’
../kernel/bpf/trampoline.c: In function ‘__bpf_prog_exit_sleepable’:
../kernel/bpf/trampoline.c:416:2: error: implicit declaration of function ‘rcu_read_unlock_trace’
This is due to:
obj-$(CONFIG_BPF_JIT) += trampoline.o
obj-$(CONFIG_BPF_JIT) += dispatcher.o
There are a number of functions that arch/x86/net/bpf_jit_comp.c is
using from these two files, but none of them will be used when
only cBPF is on (which is the case for BPF_SYSCALL=n BPF_JIT=y).
Add rcu_trace functions to rcupdate_trace.h. The JITed code won't execute them
and BPF trampoline logic won't be used without BPF_SYSCALL.
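The added stubs are roughly of this shape (assuming they sit behind the !CONFIG_TASKS_TRACE_RCU branch of rcupdate_trace.h):

	#else /* !CONFIG_TASKS_TRACE_RCU */
	/* Never reached with BPF_SYSCALL=n BPF_JIT=y; they only satisfy the linker. */
	static inline void rcu_read_lock_trace(void) { BUG(); }
	static inline void rcu_read_unlock_trace(void) { BUG(); }
	#endif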
Fixes: 4f9aef2a7f98 ("bpf: Introduce sleepable BPF programs") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Paul E. McKenney <paulmck@kernel.org> Link: https://lore.kernel.org/bpf/20200831155155.62754-1-alexei.starovoitov@gmail.com
Daniel Borkmann [Fri, 28 Aug 2020 19:20:33 +0000 (21:20 +0200)]
Merge branch 'bpf-sleepable'
Alexei Starovoitov says:
====================
v2->v3:
- switched to minimal allowlist approach. Essentially that means that syscall
entry, few btrfs allow_error_inject functions, should_fail_bio(), and two LSM
hooks: file_mprotect and bprm_committed_creds are the only hooks that allow
attaching of sleepable BPF programs. When a comprehensive analysis of LSM
hooks is done, this allowlist will be extended.
- added patch 1 that fixes prototypes of two mm functions to reliably work with
error injection. It's also necessary for resolve_btfids tool to recognize
these two funcs, but that's secondary.
v1->v2:
- split fmod_ret fix into separate patch
- added denylist
v1:
This patch set introduces the minimal viable support for sleepable bpf programs.
In this patch only fentry/fexit/fmod_ret and lsm progs can be sleepable.
Only array and pre-allocated hash and lru maps allowed.
Pass request to load program as sleepable via ".s" suffix in the section name.
If it happens in the future that all map types and helpers are allowed with
BPF_F_SLEEPABLE flag "fmod_ret/" and "lsm/" can be aliased to "fmod_ret.s/" and
"lsm.s/" to make all lsm and fmod_ret programs sleepable by default. The fentry
and fexit programs would always need to have sleepable vs non-sleepable
distinction, since not all fentry/fexit progs will be attached to sleepable
kernel functions.
Introduce sleepable BPF programs that can request such property for themselves
via BPF_F_SLEEPABLE flag at program load time. In such case they will be able
to use helpers like bpf_copy_from_user() that might sleep. At present only
fentry/fexit/fmod_ret and lsm programs can request to be sleepable and only
when they are attached to kernel functions that are known to allow sleeping.
The non-sleepable programs are relying on implicit rcu_read_lock() and
migrate_disable() to protect life time of programs, maps that they use and
per-cpu kernel structures used to pass info between bpf programs and the
kernel. The sleepable programs cannot be enclosed into rcu_read_lock().
migrate_disable() maps to preempt_disable() in non-RT kernels, so the progs
should not be enclosed in migrate_disable() as well. Therefore
rcu_read_lock_trace is used to protect the life time of sleepable progs.
There are many networking and tracing program types. In many cases the
'struct bpf_prog *' pointer itself is rcu protected within some other kernel
data structure and the kernel code is using rcu_dereference() to load that
program pointer and call BPF_PROG_RUN() on it. All these cases are not touched.
Instead sleepable bpf programs are allowed with bpf trampoline only. The
program pointers are hard-coded into generated assembly of bpf trampoline and
synchronize_rcu_tasks_trace() is used to protect the life time of the program.
The same trampoline can hold both sleepable and non-sleepable progs.
When rcu_read_lock_trace is held it means that some sleepable bpf program is
running from bpf trampoline. Those programs can use bpf arrays and preallocated
hash/lru maps. These map types are waiting on programs to complete via
synchronize_rcu_tasks_trace();
Updates to the trampoline now have to do synchronize_rcu_tasks_trace() and
synchronize_rcu_tasks() to wait for sleepable progs to finish and for
trampoline assembly to finish.
This is the first step of introducing sleepable progs. Eventually dynamically
allocated hash maps can be allowed and networking program types can become
sleepable too.
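A minimal sketch (not taken from the series) of what a sleepable program looks like: the ".s" section suffix requests BPF_F_SLEEPABLE at load time, which in turn permits might-fault helpers such as bpf_copy_from_user():

	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>
	#include <bpf/bpf_core_read.h>

	char _license[] SEC("license") = "GPL";

	SEC("lsm.s/bprm_committed_creds")
	int BPF_PROG(exec_audit, struct linux_binprm *bprm)
	{
		char arg0[64] = {};
		unsigned long arg_start = BPF_CORE_READ(bprm, mm, arg_start);

		/* arg_start points into user memory; copying from it may fault
		 * and sleep, which is only legal in a sleepable program. */
		bpf_copy_from_user(arg0, sizeof(arg0) - 1, (void *)arg_start);
		return 0;
	}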
mm/error_inject: Fix allow_error_inject function signatures.
'static' and 'static noinline' function attributes make no guarantees that
gcc/clang won't optimize them. The compiler may decide to inline 'static'
function and in such case ALLOW_ERROR_INJECT becomes meaningless. The compiler
could have inlined __add_to_page_cache_locked() in one callsite and didn't
inline in another. In such case injecting errors into it would cause
unpredictable behavior. It's worse with 'static noinline' which won't be
inlined, but it still can be optimized. Like the compiler may decide to remove
one argument or constant propagate the value depending on the callsite.
To avoid such issues make sure that these functions are global noinline.
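Illustrative shape of the pattern after the change (simplified, not the exact mm/ or block/ code): the injection point is globally visible and noinline, so every call site really calls it:

	noinline int should_fail_bio(struct bio *bio)
	{
		if (should_fail(&fail_make_request, bio->bi_iter.bi_size))
			return -EIO;
		return 0;
	}
	ALLOW_ERROR_INJECTION(should_fail_bio, ERRNO);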
Fixes: 8eddbe2c2210 ("mm/page_alloc.c: allow error injection") Fixes: 0c9c1cf02dab ("mm/filemap.c: enable error injection at add_to_page_cache()") Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Link: https://lore.kernel.org/bpf/20200827220114.69225-2-alexei.starovoitov@gmail.com
Martin KaFai Lau [Fri, 28 Aug 2020 01:18:19 +0000 (18:18 -0700)]
bpf: selftests: Add test for different inner map size
This patch tests that the inner map size can be different
for reuseport_sockarray but has to be the same for
arraymap. A new subtest "diff_size" is added for this.
The existing test is moved to a subtest "lookup_update".
Martin KaFai Lau [Fri, 28 Aug 2020 01:18:13 +0000 (18:18 -0700)]
bpf: Relax max_entries check for most of the inner map types
Most of the maps do not use max_entries during verification time.
Thus, those map_meta_equal() do not need to enforce max_entries
when it is inserted as an inner map during runtime. The max_entries
check is removed from the default implementation bpf_map_meta_equal().
The prog_array_map and xsk_map are exceptions. Their map_gen_lookup
uses max_entries to generate inline lookup code. Thus, they will
implement their own map_meta_equal() to enforce max_entries.
Since there are only two cases now, the max_entries check
is not refactored and stays in its own .c file.
Martin KaFai Lau [Fri, 28 Aug 2020 01:18:06 +0000 (18:18 -0700)]
bpf: Add map_meta_equal map ops
Some properties of the inner map are used at verification time.
When an inner map is inserted into an outer map at runtime,
bpf_map_meta_equal() is currently used to ensure those properties
of the inserted inner map stay the same as at verification
time.
In particular, the current bpf_map_meta_equal() checks max_entries which
turns out to be too restrictive for most of the maps which do not use
max_entries during the verification time. It limits the use case that
wants to replace a smaller inner map with a larger inner map. There are
some maps that do use max_entries during verification though. For example,
the map_gen_lookup in array_map_ops uses the max_entries to generate
the inline lookup code.
To accommodate differences between maps, the map_meta_equal is added
to bpf_map_ops. Each map-type can decide what to check when its
map is used as an inner map during runtime.
Also, some map types cannot be used as an inner map and they are
currently black listed in bpf_map_meta_alloc() in map_in_map.c.
It is not unusual that new map types may not be aware that such a
blacklist exists. This patch enforces an explicit opt-in
and only allows a map to be used as an inner map if it has
implemented the map_meta_equal ops. It is based on the
discussion in [1].
All maps that support being an inner map have their map_meta_equal pointing
to bpf_map_meta_equal in this patch. A later patch will
relax the max_entries check for most maps. bpf_types.h
counts 28 map types. This patch adds 23 ".map_meta_equal"
by using coccinelle, minus 5 for:
BPF_MAP_TYPE_PROG_ARRAY
BPF_MAP_TYPE_(PERCPU)_CGROUP_STORAGE
BPF_MAP_TYPE_STRUCT_OPS
BPF_MAP_TYPE_ARRAY_OF_MAPS
BPF_MAP_TYPE_HASH_OF_MAPS
The "if (inner_map->inner_map_meta)" check in bpf_map_meta_alloc()
is moved such that the same error is returned.
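Illustrative wiring (other callbacks elided): a map type opts in by pointing the new op either at the generic check or at its own stricter variant:

	const struct bpf_map_ops htab_map_ops = {
		/* ... existing callbacks ... */
		.map_meta_equal = bpf_map_meta_equal,	/* or a map-specific variant */
	};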
Yonghong Song [Fri, 28 Aug 2020 05:19:22 +0000 (22:19 -0700)]
bpf: Make bpf_link_info.iter similar to bpf_iter_link_info
bpf_link_info.iter is used by link_query to return bpf_iter_link_info
to user space. Fields may be different, e.g., map_fd vs. map_id, so
we cannot reuse the exact structure. But make them similar, e.g.,
struct bpf_link_info {
	/* common fields */
	union {
		struct { ... } raw_tracepoint;
		struct { ... } tracing;
		...
		struct {
			/* common fields for iter */
			union {
				struct {
					__u32 map_id;
				} map;
				/* other structs for other targets */
			};
		};
	};
};
so the structure is extensible the same way as bpf_iter_link_info.
Fixes: 1539010035b0 ("bpf: Implement link_query for bpf iterators") Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200828051922.758950-1-yhs@fb.com
tools, bpf/build: Cleanup feature files on make clean
The system for "Auto-detecting system features" located under
tools/build/ are (currently) used by perf, libbpf and bpftool. It can
contain stale feature detection files, which are not cleaned up by
libbpf and bpftool on make clean (side note: the perf tool does this correctly).
Fix this by making the clean targets of these users also remove the feature files.
Some details about the changes. The libbpf Makefile already had a
clean-config target (which seems to be copy-pasted from perf), but this
target was not "connected" (a make dependency) to clean target. Choose
not to rename target as someone might be using it. Did change the output
from "CLEAN config" to "CLEAN feature-detect", to make it more clear
what happens.
Andrii Nakryiko [Thu, 27 Aug 2020 04:11:09 +0000 (21:11 -0700)]
libbpf: Fix compilation warnings for 64-bit printf args
Fix compilation warnings due to __u64 defined differently as `unsigned long`
or `unsigned long long` on different architectures (e.g., ppc64le differs from
x86-64). Also cast one argument to size_t to fix printf warning of similar
nature.
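The usual fix pattern, shown here only as an illustration:

	/* __u64 may be 'unsigned long' or 'unsigned long long' depending on the
	 * arch, so cast explicitly when printing with %llu/%llx. */
	printf("map size: %llu\n", (unsigned long long)map_sz);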
Yonghong Song [Tue, 25 Aug 2020 06:46:08 +0000 (23:46 -0700)]
bpf: Fix a verifier failure with xor
bpf selftest test_progs/test_sk_assign failed with llvm 11 and llvm 12.
Compared to llvm 10, llvm 11 and 12 generate an xor instruction which
is not handled properly by the verifier. The following illustrates the
problem:
At insn 132 above, w5 = 0, but after w5 ^= 1, we give a really conservative
value for w5. At insn 45, in reality the condition should always be false.
But due to the conservative value for w3, the verifier evaluates that it could be
true, and this later leads to a verifier failure complaining about a potential
packet out-of-bounds access.
This patch implements proper XOR support in the verifier.
In the above example, we have:
132: R5=invP0
132: (a4) w5 ^= 1
133: R5_w=invP1
...
37: (bc) w8 = w5
...
41: (bc) w3 = w8
42: R3_w=invP1
...
45: (56) if w3 != 0x0 goto pc+1
47: R3_w=invP1
...
processed 353 insns ...
and the verifier can verify the program successfully.
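The core idea, sketched rather than quoted from the verifier:

	/* In the BPF_XOR handling: if both operands are known constants, the
	 * destination becomes a known constant instead of an unbounded scalar. */
	if (src_known && dst_known) {
		__mark_reg_known(dst_reg,
				 dst_reg->var_off.value ^ src_reg->var_off.value);
		return;
	}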
Alex Gartrell [Wed, 26 Aug 2020 07:55:49 +0000 (00:55 -0700)]
libbpf: Fix unintentional success return code in bpf_object__load
There are code paths where EINVAL is returned directly without setting
errno. In that case, errno could be 0, which would mask the
failure. For example, if a careless programmer set log_level to 10000
out of laziness, they would have to spend a long time trying to figure
out why.
====================
This patch series adds changes in verifier to make decisions such as granting
of read / write access or enforcement of return code status based on
the program type of the target program while using dynamic program
extension (of type BPF_PROG_TYPE_EXT).
The BPF_PROG_TYPE_EXT type can be used to extend types such as XDP, SKB
and others. Since the BPF_PROG_TYPE_EXT program type by itself is just a
placeholder for those, we need this extended check for those extended
programs to actually work with proper access, while using this option.
Patch #1 includes changes in the verifier.
Patch #2 adds selftests to verify write access on a packet for a valid
extension program type
Patch #3 adds selftests to verify proper check for the return code
Patch #4 adds selftests to ensure access permissions and restrictions
for some map types such as sockmap.
Changelogs:
v2 -> v3:
* more comprehensive resolution of the program type in the verifier
based on the target program (and not just for the packet access)
* selftests for checking return code and map access
* Also moved this patch to 'bpf-next' from 'bpf' tree
v1 -> v2:
* extraction of the logic to resolve prog type into a separate method
* selftests to check for packet access for a valid freplace prog
====================
Udip Pant [Tue, 25 Aug 2020 23:20:03 +0000 (16:20 -0700)]
selftests/bpf: Test for map update access from within EXT programs
This adds further tests to ensure access permissions and restrictions
are applied properly for some map types such as sock-map.
It also adds another negative test to assert that static functions cannot be
replaced. In the 'unreliable' mode it still fails with the error 'tracing progs
cannot use bpf_spin_lock yet' with the change in the verifier.
Udip Pant [Tue, 25 Aug 2020 23:20:02 +0000 (16:20 -0700)]
selftests/bpf: Test for checking return code for the extended prog
This adds a test to enforce the same check for the return code for the extended prog
as is enforced for the target program. It asserts failure for a
return code which is permitted without the patch in this series, while
it is restricted after the application of this patch.
Udip Pant [Tue, 25 Aug 2020 23:20:01 +0000 (16:20 -0700)]
selftests/bpf: Add test for freplace program with write access
This adds a selftest that tests the behavior when a freplace target program
attempts a write access on a packet. The expectation is that the read or write
access is granted based on the program type of the linked program and
not of the extension itself (which is of type BPF_PROG_TYPE_EXT).
This test fails without the associated patch on the verifier.
Udip Pant [Tue, 25 Aug 2020 23:20:00 +0000 (16:20 -0700)]
bpf: verifier: Use target program's type for access verifications
This patch adds changes in verifier to make decisions such as granting
of read / write access or enforcement of return code status based on
the program type of the target program while using dynamic program
extension (of type BPF_PROG_TYPE_EXT).
The BPF_PROG_TYPE_EXT type can be used to extend types such as XDP, SKB
and others. Since the BPF_PROG_TYPE_EXT program type by itself is just a
placeholder for those, we need this extended check for those extended
programs to actually work with proper access, while using this option.
Specifically, it introduces following changes:
- may_access_direct_pkt_data:
allow access to packet data based on the target prog
- check_return_code:
enforce return code based on the target prog
(currently, this check is skipped for EXT program)
- check_ld_abs:
check for 'may_access_skb' based on the target prog
- check_map_prog_compatibility:
enforce the map compatibility check based on the target prog
- may_update_sockmap:
allow sockmap update based on the target prog
Some other occurrences of prog->type are left as is, without replacing
them with the 'resolved' type:
- do_check_common() and check_attach_btf_id():
already have specific logic to handle the EXT prog type
- jit_subprogs() and bpf_check():
Not changed for jit compilation or while inferring env->ops
Next few patches in this series include selftests for some of these cases.
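The resolution itself is small; a sketch (the aux field name is an assumption):

	static enum bpf_prog_type resolve_prog_type(const struct bpf_prog *prog)
	{
		/* An extension program takes on the type of the program it replaces. */
		return prog->type == BPF_PROG_TYPE_EXT ?
		       prog->aux->linked_prog->type : prog->type;
	}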
Jiri Olsa [Wed, 26 Aug 2020 10:18:45 +0000 (12:18 +0200)]
selftests/bpf: Fix open call in trigger_fstat_events
Alexei reported compile breakage on newer systems with
following error:
In file included from /usr/include/fcntl.h:290:0,
                 from ./test_progs.h:29,
                 from .../bpf-next/tools/testing/selftests/bpf/prog_tests/d_path.c:3:
In function ‘open’,
    inlined from ‘trigger_fstat_events’ at .../bpf-next/tools/testing/selftests/bpf/prog_tests/d_path.c:50:10,
    inlined from ‘test_d_path’ at .../bpf-next/tools/testing/selftests/bpf/prog_tests/d_path.c:119:6:
/usr/include/x86_64-linux-gnu/bits/fcntl2.h:50:4: error: call to ‘__open_missing_mode’ declared with attribute error: open with O_CREAT or O_TMPFILE in second argument needs 3 arguments
  __open_missing_mode ();
  ^~~~~~~~~~~~~~~~~~~~~~
We're missing permission bits as 3rd argument
for open call with O_CREAT flag specified.
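The gist of the fix, as a generic illustration:

	/* O_CREAT (and O_TMPFILE) require the mode argument. */
	int fd = open(path, O_CREAT | O_RDWR, 0644);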
Fixes: 19fcfbbd79f4 ("selftests/bpf: Add test for d_path helper") Reported-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200826101845.747617-1-jolsa@kernel.org
Jiri Olsa [Tue, 25 Aug 2020 19:21:23 +0000 (21:21 +0200)]
selftests/bpf: Add test for d_path helper
Adding test for d_path helper which is pretty much
copied from Wenbo Zhang's test for bpf_get_fd_path,
which never made it in.
The test is doing fstat/close on several fd types,
and verifies we got the d_path helper working on
kernel probes for vfs_getattr/filp_close functions.
Jiri Olsa [Tue, 25 Aug 2020 19:21:22 +0000 (21:21 +0200)]
selftests/bpf: Add verifier test for d_path helper
Adding a verifier test for attaching a tracing program and
calling the d_path helper from within it, testing that it's
allowed for the dentry_open function and denied for the 'd_path'
function with the appropriate error.
Jiri Olsa [Tue, 25 Aug 2020 19:21:20 +0000 (21:21 +0200)]
bpf: Add d_path helper
Adding a d_path helper function that returns the full path for a
given 'struct path' object, which needs to be the kernel
BTF 'path' object. The path is returned in the provided buffer
'buf' of size 'sz' and is zero terminated.
bpf_d_path(&file->f_path, buf, size);
The helper calls the d_path function directly, so there's only a
limited set of functions it can be called from. Adding just a
very modest set for the start.
Updating also bpf.h tools uapi header and adding 'path' to
bpf_helpers_doc.py script.
WALK_SCALAR - when SCALAR_VALUE is found
WALK_PTR - when pointer value is found, its ID is stored
in 'next_btf_id' output param
WALK_STRUCT - when nested struct object is found, its ID is stored
in 'next_btf_id' output param
It will be used in following patches to get all nested
struct objects for given type and offset.
The btf_struct_access now calls btf_struct_walk function,
as long as it gets nested structs as return value.
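Sketched as an enum (the exact name used in kernel/bpf/btf.c is an assumption):

	enum bpf_struct_walk_result {
		/* < 0 means error */
		WALK_SCALAR = 0,
		WALK_PTR,
		WALK_STRUCT,
	};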
Jiri Olsa [Tue, 25 Aug 2020 19:21:13 +0000 (21:21 +0200)]
bpf: Move btf_resolve_size into __btf_resolve_size
Moving btf_resolve_size into __btf_resolve_size and
keeping btf_resolve_size public with just first 3
arguments, because the rest of the arguments are not
used by outside callers.
Following changes are adding more arguments, which
are not useful to outside callers. They will be added
to the __btf_resolve_size function.
The CC_CAN_LINK checks that the host compiler can link, but bpf_preload
relies on libbpf which in turn needs libelf to be present during linking.
allmodconfig runs in odd setups with cross compilers and missing host
libraries like libelf. Instead of extending kconfig with every possible
library that bpf_preload might need, disallow building BPF_PRELOAD in
such build-only configurations.
KP Singh [Tue, 25 Aug 2020 18:29:19 +0000 (20:29 +0200)]
bpf: Add selftests for local_storage
inode_local_storage:
* Hook to the file_open and inode_unlink LSM hooks.
* Create and unlink a temporary file.
* Store some information in the inode's bpf_local_storage during
file_open.
* Verify that this information exists when the file is unlinked.
sk_local_storage:
* Hook to the socket_post_create and socket_bind LSM hooks.
* Open and bind a socket and set the sk_storage in the
socket_post_create hook using the start_server helper.
* Verify if the information is set in the socket_bind hook.
KP Singh [Tue, 25 Aug 2020 18:29:18 +0000 (20:29 +0200)]
bpf: Allow local storage to be used from LSM programs
Adds support for both bpf_{sk, inode}_storage_{get, delete} to be used
in LSM programs. These helpers are not used for tracing programs
(currently) as their usage is tied to the life-cycle of the object and
should only be used where the owning object won't be freed (when the
owning object is passed as an argument to the LSM hook). Thus, they
are safer to use in LSM hooks than tracing. Usage of local storage in
tracing programs will probably follow a per function based whitelist
approach.
Since the UAPI helper signature for bpf_sk_storage expects a bpf_sock,
which leads to a compilation warning for LSM programs, it is also updated
to accept a void * pointer instead.
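A minimal sketch of the newly allowed usage (map layout and hook choice are illustrative, not from the patch):

	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	char _license[] SEC("license") = "GPL";

	struct {
		__uint(type, BPF_MAP_TYPE_INODE_STORAGE);
		__uint(map_flags, BPF_F_NO_PREALLOC);
		__type(key, int);
		__type(value, __u64);
	} inode_map SEC(".maps");

	SEC("lsm/inode_unlink")
	int BPF_PROG(unlink_hook, struct inode *dir, struct dentry *victim)
	{
		__u64 *val;

		/* The inode is passed to the hook, so it cannot go away under us. */
		val = bpf_inode_storage_get(&inode_map, victim->d_inode, 0,
					    BPF_LOCAL_STORAGE_GET_F_CREATE);
		if (val)
			__sync_fetch_and_add(val, 1);
		return 0;
	}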
KP Singh [Tue, 25 Aug 2020 18:29:17 +0000 (20:29 +0200)]
bpf: Implement bpf_local_storage for inodes
Similar to bpf_local_storage for sockets, add local storage for inodes.
The life-cycle of storage is managed with the life-cycle of the inode.
i.e. the storage is destroyed along with the owning inode.
The BPF LSM allocates an __rcu pointer to the bpf_local_storage in the
security blob which are now stackable and can co-exist with other LSMs.
KP Singh [Tue, 25 Aug 2020 18:29:15 +0000 (20:29 +0200)]
bpf: Generalize bpf_sk_storage
Refactor the functionality in bpf_sk_storage.c so that concept of
storage linked to kernel objects can be extended to other objects like
inode, task_struct etc.
Each new local storage will still be a separate map and provide its own
set of helpers. This allows for future object specific extensions and
still share a lot of the underlying implementation.
adding new map operations to support bpf_local_storage maps:
* storages for different kernel objects to optionally have different
memory charging strategy (map_local_storage_charge,
map_local_storage_uncharge)
* Functionality to extract the storage pointer from a pointer to the
owning object (map_owner_storage_ptr)
KP Singh [Tue, 25 Aug 2020 18:29:14 +0000 (20:29 +0200)]
bpf: Generalize caching for sk_storage.
Provide the ability to define local storage caches on a per-object
type basis. The caches and caching indices for different objects should
not be inter-mixed as suggested in:
Yonghong Song [Mon, 24 Aug 2020 22:28:07 +0000 (15:28 -0700)]
selftests/bpf: Enable tc verbose mode for test_sk_assign
Currently test_sk_assign fails the verifier with llvm11/llvm12.
During debugging, I found the default verifier output is
truncated like below:
Verifier analysis:
Skipped 2200 bytes, use 'verb' option for the full verbose log.
[...]
off=23,r=34,imm=0) R5=inv0 R6=ctx(id=0,off=0,imm=0) R7=pkt(id=0,off=0,r=34,imm=0) R10=fp0
80: (0f) r7 += r2
last_idx 80 first_idx 21
regs=4 stack=0 before 78: (16) if w3 == 0x11 goto pc+1
when I am using "./test_progs -vv -t assign".
The reason is tc verbose mode is not enabled.
This patch enables tc verbose mode and the output looks like below
Verifier analysis:
Daniel T. Lee [Sun, 23 Aug 2020 08:53:34 +0000 (17:53 +0900)]
samples: bpf: Refactor tracepoint tracing programs with libbpf
For the problem of increasing fragmentation of the bpf loader programs,
instead of using bpf_load.o, which is used in samples/bpf, this
commit refactors the existing tracepoint tracing programs with the libbpf
bpf loader.
- Adding a tracepoint event and attaching a bpf program to it is done
through bpf_program__attach().
- Instead of using the existing BPF MAP definition, the MAP definition
has been refactored with the new BTF-defined MAP format (sketched below).
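A hedged sketch of the loader flow the samples move to (object and program names are placeholders):

	#include <bpf/libbpf.h>

	int setup(void)
	{
		struct bpf_object *obj;
		struct bpf_program *prog;
		struct bpf_link *link;

		obj = bpf_object__open_file("tracepoint_kern.o", NULL);
		if (libbpf_get_error(obj))
			return -1;
		if (bpf_object__load(obj))
			return -1;

		prog = bpf_object__find_program_by_name(obj, "tp_prog");
		if (!prog)
			return -1;
		link = bpf_program__attach(prog);	/* uses the program's SEC() name */
		return libbpf_get_error(link) ? -1 : 0;
	}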
Daniel T. Lee [Sun, 23 Aug 2020 08:53:33 +0000 (17:53 +0900)]
samples: bpf: Refactor kprobe tracing programs with libbpf
For the problem of increasing fragmentation of the bpf loader programs,
instead of using bpf_load.o, which is used in samples/bpf, this
commit refactors the existing kprobe tracing programs with the libbpf
bpf loader.
- For kprobe events pointing to system calls, the SYSCALL() macro in
trace_common.h was used.
- Adding a kprobe event and attaching a bpf program to it is done
through bpf_program__attach().
- Instead of using the existing BPF MAP definition, the MAP definition
has been refactored with the new BTF-defined MAP format.
Daniel T. Lee [Sun, 23 Aug 2020 08:53:32 +0000 (17:53 +0900)]
samples: bpf: Cleanup bpf_load.o from Makefile
Since commit ffd24b7d660f ("samples: bpf: Refactor BPF map performance
test with libbpf") has ommited the removal of bpf_load.o from Makefile,
this commit removes the bpf_load.o rule for targets where bpf_load.o is
not used.
Andrii Nakryiko [Fri, 21 Aug 2020 22:56:53 +0000 (15:56 -0700)]
libbpf: Fix type compatibility check copy-paste error
Fix copy-paste error in types compatibility check. Local type is accidentally
used instead of target type for the very first type check strictness check.
This can result in potentially less strict candidate comparison. Fix the
error.
Jakub Sitnicki [Fri, 21 Aug 2020 10:02:26 +0000 (12:02 +0200)]
bpf: sk_lookup: Add user documentation
Describe the purpose of BPF sk_lookup program, how it can be attached, when
it gets invoked, and what information gets passed to it. Point the reader
to examples and further documentation.
====================
The earlier effort in BPF-TCP-CC allows the TCP Congestion Control
algorithm to be written in BPF. It opens up opportunities to allow
a faster turnaround time in testing/releasing new congestion control
ideas to production environment.
The same flexibility can be extended to writing TCP header option.
It is not uncommon that people want to test new TCP header option
to improve the TCP performance. Another use case is for data-centers
that have a more controlled environment and more flexibility in
putting header options in internal traffic only.
This patch set introduces the necessary BPF logic and API to
allow bpf program to write and parse header options.
There are also some changes to TCP and they are mostly to provide
the needed sk and skb info to the bpf program to make decision.
Patch 9 is the main patch and has more details on the API and design.
The set includes an example which sends the max delay ack in
the BPF TCP header option and the receiving side can
then adjust its RTO accordingly.
v5:
- Move some of the comments from git commit message to the UAPI bpf.h
in patch 9
- Some variable clean up in the tests (patch 11).
v4:
- Since bpf-next is currently closed, tag the set with RFC to keep the
review cadence
- Separate tcp changes in its own patches (5, 6, 7). It is a bit
tricky since most of the tcp changes are to call out to the bpf prog to
write and parse the header. The write and parse callout has been
modularized into a few bpf_skops_* function in v3.
This revision (v4) tries to move those bpf_skops_* functions into separate
TCP patches. However, they will be half implemented to highlight
the changes to the TCP stack, mainly:
- when the bpf prog will be called in the TCP stack and
- what information needs to pump through the TCP stack to the actual bpf
prog callsite.
The bpf_skops_* functions will be fully implemented in patch 9 together
with other bpf pieces.
- Use struct_size() in patch 1 (Eric)
- Add saw_unknown to struct tcp_options_received in patch 4 (Eric)
v3:
- Add kdoc for tcp_make_synack (Jakub Kicinski)
- Add BPF_WRITE_HDR_TCP_CURRENT_MSS and BPF_WRITE_HDR_TCP_SYNACK_COOKIE
in bpf.h to give a clearer meaning to sock_ops->args[0] when
writing header option.
- Rename BPF_SOCK_OPS_PARSE_UNKWN_HDR_OPT_CB_FLAG
to BPF_SOCK_OPS_PARSE_UNKNOWN_HDR_OPT_CB_FLAG
v2:
- Instead of limiting the bpf prog to write experimental
option (kind:254, magic:0xeB9F), this revision allows the bpf prog to
write any TCP header option through the bpf_store_hdr_opt() helper.
That will allow different bpf-progs to write its own
option and the helper will guarantee there is no duplication.
- Add bpf_load_hdr_opt() helper to search a particular option by kind.
Some of the get_syn logic is refactored to bpf_sock_ops_get_syn().
- Since bpf prog is no longer limited to option (254, 0xeB9F),
the TCP_SKB_CB(skb)->bpf_hdr_opt_off is no longer needed.
Instead, when there is any option kernel cannot recognize,
the bpf prog will be called if the
BPF_SOCK_OPS_PARSE_UNKWN_HDR_OPT_CB_FLAG is set.
[ The "unknown_opt" is learned in tcp_parse_options() in patch 4. ]
- Add BPF_SOCK_OPS_PARSE_ALL_HDR_OPT_CB_FLAG.
If this flag is set, the bpf-prog will be called
on all tcp packet received at an established sk.
It will be useful to ensure a previously written header option is
received by the peer.
e.g. The latter test is using this on the active-side during syncookie.
- The test_tcp_hdr_options.c is adjusted accordingly
to test writing both experimental and regular TCP header option.
- The test_misc_tcp_hdr_options.c is added to mainly
test different cases on the new helpers.
- Break up the TCP_BPF_RTO_MIN and TCP_BPF_DELACK_MAX into
two patches.
- Directly store the tcp_hdrlen in "struct saved_syn" instead of
going back to the tcp header to obtain it by "th->doff * 4"
- Add a new optval(==2) for setsockopt(TCP_SAVE_SYN) such
that it will also store the mac header (patch 9).
====================
Martin KaFai Lau [Thu, 20 Aug 2020 19:01:23 +0000 (12:01 -0700)]
tcp: bpf: Optionally store mac header in TCP_SAVE_SYN
This patch is adapted from Eric's patch in an earlier discussion [1].
The TCP_SAVE_SYN currently only stores the network header and
tcp header. This patch allows it to optionally store
the mac header also if the setsockopt's optval is 2.
It requires one more bit for the "save_syn" bit field in tcp_sock.
This patch achieves this by moving the syn_smc bit next to the is_mptcp.
The syn_smc is currently used with the TCP experimental option. Since
syn_smc is only used when CONFIG_SMC is enabled, this patch also puts
the "IS_ENABLED(CONFIG_SMC)" around it like the is_mptcp did
with "IS_ENABLED(CONFIG_MPTCP)".
The mac_hdrlen is also stored in the "struct saved_syn"
to allow a quick offset from the bpf prog if it chooses to start
getting from the network header or the tcp header.
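From user space the new mode is selected like this (illustrative):

	/* optval 2 = also save the mac header of the incoming SYN */
	int save = 2;
	setsockopt(listen_fd, IPPROTO_TCP, TCP_SAVE_SYN, &save, sizeof(save));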
Suggested-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/bpf/20200820190123.2886935-1-kafai@fb.com
Martin KaFai Lau [Thu, 20 Aug 2020 19:01:17 +0000 (12:01 -0700)]
bpf: selftests: Tcp header options
This patch adds tests for the new bpf tcp header option feature.
test_tcp_hdr_options.c:
- It tests header option writing and parsing in 3WHS: regular
connection establishment, fastopen, and syncookie.
- In syncookie, the passive side's bpf prog is asking the active side
to resend its bpf header option by specifying a RESEND bit in the
outgoing SYNACK. handle_active_estab() and write_nodata_opt() has
some details.
- handle_passive_estab() has comments on fastopen.
- It also has test for header writing and parsing in FIN packet.
- Most of the tests write an experimental option 254 with magic 0xeB9F.
- The no_exprm_estab() also tests writing a regular TCP option
without any magic.
test_misc_tcp_options.c:
- It is a one-directional test. The active side writes an option and
the passive side parses it. The focus is to exercise
the new helpers and API.
- Testing the new helper: bpf_load_hdr_opt() and bpf_store_hdr_opt().
- Testing the bpf_getsockopt(TCP_BPF_SYN).
- Negative tests for the above helpers.
- Testing the sock_ops->skb_data.
Martin KaFai Lau [Thu, 20 Aug 2020 19:01:04 +0000 (12:01 -0700)]
bpf: tcp: Allow bpf prog to write and parse TCP header option
[ Note: The TCP changes here is mainly to implement the bpf
pieces into the bpf_skops_*() functions introduced
in the earlier patches. ]
The earlier effort in BPF-TCP-CC allows the TCP Congestion Control
algorithm to be written in BPF. It opens up opportunities to allow
a faster turnaround time in testing/releasing new congestion control
ideas to production environment.
The same flexibility can be extended to writing TCP header option.
It is not uncommon that people want to test new TCP header option
to improve the TCP performance. Another use case is for data-center
that has a more controlled environment and has more flexibility in
putting header options for internal only use.
For example, we want to test the idea in putting maximum delay
ACK in TCP header option which is similar to a draft RFC proposal [1].
This patch introduces the necessary BPF API and use them in the
TCP stack to allow BPF_PROG_TYPE_SOCK_OPS program to parse
and write TCP header options. It currently supports most
TCP packets except RST.
Supported TCP header option:
───────────────────────────
This patch allows the bpf-prog to write any option kind.
Different bpf-progs can write its own option by calling the new helper
bpf_store_hdr_opt(). The helper will ensure there is no duplicated
option in the header.
By allowing bpf-prog to write any option kind, this gives a lot of
flexibility to the bpf-prog. Different bpf-prog can write its
own option kind. It could also allow the bpf-prog to support a
recently standardized option on an older kernel.
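A minimal sockops sketch of the write path described here (option layout and callback names as documented in bpf.h; this is not the selftest code). Note that the callbacks below only fire once the relevant CB flag from the next section is enabled via bpf_sock_ops_cb_flags_set():

	SEC("sockops")
	int write_test_opt(struct bpf_sock_ops *skops)
	{
		/* kind 254, len 4, 2-byte magic 0xeB9F, no extra data */
		__u8 opt[4] = { 0xfe, 4, 0xeb, 0x9f };

		switch (skops->op) {
		case BPF_SOCK_OPS_HDR_OPT_LEN_CB:
			/* first pass: reserve room for the option */
			bpf_reserve_hdr_opt(skops, sizeof(opt), 0);
			break;
		case BPF_SOCK_OPS_WRITE_HDR_OPT_CB:
			/* second pass: actually write it */
			bpf_store_hdr_opt(skops, opt, sizeof(opt), 0);
			break;
		}
		return 1;
	}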
Sockops Callback Flags:
──────────────────────
The bpf program will only be called to parse/write tcp header option
if the following newly added callback flags are enabled
in tp->bpf_sock_ops_cb_flags:
BPF_SOCK_OPS_PARSE_UNKNOWN_HDR_OPT_CB_FLAG
BPF_SOCK_OPS_PARSE_ALL_HDR_OPT_CB_FLAG
BPF_SOCK_OPS_WRITE_HDR_OPT_CB_FLAG
A few words on the PARSE CB flags. When the above PARSE CB flags are
turned on, the bpf-prog will be called on packets received
at a sk that has at least reached the ESTABLISHED state.
The parsing of the SYN-SYNACK-ACK will be discussed in the
"3 Way HandShake" section.
The default is off for all of the above new CB flags, i.e. the bpf prog
will not be called to parse or write bpf hdr options. There are
detailed comments on these new cb flags in the UAPI bpf.h.
sock_ops->skb_data and bpf_load_hdr_opt()
─────────────────────────────────────────
sock_ops->skb_data and sock_ops->skb_data_end covers the whole
TCP header and its options. They are read only.
The new bpf_load_hdr_opt() helps to read a particular option "kind"
from the skb_data.
Please refer to the comment in UAPI bpf.h. It has details
on what skb_data contains under different sock_ops->op.
3 Way HandShake
───────────────
The bpf-prog can learn if it is sending SYN or SYNACK by reading the
sock_ops->skb_tcp_flags.
* Passive side
When writing SYNACK (i.e. sock_ops->op == BPF_SOCK_OPS_WRITE_HDR_OPT_CB),
the received SYN skb will be available to the bpf prog. The bpf prog can
use the SYN skb (which may carry the header option sent from the remote bpf
prog) to decide what bpf header option should be written to the outgoing
SYNACK skb. The SYN packet can be obtained by getsockopt(TCP_BPF_SYN*).
More on this later. Also, the bpf prog can learn if it is in syncookie
mode (by checking sock_ops->args[0] == BPF_WRITE_HDR_TCP_SYNACK_COOKIE).
The bpf prog can store the received SYN pkt by using the existing
bpf_setsockopt(TCP_SAVE_SYN). The example in a later patch does it.
[ Note that the fullsock here is a listen sk, bpf_sk_storage
is not very useful here since the listen sk will be shared
by many concurrent connection requests.
Extending bpf_sk_storage support to request_sock will add weight
to the minisock and it is not necessarily better than storing the
whole ~100 bytes SYN pkt. ]
When the connection is established, the bpf prog will be called
in the existing PASSIVE_ESTABLISHED_CB callback. At that time,
the bpf prog can get the header option from the saved syn and
then apply the needed operation to the newly established socket.
The later patch will use the max delay ack specified in the SYN
header and set the RTO of this newly established connection
as an example.
The received ACK (that concludes the 3WHS) will also be available to
the bpf prog during PASSIVE_ESTABLISHED_CB through the sock_ops->skb_data.
It could be useful in syncookie scenario. More on this later.
There is an existing getsockopt "TCP_SAVED_SYN" to return the whole
saved syn pkt which includes the IP[46] header and the TCP header.
A few "TCP_BPF_SYN*" getsockopt has been added to allow specifying where to
start getting from, e.g. starting from TCP header, or from IP[46] header.
The new getsockopt(TCP_BPF_SYN*) will also know where it can get
the SYN's packet from:
- (a) the just received syn (available when the bpf prog is writing SYNACK)
and it is the only way to get SYN during syncookie mode.
or
- (b) the saved syn (available in PASSIVE_ESTABLISHED_CB and also other
existing CB).
The bpf prog does not need to know where the SYN pkt is coming from.
The getsockopt(TCP_BPF_SYN*) will hide these details.
Similarly, a flags "BPF_LOAD_HDR_OPT_TCP_SYN" is also added to
bpf_load_hdr_opt() to read a particular header option from the SYN packet.
* Fastopen
Fastopen should work the same as the regular non fastopen case.
This is a test in a later patch.
* Syncookie
For syncookie, the later example patch asks the active
side's bpf prog to resend the header options in ACK. The server
can use bpf_load_hdr_opt() to look at the options in this
received ACK during PASSIVE_ESTABLISHED_CB.
* Active side
The bpf prog will get a chance to write the bpf header option
in the SYN packet during WRITE_HDR_OPT_CB. The received SYNACK
pkt will also be available to the bpf prog during the existing
ACTIVE_ESTABLISHED_CB callback through the sock_ops->skb_data
and bpf_load_hdr_opt().
* Turn off header CB flags after 3WHS
If the bpf prog does not need to write/parse header options
beyond the 3WHS, the bpf prog can clear the bpf_sock_ops_cb_flags
to avoid being called for header options.
Or the bpf-prog can select to leave the UNKNOWN_HDR_OPT_CB_FLAG on
so that the kernel will only call it when there is option that
the kernel cannot handle.
Martin KaFai Lau [Thu, 20 Aug 2020 19:00:58 +0000 (12:00 -0700)]
bpf: sock_ops: Change some members of sock_ops_kern from u32 to u8
A later patch needs to add a few pointers and a few u8 to
sock_ops_kern. Hence, this patch saves some spaces by moving
some of the existing members from u32 to u8 so that the later
patch can still fit everything in a cacheline.
Martin KaFai Lau [Thu, 20 Aug 2020 19:00:52 +0000 (12:00 -0700)]
bpf: tcp: Add bpf_skops_hdr_opt_len() and bpf_skops_write_hdr_opt()
The bpf prog needs to parse the SYN header to learn what options have
been sent by the peer's bpf-prog before writing its options into SYNACK.
This patch adds a "syn_skb" arg to tcp_make_synack() and send_synack().
This syn_skb will eventually be made available (as read-only) to the
bpf prog. This will be the only SYN packet available to the bpf
prog during syncookie. For other regular cases, the bpf prog can
also use the saved_syn.
When writing options, the bpf prog will first be called to tell the
kernel its required number of bytes. It is done by the new
bpf_skops_hdr_opt_len(). The bpf prog will only be called when the new
BPF_SOCK_OPS_WRITE_HDR_OPT_CB_FLAG is set in tp->bpf_sock_ops_cb_flags.
When the bpf prog returns, the kernel will know how many bytes are needed
and then update the "*remaining" arg accordingly. 4 byte alignment will
be included in the "*remaining" before this function returns. The 4 byte
aligned number of bytes will also be stored into the opts->bpf_opt_len.
"bpf_opt_len" is a newly added member to the struct tcp_out_options.
Then the new bpf_skops_write_hdr_opt() will call the bpf prog to write the
header options. The bpf prog is only called if it has reserved spaces
before (opts->bpf_opt_len > 0).
The bpf prog is the last one getting a chance to reserve header space
and writing the header option.
These two functions are half implemented to highlight the changes in
TCP stack. The actual codes preparing the bpf running context and
invoking the bpf prog will be added in the later patch with other
necessary bpf pieces.
Martin KaFai Lau [Thu, 20 Aug 2020 19:00:46 +0000 (12:00 -0700)]
bpf: tcp: Add bpf_skops_parse_hdr()
The patch adds a function bpf_skops_parse_hdr().
It will call the bpf prog to parse the TCP header received at
a tcp_sock that has at least reached the ESTABLISHED state.
For the packets received during the 3WHS (SYN, SYNACK and ACK),
the received skb will be available to the bpf prog during the callback
in bpf_skops_established() introduced in the previous patch and
in the bpf_skops_write_hdr_opt() that will be added in the
next patch.
Calling bpf prog to parse header is controlled by two new flags in
tp->bpf_sock_ops_cb_flags:
BPF_SOCK_OPS_PARSE_UNKNOWN_HDR_OPT_CB_FLAG and
BPF_SOCK_OPS_PARSE_ALL_HDR_OPT_CB_FLAG.
When BPF_SOCK_OPS_PARSE_UNKNOWN_HDR_OPT_CB_FLAG is set,
the bpf prog will only be called when there is unknown
option in the TCP header.
When BPF_SOCK_OPS_PARSE_ALL_HDR_OPT_CB_FLAG is set,
the bpf prog will be called on all received TCP headers.
This function is half implemented to highlight the changes in
TCP stack. The actual codes preparing the bpf running context and
invoking the bpf prog will be added in the later patch with other
necessary bpf pieces.
Martin KaFai Lau [Thu, 20 Aug 2020 19:00:39 +0000 (12:00 -0700)]
bpf: tcp: Add bpf_skops_established()
In tcp_init_transfer(), it currently calls the bpf prog to give it a
chance to handle the just "ESTABLISHED" event (e.g. do setsockopt
on the newly established sk). Right now, it is done by calling the
general purpose tcp_call_bpf().
In the later patch, it also needs to pass the just-received skb which
concludes the 3 way handshake. E.g. the SYNACK received at the active side.
The bpf prog can then learn some specific header options written by the
peer's bpf-prog and potentially do setsockopt on the newly established sk.
Thus, instead of reusing the general purpose tcp_call_bpf(), a new function
bpf_skops_established() is added to allow passing the "skb" to the bpf
prog. The actual skb passing from bpf_skops_established() to the bpf prog
will happen together in a later patch which has the necessary bpf pieces.
A "skb" arg is also added to tcp_init_transfer() such that
it can then be passed to bpf_skops_established().
Calling the new bpf_skops_established() instead of tcp_call_bpf()
should be a noop in this patch.
Martin KaFai Lau [Thu, 20 Aug 2020 19:00:33 +0000 (12:00 -0700)]
tcp: Add saw_unknown to struct tcp_options_received
In a later patch, the bpf prog only wants to be called to handle
a header option if that particular header option cannot be handled by
the kernel. This unknown option could be written by the peer's bpf-prog.
It could also be a new standard option that the running kernel does not
support while a bpf-prog can handle it.
This patch adds a "saw_unknown" bit to "struct tcp_options_received"
and it uses an existing one byte hole to do that. "saw_unknown" will
be set in tcp_parse_options() if it sees an option that the kernel
cannot handle.
Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20200820190033.2884430-1-kafai@fb.com
Martin KaFai Lau [Thu, 20 Aug 2020 19:00:27 +0000 (12:00 -0700)]
tcp: bpf: Add TCP_BPF_RTO_MIN for bpf_setsockopt
This patch adds bpf_setsockopt(TCP_BPF_RTO_MIN) to allow bpf prog
to set the min rto of a connection. It could be used together
with the earlier patch which has added bpf_setsockopt(TCP_BPF_DELACK_MAX).
A later selftest patch will communicate the max delay ack in a
bpf tcp header option and then the receiving side can use
bpf_setsockopt(TCP_BPF_RTO_MIN) to set a shorter rto.
Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20200820190027.2884170-1-kafai@fb.com
Martin KaFai Lau [Thu, 20 Aug 2020 19:00:21 +0000 (12:00 -0700)]
tcp: bpf: Add TCP_BPF_DELACK_MAX setsockopt
This change is mostly from an internal patch and adapts it from sysctl
config to the bpf_setsockopt setup.
The bpf_prog can set the max delay ack by using
bpf_setsockopt(TCP_BPF_DELACK_MAX). This max delay ack can be communicated
to its peer through bpf header option. The receiving peer can then use
this max delay ack and set a potentially lower rto by using
bpf_setsockopt(TCP_BPF_RTO_MIN) which will be introduced
in the next patch.
Another later selftest patch will also use it like the above to show
how to write and parse bpf tcp header option.
Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20200820190021.2884000-1-kafai@fb.com
Martin KaFai Lau [Thu, 20 Aug 2020 19:00:14 +0000 (12:00 -0700)]
tcp: Use a struct to represent a saved_syn
The TCP_SAVE_SYN has both the network header and tcp header.
The total length of the saved syn packet is currently stored in
the first 4 bytes (u32) of an array and the actual packet data is
stored after that.
A later patch will add a bpf helper that allows to get the tcp header
alone from the saved syn without the network header. It will be more
convenient to have a direct offset to a specific header instead of
re-parsing it. This requires to separately store the network hdrlen.
The total header length (i.e. network + tcp) is still needed for the
current usage in getsockopt. Although this total length can be obtained
by looking into the tcphdr and then using (th->doff << 2), this patch
chooses to directly store the tcp hdrlen in the second four bytes of
this newly created "struct saved_syn". By using a new struct, it can
give a readable name to each individual header length.
Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20200820190014.2883694-1-kafai@fb.com
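For illustration, a sketch of the layout this describes; the field names here
are assumptions, not copied from the patch:

struct saved_syn {
	u32 network_hdrlen;	/* IPv4/IPv6 header length */
	u32 tcp_hdrlen;		/* th->doff << 2 of the saved tcp header */
	u8  data[];		/* the saved network + tcp header bytes */
};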
Andrii Nakryiko [Thu, 20 Aug 2020 23:12:39 +0000 (16:12 -0700)]
libbpf: Normalize and improve logging across few functions
Make libbpf logs follow similar pattern and provide more context like section
name or program name, where appropriate. Also, add BPF_INSN_SZ constant and
use it throughout to clean up the code a little bit. This commit doesn't have any
functional changes and just gets some code changes out of the way before a
bigger refactoring of libbpf internals.
Andrii Nakryiko [Thu, 20 Aug 2020 23:12:38 +0000 (16:12 -0700)]
libbpf: Skip well-known ELF sections when iterating ELF
Skip and don't log ELF sections that libbpf knows about and ignores during ELF
processing. This avoids unnecessarily logging details about those ELF
sections and cleans up the libbpf debug log. Ignored sections include DWARF data,
the string table, an empty .text section, and a few special but unneeded
sections (e.g., .llvm_addrsig).
With such ELF sections out of the way, log unrecognized ELF sections at
pr_info level to increase visibility.
Andrii Nakryiko [Thu, 20 Aug 2020 23:12:36 +0000 (16:12 -0700)]
libbpf: Factor out common ELF operations and improve logging
Factor out common ELF operations done throughout libbpf. This simplifies
usage across multiple places in libbpf, hides error reporting from
higher-level functions, and makes error logging more consistent.
Andrii Nakryiko [Thu, 20 Aug 2020 23:12:35 +0000 (16:12 -0700)]
selftests/bpf: BPF object files should depend only on libbpf headers
There is no need to re-build BPF object files if any of the sources of libbpf
change. So record a more precise dependency only on the libbpf/bpf_*.h headers.
This eliminates unnecessary re-builds.
====================
We're currently building a control plane for our BPF socket dispatch
work. As part of that, we have a need to create a copy of an existing
sockhash, to allow us to change the keys. I previously proposed allowing
privileged userspace to look up sockets, which doesn't work due to
security concerns (see [1]).
In follow up discussions during BPF office hours we identified bpf_iter
as a possible solution: instead of accessing sockets from user space
we can iterate the source sockhash, and insert the values into a new
map. Enabling this requires two pieces: the ability to iterate
sockmap and sockhash, as well as being able to call map_update_elem
from BPF.
This patch set implements the latter: it's now possible to update
sockmap from BPF context. As a next step, we can implement bpf_iter
for sockmap.
===
I've done some more fixups, and audited the safe contexts more
thoroughly. As a result I'm removing CGROUP_SKB, SK_MSG and SK_SKB
for now.
Changes in v3:
- Use CHECK as much as possible (Yonghong)
- Reject ARG_PTR_TO_MAP_VALUE_OR_NULL for sockmap (Yonghong)
- Remove CGROUP_SKB, SK_MSG, SK_SKB from safe contexts
- Test that the verifier rejects update from unsafe context
Changes in v2:
- Fix warning in patch #2 (Jakub K)
- Renamed override_map_arg_type (John)
- Only allow updating sockmap from known safe contexts (John)
- Use __s64 for sockmap updates from user space (Yonghong)
- Various small test fixes around test macros and such (Yonghong)
Lorenz Bauer [Fri, 21 Aug 2020 10:29:48 +0000 (11:29 +0100)]
selftests: bpf: Test sockmap update from BPF
Add a test which copies a socket from a sockmap into another sockmap
or sockhash. This exercises bpf_map_update_elem support from BPF
context. Compare the socket cookies from source and destination to
ensure that the copy succeeded.
Also check that the verifier rejects map_update from unsafe contexts.
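A hedged sketch of the copy this test exercises; the map names, the attach
section, and the bpf_sk_release call reflect how sock map lookup from BPF is
generally expected to behave, not the literal selftest code:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_SOCKMAP);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u64);
} src SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_SOCKHASH);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u64);
} dst SEC(".maps");

/* sched_cls is assumed here as one of the allowed "safe" contexts. */
SEC("classifier")
int copy_sock(struct __sk_buff *skb)
{
	struct bpf_sock *sk;
	__u32 key = 0;

	sk = bpf_map_lookup_elem(&src, &key);
	if (!sk)
		return 0;

	/* The new ability added by this series: update a sock map from BPF,
	 * passing the socket pointer as the map value. */
	bpf_map_update_elem(&dst, &key, sk, BPF_ANY);

	/* Sock map lookup is assumed to return a referenced socket; release it. */
	bpf_sk_release(sk);
	return 0;
}

char _license[] SEC("license") = "GPL";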
Lorenz Bauer [Fri, 21 Aug 2020 10:29:47 +0000 (11:29 +0100)]
bpf: sockmap: Allow update from BPF
Allow calling bpf_map_update_elem on sockmap and sockhash from a BPF
context. The synchronization required for this is a bit fiddly: we
need to prevent the socket from changing its state while we add it
to the sockmap, since we rely on getting a callback via
sk_prot->unhash. However, we can't just lock_sock like in
sock_map_sk_acquire because that might sleep. So instead we disable
softirq processing and use bh_lock_sock to prevent further
modification.
Yet, this is still not enough. BPF can be called in contexts where
the current CPU might have locked a socket. If the BPF can get
a hold of such a socket, inserting it into a sockmap would lead to
a deadlock. One straightforward example is sock_ops programs that
have ctx->sk, but the same problem exists for kprobes, etc.
We deal with this by allowing sockmap updates only from known safe
contexts. Improper usage is rejected by the verifier.
I've audited the enabled contexts to make sure they can't run in
a locked context. It's possible that CGROUP_SKB and others are
safe as well, but the auditing here is much more difficult. In
any case, we can extend the safe contexts when the need arises.
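A rough sketch of the locking shape described above; helper names are
abbreviated/assumed and most error checks are trimmed:

/* BPF-context update path: take the bh-level socket lock instead of
 * lock_sock(), which may sleep. */
static int sock_map_update_elem(struct bpf_map *map, void *key,
				void *value, u64 flags)
{
	struct sock *sk = (struct sock *)value;
	int ret;

	local_bh_disable();
	bh_lock_sock(sk);
	if (!sock_map_sk_state_allowed(sk))	/* e.g. established/listening only */
		ret = -EOPNOTSUPP;
	else
		ret = sock_map_update_common(map, *(u32 *)key, sk, flags);
	bh_unlock_sock(sk);
	local_bh_enable();

	return ret;
}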
Lorenz Bauer [Fri, 21 Aug 2020 10:29:46 +0000 (11:29 +0100)]
bpf: Override the meaning of ARG_PTR_TO_MAP_VALUE for sockmap and sockhash
The verifier assumes that map values are simple blobs of memory, and
therefore treats ARG_PTR_TO_MAP_VALUE, etc. as such. However, there are
map types where this isn't true. For example, sockmap and sockhash store
sockets. In general this isn't a big problem: we can just
write helpers that explicitly request PTR_TO_SOCKET instead of
ARG_PTR_TO_MAP_VALUE.
The one exception is the standard map helpers like map_update_elem,
map_lookup_elem, etc. Here it would be nice if we could overload the
function prototype for different kinds of maps. Unfortunately, this
isn't entirely straightforward:
We only know the type of the map once we have resolved meta->map_ptr
in check_func_arg. This means we can't swap out the prototype
in check_helper_call until we're halfway through the function.
Instead, modify check_func_arg to treat ARG_PTR_TO_MAP_VALUE as meaning
"the native type for the map" instead of "pointer to memory"
for sockmap and sockhash. This means we don't have to modify the
function prototype at all.
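A simplified sketch of that idea; the helper name and the exact replacement
argument type are assumptions, not the literal verifier change:

/* Sketch: reinterpret the declared value-argument type based on the map
 * that check_func_arg() has already resolved into meta->map_ptr. */
static int resolve_map_arg_type(const struct bpf_call_arg_meta *meta,
				enum bpf_arg_type *arg_type)
{
	switch (meta->map_ptr->map_type) {
	case BPF_MAP_TYPE_SOCKMAP:
	case BPF_MAP_TYPE_SOCKHASH:
		if (*arg_type == ARG_PTR_TO_MAP_VALUE)
			/* "the native type for the map": expect a socket pointer;
			 * the exact target type is an assumption here. */
			*arg_type = ARG_PTR_TO_SOCK_COMMON;
		break;
	default:
		break;
	}
	return 0;
}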
Lorenz Bauer [Fri, 21 Aug 2020 10:29:45 +0000 (11:29 +0100)]
bpf: sockmap: Call sock_map_update_elem directly
Don't go via map->ops to call sock_map_update_elem, since we know
what function to call in bpf_map_update_value. Since we currently
don't allow calling map_update_elem from BPF context, we can remove
ops->map_update_elem and rename the function to sock_map_update_elem_sys.
Lorenz Bauer [Fri, 21 Aug 2020 10:29:43 +0000 (11:29 +0100)]
net: sk_msg: Simplify sk_psock initialization
Initializing psock->sk_proto and other saved callbacks is only
done in sk_psock_update_proto, after sk_psock_init has returned.
The logic for this is difficult to follow, and needlessly complex.
Instead, initialize psock->sk_proto whenever we allocate a new
psock. Additionally, assert the following invariants:
* The SK has no ULP: ULP does its own finagling of sk->sk_prot
* sk_user_data is unused: we need it to store sk_psock
Protect our access to sk_user_data with sk_callback_lock, which
is what other users like reuseport arrays, etc. do.
The result is that an sk_psock is always fully initialized, and
that psock->sk_proto is always the "original" struct proto.
The latter allows us to use psock->sk_proto when initializing
IPv6 TCP / UDP callbacks for sockmap.
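A hedged sketch of the allocation path with the two asserted invariants,
protected by sk_callback_lock as described; details are abbreviated and some
names are assumptions:

struct sk_psock *sk_psock_init(struct sock *sk, int node)
{
	struct sk_psock *psock;

	write_lock_bh(&sk->sk_callback_lock);

	if (inet_csk_has_ulp(sk)) {		/* invariant: no ULP on the socket */
		psock = ERR_PTR(-EINVAL);
		goto out;
	}
	if (sk->sk_user_data) {			/* invariant: sk_user_data is free */
		psock = ERR_PTR(-EBUSY);
		goto out;
	}

	psock = kzalloc_node(sizeof(*psock), GFP_ATOMIC | __GFP_NOWARN, node);
	if (!psock) {
		psock = ERR_PTR(-ENOMEM);
		goto out;
	}

	/* Record the original proto at allocation time, so psock->sk_proto
	 * is always valid and always the "original" struct proto. */
	psock->sk_proto = READ_ONCE(sk->sk_prot);
	rcu_assign_sk_user_data_nocopy(sk, psock);
	sock_hold(sk);

out:
	write_unlock_bh(&sk->sk_callback_lock);
	return psock;
}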
Andrii Nakryiko [Fri, 21 Aug 2020 16:59:27 +0000 (09:59 -0700)]
libbpf: Add perf_buffer APIs for better integration with outside epoll loop
Add a set of APIs to the perf_buffer manager to allow applications to integrate
perf buffer polling into existing epoll-based infrastructure. One example is
applications already using libevent that want to plug in perf_buffer polling,
instead of relying on perf_buffer__poll() and wasting an extra thread to do it.
But perf_buffer is still extremely useful to set up and consume perf buffer
rings even for such use cases.
So to accommodate such new use cases, add three new APIs:
- perf_buffer__buffer_cnt() returns number of per-CPU buffers maintained by
given instance of perf_buffer manager;
- perf_buffer__buffer_fd() returns FD of perf_event corresponding to
a specified per-CPU buffer; this FD is then polled independently;
- perf_buffer__consume_buffer() consumes data from single per-CPU buffer,
identified by its slot index.
To support a simpler, but less efficient, way to integrate perf_buffer into
external polling logic, also expose the underlying epoll FD through the
perf_buffer__epoll_fd() API. It will need to be followed by
perf_buffer__poll(), wasting an extra syscall, or perf_buffer__consume(), wasting
CPU to iterate over buffers with no data. But this can be simpler and more
convenient for some cases.
These APIs allow for great flexibility, but do not sacrifice the general usability
of perf_buffer.
Also exercise and check new APIs in perf_buffer selftest.
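A hedged sketch of wiring these APIs into an application-owned epoll loop;
it assumes an already-created struct perf_buffer *pb and trims error handling:

#include <sys/epoll.h>
#include <bpf/libbpf.h>

/* Register every per-CPU buffer FD with the application's epoll instance,
 * remembering the buffer slot in the epoll data field. */
static int attach_perf_buffer(int epfd, struct perf_buffer *pb)
{
	size_t i, n = perf_buffer__buffer_cnt(pb);

	for (i = 0; i < n; i++) {
		struct epoll_event ev = {
			.events = EPOLLIN,
			.data.u32 = (unsigned int)i,	/* buffer slot index */
		};
		int fd = perf_buffer__buffer_fd(pb, i);

		if (fd < 0 || epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) < 0)
			return -1;
	}
	return 0;
}

/* Consume only the per-CPU buffers that actually signalled data. */
static void handle_events(int epfd, struct perf_buffer *pb)
{
	struct epoll_event evs[16];
	int i, n = epoll_wait(epfd, evs, 16, -1);

	for (i = 0; i < n; i++)
		perf_buffer__consume_buffer(pb, evs[i].data.u32);
}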
====================
"link" has been an important concept for bpf ecosystem to connect
bpf program with other properties. Currently, the information related
information can be queried from userspace through bpf command
BPF_LINK_GET_NEXT_ID, BPF_LINK_GET_FD_BY_ID and BPF_OBJ_GET_INFO_BY_FD.
The information is also available by "cating" /proc/<pid>/fdinfo/<link_fd>.
Raw_tracepoint, tracing, cgroup, netns and xdp links are already
supported in the kernel and bpftool.
This patch set added such support for bpf iterator links. Patch #1 added generic
support for the link querying interface. Patch #2 implemented callback functions
for map element bpf iterators. Patch #3 added bpftool support.
Changelogs:
v3 -> v4:
. return target specific link_info even if target_name buffer
is empty. (Andrii)
v2 -> v3:
. remove extra '\t' when fdinfo prints map_id to make parsing
consistent. (Andrii)
v1 -> v2:
. fix checkpatch.pl warnings. (Jakub)
====================
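A hedged userspace sketch of the generic query path the cover letter mentions,
iterating link IDs with libbpf's syscall wrappers and reading each link's info
(error handling trimmed):

#include <stdio.h>
#include <unistd.h>
#include <linux/bpf.h>
#include <bpf/bpf.h>

/* Walk all BPF links in the system and print their id, type and prog id,
 * using only the generic link query interface described above. */
static void dump_links(void)
{
	__u32 id = 0;

	while (!bpf_link_get_next_id(id, &id)) {
		struct bpf_link_info info = {};
		__u32 len = sizeof(info);
		int fd = bpf_link_get_fd_by_id(id);

		if (fd < 0)
			continue;
		if (!bpf_obj_get_info_by_fd(fd, &info, &len))
			printf("link id %u type %u prog %u\n",
			       info.id, info.type, info.prog_id);
		close(fd);
	}
}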