fgraph: Convert ret_stack tasklist scanning to rcu
author     Davidlohr Bueso <dave@stgolabs.net>
           Mon, 7 Sep 2020 01:33:26 +0000 (18:33 -0700)
committer  Steven Rostedt (VMware) <rostedt@goodmis.org>
           Tue, 22 Sep 2020 01:06:02 +0000 (21:06 -0400)
commit     8182bd97d62a9184501302c16eaf85d8b163a7c1
tree       4913385409f94966a7932c13a3c09f21c0109f2d
parent     d96811511f533c04c39e75eadac0ed0d7c860643
fgraph: Convert ret_stack tasklist scanning to rcu

It seems that alloc_retstack_tasklist() can also take a lockless
approach for scanning the tasklist, instead of using the big global
tasklist_lock. For this we also kill another deprecated and RCU-unsafe
tsk->thread_group user, replacing it with for_each_process_thread(),
maintaining semantics.
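
A minimal sketch of the resulting scan, abbreviated from
alloc_retstack_tasklist() in kernel/trace/fgraph.c as it looks with
this change applied (error handling and allocation setup elided):

	rcu_read_lock();
	for_each_process_thread(g, t) {
		if (start == end) {
			ret = -EAGAIN;
			goto unlock;
		}
		if (t->ret_stack == NULL) {
			atomic_set(&t->trace_overrun, 0);
			t->curr_ret_stack = -1;
			t->curr_ret_depth = -1;
			/* Make sure the tasks see the -1 first: */
			smp_wmb();
			t->ret_stack = ret_stack_list[start++];
		}
	}
unlock:
	rcu_read_unlock();

The read_lock(&tasklist_lock)/read_unlock() pair becomes an RCU
read-side critical section, and the deprecated do_each_thread()/
while_each_thread() pair becomes for_each_process_thread().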

Here tasklist_lock does not protect anything other than the list
against concurrent fork/exit. And considering that the whole thing
is capped by FTRACE_RETSTACK_ALLOC_SIZE (32), it should not be a
problem to have a potentially stale, yet stable, list. The task cannot
go away either, so we don't risk racing with ftrace_graph_exit_task()
which clears the retstack.
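
To make that cap concrete, here is a sketch of the caller's retry loop,
based on start_graph_tracing() in kernel/trace/fgraph.c: ret_stacks are
handed out in batches, so any thread missed by a stale snapshot is
simply picked up on the next pass.

	/* ret_stack_list holds FTRACE_RETSTACK_ALLOC_SIZE (32)
	 * entries; -EAGAIN just means the batch ran out before the
	 * scan finished, so we go around again.
	 */
	do {
		ret = alloc_retstack_tasklist(ret_stack_list);
	} while (ret == -EAGAIN);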

The tsk->ret_stack management is not protected by tasklist_lock, being
serialized with the corresponding publish/subscribe barriers against
concurrent ftrace_push_return_trace(). In addition, this plays nicer
with cachelines by avoiding the two atomic ops of the rwlock's
read_lock()/read_unlock() in the uncontended case.
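
The barrier pairing referred to here, sketched from the publish side
(graph_init_task()/alloc_retstack_tasklist()) and the subscribe side
(ftrace_push_return_trace()) in kernel/trace/fgraph.c:

	/* Publisher (sketch): initialize the per-task fields, then
	 * order them before the ret_stack pointer becomes visible.
	 */
	t->curr_ret_stack = -1;
	t->curr_ret_depth = -1;
	smp_wmb();			/* publish */
	t->ret_stack = ret_stack;

	/* Subscriber (sketch): test the pointer before reading
	 * anything else behind it.
	 */
	if (!current->ret_stack)
		return -EBUSY;
	smp_rmb();			/* pairs with the smp_wmb() above */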

Link: https://lkml.kernel.org/r/20200907013326.9870-1-dave@stgolabs.net
Acked-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
kernel/trace/fgraph.c