Linus Torvalds [Sat, 18 Nov 2017 01:51:33 +0000 (17:51 -0800)]
Merge tag 'kbuild-misc-v4.15' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
Pull Kbuild misc updates from Masahiro Yamada:
- Clean up and fix RPM package build
- Fix a warning in DEB package build
- Improve coccicheck script
- Improve some semantic patches
* tag 'kbuild-misc-v4.15' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
docs: dev-tools: coccinelle: delete out of date wiki reference
coccinelle: orplus: reorganize to improve performance
coccinelle: use exists to improve efficiency
builddeb: Pass the kernel:debarch substvar to dpkg-genchanges
Coccinelle: use false positive annotation
coccinelle: fix verbose message about .cocci file being run
coccinelle: grep Options and Requires fields more precisely
Coccinelle: make DEBUG_FILE option more useful
coccinelle: api: detect identical chip data arrays
coccinelle: Improve setup_timer.cocci matching
Coccinelle: setup_timer: improve messages from setup_timer
kbuild: rpm-pkg: do not force -jN in submake
kbuild: rpm-pkg: keep spec file until make mrproper
kbuild: rpm-pkg: fix jobserver unavailable warning
kbuild: rpm-pkg: replace $RPM_BUILD_ROOT with %{buildroot}
kbuild: rpm-pkg: fix build error when CONFIG_MODULES is disabled
kbuild: rpm-pkg: refactor mkspec with here doc
kbuild: rpm-pkg: clean up mkspec
kbuild: rpm-pkg: install vmlinux.bz2 unconditionally
kbuild: rpm-pkg: remove ppc64 specific image handling
Linus Torvalds [Sat, 18 Nov 2017 01:45:29 +0000 (17:45 -0800)]
Merge tag 'kbuild-v4.15' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
Pull Kbuild updates from Masahiro Yamada:
"One of the most remarkable improvements in this cycle is, Kbuild is
now able to cache the result of shell commands. Some variables are
expensive to compute, for example, $(call cc-option,...) invokes the
compiler. It is not efficient to redo this computation every time,
even when we are not actually building anything. Kbuild creates a
hidden file ".cache.mk" that contains invoked shell commands and their
results. The speed-up should be noticeable.
Summary:
- Fix arch build issues (hexagon, sh)
- Clean up various Makefiles and scripts
- Fix wrong usage of {CFLAGS,LDFLAGS}_MODULE in arch Makefiles
- Cache variables that are expensive to compute
- Improve cc-ldoption and ld-option for Clang
- Optimize output directory creation"
* tag 'kbuild-v4.15' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: (30 commits)
kbuild: move coccicheck help from scripts/Makefile.help to top Makefile
sh: decompressor: add shipped files to .gitignore
frv: .gitignore: ignore vmlinux.lds
selinux: remove unnecessary assignment to subdir-
kbuild: specify FORCE in Makefile.headersinst as .PHONY target
kbuild: remove redundant mkdir from ./Kbuild
kbuild: optimize object directory creation for incremental build
kbuild: create object directories simpler and faster
kbuild: filter-out PHONY targets from "targets"
kbuild: remove redundant $(wildcard ...) for cmd_files calculation
kbuild: create directory for make cache only when necessary
sh: select KBUILD_DEFCONFIG depending on ARCH
kbuild: fix linker feature test macros when cross compiling with Clang
kbuild: shrink .cache.mk when it exceeds 1000 lines
kbuild: do not call cc-option before KBUILD_CFLAGS initialization
kbuild: Cache a few more calls to the compiler
kbuild: Add a cache for generated variables
kbuild: add forward declaration of default target to Makefile.asm-generic
kbuild: remove KBUILD_SUBDIR_ASFLAGS and KBUILD_SUBDIR_CCFLAGS
hexagon/kbuild: replace CFLAGS_MODULE with KBUILD_CFLAGS_MODULE
...
Randy Dunlap [Fri, 17 Nov 2017 23:31:47 +0000 (15:31 -0800)]
EXPERT Kconfig menu: fix broken EXPERT menu
Clean up the EXPERT menu (yet again).
Move FHANDLE and CHECKPOINT_RESTORE into the primary EXPERT menu since
they already depend on EXPERT.
Move BPF_SYSCALL and USERFAULTFD out of the EXPERT Kconfig symbols menu
list since they do not depend on EXPERT and were breaking the continuity
of that menu list.
Move all of the KALLSYMS Kconfig symbols to the end of the EXPERT menu.
This separates the kernel services from the build options.
This patch depends on [PATCH] pci: move PCI_QUIRKS to the PCI bus menu
(https://lkml.org/lkml/2017/11/2/907).
Link: http://lkml.kernel.org/r/72e4465a-a5ff-cb3c-1a90-11aa4861b161@infradead.org
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net> [BPF]
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Commit e73d57f6a73b ("mm: drop useless local parameters of
__register_one_node()") removed the last user of parent_node().
The parent_node() macro on the SPARC64 platform is now unnecessary.
Remove it for cleanup.
Link: http://lkml.kernel.org/r/1504234599-29533-6-git-send-email-douly.fnst@cn.fujitsu.com
Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
Reported-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Davidlohr Bueso [Fri, 17 Nov 2017 23:31:18 +0000 (15:31 -0800)]
sysvipc: make get_maxid O(1) again
For a custom microbenchmark on a 3.30GHz Xeon SandyBridge, which calls
IPC_STAT over and over, it was measured that, on average, the cost of
ipc_get_maxid() grows with the number of keys.

By having the max_id available in O(1) we save all those cycles for each
semctl(_STAT) command; the idr_find can be expensive -- which some real
(customer) workloads actually poll on.
Note that this used to be the case, until commit 7ed8208531b ("ipc:
store ipcs into IDRs"). The cost is the extra idr_find when doing
RMIDs, but we simply go backwards, and should not take too many
iterations to find the new value.
Davidlohr Bueso [Fri, 17 Nov 2017 23:31:08 +0000 (15:31 -0800)]
sysvipc: unteach ids->next_id for !CHECKPOINT_RESTORE
Patch series "sysvipc: ipc-key management improvements".
Here are a few improvements I spotted while eyeballing Guillaume's
rhashtable implementation for ipc keys. The first and fourth patches
are the interesting ones, the middle two are trivial.
This patch (of 4):
The next_id object-allocation functionality was introduced in commit 86755ff47348 ("ipc: add sysctl to specify desired next object id").
Given that these new entries are _only_ exported under the
CONFIG_CHECKPOINT_RESTORE option, there is no point for the common case
to even know about ->next_id. As such, rewrite ipc_buildid() so that it
can do away with the field, as well as with unnecessary branches when
adding a new identifier. The end result also better differentiates the
two cases, so the code ends up being cleaner, albeit with some small
duplication in the default case.
Arnd Bergmann [Fri, 17 Nov 2017 23:31:04 +0000 (15:31 -0800)]
initramfs: use time64_t timestamps
The cpio format uses a 32-bit number to encode file timestamps, which
breaks initramfs support in 2038. This reinterprets the timestamp as
unsigned, to give us another 68 years and avoids breaking until 2106.
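As a rough illustration of the reinterpretation (a hedged sketch, not
the actual initramfs parser; cpio_mtime() is a hypothetical helper):

    #include <stdint.h>

    typedef int64_t time64_t;

    /* cpio stores mtime as 8 hex digits, i.e. a 32-bit value.
     * Parsing it as unsigned and widening to time64_t moves the
     * wraparound from 2038 (signed 32-bit) out to 2106. */
    static time64_t cpio_mtime(uint32_t raw)
    {
        return (time64_t)raw; /* zero-extended, never negative */
    }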
Victor Chibotaru [Fri, 17 Nov 2017 23:30:50 +0000 (15:30 -0800)]
Makefile: support flag -fsanitizer-coverage=trace-cmp
The flag enables Clang instrumentation of comparison operations
(currently not supported by GCC). This instrumentation is needed by the
new KCOV device to collect comparison operands.
Victor Chibotaru [Fri, 17 Nov 2017 23:30:46 +0000 (15:30 -0800)]
kcov: support comparison operands collection
Enables kcov to collect comparison operands from instrumented code.
This is done by using Clang's -fsanitize-coverage=trace-cmp instrumentation
(currently not available for GCC).
The comparison operands help a lot in fuzz testing. E.g. they are used
in Syzkaller to cover the interiors of conditional statements with way
less attempts and thus make previously unreachable code reachable.
To allow separate collection of coverage and comparison operands two
different work modes are implemented. Mode selection is now done via a
KCOV_ENABLE ioctl call with corresponding argument value.
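A hedged sketch of how a fuzzer might select the new mode from
userspace (the ioctl encodings and constants below follow the kcov
uapi header as commonly documented, but treat them as assumptions):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    #define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
    #define KCOV_ENABLE     _IO('c', 100)
    #define KCOV_TRACE_PC   0   /* plain coverage, as before */
    #define KCOV_TRACE_CMP  1   /* new: comparison operands */
    #define COVER_SIZE      (64 << 10)

    int main(void)
    {
        int fd = open("/sys/kernel/debug/kcov", O_RDWR);
        unsigned long *cover;

        ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE);
        cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        /* the argument to KCOV_ENABLE selects what is collected */
        ioctl(fd, KCOV_ENABLE, KCOV_TRACE_CMP);
        /* ... run the syscall under test, then read operands
         * back from the cover buffer ... */
        return 0;
    }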
Borislav Petkov [Fri, 17 Nov 2017 23:30:38 +0000 (15:30 -0800)]
kernel/panic.c: add TAINT_AUX
This is the gist of a patch which we've been forward-porting in our
kernels for a long time now and it probably would make a good sense to
have such TAINT_AUX flag upstream which can be used by each distro etc,
how they see fit. This way, we won't need to forward-port a distro-only
version indefinitely.
Add an auxiliary taint flag to be used by distros and others. This
obviates the need to forward-port whatever internal solutions people
have in favor of a single flag which they can map arbitrarily to a
definition of their pleasing.
The "X" mnemonic could also mean eXternal, which would be taint from a
distro or something else but not the upstream kernel. We will use it to
mark modules for which we don't provide support. I.e., a really
eXternal module.
Link: http://lkml.kernel.org/r/20170911134533.dp5mtyku5bongx4c@pd.tnic
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Kees Cook <keescook@chromium.org>
Cc: Jessica Yu <jeyu@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Takashi Iwai <tiwai@suse.de>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Jeff Mahoney <jeffm@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Gargi Sharma [Fri, 17 Nov 2017 23:30:34 +0000 (15:30 -0800)]
pid: remove pidhash
pidhash is no longer required as all the information can be looked up
from the idr tree. nr_hashed represented the number of pids that had
been hashed. Since nr_hashed and PIDNS_HASH_ADDING are no longer
relevant, they have been renamed to pid_allocated and PIDNS_ADDING
respectively.
[gs051095@gmail.com: v6]
Link: http://lkml.kernel.org/r/1507760379-21662-3-git-send-email-gs051095@gmail.com
Link: http://lkml.kernel.org/r/1507583624-22146-3-git-send-email-gs051095@gmail.com
Signed-off-by: Gargi Sharma <gs051095@gmail.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com> [ia64]
Cc: Julia Lawall <julia.lawall@lip6.fr>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Pavel Tatashin <pasha.tatashin@oracle.com>
Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Gargi Sharma [Fri, 17 Nov 2017 23:30:30 +0000 (15:30 -0800)]
pid: replace pid bitmap implementation with IDR API
Patch series "Replacing PID bitmap implementation with IDR API", v4.
This series replaces kernel bitmap implementation of PID allocation with
IDR API. These patches are written to simplify the kernel by replacing
custom code with calls to generic code.
The following are the stats for pid and pid_namespace object files
before and after the replacement. There is a noteworthy change between
the IDR and bitmap implementation.
Before:
   text    data    bss      dec     hex  filename
   8447    3894     64    12405    3075  kernel/pid.o

After:
   text    data    bss      dec     hex  filename
   3397     304      0     3701     e75  kernel/pid.o

Before:
   text    data    bss      dec     hex  filename
   5692    1842    192     7726    1e2e  kernel/pid_namespace.o

After:
   text    data    bss      dec     hex  filename
   2854     216     16     3086     c0e  kernel/pid_namespace.o
The following are the stats for ps, pstree and calling readdir on /proc
for 10,000 processes.
ps:
          With IDR API    With bitmap
  real    0m1.479s        0m2.319s
  user    0m0.070s        0m0.060s
  sys     0m0.289s        0m0.516s

pstree:
          With IDR API    With bitmap
  real    0m1.024s        0m1.794s
  user    0m0.348s        0m0.612s
  sys     0m0.184s        0m0.264s

proc:
          With IDR API    With bitmap
  real    0m0.059s        0m0.074s
  user    0m0.000s        0m0.004s
  sys     0m0.016s        0m0.016s
This patch (of 2):
Replace the current bitmap implementation for Process ID allocation.
Functions that are no longer required, for example, free_pidmap(),
alloc_pidmap(), etc. are removed. The rest of the functions are
modified to use the IDR API. The change makes PID allocation less
complex by replacing custom code with calls to the generic API.
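As a rough sketch of the shape of the new allocation path (illustrative
only; the real alloc_pid() also handles nested namespaces and reserved
PIDs):

    /* allocate the next PID cyclically from the namespace's IDR,
     * replacing the old open-coded bitmap scan */
    idr_preload(GFP_KERNEL);
    spin_lock_irq(&pidmap_lock);
    nr = idr_alloc_cyclic(&ns->idr, pid, RESERVED_PIDS + 1,
                          pid_max, GFP_ATOMIC);
    spin_unlock_irq(&pidmap_lock);
    idr_preload_end();
    if (nr < 0)
        goto out_free;  /* e.g. -ENOSPC when the range is full */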
Arvind Yadav [Fri, 17 Nov 2017 23:30:15 +0000 (15:30 -0800)]
rapidio: constify rio_device_id
rio_device_id structures are not supposed to change at runtime. The rio
driver works with a const 'id_table', so mark the non-const
rio_device_id structs as const.
Dave Young [Fri, 17 Nov 2017 23:30:12 +0000 (15:30 -0800)]
kdump: print a message in case parse_crashkernel_mem resulted in zero bytes
parse_crashkernel_mem() silently returns if we get zero bytes in the
parsing function. It is useful for debugging to add a message,
especially if the kernel cannot boot correctly.
Add a pr_info instead of a pr_warn because size = 0 is expected
behavior; e.g. with crashkernel=2G-4G:128M, size will be 0 when system
memory is less than 2G.
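The shape of the change is roughly the following (a sketch; the exact
message text in the merged patch may differ):

    /* in parse_crashkernel_mem(), after computing the size for
     * the matching memory range */
    if (size == 0)
        pr_info("crashkernel size resulted in zero bytes\n");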
Link: http://lkml.kernel.org/r/20171114080129.GA6115@dhcp-128-65.nay.redhat.com
Signed-off-by: Dave Young <dyoung@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Bhupesh Sharma <bhsharma@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Oleg Nesterov [Fri, 17 Nov 2017 23:30:08 +0000 (15:30 -0800)]
kernel/signal.c: remove the no longer needed SIGNAL_UNKILLABLE check in complete_signal()
complete_signal() checks SIGNAL_UNKILLABLE before it starts to destroy
the thread group; today this is wrong in many ways.
If nothing else, fatal_signal_pending() should always imply that the
whole thread group (except ->group_exit_task if it is not NULL) is
killed, this check breaks the rule.
After the previous changes we can rely on sig_task_ignored();
sig_fatal(sig) && SIGNAL_UNKILLABLE can only be true if we actually want
to kill this task and sig == SIGKILL OR it is traced and debugger can
intercept the signal.
This should hopefully fix the problem reported by Dmitry. This
test-case
    static int init(void *arg)
    {
        for (;;)
            pause();
    }

    int main(void)
    {
        char stack[16 * 1024];

        for (;;) {
            int pid = clone(init, stack + sizeof(stack)/2,
                            CLONE_NEWPID | SIGCHLD, NULL);
            assert(pid > 0);
            /* the remainder of the original test case (killing
             * and reaping the child) is elided in this excerpt */
        }
    }
triggers the WARN_ON_ONCE(!(task->jobctl & JOBCTL_STOP_PENDING)) in
task_participate_group_stop(). do_signal_stop()->signal_group_exit()
checks SIGNAL_GROUP_EXIT and return false, but task_set_jobctl_pending()
checks fatal_signal_pending() and does not set JOBCTL_STOP_PENDING.
And this should fix the minor security problem reported by Kyle:
SECCOMP_RET_TRACE can miss fatal_signal_pending() the same way if the
task is the root of a pid namespace.
Oleg Nesterov [Fri, 17 Nov 2017 23:30:04 +0000 (15:30 -0800)]
kernel/signal.c: protect the SIGNAL_UNKILLABLE tasks from !sig_kernel_only() signals
Change sig_task_ignored() to drop the SIG_DFL && !sig_kernel_only()
signals even if force == T. This simplifies the next change and this
matches the same check in get_signal() which will drop these signals
anyway.
Oleg Nesterov [Fri, 17 Nov 2017 23:30:01 +0000 (15:30 -0800)]
kernel/signal.c: protect the traced SIGNAL_UNKILLABLE tasks from SIGKILL
The comment in sig_ignored() says "Tracers may want to know about even
ignored signals" but SIGKILL can not be reported to debugger and it is
just wrong to return 0 in this case: SIGKILL should only kill the
SIGNAL_UNKILLABLE task if it comes from the parent ns.
Change sig_ignored() to ignore ->ptrace if sig == SIGKILL and rely on
sig_task_ignored().
SIGSTOP coming from within the namespace is not really right either, but
at least the debugger can intercept it, and we can't drop it here because
this would break "gdb -p 1": ptrace_attach() won't work. Perhaps we will
add another ->ptrace check later, we will see.
Colin Ian King [Fri, 17 Nov 2017 23:29:57 +0000 (15:29 -0800)]
fat: remove redundant assignment of 0 to slots
The variable slots is being assigned a value of zero that is never read;
slots is updated again a few lines later. Remove this redundant
assignment.
Cleans clang warning: Value stored to 'slots' is never read
Link: http://lkml.kernel.org/r/20171017140258.22536-1-colin.king@canonical.com
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Ryusuke Konishi [Fri, 17 Nov 2017 23:29:43 +0000 (15:29 -0800)]
nilfs2: align block comments of nilfs_sufile_truncate_range() at *
Fix the following checkpatch warning:
WARNING: Block comments should align the * on each line
#633: FILE: sufile.c:633:
+/**
+ * nilfs_sufile_truncate_range - truncate range of segment array
Elena Reshetova [Fri, 17 Nov 2017 23:29:39 +0000 (15:29 -0800)]
fs, nilfs: convert nilfs_root.count from atomic_t to refcount_t
atomic_t variables are currently used to implement reference counters
with the following properties:
- counter is initialized to 1 using atomic_set()
- a resource is freed upon counter reaching zero
- once counter reaches zero, its further
increments aren't allowed
- counter schema uses basic atomic operations
(set, inc, inc_not_zero, dec_and_test, etc.)
Such atomic variables should be converted to a newly provided refcount_t
type and API that prevents accidental counter overflows and underflows.
This is important since overflows and underflows can lead to
use-after-free situation and be exploitable.
The variable nilfs_root.count is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.
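A generic before/after sketch of this kind of conversion (the freeing
step is simplified here, not the nilfs-specific diff):

    /* before: plain atomic counter, wraps silently on misuse */
    atomic_set(&root->count, 1);
    atomic_inc(&root->count);
    if (atomic_dec_and_test(&root->count))
        /* free the root */;

    /* after: refcount_t saturates and warns on
     * overflow/underflow instead of wrapping */
    refcount_set(&root->count, 1);
    refcount_inc(&root->count);
    if (refcount_dec_and_test(&root->count))
        /* free the root */;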
Link: http://lkml.kernel.org/r/1509367935-3086-3-git-send-email-konishi.ryusuke@lab.ntt.co.jp
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Andreas Rohner [Fri, 17 Nov 2017 23:29:35 +0000 (15:29 -0800)]
nilfs2: fix race condition that causes file system corruption
There is a race condition between nilfs_dirty_inode() and
nilfs_set_file_dirty().
When a file is opened, nilfs_dirty_inode() is called to update the
access timestamp in the inode. It calls __nilfs_mark_inode_dirty() in a
separate transaction. __nilfs_mark_inode_dirty() caches the ifile
buffer_head in the i_bh field of the inode info structure and marks it
as dirty.
After some data was written to the file in another transaction, the
function nilfs_set_file_dirty() is called, which adds the inode to the
ns_dirty_files list.
Then the segment construction calls nilfs_segctor_collect_dirty_files(),
which goes through the ns_dirty_files list and checks the i_bh field.
If there is a cached buffer_head in i_bh it is not marked as dirty
again.
Since nilfs_dirty_inode() and nilfs_set_file_dirty() use separate
transactions, it is possible that a segment construction that writes out
the ifile occurs in-between the two. If this happens the inode is not
on the ns_dirty_files list, but its ifile block is still marked as dirty
and written out.
In the next segment construction, the data for the file is written out
and nilfs_bmap_propagate() updates the b-tree. Eventually the bmap root
is written into the i_bh block, which is not dirty, because it was
written out in another segment construction.
As a result the bmap update can be lost, which leads to file system
corruption. Either the virtual block address points to an unallocated
DAT block, or the DAT entry will be reused for something different.
The error can remain undetected for a long time. A typical error
message would be one of the "bad btree" errors or a warning that a DAT
entry could not be found.
This bug can be reproduced reliably by a simple benchmark that creates
and overwrites millions of 4k files.
Link: http://lkml.kernel.org/r/1509367935-3086-2-git-send-email-konishi.ryusuke@lab.ntt.co.jp
Signed-off-by: Andreas Rohner <andreas.rohner@gmx.net>
Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Tested-by: Andreas Rohner <andreas.rohner@gmx.net>
Tested-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Kees Cook [Fri, 17 Nov 2017 23:29:32 +0000 (15:29 -0800)]
fs/nilfs2: convert timers to use timer_setup()
In preparation for unconditionally passing the struct timer_list pointer
to all timer callbacks, switch to using the new timer_setup() and
from_timer() to pass the timer pointer explicitly. This requires adding
a pointer to hold the timer's target task, as the lifetime of sc_task
doesn't appear to match the timer's task.
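The general pattern of such a conversion looks like this (a sketch of
the timer_setup()/from_timer() idiom; the nilfs2 names are given as
best recalled, so treat the specifics as illustrative):

    /* old style: the callback got an opaque unsigned long
     *
     *   setup_timer(&sci->sc_timer, nilfs_construction_timeout,
     *               (unsigned long)sci);
     */

    /* new style: the callback receives the timer_list pointer
     * and recovers its container via from_timer() */
    static void nilfs_construction_timeout(struct timer_list *t)
    {
        struct nilfs_sc_info *sci = from_timer(sci, t, sc_timer);
        /* ... original timeout handling ... */
    }

    /* at init time: */
    timer_setup(&sci->sc_timer, nilfs_construction_timeout, 0);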
Joe Lawrence [Fri, 17 Nov 2017 23:29:28 +0000 (15:29 -0800)]
sysctl: check for UINT_MAX before unsigned int min/max
Mikulas noticed that the existing do_proc_douintvec_minmax_conv() and
do_proc_dopipe_max_size_conv() introduced in this patchset
inconsistently handle overflow and min/max range inputs.
In do_proc_do*() routines which store values into unsigned int variables
(4 bytes wide for 64-bit builds), first validate that the input unsigned
long value (8 bytes wide for 64-bit builds) will fit inside the smaller
unsigned int variable. Then check that the unsigned int value falls
inside the specified parameter min, max range. Otherwise the unsigned
long -> unsigned int conversion drops leading bits from the input value,
leading to the inconsistent pattern Mikulas documented above.
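In code, the ordering described above amounts to something like the
following (a simplified sketch of such a conversion helper; the name
and exact signature are illustrative):

    static int do_proc_uint_conv_sketch(const unsigned long *lvalp,
                                        unsigned int *valp,
                                        unsigned int min,
                                        unsigned int max)
    {
        /* 1) the 8-byte input must fit in the 4-byte target */
        if (*lvalp > UINT_MAX)
            return -EINVAL;
        /* 2) only then check the parameter's min/max range */
        if (*lvalp < min || *lvalp > max)
            return -ERANGE;
        *valp = *lvalp;
        return 0;
    }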
Link: http://lkml.kernel.org/r/1507658689-11669-5-git-send-email-joe.lawrence@redhat.com
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This leaves a window of time between initial assignment and rounding
that may be visible to other threads. (For example, one thread sets a
non-rounded value to pipe_max_size while another reads its value.)
Similar reads of pipe_max_size are potentially racy:
Add a new proc_dopipe_max_size() that consolidates reading the new value
from the user buffer, verifying bounds, and calling round_pipe_size()
with a single assignment to pipe_max_size.
Link: http://lkml.kernel.org/r/1507658689-11669-4-git-send-email-joe.lawrence@redhat.com
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joe Lawrence [Fri, 17 Nov 2017 23:29:21 +0000 (15:29 -0800)]
pipe: avoid round_pipe_size() nr_pages overflow on 32-bit
round_pipe_size() contains a right-bit-shift expression which may
overflow, which would cause undefined results in a subsequent
roundup_pow_of_two() call.
    static inline unsigned int round_pipe_size(unsigned int size)
    {
        unsigned long nr_pages;

        nr_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
        return roundup_pow_of_two(nr_pages) << PAGE_SHIFT;
    }

On 32-bit, unsigned long is only 4 bytes wide, so for a size near
UINT_MAX the addition wraps around and nr_pages ends up 0.
This is bad because roundup_pow_of_two(n) is undefined when n == 0!
64-bit is not a problem as the unsigned int size is 4 bytes wide
(similar to 32-bit) and the larger, 8 byte wide unsigned long, is
sufficient to handle the largest value of the bit shift expression:
size=0xffffffff nr_pages=0x100000
Modify round_pipe_size() to return 0 when nr_pages is 0, and update its
callers to handle it accordingly.
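A sketch of the guarded version (illustrative; the merged fix may
differ in detail):

    static inline unsigned int round_pipe_size(unsigned int size)
    {
        unsigned long nr_pages;

        nr_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
        if (nr_pages == 0)
            return 0;   /* callers treat 0 as invalid */
        return roundup_pow_of_two(nr_pages) << PAGE_SHIFT;
    }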
Link: http://lkml.kernel.org/r/1507658689-11669-3-git-send-email-joe.lawrence@redhat.com
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joe Lawrence [Fri, 17 Nov 2017 23:29:17 +0000 (15:29 -0800)]
pipe: match pipe_max_size data type with procfs
Patch series "A few round_pipe_size() and pipe-max-size fixups", v3.
While backporting Michael's "pipe: fix limit handling" patchset to a
distro-kernel, Mikulas noticed that current upstream pipe limit handling
contains a few problems:
1 - procfs signed wrap: echo'ing a large number into
/proc/sys/fs/pipe-max-size and then cat'ing it back out shows a
negative value.
2 - round_pipe_size() nr_pages overflow on 32bit: this would
subsequently try roundup_pow_of_two(0), which is undefined.
3 - visible non-rounded pipe-max-size value: there is no mutual
exclusion or protection between the time pipe_max_size is assigned
a raw value from proc_dointvec_minmax() and when it is rounded.
4 - unsigned long -> unsigned int conversion makes for potential odd
return errors from do_proc_douintvec_minmax_conv() and
do_proc_dopipe_max_size_conv().
This version underwent the same testing as v1:
https://marc.info/?l=linux-kernel&m=150643571406022&w=2
NeilBrown [Fri, 17 Nov 2017 23:29:13 +0000 (15:29 -0800)]
autofs: don't fail mount for transient error
Currently if the autofs kernel module gets an error when writing to the
pipe which links to the daemon, then it marks the whole mountpoint as
catatonic, and it will stop working.
It is possible that the error is transient. This can happen if the
daemon is slow and more than 16 requests queue up. If a subsequent
process tries to queue a request, and is then signalled, the write to
the pipe will return -ERESTARTSYS and autofs will take that as total
failure.
So change the code to treat -ERESTARTSYS and -ENOMEM as transient
failures which only abort the current request, not the whole mountpoint.
It isn't a crash or a data corruption, but having autofs mountpoints
suddenly stop working is rather inconvenient.
Ian said:
: And given the problems with a half dozen (or so) user space applications
: consuming large amounts of CPU under heavy mount and umount activity this
: could happen more easily than we expect.
Link: http://lkml.kernel.org/r/87y3norvgp.fsf@notabene.neil.brown.name
Signed-off-by: NeilBrown <neilb@suse.com>
Acked-by: Ian Kent <raven@themaw.net>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Jason Baron [Fri, 17 Nov 2017 23:29:06 +0000 (15:29 -0800)]
epoll: remove ep_call_nested() from ep_eventpoll_poll()
ep_call_nested() is used in ep_eventpoll_poll(), the .poll routine for
an epoll fd, to prevent excessively deep epoll nesting and to prevent
circular paths.
However, we are already preventing these conditions during
EPOLL_CTL_ADD. In terms of too deep epoll chains, we do in fact allow
deep nesting of the epoll fds themselves (deeper than EP_MAX_NESTS),
however we don't allow more than EP_MAX_NESTS when an epoll file
descriptor is actually connected to a wakeup source. Thus, we do not
require the use of ep_call_nested(), since ep_eventpoll_poll(), which is
called via ep_scan_ready_list() only continues nesting if there are
events available.
Since ep_call_nested() is implemented using a global lock, applications
that make use of nested epoll can see large performance improvements
with this change.
Davidlohr said:
: Improvements are quite obscene actually, such as for the following
: epoll_wait() benchmark with 2 level nesting on a 80 core IvyBridge:
:
: ncpus    vanilla    dirty      delta
:     1    2447092    3028315    +23.75%
:     4     231265    2986954    +1191.57%
:     8     121631    2898796    +2283.27%
:    16      59749    2902056    +4757.07%
:    32      26837    2326314    +8568.30%
:    64      12926    1341281    +10276.61%
:
: (http://linux-scalability.org/epoll/epoll-test.c)
Link: http://lkml.kernel.org/r/1509430214-5599-1-git-send-email-jbaron@akamai.com
Signed-off-by: Jason Baron <jbaron@akamai.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Salman Qazi <sqazi@google.com>
Cc: Hou Tao <houtao1@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Jason Baron [Fri, 17 Nov 2017 23:29:02 +0000 (15:29 -0800)]
epoll: avoid calling ep_call_nested() from ep_poll_safewake()
ep_poll_safewake() is used to wakeup potentially nested epoll file
descriptors. The function uses ep_call_nested() to prevent entering the
same wake up queue more than once, and to prevent excessively deep
wakeup paths (deeper than EP_MAX_NESTS). However, this is not necessary
since we are already preventing these conditions during EPOLL_CTL_ADD.
This saves extra function calls, and avoids taking a global lock during
the ep_call_nested() calls.
I have, however, left ep_call_nested() for the CONFIG_DEBUG_LOCK_ALLOC
case, since ep_call_nested() keeps track of the nesting level, and this
is required by the call to spin_lock_irqsave_nested(). It would be nice
to remove the ep_call_nested() calls for the CONFIG_DEBUG_LOCK_ALLOC
case as well; however, it's not clear how to simply pass the nesting level
through multiple wake_up() levels without more surgery. In any case, I
don't think CONFIG_DEBUG_LOCK_ALLOC is generally used for production.
This patch also apparently fixes a workload at Google that Salman Qazi
reported by completely removing the poll_safewake_ncalls->lock from
wakeup paths.
Link: http://lkml.kernel.org/r/1507920533-8812-1-git-send-email-jbaron@akamai.com
Signed-off-by: Jason Baron <jbaron@akamai.com>
Acked-by: Davidlohr Bueso <dbueso@suse.de>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Salman Qazi <sqazi@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Shakeel Butt [Fri, 17 Nov 2017 23:28:59 +0000 (15:28 -0800)]
epoll: account epitem and eppoll_entry to kmemcg
A userspace application can directly trigger the allocations from
eventpoll_epi and eventpoll_pwq slabs. A buggy or malicious application
can consume a significant amount of system memory by triggering such
allocations. Indeed we have seen in production where a buggy
application was leaking the epoll references and causing a burst of
eventpoll_epi and eventpoll_pwq slab allocations. This patch opts the
eventpoll_epi and eventpoll_pwq slabs in to kmemcg charging.
There is a per-user limit (~4% of total memory if no highmem) on these
caches. I think it is too generous particularly in the scenario where
jobs of multiple users are running on the system and the administrator
is reducing cost by overcommitting the memory. This is unaccounted
kernel memory and will not be considered by the oom-killer. I think by
accounting it to kmemcg, for systems with kmem accounting enabled, we
can provide better isolation between jobs of different users.
Link: http://lkml.kernel.org/r/20171003021519.23907-1-shakeelb@google.com
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Greg Thelen <gthelen@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joe Perches [Fri, 17 Nov 2017 23:28:52 +0000 (15:28 -0800)]
checkpatch: add --strict test for lines ending in [ or (
Lines that end in an open bracket or open parenthesis are generally hard
to follow. Lines following those ending with open parenthesis are also
rarely aligned to that open parenthesis.
Joe Perches [Fri, 17 Nov 2017 23:28:41 +0000 (15:28 -0800)]
checkpatch: printks always need a KERN_<LEVEL>
There was code in checkpatch that allowed continuation printks to be
used without KERN_CONT. Remove the continuation check and always
require a KERN_<LEVEL>.
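For reference, the pattern checkpatch now expects (a minimal example):

    unsigned long rx = 10, tx = 20;

    /* every printk carries an explicit severity prefix */
    printk(KERN_INFO "stats:");
    /* continuations must now be explicit as well */
    printk(KERN_CONT " rx=%lu tx=%lu\n", rx, tx);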
Yury Norov [Fri, 17 Nov 2017 23:28:31 +0000 (15:28 -0800)]
lib: test module for find_*_bit() functions
find_bit functions are widely used in the kernel, including hot paths.
This module tests performance of those functions in 2 typical scenarios:
randomly filled bitmap with relatively equal distribution of set and
cleared bits, and sparse bitmap which has 1 set bit for 500 cleared
bits.
Davidlohr Bueso [Fri, 17 Nov 2017 23:28:27 +0000 (15:28 -0800)]
lib/rbtree-test: lower default params
Fengguang reported soft lockups while running the rbtree and interval
tree test modules. The logic for these tests all occurs in the init
phase, and we are currently pounding it with the default values for the
number of nodes and the number of iterations of each test. Reduce the
latter by two orders of magnitude. This does not reduce the value of
the tests: one thousand iterations by default is enough to get the
picture.
Cheng Jian [Fri, 17 Nov 2017 23:28:23 +0000 (15:28 -0800)]
tools/lib/traceevent/parse-filter.c: clean up clang build warning
The uniform structure filter_arg selects its union member based on
enum filter_arg_type. However, some functions convert between the
related enum types implicitly.
warning: implicit conversion from enumeration type 'enum filter_exp_type'
to different enumeration type 'enum filter_op_type'
warning: implicit conversion from enumeration type 'enum filter_cmp_type'
to different enumeration type 'enum filter_exp_type'
Link: http://lkml.kernel.org/r/1509938415-113825-1-git-send-email-cj.chengjian@huawei.com
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
Cc: Kate Stewart <kstewart@linuxfoundation.org>
Cc: Xie XiuQi <xiexiuqi@huawei.com>
Cc: Li Bin <huawei.libin@huawei.com>
Cc: Steven Rostedt (Red Hat) <rostedt@goodmis.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Stephen Bates [Fri, 17 Nov 2017 23:28:16 +0000 (15:28 -0800)]
lib/genalloc.c: make the avail variable an atomic_long_t
If the amount of resources allocated to a gen_pool exceeds 2^32 then
the avail atomic overflows, and this causes problems when clients try
to borrow resources from the pool. This is only expected to be an
issue on 64-bit systems.

Add the <linux/atomic.h> header to pull in atomic_long* operations, so
that 32-bit systems continue to use atomic_t while 64-bit systems can
use atomic64_t.
Link: http://lkml.kernel.org/r/1509033843-25667-1-git-send-email-sbates@raithlin.com
Signed-off-by: Stephen Bates <sbates@raithlin.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Daniel Mentz <danielmentz@google.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Peter Zijlstra [Fri, 17 Nov 2017 23:28:08 +0000 (15:28 -0800)]
lib/int_sqrt: optimize initial value compute
The initial value (@m) compute is:

    m = 1UL << (BITS_PER_LONG - 2);
    while (m > x)
        m >>= 2;

which is a linear search for the highest even bit smaller than or equal
to @x. We can implement this as a binary search using __fls() (or
better, where it is implemented in hardware):

    m = 1UL << (__fls(x) & ~1UL);
Especially for small values of @x, which are the more common arguments
when doing a CDF on idle times, the linear search is close to worst
case, while the binary search of __fls() is a constant 6 (or 5 on
32-bit) branches.
Averages computed over all values <128k using a LFSR to generate order.
Cold numbers have a LFSR based branch trace buffer 'confuser' ran between
each int_sqrt() invocation.
Link: http://lkml.kernel.org/r/20171020164644.936577234@infradead.org
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Suggested-by: Joe Perches <joe@perches.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Anshul Garg <aksgarg1989@gmail.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: David Miller <davem@davemloft.net>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Cc: Michael Davidson <md@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Peter Zijlstra [Fri, 17 Nov 2017 23:28:04 +0000 (15:28 -0800)]
lib/int_sqrt: optimize small argument
The current int_sqrt() computation is sub-optimal for the case of small
@x, which is the interesting case when we're going to do cumulative
distribution functions on idle times. We assume idle time to be a
random variable, where the target residency of the deepest idle state
gives an upper bound on the variable (5e6ns on recent Intel chips).
In the case of small @x, the compute loop:

    while (m != 0) {
        b = y + m;
        y >>= 1;
        if (x >= b) {
            x -= b;
            y += m;
        }
        m >>= 2;
    }

can be reduced to:

    while (m > x)
        m >>= 2;

because y == 0, b == m, and until x >= m, y will remain 0.

While this is computationally equivalent, it runs much faster because
there's less code, in particular fewer branches.
Averages computed over all values <128k using a LFSR to generate order.
Cold numbers have a LFSR based branch trace buffer 'confuser' ran between
each int_sqrt() invocation.
Link: http://lkml.kernel.org/r/20171020164644.876503355@infradead.org
Fixes: 2c6a204a2fda ("lib/int_sqrt.c: optimize square root algorithm")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Suggested-by: Anshul Garg <aksgarg1989@gmail.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Joe Perches <joe@perches.com>
Cc: David Miller <davem@davemloft.net>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Michael Davidson <md@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This include was added by commit 05926db44b31 ("BUG: headers with
BUG/BUG_ON etc. need linux/bug.h") because BUG_ON() was used in this
header at that time.
Some time later, commit 949b3699f6cd ("lib: radix-tree: check accounting
of existing slot replacement users") removed the use of BUG_ON() from
this header.
Since then, there is no reason to include <linux/bug.h>.
Link: http://lkml.kernel.org/r/1505660151-4383-1-git-send-email-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Chris Mi <chrism@mellanox.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Masahiro Yamada [Fri, 17 Nov 2017 23:27:49 +0000 (15:27 -0800)]
include/linux/bitfield.h: include <linux/build_bug.h> instead of <linux/bug.h>
Since commit 6dd927f10703 ("bug: split BUILD_BUG stuff out into
<linux/build_bug.h>"), including <linux/build_bug.h> is the better way
to pull in the minimal headers needed for the BUILD_BUG() family.
Link: http://lkml.kernel.org/r/1505700775-19826-1-git-send-email-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Cc: Dinan Gunawardena <dinan.gunawardena@netronome.com>
Cc: Kalle Valo <kvalo@codeaurora.org>
Cc: Ian Abbott <abbotti@mev.co.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joe Perches [Fri, 17 Nov 2017 23:27:46 +0000 (15:27 -0800)]
get_maintainer: add more --self-test options
Add tests for duplicate section headers, missing section content, link and
scm reachability.
Miscellanea:
o Add --self-test=<foo> options
(a comma separated list of any of sections, patterns, links or scm)
where the default without options is all tests
o Rename check_maintainers_patterns to self_test
o Rename self_test_pattern_info to self_test_info
The GCC randomize layout plugin can randomize the member offsets of
sensitive kernel data structures. To use this feature, certain
annotations and members are added to the structures which affect the
member offsets even if this plugin is not used.
All of these structures are completely randomized, except for task_struct
which leaves out some of its members. All the other members are wrapped
within an anonymous struct with the __randomize_layout attribute. This is
done using the randomized_struct_fields_start and
randomized_struct_fields_end defines.
When the plugin is disabled, the behaviour of this attribute can vary
based on the GCC version. For GCC 5.1+, this attribute maps to
__designated_init otherwise it is just an empty define but the anonymous
structure is still present. For other compilers, both
randomized_struct_fields_start and randomized_struct_fields_end default
to empty defines meaning the anonymous structure is not introduced at
all.
So, if a module compiled with Clang, such as a BPF program, needs to
access task_struct fields such as pid and comm, the offsets of these
members as recognized by Clang are different from those recognized by
modules compiled with GCC. If GCC 4.6+ is used to build the kernel,
this can be solved by introducing appropriate defines for Clang so that
the anonymous structure is seen when determining the offsets for the
members.
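A hedged sketch of the kind of defines this adds for Clang (the
placement and exact guard are assumptions):

    /* let Clang wrap the same fields in an anonymous struct that
     * GCC 4.6+ sees, so member offsets in task_struct agree */
    #define randomized_struct_fields_start  struct {
    #define randomized_struct_fields_end    };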
Link: http://lkml.kernel.org/r/20171109064645.25581-1-sandipan@linux.vnet.ibm.com
Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Kate Stewart <kstewart@linuxfoundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Alexei Starovoitov <ast@fb.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Kees Cook [Fri, 17 Nov 2017 23:27:24 +0000 (15:27 -0800)]
bug: fix "cut here" location for __WARN_TAINT architectures
Prior to v4.11, x86 used warn_slowpath_fmt() for handling WARN()s.
After WARN() was moved to using UD0 on x86, the warning text started
appearing _before_ the "cut here" line. This appears to have been a
long-standing bug on architectures that used __WARN_TAINT, but it didn't
get fixed.
v4.11 and earlier on x86:
------------[ cut here ]------------
WARNING: CPU: 0 PID: 2956 at drivers/misc/lkdtm_bugs.c:65 lkdtm_WARNING+0x21/0x30
This is a warning message
Modules linked in:
v4.12 and later on x86:
This is a warning message
------------[ cut here ]------------
WARNING: CPU: 1 PID: 2982 at drivers/misc/lkdtm_bugs.c:68 lkdtm_WARNING+0x15/0x20
Modules linked in:
With this fix:
------------[ cut here ]------------
This is a warning message
WARNING: CPU: 3 PID: 3009 at drivers/misc/lkdtm_bugs.c:67 lkdtm_WARNING+0x15/0x20
Since the __FILE__ reporting happens as part of the UD0 handler, it
isn't trivial to move the message to after the WARNING line, but at
least we can fix the position of the "cut here" line so all the various
logging tools will start including the actual runtime warning message
again, when they follow the instruction and "cut here".
Arnd Bergmann [Fri, 17 Nov 2017 23:27:13 +0000 (15:27 -0800)]
iopoll: avoid -Wint-in-bool-context warning
When we pass the result of a multiplication as the timeout or the delay,
we can get a warning from gcc-7:
drivers/mmc/host/bcm2835.c:596:149: error: '*' in boolean context, suggest '&&' instead [-Werror=int-in-bool-context]
drivers/mfd/arizona-core.c:247:195: error: '*' in boolean context, suggest '&&' instead [-Werror=int-in-bool-context]
drivers/gpu/drm/sun4i/sun4i_hdmi_i2c.c:49:27: error: '*' in boolean context, suggest '&&' instead [-Werror=int-in-bool-context]
The warning is a bit questionable inside of a macro, but this is
intentional on the side of the gcc developers. It is also an indication
of another problem: we evaluate the timeout and sleep arguments multiple
times, which can have undesired side-effects when those are complex
expressions.
This changes the two iopoll variants to use local variables for storing
copies of the timeouts. This adds some more type safety, and avoids
both the double-evaluation and the gcc warning.
Andi Kleen [Fri, 17 Nov 2017 23:27:06 +0000 (15:27 -0800)]
kernel debug: support resetting WARN_ONCE for all architectures
Some architectures store the WARN_ONCE state in the flags field of the
bug_entry. Clear that one too when resetting once state through
/sys/kernel/debug/clear_warn_once
Pointed out by Michael Ellerman. Improves the earlier patch that added
clear_warn_once.
[ak@linux.intel.com: add a missing ifdef CONFIG_MODULES]
Link: http://lkml.kernel.org/r/20171020170633.9593-1-andi@firstfloor.org
[akpm@linux-foundation.org: fix unused var warning]
[akpm@linux-foundation.org: Use 0200 for clear_warn_once file, per mpe]
[akpm@linux-foundation.org: clear BUGFLAG_DONE in clear_once_table(), per mpe]
Link: http://lkml.kernel.org/r/20171019204642.7404-1-andi@firstfloor.org
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Tested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Andi Kleen [Fri, 17 Nov 2017 23:27:03 +0000 (15:27 -0800)]
kernel debug: support resetting WARN*_ONCE
I like _ONCE warnings because it's guaranteed that they don't flood the
log.
During testing I find it useful to reset the state of the once warnings,
so that I can rerun tests and see if they trigger again, or can
guarantee that a test run always hits the same warnings.
This patch adds a debugfs interface to reset all the _ONCE warnings so
that they appear again:
echo 1 > /sys/kernel/debug/clear_warn_once
This is implemented by putting all the warning booleans into a special
section, and clearing it.
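A sketch of the mechanism (section name and linker symbols here are
illustrative):

    /* each _ONCE site keeps its state in a dedicated section */
    static bool __section(".data.once") __warned;

    /* the debugfs write handler can then reset every site at once */
    extern char __start_once[], __end_once[];

    static void clear_once_flags(void)
    {
        memset(__start_once, 0, __end_once - __start_once);
    }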
Kees Cook [Fri, 17 Nov 2017 23:26:59 +0000 (15:26 -0800)]
sh/boot: add static stack-protector to pre-kernel
The sh decompressor code triggers stack-protector code generation when
using CONFIG_CC_STACKPROTECTOR_STRONG. As done for arm and mips, add a
simple static stack-protector canary. As this wasn't protected before,
the risk of using a weak canary is minimized. Once the kernel is
actually up, a better canary is chosen.
Link: http://lkml.kernel.org/r/1506972007-80614-2-git-send-email-keescook@chromium.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Rich Felker <dalias@libc.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Michal Marek <mmarek@suse.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Roman Gushchin [Fri, 17 Nov 2017 23:26:45 +0000 (15:26 -0800)]
proc, coredump: add CoreDumping flag to /proc/pid/status
Right now there is no convenient way to check if a process is being
coredumped at the moment.
It might be necessary to recognize such state to prevent killing the
process and getting a broken coredump. Writing a large core might take
significant time, and the process is unresponsive during it, so it might
be killed by timeout, if another process is monitoring and
killing/restarting hanging tasks.
We're getting a significant number of corrupted coredump files on
machines in our fleet, just because processes are being killed by
timeout in the middle of the core writing process.
We do have a process health check, and some agent is responsible for
restarting processes which are not responding for health check requests.
Writing a large coredump to the disk can easily exceed the reasonable
timeout (especially on an overloaded machine).
This flag will allow the agent to distinguish processes which are being
coredumped, extend the timeout for them, and let them produce a full
coredump file.
To provide an ability to detect if a process is in the state of being
coredumped, we can expose a boolean CoreDumping flag in
/proc/pid/status.
Vlastimil Babka [Fri, 17 Nov 2017 23:26:41 +0000 (15:26 -0800)]
mm, compaction: remove unneeded pageblock_skip_persistent() checks
Commit f3c931633a59 ("mm, compaction: persistently skip hugetlbfs
pageblocks") has introduced pageblock_skip_persistent() checks into
migration and free scanners, to make sure pageblocks that should be
persistently skipped are marked as such, regardless of the
ignore_skip_hint flag.
Since the previous patch introduced a new no_set_skip_hint flag, the
ignore flag no longer prevents marking pageblocks as skipped. Therefore
we can remove the special cases. The relevant pageblocks will be marked
as skipped by the common logic which marks each pageblock where no page
could be isolated. This makes the code simpler.
Link: http://lkml.kernel.org/r/20171102121706.21504-3-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Vlastimil Babka [Fri, 17 Nov 2017 23:26:38 +0000 (15:26 -0800)]
mm, compaction: split off flag for not updating skip hints
Pageblock skip hints were added as a heuristic for compaction, which
shares core code with CMA. Since CMA reliability would suffer from the
heuristics, compact_control flag ignore_skip_hint was added for the CMA
use case. Since 105064ce03df ("mm/compaction: respect ignore_skip_hint
in update_pageblock_skip") the flag also means that CMA won't *update*
the skip hints in addition to ignoring them.
Today, direct compaction can also ignore the skip hints in a last-resort
attempt, but there's no reason not to set them when isolation fails in
such a case. Thus, this patch splits off a new no_set_skip_hint
flag to avoid the updating, which only CMA sets. This should improve
the heuristics a bit, and allow us to simplify the persistent skip bit
handling as the next step.
Link: http://lkml.kernel.org/r/20171102121706.21504-2-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Vlastimil Babka [Fri, 17 Nov 2017 23:26:34 +0000 (15:26 -0800)]
mm, compaction: extend pageblock_skip_persistent() to all compound pages
pageblock_skip_persistent() checks for HugeTLB pages of pageblock order.
When clearing pageblock skip bits for compaction, the bits are not
cleared for such pageblocks, because they cannot contain base pages
suitable for migration, nor free pages to use as migration targets.
This optimization can be simply extended to all compound pages of order
equal or larger than pageblock order, because migrating such pages (if
they support it) cannot help sub-pageblock fragmentation. This includes
THP's and also gigantic HugeTLB pages, which the current implementation
doesn't persistently skip due to a strict pageblock_order equality check
and not recognizing tail pages.
While THP pages are generally less "persistent" than HugeTLB, we can
still expect that if a THP exists at the point of
__reset_isolation_suitable(), it will exist also during the subsequent
compaction run. The time difference here could be actually smaller than
between a compaction run that sets a (non-persistent) skip bit on a THP,
and the next compaction run that observes it.
Link: http://lkml.kernel.org/r/20171102121706.21504-1-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
David Rientjes [Fri, 17 Nov 2017 23:26:30 +0000 (15:26 -0800)]
mm, compaction: persistently skip hugetlbfs pageblocks
It is pointless to migrate hugetlb memory as part of memory compaction
if the hugetlb size is equal to the pageblock order. No defragmentation
is occurring in this condition.
It is also pointless for the freeing scanner to scan a pageblock where
a hugetlb page is pinned. Unconditionally skip these pageblocks, and do
so persistently so that they are not rescanned until it is observed
that these hugepages are no longer pinned.
It would also be possible to do this by involving the hugetlb subsystem
in marking pageblocks to no longer be skipped when their hugetlb pages
are freed. This is a simple solution that doesn't involve any
additional subsystems in pageblock skip manipulation.
David Rientjes [Fri, 17 Nov 2017 23:26:27 +0000 (15:26 -0800)]
mm, compaction: kcompactd should not ignore pageblock skip
Kcompactd is needlessly ignoring pageblock skip information. It is
doing MIGRATE_SYNC_LIGHT compaction, which is no more powerful than
MIGRATE_SYNC compaction.
If compaction recently failed to isolate memory from a set of
pageblocks, there is nothing to indicate that kcompactd will be able to
do so, or that attempting to isolate memory would be beneficial.
Use the pageblock skip hint to avoid rescanning pageblocks needlessly
until that information is reset.
Miles Chen [Fri, 17 Nov 2017 23:26:19 +0000 (15:26 -0800)]
lib/dma-debug.c: fix incorrect pfn calculation
dma-debug reports the following warning:
WARNING: CPU: 3 PID: 298 at kernel-4.4/lib/dma-debug.c:604 debug_dma_assert_idle+0x1a8/0x230()
DMA-API: cpu touching an active dma mapped cacheline [cln=0x00000882300]
CPU: 3 PID: 298 Comm: vold Tainted: G W O 4.4.22+ #1
Hardware name: MT6739 (DT)
Call trace:
debug_dma_assert_idle+0x1a8/0x230
wp_page_copy.isra.96+0x118/0x520
do_wp_page+0x4fc/0x534
handle_mm_fault+0xd4c/0x1310
do_page_fault+0x1c8/0x394
do_mem_abort+0x50/0xec
I found that debug_dma_alloc_coherent() and debug_dma_free_coherent()
assume that dma_alloc_coherent() always returns a linear address.
However it's possible that dma_alloc_coherent() returns a non-linear
address. In this case, page_to_pfn(virt_to_page(virt)) will return an
incorrect pfn. If the pfn is valid and mapped as a COW page, we will
hit the warning when doing wp_page_copy().
Fix this by calculating pfn for linear and non-linear addresses.
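The fix boils down to distinguishing the two address kinds, roughly (a
sketch using the kernel's existing helpers):

    /* linear (lowmem) mappings can go through virt_to_page();
     * vmalloc'ed (non-linear) addresses need vmalloc_to_pfn() */
    if (is_vmalloc_addr(virt))
        pfn = vmalloc_to_pfn(virt);
    else
        pfn = page_to_pfn(virt_to_page(virt));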
Vitaly Wool [Fri, 17 Nov 2017 23:26:16 +0000 (15:26 -0800)]
mm/z3fold.c: use kref to prevent page free/compact race
There is a race in the current z3fold implementation between
do_compact() called in a work queue context and the page release
procedure when page's kref goes to 0.
do_compact() may be waiting for page lock, which is released by
release_z3fold_page_locked right before putting the page onto the
"stale" list, and then the page may be freed as do_compact() modifies
its contents.
The mechanism currently implemented to handle that (checking the
PAGE_STALE flag) is not reliable enough. Instead, we'll use page's kref
counter to guarantee that the page is not released if its compaction is
scheduled. It then becomes compaction function's responsibility to
decrease the counter and quit immediately if the page was actually
freed.
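The resulting pattern is roughly the following (a sketch; field and
function names here only approximate z3fold's internals, and the real
locking is more involved):

    /* hold a reference on behalf of the queued work ... */
    kref_get(&zhdr->refcount);
    queue_work(pool->compact_wq, &zhdr->work);

    /* ... and in do_compact(), drop it when done; if ours was
     * the last reference the page is released by the kref
     * callback and must not be touched afterwards */
    if (kref_put(&zhdr->refcount, release_z3fold_page))
        return;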
Arnd Bergmann [Fri, 17 Nov 2017 23:26:12 +0000 (15:26 -0800)]
mm: fix nodemask printing
The cleanup caused build warnings for constant mask pointers:
mm/mempolicy.c: In function `mpol_to_str':
./include/linux/nodemask.h:108:11: warning: the comparison will always evaluate as `true' for the address of `nodes' will never be NULL [-Waddress]
An earlier workaround I suggested was incorporated in the version that
got merged, but that only solved the problem for gcc-7 and higher, while
gcc-4.6 through gcc-6.x still warn.
This changes the printing again to use inline functions that make it
clear to the compiler that the line that does the NULL check has no idea
whether the argument is a constant NULL.
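A sketch of the idea (helper names are illustrative):

    /* the NULL check moves into an inline, so at the printk call
     * site the compiler can no longer prove the argument is a
     * non-NULL constant and has nothing to warn about */
    static inline unsigned int
    __nodemask_pr_numnodes(const nodemask_t *m)
    {
        return m ? nodes_weight(*m) : 0;
    }

    static inline const unsigned long *
    __nodemask_pr_bits(const nodemask_t *m)
    {
        return m ? m->bits : NULL;
    }

    #define nodemask_pr_args(maskp) \
        __nodemask_pr_numnodes(maskp), __nodemask_pr_bits(maskp)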
Linus Torvalds [Fri, 17 Nov 2017 22:58:01 +0000 (14:58 -0800)]
Merge tag 'trace-v4.15' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull tracing updates from Steven Rostedt:
- allow module init functions to be traced
- clean up some events that are unused, or unused with the current config (saves space)
- clean up of trace histogram code
- add support for preempt and interrupt enabled/disable events
- other various clean ups
* tag 'trace-v4.15' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (30 commits)
tracing, thermal: Hide cpu cooling trace events when not in use
tracing, thermal: Hide devfreq trace events when not in use
ftrace: Kill FTRACE_OPS_FL_PER_CPU
perf/ftrace: Small cleanup
perf/ftrace: Fix function trace events
perf/ftrace: Revert ("perf/ftrace: Fix double traces of perf on ftrace:function")
tracing, dma-buf: Remove unused trace event dma_fence_annotate_wait_on
tracing, memcg, vmscan: Hide trace events when not in use
tracing/xen: Hide events that are not used when X86_PAE is not defined
tracing: mark trace_test_buffer as __maybe_unused
printk: Remove superfluous memory barriers from printk_safe
ftrace: Clear hashes of stale ips of init memory
tracing: Add support for preempt and irq enable/disable events
tracing: Prepare to add preempt and irq trace events
ftrace/kallsyms: Have /proc/kallsyms show saved mod init functions
ftrace: Add freeing algorithm to free ftrace_mod_maps
ftrace: Save module init functions kallsyms symbols for tracing
ftrace: Allow module init functions to be traced
ftrace: Add a ftrace_free_mem() function for modules to use
tracing: Reimplement log2
...
Linus Torvalds [Fri, 17 Nov 2017 22:56:14 +0000 (14:56 -0800)]
Merge tag 'linux-kselftest-4.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest
Pull kselftest updates from Shuah Khan:
"This update to Kselftest consists of cleanup patches, fixes, and a new
test for ion buffer sharing.
Fixes include changes to skip firmware tests on systems that aren't
configured to support them, as opposed to failing them"
* tag 'linux-kselftest-4.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
selftests: firmware: skip unsupported custom firmware fallback tests
selftests: firmware: skip unsupported async loading tests
selftests: memfd_test.c: fix compilation warning.
selftests/ftrace: Introduce exit_pass and exit_fail
selftests: ftrace: add more config fragments
android/ion: userspace test utility for ion buffer sharing
selftests: remove obsolete kconfig fragment for cpu-hotplug
selftests: vdso_test: support ARM64 targets
selftests/ftrace: Do not use arch dependent do_IRQ as a target function
selftests: breakpoints: fix compile error on breakpoint_test_arm64
selftests: add missing test result status in memory-hotplug test
selftests/exec: include cwd in long path calculation
selftests: seccomp: update .gitignore with newly added tests
selftests: vm: Update .gitignore with newly added tests
selftests: timers: Update .gitignore with newly added tests
Linus Torvalds [Fri, 17 Nov 2017 22:51:24 +0000 (14:51 -0800)]
Merge tag 'acpi-fix-4.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull ACPI fix from Rafael Wysocki:
"This fixes a possible memory leak in an error code path in one of the
utility routines (Xiongfeng Wang)"
* tag 'acpi-fix-4.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
ACPI / utils: Fix memory leak in acpi_evaluate_reference() error path