Hannes Reinecke [Tue, 2 Jun 2020 11:09:50 +0000 (13:09 +0200)]
dm zoned: per-device reclaim
Instead of having one reclaim workqueue for the entire set we should
be allocating a reclaim workqueue per device; doing so will reduce
contention and should boost performance for a multi-device setup.
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Tue, 2 Jun 2020 11:09:47 +0000 (13:09 +0200)]
dm zoned: allocate temporary superblock for tertiary devices
Checking the tertiary superblock just consists of validating UUIDs,
crcs, and the generation number; it doesn't have contents which would
be required during the actual operation.
So allocate a temporary superblock when checking tertiary devices to
avoid having to store it together with the 'real' superblocks.
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Tue, 2 Jun 2020 11:09:46 +0000 (13:09 +0200)]
dm zoned: convert to xarray
The zones array is getting really large, and large arrays tend to
wreak havoc with the CPU caches. So convert it to an xarray to become
more cache friendly.
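A minimal sketch of what such a conversion can look like (illustrative only; the helper names and struct layout are assumptions, the xarray calls themselves are the standard <linux/xarray.h> API):

	struct dmz_metadata {
		struct xarray zones;	/* was: a flat struct dm_zone array; xa_init() in metadata setup */
		/* ... */
	};

	/* look up a zone by id; xa_load() returns NULL if nothing is stored there */
	static struct dm_zone *dmz_get(struct dmz_metadata *zmd, unsigned int zone_id)
	{
		return xa_load(&zmd->zones, zone_id);
	}

	/* insert a newly allocated zone; xa_insert() returns -EBUSY if the index
	 * is already occupied, so the caller must free the zone on error */
	static int dmz_insert(struct dmz_metadata *zmd, struct dm_zone *zone)
	{
		return xa_insert(&zmd->zones, zone->id, zone, GFP_KERNEL);
	}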
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Colin Ian King <colin.king@canonical.com> # fix leak in dmz_insert Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Mikulas Patocka [Tue, 2 Jun 2020 13:34:41 +0000 (15:34 +0200)]
dm ebs: use dm_bufio_forget_buffers
Use dm_bufio_forget_buffers instead of a block-by-block loop that
calls dm_bufio_forget. dm_bufio_forget_buffers is faster than the loop
because it searches for used buffers using the rb-tree.
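Roughly, the change replaces a per-block loop with a single ranged call (sketch; ec->bufio stands in for the target's dm_bufio_client, and the range is expressed as start block plus block count):

	/* before: forget each block individually */
	for (i = 0; i < nr_blocks; i++)
		dm_bufio_forget(ec->bufio, block + i);

	/* after: one call that walks the rb-tree of held buffers */
	dm_bufio_forget_buffers(ec->bufio, block, nr_blocks);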
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Mikulas Patocka [Tue, 2 Jun 2020 13:34:39 +0000 (15:34 +0200)]
dm bufio: clean up rbtree block ordering
dm-bufio uses unnatural ordering in the rb-tree: blocks with smaller
numbers are put in the right node and blocks with bigger numbers in
the left node.
Reverse that logic so that the ordering is natural.
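For illustration, a natural-ordering rb-tree lookup looks like this (a generic sketch, not the exact dm-bufio code; field names are illustrative):

	struct dm_buffer *b;
	struct rb_node *n = root->rb_node;

	while (n) {
		b = container_of(n, struct dm_buffer, node);

		if (b->block == block)
			return b;

		/* natural ordering: smaller blocks to the left, bigger to the right */
		n = block < b->block ? n->rb_left : n->rb_right;
	}
	return NULL;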
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Mikulas Patocka [Wed, 27 May 2020 11:04:46 +0000 (07:04 -0400)]
dm bufio: delete unused and inefficient dm_bufio_discard_buffers
There is no user for this interface. If it is needed in the future, it can
be reimplemented to walk the rbtree of buffers instead of doing
block-by-block lookups.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Fri, 22 May 2020 08:58:53 +0000 (10:58 +0200)]
dm zoned: remove leftover hunk for switching to sequential zones
Remove a leftover hunk to switch from random zones to sequential
zones when selecting a reclaim zone; the logic has moved into the
caller and this hunk is now pointless.
Fixes: 34f5affd04c4 ("dm zoned: separate random and cache zones") Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Tue, 19 May 2020 08:14:23 +0000 (10:14 +0200)]
dm zoned: terminate reclaim on congestion
When dmz_get_chunk_mapping() selects a zone which is under reclaim
we should terminate the reclaim copy process. Since we're changing
the zone itself, reclaim needs to run again afterwards anyway.
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Tue, 19 May 2020 08:14:20 +0000 (10:14 +0200)]
dm zoned: separate random and cache zones
Instead of lumping emulated zones together with random zones we
should be handling them as separate 'cache' zones. This improves
code readability and allows an easier implementation of different
cache policies.
Also add additional allocation flags to separate the type (cache,
random, or sequential) from the purpose (e.g. reclaim).
Also switch the allocation policy to not use random zones as buffer
zones if cache zones are present. This avoids a performance drop when
all cache zones are used.
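A sketch of how such a split can look (the flag names and the dmz_alloc_zone() signature here are illustrative assumptions, not necessarily the exact ones used):

	/* zone type requested by the caller */
	#define DMZ_ALLOC_RND		0x01
	#define DMZ_ALLOC_CACHE		0x02
	#define DMZ_ALLOC_SEQ		0x04
	/* purpose of the allocation */
	#define DMZ_ALLOC_RECLAIM	0x10

	/* reclaim asks for a cache zone; random zones are no longer used as
	 * buffer zones when cache zones are available */
	zone = dmz_alloc_zone(zmd, DMZ_ALLOC_CACHE | DMZ_ALLOC_RECLAIM);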
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Tue, 19 May 2020 08:14:19 +0000 (10:14 +0200)]
dm zoned: return NULL if dmz_get_zone_for_reclaim() fails to find a zone
The only case where dmz_get_zone_for_reclaim() cannot return a zone is
if the respective lists are empty. So we should just return a simple
NULL value here as we really don't have an error code which would make
sense.
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
dm zoned: Avoid 64-bit division error in dmz_fixup_devices
When building arm32 allyesconfig:
ld.lld: error: undefined symbol: __aeabi_uldivmod
>>> referenced by dm-zoned-target.c
>>> md/dm-zoned-target.o:(dmz_ctr) in archive drivers/built-in.a
dmz_fixup_devices uses DIV_ROUND_UP with variables of type sector_t. As
such, it should be using DIV_ROUND_UP_SECTOR_T, which handles this
automatically.
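The difference in a nutshell: DIV_ROUND_UP expands to a plain 64-bit division, for which arm32 kernels have no libgcc helper, while DIV_ROUND_UP_SECTOR_T goes through sector_div(). A sketch of the kind of change involved (variable names are made up):

	sector_t capacity = zoned_dev->capacity;		/* hypothetical */
	sector_t zone_nr_sectors = zoned_dev->zone_nr_sectors;	/* hypothetical */

	/* emits a call to __aeabi_uldivmod on arm32: */
	nr_zones = DIV_ROUND_UP(capacity, zone_nr_sectors);

	/* uses sector_div() internally, so no libgcc helper is needed: */
	nr_zones = DIV_ROUND_UP_SECTOR_T(capacity, zone_nr_sectors);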
Fixes: 70978208ec91 ("dm zoned: metadata version 2") Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Mike Snitzer [Thu, 14 May 2020 16:55:39 +0000 (12:55 -0400)]
dm: use DMDEBUG macros now that they use pr_debug variants
Now that DMDEBUG uses pr_debug and DMDEBUG_LIMIT uses
pr_debug_ratelimited, clean up DM's two direct pr_debug callers to use
them and get the benefit of consistent DM_FMT formatting of debugging
messages.
While doing so, dm-mpath.c:dm_report_EIO() was switched over to using
DMDEBUG_LIMIT due to the potential for error handling floods in the IO
completion path.
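For reference, the macros end up looking roughly like this (paraphrased, so treat the exact expansion as approximate; the example call and its arguments are made up):

	#define DMDEBUG(fmt, ...) \
		pr_debug(DM_FMT(fmt), ##__VA_ARGS__)
	#define DMDEBUG_LIMIT(fmt, ...) \
		pr_debug_ratelimited(DM_FMT(fmt), ##__VA_ARGS__)

	/* a rate-limited caller in the IO completion path then simply does: */
	DMDEBUG_LIMIT("%s: returning EIO", dm_device_name(md));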
The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these is a flexible array member[1][2],
introduced in C99:
struct foo {
        int stuff;
        struct boo array[];
};
By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kinds of undefined-behavior bugs from being
inadvertently introduced[3] into the codebase from now on.
Also, notice that dynamic memory allocations won't be affected by
this change:
"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]
sizeof(flexible-array-member) triggers a warning because flexible array
members have incomplete type[1]. There are some instances of code in
which the sizeof operator is being incorrectly/erroneously applied to
zero-length arrays and the result is zero. Such instances may be hiding
some bugs. So this work (flexible-array member conversions) will also
help us get rid of those sorts of issues completely.
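As a hedged illustration of the conversion and of how such structures are typically allocated afterwards (struct_size() from <linux/overflow.h> does the size calculation with overflow checking; 'count' is a made-up variable):

	struct foo {
		int stuff;
		struct boo array[];	/* was: struct boo array[0]; */
	};

	/* allocation sized for 'count' trailing elements */
	struct foo *p = kmalloc(struct_size(p, array, count), GFP_KERNEL);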
Hannes Reinecke [Mon, 11 May 2020 08:24:30 +0000 (10:24 +0200)]
dm zoned: metadata version 2
Implement handling for metadata version 2. The new metadata adds a
label and UUID for the device-mapper device, and an additional UUID for
the underlying block devices.
It also allows for an additional regular drive to be used for
emulating random access zones. The emulated zones will be placed
logically in front of the zones from the zoned block device, causing
the superblocks and metadata to be stored on that device.
The first zone of the original zoned device will be used to hold
another, tertiary copy of the metadata; this copy carries a generation
number of 0 and is never updated; it's just used for identification.
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Bob Liu <bob.liu@oracle.com> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Mon, 11 May 2020 08:24:29 +0000 (10:24 +0200)]
dm zoned: ignore metadata zone in dmz_alloc_zone()
When looking up zones in dmz_alloc_zone() we need to ignore
metadata zones so as not to accidentally overwrite metadata.
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Mon, 11 May 2020 08:24:28 +0000 (10:24 +0200)]
dm zoned: Reduce logging output on startup
dm-zoned is becoming quite chatty during startup; reduce the noise
by moving some information to 'debug' level.
Suggested-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Mon, 11 May 2020 08:24:27 +0000 (10:24 +0200)]
dm zoned: add metadata logging functions
Use the metadata label for logging and not the underlying
device.
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Mon, 11 May 2020 08:24:26 +0000 (10:24 +0200)]
dm zoned: use dmz_zone_to_dev() when handling metadata I/O
Use accessors to retrieve the device pointer in preparation
for adding an additional block device.
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Mon, 11 May 2020 08:24:25 +0000 (10:24 +0200)]
dm zoned: replace 'target' pointer in the bio context
Replace the 'target' pointer in the bio context with the
device pointer as this is what's actually used.
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Bob Liu <bob.liu@oracle.com> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Mon, 11 May 2020 08:24:24 +0000 (10:24 +0200)]
dm zoned: remove 'dev' argument from reclaim
Use the dmz_zone_to_dev() mapping function to remove the
'dev' argument from reclaim.
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Bob Liu <bob.liu@oracle.com> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Mon, 11 May 2020 08:24:23 +0000 (10:24 +0200)]
dm zoned: Introduce dmz_dev_is_dying() and dmz_check_dev()
Introduce accessors dmz_dev_is_dying() and dmz_check_dev() to
avoid having to reference the devices directly.
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Bob Liu <bob.liu@oracle.com> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Mon, 11 May 2020 08:24:22 +0000 (10:24 +0200)]
dm zoned: introduce dmz_metadata_label() to format device name
Introduce dmz_metadata_label() to format the device-mapper device
name and use it instead of the device name of the underlying device.
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Mon, 11 May 2020 08:24:21 +0000 (10:24 +0200)]
dm zoned: move fields from struct dmz_dev to dmz_metadata
Move fields from the device structure into the metadata structure
and provide accessor functions.
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Mon, 11 May 2020 08:24:20 +0000 (10:24 +0200)]
dm zoned: store device in struct dmz_sb
Store the device together with the superblock so that
we don't have to go through the metadata to find it.
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Mon, 11 May 2020 08:24:19 +0000 (10:24 +0200)]
dm zoned: use array for superblock zones
Instead of storing just the first superblock zone and calculating
the secondary relative to it, use an array to hold
the superblock zones.
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Mon, 11 May 2020 08:24:18 +0000 (10:24 +0200)]
dm zoned: store zone id within the zone structure and kill dmz_id()
Instead of calculating the zone index from the offset within the
zone array, store the index within the structure itself. With that,
the helper dmz_id() becomes pointless and can be replaced with accessing
the ->id value directly.
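In effect the change is (sketch; the original dmz_id() derived the index from the zone's position in the array):

	/* before: index derived from the zone's position in the zones array */
	static unsigned int dmz_id(struct dmz_metadata *zmd, struct dm_zone *zone)
	{
		return zone - zmd->zones;
	}

	/* after: the zone carries its own index, callers read zone->id directly */
	struct dm_zone {
		/* ... */
		unsigned int	id;
	};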
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Bob Liu <bob.liu@oracle.com> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Mon, 11 May 2020 08:24:17 +0000 (10:24 +0200)]
dm zoned: add 'message' callback
Add a callback for 'dmsetup message' to allow the reclaim process
to be triggered manually.
E.g.
dmsetup message /dev/dm-X 0 reclaim
will start the reclaim process even if the default threshold
of 50 percent of free random zones is not reached.
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Bob Liu <bob.liu@oracle.com> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Hannes Reinecke [Mon, 11 May 2020 08:24:16 +0000 (10:24 +0200)]
dm zoned: add 'status' callback
Add callback to supply information for 'dmsetup status'
and 'dmsetup table'. The output for 'dmsetup status' is
0 <size> zoned <nr_zones> zones <nr_unmap_rnd>/<nr_rnd> random <nr_unmap_seq>/<nr_seq> sequential
where <nr_unmap_rnd> is the number of unmapped (i.e. free) random zones,
<nr_rnd> the total number of random zones, <nr_unmap_seq> the number
of unmapped sequential zones, and <nr_seq> the total number of
sequential zones.
Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Bob Liu <bob.liu@oracle.com> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
dm mpath: add Historical Service Time Path Selector
This new selector keeps an exponential moving average of the service
time for each path (loosely defined as the delta between start_io and
end_io), and uses this along with the number of inflight requests to
estimate future service time for a path. Since we don't have a prober
to account for temporarily slow paths, re-try "slow" paths every once in
a while (num_paths * historical_service_time). To account for fast paths
transitioning to slow, if a path has not completed any request within
(num_paths * historical_service_time), limit the number of outstanding
requests. To account for low-volume situations where the number of
inflight IOs would be zero, the last finish time of each path is
factored in.
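A back-of-the-envelope sketch of the estimator described above (the weighting factor, struct and field names are assumptions for illustration, not the selector's actual tunables):

	/* exponential moving average of per-IO service time, in ns */
	static void hst_update(struct path_info *pi, u64 start_ns, u64 end_ns)
	{
		u64 sample = end_ns - start_ns;	/* "service time" of this IO */

		/* new_avg = 7/8 * old_avg + 1/8 * sample */
		pi->hst_avg = (7 * pi->hst_avg + sample) >> 3;
	}

	/* predicted cost of sending one more IO down this path */
	static u64 hst_predict(struct path_info *pi)
	{
		return pi->hst_avg * (pi->inflight + 1);
	}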
Signed-off-by: Khazhismel Kumykov <khazhy@google.com> Co-developed-by: Gabriel Krisman Bertazi <krisman@collabora.com> Signed-off-by: Gabriel Krisman Bertazi <krisman@collabora.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
The HST path selector needs this information to perform path
prediction. For request-based mpath, struct request's io_start_time_ns
is used, while for bio-based mpath the start_time stored in dm_io is used.
Signed-off-by: Gabriel Krisman Bertazi <krisman@collabora.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
dm writecache: improve performance on DDR persistent memory (Optane)
When testing the dm-writecache target on a real DDR persistent memory
(Intel Optane), it turned out that explicit cache flushing using the
clflushopt instruction performs better than non-temporal stores for
block sizes 1k, 2k and 4k.
The dm-writecache target is singlethreaded (all the copying is done
while holding the writecache lock), so it benefits from clwb, see:
http://lore.kernel.org/r/alpine.LRH.2.02.2004160411460.7833@file01.intranet.prod.int.rdu2.redhat.com
Add a new function memcpy_flushcache_optimized() that tests whether
clflushopt is present; if it is, we use it instead of
memcpy_flushcache.
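A rough sketch of the idea (simplified; the size condition and cache-line handling here are illustrative, not the exact values used):

	static void memcpy_flushcache_optimized(void *dest, void *source, size_t size)
	{
	#ifdef CONFIG_X86
		if (static_cpu_has(X86_FEATURE_CLFLUSHOPT) &&
		    likely(boot_cpu_data.x86_clflush_size == 64) &&
		    likely(!(size % 64))) {
			do {
				memcpy(dest, source, 64);
				clflushopt(dest);	/* explicitly flush the line just written */
				dest += 64;
				source += 64;
				size -= 64;
			} while (size);
			return;
		}
	#endif
		memcpy_flushcache(dest, source, size);
	}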
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Zhiqiang Liu [Wed, 15 Apr 2020 11:57:31 +0000 (19:57 +0800)]
dm persistent data: switch exit_ro_spine to return void
In commit 4c7da06f5a78 ("dm persistent data: eliminate unnecessary
return values"), r value in exit_ro_spine will not change, so
exit_ro_spine doesn't need a return value.
Signed-off-by: Zhiqiang Liu <liuzhiqiang26@huawei.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
drivers/md/dm-integrity.c: In function 'integrity_metadata':
drivers/md/dm-integrity.c:1557:12: warning:
variable 'save_metadata_offset' set but not used [-Wunused-but-set-variable]
drivers/md/dm-integrity.c:1556:12: warning:
variable 'save_metadata_block' set but not used [-Wunused-but-set-variable]
They are never used, so remove them.
Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Mikulas Patocka [Fri, 7 Feb 2020 20:59:25 +0000 (15:59 -0500)]
dm bufio: implement discard
Add functions dm_bufio_issue_discard and dm_bufio_discard_buffers.
dm_bufio_issue_discard sends discard request to the underlying device.
dm_bufio_discard_buffers frees buffers in the range and then calls
dm_bufio_issue_discard.
Also, factor out block_to_sector for reuse in dm_bufio_issue_discard.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
This new target is similar to the linear target except that it emulates
a smaller logical block size on a device with a larger logical block
size. Its main purpose is to emulate 512 byte sectors on 4K native
disks (i.e. 512e).
See Documentation/admin-guide/device-mapper/dm-ebs.rst for details.
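For example, a table line of the following shape (parameters as documented in dm-ebs.rst: device path, offset, emulated sectors, optional underlying sectors; the device name and length here are made up) would expose a 4K-native disk as 512e:

	echo "0 16777216 ebs /dev/sdb 0 1 8" | dmsetup create ebs-512e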
Reviewed-by: Damien Le Moal <DamienLeMoal@wdc.com> Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com> Signed-off-by: Randy Dunlap <rdunlap@infradead.org> [Kconfig fixes] Signed-off-by: Zheng Bin <zhengbin13@huawei.com> [static fixes] Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Martin Wilck [Mon, 20 Apr 2020 20:29:09 +0000 (22:29 +0200)]
dm mpath: switch paths in dm_blk_ioctl() code path
SCSI LUN passthrough code such as qemu's "scsi-block" device model
passes every IO to the host via SG_IO ioctls. Currently, dm-multipath
calls choose_pgpath() only in the block IO code path, not in the ioctl
code path (unless current_pgpath is NULL). This has the effect that no
path switching and thus no load balancing is done for SCSI-passthrough
IO, unless the active path fails.
Fix this by using the same logic in multipath_prepare_ioctl() as in
multipath_clone_and_map().
Note: The allegedly best path selection algorithm, service-time,
still wouldn't work perfectly, because the io size of the current
request is always set to 0. Changing that for the IO passthrough
case would require the ioctl cmd and arg to be passed to dm's
prepare_ioctl() method.
Signed-off-by: Martin Wilck <mwilck@suse.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Linus Torvalds [Sun, 10 May 2020 18:59:53 +0000 (11:59 -0700)]
Merge tag 'x86-urgent-2020-05-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Thomas Gleixner:
"A set of fixes for x86:
- Ensure that direct mapping alias is always flushed when changing
page attributes. The optimization for small ranges failed to do so
when the virtual address was in the vmalloc or module space.
- Unbreak the trace event registration for syscalls without arguments
caused by the refactoring of the SYSCALL_DEFINE0() macro.
- Move the printk in the TSC deadline timer code to a place where it
is guaranteed to only be called once during boot and cannot be
rearmed by clearing warn_once after boot. If it's invoked post boot
then lockdep rightfully complains about a potential deadlock as the
calling context is different.
- A series of fixes for objtool and the ORC unwinder addressing
a variety of small issues:
- Stack offset tracking for indirect CFAs in objtool ignored
subsequent pushs and pops
- Repair the unwind hints in the register clearing entry ASM code
- Make the unwinding in the low level exit to usermode code stop
after switching to the trampoline stack. The unwind hint is no
longer valid and the ORC unwinder emits a warning as it can't
find the registers anymore.
- Fix unwind hints in switch_to_asm() and rewind_stack_do_exit()
which caused objtool to generate bogus ORC data.
- Prevent unwinder warnings when dumping the stack of a
non-current task as there is no way to be sure about the
validity because the dumped stack can be a moving target.
- Make the ORC unwinder behave the same way as the frame pointer
unwinder when dumping an inactive tasks stack and do not skip
the first frame.
- Prevent ORC unwinding before ORC data has been initialized
- Immediately terminate unwinding when an unknown ORC entry type
is found.
- Prevent premature stop of the unwinder caused by IRET frames.
- Fix another infinite loop in objtool caused by a negative
offset which was not caught.
- Address a few build warnings in the ORC unwinder and add
missing static/ro_after_init annotations"
* tag 'x86-urgent-2020-05-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/unwind/orc: Move ORC sorting variables under !CONFIG_MODULES
x86/apic: Move TSC deadline timer debug printk
ftrace/x86: Fix trace event registration for syscalls without arguments
x86/mm/cpa: Flush direct map alias during cpa
objtool: Fix infinite loop in for_offset_range()
x86/unwind/orc: Fix premature unwind stoppage due to IRET frames
x86/unwind/orc: Fix error path for bad ORC entry type
x86/unwind/orc: Prevent unwinding before ORC initialization
x86/unwind/orc: Don't skip the first frame for inactive tasks
x86/unwind: Prevent false warnings for non-current tasks
x86/unwind/orc: Convert global variables to static
x86/entry/64: Fix unwind hints in rewind_stack_do_exit()
x86/entry/64: Fix unwind hints in __switch_to_asm()
x86/entry/64: Fix unwind hints in kernel exit path
x86/entry/64: Fix unwind hints in register clearing code
objtool: Fix stack offset tracking for indirect CFAs
Linus Torvalds [Sun, 10 May 2020 18:42:14 +0000 (11:42 -0700)]
Merge tag 'objtool-urgent-2020-05-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull objtool fix from Thomas Gleixner:
"A single fix for objtool to prevent an infinite loop in the
jump table search which can be triggered when building the
kernel with '-ffunction-sections'"
* tag 'objtool-urgent-2020-05-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
objtool: Fix infinite loop in find_jump_table()
Linus Torvalds [Sun, 10 May 2020 18:39:31 +0000 (11:39 -0700)]
Merge tag 'locking-urgent-2020-05-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking fix from Thomas Gleixner:
"A single fix for the fallout of the recent futex uacess rework.
With those changes GCC9 fails to analyze arch_futex_atomic_op_inuser()
correctly and emits a 'maybe unitialized' warning. While we usually
ignore compiler stupidity the conditional store is pointless anyway
because the correct case has to store. For the fault case the extra
store does no harm"
* tag 'locking-urgent-2020-05-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
ARM: futex: Address build warning
Linus Torvalds [Sun, 10 May 2020 18:26:23 +0000 (11:26 -0700)]
Merge tag 'iommu-fixes-v5.7-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull iommu fixes from Joerg Roedel:
- Race condition fixes for the AMD IOMMU driver.
These are five patches fixing two race conditions around
increase_address_space(). The first race condition was around the
non-atomic update of the domain page-table root pointer and the
variable containing the page-table depth (called mode). This is fixed
now by merging page-table root and mode into one 64-bit field which
is read/written atomically.
The second race condition was around updating the page-table root
pointer and making it public before the hardware caches were flushed.
This could cause addresses to be mapped and returned to drivers which
are not reachable by IOMMU hardware yet, causing IO page-faults. This
is fixed too by adding the necessary flushes before a new page-table
root is published.
Related to the race condition fixes these patches also add a missing
domain_flush_complete() barrier to update_domain() and a fix to bail
out of the loop which tries to increase the address space when the
call to increase_address_space() fails.
Qian was able to trigger the race conditions under high load and
memory pressure within a few days of testing. He confirmed that he
has seen no issues anymore with the fixes included here.
- Fix for a list-handling bug in the VirtIO IOMMU driver.
* tag 'iommu-fixes-v5.7-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
iommu/virtio: Reverse arguments to list_add
iommu/amd: Do not flush Device Table in iommu_map_page()
iommu/amd: Update Device Table in increase_address_space()
iommu/amd: Call domain_flush_complete() in update_domain()
iommu/amd: Do not loop forever when trying to increase address space
iommu/amd: Fix race in increase_address_space()/fetch_pte()
Linus Torvalds [Sun, 10 May 2020 18:16:07 +0000 (11:16 -0700)]
Merge tag 'block-5.7-2020-05-09' of git://git.kernel.dk/linux-block
Pull block fixes from Jens Axboe:
- a small series fixing a use-after-free of bdi name (Christoph,Yufen)
- NVMe fix for a regression with the smaller CQ update (Alexey)
- NVMe fix for a hang at namespace scanning error recovery (Sagi)
- fix race with blk-iocost iocg->abs_vdebt updates (Tejun)
* tag 'block-5.7-2020-05-09' of git://git.kernel.dk/linux-block:
nvme: fix possible hang when ns scanning fails during error recovery
nvme-pci: fix "slimmer CQ head update"
bdi: add a ->dev_name field to struct backing_dev_info
bdi: use bdi_dev_name() to get device name
bdi: move bdi_dev_name out of line
vboxsf: don't use the source name in the bdi name
iocost: protect iocg->abs_vdebt with iocg->waitq.lock
Linus Torvalds [Sun, 10 May 2020 00:50:03 +0000 (17:50 -0700)]
gcc-10: mark more functions __init to avoid section mismatch warnings
It seems that for whatever reason, gcc-10 ends up not inlining a couple
of functions that used to be inlined before. Even if they only have one
single callsite - it looks like gcc may have decided that the code was
unlikely, and not worth inlining.
The code generation difference is harmless, but caused a few new section
mismatch errors, since the (now no longer inlined) function wasn't in
the __init section, but called other init functions:
Section mismatch in reference from the function kexec_free_initrd() to the function .init.text:free_initrd_mem()
Section mismatch in reference from the function tpm2_calc_event_log_size() to the function .init.text:early_memremap()
Section mismatch in reference from the function tpm2_calc_event_log_size() to the function .init.text:early_memunmap()
So add the appropriate __init annotation to make modpost not complain.
In both cases there were trivially just a single callsite from another
__init function.
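The fix itself is just the annotation, e.g. (illustrative sketch; the exact functions are the ones named in the mismatches quoted above):

	/* before: plain static function calling .init.text code */
	static void kexec_free_initrd(void)
	/* after: lives in .init.text itself, so modpost no longer complains */
	static void __init kexec_free_initrd(void)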
Linus Torvalds [Sat, 9 May 2020 23:24:16 +0000 (16:24 -0700)]
Merge tag 'riscv-for-linus-5.7-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux
Pull RISC-V fixes from Palmer Dabbelt:
"A smattering of fixes and cleanups:
- Dead code removal.
- Exporting riscv_cpuid_to_hartid_mask for modules.
- Per-CPU tracking of ISA features.
- Setting max_pfn correctly when probing memory.
- Adding a note to the VDSO so glibc can check the kernel's version
without a uname().
- A fix to force the bootloader to initialize the boot spin tables,
which still get used as a fallback when SBI-0.1 is enabled"
* tag 'riscv-for-linus-5.7-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
RISC-V: Remove unused code from STRICT_KERNEL_RWX
riscv: force __cpu_up_ variables to put in data section
riscv: add Linux note to vdso
riscv: set max_pfn to the PFN of the last page
RISC-V: Remove N-extension related defines
RISC-V: Add bitmap reprensenting ISA features common across CPUs
RISC-V: Export riscv_cpuid_to_hartid_mask() API
Linus Torvalds [Sat, 9 May 2020 22:58:04 +0000 (15:58 -0700)]
gcc-10: avoid shadowing standard library 'free()' in crypto
gcc-10 has started warning about conflicting types for a few new
built-in functions, particularly 'free()'.
This results in warnings like:
crypto/xts.c:325:13: warning: conflicting types for built-in function ‘free’; expected ‘void(void *)’ [-Wbuiltin-declaration-mismatch]
because the crypto layer had its local freeing functions called
'free()'.
Gcc-10 is in the wrong here, since that function is marked 'static', and
thus there is no chance of confusion with any standard library function
namespace.
But the simplest thing to do is to just use a different name here, and
avoid this gcc mis-feature.
[ Side note: gcc knowing about 'free()' is in itself not the
mis-feature: the semantics of 'free()' are special enough that a
compiler can validly do special things when seeing it.
So the mis-feature here is that gcc thinks that 'free()' is some
restricted name, and you can't shadow it as a local static function.
Making the special 'free()' semantics be a function attribute rather
than tied to the name would be the much better model ]
Linus Torvalds [Sat, 9 May 2020 22:45:21 +0000 (15:45 -0700)]
gcc-10: disable 'restrict' warning for now
gcc-10 now warns about passing aliasing pointers to functions that take
restricted pointers.
That's actually a great warning, and if we ever start using 'restrict'
in the kernel, it might be quite useful. But right now we don't, and it
turns out that the only thing this warns about is an idiom where we have
declared a few functions to be "printf-like" (which seems to make gcc
pick up the restricted pointer thing), and then we print to the same
buffer that we also use as an input.
And people do that as an odd concatenation pattern, with code where
we have 'buffer' as both the destination of the final result and as
an initial input argument.
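Roughly, the idiom looks like this (a hedged reconstruction, not the exact snippet from the original message; names are made up):

	/* append to what is already in 'buffer' by passing it back in as input */
	snprintf(buffer, sizeof(buffer), "%s-%d", buffer, index);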
Yes, it's a bit questionable. And outside of the kernel, people do have
standard declarations like
int snprintf( char *restrict buffer, size_t bufsz,
const char *restrict format, ... );
where that output buffer is marked as a restrict pointer that cannot
alias with any other arguments.
But in the context of the kernel, that 'use snprintf() to concatenate to
the end result' does work, and the pattern shows up in multiple places.
And we have not marked our own version of snprintf() as taking restrict
pointers, so the warning is incorrect for now, and gcc picks it up on
its own.
If we do start using 'restrict' in the kernel (and it might be a good
idea if people find places where it matters), we'll need to figure out
how to avoid this issue for snprintf and friends. But in the meantime,
this warning is not useful.
Linus Torvalds [Sat, 9 May 2020 22:40:52 +0000 (15:40 -0700)]
gcc-10: disable 'stringop-overflow' warning for now
This is the final array bounds warning removal for gcc-10 for now.
Again, the warning is good, and we should re-enable all these warnings
when we have converted all the legacy array declaration cases to
flexible arrays. But in the meantime, it's just noise.
Sagi Grimberg [Wed, 6 May 2020 22:44:02 +0000 (15:44 -0700)]
nvme: fix possible hang when ns scanning fails during error recovery
When the controller is reconnecting, the host fails I/O and admin
commands as the host cannot reach the controller. ns scanning may
revalidate namespaces during that period and it is wrong to remove
namespaces due to these failures as we may hang (see 205da2434301).
One command that may fail is nvme_identify_ns_descs. Since we return
success because the ns identify descriptor list is optional, we continue
to compare ns identifiers in nvme_revalidate_disk, obviously fail and
return -ENODEV to nvme_validate_ns, which will remove the namespace.
Exactly what we don't want to happen.
Fixes: 22802bf742c2 ("nvme: Namepace identification descriptor list is optional") Tested-by: Anton Eidelman <anton@lightbitslabs.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Reviewed-by: Keith Busch <kbusch@kernel.org> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
But the code is minimal now: one read for the head, one read for q_depth,
one increment, one comparison, a single-instruction phase bit update and
one write for the new head.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Reported-by: John Garry <john.garry@huawei.com> Tested-by: John Garry <john.garry@huawei.com> Fixes: e2a366a4b0feaeb ("nvme-pci: slimmer CQ head update") Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
bdi: add a ->dev_name field to struct backing_dev_info
Cache a copy of the name for the lifetime of the backing_dev_info
structure so that we can reference it even after unregistering.
Fixes: 68f23b89067f ("memcg: fix a crash in wb_workfn when a device disappears") Reported-by: Yufen Yu <yuyufen@huawei.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Linus Torvalds [Sat, 9 May 2020 21:52:44 +0000 (14:52 -0700)]
gcc-10: disable 'array-bounds' warning for now
This is another fine warning, related to the 'zero-length-bounds' one,
but hitting the same historical code in the kernel.
Because C didn't historically support flexible array members, we have
code that instead uses a one-sized array, the same way we have cases of
zero-sized arrays.
The one-sized arrays come from either not wanting to use the gcc
zero-sized array extension, or from a slight convenience-feature, where
particularly for strings, the size of the structure now includes the
allocation for the final NUL character.
So with a "char name[1];" at the end of a structure, you can do things
like
v = my_malloc(sizeof(struct vendor) + strlen(name));
and avoid the "+1" for the terminator.
Yes, the modern way to do that is with a flexible array, and using
'offsetof()' instead of 'sizeof()', and adding the "+1" by hand. That
also technically gets the size "more correct" in that it avoids any
alignment (and thus padding) issues, but this is another long-term
cleanup thing that will not happen for 5.7.
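Spelled out, the 'more correct' modern variant mentioned above would be something like (sketch, reusing the hypothetical my_malloc() from the example above):

	struct vendor {
		/* ... */
		char name[];	/* flexible array instead of name[1] */
	};

	v = my_malloc(offsetof(struct vendor, name) + strlen(name) + 1);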
So disable the warning for now, even though it's potentially quite
useful. Having a slew of warnings that then hide more urgent new issues
is not an improvement.
Linus Torvalds [Sat, 9 May 2020 21:30:29 +0000 (14:30 -0700)]
gcc-10: disable 'zero-length-bounds' warning for now
This is a fine warning, but we still have a number of zero-length arrays
in the kernel that come from the traditional gcc extension. Yes, they
are getting converted to flexible arrays, but in the meantime the gcc-10
warning about zero-length bounds is very verbose, and is hiding other
issues.
I missed one actual build failure because it was hidden among hundreds
of lines of warning. Thankfully I caught it on the second go before
pushing things out, but it convinced me that I really need to disable
the new warnings for now.
We'll hopefully be all done with our conversion to flexible arrays in
the not too distant future, and we can then re-enable this warning.
Linus Torvalds [Sat, 9 May 2020 20:57:10 +0000 (13:57 -0700)]
Stop the ad-hoc games with -Wno-maybe-initialized
We have some rather random rules about when we accept the
"maybe-initialized" warnings, and when we don't.
For example, we consider it unreliable for gcc versions < 4.9, but also
if -O3 is enabled, or if optimizing for size. And then various kernel
config options disabled it, because they know that they trigger that
warning by confusing gcc sufficiently (ie PROFILE_ALL_BRANCHES).
And now gcc-10 seems to be introducing a lot of those warnings too, so
it falls under the same heading as 4.9 did.
At the same time, we have a very straightforward way to _enable_ that
warning when wanted: use "W=2" to enable more warnings.
So stop playing these ad-hoc games, and just disable that warning by
default, with the known and straight-forward "if you want to work on the
extra compiler warnings, use W=123".
Would it be great to have code that is always so obvious that it never
confuses the compiler whether a variable is used initialized or not?
Yes, it would. In a perfect world, the compilers would be smarter, and
our source code would be simpler.
That's currently not the world we live in, though.
Linus Torvalds [Sat, 9 May 2020 19:02:09 +0000 (12:02 -0700)]
Merge tag 'io_uring-5.7-2020-05-08' of git://git.kernel.dk/linux-block
Pull io_uring fixes from Jens Axboe:
- Fix finish_wait() balancing in file cancelation (Xiaoguang)
- Ensure early cleanup of resources in ring map failure (Xiaoguang)
- Ensure IORING_OP_SPLICE does the right file mode checks (Pavel)
- Remove file opening from openat/openat2/statx, it's not needed and
messes with O_PATH
* tag 'io_uring-5.7-2020-05-08' of git://git.kernel.dk/linux-block:
io_uring: don't use 'fd' for openat/openat2/statx
splice: move f_mode checks to do_{splice,tee}()
io_uring: handle -EFAULT properly in io_uring_setup()
io_uring: fix mismatched finish_wait() calls in io_uring_cancel_files()
Linus Torvalds [Fri, 8 May 2020 17:36:56 +0000 (10:36 -0700)]
Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Pull SCSI fixes from James Bottomley:
"Four minor fixes, all in drivers (qla2xxx, ibmvfc, ibmvscsi)"
* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
scsi: ibmvscsi: Fix WARN_ON during event pool release
scsi: ibmvfc: Don't send implicit logouts prior to NPIV login
scsi: qla2xxx: Delete all sessions before unregister local nvme port
scsi: qla2xxx: Fix hang when issuing nvme disconnect-all in NPIV
Linus Torvalds [Fri, 8 May 2020 17:27:00 +0000 (10:27 -0700)]
Merge tag 'ceph-for-5.7-rc5' of git://github.com/ceph/ceph-client
Pull ceph fixes from Ilya Dryomov:
"Fixes for an endianness handling bug that prevented mounts on
big-endian arches, a spammy log message and a couple error paths.
Also included a MAINTAINERS update"
* tag 'ceph-for-5.7-rc5' of git://github.com/ceph/ceph-client:
ceph: demote quotarealm lookup warning to a debug message
MAINTAINERS: remove myself as ceph co-maintainer
ceph: fix double unlock in handle_cap_export()
ceph: fix special error code in ceph_try_get_caps()
ceph: fix endianness bug when handling MDS session feature bits
Linus Torvalds [Fri, 8 May 2020 16:11:53 +0000 (09:11 -0700)]
Merge tag 'char-misc-5.7-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc
Pull char/misc driver fixes from Greg KH:
"Here are some small driver fixes for 5.7-rc5 that resolve a number of
minor reported issues:
- mhi bus driver fixes found as people actually use the code
- phy driver fixes and compat string additions
- most driver fix due to link order changing when the core moved out
of staging
- mei driver fix
- interconnect build warning fix
All of these have been in linux-next for a while with no reported
issues"
* tag 'char-misc-5.7-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc:
bus: mhi: core: Fix channel device name conflict
bus: mhi: core: Fix typo in comment
bus: mhi: core: Offload register accesses to the controller
bus: mhi: core: Remove link_status() callback
bus: mhi: core: Make sure to powerdown if mhi_sync_power_up fails
bus: mhi: Fix parsing of mhi_flags
mei: me: disable mei interface on LBG servers.
phy: qualcomm: usb-hs-28nm: Prepare clocks in init
MAINTAINERS: Add Vinod Koul as Generic PHY co-maintainer
interconnect: qcom: Move the static keyword to the front of declaration
most: core: use function subsys_initcall()
bus: mhi: core: Fix a NULL vs IS_ERR check in mhi_create_devices()
phy: qcom-qusb2: Re add "qcom,sdm845-qusb2-phy" compat string
phy: tegra: Select USB_COMMON for usb_get_maximum_speed()
Linus Torvalds [Fri, 8 May 2020 16:06:34 +0000 (09:06 -0700)]
Merge tag 'driver-core-5.7-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core
Pull driver core fixes from Greg KH:
"Here are a number of small driver core fixes for 5.7-rc5 to resolve a
bunch of reported issues with the current tree.
Biggest here are the reverts and patches from John Stultz to resolve a
bunch of deferred probe regressions we have been seeing in 5.7-rc
right now.
Along with those are some other smaller fixes:
- coredump crash fix
- devlink fix for when permissive mode was enabled
- amba and platform device dma_parms fixes
- component error silenced for when deferred probe happens
All of these have been in linux-next for a while with no reported
issues"
* tag 'driver-core-5.7-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core:
regulator: Revert "Use driver_deferred_probe_timeout for regulator_init_complete_work"
driver core: Ensure wait_for_device_probe() waits until the deferred_probe_timeout fires
driver core: Use dev_warn() instead of dev_WARN() for deferred_probe_timeout warnings
driver core: Revert default driver_deferred_probe_timeout value to 0
component: Silence bind error on -EPROBE_DEFER
driver core: Fix handling of fw_devlink=permissive
coredump: fix crash when umh is disabled
amba: Initialize dma_parms for amba devices
driver core: platform: Initialize dma_parms for platform devices
Linus Torvalds [Fri, 8 May 2020 16:03:49 +0000 (09:03 -0700)]
Merge tag 'staging-5.7-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging
Pull staging driver fixes from Greg KH:
"Here are three small driver fixes for 5.7-rc5.
Two of these are documentation fixes:
- MAINTAINERS update due to removed driver
- removing Wolfram from the ks7010 driver TODO file
The other patch is a real fix:
- fix gasket driver to properly check the return value of a call
All of these have been in linux-next for a while with no reported
issues"
* tag 'staging-5.7-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging:
staging: gasket: Check the return value of gasket_get_bar_index()
staging: ks7010: remove me from CC list
MAINTAINERS: remove entry after hp100 driver removal
Linus Torvalds [Fri, 8 May 2020 15:56:16 +0000 (08:56 -0700)]
Merge tag 'tty-5.7-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty
Pull tty/serial fixes from Greg KH:
"Here are three small TTY/Serial/VT fixes for 5.7-rc5:
- revert for the bcm63xx driver "fix" that was incorrect
- vt unicode console bugfix
- xilinx_uartps console driver fix
All of these have been in linux next with no reported issues"
* tag 'tty-5.7-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty:
tty: xilinx_uartps: Fix missing id assignment to the console
vt: fix unicode console freeing with a common interface
Revert "tty: serial: bcm63xx: fix missing clk_put() in bcm63xx_uart"
Linus Torvalds [Fri, 8 May 2020 15:49:34 +0000 (08:49 -0700)]
Merge tag 'drm-fixes-2020-05-08' of git://anongit.freedesktop.org/drm/drm
Pull drm fixes from Dave Airlie:
"Another pretty normal week. I didn't get any i915 fixes yet, so next
week I'd expect double the usual i915, but otherwise a bunch of amdgpu
and some scattered other fixes.
hdcp:
- fix HDCP regression
amdgpu:
- Runtime PM fixes
- DC fix for PPC
- Misc DC fixes
virtio:
- fix context ordering issue
sun4i:
- old gcc warning fix
ingenic-drm:
- missing module support"
* tag 'drm-fixes-2020-05-08' of git://anongit.freedesktop.org/drm/drm:
drm/amd/display: Prevent dpcd reads with passive dongles
drm/amd/display: fix counter in wait_for_no_pipes_pending
drm/amd/display: Update DCN2.1 DV Code Revision
drm: Fix HDCP failures when SRM fw is missing
sun6i: dsi: fix gcc-4.8
drm: ingenic-drm: add MODULE_DEVICE_TABLE
drm/virtio: create context before RESOURCE_CREATE_2D in 3D mode
drm/amd/display: work around fp code being emitted outside of DC_FP_START/END
drm/amdgpu/dc: Use WARN_ON_ONCE for ASSERT
drm/amdgpu: drop redundant cg/pg ungate on runpm enter
drm/amdgpu: move kfd suspend after ip_suspend_phase1
Linus Torvalds [Fri, 8 May 2020 15:41:09 +0000 (08:41 -0700)]
Merge branch 'akpm' (patches from Andrew)
Merge misc fixes from Andrew Morton:
"14 fixes and one selftest to verify the ipc fixes herein"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
mm: limit boost_watermark on small zones
ubsan: disable UBSAN_ALIGNMENT under COMPILE_TEST
mm/vmscan: remove unnecessary argument description of isolate_lru_pages()
epoll: atomically remove wait entry on wake up
kselftests: introduce new epoll60 testcase for catching lost wakeups
percpu: make pcpu_alloc() aware of current gfp context
mm/slub: fix incorrect interpretation of s->offset
scripts/gdb: repair rb_first() and rb_last()
eventpoll: fix missing wakeup for ovflist in ep_poll_callback
arch/x86/kvm/svm/sev.c: change flag passed to GUP fast in sev_pin_memory()
scripts/decodecode: fix trapping instruction formatting
kernel/kcov.c: fix typos in kcov_remote_start documentation
mm/page_alloc: fix watchdog soft lockups during set_zone_contiguous()
mm, memcg: fix error return value of mem_cgroup_css_alloc()
ipc/mqueue.c: change __do_notify() to bypass check_kill_permission()
Julia Lawall [Tue, 5 May 2020 18:47:47 +0000 (20:47 +0200)]
iommu/virtio: Reverse arguments to list_add
Elsewhere in the file, there is a list_for_each_entry with
&vdev->resv_regions as the second argument, suggesting that
&vdev->resv_regions is the list head. So exchange the
arguments on the list_add call to put the list head in the
second argument.
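In other words (sketch; 'region' stands for whatever entry is being queued):

	/* list_add(new, head): first argument is the new entry, second the list head */

	/* before: arguments swapped, corrupting the list */
	list_add(&vdev->resv_regions, &region->list);

	/* after */
	list_add(&region->list, &vdev->resv_regions);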
Henry Willard [Fri, 8 May 2020 01:36:27 +0000 (18:36 -0700)]
mm: limit boost_watermark on small zones
Commit 1c30844d2dfe ("mm: reclaim small amounts of memory when an
external fragmentation event occurs") adds a boost_watermark() function
which increases the min watermark in a zone by at least
pageblock_nr_pages, i.e. the number of pages in a page block.
On Arm64, with 64K pages and 512M huge pages, this is 8192 pages or
512M. It does this regardless of the number of pages managed in
the zone or the likelihood of success.
This can put the zone immediately under water in terms of allocating
pages from the zone, and can cause a small machine to fail immediately
due to OoM. Unlike set_recommended_min_free_kbytes(), which
substantially increases min_free_kbytes and is tied to THP,
boost_watermark() can be called even if THP is not active.
The problem is most likely to appear on architectures such as Arm64
where pageblock_nr_pages is very large.
It is desirable to run the kdump capture kernel in as small a space as
possible to avoid wasting memory. In some architectures, such as Arm64,
there are restrictions on where the capture kernel can run, and
therefore, the space available. A capture kernel running in 768M can
fail due to OoM immediately after boost_watermark() sets the min in zone
DMA32, where most of the memory is, to 512M. It fails even though there
is over 500M of free memory. With boost_watermark() suppressed, the
capture kernel can run successfully in 448M.
This patch limits boost_watermark() to boosting a zone's min watermark
only when there are enough pages that the boost will produce positive
results. In this case that is estimated to be four times as many pages
as pageblock_nr_pages.
Mel said:
: There is no harm in marking it stable. Clearly it does not happen very
: often but it's not impossible. 32-bit x86 is a lot less common now
: which would previously have been vulnerable to triggering this easily.
: ppc64 has a larger base page size but typically only has one zone.
: arm64 is likely the most vulnerable, particularly when CMA is
: configured with a small movable zone.
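The guard boils down to a single check at the top of boost_watermark() (a sketch of the described limit; the factor of four follows the text above):

	/* don't bother boosting zones too small for it to make sense */
	if ((pageblock_nr_pages * 4) > zone_managed_pages(zone))
		return;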
Fixes: 1c30844d2dfe ("mm: reclaim small amounts of memory when an external fragmentation event occurs") Signed-off-by: Henry Willard <henry.willard@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: David Hildenbrand <david@redhat.com> Acked-by: Mel Gorman <mgorman@techsingularity.net> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: <stable@vger.kernel.org> Link: http://lkml.kernel.org/r/1588294148-6586-1-git-send-email-henry.willard@oracle.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Kees Cook [Fri, 8 May 2020 01:36:23 +0000 (18:36 -0700)]
ubsan: disable UBSAN_ALIGNMENT under COMPILE_TEST
The documentation for UBSAN_ALIGNMENT already mentions that it should
not be used on all*config builds (and for efficient-unaligned-access
architectures), so just refactor the Kconfig to correctly implement this
so randconfigs will stop creating insane images that freak out objtool
under CONFIG_UBSAN_TRAP (due to the false positives producing functions
that never return, etc).
Qiwu Chen [Fri, 8 May 2020 01:36:20 +0000 (18:36 -0700)]
mm/vmscan: remove unnecessary argument description of isolate_lru_pages()
Since commit a9e7c39fa9fd9 ("mm/vmscan.c: remove 7th argument of
isolate_lru_pages()"), the explanation of 'mode' argument has been
unnecessary. Let's remove it.
Signed-off-by: Qiwu Chen <chenqiwu@xiaomi.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200501090346.2894-1-chenqiwu@xiaomi.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Roman Penyaev [Fri, 8 May 2020 01:36:16 +0000 (18:36 -0700)]
epoll: atomically remove wait entry on wake up
This patch does two things:
- fixes a lost wakeup introduced by commit 339ddb53d373 ("fs/epoll:
remove unnecessary wakeups of nested epoll")
- improves performance for events delivery.
The description of the problem is the following: if N (>1) threads are
waiting on ep->wq for new events and M (>1) events come, it is quite
likely that >1 wakeups hit the same wait queue entry, because there is
quite a big window between __add_wait_queue_exclusive() and the
following __remove_wait_queue() calls in the ep_poll() function.
This can lead to lost wakeups, because the thread which was woken up
may not handle all the events in ->rdllist. (In better words, the problem
is described here: https://lkml.org/lkml/2019/10/7/905)
The idea of the current patch is to use init_wait() instead of
init_waitqueue_entry().
Internally init_wait() sets autoremove_wake_function as a callback,
which removes the wait entry atomically (under the wq locks) from the
list, thus the next coming wakeup hits the next wait entry in the wait
queue, thus preventing lost wakeups.
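Conceptually, the ep_poll() wait setup changes from open-coded entry management to the autoremove variant (sketch):

	wait_queue_entry_t wait;

	/* before: default_wake_function, entry must be removed manually later */
	init_waitqueue_entry(&wait, current);

	/* after: autoremove_wake_function removes the entry under the wq lock
	 * at wakeup time, so the next wakeup hits the next waiter */
	init_wait(&wait);

	__add_wait_queue_exclusive(&ep->wq, &wait);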
Problem is very well reproduced by the epoll60 test case [1].
Wait entry removal on wakeup also has performance benefits, because
there is no need to take ep->lock and remove the wait entry from the
queue after a successful wakeup. Here is the timing output of the epoll60
test case:
With explicit wakeup from ep_scan_ready_list() (the state of the
code prior 339ddb53d373):
real 0m6.970s
user 0m49.786s
sys 0m0.113s
After this patch:
real 0m5.220s
user 0m36.879s
sys 0m0.019s
The other testcase is the stress-epoll [2], where one thread consumes
all the events and other threads produce many events:
With explicit wakeup from ep_scan_ready_list() (the state of the
code prior 339ddb53d373):
(number of "events/ms" represents event bandwidth, thus higher is
better; number of "run-time ms" represents overall time spent
doing the benchmark, thus lower is better)
Roman Penyaev [Fri, 8 May 2020 01:36:13 +0000 (18:36 -0700)]
kselftests: introduce new epoll60 testcase for catching lost wakeups
This test case catches a lost wakeup introduced by commit 339ddb53d373
("fs/epoll: remove unnecessary wakeups of nested epoll")
The test is simple: we have 10 threads and 10 event fds. Each thread
can harvest only 1 event. 1 producer fires all 10 events at once and
waits that all 10 events will be observed by 10 threads.
In case of lost wakeup epoll_wait() will timeout and 0 will be returned.
The test case catches two sorts of problems: a forgotten wakeup on an event
which hits the ->ovflist list; this problem was fixed by:
5a2513239750 ("eventpoll: fix missing wakeup for ovflist in ep_poll_callback")
The other problem is when several sequential events hit the same waiting
thread, so other waiters get no wakeups. That problem is fixed in the
following patch.
Signed-off-by: Roman Penyaev <rpenyaev@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Khazhismel Kumykov <khazhy@google.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Heiher <r@hev.cc> Cc: Jason Baron <jbaron@akamai.com> Link: http://lkml.kernel.org/r/20200430130326.1368509-1-rpenyaev@suse.de Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Filipe Manana [Fri, 8 May 2020 01:36:10 +0000 (18:36 -0700)]
percpu: make pcpu_alloc() aware of current gfp context
Since 5.7-rc1, on btrfs we have a percpu counter initialization for
which we always pass a GFP_KERNEL gfp_t argument (this happens since
commit 2992df73268f78 ("btrfs: Implement DREW lock")).
That is safe in some contexts but not in others where allowing fs
reclaim could lead to a deadlock because we are either holding some
btrfs lock needed for a transaction commit or holding a btrfs
transaction handle open. Because of that we surround the call to the
function that initializes the percpu counter with a NOFS context using
memalloc_nofs_save() (this is done at btrfs_init_fs_root()).
However it turns out that this is not enough to prevent a possible
deadlock because percpu_alloc() determines if it is in an atomic context
by looking exclusively at the gfp flags passed to it (GFP_KERNEL in this
case) and it is not aware that a NOFS context is set.
Because percpu_alloc() thinks it is in a non atomic context it locks the
pcpu_alloc_mutex. This can result in a btrfs deadlock when
pcpu_balance_workfn() is running, has acquired that mutex and is waiting
for reclaim, while the btrfs task that called percpu_counter_init() (and
therefore percpu_alloc()) is holding either the btrfs commit_root
semaphore or a transaction handle (done fs/btrfs/backref.c:
iterate_extent_inodes()), which prevents reclaim from finishing as an
attempt to commit the current btrfs transaction will deadlock.
Lockdep reports this issue with the following trace:
======================================================
WARNING: possible circular locking dependency detected
5.6.0-rc7-btrfs-next-77 #1 Not tainted
------------------------------------------------------
kswapd0/91 is trying to acquire lock:
 ffff8938a3b3fdc8 (&delayed_node->mutex){+.+.}, at: __btrfs_release_delayed_node.part.0+0x3f/0x320 [btrfs]
but task is already holding lock:
 ffffffffb4f0dbc0 (fs_reclaim){+.+.}, at: __fs_reclaim_acquire+0x5/0x30
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
This could be fixed by making btrfs pass GFP_NOFS instead of GFP_KERNEL
to percpu_counter_init() in contexts where it is not reclaim safe;
however, that type of approach is discouraged since
memalloc_[nofs|noio]_save() were introduced. Therefore this change
makes pcpu_alloc() check for an existing nofs/noio context before
deciding whether it is in an atomic context or not.
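The core of such a change is a small adjustment near the top of pcpu_alloc() (sketch; current_gfp_context() from <linux/sched/mm.h> masks out __GFP_FS/__GFP_IO according to the task's memalloc_nofs/noio state):

	/* honour the task's NOFS/NOIO context before deciding whether blocking
	 * (and hence taking pcpu_alloc_mutex) is allowed */
	gfp = current_gfp_context(gfp);
	is_atomic = (gfp & GFP_KERNEL) != GFP_KERNEL;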
Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Tejun Heo <tj@kernel.org> Acked-by: Dennis Zhou <dennis@kernel.org> Cc: Tejun Heo <tj@kernel.org> Cc: Christoph Lameter <cl@linux.com> Link: http://lkml.kernel.org/r/20200430164356.15543-1-fdmanana@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Waiman Long [Fri, 8 May 2020 01:36:06 +0000 (18:36 -0700)]
mm/slub: fix incorrect interpretation of s->offset
In a couple of places in the slub memory allocator, the code uses
"s->offset" as a check to see if the free pointer is put right after the
object. That check is no longer true with commit 3202fa62fb43 ("slub:
relocate freelist pointer to middle of object").
As a result, echoing "1" into the validate sysfs file, e.g. of dentry,
may cause a bunch of "Freepointer corrupt" error reports like the
following to appear with the system in panic afterwards.
=============================================================================
BUG dentry(666:pmcd.service) (Tainted: G B): Freepointer corrupt
-----------------------------------------------------------------------------
To fix it, use the check "s->offset == s->inuse" in the new helper
function freeptr_outside_object() instead. Also add another helper
function get_info_end() to return the end of the info block (inuse + free
pointer if it does not overlap the object).
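For illustration, a rough sketch of the two helpers described above,
with the slab cache struct reduced to the two fields mentioned here; the
exact upstream bodies may differ:

  #include <stdbool.h>
  #include <stddef.h>

  /* Reduced stand-in for the slab cache fields referenced above. */
  struct kmem_cache {
          unsigned int offset;  /* free pointer offset inside a slab object */
          unsigned int inuse;   /* bytes of the object actually in use */
  };

  /* The free pointer is outside the object when it is placed at (or
   * beyond) the end of the used part of the object. */
  static inline bool freeptr_outside_object(const struct kmem_cache *s)
  {
          return s->offset >= s->inuse;
  }

  /* End of the info block: inuse, plus the free pointer when it does not
   * overlap the object. */
  static inline unsigned int get_info_end(const struct kmem_cache *s)
  {
          if (freeptr_outside_object(s))
                  return s->inuse + (unsigned int)sizeof(void *);
          return s->inuse;
  }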
Fixes: 3202fa62fb43 ("slub: relocate freelist pointer to middle of object") Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Rafael Aquini <aquini@redhat.com> Cc: Christoph Lameter <cl@linux.com> Cc: Vitaly Nikolenko <vnik@duasynt.com> Cc: Silvio Cesare <silvio.cesare@gmail.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Markus Elfring <Markus.Elfring@web.de> Cc: Changbin Du <changbin.du@gmail.com> Link: http://lkml.kernel.org/r/20200429135328.26976-1-longman@redhat.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The current implementations of the rb_first() and rb_last() gdb
functions have a variable that references itself in its instantiation,
which causes the function to throw an error if a specific condition on
the argument is met. The original author rather intended to reference
the argument and made a typo. Referring to the argument instead makes
the function work as intended.
Signed-off-by: Aymeric Agon-Rambosson <aymeric.agon@yandex.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Stephen Boyd <swboyd@chromium.org> Cc: Jan Kiszka <jan.kiszka@siemens.com> Cc: Kieran Bingham <kbingham@kernel.org> Cc: Douglas Anderson <dianders@chromium.org> Cc: Nikolay Borisov <n.borisov.lkml@gmail.com> Cc: Jackie Liu <liuyun01@kylinos.cn> Cc: Jason Wessel <jason.wessel@windriver.com> Link: http://lkml.kernel.org/r/20200427051029.354840-1-aymeric.agon@yandex.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
eventpoll: fix missing wakeup for ovflist in ep_poll_callback
In the event that we add to ovflist, before commit 339ddb53d373
("fs/epoll: remove unnecessary wakeups of nested epoll") the wakeup was
done by ep_scan_ready_list(), and ep_poll_callback() did no wakeup of
its own. With that wakeup removed, if we add to ovflist here, we may
never wake up. Rather than adding back the ep_scan_ready_list() wakeup,
which was resulting in unnecessary wakeups, trigger a wake-up in
ep_poll_callback().
We noticed that one of our workloads was missing wakeups starting with
339ddb53d373, and upon manual inspection this wakeup appeared to be the
one missing.
With this patch added, we no longer see missing wakeups. I haven't yet
tried to make a small reproducer, but the existing kselftests in
filesystem/epoll passed for me with this patch.
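For illustration, a heavily simplified sketch of the intended control
flow; the helpers ovflist_active(), queue_on_ovflist(),
queue_on_ready_list(), on_ready_list() and wake_up_waiters() are
placeholders invented for this sketch, not the real fs/eventpoll.c
identifiers:

  /* Placeholder helpers, not real eventpoll code. */
  if (ovflist_active(ep)) {
          /* A ready-list scan is in progress: park the item on ovflist
           * so the scan picks it up when it finishes. */
          queue_on_ovflist(ep, epi);
  } else if (!on_ready_list(epi)) {
          /* Usual case: put the item on the ready list. */
          queue_on_ready_list(ep, epi);
  }

  /* The fix: fall through to the wakeup in both branches, instead of
   * skipping it when the item went onto ovflist. */
  wake_up_waiters(ep);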
[khazhy@google.com: use if/elif instead of goto + cleanup suggested by Roman] Link: http://lkml.kernel.org/r/20200424190039.192373-1-khazhy@google.com Fixes: 339ddb53d373 ("fs/epoll: remove unnecessary wakeups of nested epoll") Signed-off-by: Khazhismel Kumykov <khazhy@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Roman Penyaev <rpenyaev@suse.de> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Roman Penyaev <rpenyaev@suse.de> Cc: Heiher <r@hev.cc> Cc: Jason Baron <jbaron@akamai.com> Cc: <stable@vger.kernel.org> Link: http://lkml.kernel.org/r/20200424025057.118641-1-khazhy@google.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
arch/x86/kvm/svm/sev.c: change flag passed to GUP fast in sev_pin_memory()
When trying to lock read-only pages, sev_pin_memory() fails because
FOLL_WRITE is used as the flag for get_user_pages_fast().
Commit 73b0140bf0fe ("mm/gup: change GUP fast to use flags rather than a
write 'bool'") updated the get_user_pages_fast() call sites to use
flags, but incorrectly updated the call in sev_pin_memory(). As the
original coding of this call was correct, revert the change made by that
commit.
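For reference, a sketch of the corrected call described above; variable
names follow the surrounding description rather than the exact file:

  /* Request write access only when the caller asked for a writable pin,
   * instead of passing FOLL_WRITE unconditionally. */
  npinned = get_user_pages_fast(uaddr, npages, write ? FOLL_WRITE : 0, pages);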
Fixes: 73b0140bf0fe ("mm/gup: change GUP fast to use flags rather than a write 'bool'") Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Sean Christopherson <sean.j.christopherson@intel.com> Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Cc: Wanpeng Li <wanpengli@tencent.com> Cc: Jim Mattson <jmattson@google.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Mike Marshall <hubcap@omnibond.com> Cc: Brijesh Singh <brijesh.singh@amd.com> Link: http://lkml.kernel.org/r/20200423152419.87202-1-Janakarajan.Natarajan@amd.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If the trapping instruction contains a ':', for example for a memory
access through segment registers, the sed substitution will insert the
'*' marker in the middle of the instruction instead of at the line
address:
I started to think I had forgotten some quirk of the assembly syntax
before noticing that it was actually coming from the script. Fix it to
add the address marker at the right place for these instructions:
mm/page_alloc: fix watchdog soft lockups during set_zone_contiguous()
Without CONFIG_PREEMPT, soft lockups can be detected, e.g., while
booting up.
watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [swapper/0:1]
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.6.0-next-20200331+ #4
Hardware name: Red Hat KVM, BIOS 1.11.1-4.module+el8.1.0+4066+0f1aadab 04/01/2014
RIP: __pageblock_pfn_to_page+0x134/0x1c0
Call Trace:
set_zone_contiguous+0x56/0x70
page_alloc_init_late+0x166/0x176
kernel_init_freeable+0xfa/0x255
kernel_init+0xa/0x106
ret_from_fork+0x35/0x40
The issue becomes visible when having a lot of memory (e.g., 4TB)
assigned to a single NUMA node - a system that can easily be created
using QEMU. Inside VMs on a hypervisor with quite some memory
overcommit, this is fairly easy to trigger.
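The message above shows the symptom rather than the remedy; the usual
fix for a soft lockup in a long, non-preemptible loop is to yield
periodically. A hedged sketch, assuming the pageblock loop in
set_zone_contiguous() has roughly this shape:

  for (block_start_pfn = zone->zone_start_pfn;
       block_start_pfn < zone_end_pfn(zone);
       block_start_pfn += pageblock_nr_pages) {
          if (!__pageblock_pfn_to_page(block_start_pfn,
                                       block_start_pfn + pageblock_nr_pages,
                                       zone))
                  return;         /* zone is not contiguous */
          cond_resched();         /* let the scheduler run, avoiding the lockup */
  }
  zone->contiguous = true;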
Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Pavel Tatashin <pasha.tatashin@soleen.com> Reviewed-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com> Reviewed-by: Baoquan He <bhe@redhat.com> Reviewed-by: Shile Zhang <shile.zhang@linux.alibaba.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Kirill Tkhai <ktkhai@virtuozzo.com> Cc: Shile Zhang <shile.zhang@linux.alibaba.com> Cc: Pavel Tatashin <pasha.tatashin@soleen.com> Cc: Daniel Jordan <daniel.m.jordan@oracle.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Alexander Duyck <alexander.duyck@gmail.com> Cc: Baoquan He <bhe@redhat.com> Cc: Oscar Salvador <osalvador@suse.de> Cc: <stable@vger.kernel.org> Link: http://lkml.kernel.org/r/20200416073417.5003-1-david@redhat.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Yafang Shao [Fri, 8 May 2020 01:35:43 +0000 (18:35 -0700)]
mm, memcg: fix error return value of mem_cgroup_css_alloc()
When I run my memcg test case, which creates lots of memcgs, I find
unexpected out-of-memory logs while there is still plenty of free
memory available. The error log is
The reason is that when we try to create more than MEM_CGROUP_ID_MAX
memcgs, an -ENOMEM errno will be set by mem_cgroup_css_alloc(), but the
right errno should be -ENOSPC ("No space left on device"), which is an
appropriate errno for userspace's failed mkdir.
As the errno really misled me, we should make it right. After this
patch, the error log will be
mkdir: cannot create directory 'foo.65533': No space left on device
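For illustration, a simplified sketch of the errno handling (not the
exact mem_cgroup_css_alloc()/mem_cgroup_alloc() code): keep the error
returned by the ID allocation so that exhausting the ID space surfaces
as -ENOSPC instead of -ENOMEM:

  long error = -ENOMEM;

  memcg->id.id = idr_alloc(&mem_cgroup_idr, NULL, 1, MEM_CGROUP_ID_MAX,
                           GFP_KERNEL);
  if (memcg->id.id < 0) {
          error = memcg->id.id;   /* -ENOSPC when no more IDs are available */
          goto fail;
  }
  /* ... */
  fail:
          __mem_cgroup_free(memcg);
          return ERR_PTR(error);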
[akpm@linux-foundation.org: s/EBUSY/ENOSPC/, per Michal] Fixes: 73f576c04b94 ("mm: memcontrol: fix cgroup creation failure after many small jobs") Suggested-by: Matthew Wilcox <willy@infradead.org> Signed-off-by: Yafang Shao <laoar.shao@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Michal Hocko <mhocko@kernel.org> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Vladimir Davydov <vdavydov.dev@gmail.com> Link: http://lkml.kernel.org/r/20200407063621.GA18914@dhcp22.suse.cz Link: http://lkml.kernel.org/r/1586192163-20099-1-git-send-email-laoar.shao@gmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>