From e7f26a7ea66bb26d2197f18a40ed9560ac8c9e77 Mon Sep 17 00:00:00 2001
From: Joonsoo Kim
Date: Mon, 13 Oct 2014 15:51:01 -0700
Subject: [PATCH] mm/slab: fix unaligned access on sparc64

Commit 7eb07d84ff6c ("mm/slab: use percpu allocator for cpu cache")
changed the allocation method for the cpu cache array from the slab
allocator to the percpu allocator.  An alignment must be passed to the
percpu allocator to get aligned memory, but that commit mistakenly set
the alignment to 0, so the percpu allocator returns an unaligned memory
address.  This causes no problem on x86, which permits unaligned
access, but it does cause a problem on sparc64, which requires a strong
alignment guarantee.

The following bug report is from David Miller:

  I'm getting tons of the following on sparc64:

  [603965.383447] Kernel unaligned access at TPC[546b58] free_block+0x98/0x1a0
  [603965.396987] Kernel unaligned access at TPC[546b60] free_block+0xa0/0x1a0
  ...
  [603970.554394] log_unaligned: 333 callbacks suppressed
  ...

This patch provides a proper alignment parameter when allocating the
cpu cache to fix this unaligned memory access problem on sparc64.

Reported-by: David Miller
Tested-by: David Miller
Tested-by: Meelis Roos
Signed-off-by: Joonsoo Kim
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 mm/slab.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slab.c b/mm/slab.c
index 154aac8411c59..eb2b2ea301309 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1992,7 +1992,7 @@ static struct array_cache __percpu *alloc_kmem_cache_cpus(
 	struct array_cache __percpu *cpu_cache;
 
 	size = sizeof(void *) * entries + sizeof(struct array_cache);
-	cpu_cache = __alloc_percpu(size, 0);
+	cpu_cache = __alloc_percpu(size, sizeof(void *));
 
 	if (!cpu_cache)
 		return NULL;
-- 
2.39.5
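
For readers outside the kernel tree, here is a minimal userspace sketch of
the alignment requirement the fix restores. It is an illustration under
stated assumptions, not kernel code: aligned_alloc() stands in for the
align parameter of __alloc_percpu(), and struct array_cache_like is a
hypothetical, simplified stand-in for the kernel's struct array_cache.

	/*
	 * Sketch only: aligned_alloc() plays the role of
	 * __alloc_percpu(size, align); array_cache_like is a
	 * hypothetical simplification of struct array_cache.
	 */
	#include <stdint.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct array_cache_like {
		unsigned int avail;
		unsigned int limit;
		void *entry[];	/* trailing array of object pointers */
	};

	int main(void)
	{
		size_t entries = 16;
		size_t size = sizeof(struct array_cache_like) +
			      entries * sizeof(void *);

		/*
		 * Requesting sizeof(void *) alignment keeps the trailing
		 * entry[] array of pointers naturally aligned, so loads
		 * and stores through it are safe on strict-alignment
		 * targets such as sparc64. An alignment of 0, as in the
		 * buggy commit, makes no such promise.
		 */
		struct array_cache_like *ac =
			aligned_alloc(sizeof(void *), size);
		if (!ac)
			return 1;

		printf("misalignment: %zu bytes\n",
		       (size_t)((uintptr_t)ac % sizeof(void *)));

		free(ac);
		return 0;
	}

With alignment 0, the allocator is free to place the buffer such that the
pointers in entry[] straddle pointer-alignment boundaries; free_block()
then dereferences them and traps on sparc64, producing the unaligned
access messages quoted in the changelog above.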