From 275da3119017a0147dc7df637aee342c6598fe8d Mon Sep 17 00:00:00 2001
From: Michael Ellerman
Date: Thu, 11 May 2023 21:42:24 +1000
Subject: [PATCH] powerpc/64s/radix: Fix soft dirty tracking
MIME-Version: 1.0
Content-Type: text/plain; charset=utf8
Content-Transfer-Encoding: 8bit

commit 66b2ca086210732954a7790d63d35542936fc664 upstream.

It was reported that soft dirty tracking doesn't work when using the
Radix MMU.

The tracking is supposed to work by clearing the soft dirty bit for a
mapping and then write protecting the PTE. If/when the page is written
to, a page fault occurs and the soft dirty bit is added back via
pte_mkdirty(). For example in wp_page_reuse():

	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
	if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
		update_mmu_cache(vma, vmf->address, vmf->pte);

Unfortunately on radix _PAGE_SOFT_DIRTY is being dropped by
radix__ptep_set_access_flags(), called from ptep_set_access_flags(),
meaning the soft dirty bit is not set even though the page has been
written to.

Fix it by adding _PAGE_SOFT_DIRTY to the set of bits that are able to
be changed in radix__ptep_set_access_flags().

Fixes: 77e575e2a3ac ("powerpc/mm/radix: Add radix pte #defines")
Cc: stable@vger.kernel.org # v4.7+
Reported-by: Dan Horák
Link: https://lore.kernel.org/r/20230511095558.56663a50f86bdc4cd97700b7@danny.cz
Signed-off-by: Michael Ellerman
Link: https://msgid.link/20230511114224.977423-1-mpe@ellerman.id.au
Signed-off-by: Greg Kroah-Hartman
---
 arch/powerpc/mm/book3s64/radix_pgtable.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index bdcb07a98cd37..dc8f3fb02ac21 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -1018,8 +1018,8 @@ void radix__ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
 				  pte_t entry, unsigned long address, int psize)
 {
 	struct mm_struct *mm = vma->vm_mm;
-	unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_ACCESSED |
-					      _PAGE_RW | _PAGE_EXEC);
+	unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_SOFT_DIRTY |
+					      _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC);
 
 	unsigned long change = pte_val(entry) ^ pte_val(*ptep);
 	/*
-- 
2.39.5
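
For background on the behaviour the patch restores: soft dirty tracking is
driven from userspace through procfs, and the report above corresponds to the
soft-dirty bit never reappearing after a page is redirtied. Below is a minimal
sketch (not part of the patch) of how that can be observed, assuming only the
documented interface: writing "4" to /proc/self/clear_refs clears the
soft-dirty bits and write-protects the PTEs, and bit 55 of each 64-bit
/proc/self/pagemap entry reports soft-dirty; reading pagemap may require extra
privileges on some configurations.

/*
 * Illustration only, not from the patch: exercise soft dirty tracking
 * via the documented clear_refs/pagemap interface. Without the fix, on
 * a Radix MMU system the bit can stay clear even after the write below.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static uint64_t pagemap_entry(const void *addr)
{
	uint64_t entry = 0;
	long pagesize = sysconf(_SC_PAGESIZE);
	off_t offset = ((uintptr_t)addr / pagesize) * sizeof(entry);
	int fd = open("/proc/self/pagemap", O_RDONLY);

	if (fd < 0 || pread(fd, &entry, sizeof(entry), offset) != sizeof(entry)) {
		perror("pagemap");
		exit(1);
	}
	close(fd);
	return entry;
}

int main(void)
{
	long pagesize = sysconf(_SC_PAGESIZE);
	char *page = aligned_alloc(pagesize, pagesize);
	int fd;

	memset(page, 0, pagesize);	/* fault the page in */

	/* Clear soft-dirty bits and write-protect the PTEs. */
	fd = open("/proc/self/clear_refs", O_WRONLY);
	if (fd < 0 || write(fd, "4", 1) != 1) {
		perror("clear_refs");
		return 1;
	}
	close(fd);

	page[0] = 1;			/* dirty the page again */

	/* Bit 55 of the pagemap entry is the soft-dirty flag. */
	printf("soft-dirty after write: %s\n",
	       (pagemap_entry(page) >> 55) & 1 ? "set" : "clear");
	return 0;
}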