powerpc/64s/radix: Fix soft dirty tracking
author		Michael Ellerman <mpe@ellerman.id.au>
		Thu, 11 May 2023 11:42:24 +0000 (21:42 +1000)
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
		Wed, 24 May 2023 16:30:23 +0000 (17:30 +0100)
commit 66b2ca086210732954a7790d63d35542936fc664 upstream.

It was reported that soft dirty tracking doesn't work when using the
Radix MMU.
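
For reference, soft dirty state is normally exercised from userspace via
/proc/pid/clear_refs and /proc/pid/pagemap. A minimal sketch (not part of
this patch, error handling omitted) of the expected behaviour follows; on an
affected radix system the final check wrongly reports the page as clean:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Read the pagemap entry for addr; bit 55 is the soft dirty bit. */
static int soft_dirty(void *addr)
{
	uint64_t entry = 0;
	int fd = open("/proc/self/pagemap", O_RDONLY);

	pread(fd, &entry, sizeof(entry),
	      ((uintptr_t)addr / sysconf(_SC_PAGESIZE)) * sizeof(entry));
	close(fd);
	return (entry >> 55) & 1;
}

int main(void)
{
	long psz = sysconf(_SC_PAGESIZE);
	char *page = mmap(NULL, psz, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	int fd = open("/proc/self/clear_refs", O_WRONLY);

	page[0] = 1;		/* fault the page in */
	write(fd, "4", 1);	/* clear soft dirty bits, write protect PTEs */
	close(fd);

	page[0] = 2;		/* dirty it again via the write fault path */
	printf("soft dirty after write: %d\n", soft_dirty(page));
	return 0;
}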

The tracking is supposed to work by clearing the soft dirty bit for a
mapping and then write protecting the PTE. If/when the page is written
to, a page fault occurs and the soft dirty bit is added back via
pte_mkdirty(). For example in wp_page_reuse():

entry = maybe_mkwrite(pte_mkdirty(entry), vma);
if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
update_mmu_cache(vma, vmf->address, vmf->pte);

Unfortunately on radix _PAGE_SOFT_DIRTY is being dropped by
radix__ptep_set_access_flags(), called from ptep_set_access_flags(),
meaning the soft dirty bit is not set even though the page has been
written to.
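
The relevant pre-fix line in radix__ptep_set_access_flags() (also visible in
the diff below) only propagates a fixed whitelist of bits from the new entry
into the PTE:

	unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_ACCESSED |
					      _PAGE_RW | _PAGE_EXEC);

The _PAGE_SOFT_DIRTY bit that pte_mkdirty() set in entry is masked out here,
so it never reaches the installed PTE.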

Fix it by adding _PAGE_SOFT_DIRTY to the set of bits that can be changed
in radix__ptep_set_access_flags().

Fixes: b0b5e9b13047 ("powerpc/mm/radix: Add radix pte #defines")
Cc: stable@vger.kernel.org # v4.7+
Reported-by: Dan Horák <dan@danny.cz>
Link: https://lore.kernel.org/r/20230511095558.56663a50f86bdc4cd97700b7@danny.cz
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230511114224.977423-1-mpe@ellerman.id.au
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arch/powerpc/mm/book3s64/radix_pgtable.c

index 26245aaf12b8bddb8f6082ce5b2b3c093f2e00cc..2297aa764ecdbcedbaf7958df234f5d5c4408470 100644 (file)
@@ -1040,8 +1040,8 @@ void radix__ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
                                  pte_t entry, unsigned long address, int psize)
 {
        struct mm_struct *mm = vma->vm_mm;
-       unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_ACCESSED |
-                                             _PAGE_RW | _PAGE_EXEC);
+       unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_SOFT_DIRTY |
+                                             _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC);
 
        unsigned long change = pte_val(entry) ^ pte_val(*ptep);
        /*