csky: patch_text: Fixup last cpu should be master
author Guo Ren <guoren@linux.alibaba.com>
Wed, 6 Apr 2022 14:28:43 +0000 (22:28 +0800)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thu, 9 Jun 2022 08:30:50 +0000 (10:30 +0200)
commit 8c4d16471e2babe9bdfe41d6ef724526629696cb upstream.

The patch_text implementation uses the stop_machine_cpuslocked
infrastructure with an atomic cpu_count. The original idea: while the
master CPU runs patch_text, the other CPUs should wait for it. But the
current implementation uses the first CPU as the master, which cannot
guarantee that the remaining CPUs are already waiting. This patch makes
the last CPU the master instead, closing that potential race.
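The effect of the change is easier to see with the whole rendezvous in
view. Below is a minimal userspace sketch, not the kernel code itself:
pthreads and C11 atomics stand in for stop_machine_cpuslocked(),
atomic_t and cpu_relax(), and NCPUS, patch_text_cb_sim, insn and the
opcode value 0x1234 are made-up stand-ins. It illustrates why counting
up to num_online_cpus() instead of 1 makes the patching CPU the last
one to arrive, so every other CPU is already inside the callback and
cannot be executing the old instruction while it is rewritten.

        /* Illustrative userspace simulation only; build with -pthread. */
        #include <stdatomic.h>
        #include <pthread.h>
        #include <stdio.h>

        #define NCPUS 4                         /* stand-in for num_online_cpus() */

        static atomic_int cpu_count;            /* stand-in for param->cpu_count */
        static volatile unsigned short insn;    /* stand-in for the patched slot */

        static void *patch_text_cb_sim(void *arg)
        {
                long cpu = (long)arg;

                /*
                 * Fixed scheme: only the LAST thread to arrive performs the
                 * patch, so by then every other thread has already entered
                 * this callback and is heading into the wait loop below.
                 */
                if (atomic_fetch_add(&cpu_count, 1) + 1 == NCPUS) {
                        insn = 0x1234;                  /* "write the new opcode" */
                        atomic_fetch_add(&cpu_count, 1);/* release the waiters */
                } else {
                        /* Everyone else waits until the master is done. */
                        while (atomic_load(&cpu_count) <= NCPUS)
                                ;                       /* cpu_relax() in the kernel */
                }

                printf("cpu %ld sees insn %#x\n", cpu, (unsigned)insn);
                return NULL;
        }

        int main(void)
        {
                pthread_t t[NCPUS];

                for (long i = 0; i < NCPUS; i++)
                        pthread_create(&t[i], NULL, patch_text_cb_sim, (void *)i);
                for (int i = 0; i < NCPUS; i++)
                        pthread_join(t[i], NULL);
                return 0;
        }

With the old "== 1" test, the first thread to increment the counter
would start writing immediately, while later threads might not yet have
reached the wait loop; counting to NCPUS removes that window.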

Fixes: 33e53ae1ce41 ("csky: Add kprobes supported")
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arch/csky/kernel/probes/kprobes.c

index 42920f25e73c8875a9c58401cb8f922d52f5eb20..34ba684d5962b1067a4121f515b6761fea7c3aaf 100644 (file)
@@ -30,7 +30,7 @@ static int __kprobes patch_text_cb(void *priv)
        struct csky_insn_patch *param = priv;
        unsigned int addr = (unsigned int)param->addr;
 
-       if (atomic_inc_return(&param->cpu_count) == 1) {
+       if (atomic_inc_return(&param->cpu_count) == num_online_cpus()) {
                *(u16 *) addr = cpu_to_le16(param->opcode);
                dcache_wb_range(addr, addr + 2);
                atomic_inc(&param->cpu_count);