When instrumenting memory accesses for plugins, we force memory
accesses to use the slow path for the MMU [1]. This creates a
situation where we end up calling ptw_setl_slow. This was fixed
recently in [2], but the issue could still appear outside of the
plugin use case.

Since this function gets called while we are inside cpu_exec,
start_exclusive then hangs. This exclusive section was initially
introduced for security reasons [3].

I suspect this code path had simply never been triggered before,
because ptw_setl_slow is always called transitively from cpu_exec,
so any call would have resulted in a hang.
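
For reference, the bracketing the fix relies on looks roughly like the
sketch below. This is a minimal illustration of the cpu_exec /
exclusive-section pattern, not code taken from this patch;
rmw_under_cpu_exec() and do_exclusive_work() are hypothetical names
used only here.

    #include "qemu/osdep.h"   /* common QEMU header, pulls in g_assert() */
    #include "hw/core/cpu.h"  /* CPUState, cpu_exec_start/end,
                               * start_exclusive/end_exclusive */

    /* Hypothetical placeholder for the actual read-modify-write work. */
    static void do_exclusive_work(void)
    {
    }

    /* Leave cpu_exec's running set before taking the exclusive
     * section, and rejoin it afterwards. */
    static void rmw_under_cpu_exec(CPUState *cpu)
    {
        g_assert(cpu->running);   /* we must be inside cpu_exec */
        cpu_exec_end(cpu);        /* leave the running set first ... */
        start_exclusive();        /* ... so this wait cannot deadlock */

        do_exclusive_work();      /* work that must run exclusively */

        end_exclusive();
        cpu_exec_start(cpu);      /* rejoin cpu_exec's running set */
    }

Calling cpu_exec_end() first matters because start_exclusive() waits
for every CPU still inside cpu_exec to leave; without it, the calling
CPU would be waiting for itself.
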
[1] https://gitlab.com/qemu-project/qemu/-/commit/6d03226b42247b68ab2f0b3663e0f624335a4055
[2] https://gitlab.com/qemu-project/qemu/-/commit/115ade42d50144c15b74368d32dc734ea277d853
[3] https://gitlab.com/qemu-project/qemu/-/issues/279

Fixes: https://gitlab.com/qemu-project/qemu/-/issues/2566
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20241025175857.2554252-2-pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

 {
     uint32_t cmp;
 
+    CPUState *cpu = env_cpu(in->env);
+    /* We are in cpu_exec, and start_exclusive can't be called directly. */
+    g_assert(cpu->running);
+    cpu_exec_end(cpu);
     /* Does x86 really perform a rmw cycle on mmio for ptw? */
     start_exclusive();
     cmp = cpu_ldl_mmuidx_ra(in->env, in->gaddr, in->ptw_idx, 0);
     if (cmp == old) {
         cpu_stl_mmuidx_ra(in->env, in->gaddr, new, in->ptw_idx, 0);
     }
     end_exclusive();
+    cpu_exec_start(cpu);
     return cmp == old;
 }