perf/x86/intel: Optimize FIXED_CTR_CTRL access
author    Kan Liang <kan.liang@linux.intel.com>
          Thu, 4 Aug 2022 14:07:29 +0000 (07:07 -0700)
committer Peter Zijlstra <peterz@infradead.org>
          Wed, 7 Sep 2022 19:54:04 +0000 (21:54 +0200)
commit    fae9ebde9696385fa2e993e752cf68d9781f3ea0
tree      7307a940040422d0bfe4eea5a11449636e534963
parent    dbf4e792beadafc684ef455453c613ff182c7723

All the fixed counters share a single fixed control register. Perf
currently reads and rewrites the fixed control register for every
fixed-counter enable/disable, which is unnecessary.

When changing the fixed control register, the entire PMU must be
disabled via the global control register. The change does not take
effect until the entire PMU is re-enabled, so updating the fixed
control register once, right before the PMU is re-enabled, is
sufficient.

The read of the fixed control register is not necessary either. The
value can be cached in the per-CPU cpu_hw_events.

Test results:

Counting all the fixed counters with perf bench sched pipe, as below,
on an SPR machine.

 $perf stat -e cycles,instructions,ref-cycles,slots --no-inherit --
  taskset -c 1 perf bench sched pipe

The total elapsed time drops from 5.36s (without the patch) to 4.99s
(with the patch), a ~6.9% improvement.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20220804140729.2951259-1-kan.liang@linux.intel.com
arch/x86/events/intel/core.c
arch/x86/events/perf_event.h