drm/amdgpu: Put drm_dev_enter/exit outside hot codepath
We hit a soft hang while running a memory pressure test on one NUMA system.
After a quick look, this is because KFD invalidates/validates userptr memory
frequently with the process_info lock held. It looks like updating the page
table mapping uses too much CPU time.
perf top shows the following:
  75.81%  [kernel]  [k] __srcu_read_unlock
   6.19%  [amdgpu]  [k] amdgpu_gmc_set_pte_pde
   3.56%  [kernel]  [k] __srcu_read_lock
   2.20%  [amdgpu]  [k] amdgpu_vm_cpu_update
   2.20%  [kernel]  [k] __sg_page_iter_dma_next
   2.15%  [drm]     [k] drm_dev_enter
   1.70%  [drm]     [k] drm_prime_sg_to_dma_addr_array
   1.18%  [kernel]  [k] __sg_alloc_table_from_pages
   1.09%  [drm]     [k] drm_dev_exit
drm_dev_enter/exit take an SRCU read lock, and the GMC code currently does
that for every single PTE update, which is why __srcu_read_lock/unlock
dominate the profile. So move drm_dev_enter/exit out of the GMC code and let
the callers do it instead. Those callers are gart_unbind, gart_map,
vm_clear_bo, vm_update_pdes and gmc_init_pdb0; vm_bo_update_mapping already
calls it.
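
A minimal sketch of the pattern, using simplified hypothetical helpers
(write_pte/map_range stand in for amdgpu_gmc_set_pte_pde and its callers,
they are not the real functions): the SRCU read section moves from the
per-PTE helper into the caller and is taken once around the whole loop.

  #include <drm/drm_drv.h>
  #include <linux/io.h>

  /* Before: every PTE write entered/exited the device, i.e. one
   * srcu_read_lock/unlock pair per PTE in the hot path.
   */
  static void write_pte_old(struct amdgpu_device *adev, void __iomem *pt,
                            uint32_t gpu_page_idx, uint64_t value)
  {
          int idx;

          if (!drm_dev_enter(adev_to_drm(adev), &idx))
                  return;
          writeq(value, pt + (gpu_page_idx * 8));
          drm_dev_exit(idx);
  }

  /* After: the helper only writes the PTE ... */
  static void write_pte(void __iomem *pt, uint32_t gpu_page_idx,
                        uint64_t value)
  {
          writeq(value, pt + (gpu_page_idx * 8));
  }

  /* ... and the caller brackets the whole update loop once. */
  static void map_range(struct amdgpu_device *adev, void __iomem *pt,
                        uint32_t first, uint32_t count,
                        dma_addr_t *dma_addr, uint64_t flags)
  {
          uint32_t i;
          int idx;

          if (!drm_dev_enter(adev_to_drm(adev), &idx))
                  return;
          for (i = 0; i < count; i++)
                  write_pte(pt, first + i, dma_addr[i] | flags);
          drm_dev_exit(idx);
  }

This keeps the unplug protection intact while paying the
srcu_read_lock/unlock cost once per range instead of once per PTE.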
Signed-off-by: xinhui pan <xinhui.pan@amd.com>
Reviewed-and-tested-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>