A mix of the system unbound wq and the Xe ordered wq was used for the
rebind; only use the Xe ordered wq. This ensures only one rebind is
occurring at a time, providing a somewhat clunky workaround for
shortcomings in TTM with respect to memory contention. Once the TTM
memory contention is resolved we should be able to use a dedicated
non-ordered workqueue. Also add a helper to queue the rebind worker to
avoid using the wrong workqueue going forward.
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
dma_fence_signal(&pfence->base);
dma_fence_end_signalling(cookie);
- queue_work(system_unbound_wq, &e->vm->preempt.rebind_work);
+ xe_vm_queue_rebind_worker(e->vm);
xe_engine_put(e);
}
/* Rebinds may have been blocked, give worker a kick */
if (xe_vm_in_compute_mode(vm))
- queue_work(vm->xe->ordered_wq,
- &vm->preempt.rebind_work);
+ xe_vm_queue_rebind_worker(vm);
}
goto put_engine;
int xe_vma_userptr_check_repin(struct xe_vma *vma);
+static inline void xe_vm_queue_rebind_worker(struct xe_vm *vm)
+{
+ XE_WARN_ON(!xe_vm_in_compute_mode(vm));
+ queue_work(vm->xe->ordered_wq, &vm->preempt.rebind_work);
+}
+
/*
* XE_ONSTACK_TV is used to size the tv_onstack array that is input
* to xe_vm_lock_dma_resv() and xe_vm_unlock_dma_resv().