drm/xe: Don't process TLB invalidation done in CT fast-path
author Matthew Brost <matthew.brost@intel.com>
Fri, 20 Jan 2023 17:17:50 +0000 (09:17 -0800)
committer Rodrigo Vivi <rodrigo.vivi@intel.com>
Tue, 19 Dec 2023 23:27:45 +0000 (18:27 -0500)
We can't currently do this because the TLB invalidation done handler
expects seqnos to be received in order; with the fast-path, a TLB
invalidation done message could, in an extreme corner case, overtake
one still being processed in the slow-path. Remove TLB invalidation
done from the fast-path for now and re-enable it in a follow up once
the TLB invalidation done handler can deal with out-of-order seqnos.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
drivers/gpu/drm/xe/xe_guc_ct.c

index f48eb01847efa9e3b608931b04d119da8aab22a1..6e25c1d5d43eadd1f0d27759223072c242f103c1 100644 (file)
@@ -966,7 +966,14 @@ static int g2h_read(struct xe_guc_ct *ct, u32 *msg, bool fast_path)
                        return 0;
 
                switch (FIELD_GET(GUC_HXG_EVENT_MSG_0_ACTION, msg[1])) {
-               case XE_GUC_ACTION_TLB_INVALIDATION_DONE:
+               /*
+                * FIXME: We really should process
+                * XE_GUC_ACTION_TLB_INVALIDATION_DONE here in the fast-path as
+                * these are critical for page fault performance. We currently can't
+                * due to the TLB invalidation done algorithm expecting the seqno
+                * to be returned in-order. With some small changes to the algorithm
+                * and locking we should be able to support out-of-order seqno.
+                */
                case XE_GUC_ACTION_REPORT_PAGE_FAULT_REQ_DESC:
                        break;  /* Process these in fast-path */
                default:
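
For context, the ordering constraint can be illustrated with a minimal
userspace sketch of an in-order-only done handler. This is not the actual
xe code: all names here (tlb_inval_state, tlb_inval_done_handler,
seqno_recv) are hypothetical, and it assumes a handler that completes
every pending invalidation up to the received seqno in a single step.

    /*
     * Illustrative sketch only (not the actual xe handler). Models a done
     * handler that assumes seqnos arrive in order: advancing seqno_recv
     * implicitly completes every pending invalidation with a smaller seqno.
     */
    #include <stdbool.h>
    #include <stdio.h>

    struct tlb_inval_state {
            int seqno_recv; /* last seqno acknowledged by the GuC */
    };

    /* True if @seqno is already covered by @state->seqno_recv. */
    static bool tlb_inval_seqno_past(const struct tlb_inval_state *state,
                                     int seqno)
    {
            /* Simplified: seqno wrap handling omitted for clarity. */
            return seqno <= state->seqno_recv;
    }

    static void tlb_inval_done_handler(struct tlb_inval_state *state, int seqno)
    {
            if (tlb_inval_seqno_past(state, seqno)) {
                    /* An out-of-order ack looks like a stale duplicate. */
                    printf("seqno %d dropped (seqno_recv = %d)\n",
                           seqno, state->seqno_recv);
                    return;
            }
            state->seqno_recv = seqno;
            printf("completed all invalidations up to seqno %d\n", seqno);
    }

    int main(void)
    {
            struct tlb_inval_state state = { .seqno_recv = 0 };

            /*
             * Fast-path processing of the ack for seqno 2 races ahead of the
             * slow-path still handling seqno 1: invalidation 1 is treated as
             * complete early, and its own ack is then dropped as stale.
             */
            tlb_inval_done_handler(&state, 2);
            tlb_inval_done_handler(&state, 1);
            return 0;
    }

This is why the fast-path keeps only XE_GUC_ACTION_REPORT_PAGE_FAULT_REQ_DESC
for now; per the FIXME, the done algorithm and its locking would need to be
adjusted before out-of-order seqnos can be tolerated.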