When managed interrupts are used (and shost->nr_hw_queues is set), a
fixed per-device queue is still used for internal I/Os.
If all the CPUs mapped to that queue are offlined, then the completions for
that queue are not serviced and any internal I/Os will time out.
Fix by selecting the queue for internal I/Os from the queue mapped to the
current CPU in this scenario.
This is still not ideal, as it does not handle CPU hotplug for in-flight
internal I/Os; that needs proper support from [0].
[0] https://lore.kernel.org/linux-scsi/20200703130122.111448-1-hare@suse.de/T/#m7d77d049b18f33a24ef206af69ebb66d07440556
Link: https://lore.kernel.org/r/1607347855-59091-1-git-send-email-john.garry@huawei.com
Fixes: 8d98416a55eb ("scsi: hisi_sas: Switch v3 hw to MQ")
Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
 		blk_tag = blk_mq_unique_tag(scmd->request);
 		dq_index = blk_mq_unique_tag_to_hwq(blk_tag);
 		*dq_pointer = dq = &hisi_hba->dq[dq_index];
+	} else if (hisi_hba->shost->nr_hw_queues) {
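+		/* Internal I/O: use the queue mapped to the current CPU */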
+		struct Scsi_Host *shost = hisi_hba->shost;
+		struct blk_mq_queue_map *qmap = &shost->tag_set.map[HCTX_TYPE_DEFAULT];
+		int queue = qmap->mq_map[raw_smp_processor_id()];
+
+		*dq_pointer = dq = &hisi_hba->dq[queue];
 	} else {
 		*dq_pointer = dq = sas_dev->dq;
 	}
 			rc = -ENOENT;
 			goto free_irq_vectors;
 		}
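+		/* Save the affinity mask of this completion queue's interrupt */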
+		cq->irq_mask = pci_irq_get_affinity(pdev, i + BASE_VECTORS_V3_HW);
+		if (!cq->irq_mask) {
+			dev_err(dev, "could not get cq%d irq affinity!\n", i);
+			return -ENOENT;
+		}
 	}
 	return 0;