nvmet-tcp: always initialize tls_handshake_tmo_work
author    Hannes Reinecke <hare@suse.de>
          Fri, 20 Oct 2023 05:06:06 +0000 (07:06 +0200)
committer Keith Busch <kbusch@kernel.org>
          Mon, 20 Nov 2023 17:25:33 +0000 (09:25 -0800)
The TLS handshake timeout work item should always be initialized,
even when TLS support is not compiled in, to avoid a crash when the
uninitialized work item is later cancelled.
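
For context, a minimal sketch of the pattern the patch establishes is shown
below. The example_* names and skeleton functions are stand-ins for the
nvmet-tcp code, not the literal driver source; only INIT_DELAYED_WORK() and
cancel_delayed_work_sync() are the real workqueue API.

/*
 * Simplified illustration (assumed names, not the driver code): the
 * delayed-work handler only exists when CONFIG_NVME_TARGET_TCP_TLS is
 * set, so an empty stub is provided for the !TLS build and the work
 * item is initialized unconditionally.  Teardown can then call
 * cancel_delayed_work_sync() without touching an uninitialized work item.
 */
#include <linux/workqueue.h>

struct example_queue {                  /* stand-in for struct nvmet_tcp_queue */
	struct delayed_work tls_handshake_tmo_work;
	/* ... other fields ... */
};

#ifdef CONFIG_NVME_TARGET_TCP_TLS
static void example_tls_handshake_timeout(struct work_struct *w)
{
	/* real driver: tear down the queue when the handshake times out */
}
#else
/* stub so INIT_DELAYED_WORK() below has a valid handler in !TLS builds */
static void example_tls_handshake_timeout(struct work_struct *w) {}
#endif

static void example_alloc_queue(struct example_queue *queue)
{
	/* always initialize, regardless of CONFIG_NVME_TARGET_TCP_TLS */
	INIT_DELAYED_WORK(&queue->tls_handshake_tmo_work,
			  example_tls_handshake_timeout);
#ifdef CONFIG_NVME_TARGET_TCP_TLS
	/* TLS-only setup (handshake upcall, timeout scheduling) goes here */
#endif
}

static void example_release_queue(struct example_queue *queue)
{
	/* safe on every build now that the work item is always initialized */
	cancel_delayed_work_sync(&queue->tls_handshake_tmo_work);
}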

Fixes: 675b453e0241 ("nvmet-tcp: enable TLS handshake upcall")
Suggested-by: Maurizio Lombardi <mlombard@redhat.com>
Signed-off-by: Hannes Reinecke <hare@suse.de>
Tested-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Tested-by: Yi Zhang <yi.zhang@redhat.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
drivers/nvme/target/tcp.c

index 92b74d0b8686a673c7d049d52d49971f77e7be9f..4cc27856aa8fefc53d2a77044ea3a3ef927c8ba5 100644 (file)
@@ -1854,6 +1854,8 @@ static int nvmet_tcp_tls_handshake(struct nvmet_tcp_queue *queue)
        }
        return ret;
 }
+#else
+static void nvmet_tcp_tls_handshake_timeout(struct work_struct *w) {}
 #endif
 
 static void nvmet_tcp_alloc_queue(struct nvmet_tcp_port *port,
@@ -1911,9 +1913,9 @@ static void nvmet_tcp_alloc_queue(struct nvmet_tcp_port *port,
        list_add_tail(&queue->queue_list, &nvmet_tcp_queue_list);
        mutex_unlock(&nvmet_tcp_queue_mutex);
 
-#ifdef CONFIG_NVME_TARGET_TCP_TLS
        INIT_DELAYED_WORK(&queue->tls_handshake_tmo_work,
                          nvmet_tcp_tls_handshake_timeout);
+#ifdef CONFIG_NVME_TARGET_TCP_TLS
        if (queue->state == NVMET_TCP_Q_TLS_HANDSHAKE) {
                struct sock *sk = queue->sock->sk;