Jens Axboe [Fri, 20 Oct 2023 16:37:01 +0000 (10:37 -0600)]
Merge tag 'md-next-20231020' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md into for-6.7/block
Pull MD changes from Song:
"1. Handle timeout in md-cluster, by Denis Plotnikov;
2. Cleanup pers->prepare_suspend, by Yu Kuai."
* tag 'md-next-20231020' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md:
md: cleanup pers->prepare_suspend()
md-cluster: check for timeout while a new disk adding
Jiapeng Chong [Thu, 19 Oct 2023 03:04:44 +0000 (11:04 +0800)]
block: ublk_drv: Remove unused function
The function is defined in the ublk_drv.c file but is not called
anywhere else, so delete it.
drivers/block/ublk_drv.c:1211:20: warning: unused function 'ublk_abort_io_cmds'.
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Closes: https://bugzilla.openanolis.cn/show_bug.cgi?id=6938
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Fixes: b4e1353f4651 ("ublk: simplify aborting request")
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20231019030444.53680-1-jiapeng.chong@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Yu Kuai [Mon, 16 Oct 2023 10:02:40 +0000 (18:02 +0800)]
md: cleanup pers->prepare_suspend()
pers->prepare_suspend() is not used anymore and can be removed.
This reverts the following three commits:
- commit 431e61257d63 ("md: export md_is_rdwr() and is_md_suspended()")
- commit 3e00777d5157 ("md: add a new api prepare_suspend() in md_personality")
- commit 868bba54a3bc ("md/raid5: fix a deadlock in the case that reshape is interrupted")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231016100240.540474-1-yukuai1@huaweicloud.com
Jens Axboe [Tue, 17 Oct 2023 21:20:14 +0000 (15:20 -0600)]
Merge tag 'nvme-6.7-2023-10-17' of git://git.infradead.org/nvme into for-6.7/block
Pull NVMe updates from Keith:
"nvme updates for Linux 6.7
- nvme-auth updates (Mark)
- nvme-tcp tls (Hannes)
- nvme-fc annotations (Kees)"
* tag 'nvme-6.7-2023-10-17' of git://git.infradead.org/nvme: (24 commits)
nvme-auth: allow mixing of secret and hash lengths
nvme-auth: use transformed key size to create resp
nvme-auth: alloc nvme_dhchap_key as single buffer
nvmet-tcp: use 'spin_lock_bh' for state_lock()
nvme: rework NVME_AUTH Kconfig selection
nvmet-tcp: peek icreq before starting TLS
nvmet-tcp: control messages for recvmsg()
nvmet-tcp: enable TLS handshake upcall
nvmet: Set 'TREQ' to 'required' when TLS is enabled
nvmet-tcp: allocate socket file
nvmet-tcp: make nvmet_tcp_alloc_queue() a void function
nvmet: make TCP sectype settable via configfs
nvme-fabrics: parse options 'keyring' and 'tls_key'
nvme-tcp: improve icreq/icresp logging
nvme-tcp: control message handling for recvmsg()
nvme-tcp: enable TLS handshake upcall
nvme-tcp: allocate socket file
security/keys: export key_lookup()
nvme-keyring: implement nvme_tls_psk_default()
nvme-tcp: add definitions for TLS cipher suites
...
Mark O'Donovan [Tue, 17 Oct 2023 17:09:19 +0000 (17:09 +0000)]
nvme-auth: allow mixing of secret and hash lengths
We can now use any of the secret transformation hashes with a
secret, regardless of the secret size.
e.g. a 32 byte key with the SHA-512(64 byte) hash.
The example secret from the spec should now be permitted with
any of the following:
DHHC-1:00:ia6zGodOr4SEG0Zzaw398rpY0wqipUWj4jWjUh4HWUz6aQ2n:
DHHC-1:01:ia6zGodOr4SEG0Zzaw398rpY0wqipUWj4jWjUh4HWUz6aQ2n:
DHHC-1:02:ia6zGodOr4SEG0Zzaw398rpY0wqipUWj4jWjUh4HWUz6aQ2n:
DHHC-1:03:ia6zGodOr4SEG0Zzaw398rpY0wqipUWj4jWjUh4HWUz6aQ2n:
Note: Secrets are still restricted to 32, 48, or 64 bytes.
Co-developed-by: Akash Appaiah <Akash.Appaiah@dell.com>
Signed-off-by: Akash Appaiah <Akash.Appaiah@dell.com>
Signed-off-by: Mark O'Donovan <shiftee@posteo.net>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Mark O'Donovan [Tue, 17 Oct 2023 17:09:18 +0000 (17:09 +0000)]
nvme-auth: use transformed key size to create resp
This does not change current behaviour, as the driver currently
verifies that the secret size is the same as the length of
the transformation hash.
Co-developed-by: Akash Appaiah <Akash.Appaiah@dell.com>
Signed-off-by: Akash Appaiah <Akash.Appaiah@dell.com>
Signed-off-by: Mark O'Donovan <shiftee@posteo.net>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Mark O'Donovan [Tue, 17 Oct 2023 17:09:17 +0000 (17:09 +0000)]
nvme-auth: alloc nvme_dhchap_key as single buffer
Co-developed-by: Akash Appaiah <Akash.Appaiah@dell.com>
Signed-off-by: Akash Appaiah <Akash.Appaiah@dell.com>
Signed-off-by: Mark O'Donovan <shiftee@posteo.net>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Thu, 12 Oct 2023 12:59:54 +0000 (14:59 +0200)]
nvmet-tcp: use 'spin_lock_bh' for state_lock()
nvmet_tcp_schedule_release_queue() is called from socket state
change callbacks, which may be invoked from softirq context,
so use 'spin_lock_bh' to avoid a spin lock warning.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
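A minimal sketch of the locking pattern described above (a hedged illustration; field and state names follow the nvmet-tcp code but are not quoted verbatim):

    /* may run in softirq context via the socket's state_change callback */
    spin_lock_bh(&queue->state_lock);
    if (queue->state != NVMET_TCP_Q_DISCONNECTING) {
        queue->state = NVMET_TCP_Q_DISCONNECTING;
        queue_work(nvmet_wq, &queue->release_work);
    }
    spin_unlock_bh(&queue->state_lock);

spin_lock_bh() disables bottom halves locally, so a process-context holder cannot deadlock against the same lock taken from softirq on the same CPU.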
Greg Joyce [Wed, 4 Oct 2023 20:19:57 +0000 (15:19 -0500)]
powerpc/pseries: PLPKS SED Opal keystore support
Define operations for SED Opal to read/write keys from the POWER
LPAR Platform KeyStore (PLPKS). This allows non-volatile storage
of SED Opal keys.
Signed-off-by: Greg Joyce <gjoyce@linux.vnet.ibm.com>
Reviewed-by: Jonathan Derrick <jonathan.derrick@linux.dev>
Link: https://lore.kernel.org/r/20231004201957.1451669-4-gjoyce@linux.vnet.ibm.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Greg Joyce [Wed, 4 Oct 2023 20:19:56 +0000 (15:19 -0500)]
block: sed-opal: keystore access for SED Opal keys
Allow for permanent SED authentication keys by
reading/writing to the SED Opal non-volatile keystore.
Signed-off-by: Greg Joyce <gjoyce@linux.vnet.ibm.com>
Reviewed-by: Jonathan Derrick <jonathan.derrick@linux.dev>
Link: https://lore.kernel.org/r/20231004201957.1451669-3-gjoyce@linux.vnet.ibm.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Greg Joyce [Wed, 4 Oct 2023 20:19:55 +0000 (15:19 -0500)]
block: sed-opal: SED Opal keystore
Add read and write functions that allow SED Opal keys to be stored
in a permanent keystore.
Signed-off-by: Greg Joyce <gjoyce@linux.vnet.ibm.com>
Reviewed-by: Jonathan Derrick <jonathan.derrick@linux.dev>
Link: https://lore.kernel.org/r/20231004201957.1451669-2-gjoyce@linux.vnet.ibm.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Mon, 9 Oct 2023 09:33:22 +0000 (17:33 +0800)]
ublk: simplify aborting request
Now ublk_abort_queue() runs exclusively with ublk_queue_rq() and the
ubq_daemon task, so simplify aborting requests:
- set UBLK_IO_FLAG_ABORTED in ublk_abort_queue() just for aborting
this request
- abort the request in ublk_queue_rq() if ubq->canceling is set
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20231009093324.957829-8-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Mon, 9 Oct 2023 09:33:21 +0000 (17:33 +0800)]
ublk: replace monitor with cancelable uring_cmd
Monitor work introduces one extra context for handling aborts; this
is prone to races and also adds extra delay when aborting.
Now that cancelable uring_cmd is supported, use it instead:
1) the cancel callback is either run from the uring cmd submission task
context or called after the io_uring context has exited, so the callback
runs exclusively with ublk_ch_uring_cmd() and __ublk_rq_task_work().
2) the previous patch freezes the request queue when calling
ublk_abort_queue(), which is now completely exclusive with
ublk_queue_rq() and ublk_ch_uring_cmd()/__ublk_rq_task_work().
3) in the timeout handler, if all IOs are in-flight, then all uring
commands are completed and uring command canceling can't provide
forward progress any more, so call ublk_abort_requests() there.
This simplifies aborting the queue and helps with adding new features,
such as relaxing the limit of using a single task to handle one queue.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20231009093324.957829-7-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Mon, 9 Oct 2023 09:33:20 +0000 (17:33 +0800)]
ublk: quiesce request queue when aborting queue
So far, aborting the queue ends requests when the ubq daemon is exiting,
and it can run concurrently with ublk_queue_rq(); this is fragile and
depends on the tricky usage of UBLK_IO_FLAG_ABORTED to avoid the race.
Quiesce the queue when aborting it, so that the two code paths run
completely exclusively; this makes it easier to add new ublk features,
such as relaxing the single-task limit for each queue.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20231009093324.957829-6-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Mon, 9 Oct 2023 09:33:19 +0000 (17:33 +0800)]
ublk: rename mm_lock as lock
Rename the mm_lock field of ublk_device to lock, so that this lock can
be reused to protect access to ub->ub_disk, which will be used to
simplify ublk_abort_queue() by quiescing the queue in the next patch.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20231009093324.957829-5-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Mon, 9 Oct 2023 09:33:18 +0000 (17:33 +0800)]
ublk: move ublk_cancel_dev() out of ub->mutex
ublk_cancel_dev() just calls ublk_cancel_queue() to cancel all pending
io commands after the ublk request queue is idle. The only required
protection is for the read & write of ubq->nr_io_ready and avoiding
duplicate command cancellation, so add one per-queue lock with a cancel
flag to provide this protection, and meanwhile move ublk_cancel_dev()
out of ub->mutex.
Then we no longer need to call io_uring_cmd_complete_in_task() to cancel
pending commands. And the same cancel logic will be reused for
cancelable uring commands.
This patch basically reverts commit ac5902f84bb5 ("ublk: fix AB-BA lockdep warning").
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20231009093324.957829-4-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Mon, 9 Oct 2023 09:33:17 +0000 (17:33 +0800)]
ublk: make sure io cmd handled in submitter task context
In a well-implemented ublk server, ublk io commands won't be linked
into any link chain. Meanwhile they are always handled in no-wait
style, so the io cmd is basically always handled in the submitter task
context.
However, the server may set IOSQE_ASYNC, or an io command may be linked
into a chain mistakenly; then we may still run into io-wq context where
ctx->uring_lock isn't held.
So in case of IO_URING_F_UNLOCKED, schedule this command via
io_uring_cmd_complete_in_task() to force running it in the submitter
task. Then ublk_ch_uring_cmd_local() is guaranteed to run with the
context uring_lock held, and we no longer need to worry about
synchronization among the submission code paths.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20231009093324.957829-3-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Mon, 9 Oct 2023 09:33:16 +0000 (17:33 +0800)]
ublk: don't get ublk device reference in ublk_abort_queue()
ublk_abort_queue() is called from ublk_daemon_monitor_work(), where it
is guaranteed that the device is live because monitor work is canceled
when removing the device, so there is no need to get the device
reference.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20231009093324.957829-2-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Mike Christie [Thu, 12 Oct 2023 15:06:00 +0000 (10:06 -0500)]
ublk: Make ublks_max configurable
We are converting tcmu applications to ublk, but have systems with up
to 1k devices. This patch allows us to configure the ublks_max from
userspace with the ublks_max modparam.
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20231012150600.6198-3-michael.christie@oracle.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Mike Christie [Thu, 12 Oct 2023 15:05:59 +0000 (10:05 -0500)]
ublk: Limit dev_id/ub_number values
The dev_id/ub_number is used for the ublk dev's char device's minor
number so it has to fit into MINORMASK. This patch adds checks to prevent
userspace from passing a number that's too large and limits what can be
allocated by the ublk_index_idr for the case where userspace has the
kernel allocate the dev_id/ub_number.
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20231012150600.6198-2-michael.christie@oracle.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
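A hedged sketch of the kind of check described (variable names are illustrative, not the literal patch):

    /* dev_id becomes the char device minor, so it must fit in MINORMASK */
    if (info.dev_id != U32_MAX && info.dev_id > MINORMASK)
        return -EINVAL;

    /* when the kernel allocates the id (dev_id == U32_MAX, i.e. -1),
     * cap the idr range the same way; idr_alloc()'s 'end' is exclusive */
    ret = idr_alloc(&ublk_index_idr, ub, 0, MINORMASK + 1, GFP_NOIO);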
Jens Axboe [Tue, 17 Oct 2023 14:26:38 +0000 (08:26 -0600)]
Merge branch 'for-6.7/io_uring' into for-6.7/block
Merge in io_uring fixes, as the ublk simplifying cancelations and
aborts depend on the two patches from Ming adding cancelation support
for uring_cmd.
* for-6.7/io_uring:
io_uring/kbuf: Use slab for struct io_buffer objects
io_uring/kbuf: Allow the full buffer id space for provided buffers
io_uring/kbuf: Fix check of BID wrapping in provided buffers
io_uring/rsrc: cleanup io_pin_pages()
io_uring: cancelable uring_cmd
io_uring: retain top 8bits of uring_cmd flags for kernel internal use
io_uring: add IORING_OP_WAITID support
exit: add internal include file with helpers
exit: add kernel_waitid_prepare() helper
exit: move core of do_wait() into helper
exit: abstract out should_wake helper for child_wait_callback()
io_uring/rw: add support for IORING_OP_READ_MULTISHOT
io_uring/rw: mark readv/writev as vectored in the opcode definition
io_uring/rw: split io_read() into a helper
Jens Axboe [Thu, 12 Oct 2023 17:35:58 +0000 (11:35 -0600)]
Merge tag 'md-next-20231012' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md into for-6.7/block
Pull MD changes from Song:
"1. Rewrite mddev_suspend(), by Yu Kuai;
2. Simplify md_seq_ops, by Yu Kuai;
3. Reduce unnecessary locking array_state_store(), by Mariusz Tkaczyk."
* tag 'md-next-20231012' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md: (23 commits)
md: rename __mddev_suspend/resume() back to mddev_suspend/resume()
md: remove old apis to suspend the array
md: suspend array in md_start_sync() if array need reconfiguration
md/raid5: replace suspend with quiesce() callback
md/md-linear: cleanup linear_add()
md: cleanup mddev_create/destroy_serial_pool()
md: use new apis to suspend array before mddev_create/destroy_serial_pool
md: use new apis to suspend array for ioctls involved array reconfiguration
md: use new apis to suspend array for adding/removing rdev from state_store()
md: use new apis to suspend array for sysfs apis
md/raid5: use new apis to suspend array
md/raid5-cache: use new apis to suspend array
md/md-bitmap: use new apis to suspend array for location_store()
md/dm-raid: use new apis to suspend array
md: add new helpers to suspend/resume and lock/unlock array
md: add new helpers to suspend/resume array
md: replace is_md_suspended() with 'mddev->suspended' in md_check_recovery()
md/raid5-cache: use READ_ONCE/WRITE_ONCE for 'conf->log'
md: use READ_ONCE/WRITE_ONCE for 'suspend_lo' and 'suspend_hi'
md/raid1: don't split discard io for write behind
...
Denis Plotnikov [Mon, 25 Sep 2023 12:59:40 +0000 (15:59 +0300)]
md-cluster: check for timeout while a new disk adding
Adding a new disk may end up with a timeout, in which case the new disk
won't be added. Return the error in that case.
Found by Linux Verification Center (linuxtesting.org) with SVACE
Signed-off-by: Denis Plotnikov <den-plotnikov@yandex-team.ru>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230925125940.1542506-1-den-plotnikov@yandex-team.ru
Hannes Reinecke [Thu, 12 Oct 2023 12:22:48 +0000 (14:22 +0200)]
nvme: rework NVME_AUTH Kconfig selection
Having a single Kconfig symbol NVME_AUTH conflates the selection
of the authentication functions from nvme/common and nvme/host,
causing the kbuild robot to complain when building the nvme target
only. So introduce a Kconfig symbol NVME_HOST_AUTH for the nvme
host bits and use NVME_AUTH for the common functions only.
And move the CRYPTO selection into nvme/common to make it
easier to read.
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202310120733.TlPOVeJm-lkp@intel.com/
Signed-off-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Thu, 24 Aug 2023 14:39:25 +0000 (16:39 +0200)]
nvmet-tcp: peek icreq before starting TLS
Incoming connections might be either 'normal' NVMe-TCP connections
starting with an icreq or TLS handshakes. To ensure that 'normal'
connections can still be handled, we need to peek at the first packet
and only start the TLS handshake if it's not an icreq.
With that we can lift the restriction to always set TREQ to
'required' when TLS1.3 is enabled.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Thu, 24 Aug 2023 14:39:24 +0000 (16:39 +0200)]
nvmet-tcp: control messages for recvmsg()
kTLS requires control messages for recvmsg() to relay any out-of-band
TLS messages (eg TLS alerts) to the caller.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Thu, 24 Aug 2023 14:39:23 +0000 (16:39 +0200)]
nvmet-tcp: enable TLS handshake upcall
TLS handshake is handled in userspace with the netlink tls handshake
protocol.
The patch adds a function to start the TLS handshake upcall for any
incoming network connections if the TCP TSAS sectype is set to 'tls1.3'.
A config option NVME_TARGET_TCP_TLS selects whether the TLS handshake
upcall should be compiled in. The patch also adds reference counting
to struct nvmet_tcp_queue to ensure the queue is always valid when
the TLS handshake completes.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Thu, 24 Aug 2023 14:39:22 +0000 (16:39 +0200)]
nvmet: Set 'TREQ' to 'required' when TLS is enabled
The current implementation does not support secure concatenation,
so 'TREQ' is always set to 'required' when TLS is enabled.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Thu, 24 Aug 2023 14:39:21 +0000 (16:39 +0200)]
nvmet-tcp: allocate socket file
For the TLS upcall we need to allocate a socket file such
that the userspace daemon is able to use the socket.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Thu, 24 Aug 2023 14:39:20 +0000 (16:39 +0200)]
nvmet-tcp: make nvmet_tcp_alloc_queue() a void function
The return value from nvmet_tcp_alloc_queue() is just used to
figure out if sock_release() needs to be called. So this patch
moves sock_release() into nvmet_tcp_alloc_queue() and makes it
a void function.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Nitesh Shetty <nj.shetty@samsung.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Thu, 24 Aug 2023 14:39:19 +0000 (16:39 +0200)]
nvmet: make TCP sectype settable via configfs
Add a new configfs attribute 'addr_tsas' to make the TCP sectype
settable via configfs.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Thu, 24 Aug 2023 14:39:18 +0000 (16:39 +0200)]
nvme-fabrics: parse options 'keyring' and 'tls_key'
Parse the fabrics options 'keyring' and 'tls_key' and store the
referenced keys in the options structure.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Thu, 24 Aug 2023 14:39:17 +0000 (16:39 +0200)]
nvme-tcp: improve icreq/icresp logging
When icreq/icresp fails, we should print a warning to inform
the user that the connection could not be established;
without it there won't be anything in the kernel message log,
just an error code returned to nvme-cli.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Thu, 24 Aug 2023 14:39:16 +0000 (16:39 +0200)]
nvme-tcp: control message handling for recvmsg()
kTLS sends TLS ALERT messages as control messages for recvmsg().
As we can't do anything sensible with them, just abort the connection
and let the userspace agent do a re-negotiation.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Thu, 24 Aug 2023 14:39:15 +0000 (16:39 +0200)]
nvme-tcp: enable TLS handshake upcall
Add a fabrics option 'tls' and start the TLS handshake upcall
with the default PSK. When TLS is started, the PSK key serial
number is displayed in the sysfs attribute 'tls_key'.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Thu, 24 Aug 2023 14:39:14 +0000 (16:39 +0200)]
nvme-tcp: allocate socket file
When using the TLS upcall we need to allocate a socket file such
that the userspace daemon is able to use the socket.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Thu, 24 Aug 2023 14:39:13 +0000 (16:39 +0200)]
security/keys: export key_lookup()
For in-kernel consumers one cannot readily assign a user (eg when
running from a workqueue), so the normal key search permissions
cannot be applied.
This patch exports the 'key_lookup()' function for a simple lookup
of keys without checking for permissions.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Thu, 24 Aug 2023 14:39:12 +0000 (16:39 +0200)]
nvme-keyring: implement nvme_tls_psk_default()
Implement a function to select the preferred PSK for TLS.
A 'retained' PSK should be preferred over a 'generated' PSK,
and SHA-384 should be preferred to SHA-256.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Thu, 24 Aug 2023 14:39:11 +0000 (16:39 +0200)]
nvme-tcp: add definitions for TLS cipher suites
Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Thu, 24 Aug 2023 14:39:10 +0000 (16:39 +0200)]
nvme: add TCP TSAS definitions
Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Thu, 24 Aug 2023 14:39:09 +0000 (16:39 +0200)]
nvme-keyring: define a 'psk' keytype
Define a 'psk' keytype to hold the NVMe TLS PSKs.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Hannes Reinecke [Thu, 24 Aug 2023 14:39:08 +0000 (16:39 +0200)]
nvme-keyring: register '.nvme' keyring
Register a '.nvme' keyring to hold keys for TLS and DH-HMAC-CHAP and
add a new config option NVME_KEYRING.
We need a separate keyring for NVMe as the configuration is done
via individual commands (eg for configfs), and the usual per-session
or per-process keyrings can't be used.
Signed-off-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Song Liu [Wed, 11 Oct 2023 02:23:32 +0000 (19:23 -0700)]
Merge branch 'md-suspend-rewrite' into md-next
From Yu Kuai, written by Song Liu
Recent tests with raid10 revealed many issues with the following scenarios:
- add or remove disks to the array
- issue io to the array
At first, we fixed each problem independently, respecting that io can
run concurrently with array reconfiguration. However, with more issues
being reported continuously, I am hoping to fix these problems
thoroughly.
Refer to how the block layer protects io against queue reconfiguration
(for example, changing the elevator):
blk_mq_freeze_queue
-> wait for all io to be done, and prevent new io to be dispatched
// reconfiguration
blk_mq_unfreeze_queue
I think we can do something similar to synchronize io with array
reconfiguration.
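For reference, the block-layer pattern being mirrored looks roughly like this (a sketch, not the md code):

    blk_mq_freeze_queue(q);    /* wait for in-flight io, block new dispatch */
    /* ... reconfigure, e.g. switch the elevator ... */
    blk_mq_unfreeze_queue(q);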
Current synchronization works as follows. For the reconfiguration
operation:
1. Hold 'reconfig_mutex';
2. Check that the rdev can be added/removed; one condition is that there
is no IO (for example, check nr_pending).
3. Do the actual operations to add/remove the rdev; one step is to
set/clear a pointer to the rdev.
4. Check if there is still no IO on this rdev; if there is, revert the
change.
The IO path uses rcu_read_lock/unlock() to access the rdev. The problems
with this approach:
- rcu is used wrongly;
- there are lots of places involved where the old rdev can be read,
however, many of them don't handle the old value correctly;
- between steps 3 and 4, if new io is dispatched, NULL will be read for
the rdev, and data will be lost if step 4 fails.
The new synchronization is similar to blk_mq_freeze_queue(). To add or
remove a disk:
1. Suspend the array, that is, stop new IO from being dispatched
and wait for inflight IO to finish.
2. Add or remove rdevs to/from the array;
3. Resume the array;
The IO path doesn't need to change for now, and all the rcu usage can
be removed.
The main work is then divided into 3 steps:
First, make sure the new apis to suspend the array are general:
- make sure suspending the array will wait for io to be done (done by [1]);
- make sure suspending the array can be called for all personalities (done by [2]);
- make sure suspending the array can be called at any time (done by [3]);
- make sure suspending the array doesn't rely on 'reconfig_mutex' (PATCH 3-5);
Second, replace the old apis with the new apis (PATCH 6-16). Specifically,
the synchronization is changed from:
lock reconfig_mutex
suspend array
make changes
resume array
unlock reconfig_mutex
to:
suspend array
lock reconfig_mutex
make changes
unlock reconfig_mutex
resume array
Finally, for the remaining paths that involve reconfiguration, suspend
the array first (PATCH 11, 12, [4] and PATCH 17):
Preparatory work:
[1] https://lore.kernel.org/all/20230621165110.1498313-1-yukuai1@huaweicloud.com/
[2] https://lore.kernel.org/all/20230628012931.88911-2-yukuai1@huaweicloud.com/
[3] https://lore.kernel.org/all/20230825030956.1527023-1-yukuai1@huaweicloud.com/
[4] https://lore.kernel.org/all/20230825031622.1530464-1-yukuai1@huaweicloud.com/
* md-suspend-rewrite:
md: rename __mddev_suspend/resume() back to mddev_suspend/resume()
md: remove old apis to suspend the array
md: suspend array in md_start_sync() if array need reconfiguration
md/raid5: replace suspend with quiesce() callback
md/md-linear: cleanup linear_add()
md: cleanup mddev_create/destroy_serial_pool()
md: use new apis to suspend array before mddev_create/destroy_serial_pool
md: use new apis to suspend array for ioctls involved array reconfiguration
md: use new apis to suspend array for adding/removing rdev from state_store()
md: use new apis to suspend array for sysfs apis
md/raid5: use new apis to suspend array
md/raid5-cache: use new apis to suspend array
md/md-bitmap: use new apis to suspend array for location_store()
md/dm-raid: use new apis to suspend array
md: add new helpers to suspend/resume and lock/unlock array
md: add new helpers to suspend/resume array
md: replace is_md_suspended() with 'mddev->suspended' in md_check_recovery()
md/raid5-cache: use READ_ONCE/WRITE_ONCE for 'conf->log'
md: use READ_ONCE/WRITE_ONCE for 'suspend_lo' and 'suspend_hi'
Yu Kuai [Tue, 10 Oct 2023 15:19:58 +0000 (23:19 +0800)]
md: rename __mddev_suspend/resume() back to mddev_suspend/resume()
Now that the old apis are removed, __mddev_suspend/resume() can be
renamed to their original names.
This is done by:
sed -i "s/__mddev_suspend/mddev_suspend/g" *.[ch]
sed -i "s/__mddev_resume/mddev_resume/g" *.[ch]
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-20-yukuai1@huaweicloud.com
Yu Kuai [Tue, 10 Oct 2023 15:19:57 +0000 (23:19 +0800)]
md: remove old apis to suspend the array
Now that mddev_suspend() and mddev_resume() are not used anywhere, remove
them, and remove 'MD_ALLOW_SB_UPDATE' and 'MD_UPDATING_SB' as well.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-19-yukuai1@huaweicloud.com
Yu Kuai [Tue, 10 Oct 2023 15:19:56 +0000 (23:19 +0800)]
md: suspend array in md_start_sync() if array need reconfiguration
This way io won't run concurrently with array reconfiguration, and it's
safe to suspend the array directly because normal io won't rely on
md_start_sync().
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-18-yukuai1@huaweicloud.com
Yu Kuai [Tue, 10 Oct 2023 15:19:55 +0000 (23:19 +0800)]
md/raid5: replace suspend with quiesce() callback
raid5 is the only personality to suspend the array in the check_reshape()
and start_reshape() callbacks. Suspend and the quiesce() callback can
both wait for all normal io to be done and prevent new io from being
dispatched; the difference is that suspend is implemented in the common
layer, while the quiesce() callback is implemented in raid5.
In order to clean up all the usage of mddev_suspend(), the new api
__mddev_suspend() needs to be called before 'reconfig_mutex' is held,
and it's not good to affect all the personalities in the common layer
just for raid5. Hence replace suspend with the quiesce() callback,
preparing to remove all the users of mddev_suspend().
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-17-yukuai1@huaweicloud.com
Yu Kuai [Tue, 10 Oct 2023 15:19:54 +0000 (23:19 +0800)]
md/md-linear: cleanup linear_add()
Now that the caller already suspends the array, there is no need to
suspend the array in linear_add().
Note that mddev_suspend/resume() is not used anymore.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-16-yukuai1@huaweicloud.com
Yu Kuai [Tue, 10 Oct 2023 15:19:53 +0000 (23:19 +0800)]
md: cleanup mddev_create/destroy_serial_pool()
Now that, except for stopping the array, all the callers already suspend
the array, there is no need to suspend anymore; hence remove the second
parameter.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-15-yukuai1@huaweicloud.com
Yu Kuai [Tue, 10 Oct 2023 15:19:52 +0000 (23:19 +0800)]
md: use new apis to suspend array before mddev_create/destroy_serial_pool
mddev_create/destroy_serial_pool() will be called from several places
where mddev_suspend() will be called later.
Prepare to remove the mddev_suspend() from
mddev_create/destroy_serial_pool().
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-14-yukuai1@huaweicloud.com
Yu Kuai [Tue, 10 Oct 2023 15:19:51 +0000 (23:19 +0800)]
md: use new apis to suspend array for ioctls involved array reconfiguration
'reconfig_mutex' will be grabbed before these ioctls; suspend the array
before holding the lock, so that io won't run concurrently with array
reconfiguration through ioctls.
This is not a hot path, so performance is not a concern.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-13-yukuai1@huaweicloud.com
Yu Kuai [Tue, 10 Oct 2023 15:19:50 +0000 (23:19 +0800)]
md: use new apis to suspend array for adding/removing rdev from state_store()
Users can write 'remove' and 're-add' to trigger array reconfiguration
through sysfs; suspend the array in this case so that io won't run
concurrently with array reconfiguration.
And now that all the callers of add_bound_rdev() already suspend the
array, remove mddev_suspend/resume() from add_bound_rdev() as well.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-12-yukuai1@huaweicloud.com
Yu Kuai [Tue, 10 Oct 2023 15:19:49 +0000 (23:19 +0800)]
md: use new apis to suspend array for sysfs apis
Convert to the new apis in the following sysfs apis:
- level_store
- suspend_lo_store
- suspend_hi_store
- serialize_policy_store
These are not hot paths, so performance is not a concern.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-11-yukuai1@huaweicloud.com
Yu Kuai [Tue, 10 Oct 2023 15:19:48 +0000 (23:19 +0800)]
md/raid5: use new apis to suspend array
Convert to the new apis; the old apis will be removed eventually.
These are not hot paths, so performance is not a concern.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-10-yukuai1@huaweicloud.com
Yu Kuai [Tue, 10 Oct 2023 15:19:47 +0000 (23:19 +0800)]
md/raid5-cache: use new apis to suspend array
Convert to the new apis; the old apis will be removed eventually.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-9-yukuai1@huaweicloud.com
Yu Kuai [Tue, 10 Oct 2023 15:19:46 +0000 (23:19 +0800)]
md/md-bitmap: use new apis to suspend array for location_store()
Convert to the new apis; the old apis will be removed eventually.
This is not a hot path, so performance is not a concern.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-8-yukuai1@huaweicloud.com
Yu Kuai [Tue, 10 Oct 2023 15:19:45 +0000 (23:19 +0800)]
md/dm-raid: use new apis to suspend array
Convert to the new apis; the old apis will be removed eventually.
These are not hot paths, so performance is not a concern.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-7-yukuai1@huaweicloud.com
Yu Kuai [Tue, 10 Oct 2023 15:19:44 +0000 (23:19 +0800)]
md: add new helpers to suspend/resume and lock/unlock array
The new helpers suspend the array first and then lock it.
Prepare to refactor from:
mddev_lock/lock_nointr
mddev_suspend
...
mddev_resume
mddev_unlock
to:
mddev_suspend_and_lock/lock_nointr
...
mddev_unlock_and_resume
After all the use cases are refactored, mddev_suspend/resume() will be
removed.
And mddev_suspend_and_lock() will also replace mddev_lock() for the case
that the array will be reconfigured, in order to synchronize with io to
prevent problems in many corner cases.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-6-yukuai1@huaweicloud.com
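A hedged sketch of the helper shape (written with the final names from the later rename; the actual helpers live in md.h):

    static inline int mddev_suspend_and_lock(struct mddev *mddev)
    {
        int ret;

        ret = mddev_suspend(mddev, true);    /* interruptible suspend */
        if (ret)
            return ret;

        ret = mddev_lock(mddev);
        if (ret)
            mddev_resume(mddev);

        return ret;
    }

    static inline void mddev_unlock_and_resume(struct mddev *mddev)
    {
        mddev_unlock(mddev);
        mddev_resume(mddev);
    }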
Yu Kuai [Tue, 10 Oct 2023 15:19:43 +0000 (23:19 +0800)]
md: add new helpers to suspend/resume array
Advantages of the new apis:
- reconfig_mutex is not required;
- the weird logic that suspending the array holds 'reconfig_mutex' for
md_check_recovery() to update the superblock is not needed;
- the special handling, 'pers->prepare_suspend', for raid456 is not
needed;
- they are safe to be called at any time once the mddev is allocated,
and they are designed to be used from slow paths where the array
configuration is changed;
- the new helpers are designed to be called before mddev_lock(), hence
they can be interrupted by the user as well.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-5-yukuai1@huaweicloud.com
Yu Kuai [Tue, 10 Oct 2023 15:19:42 +0000 (23:19 +0800)]
md: replace is_md_suspended() with 'mddev->suspended' in md_check_recovery()
Prepare to clean up pers->prepare_suspend(), which is used to fix a
deadlock in raid456 by returning an error for io that is waiting for
reshape to make progress in mddev_suspend().
This change will allow reshape to make progress while waiting for io to
be done in mddev_suspend() in the following patches.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-4-yukuai1@huaweicloud.com
Yu Kuai [Tue, 10 Oct 2023 15:19:41 +0000 (23:19 +0800)]
md/raid5-cache: use READ_ONCE/WRITE_ONCE for 'conf->log'
'conf->log' is set with 'reconfig_mutex' grabbed; however, readers are
not protected, so protect it with READ_ONCE/WRITE_ONCE to prevent
reading abnormal values.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-3-yukuai1@huaweicloud.com
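A minimal sketch of the pattern (illustrative surrounding code):

    /* writer, with 'reconfig_mutex' held */
    WRITE_ONCE(conf->log, log);

    /* lockless reader: load the pointer once, then use the local copy */
    struct r5l_log *log = READ_ONCE(conf->log);

    if (!log)
        return;

READ_ONCE/WRITE_ONCE prevent the compiler from tearing, fusing, or re-loading the access, so readers never observe a half-updated pointer.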
Yu Kuai [Tue, 10 Oct 2023 15:19:40 +0000 (23:19 +0800)]
md: use READ_ONCE/WRITE_ONCE for 'suspend_lo' and 'suspend_hi'
Protect 'suspend_lo' and 'suspend_hi' with READ_ONCE/WRITE_ONCE to prevent
reading abnormal values.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-2-yukuai1@huaweicloud.com
Yu Kuai [Sat, 7 Oct 2023 11:21:05 +0000 (19:21 +0800)]
md/raid1: don't split discard io for write behind
Currently, discard io is treated the same as normal write io, and for
the write behind case, io size is limited to:
BIO_MAX_VECS * (PAGE_SIZE >> 9)
For a 0.5KB sector size and 4KB PAGE_SIZE, this is just 1MB. As a
consequence, if 'WriteMostly' is set on one of the underlying disks,
discard io will be split into 1MB chunks and it will take a long time
for the discard to finish.
Fix this problem by disabling write behind for discard io.
Reported-by: Roman Mamedov <rm@romanrm.net>
Closes: https://lore.kernel.org/all/6a1165f7-c792-c054-b8f0-1ad4f7b8ae01@ultracoder.org/
Reported-and-tested-by: Kirill Kirilenko <kirill@ultracoder.org>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231007112105.407449-1-yukuai1@huaweicloud.com
Kees Cook [Thu, 5 Oct 2023 16:14:12 +0000 (09:14 -0700)]
nvmet-fc: Annotate struct nvmet_fc_tgt_queue with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS (for
array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct
nvmet_fc_tgt_queue. Additionally, since the element count member must
be set before accessing the annotated flexible array member, move its
initialization earlier.
Cc: James Smart <james.smart@broadcom.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: linux-nvme@lists.infradead.org
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Keith Busch <kbusch@kernel.org>
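A minimal sketch of the annotation pattern (a simplified struct, not the full nvmet_fc_tgt_queue definition):

    struct tgt_queue {
        u16 sqsize;
        /* ... other members ... */
        struct fcp_iod fod[] __counted_by(sqsize);
    };

    /* the counter must be assigned before the flexible array is touched,
     * so run-time bounds checks see the correct element count */
    queue->sqsize = sqsize;
    for (i = 0; i < sqsize; i++)
        init_fod(&queue->fod[i]);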
Gabriel Krisman Bertazi [Thu, 5 Oct 2023 00:05:31 +0000 (20:05 -0400)]
io_uring/kbuf: Use slab for struct io_buffer objects
The allocation of struct io_buffer for metadata of provided buffers is
done through a custom allocator that directly gets pages and
fragments them. But, slab would do just fine, as this is not a hot path
(in fact, it is a deprecated feature) and, by keeping a custom allocator
implementation we lose benefits like tracking, poisoning,
sanitizers. Finally, the custom code is more complex and requires
keeping the list of pages in struct ctx for no good reason. This patch
cleans this path up and just uses slab.
I microbenchmarked it by forcing the allocation of a large number of
objects with the least number of io_uring commands possible (keeping
nbufs=USHRT_MAX), with and without the patch. There is a slight
increase in time spent in the allocation with slab, of course, but even
when allocating to system resources exhaustion, which is not very
realistic and happened around 1/2 billion provided buffers for me, it
wasn't a significant hit in system time. Especially if we think of a
real-world scenario, an application doing register/unregister of
provided buffers will hit ctx->io_buffers_cache more often than actually
going to slab.
Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de>
Link: https://lore.kernel.org/r/20231005000531.30800-4-krisman@suse.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Gabriel Krisman Bertazi [Thu, 5 Oct 2023 00:05:30 +0000 (20:05 -0400)]
io_uring/kbuf: Allow the full buffer id space for provided buffers
nbufs tracks the number of buffers and not the last bgid. With 16 bits,
we have 2^16 valid buffers, but the check mistakenly rejects the last
bid. Let's fix it to make the interface consistent with the
documentation.
Fixes: ddf0322db79c ("io_uring: add IORING_OP_PROVIDE_BUFFERS")
Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de>
Link: https://lore.kernel.org/r/20231005000531.30800-3-krisman@suse.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Gabriel Krisman Bertazi [Thu, 5 Oct 2023 00:05:29 +0000 (20:05 -0400)]
io_uring/kbuf: Fix check of BID wrapping in provided buffers
Commit 3851d25c75ed0 ("io_uring: check for rollover of buffer ID when
providing buffers") introduced a check to prevent wrapping the BID
counter when sqe->off is provided, but it's off by one and too
restrictive, rejecting the last possible BID (65534).
i.e., the following fails with -EINVAL.
io_uring_prep_provide_buffers(sqe, addr, size, 0xFFFF, 0, 0);
Fixes: 3851d25c75ed ("io_uring: check for rollover of buffer ID when providing buffers")
Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de>
Link: https://lore.kernel.org/r/20231005000531.30800-2-krisman@suse.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
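Conceptually, providing nbufs buffers starting at bid assigns IDs bid .. bid + nbufs - 1, all of which must fit in the 16-bit ID space. A hedged illustration of the corrected bound (the exact expression in kbuf.c may differ):

    #define MAX_BIDS_PER_BGID (1 << 16)

    /* a '>=' comparison here is one too strict and loses the last
     * otherwise-valid starting bid */
    if ((unsigned long)bid + nbufs > MAX_BIDS_PER_BGID)
        return -EINVAL;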
Jan Höppner [Fri, 15 Sep 2023 13:10:01 +0000 (15:10 +0200)]
partitions/ibm: Introduce defines for magic string length values
The length values for volume label type and volume label id are
hard-coded in several places. Provide defines for those values and
replace all occurrences accordingly.
Note that the length is defined and used, and not the size since the
volume label type string and volume label id string are not
nul-terminated.
Signed-off-by: Jan Höppner <hoeppner@linux.ibm.com>
Reviewed-by: Stefan Haberland <sth@linux.ibm.com>
Signed-off-by: Stefan Haberland <sth@linux.ibm.com>
Link: https://lore.kernel.org/r/20230915131001.697070-4-sth@linux.ibm.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jan Höppner [Fri, 15 Sep 2023 13:10:00 +0000 (15:10 +0200)]
partitions/ibm: Replace strncpy() and improve readability
strncpy() is deprecated and needs to be replaced. The volume label
information strings are not nul-terminated and strncpy() can simply be
replaced with memcpy().
To enhance the readability of find_label() alongside this change, the
following improvements are made:
- Introduce the array dasd_vollabels[] containing all information
necessary for the label detection.
- Provide a helper function to obtain an index value corresponding to a
volume label type. This allows the use of a switch statement to reduce
indentation levels.
- The 'temp' variable is used to check against valid volume label types.
In the good case, this variable already contains the volume label type
making it unnecessary to copy the information again from e.g.
label->vol.vollbl. Remove the 'temp' variable and the second copy, as
all the information is already provided.
- Remove the 'found' variable and replace it with early returns
Signed-off-by: Jan Höppner <hoeppner@linux.ibm.com>
Reviewed-by: Stefan Haberland <sth@linux.ibm.com>
Signed-off-by: Stefan Haberland <sth@linux.ibm.com>
Link: https://lore.kernel.org/r/20230915131001.697070-3-sth@linux.ibm.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jan Höppner [Fri, 15 Sep 2023 13:09:59 +0000 (15:09 +0200)]
partitions/ibm: Remove unnecessary memset
The data holding the volume label information is zeroed in case no valid
volume label was found. Since the label information isn't used in that
case, zeroing the data doesn't provide any value whatsoever.
Remove the unnecessary memset() call accordingly.
Signed-off-by: Jan Höppner <hoeppner@linux.ibm.com>
Reviewed-by: Stefan Haberland <sth@linux.ibm.com>
Signed-off-by: Stefan Haberland <sth@linux.ibm.com>
Link: https://lore.kernel.org/r/20230915131001.697070-2-sth@linux.ibm.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Justin Stitt [Tue, 19 Sep 2023 05:27:45 +0000 (05:27 +0000)]
aoe: replace strncpy with strscpy
`strncpy` is deprecated for use on NUL-terminated destination strings [1].
`aoe_iflist` is expected to be NUL-terminated which is evident by its
use with string apis later on like `strspn`:
| p = aoe_iflist + strspn(aoe_iflist, WHITESPACE);
It also seems `aoe_iflist` does not need to be NUL-padded which means
`strscpy` [2] is a suitable replacement due to the fact that it
guarantees NUL-termination on the destination buffer while not
unnecessarily NUL-padding.
Link: https://www.kernel.org/doc/html/latest/process/deprecated.html#strncpy-on-nul-terminated-strings
Link: https://manpages.debian.org/testing/linux-manual-4.8/strscpy.9.en.html
Link: https://github.com/KSPP/linux/issues/90
Cc: linux-hardening@vger.kernel.org
Cc: Kees Cook <keescook@chromium.org>
Cc: Xu Panda <xu.panda@zte.com.cn>
Cc: Yang Yang <yang.yang29@zte.com>
Signed-off-by: Justin Stitt <justinstitt@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230919-strncpy-drivers-block-aoe-aoenet-c-v2-1-3d5d158410e9@google.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
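A before/after sketch of the change (buffer name from the quoted code; the surrounding parsing is elided):

    /* before: may leave the destination unterminated when src fills it */
    strncpy(aoe_iflist, str, IFLISTSZ);

    /* after: always NUL-terminates (returning -E2BIG on truncation) and
     * does not NUL-pad the remainder of the buffer */
    strscpy(aoe_iflist, str, IFLISTSZ);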
Justin Stitt [Tue, 19 Sep 2023 05:30:35 +0000 (05:30 +0000)]
null_blk: replace strncpy with strscpy
`strncpy` is deprecated for use on NUL-terminated destination strings [1].
We should favor a more robust and less ambiguous interface.
We expect that both `nullb->disk_name` and `disk->disk_name` be
NUL-terminated:
| snprintf(nullb->disk_name, sizeof(nullb->disk_name),
| "%s", config_item_name(&dev->group.cg_item));
...
| pr_info("disk %s created\n", nullb->disk_name);
It seems like NUL-padding may be required due to __assign_disk_name()
utilizing a memcpy as opposed to a `str*cpy` api.
| static inline void __assign_disk_name(char *name, struct gendisk *disk)
| {
| if (disk)
| memcpy(name, disk->disk_name, DISK_NAME_LEN);
| else
| memset(name, 0, DISK_NAME_LEN);
| }
Then we go and print it with `__print_disk_name` which wraps `nullb_trace_disk_name()`.
| #define __print_disk_name(name) nullb_trace_disk_name(p, name)
This function obviously expects a NUL-terminated string.
| const char *nullb_trace_disk_name(struct trace_seq *p, char *name)
| {
| const char *ret = trace_seq_buffer_ptr(p);
|
| if (name && *name)
| trace_seq_printf(p, "disk=%s, ", name);
| trace_seq_putc(p, 0);
|
| return ret;
| }
From the above, we need both 1) a NUL-terminated string and 2) a
NUL-padded string. So, let's use strscpy_pad() as per Kees' suggestion
from v1.
Link: https://www.kernel.org/doc/html/latest/process/deprecated.html#strncpy-on-nul-terminated-strings
Link: https://github.com/KSPP/linux/issues/90
Cc: linux-hardening@vger.kernel.org
Cc: Kees Cook <keescook@chromium.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Justin Stitt <justinstitt@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230919-strncpy-drivers-block-null_blk-main-c-v3-1-10cf0a87a2c3@google.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
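A sketch of the resulting call (names taken from the code quoted above):

    /* NUL-terminates like strscpy(), and additionally zero-fills the rest
     * of the buffer, matching the memcpy(DISK_NAME_LEN) consumer */
    strscpy_pad(nullb->disk_name, disk->disk_name,
                sizeof(nullb->disk_name));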
Joel Granados [Mon, 2 Oct 2023 22:02:03 +0000 (23:02 +0100)]
cdrom: Remove now superfluous sentinel element from ctl_table array
This commit comes at the tail end of a greater effort to remove the
empty elements at the end of the ctl_table arrays (sentinels), which
will reduce the overall build-time size of the kernel and run-time
memory bloat by ~64 bytes per sentinel (for further information see
https://lore.kernel.org/all/ZO5Yx5JFogGi%2FcBo@bombadil.infradead.org/).
Remove the sentinel element from cdrom_table.
Signed-off-by: Joel Granados <j.granados@samsung.com>
Link: https://lore.kernel.org/lkml/20231002-jag-sysctl_remove_empty_elem_drivers-v2-1-02dd0d46f71e@samsung.com
Reviewed-by: Phillip Potter <phil@philpotter.co.uk>
Link: https://lore.kernel.org/lkml/20231002214528.15529-1-phil@philpotter.co.uk
Signed-off-by: Phillip Potter <phil@philpotter.co.uk>
Link: https://lore.kernel.org/r/20231002220203.15637-2-phil@philpotter.co.uk
Signed-off-by: Jens Axboe <axboe@kernel.dk>
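A hedged sketch of the shape of the change (entry fields are illustrative, not the exact cdrom.c table):

    static struct ctl_table cdrom_table[] = {
        {
            .procname     = "info",
            .maxlen       = CDROM_STR_SIZE,
            .mode         = 0444,
            .proc_handler = cdrom_sysctl_info,
        },
        /* the trailing { } sentinel entry is gone; registration now
         * derives the count instead, e.g. via ARRAY_SIZE() */
    };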
Jens Axboe [Tue, 3 Oct 2023 00:25:23 +0000 (18:25 -0600)]
io_uring/rsrc: cleanup io_pin_pages()
This function is overly convoluted, with a goto error path and checks
under the mmap_read_lock() that don't need to be there at all. Rearrange
it a bit so the checks and errors fall out naturally, rather than
needing to jump around for it.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Fri, 29 Sep 2023 05:58:34 +0000 (23:58 -0600)]
Merge tag 'md-next-20230927' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md into for-6.7/block
Pull MD updates from Song:
"1. Make rdev add/remove independent from daemon thread, by Yu Kuai;
2. Refactor code around quiesce() and mddev_suspend(), by Yu Kuai."
* tag 'md-next-20230927' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md:
md: replace deprecated strncpy with memcpy
md/md-linear: Annotate struct linear_conf with __counted_by
md: don't check 'mddev->pers' and 'pers->quiesce' from suspend_lo_store()
md: don't check 'mddev->pers' from suspend_hi_store()
md-bitmap: suspend array earlier in location_store()
md-bitmap: remove the checking of 'pers->quiesce' from location_store()
md: don't rely on 'mddev->pers' to be set in mddev_suspend()
md: initialize 'writes_pending' while allocating mddev
md: initialize 'active_io' while allocating mddev
md: delay remove_and_add_spares() for read only array to md_start_sync()
md: factor out a helper rdev_addable() from remove_and_add_spares()
md: factor out a helper rdev_is_spare() from remove_and_add_spares()
md: factor out a helper rdev_removeable() from remove_and_add_spares()
md: delay choosing sync action to md_start_sync()
md: factor out a helper to choose sync action from md_check_recovery()
md: use separate work_struct for md_start_sync()
Mariusz Tkaczyk [Thu, 28 Sep 2023 12:55:17 +0000 (14:55 +0200)]
md: do not require mddev_lock() for all options in array_state_store()
We don't need to lock the device to reject an unsupported request
in array_state_store(). No functional changes intended.
There are differences between ioctl and sysfs handling during stopping.
With this change, it will be easier to add additional steps which need
to be completed before mddev_lock() is taken.
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230928125517.12356-1-mariusz.tkaczyk@linux.intel.com
Ming Lei [Thu, 28 Sep 2023 12:43:25 +0000 (20:43 +0800)]
io_uring: cancelable uring_cmd
A uring_cmd may never complete; in ublk, for example, a uring cmd isn't
completed until a new block request comes in from the ublk block device.
Add cancelable uring_cmd to provide a mechanism for drivers to cancel
pending commands in their own way.
Add the API io_uring_cmd_mark_cancelable() for a driver to mark one
command as cancelable; io_uring will then cancel this command in
io_uring_cancel_generic(). The ->uring_cmd() callback is reused for
canceling the command in the driver's way, so the driver gets notified
of the cancellation by io_uring.
Add the API io_uring_cmd_get_task() to help the driver's cancel handler
deal with the canceling.
Reviewed-by: Gabriel Krisman Bertazi <krisman@suse.de>
Suggested-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
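A hedged sketch of the driver side (a toy ->uring_cmd() handler, not ublk's actual code):

    static int drv_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
    {
        if (issue_flags & IO_URING_F_CANCEL) {
            /* io_uring is canceling: complete the parked command */
            io_uring_cmd_done(cmd, -ECANCELED, 0, issue_flags);
            return 0;
        }

        /* normal path: park the command until the device produces work,
         * and ask io_uring to call back on cancel/teardown */
        io_uring_cmd_mark_cancelable(cmd, issue_flags);
        return -EIOCBQUEUED;
    }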
Ming Lei [Thu, 28 Sep 2023 12:43:24 +0000 (20:43 +0800)]
io_uring: retain top 8bits of uring_cmd flags for kernel internal use
Retain top 8bits of uring_cmd flags for kernel internal use, so that we
can move IORING_URING_CMD_POLLED out of uapi header.
Reviewed-by: Gabriel Krisman Bertazi <krisman@suse.de>
Reviewed-by: Anuj Gupta <anuj20.g@samsung.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
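For reference, the resulting split looks like this (a sketch of the flag layout as described):

    /* uapi flags stay in the low 24 bits */
    #define IORING_URING_CMD_FIXED          (1U << 0)

    /* kernel-internal flags occupy the top 8 bits */
    #define IORING_URING_CMD_CANCELABLE     (1U << 30)
    #define IORING_URING_CMD_POLLED         (1U << 31)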
Yu Kuai [Wed, 27 Sep 2023 06:12:41 +0000 (14:12 +0800)]
md: simplify md_seq_ops
Before this patch, the implementation is hacky and hard to understand:
1) md_seq_start sets pos to 1;
2) md_seq_show finds pos is 1, then prints Personalities;
3) md_seq_next finds pos is 1, then updates pos to the first mddev;
4) md_seq_show finds pos is not 1 or 2, shows the mddev;
5) md_seq_next finds pos is not 1 or 2, updates pos to the next mddev;
6) loop 4-5 until the last mddev, then md_seq_next updates pos to 2;
7) md_seq_show finds pos is 2, then prints unused devices;
8) md_seq_next finds pos is 2, stops;
This patch removes the magic values and uses seq_list_start/next/stop()
directly, and moves printing "Personalities" to md_seq_start() and
"unused devices" to md_seq_stop():
1) md_seq_start prints Personalities, and then sets pos to the first mddev;
2) md_seq_show shows the mddev;
3) md_seq_next updates pos to the next mddev;
4) loop 2-3 until the last mddev;
5) md_seq_stop prints unused devices;
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230927061241.1552837-3-yukuai1@huaweicloud.com
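A hedged sketch of the simplified shape (printing helpers illustrative; error handling elided):

    static void *md_seq_start(struct seq_file *seq, loff_t *pos)
    {
        if (!*pos)
            seq_puts(seq, "Personalities : ...\n");    /* sketch */
        spin_lock(&all_mddevs_lock);
        return seq_list_start(&all_mddevs, *pos);
    }

    static void *md_seq_next(struct seq_file *seq, void *v, loff_t *pos)
    {
        return seq_list_next(v, &all_mddevs, pos);
    }

    static void md_seq_stop(struct seq_file *seq, void *v)
    {
        spin_unlock(&all_mddevs_lock);
        status_unused(seq);    /* "unused devices: ..." */
    }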
Yu Kuai [Wed, 27 Sep 2023 06:12:40 +0000 (14:12 +0800)]
md: factor out a helper from mddev_put()
There are no functional changes, prepare to simplify md_seq_ops in next
patch.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230927061241.1552837-2-yukuai1@huaweicloud.com
Coly Li [Fri, 11 Aug 2023 17:05:12 +0000 (01:05 +0800)]
badblocks: switch to the improved badblock handling code
This patch removes the old code of badblocks_set(), badblocks_clear()
and badblocks_check(), and makes them wrappers to call _badblocks_set(),
_badblocks_clear() and _badblocks_check().
With this change, badblocks handling now switches to the improved
algorithms in _badblocks_set(), _badblocks_clear() and _badblocks_check().
This patch only contains the deletion of old code; the newly added code
for the improved algorithms is in previous patches.
Signed-off-by: Coly Li <colyli@suse.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Geliang Tang <geliang.tang@suse.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: NeilBrown <neilb@suse.de>
Cc: Vishal L Verma <vishal.l.verma@intel.com>
Cc: Xiao Ni <xni@redhat.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Acked-by: Geliang Tang <geliang.tang@suse.com>
Link: https://lore.kernel.org/r/20230811170513.2300-7-colyli@suse.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Coly Li [Fri, 11 Aug 2023 17:05:11 +0000 (01:05 +0800)]
badblocks: improve badblocks_check() for multiple ranges handling
This patch rewrites badblocks_check() with a similar coding style as
_badblocks_set() and _badblocks_clear(). The only difference is that bad
blocks checking may now handle multiple ranges in the bad blocks table.
If a checking range covers multiple bad blocks range in bad block table,
like the following condition (C is the checking range, E1, E2, E3 are
three bad block ranges in bad block table),
+------------------------------------+
| C |
+------------------------------------+
+----+ +----+ +----+
| E1 | | E2 | | E3 |
+----+ +----+ +----+
The improved badblocks_check() algorithm will divide checking range C
into multiple parts, and handle them in 7 runs of a while-loop,
+--+ +----+ +----+ +----+ +----+ +----+ +----+
|C1| | C2 | | C3 | | C4 | | C5 | | C6 | | C7 |
+--+ +----+ +----+ +----+ +----+ +----+ +----+
+----+ +----+ +----+
| E1 | | E2 | | E3 |
+----+ +----+ +----+
And the start LBA and length of range E1 will be set as first_bad and
bad_sectors for the caller.
The return value rule is consistent for multiple ranges. For example, if
there are the following bad block ranges in the bad blocks table,
  Index No.   Start   Len   Ack
  0           400     20    1
  1           500     50    1
  2           650     20    0
the return value, first_bad, and bad_sectors from calling
badblocks_check() with different checking ranges can be the following
values,
  Checking Start, Len   Return Value   first_bad   bad_sectors
  100, 100              0              N/A         N/A
  100, 310              1              400         10
  100, 440              1              400         10
  100, 540              1              400         10
  100, 600              -1             400         10
  100, 800              -1             400         10
In order to make code review easier, this patch names the improved bad
block range checking routine _badblocks_check() and does not change the
existing badblocks_check() code yet. A later patch will delete the old
badblocks_check() code and make it a wrapper that calls
_badblocks_check(). That way the newly added code won't be mixed up with
the old deleted code, which makes the code review clearer and easier.
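With the table above, a caller would see, for example (a sketch; 'bb'
is assumed to be a struct badblocks populated as listed):

sector_t first_bad;
int bad_sectors, ret;

/* 100..199 touches no bad range: returns 0 */
ret = badblocks_check(bb, 100, 100, &first_bad, &bad_sectors);

/* 100..409 reaches 10 sectors into the acknowledged range at 400:
 * returns 1, first_bad = 400, bad_sectors = 10 */
ret = badblocks_check(bb, 100, 310, &first_bad, &bad_sectors);

/* 100..699 also covers the unacknowledged range at 650: returns -1,
 * while first_bad/bad_sectors still report 400/10 */
ret = badblocks_check(bb, 100, 600, &first_bad, &bad_sectors);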
Signed-off-by: Coly Li <colyli@suse.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Geliang Tang <geliang.tang@suse.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: NeilBrown <neilb@suse.de>
Cc: Vishal L Verma <vishal.l.verma@intel.com>
Cc: Xiao Ni <xni@redhat.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Acked-by: Geliang Tang <geliang.tang@suse.com>
Link: https://lore.kernel.org/r/20230811170513.2300-6-colyli@suse.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Coly Li [Fri, 11 Aug 2023 17:05:10 +0000 (01:05 +0800)]
badblocks: improve badblocks_clear() for multiple ranges handling
With the fundamental ideas and helper routines from the badblocks_set()
improvement, clearing bad blocks for multiple ranges is much simpler.
With a similar idea to the badblocks_set() improvement, this patch
simplifies bad block range clearing into 5 situations. No matter how
complicated the clearing condition is, we just look at the head part
of the clearing range against the related already-set bad block range
in the bad block table. The remaining part will be handled in the next
run of the while-loop.
Based on existing helpers added from badblocks_set(), this patch adds
two more helpers,
- front_clear()
Clear the bad block range from bad block table which is front
overlapped with the clearing range.
- front_splitting_clear()
Handle the condition that the clearing range hits middle of an
already set bad block range from bad block table.
As in badblocks_set(), the first part of the clearing range is handled
against the related bad block range, which is found by prev_badblocks().
In most cases a valid hint is provided to prev_badblocks() to avoid
unnecessary bad block table iteration.
This patch also adds detailed algorithm explanations in the code
comments at the beginning of badblocks.c, including how the five
simplified situations are categorized and how all the bad block range
clearing conditions are handled by these five situations.
Again, in order to make the code review easier and avoid mixing the code
changes together, this patch does not modify badblocks_clear() and
implements another routine called _badblocks_clear() for the
improvement. A later patch will delete the current badblocks_clear()
code and make it a wrapper for _badblocks_clear(), so the code change
can be much clearer for review.
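As an illustration, the splitting situation handled by
front_splitting_clear() could look roughly like this (the signature and
body are assumptions for illustration; insert_at() is from the earlier
helper patch):

static int front_splitting_clear(struct badblocks *bb, int prev,
                                 struct badblocks_context *bad)
{
        u64 p = bb->page[prev];
        sector_t end = BB_END(p);
        struct badblocks_context tail = {
                .start = bad->start + bad->len,
                .len = end - (bad->start + bad->len),
                .ack = BB_ACK(p),
        };

        /* shrink the existing entry to the part before the hole ... */
        bb->page[prev] = BB_MAKE(BB_OFFSET(p),
                                 bad->start - BB_OFFSET(p), BB_ACK(p));
        /* ... and insert the part after the hole as a new entry */
        insert_at(bb, prev + 1, &tail);
        return bad->len;
}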
Signed-off-by: Coly Li <colyli@suse.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Geliang Tang <geliang.tang@suse.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: NeilBrown <neilb@suse.de>
Cc: Vishal L Verma <vishal.l.verma@intel.com>
Cc: Xiao Ni <xni@redhat.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Acked-by: Geliang Tang <geliang.tang@suse.com>
Link: https://lore.kernel.org/r/20230811170513.2300-5-colyli@suse.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Coly Li [Fri, 11 Aug 2023 17:05:09 +0000 (01:05 +0800)]
badblocks: improve badblocks_set() for multiple ranges handling
Recently I received a bug report that the current badblocks code does
not properly handle multiple ranges. For example,
badblocks_set(bb, 32, 1, true);
badblocks_set(bb, 34, 1, true);
badblocks_set(bb, 36, 1, true);
badblocks_set(bb, 32, 12, true);
Then indeed badblocks_show() reports,
32 3
36 1
But the expected bad blocks table should be,
32 12
Obviously only the first two ranges are merged, and badblocks_set()
returns early, ignoring the rest of the setting range.
This behavior is improper: if the caller of badblocks_set() wants to set
a range of blocks in the bad blocks table, all of the blocks in the
range should be handled, even if an earlier part encounters a failure.
The desired way for badblocks_set() to set a bad blocks range is,
- Set as many blocks from the setting range in the bad blocks table as
possible.
- Merge the bad block ranges to occupy as few slots in the bad blocks
table as possible.
- Be fast.
Indeed the above proposal is complicated, especially with the following
restrictions,
- The setting bad blocks range can be acknowledged or not acknowledged.
- The bad blocks table size is limited.
- Memory allocation should be avoided.
The basic idea of the patch is to categorize all possible bad block
range setting combinations into a much smaller number of simplified
conditions with fewer special cases. Inside badblocks_set() there is an
implicit loop formed by jumping between the labels 're_insert' and
'update_sectors' (see the sketch below). No matter how large the setting
bad blocks range is, in every iteration just a minimized range from the
head is handled by a pre-defined behavior from one of the categorized
conditions. The logic is simple and the code flow is manageable.
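A compact sketch of that loop structure (the head-handling helper here
is hypothetical; the real code inlines the categorized conditions at the
're_insert' label):

static int _badblocks_set(struct badblocks *bb, sector_t s, int sectors,
                          int acknowledged)
{
        struct badblocks_context bad = {
                .start = s, .len = sectors, .ack = acknowledged,
        };
        int len, hint = -1;

re_insert:
        /* handle just the minimized head part of the setting range
         * (merge, combine, overwrite or insert), hypothetical helper */
        len = handle_head_range(bb, &bad, &hint);
update_sectors:
        /* consume the handled head part and loop on the remainder */
        bad.start += len;
        bad.len -= len;
        if (bad.len > 0)
                goto re_insert;
        return 0;
}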
The different relative layouts between the setting range and an
existing bad block range are checked and handled (merge, combine,
overwrite, insert) by the helpers in the previous patch. This patch
makes all the helpers work together around the above idea.
This patch only contains the algorithm improvement for badblocks_set().
Following patches contain improvements for badblocks_clear() and
badblocks_check(). But the algorithm in badblocks_set() is fundamental
and typical; the other improvements in the clear and check routines are
based on the helpers and ideas in this patch.
In order to make the change clearer for code review, this patch does not
directly modify the existing badblocks_set(), and just adds a new
routine named _badblocks_set(). A later patch will remove the current
badblocks_set() code and make it a wrapper of _badblocks_set(), so the
newly added changes won't be mixed with deleted code and the code review
can be easier.
Signed-off-by: Coly Li <colyli@suse.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Geliang Tang <geliang.tang@suse.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: NeilBrown <neilb@suse.de>
Cc: Vishal L Verma <vishal.l.verma@intel.com>
Cc: Wols Lists <antlists@youngman.org.uk>
Cc: Xiao Ni <xni@redhat.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Acked-by: Geliang Tang <geliang.tang@suse.com>
Link: https://lore.kernel.org/r/20230811170513.2300-4-colyli@suse.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Coly Li [Fri, 11 Aug 2023 17:05:08 +0000 (01:05 +0800)]
badblocks: add helper routines for badblock ranges handling
This patch adds several helper routines to improve bad block range
handling. These helper routines will be used later by the improved
versions of badblocks_set()/badblocks_clear()/badblocks_check().
- Helpers prev_by_hint() and prev_badblocks() are used to find the bad
block range in the bad block table that the searching range starts at
or after.
- The following helpers decide the relative layout between the
manipulating range and an existing bad block range in the bad block
table.
- can_merge_behind()
Return 'true' if the manipulating range can backward merge with the
bad block range.
- can_merge_front()
Return 'true' if the manipulating range can forward merge with the
bad block range.
- can_combine_front()
Return 'true' if two adjacent bad block ranges before the
manipulating range can be merged.
- overlap_front()
Return 'true' if the manipulating range exactly overlaps with the
bad block range in front of its range.
- overlap_behind()
Return 'true' if the manipulating range exactly overlaps with the
bad block range behind its range.
- can_front_overwrite()
Return 'true' if the manipulating range can forward overwrite the
bad block range in front of its range.
- The following helpers add the manipulating range into the bad block
table. A different routine is called for each specific relative layout
between the manipulating range and other bad block ranges in the bad
block table.
- behind_merge()
Merge the manipulating range with the bad block range behind its
range, and return the merged length in sectors.
- front_merge()
Merge the manipulating range with the bad block range in front of
its range, and return the merged length in sectors.
- front_combine()
Combine the two adjacent bad block ranges before the manipulating
range into a larger one.
- front_overwrite()
Overwrite part or all of the bad block range which is in front of the
manipulating range. The overwrite may split an existing bad block
range and generate more bad block ranges in the bad block table.
- insert_at()
Insert the manipulating range at a specific location in the bad
block table.
All the above helpers are used in later patches to improve the bad block
ranges handling for badblocks_set()/badblocks_clear()/badblocks_check().
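Two of the layout predicates, sketched with the BB_*() accessors from
badblocks.h (the bodies are approximations, not the exact kernel code):

static bool overlap_front(struct badblocks *bb, int front,
                          struct badblocks_context *bad)
{
        u64 p = bb->page[front];

        /* the manipulating range starts inside entry 'front' */
        return bad->start >= BB_OFFSET(p) && bad->start < BB_END(p);
}

static bool can_merge_front(struct badblocks *bb, int prev,
                            struct badblocks_context *bad)
{
        u64 p = bb->page[prev];

        /* a forward merge needs a matching ack state and no gap between
         * entry 'prev' and the start of the manipulating range */
        return BB_ACK(p) == bad->ack && bad->start <= BB_END(p);
}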
Signed-off-by: Coly Li <colyli@suse.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Geliang Tang <geliang.tang@suse.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: NeilBrown <neilb@suse.de>
Cc: Vishal L Verma <vishal.l.verma@intel.com>
Cc: Xiao Ni <xni@redhat.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Acked-by: Geliang Tang <geliang.tang@suse.com>
Link: https://lore.kernel.org/r/20230811170513.2300-3-colyli@suse.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Coly Li [Fri, 11 Aug 2023 17:05:07 +0000 (01:05 +0800)]
badblocks: add more helper structure and routines in badblocks.h
This patch adds the following helper structure and routines into
badblocks.h,
- struct badblocks_context
This structure is used in the improved badblocks code for bad block
table iteration.
- BB_END()
The macro to calculate the end LBA of a bad block range record from
the bad block table.
- badblocks_full() and badblocks_empty()
The inline routines to check whether the bad block table is full or
empty.
- set_changed() and clear_changed()
The inline routines to set and clear the 'changed' tag in struct
badblocks.
These new helper structures and routines help make the code clearer;
they will be used by the improved badblocks code in the following
patches.
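The additions look roughly like the following (a reconstruction;
details such as the 'changed' handling may differ):

#define BB_END(x)       (BB_OFFSET(x) + BB_LEN(x))

/* cursor for iterating the packed u64 entries of the bad block table */
struct badblocks_context {
        sector_t        start;
        sector_t        len;
        int             ack;
};

static inline bool badblocks_full(struct badblocks *bb)
{
        return (bb->count >= MAX_BADBLOCKS);
}

static inline bool badblocks_empty(struct badblocks *bb)
{
        return (bb->count == 0);
}

static inline void set_changed(struct badblocks *bb)
{
        bb->changed = 1;
}

static inline void clear_changed(struct badblocks *bb)
{
        bb->changed = 0;
}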
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Xiao Ni <xni@redhat.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Geliang Tang <geliang.tang@suse.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: NeilBrown <neilb@suse.de>
Cc: Vishal L Verma <vishal.l.verma@intel.com>
Acked-by: Geliang Tang <geliang.tang@suse.com>
Link: https://lore.kernel.org/r/20230811170513.2300-2-colyli@suse.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Justin Stitt [Mon, 25 Sep 2023 09:49:17 +0000 (09:49 +0000)]
md: replace deprecated strncpy with memcpy
`strncpy` is deprecated for use on NUL-terminated destination strings
[1] and as such we should prefer more robust and less ambiguous string
interfaces.
There are three such strncpy uses that this patch addresses:
The respective destination buffers are:
1) mddev->clevel
2) clevel
3) mddev->metadata_type
We expect mddev->clevel to be NUL-terminated due to its use with format
strings:
| ret = sprintf(page, "%s\n", mddev->clevel);
Furthermore, we can see that mddev->clevel is not expected to be
NUL-padded, as `md_clean()` merely sets its first byte to NUL -- not
the entire buffer:
| static void md_clean(struct mddev *mddev)
| {
| mddev->array_sectors = 0;
| mddev->external_size = 0;
| ...
| mddev->level = LEVEL_NONE;
| mddev->clevel[0] = 0;
| ...
A suitable replacement for this instance is `memcpy`, as we know the
number of bytes to copy and perform manual NUL-termination at a
specified offset. This really decays to just a byte copy from one buffer
to another. `strscpy` is also a viable replacement, but using `slen` as
the length argument would result in truncation of the last byte unless
something like `slen + 1` were provided, which isn't the most idiomatic
strscpy usage.
For the next case, the destination buffer `clevel` is expected to be
NUL-terminated based on its usage with kstrtol(), which expects
NUL-terminated strings. Note that, in context, this code removes a
trailing newline, which is seemingly not required as kstrtol() can
handle trailing newlines implicitly. However, there is further usage of
clevel (or buf) that would also like the newline removed. All in all,
with similar reasoning to the first case, let's just use memcpy, as this
is just a byte copy and NUL-termination is handled manually.
The third and final case, concerning `mddev->metadata_type`, is more or
less the same as the other two. We expect it to be NUL-terminated based
on its usage with seq_printf:
| seq_printf(seq, " super external:%s",
| mddev->metadata_type);
... and we can surmise that NUL-padding isn't required either due to how
it is handled in md_clean():
| static void md_clean(struct mddev *mddev)
| {
| ...
| mddev->metadata_type[0] = 0;
| ...
So really, all these instances have precisely calculated lengths and
purposeful NUL-termination so we can just use memcpy to remove ambiguity
surrounding strncpy.
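For the first case, the replacement then looks approximately like
(paraphrased, not the literal diff):

| memcpy(mddev->clevel, buf, slen);
| mddev->clevel[slen] = 0;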
Link: https://www.kernel.org/doc/html/latest/process/deprecated.html#strncpy-on-nul-terminated-strings
Link: https://github.com/KSPP/linux/issues/90
Cc: linux-hardening@vger.kernel.org
Signed-off-by: Justin Stitt <justinstitt@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230925-strncpy-drivers-md-md-c-v1-1-2b0093b89c2b@google.com
Kees Cook [Fri, 15 Sep 2023 20:03:28 +0000 (13:03 -0700)]
md/md-linear: Annotate struct linear_conf with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct linear_conf.
Additionally, since the element count member must be set before accessing
the annotated flexible array member, move its initialization earlier.
[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci
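The annotated struct then looks roughly like (reconstructed from the
description above):

struct linear_conf {
        struct rcu_head         rcu;
        sector_t                array_sectors;
        int                     raid_disks;     /* must be set first */
        struct dev_info         disks[] __counted_by(raid_disks);
};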
Cc: Song Liu <song@kernel.org>
Cc: linux-raid@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230915200328.never.064-kees@kernel.org
Yu Kuai [Fri, 25 Aug 2023 03:09:56 +0000 (11:09 +0800)]
md: don't check 'mddev->pers' and 'pers->quiesce' from suspend_lo_store()
Now that mddev_suspend() doesn't rely on 'mddev->pers' to be set, it's
safe to remove such checking.
This will also allow the array to be suspended even before the array
is run.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230825030956.1527023-8-yukuai1@huaweicloud.com
Yu Kuai [Fri, 25 Aug 2023 03:09:55 +0000 (11:09 +0800)]
md: don't check 'mddev->pers' from suspend_hi_store()
Now that mddev_suspend() doesn't rely on 'mddev->pers' to be set, it's
safe to remove such checking.
This will also allow the array to be suspended even before the array
is run.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230825030956.1527023-7-yukuai1@huaweicloud.com
Yu Kuai [Fri, 25 Aug 2023 03:09:54 +0000 (11:09 +0800)]
md-bitmap: suspend array earlier in location_store()
Now that mddev_suspend() doesn't rely on 'mddev->pers' to be set, it's
safe to call mddev_suspend() earlier.
This will also be helpful for refactoring mddev_suspend() later.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230825030956.1527023-6-yukuai1@huaweicloud.com
Yu Kuai [Fri, 25 Aug 2023 03:09:53 +0000 (11:09 +0800)]
md-bitmap: remove the checking of 'pers->quiesce' from location_store()
After commit 4d27e927344a ("md: don't quiesce in mddev_suspend()"),
there is no need to check 'pers->quiesce' before calling
mddev_suspend().
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230825030956.1527023-5-yukuai1@huaweicloud.com
Yu Kuai [Fri, 25 Aug 2023 03:09:52 +0000 (11:09 +0800)]
md: don't rely on 'mddev->pers' to be set in mddev_suspend()
'active_io' used to be initialized while the array was running, and
'mddev->pers' is set while the array is running as well. Hence the
caller must hold 'reconfig_mutex' and guarantee that 'mddev->pers' is
set before calling mddev_suspend().
Now that 'active_io' is initialized when the mddev is allocated, such a
restriction doesn't exist anymore. In the meantime, follow-up patches
will refactor mddev_suspend(), hence add a check for 'mddev->pers' to
prevent a null-ptr-deref.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230825030956.1527023-4-yukuai1@huaweicloud.com
Yu Kuai [Fri, 25 Aug 2023 03:09:51 +0000 (11:09 +0800)]
md: initialize 'writes_pending' while allocating mddev
Currently 'writes_pending' is initialized in pers->run for raid1/5/10,
and it's freed while deleting the mddev instead of in pers->free.
pers->run can be called multiple times before the mddev is deleted, and
a helper mddev_init_writes_pending() is used to prevent 'writes_pending'
from being initialized multiple times; this usage is safe but a little
weird.
On the other hand, 'writes_pending' is only initialized for raid1/5/10;
however, it's used in the common layer, for example:
array_state_store
set_in_sync
if (!mddev->in_sync) -> in_sync is used for all levels
// access writes_pending
There might be some implicit dependency that I haven't recognized which
makes sure 'writes_pending' can only be accessed for raid1/5/10, but
there are no comments about that. In any case, it makes sense to
initialize 'writes_pending' in the common layer, because three levels
already use it.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230825030956.1527023-3-yukuai1@huaweicloud.com
Yu Kuai [Fri, 25 Aug 2023 03:09:50 +0000 (11:09 +0800)]
md: initialize 'active_io' while allocating mddev
'active_io' is used for mddev_suspend() and is initialized in md_run();
this restricts that 'reconfig_mutex' must be held and 'mddev->pers' must
be set before calling mddev_suspend().
Initialize 'active_io' early so that mddev_suspend() is safe to call
once the mddev is allocated; this will be helpful for refactoring
mddev_suspend() in the following patches.
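A minimal sketch of the idea, assuming the release callback is named
active_io_release (the exact init site and flags may differ):

static int mddev_init(struct mddev *mddev)
{
        /* initialized at allocation time, so mddev_suspend() works
         * even before md_run() sets up a personality */
        return percpu_ref_init(&mddev->active_io, active_io_release,
                               PERCPU_REF_ALLOW_REINIT, GFP_KERNEL);
}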
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230825030956.1527023-2-yukuai1@huaweicloud.com
Yu Kuai [Fri, 25 Aug 2023 03:16:22 +0000 (11:16 +0800)]
md: delay remove_and_add_spares() for read only array to md_start_sync()
Before this patch, for a read-only array:
md_check_recovery() checks that 'MD_RECOVERY_NEEDED' is set, then it
calls remove_and_add_spares() directly to try to remove and add rdevs
from the array.
After this patch:
1) md_check_recovery() checks that 'MD_RECOVERY_NEEDED' is set, that the
worker 'sync_work' is not pending, and that there are rdevs that can be
added or removed; then it queues the new work md_start_sync();
2) md_start_sync() calls remove_and_add_spares() and exits;
This change makes sure that array reconfiguration is independent from
the daemon, and it'll be much easier to synchronize it with IO,
considering that IO may rely on the daemon thread to be done.
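Sketched, the read-only branch of md_check_recovery() then reduces to
something like the following ('sync_work' follows this series; the
spares check helper is hypothetical):

/* illustrative: a read-only array only queues the work now */
if (!md_is_rdwr(mddev)) {
        if (test_bit(MD_RECOVERY_NEEDED, &mddev->recovery) &&
            !work_pending(&mddev->sync_work) &&
            has_spares_to_handle(mddev))        /* hypothetical */
                queue_work(md_misc_wq, &mddev->sync_work);
        return;
}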
Also fix a problem that 'pers->spare_active' is called after
remove_and_add_spares(), which is the wrong order, because spares must
be activated first, and then remove_and_add_spares() can add spares to
the array, like the read-write case does:
1) the daemon sets 'MD_RECOVERY_RUNNING' and registers a new sync thread
to do the recovery;
2) the recovery is done, and md_do_sync() sets 'MD_RECOVERY_DONE' before
returning;
3) the daemon calls 'pers->spare_active' and clears
'MD_RECOVERY_RUNNING';
4) in the next round of the daemon, remove_and_add_spares() is called to
add spares to the array.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230825031622.1530464-8-yukuai1@huaweicloud.com
Yu Kuai [Fri, 25 Aug 2023 03:16:21 +0000 (11:16 +0800)]
md: factor out a helper rdev_addable() from remove_and_add_spares()
There are no functional changes, just to make the code simpler and
prepare to delay remove_and_add_spares() to md_start_sync().
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230825031622.1530464-7-yukuai1@huaweicloud.com
Yu Kuai [Fri, 25 Aug 2023 03:16:20 +0000 (11:16 +0800)]
md: factor out a helper rdev_is_spare() from remove_and_add_spares()
There are no functional changes, just to make the code simpler and
prepare to delay remove_and_add_spares() to md_start_sync().
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230825031622.1530464-6-yukuai1@huaweicloud.com
Yu Kuai [Fri, 25 Aug 2023 03:16:19 +0000 (11:16 +0800)]
md: factor out a helper rdev_removeable() from remove_and_add_spares()
There are no functional changes, just to make the code simpler and
prepare to delay remove_and_add_spares() to md_start_sync().
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230825031622.1530464-5-yukuai1@huaweicloud.com
Yu Kuai [Fri, 25 Aug 2023 03:16:18 +0000 (11:16 +0800)]
md: delay choosing sync action to md_start_sync()
Before this patch, for a read-write array:
1) md_check_recovery() finds that something needs to be done, and it'll
try to grab 'reconfig_mutex'. The cases where md_check_recovery() needs
to do something:
- the array is not suspended;
- the superblock needs to be updated;
- 'MD_RECOVERY_NEEDED' or 'MD_RECOVERY_DONE' is set;
- an unusual case related to safemode;
2) if 'MD_RECOVERY_RUNNING' is not set, and 'MD_RECOVERY_NEEDED' is set,
md_check_recovery() will try to choose a sync action, and then queue
the work md_start_sync().
3) md_start_sync() registers the sync_thread;
After this patch,
1) is the same;
2) if 'MD_RECOVERY_RUNNING' is not set, and 'MD_RECOVERY_NEEDED' is set,
queue the work md_start_sync() directly;
3) md_start_sync() will try to choose a sync action, and then register
the sync_thread;
Because 'MD_RECOVERY_RUNNING' is cleared when the sync_thread is done,
2), 3) and md_do_sync() always run serially and can never be concurrent,
so this change should not introduce any behavior change for now.
Also fix a problem that md_start_sync() can clear 'MD_RECOVERY_RUNNING'
without protection in the error path, which might affect the logic in
md_check_recovery().
The advantage of this change is that array reconfiguration is now
independent from the daemon, and it'll be much easier to synchronize it
with IO, considering that IO may rely on the daemon thread to be done.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230825031622.1530464-4-yukuai1@huaweicloud.com