Kent Overstreet [Tue, 27 Jun 2023 20:20:05 +0000 (16:20 -0400)]
bcachefs: Fix leak in backpointers fsck
We were forgetting to exit a printbuf - whoops.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Tue, 27 Jun 2023 03:31:49 +0000 (23:31 -0400)]
bcachefs: unregister_shrinker() now safe on not-registered shrinker
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Tue, 27 Jun 2023 03:10:21 +0000 (23:10 -0400)]
bcachefs: Add a missing rhashtable_destroy() call
Fixes https://lore.kernel.org/linux-bcachefs/784c3e6a-75bd-e6ca-535a-43b3e1daf643@kernel.dk/T/#mbf7caf005f960018eba23b58795d06c06c947411
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 26 Jun 2023 22:36:24 +0000 (18:36 -0400)]
bcachefs: Improve bch2_bkey_make_mut()
bch2_bkey_make_mut() now takes the bkey_s_c by reference and points it
at the new, mutable key.
This helps in some fsck paths that may have multiple repair operations
on the same key.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Tue, 27 Jun 2023 02:26:04 +0000 (22:26 -0400)]
bcachefs: Reduce stack frame size of bch2_check_alloc_info()
Excessive inlining may (on some versions of gcc?) cause excessive stack
usage; this turns off some inlining in bch2_check_alloc_info.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 25 Jun 2023 05:34:45 +0000 (01:34 -0400)]
bcachefs: fsck needs BTREE_UPDATE_INTERNAL_SNAPSHOT_NODE
A few fsck paths weren't using BTREE_UPDATE_INTERNAL_SNAPSHOT_NODE -
oops.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 25 Jun 2023 03:22:20 +0000 (23:22 -0400)]
bcachefs: Improve error message for overlapping extents
We now print out the full previous extent we were overlapping with, to aid in
debugging and searching through the journal.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 25 Jun 2023 03:20:39 +0000 (23:20 -0400)]
bcachefs: Fix check_pos_snapshot_overwritten()
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 24 Jun 2023 23:30:10 +0000 (19:30 -0400)]
bcachefs: Rename enum alloc_reserve -> bch_watermark
This is prep work for consolidating with JOURNAL_WATERMARK.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 24 Jun 2023 19:59:03 +0000 (15:59 -0400)]
bcachefs: BCH_ERR_fsck -> EINVAL
When we return errors outside of bcachefs, we need to return a standard
error code - fix this for BCH_ERR_fsck.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 24 Jun 2023 16:17:57 +0000 (12:17 -0400)]
bcachefs: bch2_trans_mark_pointer() refactoring
bch2_bucket_backpointer_mod() doesn't need to update the alloc key, we
can exit the alloc iter earlier.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Wed, 21 Jun 2023 10:44:44 +0000 (06:44 -0400)]
bcachefs: Fix more lockdep splats in debug.c
Similar to previous fixes, we can't incur page faults while holding
btree locks.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Wed, 21 Jun 2023 10:00:04 +0000 (06:00 -0400)]
bcachefs: Fix lockdep splat in bch2_readdir
dir_emit() can fault (taking mmap_lock); thus we can't be holding btree
locks.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Wed, 21 Jun 2023 04:31:49 +0000 (00:31 -0400)]
bcachefs: Check for ERR_PTR() from filemap_lock_folio()
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Tue, 20 Jun 2023 17:49:25 +0000 (13:49 -0400)]
bcachefs: New error message helpers
Add two new helpers for printing error messages with __func__ and
bch2_err_str():
- bch_err_fn
- bch_err_msg
Also kill the old error strings in the recovery path, which were causing
us to incorrectly report memory allocation failures - they're not needed
anymore.
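Roughly, the new helpers look something like this (a sketch; the exact macro
bodies and the underlying bch_err() formatting may differ):

  #define bch_err_fn(_c, _ret) \
          bch_err(_c, "%s(): error %s", __func__, bch2_err_str(_ret))

  #define bch_err_msg(_c, _ret, _msg) \
          bch_err(_c, "%s(): error " _msg " %s", __func__, bch2_err_str(_ret))

so callers can write bch_err_fn(c, ret) instead of open-coding the function
name and error string at every call site.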
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Tue, 20 Jun 2023 01:12:05 +0000 (21:12 -0400)]
bcachefs: fiemap: Fix a lockdep splat
As with the previous patch, we generally can't hold btree locks while
copying to userspace, as that may incur a page fault and require
mmap_lock.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Tue, 20 Jun 2023 01:01:13 +0000 (21:01 -0400)]
bcachefs: seqmutex; fix a lockdep splat
We can't be holding btree_trans_lock while copying to user space, which
might incur a page fault. To fix this, convert it to a seqmutex so we
can unlock/relock.
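The idea, as a sketch (field and helper names here are illustrative, not
necessarily the exact code that landed): a mutex paired with a sequence number
bumped on unlock, so a caller can drop the lock, fault in userspace memory,
and then detect whether anyone else took the lock in the meantime.

  struct seqmutex {
          struct mutex    lock;
          u32             seq;
  };

  static inline void seqmutex_lock(struct seqmutex *lock)
  {
          mutex_lock(&lock->lock);
  }

  static inline u32 seqmutex_unlock(struct seqmutex *lock)
  {
          u32 seq = ++lock->seq;          /* bump seq so relock can detect races */
          mutex_unlock(&lock->lock);
          return seq;
  }

  static inline bool seqmutex_relock(struct seqmutex *lock, u32 seq)
  {
          if (lock->seq != seq || !mutex_trylock(&lock->lock))
                  return false;

          if (lock->seq != seq) {         /* re-check now that we hold the mutex */
                  mutex_unlock(&lock->lock);
                  return false;
          }

          return true;
  }

If the relock fails, the caller can restart its iteration from scratch rather
than continuing with stale state.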
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 19 Jun 2023 04:07:40 +0000 (00:07 -0400)]
bcachefs: Don't call lock_graph_descend() with wait lock held
This fixes a deadlock:
01305 WARNING: possible circular locking dependency detected
01305 6.3.0-ktest-gf4de9bee61af #5305 Tainted: G W
01305 ------------------------------------------------------
01305 cat/14658 is trying to acquire lock:
01305 ffffffc00982f460 (fs_reclaim){+.+.}-{0:0}, at: __kmem_cache_alloc_node+0x48/0x278
01305
01305 but task is already holding lock:
01305 ffffff8011aaf040 (&lock->wait_lock){+.+.}-{2:2}, at: bch2_check_for_deadlock+0x4b8/0xa58
01305
01305 which lock already depends on the new lock.
01305
01305
01305 the existing dependency chain (in reverse order) is:
01305
01305 -> #2 (&lock->wait_lock){+.+.}-{2:2}:
01305 _raw_spin_lock+0x54/0x70
01305 __six_lock_wakeup+0x40/0x1b0
01305 six_unlock_ip+0xe8/0x248
01305 bch2_btree_key_cache_scan+0x720/0x940
01305 shrink_slab.constprop.0+0x284/0x770
01305 shrink_node+0x390/0x828
01305 balance_pgdat+0x390/0x6d0
01305 kswapd+0x2e4/0x718
01305 kthread+0x184/0x1a8
01305 ret_from_fork+0x10/0x20
01305
01305 -> #1 (&c->lock#2){+.+.}-{3:3}:
01305 __mutex_lock+0x104/0x14a0
01305 mutex_lock_nested+0x30/0x40
01305 bch2_btree_key_cache_scan+0x5c/0x940
01305 shrink_slab.constprop.0+0x284/0x770
01305 shrink_node+0x390/0x828
01305 balance_pgdat+0x390/0x6d0
01305 kswapd+0x2e4/0x718
01305 kthread+0x184/0x1a8
01305 ret_from_fork+0x10/0x20
01305
01305 -> #0 (fs_reclaim){+.+.}-{0:0}:
01305 __lock_acquire+0x19d0/0x2930
01305 lock_acquire+0x1dc/0x458
01305 fs_reclaim_acquire+0x9c/0xe0
01305 __kmem_cache_alloc_node+0x48/0x278
01305 __kmalloc_node_track_caller+0x5c/0x278
01305 krealloc+0x94/0x180
01305 bch2_printbuf_make_room.part.0+0xac/0x118
01305 bch2_prt_printf+0x150/0x1e8
01305 bch2_btree_bkey_cached_common_to_text+0x170/0x298
01305 bch2_btree_trans_to_text+0x244/0x348
01305 print_cycle+0x7c/0xb0
01305 break_cycle+0x254/0x528
01305 bch2_check_for_deadlock+0x59c/0xa58
01305 bch2_btree_deadlock_read+0x174/0x200
01305 full_proxy_read+0x94/0xf0
01305 vfs_read+0x15c/0x3a8
01305 ksys_read+0xb8/0x148
01305 __arm64_sys_read+0x48/0x60
01305 invoke_syscall.constprop.0+0x64/0x138
01305 do_el0_svc+0x84/0x138
01305 el0_svc+0x34/0x80
01305 el0t_64_sync_handler+0xb0/0xb8
01305 el0t_64_sync+0x14c/0x150
01305
01305 other info that might help us debug this:
01305
01305 Chain exists of:
01305 fs_reclaim --> &c->lock#2 --> &lock->wait_lock
01305
01305 Possible unsafe locking scenario:
01305
01305 CPU0 CPU1
01305 ---- ----
01305 lock(&lock->wait_lock);
01305 lock(&c->lock#2);
01305 lock(&lock->wait_lock);
01305 lock(fs_reclaim);
01305
01305 *** DEADLOCK ***
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 18 Jun 2023 17:25:35 +0000 (13:25 -0400)]
bcachefs: Fix bch2_check_discard_freespace_key()
We weren't correctly checking the freespace btree - it's an extents
btree, which means we need to iterate over each bucket in a freespace
extent.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 18 Jun 2023 17:25:09 +0000 (13:25 -0400)]
bcachefs: bch2_trans_unlock_noassert()
This fixes a spurious assert in the btree node read path.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 17 Jun 2023 03:30:02 +0000 (23:30 -0400)]
bcachefs: Fix bch2_btree_update_start()
The calculation for number of nodes to allocate in
bch2_btree_update_start() was incorrect - this fixes a BUG_ON() on the
small nodes test.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Tue, 13 Jun 2023 19:12:04 +0000 (15:12 -0400)]
bcachefs: bch2_extent_ptr_desired_durability()
This adds a new helper for getting a pointer's durability irrespective
of the device state, and uses it in the data update path.
This fixes a bug where we do a data update but request 0 replicas to be
allocated, because the replica being rewritten is on a device marked as
failed.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Tue, 13 Jun 2023 19:05:40 +0000 (15:05 -0400)]
bcachefs: snapshot_to_text() includes snapshot tree
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Thu, 16 Mar 2023 22:05:00 +0000 (18:05 -0400)]
bcachefs: Fix try_decrease_writepoints()
- We may need to drop btree locks before taking the writepoint_lock, as
is done in other places.
- We should be using open_bucket_free_unused(), so that we don't waste
space.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 11 Jun 2023 22:24:04 +0000 (18:24 -0400)]
bcachefs: Delete weird hacky transaction restart injection
Since we currently don't have a good fault injection library,
bch2_btree_insert_node() was randomly injecting faults based on
local_clock().
At the very least this should have been a debug mode only thing, but
this is a brittle method so let's just delete it.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 11 Jun 2023 23:45:21 +0000 (19:45 -0400)]
bcachefs: Write buffer flush needs BTREE_INSERT_NOCHECK_RW
btree write buffer flush is only invoked from contexts that already hold
a write ref, and checking if we're still RW could cause us to fail to
completely flush the write buffer when shutting down.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 11 Jun 2023 23:21:16 +0000 (19:21 -0400)]
bcachefs: New assertions when marking filesystem clean
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 10 Jun 2023 05:37:16 +0000 (01:37 -0400)]
bcachefs: ec: Fix a lost wakeup
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Mikulas Patocka [Tue, 30 May 2023 12:15:41 +0000 (08:15 -0400)]
bcachefs: fix NULL pointer dereference in try_alloc_bucket
On Mon, 29 May 2023, Mikulas Patocka wrote:
> The oops happens in set_btree_iter_dontneed and it is caused by the fact
> that iter->path is NULL. The code in try_alloc_bucket is buggy because it
> sets "struct btree_iter iter = { NULL };" and then jumps to the "err"
> label that tries to dereference values in "iter".
Here I'm sending a patch for it.
From: Mikulas Patocka <mpatocka@redhat.com>
The function try_alloc_bucket sets the variable "iter" to NULL and then
(on various error conditions) jumps to the label "err". On the "err"
label, it calls "set_btree_iter_dontneed" that tries to dereference
"iter->trans" and "iter->path".
So, we get an oops on error condition.
This patch fixes the crash by testing that iter.trans and iter.path are
non-zero before calling set_btree_iter_dontneed.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 9 Jun 2023 19:41:41 +0000 (15:41 -0400)]
bcachefs: Fix subvol deletion deadlock
d_prune_aliases() may call bch2_evict_inode(), which needs
c->vfs_inodes_list_lock.
Fix this by always calling igrab() before putting the inodes onto our
disposal list, and then calling d_prune_aliases() with
c->vfs_inodes_lock dropped.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Brian Foster [Tue, 30 May 2023 18:51:12 +0000 (14:51 -0400)]
bcachefs: don't spin in rebalance when background target is not usable
If a bcachefs filesystem is configured with a background device
(disk group), rebalance will relocate data to this device in the
background by checking extent keys for whether they currently reside
in the specified target. For keys that do not, rebalance performs a
read/write cycle to allow the write path to properly relocate data.
If the background target is not usable (read-only, for example),
however, the write path doesn't actually move data to another
device. Instead, rebalance spins indefinitely reading and rewriting
the same data over and over to the same device. If the background
target is made available again, the rebalance picks this up,
relocates the data, and eventually terminates.
To avoid this spinning behavior, update the rebalance background
target logic to not only check whether the extent is not in the
target, but whether the target is actually usable as well. If not,
then don't mark the key for rewrite.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Brian Foster [Tue, 30 May 2023 18:48:58 +0000 (14:48 -0400)]
bcachefs: push rcu lock down into bch2_target_to_mask()
We have one caller that cycles the rcu lock solely for this call
(via target_rw_devs()), and we'd like to add another. Simplify
things by pushing the rcu lock down into bch2_target_to_mask(),
similar to how bch2_dev_in_target() works.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Brian Foster [Tue, 30 May 2023 18:41:50 +0000 (14:41 -0400)]
bcachefs: create internal disk_groups sysfs file
We have bch2_sb_disk_groups_to_text() to dump disk group labels, but
no good information on device group membership at runtime. Add
bch2_disk_groups_to_text() and an associated 'disk_groups' sysfs
file to print group and device relationships.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 5 Jun 2023 05:16:00 +0000 (01:16 -0400)]
bcachefs: Clean up tests code
- delete redundant error messages
- convert various code to bch2_trans_run
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 5 Jun 2023 05:15:33 +0000 (01:15 -0400)]
bcachefs: Improve backpointers error message
The error message here dates from when backpointers could be stored in
alloc keys; now, we should always print the full key.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Tue, 30 May 2023 08:59:30 +0000 (04:59 -0400)]
bcachefs: More drop_locks_do() conversions
Using drop_locks_do() ensures that every unlock() is paired with a
relock(), with proper error checking.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 4 Jun 2023 23:40:35 +0000 (19:40 -0400)]
bcachefs: Delete warning from promote_alloc()
It's possible to see a -BCH_ERR_ENOSPC_disk_reservation here, and that's
fine.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 4 Jun 2023 22:08:56 +0000 (18:08 -0400)]
bcachefs: Fix bch2_fsck_ask_yn()
- getline() output includes a newline; without stripping that we were
just looping
- Make the prompt clearer
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 28 May 2023 23:23:35 +0000 (19:23 -0400)]
bcachefs: replicas_deltas_realloc() uses allocate_dropping_locks()
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 28 May 2023 07:44:38 +0000 (03:44 -0400)]
bcachefs: Convert acl.c to allocate_dropping_locks()
More work to avoid allocating memory with btree locks held.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 28 May 2023 07:44:38 +0000 (03:44 -0400)]
bcachefs: allocate_dropping_locks()
Add two new helpers for allocating memory with btree locks held: The
idea is to first try the allocation with GFP_NOWAIT|__GFP_NOWARN, then
if that fails - unlock, retry with GFP_KERNEL, and then call
trans_relock().
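As a sketch (macro shape only; argument handling and names in btree_iter.h may
differ):

  #define allocate_dropping_locks(_trans, _ret, _do)                    \
  ({                                                                    \
          gfp_t _gfp = GFP_NOWAIT|__GFP_NOWARN;                         \
          typeof(_do) _p = _do;                                         \
                                                                        \
          _ret = 0;                                                     \
          if (unlikely(!_p)) {                                          \
                  _gfp = GFP_KERNEL;                                    \
                  _ret = drop_locks_do(_trans, ((_p = _do), 0));        \
          }                                                             \
          _p;                                                           \
  })

Callers pass an allocation expression that uses _gfp, e.g.
allocate_dropping_locks(trans, ret, kmalloc(size, _gfp)).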
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 29 May 2023 20:27:11 +0000 (16:27 -0400)]
bcachefs: Use unlikely() in bch2_err_matches()
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 29 May 2023 06:26:04 +0000 (02:26 -0400)]
bcachefs: Fix error handling in promote path
The promote path had a BUG_ON() for unknown error type, which we're now
seeing: change it to a WARN_ON() - because we're curious what this is -
and otherwise handle it in the normal error path.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 28 May 2023 04:59:26 +0000 (00:59 -0400)]
bcachefs: fs-io: Eliminate GFP_NOFS usage
GFP_NOFS doesn't ever make sense. If we're allocating memory it should
be GFP_NOWAIT if btree locks are held, GFP_KERNEL otherwise.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 28 May 2023 05:09:50 +0000 (01:09 -0400)]
bcachefs: bch2_trans_kmalloc no longer allocates memory with btree locks held
When allocating memory, gfp flags should generally be
- GFP_NOWAIT|__GFP_NOWARN if btree locks are held
- GFP_NOFS if in the IO path or otherwise holding resources needed for
IO submission
- GFP_KERNEL otherwise
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 28 May 2023 22:06:27 +0000 (18:06 -0400)]
bcachefs: drop_locks_do()
Add a new helper for the common pattern of:
- trans_unlock()
- do something
- trans_relock()
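Something like (sketch):

  #define drop_locks_do(_trans, _do)                                    \
  ({                                                                    \
          bch2_trans_unlock(_trans);                                    \
          _do ?: bch2_trans_relock(_trans);                             \
  })

i.e. the body runs with the transaction unlocked, and if it succeeds we pick up
the relock result (possibly a transaction restart) automatically.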
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 28 May 2023 22:02:38 +0000 (18:02 -0400)]
bcachefs: GFP_NOIO -> GFP_NOFS
GFP_NOIO dates from the bcache days, when we operated under the block
layer. Now, GFP_NOFS is more appropriate, so switch all GFP_NOIO uses to
GFP_NOFS.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 28 May 2023 06:35:34 +0000 (02:35 -0400)]
bcachefs: Ensure bch2_btree_node_get() calls relock() after unlock()
Fix a bug where bch2_btree_node_get() might call bch2_trans_unlock() (in
fill) without calling bch2_trans_relock(); this is a bug when it's done
in the core btree code.
Also, tweak bch2_btree_node_mem_alloc() to drop btree locks before doing
a blocking memory allocation.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 28 May 2023 04:35:35 +0000 (00:35 -0400)]
bcachefs: Avoid __GFP_NOFAIL
We've been using __GFP_NOFAIL for allocating struct bch_folio, our
private per-folio state.
However, that struct is variable size - it holds state for each sector
in the folio, and folios can be quite large now, which means it's
possible for bch_folio to be larger than PAGE_SIZE now.
__GFP_NOFAIL allocations are undesirable in normal circumstances, but
particularly so at >= PAGE_SIZE, and warnings are emitted for that.
So, this patch adds proper error paths and eliminates most uses of
__GFP_NOFAIL. Also, do some more cleanup of gfp flags w.r.t. btree node
locks: we can use GFP_KERNEL, but only if we're not holding btree locks,
and if we are holding btree locks we should be using GFP_NOWAIT.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 28 May 2023 03:19:13 +0000 (23:19 -0400)]
bcachefs: Fix corruption with writeable snapshots
When partially overwriting an extent in an older snapshot, the existing
extent has to be split.
If the existing extent was overwritten in a different (sibling)
snapshot, we have to ensure that the split won't be visible in the
sibling snapshot.
data_update.c already has code for this,
bch2_insert_snapshot_writeouts() - we just need to move it into
btree_update_leaf.c and change bch2_trans_update_extent() to use it as
well.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 27 May 2023 23:59:59 +0000 (19:59 -0400)]
bcachefs: Convert -ENOENT to private error codes
As with previous conversions, replace -ENOENT uses with more informative
private error codes.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 27 May 2023 23:55:54 +0000 (19:55 -0400)]
bcachefs: trans_for_each_path_safe()
bch2_btree_trans_to_text() is used on btree_trans objects that are owned
by different threads - when printing out deadlock cycles - so we need a
safe version of trans_for_each_path(), else we race with seeing a
btree_path that was just allocated and not fully initialized:
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 28 May 2023 00:00:13 +0000 (20:00 -0400)]
bcachefs: Fix a quota read bug
bch2_fs_quota_read() could see an inode that's been deleted
(KEY_TYPE_inode_generation) - bch2_fs_quota_read_inode() needs to check
for that instead of erroring.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 26 May 2023 22:12:55 +0000 (18:12 -0400)]
bcachefs: Fix move_extent_fail counter
fail counters need to be events, not numbers of sectors - or the
calculations the tests use for determining if we've had too many
slowpath events don't work.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 26 May 2023 03:37:06 +0000 (23:37 -0400)]
bcachefs: Don't reuse reflink btree keyspace
We've been seeing difficult to debug "missing indirect extent" bugs,
that fsck doesn't seem to find.
One possibility is that there was a missing indirect extent, but then a
new indirect extent was created at the location of the previous indirect
extent.
This patch eliminates that possibility by always creating new indirect
extents right after the last one, at the end of the reflink btree.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 4 Jun 2023 21:58:56 +0000 (17:58 -0400)]
mean and variance: Add a missing include
abs() is in math.h
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 26 May 2023 02:22:25 +0000 (22:22 -0400)]
mean and variance: More tests
Add some more tests that test conventional and weighted mean
simultaneously, and with a table of values that represents events that
we'll be using this to look for so we can verify-by-eyeball that the
output looks sane.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 10 Jun 2023 14:57:23 +0000 (10:57 -0400)]
six locks: Disable percpu read lock mode in userspace
When running in userspace, we currently don't have a real percpu
implementation available - at least in bcachefs-tools, which is where
this code is currently used in userspace.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Thu, 25 May 2023 18:35:06 +0000 (14:35 -0400)]
six locks: Use atomic_try_cmpxchg_acquire()
This switches to a newer cmpxchg variant which updates @old for us on
failure, simplifying the cmpxchg loops a bit and supposedly generating
better code.
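The before/after shape of such a loop (the bit name here is purely
illustrative):

  /* Before: reload the old value by hand on every failed cmpxchg */
  int old, new, v = atomic_read(&lock->state);
  do {
          old = v;
          new = old | SIX_LOCK_HELD_read;
  } while ((v = atomic_cmpxchg_acquire(&lock->state, old, new)) != old);

  /* After: atomic_try_cmpxchg_acquire() updates @old for us on failure */
  int old = atomic_read(&lock->state), new;
  do {
          new = old | SIX_LOCK_HELD_read;
  } while (!atomic_try_cmpxchg_acquire(&lock->state, &old, new));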
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Thu, 25 May 2023 22:10:04 +0000 (18:10 -0400)]
six locks: Fix an uninitialized var
In the conversion to atomic_t, six_lock_slowpath() ended up calling
six_lock_wakeup() in the failure path with a state variable that was
never initialized - whoops.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Tue, 23 May 2023 04:21:22 +0000 (00:21 -0400)]
six locks: Delete redundant comment
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 22 May 2023 16:11:13 +0000 (12:11 -0400)]
six locks: Tiny bit more tidying
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 16 Jun 2023 19:56:42 +0000 (15:56 -0400)]
six locks: Seq now only incremented on unlock
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 22 May 2023 04:17:40 +0000 (00:17 -0400)]
six locks: Split out seq, use atomic_t instead of atomic64_t
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 16 Jun 2023 23:21:21 +0000 (19:21 -0400)]
six locks: Single instance of six_lock_vals
Since we're not generating different versions of the lock functions for
each lock type, the constant propagation we were trying to do before is
no longer useful - this is now a small code size decrease.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 22 May 2023 21:54:19 +0000 (17:54 -0400)]
six_locks: Kill test_bit()/set_bit() usage
This deletes the crazy cast-atomic-to-unsigned-long, and replaces them
with atomic_and() and atomic_or().
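i.e. roughly (bit names illustrative):

  /* Before: casting the atomic to unsigned long so bitops would work */
  set_bit(SIX_STATE_WAITING_BIT, (unsigned long *) &lock->state);
  clear_bit(SIX_STATE_WAITING_BIT, (unsigned long *) &lock->state);

  /* After: native atomic ops on the atomic_t itself */
  atomic_or(SIX_STATE_WAITING, &lock->state);
  atomic_and(~SIX_STATE_WAITING, &lock->state);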
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 16 Jun 2023 22:24:05 +0000 (18:24 -0400)]
six locks: lock->state.seq no longer used for write lock held
lock->state.seq is shortly being moved out of lock->state, to kill the
dependency on atomic64; in preparation for that, write-lock-held is now
tracked with its own dedicated bit rather than via seq.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 16 Jun 2023 19:00:48 +0000 (15:00 -0400)]
six locks: Simplify six_relock()
The next patch is going to move lock->seq out of lock->state. This
replaces six_relock() with a much simpler implementation based on
trylock.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 22 May 2023 03:41:56 +0000 (23:41 -0400)]
six locks: Improve spurious wakeup handling in pcpu reader mode
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 21 May 2023 19:40:40 +0000 (15:40 -0400)]
six locks: Documentation, renaming
- Expanded and revamped overview documentation in six.h, giving an
overview of all features
- docbook-comments for all external interfaces
- Rename some functions for simplicity, i.e.
six_lock_ip_type() -> six_lock_ip()
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 21 May 2023 03:57:48 +0000 (23:57 -0400)]
six locks: Kill six_lock_state union
As suggested by Linus, this drops the six_lock_state union in favor of
raw bitmasks.
On the one hand, bitfields give more type-level structure to the code.
However, a significant amount of the code was working with
six_lock_state as a u64/atomic64_t, and the conversions from the
bitfields to the u64 were deemed a bit too out-there.
More significantly, because bitfield order is poorly defined (#ifdef
__LITTLE_ENDIAN_BITFIELD can be used, but is gross), incrementing the
sequence number would overflow into the rest of the bitfield if the
compiler didn't put the sequence number at the high end of the word.
The new code is a bit saner when we're on an architecture without real
atomic64_t support - all accesses to lock->state now go through
atomic64_*() operations.
On architectures with real atomic64_t support, we additionally use
atomic bit ops for setting/clearing individual bits.
Text size: 7467 bytes -> 4649 bytes - compilers still suck at
bitfields.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 21 May 2023 01:44:30 +0000 (21:44 -0400)]
six locks: Simplify dispatch
Originally, we used inlining/flattening to cause the compiler to
generate different versions of lock/trylock/relock/unlock for each lock
type - read, intent, and write. This made the individual functions
smaller and let the compiler eliminate table lookups: however, as the
code has gotten more complicated these optimizations have gotten less
worthwhile, and all the tricky inlining and dispatching made the code
less readable.
Text size: 11015 bytes -> 7467 bytes, and benchmarks show no loss of
performance.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 21 May 2023 00:37:53 +0000 (20:37 -0400)]
six locks: Centralize setting of waiting bit
Originally, the waiting bit was always set by trylock() on failure:
however, it's now set by __six_lock_type_slowpath(), with wait_lock held
- which is the more correct place to do it.
That made setting the waiting bit in trylock redundant, so this patch
deletes that.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 21 May 2023 20:38:09 +0000 (16:38 -0400)]
six locks: Remove hacks for percpu mode lost wakeup
The lost wakeup bug hasn't been observed in a while, and we're trying to
provoke it and determine if it still exists.
This patch removes some defenses that were added to attempt to track it
down; if it still exists, this should make it easier to see it.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 21 May 2023 00:57:55 +0000 (20:57 -0400)]
six locks: Kill six_lock_pcpu_(alloc|free)
six_lock_pcpu_alloc() is an unsafe interface: it's not safe to allocate
or free the percpu reader count on an existing lock that's in use, the
only safe time to allocate percpu readers is when the lock is first
being initialized.
This patch adds a flags parameter to six_lock_init(), and instead of
six_lock_pcpu_free() we now expose six_lock_exit(), which does the same
thing but is less likely to be misused.
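Usage then looks something like this (the flag name is an assumption, shown
only for illustration):

  struct six_lock lock;

  six_lock_init(&lock, SIX_LOCK_INIT_PCPU);  /* percpu readers chosen up front */
  /* ... use the lock ... */
  six_lock_exit(&lock);                      /* frees the percpu reader counts */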
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 21 May 2023 00:40:08 +0000 (20:40 -0400)]
six locks: six_lock_readers_add()
This moves a helper out of the bcachefs code that shouldn't have been
there, since it touches six lock internals.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 16 Jun 2023 22:55:07 +0000 (18:55 -0400)]
bcachefs: Don't call local_clock() twice in trans_begin()
local_clock() is not as cheap as we'd like it to be, alas
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 22 May 2023 18:39:44 +0000 (14:39 -0400)]
bcachefs: Fix a buffer overrun in bch2_fs_usage_read()
We were copying the size of a struct bch_fs_usage_online to a struct
bch_fs_usage, which is 8 bytes smaller.
This adds some new helpers so we can do this correctly, and get rid of
some magic +1s too.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 22 May 2023 04:49:06 +0000 (00:49 -0400)]
bcachefs: Clear btree_node_just_written() when node reused or evicted
This fixes the following bug:
Journal reclaim attempts to flush a node, but races with the node being
evicted from the btree node cache; when we lock the node, the data
buffers have already been freed.
We don't evict a node that's dirty, so calling btree_node_write() is
fine - it's a noop - except that the btree_node_just_written bit causes
bch2_btree_post_write_cleanup() to run (resorting the node), which then
causes a null ptr deref.
00078 Unable to handle kernel NULL pointer dereference at virtual address 000000000000009e
00078 Mem abort info:
00078 ESR = 0x0000000096000005
00078 EC = 0x25: DABT (current EL), IL = 32 bits
00078 SET = 0, FnV = 0
00078 EA = 0, S1PTW = 0
00078 FSC = 0x05: level 1 translation fault
00078 Data abort info:
00078 ISV = 0, ISS = 0x00000005
00078 CM = 0, WnR = 0
00078 user pgtable: 4k pages, 39-bit VAs, pgdp=000000007ed64000
00078 [000000000000009e] pgd=0000000000000000, p4d=0000000000000000, pud=0000000000000000
00078 Internal error: Oops: 0000000096000005 [#1] SMP
00078 Modules linked in:
00078 CPU: 75 PID: 1170 Comm: stress-ng-utime Not tainted 6.3.0-ktest-g5ef5b466e77e #2078
00078 Hardware name: linux,dummy-virt (DT)
00078 pstate: 80001005 (Nzcv daif -PAN -UAO -TCO -DIT +SSBS BTYPE=--)
00078 pc : btree_node_sort+0xc4/0x568
00078 lr : bch2_btree_post_write_cleanup+0x6c/0x1c0
00078 sp : ffffff803e30b350
00078 x29: ffffff803e30b350 x28: 0000000000000001 x27: ffffff80076e52a8
00078 x26: 0000000000000002 x25: 0000000000000000 x24: ffffffc00912e000
00078 x23: ffffff80076e52a8 x22: 0000000000000000 x21: ffffff80076e52bc
00078 x20: ffffff80076e5200 x19: 0000000000000000 x18: 0000000000000000
00078 x17: fffffffff8000000 x16: 0000000008000000 x15: 0000000008000000
00078 x14: 0000000000000002 x13: 0000000000000000 x12: 00000000000000a0
00078 x11: ffffff803e30b400 x10: ffffff803e30b408 x9 : 0000000000000001
00078 x8 : 0000000000000000 x7 : ffffff803e480000 x6 : 00000000000000a0
00078 x5 : 0000000000000088 x4 : 0000000000000000 x3 : 0000000000000010
00078 x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffffff80076e52a8
00078 Call trace:
00078 btree_node_sort+0xc4/0x568
00078 bch2_btree_post_write_cleanup+0x6c/0x1c0
00078 bch2_btree_node_write+0x108/0x148
00078 __btree_node_flush+0x104/0x160
00078 bch2_btree_node_flush0+0x1c/0x30
00078 journal_flush_pins.constprop.0+0x184/0x2d0
00078 __bch2_journal_reclaim+0x4d4/0x508
00078 bch2_journal_reclaim+0x1c/0x30
00078 __bch2_journal_preres_get+0x244/0x268
00078 bch2_trans_journal_preres_get_cold+0xa4/0x180
00078 __bch2_trans_commit+0x61c/0x1bb0
00078 bch2_setattr_nonsize+0x254/0x318
00078 bch2_setattr+0x5c/0x78
00078 notify_change+0x2bc/0x408
00078 vfs_utimes+0x11c/0x218
00078 do_utimes+0x84/0x140
00078 __arm64_sys_utimensat+0x68/0xa8
00078 invoke_syscall.constprop.0+0x54/0xf0
00078 do_el0_svc+0x48/0xd8
00078 el0_svc+0x14/0x48
00078 el0t_64_sync_handler+0xb0/0xb8
00078 el0t_64_sync+0x14c/0x150
00078 Code: 8b050265 910020c6 8b060266 910060ac (79402cad)
00078 ---[ end trace 0000000000000000 ]---
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 20 May 2023 06:20:28 +0000 (02:20 -0400)]
bcachefs: alloc_v4_u64s() fix
With the recent bkey_ops.min_val_size addition, bkey values are
automatically extended to the size of the current version.
The check in bch2_alloc_v4_invalid() needs to be updated to take this
into account.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 15 May 2023 03:01:14 +0000 (23:01 -0400)]
bcachefs: Delete an incorrect bch2_trans_unlock()
This deletes a bch2_trans_unlock() call from __bch2_move_data(). It was
redundant; bch2_move_extent() has the correct unlock call, and it was
buggy because when move_extent calls bch2_extent_drop_ptrs() we don't
want the transaction to be unlocked yet - this fixes a btree_iter.c
assertion.
Fixes https://github.com/koverstreet/bcachefs/issues/511.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 13 May 2023 21:21:55 +0000 (17:21 -0400)]
bcachefs: Use memcpy_u64s_small() for copying keys
Small performance optimization; an open coded loop is better than rep ;
movsq for small copies.
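i.e. something along these lines (sketch):

  static inline void memcpy_u64s_small(void *dst, const void *src,
                                       unsigned u64s)
  {
          u64 *d = dst;
          const u64 *s = src;

          while (u64s--)
                  *d++ = *s++;
  }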
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 13 May 2023 04:11:14 +0000 (00:11 -0400)]
bcachefs: Fix check_overlapping_extents()
An error check had a flipped conditional - whoops.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 13 May 2023 00:28:54 +0000 (20:28 -0400)]
bcachefs: Replace a BUG_ON() with fatal error
A user hit this BUG_ON() - it's unclear how it happened, so replace it
with a fatal error that will cause us to go read only, and print out
more information.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 8 May 2023 18:23:08 +0000 (14:23 -0400)]
bcachefs: Delete some dead code in bch2_replicas_gc_end()
bch2_replicas_gc_(start|end) is now only used for journal replicas
entries, which don't have bucket sector counts - so this code is
entirely dead and can be deleted.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Brian Foster [Thu, 4 May 2023 16:44:15 +0000 (12:44 -0400)]
bcachefs: mark journal replicas before journal write submission
The journal write submission path marks the associated replica
entries for journal data in journal_write_done(), which is just
after journal write bio submission. This creates a small window
where journal entries might have been written out, but the
associated replica is not marked such that recovery does not know
that the associated device contains journal data.
Move the replica marking a bit earlier in the write path such that
recovery is guaranteed to recognize that the device contains journal
data in the event of a crash.
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Tue, 2 May 2023 22:22:12 +0000 (18:22 -0400)]
bcachefs: Improved comment for bch2_replicas_gc2()
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 28 Apr 2023 07:50:57 +0000 (03:50 -0400)]
bcachefs: Fix quotas + snapshots
Now that we can reliably designate and find the master subvolume out of
a tree of snapshots, we can finally make quotas work with snapshots:
That is - quotas will now _ignore_ snapshot subvolumes, and only be in
effect for the master (non snapshot) subvolume.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Wed, 29 Mar 2023 15:18:59 +0000 (11:18 -0400)]
bcachefs: Add otime, parent to bch_subvolume
Add two new fields to bch_subvolume:
- otime: creation time
- parent: For snapshots, this is the id of the subvolume the snapshot
was created from
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Wed, 29 Mar 2023 15:18:52 +0000 (11:18 -0400)]
bcachefs: BTREE_ID_snapshot_tree
This adds a new btree which gets us a persistent per-snapshot-tree
identifier.
- BTREE_ID_snapshot_trees
- KEY_TYPE_snapshot_tree
- bch_snapshot now has a field that points to a snapshot_tree
This is going to be used to designate one snapshot ID/subvolume out of a
given tree of snapshots as the "main" subvolume, so that we can do quota
accounting in that subvolume and not the rest.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 28 Apr 2023 03:20:18 +0000 (23:20 -0400)]
bcachefs: bch2_bkey_get_empty_slot()
Add a new helper for allocating a new slot in a btree.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 30 Apr 2023 23:21:06 +0000 (19:21 -0400)]
bcachefs: bch2_bkey_make_mut() now calls bch2_trans_update()
It's safe to call bch2_trans_update with a k/v pair where the value
hasn't been filled out, as long as the key part has been and the value
is filled out by transaction commit time.
This patch folds the bch2_trans_update() call into bch2_bkey_make_mut(),
eliminating a bit of boilerplate.
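The shape of the change at call sites, roughly (signatures illustrative):

  /* Before */
  new = bch2_bkey_make_mut(trans, k);
  ret =   PTR_ERR_OR_ZERO(new) ?:
          bch2_trans_update(trans, &iter, new, 0);

  /* After: the helper queues the update itself */
  new = bch2_bkey_make_mut(trans, &iter, k, 0);
  ret = PTR_ERR_OR_ZERO(new);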
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 30 Apr 2023 22:46:24 +0000 (18:46 -0400)]
bcachefs: bch2_bkey_get_mut() now calls bch2_trans_update()
It's safe to call bch2_trans_update with a k/v pair where the value
hasn't been filled out, as long as the key part has been and the value
is filled out by transaction commit time.
This patch folds the bch2_trans_update() call into bch2_bkey_get_mut(),
eliminating a bit of boilerplate.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 30 Apr 2023 22:59:28 +0000 (18:59 -0400)]
bcachefs: bch2_bkey_alloc() now calls bch2_trans_update()
It's safe to call bch2_trans_update with a k/v pair where the value
hasn't been filled out, as long as the key part has been and the value
is filled out by transaction commit time.
This patch folds the bch2_trans_update() call into bch2_bkey_alloc(),
eliminating a bit of boilerplate.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 28 Apr 2023 03:48:33 +0000 (23:48 -0400)]
bcachefs: bch2_bkey_get_mut() improvements
- bch2_bkey_get_mut() now handles types increasing in size, allocating
a buffer for the type's current size when necessary
- bch2_bkey_make_mut_typed()
- bch2_bkey_get_mut() now initializes the iterator, like
bch2_bkey_get_iter()
Also, refactor so that most of the code is in functions - now macros are
only used for wrappers.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 1 May 2023 00:58:59 +0000 (20:58 -0400)]
bcachefs: Move bch2_bkey_make_mut() to btree_update.h
It's for doing updates - this is where it belongs, and the next patches will
be changing these helpers to use items from btree_update.h.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 29 Apr 2023 23:33:09 +0000 (19:33 -0400)]
bcachefs: bch2_bkey_get_iter() helpers
Introduce new helpers for a common pattern:
bch2_trans_iter_init();
bch2_btree_iter_peek_slot();
- bch2_bkey_get_iter_type() returns -ENOENT if it doesn't find a key of
the correct type
- bch2_bkey_get_val_typed() copies the val out of the btree to a
(typically stack allocated) variable; it handles the case where the
value in the btree is smaller than the current version of the type,
zeroing out the remainder.
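For comparison, the old open-coded pattern versus the new helper, roughly
(exact signatures may differ):

  /* Before */
  bch2_trans_iter_init(trans, &iter, btree_id, pos, flags);
  k = bch2_btree_iter_peek_slot(&iter);
  ret = bkey_err(k);

  /* After: init + peek_slot (+ optional type check) in one call */
  k = bch2_bkey_get_iter(trans, &iter, btree_id, pos, flags);
  ret = bkey_err(k);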
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 29 Apr 2023 17:24:18 +0000 (13:24 -0400)]
bcachefs: bkey_ops.min_val_size
This adds a new field to bkey_ops for the minimum size of the value,
which standardizes that check and also enforces the new rule (previously
done somewhat ad-hoc) that we can extend value types by adding new
fields on to the end.
To make that work we do _not_ initialize min_val_size with sizeof,
instead we initialize it to the size of the first version of those
values.
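As a sketch of what that looks like (initializer layout and values
illustrative):

  #define bch2_bkey_ops_alloc ((struct bkey_ops) {             \
          .key_invalid    = bch2_alloc_v1_invalid,              \
          .val_to_text    = bch2_alloc_to_text,                 \
          .min_val_size   = 8,  /* size of the v1 value, not sizeof the current one */ \
  })

and the generic check, conceptually:

  if (bkey_val_bytes(k.k) < ops->min_val_size)
          return -EINVAL;       /* value smaller than the oldest known version */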
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 30 Apr 2023 17:02:05 +0000 (13:02 -0400)]
bcachefs: Converting to typed bkeys is now allowed for err, null ptrs
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sun, 30 Apr 2023 22:04:43 +0000 (18:04 -0400)]
bcachefs: Btree iterator, update flags no longer conflict
Change btree_update_flags to start after the last btree iterator flag,
so that we can pass both in the same flags argument.
This is needed for the upcoming bch2_bkey_get_mut() helper.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>