Post-Mortem: Suspended ZFS Pool
I have a daily zpool status report for all of my zpools emailed to me each morning. This morning (2026-01-11) the email said my main pool was suspended. On a Sunday? Really?
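For context, the daily report is nothing exotic. A minimal sketch of that kind of setup, assuming a working MTA on the host and using a placeholder schedule, path, and address rather than my actual ones, would be a cron entry along these lines:

# /etc/cron.d/zpool-report -- sketch only; assumes mail(1) can deliver,
# and the zpool path may differ by distro
30 6 * * * root /usr/sbin/zpool status -v | mail -s "zpool status: $(hostname)" admin@example.com

Anyway, here is what the suspended pool looked like: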
phil@nuc-proxmox:~# zpool status -v gamehenge
pool: gamehenge
state: SUSPENDED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC
scan: scrub in progress since Sun Jan 11 00:24:02 2026
3.27T / 19.3T scanned at 98.8M/s, 1.53T / 19.3T issued at 46.2M/s
0B repaired, 7.93% done, 4 days 15:52:56 to go
config:
NAME                                               STATE     READ WRITE CKSUM
gamehenge                                          ONLINE       0     0     0
  raidz1-0                                         ONLINE     270    36     0
    wwn-0x5000c5008732e46e                         ONLINE     410   159     0
    usb-Mercury_Elite_Pro_Quad_D_154900000A04-0:0  ONLINE     276    18     0
    wwn-0x5000c50086f822fb                         ONLINE     217    18     0
    usb-Mercury_Elite_Pro_Quad_A_154900000A01-0:0  ONLINE     295    18     0
errors: List of errors unavailable: pool I/O is currently suspended
Uh oh.
Disk- and RAID-level errors, but nothing at the pool level? I'm not sure whether that matters or means anything, but it seemed worth noting.
I know my drive names are all over the place. I'm using an OWC Mercury Elite Pro Quad USB enclosure, which I know isn't ideal for ZFS, and as you can see the system can't settle on consistent names for the drives: two show up by WWN and two by the enclosure's USB ID. I'm in the process of acquiring parts for a proper NAS; I have just about everything besides the case and the drives, so I'm working on it. Everything I'm learning now is going to help me when I build the NAS.
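As an aside, once the pool is healthy again, the usual way to get consistent device names recorded is to export it and re-import it while pointing ZFS at the symlinks you want. A rough sketch, and only something to do while nothing is using the pool; note that when given a whole directory ZFS may still pick whichever alias it enumerates first, so -d can also be passed specific symlinks instead:

# export, then re-import using /dev/disk/by-id paths (pool must be idle)
zpool export gamehenge
zpool import -d /dev/disk/by-id gamehenge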
And right now I need to fix my pool.
I ran zpool clear, as suggested by both the warning and the linked doc, but I ended up with more issues:
phil@nuc-proxmox:~# sudo zpool clear gamehenge
cannot clear errors for gamehenge: I/O error
phil@nuc-proxmox:~# sudo zpool status -v gamehenge
pool: gamehenge
state: SUSPENDED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC
scan: scrub in progress since Sun Jan 11 00:24:02 2026
3.27T / 19.3T scanned at 95.6M/s, 1.53T / 19.3T issued at 44.8M/s
0B repaired, 7.93% done, 4 days 19:34:51 to go
config:
NAME                                               STATE     READ WRITE CKSUM
gamehenge                                          UNAVAIL      0     0     0  insufficient replicas
  raidz1-0                                         UNAVAIL      0     0     0  insufficient replicas
    wwn-0x5000c5008732e46e                         UNAVAIL      0     0     0
    usb-Mercury_Elite_Pro_Quad_D_154900000A04-0:0  UNAVAIL      0     0     0
    wwn-0x5000c50086f822fb                         UNAVAIL      0     0     0
    usb-Mercury_Elite_Pro_Quad_A_154900000A01-0:0  UNAVAIL      0     0     0
errors: List of errors unavailable: pool I/O is currently suspended
Great, now it appears that all of my disks are unavailable. The page linked in the warning says to reboot if the issue persists, but I figured I'd check a few things first.
Can Proxmox even see the disks?
phil@nuc-proxmox:~# sudo ls -la /dev/disk/by-id/
total 0
drwxr-xr-x 2 root root 200 Jan 11 01:17 .
drwxr-xr-x 8 root root 160 Jan 11 01:17 ..
lrwxrwxrwx 1 root root 9 Dec 16 04:12 ata-WDC_WDS500G2B0B-00YS70_203501A0129A -> ../../sda
lrwxrwxrwx 1 root root 10 Dec 16 04:12 ata-WDC_WDS500G2B0B-00YS70_203501A0129A-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Dec 16 04:12 ata-WDC_WDS500G2B0B-00YS70_203501A0129A-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Dec 16 04:12 ata-WDC_WDS500G2B0B-00YS70_203501A0129A-part3 -> ../../sda3
lrwxrwxrwx 1 root root 9 Dec 16 04:12 wwn-0x5001b448bbb86f48 -> ../../sda
lrwxrwxrwx 1 root root 10 Dec 16 04:12 wwn-0x5001b448bbb86f48-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Dec 16 04:12 wwn-0x5001b448bbb86f48-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Dec 16 04:12 wwn-0x5001b448bbb86f48-part3 -> ../../sda3
It only sees /dev/sda, my root disk. Unfortunately I forgot to run lsblk at this point, which would have confirmed whether the kernel still had block devices for the enclosure's drives at all.
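For next time, the check I meant to run:

# list block devices with transport, model, and serial info
lsblk -o NAME,SIZE,TYPE,TRAN,MODEL,SERIAL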
But lsusb showed:
phil@nuc-proxmox:~# sudo lsusb
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 003: ID 1e91:a4a7 Other World Computing Mercury Elite Pro Quad
Bus 001 Device 002: ID 8087:0026 Intel Corp. AX201 Bluetooth
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
So the enclosure itself was still present on the USB bus; something must have happened downstream to cause the disk errors. I checked the kernel messages for anything disk-related:
phil@nuc-proxmox:~# sudo dmesg -T | grep -iE "mercury|usb-storage|scsi|sd[b-z]" | tail -100
[Sun Jan 11 01:16:03 2026] sd 4:0:0:0: [sdc] tag#6 CDB: Read(16) 88 00 00 00 00 02 27 97 cf 88 00 00 04 00 00 00
[Sun Jan 11 01:16:03 2026] sd 4:0:0:0: [sdc] tag#5 uas_eh_abort_handler 0 uas-tag 4 inflight: CMD
[Sun Jan 11 01:16:03 2026] sd 4:0:0:0: [sdc] tag#5 CDB: Read(16) 88 00 00 00 00 02 27 97 cb 88 00 00 04 00 00 00
[Sun Jan 11 01:16:03 2026] sd 4:0:0:0: [sdc] tag#4 uas_eh_abort_handler 0 uas-tag 3 inflight: CMD
[Sun Jan 11 01:16:03 2026] sd 4:0:0:0: [sdc] tag#4 CDB: Read(16) 88 00 00 00 00 02 27 97 c7 88 00 00 04 00 00 00
[Sun Jan 11 01:16:03 2026] sd 5:0:0:0: [sdd] tag#24 uas_eh_abort_handler 0 uas-tag 5 inflight: CMD
[Sun Jan 11 01:16:03 2026] sd 5:0:0:0: [sdd] tag#24 CDB: Read(16) 88 00 00 00 00 02 27 98 16 58 00 00 04 00 00 00
[Sun Jan 11 01:16:03 2026] sd 5:0:0:0: [sdd] tag#15 uas_eh_abort_handler 0 uas-tag 2 inflight: CMD
[Sun Jan 11 01:16:03 2026] sd 5:0:0:0: [sdd] tag#15 CDB: Read(16) 88 00 00 00 00 02 27 98 0a 80 00 00 03 f0 00 00
[Sun Jan 11 01:16:03 2026] sd 5:0:0:0: [sdd] tag#7 uas_eh_abort_handler 0 uas-tag 4 inflight: CMD
[Sun Jan 11 01:16:03 2026] sd 5:0:0:0: [sdd] tag#7 CDB: Read(16) 88 00 00 00 00 02 27 98 12 70 00 00 03 e8 00 00
[Sun Jan 11 01:16:03 2026] sd 5:0:0:0: [sdd] tag#6 uas_eh_abort_handler 0 uas-tag 3 inflight: CMD
[Sun Jan 11 01:16:03 2026] sd 5:0:0:0: [sdd] tag#6 CDB: Read(16) 88 00 00 00 00 02 27 98 0e 70 00 00 04 00 00 00
[Sun Jan 11 01:16:03 2026] sd 6:0:0:0: [sde] tag#12 uas_eh_abort_handler 0 uas-tag 3 inflight: CMD
[Sun Jan 11 01:16:03 2026] sd 6:0:0:0: [sde] tag#12 CDB: Read(16) 88 00 00 00 00 02 27 97 f3 f8 00 00 04 00 00 00
[Sun Jan 11 01:16:03 2026] sd 6:0:0:0: [sde] tag#3 uas_eh_abort_handler 0 uas-tag 2 inflight: CMD
[Sun Jan 11 01:16:03 2026] sd 6:0:0:0: [sde] tag#3 CDB: Read(16) 88 00 00 00 00 02 27 98 07 c8 00 00 03 e8 00 00
[Sun Jan 11 01:16:03 2026] sd 6:0:0:0: [sde] tag#2 uas_eh_abort_handler 0 uas-tag 1 inflight: CMD
[Sun Jan 11 01:16:03 2026] sd 6:0:0:0: [sde] tag#2 CDB: Read(16) 88 00 00 00 00 02 27 98 03 c8 00 00 04 00 00 00
[Sun Jan 11 01:16:03 2026] sd 6:0:0:0: [sde] tag#1 uas_eh_abort_handler 0 uas-tag 6 inflight: CMD IN
[Sun Jan 11 01:16:03 2026] sd 6:0:0:0: [sde] tag#1 CDB: Read(16) 88 00 00 00 00 02 27 97 ff e0 00 00 03 e0 00 00
[Sun Jan 11 01:16:03 2026] sd 6:0:0:0: [sde] tag#0 uas_eh_abort_handler 0 uas-tag 5 inflight: CMD
[Sun Jan 11 01:16:03 2026] sd 6:0:0:0: [sde] tag#0 CDB: Read(16) 88 00 00 00 00 02 27 97 fb e0 00 00 04 00 00 00
[Sun Jan 11 01:16:03 2026] scsi host5: uas_eh_device_reset_handler FAILED to get lock err -19
[Sun Jan 11 01:16:03 2026] sd 5:0:0:0: [sdd] tag#6 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=35s
[Sun Jan 11 01:16:03 2026] sd 5:0:0:0: [sdd] tag#6 CDB: Read(16) 88 00 00 00 00 02 27 98 0e 70 00 00 04 00 00 00
[Sun Jan 11 01:16:03 2026] I/O error, dev sdd, sector 9254211184 op 0x0:(READ) flags 0x4000 phys_seg 128 prio class 0
[Sun Jan 11 01:16:03 2026] sd 5:0:0:0: [sdd] tag#7 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=35s
[Sun Jan 11 01:16:03 2026] sd 5:0:0:0: [sdd] tag#7 CDB: Read(16) 88 00 00 00 00 02 27 98 12 70 00 00 03 e8 00 00
[Sun Jan 11 01:16:03 2026] I/O error, dev sdd, sector 9254212208 op 0x0:(READ) flags 0x0 phys_seg 125 prio class 0
[Sun Jan 11 01:16:03 2026] sd 5:0:0:0: [sdd] tag#15 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=35s
[Sun Jan 11 01:16:03 2026] sd 5:0:0:0: [sdd] tag#15 CDB: Read(16) 88 00 00 00 00 02 27 98 0a 80 00 00 03 f0 00 00
[Sun Jan 11 01:16:03 2026] I/O error, dev sdd, sector 9254210176 op 0x0:(READ) flags 0x0 phys_seg 126 prio class 0
[Sun Jan 11 01:16:03 2026] sd 5:0:0:0: [sdd] tag#24 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=35s
[Sun Jan 11 01:16:03 2026] sd 5:0:0:0: [sdd] tag#24 CDB: Read(16) 88 00 00 00 00 02 27 98 16 58 00 00 04 00 00 00
[Sun Jan 11 01:16:03 2026] I/O error, dev sdd, sector 9254213208 op 0x0:(READ) flags 0x4000 phys_seg 128 prio class 0
[Sun Jan 11 01:16:03 2026] sd 5:0:0:0: [sdd] tag#25 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=35s
[Sun Jan 11 01:16:03 2026] sd 5:0:0:0: [sdd] tag#25 CDB: Read(16) 88 00 00 00 00 02 27 98 1a 58 00 00 03 d8 00 00
[Sun Jan 11 01:16:03 2026] I/O error, dev sdd, sector 9254214232 op 0x0:(READ) flags 0x0 phys_seg 123 prio class 0
[Sun Jan 11 01:16:03 2026] I/O error, dev sdd, sector 9254215216 op 0x0:(READ) flags 0x4000 phys_seg 128 prio class 0
[Sun Jan 11 01:16:03 2026] I/O error, dev sdd, sector 9254217264 op 0x0:(READ) flags 0x4000 phys_seg 128 prio class 0
[Sun Jan 11 01:16:03 2026] I/O error, dev sdd, sector 9254216240 op 0x0:(READ) flags 0x0 phys_seg 128 prio class 0
[Sun Jan 11 01:16:03 2026] I/O error, dev sdd, sector 9254218288 op 0x0:(READ) flags 0x0 phys_seg 123 prio class 0
[Sun Jan 11 01:16:03 2026] I/O error, dev sdd, sector 2576 op 0x0:(READ) flags 0x0 phys_seg 2 prio class 0
[Sun Jan 11 01:16:03 2026] scsi host4: uas_eh_device_reset_handler FAILED to get lock err -19
[Sun Jan 11 01:16:03 2026] scsi host6: uas_eh_device_reset_handler FAILED to get lock err -19
[Sun Jan 11 01:16:03 2026] sd 4:0:0:0: [sdc] tag#4 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=35s
[Sun Jan 11 01:16:03 2026] sd 4:0:0:0: [sdc] tag#4 CDB: Read(16) 88 00 00 00 00 02 27 97 c7 88 00 00 04 00 00 00
[Sun Jan 11 01:16:03 2026] sd 6:0:0:0: [sde] tag#0 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=35s
[Sun Jan 11 01:16:03 2026] sd 4:0:0:0: [sdc] tag#5 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=35s
[Sun Jan 11 01:16:03 2026] sd 6:0:0:0: [sde] tag#0 CDB: Read(16) 88 00 00 00 00 02 27 97 fb e0 00 00 04 00 00 00
[Sun Jan 11 01:16:03 2026] sd 4:0:0:0: [sdc] tag#5 CDB: Read(16) 88 00 00 00 00 02 27 97 cb 88 00 00 04 00 00 00
[Sun Jan 11 01:16:03 2026] sd 6:0:0:0: [sde] tag#1 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=35s
[Sun Jan 11 01:16:03 2026] sd 4:0:0:0: [sdc] tag#6 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=35s
[Sun Jan 11 01:16:03 2026] sd 6:0:0:0: [sde] tag#1 CDB: Read(16) 88 00 00 00 00 02 27 97 ff e0 00 00 03 e0 00 00
[Sun Jan 11 01:16:03 2026] sd 4:0:0:0: [sdc] tag#6 CDB: Read(16) 88 00 00 00 00 02 27 97 cf 88 00 00 04 00 00 00
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_D_154900000A04-0:0-part1 error=5 type=1 offset=4738152644608 size=1032192 flags=1074267312
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_D_154900000A04-0:0-part1 error=5 type=1 offset=4738153680896 size=1036288 flags=1074267312
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_D_154900000A04-0:0-part1 error=5 type=1 offset=4738151608320 size=1032192 flags=1074267312
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_D_154900000A04-0:0-part1 error=5 type=1 offset=4738154717184 size=1036288 flags=1074267312
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_D_154900000A04-0:0-part1 error=5 type=1 offset=4738155753472 size=1028096 flags=1074267312
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_D_154900000A04-0:0-part1 error=5 type=1 offset=4738156785664 size=1044480 flags=1074267312
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_D_154900000A04-0:0-part1 error=5 type=1 offset=270336 size=8192 flags=721089
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_D_154900000A04-0:0-part1 error=5 type=1 offset=8001552916480 size=8192 flags=721089
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_D_154900000A04-0:0-part1 error=5 type=1 offset=4738157834240 size=1036288 flags=1074267312
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_D_154900000A04-0:0-part1 error=5 type=1 offset=8001553178624 size=8192 flags=721089
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_D_154900000A04-0:0-part1 error=5 type=1 offset=4738158870528 size=1024000 flags=1074267312
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_D_154900000A04-0:0-part1 error=5 type=1 offset=4738159894528 size=1036288 flags=1074267312
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_D_154900000A04-0:0-part1 error=5 type=1 offset=4738160934912 size=1019904 flags=1074267312
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_D_154900000A04-0:0-part1 error=5 type=1 offset=4738161958912 size=1036288 flags=1074267312
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_D_154900000A04-0:0-part1 error=5 type=1 offset=4738162995200 size=139264 flags=1074267312
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_A_154900000A01-0:0-part1 error=5 type=1 offset=4738155708416 size=40960 flags=1605809
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_A_154900000A01-0:0-part1 error=5 type=1 offset=4738153631744 size=45056 flags=1605809
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_A_154900000A01-0:0-part1 error=5 type=1 offset=4738153676800 size=45056 flags=1605809
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_A_154900000A01-0:0-part1 error=5 type=1 offset=4738153721856 size=45056 flags=1605809
[Sun Jan 11 01:16:03 2026] zio pool=gamehenge vdev=/dev/disk/by-id/usb-Mercury_Elite_Pro_Quad_A_154900000A01-0:0-part1 error=5 type=1 offset=4738162225152 size=909312 flags=1074267312
[Sun Jan 11 01:16:03 2026] sd 3:0:0:0: [sdb] Synchronizing SCSI cache
[Sun Jan 11 01:16:04 2026] sd 3:0:0:0: [sdb] Synchronize Cache(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
[Sun Jan 11 01:16:04 2026] sd 4:0:0:0: [sdc] Synchronizing SCSI cache
[Sun Jan 11 01:16:04 2026] sd 4:0:0:0: [sdc] Synchronize Cache(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
[Sun Jan 11 01:16:04 2026] sd 5:0:0:0: [sdd] Synchronizing SCSI cache
[Sun Jan 11 01:16:04 2026] sd 5:0:0:0: [sdd] Synchronize Cache(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
[Sun Jan 11 01:16:04 2026] sd 6:0:0:0: [sde] Synchronizing SCSI cache
[Sun Jan 11 01:16:04 2026] sd 6:0:0:0: [sde] Synchronize Cache(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
[Sun Jan 11 01:19:16 2026] zfsdev_ioctl_common+0x5a9/0x9f0 [zfs]
[Sun Jan 11 01:19:16 2026] zfsdev_ioctl+0x57/0xf0 [zfs]
[Sun Jan 11 01:21:19 2026] zfsdev_ioctl_common+0x5a9/0x9f0 [zfs]
[Sun Jan 11 01:21:19 2026] zfsdev_ioctl+0x57/0xf0 [zfs]
[Sun Jan 11 01:23:22 2026] zfsdev_ioctl_common+0x5a9/0x9f0 [zfs]
[Sun Jan 11 01:23:22 2026] zfsdev_ioctl+0x57/0xf0 [zfs]
[Sun Jan 11 01:25:25 2026] zfsdev_ioctl_common+0x5a9/0x9f0 [zfs]
[Sun Jan 11 01:25:25 2026] zfsdev_ioctl+0x57/0xf0 [zfs]
[Sun Jan 11 01:27:27 2026] zfsdev_ioctl_common+0x5a9/0x9f0 [zfs]
[Sun Jan 11 01:27:27 2026] zfsdev_ioctl+0x57/0xf0 [zfs]
[Sun Jan 11 01:29:30 2026] zfsdev_ioctl_common+0x5a9/0x9f0 [zfs]
[Sun Jan 11 01:29:30 2026] zfsdev_ioctl+0x57/0xf0 [zfs]
[Sun Jan 11 01:31:33 2026] zfsdev_ioctl_common+0x5a9/0x9f0 [zfs]
[Sun Jan 11 01:31:33 2026] zfsdev_ioctl+0x57/0xf0 [zfs]
[Sun Jan 11 01:33:36 2026] zfsdev_ioctl_common+0x5a9/0x9f0 [zfs]
[Sun Jan 11 01:33:36 2026] zfsdev_ioctl+0x57/0xf0 [zfs]
It appears something happened at 01:16:03, in the middle of the scrub. My sanoid/syncoid jobs run on a systemd timer at :45 and :15 past the hour, so perhaps the enclosure controller got overwhelmed with I/O once syncoid kicked off on top of the scrub? I'm not sure.
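If I want to firm that theory up, lining the timer firings up against the dmesg timestamp should do it. Something like the following, where the grep patterns are a guess at my unit names rather than anything verified:

# when did the sanoid/syncoid timers last fire, and when do they fire next?
systemctl list-timers --all | grep -iE 'sanoid|syncoid'

# everything logged in the ten minutes around the failure
journalctl --since "2026-01-11 01:10:00" --until "2026-01-11 01:20:00" | grep -iE 'syncoid|uas|i/o error'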
I went ahead and rebooted the Proxmox host, paused syncoid, and crossed my fingers. The pool came up immediately with all disks present. The scrub restarted right away and finished without issue 6 hours later.
All my services work and the media is readable with no issues. For now I'll chalk it up to the USB enclosure and keep an eye on the daily status emails. This has never been an issue before, though. Perhaps I'll pause syncoid again before the next scrub, along the lines of the sketch below.
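If I do go that route, it would probably look something like this: stop the timer, run the scrub, and re-enable the timer once the scrub finishes. Here syncoid.timer is a placeholder for whatever unit actually schedules syncoid on this host.

# pause replication for the duration of the scrub (unit name is a placeholder)
systemctl stop syncoid.timer

# kick off the scrub and poll until it is no longer in progress
zpool scrub gamehenge
while zpool status gamehenge | grep -q 'scrub in progress'; do
    sleep 600
done

# resume replication
systemctl start syncoid.timer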