Bitcoind restarting

Hi everyone, I am running a VM (KVM) on CachyOS, and for storage I am using an NVMe WD_BLACK SN850X 4TB.

My VM:
2 processors
4 GB memory
2 TB virtual disk

I managed to sync the blockchain without problems, but when I try to sync the Electrum server things don’t go well. I noticed Electrs was restarting, so I checked Bitcoin Core and it was down. Apparently it was stuck in a restart loop.

Is there anything I can do to lighten the load for Core? Maybe it’s too much I/O?

Below is the log:

2025-11-27T19:54:29+01:00  2025-11-27T18:54:29Z New outbound-full-relay v1 peer connected: version: 70016, blocks=925456, peer=6
2025-11-27T19:54:31+01:00  2025-11-27T18:54:31Z [error] ReadBlockFromDisk: Deserialize or I/O error - AutoFile::read: end of file: iostream error at FlatFilePos(nFile=5193, nPos=38803503)
2025-11-27T19:54:31+01:00  2025-11-27T18:54:31Z [error] A fatal internal error occurred, see debug.log for details: Sync: Failed to read block 00000000000000000000be4757ac50220a9db461b57083cfef9bbc2a59fe677a from disk
2025-11-27T19:54:31+01:00  Error: A fatal internal error occurred, see debug.log for details: Sync: Failed to read block 00000000000000000000be4757ac50220a9db461b57083cfef9bbc2a59fe677a from disk
2025-11-27T19:54:31+01:00  2025-11-27T18:54:31Z tor: Thread interrupt
2025-11-27T19:54:31+01:00  2025-11-27T18:54:31Z Shutdown: In progress...
2025-11-27T19:54:31+01:00  2025-11-27T18:54:31Z torcontrol thread exit
2025-11-27T19:54:31+01:00  2025-11-27T18:54:31Z addcon thread exit
2025-11-27T19:54:31+01:00  2025-11-27T18:54:31Z basic block filter index thread exit
2025-11-27T19:54:31+01:00  2025-11-27T18:54:31Z msghand thread exit
2025-11-27T19:54:31+01:00  2025-11-27T18:54:31Z net thread exit
2025-11-27T19:54:31+01:00  Error updating blockchain info: error: timeout on transient error: Could not connect to the server 127.0.0.1:8332
2025-11-27T19:54:31+01:00  
2025-11-27T19:54:31+01:00  Make sure the bitcoind server is running and that you are connecting to the correct RPC port.
2025-11-27T19:54:31+01:00  Use "bitcoin-cli -help" for more info.
2025-11-27T19:54:31+01:00  
2025-11-27T19:54:31+01:00  Error updating network info: error: timeout on transient error: Could not connect to the server 127.0.0.1:8332
2025-11-27T19:54:31+01:00  
2025-11-27T19:54:31+01:00  Make sure the bitcoind server is running and that you are connecting to the correct RPC port.
2025-11-27T19:54:31+01:00  Use "bitcoin-cli -help" for more info.
2025-11-27T19:54:31+01:00  
2025-11-27T19:54:34+01:00  2025-11-27T18:54:34Z opencon thread exit
2025-11-27T19:54:34+01:00  2025-11-27T18:54:34Z DumpAnchors: Flush 2 outbound block-relay-only peer addresses to anchors.dat started
2025-11-27T19:54:34+01:00  2025-11-27T18:54:34Z DumpAnchors: Flush 2 outbound block-relay-only peer addresses to anchors.dat completed (0.00s)
2025-11-27T19:54:34+01:00  2025-11-27T18:54:34Z scheduler thread exit
2025-11-27T19:54:34+01:00  2025-11-27T18:54:34Z Writing 333 mempool transactions to file...
2025-11-27T19:54:34+01:00  2025-11-27T18:54:34Z Writing 0 unbroadcast transactions to file.
2025-11-27T19:54:34+01:00  2025-11-27T18:54:34Z Dumped mempool: 0.000s to copy, 0.004s to dump, 328142 bytes dumped to file
2025-11-27T19:54:34+01:00  2025-11-27T18:54:34Z Flushed fee estimates to fee_estimates.dat.
2025-11-27T19:54:34+01:00  2025-11-27T18:54:34Z Shutdown: done
2025-11-27T19:54:50+01:00  Error updating blockchain info: error: timeout on transient error: Could not connect to the server 127.0.0.1:8332
2025-11-27T19:54:50+01:00  
2025-11-27T19:54:50+01:00  Make sure the bitcoind server is running and that you are connecting to the correct RPC port.
2025-11-27T19:54:50+01:00  Use "bitcoin-cli -help" for more info.
2025-11-27T19:54:50+01:00  
2025-11-27T19:54:50+01:00  Error updating network info: error: timeout on transient error: Could not connect to the server 127.0.0.1:8332
2025-11-27T19:54:50+01:00  
2025-11-27T19:54:50+01:00  Make sure the bitcoind server is running and that you are connecting to the correct RPC port.
2025-11-27T19:54:50+01:00  Use "bitcoin-cli -help" for more info.
2025-11-27T19:54:50+01:00  
2025-11-27T19:54:50+01:00  2025-11-27T18:54:50Z Bitcoin Core version v28.1.0 (release build)
2025-11-27T19:54:50+01:00  2025-11-27T18:54:50Z parameter interaction: -externalip set -> setting -discover=0
2025-11-27T19:54:50+01:00  2025-11-27T18:54:50Z Script verification uses 1 additional threads
2025-11-27T19:54:50+01:00  2025-11-27T18:54:50Z Using the 'x86_shani(1way,2way)' SHA256 implementation
2025-11-27T19:54:50+01:00  2025-11-27T18:54:50Z Using RdSeed as an additional entropy source
2025-11-27T19:54:50+01:00  2025-11-27T18:54:50Z Using RdRand as an additional entropy source
2025-11-27T19:54:50+01:00  2025-11-27T18:54:50Z Default data directory /root/.bitcoin
2025-11-27T19:54:50+01:00  2025-11-27T18:54:50Z Using data directory /root/.bitcoin
2025-11-27T19:54:50+01:00  2025-11-27T18:54:50Z Config file: /root/.bitcoin/bitcoin.conf
2025-11-27T19:54:50+01:00  2025-11-27T18:54:50Z Config file arg: avoidpartialspends="1"
2025-11-27T19:54:50+01:00  2025-11-27T18:54:50Z Config file arg: bind="0.0.0.0:8333"

These logs show Bitcoin Core failing to read from the drive.

Perhaps an issue with the virtual drive, or with the underlying hardware… or maybe with how it’s connected?

As for that last question, and your “Maybe it’s too much I/O?”: that doesn’t seem likely. Normally, when a drive is merely slow, Bitcoin Core times out rather than crashes. This looks more like the drive being there only intermittently. The kernel logs might prove or disprove that.

I will check later today. From what I got from AI, I can check those logs (as you suggested) using dmesg, right?

It’s worth mentioning that I already had the blockchain synced up to a certain point on a different machine, and I rsynced the {blocks,chainstate} folders as suggested in other guides here.
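
Roughly, the copy looked like this (a sketch only: oldhost and the destination path are placeholders for my setup, and bitcoind should be stopped on both machines while copying):

# copy the block data and the UTXO set from the already-synced host
rsync -av --progress oldhost:~/.bitcoin/blocks/ /path/to/new/datadir/blocks/
rsync -av --progress oldhost:~/.bitcoin/chainstate/ /path/to/new/datadir/chainstate/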

I will also try to disable connections from other peers. I noticed everything runs OK until Electrs reaches around 30% synchronization. Is there a way to make Core a bit “slower” so it demands a bit less from the NVMe?
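
For example, I was thinking of something like this in bitcoin.conf (just guesses on my part; on StartOS some of these might have to be set through the service config instead):

# bitcoin.conf - example values only
# fewer peers means less network and disk churn
maxconnections=10
# larger UTXO cache (in MiB) means fewer flushes to disk
dbcache=1000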

Also, what are the respective files for block 00000000000000000000be4757ac50220a9db461b57083cfef9bbc2a59fe677a? Maybe that file was corrupted?
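
From what I understand, the error itself hints at the file: FlatFilePos(nFile=5193, ...) should correspond to blocks/blk05193.dat inside the data directory (block files are named blkNNNNN.dat). If that’s right, I could at least check that the file exists and looks sane, e.g.:

# the block file and its undo counterpart for file number 5193
ls -lh /root/.bitcoin/blocks/blk05193.dat
ls -lh /root/.bitcoin/blocks/rev05193.dat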

Sorry for asking so many questions in a single post. I can only check later; then I can try several things.

You don’t need AI to go to System → Kernel logs.

But you might use it to get the right flags for sudo journalctl -k if you’ve set up SSH following the StartOS setup guides.
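
For example, something like this over SSH should surface any storage complaints around the time of the crash (these are standard journalctl/dmesg options, nothing StartOS-specific):

# kernel messages from the last hour, filtered for storage errors
sudo journalctl -k --since "1 hour ago" | grep -iE "nvme|i/o error|blk_update"
# same idea via dmesg, with human-readable timestamps
sudo dmesg --ctime | grep -iE "nvme|i/o error"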

Oh thanks, I never noticed there was an option to check the kernel logs. I will check later.

So, I started the Core service alone and kept an eye on the logs. Apparently Core tries to do something after it reaches the last block. You can see from the logs what happened once the tip was block 925584.

Core crash logs:

2025-11-28T17:28:19+00:00  2025-11-28T17:28:19Z UpdateTip: new best=0000000000000000000163f6282dbe43e89374aa36cfd68160adff2d597af24c height=925583 version=0x25d36000 log2_work=95.960928 tx=1276542067 date='2025-11-28T17:18:18Z' progress=0.999997 cache=98.5MiB(720237txo)
2025-11-28T17:28:19+00:00  2025-11-28T17:28:19Z UpdateTip: new best=000000000000000000009a6c2d1097c67e29e44d663b67b9428b90c03bc18b85 height=925584 version=0x20018000 log2_work=95.960940 tx=1276546423 date='2025-11-28T17:21:52Z' progress=0.999998 cache=99.2MiB(725089txo)
2025-11-28T17:28:26+00:00  2025-11-28T17:28:26Z Syncing basic block filter index with block chain from height 919244
2025-11-28T17:28:27+00:00  2025-11-28T17:28:27Z [error] ReadBlockFromDisk: Deserialize or I/O error - AutoFile::read: end of file: iostream error at FlatFilePos(nFile=5193, nPos=38803503)
2025-11-28T17:28:27+00:00  2025-11-28T17:28:27Z [error] A fatal internal error occurred, see debug.log for details: Sync: Failed to read block 00000000000000000000be4757ac50220a9db461b57083cfef9bbc2a59fe677a from disk
2025-11-28T17:28:27+00:00  Error: A fatal internal error occurred, see debug.log for details: Sync: Failed to read block 00000000000000000000be4757ac50220a9db461b57083cfef9bbc2a59fe677a from disk
2025-11-28T17:28:27+00:00  2025-11-28T17:28:27Z tor: Thread interrupt
2025-11-28T17:28:27+00:00  2025-11-28T17:28:27Z addcon thread exit
2025-11-28T17:28:27+00:00  2025-11-28T17:28:27Z opencon thread exit
2025-11-28T17:28:27+00:00  2025-11-28T17:28:27Z Shutdown: In progress...
2025-11-28T17:28:27+00:00  2025-11-28T17:28:27Z torcontrol thread exit
2025-11-28T17:28:28+00:00  2025-11-28T17:28:28Z basic block filter index thread exit
2025-11-28T17:28:28+00:00  2025-11-28T17:28:28Z net thread exit
2025-11-28T17:28:28+00:00  2025-11-28T17:28:28Z msghand thread exit
2025-11-28T17:28:28+00:00  2025-11-28T17:28:28Z DumpAnchors: Flush 2 outbound block-relay-only peer addresses to anchors.dat started
2025-11-28T17:28:28+00:00  2025-11-28T17:28:28Z DumpAnchors: Flush 2 outbound block-relay-only peer addresses to anchors.dat completed (0.00s)
2025-11-28T17:28:28+00:00  2025-11-28T17:28:28Z scheduler thread exit
2025-11-28T17:28:28+00:00  2025-11-28T17:28:28Z Writing 939 mempool transactions to file...
2025-11-28T17:28:28+00:00  2025-11-28T17:28:28Z Writing 0 unbroadcast transactions to file.
2025-11-28T17:28:28+00:00  2025-11-28T17:28:28Z Dumped mempool: 0.000s to copy, 0.004s to dump, 391188 bytes dumped to file
2025-11-28T17:28:28+00:00  2025-11-28T17:28:28Z Flushed fee estimates to fee_estimates.dat.
2025-11-28T17:28:29+00:00  2025-11-28T17:28:29Z Shutdown: done

OS logs:

2025-11-28T17:28:17+00:00  2025-11-28T17:28:17.347933Z DEBUG run_main:check: startos::manager::health: Checking health of bitcoind
2025-11-28T17:28:29+00:00  2025-11-28T17:28:29.201416Z ERROR startos::manager::manager_container: The service bitcoind has crashed with the following exit code: 1
2025-11-28T17:28:49+00:00  2025-11-28T17:28:49.366249Z DEBUG run_main:check: startos::manager::health: Checking health of bitcoind

Kernel logs:

2025-11-28T17:23:47+00:00  IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
2025-11-28T17:23:47+00:00  IPv6: ADDRCONF(NETDEV_CHANGE): veth1: link becomes ready
2025-11-28T17:23:47+00:00  br-start9: port 2(veth1) entered blocking state
2025-11-28T17:23:47+00:00  br-start9: port 2(veth1) entered forwarding state
2025-11-28T17:28:29+00:00  br-start9: port 2(veth1) entered disabled state
2025-11-28T17:28:29+00:00  device veth1 left promiscuous mode
2025-11-28T17:28:29+00:00  br-start9: port 2(veth1) entered disabled state
2025-11-28T17:28:44+00:00  br-start9: port 2(veth1) entered blocking state
2025-11-28T17:28:44+00:00  br-start9: port 2(veth1) entered disabled state
2025-11-28T17:28:44+00:00  device veth1 entered promiscuous mode
2025-11-28T17:28:44+00:00  IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
2025-11-28T17:28:44+00:00  IPv6: ADDRCONF(NETDEV_CHANGE): veth1: link becomes ready
2025-11-28T17:28:44+00:00  br-start9: port 2(veth1) entered blocking state
2025-11-28T17:28:44+00:00  br-start9: port 2(veth1) entered forwarding state
2025-11-28T17:29:08+00:00  br-start9: port 2(veth1) entered disabled state
2025-11-28T17:29:08+00:00  device veth1 left promiscuous mode

Looks like a bad drive to me. Since it’s a virtual drive, there’s no way to tell from these logs alone whether it’s an underlying hardware error or a VM software error.
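
One way to narrow it down is to check from the CachyOS host rather than from inside the VM, e.g. the drive’s SMART data and the host’s own kernel log (assuming smartmontools is installed and the NVMe shows up as /dev/nvme0n1 there):

# NVMe health: media errors, error log entries, available spare
sudo smartctl -a /dev/nvme0n1
# host-side kernel messages about the drive
sudo dmesg | grep -iE "nvme|i/o error"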

OK, the SSD is new and from a good brand (maybe I am just unlucky), so I have searched a bit, and I think I need a better understanding of how Core works to solve this.

So, like I said before, I copied the blockchain (and chainstate) from another host at height 925225.

Checking the original host from the console, this is what I get:
[screenshot: getblock fails with the same read error on the original host]

So, for some reason, the block (919335) that the s9 instance says it can’t read from disk also can’t be read on the original bitcoind host. But apparently that host runs without problems. On the original host, if I query a different block, it works:

08:54:15 getblockhash 925374
08:54:15 00000000000000000000e4b64b0c30489c91634959c27b3270fb4adec35dd7a8
08:54:25 getblock 00000000000000000000e4b64b0c30489c91634959c27b3270fb4adec35dd7a8
08:54:25 {
  "hash": "00000000000000000000e4b64b0c30489c91634959c27b3270fb4adec35dd7a8",
  "confirmations": 1,
  "height": 925374,

So I think the problem originated on the original host, not with the SSD or the VM, since I can “kind of” reproduce the issue there. But for some reason the original host does not keep restarting.

Checking the available commands in the console, I tried to fetch the block again from peers, and here is what I got: