Random CPU Spikes - forced restart

Hi community,

For a few months now, my Start9 node has been randomly spiking to 100% CPU usage and staying that way until I (force) restart it. The node is not responsive during those times.
I am running the node inside Proxmox on a ThinkCentre M715q Tiny with 32 GB RAM and an AMD Ryzen 5 PRO.

I would check the logs, but the problem is that after a restart the logs are reset (I think), and while the CPU is at 100% I can’t view them…

I noticed that the spike mostly appears around midnight, so I thought it might be my router for some reason. But I don’t know if that’s the case, and if so, how to resolve it - I can’t fiddle with the router.
I added cron jobs which automatically shut down the containers 5 minutes before midnight and restart them 5 minutes after midnight. That only worked for ~1 week, after which the bitcoind container for some reason could not be restarted properly, which forced me to restart the whole machine again.
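(The kind of cron entries I mean look roughly like this - the direct podman calls and the container name are illustrative, not my exact setup:)

# stop the container 5 minutes before midnight (name is illustrative)
55 23 * * * /usr/bin/podman stop bitcoind.embassy
# start it again 5 minutes after midnight
5 0 * * * /usr/bin/podman start bitcoind.embassy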

I know this is a shot in the dark, but have any of you seen something like this, or do you have any tips on how to resolve the issue?

Regards,
Okamikun

Nobody? I would really appreciate any insights into how to resolve this.

I heard there was an issue like this on an older version. What version of StartOS are you running? Also, do you commonly leave your browser tab open with the StartOS dashboard?

Also, after a restart, when you look at the OS logs and kernel logs, even if the oldest record shown on screen is from after the restart, clicking Download in the bottom-right corner should fetch the last 10,000 records - including information from prior to the restart.

Thank you for your reply!
I am running version 0.3.5~1 with git hash 39de098461833e4c56bd3509644ddf7f1a0fc4ca.
I rarely have the browser tab of the dashboard open.

Good to know! I wasn’t aware of that. I will try it the next time the CPU spike occurs.

The CPU started going up again yesterday, but I am still able to use the instance for now.
The logs don’t show anything unusual - nothing different from before the CPU went up.
I SSHed into the node, ran htop, and saw that a few processes are constantly at high CPU%, but I don’t know what those processes are for:

Does anyone know what the root cause of this may be and how to solve it, other than by restarting?
I am a bit surprised that there are three processes with such high CPU%.
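(In case it helps anyone reading along, these are generic commands for identifying an unknown PID seen in htop - nothing StartOS-specific, and <PID> is a placeholder:)

# full command line and CPU usage for a given PID
ps -o pid,ppid,%cpu,cmd -p <PID>
# the cgroup the process belongs to, which usually reveals the container or service
cat /proc/<PID>/cgroup
# running podman containers to match against
sudo podman ps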

One of those is Firefox, which runs because on a laptop you can’t run StartOS without kiosk mode. The other is /usr/bin/conmon, which is part of running the containers. When this is spiking, what is happening in StartOS? Are you running services? Are they working?
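(One way to see which container a busy conmon belongs to is to look at its full command line, which contains the container ID, and match it against podman - a rough sketch:)

# show the full conmon command lines
ps -ww -o pid,cmd -C conmon
# list containers with untruncated IDs to match against
sudo podman ps --no-trunc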

The fact that your DIY device is a laptop could also mean…

  1. Your server may not actually be turning off; it might just be an issue with kiosk mode, which isn’t designed to run continuously. Are you sure the laptop is powered off? Can you connect to it from the LAN when it’s “off”?

  2. Your laptop may just be shutting down due to BIOS settings (any kind of advanced power management, etc.) or simply because of drivers (StartOS tries to support DIY hardware, but it can’t promise to support everything).

What makes you sure that the server powering off is related to the CPU spikes?

Thank you for your reply.

When the CPU spiked last time I could still use StartOS normally.
But when it happened previously it wouldn’t respond at all and I also couldn’t SSH into it.
Btw, I am running StartOS in Proxmox on a ThinkCentre machine.

Maybe there was a misunderstanding, but I didn’t say anything about powering the server down. I am not shutting down the machine at any point. When the CPU spikes, I restart it, which makes the spike go away.

I will check the BIOS settings.

Just now I got another full freeze where I couldn’t even SSH into the machine anymore. I checked the logs but I didn’t find anything suspicious.

It was weird to see that the OS logs suddenly stopped showing anything until I force-restarted the Proxmox container in which StartOS is running:

2024-06-06T13:37:34+09:00  2024-06-06T04:37:34.425867Z DEBUG run_main:check: startos::manager::health: Checking health of electrs
2024-06-06T13:37:34+09:00  2024-06-06T04:37:34.484450Z DEBUG run_main:check: startos::manager::health: Checking health of lightning-terminal
2024-06-06T13:37:35+09:00  2024-06-06T04:37:35.997524Z DEBUG run_main:check: startos::manager::health: Checking health of bitcoind
2024-06-06T13:37:36+09:00  2024-06-06T04:37:36.084089Z DEBUG run_main:check: startos::manager::health: Checking health of mempool
2024-06-06T13:37:40+09:00  2024-06-06T04:37:40.454884Z DEBUG run_main:check: startos::manager::health: Checking health of thunderhub
2024-06-06T13:37:40+09:00  2024-06-06T04:37:40.752237Z DEBUG run_main:check: startos::manager::health: Checking health of lnd
2024-06-06T13:37:49+09:00  2024-06-06T04:37:49.702851Z DEBUG run_main:check: startos::manager::health: Checking health of lightning-terminal
2024-06-06T18:28:48+09:00  2024-06-06T09:28:48.829245Z INFO inner_main:setup_or_init:init: startos::init: Mounted Logs
2024-06-06T18:28:48+09:00  2024-06-06T09:28:48.830952Z INFO inner_main:setup_or_init:init:bind: startos::disk::mount::util: Binding /embassy-data/package-data/tmp/var to /var/tmp
2024-06-06T18:28:48+09:00  2024-06-06T09:28:48.835093Z INFO inner_main:setup_or_init:init:bind: startos::disk::mount::util: Binding /embassy-data/package-data/tmp/podman to /var/lib/containers
2024-06-06T18:28:48+09:00  2024-06-06T09:28:48.839591Z INFO inner_main:setup_or_init:init: startos::init: Mounted Docker Data
2024-06-06T18:28:49+09:00  2024-06-06T09:28:49.665994Z INFO inner_main:setup_or_init:init: startos::init: Enabling Docker QEMU Emulation
2024-06-06T18:28:50+09:00  2024-06-06T09:28:50.549417Z INFO inner_main:setup_or_init:init: startos::init: Enabled Docker QEMU Emulation
2024-06-06T18:28:50+09:00  2024-06-06T09:28:50.672852Z INFO inner_main:setup_or_init:init: startos::init: Syncronized system clock
2024-06-06T18:28:50+09:00  2024-06-06T09:28:50.724561Z INFO inner_main:setup_or_init:init: startos::init: System initialized.
2024-06-06T18:28:50+09:00  2024-06-06T09:28:50.728100Z INFO inner_main:init: startos::context::rpc: Loaded Config
2024-06-06T18:28:50+09:00  2024-06-06T09:28:50.729985Z DEBUG inner_main:init:secret_store:init_postgres:unmount: startos::disk::mount::util: Unmounting /var/lib/postgresql.
2024-06-06T18:28:50+09:00  2024-06-06T09:28:50.743850Z INFO inner_main:init:secret_store:init_postgres:bind: startos::disk::mount::util: Binding /embassy-data/main/postgresql to /var/lib/postgresql
2024-06-06T18:28:50+09:00  2024-06-06T09:28:50.761107Z INFO inner_main:init: startos::context::rpc: Opened Pg DB
2024-06-06T18:28:50+09:00  2024-06-06T09:28:50.782222Z INFO inner_main:init: startos::context::rpc: Opened PatchDB
2024-06-06T18:28:50+09:00  2024-06-06T09:28:50.808589Z INFO inner_main:init: startos::context::rpc: Initialized Net Controller
2024-06-06T18:28:50+09:00  2024-06-06T09:28:50.808611Z INFO inner_main:init: startos::context::rpc: Initialized Notification Manager
2024-06-06T18:28:51+09:00  2024-06-06T09:28:51.837594Z INFO torctl: startos::net::tor: Tor is started
2024-06-06T18:28:52+09:00  2024-06-06T09:28:52.229476Z INFO inner_main:init:cleanup_and_initialize: startos::context::rpc: Initialized Package Managers
2024-06-06T18:28:52+09:00  2024-06-06T09:28:52.745754Z INFO inner_main:init: startos::context::rpc: Cleaned up transient states
2024-06-06T18:28:52+09:00  2024-06-06T09:28:52.968495Z ERROR startos::system: Could not get initial temperature: Filesystem I/O Error: No sensors found!
2024-06-06T18:28:52+09:00  Make sure you loaded all the kernel drivers you need.
2024-06-06T18:28:52+09:00  Try sensors-detect to find out which these are.
2024-06-06T18:28:52+09:00  2024-06-06T09:28:52.968527Z DEBUG startos::system: Error { source:
2024-06-06T18:28:52+09:00  0: No sensors found!
2024-06-06T18:28:52+09:00  Make sure you loaded all the kernel drivers you need.
2024-06-06T18:28:52+09:00  Try sensors-detect to find out which these are.
2024-06-06T18:28:52+09:00  0:
2024-06-06T18:28:52+09:00  Location:
2024-06-06T18:28:52+09:00  startos/src/util/mod.rs:163
2024-06-06T18:28:52+09:00  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ SPANTRACE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
2024-06-06T18:28:52+09:00  0: startos::system::get_temp
2024-06-06T18:28:52+09:00  at startos/src/system.rs:667
2024-06-06T18:28:52+09:00  Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
2024-06-06T18:28:52+09:00  Run with RUST_BACKTRACE=full to include source snippets., kind: Filesystem, revision: None }
2024-06-06T18:28:54+09:00  2024-06-06T09:28:54.335462Z ERROR startos::manager::manager_container: The service electrs has crashed with the following exit code: 1
2024-06-06T18:28:58+09:00  2024-06-06T09:28:58.638899Z DEBUG run_main:check: startos::manager::health: Checking health of lightning-terminal
2024-06-06T18:28:59+09:00  2024-06-06T09:28:59.012373Z DEBUG run_main:check: startos::manager::health: Checking health of lnd

The kernel logs only showed entries from after the restart, even though I downloaded them, which should include the last 10,000 records - it didn’t, and the downloaded file contained far fewer than 10,000 entries.

2024-06-06T18:28:48+09:00  Received SIGTERM from PID 1 (systemd).
2024-06-06T18:28:48+09:00  Stopping systemd-journald.service - Journal Service...
2024-06-06T18:28:48+09:00  systemd-journald.service: Deactivated successfully.
2024-06-06T18:28:48+09:00  Stopped systemd-journald.service - Journal Service.
2024-06-06T18:28:48+09:00  Starting systemd-journald.service - Journal Service...
2024-06-06T18:28:48+09:00  File /var/log/journal/57b13a23a930488ead52dadb62e6b796/system.journal corrupted or uncleanly shut down, renaming and replacing.
2024-06-06T18:28:48+09:00  Started systemd-journald.service - Journal Service.
2024-06-06T18:28:49+09:00  bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
2024-06-06T18:28:49+09:00  br-start9: port 1(veth0) entered blocking state
2024-06-06T18:28:49+09:00  br-start9: port 1(veth0) entered disabled state
2024-06-06T18:28:49+09:00  device veth0 entered promiscuous mode
2024-06-06T18:28:49+09:00  br-start9: port 1(veth0) entered blocking state
2024-06-06T18:28:49+09:00  br-start9: port 1(veth0) entered forwarding state
2024-06-06T18:28:49+09:00  br-start9: port 1(veth0) entered disabled state
2024-06-06T18:28:49+09:00  IPv6: ADDRCONF(NETDEV_CHANGE): veth0: link becomes ready
2024-06-06T18:28:49+09:00  br-start9: port 1(veth0) entered blocking state
2024-06-06T18:28:49+09:00  br-start9: port 1(veth0) entered forwarding state
2024-06-06T18:28:49+09:00  podman0: port 1(veth1) entered blocking state
2024-06-06T18:28:49+09:00  podman0: port 1(veth1) entered disabled state
2024-06-06T18:28:49+09:00  device veth1 entered promiscuous mode
2024-06-06T18:28:49+09:00  podman0: port 1(veth1) entered blocking state
2024-06-06T18:28:49+09:00  podman0: port 1(veth1) entered forwarding state
2024-06-06T18:28:49+09:00  IPv6: ADDRCONF(NETDEV_CHANGE): podman0: link becomes ready
2024-06-06T18:28:49+09:00  podman0: port 1(veth1) entered disabled state
2024-06-06T18:28:49+09:00  IPv6: ADDRCONF(NETDEV_CHANGE): veth1: link becomes ready
2024-06-06T18:28:49+09:00  podman0: port 1(veth1) entered blocking state
2024-06-06T18:28:49+09:00  podman0: port 1(veth1) entered forwarding state
2024-06-06T18:28:50+09:00  podman0: port 1(veth1) entered disabled state
2024-06-06T18:28:50+09:00  device veth1 left promiscuous mode
2024-06-06T18:28:50+09:00  podman0: port 1(veth1) entered disabled state
2024-06-06T18:28:50+09:00  zram: Added device: zram0
2024-06-06T18:28:50+09:00  zram0: detected capacity change from 0 to 15036416
2024-06-06T18:28:50+09:00  Adding 7518204k swap on /dev/zram0. Priority:5 extents:1 across:7518204k SSFS
2024-06-06T18:28:52+09:00  br-start9: port 2(veth1) entered blocking state
2024-06-06T18:28:52+09:00  br-start9: port 2(veth1) entered disabled state
2024-06-06T18:28:52+09:00  device veth1 entered promiscuous mode
2024-06-06T18:28:52+09:00  IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
2024-06-06T18:28:52+09:00  IPv6: ADDRCONF(NETDEV_CHANGE): veth1: link becomes ready
2024-06-06T18:28:52+09:00  br-start9: port 2(veth1) entered blocking state
2024-06-06T18:28:52+09:00  br-start9: port 2(veth1) entered forwarding state
2024-06-06T18:28:52+09:00  br-start9: port 3(veth2) entered blocking state
2024-06-06T18:28:52+09:00  br-start9: port 3(veth2) entered disabled state
2024-06-06T18:28:52+09:00  device veth2 entered promiscuous mode
2024-06-06T18:28:52+09:00  br-start9: port 3(veth2) entered blocking state
2024-06-06T18:28:52+09:00  br-start9: port 3(veth2) entered forwarding state
2024-06-06T18:28:52+09:00  IPv6: ADDRCONF(NETDEV_CHANGE): veth2: link becomes ready
2024-06-06T18:28:53+09:00  br-start9: port 4(veth3) entered blocking state
2024-06-06T18:28:53+09:00  br-start9: port 4(veth3) entered disabled state
2024-06-06T18:28:53+09:00  device veth3 entered promiscuous mode
2024-06-06T18:28:53+09:00  br-start9: port 4(veth3) entered blocking state
2024-06-06T18:28:53+09:00  br-start9: port 4(veth3) entered forwarding state
2024-06-06T18:28:53+09:00  IPv6: ADDRCONF(NETDEV_CHANGE): veth3: link becomes ready
2024-06-06T18:28:53+09:00  br-start9: port 5(veth4) entered blocking state
2024-06-06T18:28:53+09:00  br-start9: port 5(veth4) entered disabled state
2024-06-06T18:28:53+09:00  device veth4 entered promiscuous mode
2024-06-06T18:28:53+09:00  br-start9: port 5(veth4) entered blocking state
2024-06-06T18:28:53+09:00  br-start9: port 5(veth4) entered forwarding state
2024-06-06T18:28:53+09:00  IPv6: ADDRCONF(NETDEV_CHANGE): veth4: link becomes ready
2024-06-06T18:28:53+09:00  br-start9: port 6(veth5) entered blocking state
2024-06-06T18:28:53+09:00  br-start9: port 6(veth5) entered disabled state
2024-06-06T18:28:53+09:00  device veth5 entered promiscuous mode
2024-06-06T18:28:53+09:00  br-start9: port 6(veth5) entered blocking state
2024-06-06T18:28:53+09:00  br-start9: port 6(veth5) entered forwarding state
2024-06-06T18:28:53+09:00  IPv6: ADDRCONF(NETDEV_CHANGE): veth5: link becomes ready
2024-06-06T18:28:54+09:00  br-start9: port 7(veth6) entered blocking state
2024-06-06T18:28:54+09:00  br-start9: port 7(veth6) entered disabled state
2024-06-06T18:28:54+09:00  device veth6 entered promiscuous mode
2024-06-06T18:28:54+09:00  br-start9: port 7(veth6) entered blocking state
2024-06-06T18:28:54+09:00  br-start9: port 7(veth6) entered forwarding state
2024-06-06T18:28:54+09:00  br-start9: port 7(veth6) entered disabled state
2024-06-06T18:28:54+09:00  IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
2024-06-06T18:28:54+09:00  IPv6: ADDRCONF(NETDEV_CHANGE): veth6: link becomes ready
2024-06-06T18:28:54+09:00  br-start9: port 7(veth6) entered blocking state
2024-06-06T18:28:54+09:00  br-start9: port 7(veth6) entered forwarding state
2024-06-06T18:28:54+09:00  br-start9: port 4(veth3) entered disabled state
2024-06-06T18:28:54+09:00  device veth3 left promiscuous mode
2024-06-06T18:28:54+09:00  br-start9: port 4(veth3) entered disabled state
2024-06-06T18:29:09+09:00  br-start9: port 4(veth3) entered blocking state
2024-06-06T18:29:09+09:00  br-start9: port 4(veth3) entered disabled state
2024-06-06T18:29:09+09:00  device veth3 entered promiscuous mode
2024-06-06T18:29:09+09:00  br-start9: port 4(veth3) entered blocking state
2024-06-06T18:29:09+09:00  br-start9: port 4(veth3) entered forwarding state
2024-06-06T18:29:09+09:00  IPv6: ADDRCONF(NETDEV_CHANGE): veth3: link becomes ready
2024-06-06T18:30:15+09:00  br-start9: port 6(veth5) entered disabled state
2024-06-06T18:30:15+09:00  device veth5 left promiscuous mode
2024-06-06T18:30:15+09:00  br-start9: port 6(veth5) entered disabled state
2024-06-06T18:30:30+09:00  br-start9: port 6(veth5) entered blocking state
2024-06-06T18:30:30+09:00  br-start9: port 6(veth5) entered disabled state
2024-06-06T18:30:30+09:00  device veth5 entered promiscuous mode
2024-06-06T18:30:30+09:00  IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
2024-06-06T18:30:30+09:00  IPv6: ADDRCONF(NETDEV_CHANGE): veth5: link becomes ready
2024-06-06T18:30:30+09:00  br-start9: port 6(veth5) entered blocking state
2024-06-06T18:30:30+09:00  br-start9: port 6(veth5) entered forwarding state

Here are the Tor logs before and after the restart:

2024-06-06T06:04:22+09:00  Average packaged cell fullness: 56.624%. TLS write overhead: 1%
2024-06-06T06:04:22+09:00  Heartbeat: Our onion services received 2258 v3 INTRODUCE2 cells and attempted to launch 2625 rendezvous circuits.
2024-06-06T11:08:42+09:00  Your network connection speed appears to have changed. Resetting timeout to 60000ms after 18 timeouts and 1000 buildtimes.
2024-06-06T12:04:22+09:00  Heartbeat: Tor's uptime is 1 day 12:00 hours, with 118 circuits open. I've sent 4.11 GB and received 918.05 MB. I've received 5490 connections on IPv4 and 0 on IPv6. I've made 4955 connections with IPv4 and 0 with IPv6.
2024-06-06T12:04:22+09:00  While bootstrapping, fetched this many bytes: 158230 (microdescriptor fetch)
2024-06-06T12:04:22+09:00  While not bootstrapping, fetched this many bytes: 385210 (consensus network-status fetch); 1534452 (microdescriptor fetch)
2024-06-06T12:04:22+09:00  Average packaged cell fullness: 53.852%. TLS write overhead: 1%
2024-06-06T12:04:22+09:00  Heartbeat: Our onion services received 2574 v3 INTRODUCE2 cells and attempted to launch 3045 rendezvous circuits.
2024-06-06T12:29:26+09:00  Closed 1 streams for service [scrubbed].onion for reason resolve failed. Fetch status: No more HSDir available to query.
2024-06-06T18:28:50+09:00  Starting tor@default.service - Anonymizing overlay network for TCP...
2024-06-06T18:28:50+09:00  Jun 06 09:28:50.888 [notice] Tor 0.4.8.9 running on Linux with Libevent 2.1.12-stable, OpenSSL 3.0.11, Zlib 1.2.13, Liblzma 5.4.1, Libzstd 1.5.4 and Glibc 2.36 as libc.
2024-06-06T18:28:50+09:00  Jun 06 09:28:50.888 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://support.torproject.org/faq/staying-anonymous/
2024-06-06T18:28:50+09:00  Jun 06 09:28:50.891 [notice] Read configuration file "/usr/share/tor/tor-service-defaults-torrc".
2024-06-06T18:28:50+09:00  Jun 06 09:28:50.891 [notice] Read configuration file "/etc/tor/torrc".
2024-06-06T18:28:50+09:00  Configuration was valid
2024-06-06T18:28:50+09:00  Jun 06 09:28:50.953 [notice] Tor 0.4.8.9 running on Linux with Libevent 2.1.12-stable, OpenSSL 3.0.11, Zlib 1.2.13, Liblzma 5.4.1, Libzstd 1.5.4 and Glibc 2.36 as libc.
2024-06-06T18:28:50+09:00  Jun 06 09:28:50.953 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://support.torproject.org/faq/staying-anonymous/
2024-06-06T18:28:50+09:00  Jun 06 09:28:50.953 [notice] Read configuration file "/usr/share/tor/tor-service-defaults-torrc".
2024-06-06T18:28:50+09:00  Jun 06 09:28:50.953 [notice] Read configuration file "/etc/tor/torrc".
2024-06-06T18:28:50+09:00  Jun 06 09:28:50.956 [warn] You specified a public address '0.0.0.0:9050' for SocksPort. Other people on the Internet might find your computer and use it as an open proxy. Please don't allow this unless you have a good reason.

I had missed your initial mention of Proxmox… Running in a VM provides a degree of separation from the underlying hardware, so I doubt looking in the BIOS will be as much help as I suggested. Instead you’ll need to work on your Proxmox configuration.
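(A few generic things worth checking from the Proxmox host shell - the VM ID is a placeholder:)

# confirm the VM's state
qm status <vmid>
# review the VM's configuration (assigned cores, memory, ballooning, etc.)
qm config <vmid>
# check whether something on the host itself is pegging a core
top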

sudo journalctl -k -b-1 -efa
and
sudo journalctl -b-1 -efa

Could give you a clearer idea of what happened before your setup froze.
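(For reference: -k limits output to kernel messages, -b -1 selects the previous boot, -e jumps to the end, -f follows new output, and -a shows all fields. Two related invocations that can help narrow things down:)

# confirm the previous boot is still retained in the journal
sudo journalctl --list-boots
# previous boot, warnings and more severe messages only
sudo journalctl -b -1 -p warning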

Running StartOS in a VM or on custom hardware isn’t something we officially support, but in many cases it should just work. A number of us run StartOS this way for testing and don’t have issues.

Thank you for your response!

I ran the commands but didn’t really understand the output…
sudo journalctl -k -b-1 -efa:

Jun 05 05:26:09 squeaky-lice systemd-journald[2039]: Data hash table of /var/log/journal/57b13a23a930488ead52dadb62e6b796/system.journal has a fill level at 75.0 (174764 of 233016 items, 58720256 file size, 335 bytes per hash table item), suggesting rotation.
Jun 05 05:26:09 squeaky-lice systemd-journald[2039]: /var/log/journal/57b13a23a930488ead52dadb62e6b796/system.journal: Journal header limits reached or header out-of-date, rotating.
Jun 05 16:30:02 squeaky-lice systemd-journald[2039]: Data hash table of /var/log/journal/57b13a23a930488ead52dadb62e6b796/system.journal has a fill level at 75.0 (174763 of 233016 items, 58720256 file size, 335 bytes per hash table item), suggesting rotation.
Jun 05 16:30:02 squeaky-lice systemd-journald[2039]: /var/log/journal/57b13a23a930488ead52dadb62e6b796/system.journal: Journal header limits reached or header out-of-date, rotating.
Jun 06 03:35:44 squeaky-lice systemd-journald[2039]: Data hash table of /var/log/journal/57b13a23a930488ead52dadb62e6b796/system.journal has a fill level at 75.0 (174765 of 233016 items, 58720256 file size, 335 bytes per hash table item), suggesting rotation.
Jun 06 03:35:44 squeaky-lice systemd-journald[2039]: /var/log/journal/57b13a23a930488ead52dadb62e6b796/system.journal: Journal header limits reached or header out-of-date, rotating.

sudo journalctl -b-1 -efa:

Jun 06 04:37:34 squeaky-lice startd[541]: 2024-06-06T04:37:34.425867Z DEBUG run_main:check: startos::manager::health: Checking health of electrs
Jun 06 04:37:34 squeaky-lice startd[541]: 2024-06-06T04:37:34.484450Z DEBUG run_main:check: startos::manager::health: Checking health of lightning-terminal
Jun 06 04:37:34 squeaky-lice lightning-terminal.embassy[3371]: 2024-06-06 04:37:34.546 [INF] LITD: Handling static file request: /
Jun 06 04:37:34 squeaky-lice podman[88786]: 2024-06-06 04:37:34.559338752 +0000 UTC m=+0.123711355 container exec d4cc5d664678d6e038f296bf9e62d72a91bfc4248136d7b40c61403989830254 (image=docker.io/start9/electrs/main:0.10.4, name=electrs.embassy)
Jun 06 04:37:34 squeaky-lice systemd[1]: tmp-crun.uTTcxC.mount: Deactivated successfully.
Jun 06 04:37:34 squeaky-lice podman[88802]: 2024-06-06 04:37:34.604844476 +0000 UTC m=+0.114070625 container exec 397db41359839c26521480ad18412754c771f1fcf35360b35c37e87d82368d21 (image=docker.io/start9/lightning-terminal/main:0.12.5, name=lightning-terminal.embassy)
Jun 06 04:37:34 squeaky-lice podman[88802]: 2024-06-06 04:37:34.654734205 +0000 UTC m=+0.163960404 container exec_died 397db41359839c26521480ad18412754c771f1fcf35360b35c37e87d82368d21 (image=docker.io/start9/lightning-terminal/main:0.12.5, name=lightning-terminal.embassy)
Jun 06 04:37:34 squeaky-lice podman[88785]: 2024-06-06 04:37:34.795960497 +0000 UTC m=+0.360496619 container exec d4cc5d664678d6e038f296bf9e62d72a91bfc4248136d7b40c61403989830254 (image=docker.io/start9/electrs/main:0.10.4, name=electrs.embassy)
Jun 06 04:37:35 squeaky-lice systemd[1]: tmp-crun.ndnh1E.mount: Deactivated successfully.
Jun 06 04:37:35 squeaky-lice startd[541]: 2024-06-06T04:37:35.997524Z DEBUG run_main:check: startos::manager::health: Checking health of bitcoind
Jun 06 04:37:36 squeaky-lice startd[541]: 2024-06-06T04:37:36.084089Z DEBUG run_main:check: startos::manager::health: Checking health of mempool
Jun 06 04:37:36 squeaky-lice podman[88915]: 2024-06-06 04:37:36.155513059 +0000 UTC m=+0.150294038 container exec fd13f5cf893103b17c33d1ac3703305518942ee4bfa7c61406d0a164bacc81b5 (image=docker.io/start9/bitcoind/main:27.0.0, name=bitcoind.embassy, maintainer.0=João Fonseca (@joaopaulofonseca), maintainer.1=Pedro Branco (@pedrobranco), maintainer.2=Rui Marinho (@ruimarinho), maintainer.3=Aiden McClelland (@dr-bonez))
Jun 06 04:37:36 squeaky-lice podman[88916]: 2024-06-06 04:37:36.218472604 +0000 UTC m=+0.212166317 container exec fd13f5cf893103b17c33d1ac3703305518942ee4bfa7c61406d0a164bacc81b5 (image=docker.io/start9/bitcoind/main:27.0.0, name=bitcoind.embassy, maintainer.0=João Fonseca (@joaopaulofonseca), maintainer.1=Pedro Branco (@pedrobranco), maintainer.2=Rui Marinho (@ruimarinho), maintainer.3=Aiden McClelland (@dr-bonez))
Jun 06 04:37:36 squeaky-lice podman[88915]: 2024-06-06 04:37:36.227849214 +0000 UTC m=+0.222630213 container exec_died fd13f5cf893103b17c33d1ac3703305518942ee4bfa7c61406d0a164bacc81b5 (image=docker.io/start9/bitcoind/main:27.0.0, name=bitcoind.embassy, maintainer.2=Rui Marinho (@ruimarinho), maintainer.3=Aiden McClelland (@dr-bonez), maintainer.0=João Fonseca (@joaopaulofonseca), maintainer.1=Pedro Branco (@pedrobranco))
Jun 06 04:37:36 squeaky-lice podman[88916]: 2024-06-06 04:37:36.333234156 +0000 UTC m=+0.326927909 container exec_died fd13f5cf893103b17c33d1ac3703305518942ee4bfa7c61406d0a164bacc81b5 (image=docker.io/start9/bitcoind/main:27.0.0, name=bitcoind.embassy, maintainer.2=Rui Marinho (@ruimarinho), maintainer.3=Aiden McClelland (@dr-bonez), maintainer.0=João Fonseca (@joaopaulofonseca), maintainer.1=Pedro Branco (@pedrobranco))
Jun 06 04:37:36 squeaky-lice podman[88932]: 2024-06-06 04:37:36.340240822 +0000 UTC m=+0.248754860 container exec d162f57ae43182fe19975f3ed75e49b353ed7c53f3fd529c0bd6ad5850232556 (image=docker.io/start9/mempool/main:2.5.1.1, name=mempool.embassy)
Jun 06 04:37:36 squeaky-lice podman[88785]: 2024-06-06 04:37:36.41110841 +0000 UTC m=+1.975644552 container exec_died d4cc5d664678d6e038f296bf9e62d72a91bfc4248136d7b40c61403989830254 (image=docker.io/start9/electrs/main:0.10.4, name=electrs.embassy)
Jun 06 04:37:36 squeaky-lice podman[89044]: 2024-06-06 04:37:36.429361013 +0000 UTC m=+0.081211088 container exec_died d162f57ae43182fe19975f3ed75e49b353ed7c53f3fd529c0bd6ad5850232556 (image=docker.io/start9/mempool/main:2.5.1.1, name=mempool.embassy)
Jun 06 04:37:36 squeaky-lice podman[88786]: 2024-06-06 04:37:36.480862299 +0000 UTC m=+2.045234922 container exec_died d4cc5d664678d6e038f296bf9e62d72a91bfc4248136d7b40c61403989830254 (image=docker.io/start9/electrs/main:0.10.4, name=electrs.embassy)
Jun 06 04:37:36 squeaky-lice podman[88933]: 2024-06-06 04:37:36.647840713 +0000 UTC m=+0.555810150 container exec d162f57ae43182fe19975f3ed75e49b353ed7c53f3fd529c0bd6ad5850232556 (image=docker.io/start9/mempool/main:2.5.1.1, name=mempool.embassy)
Jun 06 04:37:36 squeaky-lice podman[88932]: 2024-06-06 04:37:36.656779795 +0000 UTC m=+0.565293853 container exec_died d162f57ae43182fe19975f3ed75e49b353ed7c53f3fd529c0bd6ad5850232556 (image=docker.io/start9/mempool/main:2.5.1.1, name=mempool.embassy)
Jun 06 04:37:40 squeaky-lice startd[541]: 2024-06-06T04:37:40.454884Z DEBUG run_main:check: startos::manager::health: Checking health of thunderhub
Jun 06 04:37:40 squeaky-lice startd[541]: 2024-06-06T04:37:40.752237Z DEBUG run_main:check: startos::manager::health: Checking health of lnd
Jun 06 04:37:40 squeaky-lice podman[89100]: 2024-06-06 04:37:40.848866222 +0000 UTC m=+0.088985877 container exec 6666f242e121b83eb1524e9c5c2e3991692c3f1de855e96cb70a2bda13b6b5cb (image=docker.io/start9/lnd/main:0.17.5, name=lnd.embassy)
Jun 06 04:37:40 squeaky-lice podman[89100]: 2024-06-06 04:37:40.878664298 +0000 UTC m=+0.118783983 container exec_died 6666f242e121b83eb1524e9c5c2e3991692c3f1de855e96cb70a2bda13b6b5cb (image=docker.io/start9/lnd/main:0.17.5, name=lnd.embassy)
Jun 06 04:37:44 squeaky-lice lnd.embassy[3567]: 2024-06-06 04:37:44.986 [INF] CRTR: Processed channels=0 updates=155 nodes=0 in last 59.997894057s
Jun 06 04:37:49 squeaky-lice startd[541]: 2024-06-06T04:37:49.702851Z DEBUG run_main:check: startos::manager::health: Checking health of lightning-terminal
Jun 06 04:37:49 squeaky-lice lightning-terminal.embassy[3371]: 2024-06-06 04:37:49.912 [INF] LITD: Handling static file request: /
Jun 06 04:37:50 squeaky-lice podman[89133]: 2024-06-06 04:37:50.044926478 +0000 UTC m=+0.316250910 container exec 397db41359839c26521480ad18412754c771f1fcf35360b35c37e87d82368d21 (image=docker.io/start9/lightning-terminal/main:0.12.5, name=lightning-terminal.embassy)
Jun 06 04:37:50 squeaky-lice podman[89133]: 2024-06-06 04:37:50.138731902 +0000 UTC m=+0.410056183 container exec_died 397db41359839c26521480ad18412754c771f1fcf35360b35c37e87d82368d21 (image=docker.io/start9/lightning-terminal/main:0.12.5, name=lightning-terminal.embassy)

Looks like there’s something wrong with the bitcoin container.

I assume you’ve already tried a system rebuild, but I’d also be interested to know whether you see the same problem when Bitcoin is stopped and left stopped.
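(If it’s hard to catch in the act, a simple loop like this - run over SSH or in tmux, log path arbitrary - keeps a record of the busiest processes even if the UI becomes unreachable:)

# append a timestamped snapshot of the five busiest processes every minute
while true; do
  { date; ps -eo pid,%cpu,cmd --sort=-%cpu | head -n 6; echo; } >> /tmp/cpu-watch.log
  sleep 60
done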

Thanks, I will stop the node for some time and let you know if the same problem occurs.

After reinstalling Bitcoin Core things seemed to improve, but since today conmon has been constantly at 100% again. It seems to be the same issue as before.

/usr/bin/conmon is still constantly at 100% once the node has been up for a few days.
Any tips for what I could do about this?

This is Podman, which runs the containers. I’m not aware of a mechanism by which DIY hardware can cause a system service like this to act up. If someone came to me with this issue for the first time with no backstory, I’d tell them to rebuild the system and reflash the OS, in that order.
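(Before a full rebuild, it may be worth confirming which container the busy conmon is attached to. Podman records the conmon PID per container, so something along these lines should match it up - the inspect format string is an assumption about the output fields:)

# print each running container's name alongside its conmon PID
for c in $(sudo podman ps -q); do
  sudo podman inspect --format '{{.Name}} {{.State.ConmonPid}}' "$c"
done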

Perhaps you’ll have more luck with the next version of the OS, where we do away with Podman and move to native Linux containers.

Thank you for your answer. Alright, I will wait for the update then.