Nostr Archives
Matt 🛸1d ago
Do you necessarily need to lose uptime for security? I suppose if something requires a restart? Or do you mean if an update breaks something? Servers aren't my strongest area.
💬 2 replies

Thread context

Root: c72c9827eb6b…

Replying to: c3ca5f14f7bf…

Replies (2)

Matt 🛸1d ago
My hospital was affected by the big Microsoft update fiasco a while back, and one of the key lessons for me was how awesome it would be to stage or split updates. Like maybe don't update all machines, CTs, MRIs, etc. at the exact same time. Do half at a time or something. There are probably arguments against that, but it was hell for us. I imagine servers could have something similar if you're willing to bear the cost to uptime.
0 sats
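The "do half at a time" idea above can be sketched as a small staged-rollout script. This is a hypothetical illustration: the host names are placeholders, and the actual update command is stubbed out with echo, since the real tooling would depend on the fleet.

```shell
# Sketch of a two-batch staged rollout (hypothetical hosts; the real
# per-host update command is elided and replaced by echo).
stage_update() {
  hosts="$*"
  # Count the hosts and take half as the first wave.
  set -- $hosts
  half=$(( $# / 2 ))
  i=0
  for h in $hosts; do
    i=$((i + 1))
    if [ "$i" -le "$half" ]; then
      echo "batch-1 $h"   # first wave: update and watch for breakage
    else
      echo "batch-2 $h"   # second wave: only after batch-1 checks out
    fi
  done
}

stage_update ct-01 ct-02 mri-01 ws-01
```

The point of the split is that a bad update takes out only the first batch, leaving the second batch of machines in service while you diagnose.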
ChipTuner1d ago
Both. For example, I recently had an issue with FUSE bindfs, where using mount -t fuse.bindfs stopped working. Who knows what package caused it; no issues on any trackers. Fuck, mkay: roll the server back to where it was 2 hours ago. Go back to the staging machine, build a workaround, deploy the workaround. Upgrade again, it works. Deploy the upgrade to all machines. I've also been having new issues with podman pasta networking and kernel tap devices lately, so that's also buggering the pipeline. All of these (kernel, libc, libfuse, podman, pasta, and bindfs) had updates that fixed bugs or improved security, so upgrading was nice, but probably not strictly necessary on my network (low exploitation risk).

Also, server uptime is usually measured as service uptime. We've moved toward HA as part of the software or of the underlying machines. When that VM died it was in maintenance, which means all services had been migrated to the other nodes, so no downtime was actually seen by users, but the machine itself spent hours offline while a rollback, patch, and upgrade were made.
0 sats
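The snapshot, upgrade, smoke-test, rollback loop described above can be sketched as a small pipeline. This is a hypothetical outline, not the poster's actual tooling: the snapshot, upgrade, and rollback commands are stubs standing in for whatever the real infrastructure uses, and the smoke test is faked with an environment variable.

```shell
# Sketch of a staging pipeline: snapshot, upgrade, smoke-test, and roll
# back on failure. All commands are stubs (echo) for illustration.
snapshot() { echo "snapshot taken"; }
upgrade()  { echo "packages upgraded"; }
smoke_test() {
  # A real check might exercise the thing that broke, e.g.:
  #   mount -t fuse.bindfs /srv/data /mnt/view
  # Here we fake the result with SMOKE_OK (1 = pass, 0 = fail).
  [ "${SMOKE_OK:-1}" -eq 1 ]
}
rollback() { echo "rolled back to snapshot"; }

pipeline() {
  snapshot
  upgrade
  if smoke_test; then
    echo "deploy to fleet"   # upgrade verified on staging, push it out
  else
    rollback                 # something broke, restore the snapshot
  fi
}
```

The key design choice is that the snapshot happens before the upgrade, so a failed smoke test costs only the time since the snapshot rather than a rebuild.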