Both. For example, I recently had an issue with FUSE bindfs, where mount -t fuse.bindfs stopped working. Who knows what package caused it; no issues on any trackers. Fuck it, okay: roll the server back to two hours ago, go back to the staging machine, build a workaround, deploy the workaround, upgrade again, works, deploy the upgrade to all machines. I've also been having new issues with podman's pasta networking and kernel tap devices lately, so that's buggering the pipeline too. The kernel, libc, libfuse, podman, pasta, and bindfs all had updates that fixed bugs or improved security, so the upgrade was nice, but probably not strictly necessary on my network (low exploitation risk).
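For what it's worth, a workaround in that situation can be as simple as bypassing the mount(8) helper and calling bindfs directly, since mount -t fuse.bindfs just dispatches to /sbin/mount.fuse.bindfs anyway. A minimal sketch (the paths and the DRYRUN flag are illustrative, not what I actually deployed):

```shell
# Sketch only: DRYRUN=1 prints commands instead of executing them,
# since real mounts need root and an installed bindfs.
DRYRUN=1

run() {
  # In dry-run mode, echo the command instead of running it.
  if [ "$DRYRUN" = 1 ]; then echo "$@"; else "$@"; fi
}

bind_mount() {
  src=$1 dst=$2
  # Preferred path: the mount(8) helper (the thing that broke after the upgrade).
  if run mount -t fuse.bindfs "$src" "$dst"; then
    return 0
  fi
  # Fallback: invoke bindfs directly, skipping the mount.fuse.bindfs helper.
  run bindfs "$src" "$dst"
}

bind_mount /srv/data /mnt/data
```

The point is just that the fallback talks to libfuse through the bindfs binary itself, so a broken helper path doesn't take the mount down with it.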
Also, server uptime is usually measured as service uptime. We've moved toward HA, either as part of the software or as part of the underlying machines. While that VM died, it was in maintenance, which means all services had been migrated to the other nodes, so users saw no downtime; the machine itself, though, spent hours offline while a rollback, patch, and upgrade were made.