Backups: Litestream gives you streaming replication to the second.
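A minimal sketch of what that looks like in practice (the database path and bucket name here are placeholders, not anything from my actual setup):

```yaml
# litestream.yml — hypothetical paths/bucket, adjust to your environment.
# Litestream tails the SQLite WAL and ships changes to the replica continuously.
dbs:
  - path: /data/app.db
    replicas:
      - url: s3://my-backup-bucket/app.db
```

Run `litestream replicate` with that config and every write is streamed out to object storage within about a second.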
Deployment: Caddy holds open incoming connections while your app drains its current request queue and restarts. This is all sub-second and imperceptible. You can do fancier things than this with two versions of the app running on the same box if that's your thing. In my case I can also hot-patch the running app since it's on the JVM.
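A sketch of the Caddy side, assuming a single local backend (domain and port are placeholders): `lb_try_duration` is what makes restarts invisible, since Caddy retries the request until the upstream is back rather than failing it.

```caddyfile
# Hypothetical Caddyfile for zero-downtime restarts of one local app.
example.com {
    reverse_proxy localhost:8080 {
        # While the app is mid-restart, keep retrying the request
        # for up to 5s instead of returning an error immediately.
        lb_try_duration 5s
    }
}
```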
If the server's hard drive fails etc., you have a few options:
1. Spin up a new server/VPS and restore the Litestream backup (the application does this automatically on start).
2. If your data is truly colossal, keep a warm standby VPS with a snapshot of the data so Litestream has less to stream.
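The restore-on-start flow in option 1 can be sketched roughly like this (bucket, paths, and the jar name are placeholders; assumes the Litestream binary is installed):

```shell
#!/bin/sh
set -e
# Restore the latest replica if there is no local DB yet
# (both flags no-op harmlessly otherwise).
litestream restore -if-db-not-exists -if-replica-exists \
  -o /data/app.db s3://my-backup-bucket/app.db
# Start the app as a child process and keep streaming changes back to S3.
exec litestream replicate -exec "java -jar app.jar" \
  /data/app.db s3://my-backup-bucket/app.db
```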
Pretty easy to have 3 to 4 nines of availability this way (which is more than GitHub, Anthropic, etc.).
My understanding is Litestream can lose data if a crash occurs before the replication to object storage. Doesn't that make it an unfair comparison to, say, Postgres on RDS?
Last I checked, RDS uploads transaction logs for DB instances to Amazon S3 every five minutes. Litestream does it every second by default (and you can go sub-second if you want).
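Going sub-second is just a replica setting (paths and bucket here are placeholders):

```yaml
# Hypothetical litestream.yml fragment: tighten the default 1s sync interval.
dbs:
  - path: /data/app.db
    replicas:
      - url: s3://my-backup-bucket/app.db
        sync-interval: 100ms  # how often WAL changes are pushed to S3
```

The trade-off is more (and smaller) S3 PUT requests, so the interval is mostly a cost knob.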
Interesting - I had not looked deep into this before.
I suppose the difference is RDS has high nines, whereas in the Litestream case the frequency of crashes is tied to your application code and deployment process. In practice this will take more work to reach the same uptime?
> Backups, litestream gives you streaming replication to the second.
You seem terribly confused. Backups don't buy you high availability. At best, they buy you disaster recovery. If your node goes down in flames, your users don't continue to get service because you have an external HD with last week's db snapshots.
If anything backups are the key to high availability.
Streaming replication lets you spin up new nodes quickly with sub-second data loss in the event of anything happening to your server. It makes having a warm standby/failover trivial (if your dataset is large enough to warrant it).
If your backups are week-old snapshots, you have bigger problems to worry about than HA.
> If anything backups are the key to high availability.
Not really. Backups are complementary in disaster recovery. They play no role in high availability. Putting your data in cold storage plays no role in keeping your system up and handling traffic.
> Streaming replication lets you spin up new nodes (...)
You seem to be confused. Replication and backups are two entirely separate things. Replication is used to preserve consistency across a distributed system and improve fault tolerance, whereas backups just means you are able to recover the state of your system at each checkpoint. Either you're using a word while giving it a new personal meaning, or you're confusing concepts.
Depends how you do your backups. If you do them by replicating, they're both. See Litestream [1].
With SQLite this is even more obvious, as the database is just a file (or three in WAL mode), which means you can replicate not just to another machine (or any file system) but to much more resilient object storage like S3 (most cloud providers offer S3-compatible object storage).