Roughly: The number of hours of my time that would be required to get something with even theoretically equivalent features would be sufficient to make the cost - and opportunity cost - involved seem far more reasonable.
Plus "written by cperciva and heavily battle tested by Serious Sysadmins" is a feature I couldn't recreate myself. Notice that while there was an outage, part of the reason it took a while was a conscious choice to take a much longer path to resolution than bringing up the previous server, in the name of paranoia. Paranoia about data corruption is a nice thing to have in a backup system, and something I'm happily willing to trade off uptime for.
However: For backups of bulk data then, yes, it's going to be relatively expensive. I wouldn't put e.g. my media backups on tarsnap, but "use tarsnap for your git repositories and other high value data, and something else for the rest" is both perfectly doable and an approach I suspect cperciva himself would endorse.
> notice that while there was an outage, part of the reason for it taking a while was a conscious choice to take a much longer path to resolution than bringing up the previous server in the name of paranoia. Paranoia about data corruption is a nice thing to have in a backup system and something I'm happily willing to trade-off uptime for.
As an Actual Serious Sysadmin that Actually Manages Big Systems for a Living, that screams lack of preparation more than anything else.
Yes, you should be careful, but you should also have procedures in place and know the system well enough to trust it. And the fact is that the "boring" architecture of an RDS DB, instead of that S3-database abomination thing, would just start right up if the master DB server failed.
It honestly looks like a trap many intelligent people fall into, where they turn their cool-but-ultimately-flawed mental exercise into the bedrock of the product. I don't want to use a baby's-first-database on my production servers (I'm looking at you, Lennart Poettering and journald), and I don't want my data/metadata stored on some experimental one.
Then the obvious question is "why would I use this instead of something else over S3" (e.g. rclone), to which I think the answer is ease of use (you don't need to deal with AWS yourself, encryption/deduplication/compression are handled for you, and the interface is nice), which isn't everything to everyone but is certainly useful.
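To make the comparison concrete, here's roughly what the two approaches look like on the command line. This is a sketch, not a runnable snippet: it assumes you've already registered a tarsnap key and configured an rclone S3 remote, and the bucket name is made up.

```shell
# tarsnap: encryption, deduplication and compression are built in;
# you just create a dated archive of the paths you care about
tarsnap -c -f home-$(date +%Y-%m-%d) /home/user

# rclone over S3: you bring your own bucket, IAM policy, lifecycle
# rules, and (if you want it) server-side encryption settings
rclone sync /home/user s3:my-backup-bucket/home --s3-server-side-encryption AES256
```

The one-liners look similar, but everything the rclone flags hint at (credentials, retention, encryption) is setup work tarsnap does for you.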
You need a remote service that keeps backups read-only. You're not covering attack scenarios if you just use raw object storage from your client machine.
I'd classify that under "ease of use" - you can do it with S3 yourself (your post is a pretty good explanation of the how, from a quick skim), or you can just use tarsnap and not worry about it.
You can see from my post that doing that _properly_ is quite convoluted and requires a good deal of technical skills.
So it's not just ease of use. It's actual _functionality_ to me - getting from raw object storage to a fully working, attack-resistant backup strategy, is not trivial; hence, comparing tarsnap (or rsync.net, or borgbase, or whatever) to B2 or S3 makes little to no sense.
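To illustrate one piece of what "attack-resistant" means here: the credentials on the client machine should be able to write new backup objects but never delete existing ones, so a compromised client can't destroy your history. A minimal sketch of such an IAM policy (the bucket name and ARNs are hypothetical; a real setup also needs bucket versioning or Object Lock, since `PutObject` alone can still overwrite an object):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowWritingNewBackups",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-backup-bucket",
        "arn:aws:s3:::example-backup-bucket/*"
      ]
    },
    {
      "Sid": "DenyDestructiveActions",
      "Effect": "Deny",
      "Action": [
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:PutLifecycleConfiguration"
      ],
      "Resource": [
        "arn:aws:s3:::example-backup-bucket",
        "arn:aws:s3:::example-backup-bucket/*"
      ]
    }
  ]
}
```

And that's just the storage side; you'd still need separate restore credentials, rotation, and monitoring, which is exactly the convoluted part.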
You _could_ compare it to crashplan or backblaze personal backup if you like, but IIRC those don't work for *nix systems, only for Win and Mac.
Learning to use some backup tool that does the same things sounds way better than paying 10 times more in storage costs forever.
There's also a service like rsync.net, where you can just rsync to the destination and they do the versioning and so on, for less than a tenth of the cost of tarsnap.