The deduping in restic is just on the edge of acceptable for me, which makes me think I'd have trouble with a lot more data. Basically the once-a-month "prune" operation takes about 36h (to B2). I feel I could probably tune something, but it works and I don't want to touch it.
I back up around 2TB with Restic, and have also tried Borg locally. The resulting size is nearly the same. Sadly, I can't even test with Tarsnap (absurd pricing for 2TB).
Not in any way affiliated, but I'm a happy user of Scaleway's Object Storage [0] together with S3QL [1]. It's not the fastest, but they give you 75GB of storage for free, so that's a fair trade [2].
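For anyone curious, the setup is roughly like this (the bucket name and endpoint are placeholders; check the S3QL docs for the exact URL scheme your provider needs):

    # credentials live in ~/.s3ql/authinfo2
    # format the bucket once, then mount it like a regular filesystem
    mkfs.s3ql s3c://s3.fr-par.scw.cloud/my-backup-bucket
    mount.s3ql s3c://s3.fr-par.scw.cloud/my-backup-bucket /mnt/backup
    # ... point your backup tool at /mnt/backup ...
    umount.s3ql /mnt/backup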
I'm renting a $15/mo 2TB Atom machine from OVH/Kimsufi as a second target for backups.
Now that I think about it... some kind of micro-distributed backup server (throw it on a few of your machines, auto-replicate between them) would be a neat project...
Curious how much you back up, which version of restic you're running, and why you think the deduplication is borderline unacceptable. There were several major (orders of magnitude) improvements made to pruning within the past year or so, which is why I'm interested.
The max-unused percentage feature is well worth it to 80/20 the prune process and only remove the data that is easiest to prune away (i.e. don't try to squeeze small amounts of garbage out of big packs, but focus on packs that contain lots of garbage).
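In practice that looks something like this (the repo URL is a placeholder; as far as I remember --max-unused takes a percentage, a size, or "unlimited" in recent restic versions):

    # tolerate up to 10% unused data in the repo instead of repacking everything
    restic -r b2:my-bucket:restic prune --max-unused 10%
    # or cap how much data a single prune run is allowed to repack
    restic -r b2:my-bucket:restic prune --max-repack-size 50G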
In general, there's an unavoidable trade-off between creating many small packs (harder on metadata throughout the system, both inside restic and on the backing store, but more efficient to prune) versus creating big packs, which are easier on the metadata but can incur a big repack cost.
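Newer restic versions (0.14+, if I remember right) let you pick a side of that trade-off with the --pack-size option (value in MiB; the repo path here is again a placeholder):

    # bigger packs: fewer objects on the backing store and cheaper metadata,
    # but prune may have to repack more data per unit of garbage removed
    restic -r b2:my-bucket:restic backup --pack-size 64 /home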
I guess a bit more intelligent repacking could avoid some of that cost by packing together data that's likely to get pruned together.