"Dump" is the proper term here, given S3's limitations (i.e. no appending to an existing object): you either create a new file per insert (very expensive) or replace the whole file on every update. So in practice it's a workable read-only replica refreshed by dumps. For reasonably small datasets this can work; beyond that you're looking at rebalancing, partitioning, etc., and you're probably better off with Parquet, Avro, etc., since at that stage you're usually introducing Spark anyway.
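A minimal sketch of the point about "no append": since S3 only supports whole-object writes, appending a record means downloading the object, concatenating, and re-uploading it all. This uses an in-memory dict as a stand-in for a bucket (the names `fake_put`, `fake_get`, `append_record` are illustrative, not a real API; with boto3 they'd correspond to `put_object`/`get_object` calls):

```python
# Fake bucket: with real S3 these would be network calls via boto3.
bucket: dict[str, bytes] = {}

def fake_put(key: str, body: bytes) -> None:
    bucket[key] = body          # whole-object write, like s3.put_object

def fake_get(key: str) -> bytes:
    return bucket[key]          # whole-object read, like s3.get_object

def append_record(key: str, record: bytes) -> int:
    """'Append' one record: download everything, concatenate, re-upload.

    Returns total bytes transferred (read + write), which grows with
    object size -- this is the cost the comment is talking about."""
    existing = fake_get(key) if key in bucket else b""
    new_body = existing + record + b"\n"
    fake_put(key, new_body)
    return len(existing) + len(new_body)

fake_put("table.csv", b"id,value\n")
t1 = append_record("table.csv", b"1,foo")
t2 = append_record("table.csv", b"2,bar")
# t2 > t1: every subsequent "append" re-transfers a bigger object.
```

Each call moves the entire object over the wire, so per-row inserts get linearly more expensive as the file grows, which is why batched dumps (or columnar formats like Parquet) win here.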