Using the docker container is a breeze - add a POSTGRES_PASSWORD env variable and you're all set. I'd be curious how it performs with a TB of data, but I'd be surprised if it outright breaks.
SQLite can be awesome for huge data sets (I've rarely found anything that can ingest data as rapidly), but as with any database it requires some specific tweaking to get the most out of it.
The biggest headache is the single-writer limitation, but every other database has that too - it's just hidden behind various abstractions. The solution to 90% of complaints against SQLite is to have a dedicated worker thread that handles all writes by itself.
I usually code a pool of workers (e.g. scrapers, analysis threads) to prepare data for inserts and then hand it off to a single write process for rapid bulk inserts. SQLite can be set up for concurrent reads (WAL mode), so it's only the writes that need this isolation.
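A rough sketch of that pattern in Python, assuming a made-up `items` table, a queue-based handoff, and fake scraper workers standing in for the real producers:

```python
import queue
import sqlite3
import threading

DB_PATH = "ingest.db"   # hypothetical database file
BATCH_SIZE = 500        # hypothetical batch size
STOP = object()         # sentinel telling the writer to finish up


def writer_loop(q: queue.Queue) -> None:
    """Single thread that owns all writes; everyone else just queues rows."""
    conn = sqlite3.connect(DB_PATH)
    conn.execute("PRAGMA journal_mode=WAL")    # readers don't block the single writer
    conn.execute("PRAGMA synchronous=NORMAL")  # common WAL-friendly tradeoff
    conn.execute("CREATE TABLE IF NOT EXISTS items (source TEXT, payload TEXT)")
    batch = []
    while True:
        row = q.get()
        if row is STOP:
            break
        batch.append(row)
        if len(batch) >= BATCH_SIZE:
            conn.executemany("INSERT INTO items VALUES (?, ?)", batch)
            conn.commit()
            batch.clear()
    if batch:  # flush whatever is left
        conn.executemany("INSERT INTO items VALUES (?, ?)", batch)
        conn.commit()
    conn.close()


def worker(name: str, q: queue.Queue) -> None:
    """Stand-in for a scraper/analysis worker: prepares rows, never touches the DB."""
    for i in range(1000):
        q.put((name, f"payload-{i}"))


if __name__ == "__main__":
    q: queue.Queue = queue.Queue(maxsize=10_000)
    writer = threading.Thread(target=writer_loop, args=(q,))
    writer.start()
    workers = [threading.Thread(target=worker, args=(f"scraper-{n}", q)) for n in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    q.put(STOP)
    writer.join()
```

Readers can open their own connections against the same file while this runs, since WAL lets reads proceed alongside the single writer.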
This is the way.
It would be nice to have a very simple API out there that handles the multiprocessing side of ETL-like pipelines and serializes everything to one write thread. There are a few that get close, but I haven't seen any that really nails it.
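Something along these lines is what I mean - this is purely a hypothetical sketch, not any real library, and every name in it is made up:

```python
# Hypothetical "simple ETL" API: parallel transform, serialized load.
from concurrent.futures import ThreadPoolExecutor
from itertools import islice
from typing import Callable, Iterable, Sequence


def run_pipeline(
    extract: Callable[[], Iterable],          # yields raw records
    transform: Callable[[object], tuple],     # fanned out across a worker pool
    load: Callable[[Sequence[tuple]], None],  # only ever called from this one thread
    workers: int = 4,
    batch_size: int = 500,
) -> None:
    """Parallelize transform, but funnel every load() call through the caller's thread."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(transform, extract())
        while True:
            batch = list(islice(results, batch_size))
            if not batch:
                break
            load(batch)  # single-threaded bulk write, e.g. executemany + commit

# Usage (all three callables are yours):
#   run_pipeline(extract=fetch_urls, transform=parse_page, load=insert_rows)
# where insert_rows would do the SQLite bulk insert from the earlier example.
```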
https://hub.docker.com/_/postgres