
But you don't need to master bucket policies, ACLs, CORS, multipart uploads, content headers, CDNs, or pre-signed URLs for what this is doing. There's a bit more boilerplate to set everything to the "I don't care whatever" settings, but that's because the "I don't care whatever" approach is usually not what you want for anything serious.

I'm not sure how to use this from my Java or Rust projects. I also don't see any API docs, so I don't know how to write a wrapper. I guess it's a project for and by JavaScript developers.

I can't find anything about egress limits, file size limits, how incomplete transfers are handled, in what country the data is stored, and what happens when you try to create a file that already exists. Maybe that info is behind the login wall?



Apologies, the webpage is a little bare-bones. The setup instructions and tutorial appear when you create an account.

At the moment the client package is only for JS/TS devs. The package is based on HTTP API calls, so porting it to other languages shouldn't be a huge issue, but obviously it's challenging without a public HTTP spec. If you're into implementing a wrapper, I'd be happy to assist and share those details.

About costs: there is no information because there is no egress fee. FILE0 is built on top of R2; Cloudflare doesn't charge for egress, so neither do we. The file size limit is a 5 GB soft limit. This is set as a sensible default but can be increased on demand.

The main location is in us-east, and it's replicated around the globe. If the file already exists, it is overwritten. Yes, all the setup guides and API-specific tutorials are visible in your dashboard after signup.


Assuming there won't be any egress fees forever on Cloudflare R2 is a bit risky IMHO if you read stuff like [0], especially if you build a product on top of it.

[0]: https://news.ycombinator.com/item?id=40481808


I was also reading that the other day, but right now they have the best offering on the market to serve as a base for FILE0. There are also alternatives out there, in case we get slapped with a 120k "offer".

The first step is to get to a scale where you piss off the Cloudflare sales team; FILE0 is far from that. If that ever happens we can think about solutions, but it wouldn't be a good enough reason to pass on them, and on the free egress, until then.


> If the file already exists, it is overwritten.

So any one of my users could overwrite or delete another user's files? Seems like this is not really thought through.


But you... control the file names... You can overwrite contents in most other file systems easily.
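Since the backend picks the object keys, cross-user collisions are avoidable by construction. A minimal sketch of one common scheme (the helper name and key format here are my own invention, not FILE0's API): prefix every key with the authenticated user's ID and a random UUID, so one user's uploads can never clobber another's.

```typescript
import { randomUUID } from "node:crypto";

// Build a per-user object key. `userId` would come from your own
// auth layer; the UUID makes repeated uploads of the same filename
// non-destructive. This is a generic pattern, not FILE0-specific.
export function makeObjectKey(userId: string, filename: string): string {
  // Strip path separators so a crafted filename can't escape the prefix.
  const safeName = filename.replace(/[/\\]/g, "_");
  return `${userId}/${randomUUID()}-${safeName}`;
}
```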


But this library is supposed to also be a client-side library, right? I think as soon as you start doing all of the auth checks for CRUD, etc. it becomes almost as complex as the alternatives.

If the point is just an "everyone can do everything" bucket, then that isn't too hard on any of the current blob storage providers.


In the backend you control everything. You can write whichever file you want, and you're authenticated via a secret key that's in your env variables.

In the frontend you need to get a file-scoped token from the server.

Server:

    import { f0 } from 'file0';
    const token = await f0.createToken('myfile.png');

You can send this token to the client and use it like this:

    import { f0 } from 'file0';
    await f0.useToken(token).set(myfileblob);

At the moment the docs are only in the dashboard, after account creation. Public docs are on the way.


I feel like the site is somewhat deceptive then... Using phrases like "Stop reading docs. Start shipping.", "As easy as using the localStorage.", and "Just 3 simple steps." implies something else.

What you are saying is that for actual usage I would need to

---

1. Read the docs for what access you provide by default (anonymous access, etc)

2. Build a backend api endpoint to do all AuthN/AuthZ checks, call your library to generate a token and then return that

3. (On the frontend) Make an API request to my backend, get the token. Call your library with the token to upload the file

4. (maybe think about revoking that token to disallow overwriting the file with the same token)

5. In other clients use your library to retrieve the file? Do I need to build a backend endpoint for tokens here too? If not do you have a way to handle non-public files?

---
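The token step in the list above can be sketched generically. FILE0's actual token format is not public, so this is only a hypothetical HMAC-based illustration of what "file-scoped" means: the server signs the file name plus an expiry, and verifies that signature before honoring an upload. Every name, the secret handling, and the token format here are assumptions.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical sketch: mint a token bound to one file name with an
// expiry, verify it server-side. Assumes file names without ':'.
const SECRET = "server-only-secret"; // would live in an env variable

export function mintToken(file: string, ttlMs = 60_000): string {
  const exp = Date.now() + ttlMs;
  const mac = createHmac("sha256", SECRET)
    .update(`${file}:${exp}`)
    .digest("hex");
  return `${file}:${exp}:${mac}`;
}

export function verifyToken(token: string, file: string): boolean {
  const [name, expStr, mac] = token.split(":");
  // Reject tokens for a different file, or expired tokens.
  if (name !== file || Date.now() > Number(expStr)) return false;
  const expected = createHmac("sha256", SECRET)
    .update(`${name}:${expStr}`)
    .digest("hex");
  return (
    mac.length === expected.length &&
    timingSafeEqual(Buffer.from(mac), Buffer.from(expected))
  );
}
```

A scheme like this also answers step 4 naturally: a short TTL limits how long a leaked token can keep overwriting the same file.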

My guess would be that whenever this service is used for real we actually need to deal with all of the details it supposedly abstracted away.

The hard part of blob storage has never been the storage; it's all the parts we imply when we say "blob storage": AuthN, AuthZ, permissions, versioning, backups, querying, partial updates, etc. And for most "simple" use-cases you need one or more of those.

I'm not saying you could have made any of these parts easier, but I think you pitch them as easier than they are.


I recently had to set up an S3 bucket and ran into all the issues the OP mentioned. I remember when S3 was simple and just worked. Now the default setup flow is advanced enterprise secure web scale.



