Show HN: File0 – An easier way to manage files in serverless apps (file0.dev)
212 points by davidkarolyi on May 28, 2024 | 154 comments
C'mon... I just want to upload a file and make it public on the internet. Now you tell me I need to master bucket policies, ACL, CORS, multipart uploads, content headers, CDN, presigned URLs, and a bunch of other crap?

I can't be asked, so I built FILE0. It's for storing files but you don't need to complete an online course.



It's well executed from a UX and Marketing point of view, but what it needs is some copy dedicated to security of files, privacy of files, why I (and my customers) should trust your system etc.


Fair.


There's a surprisingly large portion of Hacker News that can't grasp the concept of positioning.


It’s a bit sad to position yourself as a magnet for the gullible though.


I don't know that not being able to put two and two together on an R2 wrapper makes you gullible?


This seems like a great idea.

I've come to think the UX requirements of enterprise and indie dev / start up customers are fundamentally incompatible in the cloud.

As an enterprise I want to be able to set restrictions at the org, project, team and resource level for various compliance rules. And I want these rules to be restrictive by default (e.g. block public access).

However as an individual developer the mere existence of those options makes getting started and debugging things painful. It's why I'll never again use Azure for a side project, although it'd be my first choice for a government project.


When you have highly granular controls, you need to create intermediary common sets or "profiles" of those controls at decreasing levels of granularity for the more common use cases: at least a few of the most common sets where I can simply choose a single setting to affect everything with sane defaults.

Consumer cameras remind me of this, from when they started mimicking more and more sophisticated feature sets of professional cameras. Sure, I want that fantastic sensor, great lens setup and the ability to take both narrow and wide depth-of-field photos, but I don't know, or rather don't want to be bothered, dealing with adjusting all my f-stops, focal points, etc. I need automation where possible in these cases to give me sane results.

In that case it's a bit more complex because you have engineering problems for those intermediary steps (e.g. autofocus and making sure the image results meet criteria), whereas for software profiles you can often get away with just choosing option sets. Some options still can't be hand-waved away, though, and require some degree of looking at other things to inform them, or being set manually, even for pure software controls.


I understand the different need being espoused here, but I think it paints with an overly broad brush.

My company is what people would colloquially refer to as a startup, although it is by most formal definitions an established company (more than five years old, profitable and self-funded, no external money, defined product and stable customer base).

Under the US SBA classification we are a small business by revenue, and we have three people.

Little frustrates us more than sophisticated features being locked behind enterprise packages, because we tend to select toward enterprise feature sets by default. We are small, but we make it a priority to do business in ways that most small businesses don’t focus on. This has been wonderfully successful for us because we have the kind of customers who will ask “how do you manage backups, and why should I trust you to do that?”

We can answer those sorts of questions with the same kind of robust architecture people expect from much larger providers. We can point to adhering to the 3-2-1 rule with three different copies of customer data (one production, two backups) maintained in three locations managed by two (legally distinct) clouds/data centers, each at least 500 miles apart, two of which are resistant to ransomware attacks (the backups in S3), all of which are protected with hardware MFA. We have a fourth copy as a failsafe in the form of rolling VM backups made every 24 hours, saved for approximately 7 days.

That is in large part due to using S3.

Sophisticated feature sets are extremely valuable for us even if we aren’t an enterprise. They allow us to put our money where our mouth is.


It sounds like you want to provide an enterprise-grade service, why wouldn’t you expect to pay enterprise-grade prices to your suppliers?


In short: SSO is a core security requirement for any company [customer] with more than five employees.

SaaS vendors appear not to have received this message, however. SSO is often only available as part of “Enterprise” pricing, which assumes either a huge number of users (minimum seat count) or is force-bundled with other “Enterprise” features which may have no value to the company using the software.

If companies claim to “take your security seriously”, then SSO should be available as a feature that is either:

- part of the core product, or

- an optional paid extra for a reasonable delta, or

- attached to a price tier, but with a reasonably small gap between the non-SSO tier and SSO tiers.

https://sso.tax/


That’s exactly it.

We have found that a lot of the thinking behind locking feature flags behind enterprise pricing is that there's a perception that providing those features always comes with an increased support load, or that you only need these features if you have lots of money to spend anyway. Neither has proven true for us.

Sometimes enterprise pricing is there to offset the costs of, and somewhat conceal, that lack of focus on those features. It's exceedingly ridiculous in 2024, for example, to have to email a support contact the SAML certificate to set up SSO. (In our case, we run away from those kinds of providers anyway.)

In direct reply to u/contrast: Of course there are some areas where we purchase the enterprise option because it’s the only thing available (our ERP for example), but that’s becoming rarer than it used to be. Where it becomes a deal breaker we usually find that the competition is happy to have us. Alternatively we make our own solution variously on platform agnostic primitives like S3 (or S3 API-compatible options), as a custom app in our ERP, or by using (and/or sponsoring) FOSS upstreams for commercialized source-available products. Being a customer that typically doesn’t need to talk to sales or support seems to make us a more profitable customer, and there can sometimes be room to negotiate there.

Edit to add: We don’t necessarily position ourselves as an enterprise grade provider. We tend to avoid engagements like that purposefully. Rather, we position ourselves as a trustworthy provider that takes their work seriously. We don’t find enterprise branding particularly helpful, and we aren’t oriented toward a sales culture or pushing to grow the business every single quarter. We prefer to simply do a good job and earn the trust our customers place in us. That does mean we need to operate with an enterprise grade focus in some areas, but that doesn’t mean we can or want to pay enterprise grade prices for every single thing we need.

We target mainly small businesses. Many of our customers want something different from the frequently not-even-bargain-basement service they had before. For example, we manage many customer domain names. Many of our customers have been burned in the past by web-designer sole props saying "yes" to any business that comes at them, but forgetting or not knowing to do things like annual WHOIS contact reviews, properly offboarding resold accounts, implementing strong MFA, staying on top of things like the recent DMARC changes, etc. These businesses deserve top-notch service just as much as an enterprise, so we strive to do that for them. Unfortunately, rendering that service frequently requires tools or features presumed to be desired or needed only by large enterprises.


Or you need to have enough experience with any IaC tech that setting up a bucket with proper permissioning is a breeze.

Once you get your own little Pulumi/Terraform library, these things really stop being a pain.


Isn't that why Heroku exists?


Yeah - Heroku, Firebase, Supabase etc all target these kinds of smaller customers.

The nice thing about this service is that it seems to be public first. Which reduces the user effort involved in managing a permissions layer on top of storage.


But you don't need to master bucket policies, ACL, CORS, multipart uploads, content headers, CDN, or pre-signed URLs for what this is doing. There's a bit more boilerplate to set everything to the "I don't care whatever" settings, but that's because the "I don't care whatever" approach is usually not what you want for anything serious.

I'm not sure how to use this from my Java or Rust projects. I also don't see any API docs so I don't know how to write a wrapper. I guess it's a project for and by Javascript developers.

I can't find anything about egress limits, file size limits, how incomplete transfers are handled, in what country the data is stored, and what happens when you try to create a file that already exists. Maybe that info is behind the login wall?


The webpage is a little bare-bones, apologies. The setup instructions and tutorial appear when you create an account.

At the moment the client package is only for js/ts devs. The package is based on HTTP api calls so it shouldn't be a huge issue porting it to other languages, but obv it's challenging without public HTTP specs. If you're into implementing a wrapper I'd be happy to assist and share those details.

About costs: there is no information because there is no egress fee. FILE0 is built on top of R2. They don't charge for egress, so neither do you. The file size limit is a 5GB soft limit. This is set as a sensible default but can be increased on demand.

The main location is in us-east, and it's replicated around the globe. If the file already exists, it is overwritten. Yes, all the setup guides and API-specific tutorials are visible in your dashboard after signup.


Assuming there won't be any egress fees forever on Cloudflare R2 is a bit risky IMHO, if you read stuff like [0]. Especially if you build a product on top of it.

[0]: https://news.ycombinator.com/item?id=40481808


I was also reading this the other day, but right now they have the best offering on the market to serve as a base for FILE0. There are also alternatives out there, in case we get slapped with a 120k "offer".

The first step is to get to a scale where you piss off the Cloudflare sales team; FILE0 is far from that. If that ever becomes the case we can think about solutions, but this isn't a good enough reason not to use them, and their free egress, while we can.


> If the file already exists, it is overwritten.

So any one of my users could overwrite or delete my other users' files? Seems like this is not really thought through.


But you... control the file names... You can overwrite contents in most other file systems easily.


But this library is supposed to also be a client-side library, right? I think as soon as you start doing all of the auth checks for CRUD, etc. it becomes almost as complex as the alternatives.

If the point is just a "everyone can do everything" bucket then that isn't too hard on any of the current blob storage providers.


In the backend you control everything. You can write whichever file you want, and you're authenticated via a secret key that's in your env variables.

In the frontend you need to get a file-scoped token from the server.

Server:

    import { f0 } from 'file0';
    const token = await f0.createToken('myfile.png');

You can send this token to the client and use it like this:

    import { f0 } from 'file0';
    await f0.useToken(token).set(myfileblob);
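
Putting the two halves together, a rough sketch of the whole flow (the Express server, route name and auth check are just illustration, not something file0 ships; only createToken/useToken come from the package):

    // server.ts – hypothetical Express endpoint
    import express from 'express';
    import { f0 } from 'file0';

    const app = express();

    app.post('/upload-token', async (_req, res) => {
      // your own AuthN/AuthZ check goes here (session, JWT, whatever you use)
      const token = await f0.createToken('avatar.png'); // token is scoped to this one file name
      res.json({ token });
    });

    app.listen(3000);

    // client.ts – ask your backend for a token, then upload directly
    import { f0 } from 'file0';

    const { token } = await (await fetch('/upload-token', { method: 'POST' })).json();
    await f0.useToken(token).set(myfileblob);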

The docs are in the dashboard only after account creation atm. Public docs on the way.


I feel like the site is somewhat deceptive then... Using phrases like "Stop reading docs. Start shipping.", "As easy as using the localStorage." "Just 3 simple steps." implies something else.

What you are saying is that for actual usage I would need to

---

1. Read the docs for what access you provide by default (anonymous access, etc)

2. Build a backend api endpoint to do all AuthN/AuthZ checks, call your library to generate a token and then return that

3. (On the frontend) Make an API request to my backend, get the token. Call your library with the token to upload the file

4. (maybe think about revoking that token to disallow overwriting the file with the same token)

5. In other clients use your library to retrieve the file? Do I need to build a backend endpoint for tokens here too? If not do you have a way to handle non-public files?

---

My guess would be that whenever this service is used for real we actually need to deal with all of the details it supposedly abstracted away.

The hard part of blob storage has never been storage, it's all the parts that we imply when we say "blob storage". AuthN, AuthZ, permissions, versioning, backups, querying, partial updates, etc. etc. And for most "simple" use-cases you need one or more of those.

I'm not saying you could have made any of these parts any easier, but I think you pitch them as easier than they can be.


I recently had to set up an S3 bucket and ran into all the issues the OP mentioned. I remember when S3 was simple and just worked. Now the default setup flow is advanced enterprise secure web scale.


As somebody who's spent over a decade making interactions with filesystems easier, I really understand why somebody would be tired. I originally made Flysystem for PHP to reduce the consumer-end complexity of using many types of filesystems (S3, FTP, SFTP, GCS, GridFS). I've recently made the move towards the TypeScript ecosystem, for which I've built https://flystorage.dev (a TS equivalent of Flysystem). Looks like this could be an easy adapter to include. Will put it on my research list, thanks for sharing!


Awesome product! Let me know if u need help.


Webpage looks great. Love the simple design with code examples front and center. I don't think it's fair to call the pricing transparent while hiding it behind a sales gate. API looks nice; maybe a public page with API documentation would help. npm package is lightweight, which is refreshing to see. Backend seems to be Cloudflare R2.


Cheers. The long-term plan for pricing is to bill only by storage tiers. The only reason there are no tiers yet is that there are no customers with that need. But I hear you. I'm also sick of seat-based, bandwidth and the many other metrics providers charge us for.

Public API documentation is something many pointed out, so I'm considering adding some form of it. Appreciated.

The npm package is meant to be portable. It has zero dependencies and only uses APIs that are widely available in any runtime.

And yes, you nailed it. It's backed by R2 and CF Workers.


These comparisons are really unfair. For example, there's no reason you have to use a bucket policy in the SDK (I never have).

Unless it's S3 compatible it's going to be a gargantuan task to make this successful as _everything_ uses the S3 API.


Perhaps, but AWS certainly doesn't do a very good job of highlighting the simplest way to use it.

This is the user guide: https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcom...


R2 is also a solution to much of this.

But best of luck to OP!


R2 also makes it hard to figure out how to simply upload a file.


In both S3 and R2, it would be relatively easy to create a client/overlay that has a similar interface to TFA.


The syntax is a bit obtuse, but you set up creds and then:

    wrangler r2 object put test-bucket/file --file=file


Yeah, but now try uploading a folder. They'll just refer you to their S3 API. Also, that's 4 levels of commands, it's more than my mind can easily remember.

    aws s3 sync [source] [dest]


Perhaps arsed rather than asked


This was my catch phrase for years, and today I learned I used it wrong all the time.


Reminiscent of Diane Morgan's Mandy -- "I'm at my lowest egg"


Interesting, since such plausible mishearings are often called 'eggcorns' (https://en.m.wikipedia.org/wiki/Eggcorn)


I never knew that, thanks -- includes one of my favourites, "damp squid", which I've heard for real ...


What's that supposed to be?


Ebb.


I thought this was an exclusively Hiberno twist, maybe not?


It suffers the usual effect seen in, e.g., Wayland. You think the problem is simple and make a new protocol that's simple. Then you find complexity in the problem and solve it with complexity in the solution. Then your protocol becomes as complex as the one you're replacing. There are reasons S3 does all these things.

ACLs, for example? How do you control who can access a file you uploaded?

Also, it costs about 20 times as much as Backblaze B2.


If you need granular control, you can use S3. If you need the cheapest option out there you can use Backblaze. If your application is bandwidth heavy, you can use R2.

If you're a busy indie hacker who just wants to be up and running with file storage in under 5 mins and forget the whole thing even exists, you can use FILE0.

S3 is stuffed with features for a reason. It's the backbone of the internet so it needs to cover 100% of the use cases and more.


We really wanted Backblaze to work. However, there's not a single week it doesn't have downtime at least once or twice. It's really really not ready for production usage (trust me - we've tried in production and had to revert back to S3 after two major outages).

Willing to try wasabi next time.


I guess that's when you switch to a more robust solution. If all you want is to host a couple of gigs at most because you're just starting then you're optimizing for speed and simplicity.


Is that really true about Wayland? I'm no expert on the subject matter, just a regular Linux user. From skimming the discussions, however, Wayland is touted as a simpler alternative to X because it comes without 25 years of baggage and bolted on things that turned X11 into a complex beast. Also, the security model of Wayland is often praised.

All in all, I had the impression that while a lot of things are WIP, Wayland had taken a more holistic approach because it was able to learn from X11's mistakes. Has Wayland really developed so much unexpected complexity that it's now as much a part of the problem as it is a part of the solution?


Yes, it's true about Wayland. It started as a "let's just send pixels to the screen and remove the rest" project, and then it discovered why the rest existed, and reimplemented much of it in more annoying, or at least equally annoying, ways.


Wayland is simple because it lets someone else do the heavy lifting (the compositor).


If you are happy to run everything via the backend it is probably ok (no worse than a typical app with an app-level DB login that has access to all rows and tables).

With async programming there is little load in forwarding it on, and that'll be fine at the low scale most of us run at.

Probably a great option for profile pics to avoid upgrading your postgres instance to a costly one.


Hmm, couldn't find documentation, so I tried looking for the source to see how auth worked.

Package has no README https://www.npmjs.com/package/file0 nor links to a repo

Searching Github doesn't give anything (nor any public projects using it).

Installing the package from NPM just gives you a completely minified JS that's pretty difficult to understand https://www.npmjs.com/package/file0?activeTab=code. No need to publish minified JS to NPM imho. Actually reading/stepping through dependencies is always exceptionally valuable.


This is on me. Right now all the instructions, setup guide and code snippets live in your dashboard and appear after signup. You're not the first one pointing out the lack of public docs, and it's a legit need to be able to take a look at and understand a tool before you create an account.


I personally don't see the value of $2.50 (S3) versus $12 (you) for 100 GB. Especially for file storage: who knows when you're going to close up shop and take my data with you?

I should be able to store my data where I want to. If you wanted to wrap a service around S3, Backblaze, or whatever, maybe sell the convenience instead of selling storage?


Note that unlike FILE0, S3 not only charges for storage, but for write requests, read requests, network bandwidth, and some less obvious niche things.

So it's not a 1-1 comparison.

If you don't see the value in this extra convenience you should stay with your current service! The goal of FILE0 is not to replace these big players, but to provide an option for folks who just want to get things up and running quickly and don't want to think about infrastructure.


Have you talked with cloudflare about this?

Let's say you become a huge hit and Megaupload or Vimeo or someone else with massive traffic (but not on the "we built our own datacenters and ISP CDN" level) starts using your service. What then? Cloudflare will call you and demand you step up to an enterprise plan or pay for premium traffic.

Your pricing model assumes another provider's pricing model, which itself assumes price negotiation when usage increases. You will be the middle party in a discussion where you really don't control either side.

This is why I actually like AWS's nickel-and-diming for every compute second and megabyte.


It isn't just developer experience.

- Can I trust you?

- Where are your company credentials?

- What's the business continuity plan?

- What's your support? Sending an email to hi@ may not work, when my application is dead because of some bug at your end.


I agree with your concerns, but wow do I miss the 1990s/2000s era of the internet. Things were so much more fun then.

Perhaps a 'mirror to S3-compatible store' feature would address these concerns though. Sure it is extra cost, but it would be a nice de-risking option for early adopters.


> I agree with your concerns, but wow do I miss the 1990s/2000s era of the internet. Things were so much more fun then.

Yeah, in one way people were much more casual and curious. But keep in mind everything back then was a monolith. You'd just upload files to your server. Some of the complications around trust come from the fact that you need microservices with custom API credentials and usage quotas in order to run leftpad at web scale.

If you zoom out and think where we are today it’s absolutely insane that uploading a file 20 years later is a closed source subscription service. Nothing against OPs project, but goddamn look at us.


Every day I am more convinced that open sourcing developer products makes more sense than ever: if a service is very critical and something breaks, you can just jump in and fix it yourself.

Nevertheless, it's nice seeing crappy AWS products getting fixed.


There is a huge continuum from scrappy startup all the way to the point where you ask these kinds of questions before even touching something new.


Yeah you addressed the elephant in the room.

Obviously it's a brand new product with zero reputation. As with every new product, at the beginning I expect bugs and disruptions, but IMO the product is simple enough to get right quickly. FILE0 is a simple layer on top of R2, so for reliability and HA it's largely dependent on their systems.

You can trust me as much as any random guy on the internet.

For a big cloud provider you will be just a number; they don't particularly care if you're happy or not. But what I can tell you is that for me you're my #1 priority. If you're happy, FILE0 survives. If not, all the effort building it goes in the bin.

What's missing from the product is the ability to share apps (aka buckets) with your co-workers. If that's a feature you can live without for now, businesses can ask for a custom offer if 100GB is not enough. But they won't be billed by seats or any of that crap. Just for usage.

That all being said, I would love to set up a call and have a personal connection, and I'm here to help with anything you might need. I'll enrich the page with more obvious contact details, because it's a legit concern I'd also have.


It seems like they’re asking you to write out your plans and what makes you trustworthy on the website. Talking to people 1:1 won’t solve that problem.

Saying “you can’t trust me” isn’t a good look for a SaaS.


> Saying “you can’t trust me” isn’t a good look for a SaaS.

It worked out for Facebook.

I’m kinda happy about that since we can now use it as an example.

But more importantly, it’s realistic. You should absolutely expect someone to walk away with all your data and zero recourse.


It’s not realistic. I can trust the many other object storage providers.


Because inevitably this is what happens.


Why should this be a whole new product and not just a library around S3 sdk?

You're mixing up server side and client side.


1. If you switch the default SDK you still need to understand S3 to some extent: at least creating buckets, bucket policies, IAM, CORS headers and a few more. The complexity of S3 comes from its architecture, not the SDK. In fact I believe AWS's v3 SDK does a good job at what it does.

2. Feature extensibility. This way FILE0 is not restricted by S3 API limitations. One example: the AWS SDK doesn't support advanced filters for files (like "ends with .png"). This is a feature FILE0 has that isn't supported by the S3 API.

3. If the goal is to provide a smooth DX, you cannot start with "Go to your aws account and create a bucket, and add this configuration to your bucket".

A package like this would be interesting but it's not what FILE0 aims to achieve.


Ends with .png is a bad heuristic though. If you want to be sure something is an image you should read it as an image, and then rewrite it as an image.


This is great! I was able to get it running in Val Town: https://www.val.town/v/stevekrouse/f0_example

We built simple blob storage in Val Town, which has the added benefit of not needing an external API key if you're using it from within Val Town: https://docs.val.town/std/blob/

The API is similar: `blob.setJSON(key, value)` and `blob.getJSON`
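
For example, roughly like this (assuming the std blob import path from the docs linked above; the key and value are made up):

    import { blob } from "https://esm.town/v/std/blob";

    // store any JSON-serializable value under a key
    await blob.setJSON("user_42", { name: "Ada" });

    // read it back later
    const user = await blob.getJSON("user_42");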


What a Rube Goldberg machine. Just use actual files in folders and stop using 'serverless'. It'll probably be a lot easier to maintain and just as performant in most use cases.


Nice! This looks pretty well thought out so maybe you have considered this already, but you should sort out how you want to handle your local laws surrounding hosting files and your culpability.

You will pretty quickly get nefarious and copyrighted material hosted on your servers, you will be asked by law enforcement for access, and you will get DMCA takedown requests and scary-sounding emails from lawyers.

Just engage a good lawyer, and check for any requirements local laws require you to fulfill pre-emptively.


Valuable feedback. Cheers!


Looking at the side-by-side code example: FILE0 does not provide authentication!? Everybody can upload files?


Guessing it reads some credentials out of environment variables. The client side example shows the server issuing a token so I guess it's only the server that has/needs the credentials.


Yeah, reading that example looks like it's basically signed URLs, and you define the permissions by deciding who can receive a signed URL on your server. Not a big deal for my use cases.


Agreed. The demo highlights "extra" AWS code for authz + authn, but doesn't explain how f0 doesn't need it... Does the server need to set an ENV? You could argue that's no different than one AWS auth method.

This looks like an interesting product, but it's missing some key technical details to woo engineers -- both how it's done and how reliable the CDN+service is.


Yeah, more or less the comments are correct, but let me clarify! FILE0 looks for an environment variable that contains your API key. This is how you authenticate yourself from your backend. When you want to use client uploads, you can use f0.createToken('myfile.png') and send that token to the frontend, where you can also import f0 and use it like this: f0.useToken(myTokenFromBackend).set(blob). In the dashboard you will find a setup guide and code snippets with all this info.


> FILE0 looks for an environment variable that contains your API key.

So is `new S3Client({})`. It's unfair to S3 to pass redundant credentials in the sample code.


If you don't need all of this, rent a VM somewhere, put a webserver on it (or use the static file support of your framework) and serve files from a directory on local storage. That's still possible and easy to do.

You do need to consider backups, you don't get high availability or anything like that and you're limited to the amount of storage you can attach to the VM. So it's not the same as S3 at all, but depending on your needs it is an entirely valid solution.

Serving files is a large space and people do very different things there. No single solution will fit all use cases and still remain simple.


Maybe I'm not a target customer of yours but I couldn't use this for now since I need to region lock to EU, which R2 has recently provided.

That said my honest thoughts are that for my company's simple use case, I set it up once in 15 mins and never really thought about it again, but I can see how someone who hasn't done it before would love this kind of DX.


And that's fine! You should keep using R2. It's amazing. Obviously for experienced seniors who have done this a million times, it's much easier to stick with S3/R2! Cheers


I was kinda hoping this would just be a simplistic API on top of actual S3 so I get the benefits of both.


Interesting you mention this. That was the first approach. The problem comes when you want to introduce features that are missing from S3, like tokens or file filtering.


I didn't realise there were extra features s3 doesn't have, I thought it was just a simplification.

Couldn't presigned URLs work like tokens? And can't you do filtering with s3 select?

It's been a while since I did anything complicated, or much at all with s3 so I am not claiming to be any authority. Just curious!


Another idea for something in this vein is caching. For small jobs, S3 and Dynamo are my typical go-tos, and while they're both "simple" it still takes some effort to get all the code in place, more effort than you usually want to dedicate to a simple cache. The most frustrating thing that can happen is an endpoint or lambda going down because you made a mistake in the damn cache layer; I'm a big believer in the cache having near-zero mental overhead.

Something to simplify a cached piece of data into 1 or 2 lines of code would be nice.
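
Something like the sketch below would cover most of my cases; the bucket name, key prefix and the choice of S3 as the backing store are just placeholders, and there's no TTL or eviction here:

    import { S3Client, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';

    const s3 = new S3Client({});        // region/credentials picked up from the environment
    const BUCKET = 'my-cache-bucket';   // hypothetical bucket

    export async function cacheSet(key: string, value: unknown): Promise<void> {
      await s3.send(new PutObjectCommand({
        Bucket: BUCKET,
        Key: `cache/${key}.json`,
        Body: JSON.stringify(value),
        ContentType: 'application/json',
      }));
    }

    export async function cacheGet<T>(key: string): Promise<T | undefined> {
      try {
        const res = await s3.send(new GetObjectCommand({ Bucket: BUCKET, Key: `cache/${key}.json` }));
        return JSON.parse(await res.Body!.transformToString()) as T;
      } catch {
        return undefined;               // treat missing keys and errors as a cache miss
      }
    }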


I could not find the docs. Does it offer a static storage URI for people to download stuff via a link? Can I update the contents of the storage URI without generating a new URI every time?


There are no docs. Once you create an account you will have a setup guide and a useful snippets section in your dashboard that contains everything you need to know about file0.

Yes. By default all your files are private and only you can access them. To make a public url you can use f0.publish('filename'); This returns a static url that you can share with anyone, and they can download the file.

This url will stay valid until the file is deleted or unpublished via f0.unpublish('filename');

If you call publish again, it will generate a new public URL.
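
In code that looks roughly like this (the file name and blob variable are just examples):

    import { f0 } from 'file0';

    await f0.set('report.pdf', myFileBlob);       // private by default
    const url = await f0.publish('report.pdf');   // static public URL, shareable with anyone
    // later, to revoke public access:
    await f0.unpublish('report.pdf');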


You tell me you don’t have bucket policies, ACL, CORS, multipart, headers, CDN, presigned URLS, and all those other absolutely necessary features, and so your alternative is a non-starter.


He’s not trying to replace S3 for you. The lack of those features means nothing to me.


How is it replacing anything then? At best the functionality on the homepage feels like it could be replicated with a few 5-line functions wrapping the AWS SDK.


You can just use less of the features in S3. You don't need to use them all. It really does work fine as simple crud storage.


The good thing about S3 is that the API is stable, so a bunch of clones, or at least workalikes, have popped up.


100% legit concern. Obviously I can't sit here and say that my new crappy product is as stable as S3. But I can say that most parts are just a thin layer on top of R2. Obviously at the beginning I expect bugs and disruptions, but the product is simple enough to get right quickly.


this is awesome! i wish i had this before i knew aws well.

workers+r2 is the ideal stack for 80% of problems now.

to everyone complaining, just use workers+r2 directly. cheaper and more flexible.

this is excellent ux around the much smaller burden of learning workers+r2. bravo.

btw darkreader breaks your css.


What is the main difference between this and an abstraction wrapper around an S3 library?


didn't see from the docs, can you track upload/download progress from the sdk?


OP here, not at the moment. Is this something you miss?


Free network transfer?

Is file0 S3 compatible?


Yes. No egress fees. It's built on top of R2; they don't charge for network transfer, so neither does FILE0.

No! I believe if you look for S3-compatible storage you'll find much better options out there than FILE0: R2, Backblaze and others.

FILE0's mission is to keep things simple for the small guys.


Another example of making easy things easy.

If I’m really sure I want objects to be public, make the bucket (or Azure blob store, etc) public and reduce the lines of code without adding another dependency.


How do you handle the costs of class A and B operations? With a 100MB storage limit, it is very possible to abuse the operation limits and incur some interesting costs.


This looks nifty but what is the story on security?

It is great to have the File0 code next to the overly verbose S3 code.

The S3 code contains a few security operations that are missing in the F0 code.

What is the story on that?


The file0 package relies on the presence of an environment variable, F0_SECRET_KEY, that you can obtain from the dashboard. The example code is fully functional and secure; the authentication happens in the background.


If you use env variables for the F0 secret you should do the same for S3? You’ve written the auth there out explicitly, but a new S3 client will pick them up from the environment as well.
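
i.e. something like this picks up AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_REGION from the environment without spelling anything out (the bucket, key and file variable are placeholders):

    import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

    const s3 = new S3Client({});   // credentials and region come from the default provider chain
    await s3.send(new PutObjectCommand({ Bucket: 'my-bucket', Key: 'image.png', Body: myFile }));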


File APIs are always irksome at the edge, e.g. when block count and latency are high, or when yielding to the UI during IO. The easy case is not where most dev time is needed.


I kinda like how familiar the API is. Is this R2 under the hood?


Yes, it is using R2 and CF workers.


> I can't be asked

Do you mean "can't be arsed" here?


Why isn’t your pricing metered? Flat rate for storage generally feels scammy and the enterprise “contact us” really confirms that for me.


Not OP but scammy is a bit harsh here. I do agree, metered would be cool, but for v1 I'm not surprised to see the pricing as such.


The OP has already stated that this is a fairly thin wrapper around R2. They’re charging 6x the R2 metered rate for a tier that most users will only consume a fraction of.

That kind of pricing seems distinctly unfair.


Like the concept. Are you using S3 to store the file but abstracting the complexities and making it easier to use?


FILE0 uses Cloudflare R2, plus a thin layer of custom logic on top of that in the form of CF Workers.

The reasons for using R2 instead of S3:

- Pricing: S3 charges for everything you do; R2 only charges for requests and storage. This enables FILE0 to charge only for storage tiers, which is a much more understandable pricing model.

- Workers: A good fit for large-scale file streaming, and the two work great together.


All fun and games until some junior dev uploads a production database with medical records to it or something ;)


What's your point exactly?


S3 is complicated on purpose so github jannies and lawyers have something to do.

This looks sick! Thanks for building this!


what a nice comment, cheers


This looks like exactly what I’ve wanted on more than one occasion. I hope it sticks around.


Good to hear! Initially I built it for myself without the intention to publish, but it's good to hear I'm not the only one who has big-cloud-provider fatigue.


I'd probably use that, but the odds of me remembering it next time I need it are approaching zero.

If I were you, I'd dedicate all your efforts to getting onto the Vercel integrations page.

That's where I would re-find it, and probably more likely your user base. (scrappy js apps that don't care more than just getting something working and released)


Where's the documentation? I just see pricing and modals trying to get me to sign up.


No docs. In your dashboard there is a "Snippets" collection, which contains all the information you need to use FILE0. It's kind of a code tutorial. If you find something fuzzy, let me know and I'll help you out.


this needs to exist. at the very least for prototyping and trying to get web apps off the ground. i want to be a beta user, will you help me get started?


of course, let me know how I can help you


R u going to list on Product Hunt? or somewhere else?


Yes!


"Secure connection failed" in firefox?


How did you end up doing key management?


The side-by-sides are disingenuous in my opinion. For example, if you create your client object by specifying the access key and ID, something is very wrong.


isn't this just a knock off of uploadthing?


ut is a little bit smarter than f0. they provide ui components and server-framework bindings for file uploads.

I feel like f0 is a more minimalistic take on file management, more comparable to Vercel Blobs than ut. I don't feel these two are comparable, though ut is a nice product if you are looking for those functionalities.


I like the part where you bulked up the S3 part unnecessarily and hid the extra stuff you needed for yours that S3 doesn’t need.

Sorry, but your code examples are a flat out lie.


I disagree. Only an import statement is missing from both examples. Which parts are missing in your opinion?


> Only an import statement is missing from both examples.

    import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'; 
Really? C'mon. It's YOUR examples on YOUR site.

Regardless, that's not even what I'm worried about.

The part where you need a key for this service and don't need any of that for S3.

Sorry, but now that means I have to handle credentials with this service, which is something I like to avoid as much as possible.

Let's put it this way.

I can spin up an AWS instance right now and do

    await s3lib.set('image.png');
and it works.

But if I do:

    await f0.set('image.png');
I also need to find a way to properly handle this extra token.

And my issue with that is this:

    const s3 = new S3Client({
      region: region,
      credentials: {
          accessKeyId: accessKeyId,
          secretAccessKey: secretAccessKey
      }
    });
You don't need that.

But here is the fun part. Here is the AWS code you need to upload a file and have it available.

    await s3.send(new PutObjectCommand({Bucket: 'my-bucket',Key: 'image.png',Body: myFile}));
And that's it. Full stop. No key to muck about with.

versus

    await f0.set('image.png', myFile);
But..., you also need to set the key, so....

    SOME_ENV_VAR_FOR_YOUR_SERVICE=howeverlongthisisyouneedtomakesureitgoessomewhere node app.js
And, honestly, we know that's not even the same thing as what I have up there for AWS.

Or...

    const f0 = new File0({secretKey: "howeverlongthisisyouneedtomakesureitgoessomewhere"})
    await f0.set('image.png', myFile);
Or...

    const config = {
        secretKey: "howeverlongthisisyouneedtomakesureitgoessomewhere"
    };
    const f0 = new File0(config);
    await f0.set('image.png', myFile);
    await f0.publish('image.png');
Listen, whatever. I'm tired of people being deceptive to try and sell products to programmers. I get it. You are after the lowest common denominator. Someone who actually doesn't know AWS, S3, or anything about Roles. Someone who could set that up in minutes. Fine, I get it. Convenience is real. Just own up to the fact that your examples are purposely bloated on one end and shamelessly thinned on the other end.

You literally add extra stuff on the AWS side and remove needed stuff on your side. To me, that's a lie.


I will kindly ignore the mental breakdown.

About the rest: for S3 you need to instantiate the client, and the only part you're correct about is that the client credentials can also be auto-detected from env vars.

Let's boil the rest down:

- For S3 you need to add 2 env vars: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. In case you don't have other AWS keys in your env for other services, these will be auto-detected by the client (which still needs to be instantiated), so you can cross off the 2 lines for credentials.

- For FILE0 you need to add 1 env var: F0_SECRET_KEY. Then import the client, which auto-detects your env:

    import { f0 } from 'file0';
    f0.set('myshit.png', myFile);

I'll let everyone be the judge of which one is simpler for them. And you should also use whichever you like more. I will sleep good at night either way and keep using file0.


I am old enough that I still remember people including myself being perplexed why s3 didn't ship with an FTP adapter. They FINALLY added the option in 2020, 14 years late.

I do think the storage "space" could use some disruption. I use s3 every day at work, and my general feeling is that the entire permission system is more complicated than 99% of apps actually need, to the point of being dangerous; I think the complexity has absolutely contributed to so many people getting it wrong and leaking data.

An alternative where the only options were public and private, exclusively set at the bucket level would be good enough for 95% of users and simple enough to rarely get wrong.


This is classic feature bloat for a product that's been around for a while. Some customer wants a really niche feature, so product adds a new option for it. It's just one new option, so the experience isn't that different. Rinse and repeat hundreds of times, and nobody stops to think of the cumulative effect on the 99% of customers who don't need those features.


I was going to say that the cloud now has to be all things to all people; it feels like a reason products cycle in and out of fashion. I'm not a heavy cloud user, but in my memory AWS was extremely simplistic back when it debuted: it was a few core services and lacked a lot of features enterprises required, which in my mind gave it a sort of "toy" quality and was one of the reasons I was a bit of a cloud skeptic. The "S" in a lot of AWS services (including one of the "S"s in S3) actually stands for simple, but clearly things have moved on a bit.


While that's true, they might have made it easier for customers if they had decided not to deprecate bucket ACLs. They are even hypocritical there, in the sense that their own product (Control Tower) uses bucket ACLs whereas they tell customers not to use them. ACLs were ultra-simple: mostly READ, WRITE, FULL_CONTROL, with very little margin for error.

Of course you still have complexity as you have the bucket owner, object owner and requester which might be 3 different entities, but still mentally easier to grasp than policies with dozens of options you need to read documentation for to understand what they are for and what the consequences of using them are.


> Rinse and repeat hundreds of times and nobody stops the think of the cumulative effect on the 99% of customers who don't need those features.

Does it cost anything to not use features you do not need?

Last time I checked, all the so-called niche features were stashed in hierarchical option lists, outside of the happy path. You need to purposely want to dig into, say, file access, bucket access metrics, storage strategies, object versioning, etc to actually get to them.


It costs time whenever a coworker has toggled the wrong setting and you must debug.

It costs time whenever some coworker asks for something impossible, and says something like "Have you really checked all the options?"


> It costs time whenever a coworker has toggled the wrong setting and you must debug.

Why do you have team members toggling settings at random at will?

If that's a real problem with your organization, you have far more pressing problems than the number of options offered by a service you consume.


Everywhere I've worked had either people who made mistakes, people who made unrealistic demands, or both.

I guess you'd be the type with unrealistic demands.


Complexity without ease of use costs. S3 kind of forces you to use those features: if all you wanted was public/private access, you're still forced to wade into S3's complex permission system.

So, to be fair, yes it costs to not use features you do not need in this case.

You literally pay by the hour to use S3, and you also pay with the hours you spend trying to understand the permissions and modify settings, so it literally costs something to not use the features you do not need. I'm not trying to make a big argument, and I'm not saying you pay much, but I answered your question: you do pay, either in time or in money.


> S3 kind of forces you to use those features.

Not really. In order to set bucket request metrics, not only do you need to explicitly set your bucket to be open to the world, but you also need to dig down into the options to explicitly enable them, along with a metrics filter for which objects you cover.

Object versioning is disabled by default and you need to go way out of your way to enable them at a specific level.

You also need to go way out of your way to set another object storage class.

None of these features are enabled by default. You need to turn them on and configure them to start using them.


The cost is usability. When there are dozens of options you don't need or care about, you have to spend time understanding what they are just to realize you don't need them.


This is true for all of Amazon's products. They enshittified everything by being beholden to never-ending customer requests for ever finer knobs. Their products have a serious "let's just bolt this on" vibe. There is no theme or reason to it.

AWS went from "just push a couple of buttons and you are running in the cloud without a dedicated sys admin" to "you are going to have to hire a team of cloud admins because no one understands all the options and costs anymore".

Compare AWS to DigitalOcean, for example - the difference in simplicity is mind-blowing.


It's just different maturity stages of different products. It's very hard not to add features when your bottom line depends on it.


There is a middle ground, however, between Amazon's "let's pile on more stuff until it's incomprehensible" and Google's "ah let's just kill the whole thing".

The latter will really get your customers torqued, the former is going to make some customers not get exactly what they want.

Maybe in the early days this would have made sense, but cloud vendor lock-in is so strong that I seriously doubt most customers even have the leverage. "Add this feature or we are walking to Linode"?

I don't think so. It just seems like a broken product-design culture, where the managers need to add features non-stop so they can make some slides and get their raises. Big-company problems, to be sure, but still dysfunctional.


Absolutely agree. The idea of 'publicly accessible' files on S3 is so convoluted, and the copy is not at all clear despite them adding all-caps in multiple places...

And then policies are an attempt (I think) at simplifying permissions to a bucket with one declaration, but in practice what I see are people copy-pasting decades-old (really!) policies from other projects because most of us just need to get a private file store up and running.

Platform exhaustion - I have to know so much about AWS to not make a huge mistake that it takes away from my actual product development work that my company expects out of me.


Standard S3 is $0.02/GB; this is $0.12/GB, so 6x the price. At least you don't pay for data transfer with this solution.

Quick alternative:

  #!/bin/bash

  if [ "$#" -ne 2 ]; then
    echo "Usage: $0 <file-to-upload> <s3-bucket-name>"
    exit 1
  fi

  FILE_TO_UPLOAD=$1
  S3_BUCKET=$2
  S3_OBJECT_NAME=$(basename "$FILE_TO_UPLOAD")

  aws s3 cp "$FILE_TO_UPLOAD" "s3://$S3_BUCKET/$S3_OBJECT_NAME" --acl public-read

  if [ $? -eq 0 ]; then
    echo "File uploaded successfully."
    PUBLIC_URL="https://$S3_BUCKET.s3.amazonaws.com/$S3_OBJECT_NAME"
    echo "The file is publicly accessible at: $PUBLIC_URL"
  else
    echo "Failed to upload file."
    exit 1
  fi


I'm sorry to be harsh, but this is not an alternative to the linked product at all. It's not available through an API, there's no Javascript SDK, there's no CDN configured, you still have to set up and manage an S3 bucket... It's not even in the same category, imo.


I have a few qualms with this.



