Anthropic’s models have almost certainly gorged on an enormous amount of OSS, and if they think they can settle that debt with only six months of perks for the maintainers who’ve kept that ecosystem alive, it comes across as pretty arrogant.
It's amazing how quickly Anthropic is turning into the "bad" guys.
First we couldn't use our Claude subscription with anything but Claude Code, then the limits seemed to change every week without any communication, then they banned a bunch of people (including some prominent names). Then they complain about the Chinese distilling via their API (which I'm partly sympathetic to, but let's not pretend that Anthropic invented their training data from scratch).
Then there's this half-baked offer. Sure, it looks nice on paper, but given how incredibly valuable open source has been for them, and given their budget, it does seem a bit tight.
Uncharitably, I think this is a strategy to gorge further, especially if they select for higher-quality open source. They're embracing the best projects so they can train on the iteration patterns of the best developers, with a semi-self-correcting filter against slop built in.
Charitably, this will be great for open source software... so long as they never moat up and lock down.
I wanted to create a catchy, attention-grabbing phrase :)
This app is meant to feel like a single sheet of paper on our desk where we can write in Markdown.
Thank you! I like how it makes Zettelkasten-style organization easy. For ephe.app, though, I intentionally limited it to a single page, as you know. A deliberate choice not to expand. I'm looking forward to yours!