
Anthropic is by far the most moralizing and nanny-like AI company, complete with hypocrisy (Pentagon deals) and regulatory capture/ladder-pulling (this here).


Don't worry about it: they're not well managed, and you can see it in their ops, their products, etc. They won't stick around. They're going to get ground to dust by Google and OpenAI at the high end and by the Chinese models at the low end. They'll end up in Amazon's pocket, Jeff's catch-up play in the AI war after sitting out the bidding wars.


I can see disliking deals with the Pentagon, but where's the hypocrisy? Did they say that nobody should do deals with the Pentagon?


The hypocrisy is that they constantly doom about AI existential risks, but they're also constantly training SOTA models.


That’s just politics: basically they’re saying “let us do our thing, otherwise China will win this race”.

And it’s also market segmentation: they need to separate themselves from the others, and want to be the de-facto standard when people are looking for “safe” AI.


Would you find it more agreeable for them to dismiss safety entirely?


I would expect people who doom about AI existential risks to not train cutting edge models and give them agentic freedom.


>constantly doom about ai existential risks

That's kinda their marketing. "we've tamed this hyperintelligent genie that could wipe us all out, imagine what it could do for your cold emails!"


You're asserting that cooperating with Defense is hypocrisy.

I would say it's the other way around: as recent events show, Defense is the only department everyone should be glad to collaborate with.

Or do you mean that collaborating only with the Pentagon is hypocrisy, but not with other DoDs?


department of war you mean


Always has been



