
I don't think it's sanctimonious to say, hey, I don't want the technology I work on to be used for targeting decisions when executing people from the sky. Especially as the tech starts to play more active roles. You know governments will be quick to shift blame to the model developers when things go wrong.



> I don't want the technology I work on to be used for targeting decisions when executing people from the sky

One problem I have with this specific case of Anthropic/Claude working with the DOD is that I feel an LLM is the wrong tool for targeting decisions. Maybe, given a set of 10 targets, an LLM can assist with compiling risks/rewards and then prioritizing each of the 10 targets, but it seems like there would be much faster and better ways to do that than asking an LLM. As for target acquisition and identification, I think an LLM would be especially slow and cumbersome vs. one of the many traditional ML systems that already exist. The DOD must be after something else.


> I don't want the technology I work on to be used for targeting decisions when executing people from the sky

What do you do when the government comes to you and tells you that they do want that, and can back it up with threats such as nationalizing your technology? (See Anthropic.)

We're back to "you might not care about politics, but that won't stop politics caring about you".


I know this is a foreign concept to some, but you can have a backbone.

Challenge it in court. Move the company to a different jurisdiction. Burn everything down and refuse to comply.


> I know this is a foreign concept to some, but you can have a backbone. Challenge it in court. Move the company to a different jurisdiction. Burn everything down and refuse to comply.

Challenge in court is fine, even healthy.

Threatening to burn everything down and refuse to comply might well work: it amounts to daring Trump to a game of Russian roulette over popping the bubble that's only just managing to keep the US economy out of recession. On the basis that he TACOs a lot, I can see it working in a way it wouldn't if a sane leader were making the same actual demands for sane reasons.

Move the company to a different jurisdiction? That would have worked if AI were a few hundred people and a handful of servers, as per the classic example:

  At the height of its power, Kodak employed more than 140,000 people and was worth $28 billion. They even invented the first digital camera. But today Kodak is bankrupt, and the new face of digital photography has become Instagram. When Instagram was sold to Facebook for a billion dollars in 2012, it employed only 13 people. Where did all those jobs disappear? And what happened to the wealth that all those middle class jobs created?
- Jaron Lanier, "Who Owns the Future?", https://www.goodreads.com/work/quotes/21526102-who-owns-the-...

But (I think) now that AI needs new data centres so fast, and on such a scale, that they're being held back by grid connections and similar planning-permission limits, this isn't a viable response.

They can be burned down, but I think they can't realistically be moved at this point. That said, I guess it depends on how much Anthropic relies on their own data centres vs. using third parties, given Amazon's announced AWS sovereign cloud in Europe.



