Help me understand the line Anthropic is drawing in the sand?
Don't get me wrong, I'm glad they are unwilling to do certain things...
but to me it also seems a little ironic that Anthropic is literally partnered with Palantir, which already mass-surveils the US. Claude was used in the operation in Venezuela.
Their line not to cross seems absurdly thin?
Or there is something mega scary that's already much worse which they were asked to do and we don't know about, I guess.
I don't understand the line either. So it's no to domestic surveillance, but all other countries are fair game? How is this an ethical stand? What sort of mental gymnastics allow Anthropic to classify this as an ethical stance?
To me all of this reads like "we don't trust our models enough yet not to cause domestic havoc (everything else is fine), and we don't trust our models enough yet not to vibe-kill people". Key word being "yet".
The whole reason this is happening is because Anthropic looked into how Claude was used in the Maduro op and found it to violate the negotiated terms of service.
Their hard lines are:
- no usage of AI to commit murder WITHOUT a human in the loop
I agree the distinction doesn't matter, but I'm not so sure "just" having a human in the loop qualifies as an ethical stand. Just because you're not pulling the trigger doesn't mean you're not culpable for the outcome.