If an OpenAI model helped someone create a cancer cure, they wouldn't see a dime from that beneficial act. So why should they be liable if someone does something harmful with the model?
If an OpenAI model helped someone create a cancer cure, I guarantee they would try to profit as much as possible from that fact. They have even talked in the past about making partial ownership of discoveries made with AI part of the license. They would be all over that.
I'm sure if they could, they would, as would any business. That's where competition enters the equation. They can't do it because their competitors would undercut them by requiring no such conditions.
Sure they would, just like people would use the bad PR to smear OpenAI if someone did something bad with knowledge their model created. The situation is totally symmetrical and fair as it is, and my point is that expecting them to be liable is asymmetric and unfair. If they can be held liable, then they should also be able to reap the rewards in order to offset those risks.
This is what I'd expect from companies - I don't see why Facebook would get money because they helped people connect to each other who ended up developing a cancer cure, but they definitely should be held accountable for enabling a genocide. You're allowed to operate a business until you cause harm to society, then we can shut it down.
I think the big thing you would need is to see the internal emails. If there was ever a case where someone raised a concern about this possibility and it wasn't taken seriously, then they should be liable. If they just never thought about it, that could still be negligence, but if I were on a jury I'd find that more forgivable than knowing it could be a problem and deciding you aren't responsible.
> I don't see why Facebook would get money because they helped people connect to each other who ended up developing a cancer cure, but they definitely should be held accountable for enabling a genocide.
Why? What does it even mean to "enable a genocide"? Just saying something isn't an argument.
> if there was ever a case where someone raised a concern about this possibility and it wasn't taken seriously, then they should be liable.
Again, why? How is this any different from electricity as a tool, which has both beneficial and harmful uses? AI is knowledge as a utility; that's the position here.
Well, the point of capitalism (going back to Adam Smith) is that the invisible hand converts locally selfish behavior into globally good outcomes. The argument is over whether that actually emerges. So if your implication was that the human trait in question is selfishness, then yes, that is precisely the point of capitalism.
That would be a better mission statement for OpenAI at this point.