Oh, I think the problem might just be a little bit more complicated than that.
In a sense (and this is a gross exaggeration, but just to frame the concept), an ethics panel formed by Charles Manson, Ted Bundy, John Wayne Gacy, Jeffrey Dahmer and David Berkowitz would not be an improvement over nothing at all. It would be a step backwards, and the world would be worse for tolerating it.
This is not to say that any of the people involved in this particular episode are abominable monsters, far from it. But it drives home the point: whether you pick the right people, or the best people, really matters, and it makes a difference.
In some respects, I'm not sure I'd want individuals with a vested interest and enthusiasm for AI to play watchdog over appropriate, responsible behavior.
In a way, an ethics board is something of a no-fun zone. It would likely make more sense to invite members from areas that might run counter to industry wonks, in ways that AI experts might prove tone-deaf to when it comes to self-policing concerns. Does that make sense?
We don't want to stifle the best parts of progress, but an ethics board shouldn't be made up of people inclined to rubber-stamp Skynet, because they'd tend toward seeing AI as progress by default.