
I'm Jewish. I would not in fact be up in arms if your hypothetical person were on an AI ethics council, if I felt they might have useful things to contribute outside the context of "the Jewish Question". I suspect I would personally be happier if people could be found who could offer similar contributions without that particular viewpoint, but "mildly unhappy" is about as far as I would go in this particular scenario.

Now if the council had multiple people like you describe, or worse yet a majority of them, then I would certainly be up in arms, yes. Just like I would be up in arms if a majority of such a council held _any_ extreme-by-society's-standards position (Greenpeace member, orthodox Jew, Catholic priest, Communist party member, hard libertarian, etc, etc). From my point of view, the most dangerous failure mode for an AI ethics council is groupthink that leads them to not notice problems that should get noticed. If the council is set up right, it should not require unanimity, or even a majority to flag something as an issue.



> "mildly unhappy" is about as far as I would go in this particular scenario.

I suspect that you're only willing to say that because, at the moment, nazis are not anywhere close to being in a position to carry out any of their promises. Let's imagine that 30% of the general population agreed with this person's views, but that they were highly unpopular among educated people and had little influence in certain institutions, such as Google. Would you still be shrugging your shoulders and saying that it was worth it to have a diversity of opinion, or would you be scared, and willing to do anything you could to prevent this person and their views from having any more influence?

> From my point of view, the most dangerous failure mode for an AI ethics council is groupthink that leads them to not notice problems that should get noticed.

Hmm… I didn't think there was much of a chance of this council producing anything of value to begin with, so I basically just saw it as a minor endorsement of a small group of people. Maybe my opinion on this matter would be different if I were more concerned about AI and thought that there was meaningful progress to be made by such a group.


One other thing, because I do think this is also important. In the presented hypothetical, "highly unpopular among educated people" is an important condition which I'd love to see data for in the case in question. That will require carefully pinning down what views we really think Kay Coles James holds, though; I suspect that support for her views varies quite significantly based on that, and also based on geography and age, after controlling for education.

I, personally, would not be surprised if 30%, or more, of "educated people" across the US agreed that some (though perhaps not all) of the issues she raises are valid issues that need to be addressed.

Now maybe this just makes the situation scarier for trans people, of course....


I agree that the degree of possible harm is important here. But I also think that if we start measuring that, then we have to compare the actual positions people hold to the one you ascribed to your hypothetical outspoken Nazi-like person. If someone who advocates murdering trans people were placed on such a board I would be a lot more up in arms than in the hypothetical Nazi-like case. For a number of reasons, including the power dynamic, but not limited to that.

But that's not what we're talking about here, either for the particular person on the ex-board or the overall population dynamics: 30% of the population is not in favor of murdering trans people, and neither was anyone on Google's board. To get to the 30% number I think there are two options: 1) reduce the level of disapproval to the point where you in fact have a meaningful fraction of the population (not 30% by any means, but not negligible either, and including some members of Congress, which I realize is much more acutely true for the trans case) with an equivalent disapproval level of Jews, or 2) define any expression of disapproval or concern with complications at all as an existential threat. It seems to me that a number of people do the latter in practice, which is why we end up with comparisons with the hypothetical Nazi-like person.

Just to expand on this, I really do think there is a vast difference between 1) people who acknowledge that trans people exist and are "legitimate" in whatever sense one cares to think, have concerns about trans women's participation in women's sports, and want to figure out how and whether that can be made to work reasonably, 2) people who just wish trans people as a concept would disappear because it would make everything so much simpler, and 3) people who advocate violence against trans people. I don't think my viewpoint is universally accepted, and there are various instances in this very thread of comments conflating various positions on that spectrum.

Finally, I agree that if this board is not a serious attempt at ethics oversight and is instead just a PR stunt, then there's no point in worrying about a diversity of viewpoints and all that; Google should just appoint whoever will score the most brownie points in whatever status competition they think they're involved in. But I do think that external oversight of AI ethics to prevent echo-chamber effects is quite important, and I'm disappointed in the lack of such, whether that's because there's no board or because there is a useless board.


So, I absolutely agree that the Heritage Foundation is less dangerous to trans people than Nazis are to Jews. However, the parent seemed to be objecting prima facie to removing someone from an ethics board on the basis of their views on issues. If we concede that Jews should object to even a single Nazi being put on an ethics board, then that objection evaporates; it's instead just a question of degree: how harmful do you think the Heritage Foundation's views are compared to those of a Nazi?

If Google put a literal Neo-Nazi on an ethics board, I would tell everyone I knew to stop using Google products and applying for jobs there. I'd probably set up a script on my university Gmail to auto-reply to everything saying that I can't be reached through a Nazi-supporting platform, and explaining how the sender can run the same script in protest. Knowing my university, we'd be off Google in a few weeks.

I'd say that the Heritage Foundation is about 2 percent as bad as a Neo-Nazi organization (supposing that they were the same size), so I'm about 1/50th as concerned as I would be in the hypothetical scenario. But that's still enough that I'll post on Hacker News about it, and I'd probably sign the petition if I worked at Google.



