
"[T]he inclusion of a drone company executive had raised debate over use of Google’s AI for military applications."

Lovely.



I share your distaste for militarized AI (unless I mistake you?), but if AI are to be used in such applications then competent representation and examination of such use cases by a public ethics body would seem to me to be a good thing.


Which technologies will be used in wars is determined not by you but by your opponents.


Granted, but there's much more than a binary (use/ban) discussion to be had here. It's important that we have open discourse about the design, manufacture, deployment, and general use of such things in order to avoid dangerous or highly objectionable outcomes.

There is more than one way to do something; the repercussions of those pathways may differ significantly.


Yes, if you dislike militarized AI you should discuss it.

The woman in question who runs the drone company is ex-military, so I would expect her to be in favor of militarizing drones (though, to be fair, her drone company currently does not do that), and thus she probably shouldn't be on the board in the first place.


What? No. The fact that she's ex-military and runs a drone company is precisely why she should be on such a board.

If there is no one on the board with the relevant knowledge and experience to competently represent a given use case, then the board will likely be unable to produce results relevant to such use cases. For example, if I form a board to hash out software version control system best practices but actively exclude experts on distributed VCS such as git and mercurial, then the resulting "best practices" are unlikely to prove useful for anyone actually using a DVCS in reality.

My point here is that excluding her almost certainly won't actually do anything to prevent the development of militarized AI. Rather, it will simply reduce the likelihood that anything the board puts out has influence on such matters.


So, the idea of "discussing it" is to first exclude everybody who might hold an opinion different from the one you want to arrive at, preferably without even asking them. Why bother asking a person whether she's for or against military drones when she served in the military, so you already know everything about her views from a single factoid? At that point I'm not sure why we'd waste time on a council at all: a dozen people who all think the same can be replaced by one person at 12x the efficiency.


The whole point of a council is to bring different ideas, so they can approach a problem from multiple directions. If you remove all the people you disagree with, you no longer have a council, you've got an echo chamber. If you are threatened by 1 person out of 8 having different views, then maybe your ideas aren't as strong as you thought?


So if you want a council on the exploration of space should you include a few flat-Earthers?


Probably not. That would be equivalent to including someone who claims "radiation from computers interferes with your brain" on this AI panel, or perhaps someone who claims "krakens pose an immediate danger to inattentive sailors" on a maritime safety council.


>That would be equivalent to including someone who claims "radiation from computers interferes with your brain" on this AI panel...

... or somebody with a reputation of unethical behavior being on an ethics committee.


Most people can tell the difference between someone you don't agree with for ideological reasons and someone you don't agree with because they ignore facts for attention. I'm assuming the poster was referring to the former. There's a big difference between say, having a Republican on the committee vs a Holocaust denier. While it requires a certain amount of empathy to realize that rational people can have the same inputs as you but produce different opinions, I think it's a beneficial thing to recognize.


>... because they ignore facts for attention...

The Heritage Foundation literally falsified studies to go after LGBT people and claim they are unfit to have children. This isn't just about being ignorant of the facts or having behavior that proves you to be unqualified. This is about being completely counterproductive to the goals of the committee. That is exactly like having a flat-Earther on a space exploration committee. Furthermore, this inane argument about intellectual diversity to have ignorant people on an expert panel applies just as well to flat-Earth believers as it does to the Heritage Foundation.

>...rational people can have the same inputs...

Not everyone is equally well informed and well intentioned.


Do you have more info on their falsified LGBT study? My Google-fu is failing me


Why not have Google weigh in on how AI is used in military applications? This group is against current drone warfare, which they have every right to be. See [0]. One civilian death is one too many! So why doesn't Google actually put its ethics and AI knowledge into action, perhaps by coming up with a better way to identify actual enemy combatants?

The military will use drone warfare regardless of what a group of Google employees thinks. The US military is in dire need of ethics in AI [1].

[0] "Summary of Information Regarding U.S. Counterterrorism Strikes Outside Areas of Active Hostilities" [PDF] https://www.dni.gov/files/documents/Newsroom/Press%20Release...

[1] "Does the U.S. Face an AI Ethics Gap?" https://www.rand.org/blog/2019/01/does-the-us-face-an-ai-eth...



