Hacker News

It seems to me that in such a scenario (at least ideally), the human will mostly focus on targets the AI has already marked as probable enemies, and rigorously double-check those before firing. That means you will of course be influenced by the AI, and that is not necessarily a problem. If you haven't first established, and don't regularly re-verify, that the AI's results correlate positively with reality, why are you using AI at all? You could improve this further by, for example, showing the AI's confidence percentage along with a summary of the reasons for its verdict.

This is separate from the question of whether remotely killing people by drone is a good idea at all, which I'm not convinced of.


