Hacker News

Every good invention can be terrifying if it falls into the hands of bad guys (nuclear technology, for example). The same is true for AI. I'm sure bad actors are already training similar AI agents by feeding them only fake news, conspiracy theories, and the like, and building AI agents is easy given how much open-source material about AI is available online.


I'm trying to imagine a productive use case for this. Maybe running it in reverse, to attempt to answer questions?


Think of things like election meddling: propagating truly fake news that caters to what people emotionally want to be true. Humans are weak against confirmation bias; ten minutes on Facebook will show you that for sure.


Yes. That was the rationale OpenAI gave just a few weeks ago for not releasing their new language models:

http://approximatelycorrect.com/2019/02/17/openai-trains-lan...


The use case is spreading misconceptions through society in an automated way (that's what bad guys want, right?), especially during elections.



