
Beyond that, no AI of this kind can explain why it produced the output it did, which rules out any real accountability or dependability. Maybe it's my academic/scientific background that shaped me this way, but relying on information that can't be sourced, reproduced as an experiment, or proven from axioms and theorems is a liability, and often a cause for gross negligence or misconduct.

It strikes me that most people don't see that as fundamentally problematic.


