Well, at least in this subthread, the model is only failing at the same things humans fail at too. To see the mind-blowing part, stop treating GPT-4 like the Oracle at Delphi, and start treating it as the "first thing that comes to mind" answer (aka. the inner voice) - and then notice the failure modes are pretty much the same as with humans. For example, coercing a trick question into a similar-sounding straight question, and answering it before realizing the person asking is an asshole.
