
> are there modes of thinking that fundamentally require something other than what current LLM architectures do?

Possibly. There are likely also modes of thinking that fundamentally require something other than what current humans do.

Better questions are: is there any kind of human thinking that cannot be expressed in a "predict the next token" language? Is there any kind of human thinking that maps onto token prediction so poorly that training a model for it would not be feasible regardless of training data and compute?
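To make the "predict the next token" framing concrete, here is a toy sketch (my own illustration, not anything from the thread): a bigram model that, given the current token, predicts the most frequent follower seen in training. Real LLMs condition on the whole context with a neural network, but the interface is the same: context in, next-token distribution out.

```python
from collections import Counter, defaultdict


def train_bigram(corpus: str):
    """Count which token follows which in a whitespace-tokenized corpus."""
    tokens = corpus.split()
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts


def predict_next(counts, token):
    """Greedily pick the most frequently observed next token, or None if unseen."""
    followers = counts.get(token)
    return followers.most_common(1)[0][0] if followers else None


model = train_bigram("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

Everything interesting about an LLM lives in how `predict_next` is computed; the question above is whether some kinds of thinking simply cannot be squeezed through that interface.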

At the end of the day, what matters is real-world utility, and some of their cognitive handicaps are likely addressable. Think of it like the evolution of flight by natural selection: flight had to be useful enough to justify adapting the whole body, making it not just possible but efficient. Sleep falls into this category too, imo.

We will likely see something similar with AI. To compensate for some of its handicaps, we might adapt our processes and systems so the original problem can be solved automatically by the models.


