I think the intuition is that the first N layers decode the input into a "thought language" while the last N encode it back into the desired output language. So if there are well-defined points where the model transitions between decoding/understanding, thinking, and rendering back to language, those two transition points should live in the same vector space of "LLM magic thinking language".
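
A minimal sketch of one way to probe for those transition points: compare each layer's residual-stream state for the last token against the next layer's, and look for spots where the representation shifts sharply. The model name and the choice of the last token are assumptions for illustration, not anything the comment specifies.

    import torch
    import torch.nn.functional as F
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # assumed model; any causal LM with hidden states works
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, output_hidden_states=True
    )
    model.eval()

    inputs = tok("The capital of France is", return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)

    # hidden_states is a tuple of (num_layers + 1) tensors,
    # each of shape (batch, seq_len, hidden_dim)
    states = [h[0, -1] for h in out.hidden_states]  # last-token vector per layer

    for i in range(len(states) - 1):
        sim = F.cosine_similarity(states[i], states[i + 1], dim=0).item()
        print(f"layer {i:2d} -> {i + 1:2d}: cosine similarity {sim:.3f}")

    # Dips in layer-to-layer similarity would mark candidate "transition
    # points" between the decode / think / re-encode phases hypothesized above.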