Hacker News

Yeah, it's describing a technique for LLMs where the model's output is fed back in as input, over and over, and this iterative refinement can produce noticeably more accurate output.
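A minimal sketch of that feedback loop, assuming a hypothetical `call_llm` function standing in for a real LLM API call:

```python
def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call an actual LLM API here.
    # This placeholder just tags the text so the loop is observable.
    return prompt.strip() + " [refined]"

def iterative_refine(question: str, rounds: int = 3) -> str:
    # First pass: answer the question directly.
    answer = call_llm(question)
    # Subsequent passes: feed the previous answer back in as input
    # and ask the model to improve it.
    for _ in range(rounds - 1):
        prompt = (
            f"Question: {question}\n"
            f"Previous answer: {answer}\n"
            "Improve the previous answer."
        )
        answer = call_llm(prompt)
    return answer
```

The loop count (`rounds`) and the refinement prompt wording are assumptions; in practice you'd tune both, and often stop early once successive answers stop changing.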

