Hacker News

When I simply asked the question, the model failed, as did most of the others. It's a smaller model that I can run locally, so it's obviously not as powerful.

I wanted to see whether a prompt would do better if it pulled into the analysis 1) a suggestion not to take every question at face value, and 2) some knowledge of how riddles are structured.

These are part of the "context" humans bring to a question, so I speculated that this might be what was missing from the LLM's reasoning unless explicitly included.
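As a minimal sketch of what I mean, the extra context can just be prepended to the question before it is sent to the model. The function name and the exact wording of the preamble here are my own illustration, not a tested prompt:

```python
def build_prompt(question: str) -> str:
    """Wrap a question with the two pieces of context described above:
    1) a nudge not to take the question at face value, and
    2) a reminder of how riddles are typically structured.
    (Hypothetical helper; the preamble wording is illustrative.)"""
    preamble = (
        "Before answering, do not take the question at face value; "
        "consider whether it is a trick question.\n"
        "Riddles often rely on misdirection: the phrasing suggests a "
        "familiar puzzle while the actual question differs in a key detail.\n\n"
    )
    return preamble + "Question: " + question

# The wrapped prompt is then passed to whatever local model you run.
print(build_prompt("What weighs more, a pound of feathers or a pound of gold?"))
```

The point is not the specific wording but that the scaffolding travels with every question, rather than relying on the model to supply that framing itself.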


