
[flagged]


I think you might be over-simplifying. This (and llama.cpp's grammar-based sampling, which this is moving towards[1]) doesn't say "no, not like that, give me another token". It excludes impossible tokens at each step, but otherwise samples like normal.
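To illustrate the distinction: a toy sketch (not the actual outlines or llama.cpp code) of mask-then-sample decoding. At each step, tokens that cannot keep the output on a path to matching the target format get their logits set to -inf, and sampling over the surviving tokens proceeds as normal. The vocabulary, format check, and fake logits below are all hypothetical stand-ins.

```python
import math
import random

VOCAB = ["0", "1", "7", "a", "b", " "]  # toy vocabulary

def viable(text):
    # Hypothetical target format: exactly three digits. `text` is viable
    # while it is a prefix of some valid completion. Real implementations
    # walk a regex/grammar automaton instead of hard-coding this check.
    return (text.isdigit() and len(text) <= 3) or text == ""

def constrained_sample(logits, prefix):
    # Exclude impossible tokens: set their logits to -inf so they get
    # zero probability after the softmax.
    masked = [l if viable(prefix + t) else -math.inf
              for l, t in zip(logits, VOCAB)]
    # Otherwise sample like normal: softmax over the surviving tokens.
    exps = [math.exp(l) for l in masked]
    total = sum(exps)
    r = random.uniform(0, total)
    for token, e in zip(VOCAB, exps):
        r -= e
        if r <= 0:
            return token
    return VOCAB[-1]

def generate(step_logits):
    out = ""
    for logits in step_logits:
        out += constrained_sample(logits, out)
    return out

# Dummy per-step logits standing in for a model; the letters score
# highest, but the mask guarantees only digits can be emitted.
fake_logits = [[0.1, 0.2, 0.3, 5.0, 5.0, 5.0]] * 3
print(generate(fake_logits))  # always a 3-digit string
```

The point is that the model is never asked to "try again": invalid continuations are removed up front, and the usual sampling distribution is renormalized over what remains.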

Is this a revolutionary trick? Not really, since llama.cpp, guidance, and probably others have already done it. But it's a good trick, and hopefully one of many to justify the valuation :).

[1]: https://github.com/normal-computing/outlines/pull/178


I’m sorry that our software made you so angry. It was a side project led by two people, independently of the rest of the company.


> Imagine thinking that adding regex on top of an LLM is worth $8.5M

You should be downvoted for being this reductionist and uncharitable. This is a side project within a larger company effort.



