Apologies if you’d already seen this and were only trying to make a point, but you might like this article from a week or two ago about running Llama 2 “uncensored” locally. It seems to do a decent job of mitigating the sermons!

Article: https://ollama.ai/blog/run-llama2-uncensored-locally

Discussion: https://news.ycombinator.com/item?id=36973584



When you encounter "uncensored" in the context of a Llama model (1 or 2), it means that all refusals to respond have been removed from the fine-tuning datasets. There's no way to uncensor the pre-trained model itself, and fine-tuning only changes the style of the output.
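The filtering step described above can be sketched roughly as follows. This is a minimal illustration of removing refusal-style responses from a fine-tuning dataset; the refusal phrases and the record format are assumptions for the example, not the exact filters used by any published "uncensored" dataset.

```python
# Hypothetical example: drop instruction/response pairs whose response
# contains a refusal phrase, leaving only "compliant" examples for
# fine-tuning. Phrase list and record schema are illustrative.

REFUSAL_MARKERS = [
    "as an ai language model",
    "i cannot",
    "i can't assist",
    "i'm sorry, but",
]

def is_refusal(response: str) -> bool:
    """Return True if the response contains a known refusal phrase."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def filter_dataset(examples: list[dict]) -> list[dict]:
    """Keep only examples whose response is not a refusal."""
    return [ex for ex in examples if not is_refusal(ex["response"])]

examples = [
    {"instruction": "Explain recursion.",
     "response": "Recursion is when a function calls itself..."},
    {"instruction": "Tell me a secret.",
     "response": "I'm sorry, but I cannot help with that."},
]
filtered = filter_dataset(examples)
print(len(filtered))  # 1 — the refusal example is dropped
```

Because only the fine-tuning data changes, the resulting model stops emitting refusal boilerplate but retains whatever the base model learned during pre-training.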



