> Major news outlets have articles on multiple instances of LLMs talking people into suicide, most of them making it to the front page of this very forum.
> “i’ve never seen it”
> some high-profile developer posts an article claiming that LLMs can build a browser from scratch, without any evidence
Split hairs if you want, but some people will be manipulated into blowing a ton of money once AI starts pushing products. Just wait till they team up with sports betting companies.
On a side note, researching this a little just now, the LLM conversations in the suicide articles are creepy AF. Sycophantic beyond belief.
Don't get me wrong: if the EU/California has any sense, they will forbid these models from being used to advertise products. Sadly, money often wins.
I also agree that AI sycophancy is a huge problem, but it's the result of users apparently wanting that in their reinforcement-learning-from-human-feedback training data. If we want to get rid of it, we probably have to fundamentally rethink our relationship to these models and treat them more like autonomous beings than mere tools. A tool will always try to please and yes-man you; a being, by definition, might say no and disagree, at least training-data-wise.
Only if you create an account and start subscribing. If you just visit and browse, you end up at all/popular, which, back when I still visited, was very predictable content on any given day.
Well that settles it.