Uh, to explain what? You probably read something into what I said while I was being very literal.
If you train an LLM on mostly false statements, it will generate both known and novel falsehoods. Same for truth.
An LLM has no intrinsic concept of true or false, everything is a function of the training set. It just generates statements similar to what it has seen and higher-dimensional analogies of those.
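To make the narrow point concrete, here's a toy sketch (emphatically not an LLM, just a bigram Markov model, with an invented corpus of false statements): its outputs are purely a function of its training set, and it recombines seen fragments into both memorized and novel statements with no notion of which are true.

```python
import random

# Invented "training set" of false statements (made up for this sketch).
corpus = [
    "the sun is cold",
    "the moon is hot",
    "ice is hot",
    "fire is cold",
]

# Count word-to-word transitions, with sentence start/end markers.
bigrams = {}
for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    for a, b in zip(words, words[1:]):
        bigrams.setdefault(a, []).append(b)

def generate(rng):
    """Sample one statement; every transition was seen in training."""
    word, out = "<s>", []
    while True:
        word = rng.choice(bigrams[word])
        if word == "</s>":
            return " ".join(out)
        out.append(word)

rng = random.Random(0)
samples = sorted({generate(rng) for _ in range(100)})
# Samples include the training statements plus novel recombinations
# (statements never seen verbatim), all built from training statistics.
novel = [s for s in samples if s not in corpus]
print(samples)
print(novel)
```

The model will happily emit novel combinations like "fire is hot" alongside novel falsehoods; nothing in it distinguishes the two cases, which is the whole point.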
Reasoning allows a model to produce statements that are more likely to be true, based on statements that are known to be true. You'd need to structure your "falsehood training data" in a specific way to let an LLM generalize as well as it does on regular data (instead of memorizing noise). And then you'd get a reasoning model that remembers false premises.
You generate your text based on a "stochastic parrot" hypothesis with no post-validation it seems.
Really, how hard is it to follow HN guidelines and:
a) not imagine straw-man arguments and not imagine more (or less) than what was said
b) refrain from snarky and false ad hominems
Nothing you said conflicts with what I said, and it again shows a fundamental misunderstanding.
Reasoning is (mostly) part of the post-training dataset. If you add a large majority of false (i.e. paradoxical, irrational, etc.) reasoning traces to it, you'll get a model that successfully replicates the false reasoning of humans. If you mix them in with true reasoning traces, I imagine you'll get infinite-loop behaviour as the reasoning trace oscillates between the true and the false.
The original premise that truth is purely a function of the training dataset still stands... I'm not even sure what people are arguing here, as that seems quite trivially obvious?
Ah, sorry. I didn't recognize "all the high-level capabilities of an LLM come from the training data (presumably unlike humans, given the context of this thread)" in your wording. This is probably true. LLM structure probably has no inherent inductive bias that would amount to truth seeking. If you want to get a useless LLM, you can do it. OK, no disagreement here.