> under Bayesian reasoning a lot of “fallacies” like sunk cost, slippery slope, etc. are actually powerful heuristics for EV optimization.
Can you elaborate on that?
This really piqued my interest. I feel like logic is easy to apply retrospectively (especially so for spotting fallacies), but trying to catch myself in a fallacy in the present feels like excessive second-guessing and overanalyzing. The sort that prevents forward momentum and learning.
Would you by any chance have any recommendations on reading on the topic?
Sure. Fallacies, as usually stated, tell you when something that feels like a logical entailment isn’t actually a logical entailment.
Intuitively, people find “Bob is an idiot so he’s wrong” a reasonable statement.
Technically, the implication does not hold (stupid people can be correct) and this is an ad hominem fallacy.
However, if we analyze this statement from a Bayesian standpoint (which we should), the rules of entailment are different, and Bob being stupid actually is evidence that he’s wrong. So maybe this is a pretty reasonable thing to say! Certainly reasonable people should use speakers’ intelligence when deciding how much to trust speakers’ claims, even though this is narrowly “fallacious” in an Aristotelian sense.
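To make the Bayesian point concrete, here is a minimal sketch of the update. All of the numbers are made-up for illustration; the only claim is the shape of the calculation, i.e. that if unreliable speakers are more often wrong, learning the speaker is unreliable shifts the posterior toward “wrong.”

```python
def posterior_wrong(p_wrong, p_idiot_given_wrong, p_idiot_given_right):
    """Bayes' rule: P(claim is wrong | speaker is an idiot)."""
    p_right = 1.0 - p_wrong
    numerator = p_idiot_given_wrong * p_wrong
    evidence = numerator + p_idiot_given_right * p_right
    return numerator / evidence

# Illustrative assumptions: prior 50% that a random claim is wrong;
# wrong claims come from "idiots" 60% of the time, right claims 20%.
p = posterior_wrong(p_wrong=0.5, p_idiot_given_wrong=0.6, p_idiot_given_right=0.2)
print(round(p, 2))  # 0.75 — the ad hominem signal moved the posterior
```

Under these (made-up) likelihoods, the “fallacious” signal takes you from 50% to 75% confidence that the claim is wrong, which is exactly the sense in which the heuristic is doing real evidential work.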
I’m not aware of any reading on this topic. It seems under-explored in my circles. However I know some other people have been having similar thoughts recently.
No disagreement with the main thrust of your comment, it's a very good one and imo goes to the heart of the seemingly intractable divide between the logician's approach to truth and that of damn near everyone else - which tends to leave the logician reasoning into the void, doing not a bit of good for anyone.
However I myself would probably label the statement "Bob is an idiot" (or perhaps less abrasively, "Bob has often been wrong in the past in easily verifiable ways") not as evidence that he's wrong per se, but as a signal, possibly a rather strong signal, that he is likely also incorrect in the current matter.
A minor semantic quibble, but in my own experience I've found that conceiving of it as such helps frame the situation as a "sensor fusion of individually unreliable data sources" type of problem, as opposed to one of "collecting experimental results in a logbook and deriving conclusions from them."
The latter of which can lead pretty seamlessly to a towering edifice of belief built upon some ultimately pretty shaky foundations. Ask me how I know ;)
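The “sensor fusion of individually unreliable data sources” framing has a standard Bayesian sketch: give each source a likelihood ratio and combine them in log-odds space, so no single noisy source is treated as definitive. The specific sources and ratios below are hypothetical, chosen only to illustrate the mechanics.

```python
import math

def fuse(prior, likelihood_ratios):
    """Combine independent evidence sources via log-odds updating."""
    log_odds = math.log(prior / (1.0 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    odds = math.exp(log_odds)
    return odds / (1.0 + odds)

# Prior 50% that the claim is wrong, then three weak signals:
# Bob's poor track record (LR=3), a skeptical colleague (LR=2),
# and a mildly supportive citation pointing the other way (LR=0.5).
p = fuse(0.5, [3.0, 2.0, 0.5])
print(round(p, 2))  # 0.75
```

The independence assumption is the shaky foundation the comment warns about: if the sources are correlated (everyone heard the same rumor about Bob), multiplying their likelihood ratios overcounts the evidence.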
Yes, 100% agreed! Your post reflects my feelings on this.
It's important to understand that something being a "logical fallacy" just implies that you can't unilaterally justify conclusion X by using reasoning Y.
But that does not mean that reasoning Y is not valid or helpful in understanding conclusion X.
Ultimately it's important to justify your views with sound reasoning, but life is full of heuristics, so using heuristics to reach a conclusion can often be reasonable. It just means the conclusion is not definitive from a logical point of view.
Ideally you use a combination of logically sound and heuristic based statements to support an argument.
Following your Bob example... It's important that the person making the argument uses stronger reasoning than just calling Bob an idiot. But agreed that it's a totally valid point of supporting evidence, assuming "Bob is an idiot" is a fairly well-agreed-upon statement.