I've seen a lot of stories lately about detecting fake news and deepfake video and audio. I've seen contests for detecting whether something is trustworthy.
How do we know if we can trust something or someone? I've been betrayed, more than once. I've been lied to and not known it until much later.
Is there an algorithm for this? Is that even possible, or is it a cat-and-mouse game at best? Over the past year or two, it seems like Google is being gamed more and more: business listings at the wrong address, the locksmith scandal, and so on.
Learning to lie is a natural and expected part of human maturation. Other species are known to deceive.
Is this a solvable problem? On a personal note, how can an individual learn to trust, if they believe it is impossible due to past experience?
I think this question sits at the deep intersection of computer science and philosophy, especially epistemology. If philosophers haven't solved the problems associated with knowledge, justification, and truth, how can a computer? How can we believe a computer, programmed by a human, can properly distinguish true from false, when humans themselves are unable to do so with great accuracy?
And if the above is possible, how do we know we can trust the computer's prediction? It's already a well-known problem that understanding the output of AI -- that is, why the AI chose a particular result -- is becoming increasingly difficult over time, not easier. If we can't trust computers, or people, how can we trust the designation a computer makes about something else's trustworthiness?
I once had a theory that you could use computers to expose lies in courtroom testimony by finding conflicting statements with Prolog. However, the absence of contradiction doesn't mean a statement is true.
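A minimal sketch of that idea, in Python rather than Prolog, with made-up propositions (real testimony would first need to be formalized somehow): model each statement as a signed proposition and flag any pair that asserts opposite truth values. Note that an empty conflict list only means the statements are mutually consistent, not that any of them is true.

```python
# Toy contradiction detector for formalized testimony.
# Each statement is a (proposition, truth_value) pair, e.g.
# ("at_scene", True) for "I was at the scene".
# The propositions below are hypothetical examples.

def find_conflicts(statements):
    """Return index pairs (i, j) where two statements assert
    the same proposition with opposite truth values."""
    conflicts = []
    for i in range(len(statements)):
        for j in range(i + 1, len(statements)):
            prop_i, val_i = statements[i]
            prop_j, val_j = statements[j]
            if prop_i == prop_j and val_i != val_j:
                conflicts.append((i, j))
    return conflicts

testimony = [
    ("at_scene", True),     # "I was at the scene"
    ("knows_victim", False),
    ("at_scene", False),    # later: "I was never there"
]

print(find_conflicts(testimony))  # statements 0 and 2 conflict
```

A consistent (conflict-free) set of statements returns an empty list, which is exactly the limitation above: the liar who never contradicts themselves passes this check.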