> It starts producing a weird dialogue that's completely unlike its training corpus
But it's not doing that. It's just replacing a relation in vector space with one that we would consider distant; the sketch below is roughly what I mean.
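A minimal sketch, with a toy vocabulary and made-up 4-d vectors (nothing from any real model's weights): the model can only emit tokens it actually has, so a request for a missing one resolves to whatever neighbor is nearest in its embedding space, however wrong that looks to us.

```python
import numpy as np

# Made-up embeddings for a few tokens the model *does* have.
# Illustrative values only, not real model weights.
embeddings = {
    "🐠 fish":   np.array([0.9, 0.1, 0.3, 0.0]),
    "🐴 horse":  np.array([0.1, 0.9, 0.2, 0.1]),
    "🦐 shrimp": np.array([0.8, 0.2, 0.4, 0.1]),
}

# A query vector for the nonexistent "seahorse emoji": part fish, part horse.
query = np.array([0.5, 0.5, 0.3, 0.05])

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The model can't output a token it doesn't have, so the nearest
# available neighbor gets substituted in.
scores = {tok: round(cosine(query, vec), 3) for tok, vec in embeddings.items()}
print(scores, "->", max(scores, key=scores.get))
```

The substitution is entirely mechanical: the closest vector wins, whether or not the result makes sense to a human.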
Of course you would view an LLM's behavior as mystifying and indicative of something deeper when you do not know what it is doing. You should seek to understand something before assigning mysterious capabilities to it.
You're not addressing the objection. What is it about your model of how LLMs work (that they're just "repeated information") that predicts they'd go haywire when asked about a seahorse emoji (and only the seahorse emoji)? Why does your model explain this better than the standard academic view of deep neural nets?
You just pointed out an example of LLMs screwing up and then skipped right to "therefore they're just repeating information" without showing that this is what your explanation predicts.
If you copy two words from me and put them in a different sentence that means something else, that's a lie. If you want to argue with a strawman, you can rely on an LLM for that instead of me.
I haven't lied. You're making accusations in bad faith. This was a faithful representation of your position, as best I can tell from your comment.
If you'd like to explain why "What you've mistaken for a 'logical model' is simply a large amount of repeated information." actually means something else, or why you think I've misinterpreted it, be my guest.