Deciding whether A is an X or a Y is a basic part of why we communicate at all. Suspicion of em dashes is one thing, but once you start getting nervous on seeing "It's not X. It's Y." you're headed straight for paranoia.
The fundamental job of an LLM is to statistically match its output to its training corpus. The tics they have are common in natural human usage too.
I didn't reply to the comments talking about AI tells; I replied to the comment making a bad argument. Whether the article is LLM-assisted doesn't matter to me.
> Is the not X it's Y in large frequency not an AI tell?
I doubt it. AIs are statistical models; if they've picked up a habit of saying "not X, it's Y", that is probably because it's among the most likely things for humans to say when explaining something. The whole training process is about making what the model says statistically indistinguishable from what humans write. The only ways to pick out an AI are that the model is badly fitted (which, in fairness, many are; the weights are still being worked out) or that its output isn't grounded in reality. It isn't reasonable to say "oh, this looks like AI" based on small phrases like that; AIs use the same phrases humans tend to. That is exactly where they should do the best job of fitting in with us.
"This uses really common phrasing, ergo it is AI" is a bad case to be trying to make.
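The statistical-fitting point can be sketched with a toy bigram model (a deliberate simplification, nothing like a real LLM; the corpus here is made up for illustration): a model fitted by counting will, by construction, prefer whatever continuation humans most often produced, so the most common human phrasings dominate its output.

```python
from collections import Counter, defaultdict

# Hypothetical miniature "corpus" of human text, for illustration only.
corpus = (
    "it is not magic it is statistics "
    "it is not luck it is practice "
    "it is not x it is y"
).split()

# Fit the model: count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    # Greedy decoding: emit the highest-count continuation seen in the corpus.
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("it"))  # the word humans most often put after "it"
```

A phrase the model repeats constantly is, under this fitting process, a phrase the corpus repeats constantly; frequency in the output is evidence about the corpus, not a signature unique to the machine.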