Consumer LLM apps have a moat. As it stands, ChatGPT (the app) spends most of its compute on personal, non-work messages (approx 1.9B per day vs. 716M for work)[0]. First through ongoing conversations that users return to, then through memory that surfaces specific past chats, these conversations have become increasingly personalized. Suddenly there is a lot of personal data you rely on it having, and it makes the product better. You cannot just plop over to Gemini and replicate this.
Because it changes all the time. A few weeks ago it was Gemini 2.5 Pro, then Claude Opus 4.1, then GPT-5 Thinking, now maybe Claude Sonnet 4.5, etc.[1] Having a good model isn't enough when they're basically interchangeable now. You need something else.
[1] This is an example. Which model was best at which moment is not important.
Because it depends on how much better “best” is. If it’s only incrementally better than open source models that have other advantages, why would you bother?
OpenAI’s moat will only come from the products they build on top. Theoretically their products will be better because they’ll be more vertically integrated with the underlying models. It’s not unlike Apple’s playbook with regard to hardware and software integration.