
A bit of PR puffery, but it is fair to say that between Gemini and others it’s now been clearly demonstrated that OpenAI doesn’t have any clear moat.




Their moat in the consumer world is the branding and the fact that OpenAI has 'memory', which you can't migrate to another provider.

That means responses can be far more tailored: it knows what your job is, knows where you go with friends, knows that when you ask about 'dates' you mean romantic relationships (not the fruit) and which ones are going well or badly, etc.

Eventually, when they make it work better, OpenAI can be your friend and confidant, and you wouldn't dump a friend of many years to make a new friend without good reason.


I really think this memory thing is overstated on Hacker News. It's not something that is hard to move at all; it's not a moat. I don't think most users even know memory exists outside of a single conversation.

Every single one of my non-techie friends who use ChatGPT relies heavily on memory. Whenever they try something different, they get very annoyed that it just doesn't "get them" or "know them".

Perhaps it will indeed be easy to migrate memories (there are already plugins that sort of claim to do it, and it doesn't seem very hard), but it certainly is a very differentiating feature at the moment.

I also use ChatGPT as my daily "chat LLM" because of memory, and, especially, because of the voice chat, which I still feel is miles better than any competition. People say Gemini voice chat is great, but I find it terrible. Maybe I'm on the wrong side of an A/B test.


This feels like an area where Google would have an advantage, though. Look at all of the data about you that Google has and could mine across Wallet, Maps, Photos, Calendar, Gmail, and more. Google knows my name, address, driver's license, passport, where I work, when I'm home, what I'm doing tomorrow, when I'm going on vacation and where I'm going, and a whole litany of other information.

The real challenge for Google is going to be using that information in a privacy-conscious way. If this was 2006 and Google was still a darling child that could do no wrong, they'd have already integrated all of that information and tried to sell it as a "magical experience". Now all it'll take is one public slip-up and the media will pounce. I bet this is why they haven't done that integration yet.


I used to think that, too, but I don't think it's the case.

Many people slowly open up to an LLM as if they were meeting someone. Sure, they might open up faster or share some morally questionable things earlier on, but there are some things that they hide even from the LLM (like one hides thoughts from oneself, only to then open up to a friend). To know that an LLM knows everything about you will certainly alienate many people, especially because who I am today is very different from who I was five years ago, or two weeks ago when I was mad and acted irrationally.

Google has loads of information, but it knows very little of how I actually think. Of what I feel. Of the memories I cherish. It may know what I should buy, or my interests in general. It may know where I live, my age, my friends, the kind of writing I had ten years ago and have now, and many many other things which are definitely interesting and useful, but don't really amount to knowing me. When people around me say "ChatGPT knows them", this is not what they are talking about at all. (And, in part, it's also because they are making some of it up, sure)

We know a lot about famous people, historical figures. We know their biographies, their struggles, their life stories. But they would surely not get the feeling that we "know them" or that we "get them", because that's something they would have to forge together with us, by priming us the right way, or by providing us with their raw, unfiltered thoughts in a dialogue. To truly know someone is to forge a bond with them — to me, no one is known alone; we are all known to each other. I don't think Google (or Apple, or whomever) can do that without it being born out of a two-way street (user and LLM)[1]. Especially if we then take into account the aforementioned issue that we evolve, our beliefs change, how we feel about the past changes, and so on.

[1] But — and I guess sort of contradicting myself — Google could certainly try to grab all my data and forge that conversation and connection. Prompt me with questions about things, and so on. Like a therapist who has suddenly come into possession of all our diaries and whom we slowly, but surely, open up to. Google could definitely intelligently go from the information to the feeling of connection.


Maybe. I haven't really heard many of the people in my circles describing an experience like that ("opening up" to an LLM). I can't imagine *anyone* telling a general-purpose LLM about memories they cherish.

Do people want an LLM to "know them"? I literally shuddered at the thought. That sounds like a dystopian hell to me.

But I think Google has, or can infer, a lot more of that data than people realize. If you're on Android you're probably opted into Google Photos, and they can mine a ton of context about you out of there. Certainly infer information about who is important to you, even if you don't realize it yourself. And let's face it, people aren't that unique. It doesn't take much pattern matching to come up with text that looks insightful and deep, but is actually superficial. Look at cold-reading psychics for examples of how trivial it is.


On the other side of the test, I don't know a non-tech person who uses ChatGPT at all.

Another data point: my generally tech-savvy teenage daughter (17) says that her friends are only aware of AI having been available for the last year (3, actually), and basically only use it via Snapchat's "My AI" (which is powered by OpenAI) as a homework helper.

I get the impression that most non-techies have either never tried "AI", or regard it as Google (search) on steroids for answering questions.

Maybe it's more related to his (sad but true) senility than to a lack of interest, but I was a bit shocked to see the physicist Roger Penrose interviewed recently by Curt Jaimungal: when asked if he had tried LLMs/ChatGPT, he assumed the conversation was about the "stupid lady" (his words) ELIZA (the fake chatbot from the '60s), evidently never having even heard of LLMs!


My mom does. She's almost 60. She asks for recipes and facts, asks about random illnesses, asks it why she's feeling sad, asks it how to talk to her friend with terminal cancer.

I didn't tell her to download the app, nor is she a tech-y person; she just did it on her own.


I dislike that it has a memory.

It creeps me out when a past session poisons a current one.


Exactly. I went through a phase of playing around with ESP32s, and now it tries to steer every prompt about anything technology- or electronics-related back to how it can be used in conjunction with a microcontroller, regardless of how little sense it makes.

I agree. For me it's annoying because everything it generates is too tailored to the first stuff I started chatting with it about. I have multiple responsibilities and I haven't been able to get it to compartmentalize. When I'm wearing my "radiology research" support hat, it assumes I'm also wearing my "MRI physics" hat and weaves everything toward MRI. It's really annoying.


Thank you! I feel really dumb for not knowing about that!

Agree. Memory is absolutely a misfeature in almost every LLM use case.

You can turn it off

It doesn't even change the responses a lot. I used ChatGPT for a year for a lot of personal stuff, and tried a new account with basic prompts and it was pretty much the same. Lots of glazing.

What kind of a moat is that? I think it only works in abusive relationships, not consumer economies. Is OpenAI's model being an abusive, money-grubbing partner? I suppose it could be!

If you have all your “stuff” saved on ChatGPT, you’re naturally more likely to stay there, everything else being more or less equal: Your applications, translations, market research . . .

I think this is one of the reasons I prefer claude-code and codex. All the files are on my disks, and if Claude or Codex were to disappear, nothing would be lost.

> Their moat in the consumer world is the branding and the fact that OpenAI has 'memory', which you can't migrate to another provider.

Their 'memory' is mostly unhelpful and gets in the way. At best it saves you from prompting some context, but more often than not it adds so much irrelevant context that it overfits responses so hard that it makes them completely useless, especially in exploratory sessions.


It's certainly valuable but you can ask Digg and MySpace how secure being the first mover is. I can already hear my dad telling me he is using Google's ChatGPT...

But Google has your Gmail inbox, your photos, your maps location history…

I think an OpenAI paper showed that 25% of GPT usage is “seeking information”. In that case Google also has an advantage from being the default search provider on iOS and Android. I do find myself using the address bar in a browser like a chat box.

https://cdn.openai.com/pdf/a253471f-8260-40c6-a2cc-aa93fe9f1...


> your maps location history

Note that since 2024, Google no longer has your location history on their servers, it's only stored locally: https://www.theguardian.com/technology/article/2024/jun/06/g...


> Their moat in the consumer world is the branding and the fact that OpenAI has 'memory', which you can't migrate to another provider

This sounds like first-mover advantage more than a moat.


The memory is definitely sort of a moat. As an example, I'm working on a relatively niche problem in computer vision (small, low-resolution images), and ChatGPT now "knows" this and tailors its responses accordingly. With other chatbots I need to provide this context every time, or else I get suggestions oriented towards the most common scenarios in the literature, which don't work at all for my use case.

That may seem minor, but it compounds over time and it's surprising how much ChatGPT knows about me now. I asked ChatGPT to roast me again at the end of last year, and I was a bit taken aback that it had even figured out the broader problem I'm working on and the high level approach I'm taking, something I had never explicitly mentioned. In fact, it even nailed some aspects of my personality that were not obvious at all from the chats.

I'm not saying it's a deep moat, especially for the less frequent users, but it's there.


> may seem minor, but it compounds over time and it's surprising how much ChatGPT knows about me now

I’m not saying it’s minor. And one could argue first-mover advantages are a form of moat.

But the advantage is limited to those who have used ChatGPT. For anyone else, it doesn’t apply. That’s different from a moat, which tends to be more fundamental.


Ah, I guess I've been interpreting "moat" narrowly, such as, keeping your competitors from muscling in on your existing business, e.g. siphoning away your existing users. Makes sense that it applies in the broader sense as well, such as say, protecting the future growth of your business.

Sounds similar to how psychics work: observing obvious facts and pattern matching, except in this case you made the job super easy for the psychic, because you gave it a _ton_ of information instead of the psychic having to infer from the clothes you wear, your haircut, hygiene, demeanor, facial expression, etc.

Yeah, it somewhat is! It also made some mistakes analogous to those psychics would make, based on the limited sample of exposure it had to me.

For instance, I've been struggling against a specific problem for a very long time, using ChatGPT heavily for exploration. In the roast, it chided me for being eternally in search of elegant perfect solutions instead of shipping something that works at all. But that's because it only sees the targeted chats I've had with it, and not the brute force methods and hacks I've been piling on elsewhere to make progress!

I'd bet that with better context it would have been more right. But the surprising thing is that what it got right was also not very obvious from the chats. Also, for something that has only an intermittent existence when prompted, it did display some sense of time passing. I wonder if it noticed the timestamps on our chats?

Notably, that roast evolved into an ad-hoc therapy session and eventually into a technical debugging and product roadmap discussion.

A programmer, researcher, computer vision expert, product manager, therapist, accountability partner, and more, all in a package that I'd pay a lot of money for if it weren't available for free. If anything, I think the AI revolution is rather underplayed.


> Their moat in the consumer world is the branding and the fact that OpenAI has 'memory', which you can't migrate to another provider.

Branding isn't a moat when, as far as the mass market is concerned, you are 2 years old.

Branding is a moat when you're IBM, Microsoft, or (more recently) Google, Meta, etc.


You can prompt the model to dump all of the memory into a text file and import that.

In the onboarding flow, the new provider could ask, "Do you use another LLM? If so, give it this prompt and then paste in the memory file it outputs."
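
Roughly like this (a minimal sketch, assuming the other provider exposes an OpenAI-compatible chat endpoint; the base_url, model name, and memory file name are placeholders I made up, since there's no official migration API):

    # Sketch: "import" an exported memory dump by prepending it as a
    # system prompt on every request. base_url, model, and file name
    # are assumptions, not a documented migration path.
    from openai import OpenAI

    client = OpenAI(base_url="https://example-provider.com/v1", api_key="YOUR_KEY")

    # e.g. the output of: "Dump everything you remember about me as plain text."
    with open("chatgpt_memory_dump.txt") as f:
        memory = f.read()

    resp = client.chat.completions.create(
        model="some-model",  # placeholder
        messages=[
            {"role": "system", "content": "Background the user has shared before:\n" + memory},
            {"role": "user", "content": "What should I focus on this week?"},
        ],
    )
    print(resp.choices[0].message.content)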


I just learned Gemini has "memory" because it mixed its response to a new query with a completely unrelated query I had beforehand, despite making separate chats for them. It responded as if they were the same chat. Garbage.

I recently discovered that if a sentence starts with "remember", Gemini writes the rest of it down as standing instructions. Maybe go look in there and see if there is something surprising.

It's a recent addition. You can view them in one of the settings menus. Gemini also has scheduled triggers, like "Give me a recap of the daily news every day at 9am based on my interests", and it will start a new chat with you every day at 9am with that content.

Couldn't you just ask it to write down what it knows about you and copy-paste that into another provider?

The next realization will be that Claude isn't clearly(/any?) better than Google's coding agents.

I think Gemini 3.0, the model, is smarter than Opus 4.5, but Claude Code still gives better results in practice than Gemini CLI. I assume this is because the model is only half the battle, and the rest is how good your harness and integration tooling are. But that also doesn't seem like a very deep moat, or something Google can't catch up on with focused attention, and I suspect by this time next year, or maybe even six months from now, they'll be about the same.

> But that also doesn't seem like a very deep moat, or something Google can't catch up on with focused attention, and I suspect by this time next year, or maybe even six months from now, they'll be about the same.

The harnessing in Google's agentic IDE (Antigravity) is pretty great - the output quality is indistinguishable between Opus 4.5 and Gemini 3 for my use cases[1]

1. I tend to give detailed requirements for small-to-medium sized tasks (T-shirt sizing). YMMV on larger, less detailed tasks.


Claude is cranked to the max for coding, specifically agentic coding, and even more specifically agentic coding using Claude Code. It's like the MacBook of coding LLMs.

Claude Code + Opus 4.5 is an order of magnitude better than Gemini CLI + Gemini 3 Pro (at least, last time I tried it).

I don't know how much secret sauce is in CC vs the underlying model, but I would need a lot of convincing to even bother with Gemini CLI again.


That hasn’t been my experience. I agree Opus has the edge but it’s not by that much and I still sometimes get better results from Gemini, especially when debugging issues.

Claude Code is much better than Gemini CLI though.


If the bubble doesn't burst in the next few days, then this is clearly wrong.

Next few days? Might be a bit longer than that.

Why? They said "clearly demonstrated".

If it is so clear, then investors will want to pull their money out.


Most investors are dumb as rocks, or, at least, don't know shit about what they're investing in. I mean, I don't know squat about chemical manufacturing but I have some investment in that.

It's not about who's the best; it's about where the market is. Dogpiling on growing companies is a proven way to make a lot of money, so people do it, and it's accelerated by index funds. The REAL people supporting Google and Nvidia aren't Wall Street; they're your 401(k).


What if all investors don't agree with this article?

Out of curiosity, why that specific timeframe? Is there a significant unveiling supposed to happen? Something CES-related?


