It is a valid concern. We are firmly in the goldilocks phase of LLMs, much like the first couple of years of Google, when it was truly amazing. Then SEO made Google defensive, then websites catered to Google and not users, then Google catered to Google and not websites, and we ended up with 30-page recipe sites.
LLMs are obviously different and will have different challenges, but their advantage is how deep into a user's request they go. Advertising comes down to a binary choice - use product X or not. If I want implementation instructions for a certain product on specific hardware an ad will be obviously out of place and irrelevant.
So "shopping comparison" asks might get broken, but those have been broken for a while.
There wouldn't be an "ad" anywhere, though. You'll just ask the LLM for alternative implementations in plan mode, and it will be selling you one of them during the conversation rather than giving you an unbiased comparison. If you become suspicious it will make sure the pros just slightly outweigh the cons, or mention how well the thing works with something else in your stack, or whatever else a skilled salesperson would do to guide your choice without you realizing.
It's already doing this by telling everyone to use React and Tailwind; it's just that nobody's paying it to do that.
> Then SEO made Google defensive, then websites catered to Google and not users,
Google was created in response to simple proto-SEO techniques (e.g. keyword stuffing) that already ruined Alta Vista.
Google has been combating adversarial information retrieval since inception.
Google's background with that is one of the reasons to expect they will stay on top of the AI race. The recipe is: lots of good/novel data x careful weighting of trust x algorithm.
From my understanding, Anthropic is now hiring a lot of experts in different fields who write content used to post-train models to make these decisions, and those decisions are constantly adjusted by the Anthropic team themselves.
This is why the stacks in the report, and what cc suggests, closely match the latest developer "consensus".
Your suggestion would degrade the user experience and be noticed very quickly.
I guess that's why I'm not seeing anyone trying to build a marketplace for agent skills files. The LLM API will read in any skills you add to the context as plain text, and then the provider can use your content to help populate their own skills files.
That's how Google search worked back when it was at its most useful. They had a large "editorial team" that manually tweaked page ranks on a site-by-site basis.
The core graph reputation based page ranking algorithm lasted for a hot second before people started gaming it. No idea what they do these days.
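For anyone who hasn't seen it, the original graph-reputation idea fits in a few lines. This is a toy power-iteration sketch (the graph, damping factor, and iteration count are all illustrative, not Google's long-evolved implementation), and it also shows why it was gameable: rank flows along whatever links you can manufacture.

```python
# Toy PageRank-style scoring via power iteration.
# Illustrative only -- not Google's actual algorithm.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start uniform
    for _ in range(iterations):
        # everyone gets a small "teleport" share regardless of links
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                    # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:                               # pass rank along outlinks
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy graph: both "b" and "c" link to "a", so "a" scores highest.
graph = {"a": ["b"], "b": ["a"], "c": ["a"]}
ranks = pagerank(graph)
```

The gaming is obvious from the sketch: add a farm of pages like "c" that all link to your target and its score rises, which is exactly the link-farm arms race that followed.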
This is the major point the anti-scraping crowd misses.
If you want your ideas to be appreciated, you should do everything in your power to put those ideas into the brains of LLMs. Like it or not, LLMs are how people interact with the world now.
That's very different; it was more akin to prompt injection or prompt engineering, depending on your perspective, and took a very specific query to make it happen (it required a web fetch).
Influencer seems like an insufficient word? Like, in the glorious agentic future where the coding agents are making their own decisions about what to build and how, you don't even have to persuade a human at all. They never see the options or even know what they are building on. The supply chain is just whatever the LLMs decide it is.
In my last conversation with a Google support person, I was sent a clearly LLM-generated recommendation to switch to a competitor's product. Either they're not doing this, or the support person wasn't using Gemini.
It's standard practice for customer support people to chase away unprofitable customers (in the US; no idea how Google works). Human or LLM, they may simply not want your business.
Probably closer to the Walmart/Amazon model, where it's the arbiter of shelf space and proceeds to create its own alternatives (Great Value, Amazon Basics) once it sees what features people want from their various SaaS products.
how is it a conflict of interest for a google product to have a bias towards using google products?
As users we must accept some accountability. AI is aiming to substitute for humans in the workforce, and a human would get fired for recommending competitor products for use cases their own company is targeting.
If we want a tool that is focused on the best interest of the public users, then it needs to be owned by the public.
"Conflict of interest" isn't exactly the right term. "Conflict of value proposition", perhaps? E.g., you're using Google search based on the proposition that it will effectively find things for you, but that turns out not to be what it actually does.
Advertisers will only pay if AI providers will provide them data on the equivalent of “ad impressions”. And unlabeled/non-evident advertisements are illegal in many (most?) countries.
It doesn't necessarily have to be advertisers paying AI providers. It could be advertisers working to ensure they get recommended by the latest models. The next form of SEO.
There are competing terms currently being decided on by the market at large:
AEO (Answer Engine Optimization) and GEO (Generative Engine Optimization)
Candidly I am working on a startup in this space myself, though we are taking a different angle than most incumbents.
While it's still early days for the space, I sense that many of the original entrants who focus on, essentially, "generate more content, ideally with our paid tools" will run into challenges, since the general population has a pretty negative perception of "AI slop." Doubly so when making purchasing decisions, hence the rise of influencers and the popularity of reviews (though those are also in danger of sloppification).
There's an inevitable GIGO scenario if left unchecked IMO.
> There are competing terms currently being decided on by the market at large: AEO (Answer Engine Optimization) and GEO (Generative Engine Optimization)
It really annoys me that the industry seems to be homing in on the two worst options rather than AIO.
1. They can skip impressions and go right to collect affiliate fees.
2. Yes, the ad has to be labeled or disclosed... but if some agent does it and no one sees it, is it really an ad?
Advertisers pay for ads that don’t have impression data all the time. You can’t count how many people looked at a billboard or listened to your radio ad or paid attention to your televised ad.
Or not even advertising, just conflict of interest. A canary for this would be whether Gemini skews toward building stuff on GCP.