Exactly. Once you start looking at MCP as a protocol to access remote OAuth-protected resources, not an API for building agents, you realize the immense value
Yes, MCP could've been solved differently - eg with an extension to the openapi spec for example, at least from the perspective of REST APIs... But you're misunderstanding the selling point.
The issue is that granting the LLM access to the API needs something more granular than "I don't care, just keep doing whatever you wanna do" on one end, and getting prompted every 2 seconds for the LLM to ask permission to access something on the other.
With MCP, each of these actions is exposed as a tool and can be safely added to the "you may execute this as often as you want" list, and you'll never need to worry that the LLM randomly decides to delete something - because you'll still get a prompt for that, as that hasn't been whitelisted.
This is once again solvable in different ways, and you could argue the current way is actually pretty suboptimal too... For example, I don't really need the LLM to ask for permission to delete something it just created, but MCP only lets me whitelist whole actions, hence still unnecessary security prompts. But MCP adds a different layer: we can use it both to essentially remove the authentication on the API we want the LLM to be able to call, and to greenlight actions for it to execute unattended.
Again, it's not a silver bullet and I'm sure what we'll eventually settle on will be something different - however, as of today, MCP servers provide value to the LLM stack. Even if this value could be provided better in some other way, current alternatives all come with their own trade-offs.
And all of what I wrote ignores the fact that not every MCP server is just for REST APIs. Local permissions need to be solved too. The tool-use model is leaky, but better than nothing.
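To make the whitelisting idea concrete, here's a minimal sketch of how a host application might gate tool calls. All names here (ALWAYS_ALLOW, dispatch, the tool names) are made up for illustration; this is not part of the MCP spec, just the permission pattern described above:

```python
# Hypothetical sketch of per-tool permission gating in an MCP host.
# Read-only tools are whitelisted; anything else triggers a user prompt.
ALWAYS_ALLOW = {"list_issues", "read_file"}  # "execute as often as you want"

def dispatch(tool: str, args: dict, confirm) -> str:
    """Run a tool call, prompting the user unless the tool is whitelisted.

    `confirm` stands in for the interactive permission prompt.
    """
    if tool not in ALWAYS_ALLOW and not confirm(tool, args):
        return "denied"
    return f"executed {tool}"

# Whitelisted tool runs without a prompt; a destructive tool still asks.
print(dispatch("read_file", {"path": "a.txt"}, confirm=lambda t, a: False))
print(dispatch("delete_file", {"path": "a.txt"}, confirm=lambda t, a: False))
```

The point is that the granularity lives at the tool boundary: the whitelist is per-tool, which is exactly why "delete what you just created" still prompts even though a finer-grained policy could have allowed it.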
And for a related rabbit hole where people actually went all the way to the bottom, there's of course the full implementation of Tetris in GoL, which was nerd-sniped into existence by a CodeGolf challenge.
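For anyone who hasn't played with GoL itself: the entire substrate that Tetris build runs on is just this update rule applied to a grid (the actual build layers metapixels and circuitry on top, far beyond this sketch):

```python
# Minimal Game of Life step over a set of live cells.
from collections import Counter

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    # Count how many live neighbors each cell has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A blinker oscillates with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
print(step(step(blinker)) == blinker)  # True
```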
Sometimes you see something that makes you wonder how it is that you get to exist in the same world as people with the drive and intelligence to do truly awesome (in the original sense) things like this. I am proud of myself when the compiler works on the first try.
I think it's awesome that they can do this amazing fun esoteric stuff, but at the same time a small part of me thinks maybe they need to be doing something more meaningful in the real world.
I wonder, what would that be, that thing that is more meaningful?
I would make the case that, zoomed out far enough, nothing at all is meaningful, so you might as well make beautiful things, and this is a delightfully beautiful thing.
We’re running a marketplace of 8,000+ tools called Actors for all kinds of web data extraction and automation use cases (see https://apify.com/store). Just last month, we paid out more than $500k to community developers who publish these Actors on the Apify platform.
The unit economics work for niche tools: scrapers for specific platforms, packaged open-source tools, MCP servers, or API wrappers. Too small to build a SaaS around, but developers earn a few thousand dollars per month as passive income.
We believe there can be many more such Actors. So we're putting $1M in prizes on the table to motivate developers to build new, useful Actors. Our bet is that 10,000 new specific tools can widely expand the capabilities of many AI agents and unlock a lot of value.
This is Jan, the founder of Apify (https://apify.com/) — a full-stack web scraping platform.
With the help of the Python community and feedback from early adopters, after a year of building Crawlee for Python in beta, we are launching Crawlee for Python v1.0.0.
The main features are:
- Unified storage client system: less duplication, better extensibility, and a cleaner developer experience. It also opens the door for the community to build and share their own storage client implementations.
- Adaptive Playwright crawler: makes your crawls faster and cheaper, while still allowing you to reliably handle complex, dynamic websites. In practice, you get the best of both worlds: speed on simple pages and robustness on modern, JavaScript-heavy sites.
- New default HTTP client `ImpitHttpClient` (https://crawlee.dev/python/api/class/ImpitHttpClient), powered by the Impit (https://github.com/apify/impit) library: fewer false positives, more resilient crawls, and less need for complicated workarounds. Impit is also developed as an open-source project by Apify, so you can dive into the internals or contribute improvements yourself. You can also create your own instance, configure it to your needs (e.g., enable HTTP/3 or choose a specific browser profile), and pass it into your crawler.
- Sitemap request loader: easier to start large-scale crawls where sitemaps already provide full coverage of the site
- Robots exclusion standard: not only helps you build ethical crawlers, but can also save time and bandwidth by skipping disallowed or irrelevant pages
- Fingerprinting: each crawler run looks like a real browser on a real device. Using fingerprinting in Crawlee is straightforward: create a fingerprint generator with your desired options and pass it to the crawler.
- OpenTelemetry: monitor real-time dashboards or analyze traces to understand crawler performance. Makes it easier to integrate Crawlee into existing monitoring pipelines.
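As a side note on the robots exclusion point: the underlying check is something you can already illustrate with Python's standard library. Crawlee wires this in automatically, but conceptually the decision looks roughly like this (the robots.txt rules below are made up for the example):

```python
# Sketch of a robots.txt check using only the standard library.
from urllib.robotparser import RobotFileParser

# A made-up robots.txt for illustration.
rules = """
User-agent: *
Disallow: /private/
""".strip().splitlines()

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("*", "https://example.com/private/report"))  # False
print(rp.can_fetch("*", "https://example.com/docs"))            # True
```

Skipping disallowed URLs before they ever hit the request queue is where the time and bandwidth savings mentioned above come from.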
We’re publishing this whitepaper that describes a new concept for building serverless microapps called Actors, which are easy to develop, share, integrate, and build upon. Actors are a reincarnation of the UNIX philosophy for programs running in the cloud.
Our goal is to make Actors an open web standard. We’d love to hear your thoughts.