jancurn's comments | Hacker News

Exactly. Once you start looking at MCP as a protocol to access remote OAuth-protected resources, not an API for building agents, you realize the immense value.

Aside from consistent auth, that's what all APIs have done for decades.

Only takes 2 minutes for an agent to sort out auth on other APIs so the consistent auth piece isn't much of a selling point either.


Yes, MCP could've been solved differently — e.g. with an extension to the OpenAPI spec, at least from the perspective of REST APIs. But you're misunderstanding the selling point.

The issue is that granting the LLM access to the API needs something more granular than "I don't care, just keep doing whatever you wanna do" on one end, or getting prompted every 2 seconds by the LLM asking for permission to access something on the other.

With MCP, each of these actions is exposed as a tool and can be safely added to the "you may execute this as often as you want" list, and you'll never need to worry that the LLM randomly decides to delete something - because you'll still get a prompt for that, as that hasn't been whitelisted.

This is once again solvable in different ways, and you could argue the current way is actually pretty suboptimal too... I don't really need the LLM to ask for permission to delete something it just created, for example, but MCP only lets me whitelist whole actions, hence still unnecessary security prompts. Still, MCP tools add a different layer: we can use them both to essentially remove the authentication on the API we want the LLM to be able to call, and to greenlight actions for it to execute unattended.
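To make the permission model above concrete, here is a minimal sketch in plain Python. The tool names, the `SAFE_TOOLS` allowlist, and the `run_tool` helper are all hypothetical; real MCP clients implement this gating internally, but the shape is the same: allowlisted tools run freely, everything else triggers a prompt.

```python
# Hypothetical sketch of per-tool allowlisting, as described above.
# Tool names and helpers are made up for illustration only.

SAFE_TOOLS = {"list_issues", "read_file"}  # "execute as often as you want"

def run_tool(name: str, args: dict, ask_user) -> str:
    """Run a tool, prompting the user unless it is allowlisted."""
    if name not in SAFE_TOOLS and not ask_user(name, args):
        return f"denied: {name}"
    return f"ran {name} with {args}"

# A destructive tool is never silently executed:
result = run_tool("delete_repo", {"repo": "x"}, ask_user=lambda n, a: False)
print(result)  # -> denied: delete_repo
```

The point is that the gate keys on the tool, not on the whole API connection, which is exactly the granularity the raw API is missing.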

Again, it's not a silver bullet, and I'm sure what we'll eventually settle on will be something different — however, as of today, MCP servers provide value to the LLM stack. Even if this value might eventually be provided better in some other way, the current alternatives all come with their own trade-offs.

And all of what I wrote ignores the fact that not every MCP server is just for REST APIs. Local permissions need to be solved too. The tool-use model is leaky, but better than nothing.


Cool, adding this to my list of MCP CLIs:

  - https://github.com/apify/mcpc
  - https://github.com/chrishayuk/mcp-cli
  - https://github.com/wong2/mcp-cli
  - https://github.com/f/mcptools
  - https://github.com/adhikasp/mcp-client-cli
  - https://github.com/thellimist/clihub
  - https://github.com/EstebanForge/mcp-cli-ent
  - https://github.com/knowsuchagency/mcp2cli
  - https://github.com/philschmid/mcp-cli
  - https://github.com/steipete/mcporter
  - https://github.com/mattzcarey/cloudflare-mcp
  - https://github.com/assimelha/cmcp

Also https://github.com/mcpshim/mcpshim

It turns out everyone is having the same idea.



Precisely, there are about 100 of these, and everyone makes a new one every week.

This is entirely predictable: we get an army of vibe coders, vibe coding up tools to make vibe coding easier.

For simple stuff like this, it's easier to have the agent build something than it is to figure out how to install someone else's.

There is nobody making a new one every week.

lol I didn’t know there were so many but I’m not surprised.

I was inspired by clihub (I credited them) but I also wanted 3 additional things.

1. OpenAPI support

2. Dynamic CLI generation — I don't want to recompile my CLI if the server changes

3. An agent skill
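As a rough illustration of point 2, "dynamic CLI generation" just means building subcommands at runtime from whatever tools the server currently advertises, instead of compiling them in. The manifest shape below is hypothetical; a real client would fetch it from the MCP server's tool listing or from an OpenAPI spec.

```python
# Sketch: build argparse subcommands at runtime from a tool manifest.
# The manifest format here is invented for illustration.
import argparse

def build_cli(tools: list[dict]) -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="mcp")
    sub = parser.add_subparsers(dest="tool", required=True)
    for tool in tools:
        cmd = sub.add_parser(tool["name"], help=tool.get("description", ""))
        for param in tool.get("params", []):
            cmd.add_argument(f"--{param}")
    return parser

tools = [{"name": "search", "description": "Search the web", "params": ["query"]}]
args = build_cli(tools).parse_args(["search", "--query", "hello"])
print(args.tool, args.query)  # -> search hello
```

If the server adds or changes a tool, the next invocation picks it up automatically — no rebuild needed.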


The biggest surprise of this list is that someone grabbed "f" as a GitHub username. Clever.

Which one do you recommend?

Roll your own

Yes!


Thank you! I guess I was fed up with the lack of clients supporting enough MCP capabilities, so I had to build a new one :)


You sent me down a rabbit hole: https://esolangs.org/wiki/APGsembly is mentioned in the book


And for a related rabbit hole where people actually went all the way to the bottom, there's of course the full implementation of Tetris in GoL, which was nerd-sniped by a CodeGolf challenge:

https://codegolf.stackexchange.com/questions/11880/build-a-w...


Sometimes you see something that makes you wonder how it is that you get to exist in the same world as people with the drive and intelligence to do truly awesome (in the original sense) things like this. I am proud of myself when the compiler works on the first try.


I think it's awesome that they can do this amazing fun esoteric stuff, but at the same time a small part of me thinks maybe they need to be doing something more meaningful in the real world.


This small part is what makes broken people. Whoever reads this, go have fun! :)


You know what? I think I will.


I wonder, what would that be, that thing that is more meaningful?

I would make the case that, zoomed out far enough, nothing at all is meaningful, so you might as well make beautiful things, and this is a delightfully beautiful thing.


the only thing that's meaningful is having fun, everything else is a waste of time


Hey, this is Jan, founder of Apify.

We’re running a marketplace of 8,000+ tools called Actors for all kinds of web data extraction and automation use cases (see https://apify.com/store). Just last month, we paid out more than $500k to community developers who publish these Actors on the Apify platform.

The unit economics work for niche tools: scrapers for specific platforms, packaged open-source tools, MCP servers, or API wrappers. Too small to build a SaaS around, but developers earn a few thousand dollars per month in passive income.

We believe there can be many more such Actors. So we're putting $1M in prizes on the table to motivate developers to build new, useful Actors. Our bet is that 10,000 new specific tools can widely expand the capabilities of many AI agents and unlock a lot of value.


Hey HN,

This is Jan, the founder of Apify (https://apify.com/) — a full-stack web scraping platform.

With the help of the Python community and early-adopter feedback, after a year of building Crawlee for Python in beta, we are launching Crawlee for Python v1.0.0.

The main features are:

- Unified storage client system: less duplication, better extensibility, and a cleaner developer experience. It also opens the door for the community to build and share their own storage client implementations.

- Adaptive Playwright crawler: makes your crawls faster and cheaper, while still allowing you to reliably handle complex, dynamic websites. In practice, you get the best of both worlds: speed on simple pages and robustness on modern, JavaScript-heavy sites.

- New default HTTP client `ImpitHttpClient` (https://crawlee.dev/python/api/class/ImpitHttpClient), powered by the Impit library (https://github.com/apify/impit): fewer false positives, more resilient crawls, and less need for complicated workarounds. Impit is also developed as an open-source project by Apify, so you can dive into the internals or contribute improvements yourself. You can also create your own instance, configure it to your needs (e.g., enable HTTP/3 or choose a specific browser profile), and pass it into your crawler.

- Sitemap request loader: makes it easier to start large-scale crawls where sitemaps already provide full coverage of the site.

- Robots exclusion standard support: not only helps you build ethical crawlers, but can also save time and bandwidth by skipping disallowed or irrelevant pages.

- Fingerprinting: each crawler run looks like a real browser on a real device. Using fingerprinting in Crawlee is straightforward: create a fingerprint generator with your desired options and pass it to the crawler.

- OpenTelemetry: monitor real-time dashboards or analyze traces to understand crawler performance. This makes it easier to integrate Crawlee into existing monitoring pipelines.

For details, you can read the announcement blog post: https://crawlee.dev/blog/crawlee-for-python-v1

Our team and I will be happy to answer any questions you might have here.


Hey all,

we’re publishing this whitepaper that describes a new concept for building serverless microapps called Actors, which are easy to develop, share, integrate, and build upon. Actors are a reincarnation of the UNIX philosophy for programs running in the cloud.

Our goal is to make Actors an open web standard. We’d love to hear your thoughts.

Here’s the corresponding GitHub repo: https://github.com/apify/actor-whitepaper


For this use case, you might use this ready-made Actor: https://apify.com/apify/website-content-crawler


For sure, simply store cookies after login and then use them to initiate the crawl.

