Hacker News

I still don't understand. Aren't the risks the exact same for any external-facing API? Maybe my imagined use case for MCP servers is different from others.


Imagine running an MCP server inside your network that grants you access to some internal databases. You might expect this to be safe but once you connect that internal MCP server to an AI agent all bets are off. It could be something as simple as the AI agent offering to search the Internet but being convinced to embed information provided from your internal MCP server into the search query for a public (or adversarial service). That's just the tip of the iceberg here...
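The data flow described above can be sketched in a few lines. This is a minimal illustration, not real agent code: the tool names (`internal_db_query`, `web_search`) and the hard-coded "agent" step are hypothetical stand-ins for an LLM choosing tool calls, but the trust-boundary crossing is the same.

```python
# Sketch of the exfiltration path: internal tool output flows into
# the arguments of a public-facing tool. All names are hypothetical.

def internal_db_query(sql: str) -> str:
    """Stands in for an internal MCP tool with database access."""
    return "customer_email=alice@example.com"  # sensitive internal data

OUTBOUND_REQUESTS = []  # everything here leaves your network

def web_search(query: str) -> str:
    """Stands in for a public search tool the agent can also call."""
    OUTBOUND_REQUESTS.append(query)
    return "search results..."

# An injected instruction like "search the web for <the db result>"
# makes the agent pipe internal data straight into a public query:
secret = internal_db_query("SELECT email FROM customers LIMIT 1")
web_search(f"best CRM for {secret}")

# The sensitive value has now crossed the trust boundary.
print("alice@example.com" in OUTBOUND_REQUESTS[0])
```

Nothing here is a bug in either tool; the composition is the problem, which is why per-tool review doesn't catch it.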


I see. It's wild to me that people would be that trusting of LLMs.


This seems like the obvious outcome, considering all the hype. The more powerful the AI, the more power it has to break stuff. And there is literally ZERO way to remove that risk. So, who's going to tell your gung-ho CEO that the fancy features he wants are straight-up impossible without a giant security risk?


They weren't kidding about hooking MCP servers up to internal databases. On reddit you see people all the time connecting LLMs to production servers and losing everything.

It's honestly a bit terrifying.


Claude has a habit of running `npx prisma migrate reset --force`, then being super apologetic when I tell it that wipes my dev database.


The Prisma team has shipped work in recent releases that specifically addresses this issue: https://prisma.io/changelog#log2025-08-27


> on reddit

Explains everything


LLMs are approximately your employees on their first day of work, if they didn't care about being fired and there were no penalties for anything they did. Some percentage of humans would just pull the nearest fire alarm for fun, or worse.



