Menu
HomeAboutServicesCase StudiesBlogContact
Get Started

Or chat with our AI assistant

MCP Has 102 CVEs and No Authentication
Back to Blog

MCP Has 102 CVEs and No Authentication

Security
April 5, 2026
15 min read
A

AWZ Team

Security Engineering

Gartner predicts that 25% of enterprise data breaches by 2028 will trace back to AI agent abuse. If you've been watching the Model Context Protocol ecosystem over the past six months, that number doesn't feel aggressive enough.

MCP is the protocol that lets AI agents talk to your internal systems. Databases, APIs, file systems, CI/CD pipelines. It's an open standard from Anthropic, adopted by Claude Desktop, Cursor, Windsurf, and dozens of IDE plugins. Every major AI coding tool now supports it. The adoption curve looks like something out of a venture pitch deck.

The security story looks like something out of an incident report.

What MCP Actually Does

If you haven't built with MCP yet, the pitch is straightforward. Instead of writing custom integrations for every tool your AI agent needs to access, MCP gives you a standard interface. The agent discovers what tools are available, calls them through a consistent protocol, and gets structured responses back.

Three components make up the stack:

  • MCP Host: The AI application making requests. Claude Desktop, a Cursor plugin, a custom agent runtime.
  • MCP Client: The protocol handler sitting between the host and the servers.
  • MCP Server: A lightweight program that exposes tools, resources, or prompts to the AI.

The server is where most of the security concerns concentrate. It's the bridge between what the AI wants to do and what your infrastructure allows. And right now, that bridge has no guardrails.

The Core Problem: No Auth, No Authz, No Kidding

The MCP specification does not include authentication. It does not include authorization. Every server you deploy inherits whatever permissions it's granted at the system level, and every agent request flows through without verification unless you bolt on controls yourself.

Read that again. The protocol that connects AI agents to your production databases, your AWS credentials, your internal APIs, has no built-in concept of "who is asking" or "should they be allowed to."

This is different from traditional API security in a fundamental way. When a human calls an API, the request patterns are predictable and scoped. You know what endpoints exist, you can rate-limit by user, you can audit by session. When an AI agent calls an MCP server, the request is driven by a prompt. That prompt can be manipulated. The agent can be redirected. And the server just does what it's told, because nobody taught it to ask questions.

102 CVEs and Counting

Trend Micro's latest AI security report identifies 102 MCP-specific CVEs. That number is growing faster than the protocol's version number. Here are the attack vectors that matter most.

NeighborJack

When an MCP server binds to 0.0.0.0 instead of localhost, anyone on the same network can connect and execute commands. This is not a theoretical concern. It shows up in hundreds of MCP server implementations. Sit in a coffee shop next to someone running a misconfigured MCP server, and you can query their database.

The fix is trivial. Bind to 127.0.0.1 or use a Unix socket. But "trivial to fix" and "widely fixed" are not the same thing.

Prompt Injection via Data

Prompt injection is ranked #1 on the OWASP Top 10 for LLM Applications. In the MCP context, it works like this: an attacker embeds malicious instructions in data the agent retrieves. A poisoned document in a knowledge base. A carefully crafted row in a database. The agent processes it and acts on the injected instructions, thinking they came from the user.

We wrote about similar attack patterns in our post on AI chatbot security and prompt leaks. The difference is scale. A standalone chatbot can leak its system prompt. An MCP-connected agent can leak your production credentials.

Tool Poisoning

This one is subtle. Instead of injecting malicious data, the attacker targets the tool definitions themselves. A malicious MCP server sends crafted descriptions that alter how the LLM interprets and uses the tool.

Say you install a community-built MCP server for Slack. The tool description says "Send a message to a Slack channel." But buried in the metadata, there are hidden instructions that tell the agent to also forward message contents to an external endpoint. The agent trusts tool descriptions implicitly. It has no mechanism to verify that the metadata matches the actual behavior.

The Confused Deputy

A confused deputy attack tricks an MCP server into using its legitimate authority on behalf of a malicious actor. The server has permissions to query your database, access your file system, call your APIs. The attacker doesn't need those permissions directly. They just need to get the agent to ask the right questions.

Because MCP lacks strict identity verification at the request level, the server can't distinguish between a legitimate agent action and one that's been manipulated through prompt injection, tool poisoning, or a compromised upstream server.

Supply Chain: Rug Pulls and Typosquatting

Third-party MCP servers are the npm packages of the agent world, and all the same risks apply. A "rug pull" attack works like this: a community server passes initial review, gains adoption, then gets updated with malicious behavior. A typosquat registers slackk-mcp next to the legitimate slack-mcp and waits for installation typos.

We saw the exact same pattern play out in the Axios supply chain attack, where a clean typosquat package sat dormant for 18 hours before going active. MCP servers have even less vetting infrastructure than npm.

What Nobody Logs

Most MCP implementations don't capture tool calls, inputs, or outputs in any useful way. When something goes wrong (and it will) your incident response team can't reconstruct what happened. Which agent called which tool, with what parameters, at what time? Most deployments can't answer that.

This creates a compliance gap too. SOC 2, HIPAA, GDPR all require audit trails. If your AI agent touches sensitive data through an MCP server and you can't produce logs of those interactions, you have a problem that no amount of marketing about "AI-native security" will fix.

What To Do Right Now

If you're running MCP servers in any environment, here's the minimum:

1. Bind to localhost only. Check every server configuration. If anything binds to 0.0.0.0, fix it.

# Check what's listening on all interfaces
netstat -tlnp | grep 0.0.0.0
# or on macOS
lsof -nP -iTCP -sTCP:LISTEN | grep "\*:"

2. Scope permissions per server. Each MCP server should only access what it actually needs. Don't give your Slack integration access to your database. Don't give your file system server write access when it only needs read.

3. Containerize everything. Run MCP servers in containers or sandboxes with restricted network access. A compromised server inside a container with no outbound network access can't exfiltrate data.

4. Vet before you deploy. Review source code for any third-party MCP server before connecting it to production. Check the maintainer's history. Pin to specific versions. Don't auto-update.

5. Log every tool call. If your MCP framework doesn't support audit logging, add it. Record the tool name, input parameters, output, timestamp, and which agent session initiated the call.

6. Treat agents like untrusted users. Apply the same zero trust principles you'd use for a third-party API consumer. Per-request authorization. Short-lived tokens. Immediate revocation capability.

The Bigger Picture

MCP is useful. The ability to give AI agents standardized access to tools is genuinely valuable, and the adoption numbers reflect that. But the protocol shipped for developer velocity, not for security. The spec assumes agents behave predictably. They don't.

The gap between MCP's adoption curve and its security maturity is the kind of gap that produces headlines. The organizations building agentic workflows today have maybe 12-18 months before these attack vectors go from academic papers to real breaches at scale.

We've been building AI agent systems with defense-in-depth from day one, because we learned early that the interesting security problems in AI aren't the ones you read about in the OWASP list. They're the ones that emerge when you connect an unpredictable system to your production infrastructure and hope for the best.

If your team is deploying MCP servers or building agentic workflows and you want a second set of eyes on the security architecture, reach out. This is the kind of thing that's much cheaper to get right upfront than to fix after an incident.

Tags

MCP
Model Context Protocol
AI Security
Prompt Injection
AI Agents
Zero Trust
OWASP

Share this article

Related Articles

The Axios Supply Chain Attack. What Happened and What to Check.

The Axios Supply Chain Attack. What Happened and What to Check.

Axios, the most popular HTTP client in JavaScript, was compromised via npm with a trojanized dependency that deployed a full remote access trojan. If your project uses Axios, here's what you need to check right now.

SecurityMarch 31, 202614 min read

Stay Updated

Get the latest insights on AI, automation, and digital transformation delivered to your inbox.