The practical takeaway is straightforward. The strongest objections to the Model Context Protocol (MCP) are not ideological. They are operational. They center on token cost, security, cloud deployment, and enterprise controls. The weaker objections tend to be broader complaints that MCP is unnecessary or too closely tied to Anthropic. For teams building AI products, the right question is not whether MCP is good or bad in the abstract. It is whether the benefits of a shared AI-facing protocol outweigh the costs and risks for a specific production environment.
| Objection | Assessment |
|---|---|
| Strong objections | |
|
Context-window bloat and token cost Strong objection |
This is one of the most credible criticisms. In the standard tool-calling pattern, the model often has to read tool definitions before it can decide what to use. That can consume a large amount of context before the agent even starts on the user’s task. The practical effect is higher cost, slower responses, and less room for the conversation itself. The important nuance is that this is not unique to MCP. It is also a broader function-calling problem. MCP makes it easier to connect many tools, which makes the problem easier to trigger in practice. |
|
Security risks such as prompt injection and token misuse Strong objection |
This concern is well founded. Prompt injection means instructions hidden in tool descriptions, web content, or tool output can manipulate the model into doing something unsafe. In MCP systems, the risk is not limited to what happens after a tool is called. A malicious server can shape model behavior through metadata and other context that appears earlier in the flow. That makes security a first-order design issue, not a side concern. |
|
Tool poisoning and rogue MCP servers Strong objection |
This deserves to stand apart from generic prompt injection. Tool poisoning means the dangerous content is embedded in the tool metadata itself, such as a misleading description or hidden instruction that the model can see but the user may never inspect. MCP’s flexibility is part of the appeal, but it also turns trust into a supply-chain problem. In practice, the question is not only whether a server works. It is whether the server can be trusted not to shape the model’s behavior in unsafe ways. |
|
Stateful transport makes cloud scaling awkward Strong objection |
This is one of the most important operational critiques. Stateful means the client and server keep session context over time instead of treating each request as independent. That can be useful, but it also makes load balancing and horizontal scaling harder because requests may need to stick to the same server instance. For teams building enterprise infrastructure, this is not cosmetic. It affects how reliably MCP can run in modern cloud environments. |
|
Not enterprise-ready enough for governance, observability, and enterprise auth Strong objection, for now |
This criticism is also credible. Enterprises need audit trails, cost attribution, single sign-on backed identity, and a clean way to understand what an agent requested and what a server actually did. MCP is still catching up to the controls large organizations expect before they make a protocol part of core production infrastructure. That does not mean MCP cannot be used in enterprises today. It means the operational safeguards are still maturing. |
|
MCP scales poorly when large APIs are exposed naively Strong objection |
This is closely related to token bloat, but it matters enough to call out separately. A common mistake is to expose one MCP tool for every endpoint in a large API. That produces a huge surface area for the model to read and reason about. The practical lesson is straightforward. MCP can work at scale, but naive one-tool-per-endpoint design is often the wrong way to use it. |
| Fair but important objections | |
|
Weak composability and too much data routed through the model Fair but important objection |
This is a real architectural limitation in many current MCP setups. Composability here means whether tools can work together efficiently without the model acting as the middleman for every step. In practice, intermediate data often flows from one tool back into the model and then out to another tool, which increases latency, cost, and the chance of failure. That makes complex multi-step workflows more fragile than they need to be. |
|
Agent reliability can get worse as more MCP tools are added Fair but important objection |
More tools do not automatically make an AI system more capable. In many cases they make it harder for the model to choose correctly. As tool libraries grow, common failures include picking the wrong tool and sending the wrong parameters, especially when several tools have similar names or overlapping roles. For teams building agents, this is a practical warning that integration count is not the same thing as capability. |
|
It is overengineered for narrow workflows Mixed objection |
This complaint has merit in the right context. If a team already knows the exact internal API or command-line tool it wants to call, MCP can feel like extra machinery. But MCP is trying to standardize a broader AI-facing layer across many integrations, not just provide another way to make HTTP calls. That makes it more useful in platforms and ecosystems than in small one-off automations. |
|
Why not just use REST or OpenAPI instead Mixed objection |
This is one of the more serious strategic questions. REST and OpenAPI are mature, widely adopted ways to describe and use APIs. For conventional service integration, they often remain the better choice. MCP’s distinct value is that it standardizes an AI-facing capability layer, not just a transport layer. It is useful for a different job, and teams should be careful not to use it where a plain API contract would do. |
| Lower-priority and improving objections | |
|
Install, discovery, and maintenance have been painful Lower-priority objection |
This was a real source of friction, especially early on. Setup often required manual configuration, dependency wrangling, and extra developer tooling. That made many MCP servers feel harder to install and manage than they should have. Much of that pain reflects immature packaging and distribution rather than a fatal flaw in the protocol itself. |
|
Error handling and lifecycle management are still maturing Lower-priority objection |
This criticism should be stated carefully. MCP does have a formal lifecycle along with support for cancellation and progress updates for long-running work. The fairer point is that these mechanisms are still maturing and are not yet as proven or as uniformly implemented as those in older infrastructure standards. For production users, that means more testing burden and more variation across clients and servers than they may expect. |
|
Vendor lock-in and Anthropic control Low and diminishing objection |
This was a stronger criticism early on, when MCP looked closely tied to Claude and Anthropic. It is much weaker now. The protocol has moved toward broader governance, which makes the simple claim that MCP is merely an Anthropic-controlled moat harder to sustain. That does not remove ecosystem politics, but it does make vendor-control a less central concern than security, cost, or operational fit. |
