If you've managed to avoid the noise of social media influencers and AI hype cycles, you might have missed the introduction of Model Context Protocol (MCP). On the surface, MCP looks like yet another AI gimmick poised to be this week’s “next big thing.” Initially, I overlooked it, lumping it into the pile of influencer-driven acronyms that flood our feeds daily.
But I was wrong. This is different. I’m a believer now.
MCP promises to be REST for LLMs. Imagine pointing your Claude, ChatGPT, or Gemini directly at Gmail’s (hypothetical) MCP endpoint and asking it to organize your inbox today—no custom Python wrappers, no complicated API integrations, just simple, direct communication between LLMs and servers. When you expand this to Slack, WhatsApp, banking, Uber, Twilio, GitHub, and beyond, everything becomes interconnected and actionable by language models.
When I finally took the time to dive deeply into the spec and its implementation, I immediately saw MCP’s potential to fundamentally transform how LLMs interact with external applications. Imagine a world where every application exposes a standard API that any LLM can seamlessly use—this is exactly what MCP could enable. But the current spec and implementation severely limit its readiness for production. For example, the spec lacked any form of authentication until just last week. Even more surprising, the main MCP API hosts — including Claude for Desktop — communicate exclusively by spawning servers as local processes and interacting over stdio. They do not support remote hosts. In fact, at the time of writing, all of the official MCP servers listed in the repo work exclusively over stdio.
The "2024-11-05" protocol revision, the only one supported by Anthropic's official SDKs at the time of writing, does not contain any authentication:
Outside of specific use cases, such as the MCP local filesystem server, we can only build toy servers over HTTP with the official spec. To let the broader ecosystem (Claude for Desktop, Cursor, Claude Code, ChatGPT, etc.) interact with your API, you would have to expose your MCP server insecurely on the network.
As an industry, we have over 30 years of lessons and standards for building authenticated APIs over HTTP, but the spec seems willing to throw that away and start from scratch. It confusingly attempts to retrofit HTTP semantics, like GET and POST, on top of JSON-RPC messages, causing unnecessary complexity and redundancy.
For example, MCP has the concept of Resources. To get one, you send a POST request with the URI as a parameter in the JSON message.
```
POST /messages/?session_id=X

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "resources/read",
  "params": {
    "uri": "file:///project/src/main.rs"
  }
}
```
The response to this POST comes back not in the body, but over a separate long-lived network connection via HTTP Server-Sent Events (SSE). It's your job to correlate requests and responses across the two connections using the JSON-RPC id.
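To make that concrete, here's a rough sketch of what a client has to do: open the SSE stream, POST the request, then scan the stream for the message whose id matches. This is not the official SDK, and the base URL and session handling are hypothetical placeholders.

```python
import json
import httpx

BASE = "http://localhost:8000"  # hypothetical MCP server address

def read_resource(uri: str, session_id: str) -> dict:
    request_id = 2
    with httpx.Client(timeout=None) as client:
        # Connection 1: the long-lived SSE stream that carries responses.
        with client.stream("GET", f"{BASE}/sse") as events:
            # Connection 2: POST the JSON-RPC request, echoing the session id.
            client.post(
                f"{BASE}/messages/",
                params={"session_id": session_id},
                json={
                    "jsonrpc": "2.0",
                    "id": request_id,
                    "method": "resources/read",
                    "params": {"uri": uri},
                },
            )
            # Scan the SSE stream for the message whose id matches our request.
            for line in events.iter_lines():
                if line.startswith("data:"):
                    message = json.loads(line[len("data:"):].strip())
                    if message.get("id") == request_id:
                        return message["result"]
    raise RuntimeError("stream closed before a matching response arrived")
```

Two connections, manual correlation, and a dangling failure mode if the stream drops mid-request. All of this bookkeeping exists solely to simulate request/response over a transport that already has request/response built in.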
A spec embracing standard HTTP semantics directly — or adopting gRPC for RPC use cases — would immediately simplify client and server implementations while aligning with decades of best practices.
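For contrast, a design built on plain HTTP semantics could make the same read a single, self-contained exchange. This is a hypothetical endpoint, not part of any spec:

```
GET /resources?uri=file:///project/src/main.rs HTTP/1.1
Host: mcp.example.com
Authorization: Bearer <token>

HTTP/1.1 200 OK
Content-Type: application/json

{"contents": [{"uri": "file:///project/src/main.rs", "text": "..."}]}
```

One connection, one request, one response; standard bearer auth, caching, and load balancing all work the way they already do everywhere else on the web.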
From their decision to exclude auth to their choice of JSON-RPC, the spec authors have focused primarily on stdio, leaving HTTP as a second-class transport. The latest spec even states servers "SHOULD support stdio." This seems completely wrong-headed for a spec defining what essentially amounts to an RPC protocol.
Most examples provided require "servers" to be processes spawned by the client and interacted with over stdio. Claude for Desktop can't even connect to servers over HTTP; we had to write a proxy that translates stdio to HTTP.
In the current MCP spec, every server that wants to support authentication is required to operate as a fully-fledged Identity Provider (IdP). Rather than allowing servers to simply validate existing tokens — such as Okta-issued JWTs or tokens from other trusted IdPs — the spec mandates that each MCP server must issue, map, revoke, and validate its own tokens.
The requirement for each MCP server to be an IdP is a massive and unnecessary burden for teams who just want to expose a simple REST-like API for LLMs. Instead of relying on the well-understood model where authentication happens client-side via a standard IdP (Okta, Google SSO, etc.) and servers simply validate those tokens, MCP pushes developers into building and maintaining their own mini-auth system.
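Here's a minimal sketch of the model we'd prefer, where the server simply validates JWTs issued by an existing IdP. The JWKS URL and audience below are hypothetical placeholders, and this uses plain PyJWT rather than anything MCP-specific:

```python
# Validate an IdP-issued JWT server-side, rather than acting as an IdP.
import jwt  # pip install pyjwt[crypto]

JWKS_URL = "https://example.okta.com/oauth2/default/v1/keys"  # hypothetical
AUDIENCE = "api://my-mcp-server"  # hypothetical

jwks_client = jwt.PyJWKClient(JWKS_URL)

def validate_bearer_token(token: str) -> dict:
    """Verify the token's signature and claims; return the decoded claims.

    Raises jwt.InvalidTokenError on any failure, which the server can
    turn into a 401 response.
    """
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
    )
```

In this model, that is essentially the entire server-side auth story: no token issuance, no mapping tables, no revocation infrastructure of your own.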
To make matters worse, the spec attempts to handle external identity providers by forcing servers to act as a proxy, bridging third-party tokens into the MCP token system. This bridging requires token mapping and state management, and introduces additional security risk, ultimately increasing operational complexity, all just to accommodate users already verified by systems you trust.
There’s already a lengthy GitHub thread outlining these exact concerns, predating the latest spec release. Yet, the issue remains unresolved, and all official SDKs have begun implementing this awkward and heavyweight flow. Unfortunately, we too will have to implement it in MCPEngine — even though we know there's a simpler, more robust path.
MCP is already gaining real traction. Dozens of companies have released stdio-based MCP servers. OpenAI has officially announced native MCP support, and just yesterday, Google’s CEO publicly hinted at their plans to follow suit.
As the founder of Featureform, I regularly speak with data teams across industries, from fintech to e-commerce, healthcare, and beyond. Almost every strategic executive I talk to is actively exploring MCP or asking about it. The interest is undeniable.
However, once these teams attempt to go beyond local prototypes and build real, production-grade MCP servers, they will run into the same blockers we did. The current spec — particularly over HTTP — simply isn’t built for serious enterprise use cases.
To unlock MCP’s full potential for real-world applications, we're releasing MCPEngine, an open-source (MIT-licensed) project designed to bring security, scalability, and modern API practices to the MCP ecosystem.
MCPEngine is composed of two parts: the MCPEngine framework itself, for building HTTP-native MCP servers, and MCPEngine-Proxy, which bridges stdio-only hosts to remote HTTP endpoints. When a client calls a protected tool without valid credentials, the server responds with a 401 Unauthorized and initiates a familiar OAuth flow. Claude (or any other client) simply prints a login link. The proxy handles the rest transparently, without requiring any custom logic from hosts like Claude for Desktop.
To demonstrate MCPEngine in action, we've also built Smack, a simple Slack-like application. It runs as multiple containers, backed by Postgres, and fronted by MCPEngine-Proxy, just like a real production service. Smack exposes two simple tools: read_channel and write_to_channel. You deploy the MCPEngine-Proxy like any other MCP stdio server, passing in a single argument for the Smack endpoint, with no need to adapt to the convoluted MCP auth extension or wait for everything to support it.
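For example, registering the proxy in Claude for Desktop's claude_desktop_config.json might look something like this (the mcpengine-proxy command and --host flag are illustrative placeholders, not the exact invocation):

```json
{
  "mcpServers": {
    "smack": {
      "command": "mcpengine-proxy",
      "args": ["--host=https://smack.example.com"]
    }
  }
}
```

To the host, the proxy is just another stdio server; everything past it is ordinary, authenticated HTTP.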
MCPEngine is available today as an early beta, and we're inviting the community to put it to the test. Whether you're building internal tools, experimenting with LLM workflows, or designing production-grade APIs, MCPEngine is ready to help you deploy secure, HTTP-native MCP servers without friction.
In the coming weeks, we'll be sharing more about what we're focused on next.
If you try MCPEngine, we’d love to hear from you — whether it’s feedback, ideas, or requests. We’re also actively looking for partners who want to collaborate, stress-test, or shape the next generation of MCP tooling. Reach out directly, open an issue, or just fork it and show us what you build. You can also join our Slack community, where we’ve established a dedicated mcp-engine channel for feedback and discussion.
Our goal is simple: make MCPEngine the easiest and most production-ready way to bring real-world APIs into the MCP ecosystem.
As a technical founder, I have deep respect for the engineers at Anthropic and the broader MCP community. Innovating within such a fast-moving AI landscape is no easy task. Still, we believe strongly that MCP’s current local-first design limits its potential. Real-world production use cases demand robust HTTP support and standardized authentication. We're committed to making that vision real — and invite the community to join us.