Everything a Developer Needs to Know About the Model Context Protocol (MCP)

Head of Product Innovation & Developer Strategy, Neo4j

Think of the Model Context Protocol (MCP) as a Protocol for AI applications: a universal way to plug external data sources, tools, infrastructure, and data APIs as Context into an AI agent’s Model to support a user’s workflows. Anthropic developed MCP to let everyone seamlessly connect large language models (LLMs) with these and other resources.
The Ecosystem Potential of MCP
MCP gives users a wide range of services they can connect to from the comfort of their AI client—be it a chat interface like Claude AI, an IDE like Visual Studio (VS) Code, Cursor, Windsurf, or more integrated GenAI applications. It enables developers to open their services and APIs to many AI clients with the implementation of a single protocol. It allows for two-way communication, tool discovery, and a rich set of primitives that servers and clients can act upon.
So, like any other open, widely adopted standard protocol (think REST, USB-C, or TCP/IP) where both sides agree on a format and an interaction and transport protocol, it creates a massive ecosystem. Anthropic took steps to kickstart that ecosystem because agentic use cases need to access a plethora of services without building individual one-to-one integrations. So, with a “universal” protocol, suppliers and consumers can build an integration once, and it can use everything that adheres to the protocol.
In some ways, it reminds me of the introduction of the HTTP and, subsequently, REST architectures, which opened up the vast space of internet interactions and integrations we have today. It’s perhaps similar to how Language Server Protocol (LSP) made working with programming languages across IDEs so much easier. It also feels a bit like the reinvention of the wheel. 🙂
MCP Capabilities and Challenges
Since its launch in November 2024, MCP has taken the AI world by storm. Developers and companies have built tens of thousands of MCP servers, integrated the protocol into more clients, and launched marketplaces and tools around it.
It’s still in its early days, and many challenges need to be addressed—especially around security, observability, and discovery—before MCP can be integrated into dependable AI systems. But many of the solutions are already on the roadmap. The most recent specification update in spring 2025 added OAuth support, additional tool annotations, JSON-RPC batching, and the streamable HTTP server transport.
MCP addresses many-to-many integration challenges, improves interoperability, and allows users to pick the best client, LLM, and MCP servers for their needs.
It offers a great developer experience for rapid prototyping, quick workflow orchestration, and context-aware applications, while traditional API integrations will remain dominant for secure and performance-critical use cases for some time.
Impact on GenAI and Agent Systems
Of course, this benefits end-user AI assistants and larger GenAI systems. Any business system needs to integrate with internal and external tools. If a secure, trusted, open protocol reduces the custom integration effort for application developers, they can focus on the actual value provided by their system.
The integration goes both ways: MCP tools will be straightforward to integrate into agentic frameworks, and existing agentic AI tools will be exposed through MCP servers. We’re already seeing this from OpenAI, Google, AWS, crewAI, LangGraph, and others. MCP will reduce the effort for cross-system integration, from federating across data sources to taking information from one system, reasoning about it with another, and taking action in a third. Both RAG and GraphRAG architectures integrate well into this setup, as they can serve as full agentic MCP servers.
In 2025, the Year of Agents, it’s telling that MCP goes beyond pure retrieval and data use cases. It also allows the user to take action, not just on a business or personal level but also for infrastructure. Integration into AI-powered IDEs and developer tools makes the developer workflow more productive, as AI developer companions can now help investigate and act upon services like CloudFlare, GitHub, Supabase, Stripe, Neo4j, and others—without leaving the IDE.
MCP also goes beyond exposing tools as API access points—it offers concepts like prompts, sampling (back channels to the LLM), and resources.
Get Started With MCP
As a user, you can pick MCP servers that would make your life easier and add them to your AI client of choice. Here, we’ll use Cursor and Claude AI. Now, you often have to provide raw credentials to these kinds of tools, so make sure you protect the configuration file with good security measures.
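To make this concrete, here is a sketch of what such a configuration typically looks like. The `mcpServers` map of command, arguments, and environment variables is the shape Claude Desktop and Cursor use (e.g., in `claude_desktop_config.json`), but the server name, package, database URL, and credentials below are illustrative placeholders:

```python
import json

# Hypothetical client configuration for a locally spawned MCP server.
# Server name, package, URL, and credentials are placeholders; the exact
# file name and location differ per client.
config = {
    "mcpServers": {
        "movies-neo4j": {
            "command": "uvx",
            "args": ["mcp-neo4j-cypher", "--db-url", "neo4j+s://demo.example.com"],
            "env": {
                "NEO4J_USERNAME": "demo-user",
                # Raw credentials live in this file, so keep it protected!
                "NEO4J_PASSWORD": "demo-password",
            },
        }
    }
}

print(json.dumps(config, indent=2))
```

The client reads this file on startup, spawns each configured server as a subprocess, and wires it into the conversation.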
After you add the service, it provides a number of tools the AI agent can use. From your conversation, it generates the right information to pass as parameters to invoke the tools. The outputs of the tool calls are then parsed and incorporated into the conversation. They can be used to answer your questions, render charts, or take further action with other tools, like investigating details.
Here, we add a Neo4j server that connects to a public demo database of movies, actors, genres, and ratings and allows you to read the information.

After successful connection, I can see the available tools with the hammer icon and the connected servers in the double-plug icon.


Now, we can ask questions, and Claude will fetch the relevant information from the database. It can also process the data further—to create charts, for example.


Provisioning Neo4j Instances With Cursor
Here’s an example of using a Neo4j Aura MCP server calling the Aura Provisioning API to list, create, and manage database instances from Cursor AI. You can see the tools that the server offers after the connection is successfully established.

Now we can use it to investigate the list of our instances and create a new one.


MCP Deep Dive: Core Components and Concepts
The protocol specification has a number of interacting components:
- An MCP host, the LLM application (like Claude Desktop or IDEs) that interacts with the user and initiates connections
- An MCP client that maintains one-to-one connections with a server inside the host application
- An MCP server that provides clients with context, tools, and prompts
The MCP host uses its configuration to start a client for each server, which then connects and retrieves the server’s capabilities. As the user converses, the LLM picks relevant tools and resources, extracts parameters from the conversation, and instructs the MCP host to call one or more tools (e.g., for detail drill-down or follow-up) through the MCP client.
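Under the hood, client and server exchange JSON-RPC 2.0 messages. A sketch of the sequence for the flow just described, written out as Python dicts (the method names follow the MCP specification; the tool name and Cypher query are illustrative):

```python
# 1. The client opens the session and exchanges capabilities.
initialize = {
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "clientInfo": {"name": "example-client", "version": "0.1"},
        "capabilities": {},
    },
}

# 2. The client discovers what the server offers.
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# 3. After the LLM picks a tool and extracts parameters from the
#    conversation, the host invokes it through the client.
call_tool = {
    "jsonrpc": "2.0", "id": 3, "method": "tools/call",
    "params": {
        "name": "read-neo4j-cypher",
        "arguments": {"query": "MATCH (m:Movie) RETURN m.title LIMIT 5"},
    },
}
```

The server’s responses carry matching `id`s, so the client can correlate results with requests even over a streaming transport.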

MCP Host
The MCP host is the application or tool (like Claude Desktop or Windsurf) that the user interacts with. It enables configured MCP servers and creates and manages MCP clients, and in most cases, it also handles the interaction with the LLM to drive MCP tool invocation and result processing.
MCP Client
The MCP client connects to one (or more) MCP servers, either by starting them locally (STDIO/HTTP) or by using them remotely (HTTP). After authentication, it retrieves available tools, prompts, and resources and makes them available to the MCP host for appropriate selection in the user interaction.
MCP Server
The MCP server exposes a list of tools through the protocol that offer read and write access to an API, service, database, or other functionality. The server can also provide canned prompts, resources, and workflows that are tailored to use its services, which the agent can then make use of in the conversation. The server runs a bi-directional persistent protocol (similar to websockets) either via STDIO locally or HTTP/SSE (server-sent events) remotely. A server can be a client again to another set of servers, so MCP composes nicely (e.g., a research server using a number of retrieval and analysis tools).
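To show the mechanics without any dependencies, here is a deliberately minimal sketch of the server side: a dispatcher that answers `tools/list` and `tools/call` requests, with a one-message-per-line STDIO loop. Real servers should use the official SDKs, which also handle initialization, capability negotiation, and proper error framing; the `echo` tool here is a toy example:

```python
import json
import sys

# A single toy tool declaration, shaped like a tools/list entry.
TOOLS = [{
    "name": "echo",
    "description": "Echo the input text back",
    "inputSchema": {
        "type": "object",
        "properties": {"text": {"type": "string"}},
        "required": ["text"],
    },
}]

def handle(request: dict) -> dict:
    """Dispatch a single JSON-RPC request to a JSON-RPC response."""
    method = request["method"]
    if method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call":
        args = request["params"]["arguments"]
        result = {"content": [{"type": "text", "text": args["text"]}]}
    else:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": f"Unknown method {method}"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

def serve_stdio() -> None:
    # STDIO transport, simplified: one JSON-RPC message per stdin line,
    # responses written to stdout.
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(handle(json.loads(line))), flush=True)
```

Because the protocol is this simple at its core, a server can itself act as a client to other servers, which is what makes the composition mentioned above work.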
There are already thousands of MCP servers available. Here are a few examples:
- Sequential Thinking Server
- Brave Search
- Supabase Database Management
- Grafana Management
- Microsoft Playwright
- Stripe Customer & Payment Management
MCP Server and Client Features
- Tools (AI model–controlled)
  - Structured “functions” to retrieve data or take action, with well-documented names, parameters, and responses
- Resources (application-controlled)
  - Structured read-only information endpoints for websites, files, database views, and API endpoints, which can be static or dynamic (templated)
- Prompts (user-controlled)
  - Predefined prompt template workflows with dynamic arguments (from user inputs), context injection from resources, and multi-step interaction chaining; like slash commands, they expand a short input into a larger instruction
- Sampling (server-controlled)
  - The MCP server can request LLM inference from the client and specify the LLM, system prompt, hints, temperature, max tokens, etc. (usable as a back channel for composability)
- Pings (client-controlled)
  - Server health checks, capabilities, and {tools,prompts,resources}/list endpoints, automatic reconnection strategies
- Roots
  - Discovery (in client config) (e.g., for MCP directories, marketplaces, or aggregation servers)
- Notifications (server-controlled)
  - A server can send notifications to MCP clients (e.g., on change of resources)
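The prompt primitive is the easiest of these to illustrate: a server declares a named template with typed arguments, and the client expands a short slash-command-style input into a full instruction. The prompt name, arguments, and template text below are invented for illustration:

```python
from string import Template

# Hypothetical prompt declaration, analogous to what a prompts/list
# response would describe. Names and template text are made up.
PROMPTS = {
    "movie-report": {
        "description": "Summarize a movie with ratings and similar titles",
        "arguments": [{"name": "title", "required": True}],
        "template": Template(
            "Look up the movie '$title', summarize its plot, list its genres "
            "and average rating, and suggest three similar movies."
        ),
    }
}

def render_prompt(name: str, **args: str) -> str:
    """Expand a short slash-command style input into a full instruction."""
    return PROMPTS[name]["template"].substitute(**args)
```

So a user typing something like `/movie-report The Matrix` would hand the LLM the full rendered instruction, with context injection and multi-step chaining layered on top of the same idea.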

Security
Security is still a bit of a weak point with MCP on several levels — for example, authorization, access control, credential management, and safeguarding from LLM injection vulnerabilities.
Local servers that only communicate with an MCP host’s client directly or via a local HTTP interface are mostly shielded from public access. But public MCP servers need to be properly secured. The specification update from March 2025 adds OAuth2 integration with JWT tokens, so public HTTPS servers can authenticate on a user’s behalf.
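For a sense of what a server receives, here is a sketch that decodes the claims segment of a JWT bearer token. This is illustration only: JWTs are three base64url segments (header, claims, signature), and a real MCP server must verify the signature against the OAuth provider’s keys before trusting any claim; the claim names below are placeholders:

```python
import base64
import json

def _b64url(data: dict) -> str:
    """Base64url-encode a JSON object without padding, as JWTs do."""
    return base64.urlsafe_b64encode(json.dumps(data).encode()).rstrip(b"=").decode()

def decode_jwt_claims(token: str) -> dict:
    """Decode the (unverified!) claims segment of a JWT.

    Illustration only: a real server must verify the signature against
    the OAuth provider's keys before trusting any claim.
    """
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# A toy, unsigned token just to demonstrate the shape: header.claims.signature
toy_token = f"{_b64url({'alg': 'none'})}.{_b64url({'sub': 'user-123', 'scope': 'mcp:tools'})}."
```

In practice, the server would reject any request whose token fails signature verification or lacks the required scope.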
Currently, observability and monitoring integration for MCP is limited (TinyBird offers an early solution). The server builder is responsible for scalable deployments, access control, and request limiting for public services.
There are plans to improve access control to tools through manifests, RBAC, and providing certification and verification through a registry.
Development and Testing
Most tools today use the Anthropic MCP SDKs for Python or JavaScript, but with the simple protocol, servers can be built in other languages, too.
There are several projects that help kickstart MCP server or client development by generating the necessary scaffolding. Cline is an AI-driven developer tool that directly takes you from prompt to MCP server; others are from Cloudflare, Mintlify, Stainless, Speakeasy, MCP Market, and Playbook. Stripe allows you to generate MCP servers from REST APIs.
Here’s an example of what the Neo4j Database MCP Server looks like. Besides connection management for the database, it consists mostly of two parts: the declaration of available tools, with descriptions and parameter details; and the tool execution, which takes the tool selection and parameter input from the client, validates the parameters, and executes the tool. The execution then returns either the results or a helpful error message wrapped as a JSON payload.
In our case, we have three tools:
- get_schema – Retrieve the graph database schema
- read_cypher – Execute a read-only query to retrieve data from the graph
- write_cypher – Execute a write statement to create or update data in the graph
The server provides both a tool definition listing:
```python
async def main(neo4j_url: str, neo4j_username: str, neo4j_password: str):
    logger.info(f"Connecting to neo4j MCP Server with DB URL: {neo4j_url}")
    db = neo4jDatabase(neo4j_url, neo4j_username, neo4j_password)
    server = Server("neo4j-manager")

    # Register handlers
    logger.debug("Registering handlers")

    @server.list_tools()
    async def handle_list_tools() -> list[types.Tool]:
        """List available tools"""
        return [
            types.Tool(
                name="read-neo4j-cypher",
                description="Execute a Cypher query on the neo4j database",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "query": {"type": "string", "description": "Cypher read query to execute"},
                    },
                    "required": ["query"],
                },
            ),
            ...
```
And provides the actual tool execution, which gets the tool name and parameters from the MCP client, executes the tool, and returns results (or errors) to the client:
```python
@server.call_tool()
async def handle_call_tool(
    name: str, arguments: dict[str, Any] | None
) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
    """Handle tool execution requests"""
    try:
        if name == "read-neo4j-cypher":
            if is_write_query(arguments["query"]):
                raise ValueError("Only MATCH queries are allowed for read-query")
            results = db._execute_query(arguments["query"])
            return [types.TextContent(type="text", text=str(results))]
        ...
```
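The `is_write_query` guard used above isn’t shown in the excerpt. A simple version could flag Cypher clauses that mutate the graph; this is a heuristic sketch of such a helper, not the actual implementation and not a full Cypher parser:

```python
import re

# Cypher clauses that modify data; a heuristic guard, not a parser,
# so clever queries could in principle evade it.
WRITE_CLAUSES = re.compile(
    r"\b(CREATE|MERGE|DELETE|DETACH|SET|REMOVE|DROP)\b", re.IGNORECASE
)

def is_write_query(query: str) -> bool:
    """Return True if the Cypher query appears to modify data."""
    return bool(WRITE_CLAUSES.search(query))
```

Keeping this check server-side means even a confused or prompt-injected LLM can’t sneak a write through the read-only tool.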
After building your tool, you can run and test it locally with one of the MCP hosts or with MCP Inspector.
To test your local server, you can use MCP Inspector from Anthropic, which launches a web application that lets you connect and test your server, and list and run all the tools and other capabilities directly on your machine. This is what it looks like for our server:

Your MCP server is also validated against the schema spec, and violations are reported.
MCP Discovery and Registries
A particular challenge, especially for non-developer users, is the discovery of trustworthy MCP servers; there is no equivalent of Docker Hub yet. The current MCP roadmap includes an official registry with versioning, download, discovery, checksums, and certification. But given that existing MCP servers are implemented in different languages, it will need to cover all kinds of packaging and deployment options.
Meanwhile, several services provide discovery and directory services around MCP, hosting thousands of servers:
- MCP Servers GitHub (500)
- Smithery (3000)
- Glama.ai (3400)
- mcp.so (4800)
- Cursor Directory (1800)
- OpenTools (170)
- mcp.run (114 MCP Servlets)

The Future of MCP
When MCP launched in November 2024, the community was split between enthusiasts who saw its potential and counted on gradual improvements and skeptics who felt it was overcomplicated and insufficient. Now, tens of thousands of server implementations later, with the other LLM vendors coming on board, MCP is here to stay.
With an open standard and specification process and contributions from many players, MCP should evolve into a useful and stable protocol. After all, it makes life easier for everyone involved and brings efficiencies to users and vendors alike.
Quite a few critical developments in MCP are under way, mostly in the areas of security, operations, tooling, and enterprise readiness (federation), but also in model interactions (e.g., streaming and multi-modal). Many data and systems vendors are adding MCP into their offerings to allow for seamless integration, and even model vendors competing with Anthropic—chiefly OpenAI, Google Cloud Platform, AWS, and Microsoft—felt compelled to add support for the protocol.
It’s unlikely that all proprietary agent and tool interactions will be subsumed by MCP, but it would be beneficial, as vendors don’t want to implement and maintain a plethora of different APIs for their services.
The H1 2025 roadmap for MCP looks promising:
- Remote connectivity — Securing connections to MCP servers with OAuth 2.0, service discovery, and support for stateless operations
- Developer resources — Creating reference client implementations and streamlining the protocol feature proposal process
- Deployment infrastructure — Developing standardized packaging formats, simplified installation tools, server sandboxing for security, and a centralized server registry
- Agent capabilities — Enhancing support for hierarchical agent systems, interactive user workflows, and real-time streaming of results from long-running operations
- Ecosystem expansion — Fostering community-led standards development with equal participation across AI providers, supporting additional modalities beyond text, and pursuing formal standardization
Neo4j built the first data-level tool integration with MCP in early December 2024, and we’ve added more infrastructure and knowledge graph memory capabilities to our MCP servers. We’re interested to see how you use these capabilities and to hear your feedback.
IBM wants to be part of the game as well, but it started its own independent initiative called Agent Context Protocol (ACP). ACP focuses on agent-to-agent communication, which IBM intends to standardize. Within ACP, MCP is recognized only as an interim solution: its original design focuses on context sharing, making it an imperfect fit for ACP’s emerging requirements around agent communication. ACP plans to diverge from MCP during its alpha phase, establishing a standalone standard specifically optimized for robust agent interactions.
Recent positive reviews of MCP came from Latent Space, Andreessen Horowitz (a16z), and ThursdAI, while LangChain‘s discussion was a bit more nuanced. Nuno, the maintainer of LangGraph, states that for MCP to be successful, it needs to address protocol complexity and overarching ambitions, ease of implementation, a scalable stateless architecture, and quality control of servers.
For Andrej Karpathy, it’s all a bit too much hype:
please make it stop
— Andrej Karpathy (@karpathy) March 12, 2025
But Sundar Pichai on the other hand wants to know:
To MCP or not to MCP, that's the question. Lmk in comments
— Sundar Pichai (@sundarpichai) March 30, 2025
Resources
There’s no shortage of resources on MCP, starting with this good explanation from Norah Sakal. The protocol documentation on the MCP site is helpful, and this deep-dive MCP workshop at the AI Engineers Summit, by Mahesh Murag of Anthropic, is also useful. Cloudflare, Weights & Biases, and Cursor have great documentation as well.
And if you’d rather get a quick 8-minute video introduction, Fireship has you covered.
For hands-on exploration, start with the Cloudflare MCP Workshop or experiment with Neo4j’s open source MCP servers. The protocol’s success will hinge on community adoption, but current traction suggests it’s becoming the de-facto standard for next-gen agent and tool interactions.