Neo4j Live: MCP for LLM Agents, APIs & Graphs
As AI agents become more powerful, their ability to interact with tools, APIs and data sources must keep pace. Enter the Model Context Protocol (MCP), an emerging open standard that enables seamless integration between LLMs and external systems. In a recent Neo4j Live session, Zachary Blumenfeld and Michael Hunger walked through what MCP is, how it works, and how you can use it with Neo4j to build more intelligent, responsive applications.
Whether you’re building RAG systems, agentic apps or DevOps automations, MCP is becoming an essential part of the stack. This post unpacks the highlights from the livestream: what MCP is, why it matters, and how you can start using it today.
What Is MCP and Why Does It Matter?
MCP stands for Model Context Protocol. It’s a simple but powerful idea: standardise how LLMs access tools, APIs and data. Think of it as the USB-C of AI tooling.
Instead of building one-off integrations between each LLM and each API, MCP defines a client-server architecture for connecting tools to LLMs. This modular approach unlocks discoverability, composability and reuse – exactly what developers need to scale agent-based systems.
Why Now?
While agent frameworks have exploded in popularity, the tooling layer has lagged behind. Developers faced many-to-many integration challenges, tightly coupled components and limited reuse. MCP addresses all of this by offering:
- A standardised interface for tool interaction
- Reusable, discoverable components via registries
- Support for both hosted and local servers
- Composability across IDEs, assistants and orchestration frameworks
Architecture: How MCP Works
MCP uses a simple but elegant client-server model:
- The MCP host is your development environment or AI interface (like Claude, VS Code or Cursor)
- The MCP client runs inside the host and connects to MCP servers
- Each MCP server exposes one or more tools, data resources or prompts via a standard interface
MCP servers can run locally (communicating over standard IO) or remotely (over streamable HTTP or server-sent events). Servers expose functionality such as querying databases, calling APIs or executing scripts.
Once a server is connected, your AI assistant can access it like a plugin – except it’s standardised, composable and reusable across tools.
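Under the hood, MCP messages follow JSON-RPC 2.0: the client discovers tools with a `tools/list` request and invokes one with `tools/call`. As a rough sketch (the exact envelope is defined by the MCP specification, so check it before relying on field names), a tool invocation looks like this:

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build the JSON-RPC 2.0 message an MCP client sends to invoke a tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# The same envelope works whether the transport is stdio or streamable HTTP.
msg = make_tool_call(1, "get_schema", {})
print(msg)
```

Because every server speaks this same envelope, a host like Claude or Cursor can call a Neo4j tool and a filesystem tool through identical plumbing.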
Demo Highlights: Neo4j as an MCP Data Service
In the session, Michael Hunger showcased how to use Neo4j as an MCP server. Here’s what that looks like in practice:
Setup
Michael built an MCP server in Python using the fastmcp library. It exposed a few tools:
- get_schema: fetch the database schema
- execute_read: run Cypher read queries
- execute_write: run Cypher write queries
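The shape of those three tools can be sketched as plain Python functions. This is an illustrative stub, not the code from the demo: the real server used fastmcp and (presumably) the official neo4j driver, while here the driver is faked so the tool boundaries are visible:

```python
class FakeDriver:
    """Stand-in for a Neo4j driver session: it records every query it
    receives and returns canned rows for schema calls."""
    def __init__(self):
        self.queries = []

    def run(self, cypher, **params):
        self.queries.append((cypher, params))
        if "db.schema" in cypher:
            return [{"label": "Product"}, {"label": "Order"}]
        return []

db = FakeDriver()

def get_schema():
    """Fetch the database schema so the LLM can orient itself."""
    return db.run("CALL db.schema.visualization()")

def execute_read(query: str, **params):
    """Run a read-only Cypher query and return the rows."""
    return db.run(query, **params)

def execute_write(query: str, **params):
    """Run a Cypher write query (CREATE/MERGE/DELETE)."""
    return db.run(query, **params)
```

With fastmcp, each function would additionally carry a tool decorator (e.g. `@mcp.tool`) so the server advertises it over `tools/list`; the function bodies stay this simple.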
Once the server was running, it was connected to tools like Claude and Cursor via MCP, enabling LLMs to issue Cypher queries, retrieve data, and generate insights.
Example Use Cases
- Schema inspection: Claude queried the schema to understand available categories in a retail dataset
- Data aggregation: Using Cypher, Claude retrieved order volumes per product category and returned the results as a table
- Visualisation generation: Claude generated React chart code to visualise data – a seamless transition from data to UI
- GraphQL scaffolding: With one prompt, Claude generated GraphQL type definitions for the current database schema
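The aggregation step above can be sketched concretely. The Cypher assumes a Northwind-style retail schema (Order, Product and Category nodes linked by ORDERS and PART_OF relationships) – an assumption, not the dataset from the demo – and the helper shows the row-to-table shaping an assistant performs before replying:

```python
# Hypothetical Cypher for "order volume per product category"
# (Northwind-style schema assumed):
cypher = """
MATCH (:Order)-[o:ORDERS]->(:Product)-[:PART_OF]->(c:Category)
RETURN c.categoryName AS category, sum(o.quantity) AS volume
ORDER BY volume DESC
"""

def as_table(rows):
    """Render query rows as the kind of Markdown table an assistant returns."""
    header = "| category | volume |"
    sep = "|---|---|"
    body = [f"| {r['category']} | {r['volume']} |" for r in rows]
    return "\n".join([header, sep] + body)

# Sample rows standing in for a real query result:
sample = [{"category": "Beverages", "volume": 9532},
          {"category": "Dairy Products", "volume": 9149}]
print(as_table(sample))
```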
These examples show how MCP enables LLMs to act as genuinely full-stack development assistants.
Using Graphs as LLM Memory
One advanced use case explored was using Neo4j as a memory layer for AI agents:
- User conversations and knowledge are stored as interconnected nodes
- The graph acts as persistent memory, accessible across sessions
- LLMs can query this graph for contextual grounding or long-term recall
By exposing this as an MCP tool, agents can store, retrieve, and reason over rich contextual memories, enabling far more sophisticated behaviour than stateless prompts.
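One way to picture this memory layer: a "remember" tool that writes facts into the graph and a "recall" tool that reads them back by topic. The schema below (Session, Fact and Topic nodes) is an illustrative design, not the one shown in the session; the functions build parameterised Cypher that a driver would execute:

```python
def remember(session_id: str, fact: str, topic: str) -> tuple[str, dict]:
    """Build a Cypher statement linking a new fact to its session and topic,
    so later sessions can recall it. (Illustrative schema.)"""
    cypher = (
        "MERGE (s:Session {id: $sid}) "
        "MERGE (t:Topic {name: $topic}) "
        "CREATE (f:Fact {text: $fact}) "
        "MERGE (s)-[:REMEMBERED]->(f)-[:ABOUT]->(t)"
    )
    return cypher, {"sid": session_id, "topic": topic, "fact": fact}

def recall(topic: str) -> tuple[str, dict]:
    """Build a Cypher query retrieving all facts about a topic,
    regardless of which session stored them."""
    cypher = (
        "MATCH (f:Fact)-[:ABOUT]->(:Topic {name: $topic}) "
        "RETURN f.text AS fact"
    )
    return cypher, {"topic": topic}
```

Because recall traverses relationships rather than matching raw text, the agent can ground a new conversation in everything previously linked to a topic – the structural advantage a graph has over a flat log of past messages.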
Beyond Data: MCP for Infrastructure and Agents
MCP isn’t just for data. Michael also showed how to wrap infrastructure APIs as MCP tools. For example, he built a tool to manage Neo4j Aura instances via the public Aura REST API. From inside the IDE, he could:
- List running Aura instances
- Get detailed instance stats
- Spin up or drop instances
- Generate environment files for new instances
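Wrapping a REST API like Aura's as an MCP tool is mostly a matter of building authenticated requests. A minimal sketch, assuming the public Aura API's v1 base URL and an `/instances` path (verify both against the current Aura API docs before use); no request is actually sent until the object is passed to `urlopen`:

```python
import urllib.request

# Assumed base URL for the public Aura API - check the official reference.
AURA_API = "https://api.neo4j.io/v1"

def build_request(token: str, method: str, path: str) -> urllib.request.Request:
    """Build an authenticated request against the Aura API. An MCP server
    wrapping this would expose list/get/create/delete as separate tools."""
    req = urllib.request.Request(f"{AURA_API}{path}", method=method)
    req.add_header("Authorization", f"Bearer {token}")
    return req

# e.g. listing instances (the bearer token comes from Aura's OAuth endpoint):
req = build_request("TOKEN", "GET", "/instances")
```

Each bullet above maps to one such tool, so the assistant can chain them: list instances, pick one, fetch its stats, and write out an environment file – all inside the IDE session.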
The ability to manage infrastructure directly from within AI-assisted workflows, without switching tools, highlights how MCP can transform developer productivity.
Agent SDK Integration
Finally, Michael showed how to integrate MCP tools into agent SDKs like Google’s ADK. By importing MCP toolsets into agent logic, devs can build autonomous systems that reason, plan, and act using graph data, infrastructure, or any tool exposed via MCP.
The key takeaway: MCP makes it easy to plug tools into whatever orchestration framework you use.
The Road Ahead: Challenges and Opportunities
Despite its rapid rise, MCP is still maturing. The speakers highlighted several open challenges:
- Security: OAuth flows and access controls are evolving, but still inconsistent across hosts.
- Discovery: Registries exist, but package signing, versioning and trust models need work.
- Statefulness: The protocol currently expects persistent sessions, which don't scale well in cloud-native environments.
- Host Support: Tooling support varies. Some hosts (e.g., Claude Desktop) only support local servers, while others offer full remote support with caveats.
Still, the momentum is undeniable. With major companies investing, the spec evolving, and the developer community building, MCP is quickly becoming the foundation for how LLMs interface with the real world.
Key Takeaways for Developers
- Think modular: MCP lets you break tooling into reusable servers and plug them into any agent or IDE
- Start with data: Expose your Neo4j instance as an MCP server and explore AI-assisted querying and visualisation
- Automate your infrastructure: Wrap infrastructure APIs as tools and use LLMs to manage cloud resources from your IDE
- Use graphs as memory: Combine MCP and Neo4j to persist knowledge and context across agent interactions
- Stay secure: For production use, stick to trusted servers or host your own with strict access controls