# AWS Hybrid Memory Provider
This guide shows how to use the `HybridMemoryProvider` to intelligently route queries between short-term session memory and long-term context graphs.
## Overview
The Hybrid Memory Provider combines two memory systems:
- **Short-Term Memory**: Recent conversation context, working facts
- **Long-Term Memory**: Entity relationships, persistent knowledge
Queries are automatically routed to the appropriate backend based on content analysis.
## Quick Start
```python
from neo4j_agent_memory import MemoryClient, MemorySettings
from neo4j_agent_memory.integrations.agentcore import HybridMemoryProvider

settings = MemorySettings()

# Inside an async function / event loop:
async with MemoryClient(settings) as client:
    provider = HybridMemoryProvider(
        memory_client=client,
        namespace="my-app",
        routing_strategy="auto",  # Intelligent routing
    )

    # Store a memory
    await provider.store_memory(
        session_id="session-123",
        content="John mentioned he prefers morning meetings",
    )

    # Search - automatically routed
    results = await provider.search_memory(
        query="What time does John prefer meetings?",
    )
```
## Routing Strategies
### AUTO (Recommended)
Analyzes query content to determine the best backend:
```python
provider = HybridMemoryProvider(
    memory_client=client,
    routing_strategy="auto",
)

# Recent context → short-term
results = await provider.search_memory("What did I just say?")

# Relationships → long-term
results = await provider.search_memory("How is John related to Acme Corp?")
```
Query patterns routed to short-term:

- "What did I say…"
- "Recently mentioned…"
- "In this conversation…"
- "Last message…"
Query patterns routed to long-term:

- "How is X related to Y…"
- "What do I know about…"
- "Entity relationships…"
- "Tell me about [Person/Company]…"
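The pattern lists above can be sketched as a simple keyword-based classifier. This is a minimal illustration of how `auto` routing might behave, not the provider's actual implementation; the patterns below are assumptions drawn from the examples in this guide.

```python
import re

# Illustrative patterns only; the provider's real heuristics are internal.
SHORT_TERM_PATTERNS = [
    r"\bwhat did i (just )?say\b",
    r"\brecently mentioned\b",
    r"\bin this conversation\b",
    r"\blast message\b",
]
LONG_TERM_PATTERNS = [
    r"\bhow is .+ related to\b",
    r"\bwhat do i know about\b",
    r"\brelationships?\b",
    r"\btell me about\b",
]

def route_query(query: str) -> str:
    """Return the backend a query would be routed to."""
    q = query.lower()
    if any(re.search(p, q) for p in SHORT_TERM_PATTERNS):
        return "short_term"
    if any(re.search(p, q) for p in LONG_TERM_PATTERNS):
        return "long_term"
    return "all"  # no clear signal: fall back to searching both backends
```

Queries that match neither list fall back to searching both backends, which mirrors the ALL strategy described below.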
### EXPLICIT
Caller specifies which backend to use:
```python
provider = HybridMemoryProvider(
    memory_client=client,
    routing_strategy="explicit",
)

# Explicitly search short-term
results = await provider.search_memory(
    query="recent context",
    memory_types=["short_term"],
)

# Explicitly search long-term
results = await provider.search_memory(
    query="entity info",
    memory_types=["long_term"],
)
```
### ALL
Search both backends and merge results:
```python
provider = HybridMemoryProvider(
    memory_client=client,
    routing_strategy="all",
)

# Searches both, merges by score
results = await provider.search_memory(query="John's preferences")
```
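The "merges by score" step can be sketched as follows. This is a minimal illustration under assumed types; the provider's actual merge logic (deduplication, score normalization) may differ.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    content: str
    score: float
    source: str  # "short_term" or "long_term" (assumed field for illustration)

def merge_results(short_term: list[Memory], long_term: list[Memory],
                  top_k: int = 10) -> list[Memory]:
    """Combine results from both backends, highest score first."""
    combined = short_term + long_term
    combined.sort(key=lambda m: m.score, reverse=True)
    return combined[:top_k]
```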
## Memory Types
The provider supports four memory types:
| Type | Use Case | Storage |
|---|---|---|
| `SHORT_TERM` | Current session context | Session-scoped messages |
| `LONG_TERM` | Persistent facts and entities | Context Graph (Entity nodes) |
| `PROCEDURAL` | Task execution patterns | Reasoning traces |
| `EPISODIC` | Specific events and episodes | Episode nodes with timestamps |
## Storing Memories
### Basic Storage
```python
from neo4j_agent_memory.integrations.agentcore import MemoryType

memory = await provider.store_memory(
    session_id="session-123",
    content="The user prefers Python for backend development",
    memory_type=MemoryType.LONG_TERM,  # Optional; auto-detected if omitted
    metadata={"source": "user_statement"},
)
```
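When `memory_type` is omitted, the provider auto-detects it from the content. A rough sketch of how such detection could work; the keyword heuristics below are illustrative assumptions, not the provider's actual rules:

```python
def detect_memory_type(content: str) -> str:
    """Guess a memory type from content (illustrative heuristics only)."""
    text = content.lower()
    if any(w in text for w in ("step", "procedure", "workflow")):
        return "procedural"   # task execution patterns
    if any(w in text for w in ("yesterday", "last week", "on monday")):
        return "episodic"     # time-anchored events
    if any(w in text for w in ("prefers", "always", "works at")):
        return "long_term"    # persistent facts about entities
    return "short_term"       # default: working session context
```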
## Searching Memories
### Basic Search
```python
result = await provider.search_memory(
    query="What are John's preferences?",
    top_k=10,
)

for memory in result.memories:
    print(f"[{memory.score:.2f}] {memory.content}")
```
### Entity Relationships
Query entity relationships directly:
```python
relationships = await provider.get_entity_relationships(
    entity_name="John Smith",
    depth=2,  # Traverse 2 hops
    relationship_types=["WORKS_AT", "KNOWS"],  # Filter types
)

print(f"Entity: {relationships['entity']['name']}")
for rel in relationships["relationships"]:
    print(f"  -{rel['type']}-> {rel['target']['name']}")
```
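Conceptually, `depth=2` bounds a breadth-first traversal from the entity: direct relationships (1 hop) plus the relationships of those neighbors (2 hops). A toy sketch over an in-memory graph; the graph data and function below are assumptions for illustration, not the provider's implementation:

```python
from collections import deque

# Toy graph: entity -> list of (relationship_type, target)
GRAPH = {
    "John Smith": [("WORKS_AT", "Acme Corp"), ("KNOWS", "Jane Doe")],
    "Acme Corp": [("LOCATED_IN", "Berlin")],
    "Jane Doe": [("KNOWS", "Bob Lee")],
}

def traverse(entity, depth, rel_types=None):
    """Collect (source, type, target) edges within `depth` hops of `entity`."""
    edges, frontier, seen = [], deque([(entity, 0)]), {entity}
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue  # hop budget exhausted on this branch
        for rel, target in GRAPH.get(node, []):
            if rel_types and rel not in rel_types:
                continue  # relationship type filtered out
            edges.append((node, rel, target))
            if target not in seen:
                seen.add(target)
                frontier.append((target, d + 1))
    return edges
```

With `depth=2` and the filter `{"WORKS_AT", "KNOWS"}`, the traversal reaches `Bob Lee` via `Jane Doe` but skips Acme Corp's `LOCATED_IN` edge.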
## Configuration Options
```python
provider = HybridMemoryProvider(
    memory_client=client,
    # Namespace for multi-tenant apps
    namespace="my-app",
    # Routing strategy
    routing_strategy="auto",
    # Sync entities between short-term and long-term stores
    sync_entities=True,
    # Max depth for relationship traversal
    relationship_depth=2,
    # Extract entities when storing memories
    extract_entities=True,
    # Generate embeddings for semantic search
    generate_embeddings=True,
)
```
Embedding and LLM model configuration is set on the `MemoryClient` via `MemorySettings`, not on the `HybridMemoryProvider` directly. See the Configuration Reference for embedding settings.
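As a hedged illustration only, client-side embedding configuration might look like the following. Every field name here is an assumption, not confirmed by this guide; consult the Configuration Reference for the real `MemorySettings` schema.

```python
from neo4j_agent_memory import MemoryClient, MemorySettings

# Hypothetical field names for illustration; see the Configuration
# Reference for the actual MemorySettings schema.
settings = MemorySettings(
    embedding_provider="bedrock",                     # assumed field
    embedding_model="amazon.titan-embed-text-v2:0",   # assumed field
)
client = MemoryClient(settings)
```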
## Use with Strands Agents
Combine with Strands tools for full agent memory:
```python
from strands import Agent
from neo4j_agent_memory.integrations.strands import context_graph_tools
from neo4j_agent_memory.integrations.agentcore import HybridMemoryProvider

# `client` and `settings` are created as in the Quick Start above

# Create hybrid provider for backend logic
provider = HybridMemoryProvider(
    memory_client=client,
    routing_strategy="auto",
)

# Create tools for agent interface
tools = context_graph_tools(
    neo4j_uri=settings.neo4j.uri,
    neo4j_password=settings.neo4j.password,
    embedding_provider="bedrock",
)

agent = Agent(
    model="anthropic.claude-sonnet-4-20250514-v1:0",
    tools=tools,
)
```