Configuration Reference
- Configuration Methods
- MemorySettings
- Neo4j Configuration
- Embedding Configuration
- Extraction Configuration
- Schema Configuration
- Resolution Configuration
- LLM Configuration
- Memory Configuration
- Search Configuration
- Geocoding Configuration
- Enrichment Configuration
- Deduplication Configuration
- Observability Configuration
- CLI Configuration
- Complete Example
- Configuration Precedence
- Validation
- Next Steps
Complete reference for all configuration options in neo4j-agent-memory.
Configuration Methods
Neo4j Agent Memory supports multiple configuration methods:
- Python Configuration - Direct instantiation of settings objects
- Environment Variables - Using the NAM_ prefix
- Configuration Files - YAML or JSON files
- Mixed - Combine methods, with environment variables taking precedence
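The NAM_ prefix with a double-underscore separator follows the common pydantic-settings convention for nested fields (NAM_NEO4J__URI maps to settings.neo4j.uri). As an illustration of that mapping only (this is not the library's actual loader), the parsing can be sketched in plain Python:

```python
import os


def load_nested_env(prefix: str = "NAM_", delim: str = "__") -> dict:
    """Parse prefixed environment variables into a nested dict, e.g.
    NAM_NEO4J__URI -> {"neo4j": {"uri": ...}}. Illustration only."""
    result: dict = {}
    for key, value in os.environ.items():
        if not key.startswith(prefix):
            continue
        parts = key[len(prefix):].lower().split(delim)
        node = result
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return result


os.environ["NAM_NEO4J__URI"] = "bolt://localhost:7687"
os.environ["NAM_NEO4J__USERNAME"] = "neo4j"
print(load_nested_env())  # {'neo4j': {'uri': 'bolt://localhost:7687', 'username': 'neo4j'}}
```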
MemorySettings
The main configuration class that contains all settings.
from neo4j_agent_memory import MemorySettings
settings = MemorySettings(
neo4j=Neo4jConfig(...),
embedding=EmbeddingConfig(...),
extraction=ExtractionConfig(...),
resolution=ResolutionConfig(...),
schema=SchemaConfig(...),
llm=LLMConfig(...),
memory=MemoryConfig(...),
search=SearchConfig(...),
)
Neo4j Configuration
Connection settings for Neo4j database.
Python Configuration
from neo4j_agent_memory import Neo4jConfig
from pydantic import SecretStr
neo4j_config = Neo4jConfig(
uri="bolt://localhost:7687",
username="neo4j",
password=SecretStr("password"),
database="neo4j", # Database name (default: neo4j)
max_connection_pool_size=50, # Connection pool size
connection_timeout=30.0, # Connection timeout in seconds
max_transaction_retry_time=30.0, # Max retry time for transactions
)
Environment Variables
NAM_NEO4J__URI=bolt://localhost:7687
NAM_NEO4J__USERNAME=neo4j
NAM_NEO4J__PASSWORD=your-password
NAM_NEO4J__DATABASE=neo4j
NAM_NEO4J__MAX_CONNECTION_POOL_SIZE=50
NAM_NEO4J__CONNECTION_TIMEOUT=30.0
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| uri | str | | Neo4j connection URI |
| username | str | | Authentication username |
| password | SecretStr | Required | Authentication password |
| database | str | neo4j | Database name |
| max_connection_pool_size | int | 50 | Maximum connection pool size |
| connection_timeout | float | 30.0 | Connection timeout (seconds) |
| max_transaction_retry_time | float | 30.0 | Maximum transaction retry time (seconds) |
Embedding Configuration
Settings for vector embeddings.
Python Configuration
from neo4j_agent_memory import EmbeddingConfig, EmbeddingProvider
from pydantic import SecretStr
embedding_config = EmbeddingConfig(
provider=EmbeddingProvider.OPENAI,
model="text-embedding-3-small",
api_key=SecretStr("sk-..."), # Or use OPENAI_API_KEY env var
dimensions=1536, # Embedding dimensions
batch_size=100, # Batch size for bulk embedding
device="cpu", # Device for local models
)
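The batch_size setting controls how many texts are embedded per API call during bulk operations. A minimal sketch of how such batching typically works (an illustrative helper, not part of the library API):

```python
from itertools import islice


def batched(texts, batch_size=100):
    """Yield successive batches of at most batch_size items, mirroring how
    a batch_size setting is usually applied during bulk embedding."""
    it = iter(texts)
    while batch := list(islice(it, batch_size)):
        yield batch


batches = list(batched([f"doc-{i}" for i in range(250)], batch_size=100))
print([len(b) for b in batches])  # [100, 100, 50]
```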
Environment Variables
NAM_EMBEDDING__PROVIDER=openai
NAM_EMBEDDING__MODEL=text-embedding-3-small
NAM_EMBEDDING__API_KEY=sk-...
NAM_EMBEDDING__DIMENSIONS=1536
NAM_EMBEDDING__BATCH_SIZE=100
NAM_EMBEDDING__DEVICE=cpu
# Alternative: use standard OpenAI env var
OPENAI_API_KEY=sk-...
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| provider | EmbeddingProvider | | Embedding provider (OPENAI, SENTENCE_TRANSFORMERS, VERTEX_AI, BEDROCK) |
| model | str | | Model name |
| api_key | SecretStr | None | API key (for OpenAI) |
| dimensions | int | 1536 | Embedding dimensions |
| batch_size | int | 100 | Batch size for bulk operations |
| device | str | | Device for local models (cpu/cuda) |
| | str | None | GCP project ID (for Vertex AI) |
| | str | | GCP region (for Vertex AI) |
| | str | | Vertex AI task type |
| | str | None | AWS region (for Bedrock) |
| | str | None | AWS credentials profile name (for Bedrock) |
Embedding Providers
| Provider | Models | Notes |
|---|---|---|
| OpenAI | | Requires API key |
| Sentence Transformers | | Runs locally |
| Vertex AI | | Requires a GCP project; needs an optional extra install |
| Bedrock | | Requires AWS credentials; needs an optional extra install |
Extraction Configuration
Settings for entity extraction pipeline.
Python Configuration
from neo4j_agent_memory import ExtractionConfig, ExtractorType, MergeStrategy
extraction_config = ExtractionConfig(
# Extractor type
extractor_type=ExtractorType.PIPELINE,
# Pipeline stages
enable_spacy=True,
enable_gliner=True,
enable_llm_fallback=True,
# Merge strategy
merge_strategy=MergeStrategy.CONFIDENCE,
fallback_on_empty=True,
# spaCy settings
spacy_model="en_core_web_sm",
spacy_confidence=0.85,
# GLiNER settings
gliner_model="urchade/gliner_medium-v2.1",
gliner_threshold=0.5,
gliner_device="cpu",
# GLiREL relation extraction (optional)
enable_gliner_relations=False,
gliner_relations_model="jackboyla/glirel-large-v0",
gliner_relations_threshold=0.3,
# LLM settings
llm_model="gpt-4o-mini",
# Entity types
entity_types=["PERSON", "ORGANIZATION", "LOCATION", "EVENT", "OBJECT"],
# Extraction options
extract_relations=True,
extract_preferences=True,
# Batch extraction settings
batch_size=10,
batch_max_concurrent=5,
# Streaming extraction settings
streaming_chunk_size=4000,
streaming_chunk_overlap=200,
)
Environment Variables
# Extractor type
NAM_EXTRACTION__EXTRACTOR_TYPE=pipeline # none, llm, spacy, gliner, pipeline
# Pipeline stages
NAM_EXTRACTION__ENABLE_SPACY=true
NAM_EXTRACTION__ENABLE_GLINER=true
NAM_EXTRACTION__ENABLE_LLM_FALLBACK=true
# Merge strategy
NAM_EXTRACTION__MERGE_STRATEGY=confidence # union, intersection, confidence, cascade
NAM_EXTRACTION__FALLBACK_ON_EMPTY=true
# spaCy settings
NAM_EXTRACTION__SPACY_MODEL=en_core_web_sm
NAM_EXTRACTION__SPACY_CONFIDENCE=0.85
# GLiNER settings
NAM_EXTRACTION__GLINER_MODEL=urchade/gliner_medium-v2.1
NAM_EXTRACTION__GLINER_THRESHOLD=0.5
NAM_EXTRACTION__GLINER_DEVICE=cpu
# GLiREL relation extraction (optional)
NAM_EXTRACTION__ENABLE_GLINER_RELATIONS=false
NAM_EXTRACTION__GLINER_RELATIONS_MODEL=jackboyla/glirel-large-v0
NAM_EXTRACTION__GLINER_RELATIONS_THRESHOLD=0.3
# LLM settings
NAM_EXTRACTION__LLM_MODEL=gpt-4o-mini
# Entity types (JSON array)
NAM_EXTRACTION__ENTITY_TYPES='["PERSON","ORGANIZATION","LOCATION","EVENT","OBJECT"]'
# Extraction options
NAM_EXTRACTION__EXTRACT_RELATIONS=true
NAM_EXTRACTION__EXTRACT_PREFERENCES=true
# Batch extraction settings
NAM_EXTRACTION__BATCH_SIZE=10
NAM_EXTRACTION__BATCH_MAX_CONCURRENT=5
# Streaming extraction settings
NAM_EXTRACTION__STREAMING_CHUNK_SIZE=4000
NAM_EXTRACTION__STREAMING_CHUNK_OVERLAP=200
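The streaming settings above split long inputs into overlapping windows so entities spanning a chunk boundary are not lost. A minimal sketch of how streaming_chunk_size and streaming_chunk_overlap are typically applied (this helper is an illustration, not the library's internal chunker):

```python
def chunk_text(text: str, chunk_size: int = 4000, overlap: int = 200) -> list[str]:
    """Split text into chunks of chunk_size characters, with each chunk
    repeating the last `overlap` characters of the previous one."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks


chunks = chunk_text("x" * 9000, chunk_size=4000, overlap=200)
print(len(chunks))  # 3
```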
Extractor Types
| Type | Description |
|---|---|
| none | Disable extraction |
| spacy | spaCy statistical NER only |
| gliner | GLiNER zero-shot NER only |
| llm | LLM-based extraction only |
| pipeline | Multi-stage pipeline (default) |
Merge Strategies
| Strategy | Description |
|---|---|
| union | Keep all unique entities from all stages |
| intersection | Only keep entities found by multiple extractors |
| confidence | Keep highest-confidence version (default) |
| cascade | Use first stage, fill gaps with subsequent stages |
| | Stop after first successful stage |
Schema Configuration
Settings for the knowledge graph schema.
Python Configuration
from neo4j_agent_memory import SchemaConfig, SchemaModel
schema_config = SchemaConfig(
model=SchemaModel.POLEO, # Schema model
entity_types=None, # Custom types (for CUSTOM model)
enable_subtypes=True, # Track entity subtypes
strict_types=False, # Reject unknown types
custom_schema_path=None, # Path to schema file
)
Resolution Configuration
Settings for entity resolution (deduplication).
Python Configuration
from neo4j_agent_memory import ResolutionConfig, ResolverStrategy
resolution_config = ResolutionConfig(
strategy=ResolverStrategy.COMPOSITE,
exact_threshold=1.0, # Exact match threshold
fuzzy_threshold=0.85, # Fuzzy match threshold
semantic_threshold=0.9, # Semantic similarity threshold
)
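The composite strategy cascades through match tiers: exact comparison first, then fuzzy string matching, with the semantic tier (embedding similarity against semantic_threshold) as a final check. A minimal sketch of the first two tiers, using difflib as a stand-in for the library's actual fuzzy matcher (names and logic here are assumptions, not the library's code; the semantic tier is omitted):

```python
from difflib import SequenceMatcher


def fuzzy_score(a: str, b: str) -> float:
    # Normalized string similarity in [0, 1]; difflib is a stand-in
    # for whatever fuzzy matcher the library actually uses
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def resolve_tier(candidate: str, existing: str, fuzzy_threshold: float = 0.85):
    """Return which tier (if any) matched: exact equality first, then fuzzy."""
    if candidate.lower() == existing.lower():
        return "exact"
    if fuzzy_score(candidate, existing) >= fuzzy_threshold:
        return "fuzzy"
    return None


print(resolve_tier("Acme Corp", "acme corp"))   # exact
print(resolve_tier("Acme Corp.", "Acme Corp"))  # fuzzy
print(resolve_tier("Acme", "Zenith"))           # None
```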
LLM Configuration
Memory Configuration
Settings for memory behavior.
Python Configuration
from neo4j_agent_memory import MemoryConfig
memory_config = MemoryConfig(
default_session_ttl=86400, # Session TTL in seconds (24 hours)
max_messages_per_session=1000, # Max messages per session
auto_summarize=False, # Auto-summarize long conversations
summarize_threshold=50, # Messages before summarization
)
Search Configuration
Settings for search operations.
Geocoding Configuration
Settings for automatic geocoding of LOCATION entities. When enabled, location names are converted to latitude/longitude coordinates stored as Neo4j Point properties, enabling geospatial queries.
Python Configuration (Nominatim - Free)
from neo4j_agent_memory import GeocodingConfig, GeocodingProvider
# Nominatim is free but rate-limited to 1 request/second
geocoding_config = GeocodingConfig(
enabled=True,
provider=GeocodingProvider.NOMINATIM,
cache_results=True, # Cache to avoid repeated API calls
rate_limit_per_second=1.0, # Nominatim requires <= 1 req/sec
user_agent="my-app/1.0", # Required by Nominatim ToS
)
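The rate_limit_per_second setting exists because Nominatim's usage policy caps clients at one request per second, so calls must be spaced out rather than fired back-to-back. The library enforces this internally; a minimal sketch of such a client-side throttle:

```python
import time


class RateLimiter:
    """Space calls at least 1/per_second seconds apart (illustrative)."""

    def __init__(self, per_second: float = 1.0):
        self.min_interval = 1.0 / per_second
        self._last = 0.0

    def wait(self) -> None:
        # Sleep just long enough to honor the minimum interval
        now = time.monotonic()
        sleep_for = self._last + self.min_interval - now
        if sleep_for > 0:
            time.sleep(sleep_for)
        self._last = time.monotonic()


limiter = RateLimiter(per_second=1.0)  # matches Nominatim's policy
limiter.wait()  # first call returns immediately; later calls space out 1s apart
```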
Python Configuration (Google Maps - Higher Accuracy)
from neo4j_agent_memory import GeocodingConfig, GeocodingProvider
from pydantic import SecretStr
# Google Maps API requires an API key and has usage costs
geocoding_config = GeocodingConfig(
enabled=True,
provider=GeocodingProvider.GOOGLE,
api_key=SecretStr("your-google-api-key"),
cache_results=True,
)
Environment Variables
# Enable geocoding
NAM_GEOCODING__ENABLED=true
# Provider selection
NAM_GEOCODING__PROVIDER=nominatim # nominatim or google
# API key (required for Google)
NAM_GEOCODING__API_KEY=your-google-api-key
# Caching
NAM_GEOCODING__CACHE_RESULTS=true
# Nominatim-specific settings
NAM_GEOCODING__RATE_LIMIT_PER_SECOND=1.0
NAM_GEOCODING__USER_AGENT=my-app/1.0
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| enabled | bool | | Enable automatic geocoding of LOCATION entities |
| provider | GeocodingProvider | | Geocoding provider (NOMINATIM or GOOGLE) |
| api_key | SecretStr | None | API key (required for Google) |
| cache_results | bool | | Cache geocoding results in-memory |
| rate_limit_per_second | float | 1.0 | Rate limit for requests (Nominatim requires ≤ 1) |
| user_agent | str | | User-Agent header for Nominatim (required by ToS) |
Geocoding Providers
| Provider | Cost | Rate Limit | Notes |
|---|---|---|---|
| Nominatim | Free | 1 request/second | Uses OpenStreetMap data. Good for most use cases. |
| Google | Pay per use | 50 requests/second | Higher accuracy, better address parsing. Requires API key. |
Usage Example
from neo4j_agent_memory import (
MemoryClient,
MemorySettings,
GeocodingConfig,
GeocodingProvider,
)
# Configure with geocoding enabled
settings = MemorySettings(
geocoding=GeocodingConfig(
enabled=True,
provider=GeocodingProvider.NOMINATIM,
)
)
async with MemoryClient(settings) as client:
# LOCATION entities are automatically geocoded
entity, dedup_result = await client.long_term.add_entity(
"Empire State Building, New York",
"LOCATION",
)
# Get coordinates
coords = await client.long_term.get_entity_coordinates(entity.id)
if coords:
lat, lon = coords
print(f"Coordinates: {lat}, {lon}")
# Search for nearby locations (within 5km)
nearby = await client.long_term.search_locations_near(
latitude=40.7484,
longitude=-73.9857,
radius_km=5.0,
)
# Batch geocode existing locations without coordinates
stats = await client.long_term.geocode_locations()
print(f"Geocoded {stats['geocoded']} of {stats['processed']} locations")
Geospatial Queries
Once locations are geocoded, you can run geospatial queries:
# Find locations within a bounding box
locations = await client.long_term.search_locations_in_bounds(
min_lat=40.70,
max_lat=40.80,
min_lon=-74.02,
max_lon=-73.95,
)
# Find locations near a point
nearby = await client.long_term.search_locations_near(
latitude=40.7484,
longitude=-73.9857,
radius_km=10.0,
limit=20,
)
Enrichment Configuration
Settings for background entity enrichment from external knowledge sources (Wikipedia, Diffbot).
Python Configuration
from neo4j_agent_memory.config.settings import EnrichmentConfig, EnrichmentProvider
enrichment_config = EnrichmentConfig(
enabled=True, # Enable enrichment
providers=[EnrichmentProvider.WIKIMEDIA], # Providers to use
# API keys (Diffbot only)
diffbot_api_key="your-api-key", # Or set DIFFBOT_API_KEY env var
# Rate limiting
wikimedia_rate_limit=0.5, # Seconds between requests
diffbot_rate_limit=0.2, # Seconds between requests
# Caching
cache_results=True, # Cache results in memory
cache_ttl_hours=168, # Cache TTL (1 week)
# Background processing
background_enabled=True, # Enable async processing
queue_max_size=1000, # Max queue size
max_retries=3, # Retry count
retry_delay_seconds=60.0, # Delay between retries
# Filtering
entity_types=["PERSON", "ORGANIZATION", "LOCATION", "EVENT"],
min_confidence=0.7, # Minimum confidence threshold
# API settings
language="en", # Wikipedia language
user_agent="neo4j-agent-memory/1.0", # User-Agent header
)
Environment Variables
# Enable enrichment
NAM_ENRICHMENT__ENABLED=true
# Providers (JSON array)
NAM_ENRICHMENT__PROVIDERS=["wikimedia", "diffbot"]
# Diffbot API key
NAM_ENRICHMENT__DIFFBOT_API_KEY=your-api-key
# Or use the standard env var
DIFFBOT_API_KEY=your-api-key
# Rate limiting
NAM_ENRICHMENT__WIKIMEDIA_RATE_LIMIT=0.5
NAM_ENRICHMENT__DIFFBOT_RATE_LIMIT=0.2
# Caching
NAM_ENRICHMENT__CACHE_RESULTS=true
NAM_ENRICHMENT__CACHE_TTL_HOURS=168
# Background processing
NAM_ENRICHMENT__BACKGROUND_ENABLED=true
NAM_ENRICHMENT__QUEUE_MAX_SIZE=1000
NAM_ENRICHMENT__MAX_RETRIES=3
NAM_ENRICHMENT__RETRY_DELAY_SECONDS=60.0
# Filtering
NAM_ENRICHMENT__ENTITY_TYPES=["PERSON", "ORGANIZATION", "LOCATION", "EVENT"]
NAM_ENRICHMENT__MIN_CONFIDENCE=0.7
# API settings
NAM_ENRICHMENT__LANGUAGE=en
NAM_ENRICHMENT__USER_AGENT=neo4j-agent-memory/1.0
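The cache_results and cache_ttl_hours settings keep enrichment lookups in memory so the same entity is not fetched repeatedly within the TTL window. A minimal sketch of such a TTL cache (the library's own cache may differ):

```python
import time


class TTLCache:
    """In-memory cache whose entries expire after ttl_seconds (illustrative)."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, object]] = {}

    def set(self, key: str, value) -> None:
        self._store[key] = (time.monotonic() + self.ttl, value)

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # lazily evict expired entries
            return None
        return value


cache = TTLCache(ttl_seconds=168 * 3600)  # 1 week, as in cache_ttl_hours=168
cache.set("Douglas Adams", {"source": "wikimedia"})
print(cache.get("Douglas Adams"))  # {'source': 'wikimedia'}
```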
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| enabled | bool | | Enable the enrichment system |
| providers | list[EnrichmentProvider] | | Providers to use (tried in order) |
| diffbot_api_key | SecretStr | None | API key for Diffbot |
| wikimedia_rate_limit | float | 0.5 | Seconds between Wikimedia requests |
| diffbot_rate_limit | float | 0.2 | Seconds between Diffbot requests |
| cache_results | bool | | Cache enrichment results |
| cache_ttl_hours | int | 168 | Cache TTL in hours (1 week) |
| background_enabled | bool | | Enable background processing |
| queue_max_size | int | 1000 | Maximum enrichment queue size |
| max_retries | int | 3 | Retry count for failures |
| retry_delay_seconds | float | 60.0 | Delay between retries (seconds) |
| entity_types | list[str] | See above | Entity types to enrich |
| min_confidence | float | 0.7 | Minimum confidence threshold |
| language | str | | Wikipedia language code |
| user_agent | str | | User-Agent header |
Enrichment Providers
| Provider | Cost | Rate Limit | Notes |
|---|---|---|---|
| Wikimedia | Free | 2 requests/second | Uses Wikipedia REST API. Good for general entities. |
| Diffbot | Pay per use | 5 requests/second | Richer structured data; requires API key. |
See the Working with Entities guide for detailed usage documentation.
Deduplication Configuration
Settings for entity deduplication during ingest.
Python Configuration
from neo4j_agent_memory import DeduplicationConfig, DeduplicationStrategy
dedup_config = DeduplicationConfig(
enabled=True, # Enable deduplication
strategy=DeduplicationStrategy.COMPOSITE,
embedding_threshold=0.92, # Similarity for auto-merge
fuzzy_threshold=0.85, # Fuzzy match threshold
create_same_as=True, # Create SAME_AS for ambiguous matches
same_as_threshold=0.85, # Threshold for SAME_AS relationships
batch_size=100, # Entities to process per batch
)
Environment Variables
NAM_DEDUPLICATION__ENABLED=true
NAM_DEDUPLICATION__STRATEGY=composite # none, exact, fuzzy, embedding, composite
NAM_DEDUPLICATION__EMBEDDING_THRESHOLD=0.92
NAM_DEDUPLICATION__FUZZY_THRESHOLD=0.85
NAM_DEDUPLICATION__CREATE_SAME_AS=true
NAM_DEDUPLICATION__SAME_AS_THRESHOLD=0.85
NAM_DEDUPLICATION__BATCH_SIZE=100
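The two thresholds partition dedup outcomes: similarity at or above embedding_threshold merges automatically, while the band between same_as_threshold and embedding_threshold links entities with a SAME_AS relationship for later review. A sketch of that decision (assumed semantics; the library's actual logic may differ):

```python
def dedup_decision(similarity: float,
                   embedding_threshold: float = 0.92,
                   same_as_threshold: float = 0.85,
                   create_same_as: bool = True) -> str:
    """Classify a candidate pair by embedding similarity (illustrative)."""
    if similarity >= embedding_threshold:
        return "auto-merge"      # confident duplicate
    if create_same_as and similarity >= same_as_threshold:
        return "same-as"         # ambiguous: link, keep both nodes
    return "new-entity"          # treat as a distinct entity


print(dedup_decision(0.95))  # auto-merge
print(dedup_decision(0.88))  # same-as
print(dedup_decision(0.60))  # new-entity
```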
Observability Configuration
Settings for tracing and monitoring with OpenTelemetry or Opik.
Python Configuration (OpenTelemetry)
from neo4j_agent_memory import ObservabilityConfig, TracingProvider
observability_config = ObservabilityConfig(
enabled=True,
provider=TracingProvider.OPENTELEMETRY,
service_name="my-agent-memory",
endpoint="http://localhost:4317", # OTLP endpoint
sample_rate=1.0, # Trace all requests
log_level="INFO",
)
Python Configuration (Opik)
from neo4j_agent_memory import ObservabilityConfig, TracingProvider
observability_config = ObservabilityConfig(
enabled=True,
provider=TracingProvider.OPIK,
project_name="my-agent-memory",
workspace="my-workspace", # Optional Opik workspace
track_llm_calls=True, # Track LLM interactions
track_extraction=True, # Track extraction pipeline
track_memory_ops=True, # Track memory operations
)
Environment Variables
# Common settings
NAM_OBSERVABILITY__ENABLED=true
NAM_OBSERVABILITY__PROVIDER=opentelemetry # opentelemetry, opik, auto
# OpenTelemetry settings
NAM_OBSERVABILITY__SERVICE_NAME=my-agent-memory
NAM_OBSERVABILITY__ENDPOINT=http://localhost:4317
NAM_OBSERVABILITY__SAMPLE_RATE=1.0
NAM_OBSERVABILITY__LOG_LEVEL=INFO
# Opik settings
NAM_OBSERVABILITY__PROJECT_NAME=my-agent-memory
NAM_OBSERVABILITY__WORKSPACE=my-workspace
NAM_OBSERVABILITY__TRACK_LLM_CALLS=true
NAM_OBSERVABILITY__TRACK_EXTRACTION=true
NAM_OBSERVABILITY__TRACK_MEMORY_OPS=true
# Opik API (if using cloud)
OPIK_API_KEY=your-api-key
OPIK_WORKSPACE=your-workspace
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| enabled | bool | | Enable observability |
| provider | TracingProvider | | Tracing provider to use |
| service_name | str | | Service name for traces |
| sample_rate | float | 1.0 | Fraction of requests to trace (0.0-1.0) |
| track_llm_calls | bool | | Track LLM API calls |
| track_extraction | bool | | Track extraction pipeline stages |
| track_memory_ops | bool | | Track memory read/write operations |
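sample_rate trades tracing completeness against overhead: 1.0 traces everything, 0.1 traces roughly one request in ten. A sketch of the usual head-based sampling decision (assumed semantics of the setting; exporters may sample differently):

```python
import random


def should_trace(sample_rate: float = 1.0) -> bool:
    """Decide per-request whether to record a trace, with probability
    sample_rate (illustrative head-based sampling)."""
    return random.random() < sample_rate


# Roughly a quarter of 10,000 simulated requests get traced
traced = sum(should_trace(0.25) for _ in range(10_000))
print(traced)
```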
CLI Configuration
The CLI tool uses environment variables and optional configuration files.
Environment Variables
# Neo4j connection (required for memory commands)
NAM_NEO4J__URI=bolt://localhost:7687
NAM_NEO4J__USERNAME=neo4j
NAM_NEO4J__PASSWORD=your-password
# Extraction settings
NAM_EXTRACTION__EXTRACTOR_TYPE=pipeline
NAM_EXTRACTION__ENABLE_GLINER=true
# Output format
NAM_CLI__OUTPUT_FORMAT=json # json, table, yaml
NAM_CLI__VERBOSE=false
NAM_CLI__COLOR=true
Configuration File
Create a .neo4j-memory.yaml file in your project or home directory:
# .neo4j-memory.yaml
neo4j:
uri: bolt://localhost:7687
username: neo4j
password: your-password
extraction:
extractor_type: pipeline
enable_gliner: true
gliner_threshold: 0.5
cli:
output_format: json
verbose: false
color: true
CLI Commands
# Extract entities from text
neo4j-memory extract "John works at Acme Corp in New York"
# Extract with specific schema
neo4j-memory extract --schema poleo "..."
# Extract from file
neo4j-memory extract --input document.txt
# List available schemas
neo4j-memory schemas list
# Show schema details
neo4j-memory schemas show poleo
# Get extraction statistics
neo4j-memory stats
# Output as table
neo4j-memory extract --format table "..."
Complete Example
Python Configuration
from neo4j_agent_memory import (
MemorySettings,
Neo4jConfig,
EmbeddingConfig,
EmbeddingProvider,
ExtractionConfig,
ExtractorType,
MergeStrategy,
SchemaConfig,
SchemaModel,
ResolutionConfig,
ResolverStrategy,
LLMConfig,
LLMProvider,
DeduplicationConfig,
DeduplicationStrategy,
GeocodingConfig,
GeocodingProvider,
ObservabilityConfig,
TracingProvider,
)
from pydantic import SecretStr
settings = MemorySettings(
neo4j=Neo4jConfig(
uri="bolt://localhost:7687",
username="neo4j",
password=SecretStr("password"),
),
embedding=EmbeddingConfig(
provider=EmbeddingProvider.OPENAI,
model="text-embedding-3-small",
),
extraction=ExtractionConfig(
extractor_type=ExtractorType.PIPELINE,
enable_spacy=True,
enable_gliner=True,
enable_llm_fallback=True,
merge_strategy=MergeStrategy.CONFIDENCE,
# GLiREL for relation extraction
enable_gliner_relations=True,
),
schema=SchemaConfig(
model=SchemaModel.POLEO,
enable_subtypes=True,
),
resolution=ResolutionConfig(
strategy=ResolverStrategy.COMPOSITE,
),
deduplication=DeduplicationConfig(
enabled=True,
strategy=DeduplicationStrategy.COMPOSITE,
embedding_threshold=0.92,
),
geocoding=GeocodingConfig(
enabled=True,
provider=GeocodingProvider.NOMINATIM,
cache_results=True,
),
llm=LLMConfig(
provider=LLMProvider.OPENAI,
model="gpt-4o-mini",
),
observability=ObservabilityConfig(
enabled=True,
provider=TracingProvider.OPIK,
project_name="my-agent-memory",
),
)
Environment Variables (.env file)
# Neo4j
NAM_NEO4J__URI=bolt://localhost:7687
NAM_NEO4J__USERNAME=neo4j
NAM_NEO4J__PASSWORD=your-password
# Embedding
NAM_EMBEDDING__PROVIDER=openai
NAM_EMBEDDING__MODEL=text-embedding-3-small
# Extraction
NAM_EXTRACTION__EXTRACTOR_TYPE=pipeline
NAM_EXTRACTION__ENABLE_SPACY=true
NAM_EXTRACTION__ENABLE_GLINER=true
NAM_EXTRACTION__ENABLE_LLM_FALLBACK=true
NAM_EXTRACTION__MERGE_STRATEGY=confidence
NAM_EXTRACTION__ENABLE_GLINER_RELATIONS=true
# Schema
NAM_SCHEMA__MODEL=poleo
NAM_SCHEMA__ENABLE_SUBTYPES=true
# Resolution
NAM_RESOLUTION__STRATEGY=composite
# Deduplication
NAM_DEDUPLICATION__ENABLED=true
NAM_DEDUPLICATION__STRATEGY=composite
NAM_DEDUPLICATION__EMBEDDING_THRESHOLD=0.92
# Geocoding (for LOCATION entities)
NAM_GEOCODING__ENABLED=true
NAM_GEOCODING__PROVIDER=nominatim
NAM_GEOCODING__CACHE_RESULTS=true
# For Google Maps (instead of Nominatim):
# NAM_GEOCODING__PROVIDER=google
# NAM_GEOCODING__API_KEY=your-google-api-key
# Observability
NAM_OBSERVABILITY__ENABLED=true
NAM_OBSERVABILITY__PROVIDER=opik
NAM_OBSERVABILITY__PROJECT_NAME=my-agent-memory
# LLM
NAM_LLM__PROVIDER=openai
NAM_LLM__MODEL=gpt-4o-mini
# OpenAI API Key
OPENAI_API_KEY=sk-...
Configuration Precedence
When using multiple configuration methods, the precedence is:
1. Explicit Python arguments (highest priority)
2. Environment variables with the NAM_ prefix
3. Default values (lowest priority)
import os
# Set environment variable
os.environ["NAM_NEO4J__URI"] = "bolt://env-server:7687"
# This will use the environment variable
settings = MemorySettings()
print(settings.neo4j.uri) # bolt://env-server:7687
# This will override the environment variable
settings = MemorySettings(
neo4j={"uri": "bolt://explicit-server:7687"}
)
print(settings.neo4j.uri) # bolt://explicit-server:7687
Validation
Settings are validated using Pydantic. Invalid configurations raise ValidationError:
from neo4j_agent_memory import MemorySettings, ExtractionConfig
from pydantic import ValidationError
try:
settings = MemorySettings(
extraction=ExtractionConfig(
gliner_threshold=1.5 # Invalid: must be 0.0-1.0
)
)
except ValidationError as e:
print(f"Configuration error: {e}")