Performance recommendations
- Specify the target database on all queries with the database parameter when creating new sessions. If no database is provided, the driver has to send an extra request to the server to figure out what the default database is. The overhead is minimal for a single session, but becomes significant over hundreds of sessions.

  # Good practice
  driver.session(database="<DB NAME>")

  # Bad practice
  driver.session()
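  For context, a minimal end-to-end sketch of the good practice; the connection URI, credentials, and database name below are placeholders for illustration, not values from this page:

  from neo4j import GraphDatabase

  # Placeholder connection details; adjust for your deployment
  URI = "neo4j://localhost:7687"
  AUTH = ("neo4j", "password")

  with GraphDatabase.driver(URI, auth=AUTH) as driver:
      # Naming the database up front saves the extra round trip on each new session
      with driver.session(database="neo4j") as session:
          session.run("RETURN 1")
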
- Use query parameters instead of hardcoding or concatenating values into queries. This allows the database to properly cache queries.

  # Good practice
  session.run("MATCH (p:Person {name: $name}) RETURN p", name="Alice")

  # Bad practice
  session.run("MATCH (p:Person {name: 'Alice'}) RETURN p")
  session.run("MATCH (p:Person {name: " + name + "}) RETURN p")
- Specify node labels in all queries. To learn how to combine labels, see Cypher — Label expressions.

  # Good practice
  session.run("MATCH (p:Person|Animal {name: $name}) RETURN p", name="Alice")

  # Bad practice
  session.run("MATCH (p {name: $name}) RETURN p", name="Alice")
- Batch queries when creating a lot of records using the WITH and UNWIND Cypher clauses.

  # Good practice
  numbers = [{"value": random()} for _ in range(10000)]
  session.run("""
      WITH $numbers AS batch
      UNWIND batch AS node
      MERGE (n:Number)
      SET n.value = node.value
      """, numbers=numbers,
  )

  # Bad practice
  for _ in range(10000):
      session.run("MERGE (:Number {value: $value})", value=random())

  The most efficient way of performing a first import of large amounts of data into a new database is the neo4j-admin database import command.
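  If the full list is too large to send comfortably in one statement, one possible variant (an illustration, not part of the original recommendation) is to send the same UNWIND batch in fixed-size chunks; it assumes an open session as in the other snippets, and the chunk size is an arbitrary example value:

  from random import random

  numbers = [{"value": random()} for _ in range(10000)]
  CHUNK_SIZE = 1000  # illustrative value; tune for your workload

  for start in range(0, len(numbers), CHUNK_SIZE):
      chunk = numbers[start:start + CHUNK_SIZE]
      # Each chunk is still a single UNWIND batch, so the per-record overhead stays low
      session.run("""
          UNWIND $numbers AS node
          MERGE (n:Number)
          SET n.value = node.value
          """, numbers=chunk,
      )
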
- Filter for properties inline, as opposed to filtering in the WHERE clause.

  # Good practice
  session.run("MATCH (p:Person {name: $name}) RETURN p", name="Alice")

  # Bad practice
  session.run("MATCH (p:Person) WHERE p.name = $name RETURN p", name="Alice")
- Create indexes for properties that you often filter against. For example, if you often look up Person nodes by the name property, it is beneficial to create an index on Person.name. You can create indexes with the CREATE INDEX Cypher clause, for both nodes and relationships. For more information, see Indexes for search performance.

  # Create an index on Person.name
  session.run("CREATE INDEX person_name FOR (n:Person) ON (n.name)")
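  As a quick check (illustrative only, assuming an open session as in the other snippets), you can list existing indexes with the SHOW INDEXES Cypher command to verify that the index exists and is online:

  result = session.run("SHOW INDEXES")
  for record in result:
      # Each record describes one index, including its name and population state
      print(record["name"], record["state"])
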
- Profile your queries to locate queries whose performance can be improved. You can profile queries by prepending them with PROFILE. The server output is available in the profile property of the ResultSummary object.

  res = session.run("PROFILE MATCH (p {name: $name}) RETURN p", name="Alice")
  summary = res.consume()
  print(summary.profile['args']['string-representation'])
  """
  Planner COST
  Runtime PIPELINED
  Runtime version 5.0
  Batch size 128

  +-----------------+----------------+----------------+------+---------+----------------+------------------------+-----------+---------------------+
  | Operator        | Details        | Estimated Rows | Rows | DB Hits | Memory (Bytes) | Page Cache Hits/Misses | Time (ms) | Pipeline            |
  +-----------------+----------------+----------------+------+---------+----------------+------------------------+-----------+---------------------+
  | +ProduceResults | p              | 1              | 1    | 3       |                |                        |           |                     |
  | |               +----------------+----------------+------+---------+----------------+                        |           |                     |
  | +Filter         | p.name = $name | 1              | 1    | 4       |                |                        |           |                     |
  | |               +----------------+----------------+------+---------+----------------+                        |           |                     |
  | +AllNodesScan   | p              | 10             | 4    | 5       | 120            | 9160/0                 | 108.923   | Fused in Pipeline 0 |
  +-----------------+----------------+----------------+------+---------+----------------+------------------------+-----------+---------------------+

  Total database accesses: 12, total allocated memory: 184
  """
  In case some queries are so slow that you cannot even run them in a reasonable time, you can prepend them with EXPLAIN instead of PROFILE. This returns the plan that the server would use to run the query, but without executing it. The server output is available in the plan property of the ResultSummary object.

  res = session.run("EXPLAIN MATCH (p {name: $name}) RETURN p", name="Alice")
  summary = res.consume()
  print(summary.plan['args']['string-representation'])
  """
  Planner COST
  Runtime PIPELINED
  Runtime version 5.0
  Batch size 128

  +-----------------+----------------+----------------+---------------------+
  | Operator        | Details        | Estimated Rows | Pipeline            |
  +-----------------+----------------+----------------+---------------------+
  | +ProduceResults | p              | 1              |                     |
  | |               +----------------+----------------+                     |
  | +Filter         | p.name = $name | 1              |                     |
  | |               +----------------+----------------+                     |
  | +AllNodesScan   | p              | 10             | Fused in Pipeline 0 |
  +-----------------+----------------+----------------+---------------------+

  Total database accesses: ?
  """
- Use concurrency, either in the form of multithreading or with the async version of the driver. This is likely to be more impactful on performance if you parallelize complex and time-consuming queries in your application, but not so much if you run many simple ones (a minimal async sketch follows).
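  For illustration only, here is a minimal sketch of the async driver running two independent queries concurrently; the URI, credentials, database name, and queries are placeholders, not values from this page:

  import asyncio

  from neo4j import AsyncGraphDatabase

  URI = "neo4j://localhost:7687"  # placeholder
  AUTH = ("neo4j", "password")    # placeholder


  async def run_query(driver, query):
      # Each task gets its own session; sessions must not be shared across coroutines
      async with driver.session(database="neo4j") as session:
          result = await session.run(query)
          return await result.single()


  async def main():
      async with AsyncGraphDatabase.driver(URI, auth=AUTH) as driver:
          # Both queries are in flight at the same time
          records = await asyncio.gather(
              run_query(driver, "MATCH (p:Person) RETURN count(p) AS count"),
              run_query(driver, "MATCH (m:Movie) RETURN count(m) AS count"),
          )
          for record in records:
              print(record["count"])


  asyncio.run(main())
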
Glossary
- LTS
  A Long Term Support release is one guaranteed to be supported for a number of years. Neo4j 4.4 is LTS, and Neo4j 5 will also have an LTS version.
- Aura
  Aura is Neo4j’s fully managed cloud service. It comes with both free and paid plans.
- Driver
  A Driver object holds the details required to establish connections with a Neo4j database. Every Neo4j-backed application requires a Driver object.
- Cypher
  Cypher is Neo4j’s graph query language that lets you retrieve data from the graph. It is like SQL, but for graphs.
- APOC
  Awesome Procedures On Cypher (APOC) is a library of (many) functions that cannot be easily expressed in Cypher itself.
- Bolt
  Bolt is the protocol used for interaction between Neo4j instances and drivers. It listens on port 7687 by default.
- ACID
  Atomicity, Consistency, Isolation, Durability (ACID) are properties guaranteeing that database transactions are processed reliably. An ACID-compliant DBMS ensures that the data in the database remains accurate and consistent despite failures.
- eventual consistency
  A database is eventually consistent if it provides the guarantee that all cluster members will, at some point in time, store the latest version of the data.
- causal consistency
  A database is causally consistent if read and write queries are seen by every member of the cluster in the same order. This is stronger than eventual consistency.
- null
  The null marker is not a type but a placeholder for absence of value. For more information, see Cypher Manual — Working with null.
- transaction
  A transaction is a unit of work that is either committed in its entirety or rolled back on failure. An example is a bank transfer: it involves multiple steps that must all succeed or all be reverted, to avoid money being subtracted from one account but not added to the other.
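  As an illustrative sketch only (not part of the glossary): with the driver, a transaction function passed to session.execute_write runs all of its queries in a single transaction, so they are committed together or not at all. The Account nodes, the balance property, and the ids below are assumptions made for this example, and an open session as in the earlier snippets is assumed.

  def transfer(tx, from_id, to_id, amount):
      # Both updates belong to the same transaction managed by execute_write
      tx.run("MATCH (a:Account {id: $id}) SET a.balance = a.balance - $amount",
             id=from_id, amount=amount)
      tx.run("MATCH (a:Account {id: $id}) SET a.balance = a.balance + $amount",
             id=to_id, amount=amount)

  session.execute_write(transfer, "acc-1", "acc-2", 100)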