Chapter 7. Transaction management

In order to fully maintain data integrity and ensure good transactional behavior, Neo4j supports the ACID properties: atomicity, consistency, isolation, and durability.


7.1. Interaction cycle

All database operations that access the graph, indexes, or the schema must be performed in a transaction. Transactions are thread-confined and can be nested as “flat nested transactions”. This means that all nested transactions are added to the scope of the top-level transaction. A nested transaction can mark the top-level transaction for rollback, in which case the entire transaction will be rolled back. It is not possible to roll back only the changes made in a nested transaction.
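As an illustrative sketch of the flat nesting behavior (assuming an embedded graphDatabaseService reference; not taken from the reference code):

try ( Transaction outer = graphDatabaseService.beginTx() )
{
    graphDatabaseService.createNode();

    // What looks like a new transaction is really added to the scope of "outer".
    try ( Transaction nested = graphDatabaseService.beginTx() )
    {
        graphDatabaseService.createNode();
        nested.failure(); // marks the top-level transaction for rollback
    }

    // The entire transaction, including the first createNode(), will be rolled back;
    // there is no way to commit only the work done outside the nested transaction.
    outer.success();
}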

The interaction cycle of working with transactions looks like this:

  1. Begin a transaction.
  2. Perform database operations.
  3. Mark the transaction as successful or not.
  4. Finish the transaction.

It is very important to finish each transaction. The transaction will not release the locks or memory it has acquired until it has been finished. The idiomatic way to use transactions in Neo4j is a try-finally block: start the transaction and then perform the graph operations inside the try block. The last operation in the try block should mark the transaction as successful, while the finally block should finish the transaction. Finishing the transaction performs a commit or rollback, depending on the success status.
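For example, with the Java API, where Transaction is AutoCloseable, a try-with-resources block can play the role of the try-finally block. A minimal sketch, assuming an embedded graphDatabaseService and an illustrative Person label:

try ( Transaction tx = graphDatabaseService.beginTx() )
{
    // perform database operations
    Node node = graphDatabaseService.createNode( Label.label( "Person" ) );
    node.setProperty( "name", "Alice" );

    // mark the transaction as successful
    tx.success();
}
// the transaction is finished (committed or rolled back) when the block exits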

All modifications performed in a transaction are kept in memory. This means that very large updates have to be split into several top-level transactions to avoid running out of memory. They must be top-level transactions, since splitting the work into many nested transactions will just add all the work to the top-level transaction.
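A hedged sketch of splitting a large update into batches of top-level transactions (the batch size and the createNode() workload are placeholders):

final int BATCH_SIZE = 10_000;   // assumption: tune to the available heap
final int TOTAL = 1_000_000;     // hypothetical amount of work

for ( int start = 0; start < TOTAL; start += BATCH_SIZE )
{
    // Each batch is its own top-level transaction, so its changes are committed
    // and released from memory before the next batch begins. This only helps if
    // the loop is not itself running inside an outer transaction.
    try ( Transaction tx = graphDatabaseService.beginTx() )
    {
        for ( int i = start; i < Math.min( start + BATCH_SIZE, TOTAL ); i++ )
        {
            graphDatabaseService.createNode();
        }
        tx.success();
    }
}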

In an environment that makes use of thread pooling, other errors may occur when a transaction is not finished properly. Consider a leaked transaction that did not get finished. It will remain tied to a thread, and when that thread is scheduled to perform new work, starting what looks like a new top-level transaction will actually create a nested transaction. If the leaked transaction state is “marked for rollback” (which will happen if a deadlock was detected), no more work can be performed in that transaction, and trying to do so will result in an error on each call to a write operation.

7.2. Isolation levels

Transactions in Neo4j use a read-committed isolation level, which means they will see data as soon as it has been committed but will not see data in other transactions that have not yet been committed. This type of isolation is weaker than serializable isolation but offers significant performance advantages, while being sufficient for the overwhelming majority of cases.

In addition, the Neo4j Java API enables explicit locking of nodes and relationships. Using locks gives the opportunity to simulate the effects of higher levels of isolation by obtaining and releasing locks explicitly. For example, if a write lock is taken on a common node or relationship, then all transactions will serialize on that lock — giving the effect of a serialization isolation level.
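A hedged sketch of this technique (the Lock label, id property, and lookup are illustrative, not part of any fixed schema):

try ( Transaction tx = graphDatabaseService.beginTx() )
{
    // a common node that all cooperating transactions agree to lock
    Node lockNode = graphDatabaseService.findNode( Label.label( "Lock" ), "id", 42 );

    // every concurrent transaction taking this write lock serializes on it,
    // giving the effect of a serializable isolation level for the guarded work
    tx.acquireWriteLock( lockNode );

    // ... reads and writes that must appear serialized ...

    tx.success();
}
// the lock is released when the transaction finishes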

7.2.1. Lost updates in Cypher

In Cypher it is possible to acquire write locks to simulate improved isolation in some cases. Consider the case where multiple concurrent Cypher queries increment the value of a property. Due to the limitations of the read-committed isolation level, the increments might not result in a deterministic final value. If there is a direct dependency, Cypher will automatically acquire a write lock before reading. A direct dependency is when the right-hand side of a SET has a dependent property read in the expression, or in the value of a key-value pair in a literal map.

For example, the following query, if run by one hundred concurrent clients, will very likely not increment the property n.prop to 100, unless a write lock is acquired before reading the property value. This is because all queries would read the value of n.prop within their own transaction, and would not see the incremented value from any other transaction that has not yet committed. In the worst case scenario the final value would be as low as 1, if all threads perform the read before any has committed their transaction.

Requires a write lock, and Cypher automatically acquires one. 

MATCH (n:X {id: 42})
SET n.prop = n.prop + 1

Also requires a write lock, and Cypher automatically acquires one. 

SET n += { prop: n.prop + 1 }

Due to the complexity of determining such a dependency in the general case, Cypher does not cover any of the following example cases:

Variable depending on results from reading the property in an earlier statement. 

WITH n.prop as p
// ... operations depending on p, producing k
SET n.prop = k + 1

Circular dependency between properties read and written in the same query. 

SET n += { propA: n.propB + 1, propB: n.propA + 1 }

To ensure deterministic behavior also in the more complex cases, it is necessary to explicitly acquire a write lock on the node in question. In Cypher there is no explicit support for this, but it is possible to work around this limitation by writing to a temporary property.

Acquires a write lock for the node by writing to a dummy property before reading the requested value. 

MATCH (n:X {id: 42})
SET n._LOCK_ = true
WITH n.prop as p
// ... operations depending on p, producing k
SET n.prop = k + 1

The existence of the SET n._LOCK_ statement before the read of n.prop ensures that the lock is acquired before the read action, and no updates will be lost, due to the enforced serialization of all concurrent queries on that specific node.

7.3. Default locking behavior

  • When adding, changing or removing a property on a node or relationship a write lock will be taken on the specific node or relationship.
  • When creating or deleting a node a write lock will be taken for the specific node.
  • When creating or deleting a relationship a write lock will be taken on the specific relationship and both its nodes.

The locks will be added to the transaction and released when the transaction finishes.

7.4. Deadlocks

7.4.1. Understanding deadlocks

Since locks are used it is possible for deadlocks to happen. Neo4j will however detect any deadlock (caused by acquiring a lock) before they happen and throw an exception. Before the exception is thrown the transaction is marked for rollback. All locks acquired by the transaction are still being held but will be released when the transaction is finished (in the finally block as pointed out earlier). Once the locks are released other transactions that were waiting for locks held by the transaction causing the deadlock can proceed. The work performed by the transaction causing the deadlock can then be retried by the user if needed.

Experiencing frequent deadlocks is an indication of concurrent write requests happening in such a way that it is not possible to execute them while also living up to the intended isolation and consistency. The solution is to make sure concurrent updates happen in a reasonable way. For example, given two specific nodes (A and B), adding or deleting relationships to both of these nodes in random order for each transaction will result in deadlocks when two or more transactions do that concurrently. One solution is to make sure that updates always happen in the same order (first A, then B), as sketched below. Another solution is to make sure that each thread/transaction does not perform any writes that conflict with those of another concurrent transaction on the same node or relationship. This can, for example, be achieved by letting a single thread do all updates of a specific type.
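A hedged sketch of the first approach (nodeA and nodeB are hypothetical references to the two common nodes, and the relationship type is illustrative):

try ( Transaction tx = graphDatabaseService.beginTx() )
{
    // every transaction locks the common nodes in the same order: first A, then B
    tx.acquireWriteLock( nodeA );
    tx.acquireWriteLock( nodeB );

    // ... add or delete relationships involving nodeA and nodeB ...
    nodeA.createRelationshipTo( nodeB, RelationshipType.withName( "CONNECTED" ) );

    tx.success();
}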

Deadlocks caused by synchronization mechanisms other than the locks managed by Neo4j can still happen. Since all operations in the Neo4j API are thread safe unless specified otherwise, there is no need for external synchronization. Other code that requires synchronization should be synchronized in such a way that it never performs any Neo4j operation inside the synchronized block.

7.4.2. Deadlock handling example code

Below you’ll find examples of how deadlocks can be handled in procedures, server extensions or when using Neo4j embedded.

The full source code used for the code snippets can be found at

When dealing with deadlocks in code, there are several issues you may want to address:

  • Only do a limited amount of retries, and fail if a threshold is reached.
  • Pause between each attempt to allow the other transaction to finish before trying again.
  • A retry-loop can be useful not only for deadlocks, but for other types of transient errors as well.

In the following sections you’ll find example code in Java which shows how this can be implemented.

Handling deadlocks using TransactionTemplate

If you don’t want to write all the code yourself, there is a class called TransactionTemplate (in the org.neo4j.helpers package) that will help you achieve what’s needed. Below is an example of how to create, customize, and use this template for retries in transactions.

First, define the base template:

TransactionTemplate template = new TransactionTemplate(  ).retries( 5 ).backoff( 3, TimeUnit.SECONDS );

Next, specify the database to use and a function to execute:

Object result = template.with(graphDatabaseService).execute( transaction -> {
    Object result1 = null;
    // ... perform the operations that might deadlock here, assigning their outcome to result1 ...
    return result1;
} );

The operations that could lead to a deadlock should go into the apply method.

The TransactionTemplate uses a fluent API for configuration, and you can choose whether to set everything at once, or (as in the example) provide some details just before using it. The template allows setting a predicate for what exceptions to retry on, and also allows for easy monitoring of events that take place.

Handling deadlocks using a retry loop

If you want to roll your own retry-loop code, see below for inspiration. Here’s an example of what a retry block might look like:

Throwable txEx = null;
int RETRIES = 5;
int BACKOFF = 3000;
for ( int i = 0; i < RETRIES; i++ )
{
    try ( Transaction tx = graphDatabaseService.beginTx() )
    {
        Object result = doStuff( tx );
        tx.success();
        return result;
    }
    catch ( Throwable ex )
    {
        txEx = ex;

        // Add whatever exceptions to retry on here
        if ( !(ex instanceof DeadlockDetectedException) )
        {
            break;
        }
    }

    // Wait so that we don't immediately get into the same deadlock
    if ( i < RETRIES - 1 )
    {
        try
        {
            Thread.sleep( BACKOFF );
        }
        catch ( InterruptedException e )
        {
            throw new TransactionFailureException( "Interrupted", e );
        }
    }
}

if ( txEx instanceof TransactionFailureException )
    throw ((TransactionFailureException) txEx);
else if ( txEx instanceof Error )
    throw ((Error) txEx);
else if ( txEx instanceof RuntimeException )
    throw ((RuntimeException) txEx);
else
    throw new TransactionFailureException( "Failed", txEx );

The above is the gist of what such a retry block would look like, and which you can customize to fit your needs.

7.5. Delete semantics

When deleting a node or a relationship, all properties for that entity will be automatically removed, but the relationships of a node will not be removed.

Neo4j enforces a constraint (upon commit) that all relationships must have a valid start node and end node. In effect this means that trying to delete a node that still has relationships attached to it will throw an exception upon commit. It is however possible to choose in which order to delete the node and the attached relationships as long as no relationships exist when the transaction is committed.
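A hedged sketch of deleting a node together with all of its relationships in the same transaction (the User label, email property, and lookup value are illustrative):

try ( Transaction tx = graphDatabaseService.beginTx() )
{
    Node node = graphDatabaseService.findNode( Label.label( "User" ), "email", "alice@example.com" );

    // delete the attached relationships first, then the node itself,
    // so that no relationship is left dangling at commit time
    for ( Relationship relationship : node.getRelationships() )
    {
        relationship.delete();
    }
    node.delete();

    tx.success();
}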

The delete semantics can be summarized in the following bullets:

  • All properties of a node or relationship will be removed when it is deleted.
  • A deleted node can not have any attached relationships when the transaction commits.
  • It is possible to acquire a reference to a deleted relationship or node that has not yet been committed.
  • Any write operation on a node or relationship after it has been deleted (but not yet committed) will throw an exception.
  • After commit, trying to acquire a new reference to, or work with an old reference to, a deleted node or relationship will throw an exception.

7.6. Creating unique nodes

In many use cases, a certain level of uniqueness is desired among entities. You could for instance imagine that only one user with a certain e-mail address may exist in a system. If multiple concurrent threads naively try to create the user, duplicates will be created. There are three main strategies for ensuring uniqueness, and they all work across High Availability and single-instance deployments.

7.6.1. Single thread

By using a single thread, no two threads will even try to create a particular entity simultaneously. On High Availability, an external single-threaded client can perform the operations on the cluster.

7.6.2. Get or create

The preferred way to get or create a unique node is to use unique constraints and Cypher. See Section 4.13.1, “Get or create unique node using Cypher and unique constraints” for more information.
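As a hedged sketch of that approach from the Java API (the User label, email property, and literal value are illustrative; note that schema changes and data changes must be made in separate transactions):

// create the unique constraint once, up front
try ( Transaction tx = graphDatabaseService.beginTx() )
{
    graphDatabaseService.execute( "CREATE CONSTRAINT ON (u:User) ASSERT u.email IS UNIQUE" );
    tx.success();
}

// MERGE either matches the existing node or creates it; the constraint guarantees that
// concurrent attempts for the same email address cannot both create a node
try ( Transaction tx = graphDatabaseService.beginTx() )
{
    graphDatabaseService.execute( "MERGE (u:User {email: 'alice@example.com'}) RETURN u" );
    tx.success();
}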

By using put-if-absent functionality, entity uniqueness can be guaranteed using an explicit index. Here the explicit index acts as the lock and will only lock the smallest part needed to guarantee uniqueness across threads and transactions.

See Section 4.13.2, “Get or create unique node using an explicit index” for how to do this using the core Java API. When using the REST API, see REST Documentation → Uniqueness.

7.6.3. Pessimistic locking

While this is a working solution, please consider using the preferred method outlined above instead.

By using explicit, pessimistic locking, unique creation of entities can be achieved in a multi-threaded environment. It is most commonly done by locking on a single or a set of common nodes.

See Section 4.13.3, “Pessimistic locking for node creation” for how to do this using the core Java API.

7.7. Transaction events

A transaction event handler can be registered to receive Neo4j transaction events. Once it has been registered with a GraphDatabaseService instance, it receives events for transactions before they are committed. Handlers are notified about transactions that have performed a write operation and that will be committed. If Transaction#success() has not been called, or the transaction has been marked as failed with Transaction#failure(), it will be rolled back and no events are sent to the handler.

Before a transaction is committed, the handler’s beforeCommit method is called with the entire diff of modifications made in the transaction. At this point the transaction is still running, so changes can still be made. The method may also throw an exception, which will prevent the transaction from being committed. If the transaction is rolled back, a call to the handler’s afterRollback method will follow.

The order in which handlers are executed is undefined — there is no guarantee that changes made by one handler will be seen by other handlers.

If beforeCommit is successfully executed in all registered handlers the transaction is committed and the afterCommit method is called with the same transaction data. This call also includes the object returned from beforeCommit.

In afterCommit the transaction has been closed, and access to anything outside TransactionData requires a new transaction to be opened. A TransactionEventHandler is only notified about transactions that have changes accessible via TransactionData, so some indexing and schema changes will not trigger these events.
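A hedged sketch of such a handler (the node-counting logic and the println are purely illustrative), registered on an embedded graphDatabaseService:

graphDatabaseService.registerTransactionEventHandler( new TransactionEventHandler.Adapter<Integer>()
{
    @Override
    public Integer beforeCommit( TransactionData data ) throws Exception
    {
        // the transaction is still running here; the returned value is passed on
        // to afterCommit (and afterRollback)
        int createdNodes = 0;
        for ( Node node : data.createdNodes() )
        {
            createdNodes++;
        }
        return createdNodes;
    }

    @Override
    public void afterCommit( TransactionData data, Integer createdNodes )
    {
        // the transaction is closed here; anything outside TransactionData would
        // require a new transaction
        System.out.println( "Committed a transaction that created " + createdNodes + " nodes" );
    }
} );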