Transaction management
This topic describes transactional management and behavior.
To maintain data integrity and ensure reliable transactional behavior, the Neo4j DBMS supports the ACID properties:
- Atomicity — If any part of a transaction fails, the database state is left unchanged.
- Consistency — Any transaction will leave the database in a consistent state.
- Isolation — During a transaction, modified data cannot be accessed by other operations.
- Durability — The DBMS can always recover the results of a committed transaction.
Specifically:
- All database operations that access the graph, indexes, or the schema must be performed in a transaction.
- The default isolation level is read-committed.
- Data retrieved by traversals is not protected from modification by other transactions.
- Non-repeatable reads may occur (i.e., only write locks are acquired and held until the end of the transaction).
- You can manually acquire write locks on nodes and relationships to achieve a higher level of isolation (serializable).
- Locks are acquired at the node and relationship level.
- Deadlock detection is built into the core transaction management.
Interaction cycle
All database operations that access the graph, indexes, or the schema must be performed in a transaction to ensure the ACID properties. Transactions are single-threaded, confined, and independent. Multiple transactions can be started in a single thread, and they will be independent from each other.
The interaction cycle of working with transactions looks like this:
- Begin a transaction.
- Perform database operations.
- Commit or roll back the transaction.
It is very important to finish each transaction. The transaction will not release the locks or memory it has acquired until it has been finished.
The idiomatic use of transactions in Neo4j is a try-with-resources statement that declares the transaction as one of its resources. Start the transaction and perform the graph operations inside the try block. The last operation in the try block should commit or roll back the transaction, depending on the business logic. In this scenario, try-with-resources serves as a guard against unexpected exceptions and as an additional safety mechanism to ensure that the transaction gets closed no matter what happens inside the statement block. Any transaction that has not been committed is rolled back as part of resource cleanup at the end of the statement. If a transaction has been explicitly committed or rolled back, no resource cleanup is required and closing the transaction is a no-op.
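To make the idiom concrete without depending on the Neo4j API, the following minimal sketch uses a hypothetical MockTransaction that mimics the semantics described above: close() commits if success() was called and rolls back otherwise, so resource cleanup handles any path out of the try block.

```java
// Minimal sketch of the try-with-resources idiom. MockTransaction is a
// hypothetical stand-in for Neo4j's Transaction: close() commits if
// success() was called, and rolls back otherwise.
public class TxIdiom {
    static class MockTransaction implements AutoCloseable {
        boolean marked, committed, rolledBack;
        void success() { marked = true; }
        @Override public void close() {
            if (marked) committed = true;
            else rolledBack = true;
        }
    }

    // Runs a "transaction"; an unexpected exception leaves success() uncalled,
    // so resource cleanup rolls the transaction back.
    static MockTransaction run(boolean fail) {
        MockTransaction tx = new MockTransaction();
        try (MockTransaction t = tx) {
            if (fail) throw new RuntimeException("unexpected failure");
            t.success(); // last operation in the try block
        } catch (RuntimeException ignored) {
            // the guard has already closed (rolled back) the transaction
        }
        return tx;
    }

    public static void main(String[] args) {
        System.out.println(run(false).committed);  // true
        System.out.println(run(true).rolledBack);  // true
    }
}
```

Whether the body completes normally or throws, close() runs exactly once, which is why no explicit rollback call is needed in the catch branch.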
All modifications performed in a transaction are kept in memory. This means that very large updates must be split into several transactions in order to avoid running out of memory.
Isolation levels
Transactions in Neo4j use a read-committed isolation level, which means they will see data as soon as it has been committed but will not see data in other transactions that have not yet been committed. This type of isolation is weaker than serialization but offers significant performance advantages while being sufficient for the overwhelming majority of cases.
In addition, the Neo4j Java API enables explicit locking of nodes and relationships. Using locks gives the opportunity to simulate the effects of higher levels of isolation by obtaining and releasing locks explicitly. For example, if a write lock is taken on a common node or relationship, then all transactions will serialize on that lock — giving the effect of a serialization isolation level.
Lost updates in Cypher
In Cypher it is possible to acquire write locks to simulate improved isolation in some cases. Consider the case where multiple concurrent Cypher queries increment the value of a property. Due to the limitations of the read-committed isolation level, the increments might not result in a deterministic final value.

If there is a direct dependency, Cypher will automatically acquire a write lock before reading. A direct dependency is when the right-hand side of a SET has a dependent property read in the expression, or in the value of a key-value pair in a literal map.

For example, the following query, if run by one hundred concurrent clients, will very likely not increment the property n.prop to 100, unless a write lock is acquired before reading the property value. This is because all queries would read the value of n.prop within their own transaction, and would not see the incremented value from any other transaction that has not yet committed. In the worst-case scenario the final value would be as low as 1, if all threads perform the read before any has committed its transaction.
The following example requires a write lock, and Cypher automatically acquires one:
MATCH (n:Example {id: 42})
SET n.prop = n.prop + 1
This example also requires a write lock, and Cypher automatically acquires one:
MATCH (n)
SET n += {prop: n.prop + 1}
Due to the complexity of determining such a dependency in the general case, Cypher does not cover any of the example cases below.
Variable depending on results from reading the property in an earlier statement:
MATCH (n)
WITH n.prop AS p
// ... operations depending on p, producing k
SET n.prop = k + 1
Circular dependency between properties read and written in the same query:
MATCH (n)
SET n += {propA: n.propB + 1, propB: n.propA + 1}
To ensure deterministic behavior in the more complex cases as well, it is necessary to explicitly acquire a write lock on the node in question. Cypher has no explicit support for this, but it is possible to work around the limitation by writing to a temporary property.
This example acquires a write lock for the node by writing to a dummy property before reading the requested value:
MATCH (n:Example {id: 42})
SET n._LOCK_ = true
WITH n.prop AS p
// ... operations depending on p, producing k
SET n.prop = k + 1
REMOVE n._LOCK_
The SET n._LOCK_ statement before the read of n.prop ensures that the lock is acquired before the read, and no updates will be lost due to the enforced serialization of all concurrent queries on that specific node.
Default locking behavior
- When adding, changing, or removing a property on a node or relationship, a write lock will be taken on that node or relationship.
- When creating or deleting a node, a write lock will be taken on that node.
- When creating or deleting a relationship, a write lock will be taken on that relationship and both of its nodes.
The locks will be added to the transaction and released when the transaction finishes.
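The shape of this behavior — one write lock per entity, acquired on first modification and released only when the transaction finishes — can be sketched with stdlib locks. This is an illustration of the semantics, not Neo4j's internal implementation; the class and method names are invented.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch (not Neo4j internals): one write lock per entity id,
// acquired on first modification and released only when the "transaction" finishes.
public class EntityLocks {
    static final ConcurrentHashMap<Long, ReentrantLock> locks = new ConcurrentHashMap<>();
    final List<ReentrantLock> held = new ArrayList<>();

    void writeLock(long entityId) {
        ReentrantLock lock = locks.computeIfAbsent(entityId, id -> new ReentrantLock());
        if (!held.contains(lock)) {
            lock.lock();
            held.add(lock);   // added to the transaction, held until finish()
        }
    }

    // Creating a relationship locks the relationship and both its nodes.
    void createRelationship(long relId, long startNodeId, long endNodeId) {
        writeLock(relId);
        writeLock(startNodeId);
        writeLock(endNodeId);
    }

    void finish() {
        for (ReentrantLock lock : held) lock.unlock();
        held.clear();
    }

    public static void main(String[] args) {
        EntityLocks tx = new EntityLocks();
        tx.createRelationship(7L, 1L, 2L);
        System.out.println(tx.held.size()); // 3: the relationship and both nodes
        tx.finish();
    }
}
```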
Deadlocks
Since locks are used, deadlocks can happen. Neo4j will, however, detect any deadlock (caused by acquiring a lock) before it happens and throw an exception. The transaction is marked for rollback before the exception is thrown. All locks acquired by the transaction are still held at that point, but will be released when the transaction is finished (in the finally block, as pointed out earlier). Once the locks are released, other transactions that were waiting for locks held by the transaction causing the deadlock can proceed. The work performed by the transaction causing the deadlock can then be retried by the user if needed.
Experiencing frequent deadlocks is an indication that concurrent write requests are arriving in a way that makes it impossible to execute them while maintaining the intended isolation and consistency. The solution is to make sure concurrent updates happen in a reasonable order. For example, given two specific nodes (A and B), adding or deleting relationships to both of these nodes in random order for each transaction will result in deadlocks when two or more transactions do this concurrently. One option is to make sure that updates always happen in the same order (first A, then B). Another option is to make sure that no thread/transaction has writes that conflict with those of another concurrent transaction. This can, for example, be achieved by letting a single thread do all updates of a specific type.
Deadlocks caused by the use of synchronization other than the locks managed by Neo4j can still happen. Since all operations in the Neo4j API are thread-safe unless specified otherwise, there is no need for external synchronization. Other code that requires synchronization should be synchronized in such a way that it never performs any Neo4j operation in the synchronized block.
Deadlock handling: an example
Below, you will find an example of how deadlocks can be handled in procedures, server extensions, or when using Neo4j embedded. The full source code used for the code snippet can be found at DeadlockDocTest.java.
When dealing with deadlocks in code, there are several issues you may want to address:
- Only do a limited number of retries, and fail if a threshold is reached.
- Pause between each attempt to allow the other transaction to finish before trying again.
- A retry loop can be useful not only for deadlocks, but for other types of transient errors as well.
If you do not want to write all the retry code yourself, there is a class called org.neo4j.helpers.TransactionTemplate that will help you achieve what is needed. Below is an example of how to create, customize, and use this template for retries in transactions.
First, define the base template:
TransactionTemplate template = new TransactionTemplate().retries( 5 ).backoff( 3, TimeUnit.SECONDS );
Next, specify the database to use and a function to execute:
Object result = template.with(graphDatabaseService).execute( transaction -> {
Object result1 = null;
return result1;
} );
The operations that could lead to a deadlock should go into the apply method. The TransactionTemplate uses a fluent API for configuration, and you can choose whether to set everything at once or (as in the example) provide some details just before using it. The template enables setting a predicate for which exceptions to retry on, and also allows for monitoring of events that take place.
This example shows how to use a retry loop for handling deadlocks:
Throwable txEx = null;
int RETRIES = 5;
int BACKOFF = 3000;
for ( int i = 0; i < RETRIES; i++ )
{
try ( Transaction tx = graphDatabaseService.beginTx() )
{
Object result = doStuff(tx);
tx.success();
return result;
}
catch ( Throwable ex )
{
txEx = ex;
// Add whatever exceptions to retry on here
if ( !(ex instanceof DeadlockDetectedException) )
{
break;
}
}
// Wait so that we don't immediately get into the same deadlock
if ( i < RETRIES - 1 )
{
try
{
Thread.sleep( BACKOFF );
}
catch ( InterruptedException e )
{
throw new TransactionFailureException( "Interrupted", e );
}
}
}
if ( txEx instanceof TransactionFailureException )
{
throw ((TransactionFailureException) txEx);
}
else if ( txEx instanceof Error )
{
throw ((Error) txEx);
}
else if ( txEx instanceof RuntimeException )
{
throw ((RuntimeException) txEx);
}
else
{
throw new TransactionFailureException( "Failed", txEx );
}
Delete semantics
When deleting a node or a relationship, all properties for that entity are automatically removed, but the relationships of a node are not removed. Neo4j enforces a constraint (upon commit) that all relationships must have a valid start node and end node. In effect, this means that trying to delete a node that still has relationships attached to it will throw an exception upon commit. It is, however, possible to choose in which order to delete the node and the attached relationships, as long as no relationships exist when the transaction is committed.
The delete semantics can be summarized as follows:
- All properties of a node or relationship will be removed when it is deleted.
- A deleted node cannot have any attached relationships when the transaction commits.
- It is possible to acquire a reference to a deleted node or relationship that has not yet been committed.
- Any write operation on a node or relationship after it has been deleted (but not yet committed) will throw an exception.
- After commit, trying to acquire a new reference to a deleted node or relationship, or to work with an old one, will throw an exception.
Creating unique nodes
In many use cases, a certain level of uniqueness is desired among entities. For example, only one user with a certain email address may exist in a system. If multiple concurrent threads naively try to create the user, duplicates will be created.
The following are the main strategies for ensuring uniqueness, and they all work across cluster and single-instance deployments.
Single thread
By using a single thread, no two threads will even try to create a particular entity simultaneously. In a cluster, an external single-threaded client can perform the operations.
Get or create
Defining a uniqueness constraint and using the Cypher MERGE clause is the most efficient way to get or create a unique node. See Unique nodes for more information.
Transaction events
A transaction event handler can be registered to receive Neo4j transaction events. Once it has been registered with a GraphDatabaseService instance, it receives events for transactions before they are committed. Handlers are notified about transactions that have performed any write operation and that will be committed. If Transaction#success() has not been called, or the transaction has been marked as failed with Transaction#failure(), it will be rolled back and no events are sent to the handler.

Before a transaction is committed, the handler's beforeCommit method is called with the entire diff of modifications made in the transaction. At this point the transaction is still running, so changes can still be made. The method may also throw an exception, which will prevent the transaction from being committed. If the transaction is rolled back, a call to the handler's afterRollback method will follow.
The order in which handlers are executed is undefined -- there is no guarantee that changes made by one handler will be seen by other handlers.
If beforeCommit is successfully executed in all registered handlers, the transaction is committed and the afterCommit method is called with the same transaction data. This call also includes the object returned from beforeCommit. In afterCommit the transaction has been closed, and access to anything outside TransactionData requires a new transaction to be opened. A TransactionEventHandler only gets notified about transactions that have changes accessible via TransactionData, so some indexing and schema changes will not trigger these events.