Chapter 3. Reference

This chapter is the reference documentation for Neo4j OGM. It covers the programming model, APIs, concepts, annotations and technical details of the Neo4j OGM.

3.1. Introduction

Neo4j OGM is a fast object-graph mapping library for Neo4j, optimised for server-based installations utilising Cypher.

It aims to simplify development with the Neo4j graph database and like JPA, it uses annotations on simple POJO domain objects to do so.

With a focus on performance, the OGM introduces a number of innovations, including:

  • non-reflection based classpath scanning for much faster startup times;
  • variable-depth persistence to allow you to fine-tune requests according to the characteristics of your graph;
  • smart object-mapping to reduce redundant requests to the database, improve latency and minimise wasted CPU cycles; and
  • user-definable session lifetimes, helping you to strike a balance between memory-usage and server request efficiency in your applications.

3.1.1. Overview

This reference documentation is broken down into sections to help the user understand specifics of how the OGM works.

Getting started
Getting started can sometimes be a chore. What versions of the OGM do you need? Where do you get them from? What build tool should you use? Getting Started is the perfect place to, well… get started!
Drivers, logging, properties, configuration via Java: how do you make sense of all the options? Configuration has got you covered.
Annotating your Domain Objects
To get started with your OGM application, you need only your domain model and the annotations provided by the library. You use annotations to mark domain objects to be reflected by nodes and relationships of the graph database. For individual fields the annotations allow you to declare how they should be processed and mapped to the graph. For property fields and references to other entities this is straightforward. Because Neo4j is a schema-free database, the OGM uses a simple mechanism to map Java types to Neo4j nodes using labels. Relationships between entities are first class citizens in a graph database and therefore worth a section of its own describing their usage in Neo4j OGM.
Connecting to the Database
Managing how you connect to the database is important. Connecting to the Database has all the details on what needs to happen to get you up and running.
Indexing and Primary Constraints
Indexing is an important part of any database. The Neo4j OGM provides a variety of features to support the management of Indexes as well as the ability to query your domain objects by something other than the internal Neo4j ID. Indexing has everything you will want to know when it comes to getting that working.
Interacting with the Graph Model
Neo4j OGM offers a session for interacting with the mapped entities and the Neo4j graph database. Neo4j uses transactions to guarantee the integrity of your data and Neo4j OGM supports this fully. The implications of this are described here. To use advanced functionality like Cypher queries, a basic understanding of the graph data model is required. The graph data model is explained in the chapter about Neo4j.
Type Conversion
The OGM provides support for default and bespoke type conversions, which allow you to configure how certain data types are mapped to nodes or relationships in Neo4j. See Type Conversion for more details.
Filtering your domain objects
Filters provides a simple API to append criteria to your stock Session.loadX() behaviour. This is covered in more detail in Filters.
Reacting to Persistence events
The Events mechanism allows users to register event listeners for handling persistence events related both to top-level objects being saved as well as connected objects. Event handling discusses all the aspects of working with events.
Testing in your application
Sometimes you want to be able to run your tests against an in-memory version of the OGM. Testing goes into more detail of how to set that up.
Support for High Availability
For those using Neo4j Enterprise, support for high availability is extremely important. The chapter on High Availability goes into all the options the OGM provides to support this.

3.2. Getting Started

3.2.1. Versions

Consult the version table to determine which version of the OGM to use with a particular version of Neo4j and related technologies.

Neo4j-OGM Version | Neo4j Version | Bolt Version# | Spring Data Neo4j Version | Spring Boot Version

(Only fragments of the table cells survived extraction: 2.3.x, 3.0.x, 3.1.x; 2.3.x, 3.0.x; 4.1.2 - 4.1.6+; 2.2.x, 2.3.x; 4.1.0 - 4.1.1; 2.1.x, 2.2.x, 2.3.x.)

* These versions are no longer actively developed or supported.

# Not applicable to Embedded and HTTP drivers

3.2.2. Dependency Management

For building an application, your build automation tool needs to be configured to include the Neo4j OGM dependencies.

The OGM dependencies consist of neo4j-ogm-core, together with the relevant dependency declarations on the driver you want to use. OGM 2.1 provides support for connecting to Neo4j by configuring one of the following Drivers:

  • neo4j-ogm-http-driver - Uses HTTP to communicate between the OGM and a remote Neo4j instance.
  • neo4j-ogm-embedded-driver - Connects directly to the Neo4j database engine.
  • neo4j-ogm-bolt-driver - Uses native Bolt protocol to communicate between the OGM and a remote Neo4j instance.

If you’re not using a particular driver, you don’t need to declare it.

Neo4j OGM projects can be built using Maven, Gradle or any other build system that utilises Maven’s artifact repository structure.

Maven

In the <dependencies> section of your pom.xml add the following:

Maven dependencies. 

<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-ogm-core</artifactId>
    <version>2.1</version>
</dependency>

<!-- Only add if you're using the HTTP driver -->
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-ogm-http-driver</artifactId>
    <version>2.1</version>
    <scope>runtime</scope>
</dependency>

<!-- Only add if you're using the Embedded driver -->
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-ogm-embedded-driver</artifactId>
    <version>2.1</version>
    <scope>runtime</scope>
</dependency>

<!-- Only add if you're using the Bolt driver -->
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-ogm-bolt-driver</artifactId>
    <version>2.1</version>
    <scope>runtime</scope>
</dependency>
If you plan on using a development (i.e. SNAPSHOT) version of the OGM you will need to add the following to the <repositories> section of your pom.xml:

Neo4j Snapshot Repository. 

<repository>
    <name>Neo4j Maven 2 snapshot repository</name>
    <url></url>
</repository>

Gradle

Ensure the following dependencies are added to your build.gradle:

Gradle dependencies. 

dependencies {
    compile 'org.neo4j:neo4j-ogm-core:2.1'
    runtime 'org.neo4j:neo4j-ogm-http-driver:2.1' // Only add if you're using the HTTP driver
    runtime 'org.neo4j:neo4j-ogm-embedded-driver:2.1' // Only add if you're using the Embedded driver
    runtime 'org.neo4j:neo4j-ogm-bolt-driver:2.1' // Only add if you're using the Bolt driver
}

If you plan on using a development (i.e. SNAPSHOT) version of the OGM you will need to add the following to your build.gradle:

Neo4j Snapshot Repository. 

repositories {
    maven { url "" }
}

3.3. Configuration

3.3.1. Configuration method

There are two ways to supply configuration to the OGM:

  • using a properties file; and
  • programmatically using Java.

Using a properties file

Unless you supply an explicit Configuration object to the SessionFactory (see below), the OGM will attempt to auto-configure itself using a properties file that it expects to find at the root of the classpath.

If you want to configure the OGM using a properties file with a different filename, you must set a System property or Environment variable pointing to the alternative configuration file you want to use.

Programmatically using Java

In cases where you are not able to provide configuration via a properties file, you can configure the OGM programmatically instead.

The Configuration object provides a fluent API to set various configuration options. This object then needs to be supplied to the SessionFactory constructor in order to be configured.
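Putting this together, a programmatic setup might look like the following sketch. The driver class name, URI and package name are illustrative placeholders, not values from this document:

```java
import org.neo4j.ogm.config.Configuration;
import org.neo4j.ogm.session.SessionFactory;

public class OgmSetup {
    public static void main(String[] args) {
        // Build the configuration fluently; the values here are example placeholders.
        Configuration configuration = new Configuration();
        configuration.driverConfiguration()
                .setDriverClassName("org.neo4j.ogm.drivers.bolt.driver.BoltDriver")
                .setURI("bolt://user:password@localhost");

        // Supply the Configuration to the SessionFactory constructor,
        // together with the packages to scan for domain entities.
        SessionFactory sessionFactory = new SessionFactory(configuration, "com.example.domain");
    }
}
```

This is a sketch of the fluent style described above; the SessionFactory itself is covered in Connecting to the Graph.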

3.3.2. Driver Configuration

HTTP Driver

Table 3.1. Basic HTTP Driver Configuration

Java Configuration
Configuration configuration = new Configuration();
configuration.driverConfiguration()
    .setDriverClassName("org.neo4j.ogm.drivers.http.driver.HttpDriver")
    .setURI("http://user:password@localhost:7474");

Bolt Driver

Note that for the URI, if no port is specified, the default Bolt port of 7687 is used. Otherwise, a port can be specified with bolt://neo4j:password@localhost:1234

Also, the Bolt driver allows you to define a connection pool size, which refers to the maximum number of sessions per URL. This property is optional and defaults to 50.

Table 3.2. Basic Bolt Driver Configuration

Java Configuration
Configuration configuration = new Configuration();
configuration.driverConfiguration()
    .setDriverClassName("org.neo4j.ogm.drivers.bolt.driver.BoltDriver")
    .setURI("bolt://neo4j:password@localhost");

A timeout to the database with the Bolt driver can be set by updating your database’s neo4j.conf. The exact setting to change can be found in the Neo4j operations manual.

Embedded Driver

You should use the Embedded driver if you don’t want to use a client-server model, or if your application is running as a Neo4j Unmanaged Extension. You can specify a permanent data store location to provide durability of your data after your application shuts down, or you can use an impermanent data store, which will only exist while your application is running.

As of 2.1.0 the Neo4j OGM embedded driver no longer ships with the Neo4j kernel. Users are expected to provide this dependency through their dependency management system. See Getting Started for more details.

Table 3.3. Permanent Data Store Embedded Driver Configuration

Java Configuration
Configuration configuration = new Configuration();
configuration.driverConfiguration()
    .setDriverClassName("org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver")
    .setURI("file:///var/tmp/graph.db");

To use an impermanent data store which will be deleted on shutdown of the JVM, you just omit the URI attribute.

Table 3.4. Impermanent Data Store Embedded Driver Configuration

Java Configuration
Configuration configuration = new Configuration();
configuration.driverConfiguration()
    .setDriverClassName("org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver");

Embedded Driver in an Unmanaged Extension

When your application is running as an unmanaged extension inside the Neo4j server itself, you will need to set up the Driver configuration slightly differently. In this situation, an existing GraphDatabaseService will already be available via a @Context annotation, and you must configure the Components framework to enable the OGM to use the provided instance. Note that your application should typically do this only once, before setting up a Configuration object.

    Components.setDriver(new EmbeddedDriver(graphDatabaseService));

Credentials

If you are using the HTTP or Bolt Driver you have a number of different ways to supply credentials to the Driver Configuration.

Java Configuration

// credentials embedded in the connection URI
Configuration configuration = new Configuration();
configuration.driverConfiguration()
     .setURI("bolt://user:password@localhost");

// credentials supplied separately
Configuration configuration = new Configuration();
configuration.driverConfiguration()
     .setCredentials("user", "password");

// credentials supplied as a Credentials object
Credentials credentials = new UsernameAndPasswordCredentials("user", "password");
Configuration configuration = new Configuration();
configuration.driverConfiguration()
     .setCredentials(credentials);
Note: Currently only Basic Authentication is supported by Neo4j, so the only Credentials implementation supplied by the OGM is UsernameAndPasswordCredentials.

Transport Layer Security (TLS/SSL)

The Bolt and Http drivers also allow you to connect to Neo4j over a secure channel. These rely on Transport Layer Security (aka SSL) and require the installation of a signed certificate on the server.

In certain situations (e.g. some cloud environments) it may not be possible to install a signed certificate even though you still want to use an encrypted connection.

To support this, both drivers have configuration settings allowing you to bypass certificate checking, although they differ in their implementation.

Both these strategies leave you vulnerable to a MITM attack. You should probably not use them unless your servers are behind a secure firewall.

Bolt

Java Configuration

# Encryption level (TLS), optional, defaults to REQUIRED.
# Valid values are NONE, REQUIRED

# Trust strategy, optional, not used if not specified.

# Trust certificate file, required if trust.strategy is specified

Configuration configuration = new Configuration();

TRUST_ON_FIRST_USE means that the Bolt Driver will trust the first connection to a host to be safe and intentional. On subsequent connections, the driver will verify that the host is the same as on that first connection.

HTTP

Java Configuration
trust.strategy = ACCEPT_UNSIGNED

Configuration configuration = new Configuration();

The ACCEPT_UNSIGNED strategy permits the Http Driver to accept Neo4j’s default snakeoil.cert (and any other) unsigned certificate when connecting over HTTPS.

3.3.3. Logging

Neo4j OGM uses SLF4J to log statements. In production, you can set the log level in a file called logback.xml to be found at the root of the classpath. Please see the Logback manual for further details.

3.4. Annotating Entities

3.4.1. @NodeEntity: The basic building block

The @NodeEntity annotation is used to declare that a POJO class is an entity backed by a node in the graph database. Entities handled by the OGM must have one empty public constructor to allow the library to construct the objects.

Fields on the entity are by default mapped to properties of the node. Fields referencing other node entities (or collections thereof) are linked with relationships.

@NodeEntity annotations are inherited from super-types and interfaces. It is not necessary to annotate your domain objects at every inheritance level.

If the label attribute is set then this will replace the default label applied to the node in the database. The default label is just the simple class name of the annotated entity. All parent classes (excluding java.lang.Object) are also added as labels so that retrieving a collection of nodes via a parent type is supported.
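For example, the default label can be overridden via the label attribute. The User class here is illustrative, not from this document:

```java
import org.neo4j.ogm.annotation.NodeEntity;

// Nodes created from this class receive the label "Person"
// instead of the default simple class name "User".
@NodeEntity(label = "Person")
public class User {
    private Long id;
    private String name;
}
```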

Entity fields can be annotated with @Property, @GraphId, @Transient or @Relationship. All annotations live in the org.neo4j.ogm.annotation package. Marking a field with the transient modifier has the same effect as annotating it with @Transient; it won’t be persisted to the graph database.

Persisting an annotated entity. 

@NodeEntity
public class Actor extends DomainObject {

   @GraphId
   private Long id;

   @Property(name="name")
   private String fullName;

   @Relationship(type="ACTED_IN", direction=Relationship.OUTGOING)
   private List<Movie> filmography;
}

@NodeEntity(label="Film")
public class Movie {

   @GraphId Long id;

   @Property(name="title")
   private String name;
}


Saving a simple object graph containing one actor and one film using the above annotated objects would result in the following being persisted in Neo4j.

(:Actor:DomainObject {name:'Tom Cruise'})-[:ACTED_IN]->(:Film {title:'Mission Impossible'})

When annotating your objects, you can apply the annotations to either the fields or their accessor methods, but bear in mind the EntityAccessStrategy ordering (described under Object Read/Write Ordering) when annotating your domain model.

Persisting a non-annotated entity. 

public class Actor extends DomainObject {

   private Long id;
   private String fullName;
   private List<Movie> filmography;
}

public class Movie {

   private Long id;
   private String name;
}



In this case, a graph similar to the following would be persisted.

(:Actor:DomainObject {fullName:'Tom Cruise'})-[:FILMOGRAPHY]->(:Movie {name:'Mission Impossible'})

While this will map successfully to the database, it’s important to understand that the names of the properties and relationship types are tightly coupled to the class’s member names. Renaming any of these fields will cause parts of the graph to map incorrectly, hence the recommendation to use annotations.

Runtime managed labels

As stated above, the label applied to a node is the contents of the @NodeEntity label property, or if not specified, it will default to the simple class name of the entity. Sometimes it might be necessary to add and remove additional labels to a node at runtime. We can do this using the @Labels annotation. Let’s provide a facility for adding additional labels to the Student entity:

@NodeEntity
public class Student {

    @Labels
    private List<String> labels = new ArrayList<>();
}


Now, upon save, the node’s labels will correspond to the entity’s class hierarchy plus whatever the contents of the backing field are. We can use one @Labels field per class hierarchy - it should be exposed or hidden from sub-classes as appropriate.
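A hedged sketch of how this might be used at runtime; the getLabels() accessor and the surrounding method are assumptions, not shown in this document:

```java
// Hypothetical helper; assumes Student exposes its @Labels field via getLabels().
void graduate(Session session, Student student) {
    student.getLabels().add("Graduated");  // add a runtime-managed label
    session.save(student);                 // node now carries Student plus "Graduated"
}
```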

3.4.2. @Relationship: Connecting node entities

Every field of an entity that references one or more other node entities is backed by relationships in the graph. These relationships are managed by the OGM automatically.

The simplest kind of relationship is a single object reference pointing to another entity (1:1). In this case, the reference does not have to be annotated at all, although the annotation may be used to control the direction and type of the relationship. When setting the reference, a relationship is created when the entity is persisted. If the field is set to null, the relationship is removed.

Single relationship field. 

public class Movie {
    private Actor topActor;
}

It is also possible to have fields that reference a set of entities (1:N). Neo4j OGM supports the following types of entity collections:

  • java.util.Vector
  • java.util.List, backed by a java.util.ArrayList
  • java.util.SortedSet, backed by a java.util.TreeSet
  • java.util.Set, backed by a java.util.HashSet
  • Arrays

Node entity with relationships. 

public class Actor {
    @Relationship(type = "TOP_ACTOR", direction = Relationship.INCOMING)
    private Set<Movie> topActorIn;

    @Relationship(type = "ACTS_IN")
    private Set<Movie> movies;
}

For graph to object mapping, the automatic transitive loading of related entities depends on the depth of the horizon specified on the call to Session.load(). The default depth of 1 implies that related node or relationship entities will be loaded and have their properties set, but none of their related entities will be populated.
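For instance, the horizon can be widened on a per-call basis; the actorId value and session variable are assumed for illustration:

```java
// Depth 1 (the default): the actor and its directly related movies,
// but none of the movies' own related entities.
Actor actor = session.load(Actor.class, actorId);

// Depth 2: additionally populates entities related to those movies.
Actor actorAndCoStars = session.load(Actor.class, actorId, 2);
```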

If this Set of related entities is modified, the changes are reflected in the graph once the root object (Actor, in this case) is saved. Relationships are added, removed or updated according to the differences between the root object that was loaded and the corresponding one that was saved.

Neo4j OGM ensures by default that there is only one relationship of a given type between any two given entities. The exception to this rule is when a relationship is specified as either OUTGOING or INCOMING between two entities of the same type. In this case, it is possible to have two relationships of the given type between the two entities, one relationship in either direction.

If you don’t care about the direction then you can specify direction=Relationship.UNDIRECTED which will guarantee that the path between two node entities is navigable from either side.

For example, consider the PARTNER relationship between two companies, where (A)-[:PARTNER_OF]→(B) implies (B)-[:PARTNER_OF]→(A). The direction of the relationship does not matter; only the fact that a PARTNER_OF relationship exists between these two companies is of importance. Hence an UNDIRECTED relationship is the correct choice, ensuring that there is only one relationship of this type between two partners and navigating between them from either entity is possible.
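The PARTNER_OF example above might be modelled like this; the Company class itself is illustrative:

```java
import java.util.Set;

import org.neo4j.ogm.annotation.NodeEntity;
import org.neo4j.ogm.annotation.Relationship;

@NodeEntity
public class Company {
    private Long id;

    // UNDIRECTED: navigable from either company, and the OGM maintains
    // a single PARTNER_OF relationship between any two partners.
    @Relationship(type = "PARTNER_OF", direction = Relationship.UNDIRECTED)
    private Set<Company> partners;
}
```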

The direction attribute on a @Relationship defaults to OUTGOING. Any fields or methods backed by an INCOMING relationship must be explicitly annotated with an INCOMING direction.

Using more than one relationship of the same type

In some cases, you want to model two different aspects of a conceptual relationship using the same relationship type. Here is a canonical example:

Clashing Relationship Type. 

class Person {
    private Long id;

    @Relationship(type="OWNS")
    private Car car;

    @Relationship(type="OWNS")
    private Pet pet;
}
This will work just fine, however, please be aware that this is only because the end node types (Car and Pet) are different. If you wanted a person to own two cars, for example, then you’d have to use a Collection of cars or use differently-named relationship types.

Ambiguity in relationships

In cases where the relationship mappings could be ambiguous, the recommendation is that:

  • The objects be navigable in both directions.
  • The @Relationship annotations are explicit. This means that if the entity has setter methods, they must be annotated.

Examples of ambiguous relationship mappings are multiple relationship types that resolve to the same types of entities, in a given direction, but whose domain objects are not navigable in both directions.

3.4.3. @RelationshipEntity: Rich relationships

To access the full data model of graph relationships, POJOs can also be annotated with @RelationshipEntity, making them relationship entities. Just as node entities represent nodes in the graph, relationship entities represent relationships. Such POJOs allow you to access and manage properties on the underlying relationships in the graph.

Fields in relationship entities are similar to node entities, in that they’re persisted as properties on the relationship. For accessing the two endpoints of the relationship, two special annotations are available: @StartNode and @EndNode. A field annotated with one of these annotations will provide access to the corresponding endpoint, depending on the chosen annotation.

For controlling the relationship-type a String attribute called type is available on the @RelationshipEntity annotation. Like the simple strategy for labelling node entities, if this is not provided then the name of the class is used to derive the relationship type, although it’s converted into SNAKE_CASE to honour the naming conventions of Neo4j relationships. As of the current version of the OGM, the type must be specified on the @RelationshipEntity annotation as well as its corresponding @Relationship annotations.

You must include @RelationshipEntity plus exactly one @StartNode field and one @EndNode field on your relationship entity classes or the OGM will throw a MappingException when reading or writing. It is not possible to use relationship entities in a non-annotated domain model.

A simple Relationship entity. 

@NodeEntity
public class Actor {
    Long id;

    @Relationship(type="PLAYED_IN")
    private Role playedIn;
}

@RelationshipEntity(type="PLAYED_IN")
public class Role {
    @GraphId   private Long relationshipId;
    @Property  private String title;
    @StartNode private Actor actor;
    @EndNode   private Movie movie;
}

@NodeEntity
public class Movie {
    private Long id;
    private String title;
}

Note that the Actor also contains a reference to a Role. This is important for persistence, even when saving the Role directly, because paths in the graph are written starting with nodes first and then relationships are created between them. Therefore, you need to structure your domain models so that relationship entities are reachable from node entities for this to work correctly.

Additionally, the OGM will not persist a relationship entity that doesn’t have any properties defined. If you don’t want to include properties in your relationship entity then you should use a plain @Relationship instead. Multiple relationship entities which have the same property values and relate the same nodes are indistinguishable from each other and are represented as a single relationship by the OGM.

The @RelationshipEntity annotation must appear on all leaf subclasses if they are part of a class hierarchy representing relationship entities. This annotation is optional on superclasses.

3.4.4. @GraphId: Neo4j id field

This is a required field which must be of type java.lang.Long. It is used by Neo4j OGM to store the node or relationship-id to re-connect the entity to the graph. As such, user code should never assign a value to it.

It must not be a primitive type because then an object in a transient state cannot be represented, as the default value 0 would point to the reference node.

Do not rely on this ID for long running applications. Neo4j will reuse deleted node IDs. It is recommended that users come up with their own unique identifier for their domain objects (or use a UUID).

If the field is simply named 'id' then it is not necessary to annotate it with @GraphId as the OGM will use it automatically.

Entity Equality

Entity equality can be a grey area. There are many debatable issues, such as whether natural keys or database identifiers best describe equality and the effects of versioning over time. Neo4j OGM does not impose a dependency upon a particular style of equals() or hashCode() implementation. The graph-id field is directly checked to see if two entities represent the same node and a 64-bit hash code is used for dirty checking, so you’re not forced to write your code in a certain way!

You are free to write your equals and hashcode in a domain specific way for managed entities. However, we strongly advise developers to not use the @GraphId field in these implementations. This is because when you first persist an entity, its hashcode changes because the OGM populates the database ID on save. This causes problems if you had inserted the newly created entity into a hash-based collection before saving.
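One approach consistent with this advice is to base equality on an immutable business key assigned at construction time, rather than on the @GraphId field. A minimal sketch, where the Person class and its uuid field are illustrative:

```java
import java.util.Objects;
import java.util.UUID;

public class Person {
    private Long id;                 // graph id: deliberately NOT used in equality
    private final String uuid;       // immutable business key, assigned on construction
    private String name;

    public Person(String name) {
        this.uuid = UUID.randomUUID().toString();
        this.name = name;
    }

    public String getUuid() {
        return uuid;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Person)) return false;
        return uuid.equals(((Person) o).uuid);
    }

    @Override
    public int hashCode() {
        return Objects.hash(uuid);
    }
}
```

Because uuid never changes, the hash code is stable before and after the entity is first saved, so instances remain findable in hash-based collections.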

3.4.5. @Property: Optional annotation for property fields

As we touched on earlier, it is not necessary to annotate property fields as they are persisted by default. Fields that are annotated as @Transient or with transient are exempted from persistence. All fields that contain primitive values are persisted directly to the graph. All fields convertible to a String using the conversion services will be stored as a string. Neo4j OGM includes default type converters that deal with the following types:

  • java.util.Date to a String in the ISO 8601 format: "yyyy-MM-dd'T'HH:mm:ss.SSSXXX"
  • java.math.BigInteger to a String property
  • java.math.BigDecimal to a String property
  • binary data (as byte[] or Byte[]) to base-64 String
  • java.lang.Enum types using the enum’s name() method and Enum.valueOf()

Collections of primitive or convertible values are stored as well. They are converted to arrays of their type or strings respectively. Custom converters are also specified by using @Convert - this is discussed in detail later on.

Node property names can be explicitly assigned by setting the name attribute. For example @Property(name="last_name") String lastName. The node property name defaults to the field name when not specified.

Property fields to be persisted to the graph must not be declared final.

3.4.6. Object Read/Write Ordering

Neo4j OGM supports mapping annotated and non-annotated objects models. It’s possible to save any POJO without annotations to the graph, as the framework applies conventions to decide what to do. This is useful in cases when you don’t have control over the classes that you want to persist. The recommended approach, however, is to use annotations wherever possible, since this gives greater control and means that code can be refactored safely without risking breaking changes to the labels and relationships in your graph.

Annotated and non-annotated objects can be used within the same project without issue. There is an EntityAccessStrategy used to control how objects are read from or written to. The default implementation of this uses the following convention:

  1. Annotated method (getter/setter).
  2. Annotated field.
  3. Plain method (getter/setter).
  4. Plain field.

The object graph mapping comes into play whenever an entity is constructed from a node or relationship. This can happen explicitly during the lookup or create operations of the Session, but also implicitly when executing any graph operation that returns nodes or relationships where mapped entities are expected.

Entities handled by the OGM must have one empty public constructor to allow the library to construct the objects.

Unless annotations are used to specify otherwise, the framework will attempt to map any of an object’s "simple" fields to node properties and any rich composite objects to related nodes. A "simple" field is any primitive, boxed primitive or String or arrays thereof, essentially anything that naturally fits into a Neo4j node property. For related entities the type of a relationship is inferred by the bean property name.

3.5. Indexing

Indexing is used in Neo4j to quickly find nodes and relationships from which to start graph operations. Indexes are also employed to ensure uniqueness of elements with certain labels and properties.

Please note that the lucene-based manual indexes are deprecated with Neo4j 2.0. The default index is now based on labels and schema indexes and the related old APIs have been deprecated as well. The "legacy" index framework should only be used for fulltext and spatial indexes which are not currently supported via schema-based indexes.

Previous versions of the OGM restricted developers to looking up objects only by the internal Neo4j ID. This can be a problem when running Neo4j across restarts, as internal IDs can be reused and result in key duplication. In order to support querying for unique objects via business or synthetic (e.g. UUID) properties, the OGM can now enforce uniqueness constraints on certain properties to make lookups easier. On top of this, many users want to save their objects via this same property and in effect have the OGM run a MERGE operation using the same property to identify the object.

3.5.1. Indexes and Constraints

Indexes and unique constraints based on labels and properties are supported as of version 2.1.0 with the @Index annotation. Any property field annotated with @Index will have an appropriate schema index created. For @Index(unique=true) a constraint is created instead.

You may add as many indexes or constraints as you like to your class. If you annotate a field in a class that is part of an inheritance hierarchy then the index or constraint will only be added to that class’s label.

3.5.2. Index Creation

By default index management is set to None.

If you would like the OGM to manage your schema creation there are several ways to go about it.

Only classes marked with @Index will be used. Indexes will always be generated with the containing class’s label and the annotated property’s name. Index generation behaviour can be configured either via a property in the configuration file or programmatically in Java.

Below is a table of all options available for configuring Auto-Indexing.

none (default)
Nothing is done with index and constraint annotations; the field is simply marked as using an index.

validate
Makes sure the connected database has all indexes and constraints in place before starting up.

assert
Drops all constraints and indexes on startup, then builds indexes based on whatever is represented in the OGM by @Index. Handy during development.

dump
Dumps the generated constraints and indexes to a file. Good for setting up environments.
Java example:
config.autoIndexConfiguration().setAutoIndex("dump");
config.autoIndexConfiguration().setDumpDir("<a directory>");
config.autoIndexConfiguration().setDumpFilename("<a filename>");

3.5.3. Primary Constraints

The OGM can now be used to look up or merge entities based on a unique constraint. This unique constraint is known as a primary constraint. It can be enabled through the index annotation like so: Index(unique=true, primary=true).

Fields marked with unique=false, primary=true together will be ignored.

Improvements have also been made to Session to allow you to load a particular object via this primary constraint.

T Session.load(Class<T>, U id), where T is the highest class in a class hierarchy and a primary constraint of type U is defined on a field in that same class.
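For illustration, assuming a User class with a primary constraint on a String field (the class and field names are hypothetical):

```java
import org.neo4j.ogm.annotation.Index;
import org.neo4j.ogm.annotation.NodeEntity;

@NodeEntity
public class User {
    private Long id;

    // Primary constraint: enables lookup by this property instead of the graph id.
    @Index(unique = true, primary = true)
    private String login;
}

// Elsewhere, with an open session:
// User user = session.load(User.class, "alice");  // resolved via the primary constraint
```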

There may only be one primary=true index per class hierarchy.

Merging

When an object is saved, a check is made to see if that object’s class has a primary constraint associated with it. If it does, that constraint will always be used as part of a Cypher MERGE operation; otherwise a regular CREATE will be used.

3.6. Connecting to the Graph

In order to interact with mapped entities and the Neo4j graph, your application will require a Session, which is provided by the SessionFactory.

3.6.1. SessionFactory

The SessionFactory is needed by OGM to create instances of Session as required. This also sets up the object-graph mapping metadata when constructed, which is then used across all Session objects that it creates. The packages to scan for domain object metadata should be provided to the SessionFactory constructor. Multiple packages may be provided as well. If you would rather just pass in specific classes you can also do that via an overloaded constructor. The SessionFactory must also be configured. There are two ways this can be done. Please see the section below on Configuration for further details.

Multiple packages. 

SessionFactory sessionFactory = new SessionFactory("first.package.domain", "second.package.domain",...);

Note that the SessionFactory should typically be set up once during the life of your application.

3.7. Using the OGM Session

The Session provides the core functionality to persist objects to the graph and load them in a variety of ways.

3.7.1. Session Configuration

A Session is used to drive the object-graph mapping framework. It keeps track of the changes that have been made to entities and their relationships. The reason it does this is so that only entities and relationships that have changed get persisted on save, which is particularly efficient when working with large graphs. Once an entity is tracked by the session, reloading this entity within the scope of the same session will result in the session cache returning the previously loaded entity. However, the subgraph in the session will expand if the entity or its related entities retrieve additional relationships from the graph.

If you want to fetch fresh data from the graph, then this can be achieved by using a new session or clearing the current session’s context using Session.clear().

The lifetime of the Session can be managed in code. For example, it might be associated with a single fetch-update-save cycle or unit of work.

If your application relies on long-running sessions then you may not see changes made from other users and find yourself working with outdated objects. On the other hand, if your sessions have too narrow a scope then your save operations can be unnecessarily expensive, as updates will be made to all objects if the session isn’t aware of those that were originally loaded.

There’s therefore a trade off between the two approaches. In general, the scope of a Session should correspond to a "unit of work" in your application.

3.7.2. Basic operations

Basic operations are limited to CRUD operations on entities and executing arbitrary Cypher queries; more low-level manipulation of the graph database is not possible.

There is no way to manipulate relationship- and node-objects directly.

Given that the Neo4j OGM framework is driven by Cypher queries alone, there’s no way to work directly with Node and Relationship objects in remote server mode. Similarly, Traversal Framework operations are not supported, again because the underlying query-driven model doesn’t handle it in an efficient way.

If you find yourself in trouble because of the omission of these features, then your best options are:

  1. Write a Cypher query to perform the operations on the nodes/relationships instead.
  2. Write a Neo4j server extension and call it over REST from your application.

Of course, there are pros and cons to both of these approaches, but these are largely outside the scope of this document. In general, for low-level, very high-performance operations like complex graph traversals you’ll get the best performance by writing a server-side extension. For most purposes, though, Cypher will be performant and expressive enough to perform the operations that you need.

3.7.3. Persisting entities

Session allows you to save, load, loadAll and delete entities, with transaction handling and exception translation managed for you. The eagerness with which objects are retrieved is controlled by specifying the 'depth' argument to any of the load methods.

Entity persistence is performed through the save() method on the underlying Session object.

Under the bonnet, the implementation of Session has access to the MappingContext that keeps track of the data that has been loaded from Neo4j during the lifetime of the session. Upon invocation of save() with an entity, it checks the given object graph for changes compared with the data that was loaded from the database. The differences are used to construct a Cypher query that persists the deltas to Neo4j before repopulating its state based on the response from the database server.

The OGM doesn’t automatically commit when a transaction closes, so an explicit call to save(…​) is required in order to persist changes to the database.

Example 3.1. Persisting entities
public class Person {
   private String name;
   public Person(String name) { = name;
   }
}

// Store Michael in the database.
Person p = new Person("Michael");;

Save depth

As mentioned previously, save(entity) is overloaded as save(entity, depth), where depth dictates the number of related entities to save starting from the given entity. The default depth, -1, will persist properties of the specified entity as well as every modified entity in the object graph reachable from it. This means that all affected objects in the entity model that are reachable from the root object being persisted will be modified in the graph. This is the recommended approach because it means you can persist all your changes in one request. The OGM is able to detect which objects and relationships require changing, so you won’t flood Neo4j with a bunch of objects that don’t require modification. You can change the persistence depth to any value, but you should not make it less than the value used to load the corresponding data or you run the risk of not having changes you expect to be made actually being persisted in the graph. A depth of 0 will persist only the properties of the specified entity to the database.

Specifying the save depth is handy when it comes to dealing with complex collections, that could potentially be very expensive to load.

Example 3.2. Relationship save cascading
class Movie {
    String title;
    Actor topActor;
    public void setTopActor(Actor actor) {
        topActor = actor;
    }
}

class Actor {
    String name;
}

Movie movie = new Movie("Polar Express");
Actor actor = new Actor("Tom Hanks");
movie.setTopActor(actor);


Neither the actor nor the movie has been assigned a node in the graph. If we were to call, then the OGM would first create a node for the movie. It would then note that there is a relationship to an actor, so it would save the actor in a cascading fashion. Once the actor has been persisted, it will create the relationship from the movie to the actor. All of this will be done atomically in one transaction.

The important thing to note here is that if is called instead, then only the actor will be persisted. The reason for this is that the actor entity knows nothing about the movie entity - it is the movie entity that has the reference to the actor. Also note that this behaviour is not dependent on any configured relationship direction on the annotations. It is a matter of Java references and is not related to the data model in the database.

In the following example, the actor and the movie are both managed entities, having both been previously persisted to the graph:

Example 3.3. Cascade for modified fields

In this case, even though the movie has a reference to the actor, the property change on the actor will be persisted by the call to save(movie). The reason for this is, as mentioned above, that cascading will be done for fields that have been modified and reachable from the root object being saved.

In the example below,, 1) will persist all modified objects reachable from user up to one level deep. This includes posts and groups but not entities related to them, namely author, comments, members or location. A persistence depth of 0, i.e., 0), will save only the properties on the user, ignoring any related entities. In this case, fullName is persisted but not friends, posts or groups.

Persistence Depth. 

public class User  {

   private Long id;
   private String fullName;
   private List<Post> posts;
   private List<Group> groups;
}

public class Post {

   private Long id;
   private String name;
   private String content;
   private User author;
   private List<Comment> comments;
}

public class Group {

   private Long id;
   private String name;
   private List<User> members;
   private Location location;
}

3.7.4. Loading Entities

Entities can be loaded from the OGM through the use of the session.loadXXX() methods or via session.query()/session.queryForObject(), which will accept your own Cypher queries (see the section below on Cypher queries).

Neo4j OGM includes the concept of persistence horizon (depth). On any individual request, the persistence horizon indicates how many relationships should be traversed in the graph when loading or saving data. A horizon of zero means that only the root object’s properties will be loaded or saved, a horizon of 1 will include the root object and all its immediate neighbours, and so on. This attribute is enabled via a depth argument available on all session methods, but the OGM chooses sensible defaults so that you don’t have to specify the depth attribute unless you want to change the default values.

Load depth

By default, loading an instance will map that object’s simple properties and its immediately-related objects (i.e. depth = 1). This helps to avoid accidentally loading the entire graph into memory, but allows a single request to fetch not only the object of immediate interest, but also its closest neighbours, which are likely also to be of interest. This strategy attempts to strike a balance between loading too much of the graph into memory and having to make repeated requests for data.

If parts of your graph structure are deep and not broad (for example a linked-list), you can increase the load horizon for those nodes accordingly. Finally, if your graph will fit into memory, and you’d like to load it all in one go, you can set the depth to -1.

On the other hand when fetching structures which are potentially very "bushy" (e.g. lists of things that themselves have many relationships), you may want to set the load horizon to 0 (depth = 0) to avoid loading thousands of objects most of which you won’t actually inspect.

When loading entities with a custom depth less than the one used previously to load the entity within the session, existing relationships will not be flushed from the session; only new entities and relationships are added. This means that reloading entities will always result in retaining related objects loaded at the highest depth within the session for those entities. If it is required to load entities with a lower depth than previously requested, this must be done on a new session, or after clearing your current session with Session.clear().

Cypher queries

Cypher is Neo4j’s powerful query language. It is understood by all the different drivers in the OGM which means that your application code should run identically, whichever driver you choose to use. This makes application development much easier: you can use the Embedded Driver for your integration tests, and then plug in the Http Driver or the Bolt Driver when deploying your code into a production client-server environment.

The Session also allows execution of arbitrary Cypher queries via its query, queryForObject and queryForObjects methods. Cypher queries that return tabular results should be passed into the query method, which returns a Result. This consists of QueryStatistics, representing the statistics of modifying Cypher statements where applicable, and an Iterable<Map<String,Object>> containing the raw data, which can be either used as-is or converted into a richer type if needed. The keys in each Map correspond to the names listed in the return clause of the executed Cypher query.
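Consuming those raw rows is plain Java. The following is a self-contained sketch, with a hand-built list standing in for the Iterable<Map<String,Object>> a Result exposes; the helper name is invented for the example:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class ResultRows {

    // Extracts one column from the rows; keys correspond to the RETURN clause,
    // e.g. "RETURN AS name" produces a "name" key in each row map.
    public static List<String> names(Iterable<Map<String, Object>> rows) {
        List<String> names = new ArrayList<>();
        for (Map<String, Object> row : rows) {
            names.add((String) row.get("name"));
        }
        return names;
    }

    public static void main(String[] args) {
        // Stand-in for the data an OGM Result would iterate over.
        List<Map<String, Object>> rows = Arrays.asList(
                Collections.<String, Object>singletonMap("name", "A"),
                Collections.<String, Object>singletonMap("name", "B"));
        System.out.println(names(rows)); // [A, B]
    }
}
```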

queryForObject specifically queries for entities and as such, queries supplied to this method must return nodes and not individual properties.

In the current version, custom queries do not support paging, sorting or a custom depth. In addition, mapping a path to domain entities is not supported; as such, a path should not be returned from a Cypher query. Instead, return nodes and relationships to have them mapped to domain entities.

Modifications made to the graph via Cypher queries directly will not be reflected in your domain objects within the session.

Sorting and paging

Neo4j OGM supports sorting and paging of results when using the Session object. The Session object methods take independent arguments for sorting and pagination.


Paging. 

Iterable<World> worlds = session.loadAll(World.class,
                                        new Pagination(pageNumber, itemsPerPage), depth);

Sorting. 

Iterable<World> worlds = session.loadAll(World.class,
                                        new SortOrder().add("name"), depth);

Sort in descending order. 

Iterable<World> worlds = session.loadAll(World.class,
                                        new SortOrder().add(SortOrder.Direction.DESC, "name"));

Sorting with paging. 

Iterable<World> worlds = session.loadAll(World.class,
                                        new SortOrder().add("name"), new Pagination(pageNumber, itemsPerPage));

Neo4j OGM does not yet support sorting and paging on custom queries.

3.7.5. Transactions

Neo4j is a transactional database, only allowing operations to be performed within transaction boundaries.

Transactions can be managed explicitly by calling the beginTransaction() method on the Session followed by a commit() or rollback() as required.

Transaction management. 

Transaction tx = session.beginTransaction();
Person person = session.load(Person.class,personId);
Concert concert= session.load(Concert.class,concertId);
Hotel hotel = session.load(Hotel.class,hotelId);

try {
    bookConcert(person, concert);
    bookHotel(person, hotel);
    tx.commit();
} catch (SoldOutException e) {
    tx.rollback();
}

In the example above, the transaction is committed only when both a concert ticket and a hotel room are available; otherwise, neither booking is made.

If you do not manage a transaction in this manner, auto commit transactions are provided implicitly for Session methods such as save, load, delete, execute and so on.

3.8. Type Conversion

The object-graph mapping framework provides support for default and bespoke type conversions, which allow you to configure how certain data types are mapped to nodes or relationships in Neo4j.

3.8.1. Built-in type conversions

Neo4j OGM will automatically perform the following type conversions:

  • java.util.Date to a String in the ISO 8601 format: "yyyy-MM-dd'T'HH:mm:ss.SSSXXX"
  • Any object that extends java.lang.Number to a String property
  • binary data (as byte[] or Byte[]) to base-64 String as Cypher does not support byte arrays
  • java.lang.Enum types using the enum’s name() method and Enum.valueOf()
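The defaults above can be illustrated with plain JDK classes. This is a sketch of the equivalent conversions, not OGM code; the class and method names are invented for the example:

```java
import java.text.SimpleDateFormat;
import java.util.Base64;
import java.util.Date;
import java.util.TimeZone;

public class ConversionDefaults {

    // java.util.Date is stored as an ISO 8601 string.
    static String toIso8601(Date date) {
        SimpleDateFormat iso = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSXXX");
        iso.setTimeZone(TimeZone.getTimeZone("UTC"));
        return iso.format(date);
    }

    // byte[] is stored as a base-64 string, because Cypher has no byte array type.
    static String toBase64(byte[] data) {
        return Base64.getEncoder().encodeToString(data);
    }

    public static void main(String[] args) {
        System.out.println(toIso8601(new Date(0))); // 1970-01-01T00:00:00.000Z
        System.out.println(toBase64(new byte[]{1, 2, 3})); // AQID
        // enums round-trip via name() and valueOf()
        System.out.println(Thread.State.valueOf(Thread.State.NEW.name())); // NEW
    }
}
```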

Two Date converters are provided "out of the box":

  1. @DateString
  2. @DateLong

By default, the OGM will use the @DateString converter as described above. However if you want to use a different date format, you can annotate your entity attribute accordingly:

Example of user-defined date format. 

public class MyEntity {

    @DateString("yy-MM-dd")
    private Date entityDate;
}

Alternatively, if you want to store Dates as long values, use the @DateLong annotation:

Example of date stored as a long value. 

public class MyEntity {

    @DateLong
    private Date entityDate;
}

Collections of primitive or convertible values are also automatically mapped by converting them to arrays of their type or strings respectively.

3.8.2. Custom Type Conversion

In order to define bespoke type conversions for particular members, you can annotate a field or method with @Convert. One of two converter implementations can be used. For simple cases, where a single property maps to a single field with type conversion, specify an implementation of AttributeConverter.

Example of mapping a single property to a field. 

public class MoneyConverter implements AttributeConverter<DecimalCurrencyAmount, Integer> {

   @Override
   public Integer toGraphProperty(DecimalCurrencyAmount value) {
       return value.getFullUnits() * 100 + value.getSubUnits();
   }

   @Override
   public DecimalCurrencyAmount toEntityAttribute(Integer value) {
       return new DecimalCurrencyAmount(value / 100, value % 100);
   }
}


You could then apply this to your class as follows:

public class Invoice {

   @Convert(MoneyConverter.class)
   private DecimalCurrencyAmount value;
}
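The arithmetic in MoneyConverter can be exercised on its own. This hypothetical, self-contained sketch uses a minimal stand-in for the DecimalCurrencyAmount type (which is not defined in this document):

```java
public class MoneyConversionDemo {

    // Minimal stand-in for the DecimalCurrencyAmount type used above.
    static final class DecimalCurrencyAmount {
        final int fullUnits;
        final int subUnits;
        DecimalCurrencyAmount(int fullUnits, int subUnits) {
            this.fullUnits = fullUnits;
            this.subUnits = subUnits;
        }
    }

    // Same arithmetic as MoneyConverter.toGraphProperty: 3 units, 21 sub-units -> 321
    static int toGraphProperty(DecimalCurrencyAmount value) {
        return value.fullUnits * 100 + value.subUnits;
    }

    // Same arithmetic as MoneyConverter.toEntityAttribute: 321 -> 3 units, 21 sub-units
    static DecimalCurrencyAmount toEntityAttribute(int value) {
        return new DecimalCurrencyAmount(value / 100, value % 100);
    }

    public static void main(String[] args) {
        int stored = toGraphProperty(new DecimalCurrencyAmount(3, 21));
        DecimalCurrencyAmount back = toEntityAttribute(stored);
        System.out.println(stored); // 321
        System.out.println(back.fullUnits + " " + back.subUnits); // 3 21
    }
}
```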

When more than one node property is to be mapped to a single field, use: CompositeAttributeConverter.

Example of mapping multiple node entity properties onto a single instance of a type. 

/**
 * This class maps latitude and longitude properties onto a Location type
 * that encapsulates both of these attributes.
 */
public class LocationConverter implements CompositeAttributeConverter<Location> {

    @Override
    public Map<String, ?> toGraphProperties(Location location) {
        Map<String, Double> properties = new HashMap<>();
        if (location != null)  {
            properties.put("latitude", location.getLatitude());
            properties.put("longitude", location.getLongitude());
        }
        return properties;
    }

    @Override
    public Location toEntityAttribute(Map<String, ?> map) {
        Double latitude = (Double) map.get("latitude");
        Double longitude = (Double) map.get("longitude");
        if (latitude != null && longitude != null) {
            return new Location(latitude, longitude);
        }
        return null;
    }
}

And just as with an AttributeConverter, a CompositeAttributeConverter could be applied to your class as follows:

public class Person {

   @Convert(LocationConverter.class)
   private Location location;
}

3.9. Filters

Filters provide a mechanism for customising the WHERE clause of the Cypher generated by the OGM. Filters can be chained together with boolean operators and associated with a comparison operator. Additionally, each filter contains a FilterFunction. A filter function can be provided when the filter is instantiated; otherwise, a PropertyComparison is used by default.

In the example below, we return a collection containing any satellites that are manned.

Example of using a default property comparison Filter. 

Collection<Satellite> satellites = session.loadAll(Satellite.class, new Filter("manned", true));

3.10. Events

As of version 2.0.4, Neo4j OGM supports persistence events.

3.10.1. Event types

There are four types of events:

  • Event.LIFECYCLE.PRE_SAVE
  • Event.LIFECYCLE.POST_SAVE
  • Event.LIFECYCLE.PRE_DELETE
  • Event.LIFECYCLE.POST_DELETE

Events are fired for every @NodeEntity or @RelationshipEntity object that is created, updated or deleted, or otherwise affected by a save or delete request. This includes:

  • The top-level objects being created, modified or deleted.
  • Any connected objects that have been modified, created or deleted.
  • Any objects affected by the creation, modification or removal of a relationship in the graph.

Events will only fire when one of the or session.delete() methods is invoked. Directly executing Cypher queries against the database using session.query() will not trigger any events.

3.10.2. Interfaces

The Events mechanism introduces two new interfaces, Event and EventListener.

The Event interface

The Event interface is implemented by PersistenceEvent. Whenever an application wishes to handle an event it will be given an instance of Event, which exposes the following methods:

public interface Event {

    Object getObject();
    LIFECYCLE getLifeCycle();

    enum LIFECYCLE {
        PRE_SAVE, POST_SAVE, PRE_DELETE, POST_DELETE
    }
}

The Event Listener interface

The EventListener interface provides methods allowing implementing classes to handle each of the different Event types:

public interface EventListener {

    void onPreSave(Event event);
    void onPostSave(Event event);
    void onPreDelete(Event event);
    void onPostDelete(Event event);
}


Although the Event interface allows you to retrieve the event type, in most cases, your code won’t need it because the EventListener provides methods to capture each type of event explicitly.

3.10.3. Registering an EventListener

There are two ways to register an event listener:

  • on an individual Session
  • across multiple sessions by using a SessionFactory

In this example we register an EventListener to inject a UUID onto new objects before they’re saved:

class AddUuidPreSaveEventListener implements EventListener {

    @Override
    public void onPreSave(Event event) {
        DomainEntity entity = (DomainEntity) event.getObject();
        if (entity.getId() == null) {
            entity.UUID = UUID.randomUUID();
        }
    }

    @Override
    public void onPostSave(Event event) {
    }

    @Override
    public void onPreDelete(Event event) {
    }

    @Override
    public void onPostDelete(Event event) {
    }
}

EventListener eventListener = new AddUuidPreSaveEventListener();

// register it on an individual session
session.register(eventListener);

// remove it.
session.dispose(eventListener);

// register it across multiple sessions
sessionFactory.register(eventListener);

// remove it.
sessionFactory.deregister(eventListener);

It’s possible and sometimes desirable to add several EventListener objects to the session, depending on the application’s requirements. For example, our business logic might require us to add a UUID to a new object, as well as manage wider concerns such as ensuring that a particular persistence event won’t leave our domain model in a logically inconsistent state. It’s usually a good idea to separate these concerns into different objects with specific responsibilities, rather than having one single object try to do everything.

3.10.4. Using the EventListenerAdapter

The EventListener above is fine, but we’ve had to create three methods for events we don’t intend to handle. It would be preferable if we didn’t have to do this each time we needed an EventListener.

The EventListenerAdapter is an abstract class providing a no-op implementation of the EventListener interface. If you don’t need to handle all the different types of persistence event you can create a subclass of EventListenerAdapter instead and override just the methods for the event types you’re interested in.

For example:

class PreSaveEventListener extends EventListenerAdapter {
    @Override
    public void onPreSave(Event event) {
        DomainEntity entity = (DomainEntity) event.getObject();
        if (entity.UUID == null) {
            entity.UUID = UUID.randomUUID();
        }
    }
}

3.10.5. Disposing of an EventListener

Something to bear in mind is that once an EventListener has been registered it will continue to respond to any and all persistence events. Sometimes you may want only to handle events for a short period of time, rather than for the duration of the entire session.

If you’re done with an EventListener you can stop it from firing any more events by invoking session.dispose(…​), passing in the EventListener to be disposed of.

The process of collecting persistence events prior to dispatching them to any EventListeners adds a small performance overhead to the persistence layer. Consequently, the OGM is configured to suppress the event collection phase if there are no EventListeners registered with the Session. Using dispose() when you’re finished with an EventListener is good practice!

To remove an event listener across multiple sessions, use the deregister method on the SessionFactory.

3.10.6. Connected objects

As mentioned previously, events are not only fired for the top-level objects being saved but for all their connected objects as well.

Connected objects are any objects reachable in the domain model from the top-level object being saved. Connected objects can be many levels deep in the domain model graph.

In this way, the Events mechanism allows us to capture events for objects that we didn’t explicitly save ourselves.

// initialise the graph
Folder folder = new Folder("folder");
Document a = new Document("a");
Document b = new Document("b");
folder.addDocuments(a, b);;

// change the names of both documents and save one of them
a.setName("newA");
b.setName("newB");

// because `b` is reachable from `a` (via the common shared folder) they will both be persisted,
// with PRE_SAVE and POST_SAVE events being fired for each of them;

3.10.7. Events and types

When we delete a Type, all the nodes with a label corresponding to that Type are deleted in the graph. The affected objects are not enumerated by the Events mechanism (they may not even be known). Instead, _DELETE events will be raised for the Type:

    // 2 events will be fired when the type is deleted.
    // - PRE_DELETE Document.class
    // - POST_DELETE Document.class
    session.deleteAll(Document.class);

3.10.8. Events and collections

When saving or deleting a collection of objects, separate events are fired for each object in the collection, rather than for the collection itself.

Document a = new Document("a");
Document b = new Document("b");

// 4 events will be fired when the collection is saved.
// - PRE_SAVE a
// - PRE_SAVE b
// - POST_SAVE a
// - POST_SAVE b, b));

3.10.9. Event ordering

Events are partially ordered. PRE_ events are guaranteed to fire before any POST_ event within the same save or delete request. However, the internal ordering of the PRE_ events and POST_ events within the request is undefined.

Example: Partial ordering of events. 

Document a = new Document("a");
Document b = new Document("b");

// Although the save order of objects is implied by the request, the PRE_SAVE event for `b`
// may be fired before the PRE_SAVE event for `a`, and similarly for the POST_SAVE events.
// However, all PRE_SAVE events will be fired before any POST_SAVE event., b));

3.10.10. Relationship events

The previous examples show how events fire when the underlying node representing an entity is updated or deleted in the graph. Events are also fired when a save or delete request results in the modification, addition or deletion of a relationship in the graph.

For example, if you delete a Document object that is a member of a Folder’s documents collection, events will be fired for the Document as well as the Folder, to reflect the fact that the relationship between the folder and the document has been removed in the graph.

Example: Deleting a Document attached to a Folder. 

Folder folder = new Folder();
Document a = new Document("a");
folder.addDocuments(a);;

// When we delete the document, the following events will be fired
// - PRE_DELETE a
// - POST_DELETE a
// - PRE_SAVE folder  (1)
// - POST_SAVE folder
session.delete(a);


Note that the folder events are _SAVE events, not _DELETE events. The folder was not deleted.

The event mechanism does not try to synchronise your domain model. In this example, the folder is still holding a reference to the Document, even though it no longer exists in the graph. As always, your code must take care of domain model synchronisation.

3.10.11. Event uniqueness

The event mechanism guarantees to not fire more than one event of the same type for an object in a save or delete request.

Example: Multiple changes, single event of each type. 

 // Even though we're making changes to both the folder node, and its relationships,
 // only one PRE_SAVE and one POST_SAVE event will be fired.

3.11. Testing

In 2.0, the Neo4jIntegrationTestRule class has been removed from the test-jar.

In previous versions this class provided access to an underlying GraphDatabaseService instance, allowing you to independently verify your code was working correctly. However it is incompatible with the Driver interfaces in 2.0, as it always requires you to connect using HTTP.

The recommended approach is to configure an Embedded Driver for testing as described above, although you can still use an in-process HTTP server if you wish (see below). Please note that if you’re just using the Embedded Driver for your tests you do not need to include any additional test jars in your pom.

3.11.1. Log levels

When running unit tests, it can be useful to see what the OGM is doing, and in particular to see the Cypher requests being transferred between your application and the database. The OGM uses slf4j along with Logback as its logging framework and by default the log level for all the OGM components is set to WARN, which does not include any Cypher output. To change the OGM log level, create a file logback-test.xml in your test resources folder, configured as shown below:


<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d %5p %40.40c:%4L - %m%n</pattern>
        </encoder>
    </appender>

    <!--
      ~ Set the required log level for the OGM components here.
      ~ To just see Cypher statements set the level to "info".
      ~ For finer-grained diagnostics, set the level to "debug".
    -->
    <logger name="org.neo4j.ogm" level="info" />

    <root level="warn">
        <appender-ref ref="console" />
    </root>

</configuration>

3.11.2. Using an in-process server for testing

If you don’t want to use the Embedded Driver to run your tests, it is still possible to create an in-process HTTP server instead. Just like the Embedded Driver, a TestServer exposes a GraphDatabaseService instance which you can use in your tests. You should always close the server when you’re done with it.

You’ll first need to add the OGM test dependency to your pom:
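A sketch of that dependency, assuming the neo4j-ogm-test artifact and a Maven version property (adjust both to match your OGM release):

```xml
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-ogm-test</artifactId>
    <version>${neo4j-ogm.version}</version>
    <scope>test</scope>
</dependency>
```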


Next, create a TestServer instance:

testServer = new TestServer.Builder()
                .enableAuthentication(true)    // defaults to false
                .transactionTimeoutSeconds(10) // defaults to 30 seconds
                .port(2222)                    // defaults to a random non-privileged port
                .build();

A TestServer is backed by an impermanent database store, and configures the OGM to use an HttpDriver. The driver authenticates automatically if you have requested an authenticating server, so you don’t have to provide additional credentials.

Example test class using an in-process HTTP server. 

private static TestServer testServer;

@BeforeClass
public static void setupTestServer() {
    testServer = new TestServer.Builder().build();
}

@AfterClass
public static void teardownTestServer() {
    testServer.shutdown();
}

@Test
public void shouldCreateUser() { User("Bilbo Baggins"));

    GraphDatabaseService db = testServer.getGraphDatabaseService();
    try (Transaction tx = db.beginTx()) {
        Result r = db.execute("MATCH (u:User {name: 'Bilbo Baggins'}) RETURN u");
        assertTrue(r.hasNext());
        tx.success();
    }
}

3.12. High Availability (HA) Support

The clustering features are available in Neo4j Enterprise Edition.

Neo4j offers two separate solutions for ensuring redundancy and performance in a high-demand production environment:

  • Causal Clustering
  • Highly Available (HA) Cluster

Neo4j 3.1 introduced Causal Clustering – a brand-new architecture using the state-of-the-art Raft protocol – that enables support for ultra-large clusters and a wider range of cluster topologies for data center and cloud.

A Neo4j HA cluster is comprised of a single master instance and zero or more slave instances. All instances in the cluster have full copies of the data in their local database files. The basic cluster configuration usually consists of three instances.

3.12.1. Causal Clustering

To find out more about Causal Clustering architecture please see:

Causal Clustering only works with the Neo4j Bolt Driver (1.1.0 onwards). Trying to set this up with the HTTP or Embedded Driver will not work. The Bolt driver fully handles load balancing, operating in concert with the Causal Cluster to spread the workload. New cluster-aware sessions, managed on the client-side by the Bolt drivers, alleviate complex infrastructure concerns for developers.

Configuring the OGM

To use clustering, simply configure your bolt URI (in or through Components.driver().setURI()) to use the bolt routing protocol:

URI=bolt+routing://instance0

instance0 must be one of your core cluster group (that accepts reads and writes).

Using OGM Sessions.

By default, all of a Session’s Transactions are set to read/write. This means reads and writes will always hit the core cluster. In order to break this up, you can call session.beginTransaction(Transaction.Type) with READ to hit the replica servers.

Bookmarks

Causal consistency allows you to specify guarantees around query ordering, including the ability to read your own writes, view the last data you read, and later on, committed writes from other users. The Bolt drivers collaborate with the core servers to ensure that all transactions are applied in the same order using a concept of a bookmark.

The cluster returns a bookmark when it commits an update transaction, and the driver then links that bookmark to the user’s next transaction. The server that receives the query starts this new bookmarked transaction only when its internal state has reached the desired bookmark. This ensures that the view of related data is always consistent, that all servers are eventually updated, and that users reading and re-reading data always see the same — and the latest — data.

If you have multiple application tier JVM instances you will need to manage this state across them. The Session object allows you to set and retrieve bookmarks through the use of Session.withBookmark() and Session.getLastBookmark().

3.12.2. Highly Available (HA) Cluster

A typical Neo4j HA cluster consists of a master node and a couple of slave nodes, providing failover capability and optionally handling reads. (Although it is possible to write to slaves, this is uncommon because it requires additional effort to synchronise a slave with the master node.)

Typical HA Cluster

Transaction binding in HA mode

When operating in HA mode, Neo4j does not make open transactions available across all nodes in the cluster. This means we must bind every request within a specific transaction to the same node in the cluster, or the commit will fail with 404 Not Found.

Read-only transactions

As of version 2.0.5, read-only transactions are supported by the OGM.

API changes

There is a new API method, session.beginTransaction(Transaction.Type), where Transaction.Type is one of:

  • READ_ONLY
  • READ_WRITE

If your code doesn't declare an explicit transaction, the various drivers will create autocommit transactions (or their logical equivalent). These will be READ_WRITE.

The previous API semantics have not changed, i.e. session.beginTransaction() creates a READ_WRITE transaction.
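These semantics can be sketched with a self-contained model. TransactionTypeSketch and FakeSession are hypothetical stand-ins (not the OGM Session class or its Transaction.Type enum), used only to illustrate the defaulting behaviour:

```java
// Illustrates that the no-argument beginTransaction() keeps its old
// READ_WRITE semantics, while the new overload accepts an explicit
// transaction type. FakeSession is a stand-in, not the OGM Session.
public class TransactionTypeSketch {

    enum Type { READ_ONLY, READ_WRITE }

    static class FakeSession {
        // Existing API: semantics unchanged, always READ_WRITE.
        Type beginTransaction() {
            return beginTransaction(Type.READ_WRITE);
        }
        // New API: the caller chooses the transaction type.
        Type beginTransaction(Type type) {
            return type;
        }
    }

    public static void main(String[] args) {
        FakeSession session = new FakeSession();
        System.out.println(session.beginTransaction());               // READ_WRITE
        System.out.println(session.beginTransaction(Type.READ_ONLY)); // READ_ONLY
    }
}
```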


The drivers have been updated to transmit the type of the current transaction to the server as additional information.

  • The HttpDriver implementation sets an HTTP header "X-WRITE" to "1" for READ_WRITE transactions (the default) or to "0" for READ_ONLY ones.
  • The Embedded Driver can support both READ_ONLY and READ_WRITE (as of version 2.1.0).
  • The native Bolt Driver can support both READ_ONLY and READ_WRITE (as of version 2.1.0).

Dynamic binding via a load balancer

In the Neo4j HA architecture, a cluster is typically fronted by a load balancer.

The following example shows how to configure your application and set up HAProxy as a load balancer to route write requests to whichever machine in the cluster is currently identified as the master, with read requests being distributed to any available machine in the cluster on a round-robin basis.

This configuration will also ensure that requests against a specific transaction are directed to the server where the transaction was created.

Example cluster fronted by HAProxy
  1. haproxy:
  2. neo4j-server1:
  3. neo4j-server2:
  4. neo4j-server3:

OGM Binding via HAProxy. 


Sample haproxy.cfg. 

global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    acl write_hdr hdr_val(X-WRITE) eq 1
    use_backend neo4j-master if write_hdr
    default_backend neo4j-cluster

backend neo4j-cluster
    balance roundrobin
    # create a sticky table so that requests with a transaction id are always sent to the correct server
    stick-table type integer size 1k expire 70s
    stick match path,word(4,/)
    stick store-response hdr(Location),word(6,/)
    option httpchk GET /db/manage/server/ha/available
    server s1 neo4j-server1:7474 maxconn 32
    server s2 neo4j-server2:7474 maxconn 32
    server s3 neo4j-server3:7474 maxconn 32

backend neo4j-master
    option httpchk GET /db/manage/server/ha/master
    server s1 neo4j-server1:7474 maxconn 32
    server s2 neo4j-server2:7474 maxconn 32
    server s3 neo4j-server3:7474 maxconn 32

listen admin
    bind *:8080
    stats enable