The session API

This section details the Session API that is made available by the driver.

1. Simple sessions

Simple sessions provide a "classic" blocking-style API for Cypher execution. In general, simple sessions provide the easiest programming style to work with, since API calls are executed in a strictly sequential fashion.

1.1. Lifecycle

The session lifetime extends from session construction to session closure. In languages that support them, simple sessions are usually scoped within a context block; this ensures that they are properly closed and that any underlying connections are released and not leaked.

try (Session session = driver.session(...)) {
    // transactions go here
}

Sessions can be configured in a number of different ways. This is carried out by supplying configuration inside the session constructor. See Session configuration for more details.
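For example, a minimal sketch of constructing a configured session, assuming the 4.x Java driver's SessionConfig builder (the database name shown is illustrative):

try ( Session session = driver.session( SessionConfig.builder()
        .withDatabase( "neo4j" ) // illustrative database name
        .build() ) )
{
    // transactions go here
}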

1.2. Transaction functions

Transaction functions are used for containing transactional units of work. This form of transaction requires minimal boilerplate code and allows for a clear separation of database queries and application logic.

Transaction functions are also desirable since they encapsulate retry logic and allow for the greatest degree of flexibility when swapping out a single server instance for a cluster.

Transaction functions can be called as either read or write operations. This choice will route the transaction to an appropriate server within a clustered environment. If you are operating in a single instance environment, this routing has no impact. It does give you flexibility if you choose to adopt a clustered environment later on.

Before writing a transaction function, it is important to ensure that the function is designed to be idempotent. This is because a function may be executed multiple times if initial runs fail.

Any query results obtained within a transaction function should be consumed within that function, as connection-bound resources cannot be managed correctly when out of scope. To that end, transaction functions can return values but these should be derived values rather than raw results.

Transaction functions are the recommended form for containing transactional units of work.

When a transaction fails, the driver retry logic is invoked. For several failure cases, the transaction can be immediately retried against a different server. These cases include connection issues, server role changes (e.g. leadership elections) and transient errors.

public void addPerson( final String name )
{
    try ( Session session = driver.session() )
    {
        session.writeTransaction( tx -> {
            tx.run( "CREATE (a:Person {name: $name})", parameters( "name", name ) );
            return 1;
        } );
    }
}

1.3. Auto-commit transactions

An auto-commit transaction is a basic but limited form of transaction. Such a transaction consists of only one Cypher query and is not automatically replayed on failure. Therefore any error scenarios must be handled by the client application itself.

Auto-commit transactions are intended to be used for simple use cases such as when learning Cypher or writing one-off scripts.

It is not recommended to use auto-commit transactions in production environments.

Unlike other kinds of Cypher query, PERIODIC COMMIT queries do not participate in the causal chain.

Therefore, the only way to execute PERIODIC COMMIT from a driver is to use auto-commit transactions.

public void addPerson( String name )
{
    try ( Session session = driver.session() )
    {
        session.run( "CREATE (a:Person {name: $name})", parameters( "name", name ) );
    }
}
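
Because PERIODIC COMMIT queries can only run in auto-commit transactions, a bulk import could be sketched as follows. This is a sketch only; the CSV URL parameter and its name column are illustrative assumptions:

public void importPeople( final String csvUrl )
{
    try ( Session session = driver.session() )
    {
        // PERIODIC COMMIT requires an auto-commit transaction;
        // it cannot be run inside a transaction function or an explicit transaction
        session.run( "USING PERIODIC COMMIT 500 " +
                     "LOAD CSV WITH HEADERS FROM $url AS row " +
                     "CREATE (:Person {name: row.name})",
                parameters( "url", csvUrl ) );
    }
}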

1.4. Consuming results

Query results are typically consumed as a stream of records. The drivers provide a way to iterate through that stream.

public List<String> getPeople()
{
    try ( Session session = driver.session() )
    {
        return session.readTransaction( tx -> {
            List<String> names = new ArrayList<>();
            Result result = tx.run( "MATCH (a:Person) RETURN a.name ORDER BY a.name" );
            while ( result.hasNext() )
            {
                names.add( result.next().get( 0 ).asString() );
            }
            return names;
        } );
    }
}

1.5. Retaining results

Within a session, only one result stream can be active at any one time. Therefore, if the result of one query is not fully consumed before another query is executed, the remainder of the first result will be automatically buffered within the result object.

Figure 1. Result buffer

This buffer provides a staging point for results, and divides result handling into fetching (moving from the network to the buffer) and consuming (moving from the buffer to the application).

For large results, the result buffer may require a significant amount of memory.

For this reason, it is recommended to consume results in order wherever possible.

Client applications can choose to take control of more advanced query patterns by explicitly retaining results. Such explicit retention may also be useful when a result needs to be saved for future processing. The drivers offer support for this process, as per the example below:

public int addEmployees( final String companyName )
{
    try ( Session session = driver.session() )
    {
        int employees = 0;
        List<Record> persons = session.readTransaction( new TransactionWork<List<Record>>()
        {
            @Override
            public List<Record> execute( Transaction tx )
            {
                return matchPersonNodes( tx );
            }
        } );
        for ( final Record person : persons )
        {
            employees += session.writeTransaction( new TransactionWork<Integer>()
            {
                @Override
                public Integer execute( Transaction tx )
                {
                    tx.run( "MATCH (emp:Person {name: $person_name}) " +
                            "MERGE (com:Company {name: $company_name}) " +
                            "MERGE (emp)-[:WORKS_FOR]->(com)",
                            parameters( "person_name", person.get( "name" ).asString(), "company_name",
                                    companyName ) );
                    return 1;
                }
            } );
        }
        return employees;
    }
}

private static List<Record> matchPersonNodes( Transaction tx )
{
    return tx.run( "MATCH (a:Person) RETURN a.name AS name" ).list();
}

2. Asynchronous sessions

Asynchronous sessions provide an API wherein function calls typically return awaitable objects, such as futures. This allows client applications to work within asynchronous frameworks and take advantage of cooperative multitasking.

2.1. Lifecycle

Session lifetime begins with session construction. A session then exists until it is closed, which typically occurs after its contained query results have been consumed.

Sessions can be configured in a number of different ways. This is carried out by supplying configuration inside the session constructor.

See Session configuration for more details.
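
A minimal lifecycle sketch, mirroring the patterns used later in this section (the query is illustrative), in which the session is closed only once its results have been consumed:

public CompletionStage<List<String>> readPersonNames()
{
    AsyncSession session = driver.asyncSession();

    return session.runAsync( "MATCH (a:Person) RETURN a.name" )
            .thenCompose( cursor -> cursor.listAsync( record -> record.get( 0 ).asString() ) )
            // close the session only after the results have been consumed
            .thenCompose( names -> session.closeAsync().thenApply( ignore -> names ) );
}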

2.2. Transaction functions

Transaction functions are the recommended form for containing transactional units of work. This form of transaction requires minimal boilerplate code and allows for a clear separation of database queries and application logic. Transaction functions are also desirable since they encapsulate retry logic and allow for the greatest degree of flexibility when swapping out a single server instance for a cluster.

Functions can be called as either read or write operations. This choice will route the transaction to an appropriate server within a clustered environment. If you are in a single instance environment, this routing has no impact but it does give you the flexibility should you choose to later adopt a clustered environment.

Before writing a transaction function, it is important to ensure that any side effects carried out by the function are designed to be idempotent. This is because a function may be executed multiple times if initial runs fail.

Any query results obtained within a transaction function should be consumed within that function, as connection-bound resources cannot be managed correctly when out of scope. To that end, transaction functions can return values but these should be derived values rather than raw results.

When a transaction fails, the driver retry logic is invoked. For several failure cases, the transaction can be immediately retried against a different server.

These cases include connection issues, server role changes (e.g. leadership elections) and transient errors. Retry logic can be configured when creating a session.

public CompletionStage<ResultSummary> printAllProducts()
{
    String query = "MATCH (p:Product) WHERE p.id = $id RETURN p.title";
    Map<String,Object> parameters = Collections.singletonMap( "id", 0 );

    AsyncSession session = driver.asyncSession();

    return session.readTransactionAsync( tx ->
            tx.runAsync( query, parameters )
                    .thenCompose( cursor -> cursor.forEachAsync( record ->
                            // asynchronously print every record
                            System.out.println( record.get( 0 ).asString() ) ) )
    );
}

2.3. Auto-commit transactions

An auto-commit transaction is a basic but limited form of transaction. Such a transaction consists of only one Cypher query and is not automatically replayed on failure. Therefore any error scenarios must be handled by the client application itself.

Auto-commit transactions are intended to be used for simple use cases such as when learning Cypher or writing one-off scripts.

It is not recommended to use auto-commit transactions in production environments.

Unlike other kinds of Cypher query, PERIODIC COMMIT queries do not participate in the causal chain.

Therefore, the only way to execute them from a driver is to use auto-commit transactions.

public CompletionStage<List<String>> readProductTitles()
{
    String query = "MATCH (p:Product) WHERE p.id = $id RETURN p.title";
    Map<String,Object> parameters = Collections.singletonMap( "id", 0 );

    AsyncSession session = driver.asyncSession();

    return session.runAsync( query, parameters )
            .thenCompose( cursor -> cursor.listAsync( record -> record.get( 0 ).asString() ) )
            .exceptionally( error ->
            {
                // query execution failed, print error and fallback to empty list of titles
                error.printStackTrace();
                return Collections.emptyList();
            } )
            .thenCompose( titles -> session.closeAsync().thenApply( ignore -> titles ) );
}

2.4. Consuming results

The asynchronous session API provides language-idiomatic methods to aid integration with asynchronous applications and frameworks.

public CompletionStage<List<String>> getPeople()
{
    String query = "MATCH (a:Person) RETURN a.name ORDER BY a.name";
    AsyncSession session = driver.asyncSession();
    return session.readTransactionAsync( tx ->
            tx.runAsync( query )
                    .thenCompose( cursor -> cursor.listAsync( record ->
                            record.get( 0 ).asString() ) )
    );
}

3. Reactive sessions

Starting with Neo4j 4.0, reactive processing of queries is supported. This can be achieved through reactive sessions. Reactive sessions allow for dynamic management of the data that is being exchanged between the driver and the server.

Typical of reactive programming, consumers control the rate at which they consume records from queries and the driver in turn manages the rate at which records are requested from the server. Flow control is supported throughout the entire Neo4j stack, meaning that the query engine responds correctly to the flow control signals. This results in far more efficient resource handling and ensures that the receiving side is not forced to buffer arbitrary amounts of data.

For more information about reactive streams, see the Reactive Streams specification.

Reactive sessions will typically be used in a client application that is already oriented towards the reactive style; it is expected that a reactive dependency or framework is in place.

Refer to Get started for more information on recommended dependencies.

3.1. Lifecycle

Session lifetime begins with session construction. A session then exists until it is closed, which typically occurs after its contained query results have been consumed.

3.2. Transaction functions

Transaction functions are the recommended form for containing transactional units of work. This form of transaction requires minimal boilerplate code and allows for a clear separation of database queries and application logic. Transaction functions are also desirable since they encapsulate retry logic and allow for the greatest degree of flexibility when swapping out a single server instance for a cluster.

Functions can be called as either read or write operations. This choice will route the transaction to an appropriate server within a clustered environment. If you are in a single instance environment, this routing has no impact but it does give you the flexibility should you choose to later adopt a clustered environment.

Before writing a transaction function, it is important to ensure that any side effects carried out by the function are designed to be idempotent. This is because a function may be executed multiple times if initial runs fail.

Any query results obtained within a transaction function should be consumed within that function, as connection-bound resources cannot be managed correctly when out of scope. To that end, transaction functions can return values but these should be derived values rather than raw results.

When a transaction fails, the driver retry logic is invoked. For several failure cases, the transaction can be immediately retried against a different server. These cases include connection issues, server role changes (e.g. leadership elections) and transient errors. Retry logic can be configured when creating a session.

public Flux<ResultSummary> printAllProducts()
{
    String query = "MATCH (p:Product) WHERE p.id = $id RETURN p.title";
    Map<String,Object> parameters = Collections.singletonMap( "id", 0 );

    return Flux.usingWhen( Mono.fromSupplier( driver::rxSession ),
            session -> session.readTransaction( tx -> {
                RxResult result = tx.run( query, parameters );
                return Flux.from( result.records() )
                        .doOnNext( record -> System.out.println( record.get( 0 ).asString() ) )
                        .then( Mono.from( result.consume() ) );
            }
         ), RxSession::close );
}

Sessions can be configured in a number of different ways. This is carried out by supplying configuration inside the session constructor. See Session configuration for more details.

3.3. Auto-commit transactions

An auto-commit transaction is a basic but limited form of transaction. Such a transaction consists of only one Cypher query and is not automatically replayed on failure. Therefore any error scenarios must be handled by the client application itself.

Auto-commit transactions are intended to be used for simple use cases such as when learning Cypher or writing one-off scripts.

It is not recommended to use auto-commit transactions in production environments.

Unlike other kinds of Cypher query, PERIODIC COMMIT queries do not participate in the causal chain.

Therefore, the only way to execute them from a driver is to use auto-commit transactions.

public Flux<String> readProductTitles()
{
    String query = "MATCH (p:Product) WHERE p.id = $id RETURN p.title";
    Map<String,Object> parameters = Collections.singletonMap( "id", 0 );

    return Flux.usingWhen( Mono.fromSupplier( driver::rxSession ),
            session -> Flux.from( session.run( query, parameters ).records() ).map( record -> record.get( 0 ).asString() ),
            RxSession::close );
}

3.4. Consuming results

To consume data from a query in a reactive session, a subscriber is required to handle the results that are being returned by the publisher.

Each transaction corresponds to a data flow which supplies the data from the server. Result processing begins when records are pulled from this flow. Only one subscriber may pull data from a given flow.

public Flux<String> getPeople()
{
    String query = "MATCH (a:Person) RETURN a.name ORDER BY a.name";

    return Flux.usingWhen( Mono.fromSupplier( driver::rxSession ),
            session -> session.readTransaction( tx -> {
                        RxResult result = tx.run( query );
                        return Flux.from( result.records() )
                                .map( record -> record.get( 0 ).asString() );
                    }
            ), RxSession::close );
}

3.5. Cancellation

As per the Reactive Streams specification, a reactive data flow can be cancelled partway through. This prematurely commits or rolls back the transaction and stops the query engine from producing any more records.
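
As a sketch of cancellation, assuming Project Reactor as in the examples above (getFirstPeople and its limit parameter are illustrative), an operator such as take cancels the remaining record flow once enough records have arrived:

public Flux<String> getFirstPeople( int limit )
{
    String query = "MATCH (a:Person) RETURN a.name ORDER BY a.name";

    return Flux.usingWhen( Mono.fromSupplier( driver::rxSession ),
            session -> session.readTransaction( tx ->
                    Flux.from( tx.run( query ).records() )
                            .map( record -> record.get( 0 ).asString() )
                            // take(n) cancels the upstream record flow after n records
                            .take( limit ) ),
            RxSession::close );
}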

4. Session configuration

Bookmarks

The mechanism which ensures causal consistency between transactions within a session. Bookmarks are implicitly passed between transactions within a single session to meet the causal consistency requirements. There may be scenarios where you want to use the bookmark from one session in a different, new session.

Default: None (Sessions will initially be created without a bookmark)
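
A sketch of carrying a bookmark across sessions, assuming the 4.x Java driver (the queries and the Alice node are illustrative):

public void useBookmarkAcrossSessions()
{
    Bookmark bookmark;
    try ( Session session1 = driver.session() )
    {
        session1.run( "CREATE (:Person {name: $name})", parameters( "name", "Alice" ) );
        // capture the state this session has observed
        bookmark = session1.lastBookmark();
    }

    // the new session will see at least the state recorded by the bookmark
    try ( Session session2 = driver.session( SessionConfig.builder().withBookmarks( bookmark ).build() ) )
    {
        session2.run( "MATCH (a:Person {name: $name}) RETURN a", parameters( "name", "Alice" ) );
    }
}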

DefaultAccessMode

A fallback for the access mode setting when transaction functions are not used. Typically, access mode is set per transaction by calling the appropriate transaction function method. In other cases, this setting is inherited. Note that transaction functions will ignore/override this setting.

Default: Write

Database

The database with which the session will interact. When you are working with a database that is not the default (for example, the system database or another database in Neo4j 4.0 Enterprise Edition), you can explicitly configure the database against which the driver executes transactions. See Operations Manual → The default database for more information on databases.

Default: the default database as configured on the server.

Fetch Size

The number of records to fetch in each batch from the server. Neo4j 4.0 introduces the ability to pull records in batches, allowing the client application to take control of data population and apply back pressure to the server. The fetch size applies to simple sessions and asynchronous sessions, whereas reactive sessions can be controlled directly using the request method of the subscription.

Default: 1000 records
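
A minimal sketch that pulls these options together, assuming the 4.x Java driver's SessionConfig builder (all values shown are illustrative):

SessionConfig config = SessionConfig.builder()
        .withDatabase( "neo4j" )                  // target database (illustrative name)
        .withDefaultAccessMode( AccessMode.READ ) // fallback access mode outside transaction functions
        .withFetchSize( 100 )                     // records pulled per batch from the server
        .build();

try ( Session session = driver.session( config ) )
{
    // transactions go here
}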