13.6.2. Endpoints for status information

This section describes HTTP endpoints for monitoring the health of a Neo4j Causal Cluster.

A Causal Cluster exposes a number of HTTP endpoints that can be used to monitor its health. This section describes these endpoints and explains their semantics.

The section includes:

  • Section 13.6.2.1, "Adjusting security settings for Causal Clustering endpoints"
  • Section 13.6.2.2, "Unified endpoints"

13.6.2.1. Adjusting security settings for Causal Clustering endpoints

If authentication and authorization are enabled in Neo4j, the Causal Clustering status endpoints also require authentication credentials. The setting dbms.security.auth_enabled controls whether the native auth provider is enabled. Some load balancers and proxy servers cannot provide authentication credentials with their requests. In those situations, consider disabling authentication of the Causal Clustering status endpoints by setting dbms.security.causal_clustering_status_auth_enabled=false in neo4j.conf.

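As an illustration, a minimal neo4j.conf fragment for this situation might look like the following (the values shown are an example configuration, not defaults):

```
# Keep the native auth provider enabled for regular clients.
dbms.security.auth_enabled=true

# Allow load balancers and proxies to poll the Causal Clustering
# status endpoints without supplying credentials.
dbms.security.causal_clustering_status_auth_enabled=false
```
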
13.6.2.2. Unified endpoints

A unified set of endpoints exists on both Core Servers and Read Replicas, with the following behavior:

  • /db/<databasename>/cluster/writable — Used to direct write traffic to specific instances.
  • /db/<databasename>/cluster/read-only — Used to direct read traffic to specific instances.
  • /db/<databasename>/cluster/available — Available for the general case of directing arbitrary request types to instances that are available for processing read transactions.
  • /db/<databasename>/cluster/status — Gives a detailed description of this instance’s view of its own status within the cluster. See Status endpoint for further details.

Each endpoint targets a specific database, and each database has its own Raft group. The <databasename> path parameter is the name of that database. By default, a fresh Neo4j installation has endpoints for two databases, system and neo4j:

http://localhost:7474/db/system/cluster/writable
http://localhost:7474/db/system/cluster/read-only
http://localhost:7474/db/system/cluster/available
http://localhost:7474/db/system/cluster/status

http://localhost:7474/db/neo4j/cluster/writable
http://localhost:7474/db/neo4j/cluster/read-only
http://localhost:7474/db/neo4j/cluster/available
http://localhost:7474/db/neo4j/cluster/status
Table 13.19. Unified HTTP endpoint responses

  Endpoint                               Instance state   Returned code   Body text
  ---------------------------------------------------------------------------------
  /db/<databasename>/cluster/writable    Leader           200 OK          true
                                         Follower         404 Not Found   false
                                         Read Replica     404 Not Found   false

  /db/<databasename>/cluster/read-only   Leader           404 Not Found   false
                                         Follower         200 OK          true
                                         Read Replica     200 OK          true

  /db/<databasename>/cluster/available   Leader           200 OK          true
                                         Follower         200 OK          true
                                         Read Replica     200 OK          true

  /db/<databasename>/cluster/status      Leader           200 OK          JSON - See Status endpoint for details.
                                         Follower         200 OK          JSON - See Status endpoint for details.
                                         Read Replica     200 OK          JSON - See Status endpoint for details.

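The response mapping above can be folded into a small helper. The sketch below is illustrative rather than part of any Neo4j API; the function name is our own, and it infers a role purely from the HTTP status codes returned by the writable and read-only endpoints:

```python
def infer_role(writable_status: int, read_only_status: int) -> str:
    """Infer an instance's cluster role from the HTTP status codes
    returned by /cluster/writable and /cluster/read-only.

    Note: this cannot distinguish a Follower from a Read Replica,
    since both return 404 for writable and 200 for read-only; use
    the "core" field of the status endpoint for that.
    """
    if writable_status == 200 and read_only_status == 404:
        return "Leader"
    if writable_status == 404 and read_only_status == 200:
        return "Follower or Read Replica"
    return "unknown"
```
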
Example 13.17. Use a Causal Clustering monitoring endpoint

From the command line, a common way to query these endpoints is with curl. With no arguments, curl performs an HTTP GET on the URI provided and outputs the body text, if any. If you also want the response code, add the -v flag for verbose output. For example:

  • Requesting the writable endpoint on a Core Server that is currently the elected leader, with verbose output:
#> curl -v localhost:7474/db/neo4j/cluster/writable
* About to connect() to localhost port 7474 (#0)
*   Trying ::1...
* connected
* Connected to localhost (::1) port 7474 (#0)
> GET /db/neo4j/cluster/writable HTTP/1.1
> User-Agent: curl/7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 OpenSSL/0.9.8r zlib/1.2.5
> Host: localhost:7474
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/plain
< Access-Control-Allow-Origin: *
< Transfer-Encoding: chunked
< Server: Jetty(9.4.17)
<
* Connection #0 to host localhost left intact
true* Closing connection #0

Status endpoint

The status endpoint, available at /db/<databasename>/cluster/status, can be used to assist with rolling upgrades.

Typically, you want some guarantee that a Core is safe to shut down before removing it from the cluster. The status endpoint provides the following information to help make that decision:

Example 13.18. Example status response
{
  "lastAppliedRaftIndex":0,
  "votingMembers":["30edc1c4-519c-4030-8348-7cb7af44f591","80a7fb7b-c966-4ee7-88a9-35db8b4d68fe","f9301218-1fd4-4938-b9bb-a03453e1f779"],
  "memberId":"80a7fb7b-c966-4ee7-88a9-35db8b4d68fe",
  "leader":"30edc1c4-519c-4030-8348-7cb7af44f591",
  "millisSinceLastLeaderMessage":84545,
  "participatingInRaftGroup":true,
  "core":true,
  "isHealthy":true,
  "raftCommandsPerSecond":124
}
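
As a sketch of how a monitoring script might consume this response, the following Python fragment parses the example above with the standard json module. The is_safe_candidate expression is our own illustrative combination of fields, not an official check:

```python
import json

# The example status response from the documentation above.
status_json = """
{
  "lastAppliedRaftIndex": 0,
  "votingMembers": ["30edc1c4-519c-4030-8348-7cb7af44f591",
                    "80a7fb7b-c966-4ee7-88a9-35db8b4d68fe",
                    "f9301218-1fd4-4938-b9bb-a03453e1f779"],
  "memberId": "80a7fb7b-c966-4ee7-88a9-35db8b4d68fe",
  "leader": "30edc1c4-519c-4030-8348-7cb7af44f591",
  "millisSinceLastLeaderMessage": 84545,
  "participatingInRaftGroup": true,
  "core": true,
  "isHealthy": true,
  "raftCommandsPerSecond": 124
}
"""

status = json.loads(status_json)

# A Core that is healthy, voting, and aware of the current leader.
is_safe_candidate = (
    status["isHealthy"]
    and status["participatingInRaftGroup"]
    and status.get("leader") is not None
)
```
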
Table 13.20. Status endpoint descriptions

  core (boolean, not optional; example: true)
      Used to distinguish between Core Servers and Read Replicas.

  lastAppliedRaftIndex (number, not optional; example: 4321)
      Every transaction in a cluster is associated with a Raft index. This field indicates the latest Raft log index that has been applied on this member.

  participatingInRaftGroup (boolean, not optional; example: false)
      A participating member is able to vote. A Core is considered participating when it is part of the voter membership and has kept track of the leader.

  votingMembers (string[], not optional; example: [])
      The memberIds that this Core considers part of the voting set. A member is considered a voting member when the leader has been receiving communication from it.

  isHealthy (boolean, not optional; example: true)
      Indicates that the local database of this member has not encountered a critical error preventing it from writing locally.

  memberId (string, not optional; example: 30edc1c4-519c-4030-8348-7cb7af44f591)
      Every member in a cluster has its own unique member id to identify it. Use memberId to distinguish between members.

  leader (string, optional; example: 80a7fb7b-c966-4ee7-88a9-35db8b4d68fe)
      Follows the same format as memberId. If it is null or missing, the leader is unknown.

  millisSinceLastLeaderMessage (number, optional; example: 1234)
      The number of milliseconds since the last heartbeat-like leader message. Not relevant to Read Replicas, and hence not included for them.

  raftCommandsPerSecond (number, optional; example: 124)
      An estimate of the average Raft state machine throughput over a sampling window configurable via the causal_clustering.status_throughput_window setting.

After an instance has been switched on, you can access the status endpoint to verify that all the guarantees listed in the table below are met.

To get the most accurate view of a cluster, it is strongly recommended to access the status endpoint on all Core members and compare the results. The following table explains how the results can be compared.

Table 13.21. Measured values, accessed via the status endpoint

  allServersAreHealthy
      Method of calculation: Every Core's status endpoint indicates isHealthy==true.
      The data across the entire cluster should be healthy. If any Core reports false, that indicates a larger problem.

  allVotingSetsAreEqual
      Method of calculation: For any two Cores A and B, A's votingMembers == B's votingMembers.
      When the voting sets are equal across all Cores, all members agree on membership.

  allVotingSetsContainAtLeastTargetCluster
      Method of calculation: For the target cluster S of all Cores excluding Core Z (the one to be switched off), every member of S contains S in its voting set. Membership is determined using the memberId and votingMembers fields from the status endpoint.
      Network conditions are not always perfect, and it can make sense to switch off a different Core from the one originally intended. If you run this check for all Cores, any Core that satisfies the condition can be switched off (provided the other conditions are also met).

  hasOneLeader
      Method of calculation: For any two Cores A and B, A.leader == B.leader && leader != null.
      If the leaders differ, there may be a partition (alternatively, this could also occur due to bad timing). If the leader is unknown, the leader messages have actually timed out.

  noMembersLagging
      Method of calculation: For Core A with the minimum lastAppliedRaftIndex and Core B with the maximum, B.lastAppliedRaftIndex - A.lastAppliedRaftIndex < raftIndexLagThreshold.
      If there is a large difference in the applied indexes between Cores, it could be dangerous to switch off a Core.

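The checks in the table above can be sketched in code. The following Python functions are illustrative only: the function names, the shape of the statuses dictionary, and the raft_index_lag_threshold default are our own conventions, not part of Neo4j. Each function takes the decoded status responses of all Cores, keyed by memberId:

```python
def all_servers_are_healthy(statuses):
    """allServersAreHealthy: every Core reports isHealthy == true."""
    return all(s["isHealthy"] for s in statuses.values())

def all_voting_sets_are_equal(statuses):
    """allVotingSetsAreEqual: all Cores agree on the voting set."""
    return len({frozenset(s["votingMembers"]) for s in statuses.values()}) == 1

def has_one_leader(statuses):
    """hasOneLeader: all Cores agree on a single, known leader."""
    leaders = {s.get("leader") for s in statuses.values()}
    return len(leaders) == 1 and None not in leaders

def no_members_lagging(statuses, raft_index_lag_threshold=1000):
    """noMembersLagging: applied Raft indexes are within the threshold."""
    indexes = [s["lastAppliedRaftIndex"] for s in statuses.values()]
    return max(indexes) - min(indexes) < raft_index_lag_threshold

def voting_sets_contain_target_cluster(statuses, member_to_remove):
    """allVotingSetsContainAtLeastTargetCluster: every remaining Core's
    voting set contains the whole remaining cluster."""
    remaining = set(statuses) - {member_to_remove}
    return all(remaining <= set(statuses[m]["votingMembers"]) for m in remaining)
```

A Core would then be considered safe to switch off when all five checks pass against fresh status responses from every Core.
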
For more information on rolling upgrades for causal clusters, see Section 10.3.2, “Rolling upgrade”.