Chapter 9. JMX metrics

For more monitoring options, see Neo4j Operations Manual → Monitoring. Most monitoring features are available only in the Enterprise edition of Neo4j.

To continuously get an overview of the health of a Neo4j database, there are different levels of monitoring facilities available. Most of these are exposed through JMX. Neo4j Enterprise can also automatically report metrics to commonly used monitoring systems.

9.1. Adjusting remote JMX access to the Neo4j Server

By default, the Neo4j Enterprise Server edition does not allow remote JMX connections, since the relevant options in the conf/neo4j.conf configuration file are commented out. To enable this feature, remove the # characters from the various com.sun.management.jmxremote options there.
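The uncommented lines take roughly the following shape. The values shown here are illustrative; the exact option lines in your conf/neo4j.conf may differ, so check the file itself. The port matches the 3637 endpoint used below:

```properties
# JVM flags handed to the Neo4j process; remove the leading '#' in
# conf/neo4j.conf to enable remote JMX (values here are illustrative):
dbms.jvm.additional=-Dcom.sun.management.jmxremote.port=3637
dbms.jvm.additional=-Dcom.sun.management.jmxremote.authenticate=true
dbms.jvm.additional=-Dcom.sun.management.jmxremote.ssl=false
dbms.jvm.additional=-Dcom.sun.management.jmxremote.password.file=conf/jmx.password
dbms.jvm.additional=-Dcom.sun.management.jmxremote.access.file=conf/jmx.access
```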

When uncommented, the default values are set up to allow remote JMX connections with certain roles; refer to the conf/jmx.password, conf/jmx.access, and conf/neo4j.conf files for details.

Make sure that conf/jmx.password has the correct file permissions: the file must be owned by the user that will run the service, and only that user may read it. On Unix systems, this means permission mode 0600.
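A quick way to check and set the mode on a Unix system. This sketch uses a scratch file and GNU stat so it is safe to try anywhere; on the real conf/jmx.password you would also chown the file to the service user:

```shell
f=$(mktemp)            # stand-in for conf/jmx.password
chmod 0600 "$f"        # owner-only access, as required
stat -c '%a' "$f"      # prints 600 with GNU stat
rm -f "$f"
```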

On Windows, follow the tutorial at http://docs.oracle.com/javase/8/docs/technotes/guides/management/security-windows.html to set the correct permissions. If you are running the service under the Local System Account, the user that owns the file and has access to it should be SYSTEM.

With this setup, you should be able to connect to JMX monitoring of the Neo4j server using <IP-OF-SERVER>:3637, with the username monitor and the password Neo4j.

Note that you may have to update the permissions and/or ownership of the conf/jmx.password and conf/jmx.access files; refer to the relevant section in conf/neo4j.conf for details.

For maximum security, please adjust at least the password settings in conf/jmx.password for a production installation.

For more details, see: http://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html.

9.2. How to connect to a Neo4j instance using JMX and JConsole

First, start your Neo4j instance, for example using

$NEO4J_HOME/bin/neo4j start

Now, start JConsole with

$JAVA_HOME/bin/jconsole

Connect to the process running your Neo4j database instance:

Figure 9.1. Connecting JConsole to the Neo4j Java process
Connecting with JConsole

Now, besides the MBeans exposed by the JVM, you will see an org.neo4j section in the MBeans tab. Under that, you have access to all the monitoring information exposed by Neo4j.

To open JMX for remote monitoring access, see Section 9.1, “Adjusting remote JMX access to the Neo4j Server” and the JMX documentation.
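With remote access enabled as in Section 9.1, JConsole can also be pointed directly at the remote endpoint (the same host and port configured there):

```shell
$JAVA_HOME/bin/jconsole <IP-OF-SERVER>:3637
```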

Figure 9.2. Neo4j MBeans View
Neo4j MBeans view

9.3. How to connect to the JMX monitoring programmatically

To connect to the Neo4j JMX server programmatically, there are some convenience methods in the Neo4j Management component that help you find the most commonly used monitoring attributes of Neo4j. See Section 4.12, “Reading a management attribute” for an example.

Once you have access to this information, you can use it, for instance, to expose the values to SNMP or other monitoring systems.
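As a sketch of what such a programmatic connection looks like with the plain JMX API (not the Neo4j Management convenience methods), the following queries an MBean server for beans in the org.neo4j domain and reads an attribute. It runs against the local platform MBean server so it is self-contained; against a remote Neo4j server you would instead obtain the connection through JMXConnectorFactory using the endpoint from Section 9.1:

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class JmxQueryExample {
    public static void main(String[] args) throws Exception {
        // For a remote Neo4j server (Section 9.1) you would connect with:
        //   JMXConnectorFactory.connect(new JMXServiceURL(
        //       "service:jmx:rmi:///jndi/rmi://<IP-OF-SERVER>:3637/jmxrmi"), env)
        // and call getMBeanServerConnection() on the connector. Here we use
        // the local platform MBean server so the sketch runs on its own.
        MBeanServerConnection mbsc = ManagementFactory.getPlatformMBeanServer();

        // Neo4j registers its beans under the org.neo4j domain (see the
        // MBeans tab in Section 9.2); inside a Neo4j JVM this set is non-empty.
        Set<ObjectName> neo4jBeans =
                mbsc.queryNames(new ObjectName("org.neo4j:*"), null);
        System.out.println("org.neo4j beans visible: " + neo4jBeans.size());

        // Reading an attribute works the same way for any bean; as a stand-in
        // for a Neo4j attribute, read the JVM's own uptime.
        Object uptime = mbsc.getAttribute(
                new ObjectName("java.lang:type=Runtime"), "Uptime");
        System.out.println("JVM uptime (ms): " + uptime);
    }
}
```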

9.4. Reference of supported JMX MBeans

Below is a complete reference of the Neo4j JMX management beans, in two parts. The first part shows all beans available in Neo4j Enterprise Edition when the instance is part of a Causal Cluster. The second part shows the management beans that are available only when running in High Availability mode.

For additional information on the primitive datatypes (int, long etc.) used in the JMX attributes, please see Developer Manual → Introduction: Properties.

9.4.1. JMX MBeans available on instances in a Causal Cluster

The following JMX management beans are available on instances that are part of a Causal Cluster.

Table 9.1. MBeans exposed by Neo4j

Name | Description
Causal Clustering | Information about an instance participating in a causal cluster.
Configuration | The configuration parameters used to configure Neo4j.
Diagnostics | Diagnostics provided by Neo4j.
Index sampler | Handle index sampling.
Kernel | Information about the Neo4j kernel.
Locking | Information about the Neo4j lock status.
Memory Mapping | The status of Neo4j memory mapping.
Page cache | Information about the Neo4j page cache. All numbers are counts and sums since the Neo4j instance was started.
Primitive count | Estimates of the numbers of different kinds of Neo4j primitives.
Reports | Reports operations.
Store file sizes | Deprecated: use the Store sizes bean instead. Information about the sizes of the different parts of the Neo4j graph store.
Store sizes | Information about the disk space used by different parts of the Neo4j graph store.
Transactions | Information about the Neo4j transaction manager.

Table 9.2. MBean Causal Clustering (org.neo4j.management.CausalClustering) Attributes

Information about an instance participating in a causal cluster.

Name | Description | Type | Read | Write
RaftLogSize | The total amount of disk space used by the raft log, in bytes | long | yes | no
ReplicatedStateSize | The total amount of disk space used by the replicated states, in bytes | long | yes | no
Role | The current role this member has in the cluster | String | yes | no

Table 9.3. MBean Configuration (org.neo4j.jmx.impl.ConfigurationBean) Attributes
Name Description Type Read Write

The configuration parameters used to configure Neo4j

bolt.ssl_policy

Specify the SSL policy to use

String

yes

no

causal_clustering.array_block_id_allocation_size

The size of the ID allocation requests Core servers will make when they run out of ARRAY_BLOCK IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.

String

yes

no

causal_clustering.catch_up_client_inactivity_timeout

The catch up protocol times out if the given duration elapses with no network activity. Every message received by the client from the server extends the time out duration.

String

yes

no

causal_clustering.catchup_batch_size

The maximum batch size when catching up (in unit of entries)

String

yes

no

causal_clustering.cluster_allow_reads_on_followers

Configure if the dbms.cluster.routing.getServers() procedure should include followers as read endpoints or return only read replicas. Note: if there are no read replicas in the cluster, followers are returned as read endpoints regardless of the value of this setting. Defaults to true so that followers are available for read-only queries in a typical heterogeneous setup.

String

yes

no

causal_clustering.cluster_routing_ttl

How long drivers should cache the data from the dbms.cluster.routing.getServers() procedure.

String

yes

no

causal_clustering.cluster_topology_refresh

Time between scanning the cluster to refresh current server’s view of topology

String

yes

no

causal_clustering.connect-randomly-to-server-group

Comma separated list of groups to be used by the connect-randomly-to-server-group selection strategy. The connect-randomly-to-server-group strategy is used if the list of strategies (causal_clustering.upstream_selection_strategy) includes the value connect-randomly-to-server-group.

String

yes

no

causal_clustering.database

The name of the database being hosted by this server instance. This configuration setting may be safely ignored unless deploying a multicluster. Instances may be allocated to distinct sub-clusters by assigning them distinct database names using this setting. For instance, if you had 6 instances you could form 2 sub-clusters by assigning half of them the database name "foo" and the other half the name "bar". The setting value must match exactly between members of the same sub-cluster. This setting is a one-off: once an instance is configured with a database name it may not be changed in the future without using neo4j-admin unbind.

String

yes

no

causal_clustering.disable_middleware_logging

Prevents the network middleware from dumping its own logs. Defaults to true.

String

yes

no

causal_clustering.discovery_advertised_address

Advertised cluster member discovery management communication.

String

yes

no

causal_clustering.discovery_listen_address

Host and port to bind the cluster member discovery management communication.

String

yes

no

causal_clustering.discovery_type

Configure the discovery type used for cluster name resolution

String

yes

no

causal_clustering.enable_pre_voting

Enable pre-voting extension to the Raft protocol (this is breaking and must match between the core cluster members)

String

yes

no

causal_clustering.expected_core_cluster_size

Expected number of Core machines in the cluster before startup

String

yes

no

causal_clustering.global_session_tracker_state_size

The maximum file size before the global session tracker state file is rotated (in unit of entries)

String

yes

no

causal_clustering.handshake_timeout

Time out for protocol negotiation handshake

String

yes

no

causal_clustering.id_alloc_state_size

The maximum file size before the ID allocation file is rotated (in unit of entries)

String

yes

no

causal_clustering.in_flight_cache.max_bytes

The maximum number of bytes in the in-flight cache.

String

yes

no

causal_clustering.in_flight_cache.max_entries

The maximum number of entries in the in-flight cache.

String

yes

no

causal_clustering.in_flight_cache.type

Type of in-flight cache.

String

yes

no

causal_clustering.initial_discovery_members

A comma-separated list of other members of the cluster to join.

String

yes

no

causal_clustering.join_catch_up_timeout

Time out for a new member to catch up

String

yes

no

causal_clustering.label_token_id_allocation_size

The size of the ID allocation requests Core servers will make when they run out of LABEL_TOKEN IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.

String

yes

no

causal_clustering.label_token_name_id_allocation_size

The size of the ID allocation requests Core servers will make when they run out of LABEL_TOKEN_NAME IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.

String

yes

no

causal_clustering.last_applied_state_size

The maximum file size before the storage file is rotated (in unit of entries)

String

yes

no

causal_clustering.leader_election_timeout

The time limit within which a new leader election will occur if no messages are received.

String

yes

no

causal_clustering.load_balancing.config

The configuration must be valid for the configured plugin and usually exists under matching subkeys, e.g. ..config.server_policies.* This is just a top-level placeholder for the plugin-specific configuration.

String

yes

no

causal_clustering.load_balancing.plugin

The load balancing plugin to use.

String

yes

no

causal_clustering.load_balancing.shuffle

Enables shuffling of the returned load balancing result.

String

yes

no

causal_clustering.log_shipping_max_lag

The maximum lag allowed before log shipping pauses (in unit of entries)

String

yes

no

causal_clustering.middleware_logging.level

The level of middleware logging

String

yes

no

causal_clustering.minimum_core_cluster_size_at_formation

Minimum number of Core machines in the cluster at formation. The expected_core_cluster_size setting is used when bootstrapping the cluster on first formation. A cluster will not form without the configured amount of cores and this should in general be configured to the full and fixed amount. When using multi-clustering (configuring multiple distinct database names across core hosts), this setting is used to define the minimum size of each sub-cluster at formation.

String

yes

no

causal_clustering.minimum_core_cluster_size_at_runtime

Minimum number of Core machines required to be available at runtime. The consensus group size (core machines successfully voted into the Raft) can shrink and grow dynamically, but is bounded on the lower end by this number. The intention is in almost all cases for users to leave this setting alone. If you have 5 machines then you can survive failures down to 3 remaining, e.g. with 2 dead members. The three remaining can still vote another replacement member in successfully up to a total of 6 (2 of which are still dead) and then after this, one of the superfluous dead members will be immediately and automatically voted out (so you are left with 5 members in the consensus group, 1 of which is currently dead). Operationally you can now bring the last machine up by bringing in another replacement or repairing the dead one. When using multi-clustering (configuring multiple distinct database names across core hosts), this setting is used to define the minimum size of each sub-cluster at runtime.

String

yes

no

causal_clustering.multi_dc_license

Enable multi-data center features. Requires appropriate licensing.

String

yes

no

causal_clustering.neostore_block_id_allocation_size

The size of the ID allocation requests Core servers will make when they run out of NEOSTORE_BLOCK IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.

String

yes

no

causal_clustering.node_id_allocation_size

The size of the ID allocation requests Core servers will make when they run out of NODE IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.

String

yes

no

causal_clustering.node_labels_id_allocation_size

The size of the ID allocation requests Core servers will make when they run out of NODE_LABELS IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.

String

yes

no

causal_clustering.property_id_allocation_size

The size of the ID allocation requests Core servers will make when they run out of PROPERTY IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.

String

yes

no

causal_clustering.property_key_token_id_allocation_size

The size of the ID allocation requests Core servers will make when they run out of PROPERTY_KEY_TOKEN IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.

String

yes

no

causal_clustering.property_key_token_name_id_allocation_size

The size of the ID allocation requests Core servers will make when they run out of PROPERTY_KEY_TOKEN_NAME IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.

String

yes

no

causal_clustering.protocol_implementations.catchup

Catchup protocol implementation versions that this instance will allow in negotiation as a comma-separated list. Order is not relevant: the greatest value will be preferred. An empty list will allow all supported versions

String

yes

no

causal_clustering.protocol_implementations.compression

Network compression algorithms that this instance will allow in negotiation as a comma-separated list. Listed in descending order of preference for incoming connections. An empty list implies no compression. For outgoing connections this merely specifies the allowed set of algorithms and the preference of the remote peer will be used for making the decision. Allowable values: [Gzip,Snappy,Snappy_validating,LZ4,LZ4_high_compression,LZ4_validating,LZ4_high_compression_validating]

String

yes

no

causal_clustering.protocol_implementations.raft

Raft protocol implementation versions that this instance will allow in negotiation as a comma-separated list. Order is not relevant: the greatest value will be preferred. An empty list will allow all supported versions

String

yes

no

causal_clustering.pull_interval

Interval of pulling updates from cores.

String

yes

no

causal_clustering.raft_advertised_address

Advertised hostname/IP address and port for the RAFT server.

String

yes

no

causal_clustering.raft_in_queue_max_batch

Largest batch processed by RAFT

String

yes

no

causal_clustering.raft_in_queue_size

Size of the RAFT in queue

String

yes

no

causal_clustering.raft_listen_address

Network interface and port for the RAFT server to listen on.

String

yes

no

causal_clustering.raft_log_implementation

RAFT log implementation

String

yes

no

causal_clustering.raft_log_prune_strategy

RAFT log pruning strategy

String

yes

no

causal_clustering.raft_log_pruning_frequency

RAFT log pruning frequency

String

yes

no

causal_clustering.raft_log_reader_pool_size

RAFT log reader pool size

String

yes

no

causal_clustering.raft_log_rotation_size

RAFT log rotation size

String

yes

no

causal_clustering.raft_membership_state_size

The maximum file size before the membership state file is rotated (in unit of entries)

String

yes

no

causal_clustering.raft_messages_log_enable

Enable or disable the dump of all network messages pertaining to the RAFT protocol

String

yes

no

causal_clustering.raft_messages_log_path

Path to RAFT messages log.

String

yes

no

causal_clustering.raft_term_state_size

The maximum file size before the term state file is rotated (in unit of entries)

String

yes

no

causal_clustering.raft_vote_state_size

The maximum file size before the vote state file is rotated (in unit of entries)

String

yes

no

causal_clustering.read_replica_time_to_live

Time To Live before read replica is considered unavailable

String

yes

no

causal_clustering.read_replica_transaction_applier_batch_size

Maximum transaction batch size for read replicas when applying transactions pulled from core servers.

String

yes

no

causal_clustering.refuse_to_be_leader

Prevents the current instance from volunteering to become Raft leader. Defaults to false, and should only be used in exceptional circumstances by expert users. Using this can result in reduced availability for the cluster.

String

yes

no

causal_clustering.relationship_group_id_allocation_size

The size of the ID allocation requests Core servers will make when they run out of RELATIONSHIP_GROUP IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.

String

yes

no

causal_clustering.relationship_id_allocation_size

The size of the ID allocation requests Core servers will make when they run out of RELATIONSHIP IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.

String

yes

no

causal_clustering.relationship_type_token_id_allocation_size

The size of the ID allocation requests Core servers will make when they run out of RELATIONSHIP_TYPE_TOKEN IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.

String

yes

no

causal_clustering.relationship_type_token_name_id_allocation_size

The size of the ID allocation requests Core servers will make when they run out of RELATIONSHIP_TYPE_TOKEN_NAME IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.

String

yes

no

causal_clustering.replicated_lock_token_state_size

The maximum file size before the replicated lock token state file is rotated (in unit of entries)

String

yes

no

causal_clustering.replication_leader

The retry timeout for finding a leader for replication. Relevant during leader elections.

String

yes

no

causal_clustering.replication_retry_timeout_base

The initial timeout until replication is retried. The timeout will increase exponentially.

String

yes

no

causal_clustering.replication_retry_timeout_limit

The upper limit for the exponentially incremented retry timeout.

String

yes

no

causal_clustering.replication_total_size_limit

The maximum amount of data which can be in the replication stage concurrently.

String

yes

no

causal_clustering.schema_id_allocation_size

The size of the ID allocation requests Core servers will make when they run out of SCHEMA IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.

String

yes

no

causal_clustering.server_groups

A list of group names for the server used when configuring load balancing and replication policies.

String

yes

no

causal_clustering.ssl_policy

Name of the SSL policy to be used by the clustering, as defined under the dbms.ssl.policy.* settings. If no policy is configured then the communication will not be secured.

String

yes

no

causal_clustering.state_machine_apply_max_batch_size

The maximum number of operations to be batched during applications of operations in the state machines

String

yes

no

causal_clustering.state_machine_flush_window_size

The number of operations to be processed before the state machines flush to disk

String

yes

no

causal_clustering.store_copy_backoff_max_wait

Maximum backoff timeout for store copy requests

String

yes

no

causal_clustering.store_copy_max_retry_time_per_request

Maximum retry time per request during store copy. Regular store files and indexes are downloaded in separate requests during store copy. This configures the maximum time failed requests are allowed to resend.

String

yes

no

causal_clustering.string_block_id_allocation_size

The size of the ID allocation requests Core servers will make when they run out of STRING_BLOCK IDs. Larger values mean less frequent requests but also result in more unused IDs (and unused disk space) in the event of a crash.

String

yes

no

causal_clustering.transaction_advertised_address

Advertised hostname/IP address and port for the transaction shipping server.

String

yes

no

causal_clustering.transaction_listen_address

Network interface and port for the transaction shipping server to listen on. Please note that it is also possible to run the backup client against this port so always limit access to it via the firewall and configure an ssl policy.

String

yes

no

causal_clustering.unknown_address_logging_throttle

Throttle limit for logging unknown cluster member address

String

yes

no

causal_clustering.upstream_selection_strategy

An ordered list in descending preference of the strategy which read replicas use to choose the upstream server from which to pull transactional updates.

String

yes

no

causal_clustering.user_defined_upstream_strategy

Configuration of a user-defined upstream selection strategy. The user-defined strategy is used if the list of strategies (causal_clustering.upstream_selection_strategy) includes the value user_defined.

String

yes

no

cypher.default_language_version

Set this to specify the default parser (language version).

String

yes

no

cypher.forbid_exhaustive_shortestpath

This setting is associated with performance optimization. Set this to true in situations where it is preferable to have any queries using the 'shortestPath' function terminate as soon as possible with no answer, rather than potentially running for a long time attempting to find an answer (even if there is no path to be found). For most queries, the 'shortestPath' algorithm will return the correct answer very quickly. However there are some cases where it is possible that the fast bidirectional breadth-first search algorithm will find no results even if they exist. This can happen when the predicates in the WHERE clause applied to 'shortestPath' cannot be applied to each step of the traversal, and can only be applied to the entire path. When the query planner detects these special cases, it will plan to perform an exhaustive depth-first search if the fast algorithm finds no paths. However, the exhaustive search may be orders of magnitude slower than the fast algorithm. If it is critical that queries terminate as soon as possible, it is recommended that this option be set to true, which means that Neo4j will never consider using the exhaustive search for shortestPath queries. However, please note that if no paths are found, an error will be thrown at run time, which will need to be handled by the application.

String

yes

no

cypher.forbid_shortestpath_common_nodes

This setting is associated with performance optimization. The shortest path algorithm does not work when the start and end nodes are the same. With this setting set to false no path will be returned when that happens. The default value of true will instead throw an exception. This can happen if you perform a shortestPath search after a cartesian product that might have the same start and end nodes for some of the rows passed to shortestPath. If it is preferable to not experience this exception, and acceptable for results to be missing for those rows, then set this to false. If you cannot accept missing results, and really want the shortestPath between two common nodes, then re-write the query using a standard Cypher variable length pattern expression followed by ordering by path length and limiting to one result.

String

yes

no

cypher.hints_error

Set this to specify the behavior when Cypher planner or runtime hints cannot be fulfilled. If true, then non-conformance will result in an error, otherwise only a warning is generated.

String

yes

no

cypher.min_replan_interval

The minimum time between possible cypher query replanning events. After this time, the graph statistics will be evaluated, and if they have changed by more than the value set by cypher.statistics_divergence_threshold, the query will be replanned. If the statistics have not changed sufficiently, the same interval will need to pass before the statistics will be evaluated again. Each time they are evaluated, the divergence threshold will be reduced slightly until it reaches 10% after 7h, so that even moderately changing databases will see query replanning after a sufficiently long time interval.

String

yes

no

cypher.planner

Set this to specify the default planner for the default language version.

String

yes

no

cypher.statistics_divergence_threshold

The threshold when a plan is considered stale. If any of the underlying statistics used to create the plan have changed more than this value, the plan will be considered stale and will be replanned. Change is calculated as abs(a-b)/max(a,b). This means that a value of 0.75 requires the database to approximately quadruple in size. A value of 0 means replan as soon as possible, with the soonest being defined by the cypher.min_replan_interval which defaults to 10s. After this interval the divergence threshold will slowly start to decline, reaching 10% after about 7h. This will ensure that long running databases will still get query replanning on even modest changes, while not replanning frequently unless the changes are very large.

String

yes

no

db.temporal.timezone

Database timezone for temporal functions. All Time and DateTime values that are created without an explicit timezone will use this configured default timezone.

String

yes

no

dbms.active_database

Name of the database to load

String

yes

no

dbms.allow_format_migration

Whether to allow a store upgrade in case the current version of the database starts against an older store version. Setting this to true does not guarantee successful upgrade, it just allows an upgrade to be performed.

String

yes

no

dbms.allow_upgrade

Whether to allow an upgrade in case the current version of the database starts against an older version.

String

yes

no

dbms.auto_index.nodes.enabled

Controls the auto indexing feature for nodes. Setting it to false shuts it down, while true enables it by default for properties listed in the dbms.auto_index.nodes.keys setting.

String

yes

no

dbms.auto_index.nodes.keys

A list of property names (comma separated) that will be indexed by default. This applies to nodes only.

String

yes

no

dbms.auto_index.relationships.enabled

Controls the auto indexing feature for relationships. Setting it to false shuts it down, while true enables it by default for properties listed in the dbms.auto_index.relationships.keys setting.

String

yes

no

dbms.auto_index.relationships.keys

A list of property names (comma separated) that will be indexed by default. This applies to relationships only.

String

yes

no

dbms.backup.address

Listening server for online backups. The protocol running varies depending on deployment. In a Causal Clustering environment this is the same protocol that runs on causal_clustering.transaction_listen_address.

String

yes

no

dbms.backup.enabled

Enable support for running online backups

String

yes

no

dbms.backup.ssl_policy

Name of the SSL policy to be used by backup, as defined under the dbms.ssl.policy.* settings. If no policy is configured then the communication will not be secured.

String

yes

no

dbms.checkpoint

Configures the general policy for when check-points should occur. The default policy is the 'periodic' check-point policy, as specified by the 'dbms.checkpoint.interval.tx' and 'dbms.checkpoint.interval.time' settings. The Neo4j Enterprise Edition provides two alternative policies: The first is the 'continuous' check-point policy, which will ignore those settings and run the check-point process all the time. The second is the 'volumetric' check-point policy, which makes a best-effort at check-pointing often enough so that the database doesn’t get too far behind on deleting old transaction logs in accordance with the 'dbms.tx_log.rotation.retention_policy' setting.

String

yes

no

dbms.checkpoint.interval.time

Configures the time interval between check-points. The database will not check-point more often than this (unless check pointing is triggered by a different event), but might check-point less often than this interval, if performing a check-point takes longer time than the configured interval. A check-point is a point in the transaction logs, from which recovery would start from. Longer check-point intervals typically means that recovery will take longer to complete in case of a crash. On the other hand, a longer check-point interval can also reduce the I/O load that the database places on the system, as each check-point implies a flushing and forcing of all the store files.

String

yes

no

dbms.checkpoint.interval.tx

Configures the transaction interval between check-points. The database will not check-point more often than this (unless check pointing is triggered by a different event), but might check-point less often than this interval, if performing a check-point takes longer time than the configured interval. A check-point is a point in the transaction logs, from which recovery would start from. Longer check-point intervals typically means that recovery will take longer to complete in case of a crash. On the other hand, a longer check-point interval can also reduce the I/O load that the database places on the system, as each check-point implies a flushing and forcing of all the store files. The default is '100000' for a check-point every 100000 transactions.

String

yes

no

dbms.checkpoint.iops.limit

Limit the number of IOs the background checkpoint process will consume per second. This setting is advisory, is ignored in Neo4j Community Edition, and is followed to best effort in Enterprise Edition. An IO in this case is an 8 KiB (mostly sequential) write. Limiting the write IO in this way will leave more bandwidth in the IO subsystem to service random-read IOs, which is important for the response time of queries when the database cannot fit entirely in memory. The only drawback of this setting is that longer checkpoint times may lead to slightly longer recovery times in case of a database or system crash. A lower number means lower IO pressure, and consequently longer checkpoint times. The configuration can also be commented out to remove the limitation entirely, and let the checkpointer flush data as fast as the hardware will go. Set this to -1 to disable the IOPS limit.

String

yes

no
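Taken together, the check-point settings above can be combined in conf/neo4j.conf. A minimal sketch; the values are illustrative, not recommendations:

----
# Check-point at least every 15 minutes...
dbms.checkpoint.interval.time=15m
# ...or every 100000 transactions, whichever limit is reached first
dbms.checkpoint.interval.tx=100000
# Limit background check-point writes to 600 IOs (8 KiB pages) per second
dbms.checkpoint.iops.limit=600
----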

dbms.config.strict_validation

A strict configuration validation will prevent the database from starting up if unknown configuration options are specified in the neo4j settings namespace (such as dbms., ha., cypher., etc). This is currently false by default but will be true by default in 4.0.

String

yes

no

dbms.connector.bolt.advertised_address

Advertised address for this connector.

String

yes

no

dbms.connector.bolt.enabled

Enable this connector.

String

yes

no

dbms.connector.bolt.listen_address

Address the connector should bind to.

String

yes

no

dbms.connector.bolt.type

Connector type. This setting is deprecated and its value will instead be inferred from the name of the connector.

String

yes

no

dbms.connector.http.advertised_address

Advertised address for this connector.

String

yes

no

dbms.connector.http.enabled

Enable this connector.

String

yes

no

dbms.connector.http.listen_address

Address the connector should bind to.

String

yes

no

dbms.connector.http.type

Connector type. This setting is deprecated and its value will instead be inferred from the name of the connector.

String

yes

no

dbms.connectors.default_advertised_address

Default hostname or IP address the server uses to advertise itself to its connectors. To advertise a specific hostname or IP address for a specific connector, specify the advertised_address property for the specific connector.

String

yes

no

dbms.connectors.default_listen_address

Default network interface to listen on for incoming connections. To listen for connections on all interfaces, use "0.0.0.0". To bind a specific connector to a specific network interface, specify the listen_address property for that connector.

String

yes

no
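The connector settings above are typically used together. A sketch of a conf/neo4j.conf fragment; the hostname and the port values after the colons are illustrative:

----
# Listen on all interfaces by default
dbms.connectors.default_listen_address=0.0.0.0
# Hostname advertised to drivers and cluster members
dbms.connectors.default_advertised_address=neo4j.example.com
# Bolt connector
dbms.connector.bolt.enabled=true
dbms.connector.bolt.listen_address=:7687
# HTTP connector
dbms.connector.http.enabled=true
dbms.connector.http.listen_address=:7474
----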

dbms.db.timezone

Database timezone. Among other things, this setting influences which timezone the logs and monitoring procedures use.

String

yes

no

dbms.directories.certificates

Directory for storing certificates to be used by Neo4j for TLS connections

String

yes

no

dbms.directories.data

Path of the data directory. You must not configure more than one Neo4j installation to use the same data directory.

String

yes

no

dbms.directories.import

Sets the root directory for file URLs used with the Cypher LOAD CSV clause. This must be set to a single directory, restricting access to only those files within that directory and its subdirectories.

String

yes

no
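As a sketch of how the import directory restricts LOAD CSV (the directory name and file name are illustrative):

----
# file:/// URLs in LOAD CSV resolve relative to this directory
dbms.directories.import=import
----

With this setting, a query such as `LOAD CSV FROM 'file:///people.csv' AS row RETURN row` reads <neo4j-home>/import/people.csv, and files outside that directory cannot be accessed.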

dbms.directories.logs

Path of the logs directory.

String

yes

no

dbms.directories.metrics

The target location of the CSV files: a path to a directory wherein a CSV file per reported field will be written.

String

yes

no

dbms.directories.plugins

Location of the database plugin directory. Compiled Java JAR files that contain database procedures will be loaded if they are placed in this directory.

String

yes

no

dbms.directories.tx_log

Location where Neo4j keeps the logical transaction logs.

String

yes

no

dbms.ids.reuse.types.override

Specified names of id types (comma separated) that should be reused. Currently only 'node' and 'relationship' types are supported.

String

yes

no

dbms.import.csv.legacy_quote_escaping

Selects whether to conform to the standard https://tools.ietf.org/html/rfc4180 for interpreting escaped quotation characters in CSV files loaded using LOAD CSV. Setting this to false will use the standard, interpreting repeated quotes '""' as a single in-lined quote, while true will use the legacy convention originally supported in Neo4j 3.0 and 3.1, allowing a backslash to include quotes in-lined in fields.

String

yes

no

dbms.index.default_schema_provider

Index provider to use for newly created schema indexes. An index provider may store different value types in separate physical indexes.
lucene-1.0: store spatial and temporal value types in native indexes, and the remaining value types in a Lucene index.
lucene+native-1.0: store numbers in a native index and the remaining value types like lucene-1.0. This improves read and write performance for non-composite indexed numbers.
lucene+native-2.0: store strings in a native index and the remaining value types like lucene+native-1.0. This improves write performance for non-composite indexed strings. This version of the native string index has a value limit of 4047 B, such that the byte representation of a string to index cannot be larger than that limit, or the transaction trying to index such a value will fail. This version of the native string index also has reduced performance for CONTAINS and ENDS WITH queries, because it resorts to an index scan plus filter internally.
Native indexes generally have these benefits over Lucene:
- Faster writes
- Less garbage and heap presence
- Less CPU resources per operation
- Controllable memory usage, due to being bound by the page cache

String

yes

no

dbms.index_sampling.background_enabled

Enable or disable background index sampling

String

yes

no

dbms.index_sampling.buffer_size

Size of buffer used by index sampling. This configuration setting is no longer applicable as of Neo4j 3.0.3. Please use dbms.index_sampling.sample_size_limit instead.

String

yes

no

dbms.index_sampling.sample_size_limit

Index sampling chunk size limit

String

yes

no

dbms.index_sampling.update_percentage

Percentage of index updates of total index size required before sampling of a given index is triggered

String

yes

no

dbms.index_searcher_cache_size

The maximum number of open Lucene index searchers.

String

yes

no

dbms.jvm.additional

Additional JVM arguments.

String

yes

no

dbms.label_index

Backend to use for the label → nodes index.

String

yes

no

dbms.lock.acquisition.timeout

The maximum time interval within which a lock should be acquired.

String

yes

no

dbms.logs.debug.level

Debug log level threshold.

String

yes

no

dbms.logs.debug.path

Path to the debug log file.

String

yes

no

dbms.logs.debug.rotation.delay

Minimum time interval after last rotation of the debug log before it may be rotated again.

String

yes

no

dbms.logs.debug.rotation.keep_number

Maximum number of history files for the debug log.

String

yes

no

dbms.logs.debug.rotation.size

Threshold for rotation of the debug log.

String

yes

no

dbms.logs.query.allocation_logging_enabled

Log allocated bytes for the executed queries being logged. The logged number is cumulative over the duration of the query, i.e. for memory intense or long-running queries the value may be larger than the current memory allocation.

String

yes

no

dbms.logs.query.enabled

Log executed queries that take longer than the configured threshold, dbms.logs.query.threshold. Log entries are by default written to the file query.log located in the Logs directory. For location of the Logs directory, see ???. This feature is available in the Neo4j Enterprise Edition.

String

yes

no

dbms.logs.query.page_logging_enabled

Log page hits and page faults for the executed queries being logged.

String

yes

no

dbms.logs.query.parameter_logging_enabled

Log parameters for the executed queries being logged.

String

yes

no

dbms.logs.query.path

Path to the query log file.

String

yes

no

dbms.logs.query.rotation.keep_number

Maximum number of history files for the query log.

String

yes

no

dbms.logs.query.rotation.size

The file size in bytes at which the query log will auto-rotate. If set to zero then no rotation will occur. Accepts a binary suffix k, m or g.

String

yes

no

dbms.logs.query.runtime_logging_enabled

Logs which runtime that was used to run the query

String

yes

no

dbms.logs.query.threshold

If the execution of a query takes more time than this threshold, the query is logged, provided query logging is enabled. Defaults to 0 seconds, that is, all queries are logged.

String

yes

no

dbms.logs.query.time_logging_enabled

Log detailed time information for the executed queries being logged.

String

yes

no
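A typical query-logging setup combines several of the settings above. A sketch with illustrative values:

----
# Log queries slower than 500 ms to query.log
dbms.logs.query.enabled=true
dbms.logs.query.threshold=500ms
# Include parameters and detailed timing in each entry
dbms.logs.query.parameter_logging_enabled=true
dbms.logs.query.time_logging_enabled=true
# Rotate at 20 MiB, keeping 7 history files
dbms.logs.query.rotation.size=20m
dbms.logs.query.rotation.keep_number=7
----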

dbms.logs.security.level

Security log level threshold.

String

yes

no

dbms.logs.security.path

Path to the security log file.

String

yes

no

dbms.logs.security.rotation.delay

Minimum time interval after last rotation of the security log before it may be rotated again.

String

yes

no

dbms.logs.security.rotation.keep_number

Maximum number of history files for the security log.

String

yes

no

dbms.logs.security.rotation.size

Threshold for rotation of the security log.

String

yes

no

dbms.logs.timezone

Database logs timezone.

String

yes

no

dbms.memory.heap.initial_size

Initial heap size. By default it is calculated based on available system resources.

String

yes

no

dbms.memory.heap.max_size

Maximum heap size. By default it is calculated based on available system resources.

String

yes

no

dbms.memory.pagecache.size

The amount of memory to use for mapping the store files, in bytes (or kilobytes with the 'k' suffix, megabytes with 'm' and gigabytes with 'g'). If Neo4j is running on a dedicated server, then it is generally recommended to leave about 2-4 gigabytes for the operating system, give the JVM enough heap to hold all your transaction state and query context, and then leave the rest for the page cache. If no page cache memory is configured, then a heuristic setting is computed based on available system resources.

String

yes

no
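Following the sizing guidance above, a dedicated server with 32 GiB of RAM might be configured roughly as follows (a sketch, not a recommendation; actual sizes depend on the workload):

----
# Fixed heap: set initial and max to the same value to avoid resize pauses
dbms.memory.heap.initial_size=8g
dbms.memory.heap.max_size=8g
# Page cache for the store files; leaves a few GiB for the OS
dbms.memory.pagecache.size=16g
----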

dbms.memory.pagecache.swapper

Specify which page swapper to use for doing paged IO. This is only used when integrating with proprietary storage technology.

String

yes

no

dbms.mode

Configure the operating mode of the database — 'SINGLE' for stand-alone operation, 'HA' for operating as a member in an HA cluster, 'ARBITER' for a cluster member with no database in an HA cluster, 'CORE' for operating as a core member of a Causal Cluster, or 'READ_REPLICA' for operating as a read replica member of a Causal Cluster.

String

yes

no

dbms.procedures.kill_query_verbose

Specifies whether or not dbms.killQueries produces a verbose output, with information about which queries were not found

String

yes

no

dbms.query_cache_size

The number of Cypher query execution plans that are cached.

String

yes

no

dbms.read_only

Only allow read operations from this Neo4j instance. This mode still requires write access to the directory for lock purposes.

String

yes

no

dbms.record_format

Database record format. Valid values: standard, high_limit. The high_limit format is available for Enterprise Edition only. It is required if you have a graph that is larger than 34 billion nodes, 34 billion relationships, or 68 billion properties. A change of the record format is irreversible. Certain operations may suffer from a performance penalty of up to 10%, which is why this format is not switched on by default.

String

yes

no

dbms.relationship_grouping_threshold

Relationship count threshold for considering a node to be dense

String

yes

no

dbms.security.allow_csv_import_from_file_urls

Determines if Cypher will allow using file URLs when loading data using LOAD CSV. Setting this value to false will cause Neo4j to fail LOAD CSV clauses that load data from the file system.

String

yes

no

dbms.security.auth_cache_max_capacity

The maximum capacity for authentication and authorization caches (respectively).

String

yes

no

dbms.security.auth_cache_ttl

The time to live (TTL) for cached authentication and authorization info when using external auth providers (LDAP or plugin). Setting the TTL to 0 will disable auth caching. Disabling caching while using the LDAP auth provider requires the use of an LDAP system account for resolving authorization information.

String

yes

no

dbms.security.auth_cache_use_ttl

Enable time-based eviction of the authentication and authorization info cache for external auth providers (LDAP or plugin). Disabling this setting will make the cache live forever and only be evicted when dbms.security.auth_cache_max_capacity is exceeded.

String

yes

no

dbms.security.auth_enabled

Enable auth requirement to access Neo4j.

String

yes

no

dbms.security.auth_provider

The authentication and authorization provider that contains both the users and roles. This can be one of the built-in native or ldap providers, or it can be an externally provided plugin, with a custom name prefixed by plugin-, i.e. plugin-<AUTH_PROVIDER_NAME>.

String

yes

no

dbms.security.auth_providers

A list of security authentication and authorization providers containing the users and roles. They will be queried in the given order when login is attempted.

String

yes

no

dbms.security.causal_clustering_status_auth_enabled

Require authorization for access to the Causal Clustering status endpoints.

String

yes

no

dbms.security.ha_status_auth_enabled

Require authorization for access to the HA status endpoints.

String

yes

no

dbms.security.ldap.authentication.cache_enabled

Determines if the result of authentication via the LDAP server should be cached or not. Caching is used to limit the number of LDAP requests that have to be made over the network for users that have already been authenticated successfully. A user can be authenticated against an existing cache entry (instead of via an LDAP server) as long as it is alive (see dbms.security.auth_cache_ttl). An important consequence of setting this to true is that Neo4j then needs to cache a hashed version of the credentials in order to perform credentials matching. This hashing is done using a cryptographic hash function together with a random salt. Preferably a conscious decision should be made if this method is considered acceptable by the security standards of the organization in which this Neo4j instance is deployed.

String

yes

no

dbms.security.ldap.authentication.mechanism

LDAP authentication mechanism. This is one of simple or a SASL mechanism supported by JNDI, for example DIGEST-MD5. simple is basic username and password authentication and SASL is used for more advanced mechanisms. See RFC 2251 LDAPv3 documentation for more details.

String

yes

no

dbms.security.ldap.authentication.use_samaccountname

Perform authentication with sAMAccountName instead of DN. Using this setting requires dbms.security.ldap.authorization.system_username and dbms.security.ldap.authorization.system_password to be set, since there is no way to log in through LDAP directly with the sAMAccountName; instead, the login name will be resolved to a DN that will be used to log in with.

String

yes

no

dbms.security.ldap.authentication.user_dn_template

LDAP user DN template. An LDAP object is referenced by its distinguished name (DN), and a user DN is an LDAP fully-qualified unique user identifier. This setting is used to generate an LDAP DN that conforms with the LDAP directory’s schema from the user principal that is submitted with the authentication token when logging in. The special token {0} is a placeholder where the user principal will be substituted into the DN string.

String

yes

no

dbms.security.ldap.authentication_enabled

Enable authentication via the settings-configurable LDAP authentication provider.

String

yes

no

dbms.security.ldap.authorization.group_membership_attributes

A list of attribute names on a user object that contains groups to be used for mapping to roles when LDAP authorization is enabled.

String

yes

no

dbms.security.ldap.authorization.group_to_role_mapping

An authorization mapping from LDAP group names to Neo4j role names. The map should be formatted as a semicolon separated list of key-value pairs, where the key is the LDAP group name and the value is a comma separated list of corresponding role names. For example: group1=role1;group2=role2;group3=role3,role4,role5 You could also use whitespace and quotes around group names to make this mapping more readable, for example:

----
dbms.security.ldap.authorization.group_to_role_mapping=\
    "cn=Neo4j Read Only,cn=users,dc=example,dc=com"      = reader; \
    "cn=Neo4j Read-Write,cn=users,dc=example,dc=com"     = publisher; \
    "cn=Neo4j Schema Manager,cn=users,dc=example,dc=com" = architect; \
    "cn=Neo4j Administrator,cn=users,dc=example,dc=com"  = admin
----

String

yes

no

dbms.security.ldap.authorization.system_password

An LDAP system account password to use for authorization searches when dbms.security.ldap.authorization.use_system_account is true.

String

yes

no

dbms.security.ldap.authorization.system_username

An LDAP system account username to use for authorization searches when dbms.security.ldap.authorization.use_system_account is true. Note that the dbms.security.ldap.authentication.user_dn_template will not be applied to this username, so you may have to specify a full DN.

String

yes

no

dbms.security.ldap.authorization.use_system_account

Perform LDAP search for authorization info using a system account instead of the user’s own account. If this is set to false (default), the search for group membership will be performed directly after authentication using the LDAP context bound with the user’s own account. The mapped roles will be cached for the duration of dbms.security.auth_cache_ttl, and then expire, requiring re-authentication. To avoid frequently having to re-authenticate sessions you may want to set a relatively long auth cache expiration time together with this option. NOTE: This option will only work if the users are permitted to search for their own group membership attributes in the directory. If this is set to true, the search will be performed using a special system account user with read access to all the users in the directory. You need to specify the username and password using the settings dbms.security.ldap.authorization.system_username and dbms.security.ldap.authorization.system_password with this option. Note that this account only needs read access to the relevant parts of the LDAP directory and does not need to have access rights to Neo4j, or any other systems.

String

yes

no

dbms.security.ldap.authorization.user_search_base

The name of the base object or named context to search for user objects when LDAP authorization is enabled. A common case is that this matches the last part of dbms.security.ldap.authentication.user_dn_template.

String

yes

no

dbms.security.ldap.authorization.user_search_filter

The LDAP search filter to search for a user principal when LDAP authorization is enabled. The filter should contain the placeholder token {0} which will be substituted for the user principal.

String

yes

no

dbms.security.ldap.authorization_enabled

Enable authorization via the settings-configurable LDAP authorization provider.

String

yes

no

dbms.security.ldap.connection_timeout

The timeout for establishing an LDAP connection. If a connection with the LDAP server cannot be established within the given time the attempt is aborted. A value of 0 means to use the network protocol’s (i.e., TCP’s) timeout value.

String

yes

no

dbms.security.ldap.host

URL of LDAP server to use for authentication and authorization. The format of the setting is <protocol>://<hostname>:<port>, where hostname is the only required field. The supported values for protocol are ldap (default) and ldaps. The default port for ldap is 389 and for ldaps 636. For example: ldaps://ldap.example.com:10389. You may want to consider using STARTTLS (dbms.security.ldap.use_starttls) instead of LDAPS for secure connections, in which case the correct protocol is ldap.

String

yes

no

dbms.security.ldap.read_timeout

The timeout for an LDAP read request (i.e. search). If the LDAP server does not respond within the given time the request will be aborted. A value of 0 means wait for a response indefinitely.

String

yes

no

dbms.security.ldap.referral

The LDAP referral behavior when creating a connection. This is one of follow, ignore or throw:
* follow automatically follows any referrals
* ignore ignores any referrals
* throw throws an exception, which will lead to authentication failure

String

yes

no

dbms.security.ldap.use_starttls

Use secure communication with the LDAP server using opportunistic TLS. First an initial insecure connection will be made with the LDAP server, and a STARTTLS command will be issued to negotiate an upgrade of the connection to TLS before initiating authentication.

String

yes

no
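The LDAP settings above fit together as in the following sketch, which authenticates users directly with their own account and then searches for group membership. The host name, DNs and group name are illustrative:

----
dbms.security.auth_provider=ldap
dbms.security.ldap.host=ldaps://ldap.example.com
# Build the user DN from the login name
dbms.security.ldap.authentication.user_dn_template=uid={0},ou=users,dc=example,dc=com
# Search for authorization info with the user's own account
dbms.security.ldap.authorization.use_system_account=false
dbms.security.ldap.authorization.user_search_base=ou=users,dc=example,dc=com
dbms.security.ldap.authorization.user_search_filter=(&(objectClass=*)(uid={0}))
dbms.security.ldap.authorization.group_membership_attributes=memberOf
dbms.security.ldap.authorization.group_to_role_mapping="cn=neo4j-readers,ou=groups,dc=example,dc=com"=reader
----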

dbms.security.log_successful_authentication

Set to log successful authentication events to the security log. If this is set to false only failed authentication events will be logged, which could be useful if you find that the successful events spam the logs too much, and you do not require full auditing capability.

String

yes

no

dbms.security.native.authentication_enabled

Enable authentication via native authentication provider.

String

yes

no

dbms.security.native.authorization_enabled

Enable authorization via native authorization provider.

String

yes

no

dbms.security.plugin.authentication_enabled

Enable authentication via plugin authentication providers.

String

yes

no

dbms.security.plugin.authorization_enabled

Enable authorization via plugin authorization providers.

String

yes

no

dbms.security.procedures.default_allowed

The default role that can execute all procedures and user-defined functions that are not covered by the dbms.security.procedures.roles setting. If the dbms.security.procedures.default_allowed setting is the empty string (default), procedures will be executed according to the same security rules as normal Cypher statements.

String

yes

no

dbms.security.procedures.roles

This provides a finer level of control over which roles can execute procedures than the dbms.security.procedures.default_allowed setting. For example: dbms.security.procedures.roles=apoc.convert.*:reader;apoc.load.json*:writer;apoc.trigger.add:TriggerHappy will allow the role reader to execute all procedures in the apoc.convert namespace, the role writer to execute all procedures in the apoc.load namespace that start with json, and the role TriggerHappy to execute the specific procedure apoc.trigger.add. Procedures not matching any of these patterns will be subject to the dbms.security.procedures.default_allowed setting.

String

yes

no

dbms.security.procedures.unrestricted

A list of procedures and user defined functions (comma separated) that are allowed full access to the database. The list may contain both fully-qualified procedure names, and partial names with the wildcard '*'. Note that this enables these procedures to bypass security. Use with caution.

String

yes

no

dbms.security.procedures.whitelist

A list of procedures (comma separated) that are to be loaded. The list may contain both fully-qualified procedure names, and partial names with the wildcard '*'. If this setting is left empty no procedures will be loaded.

String

yes

no
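A sketch combining the whitelist and unrestricted settings; the procedure names are illustrative:

----
# Load only these procedures (everything else is not loaded)
dbms.security.procedures.whitelist=apoc.coll.*,apoc.load.json
# Let these procedures bypass the security model -- use with caution
dbms.security.procedures.unrestricted=apoc.trigger.*
----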

dbms.security.property_level.blacklist

An authorization mapping for property level access for roles. The map should be formatted as a semicolon separated list of key-value pairs, where the key is the role name and the value is a comma separated list of blacklisted properties. For example: role1=prop1;role2=prop2;role3=prop3,prop4,prop5 You could also use whitespace and quotes around role names to make this mapping more readable, for example:

dbms.security.property_level.blacklist=\
    "role1" = ssn; \
    "role2" = ssn,income; \

String

yes

no

dbms.security.property_level.enabled

Set to true to enable property level security.

String

yes

no

dbms.shutdown_transaction_end_timeout

The maximum amount of time to wait for running transactions to complete before allowing an initiated database shutdown to continue.

String

yes

no

dbms.ssl.policy.<policyname>.allow_key_generation

Allows the generation of a private key and associated self-signed certificate. Only performed when both objects cannot be found.

String

yes

no

dbms.ssl.policy.<policyname>.base_directory

The mandatory base directory for cryptographic objects of this policy. It is also possible to override each individual configuration with absolute paths.

String

yes

no

dbms.ssl.policy.<policyname>.ciphers

Restrict allowed ciphers.

String

yes

no

dbms.ssl.policy.<policyname>.client_auth

Client authentication stance.

String

yes

no

dbms.ssl.policy.<policyname>.private_key

Private PKCS#8 key in PEM format.

String

yes

no

dbms.ssl.policy.<policyname>.private_key_password

The password for the private key.

String

yes

no

dbms.ssl.policy.<policyname>.public_certificate

X.509 certificate (chain) of this server in PEM format.

String

yes

no

dbms.ssl.policy.<policyname>.revoked_dir

Path to directory of CRLs (Certificate Revocation Lists) in PEM format.

String

yes

no

dbms.ssl.policy.<policyname>.tls_versions

Restrict allowed TLS protocol versions.

String

yes

no

dbms.ssl.policy.<policyname>.trust_all

Makes this policy trust all remote parties. Enabling this is not recommended and the trusted directory will be ignored.

String

yes

no

dbms.ssl.policy.<policyname>.trusted_dir

Path to directory of X.509 certificates in PEM format for trusted parties.

String

yes

no
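The per-policy SSL settings above hang off a policy name of your choosing. A sketch using a hypothetical policy named default, with relative paths resolved against the base directory:

----
dbms.ssl.policy.default.base_directory=certificates/default
# Key and certificate paths are relative to the base directory
dbms.ssl.policy.default.private_key=private.key
dbms.ssl.policy.default.public_certificate=public.crt
# Do not require client certificates
dbms.ssl.policy.default.client_auth=NONE
dbms.ssl.policy.default.tls_versions=TLSv1.2
----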

dbms.track_query_allocation

Enables or disables tracking of how many bytes are allocated by the execution of a query.

String

yes

no

dbms.track_query_cpu_time

Enables or disables tracking of how much time a query spends actively executing on the CPU.

String

yes

no

dbms.transaction.bookmark_ready_timeout

The maximum amount of time to wait for the database state represented by the bookmark.

String

yes

no

dbms.transaction.monitor.check.interval

Configures the time interval between transaction monitor checks. Determines how often the monitor thread will check transactions for timeout.

String

yes

no

dbms.transaction.timeout

The maximum time interval within which a transaction should be completed.

String

yes

no

dbms.tx_log.rotation.retention_policy

Make Neo4j keep the logical transaction logs so that the database can be backed up. Can be used to specify the threshold after which logical logs are pruned. For example, "10 days" will prune logical logs that contain only transactions older than 10 days from the current time, while "100k txs" will keep the latest 100k transactions and prune any older transactions.

String

yes

no

dbms.tx_log.rotation.size

Specifies at which file size the logical log will auto-rotate. Minimum accepted value is 1M.

String

yes

no
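For example, the retention and rotation settings above might be set as follows (illustrative values):

----
# Keep transaction logs covering the last 7 days for backup purposes
dbms.tx_log.rotation.retention_policy=7 days
# Rotate the logical log at 250 MiB
dbms.tx_log.rotation.size=250m
----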

dbms.udc.enabled

Enable the UDC extension.

String

yes

no

dbms.windows_service_name

Name of the Windows Service.

String

yes

no

ha.allow_init_cluster

Whether to allow this instance to create a cluster if unable to join.

String

yes

no

ha.branched_data_copying_strategy

Strategy for how to order handling of branched data on slaves and copying of the store from the master. The default is copy_then_branch, which, when combined with the keep_last or keep_none branch handling strategies, results in a safer branching strategy, as there is always a store present, so a failure to copy a store (for example, because of network failure) does not leave the instance without a store.

String

yes

no

ha.branched_data_policy

Policy for how to handle branched data.

String

yes

no

ha.broadcast_timeout

Timeout for broadcasting values in cluster. Must consider end-to-end duration of Paxos algorithm. This value is the default value for the ha.join_timeout and ha.leave_timeout settings.

String

yes

no

ha.configuration_timeout

Timeout for waiting for configuration from an existing cluster member during cluster join.

String

yes

no

ha.data_chunk_size

Max size of the data chunks that flow between master and slaves in HA. A bigger size may increase throughput, but may also be more sensitive to variations in bandwidth, whereas a lower size increases tolerance for bandwidth variations.

String

yes

no

ha.default_timeout

Default timeout used for clustering timeouts. Override specific timeout settings with proper values if necessary. This value is the default value for the ha.heartbeat_interval, ha.paxos_timeout and ha.learn_timeout settings.

String

yes

no

ha.election_timeout

Timeout for waiting for other members to finish a role election. Defaults to ha.paxos_timeout.

String

yes

no

ha.heartbeat_interval

How often heartbeat messages should be sent. Defaults to ha.default_timeout.

String

yes

no

ha.heartbeat_timeout

How long to wait for heartbeats from other instances before marking them as suspects for failure. This value reflects considerations of network latency, expected duration of garbage collection pauses and other factors that can delay message sending and processing. Larger values will result in more stable masters, but will also result in longer waits before a failover in case of master failure. This value should not be set to less than twice the ha.heartbeat_interval value, otherwise there is a high risk of frequent master switches and possibly branched data occurrence.

String

yes

no

ha.host.coordination

Host and port to bind the cluster management communication.

String

yes

no

ha.host.data

Hostname and port to bind the HA server.

String

yes

no

ha.initial_hosts

A comma-separated list of other members of the cluster to join.

String

yes

no
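A minimal HA member configuration combining the settings above might look like this (the hostnames are illustrative, and each instance needs its own ha.server_id):

----
dbms.mode=HA
# Unique within the cluster
ha.server_id=1
# The other members to contact when joining
ha.initial_hosts=neo4j-01:5001,neo4j-02:5001,neo4j-03:5001
# Cluster management and HA data channels
ha.host.coordination=0.0.0.0:5001
ha.host.data=0.0.0.0:6001
----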

ha.internal_role_switch_timeout

Timeout for waiting for internal conditions during state switch, such as waiting for transactions to complete, before switching to master or slave.

String

yes

no

ha.join_timeout

Timeout for joining a cluster. Defaults to ha.broadcast_timeout. Note that if the timeout expires during cluster formation, the operator may have to restart the instance or instances.

String

yes

no

ha.learn_timeout

Timeout for learning values. Defaults to ha.default_timeout.

String

yes

no

ha.leave_timeout

Timeout for waiting for cluster leave to finish. Defaults to ha.broadcast_timeout.

String

yes

no

ha.max_acceptors

Maximum number of servers to involve when agreeing to membership changes. In very large clusters, the probability of half the cluster failing is low, but protecting against any arbitrary half failing is expensive. Therefore you may wish to set this parameter to a value less than the cluster size.

String

yes

no

ha.max_channels_per_slave

Maximum number of connections a slave can have to the master.

String

yes

no

ha.paxos_timeout

Default value for all Paxos timeouts. This setting controls the default value for the ha.phase1_timeout, ha.phase2_timeout and ha.election_timeout settings. If it is not given a value it defaults to ha.default_timeout and will implicitly change if ha.default_timeout changes. This is an advanced parameter which should only be changed if specifically advised by Neo4j Professional Services.

String

yes

no

ha.phase1_timeout

Timeout for Paxos phase 1. If it is not given a value it defaults to ha.paxos_timeout and will implicitly change if ha.paxos_timeout changes. This is an advanced parameter which should only be changed if specifically advised by Neo4j Professional Services.

String

yes

no

ha.phase2_timeout

Timeout for Paxos phase 2. If it is not given a value it defaults to ha.paxos_timeout and will implicitly change if ha.paxos_timeout changes. This is an advanced parameter which should only be changed if specifically advised by Neo4j Professional Services.

String

yes

no

ha.pull_batch_size

Size of batches of transactions applied on slaves when pulling from the master.

String

yes

no

ha.pull_interval

Interval for pulling updates from the master.

String

yes

no

ha.role_switch_timeout

Timeout for request threads waiting for instance to become master or slave.

String

yes

no

ha.server_id

ID for a cluster instance. Must be unique within the cluster.

String

yes

no

ha.slave_lock_timeout

Timeout for taking remote (write) locks on slaves. Defaults to ha.slave_read_timeout.

String

yes

no

ha.slave_only

Whether this instance should only participate as a slave in the cluster. If set to true, it will never be elected as master.

String

yes

no

ha.slave_read_timeout

How long a slave will wait for a response from the master before giving up.

String

yes

no

ha.strict_initial_hosts

Configuration attribute

String

yes

no

ha.tx_push_factor

The number of slaves the master will ask to replicate a committed transaction.

String

yes

no

ha.tx_push_strategy

Push strategy of a transaction to a slave during commit.

String

yes

no
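The push settings above control synchronous replication at commit time. A minimal conf/neo4j.conf sketch, with illustrative values (the strategy name round_robin is taken from Neo4j's HA documentation):

```
# Ask 2 slaves to acknowledge each committed transaction
ha.tx_push_factor=2
# Spread the push load evenly across slaves
ha.tx_push_strategy=round_robin
```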

hazelcast.license_key

Hazelcast license key

String

yes

no

metrics.bolt.messages.enabled

Enable reporting metrics about Bolt Protocol message processing.

String

yes

no

metrics.csv.enabled

Set to true to enable exporting metrics to CSV files

String

yes

no

metrics.csv.interval

The reporting interval for the CSV files. That is, how often new rows with numbers are appended to the CSV files.

String

yes

no

metrics.csv.rotation.keep_number

Maximum number of history files for the csv files.

String

yes

no

metrics.csv.rotation.size

The file size in bytes at which the csv files will auto-rotate. If set to zero then no rotation will occur. Accepts a binary suffix k, m or g.

String

yes

no
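The metrics.csv.* settings above can be combined in conf/neo4j.conf like this (values are illustrative):

```
metrics.csv.enabled=true
# Append a new row to each CSV file every 30 seconds
metrics.csv.interval=30s
# Auto-rotate each file at 10 MB and keep 7 history files
metrics.csv.rotation.size=10m
metrics.csv.rotation.keep_number=7
```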

metrics.cypher.replanning.enabled

Enable reporting metrics about number of occurred replanning events.

String

yes

no

metrics.enabled

The default enablement value for all the supported metrics. Set this to false to turn off all metrics by default. The individual settings can then be used to selectively re-enable specific metrics.

String

yes

no
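As the description above notes, metrics.enabled acts as a global default that individual settings can override. A sketch of selective re-enabling in conf/neo4j.conf:

```
# Turn off all metrics by default...
metrics.enabled=false
# ...then re-enable only the metrics of interest
metrics.neo4j.tx.enabled=true
metrics.jvm.gc.enabled=true
```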

metrics.graphite.enabled

Set to true to enable exporting metrics to Graphite.

String

yes

no

metrics.graphite.interval

The reporting interval for Graphite. That is, how often to send updated metrics to Graphite.

String

yes

no

metrics.graphite.server

The hostname or IP address of the Graphite server

String

yes

no
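A minimal Graphite export sketch for conf/neo4j.conf; the hostname is a placeholder, and 2003 is Graphite's conventional plaintext port:

```
metrics.graphite.enabled=true
# Hostname (or IP) and port of the Graphite server
metrics.graphite.server=graphite.example.com:2003
# Send updated metrics once per minute
metrics.graphite.interval=60s
```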

metrics.jvm.buffers.enabled

Enable reporting metrics about the buffer pools.

String

yes

no

metrics.jvm.gc.enabled

Enable reporting metrics about the duration of garbage collections

String

yes

no

metrics.jvm.memory.enabled

Enable reporting metrics about the memory usage.

String

yes

no

metrics.jvm.threads.enabled

Enable reporting metrics about the current number of threads running.

String

yes

no

metrics.neo4j.causal_clustering.enabled

Enable reporting metrics about Causal Clustering mode.

String

yes

no

metrics.neo4j.checkpointing.enabled

Enable reporting metrics about Neo4j check pointing; when it occurs and how much time it takes to complete.

String

yes

no

metrics.neo4j.cluster.enabled

Enable reporting metrics about HA cluster info.

String

yes

no

metrics.neo4j.counts.enabled

Enable reporting metrics about approximately how many entities are in the database; nodes, relationships, properties, etc.

String

yes

no

metrics.neo4j.enabled

The default enablement value for all Neo4j specific support metrics. Set this to false to turn off all Neo4j specific metrics by default. The individual metrics.neo4j.* metrics can then be turned on selectively.

String

yes

no

metrics.neo4j.logrotation.enabled

Enable reporting metrics about the Neo4j log rotation; when it occurs and how much time it takes to complete.

String

yes

no

metrics.neo4j.network.enabled

Enable reporting metrics about the network usage.

String

yes

no

metrics.neo4j.pagecache.enabled

Enable reporting metrics about the Neo4j page cache; page faults, evictions, flushes, exceptions, etc.

String

yes

no

metrics.neo4j.server.enabled

Enable reporting metrics about Server threading info.

String

yes

no

metrics.neo4j.tx.enabled

Enable reporting metrics about transactions; number of transactions started, committed, etc.

String

yes

no

metrics.prefix

A common prefix for the reported metrics field names. By default, this is either 'neo4j', or, when running in an HA configuration, a computed value based on the cluster and instance names.

String

yes

no

metrics.prometheus.enabled

Set to true to enable the Prometheus endpoint

String

yes

no

metrics.prometheus.endpoint

The hostname and port to use as Prometheus endpoint

String

yes

no
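The two Prometheus settings above can be combined as follows; the address shown is an illustrative example:

```
metrics.prometheus.enabled=true
# Host and port the Prometheus scrape endpoint binds to
metrics.prometheus.endpoint=localhost:2004
```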

tools.consistency_checker.check_graph

This setting is deprecated. See the command-line arguments for neo4j-admin check-consistency instead. Perform checks between nodes, relationships, properties, types and tokens.

String

yes

no

tools.consistency_checker.check_indexes

This setting is deprecated. See the command-line arguments for neo4j-admin check-consistency instead. Perform checks on indexes. Checking indexes is more expensive than checking the native stores, so it may be useful to turn off this check for very large databases.

String

yes

no

tools.consistency_checker.check_label_scan_store

This setting is deprecated. See the command-line arguments for neo4j-admin check-consistency instead. Perform checks on the label scan store. Checking this store is more expensive than checking the native stores, so it may be useful to turn off this check for very large databases.

String

yes

no

tools.consistency_checker.check_property_owners

This setting is deprecated. See the command-line arguments for neo4j-admin check-consistency instead. Perform optional additional checking on property ownership. This can detect a theoretical inconsistency where a property could be owned by multiple entities. However, the check is very expensive in time and memory, so it is skipped by default.

String

yes

no

unsupported.cypher.compiler_tracing

Enable tracing of compilation in cypher.

String

yes

no

unsupported.cypher.idp_solver_duration_threshold

To improve IDP query planning time, we can restrict the internal planning loop duration, triggering more frequent compaction of candidate plans. The smaller the threshold the faster the planning, but the higher the risk of sub-optimal plans.

String

yes

no

unsupported.cypher.idp_solver_table_threshold

To improve IDP query planning time, we can restrict the internal planning table size, triggering compaction of candidate plans. The smaller the threshold the faster the planning, but the higher the risk of sub-optimal plans.

String

yes

no

unsupported.cypher.morsel_size

The size of the morsels

String

yes

no

unsupported.cypher.non_indexed_label_warning_threshold

The threshold at which a warning is generated if a label scan is performed after a LOAD CSV where the label has no index.

String

yes

no

unsupported.cypher.number_of_workers

Number of threads to allocate to Cypher worker threads. If set to 0, two workers will be started for every physical core in the system.

String

yes

no

unsupported.cypher.plan_with_minimum_cardinality_estimates

Enable using minimum cardinality estimates in the Cypher cost planner, so that cardinality estimates for logical plan operators are not allowed to go below certain thresholds even when the statistics give smaller numbers. This is especially useful for large import queries that write nodes and relationships into an empty or small database, where the generated query plan needs to be able to scale beyond the initial statistics. Otherwise, when this is disabled, the statistics on an empty or tiny database may lead the cost planner to for example pick a scan over an index seek, even when an index exists, because of a lower estimated cost.

String

yes

no

unsupported.cypher.replan_algorithm

Large databases might change slowly, and to prevent queries from never being replanned the divergence threshold set by cypher.statistics_divergence_threshold is configured to shrink over time using the algorithm set here. This will cause the threshold to reach the value set by unsupported.cypher.statistics_divergence_target once the time since the previous replanning has reached the value set in unsupported.cypher.target_replan_interval. Setting the algorithm to 'none' will cause the threshold to not decay over time.

String

yes

no

unsupported.cypher.runtime

Set this to specify the default runtime for the default language version.

String

yes

no

unsupported.cypher.statistics_divergence_target

Large databases might change slowly, and so to prevent queries from never being replanned the divergence threshold set by cypher.statistics_divergence_threshold is configured to shrink over time. The algorithm used to manage this change is set by unsupported.cypher.replan_algorithm and will cause the threshold to reach the value set here once the time since the previous replanning has reached unsupported.cypher.target_replan_interval. Setting this value to higher than the cypher.statistics_divergence_threshold will cause the threshold to not decay over time.

String

yes

no

unsupported.cypher.target_replan_interval

Large databases might change slowly, and to prevent queries from never being replanned the divergence threshold set by cypher.statistics_divergence_threshold is configured to shrink over time. The algorithm used to manage this change is set by unsupported.cypher.replan_algorithm and will cause the threshold to reach the value set by unsupported.cypher.statistics_divergence_target once the time since the previous replanning has reached the value set here. Setting this value to less than the value of cypher.min_replan_interval will cause the threshold to not decay over time.

String

yes

no
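The three replanning settings above work as a decay schedule for the divergence threshold. An illustrative conf/neo4j.conf sketch; note that 'none' is the only algorithm value named above, so the value 'exponential' here is an assumption about the supported algorithm names:

```
# Shrink the divergence threshold over time (assumed algorithm name)
unsupported.cypher.replan_algorithm=exponential
# Decay from cypher.statistics_divergence_threshold down to this target...
unsupported.cypher.statistics_divergence_target=0.10
# ...by the time this much has elapsed since the last replanning
unsupported.cypher.target_replan_interval=7h
```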

unsupported.dbms.block_size.array_properties

Specifies the block size for storing arrays. This parameter is only honored when the store is created, otherwise it is ignored. Also note that each block carries ~10B of overhead, so the record size on disk will be slightly larger than the configured block size.

String

yes

no

unsupported.dbms.block_size.labels

Specifies the block size for storing labels that exceed the in-lined space in the node record. This parameter is only honored when the store is created, otherwise it is ignored. Also note that each block carries ~10B of overhead, so the record size on disk will be slightly larger than the configured block size.

String

yes

no

unsupported.dbms.block_size.strings

Specifies the block size for storing strings. This parameter is only honored when the store is created, otherwise it is ignored. Note that each character in a string occupies two bytes, meaning that e.g. a block size of 120 will hold a 60 character long string before overflowing into a second block. Also note that each block carries ~10B of overhead, so the record size on disk will be slightly larger than the configured block size.

String

yes

no
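The block-size settings above only take effect at store creation time. An illustrative conf/neo4j.conf sketch with the arithmetic from the description worked into comments:

```
# 120-byte string blocks: 60 characters (2 bytes each) per block
# before overflowing into a second block; each block also carries
# ~10B of on-disk overhead.
unsupported.dbms.block_size.strings=120
unsupported.dbms.block_size.array_properties=120
unsupported.dbms.block_size.labels=60
```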

unsupported.dbms.bolt.inbound_message_throttle.high_watermark

When the number of queued inbound messages grows beyond this value, reading from the underlying channel will be paused (no more inbound messages will be accepted) until the number of queued messages drops below the configured low watermark value.

String

yes

no

unsupported.dbms.bolt.inbound_message_throttle.low_watermark

When the number of queued inbound messages, having previously reached the configured high watermark value, drops below this value, reading from the underlying channel is resumed and pending messages will start queuing again.

String

yes

no
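The two watermarks above form a hysteresis band: reading pauses at the high mark and resumes only once the queue drains below the low mark. An illustrative conf/neo4j.conf sketch:

```
# Pause reading from the channel once 300 messages are queued...
unsupported.dbms.bolt.inbound_message_throttle.high_watermark=300
# ...and resume only after the queue drains below 100
unsupported.dbms.bolt.inbound_message_throttle.low_watermark=100
```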

unsupported.dbms.bolt.outbound_buffer_throttle

Whether to apply network-level throttling based on the size of outbound network buffers.

String

yes

no

unsupported.dbms.bolt.outbound_buffer_throttle.high_watermark

When the size (in bytes) of the outbound network buffers used by Bolt’s network layer grows beyond this value, the Bolt channel will advertise itself as unwritable and block the related processing thread until it becomes writable again.

String

yes

no

unsupported.dbms.bolt.outbound_buffer_throttle.low_watermark

When the size (in bytes) of outbound network buffers, previously advertised as unwritable, drops below this value, the Bolt channel will re-advertise itself as writable and the blocked processing thread will resume execution.

String

yes

no

unsupported.dbms.bolt.outbound_buffer_throttle.max_duration

When the total time the outbound network buffer based throttle lock is held exceeds this value, the corresponding Bolt channel will be aborted. Setting this to 0 disables this behaviour.

String

yes

no

unsupported.dbms.counts_store_rotation_timeout

Maximum time to wait for active transaction completion when rotating counts store

String

yes

no

unsupported.dbms.db.spatial.crs.cartesian-3d.x.max

The maximum x value for the index extents for 3D cartesian-3d spatial index. The 3D to 1D mapping function divides all 3D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 3D space. This requires that the extents of the 3D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no

unsupported.dbms.db.spatial.crs.cartesian-3d.x.min

The minimum x value for the index extents for 3D cartesian-3d spatial index. The 3D to 1D mapping function divides all 3D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 3D space. This requires that the extents of the 3D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no

unsupported.dbms.db.spatial.crs.cartesian-3d.y.max

The maximum y value for the index extents for 3D cartesian-3d spatial index. The 3D to 1D mapping function divides all 3D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 3D space. This requires that the extents of the 3D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no

unsupported.dbms.db.spatial.crs.cartesian-3d.y.min

The minimum y value for the index extents for 3D cartesian-3d spatial index. The 3D to 1D mapping function divides all 3D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 3D space. This requires that the extents of the 3D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no

unsupported.dbms.db.spatial.crs.cartesian-3d.z.max

The maximum z value for the index extents for 3D cartesian-3d spatial index. The 3D to 1D mapping function divides all 3D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 3D space. This requires that the extents of the 3D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no

unsupported.dbms.db.spatial.crs.cartesian-3d.z.min

The minimum z value for the index extents for 3D cartesian-3d spatial index. The 3D to 1D mapping function divides all 3D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 3D space. This requires that the extents of the 3D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no

unsupported.dbms.db.spatial.crs.cartesian.x.max

The maximum x value for the index extents for 2D cartesian spatial index. The 2D to 1D mapping function divides all 2D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 2D space. This requires that the extents of the 2D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no

unsupported.dbms.db.spatial.crs.cartesian.x.min

The minimum x value for the index extents for 2D cartesian spatial index. The 2D to 1D mapping function divides all 2D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 2D space. This requires that the extents of the 2D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no

unsupported.dbms.db.spatial.crs.cartesian.y.max

The maximum y value for the index extents for 2D cartesian spatial index. The 2D to 1D mapping function divides all 2D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 2D space. This requires that the extents of the 2D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no

unsupported.dbms.db.spatial.crs.cartesian.y.min

The minimum y value for the index extents for 2D cartesian spatial index. The 2D to 1D mapping function divides all 2D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 2D space. This requires that the extents of the 2D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no
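The four cartesian extent settings above must be fixed before any affected index is created. An illustrative conf/neo4j.conf sketch; the extent values are arbitrary examples:

```
# Fix the 2D cartesian index extents in advance; changing them
# later requires recreating any affected index.
unsupported.dbms.db.spatial.crs.cartesian.x.min=-1000000
unsupported.dbms.db.spatial.crs.cartesian.x.max=1000000
unsupported.dbms.db.spatial.crs.cartesian.y.min=-1000000
unsupported.dbms.db.spatial.crs.cartesian.y.max=1000000
```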

unsupported.dbms.db.spatial.crs.wgs-84-3d.x.max

The maximum x value for the index extents for 3D wgs-84-3d spatial index. The 3D to 1D mapping function divides all 3D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 3D space. This requires that the extents of the 3D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no

unsupported.dbms.db.spatial.crs.wgs-84-3d.x.min

The minimum x value for the index extents for 3D wgs-84-3d spatial index. The 3D to 1D mapping function divides all 3D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 3D space. This requires that the extents of the 3D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no

unsupported.dbms.db.spatial.crs.wgs-84-3d.y.max

The maximum y value for the index extents for 3D wgs-84-3d spatial index. The 3D to 1D mapping function divides all 3D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 3D space. This requires that the extents of the 3D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no

unsupported.dbms.db.spatial.crs.wgs-84-3d.y.min

The minimum y value for the index extents for 3D wgs-84-3d spatial index. The 3D to 1D mapping function divides all 3D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 3D space. This requires that the extents of the 3D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no

unsupported.dbms.db.spatial.crs.wgs-84-3d.z.max

The maximum z value for the index extents for 3D wgs-84-3d spatial index. The 3D to 1D mapping function divides all 3D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 3D space. This requires that the extents of the 3D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no

unsupported.dbms.db.spatial.crs.wgs-84-3d.z.min

The minimum z value for the index extents for 3D wgs-84-3d spatial index. The 3D to 1D mapping function divides all 3D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 3D space. This requires that the extents of the 3D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no

unsupported.dbms.db.spatial.crs.wgs-84.x.max

The maximum x value for the index extents for 2D wgs-84 spatial index. The 2D to 1D mapping function divides all 2D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 2D space. This requires that the extents of the 2D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no

unsupported.dbms.db.spatial.crs.wgs-84.x.min

The minimum x value for the index extents for 2D wgs-84 spatial index. The 2D to 1D mapping function divides all 2D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 2D space. This requires that the extents of the 2D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no

unsupported.dbms.db.spatial.crs.wgs-84.y.max

The maximum y value for the index extents for 2D wgs-84 spatial index. The 2D to 1D mapping function divides all 2D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 2D space. This requires that the extents of the 2D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no

unsupported.dbms.db.spatial.crs.wgs-84.y.min

The minimum y value for the index extents for 2D wgs-84 spatial index. The 2D to 1D mapping function divides all 2D space into discrete tiles, and orders these using a space filling curve designed to optimize the requirement that tiles that are close together in this ordered list are also close together in 2D space. This requires that the extents of the 2D space be known in advance and never changed. If you do change these settings, you need to recreate any affected index in order for the settings to apply, otherwise the index will retain the previous settings.

String

yes

no

unsupported.dbms.directories.auth

Configuration attribute

String

yes

no

unsupported.dbms.directories.database

Configuration attribute

String

yes

no

unsupported.dbms.directories.neo4j_home

Root relative to which directory settings are resolved. This is set in code and should never be configured explicitly.

String

yes

no

unsupported.dbms.disconnected

Disable all protocol connectors.

String

yes

no

unsupported.dbms.edition

Configuration attribute

String

yes

no

unsupported.dbms.enable_native_schema_index

Configuration attribute

String

yes

no

unsupported.dbms.ephemeral

Configuration attribute

String

yes

no

unsupported.dbms.executiontime_limit.enabled

Please use dbms.transaction.timeout instead.

String

yes

no

unsupported.dbms.id_generator_fast_rebuild_enabled

Use a quick approach for rebuilding the ID generators. This gives quicker recovery time, but will limit the ability to reuse the space of deleted entities.

String

yes

no

unsupported.dbms.id_reuse_safe_zone

Duration for which the master will buffer ids and not reuse them, to allow slaves to read consistently. Slaves will also terminate transactions running longer than this duration when applying the received transaction stream, to make sure they do not read potentially inconsistent/reused records.

String

yes

no

unsupported.dbms.index.archive_failed

Create an archive of an index before re-creating it if failing to load on startup.

String

yes

no

unsupported.dbms.index.spatial.curve.bottom_threshold

When searching the spatial index we need to convert a 2D range in the quad tree into a set of 1D ranges on the underlying 1D space filling curve index. There is a balance to be made between many small 1D ranges that have few false positives, and fewer, larger 1D ranges that have more false positives. The former has a more efficient filtering of false positives, while the latter will have a more efficient search of the numerical index. The maximum depth to which the quad tree is processed when mapping 2D to 1D is based on the size of the search area compared to the size of the 2D tiles at that depth. When traversing the tree to this depth, we can stop early based on when the search envelope overlaps the current tile by more than a certain threshold. The threshold is calculated based on depth, from the top_threshold at the top of the tree to the bottom_threshold at the depth calculated by the area comparison. Setting the top to 0.99 and the bottom to 0.5, for example would mean that if we reached the maximum depth, and the search area overlapped the current tile by more than 50%, we would stop traversing the tree, and return the 1D range for that entire tile to the search set. If the overlap is even higher, we would stop higher in the tree. This technique reduces the number of 1D ranges passed to the underlying space filling curve index. Setting this value to zero turns off this feature.

String

yes

no

unsupported.dbms.index.spatial.curve.extra_levels

When searching the spatial index we need to convert a 2D range in the quad tree into a set of 1D ranges on the underlying 1D space filling curve index. There is a balance to be made between many small 1D ranges that have few false positives, and fewer, larger 1D ranges that have more false positives. The former has a more efficient filtering of false positives, while the latter will have a more efficient search of the numerical index. The maximum depth to which the quad tree is processed when mapping 2D to 1D is based on the size of the search area compared to the size of the 2D tiles at that depth. This setting will cause the algorithm to search deeper, reducing false positives.

String

yes

no

unsupported.dbms.index.spatial.curve.max_bits

The maximum number of bits to use for levels in the quad tree representing the spatial index. When creating the spatial index, we simulate a quad tree using a 2D (or 3D) to 1D mapping function. This requires that the extents of the index and the depth of the tree be defined in advance, to ensure the 2D to 1D mapping is deterministic and repeatable. This setting defines the maximum depth of any future spatial index created, calculated as max_bits / dimensions. For example, 60 bits will define 30 levels in 2D and 20 levels in 3D. Existing indexes will not be changed, and need to be recreated if you wish to use the new value. For 2D indexes, a value of 30 is the largest supported. For 3D indexes, 20 is the largest.

String

yes

no
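The depth arithmetic described above can be made concrete in conf/neo4j.conf; 60 is simply the value used in the description's own example:

```
# Depth = max_bits / dimensions:
#   60 bits / 2 dimensions = 30 levels for 2D indexes
#   60 bits / 3 dimensions = 20 levels for 3D indexes
unsupported.dbms.index.spatial.curve.max_bits=60
```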

unsupported.dbms.index.spatial.curve.top_threshold

When searching the spatial index we need to convert a 2D range in the quad tree into a set of 1D ranges on the underlying 1D space filling curve index. There is a balance to be made between many small 1D ranges that have few false positives, and fewer, larger 1D ranges that have more false positives. The former has a more efficient filtering of false positives, while the latter will have a more efficient search of the numerical index. The maximum depth to which the quad tree is processed when mapping 2D to 1D is based on the size of the search area compared to the size of the 2D tiles at that depth. When traversing the tree to this depth, we can stop early based on when the search envelope overlaps the current tile by more than a certain threshold. The threshold is calculated based on depth, from the top_threshold at the top of the tree to the bottom_threshold at the depth calculated by the area comparison. Setting the top to 0.99 and the bottom to 0.5, for example would mean that if we reached the maximum depth, and the search area overlapped the current tile by more than 50%, we would stop traversing the tree, and return the 1D range for that entire tile to the search set. If the overlap is even higher, we would stop higher in the tree. This technique reduces the number of 1D ranges passed to the underlying space filling curve index. Setting this value to zero turns off this feature.

String

yes

no
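The top and bottom thresholds above control early stopping while descending the quad tree, using the 0.99/0.5 values from the description's own example:

```
# Stop descending early when the search envelope covers enough of
# the current tile: 99% near the root, relaxing to 50% at maximum
# depth. Setting either value to 0 turns the feature off.
unsupported.dbms.index.spatial.curve.top_threshold=0.99
unsupported.dbms.index.spatial.curve.bottom_threshold=0.5
```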

unsupported.dbms.kernel_id

An identifier that uniquely identifies this graph database instance within this JVM. Defaults to an auto-generated number depending on how many instances are started in this JVM.

String

yes

no

unsupported.dbms.lock_manager

Configuration attribute

String

yes

no

unsupported.dbms.logs.bolt.enabled

Configuration attribute

String

yes

no

unsupported.dbms.logs.bolt.path

Configuration attribute

String

yes

no

unsupported.dbms.logs.debug.debug_loggers

Debug log contexts that should output debug level logging

String

yes

no

unsupported.dbms.memory.pagecache.pagesize

Target size for pages of mapped memory. If set to 0, then a reasonable default is chosen, depending on the storage device used.

String

yes

no

unsupported.dbms.memory.pagecache.warmup.enable

The page cache can be configured to perform usage sampling of loaded pages, which can be used to construct an active load profile. According to that profile, pages can be reloaded on restart, replication, etc. This setting allows disabling that behavior. This feature is available in Neo4j Enterprise Edition.

String

yes

no

unsupported.dbms.memory.pagecache.warmup.profile.interval

The profiling frequency for the page cache. Accurate profiles allow the page cache to do active warmup after a restart, reducing the mean time to performance. This feature is available in Neo4j Enterprise Edition.

String

yes

no

unsupported.dbms.multi_threaded_schema_index_population_enabled

Configuration attribute

String

yes

no

unsupported.dbms.query.snapshot

Specifies if the engine should run a Cypher query based on a snapshot of accessed data. The query will be restarted if concurrent modification of the data is detected.

String

yes

no

unsupported.dbms.query.snapshot.retries

Specifies the number of retries the query engine will make to execute a query based on a stable snapshot of accessed data before giving up.

String

yes

no

unsupported.dbms.record_id_batch_size

Specifies the size of id batches local to each transaction when committing. Committing a transaction which contains changes most often results in new data records being created. For each record a new id needs to be generated from an id generator. It is more efficient to allocate a batch of ids from the contended id generator, which the transaction then holds and generates ids from while creating these new records. This setting specifies how big those batches are. Remaining ids are freed back to the id generator on clean shutdown.

String

yes

no

unsupported.dbms.report_configuration

Print out the effective Neo4j configuration after startup.

String

yes

no

unsupported.dbms.schema.release_lock_while_building_constraint

Whether or not to release the exclusive schema lock while building a uniqueness constraint index

String

yes

no

unsupported.dbms.security.auth_max_failed_attempts

Configuration attribute

String

yes

no

unsupported.dbms.security.auth_store.location

Configuration attribute

String

yes

no

unsupported.dbms.security.ldap.authorization.connection_pooling

Set to true if connection pooling should be used for authorization searches using the system account.

String

yes

no

unsupported.dbms.security.module

Configuration attribute

String

yes

no

unsupported.dbms.security.tls_certificate_file

Path to the X.509 public certificate to be used by Neo4j for TLS connections

String

yes

no

unsupported.dbms.security.tls_key_file

Path to the X.509 private key to be used by Neo4j for TLS connections

String

yes

no

unsupported.dbms.tracer

Configuration attribute

String

yes

no

unsupported.dbms.transaction_start_timeout

The maximum amount of time to wait for the database to become available, when starting a new transaction.

String

yes

no

unsupported.dbms.tx_state.memory_allocation

[Experimental] Defines whether memory for transaction state should be allocated on-heap or off-heap.

String

yes

no

unsupported.dbms.udc.first_delay

Configuration attribute

String

yes

no

unsupported.dbms.udc.host

Configuration attribute

String

yes

no

unsupported.dbms.udc.interval

Configuration attribute

String

yes

no

unsupported.dbms.udc.reg

Configuration attribute

String

yes

no

unsupported.dbms.udc.source

Configuration attribute

String

yes

no

unsupported.ha.cluster_name

The name of a cluster.

String

yes

no

unsupported.ha.instance_name

Configuration attribute

String

yes

no

unsupported.tools.batch_inserter.batch_size

Specifies the number of operations that the batch inserter will try to group into one batch before flushing data to the underlying storage.

String

yes

no

unsupported.vm_pause_monitor.measurement_duration

Configuration attribute

String

yes

no

unsupported.vm_pause_monitor.stall_alert_threshold

Configuration attribute

String

yes

no
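
The MBeans listed in the tables below can be read over the remote JMX connection described at the start of this chapter, using the standard JMX client API. The sketch below shows that pattern; the port (3637) and the monitor/Neo4j credentials in the comments match the defaults discussed above, while the in-process connector server exists only to make the example self-contained and runnable without a live Neo4j instance (the port 45617 is an arbitrary choice for this example).

```java
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class JmxConnectExample {
    public static void main(String[] args) throws Exception {
        // Stand-in for a remote Neo4j: expose this JVM's own platform
        // MBeanServer over RMI so the client code below can connect.
        LocateRegistry.createRegistry(45617);
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:45617/jmxrmi");
        JMXConnectorServer server = JMXConnectorServerFactory
                .newJMXConnectorServer(url, null,
                        ManagementFactory.getPlatformMBeanServer());
        server.start();

        // Client side. Against a real Neo4j server the URL would use
        // <IP-OF-SERVER>:3637 and the credentials from conf/jmx.password:
        Map<String, Object> env = new HashMap<>();
        env.put(JMXConnector.CREDENTIALS, new String[] {"monitor", "Neo4j"});
        // (The credentials are ignored by the unauthenticated server above.)
        try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // Against Neo4j, query "org.neo4j:*" to list the beans described
            // in the tables below; here we list the JVM's java.lang beans.
            Set<ObjectName> names =
                    mbsc.queryNames(new ObjectName("java.lang:*"), null);
            for (ObjectName name : names) {
                System.out.println(name);
            }
        } finally {
            server.stop();
        }
    }
}
```

The same MBeanServerConnection is then used for all the attribute reads and operation invocations in the tables that follow.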

Table 9.4. MBean Diagnostics (org.neo4j.management.Diagnostics) Attributes
Name Description Type Read Write

Diagnostics provided by Neo4j

DiagnosticsProviders

A list of the ids for the registered diagnostics providers.

List (java.util.List)

yes

no

Table 9.5. MBean Diagnostics (org.neo4j.management.Diagnostics) Operations
Name Description ReturnType Signature

dumpAll

Dump diagnostics information to JMX

String

(no parameters)

dumpToLog

Dump diagnostics information to the log.

void

(no parameters)

dumpToLog

Dump diagnostics information to the log.

void

java.lang.String

extract

Operation exposed for management

String

java.lang.String
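
Operations such as dumpToLog above are invoked through MBeanServerConnection.invoke, passing the parameter values and their type signature as parallel arrays. The sketch below invokes a standard JVM MBean operation so it runs without a Neo4j instance; the commented-out call shows the equivalent invocation on the Diagnostics bean, whose object name (org.neo4j:instance=kernel#0,name=Diagnostics) and example logger argument are assumptions based on Neo4j's usual naming scheme.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxInvokeExample {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();

        // Invoke a no-argument void operation (here: Memory.gc on the JVM;
        // the pattern is the same for the no-argument dumpToLog overload).
        mbs.invoke(new ObjectName("java.lang:type=Memory"),
                "gc", new Object[0], new String[0]);

        // The dumpToLog(java.lang.String) overload takes one String, so the
        // values array and the signature array each have one entry:
        // mbs.invoke(new ObjectName("org.neo4j:instance=kernel#0,name=Diagnostics"),
        //         "dumpToLog", new Object[] {"org.neo4j.kernel"},
        //         new String[] {"java.lang.String"});
        System.out.println("invoked");
    }
}
```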

Table 9.6. MBean Index sampler (org.neo4j.management.IndexSamplingManager) Operations
Name Description ReturnType Signature

triggerIndexSampling

triggerIndexSampling

void

java.lang.String,java.lang.String,boolean

Table 9.7. MBean Kernel (org.neo4j.jmx.Kernel) Attributes
Name Description Type Read Write

Information about the Neo4j kernel

DatabaseName

The name of the mounted database

String

yes

no

KernelStartTime

The time at which this Neo4j instance entered operational mode.

Date (java.util.Date)

yes

no

KernelVersion

The version of Neo4j

String

yes

no

MBeanQuery

An ObjectName that can be used as a query for getting all management beans for this Neo4j instance.

javax.management.ObjectName

yes

no

ReadOnly

Whether this is a read only instance

boolean

yes

no

StoreCreationDate

The time when this Neo4j graph store was created.

Date (java.util.Date)

yes

no

StoreId

An identifier that, together with store creation time, uniquely identifies this Neo4j graph store.

String

yes

no

StoreLogVersion

The current version of the Neo4j store logical log.

long

yes

no
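
Attributes such as KernelVersion and StoreId above are read with MBeanServerConnection.getAttribute. The sketch below reads an attribute from a standard JVM bean so it is runnable standalone; the commented lines show the same call against the Kernel bean, whose object name (org.neo4j:instance=kernel#0,name=Kernel) is an assumption that should be verified via the MBeanQuery attribute.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxAttributeExample {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();

        // Read a simple attribute; the pattern is identical for
        // KernelVersion, ReadOnly, StoreId, etc. on the Kernel bean.
        int cpus = (Integer) mbs.getAttribute(
                new ObjectName("java.lang:type=OperatingSystem"),
                "AvailableProcessors");
        System.out.println("AvailableProcessors=" + cpus);

        // Against Neo4j (object name is an assumption):
        // Object version = mbs.getAttribute(
        //         new ObjectName("org.neo4j:instance=kernel#0,name=Kernel"),
        //         "KernelVersion");
    }
}
```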

Table 9.8. MBean Locking (org.neo4j.management.LockManager) Attributes
Name Description Type Read Write

Information about the Neo4j lock status

Locks

Information about all locks held by Neo4j

java.util.List<org.neo4j.kernel.info.LockInfo> as CompositeData[]

yes

no

NumberOfAvertedDeadlocks

The number of lock sequences that would have led to a deadlock situation that Neo4j has detected and averted (by throwing DeadlockDetectedException).

long

yes

no

Table 9.9. MBean Locking (org.neo4j.management.LockManager) Operations
Name Description ReturnType Signature

getContendedLocks

getContendedLocks

java.util.List<org.neo4j.kernel.info.LockInfo> as CompositeData[]

long

Table 9.10. MBean Memory Mapping (org.neo4j.management.MemoryMapping) Attributes
Name Description Type Read Write

The status of Neo4j memory mapping

MemoryPools

Get information about each pool of memory mapped regions from store files with memory mapping enabled

org.neo4j.management.WindowPoolInfo[] as CompositeData[]

yes

no

Table 9.11. MBean Page cache (org.neo4j.management.PageCache) Attributes
Name Description Type Read Write

Information about the Neo4j page cache. All numbers are counts and sums since the Neo4j instance was started

BytesRead

Number of bytes read from durable storage.

long

yes

no

BytesWritten

Number of bytes written to durable storage.

long

yes

no

EvictionExceptions

Number of exceptions caught during page eviction. This number should be zero, or at least not growing, in a healthy database. Otherwise it could indicate a drive failure, a lack of storage space, or permission problems.

long

yes

no

Evictions

Number of page evictions. How many pages have been removed from memory to make room for other pages.

long

yes

no

Faults

Number of page faults. How often requested data was not found in memory and had to be loaded.

long

yes

no

FileMappings

Number of files that have been mapped into the page cache.

long

yes

no

FileUnmappings

Number of files that have been unmapped from the page cache.

long

yes

no

Flushes

Number of page flushes. How many dirty pages have been written to durable storage.

long

yes

no

HitRatio

Ratio of hits to the total number of lookups in the page cache

double

yes

no

Pins

Number of page pins. How many pages have been accessed (monitoring must be enabled separately).

long

yes

no

UsageRatio

The percentage of used pages. Will return NaN if it cannot be determined.

double

yes

no
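
Several beans in this chapter return structured values as CompositeData (for example Locks, MemoryPools, and InstancesInCluster), which must be unpacked field by field. The sketch below reads a CompositeData attribute from the JVM's own Memory bean and computes a ratio analogous to the page cache's UsageRatio, so it runs without a Neo4j instance.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;

public class JmxCompositeExample {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();

        // HeapMemoryUsage is exposed as CompositeData, just like the Neo4j
        // beans that expose LockInfo[] or WindowPoolInfo[] as CompositeData[].
        CompositeData heap = (CompositeData) mbs.getAttribute(
                new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
        long used = (Long) heap.get("used");
        long committed = (Long) heap.get("committed");

        // Analogous to the page cache UsageRatio: the fraction of the
        // committed heap currently in use.
        double ratio = (double) used / committed;
        System.out.printf("used=%d committed=%d ratio=%.2f%n",
                used, committed, ratio);
    }
}
```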

Table 9.12. MBean Primitive count (org.neo4j.jmx.Primitives) Attributes
Name Description Type Read Write

Estimates of the numbers of different kinds of Neo4j primitives

NumberOfNodeIdsInUse

An estimation of the number of nodes used in this Neo4j instance

long

yes

no

NumberOfPropertyIdsInUse

An estimation of the number of properties used in this Neo4j instance

long

yes

no

NumberOfRelationshipIdsInUse

An estimation of the number of relationships used in this Neo4j instance

long

yes

no

NumberOfRelationshipTypeIdsInUse

The number of relationship types used in this Neo4j instance

long

yes

no

Table 9.13. MBean Reports (org.neo4j.dbms.diagnostics.jmx.Reports) Attributes
Name Description Type Read Write

Reports operations

EnvironmentVariables

Returns a map of the current environment variables

String

yes

no

Table 9.14. MBean Reports (org.neo4j.dbms.diagnostics.jmx.Reports) Operations
Name Description ReturnType Signature

listTransactions

List all active transactions

String

(no parameters)

Table 9.15. MBean Store file sizes (org.neo4j.jmx.StoreFile) Attributes
Name Description Type Read Write

This bean is deprecated, use StoreSize bean instead; Information about the sizes of the different parts of the Neo4j graph store

ArrayStoreSize

The amount of disk space used to store array properties, in bytes.

long

yes

no

LogicalLogSize

The amount of disk space used by the current Neo4j logical log, in bytes.

long

yes

no

NodeStoreSize

The amount of disk space used to store nodes, in bytes.

long

yes

no

PropertyStoreSize

The amount of disk space used to store properties (excluding string values and array values), in bytes.

long

yes

no

RelationshipStoreSize

The amount of disk space used to store relationships, in bytes.

long

yes

no

StringStoreSize

The amount of disk space used to store string properties, in bytes.

long

yes

no

TotalStoreSize

The total disk space used by this Neo4j instance, in bytes.

long

yes

no

Table 9.16. MBean Store sizes (org.neo4j.jmx.StoreSize) Attributes
Name Description Type Read Write

Information about the disk space used by different parts of the Neo4j graph store

ArrayStoreSize

Disk space used to store array properties, in bytes.

long

yes

no

CountStoreSize

Disk space used to store counters, in bytes

long

yes

no

IndexStoreSize

Disk space used to store all indices, in bytes

long

yes

no

LabelStoreSize

Disk space used to store labels, in bytes

long

yes

no

NodeStoreSize

Disk space used to store nodes, in bytes.

long

yes

no

PropertyStoreSize

Disk space used to store properties (excluding string values and array values), in bytes.

long

yes

no

RelationshipStoreSize

Disk space used to store relationships, in bytes.

long

yes

no

SchemaStoreSize

Disk space used to store schemas (index and constraint declarations), in bytes

long

yes

no

StringStoreSize

Disk space used to store string properties, in bytes.

long

yes

no

TotalStoreSize

Disk space used by whole store, in bytes.

long

yes

no

TransactionLogsSize

Disk space used by the transaction logs, in bytes.

long

yes

no

Table 9.17. MBean Transactions (org.neo4j.management.TransactionManager) Attributes
Name Description Type Read Write

Information about the Neo4j transaction manager

LastCommittedTxId

The id of the latest committed transaction

long

yes

no

NumberOfCommittedTransactions

The total number of committed transactions

long

yes

no

NumberOfOpenedTransactions

The total number of started transactions

long

yes

no

NumberOfOpenTransactions

The number of currently open transactions

long

yes

no

NumberOfRolledBackTransactions

The total number of rolled back transactions

long

yes

no

PeakNumberOfConcurrentTransactions

The highest number of transactions ever opened concurrently

long

yes

no
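
The transaction manager attributes above, like the page cache attributes earlier, are cumulative counters since instance start, so monitoring tools typically sample them twice and report the delta over the sampling interval. A minimal sketch of that pattern, using the JVM's garbage-collection counters as a stand-in for a counter such as NumberOfCommittedTransactions:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class CounterSamplingExample {
    static long total() {
        // Sum a cumulative counter across beans, as a monitoring agent
        // would sample NumberOfCommittedTransactions over JMX.
        long sum = 0;
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            sum += Math.max(0, gc.getCollectionCount());
        }
        return sum;
    }

    public static void main(String[] args) throws Exception {
        long before = total();
        System.gc();                    // generate some counter activity
        Thread.sleep(100);
        long delta = total() - before;  // rate = delta / sampling interval
        System.out.println("delta=" + delta + " over 100 ms");
    }
}
```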

9.4.2. High Availability JMX MBeans

The following JMX management beans are unique to instances that are part of a High Availability cluster.

Table 9.18. MBeans exposed by Neo4j in High Availability mode
Name Description

Branched Store

Information about the branched stores present in this HA cluster member.

High Availability

Information about an instance participating in a HA cluster.

Table 9.19. MBean Branched Store (org.neo4j.management.BranchedStore) Attributes
Name Description Type Read Write

Information about the branched stores present in this HA cluster member

BranchedStores

A list of the branched stores

org.neo4j.management.BranchedStoreInfo[] as CompositeData[]

yes

no

Table 9.20. MBean High Availability (org.neo4j.management.HighAvailability) Attributes
Name Description Type Read Write

Information about an instance participating in a HA cluster

Alive

Whether this instance is alive or not

boolean

yes

no

Available

Whether this instance is available or not

boolean

yes

no

InstanceId

The identifier used to identify this server in the HA cluster

String

yes

no

InstancesInCluster

Information about all instances in this cluster

org.neo4j.management.ClusterMemberInfo[] as CompositeData[]

yes

no

LastCommittedTxId

The latest transaction id present in this instance’s store

long

yes

no

LastUpdateTime

The time when the data on this instance was last updated from the master

String

yes

no

Role

The role this instance has in the cluster

String

yes

no

Table 9.21. MBean High Availability (org.neo4j.management.HighAvailability) Operations
Name Description ReturnType Signature

update

(If this is a slave) Update the database on this instance with the latest transactions from the master

String

(no parameters)