Configuration settings
This page provides a complete reference to the Neo4j configuration settings, which can be set in neo4j.conf. Refer to The neo4j.conf file for details on how to use configuration settings.
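As a quick orientation, each setting is a plain key=value line in neo4j.conf. The fragment below is only an illustrative sketch using settings listed on this page; the values are placeholders, not recommendations:

# Example neo4j.conf fragment (illustrative values only)
server.default_listen_address=0.0.0.0
server.memory.heap.initial_size=2g
server.memory.heap.max_size=2g
# Lines beginning with '#' are comments and are ignored.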
-
browser.allow_outgoing_connections: Enterprise onlyConfigure the policy for outgoing Neo4j Browser connections.
-
browser.credential_timeout: Enterprise onlyConfigure the Neo4j Browser to time out logged in users after this idle period.
-
browser.post_connect_cmd: Commands to be run when Neo4j Browser successfully connects to this server.
-
browser.remote_content_hostname_whitelist: Whitelist of hosts for the Neo4j Browser to be allowed to fetch content from.
-
browser.retain_connection_credentials: Enterprise onlyConfigure the Neo4j Browser to store or not store user credentials.
-
browser.retain_editor_history: Enterprise onlyConfigure the Neo4j Browser to store or not store user editor history.
-
client.allow_telemetry: Configure client applications such as Browser and Bloom to send Product Analytics data.
-
db.checkpoint: Configures the general policy for when check-points should occur.
-
db.checkpoint.interval.time: Configures the time interval between check-points.
-
db.checkpoint.interval.tx: Configures the transaction interval between check-points.
-
db.checkpoint.interval.volume: Configures the volume of transaction logs between check-points.
-
db.checkpoint.iops.limit: Limit the number of IOs the background checkpoint process will consume per second.
-
db.cluster.catchup.pull_interval: Enterprise onlyInterval of pulling updates from cores.
-
db.cluster.raft.apply.buffer.max_bytes: Enterprise onlyThe maximum number of bytes in the apply buffer.
-
db.cluster.raft.apply.buffer.max_entries: Enterprise onlyThe maximum number of entries in the raft log entry prefetch buffer.
-
db.cluster.raft.in_queue.batch.max_bytes: Enterprise onlyLargest batch processed by RAFT in bytes.
-
db.cluster.raft.in_queue.max_bytes: Enterprise onlyMaximum number of bytes in the RAFT in-queue.
-
db.cluster.raft.leader_transfer.priority_group: Enterprise onlyThe name of a server_group whose members should be prioritized as leaders.
-
db.cluster.raft.log.prune_strategy: Enterprise onlyRAFT log pruning strategy that determines which logs are to be pruned.
-
db.cluster.raft.log_shipping.buffer.max_bytes: Enterprise onlyThe maximum number of bytes in the in-flight cache.
-
db.cluster.raft.log_shipping.buffer.max_entries: Enterprise onlyThe maximum number of entries in the in-flight cache.
-
db.filewatcher.enabled: Allows the enabling or disabling of the file watcher service.
-
db.format: Database format.
-
db.import.csv.buffer_size: The size of the internal buffer in bytes used by LOAD CSV.
-
db.import.csv.legacy_quote_escaping: Selects whether to conform to the standard https://tools.ietf.org/html/rfc4180 for interpreting escaped quotation characters in CSV files loaded using LOAD CSV.
-
db.index.fulltext.default_analyzer: The name of the analyzer that the fulltext indexes should use by default.
-
db.index.fulltext.eventually_consistent: Whether fulltext indexes should be eventually consistent by default.
-
db.index.fulltext.eventually_consistent_index_update_queue_max_length: The eventually_consistent mode of the fulltext indexes works by queueing up index updates to be applied later in a background thread.
-
db.index_sampling.background_enabled: Enable or disable background index sampling.
-
db.index_sampling.sample_size_limit: Index sampling chunk size limit.
-
db.index_sampling.update_percentage: Percentage of index updates of total index size required before sampling of a given index is triggered.
-
db.lock.acquisition.timeout: The maximum time interval within which lock should be acquired.
-
db.logs.query.early_raw_logging_enabled: Log query text and parameters without obfuscating passwords.
-
db.logs.query.enabled: Log executed queries.
-
db.logs.query.max_parameter_length: Sets a maximum character length used for each parameter in the log.
-
db.logs.query.obfuscate_literals: Obfuscates all literals of the query before writing to the log.
-
db.logs.query.parameter_logging_enabled: Log parameters for the executed queries being logged.
-
db.logs.query.plan_description_enabled: Log query plan description table, useful for debugging purposes.
-
db.logs.query.threshold: If the execution of a query takes more time than this threshold, the query is logged once completed - provided query logging is set to INFO.
-
db.logs.query.transaction.enabled: Log the start and end of a transaction.
-
db.logs.query.transaction.threshold: If the transaction is open for longer than this threshold, the transaction is logged once completed - provided transaction logging (db.logs.query.transaction.enabled) is set to INFO.
-
db.memory.pagecache.warmup.enable: Page cache can be configured to perform usage sampling of loaded pages that can be used to construct active load profile.
-
db.memory.pagecache.warmup.preload: Page cache warmup can be configured to prefetch files, preferably when cache size is bigger than store size.
-
db.memory.pagecache.warmup.preload.allowlist: Page cache warmup prefetch file allowlist regex.
-
db.memory.pagecache.warmup.profile.interval: The profiling frequency for the page cache.
-
db.memory.transaction.max: Limit the amount of memory that a single transaction can consume, in bytes (or kilobytes with the 'k' suffix, megabytes with 'm' and gigabytes with 'g').
-
db.memory.transaction.total.max: Limit the amount of memory that all transactions in one database can consume, in bytes (or kilobytes with the 'k' suffix, megabytes with 'm' and gigabytes with 'g').
-
db.recovery.fail_on_missing_files: If true, Neo4j will abort recovery if transaction log files are missing.
-
db.relationship_grouping_threshold: Relationship count threshold for considering a node to be dense.
-
db.shutdown_transaction_end_timeout: The maximum amount of time to wait for running transactions to complete before allowing initiated database shutdown to continue.
-
db.store.files.preallocate: Specify if Neo4j should try to preallocate store files as they grow.
-
db.temporal.timezone: Database timezone for temporal functions.
-
db.track_query_cpu_time: Enables or disables tracking of how much time a query spends actively executing on the CPU.
-
db.transaction.bookmark_ready_timeout: The maximum amount of time to wait for the database state represented by the bookmark.
-
db.transaction.concurrent.maximum: The maximum number of concurrently running transactions.
-
db.transaction.monitor.check.interval: Configures the time interval between transaction monitor checks.
-
db.transaction.sampling.percentage: Transaction sampling percentage.
-
db.transaction.timeout: The maximum time interval of a transaction within which it should be completed.
-
db.transaction.tracing.level: Transaction creation tracing level.
-
db.tx_log.buffer.size: On serialization, transaction logs are temporarily stored in a byte buffer that is flushed at the end of the transaction, or whenever the buffer becomes full.
-
db.tx_log.preallocate: Specify if Neo4j should try to preallocate logical log file in advance.
-
db.tx_log.rotation.retention_policy: Tell Neo4j how long logical transaction logs should be kept to backup the database. For example, "10 days" will prune logical logs that only contain transactions older than 10 days. Alternatively, "100k txs" will keep the 100k latest transactions from each database and prune any older transactions.
-
db.tx_log.rotation.size: Specifies at which file size the logical log will auto-rotate.
-
db.tx_state.memory_allocation: Defines whether memory for transaction state should be allocated on- or off-heap.
-
dbms.cluster.catchup.client_inactivity_timeout: Enterprise onlyThe catch up protocol times out if the given duration elapses with no network activity.
-
dbms.cluster.discovery.endpoints: Enterprise onlyA comma-separated list of endpoints which a server should contact in order to discover other cluster members.
-
dbms.cluster.discovery.log_level: Enterprise onlyThe level of middleware logging.
-
dbms.cluster.discovery.type: Enterprise onlyConfigure the discovery type used for cluster name resolution.
-
dbms.cluster.minimum_initial_system_primaries_count: Enterprise only. Minimum number of machines initially required to form a clustered DBMS.
-
dbms.cluster.network.handshake_timeout: Enterprise onlyTime out for protocol negotiation handshake.
-
dbms.cluster.network.max_chunk_size: Enterprise onlyMaximum chunk size allowable across network by clustering machinery.
-
dbms.cluster.network.supported_compression_algos: Enterprise onlyNetwork compression algorithms that this instance will allow in negotiation as a comma-separated list.
-
dbms.cluster.raft.binding_timeout: Enterprise only. The time allowed for a database on a Neo4j server to either join a cluster or form a new cluster with the other Neo4j Servers provided by dbms.cluster.discovery.endpoints.
-
dbms.cluster.raft.client.max_channels: Enterprise onlyThe maximum number of TCP channels between two nodes to operate the raft protocol.
-
dbms.cluster.raft.election_failure_detection_window: Enterprise onlyThe rate at which leader elections happen.
-
dbms.cluster.raft.leader_failure_detection_window: Enterprise onlyThe time window within which the loss of the leader is detected and the first re-election attempt is held.
-
dbms.cluster.raft.leader_transfer.balancing_strategy: Enterprise onlyWhich strategy to use when transferring database leaderships around a cluster.
-
dbms.cluster.raft.log.pruning_frequency: Enterprise onlyRAFT log pruning frequency.
-
dbms.cluster.raft.log.reader_pool_size: Enterprise onlyRAFT log reader pool size.
-
dbms.cluster.raft.log.rotation_size: Enterprise onlyRAFT log rotation size.
-
dbms.cluster.raft.membership.join_max_lag: Enterprise onlyMaximum amount of lag accepted for a new follower to join the Raft group.
-
dbms.cluster.raft.membership.join_timeout: Enterprise onlyTime out for a new member to catch up.
-
dbms.cluster.store_copy.max_retry_time_per_request: Enterprise onlyMaximum retry time per request during store copy.
-
dbms.cypher.forbid_exhaustive_shortestpath: This setting is associated with performance optimization.
-
dbms.cypher.forbid_shortestpath_common_nodes: This setting is associated with performance optimization.
-
dbms.cypher.hints_error: Set this to specify the behavior when Cypher planner or runtime hints cannot be fulfilled.
-
dbms.cypher.lenient_create_relationship: Set this to change the behavior for Cypher create relationship when the start or end node is missing.
-
dbms.cypher.min_replan_interval: The minimum time between possible cypher query replanning events.
-
dbms.cypher.planner: Set this to specify the default planner for the default language version.
-
dbms.cypher.render_plan_description: If set to true, a textual representation of the plan description will be rendered on the server for all queries running with EXPLAIN or PROFILE.
-
dbms.cypher.statistics_divergence_threshold: The threshold for statistics above which a plan is considered stale. If any of the underlying statistics used to create the plan have changed more than this value, the plan will be considered stale and will be replanned.
-
dbms.databases.seed_from_uri_providers: Enterprise onlyDatabases may be created from an existing 'seed' (a database backup or dump) stored at some source URI.
-
dbms.db.timezone: Database timezone.
-
dbms.kubernetes.address: Enterprise onlyAddress for Kubernetes API.
-
dbms.kubernetes.ca_crt: Enterprise onlyFile location of CA certificate for Kubernetes API.
-
dbms.kubernetes.cluster_domain: Enterprise onlyKubernetes cluster domain.
-
dbms.kubernetes.label_selector: Enterprise onlyLabelSelector for Kubernetes API.
-
dbms.kubernetes.namespace: Enterprise onlyFile location of namespace for Kubernetes API.
-
dbms.kubernetes.service_port_name: Enterprise onlyService port name for discovery for Kubernetes API.
-
dbms.kubernetes.token: Enterprise onlyFile location of token for Kubernetes API.
-
dbms.logs.http.enabled: Enable HTTP request logging.
-
dbms.max_databases: Enterprise onlyThe maximum number of databases.
-
dbms.memory.tracking.enable: Enable off heap and on heap memory tracking.
-
dbms.memory.transaction.total.max: Limit the amount of memory that all of the running transactions can consume, in bytes (or kilobytes with the 'k' suffix, megabytes with 'm' and gigabytes with 'g').
-
dbms.netty.ssl.provider: Netty SSL provider.
-
dbms.routing.client_side.enforce_for_domains: Always use client side routing (regardless of the default router) for neo4j:// protocol connections to these domains.
-
dbms.routing.default_router: Routing strategy for neo4j:// protocol connections. Default is CLIENT, using client-side routing, with server-side routing as a fallback (if enabled). When set to SERVER, client-side routing is short-circuited, and requests will rely on server-side routing (which must be enabled for proper operation, i.e. dbms.routing.enabled=true).
-
dbms.routing.driver.connection.connect_timeout: Socket connection timeout. A timeout of zero is treated as an infinite timeout and will be bound by the timeout configured on the operating system level.
-
dbms.routing.driver.connection.max_lifetime: Pooled connections older than this threshold will be closed and removed from the pool. Setting this option to a low value will cause a high connection churn and might result in a performance hit. It is recommended to set maximum lifetime to a slightly smaller value than the one configured in network equipment (load balancer, proxy, firewall, etc.).
-
dbms.routing.driver.connection.pool.acquisition_timeout: Maximum amount of time spent attempting to acquire a connection from the connection pool. This timeout only kicks in when all existing connections are being used and no new connections can be created because maximum connection pool size has been reached. Error is raised when connection can’t be acquired within configured time. Negative values are allowed and result in unlimited acquisition timeout.
-
dbms.routing.driver.connection.pool.idle_test: Pooled connections that have been idle in the pool for longer than this timeout will be tested before they are used again, to ensure they are still alive. If this option is set too low, an additional network call will be incurred when acquiring a connection, which causes a performance hit. If this is set high, no longer live connections might be used which might lead to errors. Hence, this parameter tunes a balance between the likelihood of experiencing connection problems and performance. Normally, this parameter should not need tuning. Value 0 means connections will always be tested for validity.
-
dbms.routing.driver.connection.pool.max_size: Maximum total number of connections to be managed by a connection pool. The limit is enforced for a combination of a host and user.
-
dbms.routing.driver.logging.level: Sets level for driver internal logging.
-
dbms.routing.enabled: Enable server-side routing in clusters using an additional bolt connector. When configured, this allows requests to be forwarded from one cluster member to another, if the requests can’t be satisfied by the first member (e.g.
-
dbms.routing.load_balancing.plugin: Enterprise onlyThe load balancing plugin to use.
-
dbms.routing.load_balancing.shuffle_enabled: Enterprise onlyEnables shuffling of the returned load balancing result.
-
dbms.routing.reads_on_primaries_enabled: Enterprise only. Configure if the dbms.routing.getRoutingTable() procedure should include non-writer primaries as read endpoints or return only secondaries.
-
dbms.routing.reads_on_writers_enabled: Enterprise only. Configure if the dbms.routing.getRoutingTable() procedure should include the writer as a read endpoint or return only non-writers (non-writer primaries and secondaries). Note: the writer is returned as a read endpoint if no other member is present at all.
-
dbms.routing_ttl: How long callers should cache the response of the routing procedure dbms.routing.getRoutingTable().
-
dbms.security.allow_csv_import_from_file_urls: Determines if Cypher will allow using file URLs when loading data using LOAD CSV.
-
dbms.security.auth_cache_max_capacity: Enterprise onlyThe maximum capacity for authentication and authorization caches (respectively).
-
dbms.security.auth_cache_ttl: Enterprise onlyThe time to live (TTL) for cached authentication and authorization info when using external auth providers (LDAP or plugin).
-
dbms.security.auth_cache_use_ttl: Enterprise onlyEnable time-based eviction of the authentication and authorization info cache for external auth providers (LDAP or plugin).
-
dbms.security.auth_enabled: Enable auth requirement to access Neo4j.
-
dbms.security.auth_lock_time: The amount of time a user account should be locked after a configured number of unsuccessful authentication attempts.
-
dbms.security.auth_max_failed_attempts: The maximum number of unsuccessful authentication attempts before imposing a user lock for the configured amount of time, as defined by dbms.security.auth_lock_time. The locked-out user will not be able to log in until the lock period expires, even if correct credentials are provided.
-
dbms.security.authentication_providers: Enterprise onlyA list of security authentication providers containing the users and roles.
-
dbms.security.authorization_providers: Enterprise onlyA list of security authorization providers containing the users and roles.
-
dbms.security.cluster_status_auth_enabled: Enterprise onlyRequire authorization for access to the Causal Clustering status endpoints.
-
dbms.security.http_access_control_allow_origin: Value of the Access-Control-Allow-Origin header sent over any HTTP or HTTPS connector.
-
dbms.security.http_auth_allowlist: Defines an allowlist of http paths where Neo4j authentication is not required.
-
dbms.security.http_strict_transport_security: Value of the HTTP Strict-Transport-Security (HSTS) response header.
-
dbms.security.key.name: Enterprise onlyName of the 256 length AES encryption key, which is used for the symmetric encryption.
-
dbms.security.keystore.password: Enterprise onlyPassword for accessing the keystore holding a 256 length AES encryption key, which is used for the symmetric encryption.
-
dbms.security.keystore.path: Enterprise onlyLocation of the keystore holding a 256 length AES encryption key, which is used for the symmetric encryption of secrets held in system database.
-
dbms.security.ldap.authentication.attribute: Enterprise only. The attribute to use when looking up users. Using this setting requires dbms.security.ldap.authentication.search_for_attribute to be true and thus dbms.security.ldap.authorization.system_username and dbms.security.ldap.authorization.system_password to be configured.
-
dbms.security.ldap.authentication.cache_enabled: Enterprise onlyDetermines if the result of authentication via the LDAP server should be cached or not.
-
dbms.security.ldap.authentication.mechanism: Enterprise onlyLDAP authentication mechanism.
-
dbms.security.ldap.authentication.search_for_attribute: Enterprise only. Perform authentication by searching for a unique attribute of a user. Using this setting requires dbms.security.ldap.authorization.system_username and dbms.security.ldap.authorization.system_password to be configured.
-
dbms.security.ldap.authentication.user_dn_template: Enterprise onlyLDAP user DN template.
-
dbms.security.ldap.authorization.access_permitted_group: Enterprise only. The LDAP group to which a user must belong to get any access to the system. Set this to restrict access to a subset of LDAP users belonging to a particular group.
-
dbms.security.ldap.authorization.group_membership_attributes: Enterprise onlyA list of attribute names on a user object that contains groups to be used for mapping to roles when LDAP authorization is enabled.
-
dbms.security.ldap.authorization.group_to_role_mapping: Enterprise onlyAn authorization mapping from LDAP group names to Neo4j role names.
-
dbms.security.ldap.authorization.nested_groups_enabled: Enterprise onlyThis setting determines whether multiple LDAP search results will be processed (as is required for the lookup of nested groups).
-
dbms.security.ldap.authorization.nested_groups_search_filter: Enterprise onlyThe search template which will be used to find the nested groups which the user is a member of.
-
dbms.security.ldap.authorization.system_password: Enterprise only. An LDAP system account password to use for authorization searches when dbms.security.ldap.authorization.use_system_account is true.
-
dbms.security.ldap.authorization.system_username: Enterprise only. An LDAP system account username to use for authorization searches when dbms.security.ldap.authorization.use_system_account is true.
-
dbms.security.ldap.authorization.use_system_account: Enterprise only. Perform LDAP search for authorization info using a system account instead of the user’s own account. If this is set to false (default), the search for group membership will be performed directly after authentication using the LDAP context bound with the user’s own account.
-
dbms.security.ldap.authorization.user_search_base: Enterprise onlyThe name of the base object or named context to search for user objects when LDAP authorization is enabled.
-
dbms.security.ldap.authorization.user_search_filter: Enterprise onlyThe LDAP search filter to search for a user principal when LDAP authorization is enabled.
-
dbms.security.ldap.connection_timeout: Enterprise onlyThe timeout for establishing an LDAP connection.
-
dbms.security.ldap.host: Enterprise onlyURL of LDAP server to use for authentication and authorization.
-
dbms.security.ldap.read_timeout: Enterprise onlyThe timeout for an LDAP read request (i.e.
-
dbms.security.ldap.referral: Enterprise onlyThe LDAP referral behavior when creating a connection.
-
dbms.security.ldap.use_starttls: Enterprise onlyUse secure communication with the LDAP server using opportunistic TLS.
-
dbms.security.log_successful_authentication: Enterprise onlySet to log successful authentication events to the security log.
-
dbms.security.oidc.<provider>.audience: Enterprise onlyExpected values of the Audience (aud) claim in the id token.
-
dbms.security.oidc.<provider>.auth_endpoint: Enterprise onlyThe OIDC authorization endpoint.
-
dbms.security.oidc.<provider>.auth_flow: Enterprise onlyThe OIDC flow to use.
-
dbms.security.oidc.<provider>.auth_params: Enterprise onlyOptional additional parameters that the auth endpoint requires.
-
dbms.security.oidc.<provider>.authorization.group_to_role_mapping: Enterprise onlyAn authorization mapping from IdP group names to Neo4j role names.
-
dbms.security.oidc.<provider>.claims.groups: Enterprise onlyThe claim to use as the list of groups in Neo4j.
-
dbms.security.oidc.<provider>.claims.username: Enterprise onlyThe claim to use as the username in Neo4j.
-
dbms.security.oidc.<provider>.client_id: Enterprise onlyClient id needed if token contains multiple Audience (aud) claims.
-
dbms.security.oidc.<provider>.config: Enterprise only
-
dbms.security.oidc.<provider>.display_name: Enterprise onlyThe user-facing name of the provider as provided by the discovery endpoint to clients (Bloom, Browser etc.).
-
dbms.security.oidc.<provider>.get_groups_from_user_info: Enterprise onlyWhen turned on, Neo4j gets the groups from the provider user info endpoint.
-
dbms.security.oidc.<provider>.get_username_from_user_info: Enterprise onlyWhen turned on, Neo4j gets the username from the provider user info endpoint.
-
dbms.security.oidc.<provider>.issuer: Enterprise onlyThe expected value of the iss claim in the id token.
-
dbms.security.oidc.<provider>.jwks_uri: Enterprise onlyThe location of the JWK public key set for the identity provider.
-
dbms.security.oidc.<provider>.params: Enterprise onlyThe map is a semicolon separated list of key-value pairs.
-
dbms.security.oidc.<provider>.token_endpoint: Enterprise onlyThe OIDC token endpoint.
-
dbms.security.oidc.<provider>.token_params: Enterprise onlyOptional query parameters that the token endpoint requires.
-
dbms.security.oidc.<provider>.user_info_uri: Enterprise onlyThe identity providers user info uri.
-
dbms.security.oidc.<provider>.well_known_discovery_uri: Enterprise onlyThe 'well known' OpenID Connect Discovery endpoint used to fetch identity provider settings.
-
dbms.security.procedures.allowlist: A list of procedures (comma separated) that are to be loaded.
-
dbms.security.procedures.unrestricted: A list of procedures and user defined functions (comma separated) that are allowed full access to the database.
-
initial.dbms.database_allocator: Enterprise onlyName of the initial database allocator.
-
initial.dbms.default_database: Name of the default database (aliases are not supported).
-
initial.dbms.default_primaries_count: Enterprise onlyInitial default number of primary instances of user databases.
-
initial.dbms.default_secondaries_count: Enterprise onlyInitial default number of secondary instances of user databases.
-
initial.server.allowed_databases: Enterprise onlyThe names of databases that are allowed on this server - all others are denied.
-
initial.server.denied_databases: Enterprise onlyThe names of databases that are not allowed on this server.
-
initial.server.mode_constraint: Enterprise onlyAn instance can restrict itself to allow databases to be hosted only as primaries or secondaries.
-
server.backup.enabled: Enterprise onlyEnable support for running online backups.
-
server.backup.listen_address: Enterprise onlyNetwork interface and port for the backup server to listen on.
-
server.backup.store_copy_max_retry_time_per_request: Enterprise onlyMaximum retry time per request during store copy.
-
server.bolt.advertised_address: Advertised address for this connector.
-
server.bolt.connection_keep_alive: The maximum time to wait before sending a NOOP on connections waiting for responses from active ongoing queries. The minimum value is 1 millisecond.
-
server.bolt.connection_keep_alive_for_requests: The type of messages to enable keep-alive messages for (ALL, STREAMING or OFF).
-
server.bolt.connection_keep_alive_probes: The total number of probes to be missed before a connection is considered stale. The minimum for this value is 1.
-
server.bolt.connection_keep_alive_streaming_scheduling_interval: The interval between every scheduled keep-alive check on all connections with active queries.
-
server.bolt.enabled: Enable the bolt connector.
-
server.bolt.listen_address: Address the connector should bind to.
-
server.bolt.ocsp_stapling_enabled: Enable server OCSP stapling for bolt and http connectors.
-
server.bolt.thread_pool_keep_alive: The maximum time an idle thread in the thread pool bound to this connector will wait for new tasks.
-
server.bolt.thread_pool_max_size: The maximum number of threads allowed in the thread pool bound to this connector.
-
server.bolt.thread_pool_min_size: The number of threads to keep in the thread pool bound to this connector, even if they are idle.
-
server.bolt.tls_level: Encryption level to require this connector to use.
-
server.cluster.advertised_address: Enterprise onlyAdvertised hostname/IP address and port for the transaction shipping server.
-
server.cluster.catchup.connect_randomly_to_server_group: Enterprise onlyComma separated list of groups to be used by the connect-randomly-to-server-group selection strategy.
-
server.cluster.catchup.upstream_strategy: Enterprise onlyAn ordered list in descending preference of the strategy which secondaries use to choose the upstream server from which to pull transactional updates.
-
server.cluster.catchup.user_defined_upstream_strategy: Enterprise onlyConfiguration of a user-defined upstream selection strategy.
-
server.cluster.listen_address: Enterprise onlyNetwork interface and port for the transaction shipping server to listen on.
-
server.cluster.network.native_transport_enabled: Enterprise onlyUse native transport if available.
-
server.cluster.raft.advertised_address: Enterprise onlyAdvertised hostname/IP address and port for the RAFT server.
-
server.cluster.raft.listen_address: Enterprise onlyNetwork interface and port for the RAFT server to listen on.
-
server.cluster.system_database_mode: Enterprise onlyUsers must manually specify the mode for the system database on each instance.
-
server.config.strict_validation.enabled: A strict configuration validation will prevent the database from starting up if unknown configuration options are specified in the neo4j settings namespace (such as dbms., cypher., etc) or if settings are declared multiple times.
-
server.databases.default_to_read_only: Whether or not databases on this instance are read-only by default.
-
server.databases.read_only: List of databases for which to prevent write queries.
-
server.databases.writable: List of databases for which to allow write queries.
-
server.db.query_cache_size: The number of cached Cypher query execution plans per database.
-
server.default_advertised_address: Default hostname or IP address the server uses to advertise itself.
-
server.default_listen_address: Default network interface to listen for incoming connections.
-
server.directories.cluster_state: Enterprise onlyDirectory to hold cluster state including Raft log.
-
server.directories.data: Path of the data directory.
-
server.directories.dumps.root: Root location where Neo4j will store database dumps optionally produced when dropping said databases.
-
server.directories.import: Sets the root directory for file URLs used with the Cypher LOAD CSV clause.
-
server.directories.lib: Path of the lib directory.
-
server.directories.licenses: Path of the licenses directory.
-
server.directories.logs: Path of the logs directory.
-
server.directories.metrics: Enterprise onlyThe target location of the CSV files: a path to a directory wherein a CSV file per reported field will be written.
-
server.directories.neo4j_home: Root relative to which directory settings are resolved.
-
server.directories.plugins: Location of the database plugin directory.
-
server.directories.run: Path of the run directory.
-
server.directories.script.root: Root location where Neo4j will store scripts for configured databases.
-
server.directories.transaction.logs.root: Root location where Neo4j will store transaction logs for configured databases.
-
server.discovery.advertised_address: Enterprise onlyAdvertised cluster member discovery management communication.
-
server.discovery.listen_address: Enterprise onlyHost and port to bind the cluster member discovery management communication.
-
server.dynamic.setting.allowlist: Enterprise onlyA list of setting name patterns (comma separated) that are allowed to be dynamically changed.
-
server.groups: Enterprise onlyA list of tag names for the server used when configuring load balancing and replication policies.
-
server.http.advertised_address: Advertised address for this connector.
-
server.http.enabled: Enable the http connector.
-
server.http.listen_address: Address the connector should bind to.
-
server.http_enabled_modules: Defines the set of modules loaded into the Neo4j web server.
-
server.https.advertised_address: Advertised address for this connector.
-
server.https.enabled: Enable the https connector.
-
server.https.listen_address: Address the connector should bind to.
-
server.jvm.additional: Additional JVM arguments.
-
server.logs.config: Path to the logging configuration for debug, query, http and security logs.
-
server.logs.debug.enabled: Enable the debug log.
-
server.logs.gc.enabled: Enable GC Logging.
-
server.logs.gc.options: GC Logging Options.
-
server.logs.gc.rotation.keep_number: Number of GC logs to keep.
-
server.logs.gc.rotation.size: Size of each GC log that is kept.
-
server.logs.user.config: Path to the logging configuration of user logs.
-
server.max_databases: Enterprise only. The maximum number of databases. This setting will be deprecated in favour of dbms.max_databases in a future version.
-
server.memory.heap.initial_size: Initial heap size.
-
server.memory.heap.max_size: Maximum heap size.
-
server.memory.off_heap.block_cache_size: Defines the size of the off-heap memory blocks cache.
-
server.memory.off_heap.max_cacheable_block_size: Defines the maximum size of an off-heap memory block that can be cached to speed up allocations.
-
server.memory.off_heap.max_size: The maximum amount of off-heap memory that can be used to store transaction state data; it’s a total amount of memory shared across all active transactions.
-
server.memory.pagecache.directio: Use direct I/O for page cache.
-
server.memory.pagecache.flush.buffer.enabled: Page cache can be configured to use a temporal buffer for flushing purposes.
-
server.memory.pagecache.flush.buffer.size_in_pages: Page cache can be configured to use a temporal buffer for flushing purposes.
-
server.memory.pagecache.scan.prefetchers: The maximum number of worker threads to use for pre-fetching data when doing sequential scans.
-
server.memory.pagecache.size: The amount of memory to use for mapping the store files.
-
server.metrics.csv.enabled: Enterprise only. Set to true to enable exporting metrics to CSV files.
-
server.metrics.csv.interval: Enterprise onlyThe reporting interval for the CSV files.
-
server.metrics.csv.rotation.compression: Enterprise onlyDecides what compression to use for the csv history files.
-
server.metrics.csv.rotation.keep_number: Enterprise onlyMaximum number of history files for the csv files.
-
server.metrics.csv.rotation.size: Enterprise onlyThe file size in bytes at which the csv files will auto-rotate.
-
server.metrics.enabled: Enterprise onlyEnable metrics.
-
server.metrics.filter: Enterprise onlySpecifies which metrics should be enabled by using a comma separated list of globbing patterns.
-
server.metrics.graphite.enabled: Enterprise only. Set to true to enable exporting metrics to Graphite.
-
server.metrics.graphite.interval: Enterprise onlyThe reporting interval for Graphite.
-
server.metrics.graphite.server: Enterprise onlyThe hostname or IP address of the Graphite server.
-
server.metrics.jmx.enabled: Enterprise only. Set to true to enable the JMX metrics endpoint.
-
server.metrics.prefix: Enterprise onlyA common prefix for the reported metrics field names.
-
server.metrics.prometheus.enabled: Enterprise only. Set to true to enable the Prometheus endpoint.
-
server.metrics.prometheus.endpoint: Enterprise onlyThe hostname and port to use as Prometheus endpoint.
-
server.panic.shutdown_on_panic: Enterprise only. Whether the Neo4j process should shut down or continue running when a Database Management System panic (an irrecoverable error) occurs.
-
server.routing.advertised_address: Enterprise onlyThe advertised address for the intra-cluster routing connector.
-
server.routing.listen_address: The address the routing connector should bind to.
-
server.threads.worker_count: Number of Neo4j worker threads.
-
server.unmanaged_extension_classes: Comma-separated list of <classname>=<mount point> for unmanaged extensions.
-
server.windows_service_name: Name of the Windows Service managing Neo4j when installed using neo4j install-service.
Description |
Enterprise onlyConfigure the policy for outgoing Neo4j Browser connections. |
Valid values |
browser.allow_outgoing_connections, a boolean |
Default value |
|
Description |
Enterprise onlyConfigure the Neo4j Browser to time out logged in users after this idle period. Setting this to 0 indicates no limit. |
Valid values |
browser.credential_timeout, a duration (Valid units are: |
Default value |
|
Description |
Commands to be run when Neo4j Browser successfully connects to this server. Separate multiple commands with semi-colon. |
Valid values |
browser.post_connect_cmd, a string |
Default value |
Description |
Whitelist of hosts for the Neo4j Browser to be allowed to fetch content from. |
Valid values |
browser.remote_content_hostname_whitelist, a string |
Default value |
|
Description |
Enterprise onlyConfigure the Neo4j Browser to store or not store user credentials. |
Valid values |
browser.retain_connection_credentials, a boolean |
Default value |
|
Description |
Enterprise onlyConfigure the Neo4j Browser to store or not store user editor history. |
Valid values |
browser.retain_editor_history, a boolean |
Default value |
|
Description |
Configure client applications such as Browser and Bloom to send Product Analytics data. |
Valid values |
client.allow_telemetry, a boolean |
Default value |
|
Description |
Configures the general policy for when check-points should occur. The default policy is the 'periodic' check-point policy, as specified by the 'db.checkpoint.interval.tx' and 'db.checkpoint.interval.time' settings. The Neo4j Enterprise Edition provides two alternative policies: The first is the 'continuous' check-point policy, which will ignore those settings and run the check-point process all the time. The second is the 'volumetric' check-point policy, which makes a best-effort at check-pointing often enough so that the database doesn’t get too far behind on deleting old transaction logs in accordance with the 'db.tx_log.rotation.retention_policy' setting. |
Valid values |
db.checkpoint, one of [PERIODIC, CONTINUOUS, VOLUME, VOLUMETRIC] |
Default value |
|
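As a hedged sketch, a neo4j.conf fragment that opts into the Enterprise-only volumetric policy and couples it to a transaction-log retention window might look like this (the values are illustrative, not recommendations):

# Illustrative: volumetric check-pointing driven by transaction log retention
db.checkpoint=VOLUMETRIC
db.tx_log.rotation.retention_policy=2 days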
Description |
Configures the time interval between check-points. The database will not check-point more often than this (unless check pointing is triggered by a different event), but might check-point less often than this interval, if performing a check-point takes longer time than the configured interval. A check-point is a point in the transaction logs, which recovery would start from. Longer check-point intervals typically mean that recovery will take longer to complete in case of a crash. On the other hand, a longer check-point interval can also reduce the I/O load that the database places on the system, as each check-point implies a flushing and forcing of all the store files. |
Valid values |
db.checkpoint.interval.time, a duration (Valid units are: |
Default value |
|
Description |
Configures the transaction interval between check-points. The database will not check-point more often than this (unless check pointing is triggered by a different event), but might check-point less often than this interval, if performing a check-point takes longer time than the configured interval. A check-point is a point in the transaction logs, which recovery would start from. Longer check-point intervals typically mean that recovery will take longer to complete in case of a crash. On the other hand, a longer check-point interval can also reduce the I/O load that the database places on the system, as each check-point implies a flushing and forcing of all the store files. The default is '100000' for a check-point every 100000 transactions. |
Valid values |
db.checkpoint.interval.tx, an integer which is minimum |
Default value |
|
Description |
Configures the volume of transaction logs between check-points. The database will not check-point more often than this (unless check pointing is triggered by a different event), but might check-point less often than this interval, if performing a check-point takes longer time than the configured interval. A check-point is a point in the transaction logs, which recovery would start from. Longer check-point intervals typically mean that recovery will take longer to complete in case of a crash. On the other hand, a longer check-point interval can also reduce the I/O load that the database places on the system, as each check-point implies a flushing and forcing of all the store files. |
Valid values |
db.checkpoint.interval.volume, a byte size (valid multipliers are |
Default value |
|
Description |
Limit the number of IOs the background checkpoint process will consume per second. This setting is advisory, is ignored in Neo4j Community Edition, and is followed to best effort in Enterprise Edition. An IO is in this case a 8 KiB (mostly sequential) write. Limiting the write IO in this way will leave more bandwidth in the IO subsystem to service random-read IOs, which is important for the response time of queries when the database cannot fit entirely in memory. The only drawback of this setting is that longer checkpoint times may lead to slightly longer recovery times in case of a database or system crash. A lower number means lower IO pressure, and consequently longer checkpoint times. Set this to -1 to disable the IOPS limit and remove the limitation entirely; this will let the checkpointer flush data as fast as the hardware will go. Removing the setting, or commenting it out, will set the default value of 600. |
Valid values |
db.checkpoint.iops.limit, an integer |
Dynamic |
true |
Default value |
|
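For example, on fast storage you might raise the limit, or remove it entirely with -1 as described above; the figure below is purely illustrative:

# Illustrative: allow the checkpointer more IO on fast disks
db.checkpoint.iops.limit=2000
# Or remove the limit entirely:
# db.checkpoint.iops.limit=-1

Because the setting is marked Dynamic, it can also be adjusted at runtime without a restart.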
Description |
Enterprise onlyInterval of pulling updates from cores. |
Valid values |
db.cluster.catchup.pull_interval, a duration (Valid units are: |
Default value |
|
Description |
Enterprise onlyThe maximum number of bytes in the apply buffer. This parameter limits the amount of memory that can be consumed by the apply buffer. If the bytes limit is reached, buffer size will be limited even if max_entries is not exceeded. |
Valid values |
db.cluster.raft.apply.buffer.max_bytes, a byte size (valid multipliers are |
Default value |
|
Description |
Enterprise onlyThe maximum number of entries in the raft log entry prefetch buffer. |
Valid values |
db.cluster.raft.apply.buffer.max_entries, an integer |
Default value |
|
Description |
Enterprise onlyLargest batch processed by RAFT in bytes. |
Valid values |
db.cluster.raft.in_queue.batch.max_bytes, a byte size (valid multipliers are |
Default value |
|
Description |
Enterprise onlyMaximum number of bytes in the RAFT in-queue. |
Valid values |
db.cluster.raft.in_queue.max_bytes, a byte size (valid multipliers are |
Default value |
|
Description |
Enterprise onlyThe name of a server_group whose members should be prioritized as leaders. This does not guarantee that members of this group will be leader at all times, but the cluster will attempt to transfer leadership to such a member when possible. If a database is specified using |
Valid values |
db.cluster.raft.leader_transfer.priority_group, a string identifying a Server Tag |
Default value |
Description |
Enterprise onlyRAFT log pruning strategy that determines which logs are to be pruned. Neo4j only prunes log entries up to the last applied index, which guarantees that logs are only marked for pruning once the transactions within are safely copied over to the local transaction logs and safely committed by a majority of cluster members. Possible values are a byte size or a number of transactions (e.g., 200K txs). |
Valid values |
db.cluster.raft.log.prune_strategy, a string |
Default value |
|
Description |
Enterprise onlyThe maximum number of bytes in the in-flight cache. This parameter limits the amount of memory that can be consumed by cache. If the bytes limit is reached, cache size will be limited even if max_entries is not exceeded. |
Valid values |
db.cluster.raft.log_shipping.buffer.max_bytes, a byte size (valid multipliers are |
Default value |
|
Description |
Enterprise onlyThe maximum number of entries in the in-flight cache. Increasing size will require more memory but might improve performance in high load situations. |
Valid values |
db.cluster.raft.log_shipping.buffer.max_entries, an integer |
Default value |
|
Description |
Allows the enabling or disabling of the file watcher service. This is an auxiliary service but should be left enabled in almost all cases. |
Valid values |
db.filewatcher.enabled, a boolean |
Default value |
|
Description |
Database format. This is the format that will be used for new databases. Valid values are |
Valid values |
db.format, a string |
Dynamic |
true |
Default value |
|
Description |
The size of the internal buffer in bytes used by |
Valid values |
db.import.csv.buffer_size, a long which is minimum |
Default value |
|
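If LOAD CSV rejects unusually wide rows or large quoted fields, enlarging this buffer is the usual remedy; the 4 MiB value below is an illustrative example, not a recommendation:

# Illustrative: 4 MiB parser buffer for LOAD CSV
db.import.csv.buffer_size=4194304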
Description |
Selects whether to conform to the standard https://tools.ietf.org/html/rfc4180 for interpreting escaped quotation characters in CSV files loaded using |
Valid values |
db.import.csv.legacy_quote_escaping, a boolean |
Default value |
|
Description |
The name of the analyzer that the fulltext indexes should use by default. |
Valid values |
db.index.fulltext.default_analyzer, a string |
Default value |
|
Description |
Whether fulltext indexes should be eventually consistent by default. |
Valid values |
db.index.fulltext.eventually_consistent, a boolean |
Default value |
|
Description |
The eventually_consistent mode of the fulltext indexes works by queueing up index updates to be applied later in a background thread. This setting sets an upper bound on how many index updates are allowed to be in this queue at any one point in time. When it is reached, the commit process will slow down and wait for the index update applier thread to make some more room in the queue. |
Valid values |
db.index.fulltext.eventually_consistent_index_update_queue_max_length, an integer which is in the range |
Default value |
|
Description |
Enable or disable background index sampling. |
Valid values |
db.index_sampling.background_enabled, a boolean |
Default value |
|
Description |
Index sampling chunk size limit. |
Valid values |
db.index_sampling.sample_size_limit, an integer which is in the range |
Default value |
|
Description |
Percentage of index updates of total index size required before sampling of a given index is triggered. |
Valid values |
db.index_sampling.update_percentage, an integer which is minimum |
Default value |
|
Description |
The maximum time interval within which lock should be acquired. Zero (default) means timeout is disabled. |
Valid values |
db.lock.acquisition.timeout, a duration (Valid units are: |
Dynamic |
true |
Default value |
|
Description |
Log query text and parameters without obfuscating passwords. This allows queries to be logged earlier before parsing starts. |
Valid values |
db.logs.query.early_raw_logging_enabled, a boolean |
Dynamic |
true |
Default value |
|
Description |
Log executed queries. Valid values are OFF, INFO, or VERBOSE. Log entries are written to the query log. This feature is available in the Neo4j Enterprise Edition. |
Valid values |
db.logs.query.enabled, one of [OFF, INFO, VERBOSE] |
Dynamic |
true |
Default value |
|
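A hedged example of a typical query-logging setup combining this setting with the threshold, parameter, and obfuscation settings described elsewhere on this page (the values are illustrative):

# Illustrative query logging: completed queries slower than 1 second,
# with parameters logged but literals obfuscated
db.logs.query.enabled=INFO
db.logs.query.threshold=1s
db.logs.query.parameter_logging_enabled=true
db.logs.query.obfuscate_literals=true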
Description |
Sets a maximum character length use for each parameter in the log. This only takes effect if |
Valid values |
db.logs.query.max_parameter_length, an integer |
Dynamic |
true |
Default value |
|
Description |
Obfuscates all literals of the query before writing to the log. Note that node labels, relationship types and map property keys are still shown. Changing the setting will not affect queries that are cached. So, if you want the switch to have immediate effect, you must also call |
Valid values |
db.logs.query.obfuscate_literals, a boolean |
Dynamic |
true |
Default value |
|
Description |
Log parameters for the executed queries being logged. |
Valid values |
db.logs.query.parameter_logging_enabled, a boolean |
Dynamic |
true |
Default value |
|
Description |
Log query plan description table, useful for debugging purposes. |
Valid values |
db.logs.query.plan_description_enabled, a boolean |
Dynamic |
true |
Default value |
|
Description |
If the execution of a query takes more time than this threshold, the query is logged once completed - provided query logging is set to INFO. Defaults to 0 seconds, that is, all queries are logged. |
Valid values |
db.logs.query.threshold, a duration (Valid units are: |
Dynamic |
true |
Default value |
|
Description |
Log the start and end of a transaction. Valid values are 'OFF', 'INFO', or 'VERBOSE'. OFF: no logging. INFO: log start and end of transactions that take longer than the configured threshold, db.logs.query.transaction.threshold. VERBOSE: log start and end of all transactions. Log entries are written to the query log. This feature is available in the Neo4j Enterprise Edition. |
Valid values |
db.logs.query.transaction.enabled, one of [OFF, INFO, VERBOSE] |
Dynamic |
true |
Default value |
|
Description |
If the transaction is open for longer than this threshold, the transaction is logged once completed - provided transaction logging (db.logs.query.transaction.enabled) is set to INFO. |
Valid values |
db.logs.query.transaction.threshold, a duration (Valid units are: |
Dynamic |
true |
Default value |
|
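For instance, to log only transactions that stay open longer than ten seconds (illustrative values):

# Illustrative transaction logging
db.logs.query.transaction.enabled=INFO
db.logs.query.transaction.threshold=10s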
Description |
The page cache can be configured to perform usage sampling of loaded pages, which can be used to construct an active load profile. According to that profile, pages can be reloaded on restart, replication, etc. This setting allows disabling that behavior. This feature is available in Neo4j Enterprise Edition. |
Valid values |
db.memory.pagecache.warmup.enable, a boolean |
Default value |
|
Description |
Page cache warmup can be configured to prefetch files, preferably when cache size is bigger than store size. Files to be prefetched can be filtered by 'dbms.memory.pagecache.warmup.preload.allowlist'. Enabling this disables warmup by profile. |
Valid values |
db.memory.pagecache.warmup.preload, a boolean |
Default value |
|
Description |
Page cache warmup prefetch file allowlist regex. By default matches all files. |
Valid values |
db.memory.pagecache.warmup.preload.allowlist, a string |
Default value |
|
Description |
The profiling frequency for the page cache. Accurate profiles allow the page cache to do active warmup after a restart, reducing the mean time to performance. This feature is available in Neo4j Enterprise Edition. |
Valid values |
db.memory.pagecache.warmup.profile.interval, a duration (Valid units are: |
Default value |
|
Description |
Limit the amount of memory that a single transaction can consume, in bytes (or kilobytes with the 'k' suffix, megabytes with 'm' and gigabytes with 'g'). Zero means 'largest possible value'. |
Valid values |
db.memory.transaction.max, a byte size (valid multipliers are |
Dynamic |
true |
Default value |
|
Description |
Limit the amount of memory that all transactions in one database can consume, in bytes (or kilobytes with the 'k' suffix, megabytes with 'm' and gigabytes with 'g'). Zero means 'unlimited'. |
Valid values |
db.memory.transaction.total.max, a byte size (valid multipliers are |
Dynamic |
true |
Default value |
|
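A sketch of how the per-transaction and per-database limits can be combined as guard-rails; the sizes below are placeholders and should be tuned to the actual heap and workload:

# Illustrative memory guard-rails
db.memory.transaction.max=1g
db.memory.transaction.total.max=4g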
Description |
If |
Valid values |
db.recovery.fail_on_missing_files, a boolean |
Default value |
|
Description |
Relationship count threshold for considering a node to be dense. |
Valid values |
db.relationship_grouping_threshold, an integer which is minimum |
Default value |
|
Description |
The maximum amount of time to wait for running transactions to complete before allowing initiated database shutdown to continue. |
Valid values |
db.shutdown_transaction_end_timeout, a duration (Valid units are: |
Default value |
|
Description |
Specify if Neo4j should try to preallocate store files as they grow. |
Valid values |
db.store.files.preallocate, a boolean |
Default value |
|
Description |
Database timezone for temporal functions. All Time and DateTime values that are created without an explicit timezone will use this configured default timezone. |
Valid values |
db.temporal.timezone, a string describing a timezone, either described by offset (e.g. |
Default value |
|
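For example, assuming a named zone ID or a fixed offset is supplied (the zone below is only an illustration):

# Illustrative: make temporal functions default to a named timezone
db.temporal.timezone=Europe/Stockholm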
Description |
Enables or disables tracking of how much time a query spends actively executing on the CPU. Calling |
Valid values |
db.track_query_cpu_time, a boolean |
Dynamic |
true |
Default value |
|
Description |
The maximum amount of time to wait for the database state represented by the bookmark. |
Valid values |
db.transaction.bookmark_ready_timeout, a duration (Valid units are: |
Dynamic |
true |
Default value |
|
Description |
The maximum number of concurrently running transactions. If set to 0, limit is disabled. |
Valid values |
db.transaction.concurrent.maximum, an integer |
Dynamic |
true |
Default value |
|
Description |
Configures the time interval between transaction monitor checks. Determines how often monitor thread will check transaction for timeout. |
Valid values |
db.transaction.monitor.check.interval, a duration (Valid units are: |
Default value |
|
Description |
Transaction sampling percentage. |
Valid values |
db.transaction.sampling.percentage, an integer which is in the range |
Dynamic |
true |
Default value |
|
Description |
The maximum time interval of a transaction within which it should be completed. |
Valid values |
db.transaction.timeout, a duration (Valid units are: |
Dynamic |
true |
Default value |
|
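For example, to abort transactions that run longer than 30 seconds (an illustrative value); since the setting is Dynamic, it can also be changed at runtime without a restart:

# Illustrative global transaction timeout
db.transaction.timeout=30s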
Description |
Transaction creation tracing level. |
Valid values |
db.transaction.tracing.level, one of [DISABLED, SAMPLE, ALL] |
Dynamic |
true |
Default value |
|
Description |
On serialization, transaction logs are temporarily stored in a byte buffer that is flushed at the end of the transaction, or whenever the buffer becomes full. |
Valid values |
db.tx_log.buffer.size, a long which is minimum |
Default value |
|
Description |
Specify if Neo4j should try to preallocate the logical log file in advance. This optimizes filesystem usage by ensuring there is room to accommodate newly generated files and avoids file-level fragmentation. |
Valid values |
db.tx_log.preallocate, a boolean |
Dynamic |
true |
Default value |
|
Description |
Tell Neo4j how long logical transaction logs should be kept to backup the database. For example, "10 days" will prune logical logs that only contain transactions older than 10 days. Alternatively, "100k txs" will keep the 100k latest transactions from each database and prune any older transactions. |
Valid values |
db.tx_log.rotation.retention_policy, a string which matches the pattern |
Dynamic |
true |
Default value |
|
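The two example forms from the description translate directly into neo4j.conf; pick one (illustrative):

# Prune logs whose transactions are older than 10 days:
db.tx_log.rotation.retention_policy=10 days
# Or keep only the latest 100k transactions per database:
# db.tx_log.rotation.retention_policy=100k txs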
Description |
Specifies at which file size the logical log will auto-rotate. Minimum accepted value is 128 KiB. |
Valid values |
db.tx_log.rotation.size, a byte size (valid multipliers are |
Dynamic |
true |
Default value |
|
Description |
Defines whether memory for transaction state should be allocated on- or off-heap. Note that for small transactions you can gain up to 25% write speed by setting it to |
Valid values |
db.tx_state.memory_allocation, one of [ON_HEAP, OFF_HEAP] |
Default value |
|
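A hedged sketch of moving transaction state off-heap together with an explicit off-heap cap (the 2 GiB cap is illustrative):

# Illustrative: off-heap transaction state with a 2 GiB cap
db.tx_state.memory_allocation=OFF_HEAP
server.memory.off_heap.max_size=2g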
Description |
Enterprise onlyThe catch up protocol times out if the given duration elapses with no network activity. Every message received by the client from the server extends the time out duration. |
Valid values |
dbms.cluster.catchup.client_inactivity_timeout, a duration (Valid units are: |
Default value |
|
Description |
Enterprise onlyA comma-separated list of endpoints which a server should contact in order to discover other cluster members. |
Valid values |
dbms.cluster.discovery.endpoints, a ',' separated list with elements of type 'a socket address in the format 'hostname:port', 'hostname' or ':port''. |
Description |
Enterprise onlyThe level of middleware logging. |
Valid values |
dbms.cluster.discovery.log_level, one of [DEBUG, INFO, WARN, ERROR, NONE] |
Default value |
|
Description |
Enterprise onlyConfigure the discovery type used for cluster name resolution. |
Valid values |
dbms.cluster.discovery.type, one of [DNS, LIST, SRV, K8S] which may require different settings depending on the discovery type: |
Default value |
|
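As an Enterprise-only sketch, LIST discovery pairs with an explicit endpoint list; the hostnames and the 5000 port below are placeholders, not values taken from this page:

# Illustrative cluster discovery via an explicit member list (Enterprise)
dbms.cluster.discovery.type=LIST
dbms.cluster.discovery.endpoints=server01:5000,server02:5000,server03:5000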
Description |
Enterprise only. Minimum number of machines initially required to form a clustered DBMS. The cluster is considered formed when at least this many members have discovered each other, bound together and bootstrapped a highly available system database. As a result, at least this many of the cluster’s initial machines must have 'server.cluster.system_database_mode' set to 'PRIMARY'. NOTE: If 'dbms.cluster.discovery.type' is set to 'LIST' and 'dbms.cluster.discovery.endpoints' is empty then the user is assumed to be deploying a standalone DBMS, and the value of this setting is ignored. |
Valid values |
dbms.cluster.minimum_initial_system_primaries_count, an integer which is minimum |
Default value |
|
Description |
Enterprise onlyTime out for protocol negotiation handshake. |
Valid values |
dbms.cluster.network.handshake_timeout, a duration (Valid units are: |
Default value |
|
Description |
Enterprise onlyMaximum chunk size allowable across network by clustering machinery. |
Valid values |
dbms.cluster.network.max_chunk_size, an integer which is in the range |
Default value |
|
Description |
Enterprise onlyNetwork compression algorithms that this instance will allow in negotiation as a comma-separated list. Listed in descending order of preference for incoming connections. An empty list implies no compression. For outgoing connections this merely specifies the allowed set of algorithms and the preference of the remote peer will be used for making the decision. Allowable values: [Gzip, Snappy, Snappy_validating, LZ4, LZ4_high_compression, LZ_validating, LZ4_high_compression_validating] |
Valid values |
dbms.cluster.network.supported_compression_algos, a ',' separated list with elements of type 'a string'. |
Default value |
Description |
Enterprise onlyThe time allowed for a database on a Neo4j server to either join a cluster or form a new cluster with the other Neo4j Servers provided by |
Valid values |
dbms.cluster.raft.binding_timeout, a duration (Valid units are: |
Default value |
|
Description |
Enterprise onlyThe maximum number of TCP channels between two nodes to operate the raft protocol. Each database gets allocated one channel, but a single channel can be used by more than one database. |
Valid values |
dbms.cluster.raft.client.max_channels, an integer |
Default value |
|
Description |
Enterprise onlyThe rate at which leader elections happen. Note that due to election conflicts it might take several attempts to find a leader. The window should be significantly larger than typical communication delays to make conflicts unlikely. |
Valid values |
dbms.cluster.raft.election_failure_detection_window, a duration-range <min-max> (Valid units are: |
Default value |
|
Description |
Enterprise onlyThe time window within which the loss of the leader is detected and the first re-election attempt is held. The window should be significantly larger than typical communication delays to make conflicts unlikely. |
Valid values |
dbms.cluster.raft.leader_failure_detection_window, a duration-range <min-max> (Valid units are: |
Default value |
|
Description |
Enterprise onlyWhich strategy to use when transferring database leaderships around a cluster. This can be one of |
Valid values |
dbms.cluster.raft.leader_transfer.balancing_strategy, one of [NO_BALANCING, EQUAL_BALANCING] |
Default value |
|
Description |
Enterprise onlyRAFT log pruning frequency. |
Valid values |
dbms.cluster.raft.log.pruning_frequency, a duration (Valid units are: |
Default value |
|
Description |
Enterprise onlyRAFT log reader pool size. |
Valid values |
dbms.cluster.raft.log.reader_pool_size, an integer |
Default value |
|
Description |
Enterprise onlyRAFT log rotation size. |
Valid values |
dbms.cluster.raft.log.rotation_size, a byte size (valid multipliers are |
Default value |
|
Description |
Enterprise onlyMaximum amount of lag accepted for a new follower to join the Raft group. |
Valid values |
dbms.cluster.raft.membership.join_max_lag, a duration (Valid units are: |
Default value |
|
Description |
Enterprise onlyTime out for a new member to catch up. |
Valid values |
dbms.cluster.raft.membership.join_timeout, a duration (Valid units are: |
Default value |
|
Description |
Enterprise onlyMaximum retry time per request during store copy. Regular store files and indexes are downloaded in separate requests during store copy. This configures the maximum time during which a failed request will be retried. |
Valid values |
dbms.cluster.store_copy.max_retry_time_per_request, a duration (Valid units are: |
Default value |
|
Description |
This setting is associated with performance optimization. Set this to |
Valid values |
dbms.cypher.forbid_exhaustive_shortestpath, a boolean |
Default value |
|
Description |
This setting is associated with performance optimization. The shortest path algorithm does not work when the start and end nodes are the same. With this setting set to |
Valid values |
dbms.cypher.forbid_shortestpath_common_nodes, a boolean |
Default value |
|
Description |
Set this to specify the behavior when Cypher planner or runtime hints cannot be fulfilled. If true, then non-conformance will result in an error, otherwise only a warning is generated. |
Valid values |
dbms.cypher.hints_error, a boolean |
Default value |
|
Description |
Set this to change the behavior for Cypher create relationship when the start or end node is missing. By default this fails the query and stops execution, but by setting this flag the create operation is simply not performed and execution continues. |
Valid values |
dbms.cypher.lenient_create_relationship, a boolean |
Default value |
|
Description |
The minimum time between possible cypher query replanning events. After this time, the graph statistics will be evaluated, and if they have changed by more than the value set by dbms.cypher.statistics_divergence_threshold, the query will be replanned. If the statistics have not changed sufficiently, the same interval will need to pass before the statistics will be evaluated again. Each time they are evaluated, the divergence threshold will be reduced slightly until it reaches 10% after 7h, so that even moderately changing databases will see query replanning after a sufficiently long time interval. |
Valid values |
dbms.cypher.min_replan_interval, a duration (Valid units are: |
Default value |
|
Description |
Set this to specify the default planner for the default language version. |
Valid values |
dbms.cypher.planner, one of [DEFAULT, COST] |
Default value |
|
Description |
If set to |
Valid values |
dbms.cypher.render_plan_description, a boolean |
Dynamic |
true |
Default value |
|
Description |
The threshold for statistics above which a plan is considered stale. If any of the underlying statistics used to create the plan have changed more than this value, the plan is considered stale and will be replanned. The change is measured relative to the statistics that were used when the plan was created, and replanning only occurs once the minimum replan interval defined by dbms.cypher.min_replan_interval has passed. |
Valid values |
dbms.cypher.statistics_divergence_threshold, a double which is in the range |
Default value |
|
Description |
Enterprise onlyDatabases may be created from an existing 'seed' (a database backup or dump) stored at some source URI. Different types of seed source are supported by different implementations of |
Valid values |
dbms.databases.seed_from_uri_providers, a ',' separated list with elements of type 'a string'. |
Default value |
|
Description |
Database timezone. Among other things, this setting influences the monitoring procedures. |
Valid values |
dbms.db.timezone, one of [UTC, SYSTEM] |
Default value |
|
Description |
Enterprise onlyAddress for Kubernetes API. |
Valid values |
dbms.kubernetes.address, a socket address in the format 'hostname:port', 'hostname' or ':port' |
Default value |
|
Description |
Enterprise onlyFile location of CA certificate for Kubernetes API. |
Valid values |
dbms.kubernetes.ca_crt, a path |
Default value |
|
Description |
Enterprise onlyKubernetes cluster domain. |
Valid values |
dbms.kubernetes.cluster_domain, a string |
Default value |
|
Description |
Enterprise onlyLabelSelector for Kubernetes API. |
Valid values |
dbms.kubernetes.label_selector, a string |
Description |
Enterprise onlyFile location of namespace for Kubernetes API. |
Valid values |
dbms.kubernetes.namespace, a path |
Default value |
|
Description |
Enterprise onlyService port name for discovery for Kubernetes API. |
Valid values |
dbms.kubernetes.service_port_name, a string |
Description |
Enterprise onlyFile location of token for Kubernetes API. |
Valid values |
dbms.kubernetes.token, a path |
Default value |
|
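A hypothetical Kubernetes-based discovery configuration could combine these settings as follows (the label selector and port name are placeholders that must match your Kubernetes manifests):
dbms.cluster.discovery.type=K8S
dbms.kubernetes.label_selector=app=neo4j
dbms.kubernetes.service_port_name=tcp-discovery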
Description |
Enable HTTP request logging. |
Valid values |
dbms.logs.http.enabled, a boolean |
Default value |
|
Description |
Enterprise onlyThe maximum number of databases. |
Valid values |
dbms.max_databases, a long which is minimum |
Default value |
|
Description |
Enable off heap and on heap memory tracking. Should not be set to |
Valid values |
dbms.memory.tracking.enable, a boolean |
Default value |
|
Description |
Limit the amount of memory that all of the running transactions can consume, in bytes (or kilobytes with the 'k' suffix, megabytes with 'm' and gigabytes with 'g'). Zero means 'unlimited'. |
Valid values |
dbms.memory.transaction.total.max, a byte size (valid multipliers are |
Dynamic |
true |
Default value |
|
Description |
Netty SSL provider. |
Valid values |
dbms.netty.ssl.provider, one of [JDK, OPENSSL, OPENSSL_REFCNT] |
Default value |
|
Description |
Always use client side routing (regardless of the default router) for neo4j:// protocol connections to these domains. A comma separated list of domains. Wildcards (*) are supported. |
Valid values |
dbms.routing.client_side.enforce_for_domains, a ',' separated set with elements of type 'a string'. |
Dynamic |
true |
Default value |
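For example, to force client-side routing for a set of domains (the domain names are placeholders):
dbms.routing.client_side.enforce_for_domains=*.example.com,neo4j.example.org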
Description |
Routing strategy for neo4j:// protocol connections.
Default is |
Valid values |
dbms.routing.default_router, one of [SERVER, CLIENT] |
Default value |
|
Description |
Socket connection timeout. A timeout of zero is treated as an infinite timeout and will be bound by the timeout configured on the operating system level. |
Valid values |
dbms.routing.driver.connection.connect_timeout, a duration (Valid units are: |
Default value |
|
Description |
Pooled connections older than this threshold will be closed and removed from the pool. Setting this option to a low value will cause high connection churn and might result in a performance hit. It is recommended to set the maximum lifetime to a value slightly smaller than the one configured in network equipment (load balancers, proxies, firewalls, etc. can also limit the maximum connection lifetime). Zero and negative values result in the lifetime not being checked. |
Valid values |
dbms.routing.driver.connection.max_lifetime, a duration (Valid units are: |
Default value |
|
Description |
Maximum amount of time spent attempting to acquire a connection from the connection pool. This timeout only kicks in when all existing connections are being used and no new connections can be created because the maximum connection pool size has been reached. An error is raised when a connection cannot be acquired within the configured time. Negative values are allowed and result in an unlimited acquisition timeout. A value of 0 is allowed and results in no timeout and an immediate failure when no connection is available. |
Valid values |
dbms.routing.driver.connection.pool.acquisition_timeout, a duration (Valid units are: |
Default value |
|
Description |
Pooled connections that have been idle in the pool for longer than this timeout will be tested before they are used again, to ensure they are still alive. If this option is set too low, an additional network call will be incurred when acquiring a connection, which causes a performance hit. If it is set too high, connections that are no longer alive might be used, which can lead to errors. Hence, this parameter tunes the balance between the likelihood of experiencing connection problems and performance. Normally, this parameter should not need tuning. A value of 0 means connections will always be tested for validity. |
Valid values |
dbms.routing.driver.connection.pool.idle_test, a duration (Valid units are: |
Default value |
|
Description |
Maximum total number of connections to be managed by a connection pool. The limit is enforced for a combination of a host and user. Negative values are allowed and result in an unlimited pool. A value of 0 is not allowed. |
Valid values |
dbms.routing.driver.connection.pool.max_size, an integer |
Default value |
|
Description |
Sets level for driver internal logging. |
Valid values |
dbms.routing.driver.logging.level, one of [DEBUG, INFO, WARN, ERROR, NONE] |
Default value |
|
Description |
Enable server-side routing in clusters using an additional bolt connector. When configured, this allows requests to be forwarded from one cluster member to another, if the requests can’t be satisfied by the first member (e.g. write requests received by a non-leader). |
Valid values |
dbms.routing.enabled, a boolean |
Default value |
|
Description |
Enterprise onlyThe load balancing plugin to use. |
Valid values |
dbms.routing.load_balancing.plugin, a string for which the specified load balancer plugin must exist |
Default value |
|
Description |
Enterprise onlyEnables shuffling of the returned load balancing result. |
Valid values |
dbms.routing.load_balancing.shuffle_enabled, a boolean |
Default value |
|
Description |
Enterprise onlyConfigure if the |
Valid values |
dbms.routing.reads_on_primaries_enabled, a boolean |
Default value |
|
Description |
Enterprise onlyConfigure if the |
Valid values |
dbms.routing.reads_on_writers_enabled, a boolean |
Dynamic |
true |
Default value |
|
Description |
How long callers should cache the response of the routing procedure |
Valid values |
dbms.routing_ttl, a duration (Valid units are: |
Default value |
|
Description |
Determines if Cypher will allow using file URLs when loading data using |
Valid values |
dbms.security.allow_csv_import_from_file_urls, a boolean |
Default value |
|
Description |
Enterprise onlyThe maximum capacity for authentication and authorization caches (respectively). |
Valid values |
dbms.security.auth_cache_max_capacity, an integer |
Default value |
|
Description |
Enterprise onlyThe time to live (TTL) for cached authentication and authorization info when using external auth providers (LDAP or plugin). Setting the TTL to 0 will disable auth caching. Disabling caching while using the LDAP auth provider requires the use of an LDAP system account for resolving authorization information. |
Valid values |
dbms.security.auth_cache_ttl, a duration (Valid units are: |
Default value |
|
Description |
Enterprise onlyEnable time-based eviction of the authentication and authorization info cache for external auth providers (LDAP or plugin). Disabling this setting will make the cache live forever and only be evicted when |
Valid values |
dbms.security.auth_cache_use_ttl, a boolean |
Default value |
|
Description |
Enable auth requirement to access Neo4j. |
Valid values |
dbms.security.auth_enabled, a boolean |
Default value |
|
Description |
Neo4j v5.3The minimum number of characters required in a password. |
Valid values |
dbms.security.auth_minimum_password_length, an integer |
Default value |
|
Description |
The amount of time a user account should be locked after a configured number of unsuccessful authentication attempts. The locked-out user will not be able to log in until the lock period expires, even if correct credentials are provided. Setting this configuration option to a low value is not recommended because it might make it easier for an attacker to brute force the password. |
Valid values |
dbms.security.auth_lock_time, a duration (Valid units are: |
Default value |
|
Description |
The maximum number of unsuccessful authentication attempts before imposing a user lock for the configured amount of time, as defined by |
Valid values |
dbms.security.auth_max_failed_attempts, an integer which is minimum |
Default value |
|
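An illustrative lockout policy combining dbms.security.auth_max_failed_attempts with dbms.security.auth_lock_time (values chosen for the example, not recommendations):
dbms.security.auth_max_failed_attempts=3
dbms.security.auth_lock_time=10m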
Description |
Enterprise onlyA list of security authentication providers containing the users and roles. This can be any of the built-in |
Valid values |
dbms.security.authentication_providers, a ',' separated list with elements of type 'a string'. |
Default value |
|
Description |
Enterprise onlyA list of security authorization providers containing the users and roles. This can be any of the built-in |
Valid values |
dbms.security.authorization_providers, a ',' separated list with elements of type 'a string'. |
Default value |
|
Description |
Enterprise onlyRequire authorization for access to the Causal Clustering status endpoints. |
Valid values |
dbms.security.cluster_status_auth_enabled, a boolean |
Default value |
|
Description |
Value of the Access-Control-Allow-Origin header sent over any HTTP or HTTPS connector. This defaults to '*', which allows broadest compatibility. Note that any URI provided here limits HTTP/HTTPS access to that URI only. |
Valid values |
dbms.security.http_access_control_allow_origin, a string |
Default value |
|
Description |
Defines an allowlist of http paths where Neo4j authentication is not required. |
Valid values |
dbms.security.http_auth_allowlist, a ',' separated list with elements of type 'a string'. |
Default value |
|
Description |
Value of the HTTP Strict-Transport-Security (HSTS) response header. This header tells browsers that a webpage should only be accessed using HTTPS instead of HTTP. It is attached to every HTTPS response. The setting is not set by default, so the 'Strict-Transport-Security' header is not sent. The value is expected to contain directives like 'max-age', 'includeSubDomains' and 'preload'. |
Valid values |
dbms.security.http_strict_transport_security, a string |
Description |
Enterprise onlyName of the 256-bit AES encryption key, which is used for the symmetric encryption. |
Valid values |
dbms.security.key.name, a string |
Dynamic |
true |
Default value |
|
Description |
Enterprise onlyPassword for accessing the keystore holding a 256-bit AES encryption key, which is used for the symmetric encryption. |
Valid values |
dbms.security.keystore.password, a secure string |
Dynamic |
true |
Description |
Enterprise onlyLocation of the keystore holding a 256-bit AES encryption key, which is used for the symmetric encryption of secrets held in the system database. |
Valid values |
dbms.security.keystore.path, a path |
Dynamic |
true |
Description |
Enterprise onlyThe attribute to use when looking up users.
Using this setting requires |
Valid values |
dbms.security.ldap.authentication.attribute, a string which matches the pattern |
Dynamic |
true |
Default value |
|
Description |
Enterprise onlyDetermines if the result of authentication via the LDAP server should be cached or not. Caching is used to limit the number of LDAP requests that have to be made over the network for users that have already been authenticated successfully. A user can be authenticated against an existing cache entry (instead of via an LDAP server) as long as it is alive (see |
Valid values |
dbms.security.ldap.authentication.cache_enabled, a boolean |
Default value |
|
Description |
Enterprise onlyLDAP authentication mechanism. This is one of |
Valid values |
dbms.security.ldap.authentication.mechanism, a string |
Default value |
|
Description |
Enterprise onlyPerform authentication by searching for a unique attribute of a user.
Using this setting requires |
Valid values |
dbms.security.ldap.authentication.search_for_attribute, a boolean |
Default value |
|
Description |
Enterprise onlyLDAP user DN template. An LDAP object is referenced by its distinguished name (DN), and a user DN is an LDAP fully-qualified unique user identifier. This setting is used to generate an LDAP DN that conforms with the LDAP directory’s schema from the user principal that is submitted with the authentication token when logging in. The special token {0} is a placeholder where the user principal will be substituted into the DN string. |
Valid values |
dbms.security.ldap.authentication.user_dn_template, a string which Must be a string containing '{0}' to understand where to insert the runtime authentication principal. |
Dynamic |
true |
Default value |
|
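For example, with a hypothetical directory layout (the DN components are placeholders), the template could look like:
dbms.security.ldap.authentication.user_dn_template=uid={0},ou=users,dc=example,dc=com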
Description |
Enterprise onlyThe LDAP group to which a user must belong to get any access to the system. Set this to restrict access to a subset of LDAP users belonging to a particular group. If this is not set, any user who successfully authenticates via LDAP will have access to the PUBLIC role and any other roles assigned to them via dbms.security.ldap.authorization.group_to_role_mapping. |
Valid values |
dbms.security.ldap.authorization.access_permitted_group, a string |
Dynamic |
true |
Default value |
Description |
Enterprise onlyA list of attribute names on a user object that contains groups to be used for mapping to roles when LDAP authorization is enabled. This setting is ignored when |
Valid values |
dbms.security.ldap.authorization.group_membership_attributes, a ',' separated list with elements of type 'a string'. which Can not be empty |
Dynamic |
true |
Default value |
|
Description |
Enterprise onlyAn authorization mapping from LDAP group names to Neo4j role names. The map should be formatted as a semicolon separated list of key-value pairs, where the key is the LDAP group name and the value is a comma separated list of corresponding role names. For example: group1=role1;group2=role2;group3=role3,role4,role5
You could also use whitespaces and quotes around group names to make this mapping more readable, for example:
dbms.security.ldap.authorization.group_to_role_mapping=\
"cn=Neo4j Read Only,cn=users,dc=example,dc=com" = reader; \
"cn=Neo4j Read-Write,cn=users,dc=example,dc=com" = publisher; \
"cn=Neo4j Schema Manager,cn=users,dc=example,dc=com" = architect; \
"cn=Neo4j Administrator,cn=users,dc=example,dc=com" = admin |
Valid values |
dbms.security.ldap.authorization.group_to_role_mapping, a string which must be semicolon separated list of key-value pairs or empty |
Dynamic |
true |
Default value |
Description |
Enterprise onlyThis setting determines whether multiple LDAP search results will be processed (as is required for the lookup of nested groups). If set to |
Valid values |
dbms.security.ldap.authorization.nested_groups_enabled, a boolean |
Dynamic |
true |
Default value |
|
Description |
Enterprise onlyThe search template which will be used to find the nested groups which the user is a member of. The filter should contain the placeholder token |
Valid values |
dbms.security.ldap.authorization.nested_groups_search_filter, a string |
Dynamic |
true |
Default value |
|
Description |
Enterprise onlyAn LDAP system account password to use for authorization searches when |
Valid values |
dbms.security.ldap.authorization.system_password, a secure string |
Description |
Enterprise onlyAn LDAP system account username to use for authorization searches when |
Valid values |
dbms.security.ldap.authorization.system_username, a string |
Description |
Enterprise onlyPerform LDAP search for authorization info using a system account instead of the user’s own account.
If this is set to |
Valid values |
dbms.security.ldap.authorization.use_system_account, a boolean |
Default value |
|
Description |
Enterprise onlyThe name of the base object or named context to search for user objects when LDAP authorization is enabled. A common case is that this matches the last part of |
Valid values |
dbms.security.ldap.authorization.user_search_base, a string which Can not be empty |
Dynamic |
true |
Default value |
|
Description |
Enterprise onlyThe LDAP search filter to search for a user principal when LDAP authorization is enabled. The filter should contain the placeholder token {0} which will be substituted for the user principal. |
Valid values |
dbms.security.ldap.authorization.user_search_filter, a string |
Dynamic |
true |
Default value |
|
Description |
Enterprise onlyThe timeout for establishing an LDAP connection. If a connection with the LDAP server cannot be established within the given time the attempt is aborted. A value of 0 means to use the network protocol’s (i.e., TCP’s) timeout value. |
Valid values |
dbms.security.ldap.connection_timeout, a duration (Valid units are: |
Default value |
|
Description |
Enterprise onlyURL of LDAP server to use for authentication and authorization. The format of the setting is |
Valid values |
dbms.security.ldap.host, a string |
Default value |
|
Description |
Enterprise onlyThe timeout for an LDAP read request (i.e. search). If the LDAP server does not respond within the given time the request will be aborted. A value of 0 means wait for a response indefinitely. |
Valid values |
dbms.security.ldap.read_timeout, a duration (Valid units are: |
Default value |
|
Description |
Enterprise onlyThe LDAP referral behavior when creating a connection. This is one of |
Valid values |
dbms.security.ldap.referral, a string |
Default value |
|
Description |
Enterprise onlyUse secure communication with the LDAP server using opportunistic TLS. First an initial insecure connection will be made with the LDAP server, and a STARTTLS command will be issued to negotiate an upgrade of the connection to TLS before initiating authentication. |
Valid values |
dbms.security.ldap.use_starttls, a boolean |
Default value |
|
Description |
Enterprise onlySet to log successful authentication events to the security log. If this is set to |
Valid values |
dbms.security.log_successful_authentication, a boolean |
Default value |
|
Description |
Enterprise onlyExpected values of the Audience (aud) claim in the id token. |
Valid values |
dbms.security.oidc.<provider>.audience, a ',' separated list with elements of type 'a string'. which Can not be empty |
Dynamic |
true |
Description |
Enterprise onlyThe OIDC authorization endpoint. If this is not supplied Neo4j will attempt to discover it from the well_known_discovery_uri. |
Valid values |
dbms.security.oidc.<provider>.auth_endpoint, a URI |
Dynamic |
true |
Description |
Enterprise onlyThe OIDC flow to use. This is exposed to clients via the discovery endpoint. Supported values are |
Valid values |
dbms.security.oidc.<provider>.auth_flow, one of [PKCE, IMPLICIT] |
Dynamic |
true |
Default value |
|
Description |
Enterprise onlyOptional additional parameters that the auth endpoint requires. This setting is deprecated; use dbms.security.oidc.<provider>.params instead. The map is a semicolon separated list of key-value pairs. For example: |
Valid values |
dbms.security.oidc.<provider>.auth_params, A simple key value map pattern |
Dynamic |
true |
Default value |
|
Deprecated |
The |
Description |
Enterprise onlyAn authorization mapping from IdP group names to Neo4j role names. The map should be formatted as a semicolon separated list of key-value pairs, where the key is the IdP group name and the value is a comma separated list of corresponding role names. For example: group1=role1;group2=role2;group3=role3,role4,role5
You could also use whitespaces and quotes around group names to make this mapping more readable, for example:
dbms.security.oidc.<provider>.authorization.group_to_role_mapping=\
"Neo4j Read Only" = reader; \
"Neo4j Read-Write" = publisher; \
"Neo4j Schema Manager" = architect; \
"Neo4j Administrator" = admin |
Valid values |
dbms.security.oidc.<provider>.authorization.group_to_role_mapping, a string which must be semicolon separated list of key-value pairs or empty |
Dynamic |
true |
Description |
Enterprise onlyThe claim to use as the list of groups in Neo4j. These could be Neo4j roles directly, or they can be mapped using dbms.security.oidc.<provider>.authorization.group_to_role_mapping. |
Valid values |
dbms.security.oidc.<provider>.claims.groups, a string |
Dynamic |
true |
Description |
Enterprise onlyThe claim to use as the username in Neo4j. This would typically be sub, but in some situations it may be desirable to use something else, such as email. |
Valid values |
dbms.security.oidc.<provider>.claims.username, a string |
Dynamic |
true |
Default value |
|
Description |
Enterprise onlyClient id needed if token contains multiple Audience (aud) claims. |
Valid values |
dbms.security.oidc.<provider>.client_id, a string |
Dynamic |
true |
Description |
Enterprise onlyThe accepted values (all optional) are:
|
Valid values |
dbms.security.oidc.<provider>.config, A simple key value map pattern |
Dynamic |
true |
Default value |
|
Description |
When set to |
Valid values |
dbms.security.logs.oidc.jwt_claims_at_debug_level_enabled, a boolean |
Default value |
|
Description |
Enterprise onlyThe user-facing name of the provider as provided by the discovery endpoint to clients (Bloom, Browser etc.). |
Valid values |
dbms.security.oidc.<provider>.display_name, a string |
Description |
Enterprise onlyWhen turned on, Neo4j gets the groups from the provider user info endpoint. |
Valid values |
dbms.security.oidc.<provider>.get_groups_from_user_info, a boolean |
Dynamic |
true |
Default value |
|
Description |
Enterprise onlyWhen turned on, Neo4j gets the username from the provider user info endpoint. |
Valid values |
dbms.security.oidc.<provider>.get_username_from_user_info, a boolean |
Dynamic |
true |
Default value |
|
Description |
Enterprise onlyThe expected value of the iss claim in the id token. If this is not supplied Neo4j will attempt to discover it from the well_known_discovery_uri. |
Valid values |
dbms.security.oidc.<provider>.issuer, a string |
Dynamic |
true |
Description |
Enterprise onlyThe location of the JWK public key set for the identity provider. If this is not supplied Neo4j will attempt to discover it from the well_known_discovery_uri. |
Valid values |
dbms.security.oidc.<provider>.jwks_uri, a URI |
Dynamic |
true |
Description |
Enterprise onlyThe map is a semicolon separated list of key-value pairs. For example: client_id: the SSO IdP client identifier. response_type: code if auth_flow is PKCE, or token if auth_flow is IMPLICIT. scope: often containing a subset of 'email profile openid groups'. For example: |
Valid values |
dbms.security.oidc.<provider>.params, A simple key value map pattern |
Dynamic |
true |
Default value |
|
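A sketch for a hypothetical provider named 'mysso' using the PKCE flow (the provider name, client id, and scopes are placeholders):
dbms.security.oidc.mysso.auth_flow=PKCE
dbms.security.oidc.mysso.params=client_id=my-client-id;response_type=code;scope=openid profile email groups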
Description |
Enterprise onlyThe OIDC token endpoint. If this is not supplied Neo4j will attempt to discover it from the well_known_discovery_uri. |
Valid values |
dbms.security.oidc.<provider>.token_endpoint, a URI |
Dynamic |
true |
Description |
Enterprise onlyOptional query parameters that the token endpoint requires. The map is a semicolon separated list of key-value pairs. For example: |
Valid values |
dbms.security.oidc.<provider>.token_params, A simple key value map pattern |
Dynamic |
true |
Default value |
|
Description |
Enterprise onlyThe identity provider’s user info URI. |
Valid values |
dbms.security.oidc.<provider>.user_info_uri, a URI |
Dynamic |
true |
Description |
Enterprise onlyThe 'well known' OpenID Connect Discovery endpoint used to fetch identity provider settings. If not provided, |
Valid values |
dbms.security.oidc.<provider>.well_known_discovery_uri, a URI |
Dynamic |
true |
Description |
A list of procedures (comma separated) that are to be loaded. The list may contain both fully-qualified procedure names, and partial names with the wildcard '*'. If this setting is left empty no procedures will be loaded. |
Valid values |
dbms.security.procedures.allowlist, a ',' separated list with elements of type 'a string'. |
Default value |
|
Description |
A list of procedures and user defined functions (comma separated) that are allowed full access to the database. The list may contain both fully-qualified procedure names, and partial names with the wildcard '*'. Note that this enables these procedures to bypass security. Use with caution. |
Valid values |
dbms.security.procedures.unrestricted, a ',' separated list with elements of type 'a string'. |
Default value |
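For example (the procedure name patterns are placeholders for whatever extensions are actually deployed):
dbms.security.procedures.allowlist=apoc.coll.*,apoc.load.*
dbms.security.procedures.unrestricted=my.extensions.*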
Description |
Enterprise onlyName of the initial database allocator. After the DBMS has been created, it can be set with the 'dbms.setDatabaseAllocator' procedure. |
Valid values |
initial.dbms.database_allocator, a string |
Default value |
|
Description |
Name of the default database (aliases are not supported).
|
Valid values |
initial.dbms.default_database, A valid database name containing only alphabetic characters, numbers, dots and dashes with a length between 3 and 63 characters, starting with an alphabetic character but not with the name 'system' |
Default value |
|
Description |
Enterprise onlyInitial default number of primary instances of user databases. If the user does not specify the number of primaries in 'CREATE DATABASE', this value will be used, unless it is overwritten with the 'dbms.setDefaultAllocationNumbers' procedure. |
Valid values |
initial.dbms.default_primaries_count, an integer which is minimum |
Default value |
|
Description |
Enterprise onlyInitial default number of secondary instances of user databases. If the user does not specify the number of secondaries in 'CREATE DATABASE', this value will be used, unless it is overwritten with the 'dbms.setDefaultAllocationNumbers' procedure. |
Valid values |
initial.dbms.default_secondaries_count, an integer which is minimum |
Default value |
|
Description |
Enterprise onlyThe names of databases that are allowed on this server - all others are denied. Empty means all are allowed. Can be overridden when enabling the server, or altered at runtime, without changing this setting. Mutually exclusive with 'initial.server.denied_databases'. |
Valid values |
initial.server.allowed_databases, a ',' separated set with elements of type 'a string'. |
Default value |
Description |
Enterprise onlyThe names of databases that are not allowed on this server. Empty means nothing is denied. Can be overridden when enabling the server, or altered at runtime, without changing this setting. Mutually exclusive with 'initial.server.allowed_databases'. |
Valid values |
initial.server.denied_databases, a ',' separated set with elements of type 'a string'. |
Default value |
Description |
Enterprise onlyAn instance can restrict itself to allow databases to be hosted only as primaries or secondaries. This setting is the default input for the |
Valid values |
initial.server.mode_constraint, one of [PRIMARY, SECONDARY, NONE] |
Default value |
|
Description |
Enterprise onlyEnable support for running online backups. |
Valid values |
server.backup.enabled, a boolean |
Default value |
|
Description |
Enterprise onlyNetwork interface and port for the backup server to listen on. |
Valid values |
server.backup.listen_address, a socket address in the format 'hostname:port', 'hostname' or ':port' |
Default value |
|
Description |
Enterprise onlyMaximum retry time per request during store copy. Regular store files and indexes are downloaded in separate requests during store copy. This configures the maximum time during which a failed request will be retried. |
Valid values |
server.backup.store_copy_max_retry_time_per_request, a duration (Valid units are: |
Default value |
|
Description |
Advertised address for this connector. |
Valid values |
server.bolt.advertised_address, a socket address in the format 'hostname:port', 'hostname' or ':port' which accessible address. If missing port or hostname it is acquired from server.default_advertised_address |
Default value |
|
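For example, to advertise a public hostname for Bolt while listening on all interfaces (the hostname is a placeholder; 7687 is the conventional Bolt port):
server.default_listen_address=0.0.0.0
server.bolt.advertised_address=neo4j01.example.com:7687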
Description |
The maximum time to wait before sending a NOOP on connections waiting for responses from active ongoing queries. The minimum value is 1 millisecond. |
Valid values |
server.bolt.connection_keep_alive, a duration (Valid units are: |
Default value |
|
Description |
The type of messages to enable keep-alive messages for (ALL, STREAMING, or OFF). |
Valid values |
server.bolt.connection_keep_alive_for_requests, one of [ALL, STREAMING, OFF] |
Default value |
|
Description |
The total number of probes that can be missed before a connection is considered stale. The minimum value is 1. |
Valid values |
server.bolt.connection_keep_alive_probes, an integer which is minimum |
Default value |
|
Description |
The interval between every scheduled keep-alive check on all connections with active queries. Zero duration turns off keep-alive service. |
Valid values |
server.bolt.connection_keep_alive_streaming_scheduling_interval, a duration (Valid units are: |
Default value |
|
Description |
Enable the bolt connector. |
Valid values |
server.bolt.enabled, a boolean |
Default value |
|
Description |
Address the connector should bind to. |
Valid values |
server.bolt.listen_address, a socket address in the format 'hostname:port', 'hostname' or ':port'. If missing port or hostname it is acquired from server.default_listen_address |
Default value |
|
Description |
Enable server OCSP stapling for bolt and http connectors. |
Valid values |
server.bolt.ocsp_stapling_enabled, a boolean |
Default value |
|
Description |
The maximum time an idle thread in the thread pool bound to this connector will wait for new tasks. |
Valid values |
server.bolt.thread_pool_keep_alive, a duration (Valid units are: |
Default value |
|
Description |
The maximum number of threads allowed in the thread pool bound to this connector. |
Valid values |
server.bolt.thread_pool_max_size, an integer |
Default value |
|
Description |
The number of threads to keep in the thread pool bound to this connector, even if they are idle. |
Valid values |
server.bolt.thread_pool_min_size, an integer |
Default value |
|
Description |
Encryption level to require this connector to use. |
Valid values |
server.bolt.tls_level, one of [REQUIRED, OPTIONAL, DISABLED] |
Default value |
|
Description |
Enterprise onlyAdvertised hostname/IP address and port for the transaction shipping server. |
Valid values |
server.cluster.advertised_address, a socket address in the format 'hostname:port', 'hostname' or ':port' which accessible address. If missing port or hostname it is acquired from server.default_advertised_address |
Default value |
|
Description |
Enterprise onlyComma separated list of groups to be used by the connect-randomly-to-server-group selection strategy. The connect-randomly-to-server-group strategy is used if the list of strategies ( |
Valid values |
server.cluster.catchup.connect_randomly_to_server_group, a ',' separated list with elements of type 'a string identifying a Server Tag'. |
Dynamic |
true |
Default value |
Description |
Enterprise onlyAn ordered list in descending preference of the strategy which secondaries use to choose the upstream server from which to pull transactional updates. If none are valid or the list is empty, there is a default strategy of |
Valid values |
server.cluster.catchup.upstream_strategy, a ',' separated list with elements of type 'a string'. |
Default value |
Description |
Enterprise onlyConfiguration of a user-defined upstream selection strategy. The user-defined strategy is used if the list of strategies ( |
Valid values |
server.cluster.catchup.user_defined_upstream_strategy, a string |
Default value |
Description |
Enterprise onlyNetwork interface and port for the transaction shipping server to listen on. Note that it is also possible to run the backup client against this port, so always limit access to it via the firewall and configure an SSL policy. |
Valid values |
server.cluster.listen_address, a socket address in the format 'hostname:port', 'hostname' or ':port'. If missing port or hostname it is acquired from server.default_listen_address |
Default value |
|
Description |
Enterprise onlyUse native transport if available: epoll on Linux or kqueue on macOS/BSD. If this setting is set to false, or if native transport is not available, NIO transport will be used. |
Valid values |
server.cluster.network.native_transport_enabled, a boolean |
Default value |
|
Description |
Enterprise onlyAdvertised hostname/IP address and port for the RAFT server. |
Valid values |
server.cluster.raft.advertised_address, a socket address in the format 'hostname:port', 'hostname' or ':port' which accessible address. If missing port or hostname it is acquired from server.default_advertised_address |
Default value |
|
Description |
Enterprise onlyNetwork interface and port for the RAFT server to listen on. |
Valid values |
server.cluster.raft.listen_address, a socket address in the format 'hostname:port', 'hostname' or ':port'. If missing port or hostname it is acquired from server.default_listen_address |
Default value |
|
Description |
Enterprise onlyUsers must manually specify the mode for the system database on each instance. |
Valid values |
server.cluster.system_database_mode, one of [PRIMARY, SECONDARY] |
Default value |
|
Description |
Strict configuration validation prevents the database from starting up if unknown configuration options are specified in the Neo4j settings namespace (such as dbms., cypher., etc.) or if settings are declared multiple times. |
Valid values |
server.config.strict_validation.enabled, a boolean |
Default value |
|
Description |
Whether or not databases on this instance are read_only by default. If false, individual databases may be marked as read_only using server.databases.read_only. If true, individual databases may be marked as writable using server.databases.writable. |
Valid values |
server.databases.default_to_read_only, a boolean |
Dynamic |
true |
Default value |
|
Description |
List of databases for which to prevent write queries. Databases not included in this list may be read_only anyway, depending upon the value of server.databases.default_to_read_only. |
Valid values |
server.databases.read_only, a ',' separated set with elements of type 'A valid database name containing only alphabetic characters, numbers, dots and dashes with a length between 3 and 63 characters, starting with an alphabetic character but not with the name 'system''. which Value 'system' can’t be included in read only databases collection! |
Dynamic |
true |
Default value |
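An illustrative combination of the read-only settings (the database names are placeholders):
server.databases.default_to_read_only=false
server.databases.read_only=reports,archive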
Description |
List of databases for which to allow write queries. Databases not included in this list will allow write queries anyway, unless server.databases.default_to_read_only is set to true. |
Valid values |
server.databases.writable, a ',' separated set with elements of type 'A valid database name containing only alphabetic characters, numbers, dots and dashes with a length between 3 and 63 characters, starting with an alphabetic character but not with the name 'system''. |
Dynamic |
true |
Default value |
Description |
The number of cached Cypher query execution plans per database. The max number of query plans that can be kept in cache is the |
Valid values |
server.db.query_cache_size, an integer which is minimum |
Default value |
|
Description |
Default hostname or IP address the server uses to advertise itself. |
Valid values |
server.default_advertised_address, a socket address in the format 'hostname:port', 'hostname' or ':port' which has no specified port and accessible address |
Default value |
|
Description |
Default network interface to listen for incoming connections. To listen for connections on all interfaces, use "0.0.0.0". |
Valid values |
server.default_listen_address, a socket address in the format 'hostname:port', 'hostname' or ':port' which has no specified port |
Default value |
|
Description |
Enterprise onlyDirectory to hold cluster state including Raft log. |
Valid values |
server.directories.cluster_state, a path. If relative it is resolved from server.directories.data |
Default value |
|
Description |
Path of the data directory. You must not configure more than one Neo4j installation to use the same data directory. |
Valid values |
server.directories.data, a path. If relative it is resolved from server.directories.neo4j_home |
Default value |
|
Description |
Root location where Neo4j will store database dumps optionally produced when dropping said databases. |
Valid values |
server.directories.dumps.root, a path. If relative it is resolved from server.directories.data |
Default value |
|
Description |
Sets the root directory for file URLs used with the Cypher |
Valid values |
server.directories.import, a path. If relative it is resolved from server.directories.neo4j_home |
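For example, to restrict file URLs to an import directory under the Neo4j home (the relative path is illustrative):
server.directories.import=import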
Description |
Path of the lib directory. |
Valid values |
server.directories.lib, a path. If relative it is resolved from server.directories.neo4j_home |
Default value |
|
Description |
Path of the licenses directory. |
Valid values |
server.directories.licenses, a path. If relative it is resolved from server.directories.neo4j_home |
Default value |
|
Description |
Path of the logs directory. |
Valid values |
server.directories.logs, a path. If relative it is resolved from server.directories.neo4j_home |
Default value |
|
Description |
Enterprise onlyThe target location of the CSV files: a path to a directory wherein a CSV file per reported field will be written. |
Valid values |
server.directories.metrics, a path. If relative it is resolved from server.directories.neo4j_home |
Default value |
|
Description |
Root relative to which directory settings are resolved. Calculated and set by the server on startup. |
Valid values |
server.directories.neo4j_home, a path which is absolute |
Default value |
|
Description |
Location of the database plugin directory. Compiled Java JAR files that contain database procedures will be loaded if they are placed in this directory. |
Valid values |
server.directories.plugins, a path. If relative it is resolved from server.directories.neo4j_home |
Default value |
|
Description |
Path of the run directory. This directory holds Neo4j’s runtime state, such as a pidfile when it is running in the background. The pidfile is created when starting neo4j and removed when stopping it. It may be placed on an in-memory filesystem such as tmpfs. |
Valid values |
server.directories.run, a path. If relative it is resolved from server.directories.neo4j_home |
Default value |
|
Description |
Root location where Neo4j will store scripts for configured databases. |
Valid values |
server.directories.script.root, a path. If relative it is resolved from server.directories.data |
Default value |
|
Description |
Root location where Neo4j will store transaction logs for configured databases. |
Valid values |
server.directories.transaction.logs.root, a path. If relative it is resolved from server.directories.data |
Default value |
|
Description |
Enterprise onlyAdvertised cluster member discovery management communication. |
Valid values |
server.discovery.advertised_address, a socket address in the format 'hostname:port', 'hostname' or ':port' which accessible address. If missing port or hostname it is acquired from server.default_advertised_address |
Default value |
|
Description |
Enterprise onlyHost and port to bind the cluster member discovery management communication. |
Valid values |
server.discovery.listen_address, a socket address in the format 'hostname:port', 'hostname' or ':port'. If missing port or hostname it is acquired from server.default_listen_address |
Default value |
|
Description |
Enterprise onlyA list of setting name patterns (comma separated) that are allowed to be dynamically changed. The list may contain both full setting names, and partial names with the wildcard '*'. If this setting is left empty all dynamic settings updates will be blocked. |
Valid values |
server.dynamic.setting.allowlist, a ',' separated list with elements of type 'a string'. |
Default value |
|
Description |
Enterprise onlyA list of tag names for the server used when configuring load balancing and replication policies. |
Valid values |
server.groups, a ',' separated list with elements of type 'a string identifying a Server Tag'. |
Dynamic |
true |
Default value |
Description |
Advertised address for this connector. |
Valid values |
server.http.advertised_address, a socket address in the format 'hostname:port', 'hostname' or ':port' which accessible address. If missing port or hostname it is acquired from server.default_advertised_address |
Default value |
|
Description |
Enable the http connector. |
Valid values |
server.http.enabled, a boolean |
Default value |
|
Description |
Address the connector should bind to. |
Valid values |
server.http.listen_address, a socket address in the format 'hostname:port', 'hostname' or ':port'. If missing port or hostname it is acquired from server.default_listen_address |
Default value |
|
Description |
Defines the set of modules loaded into the Neo4j web server. Options include TRANSACTIONAL_ENDPOINTS, BROWSER, UNMANAGED_EXTENSIONS and ENTERPRISE_MANAGEMENT_ENDPOINTS (if applicable). |
Valid values |
server.http_enabled_modules, a ',' separated set with elements of type 'one of [TRANSACTIONAL_ENDPOINTS, UNMANAGED_EXTENSIONS, BROWSER, ENTERPRISE_MANAGEMENT_ENDPOINTS]'. |
Default value |
|
Description |
Advertised address for this connector. |
Valid values |
server.https.advertised_address, a socket address in the format 'hostname:port', 'hostname' or ':port' which accessible address. If missing port or hostname it is acquired from server.default_advertised_address |
Default value |
|
Description |
Enable the https connector. |
Valid values |
server.https.enabled, a boolean |
Default value |
|
Description |
Address the connector should bind to. |
Valid values |
server.https.listen_address, a socket address in the format 'hostname:port', 'hostname' or ':port'. If missing port or hostname it is acquired from server.default_listen_address |
Default value |
|
Description |
Additional JVM arguments. Argument order can be significant. To use a Java commercial feature, the argument to unlock commercial features must precede the argument to enable the specific feature in the config value string. For example, to use Flight Recorder, |
Valid values |
server.jvm.additional, one or more jvm arguments |
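For example, each additional JVM argument is typically given on its own server.jvm.additional line (the flags and path below are illustrative, not recommendations):
server.jvm.additional=-XX:+HeapDumpOnOutOfMemoryError
server.jvm.additional=-XX:HeapDumpPath=/var/log/neo4j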
Description |
Path to the logging configuration for debug, query, http and security logs. |
Valid values |
server.logs.config, a path. If relative it is resolved from server.directories.neo4j_home |
Default value |
|
Description |
Enable the debug log. |
Valid values |
server.logs.debug.enabled, a boolean |
Default value |
|
Description |
Enable GC Logging. |
Valid values |
server.logs.gc.enabled, a boolean |
Default value |
|
Description |
GC Logging Options. |
Valid values |
server.logs.gc.options, a string |
Default value |
|
Description |
Number of GC logs to keep. |
Valid values |
server.logs.gc.rotation.keep_number, an integer |
Default value |
|
Description |
Size of each GC log that is kept. |
Valid values |
server.logs.gc.rotation.size, a byte size (valid multipliers are |
Default value |
|
Description |
Path to the logging configuration of user logs. |
Valid values |
server.logs.user.config, a path. If relative it is resolved from server.directories.neo4j_home |
Default value |
|
Description |
Enterprise onlyThe maximum number of databases. |
Valid values |
server.max_databases, a long which is minimum |
Default value |
|
Deprecated |
The |
Description |
Initial heap size. By default it is calculated based on available system resources. |
Valid values |
server.memory.heap.initial_size, a byte size (valid multipliers are |
Description |
Maximum heap size. By default it is calculated based on available system resources. |
Valid values |
server.memory.heap.max_size, a byte size (valid multipliers are |
Description |
Defines the size of the off-heap memory blocks cache. The cache will contain this number of blocks for each block size that is power of two. Thus, maximum amount of memory used by blocks cache can be calculated as 2 * server.memory.off_heap.max_cacheable_block_size * |
Valid values |
server.memory.off_heap.block_cache_size, an integer which is minimum |
Default value |
|
Description |
Defines the maximum size of an off-heap memory block that can be cached to speed up allocations. The value must be a power of 2. |
Valid values |
server.memory.off_heap.max_cacheable_block_size, a byte size (valid multipliers are |
Default value |
|
Description |
The maximum amount of off-heap memory that can be used to store transaction state data; it’s a total amount of memory shared across all active transactions. Zero means 'unlimited'. Used when db.tx_state.memory_allocation is set to 'OFF_HEAP'. |
Valid values |
server.memory.off_heap.max_size, a byte size (valid multipliers are |
Default value |
|
Description |
Use direct I/O for the page cache. The setting is supported only on Linux and only for a subset of record formats that use a platform-aligned page size. |
Valid values |
server.memory.pagecache.directio, a boolean |
Default value |
|
Description |
The page cache can be configured to use a temporary buffer for flushing. Where possible, it combines a sequence of several cache pages into one bigger buffer to minimize the number of individual IOPS performed and make better use of the available I/O resources, especially when those are restricted. |
Valid values |
server.memory.pagecache.flush.buffer.enabled, a boolean |
Dynamic |
true |
Default value |
|
Description |
The page cache can be configured to use a temporary buffer for flushing. Where possible, it combines a sequence of several cache pages into one bigger buffer to minimize the number of individual IOPS performed and make better use of the available I/O resources, especially when those are restricted. Use this setting to configure the individual file flush buffer size in pages (8KiB). To make use of this buffer during page cache flushing, buffered flushing must be enabled. |
Valid values |
server.memory.pagecache.flush.buffer.size_in_pages, an integer which is in the range |
Dynamic |
true |
Default value |
|
Description |
The maximum number of worker threads to use for pre-fetching data when doing sequential scans. Set to '0' to disable pre-fetching for scans. |
Valid values |
server.memory.pagecache.scan.prefetchers, an integer which is in the range |
Default value |
|
Description |
The amount of memory to use for mapping the store files. If Neo4j is running on a dedicated server, then it is generally recommended to leave about 2-4 gigabytes for the operating system, give the JVM enough heap to hold all your transaction state and query context, and then leave the rest for the page cache. If no page cache memory is configured, then a heuristic setting is computed based on available system resources. |
Valid values |
server.memory.pagecache.size, a byte size (valid multipliers are |
Default value |
|
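A common explicit memory sizing sketch (the numbers are purely illustrative and must be adapted to the available RAM):
server.memory.heap.initial_size=8g
server.memory.heap.max_size=8g
server.memory.pagecache.size=12g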
Description |
Enterprise onlySet to |
Valid values |
server.metrics.csv.enabled, a boolean |
Default value |
|
Description |
Enterprise onlyThe reporting interval for the CSV files. That is, how often new rows with numbers are appended to the CSV files. |
Valid values |
server.metrics.csv.interval, a duration (Valid units are: |
Default value |
|
Description |
Enterprise onlyDecides which compression to use for the CSV history files. |
Valid values |
server.metrics.csv.rotation.compression, one of [NONE, ZIP, GZ] |
Default value |
|
Description |
Enterprise onlyMaximum number of history files for the csv files. |
Valid values |
server.metrics.csv.rotation.keep_number, an integer which is minimum |
Default value |
|
Description |
Enterprise onlyThe file size in bytes at which the csv files will auto-rotate. If set to zero then no rotation will occur. Accepts a binary suffix |
Valid values |
server.metrics.csv.rotation.size, a byte size (valid multipliers are |
Default value |
|
Description |
Enterprise onlyEnable metrics. Setting this to |
Valid values |
server.metrics.enabled, a boolean |
Default value |
|
Description |
Enterprise onlySpecifies which metrics should be enabled by using a comma separated list of globbing patterns. Only the metrics matching the filter will be enabled. For example |
Valid values |
server.metrics.filter, a ',' separated list with elements of type 'A simple globbing pattern that can use |
Default value |
|
Description |
Enterprise onlySet to |
Valid values |
server.metrics.graphite.enabled, a boolean |
Default value |
|
Description |
Enterprise onlyThe reporting interval for Graphite. That is, how often to send updated metrics to Graphite. |
Valid values |
server.metrics.graphite.interval, a duration (Valid units are: |
Default value |
|
Description |
Enterprise onlyThe hostname or IP address of the Graphite server. |
Valid values |
server.metrics.graphite.server, a socket address in the format 'hostname:port', 'hostname' or ':port'. If missing port or hostname it is acquired from server.default_listen_address |
Default value |
|
Description |
Enterprise onlySet to |
Valid values |
server.metrics.jmx.enabled, a boolean |
Default value |
|
Description |
Enterprise onlyA common prefix for the reported metrics field names. |
Valid values |
server.metrics.prefix, a string |
Default value |
|
Description |
Enterprise onlySet to |
Valid values |
server.metrics.prometheus.enabled, a boolean |
Default value |
|
Description |
Enterprise onlyThe hostname and port to use as Prometheus endpoint. |
Valid values |
server.metrics.prometheus.endpoint, a socket address in the format 'hostname:port', 'hostname' or ':port'. If missing port or hostname it is acquired from server.default_listen_address |
Default value |
|
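For example, to expose metrics to Prometheus (the endpoint host and port are placeholders):
server.metrics.enabled=true
server.metrics.prometheus.enabled=true
server.metrics.prometheus.endpoint=localhost:2004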
Description |
Enterprise onlyWhether the Neo4j process should shut down or continue running if there is a Database Management System panic (an irrecoverable error). Following a DBMS panic, it is likely that a significant amount of functionality will be lost. Recovering full functionality will require a Neo4j restart. This feature is available in Neo4j Enterprise Edition. |
Valid values |
server.panic.shutdown_on_panic, a boolean |
Default value |
|
Description |
Enterprise onlyThe advertised address for the intra-cluster routing connector. |
Valid values |
server.routing.advertised_address, a socket address in the format 'hostname:port', 'hostname' or ':port' which accessible address. If missing port or hostname it is acquired from server.default_advertised_address |
Default value |
|
Description |
The address the routing connector should bind to. |
Valid values |
server.routing.listen_address, a socket address in the format 'hostname:port', 'hostname' or ':port'. If missing port or hostname it is acquired from server.default_listen_address |
Default value |
|
Description |
Number of Neo4j worker threads. This setting is only valid for REST, and does not influence bolt-server. It sets the number of worker threads for the Jetty server used by neo4j-server. This option can be tuned when you plan to execute multiple, concurrent REST requests, with the aim of getting more throughput from the database. Your OS might enforce a lower limit than the maximum value specified here. |
Valid values |
server.threads.worker_count, an integer which is in the range |
Default value |
|
Description |
Comma-separated list of <classname>=<mount point> for unmanaged extensions. |
Valid values |
server.unmanaged_extension_classes, a ',' separated list with elements of type '<classname>=<mount point> string'. |
Default value |
Description |
Name of the Windows Service managing Neo4j when installed using |
Valid values |
server.windows_service_name, a string |
Default value |
|