Operations
This section describes common operations when running Neo4j in a Kubernetes cluster.
1. Online Maintenance
Online maintenance does not require stopping the neo4j process.
It is performed using the kubectl exec command.
To run a task directly:
kubectl exec <release-name>-0 -- neo4j-admin store-info --all /var/lib/neo4j/data/databases --expand-commands
To run a series of commands, use an interactive shell:
kubectl exec -it <release-name>-0 -- bash
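For repeated online checks, the kubectl exec call can be wrapped in a small helper; a sketch, assuming kubectl access to the cluster (the run_in_pod name is illustrative, not part of the chart):

```shell
# Run an arbitrary command inside the Neo4j pod.
# $1 = pod name (e.g. <release-name>-0); remaining arguments are the command.
run_in_pod() {
  local pod="$1"
  shift
  kubectl exec "$pod" -- "$@"
}
# Usage: run_in_pod "<release-name>-0" neo4j-admin store-info --all /var/lib/neo4j/data/databases --expand-commands
```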
2. Offline Maintenance
You use the Neo4j offline maintenance mode to perform maintenance tasks that require Neo4j to be offline.
In this mode, the neo4j process is not running.
However, the Neo4j pod keeps running, but it never reaches the READY status.
2.1. Put the Neo4j instance in offline mode
To put the Neo4j instance in offline maintenance mode, set offlineMaintenanceModeEnabled: true and upgrade the Helm release.
- You can do that by using the values.yaml file:
  - Open your values.yaml file and add offlineMaintenanceModeEnabled: true to the neo4j object:
    neo4j:
      offlineMaintenanceModeEnabled: true
  - Run helm upgrade to apply the changes:
    helm upgrade <release-name> neo4j/neo4j-standalone -f values.yaml
- Alternatively, you can set neo4j.offlineMaintenanceModeEnabled to true as part of the helm upgrade command:
  helm upgrade <release-name> neo4j/neo4j-standalone --version={neo4j-version-exact} --set neo4j.offlineMaintenanceModeEnabled=true
- Poll kubectl get pods until the pod has restarted (STATUS = Running):
  kubectl get pod <release-name>-0
- Connect to the pod with an interactive shell:
  kubectl exec -it "<release-name>-0" -- bash
- View the running Java processes:
  jps
  19 Jps
  The result shows no running Java process other than jps itself.
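Instead of polling kubectl get pods by hand, the restart check can be scripted; a minimal sketch, assuming kubectl access to the cluster (the wait_for_running helper name and 5-second interval are illustrative, not part of the chart):

```shell
# Poll the pod's phase until it reports Running.
# $1 = pod name (e.g. <release-name>-0).
wait_for_running() {
  local pod="$1"
  until [ "$(kubectl get pod "$pod" -o 'jsonpath={.status.phase}')" = "Running" ]; do
    sleep 5
  done
}
# Usage: wait_for_running "<release-name>-0"
```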
2.2. Run tasks in offline mode
Offline maintenance tasks are performed using the kubectl exec command.
- To run a task directly:
  kubectl exec <release-name>-0 -- neo4j-admin store-info --all /var/lib/neo4j/data/databases --expand-commands
- To run a series of commands, use an interactive shell:
  kubectl exec -it <release-name>-0 -- bash
- For long-running commands, use a shell and run the tasks with nohup so they continue if the kubectl exec connection is lost:
  kubectl exec -it <release-name>-0 -- bash
  $ nohup neo4j-admin check-consistency --database=neo4j --expand-commands &>job.out </dev/null &
  $ tail -f job.out
2.3. Put the Neo4j DBMS in online mode
When you have finished with the maintenance tasks, return the Neo4j instance to normal operation:
- You can do that by using the values.yaml file:
  - Open your values.yaml file and add offlineMaintenanceModeEnabled: false to the neo4j object:
    neo4j:
      offlineMaintenanceModeEnabled: false
  - Run helm upgrade to apply the changes:
    helm upgrade <release-name> neo4j/neo4j-standalone -f values.yaml
- Alternatively, you can run helm upgrade with the flag set to false:
  helm upgrade <release-name> neo4j/neo4j-standalone --version={neo4j-version-exact} --set neo4j.offlineMaintenanceModeEnabled=false
3. Reset the neo4j user password
You reset the neo4j user password by disabling authentication and then re-enabling it.
- In the values.yaml file, set dbms.security.auth_enabled to "false" to disable authentication:
  All Neo4j config values must be YAML strings, not YAML booleans. Therefore, make sure you put quotes around values, such as "true" or "false", so that they are handled correctly by Kubernetes.
  # Neo4j Configuration (yaml format)
  config:
    dbms.security.auth_enabled: "false"
- Run the following command to apply the changes:
  helm upgrade <release-name> neo4j/neo4j-standalone -f values.yaml
  Authentication is now disabled.
- Connect with cypher-shell and set the desired password:
  ALTER USER neo4j SET PASSWORD '<new-password>'
- Update the Neo4j configuration to re-enable authentication:
  # Neo4j Configuration (yaml format)
  config:
    dbms.security.auth_enabled: "true"
- Run the following command to apply the update and re-enable authentication:
  helm upgrade <release-name> neo4j/neo4j-standalone -f values.yaml
Authentication is now enabled, and the neo4j user password has been reset to the desired password.
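With authentication disabled, the ALTER USER step can also be run non-interactively from outside the pod; a sketch, assuming kubectl access (the set_neo4j_password helper is hypothetical, not part of the chart):

```shell
# Run the password reset statement against the system database in the given pod.
# Works only while dbms.security.auth_enabled is "false".
set_neo4j_password() {
  local pod="$1" new_password="$2"
  kubectl exec "$pod" -- cypher-shell -d system \
    "ALTER USER neo4j SET PASSWORD '${new_password}'"
}
# Usage: set_neo4j_password "<release-name>-0" "<new-password>"
```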
4. Dump and load databases (offline)
You can use the neo4j-admin dump command to make a full backup (an archive) of one or more offline databases, and neo4j-admin load to load them back into a Neo4j deployment.
These operations are performed in offline maintenance mode.
4.1. Dump the neo4j and system databases
- Dump the neo4j and system databases:
  neo4j-admin dump --expand-commands --database=system --to /backups/system.dump && neo4j-admin dump --expand-commands --database=neo4j --to /backups/neo4j.dump
- Verify that Neo4j is working by refreshing Neo4j Browser.
For information about the command syntax, options, and usage, see Back up an offline database.
4.2. Load the neo4j and system databases
- Run the neo4j-admin load commands:
  neo4j-admin load --expand-commands --database=system --from /backups/system.dump && neo4j-admin load --expand-commands --database=neo4j --from /backups/neo4j.dump
  For information about the command syntax, options, and usage, see Restore a database dump.
- Verify that Neo4j is working by refreshing Neo4j Browser.
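The dump step generalizes to any list of databases; a sketch to run inside the pod in offline mode (the dump_databases helper is hypothetical; the /backups path follows the examples above):

```shell
# Dump each named database to /backups/<name>.dump, stopping on the first failure.
dump_databases() {
  local db
  for db in "$@"; do
    neo4j-admin dump --expand-commands --database="$db" \
      --to "/backups/${db}.dump" || return 1
  done
}
# Usage: dump_databases system neo4j
```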
5. Back up and restore a single database (online)
You can use the neo4j-admin backup command to make a full or incremental backup of one or more online databases, and neo4j-admin restore to restore them in a live Neo4j DBMS or cluster.
These operations are performed in online maintenance mode.
For performing backups, Neo4j uses the Admin Service, which is only available inside the Kubernetes cluster, and access to it should be guarded. For more information, see Access the Neo4j cluster from inside Kubernetes and Access the Neo4j cluster from outside Kubernetes.
5.1. Back up a single database
The neo4j-admin backup command can be run either from the same pod or from a separate pod.
However, it uses resources (CPU, RAM) in the Neo4j container (competing with Neo4j itself), because it checks the database consistency at the end of every backup operation.
Therefore, it is recommended to run the operation in a separate pod.
In the Neo4j Helm charts, the backup configuration is set by default so that backups can be taken from other pods. Note that the default for Neo4j on-site installations is to listen only on 127.0.0.1, which will not work from other containers, since they would not be able to access the backup port.
Back up a database from a separate pod
- Create a Neo4j instance pod to get access to the neo4j-admin command:
  kubectl run --rm -it --image "neo4j:4.4.6-enterprise" backup -- bash
- Run the following command to back up the database you want, in this example the neo4j database. The command is the same for standalone instances and Neo4j cluster members:
  bin/neo4j-admin backup --from=my-neo4j-release-admin.default.svc.cluster.local:6362 --database=neo4j --backup-dir=/backups --expand-commands
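The two steps above can be wrapped in one helper; a sketch, assuming kubectl access (run_backup_pod and its arguments are illustrative; the admin-service address follows the my-neo4j-release example above):

```shell
# Start a throwaway Neo4j pod and run the backup command in it.
# $1 = admin service address, $2 = database name.
run_backup_pod() {
  kubectl run --rm -it --image "neo4j:4.4.6-enterprise" backup -- \
    bin/neo4j-admin backup --from="$1" --database="$2" \
    --backup-dir=/backups --expand-commands
}
# Usage: run_backup_pod my-neo4j-release-admin.default.svc.cluster.local:6362 neo4j
```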
5.2. Restore a single database
To restore a single offline database or a database backup, you first delete the database that you want to replace (unless you want to restore the backup as an additional database in your DBMS), then use the neo4j-admin restore command to restore the database backup, and finally use the Cypher command CREATE DATABASE name to create the restored database in the system database.
5.2.1. Delete the database that you want to replace
Before you restore the database backup, you have to delete the database that you want to replace with that backup, using the Cypher command DROP DATABASE name against the system database.
If you want to restore the backup as an additional database in your DBMS, you can proceed to the next section.
For Neo4j cluster deployments, you run the Cypher command DROP DATABASE name only on one of the cluster members.
- Connect to the Neo4j DBMS:
  kubectl exec -it <release-name>-0 -- bash
- Connect to the system database using cypher-shell:
  cypher-shell -u neo4j -p <password> -d system
- Drop the database you want to replace with the backup:
  DROP DATABASE neo4j;
- Exit the Cypher Shell command-line console:
  :exit;
5.2.2. Restore the database backup
You use the neo4j-admin restore command to restore the database backup, and then the Cypher command CREATE DATABASE name to create the restored database in the system database.
For information about the command syntax, options, and usage, see Restore a database backup.
- Restore the neo4j database backup.
  For Neo4j cluster deployments, restore the database backup on each cluster member.
  - Run the neo4j-admin restore command:
    neo4j-admin restore --database=neo4j --from=/backups/neo4j --expand-commands
  - Connect to the system database using cypher-shell:
    cypher-shell -u neo4j -p <password> -d system
  - Create the neo4j database.
    For Neo4j cluster deployments, you run the Cypher command CREATE DATABASE name only on one of the cluster members.
    CREATE DATABASE neo4j;
- Open the browser at http://<external-ip>:7474/browser/ and check that all data has been successfully restored.
- Execute a Cypher command against the neo4j database, for example:
  MATCH (n) RETURN n
If you have backed up your database with the option --include-metadata, you can manually restore the users and roles metadata. For more information, see Restore a database backup → Example.
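The restore-and-create sequence can be sketched as one helper to run inside the pod (restore_database is hypothetical; paths and credentials follow the examples above):

```shell
# Restore a database backup from /backups/<name> and register it in the
# system database. $1 = database name, $2 = neo4j user password.
restore_database() {
  local db="$1" password="$2"
  neo4j-admin restore --database="$db" --from="/backups/${db}" --expand-commands || return 1
  cypher-shell -u neo4j -p "$password" -d system "CREATE DATABASE ${db};"
}
# Usage: restore_database neo4j "<password>"
```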
6. Upgrade the Neo4j DBMS on Kubernetes
To upgrade from Neo4j Community to Enterprise edition, run:
helm upgrade <release-name> neo4j/neo4j-standalone --set neo4j.edition=enterprise --set neo4j.acceptNeo4jLicenseAgreement=yes
To upgrade to the next patch release of Neo4j, update your Neo4j values.yaml file and upgrade the Helm release.
- Open the values.yaml file, using the code editor of your choice, and add the following line to the image object:
  image:
    customImage: neo4j:4.4.6
- Run helm upgrade to apply the changes:
  helm upgrade <release-name> neo4j/neo4j-standalone -f values.yaml
7. Scale a Neo4j deployment
Neo4j supports both vertical and horizontal scaling.
7.1. Vertical scaling
To increase or decrease the resources (CPU, memory) available to a Neo4j instance, change the neo4j.resources
object in the values.yaml file to set the desired resource usage, and then perform a helm upgrade.
If you change the memory allocated to the Neo4j container, you should also change Neo4j's memory configuration (the dbms.memory.heap.* and dbms.memory.pagecache.size settings).
For example, if your running Neo4j instance has the following allocated resources:
# values.yaml
neo4j:
resources:
cpu: "1"
memory: "3Gi"
# Neo4j Configuration (yaml format)
config:
dbms.memory.heap.initial_size: "2G"
dbms.memory.heap.max_size: "2G"
dbms.memory.pagecache.size: "500m"
Now suppose you want to increase them to 2 CPUs and 4GB of memory, allocating the additional memory to the page cache.
- Modify the values.yaml file to set the desired resource usage:
  # values.yaml
  neo4j:
    resources:
      cpu: "2"
      memory: "4Gi"
  # Neo4j Configuration (yaml format)
  config:
    dbms.memory.heap.initial_size: "2G"
    dbms.memory.heap.max_size: "2G"
    dbms.memory.pagecache.size: "1G"
- Run helm upgrade with the modified deployment values.yaml file and the respective Helm chart (neo4j/neo4j-standalone, neo4j/neo4j-cluster-core, or neo4j/neo4j-cluster-read-replica) to apply the changes. For example:
  helm upgrade <release-name> neo4j/neo4j-standalone -f values.yaml
7.2. Horizontal scaling
You can add a new core member or a read replica to the Neo4j cluster to scale out write or read workloads.
- In the Kubernetes cluster, verify that you have a node that you can use for the new Neo4j cluster member.
- Create a persistent disk for the new Neo4j cluster member to be used for its data volume mount. For more information, see Create a persistent volume for each cluster member and Volume mounts and persistent volumes.
- Create a Helm deployment YAML file for the new Neo4j cluster member with all the configuration settings and the disk you have created for it. For more information, see Create Helm deployment values files and Configure a Neo4j Helm deployment.
- Install the new member using the helm install command, the deployment values.yaml file, and the respective Helm chart (neo4j/neo4j-cluster-core or neo4j/neo4j-cluster-read-replica). For example:
  helm install rr-2 neo4j/neo4j-cluster-read-replica -f rr-2.values.yaml
8. Use custom images from private registries
Neo4j 4.4.4 introduces support for using custom images from private registries by adding new or existing imagePullSecrets.
8.1. Add an existing imagePullSecret
You can use an existing imagePullSecret for your Neo4j deployment by specifying its name in the values.yaml file.
The Neo4j Helm charts check that the provided imagePullSecret exists in the Kubernetes cluster and use it.
If a secret with the given name does not exist in the cluster, the Helm chart throws an error.
# values.yaml
# Override image settings in Neo4j pod
image:
imagePullPolicy: IfNotPresent
# set a customImage if you want to use your own docker image
customImage: demo_neo4j_image:v1
#imagePullSecrets list
imagePullSecrets:
- "mysecret"
8.2. Create and add a new imagePullSecret
You can create a new imagePullSecret for your Neo4j deployment by defining an equivalent imageCredential in the values.yaml file.
The Neo4j Helm charts create a secret with the given name and use it as an imagePullSecret to pull the custom image defined.
The following example shows how to define an imageCredential named mysecret for a private Docker registry and add it as the imagePullSecret to the cluster.
# values.yaml
# Override image settings in Neo4j pod
image:
imagePullPolicy: IfNotPresent
# set a customImage if you want to use your own docker image
customImage: demo_neo4j_image:v1
#imagePullSecrets list
imagePullSecrets:
- "mysecret"
#imageCredentials list for which a secret of type docker-registry will be created automatically using the details provided
# registry, username, password, and email are compulsory fields for an imageCredential; if any are missing, the Helm chart will throw an error
# an imageCredential name must be part of the imagePullSecrets list, or else the respective imageCredential is ignored and no secret is created
imageCredentials:
- registry: "https://index.docker.io/v1/"
username: "demouser"
password: "demopass123"
email: "demo@company1.com"
name: "mysecret"
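Equivalently, the secret can be created by hand with kubectl and only referenced via imagePullSecrets; a sketch using the example credentials above (the create_pull_secret wrapper is illustrative):

```shell
# Create a docker-registry secret that the chart can reference via imagePullSecrets.
# $1 = secret name, $2 = registry, $3 = username, $4 = password, $5 = email.
create_pull_secret() {
  kubectl create secret docker-registry "$1" \
    --docker-server="$2" --docker-username="$3" \
    --docker-password="$4" --docker-email="$5"
}
# Usage: create_pull_secret mysecret "https://index.docker.io/v1/" demouser demopass123 demo@company1.com
```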
9. Assign Neo4j pods to specific nodes
From version 4.4.5, Neo4j supports assigning Neo4j pods to specific nodes using nodeSelector labels.
You specify the nodeSelector labels in the values.yaml file.
If there is no node with the given labels, the Helm chart throws an error.
#nodeSelector labels
#Ensure the respective labels are present on one of the cluster nodes or else Helm charts will throw an error.
nodeSelector:
nodeNumber: one
name: node1
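The labels must exist on a node before the deployment; a sketch of applying them with kubectl (node1 and the label values mirror the example above; the label_node wrapper is illustrative):

```shell
# Apply the nodeSelector labels from the example to a node.
label_node() {
  kubectl label node "$1" nodeNumber=one name="$1"
}
# Usage: label_node node1
```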