Operations

Neo4j supports two maintenance modes: online and offline, which you can use to perform different maintenance tasks.

Online Maintenance

Online maintenance does not require stopping the neo4j process. It is performed using the command kubectl exec.

To directly run tasks:

kubectl exec <release-name>-0 -- neo4j-admin database info --from-path=/var/lib/neo4j/data/databases --expand-commands

All neo4j-admin commands need the --expand-commands flag to run in the Neo4j container. This is because the Neo4j Helm chart defines the Neo4j configuration using command expansion to dynamically resolve some configuration parameters at runtime.

To run a series of commands, use an interactive shell:

kubectl exec -it <release-name>-0 -- bash

Processes executed using kubectl exec count towards the Neo4j container’s memory allocation. Therefore, running tasks that use a significant amount of memory or running Neo4j in an extremely memory-constrained configuration could cause the Neo4j container to be terminated by the underlying Operating System.
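
For example, before running a memory-intensive admin task you can check the memory limit of the Neo4j container (a minimal sketch; the jsonpath assumes the Neo4j container is the first container in the pod):

kubectl get pod <release-name>-0 -o jsonpath='{.spec.containers[0].resources.limits.memory}'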

Offline Maintenance

You use the Neo4j offline maintenance mode to perform maintenance tasks that require Neo4j to be offline. In this mode, the neo4j process is not running; the Neo4j pod keeps running but never reaches the READY status.

Put the Neo4j instance in offline mode

  1. To put the Neo4j instance in offline maintenance mode, set neo4j.offlineMaintenanceModeEnabled to true and upgrade the helm release.

    • You can do that by using the values.yaml file:

      1. Open your values.yaml file and add offlineMaintenanceModeEnabled: true to the neo4j object:

        neo4j:
         offlineMaintenanceModeEnabled: true
      2. Run helm upgrade to apply the changes:

        helm upgrade <release-name> neo4j/neo4j -f values.yaml
    • Alternatively, you can set neo4j.offlineMaintenanceModeEnabled to true as part of the helm upgrade command:

      helm upgrade <release-name> neo4j/neo4j --version=5.8.0 --reuse-values --set neo4j.offlineMaintenanceModeEnabled=true
  2. Poll kubectl get pods until the pod has restarted (STATUS=Running).

    kubectl get pod <release-name>-0
  3. Connect to the pod with an interactive shell:

    kubectl exec -it "<release-name>-0" -- bash
  4. View running java processes:

    jps
    19 Jps

    The result shows no running java process other than jps itself.

Run tasks in offline mode

Offline maintenance tasks are performed using the command kubectl exec.

  • To directly run tasks:

    kubectl exec <release-name>-0 -- neo4j-admin database info --from-path=/var/lib/neo4j/data/databases --expand-commands
  • To run a series of commands, use an interactive shell:

    kubectl exec -it <release-name>-0 -- bash
  • For long-running commands, use a shell and run tasks using nohup so they continue if the kubectl exec connection is lost:

    kubectl exec -it <release-name>-0 -- bash
      $ nohup neo4j-admin database check neo4j --expand-commands &>job.out </dev/null &
      $ tail -f job.out

Put the Neo4j DBMS in online mode

When you finish with the maintenance tasks, return the Neo4j instance to normal operation:

  • You can do that by using the values.yaml file:

    1. Open your values.yaml file and add offlineMaintenanceModeEnabled: false to the neo4j object:

      neo4j:
       offlineMaintenanceModeEnabled: false
    2. Run helm upgrade to apply the changes:

      helm upgrade <release-name> neo4j/neo4j -f values.yaml
  • Alternatively, you can run helm upgrade with the flag set to false:

    helm upgrade <release-name> neo4j/neo4j --version=5.8.0 --reuse-values --set neo4j.offlineMaintenanceModeEnabled=false

Reset the neo4j user password

You reset the neo4j user password by disabling authentication and then re-enabling it.

  1. In the values.yaml file, set dbms.security.auth_enabled to "false" to disable authentication:

    All Neo4j config values must be YAML strings, not YAML booleans. Therefore, make sure you put quotes around values, such as "true" or "false", so that they are handled correctly by Kubernetes.

    # Neo4j Configuration (yaml format)
    config:
      dbms.security.auth_enabled: "false"
  2. Run the following command to apply the changes:

    helm upgrade <release-name> neo4j/neo4j -f values.yaml

    Authentication is now disabled.

  3. Connect with cypher-shell and set the desired password (a full connection sketch follows this list):

    ALTER USER neo4j SET PASSWORD '<new-password>'
  4. Update the Neo4j configuration to enable authentication:

    # Neo4j Configuration (yaml format)
    config:
      dbms.security.auth_enabled: "true"
  5. Run the following command to apply the update and re-enable authentication:

    helm upgrade <release-name> neo4j/neo4j -f values.yaml

    Authentication is now enabled, and the Neo4j user password has been reset to the desired password.
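
The cypher-shell step above (step 3) can be run directly in the Neo4j pod while authentication is disabled; no credentials are needed at that point. A minimal sketch:

kubectl exec -it <release-name>-0 -- cypher-shell -d system
ALTER USER neo4j SET PASSWORD '<new-password>';
:exit;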

Dump and load databases (offline)

You can use the neo4j-admin database dump command to make a full backup (an archive) of an offline database and neo4j-admin database load to load it back into a Neo4j deployment. These operations are performed in offline maintenance mode.

Dump the neo4j and system databases

  1. Put the Neo4j instance in offline mode.

  2. Dump neo4j and system databases:

    neo4j-admin database dump --expand-commands system --to-path=/backups && neo4j-admin database dump --expand-commands neo4j --to-path=/backups
  3. Put the Neo4j DBMS in online mode.

  4. Verify that Neo4j is working by refreshing Neo4j Browser.

For information about the command syntax, options, and usage, see Back up an offline database.
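
If you want to keep a copy of the dumps outside the Kubernetes cluster, you can copy them from the pod to your local machine with kubectl cp (a sketch, assuming the dumps were written to /backups as in step 2; /tmp/neo4j-dumps is an example local path):

kubectl cp <release-name>-0:/backups /tmp/neo4j-dumps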

Load the neo4j and system databases

  1. Put the Neo4j instance in offline mode.

  2. Run neo4j-admin database load commands:

    neo4j-admin database load --expand-commands system --from-path=/backups && neo4j-admin database load --expand-commands neo4j --from-path=/backups

    For information about the command syntax, options, and usage, see Restore a database dump.

  3. Put the Neo4j DBMS in online mode.

  4. Verify that Neo4j is working by refreshing Neo4j Browser.
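
As an alternative to checking in Neo4j Browser, you can verify the loaded databases from the command line with cypher-shell (a minimal sketch):

kubectl exec -ti <release-name>-0 -- cypher-shell -u neo4j -p <password> "SHOW DATABASES;"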

Back up and restore a single database (online)

You can use the neo4j-admin database backup command to make a full or differential backup of an online database and neo4j-admin database restore to restore it in a live Neo4j DBMS or cluster. These operations are performed in online maintenance mode.

For performing backups, Neo4j uses the Admin Service, which is only available inside the Kubernetes cluster and access to it should be guarded. For more information, see Access the Neo4j cluster from inside Kubernetes and Access the Neo4j cluster from outside Kubernetes.
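
The Admin Service is created by the Neo4j Helm chart and is typically named <release-name>-admin (as in the backup example below). You can check that it exists before running a backup (a minimal sketch):

kubectl get service <release-name>-admin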

Back up a single database

The neo4j-admin database backup command can be run either from the Neo4j pod itself or from a separate pod. When run in the Neo4j container, it competes with Neo4j for resources (CPU, RAM), because it checks the database consistency at the end of every backup operation. Therefore, it is recommended to run the operation in a separate pod.

In the Neo4j Helm chart, the backup configurations are set by default to server.backup.enabled=true and server.backup.listen_address=0.0.0.0:6362.

Note that outside the Helm chart, the Neo4j default is to listen only on 127.0.0.1, which does not work from other containers, since they cannot reach the backup port.
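
If you need to override these defaults, the same settings can be set explicitly in the config section of your values.yaml file (a sketch using the settings named above; remember to quote the values as strings):

# Neo4j Configuration (yaml format)
config:
  server.backup.enabled: "true"
  server.backup.listen_address: "0.0.0.0:6362"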

Back up a database from a separate pod

  1. Create a Neo4j instance pod to get access to the neo4j-admin command:

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: neo4j-backup
    spec:
      containers:
        - name: neo4j-backup
          image: neo4j:5.8.0-enterprise
          imagePullPolicy: IfNotPresent
          env:
            - name: NEO4J_ACCEPT_LICENSE_AGREEMENT
              value: "yes"
            - name: NEO4J_server_config_strict__validation_enabled
              value: "false"
          volumeMounts:
            - mountPath: /backups
              name: backups
      volumes:
        - name: backups
          emptyDir: {}
    EOF
  2. Run the following command to back up the database you want. In this example, this is the neo4j database. The command is the same for standalone instances and Neo4j cluster members.

    kubectl exec -ti neo4j-backup -- neo4j-admin database backup --from=my-neo4j-release-admin:6362 --to-path=/backups --expand-commands neo4j
  3. Finally, copy the backup from the Pods volume mount to your local machine so that it can be moved to a resilient backup location such as S3:

    kubectl cp neo4j-backup:/backups /tmp/backup
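
From there, the backup can be uploaded to your resilient storage of choice. A sketch using the AWS CLI (the bucket name and prefix are placeholders, and the aws CLI must be installed and configured locally):

aws s3 cp /tmp/backup s3://<bucket-name>/neo4j-backups/ --recursive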

Restore a single database

To restore a single offline database or a database backup, you first need to delete the database that you want to replace unless you want to restore the backup as an additional database in your DBMS. Then, use the restore command of neo4j-admin to restore the database backup. Finally, use the Cypher command CREATE DATABASE name to create the restored database in the system database.

Delete the database that you want to replace

Before you restore the database backup, you have to delete the database that you want to replace with that backup using the Cypher command DROP DATABASE name against the system database. If you want to restore the backup as an additional database in your DBMS, then you can proceed to the next section.

For Neo4j cluster deployments, you run the Cypher command DROP DATABASE name only on one of the cluster servers. The command is automatically routed from there to the other cluster members.

  1. Connect to the Neo4j DBMS:

    kubectl exec -it <release-name>-0 -- bash
  2. Connect to the system database using cypher-shell:

    cypher-shell -u neo4j -p <password> -d system
  3. Drop the database you want to replace with the backup:

    DROP DATABASE neo4j;
  4. Exit the Cypher Shell command-line console:

    :exit;

Restore the database backup

You use the neo4j-admin database restore command to restore the database backup, and then the Cypher command CREATE DATABASE name to create the restored database in the system database. For information about the command syntax, options, and usage, see Restore a database backup.

For Neo4j cluster deployments, restore the database backup on each cluster server.

  1. Run the neo4j-admin database restore command to restore the database backup:

    neo4j-admin database restore neo4j --from-path=/backups/neo4j --expand-commands
  2. Connect to the system database using cypher-shell:

    cypher-shell -u neo4j -p <password> -d system
  3. Create the neo4j database.

    For Neo4j cluster deployments, you run the Cypher command CREATE DATABASE name only on one of the cluster servers.

    CREATE DATABASE neo4j;
  4. Open the browser at http://<external-ip>:7474/browser/ and check that all data has been successfully restored.

  5. Execute a Cypher command against the neo4j database, for example:

    MATCH (n) RETURN n

    If you have backed up your database with the option --include-metadata, you can manually restore the users and roles metadata. For more information, see Restore a database backup → Example.

To restore the system database, follow the steps described in Dump and load databases (offline).

Upgrade Neo4j Community to Enterprise edition

To upgrade from Neo4j Community to Enterprise edition, run:

helm upgrade <release-name> neo4j/neo4j --reuse-values --set neo4j.edition=enterprise --set neo4j.acceptLicenseAgreement=yes

Upgrade to the next patch release

To upgrade to the next patch release of Neo4j, update your Neo4j values.yaml file and upgrade the helm release.

  1. Open the values.yaml file, using the code editor of your choice, and add the following line to the image object:

    image:
      customImage: neo4j:5.8.0
  2. Run helm upgrade to apply the changes:

    helm upgrade <release-name> neo4j/neo4j -f values.yaml
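
Alternatively, the same change can be applied without editing values.yaml by setting image.customImage on the command line (a sketch, reusing the existing release values):

helm upgrade <release-name> neo4j/neo4j --reuse-values --set image.customImage=neo4j:5.8.0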

Migrate Neo4j from the Labs Helm charts to the Neo4j Helm charts (offline)

To migrate your Neo4j deployment from the Labs Helm charts to the Neo4j Helm charts, back up your standalone instance or cluster created with the Labs Helm chart and restore it in a standalone instance or a cluster created using the Neo4j Helm chart.

Neo4j supports the following migration paths for a single instance and a cluster:

Single instance
  • From the Labs Helm chart 3.5 or earlier to either the Neo4j Helm chart 4.3 or 4.4 — upgrade your Neo4j deployment to whichever version you want to move to, following the steps in the Labs Helm chart documentation (https://neo4j.com/labs/neo4j-helm/1.0.0/), and then migrate from the Labs Helm chart (4.3 or 4.4) to the Neo4j Helm chart 4.3 or 4.4 using the steps described here.

  • From the Labs Helm chart 4.3 to the Neo4j Helm chart 4.3 — follow the steps described here.

  • From the Labs Helm chart 4.3 to the Neo4j Helm chart 4.4 — follow the steps described here.

Cluster

From the Labs Helm chart 4.3 or 4.4 to the Neo4j Helm chart 4.4 — follow the steps described here.

Back up a Neo4j deployment created with the Labs Helm chart

To back up your Neo4j deployment created with the Labs Helm chart, follow the steps in the Neo4j-Helm User Guide → Backing up Neo4j Containers.

Restore your backup into a standalone or a cluster created with the Neo4j Helm chart

If the backup exists on a cloud provider, you can take one of the following approaches:

Approach 1
  1. Create a standalone or a cluster using the Neo4j Helm chart with a custom Neo4j image that has all the cloud provider utilities to download the backup from the respective cloud provider storage to your specific mount.

  2. Restore the backup following the steps described in Restore a single database.

Approach 2
  1. Get the backup on your local machine.

  2. Copy the backup to the respective mount in your new cluster created using the Neo4j Helm chart, using the command kubectl cp <local-path> <pod>:<path>. For example,

    kubectl cp /Users/username/Desktop/backup/4.3.3/neo4j standalone-0:/tmp/

    where the /tmp directory refers to the mount.

  3. Restore the backup following the steps described in Restore a single database.

Scale a Neo4j deployment

Neo4j supports both vertical and horizontal scaling.

Vertical scaling

To increase or decrease the resources (CPU, memory) available to a Neo4j instance, change the neo4j.resources object in the values.yaml file to set the desired resource usage, and then perform a helm upgrade.

If you change the memory allocated to the Neo4j container, you should also change Neo4j's memory configuration (server.memory.heap.initial_size, server.memory.heap.max_size, and server.memory.pagecache.size in particular). See Configure Resource Allocation for more details.

For example, if your running Neo4j instance has the following allocated resources:

# values.yaml
neo4j:
  resources:
    cpu: "1"
    memory: "3Gi"

# Neo4j Configuration (yaml format)
config:
  server.memory.heap.initial_size: "2G"
  server.memory.heap.max_size: "2G"
  server.memory.pagecache.size: "500m"

Suppose you want to increase them to 2 CPUs and 4 GB of memory, allocating the additional memory to the page cache:

  1. Modify the values.yaml file to set the desired resource usage:

    # values.yaml
    neo4j:
      resources:
        cpu: "2"
        memory: "4Gi"
    
    # Neo4j Configuration (yaml format)
    config:
      server.memory.heap.initial_size: "2G"
      server.memory.heap.max_size: "2G"
      server.memory.pagecache.size: "1G"
  2. Run helm upgrade with the modified deployment values.yaml file and the Neo4j Helm chart to apply the changes. For example:

    helm upgrade <release-name> neo4j/neo4j -f values.yaml
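
To sanity-check the memory settings against the new container resources, you can ask Neo4j for a recommendation (a sketch; neo4j-admin server memory-recommendation prints suggested heap and page cache sizes for the memory available to the process):

kubectl exec <release-name>-0 -- neo4j-admin server memory-recommendation --expand-commands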

Horizontal scaling

You can add a new server to the Neo4j cluster to scale out write or read workloads.

Example — add a new server to an existing cluster

The following example assumes that you have a cluster with 3 servers.

  1. In the Kubernetes cluster, verify that you have a node that you can use for the new server, server4.

  2. Install server4 using the same value for neo4j.name as your existing cluster:

    helm install server4 neo4j/neo4j --set neo4j.edition=enterprise --set neo4j.acceptLicenseAgreement=yes --set volumes.data.mode=defaultStorageClass --set neo4j.password="password" --set neo4j.minimumClusterSize=3 --set neo4j.name=my-cluster

    Alternatively, you can use a values.yaml file to set the values for the new server and the neo4j/neo4j Helm chart to install the new server. For more information, see Create Helm deployment values files and Install Neo4j cluster servers.

    When the new server joins the cluster, it will initially be in the Free state.

  3. To enable server4 to host databases, use cypher-shell (or Neo4j Browser) to connect to one of the existing servers:

    1. Access the cypher-shell on server1:

      kubectl exec -ti server1-0 -- cypher-shell -u neo4j -p password -d neo4j
    2. When the cypher-shell prompt is ready, verify that server4 is in the Free state, and take a note of its name:

      SHOW SERVERS;
      +---------------------------------------------------------------------------------------------------------------------------------+
      | name                                   | address                                | state     | health      | hosting             |
      +---------------------------------------------------------------------------------------------------------------------------------+
      | "0908819d-238a-473d-9877-5cc406050ea2" | "server4.neo4j.svc.cluster.local:7687" | "Free"    | "Available" | ["system"]          |
      | "19817354-5cd1-4579-8c45-8b897808fdb4" | "server2.neo4j.svc.cluster.local:7687" | "Enabled" | "Available" | ["system", "neo4j"] |
      | "b3c91592-1806-41d0-9355-8fc6ba236043" | "server3.neo4j.svc.cluster.local:7687" | "Enabled" | "Available" | ["system", "neo4j"] |
      | "eefd7216-6096-46f5-9c41-a74f79684172" | "server1.neo4j.svc.cluster.local:7687" | "Enabled" | "Available" | ["system", "neo4j"] |
      +---------------------------------------------------------------------------------------------------------------------------------+
  4. Using its name, enable server4 so that it can be used in the cluster:

    ENABLE SERVER "0908819d-238a-473d-9877-5cc406050ea2";
  5. Run SHOW SERVERS; again to verify that server4 is enabled:

    SHOW SERVERS;
    +---------------------------------------------------------------------------------------------------------------------------------+
    | name                                   | address                                | state     | health      | hosting             |
    +---------------------------------------------------------------------------------------------------------------------------------+
    | "0908819d-238a-473d-9877-5cc406050ea2" | "server4.neo4j.svc.cluster.local:7687" | "Enabled" | "Available" | ["system"]          |
    | "19817354-5cd1-4579-8c45-8b897808fdb4" | "server2.neo4j.svc.cluster.local:7687" | "Enabled" | "Available" | ["system", "neo4j"] |
    | "b3c91592-1806-41d0-9355-8fc6ba236043" | "server3.neo4j.svc.cluster.local:7687" | "Enabled" | "Available" | ["system", "neo4j"] |
    | "eefd7216-6096-46f5-9c41-a74f79684172" | "server1.neo4j.svc.cluster.local:7687" | "Enabled" | "Available" | ["system", "neo4j"] |
    +---------------------------------------------------------------------------------------------------------------------------------+

    Notice in the output that although server4 is now enabled, it is not hosting the neo4j database. You need to change the database topology to also use the new server.

  6. Alter the neo4j database topology so that it is hosted on three primary servers and one secondary server:

    ALTER DATABASE neo4j SET TOPOLOGY 3 PRIMARIES 1 SECONDARY;
  7. Now run the SHOW DATABASES; command to verify the new topology:

    SHOW DATABASES;
    +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | name     | type       | aliases | access       | address                                | role        | writer | requestedStatus | currentStatus | statusMessage | default | home  | constituents |
    +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | "neo4j"  | "standard" | []      | "read-write" | "server2.neo4j.svc.cluster.local:7687" | "primary"   | TRUE   | "online"        | "online"      | ""            | TRUE    | TRUE  | []           |
    | "neo4j"  | "standard" | []      | "read-write" | "server4.neo4j.svc.cluster.local:7687" | "secondary" | FALSE  | "online"        | "online"      | ""            | TRUE    | TRUE  | []           |
    | "neo4j"  | "standard" | []      | "read-write" | "server3.neo4j.svc.cluster.local:7687" | "primary"   | FALSE  | "online"        | "online"      | ""            | TRUE    | TRUE  | []           |
    | "neo4j"  | "standard" | []      | "read-write" | "server1.neo4j.svc.cluster.local:7687" | "primary"   | FALSE  | "online"        | "online"      | ""            | TRUE    | TRUE  | []           |
    | "system" | "system"   | []      | "read-write" | "server2.neo4j.svc.cluster.local:7687" | "primary"   | FALSE  | "online"        | "online"      | ""            | FALSE   | FALSE | []           |
    | "system" | "system"   | []      | "read-write" | "server4.neo4j.svc.cluster.local:7687" | "primary"   | FALSE  | "online"        | "online"      | ""            | FALSE   | FALSE | []           |
    | "system" | "system"   | []      | "read-write" | "server3.neo4j.svc.cluster.local:7687" | "primary"   | TRUE   | "online"        | "online"      | ""            | FALSE   | FALSE | []           |
    | "system" | "system"   | []      | "read-write" | "server1.neo4j.svc.cluster.local:7687" | "primary"   | FALSE  | "online"        | "online"      | ""            | FALSE   | FALSE | []           |
    +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

    Note that server4 now hosts the neo4j database with the secondary role.

Use custom images from private registries

From v4.4.4, Neo4j supports using custom images from private registries by adding new or existing imagePullSecrets.

Add an existing imagePullSecret

You can use an existing imagePullSecret for your Neo4j deployment by specifying its name in the values.yaml file. The Neo4j Helm chart checks if the provided imagePullSecret exists in the Kubernetes cluster and uses it. If a Secret with the given name does not exist in the cluster, the Neo4j Helm chart throws an error.

For more information on how to set your Docker credentials in the cluster as a Secret, see the Kubernetes documentation.
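
For example, a Secret of type docker-registry can be created with kubectl (a sketch; replace the credentials with your own):

kubectl create secret docker-registry mysecret \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>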

Using an already existing Secret mysecret
# values.yaml
# Override image settings in Neo4j pod
image:
  imagePullPolicy: IfNotPresent
  # set a customImage if you want to use your own docker image
  customImage: demo_neo4j_image:v1

  #imagePullSecrets list
  imagePullSecrets:
      - "mysecret"

Create and add a new imagePullSecret

Alternatively, you can create a new imagePullSecret for your Neo4j deployment by defining an equivalent imageCredential in the values.yaml file.

The Neo4j Helm chart creates a Secret with the given name and uses it as an imagePullSecret to pull the custom image defined. The following example shows how to define a private docker registry imageCredential with the name mysecret.

Creating and adding mysecret as the imagePullSecret to the cluster.
# values.yaml
# Override image settings in Neo4j pod
image:
  imagePullPolicy: IfNotPresent
  # set a customImage if you want to use your own docker image
  customImage: custom_neo4j_image:v1

  #imagePullSecrets list
  imagePullSecrets:
      - "mysecret"

  #imageCredentials list for which a Secret of type docker-registry is created automatically using the details provided
  # password and name are compulsory fields for an imageCredential; without them, the Helm chart throws an error
  # registry, username, and email are optional fields, but either the username or the email must be provided
  # an imageCredential name must be included in the imagePullSecrets list, otherwise the imageCredential is ignored and no Secret is created
  # if a Secret already exists, you do not need to define an imageCredential; just add the existing Secret name to the imagePullSecrets list
  # and it will be used as an imagePullSecret
  imageCredentials:
    - registry: "https://index.docker.io/v1/"
      username: "myusername"
      password: "mypassword"
      email: "myusername@example.com"
      name: "mysecret"

Assign Neo4j pods to specific nodes

The Neo4j Helm chart supports assigning your Neo4j pods to specific nodes using nodeSelector labels.

You specify the nodeSelector labels in the values.yaml file.

If there is no node with the given labels, the Helm chart will throw an error.

nodeSelector labels in values.yaml
#nodeSelector labels
#Ensure the respective labels are present on one of the cluster nodes, or else the Helm chart will throw an error.
nodeSelector:
   nodeNumber: one
   name: node1
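
The labels themselves are ordinary Kubernetes node labels. A sketch of applying the labels used above to a node (replace <node-name> with one of your cluster nodes):

kubectl label nodes <node-name> nodeNumber=one name=node1
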
nodeSelector along with the --dry-run flag

When running helm install --dry-run or helm template --dry-run with nodeSelector, you must disable the lookup function of nodeSelector by setting disableLookups: true. Otherwise, the commands will fail.

You can either add the following to the values.yaml file:

disableLookups: true

or, use --set disableLookups=true as part of the command, for example:

helm template standalone neo4j --set disableLookups=true .. ... .. --dry-run