Access the Neo4j cluster from inside Kubernetes
By default, client-side routing is used for accessing a Neo4j cluster from inside Kubernetes.
1. Access the Neo4j cluster using a specific member
You run `cypher-shell` in a new pod and point it directly to one of the core members.
- Run `cypher-shell` in a pod to access, for example, `core-3`:

  ```shell
  kubectl run --rm -it --image "neo4j:4.4.8-enterprise" cypher-shell \
    -- cypher-shell -a "neo4j://core-3.default.svc.cluster.local:7687" -u neo4j -p "my-password"
  ```

  ```
  If you don't see a command prompt, try pressing enter.
  Connected to Neo4j using Bolt protocol version 4.4 at neo4j://core-3.default.svc.cluster.local:7687 as user neo4j.
  Type :help for a list of available commands or :exit to exit the shell.
  Note that Cypher queries must end with a semicolon.
  ```
- Run the Cypher command `SHOW DATABASES` to verify that all cluster members are online:

  ```cypher
  SHOW DATABASES;
  ```

  ```
  +------------------------------------------------------------------------------------------------------------------------------------------------------+
  | name     | aliases | access       | address                                 | role           | requestedStatus | currentStatus | error | default | home  |
  +------------------------------------------------------------------------------------------------------------------------------------------------------+
  | "neo4j"  | []      | "read-write" | "core-1.default.svc.cluster.local:7687" | "follower"     | "online"        | "online"      | ""    | TRUE    | TRUE  |
  | "neo4j"  | []      | "read-write" | "core-3.default.svc.cluster.local:7687" | "follower"     | "online"        | "online"      | ""    | TRUE    | TRUE  |
  | "neo4j"  | []      | "read-write" | "core-2.default.svc.cluster.local:7687" | "leader"       | "online"        | "online"      | ""    | TRUE    | TRUE  |
  | "neo4j"  | []      | "read-write" | "rr-1.default.svc.cluster.local:7687"   | "read_replica" | "online"        | "online"      | ""    | TRUE    | TRUE  |
  | "system" | []      | "read-write" | "core-1.default.svc.cluster.local:7687" | "leader"       | "online"        | "online"      | ""    | FALSE   | FALSE |
  | "system" | []      | "read-write" | "core-3.default.svc.cluster.local:7687" | "follower"     | "online"        | "online"      | ""    | FALSE   | FALSE |
  | "system" | []      | "read-write" | "core-2.default.svc.cluster.local:7687" | "follower"     | "online"        | "online"      | ""    | FALSE   | FALSE |
  | "system" | []      | "read-write" | "rr-1.default.svc.cluster.local:7687"   | "read_replica" | "online"        | "online"      | ""    | FALSE   | FALSE |
  +------------------------------------------------------------------------------------------------------------------------------------------------------+

  8 rows
  ready to start consuming query after 27 ms, results consumed after another 243 ms
  ```
- Exit `cypher-shell`. Exiting `cypher-shell` automatically deletes the pod created to run it:

  ```cypher
  :exit;
  ```

  ```
  Bye!
  Session ended, resume using 'kubectl attach cypher-shell -c cypher-shell -i -t' command when the pod is running
  pod "cypher-shell" deleted
  ```
2. Access the Neo4j cluster using a headless service
To allow an application running inside Kubernetes to access the Neo4j cluster without using a specific core for bootstrapping, you need to install the neo4j-cluster-headless-service Helm chart. It creates a Kubernetes Service with a DNS entry that includes all the Neo4j cores. You can use this DNS entry to bootstrap the drivers connecting to the cluster, as shown in the image:
A headless service is the Kubernetes term for a Service that has no ClusterIP. For more information, see the official Kubernetes documentation.
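The single DNS name provided by the headless service is all a driver needs to bootstrap routing. As a minimal sketch, an application could derive the routing URI from the service name and namespace like this (the service name `headless-neo4j` is the one created by the Helm release below; the driver call in the comment assumes the official `neo4j` Python package is installed, which is not shown in this guide):

```python
# Sketch: derive the Bolt routing URI for a Kubernetes service DNS name.
def bolt_uri(service: str, namespace: str, port: int = 7687) -> str:
    """Return a neo4j:// routing URI for a Service inside the cluster."""
    return f"neo4j://{service}.{namespace}.svc.cluster.local:{port}"

uri = bolt_uri("headless-neo4j", "default")
print(uri)  # neo4j://headless-neo4j.default.svc.cluster.local:7687

# An application pod would then bootstrap routing from this single DNS name,
# for example with the official Python driver (pip install neo4j):
#   from neo4j import GraphDatabase
#   driver = GraphDatabase.driver(uri, auth=("neo4j", "my-password"))
```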
- Install the headless service using the release name `headless`, the neo4j/neo4j-cluster-headless-service Helm chart, and the name of your cluster as the value of the `neo4j.name` parameter. Alternatively, you can create a values.yaml file with all the configurations for the service. To see which options are configurable on the neo4j/neo4j-cluster-headless-service Helm chart, use `helm show values neo4j/neo4j-cluster-headless-service`.

  ```shell
  helm install headless neo4j/neo4j-cluster-headless-service --set neo4j.name=my-cluster
  ```

  ```
  NAME: headless
  LAST DEPLOYED: Fri Nov 5 16:58:54 2021
  NAMESPACE: default
  STATUS: deployed
  REVISION: 1
  TEST SUITE: None
  NOTES:
  Thank you for installing neo4j-cluster-headless-service.

  Your release "headless" has been installed in namespace "default".

  Once rollout is complete you can connect to your Neo4j cluster using
  "neo4j://headless-neo4j.default.svc.cluster.local:7687".

  Try:

    $ kubectl run --rm -it --image "neo4j:4.4.8-enterprise" cypher-shell \
        -- cypher-shell -a "neo4j://headless-neo4j.default.svc.cluster.local:7687"

  Graphs are everywhere!
  ```
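If you prefer the values.yaml approach mentioned above, a minimal sketch would set only the one value this guide overrides; run `helm show values neo4j/neo4j-cluster-headless-service` for the full list of configurable options:

```yaml
# values.yaml -- minimal configuration for neo4j/neo4j-cluster-headless-service.
# neo4j.name must match the name of the target Neo4j cluster.
neo4j:
  name: "my-cluster"
```

Installing with `helm install headless neo4j/neo4j-cluster-headless-service -f values.yaml` is then equivalent to the `--set` form shown above.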
- Check that the `headless` service is available:

  ```shell
  kubectl get services | grep head
  ```

  ```
  NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
  headless-neo4j   ClusterIP   None         <none>        7474/TCP,7473/TCP,7687/TCP   3m22s
  ```
- Use `kubectl describe service` to see the service details:

  ```shell
  kubectl describe service headless-neo4j
  ```

  ```
  Name:              headless-neo4j
  Namespace:         default
  Labels:            app=my-cluster
                     app.kubernetes.io/managed-by=Helm
                     helm.neo4j.com/neo4j.name=my-cluster
                     helm.neo4j.com/service=neo4j
  Annotations:       cloud.google.com/neg: {"ingress":true}
                     meta.helm.sh/release-name: headless
                     meta.helm.sh/release-namespace: default
  Selector:          app=my-cluster,helm.neo4j.com/dbms.mode=CORE,helm.neo4j.com/neo4j.loadbalancer=include,helm.neo4j.com/neo4j.name=my-cluster
  Type:              ClusterIP
  IP Families:
  IP:                None
  IPs:               None
  Port:              http  7474/TCP
  TargetPort:        7474/TCP
  Endpoints:         10.108.1.19:7474,10.108.2.14:7474,10.108.4.37:7474
  Port:              https  7473/TCP
  TargetPort:        7473/TCP
  Endpoints:         10.108.1.19:7473,10.108.2.14:7473,10.108.4.37:7473
  Port:              tcp-bolt  7687/TCP
  TargetPort:        7687/TCP
  Endpoints:         10.108.1.19:7687,10.108.2.14:7687,10.108.4.37:7687
  Session Affinity:  None
  Events:
  ```

  You should see three endpoints for each port in the service. These are the IP addresses of the three Neo4j core servers. Applications running in Kubernetes contact these endpoints to bootstrap their drivers, and the drivers use them to obtain the initial routing table.
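The reason these endpoints are reachable through one name is that a headless service has no ClusterIP, so cluster DNS answers with one A record per ready pod. A minimal Python sketch of that resolution (the in-cluster hostname in the comment is only resolvable from inside Kubernetes):

```python
import socket

def service_endpoints(host: str, port: int = 7687) -> list:
    """Resolve every A record behind a DNS name.

    For a headless service, cluster DNS returns one record per ready pod,
    so inside the cluster this yields the same core IPs listed under
    "Endpoints" in the kubectl describe output.
    """
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# From a pod in the cluster (resolves only in-cluster):
# service_endpoints("headless-neo4j.default.svc.cluster.local")
```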
- Run `cypher-shell` in another pod and connect to the cluster nodes via the headless service:

  ```shell
  kubectl run --rm -it --image "neo4j:4.4.8-enterprise" cypher-shell \
    -- cypher-shell -a "neo4j://headless-neo4j.default.svc.cluster.local:7687" -u neo4j -p "my-password"
  ```

  ```
  If you don't see a command prompt, try pressing enter.
  Connected to Neo4j using Bolt protocol version 4.4 at neo4j://headless-neo4j.default.svc.cluster.local:7687 as user neo4j.
  Type :help for a list of available commands or :exit to exit the shell.
  Note that Cypher queries must end with a semicolon.
  ```
- Run the Cypher command `SHOW DATABASES` to verify that all cluster members are online:

  ```cypher
  SHOW DATABASES;
  ```

  ```
  +------------------------------------------------------------------------------------------------------------------------------------------------------+
  | name     | aliases | access       | address                                 | role           | requestedStatus | currentStatus | error | default | home  |
  +------------------------------------------------------------------------------------------------------------------------------------------------------+
  | "neo4j"  | []      | "read-write" | "core-1.default.svc.cluster.local:7687" | "follower"     | "online"        | "online"      | ""    | TRUE    | TRUE  |
  | "neo4j"  | []      | "read-write" | "core-3.default.svc.cluster.local:7687" | "follower"     | "online"        | "online"      | ""    | TRUE    | TRUE  |
  | "neo4j"  | []      | "read-write" | "core-2.default.svc.cluster.local:7687" | "leader"       | "online"        | "online"      | ""    | TRUE    | TRUE  |
  | "neo4j"  | []      | "read-write" | "rr-1.default.svc.cluster.local:7687"   | "read_replica" | "online"        | "online"      | ""    | TRUE    | TRUE  |
  | "system" | []      | "read-write" | "core-1.default.svc.cluster.local:7687" | "leader"       | "online"        | "online"      | ""    | FALSE   | FALSE |
  | "system" | []      | "read-write" | "core-3.default.svc.cluster.local:7687" | "follower"     | "online"        | "online"      | ""    | FALSE   | FALSE |
  | "system" | []      | "read-write" | "core-2.default.svc.cluster.local:7687" | "follower"     | "online"        | "online"      | ""    | FALSE   | FALSE |
  | "system" | []      | "read-write" | "rr-1.default.svc.cluster.local:7687"   | "read_replica" | "online"        | "online"      | ""    | FALSE   | FALSE |
  +------------------------------------------------------------------------------------------------------------------------------------------------------+

  8 rows
  ready to start consuming query after 4 ms, results consumed after another 42 ms
  ```
- Exit `cypher-shell`. Exiting `cypher-shell` automatically deletes the pod created to run it:

  ```cypher
  :exit;
  ```

  ```
  Bye!
  Session ended, resume using 'kubectl attach cypher-shell -c cypher-shell -i -t' command when the pod is running
  pod "cypher-shell" deleted
  ```