Access the Neo4j cluster from inside Kubernetes
By default, client-side routing is used for accessing a Neo4j cluster from inside Kubernetes.
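Client-side routing means the driver fetches a routing table from whichever server it bootstraps against and then dispatches reads and writes itself. The mechanism can be sketched as a conceptual toy in Python (this is not the actual driver implementation; real drivers obtain and refresh the table via the Bolt protocol, and the addresses are placeholders):

```python
import itertools

class RoutingTable:
    """Toy model of the routing table a Neo4j driver fetches at bootstrap.

    A real driver requests this table over Bolt and refreshes it when it
    expires; here it is just two static member lists."""

    def __init__(self, writers, readers):
        self._writers = itertools.cycle(writers)
        self._readers = itertools.cycle(readers)

    def acquire(self, mode):
        # Round-robin over the member list appropriate for the access mode.
        pool = self._writers if mode == "WRITE" else self._readers
        return next(pool)

table = RoutingTable(
    writers=["server-2.neo4j.svc.cluster.local:7687"],
    readers=["server-1.neo4j.svc.cluster.local:7687",
             "server-3.neo4j.svc.cluster.local:7687"],
)
print(table.acquire("WRITE"))  # writes always go to the single writer
print(table.acquire("READ"))   # reads rotate over the readers
```

Because the routing decision happens in the client, any cluster member can serve as the bootstrap address, which is why the two approaches below (a specific member, or a headless service) both work.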
Access the Neo4j cluster using a specific member
You run cypher-shell in a new pod and point it directly to one of the servers.
- Run cypher-shell in a pod to access, for example, server-3:

  kubectl run --rm -it --image "neo4j:5.26.0-enterprise" cypher-shell \
    -- cypher-shell -a "neo4j://server-3.default.svc.cluster.local:7687" -u neo4j -p "my-password"

  If you don't see a command prompt, try pressing enter.
  Connected to Neo4j using Bolt protocol version 5 at neo4j://server-3.default.svc.cluster.local:7687 as user neo4j.
  Type :help for a list of available commands or :exit to exit the shell.
  Note that Cypher queries must end with a semicolon.
- Run the Cypher command SHOW DATABASES to verify that all cluster servers are online:

  SHOW DATABASES;

  +----------+------------+---------+--------------+-----------------------------------------+-----------+--------+-----------------+---------------+---------------+---------+-------+--------------+
  | name     | type       | aliases | access       | address                                 | role      | writer | requestedStatus | currentStatus | statusMessage | default | home  | constituents |
  +----------+------------+---------+--------------+-----------------------------------------+-----------+--------+-----------------+---------------+---------------+---------+-------+--------------+
  | "neo4j"  | "standard" | []      | "read-write" | "server-2.neo4j.svc.cluster.local:7687" | "primary" | TRUE   | "online"        | "online"      | ""            | TRUE    | TRUE  | []           |
  | "neo4j"  | "standard" | []      | "read-write" | "server-1.neo4j.svc.cluster.local:7687" | "primary" | FALSE  | "online"        | "online"      | ""            | TRUE    | TRUE  | []           |
  | "neo4j"  | "standard" | []      | "read-write" | "server-3.neo4j.svc.cluster.local:7687" | "primary" | FALSE  | "online"        | "online"      | ""            | TRUE    | TRUE  | []           |
  | "system" | "system"   | []      | "read-write" | "server-2.neo4j.svc.cluster.local:7687" | "primary" | FALSE  | "online"        | "online"      | ""            | FALSE   | FALSE | []           |
  | "system" | "system"   | []      | "read-write" | "server-1.neo4j.svc.cluster.local:7687" | "primary" | TRUE   | "online"        | "online"      | ""            | FALSE   | FALSE | []           |
  | "system" | "system"   | []      | "read-write" | "server-3.neo4j.svc.cluster.local:7687" | "primary" | FALSE  | "online"        | "online"      | ""            | FALSE   | FALSE | []           |
  +----------+------------+---------+--------------+-----------------------------------------+-----------+--------+-----------------+---------------+---------------+---------+-------+--------------+

  6 rows
  ready to start consuming query after 27 ms, results consumed after another 243 ms
- Run the Cypher command SHOW SERVERS to verify that all cluster servers are enabled:

  SHOW SERVERS;

  +----------------------------------------+-----------------------------------------+-----------+-------------+---------------------+
  | name                                   | address                                 | state     | health      | hosting             |
  +----------------------------------------+-----------------------------------------+-----------+-------------+---------------------+
  | "ad5c3cf1-541a-44f8-a19b-28bc36030914" | "server-3.neo4j.svc.cluster.local:7687" | "Enabled" | "Available" | ["system", "neo4j"] |
  | "cbdebc59-64c2-4542-a041-24a1f051e64f" | "server-1.neo4j.svc.cluster.local:7687" | "Enabled" | "Available" | ["system", "neo4j"] |
  | "f37e98a7-15ec-4dc4-a6bf-df9e418a7488" | "server-2.neo4j.svc.cluster.local:7687" | "Enabled" | "Available" | ["system", "neo4j"] |
  +----------------------------------------+-----------------------------------------+-----------+-------------+---------------------+

  3 rows
  ready to start consuming query after 27 ms, results consumed after another 363 ms
- Exit cypher-shell. Exiting cypher-shell automatically deletes the pod created to run it:

  :exit;

  Bye!
  Session ended, resume using 'kubectl attach cypher-shell -c cypher-shell -i -t' command when the pod is running
  pod "cypher-shell" deleted
Access the Neo4j cluster using headless service
To allow an application running inside Kubernetes to access the Neo4j cluster without bootstrapping against a specific server, install the neo4j/neo4j-headless-service Helm chart. It creates a Kubernetes Service with a DNS entry that resolves to all the Neo4j servers. You can use that DNS entry to bootstrap drivers connecting to the cluster.
A headless service is the Kubernetes term for a Service without a ClusterIP. For more information, see the official Kubernetes documentation.
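For illustration, a headless Service is declared simply by setting clusterIP to None. A hand-written equivalent of what the chart generates might look roughly like this (the name, namespace, and label selector here are assumptions for the example cluster used in this section, not the chart's exact output):

```yaml
# Minimal hand-written headless Service, approximating what the
# neo4j-headless-service chart generates. Names and labels are
# illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: my-cluster-headless
  namespace: neo4j
spec:
  clusterIP: None          # "headless": DNS resolves to the pod IPs directly
  selector:
    app: my-cluster        # assumed label; the chart sets its own selectors
  ports:
    - name: tcp-bolt
      port: 7687
      targetPort: 7687
    - name: http
      port: 7474
      targetPort: 7474
```

In practice you should install the chart rather than write this by hand, since the chart applies the selectors and annotations the Neo4j pods expect.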
- Install the headless service using the release name headless, the neo4j/neo4j-headless-service Helm chart, and the name of your cluster as the value of the neo4j.name parameter. Alternatively, you can create a values.yaml file with all the configurations for the service. To see what options are configurable on the neo4j/neo4j-headless-service Helm chart, use helm show values neo4j/neo4j-headless-service.

  helm install headless neo4j/neo4j-headless-service --namespace neo4j --set neo4j.name=my-cluster

  NAME: headless
  LAST DEPLOYED: Wed Oct 26 13:11:14 2022
  NAMESPACE: neo4j
  STATUS: deployed
  REVISION: 1
  TEST SUITE: None
  NOTES:
  Thank you for installing neo4j-cluster-headless-service.
  Your release "headless" has been installed in namespace "neo4j".
  Once rollout is complete you can connect to your Neo4j cluster using "neo4j://headless-neo4j.neo4j.svc.cluster.local:7687". Try:

  $ kubectl run --rm -it --namespace "neo4j" --image "neo4j:5.26.0-enterprise" cypher-shell \
      -- cypher-shell -a "neo4j://headless-neo4j.neo4j.svc.cluster.local:7687"

  Graphs are everywhere!
- Check that the headless service is available:

  export NEO4J_NAME=my-cluster
  kubectl get service ${NEO4J_NAME}-headless

  NAME                  TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
  my-cluster-headless   ClusterIP   None         <none>        7474/TCP,7687/TCP   113s
- Use kubectl describe service to see the service details:

  kubectl describe service ${NEO4J_NAME}-headless

  Name:              my-cluster-headless
  Namespace:         neo4j
  Labels:            app=my-cluster
                     app.kubernetes.io/managed-by=Helm
                     helm.neo4j.com/neo4j.name=my-cluster
  Annotations:       cloud.google.com/neg: {"ingress":true}
                     meta.helm.sh/release-name: headless
                     meta.helm.sh/release-namespace: neo4j
  Selector:          app=my-cluster,helm.neo4j.com/neo4j.loadbalancer=include
  Type:              ClusterIP
  IP Family Policy:  SingleStack
  IP Families:       IPv4
  IP:                None
  IPs:               None
  Port:              http  7474/TCP
  TargetPort:        7474/TCP
  Endpoints:         10.24.0.131:7474,10.24.1.3:7474,10.24.1.67:7474
  Port:              https  7473/TCP
  TargetPort:        7473/TCP
  Endpoints:         10.24.0.131:7473,10.24.1.3:7473,10.24.1.67:7473
  Port:              tcp-bolt  7687/TCP
  TargetPort:        7687/TCP
  Endpoints:         10.24.0.131:7687,10.24.1.3:7687,10.24.1.67:7687
  Session Affinity:  None
  Events:
You should see three endpoints for each port in the service; these are the IP addresses of the three Neo4j servers. Drivers used by applications running in Kubernetes contact these endpoints to bootstrap, using them to obtain the initial routing table.
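Because the service has no ClusterIP, a DNS lookup of its name returns all the pod IPs at once, which is what makes it suitable for bootstrapping. The mechanism can be seen with a short Python sketch (inside the cluster you would pass the headless service's DNS name; "localhost" is used here only so the snippet runs anywhere):

```python
import socket

def resolve_endpoints(host, port=7687):
    """Return the distinct IP addresses behind a DNS name.

    Against a headless Service such as
    my-cluster-headless.neo4j.svc.cluster.local, this yields one address
    per Neo4j pod; a driver can bootstrap against any of them."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    # Each entry's last element is the sockaddr tuple; [0] is the address.
    return sorted({info[4][0] for info in infos})

# Inside the cluster: resolve_endpoints("my-cluster-headless.neo4j.svc.cluster.local")
print(resolve_endpoints("localhost"))
```

Note that a driver connecting through the headless service still receives the routing table from the cluster, so it ends up talking to the servers' own advertised addresses, not to the service name.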
- Run cypher-shell in another pod and connect to the cluster servers via the headless service:

  kubectl run --rm -it --namespace "neo4j" --image "neo4j:5.26.0-enterprise" cypher-shell \
    -- cypher-shell -a "neo4j://my-cluster-headless.neo4j.svc.cluster.local:7687" -u neo4j -p "my-password"

  If you don't see a command prompt, try pressing enter.
  Connected to Neo4j using Bolt protocol version 5 at neo4j://headless-neo4j.default.svc.cluster.local:7687 as user neo4j.
  Type :help for a list of available commands or :exit to exit the shell.
  Note that Cypher queries must end with a semicolon.
- Run the Cypher command SHOW DATABASES to verify that all cluster servers are online:

  SHOW DATABASES;

  +----------+------------+---------+--------------+-----------------------------------------+-----------+--------+-----------------+---------------+---------------+---------+-------+--------------+
  | name     | type       | aliases | access       | address                                 | role      | writer | requestedStatus | currentStatus | statusMessage | default | home  | constituents |
  +----------+------------+---------+--------------+-----------------------------------------+-----------+--------+-----------------+---------------+---------------+---------+-------+--------------+
  | "neo4j"  | "standard" | []      | "read-write" | "server-3.neo4j.svc.cluster.local:7687" | "primary" | TRUE   | "online"        | "online"      | ""            | TRUE    | TRUE  | []           |
  | "neo4j"  | "standard" | []      | "read-write" | "server-2.neo4j.svc.cluster.local:7687" | "primary" | FALSE  | "online"        | "online"      | ""            | TRUE    | TRUE  | []           |
  | "neo4j"  | "standard" | []      | "read-write" | "server-1.neo4j.svc.cluster.local:7687" | "primary" | FALSE  | "online"        | "online"      | ""            | TRUE    | TRUE  | []           |
  | "system" | "system"   | []      | "read-write" | "server-3.neo4j.svc.cluster.local:7687" | "primary" | FALSE  | "online"        | "online"      | ""            | FALSE   | FALSE | []           |
  | "system" | "system"   | []      | "read-write" | "server-2.neo4j.svc.cluster.local:7687" | "primary" | FALSE  | "online"        | "online"      | ""            | FALSE   | FALSE | []           |
  | "system" | "system"   | []      | "read-write" | "server-1.neo4j.svc.cluster.local:7687" | "primary" | TRUE   | "online"        | "online"      | ""            | FALSE   | FALSE | []           |
  +----------+------------+---------+--------------+-----------------------------------------+-----------+--------+-----------------+---------------+---------------+---------+-------+--------------+

  6 rows
  ready to start consuming query after 4 ms, results consumed after another 42 ms
- Exit cypher-shell. Exiting cypher-shell automatically deletes the pod created to run it:

  :exit;

  Bye!
  Session ended, resume using 'kubectl attach cypher-shell -c cypher-shell -i -t' command when the pod is running
  pod "cypher-shell" deleted