This article walks through an easy way to set up a Neo4j cluster on your local Mac using k3d and K3s. k3d is a lightweight wrapper for running K3s (Rancher Labs' minimal Kubernetes distribution) in Docker. The only major prerequisite is Docker Desktop or Podman, plus Homebrew if you want the easiest install path. If you choose Podman, a few extra setup steps are needed (see the sketch after the install commands below). To work with the Kubernetes cluster, you will also need kubectl and Helm. If you have Homebrew installed, enter the following commands:
brew install kubectl
brew install helm
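If you go with Podman instead of Docker Desktop, the rough idea is to create and start a Podman machine and then point Docker-compatible tooling such as k3d at its socket. This is only a sketch, assuming a recent Podman (4.x or later) on macOS; the socket path is illustrative, so use the one reported by podman machine start on your machine:
# Assumption: Podman 4.x+ with podman machine support on macOS
podman machine init --cpus 4 --memory 8192
podman machine start
# Point k3d (and other Docker-compatible tools) at the Podman socket.
# The path below is only an example; copy the one printed by `podman machine start`.
export DOCKER_HOST="unix://$HOME/.local/share/containers/podman/machine/podman.sock"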
Assuming you have Docker Desktop installed, let’s go ahead and install k3d:
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
Once k3d is installed, it’s time to create a 3-node K3s cluster:
k3d cluster create mycluster --servers 3
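k3d also adds the new cluster to your kubeconfig and switches the current context to it, which you can confirm with:
k3d cluster list
kubectl config current-context   # should print k3d-mycluster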
Check that the cluster is up and all three nodes are ready:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-mycluster-server-0 Ready control-plane,etcd,master 5d23h v1.29.6+k3s2
k3d-mycluster-server-1 Ready control-plane,etcd,master 5d23h v1.29.6+k3s2
k3d-mycluster-server-2 Ready control-plane,etcd,master 5d23h v1.29.6+k3s2
Time to configure the Neo4j Helm Charts repo:
helm repo add neo4j https://helm.neo4j.com/neo4j
helm repo update
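If you want to see which chart versions are available before installing, you can search the repo (the --versions flag lists every published version):
helm search repo neo4j/neo4j
helm search repo neo4j/neo4j --versions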
Create a directory to hold your YAML files for Neo4j, then create three files, one for each of the three servers that will form your cluster (you can get away with creating just one if all servers use the same values). Keeping three files is handy if you later want to, for example, change the resources of one server and re-apply only its configuration. Replace {your_password} with a password of your choice. This guide uses the Enterprise edition of Neo4j; check out the documentation for more options.
server-1.values.yaml:
neo4j:
  name: "my-cluster"
  minimumClusterSize: 3
  resources:
    cpu: "1"
    memory: "4Gi"
  password: "{your_password}"
  edition: "enterprise"
  acceptLicenseAgreement: "yes"
volumes:
  data:
    mode: "defaultStorageClass"
server-2.values.yaml:
neo4j:
  name: "my-cluster"
  minimumClusterSize: 3
  resources:
    cpu: "1"
    memory: "4Gi"
  password: "{your_password}"
  edition: "enterprise"
  acceptLicenseAgreement: "yes"
volumes:
  data:
    mode: "defaultStorageClass"
server-3.values.yaml:
neo4j:
  name: "my-cluster"
  minimumClusterSize: 3
  resources:
    cpu: "1"
    memory: "4Gi"
  password: "{your_password}"
  edition: "enterprise"
  acceptLicenseAgreement: "yes"
volumes:
  data:
    mode: "defaultStorageClass"
Let’s create a namespace for the Neo4j deployment, set the current context, and deploy the servers:
kubectl create namespace neo4j
kubectl config set-context --current --namespace=neo4j
helm install server-1 neo4j/neo4j --namespace neo4j -f server-1.values.yaml
helm install server-2 neo4j/neo4j --namespace neo4j -f server-2.values.yaml
helm install server-3 neo4j/neo4j --namespace neo4j -f server-3.values.yaml
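Helm tracks each server as a separate release, so you can check their status at any time with:
helm list --namespace neo4j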
Let’s verify the deployment and wait for all of the servers to reach the Running state:
kubectl get pods
NAME READY STATUS RESTARTS AGE
server-1-0 1/1 Running 0 119m
server-2-0 1/1 Running 0 120m
server-3-0 1/1 Running 0 120m
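Instead of polling manually, you can also block until every pod is ready. The 10-minute timeout here is an arbitrary choice; the first start can take a while because the Neo4j image has to be pulled:
kubectl wait --for=condition=Ready pod --all --namespace neo4j --timeout=600s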
Let’s look at the services:
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-cluster-lb-neo4j LoadBalancer 10.43.130.252 172.18.0.3,172.18.0.4,172.18.0.5 7474:31783/TCP,7473:32438/TCP,7687:31385/TCP 120m
server-1 ClusterIP 10.43.246.132 <none> 7687/TCP,7474/TCP 120m
server-1-admin ClusterIP 10.43.30.63 <none> 6362/TCP,7687/TCP,7474/TCP 120m
server-1-internals ClusterIP 10.43.165.133 <none> 6362/TCP,7687/TCP,7474/TCP,7688/TCP,5000/TCP,7000/TCP,6000/TCP 120m
server-2 ClusterIP 10.43.238.155 <none> 7687/TCP,7474/TCP 120m
server-2-admin ClusterIP 10.43.18.249 <none> 6362/TCP,7687/TCP,7474/TCP 120m
server-2-internals ClusterIP 10.43.40.4 <none> 6362/TCP,7687/TCP,7474/TCP,7688/TCP,5000/TCP,7000/TCP,6000/TCP 120m
server-3 ClusterIP 10.43.115.238 <none> 7687/TCP,7474/TCP 119m
server-3-admin ClusterIP 10.43.216.253 <none> 6362/TCP,7687/TCP,7474/TCP 119m
server-3-internals ClusterIP 10.43.74.152 <none> 6362/TCP,7687/TCP,7474/TCP,7688/TCP,5000/TCP,7000/TCP,6000/TCP 119m
As you can see, there is a my-cluster-lb-neo4j LoadBalancer service. It has external IPs, but since the Kubernetes cluster runs inside Docker, those IPs are not directly reachable from your Mac. The service exposes ports 7474 (HTTP), 7473 (HTTPS), and 7687 (Bolt). To access the service locally, we need to port-forward those ports:
kubectl port-forward service/my-cluster-lb-neo4j 7474:7474 7473:7473 7687:7687
Forwarding from 127.0.0.1:7474 -> 7474
Forwarding from [::1]:7474 -> 7474
Forwarding from 127.0.0.1:7473 -> 7473
Forwarding from [::1]:7473 -> 7473
Forwarding from 127.0.0.1:7687 -> 7687
Forwarding from [::1]:7687 -> 7687
Now let’s install Cypher Shell and use it to verify connectivity. Replace {your_password} with the password you chose previously:
brew install cypher-shell
cypher-shell -a neo4j://localhost -u neo4j -p {your_password}
Connected to Neo4j using Bolt protocol version 5.4 at neo4j://localhost:7687 as user neo4j.
Type :help for a list of available commands or :exit to exit the shell.
Note that Cypher queries must end with a semicolon.
neo4j@neo4j> SHOW SERVERS;
+----------------------------------------------------------------------------------------------------------------------------------+
| name | address | state | health | hosting |
+----------------------------------------------------------------------------------------------------------------------------------+
| "977aa970-c814-4fbb-9c68-eaf0b3609454" | "server-1.neo4j.svc.cluster.local:7687" | "Enabled" | "Available" | ["neo4j", "system"] |
| "a1851bd4-e886-4cdc-bd81-99b336b9cf18" | "server-3.neo4j.svc.cluster.local:7687" | "Enabled" | "Available" | ["neo4j", "system"] |
| "aa2f1fdb-94e2-4789-b265-3b34eca36cc4" | "server-2.neo4j.svc.cluster.local:7687" | "Enabled" | "Available" | ["neo4j", "system"] |
+----------------------------------------------------------------------------------------------------------------------------------+
3 rows
ready to start consuming query after 7 ms, results consumed after another 2 ms
neo4j@neo4j> SHOW DATABASES;
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| name | type | aliases | access | address | role | writer | requestedStatus | currentStatus | statusMessage | default | home | constituents |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "neo4j" | "standard" | [] | "read-write" | "server-3.neo4j.svc.cluster.local:7687" | "primary" | TRUE | "online" | "online" | "" | TRUE | TRUE | [] |
| "neo4j" | "standard" | [] | "read-write" | "server-1.neo4j.svc.cluster.local:7687" | "primary" | FALSE | "online" | "online" | "" | TRUE | TRUE | [] |
| "neo4j" | "standard" | [] | "read-write" | "server-2.neo4j.svc.cluster.local:7687" | "primary" | FALSE | "online" | "online" | "" | TRUE | TRUE | [] |
| "system" | "system" | [] | "read-write" | "server-1.neo4j.svc.cluster.local:7687" | "primary" | TRUE | "online" | "online" | "" | FALSE | FALSE | [] |
| "system" | "system" | [] | "read-write" | "server-3.neo4j.svc.cluster.local:7687" | "primary" | FALSE | "online" | "online" | "" | FALSE | FALSE | [] |
| "system" | "system" | [] | "read-write" | "server-2.neo4j.svc.cluster.local:7687" | "primary" | FALSE | "online" | "online" | "" | FALSE | FALSE | [] |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
6 rows
ready to start consuming query after 20 ms, results consumed after another 8 ms
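As a final smoke test, you can write a node through the load balancer and read it back. The label and property here are arbitrary examples:
neo4j@neo4j> CREATE (n:HelloCluster {createdAt: datetime()}) RETURN n.createdAt;
neo4j@neo4j> MATCH (n:HelloCluster) RETURN count(n);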
And that’s it! You now have a running Neo4j cluster on a lightweight Kubernetes cluster on your Mac. This works fine on ARM (Apple Silicon) Macs, too! The Neo4j docs have a Kubernetes section covering further configuration options such as SSL and plugins.
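When you are done experimenting, cleanup is just as quick. The commands below remove the three Helm releases, the namespace, and the whole k3d cluster:
helm uninstall server-1 server-2 server-3 --namespace neo4j
kubectl delete namespace neo4j
k3d cluster delete mycluster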