Create a persistent volume for each cluster member

You create a persistent disk for each cluster member to be used for its data volume mount. For more information on persistent volumes and mounts, see Volume mounts and persistent volumes. In this guide, you attach the cloud provider disks directly as volumes (for example, gcePersistentDisk, azureDisk, and awsElasticBlockStore), without creating Kubernetes PersistentVolumes.

The following examples use core-disk-1, core-disk-2, core-disk-3, and rr-disk-1 as names for the persistent disks.

Create a GCP persistent disk for each cluster member (both cores and read replicas) by executing the following commands. These are normal GCP persistent disks and not Kubernetes-specific resources:

gcloud compute disks create --size 128GB --type pd-ssd "core-disk-1"
gcloud compute disks create --size 128GB --type pd-ssd "core-disk-2"
gcloud compute disks create --size 128GB --type pd-ssd "core-disk-3"
gcloud compute disks create --size 128GB --type pd-ssd "rr-disk-1"

Each command should return the name, zone, size, type, and status of the corresponding disk, for example:

Created [https://www.googleapis.com/compute/v1/projects/neo4j-helm/zones/europe-west6-c/disks/core-disk-1].
NAME         ZONE            SIZE_GB  TYPE    STATUS
core-disk-1  europe-west6-c  128      pd-ssd  READY

New disks are unformatted. You must format and mount a disk before it
can be used. You can find instructions on how to do this at:

https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting

Created [https://www.googleapis.com/compute/v1/projects/neo4j-helm/zones/europe-west6-c/disks/core-disk-2].
NAME         ZONE            SIZE_GB  TYPE    STATUS
core-disk-2  europe-west6-c  128      pd-ssd  READY

New disks are unformatted. You must format and mount a disk before it
can be used. You can find instructions on how to do this at:

https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting

Created [https://www.googleapis.com/compute/v1/projects/neo4j-helm/zones/europe-west6-c/disks/core-disk-3].
NAME         ZONE            SIZE_GB  TYPE    STATUS
core-disk-3  europe-west6-c  128      pd-ssd  READY

New disks are unformatted. You must format and mount a disk before it
can be used. You can find instructions on how to do this at:

https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting

Created [https://www.googleapis.com/compute/v1/projects/neo4j-helm/zones/europe-west6-c/disks/rr-disk-1].
NAME       ZONE            SIZE_GB  TYPE    STATUS
rr-disk-1  europe-west6-c  128      pd-ssd  READY

New disks are unformatted. You must format and mount a disk before it
can be used. You can find instructions on how to do this at:

https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting

The message "New disks are unformatted. You must format and mount a disk before it can be used." is not a cause for concern, and you do not need to format the disks yourself. If necessary, a disk is formatted automatically when it is used in Kubernetes.
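If you prefer to script this step, the four disks can also be created and verified with a short bash loop. This is a sketch only; it assumes your gcloud configuration already defines a default project and zone (the example output above uses europe-west6-c):

for disk in core-disk-1 core-disk-2 core-disk-3 rr-disk-1; do
  gcloud compute disks create "${disk}" --size 128GB --type pd-ssd
done

# List the new disks and check that their status is READY
gcloud compute disks list --filter="name~'-disk-'"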

Create an AWS EBS volume for each cluster member (both cores and read replicas) using the following commands. These are normal AWS EBS volumes and not Kubernetes-specific resources:

aws ec2 create-volume --availability-zone=${AWS_AVAILABILITY_ZONE} --size=64 --volume-type=gp3 --tag-specifications 'ResourceType=volume,Tags=[{Key=volume,Value=core-disk-1}]'

aws ec2 create-volume --availability-zone=${AWS_AVAILABILITY_ZONE} --size=64 --volume-type=gp3 --tag-specifications 'ResourceType=volume,Tags=[{Key=volume,Value=core-disk-2}]'

aws ec2 create-volume --availability-zone=${AWS_AVAILABILITY_ZONE} --size=64 --volume-type=gp3 --tag-specifications 'ResourceType=volume,Tags=[{Key=volume,Value=core-disk-3}]'

aws ec2 create-volume --availability-zone=${AWS_AVAILABILITY_ZONE} --size=64 --volume-type=gp3 --tag-specifications 'ResourceType=volume,Tags=[{Key=volume,Value=rr-disk-1}]'
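As a convenience, the same four volumes can be created in a bash loop instead of issuing the commands one by one. This is a sketch only; it assumes AWS_AVAILABILITY_ZONE is already set in your shell, as above:

for name in core-disk-1 core-disk-2 core-disk-3 rr-disk-1; do
  aws ec2 create-volume \
    --availability-zone "${AWS_AVAILABILITY_ZONE}" \
    --size 64 \
    --volume-type gp3 \
    --tag-specifications "ResourceType=volume,Tags=[{Key=volume,Value=${name}}]"
done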

Fetch the IDs of the volumes that were just created:

aws ec2 describe-volumes --filters Name=tag:volume,Values=core-disk-1,core-disk-2,core-disk-3,rr-disk-1 --query "Volumes[*].{ID:VolumeId}" --output text

Make a note of all the volume IDs, for example:

vol-081665be3681e384c
vol-000ce1338f430d173
vol-03b099df30ca4f0d3
vol-04f5ddffea2d7e092
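If you want each ID paired with its volume name rather than a bare list, you can also query the tags one at a time. This is a sketch, assuming the volumes were tagged as shown above:

for name in core-disk-1 core-disk-2 core-disk-3 rr-disk-1; do
  id=$(aws ec2 describe-volumes \
         --filters "Name=tag:volume,Values=${name}" \
         --query 'Volumes[0].VolumeId' \
         --output text)
  echo "${name} = ${id}"
done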

Create an Azure managed disk for each cluster member (both cores and read replicas) using the following commands. These are normal Azure managed disks and not Kubernetes-specific resources. The commands assume that a default resource group is configured for the Azure CLI; otherwise, add --resource-group to each command:

az disk create --name "core-disk-1" --size-gb "64" --sku Premium_LRS --max-shares 1

az disk create --name "core-disk-2" --size-gb "64" --sku Premium_LRS --max-shares 1

az disk create --name "core-disk-3" --size-gb "64" --sku Premium_LRS --max-shares 1

az disk create --name "rr-disk-1" --size-gb "64" --sku Premium_LRS --max-shares 1

Fetch the ID of each disk that was just created, for example:

az disk show --name "core-disk-1" --query id

Repeat the command for each disk and make a note of all IDs, for example:

"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myneo4jrg/providers/Microsoft.Compute/disks/core-disk-1"