Creating a Storage Class in DigitalOcean
Introduction
For DigitalOcean, Bunnyshell supports only the ReadWriteMany (network) volume type, not ReadWriteOnce, for the following reasons:
DigitalOcean has a limit of seven volume mounts per Droplet, and this limit cannot be changed.
DigitalOcean's managed Kubernetes cluster supports Persistent Volume Claims (PVCs) only with ReadWriteOnce access.
As a solution, Bunnyshell will create all PVCs using the storage class bns-network-sc. However, you need to create the bns-network-sc storage class and its CSI provisioner yourself, following the instructions below.
The Container Storage Interface (CSI) in Kubernetes is a standard for exposing arbitrary block and file storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes. Read more at the Kubernetes CSI Documentation page.
Prerequisites
Make sure you're connected to the cluster and that the cluster is the current config context. See How to Connect to a DigitalOcean Kubernetes Cluster.
Install Helm. For detailed instructions, visit the Helm documentation.
If you downloaded the Configuration file from DigitalOcean, but did not add it to the ~/.kube directory:
Set the KUBECONFIG env variable:
export KUBECONFIG=<path to k8s config file>
Make sure the variable is set correctly:
stat $KUBECONFIG
Setting the proper context
Starting here, you will work in the terminal. Make sure you're connected to the cluster and that the cluster is the current context. Use the command kubectl config --help to obtain the necessary information.
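For example (the context name below is illustrative):

```sh
# List all contexts in the kubeconfig; the current one is marked with '*'
kubectl config get-contexts

# Switch to the DigitalOcean cluster's context if it is not the current one
kubectl config use-context do-nyc1-my-cluster
```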
Steps to create Disk and Network Volumes
The NFS server that has the volume mounted will be run as a Kubernetes Deployment, and its export will be exposed to the cluster through a Service.
Creating an NFS server
Start by creating a deployment file nfs-deployment.yaml:
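The exact manifest may differ in your setup; the sketch below assumes the commonly used k8s.gcr.io/volume-nfs example image, which exports the /exports directory and must run privileged. The nfs-server-pvc claim backing the export (a DigitalOcean block-storage volume) and the nfs-server name are illustrative:

```yaml
# PVC backed by a DigitalOcean block-storage volume;
# this is the disk the NFS server will export over the network.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-server-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
  labels:
    app: nfs-server
spec:
  replicas: 1
  # Recreate avoids two pods contending for the same RWO volume during updates
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: k8s.gcr.io/volume-nfs:0.8
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true   # required by the in-container NFS server
          volumeMounts:
            - name: storage
              mountPath: /exports
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: nfs-server-pvc
```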
Apply the deployment file:
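```sh
kubectl apply -f nfs-deployment.yaml
```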
This will create the container that exports the mounted volume over NFS.
Creating a Kubernetes service for the NFS deployment
Create a file named service.yaml containing the text below:
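A minimal Service sketch matching the deployment above (the nfs-service name is illustrative; the ports match the NFS, mountd, and rpcbind ports exposed by the server):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nfs-service
spec:
  selector:
    app: nfs-server
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
```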
Apply the file:
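```sh
kubectl apply -f service.yaml
```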
Wait for the service and pod to be created, then check the Status for each one.
To check the pod status, use the command below.
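```sh
kubectl get pods -l app=nfs-server
```

The pod is ready once its STATUS column shows Running.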
To check the service status, use the following command.
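```sh
kubectl get service nfs-service
```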
Use the Helm charts to create the Storage Class
Retrieve the Service Endpoint IP and assign it to the environment variable NFS_SERVICE_ENDPOINT_IP using the command below:
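Assuming the nfs-service name used above, the endpoint (pod) IP can be read from the Endpoints object:

```sh
export NFS_SERVICE_ENDPOINT_IP=$(kubectl get endpoints nfs-service \
  -o jsonpath='{.subsets[0].addresses[0].ip}')
echo $NFS_SERVICE_ENDPOINT_IP
```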
For example, if the command prints 10.244.0.89, that is the NFS server IP now stored in the environment variable NFS_SERVICE_ENDPOINT_IP. It will be used later in the Helm Chart command.
Add the following Helm Chart repository:
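The chart assumed here is the widely used nfs-subdir-external-provisioner maintained by kubernetes-sigs, which provisions PVs as subdirectories of an existing NFS export:

```sh
helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
```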
Use the Helm Chart to create the Storage Class:
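A sketch of the install command, assuming the chart above, the /exports path exported by the NFS server, and the bns-network-sc name that Bunnyshell expects:

```sh
helm install nfs-subdir-external-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=$NFS_SERVICE_ENDPOINT_IP \
  --set nfs.path=/exports \
  --set storageClass.name=bns-network-sc
```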
Wait until the Storage Class is created, then check its status using the following command:
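```sh
kubectl get storageclass bns-network-sc
```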
Testing the Storage Class
1. Create the test.yaml file with the contents below. Later, the file will generate the test PVCs and Pods:
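A sketch of test.yaml that matches the names and access modes checked in the steps below; the busybox writer loop and the /data/out.txt file path are assumptions:

```yaml
# PVC requesting disk-style (ReadWriteOnce) access
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-disk
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: bns-network-sc
  resources:
    requests:
      storage: 1Gi
---
# PVC requesting network (ReadWriteMany) access
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-network
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: bns-network-sc
  resources:
    requests:
      storage: 1Gi
---
# Pod that appends a timestamp to a file on the RWO volume
apiVersion: v1
kind: Pod
metadata:
  name: test-app-disk
spec:
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "while true; do date >> /data/out.txt; sleep 5; done"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc-disk
---
# Pod that appends a timestamp to a file on the RWX volume
apiVersion: v1
kind: Pod
metadata:
  name: test-app-network
spec:
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "while true; do date >> /data/out.txt; sleep 5; done"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc-network
```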
2. Apply the test.yaml file:
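```sh
kubectl apply -f test.yaml
```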
3. Wait until the test-app-network and test-app-disk pods reach the status Running. To check that the pods reached the Running status, run the following command:
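```sh
kubectl get pods test-app-network test-app-disk
```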
4. Check for the presence of two persistent volumes with the following properties, using the command shown after this list:
STORAGECLASS set to bns-network-sc for the 2 PVs created
CLAIM for one of the PVs is set to default/test-pvc-disk and the ACCESS MODES is set to RWO
CLAIM for the other PV is set to default/test-pvc-network and the ACCESS MODES is set to RWX
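```sh
kubectl get pv
```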
5. Verify that the test-app-network pod is writing data to the volume:
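Assuming the writer loop from the test manifest above, the accumulated timestamps can be read back from the pod:

```sh
kubectl exec test-app-network -- cat /data/out.txt
```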
6. Verify that the test-app-disk pod is writing data to the volume:
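```sh
kubectl exec test-app-disk -- cat /data/out.txt
```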
7. If your results are similar to the output displayed above, then you've completed the process successfully and you can delete the test resources. Delete the PVCs and the Pods; this will also cause the PVs to be deleted:
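```sh
kubectl delete -f test.yaml
```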
8. Check if the PVs displayed at step 4 are no longer present.
Checking the first PV
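PV names are generated at provisioning time (for example, pvc-<uuid>), so substitute the first name you noted at step 4; once the volume has been reclaimed, kubectl should report that it is not found:

```sh
kubectl get pv <first-pv-name>
```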
Checking the second PV
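Likewise for the second PV name noted at step 4:

```sh
kubectl get pv <second-pv-name>
```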