Shared Persistent Storage in Kubernetes with Amazon EFS (Copy/Paste Runbook)
When multiple Kubernetes pods need access to the same files, container-local storage is not enough. Pods restart, reschedule, and roll during deployments, so anything written to a container's local filesystem can disappear at any time.
This is a reusable, implementation-first runbook for creating shared persistent storage with:
- a dynamic StorageClass
- a PersistentVolumeClaim (PVC)
- shared mounts across multiple Deployments
All examples are intentionally generic and safe to reuse across environments.
Why this pattern is used
An EFS-backed PVC solves the three problems teams usually hit first:
- Durability across pod lifecycle events - data survives restarts, rollouts, and node replacement.
- True multi-pod shared access - EFS supports ReadWriteMany, so multiple replicas can read and write the same directory.
- Repeatable infrastructure-as-code - the whole pattern can be templated in Helm and promoted across environments.
What we are building
You will create:
- A Helm values section for EFS storage settings
- A StorageClass template (dynamic provisioning via the EFS CSI driver)
- A PVC template bound to that StorageClass
- Deployment templates mounting that claim into multiple pods
- Validation checks to prove cross-pod file sharing works
Prerequisites
Before applying any YAML, confirm:
- the EFS CSI driver is installed in your cluster
- an EFS filesystem already exists (for example fs-0123456789abcdef0)
- worker nodes can reach EFS over NFS (port 2049)
- required IAM and network permissions are in place
Step 1: Add storage values (full copy/paste)
Add this block to your environment values file (for example values-qa.yaml or values-prod.yaml):
namespace: my-application

storageClass:
  enabled: true
  name: myapp-qa-efs-storage
  annotations:
    helm.sh/resource-policy: keep
  provisioner: efs.csi.aws.com
  parameters:
    provisioningMode: efs-ap
    fileSystemId: fs-0123456789abcdef0
    directoryPerms: "0777"
    name: "myapp-qa-access-point"
    uid: "33"
    gid: "33"

persistentVolumeClaim:
  name: myapp-qa-shared-public-pvc
  annotations:
    helm.sh/resource-policy: keep
  accessModes:
    - ReadWriteMany
  storageClassName: myapp-qa-efs-storage
  storage: 50Gi
Values you should always change
- namespace
- storageClass.name
- storageClass.parameters.fileSystemId
- storageClass.parameters.name
- persistentVolumeClaim.name
- persistentVolumeClaim.storageClassName
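Applied to a hypothetical production environment, the list above translates into a small overlay file. The filesystem ID and names below are placeholders, not real values:

```yaml
# values-prod.yaml — only the per-environment identifiers change;
# everything else inherits the QA baseline shown above.
namespace: my-application-prod
storageClass:
  name: myapp-prod-efs-storage
  parameters:
    fileSystemId: fs-0fedcba9876543210   # placeholder prod filesystem ID
    name: "myapp-prod-access-point"
persistentVolumeClaim:
  name: myapp-prod-shared-public-pvc
  storageClassName: myapp-prod-efs-storage
```

Keeping the overlay this small makes environment promotion a values-file swap rather than a template change.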
Step 2: Add the StorageClass Helm template (full copy/paste)
Create templates/storageClass.yaml in your Helm chart:
{{- if .Values.storageClass.enabled }}
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ .Values.storageClass.name }}
  {{- if .Values.storageClass.annotations }}
  annotations:
    {{- range $key, $value := .Values.storageClass.annotations }}
    {{ $key }}: {{ $value | quote }}
    {{- end }}
  {{- end }}
provisioner: {{ .Values.storageClass.provisioner }}
parameters:
  provisioningMode: {{ .Values.storageClass.parameters.provisioningMode }}
  fileSystemId: {{ .Values.storageClass.parameters.fileSystemId }}
  directoryPerms: {{ .Values.storageClass.parameters.directoryPerms | quote }}
  basePath: {{ .Values.storageClass.parameters.name | quote }}
  uid: {{ .Values.storageClass.parameters.uid | quote }}
  gid: {{ .Values.storageClass.parameters.gid | quote }}
allowVolumeExpansion: true
{{- end }}
Why this template matters
- keeps storage optional per environment via the enabled flag
- supports dynamic provisioning through the EFS CSI driver
- keeps ownership/permissions explicit and version-controlled
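For orientation, rendering this template with the Step 1 values (for example via helm template) should produce a manifest along these lines; your names and filesystem ID will differ:

```yaml
# Expected rendered StorageClass for the QA values shown in Step 1.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: myapp-qa-efs-storage
  annotations:
    helm.sh/resource-policy: "keep"
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0
  directoryPerms: "0777"
  basePath: "myapp-qa-access-point"   # from storageClass.parameters.name
  uid: "33"
  gid: "33"
allowVolumeExpansion: true
```

Note that storageClass.parameters.name feeds the basePath parameter, so each environment gets its own directory (access point root) on the shared filesystem.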
Step 3: Add the PVC Helm template (full copy/paste)
Create templates/pvc.yaml:
{{- if .Values.persistentVolumeClaim }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.persistentVolumeClaim.name }}
  namespace: {{ .Values.namespace }}
  {{- if .Values.persistentVolumeClaim.annotations }}
  annotations:
    {{- range $key, $value := .Values.persistentVolumeClaim.annotations }}
    {{ $key }}: {{ $value | quote }}
    {{- end }}
  {{- end }}
spec:
  accessModes:
    {{- range .Values.persistentVolumeClaim.accessModes }}
    - {{ . }}
    {{- end }}
  storageClassName: {{ .Values.persistentVolumeClaim.storageClassName }}
  resources:
    requests:
      storage: {{ .Values.persistentVolumeClaim.storage }}
{{- end }}
Why this template matters
- asks Kubernetes for a shared claim using ReadWriteMany
- binds directly to the StorageClass from Step 2
- gives a stable claim name that multiple Deployments can reference
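Rendered with the Step 1 values, the claim should come out roughly as follows (shown for orientation; your names will differ):

```yaml
# Expected rendered PVC for the QA values shown in Step 1.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-qa-shared-public-pvc
  namespace: my-application
  annotations:
    helm.sh/resource-policy: "keep"
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: myapp-qa-efs-storage
  resources:
    requests:
      storage: 50Gi
```

The storage request is required by the PVC API, although EFS itself is elastic and does not enforce the 50Gi figure as a hard quota.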
Step 4: Mount the same PVC in multiple Deployments (full copy/paste)
Below are two complete deployment examples (nginx and backend) mounting the same claim.
The backend deployment is intentionally runtime-agnostic, so it can represent Node.js, Go, PHP, or any other server process.
Example A: nginx deployment template
Create or update templates/nginx-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: {{ .Values.namespace }}
spec:
  replicas: {{ .Values.nginx.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.nginx.container.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.nginx.container.name }}
    spec:
      initContainers:
        - name: seed-shared-public
          image: "{{ .Values.nginx.image.repository }}"
          command: ["/bin/sh", "-c"]
          args:
            - 'cp -r /app/public/. /mnt/shared-public/'
          volumeMounts:
            - name: shared-public-storage
              mountPath: /mnt/shared-public
      containers:
        - name: {{ .Values.nginx.container.name }}
          image: "{{ .Values.nginx.image.repository }}"
          ports:
            - containerPort: {{ .Values.nginx.container.port }}
          volumeMounts:
            - name: shared-public-storage
              mountPath: /app/public
      volumes:
        - name: shared-public-storage
          persistentVolumeClaim:
            claimName: {{ .Values.persistentVolumeClaim.name }}
Example B: backend deployment template
Create or update templates/backend-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: {{ .Values.namespace }}
spec:
  replicas: {{ .Values.backend.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.backend.container.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.backend.container.name }}
    spec:
      initContainers:
        - name: seed-shared-public
          image: "{{ .Values.backend.image.repository }}"
          command: ["/bin/sh", "-c"]
          args:
            - 'cp -r /app/public/. /mnt/shared-public/'
          volumeMounts:
            - name: shared-public-storage
              mountPath: /mnt/shared-public
      containers:
        - name: {{ .Values.backend.container.name }}
          image: "{{ .Values.backend.image.repository }}"
          ports:
            - containerPort: {{ .Values.backend.container.port }}
          volumeMounts:
            - name: shared-public-storage
              mountPath: /app/public
      volumes:
        - name: shared-public-storage
          persistentVolumeClaim:
            claimName: {{ .Values.persistentVolumeClaim.name }}
Add matching values for these templates
nginx:
  replicas: 2
  container:
    name: nginx
    port: 80
  image:
    repository: <your-container-registry>/myapp-nginx:<tag>

backend:
  replicas: 2
  container:
    name: backend
    port: 3000
  image:
    repository: <your-container-registry>/myapp-backend:<tag>
Step 5: Deploy and validate (full command set)
Deploy with Helm
helm upgrade --install myapp-release ./deployment/helm \
  --namespace my-application \
  --create-namespace \
  -f ./deployment/values-qa.yaml
Validate storage resources
kubectl get storageclass
kubectl get pvc -n my-application
kubectl describe pvc myapp-qa-shared-public-pvc -n my-application
kubectl get pods -n my-application
Validate cross-pod sharing
- Open a shell in one pod and write a file:
kubectl exec -it deploy/nginx -n my-application -- sh
echo "efs-shared-test" > /app/public/efs-test.txt
exit
- Open a shell in another deployment and read the same file:
kubectl exec -it deploy/backend -n my-application -- sh
cat /app/public/efs-test.txt
exit
If the second command prints efs-shared-test, shared storage is working.
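The same write/read check can also run unattended (for example in a CI pipeline) as a one-shot Job mounting the claim from a third workload. This is a sketch; busybox is an assumed utility image and the Job name is arbitrary:

```yaml
# One-shot validation Job: writes a marker file to the shared volume,
# then reads it back. Succeeds only if the PVC mounts and is writable.
apiVersion: batch/v1
kind: Job
metadata:
  name: efs-share-check
  namespace: my-application
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: check
          image: busybox:1.36   # assumed utility image
          command: ["/bin/sh", "-c"]
          args:
            - "echo efs-shared-test > /mnt/shared-public/efs-job-test.txt && cat /mnt/shared-public/efs-job-test.txt"
          volumeMounts:
            - name: shared-public-storage
              mountPath: /mnt/shared-public
      volumes:
        - name: shared-public-storage
          persistentVolumeClaim:
            claimName: myapp-qa-shared-public-pvc
```

Check the result with kubectl wait --for=condition=complete job/efs-share-check -n my-application, then delete the Job and the marker file.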
Troubleshooting quick-reference
PVC stuck in Pending
Check:
- the EFS CSI driver is installed and healthy
- fileSystemId is correct
- worker nodes can route to the EFS mount targets
- IAM and security group permissions allow NFS (port 2049)
Pods running but files not shared
Check:
- both Deployments use the same claimName
- both containers mount the same path (/app/public in the examples)
- the init container copy path and mount path are correct
Helm upgrade replaced names unexpectedly
Prevent this by:
- keeping stable names for the StorageClass and PVC
- parameterizing names clearly per environment
- avoiding ad-hoc renaming after the first successful deployment
Reusable skeleton for future projects
For any new project, only replace placeholders:
- my-application
- myapp-release
- myapp-qa-efs-storage
- myapp-qa-shared-public-pvc
- fs-0123456789abcdef0
- <your-container-registry>/...
Everything else can remain the same as a baseline implementation.
Final takeaway
The StorageClass + PVC + shared volumeMounts pattern is a durable and repeatable standard for shared files in Kubernetes.
Once templated, teams can reuse it across services and environments with only a small set of value substitutions.
