# Persistent Storage

This example builds on Secret Injection by mounting filesystem storage into the container. The platform supports four kinds of volume mounts on a Deployment: a freshly-provisioned PersistentVolumeClaim, a reference to an existing PersistentVolumeClaim, an ephemeral `emptyDir`, and ConfigMap/Secret files.
## What You'll Learn

In this lesson, you'll learn:

- How to mount a `PersistentVolumeClaim` (PVC) — both provisioned by the platform and pre-existing
- When to use `emptyDir` for ephemeral scratch space
- How to mount ConfigMap and Secret data as files (not env vars)
- The singleton-mode guardrails that protect ReadWriteOnce volumes
- When to reach for a `StatefulSet` instead
## What Gets Created

In addition to resources from Secret Injection, the platform creates:

- One `PersistentVolumeClaim` per `volumeMounts[]` entry that has `source.size` (provision mode)
- A `volumeMount` on the container plus a matching `volume` on the pod spec for every entry

Reference-mode mounts (`source.claimName`) and `emptyDir` mounts (no `source`) don't create new PVCs — they wire up volumes only.
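As a rough sketch, a provision-mode entry renders into the Deployment as a `volumeMount`/`volume` pair wired to the generated claim. This is illustrative, not verbatim controller output:

```yaml
# Hedged sketch of the rendered Deployment wiring (standard Kubernetes pod spec fields)
spec:
  template:
    spec:
      containers:
        - name: demo-http-echo
          volumeMounts:
            - name: data
              mountPath: /var/lib/app
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: demo-http-echo-data   # <app>-<volume-name>
```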
## What Changed

Added the `spec.deployment.volumeMounts` section:
```yaml
apiVersion: meta.p6m.dev/v1alpha1
kind: PlatformApplication
metadata:
  name: demo-http-echo
  namespace: demo-http-echo
  labels:
    p6m.dev/app: demo-http-echo
spec:
  config:
    LOG_LEVEL: debug
    HTTP_PORT: "8080"
    ENVIRONMENT: dev
    APP_NAME: demo-http-echo
  secrets:
    - name: super-secret-data
  deployment:
    image: mendhak/http-https-echo:31
    ports:
      - port: 8080
        protocol: http
    readinessProbe:
      port: 8080
      path: /
    # NEW
    volumeMounts:
      - name: data
        mountPath: /var/lib/app
        source:
          size: 10Gi
      - name: cache
        mountPath: /var/cache/app
        # No source = emptyDir (ephemeral, lives with the pod)
    # NEW
  networking:
    ingress:
      enabled: true
      path: /
      gateway: public-open
```
This adds a 10Gi persistent volume mounted at `/var/lib/app` and an ephemeral scratch directory at `/var/cache/app`.
## Volume Source Types

The shape of `source` decides what kind of volume the platform wires up:
| Shape | Volume Type | Lifetime | Use For |
|---|---|---|---|
| `source: {size: "10Gi"}` | New PersistentVolumeClaim | Survives pod restarts and PA deletion | Databases, caches that should persist, anything you'd lose sleep over |
| `source: {claimName: "shared-data"}` | Existing PersistentVolumeClaim | Whatever you already set up | Sharing storage across applications, restoring from a snapshot |
| `source: {configMap: {name: "..."}}` | ConfigMap projected as files | Lives with the pod | Mounting configs that need to be on disk (TLS configs, structured data) |
| `source: {secret: {secretName: "..."}}` | Secret projected as files | Lives with the pod | TLS keys, certificates, anything that wants a file path instead of an env var |
| omitted | `emptyDir` | Wiped when the pod is deleted | Scratch space, build caches, IPC between sidecars |
You only need to set the fields for the source type you want. Mixing fields (for example, both `size` and `claimName`) is rejected at admission.
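The ConfigMap and Secret shapes from the table aren't shown in the walkthrough manifest. A hedged sketch of what they might look like — the ConfigMap and Secret names here are hypothetical:

```yaml
volumeMounts:
  - name: tls
    mountPath: /etc/tls
    source:
      secret:
        secretName: demo-tls      # hypothetical Secret holding tls.crt / tls.key
  - name: app-config
    mountPath: /etc/app
    source:
      configMap:
        name: demo-config         # hypothetical ConfigMap with config files
```

Following standard Kubernetes projection behavior, each key in the ConfigMap or Secret appears as a file under the mount path.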
## Provisioning a New PVC

```yaml
volumeMounts:
  - name: data
    mountPath: /var/lib/app
    source:
      size: 50Gi
      storageClassName: gp3         # Optional, defaults to the cluster default
      accessModes: [ReadWriteOnce]  # Optional, defaults to [ReadWriteOnce]
```
The platform creates a PVC named `<app>-<volume-name>` (so `demo-http-echo-data`). The PVC has no owner reference to the PlatformApplication, so deleting the app does not delete the PVC — your data is safe from accidental teardown.
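For the 50Gi example above, the provisioned claim would look roughly like this plain Kubernetes PVC — a sketch of the expected shape, not verbatim controller output; note the absence of an `ownerReferences` entry pointing at the PlatformApplication:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-http-echo-data   # <app>-<volume-name>
  namespace: demo-http-echo
spec:
  storageClassName: gp3
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 50Gi
```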
## Referencing an Existing PVC

```yaml
volumeMounts:
  - name: data
    mountPath: /var/lib/app
    source:
      claimName: my-existing-pvc
```
The platform mounts `my-existing-pvc` without creating or modifying it. Useful when you've restored a PVC from a snapshot, or when several applications share a ReadWriteMany volume.
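If the claim doesn't exist yet, you'd create it out of band before referencing it. For example, a hypothetical shared RWX claim — the StorageClass must actually support ReadWriteMany:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-existing-pvc
  namespace: demo-http-echo
spec:
  accessModes: [ReadWriteMany]   # required if several apps will share it
  resources:
    requests:
      storage: 50Gi
  # storageClassName: efs-sc     # hypothetical RWX-capable class
```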
## emptyDir for Ephemeral Storage

```yaml
volumeMounts:
  - name: cache
    mountPath: /var/cache/app
    # source omitted on purpose
```
The pod gets a fresh empty directory at `/var/cache/app`. Anything written there is lost the moment the pod is replaced. Cheaper than a PVC and zero blast radius if it fills up.
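On the pod spec this presumably renders as a standard Kubernetes `emptyDir` volume, roughly:

```yaml
# Illustrative rendering of the source-less mount on the pod spec
volumes:
  - name: cache
    emptyDir: {}
```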
## Singleton Mode (RWO PVCs)

When a Deployment mounts a ReadWriteOnce PVC, the platform automatically switches into singleton mode to prevent attach storms. Without these guardrails, two pods would fight over the volume during a rolling update and the rollout would deadlock.

In singleton mode the platform forces:

- `replicas: 1`
- `strategy: Recreate` (terminate the old pod, then start the new one)
- HPA disabled

You don't configure any of this — it kicks in whenever a Deployment has at least one RWO PVC volume mount. RWX PVCs (`accessModes: [ReadWriteMany]`) skip the override since multiple pods can attach simultaneously.
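In Deployment terms, the forced settings correspond to:

```yaml
# What singleton mode effectively pins on the Deployment (illustrative)
spec:
  replicas: 1
  strategy:
    type: Recreate   # no RollingUpdate overlap, so the RWO volume detaches cleanly
```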
## Hard-Blocked Combinations

Admission rejects these at apply time, before the resource ever reaches the controller:

- `spec.autoscaling` set on a `Deployment` with an RWO PVC. The HPA would scale up, the second pod would never attach, and your rollout would hang. Use a `StatefulSet` if you need replicas.
- A `volumeMount` with both `size` and `claimName` set. Pick one.
- A `volumeMount` with neither `source.size` nor `source.claimName` but with other PVC-only fields like `storageClassName`. Either remove the extra fields (you wanted `emptyDir`) or add a `size`/`claimName` (you wanted a PVC).
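For instance, this mount would be rejected at admission because it mixes provision mode and reference mode:

```yaml
# Invalid: size and claimName are mutually exclusive
volumeMounts:
  - name: data
    mountPath: /var/lib/app
    source:
      size: 10Gi
      claimName: my-existing-pvc   # pick one, not both
```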
## Deploy Steps

### ArgoCD

ArgoCD automatically syncs your PlatformApplication after the Platform Dispatch Action that updates your .platform repository is run.

Use the Kinds filter to locate the new PersistentVolumeClaim alongside your existing resources.

- Check out our ArgoCD Cheat Sheet for tips on interacting with ArgoCD.
- For more information on setting up ArgoCD for Platform Applications, see the ArgoCD Deployment Tutorial.

### kubectl
Apply the updated PlatformApplication:

```shell
kubectl apply -f application.yaml
```

Verify the PVC was created and bound:

```shell
kubectl get pvc -n demo-http-echo
# NAME                  STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
# demo-http-echo-data   Bound    pvc-abc123...   10Gi       RWO            gp3            30s
```
Check the volume is mounted in the pod:

```shell
POD=$(kubectl get pods -n demo-http-echo -l p6m.dev/app=demo-http-echo -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n demo-http-echo $POD -- df -h /var/lib/app
# Filesystem      Size  Used  Avail  Use%  Mounted on
# /dev/nvme1n1    9.8G  ...   ...    ...   /var/lib/app
```
Write a marker file to prove persistence across pod restarts:

```shell
kubectl exec -n demo-http-echo $POD -- sh -c 'echo "hello from $(hostname) at $(date)" > /var/lib/app/hello.txt'
kubectl delete pod -n demo-http-echo $POD

# Once the new pod is ready, exec in and read the file:
NEW_POD=$(kubectl get pods -n demo-http-echo -l p6m.dev/app=demo-http-echo -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n demo-http-echo $NEW_POD -- cat /var/lib/app/hello.txt
# hello from demo-http-echo-... at ...
```
The file survives because it lives on the PVC, not the pod's filesystem.
## Common Observations

**Why does my Deployment only have one replica now?**
You're in singleton mode — see Singleton Mode above. Once you mount an RWO PVC, the platform forces `replicas: 1` and `strategy: Recreate` so the old pod releases the volume before the new one tries to attach. If you want multiple replicas, drop the PVC, switch the volume to ReadWriteMany, or move to a StatefulSet.
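If the storage backend supports it, switching the volume to ReadWriteMany lifts the singleton override. A sketch, assuming an RWX-capable StorageClass is available in the cluster:

```yaml
volumeMounts:
  - name: data
    mountPath: /var/lib/app
    source:
      size: 10Gi
      accessModes: [ReadWriteMany]  # multiple pods may attach; no singleton mode
```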
**My PVC didn't get deleted when I deleted the PlatformApplication.**
That's deliberate. PVCs are owned by you, not the operator — deleting the application leaves the volume behind so a re-apply picks the data right back up. To actually drop the storage, run `kubectl delete pvc <name>` explicitly.
**Can I resize a PVC after creation?**
Not through the PlatformApplication. The platform refuses to mutate PVC specs once they exist (resizing depends on the StorageClass supporting it and on the underlying volume driver). If you need to grow a volume, edit the PVC directly and let the storage driver handle the resize.
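Growing the volume by editing the PVC directly might look like this — a sketch assuming the StorageClass sets `allowVolumeExpansion: true` (shrinking is not supported by Kubernetes):

```yaml
# kubectl edit pvc demo-http-echo-data -n demo-http-echo
spec:
  resources:
    requests:
      storage: 20Gi   # raised from 10Gi; the storage driver performs the expansion
```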
**When should I use a StatefulSet instead?**
When you need more than one replica with persistent storage — for example, a database with sharding, or any system where each pod needs its own disk and stable identity. See the StatefulSet Workloads lesson for the full walkthrough.
Want to dive deeper? See Persistent Storage - Details for the full constraint matrix, label-mismatch guard, and verification procedures.
## Next Steps

- StatefulSet Workloads - Per-pod PVCs, ordered rollouts, and multi-replica stateful apps
## Troubleshooting
For common issues and solutions, see the Troubleshooting Guide.
## Cleanup
Check out the Cleanup Instructions from the Basic Deployment lesson to remove all resources created in this walkthrough.
PVCs created by the platform are intentionally not deleted when the PlatformApplication is removed. Drop them explicitly:

```shell
kubectl delete pvc -n demo-http-echo -l p6m.dev/app=demo-http-echo
```
## Related Documentation
- Kubernetes Persistent Volumes - Understanding PVs and PVCs
- Volume Types - All Kubernetes volume sources
- Storage Classes - Provisioner configuration
- Access Modes - RWO, ROX, RWX, RWOP semantics