# Persistent Storage - Details
This document provides comprehensive information about spec.deployment.volumeMounts, including the singleton-mode override, admission rules, PVC lifecycle, and the label-mismatch guard.
## Volume Source Reference

`spec.deployment.volumeMounts[]` is a list of named filesystem mounts. Every entry has a `name` and a `mountPath`. The optional `source` field decides what backs the mount.
| `source` Field | Required For | Notes |
|---|---|---|
| `size` | Provisioning a new PVC | Must be a Kubernetes quantity, e.g., `10Gi`, `500Mi` |
| `claimName` | Referencing an existing PVC | The PVC must already exist in the namespace |
| `storageClassName` | Optional on provision | Falls back to the cluster default StorageClass |
| `accessModes` | Optional on provision | Defaults to `[ReadWriteOnce]`. Max 4 entries |
| `configMap.name` | Mounting a ConfigMap as files | The ConfigMap must exist in the namespace |
| `secret.secretName` | Mounting a Secret as files | The Secret must exist in the namespace |
| (omitted) | Ephemeral `emptyDir` | Lifetime tied to the pod |
The platform creates one PVC per `volumeMounts[]` entry that has `source.size`. The PVC name is `<application-name>-<volume-name>`, and the PVC carries the `p6m.dev/app: <application-name>` label for identification.
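For orientation, here is what a complete provisioning entry might look like, assembled from the fields above (the volume name `data`, the mount path, and the `gp3` StorageClass are illustrative placeholders):

```yaml
spec:
  deployment:
    volumeMounts:
      - name: data                      # resulting PVC: <application-name>-data
        mountPath: /var/lib/app
        source:
          size: 10Gi                    # provision a new 10Gi PVC
          storageClassName: gp3         # optional; omit to use the cluster default
          accessModes: [ReadWriteOnce]  # optional; this is the default
```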
## Singleton Mode (Automatic Behavior)
When a Deployment mounts at least one ReadWriteOnce PVC, the platform automatically forces:
| Setting | Forced Value | Why |
|---|---|---|
| `replicas` | `1` | Two pods cannot attach an RWO PVC simultaneously |
| `strategy.type` | `Recreate` | The old pod must release the volume before the new pod attaches |
| `autoscaling` | Disabled | An HPA would scale up, and the second pod would never start |
You don't configure this. The override engages whenever the workload kind is Deployment and at least one mounted PVC has access mode `ReadWriteOnce`. StatefulSet workloads skip the override entirely — they get one PVC per pod via `volumeClaimTemplates`, and ordered rolling updates don't deadlock the way a Deployment's surge-then-terminate rollout does.
If your volume is `ReadWriteMany` (e.g., backed by EFS, Azure Files, or an NFS provisioner), the singleton override is skipped and you can run multiple replicas.
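For example, a mount that keeps multiple replicas possible might look like this sketch (the `efs-sc` StorageClass name is an assumption; use whatever RWX-capable class your cluster offers):

```yaml
spec:
  deployment:
    volumeMounts:
      - name: shared-data
        mountPath: /var/lib/shared
        source:
          size: 100Gi
          storageClassName: efs-sc      # assumed RWX-capable StorageClass
          accessModes: [ReadWriteMany]  # singleton override does not engage
```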
## Admission Rules (CEL Validation)
The CRD enforces these rules at apply time. Violating manifests are rejected by the API server before the operator ever sees them.
### 1. `autoscaling` + RWO PVC is rejected
```yaml
spec:
  autoscaling:
    minReplicas: 2
    maxReplicas: 10
  deployment:
    volumeMounts:
      - name: data
        mountPath: /var/lib/app
        source:
          size: 10Gi   # Defaults to ReadWriteOnce
```
The HPA would scale the Deployment up, but the second pod would fail to attach the RWO volume forever. We block the combination at admission rather than letting the rollout deadlock at runtime.
### 2. Specifying both `size` and `claimName` is rejected
```yaml
volumeMounts:
  - name: data
    mountPath: /var/lib/app
    source:
      size: 10Gi
      claimName: existing-pvc   # Pick one
```
`size` means "provision a new PVC"; `claimName` means "use this existing PVC". They're mutually exclusive.
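For intuition, mutual exclusion like this is typically written as a CEL rule in the CRD schema. The snippet below is an illustrative sketch of such a rule, not the platform's actual rule text:

```yaml
# attached to the source object's schema in the CRD
x-kubernetes-validations:
  - rule: "!(has(self.size) && has(self.claimName))"
    message: "source.size and source.claimName are mutually exclusive"
```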
### 3. PVC-only fields without `size` or `claimName` are rejected
```yaml
volumeMounts:
  - name: data
    mountPath: /var/lib/app
    source:
      storageClassName: gp3   # No size, no claimName — what is this?
```
Either you're provisioning (add `size`), referencing (add `claimName`), or you wanted an `emptyDir` (drop the whole `source` block).
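For completeness, the `emptyDir` form from the table above simply omits `source`; the name and path here are placeholders:

```yaml
volumeMounts:
  - name: scratch
    mountPath: /tmp/work
    # no source block: backed by an ephemeral emptyDir, deleted with the pod
```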
### 4. `volumeMounts` is capped

The CRD limits `volumeMounts` to a maximum of 32 entries, and each `accessModes` list to 4 entries, so CEL validation fits within Kubernetes' rule cost budget. Both ceilings are well above any sensible application.
## PVC Lifecycle

### Creation
The platform creates PVCs only if they don't already exist. No server-side apply, no spec rewriting after the fact. This is intentional:
- PVC specs are largely immutable post-binding (you can't change `accessModes` or `storageClassName`)
- Server-side apply on a bound PVC would flap status and confuse the storage provisioner
- Pre-existing PVCs (e.g., restored from snapshot) must be left exactly as-is
### Deletion
PVCs created by the platform have no owner reference to the PlatformApplication. Deleting the application leaves the PVC behind, and the next apply re-binds to the existing volume. To actually free the storage:
```bash
kubectl delete pvc -n <namespace> -l p6m.dev/app=<application-name>
```
This is deliberate — losing data because of an accidental `kubectl delete platformapplication` would be unforgivable.
### Resizing
The platform refuses to mutate an existing PVC's spec. To resize, edit the PVC directly and let the storage driver handle the volume expansion:
```bash
kubectl patch pvc -n <namespace> <pvc-name> --type=merge \
  -p '{"spec":{"resources":{"requests":{"storage":"50Gi"}}}}'
```
This requires the underlying StorageClass to have `allowVolumeExpansion: true` and the volume driver to support online resize.
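You can check the former before patching; this reads a standard StorageClass field:

```bash
kubectl get storageclass <storage-class-name> -o jsonpath='{.allowVolumeExpansion}'
# "true" means in-place expansion is allowed; empty or "false" means it is not
```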
## Label-Mismatch Guard

If a PVC with the right name already exists but does not carry `p6m.dev/app: <application-name>`, the operator refuses to adopt it. The PlatformApplication is marked as not ready, with an error message naming the offending PVC.
This catches the case where an unrelated workload provisioned a PVC with a colliding name and the platform would otherwise silently mount someone else's data into your pod. To resolve:
- Rename your `volumeMount` to avoid the collision, or
- Manually add the `p6m.dev/app: <application-name>` label to the existing PVC if you really do want to adopt it
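If adoption is what you want, the standard labeling command does it (substitute your own names):

```bash
kubectl label pvc -n <namespace> <pvc-name> p6m.dev/app=<application-name>
```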
## ConfigMap and Secret Volume Mounts
ConfigMaps and Secrets can be mounted as files (in addition to the env-var injection covered in the Configuration and Secrets lessons).
```yaml
volumeMounts:
  - name: tls-certs
    mountPath: /etc/tls
    source:
      secret:
        secretName: my-tls-cert
  - name: app-config
    mountPath: /etc/app
    source:
      configMap:
        name: my-app-config
```
The referenced ConfigMap or Secret must already exist in the namespace. The platform doesn't create them as part of the PlatformApplication — that's the job of `spec.config` (env vars only, via a separate ConfigMap) and `spec.secrets` (env vars via ExternalSecret).
Use file-mode mounts when:
- The application reads config from a file path (TLS certificates, structured config files)
- You need a whole directory of related files mounted together
- You want the file to update automatically when the ConfigMap/Secret changes (env-var injection requires a pod restart)
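A quick way to confirm the files landed as expected, using the mount paths from the example above:

```bash
kubectl exec -n <namespace> <pod-name> -- ls -l /etc/tls /etc/app
```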
## When to Use a StatefulSet Instead
Deployment + RWO PVC is fine for many use cases — caches, single-replica databases, anything that's content with one pod and one volume. Reach for a StatefulSet when:
- You need multiple replicas, each with its own disk (sharded databases, distributed message queues)
- You need stable per-pod identity and DNS (some leader-election protocols depend on it)
- You need ordered rollouts where pod N+1 doesn't start until pod N is ready
The StatefulSet workload kind is enabled via `spec.deployment.kind: StatefulSet` and adds `volumeClaimTemplates` and `podManagementPolicy`. See the StatefulSet Workloads lesson and the StatefulSet - Details page for the full walkthrough.
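As a rough sketch of the switch (only the kind selector is shown; the exact StatefulSet fields under `spec.deployment` are covered on the StatefulSet - Details page):

```yaml
spec:
  deployment:
    kind: StatefulSet
    # volumeClaimTemplates and podManagementPolicy become available here;
    # see the StatefulSet - Details page for their schema
```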
## Verification Procedures

### Confirm the PVC was created and bound

```bash
kubectl get pvc -n <namespace>
# STATUS should be "Bound"; an unbound PVC means provisioning failed
```
### Inspect the volume mount on the pod

```bash
POD=$(kubectl get pods -n <namespace> -l p6m.dev/app=<app> -o jsonpath='{.items[0].metadata.name}')
kubectl get pod -n <namespace> $POD -o jsonpath='{.spec.volumes}' | jq
kubectl exec -n <namespace> $POD -- mount | grep <mount-path>
```
### Check the singleton-mode override took effect

```bash
kubectl get deployment -n <namespace> <app> -o jsonpath='{.spec.replicas},{.spec.strategy.type}'
# 1,Recreate
```
### Verify the label-mismatch guard

```bash
kubectl get platformapplication -n <namespace> <app> -o jsonpath='{.status}' | jq
# Look for ResourcePhase: NotReady and a message naming the offending PVC
```
## Related Documentation
- Kubernetes Persistent Volumes - PV/PVC concepts
- Storage Classes - Provisioner configuration
- Volume Expansion - Resizing PVCs
- Access Modes - RWO, ROX, RWX, RWOP semantics
- StatefulSets - When you need per-pod storage