Persistent Storage

This example builds on Secret Injection by mounting filesystem storage into the container. The platform supports four kinds of volume mounts on a Deployment: a freshly-provisioned PersistentVolumeClaim, a reference to an existing PersistentVolumeClaim, an ephemeral emptyDir, and ConfigMap/Secret files.

What You'll Learn

In this lesson, you'll learn:

  • How to mount a PersistentVolumeClaim (PVC) — both provisioned by the platform and pre-existing
  • When to use emptyDir for ephemeral scratch space
  • How to mount ConfigMap and Secret data as files (not env vars)
  • The singleton-mode guardrails that protect ReadWriteOnce volumes
  • When to reach for a StatefulSet instead

What Gets Created

In addition to resources from Secret Injection, the platform creates:

  • One PersistentVolumeClaim per volumeMounts[] entry that has source.size (provision mode)
  • A volumeMount on the container plus a matching volume on the pod spec for every entry

Reference-mode mounts (source.claimName) and emptyDir mounts (no source) don't create new PVCs — they wire up volumes only.

What Changed

Added the spec.deployment.volumeMounts section:

```yaml
apiVersion: meta.p6m.dev/v1alpha1
kind: PlatformApplication
metadata:
  name: demo-http-echo
  namespace: demo-http-echo
  labels:
    p6m.dev/app: demo-http-echo
spec:
  config:
    LOG_LEVEL: debug
    HTTP_PORT: "8080"
    ENVIRONMENT: dev
    APP_NAME: demo-http-echo
  secrets:
    - name: super-secret-data
  deployment:
    image: mendhak/http-https-echo:31
    ports:
      - port: 8080
        protocol: http
    readinessProbe:
      port: 8080
      path: /
    # NEW
    volumeMounts:
      - name: data
        mountPath: /var/lib/app
        source:
          size: 10Gi
      - name: cache
        mountPath: /var/cache/app
        # No source = emptyDir (ephemeral, lives with the pod)
    # NEW
  networking:
    ingress:
      enabled: true
      path: /
      gateway: public-open
```

This adds a 10Gi persistent volume mounted at /var/lib/app and an ephemeral scratch directory at /var/cache/app.

Volume Source Types

The shape of source decides what kind of volume the platform wires up:

| Shape | Volume Type | Lifetime | Use For |
| --- | --- | --- | --- |
| `source: {size: "10Gi"}` | New PersistentVolumeClaim | Survives pod restarts and PA deletion | Databases, caches that should persist, anything you'd lose sleep over |
| `source: {claimName: "shared-data"}` | Existing PersistentVolumeClaim | Whatever you already set up | Sharing storage across applications, restoring from a snapshot |
| `source: {configMap: {name: "..."}}` | ConfigMap projected as files | Lives with the pod | Mounting configs that need to be on disk (TLS configs, structured data) |
| `source: {secret: {secretName: "..."}}` | Secret projected as files | Lives with the pod | TLS keys, certificates, anything that wants a file path instead of an env var |
| omitted | emptyDir | Wiped when the pod is deleted | Scratch space, build caches, IPC between sidecars |

You only need to pick the fields for the source type you want. Mixing fields (for example, both size and claimName) is rejected at admission.
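The file-projection shapes from the table follow the same pattern as the PVC mounts. As a sketch, mounting the `super-secret-data` Secret from the earlier lesson as files (the volume name and mount path here are illustrative):

```yaml
volumeMounts:
  - name: tls-material
    mountPath: /etc/app/tls   # each key in the Secret becomes a file under this path
    source:
      secret:
        secretName: super-secret-data
```

A ConfigMap mount looks the same, with `source.configMap.name` in place of `source.secret.secretName`.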

Provisioning a New PVC

```yaml
volumeMounts:
  - name: data
    mountPath: /var/lib/app
    source:
      size: 50Gi
      storageClassName: gp3        # Optional, defaults to the cluster default
      accessModes: [ReadWriteOnce] # Optional, defaults to [ReadWriteOnce]
```

The platform creates a PVC named <app>-<volume-name> (so demo-http-echo-data). The PVC has no owner reference to the PlatformApplication, so deleting the app does not delete the PVC — your data is safe from accidental teardown.
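Based on the fields above, the claim the platform provisions should be roughly equivalent to this standard Kubernetes PVC (a sketch — exact labels and annotations may differ):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-http-echo-data   # <app>-<volume-name>
  namespace: demo-http-echo
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: gp3
  resources:
    requests:
      storage: 50Gi
```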

Referencing an Existing PVC

```yaml
volumeMounts:
  - name: data
    mountPath: /var/lib/app
    source:
      claimName: my-existing-pvc
```

The platform mounts my-existing-pvc without creating or modifying it. Useful when you've restored a PVC from a snapshot, or when several applications share a ReadWriteMany volume.
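If you're creating the shared claim yourself, a ReadWriteMany PVC might look like this (name and size are illustrative, and your StorageClass must support RWX):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-existing-pvc
  namespace: demo-http-echo
spec:
  accessModes: [ReadWriteMany]
  resources:
    requests:
      storage: 20Gi
```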

emptyDir for Ephemeral Storage

```yaml
volumeMounts:
  - name: cache
    mountPath: /var/cache/app
    # source omitted on purpose
```

The pod gets a fresh empty directory at /var/cache/app. Anything written there is lost the moment the pod is replaced. Cheaper than a PVC and zero blast radius if it fills up.
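On the rendered pod spec this becomes a plain Kubernetes emptyDir volume, roughly (container name illustrative):

```yaml
volumes:
  - name: cache
    emptyDir: {}   # created with the pod, deleted with the pod
containers:
  - name: demo-http-echo
    volumeMounts:
      - name: cache
        mountPath: /var/cache/app
```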

Singleton Mode (RWO PVCs)

When a Deployment mounts a ReadWriteOnce PVC, the platform automatically switches into singleton mode to prevent attach-storms. Without these guardrails, two pods would fight over the volume during a rolling update and the rollout would deadlock.

In singleton mode the platform forces:

  • replicas: 1
  • strategy: Recreate (terminate the old pod, then start the new one)
  • HPA disabled

You don't configure any of this — it kicks in whenever a Deployment has at least one RWO PVC volume mount. RWX PVCs (accessModes: [ReadWriteMany]) skip the override since multiple pods can attach simultaneously.
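The effect on the rendered Deployment is roughly this (a sketch of the fields the platform forces):

```yaml
spec:
  replicas: 1        # forced; HPA is disabled
  strategy:
    type: Recreate   # old pod releases the RWO volume before the new one attaches
```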

Hard-Blocked Combinations

Admission rejects these at apply time, before the resource ever reaches the controller:

  • spec.autoscaling set on a Deployment with an RWO PVC. The HPA would scale up, the second pod would never attach, and your rollout would hang. Use a StatefulSet if you need replicas.
  • A volumeMount with both size and claimName set. Pick one.
  • A volumeMount with neither source.size nor source.claimName but with other PVC-only fields like storageClassName. Either remove the extra fields (you wanted emptyDir) or add a size/claimName (you wanted a PVC).
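The two invalid volumeMount shapes look like this (both fail admission before reaching the controller):

```yaml
# Rejected: both size and claimName set — pick one
volumeMounts:
  - name: data
    mountPath: /var/lib/app
    source:
      size: 10Gi
      claimName: my-existing-pvc

# Rejected: PVC-only field without size or claimName
volumeMounts:
  - name: data
    mountPath: /var/lib/app
    source:
      storageClassName: gp3   # either remove this (you wanted emptyDir) or add size/claimName
```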

Deploy Steps

ArgoCD automatically syncs your PlatformApplication after the Platform Dispatch Action that updates your .platform repository has run.

Use the Kinds filter to locate the new PersistentVolumeClaim alongside your existing resources.

Common Observations

Why does my Deployment only have one replica now?

You're in singleton mode — see Singleton Mode above. Once you mount an RWO PVC, the platform forces replicas: 1 and strategy: Recreate so the old pod releases the volume before the new one tries to attach. If you want multiple replicas, drop the PVC, switch the volume to ReadWriteMany, or move to a StatefulSet.

My PVC didn't get deleted when I deleted the PlatformApplication.

That's deliberate. PVCs are owned by you, not the operator — deleting the application leaves the volume behind so a re-apply picks the data right back up. To actually drop the storage, run kubectl delete pvc <name> explicitly.

Can I resize a PVC after creation?

Not through the PlatformApplication. The platform refuses to mutate PVC specs once they exist (resizing depends on the StorageClass supporting it and on the underlying volume driver). If you need to grow a volume, edit the PVC directly and let the storage driver handle the resize.
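Editing the claim directly amounts to raising spec.resources.requests.storage on the PVC (standard Kubernetes behavior — the 20Gi value is illustrative, and expansion only works if the StorageClass sets allowVolumeExpansion: true):

```yaml
# e.g. via: kubectl edit pvc demo-http-echo-data -n demo-http-echo
spec:
  resources:
    requests:
      storage: 20Gi   # grown from the original 10Gi; shrinking is not supported
```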

When should I use a StatefulSet instead?

When you need more than one replica with persistent storage — for example, a database with sharding, or any system where each pod needs its own disk and stable identity. See the StatefulSet Workloads lesson for the full walkthrough.

tip

Want to dive deeper? See Persistent Storage - Details for the full constraint matrix, label-mismatch guard, and verification procedures.

Next Steps

Troubleshooting

For common issues and solutions, see the Troubleshooting Guide.


Cleanup

Check out the Cleanup Instructions from the Basic Deployment lesson to remove all resources created in this walkthrough.

PVCs created by the platform are intentionally not deleted when the PlatformApplication is removed. Drop them explicitly:

```shell
kubectl delete pvc -n demo-http-echo -l p6m.dev/app=demo-http-echo
```