StatefulSet Workloads
This example builds on Persistent Storage by switching the workload from a Deployment to a StatefulSet. StatefulSets unlock multi-replica stateful applications — each pod gets its own PersistentVolumeClaim and a stable identity, with ordered rolling updates.
What You'll Learn
In this lesson, you'll learn:
- How to switch a `PlatformApplication` from `Deployment` to `StatefulSet`
- `volumeClaimTemplates` and how per-pod PVCs are provisioned
- `podManagementPolicy` (`OrderedReady` vs `Parallel`)
- Why singleton-mode constraints don't apply to StatefulSets
- When a `StatefulSet` is the right call vs a `Deployment` with a shared PVC
When to Use a StatefulSet
| Use a StatefulSet when... | Use a Deployment + PVC when... |
|---|---|
| Multi-replica with per-pod storage (sharded databases, distributed queues) | Single replica, possibly with a single PVC |
| Pod identity matters (leader election, peer discovery) | All pods are interchangeable |
| You need ordered startup/shutdown | Rolling updates are fine |
| You're running Postgres/Redis/Kafka/Elasticsearch | You're running a stateless web service |
If you're not sure, start with a Deployment. The workload kind is immutable after creation — switching to a StatefulSet means deleting and recreating the PlatformApplication.
What Gets Created
Compared to Persistent Storage, the platform now creates:
- A `StatefulSet` instead of a `Deployment`
- One `PersistentVolumeClaim` per pod, per `volumeClaimTemplates[]` entry — Kubernetes provisions them as the pods come up (`<template>-<sts>-0`, `<template>-<sts>-1`, ...)
- `serviceName` on the `StatefulSet` points at the existing primary `Service`
What Changed
Switched `kind` to `StatefulSet` and replaced `volumeMounts.source.size` with `volumeClaimTemplates`:
```yaml
apiVersion: meta.p6m.dev/v1alpha1
kind: PlatformApplication
metadata:
  name: demo-http-echo
  namespace: demo-http-echo
  labels:
    p6m.dev/app: demo-http-echo
spec:
  config:
    LOG_LEVEL: debug
    HTTP_PORT: "8080"
    ENVIRONMENT: dev
    APP_NAME: demo-http-echo
  secrets:
    - name: super-secret-data
  deployment:
    kind: StatefulSet                    # NEW
    podManagementPolicy: OrderedReady    # NEW
    image: mendhak/http-https-echo:31
    ports:
      - port: 8080
        protocol: http
    readinessProbe:
      port: 8080
      path: /
    # NEW (replaces the volumeMounts.source.size from lesson 5)
    volumeClaimTemplates:
      - name: data
        mountPath: /var/lib/app
        size: 10Gi
  networking:
    ingress:
      enabled: true
      path: /
      gateway: public-open
```
Now every pod (`demo-http-echo-0`, `demo-http-echo-1`, ...) gets its own PVC named `data-demo-http-echo-0`, `data-demo-http-echo-1`, etc.
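The per-pod PVC name is purely mechanical: claim-template name, then StatefulSet name, then pod ordinal. A quick shell sketch of the pattern (plain string construction, no cluster required):

```shell
# PVC names follow <claim-template-name>-<statefulset-name>-<ordinal>
template="data"
sts="demo-http-echo"
for ordinal in 0 1; do
  echo "${template}-${sts}-${ordinal}"
done
# prints:
#   data-demo-http-echo-0
#   data-demo-http-echo-1
```

Because the names are deterministic, a re-created StatefulSet with the same name and template re-binds to the surviving PVCs automatically.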
volumeClaimTemplates vs volumeMounts
The two ways of attaching persistent storage look similar but mean very different things:
| Field | Workload | Behavior |
|---|---|---|
| `spec.deployment.volumeMounts[*].source.size` (lesson 5) | Deployment | One PVC, shared across replicas. Forces singleton mode for RWO. |
| `spec.deployment.volumeClaimTemplates[]` (this lesson) | StatefulSet | One PVC per pod. No singleton mode — replicas scale up freely. |
`volumeMounts[]` still works on a StatefulSet for `emptyDir`, ConfigMap, and Secret mounts. Only the PVC source (`source.size` / `source.claimName`) is replaced by `volumeClaimTemplates` on a StatefulSet.
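To make the split concrete, here's a sketch of a deployment block that combines `volumeClaimTemplates` (per-pod PVC) with a non-PVC `volumeMounts` entry. The `emptyDir` source shape is an assumption carried over from the lesson 5 schema, not something this lesson restates — check the schema reference before copying:

```yaml
deployment:
  kind: StatefulSet
  volumeClaimTemplates:          # per-pod PVCs (this lesson)
    - name: data
      mountPath: /var/lib/app
      size: 10Gi
  volumeMounts:                  # still allowed for non-PVC sources
    - name: scratch
      mountPath: /tmp/scratch
      source:
        emptyDir: {}             # assumed field shape; no source.size, so no PVC is created
```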
podManagementPolicy
Controls how Kubernetes creates and deletes pods during initial rollout and scaling:
| Value | Behavior | Use For |
|---|---|---|
| `OrderedReady` (default) | Pod N+1 doesn't start until pod N is ready. Shutdown happens in reverse order. | Apps with leader election, replicated databases that need quorum, anything where bringing a peer up out of order would cause split-brain. |
| `Parallel` | All pods start (and stop) simultaneously. | Sharded apps where each pod is independent. Faster rollouts when ordering doesn't matter. |
The field is optional; omit it to get the Kubernetes default (`OrderedReady`).
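If each replica is an independent shard, the only change from the manifest above is the policy value — a minimal sketch:

```yaml
deployment:
  kind: StatefulSet
  podManagementPolicy: Parallel   # all pods start/stop at once; use when peers don't depend on each other
```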
No Singleton Mode Override
The singleton-mode override from lesson 5 — which forces replicas: 1 and strategy: Recreate on a Deployment with an RWO PVC — does not apply to StatefulSets. That's the whole point: each StatefulSet pod gets its own PVC, so two pods never fight over the same volume. You can scale a StatefulSet up freely (autoscaling stays on), and rolling updates work the way they would for any other StatefulSet.
The autoscaling + RWO PVC admission rejection from lesson 5 is also gated on kind: Deployment, so a StatefulSet with autoscaling and RWO volumeClaimTemplates passes through unchallenged.
Deploy Steps
- ArgoCD
- kubectl
ArgoCD picks up the change to your PlatformApplication automatically once the Platform Dispatch Action that updates your .platform repository has run.
Use the Kinds filter to locate the StatefulSet (now in place of the Deployment) and the per-pod PersistentVolumeClaim resources that come up alongside it.
- Check out our ArgoCD Cheat Sheet for tips on interacting with ArgoCD.
- For more information on setting up ArgoCD for Platform Applications, see the ArgoCD Deployment Tutorial.
Switching kind between Deployment and StatefulSet is rejected at admission on an existing PlatformApplication. To change kinds, delete the PlatformApplication first, then re-apply with the new kind.
Apply the updated PlatformApplication:
```shell
kubectl apply -f application.yaml
```
Verify a StatefulSet was created (not a Deployment):
```shell
kubectl get statefulset -n demo-http-echo
# NAME             READY   AGE
# demo-http-echo   2/2     30s

kubectl get deployment -n demo-http-echo
# No resources found in demo-http-echo namespace.
```
Confirm a PVC per pod:
```shell
kubectl get pvc -n demo-http-echo
# NAME                    STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
# data-demo-http-echo-0   Bound    pvc-abc...   10Gi       RWO            gp3            30s
# data-demo-http-echo-1   Bound    pvc-def...   10Gi       RWO            gp3            20s
```
Write a marker file on pod-0 only and confirm pod-1 doesn't see it (per-pod storage):
```shell
kubectl exec -n demo-http-echo demo-http-echo-0 -- sh -c 'echo "pod-0 only" > /var/lib/app/marker.txt'
kubectl exec -n demo-http-echo demo-http-echo-1 -- ls /var/lib/app
# (empty)
```
Common Observations
Why is serviceName set to the same name as the primary Service?
`StatefulSet.spec.serviceName` is a required field in the Kubernetes API. The platform points it at the existing primary Service so the StatefulSet is well-formed. This does not give you per-pod DNS. Per-pod DNS (`pod-N.<svc>.<ns>.svc.cluster.local`) requires a headless Service, which the platform doesn't yet create. If you need stable per-pod DNS today (e.g., for Cassandra or etcd peer discovery), create a headless Service manually alongside the PlatformApplication. Operator-managed headless Service support is tracked as a follow-up.
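A minimal sketch of a manually created headless Service — the name and selector label are assumptions copied from this lesson's example, not platform output. One caveat worth verifying: Kubernetes publishes the per-pod `pod-N.<svc>` hostnames only for the Service whose name matches the StatefulSet's `serviceName` (the pods' DNS `subdomain`); a headless Service under a different name still gets a collective A record resolving to all pod IPs, but not the per-pod names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-http-echo-headless     # assumed name
  namespace: demo-http-echo
spec:
  clusterIP: None                   # headless: DNS resolves to pod IPs directly
  selector:
    p6m.dev/app: demo-http-echo     # assumed pod label, copied from the manifest above
  ports:
    - port: 8080
      protocol: TCP
```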
My PVCs aren't deleted when I delete the StatefulSet.
This is deliberate. Kubernetes' default StatefulSet PVC retention policy keeps per-pod volumes after deletion so a re-apply re-binds to the existing data. The platform doesn't override this. To actually drop the storage:
```shell
kubectl delete pvc -n demo-http-echo -l app.kubernetes.io/instance=demo-http-echo
```
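For reference, the retention behavior maps to the standard Kubernetes StatefulSet field sketched below (beta and enabled by default since Kubernetes 1.27). The PlatformApplication schema does not expose it as far as this lesson shows, so this is the raw StatefulSet shape, not something to put in application.yaml:

```yaml
# Raw StatefulSet field (not part of the PlatformApplication schema)
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain   # default: keep PVCs when the StatefulSet is deleted
    whenScaled: Retain    # default: keep PVCs when scaling down
```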
Can I change kind: Deployment to kind: StatefulSet (or back) on an existing app?
No. The kind field is immutable after creation — admission rejects the change with a clear error. Workload migration requires deleting and recreating the PlatformApplication. PVCs survive the deletion (no owner reference to the PlatformApplication), so a fresh apply with a StatefulSet will adopt the existing data if PVC names match.
Does this still get an HPA?
Yes — autoscaling works on StatefulSets. The autoscaling + RWO PVC admission rejection from lesson 5 only fires on a Deployment, since each StatefulSet pod has its own PVC and there's no attach-storm risk.
Want to dive deeper? See StatefulSet - Details for kind immutability, podManagementPolicy semantics, headless-Service workarounds, and per-pod DNS patterns.
You've Completed the Tutorial! 🎉
Further Learning:
- Cloud Resources with Crossplane
- Autoscaling with KEDA
- Service Mesh configuration
- Observability integration
Check the Platform Documentation for advanced topics.
Troubleshooting
For common issues and solutions, see the Troubleshooting Guide.
Cleanup
Check out the Cleanup Instructions from the Basic Deployment lesson to remove all resources created in this walkthrough.
Per-pod PVCs created by a StatefulSet are intentionally not deleted when the application is removed. Drop them explicitly:
```shell
kubectl delete pvc -n demo-http-echo -l app.kubernetes.io/instance=demo-http-echo
```
Related Documentation
- Kubernetes StatefulSets - Understanding StatefulSets
- Headless Services - Per-pod DNS for StatefulSets
- Pod Management Policies - OrderedReady vs Parallel
- StatefulSet PVC Retention - Lifecycle of per-pod PVCs