Changing Your Application Namespace
How to migrate an application to a new Kubernetes namespace.
By default, the application namespace matches the repository name. It determines the Kubernetes namespace, the ArgoCD application name, the .platform/kubernetes/ folder path, and the container image name. This guide covers how to change it — whether as part of a repo rename or independently.
The general approach is to deploy the application to the new namespace alongside the old one, cut traffic over, then tear down the old deployment. For most applications on our platform — where databases and other stateful services are external to the namespace — this can be done with zero downtime.
If your application uses in-cluster state (EBS-backed PersistentVolumeClaims, in-cluster databases, etc.), those resources are namespace-scoped and cannot simply be moved. You'll need to plan a data migration, which may involve downtime. The procedure below assumes stateless in-cluster workloads with external databases.
How the Namespace Is Set
Each application namespace must belong to exactly one repository. If two repos share the same namespace, their pipelines will overwrite each other's deployments — only one will end up running at a time. When choosing a new namespace, make sure it's not already in use by another repo.
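Since the namespace maps one-to-one to a folder in the `.platform` repo, one quick availability check is to look for an existing folder. A sketch, run from a checkout containing the `.platform/kubernetes/` tree (the candidate name is a placeholder):

```shell
# An existing folder means the namespace is already claimed by
# another repository's pipeline.
if [ -d ".platform/kubernetes/my-new-namespace" ]; then
  STATUS="already in use"
else
  STATUS="available"
fi
echo "my-new-namespace: ${STATUS}"
```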
Before changing anything, you need to understand how your CI/CD workflow determines the namespace. There are two patterns:
Direct dispatch step
Your workflow has a platform-application-manifest-dispatch step directly in your repo, with an explicit directory-name parameter:
```yaml
- name: Update Application Manifest
  uses: p6m-actions/platform-application-manifest-dispatch@v1
  with:
    repository: ${{ github.repository }}
    image-name: my-service
    environment: "dev"
    digest: ${{ needs.build.outputs.digest }}
    update-manifest-token: ${{ secrets.UPDATE_MANIFEST_TOKEN }}
    platform-dispatch-url: ${{ vars.PLATFORM_DISPATCH_URL }}
    directory-name: my-service # <-- this controls the namespace
```
The directory-name parameter controls the namespace. If it's hardcoded, changing the namespace means changing this value.
Upstream reusable workflow
Your workflow calls a shared workflow from p6m-dev/github-actions that handles the dispatch internally:
```yaml
jobs:
  build_and_deploy:
    uses: p6m-dev/github-actions/.github/workflows/build-deploy-python-container.yaml@main
    secrets:
      ARTIFACTORY_USERNAME: ${{ secrets.ARTIFACTORY_USERNAME }}
      ARTIFACTORY_IDENTITY_TOKEN: ${{ secrets.ARTIFACTORY_IDENTITY_TOKEN }}
      UPDATE_MANIFEST_TOKEN: ${{ secrets.UPDATE_MANIFEST_TOKEN }}
      ARTIFACTORY_TOKEN: ${{ secrets.ARTIFACTORY_TOKEN }}
    with:
      ARTIFACTORY_REGISTRY: "p6m.jfrog.io"
      APPS: my-service
      DOCKER_REPO: my-org-docker/applications
```
In this case, the namespace defaults to the repository name. To override it, add the PLATFORM_NAMESPACE input:
```yaml
with:
  ARTIFACTORY_REGISTRY: "p6m.jfrog.io"
  APPS: my-service
  DOCKER_REPO: my-org-docker/applications
  PLATFORM_NAMESPACE: my-new-namespace # <-- add this
```
What's Affected
Everything scoped to the old namespace needs to move or be updated:
- Kubernetes namespace and all resources within it (deployments, services, configmaps, secrets)
- ArgoCD application name and target namespace
- `.platform/kubernetes/<name>` folder in the `.platform` repo
- Container image name in JFrog
- Service-to-service references (`service.old-name.svc.cluster.local`)
- Ingress rules and external DNS entries
- External secrets / key vault references
- Network policies
- Any monitoring dashboards or alerts scoped to the old namespace
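A rough inventory of what lives in the current namespace can be pulled with `kubectl` (a sketch; `my-service` is a placeholder for your current namespace, and your cluster may have additional resource types worth listing):

```shell
# Workloads and config scoped to the current namespace
kubectl get deployments,services,configmaps,secrets,ingress -n my-service

# Network policies and PVCs; any PVCs mean in-cluster state that
# needs its own migration plan (see above)
kubectl get networkpolicies,pvc -n my-service
```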
Before You Start
- Identify which workflow pattern you're using (direct dispatch or upstream reusable workflow — see above)
- Find the ArgoCD application for your service and check its status in all environments (dev, stg, prd) — confirm it's healthy and synced before making changes
- Locate the `.platform/kubernetes/<current-name>` directory in your `.platform` repo
- Identify all applications that communicate with this service (callers, downstream dependencies)
- Inventory other references to the current namespace (ingress rules, external DNS, monitoring dashboards, alerts, network policies, external secrets)
- Identify whether any in-cluster state (PVCs, etc.) lives inside this namespace — if so, plan a data migration separately
- Coordinate with dependent teams
- Decide whether to do this per-environment (dev first, then stg, then prd) or all at once
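The ArgoCD health check can be done from the CLI. A sketch, assuming the `argocd` CLI is logged in, the app name matches the repo name, and ArgoCD's Application resources live in the `argocd` namespace (adjust for your platform):

```shell
# With the argocd CLI (app name is a placeholder)
argocd app get my-service

# Or inspect the Application resource directly
kubectl get application my-service -n argocd \
  -o jsonpath='{.status.sync.status} {.status.health.status}{"\n"}'
```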
Procedure
Step 1: Update the Workflow
Depending on your pattern:
- Direct dispatch: Change the `directory-name` parameter to the new namespace.
- Upstream reusable workflow: Add (or update) the `PLATFORM_NAMESPACE` input with the new namespace.
Merge this change to main.
Step 2: Remove the Old ArgoCD Application
Delete the `.platform/kubernetes/<old-name>` folder from the `.platform` repo. Then perform a non-cascading delete of the ArgoCD application. This removes the Application from ArgoCD without touching the underlying Kubernetes resources: your pods, services, and namespace keep running.
The ApplicationSet controller may remove the ArgoCD application automatically (non-cascading) after you delete the `.platform/kubernetes/` folder, depending on how quickly it syncs. If the app is already gone by the time you reach this step, that's fine; just confirm your pods are still running.
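With the `argocd` CLI, the non-cascading delete looks like this (the app and namespace names are placeholders; `--cascade=false` is what preserves the underlying resources):

```shell
# Remove the Application object only; deployed resources keep running
argocd app delete my-service --cascade=false

# Sanity check: pods in the old namespace should still be up
kubectl get pods -n my-service
```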
Step 3: Deploy to the New Namespace
Run the CI/CD pipeline. This will:
- Build a new container image under the new name
- Generate a new `.platform/kubernetes/<new-name>` folder
- Create a new ArgoCD application targeting the new namespace
Step 4: Verify the New Deployment
Confirm the application is healthy in the new namespace.
- ArgoCD application syncs successfully
- Pods are running and healthy
- Application responds correctly
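Hedged examples of these checks (assumes the `argocd` and `kubectl` CLIs are available and that the service exposes a `/health` endpoint; all names are placeholders):

```shell
# Sync and health status of the new ArgoCD application
argocd app get my-new-namespace

# Pods running in the new namespace
kubectl get pods -n my-new-namespace

# One-off in-cluster smoke test (assumes a /health endpoint)
kubectl run curl-smoke --rm -i --restart=Never -n my-new-namespace \
  --image=curlimages/curl -- \
  curl -s http://my-service.my-new-namespace.svc.cluster.local/health
```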
Step 5: Update Downstream References
Update everything that pointed to the old namespace:
- Ingress rules and external DNS entries
- Service URLs in other applications (`service.new-name.svc.cluster.local`)
- Monitoring dashboards and alerts
- Network policies
- External secrets / key vault references
- Documentation and runbooks
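For in-cluster callers, the only part of the service URL that changes is the namespace component. A minimal sketch of the rewrite (all names are placeholders):

```shell
# In-cluster DNS names follow <service>.<namespace>.svc.cluster.local;
# only the namespace component changes during this migration, the
# service name itself stays the same.
SERVICE="my-service"        # placeholder: your service's name
OLD_NS="old-name"           # placeholder: the old namespace
NEW_NS="my-new-namespace"   # placeholder: the new namespace
OLD_URL="${SERVICE}.${OLD_NS}.svc.cluster.local"
NEW_URL="${SERVICE}.${NEW_NS}.svc.cluster.local"
echo "replace ${OLD_URL} -> ${NEW_URL}"
```

Grepping caller repos for the old FQDN is a reliable way to find every reference that still needs updating.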
Step 6: Clean Up the Old Namespace
Once traffic is flowing to the new namespace and everything is stable, clean up the orphaned resources in the old namespace (pods, services, etc. left behind by the non-cascading delete).
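Once you're confident nothing references the old namespace, the cleanup can be a single namespace deletion (irreversible; the name is a placeholder):

```shell
# Deletes the namespace and every resource inside it
kubectl delete namespace my-service
```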
Validation
- New ArgoCD application is synced and healthy in all environments
- Traffic is flowing through the new namespace
- Service-to-service communication works
- Database connections are active
- Monitoring dashboards show data from the new namespace
- Old namespace is empty and cleaned up
- No orphaned ArgoCD applications remain
Rollback
Since the old resources keep running until Step 6 (the non-cascading delete preserves them), rollback at any point before then is straightforward: revert the workflow change and re-run the pipeline to recreate the ArgoCD application for the old namespace. Undo any traffic changes from Step 5 and clean up the new namespace.
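A sketch of that rollback, assuming the workflow change landed as a single commit (the SHA is a placeholder):

```shell
# Revert the namespace change in the workflow and re-trigger CI/CD
git revert <workflow-change-sha>
git push origin main
```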