Backup overview
NetFoundry Self-Hosted provides two levels of backup protection:
- Ziti database snapshots — Automatic daily snapshots of the Ziti controller database (BoltDB), stored on a dedicated PVC within the cluster. These are installed by the quickstart and run without any additional setup. Use these when you need to restore the Ziti database to a previous point in time.
- Full cluster backups with Velero — Complete backup of all Kubernetes resources and persistent volumes to an external storage target (AWS S3 or on-site MinIO). Use these for disaster recovery, cluster migration, or when you need to restore the entire installation including the support stack.
Ziti database snapshots
The quickstart installer automatically deploys a scheduled snapshot job that captures the Ziti controller database
daily at 1:00 AM and retains snapshots for 7 days. This is configured via snapshot-values.yml:
```yaml
# Snapshot schedule - uses cron syntax
local_pvc_snapshot_schedule: "0 1 * * *"
# How long snapshots are retained
local_pvc_retention_days: 7
```
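For example, to snapshot every six hours and keep two weeks of history, the same two keys could be set as follows (a sketch; the values are illustrative):

```yaml
# Snapshot at minute 0 of hours 0, 6, 12, and 18
local_pvc_snapshot_schedule: "0 */6 * * *"
# Keep two weeks of snapshots
local_pvc_retention_days: 14
```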
To apply configuration changes:
```shell
helm upgrade -n ziti --install ziti-snapshots ./helm-charts/snapshots/ --values snapshot-values.yml
```
Create an on-demand snapshot
To take a snapshot immediately:
```shell
nf-create-snapshot
```
Restore from a snapshot
To restore the Ziti database from a snapshot:
```shell
nf-restore-snapshot
```
The restore script will:
- List all available snapshot files and prompt you to select one.
- Warn that the Ziti controller will be temporarily stopped, disrupting network services.
- Scale down the controller, replace the database file, and restart the controller.
Restoring a snapshot replaces the current Ziti database. The controller is stopped during the restore, which temporarily disrupts all network services.
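The steps above can be sketched as a shell function. This is a hypothetical illustration, not the actual nf-restore-snapshot script: the deployment name, namespace, helper pod, and database path are placeholders.

```shell
# Hypothetical sketch of the restore sequence. The deployment name,
# namespace, helper pod, and database path are illustrative placeholders,
# not taken from the real nf-restore-snapshot script.
restore_sketch() {
  snapshot_file="$1"
  # 1. Stop the controller so the BoltDB file is closed before replacement.
  kubectl -n ziti scale deployment ziti-controller --replicas=0
  # 2. Copy the selected snapshot over the live database file on the PVC
  #    (via a helper pod that mounts the same volume).
  kubectl -n ziti cp "$snapshot_file" pvc-helper:/ziti-data/ctrl.db
  # 3. Restart the controller with the restored database.
  kubectl -n ziti scale deployment ziti-controller --replicas=1
}
```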
Full cluster backups with Velero
For full cluster backups that include all Kubernetes resources and persistent volumes, NetFoundry Self-Hosted provides scripts built on Velero. Velero requires an external storage target — either AWS S3 or an S3-compatible on-site store like MinIO.
Choose the guide that matches your environment:
- Automated backups with AWS S3 — Uses the included velero_backup.sh script with an AWS S3 bucket.
- On-site backups with MinIO — Uses a local MinIO instance for environments without cloud storage access.
- Restore and migration — Restoring from a Velero backup and migrating to a new cluster.
S3/IAM prerequisites
If using AWS S3, an S3 bucket with IAM credentials must be set up before running the backup scripts. Credentials are persisted to ./velero/s3-credentials-velero. Do not use temporary credentials (such as STS session tokens): scheduled backups require persistent access.
Create an S3 bucket
```shell
BUCKET=<YOUR_BUCKET>
REGION=<YOUR_REGION>
aws s3api create-bucket \
  --bucket $BUCKET \
  --region $REGION \
  --create-bucket-configuration LocationConstraint=$REGION
```
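One caveat with create-bucket: in us-east-1 the request must omit the LocationConstraint, or the call fails with an InvalidLocationConstraint error. A sketch that branches on the region (the aws call itself is left commented out; the region and bucket values are examples):

```shell
# us-east-1 is the one region where create-bucket must NOT be given a
# LocationConstraint, so build the extra arguments conditionally.
REGION=us-east-1                     # example value
BUCKET=my-velero-bucket              # example value
if [ "$REGION" = "us-east-1" ]; then
  CREATE_ARGS=""
else
  CREATE_ARGS="--create-bucket-configuration LocationConstraint=$REGION"
fi
# aws s3api create-bucket --bucket "$BUCKET" --region "$REGION" $CREATE_ARGS
```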
Create a Velero IAM user
```shell
aws iam create-user --user-name velero
```
Create an IAM policy document
Save the following as ./velero-policy.json:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::${BUCKET}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::${BUCKET}"
      ]
    }
  ]
}
```
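Note that the ${BUCKET} placeholders are not expanded by the AWS CLI; the saved file must contain the literal bucket ARN. One way to generate the file with the shell substituting the value (a sketch using an example bucket name):

```shell
# Write velero-policy.json with ${BUCKET} expanded by the shell.
# The bucket name here is an example; use your actual bucket.
BUCKET=my-velero-bucket
cat > ./velero-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": ["arn:aws:s3:::${BUCKET}/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::${BUCKET}"]
    }
  ]
}
EOF
```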
Attach the policy:
```shell
aws iam put-user-policy \
  --user-name velero \
  --policy-name velero \
  --policy-document file://velero-policy.json
```
Create an access key
```shell
aws iam create-access-key --user-name velero
```
The result should look like:
```json
{
  "AccessKey": {
    "UserName": "velero",
    "Status": "Active",
    "CreateDate": "2025-07-31T21:21:41.556Z",
    "SecretAccessKey": "<AWS_SECRET_ACCESS_KEY>",
    "AccessKeyId": "<AWS_ACCESS_KEY_ID>"
  }
}
```
Update the ./velero/s3-credentials-velero file with the credentials:
```ini
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
```
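The file uses the standard AWS shared-credentials (INI) format. A sketch that writes it from shell variables and restricts its permissions, since it holds a long-lived secret (the key values here are placeholders):

```shell
# Sketch: write the shared-credentials file from the values returned by
# create-access-key (placeholder values shown), then lock down permissions.
mkdir -p ./velero
AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEYID        # placeholder
AWS_SECRET_ACCESS_KEY=exampleSecretKey    # placeholder
cat > ./velero/s3-credentials-velero <<EOF
[default]
aws_access_key_id=${AWS_ACCESS_KEY_ID}
aws_secret_access_key=${AWS_SECRET_ACCESS_KEY}
EOF
# Credentials files should be readable only by the owner.
chmod 600 ./velero/s3-credentials-velero
```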
Storage setup for multi-node production clusters
CSI-enabled storage provides volume resizing, storage snapshots, and automated backups. Storage drivers depend on your Kubernetes provider; for a full list of vendor-maintained drivers, see the Drivers page in the Kubernetes CSI documentation.
For EKS clusters, initialize the ebs.csi.aws.com driver:
```shell
./installers/setup_eks_storage.sh
```
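For reference, a minimal gp3 StorageClass for the ebs.csi.aws.com driver looks like this (a sketch; the setup script may create a class with a different name or parameters):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3                      # illustrative name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
allowVolumeExpansion: true           # enables volume resizing
volumeBindingMode: WaitForFirstConsumer
```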