# Backing Up and Restoring the K2s System

This guide explains how to back up and restore your entire K2s cluster state, including Kubernetes resources, persistent volumes, and container images.

## Overview

K2s provides comprehensive backup and restore functionality for your cluster:

- `k2s system backup`: Creates a complete backup of cluster resources, persistent volumes, and container images
- `k2s system restore`: Restores cluster state from a backup archive
- **Integration with Upgrade**: The `k2s system upgrade` command automatically creates a backup before upgrading
## Configuration Settings

Global configuration for backup/restore operations is available in `cfg/config.json`:

```json
{
  "backup": {
    "excludednamespaces": "kube-system,kube-public,kube-node-lease",
    "excludednamespacedresources": "events,endpoints,endpointslices",
    "excludedclusterresources": "nodes,certificatesigningrequests,leases",
    "excludedaddonpersistentvolumes": "postgresql-pv-volume,dicom-pv-volume,orthanc-pv,registry-pv,smb-static-pv,opensearch-cluster-master-pv"
  }
}
```
### Configuration Options

| Setting | Description | Default |
|---|---|---|
| `excludednamespaces` | Comma-separated list of namespaces to exclude from backup | `kube-system,kube-public,kube-node-lease` |
| `excludednamespacedresources` | Comma-separated list of namespaced resource types to exclude | `events,endpoints,endpointslices` |
| `excludedclusterresources` | Comma-separated list of cluster-scoped resource types to exclude | `nodes,certificatesigningrequests,leases` |
| `excludedaddonpersistentvolumes` | Comma-separated list of addon-managed PV names to exclude from system backup | `postgresql-pv-volume,dicom-pv-volume,orthanc-pv,registry-pv,smb-static-pv,opensearch-cluster-master-pv` |
### Why Exclude Resources?

Certain resources are ephemeral or cluster-specific and should not be backed up:

- **Events**: Transient informational messages
- **Endpoints/EndpointSlices**: Auto-generated by services
- **Nodes**: Physical/virtual infrastructure specific to the cluster
- **Leases**: Short-lived coordination primitives
- **Addon-managed PVs**: Persistent volumes managed by addons should be backed up via addon-specific backup, not system backup
## Creating a System Backup

### Basic Usage
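To create a backup, run `k2s system backup` with a target archive path (the path below is a placeholder):

```shell
k2s system backup -f ./k2s-backup.zip
```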
This creates a backup archive containing:

- All Kubernetes resources (excluding configured exclusions)
- Persistent volume data
- User workload container images
- Cluster configuration (`config.json`)
- Backup metadata (`backup.json`)
### Command Options

| Option | Short | Description | Required |
|---|---|---|---|
| `--file` | `-f` | Path to backup archive file (`.zip`) | Yes |
| `--skip-images` | | Skip container image backup (faster, smaller backup) | No |
| `--skip-pvs` | | Skip persistent volume backup | No |
| `--output-style` | `-o` | Output style (standard, verbose, structured) | No |
| `--show-logs` | `-v` | Show detailed logs during backup | No |
| `--help` | `-h` | Display help information | No |
### Examples

#### Full Backup (All Resources, Images, and PVs)

#### Quick Backup (Skip Images for Speed)

#### Configuration-Only Backup (No Images or PVs)

#### Verbose Backup with Detailed Logs
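Taken together, the four examples above correspond to invocations along these lines (flag names are taken from the options table; archive paths are placeholders):

```shell
# Full backup: all resources, images, and PVs
k2s system backup -f ./full-backup.zip

# Quick backup: skip container images for speed
k2s system backup -f ./quick-backup.zip --skip-images

# Configuration-only backup: no images or PVs
k2s system backup -f ./config-backup.zip --skip-images --skip-pvs

# Verbose backup with detailed logs
k2s system backup -f ./full-backup.zip --show-logs
```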
## Backup Contents

### Archive Structure

A K2s backup archive contains:

```text
backup.json                # Backup metadata
config/
  config.json              # Cluster configuration snapshot
Namespaced/
  <namespace>/
    <resource-type>.yaml   # Namespaced resources per namespace
NotNamespaced/
  <resource-type>.yaml     # Cluster-scoped resources
pv/
  <pv-name>/
    data/                  # Persistent volume data
    metadata.json          # PV metadata
images/
  <image-name>.tar         # Container images (nerdctl save format)
hooks/
  output.txt               # Output from custom backup hooks
```
### Backup Metadata (backup.json)

The `backup.json` file contains important information about the backup:

```json
{
  "apiVersion": "k2s.backup/v1",
  "kind": "SystemBackup",
  "metadata": {
    "backupTimestamp": "2026-02-24T14:30:00Z",
    "backupTool": "k2s system backup",
    "backupToolVersion": "1.6.0",
    "backupFormatVersion": "1"
  },
  "cluster": {
    "name": "k2s",
    "k2sVersion": "1.6.0"
  },
  "content": {
    "included": {
      "clusterResources": true,
      "namespaces": ["default", "k2s", "my-app"]
    },
    "excluded": {
      "namespaces": ["kube-system", "kube-public"],
      "namespacedResources": ["events", "endpoints"],
      "clusterResources": ["nodes", "leases"]
    }
  },
  "configSnapshot": {
    "source": "config/config.json"
  }
}
```
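Before restoring, a tool can sanity-check this metadata. The sketch below is a minimal Python illustration, not the actual K2s implementation; the field names come from the example above, and `SUPPORTED_FORMAT_VERSIONS` is an assumed constant:

```python
import json

# Assumption for this sketch: the restore tool understands format version "1"
SUPPORTED_FORMAT_VERSIONS = {"1"}

def validate_backup_metadata(raw: str) -> dict:
    """Parse backup.json and reject archives with an unknown kind or format version."""
    meta = json.loads(raw)
    if meta.get("kind") != "SystemBackup":
        raise ValueError("not a K2s system backup archive")
    version = meta["metadata"]["backupFormatVersion"]
    if version not in SUPPORTED_FORMAT_VERSIONS:
        raise ValueError(f"unsupported backup format version: {version}")
    return meta

sample = '{"apiVersion": "k2s.backup/v1", "kind": "SystemBackup", "metadata": {"backupFormatVersion": "1"}}'
meta = validate_backup_metadata(sample)
print(meta["apiVersion"])  # → k2s.backup/v1
```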
## Persistent Volumes Backup

### What Gets Backed Up

The backup process automatically discovers and backs up:

- **User-created PersistentVolumes**: All PVs not managed by addons
- **PersistentVolumeClaims**: Claims and their binding metadata
- **Volume Data**: Actual data stored in hostPath, local, and other volume types

### Excluded PVs

Addon-managed PVs are excluded from system backup (they're handled by addon-specific backup). The list of excluded PVs is configured in `cfg/config.json` under `backup.excludedaddonpersistentvolumes`.

Default excluded PVs:

- `postgresql-pv-volume` (Database addon)
- `dicom-pv-volume` (DICOM addon)
- `orthanc-pv` (DICOM addon)
- `registry-pv` (Registry addon)
- `smb-static-pv` (Storage addon)
- `opensearch-cluster-master-pv` (Logging addon)
### Customizing Excluded PVs

You can customize which PVs are excluded by modifying the `excludedaddonpersistentvolumes` setting in `cfg/config.json`. Add or remove PV names as a comma-separated list.
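For example, to additionally exclude a hypothetical `my-cache-pv` alongside the defaults:

```json
{
  "backup": {
    "excludedaddonpersistentvolumes": "postgresql-pv-volume,dicom-pv-volume,orthanc-pv,registry-pv,smb-static-pv,opensearch-cluster-master-pv,my-cache-pv"
  }
}
```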
### How It Works

1. **Discovery**: Query the cluster for all PVs and PVCs
2. **Filtering**: Exclude addon-managed and system PVs
3. **Data Copy**: Copy volume data from the storage backend (e.g., hostPath)
4. **Metadata**: Save PV/PVC manifests and binding information
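The filtering step can be sketched as follows. This is a minimal Python illustration, not the actual K2s implementation; only the config key matches `cfg/config.json`:

```python
import json

def filter_pvs(all_pv_names, config_json):
    """Drop PVs listed in backup.excludedaddonpersistentvolumes."""
    cfg = json.loads(config_json)
    excluded = set(cfg["backup"]["excludedaddonpersistentvolumes"].split(","))
    return [name for name in all_pv_names if name not in excluded]

# Hypothetical config with two excluded PVs
config = '{"backup": {"excludedaddonpersistentvolumes": "registry-pv,smb-static-pv"}}'
print(filter_pvs(["my-app-pv", "registry-pv", "data-pv"], config))
# → ['my-app-pv', 'data-pv']
```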
### Large PVs

Backing up large persistent volumes can significantly increase backup time and size. Consider using `--skip-pvs` if you have external PV backup solutions or if PV data is not critical.
## Container Images Backup

### What Gets Backed Up

The backup includes:

- **User workload images**: Images used by your applications
- **Excludes system images**: Kubernetes system components (kube-apiserver, coredns, etc.)
- **Excludes addon images**: Images managed by enabled addons (handled separately by addon backup)
### Image Discovery

Images are discovered by:

1. Scanning all pods in non-excluded namespaces
2. Extracting image references from pod specs
3. Resolving image IDs from the container runtime
4. Using `nerdctl image save` to export images
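Steps 1–2 amount to a simple scan over pod specs. A minimal Python sketch (pod objects shown as plain dicts, mirroring the JSON shape of the Kubernetes API; not the actual K2s code):

```python
def collect_user_images(pods, excluded_namespaces):
    """Gather unique image references from pod specs outside excluded namespaces."""
    images = set()
    for pod in pods:
        if pod["metadata"]["namespace"] in excluded_namespaces:
            continue  # skip system/excluded namespaces
        for container in pod["spec"].get("containers", []):
            images.add(container["image"])
    return sorted(images)

pods = [
    {"metadata": {"namespace": "my-app"},
     "spec": {"containers": [{"image": "docker.io/myapp/frontend:v1.0"}]}},
    {"metadata": {"namespace": "kube-system"},
     "spec": {"containers": [{"image": "registry.k8s.io/coredns:v1.11"}]}},
]
print(collect_user_images(pods, {"kube-system", "kube-public"}))
# → ['docker.io/myapp/frontend:v1.0']
```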
### Example Output

```text
> k2s system backup -f backup.zip
⏳ [14:30:05] Backing up user workload images...
⏳ [14:30:06] Found 5 user workload images
⏳ [14:30:07] ✅ docker.io/myapp/frontend:v1.0 (ID: abc123def456)
⏳ [14:30:08] ✅ docker.io/myapp/backend:v1.0 (ID: def456ghi789)
⏳ [14:30:09] ✅ quay.io/myorg/worker:latest (ID: ghi789jkl012)
⏳ [14:30:15] Successfully backed up 5 user workload container images
```
### Backup Size

Container images can be large (hundreds of MB to several GB each). The `--skip-images` flag skips image backup, reducing backup time and size significantly.
## Custom Backup Hooks

K2s supports custom backup logic via hooks:

### Hook Locations

- **Built-in**: `lib/scripts/k2s/system/backup/hooks/`
- **Custom**: Specify with the `--additional-hooks-dir` flag

### Hook Types

- **Pre-backup**: Execute before backup starts
- **Post-backup**: Execute after backup completes
- **Resource-specific**: Execute for specific resource types
Hook Example
# hooks/my-app-backup.ps1
param(
[Parameter(Mandatory = $true)]
[string] $BackupDir
)
Write-Host "Executing custom backup logic for my-app..."
# Example: Dump database to file
kubectl exec -n my-app my-db-0 -- pg_dump mydb > "$BackupDir/my-app-db.sql"
Write-Host "Custom backup complete"
For more details on hooks, see Hook System Documentation.
## Restoring a System Backup

### Basic Usage
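To restore, point `k2s system restore` at a backup archive (the path below is a placeholder):

```shell
k2s system restore -f ./k2s-backup.zip
```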
This restores:
- Kubernetes resources (cluster-scoped and namespaced)
- Persistent volume data
- Container images
- Cluster configuration
### Command Options

| Option | Short | Description | Required |
|---|---|---|---|
| `--file` | `-f` | Path to backup archive file (`.zip`) | Yes |
| `--error-on-conflict` | `-e` | Fail if resource conflicts occur (default: warnings only) | No |
| `--output-style` | `-o` | Output style (standard, verbose, structured) | No |
| `--show-logs` | `-v` | Show detailed logs during restore | No |
| `--help` | `-h` | Display help information | No |
### Examples

#### Standard Restore

#### Strict Restore (Fail on Any Conflict)

#### Verbose Restore with Detailed Logs
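The three examples above correspond to invocations like these (flag names are taken from the options table; the archive path is a placeholder):

```shell
# Standard restore
k2s system restore -f ./k2s-backup.zip

# Strict restore: fail on any conflict
k2s system restore -f ./k2s-backup.zip --error-on-conflict

# Verbose restore with detailed logs
k2s system restore -f ./k2s-backup.zip --show-logs
```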
## Restore Behavior

### Resource Conflicts

When restoring to an existing cluster, resource conflicts may occur:

| Conflict Type | Default Behavior | With `-e` Flag |
|---|---|---|
| CRD already exists | Warning logged, continue | Fail restore |
| Namespace exists | Merge resources | Fail restore |
| Resource exists | Update/patch resource | Fail restore |
| Immutable field | Warning logged, skip | Fail restore |
### Clean Slate Restore

For the cleanest restore experience, restore to a freshly installed K2s cluster. This avoids conflicts entirely.
### Restore Order

Resources are restored in this order to satisfy dependencies:

#### High-Level Restore Order

1. **Container Images**: Load images into the runtime (before deploying workloads that need them)
2. **Restore Hooks**: Execute custom pre-restore logic
3. **Persistent Volume Data**: Restore PV data to the storage backend
4. **Cluster-scoped Resources**: Apply cluster-wide resources
5. **Namespaced Resources**: Apply namespace-specific resources
## Custom Restore Hooks

Similar to backup, restore supports hooks:

```powershell
# hooks/my-app-restore.ps1
param(
    [Parameter(Mandatory = $true)]
    [string] $BackupDir
)

Write-Host "Executing custom restore logic for my-app..."

# Example: Restore database from backup
cat "$BackupDir/my-app-db.sql" | kubectl exec -i -n my-app my-db-0 -- psql mydb

Write-Host "Custom restore complete"
```
## Integration with System Upgrade

The `k2s system upgrade` command automatically integrates backup/restore:

### Upgrade Workflow

Internal sequence:

1. **Pre-upgrade validation**: Check cluster health
2. **Automatic backup**: `k2s system backup` (internal call)
3. **Uninstall old version**: Remove the current K2s installation
4. **Install new version**: Install the upgraded K2s version
5. **Automatic restore**: `k2s system restore` (internal call)
6. **Post-upgrade validation**: Verify cluster health
### Upgrade with Custom Backup Options

The upgrade command supports backup-related flags. These flags are passed to the internal `k2s system backup` call.
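Assuming the upgrade command accepts the same flags as `k2s system backup` (an assumption based on the pass-through described above; check `k2s system upgrade --help` to confirm), an invocation might look like:

```shell
# Hypothetical: skip image backup during the pre-upgrade backup step
k2s system upgrade --skip-images
```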
## Error Handling

### Backup Errors

The backup process implements strict error handling:

| Error Scenario | Behavior |
|---|---|
| PV backup fails | Fails backup (use `--skip-pvs` to bypass) |
| Image backup fails | Fails backup (use `--skip-images` to bypass) |
| `config.json` missing | Fails backup (critical file) |
| Manifest creation fails | Fails backup |
| ZIP compression fails | Fails backup |
| Hook execution fails | Fails backup |
### Restore Errors

| Error Scenario | Default Behavior | With `-e` Flag |
|---|---|---|
| CRD conflict | Log warning, continue | Fail restore |
| Resource apply fails | Log warning, continue | Fail restore |
| Image load fails | Log warning, continue | Fail restore |
| PV restore fails | Log warning, continue | Fail restore |
| Hook fails | Log warning, continue | Fail restore |
## Best Practices

### Backup Strategy

- **Regular Backups**: Schedule periodic backups (daily/weekly) based on change frequency
- **Pre-Change Backups**: Always back up before major changes (upgrades, large deployments)
- **Test Restores**: Periodically test the restore process to verify backup integrity
- **Retention Policy**: Keep multiple backup versions (e.g., last 7 daily, 4 weekly, 12 monthly)
### Backup Optimization

- **Skip Images**: Use `--skip-images` for frequent backups if images rarely change
- **Skip PVs**: Use `--skip-pvs` if you have external PV backup solutions
- **Incremental Strategy**: Full backup weekly, config-only backup daily
### Restore Considerations

- **Clean Cluster**: Restore to a freshly installed cluster when possible
- **Strict Mode**: Use `-e` for production restores to catch conflicts early
- **Monitor**: Watch logs with `-v` to identify issues immediately