Backups don't work after K10 reinstall

Description: Jobs fail to run after reinstalling K10 on the same cluster

K10 backups can be exported to any S3-compatible object store to maintain secondary copies and meet long-term retention or compliance needs. Both metadata and data are stored in a location derived from the cluster ID. To prevent accidental data loss, K10 refuses to overwrite this data: if K10 is deleted and reinstalled on the same cluster, the new install will display errors when new backups are initiated.

Error: "Failed to connect to the backup repository"

K10 uses passkeys (automatically generated or user-defined) to protect both metadata and data. Because these passkeys are unique to each K10 install, and because data from the old install still exists in the same backup location, a reinstalled K10 instance will run into conflicts when it tries to use that location.

Warning: These steps should be followed only when testing or evaluating K10, or if the existing backup data is no longer required. When in doubt, contact Kasten Support.

To recover from this failure, the previous install's data must be removed from the object storage location. To do this, first extract the cluster ID of the current cluster, so you know which folder must not be removed.

  1. Extract the cluster ID in one of two ways:
    1. CLI: 
      $ kubectl get namespace default -ojsonpath="{.metadata.uid}{'\n'}"
    2. K10 Dashboard: append settings/support to the end of the URL

  2. Retrieve the S3 bucket by selecting Settings --> Locations in the K10 Dashboard and noting the value under Bucket Name

  3. Go to the S3 Console --> select the appropriate bucket --> click into the K10 folder --> click into the cluster ID folder --> delete everything.
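The decision in step 3 boils down to: keep the current cluster's folder, delete every other cluster-ID folder. A minimal sketch of that selection logic, assuming the layout described above where each cluster's data lives in its own folder under the K10 folder (the function name and example IDs are hypothetical):

```python
def stale_cluster_folders(folders, current_cluster_id):
    """Return the cluster-ID folders that are safe to delete.

    `folders` are the folder names found under the K10 folder in the
    bucket; `current_cluster_id` is the value of
    kubectl get namespace default -ojsonpath="{.metadata.uid}"
    on the cluster that must keep its data.
    """
    return [f for f in folders if f.rstrip("/") != current_cluster_id]


# Hypothetical example: two previous installs plus the current one.
folders = [
    "11111111-aaaa-4bbb-8ccc-000000000001/",
    "22222222-aaaa-4bbb-8ccc-000000000002/",
    "33333333-aaaa-4bbb-8ccc-000000000003/",
]
current = "33333333-aaaa-4bbb-8ccc-000000000003"
print(stale_cluster_folders(folders, current))
# → ['11111111-aaaa-4bbb-8ccc-000000000001/',
#    '22222222-aaaa-4bbb-8ccc-000000000002/']
```

The same filter applies whether the deletion is done in the S3 Console or scripted against the bucket listing: everything except the current cluster's folder is a leftover from a previous install.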