Restoring volume snapshots from secondary storage in new CS cluster #12254
---
If I need to restore all my volume snapshots to a new cluster, what is the optimal way to do that? Say I have a full backup of my secondary storage and all volumes, and I mounted it read-only to the new cluster as additional secondary storage. Of course, no volume snapshots from there magically show up in the UI, and all the filenames are UUIDs, so I can't just guess at which is which. Would it be required to import the snapshot list from the other cluster's database in order to be able to restore/create templates from the backup storage? I'm assuming yes, or is there a better way?

I do see the snapshot copy feature, which seemingly lets me copy snapshots between zones. This looks useful. If I choose to copy a snapshot but it already exists on the destination, would it just add it to the new zone's DB? Probably not, right? I'm already replicating the storage on the underlying file system; I just want the template list to be in sync.
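For reference, the UUID-to-name mapping lives in the old cluster's management database and can be exported before that cluster is gone. Below is a minimal sketch, assuming the standard `cloud` schema (the `snapshots`, `volumes`, and `snapshot_store_ref` tables); table and column names may vary between CloudStack versions, and host/credential values are placeholders.

```python
# Sketch: dump a UUID -> snapshot/volume name mapping from the OLD cluster's
# 'cloud' database so files on the replicated secondary storage can be identified.
# Assumes the usual CloudStack schema; adjust table/column names for your version.
import csv
import pymysql  # pip install pymysql

conn = pymysql.connect(host="old-mgmt-db", user="cloud", password="secret", database="cloud")
query = """
    SELECT s.uuid AS snapshot_uuid,
           s.name AS snapshot_name,
           v.name AS volume_name,
           ref.install_path
    FROM snapshots s
    JOIN volumes v ON v.id = s.volume_id
    JOIN snapshot_store_ref ref ON ref.snapshot_id = s.id
    WHERE ref.store_role = 'Image'      -- copies held on secondary storage
      AND s.removed IS NULL
"""
with conn.cursor() as cur:
    cur.execute(query)
    rows = cur.fetchall()

# Write the mapping out so it survives even if the old cluster is lost.
with open("snapshot_map.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["snapshot_uuid", "snapshot_name", "volume_name", "install_path"])
    writer.writerows(rows)

conn.close()
print(f"exported {len(rows)} snapshot records")
```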
Replies: 5 comments 3 replies
---
Looks like if I want to keep both sites mostly separate, I will need to do some scripting to keep snapshots in the DB up to date. Otherwise I need to do a full multi-region installation and connect the actual MariaDB machines with replication, etc. Doable, but now there are greater odds of database issues. Not sure I want to do all this just for snapshot copying to a remote zone.

Update: Looks like the process of doing this is quite agonizing; much has to be created in the second cluster's DB in order to get any UI functionality to restore snapshots from a remote system's storage. It even wants VM instance IDs, etc. I realize I can probably just use the qcow2 images to create new VMs, but obviously I was hoping for something a little easier and done through the UI. Perhaps the NAS backup functionality will work better for this type of thing - checking that out...

So... if I have backups from the "NAS Backup" plugin on one cluster and want to restore to another, I can't without modifying the database with the old records from the original cluster? Gross. I was trying to keep these two clusters completely separate and still have a viable restore option if the original cluster is completely lost. Perhaps if I try to restore a backup from Backup and Restore it will still prompt for all the needed information and be able to restore; I guess all I can do is try that.
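For the "some scripting" route, here is a rough sketch of what a sync check between the two sites could look like. It assumes the third-party `cs` Python client is acceptable (any signed call to the `listSnapshots` API would work the same way); endpoints, keys, and secrets are placeholders.

```python
# Sketch: compare the snapshot inventories of two independent CloudStack
# installations via their APIs, to see which snapshots the second site is
# missing. Uses the third-party 'cs' client (pip install cs).
from cs import CloudStack

site_a = CloudStack(endpoint="https://site-a/client/api", key="A_KEY", secret="A_SECRET")
site_b = CloudStack(endpoint="https://site-b/client/api", key="B_KEY", secret="B_SECRET")

def snapshot_index(api):
    """Return {snapshot name: snapshot dict} for all snapshots the API can see."""
    result = api.listSnapshots(listall=True)
    return {s["name"]: s for s in result.get("snapshot", [])}

a_snaps = snapshot_index(site_a)
b_snaps = snapshot_index(site_b)

missing_on_b = sorted(set(a_snaps) - set(b_snaps))
for name in missing_on_b:
    s = a_snaps[name]
    print(f"missing on site B: {name} (volume={s.get('volumename')}, id={s.get('id')})")
```

This only reports the drift; actually making the second site "see" a snapshot it never took is the part that requires the DB surgery described above.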
---
@Jayd603 Since you're restoring backups into a totally separate CloudStack environment, it's going to be tough to get them to show up without some database changes. However, you might be able to bypass the DB issues entirely by using the KVM QCOW2 import feature.

The trick is to register your backup target (the NAS) as Primary Storage instead of Secondary. If you do that, you can use the "Import QCOW2 image from Shared Storage" option under Tools > Import-Export Instances. I am referring to this feature: https://docs.cloudstack.apache.org/en/4.22.0.0/adminguide/virtual_machines/importing_unmanaging_vms.html#import-instances-from-shared-storage. You can do this from the GUI as well (Tools > Import-Export Instances > select KVM > select "Import QCOW2 image from Shared Storage" under Action).

It's a bit of a hack because that tool is technically for migrating external KVM VMs, but I think it can work for your use case. It lets you pick the raw .qcow2 files directly from the storage and spin them up as managed instances in your new setup without worrying about the old metadata. The only catch is that you'll need a way to map the files back to the right machines: since the files are named with UUIDs, you'll need to reference your old database (or a file list) to figure out which .qcow2 belongs to which VM before you start the import.
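On the mapping point, a minimal sketch of pulling a file-to-VM mapping out of the old cluster's database (same caveat as before: the `volumes` and `vm_instance` table/column names assume the standard `cloud` schema and may differ by version; connection details are placeholders):

```python
# Sketch: map volume file names (UUIDs on disk) back to the VMs they belonged to,
# using the OLD cluster's 'cloud' database. Schema assumptions as noted above.
import pymysql  # pip install pymysql

conn = pymysql.connect(host="old-mgmt-db", user="cloud", password="secret", database="cloud")
query = """
    SELECT v.path       AS file_name,      -- UUID-style file name on storage
           v.name       AS volume_name,
           v.volume_type,
           vm.name      AS vm_name
    FROM volumes v
    LEFT JOIN vm_instance vm ON vm.id = v.instance_id
    WHERE v.removed IS NULL
    ORDER BY vm.name, v.volume_type
"""
with conn.cursor() as cur:
    cur.execute(query)
    for file_name, volume_name, volume_type, vm_name in cur.fetchall():
        print(f"{file_name}  ->  vm={vm_name}  volume={volume_name} ({volume_type})")
conn.close()
```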
---
Another issue: I'm trying to connect ZFS-replicated storage, which is supposed to stay read-only so it doesn't break replication. CloudStack doesn't allow it. So I would need to copy all the files from one share to another before even being able to import into CloudStack.

It gets better. I attempted to remove the primary storage from the DB and restart things -- nothing works. So I guess I triggered it to use shared storage and it doesn't like that I removed it; no agents start. I re-enabled the shared storage in the DB and tried to use the UI to disable the shared storage pool -- everything rebooted again. This is after disabling "HA" in CloudStack, which the always-right AI told me would prevent that script from rebooting the hosts. Nope. After rebooting everything again, the host agents reconnected and things seemed normal, but then all hosts rebooted again. The shared storage was disabled and HA was disabled for the zone; it still reboots. Adding another primary shared storage pool still does not solve the issue -- the cluster still reboots hosts!
---
@Jayd603 Can you try the suggestion here to disable the reboot of the hosts when they cannot write to shared storage? #8682 (comment)
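For context, the reboots come from the KVM agent's storage heartbeat handling on NFS-backed primary storage, and the linked comment points at an agent-side switch rather than the zone HA setting. The snippet below is only a sketch of the kind of setting involved; the exact property name is an assumption and should be verified against the agent.properties shipped with your CloudStack version before relying on it.

```properties
# /etc/cloudstack/agent/agent.properties on each KVM host
# Assumed property name -- confirm against your version's agent.properties.
# When false, the agent alerts instead of rebooting the host if the storage
# heartbeat cannot be written; restart cloudstack-agent after changing it.
reboot.host.and.alert.management.on.heartbeat.timeout=false
```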
---
@prashanthr2 ...and now that I have a working primary storage with all my volume snapshots to import, Tools > Import-Export Instances (or Import Data Volumes) does not work. It does not display any of the snapshots that are on the share, so your trick to import that way seemingly will not work. These are not qcow2 images in a root directory; this is a byte-for-byte copy of the secondary storage from another cluster, where all the scheduled volume snapshots reside. So it would need to scan /snapshots/* -- the snapshots created by CloudStack are also UUID-named with no file extension. I attempted to import by typing the path of the UUID snapshot; what I want is to create an instance on a local disk from the snapshot on shared storage. I will try a shared offering - maybe I can migrate it after adding it.

Update: It still would not work (image not found or invalid) - I'm assuming because it is in a subdirectory. What did work was copying the snapshot file to the local disk storage pool and then importing.
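Since copying the file out of the snapshots/ subtree into the flat pool directory is what worked, here is a rough sketch of automating that step; the paths and the qemu-img check are assumptions about a sane workflow, not an official procedure.

```python
# Sketch: walk the read-only copy of the old secondary storage, find the
# UUID-named snapshot files (QCOW2 images without an extension), and copy
# them into the flat root of a local storage pool so the "Import QCOW2 image"
# tool can see them. Paths below are placeholders.
import os
import shutil
import subprocess

OLD_SECONDARY = "/mnt/old-secondary/snapshots"    # replicated copy, mounted read-only
IMPORT_POOL   = "/var/lib/libvirt/images/import"  # pool the import tool scans

os.makedirs(IMPORT_POOL, exist_ok=True)

for root, _dirs, files in os.walk(OLD_SECONDARY):
    for name in files:
        src = os.path.join(root, name)
        # Confirm the file really is a QCOW2 image before copying it.
        info = subprocess.run(["qemu-img", "info", src],
                              capture_output=True, text=True)
        if info.returncode != 0 or "file format: qcow2" not in info.stdout:
            continue
        dst = os.path.join(IMPORT_POOL, name + ".qcow2")  # give it an extension
        if not os.path.exists(dst):
            print(f"copying {src} -> {dst}")
            shutil.copy2(src, dst)
```

From there, each copied image can be selected in the import dialog and mapped back to its original VM using the UUID-to-name list exported from the old database earlier in the thread.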