While trying out the tool I noticed that VMs that have disks attached through proxmox-csi-plugin can no longer be migrated. Proxmox complains that the disks are owned by a different VM, namely the non-existent VM 9999. I searched for a way to force Proxmox to migrate the VM anyway, but could not find one. The only approaches I found involved a lot of manual work and downtime for the VM being migrated, which is not desirable.
Then I realized that even if Proxmox allowed the migration, it would probably break the proxmox-csi-plugin's bookkeeping of the PVs: the volumeHandle and the nodeAffinity would no longer be correct.
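To illustrate why migration invalidates the bookkeeping: the plugin encodes the disk's location inside the PV's volumeHandle. A minimal sketch, assuming a region/zone/storage/disk layout for the handle (this format and all names below are illustrative assumptions; check `kubectl get pv -o yaml` on your cluster for the real value):

```python
# Hypothetical sketch of a volumeHandle as a location-encoding string.
# The 'region/zone/storage/disk' format is an assumption for illustration.
def parse_volume_handle(handle: str) -> dict:
    """Split an assumed 'region/zone/storage/disk' handle into its parts."""
    region, zone, storage, disk = handle.split("/")
    return {"region": region, "zone": zone, "storage": storage, "disk": disk}

# Made-up example handle: the zone names the Proxmox node that owns the
# disk, so moving the VM to another node silently invalidates this field.
parts = parse_volume_handle("cluster-1/pve-node-a/local-zfs/vm-9999-pvc-1234")
print(parts["zone"])  # prints "pve-node-a"
```

If the VM were migrated to another Proxmox node, the zone baked into this string (and the matching nodeAffinity on the PV) would still point at the old node.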
So my question/feature request is: would there be any way to support migrating VMs with disks provisioned by proxmox-csi-plugin? Ideally without any downtime for the VM and/or pods, but if downtime is unavoidable, then at least in an automated fashion and with minimal interruption.
Community Note
Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
PS. Thank you for this awesome project, integrating Kubernetes with Proxmox is a great idea and the implementation works really really well.
thombohlk changed the title from "Support VMs to be migrated between Proxmox nodes" to "Support migrating VMs between Proxmox nodes" on Oct 16, 2024
Hi, I hope you have found an answer already. I want to share some context on the limitations of Kubernetes and Proxmox. Some Kubernetes values cannot be changed after they are set, and Proxmox can only migrate a VM if the disk belongs to that VM.
In Kubernetes, a PV/PVC is not tied to a single VM. That is why the disk is registered under the placeholder VM ID 9999. The disk cannot simply be renamed either: the block device's name is the PV's ID in Kubernetes, and Proxmox stores no other metadata for the block device, only the name.
Lastly, Kubernetes has a node drain feature that evicts pods and reschedules them elsewhere. Rescheduling workloads rather than moving machines is the core idea of Kubernetes.
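The drain workflow can be sketched as follows. This is a dry-run that only prints the commands it would run, and `worker-1` is a placeholder node name, not anything from this thread:

```shell
#!/usr/bin/env sh
# Sketch of the node-drain alternative to VM migration. This dry-run only
# echoes the kubectl commands; remove the echoes to run them for real.
# NODE is a placeholder for your Kubernetes worker node's name.
NODE="worker-1"

# Mark the node unschedulable so no new pods land on it.
echo "kubectl cordon ${NODE}"

# Evict running pods; Kubernetes reschedules them on other nodes
# (subject to each PV's nodeAffinity, which may pin pods to one Proxmox node).
echo "kubectl drain ${NODE} --ignore-daemonsets --delete-emptydir-data"

# After maintenance on the underlying VM, allow scheduling again.
echo "kubectl uncordon ${NODE}"
```

Note that PVs provisioned by the plugin carry nodeAffinity, so pods using them can only be rescheduled onto nodes that can still reach the disk.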
You may also have noticed performance issues during VM migration. Live migration is likely not the best fit for what you need.
PS. Please let me know why you need VM migration if you feel it is important for your situation.