Describe the bug
I added a new disk to each of my nodes to replace the existing disks as the main storage.
For each node, I went into the disk settings, disabled scheduling for the old disk, and enabled eviction. The new disks are enabled for scheduling.
It's been hours and not a single replica has been moved. I can't see any logs in the Longhorn UI. The new disks do work: when I force-delete a replica, it rebuilds fine on the other disks.
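In case it helps, this is roughly how I would double-check the per-disk settings outside the UI (a minimal sketch assuming a default install in the `longhorn-system` namespace; `<node-name>` is a placeholder):

```sh
# Inspect the Longhorn Node object for one node to see the per-disk settings
# (namespace and CRD name assume a default Longhorn installation).
kubectl -n longhorn-system get nodes.longhorn.io <node-name> -o yaml
```

If I understand the CR layout correctly, the old disk's entry under `spec.disks` should show `allowScheduling: false` and `evictionRequested: true`, and the new disk's entry should show `allowScheduling: true`.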
To Reproduce
Steps to reproduce the behavior:
- Add a new disk to a node
- Set up the new disk in Longhorn with scheduling enabled
- Set up the old disk in Longhorn with scheduling disabled and eviction enabled
- Wait and watch nothing happen
Expected behavior
Replicas are moved off the evicting disks onto the new disks. If they cannot be moved, show logs or information on the volume page, the node page, or the main page.
Log
If needed, I can generate a support bundle.
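In the meantime, this is the kind of log capture I can attach right away (a sketch assuming the manager pods carry the default `app=longhorn-manager` label):

```sh
# Tail the longhorn-manager pods, which should log replica scheduling/eviction activity
# (the app=longhorn-manager label assumes the default Longhorn manifests).
kubectl -n longhorn-system logs -l app=longhorn-manager --tail=200
```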
Environment:
- Longhorn version: 1.1.2
- Installation method (e.g. Rancher Catalog App/Helm/Kubectl): Rancher Catalog
- Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: K3s 1.21.4
- Number of management nodes in the cluster: 1
- Number of worker nodes in the cluster: 3 (the management node is also a worker, so 4 nodes total)
- Node config
  - OS type and version: Raspbian 10 (Buster)
  - CPU per node: 4 cores
  - Memory per node: 8 GB
  - Disk type (e.g. SSD/NVMe): SD card for the main OS, external 1 TB USB HDD (nodes 2, 3, and 4)
  - Network bandwidth between the nodes: 1 Gbps
- Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal): Baremetal
- Number of Longhorn volumes in the cluster: 11
Additional context
N/A