[BUG] Unable to export RAID1 bdev in degraded state #5650
Closed
Describe the bug
Unable to export a RAID1 bdev in degraded state.
The RAID1 bdev should be exportable if there is at least one healthy lvol.
To Reproduce
Steps to reproduce the behavior:
- Launch SPDK target
- Prepare a bdev lvol
- Create a bdev raid based on the newly created lvol and a non-existing lvol:
sudo ~/go/src/github.com/longhorn/spdk/scripts/rpc.py bdev_raid_create -n raid-degraded -r raid1 -b "<A Valid Lvol> <A Non-existing Lvol>"
- Create an NVMe-oF subsystem and add the RAID bdev as its namespace:
sudo ~/go/src/github.com/longhorn/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2023-01.io.spdk:testvol -a -s SPDK00000000000020 -d SPDK_Controller
sudo ~/go/src/github.com/longhorn/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2023-01.io.spdk:testvol raid-degraded
The namespace-add command errors out (a full command sketch follows below).
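For completeness, a minimal end-to-end sketch of the reproduction. The malloc bdev, lvstore, and lvol names (malloc0, lvs0, lvol0) and the never-created base bdev name (lvs0/missing) are illustrative placeholders, the rpc.py path is shortened to scripts/rpc.py, and the sizes are arbitrary:
# Back an lvstore with a 64 MiB malloc bdev (512-byte blocks).
sudo scripts/rpc.py bdev_malloc_create -b malloc0 64 512
sudo scripts/rpc.py bdev_lvol_create_lvstore malloc0 lvs0
# Create the one healthy lvol (32 MiB); the second base bdev is never created.
sudo scripts/rpc.py bdev_lvol_create -l lvs0 lvol0 32
# RAID1 over one existing and one non-existing base bdev, i.e. a degraded RAID1.
sudo scripts/rpc.py bdev_raid_create -n raid-degraded -r raid1 -b "lvs0/lvol0 lvs0/missing"
# Export over NVMe-oF (transport setup shown for completeness); the final add_ns call is the one that errors out.
sudo scripts/rpc.py nvmf_create_transport -t TCP
sudo scripts/rpc.py nvmf_create_subsystem nqn.2023-01.io.spdk:testvol -a -s SPDK00000000000020 -d SPDK_Controller
sudo scripts/rpc.py nvmf_subsystem_add_ns nqn.2023-01.io.spdk:testvol raid-degraded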
Expected behavior
The degraded RAID1 bdev can be added as an NVMe-oF subsystem namespace.
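As a sanity check, the RAID bdev's state can be inspected before exporting it. This is a sketch assuming the Longhorn SPDK fork keeps the upstream bdev_raid_get_bdevs and bdev_get_bdevs RPCs:
# List raid bdevs in every state ("all" covers online, configuring, and offline).
sudo scripts/rpc.py bdev_raid_get_bdevs all
# Confirm the degraded raid is still registered as a regular bdev.
sudo scripts/rpc.py bdev_get_bdevs -b raid-degraded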
Additional context
cc @longhorn/dev-data-plane