Deviation from expected behavior:
The topology storage class created by the import script has invalid JSON in the topologyConstrainedPools parameter (a trailing comma before the closing bracket). As a result, PVCs that use the class fail to provision.
Expected behavior:
The storage class should be created with valid JSON so it is usable as-is. Current workaround:
kubectl get sc ceph-rbd-topology -o=yaml > crtSc.yaml
Edit topologyConstrainedPools in crtSc.yaml to remove the trailing comma (example of the corrected value below)
kubectl replace -f crtSc.yaml --force
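For reference, a sketch of what the corrected topologyConstrainedPools value should look like after the edit (pool and zone names taken from the operator log below); the only change from the generated value is dropping the comma after the last array element:

[
  {"poolName":"pool-zone1","domainSegments":[{"domainLabel":"zone","value":"zone1"}]},
  {"poolName":"pool-zone2","domainSegments":[{"domainLabel":"zone","value":"zone2"}]},
  {"poolName":"pool-zone3","domainSegments":[{"domainLabel":"zone","value":"zone3"}]},
  {"poolName":"pool-zone4","domainSegments":[{"domainLabel":"zone","value":"zone4"}]}
]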
How to reproduce it (minimal and precise):
Generate params using create-external-cluster-resources.py with the flags --topology-pools=pool-zone1,pool-zone2,pool-zone3,pool-zone4 --topology-failure-domain-label=zone --topology-failure-domain-values=zone1,zone2,zone3,zone4
Use the output to create the storage class using import-external-cluster.sh
Label nodes with topology.kubernetes.io/zone as appropriate
Create a PVC that uses the ceph-rbd-topology storage class (see the example commands below)
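A minimal sketch of the last two steps, assuming a node named node-a placed in zone1 (the node and PVC names are placeholders, not taken from this report):

kubectl label node node-a topology.kubernetes.io/zone=zone1

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd-topology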
Logs to submit:
Operator's log
I1229 19:14:25.693836 1 event.go:389] "Event occurred" object="cnpg-system/test-db-1" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message=<
failed to provision volume with StorageClass "ceph-rbd-topology": rpc error: code = InvalidArgument desc = failed to parse JSON encoded topology constrained pools parameter ([
{"poolName":"pool-zone1",
"domainSegments":[
{"domainLabel":"zone","value":"zone1"}]},
{"poolName":"pool-zone2",
"domainSegments":[
{"domainLabel":"zone","value":"zone2"}]},
{"poolName":"pool-zone3",
"domainSegments":[
{"domainLabel":"zone","value":"zone3"}]},
{"poolName":"pool-zone4",
"domainSegments":[
{"domainLabel":"zone","value":"zone4"}]},
]
): invalid character ']' looking for beginning of value
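The trailing comma after the fourth pool entry, just before the closing ], is what the provisioner's JSON decoder rejects, since JSON does not allow trailing commas. One way to confirm the parameter itself is the problem (a sketch, assuming jq is available):

kubectl get sc ceph-rbd-topology -o jsonpath='{.parameters.topologyConstrainedPools}' | jq .

jq reports a parse error on the generated value and parses it cleanly once the comma is removed.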
Environment:
OS (e.g. from /etc/os-release): Ubuntu 24.04.1 LTS
Kernel (e.g. uname -a): 6.8.0-51-generic
Cloud provider or hardware configuration: Proxmox VM
Rook version (use rook version inside of a Rook Pod): 1.16.0
Storage backend version (e.g. for ceph do ceph -v): 19.2.0
Kubernetes version (use kubectl version): 1.31.4+k3s1
Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): k3s, ceph in Proxmox 8.3.1
Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox): HEALTH_WARN 4 pool(s) have no replicas configured