feat(cstor-volume-mgmt): add replica scale down support #1499
Conversation
Signed-off-by: mittachaitu <sai.chaithanya@mayadata.io>
Will it be possible to draft the PR message in a better / simpler way?
@AmitKumarDas updated the PR message with a detailed explanation.
Signed-off-by: mittachaitu <sai.chaithanya@mayadata.io>
LGTM
Signed-off-by: mittachaitu <sai.chaithanya@mayadata.io>
lgtm .. it will be good to test once more locally, after all the changes have been merged in the other repos.
…ive#1499) Signed-off-by: mittachaitu <sai.chaithanya@mayadata.io>
Signed-off-by: mittachaitu <sai.chaithanya@mayadata.io>
What this PR does / why we need it:
This PR adds replica scale down support in cstor-volume-mgmt (the sidecar of cstor-istgt).
Note:
Pre-requisites: All the cstorvolume replicas (CVRs) should be in Healthy state, except the cstorvolume replica that is going to be deleted (i.e. the deleting CVR can be in any state).

Steps to execute by the user/operator:
1. kubectl edit cstorvolume <cstorvolume_name> -n openebs: decrease the desiredReplicationFactor and remove the corresponding replica's entry from spec.replicaDetails.knownReplicas (both should be updated at a time).
2. Delete the corresponding CVR using the kubectl delete command.

The user has the following PVC setup:
sai@sai:~$ kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
demo-vol-claim   Bound    pvc-79ba260d-fad6-11e9-8791-42010a8000c5   5G         RWO            cstor-sc       10m
Following are the cstorvolume and CVRs related to the above PVC.
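A hedged sketch of how these objects can be listed, assuming they live in the default openebs namespace and that cvr is the registered short name for cstorvolumereplicas:

```
# List the cstorvolume and its replicas (openebs namespace assumed)
kubectl get cstorvolume -n openebs
kubectl get cvr -n openebs
```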
Explanation for the above steps:
Suppose the user wants to remove the pvc-79ba260d-fad6-11e9-8791-42010a8000c5-cstor-sparse-pool-mch2 CVR. The user should follow the steps mentioned above. The replicaID of that CVR can be read from its .spec.replicaID; in this case the replicaID is AEB135CD6AA1E65EF2DAEA085A4C9BA7.
Following is a snippet of the cstorvolume:
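Snippet 1 below is an illustrative sketch of the relevant cstorvolume fields before the edit; only the replicaID AEB135CD6AA1E65EF2DAEA085A4C9BA7 comes from this walkthrough, while the other replica IDs and values are placeholders.

```
# Snippet 1 (illustrative sketch; placeholders marked with <>)
spec:
  consistencyFactor: 2
  desiredReplicationFactor: 3
  replicationFactor: 3
  replicaDetails:
    knownReplicas:
      AEB135CD6AA1E65EF2DAEA085A4C9BA7: "<replica-detail>"   # entry for the CVR being removed
      <replica-id-2>: "<replica-detail>"
      <replica-id-3>: "<replica-detail>"
```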
The above snippet will be updated as follows using kubectl edit.
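Snippet 2 is the same sketch after the kubectl edit: desiredReplicationFactor is lowered to 2 and the AEB135CD... entry is removed from spec.replicaDetails.knownReplicas; replicationFactor is assumed to still show the old value until the scale down is processed.

```
# Snippet 2 (illustrative sketch after the user edit)
spec:
  consistencyFactor: 2
  desiredReplicationFactor: 2    # decreased by the user
  replicationFactor: 3           # assumed unchanged until cstor-volume-mgmt processes the scale down
  replicaDetails:
    knownReplicas:
      <replica-id-2>: "<replica-detail>"
      <replica-id-3>: "<replica-detail>"
```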
After editing, snippet 1 will look like snippet 2 (the desired replication factor is updated to 2 and the replicaID entry [AEB135CD...] is removed from spec.replicaDetails.knownReplicas).
Events on cstorvolume:
If you observe, status.replicaDetails.knownReplicas is updated with the latest info, and the replicationFactor and consistencyFactor are updated to 2.
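A hedged sketch of how the cstorvolume can look once the scale down has been handled, based only on the fields called out above (replica IDs and values remain placeholders):

```
# Illustrative sketch after the scale down is processed
spec:
  consistencyFactor: 2
  desiredReplicationFactor: 2
  replicationFactor: 2
status:
  replicaDetails:
    knownReplicas:
      <replica-id-2>: "<replica-detail>"
      <replica-id-3>: "<replica-detail>"
```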
Now, delete the CVR pvc-79ba260d-fad6-11e9-8791-42010a8000c5-cstor-sparse-pool-mch2 using the kubectl delete command.
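A hedged example of the delete command, assuming the CVR lives in the openebs namespace and that cvr is the registered short name for cstorvolumereplicas:

```
kubectl delete cvr pvc-79ba260d-fad6-11e9-8791-42010a8000c5-cstor-sparse-pool-mch2 -n openebs
```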
Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #

Special notes for your reviewer:
Checklist:
- documentation tag
- breaking-changes tag
- requires-upgrade tag