Add openebs snapshot feature #37
Conversation
Signed-off-by: prateekpandey14 <prateekpandey14@gmail.com>
Force-pushed from f028ac8 to 37b9fc1
Signed-off-by: prateekpandey14 <prateekpandey14@gmail.com>
    snapshot.alpha.kubernetes.io/snapshot: fastfurious
spec:
  storageClassName: snapshot-promoter
  accessModes: [ "ReadWriteOnce" ]
should this be read-only?
openebs/pkg/v1/snapshot_api.go
//Marshal serializes the value provided into a YAML document
yamlValue, _ := yaml.Marshal(snap)

glog.V(2).Infof("[DEBUG] snapshot Spec Created:\n%v\n", string(yamlValue))
Is it a good practice to use [DEBUG]? It will be good to follow the conventions mentioned here for error levels. https://github.com/kubernetes/community/blob/master/contributors/devel/logging.md
I will clean this up and set proper log levels.
@prateekpandey14 - can you add the e2e tests to the ci folder to do the following:
1. Get the maya-apiserver endpoint (MAPI_ADDR) from the maya service and set it as an env variable, which is needed for making API requests.
2. Add the deployment yaml to deploy the snapshot-controller and provisioner.
Signed-off-by: prateekpandey14 <prateekpandey14@gmail.com>
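A possible shape for that CI step (a sketch only; the service name, namespace, and yaml path are assumptions and would need to match the manifests actually added in this PR):

# Derive MAPI_ADDR from the maya-apiserver service, then deploy the snapshot
# controller and provisioner (service name, namespace and yaml path are assumptions).
MAPI_SVC_ADDR=`kubectl get service -n openebs maya-apiserver-service -o json | grep clusterIP | awk -F\" '{print $4}'`
export MAPI_ADDR="http://${MAPI_SVC_ADDR}:5656"
kubectl apply -f ci/snapshot-operator.yaml   # hypothetical path for the controller + provisioner deployment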
Force-pushed from fe88c48 to 4401144
Signed-off-by: prateekpandey14 <prateekpandey14@gmail.com>
Force-pushed from a487c05 to a866cf7
Force-pushed from 7624de0 to 377d3ca
Force-pushed from 6744844 to 7727b49
Please resolve the merge conflicts
@prateekpandey14 I see a lot of redundant yaml files between CI and the examples. Can they be refactored to remove the redundancy? Also, if you are using the helm chart to install the OpenEBS maya-apiserver etc., can you remove the openebs-operator listed here?
Force-pushed from cab106a to c7f8689
@@ -0,0 +1,189 @@
# Define the Service Account
How do we keep this file in sync with the actual openebs-operator.yaml? Is this file really required? Why not install via helm?
With helm there is an issue getting the OPENEBS_MAPI_SERVICE env (its name has an auto-generated prefix) in order to derive MAPI_ADDR. It is set during the OpenEBS installation with helm (inside the provisioner pod) and cannot be imported into the snapshot-controller and snapshot-provisioner pods.
The openebs-operator yaml file uses ci images of maya-apiserver, provisioner and jiva.
Can we use the following:
helm install stable/openebs --name ci --namespace openebs --set apiserver.imageTag="ci",jiva.replicas="1",jiva.imageTag="ci",provisioner.imageTag="ci"
The MAPI_ADDR can then be obtained using:
MAPI_SVC_ADDR=`kubectl get service -n openebs ci-openebs-apiservice -o json | grep clusterIP | awk -F\" '{print $4}'`
It will be good to avoid duplicating the operator.yaml files. In fact, we should reduce its usage and eventually remove it. The operator yaml should be auto-generated from the helm chart.
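One way such auto-generation could look, assuming Helm 2 and the stable repo (a sketch; the output file name is arbitrary):

# Render the chart to a static operator yaml instead of maintaining a hand-written copy (Helm 2 syntax).
helm fetch stable/openebs --untar
helm template ./openebs --name ci --namespace openebs \
  --set apiserver.imageTag="ci",jiva.replicas="1",jiva.imageTag="ci",provisioner.imageTag="ci" \
  > openebs-operator-generated.yaml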
How can we export this inside the pod containers dynamically?
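One possible approach, building on the lookup above (a sketch; the __MAPI_ADDR__ placeholder and the snapshot-operator.yaml file are hypothetical):

# Look up the maya-apiserver service IP and substitute it into the snapshot
# controller/provisioner deployment before applying it, so the pods get MAPI_ADDR
# without a hardcoded operator yaml.
MAPI_SVC_ADDR=`kubectl get service -n openebs ci-openebs-apiservice -o json | grep clusterIP | awk -F\" '{print $4}'`
sed "s|__MAPI_ADDR__|http://${MAPI_SVC_ADDR}:5656|g" snapshot-operator.yaml | kubectl apply -f -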
Force-pushed from 10c4319 to 3ed8f76
Signed-off-by: prateekpandey14 <prateekpandey14@gmail.com>
Force-pushed from 3ed8f76 to 154a776
//}

//snapshotName := &tags["kubernetes.io/created-for/snapshot/name"]
snapshotName := createSnapshotName(pv.Name)
shouldn't we pass the snapname here as well to generate a unique snapname?
Added the snapshot name as part of the unique snapshot name.
volumeSpec.Metadata.Labels.Namespace = pvc.Namespace
volumeSpec.Metadata.Name = pvName

err := openebsVol.ListVolume(pvRefName, pvRefNamespace, &oldvolume)
Snapshot Provisioner should pass the source volume name to the maya-apiserver. The API server should have the intelligence to retrieve the required clone IP. This way the snapshot provisioner doesn't have to depend on the internal details of the volume types supported by the maya-apiserver.
The storage class name passed should be that of the source volume, so that the maya-apiserver can bring up the cloned volume with the same parameters as the source volume. In case the cloned volume needs to override some values, they should be mentioned in the PVC.
On a second thought, it might be better to have the maya-apiserver fetch the storage class associated with the source volume, while it is fetching the source controller IP.
Added. Now the snapshot-provisioner will fetch the SC from the source volume.
Signed-off-by: prateekpandey14 <prateekpandey14@gmail.com>
1. Now the snapshot provisioner will use the StorageClass of the source volume for promoting a snapshot as a new PV.
2. Add unit test for snapshot API requests.
Signed-off-by: prateekpandey14 <prateekpandey14@gmail.com>
This is an initial PR to support the workflow for snapshot creation. A new PR will be raised that will harden the workflow for error conditions; the API between the snapshot-controller and maya-apiserver will also change to pass the PV name instead of the storage class name.
* Add OpenEBS Plugin for snapshot-controller and snapshot-provisioner
* Add e2e test for openebs-snapshot and creating a PV from snapshot
* Update README with snapshot clone/restore steps
* Add unit test for snapshot API requests
Signed-off-by: prateekpandey14 <prateekpandey14@gmail.com>
This PR will add OpenEBS as a snapshot provider.
Below are the updates to track the features:
Fixes: Add Snapshot feature for OpenEBS in K8s snapshot API openebs/openebs#1046
Start Snapshot Controller:
(assuming you have a running Kubernetes local cluster):
Note: Get the maya-apiserver address and export it as an env variable
export MAPI_ADDR=http://172.18.0.5:5656
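If maya-apiserver is already running in the cluster, the address can be looked up from its service instead of hardcoded (a sketch; the service name and namespace depend on how OpenEBS was installed and are assumptions here):

MAPI_SVC_ADDR=`kubectl get service -n openebs maya-apiserver-service -o json | grep clusterIP | awk -F\" '{print $4}'`
export MAPI_ADDR="http://${MAPI_SVC_ADDR}:5656"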
Start the snapshot controller and provisioner:
_output/bin/snapshot-provisioner -kubeconfig=${HOME}/.kube/config
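The command above starts the provisioner; assuming the companion snapshot-controller binary is built to the same _output/bin path, it is started the same way:

_output/bin/snapshot-controller -kubeconfig=${HOME}/.kube/config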
Prepare a PV to take a snapshot of. You can use either OpenEBS dynamically provisioned PVs or static PVs.
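For example, a dynamically provisioned source claim might look like this (a sketch; the claim name and the openebs-standard storage class are assumptions):

# Create a source PVC backed by OpenEBS dynamic provisioning.
cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol1-claim
spec:
  storageClassName: openebs-standard
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 5G
EOF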
Create a snapshot
Now we have a PVC bound to a PV that contains some data. We want to take a snapshot of this data so we can restore it later.
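A VolumeSnapshot referencing that claim might look like the following (a sketch; the API group follows the external-storage snapshot CRDs, and the claim name matches the assumed example above):

# Create a snapshot named snapshot-demo from the source PVC.
cat <<EOF | kubectl apply -f -
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: snapshot-demo
spec:
  persistentVolumeClaimName: demo-vol1-claim
EOF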
Check that the VolumeSnapshot and VolumeSnapshotData objects are created
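For example:

kubectl get volumesnapshot -o yaml
kubectl get volumesnapshotdata -o yaml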
Restore Snapshot
To restore the snapshot snapshot-demo as a new PV, create a PVC that uses the snapshot-promoter storage class and references the snapshot through the snapshot.alpha.kubernetes.io/snapshot annotation, as in the sketch below.
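A minimal restore claim, following the snippet reviewed earlier (the claim name and requested size are assumptions; the annotation points at the snapshot-demo snapshot created above):

# Promote snapshot-demo as a new PV via the snapshot-promoter storage class.
cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-snap-vol-claim
  annotations:
    snapshot.alpha.kubernetes.io/snapshot: snapshot-demo
spec:
  storageClassName: snapshot-promoter
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 5G
EOF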
Delete the Snapshot:
$ kubectl delete volumesnapshot/snapshot-demo
volumesnapshot "snapshot-demo" deleted
Verify the volumesnapshot object:
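For example, after the delete the snapshot and its VolumeSnapshotData should no longer be listed:

kubectl get volumesnapshot
kubectl get volumesnapshotdata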