
Modified influxdb petset to provision persistent volume. #28840

Merged
merged 1 commit on Aug 2, 2016

Conversation

jszczepkowski
Contributor

[WIP] Modified influxdb petset to create claim.

@jszczepkowski
Contributor Author

@bprashanth
I've moved the creation of the claim from a separate file into the pet set definition. Unfortunately, it doesn't work:

$ kubectl get pv
NAME          CAPACITY   ACCESSMODES   STATUS     CLAIM                                                                      REASON    AGE
influxdb-pv   10Gi       RWO,ROX       Released   kube-system/influxdb-persistent-storage-monitoring-influxdb-grafana-v3-0             17m
$ kubectl describe pvc  influxdb-persistent-storage-monitoring-influxdb-grafana-v3-0  --namespace=kube-system
Name:       influxdb-persistent-storage-monitoring-influxdb-grafana-v3-0
Namespace:  kube-system
Status:     Pending
Volume:     influxdb-pv
Labels:     k8s-app=influxGrafana,kubernetes.io/cluster-service=true,version=v3
Capacity:   0
Access Modes:   
NAME                               READY     STATUS              RESTARTS   AGE
monitoring-influxdb-grafana-v3-0   0/2       ContainerCreating   0          15m
$ kubectl describe pods monitoring-influxdb-grafana-v3-0 --namespace=kube-system
Name:       monitoring-influxdb-grafana-v3-0
Namespace:  kube-system
Node:       kubernetes-minion-group-ciu0/10.240.0.3
Start Time: Tue, 12 Jul 2016 16:20:41 +0200
Labels:     k8s-app=influxGrafana,kubernetes.io/cluster-service=true,version=v3
Status:     Pending
IP:     
Controllers:    PetSet/monitoring-influxdb-grafana-v3
Containers:
  influxdb:
    Container ID:   
    Image:      eu.gcr.io/google_containers/heapster_influxdb:v0.5
    Image ID:       
    Ports:      8083/TCP, 8086/TCP
    QoS Tier:
      cpu:  Guaranteed
      memory:   Guaranteed
    Limits:
      cpu:  100m
      memory:   500Mi
    Requests:
      cpu:      100m
      memory:       500Mi
    State:      Waiting
      Reason:       ContainerCreating
    Ready:      False
    Restart Count:  0
    Environment Variables:
  grafana:
    Container ID:   
    Image:      eu.gcr.io/google_containers/heapster_grafana:v2.6.0-2
    Image ID:       
    Port:       
    QoS Tier:
      cpu:  Guaranteed
      memory:   Guaranteed
    Limits:
      memory:   100Mi
      cpu:  100m
    Requests:
      cpu:      100m
      memory:       100Mi
    State:      Waiting
      Reason:       ContainerCreating
    Ready:      False
    Restart Count:  0
    Environment Variables:
      INFLUXDB_SERVICE_URL:     http://monitoring-influxdb:8086
      GF_AUTH_BASIC_ENABLED:        false
      GF_AUTH_ANONYMOUS_ENABLED:    true
      GF_AUTH_ANONYMOUS_ORG_ROLE:   Admin
      GF_SERVER_ROOT_URL:       /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  influxdb-persistent-storage:
    Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  influxdb-persistent-storage-monitoring-influxdb-grafana-v3-0
    ReadOnly:   false
  grafana-persistent-storage:
    Type:   EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium: 
  default-token-dte16:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-dte16
Events:
  FirstSeen LastSeen    Count   From                    SubobjectPath   Type        Reason      Message
  --------- --------    -----   ----                    -------------   --------    ------      -------
  17m       17m     2   {default-scheduler }                    Warning     FailedScheduling    [PersistentVolume 'influxdb-pv' not found, PersistentVolume 'influxdb-pv' not found]
  17m       17m     1   {default-scheduler }                    Normal      Scheduled   Successfully assigned monitoring-influxdb-grafana-v3-0 to kubernetes-minion-group-ciu0
  15m       20s     8   {kubelet kubernetes-minion-group-ciu0}          Warning     FailedMount Unable to mount volumes for pod "monitoring-influxdb-grafana-v3-0_kube-system(cfa0e1ee-483b-11e6-97e3-42010af00002)": timeout expired waiting for volumes to attach/mount for pod "monitoring-influxdb-grafana-v3-0"/"kube-system". list of unattached/unmounted volumes=[influxdb-persistent-storage]
  15m       20s     8   {kubelet kubernetes-minion-group-ciu0}          Warning     FailedSync  Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "monitoring-influxdb-grafana-v3-0"/"kube-system". list of unattached/unmounted volumes=[influxdb-persistent-storage]

Do you know what could be going wrong?

serviceName: monitoring-influxdb
volumeClaimTemplates:
- metadata:
    name: influxdb-persistent-storage
Contributor

This will show up as a really long PV name, influxdb-persistent-storage-monitoring-influxdb-grafana-v3-0; maybe we can shorten it?

@bprashanth
Contributor

Aren't there some GA features that depend on influx? Are you sure you want it to depend on an alpha feature? (It should be OK, but it sounds like priority inversion.)

@jszczepkowski
Contributor Author

Aren't there some GA features that depend on influx? Are you sure you want it to depend on an alpha feature? (It should be OK, but it sounds like priority inversion.)

AFAIK we don't have GA features depending on influx. I think it is fine to move it to a pet set for the 1.4 branch.

@jszczepkowski
Contributor Author

Whatever I do (provisioning the volumes myself or letting the pet set do it), I'm hitting the same problem with the claim missing from my pod: the pod hangs in the ContainerCreating state because the volume is not mounted.

@bprashanth
Contributor

The name on your volume mount needs to match the name on the volume claim. Check out this working example: https://github.com/kubernetes/kubernetes/blob/master/test/e2e/testing-manifests/petset/zookeeper/petset.yaml#L86
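For illustration, a minimal sketch of how the names have to line up in an apps/v1alpha1 PetSet (the mount path, claim spec, and storage size here are assumptions for the sketch, not the manifest from this PR):

apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: monitoring-influxdb-grafana-v3
  namespace: kube-system
spec:
  serviceName: monitoring-influxdb
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: influxGrafana
    spec:
      containers:
      - name: influxdb
        image: eu.gcr.io/google_containers/heapster_influxdb:v0.5
        volumeMounts:
        - name: influxdb-persistent-storage   # must match metadata.name of the claim template below
          mountPath: /data                    # assumption: wherever influxdb keeps its data
  volumeClaimTemplates:
  - metadata:
      name: influxdb-persistent-storage       # same name, so the pet set mounts the claim it creates
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi                       # assumption

The pet set creates one claim per pet from the template; the kubelet can only mount it if the volumeMounts entry refers to the same name.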

@k8s-github-robot added the do-not-merge and retest-not-required-docs-only labels and removed the retest-not-required-docs-only label on Jul 28, 2016
@jszczepkowski changed the title from "[WIP] Modified influxdb petset to create claim." to "Modified influxdb petset to provision persistent volume." on Jul 29, 2016
@jszczepkowski
Contributor Author

@bprashanth
The PR is ready for review now. I changed the code so that the pet set creates the persistent volume.

@jszczepkowski removed the do-not-merge label on Jul 29, 2016
@jszczepkowski
Contributor Author

This PR should fix #27470.

serviceName: monitoring-influxdb
volumeClaimTemplates:
- metadata:
    name: influxdb-ps
Contributor

FYI, this will give you a PV with a really long name, something like influxdb-ps-monitoring-influxdb-grafana-v3-{index}.

Contributor Author

That's fine.

Modified influxdb petset to provision pv.
@jszczepkowski
Contributor Author

@bprashanth
Comments applied, PTAL

@bprashanth added the release-note label and removed the release-note-label-needed label on Aug 2, 2016
@bprashanth added the lgtm label on Aug 2, 2016
@k8s-bot

k8s-bot commented Aug 2, 2016

GCE e2e build/test passed for commit f7167d1.

@k8s-github-robot

@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]

@k8s-bot

k8s-bot commented Aug 2, 2016

GCE e2e build/test passed for commit f7167d1.

@k8s-github-robot

Automatic merge from submit-queue

@k8s-github-robot merged commit cadee46 into kubernetes:master on Aug 2, 2016
@lavalamp
Member

lavalamp commented Aug 2, 2016

This is leaking resources in gce e2e.

@bprashanth
Contributor

It's probably the dynamically provisioned PVs that are not torn down on kube-down. The petset e2es clean up after themselves for this reason. I don't see an easy way to do this unless we have an ordered shutdown of the cluster (#16337).
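For reference, a manual cleanup along these lines would have to run before kube-down; the claim name below is hypothetical, following the <claim-template>-<petset>-<index> naming pattern discussed above:

# Hypothetical pre-kube-down cleanup: delete the claim the pet set created for pet 0,
# then check that the dynamically provisioned PV is reclaimed according to its reclaim policy.
kubectl delete pvc influxdb-ps-monitoring-influxdb-grafana-v3-0 --namespace=kube-system
kubectl get pv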

@bprashanth
Contributor

@lavalamp
Member

lavalamp commented Aug 2, 2016

@bprashanth Yeah I don't have any suggestions on how to fix this. But it can't get merged without a fix :)

@zmerlynn
Member

zmerlynn commented Aug 3, 2016

Without fixing this, kubernetes-e2e-gce-master-on-cvm is broken. Please revert the original PetSet or find a way to fix this.

Labels
lgtm, release-note, size/M

7 participants