Unable to start monitoring-influxdb-grafana-v3-0 using petset #28591
@ddysher
Here's the result:
I need to see your PetSet yaml. The message is saying that the PV was missing. Did you mean to have the PV auto-provisioned?
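For reference, if the PV is meant to be pre-provisioned by an admin rather than auto-provisioned, it would be created ahead of time with a manifest roughly like this (a sketch; the PV name, disk name, and GCE PD backing are assumptions, not taken from the thread):

```yaml
# Hypothetical pre-provisioned PV to satisfy a 1Gi ReadWriteOnce claim.
# pdName is a placeholder for a disk you created yourself beforehand.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: influxdb-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: influxdb-disk
    fsType: ext4
```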
I believe so. I'm walking through petset and this is what I got when running.
For monitoring, I didn't do any customization, so I believe the pet set yaml comes from here?
@jszczepkowski @piosz suggest modifying the petset to:

```yaml
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: monitoring-influxdb-grafana-v4
  # Note: Modified namespace
  namespace: default
  labels:
    name: grafana
    version: v4
spec:
  # This service must exist
  serviceName: monitoring-influxdb
  replicas: 1
  template:
    metadata:
      labels:
        # Note: Modified labels.
        name: grafana
        version: v4
      annotations:
        # This is an alpha safety hook for quorum databases. If it's false the petset won't scale.
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - image: gcr.io/google_containers/heapster_influxdb:v0.5
        name: influxdb
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 500Mi
        ports:
        - containerPort: 8083
        - containerPort: 8086
        volumeMounts:
        - name: influxdb-persistent-storage
          mountPath: /data
      - image: gcr.io/google_containers/heapster_grafana:v2.6.0-2
        name: grafana
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        # This variable is required to set up templates in Grafana.
        - name: INFLUXDB_SERVICE_URL
          value: http://monitoring-influxdb:8086
        # The following env variables are required to make Grafana accessible via
        # the kubernetes api-server proxy. On production clusters, we recommend
        # removing these env variables, setting up auth for grafana, and exposing
        # the grafana service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
        volumeMounts:
        - name: grafana-persistent-storage
          mountPath: /var
      volumes:
      # This is a list of hardcoded claims; specifying a PV here means the admin
      # will pre-provision it.
      - name: grafana-persistent-storage
        emptyDir: {}
  # This is a list of petset volumes. A new one will get created for each pet.
  volumeClaimTemplates:
  - metadata:
      # This name must match a mounted volume on a pet
      name: influxdb-persistent-storage
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
```

See inline comments and docs mentioned in #260 (comment) for more context. Also note that both petset and volume provisioning are in alpha.
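Since volumeClaimTemplates stamps out one claim per pet, the controller ends up creating a PVC for each pet automatically; roughly like the sketch below (the generated name follows the claim-template-name plus pet-name convention, which is an assumption about this alpha API, not something shown in the thread):

```yaml
# Roughly what the petset controller would create for pet 0 from the
# volumeClaimTemplate above; the alpha storage-class annotation triggers
# dynamic provisioning of a matching PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influxdb-persistent-storage-monitoring-influxdb-grafana-v4-0
  annotations:
    volume.alpha.kubernetes.io/storage-class: anything
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 1Gi
```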
But if you're provisioning the volume yourself, as part of kube-up, then why do you need petset? It doesn't look like you're using its DNS property either?
@bprashanth same thoughts here. What are the benefits of using petset for monitoring? It doesn't seem like the addon is stateful.
@bprashanth @ddysher
@bprashanth
Am I right?
I think serviceName is already in the spec: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-petset.yaml#L72. That makes scaling the rc hard. With petset you also get a DNS name, if you require it to cluster your scaled influx instances (most cluster software needs it): https://github.com/kubernetes/kubernetes.github.io/blob/master/docs/user-guide/petset.md#network-identity. If you can add a clustered influxdb example using petset, and an e2e that uses petset (we already have a few: https://github.com/kubernetes/kubernetes/blob/master/test/e2e/petset.go#L223), that would be great.
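For context, the serviceName referenced by the petset must point at a headless Service, which is what gives each pet its stable DNS name. A minimal sketch (the port and selector values here are assumptions chosen to match the manifest earlier in the thread):

```yaml
# Headless service governing the petset's network identity. clusterIP: None
# means DNS resolves directly to the pets, e.g.
# monitoring-influxdb-grafana-v4-0.monitoring-influxdb.default.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: monitoring-influxdb
  namespace: default
spec:
  clusterIP: None
  ports:
  - name: api
    port: 8086
  selector:
    name: grafana
```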
@bprashanth
Suggest not using petset till we have cluster shutdown sorted out, or meticulously deleting all resources from kube-down without relying on the order of shutdown of controllers/namespaces etc. (you need to continue deleting the PD like we previously did).
If PetSet is broken without a shutdown API, can we back out the influxdb change?
This was fixed by #30080. Please reopen in case of more problems.
monitoring-influxdb-grafana-v3-0 stays in ContainerCreating.
logs from ps controller:
kubernetes version:
The cluster is brought up using cluster/kube-up.sh.
@jszczepkowski @bprashanth