https://<host>:10250/metrics 403 Forbidden #1834
Comments
I reconfigured prometheus-operator; except for kube-dns (0/6 up), everything else is fine. I use CoreDNS, so the problem is probably that the configured metrics port for CoreDNS is wrong.
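A quick way to confirm which port CoreDNS actually exposes metrics on (a sketch; the k8s-app=kube-dns label and port 9153 are the usual kubeadm/CoreDNS defaults, not taken from this report):
# Inspect the ports exposed by the kube-dns service and the CoreDNS pods
kubectl -n kube-system get svc kube-dns -o wide
kubectl -n kube-system get pods -l k8s-app=kube-dns -o jsonpath='{.items[*].spec.containers[*].ports}'
# CoreDNS normally serves Prometheus metrics on 9153; the old kube-dns sidecar used 10054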
Kubelet port 10255 is disabled by default.
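A quick way to see the difference between the two kubelet ports from a master (a sketch; <host> stands for a node address):
# The read-only port (10255) refuses connections when disabled; curl reports 000
curl -s -o /dev/null -w '%{http_code}\n' http://<host>:10255/metrics
# The secure port (10250) answers, but returns 401/403 without valid credentials
curl -sk -o /dev/null -w '%{http_code}\n' https://<host>:10250/metrics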
@ktpktr0: thanks for your help.
The Helm chart is not part of this repository anymore, thus closing.
What did you do?
git clone https://github.com/coreos/prometheus-operator.git
Install prometheus-operator using helm
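A sketch of what that install typically looked like with the coreos charts of that time (the repository URL and chart names below are assumptions, not copied from this report):
helm repo add coreos https://s3-eu-west-1.amazonaws.com/coreos-charts/stable/
helm install coreos/prometheus-operator --name prometheus-operator --namespace monitoring
helm install coreos/kube-prometheus --name kube-prometheus --namespace monitoring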
What did you expect to see?
All metrics work properly
Environment
K8s version: 1.10.3, Helm version: 2.10.0
Installed the k8s cluster using kubeadm.
[root@k8s-master1 ~]# kubectl get service -n monitoring
NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
alertmanager                          ClusterIP   10.106.190.23    <none>        9093/TCP            2h
alertmanager-operated                 ClusterIP   None             <none>        9093/TCP,6783/TCP   2h
grafana-grafana                       NodePort    10.97.176.217    <none>        80:30902/TCP        2h
kube-prometheus                       ClusterIP   10.97.175.195    <none>        9090/TCP            2h
kube-prometheus-alertmanager          ClusterIP   10.107.180.246   <none>        9093/TCP            2h
kube-prometheus-exporter-kube-state   ClusterIP   10.108.203.80    <none>        80/TCP              2h
kube-prometheus-exporter-node         ClusterIP   10.96.39.140     <none>        9100/TCP            2h
kube-prometheus-grafana               ClusterIP   10.101.106.125   <none>        80/TCP              2h
prometheus                            NodePort    10.108.156.204   <none>        9090:30900/TCP      2h
prometheus-operated                   ClusterIP   None             <none>        9090/TCP            2h
[root@k8s-master1 ~]# kubectl get servicemonitor -n monitoring
NAME CREATED AT
alertmanager 2h
grafana 2h
kube-prometheus 2h
kube-prometheus-alertmanager 2h
kube-prometheus-exporter-kube-controller-manager 2h
kube-prometheus-exporter-kube-dns 2h
kube-prometheus-exporter-kube-etcd 2h
kube-prometheus-exporter-kube-scheduler 2h
kube-prometheus-exporter-kube-state 2h
kube-prometheus-exporter-kubelets 2h
kube-prometheus-exporter-kubernetes 2h
kube-prometheus-exporter-node 2h
kube-prometheus-grafana 2h
prometheus 2h
prometheus-operator 2h
[root@k8s-master2 ~]# kubectl logs -f prometheus-prometheus-0 -n monitoring
Error from server (BadRequest): a container name must be specified for pod prometheus-prometheus-0, choose one of: [prometheus prometheus-config-reloader rules-configmap-reloader]
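The pod runs several containers, so one has to be named explicitly, for example:
kubectl logs -f prometheus-prometheus-0 -c prometheus -n monitoring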
[root@k8s-master2 ~]# kubectl logs -f prometheus-operator-d75587d6-8fct2 -n monitoring
github.com/coreos/prometheus-operator/pkg/prometheus/operator.go:317: Failed to list *v1.Secret: secrets is forbidden: User "system:serviceaccount:monitoring:prometheus-operator" cannot list secrets at the cluster scope
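That error means the prometheus-operator service account lacks cluster-wide read access to Secrets. The chart's own RBAC resources are the proper fix, but a minimal sketch of granting the permission by hand (the role and binding names here are illustrative):
kubectl create clusterrole prometheus-operator-secrets --verb=get,list,watch --resource=secrets
kubectl create clusterrolebinding prometheus-operator-secrets --clusterrole=prometheus-operator-secrets --serviceaccount=monitoring:prometheus-operator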
I tried the following methods, but they did not solve the problem.
First, update the kubelet service to include webhook authentication and restart it.
KUBEADM_SYSTEMD_CONF=/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Drop the cadvisor-port=0 flag and enable webhook token authentication on the kubelet
sed -e "/cadvisor-port=0/d" -i "$KUBEADM_SYSTEMD_CONF"
if ! grep -q "authentication-token-webhook=true" "$KUBEADM_SYSTEMD_CONF"; then
  sed -e "s/--authorization-mode=Webhook/--authentication-token-webhook=true --authorization-mode=Webhook/" -i "$KUBEADM_SYSTEMD_CONF"
fi
systemctl daemon-reload
systemctl restart kubelet
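With webhook authentication enabled, a scrape of the secure kubelet port should succeed once an authorized token is presented (a sketch; the prometheus service account name and the monitoring namespace are assumptions):
# Fetch the service account token and use it against the secure port
TOKEN=$(kubectl -n monitoring get secret $(kubectl -n monitoring get sa prometheus -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d)
curl -sk -H "Authorization: Bearer $TOKEN" https://<host>:10250/metrics | head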
Next, modify the kube-controller-manager and kube-scheduler manifests so their metrics can be scraped from outside localhost.
sed -e "s/- --address=127.0.0.1/- --address=0.0.0.0/" -i /etc/kubernetes/manifests/kube-controller-manager.yaml
sed -e "s/- --address=127.0.0.1/- --address=0.0.0.0/" -i /etc/kubernetes/manifests/kube-scheduler.yaml