Launch Elasticsearch and Kibana automatically #3292
Conversation
@@ -0,0 +1,25 @@
apiVersion: v1beta1
kind: Pod
Yes, I had intended to change this to have a replication controller (still with one controlled pod). I'll make the change.
@thockin: I am working on replication controllers. I will add them along with a few other enhancements to monitoring.
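As a sketch of what "a replication controller (still with one controlled pod)" looks like in the v1beta1 API of this era — all names, labels, and the image below are illustrative, not the PR's actual manifest:

```yaml
# Hypothetical v1beta1 manifest: one replication controller keeping a
# single Elasticsearch pod alive.
apiVersion: v1beta1
kind: ReplicationController
id: elasticsearch-logging-controller
desiredState:
  replicas: 1
  replicaSelector:
    name: elasticsearch-logging
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: elasticsearch-logging
        containers:
          - name: elasticsearch-logging
            image: example/elasticsearch:1.0
            ports:
              - containerPort: 9200
    labels:
      name: elasticsearch-logging
labels:
  name: elasticsearch-logging
```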
Force-pushed from dc655af to 1b2f57d.
@thockin: changed to using replication controllers; all yml files renamed to yaml and documentation updated. PTAL.
Force-pushed from 11e55e5 to 73a298b.
I just did one more tweak to add version tags to the two Dockerfiles that we create (both at tag 1.0 here), and I now reference this tag in the pod specs. This will allow us to update the logging work without breaking existing clusters.
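The tagging scheme described here, sketched in shell — the image names are hypothetical, not the PR's actual ones:

```shell
# Pin each logging image to an explicit version tag so that later pushes
# cannot change what an existing cluster pulls.
LOGGING_IMAGE_TAG="1.0"

# Build-time commands, commented out because they require docker:
# docker build -t example/fluentd-elasticsearch:${LOGGING_IMAGE_TAG} .
# docker build -t example/kibana-logging:${LOGGING_IMAGE_TAG} .

# Pod specs then reference the pinned tag instead of an implicit :latest.
echo "example/fluentd-elasticsearch:${LOGGING_IMAGE_TAG}"
```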
@@ -38,6 +38,7 @@ PORTAL_NET="10.0.0.0/16"

# Optional: Install node monitoring.
ENABLE_NODE_MONITORING=true
ELASTICSEARCH_LOGGING_REPLICAS=1
This should probably be lumped in with the other logging flags, rather than with node monitoring.
Oops, my mistake, fixed.
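Grouped as suggested, the config block might read as follows — a sketch assembled from the flags visible in this thread, with illustrative values:

```shell
# Optional: Install node monitoring.
ENABLE_NODE_MONITORING=true

# Optional: node-level and cluster-level logging flags, kept together.
ENABLE_NODE_LOGGING=true
LOGGING_DESTINATION=elasticsearch   # options: elasticsearch, gcp
ENABLE_CLUSTER_LOGGING=false
ELASTICSEARCH_LOGGING_REPLICAS=1
```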
If we use this at cluster setup, it needs to move to cluster/addons.
This PR is going to cause a headache for GKE. Please make sure to be delicate around the GKE provider on the OSS side or you'll break GKE Jenkins. Check it in as disabled there initially, create a bug, and assign it to me. I'll consider how to generically launch other pods when I'm looking at #3305. (Looking further, it looks like you default to off. Great!)
For GKE, can we just set
Yeah, you can just set it to false for now. It looks like it defaults to off if you don't plumb the variable through? (Or did I misread the PR?) You can just "export KUBERNETES_PROVIDER=gke" and then "go run hack/e2e.go -up" etc. to get GKE.
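The invocation described in the comment above, as a runnable sketch — the cluster bring-up line is commented out since it needs real GCP/GKE credentials:

```shell
# Select the GKE provider for the e2e harness, then drive cluster
# bring-up through it.
export KUBERNETES_PROVIDER=gke
# go run hack/e2e.go -up
```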
@thockin: I think I have done your bidding. PTAL.
@zmerlynn: running the e2e tests with the GKE provider did not give an encouraging result. Running with just
ENABLE_CLUSTER_LOGGING=false
if [[ "${ENABLE_NODE_LOGGING-}" == "true" ]]; then
  if [[ "${LOGGING_DESTINATION-}" == "elasticsearch" ]]; then
    if [[ "${ENABLE_CLUSTER_LOGGING-}" == "true" ]]; then
Do you really need all the logic, or can you just set the variable, or even just set it and comment the line out? Same for others...
I think I added that logic because you wanted the ELASTICSEARCH_LOGGING_REPLICAS definition guarded? Initially, I did just set the variable.
Hmm, I don't recall saying that. Maybe something I said was too cryptic? Either way, I think this should be a config file, not a program.
I am sure I just misunderstood you. Shall I remove two of the three tests, or just set the variable at the top level like I did originally?
I think simpler is better in the config files.
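The simpler form being suggested, as a sketch rather than the merged code — the config file just sets (or defaults) the variables, with no nested conditionals:

```shell
# Default assignments replace the triple guard entirely; any code that
# needs these values can read them unconditionally.
ENABLE_CLUSTER_LOGGING=${ENABLE_CLUSTER_LOGGING:-false}
ELASTICSEARCH_LOGGING_REPLICAS=${ELASTICSEARCH_LOGGING_REPLICAS:-1}

echo "${ELASTICSEARCH_LOGGING_REPLICAS}"
```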
@thockin: moved the predicate inside the bring-up and tear-down functions for cluster logging; removed the conditional guards for setting
@satnam: Sorry, to test on GKE you'll also need --check_version_skew=false.
Yes, I finally worked that out after trying various configurations, and good to know about
Force-pushed from f9ae7bb to 5090cff.
@@ -54,4 +54,6 @@ if [[ "${ENABLE_CLUSTER_DNS}" == "true" ]]; then
    | "${KUBE_ROOT}/cluster/kubectl.sh" create -f -
fi

setup-fluentd-elasticsearch-logging
last nit: rename to setup-logging
Changed the bring up and tear down names to
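Combining the rename with the earlier "predicate inside the function" change, the bring-up might look roughly like this — a sketch, not the merged code; only the `setup-logging` name comes from the thread:

```shell
# The enable-check lives inside the bring-up function itself, so the
# config files stay simple declarative variable settings.
ENABLE_CLUSTER_LOGGING=${ENABLE_CLUSTER_LOGGING:-false}

setup-logging() {
  if [[ "${ENABLE_CLUSTER_LOGGING}" != "true" ]]; then
    echo "cluster logging disabled; skipping Elasticsearch/Kibana bring-up"
    return 0
  fi
  # The real work would go here, e.g. piping manifests into
  # "${KUBE_ROOT}/cluster/kubectl.sh" create -f -
  echo "bringing up Elasticsearch and Kibana"
}

setup-logging
```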
Launch Elasticsearch and Kibana automatically
This PR changes our setup to automatically launch an Elasticsearch pod for ingesting logs and a Kibana pod for viewing logs when logging with Fluentd to Elasticsearch is selected. The URLs for Elasticsearch and the Kibana dashboard are reported at cluster creation time.
I have also changed the names of the Elasticsearch and Kibana pods and services (adding a -logging suffix) to avoid any name clashes with other instances of Elasticsearch in the cluster.