
Persistent influxdb data to directory on local host. #10528

Merged 1 commit into kubernetes:master on Jul 1, 2015

Conversation

dchen1107
Member

Fixed #10497

cc/ @saad-ali

@dchen1107
Member Author

cc/ @thockin @vmarmol

@thockin
Member

thockin commented Jun 30, 2015

LGTM, I assume that directory means something to influx

@k8s-bot

k8s-bot commented Jun 30, 2015

GCE e2e build/test passed for commit b6cf1134ed7e3b948e8207f5f76812edb423295c.

@vmarmol
Contributor

vmarmol commented Jun 30, 2015

LGTM

@saad-ali
Member

LGTM
@thockin I don't see this documented anywhere in the InfluxDB documentation, but /opt/influxdb/shared appears to be the default InfluxDB database storage directory. See link, for example.

@thockin
Member

thockin commented Jul 1, 2015

I am fine with this change, but I want Dawn to confirm that it is working properly, dumping state to this volume, and reducing the memory footprint.

@saad-ali
Member

saad-ali commented Jul 1, 2015

Actually it looks like we use a custom InfluxDB config to change the storage dir to /data/.
@dchen1107 is looking into it.

@dchen1107
Member Author

OK, I think this PR is ready for another review now. /opt/influxdb/shared is the default InfluxDB directory for configuration and storage, but in our Heapster version of InfluxDB the storage has been moved to /data/db:

https://github.com/GoogleCloudPlatform/heapster/blob/master/influxdb/config.toml#L88

I manually tested it by restarting the influxdb container on the node and checking the monitoring-grafana page; the stats collected before the kill still exist.
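
For readers following along, below is a minimal sketch of the kind of change being discussed: mounting a directory from the node over InfluxDB's storage path so the database under /data/db survives container restarts. This is an illustration under stated assumptions, not the exact diff of this PR; the pod name, image tag, volume name, and host path are made up, and the real addon runs under a replication controller rather than a bare pod.

```yaml
# Illustrative sketch only; names, image tag, and host path are assumptions, not this PR's exact diff.
apiVersion: v1
kind: Pod
metadata:
  name: monitoring-influxdb
spec:
  containers:
  - name: influxdb
    image: gcr.io/google_containers/heapster_influxdb:v0.3   # image name and tag are illustrative
    volumeMounts:
    - name: influxdb-persistent-storage
      mountPath: /data            # Heapster's config.toml points InfluxDB storage at /data/db
  volumes:
  - name: influxdb-persistent-storage
    hostPath:
      path: /mnt/influxdb         # a directory on the node, so the database outlives the container
```

With a mount like this, killing and restarting the influxdb container on the same node brings it back up against the same /data/db files, which is what the manual test above verifies.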

@k8s-bot

k8s-bot commented Jul 1, 2015

GCE e2e build/test passed for commit ff17623.

@zmerlynn
Member

zmerlynn commented Jul 1, 2015

LGTM

@zmerlynn added the lgtm label ("Looks good to me", indicates that a PR is ready to be merged) on Jul 1, 2015
@thockin
Member

thockin commented Jul 1, 2015

LGTM

@zmerlynn
Member

zmerlynn commented Jul 1, 2015

Someone should slap the magic label on it so I remember to merge it in the morning. Or I could do it now.

@thockin
Member

thockin commented Jul 1, 2015

done


@dchen1107
Member Author

OK, I still observed memory leakage in influxdb. After some research, it turns out to be a known issue in the influxdb 0.8 release, which is what we are using here: https://github.com/GoogleCloudPlatform/heapster/blob/master/influxdb/Dockerfile#L5.

The issue filed against influxdb and the latest updates can be found at influxdata/influxdb#1228 (comment).

But this PR, and the upcoming one adding resource limits for addons, can help with the situation even with the memory leak, since most of the stats are persisted to the local host and a restarted docker container will pick up the preexisting stats.

cc/ @thockin and @bgrant0607 for TL's lgtm. Thanks!
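
As a side note on the "resource limits for addons" mentioned above: in a pod spec that would roughly take the form of a memory limit on the influxdb container, so the container is killed and restarted once the 0.8 memory leak grows past the limit, and the host-backed data directory lets the restarted container pick up the existing stats. The fragment below is purely illustrative; the values are assumptions, not what the follow-up PR actually set.

```yaml
# Illustrative only: the limit values are assumptions, not the follow-up PR's actual numbers.
containers:
- name: influxdb
  image: gcr.io/google_containers/heapster_influxdb:v0.3   # image name and tag are illustrative
  resources:
    limits:
      cpu: 100m
      memory: 200Mi   # the container is OOM-killed and restarted once the leak exceeds this
```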

@dchen1107
Member Author

@thockin thanks! You beat me to it :-)

@bgrant0607
Member

LGTM

zmerlynn added a commit that referenced this pull request on Jul 1, 2015:
Persistent influxdb data to directory on local host.
@zmerlynn merged commit ffeb982 into kubernetes:master on Jul 1, 2015
@romanek-adam

I just wanted to mention that this change probably led to another issue: #13508.
