start etcd compactor in background #25010
Conversation
We actually don't need e2e for this.
Sorry, I accidentally pushed some testing code and caused the above e2e failure. Branching is complicated.
I don't think we need e2e testing for this, though.
Force-pushed from 10f266f to cd1921a.
// older than a fixed time.
// We need to compact keys because we can't let on-disk data grow forever.
// We save the most recent 10 minutes of data. It should be enough for slow watchers and to tolerate bursts.
// We might keep a longer history (12h) in the future once storage API can take
Make this a TODO.
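To make the discussion concrete, here is a minimal sketch of what a background compactor along these lines could look like, assuming etcd clientv3. The package, constant, and function names are illustrative, not the PR's actual code.

package compactor

import (
    "context"
    "log"
    "time"

    "github.com/coreos/etcd/clientv3"
)

// compactInterval is illustrative; per the comment above, roughly the most
// recent 10 minutes of history are kept for slow watchers and bursts.
const compactInterval = 10 * time.Minute

// compactor periodically compacts etcd's key-value history so that on-disk
// data cannot grow forever. It runs until ctx is cancelled and is intended
// to be started in the background: go compactor(ctx, client)
func compactor(ctx context.Context, client *clientv3.Client) {
    for {
        select {
        case <-time.After(compactInterval):
        case <-ctx.Done():
            return
        }

        // Read the current revision, then compact away all older history.
        resp, err := client.Get(ctx, "/")
        if err != nil {
            log.Printf("compactor: failed to read current revision: %v", err)
            continue
        }
        if _, err := client.Compact(ctx, resp.Header.Revision); err != nil {
            log.Printf("compactor: compaction to revision %d failed: %v", resp.Header.Revision, err)
        }
    }
}

Note that compacting to the current revision each cycle means a watcher lagging by more than compactInterval will hit a "required revision has been compacted" error; see the discussion further down.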
LGTM after fixing the comments.
Great comments. All addressed.
var (
    endpointsMap     map[string]struct{}
    endpointsMapLock sync.Mutex
Move the lock before the map it protects. Also rename it to endpointMapMu, probably.
Addressed.
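For illustration, the layout the reviewer asks for might look like the following in the same sketch package (assuming "sync" is imported); the PR's final code may differ.

var (
    // endpointMapMu guards endpointsMap; declaring the mutex immediately
    // before the map it protects makes the association obvious.
    endpointMapMu sync.Mutex
    endpointsMap  = map[string]struct{}{}
)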
Note that e2e tests don't exercise this path currently.
Fixed a failure due to the removal of the "endpoints" parameter (it was dropped at the call site).
GCE e2e build/test passed for commit 3144ebc.
}
}
for _, ep := range client.Endpoints() {
    endpointsMap[ep] = emptyStruct
Why not put row 59 behind row 56?
It is incorrect to put it under line 56. See #25010 (comment).
OK, I see.
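Reading the diff fragments together, the pattern under discussion appears to be: record every endpoint of a cluster in endpointsMap so a second compactor is never started for the same cluster (emptyStruct in the diff is presumably just a named struct{}{} value). Continuing the sketch package from above, reusing compactor, endpointsMap, and endpointMapMu; StartCompactorOnce is an illustrative name.

// StartCompactorOnce starts a background compactor for the cluster behind
// client unless one is already running for any of its endpoints.
func StartCompactorOnce(ctx context.Context, client *clientv3.Client) {
    endpointMapMu.Lock()
    defer endpointMapMu.Unlock()

    // If any endpoint is already registered, a compactor covers this cluster.
    for _, ep := range client.Endpoints() {
        if _, ok := endpointsMap[ep]; ok {
            return
        }
    }
    for _, ep := range client.Endpoints() {
        endpointsMap[ep] = struct{}{}
    }
    go compactor(ctx, client)
}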
Automatic merge from submit-queue
@hongchaodeng @xiang90 Did this PR take multiple API servers into account? If I have 3 API servers for HA and start them three minutes apart, their 10-minute compaction timers will interleave, so won't the actual compaction interval become roughly 10/3 ≈ 3.3 minutes instead of 10 minutes?
Come to think of it, we should probably plumb the option all the way through the api-server.
Maybe it's a better choice to configure the compaction interval on the etcd server side, and let only the leader member perform compaction. Otherwise, with multiple clients, even if we configure them all with the same compaction interval, the actual interval may not be what we want, since the clients may start at different times.
One etcd cluster should only have one compactor. Regarding potential config possibilities, I don't want to add too much complexity; let's make it work first. Will tackle it once we have more use cases.
We should open a separate issue on this now. |
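For reference, a sketch of the "only the leader compacts" idea suggested above; this is not what the PR implements. Status and Compact are etcd clientv3 calls, while compactIfLeader is an illustrative name. (Later etcd versions also offer a server-side --auto-compaction-retention flag, which sidesteps the client entirely.)

// compactIfLeader compacts history up to rev only when the member behind
// endpoint is currently the raft leader, so at most one member compacts.
func compactIfLeader(ctx context.Context, client *clientv3.Client, endpoint string, rev int64) error {
    status, err := client.Status(ctx, endpoint)
    if err != nil {
        return err
    }
    if status.Header.MemberId != status.Leader {
        // Not the leader; leave compaction to the leader member.
        return nil
    }
    _, err = client.Compact(ctx, rev)
    return err
}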
This will show up in the kube-apiserver log as: E0531 13:59:04.624372 2949 watcher.go:188] watch chan error: etcdserver: mvcc: required revision has been compacted
@violalu we know this. Is there an issue that you are seeing? |
Just to make it clear, are these messages benign? I'm also seeing these "Warning" messages but have found no issue yet. |
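These messages mean a watcher tried to resume from a revision the compactor had already removed. A minimal sketch of how a clientv3 watcher can recover, continuing the sketch package above; watchWithCompactionRecovery is an illustrative name, not the apiserver's watcher code.

// watchWithCompactionRecovery watches key (as a prefix) starting at rev and,
// when that revision has been compacted away, resumes from the oldest
// revision that still exists instead of giving up.
func watchWithCompactionRecovery(ctx context.Context, client *clientv3.Client, key string, rev int64) {
    for {
        wch := client.Watch(ctx, key, clientv3.WithRev(rev), clientv3.WithPrefix())
        for wresp := range wch {
            if wresp.CompactRevision != 0 {
                // The revision we asked for was compacted; resume from the
                // compaction revision.
                rev = wresp.CompactRevision
                break
            }
            for _, ev := range wresp.Events {
                rev = ev.Kv.ModRevision + 1
                _ = ev // process the event here
            }
        }
        select {
        case <-ctx.Done():
            return
        default:
            // Channel closed (compaction or transient error); re-watch.
        }
    }
}

Clients such as the Kubernetes reflector typically recover by re-listing and re-watching, which is why the message is usually benign.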
Is it better to adjust the apiserver log level for these messages?
ref: #22448
What's in this PR?