
[Core] Fix core worker client pool leak #41535

Merged: 7 commits merged into ray-project:master from jjyao/leak on Dec 9, 2023
Conversation

@jjyao (Collaborator, Author) commented Nov 30, 2023

Why are these changes needed?

Currently the core worker client pool doesn't remove clients in most cases (there are only one or two places where Disconnect() might be called), and this causes a memory leak. This PR adds a GC inside the core worker client pool to remove idle clients (i.e., clients that don't have active connections).
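A minimal sketch of the idea, not the actual Ray implementation: a pool that hands out clients keyed by worker ID and periodically sweeps out entries whose channel reports itself idle. All names below (SimpleClientPool, FakeClient, the shown shape of RemoveIdleClients) are illustrative assumptions.

#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>

// Stand-in for a gRPC client wrapper; the real check would ask the channel
// whether it is idle and whether at least one RPC was ever issued on it.
struct FakeClient {
  bool IsIdle() const { return idle_; }
  bool idle_ = false;
};

class SimpleClientPool {
 public:
  std::shared_ptr<FakeClient> GetOrCreate(const std::string &worker_id) {
    std::lock_guard<std::mutex> lock(mu_);
    auto it = clients_.find(worker_id);
    if (it != clients_.end()) {
      return it->second;
    }
    auto client = std::make_shared<FakeClient>();
    clients_.emplace(worker_id, client);
    return client;
  }

  // Called periodically (e.g. from a timer, or piggybacked on pool accesses)
  // so idle clients are released and the pool cannot grow without bound.
  void RemoveIdleClients() {
    std::lock_guard<std::mutex> lock(mu_);
    for (auto it = clients_.begin(); it != clients_.end();) {
      if (it->second->IsIdle()) {
        it = clients_.erase(it);
      } else {
        ++it;
      }
    }
  }

 private:
  std::mutex mu_;
  std::unordered_map<std::string, std::shared_ptr<FakeClient>> clients_;
};

The PR itself keys the idleness check off the gRPC channel state (see IsChannelIdleAfterRPCs in the diff below) and keeps clients in a list so removal stays cheap.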

Related issue number

Closes #41260

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Signed-off-by: Jiajun Yao <jeromeyjj@gmail.com>
@jjyao jjyao marked this pull request as ready for review December 3, 2023 06:15
@@ -781,6 +781,8 @@ RAY_CONFIG(int64_t, grpc_client_keepalive_time_ms, 300000)
/// grpc keepalive timeout for client.
RAY_CONFIG(int64_t, grpc_client_keepalive_timeout_ms, 120000)

RAY_CONFIG(int64_t, grpc_client_idle_timeout_ms, 1800000)
@jjyao (Collaborator, Author): This is the grpc default value: 30 minutes.

/// Also see https://grpc.github.io/grpc/core/md_doc_connectivity-semantics-and-api.html
/// for channel connectivity state machine.
bool IsChannelIdleAfterRPCs() const {
return (channel_->GetState(false) == GRPC_CHANNEL_IDLE) && call_method_invoked_;
Contributor: GetState is not blocking right?

@jjyao (Collaborator, Author): Not blocking.
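For reference, grpc::Channel::GetState(bool try_to_connect) returns the current connectivity state immediately; with try_to_connect=false it neither blocks nor kicks off a connection attempt. A small illustrative check (the free function is an assumption, not the PR's code):

#include <memory>
#include <grpcpp/grpcpp.h>

// Non-blocking: GetState(false) only reads the channel's current connectivity
// state; it does not wait and does not try to (re)connect.
bool ChannelIsIdle(const std::shared_ptr<grpc::Channel> &channel) {
  return channel->GetState(/*try_to_connect=*/false) == GRPC_CHANNEL_IDLE;
}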

auto id = WorkerID::FromBinary(addr_proto.worker_id());
auto it = client_map_.find(id);
if (it != client_map_.end()) {
return it->second;
entry = *it->second;
client_list_.erase(it->second);
Contributor: Isn't this actually pretty expensive (O(N)) if there are lots of connections?

Contributor: Why don't we just make RemoveIdleClients called every 30 seconds or something instead?

@jjyao (Collaborator, Author): std::list is a doubly linked list, so erasing an element via its iterator is constant time.
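To make the complexity argument concrete, here is a hedged sketch of the usual map-plus-list bookkeeping (names are assumptions, not the PR's exact members): the map stores std::list iterators, and std::list::erase on a known iterator is O(1) because no traversal is needed.

#include <list>
#include <memory>
#include <string>
#include <unordered_map>
#include <utility>

struct PoolEntry {
  std::string worker_id;
  std::shared_ptr<void> client;  // stand-in for the real client type
};

class LruClientIndex {
 public:
  using List = std::list<PoolEntry>;

  // Move (or insert) an entry to the most-recently-used end; both the erase
  // and the map update are constant time because we hold the list iterator.
  void Touch(const std::string &worker_id, PoolEntry entry) {
    auto it = index_.find(worker_id);
    if (it != index_.end()) {
      lru_.erase(it->second);  // O(1) erase by iterator, no traversal
    }
    lru_.push_front(std::move(entry));
    index_[worker_id] = lru_.begin();
  }

  // Drop an entry entirely, e.g. when its channel has gone idle.
  void Remove(const std::string &worker_id) {
    auto it = index_.find(worker_id);
    if (it == index_.end()) {
      return;
    }
    lru_.erase(it->second);  // O(1)
    index_.erase(it);
  }

 private:
  List lru_;  // LRU order, front = most recently used
  std::unordered_map<std::string, List::iterator> index_;
};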

@rkooo567 added the @author-action-required label ("The PR author is responsible for the next step. Remove tag to send back to the reviewer.") on Dec 6, 2023
@jjyao merged commit 1dffb4d into ray-project:master on Dec 9, 2023
14 of 17 checks passed
@jjyao deleted the jjyao/leak branch on December 9, 2023 at 00:20
@m-harmonic: Hello, am I understanding correctly that this fix is not yet merged into any release? Thanks

@jjyao (Collaborator, Author) commented Feb 21, 2024:

@m-harmonic Yes, it will be part of the Ray 2.10 release.

Labels: @author-action-required
Projects: None yet
Development: Successfully merging this pull request may close these issues: Ray remote task + fastapi memory leak
4 participants