add cluster nodes info test #299
Conversation
(force-pushed from 95dbc93 to 5ed0807)
Looks good to me -- are we just adding a check that the worker is connected to Ray?
So far this code checks the number of nodes in the cluster, not their details. I think this simple check is ok for now.
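As a rough illustration, such a node-count check might look like the following sketch (the expected count of 2 is an assumption, e.g. one head plus one worker):

```python
import ray

# Assumes the test is already connected to the cluster under test.
alive_nodes = [n for n in ray.nodes() if n["Alive"]]
assert len(alive_nodes) == 2, f"expected 2 nodes, got {len(alive_nodes)}"
```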
Checking.
```python
def tearDown(self):
    ...

def test_cluster_info(self):
    client = docker.from_env()
    container = client.containers.run(ray_image,
```
Is containers.run equivalent to docker run or docker exec? It seems it just creates a new container based on ray_image and executes the following script locally, rather than testing against the kuberay cluster. @wilsonwang371
Correct. This container uses host networking and will connect to the Ray cluster through the host port exposed by kind.
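A minimal sketch of that pattern with docker-py (the image tag, port, and inline script here are assumptions, not the PR's exact code):

```python
import docker

client = docker.from_env()
# Run the test script in a fresh container that shares the host network,
# so the Ray head port exposed by kind (assumed to be 10001 here) is reachable.
logs = client.containers.run(
    "rayproject/ray:nightly",  # assumed image; the test passes ray_image
    ["python", "-c",
     "import ray; ray.init('ray://127.0.0.1:10001'); print(len(ray.nodes()))"],
    network_mode="host",  # host networking, per the comment above
    remove=True,          # clean up the container after it exits
)
print(logs.decode())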
Could you add a couple of comments in the code explaining the use of the docker client?
@wilsonwang371 Let's add a comment here? Otherwise people may run into the same confusion I did. This code snippet assumes the following prerequisites:
https://github.com/ray-project/kuberay/blob/7f16c1205ee7f484ee08e68d38d563947e31a5fa/tests/config/raycluster-service.yaml#L18-L21
https://github.com/ray-project/kuberay/blob/7f16c1205ee7f484ee08e68d38d563947e31a5fa/tests/config/cluster-config.yaml#L20-L23
Using ray.util.connect probably makes more sense, because it won't init a local cluster in any scenario.
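For reference, a sketch of the suggested alternative (the address is an assumption; ray.util.connect was the Ray Client entry point in the Ray versions discussed here):

```python
import ray

# Client-only connection: this fails if no server is listening and never
# falls back to starting a local Ray instance, unlike a bare ray.init().
ray.util.connect("127.0.0.1:10001")  # assumed head-node client address
```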
The reason for not calling ray.util.connect is that we want exactly the same version of Ray for both the Ray cluster and the Ray client.
For KubeRay tests in the Ray CI, we install Ray in the host environment to test Ray Client and Job Submission. Using docker directly works as well; kubectl exec would probably work too.
Maybe later we can switch to installing a particular version of Ray in the host environment too.
> The reason for not calling ray.util.connect is that we want exactly the same version of Ray for both the Ray cluster and the Ray client.

Hmm, is this related? We can still use the exact same version by using ray.util.connect instead of ray.init. Am I missing something?
ray.init("ray://...") is the preferred API for Ray Client these days.
If I understand right, the issue is that we don't currently have the same Ray version installed in the host CI environment as in the Ray pods in KinD, so Ray Client in the host will fail to connect to the server in the Ray head pod.
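For illustration, the preferred form might look like this sketch (the address and port are assumptions):

```python
import ray

# The "ray://" prefix makes ray.init() act as a Ray Client connection
# to an existing cluster instead of starting a local Ray instance.
ray.init("ray://127.0.0.1:10001")
print(f"connected; cluster reports {len(ray.nodes())} node(s)")
ray.shutdown()
```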
(force-pushed from 89a087d to b2f4791)
The change looks good to me; let's merge it.
Why are these changes needed?
Add more tests for KubeRay.
Related issue number
#298
Checks