tests/utils.go: Get rid of RunVMI #11033
Conversation
/cc @fossedihelm
/sig code-quality
Force-pushed from 4588685 to e7391bc (Compare)
Thanks @0xFelix! It looks good.
I just have a question about one test below.
tests/container_disk_test.go
Outdated
for _, vmi := range vmis {
	verifyContainerDiskVMI(virtClient, vmi)
}
Honestly, I don't understand why we should verify all the VMIs at every iteration (O(n²)) instead of creating two for loops (O(n)).
I don't understand either what the point is here, but I took it from the original test. IMO it should suffice to create all the VMIs and then verify them in a second loop, as you said. If five VMIs were verified, fewer VMIs should be OK too.
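For illustration, a minimal sketch of the O(n) shape being suggested; createContainerDiskVMIs is a hypothetical stand-in for the creation loop, while verifyContainerDiskVMI is the helper from this diff:

```go
// Sketch only: each VMI is verified exactly once in a dedicated second loop,
// instead of re-verifying all previously created VMIs on every outer iteration.
vmis := createContainerDiskVMIs(virtClient, 5) // hypothetical helper wrapping the creation loop
for _, vmi := range vmis {
	verifyContainerDiskVMI(virtClient, vmi)
}
```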
Force-pushed from e7391bc to 6e8e992 (Compare)
/lgtm
Thanks for catching and fixing this.
var obj *v1.VirtualMachineInstance
var err error
Eventually(func() error {
Yes, if Create() fails, tests should typically just fail.
tests/container_disk_test.go
Outdated
@@ -296,3 +263,26 @@ func getContainerDiskContainerOfPod(pod *k8sv1.Pod, volumeName string) *k8sv1.Co
	diskContainerName := fmt.Sprintf("volume%s", volumeName)
	return tests.GetContainerOfPod(pod, diskContainerName)
}
func verifyContainerDiskVMI(virtClient kubecli.KubevirtClient, vmi *v1.VirtualMachineInstance) {
+1 for avoiding the redundant lambda
I went one step further and dropped it altogether.
Very nice, thanks.
I had one inline comment.
tests/container_disk_test.go
Outdated
for idx, vmi := range vmis {
	verifyContainerDiskVMI(vmi, objs[idx])
	By("Verifying the containerdisk is online")
	containerdiskFound := false
Can we avoid these nasty variables and just extract all the code below into a helper function that returns this? E.g. hasContainerDisk(...)
hasContainerDisk in the sense that it doesn't care how many?
Thank you for the change. It is exactly what I meant.
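For reference, one possible shape of the suggested helper. This is hypothetical: the full verification code is not quoted here, so the predicate body and the strings/k8sv1 imports are assumptions.

```go
// Hypothetical sketch of hasContainerDisk: return a bool so the test can drop
// the containerdiskFound flag variable. The "volume" name prefix mirrors
// getContainerDiskContainerOfPod above, but the real predicate may differ.
func hasContainerDisk(pod *k8sv1.Pod) bool {
	for _, container := range pod.Spec.Containers {
		if strings.HasPrefix(container.Name, "volume") {
			return true
		}
	}
	return false
}
```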
Force-pushed from 7fb5264 to 5e24759 (Compare)
tests/container_disk_test.go
Outdated
vmi := libvmi.NewCirros(libvmi.WithResourceMemory("1Mi"))
vmi, err := virtClient.VirtualMachineInstance(testsuite.GetTestNamespace(vmi)).Create(context.Background(), vmi)
Expect(err).ToNot(HaveOccurred())
libwait.WaitForSuccessfulVMIStart(vmi)
Do I understand correctly that the old code did not have a wait at all (and thus had a race condition)?
I am all for waiting for the VMIs to start, but I'd do it after all count VMIs are created, to save time.
No, the old implementation was waiting for the VMI to start inside the verifyContainerDiskVMI function. So the wait ran 5 times for the first VMI, 4 times for the second, 3 times for the third, and so on... a waste of time, even if only milliseconds.
I agree anyway, to move this under the second for loop
The old code had a wait, and since it was using a nested loop, it was basically progressively waiting for every VMI to become available after it was created. In each iteration of the outer loop a new VMI was added to the list, and during the inner loop the existing VMIs had already been waited on, while it still needed to wait for the VMI that was added last.
By waiting for all VMIs to be created first, we know that once we reach the second loop all the VMIs are already running. We can take the wait out of the first loop, but then I'd add another loop for it. So IMO, since it doesn't matter for the verification, I'd leave it as is.
We can take the wait out of the first loop, but then I'd add another loop for it. So IMO, since it doesn't matter for the verification, I'd leave it as is.
Ok. This improvement can be done in a follow-up PR.
I agree anyway, to move this under the second for loop
I also think it makes sense.
We can take the wait out of the first loop, but then I'd add another loop for it. So IMO, since it doesn't matter for the verification, I'd leave it as is.
Why is another for-loop needed?
If the second step depends on all VMIs being running, then you are right.
But if it does not, then checking that each VMI is in the right phase and waiting on it before treating it also makes sense. After waiting for the first, it is likely that the others are already running (parallel execution).
My idea was to keep it in the first loop so the second loop would only test the scenario where 5 vmis are already running. But in the end both choices are valid. WDYT, which one do you prefer?
If you wait on creation, you serialize the processes, which prolongs the test significantly.
If you first create all, then wait for each, it should be much faster.
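A sketch of the timing argument, reusing the calls visible in this diff (count is assumed to be the number of VMIs the test creates):

```go
// Sketch only: create all VMIs without waiting, so they can boot in parallel.
vmis := make([]*v1.VirtualMachineInstance, 0, count)
for i := 0; i < count; i++ {
	vmi := libvmi.NewCirros(libvmi.WithResourceMemory("1Mi"))
	vmi, err := virtClient.VirtualMachineInstance(testsuite.GetTestNamespace(vmi)).Create(context.Background(), vmi)
	Expect(err).ToNot(HaveOccurred())
	vmis = append(vmis, vmi)
}
// Wait (and verify) in a second loop: while the first wait runs, the other
// VMIs keep booting concurrently, so the remaining waits return quickly.
for _, vmi := range vmis {
	libwait.WaitForSuccessfulVMIStart(vmi)
	// containerdisk verification follows here
}
```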
I moved the wait to the second loop.
I moved the wait to the second loop.
Thanks! But since this changes what the test does, I would have preferred if it was in its own commit, explaining why it is more sensible to create all VMIs first and wait for them later. Or at least add a word about it in the second commit.
I'm not really worried about this not-so-important test; but it is a good practice to separate mere cleanup from improvements.
In this way, it should actually be more in line with how the test originally worked.
Thanks!
I think https://github.com/kubevirt/kubevirt/pull/11033/files#r1457444148 can be resolved in this PR as an additional commit.
But it can also be optimized in a follow up.
Force-pushed from 5e24759 to 70ef884 (Compare)
Thanks!
Force-pushed from 70ef884 to 6d89cf4 (Compare)
@EdDev Rebased on main, adapted changes in
/retest
Force-pushed from 6d89cf4 to cc28c2a (Compare)
@0xFelix Thank you! /approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: enp0s3. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Makefile
Outdated
@@ -206,7 +206,7 @@ format:
 fmt: format

 lint:
-	if [ $$(wc -l < tests/utils.go) -gt 2350 ]; then echo >&2 "do not make tests/utils longer"; exit 1; fi
+	if [ $$(wc -l < tests/utils.go) -gt 2168 ]; then echo >&2 "do not make tests/utils longer"; exit 1; fi
If not for this modification, the PR would have been merged. Better use a separate PR to bump this...
You are right, next time
Get rid of RunVMI in container_disk_test.go and refactor/rewrite a test to be cleaner and more concise. Signed-off-by: Felix Matouschek <fmatouschek@redhat.com>
This function gave the false impression of allowing the caller to wait for the VMI to be running, since it accepted a timeout. In reality it only created a VMI, wrapping the Create call in a nonsensical Eventually loop. Get rid of RunVMI by replacing all its usages with direct calls to the kubevirt client. Signed-off-by: Felix Matouschek <fmatouschek@redhat.com>
Force-pushed from cc28c2a to f1e004f (Compare)
Required labels detected, running phase 2 presubmits:
@0xFelix: The following tests failed, say
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
What this PR does / why we need it:
This function gave the false impression of allowing the caller to wait for the VMI to be running, since it accepted a timeout. In reality it only created a VMI, wrapping the Create call in a nonsensical Eventually loop. Get rid of RunVMI by replacing all its usages with direct calls to the kubevirt client.
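For context, a rough sketch of the pattern being removed and its replacement. The "before" body is paraphrased from the Eventually snippet quoted in the review above, not copied verbatim; timeout is an assumed parameter name.

```go
// Before (roughly): RunVMI accepted a timeout and wrapped Create in an Eventually
// loop, which only retried creation and never waited for the VMI to be running.
var obj *v1.VirtualMachineInstance
var err error
Eventually(func() error {
	obj, err = virtClient.VirtualMachineInstance(testsuite.GetTestNamespace(vmi)).Create(context.Background(), vmi)
	return err
}, timeout, time.Second).ShouldNot(HaveOccurred())
// RunVMI then returned obj to the caller.
```

The replacement simply calls the client once, as in the diff, and fails the test right away if creation errors:

```go
// After: direct call to the kubevirt client; a failed Create fails the test immediately.
vmi, err := virtClient.VirtualMachineInstance(testsuite.GetTestNamespace(vmi)).Create(context.Background(), vmi)
Expect(err).ToNot(HaveOccurred())
```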
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #
Special notes for your reviewer:
Checklist
This checklist is not enforced, but it's a reminder of items that could be relevant to every PR.
Approvers are expected to review this list.
Release note: