
tests/utils.go: Get rid of RunVMI #11033

Merged: 2 commits merged into kubevirt:main on Jan 22, 2024
Conversation

@0xFelix (Member) commented Jan 18, 2024

What this PR does / why we need it:

This function gave the false impression that it allowed the caller to
wait for the VMI to be running, because it accepted a timeout.

In reality the function only created a VMI, wrapping the Create call in
a nonsensical Eventually loop.

Get rid of RunVMI by replacing all its usages with direct calls to the
kubevirt client.
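
For context, a minimal sketch of the before/after. The RunVMI body is reconstructed from the description above and the snippets quoted later in this review, not copied from the source, so take it as an approximation:

// Before (reconstructed): the timeout parameter suggested RunVMI waited for
// the VMI to run, but the Eventually loop only retried the Create call itself.
func RunVMI(vmi *v1.VirtualMachineInstance, timeout int) *v1.VirtualMachineInstance {
    var obj *v1.VirtualMachineInstance
    var err error
    Eventually(func() error {
        obj, err = virtClient.VirtualMachineInstance(testsuite.GetTestNamespace(vmi)).Create(context.Background(), vmi)
        return err
    }, timeout, 1*time.Second).Should(Succeed())
    return obj
}

// After: call the kubevirt client directly and fail fast on errors.
vmi, err := virtClient.VirtualMachineInstance(testsuite.GetTestNamespace(vmi)).Create(context.Background(), vmi)
Expect(err).ToNot(HaveOccurred())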

Release note:

None

@kubevirt-bot added labels on Jan 18, 2024: release-note-none (denotes a PR that doesn't merit a release note), dco-signoff: yes (indicates the PR's author has DCO signed all their commits), sig/network, size/L
@0xFelix (Member, Author) commented Jan 18, 2024

/cc @fossedihelm
/cc @dankenigsberg

@0xFelix (Member, Author) commented Jan 18, 2024

/sig code-quality

@kubevirt-bot added the kind/cleanup label (categorizes issue or PR as related to cleaning up code, process, or technical debt) on Jan 18, 2024
@fossedihelm (Contributor) left a comment:

Thanks @0xFelix! It looks good.
I just have a question about one test below.

Comment on lines 155 to 175:

for _, vmi := range vmis {
    verifyContainerDiskVMI(virtClient, vmi)
}
@fossedihelm (Contributor):

Honestly, I don't understand why we should verify all the VMIs at every iteration (O(n²)) instead of creating two for loops (O(n)).

@0xFelix (Member, Author):

I don't understand the point here either, but I took that from the original test. IMO it should suffice to create all the VMIs and then verify them in a second loop, as you said. If five VMIs were verified, fewer VMIs should be OK too.
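
To make the point concrete, a sketch of the two shapes being compared (createVMI is a hypothetical stand-in for the VMI creation code; verifyContainerDiskVMI is the helper quoted above):

var vmis []*v1.VirtualMachineInstance

// Original shape: every iteration of the outer loop re-verifies all VMIs
// created so far, i.e. n(n+1)/2 verifications for n VMIs.
for i := 0; i < n; i++ {
    vmis = append(vmis, createVMI()) // hypothetical creation helper
    for _, vmi := range vmis {
        verifyContainerDiskVMI(virtClient, vmi)
    }
}

// Two-loop shape: create everything first, then verify each VMI exactly once.
for i := 0; i < n; i++ {
    vmis = append(vmis, createVMI())
}
for _, vmi := range vmis {
    verifyContainerDiskVMI(virtClient, vmi)
}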

@fossedihelm (Contributor):

/lgtm

@kubevirt-bot added the lgtm label (indicates that a PR is ready to be merged) on Jan 18, 2024
@dankenigsberg (Member) left a comment:

Thanks for catching and fixing this.


var obj *v1.VirtualMachineInstance
var err error
Eventually(func() error {
@dankenigsberg (Member) commented Jan 18, 2024:

Yes, if Create() fails, tests should typically just fail.

@@ -296,3 +263,26 @@ func getContainerDiskContainerOfPod(pod *k8sv1.Pod, volumeName string) *k8sv1.Co
diskContainerName := fmt.Sprintf("volume%s", volumeName)
return tests.GetContainerOfPod(pod, diskContainerName)
}

func verifyContainerDiskVMI(virtClient kubecli.KubevirtClient, vmi *v1.VirtualMachineInstance) {
@dankenigsberg (Member):

+1 for avoiding the redundant lambda

@0xFelix (Member, Author):

I went one step further and dropped it altogether.

tests/container_disk_test.go (outdated comment, resolved)
@kubevirt-bot removed the lgtm label on Jan 18, 2024
@EdDev (Member) left a comment:

Very nice, thanks.

I had one inline comment.

for idx, vmi := range vmis {
    verifyContainerDiskVMI(vmi, objs[idx])
    By("Verifying the containerdisk is online")
    containerdiskFound := false
@EdDev (Member):

Can we avoid these nasty variables and just extract all the code below into a helper function that returns this? e.g. hasContainerDisk(...)

@0xFelix (Member, Author):

hasContainerDisk in the sense that it doesn't care how many?

@EdDev (Member):

Thank you for the change. It is exactly what I meant.
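
A hedged sketch of the suggested extraction. The helper name comes from the comment above; its body is an assumption, based on the containerdiskFound flag it replaces and on the "volume<name>" container naming visible in getContainerDiskContainerOfPod below:

// hasContainerDisk reports whether any of the given pods runs a
// containerdisk container, replacing the mutable containerdiskFound flag.
func hasContainerDisk(pods []k8sv1.Pod) bool {
    for _, pod := range pods {
        for _, container := range pod.Spec.Containers {
            // Assumption: containerdisk containers are named "volume<name>".
            if strings.HasPrefix(container.Name, "volume") {
                return true
            }
        }
    }
    return false
}

The caller can then assert directly, e.g. Expect(hasContainerDisk(pods)).To(BeTrue()), instead of threading a boolean through the loop.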

@0xFelix force-pushed the runvmi-cleanup branch 2 times, most recently from 7fb5264 to 5e24759 on January 18, 2024 at 13:16
vmi := libvmi.NewCirros(libvmi.WithResourceMemory("1Mi"))
vmi, err := virtClient.VirtualMachineInstance(testsuite.GetTestNamespace(vmi)).Create(context.Background(), vmi)
Expect(err).ToNot(HaveOccurred())
libwait.WaitForSuccessfulVMIStart(vmi)
Member:

Do I understand correctly that the old code did not have a wait at all (and thus had a race condition)?

I am all for waiting for the VMIs to start, but I'd do it after all count VMIs are created, to save time.

@fossedihelm (Contributor):

No, the old implementation was waiting for the VMI to start inside the verifyContainerDiskVMI function.
So it waited for the first VMI 5 times, for the second 4 times, for the third 3 times, and so on (15 waits in total for 5 VMIs instead of 5)... a waste of time, even if only milliseconds.
I agree anyway with moving this under the second for loop.

@0xFelix (Member, Author) commented Jan 18, 2024:

The old code had a wait, and since it used a nested loop, it was basically progressively waiting for every VMI to become available after it was created. In each iteration of the outer loop a new VMI was added to the list, and during the inner loop the existing VMIs had already been waited on, while it still needed to wait for the VMI that was added last.

By waiting for all VMIs to be created first, we know that once we reach the second loop all the VMIs are already running. We could take the wait out of the first loop, but then I'd add another loop for it. So IMO, since it doesn't matter for the verification, I'd leave it as is.

Member:

We could take the wait out of the first loop, but then I'd add another loop for it. So IMO, since it doesn't matter for the verification, I'd leave it as is.

Ok. This improvement can be done in a follow-up PR.

Member:

I agree anyway with moving this under the second for loop.

I also think it makes sense.

We could take the wait out of the first loop, but then I'd add another loop for it. So IMO, since it doesn't matter for the verification, I'd leave it as is.

Why is another for-loop needed?
If the second step depends on all VMIs being running, then you are right.
But if it does not, then checking that each VMI is in the right phase and waiting on it before treating it also makes sense. After waiting for the first one, it is likely that the others are already running (parallel execution).

@0xFelix (Member, Author):

My idea was to keep it in the first loop so the second loop would only test the scenario where 5 VMIs are already running. But in the end both choices are valid. WDYT, which one do you prefer?

Member:

If you wait on creation, you serialize the processes, which prolongs the test significantly.
If you first create all of them and then wait for each, it should be much faster.

@0xFelix (Member, Author):

I moved the wait to the second loop.

Member:

I moved the wait to the second loop.

Thanks! But since this changes what the test does, I would have preferred it to be in its own commit, explaining why it is more sensible to create all VMIs first and wait for them later. Or at least add a word about it in the second commit.

I'm not really worried about this not-so-important test, but it is good practice to separate mere cleanup from improvements.

@0xFelix (Member, Author):

In this way, it should actually be more in line with how the test originally worked.
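
For reference, a sketch of the shape the test ends up with after this thread. The creation lines follow the snippet quoted above; the loop scaffolding and the use of verifyContainerDiskVMI here are illustrative, since the final code may differ:

var vmis []*v1.VirtualMachineInstance

// First loop: only create, so all VMIs start up in parallel.
for i := 0; i < count; i++ {
    vmi := libvmi.NewCirros(libvmi.WithResourceMemory("1Mi"))
    vmi, err := virtClient.VirtualMachineInstance(testsuite.GetTestNamespace(vmi)).Create(context.Background(), vmi)
    Expect(err).ToNot(HaveOccurred())
    vmis = append(vmis, vmi)
}

// Second loop: wait for each VMI to start, then verify it. Waiting here
// instead of in the first loop avoids serializing the startups.
for _, vmi := range vmis {
    libwait.WaitForSuccessfulVMIStart(vmi)
    verifyContainerDiskVMI(virtClient, vmi)
}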

@kubevirt-bot added the lgtm label on Jan 18, 2024
@EdDev (Member) left a comment:

Thanks!

I think https://github.com/kubevirt/kubevirt/pull/11033/files#r1457444148 can be resolved in this PR as an additional commit.
But it can also be optimized in a follow up.

@kubevirt-bot removed the lgtm label on Jan 18, 2024
@EdDev (Member) left a comment:

Thanks!

@kubevirt-bot added the lgtm label on Jan 18, 2024
@kubevirt-bot removed the lgtm label on Jan 19, 2024
@0xFelix (Member, Author) commented Jan 19, 2024:

@EdDev Rebased on main, adapted changes in tests/monitoring/vm_monitoring.go.

@0xFelix (Member, Author) commented Jan 19, 2024:

/retest

@kubevirt-bot added the needs-rebase label (indicates a PR cannot be merged because it has merge conflicts with HEAD) on Jan 20, 2024
@kubevirt-bot added the lgtm label on Jan 21, 2024
@enp0s3 (Contributor) commented Jan 21, 2024:

@0xFelix Thank you!

/approve

@kubevirt-bot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: enp0s3

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubevirt-bot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Jan 21, 2024
Makefile (outdated):

@@ -206,7 +206,7 @@ format:
 fmt: format

 lint:
-	if [ $$(wc -l < tests/utils.go) -gt 2350 ]; then echo >&2 "do not make tests/utils longer"; exit 1; fi
+	if [ $$(wc -l < tests/utils.go) -gt 2168 ]; then echo >&2 "do not make tests/utils longer"; exit 1; fi
Member:

If not for this modification, the PR would already have been merged. Better to use a separate PR to bump this...

@0xFelix (Member, Author):

You are right, next time.

Commit 1:

Get rid of RunVMI in container_disk_test.go and refactor/rewrite
a test to be cleaner and more concise.

Signed-off-by: Felix Matouschek <fmatouschek@redhat.com>
Commit 2:

This function gave the false impression that it allowed the caller to
wait for the VMI to be running, because it accepted a timeout.

In reality the function only created a VMI, wrapping the Create call in
a nonsensical Eventually loop.

Get rid of RunVMI by replacing all its usages with direct calls to the
kubevirt client.

Signed-off-by: Felix Matouschek <fmatouschek@redhat.com>
@kubevirt-bot removed the lgtm and needs-rebase labels on Jan 22, 2024
@kubevirt-bot added the lgtm label on Jan 22, 2024
@kubevirt-commenter-bot:

Required labels detected, running phase 2 presubmits:
/test pull-kubevirt-e2e-k8s-1.27-ipv6-sig-network

@kubevirt-bot (Contributor) commented Jan 22, 2024:

@0xFelix: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
pull-kubevirt-check-tests-for-flakes | f1e004f | link | false | /test pull-kubevirt-check-tests-for-flakes
pull-kubevirt-e2e-arm64 | f1e004f | link | false | /test pull-kubevirt-e2e-arm64

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@kubevirt-bot merged commit 0a3141f into kubevirt:main on Jan 22, 2024
35 of 37 checks passed