Make all informer tests use the new helper #10930
Conversation
Codecov Report

```
@@           Coverage Diff           @@
##             main   #10930   +/-  ##
=======================================
  Coverage   88.02%   88.02%
=======================================
  Files         188      188
  Lines        9104     9104
=======================================
  Hits         8014     8014
  Misses        835      835
  Partials      255      255
=======================================
```
Force-pushed from f3b4ab5 to 611bdc8, then from 611bdc8 to ea83d0c.
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: markusthoemmes, n3wscott, vagababov

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve`.
@julz and I spitballed our flakes a bit, and they seem to come from the recent speedup in `WaitForCacheSync`. When I asked in K8s about this, I was made aware of kubernetes/kubernetes#95372, and I think this might be causing a lot of the spurious flakes that we have regardless. The necessary package PRs have landed, so this is the enablement in Serving.