The first invocation of mockk exceeds the runTest default timeout #3800
I edited the title of the issue. The original one didn't describe the problem but proposed a solution. So far, it's unclear whether adding a global parameter is the way to go. So far, I haven't seen any indications that the timeout we chose isn't sensible; we're dealing with an edge case here. A solution to this specific problem, proposed in that same thread, is:

```kotlin
/**
 * Initialize ByteBuddy in advance so it doesn't skew the reported runtime of the test code
 * due to a long initialization sequence.
 */
@BeforeTest
fun initializeByteBuddy() {
    // initialize here
}
```

This is an easy, straightforward fix for code bases with a class that every test class inherits from, and if you're using …
This solution assumes that there is a class that all tests inherit from, which is not the case in our project, nor in many projects I know of. It would be more reasonable if there were a simple way to specify a custom test runner for all unit tests in all modules of a multi-module project, but there isn't. One more problem with this proposal is that it will slow down every single affected test because of the ByteBuddy init, even tests that do not use mocking, and make the whole test suite slower, not faster.
We do not override setMain by default because it's unnecessary for most tests, so there is no base test class. 10 seconds is a very arbitrary timeout, and an option to configure it would be very beneficial. I see at least two reasonable scenarios:
It will never be a timeout that works for all projects, and the suggestion to significantly change the structure of all tests by using the user's own wrapper on top of runTest, or by having some base class, doesn't look very friendly for projects affected by this. Using jvmArgs for the JUnit runner would probably be the best solution: at least it would be configured not at the level of a particular test but at the level of a module (and for most projects it would be easy to apply it to all modules if necessary, or only to the ones which have a larger classpath, mocking libraries, and so on).
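For illustration, a minimal sketch of what such module-level configuration could look like in a Gradle Kotlin DSL script. The property name mirrors the one experimented with later in this thread and is hypothetical; runTest does not currently read it:

```kotlin
// build.gradle.kts (per module, or in a shared convention plugin).
// A sketch: pass a hypothetical timeout property to every forked test JVM,
// so individual tests don't need to be touched.
tasks.withType<Test>().configureEach {
    systemProperty("kotlinx.coroutines.test.default_timeout", "60s")
}
```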
Are you sure? I see mentions of it being a one-time thing, like
If it's not the case, you are right, multiplying such heavy initialization costs throughout test suites is not acceptable.
So far, 10 seconds seems to be just fine for everyone but ByteBuddy users. If it turns out ByteBuddy is indeed the only case where this issue is significant, maybe we can work out some workaround specific to ByteBuddy and keep the timeout.
One time, yes, but there are multiple cases when one wants to run tests only in a single module, in a single file, or a particular test (during development); in all those cases a single test run will load ByteBuddy even when it's not needed. Also, it's not necessarily a one-time cost, because multiple parallel test executions in different classloaders are possible, and each will have this overhead.
Any slow init will cause it; the fact that ByteBuddy is often mentioned is because it's part of the most popular mocking library, so many are affected by this. It doesn't mean that there are no other cases. I don't see why ByteBuddy-specific handling is needed: let users override the timeout and keep 10 seconds as the default.
Why not just expose the test timeout with a system property? For one project 10 seconds can be too little, for another too large. I don't see any actual reason why the timeout should be 10 seconds specifically, rather than 100 or 5.
Got it, thanks!
Does any other slow init commonly happen?
If it turns out there is a use case for this (or if it turns out there is no nicer way to support the slow ByteBuddy init; I haven't looked into this yet), we'll do it.
We have around ~10k unit tests in one of our Android projects. We also use mockk quite extensively, and sometimes also Robolectric (which is famously renowned for its quite slow loading time). We did this coroutines update last week and we also had some timeout issues in about a dozen tests. After inspection we noticed that the problem was actually that those tests were mocking heavily inside the `runTest` block. Maybe, @gildor, something like this is what is happening to you and can solve your issue? See an example below:

```kotlin
@Test
fun `this test might timeout due to mocking inside the test scope`() = runTest {
    // Test setup
    val what: Ever = mockk()
    every { some.method() } returns mockk()

    // The actual test
    sut.testMe()
    verify { what.ever() }
}

@Test
fun `this test should work fine`() {
    // Test setup
    val what: Ever = mockk()
    every { some.method() } returns mockk()

    runTest {
        // The actual test
        sut.testMe()
        verify { what.ever() }
    }
}
```

I believe this pattern might be quite widespread because the official Android documentation recommends using `runTest` as an expression body.
It's required to be used as an expression body for multiplatform tests where the return type of the function changes based on platform.
I was not aware of that requirement for multiplatform, good point. But it's still valid for pure Java/Android projects. I don't have much experience with KMP, but I also wonder whether multiplatform requires it to be an expression body, or just requires returning a `TestResult`:

```kotlin
@Test
fun `this test should work fine`(): TestResult {
    // Test setup
    return runTest {
        // The actual test
    }
}
```
Let's have a look at that linked PR that "fixes the test flakiness": these are 406 lines of code which contribute to the overall test complexity and draw the developers' attention away from what really matters. "Add this warmup rule whenever you write a test" puts a lot of burden on developers, and it will also introduce a lot of new feedback loops. Sometimes a developer will forget this, and they won't directly notice it, as it's just a source of flakiness. This will lead to situations where, all of a sudden, a new PR starts failing even though it's completely unrelated to the changed code. A property would fix the JVM tests; for us that's the vast majority. Native tests are another story. My preferred solution for now would be to set the default back to one minute and (maybe) introduce a property to change it on the JVM.
Let me list the things that it looks like we agree on:
What we don't know yet:
I think it would be more productive to focus on these questions. They are important because a system property is a very subpar API and should be avoided. Having some magic incantation in the depths of
Because of all this, in this project, system properties are usually reserved for workarounds for tricky, uncommon requirements.
@dkhalanskyjb Let's discuss it, but I feel that until this issue is solved, kotlinx.coroutines should revert the 10-second timeout and return to the 60-second one. It's not perfect, but at least it reduces the chance of false-positive timeouts.
We solved this issue by refactoring all tests in the app to use our own version of runTest and writing a custom lint check to prevent using kotlinx.coroutines.test.runTest. What you are suggesting is indeed a solution, but it worsens the readability of tests IMO and requires much more refactoring than just replacing an import, as our solution does. It's also error-prone: it's very easy to forget to do this, or even to just call a method which does mocking under the hood.
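For illustration, a minimal sketch of what such a project-local wrapper could look like, assuming the per-test `timeout` parameter that `runTest` exposes; the default value is an assumption, not something prescribed by the library:

```kotlin
import kotlinx.coroutines.test.TestResult
import kotlinx.coroutines.test.TestScope
import kotlin.time.Duration
import kotlin.time.Duration.Companion.seconds

// Project-local replacement for kotlinx.coroutines.test.runTest with a more
// generous default timeout; a lint check can then forbid importing the original.
fun runTest(
    timeout: Duration = 60.seconds,
    testBody: suspend TestScope.() -> Unit,
): TestResult = kotlinx.coroutines.test.runTest(timeout = timeout, testBody = testBody)
```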
It's indeed a good point about multiplatform. But system properties are already a part of the kotlinx.coroutines API: there is a system property to enable kotlinx.coroutines.debug, or to configure the thread pool size with kotlinx.coroutines.scheduler.max.pool.size. So maybe a more generic multiplatform solution is required, but I think it's reasonable to support a system property first, considering that it is already used for other APIs, and allow configuring the timeout.
Thanks to the link by @PaulWoitaschek on element-hq/element-x-android#1226
I think it's just a matter of the number of large/complex projects before we get cases on other platforms. On JS it can easily be caused by dynamic module loading, which can even use the network for this (so it may be slow only on the first run, and only while the module is not cached).
Warm-up is the only way I see now, and the tradeoff is increased complexity and warming up tests which may not use mockk at all, which is especially harmful when they run in isolation from the rest of the suite.
They are not good, but I feel that a universal MPP solution for this shouldn't block this particular request and should be extracted to a separate issue, maybe even outside of kotlinx.coroutines, but at the level of the Kotlin plugin and MPP.
I hope this issue will get solved before the next release one way or another, so there's no need to go back and forth.
Back when the property was introduced, there were no thread pools on other platforms, so it made sense at the time. If we introduced it now, we would think about whether it makes sense to expose the same API on Native.
I don't, and I listed the reasons why in #3800 (comment)
and
Thanks for the find w.r.t. Molecule! We'll have to dig into why Molecule is also taking too long, but I'm wondering: if it's always the first invocation, maybe we could special-case it in the library. Something like, if the timeout is not set explicitly, allow a couple of tests per run to exceed the timeout.
Another way I can think of immediately is to check the stack trace at the moment of cancellation, and if it contains a ByteBuddy (or Molecule) initialization frame, prolong the timeout. Or we could use reflection at the moment of the timeout to check if ByteBuddy was asked to initialize. This could probably work, and if it fixes the issue for everyone, it may be a good decision. There are probably more options I don't see yet.
Nothing is blocked until a release is near.
Just to chime in on this: we have thousands of tests, and we're quite affected by the new coroutines change dropping the timeout from 60s to 10s. Some teams use mockk, some use Robolectric, some just do more stuff (e.g. Koin, just doing more things, etc.). Not having a global timeout that we can change now forces us to spend developer hours on finding the tests that have become flaky. This is also quite tricky, considering that most tests run fine under normal circumstances, but somehow fail when run on an older Intel Mac, or on a slow CI instance, or when some other application (Slack/Teams/Chrome/etc.) wastes CPU cycles on a personal developer machine, or some Gradle workers decide to run on the same CPU.

Having a (let's say opt-in or deprecated) timeout that we could change globally would at least have eased our pain, because then we would have had more time to upgrade. Another plus for a configurable global runTest timeout: by setting it to an extreme value (e.g. 1 second) we would be able to:
Personally, I would be firmly against such a solution: it tries to solve what is not a problem and moves the burden of decision onto the coroutines library instead of letting the owner of the code base decide. Who knows how many similar cases could be out there, and especially if the slow init is caused by your own code, there is no way to add a hack for it at the level of kotlinx.coroutines. If I have my own project/library that has a long init (say, a dynamic network class loader, an external mock server starting on a separate machine, or any other complex setup), it will be affected by this and cannot be easily fixed. As stated in the previous comment by @mreichelt, it takes a lot of time just to find the affected tests, not to mention fixing them.
A heads-up. The lead about Molecule turned out to be very valuable. Digging in deeper, I discovered that Molecule has nothing to do with the problem, and the warm-up procedure works only somewhat incidentally. The real culprit is the coverage checking enabled for these tests: the coverage framework performs bytecode instrumentation for all classes it encounters. So, the reason warm-up works is that it touches sufficiently many classes that this instrumentation doesn't happen as much during the test execution itself.

This certainly rules out the option to patch the timeout on a case-by-case basis: here, the time losses are not in flipping some global switch like "ByteBuddy was successfully initialized"; they are spread throughout execution, even if they are much more pronounced when tests only start executing. The option to special-case timeouts so that a couple of them may slightly miss the time goal also doesn't seem all that robust anymore: I'm not even sure the Element X project won't need to introduce another warm-up procedure like the first one, but touching a set of classes largely disjoint from the first one. So, how many tests should be allowed to miss the mark? 2? 3? 5? This behavior would be a bit too brittle and magical.

The problem in the Element X project also seems general enough to potentially affect other projects: what if the code is heavily instrumented, significantly slowing it down? So far, the solution seems to be twofold:
I looked into detecting whether ByteBuddy is installed, and it seems easy enough. Mockk seems to work via … If anyone has any specific arguments for why 10 seconds is not enough time for a test, now is the time to provide them.
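For illustration, a sketch of what such detection could look like on the JVM; `net.bytebuddy.ByteBuddy` is the library's main entry-point class, and the helper name is hypothetical (this is not code from kotlinx.coroutines):

```kotlin
// A sketch: check reflectively whether ByteBuddy is on the classpath,
// without adding a compile-time dependency on it.
fun isByteBuddyPresent(): Boolean = try {
    Class.forName(
        "net.bytebuddy.ByteBuddy",
        /* initialize = */ false,
        Thread.currentThread().contextClassLoader,
    )
    true
} catch (e: ClassNotFoundException) {
    false
}
```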
As mentioned before, there might be multiple reasons for having slower tests:
Most of these tests should actually fit into the 10s threshold, if run individually under good circumstances.
Don't get me wrong: I think having a 10s timeout can actually lead to some good, if we migrate carefully. If we consider this to be a breaking change, then it becomes clearer what a good migration could look like:
I don't have anything against the 10-second timeout as the default; it's reasonable by itself. Maybe I would suggest creating a doc with an explanation of the timeout and how to change it if necessary, linked from the timeout failure message (or at least in the KDoc of this exception). I'm still not sure about handling ByteBuddy explicitly, but it is probably indeed beneficial to avoid setting the timeout globally higher for all tests; how many such cases will exist in the future is hard to say, especially in MPP projects (maybe it makes sense to have some kind of global timeout policy handler for custom cases, but I'm not sure how feasible that is).
I'd like to ask everyone once again to avoid descending into theoretical rhetoric and what-ifs that don't add anything to the discussion. We aren't proving theorems here, where a single theoretical counterexample breaks the whole logic, but doing engineering. Examples of good contributions, the ones that meaningfully help:
Thank you, these are valuable leads that explain more about why some tests may take too long! On the other hand, just restating your points or aggressively trying to push your preferred solution won't get us anywhere. Also, I don't understand the sense of desperation that seems to creep into the discussion, like
First of all, what's going on here? Where is this dramatic exaggeration even coming from? Reading this, one could get the impression that an industry-wide outage is in effect, even though the overwhelming majority of tests will always finish in a second, no matter if you're running a dozen copies of Slack in parallel. Even if the slower tests are common, the change to the timeout has only been there since 1.7.0, and not much has happened in the test framework since 1.6.4. If this change causes much grief, are there any problems with sticking to 1.6.4 for now?
I think my question, "is 10 seconds enough?", wasn't detailed enough. Here are some examples of just how long 10 seconds is:
The point is that 10 seconds is a lot of time for a computer, and if you hit this limit somehow, there's a good chance your computer is doing a lot of meaningless work. (And you're also literally wasting cumulative developer-years of extra time staring at the screen and waiting for the tests to run). Of course, some tests are expected to take more than a couple of seconds to run, and that's mostly stress tests and exhaustive enumeration tests. But such tests are not at all standard! So, the hypothesis under which we operate is that 2-4 seconds should be enough for almost any given test. Under especially bad circumstances, it should be able to run in 8 seconds. 10 should be enough even under bad circumstances. If it's not, then 10 seconds is not enough for that test. At least—I should emphasize this—it's the way that I personally currently understand the situation. I may well be wrong. Hence the question: does anyone have sprawling codebases that contain lots of tests that are not just burning the CPU time for no clear reason but actually do something meaningful? What kinds of tests are these? @mreichelt, you mentioned some tests "just doing a lot"—could you elaborate?
We could close this issue today by restoring the timeout to 60 seconds and/or introducing a JVM system property to control it (I don't think anyone mentioned encountering this issue outside the JVM). No big deal. Most people would walk away happy, and we'd all forget about this. The problem is that this would be the lazy, suboptimal way out, and we still have the time to do better than that. There are three conflicting goals at play:
With a 10-second timeout, we prioritized goal 1, compromising goal 2, causing some headaches. We should improve w.r.t. goal 2—while minimally compromising goals 1 and 3. There are several different universes, and we don't know yet in which we live:
Which one is it?
After a small interview with @mreichelt, here are some crucial points that I believe weren't raised before but are relevant to real-world projects that are large enough:
I think these are very solid arguments for restoring the 60-second timeout and providing a well-established way to configure it globally. Let's consider two groups of projects:
From this perspective, I get where the "cumulative developer-years" figure came from and am now inclined to agree. If 10 seconds is a timeout that's prone to letting most tests pass most of the time (as opposed to 2 seconds, which won't let most slow tests pass even on the developer's machine, or 60 seconds, which is reserved only for outliers), many typical projects can be negatively affected by us choosing it. On the other hand, the excellent-engineering-quality teams won't significantly benefit from any specific timeout that we'd set if we provide a way to configure it, as they will typically do their homework, as with everything, and will be able to tinker with the setting. Huge kudos to @mreichelt for getting us out of this stalemate!
Another argument, as I shared before: you can't run parameterized tests in Kotlin Multiplatform. One solution is using Kotest property testing, which means you can run N tests "in 1 test". Running N iterations was working perfectly on the JVM within 10 seconds, but it was crashing on Native or JS; I think it was on Windows. Obviously, there is no golden value for the number of iterations, as you can set 1,000,000 of them, but with the default (1,000) it was working fine with 60 seconds.
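For illustration, a minimal sketch of the kind of test described here, assuming Kotest property testing and a hypothetical encode/decode pair; all iterations run inside a single runTest call, so their combined runtime is what has to fit into the timeout:

```kotlin
import io.kotest.property.Arb
import io.kotest.property.arbitrary.int
import io.kotest.property.checkAll
import kotlinx.coroutines.test.runTest
import kotlin.test.Test

// Hypothetical round-trip functions, only to make the example self-contained.
fun encode(value: Int): String = value.toString()
fun decode(text: String): Int = text.toInt()

class RoundTripPropertyTest {
    @Test
    fun roundTrip() = runTest {
        // All 1,000 iterations execute within this single runTest invocation,
        // so their total duration is what the runTest timeout applies to.
        checkAll(1_000, Arb.int()) { value ->
            check(decode(encode(value)) == value)
        }
    }
}
```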
This commit attempts to fix #3800. The first part of the fix, reverting the timeout to 60 seconds, is successful. The second part, allowing global configuration, not so much. On JVM, Native, and Node JS, it's possible to use environment variables to communicate data to the process, and it could in theory be the solution, but it doesn't seem to interoperate well with Gradle. The best attempt so far was to use this:

```kotlin
tasks.withType(AbstractTestTask::class).all {
    if (this is ProcessForkOptions) {
        environment("kotlinx.coroutines.test.default_timeout", "1ms")
    }
}
```

Unfortunately, only `jvmTest` implements `ProcessForkOptions`. Without a clear way to configure the `runTest` timeout in Gradle builds, we can't claim that this is a proper solution.
Small note: thanks to the tip by @dkhalanskyjb to force …
@dkhalanskyjb Thanks for your last comment, it looks very reasonable to me. I would also add a bit of context about "The test gets fixed": I'm fine if a test fails because it takes too long, or if it is an issue of the test itself (i.e. the code of the test), but the problem comes from the fact that tests often rely on the test environment (classpath, additional services, the app initialization logic, the mocking framework, and so on).
Run Test Timeout Issues
The new 10s runTest timeout which was introduced by #3603 is currently creating a lot of issues.
There are various discussions about it here:
runTest default timeout (#3270)

And it boils down to basically this:
Users of ByteBuddy (for instance mockk users) face a very long initialization phase on the very first mocking initialization.
Example
For a Mac M1 Max, this test:
Takes about 1.5 seconds on my machine. CI machines are oftentimes slower than that and might run compilations in parallel or even run multiple projects at once using virtualization.
This has led to the fact that we see a lot of sporadically failing CI jobs for tests which don't do much, besides switching to an IO dispatcher to write to a test database.
Proposed Solution
I proposed several solutions here:
https://kotlinlang.slack.com/archives/C1CFAFJSK/p1688326989830229?thread_ts=1688240798.449439&cid=C1CFAFJSK
With my preferred one being:
That way, when running tests locally they could quickly time out, but depending on the setup, developers could configure a longer timeout on the CI, where they know that the machine is actually slow.
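For illustration, a sketch of how that local-vs-CI split could look in a Gradle Kotlin DSL script, again assuming a hypothetical timeout property that runTest does not currently read:

```kotlin
// build.gradle.kts — a sketch: keep the short timeout locally, relax it on CI,
// detected here via the conventional CI environment variable.
tasks.withType<Test>().configureEach {
    val timeout = if (System.getenv("CI") != null) "60s" else "10s"
    systemProperty("kotlinx.coroutines.test.default_timeout", timeout)
}
```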