MAINT: py3.10 in more jobs and bump some 3.8 to 3.9 #14832
Conversation
Azure does not yet have Py3.10. But maybe we could move some jobs from Azure to GitHub Actions? Or do we have a credit reason, or other incentives, to stay with Azure?
I also see an issue with gmpy2. Why is this needed? I did not see any usage in the codebase. Also, the last stable release is quite old and they have a recent RC. Worth checking whether we actually need this package. [EDIT:] oh ok, it's for mpmath to run faster. I will try to use a more recent version.
Not really--the last time we had a CI "crisis" (when Travis CI put the credit limit into production), we mostly just tried to "load balance" so that we didn't run into massive rate-limiting problems with a single service. I think Azure gives us 10 parallel jobs, so we try to balance with that in mind (Actions may be similar?).
It would be good for us to reduce our CI footprint as well. I saw a blog post that examined the CI usage of various Python open-source projects, and we're very near the top. They converted the usage to a cash amount and the figures are scary. Once the new build system is in place our demands will go down, but it's also worth examining test run time again. There are a few stats tests that are quite demanding of time.
Thank you both for your input. This PR only removes a job by moving the coverage to another job (this is done in another open PR on the coverage). For the rest, I have not been around long enough to know what could be removed. If you have ideas I will be happy to change things. What is missing in general is more caching from one commit to another. I have a failed attempt PR to add caching to CircleCI, but for some reason I am having trouble with our build script. Another possibility would be to use labels to trigger CI. We could have a small selection of workflows which run all the time, and depending on the PR we could manually trigger more runs by adding a label. What do you think?
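Label-gated jobs are possible with stock GitHub Actions expressions. A minimal sketch, assuming GitHub Actions is the target service; the `ci-full` label name and the job step are hypothetical, not part of any existing SciPy workflow:

```yaml
# Hypothetical label-gated job: this workflow runs only when the PR carries
# the "ci-full" label, while a smaller default workflow runs on every push.
name: full-tests
on:
  pull_request:
    types: [opened, synchronize, labeled]
jobs:
  full:
    # contains() over the PR's label names is a built-in Actions expression.
    if: contains(github.event.pull_request.labels.*.name, 'ci-full')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: echo "run the expensive test matrix here"
```

Including `labeled` in the trigger types matters: without it, adding the label to an already-open PR would not start the workflow.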
I thought Ralf took a good solid look at that recently--I know I did a test run to confirm that we were not getting much benefit from multiple cores for the test suite. I guess the meson stuff will speed up builds--we may not always get tons of benefit from parallel builds depending on the CI service, though just cutting in half or so with ~2 cores available would still be huge. I'd be nervous about substantially cutting down the CI entries since we already get stuff slipping through to the wheels repo.
I've looked at a few CI test runs:
Whilst there may be an anomaly for test_gh12999 (e.g. something happens just before that test runs), it's a bit suspicious. Taking almost 7 minutes to run a test (that's not marked as slow) is excessive; even 2 minutes is too long. Furthermore, that specific test doesn't appear to be seeded: it failed straight out on my desktop when I copied the code into a REPL. Some of the MGCStat tests are marked slow, but at what point is a ~50 s individual test too long to run in CI?
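Unseeded stochastic tests can be made deterministic by passing a seeded generator. A minimal sketch using only the standard library; the statistic and tolerance are invented for illustration and are not taken from test_gh12999:

```python
import random

def sample_mean(rng, n=1000):
    # Stand-in for a stochastic quantity a stats test might check.
    return sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n

def test_sample_mean_is_reproducible():
    # A seeded generator makes the draws identical on every CI run, so the
    # assertion cannot fail intermittently on unlucky samples.
    rng = random.Random(12345)
    result = sample_mean(rng)
    assert abs(result) < 0.1

test_sample_mean_is_reproducible()
```

The same idea applies with `numpy.random.default_rng(seed)` in SciPy's own test suite.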
Wooo... I never looked at these, but indeed we would get more benefit from reducing these instead! I am also questioning the very long tests. Maybe they test known results, but CI tests are not supposed to be rocket science, so I am pretty sure there are ways to test these differently. It's like the xfails: do we actually review these from time to time? We have a lot of them, and if they are failing most of the time that's also lost CI time. Maybe another PR though, I am neutral here.
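On reviewing xfails: pytest's strict xfail mode makes any xfail that starts passing fail the suite, which is one way to keep the list from going stale. A minimal sketch; the test body and reason are hypothetical:

```python
import pytest

# With strict=True, an xfail that unexpectedly passes (XPASS) fails the
# run, so stale xfail markers surface instead of silently burning CI time.
@pytest.mark.xfail(strict=True, reason="hypothetical known failure")
def test_known_bug():
    assert 1 + 1 == 3  # still fails today, reported as XFAIL
```

Running `pytest -rxX` also prints a summary of all xfailed and xpassed tests, which helps with a periodic review.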
We should update windows distribution as well. We get this deprecation warning now.
I guess this will change with Meson coming soon (I've seen the blog post on the CI test suite). So I am not sure how relevant this PR is. Feel free to close it if something better is coming soon 😄
That blog post was wrong by two orders of magnitude I believe. That said, it's still true that we should use fewer resources.
I still have an open PR for adding
Yes, the new CI jobs should be significantly faster, due to both build improvements and caching.
I would say it's still very helpful, and I would like to see this PR merged. It would be great if the pre-release job stopped failing so this could all be green.
There are parts that are finished, and an overall matrix which will help in choosing configs and in documenting our setup (right now it's just reading through all the yaml files ...), but it's not like we plan to redo the whole thing from scratch. EDIT: this is the blog post in question: https://labs.quansight.org/blog/2021/10/re-engineering-cicd-pipelines-for-scipy/
It looks like some of these got slower, by a lot. Unclear why. But that is indeed unacceptable; I'd mark them as slow.
Ok, thanks for confirming. I will see to making it green. I will also see if I can mark other slow tests. And since NumPy has an updated Windows image, I will do the same here.
I think the failures in For
Extracted a failure, since it's not visible in the latest CI run right now:
This looks odd indeed. The CI job uses: Windows, Python It's unclear to me why that is failing. I'd say yes, let's ignore that for now if needed. If you touched the config for the
@rgommers thanks. Indeed and I just got why. The only change I did was to change Py 3.8 to 3.9 for this job, hence the version for NumPy in For windows, it's failing with 3.10 on
I will look into this later and will keep it a priority (though a bit lower, because some NumPy stuff and SciPy module renaming is underway).
Thanks @rgommers. It seems there is another warning issue and just the doctest failure left, and then we are good.
All is green @rgommers 🎉 Anything else to do?
This now LGTM. It's a relatively small PR at this point, and the mix of 3.8/3.9/3.10 jobs seems good. Let's get this in. Thanks @tupui!
Update some versions of Python to 3.9 and 3.10. Only considering a few variables, it would look like the following:
Mac
Linux
Windows