BLD: use install-tags to optionally install tests #26274
Conversation
The failing CI jobs show

The next step here is to extend

A few more things to mark as tests or move to the right place:

Let me test on CI before shifting files to the correct places. This will be done in the next commit.
```
(numpy-dev) 16:25:13:~/Quansight/numpy % python tools/check_installed_files.py build-install/usr/lib/python3.11/site-packages/numpy --no-tests
----------- No test files were installed --------------
----------- All the necessary .pyi files were installed --------------
```
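The kind of check that a script like `check_installed_files.py` performs can be sketched roughly as follows. This is a minimal sketch, not the actual script in `tools/`; the helper name and the rules for what counts as a test file are assumptions for illustration.

```python
import pathlib


def find_test_files(site_packages):
    """Collect installed .py files that look like test files.

    Hypothetical helper: here, a file counts as a test file if its
    name starts with 'test_' or it lives under a 'tests' directory.
    The real tools/check_installed_files.py has its own rules.
    """
    root = pathlib.Path(site_packages)
    return sorted(
        str(p) for p in root.rglob('*.py')
        if p.name.startswith('test_')
        or 'tests' in p.parts[len(root.parts):]
    )
```

A CI step could then fail the build whenever `find_test_files()` returns a non-empty list for the installed tree.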
Also add
Sure. I also wanted to do something similar, but a separate job came to mind to avoid pollution and doing
This is starting to look good. I noticed a single issue that breaks an installed
That private test function doesn't really need a docstring; it isn't rendered anywhere. You could move the content from

Aside from that, everything works for me when testing against SciPy. And the
This is the kind of thing where we have to be pragmatic. A separate job would take ~10 minutes, while a single step at the end of the existing job takes only a few seconds and can be implemented with less code for the CI test. The
Yes, correct.
It's just that sometimes previous commands influence the results of later commands, especially in the case of builds. However, I think
One more idea for a check to make things more robust. We have only four tags, and we should ensure that everything indeed does have one of those tags, to avoid some file going silently missing (if it doesn't have a tag, it will no longer be installed). Something like this should do it:

```python
import json
import os

# Collect the install tag of every target in Meson's install plan.
all_tags = set()
with open(os.path.join('build', 'meson-info', 'intro-install_plan.json')) as f:
    targets = json.load(f)

for key in targets:
    for values in targets[key].values():
        all_tags.add(values['tag'])

# Fail loudly if any target carries a tag outside the expected four.
if all_tags != {'runtime', 'runtime-python', 'devel', 'tests'}:
    raise AssertionError(f"Found unexpected install tag: {all_tags}")
```

This could go in the same
After that, I think this is about ready to merge. Docs and actually changing the wheel contents are probably best done in a follow-up PR.
I think
Ah! Is a warning the right response for unexpected tags? I think
The reason is two-fold, I think:
I guess one CI job failing is irrelevant? Maybe it happened because I added
Not due to that; it's just a random, unrelated failure due to an issue with
All set on this PR from my end.
The two tests take 41 seconds together, and pass locally after this fix. That runtime is too slow to re-enable this test in a CI job. [skip azp] [skip cirrus] [skip circle]
Thanks @czgdp1807. I pushed two small fixes: the code comment style prescribed in https://numpy.org/neps/nep-0045-c_style_guide.html for multi-line comments, and a fix for the pyinstaller test, because I had a hunch that we weren't running it in CI anymore (it's super slow, 40 seconds on my machine).

If CI is happy, I think this is about ready.
Tests pass. :D
This PR shouldn't change anything yet; it only makes it possible to omit installing the tests. The next step is to actually start doing that - I opened gh-26289 with a few of the decisions that need making.
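The mechanism this PR enables can be sketched as follows. This is an assumed invocation, not a command from the PR itself; the tag names are the four discussed above, and Meson's `--tags` option restricts which tagged targets get installed:

```shell
# Configure and build as usual
meson setup build
meson compile -C build

# Install everything except files tagged 'tests'
# (tags from this PR: runtime, runtime-python, devel, tests)
meson install -C build --tags runtime,runtime-python,devel
```

Omitting `tests` from the tag list is what would eventually let wheels ship without the test suite.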
Merged - thanks again @czgdp1807! |
#25737