Refactor testing to use the data structure directly #25
Conversation
Force-pushed from f855175 to ba84406
I still have a bit of work to do to fix mypy's complaints about assignments to …
Yeah, that would remove a source of potential test failures. It'd still be worth keeping a test that ensures valid TOML is emitted regardless, but then we could make it a single test instead of the multiple ones we have now.
This commit splits `WritePyproject.run()` into two methods: one that generates the data structure corresponding to the content of `pyproject.toml`, and another that writes it out. This split will come in handy for unit testing: we can write tests against `_generate()` that don't have to go through the loop of writing out TOML-encoded content and reading it back in.
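A minimal sketch of the split described in the commit message above. The class and method names come from the commit; the field values and the hand-rolled string emitter are placeholders for illustration only (the real code would pull values from the setuptools `Distribution` and use a proper TOML library):

```python
import io


class WritePyproject:
    def _generate(self) -> dict:
        # Build the data structure mirroring the pyproject.toml content.
        # Placeholder values; the real method derives these from the
        # setuptools configuration.
        return {"project": {"name": "example", "version": "1.0"}}

    def run(self, stream: io.TextIOBase) -> None:
        # Serialize the generated structure and write it out. A real
        # implementation would use a TOML library rather than this
        # simplistic emitter that assumes string values.
        for table, fields in self._generate().items():
            stream.write(f"[{table}]\n")
            for key, value in fields.items():
                stream.write(f'{key} = "{value}"\n')
```

With this shape, unit tests can assert directly on `_generate()`'s return value, while `run()` stays a thin serialization wrapper.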
In this commit I'm adding the `Project.generate()` method, which directly returns the data structure generated by the plugin, without rendering it to TOML. This gives us a more efficient way to test the plugin's output without invoking or mimicking a subprocess, and without going through an unnecessary round of conversion to and from TOML markup.

I duplicated all the existing tests which involve running setup.py with `script_runner.run()`, except I changed the copies to use the `generate()` method. We don't really need all these tests duplicated, but I'm leaving it for a future commit to decide which of the original batch to remove and which to keep. We'll definitely want to keep some of the original tests that actually run setup.py, just to make sure the whole system works, but going forward we can probably use `generate()` for the majority of our testing.

As part of this change, I added a call to `monkeypatch.chdir()` in the project fixture so that every test which uses a project constructed by that fixture runs in that project's root directory by default. That's necessary because `setuptools.run_setup()` doesn't provide a way to set the working directory the way `script_runner.run()` does, so we have to change to the right directory beforehand. I could have called `chdir()` inside `generate()`, but changing directory in a fixture lets pytest restore the original directory after the tests run, in case that's relevant.

In the future, perhaps we can also construct `Distribution` objects programmatically instead of having to execute a setup.py file, but that's an entirely separate feature.
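The directory-restoring behaviour that makes `monkeypatch.chdir()` preferable to a bare `os.chdir()` can be illustrated with a small context manager. This is not the fixture's actual code, just the equivalent save/restore pattern that pytest applies for us:

```python
import os
from contextlib import contextmanager


@contextmanager
def in_directory(path):
    # Change into the project directory for the duration of the block,
    # then restore the original working directory on exit, mirroring
    # the cleanup pytest performs after monkeypatch.chdir().
    original = os.getcwd()
    os.chdir(path)
    try:
        yield path
    finally:
        os.chdir(original)
```

Doing the `chdir()` in the fixture rather than inside `generate()` means every test gets this restore-on-exit behaviour for free.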
Force-pushed from ba84406 to 474125a
Yay, I finally managed to satisfy mypy 😁 It was a bit tricky in this case. @sjlongland Any thoughts about which of the original tests in …?
I don't think there's one that emits everything yet.
True, and if we keep the serialisation routine as dumb as possible, we're realistically not testing our own code, we're testing ….

Another approach is the call to ….

It comes down to how much you trust the upstream project to be consistent in their output. :-)
Makes sense. I'll push a commit that removes the others.
Yeah, I mean we haven't implemented support for most of the fields, but I think there should be one test that emits all the fields we do support, and as we add support for new fields we keep expanding the scope of that test. Sorry if I wasn't clear about that.
My thoughts as well 👍
Hmm, interesting idea. I hadn't thought of that. I do think it makes the test code a little more complicated to follow, though probably not enough to be a big deal. FWIW I do trust …
This commit changes the set of tests which actually run `setup.py pyproject` to be just two: one extremely minimal test which just has a name and version, and another comprehensive test that is supposed to include all relevant fields that can appear in setup.cfg. The idea is that as we add support for more fields, we'll adjust the expected output of that test to keep up. I populated the dummy setup.cfg file for the latter test with a bunch of fields that exist in setuptools' own setup.cfg file as well as this project's setup.cfg.
In this PR I'm splitting out most of the functionality of the plugin's `run()` method into a new method, `_generate()`, which returns the data structure corresponding to the contents of the `pyproject.toml` file but without actually writing it out as TOML. I've also modified the test infrastructure to give access to the return value of this method directly, through the new `Project.generate()` wrapper, so we can run tests without going through an unnecessary (ish?) round of writing the structure to TOML and then reading it back in.

I also copied all the existing tests but changed them to use this new `generate()` method. This introduces a bunch of duplicate tests. I think we can safely get rid of many of the original tests and just keep a few to make sure the whole process works, but most of our testing from now on can probably be done by using `generate()`. As I create this PR I'm not yet sure which/how many of the original tests it makes sense to remove, but we'll figure that out. @sjlongland I'd appreciate your thoughts on this.