General Testing Standards
All IDAES tests should be written using the pytest framework (see the pytest documentation).
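As a minimal illustration, pytest automatically collects any function whose name starts with test_ from files named test_*.py and runs its assertions (the add function here is a hypothetical example, not IDAES code):

```python
# Hypothetical module code that would live in idaes/subpackage/module.py
def add(a, b):
    return a + b


# Corresponding test, which would live in idaes/subpackage/tests/test_module.py.
# pytest collects it automatically because the name starts with "test_".
def test_add():
    assert add(1, 2) == 3
```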
Every subpackage (i.e. folder) in the IDAES project should have an associated "tests" sub-folder which contains tests associated with the code in that subpackage/folder.
For each module, e.g. idaes.<subpackage>.<module> (which would be in the file "idaes/subpackage/module.py"), there should be a corresponding test file idaes.<subpackage>.tests.test_<module> (which would be in the file "idaes/subpackage/tests/test_module.py"). Modules with a larger number of tests may wish to add additional test files, whose names should follow the format idaes.<subpackage>.tests.test_<module>_<descriptor>.
For classes inside the modules, transform the class name by putting underscores between lowercase and capital letters, then make it all lowercase (e.g. MyClassName becomes my_class_name), and name the test for that class test_<transformed_name>() (e.g. test_my_class_name()). If you have method-specific tests, add the method name (which should already be lowercase with underscores) after the class name, e.g. test_my_class_name_my_method().
Exception: For modules that have only one class, the class name can be omitted from the prefix of the tests. So, if you have a module which has all its functionality in the class "TimeAndRelativeDimensionsInSpace", you can name your tests simply test_<thing> rather than test_time_and_relative_dimensions_in_space_<thing>.
For functions inside modules, use the function name as the test function name, with test_ prepended, e.g., test_my_function().
Test developers may also want to make use of test classes to organize tests within their test files, such as to group all the tests associated with a specific class within a module, or a specific case study being tested.
Module idaes.core.control_volume0d has all its functionality in the class ControlVolume0DBlockData. According to the exception mentioned above, the class name is left out of the name of the test. Therefore a test of the ControlVolume0DBlockData.add_state_blocks() method is:

@pytest.mark.unit
def test_add_state_blocks():
    # test body

A more specific test of the same method, with the parameter "has_phase_equilibrium" set to True, is:

@pytest.mark.unit
def test_add_state_blocks_has_phase_equilibrium():
    # test body
TBA
A starting point for determining the quality of testing in a repository is to look at the percentage of the lines of code in the project that are executed during test runs, and which lines are missed. A number of automated tools are available to do this; IDAES uses CodeCov for this purpose, which is integrated into the Continuous Integration environment and reported for all Pull Requests. CodeCov provides a number of different reports which can be used to examine test coverage across the project and identify areas of low coverage for improvement.
85% test coverage is generally accepted as a good goal to aim for, and IDAES aims to meet this as a minimum standard for test coverage. In order to meet this target, the following requirements are placed on Pull Requests to ensure test coverage is constantly increasing:
- A Pull Request may not cause the overall test coverage to decrease, and
- The test coverage on the changed code (diff coverage) must be equal to or greater than that of the overall test coverage.
However, there are limitations to relying on test coverage as the sole metric for testing: coverage only checks which lines of code were executed during testing, and does not consider the quality of the tests. Further, it is often necessary to test the same line of code multiple times for different user inputs (e.g. testing for cases where a user passes a string when a float was expected), which is not easily quantified. This is especially true for process models, where the same model needs to be tested under a wide range of conditions to be accepted as validated. Thus, code coverage should only be considered as an initial, minimum bound on testing.
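As an illustration of exercising the same code path with different user inputs, consider the following sketch (the set_flow_rate helper is hypothetical, not an IDAES API):

```python
import pytest


def set_flow_rate(value):
    # Hypothetical helper that accepts only numeric flow rates
    if not isinstance(value, (int, float)):
        raise TypeError("flow rate must be a number")
    return float(value)


@pytest.mark.unit
def test_set_flow_rate():
    assert set_flow_rate(10) == 10.0


@pytest.mark.unit
def test_set_flow_rate_invalid_input():
    # The same lines of code are exercised again with a bad input type;
    # line coverage alone cannot distinguish this case from the test above.
    with pytest.raises(TypeError):
        set_flow_rate("fast")
```

Both tests execute the same lines of set_flow_rate, so they contribute identically to line coverage even though they check quite different behavior.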