Closed
Description
Putting a pull request together involves a lot of pieces, and the information that code management needs isn't always readily available; it can take developers a long time to gather. The logs are overly verbose and don't give developers a quick way to find out what went wrong with their code changes.
Solution
The logs should be more succinct. Keeping information on every file comparison in the history of the repository seems like overkill. Simplify the logs to report pass/fail status for each test, and provide a separate log for failed tests so developers can easily find out what went wrong.
Ideas
- Create a synopsis that reports:
  - the number of tests that ran to completion (double-checking that all tests are run)
  - a list of all failed comparisons
  - the number of compiles that ran to completion (double-checking that all compiles are run)
  - a list of all failed compiles
- Write the synopsis to RegressionTests_.log
- fail_test uses the test number while fail_compile uses the compile name; change fail_test to use the test name.
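The synopsis idea above could be sketched roughly as below. The log name, the `RESULTS` line format, and the counting logic are all illustrative assumptions, not the actual rt.sh code:

```shell
#!/bin/bash
# Sketch: append a pass/fail synopsis to a regression-test log.
# All file names and the results format here are hypothetical.

LOG="RegressionTests_demo.log"   # stand-in for RegressionTests_<machine>.log
: > "$LOG"                       # start fresh for this demo

# Per-item results as "kind name status" lines (normally gathered during the run).
RESULTS="compile atm_dyn32 PASS
compile s2swa FAIL
test control_c48 PASS
test cpld_control FAIL"

{
  echo "=== SYNOPSIS ==="
  echo "Compiles completed: $(echo "$RESULTS" | grep -c '^compile .* PASS$')"
  echo "Tests completed:    $(echo "$RESULTS" | grep -c '^test .* PASS$')"
  echo "Failed compiles:"
  echo "$RESULTS" | awk '$1 == "compile" && $3 == "FAIL" { print "  " $2 }'
  echo "Failed tests:"
  echo "$RESULTS" | awk '$1 == "test" && $3 == "FAIL" { print "  " $2 }'
} >> "$LOG"

cat "$LOG"
```

A developer skimming this sees the counts first and only the names of failed items, instead of every file comparison.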
Extras
- The resulting PR will also need to adjust the PR template.
Activity
BrianCurtis-NOAA commented on Dec 20, 2023
As I start working on this, adding tags for @junwang-noaa @DusanJovic-NOAA @DeniseWorthen. Is there anything else you think this should do, or not do, based on what I have in the issue?
DeniseWorthen commented on Dec 20, 2023
Do you envision that the pre-test log is short (just summarizing and flagging failed tests etc) or would it be like the existing log with a summary attached?
BrianCurtis-NOAA commented on Dec 20, 2023
My goal is to limit it to the information we need, and skip the individual file comparisons. If that doesn't fit easily into the code, it may end up larger.
DeniseWorthen commented on Dec 20, 2023
And is there any way to have GitHub understand the pre-test log and apply some sort of "green light" on a PR?
BrianCurtis-NOAA commented on Dec 20, 2023
I think this would be implemented through a script that the GitHub Action uses. We would have to write it, though.
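Such a gating script could look roughly like the sketch below. The log name and its one-result-per-line PASS/FAIL format are assumptions for illustration, not the project's actual format:

```shell
#!/bin/bash
# Sketch: a check a CI step could run against the pretest log to "green light" a PR.
# The log name and line format here are hypothetical.

LOG="RegressionTests_pretest.log"

# Demo content; in CI this file would be produced by the regression-test run.
printf 'PASS -- COMPILE atm_dyn32\nPASS -- TEST control_c48\n' > "$LOG"

if grep -q '^FAIL' "$LOG"; then
  echo "Regression testing FAILED:"
  grep '^FAIL' "$LOG"
  exit 1                # a non-zero exit marks the CI check red
fi
echo "All regression tests passed."
```

GitHub marks the check green or red based purely on the script's exit status, so no special log parsing on GitHub's side is needed.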
BrianCurtis-NOAA commented on Dec 21, 2023
I've added code to the script that creates a symbolic link called run_dir, pointing to the current rt_###### directory for your running job, and placed it in the tests dir. It will be updated for each new job.
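For reference, keeping such a link current across jobs can be done with `ln -sfn`, which replaces an existing link rather than creating a new link inside the old target directory. A sketch with illustrative paths (the real directory names come from rt.sh):

```shell
#!/bin/bash
# Sketch: maintain a stable run_dir symlink to the newest rt_<PID> run directory.
# All paths here are stand-ins for illustration.

TESTS_DIR="$(mktemp -d)"          # stand-in for the repo's tests/ directory
RUN_DIR="$TESTS_DIR/rt_123456"    # stand-in for the per-job rt_###### directory
mkdir -p "$RUN_DIR"

# -s symbolic, -f replace an existing link, -n treat an existing link to a
# directory as a file (so it is replaced instead of descended into)
ln -sfn "$RUN_DIR" "$TESTS_DIR/run_dir"

readlink "$TESTS_DIR/run_dir"
```

Without `-n`, rerunning the command when run_dir already points at a directory would create the new link inside the old run directory instead of replacing run_dir itself.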
BrianCurtis-NOAA commented on Dec 21, 2023
This should close #1821
BrianCurtis-NOAA commented on Dec 22, 2023
The above is output from the current state of the new RegressionTests_pretest.log for a successful test.
DeniseWorthen commented on Dec 22, 2023
Don't we just need to say that something failed, rather than listing everything that passed? So if everything is OK, nothing more is needed; only if something fails do we log specifics about what did not work (a compile, a run, a comparison).
BrianCurtis-NOAA commented on Dec 22, 2023
Part of the verification function's job is to print more detail when there's a failure or a missing test/compile. I think having the output above is important, and it's already a big cut from the verbose RegressionTests_<machine>.log.
DeniseWorthen commented on Dec 22, 2023
Hm, well, maybe others can chime in. I find the above too verbose. From the perspective of someone making a PR, all I care about is what doesn't work: it fails to compile, it fails to run, or it fails to compare. No news is good news.
junwang-noaa commented on Dec 22, 2023
Maybe it's just me, but I find it convenient to see a list of verified tests to get the full picture.