This project is a fork of the original Textualize/Rich PyPI library. Group 1 of CS489 (Winter 2024) at the University of Waterloo selected this open-source project to improve its software delivery practices. The details of the improvements are submitted as a report to the grading team. This README outlines the commands that need to be executed for project setup and for the experiments.
- Make sure you have Python (>= 3.7.0) and pip installed on your machine.
- Make sure you have Poetry installed. If not, run
  ```
  pip install poetry
  ```
  or follow any other recommended method.
- Install dependencies using Poetry:
  ```
  poetry install
  ```
- Boot up a Poetry shell (use the Poetry virtual environment to execute all subsequent commands):
  ```
  poetry shell
  ```
- Run the pre-commit setup script to configure the hook:
  ```
  make setup-pre-commit
  ```
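As a quick sanity check before continuing, you can verify the prerequisites from a shell. This snippet is a convenience suggestion, not part of the repo's tooling:

```shell
# Sanity-check the prerequisites (not part of the repo's tooling).
# Fails if the interpreter is older than 3.7.0.
python3 -c 'import sys; assert sys.version_info >= (3, 7), sys.version'

# Warn (without failing) if poetry is not on PATH yet.
command -v poetry >/dev/null 2>&1 || echo "poetry not found - run: pip install poetry"
```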
Make sure to complete the Common pre-build steps before proceeding to test each of the experiment groups.
- Check out `master` and pull the latest `master` from the remote repository:
  ```
  git checkout master
  git pull
  ```
- Run the build:
  ```
  poetry build
  ```
- Configure devpi-client:
  ```
  devpi use http://3.140.87.9:80
  devpi login testuser --password=testusercs489
  devpi use testuser/cs489-test-index
  ```
- Upload to the devpi server:
  ```
  devpi upload --from-dir dist/*
  ```
- Follow the Creating release on GitHub steps to create a new release.
- Go to the Actions -> Build, Publish, and Release Rich workflow tab on the repository.
- Select the workflow triggered by your merge/push to `master`.
- Click on `Review deployments` and select the `deployment` environment from the options.
- Click `Approve and deploy` to approve and run the workflow.
- Wait for the job to finish and then check the Releases section to verify the release.
Details about benchmarking can be found in this README.md file. A more in-depth process for replicating the two workflows can be found in the Replicability section of Improvement 2 (Monitoring - Improve Benchmarks) in the Evaluation Report.
Manual Process:
- Ensure any tags you wish to benchmark are included in the file `asvhashfile` at the root of the repo.
- Run the benchmarks for those tags by running `asv run HASHFILE:asvhashfile`. This will take several minutes.
- Create the HTML locally for those benchmarks by running `asv publish`.
- Run `asv preview` to launch a local webserver that will let you preview the benchmarks dashboard. Navigate to the URL this command gives you and check everything looks fine.
- Check out the `rich-benchmarks` repo from here and `cd` into it.
- Copy the HTML you generated earlier into the root of this repo, e.g. `cp -r ../rich/benchmarks/html/* .` (assuming you checked out `rich-benchmarks` alongside `rich` in your filesystem).
- When the HTML is merged into `main`, the benchmark dashboard will be updated via a GitHub Action.
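For reference, the `HASHFILE:` specifier tells asv to read the refs to benchmark from a plain text file, one git ref (tag or commit hash) per line. The tag names below are hypothetical examples of what `asvhashfile` might contain:

```
v13.3.1
v13.4.0
```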
Automated Process:
- Run `./run_benchmarks.sh <tag_name>`, where `tag_name` is the new tag you want to monitor using asv.
- When your changes are pushed to the `master` branch, the benchmarking dashboard will be updated automatically via a GitHub Action.
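The script in the repo is authoritative; as a rough sketch, the tag-recording step it performs probably looks something like the following (the function name, demo tag, and file path here are illustrative, not the script's actual contents):

```shell
# Illustrative sketch only - see run_benchmarks.sh in the repo for the real logic.
# Appends a tag to the hash file unless it is already listed, so re-running
# with the same tag does not create duplicate entries.
record_tag() {
  tag="$1"
  hashfile="$2"
  grep -qxF -- "$tag" "$hashfile" 2>/dev/null || echo "$tag" >> "$hashfile"
}

rm -f /tmp/asvhashfile.demo                     # start from a clean demo file
record_tag "v13.4.0" "/tmp/asvhashfile.demo"
record_tag "v13.4.0" "/tmp/asvhashfile.demo"    # no-op: tag already recorded
# The real script would then re-run the benchmarks and regenerate the dashboard
# HTML, e.g. with `asv run HASHFILE:asvhashfile` followed by `asv publish`.
```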