feat: add benchmarks_entrypoint.py
#34495
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Very nice!
@@ -82,7 +40,7 @@ def collect_metrics(benchmark_id, continue_metric_collection):
     conn.close()

-def run_benchmark(branch: str, commit_id: str, commit_msg: str, num_tokens_to_generate=100):
+def run_benchmark(logger: Logger, branch: str, commit_id: str, commit_msg: str, num_tokens_to_generate=100):
Is there a way to only pass the logger and a class that wraps around your continue_metric_collection?
This way, the only thing people need when adding a new file with run_benchmark is:
- call start() on the class
- compute some stuff
- call .record() (for example), which takes the dict of inputs, adds them to the data, commits, and closes!

Completely up for debate, try to find the simplest way to do it! 🤗
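The wrapper class suggested above could look something like the sketch below. All names here (MetricsRecorder, start, record) are hypothetical illustrations of the reviewer's idea, not the PR's actual API; the real implementation would open a database connection in start() and commit/close in record().

```python
import logging
from typing import Any


class MetricsRecorder:
    """Hypothetical wrapper: benchmark scripts only call start(), compute,
    then record() their measurements. Not the PR's actual class."""

    def __init__(self, logger: logging.Logger):
        self.logger = logger
        self.data: list[dict[str, Any]] = []
        self._started = False

    def start(self) -> None:
        # In a real implementation this would open the DB connection
        # and kick off metric collection.
        self._started = True
        self.logger.info("metric collection started")

    def record(self, measurements: dict[str, Any]) -> None:
        # Takes the dict of inputs, adds it to the data; a real
        # implementation would then commit and close the connection.
        assert self._started, "call start() first"
        self.data.append(measurements)
        self._started = False


# Usage as the reviewer describes: start, compute, record.
recorder = MetricsRecorder(logging.getLogger("benchmark"))
recorder.start()
recorder.record({"tokens_per_sec": 42.0})
```

With this shape, a new benchmark file never touches the connection or commit logic directly, which is the simplification the reviewer is after.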
Yes, that's probably the best path forward! I'll iterate and let you know when I have something nicer.
Perfect, thanks!
lmk if the changes look good to you
Changes look great, good to go when you have the panel thing sorted out, or once the doc explains how to handle the case where you only have, say, 3 metrics!
I'll update the README and then we can merge!
Adding a `benchmarks_entrypoint.py` file, which will be run from the benchmarks CI. This Python script lists all Python files in the `benchmark/` folder and runs each one's `run_benchmark` function, allowing people to add new benchmark scripts.
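The discovery loop described above might be sketched as follows. This is an illustrative assumption of how such an entrypoint could work, not the PR's actual code; in particular, the real entrypoint passes arguments such as the logger, branch, commit_id, and commit_msg to `run_benchmark`, which are elided here for simplicity.

```python
import importlib.util
from pathlib import Path


def discover_and_run(benchmark_dir: str = "benchmark") -> list[str]:
    """Import every .py file in benchmark_dir and call its run_benchmark.

    Hypothetical sketch: the real entrypoint would pass logger, branch,
    commit_id, commit_msg, etc. to run_benchmark.
    """
    ran = []
    for path in sorted(Path(benchmark_dir).glob("*.py")):
        # Load the file as a module without requiring it to be a package.
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        # Only run files that opt in by defining run_benchmark.
        if hasattr(module, "run_benchmark"):
            module.run_benchmark()  # real args elided in this sketch
            ran.append(path.stem)
    return ran
```

Adding a new benchmark then only requires dropping a file with a `run_benchmark` function into `benchmark/`; no changes to the CI configuration are needed.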