
add --perf_args usage to jenkins performance runner scripts #9038

Merged: 1 commit merged into grpc:master on Jan 14, 2017

Conversation

@apolcyn (Contributor) commented Dec 9, 2016

Needed so the perf flame graphs can be used on more Jenkins performance jobs.
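The change follows the pattern visible in the review diff further down: each tools/jenkins/run_* runner script forwards a PERF_ARGS environment variable into the driver's --perf_args flag. A minimal sketch of the shape of the change (the flag values here are placeholders; see the actual diff below for a real instance):

    # Illustrative sketch of the pattern this PR adds to the
    # tools/jenkins/run_* scripts: forward $PERF_ARGS into the driver.
    # If PERF_ARGS is empty, perf profiling stays disabled.
    tools/run_tests/run_performance_tests.py \
        --remote_worker_host grpc-performance-server1 grpc-performance-client1 \
        --xml_report report.xml \
        --perf_args "$PERF_ARGS"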

@ctiller (Member) left a comment

I'm good with this, but I'm a little rusty on how all this fits together. @jtattermusch, @nicolasnoble, @matt-kwong any thoughts?

@apolcyn (Contributor, Author) commented Dec 9, 2016

By the way, I just confirmed that these are the scripts called by the gRPC_performance_master, gRPC_performance_sweep, and gRPC_performance_experiment Jenkins jobs.

@matt-kwong (Contributor) commented:
I think you covered all of the performance jobs we run, but I'll let Nico or Jan confirm.

@jtattermusch (Contributor) left a comment

Who sets the $PERF_ARGS env variable? Is it set in the Jenkins job config? If so, I'm not sure we want that. We have the tools/jenkins/run_* scripts because we don't want too much configuration done in the Jenkins UI (where it's not versioned).

Also, won't enabling the perf args influence the benchmark results?

@@ -67,6 +69,7 @@ tools/run_tests/run_performance_tests.py \
--bq_result_table performance_test.performance_experiment_windows \
--remote_worker_host grpc-performance-windows1 grpc-performance-windows2 \
--xml_report report_windows.xml \
--perf_args "$PERF_ARGS" \
@jtattermusch commented on the diff:

These are running on Windows; using the "perf" command will probably break them.

@apolcyn (Contributor, Author) commented Dec 12, 2016

For setting the $PERF_ARGS variable: yes, I was planning to set it via a PERF_ARGS environment variable in the Jenkins UI. I figured we could give these jobs default PERF_ARGS parameters that generate flame graphs, but then also turn profiling off for specific runs by passing an empty string.
Note that perf profiling doesn't get turned on if --perf_args is passed an empty string.
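In other words, the runner scripts can always pass the flag through, and an empty value is a no-op. A rough shell sketch of that guard, illustrating the behavior described above (not the actual driver code, which lives in run_performance_tests.py):

    # Illustrative guard: only wrap the worker command in perf when
    # PERF_ARGS is non-empty; an empty string leaves profiling off.
    if [ -n "$PERF_ARGS" ]; then
        perf $PERF_ARGS -- "$@"   # e.g. PERF_ARGS="record -g"
    else
        "$@"
    fi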

From what I've seen, running under perf has pretty low overhead, maybe 5-10%.

Maybe this should change to hard-code the perf parameters? Or run perf on different jobs?

@apolcyn (Contributor, Author) commented Dec 12, 2016

Sorry, on second thought I don't think this will work out very well, mostly because the experiment and master jobs also run scenarios for languages besides C++/Go (the languages the flame graphs are most relevant for).

I'm now thinking the flame graphs should stay on the sweep job, and also go on a new performance job specific to flame graphs that could cover the master/experiment C++/Go scenarios, if that sounds OK.

@jtattermusch (Contributor) commented:
Yeah, that sounds alright. I think it makes sense to run these benchmarks where the sweep suite is running; we already have enough running on the perf workers that host the multi-language configurations.

@apolcyn force-pushed the run_perf_on_benchmarks branch from d36c7a8 to dbce6e9 on December 20, 2016 at 21:24
@apolcyn force-pushed the run_perf_on_benchmarks branch from dbce6e9 to 66c6782 on December 20, 2016 at 21:30
@apolcyn (Contributor, Author) commented Dec 20, 2016

The latest updates add a flame graph directory name option to the driver (so that multiple run_performance_tests.py invocations can run within one Jenkins job), plus a rebase and squash.

PTAL, this should be ready for review.

For the new Jenkins job: I added the https://grpc-testing.appspot.com/job/gRPC_performance_flamegraphs/ job for C++ and Go flame graphs, which could be put into the regular loop. The latest run there (https://grpc-testing.appspot.com/job/gRPC_performance_flamegraphs/10/) has separate profiles for C++ and Go.
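With the per-run directory option, running the driver twice in one job might look roughly like this (the --flame_graph_reports flag name and the -l language flag are assumptions about the driver's interface; other flags omitted):

    # Hypothetical sketch of the new job: two driver runs, each writing
    # flame graphs to its own directory so the reports don't collide.
    tools/run_tests/run_performance_tests.py -l c++ \
        --flame_graph_reports cpp_flamegraph_reports \
        --perf_args "$PERF_ARGS"
    tools/run_tests/run_performance_tests.py -l go \
        --flame_graph_reports go_flamegraph_reports \
        --perf_args "$PERF_ARGS"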

@ctiller (Member) commented Jan 11, 2017

Is this still active?

@apolcyn (Contributor, Author) commented Jan 11, 2017

Yes, this should be ready. The Jenkins job created alongside it is still alive at https://grpc-testing.appspot.com/job/gRPC_performance_flamegraphs/; the most recent run there has flame graphs for C++ and Go attached.

(By the way, the new job calls the new tools/jenkins/run_performance_flamegraphs.sh script added in this PR.)

@ctiller (Member) commented Jan 14, 2017

This is amazing. Please merge and enable it for sweeps.

@apolcyn (Contributor, Author) commented Jan 14, 2017

The failed tests were under the Bazel full job; ignoring them.

Log of the failures: https://grpc-testing.appspot.com/job/gRPC_pull_requests_Bazel_full/38/console

@apolcyn merged commit d9f5b23 into grpc:master on Jan 14, 2017
@apolcyn (Contributor, Author) commented Jan 14, 2017

@ctiller, done; these reports should start showing up in the sweep job.

@apolcyn (Contributor, Author) commented Jan 17, 2017

Heads up: these aren't running on sweeps yet; the worker init scripts needed to be re-run.

lock bot locked as resolved and limited conversation to collaborators on Jan 25, 2019