Integrate StreamManager with run_sweep() #6285
Conversation
* Added a feature flag defaulting to True.
* Added logic to temporarily rewrite `processor_ids` to `processor_id` prior to full deprecation of `processor_ids` in `Engine.run_sweep()`.
@@ -798,22 +799,13 @@ def test_list_reservations_time_filter_behavior(list_reservations):

@mock.patch('cirq_google.engine.engine_client.EngineClient', autospec=True)
def test_run_sweep_params(client):
I think having to change these tests because of a feature flag is a bit of a code smell, especially since they would have to be changed back if we flip the default value of the flag. You might consider instead putting the stream configuration on EngineContext, which has other things like the protocol version and serializer that can be changed. Then for tests we can exercise both the streaming and non-streaming execution by creating an appropriate context, and those tests will continue to work even if we change the default value of the flag.
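A minimal sketch of that idea, assuming a hypothetical `enable_streaming` switch on the context; the `FakeEngineContext` class and attribute name below are stand-ins, not the actual `cirq_google` API:

```python
# Illustrative sketch only: the stream/unary choice lives on a context object,
# so callers and tests pick the path explicitly instead of relying on a
# module-level flag. `enable_streaming` is a hypothetical attribute name.
import dataclasses


@dataclasses.dataclass
class FakeEngineContext:
    """Stand-in for EngineContext carrying a streaming switch."""

    enable_streaming: bool = True


def run_sweep(context: FakeEngineContext) -> str:
    # The real run_sweep() would dispatch to the stream RPC or to the unary
    # create/get calls based on the context it was given.
    return 'stream' if context.enable_streaming else 'unary'


assert run_sweep(FakeEngineContext(enable_streaming=True)) == 'stream'
assert run_sweep(FakeEngineContext(enable_streaming=False)) == 'unary'
```

With the switch on the context, tests that exercise either path keep passing even if the default value flips later.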
Thanks for the catch! Yeah, I agree this is a code smell. Using EngineContext sounds good to me as long as it's not user-facing. Is this true for this class?
EngineContext is not typically used directly by end-users. It's used to share API state between the various resource classes.
@@ -59,6 +59,11 @@
_R = TypeVar('_R')

# Feature gate for making Quantum Engine requests using the stream RPC.
# TODO(#5996) Remove the flag once the feature is stable.
_STREAM_FEATURE_FLAG = True
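For orientation, a simplified sketch (not the actual implementation; the client parameter and method names are stand-ins) of how a module-level gate like this selects the RPC path:

```python
# Simplified, illustrative sketch of a module-level feature gate selecting the
# RPC path; `run_job_over_stream`, `create_job`, and `get_job_results` are
# stand-in method names here, not a faithful copy of EngineClient.
_STREAM_FEATURE_FLAG = True


async def run_job(client, job_request):
    if _STREAM_FEATURE_FLAG:
        # New path: a single request over the bidirectional stream.
        return await client.run_job_over_stream(job_request)
    # Legacy path: separate unary calls.
    await client.create_job(job_request)
    return await client.get_job_results(job_request)
```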
Mentioned this offline - can we make this False initially so that we can explicitly enable this in a followup PR from the feature code itself?
As we discussed offline, I'll leave this as True and open another PR to turn this off in case things break.
if _STREAM_FEATURE_FLAG:
    print(
        '\nRunning using the Quantum Engine stream RPC. To revert to unary RPCs, '
It seems annoying to see this for every call to run_sweep_async. Can we just notify our oncall/team and/or the cirq team to be aware of this?
I could see that, although given that I don't expect the feature flag to stay around for long (probably no longer than 3 months), I'm erring towards over-notifying and having the switch readily accessible when the user needs it most. WDYT?
I'll also add a TODO for removing the feature gate.
If you must notify, doing this at most once should be sufficient, e.g. when the engine or context is created. But I would prefer not to notify general users, since the "stream" vs "unary" feature is opaque to them.
Chatted offline - will remove the log message.
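For reference, a minimal sketch of the notify-at-most-once alternative discussed above, using a module-level guard (this is not what the PR landed with, since the message was removed entirely):

```python
# Illustrative sketch only: emit the stream-RPC notice at most once per process,
# e.g. when the engine or context is created, instead of on every run_sweep call.
import warnings

_STREAM_NOTICE_SHOWN = False


def _maybe_notify_stream_usage() -> None:
    global _STREAM_NOTICE_SHOWN
    if not _STREAM_NOTICE_SHOWN:
        warnings.warn(
            'Quantum Engine requests now use the stream RPC. '
            'Set the feature flag to False to revert to unary RPCs.'
        )
        _STREAM_NOTICE_SHOWN = True
```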
batched results or the result of a focused calibration.
stream_job_response_future: If set, the job is sent over the Quantum Engine
    QuantumRunStream bidirectional stream, and the future is completed when the Engine
Nit: is it important that the future come from a bidirectional stream, or is there a more general way to frame what this parameter does? E.g., is this just a future that resolves to the job result?
@maffoo Sorry I haven't fully addressed all of your comments just yet. Will add tests based on the EngineContext feature gate.
Added tests and ready for review.
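A rough sketch of what context-driven test coverage could look like, parametrizing over a hypothetical `enable_streaming` switch (the fake context and assertion below are stand-ins for the real mocked-client tests):

```python
# Illustrative sketch only: one parametrized test body covers both execution
# paths, so the tests stay valid regardless of the flag's default value.
import dataclasses

import pytest


@dataclasses.dataclass
class FakeContext:
    enable_streaming: bool


@pytest.mark.parametrize('enable_streaming', [True, False])
def test_run_sweep_uses_configured_path(enable_streaming):
    context = FakeContext(enable_streaming=enable_streaming)
    # The real tests would run a sweep through a mocked EngineClient and assert
    # on which RPC (stream vs. unary) was invoked.
    expected = 'stream' if enable_streaming else 'unary'
    assert ('stream' if context.enable_streaming else 'unary') == expected
```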
@@ -69,6 +69,9 @@ def __init__(
    context: 'engine_base.EngineContext',
    _job: Optional[quantum.QuantumJob] = None,
    result_type: ResultType = ResultType.Program,
    stream_job_response_future: Optional[
Copied from a prior review: Is it important that the future come from a bidirectional stream, or is there a more general way to frame what this parameter is used for? E.g., is this just a future that resolves to the job result? If so, there's probably not a reason to restrict it to the QuantumRunStream in the documentation.
Updated the docstring for the initializer and changed the field name to job_response_future.
    when the Engine responds over the stream. When a caller asks for the job result,
    EngineJob will await this future to wait for the result to be available from the
    Engine.
job_response_future: A future to be completed when the job result is available.
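A simplified sketch of the future-based result flow described in the docstring above; `StreamedJob` and its method names are illustrative, not the real `EngineJob`:

```python
# Illustrative sketch: the job holds a future that the streaming client
# completes when the Engine responds, and results() blocks on that future
# instead of polling the job state.
import concurrent.futures


class StreamedJob:
    def __init__(self, job_response_future: concurrent.futures.Future) -> None:
        self._job_response_future = job_response_future
        self._result = None

    def results(self):
        if self._result is None:
            # Wait for the Engine to respond over the stream with the job result.
            self._result = self._job_response_future.result()
        return self._result
```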
Nit: should this be job_result_future since it resolves to the job_result?
LGTM % the nit.
Codecov Report: All modified lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##            master    #6285    +/-   ##
=========================================
  Coverage    97.89%   97.89%
=========================================
  Files         1104     1104
  Lines        95897    96038    +141
=========================================
+ Hits         93876    94016    +140
- Misses        2021     2022      +1

☔ View full report in Codecov by Sentry.
This PR enables the streaming RPC for end users by default, while also providing a feature flag to turn it off.

It also adds logic to temporarily rewrite `processor_ids` to `processor_id` prior to full deprecation of `processor_ids` in `Engine.run_sweep()`, so that `EngineClient.run_job_over_stream()` can omit the `processor_ids` parameter.

@dstrain115 @wcourtney
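A minimal sketch of the temporary `processor_ids` to `processor_id` rewrite described above; the helper name and argument handling are illustrative, not the exact code in `Engine.run_sweep()`:

```python
# Illustrative sketch only: accept the deprecated `processor_ids` sequence but
# pass a single `processor_id` downstream, until `processor_ids` is removed.
from typing import Optional, Sequence


def _resolve_processor_id(
    processor_id: Optional[str], processor_ids: Optional[Sequence[str]]
) -> str:
    if processor_id:
        return processor_id
    if processor_ids:
        # Only the first entry is used; `processor_ids` is slated for removal.
        return processor_ids[0]
    raise ValueError('A processor_id must be specified.')
```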