Proposal: PrepareMsg API to more easily parallelize serialization #2432
Description
If the time to serialize and compress a message is less than the time to transmit it, a single stream can saturate a network connection, because in grpc-go the N+1th message is serialized and compressed while the Nth message is being transmitted. However, compression is typically a slow process that produces a small payload which transmits quickly, so with compression enabled the encoding step, not transmission, usually becomes the bottleneck. References: #1879, #2355.
We would like to separate the encoding and transmission steps so that users are able to perform multiple encodes simultaneously and take advantage of system parallelism.
Proposed API:
```go
package grpc

type PreparedMsg struct { /* Nothing exported */ }

// Encode prepares msg into p (marshals and compresses) for use with s.SendMsg.
// msg may not be modified until after SendMsg is called with p. p is not valid
// if a non-nil error is returned.
func (p *PreparedMsg) Encode(s Stream, msg interface{}) error { ... }
```
If a `PreparedMsg` is passed to `SendMsg`, `SendMsg` will use the `PreparedMsg`'s internal buffer to send the message on the stream, bypassing the marshal and compress steps.
This API, as opposed to `func NewPreparedMsg(msg interface{}) PreparedMsg`, would allow users to reuse a `PreparedMsg`, which may save allocations of internal buffers.