Commit

add some docs (cloudwego#87)
SinnerA authored Aug 27, 2021
1 parent ace612b commit 78908c5
Showing 14 changed files with 387 additions and 23 deletions.
10 changes: 5 additions & 5 deletions README.md
@@ -69,13 +69,11 @@ Kitex, ByteDance's internal Golang microservice RPC framework, featuring **high performance**,

- [Message Types (PingPong, Oneway, Streaming)](docs/guide/basic-features/message_type_cn.md)

- [Supported Protocols (Thrift, Kitex Protobuf, gRPC)](docs/guide/basic-features/protocols_cn.md)

- [Application-Layer Transport Protocol TTHeader](docs/guide/basic-features/ttheader_cn.md)
- [Serialization Protocols](docs/guide/basic-features/serialization_protocol_cn.md)

- [Visit Directly](docs/guide/basic-features/visit_directly_cn.md)

- [Connection Pool](docs/guide/basic-features/connpool_cn.md)
- [Connection Pool](docs/guide/basic-features/connection_pool_cn.md)

- [Timeout Control](docs/guide/basic-features/timeout_cn.md)

@@ -123,7 +121,7 @@ Kitex, ByteDance's internal Golang microservice RPC framework, featuring **high performance**,

- [Service Registry Extension](docs/guide/extension/registry_cn.md)

- [Service Discovery Extension](docs/guide/extension/discovery_cn.md)
- [Service Discovery Extension](docs/guide/extension/service_discovery_cn.md)

- [Load Balancing Extension](docs/guide/extension/loadbalance_cn.md)

@@ -143,6 +141,8 @@ Kitex, ByteDance's internal Golang microservice RPC framework, featuring **high performance**,

- **Reference**

  - [Application-Layer Transport Protocol TTHeader](docs/reference/transport_protocol_ttheader_cn.md)

  - [Exception Description](docs/reference/exception_cn.md)

  - [Version Management](docs/reference/version_cn.md)
67 changes: 67 additions & 0 deletions docs/guide/basic-features/connection_pool.md
@@ -0,0 +1,67 @@
# Connection Pool

Kitex provides both a short connection pool and a long connection pool for different business scenarios.

## Short Connection Pool

Without any settings, Kitex uses the short connection pool by default.

## Long Connection Pool

To enable the long connection pool, initialize the client with the following Option:

```go
client.WithLongConnection(connpool.IdleConfig{
	MaxIdlePerAddress: 10,
	MaxIdleGlobal:     1000,
	MaxIdleTimeout:    60 * time.Second,
})
```

- `MaxIdlePerAddress`: the maximum number of idle connections per downstream instance
- `MaxIdleGlobal`: the global maximum number of idle connections
- `MaxIdleTimeout`: the maximum idle duration of a connection; connections idle longer than this are closed (minimum value is 3s, default value is 30s)
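
A complete client initialization might look like the sketch below. The generated client package `demoservice` and the service name are placeholders, and the import paths assume a typical Kitex project layout.

```go
import (
	"time"

	"github.com/cloudwego/kitex/client"
	"github.com/cloudwego/kitex/pkg/connpool"
)

// newDemoClient builds a client with the long connection pool enabled.
// demoservice stands for your kitex-generated client package.
func newDemoClient() (demoservice.Client, error) {
	return demoservice.NewClient(
		"destServiceName",
		client.WithLongConnection(connpool.IdleConfig{
			MaxIdlePerAddress: 10,               // at most 10 idle connections per downstream address
			MaxIdleGlobal:     1000,             // at most 1000 idle connections in total
			MaxIdleTimeout:    60 * time.Second, // close connections that stay idle longer than 60s
		}),
	)
}
```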

## Internal Implementation

Each downstream address corresponds to its own connection pool. The pool is a ring of connections whose size is `MaxIdlePerAddress`.

When fetching a connection for a downstream address, the pool proceeds as follows:
1. Try to fetch a connection from the ring; if that fails (no idle connection remains), establish a new connection. In other words, the total number of connections may exceed `MaxIdlePerAddress`
2. If fetching succeeds, check whether the idle time of the connection (since it was last put back into the pool) has exceeded `MaxIdleTimeout`; if so, close it and create a new connection

When a connection is returned after use, the pool proceeds as follows:

1. Check whether the connection is still healthy; if not, close it directly
2. Check whether the number of idle connections exceeds `MaxIdleGlobal`; if so, close it directly
3. Check whether there is free space left in the ring of the target connection pool; if so, put the connection back, otherwise close it directly
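
The simplified sketch below, which is not Kitex's actual implementation, illustrates the get/put flow described above; the type and field names (`ringPool`, `pooledConn`) are invented for illustration.

```go
import (
	"net"
	"sync"
	"time"

	"github.com/cloudwego/kitex/pkg/connpool"
)

// pooledConn is an idle connection plus the time it was last put back.
type pooledConn struct {
	conn     net.Conn
	idleFrom time.Time
}

// ringPool holds the idle connections of one downstream address.
type ringPool struct {
	mu    sync.Mutex
	conns []pooledConn // ring of idle connections, capacity = MaxIdlePerAddress
	cfg   connpool.IdleConfig
}

// Get fetches an idle connection or dials a new one.
func (p *ringPool) Get(dial func() (net.Conn, error)) (net.Conn, error) {
	p.mu.Lock()
	if len(p.conns) == 0 {
		p.mu.Unlock()
		return dial() // no idle connection left: establish a new one
	}
	pc := p.conns[len(p.conns)-1]
	p.conns = p.conns[:len(p.conns)-1]
	p.mu.Unlock()
	if time.Since(pc.idleFrom) > p.cfg.MaxIdleTimeout {
		pc.conn.Close() // idle for too long: discard and dial a fresh connection
		return dial()
	}
	return pc.conn, nil
}

// Put returns a connection to the pool, or closes it when the checks fail.
func (p *ringPool) Put(c net.Conn, healthy bool, globalIdle int) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if !healthy || globalIdle >= p.cfg.MaxIdleGlobal || len(p.conns) >= p.cfg.MaxIdlePerAddress {
		c.Close()
		return
	}
	p.conns = append(p.conns, pooledConn{conn: c, idleFrom: time.Now()})
}
```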

## Parameter Setting

Suggested parameter settings:
- `MaxIdlePerAddress`: the minimum value is 1, otherwise long connections degenerate into short connections
  - The value should be determined by the throughput of each downstream address. An approximate estimation formula is: `MaxIdlePerAddress = qps_per_dest_host*avg_response_time_sec`
  - For example, if each request takes 100ms and each downstream address receives 100 QPS, the suggested value is 10, because each connection handles 10 requests per second and 100 QPS needs 10 connections
  - In real scenarios, traffic fluctuation also needs to be considered. Note in particular that a connection is recycled if it is not used within `MaxIdleTimeout`
  - In short, setting this value too large or too small leads to a low reuse rate, and long connections degenerate into short connections
- `MaxIdleGlobal`: should be larger than `number of downstream targets * MaxIdlePerAddress`; the excess limits the total number of connections that are newly established when none can be fetched from the pool
  - Note: this value is of limited use; it is suggested to set it to a very large value. Subsequent versions may deprecate this parameter and provide a new interface
- `MaxIdleTimeout`: since the server cleans up inactive connections within 10min, the client also needs to clean up long-idle connections in time to avoid using invalid connections. This value must not exceed 10min when the downstream is also a Kitex service

## Status Monitoring

The connection pool defines a `Reporter` interface for status monitoring, such as the reuse rate of long connections.
Users should implement this interface themselves and inject it via `SetReporter`.

```go
// Reporter report status of connection pool.
type Reporter interface {
	ConnSucceed(poolType ConnectionPoolType, serviceName string, addr net.Addr)
	ConnFailed(poolType ConnectionPoolType, serviceName string, addr net.Addr)
	ReuseSucceed(poolType ConnectionPoolType, serviceName string, addr net.Addr)
}

// SetReporter set the common reporter of connection pool, that can only be set once.
func SetReporter(r Reporter)
```
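
A minimal sketch of a custom reporter is shown below; the metrics backend is left out, and the import path of the package that exposes `Reporter`/`SetReporter` is an assumption — check where they are defined in your Kitex version.

```go
import (
	"net"

	connpool "github.com/cloudwego/kitex/pkg/remote/connpool" // assumed location of Reporter and SetReporter
)

// poolReporter counts connection pool events; wire it to your metrics system as needed.
type poolReporter struct{}

func (poolReporter) ConnSucceed(t connpool.ConnectionPoolType, serviceName string, addr net.Addr) {
	// e.g. increment a "conn_succeed" counter tagged with serviceName and addr
}

func (poolReporter) ConnFailed(t connpool.ConnectionPoolType, serviceName string, addr net.Addr) {
	// e.g. increment a "conn_failed" counter
}

func (poolReporter) ReuseSucceed(t connpool.ConnectionPoolType, serviceName string, addr net.Addr) {
	// e.g. increment a "conn_reuse" counter to track the long connection reuse rate
}

func init() {
	connpool.SetReporter(poolReporter{}) // can only be set once
}
```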

@@ -42,15 +42,15 @@ client.WithLongConnection(connpool.IdleConfig{

### Suggested Parameter Settings

From the above, the considerations for choosing these parameters are as follows
Below are some suggestions for setting the parameters

- `MaxIdlePerAddress` indicates the number of pooled connections; the minimum is 1, otherwise long connections degenerate into short connections
  - The concrete value depends on the throughput of each target address; the approximate estimation formula is: `MaxIdlePerAddress = qps_per_dest_host*avg_response_time_sec`
  - For example, if each request takes 100ms and each downstream address receives 100 qps, set the value to 10 (100*0.1), because each connection handles 10 requests per second and 100 qps needs 10 connections
  - In real scenarios, traffic fluctuation also needs to be considered. Note in particular that, because of MaxIdleTimeout, a connection that is not used within MaxIdleTimeout is recycled
  - Summary: setting this value too large or too small leads to a low reuse rate, and long connections degenerate into short connections
- `MaxIdleGlobal` indicates that the total number of idle connections should be larger than `total number of downstream targets * MaxIdlePerAddress`; the excess limits the total number of connections that are newly created when none can be fetched from the pool
  - Remark: this value is of limited use; it is suggested to set it to a large value, and subsequent versions may deprecate this parameter and provide a new interface
  - For example, if each request takes 100ms and each downstream address receives 100 QPS, the suggested value is 10, because each connection handles 10 requests per second and 100 QPS needs 10 connections
  - In real scenarios, traffic fluctuation also needs to be considered. Note in particular that a connection that is not used within MaxIdleTimeout is recycled
  - In short, setting this value too large or too small leads to a low reuse rate, and long connections degenerate into short connections
- `MaxIdleGlobal` indicates that the total number of idle connections should be larger than `total number of downstream targets*MaxIdlePerAddress`; the excess limits the total number of connections that are newly created when none can be fetched from the pool
  - Note: this value is of limited use; it is suggested to set it to a large value, and subsequent versions may deprecate this parameter and provide a new interface
- `MaxIdleTimeout` indicates the idle time of a connection; since the server cleans up inactive connections within 10min, the client also needs to clean up long-idle connections in time to avoid using invalid connections. This value must not exceed 10min when the downstream is also Kitex

## Status Monitoring
61 changes: 61 additions & 0 deletions docs/guide/basic-features/serialization_protocol.md
@@ -0,0 +1,61 @@
# Serialization Protocol

Kitex supports two serialization protocols: Thrift and Protobuf.

## Thrift

Kitex only supports the Thrift [Binary](https://github.com/apache/thrift/blob/master/doc/specs/thrift-binary-protocol.md) protocol codec; [Compact](https://github.com/apache/thrift/blob/master/doc/specs/thrift-compact-protocol.md) is currently not supported.

To use Thrift encoding, generate code with the kitex command-line tool:

Client side:

```
kitex -type thrift ${service_name} ${idl_name}.thrift
```

Server side:

```
kitex -type thrift -service ${service_name} ${idl_name}.thrift
```

We have optimized Thrift's Binary protocol codec. For details of the optimization, please refer to the "Reference - High Performance Thrift Codec" chapter. If you want to disable these optimizations, add the `-no-fast-api` argument when generating code.

## Protobuf

### Protocol Type

Kitex supports two types of protobuf protocols:

1. **Custom message protocol**: regarded as Kitex Protobuf; code is generated in the same way as for Thrift.
2. **gRPC protocol**: can communicate with gRPC directly and supports streaming.

If a streaming method is defined in the IDL, the gRPC protocol is adopted as the serialization protocol; otherwise Kitex Protobuf is adopted. If you want to use the gRPC protocol but have no streaming definitions in your proto file, you need to specify the transport protocol when initializing the client (no changes are needed on the server because protocol detection is supported):

```go
// Use WithTransportProtocol to specify the transport protocol
cli, err := service.NewClient(destService, client.WithTransportProtocol(transport.GRPC))
```
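
For completeness, a fuller sketch of the same configuration with imports; the generated client package `demoservice` and the service name are placeholders.

```go
import (
	"github.com/cloudwego/kitex/client"
	"github.com/cloudwego/kitex/transport"
)

// newGRPCClient forces the gRPC transport even though the IDL defines no streaming methods.
func newGRPCClient() (demoservice.Client, error) {
	return demoservice.NewClient("destServiceName", client.WithTransportProtocol(transport.GRPC))
}
```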

### Generated Code

Only proto3 is supported; for the syntax, see https://developers.google.com/protocol-buffers/docs/gotutorial.

Notice:

1. Unlike some other languages, generating Go code requires `go_package` to be defined in the proto file
2. `go_package` only needs to specify the package name, not the full path, e.g. go_package = "pbdemo"
3. Download the `protoc` binary in advance and put it in a directory on your $PATH

Client side:

```
kitex -type protobuf -I idl/ idl/${proto_name}.proto
```

Server side:

```
kitex -type protobuf -service ${service_name} -I idl/ idl/${proto_name}.proto
```
@@ -28,10 +28,10 @@ Kitex supports Thrift's [Binary](https://github.com/apache/thrift/blob/master

Kitex supports two protobuf protocols:

1. A custom message protocol, understood as Kitex Protobuf; usage is the same as with thrift
1. A custom message protocol, which can be understood as Kitex Protobuf; usage is the same as with thrift
2. The gRPC protocol, which can interoperate with gRPC and supports streaming calls

If a streaming method is defined in the idl, the gRPC protocol is used by default; otherwise Kitex Protobuf is used by default. If there is no streaming method but you want to specify the gRPC protocol, configure the client at initialization as follows (the server supports protocol detection and needs no configuration):
If a streaming method is defined in the IDL file, the gRPC protocol is used; otherwise Kitex Protobuf is used. If there is no streaming method but you still want to specify the gRPC protocol, configure the client at initialization as follows (the server supports protocol detection and needs no configuration):

```go
// Use WithTransportProtocol to specify the transport
@@ -48,7 +48,7 @@ cli, err := service.NewClient(destService, client.WithTransportProtocol(transpor
2. go_package is defined like thrift's namespace; you do not need to write the full path, only the package name, equivalent to a thrift namespace, e.g. go_package = "pbdemo"
3. Download the protoc binary in advance and put it in a directory on your $PATH

Specify protobuf when generating code:
Specify the protobuf protocol when generating code:

- Client

10 changes: 10 additions & 0 deletions docs/guide/basic-features/visit_directly.md
@@ -0,0 +1,10 @@
# Visit Directly

If the downstream address is already determined, you can send requests to it directly without going through service discovery.

The client can specify the downstream address in two forms:

- Using the `WithHostPort` Option, which supports two kinds of parameters:
  - A normal IP address in the form `host:port`; `IPv6` is supported
  - A sock file address, communicating over UDS (Unix Domain Socket)
- Using the `WithURL` Option; the parameter must be a valid HTTP URL
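
A minimal sketch of both forms is shown below; it assumes the options live in Kitex's `callopt` package and uses a hypothetical generated client (`demoservice.Client`, `DemoMethod`, `demo.Request`) — adjust to your generated code.

```go
import (
	"context"

	"github.com/cloudwego/kitex/client/callopt"
)

func callDirectly(ctx context.Context, cli demoservice.Client, req *demo.Request) {
	// Visit a fixed ip:port (or a sock file path) directly, skipping service discovery.
	respByHost, err := cli.DemoMethod(ctx, req, callopt.WithHostPort("127.0.0.1:8888"))

	// Or visit via an HTTP URL.
	respByURL, err2 := cli.DemoMethod(ctx, req, callopt.WithURL("http://example.com:8888"))

	_, _, _, _ = respByHost, err, respByURL, err2
}
```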
2 changes: 1 addition & 1 deletion docs/guide/basic-features/visit_directly_cn.md
@@ -1,6 +1,6 @@
# Visit Directly

When the downstream address is known, you can choose direct access without going through service discovery.
When you clearly want to access a specific downstream address, you can choose direct access without going through service discovery.

The client can specify the downstream address in two forms:

37 changes: 37 additions & 0 deletions docs/guide/extension/monitoring.md
@@ -0,0 +1,37 @@
# Monitoring Extension

[kitex-contrib](https://github.com/kitex-contrib/monitor-prometheus) provides a Prometheus monitoring extension.

If you need more detailed metrics, such as message packet size, or want to use another data source, such as InfluxDB, you can implement the `Tracer` interface according to your requirements and inject it via the `WithTracer` Option.

```go
// Tracer is executed at the start and finish of an RPC.
type Tracer interface {
	Start(ctx context.Context) context.Context
	Finish(ctx context.Context)
}
```

RPCInfo can be obtained from ctx, and from RPCInfo you can further obtain the request time cost, packet size, the error information returned by the request, and so on, for example:

```go
type clientTracer struct {
	// contains the entities used to record metrics
}

// Start records the beginning of an RPC invocation.
func (c *clientTracer) Start(ctx context.Context) context.Context {
	// do nothing
	return ctx
}

// Finish records after receiving the response of the server.
func (c *clientTracer) Finish(ctx context.Context) {
	ri := rpcinfo.GetRPCInfo(ctx)
	rpcStart := ri.Stats().GetEvent(stats.RPCStart)
	rpcFinish := ri.Stats().GetEvent(stats.RPCFinish)
	cost := rpcFinish.Time().Sub(rpcStart.Time())
	// TODO: record the cost of the request
	_ = cost
}
```
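
The tracer takes effect only once it is injected when creating the client. A minimal sketch, where the generated client package `demoservice` is a placeholder:

```go
import (
	"github.com/cloudwego/kitex/client"
)

// newTracedClient injects the custom tracer so Start/Finish run around every RPC.
func newTracedClient() (demoservice.Client, error) {
	return demoservice.NewClient("destServiceName", client.WithTracer(&clientTracer{}))
}
```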

4 changes: 2 additions & 2 deletions docs/guide/extension/monitoring_cn.md
@@ -1,6 +1,6 @@
# Monitoring Extension

If you need more detailed metrics, such as packet size, or want to switch to another data source, such as influxDB, you can implement the `Trace` interface according to your requirements and inject it via the `WithTracer` Option
If you need more detailed metrics, such as packet size, or want to switch to another data source, such as influxDB, you can implement the `Trace` interface according to your own requirements and inject it via the `WithTracer` Option

```go
// Tracer is executed at the start and finish of an RPC.
@@ -10,7 +10,7 @@ type Tracer interface {
}
```

RPCInfo can be obtained from ctx, from which you can get the request time cost, packet size, error information returned by the request, and so on, for example:
RPCInfo can be obtained from ctx, and from RPCInfo you can further obtain the request time cost, packet size, error information returned by the request, and so on, for example:

```go
type clientTracer struct {
62 changes: 62 additions & 0 deletions docs/guide/extension/service_discovery.md
@@ -0,0 +1,62 @@
# Service Discovery Extension

[kitex-contrib](https://github.com/kitex-contrib/resolver-dns) provides a DNS service discovery extension.

If you want to adopt another service discovery mechanism, such as ETCD, you can implement the `Resolver` interface, and clients can inject it via the `WithResolver` Option.

## Interface Definition

The interface is defined in `pkg/discovery/discovery.go` as follows:

```go
type Resolver interface {
	Target(ctx context.Context, target rpcinfo.EndpointInfo) string
	Resolve(ctx context.Context, key string) (Result, error)
	Diff(key string, prev, next Result) (Change, bool)
	Name() string
}

type Result struct {
	Cacheable bool       // if can be cached
	CacheKey  string     // the unique key of cached result
	Instances []Instance // the result of service discovery
}

// the diff result
type Change struct {
	Result  Result
	Added   []Instance
	Updated []Instance
	Removed []Instance
}
```

Details of the `Resolver` interface:

- `Resolve`: the core method of `Resolver`; it obtains the service discovery result from the target key
- `Target`: resolves, from the peer `EndpointInfo` provided by Kitex, the unique target that `Resolve` will use; this target is also used as the unique key of the cache
- `Diff`: compares the discovery result with the previous one; the differences are used to notify other components, such as the [loadbalancer](https://github.com/cloudwego/kitex/blob/develop/docs/guide/extension/loadbalance.md), circuit breaker, etc.
- `Name`: specifies a unique name for the `Resolver`; Kitex uses it to cache and reuse the `Resolver`

## Usage Example

You need to implement the `Resolver` interface and use it via an Option:

```go
import (
	"xx/kitex/client"
)

func main() {
	opt := client.WithResolver(YOUR_RESOLVER)

	// new client
	xxx.NewClient("p.s.m", opt)
}
```
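
For reference, here is a minimal sketch of a static `Resolver` that always returns a fixed address list; it assumes helpers such as `discovery.NewInstance` and `discovery.DefaultDiff` are available in your Kitex version, and the weight of 10 is an arbitrary choice.

```go
import (
	"context"

	"github.com/cloudwego/kitex/pkg/discovery"
	"github.com/cloudwego/kitex/pkg/rpcinfo"
)

// staticResolver always resolves to a fixed list of addresses.
type staticResolver struct {
	addrs []string
}

func (r *staticResolver) Target(ctx context.Context, target rpcinfo.EndpointInfo) string {
	return target.ServiceName() // use the downstream service name as the cache key
}

func (r *staticResolver) Resolve(ctx context.Context, key string) (discovery.Result, error) {
	instances := make([]discovery.Instance, 0, len(r.addrs))
	for _, addr := range r.addrs {
		instances = append(instances, discovery.NewInstance("tcp", addr, 10, nil))
	}
	return discovery.Result{Cacheable: true, CacheKey: key, Instances: instances}, nil
}

func (r *staticResolver) Diff(key string, prev, next discovery.Result) (discovery.Change, bool) {
	return discovery.DefaultDiff(key, prev, next) // reuse the default diff helper
}

func (r *staticResolver) Name() string {
	return "static"
}
```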

## Attention

To improve performance, Kitex reuses the `Resolver`, so the `Resolver` implementation must be concurrency-safe.


@@ -1,6 +1,6 @@
# Service Discovery Extension

[kitex-contrib](https://github.com/kitex-contrib) provides a dns service discovery extension.
[kitex-contrib](https://github.com/kitex-contrib) provides a DNS service discovery extension.

If you need to switch to another service discovery mechanism, such as ETCD, you can implement the `Resolver` interface according to your requirements, and the client injects it via the `WithResolver` Option.

@@ -32,12 +32,12 @@ type Change struct {
}
```

Resolver is defined as follows:
The `Resolver` interface is defined as follows:

- The `Target` method resolves, from the peer EndpointInfo provided by Kitex, the unique target that `Resolve` needs to use; this target also serves as the unique cache key
- The `Resolve` method is the core method of Resolver; it obtains the service discovery result `Result` we need from the target key
- The `Diff` method computes the changes between two rounds of service discovery; the result is typically used to notify other components, such as the [loadbalancer](./loadbalance_cn.md) and circuit breaker, and it returns the change set `Change`
- The `Name` method specifies a unique name for the Resolver; Kitex also uses it to cache and reuse the Resolver
- `Resolve`: the core method of `Resolver`; it obtains the service discovery result `Result` we need from the target key
- `Target`: resolves, from the peer EndpointInfo provided by Kitex, the unique target that `Resolve` needs to use; this target also serves as the unique cache key
- `Diff`: computes the changes between two rounds of service discovery; the result is typically used to notify other components, such as the [loadbalancer](./loadbalance_cn.md) and circuit breaker, and it returns the change set `Change`
- `Name`: specifies a unique name for the Resolver; Kitex also uses it to cache and reuse the Resolver

## Custom Resolver
