Make it clear that qbusbridge is not only for Kafka
BewareMyPower committed Aug 28, 2020
1 parent 098ce82 commit 186c760
Showing 5 changed files with 115 additions and 2 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -13,6 +13,7 @@ qbus.go

*.so
*.ini
!config/*.ini

php/src
python/src
19 changes: 18 additions & 1 deletion README.md
@@ -1,6 +1,23 @@

## Introduction [中文](https://github.com/Qihoo360/kafkabridge/blob/master/README_ZH.md)
* Qbusbridge is built on top of [librdkafka](https://github.com/edenhill/librdkafka) under the hood. It hides a mass of usage details, which makes QBus much simpler and easier to use than [librdkafka](https://github.com/edenhill/librdkafka): to produce and consume messages, users only need to invoke a few APIs, without having to understand much about Kafka.
* Qbusbridge is a client SDK for pub-sub messaging systems. Currently it supports:

* [Apache Kafka](http://kafka.apache.org/)
* [Apache Pulsar](https://pulsar.apache.org/)

Users can switch between the supported pub-sub messaging systems by changing the configuration file; a minimal config-loading sketch follows this list. The default config targets Kafka; to switch to Pulsar, change the config to:

```ini
mq.type=pulsar
# Other configs for pulsar...
```

See [config](config/) for more details.

> TODO: English config docs are currently missing.
* Qbusbridge-Kafka is built on top of [librdkafka](https://github.com/edenhill/librdkafka) under the hood. It hides a mass of usage details, which makes QBus much simpler and easier to use than [librdkafka](https://github.com/edenhill/librdkafka): to produce and consume messages, users only need to invoke a few APIs, without having to understand much about Kafka.

* The reliability of message producing, which is probably users' biggest concern, has been considerably improved.
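
A minimal sketch, assuming a simple flat `key=value` ini format, of how an application could read `mq.type` to decide which backend to initialize. `ReadIniValue` and the config path are hypothetical helpers for illustration, not the actual qbusbridge API:

```cpp
// Minimal sketch (not the actual qbusbridge API): read mq.type from an ini-style
// config file and pick the backend accordingly.
#include <fstream>
#include <iostream>
#include <string>

// Returns the value of `key` in a simple "key=value" ini file, or `fallback` if absent.
static std::string ReadIniValue(const std::string& path, const std::string& key,
                                const std::string& fallback) {
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        // Skip comments, section headers and empty lines.
        if (line.empty() || line[0] == ';' || line[0] == '#' || line[0] == '[') continue;
        auto pos = line.find('=');
        if (pos != std::string::npos && line.substr(0, pos) == key) {
            return line.substr(pos + 1);
        }
    }
    return fallback;
}

int main() {
    // "kafka" is assumed to be the default backend when mq.type is not set.
    const std::string mq_type = ReadIniValue("config/pulsar.ini", "mq.type", "kafka");
    if (mq_type == "pulsar") {
        std::cout << "Initializing the Pulsar backend...\n";
    } else {
        std::cout << "Initializing the Kafka backend...\n";
    }
    return 0;
}
```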

## Features
17 changes: 16 additions & 1 deletion README_ZH.md
@@ -1,5 +1,20 @@
## Introduction [English](https://github.com/Qihoo360/kafkabridge/blob/master/README.md)
* qbusbridge is built on top of [librdkafka](https://github.com/edenhill/librdkafka). Compared with it, qbusbridge hides a large number of usage details and is simple and easy to use: users do not need to know much about the Kafka system and only need to call a handful of APIs to produce and consume messages.
* Qbusbridge is a client SDK for pub-sub messaging systems. Currently it supports:

* [Apache Kafka](http://kafka.apache.org/)
* [Apache Pulsar](https://pulsar.apache.org/)

Users can switch to any supported pub-sub messaging system by modifying the configuration. The default configuration targets Kafka; to switch to Pulsar, change the config to:

```ini
mq.type=pulsar
# Other configs for Pulsar...
```

See [config](config/) for more details.

* Qbusbridge-Kafka is built on top of [librdkafka](https://github.com/edenhill/librdkafka). Compared with it, Qbusbridge-Kafka hides a large number of usage details and is simple and easy to use: users do not need to know much about the Kafka system and only need to call a handful of APIs to produce and consume messages.

* The reliability of message producing, which users care about most, has been further improved.

## Features
19 changes: 19 additions & 0 deletions config/kafka.ini
@@ -0,0 +1,19 @@
[global]
; Consumer group
group.id=my-sub

[topic]
; If the group has no initial offset, or the offset is out of range, the offset is automatically reset to the oldest or the latest
; Default: earliest (reset to the oldest); alternative: latest (reset to the latest)
auto.offset.reset=earliest
; Message timeout, 3 seconds by default
message.timeout.ms=3000


[sdk]
; Kafka cluster address, a comma-separated list of ip:port
broker.list=localhost:9092
; Log level
log.level=info
; When the producer calls uninit(), unsent messages are flushed; this is the maximum time to wait for that flush, 3 seconds by default
flush.timeout.ms=3000
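
For reference, a hedged sketch of how the keys above roughly correspond to [librdkafka](https://github.com/edenhill/librdkafka)'s C++ configuration API. How qbusbridge forwards them internally is not shown here, and mapping `broker.list` to `metadata.broker.list` is an assumption:

```cpp
// Hedged sketch: the [global]/[topic]/[sdk] keys expressed with librdkafka's C++ API.
#include <iostream>
#include <string>
#include <librdkafka/rdkafkacpp.h>

int main() {
    std::string errstr;

    RdKafka::Conf* conf = RdKafka::Conf::create(RdKafka::Conf::CONF_GLOBAL);
    RdKafka::Conf* tconf = RdKafka::Conf::create(RdKafka::Conf::CONF_TOPIC);

    // [global] group.id
    conf->set("group.id", "my-sub", errstr);
    // [sdk] broker.list -- assumed to map to librdkafka's broker list
    conf->set("metadata.broker.list", "localhost:9092", errstr);

    // [topic] keys are topic-level librdkafka properties
    // (in real code the topic conf would typically be attached via the global
    // "default_topic_conf" property).
    tconf->set("auto.offset.reset", "earliest", errstr);
    tconf->set("message.timeout.ms", "3000", errstr);

    RdKafka::Producer* producer = RdKafka::Producer::create(conf, errstr);
    if (!producer) {
        std::cerr << "Failed to create producer: " << errstr << std::endl;
        return 1;
    }

    // flush.timeout.ms: wait up to 3000 ms for outstanding messages on shutdown,
    // analogous to what the SDK does when uninit() is called.
    producer->flush(3000);

    delete producer;
    delete tconf;
    delete conf;
    return 0;
}
```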
61 changes: 61 additions & 0 deletions config/pulsar.ini
@@ -0,0 +1,61 @@
; [Required] Use Pulsar as the message queue
mq.type=pulsar

; [Required] ip:port of the Pulsar broker
broker=localhost:6650
; Whether the topic is persistent
persistent=true
; [Required] Name of the tenant the topic belongs to
tenant=public
; [Required] Name of the namespace the topic belongs to
namespace=default
; [Required for consumers] Subscription name
subscription=my-sub
; Log level, info by default; options: debug/info/warn/error; if misconfigured, the default value is used
log.level=info

[producer]
; JSON Web Token, an empty string by default, which means no authentication
auth.jwt=
; Send timeout: if a message has entered the internal queue but no ack is received from the broker within this limit, a Timeout error occurs.
; Unit: ms, default: 30000, i.e. 30 s. If you customize the timeout, make sure it is greater than batching.max.publish.delay.ms
send.timeout.ms=30000
; Number of resends when a message times out (i.e. fails to be delivered within send.timeout.ms), default: 0
; Enabling this retry may cause messages to be sent out of order or duplicated, but it delivers messages on a best-effort basis; for example,
; when a timeout is caused by a broker that is unavailable and cannot recover for a while, the message can be resent to an available broker
timeout.retry.count=0
; When the queue is full, produce fails immediately, but messages in the internal queue may still be batching (see the batch-related parameters),
; so the producer waits several milliseconds and retries; count is the maximum number of retries, and interval.ms is the interval between two retries
queue.full.retry.count=5
queue.full.retry.interval.ms=10
; Maximum number of pending messages within one partition
max.pending.messages=1000
; Whether batching is enabled, true by default. If enabled, messages are not sent immediately but accumulated until:
; 1. the number of messages reaches the limit, or
; 2. the total size of messages in bytes reaches the limit, or
; 3. the time since the first message entered the batch exceeds the limit.
; These correspond to the following 3 configs, so if this is set to false, the following 3 configs take no effect.
batching.enable=true
; Maximum number of messages in a batch, 1000 by default
batching.max.messages=1000
; Maximum size of a batch, unit: KiB, 128 by default
batching.max.allowed.size.kbytes=128
; Maximum time a message waits to be sent in a batch, unit: ms, 10 ms by default
batching.max.publish.delay.ms=10
; Compression type, default: None, i.e. no compression. It can be changed to LZ4 or ZLib; if misconfigured, the default value is used
compression.type=None

[consumer]
; JSON Web Token, an empty string by default, which means no authentication
auth.jwt=
; Timeout of each receive in the consumer thread loop, unit: ms
poll.timeout.ms=100
; If set to true, users need to override deliveryMsgForCommitOffset and call commitOffset to commit consumed messages; otherwise the messages will be consumed again on the next subscription
manual.ack=false
; Whether to wait for the latest message to be consumed when the consumer calls stop
force.stop=false
; If the subscription does not exist when the consumer is created, a new subscription is created, and its consumers start from the latest or the earliest message
; Default: latest (consume from the latest); alternative: earliest (consume from the earliest)
initial.position=latest
; Interval of acknowledging messages in batches (similar to committing offsets in Kafka), unit: ms, 200 ms by default
manual.commit.time.ms=200
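
For reference, a hedged sketch of how the `[producer]` and `[consumer]` keys above roughly correspond to the Apache Pulsar C++ client configuration API. qbusbridge's internal mapping is not shown here and may differ; the topic name is only an example:

```cpp
// Hedged sketch: expressing the pulsar.ini keys with the Pulsar C++ client API.
#include <iostream>
#include <pulsar/Client.h>

int main() {
    // broker; auth.jwt is empty by default (no authentication), so setAuth is commented out.
    pulsar::ClientConfiguration clientConf;
    // clientConf.setAuth(pulsar::AuthToken::createWithToken("<jwt>"));
    pulsar::Client client("pulsar://localhost:6650", clientConf);

    // persistent + tenant + namespace + an example topic name
    const std::string topic = "persistent://public/default/example-topic";

    // [producer] section
    pulsar::ProducerConfiguration producerConf;
    producerConf.setSendTimeout(30000);                         // send.timeout.ms
    producerConf.setMaxPendingMessages(1000);                   // max.pending.messages
    producerConf.setBatchingEnabled(true);                      // batching.enable
    producerConf.setBatchingMaxMessages(1000);                  // batching.max.messages
    producerConf.setBatchingMaxAllowedSizeInBytes(128 * 1024);  // batching.max.allowed.size.kbytes
    producerConf.setBatchingMaxPublishDelayMs(10);              // batching.max.publish.delay.ms
    producerConf.setCompressionType(pulsar::CompressionNone);   // compression.type

    pulsar::Producer producer;
    if (client.createProducer(topic, producerConf, producer) != pulsar::ResultOk) {
        std::cerr << "Failed to create producer" << std::endl;
        return 1;
    }

    // [consumer] section
    pulsar::ConsumerConfiguration consumerConf;
    consumerConf.setSubscriptionInitialPosition(pulsar::InitialPositionLatest);  // initial.position

    pulsar::Consumer consumer;
    if (client.subscribe(topic, "my-sub", consumerConf, consumer) != pulsar::ResultOk) {  // subscription
        std::cerr << "Failed to subscribe" << std::endl;
        return 1;
    }

    client.close();
    return 0;
}
```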
