Preface
The previous post covered deploying a TiKV cluster across several virtual machines with TiUP. That approach does not carry over to Docker: I tried spinning up four containers to simulate four machines, but tiup cluster deploy kept failing because systemctl could not be executed. The root of the problem is that Docker provides process isolation, not system isolation; every container shares the host's kernel and runs no init system of its own, so there is no systemctl to call. A standard multi-machine deployment therefore cannot be used to run TiKV in Docker.
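A quick way to see this for yourself (a minimal illustration, assuming a stock image such as ubuntu:22.04 can be pulled; the TiKV images are not needed for this): whatever command you pass to docker run becomes PID 1 inside the container, so there is no systemd/init process for systemctl to talk to.
docker run --rm ubuntu:22.04 cat /proc/1/comm
# prints "cat": the command itself is PID 1; no init system runs inside the container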
Fortunately, PingCAP provides a dedicated way to deploy TiKV on Docker. It publishes two images, pingcap/tikv and pingcap/pd, one per component. The official deployment guide is here: TiKV | Docker Deployment
However, the official instructions contain a pile of errors, and the cluster simply will not come up if you follow them as written. For example:
- No Docker bridge network is created and no IP is assigned to each container, so the IPs are unpredictable at docker run time.
- The port mappings collide: multiple containers cannot all be mapped to the same host port.
- All six instances mount their data under the same host directory, /data. This does not raise an error, but it defeats the point of running multiple instances.
This post updates the deployment procedure to fix the three problems above.
Instance Allocation
| Container | Container IP | Ports (host:container) | Service | Data mount path (host) |
|---|---|---|---|---|
| pd1 | 172.18.0.11 | 12379:2379, 12380:2380 | PD1 | ~/ysy/tikv/data/pd1 |
| pd2 | 172.18.0.12 | 22379:2379, 22380:2380 | PD2 | ~/ysy/tikv/data/pd2 |
| pd3 | 172.18.0.13 | 32379:2379, 32380:2380 | PD3 | ~/ysy/tikv/data/pd3 |
| tikv1 | 172.18.0.14 | 40160:20160 | TiKV1 | ~/ysy/tikv/data/tikv1 |
| tikv2 | 172.18.0.15 | 50160:20160 | TiKV2 | ~/ysy/tikv/data/tikv2 |
| tikv3 | 172.18.0.16 | 60160:20160 | TiKV3 | ~/ysy/tikv/data/tikv3 |
Environment Initialization
Create the mount directories (the working directory here is ~/ysy):
mkdir -p tikv/data/pd1
mkdir -p tikv/data/pd2
mkdir -p tikv/data/pd3
mkdir -p tikv/data/tikv1
mkdir -p tikv/data/tikv2
mkdir -p tikv/data/tikv3
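Equivalently, if your shell supports brace expansion (bash and zsh do), a single command creates all six directories:
mkdir -p tikv/data/{pd1,pd2,pd3,tikv1,tikv2,tikv3}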
Create the bridge network:
docker network create --subnet=172.18.0.0/16 tikv-network
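To confirm that the bridge exists and uses the expected subnet, inspect it:
docker network inspect tikv-network   # the IPAM section should show "Subnet": "172.18.0.0/16"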
Pull the images:
docker pull pingcap/tikv:latest
docker pull pingcap/pd:latest
Starting the Nodes
PD1:
docker run -d --name pd1 \
-p 12379:2379 \
-p 12380:2380 \
-v /etc/localtime:/etc/localtime:ro \
-v /home/zyh/ysy/tikv/data/pd1:/data \
--network tikv-network \
--ip 172.18.0.11 \
pingcap/pd:latest \
--name="pd1" \
--data-dir="/data/pd1" \
--client-urls="http://0.0.0.0:2379" \
--advertise-client-urls="http://172.18.0.11:2379" \
--peer-urls="http://0.0.0.0:2380" \
--advertise-peer-urls="http://172.18.0.11:2380" \
--initial-cluster="pd1=http://172.18.0.11:2380,pd2=http://172.18.0.12:2380,pd3=http://172.18.0.13:2380"
PD2:
docker run -d --name pd2 \
-p 22379:2379 \
-p 22380:2380 \
-v /etc/localtime:/etc/localtime:ro \
-v /home/zyh/ysy/tikv/data/pd2:/data \
--network tikv-network \
--ip 172.18.0.12 \
pingcap/pd:latest \
--name="pd2" \
--data-dir="/data/pd2" \
--client-urls="http://0.0.0.0:2379" \
--advertise-client-urls="http://172.18.0.12:2379" \
--peer-urls="http://0.0.0.0:2380" \
--advertise-peer-urls="http://172.18.0.12:2380" \
--initial-cluster="pd1=http://172.18.0.11:2380,pd2=http://172.18.0.12:2380,pd3=http://172.18.0.13:2380"
PD3:
docker run -d --name pd3 \
-p 32379:2379 \
-p 32380:2380 \
-v /etc/localtime:/etc/localtime:ro \
-v /home/zyh/ysy/tikv/data/pd3:/data \
--network tikv-network \
--ip 172.18.0.13 \
pingcap/pd:latest \
--name="pd3" \
--data-dir="/data/pd3" \
--client-urls="http://0.0.0.0:2379" \
--advertise-client-urls="http://172.18.0.13:2379" \
--peer-urls="http://0.0.0.0:2380" \
--advertise-peer-urls="http://172.18.0.13:2380" \
--initial-cluster="pd1=http://172.18.0.11:2380,pd2=http://172.18.0.12:2380,pd3=http://172.18.0.13:2380"
TiKV1:
docker run -d --name tikv1 \
-p 40160:20160 \
-v /etc/localtime:/etc/localtime:ro \
-v /home/zyh/ysy/tikv/data/tikv1:/data \
--network tikv-network \
--ip 172.18.0.14 \
pingcap/tikv:latest \
--addr="0.0.0.0:20160" \
--advertise-addr="172.18.0.14:20160" \
--data-dir="/data/tikv1" \
--pd="172.18.0.11:2379,172.18.0.12:2379,172.18.0.13:2379"
TiKV2:
docker run -d --name tikv2 \
-p 50160:20160 \
-v /etc/localtime:/etc/localtime:ro \
-v /home/zyh/ysy/tikv/data/tikv2:/data \
--network tikv-network \
--ip 172.18.0.15 \
pingcap/tikv:latest \
--addr="0.0.0.0:20160" \
--advertise-addr="172.18.0.15:20160" \
--data-dir="/data/tikv2" \
--pd="172.18.0.11:2379,172.18.0.12:2379,172.18.0.13:2379"
TiKV3:
docker run -d --name tikv3 \
-p 60160:20160 \
-v /etc/localtime:/etc/localtime:ro \
-v /home/zyh/ysy/tikv/data/tikv3:/data \
--network tikv-network \
--ip 172.18.0.16 \
pingcap/tikv:latest \
--addr="0.0.0.0:20160" \
--advertise-addr="172.18.0.16:20160" \
--data-dir="/data/tikv3" \
--pd="172.18.0.11:2379,172.18.0.12:2379,172.18.0.13:2379"
Verification
curl 172.18.0.11:2379/pd/api/v1/stores # verify the cluster: if every store reports Up, the deployment succeeded
The output:
zyh@zyh ~/ysy $ curl 172.18.0.11:2379/pd/api/v1/stores
{
"count": 3,
"stores": [
{
"store": {
"id": 4,
"address": "172.18.0.15:20160",
"version": "5.0.1",
"status_address": "127.0.0.1:20180",
"git_hash": "e26389a278116b2f61addfa9f15ca25ecf38bc80",
"start_timestamp": 1667380969,
"deploy_path": "/",
"last_heartbeat": 1667382689745138538,
"state_name": "Up"
},
"status": {
"capacity": "228.2GiB",
"available": "25.18GiB",
"used_size": "31.5MiB",
"leader_count": 0,
"leader_weight": 1,
"leader_score": 0,
"leader_size": 0,
"region_count": 1,
"region_weight": 1,
"region_score": 5010678.9452130785,
"region_size": 1,
"start_ts": "2022-11-02T09:22:49Z",
"last_heartbeat_ts": "2022-11-02T09:51:29.745138538Z",
"uptime": "28m40.745138538s"
}
},
{
"store": {
"id": 6,
"address": "172.18.0.16:20160",
"version": "5.0.1",
"status_address": "127.0.0.1:20180",
"git_hash": "e26389a278116b2f61addfa9f15ca25ecf38bc80",
"start_timestamp": 1667381000,
"deploy_path": "/",
"last_heartbeat": 1667382690721349602,
"state_name": "Up"
},
"status": {
"capacity": "228.2GiB",
"available": "25.18GiB",
"used_size": "31.5MiB",
"leader_count": 0,
"leader_weight": 1,
"leader_score": 0,
"leader_size": 0,
"region_count": 1,
"region_weight": 1,
"region_score": 4968170.338600396,
"region_size": 1,
"start_ts": "2022-11-02T09:23:20Z",
"last_heartbeat_ts": "2022-11-02T09:51:30.721349602Z",
"uptime": "28m10.721349602s"
}
},
{
"store": {
"id": 1,
"address": "172.18.0.14:20160",
"version": "5.0.1",
"status_address": "127.0.0.1:20180",
"git_hash": "e26389a278116b2f61addfa9f15ca25ecf38bc80",
"start_timestamp": 1667380946,
"deploy_path": "/",
"last_heartbeat": 1667382697269407432,
"state_name": "Up"
},
"status": {
"capacity": "228.2GiB",
"available": "25.18GiB",
"used_size": "31.5MiB",
"leader_count": 1,
"leader_weight": 1,
"leader_score": 1,
"leader_size": 1,
"region_count": 1,
"region_weight": 1,
"region_score": 5065447.578683018,
"region_size": 1,
"start_ts": "2022-11-02T09:22:26Z",
"last_heartbeat_ts": "2022-11-02T09:51:37.269407432Z",
"uptime": "29m11.269407432s"
}
}
]
}
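Since each PD's client port is also published to the host (12379, 22379, 32379), the same check works from outside the bridge network through the mapped port, for example via pd1:
curl 127.0.0.1:12379/pd/api/v1/stores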