Must-Know Series: etcd Single-Machine Cluster Deployment


Deploying an etcd cluster on a single machine

Download
https://github.com/etcd-io/etcd/releases/tag/v3.3.12

ETCD_VER=v3.3.12

# choose either URL
GOOGLE_URL=https://storage.googleapis.com/etcd
GITHUB_URL=https://github.com/etcd-io/etcd/releases/download
DOWNLOAD_URL=${GOOGLE_URL}

rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
rm -rf /tmp/etcd-download-test && mkdir -p /tmp/etcd-download-test

curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp/etcd-download-test --strip-components=1
rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz

/tmp/etcd-download-test/etcd --version
ETCDCTL_API=3 /tmp/etcd-download-test/etcdctl version

Alternatively, after downloading you can copy the etcd* binaries to /usr/local/bin so the etcd and etcdctl commands can be used directly.
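
A minimal sketch, assuming the binaries were unpacked to /tmp/etcd-download-test as in the download step above:

# copy the binaries onto the PATH (run as root or with sudo)
cp /tmp/etcd-download-test/etcd /tmp/etcd-download-test/etcdctl /usr/local/bin/
etcd --version
ETCDCTL_API=3 etcdctl version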

 

Create the directory layout
root@docker:~# tree /opt/etcd -L 2
/opt/etcd
├── conf
│   ├── node1.yml   config files
│   ├── node2.yml
│   └── node3.yml
└── data
    ├── node1       node data directories
    ├── node2
    ├── node3
    └── node4

6 directories, 3 files
root@docker:~#
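
The layout above can be created up front with a couple of mkdir calls (a sketch; the node4 data directory is only needed later, when the cluster is scaled out):

mkdir -p /opt/etcd/conf
mkdir -p /opt/etcd/data/node{1,2,3,4}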

Per-node configuration files
root@docker:~# cat /opt/etcd/conf/*

name: node1
data-dir: /opt/etcd/data/node1
listen-client-urls: 'http://0.0.0.0:9002'
advertise-client-urls: 'http://0.0.0.0:9002'
listen-peer-urls: 'http://0.0.0.0:9001'
initial-advertise-peer-urls: 'http://0.0.0.0:9001'
initial-cluster: node1=http://0.0.0.0:9001,node2=http://0.0.0.0:9003,node3=http://0.0.0.0:9005
initial-cluster-token: etcd-cluster-1
initial-cluster-state: new

name: node2
data-dir: /opt/etcd/data/node2
listen-client-urls: 'http://0.0.0.0:9004'
advertise-client-urls: 'http://0.0.0.0:9004'
listen-peer-urls: 'http://0.0.0.0:9003'
initial-advertise-peer-urls: 'http://0.0.0.0:9003'
initial-cluster: node1=http://0.0.0.0:9001,node2=http://0.0.0.0:9003,node3=http://0.0.0.0:9005
initial-cluster-token: etcd-cluster-1
initial-cluster-state: new

name: node3
data-dir: /opt/etcd/data/node3
listen-client-urls: 'http://0.0.0.0:9006'
advertise-client-urls: 'http://0.0.0.0:9006'
listen-peer-urls: 'http://0.0.0.0:9005'
initial-advertise-peer-urls: 'http://0.0.0.0:9005'
initial-cluster: node1=http://0.0.0.0:9001,node2=http://0.0.0.0:9003,node3=http://0.0.0.0:9005
initial-cluster-token: etcd-cluster-1
initial-cluster-state: new
root@docker:~#

Startup commands
nohup etcd --config-file=/opt/etcd/conf/node1.yml &
nohup etcd --config-file=/opt/etcd/conf/node2.yml &
nohup etcd --config-file=/opt/etcd/conf/node3.yml &
root@docker:~#

 

Parameter reference (a flag-based equivalent of node1.yml is sketched after this list):
● --data-dir: the node's data storage directory; defaults to the current directory if not set. It holds the node ID, cluster ID, initial cluster configuration, and snapshot files, and also the WAL files when --wal-dir is not specified.
● --wal-dir: directory for the node's WAL files; when specified, the WAL files are stored separately from the other data.
● --name: the node name.
● --initial-advertise-peer-urls: the peer URLs advertised to the other cluster members (TCP port 2380 is etcd's default for cluster communication).
● --listen-peer-urls: the URLs to listen on for communication with other members.
● --advertise-client-urls: the URLs advertised to clients, i.e. the service URLs (TCP port 2379 is etcd's default for client requests).
● --initial-cluster-token: the cluster token.
● --initial-cluster: all members of the initial cluster.
● --initial-cluster-state: the cluster state; new for a newly created cluster, existing for an already existing one.
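
For reference, a flag-based equivalent of the node1.yml shown earlier might look like the following (a sketch; the ports follow this article's layout rather than etcd's 2379/2380 defaults):

etcd --name node1 \
  --data-dir /opt/etcd/data/node1 \
  --listen-client-urls http://0.0.0.0:9002 \
  --advertise-client-urls http://0.0.0.0:9002 \
  --listen-peer-urls http://0.0.0.0:9001 \
  --initial-advertise-peer-urls http://0.0.0.0:9001 \
  --initial-cluster node1=http://0.0.0.0:9001,node2=http://0.0.0.0:9003,node3=http://0.0.0.0:9005 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster-state new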

On etcd1 and etcd2, perform similar steps; only the --advertise-client-urls and --initial-advertise-peer-urls parameters in the script need to be changed.

Note: the initialization above runs only once, when the cluster is first created. If a node's service is later restarted, the initial-* parameters must be removed, otherwise it will report an error.

Verify
root@docker:~# etcdctl --endpoints http://127.0.0.1:9002,http://127.0.0.1:9004,http://127.0.0.1:9006 member list

b5b6e1baef01d74: name=node2 peerURLs=http://0.0.0.0:9003 clientURLs=http://0.0.0.0:9004 isLeader=false
7f630db3033b1564: name=node1 peerURLs=http://0.0.0.0:9001 clientURLs=http://0.0.0.0:9002 isLeader=false
fd1de2479ca19cfa: name=node3 peerURLs=http://0.0.0.0:9005 clientURLs=http://0.0.0.0:9006 isLeader=true
root@docker:~#

root@docker:~# etcdctl --endpoints http://127.0.0.1:9006 cluster-health
member b5b6e1baef01d74 is healthy: got healthy result from http://0.0.0.0:9004
member 7f630db3033b1564 is healthy: got healthy result from http://0.0.0.0:9002
member fd1de2479ca19cfa is healthy: got healthy result from http://0.0.0.0:9006
cluster is healthy
root@docker:~#
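
For reference, the same health check with the v3 API would use the endpoint health subcommand (a sketch; its output format differs from the v2 cluster-health shown above):

ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:9002,http://127.0.0.1:9004,http://127.0.0.1:9006 endpoint health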

Update member information

root@docker:~# etcdctl --endpoints http://127.0.0.1:9006 member update fd1de2479ca19cfa http://192.168.10.67:9006
Updated member with ID fd1de2479ca19cfa in cluster
root@docker:~# etcdctl --endpoints http://127.0.0.1:9006 cluster-health
member b5b6e1baef01d74 is healthy: got healthy result from http://0.0.0.0:9004
member 7f630db3033b1564 is healthy: got healthy result from http://0.0.0.0:9002
member fd1de2479ca19cfa is healthy: got healthy result from http://0.0.0.0:9006

To update a member's IP (peerURLs), you first need that member's ID, which is the leading field, e.g. b5b6e1baef01d74.
root@docker:~# etcdctl --endpoints http://127.0.0.1:9006 member list
b5b6e1baef01d74: name=node2 peerURLs=http://0.0.0.0:9003 clientURLs=http://0.0.0.0:9004 isLeader=false
7f630db3033b1564: name=node1 peerURLs=http://0.0.0.0:9001 clientURLs=http://0.0.0.0:9002 isLeader=false
fd1de2479ca19cfa: name=node3 peerURLs=http://192.168.10.67:9006 clientURLs=http://0.0.0.0:9006 isLeader=true
root@docker:~#

Remove a member
root@docker:~# etcdctl --endpoints http://127.0.0.1:9006 member remove fd1de2479ca19cfa
Removed member fd1de2479ca19cfa from cluster

Verify
root@docker:~# etcdctl --endpoints http://127.0.0.1:9002 member list
b5b6e1baef01d74: name=node2 peerURLs=http://0.0.0.0:9003 clientURLs=http://0.0.0.0:9004 isLeader=true
7f630db3033b1564: name=node1 peerURLs=http://0.0.0.0:9001 clientURLs=http://0.0.0.0:9002 isLeader=false

Add a member
root@docker:~# etcdctl --endpoints http://127.0.0.1:9002 member add node3 http://0.0.0.0:9005
Added member named node3 with ID 3979a731e0408e32 to cluster

The command also prints these hints:
ETCD_NAME="node3"
ETCD_INITIAL_CLUSTER="node2=http://0.0.0.0:9003,node3=http://0.0.0.0:9005,node1=http://0.0.0.0:9001"
ETCD_INITIAL_CLUSTER_STATE="existing"
root@docker:~#
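
As an alternative to editing the YAML file (the approach this article takes below), the ETCD_* hints printed above could be exported as environment variables, with the remaining settings passed as flags; a sketch only, not the method used in this article:

export ETCD_NAME="node3"
export ETCD_INITIAL_CLUSTER="node2=http://0.0.0.0:9003,node3=http://0.0.0.0:9005,node1=http://0.0.0.0:9001"
export ETCD_INITIAL_CLUSTER_STATE="existing"
nohup etcd --data-dir /opt/etcd/data/node3 \
  --listen-client-urls http://0.0.0.0:9006 --advertise-client-urls http://0.0.0.0:9006 \
  --listen-peer-urls http://0.0.0.0:9005 --initial-advertise-peer-urls http://0.0.0.0:9005 &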

Check whether it is healthy
root@docker:/opt/etcd/conf# etcdctl --endpoints http://127.0.0.1:9002 member list
b5b6e1baef01d74: name=node2 peerURLs=http://0.0.0.0:9003 clientURLs=http://0.0.0.0:9004 isLeader=true
3979a731e0408e32[unstarted]: peerURLs=http://0.0.0.0:9005        ----- wrong state
7f630db3033b1564: name=node1 peerURLs=http://0.0.0.0:9001 clientURLs=http://0.0.0.0:9002 isLeader=false
root@docker:/opt/etcd/conf#

Fix
Clear the data-dir of the target node (etcd3).
After a member is removed, the cluster membership is updated, and a new node joins as a completely fresh member. If its data-dir still contains data,
etcd reads the existing data on startup and keeps using the old member ID, which makes it unable to join the cluster, so the new member's data-dir must be emptied.

root@docker:/opt/etcd/conf# rm -rf /opt/etcd/data/node3/

The initial-cluster-state here must be set to existing. If it is new, a brand-new member ID is generated, which does not match
the ID produced when the member was added above, so the logs will report a member ID mismatch.

The correct configuration is as follows
root@docker:/opt/etcd/conf# vim node3.yml
name: node3
data-dir: /opt/etcd/data/node3
listen-client-urls: 'http://0.0.0.0:9006'
advertise-client-urls: 'http://0.0.0.0:9006'
listen-peer-urls: 'http://0.0.0.0:9005'
initial-advertise-peer-urls: 'http://0.0.0.0:9005'
initial-cluster: node1=http://0.0.0.0:9001,node2=http://0.0.0.0:9003,node3=http://0.0.0.0:9005
initial-cluster-token: etcd-cluster-1
initial-cluster-state: existing    # changed from new to existing
Change --advertise-client-urls and --initial-advertise-peer-urls to etcd3's values, and set --initial-cluster-state to existing.

 

Start node3
nohup etcd --config-file=/opt/etcd/conf/node3.yml &

ps -ef |grep etcd
root     12747     1  1 12:39 pts/0    00:00:05 etcd --config-file=/opt/etcd/conf/node1.yml
root     12748     1  1 12:39 pts/0    00:00:06 etcd --config-file=/opt/etcd/conf/node2.yml
root     12966 31275  0 12:44 pts/0    00:00:01 etcd --config-file=/opt/etcd/conf/node3.yml

Verify; the result is correct
root@docker:~# etcdctl --endpoints http://127.0.0.1:9004 get jdccie
http://www.jdccie.com
root@docker:~# etcdctl --endpoints http://127.0.0.1:9006 get jdccie
http://www.jdccie.com
root@docker:~#

root@docker:~# etcdctl --endpoints http://127.0.0.1:9004 set ssl sslvpn.ccie.wang
sslvpn.ccie.wang
root@docker:~# etcdctl --endpoints http://127.0.0.1:9006 get ssl
sslvpn.ccie.wang
root@docker:~#

Scaling out the cluster

1. Add the member
etcdctl --endpoints http://127.0.0.1:9002 member add node4 http://0.0.0.0:9007
Added member named node4 with ID 63bd58b500460e51 to cluster

ETCD_NAME="node4"
ETCD_INITIAL_CLUSTER="node2=http://0.0.0.0:9003,node3=http://0.0.0.0:9005,node4=http://0.0.0.0:9007,node1=http://0.0.0.0:9001"
ETCD_INITIAL_CLUSTER_STATE="existing"

2. Create the node4 configuration
root@docker:~# cat /opt/etcd/conf/node4.yml
name: node4
data-dir: /opt/etcd/data/node4
listen-client-urls: 'http://0.0.0.0:9008'
advertise-client-urls: 'http://0.0.0.0:9008'
listen-peer-urls: 'http://0.0.0.0:9007'
initial-advertise-peer-urls: 'http://0.0.0.0:9007'
initial-cluster: node1=http://0.0.0.0:9001,node2=http://0.0.0.0:9003,node3=http://0.0.0.0:9005,node4=http://0.0.0.0:9007
initial-cluster-token: etcd-cluster-1
initial-cluster-state: existing
root@docker:~#

3. Start node4

nohup etcd --config-file=/opt/etcd/conf/node4.yml > node4.log &
 
Verify the processes
ps -ef |grep etcd
root     12747     1  0 12:39 pts/0    00:00:46 etcd --config-file=/opt/etcd/conf/node1.yml
root     12748     1  0 12:39 pts/0    00:00:58 etcd --config-file=/opt/etcd/conf/node2.yml
root     12966 31275  0 12:44 pts/0    00:00:42 etcd --config-file=/opt/etcd/conf/node3.yml
root     18667 31275  0 14:15 pts/0    00:00:00 etcd --config-file=/opt/etcd/conf/node4.yml

Verify the members
root@docker:~# etcdctl --endpoints http://127.0.0.1:9002 member list
b5b6e1baef01d74: name=node2 peerURLs=http://0.0.0.0:9003 clientURLs=http://0.0.0.0:9004 isLeader=true
3979a731e0408e32: name=node3 peerURLs=http://0.0.0.0:9005 clientURLs=http://0.0.0.0:9006 isLeader=false
63bd58b500460e51: name=node4 peerURLs=http://0.0.0.0:9007 clientURLs=http://0.0.0.0:9008 isLeader=false
7f630db3033b1564: name=node1 peerURLs=http://0.0.0.0:9001 clientURLs=http://0.0.0.0:9002 isLeader=false
root@docker:~#

Verify the listening ports
root@docker:~# netstat -ntlp |grep 900
tcp6       0      0 :::9001                 :::*                    LISTEN      12747/etcd         
tcp6       0      0 :::9002                 :::*                    LISTEN      12747/etcd         
tcp6       0      0 :::9003                 :::*                    LISTEN      12748/etcd         
tcp6       0      0 :::9004                 :::*                    LISTEN      12748/etcd         
tcp6       0      0 :::9005                 :::*                    LISTEN      12966/etcd         
tcp6       0      0 :::9006                 :::*                    LISTEN      12966/etcd         
tcp6       0      0 :::9007                 :::*                    LISTEN      18667/etcd         
tcp6       0      0 :::9008                 :::*                    LISTEN      18667/etcd         
root@docker:~#

 

Verify the data (port 9008 is node4)
root@docker:~# etcdctl --endpoints http://127.0.0.1:9008 get jdccie
http://www.jdccie.com
root@docker:~#

 

 

Data consistency check
root@docker:~# etcdctl --endpoints http://127.0.0.1:9002 set jdccie http://www.jdccie.com
http://www.jdccie.com

root@docker:~# etcdctl --endpoints http://127.0.0.1:9002 get jdccie
http://www.jdccie.com
root@docker:~# etcdctl --endpoints http://127.0.0.1:9004 get jdccie
http://www.jdccie.com
root@docker:~# etcdctl --endpoints http://127.0.0.1:9006 get jdccie
http://www.jdccie.com
root@docker:~#
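
The same consistency check can be looped over every client endpoint (a small sketch using the port layout from this article):

for port in 9002 9004 9006 9008; do
  echo -n "127.0.0.1:${port} -> "
  etcdctl --endpoints http://127.0.0.1:${port} get jdccie
done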

Now a question arises:
with the new member added and the original members' configuration unchanged, will the cluster stay healthy after node1 is restarted?

Let's verify
ps -ef |grep etcd
kill -9 12747
sh +x etcd-cluster1.sh
 
The etcd-cluster1.sh script
#!/bin/bash
nohup etcd --config-file=/opt/etcd/conf/node1.yml >> node1.log &
nohup etcd --config-file=/opt/etcd/conf/node2.yml >> node2.log &
nohup etcd --config-file=/opt/etcd/conf/node3.yml >> node3.log &
nohup etcd --config-file=/opt/etcd/conf/node4.yml >> node4.log &

The cluster is still up, and node2 has been elected leader.
root@docker:~# etcdctl --endpoints http://127.0.0.1:9008 member list
b5b6e1baef01d74: name=node2 peerURLs=http://0.0.0.0:9003 clientURLs=http://0.0.0.0:9004 isLeader=true
3979a731e0408e32: name=node3 peerURLs=http://0.0.0.0:9005 clientURLs=http://0.0.0.0:9006 isLeader=false
63bd58b500460e51: name=node4 peerURLs=http://0.0.0.0:9007 clientURLs=http://0.0.0.0:9008 isLeader=false
7f630db3033b1564: name=node1 peerURLs=http://0.0.0.0:9001 clientURLs=http://0.0.0.0:9002 isLeader=false
root@docker:~#
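
To quickly see which member holds the leader role after such a restart, filtering the member list output is enough (a sketch based on the v2 etcdctl output format shown above):

etcdctl --endpoints http://127.0.0.1:9008 member list | grep 'isLeader=true'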

Appendix: node configurations
cat /opt/etcd/conf/*

name: node1
data-dir: /opt/etcd/data/node1
listen-client-urls: 'http://0.0.0.0:9002'
advertise-client-urls: 'http://0.0.0.0:9002'
listen-peer-urls: 'http://0.0.0.0:9001'
#initial-advertise-peer-urls: 'http://0.0.0.0:9001'
#initial-cluster: node1=http://0.0.0.0:9001,node2=http://0.0.0.0:9003,node3=http://0.0.0.0:9005
#initial-cluster-token: etcd-cluster-1
#initial-cluster-state: new

name: node2
data-dir: /opt/etcd/data/node2
listen-client-urls: 'http://0.0.0.0:9004'
advertise-client-urls: 'http://0.0.0.0:9004'
listen-peer-urls: 'http://0.0.0.0:9003'
#initial-advertise-peer-urls: 'http://0.0.0.0:9003'
#initial-cluster: node1=http://0.0.0.0:9001,node2=http://0.0.0.0:9003,node3=http://0.0.0.0:9005
#initial-cluster-token: etcd-cluster-1
#initial-cluster-state: new

name: node3
data-dir: /opt/etcd/data/node3
listen-client-urls: 'http://0.0.0.0:9006'
advertise-client-urls: 'http://0.0.0.0:9006'
listen-peer-urls: 'http://0.0.0.0:9005'
initial-advertise-peer-urls: 'http://0.0.0.0:9005'
initial-cluster: node1=http://0.0.0.0:9001,node2=http://0.0.0.0:9003,node3=http://0.0.0.0:9005
initial-cluster-token: etcd-cluster-1
initial-cluster-state: existing

name: node4
data-dir: /opt/etcd/data/node4
listen-client-urls: 'http://0.0.0.0:9008'
advertise-client-urls: 'http://0.0.0.0:9008'
listen-peer-urls: 'http://0.0.0.0:9007'
initial-advertise-peer-urls: 'http://0.0.0.0:9007'
initial-cluster: node1=http://0.0.0.0:9001,node2=http://0.0.0.0:9003,node3=http://0.0.0.0:9005,node4=http://0.0.0.0:9007
initial-cluster-token: etcd-cluster-1
initial-cluster-state: existing
