Learn-at-a-Glance Series: k8s Exercise 20 - Creating a single-master cluster with kubeadm 1.14.1


Prerequisites
systemctl stop firewalld
systemctl disable firewalld
swapoff -a
# Also permanently disable swap: open the following file and comment out the swap line
sudo vi /etc/fstab
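To comment out the swap entry non-interactively instead, a one-line sketch (double-check /etc/fstab afterwards):
sed -i '/[[:space:]]swap[[:space:]]/s/^/#/' /etc/fstab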

# Repo definition for mainland China (Aliyun mirror)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
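To pin the exact version this article targets (optional, assuming the Aliyun repo above provides these packages):
yum install -y kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1 --disableexcludes=kubernetes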

systemctl enable --now kubelet

1.modprobe br_netfilter

2.
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
3.
sysctl --system
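To verify the settings took effect (an optional check):
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# both should print "= 1"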

 

 

# If the installation goes wrong, you can reset with the following commands
kubeadm reset

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

1. Pull the required images first

Check which images are needed:
[root@host0 ~]# grep image /etc/kubernetes/manifests/*
/etc/kubernetes/manifests/etcd.yaml:    image: k8s.gcr.io/etcd:3.3.10
/etc/kubernetes/manifests/etcd.yaml:    imagePullPolicy: IfNotPresent
/etc/kubernetes/manifests/kube-apiserver.yaml:    image: k8s.gcr.io/kube-apiserver:v1.14.1
/etc/kubernetes/manifests/kube-apiserver.yaml:    imagePullPolicy: IfNotPresent
/etc/kubernetes/manifests/kube-controller-manager.yaml:    image: k8s.gcr.io/kube-controller-manager:v1.14.1
/etc/kubernetes/manifests/kube-controller-manager.yaml:    imagePullPolicy: IfNotPresent
/etc/kubernetes/manifests/kube-scheduler.yaml:    image: k8s.gcr.io/kube-scheduler:v1.14.1
/etc/kubernetes/manifests/kube-scheduler.yaml:    imagePullPolicy: IfNotPresent
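Alternatively, kubeadm itself can print the required image list for a given version, even before init has generated the manifests:
kubeadm config images list --kubernetes-version v1.14.1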

Solution: refer to "k8s Exercise x - Downloading Google k8s images via Aliyun" and pull the images as shown below.

docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-apiserver1.14.1
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-controller-manager1.14.1   
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-scheduler1.14.1   
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-proxy1.14.1   
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:coredns1.3.1   
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:pause3.1
docker pull  registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:etcd3.3.10

docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-apiserver1.14.1  k8s.gcr.io/kube-apiserver:v1.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-controller-manager1.14.1     k8s.gcr.io/kube-controller-manager:v1.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-scheduler1.14.1      k8s.gcr.io/kube-scheduler:v1.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-proxy1.14.1     k8s.gcr.io/kube-proxy:v1.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:coredns1.3.1      k8s.gcr.io/coredns:1.3.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:pause3.1   k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:etcd3.3.10 k8s.gcr.io/etcd:3.3.10
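The pull/tag sequence above can also be scripted; a minimal bash sketch, assuming the same mirror repository and its "name plus version without the leading v" tag scheme:

MIRROR=registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes
for img in kube-apiserver:v1.14.1 kube-controller-manager:v1.14.1 \
           kube-scheduler:v1.14.1 kube-proxy:v1.14.1 \
           coredns:1.3.1 pause:3.1 etcd:3.3.10; do
  name=${img%%:*}                        # e.g. kube-apiserver
  ver=${img##*:}                         # e.g. v1.14.1
  docker pull ${MIRROR}:${name}${ver#v}  # mirror tags drop the leading "v"
  docker tag  ${MIRROR}:${name}${ver#v} k8s.gcr.io/${img}
done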

2.
# Initialize the control plane
kubeadm init

When kubeadm init runs, it first requests https://dl.k8s.io/release/stable-1.txt to discover the latest stable version number;
that address actually redirects to https://storage.googleapis.com/kubernetes-release/release/stable-1.txt.
At the time of writing the returned value is v1.14.1. Because this address is blocked and cannot be reached, we can avoid the problem by specifying the version explicitly, using the command below:

It is also recommended to specify --pod-network-cidr=10.168.0.0/16 here, since the default may conflict with your existing network.

kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=10.168.0.0/16

When it finishes, the output tells you to run the following:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
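A quick sanity check that kubectl can now reach the API server (not part of the original output):
kubectl cluster-info
kubectl get nodes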

To deploy a pod network, the output suggests:
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Deploy Calico with the following commands:
wget   https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
sed -i 's/192.168.0.0/10.168.0.0/g' calico.yaml
kubectl apply -f calico.yaml

# Command that other nodes use to join the cluster:
kubeadm join 192.168.10.72:6443 --token ptxgf1.hzulb340o8qs3npk \
    --discovery-token-ca-cert-hash sha256:a82ff8a6d7b438c3eedb065e9fb9a8e3d46146a5d6d633b35862b703f1a0a285

# For details see https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#join-nodes

Remove the master taint so pods can be scheduled on the master node:
[root@host0 script]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/host0 untainted  # this output indicates success
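An optional check that the taint has really been removed:
kubectl describe node host0 | grep Taints   # should report <none>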

Confirm the pod network CIDR is 10.168.0.0/16:
[root@host0 script]# kubectl get pod -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP              NODE    NOMINATED NODE   READINESS GATES
calico-kube-controllers-5cbcccc885-5klll   1/1     Running   0          28s     10.168.150.2    host0   <none>           <none>
calico-node-4k2ph                          1/1     Running   0          28s     192.168.10.72   host0   <none>           <none>
coredns-fb8b8dccf-jjw8n                    0/1     Running   0          4m4s    10.168.150.3    host0   <none>           <none>
coredns-fb8b8dccf-nfvwt                    1/1     Running   0          4m3s    10.168.150.1    host0   <none>           <none>
etcd-host0                                 1/1     Running   0          3m2s    192.168.10.72   host0   <none>           <none>
kube-apiserver-host0                       1/1     Running   0          2m59s   192.168.10.72   host0   <none>           <none>
kube-controller-manager-host0              1/1     Running   0          3m8s    192.168.10.72   host0   <none>           <none>
kube-proxy-h8xnf                           1/1     Running   0          4m4s    192.168.10.72   host0   <none>           <none>
kube-scheduler-host0                       1/1     Running   0          2m58s   192.168.10.72   host0   <none>           <none>

 

The full process output follows:
[root@host0 script]# wget   https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
[root@host0 script]# sed -i 's/192.168.0.0/10.168.0.0/g' calico.yaml
[root@host0 script]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
deployment.extensions/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
[root@host0 script]#

-------------- Error: coredns stuck in Pending

[root@host0 script]# kubectl get pod
No resources found.
[root@host0 script]# kubectl get pod -n kube-system
NAME                            READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-24grq         0/1     Pending   0          3m35s  # no pod network deployed yet
coredns-fb8b8dccf-7zxw4         0/1     Pending   0          3m35s
etcd-host0                      1/1     Running   0          2m42s
kube-apiserver-host0            1/1     Running   0          2m45s
kube-controller-manager-host0   1/1     Running   0          2m30s
kube-proxy-rdp2t                1/1     Running   0          3m35s
kube-scheduler-host0            1/1     Running   0          2m20s
[root@host0 script]#

Deploying the pod network fixes this.
Use the following commands:
wget   https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
sed -i 's/192.168.0.0/10.168.0.0/g' calico.yaml
kubectl apply -f calico.yaml


Alternatively, apply it directly from the URL (note this skips the CIDR change):
kubectl apply -f \
  https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

Check the network segment:
[root@host0 script]# ip a | tail -4
9: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.168.150.0/32 brd 10.168.150.0 scope global tunl0
       valid_lft forever preferred_lft forever
[root@host0 script]#

----------- Extension 1
Quickstart for Calico on Kubernetes

https://docs.projectcalico.org/v3.6/getting-started/kubernetes/

----------- Extension 2
Recreating the token: tokens are valid for only 24 hours by default, so once one has expired you must create a new token before more nodes can join the cluster.

kubeadm token create

Sample output: 5didvk.d09sbcov8ph2amjw

# List existing tokens
kubeadm token list

 

3. Then get the CA certificate hash:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'
Sample output: 8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78

4. Then join with the command below, remembering to substitute the actual values (a one-step shortcut follows):
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
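As a convenient alternative, kubeadm can print the complete join command, including a fresh token and the CA cert hash, in one step:
kubeadm token create --print-join-command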

----------- Extension 3
To control the cluster from a machine other than the master,
set up that machine as follows:

1. Copy admin.conf to the target machine
scp root@<master ip>:/etc/kubernetes/admin.conf .
2. Invoke kubectl with that kubeconfig
kubectl --kubeconfig ./admin.conf get nodes
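Instead of passing --kubeconfig on every call, you can export it for the current shell session (equivalent):
export KUBECONFIG=$PWD/admin.conf
kubectl get nodes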

----------- Extension 4
Proxying the apiserver to your local machine.
To connect to the apiserver from outside the cluster, you can use kubectl proxy:

1
scp root@<master ip>:/etc/kubernetes/admin.conf .
2
kubectl --kubeconfig ./admin.conf proxy

3. Then access http://localhost:8001/api/v1 locally
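For example, while the proxy is running the API can be queried locally without further authentication (a quick check):
curl http://localhost:8001/api/v1/nodes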

----------- Extension 5
To undo what kubeadm has done, first drain the node and make sure it is empty before shutting it down.

1. Run:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

2. After all nodes have been removed:
kubeadm reset

3. Clear the iptables rules
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

ipvsadm -C

4. To start over, simply run kubeadm init or kubeadm join again.

----------- Extension 6
How to maintain the cluster:
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/
