One-read series: k8s Exercise 22 - Adding worker nodes to a multi-master HA cluster


 

192.168.10.73 HOST1
192.168.10.73 host1
192.168.10.74 HOST2
192.168.10.74 host2
192.168.10.72 HOST0
192.168.10.72 host0
192.168.10.69 k8s-node1
192.168.10.68 k8s-node3
192.168.10.71 k8s-node2

 

systemctl stop firewalld
systemctl disable firewalld
swapoff -a
# Also disable swap permanently: open the file below and comment out the swap line
sudo vi /etc/fstab
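
If preferred, the swap entry can be commented out without opening an editor (a sketch using GNU sed; it keeps a backup copy of the file):

# comment out any non-comment line that mounts swap; a backup is written to /etc/fstab.bak
sed -r -i.bak 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab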

1. modprobe br_netfilter

2.
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
3.
sysctl --system
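
Optionally, verify the module and sysctls took effect and make the module load on boot (a hedged addition; /etc/modules-load.d is the standard systemd location, not part of the original steps):

lsmod | grep br_netfilter                        # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables        # expect "... = 1"
echo br_netfilter > /etc/modules-load.d/k8s.conf # load the module automatically after reboot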

Install kubeadm, kubelet and kubectl
# repo definition using the Aliyun mirror (for hosts inside China)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# keep the version consistent with the masters
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
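
Because the masters in this cluster run v1.14.1, it may be safer to pin the package versions explicitly rather than take the latest (a sketch; the exact RPM versions are an assumption based on the cluster version shown later):

yum install -y kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1 --disableexcludes=kubernetes
systemctl enable --now kubelet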

sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the Docker repository
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: refresh the cache and install Docker CE
sudo yum makecache fast
sudo yum -y install docker-ce

systemctl enable docker
systemctl restart kubelet
systemctl restart docker

 

 

 

Join the node with the command from the previous article:

kubeadm join 192.168.10.199:6443 --token 4gzmbk.2dlkrzgwjy4gseq9 \
  --discovery-token-ca-cert-hash \
  sha256:37b9f9957e0c8dc00aa3f9445881433f4241a3bd6d5966b8a98e9a58ec71862b

[root@k8s-node1 ~]# kubeadm join 192.168.10.199:6443 --token 4gzmbk.2dlkrzgwjy4gseq9     --discovery-token-ca-cert-hash sha256:37b9f9957e0c8dc00aa3f9445881433f4241a3bd6d5966b8a98e9a58ec71862b

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Verify on a master:
[root@host0 redis_web]# kubectl get nodes
NAME        STATUS   ROLES    AGE    VERSION
host0       Ready    master   133m   v1.14.1
host1       Ready    master   128m   v1.14.1
host2       Ready    master   126m   v1.14.1
k8s-node1   Ready    <none>   77m    v1.14.1   # Ready means the node joined successfully

 

Create an RC and a Service on a master to test scheduling

RC manifest:
vim frontend-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-rc
  labels:
    name: frontend-pod-lb
spec:
  replicas: 3
  selector:
    name: frontend-pod-lb
  template:
    metadata:
      labels:
        name: frontend-pod-lb
    spec:
      containers:
      - name: frontend-name
        image: reg.ccie.wang/test/guestbook-php-frontend:latest
        ports:
        - containerPort: 80
        env:
        - name: GET_HOSTS_FROM
          value: "env"

Service manifest:
vim frontend-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
  labels:
    name: frontend-pod-lb
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30011
  selector:
    name: frontend-pod-lb
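
With both manifests written, create them on a master and wait for the RC to come up (standard kubectl usage; the same create command also appears later in the article):

kubectl create -f frontend-rc.yaml
kubectl create -f frontend-svc.yaml
kubectl get rc frontend-rc        # wait until DESIRED equals READY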

Check again a few minutes later:
[root@host0 redis_web]# kubectl get svc,pod -o wide
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE    SELECTOR
service/frontend-svc   NodePort    10.99.213.11   <none>        80:30011/TCP   123m   name=frontend-pod-lb
service/kubernetes     ClusterIP   10.96.0.1      <none>        443/TCP        136m   <none>

NAME                    READY   STATUS    RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
pod/frontend-rc-9vf45   1/1     Running   0          123m   10.168.36.70   k8s-node1   <none>           <none>
pod/frontend-rc-fpwg8   1/1     Running   0          123m   10.168.36.68   k8s-node1   <none>           <none>
pod/frontend-rc-twbzn   1/1     Running   0          123m   10.168.36.69   k8s-node1   <none>           <none>

Everything is running normally.

Verify directly with curl:
[root@host0 ~]# curl http://192.168.10.69:30011/
<html ng-app="redis">
  <head>
    <title>Guestbook</title>
    <link rel="stylesheet" href="bootstrap.min.css">
    <script src="angular.min.js"></script>
    <script src="controllers.js"></script>
    <script src="ui-bootstrap-tpls.js"></script>
  </head>
  <body ng-controller="RedisCtrl">
    <div style="width: 50%; margin-left: 20px">
      <h2>Guestbook</h2>
    <form>
The output above is as expected.

Add two more worker nodes

Check node status: three masters and three worker nodes
[root@host0 redis_web]# kubectl get node
NAME        STATUS   ROLES    AGE    VERSION
host0       Ready    master   2d5h   v1.14.1
host1       Ready    master   2d5h   v1.14.1
host2       Ready    master   2d5h   v1.14.1
k8s-node1   Ready    <none>   2d4h   v1.14.1
k8s-node2   Ready    <none>   36m    v1.14.1
k8s-node3   Ready    <none>   20m    v1.14.1
[root@host0 redis_web]#

Recreate the RC from frontend-rc.yaml to test scheduling:
kubectl delete -f frontend-rc.yaml
kubectl create -f frontend-rc.yaml

The RC now schedules one pod on each worker node:

[root@host0 redis_web]# kubectl get pod -o wide
NAME                READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
frontend-rc-9l4xl   1/1     Running   0          81s   10.168.36.82     k8s-node1   <none>           <none>
frontend-rc-9pwqw   1/1     Running   0          81s   10.168.169.131   k8s-node2   <none>           <none>
frontend-rc-g8bz9   1/1     Running   0          81s   10.168.107.193   k8s-node3   <none>           <none>
[root@host0 redis_web]#

 

---------- Error 1: pause image cannot be pulled, CNI not ready

Events:
  Type    Reason                   Age                    From                Message
  ----    ------                   ----                   ----                -------
  Normal  NodeHasSufficientMemory  7m28s (x8 over 7m41s)  kubelet, k8s-node1  Node k8s-node1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    7m28s (x8 over 7m41s)  kubelet, k8s-node1  Node k8s-node1 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m19s)   kubelet, k8s-node1  Node k8s-node1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m19s)   kubelet, k8s-node1  Node k8s-node1 status is now: NodeHasNoDiskPressure
  Normal  Starting                 16s                    kubelet, k8s-node1  Starting kubelet.
  Normal  NodeAllocatableEnforced  16s                    kubelet, k8s-node1  Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  16s                    kubelet, k8s-node1  Node k8s-node1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    16s                    kubelet, k8s-node1  Node k8s-node1 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     16s                    kubelet, k8s-node1  Node k8s-node1 status is now: NodeHasSufficientPID
[root@host0 redis_web]#

Apr 17 03:46:42 k8s-node1 kubelet: E0417 03:46:42.736042    3008 pod_workers.go:190] Error syncing pod 0d3a60f6-60e3-11e9-a41a-0050569642b8 ("kube-proxy-b4l5f_kube-system(0d3a60f6-60e3-11e9-a41a-0050569642b8)"), skipping: failed to "CreatePodSandbox" for "kube-proxy-b4l5f_kube-system(0d3a60f6-60e3-11e9-a41a-0050569642b8)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-proxy-b4l5f_kube-system(0d3a60f6-60e3-11e9-a41a-0050569642b8)\" failed: rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.1\": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 17 03:46:42 k8s-node1 dockerd: time="2019-04-17T03:46:42.735721205-04:00" level=error msg="Handler for POST /v1.38/images/create returned error: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 17 03:46:42 k8s-node1 kubelet: W0417 03:46:42.775671    3008 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 17 03:46:42 k8s-node1 kubelet: E0417 03:46:42.894006    3008 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Comparing with a healthy node shows this is an image problem; re-run the image download steps from this article and it clears up.

Fix: refer to "k8s Exercise x: using Aliyun to download the Google k8s images" and pull the images:

docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-apiserver1.14.1
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-controller-manager1.14.1   
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-scheduler1.14.1   
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-proxy1.14.1   
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:coredns1.3.1   
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:pause3.1
docker pull  registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:etcd3.3.10

docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-apiserver1.14.1  k8s.gcr.io/kube-apiserver:v1.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-controller-manager1.14.1     k8s.gcr.io/kube-controller-manager:v1.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-scheduler1.14.1      k8s.gcr.io/kube-scheduler:v1.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kube-proxy1.14.1     k8s.gcr.io/kube-proxy:v1.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:coredns1.3.1      k8s.gcr.io/coredns:1.3.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:pause3.1   k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:etcd3.3.10 k8s.gcr.io/etcd:3.3.10
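
The same pulls and retags can be done in one loop so nothing is missed (a sketch assuming the Aliyun repository and the tag naming shown above):

REPO=registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes
for img in kube-apiserver:v1.14.1 kube-controller-manager:v1.14.1 kube-scheduler:v1.14.1 \
           kube-proxy:v1.14.1 coredns:1.3.1 pause:3.1 etcd:3.3.10; do
    name=${img%%:*}          # e.g. kube-apiserver
    tag=${img##*:}           # e.g. v1.14.1
    docker pull ${REPO}:${name}${tag#v}                   # Aliyun tag style: kube-apiserver1.14.1
    docker tag  ${REPO}:${name}${tag#v} k8s.gcr.io/${img} # restore the k8s.gcr.io name
done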

 

Then re-run the join:
kubeadm reset
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
kubeadm join 192.168.10.199:6443 --token 4gzmbk.2dlkrzgwjy4gseq9   \
  --discovery-token-ca-cert-hash sha256:37b9f9957e0c8dc00aa3f9445881433f4241a3bd6d5966b8a98e9a58ec71862b
 
 
---------------- Error 2: token expired

[root@k8s-node2 ~]# kubeadm join 192.168.10.199:6443 --token 4gzmbk.2dlkrzgwjy4gseq9 \
>     --discovery-token-ca-cert-hash sha256:37b9f9957e0c8dc00aa3f9445881433f4241a3bd6d5966b8a98e9a58ec71862b
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.10.199:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.10.199:6443"
[discovery] Requesting info from "https://192.168.10.199:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.10.199:6443"
[discovery] Successfully established connection with API Server "192.168.10.199:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
unable to fetch the kubeadm-config ConfigMap: failed to get config map: Unauthorized
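
The Unauthorized error usually means the bootstrap token has expired (kubeadm tokens are valid for 24 hours by default). A new token and the full join command can be generated on any master, for example:

# on a master: create a fresh token and print the complete join command
kubeadm token create --print-join-command
# list tokens and their expiry to confirm
kubeadm token list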

 

 

---------------- Error 3: kubelet port already in use

Analysis: another version had been installed on this node before; clean it up and the join succeeds.

[root@k8s-node3 ~]# kubeadm join 192.168.10.199:6443 --token h8py6g.eoxih97bqekr7249     --discovery-token-ca-cert-hash sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@k8s-node3 ~]#
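
Port 10250 is held by the kubelet of the earlier installation, so resetting the node before re-joining normally clears it (a sketch; kubeadm reset wipes the node's existing kubeadm state, so only run it on a node you intend to rebuild):

systemctl stop kubelet     # frees port 10250
kubeadm reset -f           # wipe the state left by the previous install on this node
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
systemctl restart docker
# then run kubeadm join again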

--------------- Error 3: kubeadm join hangs, then times out

[root@k8s-node3 ~]# kubeadm join 192.168.10.199:6443 --token h8py6g.eoxih97bqekr7249     --discovery-token-ca-cert-hash sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: couldn't validate the identity of the API Server: abort connecting to API servers after timeout of 5m0s

[root@k8s-node3 ~]# rpm -qa |grep kube
kubectl-1.13.3-0.x86_64
kubernetes-cni-0.7.5-0.x86_64
kubelet-1.14.1-0.x86_64
kubeadm-1.14.1-0.x86_64

 

Fix:

# Install Docker CE
## Set up the repository
### Install required packages.
yum install yum-utils device-mapper-persistent-data lvm2

### Add Docker repository.
yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo

## Install Docker CE.
yum update && yum install docker-ce-18.06.2.ce

## Create /etc/docker directory.
mkdir /etc/docker

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl restart docker
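
Before retrying the join, the cgroup driver can be confirmed (a quick check):

docker info 2>/dev/null | grep -i "cgroup driver"    # expect: Cgroup Driver: systemd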

 

---------- Error 4: hostname cannot be resolved

[root@k8s-node3 ~]# kubeadm join 192.168.10.199:6443 --token h8py6g.eoxih97bqekr7249    --discovery-token-ca-cert-hash sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Hostname]: hostname "k8s-node3" could not be reached
    [WARNING Hostname]: hostname "k8s-node3": lookup k8s-node3 on 192.168.10.66:53: no such host

Analysis: the node hostname cannot be resolved. Add the entries below to /etc/hosts on every node (a one-step sketch follows the list):

192.168.10.73 HOST1
192.168.10.73 host1
192.168.10.74 HOST2
192.168.10.74 host2
192.168.10.72 HOST0
192.168.10.72 host0
192.168.10.69 k8s-node1
192.168.10.68 k8s-node3
192.168.10.71 k8s-node2
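
The entries can be appended on each node in one step (a sketch using the same IPs as above):

cat >> /etc/hosts <<EOF
192.168.10.72 host0 HOST0
192.168.10.73 host1 HOST1
192.168.10.74 host2 HOST2
192.168.10.69 k8s-node1
192.168.10.71 k8s-node2
192.168.10.68 k8s-node3
EOF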

----------- Error 5: swap is enabled

[root@k8s-node3 ~]# kubeadm join 192.168.10.199:6443 --token h8py6g.eoxih97bqekr7249    --discovery-token-ca-cert-hash sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@k8s-node3 ~]# vim /etc/fstab
[root@k8s-node3 ~]#

Disable swap:

swapoff -a

 

And comment out the swap entry in /etc/fstab:
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

------------- Error 6: kubelet keeps restarting

systemctl status kubelet

Apr 19 06:16:39 k8s-node3 systemd[1]: Unit kubelet.service entered failed state.
Apr 19 06:16:39 k8s-node3 systemd[1]: kubelet.service failed.
[root@k8s-node3 ~]# systemctl restart kubelet
[root@k8s-node3 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Fri 2019-04-19 06:16:50 EDT; 1s ago
     Docs: https://kubernetes.io/docs/
  Process: 2588 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 2588 (code=exited, status=255)

Check the logs:
tail -f /var/log/messages

config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Apr 19 06:18:54 k8s-node3 systemd: kubelet.service: main process exited, code=exited, status=255/n/a
Apr 19 06:18:54 k8s-node3 systemd: Unit kubelet.service entered failed state.
Apr 19 06:18:54 k8s-node3 systemd: kubelet.service failed.

Fix: this happens because kubeadm join has not been run yet; the config file is generated during the join.

------------------- Error 7: join hangs at pre-flight checks

[root@k8s-node3 ~]# kubeadm join 192.168.10.199:6443 --token h8py6g.eoxih97bqekr7249     --discovery-token-ca-cert-hash sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
[preflight] Running pre-flight checks   <- hangs here

Fix: most likely the token or the hash is wrong (the hash used above, sha256:e3b0c442..., is the SHA-256 of empty input, which suggests the openssl pipeline was run against a missing ca.crt). Obtain valid values and retry:

1.
kubeadm token list
2.
kubeadm token create
3.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'
4. Combine the token from step 2 and the hash from step 3 into the join command and run it:

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

 

------------ Error 8: node NotReady

[root@host0 ~]# kubectl get nodes
NAME        STATUS     ROLES    AGE     VERSION
host0       Ready      master   2d4h    v1.14.1
host1       Ready      master   2d4h    v1.14.1
host2       Ready      master   2d4h    v1.14.1
k8s-node1   Ready      <none>   2d3h    v1.14.1
k8s-node2   NotReady   <none>   23s     v1.14.1
k8s-node3   NotReady   <none>   8m53s   v1.14.1

Check the logs: the network plugin is not installed. It should be deployed automatically, so the likely cause is that the images are missing.
...while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 19 07:50:12 k8s-node2 kubelet: W0419 07:50:12.642892   15468 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 19 07:50:12 k8s-node2 kubelet: E0419 07:50:12.796773   15468 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Apr 19 07:50:17 k8s-node2 kubelet: W0419 07:50:17.643069   15468 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 19 07:50:17 k8s-node2 kubelet: E0419 07:50:17.797784   15468 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Apr 19 07:50:22 k8s-node2 kubelet: W0419 07:50:22.643351   15468 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 19 07:50:22 k8s-node2 kubelet: E0419 07:50:22.798903   15468 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

1. Missing CNI config files under
/etc/cni/net.d

Copy them over from a node that works (an scp sketch follows the listing):
[root@k8s-node2 net.d]# ll
total 8
-rw-r--r-- 1 root root  528 Apr 17 04:50 10-calico.conflist
-rw-r--r-- 1 root root 2565 Apr 17 04:50 calico-kubeconfig
[root@k8s-node2 net.d]#
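
For example, the two files can be pulled over with scp (a sketch; k8s-node1 is the worker that is already healthy in this setup):

# on the broken node: fetch the calico CNI config from the healthy k8s-node1
mkdir -p /etc/cni/net.d
scp root@k8s-node1:/etc/cni/net.d/{10-calico.conflist,calico-kubeconfig} /etc/cni/net.d/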

systemctl restart docker
systemctl restart kubelet
The node then returns to Ready.

 

Check which images calico.yaml references:
[root@host0 script]# grep image calico.yaml
          image: calico/cni:v3.6.1
          image: calico/cni:v3.6.1
          image: calico/node:v3.6.1
          image: calico/kube-controllers:v3.6.1

Pull these images on the new nodes:
    docker pull calico/kube-controllers:v3.6.1
    docker pull calico/node:v3.6.1
    docker pull calico/cni:v3.6.1
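
Once the images are on the new nodes, the calico pods scheduled there should go Running; this can be checked from a master:

kubectl -n kube-system get pods -o wide | grep calico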

--------------- Error 9: pause / kube-proxy images cannot be pulled

Pulling k8s.gcr.io/pause:3.1 fails (k8s.gcr.io/kube-proxy:v1.14.1 is needed as well):
Apr 19 08:06:34 k8s-node3 kubelet: E0419 08:06:34.051365  
14060 pod_workers.go:190] Error syncing pod 4ff2b462-629b-11e9-a41a-0050569642b8
("kube-proxy-cw5sf_kube-system(4ff2b462-629b-11e9-a41a-0050569642b8)"),
skipping: failed to "CreatePodSandbox" for
"kube-proxy-cw5sf_kube-system(4ff2b462-629b-11e9-a41a-0050569642b8)"
with CreatePodSandboxError: "CreatePodSandbox for pod
\"kube-proxy-cw5sf_kube-system(4ff2b462-629b-11e9-a41a-0050569642b8)\"
 
failed: rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.1\":
Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while
waiting for connection (Client.Timeout exceeded while awaiting headers)"     

Pull the images from the Aliyun mirror and retag them:

docker pull registry.cn-shanghai.aliyuncs.com/jdccie-rgs/kubenetes:pause3.1
docker tag  registry.cn-shanghai.aliyuncs.com/jdccie-rgs/kubenetes:pause3.1 k8s.gcr.io/pause:3.1
docker pull registry.cn-shanghai.aliyuncs.com/jdccie-rgs/kubenetes:kube-proxyv1.14.1
docker tag registry.cn-shanghai.aliyuncs.com/jdccie-rgs/kubenetes:kube-proxyv1.14.1 k8s.gcr.io/kube-proxy:v1.14.1

systemctl restart kubelet

The node returns to normal.
