Must-Know Series: Fixing DOCKER_OPTS Parameters Not Taking Effect


 

docker.service

When configuring docker.service, the EnvironmentFile entries default to /etc/sysconfig/docker (basic settings), /etc/sysconfig/docker-storage (storage), and /etc/sysconfig/docker-network (network). To make /etc/default/docker take effect as well, add EnvironmentFile=-/etc/default/docker, and then reference $DOCKER_OPTS in the ExecStart line. Below is my config file /usr/lib/systemd/system/docker.service.
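If you would rather not touch the vendor unit at all, a systemd drop-in achieves the same effect; a minimal sketch (the override file name is arbitrary, and the empty ExecStart= line is required to clear the vendor definition before redefining it; flags trimmed here for brevity):

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/docker-opts.conf << 'EOF'
[Service]
EnvironmentFile=-/etc/default/docker
ExecStart=
ExecStart=/usr/bin/dockerd-current $OPTIONS $DOCKER_OPTS
EOF
systemctl daemon-reload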

# edit the config file
vi /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
Wants=docker-storage-setup.service
Requires=docker-cleanup.timer

[Service]
Type=notify
NotifyAccess=all
KillMode=process
# add our custom environment file
EnvironmentFile=-/etc/default/docker # add the config file (the leading "-" means ignore errors if the file is missing)
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecStart=/usr/bin/dockerd-current \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --exec-opt native.cgroupdriver=systemd \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY \
          $DOCKER_OPTS # the variable we reference; it carries the bridge/NIC settings
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal
MountFlags=slave

[Install]
WantedBy=multi-user.target

After these edits, reload systemd and restart the service so the DOCKER_OPTS defined in /etc/default/docker takes effect.

# reload systemd
systemctl daemon-reload
# restart the docker service
service docker restart
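To confirm the daemon really picked up the variable, check the process list; the expanded flags (e.g. -b=br0) should show up there. A quick check, assuming the unit is named docker:

# the unit's configured command line (variables shown unexpanded)
systemctl show docker --property=ExecStart
# the running daemon with the variables expanded
ps -ef | grep dockerd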

Docker environment file

vi /etc/sysconfig/docker
DOCKER_OPTS="-b=br0"

# or append it directly
echo 'DOCKER_OPTS="-b=br0"' >> /etc/default/docker
Custom Docker bridge

# install the bridge utilities
yum install bridge-utils

# create the bridge
brctl addbr br0


# inspect the bridge
brctl show

# assign the bridge an address and netmask
ifconfig br0 192.168.110.1 netmask 255.255.255.0

# point Docker at the bridge
echo 'DOCKER_OPTS="-b=br0"' >> /etc/default/docker
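Note that brctl/ifconfig changes do not survive a reboot. One way to persist the bridge on CentOS 7 is an ifcfg file; a sketch, assuming the network-scripts style (adjust the address to your environment):

cat > /etc/sysconfig/network-scripts/ifcfg-br0 << 'EOF'
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.110.1
NETMASK=255.255.255.0
ONBOOT=yes
EOF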

# configure docker
vi /usr/lib/systemd/system/docker.service
# add our custom environment file
EnvironmentFile=-/etc/sysconfig/docker
# apply the variables
ExecStart=/usr/bin/dockerd-current \
               --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
                --default-runtime=docker-runc \
                --exec-opt native.cgroupdriver=systemd \
                --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
                $OPTIONS \
                $DOCKER_STORAGE_OPTIONS \
                $DOCKER_NETWORK_OPTIONS \
                $ADD_REGISTRY \
                $BLOCK_REGISTRY \
                $INSECURE_REGISTRY \
                $DOCKER_OPTS # the bridge parameter comes in here

# reload systemd
systemctl daemon-reload
# restart the docker service
service docker restart

Bridge information after a successful change

Inspecting the bridge

 

------------ The correct approach follows

After the bridge change succeeds, the container NIC's subnet and netmask are updated accordingly:

root@docker:~# docker run --rm  --name b5 busybox ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
104: eth0@if105: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:c0:a9:6e:02 brd ff:ff:ff:ff:ff:ff
    inet 192.169.110.2/24 brd 192.169.110.255 scope global eth0
       valid_lft forever preferred_lft forever
root@docker:~#

root@docker:~# !ps
ps -ef |grep docker
root     26972     1  0 02:02 ?        00:00:00 /usr/bin/dockerd -H fd:// -b=br0                                                      
root     27376 19710  0 02:05 pts/0    00:00:00 grep --color=auto docker
root@docker:~#

vim "/lib/systemd/system/docker.service"
[Service]                                                                                                                             
Type=notify                                                                                                                           
# the default is not to use systemd for cgroups because the delegate issues still                                                     
# exists and systemd currently does not support the cgroup feature set required                                                       
# for containers run by docker                                                                                                        
#ExecStart=/usr/bin/dockerd -H fd://                                                                                                  
EnvironmentFile=-/etc/default/docker                                                                                                  
ExecStart=/usr/bin/dockerd -H fd:// $DOCKER_OPTS                                                                                      
ExecReload=/bin/kill -s HUP $MAINPID                                                                                                  
TimeoutSec=0                                                                                                                          
RestartSec=2                                                                                                                          
Restart=always  

Must-Know Series: Deploying a Harbor Docker Image Registry


https://www.cnblogs.com/pangguoping/p/7650014.html

2. Install Docker first

yum install -y epel-release lrzsz wget net-tools ntp

Sync the clock:
ntpdate cn.pool.ntp.org

Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker

docker --version

3. Install docker-compose

yum install -y docker-compose

docker-compose --version

1. Download the installer

Download page: http://harbor.orientsoft.cn/

http://harbor.orientsoft.cn/harbor-v1.5.0/harbor-offline-installer-v1.5.0.tgz

2. Upload and unpack the installer

Upload the downloaded package to /home on the server and extract it.

[root@test101 home]# tar xf harbor-offline-installer-v1.5.0.tgz
[root@test101 home]# ll
total 843504
drwxr-xr-x. 4 root root       229 Jul 30 15:48 harbor
-rw-r--r--. 1 root root 863747205 Jul 30 15:39 harbor-offline-installer-v1.5.0.tgz
[root@test101 home]#
3. Configure Harbor and Docker

3.1 Edit /home/harbor/harbor.cfg; two settings matter most:

hostname = 10.0.0.101   # using the IP directly here
harbor_admin_password = 123456   # password for logging in to Harbor; the default is Harbor12345
5. Edit the configuration file

The config file is /usr/local/harbor/harbor.cfg.
Set the following:

# vim /usr/local/harbor/harbor.cfg
hostname = rgs.unixfbi.com
# email settings
email_server = smtp.qq.com
email_server_port = 25
email_username = unixfbi@unixfbi.com
email_password = 12345678
email_from = UnixFBI <unixfbi@unixfbi.com>
email_ssl = false
# disable user self-registration
self_registration = off
# only administrators may create projects
project_creation_restriction = adminonly
6. Run the install script

# /usr/local/harbor/install.sh

7. Starting and stopping Harbor

Harbor's day-to-day operation is managed with docker-compose. Harbor consists of several service processes, each running in its own container; you can list them with docker ps,

or with docker-compose ps.

Starting and stopping Harbor:

Start Harbor
# docker-compose start
Stop Harbor
# docker-compose stop
Restart Harbor
# docker-compose restart

After changing the data path, docker-compose restart reported errors, but a stop followed by a start restarts it cleanly:
[root@k8s-registry harbor]# docker-compose restart
Restarting nginx              ... done
Restarting harbor-jobservice  ... done
Restarting harbor-ui          ... error
Restarting registry           ... error
Restarting harbor-db          ... done
Restarting harbor-adminserver ... done
Restarting redis              ... done
Restarting harbor-log         ... done

[root@k8s-registry harbor]# docker-compose stop
Stopping nginx              ... done
Stopping harbor-jobservice  ... done
Stopping harbor-db          ... done
Stopping harbor-adminserver ... done
Stopping redis              ... done
Stopping harbor-log         ... done
[root@k8s-registry harbor]# docker-compose start
Starting log         ... done
Starting mysql       ... done
Starting redis       ... done
Starting adminserver ... done
Starting registry    ... done
Starting ui          ... done
Starting jobservice  ... done
Starting proxy       ... done
[root@k8s-registry harbor]#

8. Access test

Browse to rgs.unixfbi.com, since that is the domain I configured; use whichever domain you set up.
Default credentials: admin / Harbor12345. Change the password after logging in.

 

IV. Test pushing and pulling images

Tag an image into a project:
docker tag SOURCE_IMAGE[:TAG] reg.ccie.wang/k8s/IMAGE[:TAG]

Push the image to that project:
docker push reg.ccie.wang/k8s/IMAGE[:TAG]

1. Log in first: docker login <registry IP>

2. Retag: docker tag <imageID> <registry IP>/<harbor project>/<image name>:<tag>

3. Push: docker push <registry IP>/<harbor project>/<image name>:<tag>

[root@k8s-master 1.8+]# docker login reg.ccie.wang
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@k8s-master 1.8+]# docker push reg.ccie.wang/k8s/kubernetes-dashboard-amd64:v1.10.1
The push refers to repository [reg.ccie.wang/k8s/kubernetes-dashboard-amd64]
fbdfe08b001c: Pushed
v1.10.1: digest: sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 size: 529
[root@k8s-master 1.8+]#

[root@k8s-master 1.8+]# docker tag k8s.gcr.io/metrics-server-amd64:v0.3.1 reg.ccie.wang/k8s/metrics-server-amd64:v0.3.1
[root@k8s-master 1.8+]#
[root@k8s-master 1.8+]#
[root@k8s-master 1.8+]# docker push reg.ccie.wang/k8s/metrics-server-amd64:v0.3.1
The push refers to repository [reg.ccie.wang/k8s/metrics-server-amd64]
14679ed867b8: Pushed
f9d9e4e6e2f0: Pushed
v0.3.1: digest: sha256:78938f933822856f443e6827fe5b37d6cc2f74ae888ac8b33d06fdbe5f8c658b size: 739
[root@k8s-master 1.8+]#

Pull the image:
[root@k8s-node1 ~]# docker pull reg.ccie.wang/k8s/metrics-server-amd64:v0.3.1
v0.3.1: Pulling from k8s/metrics-server-amd64
Digest: sha256:78938f933822856f443e6827fe5b37d6cc2f74ae888ac8b33d06fdbe5f8c658b
Status: Downloaded newer image for reg.ccie.wang/k8s/metrics-server-amd64:v0.3.1
1. Update each Docker client's configuration

# vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --insecure-registry rgs.unixfbi.com
Just add --insecure-registry rgs.unixfbi.com.
Restart Docker:

# systemctl daemon-reload
# systemctl  restart docker
Or:

Create /etc/docker/daemon.json and specify the registry there:
# cat > /etc/docker/daemon.json << EOF
{ "insecure-registries":["rgs.unixfbi.com"] }
EOF
Then restart Docker.

# systemctl restart docker
With this in place, Docker no longer complains about needing HTTPS.
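As a quick sanity check, docker info lists the registries the daemon treats as insecure (recent Docker versions print an "Insecure Registries:" section):

docker info | grep -A1 'Insecure Registries'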

2. Create a Dockerfile

# vim Dockerfile
FROM centos:centos7.1.1503
ENV TZ "Asia/Shanghai"
3. Build the image

# docker build -t rgs.unixfbi.com/library/centos7.1:0.1 .
4. Push the image to Harbor

# docker login rgs.unixfbi.com
# docker push rgs.unixfbi.com/library/centos7.1:0.1
If the image is not one you built yourself, remember to run docker tag on it first.
For example:

# docker pull busybox
# docker tag busybox:latest rgs.unixfbi.com/library/busybox:latest
# docker push rgs.unixfbi.com/library/busybox:latest
5. Check the image in the web UI

 

6. Pull the image

Pull the image from another machine:

# docker rmi -f $(docker images -q -a )
# docker pull rgs.unixfbi.com/library/centos7.1:0.1
0.1: Pulling from library/centos7.1
07618ba636d9: Pull complete
Digest: sha256:7f398052ae0e93ddf96ba476185c7f436b15abd27acd848a24b88ede4bb3c322
Status: Downloaded newer image for rgs.unixfbi.com/library/centos7.1:0.1

# docker images
REPOSITORY                         TAG                 IMAGE ID            CREATED             SIZE
rgs.unixfbi.com/library/centos7.1   0.1                 6c849613a995        5 hours ago         212MB

V. Configuring TLS certificates for Harbor
Everything above used HTTP, but in practice Harbor is usually accessed over HTTPS. This section shows how to enable HTTPS access and what the TLS certificate setup involves.

1. Edit the Harbor configuration

Harbor serves HTTP by default, so enable the HTTPS settings in the config file.
Configure harbor.cfg:

hostname = rgs.unixfbi.com
ui_url_protocol = https
ssl_cert = /etc/certs/ca.crt
ssl_cert_key = /etc/certs/ca.key
2. Create the self-signed certificate key

# mkdir /etc/certs
# openssl genrsa -out /etc/certs/ca.key 2048
Generating RSA private key, 2048 bit long modulus
....+++
..........................................+++
e is 65537 (0x10001)
3. Create the self-signed certificate

Note: in the /CN=rgs.unixfbi.com field below, replace rgs.unixfbi.com with your own registry domain.

# openssl req -x509 -new -nodes -key /etc/certs/ca.key -subj "/CN=rgs.unixfbi.com" -days 5000 -out /etc/certs/ca.crt
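A quick sanity check on the result, printing the subject and validity window of the new certificate:

openssl x509 -in /etc/certs/ca.crt -noout -subject -dates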
4. Install Harbor

# ./install.sh

✔ ----Harbor has been installed and started successfully.----

Now you should be able to visit the admin portal at https://reg.ccie.wang.
For more details, please visit https://github.com/vmware/harbor .
The portal address is now HTTPS.

5. Client configuration

Each client needs a directory for the registry certificate; copy the server's certificate into it, then restart Docker on the client. Here the directory is /etc/docker/certs.d/rgs.unixfbi.com

# mkdir -p /etc/docker/certs.d/rgs.unixfbi.com
Copy the server's crt file to the client; my client here is 192.168.199.183:

# scp /etc/certs/ca.crt root@192.168.199.183:/etc/docker/certs.d/rgs.unixfbi.com/
Restart Docker on the client:

# systemctl restart docker
6. Verify HTTPS access

# docker login rgs.unixfbi.com
Username (admin):
Password:
Login Succeeded

VI. Problems encountered
The problem: I configured Harbor for HTTP, but the Docker client accesses registries over HTTPS by default, which causes an error. Here is how I resolved it. First, try accessing Harbor:

# docker pull rgs.unixfbi.com/library/centos7.1:0.1
Error response from daemon: Get https://rgs.unixfbi.com/v1/_ping: dial tcp 192.168.199.233:443: getsockopt: connection refused
Cause:
Docker talks to registries over HTTPS by default, while our registry is configured for HTTP.
Fixes:
Option 1:
Add the registry to Docker's startup flags:
--insecure-registry rgs.unixfbi.com

# vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --insecure-registry rgs.unixfbi.com
Then:

# systemctl daemon-reload
# systemctl  restart docker
Option 2:
Create /etc/docker/daemon.json and specify the registry there:

# cat > /etc/docker/daemon.json << EOF
{ "insecure-registries":["rgs.unixfbi.com"] }
EOF
Then restart Docker.

# systemctl  restart docker
Option 3:
Configure the registry itself for HTTPS with a certificate. You already knew that one.


Must-Know Series: Expanding Harbor Registry Storage


 

Everything in this series has been verified.

 

Gotcha: after startup the storage showed only 50 GB, even though /home was provisioned with 400 GB. Something had to be misconfigured.

There are two ways to relocate the data directory: symlink the default data directory to another path, or change the relevant configuration. The configuration changes are described below.

The startup settings for every Harbor component container live in docker-compose.yml, so that is where to make the change.

Open the file and search for every volumes key. Under each container's volumes you will see that data is mounted from only two host locations: /data and ./common. Change /data to the directory you want, e.g. /home/harborData.

Harbor's installer config harbor.cfg also contains some data paths (mainly the secret key path); update those to match.
Two places need changes.
First place:
sed -i "s/\/data\//\/home\/opt\/harbor_data\/data\//g" docker-compose.yml
The sed replacement misses one spot; line 60 of docker-compose.yml must be fixed by hand:
    before: - /data/:/data/:z
    after:  - /home/opt/harbor_data/data/:/data/:z
Without this fix Harbor still works, but the UI will not show capacity, and the space will not show up in df -h either.
Second place:
[root@k8s-registry harbor]# grep data harbor.cfg
secretkey_path =  /home/opt/harbor_data/data

1. Before making changes, run docker-compose down -v to stop and remove the existing containers.
2. After editing, run /home/opt/harbor/prepare to regenerate the configuration.
3. Then run docker-compose up -d to recreate the containers and start the services. The whole sequence is consolidated below.
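Putting the steps together, the sequence looks roughly like this (the install directory /home/opt/harbor is assumed from the paths above; adjust to your layout):

cd /home/opt/harbor
docker-compose down -v
sed -i "s/\/data\//\/home\/opt\/harbor_data\/data\//g" docker-compose.yml
vi docker-compose.yml   # fix the one volume line the sed misses (see above)
vi harbor.cfg           # update secretkey_path
./prepare
docker-compose up -d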

[root@k8s-registry harbor]# docker-compose down -v
Stopping nginx              ... done
Stopping harbor-jobservice  ... done
Stopping harbor-ui          ... done
Stopping harbor-adminserver ...
Stopping redis              ... done
Stopping registry           ...
Stopping harbor-db          ... done
Stopping harbor-log         ...

Must-Know Series: k8s-dashboard 1.10.1 Installation Guide


http://www.525.life/article?id=1510739742331

Video version: https://ke.qq.com/course/266656

yum install -y epel-release lrzsz wget net-tools ntp

Sync the clock:

ntpdate cn.pool.ntp.org

Disable the firewall:

systemctl stop firewalld

systemctl disable firewalld

Disable SELinux:

sed -i 's/enforcing/disabled/' /etc/selinux/config

setenforce 0

Disable swap:

swapoff -a

(temporary)

vim /etc/fstab

(permanent)

Add hostname-to-IP mappings (remember to set hostnames): cat /etc/hosts

192.168.0.11 k8s-master

192.168.0.12 k8s-node1

192.168.0.13 k8s-node2

Pass bridged IPv4 traffic to the iptables chains:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system

[root@localhost ~]# sudo sysctl -p /etc/sysctl.d/k8s.conf

sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory

sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory

[root@localhost ~]#

[root@localhost ~]# modprobe br_netfilter

[root@localhost ~]# sudo sysctl -p /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

[root@localhost ~]#

Kubernetes' default CRI (container runtime) is Docker, so install Docker first.

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker

docker --version
Docker version 18.06.1-ce, build e68fc7a

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

4. Install Docker/kubeadm/kubelet on all nodes

Update /etc/hosts before installing:

[root@k8s-master ~]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.10.68 k8s-master

192.168.10.69 k8s-node1

Versions change frequently, so pin the version when installing:

yum install -y kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3

On the worker nodes:

yum install -y kubelet-1.13.3 kubeadm-1.13.3

systemctl enable kubelet

Switch to a domestic registry mirror; every server must use the same setting:

vi /etc/docker/daemon.json

{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}

Aliyun's mirror also works, but you must register to get your own endpoint:

{
  "registry-mirrors": ["https://9syoriwt.mirror.aliyuncs.com"]
}

free -h

swapoff -a

vim /etc/fstab

kubeadm init \
  --apiserver-advertise-address=192.168.10.68 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.13.3 \
  --service-cidr=10.100.0.0/16 \
  --pod-network-cidr=10.244.0.0/16

If initialization fails, reset and try again:

kubeadm reset # note: after resetting and re-initializing, kubectl may fail with an authentication error; just rerun the admin.conf setup commands echoed on screen and it works again

A token is generated during init.

7. Join Kubernetes nodes: to add a node to the cluster, run the kubeadm join command printed by kubeadm init:

Record this output; joining a node only requires running this one command.

You can now join any number of machines by running the following on each node

as root:

kubeadm join 192.168.10.68:6443 --token 95fvbt.xf7ycgtxfbzc2tyr --discovery-token-ca-cert-hash sha256:cc48567c61690242b3123e0f4f68cda9ff431562735a655a5ee7b544b8364d1c

Tokens expire (after 24 hours by default), so you may need to create a new one:

1. kubeadm token create

kubeadm token list

2. Get the sha256 hash of the CA certificate:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

3. Run the assembled command on the node you want to add:

kubeadm join 192.168.10.68:6443 --token lh4nta.nmd0mzksdi3n0luo --discovery-token-ca-cert-hash sha256:cc48567c61690242b3123e0f4f68cda9ff431562735a655a5ee7b544b8364d1c

Format:

kubeadm join <masterIP>:6443 --token <newly created token> --discovery-token-ca-cert-hash sha256:<newly computed hash>
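On recent kubeadm releases there is also a one-step shortcut that prints the complete join command, token and hash included (assuming your version supports the flag):

kubeadm token create --print-join-command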

[root@k8s-master ~]# kubeadm token list # list tokens

TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS

95fvbt.xf7ycgtxfbzc2tyr <invalid> 2019-03-03T12:55:43-05:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token

lh4nta.nmd0mzksdi3n0luo 23h 2019-03-04T22:38:25-05:00 authentication,signing <none> system:bootstrappers:kubeadm:default-node-token

Result:

[root@k8s-node2 ~]# kubeadm join 192.168.10.68:6443 --token lh4nta.nmd0mzksdi3n0luo --discovery-token-ca-cert-hash cc48567c61690242b3123e0f4f68cda9ff431562735a655a5ee7b544b8364d1c

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@k8s-master ~]# kubectl get node

NAME STATUS ROLES AGE VERSION

k8s-master Ready master 34h v1.13.3

k8s-node1 Ready <none> 32h v1.13.3

k8s-node2 Ready <none> 4m25s v1.13.3

Test the Kubernetes cluster by creating a pod and verifying it runs:

kubectl create deployment nginx --image=nginx

kubectl expose deployment nginx --port=80 --type=NodePort

kubectl get pod,svc

[root@k8s-master ~]# kubectl get pod,svc

NAME READY STATUS RESTARTS AGE

pod/nginx-5c7588df-dwbqx 1/1 Running 0 29s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 74m

service/nginx NodePort 10.100.74.188 <none> 80:32020/TCP 12s

[root@k8s-master ~]#

Key points in the init output:

The generated token: record it, since kubeadm join needs it later when adding nodes.

The following commands set up kubectl (client) access for a regular user; the master also needs kubectl access, so run them there too:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes

[root@localhost ~]# mkdir -p $HOME/.kube

[root@localhost ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@localhost ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@k8s-master ~]# kubectl get nodes

NAME STATUS ROLES AGE VERSION

k8s-master NotReady master 4m23s v1.13.3

k8s-node1 Ready <none> 111s v1.13.3

[root@k8s-master ~]#

6. Install the pod network add-on (CNI)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

clusterrole.rbac.authorization.k8s.io/flannel created

clusterrolebinding.rbac.authorization.k8s.io/flannel created

serviceaccount/flannel created

configmap/kube-flannel-cfg created

daemonset.extensions/kube-flannel-ds-amd64 created

daemonset.extensions/kube-flannel-ds-arm64 created

daemonset.extensions/kube-flannel-ds-arm created

daemonset.extensions/kube-flannel-ds-ppc64le created

daemonset.extensions/kube-flannel-ds-s390x created

[root@k8s-master ~]#

[root@k8s-master ~]# kubectl get cs

NAME STATUS MESSAGE ERROR

controller-manager Healthy ok

scheduler Healthy ok

etcd-0 Healthy {"health": "true"}

[root@k8s-master ~]#

Create a test application:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx

deployment.apps/nginx created

[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort

service/nginx exposed

[root@k8s-master ~]# kubectl get pod,svc # list pods and services

NAME READY STATUS RESTARTS AGE

pod/nginx-5c7588df-tmff9 1/1 Running 0 35s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 9m27s

service/nginx NodePort 10.100.34.146 <none> 80:32016/TCP 18s

[root@k8s-master ~]#

[root@k8s-master ~]# kubectl get pod -o wide # see which node the pod runs on

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES

nginx-5c7588df-dwbqx 1/1 Running 1 32h 10.244.1.82 k8s-node1 <none> <none>

[root@k8s-master ~]#

Verify:

From inside the cluster: http://10.100.34.146:80

From outside: http://<nodeip>:32016
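A quick check from the shell instead of a browser (IP and port taken from the service output above; substitute your own node IP):

curl -I http://10.100.34.146:80      # ClusterIP, from inside the cluster
curl -I http://192.168.10.68:32016   # NodePort, from any machine that can reach the node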

9. Deploy the Dashboard

Switch to the Aliyun registry mirror (registration required to get your own endpoint):

[root@k8s-master ~]# cat /etc/docker/daemon.json

{
  "registry-mirrors": ["https://9syoriwt.mirror.aliyuncs.com"]
}

[root@k8s-master ~]# systemctl daemon-reload

[root@k8s-master ~]# systemctl restart docker

wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

docker search kubernetes-dashboard-amd64:v1.10.1

[root@k8s-master ~]# docker search kubernetes-dashboard-amd64:v1.10.1

NAME DESCRIPTION STARS OFFICIAL AUTOMATED

mirrorgooglecontainers/kubernetes-dashboard-amd64 14

[root@k8s-master ~]# docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1

v1.10.1: Pulling from mirrorgooglecontainers/kubernetes-dashboard-amd64

63926ce158a6: Pull complete

Digest: sha256:d6b4e5d77c1cdcb54cd5697a9fe164bc08581a7020d6463986fe1366d36060e8

Status: Downloaded newer image for mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1

[root@k8s-master ~]#

The default image cannot be pulled from inside China, so change the image to: mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1

By default the Dashboard is reachable only from inside the cluster; change its Service to type NodePort to expose it externally:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

kubectl apply -f kubernetes-dashboard.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

Delete a node:

kubectl delete node swarm1

[root@k8s-master ~]# kubectl get pod –namespace=kube-system

NAME READY STATUS RESTARTS AGE

kubernetes-dashboard-57df4db6b-25ng8 0/1 ContainerCreating 0 9s

[root@k8s-master ~]#

# both of these states usually come down to image-pull or node problems; the image has to be on the node where the pod is scheduled

[root@k8s-master ~]# kubectl get pod –namespace=kube-system

NAME READY STATUS RESTARTS AGE

kubernetes-dashboard-57df4db6b-25ng8 0/1 ImagePullBackOff 0 134m

kubernetes-dashboard-847f8cb7b8-zp89j 0/1 CrashLoopBackOff 1 12s

[root@k8s-master ~]#

Solution

# By default the kubelet pulls the image from the registry named in the spec; with IfNotPresent or Never it uses a local image instead.

IfNotPresent: use the local image if it exists, otherwise pull.

Never: never pull; use the local image, and fail if it is missing.

Where the parameter applies:

spec:
  containers:
  - name: nginx
    image: reg.docker.lc/share/nginx:latest
    imagePullPolicy: IfNotPresent # or Never

One node turned out to be the problem; after shutting it down the pod started, but the UI still cannot be reached without HTTPS.

[root@k8s-master ~]# kubectl get pod --namespace=kube-system |grep dash

kubernetes-dashboard-76479d66bb-smj7l 1/1 Running 0 5m45s

[root@k8s-master ~]#

Note: access must use HTTPS:

https://192.168.10.68:30001/#!/login

Create a service account and bind it to the built-in cluster-admin role:

Commands:

kubectl create serviceaccount dashboard-admin -n kube-system

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Transcript:

[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system

serviceaccount/dashboard-admin created

[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created

[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Name: dashboard-admin-token-tcw9s

Namespace: kube-system

Labels: <none>

Annotations: kubernetes.io/service-account.name: dashboard-admin

kubernetes.io/service-account.uid: 27149d2e-3d1a-11e9-8c59-005056963bc8

Type: kubernetes.io/service-account-token

Data

====

ca.crt: 1025 bytes

namespace: 11 bytes

token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tdGN3OXMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMjcxNDlkMmUtM2QxYS0xMWU5LThjNTktMDA1MDU2OTYzYmM4Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.UxfBnISzZD5JP_BFd9R3nrXSdlodSQaPX4bNM7g2TKuXRN3rzAdfCCp8ehj1BLxMcWSFFD9TzhEBsQNh5hxdV1mYgC9g5Z6suqAsCzqgYz6nzy95lEttp62O9xb_H-dLPJC4SbrO27ezCCBJVoLqDgkuJPAOZFhx31LayiiWLGqOXIBTslDAm5JMSNChHQpnbUtb_3kqdsLmCkcFdk-VtmHS8lHZOJt20eiwb4Q4KqRggjn8oj-cNvB1MQZrObZM_bB10kFV8JiKaOIq6yw6LqERevEwSz-qhMGxfQfE1Wa14d7ia-9qpPMFp8CXwzwZ6RxTYJI6QYFVn_MhdL5jnQ

[root@k8s-master ~]#

If you forget the token, recover it like this:

[root@k8s-master ~]# kubectl -n kube-system get secret | grep dashboard-admin

dashboard-admin-token-tcw9s kubernetes.io/service-account-token 3 33h

[root@k8s-master ~]#

[root@k8s-master ~]# kubectl describe -n kube-system secret/dashboard-admin-token-tcw9s

Data

====

ca.crt: 1025 bytes

namespace: 11 bytes

token: # here it is

##eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tdGN3OXMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMjcxNDlkMmUtM2QxYS0xMWU5LThjNTktMDA1MDU2OTYzYmM4Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.UxfBnISzZD5JP_BFd9R3nrXSdlodSQaPX4bNM7g2TKuXRN3rzAdfCCp8ehj1BLxMcWSFFD9TzhEBsQNh5hxdV1mYgC9g5Z6suqAsCzqgYz6nzy95lEttp62O9xb_H-dLPJC4SbrO27ezCCBJVoLqDgkuJPAOZFhx31LayiiWLGqOXIBTslDAm5JMSNChHQpnbUtb_3kqdsLmCkcFdk-VtmHS8lHZOJt20eiwb4Q4KqRggjn8oj-cNvB1MQZrObZM_bB10kFV8JiKaOIq6yw6LqERevEwSz-qhMGxfQfE1Wa14d7ia-9qpPMFp8CXwzwZ6RxTYJI6QYFVn_MhdL5jnQ

[root@k8s-master ~]#

---- Inspection commands

kubectl get all

kubectl get svc # services

kubectl get ns # namespaces

kubectl get pod -o wide # see which node each pod runs on

-------- Inspection and troubleshooting

Adding a node after the kubeadm-generated token has expired:

Solution:

Generate a new token:

[root@walker-1 kubernetes]# kubeadm token create

[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --ttl 0)

aa78f6.8b4cafc8ed26c34f

[root@walker-1 kubernetes]# kubeadm token list

TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS

aa78f6.8b4cafc8ed26c34f 23h 2017-12-26T16:36:29+08:00 authentication,signing <none> system:bootstrappers:kubeadm:default-node-token

Get the sha256 hash of the CA certificate:

[root@walker-1 kubernetes]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538

Join the node to the cluster:

[root@walker-4 kubernetes]# kubeadm join --token aa78f6.8b4cafc8ed26c34f --discovery-token-ca-cert-hash sha256:0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538 172.16.6.79:6443 --skip-preflight-checks

Inspect a pod in a namespace:

[root@k8s-master ~]# kubectl describe pod –namespace=kube-system kubernetes-dashboard-76479d66bb-pxgtf

Events:

Type Reason Age From Message

---- ------ ---- ---- -------

Normal Scheduled 31s default-scheduler Successfully assigned kube-system/kubernetes-dashboard-76479d66bb-gbsn9 to k8s-node1

Normal Pulled 6s (x3 over 30s) kubelet, k8s-node1 Container image "mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1" already present on machine

Normal Created 6s (x3 over 30s) kubelet, k8s-node1 Created container

Normal Started 6s (x3 over 30s) kubelet, k8s-node1 Started container

Warning BackOff 0s (x5 over 26s) kubelet, k8s-node1 Back-off restarting failed container

kubernetes: detailed installation steps for dashboard v1.8.3

http://www.525.life/article?id=1510739742372

kubernetes: complete illustrated installation of kubernetes 1.11.2 on CentOS 7

http://www.525.life/article?id=1510739742331

http://dockone.io/article/2247

----------- Pulling blocked Docker images

Simply replace k8s.gcr.io with registry.cn-hangzhou.aliyuncs.com/google_containers/:

[root@k8s-master heapster]# grep gcr.io *

grafana.yaml: image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4

heapster.yaml: image: k8s.gcr.io/heapster-amd64:v1.5.4

influxdb.yaml: image: k8s.gcr.io/heapster-influxdb-amd64:v1.5.2

[root@k8s-master heapster]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-grafana-amd64:v5.0.4
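After pulling from the mirror, retag the image back to its k8s.gcr.io name so manifests that reference the original image find it locally; for the grafana image above, for example:

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-grafana-amd64:v5.0.4 k8s.gcr.io/heapster-grafana-amd64:v5.0.4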

Must-Know Series: Deploying the Kubernetes Core Metrics API (metrics-server)


Installing metrics-server on Kubernetes 1.13, and the pitfalls

Download:

https://github.com/kubernetes-incubator/metrics-server

wget https://codeload.github.com/kubernetes-incubator/metrics-server/zip/master

unzip master

Go into the directory:

/opt/k8s/metrics-server/metrics-server-master/deploy/1.8+

Replace the image address:

metrics-server-deployment.yaml: image: k8s.gcr.io/metrics-server-amd64:v0.3.1

That mirror may not work; find a usable source yourself.

[root@k8s-master 1.8+]# grep gcr *

metrics-server-deployment.yaml: image: k8s.gcr.io/metrics-server-amd64:v0.3.1

[root@k8s-master 1.8+]# vim metrics-server-deployment.yaml

[root@k8s-master 1.8+]# grep amd64 *

metrics-server-deployment.yaml: image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.1

[root@k8s-master 1.8+]#

381 kubectl apply -f ./

382 kubectl get pods,svc -n kube-system -o wide

Demo:

[root@k8s-master 1.8+]# kubectl api-versions |grep me

metrics.k8s.io/v1beta1

[root@k8s-master 1.8+]#

[root@k8s-master 1.8+]# kubectl top nodes

NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%

k8s-master 178m 4% 1811Mi 23%

k8s-node1 38m 0% 749Mi 9%

k8s-node2 30m 0% 594Mi 7%

[root@k8s-master 1.8+]# kubectl top pods --all-namespaces

NAMESPACE NAME CPU(cores) MEMORY(bytes)

default nginx-5c7588df-dwbqx 0m 2Mi

kube-system coredns-78d4cf999f-84gkh 3m 14Mi

kube-system coredns-78d4cf999f-dhhh6 3m 14Mi

kube-system etcd-k8s-master 18m 314Mi

kube-system kube-apiserver-k8s-master 29m 484Mi

kube-system kube-controller-manager-k8s-master 37m 68Mi

kube-system kube-flannel-ds-amd64-8kf24 2m 17Mi

kube-system kube-flannel-ds-amd64-hgb9x 2m 15Mi

kube-system kube-flannel-ds-amd64-lmjh8 3m 17Mi

kube-system kube-proxy-564j5 2m 17Mi

kube-system kube-proxy-m4zs4 3m 17Mi

kube-system kube-proxy-n7z76 2m 17Mi

kube-system kube-scheduler-k8s-master 12m 20Mi

kube-system kubernetes-dashboard-76479d66bb-smj7l 1m 30Mi

kube-system metrics-server-6c8b76677-fx5mr 1m 13Mi

2.2 Deployment proper

The directory contains all the YAML files needed, so deploy everything in one shot:

kubectl apply -f ./

The deployment then reports errors because of the image in metrics-server-deployment.yaml:

the pull policy for k8s.gcr.io/metrics-server-amd64:v0.3.1 is Always, and my environment cannot reach that registry. Two workarounds:

A: docker pull metrics-server-amd64:v0.3.1 from an Aliyun mirror (note which node the pod is scheduled on; the image must be placed on that node), then docker tag the pulled image to k8s.gcr.io/metrics-server-amd64:v0.3.1.

B: Change the image pull policy in metrics-server-deployment.yaml to IfNotPresent.

--------------- Error

E0304 09:14:52.776119 1 reststorage.go:129] unable to fetch node metrics for node "k8s-node2": no metrics known for node

E0304 09:15:03.649147 1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:k8s-node2: unable to fetch metrics from Kubelet k8s-node2 (k8s-node2): Get https://k8s-node2:10250/stats/summary/: dial tcp: lookup k8s-node2 on 10.100.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:k8s-master: unable to fetch metrics from Kubelet k8s-master (k8s-master): Get https://k8s-master:10250/stats/summary/: dial tcp: lookup k8s-master on 10.100.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:k8s-node1: unable to fetch metrics from Kubelet k8s-node1 (k8s-node1): Get https://k8s-node1:10250/stats/summary/: dial tcp: lookup k8s-node1 on 10.100.0.10:53: no such host]

The error means the node hostnames cannot be resolved: the metrics-server container cannot resolve the nodes' hostnames through CoreDNS (10.96.0.10:53). metrics-server connects to nodes by hostname by default, so add a flag to make it connect by node IP instead:

--kubelet-preferred-address-types=InternalIP

And since 10250 is an HTTPS port that would require client certificates, also add --kubelet-insecure-tls to skip certificate verification (earlier versions used the --source= flag for this).

---------------------

Solution

Add the following to the YAML:

[root@k8s-master 1.8+]# vim metrics-server-deployment.yaml

[root@k8s-master 1.8+]#

#add

command:
- /metrics-server
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP

---------- Error 2

[root@k8s-master 1.8+]# kubectl logs -n kube-system pod/metrics-server-6c8b76677-fx5mr

I0304 09:52:24.936032 1 serving.go:273] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)

[restful] 2019/03/04 09:52:26 log.go:33: [restful/swagger] listing is available at https://:443/swaggerapi

[restful] 2019/03/04 09:52:26 log.go:33: [restful/swagger] https://:443/swaggerui/ is mapped to folder /swagger-ui/

I0304 09:52:26.100866 1 serve.go:96] Serving securely on [::]:443

E0304 09:52:49.743254 1 reststorage.go:129] unable to fetch node metrics for node "k8s-master": no metrics known for node

E0304 09:52:49.743282 1 reststorage.go:129] unable to fetch node metrics for node "k8s-node1": no metrics known for node

E0304 09:52:49.743288 1 reststorage.go:129] unable to fetch node metrics for node "k8s-node2": no metrics known for node

All the places that need changing:

containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.1
  imagePullPolicy: Never # delete any duplicated lines below
  command:
  - /metrics-server
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP

The difference between the memory units Mi and M in Kubernetes


I remember an article (which I can no longer find) claiming that when requesting memory, 1M = 1024K = 1024×1024 bytes. In Kubernetes, M means something different. I checked the official docs and ran an experiment; here is the record.
Per the official docs (Meaning of memory): Mi means 1Mi = 1024×1024 bytes, while M means 1M = 1000×1000 bytes (other units are analogous: Ki/K, Gi/G).
Create two pods, one requesting 1Mi and the other 1M, and compare the scheduler logs.
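For reference, the two quantities in bytes (a quick shell check; the result matches the scheduler logs further down):

echo "1Mi = $((1024*1024)) bytes, 1M = $((1000*1000)) bytes"
# 1Mi = 1048576 bytes, 1M = 1000000 bytes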
nginx1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx1
    image: nginx:test
    ports:
    - containerPort: 80
    resources:
      limits:
        cpu: 200m
        memory: 128Mi
      requests:
        cpu: 0.1
        memory: 1Mi

nginx2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx2
spec:
  containers:
  - name: nginx2
    image: nginx:test
    ports:
    - containerPort: 80
    resources:
      limits:
        cpu: 200m
        memory: 128Mi
      requests:
        cpu: 0.1
        memory: 1M

The scheduling log for nginx1.yaml (Mi) shows Memory = 1024*1024:
I0716 11:05:43.555791   31331 factory.go:469] About to try and schedule pod nginx
I0716 11:05:43.555804   31331 scheduler.go:165] Attempting to schedule pod: default/nginx
I0716 11:05:43.555866   31331 predicates.go:565] Predicate: MilliCPU=100 Memory=1048576 NvidiaGPU=0 OpaqueIntResources=map[]

For nginx2.yaml (M), Memory = 1000*1000:
I0716 11:05:58.404826   31331 factory.go:469] About to try and schedule pod nginx2
I0716 11:05:58.404840   31331 scheduler.go:165] Attempting to schedule pod: default/nginx2
I0716 11:05:58.404904   31331 predicates.go:565] Predicate: MilliCPU=100 Memory=100000

Author: Mark_Zhang
Link: https://www.jianshu.com/p/f798b02363e8
Source: Jianshu
Copyright belongs to the author; any reproduction requires the author's permission and attribution.

Why Kubernetes ignores local images when creating from a YAML file


The official docs actually explain this; I just had not read them carefully: https://kubernetes.io/docs/concepts/containers/images/

By default, the kubelet will try to pull each image from the specified registry. However, if the imagePullPolicy property of the container is set to IfNotPresent or Never, then a local image is used (preferentially or exclusively, respectively).

# By default the kubelet pulls the image from the registry named in the spec; with IfNotPresent or Never it uses a local image (preferentially or exclusively, respectively).

IfNotPresent: use the local image if it exists, otherwise pull.

Never: never pull; use the local image, and fail if it is missing.

Where the parameter applies:

spec:
  containers:
  - name: nginx
    image: reg.docker.lc/share/nginx:latest
    imagePullPolicy: IfNotPresent   # or Never

The default is imagePullPolicy: Always; if your YAML does not set it, the default applies.
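A quick way to see which policy a pod actually ended up with (the pod name nginx here is just an example):

kubectl get pod nginx -o jsonpath='{.spec.containers[0].imagePullPolicy}'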

Pulling blocked k8s Docker images


----------- Pulling blocked Docker images

Simply replace k8s.gcr.io with registry.cn-hangzhou.aliyuncs.com/google_containers/:

[root@k8s-master heapster]# grep gcr.io *

grafana.yaml: image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4

heapster.yaml: image: k8s.gcr.io/heapster-amd64:v1.5.4

influxdb.yaml: image: k8s.gcr.io/heapster-influxdb-amd64:v1.5.2

[root@k8s-master heapster]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-grafana-amd64:v5.0.4


Must-Know Series: Connecting Aliyun VPN and strongSwan Site-to-Site


Guaranteed to work.

 

Site-to-site configuration between an Aliyun VPN Gateway and strongSwan

 

{
  "LocalSubnet": "<peer private subnet>/24",
  "RemoteSubnet": "<Aliyun private subnet>/24",
  "IpsecConfig": {
    "IpsecPfs": "group2",
    "IpsecEncAlg": "aes",
    "IpsecAuthAlg": "sha1",
    "IpsecLifetime": 86400
  },
  "Local": "<peer public IP>",
  "Remote": "<Aliyun-side public IP>",
  "IkeConfig": {
    "IkeAuthAlg": "sha1",
    "LocalId": "<peer VM private IP>",
    "IkeEncAlg": "aes256",
    "IkeVersion": "ikev1",
    "IkeMode": "aggressive",
    "IkeLifetime": 86400,
    "RemoteId": "<Aliyun-side public IP>",
    "Psk": "g24J$%#$",
    "IkePfs": "group2"
  }
}

 

config setup
     uniqueids=no
conn %default
     authby=psk
     type=tunnel
conn tomyidc
     keyexchange=ikev1
     left=<peer VM private IP>
     leftsubnet=<local private subnet>/24
     leftid=<peer VM private IP>
     right=<Aliyun-side public IP>
     rightsubnet=<Aliyun private subnet>/24
     rightid=<Aliyun-side public IP>
     auto=route
     ike=aes256-sha1-modp1024
     ikelifetime=86400s
     esp=aes-sha1-modp1024
     lifetime=86400s
     type=tunnel
     aggressive=yes
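With the config in place, the tunnel can be loaded and checked with the standard ipsec wrapper commands; the status listing below is the kind of output ipsec statusall produces:

ipsec update      # re-read ipsec.conf
ipsec up tomyidc  # initiate the connection
ipsec statusall   # inspect the result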

 

Listening IP addresses:
  <peer VM private IP>
Connections:
     tomyidc:  <peer VM private IP>...<Aliyun-side public IP>  IKEv1 Aggressive
     tomyidc:   local:  [<peer VM private IP>] uses pre-shared key authentication
     tomyidc:   remote: [<Aliyun-side public IP>] uses pre-shared key authentication
     tomyidc:   child:  <peer private subnet>/24 === <Aliyun private subnet>/24 TUNNEL
Routed Connections:
     tomyidc{1}:  ROUTED, TUNNEL, reqid 1
     tomyidc{1}:   <peer private subnet>/24 === <Aliyun private subnet>/24
Security Associations (1 up, 0 connecting):
     tomyidc[1]: ESTABLISHED 4 minutes ago, <peer VM private IP>[<peer VM private IP>]...<Aliyun-side public IP>[<Aliyun-side public IP>]
     tomyidc[1]: IKEv1 SPIs: 13f2e09ad624bad8_i* af1d8f540aef12d3_r, pre-shared key reauthentication in 23 hours
     tomyidc[1]: IKE proposal: AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024
     tomyidc{2}:  INSTALLED, TUNNEL, reqid 1, ESP in UDP SPIs: ce59cad4_i c0ed3fcf_o
     tomyidc{2}:  AES_CBC_128/HMAC_SHA1_96/MODP_1024, 0 bytes_i, 4272 bytes_o (60 pkts, 200s ago), rekeying in 23 hours
     tomyidc{2}:   <peer private subnet>/24 === <Aliyun private subnet>/24
[root@hk-cdn-server-ipsecvpn-001 strongswan]#

https://www.strongswan.org/testing/testresults/ikev1/net2net-psk/moon.statusall

 

A problem hit along the way:

"Error writing to socket: Invalid argument".

Cause: the left* settings must use the VM's own private IP, not the public IP.