
k8s Pod template


Pod template

apiVersion: v1              #Required. API version, e.g. v1; must appear in the output of kubectl api-versions
kind: Pod                   #Required. Resource kind: Pod
metadata:                   #Required. Metadata
  name: string              #Required. Pod name
  namespace: string         #Namespace the Pod belongs to; defaults to "default"
  labels:                   #Custom labels (a key/value map)
    name: string            #Custom label key: value
  annotations:              #Custom annotations (a key/value map)
    name: string
spec:                       #Required. Detailed definition of the containers in the Pod
  containers:               #Required. List of containers in the Pod
  - name: string            #Required. Container name; must conform to RFC 1035
    image: string           #Required. Container image name
    imagePullPolicy: [ Always|Never|IfNotPresent ]  #Image pull policy: Always pulls every time; IfNotPresent prefers a local image and pulls only if absent; Never uses local images only
    command: [string]       #Startup command list; defaults to the command baked into the image if omitted
    args: [string]          #Arguments for the startup command
    workingDir: string      #Container working directory
    volumeMounts:           #Volumes mounted into the container
    - name: string          #Name of a shared volume defined at the Pod level; must match a name in the volumes[] section
      mountPath: string     #Absolute mount path inside the container; should be shorter than 512 characters
      readOnly: boolean     #Whether the mount is read-only
    ports:                  #List of ports to expose
    - name: string          #Port name
      containerPort: int    #Port the container listens on
      hostPort: int         #Port the host listens on; defaults to the same as containerPort
      protocol: string      #Port protocol, TCP or UDP; defaults to TCP
    env:                    #Environment variables to set before the container runs
    - name: string          #Variable name
      value: string         #Variable value
    resources:              #Resource limits and requests
      limits:               #Resource limits
        cpu: string         #CPU limit in cores; maps to the docker run --cpu-shares parameter
        memory: string      #Memory limit, in units such as Mi/Gi; maps to the docker run --memory parameter
      requests:             #Resource requests
        cpu: string         #CPU request; the initial amount available when the container starts
        memory: string      #Memory request; the initial amount available when the container starts
    livenessProbe:          #Health check for each container in the Pod; the container is restarted automatically after the probe fails several times. Probe methods are exec, httpGet and tcpSocket; configure only one per container
      exec:                 #exec-style probe
        command: [string]   #Command or script the exec probe runs
      httpGet:              #httpGet-style probe; path and port are required
        path: string
        port: number
        host: string
        scheme: string
        httpHeaders:
        - name: string
          value: string
      tcpSocket:            #tcpSocket-style probe
        port: number
      initialDelaySeconds: 0  #Seconds after container start before the first probe
      timeoutSeconds: 0       #Seconds to wait for a probe response before timing out; defaults to 1
      periodSeconds: 0        #Probe interval in seconds; defaults to 10
      successThreshold: 0
      failureThreshold: 0
    securityContext:
      privileged: false
  restartPolicy: [Always | Never | OnFailure]  #Pod restart policy: Always restarts the container no matter how it exits; OnFailure restarts only on a non-zero exit code; Never never restarts it
  nodeSelector: object      #Schedules the Pod onto nodes carrying these labels, given as key: value pairs
  imagePullSecrets:         #Secrets used when pulling images, given by name
  - name: string
  hostNetwork: false        #Whether to use host networking; defaults to false; true means use the host's network namespace
  volumes:                  #Shared volumes defined at the Pod level (many volume types exist)
  - name: string            #Shared volume name
    emptyDir: {}            #emptyDir volume: a temporary directory sharing the Pod's lifetime; its value is an empty object
    hostPath:               #hostPath volume: mounts a directory from the Pod's host
      path: string          #Directory on the host to mount into the container
    secret:                 #secret volume: mounts a pre-defined secret object in the cluster into the container
      secretName: string
      items:
      - key: string
        path: string
    configMap:              #configMap volume: mounts a pre-defined configMap object into the container
      name: string
      items:
      - key: string
        path: string
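
As a minimal, hedged instantiation of the template above (the image and probe values are illustrative only, not from the original post):

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
  - name: demo
    image: nginx:1.7.9          # any web image works; nginx:1.7.9 appears later in this post
    ports:
    - containerPort: 80
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 200m
        memory: 128Mi
    livenessProbe:              # httpGet probe; the container restarts if / stops answering
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
  restartPolicy: Always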

Learn-at-a-glance series: k8s exercise 3 - scaling Pods up and down


Scaling Pods up and down

[root@k8s-master yaml]# kubectl get rc
NAME              DESIRED   CURRENT   READY   AGE
frontend-rc       3         3         3       18h
redis-master-rc   1         1         1       35h
redis-slave-rc    2         2         2       35h
[root@k8s-master yaml]# kubectl scale rc frontend-rc --replicas=4
replicationcontroller/frontend-rc scaled
[root@k8s-master yaml]# kubectl get rc
NAME              DESIRED   CURRENT   READY   AGE
frontend-rc       4         4         4       18h
redis-master-rc   1         1         1       35h
redis-slave-rc    2         2         2       35h
[root@k8s-master yaml]# kubectl get rc,pod
NAME                                    DESIRED   CURRENT   READY   AGE
replicationcontroller/frontend-rc       4         4         4       18h
replicationcontroller/redis-master-rc   1         1         1       35h
replicationcontroller/redis-slave-rc    2         2         2       35h

NAME                        READY   STATUS    RESTARTS   AGE
pod/frontend-rc-2h62f       1/1     Running   0          18h
pod/frontend-rc-5dwk2       1/1     Running   0          18h
pod/frontend-rc-dmxp8       1/1     Running   0          18h
pod/frontend-rc-flg9m       1/1     Running   0          9s
pod/redis-master-rc-jrrgx   1/1     Running   0          35h
pod/redis-slave-rc-f9svq    1/1     Running   0          23h
pod/redis-slave-rc-p6kbq    1/1     Running   0          35h
[root@k8s-master yaml]#

 

[root@k8s-master yaml]# kubectl scale rc frontend-rc --replicas=2
replicationcontroller/frontend-rc scaled
[root@k8s-master yaml]#
[root@k8s-master yaml]#
[root@k8s-master yaml]# kubectl get rc,pod
NAME                                    DESIRED   CURRENT   READY   AGE
replicationcontroller/frontend-rc       2         2         2       18h
replicationcontroller/redis-master-rc   1         1         1       35h
replicationcontroller/redis-slave-rc    2         2         2       35h

NAME                        READY   STATUS    RESTARTS   AGE
pod/frontend-rc-5dwk2       1/1     Running   0          18h
pod/frontend-rc-dmxp8       1/1     Running   0          18h
pod/redis-master-rc-jrrgx   1/1     Running   0          35h
pod/redis-slave-rc-f9svq    1/1     Running   0          23h
pod/redis-slave-rc-p6kbq    1/1     Running   0          35h
[root@k8s-master yaml]#
[root@k8s-master yaml]#

[root@k8s-master hpa]# kubectl autoscale rc hpa-apache-rc --min=1 --max=10 --cpu-percent=50
horizontalpodautoscaler.autoscaling/hpa-apache-rc autoscaled
[root@k8s-master hpa]# kubectl get rc,hpa
NAME                                    DESIRED   CURRENT   READY   AGE
replicationcontroller/frontend-rc       2         2         2       21h
replicationcontroller/hpa-apache-rc     1         1         1       113m
replicationcontroller/redis-master-rc   1         1         1       37h
replicationcontroller/redis-slave-rc    2         2         2       37h

NAME                                                REFERENCE                             TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/hpa-apache-rc   ReplicationController/hpa-apache-rc   <unknown>/50%   1         10        0          5s
[root@k8s-master hpa]#

Exec into the busybox pod to run the load test
[root@k8s-master hpa]# kubectl exec -it busybox-pod sh

/ # while true; do wget -q -O-  http://hpa-apache-svc > /dev/null;done

[root@k8s-master ~]# kubectl get rc,hpa
NAME                                    DESIRED   CURRENT   READY   AGE
replicationcontroller/frontend-rc       2         2         2       21h
replicationcontroller/hpa-apache-rc     3         3         3       128m
replicationcontroller/redis-master-rc   1         1         1       38h
replicationcontroller/redis-slave-rc    2         2         2       37h

NAME                                                REFERENCE                             TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/hpa-apache-rc   ReplicationController/hpa-apache-rc   122%/50%   1         10        3          15m
[root@k8s-master ~]#

Stabilizes at 3 Pods; CPU recovers to 44%
[root@k8s-master ~]# kubectl get rc,hpa
NAME                                    DESIRED   CURRENT   READY   AGE
replicationcontroller/frontend-rc       2         2         2       21h
replicationcontroller/hpa-apache-rc     3         3         3       148m
replicationcontroller/redis-master-rc   1         1         1       38h
replicationcontroller/redis-slave-rc    2         2         2       38h

NAME                                                REFERENCE                             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/hpa-apache-rc   ReplicationController/hpa-apache-rc   44%/50%   1         10        3          35m
[root@k8s-master ~]#


Some time after the load test stops, the Pods scale back down to one
[root@k8s-master hpa]# kubectl get rc,hpa,svc
NAME                                    DESIRED   CURRENT   READY   AGE
replicationcontroller/hpa-apache-rc     1         1         1       159m

NAME                                                REFERENCE                             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/hpa-apache-rc   ReplicationController/hpa-apache-rc   0%/50%    1         10        1          45m

NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/hpa-apache-svc   ClusterIP   10.100.27.38     <none>        80/TCP         157m

Autoscaling with a YAML file

[root@k8s-master hpa]# kubectl create -f hpa-apache-autoscale.yaml
horizontalpodautoscaler.autoscaling/hpa-apache-autoscale created
[root@k8s-master hpa]# kubectl get hpa
NAME                   REFERENCE                                        TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
hpa-apache-autoscale   ReplicationController/hpa-apache-autoscale-pod   <unknown>/50%   1         10        0          9s
[root@k8s-master hpa]#
Right after creation the target shows <unknown>; that is fine, it resolves after a while.
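
The <unknown> clears once the metrics pipeline (heapster on clusters of this vintage, metrics-server on newer ones) starts reporting CPU usage; assuming one of them is installed, standard kubectl commands let you check:

kubectl top pod                       # per-pod CPU/memory once metrics flow
kubectl describe hpa hpa-apache-rc    # the Events section explains a lingering <unknown>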

[root@k8s-master ~]# kubectl get hpa    #CPU metrics are now being read
NAME                   REFERENCE                             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-apache-autoscale   ReplicationController/hpa-apache-rc   0%/50%    1         10        1          53s

Continue the test
[root@k8s-master hpa]# kubectl exec -it busybox-pod sh
#The address here uses the service name (http://<service-name>); an IP plus port would also work
/ # while true;do wget -q -O-  http://hpa-apache-svc > /dev/null;done

CPU usage climbed and the HPA scaled out to 3 Pods to handle it
[root@k8s-master ~]# kubectl get hpa
NAME                   REFERENCE                             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-apache-autoscale   ReplicationController/hpa-apache-rc   44%/50%   1         10        3          4m48s

 

Delete the autoscaler
[root@k8s-master hpa]# kubectl get hpa
NAME            REFERENCE                             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
hpa-apache-rc   ReplicationController/hpa-apache-rc   0%/50%    1         10        1          51m

[root@k8s-master hpa]# kubectl delete hpa hpa-apache-rc
horizontalpodautoscaler.autoscaling "hpa-apache-rc" deleted

[root@k8s-master hpa]# kubectl get hpa
No resources found.
[root@k8s-master hpa]#

The config files are as follows
[root@k8s-master hpa]# tree .
.
├── busybox-pod.yaml
├── hpa-apache-autoscale.yaml
├── hpa-apache-rc.yaml
└── hpa-apache-svc.yaml

├── busybox-pod.yaml
apiVersion: v1                                                                             
kind: Pod
metadata:                                                                                  
  name: busybox-pod                                                                  
spec:                                                                                      
  containers:
    - name: busybox
      image: busybox
      command: [ "sleep", "3600" ]
     
├── hpa-apache-autoscale.yaml

apiVersion: autoscaling/v1                                                                     
kind: HorizontalPodAutoscaler
metadata:                                                                                  
  name: hpa-apache-autoscale                                                                      
spec:                                                                                      
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: hpa-apache-rc                                                 
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
 
├── hpa-apache-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: hpa-apache-rc
spec:
  replicas: 1
  template:
    metadata:
      name: hpa-apache-lb
      labels:
        name: hpa-apache-lb
    spec:
     containers:
     - name: hpa-apache-ctn
       image: reg.ccie.wang/test/ubuntu:apache2.4.29
       resources:
         requests:
           cpu: 200m
       ports:
       - containerPort: 80
      
└── hpa-apache-svc.yaml
apiVersion: v1                                                                             
kind: Service
metadata:                                                                                  
  name: hpa-apache-svc                                                                       
spec:
  ports:
    - port: 80
  selector:
    name: hpa-apache-lb   #assumed: must match the pod label in hpa-apache-rc.yaml, otherwise the service has no endpoints

Kubernetes (k8s): an introduction to several storage types - EmptyDir, HostPath, ConfigMap and Secret


By default, everything a running container writes to its filesystem goes into the writable layer of its layered filesystem, and once the container stops, all of those writes are discarded. Persistence therefore needs explicit support.

Kubernetes provides storage support through Volumes. Below is a brief explanation of some common storage concepts.

EmptyDir

As the name implies, an EmptyDir is an empty directory whose lifecycle exactly matches that of its Pod. Readers may wonder what it is for: EmptyDir lets different containers within the same Pod share files produced while they run.

By default, EmptyDir is backed by the host's disk. You can also set the emptyDir.medium field to Memory for speed, but then the volume's usage counts against the containers' memory quota.
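
The full example below uses the default disk-backed form; the Memory variant mentioned above differs only in the volume definition, roughly like this sketch:

  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory    # tmpfs-backed; usage counts against the containers' memory quota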

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
HostPath

This type mounts a specified path from the host into the container; of course, if the Pod is recreated on a different host, its contents are no longer guaranteed.

Such volumes are typically paired with a DaemonSet to operate on host files. For example, FluentD in an EFK logging stack uses this approach, mounting the host's container-log directory so it can collect all logs on that host (a DaemonSet sketch follows the basic example below).

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
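
A hedged sketch of the DaemonSet pairing mentioned above; the fluentd image tag and host log path are assumptions, not from this post:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      name: log-collector
  template:
    metadata:
      labels:
        name: log-collector
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.3      # assumed image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log                # one copy runs per node, each reading its own host's logs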
NFS/GlusterFS/CephFS/AWS/GCE and so on

For a container cluster, network storage support is naturally crucial, and Kubernetes supports a large number of cloud providers and network storage solutions.

The setup differs between them; GlusterFS, for example, needs an Endpoint created, while Ceph/NFS are less fussy.

For the individual configurations, see the reference documentation.
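
As one hedged example, an NFS volume needs no Endpoint object, just a server address and exported path (both assumed here):

  volumes:
  - name: nfs-volume
    nfs:
      server: nfs.example.com    # assumed NFS server
      path: /exports/data        # assumed exported path
      readOnly: false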

ConfigMap and Secret

When using images, we often need configuration files, startup scripts and the like to influence how a container runs. With only a little configuration, environment variables suffice; for more complex configurations, such as Apache, that approach becomes hard to manage. Exposing sensitive information in YAML is also inappropriate.

Besides being consumed as files, ConfigMap and Secret can be used in other ways; here we only cover the file approach.

For example, the Pod below loads a configuration directory stored in a ConfigMap into a volume.

apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
      - name: config-volume
        mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config
  restartPolicy: Never
Note that the ConfigMap is mapped in as a directory: each ConfigMap key becomes a file name and each value becomes the file contents. For example, the following command creates a ConfigMap from a directory:

kubectl create configmap \
    game-config \
    --from-file=docs/user-guide/configmap/kubectl
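To verify what was created, the standard kubectl inspection commands work:

kubectl get configmap game-config -o yaml    # each file name is a key, each file's contents the value
kubectl describe configmap game-config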
Create a Secret:

kubectl create secret generic \
    db-user-pass --from-file=./username.txt \
    --from-file=./password.txt
Mount the Secret with a Volume:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: myns
spec:
  containers:
    - name: mypod
      image: redis
      volumeMounts:
        - name: foo
          mountPath: /etc/foo
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: mysecret
As you can see, Secrets and ConfigMaps are created and used in very similar ways. Under RBAC, Secrets and ConfigMaps can be granted permissions separately, limiting what operators can see and change.
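
ConfigMaps (and Secrets, via secretKeyRef) can also be consumed as environment variables instead of files; a minimal sketch, assuming the special-config ConfigMap above contains a key named special.how:

apiVersion: v1
kind: Pod
metadata:
  name: env-test-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        - name: SPECIAL_HOW            # hypothetical variable name
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: special.how         # assumed key; adjust to your ConfigMap
  restartPolicy: Never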

PV & PVC

PersistentVolume and PersistentVolumeClaim abstract storage support and draw a boundary between infrastructure and applications: administrators create a series of PVs that provide storage, then offer applications PVCs; an application only needs to claim a PVC to get access.

Since 1.5 there has also been dynamic provisioning of PVs, letting you create a PVC directly without the PV step.
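
A minimal PVC sketch under that dynamic-provisioning model, assuming a StorageClass named standard exists in the cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard    # assumed StorageClass name
  resources:
    requests:
      storage: 1Gi

A Pod then mounts it through a volume entry of the form persistentVolumeClaim: { claimName: my-claim }.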

Original source: fleeto -> http://blog.fleeto.us/content/kubernetes-zhong-de-ji-chong-cun-chu

Kubernetes Volume types and yaml examples - emptyDir


Kubernetes Volume types and yaml examples - emptyDir (local data volume)

Notes
An EmptyDir volume is created when its Pod is scheduled onto a host, and all containers in the same Pod can read and write the same files inside it. Once the Pod leaves that host, the data in the EmptyDir is deleted permanently, so EmptyDir volumes currently serve mainly as scratch space, e.g. a temporary directory for a web server's logs or tmp files.
Hands-on: a standard single-container Pod using a shared volume
#Create the yaml file
cat > emptyDir.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: test-emptypath
    role: master
  name: test-emptypath
  namespace: test
spec:
  containers:
  - name: test-emptypath
    image: nginx:1.7.9
    volumeMounts:
    - name: log-storage
      mountPath: /tmp/
  volumes:
  - name: log-storage
    emptyDir: {}
EOF
#Create the Pod from emptyDir.yaml
kubectl create -f ./emptyDir.yaml
#Check the Pod status
kubectl get po -n test
NAME                         READY     STATUS    RESTARTS   AGE
test-emptypath               1/1       Running   0          3h
##Note: when the Pod is assigned to a node, the emptyDir volume is created first, and it
##exists for as long as the Pod runs on that node. As the name says, it starts out empty.
Hands-on: a standard multi-container Pod using a shared volume
#Create the yaml file
cat > emptyDir2.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: datagrand
  namespace: test
spec:
  containers:
  - name: test1
    image: nginx:1.7.9
    volumeMounts:
    - name: log-storage
      mountPath: /usr/share/nginx/html
  - name: test2
    image: centos
    volumeMounts:
    - name: log-storage
      mountPath: /html
    command: ["/bin/sh","-c"]
    args:
      - while true; do
          date >> /html/index.html;
          sleep 1;
        done
  volumes:
  - name: log-storage
    emptyDir: {}
EOF
##Note: this example defines a volume named log-storage. Its type is emptyDir: the volume
##is created when the Pod is assigned to a node and exists for as long as the Pod runs
##there; as the name says, it starts out empty. The first container runs an nginx server
##and mounts the shared volume at /usr/share/nginx/html. The second container uses the
##centos image and mounts the shared volume at /html; every second it appends the current
##date and time to index.html on the shared volume. When a user sends an HTTP request to
##the Pod, nginx reads that file and returns it in the response.
#Apply the yaml
kubectl create -f ./emptyDir2.yaml
#Check the Pod status
kubectl get po -n test
NAME                         READY     STATUS    RESTARTS   AGE
datagrand                    2/2       Running   0          22m
#Enter container test1
kubectl exec -it datagrand -c test1 /bin/bash -n test
root@datagrand:/# cd /usr/share/nginx/html
root@datagrand:/usr/share/nginx/html# ls
index.html
##Append some content
root@datagrand:/usr/share/nginx/html# echo "this is a test" >> index.html
#Enter container test2
kubectl exec -it datagrand -c test2 /bin/bash -n test
[root@datagrand /]# cd html
[root@datagrand html]# ls
index.html
[root@datagrand html]# cat index.html
this is a test
##The emptyDir volume is shared by the two containers (test1 and test2)
Reference
https://www.kubernetes.org.cn/2767.html

Cephfs & Ceph RBD in k8s: applicable scenarios and database benchmarking


 

Earlier tests found that cephfs small-file read/write performance is mediocre and its write latency is on the high side, which is not fully satisfying, though it is adequate for everyday application workloads. But can it meet the performance demands of a database? This post, in combination with kubernetes, benchmarks the two ceph storage interfaces, cephfs and ceph rbd, against a database workload.

 

Applicable scenarios
Cephfs:
Pros:
1. Low read latency and good I/O bandwidth, especially for files with larger block sizes
2. High flexibility; supports all k8s access modes
Cons:
1. Relatively high and unstable write latency
Applicable scenarios:
Suited to workloads that need flexibility (k8s multi-node mounting) and are not very latency-sensitive, plus non-massive small-file storage, e.g. as the mounted storage backend for common applications/middleware.

Ceph RBD:
Pros:
1. Good I/O bandwidth
2. Low read and write latency
3. Supports image snapshots and image export
Cons:
1. No multi-node mounting
Applicable scenarios:
Applications with high I/O bandwidth and latency requirements and no need for multiple nodes to read/write the same data simultaneously, e.g. databases.

The test method is described in the article linked above; here are the results:

Result analysis:

The ssd raid unquestionably performs best
ceph rbd database qps/tps reaches 60%-70% of ssd raid
Because of cephfs's unstable write latency, a tiny fraction of operations during the benchmark had very long response times, dragging down the overall qps/tps
The hdd qps/tps results are middling; its minimum response time is higher than the other three, but its maximum is not very high either. The mechanical medium, however, means seek time grows with load, so performance degrades linearly.
---------------------
Author: ywq935
Source: CSDN
Original: https://blog.csdn.net/ywq935/article/details/82895732
Copyright notice: this is the author's original post; please include a link to it when republishing!

Learn-at-a-glance series: k8s exercise 2 - deploying a php & redis guestbook on k8s


k8s-master 192.168.10.68
k8s-node1  192.168.10.71

This article demonstrates using kubernetes with the kubeguide images to build a guestbook system based on php and redis.
Container roles

The frontend is php; it reads data from the backend and displays it
The backend is redis (one master, 2 slaves) providing the data

Download the images online and push them into a local registry for unified management

# docker pull kubeguide/redis-master
# docker pull kubeguide/guestbook-php-frontend
# docker pull kubeguide/guestbook-redis-slave
# docker tag kubeguide/redis-master reg.jdccie.com/redis-master
# docker tag kubeguide/guestbook-php-frontend reg.jdccie.com/guestbook-php-frontend
# docker tag kubeguide/guestbook-redis-slave reg.jdccie.com/guestbook-redis-slave

#First log in to the registry
docker login -u <user> -p <password> reg.jdccie.wang
# docker push reg.jdccie.com/redis-master
# docker push reg.jdccie.com/guestbook-php-frontend
# docker push reg.jdccie.com/guestbook-redis-slave

File layout
[root@k8s-master yaml]# tree redis_web/
redis_web/
├── frontend-rc.yaml
├── frontend-svc.yaml
├── redis-master-rc.yaml
├── redis-master-svc.yaml
├── redis-slave-rc.yaml
└── redis-slave-svc.yaml

[root@k8s-master yaml]# vim /root/login.sh
[root@k8s-master yaml]# cat redis_web/*

├── frontend-rc.yaml configuration

apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-rc
  labels:
    name: frontend-pod-lb
spec:
  replicas: 3
  selector:
    name: frontend-pod-lb
  template:
    metadata:
      labels:
        name: frontend-pod-lb
    spec:
     containers:
     - name: frontend-name
       image: reg.ccie.wang/test/guestbook-php-frontend:latest
       ports:
       - containerPort: 80
       env:
       - name: GET_HOSTS_FROM
         value: "env"
        
├── frontend-svc.yaml configuration
apiVersion: v1                                                                             
kind: Service
metadata:                                                                                  
  name: frontend-svc                                                                              
  labels:
    name: frontend-pod-lb
spec:                                                                                      
  type: NodePort
  ports:
    - port: 80
      nodePort: 30011
  selector:
    name: frontend-pod-lb

├── redis-master-rc.yaml configuration

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master-rc
  labels:
    names: redis-master-lb
spec:
  replicas: 1
  selector:
    name: redis-master-lb
  template:
    metadata:
      labels:
        name: redis-master-lb
    spec:
     containers:
     - name: master
       image: kubeguide/redis-master
       ports:
       - containerPort: 6379
      
├── redis-master-svc.yaml configuration
apiVersion: v1                                                                             
kind: Service
metadata:                                                                                  
  name: redis-master                                                                              
  labels:
    name: redis-master-lb
spec:                                                                                      
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    name: redis-master-lb
   
├── redis-slave-rc.yaml configuration
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-slave-rc
  labels:
    names: redis-slave-lb
spec:
  replicas: 2
  selector:
    name: redis-slave-lb
  template:
    metadata:
      labels:
        name: redis-slave-lb
    spec:
     containers:
     - name: slave
       image: reg.ccie.wang/test/guestbook-redis-slave:latest
       env:
       - name: GET_HOSTS_FROM
         value: env
       ports:
       - containerPort: 6379
      
└── redis-slave-svc.yaml configuration

apiVersion: v1                                                                             
kind: Service
metadata:                                                                                  
  name: redis-slave                                                                              
  labels:
    name: redis-slave-lb
spec:                                                                                      
  ports:
    - port: 6379
  selector:
    name: redis-slave-lb

Create the corresponding resources
redis-master
  940  kubectl create -f redis-master-rc.yaml
  941  kubectl create -f redis-master-svc.yaml
redis-slave
  948  kubectl create -f redis-slave-rc.yaml
  958  kubectl create -f redis-slave-svc.yaml
frontend
1011  kubectl create -f frontend-rc.yaml
1030  kubectl create -f frontend-svc.yaml

Delete resources
kubectl delete -f <the corresponding yaml>
Remember to verify with kubectl get rc,pod,svc

Verify that the rc, pods and svc started correctly
[root@k8s-master yaml]# kubectl get rc,pod,svc

#Healthy when DESIRED/CURRENT/READY all agree and match the config file

NAME                                    DESIRED   CURRENT   READY   AGE
replicationcontroller/frontend-rc       3         3         3       81m
replicationcontroller/redis-master-rc   1         1         1       17h
replicationcontroller/redis-slave-rc    2         2         2       17h

#All Running means healthy

NAME                        READY   STATUS    RESTARTS   AGE
pod/frontend-rc-2h62f       1/1     Running   0          81m
pod/frontend-rc-5dwk2       1/1     Running   0          81m
pod/frontend-rc-dmxp8       1/1     Running   0          81m
pod/redis-master-rc-jrrgx   1/1     Running   0          17h
pod/redis-slave-rc-f9svq    1/1     Running   0          6h12m
pod/redis-slave-rc-p6kbq    1/1     Running   0          17h

#Confirm TYPE and PORT(S) match the config
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/frontend-svc   NodePort    10.100.151.156   <none>        80:30011/TCP   81m
service/redis-master   ClusterIP   10.100.215.236   <none>        6379/TCP       17h
service/redis-slave    ClusterIP   10.100.178.103   <none>        6379/TCP       17h

Check whether redis-master replication is healthy
grep -A 5 "keyword"   #show the 5 lines after the keyword

[root@k8s-master yaml]# kubectl exec redis-master-rc-jrrgx redis-cli info |grep -A 5 "Replication"
# Replication
role:master
connected_slaves:2   #two slaves connected and healthy
slave0:ip=10.244.1.108,port=6379,state=online,offset=82615,lag=0
slave1:ip=10.244.1.110,port=6379,state=online,offset=82615,lag=0
master_repl_offset:82615

Check whether the slave's environment variables picked up the MASTER info
[root@k8s-master yaml]# kubectl exec redis-slave-rc-f9svq env |grep MASTER
REDIS_MASTER_PORT_6379_TCP=tcp://10.100.215.236:6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.100.215.236
REDIS_MASTER_PORT=tcp://10.100.215.236:6379
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_SERVICE_HOST=10.100.215.236
REDIS_MASTER_PORT_6379_TCP_PORT=6379
[root@k8s-master yaml]#

 

Check which node the frontend runs on
[root@k8s-master yaml]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
frontend-rc-px989       1/1     Running   0          145m    10.244.1.114   k8s-node1   <none>           <none>
frontend-rc-q5mk4       1/1     Running   0          145m    10.244.1.115   k8s-node1   <none>           <none>
frontend-rc-qh466       1/1     Running   0          145m    10.244.1.116   k8s-node1   <none>           <none>
redis-master-rc-jrrgx   1/1     Running   0          16h     10.244.1.107   k8s-node1   <none>           <none>
redis-slave-rc-f9svq    1/1     Running   0          4h44m   10.244.1.110   k8s-node1   <none>           <none>
redis-slave-rc-p6kbq    1/1     Running   0          16h     10.244.1.108   k8s-node1   <none>           <none>
[root@k8s-master yaml]#

Verify the service using node1 and the IP just looked up; being able to reach it and submit data is enough
[root@k8s-node1 ~]# curl 192.168.10.69:30011
<html ng-app="redis">
  <head>
    <title>Guestbook</title>
    <link rel="stylesheet" href="bootstrap.min.css">
    <script src="angular.min.js"></script>
    <script src="controllers.js"></script>
    <script src="ui-bootstrap-tpls.js"></script>
  </head>

Finally, you can go into redis to verify the data matches what the web UI shows

1. Enter the redis-master container
[root@k8s-master yaml]#  kubectl exec -it redis-master-rc-jrrgx /bin/bash
2. Start redis-cli
[ root@redis-master-rc-jrrgx:/data ]$ redis-cli
3. List all keys
127.0.0.1:6379> keys *
1) "messages"
4. Get the value of key "messages"
127.0.0.1:6379> get messages
"Hello World!,hhh,gegrgr"
127.0.0.1:6379> get messages
"Hello World!,hhh,gegrgr,frr"  #it grew, so writes work
127.0.0.1:6379>

 

----------Errors

[root@k8s-master yaml]# kubectl logs frontend-rc-2h62f
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.244.1.118. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.244.1.118. Set the 'ServerName' directive globally to suppress this message
[Tue Mar 26 08:34:03.926249 2019] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.10 (Debian) PHP/5.6.12 configured -- resuming normal operations
[Tue Mar 26 08:34:03.926296 2019] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'

To investigate, exec into the container
[root@k8s-master yaml]# kubectl exec -it frontend-rc-2h62f /bin/bash
root@frontend-rc-2h62f:/var/www/html#
root@frontend-rc-2h62f:/var/www/html#
root@frontend-rc-2h62f:/var/www/html# find / -name httpd.conf
root@frontend-rc-2h62f:/var/www/html#

It does no harm and can be ignored; to actually fix it, just add a ServerName directive (a sketch follows below)
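
A hedged one-liner for that fix, assuming the Debian-style apache2 layout this image uses:

root@frontend-rc-2h62f:/var/www/html# echo "ServerName localhost" >> /etc/apache2/apache2.conf
root@frontend-rc-2h62f:/var/www/html# apachectl -k graceful    # reload apache so the directive takes effect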

kubernetes installation, configuration, deployment and testing


https://feisky.gitbooks.io/kubernetes/  guide

https://www.kubernetes.org.cn/kubernetes-pod  pod docs (Chinese)

https://github.com/kubernetes/dashboard/

http://blog.51cto.com/douya/1945382  getting started

----------------------Configuration start---------------------------------------

Step 1: install the components
Master node:
systemctl stop firewalld && sudo systemctl disable firewalld
yum install -y kubernetes etcd docker flannel    
 
Node:
systemctl stop firewalld && sudo systemctl disable firewalld
yum install -y kubernetes  docker flannel

Node
Services to run
 
 
 
—–Master
etcd
kube-apiserver
kube-controller-manager
kube-scheduler
kube-proxy
kubelet
docker
flanneld
 
 
—-node
flanneld
docker
kube-proxy
kubelet

Master:
hostnamectl set-hostname k8s_master
vi /etc/hosts
192.168.142.128   k8s_master
192.168.142.138   k8s_node1

 
etcd configuration
vi /etc/etcd/etcd.conf 
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS=http://localhost:2379
 
apiserver configuration
vi /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"   (the apiserver's insecure bind address)
KUBE_API_PORT="--port=8080"                          (the apiserver's insecure port)
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.142.128:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.142.0/24" (same subnet as the VMs)
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS=""
 
Kubelet configuration
vi /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=192.168.142.128"
KUBELET_API_SERVER="--api-servers=http://192.168.142.128:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
 
config file
vi /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.142.128:8080"
 
scheduler and proxy are not used yet, so no configuration is needed
 
flannel configuration
vi /etc/sysconfig/flanneld 
FLANNEL_ETCD="http://192.168.142.128:2379"
FLANNEL_ETCD_KEY="/atomic.io/network"
etcdctl set modifies, get queries. Whether modifying or creating, the key path must be complete (the flannel key /atomic.io/network/config here), otherwise startup fails.
Add the network:
systemctl enable etcd.service
systemctl start etcd.service
etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'   #create
etcdctl rm /atomic.io/network/config                                 #delete (rm takes only the key, no value)
 
Start the Master services:
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler   kube-proxy  kubelet docker flanneld   ; do systemctl restart   $SERVICES; systemctl enable $SERVICES; systemctl status $SERVICES; done;
 
node configuration:
 
hostnamectl set-hostname k8s_node1/2
 
Kubelet configuration
vi   /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=192.168.142.138"   (the node's own IP)
KUBELET_API_SERVER="--api-servers=http://192.168.142.128:8080"     (the master IP)
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=" "
 
config file
vi  /etc/kubernetes/config 
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.142.128:8080"
 
flannel configuration
vi  /etc/sysconfig/flanneld 
FLANNEL_ETCD="http://192.168.142.128:2379"
FLANNEL_ETCD_KEY="/atomic.io/network"
 
Start the node services
for SERVICES in kube-proxy kubelet docker flanneld; do
        systemctl restart $SERVICES
        systemctl enable $SERVICES
        systemctl status $SERVICES
    done;
 
 
Check that all nodes are healthy
kubectl -s 192.168.142.128:8080 get no
kubectl get nodes
 
 
Visit http://kube-apiserver:port
http://192.168.142.128:8080/        list all API urls
http://192.168.142.128:8080/healthz/ping      check health

---------------------Configuration end-----------------------------------

-------------------Troubleshooting and advanced usage below---------------------------

[root@k8s_master ~]# kubectl get namespaces  --------list all namespaces
NAME          STATUS    AGE
database      Active    23h
default       Active    1d
kube-system   Active    1d
[root@k8s_master ~]#

  kubectl get pod --all-namespaces   ---------list all pods
 

[root@k8s_master ~]#   kubectl get pod --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
database      httpdtest-1nv30                         1/1       Running   3          22h
database      mysqltest1-w7fjr                        1/1       Running   3          22h
database      nginxtest2-3247366770-2kqlc             1/1       Running   0          12h
database      nginxtest2-3247366770-5jfrv             1/1       Running   0          12h
database      nginxtest2-3247366770-6bs0t             1/1       Running   0          12h
database      redisteset-5b993                        1/1       Running   1          14h
database      wordpresstest-c7d70                     1/1       Running   3          22h
default       nginx-pod                               1/1       Running   3          1d
kube-system   helloworld-jv2lm                        1/1       Running   2          23h
kube-system   kubernetes-dashboard-1471901517-c57dj   1/1       Running   0          13h
[root@k8s_master ~]#

 
  kubectl get service --namespace=kube-system  -----list the services in a given namespace

[root@k8s_master ~]#   kubectl get service --namespace=database
NAME            CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
httpdtest       192.168.142.216   <pending>     80:31581/TCP     22h
mysqltest       192.168.142.136   <pending>     3306:32045/TCP   23h
mysqltest1      192.168.142.89    <pending>     3306:32052/TCP   22h
nginxtest2      192.168.142.235   <pending>     80:30498/TCP     12h
redisteset      192.168.142.80    <pending>     6379:30387/TCP   14h
wordpresstest   192.168.142.168   <pending>     80:32566/TCP     22h
[root@k8s_master ~]#

 
  kubectl get pods --namespace=kube-system  ----list the pods in a given namespace
 

[root@k8s_master ~]#   kubectl get pods --namespace=database
NAME                          READY     STATUS    RESTARTS   AGE
httpdtest-1nv30               1/1       Running   3          22h
mysqltest1-w7fjr              1/1       Running   3          22h
nginxtest2-3247366770-2kqlc   1/1       Running   0          12h
nginxtest2-3247366770-5jfrv   1/1       Running   0          12h
nginxtest2-3247366770-6bs0t   1/1       Running   0          12h
redisteset-5b993              1/1       Running   1          14h
wordpresstest-c7d70           1/1       Running   3          22h
[root@k8s_master ~]#

  kubectl get pods --namespace=kube-system
  kubectl get  -f kubernetes-dashboard.yaml
 
 
 
  Start on the master node

systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler   kube-proxy  kubelet docker flanneld   ;
do systemctl restart   $SERVICES; systemctl enable $SERVICES; systemctl status $SERVICES; done;

Start on each node

systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service

node startup

for SERVICES in kube-proxy kubelet docker flanneld; do
        systemctl restart $SERVICES
        systemctl enable $SERVICES
        systemctl status $SERVICES
    done;
   
   

-----------------Deploy nginx for testing-------------------
nginx-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

http://blog.csdn.net/u013760355/article/details/68061976   
[root@master ~]# kubectl create -f /opt/dockerconfig/nginx-pod.yaml
Error from server (ServerTimeout): error when creating "/opt/dockerconfig/nginx-pod.yaml": No API token found for service account "default", retry after the token is automatically created and added to the service account

The error comes from admission-control validation

[root@master ~]# vim /etc/kubernetes/apiserver

Remove the relevant setting
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"

[root@master ~]# systemctl restart kube-apiserver
[root@master ~]#
Fixed

[root@master ~]# kubectl create -f /opt/dockerconfig/nginx-pod.yaml
pod "nginx-pod" created
[root@master ~]#

But it stays stuck
[root@master ~]# kubectl get pods
NAME        READY     STATUS              RESTARTS   AGE
nginx-pod   0/1       ContainerCreating   0          12m
[root@master ~]# kubectl get service
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   192.168.142.1   <none>        443/TCP   1h
[root@master ~]#
The main tool is the "kubectl describe pod PodName" command, which shows the pod's events; the error message can be found in the event list
Check the status
[root@master ~]# kubectl get pods
NAME        READY     STATUS              RESTARTS   AGE
nginx-pod   0/1       ContainerCreating   0          12m
[root@master ~]# kubectl get service
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   192.168.142.1   <none>        443/TCP   1h
[root@master ~]# kubectl describe pod gninx
Error from server (NotFound): pods "gninx" not found
[root@master ~]# kubectl describe pod nginx
Name:        nginx-pod
Namespace:    default
Node:        192.168.142.138/192.168.142.138
Start Time:    Thu, 18 Jan 2018 08:39:59 -0500
Labels:        name=nginx-pod
Status:        Pending
IP:       
Controllers:    <none>
Containers:
  nginx:
    Container ID:       
    Image:            nginx
    Image ID:           
    Port:            80/TCP
    State:            Waiting
      Reason:            ContainerCreating
    Ready:            False
    Restart Count:        0
    Volume Mounts:        <none>
    Environment Variables:    <none>
Conditions:
  Type        Status
  Initialized     True
  Ready     False
  PodScheduled     True
No volumes.
QoS Class:    BestEffort
Tolerations:    <none>
Events:
  FirstSeen    LastSeen    Count    From                SubObjectPath Type        Reason        Message
  ———    ——–    —–    —-                ————- ——–    ——        ——-
  15m        15m        1    {default-scheduler }                  Normal        Scheduled    Successfully assigned nginx-pod to 192.168.142.138
  15m        4m        7    {kubelet 192.168.142.138}              Warning        FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request.  details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"

  14m    12s    64    {kubelet 192.168.142.138}        Warning    FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""

Ha, you can see what is going on:
Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""
[root@master ~]#

Pull manually
[root@master ~]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
Trying to pull repository registry.access.redhat.com/rhel7/pod-infrastructure …
open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory
[root@master ~]#

Fix
[root@master ~]# yum install *rhsm* -y         ----install

Installed:
  python-rhsm.x86_64 0:1.19.10-1.el7_4                                       python-rhsm-certificates.x86_64 0:1.19.10-1.el7_4                                     

Dependency Installed:
  python-dateutil.noarch 0:1.5-7.el7                                                                                                                                

Complete!
[root@master ~]#
[root@master ~]#
[root@master ~]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
Trying to pull repository registry.access.redhat.com/rhel7/pod-infrastructure …
latest: Pulling from registry.access.redhat.com/rhel7/pod-infrastructure

26e5ed6899db: Pulling fs layer
66dbe984a319: Pulling fs layer
^C38e7863e08: Pulling fs layer

Wait ten hours or so and it is done
[root@master ~]#   kubectl get pods
NAME        READY     STATUS    RESTARTS   AGE
nginx-pod   1/1       Running   0          11h

 

Create nginx-service.
[root@master dockerconfig]# kubectl create -f nginx-service.yaml
service "nginx-service" created
[root@master dockerconfig]# kubectl get -f nginx-service.yaml
NAME            CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-service   192.168.142.65   <nodes>       80:30001/TCP   8s
[root@master dockerconfig]#

Accessing port 30001 on node1 succeeds

------------Install the dashboard------------
http://docs.minunix.com/docker/kubernetes-dashboard.yaml  download
http://www.jb51.net/article/94343.htm  example

Delete the existing pod

[root@k8s_master dockerconfig]# kubectl get pods --all-namespaces  #confirm the pod name
NAMESPACE     NAME                                           READY     STATUS              RESTARTS   AGE
kube-system   kubernetes-dashboard-latest-3447225518-f39cr   0/1       ContainerCreating   0          9h

[root@k8s_master dockerconfig]# kubectl delete pod kubernetes-dashboard-latest-3447225518-f39cr --namespace=kube-system
pod "kubernetes-dashboard-latest-3447225518-f39cr" deleted

curl -o kubernetes-dashboard.yaml http://docs.minunix.com/docker/kubernetes-dashboard.yaml

For the image, first search with docker search kubernetes-dashboard and confirm the version, then
         docker pull xxxx
         docker images   to check the one you need, then write it into the config file

After downloading, edit:
   - --apiserver-host=http://192.168.142.128:8080  ## change this to your own kube-apiserver
   Image: docker.io/rainf/kubernetes-dashboard-amd64

[root@master dockerconfig]# kubectl create -f kubernetes-dashboard.yaml
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
[root@master dockerconfig]# kubectl get service
NAME            CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes      192.168.142.1    <none>        443/TCP        14h
nginx-service   192.168.142.65   <nodes>       80:30001/TCP   1h
[root@master dockerconfig]# kubectl get pods --namespace=kube-system
NAME                                           READY     STATUS             RESTARTS   AGE
kubernetes-dashboard-334721719-k4dn9           1/1       Running            0          42s   #this one is running
kubernetes-dashboard-latest-3447225518-gk7br   0/1       ImagePullBackOff   0          17m
[root@master dockerconfig]#

kubectl delete -f kubernetes-dashboard.yaml   deletes it; run if needed

View the dashboard info

[root@master dockerconfig]# kubectl describe pods kubernetes-dashboard-334721719-k4dn9  --namespace=kube-system
The command to view a pod's details
Name:        kubernetes-dashboard-334721719-k4dn9
Namespace:    kube-system
Node:        192.168.142.138/192.168.142.138
Start Time:    Thu, 18 Jan 2018 21:16:01 -0500
Labels:        app=kubernetes-dashboard
        pod-template-hash=334721719
Status:        Running
IP:        172.17.13.3
Controllers:    ReplicaSet/kubernetes-dashboard-334721719
Containers:
  kubernetes-dashboard:
    Container ID:    docker://e376142b498b342099a655bcb02ba03a95a049c693750d0a1e8c547f7d127206
    Image:        daocloud.io/minunix/kubernetes-dashboard-amd64:v1.1.1
    Image ID:        docker-pullable://daocloud.io/minunix/kubernetes-dashboard-amd64@sha256:7fd7fd0e1aa84aecb62d62c10df0e8c4ed9cd80c538851962f058d708df06595
    Port:        9090/TCP
    Args:
      –apiserver-host=http://192.168.142.128:8080
    State:            Running
      Started:            Thu, 18 Jan 2018 21:16:12 -0500
    Ready:            True
    Restart Count:        0
    Liveness:            http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Volume Mounts:        <none>
    Environment Variables:    <none>
Conditions:
  Type        Status
  Initialized     True
  Ready     True
  PodScheduled     True
No volumes.
QoS Class:    BestEffort
Tolerations:    <none>
Events:
  FirstSeen    LastSeen    Count    From                SubObjectPath                Type        Reason            Message
  ———    ——–    —–    —-                ————-                ——–    ——            ——-
  2m        2m        1    {default-scheduler }                            Normal        Scheduled        Successfully assigned kubernetes-dashboard-334721719-k4dn9 to 192.168.142.138
  2m        2m        1    {kubelet 192.168.142.138}    spec.containers{kubernetes-dashboard}    Normal        Pulling            pulling image "daocloud.io/minunix/kubernetes-dashboard-amd64:v1.1.1"
  2m        2m        2    {kubelet 192.168.142.138}                        Warning        MissingClusterDNS    kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  2m        2m        1    {kubelet 192.168.142.138}    spec.containers{kubernetes-dashboard}    Normal        Pulled            Successfully pulled image "daocloud.io/minunix/kubernetes-dashboard-amd64:v1.1.1"
  2m        2m        1    {kubelet 192.168.142.138}    spec.containers{kubernetes-dashboard}    Normal        Created            Created container with docker id e376142b498b; Security:[seccomp=unconfined]
  2m        2m        1    {kubelet 192.168.142.138}    spec.containers{kubernetes-dashboard}    Normal        Started            Started container with docker id e376142b498b
[root@master dockerconfig]#

Check the docker containers on node1

[root@k8s_node1 ~]# docker ps -a
CONTAINER ID        IMAGE                                                        COMMAND                  CREATED             STATUS              PORTS               NAMES
e376142b498b        daocloud.io/minunix/kubernetes-dashboard-amd64:v1.1.1        "/dashboard --port=90"   4 minutes ago       Up 4 minutes                            k8s_kubernetes-dashboard.b582b075_kubernetes-dashboard-334721719-k4dn9_kube-system_b1c339df-fcbe-11e7-a898-000c29027e38_040bd918
217259971f68        registry.access.redhat.com/rhel7/pod-infrastructure:latest   "/usr/bin/pod"           4 minutes ago       Up 4 minutes                            k8s_POD.28c50bab_kubernetes-dashboard-334721719-k4dn9_kube-system_b1c339df-fcbe-11e7-a898-000c29027e38_71496fa5
4773c9fc3ead        registry.access.redhat.com/rhel7/pod-infrastructure:latest   "/usr/bin/pod"           21 minutes ago      Up 21 minutes                           k8s_POD.28c50bab_kubernetes-dashboard-latest-3447225518-gk7br_kube-system_60816c6e-fcbc-11e7-a898-000c29027e38_94bd5ef7
[root@k8s_node1 ~]#

Check the external port
[root@master dockerconfig]# kubectl get service --namespace=kube-system
NAME                   CLUSTER-IP        EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   192.168.142.252   <nodes>       80:30949/TCP   10m   #exposed on the nodes
[root@master dockerconfig]#

 

Access via the web UI
http://192.168.142.138:30949/#/pod/default/nginx-pod  direct access via node1's address

http://192.168.142.128:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#/workload  access through the master

----------Deploy an app with the dashboard------------

Create: appname(helloworld) -> image(nginx) -> extend -> port(80) -> targetport(80) -> OK

[root@master dockerconfig]# kubectl get service --namespace=kube-system  -------check the external port
NAME                   CLUSTER-IP        EXTERNAL-IP   PORT(S)        AGE
helloworld             192.168.142.153   <pending>     80:31675/TCP   5m    #31675 is the external port
kubernetes-dashboard   192.168.142.252   <nodes>       80:30949/TCP   1h
[root@master dockerconfig]#

Test it
[root@master dockerconfig]# curl k8s_node1:31675
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
</html>      -------success
[root@master dockerconfig]#

------------Deploy mysql------------

The first steps are the same
[root@master dockerconfig]# kubectl describe pods --namespace=database
Name:        mysqltest-7ctnt
Namespace:    database
Node:        192.168.142.128/192.168.142.128
Start Time:    Thu, 18 Jan 2018 23:14:19 -0500
Labels:        app=mysqltest
Status:        Pending
IP:       
Controllers:    ReplicationController/mysqltest
Containers:
  mysqltest:
    Container ID:       
    Image:            mysql
    Image ID:           
    Port:           
    State:            Waiting
      Reason:            ContainerCreating
    Ready:            False
    Restart Count:        0
    Volume Mounts:        <none>
    Environment Variables:    <none>
Conditions:
  Type        Status
  Initialized     True
  Ready     False
  PodScheduled     True
No volumes.
QoS Class:    BestEffort
Tolerations:    <none>
Events:
  FirstSeen    LastSeen    Count    From                SubObjectPath            Type        Reason            Message
  ———    ——–    —–    —-                ————-            ——–    ——            ——-
  2m        2m        1    {default-scheduler }                        Normal        Scheduled        Successfully assigned mysqltest-7ctnt to 192.168.142.128
  2m        2m        1    {kubelet 192.168.142.128}                    Warning        MissingClusterDNS    kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  2m        2m        1    {kubelet 192.168.142.128}    spec.containers{mysqltest}    Normal        Pulling            pulling image "mysql"
[root@master dockerconfig]# kubectl get pods --namespace=database

[root@master ~]# kubectl describe pods --namespace=database
Name:        mysqltest-7ctnt
Namespace:    database
Node:        192.168.142.128/192.168.142.128
Start Time:    Thu, 18 Jan 2018 23:14:19 -0500
Labels:        app=mysqltest
Status:        Running
IP:        172.17.72.3
Controllers:    ReplicationController/mysqltest
Containers:
  mysqltest:
    Container ID:        docker://c7697c1c17742628c0fe5ee8ff8b0405866c5d0542e76e542e2863537bea985d
    Image:            mysql
    Image ID:            docker-pullable://docker.io/mysql@sha256:7cdb08f30a54d109ddded59525937592cb6852ff635a546626a8960d9ec34c30
    Port:           
    State:            Waiting
      Reason:            CrashLoopBackOff
    Last State:            Terminated
      Reason:            Error
      Exit Code:        1
      Started:            Thu, 18 Jan 2018 23:38:33 -0500
      Finished:            Thu, 18 Jan 2018 23:38:35 -0500
    Ready:            False
    Restart Count:        8
    Volume Mounts:        <none>
    Environment Variables:    <none>
Conditions:
  Type        Status
  Initialized     True
  Ready     False
  PodScheduled     True
No volumes.
QoS Class:    BestEffort
Tolerations:    <none>
Events:
  FirstSeen    LastSeen    Count    From                SubObjectPath            Type        Reason        Message
  ———    ——–    —–    —-                ————-            ——–    ——        ——-
  26m        26m        1    {default-scheduler }                        Normal        Scheduled    Successfully assigned mysqltest-7ctnt to 192.168.142.128
  19m        19m        1    {kubelet 192.168.142.128}    spec.containers{mysqltest}    Normal        Created        Created container with docker id 1face8a8e311; Security:[seccomp=unconfined]
  19m        19m        1    {kubelet 192.168.142.128}    spec.containers{mysqltest}    Normal        Started        Started container with docker id 1face8a8e311
  19m        19m        1    {kubelet 192.168.142.128}    spec.containers{mysqltest}    Normal        Created        Created container with docker id e14e935fc888; Security:[seccomp=unconfined]
  19m        19m        1    {kubelet 192.168.142.128}    spec.containers{mysqltest}    Normal        Started        Started container with docker id e14e935fc888
  19m        19m        2    {kubelet 192.168.142.128}                    Warning        FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "mysqltest" with CrashLoopBackOff: "Back-off 10s restarting failed container=mysqltest pod=mysqltest-7ctnt_database(385572de-fccf-11e7-a898-000c29027e38)"

  This error is basically an environment-variable problem; add the environment variable in the kubernetes dashboard:

  mysqltest1
Image:
mysql
Environment variables:
MYSQL_ROOT_PASSWORD: 123456    ----just add this and it works

----------wordpress deployment test-----------
wordpresstest
Image:
wordpress
Environment variables:
WORDPRESS_DB_HOST: 192.168.142.138:32052
WORDPRESS_DB_USER: root
WORDPRESS_DB_PASSWORD: 123456
WORDPRESS_DB_NAME: wordpress

Information displayed by kubernetes

Details
Name:
wordpresstest
Namespace:
database
Label selector:
app: wordpresstest
Labels:
app: wordpresstest
Images:
wordpress
Status
Pods:
1 running
Services
Name
Labels
Cluster IP
Internal endpoints
External endpoints
timelapse
wordpresstest
app: wordpresstest
192.168.142.168
wordpresstest.database:80 TCP   -----internal port
wordpresstest.database:32566 TCP------external port

more_vert
Pods
Name
Status
Restarts
Age
Cluster IP
CPU (cores)
Memory (bytes)
check_circle
wordpresstest-c7d70
Running
0
55 minutes
172.17.72.9

Test it
[root@master ~]# curl http://192.168.142.138:32566/ |less
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 53462    0 53462    0     0   321k      0 --:--:-- --:--:-- --:--:--  322k

Works

----------Docker registry mirror test-------------
http://guide.daocloud.io/dcs/docker-9153151.html  give it a try

 

curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://b2ae5821.m.daocloud.io

After running the script, restarting docker fails
[root@k8s_node1 ~]#  systemctl status docker
● docker.service – Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/docker.service.d
           └─flannel.conf
   Active: failed (Result: exit-code) since Thu 2018-01-18 23:25:07 EST; 32s ago
     Docs: http://docs.docker.com
  Process: 46222 ExecStart=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES (code=exited, status=1/FAILURE)
Main PID: 46222 (code=exited, status=1/FAILURE)

Jan 18 23:25:07 k8s_node1 systemd[1]: Starting Docker Application Container Engine…
Jan 18 23:25:07 k8s_node1 dockerd-current[46222]: time="2018-01-18T23:25:07-05:00" level=fatal msg="unable to configure the Docker daemon with file /etc/…string\n"
Jan 18 23:25:07 k8s_node1 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Jan 18 23:25:07 k8s_node1 systemd[1]: Failed to start Docker Application Container Engine.
Jan 18 23:25:07 k8s_node1 systemd[1]: Unit docker.service entered failed state.
Jan 18 23:25:07 k8s_node1 systemd[1]: docker.service failed.
Hint: Some lines were ellipsized, use -l to show in full.

The cause: the script's edit of /etc/docker/daemon.json is malformed and needs fixing by hand

--------daemon.json fixed as follows-------
{"registry-mirrors": ["http://b2ae5821.m.daocloud.io"], "insecure-registries": []
}
---------daemon.json fix complete------------

Status is normal again
[root@k8s_node1 ~]#  systemctl status docker
● docker.service – Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/docker.service.d
           └─flannel.conf
   Active: active (running) since Thu 2018-01-18 23:28:38 EST; 1min 43s ago
     Docs: http://docs.docker.com
Main PID: 48542 (dockerd-current)

-----------Check which node a pod is on--------
[root@k8s_master ~]# kubectl get pods -o wide  --namespace=database
NAME                          READY     STATUS    RESTARTS   AGE       IP            NODE   #the NODE column is what you want
httpdtest-1nv30               1/1       Running   3          23h       172.17.72.5   192.168.142.128
mysqltest1-w7fjr              1/1       Running   3          23h       172.17.72.3   192.168.142.128
nginxtest2-3247366770-2kqlc   1/1       Running   0          13h       172.17.13.5   192.168.142.138
nginxtest2-3247366770-5jfrv   1/1       Running   0          13h       172.17.13.4   192.168.142.138
nginxtest2-3247366770-6bs0t   1/1       Running   0          13h       172.17.13.6   192.168.142.138
redisteset-5b993              1/1       Running   1          14h       172.17.13.2   192.168.142.138
wordpresstest-c7d70           1/1       Running   3          22h       172.17.72.2   192.168.142.128
[root@k8s_master ~]#

[root@k8s_master ~]# kubectl get -h   -------ask it about everything else

--------Pull test------------

The speed is blazing fast
[root@k8s_node1 ~]# docker pull wordpress
Using default tag: latest
Trying to pull repository docker.io/library/wordpress …
latest: Pulling from docker.io/library/wordpress

75651f247827: Pull complete
dbcf8fd0150f: Pull complete
de80263f26f0: Pull complete
65be8ad4c5fd: Pull complete
239d5fed0dda: Pull complete
5ab39b683a9f: Pull complete
4a3f54f2d93a: Pull complete
28c970ad99e9: Pull complete
5d1e20c7c396: Pull complete
05f877a23903: Pull complete
e0a5c61bdaa6: Pull complete
d27d2d70a072: Pull complete
ba039fef4b7e: Pull complete
fd026e22f5c3: Pull complete
a523c6d55ab4: Pull complete
025590874132: Pull complete
2d4bd5336aa0: Pull complete
c014b4d902ee: Pull complete
Digest: sha256:73d85a7ae83ea7240090c3a52dbf176d610df2480c75c9e7fed8dba7e3d5154e

 

The script adds --registry-mirror to your Docker config file /etc/docker/daemon.json.
It works on Ubuntu 14.04, Debian, CentOS 6, CentOS 7, Fedora, Arch Linux and openSUSE Leap 42.1; other versions may differ slightly. See the documentation for details.

---------Installing version 1.7.3

https://www.cnblogs.com/liangDream/p/7358847.html

The difference and meaning of nodePort, targetPort and port in Kubernetes


1. nodePort

The port reachable by machines outside the cluster.
For example, a web app that other users must reach needs type=NodePort and, say, nodePort=30001; other machines can then reach the service in a browser at scheme://node:30001, e.g. http://node:30001
A MySQL database, by contrast, may only need to be reached by internal services, so no NodePort is necessary

2. targetPort

The container's own port (the ultimate entry point), matching the port the image exposes (EXPOSE in the Dockerfile); for example, the official docker.io nginx exposes port 80.
For the official nginx Dockerfile, see https://github.com/nginxinc/docker-nginx

3. port

The port services use to reach each other inside kubernetes. Although the mysql container exposes port 3306 (see the Dockerfile at https://github.com/docker-library/mysql/), other containers in the cluster reach this service through port 33306; outside machines cannot reach the mysql service at all, because no NodePort type is configured

4. Examples

apiVersion: v1
kind: Service
metadata:
 name: nginx-service
spec:
 type: NodePort
 ports:
 - port: 30080
   targetPort: 80
   nodePort: 30001
 selector:
  name: nginx-pod
apiVersion: v1
kind: Service
metadata:
 name: mysql-service
spec:
 ports:
 - port: 33306
   targetPort: 3306
 selector:
  name: mysql-pod
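
Putting the three together, a hedged sketch of how each port is reached (the node IP is assumed; the names come from the examples above):

# From outside the cluster, through the nodePort on any node:
curl http://192.168.142.138:30001/             # assumed node IP
# From inside the cluster, through the service port:
curl http://nginx-service:30080/
mysql -h mysql-service -P 33306 -u root -p     # mysql-service is reachable only in-cluster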