Learn-at-a-Glance series: k8s exercise 34: hands-on EFK log collection
Download the corresponding yaml files from the official repo:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
es-statefulset.yaml:    - image: quay.io/fluentd_elasticsearch/elasticsearch:v7.2.0
es-statefulset.yaml:    - image: alpine:3.6
fluentd-es-ds.yaml:       image: quay.io/fluentd_elasticsearch/fluentd:v2.6.0
kibana-deployment.yaml:   image: docker.elastic.co/kibana/kibana-oss:7.2.0
Pull the images from an Aliyun mirror, then retag them to the names the yaml files expect:

docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kibana-oss7.2.0
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:fluentdv2.6.0
docker pull registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:elasticsearchv7.2.0
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:kibana-oss7.2.0 \
docker.elastic.co/kibana/kibana-oss:7.2.0
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:fluentdv2.6.0 \
quay.io/fluentd_elasticsearch/fluentd:v2.6.0
docker tag registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes:elasticsearchv7.2.0 \
quay.io/fluentd_elasticsearch/elasticsearch:v7.2.0
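The three pull-and-retag pairs above all follow one pattern, so they can be scripted. A minimal sketch that prints the commands as a dry run first (the mirror tags are the ones used above; adjust `MIRROR` if yours differs):

```shell
#!/bin/sh
# Map each mirror tag suffix to the image name the yaml files expect,
# and emit the corresponding pull/tag commands for review.
MIRROR=registry.cn-hangzhou.aliyuncs.com/jdccie-rgs/kubenetes

while read suffix target; do
  echo "docker pull $MIRROR:$suffix"
  echo "docker tag  $MIRROR:$suffix $target"
done <<'EOF'
kibana-oss7.2.0 docker.elastic.co/kibana/kibana-oss:7.2.0
fluentdv2.6.0 quay.io/fluentd_elasticsearch/fluentd:v2.6.0
elasticsearchv7.2.0 quay.io/fluentd_elasticsearch/elasticsearch:v7.2.0
EOF
```

Once the printed commands look right, pipe the script's output to `sh` to actually run them.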
-rw-r--r-- 1 root root   382 Apr 3 23:28 es-service.yaml
-rw-r--r-- 1 root root  2900 Apr 4 04:15 es-statefulset.yaml
-rw-r--r-- 1 root root 16124 Apr 3 23:28 fluentd-es-configmap.yaml
-rw-r--r-- 1 root root  2717 Apr 4 06:19 fluentd-es-ds.yaml
-rw-r--r-- 1 root root  1166 Apr 4 05:46 kibana-deployment.yaml
-rw-r--r-- 1 root root   272 Apr 4 05:27 kibana-ingress.yaml   # covered later
-rw-r--r-- 1 root root   354 Apr 3 23:28 kibana-service.yaml
Important: pull exactly the images referenced in the yaml files, otherwise you will run into all kinds of errors.
Apply the ConfigMap first:
kubectl create -f fluentd-es-configmap.yaml
configmap/fluentd-es-config-v0.2.0 created
Then apply the fluentd DaemonSet:
[root@k8s-master elk]# kubectl create -f fluentd-es-ds.yaml
serviceaccount/fluentd-es created
clusterrole.rbac.authorization.k8s.io/fluentd-es created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es created
daemonset.apps/fluentd-es-v2.5.0 created
[root@k8s-master elk]# kubectl get pod -n kube-system |grep flu
fluentd-es-v2.5.0-hjzw8 1/1 Running 0 19s
fluentd-es-v2.5.0-zmlm2 1/1 Running 0 19s
[root@k8s-master elk]#
Next, deploy Elasticsearch:
[root@k8s-master elk]# kubectl create -f es-statefulset.yaml
serviceaccount/elasticsearch-logging created
clusterrole.rbac.authorization.k8s.io/elasticsearch-logging created
clusterrolebinding.rbac.authorization.k8s.io/elasticsearch-logging created
statefulset.apps/elasticsearch-logging created
[root@k8s-master elk]# kubectl create -f es-service.yaml
service/elasticsearch-logging created
[root@k8s-master elk]#
[root@k8s-master elk]# kubectl get pod -n kube-system |grep elas
elasticsearch-logging-0 1/1 Running 0 11s
elasticsearch-logging-1 1/1 Running 0 8s
[root@k8s-master elk]#
Then deploy Kibana:
kubectl create -f kibana-deployment.yaml
kubectl get pod -n kube-system
kubectl create -f kibana-service.yaml
Verify:
[root@k8s-master elk]# kubectl get pod,svc -n kube-system |grep kiba
pod/kibana-logging-65f5b98cf6-2p8cj 1/1 Running 0 46s
service/kibana-logging ClusterIP 10.100.152.68 <none> 5601/TCP 21s
[root@k8s-master elk]#
Check the cluster info:
[root@k8s-master elk]# kubectl cluster-info
Elasticsearch is running at https://192.168.10.68:6443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Kibana is running at https://192.168.10.68:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
Only container ports are exposed, so these URLs are unreachable from outside the cluster. There are several ways to get access.

Method 1: kubectl proxy, started on the master

# This runs in the foreground and dies when you exit the shell. --address is the
# master's IP, though in practice any node will do.
kubectl proxy --address='192.168.10.68' --port=8085 --accept-hosts='^*$'

# To run it in the background instead:
nohup kubectl proxy --address='192.168.10.68' --port=8085 --accept-hosts='^*$' &
Check on the master that the port is listening:
netstat -ntlp |grep 80
tcp 0 0 192.168.10.68:2380 0.0.0.0:* LISTEN 8897/etcd
tcp 0 0 192.168.10.68:8085 0.0.0.0:* LISTEN 16718/kubectl
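If the proxy should survive reboots, a systemd unit is more robust than nohup. A sketch, assuming kubectl is at /usr/bin/kubectl and root's kubeconfig is in the usual place (the unit name and paths are my own choice, not from this setup):

```ini
# /etc/systemd/system/kubectl-proxy.service  (hypothetical unit name)
[Unit]
Description=kubectl proxy for EFK dashboards
After=network-online.target

[Service]
ExecStart=/usr/bin/kubectl proxy --address=192.168.10.68 --port=8085 --accept-hosts='^*$' --kubeconfig=/root/.kube/config
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl daemon-reload && systemctl enable --now kubectl-proxy`.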
Inside Kibana, to get from raw logs to charts:
1. Click Management in the left sidebar.
2. Create an index pattern (Create index pattern).
3. Enter * to see the actual index names.
4. For example logstash-2019.03.25; change the pattern to logstash-* and click through to finish.
4.1 Be sure to click the star to make logstash-* the default index pattern.
5. Open Discover and the logs appear.
Verify the result. The output below is what a healthy cluster returns; note that the URL is plain http, not https:
curl http://192.168.10.68:8085/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/
{
"name" : "bc30CKf",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "C3oV5BnMTByxYltuuYjTjg",
"version" : {
"number" : "6.7.0",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "8453f77",
"build_date" : "2019-03-21T15:32:29.844721Z",
"build_snapshot" : false,
"lucene_version" : "7.7.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
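To script this check rather than eyeball it, the JSON response can be parsed. A sketch that extracts the version number with python3's json module; the response is inlined here (trimmed from the output above) so the parsing step itself is runnable:

```shell
#!/bin/sh
# In a real check you would capture the response with something like:
#   resp=$(curl -s http://192.168.10.68:8085/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/)
# Here a trimmed copy of the response shown above is inlined instead.
resp='{"name":"bc30CKf","cluster_name":"docker-cluster","version":{"number":"6.7.0"}}'

# Pull out version.number from the JSON.
ver=$(echo "$resp" | python3 -c 'import json,sys; print(json.load(sys.stdin)["version"]["number"])')
echo "elasticsearch version: $ver"   # prints: elasticsearch version: 6.7.0
```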
Method 2: Ingress
[root@k8s-master elk]# kubectl get ingress -n kube-system -o wide
NAME HOSTS ADDRESS PORTS AGE
kibana-logging elk.ccie.wang 80 6m42s
This does work, but it returns a 404; the cause still needs investigating.
Create the ingress from the following kibana-ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-logging-ingress
  namespace: kube-system
spec:
  rules:
  - host: elk.ccie.wang
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana-logging
          servicePort: 5601
kubectl create -f kibana-ingress.yaml
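For the elk.ccie.wang host rule to match, the name must resolve on the client to a node running the ingress controller. Assuming that node is 192.168.10.68 (an assumption; substitute your ingress controller's node IP), a hosts entry is the quickest route:

```shell
#!/bin/sh
# Hosts entry so elk.ccie.wang resolves to the ingress node.
# 192.168.10.68 is an assumption; use the node where your ingress controller runs.
entry='192.168.10.68 elk.ccie.wang'
echo "$entry"
# Then append it as root:   echo "$entry" >> /etc/hosts
# Or bypass DNS entirely:   curl -H 'Host: elk.ccie.wang' http://192.168.10.68/
```

The curl variant with an explicit Host header is also a handy way to probe the 404 mentioned above without touching /etc/hosts.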
Method 3: NodePort
Modify kibana-service.yaml so Kibana is reachable directly at http://node:nodeport:
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  # add nodeport
  type: NodePort
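With type: NodePort and no explicit port, Kubernetes assigns a random port from the 30000-32767 range. If you want a stable URL, the port can be pinned; 30601 below is an arbitrary example, not a value from this setup:

```yaml
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 30601   # arbitrary example; must fall in the 30000-32767 range
  type: NodePort
```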
Verify the created resources:
[root@k8s-master elk]# kubectl get -f fluentd-es-ds.yaml
NAME SECRETS AGE
serviceaccount/fluentd-es 1 85s
NAME AGE
clusterrole.rbac.authorization.k8s.io/fluentd-es 85s
NAME AGE
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es 85s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/fluentd-es-v2.5.0 2 2 2 2 2 <none> 85s
[root@k8s-master elk]#
---------- Errors
[root@k8s-master elk]# kubectl get pod -n kube-system |grep elas
elasticsearch-logging-0 0/1 ErrImagePull 0 71s
[root@k8s-master elk]#
Image pull error. Fix it by pointing the statefulset at a registry the nodes can actually reach:
      containers:
      # changed the line below:
      #- image: gcr.io/fluentd-elasticsearch/elasticsearch:v6.6.1
      - image: reg.ccie.wang/library/elk/elasticsearch:6.7.0
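The image swap can be scripted with sed instead of hand-editing. A sketch run against a minimal stand-in file rather than the real es-statefulset.yaml (the reg.ccie.wang name is the private registry used above):

```shell
#!/bin/sh
# Stand-in for the containers section of es-statefulset.yaml.
cat > /tmp/es-demo.yaml <<'EOF'
      containers:
      - image: quay.io/fluentd_elasticsearch/elasticsearch:v7.2.0
EOF

# Point the pod at the reachable private registry.
sed -i 's#image: quay.io/fluentd_elasticsearch/elasticsearch:v7.2.0#image: reg.ccie.wang/library/elk/elasticsearch:6.7.0#' /tmp/es-demo.yaml

grep 'image:' /tmp/es-demo.yaml
```

Run the same sed against the real yaml once the substitution looks right, then `kubectl delete -f` / `kubectl create -f` to redeploy.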
---------------- Extra notes
1. fluentd

How to use the image:

docker run -d -p 24224:24224 -p 24224:24224/udp -v /data:/fluentd/log fluent/fluentd:v1.3-debian-1

The default configuration:
- listens on port 24224
- stores logs tagged docker.** to /fluentd/log/docker.*.log (and symlinks docker.log)
- stores all other logs to /fluentd/log/data.*.log (and symlinks data.log)
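The defaults described above correspond roughly to a fluent.conf like the following. This is a sketch written from that description, not the image's literal shipped file:

```
# Accept records over the forward protocol on 24224.
<source>
  @type forward
  port 24224
</source>

# Records tagged docker.** go to docker.*.log, symlinked as docker.log.
<match docker.**>
  @type file
  path /fluentd/log/docker.*.log
  symlink_path /fluentd/log/docker.log
</match>

# Everything else goes to data.*.log, symlinked as data.log.
<match **>
  @type file
  path /fluentd/log/data.*.log
  symlink_path /fluentd/log/data.log
</match>
```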
You can of course supply your own configuration:

docker run -ti --rm -v /path/to/dir:/fluentd/etc fluentd -c /fluentd/etc/<config file> -v

The first -v mounts /path/to/dir to /fluentd/etc inside the container.
-c tells fluentd where to find the configuration file.
The second -v makes fluentd verbose.
To run as another user, e.g. foo:

docker run -p 24224:24224 -u foo -v …