Create New K8S Elastic Stack Cluster

Create a New Namespace

Run k apply -f pp-test/namespace/elk-pp-test.yml, where elk-pp-test.yml contains:

apiVersion: v1
kind: Namespace
metadata:
  name: elk-pp-test
  labels:
    name: elk-pp-test

Create ES Cluster

Prepare kubernetes yaml files

Copy kubernetes yaml files from AKS-ELK-PP-FLUX/deploy/elasticsearch/* to AKS-ELK-PP-FLUX/pp-test/es.

Change the namespace in the yaml files with sed -i 's/elk-cn-pp/elk-pp-test/g' pp-test/es/*, then adjust parameters for the test cluster, such as the image version and the memory and disk sizes.

Config files list:

pp-test:
  es:
    es-pp-test-client-sts.yaml
    es-pp-test-data-sts.yaml
    es-pp-test-master-sts.yaml
    es-pp-test-configmap.yaml
    es-pp-test-http-lb.yaml
    es-pp-test-pvc.yaml
    es-pp-test-svc.yaml
  kibana:
    kibana-pvc.yaml
    kibana-configmap.yaml
    kibana-deployment.yaml
    kibana-svc.yaml
  namespace:
    elk-pp-test.yaml
  ingress:
    http-ing.yaml
  logstash:
    es-template:
      index-default.json
    logstash-configmap.yaml
    logstash-patterns-cm.yaml
    logstash-pipeline-filebeat-cm.yaml
    logstash-deployment.yaml
    logstash-service.yaml

Create Persistent Volume Claim

# change default kubectl namespace
kubectl config set-context --current --namespace=elk-pp-test
# create pvc
k apply -f es-pp-test-pvc.yaml
k get pvc
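For reference, a minimal sketch of what es-pp-test-pvc.yaml could look like; the claim name, storage class and size here are assumptions for the test cluster, not the actual file:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-pp-test-pvc                 # hypothetical claim name
  namespace: elk-pp-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-premium    # assumed AKS storage class
  resources:
    requests:
      storage: 50Gi                    # assumed test-cluster size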

Create Config Maps

k apply -f es-pp-test-configmap.yaml
k get cm
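A sketch of the kind of content es-pp-test-configmap.yaml might carry; the configmap name and the elasticsearch.yml settings below are assumptions (shown in their 7.x form), only the headless service and pod names match the objects created later in this guide:

apiVersion: v1
kind: ConfigMap
metadata:
  name: es-config                      # assumed configmap name
  namespace: elk-pp-test
data:
  elasticsearch.yml: |
    cluster.name: elk-pp-test
    network.host: 0.0.0.0
    discovery.seed_hosts: ["master"]   # the headless master service created later
    cluster.initial_master_nodes: ["es-master-node-0", "es-master-node-1", "es-master-node-2"]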

Create Master Nodes

k apply -f es-pp-test-master-sts.yaml
k get sts
k rollout status sts/es-master-node
k get po
k logs -f es-master-node-0
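As a rough, heavily condensed sketch of what es-pp-test-master-sts.yaml might contain; the image tag, labels, heap size and config mount are assumptions, while the StatefulSet name and the master service name match the commands and services used in this guide:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-master-node
  namespace: elk-pp-test
spec:
  serviceName: master                  # headless service created later
  replicas: 3
  selector:
    matchLabels:
      app: es-master-node              # assumed label
  template:
    metadata:
      labels:
        app: es-master-node
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.x.x   # assumed image/tag
          env:
            - name: node.master
              value: "true"
            - name: node.data
              value: "false"
            - name: ES_JAVA_OPTS
              value: "-Xms1g -Xmx1g"   # assumed test-cluster heap
          ports:
            - containerPort: 9300
              name: transport
          volumeMounts:
            - name: es-config
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              subPath: elasticsearch.yml
      volumes:
        - name: es-config
          configMap:
            name: es-config            # assumed configmap name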

Create Data Nodes

k apply -f es-pp-test-data-sts.yaml
k get sts
k rollout status sts/es-data-node
k get po
k logs -f es-data-node-0
# check data-pvc status
k get pvc
k get pvc/es-data-data-es-data-node-0 -oyaml
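The data PVC shown above is created automatically from the data StatefulSet's volumeClaimTemplates; PVCs generated this way are named <claimTemplateName>-<podName>, which is how pod es-data-node-0 ends up with the claim es-data-data-es-data-node-0. A sketch of the relevant template (the storage class and size are assumptions):

  volumeClaimTemplates:
    - metadata:
        name: es-data-data                   # claim template name inferred from the PVC name above
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: managed-premium    # assumed
        resources:
          requests:
            storage: 100Gi                   # assumed test-cluster size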

Create Client Nodes

k apply -f es-pp-test-client-sts.yaml
k get sts
k rollout status sts/es-client-node
k get po
k logs -f es-client-node-0

Create Services

k apply -f es-pp-test-svc.yaml
k get svc
# create http load balancer
# We can use ingress to access the es cluster instead of this LB.
k apply -f es-pp-test-http-lb.yaml
k get svc
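A sketch of the kind of objects es-pp-test-svc.yaml and es-pp-test-http-lb.yaml define; the selectors are assumptions, while the names and ports match the k get svc output shown below:

apiVersion: v1
kind: Service
metadata:
  name: master
  namespace: elk-pp-test
spec:
  clusterIP: None                # headless service, used by the nodes to find each other
  selector:
    app: es-master-node          # assumed label
  ports:
    - name: transport
      port: 9300
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-http
  namespace: elk-pp-test
spec:
  type: LoadBalancer             # gets an external IP from the cloud provider
  selector:
    app: es-client-node          # assumed label
  ports:
    - name: http
      port: 9200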

Before the services are created, errors show up in the es node logs complaining that the hosts [master/data/client] cannot be resolved.

After creating the http load balancer, an external IP is assigned to the LB. We can then access the es cluster through that IP address.

$ k get svc
NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
client               ClusterIP      None           <none>        9300/TCP         9m15s
client-http          ClusterIP      None           <none>        9200/TCP         9m15s
data                 ClusterIP      None           <none>        9300/TCP         9m15s
elasticsearch-http   LoadBalancer   10.3.239.229   10.77.3.189   9200:30017/TCP   77s
master               ClusterIP      None           <none>        9300/TCP         9m15s
$ curl http://10.77.3.189:9200/_cat/nodes?v
ip          heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.77.3.126           14          98  27    1.29    1.92     2.34 m         -      es-master-node-0
10.77.3.27            12          95  17    0.79    2.05     1.86 m         -      es-master-node-1
10.77.3.114           13          98  27    1.29    1.92     2.34 m         *      es-master-node-2
10.77.3.131            5          98  28    1.29    1.92     2.34 d         -      es-data-node-0
10.77.3.50            10          98  28    1.29    1.92     2.34 i         -      es-client-node-2
10.77.3.77             5          95  17    0.79    2.05     1.86 d         -      es-data-node-2
10.77.3.24             6          95  17    0.79    2.05     1.86 d         -      es-data-node-1
10.77.3.78            10          95  17    0.79    2.05     1.86 i         -      es-client-node-0
10.77.3.122           13          98  28    1.29    1.92     2.34 i         -      es-client-node-1

These basic checks show that the es cluster has started and is running normally.

Create Kibana Cluster

This is basically the same as creating the es cluster: prepare the k8s files, then create the pvc/cm/deployment/service.

# create pvc
k apply -f kibana-pvc.yaml
k get pvc
# create cm
k apply -f kibana-configmap.yaml
k get cm/kibana-config -o yaml
# create svc
k apply -f kibana-svc.yaml
k get svc
# create deployment
k apply -f kibana-deployment.yaml
k get deploy
k get po
k logs -f kibana-5b5d6cdcd-hfbh5
# get ip address of kibana service
k describe svc kibana-svc
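A sketch of the kind of settings kibana-configmap.yaml might put into kibana.yml; the elasticsearch host is an assumption (any service that reaches the es http port, e.g. the elasticsearch-http service above, would do):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana-config
  namespace: elk-pp-test
data:
  kibana.yml: |
    server.host: "0.0.0.0"
    elasticsearch.hosts: ["http://elasticsearch-http:9200"]   # assumed es endpoint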

Create Ingress

Service ingress & config maps:

# create cm
k apply -f tcp-services-configmap.yaml
k get cm
# create ingress
k apply -f http-ing.yaml
k get ing
k describe ingress kibana-ingress
k describe ing/elasticsearch-ingress
$ k describe ingress kibana-ingress
Name:             kibana-ingress
Namespace:        elk-pp-test
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                               Path  Backends
  ----                               ----  --------
  kibana-akscn-test.pp.dktapp.cloud
                                        kibana-svc:5601 (10.77.3.162:5601)
Annotations:                         field.cattle.io/publicEndpoints:
                                       [{"addresses":[""],"port":80,"protocol":"HTTP","serviceName":"elk-pp-test:kibana-svc","ingressName":"elk-pp-test:kibana-ingress","hostname...
                                     kubernetes.io/ingress.class: nginx
Events:
  Type    Reason  Age                    From                      Message
  ----    ------  ----                   ----                      -------
  Normal  CREATE  3m15s                  nginx-ingress-controller  Ingress elk-pp-test/kibana-ingress
  Normal  CREATE  3m15s                  nginx-ingress-controller  Ingress elk-pp-test/kibana-ingress
  Normal  CREATE  3m15s                  nginx-ingress-controller  Ingress elk-pp-test/kibana-ingress
  Normal  UPDATE  2m55s (x2 over 2m55s)  nginx-ingress-controller  Ingress elk-pp-test/kibana-ingress
  Normal  UPDATE  2m55s (x2 over 2m55s)  nginx-ingress-controller  Ingress elk-pp-test/kibana-ingress
  Normal  UPDATE  2m55s (x2 over 2m55s)  nginx-ingress-controller  Ingress elk-pp-test/kibana-ingress

From the output above, we can see the nginx-ingress-controller updating itself to apply the Ingress elk-pp-test/kibana-ingress. After the update, we can access kibana-akscn-test.pp.dktapp.cloud through the nginx-ingress-controller. The IP address of the nginx-ingress-controller can be found in the EXTERNAL-IP column of k get svc -n kube-system.
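For reference, a minimal sketch of the kibana rule in http-ing.yaml, reconstructed from the describe output above; the apiVersion and pathType may differ in the real file depending on the cluster version:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: elk-pp-test
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: kibana-akscn-test.pp.dktapp.cloud
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kibana-svc
                port:
                  number: 5601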

The nginx-ingress-controller lives in ns/kube-system:

# This nic is a daemonset deployed using Helm.
k get ds/nginx-ingress-controller -n kube-system -o yaml
k describe daemonsets/nginx-ingress-controller -n kube-system
# nic tcp-service configmap
k get cm/nginx-ingress-tcp -n kube-system -o yaml

Since there is already an nginx-ingress-controller, we don't need to create another one. The controller acts on all namespaces unless it was given a namespace-specific config when it was created.

REFER: Installing NGINX Ingress using Helm

Install helm 3:

$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

Add NGINX Ingress repo:

$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

Install NGINX Ingress on kube-system namespace:

$ helm install -n kube-system ingress-nginx ingress-nginx/ingress-nginx

Create Logstash Cluster

Create Config Map

kubectl apply -f logstash-configmap.yaml
k apply -f logstash-patterns-cm.yaml
k apply -f logstash-pipeline-filebeat-cm.yaml
# create es-template configmap from template.json
k create configmap es-template --from-file=index-default.json
k get cm
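The es-template configmap only takes effect if logstash can read index-default.json, so logstash-deployment.yaml has to mount it into the pod; a sketch of what that fragment could look like (the mount path and volume name are assumptions):

    spec:
      containers:
        - name: logstash
          volumeMounts:
            - name: es-template
              mountPath: /usr/share/logstash/templates   # assumed mount path
      volumes:
        - name: es-template
          configMap:
            name: es-template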

Create Deployment

k apply -f logstash-deployment.yaml
k get deploy
k rollout status deploy/logstash
k get po
k logs -f logstash-756c9fd497-6zzjn

When creating the logstash deployment, we need to make sure all volumes defined in logstash-deployment.yaml can be mounted; in other words, that the config maps exist.

Besides, pay attention to the differences between logstash 6.x and 7.x. For example, to enable x-pack monitoring, the setting xpack.monitoring.elasticsearch.url needs to be set in logstash.yml on logstash 6.x, but on 7.x this setting is xpack.monitoring.elasticsearch.hosts.
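A logstash.yml fragment showing the 7.x form of the monitoring settings; the elasticsearch host below is an assumption:

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["http://elasticsearch-http:9200"]
# on 6.x the same setting would be:
# xpack.monitoring.elasticsearch.url: "http://elasticsearch-http:9200"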

Create Service

k apply -f logstash-service.yaml
k get svc

Expose Logstash Service Through NIC

The existing logstash cluster from azure-elk-pp already takes port 5044 of the Nginx Ingress Controller, so we set the logstash of azure-elk-pp-k8s-test to port 5055. To achieve this and make it accessible, we need to adjust the configs below.

1. Change logstash service.

In logstash-service.yaml, change spec.ports.port to 5055. The logstash service on azure-elk-pp-k8s-test now listens on 5055 instead of 5044.
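A sketch of the resulting ports section; the port name and targetPort are assumptions, only port 5055 is the actual change:

spec:
  ports:
    - name: beats              # assumed port name
      port: 5055               # changed from 5044
      targetPort: 5044         # assumed: the beats input inside the pod still listens on 5044
      protocol: TCP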

2. Modify NIC's tcp-settings.

k edit cm/nginx-ingress-tcp -n kube-system
# add [ "5055": elk-pp-test/logstash:5055 ] to [data]

After this step, the nginx server in the NIC pods will have a config for port 5055.
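A sketch of what the data section of cm/nginx-ingress-tcp could look like after the edit; the existing 5044 entry for the azure-elk-pp cluster is an assumption:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-tcp
  namespace: kube-system
data:
  "5044": elk-cn-pp/logstash:5044      # assumed existing entry for azure-elk-pp
  "5055": elk-pp-test/logstash:5055    # new entry for the test cluster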

3. Modify NIC's daemon set.

k edit ds/nginx-ingress-controller -n kube-system
# add 
'''         
- containerPort: 5055
  hostPort: 5055
  name: 5055-tcp
  protocol: TCP
'''     
# to [spec.containers.ports]

4. Restart NIC daemon set.

# Because NIC ds's DaemonSet Update Strategy is configured to be [OnDelete], we need to delete NIC pods manually to update NIC ds. 
k get po -n kube-system
k delete po/nginx-ingress-controller-xxxxx -n kube-system
# After a pod is deleted, a new pod is created immediately; check its log to make sure the new pod is running normally, then delete the next pod.

After step 4, we have modified and restarted the NIC ds, exposing port 5055 on the NIC pods.

5. Modify NIC service.

k edit svc/nginx-ingress-controller -n kube-system -o yaml
# add 
'''         
  - name: logstash-5055
    nodePort: 30955
    port: 5055
    protocol: TCP
    targetPort: 5055
'''     
# to [spec.ports]

After all these steps, we can access the logstash service on port 5055 of the LB.
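As a quick check, a log shipper such as filebeat can now point at the NIC load balancer; <NIC_LB_IP> stands for the EXTERNAL-IP of svc/nginx-ingress-controller in kube-system:

# filebeat.yml fragment on the shipper side
output.logstash:
  hosts: ["<NIC_LB_IP>:5055"]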
