Day 82 - EFK Stack
MichaelCade committed Mar 29, 2022
1 parent c80e112 commit 329af76
Showing 16 changed files with 337 additions and 6 deletions.
Binary file added Days/Images/Day82_Monitoring1.png
Binary file added Days/Images/Day82_Monitoring10.png
Binary file added Days/Images/Day82_Monitoring11.png
Binary file added Days/Images/Day82_Monitoring12.png
Binary file added Days/Images/Day82_Monitoring13.png
Binary file added Days/Images/Day82_Monitoring2.png
Binary file added Days/Images/Day82_Monitoring3.png
Binary file added Days/Images/Day82_Monitoring4.png
Binary file added Days/Images/Day82_Monitoring5.png
Binary file added Days/Images/Day82_Monitoring6.png
Binary file added Days/Images/Day82_Monitoring7.png
Binary file added Days/Images/Day82_Monitoring8.png
Binary file added Days/Images/Day82_Monitoring9.png
256 changes: 256 additions & 0 deletions Days/Monitoring/EFK Stack/efk-stack.yaml
@@ -0,0 +1,256 @@
---
# Create namespace named kube-logging
kind: Namespace
apiVersion: v1
metadata:
  name: kube-logging
---
# Create a headless service named elasticsearch that will define a DNS domain
kind: Service
apiVersion: v1
# Define the service in the kube-logging namespace
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  # Renders the service headless
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data
  labels:
    type: elasticsearch
spec:
  storageClassName: standard
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: kube-logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: k8s-logs
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.seed_hosts
              value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
            - name: cluster.initial_master_nodes
              value: "es-cluster-0,es-cluster-1,es-cluster-2"
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
      initContainers:
        - name: fix-permissions
          image: busybox
          command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          app: elasticsearch
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: standard
        resources:
          requests:
            storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  ports:
    - port: 5601
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.2.0
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          env:
            - name: ELASTICSEARCH_URL
              value: http://elasticsearch:9200
          ports:
            - containerPort: 5601
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - namespaces
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: kube-logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "elasticsearch.kube-logging.svc.cluster.local"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "http"
            - name: FLUENTD_SYSTEMD_CONF
              value: disable
          resources:
            limits:
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
---
83 changes: 79 additions & 4 deletions Days/day82.md
@@ -1,13 +1,88 @@
### EFK Stack

In the previous section, we spoke about the ELK Stack, which uses Logstash as the log collector in the stack. In the EFK Stack we are swapping that out for Fluentd (or Fluent Bit).

Our mission in this section is to monitor our Kubernetes logs using EFK.

### Overview of EFK

We will be deploying the following into our Kubernetes cluster.

![](Images/Day82_Monitoring1.png)

The EFK stack is a collection of three pieces of software bundled together:

- Elasticsearch: a NoSQL database used to store data and provide an interface for searching and querying logs.

- Fluentd: an open-source data collector for a unified logging layer. Fluentd allows you to unify data collection and consumption for better use and understanding of your data.

- Kibana: the interface for managing and visualising logs, responsible for reading information from Elasticsearch.

### Deploying EFK on Minikube

We will be using our trusty minikube cluster to deploy our EFK stack. Let's start a cluster using `minikube start` on our system; I am using a Windows OS with WSL2 enabled.

![](Images/Day82_Monitoring2.png)
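
As a quick sketch, the commands for this step boil down to the following (assuming minikube and kubectl are already installed on your machine):

```bash
# Start a local cluster with the default driver and settings
minikube start

# Confirm the node reports Ready before deploying anything
kubectl get nodes
```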

I have created [efk-stack.yaml](Days/Monitoring/../../Monitoring/EFK%20Stack/efk-stack.yaml), which contains everything we need to deploy the EFK stack into our cluster. Using the `kubectl create -f efk-stack.yaml` command, we can see everything being deployed.

![](Images/Day82_Monitoring3.png)
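
For reference, the deployment itself is a single command, run from wherever you saved the manifest (the `kubectl apply` variant is an alternative I am adding here, not part of the original walkthrough):

```bash
# Create every object defined in the manifest
kubectl create -f efk-stack.yaml

# Or, if you want a command that can safely be re-run:
# kubectl apply -f efk-stack.yaml
```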

Depending on your system, and whether you have run this before and already have the images pulled, you should now watch the pods reach a ready state before we move on. You can check the progress with the following command: `kubectl get pods -n kube-logging -w`. This can take a few minutes.

![](Images/Day82_Monitoring4.png)

The above command lets us keep an eye on things, but I like to confirm that everything is good by running `kubectl get pods -n kube-logging` to ensure all pods are now up and running.

![](Images/Day82_Monitoring5.png)

Once we have all of our pods up and running, at this stage we should see:
- 3 pods associated with Elasticsearch
- 1 pod associated with Fluentd
- 1 pod associated with Kibana
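
With those pods running, an optional extra check (not part of the original walkthrough) is to ask Elasticsearch itself whether it has formed a healthy three-node cluster. A rough sketch, using the pod names produced by the StatefulSet above:

```bash
# Forward the REST port of the first Elasticsearch pod in the background
kubectl port-forward es-cluster-0 9200:9200 -n kube-logging &

# Ask for cluster health - you should see "status" : "green" and 3 nodes
curl "http://localhost:9200/_cluster/health?pretty"

# Stop the background port-forward when you are done
kill %1
```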

We can also use `kubectl get all -n kube-logging` to show everything in our namespace; as explained previously, Fluentd is deployed as a DaemonSet, Kibana as a Deployment and Elasticsearch as a StatefulSet.

![](Images/Day82_Monitoring6.png)
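
If you only want to see the three workload types rather than everything in the namespace, a narrower query looks something like this:

```bash
# Fluentd is a DaemonSet, Kibana a Deployment and Elasticsearch a StatefulSet
kubectl get daemonset,deployment,statefulset -n kube-logging
```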

Now that all of our pods are up and running, we can issue the port-forward command in a new terminal so that we can access our Kibana dashboard. Note that your pod name will be different from the one in the command shown here. `kubectl port-forward kibana-84cf7f59c-v2l8v 5601:5601 -n kube-logging`

![](Images/Day82_Monitoring7.png)
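
A small sketch of how to avoid copying the pod name by hand, relying on the `app: kibana` label from the manifest above (forwarding to the Service is an alternative that skips pod names entirely):

```bash
# Look up the Kibana pod name via its label rather than typing it out
KIBANA_POD=$(kubectl get pods -n kube-logging -l app=kibana -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward "$KIBANA_POD" 5601:5601 -n kube-logging

# Alternative: forward to the Service instead of a specific pod
# kubectl port-forward svc/kibana 5601:5601 -n kube-logging
```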

We can now open a browser and navigate to http://localhost:5601. You will be greeted with either the screen you see below, or you might see a sample data screen, or you can continue and configure things yourself. Either way, by all means take a look at that test data; it is what we covered when we looked at the ELK stack in a previous session.

![](Images/Day82_Monitoring8.png)

Next, we need to hit the "Discover" tab on the left menu and add "*" as our index pattern. Continue to the next step by hitting "Next step".

![](Images/Day82_Monitoring9.png)

On Step 2 of 2, we are going to use the @timestamp option from the dropdown, as this will filter our data by time. When you hit the create pattern button it might take a few seconds to complete.

![](Images/Day82_Monitoring10.png)
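
If you prefer to script this rather than click through the UI, Kibana also exposes a saved objects API; a minimal sketch (my addition, not part of the original walkthrough, and assuming the port-forward from earlier is still running) would be:

```bash
# Create the same "*" index pattern with @timestamp as the time field
curl -X POST "http://localhost:5601/api/saved_objects/index-pattern" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"attributes": {"title": "*", "timeFieldName": "@timestamp"}}'
```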

If we now head back to our "Discover" tab, after a few seconds you should start to see data coming in from your Kubernetes cluster.

![](Images/Day82_Monitoring11.png)
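
If the Discover view looks a little quiet, one way to generate a steady stream of log lines to watch arrive (purely optional, and my own addition) is a throwaway counter pod:

```bash
# Writes a numbered log line every second
kubectl run counter --image=busybox -- /bin/sh -c 'i=0; while true; do echo "counter: $i"; i=$((i+1)); sleep 1; done'

# Clean it up afterwards
kubectl delete pod counter
```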

Now that we have the EFK stack up and running and we are gathering logs from our Kubernetes cluster via Fluentd, we can also take a look at the other sources we can choose from. If you navigate to the home screen by hitting the Kibana logo in the top left, you will be greeted with the same page we saw when we first logged in.

We have the ability to add APM, log data, metric data and security events from other plugins or sources.

![](Images/Day82_Monitoring12.png)

If we select "Add log data", we can see below that we have a lot of choices for where we want to get our logs from; you can see that Logstash is mentioned there, which is part of the ELK stack.

![](Images/Day82_Monitoring13.png)

Under the metrics data option, you will find that you can add sources for Prometheus and lots of other services.

### APM (Application Performance Monitoring)

There is also the option to gather APM (Application Performance Monitoring) data, which collects in-depth performance metrics and errors from inside your application. It allows you to monitor the performance of thousands of applications in real time.

If the EFK Stack is not enough, we could also take a look at SigNoz, an APM tool that uses OpenTelemetry - a vendor-agnostic instrumentation library for generating telemetry data. OpenTelemetry is a project under the Cloud Native Computing Foundation and is becoming the industry standard for creating portable telemetry data.

I am not going to get into APM here, but you can find out more on the [Elastic site](https://www.elastic.co/observability/application-performance-monitoring).

https://www.youtube.com/watch?v=idDu_jXqf4E&t=10s

## Resources

4 changes: 2 additions & 2 deletions README.md
@@ -127,8 +127,8 @@ This will not cover all things DevOps but it will cover the areas that I feel wi
- [✔️] 📈 79 > [The Big Picture: Log Management](Days/day79.md)
- [✔️] 📈 80 > [ELK Stack](Days/day80.md)
- [✔️] 📈 81 > [Fluentd & FluentBit](Days/day81.md)
- [🚧] 📈 82 > [EFK Stack](Days/day82.md)
- [] 📈 83 > [Data Visualisation - Grafana](Days/day83.md)
- [✔️] 📈 82 > [EFK Stack](Days/day82.md)
- [🚧] 📈 83 > [Data Visualisation - Grafana](Days/day83.md)

### Store & Protect Your Data

