Merge pull request #219 from guangbochen/efk2.3

Add efk v7.3.0 chart
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
apiVersion: v1
description: EFK (Elasticsearch + Filebeat + Kibana)
name: efk
version: 7.3.0
appVersion: 7.3.0
icon: file://../efk-logo.jpg
sources:
- https://www.elastic.co/products/elasticsearch
- https://www.elastic.co/products/kibana
- https://fluentbit.io/
home: https://www.elastic.co
maintainers:
- name: guangbochen
email: support@rancher.com
engine: gotpl
# Elastic Stack Kubernetes Helm Charts
This functionality is in beta and is subject to change. The design and code are less mature than official GA features and are being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
## Charts
Please look in the chart directories for the documentation for each chart. These helm charts are designed to be a lightweight way to configure our official docker images. Links to the relevant docker image documentation have also been added below.
| Chart | Docker documentation |
| ------------------------------------------ | ------------------------------------------------------------------------------- |
| [Elasticsearch](./elasticsearch/README.md) | https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html |
| [Kibana](./kibana/README.md) | https://www.elastic.co/guide/en/kibana/current/docker.html |
| [Filebeat](./filebeat/README.md) | https://www.elastic.co/guide/en/beats/filebeat/current/running-on-docker.html |
| [Metricbeat](./metricbeat/README.md) | https://www.elastic.co/guide/en/beats/metricbeat/current/running-on-docker.html |
## Kubernetes versions
The charts are [currently tested](https://devops-ci.elastic.co/job/elastic+helm-charts+master/) against all GKE versions available. The exact versions are defined under `KUBERNETES_VERSIONS` in [helpers/matrix.yml](/helpers/matrix.yml)
# EFK
EFK (Elasticsearch + Filebeat + Kibana) is a flexible and powerful set of open source projects that provides distributed, real-time search and analytics.
This chart bootstraps an EFK stack with the following components:
- Elasticsearch
- Kibana
- Filebeat
- Metricbeat
**Warning:** upgrading from versions prior to v7.3.0 is not supported; please re-deploy a new EFK app if you want to use the new v7.3.0 version.
description: Official Elastic helm chart for Elasticsearch
home: https://github.com/elastic/helm-charts
maintainers:
- email: helm-charts@elastic.co
name: Elastic
name: elasticsearch
version: 7.3.0
appVersion: 7.3.0
sources:
- https://github.com/elastic/elasticsearch
icon: https://helm.elastic.co/icons/elasticsearch.png
# Elasticsearch Helm Chart
This functionality is in beta and is subject to change. The design and code are less mature than official GA features and are being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
This helm chart is a lightweight way to configure and run our official [Elasticsearch docker image](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html)
## Requirements
* [Helm](https://helm.sh/) >= 2.8.0
* Kubernetes >= 1.8
* Minimum cluster requirements include the following to run this chart with default settings. All of these settings are configurable.
* Three Kubernetes nodes to respect the default "hard" affinity settings
* 1GB of RAM for the JVM heap
## Usage notes and getting started
* This repo includes a number of [example](./examples) configurations which can be used as a reference. They are also used in the automated testing of this chart
* Automated testing of this chart is currently only run against GKE (Google Kubernetes Engine). If you are using a different Kubernetes provider you will likely need to adjust the `storageClassName` in the `volumeClaimTemplate`
* The default storage class for GKE is `standard` which by default will give you `pd-ssd` type persistent volumes. This is network attached storage and will not perform as well as local storage. If you are using Kubernetes version 1.10 or greater you can use [Local PersistentVolumes](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd) for increased performance
* The chart deploys a statefulset and by default will do an automated rolling update of your cluster. It does this by waiting for the cluster health to become green after each instance is updated. If you prefer to update manually you can set [`updateStrategy: OnDelete`](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#on-delete)
* It is important to set the JVM heap size in `esJavaOpts` and the CPU/memory `resources` to something suitable for your cluster (see the sketch after this list)
* To simplify chart maintenance, each set of node groups is deployed as a separate helm release. Take a look at the [multi](./examples/multi) example to get an idea for how this works. Without doing this it isn't possible to resize persistent volumes in a statefulset. Setting it up this way makes it possible to add more nodes with a new storage size and then drain the old ones. It also solves the problem of allowing the user to determine which node groups to update first when doing upgrades or changes.
* We have designed this chart to be very un-opinionated about how to configure Elasticsearch. It exposes ways to set environment variables and mount secrets inside of the container. Doing this makes it much easier for this chart to support multiple versions with minimal changes.
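As a concrete illustration of the notes above, here is a minimal `values.yaml` sketch that adjusts the storage class and keeps the JVM heap in line with the container memory. The `standard` storage class name is an assumption; substitute whatever your provider offers:
```
esJavaOpts: "-Xmx1g -Xms1g"
resources:
  requests:
    memory: "2Gi"
  limits:
    memory: "2Gi"
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "standard" # assumption: replace with your provider's class
  resources:
    requests:
      storage: 30Gi
```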
## Migration from helm/charts stable
If you currently have a cluster deployed with the [helm/charts stable](https://github.com/helm/charts/tree/master/stable/elasticsearch) chart you can follow the [migration guide](/elasticsearch/examples/migration/README.md)
## Installing
* Add the elastic helm charts repo
```
helm repo add elastic https://helm.elastic.co
```
* Install it
```
helm install --name elasticsearch elastic/elasticsearch
```
## Compatibility
This chart is tested with the latest supported versions. The currently tested versions are:
| 6.x | 7.x |
| ----- | ----- |
| 6.8.1 | 7.3.0 |
Examples of installing older major versions can be found in the [examples](./examples) directory.
While only the latest releases are tested, it is possible to easily install old or new releases by overriding the `imageTag`. To install version `7.3.0` of Elasticsearch it would look like this:
```
helm install --name elasticsearch elastic/elasticsearch --set imageTag=7.3.0
```
## Configuration
| Parameter | Description | Default |
| ----------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- |
| `clusterName` | This will be used as the Elasticsearch [cluster.name](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster.name.html) and should be unique per cluster in the namespace | `elasticsearch` |
| `nodeGroup` | This is the name that will be used for each group of nodes in the cluster. The name will be `clusterName-nodeGroup-X` | `master` |
| `masterService` | Optional. The service name used to connect to the masters. You only need to set this if your master `nodeGroup` is set to something other than `master`. See [Clustering and Node Discovery](#clustering-and-node-discovery) for more information. | `` |
| `roles` | A hash map with the [specific roles](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html) for the node group | `master: true`<br>`data: true`<br>`ingest: true` |
| `replicas` | Kubernetes replica count for the statefulset (i.e. how many pods) | `3` |
| `minimumMasterNodes` | The value for [discovery.zen.minimum_master_nodes](https://www.elastic.co/guide/en/elasticsearch/reference/6.7/discovery-settings.html#minimum_master_nodes). Should be set to `(master_eligible_nodes / 2) + 1`. Ignored in Elasticsearch versions >= 7. | `2` |
| `esMajorVersion` | Used to set major version specific configuration. If you are using a custom image and not running the default Elasticsearch version you will need to set this to the version you are running (e.g. `esMajorVersion: 6`) | `""` |
| `esConfig` | Allows you to add any config files in `/usr/share/elasticsearch/config/` such as `elasticsearch.yml` and `log4j2.properties`. See [values.yaml](./values.yaml) for an example of the formatting. | `{}` |
| `extraEnvs` | Extra [environment variables](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config) which will be appended to the `env:` definition for the container | `[]` |
| `extraVolumes` | Templatable string of additional volumes to be passed to the `tpl` function | `""` |
| `extraVolumeMounts` | Templatable string of additional volumeMounts to be passed to the `tpl` function | `""` |
| `extraInitContainers` | Templatable string of additional init containers to be passed to the `tpl` function | `""` |
| `secretMounts` | Allows you easily mount a secret as a file inside the statefulset. Useful for mounting certificates and other secrets. See [values.yaml](./values.yaml) for an example | `[]` |
| `image` | The Elasticsearch docker image | `docker.elastic.co/elasticsearch/elasticsearch` |
| `imageTag` | The Elasticsearch docker image tag | `7.3.0` |
| `imagePullPolicy` | The Kubernetes [imagePullPolicy](https://kubernetes.io/docs/concepts/containers/images/#updating-images) value | `IfNotPresent` |
| `podAnnotations` | Configurable [annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) applied to all Elasticsearch pods | `{}` |
| `labels` | Configurable [label](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) applied to all Elasticsearch pods | `{}` |
| `esJavaOpts` | [Java options](https://www.elastic.co/guide/en/elasticsearch/reference/current/jvm-options.html) for Elasticsearch. This is where you should configure the [jvm heap size](https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html) | `-Xmx1g -Xms1g` |
| `resources` | Allows you to set the [resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for the statefulset | `requests.cpu: 100m`<br>`requests.memory: 2Gi`<br>`limits.cpu: 1000m`<br>`limits.memory: 2Gi` |
| `initResources` | Allows you to set the [resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for the initContainer in the statefulset | `{}` |
| `sidecarResources` | Allows you to set the [resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for the sidecar containers in the statefulset | `{}` |
| `networkHost` | Value for the [network.host Elasticsearch setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/network.host.html) | `0.0.0.0` |
| `volumeClaimTemplate` | Configuration for the [volumeClaimTemplate for statefulsets](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-storage). You will want to adjust the storage (default `30Gi`) and the `storageClassName` if you are using a different storage class | `accessModes: [ "ReadWriteOnce" ]`<br>`resources.requests.storage: 30Gi` |
| `persistence.annotations` | Additional persistence annotations for the `volumeClaimTemplate` | `{}` |
| `persistence.enabled` | Enables a persistent volume for Elasticsearch data. Can be disabled for nodes that only have [roles](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html) which don't require persistent data. | `true` |
| `priorityClassName` | The [name of the PriorityClass](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass). No default is supplied as the PriorityClass must be created first. | `""` |
| `antiAffinityTopologyKey` | The [anti-affinity topology key](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity). By default this will prevent multiple Elasticsearch nodes from running on the same Kubernetes node | `kubernetes.io/hostname` |
| `antiAffinity` | Setting this to hard enforces the [anti-affinity rules](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity). If it is set to soft it will be done "best effort". Other values will be ignored. | `hard` |
| `nodeAffinity` | Value for the [node affinity settings](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature) | `{}` |
| `podManagementPolicy` | By default Kubernetes [deploys statefulsets serially](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies). This deploys them in parallel so that they can discover each other | `Parallel` |
| `protocol` | The protocol that will be used for the readinessProbe. Change this to `https` if you have `xpack.security.http.ssl.enabled` set | `http` |
| `httpPort` | The http port that Kubernetes will use for the healthchecks and the service. If you change this you will also need to set [http.port](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-http.html#_settings) in `extraEnvs` | `9200` |
| `transportPort` | The transport port that Kubernetes will use for the service. If you change this you will also need to set [transport port configuration](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-transport.html#_transport_settings) in `extraEnvs` | `9300` |
| `service.type` | Type of elasticsearch service. [Service Types](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) | `ClusterIP` |
| `service.nodePort` | Custom [nodePort](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport) port that can be set if you are using `service.type: NodePort`. | `` |
| `service.annotations` | Annotations that Kubernetes will use for the service. This will configure load balancer if `service.type` is `LoadBalancer` [Annotations](https://kubernetes.io/docs/concepts/services-networking/service/#ssl-support-on-aws) | `{}` |
| `updateStrategy` | The [updateStrategy](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets) for the statefulset. By default Kubernetes will wait for the cluster to be green after upgrading each pod. Setting this to `OnDelete` will allow you to manually delete each pod during upgrades | `RollingUpdate` |
| `maxUnavailable` | The [maxUnavailable](https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget) value for the pod disruption budget. By default this will prevent Kubernetes from having more than 1 unhealthy pod in the node group | `1` |
| `fsGroup (DEPRECATED)` | The Group ID (GID) for [securityContext.fsGroup](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) so that the Elasticsearch user can read from the persistent volume | `` |
| `podSecurityContext` | Allows you to set the [securityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod) for the pod | `fsGroup: 1000` |
| `securityContext` | Allows you to set the [securityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container) for the container | `capabilities.drop:[ALL]`<br>`runAsNonRoot: true`<br>`runAsUser: 1000` |
| `terminationGracePeriod` | The [terminationGracePeriod](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods) in seconds used when trying to stop the pod | `120` |
| `sysctlInitContainer.enabled` | Allows you to disable the sysctlInitContainer if you are setting vm.max_map_count with another method | `true` |
| `sysctlVmMaxMapCount` | Sets the [sysctl vm.max_map_count](https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html#vm-max-map-count) needed for Elasticsearch | `262144` |
| `readinessProbe` | Configuration fields for the [readinessProbe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) | `failureThreshold: 3`<br>`initialDelaySeconds: 10`<br>`periodSeconds: 10`<br>`successThreshold: 3`<br>`timeoutSeconds: 5` |
| `clusterHealthCheckParams` | The [Elasticsearch cluster health status params](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params) that will be used by readinessProbe command | `wait_for_status=green&timeout=1s` |
| `imagePullSecrets` | Configuration for [imagePullSecrets](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret) so that you can use a private registry for your image | `[]` |
| `nodeSelector` | Configurable [nodeSelector](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) so that you can target specific nodes for your Elasticsearch cluster | `{}` |
| `tolerations` | Configurable [tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) | `[]` |
| `ingress` | Configurable [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) to expose the Elasticsearch service. See [`values.yaml`](./values.yaml) for an example | `enabled: false` |
| `schedulerName` | Name of the [alternate scheduler](https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/#specify-schedulers-for-pods) | `nil` |
| `masterTerminationFix` | A workaround needed for Elasticsearch < 7.2 to prevent master status being lost during restarts [#63](https://github.com/elastic/helm-charts/issues/63) | `false` |
| `lifecycle` | Allows you to add lifecycle configuration. See [values.yaml](./values.yaml) for an example of the formatting. | `{}` |
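To make the templatable values concrete, here is a short sketch (mirroring the commented examples in [values.yaml](./values.yaml)) of how `esConfig` and `extraEnvs` are formatted; the keys and variable names are placeholders:
```
esConfig:
  elasticsearch.yml: |
    key:
      nestedkey: value
extraEnvs:
  - name: MY_ENVIRONMENT_VAR
    value: the_value_goes_here
```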
## Try it out
In [examples/](./examples) you will find some example configurations. These examples are used for the automated testing of this helm chart
### Default
To deploy a cluster with all default values and run the integration tests
```
cd examples/default
make
```
### Multi
A cluster with dedicated node types
```
cd examples/multi
make
```
### Security
A cluster with node to node security and https enabled. This example uses autogenerated certificates and passwords; for a production deployment you should generate SSL certificates following the [official docs](https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-tls.html#node-certificates).
* Generate the certificates and install Elasticsearch
```
cd examples/security
make
# Run a curl command to interact with the cluster
kubectl exec -ti security-master-0 -- sh -c 'curl -u $ELASTIC_USERNAME:$ELASTIC_PASSWORD -k https://localhost:9200/_cluster/health?pretty'
```
### FAQ
#### How to install plugins?
The [recommended](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_c_customized_image) way to install plugins into our docker images is to create a custom docker image.
The Dockerfile would look something like:
```
ARG elasticsearch_version
FROM docker.elastic.co/elasticsearch/elasticsearch:${elasticsearch_version}
RUN bin/elasticsearch-plugin install --batch repository-gcs
```
Then update the `image` in your values to point to your custom image.
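For illustration, assuming the customized image was pushed as `myregistry/elasticsearch-plugins` (a hypothetical name), the override would look like:
```
image: "myregistry/elasticsearch-plugins" # hypothetical custom image
imageTag: "7.3.0"
```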
There are a couple of reasons we recommend this.
1. Tying the availability of Elasticsearch to the download service to install plugins is not a great idea or something that we recommend, especially in Kubernetes, where it is normal and expected for a container to be moved to another host at random times.
2. Mutating the state of a running docker image (by installing plugins) goes against best practices of containers and immutable infrastructure.
#### How to use the keystore?
1. Create a Kubernetes secret containing the [keystore](https://www.elastic.co/guide/en/elasticsearch/reference/current/secure-settings.html)
```
$ kubectl create secret generic elasticsearch-keystore --from-file=./elasticsearch.keystore
```
2. Mount it into the container via `secretMounts`
```
secretMounts:
- name: elasticsearch-keystore
secretName: elasticsearch-keystore
path: /usr/share/elasticsearch/config/elasticsearch.keystore
subPath: elasticsearch.keystore
```
#### How to enable snapshotting?
1. Install your [snapshot plugin](https://www.elastic.co/guide/en/elasticsearch/plugins/current/repository.html) into a custom docker image following the [how to install plugins guide](/elasticsearch/README.md#how-to-install-plugins)
2. Add any required secrets or credentials into an Elasticsearch keystore following the [how to use the keystore guide](/elasticsearch/README.md#how-to-use-the-keystore)
3. Configure the [snapshot repository](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html) as you normally would (see the sketch after this list).
4. To automate snapshots you can use a tool like [curator](https://www.elastic.co/guide/en/elasticsearch/client/curator/current/snapshot.html). In the future there are plans to have Elasticsearch manage automated snapshots with [Snapshot Lifecycle Management](https://github.com/elastic/elasticsearch/issues/38461).
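As an illustration of step 3, here is a hedged sketch that registers a GCS repository from inside the cluster; the repository and bucket names are placeholders, and it assumes the `repository-gcs` plugin from the Dockerfile example above:
```
curl -XPUT 'http://localhost:9200/_snapshot/my_backup' \
  -H 'Content-Type: application/json' \
  -d '{ "type": "gcs", "settings": { "bucket": "my-bucket" } }'
```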
### Local development environments
This chart is designed to run on production scale Kubernetes clusters with multiple nodes, lots of memory and persistent storage. For that reason it can be a bit tricky to run against local Kubernetes environments such as minikube. Below are some examples of how to get this working locally.
#### Minikube
This chart also works successfully on [minikube](https://kubernetes.io/docs/setup/minikube/) in addition to typical hosted Kubernetes environments.
An example `values.yaml` file for minikube is provided under `examples/`.
In order to properly support the required persistent volume claims for the Elasticsearch `StatefulSet`, the `default-storageclass` and `storage-provisioner` minikube addons must be enabled.
```
minikube addons enable default-storageclass
minikube addons enable storage-provisioner
cd examples/minikube
make
```
Note that if `helm` or `kubectl` timeouts occur, you may consider creating a minikube VM with more CPU cores or memory allocated.
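For example, the VM can be recreated with a larger footprint before enabling the addons (the sizing below is only a suggestion):
```
minikube start --cpus 4 --memory 8192
```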
#### Docker for Mac - Kubernetes
It is also possible to run this chart with the built in Kubernetes cluster that comes with [docker-for-mac](https://docs.docker.com/docker-for-mac/kubernetes/).
```
cd examples/docker-for-mac
make
```
## Clustering and Node Discovery
This chart facilitates Elasticsearch node discovery and services by creating two `Service` definitions in Kubernetes, one with the name `$clusterName-$nodeGroup` and another named `$clusterName-$nodeGroup-headless`.
Only `Ready` pods are a part of the `$clusterName-$nodeGroup` service, while all pods (`Ready` or not) are a part of `$clusterName-$nodeGroup-headless`.
If your group of master nodes has the default `nodeGroup: master` then you can just add new groups of nodes with a different `nodeGroup` and they will automatically discover the correct master. If your master nodes have a different `nodeGroup` name then you will need to set `masterService` to `$clusterName-$masterNodeGroup`.
The chart value for `masterService` is used to populate `discovery.zen.ping.unicast.hosts`, which Elasticsearch nodes will use to contact master nodes and form a cluster.
Therefore, to add a group of nodes to an existing cluster, setting `masterService` to the desired `Service` name of the related cluster is sufficient.
For an example of deploying both a group of master nodes and a group of data nodes using multiple releases of this chart, see the accompanying values files in `examples/multi`.
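As a minimal sketch, a data-only node group joining an existing cluster whose masters run under the default `master` group could use values like the following (the group name is illustrative, and `masterService` is shown explicitly even though it could be omitted here):
```
clusterName: "elasticsearch"
nodeGroup: "data"
masterService: "elasticsearch-master"
roles:
  master: "false"
  ingest: "true"
  data: "true"
```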
## Testing
This chart uses [pytest](https://docs.pytest.org/en/latest/) to test the templating logic. The dependencies for testing can be installed from the [`requirements.txt`](../requirements.txt) in the parent directory.
```
pip install -r ../requirements.txt
make pytest
```
You can also use `helm template` to look at the YAML being generated
```
make template
```
It is possible to run all of the tests and linting inside of a docker container
```
make test
```
## Integration Testing
Integration tests are run using [goss](https://github.com/aelsabbahy/goss/blob/master/docs/manual.md) which is a serverspec-like tool written in golang. See [goss.yaml](examples/default/test/goss.yaml) for an example of what the tests look like.
To run the goss tests against the default example:
```
cd examples/default
make goss
```
1. Watch all cluster members come up.
$ kubectl get pods --namespace={{ .Release.Namespace }} -l app={{ template "uname" . }} -w
2. Test cluster health using Helm test.
$ helm test {{ .Release.Name }}
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "uname" -}}
{{ .Values.clusterName }}-{{ .Values.nodeGroup }}
{{- end -}}
{{- define "masterService" -}}
{{- if empty .Values.masterService -}}
{{ .Values.clusterName }}-master
{{- else -}}
{{ .Values.masterService }}
{{- end -}}
{{- end -}}
{{- define "endpoints" -}}
{{- $replicas := .replicas | int }}
{{- $uname := printf "%s-%s" .clusterName .nodeGroup }}
{{- range $i, $e := untilStep 0 $replicas 1 -}}
{{ $uname }}-{{ $i }},
{{- end -}}
{{- end -}}
{{- define "esMajorVersion" -}}
{{- if .Values.esMajorVersion -}}
{{ .Values.esMajorVersion }}
{{- else -}}
{{- $version := int (index (.Values.imageTag | splitList ".") 0) -}}
{{- if and (contains "docker.elastic.co/elasticsearch/elasticsearch" .Values.image) (not (eq $version 0)) -}}
{{ $version }}
{{- else -}}
7
{{- end -}}
{{- end -}}
{{- end -}}
{{- if .Values.esConfig }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "uname" . }}-config
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: "{{ template "uname" . }}"
data:
{{- range $path, $config := .Values.esConfig }}
{{ $path }}: |
{{ $config | indent 4 -}}
{{- end -}}
{{- end -}}
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "uname" . -}}
{{- $servicePort := .Values.httpPort -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
app: {{ .Chart.Name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ . }}
http:
paths:
- path: {{ $ingressPath }}
backend:
serviceName: {{ $fullName }}
servicePort: {{ $servicePort }}
{{- end }}
{{- end }}
---
{{- if .Values.maxUnavailable }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: "{{ template "uname" . }}-pdb"
spec:
maxUnavailable: {{ .Values.maxUnavailable }}
selector:
matchLabels:
app: "{{ template "uname" . }}"
{{- end }}
---
kind: Service
apiVersion: v1
metadata:
name: {{ template "uname" . }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: "{{ template "uname" . }}"
annotations:
{{ toYaml .Values.service.annotations | indent 4 }}
spec:
type: {{ .Values.service.type }}
selector:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: "{{ template "uname" . }}"
ports:
- name: http
protocol: TCP
port: {{ .Values.httpPort }}
{{- if .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
- name: transport
protocol: TCP
port: {{ .Values.transportPort }}
---
kind: Service
apiVersion: v1
metadata:
name: {{ template "uname" . }}-headless
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: "{{ template "uname" . }}"
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
clusterIP: None # This is needed for statefulset hostnames like elasticsearch-0 to resolve
# Create endpoints also if the related pod isn't ready
publishNotReadyAddresses: true
selector:
app: "{{ template "uname" . }}"
ports:
- name: http
port: {{ .Values.httpPort }}
- name: transport
port: {{ .Values.transportPort }}
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "uname" . }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: "{{ template "uname" . }}"
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
annotations:
esMajorVersion: "{{ include "esMajorVersion" . }}"
spec:
serviceName: {{ template "uname" . }}-headless
selector:
matchLabels:
app: "{{ template "uname" . }}"
replicas: {{ .Values.replicas }}
podManagementPolicy: {{ .Values.podManagementPolicy }}
updateStrategy:
type: {{ .Values.updateStrategy }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: {{ template "uname" . }}
{{- with .Values.persistence.annotations }}
annotations:
{{ toYaml . | indent 8 }}
{{- end }}
spec:
accessModes:
- {{ .Values.persistence.accessMode | quote }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
resources:
requests:
storage: "{{ .Values.persistence.size }}"
{{- end }}
template:
metadata:
name: "{{ template "uname" . }}"
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: "{{ template "uname" . }}"
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{/* This forces a restart if the configmap has changed */}}
{{- if .Values.esConfig }}
configchecksum: {{ include (print .Template.BasePath "/configmap.yaml") . | sha256sum | trunc 63 }}
{{- end }}
spec:
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
securityContext:
{{ toYaml .Values.podSecurityContext | indent 8 }}
{{- if .Values.fsGroup }}
fsGroup: {{ .Values.fsGroup }} # Deprecated value, please use .Values.podSecurityContext.fsGroup
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 6 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- if or (eq .Values.antiAffinity "hard") (eq .Values.antiAffinity "soft") .Values.nodeAffinity }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
affinity:
{{- end }}
{{- if eq .Values.antiAffinity "hard" }}
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- "{{ template "uname" .}}"
topologyKey: {{ .Values.antiAffinityTopologyKey }}
{{- else if eq .Values.antiAffinity "soft" }}
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: {{ .Values.antiAffinityTopologyKey }}
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- "{{ template "uname" . }}"
{{- end }}
{{- with .Values.nodeAffinity }}
nodeAffinity:
{{ toYaml . | indent 10 }}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriod }}
volumes:
{{- range .Values.secretMounts }}
- name: {{ .name }}
secret:
secretName: {{ .secretName }}
{{- end }}
{{- if .Values.esConfig }}
- name: esconfig
configMap:
name: {{ template "uname" . }}-config
{{- end }}
{{- if .Values.extraVolumes }}
{{ tpl .Values.extraVolumes . | indent 8 }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
{{- end }}
initContainers:
{{- if .Values.sysctlInitContainer.enabled }}
- name: configure-sysctl
securityContext:
runAsUser: 0
privileged: true
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
command: ["sysctl", "-w", "vm.max_map_count={{ .Values.sysctlVmMaxMapCount}}"]
resources:
{{ toYaml .Values.initResources | indent 10 }}
{{- end }}
{{- if .Values.extraInitContainers }}
{{ tpl .Values.extraInitContainers . | indent 6 }}
{{- end }}
containers:
- name: "{{ template "name" . }}"
securityContext:
{{ toYaml .Values.securityContext | indent 10 }}
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
readinessProbe:
{{ toYaml .Values.readinessProbe | indent 10 }}
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
# If the node is starting up wait for the cluster to be ready (request params: '{{ .Values.clusterHealthCheckParams }}' )
# Once it has started only check that the node itself is responding
START_FILE=/tmp/.es_start_file
http () {
local path="${1}"
if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
else
BASIC_AUTH=''
fi
curl -XGET -s -k --fail ${BASIC_AUTH} {{ .Values.protocol }}://127.0.0.1:{{ .Values.httpPort }}${path}
}
if [ -f "${START_FILE}" ]; then
echo 'Elasticsearch is already running, checking that the node is healthy'
http "/"
else
echo 'Waiting for elasticsearch cluster to become ready (request params: "{{ .Values.clusterHealthCheckParams }}" )'
if http "/_cluster/health?{{ .Values.clusterHealthCheckParams }}" ; then
touch ${START_FILE}
exit 0
else
echo 'Cluster is not yet ready (request params: "{{ .Values.clusterHealthCheckParams }}" )'
exit 1
fi
fi
ports:
- name: http
containerPort: {{ .Values.httpPort }}
- name: transport
containerPort: {{ .Values.transportPort }}
resources:
{{ toYaml .Values.resources | indent 10 }}
env:
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
{{- if eq .Values.roles.master "true" }}
{{- if ge (int (include "esMajorVersion" .)) 7 }}
- name: cluster.initial_master_nodes
value: "{{ template "endpoints" .Values }}"
{{- else }}
- name: discovery.zen.minimum_master_nodes
value: "{{ .Values.minimumMasterNodes }}"
{{- end }}
{{- end }}
{{- if lt (int (include "esMajorVersion" .)) 7 }}
- name: discovery.zen.ping.unicast.hosts
value: "{{ template "masterService" . }}-headless"
{{- else }}
- name: discovery.seed_hosts
value: "{{ template "masterService" . }}-headless"
{{- end }}
- name: cluster.name
value: "{{ .Values.clusterName }}"
- name: network.host
value: "{{ .Values.networkHost }}"
- name: ES_JAVA_OPTS
value: "{{ .Values.esJavaOpts }}"
{{- range $role, $enabled := .Values.roles }}
- name: node.{{ $role }}
value: "{{ $enabled }}"
{{- end }}
{{- if .Values.extraEnvs }}
{{ toYaml .Values.extraEnvs | indent 10 }}
{{- end }}
volumeMounts:
{{- if .Values.persistence.enabled }}
- name: "{{ template "uname" . }}"
mountPath: /usr/share/elasticsearch/data
{{- end }}
{{- range .Values.secretMounts }}
- name: {{ .name }}
mountPath: {{ .path }}
{{- if .subPath }}
subPath: {{ .subPath }}
{{- end }}
{{- end }}
{{- range $path, $config := .Values.esConfig }}
- name: esconfig
mountPath: /usr/share/elasticsearch/config/{{ $path }}
subPath: {{ $path }}
{{- end -}}
{{- if .Values.extraVolumeMounts }}
{{ tpl .Values.extraVolumeMounts . | indent 10 }}
{{- end }}
{{- if .Values.masterTerminationFix }}
{{- if eq .Values.roles.master "true" }}
# This sidecar will prevent slow master re-election
# https://github.com/elastic/helm-charts/issues/63
- name: elasticsearch-master-graceful-termination-handler
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
command:
- "sh"
- -c
- |
#!/usr/bin/env bash
set -eo pipefail
http () {
local path="${1}"
if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
else
BASIC_AUTH=''
fi
curl -XGET -s -k --fail ${BASIC_AUTH} {{ .Values.protocol }}://{{ template "masterService" . }}:{{ .Values.httpPort }}${path}
}
cleanup () {
while true ; do
local master="$(http "/_cat/master?h=node" || echo "")"
if [[ $master == "{{ template "masterService" . }}"* && $master != "${NODE_NAME}" ]]; then
echo "This node is not master."
break
fi
echo "This node is still master, waiting gracefully for it to step down"
sleep 1
done
exit 0
}
trap cleanup SIGTERM
sleep infinity &
wait $!
resources:
{{ toYaml .Values.sidecarResources | indent 10 }}
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
{{- if .Values.extraEnvs }}
{{ toYaml .Values.extraEnvs | indent 10 }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.lifecycle }}
lifecycle:
{{ toYaml .Values.lifecycle | indent 10 }}
{{- end }}
---
apiVersion: v1
kind: Pod
metadata:
name: "{{ .Release.Name }}-{{ randAlpha 5 | lower }}-test"
annotations:
"helm.sh/hook": test-success
spec:
containers:
- name: "{{ .Release.Name }}-{{ randAlpha 5 | lower }}-test"
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
command:
- "sh"
- "-c"
- |
#!/usr/bin/env bash -e
curl -XGET --fail '{{ template "uname" . }}:{{ .Values.httpPort }}/_cluster/health?{{ .Values.clusterHealthCheckParams }}'
restartPolicy: Never
---
clusterName: "elasticsearch"
nodeGroup: "master"
# The service that non master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: ""
# Elasticsearch roles that will be applied to this nodeGroup
# These will be set as environment variables. E.g. node.master=true
roles:
master: "true"
ingest: "true"
data: "true"
replicas: 3
minimumMasterNodes: 2
esMajorVersion: ""
# Allows you to add any config files in /usr/share/elasticsearch/config/
# such as elasticsearch.yml and log4j2.properties
esConfig: {}
# elasticsearch.yml: |
# key:
# nestedkey: value
# log4j2.properties: |
# key = value
# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
# - name: MY_ENVIRONMENT_VAR
# value: the_value_goes_here
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
# - name: elastic-certificates
# secretName: elastic-certificates
# path: /usr/share/elasticsearch/config/certs
image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "7.3.0"
imagePullPolicy: "IfNotPresent"
podAnnotations: {}
# iam.amazonaws.com/role: es-cluster
# additional labels
labels: {}
esJavaOpts: "-Xmx1g -Xms1g"
resources:
requests:
cpu: "100m"
memory: "2Gi"
limits:
cpu: "1000m"
memory: "2Gi"
initResources: {}
# limits:
# cpu: "25m"
# # memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
sidecarResources: {}
# limits:
# cpu: "25m"
# # memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
networkHost: "0.0.0.0"
#volumeClaimTemplate:
# accessModes: [ "ReadWriteOnce" ]
# resources:
# requests:
# storage: 30Gi
persistence:
enabled: false
annotations: {}
## data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
##
## If you want to reuse an existing claim, you can pass the name of the PVC using
## the existingClaim variable
# existingClaim: your-claim
accessMode: ReadWriteOnce
size: 30Gi
extraVolumes: ""
# - name: extras
# emptyDir: {}
extraVolumeMounts: ""
# - name: extras
# mountPath: /usr/share/extras
# readOnly: true
extraInitContainers: ""
# - name: do-something
# image: busybox
# command: ['do', 'something']
# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"
# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "hard"
# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}
# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"
protocol: http
httpPort: 9200
transportPort: 9300
service:
type: ClusterIP
nodePort:
annotations: {}
updateStrategy: RollingUpdate
# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1
podSecurityContext:
fsGroup: 1000
# The following value is deprecated,
# please use the above podSecurityContext.fsGroup instead
fsGroup: ""
securityContext:
capabilities:
drop:
- ALL
# readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
# How long to wait for elasticsearch to stop gracefully
terminationGracePeriod: 120
sysctlVmMaxMapCount: 262144
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
# https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params wait_for_status
clusterHealthCheckParams: "wait_for_status=green&timeout=1s"
## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
imagePullSecrets: []
nodeSelector: {}
tolerations: []
# Enabling this will publicly expose your Elasticsearch instance.
# Only enable this if you have security enabled on your cluster
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
path: /
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
nameOverride: ""
fullnameOverride: ""
# https://github.com/elastic/helm-charts/issues/63
masterTerminationFix: false
lifecycle: {}
# preStop:
# exec:
# command: ["/bin/sh", "-c", "echo Hello from the preStop handler > /usr/share/message"]
# postStart:
# exec:
# command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
sysctlInitContainer:
enabled: true
description: Official Elastic helm chart for Filebeat
home: https://github.com/elastic/helm-charts
maintainers:
- email: helm-charts@elastic.co
name: Elastic
name: filebeat
version: 7.3.0
appVersion: 7.3.0
sources:
- https://github.com/elastic/beats
icon: https://helm.elastic.co/icons/filebeat.png
# Filebeat Helm Chart
This functionality is in beta and is subject to change. The design and code are less mature than official GA features and are being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
This helm chart is a lightweight way to configure and run our official [Filebeat docker image](https://www.elastic.co/guide/en/beats/filebeat/current/running-on-docker.html).
## Requirements
* Kubernetes >= 1.8
* [Helm](https://helm.sh/) >= 2.8.0
## Usage notes and getting started
* The default Filebeat configuration file for this chart is configured to use an Elasticsearch endpoint. Without any additional changes, Filebeat will send documents to the service URL that the Elasticsearch helm chart sets up by default. You may either set the `ELASTICSEARCH_HOSTS` environment variable in `extraEnvs` to override this endpoint (see the sketch below) or modify the default `filebeatConfig` to change this behavior.
* The default Filebeat configuration file is also configured to capture container logs and enrich them with Kubernetes metadata by default. This will capture all container logs in the cluster.
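For example, a minimal `values.yaml` sketch that points Filebeat at a different Elasticsearch endpoint (the host shown is a placeholder) would be:
```
extraEnvs:
  - name: ELASTICSEARCH_HOSTS
    value: "my-elasticsearch.example.com:9200" # placeholder endpoint
```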
## Installing
* Add the elastic helm charts repo
```
helm repo add elastic https://helm.elastic.co
```
* Install it
```
helm install --name filebeat elastic/filebeat
```
## Compatibility
This chart is tested with the latest supported versions. The currently tested versions are:
| 6.x | 7.x |
| ----- | ----- |
| 6.8.1 | 7.3.0 |
Examples of installing older major versions can be found in the [examples](./examples) directory.
While only the latest releases are tested, it is possible to easily install old or new releases by overriding the `imageTag`. To install version `7.3.0` of Filebeat it would look like this:
```
helm install --name filebeat elastic/filebeat --set imageTag=7.3.0
```
## Configuration
| Parameter | Description | Default |
| ------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- |
| `filebeatConfig` | Allows you to add any config files in `/usr/share/filebeat` such as `filebeat.yml`. See [values.yaml](./values.yaml) for an example of the formatting with the default configuration. | see [values.yaml](./values.yaml) |
| `extraEnvs` | Extra [environment variables](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config) which will be appended to the `env:` definition for the container | `[]` |
| `extraVolumeMounts` | Templatable string of additional volumeMounts to be passed to the `tpl` function | `""` |
| `extraVolumes` | Templatable string of additional volumes to be passed to the `tpl` function | `""` |
| `hostPathRoot` | Fully-qualified [hostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath) that will be used to persist Filebeat registry data | `/var/lib` |
| `image` | The Filebeat docker image | `docker.elastic.co/beats/filebeat` |
| `imageTag` | The Filebeat docker image tag | `7.3.0` |
| `imagePullPolicy` | The Kubernetes [imagePullPolicy](https://kubernetes.io/docs/concepts/containers/images/#updating-images) value | `IfNotPresent` |
| `imagePullSecrets` | Configuration for [imagePullSecrets](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret) so that you can use a private registry for your image | `[]` |
| `managedServiceAccount` | Whether the `serviceAccount` should be managed by this helm chart. Set this to `false` in order to manage your own service account and related roles. | `true` |
| `podAnnotations` | Configurable [annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) applied to all Filebeat pods | `{}` |
| `labels` | Configurable [label](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) applied to all Filebeat pods | `{}` |
| `podSecurityContext` | Configurable [podSecurityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) for Filebeat pod execution environment | `runAsUser: 0`<br>`privileged: false` |
| `livenessProbe` | Parameters to pass to [liveness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) checks for values such as timeouts and thresholds. | `failureThreshold: 3`<br>`initialDelaySeconds: 10`<br>`periodSeconds: 10`<br>`successThreshold: 3`<br>`timeoutSeconds: 5` |
| `readinessProbe` | Parameters to pass to [readiness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) checks for values such as timeouts and thresholds. | `failureThreshold: 3`<br>`initialDelaySeconds: 10`<br>`periodSeconds: 10`<br>`successThreshold: 3`<br>`timeoutSeconds: 5` |
| `resources` | Allows you to set the [resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for the `DaemonSet` | `requests.cpu: 100m`<br>`requests.memory: 100Mi`<br>`limits.cpu: 1000m`<br>`limits.memory: 200Mi` |
| `serviceAccount` | Custom [serviceAccount](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) that Filebeat will use during execution. By default will use the service account created by this chart. | `""` |
| `secretMounts` | Allows you easily mount a secret as a file inside the `DaemonSet`. Useful for mounting certificates and other secrets. See [values.yaml](./values.yaml) for an example | `[]` |
| `terminationGracePeriod` | Termination period (in seconds) to wait before killing Filebeat pod process on pod shutdown | `30` |
| `tolerations` | Configurable [tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) | `[]` |
| `nodeSelector` | Configurable [nodeSelector](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) | `{}` |
| `affinity` | Configurable [affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) | `{}` |
| `priorityClassName` | The [name of the PriorityClass](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass). No default is supplied as the PriorityClass must be created first. | `""` |
| `updateStrategy` | The [updateStrategy](https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/#daemonset-update-strategy) for the `DaemonSet`. By default Kubernetes will kill and recreate pods on updates. Setting this to `OnDelete` will require that pods be deleted manually. | `RollingUpdate` |
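For reference, the default `filebeatConfig` (abridged from [values.yaml](./values.yaml)) shows the expected formatting; custom configurations follow the same shape:
```
filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
    - type: docker
      containers.ids:
      - '*'
    processors:
    - add_kubernetes_metadata:
        in_cluster: true
    output.elasticsearch:
      hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
```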
## Examples
In [examples/](./examples) you will find some example configurations. These examples are used for the automated testing of this helm chart.
### Default
* Deploy the [default Elasticsearch helm chart](../elasticsearch/README.md#default)
* Deploy Filebeat with the default values
```
cd examples/default
make
```
* You can now set up a port forward for Elasticsearch to observe Filebeat indices
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
## Testing
This chart uses [pytest](https://docs.pytest.org/en/latest/) to test the templating logic. The dependencies for testing can be installed from the [`requirements.txt`](../requirements.txt) in the parent directory.
```
pip install -r ../requirements.txt
make pytest
```
You can also use `helm template` to look at the YAML being generated
```
make template
```
It is possible to run all of the tests and linting inside of a docker container
```
make test
```
## Integration Testing
Integration tests are run using [goss](https://github.com/aelsabbahy/goss/blob/master/docs/manual.md) which is a serverspec-like tool written in golang. See [goss.yaml](examples/default/test/goss.yaml) for an example of what the tests look like.
To run the goss tests against the default example:
```
cd examples/default
make goss
```
1. Watch all containers come up.
$ kubectl get pods --namespace={{ .Release.Namespace }} -l app={{ template "fullname" . }} -w
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Use the fullname if the serviceAccount value is not set
*/}}
{{- define "serviceAccount" -}}
{{- if .Values.serviceAccount }}
{{- .Values.serviceAccount -}}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- if .Values.managedServiceAccount }}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: {{ template "serviceAccount" . }}-cluster-role
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
rules:
- apiGroups:
- ""
resources:
- namespaces
- pods
verbs:
- get
- list
- watch
{{- end -}}
{{- if .Values.managedServiceAccount }}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: {{ template "serviceAccount" . }}-cluster-role-binding
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
roleRef:
kind: ClusterRole
name: {{ template "serviceAccount" . }}-cluster-role
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: {{ template "serviceAccount" . }}
namespace: {{ .Release.Namespace }}
{{- end -}}
{{- if .Values.filebeatConfig }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "fullname" . }}-config
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
data:
{{- range $path, $config := .Values.filebeatConfig }}
{{ $path }}: |
{{ $config | indent 4 -}}
{{- end -}}
{{- end -}}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ template "fullname" . }}
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
selector:
matchLabels:
app: "{{ template "fullname" . }}"
release: {{ .Release.Name | quote }}
updateStrategy:
type: {{ .Values.updateStrategy }}
template:
metadata:
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{/* This forces a restart if the configmap has changed */}}
{{- if .Values.filebeatConfig }}
configChecksum: {{ include (print .Template.BasePath "/configmap.yaml") . | sha256sum | trunc 63 }}
{{- end }}
name: "{{ template "fullname" . }}"
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
spec:
{{- with .Values.tolerations }}
tolerations: {{ toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector: {{ toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
{{- with .Values.affinity }}
affinity: {{ toYaml . | nindent 8 -}}
{{- end }}
serviceAccountName: {{ template "serviceAccount" . }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriod }}
volumes:
{{- range .Values.secretMounts }}
- name: {{ .name }}
secret:
secretName: {{ .secretName }}
{{- end }}
{{- if .Values.filebeatConfig }}
- name: filebeat-config
configMap:
defaultMode: 0600
name: {{ template "fullname" . }}-config
{{- end }}
- name: data
hostPath:
path: {{ .Values.hostPathRoot }}/{{ template "fullname" . }}-{{ .Release.Namespace }}-data
type: DirectoryOrCreate
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varrundockersock
hostPath:
path: /var/run/docker.sock
{{- if .Values.extraVolumes }}
{{ tpl .Values.extraVolumes . | indent 6 }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
{{- end }}
containers:
- name: "filebeat"
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
args:
- "-e"
- "-E"
- "http.enabled=true"
livenessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
curl --fail 127.0.0.1:5066
{{ toYaml .Values.livenessProbe | indent 10 }}
readinessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
filebeat test output
{{ toYaml .Values.readinessProbe | indent 10 }}
resources:
{{ toYaml .Values.resources | indent 10 }}
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
{{- if .Values.extraEnvs }}
{{ toYaml .Values.extraEnvs | indent 8 }}
{{- end }}
{{- if .Values.podSecurityContext }}
securityContext:
{{ toYaml .Values.podSecurityContext | indent 10 }}
{{- end }}
volumeMounts:
{{- range .Values.secretMounts }}
- name: {{ .name }}
mountPath: {{ .path }}
{{- if .subPath }}
subPath: {{ .subPath }}
{{- end }}
{{- end }}
{{- range $path, $config := .Values.filebeatConfig }}
- name: filebeat-config
mountPath: /usr/share/filebeat/{{ $path }}
readOnly: true
subPath: {{ $path }}
{{- end }}
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
# Necessary when using autodiscovery; avoid mounting it otherwise
# See: https://www.elastic.co/guide/en/beats/filebeat/master/configuration-autodiscover.html
- name: varrundockersock
mountPath: /var/run/docker.sock
readOnly: true
{{- if .Values.extraVolumeMounts }}
{{ tpl .Values.extraVolumeMounts . | indent 8 }}
{{- end }}
{{- if .Values.managedServiceAccount }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "serviceAccount" . }}
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
{{- end -}}
---
# Allows you to add any config files in /usr/share/filebeat
# such as filebeat.yml
filebeatConfig:
filebeat.yml: |
filebeat.inputs:
- type: docker
containers.ids:
- '*'
processors:
- add_kubernetes_metadata:
in_cluster: true
output.elasticsearch:
hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
# Extra environment variables to append to the DaemonSet pod spec.
# This will be appended to the current 'env:' key; you can use any of the
# standard Kubernetes env syntax here.
extraEnvs: []
# - name: MY_ENVIRONMENT_VAR
# value: the_value_goes_here
extraVolumeMounts: ""
# - name: extras
# mountPath: /usr/share/extras
# readOnly: true
extraVolumes: ""
# - name: extras
# emptyDir: {}
# Root directory on the host where Filebeat will persist its registry data (file positions and other metadata) across pod restarts.
hostPathRoot: /var/lib
image: "docker.elastic.co/beats/filebeat"
imageTag: "7.3.0"
imagePullPolicy: "IfNotPresent"
imagePullSecrets: []
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
# Whether this chart should self-manage its service account, role, and associated role binding.
managedServiceAccount: true
# additional labels
labels: {}
podAnnotations: {}
# iam.amazonaws.com/role: es-cluster
# Various pod security context settings. Bear in mind that many of these have an impact on Filebeat functioning properly.
#
# - User that the container will execute as. Typically necessary to run as root (0) in order to properly collect host container logs.
# - Whether to execute the Filebeat containers as privileged containers. Typically not necessary unless running within environments such as OpenShift.
podSecurityContext:
runAsUser: 0
privileged: false
resources:
requests:
cpu: "100m"
memory: "100Mi"
limits:
cpu: "1000m"
memory: "200Mi"
# Custom service account override that the pod will use
serviceAccount: ""
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates and other sensitive values
secretMounts: []
# - name: filebeat-certificates
# secretName: filebeat-certificates
# path: /usr/share/filebeat/certs
# How long to wait for Filebeat pods to stop gracefully
terminationGracePeriod: 30
tolerations: []
nodeSelector: {}
affinity: {}
# This is the PriorityClass setting, as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
updateStrategy: RollingUpdate
# Override various naming aspects of this chart
# Only edit these if you know what you're doing
nameOverride: ""
fullnameOverride: ""
description: Official Elastic helm chart for Kibana
home: https://github.com/elastic/helm-charts
maintainers:
- email: helm-charts@elastic.co
name: Elastic
name: kibana
version: 7.3.0
appVersion: 7.3.0
sources:
- https://github.com/elastic/kibana
icon: https://helm.elastic.co/icons/kibana.png
# Kibana Helm Chart
This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
This helm chart is a lightweight way to configure and run our official [Kibana docker image](https://www.elastic.co/guide/en/kibana/current/docker.html)
## Requirements
* Kubernetes >= 1.8
* [Helm](https://helm.sh/) >= 2.8.0
## Installing
* Add the elastic helm charts repo
```
helm repo add elastic https://helm.elastic.co
```
* Install it
```
helm install --name kibana elastic/kibana
```
## Compatibility
This chart is tested with the latest supported versions. The currently tested versions are:
| 6.x | 7.x |
| ----- | ----- |
| 6.8.1 | 7.3.0 |
Examples of installing older major versions can be found in the [examples](./examples) directory.
While only the latest releases are tested, it is possible to easily install old or new releases by overriding the `imageTag`. To install version `7.3.0` of Kibana it would look like this:
```
helm install --name kibana elastic/kibana --set imageTag=7.3.0
```
## Configuration
| Parameter | Description | Default |
| ------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- |
| `elasticsearchHosts` | The URLs used to connect to Elasticsearch. | `http://elasticsearch-master:9200` |
| `elasticsearchURL`        | The URL used to connect to Elasticsearch. Deprecated; only needed for Kibana versions < 6.6 | |
| `replicas` | Kubernetes replica count for the deployment (i.e. how many pods) | `1` |
| `extraEnvs` | Extra [environment variables](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config) which will be appended to the `env:` definition for the container | `[]` |
| `secretMounts` | Allows you easily mount a secret as a file inside the deployment. Useful for mounting certificates and other secrets. See [values.yaml](./values.yaml) for an example | `[]` |
| `image` | The Kibana docker image | `docker.elastic.co/kibana/kibana` |
| `imageTag` | The Kibana docker image tag | `7.3.0` |
| `imagePullPolicy` | The Kubernetes [imagePullPolicy](https://kubernetes.io/docs/concepts/containers/images/#updating-images) value | `IfNotPresent` |
| `podAnnotations` | Configurable [annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) applied to all Kibana pods | `{}` |
| `resources`               | Allows you to set the [resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for the deployment | `requests.cpu: 100m`<br>`requests.memory: 2Gi`<br>`limits.cpu: 1000m`<br>`limits.memory: 2Gi` |
| `protocol` | The protocol that will be used for the readinessProbe. Change this to `https` if you have `server.ssl.enabled: true` set | `http` |
| `serverHost` | The [`server.host`](https://www.elastic.co/guide/en/kibana/current/settings.html) Kibana setting. This is set explicitly so that the default always matches what comes with the docker image. | `0.0.0.0` |
| `healthCheckPath`         | The path used for the readinessProbe to check that Kibana is ready. If you are setting `server.basePath` you will also need to update this to `/${basePath}/app/kibana` (see the example below this table) | `/app/kibana` |
| `kibanaConfig` | Allows you to add any config files in `/usr/share/kibana/config/` such as `kibana.yml`. See [values.yaml](./values.yaml) for an example of the formatting. | `{}` |
| `podSecurityContext` | Allows you to set the [securityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod) for the pod | `fsGroup: 1000` |
| `securityContext` | Allows you to set the [securityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container) for the container | `capabilities.drop:[ALL]`<br>`runAsNonRoot: true`<br>`runAsUser: 1000` |
| `serviceAccount` | Allows you to overwrite the "default" [serviceAccount](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) for the pod | `[]` |
| `priorityClassName` | The [name of the PriorityClass](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass). No default is supplied as the PriorityClass must be created first. | `""` |
| `antiAffinityTopologyKey` | The [anti-affinity topology key](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity). By default this will prevent multiple Kibana instances from running on the same Kubernetes node | `kubernetes.io/hostname` |
| `antiAffinity` | Setting this to hard enforces the [anti-affinity rules](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity). If it is set to soft it will be done "best effort" | `hard` |
| `httpPort` | The http port that Kubernetes will use for the healthchecks and the service. | `5601` |
| `maxUnavailable` | The [maxUnavailable](https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget) value for the pod disruption budget. By default this will prevent Kubernetes from having more than 1 unhealthy pod | `1` |
| `updateStrategy` | Allows you to change the default update [strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment) for the deployment. A [standard upgrade](https://www.elastic.co/guide/en/kibana/current/upgrade-standard.html) of Kibana requires a full stop and start which is why the default strategy is set to `Recreate` | `Recreate` |
| `readinessProbe` | Configuration for the [readinessProbe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) | `failureThreshold: 3`<br>`initialDelaySeconds: 10`<br>`periodSeconds: 10`<br>`successThreshold: 3`<br>`timeoutSeconds: 5` |
| `imagePullSecrets` | Configuration for [imagePullSecrets](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret) so that you can use a private registry for your image | `[]` |
| `nodeSelector` | Configurable [nodeSelector](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) so that you can target specific nodes for your Kibana instances | `{}` |
| `tolerations` | Configurable [tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) | `[]` |
| `ingress` | Configurable [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) to expose the Kibana service. See [`values.yaml`](./values.yaml) for an example | `enabled: false` |
| `service` | Configurable [service](https://kubernetes.io/docs/concepts/services-networking/service/) to expose the Kibana service. See [`values.yaml`](./values.yaml) for an example | `type: ClusterIP`<br>`port: 5601`<br>`nodePort:`<br>`annotations: {}` |
| `labels` | Configurable [label](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) applied to all Kibana pods | `{}` |
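As an illustration of the `healthCheckPath` note above, here is a minimal sketch of a values file that serves Kibana under a base path (the `/kibana` path is only an example):
```
kibanaConfig:
  kibana.yml: |
    server.basePath: /kibana
    server.rewriteBasePath: true
healthCheckPath: "/kibana/app/kibana"
```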
## Examples
In [examples/](./examples) you will find some example configurations. These examples are used for the automated testing of this helm chart.
### Default
* Deploy the [default Elasticsearch helm chart](../elasticsearch/README.md#default)
* Deploy Kibana with the default values
```
cd examples/default
make
```
* You can now set up a port forward and access Kibana at http://localhost:5601
```
kubectl port-forward deployment/helm-kibana-default-kibana 5601
```
### Security
* Deploy a [security enabled Elasticsearch cluster](../elasticsearch/README.md#security)
* Deploy Kibana with the security example
```
cd examples/security
make
```
* Set up a port forward and access Kibana at https://localhost:5601
```
# Set up the port forward
kubectl port-forward deployment/helm-kibana-security-kibana 5601
# Run this in a separate terminal
# Get the auto generated password
password=$(kubectl get secret elastic-credentials -o jsonpath='{.data.password}' | base64 --decode)
echo $password
# Test Kibana is working with curl or access it with your browser at https://localhost:5601
# The example certificate is self signed so you may see a warning about the certificate
curl -I -k -u elastic:$password https://localhost:5601/app/kibana
```
## Testing
This chart uses [pytest](https://docs.pytest.org/en/latest/) to test the templating logic. The dependencies for testing can be installed from the [`requirements.txt`](../requirements.txt) in the parent directory.
```
pip install -r ../requirements.txt
make pytest
```
You can also use `helm template` to look at the YAML being generated
```
make template
```
It is possible to run all of the tests and linting inside of a docker container
```
make test
```
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "fullname" -}}
{{- $name := default .Release.Name .Values.nameOverride -}}
{{- printf "%s-%s" $name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- if .Values.kibanaConfig }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "fullname" . }}-config
labels:
app: {{ .Chart.Name }}
release: {{ .Release.Name | quote }}
data:
{{- range $path, $config := .Values.kibanaConfig }}
{{ $path }}: |
{{ $config | indent 4 -}}
{{- end -}}
{{- end -}}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "fullname" . }}
labels:
app: {{ .Chart.Name }}
release: {{ .Release.Name | quote }}
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
replicas: {{ .Values.replicas }}
strategy:
{{ toYaml .Values.updateStrategy | indent 4 }}
selector:
matchLabels:
app: kibana
release: {{ .Release.Name | quote }}
template:
metadata:
labels:
app: kibana
release: {{ .Release.Name | quote }}
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{/* This forces a restart if the configmap has changed */}}
{{- if .Values.kibanaConfig }}
configchecksum: {{ include (print .Template.BasePath "/configmap.yaml") . | sha256sum | trunc 63 }}
{{- end }}
spec:
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
securityContext:
{{ toYaml .Values.podSecurityContext | indent 8 }}
{{- if .Values.serviceAccount }}
serviceAccount: {{ .Values.serviceAccount }}
{{- end }}
volumes:
{{- range .Values.secretMounts }}
- name: {{ .name }}
secret:
secretName: {{ .secretName }}
{{- end }}
{{- if .Values.kibanaConfig }}
- name: kibanaconfig
configMap:
name: {{ template "fullname" . }}-config
{{- end }}
- name: kibana-nginx
configMap:
name: {{ template "fullname" . }}-nginx
items:
- key: nginx.conf
mode: 438
path: nginx.conf
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
{{- end }}
containers:
- name: kibana-proxy
image: {{ .Values.global.nginxProxy.repository }}:{{ .Values.global.nginxProxy.tag }}
args:
- nginx
- -g
- daemon off;
- -c
- /nginx/nginx.conf
ports:
- name: http
containerPort: 80
protocol: TCP
volumeMounts:
- name: kibana-nginx
mountPath: /nginx/
{{- if and .Values.resources .Values.resources.proxy }}
resources:
{{ toYaml .Values.resources.proxy | indent 10 }}
{{- end }}
- name: kibana
securityContext:
{{ toYaml .Values.securityContext | indent 10 }}
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
env:
{{- if .Values.elasticsearchURL }}
- name: ELASTICSEARCH_URL
value: "{{ .Values.elasticsearchURL }}"
{{- else if .Values.elasticsearchHosts }}
- name: ELASTICSEARCH_HOSTS
value: "{{ .Values.elasticsearchHosts }}"
{{- end }}
- name: I18N_LOCALE
value: "{{ .Values.language }}"
- name: SERVER_HOST
value: "{{ .Values.serverHost }}"
{{- if .Values.extraEnvs }}
{{ toYaml .Values.extraEnvs | indent 10 }}
{{- end }}
readinessProbe:
{{ toYaml .Values.readinessProbe | indent 10 }}
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
http () {
local path="${1}"
set -- -XGET -s --fail
if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
set -- "$@" -u "${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
fi
curl -k "$@" "{{ .Values.protocol }}://localhost:{{ .Values.httpPort }}${path}"
}
http "{{ .Values.healthCheckPath }}"
ports:
- containerPort: {{ .Values.httpPort }}
resources:
{{ toYaml .Values.resources | indent 10 }}
volumeMounts:
{{- range .Values.secretMounts }}
- name: {{ .name }}
mountPath: {{ .path }}
{{- if .subPath }}
subPath: {{ .subPath }}
{{- end }}
{{- end }}
{{- range $path, $config := .Values.kibanaConfig }}
- name: kibanaconfig
mountPath: /usr/share/kibana/config/{{ $path }}
subPath: {{ $path }}
{{- end -}}
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "fullname" . -}}
{{- $servicePort := .Values.service.port -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
app: {{ .Chart.Name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{ toYaml .Values.ingress.tls | indent 4 }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ . }}
http:
paths:
- path: {{ $ingressPath }}
backend:
serviceName: {{ $fullName }}
servicePort: {{ $servicePort }}
{{- end }}
{{- end }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "fullname" . }}-nginx
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Chart.Name }}
release: {{ .Release.Name | quote }}
data:
nginx.conf: |-
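    # The sub_filter rules and redirect locations below rewrite Kibana's
    # absolute asset and API paths into relative ones, so the UI can be
    # served behind a path-based reverse proxy without setting server.basePath.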
user nginx;
worker_processes auto;
error_log /dev/stdout warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
log_format main '[$time_local - $status] $remote_addr - $remote_user $request ($http_referer)';
proxy_connect_timeout 10;
proxy_read_timeout 180;
proxy_send_timeout 5;
proxy_buffering off;
proxy_cache_path /tmp/nginx levels=1:2 keys_zone=my_zone:100m inactive=1d max_size=10g;
server {
listen 80;
access_log off;
gzip on;
gzip_min_length 1k;
gzip_comp_level 2;
gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript image/jpeg image/gif image/png;
gzip_vary on;
gzip_disable "MSIE [1-6]\.";
proxy_set_header Host $host;
location ~* bundles/app/ {
proxy_set_header Accept-Encoding "";
proxy_pass http://localhost:5601;
sub_filter_types application/javascript;
sub_filter_once off;
sub_filter "loadStyleSheet('/" "loadStyleSheet('./";
sub_filter '/bundles/' './bundles/';
sub_filter '/built_assets/' './built_assets/';
}
location /built_assets/dlls/vendors.bundle.dll.js {
proxy_set_header Accept-Encoding "";
proxy_pass http://localhost:5601;
sub_filter_types *;
sub_filter_once off;
sub_filter '/built_assets/dlls/' './built_assets/dlls/';
}
location /bundles/commons.bundle.js {
proxy_set_header Accept-Encoding "";
proxy_pass http://localhost:5601;
sub_filter_types *;
sub_filter_once off;
sub_filter '/api/' './api/';
sub_filter '/app/' './app/';
sub_filter 'url:basePath.prepend("/".concat(navLink.icon))' 'url:basePath.prepend("./".concat(navLink.icon))';
sub_filter 'this.basePath=' 'this.basePath="."+';
sub_filter 'window.location=response.location' 'window.location="."+response.location';
}
location /bundles/kibana.bundle.js {
proxy_set_header Accept-Encoding "";
proxy_pass http://localhost:5601;
sub_filter_types *;
sub_filter_once off;
sub_filter '/api/' './api/';
sub_filter '/app/' './app/';
sub_filter 'injectedMetadata.i18n.translationsUrl' '"./"+injectedMetadata.i18n.translationsUrl';
sub_filter 'this.basePath=' 'this.basePath="."+';
sub_filter '../api/console/' './api/console/';
sub_filter '/s/' './s/';
}
location /bundles/ {
proxy_set_header Accept-Encoding "";
proxy_pass http://localhost:5601;
sub_filter_types *;
sub_filter_once off;
sub_filter 'injectedMetadata.i18n.translationsUrl' '"./"+injectedMetadata.i18n.translationsUrl';
}
location ~ ^/app/translations/(.*) {
return 301 /translations/$1;
}
location ~ ^/app/built_assets/(.*) {
return 301 /built_assets/$1;
}
location ~ ^/app/bundles/(.*) {
return 301 /bundles/$1;
}
location ~ ^/app/node_modules/(.*) {
return 301 /node_modules/$1;
}
location ~ ^/app/plugins/(.*) {
return 301 /plugins/$1;
}
location ~ ^/s/(.*)/app/api/(.*) {
if ($request_method = POST ) {
return 307 /api/$2$is_args$args;
}
return 301 /api/$2$is_args$args;
}
location ~ ^/app/s/(.*)/app/kibana {
return 301 /s/$1/app/kibana;
}
location ~ ^/s/(.*)/app/s/(.*)/elasticsearch/(.*) {
if ($request_method = POST ) {
return 307 /elasticsearch/$3$is_args$args;
}
return 301 /elasticsearch/$3$is_args$args;
}
location ~ ^/s/(.*)/app/built_assets/(.*) {
return 301 /built_assets/$2$is_args$args;
}
location ~ ^/s/(.*)/app/s/(.*)/(.*) {
return 301 /s/$2/$3$is_args$args;
}
location ~ ^/s/(.*)/app/bundles/(.*) {
return 301 /bundles/$2$is_args$args;
}
location ~ ^/s/(.*)/app/node_modules/(.*) {
return 301 /node_modules/$2$is_args$args;
}
location ~ ^/s/(.*)/app/app/(.*) {
return 301 /app/$2$is_args$args;
}
location ~ ^/app/api/(.*) {
if ($request_method = POST ) {
return 307 /api/$1$is_args$args;
}
return 301 /api/$1$is_args$args;
}
location ~ ^/app/elasticsearch/(.*) {
if ($request_method = POST ) {
return 307 /elasticsearch/$1$is_args$args;
}
return 301 /elasticsearch/$1$is_args$args;
}
location ~ ^/app/app/(.*) {
return 301 /app/$1$is_args$args;
}
location / {
proxy_cache my_zone;
proxy_cache_valid 200 302 1d;
proxy_cache_valid 301 30d;
proxy_cache_valid any 5m;
proxy_cache_bypass $http_cache_control;
add_header X-Proxy-Cache $upstream_cache_status;
add_header Cache-Control "public";
proxy_pass http://localhost:5601/;
sub_filter_types text/html;
sub_filter_once on;
if ($request_filename ~ .*\.(?:js|css|jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm)$) {
expires 90d;
}
}
}
}
apiVersion: v1
kind: Service
metadata:
name: {{ template "fullname" . }}
labels:
app: {{ .Chart.Name }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service }}
{{- with .Values.service.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
{{- if .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
protocol: TCP
name: http
targetPort: {{ .Values.httpPort }}
selector:
app: {{ .Chart.Name }}
release: {{ .Release.Name | quote }}
{{- if .Values.enableProxy }}
---
apiVersion: v1
kind: Service
metadata:
name: kibana-http
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Chart.Name }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service }}
kubernetes.io/cluster-service: "true"
spec:
type: ClusterIP
ports:
- name: http-access-kibana
protocol: TCP
port: 80
selector:
app: {{ .Chart.Name }}
release: {{ .Release.Name | quote }}
{{- end }}
---
elasticsearchURL: "" # "http://elasticsearch-master:9200"
elasticsearchHosts: "http://elasticsearch-master:9200"
replicas: 1
# Extra environment variables to append to the Kibana container.
# These will be appended to the current 'env:' key; you can use any of the
# standard Kubernetes env syntax here.
extraEnvs: []
# - name: MY_ENVIRONMENT_VAR
# value: the_value_goes_here
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
# - name: kibana-keystore
# secretName: kibana-keystore
# path: /usr/share/kibana/data/kibana.keystore
# subPath: kibana.keystore # optional
image: "docker.elastic.co/kibana/kibana"
imageTag: "7.3.0"
imagePullPolicy: "IfNotPresent"
# additional labels
labels: {}
podAnnotations: {}
# iam.amazonaws.com/role: es-cluster
resources:
requests:
cpu: "100m"
memory: "500m"
limits:
cpu: "1000m"
memory: "1Gi"
protocol: http
serverHost: "0.0.0.0"
healthCheckPath: "/app/kibana"
# Allows you to add any config files in /usr/share/kibana/config/
# such as kibana.yml
kibanaConfig: {}
# kibana.yml: |
# key:
# nestedkey: value
# If a Pod Security Policy is in use, it may be necessary to specify a security context as well as a service account.
language: en # en/zh-CN/ja-JP
podSecurityContext:
fsGroup: 1000
securityContext:
capabilities:
drop:
- ALL
# readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
serviceAccount: ""
# This is the PriorityClass setting, as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"
# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "hard"
httpPort: 5601
# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1
updateStrategy:
type: "Recreate"
service:
type: ClusterIP
port: 5601
nodePort:
annotations: {}
# cloud.google.com/load-balancer-type: "Internal"
# service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
# service.beta.kubernetes.io/azure-load-balancer-internal: "true"
# service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
# service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true"
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
path: /
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
imagePullSecrets: []
nodeSelector: {}
tolerations: []
affinity: {}
nameOverride: ""
fullnameOverride: ""
enableProxy: true
description: Official Elastic helm chart for Metricbeat
home: https://github.com/elastic/helm-charts
maintainers:
- email: helm-charts@elastic.co
name: Elastic
name: metricbeat
version: 7.3.0
appVersion: 7.3.0
sources:
- https://github.com/elastic/beats
icon: https://helm.elastic.co/icons/metricbeat.png
# Metricbeat Helm Chart
This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
This helm chart is a lightweight way to configure and run our official [Metricbeat docker image](https://www.elastic.co/guide/en/beats/metricbeat/current/running-on-docker.html).
## Requirements
* Kubernetes >= 1.8
* [Helm](https://helm.sh/) >= 2.8.0
## Installing
* Add the elastic helm charts repo
```
helm repo add elastic https://helm.elastic.co
```
* Install it
```
helm install --name metricbeat elastic/metricbeat
```
## Compatibility
This chart is tested with the latest supported versions. The currently tested versions are:
| 6.x | 7.x |
| ----- | ----- |
| 6.8.1 | 7.3.0 |
Examples of installing older major versions can be found in the [examples](./examples) directory.
While only the latest releases are tested, it is possible to easily install old or new releases by overriding the `imageTag`. To install version `7.3.0` of Metricbeat it would look like this:
```
helm install --name metricbeat elastic/metricbeat --set imageTag=7.3.0
```
## Configuration
| Parameter | Description | Default |
| ------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- |
| `metricbeatConfig` | Allows you to add any config files in `/usr/share/metricbeat` such as `metricbeat.yml`. See [values.yaml](./values.yaml) for an example of the formatting with the default configuration. | see [values.yaml](./values.yaml) |
| `extraEnvs` | Extra [environment variables](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config) which will be appended to the `env:` definition for the container | `[]` |
| `extraVolumeMounts`      | Templatable string of additional volumeMounts to be passed to the `tpl` function | `""` |
| `extraVolumes`           | Templatable string of additional volumes to be passed to the `tpl` function (see the example below this table) | `""` |
| `hostPathRoot` | Fully-qualified [hostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath) that will be used to persist Metricbeat registry data | `/var/lib` |
| `image` | The Metricbeat docker image | `docker.elastic.co/beats/metricbeat` |
| `imageTag` | The Metricbeat docker image tag | `7.3.0` |
| `imagePullPolicy` | The Kubernetes [imagePullPolicy](https://kubernetes.io/docs/concepts/containers/images/#updating-images) value | `IfNotPresent` |
| `imagePullSecrets` | Configuration for [imagePullSecrets](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret) so that you can use a private registry for your image | `[]` |
| `managedServiceAccount` | Whether the `serviceAccount` should be managed by this helm chart. Set this to `false` in order to manage your own service account and related roles. | `true` |
| `podAnnotations` | Configurable [annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) applied to all Metricbeat pods | `{}` |
| `podSecurityContext` | Configurable [podSecurityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) for Metricbeat pod execution environment | `runAsUser: 0`<br>`privileged: false` |
| `livenessProbe` | Parameters to pass to [liveness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) checks for values such as timeouts and thresholds. | `failureThreshold: 3`<br>`initialDelaySeconds: 10`<br>`periodSeconds: 10`<br>`successThreshold: 3`<br>`timeoutSeconds: 5` |
| `readinessProbe` | Parameters to pass to [readiness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) checks for values such as timeouts and thresholds. | `failureThreshold: 3`<br>`initialDelaySeconds: 10`<br>`periodSeconds: 10`<br>`successThreshold: 3`<br>`timeoutSeconds: 5` |
| `resources` | Allows you to set the [resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for the `DaemonSet` | `requests.cpu: 100m`<br>`requests.memory: 100Mi`<br>`limits.cpu: 1000m`<br>`limits.memory: 200Mi` |
| `serviceAccount`         | Custom [serviceAccount](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) that Metricbeat will use during execution. By default it will use the service account created by this chart. | `""` |
| `secretMounts` | Allows you easily mount a secret as a file inside the `DaemonSet`. Useful for mounting certificates and other secrets. See [values.yaml](./values.yaml) for an example | `[]` |
| `terminationGracePeriod` | Termination period (in seconds) to wait before killing Metricbeat pod process on pod shutdown | `30` |
| `tolerations` | Configurable [tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) | `[]` |
| `nodeSelector` | Configurable [nodeSelector](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) | `{}` |
| `affinity` | Configurable [affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) | `{}` |
| `updateStrategy` | The [updateStrategy](https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/#daemonset-update-strategy) for the `DaemonSet`. By default Kubernetes will kill and recreate pods on updates. Setting this to `OnDelete` will require that pods be deleted manually. | `RollingUpdate` |
| `replicas` | The replica count for the metricbeat deployment talking to kube-state-metrics | `1` |
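The `extraVolumes` and `extraVolumeMounts` parameters above are templatable strings rather than YAML lists; they are rendered through `tpl` before being inserted into the `DaemonSet`. A minimal sketch (the `extras` volume name is only an example):
```
extraVolumes: |
  - name: extras
    emptyDir: {}
extraVolumeMounts: |
  - name: extras
    mountPath: /usr/share/extras
    readOnly: true
```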
## Examples
In [examples/](./examples) you will find some example configurations. These examples are used for the automated testing of this helm chart.
### Default
* Deploy the [default Elasticsearch helm chart](../elasticsearch/README.md#default)
* Deploy Metricbeat with the default values
```
cd examples/default
make
```
* You can now set up a port forward for Elasticsearch to observe Metricbeat indices
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
## Testing
This chart uses [pytest](https://docs.pytest.org/en/latest/) to test the templating logic. The dependencies for testing can be installed from the [`requirements.txt`](../requirements.txt) in the parent directory.
```
pip install -r ../requirements.txt
make pytest
```
You can also use `helm template` to look at the YAML being generated
```
make template
```
It is possible to run all of the tests and linting inside of a docker container
```
make test
```
## Integration Testing
Integration tests are run using [goss](https://github.com/aelsabbahy/goss/blob/master/docs/manual.md), a serverspec-like tool written in Go. See [goss.yaml](examples/default/test/goss.yaml) for an example of what the tests look like.
To run the goss tests against the default example:
```
cd examples/default
make goss
```
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
apiVersion: v1
appVersion: 1.7.2
description: Install kube-state-metrics to generate and expose cluster-level metrics
home: https://github.com/kubernetes/kube-state-metrics/
keywords:
- metric
- monitoring
- prometheus
- kubernetes
maintainers:
- email: jose@armesto.net
name: fiunchinho
- email: tariq.ibrahim@mulesoft.com
name: tariq1890
name: kube-state-metrics
sources:
- https://github.com/kubernetes/kube-state-metrics/
version: 2.3.1
approvers:
- fiunchinho
- tariq1890
reviewers:
- fiunchinho
- tariq1890
# kube-state-metrics Helm Chart
* Installs the [kube-state-metrics agent](https://github.com/kubernetes/kube-state-metrics).
## Installing the Chart
To install the chart with the release name `my-release`:
```bash
$ helm install --name my-release stable/kube-state-metrics
```
## Configuration
| Parameter | Description | Default |
|:----------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------|
| `image.repository` | The image repository to pull from | quay.io/coreos/kube-state-metrics |
| `image.tag` | The image tag to pull from | `v1.7.2` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `replicas` | Number of replicas | `1` |
| `service.port` | The port of the container | `8080` |
| `service.annotations`                   | Annotations to be added to the service                                                  | `{}`                                        |
| `customLabels` | Custom labels to apply to service, deployment and pods | `{}` |
| `hostNetwork` | Whether or not to use the host network | `false` |
| `prometheusScrape`                      | Whether or not to enable the `prometheus.io/scrape` annotation                          | `true`                                      |
| `rbac.create` | If true, create & use RBAC resources | `true` |
| `serviceAccount.create` | If true, create & use serviceAccount | `true` |
| `serviceAccount.name` | If not set & create is true, use template fullname | |
| `serviceAccount.imagePullSecrets` | Specify image pull secrets field | `[]` |
| `podSecurityPolicy.enabled` | If true, create & use PodSecurityPolicy resources | `false` |
| `podSecurityPolicy.annotations`         | Specify pod annotations in the pod security policy                                      | `{}`                                        |
| `securityContext.enabled` | Enable security context | `true` |
| `securityContext.fsGroup` | Group ID for the container | `65534` |
| `securityContext.runAsUser` | User ID for the container | `65534` |
| `priorityClassName` | Name of Priority Class to assign pods | `nil` |
| `nodeSelector`                           | Node labels for pod assignment                                                          | `{}`                                        |
| `affinity`                               | Affinity settings for pod assignment                                                    | `{}`                                        |
| `tolerations`                            | Tolerations for pod assignment                                                          | `[]`                                        |
| `podAnnotations`                         | Annotations to be added to the pod                                                      | `{}`                                        |
| `resources`                              | kube-state-metrics resource requests and limits                                         | `{}`                                        |
| `collectors.certificatesigningrequests` | Enable the certificatesigningrequests collector. | `true` |
| `collectors.configmaps` | Enable the configmaps collector. | `true` |
| `collectors.cronjobs` | Enable the cronjobs collector. | `true` |
| `collectors.daemonsets` | Enable the daemonsets collector. | `true` |
| `collectors.deployments` | Enable the deployments collector. | `true` |
| `collectors.endpoints` | Enable the endpoints collector. | `true` |
| `collectors.horizontalpodautoscalers` | Enable the horizontalpodautoscalers collector. | `true` |
| `collectors.ingresses` | Enable the ingresses collector. | `true` |
| `collectors.jobs` | Enable the jobs collector. | `true` |
| `collectors.limitranges` | Enable the limitranges collector. | `true` |
| `collectors.namespaces` | Enable the namespaces collector. | `true` |
| `collectors.nodes` | Enable the nodes collector. | `true` |
| `collectors.persistentvolumeclaims` | Enable the persistentvolumeclaims collector. | `true` |
| `collectors.persistentvolumes` | Enable the persistentvolumes collector. | `true` |
| `collectors.poddisruptionbudgets` | Enable the poddisruptionbudgets collector. | `true` |
| `collectors.pods` | Enable the pods collector. | `true` |
| `collectors.replicasets` | Enable the replicasets collector. | `true` |
| `collectors.replicationcontrollers` | Enable the replicationcontrollers collector. | `true` |
| `collectors.resourcequotas` | Enable the resourcequotas collector. | `true` |
| `collectors.secrets` | Enable the secrets collector. | `true` |
| `collectors.services` | Enable the services collector. | `true` |
| `collectors.statefulsets` | Enable the statefulsets collector. | `true` |
| `collectors.storageclasses` | Enable the storageclasses collector. | `true` |
| `collectors.verticalpodautoscalers` | Enable the verticalpodautoscalers collector. | `false` |
| `prometheus.monitor.enabled`            | Set this to `true` to create ServiceMonitor for Prometheus operator (see the example below this table) | `false`                                     |
| `prometheus.monitor.additionalLabels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | `{}` |
| `prometheus.monitor.namespace` | Namespace where servicemonitor resource should be created | `the same namespace as kube-state-metrics` |
| `prometheus.monitor.honorLabels` | Honor metric labels | `false` |
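For example, to render a `ServiceMonitor` for a Prometheus operator deployment (a sketch; the `release=prom` label is only an assumption about what your operator's `serviceMonitorSelector` matches):
```bash
$ helm install stable/kube-state-metrics \
    --set prometheus.monitor.enabled=true \
    --set prometheus.monitor.additionalLabels.release=prom
```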
kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects.
The exposed metrics can be found here:
https://github.com/kubernetes/kube-state-metrics/blob/master/docs/README.md#exposed-metrics
The metrics are exported on the HTTP endpoint /metrics on the listening port.
In your case, {{ template "kube-state-metrics.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local:{{ .Values.service.port }}/metrics
They are served either as plaintext or protobuf depending on the Accept header.
They are designed to be consumed either by Prometheus itself or by a scraper that is compatible with scraping a Prometheus client endpoint.
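For example, you can check the endpoint locally with a port forward (a sketch; adjust the local port as needed):
  $ kubectl port-forward svc/{{ template "kube-state-metrics.fullname" . }} 8080:{{ .Values.service.port }}
  $ curl http://localhost:8080/metrics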
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "kube-state-metrics.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "kube-state-metrics.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "kube-state-metrics.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "kube-state-metrics.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
name: {{ template "kube-state-metrics.fullname" . }}
rules:
{{ if .Values.collectors.certificatesigningrequests }}
- apiGroups: ["certificates.k8s.io"]
resources:
- certificatesigningrequests
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.configmaps }}
- apiGroups: [""]
resources:
- configmaps
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.cronjobs }}
- apiGroups: ["batch"]
resources:
- cronjobs
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.daemonsets }}
- apiGroups: ["extensions", "apps"]
resources:
- daemonsets
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.deployments }}
- apiGroups: ["extensions", "apps"]
resources:
- deployments
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.endpoints }}
- apiGroups: [""]
resources:
- endpoints
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.horizontalpodautoscalers }}
- apiGroups: ["autoscaling"]
resources:
- horizontalpodautoscalers
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.ingresses }}
- apiGroups: ["extensions", "networking.k8s.io"]
resources:
- ingresses
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.jobs }}
- apiGroups: ["batch"]
resources:
- jobs
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.limitranges }}
- apiGroups: [""]
resources:
- limitranges
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.namespaces }}
- apiGroups: [""]
resources:
- namespaces
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.nodes }}
- apiGroups: [""]
resources:
- nodes
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.persistentvolumeclaims }}
- apiGroups: [""]
resources:
- persistentvolumeclaims
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.persistentvolumes }}
- apiGroups: [""]
resources:
- persistentvolumes
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.poddisruptionbudgets }}
- apiGroups: ["policy"]
resources:
- poddisruptionbudgets
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.pods }}
- apiGroups: [""]
resources:
- pods
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.replicasets }}
- apiGroups: ["extensions", "apps"]
resources:
- replicasets
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.replicationcontrollers }}
- apiGroups: [""]
resources:
- replicationcontrollers
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.resourcequotas }}
- apiGroups: [""]
resources:
- resourcequotas
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.secrets }}
- apiGroups: [""]
resources:
- secrets
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.services }}
- apiGroups: [""]
resources:
- services
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.statefulsets }}
- apiGroups: ["apps"]
resources:
- statefulsets
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.storageclasses }}
- apiGroups: ["storage.k8s.io"]
resources:
- storageclasses
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.verticalpodautoscalers }}
- apiGroups: ["autoscaling.k8s.io"]
resources:
- verticalpodautoscalers
verbs: ["list", "watch"]
{{ end -}}
{{- end -}}
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
name: {{ template "kube-state-metrics.fullname" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "kube-state-metrics.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "kube-state-metrics.fullname" . }}
namespace: {{ .Release.Namespace }}
{{- end -}}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "kube-state-metrics.fullname" . }}
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
spec:
selector:
matchLabels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
replicas: {{ .Values.replicas }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
app.kubernetes.io/instance: "{{ .Release.Name }}"
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 8 }}
{{- end }}
{{- if .Values.podAnnotations }}
annotations:
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
hostNetwork: {{ .Values.hostNetwork }}
serviceAccountName: {{ template "kube-state-metrics.serviceAccountName" . }}
{{- if .Values.securityContext.enabled }}
securityContext:
fsGroup: {{ .Values.securityContext.fsGroup }}
runAsUser: {{ .Values.securityContext.runAsUser }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
args:
{{ if .Values.collectors.certificatesigningrequests }}
- --collectors=certificatesigningrequests
{{ end }}
{{ if .Values.collectors.configmaps }}
- --collectors=configmaps
{{ end }}
{{ if .Values.collectors.cronjobs }}
- --collectors=cronjobs
{{ end }}
{{ if .Values.collectors.daemonsets }}
- --collectors=daemonsets
{{ end }}
{{ if .Values.collectors.deployments }}
- --collectors=deployments
{{ end }}
{{ if .Values.collectors.endpoints }}
- --collectors=endpoints
{{ end }}
{{ if .Values.collectors.horizontalpodautoscalers }}
- --collectors=horizontalpodautoscalers
{{ end }}
{{ if .Values.collectors.ingresses }}
- --collectors=ingresses
{{ end }}
{{ if .Values.collectors.jobs }}
- --collectors=jobs
{{ end }}
{{ if .Values.collectors.limitranges }}
- --collectors=limitranges
{{ end }}
{{ if .Values.collectors.namespaces }}
- --collectors=namespaces
{{ end }}
{{ if .Values.collectors.nodes }}
- --collectors=nodes
{{ end }}
{{ if .Values.collectors.persistentvolumeclaims }}
- --collectors=persistentvolumeclaims
{{ end }}
{{ if .Values.collectors.persistentvolumes }}
- --collectors=persistentvolumes
{{ end }}
{{ if .Values.collectors.poddisruptionbudgets }}
- --collectors=poddisruptionbudgets
{{ end }}
{{ if .Values.collectors.pods }}
- --collectors=pods
{{ end }}
{{ if .Values.collectors.replicasets }}
- --collectors=replicasets
{{ end }}
{{ if .Values.collectors.replicationcontrollers }}
- --collectors=replicationcontrollers
{{ end }}
{{ if .Values.collectors.resourcequotas }}
- --collectors=resourcequotas
{{ end }}
{{ if .Values.collectors.secrets }}
- --collectors=secrets
{{ end }}
{{ if .Values.collectors.services }}
- --collectors=services
{{ end }}
{{ if .Values.collectors.statefulsets }}
- --collectors=statefulsets
{{ end }}
{{ if .Values.collectors.storageclasses }}
- --collectors=storageclasses
{{ end }}
{{ if .Values.collectors.verticalpodautoscalers }}
- --collectors=verticalpodautoscalers
{{ end }}
{{ if .Values.namespace }}
- --namespace={{ .Values.namespace }}
{{ end }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
ports:
- containerPort: 8080
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 5
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /
port: 8080
initialDelaySeconds: 5
timeoutSeconds: 5
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- if .Values.affinity }}
affinity:
{{ toYaml .Values.affinity | indent 8 }}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
{{- if .Values.podSecurityPolicy.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "kube-state-metrics.fullname" . }}
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.podSecurityPolicy.annotations }}
annotations:
{{ toYaml .Values.podSecurityPolicy.annotations | indent 4 }}
{{- end }}
spec:
privileged: false
volumes:
- 'secret'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
rule: 'MustRunAsNonRoot'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
readOnlyRootFilesystem: false
{{- end }}
{{- if .Values.podSecurityPolicy.enabled -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
name: psp-{{ template "kube-state-metrics.fullname" . }}
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- {{ template "kube-state-metrics.fullname" . }}
{{- end }}
{{- if .Values.podSecurityPolicy.enabled -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
name: psp-{{ template "kube-state-metrics.fullname" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: psp-{{ template "kube-state-metrics.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "kube-state-metrics.fullname" . }}
namespace: {{ .Release.Namespace }}
{{- end }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "kube-state-metrics.fullname" . }}
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
annotations:
{{- if .Values.prometheusScrape }}
prometheus.io/scrape: '{{ .Values.prometheusScrape }}'
{{- end }}
{{- if .Values.service.annotations }}
{{- toYaml .Values.service.annotations | nindent 4 }}
{{- end }}
spec:
type: "{{ .Values.service.type }}"
ports:
- name: "http"
protocol: TCP
port: {{ .Values.service.port }}
{{- if .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
targetPort: 8080
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: "{{ .Values.service.loadBalancerIP }}"
{{- end }}
selector:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
name: {{ template "kube-state-metrics.fullname" . }}
imagePullSecrets:
{{ toYaml .Values.serviceAccount.imagePullSecrets | indent 2 }}
{{- end -}}
{{- if .Values.prometheus.monitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "kube-state-metrics.fullname" . }}
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
{{- if .Values.prometheus.monitor.additionalLabels }}
{{ toYaml .Values.prometheus.monitor.additionalLabels | indent 4 }}
{{- end }}
spec:
selector:
matchLabels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
endpoints:
- port: http
{{- if .Values.prometheus.monitor.honorLabels }}
honorLabels: true
{{- end }}
{{- end }}
# Default values for kube-state-metrics.
prometheusScrape: true
image:
repository: quay.io/coreos/kube-state-metrics
tag: v1.7.2
pullPolicy: IfNotPresent
replicas: 1
service:
port: 8080
# Default to clusterIP for backward compatibility
type: ClusterIP
nodePort: 0
loadBalancerIP: ""
annotations: {}
customLabels: {}
hostNetwork: false
rbac:
# If true, create & use RBAC resources
create: true
serviceAccount:
  # Specifies whether a ServiceAccount should be created; requires rbac.create to be true
create: true
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
name:
# Reference to one or more secrets to be used when pulling images
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
prometheus:
monitor:
enabled: false
additionalLabels: {}
namespace: ""
honorLabels: false
## Specify if a Pod Security Policy for kube-state-metrics must be created
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
##
podSecurityPolicy:
enabled: false
annotations: {}
## Specify pod annotations
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
##
# seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
# seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
# apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
securityContext:
enabled: true
runAsUser: 65534
fsGroup: 65534
## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
## Affinity settings for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
affinity: {}
## Tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# Annotations to be added to the pod
podAnnotations: {}
## Assign a PriorityClassName to pods if set
# priorityClassName: ""
# Available collectors for kube-state-metrics. By default all available
# collectors are enabled.
collectors:
certificatesigningrequests: true
configmaps: true
cronjobs: true
daemonsets: true
deployments: true
endpoints: true
horizontalpodautoscalers: true
ingresses: true
jobs: true
limitranges: true
namespaces: true
nodes: true
persistentvolumeclaims: true
persistentvolumes: true
poddisruptionbudgets: true
pods: true
replicasets: true
replicationcontrollers: true
resourcequotas: true
secrets: true
services: true
statefulsets: true
storageclasses: true
verticalpodautoscalers: false
# Namespace to collect resources from. By default all namespaces are collected.
# namespace: ""
dependencies:
- name: kube-state-metrics
repository: https://kubernetes-charts.storage.googleapis.com
version: 2.3.1
digest: sha256:505affb1bd3fa9d6952e5c9083d4e3c4e1dfe8ac8879299f336a4ae34591860b
generated: 2019-08-22T14:50:34.007904+08:00
dependencies:
- name: 'kube-state-metrics'
version: '2.3.1'
repository: '@stable'
1. Watch all containers come up.
$ kubectl get pods --namespace={{ .Release.Namespace }} -l app={{ template "fullname" . }} -w
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Use the fullname if the serviceAccount value is not set
*/}}
{{- define "serviceAccount" -}}
{{- if .Values.serviceAccount }}
{{- .Values.serviceAccount -}}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
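Given these helpers, a release named `efk` of the `metricbeat` chart with `serviceAccount` unset would resolve as follows (a sketch of the rendered values, with assumed names):

```yaml
# Resolution sketch for release "efk", chart "metricbeat":
#   {{ template "name" . }}           => metricbeat
#   {{ template "fullname" . }}       => efk-metricbeat
#   {{ template "serviceAccount" . }} => efk-metricbeat   (fullname fallback)
# Setting serviceAccount: "my-sa" in values makes the last helper render "my-sa".
```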
{{- if .Values.managedServiceAccount }}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: {{ template "serviceAccount" . }}-cluster-role
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
rules:
- apiGroups:
- ""
resources:
- namespaces
- pods
verbs:
- get
- list
- watch
{{- end -}}
{{- if .Values.managedServiceAccount }}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: {{ template "serviceAccount" . }}-cluster-role-binding
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
roleRef:
kind: ClusterRole
name: {{ template "serviceAccount" . }}-cluster-role
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: {{ template "serviceAccount" . }}
namespace: {{ .Release.Namespace }}
{{- end -}}
{{- if .Values.metricbeatConfig }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "fullname" . }}-config
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
data:
{{- range $path, $config := .Values.metricbeatConfig }}
{{ $path }}: |
{{ $config | indent 4 -}}
{{- end -}}
{{- end -}}
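Because the template ranges over `metricbeatConfig`, each key of that map becomes a file-shaped entry under `data:`. A sketch of the rendered ConfigMap for an assumed release named `efk` with a single `metricbeat.yml` key:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: efk-metricbeat-config   # "<fullname>-config" for an assumed release "efk"
data:
  metricbeat.yml: |
    output.elasticsearch:
      hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
```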
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ template "fullname" . }}
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
spec:
selector:
matchLabels:
app: "{{ template "fullname" . }}"
release: {{ .Release.Name | quote }}
updateStrategy:
type: {{ .Values.updateStrategy }}
template:
metadata:
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{/* This forces a restart if the configmap has changed */}}
{{- if .Values.metricbeatConfig }}
configChecksum: {{ include (print .Template.BasePath "/configmap.yaml") . | sha256sum | trunc 63 }}
{{- end }}
name: "{{ template "fullname" . }}"
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
spec:
{{- with .Values.tolerations }}
tolerations: {{ toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector: {{ toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity: {{ toYaml . | nindent 8 -}}
{{- end }}
serviceAccountName: {{ template "serviceAccount" . }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriod }}
volumes:
{{- range .Values.secretMounts }}
- name: {{ .name }}
secret:
secretName: {{ .secretName }}
{{- end }}
{{- if .Values.metricbeatConfig }}
- name: metricbeat-config
configMap:
defaultMode: 0600
name: {{ template "fullname" . }}-config
{{- end }}
- name: data
hostPath:
path: {{ .Values.hostPathRoot }}/{{ template "fullname" . }}-{{ .Release.Namespace }}-data
type: DirectoryOrCreate
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varrundockersock
hostPath:
path: /var/run/docker.sock
{{- if .Values.extraVolumes }}
{{ tpl .Values.extraVolumes . | indent 6 }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
{{- end }}
containers:
- name: "metricbeat"
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
args:
- "-e"
- "-E"
- "http.enabled=true"
livenessProbe:
exec:
command:
- sh
- -c
- |
set -e
curl --fail 127.0.0.1:5066
{{ toYaml .Values.livenessProbe | indent 10 }}
readinessProbe:
exec:
command:
- sh
- -c
- |
set -e
metricbeat test output
{{ toYaml .Values.readinessProbe | indent 10 }}
resources:
{{ toYaml .Values.resources | indent 10 }}
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
{{- if .Values.extraEnvs }}
{{ toYaml .Values.extraEnvs | indent 8 }}
{{- end }}
{{- if .Values.podSecurityContext }}
securityContext:
{{ toYaml .Values.podSecurityContext | indent 10 }}
{{- end }}
volumeMounts:
{{- range .Values.secretMounts }}
- name: {{ .name }}
mountPath: {{ .path }}
{{- if .subPath }}
subPath: {{ .subPath }}
{{- end }}
{{- end }}
{{- range $path, $config := .Values.metricbeatConfig }}
- name: metricbeat-config
mountPath: /usr/share/metricbeat/{{ $path }}
readOnly: true
subPath: {{ $path }}
{{- end }}
- name: data
mountPath: /usr/share/metricbeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
# Necessary when using autodiscovery; avoid mounting it otherwise
# See: https://www.elastic.co/guide/en/beats/metricbeat/master/configuration-autodiscover.html
- name: varrundockersock
mountPath: /var/run/docker.sock
readOnly: true
{{- if .Values.extraVolumeMounts }}
{{ tpl .Values.extraVolumeMounts . | indent 8 }}
{{- end }}
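Note that `extraVolumes` and `extraVolumeMounts` are passed through `tpl`, so they are defined in values as strings rather than YAML lists. A minimal sketch (the volume name and mount path are placeholders):

```yaml
extraVolumes: |
  - name: extras
    emptyDir: {}
extraVolumeMounts: |
  - name: extras
    mountPath: /usr/share/extras
    readOnly: true
```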
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: '{{ template "fullname" . }}-metrics'
labels:
app: '{{ template "fullname" . }}-metrics'
chart: '{{ .Chart.Name }}-{{ .Chart.Version }}'
heritage: '{{ .Release.Service }}'
release: '{{ .Release.Name }}'
spec:
replicas: {{ .Values.replicas }}
selector:
matchLabels:
app: '{{ template "fullname" . }}-metrics'
chart: '{{ .Chart.Name }}-{{ .Chart.Version }}'
heritage: '{{ .Release.Service }}'
release: '{{ .Release.Name }}'
template:
metadata:
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{/* This forces a restart if the configmap has changed */}}
{{- if .Values.metricbeatConfig }}
configChecksum: {{ include (print .Template.BasePath "/configmap.yaml") . | sha256sum | trunc 63 }}
{{- end }}
labels:
app: '{{ template "fullname" . }}-metrics'
chart: '{{ .Chart.Name }}-{{ .Chart.Version }}'
heritage: '{{ .Release.Service }}'
release: '{{ .Release.Name }}'
spec:
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 6 }}
{{- end }}
serviceAccountName: {{ template "serviceAccount" . }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriod }}
volumes:
{{- range .Values.secretMounts }}
- name: {{ .name }}
secret:
secretName: {{ .secretName }}
{{- end }}
{{- if .Values.metricbeatConfig }}
- name: metricbeat-config
configMap:
defaultMode: 0600
name: {{ template "fullname" . }}-config
{{- end }}
{{- if .Values.extraVolumes }}
{{ tpl .Values.extraVolumes . | indent 6 }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
{{- end }}
containers:
- name: "metricbeat"
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
args:
- "-c"
- "/usr/share/metricbeat/kube-state-metrics-metricbeat.yml"
- "-e"
- "-E"
- "http.enabled=true"
livenessProbe:
exec:
command:
- sh
- -c
- |
set -e
curl --fail 127.0.0.1:5066
{{ toYaml .Values.livenessProbe | indent 10 }}
readinessProbe:
exec:
command:
- sh
- -c
- |
set -e
metricbeat test output
{{ toYaml .Values.readinessProbe | indent 10 }}
resources:
{{ toYaml .Values.resources | indent 10 }}
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: KUBE_STATE_METRICS_HOSTS
value: "$({{ .Release.Name | replace "-" "_" | upper }}_KUBE_STATE_METRICS_SERVICE_HOST):$({{ .Release.Name | replace "-" "_" | upper }}_KUBE_STATE_METRICS_SERVICE_PORT_HTTP)"
{{- if .Values.extraEnvs }}
{{ toYaml .Values.extraEnvs | indent 8 }}
{{- end }}
{{- if .Values.podSecurityContext }}
securityContext:
{{ toYaml .Values.podSecurityContext | indent 10 }}
{{- end }}
volumeMounts:
{{- range .Values.secretMounts }}
- name: {{ .name }}
mountPath: {{ .path }}
{{- if .subPath }}
subPath: {{ .subPath }}
{{- end }}
{{- end }}
{{- range $path, $config := .Values.metricbeatConfig }}
- name: metricbeat-config
mountPath: /usr/share/metricbeat/{{ $path }}
readOnly: true
subPath: {{ $path }}
{{- end }}
{{- if .Values.extraVolumeMounts }}
{{ tpl .Values.extraVolumeMounts . | indent 8 }}
{{- end }}
{{- if .Values.managedServiceAccount }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "serviceAccount" . }}
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
{{- end -}}
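To run against a pre-created ServiceAccount instead of the chart-managed RBAC objects above, rendering can be short-circuited via values; the account name here is hypothetical:

```yaml
managedServiceAccount: false
serviceAccount: "existing-metricbeat-sa"   # must already exist in the release namespace
```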
---
# Allows you to add any config files in /usr/share/metricbeat
# such as metricbeat.yml
metricbeatConfig:
metricbeat.yml: |
system:
hostfs: /hostfs
metricbeat.modules:
- module: kubernetes
metricsets:
- container
- node
- pod
- system
- volume
period: 10s
host: "${NODE_NAME}"
hosts: ["${NODE_NAME}:10255"]
processors:
- add_kubernetes_metadata:
in_cluster: true
- module: kubernetes
enabled: true
metricsets:
- event
- module: system
period: 10s
metricsets:
- cpu
- load
- memory
- network
- process
- process_summary
processes: ['.*']
process.include_top_n:
by_cpu: 5
by_memory: 5
- module: system
period: 1m
metricsets:
- filesystem
- fsstat
processors:
- drop_event.when.regexp:
system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
output.elasticsearch:
hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
kube-state-metrics-metricbeat.yml: |
metricbeat.modules:
- module: kubernetes
enabled: true
metricsets:
- state_node
- state_deployment
- state_replicaset
- state_pod
- state_container
period: 10s
hosts: ["${KUBE_STATE_METRICS_HOSTS:kube-state-metrics:8080}"]
output.elasticsearch:
hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
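The `${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}` syntax above is Beats environment-variable expansion with a default, so the output can be redirected without editing the config, for example via `extraEnvs` (the endpoint below is an assumption):

```yaml
extraEnvs:
  - name: ELASTICSEARCH_HOSTS
    value: "my-elasticsearch.example.com:9200"   # assumed endpoint
```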
# Number of replicas for the kube-state-metrics Metricbeat deployment
replicas: 1
# Extra environment variables to append to the DaemonSet pod spec.
# This will be appended to the current 'env:' key. You can use any of the Kubernetes env
# syntax here.
extraEnvs: []
# - name: MY_ENVIRONMENT_VAR
# value: the_value_goes_here
extraVolumeMounts: ""
# - name: extras
# mountPath: /usr/share/extras
# readOnly: true
extraVolumes: ""
# - name: extras
# emptyDir: {}
# Root directory where Metricbeat writes data in order to persist registry state (file positions and other metadata) across pod restarts.
hostPathRoot: /var/lib
image: "docker.elastic.co/beats/metricbeat"
imageTag: "7.3.0"
imagePullPolicy: "IfNotPresent"
imagePullSecrets: []
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
# Whether this chart should self-manage its service account, role, and associated role binding.
managedServiceAccount: true
podAnnotations: {}
# iam.amazonaws.com/role: es-cluster
# Various pod security context settings. Bear in mind that many of these have an impact on Metricbeat functioning properly.
#
# - Filesystem group for the Metricbeat user. The official Elastic docker images always use an id of 1000.
# - User that the container will execute as. Typically necessary to run as root (0) in order to properly collect host metrics.
# - Whether to execute the Metricbeat containers as privileged containers. Typically not necessary unless running within environments such as OpenShift.
podSecurityContext:
runAsUser: 0
privileged: false
resources:
requests:
cpu: "100m"
memory: "100Mi"
limits:
cpu: "1000m"
memory: "200Mi"
# Custom service account override that the pod will use
serviceAccount: ""
# A list of secrets and their paths to mount inside the pod.
# This is useful for mounting certificates and other sensitive values.
secretMounts: []
# - name: metricbeat-certificates
# secretName: metricbeat-certificates
# path: /usr/share/metricbeat/certs
# How long to wait for metricbeat pods to stop gracefully
terminationGracePeriod: 30
tolerations: []
nodeSelector: {}
affinity: {}
updateStrategy: RollingUpdate
# Override various naming aspects of this chart
# Only edit these if you know what you're doing
nameOverride: ""
fullnameOverride: ""
labels:
io.cattle.role: cluster # options are cluster/project
io.rancher.app_min_version: 7.3.0
categories:
- elasticsearch
questions:
- variable: defaultImage
default: true
description: "Use default Docker image"
label: Use Default Image
type: boolean
group: "Container Images"
show_subquestion_if: false
subquestions:
- variable: elasticsearch.image
default: "ranchercharts/elasticsearch-elasticsearch"
description: "Elasticsearch image name"
type: string
label: ElasticSearch Image Name
- variable: elasticsearch.imageTag
default: "7.3.0"
description: "Elasticsearch image tag"
type: string
label: Elasticsearch Image Tag
- variable: kibana.image
default: "ranchercharts/kibana-kibana"
description: "Kibana image name"
type: string
label: Kibana Image Name
- variable: kibana.imageTag
default: "7.3.0"
description: "Kibana image tag"
type: string
label: Kibana Image Tag
- variable: filebeat.image
default: "ranchercharts/beats-filebeat"
description: "File-beat image name"
type: string
label: Filebeat Image Name
- variable: filebeat.imageTag
default: "7.3.0"
description: "File-beat image tag"
type: string
label: Filebeat Image Tag
- variable: metricbeat.image
default: "ranchercharts/beats-metricbeat"
description: "Metric-beat image name"
type: string
label: Metricbeat Image Name
- variable: metricbeat.imageTag
default: "7.3.0"
description: "Metric-beat image tag"
type: string
label: Metricbeat Image Tag
- variable: metricbeat.kube-state-metrics.image.repository
default: "ranchercharts/coreos-kube-state-metrics"
description: "Kube-state-metrics image name"
type: string
label: Kube-state-metrics Image Name
- variable: metricbeat.kube-state-metrics.image.tag
default: "v1.7.2"
description: "Kube-state-metrics image tag"
type: string
label: Kube-state-metrics Image Tag
# kibana configs
- variable: kibana.language
default: "en"
description: "Config Kiabana Language"
type: enum
label: Config Kibana Language
required: true
group: "Kibana Config"
options:
- "en"
- "zh-CN"
- "ja-JP"
- variable: kibana.service.type
default: "NodePort"
description: "Kibana service type"
type: enum
label: Kibana Service Type
required: true
group: "Kibana Config"
options:
- "ClusterIP"
- "NodePort"
- variable: kibana.enableProxy
default: true
description: "Expose kibana using nginx proxy"
type: boolean
label: Expose Kibana using Nginx Proxy
group: "Kibana Config"
# elasticSearch configs
- variable: elasticsearch.replicas
default: 3
description: "config elasticSearch replicas"
type: int
label: Config Elasticsearch Replicas
required: true
group: "Elasticsearch Config"
min: 3
max: 99
- variable: elasticsearch.antiAffinity
default: "hard"
description: "Config elasticsearch pod antiAffinity type"
type: enum
label: Config Elasticsearch Pod AntiAffinity Type
required: true
group: "Elasticsearch Config"
options:
- "hard"
- "soft"
- variable: elasticsearch.persistence.enabled
default: false
description: "Enable persistent volume for elasticsearch"
type: boolean
required: true
label: Enable Elasticsearch Persistent Volume
show_subquestion_if: true
group: "Elasticsearch Config"
subquestions:
- variable: elasticsearch.persistence.storageClass
default: ""
description: "If undefined or set to null, using the default StorageClass. Defaults to null."
type: storageclass
label: Storage Class for Elasticsearch Node
- variable: elasticsearch.persistence.size
default: "30Gi"
description: "Elasticsearch persistent volume size"
required: true
type: string
label: Elasticsearch Persistent Volume Size of Node
# file-beats configs
- variable: filebeat.enabled
default: true
description: "Enable filebeat, lightweight shipper for logs"
type: boolean
label: Enable Filebeat
required: true
group: "Filebeat Config"
- variable: metricbeat.enabled
default: true
description: "Enable metric-beat, lightweight shipper for metrics"
type: boolean
label: Enable Metricbeat
required: true
group: "Metricbeat Config"
dependencies:
- name: elasticsearch
version: 7.3.0
condition: elasticsearch.enabled
repository: file://./charts/elasticsearch
- name: kibana
version: 7.3.0
condition: kibana.enabled
repository: file://./charts/kibana
- name: filebeat
version: 7.3.0
condition: filebeat.enabled
repository: file://./charts/filebeat
- name: metricbeat
version: 7.3.0
condition: metricbeat.enabled
repository: file://./charts/metricbeat
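Each subchart above is gated by a `condition` flag, so individual components can be switched off from the parent chart's values. For example, a sketch disabling Filebeat while keeping the rest of the stack:

```yaml
filebeat:
  enabled: false
```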
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "efk.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "efk.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
# Default values for efk.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
elasticsearch:
enabled: true
image: "ranchercharts/elasticsearch-elasticsearch"
imageTag: "7.3.0"
enableProxy: true
kibana:
enabled: true
image: "ranchercharts/kibana-kibana"
imageTag: "7.3.0"
filebeat:
enabled: true
image: "ranchercharts/beats-filebeat"
imageTag: "7.3.0"
metricbeat:
enabled: true
image: "ranchercharts/beats-metricbeat"
imageTag: "7.3.0"
kube-state-metrics:
image:
repository: ranchercharts/coreos-kube-state-metrics
tag: v1.7.2
global:
nginxProxy:
repository: rancher/nginx
tag: 1.15.8-alpine