Unverified Commit 97969130 by Denise Committed by GitHub

Merge pull request #219 from guangbochen/efk2.3

Add efk v7.3.0 chart
parents e93f0f5c e121ddb3
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
apiVersion: v1
description: EFK (Elasticsearch + Filebeat + Kibana)
name: efk
version: 7.3.0
appVersion: 7.3.0
icon: file://../efk-logo.jpg
sources:
- https://www.elastic.co/products/elasticsearch
- https://www.elastic.co/products/kibana
- https://fluentbit.io/
home: https://www.elastic.co
maintainers:
- name: guangbochen
email: support@rancher.com
engine: gotpl
# Elastic Stack Kubernetes Helm Charts
This functionality is in beta and is subject to change. The design and code are less mature than official GA features and are being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
## Charts
Please look in the chart directories for the documentation for each chart. These helm charts are designed to be a lightweight way to configure our official docker images (a minimal example follows the table below). Links to the relevant docker image documentation have also been added below.
| Chart | Docker documentation |
| ------------------------------------------ | ------------------------------------------------------------------------------- |
| [Elasticsearch](./elasticsearch/README.md) | https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html |
| [Kibana](./kibana/README.md) | https://www.elastic.co/guide/en/kibana/current/docker.html |
| [Filebeat](./filebeat/README.md) | https://www.elastic.co/guide/en/beats/filebeat/current/running-on-docker.html |
| [Metricbeat](./metricbeat/README.md) | https://www.elastic.co/guide/en/beats/metricbeat/current/running-on-docker.html |
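As a minimal sketch of such configuration (not an official recommendation), the snippet below installs the Elasticsearch chart with an overridden image tag; the release name and chart path are placeholders, and `imageTag` is the value name used in the elasticsearch chart's `values.yaml`.
```bash
# Example only (Helm 2 syntax): install the Elasticsearch chart with an
# explicit image tag. Release name and chart path are placeholders.
helm install ./elasticsearch \
  --name my-elasticsearch \
  --set imageTag=7.3.0
```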
## Kubernetes versions
The charts are [currently tested](https://devops-ci.elastic.co/job/elastic+helm-charts+master/) against all available GKE versions. The exact versions are defined under `KUBERNETES_VERSIONS` in [helpers/matrix.yml](/helpers/matrix.yml).
# EFK
EFK (Elasticsearch + Filebeat + Kibana) is a flexible and powerful collection of open source projects that provides distributed, real-time search and analytics.
This chart bootstraps an EFK stack with the following components:
- Elasticsearch
- Kibana
- Filebeat
- Metricbeat
**Warning:** upgrading from any version prior to v7.3.0 is not supported; please re-deploy a new EFK app if you want to use the new v7.3.0 version.
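As a quick reference, here is a minimal sketch of deploying this chart with Helm 2 from a local checkout; the release and namespace names are placeholders, and configuration is done through the values files of the bundled elasticsearch, filebeat, kibana and metricbeat charts.
```bash
# Example only: deploy the EFK chart from a local checkout (Helm 2 syntax).
# "efk-demo" and "efk" are placeholder release/namespace names.
helm install ./efk --name efk-demo --namespace efk
# Watch the stack come up.
kubectl get pods --namespace efk -w
```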
description: Official Elastic helm chart for Elasticsearch
home: https://github.com/elastic/helm-charts
maintainers:
- email: helm-charts@elastic.co
name: Elastic
name: elasticsearch
version: 7.3.0
appVersion: 7.3.0
sources:
- https://github.com/elastic/elasticsearch
icon: https://helm.elastic.co/icons/elasticsearch.png
1. Watch all cluster members come up.
$ kubectl get pods --namespace={{ .Release.Namespace }} -l app={{ template "uname" . }} -w
2. Test cluster health using Helm test.
$ helm test {{ .Release.Name }}
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "uname" -}}
{{ .Values.clusterName }}-{{ .Values.nodeGroup }}
{{- end -}}
{{- define "masterService" -}}
{{- if empty .Values.masterService -}}
{{ .Values.clusterName }}-master
{{- else -}}
{{ .Values.masterService }}
{{- end -}}
{{- end -}}
{{- define "endpoints" -}}
{{- $replicas := .replicas | int }}
{{- $uname := printf "%s-%s" .clusterName .nodeGroup }}
{{- range $i, $e := untilStep 0 $replicas 1 -}}
{{ $uname }}-{{ $i }},
{{- end -}}
{{- end -}}
{{- define "esMajorVersion" -}}
{{- if .Values.esMajorVersion -}}
{{ .Values.esMajorVersion }}
{{- else -}}
{{- $version := int (index (.Values.imageTag | splitList ".") 0) -}}
{{- if and (contains "docker.elastic.co/elasticsearch/elasticsearch" .Values.image) (not (eq $version 0)) -}}
{{ $version }}
{{- else -}}
7
{{- end -}}
{{- end -}}
{{- end -}}
{{- if .Values.esConfig }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "uname" . }}-config
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: "{{ template "uname" . }}"
data:
{{- range $path, $config := .Values.esConfig }}
{{ $path }}: |
{{ $config | indent 4 -}}
{{- end -}}
{{- end -}}
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "uname" . -}}
{{- $servicePort := .Values.httpPort -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
app: {{ .Chart.Name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ . }}
http:
paths:
- path: {{ $ingressPath }}
backend:
serviceName: {{ $fullName }}
servicePort: {{ $servicePort }}
{{- end }}
{{- end }}
---
{{- if .Values.maxUnavailable }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: "{{ template "uname" . }}-pdb"
spec:
maxUnavailable: {{ .Values.maxUnavailable }}
selector:
matchLabels:
app: "{{ template "uname" . }}"
{{- end }}
---
kind: Service
apiVersion: v1
metadata:
name: {{ template "uname" . }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: "{{ template "uname" . }}"
annotations:
{{ toYaml .Values.service.annotations | indent 4 }}
spec:
type: {{ .Values.service.type }}
selector:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: "{{ template "uname" . }}"
ports:
- name: http
protocol: TCP
port: {{ .Values.httpPort }}
{{- if .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
- name: transport
protocol: TCP
port: {{ .Values.transportPort }}
---
kind: Service
apiVersion: v1
metadata:
name: {{ template "uname" . }}-headless
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: "{{ template "uname" . }}"
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
clusterIP: None # This is needed for statefulset hostnames like elasticsearch-0 to resolve
# Create endpoints also if the related pod isn't ready
publishNotReadyAddresses: true
selector:
app: "{{ template "uname" . }}"
ports:
- name: http
port: {{ .Values.httpPort }}
- name: transport
port: {{ .Values.transportPort }}
---
apiVersion: v1
kind: Pod
metadata:
name: "{{ .Release.Name }}-{{ randAlpha 5 | lower }}-test"
annotations:
"helm.sh/hook": test-success
spec:
containers:
- name: "{{ .Release.Name }}-{{ randAlpha 5 | lower }}-test"
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
command:
- "sh"
- "-c"
- |
#!/usr/bin/env bash -e
curl -XGET --fail '{{ template "uname" . }}:{{ .Values.httpPort }}/_cluster/health?{{ .Values.clusterHealthCheckParams }}'
restartPolicy: Never
---
clusterName: "elasticsearch"
nodeGroup: "master"
# The service that non master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: ""
# Elasticsearch roles that will be applied to this nodeGroup
# These will be set as environment variables. E.g. node.master=true
roles:
master: "true"
ingest: "true"
data: "true"
replicas: 3
minimumMasterNodes: 2
esMajorVersion: ""
# Allows you to add any config files in /usr/share/elasticsearch/config/
# such as elasticsearch.yml and log4j2.properties
esConfig: {}
# elasticsearch.yml: |
# key:
# nestedkey: value
# log4j2.properties: |
# key = value
# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
# - name: MY_ENVIRONMENT_VAR
# value: the_value_goes_here
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
# - name: elastic-certificates
# secretName: elastic-certificates
# path: /usr/share/elasticsearch/config/certs
image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "7.3.0"
imagePullPolicy: "IfNotPresent"
podAnnotations: {}
# iam.amazonaws.com/role: es-cluster
# additional labels
labels: {}
esJavaOpts: "-Xmx1g -Xms1g"
resources:
requests:
cpu: "100m"
memory: "2Gi"
limits:
cpu: "1000m"
memory: "2Gi"
initResources: {}
# limits:
# cpu: "25m"
# # memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
sidecarResources: {}
# limits:
# cpu: "25m"
# # memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
networkHost: "0.0.0.0"
#volumeClaimTemplate:
# accessModes: [ "ReadWriteOnce" ]
# resources:
# requests:
# storage: 30Gi
persistence:
enabled: false
annotations: {}
## data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
##
## If you want to reuse an existing claim, you can pass the name of the PVC using
## the existingClaim variable
# existingClaim: your-claim
accessMode: ReadWriteOnce
size: 30Gi
extraVolumes: ""
# - name: extras
# emptyDir: {}
extraVolumeMounts: ""
# - name: extras
# mountPath: /usr/share/extras
# readOnly: true
extraInitContainers: ""
# - name: do-something
# image: busybox
# command: ['do', 'something']
# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"
# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "hard"
# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}
# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"
protocol: http
httpPort: 9200
transportPort: 9300
service:
type: ClusterIP
nodePort:
annotations: {}
updateStrategy: RollingUpdate
# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1
podSecurityContext:
fsGroup: 1000
# The following value is deprecated,
# please use the above podSecurityContext.fsGroup instead
fsGroup: ""
securityContext:
capabilities:
drop:
- ALL
# readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
# How long to wait for elasticsearch to stop gracefully
terminationGracePeriod: 120
sysctlVmMaxMapCount: 262144
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
# https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params wait_for_status
clusterHealthCheckParams: "wait_for_status=green&timeout=1s"
## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
imagePullSecrets: []
nodeSelector: {}
tolerations: []
# Enabling this will publicly expose your Elasticsearch instance.
# Only enable this if you have security enabled on your cluster
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
path: /
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
nameOverride: ""
fullnameOverride: ""
# https://github.com/elastic/helm-charts/issues/63
masterTerminationFix: false
lifecycle: {}
# preStop:
# exec:
# command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
# postStart:
# exec:
# command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
sysctlInitContainer:
enabled: true
description: Official Elastic helm chart for Filebeat
home: https://github.com/elastic/helm-charts
maintainers:
- email: helm-charts@elastic.co
name: Elastic
name: filebeat
version: 7.3.0
appVersion: 7.3.0
sources:
- https://github.com/elastic/beats
icon: https://helm.elastic.co/icons/filebeat.png
1. Watch all containers come up.
$ kubectl get pods --namespace={{ .Release.Namespace }} -l app={{ template "fullname" . }} -w
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Use the fullname if the serviceAccount value is not set
*/}}
{{- define "serviceAccount" -}}
{{- if .Values.serviceAccount }}
{{- .Values.serviceAccount -}}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- if .Values.managedServiceAccount }}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: {{ template "serviceAccount" . }}-cluster-role
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
rules:
- apiGroups:
- ""
resources:
- namespaces
- pods
verbs:
- get
- list
- watch
{{- end -}}
{{- if .Values.managedServiceAccount }}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: {{ template "serviceAccount" . }}-cluster-role-binding
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
roleRef:
kind: ClusterRole
name: {{ template "serviceAccount" . }}-cluster-role
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: {{ template "serviceAccount" . }}
namespace: {{ .Release.Namespace }}
{{- end -}}
{{- if .Values.filebeatConfig }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "fullname" . }}-config
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
data:
{{- range $path, $config := .Values.filebeatConfig }}
{{ $path }}: |
{{ $config | indent 4 -}}
{{- end -}}
{{- end -}}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ template "fullname" . }}
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
selector:
matchLabels:
app: "{{ template "fullname" . }}"
release: {{ .Release.Name | quote }}
updateStrategy:
type: {{ .Values.updateStrategy }}
template:
metadata:
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{/* This forces a restart if the configmap has changed */}}
{{- if .Values.filebeatConfig }}
configChecksum: {{ include (print .Template.BasePath "/configmap.yaml") . | sha256sum | trunc 63 }}
{{- end }}
name: "{{ template "fullname" . }}"
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
spec:
{{- with .Values.tolerations }}
tolerations: {{ toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector: {{ toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
{{- with .Values.affinity }}
affinity: {{ toYaml . | nindent 8 -}}
{{- end }}
serviceAccountName: {{ template "serviceAccount" . }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriod }}
volumes:
{{- range .Values.secretMounts }}
- name: {{ .name }}
secret:
secretName: {{ .secretName }}
{{- end }}
{{- if .Values.filebeatConfig }}
- name: filebeat-config
configMap:
defaultMode: 0600
name: {{ template "fullname" . }}-config
{{- end }}
- name: data
hostPath:
path: {{ .Values.hostPathRoot }}/{{ template "fullname" . }}-{{ .Release.Namespace }}-data
type: DirectoryOrCreate
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varrundockersock
hostPath:
path: /var/run/docker.sock
{{- if .Values.extraVolumes }}
{{ tpl .Values.extraVolumes . | indent 6 }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
{{- end }}
containers:
- name: "filebeat"
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
args:
- "-e"
- "-E"
- "http.enabled=true"
livenessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
curl --fail 127.0.0.1:5066
{{ toYaml .Values.livenessProbe | indent 10 }}
readinessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
filebeat test output
{{ toYaml .Values.readinessProbe | indent 10 }}
resources:
{{ toYaml .Values.resources | indent 10 }}
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
{{- if .Values.extraEnvs }}
{{ toYaml .Values.extraEnvs | indent 8 }}
{{- end }}
{{- if .Values.podSecurityContext }}
securityContext:
{{ toYaml .Values.podSecurityContext | indent 10 }}
{{- end }}
volumeMounts:
{{- range .Values.secretMounts }}
- name: {{ .name }}
mountPath: {{ .path }}
{{- if .subPath }}
subPath: {{ .subPath }}
{{- end }}
{{- end }}
{{- range $path, $config := .Values.filebeatConfig }}
- name: filebeat-config
mountPath: /usr/share/filebeat/{{ $path }}
readOnly: true
subPath: {{ $path }}
{{- end }}
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
# Necessary when using autodiscovery; avoid mounting it otherwise
# See: https://www.elastic.co/guide/en/beats/filebeat/master/configuration-autodiscover.html
- name: varrundockersock
mountPath: /var/run/docker.sock
readOnly: true
{{- if .Values.extraVolumeMounts }}
{{ tpl .Values.extraVolumeMounts . | indent 8 }}
{{- end }}
{{- if .Values.managedServiceAccount }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "serviceAccount" . }}
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
{{- end -}}
---
# Allows you to add any config files in /usr/share/filebeat
# such as filebeat.yml
filebeatConfig:
filebeat.yml: |
filebeat.inputs:
- type: docker
containers.ids:
- '*'
processors:
- add_kubernetes_metadata:
in_cluster: true
output.elasticsearch:
hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
# Extra environment variables to append to the DaemonSet pod spec.
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
# - name: MY_ENVIRONMENT_VAR
# value: the_value_goes_here
extraVolumeMounts: ""
# - name: extras
# mountPath: /usr/share/extras
# readOnly: true
extraVolumes: ""
# - name: extras
# emptyDir: {}
# Root directory where Filebeat will write data to in order to persist registry data across pod restarts (file position and other metadata).
hostPathRoot: /var/lib
image: "docker.elastic.co/beats/filebeat"
imageTag: "7.3.0"
imagePullPolicy: "IfNotPresent"
imagePullSecrets: []
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
# Whether this chart should self-manage its service account, role, and associated role binding.
managedServiceAccount: true
# additional labels
labels: {}
podAnnotations: {}
# iam.amazonaws.com/role: es-cluster
# Various pod security context settings. Bear in mind that many of these have an impact on Filebeat functioning properly.
#
# - User that the container will execute as. Typically necessary to run as root (0) in order to properly collect host container logs.
# - Whether to execute the Filebeat containers as privileged containers. Typically not necessary unless running within environments such as OpenShift.
podSecurityContext:
runAsUser: 0
privileged: false
resources:
requests:
cpu: "100m"
memory: "100Mi"
limits:
cpu: "1000m"
memory: "200Mi"
# Custom service account override that the pod will use
serviceAccount: ""
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and other sensitive values
secretMounts: []
# - name: filebeat-certificates
# secretName: filebeat-certificates
# path: /usr/share/filebeat/certs
# How long to wait for Filebeat pods to stop gracefully
terminationGracePeriod: 30
tolerations: []
nodeSelector: {}
affinity: {}
# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
updateStrategy: RollingUpdate
# Override various naming aspects of this chart
# Only edit these if you know what you're doing
nameOverride: ""
fullnameOverride: ""
description: Official Elastic helm chart for Kibana
home: https://github.com/elastic/helm-charts
maintainers:
- email: helm-charts@elastic.co
name: Elastic
name: kibana
version: 7.3.0
appVersion: 7.3.0
sources:
- https://github.com/elastic/kibana
icon: https://helm.elastic.co/icons/kibana.png
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "fullname" -}}
{{- $name := default .Release.Name .Values.nameOverride -}}
{{- printf "%s-%s" $name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- if .Values.kibanaConfig }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "fullname" . }}-config
labels:
app: {{ .Chart.Name }}
release: {{ .Release.Name | quote }}
data:
{{- range $path, $config := .Values.kibanaConfig }}
{{ $path }}: |
{{ $config | indent 4 -}}
{{- end -}}
{{- end -}}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "fullname" . }}
labels:
app: {{ .Chart.Name }}
release: {{ .Release.Name | quote }}
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
replicas: {{ .Values.replicas }}
strategy:
{{ toYaml .Values.updateStrategy | indent 4 }}
selector:
matchLabels:
app: kibana
release: {{ .Release.Name | quote }}
template:
metadata:
labels:
app: kibana
release: {{ .Release.Name | quote }}
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{/* This forces a restart if the configmap has changed */}}
{{- if .Values.kibanaConfig }}
configchecksum: {{ include (print .Template.BasePath "/configmap.yaml") . | sha256sum | trunc 63 }}
{{- end }}
spec:
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
securityContext:
{{ toYaml .Values.podSecurityContext | indent 8 }}
{{- if .Values.serviceAccount }}
serviceAccount: {{ .Values.serviceAccount }}
{{- end }}
volumes:
{{- range .Values.secretMounts }}
- name: {{ .name }}
secret:
secretName: {{ .secretName }}
{{- end }}
{{- if .Values.kibanaConfig }}
- name: kibanaconfig
configMap:
name: {{ template "fullname" . }}-config
{{- end }}
- name: kibana-nginx
configMap:
name: {{ template "fullname" . }}-nginx
items:
- key: nginx.conf
mode: 438
path: nginx.conf
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
{{- end }}
containers:
- name: kibana-proxy
image: {{ .Values.global.nginxProxy.repository }}:{{ .Values.global.nginxProxy.tag }}
args:
- nginx
- -g
- daemon off;
- -c
- /nginx/nginx.conf
ports:
- name: http
containerPort: 80
protocol: TCP
volumeMounts:
- name: kibana-nginx
mountPath: /nginx/
{{- if and .Values.resources .Values.resources.proxy }}
resources:
{{ toYaml .Values.resources.proxy | indent 10 }}
{{- end }}
- name: kibana
securityContext:
{{ toYaml .Values.securityContext | indent 10 }}
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
env:
{{- if .Values.elasticsearchURL }}
- name: ELASTICSEARCH_URL
value: "{{ .Values.elasticsearchURL }}"
{{- else if .Values.elasticsearchHosts }}
- name: ELASTICSEARCH_HOSTS
value: "{{ .Values.elasticsearchHosts }}"
{{- end }}
- name: I18N_LOCALE
value: "{{ .Values.language }}"
- name: SERVER_HOST
value: "{{ .Values.serverHost }}"
{{- if .Values.extraEnvs }}
{{ toYaml .Values.extraEnvs | indent 10 }}
{{- end }}
readinessProbe:
{{ toYaml .Values.readinessProbe | indent 10 }}
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
http () {
local path="${1}"
set -- -XGET -s --fail
if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
set -- "$@" -u "${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
fi
curl -k "$@" "{{ .Values.protocol }}://localhost:{{ .Values.httpPort }}${path}"
}
http "{{ .Values.healthCheckPath }}"
ports:
- containerPort: {{ .Values.httpPort }}
resources:
{{ toYaml .Values.resources | indent 10 }}
volumeMounts:
{{- range .Values.secretMounts }}
- name: {{ .name }}
mountPath: {{ .path }}
{{- if .subPath }}
subPath: {{ .subPath }}
{{- end }}
{{- end }}
{{- range $path, $config := .Values.kibanaConfig }}
- name: kibanaconfig
mountPath: /usr/share/kibana/config/{{ $path }}
subPath: {{ $path }}
{{- end -}}
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "fullname" . -}}
{{- $servicePort := .Values.service.port -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
app: {{ .Chart.Name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{ toYaml .Values.ingress.tls | indent 4 }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ . }}
http:
paths:
- path: {{ $ingressPath }}
backend:
serviceName: {{ $fullName }}
servicePort: {{ $servicePort }}
{{- end }}
{{- end }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "fullname" . }}-nginx
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Chart.Name }}
release: {{ .Release.Name | quote }}
data:
nginx.conf: |-
user nginx;
worker_processes auto;
error_log /dev/stdout warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
log_format main '[$time_local - $status] $remote_addr - $remote_user $request ($http_referer)';
proxy_connect_timeout 10;
proxy_read_timeout 180;
proxy_send_timeout 5;
proxy_buffering off;
proxy_cache_path /tmp/nginx levels=1:2 keys_zone=my_zone:100m inactive=1d max_size=10g;
server {
listen 80;
access_log off;
gzip on;
gzip_min_length 1k;
gzip_comp_level 2;
gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript image/jpeg image/gif image/png;
gzip_vary on;
gzip_disable "MSIE [1-6]\.";
proxy_set_header Host $host;
location ~* bundles/app/ {
proxy_set_header Accept-Encoding "";
proxy_pass http://localhost:5601;
sub_filter_types application/javascript;
sub_filter_once off;
sub_filter "loadStyleSheet('/" "loadStyleSheet('./";
sub_filter '/bundles/' './bundles/';
sub_filter '/built_assets/' './built_assets/';
}
location /built_assets/dlls/vendors.bundle.dll.js {
proxy_set_header Accept-Encoding "";
proxy_pass http://localhost:5601;
sub_filter_types *;
sub_filter_once off;
sub_filter '/built_assets/dlls/' './built_assets/dlls/';
}
location /bundles/commons.bundle.js {
proxy_set_header Accept-Encoding "";
proxy_pass http://localhost:5601;
sub_filter_types *;
sub_filter_once off;
sub_filter '/api/' './api/';
sub_filter '/app/' './app/';
sub_filter 'url:basePath.prepend("/".concat(navLink.icon))' 'url:basePath.prepend("./".concat(navLink.icon))';
sub_filter 'this.basePath=' 'this.basePath="."+';
sub_filter 'window.location=response.location' 'window.location="."+response.location';
}
location /bundles/kibana.bundle.js {
proxy_set_header Accept-Encoding "";
proxy_pass http://localhost:5601;
sub_filter_types *;
sub_filter_once off;
sub_filter '/api/' './api/';
sub_filter '/app/' './app/';
sub_filter 'injectedMetadata.i18n.translationsUrl' '"./"+injectedMetadata.i18n.translationsUrl';
sub_filter 'this.basePath=' 'this.basePath="."+';
sub_filter '../api/console/' './api/console/';
sub_filter '/s/' './s/';
}
location /bundles/ {
proxy_set_header Accept-Encoding "";
proxy_pass http://localhost:5601;
sub_filter_types *;
sub_filter_once off;
sub_filter 'injectedMetadata.i18n.translationsUrl' '"./"+injectedMetadata.i18n.translationsUrl';
}
location ~ ^/app/translations/(.*) {
return 301 /translations/$1;
}
location ~ ^/app/built_assets/(.*) {
return 301 /built_assets/$1;
}
location ~ ^/app/bundles/(.*) {
return 301 /bundles/$1;
}
location ~ ^/app/node_modules/(.*) {
return 301 /node_modules/$1;
}
location ~ ^/app/plugins/(.*) {
return 301 /plugins/$1;
}
location ~ ^/s/(.*)/app/api/(.*) {
if ($request_method = POST ) {
return 307 /api/$2$is_args$args;
}
return 301 /api/$2$is_args$args;
}
location ~ ^/app/s/(.*)/app/kibana {
return 301 /s/$1/app/kibana;
}
location ~ ^/s/(.*)/app/s/(.*)/elasticsearch/(.*) {
if ($request_method = POST ) {
return 307 /elasticsearch/$3$is_args$args;
}
return 301 /elasticsearch/$3$is_args$args;
}
location ~ ^/s/(.*)/app/built_assets/(.*) {
return 301 /built_assets/$2$is_args$args;
}
location ~ ^/s/(.*)/app/s/(.*)/(.*) {
return 301 /s/$2/$3$is_args$args;
}
location ~ ^/s/(.*)/app/bundles/(.*) {
return 301 /bundles/$2$is_args$args;
}
location ~ ^/s/(.*)/app/node_modules/(.*) {
return 301 /node_modules/$2$is_args$args;
}
location ~ ^/s/(.*)/app/app/(.*) {
return 301 /app/$2$is_args$args;
}
location ~ ^/app/api/(.*) {
if ($request_method = POST ) {
return 307 /api/$1$is_args$args;
}
return 301 /api/$1$is_args$args;
}
location ~ ^/app/elasticsearch/(.*) {
if ($request_method = POST ) {
return 307 /elasticsearch/$1$is_args$args;
}
return 301 /elasticsearch/$1$is_args$args;
}
location ~ ^/app/app/(.*) {
return 301 /app/$1$is_args$args;
}
location / {
proxy_cache my_zone;
proxy_cache_valid 200 302 1d;
proxy_cache_valid 301 30d;
proxy_cache_valid any 5m;
proxy_cache_bypass $http_cache_control;
add_header X-Proxy-Cache $upstream_cache_status;
add_header Cache-Control "public";
proxy_pass http://localhost:5601/;
sub_filter_types text/html;
sub_filter_once on;
if ($request_filename ~ .*\.(?:js|css|jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm)$) {
expires 90d;
}
}
}
}
apiVersion: v1
kind: Service
metadata:
name: {{ template "fullname" . }}
labels:
app: {{ .Chart.Name }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service }}
{{- with .Values.service.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
{{- if .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
protocol: TCP
name: http
targetPort: {{ .Values.httpPort }}
selector:
app: {{ .Chart.Name }}
release: {{ .Release.Name | quote }}
{{- if .Values.enableProxy }}
---
apiVersion: v1
kind: Service
metadata:
name: kibana-http
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Chart.Name }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service }}
kubernetes.io/cluster-service: "true"
spec:
type: ClusterIP
ports:
- name: http-access-kibana
protocol: TCP
port: 80
selector:
app: {{ .Chart.Name }}
release: {{ .Release.Name | quote }}
{{- end }}
---
elasticsearchURL: "" # "http://elasticsearch-master:9200"
elasticsearchHosts: "http://elasticsearch-master:9200"
replicas: 1
# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
# - name: MY_ENVIRONMENT_VAR
# value: the_value_goes_here
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
# - name: kibana-keystore
# secretName: kibana-keystore
# path: /usr/share/kibana/data/kibana.keystore
# subPath: kibana.keystore # optional
image: "docker.elastic.co/kibana/kibana"
imageTag: "7.3.0"
imagePullPolicy: "IfNotPresent"
# additional labels
labels: {}
podAnnotations: {}
# iam.amazonaws.com/role: es-cluster
resources:
requests:
cpu: "100m"
memory: "500Mi"
limits:
cpu: "1000m"
memory: "1Gi"
protocol: http
serverHost: "0.0.0.0"
healthCheckPath: "/app/kibana"
# Allows you to add any config files in /usr/share/kibana/config/
# such as kibana.yml
kibanaConfig: {}
# kibana.yml: |
# key:
# nestedkey: value
# If a Pod Security Policy is in use, it may be required to specify a security context as well as a service account
language: en #en/zh-CN/ja-JP
podSecurityContext:
fsGroup: 1000
securityContext:
capabilities:
drop:
- ALL
# readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
serviceAccount: ""
# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"
# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "hard"
httpPort: 5601
# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1
updateStrategy:
type: "Recreate"
service:
type: ClusterIP
port: 5601
nodePort:
annotations: {}
# cloud.google.com/load-balancer-type: "Internal"
# service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
# service.beta.kubernetes.io/azure-load-balancer-internal: "true"
# service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
# service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true"
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
path: /
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
imagePullSecrets: []
nodeSelector: {}
tolerations: []
affinity: {}
nameOverride: ""
fullnameOverride: ""
enableProxy: true
description: Official Elastic helm chart for Metricbeat
home: https://github.com/elastic/helm-charts
maintainers:
- email: helm-charts@elastic.co
name: Elastic
name: metricbeat
version: 7.3.0
appVersion: 7.3.0
sources:
- https://github.com/elastic/beats
icon: https://helm.elastic.co/icons/metricbeat.png
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
apiVersion: v1
appVersion: 1.7.2
description: Install kube-state-metrics to generate and expose cluster-level metrics
home: https://github.com/kubernetes/kube-state-metrics/
keywords:
- metric
- monitoring
- prometheus
- kubernetes
maintainers:
- email: jose@armesto.net
name: fiunchinho
- email: tariq.ibrahim@mulesoft.com
name: tariq1890
name: kube-state-metrics
sources:
- https://github.com/kubernetes/kube-state-metrics/
version: 2.3.1
approvers:
- fiunchinho
- tariq1890
reviewers:
- fiunchinho
- tariq1890
# kube-state-metrics Helm Chart
* Installs the [kube-state-metrics agent](https://github.com/kubernetes/kube-state-metrics).
## Installing the Chart
To install the chart with the release name `my-release`:
```bash
$ helm install --name my-release stable/kube-state-metrics
```
## Configuration
| Parameter | Description | Default |
|:----------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------|
| `image.repository` | The image repository to pull from | quay.io/coreos/kube-state-metrics |
| `image.tag` | The image tag to pull from | `v1.7.2` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `replicas` | Number of replicas | `1` |
| `service.port` | The port of the container | `8080` |
| `service.annotations` | Annotations to be added to the service | `{}` |
| `customLabels` | Custom labels to apply to service, deployment and pods | `{}` |
| `hostNetwork` | Whether or not to use the host network | `false` |
| `prometheusScrape` | Whether or not to enable the Prometheus scrape annotation | `true` |
| `rbac.create` | If true, create & use RBAC resources | `true` |
| `serviceAccount.create` | If true, create & use serviceAccount | `true` |
| `serviceAccount.name` | If not set & create is true, use template fullname | |
| `serviceAccount.imagePullSecrets` | Specify image pull secrets field | `[]` |
| `podSecurityPolicy.enabled` | If true, create & use PodSecurityPolicy resources | `false` |
| `podSecurityPolicy.annotations` | Specify pod annotations in the pod security policy | `{}` |
| `securityContext.enabled` | Enable security context | `true` |
| `securityContext.fsGroup` | Group ID for the container | `65534` |
| `securityContext.runAsUser` | User ID for the container | `65534` |
| `priorityClassName` | Name of Priority Class to assign pods | `nil` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `affinity` | Affinity settings for pod assignment | `{}` |
| `tolerations` | Tolerations for pod assignment | `[]` |
| `podAnnotations` | Annotations to be added to the pod | `{}` |
| `resources` | kube-state-metrics resource requests and limits | `{}` |
| `collectors.certificatesigningrequests` | Enable the certificatesigningrequests collector. | `true` |
| `collectors.configmaps` | Enable the configmaps collector. | `true` |
| `collectors.cronjobs` | Enable the cronjobs collector. | `true` |
| `collectors.daemonsets` | Enable the daemonsets collector. | `true` |
| `collectors.deployments` | Enable the deployments collector. | `true` |
| `collectors.endpoints` | Enable the endpoints collector. | `true` |
| `collectors.horizontalpodautoscalers` | Enable the horizontalpodautoscalers collector. | `true` |
| `collectors.ingresses` | Enable the ingresses collector. | `true` |
| `collectors.jobs` | Enable the jobs collector. | `true` |
| `collectors.limitranges` | Enable the limitranges collector. | `true` |
| `collectors.namespaces` | Enable the namespaces collector. | `true` |
| `collectors.nodes` | Enable the nodes collector. | `true` |
| `collectors.persistentvolumeclaims` | Enable the persistentvolumeclaims collector. | `true` |
| `collectors.persistentvolumes` | Enable the persistentvolumes collector. | `true` |
| `collectors.poddisruptionbudgets` | Enable the poddisruptionbudgets collector. | `true` |
| `collectors.pods` | Enable the pods collector. | `true` |
| `collectors.replicasets` | Enable the replicasets collector. | `true` |
| `collectors.replicationcontrollers` | Enable the replicationcontrollers collector. | `true` |
| `collectors.resourcequotas` | Enable the resourcequotas collector. | `true` |
| `collectors.secrets` | Enable the secrets collector. | `true` |
| `collectors.services` | Enable the services collector. | `true` |
| `collectors.statefulsets` | Enable the statefulsets collector. | `true` |
| `collectors.storageclasses` | Enable the storageclasses collector. | `true` |
| `collectors.verticalpodautoscalers` | Enable the verticalpodautoscalers collector. | `false` |
| `prometheus.monitor.enabled` | Set this to `true` to create ServiceMonitor for Prometheus operator | `false` |
| `prometheus.monitor.additionalLabels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | `{}` |
| `prometheus.monitor.namespace` | Namespace where servicemonitor resource should be created | `the same namespace as kube-state-metrics` |
| `prometheus.monitor.honorLabels` | Honor metric labels | `false` |
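Parameters from the table above can be overridden with `--set key=value` on `helm install`, or with a custom values file passed via `-f`. A small sketch, using a placeholder release name and Helm 2 syntax:
```bash
# Example only: disable the secrets collector and enable the Prometheus
# Operator ServiceMonitor ("my-release" is a placeholder release name).
helm install stable/kube-state-metrics \
  --name my-release \
  --set collectors.secrets=false \
  --set prometheus.monitor.enabled=true
```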
kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects.
The exposed metrics can be found here:
https://github.com/kubernetes/kube-state-metrics/blob/master/docs/README.md#exposed-metrics
The metrics are exported on the HTTP endpoint /metrics on the listening port.
In your case, {{ template "kube-state-metrics.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local:{{ .Values.service.port }}/metrics
They are served either as plaintext or protobuf depending on the Accept header.
They are designed to be consumed either by Prometheus itself or by a scraper that is compatible with scraping a Prometheus client endpoint.
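For example, one way to inspect the endpoint from a workstation is to port-forward the service (local port 8080 is just an example):
  $ kubectl port-forward svc/{{ template "kube-state-metrics.fullname" . }} 8080:{{ .Values.service.port }} --namespace {{ .Release.Namespace }}
  $ curl -s http://localhost:8080/metrics | head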
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "kube-state-metrics.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "kube-state-metrics.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "kube-state-metrics.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "kube-state-metrics.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
name: {{ template "kube-state-metrics.fullname" . }}
rules:
{{ if .Values.collectors.certificatesigningrequests }}
- apiGroups: ["certificates.k8s.io"]
resources:
- certificatesigningrequests
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.configmaps }}
- apiGroups: [""]
resources:
- configmaps
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.cronjobs }}
- apiGroups: ["batch"]
resources:
- cronjobs
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.daemonsets }}
- apiGroups: ["extensions", "apps"]
resources:
- daemonsets
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.deployments }}
- apiGroups: ["extensions", "apps"]
resources:
- deployments
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.endpoints }}
- apiGroups: [""]
resources:
- endpoints
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.horizontalpodautoscalers }}
- apiGroups: ["autoscaling"]
resources:
- horizontalpodautoscalers
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.ingresses }}
- apiGroups: ["extensions", "networking.k8s.io"]
resources:
- ingresses
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.jobs }}
- apiGroups: ["batch"]
resources:
- jobs
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.limitranges }}
- apiGroups: [""]
resources:
- limitranges
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.namespaces }}
- apiGroups: [""]
resources:
- namespaces
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.nodes }}
- apiGroups: [""]
resources:
- nodes
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.persistentvolumeclaims }}
- apiGroups: [""]
resources:
- persistentvolumeclaims
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.persistentvolumes }}
- apiGroups: [""]
resources:
- persistentvolumes
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.poddisruptionbudgets }}
- apiGroups: ["policy"]
resources:
- poddisruptionbudgets
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.pods }}
- apiGroups: [""]
resources:
- pods
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.replicasets }}
- apiGroups: ["extensions", "apps"]
resources:
- replicasets
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.replicationcontrollers }}
- apiGroups: [""]
resources:
- replicationcontrollers
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.resourcequotas }}
- apiGroups: [""]
resources:
- resourcequotas
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.secrets }}
- apiGroups: [""]
resources:
- secrets
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.services }}
- apiGroups: [""]
resources:
- services
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.statefulsets }}
- apiGroups: ["apps"]
resources:
- statefulsets
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.storageclasses }}
- apiGroups: ["storage.k8s.io"]
resources:
- storageclasses
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.verticalpodautoscalers }}
- apiGroups: ["autoscaling.k8s.io"]
resources:
- verticalpodautoscalers
verbs: ["list", "watch"]
{{ end -}}
{{- end -}}
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
name: {{ template "kube-state-metrics.fullname" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "kube-state-metrics.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "kube-state-metrics.fullname" . }}
namespace: {{ .Release.Namespace }}
{{- end -}}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "kube-state-metrics.fullname" . }}
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
spec:
selector:
matchLabels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
replicas: {{ .Values.replicas }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
app.kubernetes.io/instance: "{{ .Release.Name }}"
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 8 }}
{{- end }}
{{- if .Values.podAnnotations }}
annotations:
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
hostNetwork: {{ .Values.hostNetwork }}
serviceAccountName: {{ template "kube-state-metrics.serviceAccountName" . }}
{{- if .Values.securityContext.enabled }}
securityContext:
fsGroup: {{ .Values.securityContext.fsGroup }}
runAsUser: {{ .Values.securityContext.runAsUser }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
args:
{{ if .Values.collectors.certificatesigningrequests }}
- --collectors=certificatesigningrequests
{{ end }}
{{ if .Values.collectors.configmaps }}
- --collectors=configmaps
{{ end }}
{{ if .Values.collectors.cronjobs }}
- --collectors=cronjobs
{{ end }}
{{ if .Values.collectors.daemonsets }}
- --collectors=daemonsets
{{ end }}
{{ if .Values.collectors.deployments }}
- --collectors=deployments
{{ end }}
{{ if .Values.collectors.endpoints }}
- --collectors=endpoints
{{ end }}
{{ if .Values.collectors.horizontalpodautoscalers }}
- --collectors=horizontalpodautoscalers
{{ end }}
{{ if .Values.collectors.ingresses }}
- --collectors=ingresses
{{ end }}
{{ if .Values.collectors.jobs }}
- --collectors=jobs
{{ end }}
{{ if .Values.collectors.limitranges }}
- --collectors=limitranges
{{ end }}
{{ if .Values.collectors.namespaces }}
- --collectors=namespaces
{{ end }}
{{ if .Values.collectors.nodes }}
- --collectors=nodes
{{ end }}
{{ if .Values.collectors.persistentvolumeclaims }}
- --collectors=persistentvolumeclaims
{{ end }}
{{ if .Values.collectors.persistentvolumes }}
- --collectors=persistentvolumes
{{ end }}
{{ if .Values.collectors.poddisruptionbudgets }}
- --collectors=poddisruptionbudgets
{{ end }}
{{ if .Values.collectors.pods }}
- --collectors=pods
{{ end }}
{{ if .Values.collectors.replicasets }}
- --collectors=replicasets
{{ end }}
{{ if .Values.collectors.replicationcontrollers }}
- --collectors=replicationcontrollers
{{ end }}
{{ if .Values.collectors.resourcequotas }}
- --collectors=resourcequotas
{{ end }}
{{ if .Values.collectors.secrets }}
- --collectors=secrets
{{ end }}
{{ if .Values.collectors.services }}
- --collectors=services
{{ end }}
{{ if .Values.collectors.statefulsets }}
- --collectors=statefulsets
{{ end }}
{{ if .Values.collectors.storageclasses }}
- --collectors=storageclasses
{{ end }}
{{ if .Values.collectors.verticalpodautoscalers }}
- --collectors=verticalpodautoscalers
{{ end }}
{{ if .Values.namespace }}
- --namespace={{ .Values.namespace }}
{{ end }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
ports:
- containerPort: 8080
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 5
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /
port: 8080
initialDelaySeconds: 5
timeoutSeconds: 5
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- if .Values.affinity }}
affinity:
{{ toYaml .Values.affinity | indent 8 }}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
{{- if .Values.podSecurityPolicy.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "kube-state-metrics.fullname" . }}
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.podSecurityPolicy.annotations }}
annotations:
{{ toYaml .Values.podSecurityPolicy.annotations | indent 4 }}
{{- end }}
spec:
privileged: false
volumes:
- 'secret'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
rule: 'MustRunAsNonRoot'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
readOnlyRootFilesystem: false
{{- end }}
{{- if and .Values.podSecurityPolicy.enabled -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
name: psp-{{ template "kube-state-metrics.fullname" . }}
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- {{ template "kube-state-metrics.fullname" . }}
{{- end }}
{{- if and .Values.podSecurityPolicy.enabled -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
name: psp-{{ template "kube-state-metrics.fullname" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: psp-{{ template "kube-state-metrics.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "kube-state-metrics.fullname" . }}
namespace: {{ .Release.Namespace }}
{{- end }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "kube-state-metrics.fullname" . }}
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
annotations:
{{- if .Values.prometheusScrape }}
prometheus.io/scrape: '{{ .Values.prometheusScrape }}'
{{- end }}
{{- if .Values.service.annotations }}
{{- toYaml .Values.service.annotations | nindent 4 }}
{{- end }}
spec:
type: "{{ .Values.service.type }}"
ports:
- name: "http"
protocol: TCP
port: {{ .Values.service.port }}
{{- if .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
targetPort: 8080
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: "{{ .Values.service.loadBalancerIP }}"
{{- end }}
selector:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
name: {{ template "kube-state-metrics.fullname" . }}
imagePullSecrets:
{{ toYaml .Values.serviceAccount.imagePullSecrets | indent 2 }}
{{- end -}}
{{- if .Values.prometheus.monitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "kube-state-metrics.fullname" . }}
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
{{- if .Values.prometheus.monitor.additionalLabels }}
{{ toYaml .Values.prometheus.monitor.additionalLabels | indent 4 }}
{{- end }}
spec:
selector:
matchLabels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
endpoints:
- port: http
{{- if .Values.prometheus.monitor.honorLabels }}
honorLabels: true
{{- end }}
{{- end }}
# Default values for kube-state-metrics.
prometheusScrape: true
image:
repository: quay.io/coreos/kube-state-metrics
tag: v1.7.2
pullPolicy: IfNotPresent
replicas: 1
service:
port: 8080
# Default to ClusterIP for backward compatibility
type: ClusterIP
nodePort: 0
loadBalancerIP: ""
annotations: {}
customLabels: {}
hostNetwork: false
rbac:
# If true, create & use RBAC resources
create: true
serviceAccount:
# Specifies whether a ServiceAccount should be created; requires rbac.create to be true
create: true
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
name:
# Reference to one or more secrets to be used when pulling images
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
prometheus:
monitor:
enabled: false
additionalLabels: {}
namespace: ""
honorLabels: false
## Specify if a Pod Security Policy for kube-state-metrics must be created
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
##
podSecurityPolicy:
enabled: false
annotations: {}
## Specify pod annotations
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
##
# seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
# seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
# apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
securityContext:
enabled: true
runAsUser: 65534
fsGroup: 65534
## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
## Affinity settings for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
affinity: {}
## Tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# Annotations to be added to the pod
podAnnotations: {}
## Assign a PriorityClassName to pods if set
# priorityClassName: ""
# Available collectors for kube-state-metrics. By default all available
# collectors are enabled.
collectors:
certificatesigningrequests: true
configmaps: true
cronjobs: true
daemonsets: true
deployments: true
endpoints: true
horizontalpodautoscalers: true
ingresses: true
jobs: true
limitranges: true
namespaces: true
nodes: true
persistentvolumeclaims: true
persistentvolumes: true
poddisruptionbudgets: true
pods: true
replicasets: true
replicationcontrollers: true
resourcequotas: true
secrets: true
services: true
statefulsets: true
storageclasses: true
verticalpodautoscalers: false
# Namespace(s) to collect resources from. By default all namespaces are collected.
# namespace: ""
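The values above gate the optional templates earlier in this chart: `podSecurityPolicy.enabled` renders the PSP ClusterRole/ClusterRoleBinding, and `prometheus.monitor.enabled` renders the ServiceMonitor (which needs the Prometheus Operator CRDs installed). A minimal override sketch, assuming the chart is consumed through the parent EFK chart so the keys nest under `metricbeat.kube-state-metrics`; the file name is illustrative:

    # kube-state-metrics-overrides.yaml (hypothetical file name)
    metricbeat:
      kube-state-metrics:
        podSecurityPolicy:
          enabled: true        # render the psp ClusterRole and ClusterRoleBinding
        prometheus:
          monitor:
            enabled: true      # render the ServiceMonitor; requires the Prometheus Operator CRDs
            honorLabels: true

The file would be passed with `helm install -f kube-state-metrics-overrides.yaml` (or supplied as Rancher answers) to reach the bundled kube-state-metrics subchart.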
dependencies:
- name: kube-state-metrics
repository: https://kubernetes-charts.storage.googleapis.com
version: 2.3.1
digest: sha256:505affb1bd3fa9d6952e5c9083d4e3c4e1dfe8ac8879299f336a4ae34591860b
generated: 2019-08-22T14:50:34.007904+08:00
dependencies:
- name: 'kube-state-metrics'
version: '2.3.1'
repository: '@stable'
1. Watch all containers come up.
$ kubectl get pods --namespace={{ .Release.Namespace }} -l app={{ template "fullname" . }} -w
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Use the fullname if the serviceAccount value is not set
*/}}
{{- define "serviceAccount" -}}
{{- if .Values.serviceAccount }}
{{- .Values.serviceAccount -}}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
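As a worked example of the `serviceAccount` helper above (release and chart names are assumptions): an empty `.Values.serviceAccount` falls through to the fullname logic, while any non-empty value short-circuits it.

    # Hypothetical rendering, assuming release name "efk", chart name "metricbeat", no nameOverride:
    #   serviceAccount: ""       ->  serviceAccountName: efk-metricbeat
    #   serviceAccount: "my-sa"  ->  serviceAccountName: my-sa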
{{- if .Values.managedServiceAccount }}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: {{ template "serviceAccount" . }}-cluster-role
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
rules:
- apiGroups:
- ""
resources:
- namespaces
- pods
verbs:
- get
- list
- watch
{{- end -}}
{{- if .Values.managedServiceAccount }}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: {{ template "serviceAccount" . }}-cluster-role-binding
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
roleRef:
kind: ClusterRole
name: {{ template "serviceAccount" . }}-cluster-role
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: {{ template "serviceAccount" . }}
namespace: {{ .Release.Namespace }}
{{- end -}}
{{- if .Values.metricbeatConfig }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "fullname" . }}-config
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
data:
{{- range $path, $config := .Values.metricbeatConfig }}
{{ $path }}: |
{{ $config | indent 4 -}}
{{- end -}}
{{- end -}}
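Each key of `.Values.metricbeatConfig` becomes one entry in this ConfigMap, and the DaemonSet and Deployment below mount every entry at `/usr/share/metricbeat/<key>` via `subPath`; a file other than `metricbeat.yml` is only consumed when a container selects it explicitly, as the metrics Deployment later does with `-c`. A sketch of how the mount for the default `kube-state-metrics-metricbeat.yml` key roughly renders (illustrative, not copied chart output):

    volumeMounts:
      - name: metricbeat-config
        mountPath: /usr/share/metricbeat/kube-state-metrics-metricbeat.yml
        readOnly: true
        subPath: kube-state-metrics-metricbeat.yml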
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ template "fullname" . }}
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
spec:
selector:
matchLabels:
app: "{{ template "fullname" . }}"
release: {{ .Release.Name | quote }}
updateStrategy:
type: {{ .Values.updateStrategy }}
template:
metadata:
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{/* This forces a restart if the configmap has changed */}}
{{- if .Values.metricbeatConfig }}
configChecksum: {{ include (print .Template.BasePath "/configmap.yaml") . | sha256sum | trunc 63 }}
{{- end }}
name: "{{ template "fullname" . }}"
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
spec:
{{- with .Values.tolerations }}
tolerations: {{ toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector: {{ toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity: {{ toYaml . | nindent 8 -}}
{{- end }}
serviceAccountName: {{ template "serviceAccount" . }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriod }}
volumes:
{{- range .Values.secretMounts }}
- name: {{ .name }}
secret:
secretName: {{ .secretName }}
{{- end }}
{{- if .Values.metricbeatConfig }}
- name: metricbeat-config
configMap:
defaultMode: 0600
name: {{ template "fullname" . }}-config
{{- end }}
- name: data
hostPath:
path: {{ .Values.hostPathRoot }}/{{ template "fullname" . }}-{{ .Release.Namespace }}-data
type: DirectoryOrCreate
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varrundockersock
hostPath:
path: /var/run/docker.sock
{{- if .Values.extraVolumes }}
{{ tpl .Values.extraVolumes . | indent 6 }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
{{- end }}
containers:
- name: "metricbeat"
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
args:
- "-e"
- "-E"
- "http.enabled=true"
livenessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
curl --fail 127.0.0.1:5066
{{ toYaml .Values.livenessProbe | indent 10 }}
readinessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
metricbeat test output
{{ toYaml .Values.readinessProbe | indent 10 }}
resources:
{{ toYaml .Values.resources | indent 10 }}
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
{{- if .Values.extraEnvs }}
{{ toYaml .Values.extraEnvs | indent 8 }}
{{- end }}
{{- if .Values.podSecurityContext }}
securityContext:
{{ toYaml .Values.podSecurityContext | indent 10 }}
{{- end }}
volumeMounts:
{{- range .Values.secretMounts }}
- name: {{ .name }}
mountPath: {{ .path }}
{{- if .subPath }}
subPath: {{ .subPath }}
{{- end }}
{{- end }}
{{- range $path, $config := .Values.metricbeatConfig }}
- name: metricbeat-config
mountPath: /usr/share/metricbeat/{{ $path }}
readOnly: true
subPath: {{ $path }}
{{- end }}
- name: data
mountPath: /usr/share/metricbeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
# Necessary when using autodiscovery; avoid mounting it otherwise
# See: https://www.elastic.co/guide/en/beats/metricbeat/master/configuration-autodiscover.html
- name: varrundockersock
mountPath: /var/run/docker.sock
readOnly: true
{{- if .Values.extraVolumeMounts }}
{{ tpl .Values.extraVolumeMounts . | indent 8 }}
{{- end }}
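The DaemonSet's probes rely on two features of the image: `-E http.enabled=true` exposes Beats' local monitoring endpoint on 127.0.0.1:5066 (used by the liveness probe), and `metricbeat test output` checks that the configured Elasticsearch output is reachable (used by the readiness probe). The same checks can be repeated by hand against a running pod; the namespace and pod name below are placeholders:

    $ kubectl exec --namespace <namespace> <metricbeat-pod> -- sh -c 'curl --fail 127.0.0.1:5066'
    $ kubectl exec --namespace <namespace> <metricbeat-pod> -- sh -c 'metricbeat test output'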
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: '{{ template "fullname" . }}-metrics'
labels:
app: '{{ template "fullname" . }}-metrics'
chart: '{{ .Chart.Name }}-{{ .Chart.Version }}'
heritage: '{{ .Release.Service }}'
release: '{{ .Release.Name }}'
spec:
replicas: {{ .Values.replicas }}
selector:
matchLabels:
app: '{{ template "fullname" . }}-metrics'
chart: '{{ .Chart.Name }}-{{ .Chart.Version }}'
heritage: '{{ .Release.Service }}'
release: '{{ .Release.Name }}'
template:
metadata:
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{/* This forces a restart if the configmap has changed */}}
{{- if .Values.metricbeatConfig }}
configChecksum: {{ include (print .Template.BasePath "/configmap.yaml") . | sha256sum | trunc 63 }}
{{- end }}
labels:
app: '{{ template "fullname" . }}-metrics'
chart: '{{ .Chart.Name }}-{{ .Chart.Version }}'
heritage: '{{ .Release.Service }}'
release: '{{ .Release.Name }}'
spec:
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 6 }}
{{- end }}
serviceAccountName: {{ template "serviceAccount" . }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriod }}
volumes:
{{- range .Values.secretMounts }}
- name: {{ .name }}
secret:
secretName: {{ .secretName }}
{{- end }}
{{- if .Values.metricbeatConfig }}
- name: metricbeat-config
configMap:
defaultMode: 0600
name: {{ template "fullname" . }}-config
{{- end }}
{{- if .Values.extraVolumes }}
{{ tpl .Values.extraVolumes . | indent 6 }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
{{- end }}
containers:
- name: "metricbeat"
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
args:
- "-c"
- "/usr/share/metricbeat/kube-state-metrics-metricbeat.yml"
- "-e"
- "-E"
- "http.enabled=true"
livenessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
curl --fail 127.0.0.1:5066
readinessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
metricbeat test output
{{ toYaml .Values.readinessProbe | indent 10 }}
resources:
{{ toYaml .Values.resources | indent 10 }}
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: KUBE_STATE_METRICS_HOSTS
value: "$({{ .Release.Name | replace "-" "_" | upper }}_KUBE_STATE_METRICS_SERVICE_HOST):$({{ .Release.Name | replace "-" "_" | upper }}_KUBE_STATE_METRICS_SERVICE_PORT_HTTP)"
{{- if .Values.extraEnvs }}
{{ toYaml .Values.extraEnvs | indent 8 }}
{{- end }}
{{- if .Values.podSecurityContext }}
securityContext:
{{ toYaml .Values.podSecurityContext | indent 10 }}
{{- end }}
volumeMounts:
{{- range .Values.secretMounts }}
- name: {{ .name }}
mountPath: {{ .path }}
{{- if .subPath }}
subPath: {{ .subPath }}
{{- end }}
{{- end }}
{{- range $path, $config := .Values.metricbeatConfig }}
- name: metricbeat-config
mountPath: /usr/share/metricbeat/{{ $path }}
readOnly: true
subPath: {{ $path }}
{{- end }}
{{- if .Values.extraVolumeMounts }}
{{ tpl .Values.extraVolumeMounts . | indent 8 }}
{{- end }}
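The `KUBE_STATE_METRICS_HOSTS` value above leans on Kubernetes service-link environment variables: for a Service named `<release>-kube-state-metrics` with a port named `http`, the kubelet injects `<RELEASE>_KUBE_STATE_METRICS_SERVICE_HOST` and `<RELEASE>_KUBE_STATE_METRICS_SERVICE_PORT_HTTP` (uppercased, dashes replaced by underscores), which is exactly the prefix the `replace "-" "_" | upper` pipeline reconstructs. A worked rendering, assuming a release named `efk` and the default kube-state-metrics fullname:

    - name: KUBE_STATE_METRICS_HOSTS
      value: "$(EFK_KUBE_STATE_METRICS_SERVICE_HOST):$(EFK_KUBE_STATE_METRICS_SERVICE_PORT_HTTP)"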
{{- if .Values.managedServiceAccount }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "serviceAccount" . }}
labels:
app: "{{ template "fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
{{- end -}}
---
# Allows you to add any config files in /usr/share/metricbeat
# such as metricbeat.yml
metricbeatConfig:
metricbeat.yml: |
system:
hostfs: /hostfs
metricbeat.modules:
- module: kubernetes
metricsets:
- container
- node
- pod
- system
- volume
period: 10s
host: "${NODE_NAME}"
hosts: ["${NODE_NAME}:10255"]
processors:
- add_kubernetes_metadata:
in_cluster: true
- module: kubernetes
enabled: true
metricsets:
- event
- module: system
period: 10s
metricsets:
- cpu
- load
- memory
- network
- process
- process_summary
processes: ['.*']
process.include_top_n:
by_cpu: 5
by_memory: 5
- module: system
period: 1m
metricsets:
- filesystem
- fsstat
processors:
- drop_event.when.regexp:
system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
output.elasticsearch:
hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
kube-state-metrics-metricbeat.yml: |
metricbeat.modules:
- module: kubernetes
enabled: true
metricsets:
- state_node
- state_deployment
- state_replicaset
- state_pod
- state_container
period: 10s
hosts: ["${KUBE_STATE_METRICS_HOSTS:kube-state-metrics:8080}"]
output.elasticsearch:
hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
# Replicas being used for the kube-state-metrics metricbeat deployment
replicas: 1
# Extra environment variables to append to the DaemonSet pod spec.
# These are appended to the existing 'env:' key; any Kubernetes env
# syntax can be used here.
extraEnvs: []
# - name: MY_ENVIRONMENT_VAR
# value: the_value_goes_here
extraVolumeMounts: ""
# - name: extras
# mountPath: /usr/share/extras
# readOnly: true
extraVolumes: ""
# - name: extras
# emptyDir: {}
# Root directory where metricbeat will write data to in order to persist registry data across pod restarts (file position and other metadata).
hostPathRoot: /var/lib
image: "docker.elastic.co/beats/metricbeat"
imageTag: "7.3.0"
imagePullPolicy: "IfNotPresent"
imagePullSecrets: []
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
# Whether this chart should self-manage its service account, role, and associated role binding.
managedServiceAccount: true
podAnnotations: {}
# iam.amazonaws.com/role: es-cluster
# Various pod security context settings. Bear in mind that many of these have an impact on metricbeat functioning properly.
#
# - Filesystem group for the metricbeat user. The official elastic docker images always have an id of 1000.
# - User that the container will execute as. Typically necessary to run as root (0) in order to properly collect host container logs.
# - Whether to execute the metricbeat containers as privileged containers. Typically not necessary unless running within environments such as OpenShift.
podSecurityContext:
runAsUser: 0
privileged: false
resources:
requests:
cpu: "100m"
memory: "100Mi"
limits:
cpu: "1000m"
memory: "200Mi"
# Custom service account override that the pod will use
serviceAccount: ""
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates and other security-sensitive values
secretMounts: []
# - name: metricbeat-certificates
# secretName: metricbeat-certificates
# path: /usr/share/metricbeat/certs
# How long to wait for metricbeat pods to stop gracefully
terminationGracePeriod: 30
tolerations: []
nodeSelector: {}
affinity: {}
updateStrategy: RollingUpdate
# Override various naming aspects of this chart
# Only edit these if you know what you're doing
nameOverride: ""
fullnameOverride: ""
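Both `output.elasticsearch.hosts` entries in `metricbeatConfig` use the Beats `${VAR:default}` expansion, defaulting to `elasticsearch-master:9200`. The output can therefore be redirected without rewriting the config block, for example by exporting `ELASTICSEARCH_HOSTS` through `extraEnvs`; the endpoint below is illustrative:

    extraEnvs:
      - name: ELASTICSEARCH_HOSTS
        value: "my-external-es.example.com:9200"   # hypothetical endpoint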
labels:
io.cattle.role: cluster # options are cluster/project
io.rancher.app_min_version: 7.3.0
categories:
- elasticsearch
questions:
- variable: defaultImage
default: true
description: "Use default Docker image"
label: Use Default Image
type: boolean
group: "Container Images"
show_subquestion_if: false
subquestions:
- variable: elasticsearch.image
default: "ranchercharts/elasticsearch-elasticsearch"
description: "Elasticsearch image name"
type: string
label: Elasticsearch Image Name
- variable: elasticsearch.imageTag
default: "7.3.0"
description: "Elasticsearch image tag"
type: string
label: Elasticsearch Image Tag
- variable: kibana.image
default: "ranchercharts/kibana-kibana"
description: "Kibana image name"
type: string
label: Kibana Image Name
- variable: kibana.imageTag
default: "7.3.0"
description: "Kibana image tag"
type: string
label: Kibana Image Tag
- variable: filebeat.image
default: "ranchercharts/beats-filebeat"
description: "File-beat image name"
type: string
label: Filebeat Image Name
- variable: filebeat.imageTag
default: "7.3.0"
description: "File-beat image tag"
type: string
label: Filebeat Image Tag
- variable: metricbeat.image
default: "ranchercharts/beats-metricbeat"
description: "Metric-beat image name"
type: string
label: Metricbeat Image Name
- variable: metricbeat.imageTag
default: "7.3.0"
description: "Metric-beat image tag"
type: string
label: Metricbeat Image Tag
- variable: metricbeat.kube-state-metrics.image.repository
default: "ranchercharts/coreos-kube-state-metrics"
description: "Metric-beat image name"
type: string
label: Kube-state-metrics Image Name
- variable: metricbeat.kube-state-metrics.image.tag
default: "v1.7.2"
description: "Metric-beat image tag"
type: string
label: Kube-state-metrics Image Tag
# kibana configs
- variable: kibana.language
default: "en"
description: "Config Kiabana Language"
type: enum
label: Configure Kibana Language
required: true
group: "Kibana Config"
options:
- "en"
- "zh-CN"
- "ja-JP"
- variable: kibana.service.type
default: "NodePort"
description: "Kibana service type"
type: enum
label: Kibana Service Type
required: true
group: "Kibana Config"
options:
- "ClusterIP"
- "NodePort"
- variable: kibana.enableProxy
default: true
description: "Expose kibana using nginx proxy"
type: boolean
label: Expose Kibana using Nginx Proxy
group: "Kibana Config"
# elasticSearch configs
- variable: elasticsearch.replicas
default: 3
description: "config elasticSearch replicas"
type: int
label: Configure Elasticsearch Replicas
required: true
group: "Elasticsearch Config"
min: 3
max: 99
- variable: elasticsearch.antiAffinity
default: "hard"
description: "Config elasticsearch pod antiAffinity type"
type: enum
label: Configure Elasticsearch Pod AntiAffinity Type
required: true
group: "Elasticsearch Config"
options:
- "hard"
- "soft"
- variable: elasticsearch.persistence.enabled
default: false
description: "Enable persistent volume for elasticsearch"
type: boolean
required: true
label: Enable Elasticsearch Persistent Volume
show_subquestion_if: true
group: "Elasticsearch Config"
subquestions:
- variable: elasticsearch.persistence.storageClass
default: ""
description: "If undefined or set to null, using the default StorageClass. Defaults to null."
type: storageclass
label: Storage Class for Elasticsearch Node
- variable: elasticsearch.persistence.size
default: "30Gi"
description: "Elasticsearch persistent volume size"
required: true
type: string
label: Elasticsearch Persistent Volume Size of Node
# file-beats configs
- variable: filebeat.enabled
default: true
description: "Enable filebeat, lightweight shipper for logs"
type: boolean
label: Enable Filebeat
required: true
group: "Filebeat Config"
- variable: metricbeat.enabled
default: true
description: "Enable metric-beat, lightweight shipper for metrics"
type: boolean
label: Enable Metricbeat
required: true
group: "Metricbeat Config"
dependencies:
- name: elasticsearch
version: 7.3.0
condition: elasticsearch.enabled
- name: kibana
version: 7.3.0
condition: kibana.enabled
repository: file://./charts/kibana
- name: filebeat
version: 7.3.0
condition: filebeat.enabled
repository: file://./charts/filebeat
- name: metricbeat
version: 7.3.0
condition: metricbeat.enabled
repository: file://./charts/metricbeat
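Every dependency above carries a `condition`, so individual subcharts can be toggled with the matching `*.enabled` flag rather than by editing this file. A sketch of an override that ships logs only (values are illustrative and would be passed with `-f` at install time or as Rancher answers):

    # efk-minimal.yaml (hypothetical file name)
    metricbeat:
      enabled: false    # skip Metricbeat and its kube-state-metrics dependency
    filebeat:
      enabled: true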
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "efk.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "efk.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
# Default values for efk.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
elasticsearch:
enabled: true
image: "ranchercharts/elasticsearch-elasticsearch"
imageTag: "7.3.0"
enableProxy: true
kibana:
enabled: true
image: "ranchercharts/kibana-kibana"
imageTag: "7.3.0"
filebeat:
enabled: true
image: "ranchercharts/beats-filebeat"
imageTag: "7.3.0"
metricbeat:
enabled: true
image: "ranchercharts/beats-metricbeat"
imageTag: "7.3.0"
kube-state-metrics:
image:
repository: ranchercharts/coreos-kube-state-metrics
tag: v1.7.2
global:
nginxProxy:
repository: rancher/nginx
tag: 1.15.8-alpine
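These top-level keys are handed to the subcharts of the same name, which is also how the Rancher questions address them (for example `elasticsearch.persistence.size`). A minimal override sketch mirroring those questions; the values are illustrative, not recommendations:

    kibana:
      service:
        type: NodePort
    elasticsearch:
      replicas: 3
      antiAffinity: "hard"
      persistence:
        enabled: true
        size: 30Gi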