Commit fee15b40 by Jan B, committed by Bill Maxwell

Helm Chart for Datadog Agent (#17)

* Added initial version of datadog chart
* Update questions.yml
* 2018: Auto stash before merge of "dash2018" and "jan/dash2018"
* Added note about RKE configuration
* Enhanced questions yaml
* Added description
* Fix nested subquestions
* Fix descriptions
* Fix tabs in yaml
* Fix network policy
parent 392427e2
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
name: datadog
version: 1.0.0
appVersion: 6.3.1
description: Datadog Agent
keywords:
- monitoring
- logging
home: https://www.datadoghq.com
icon: https://datadog-live.imgix.net/img/dd_logo_70x75.png
sources:
- https://app.datadoghq.com/account/settings#agent/kubernetes
- https://github.com/DataDog/datadog-agent
maintainers:
  - name: janeczku
    email: jan@rancher.com
# Datadog
[Datadog](https://www.datadoghq.com/) is a hosted infrastructure monitoring platform.
## Introduction
This chart adds the Datadog Agent to all nodes in your cluster via a DaemonSet. It also optionally deploys the [kube-state-metrics chart](https://github.com/kubernetes/charts/tree/master/stable/kube-state-metrics) as a dependency.
Please refer to [the agent6 image documentation](https://github.com/DataDog/datadog-agent/tree/master/Dockerfiles/agent) and
[the agent6 general documentation](https://github.com/DataDog/datadog-agent/tree/master/docs) for more information.
## Prerequisites
- Kubernetes 1.8+
## Kubelet configuration for RKE clusters
The Datadog Agent requires access to the kubelet API in order to function properly.
For RKE clusters, this means you need to enable read-only access to the kubelet on port 10255 before deploying this chart.
In Rancher v2.0.4, a custom RKE config can be applied both when creating new clusters and when updating existing ones. Navigate to `=> Cluster Options => Edit as YAML` and add or update the `kubelet` subkey in the `services` stanza:
```yaml
services:
  kubelet:
    extra_args:
      read-only-port: 10255
```
Note: Make sure this port is properly firewalled on all your nodes, since the read-only kubelet API is unauthenticated.
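To confirm that read-only access is enabled, you can query the kubelet from any machine with network access to a node. The node address below is a placeholder:

```bash
# Placeholder node address - substitute one of your cluster nodes.
NODE_IP=10.0.0.10

# The read-only kubelet API answers without authentication;
# a JSON pod list indicates the port is reachable.
curl -s "http://${NODE_IP}:10255/pods" | head -c 300
```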
## Deploying the Chart
1. First, retrieve your Datadog API key from your [Agent Installation Instructions](https://app.datadoghq.com/account/settings#agent/kubernetes).
2. By default, this Chart creates a Secret and stores the provided API key in that Secret. Alternatively, you can point to an existing Secret containing your API key with the `datadog.apiKeyExistingSecret` value.
3. Customize the configurable parameters of the chart and deploy.
4. After a few minutes, you should see hosts and metrics being reported in your Datadog account.
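If you opt for an existing Secret in step 2, it can be created ahead of the deployment; the chart looks the API key up under the key `api-key`. The secret name, release name, and chart path below are illustrative:

```bash
# Store the API key under the 'api-key' key expected by the chart.
kubectl create secret generic datadog-api-key \
  --from-literal=api-key=<YOUR_DATADOG_API_KEY>

# Point the chart at the pre-created Secret.
helm install --name datadog \
  --set datadog.apiKeyExistingSecret=datadog-api-key \
  ./datadog
```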
## Configuration
TODO: Add table of configurable parameters
### Event Collection
The Datadog Agent can collect events from the Kubernetes API server. This can be enabled by setting the value of `datadog.collectEvents` to `true`. This implicitly enables leader election among the members of the Datadog DaemonSet (coordinated through the Kubernetes API) to ensure that only one agent instance gathers events at a given time.
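For example, event collection (and with it leader election) can be enabled at install time; the release name and chart path are illustrative:

```bash
helm install --name datadog \
  --set datadog.collectEvents=true \
  ./datadog
```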
# Datadog
[Datadog](https://www.datadoghq.com/) is a hosted infrastructure monitoring platform.
This chart adds the Datadog Agent to all nodes in your cluster via a DaemonSet. The chart optionally also deploys the [kube-state-metrics](https://github.com/kubernetes/charts/tree/master/stable/kube-state-metrics) chart.
Note: Before deploying this chart, ensure that kubelet API access is properly configured in your cluster (see README).
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
apiVersion: v1
name: kube-state-metrics
description: Install kube-state-metrics to generate and expose cluster-level metrics
keywords:
- metric
- monitoring
- prometheus
version: 0.8.0
appVersion: 1.3.1
home: https://github.com/kubernetes/kube-state-metrics/
sources:
- https://github.com/kubernetes/kube-state-metrics/
maintainers:
  - name: fiunchinho
    email: jose@armesto.net
# kube-state-metrics Helm Chart
* Installs the [kube-state-metrics agent](https://github.com/kubernetes/kube-state-metrics).
## Installing the Chart
To install the chart with the release name `my-release`:
```bash
$ helm install --name my-release stable/kube-state-metrics
```
## Configuration
| Parameter | Description | Default |
|---------------------------------------|---------------------------------------------------------|---------------------------------------------|
| `image.repository` | The image repository to pull from | `k8s.gcr.io/kube-state-metrics` |
| `image.tag` | The image tag to pull from | `<latest version>` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `service.port` | The port of the container | `8080` |
| `prometheusScrape` | Whether or not to add the `prometheus.io/scrape` annotation to the service | `true` |
| `rbac.create` | If true, create & use RBAC resources | `false` |
| `rbac.serviceAccountName` | ServiceAccount to be used (ignored if `rbac.create=true`) | `default` |
| `nodeSelector` | Node labels for pod assignment | {} |
| `tolerations` | Tolerations for pod assignment | [] |
| `podAnnotations` | Annotations to be added to the pod | {} |
| `resources` | kube-state-metrics resource requests and limits | {} |
| `collectors.cronjobs` | Enable the cronjobs collector. | true |
| `collectors.daemonsets` | Enable the daemonsets collector. | true |
| `collectors.deployments` | Enable the deployments collector. | true |
| `collectors.endpoints` | Enable the endpoints collector. | true |
| `collectors.horizontalpodautoscalers` | Enable the horizontalpodautoscalers collector. | true |
| `collectors.jobs` | Enable the jobs collector. | true |
| `collectors.limitranges` | Enable the limitranges collector. | true |
| `collectors.namespaces` | Enable the namespaces collector. | true |
| `collectors.nodes` | Enable the nodes collector. | true |
| `collectors.persistentvolumeclaims` | Enable the persistentvolumeclaims collector. | true |
| `collectors.persistentvolumes` | Enable the persistentvolumes collector. | true |
| `collectors.pods` | Enable the pods collector. | true |
| `collectors.replicasets` | Enable the replicasets collector. | true |
| `collectors.replicationcontrollers` | Enable the replicationcontrollers collector. | true |
| `collectors.resourcequotas` | Enable the resourcequotas collector. | true |
| `collectors.services` | Enable the services collector. | true |
| `collectors.statefulsets` | Enable the statefulsets collector. | true |
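Parameters can be overridden with `--set` at install time, for example to enable RBAC and skip a collector (the values shown are illustrative):

```bash
$ helm install stable/kube-state-metrics \
    --set rbac.create=true \
    --set collectors.cronjobs=false
```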
kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects.
The exposed metrics can be found here:
https://github.com/kubernetes/kube-state-metrics/tree/master/Documentation#documentation.
The metrics are exported on the HTTP endpoint /metrics on the listening port.
In your case, {{ template "kube-state-metrics.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local:{{ .Values.service.port }}/metrics
They are served either as plaintext or protobuf depending on the Accept header.
They are designed to be consumed either by Prometheus itself or by a scraper that is compatible with scraping a Prometheus client endpoint.
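For example, you can port-forward the Service and query the endpoint locally:

  kubectl port-forward svc/{{ template "kube-state-metrics.fullname" . }} {{ .Values.service.port }}:{{ .Values.service.port }} --namespace {{ .Release.Namespace }}
  curl -s localhost:{{ .Values.service.port }}/metrics | head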
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "kube-state-metrics.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "kube-state-metrics.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
labels:
app: {{ template "kube-state-metrics.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "kube-state-metrics.fullname" . }}
rules:
{{ if .Values.collectors.cronjobs }}
- apiGroups: ["batch"]
resources:
- cronjobs
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.daemonsets }}
- apiGroups: ["extensions"]
resources:
- daemonsets
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.deployments }}
- apiGroups: ["extensions"]
resources:
- deployments
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.endpoints }}
- apiGroups: [""]
resources:
- endpoints
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.horizontalpodautoscalers }}
- apiGroups: ["autoscaling"]
resources:
- horizontalpodautoscalers
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.jobs }}
- apiGroups: ["batch"]
resources:
- jobs
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.limitranges }}
- apiGroups: [""]
resources:
- limitranges
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.namespaces }}
- apiGroups: [""]
resources:
- namespaces
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.nodes }}
- apiGroups: [""]
resources:
- nodes
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.persistentvolumeclaims }}
- apiGroups: [""]
resources:
- persistentvolumeclaims
verbs: ["list", "watch"]
{{ end }}
{{ if .Values.collectors.persistentvolumes }}
- apiGroups: [""]
resources:
- persistentvolumes
verbs: ["list", "watch"]
{{ end }}
{{ if .Values.collectors.pods }}
- apiGroups: [""]
resources:
- pods
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.replicasets }}
- apiGroups: ["extensions"]
resources:
- replicasets
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.replicationcontrollers }}
- apiGroups: [""]
resources:
- replicationcontrollers
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.resourcequotas }}
- apiGroups: [""]
resources:
- resourcequotas
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.services }}
- apiGroups: [""]
resources:
- services
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.statefulsets }}
- apiGroups: ["apps"]
resources:
- statefulsets
verbs: ["list", "watch"]
{{ end -}}
{{- end -}}
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
labels:
app: {{ template "kube-state-metrics.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "kube-state-metrics.fullname" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "kube-state-metrics.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "kube-state-metrics.fullname" . }}
namespace: {{ .Release.Namespace }}
{{- end -}}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ template "kube-state-metrics.fullname" . }}
labels:
app: {{ template "kube-state-metrics.name" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
spec:
replicas: 1
template:
metadata:
labels:
app: {{ template "kube-state-metrics.name" . }}
release: "{{ .Release.Name }}"
{{- if .Values.podAnnotations }}
annotations:
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
serviceAccountName: {{ if .Values.rbac.create }}{{ template "kube-state-metrics.fullname" . }}{{ else }}"{{ .Values.rbac.serviceAccountName }}"{{ end }}
containers:
- name: {{ .Chart.Name }}
args:
{{ if .Values.collectors.cronjobs }}
- --collectors=cronjobs
{{ end }}
{{ if .Values.collectors.daemonsets }}
- --collectors=daemonsets
{{ end }}
{{ if .Values.collectors.deployments }}
- --collectors=deployments
{{ end }}
{{ if .Values.collectors.endpoints }}
- --collectors=endpoints
{{ end }}
{{ if .Values.collectors.horizontalpodautoscalers }}
- --collectors=horizontalpodautoscalers
{{ end }}
{{ if .Values.collectors.jobs }}
- --collectors=jobs
{{ end }}
{{ if .Values.collectors.limitranges }}
- --collectors=limitranges
{{ end }}
{{ if .Values.collectors.namespaces }}
- --collectors=namespaces
{{ end }}
{{ if .Values.collectors.nodes }}
- --collectors=nodes
{{ end }}
{{ if .Values.collectors.persistentvolumeclaims }}
- --collectors=persistentvolumeclaims
{{ end }}
{{ if .Values.collectors.persistentvolumes }}
- --collectors=persistentvolumes
{{ end }}
{{ if .Values.collectors.pods }}
- --collectors=pods
{{ end }}
{{ if .Values.collectors.replicasets }}
- --collectors=replicasets
{{ end }}
{{ if .Values.collectors.replicationcontrollers }}
- --collectors=replicationcontrollers
{{ end }}
{{ if .Values.collectors.resourcequotas }}
- --collectors=resourcequotas
{{ end }}
{{ if .Values.collectors.services }}
- --collectors=services
{{ end }}
{{ if .Values.collectors.statefulsets }}
- --collectors=statefulsets
{{ end }}
{{ if .Values.namespace }}
- --namespace={{ .Values.namespace }}
{{ end }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
ports:
- containerPort: 8080
readinessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 5
timeoutSeconds: 5
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "kube-state-metrics.fullname" . }}
labels:
app: {{ template "kube-state-metrics.name" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
{{- if .Values.prometheusScrape }}
annotations:
prometheus.io/scrape: '{{ .Values.prometheusScrape }}'
{{- end }}
spec:
type: "{{ .Values.service.type }}"
ports:
- name: "http"
protocol: TCP
port: {{ .Values.service.port }}
{{- if .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
targetPort: 8080
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: "{{ .Values.service.loadBalancerIP }}"
{{- end }}
selector:
app: {{ template "kube-state-metrics.name" . }}
release: {{ .Release.Name }}
{{- if .Values.rbac.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: {{ template "kube-state-metrics.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "kube-state-metrics.fullname" . }}
{{- end -}}
# Default values for kube-state-metrics.
prometheusScrape: true
image:
  repository: k8s.gcr.io/kube-state-metrics
  tag: v1.3.1
  pullPolicy: IfNotPresent
service:
  port: 8080
  # Default to clusterIP for backward compatibility
  type: ClusterIP
  nodePort: 0
  loadBalancerIP: ""
rbac:
  # If true, create & use RBAC resources
  create: false
  # Ignored if rbac.create is true
  serviceAccountName: default
## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
## Tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# Annotations to be added to the pod
podAnnotations: {}
# Available collectors for kube-state-metrics. By default all available
# collectors are enabled.
collectors:
  cronjobs: true
  daemonsets: true
  deployments: true
  endpoints: true
  horizontalpodautoscalers: true
  jobs: true
  limitranges: true
  namespaces: true
  nodes: true
  persistentvolumeclaims: true
  persistentvolumes: true
  pods: true
  replicasets: true
  replicationcontrollers: true
  resourcequotas: true
  services: true
  statefulsets: true
# Namespace to be enabled for collecting resources. By default all namespaces are collected.
# namespace: ""
questions:
#image configurations
- variable: defaultImage
default: "true"
description: "Use default Datadog image or specify a custom one"
label: Use Default Datadog Image
type: boolean
show_subquestion_if: false
group: "Container Images"
subquestions:
- variable: image.repository
default: "datadog/agent"
description: "Datadog image name"
type: string
label: Datadog Image Name
- variable: image.tag
default: "6.3.1"
description: "Datadog Image Tag"
type: string
label: Datadog Image Tag
#agent configurations
- variable: datadog.apiKey
default: ""
description: "Enter your Datadog API Key."
type: string
label: Datadog API Key
group: "Agent Configuration"
- variable: datadog.apiKeyExistingSecret
default: ""
description: "Provide the name of an existing secret that contains the API key"
type: string
label: Use Existing API Key Secret
group: "Agent Configuration"
- variable: datadog.logLevel
default: "warn"
description: "Set Agent logging verbosity"
type: enum
options:
- "trace"
- "debug"
- "info"
- "warn"
- "error"
- "critical"
label: Log Level
group: "Agent Configuration"
- variable: datadog.nonLocalTraffic
default: false
description: "Whether DogStatsD should listen to non-local UDP traffic"
type: boolean
label: Non Local Traffic
group: "Agent Configuration"
- variable: datadog.hostTags
default: ""
description: "Tag all nodes with these tags. Specify the tags separated by spaces, e.g. `simple-tag kube-cluster-name:my-cluster`."
type: string
label: Host Tags
group: "Agent Configuration"
- variable: datadog.labelsAsTags
default: ""
description: 'Specify a JSON map, where the map key is the source label name and the map value is the Datadog tag name. E.g: {"app":"kube_app","release":"helm_release"}.'
type: string
label: Extract Pod Labels as Tags
group: "Agent Configuration"
- variable: datadog.annotationsAsTags
default: ""
description: 'Specify a JSON map, where the map key is the source annotation name and the map value is the Datadog tag name. E.g: {"app":"kube_app","release":"helm_release"}.'
type: string
label: Extract Pod Annotations as Tags
group: "Agent Configuration"
- variable: datadog.nodeLabelsAsTags
default: ""
description: 'Specify a JSON map, where the map key is the source label name and the map value is the Datadog tag name. E.g: {"app":"kube_app","release":"helm_release"}.'
type: string
label: Extract Node Labels As Tags
group: "Agent Configuration"
- variable: datadog.collectEvents
default: true
description: "Enable event collection from the Kubernetes API"
type: boolean
label: Collect Events
group: "Agent Configuration"
- variable: datadog.collectLogs
default: false
description: "Enables Datadog log collection"
type: boolean
label: Collect Logs
group: "Agent Configuration"
- variable: datadog.apmEnabled
default: false
description: "Run the trace-agent along with the infrastructure agent"
type: boolean
label: Enable APM
group: "Agent Configuration"
#pod configurations
- variable: pods.useHostPort
default: false
description: "Bind DogstatsD and Trace to hostPort"
type: boolean
label: Use HostPort
group: "Pod Configuration"
- variable: pods.useHostNetwork
default: false
description: "Bind DogstatsD and Trace to hostNetwork"
type: boolean
label: Use HostNetwork
group: "Pod Configuration"
- variable: pods.rkeDataControlPlane
default: true
description: "Configure Datadog Agent pods with the tolerations required to run on RKE etcd and control plane nodes."
type: boolean
label: Run on RKE Control Plane Nodes
group: "Pod Configuration"
- variable: pods.httpProxy
default: ""
description: "HTTP Proxy URL"
type: string
label: HTTP Proxy
group: "Pod Configuration"
- variable: pods.httpsProxy
default: ""
description: "HTTPS Proxy URL"
type: string
label: HTTPS Proxy
group: "Pod Configuration"
- variable: pods.noProxy
default: ""
description: "URLs that should not use a proxy (comma-separated)"
type: string
label: No Proxy
group: "Pod Configuration"
#service configurations
- variable: service.enabled
default: false
description: "Create a service endpoint"
type: boolean
label: Create Service
group: "Service Configuration"
show_subquestion_if: true
subquestions:
- variable: service.serviceType
default: "ClusterIP"
description: "Service type to create"
type: enum
options:
- "ClusterIP"
- "NodePort"
- "LoadBalancer"
label: Service Type
#rbac configuration
- variable: rbac.create
default: true
description: "Create RBAC resources for Datadog"
type: boolean
label: Create RBAC
group: "RBAC Configuration"
show_subquestion_if: false
subquestions:
- variable: rbac.serviceAccountName
default: ""
description: "Provide an existing service account that has the required role bindings"
type: string
label: Use Service Account
#Kube State Metrics
- variable: deployKubeStateMetrics
default: true
group: "Kube-State-Metrics"
description: "Create a kube-state-metrics deployment"
type: boolean
label: Create Deployment
required: false
show_subquestion_if: true
subquestions:
- variable: kube-state-metrics.rbac.create
default: true
description: "Create RBAC resources for kube-state-metrics"
type: boolean
label: Create RBAC
- variable: kube-state-metrics.rbac.serviceAccountName
default: ""
description: "Use an existing service account that has the required role bindings. This will be ignored if Create RBAC is set to true."
type: string
label: Use Service Account
dependencies:
- name: kube-state-metrics
version: ~0.8.0
condition: deployKubeStateMetrics
Datadog agents are spinning up on each node in your cluster. After a few
minutes, you should see your agents starting in your event stream:
https://app.datadoghq.com/event/stream
{{ if .Values.datadog.apiKeyExistingSecret }}
You disabled creation of a Secret containing your API key.
Make sure the Secret named '{{ .Values.datadog.apiKeyExistingSecret }}' exists and contains your API key under the key 'api-key'. You can obtain your API key from: https://app.datadoghq.com/account/settings#agent/kubernetes
{{ end }}
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "datadog.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "datadog.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Return secret name to be used based on provided values.
*/}}
{{- define "datadog.apiSecretName" -}}
{{- $fullName := include "datadog.fullname" . -}}
{{- default $fullName .Values.datadog.apiKeyExistingSecret | quote -}}
{{- end -}}
{{/*
Return service account name to be used based on provided values.
*/}}
{{- define "datadog.serviceAccountName" -}}
{{- $fullName := include "datadog.fullname" . -}}
{{- default $fullName .Values.rbac.serviceAccountName | quote -}}
{{- end -}}
{{/*
Return the appropriate apiVersion for RBAC APIs.
*/}}
{{- define "rbac.apiVersion" -}}
{{- if semverCompare "^1.8-0" .Capabilities.KubeVersion.GitVersion -}}
"rbac.authorization.k8s.io/v1"
{{- else -}}
"rbac.authorization.k8s.io/v1beta1"
{{- end -}}
{{- end -}}
{{- if .Values.datadog.collectEvents -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: datadogtoken
labels:
app: {{ template "datadog.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
data:
event.tokenKey: "0"
{{- end -}}
{{- if .Values.rbac.create -}}
apiVersion: {{ template "rbac.apiVersion" . }}
kind: ClusterRole
metadata:
labels:
app: {{ template "datadog.name" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "datadog.fullname" . }}
rules:
- nonResourceURLs:
- "/version" # Used to get apiserver version metadata
- "/healthz" # Healthcheck
verbs: ["get"]
- apiGroups:
- ""
- apps
- batch
- extensions
resources:
- componentstatuses
- endpoints
- jobs
- pods
- replicasets
- replicationcontrollers
- daemonsets
- deployments
- statefulsets
- nodes
- namespaces
- events # Cluster events + kube_service cache invalidation
- services # kube_service tag
verbs: ["get", "list", "watch"]
{{- if .Values.datadog.collectEvents }}
- apiGroups: [""]
resources:
- configmaps
resourceNames:
- datadogtoken # Kubernetes event collection state
- datadog-leader-election # Leader election token
verbs: ["get", "delete", "update"]
- apiGroups: [""] # Create the datadog-leader-election config-map
resources:
- "configmaps"
verbs: ["create"]
{{- end }}
- apiGroups: # Access to kubelet API resources
- ""
resources:
- nodes/metrics
- nodes/spec
- nodes/proxy # Is this even needed? nodes/pods ?
verbs:
- get
{{- end -}}
{{- if .Values.rbac.create -}}
apiVersion: {{ template "rbac.apiVersion" . }}
kind: ClusterRoleBinding
metadata:
labels:
app: {{ template "datadog.name" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "datadog.fullname" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "datadog.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "datadog.fullname" . }}
namespace: {{ .Release.Namespace }}
{{- end -}}
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: {{ template "datadog.fullname" . }}
labels:
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
template:
metadata:
labels:
app: {{ template "datadog.fullname" . }}
name: {{ template "datadog.fullname" . }}
spec:
{{- if .Values.pods.useHostNetwork }}
hostNetwork: true
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
resources:
requests:
cpu: 200m
memory: 256Mi
limits:
cpu: 200m
memory: 256Mi
ports:
- containerPort: 8125
{{- if .Values.pods.useHostPort }}
hostPort: 8125
{{- end }}
name: dogstatsdport
protocol: UDP
{{- if .Values.datadog.apmEnabled }}
- containerPort: 8126
{{- if .Values.pods.useHostPort }}
hostPort: 8126
{{- end }}
name: traceport
protocol: TCP
{{- end }}
env:
- name: DD_API_KEY
valueFrom:
secretKeyRef:
name: {{ template "datadog.apiSecretName" . }}
key: api-key
{{- if .Values.datadog.logLevel }}
- name: DD_LOG_LEVEL
value: {{ .Values.datadog.logLevel | quote }}
{{- end }}
{{- if .Values.datadog.nonLocalTraffic }}
- name: DD_DOGSTATSD_NON_LOCAL_TRAFFIC
value: "true"
{{- end }}
{{- if .Values.datadog.hostTags }}
- name: DD_TAGS
value: {{ .Values.datadog.hostTags | quote }}
{{- end }}
{{- if .Values.datadog.labelsAsTags }}
- name: DD_KUBERNETES_POD_LABELS_AS_TAGS
value: {{ .Values.datadog.labelsAsTags | quote }}
{{- end }}
{{- if .Values.datadog.annotationsAsTags }}
- name: DD_KUBERNETES_POD_ANNOTATIONS_AS_TAGS
value: {{ .Values.datadog.annotationsAsTags | quote }}
{{- end }}
{{- if .Values.datadog.nodeLabelsAsTags }}
- name: DD_KUBERNETES_NODE_LABELS_AS_TAGS
value: {{ .Values.datadog.nodeLabelsAsTags | quote }}
{{- end }}
{{- if .Values.datadog.apmEnabled }}
- name: DD_APM_ENABLED
value: {{ .Values.datadog.apmEnabled | quote }}
{{- end }}
{{- if .Values.datadog.collectEvents }}
- name: DD_LEADER_ELECTION
value: "true"
- name: DD_COLLECT_KUBERNETES_EVENTS
value: "true"
{{- end }}
- name: KUBERNETES
value: "yes"
- name: DD_KUBERNETES_KUBELET_HOST
valueFrom:
fieldRef:
fieldPath: status.hostIP
{{- if .Values.datadog.collectLogs }}
- name: DD_LOGS_ENABLED
value: "true"
- name: DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL
value: "true"
{{- end }}
{{- if .Values.pods.httpProxy }}
- name: HTTP_PROXY
value: {{ .Values.pods.httpProxy | quote }}
{{- end }}
{{- if .Values.pods.httpsProxy }}
- name: HTTPS_PROXY
value: {{ .Values.pods.httpsProxy | quote }}
{{- end }}
{{- if .Values.pods.noProxy }}
- name: NO_PROXY
value: {{ .Values.pods.noProxy | quote }}
{{- end }}
volumeMounts:
- name: dockersocket
mountPath: /var/run/docker.sock
- name: procdir
mountPath: /host/proc
readOnly: true
- name: cgroups
mountPath: /host/sys/fs/cgroup
readOnly: true
{{- if .Values.datadog.collectLogs }}
- name: pointerdir
mountPath: /opt/datadog-agent/run
{{- end }}
livenessProbe:
exec:
command:
- ./probe.sh
initialDelaySeconds: 15
periodSeconds: 5
volumes:
- hostPath:
path: /var/run/docker.sock
name: dockersocket
- hostPath:
path: /proc
name: procdir
- hostPath:
path: /sys/fs/cgroup
name: cgroups
{{- if .Values.datadog.collectLogs }}
- hostPath:
path: /opt/datadog-agent/run
name: pointerdir
{{- end }}
{{- if (or (.Values.pods.tolerations) (.Values.pods.rkeDataControlPlane)) }}
tolerations:
{{- if .Values.pods.rkeDataControlPlane }}
- key: "node-role.kubernetes.io/etcd"
value: "true"
- key: "node-role.kubernetes.io/controlplane"
value: "true"
{{- end }}
{{- if .Values.pods.tolerations }}
{{ toYaml .Values.pods.tolerations | indent 8 }}
{{- end }}
{{- end }}
{{- if .Values.pods.affinity }}
affinity:
{{ toYaml .Values.pods.affinity | indent 8 }}
{{- end }}
serviceAccountName: {{ template "datadog.serviceAccountName" . }}
{{- if .Values.pods.nodeSelector }}
nodeSelector:
{{ toYaml .Values.pods.nodeSelector | indent 8 }}
{{- end }}
updateStrategy:
type: "OnDelete"
{{- if .Values.service.enabled -}}
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
name: {{ template "datadog.fullname" . }}
labels:
app: {{ template "datadog.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
spec:
ingress:
- from:
- namespaceSelector: {}
ports:
- port: 8125
protocol: UDP
{{- if .Values.datadog.apmEnabled }}
- port: 8126
protocol: TCP
{{- end }}
podSelector:
matchLabels:
app: {{ template "datadog.fullname" . }}
{{- end -}}
{{- if .Values.rbac.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: {{ template "datadog.name" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "datadog.fullname" . }}
{{- end -}}
{{- if not .Values.datadog.apiKeyExistingSecret -}}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "datadog.fullname" . }}
labels:
app: {{ template "datadog.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
type: Opaque
data:
api-key: {{ default "MISSING" .Values.datadog.apiKey | b64enc | quote }}
{{- end -}}
{{- if .Values.service.enabled -}}
apiVersion: v1
kind: Service
metadata:
name: {{ template "datadog.fullname" . }}
labels:
app: {{ template "datadog.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
spec:
type: {{ .Values.service.serviceType }}
selector:
app: {{ template "datadog.fullname" . }}
ports:
- port: 8125
name: dogstatsdport
protocol: UDP
{{- if .Values.datadog.apmEnabled }}
- port: 8126
name: traceport
protocol: TCP
{{- end }}
{{- end -}}
image:
repository: datadog/agent
tag: 6.3.1
pullPolicy: IfNotPresent
datadog:
## You'll need to set this to your Datadog API key.
## ref: https://app.datadoghq.com/account/settings#agent/kubernetes
##
apiKey:
## Use an existing secret that contains the API key instead of creating a new one.
apiKeyExistingSecret:
## Set Agent logging verbosity. Valid values are one of:
## trace, debug, info, warn, error, critical, and off
##
logLevel: warn
## Whether DogStatsD should listen to non local UDP traffic.
## This is required to send StatsD metrics from other pods or from outside the cluster.
## Ref: https://github.com/DataDog/datadog-agent/blob/master/Dockerfiles/agent/README.md
nonLocalTraffic: false
## Tag all nodes with these static tags.
## This must be a single string value with tags separated by spaces.
## For example: "simple-tag kube-cluster-name:my-cluster"
## ref: https://github.com/DataDog/datadog-agent/tree/master/Dockerfiles/agent#global-options
##
hostTags:
## Extract pod labels, pod annotations or node labels to add additional tags to metrics and nodes.
  ## Each value must be a JSON map, where the map key is the source label name and the map
  ## value is the Datadog tag name.
## For example: '{"app":"kube_app","release":"helm_release"}'
## ref: https://github.com/DataDog/datadog-agent/tree/master/Dockerfiles/agent#tagging
##
labelsAsTags:
annotationsAsTags:
nodeLabelsAsTags:
  ## Enable event collection from the Kubernetes API
## ref: https://github.com/DataDog/datadog-agent/blob/master/Dockerfiles/agent/README.md
##
collectEvents: true
## Enables Datadog log collection
## ref: https://github.com/DataDog/datadog-agent/tree/master/Dockerfiles/agent#log-collection
##
collectLogs: false
## Run the trace-agent along with the infrastructure agent, allowing the container to accept traces on 8126/tcp.
## ref: https://github.com/DataDog/datadog-agent/blob/master/Dockerfiles/agent/README.md#optional-collection-agents
apmEnabled: false
pods:
  ## Bind the DogStatsD (8125/UDP) and Trace (8126/TCP) ports to hostPorts of the same value.
## The ports will need to be available on all nodes.
## This allows pods to emit StatsD metrics locally and correlate pod metrics with the nodes
## they are running on.
## You can pass the local host IP to any pod as an environment variable in the PodSpec using
## the `status.hostIP` downward API reference:
## Example:
##
## env:
## - name: DOGSTATSD_HOST_IP
## valueFrom:
## fieldRef:
## fieldPath: status.hostIP
##
## WARNING: Make sure that hosts using this are properly firewalled. Otherwise metrics and
## traces will be accepted from any host able to connect to this host.
## Refs: https://docs.datadoghq.com/developers/dogstatsd/
useHostPort: false
  ## Binds the DogStatsD (8125/UDP) and Trace (8126/TCP) ports on the hostNetwork.
## Use this if your CNI network plugin does not support hostPort.
useHostNetwork: false
## Configure pods with the required tolerations to run agents
## on RKE data and control plane nodes.
rkeDataControlPlane: true
## Configure additional tolerations to allow the DaemonSet to schedule on tainted nodes.
# tolerations: []
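  ## For example, to tolerate a custom taint (hypothetical key/value; adjust to your nodes):
  # tolerations:
  #   - key: "dedicated"
  #     operator: "Equal"
  #     value: "monitoring"
  #     effect: "NoSchedule"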
## Limit the DaemonSet to create pods on nodes matching the selector.
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
# nodeSelector: {}
## Allow the DaemonSet to schedule using affinity rules.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
# affinity: {}
  ## If your network configuration restricts outbound traffic, you can configure the Agent
## to connect to the internet through a web proxy:
##
## An http URL to use as a proxy for http requests.
httpProxy:
## An http URL to use as a proxy for https requests.
httpsProxy:
## A comma-separated list of URLs for which no proxy should be used.
noProxy:
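  ## For example (hypothetical proxy host and port; adjust to your environment):
  ##
  ## httpProxy: http://proxy.example.com:3128
  ## httpsProxy: http://proxy.example.com:3128
  ## noProxy: localhost,127.0.0.1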
service:
  ## Creates a service endpoint for sending custom metrics and traces from your applications (8125/UDP and 8126/TCP).
  ## Enabling this will implicitly create a network policy that permits access to these ports from all projects/namespaces.
  ## Use this as an alternative to hostPort binding (see above) to ingest StatsD metrics into Datadog.
  ## Caveat: Kubernetes services do not guarantee that traffic will be handled by a pod local to the client.
  ## This might result in metrics being tagged with incorrect host tags.
enabled: false
## Set serviceType to `NodePort` to make the service accessible from outside the cluster. Otherwise use `ClusterIP`.
serviceType: ClusterIP
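  ## With the service enabled, application pods can reach DogStatsD via the service's
  ## cluster-internal DNS name. For example (hypothetical service name "datadog" in
  ## namespace "default"; the actual name depends on your release name):
  ##
  ## statsd_host: datadog.default.svc.cluster.local
  ## statsd_port: 8125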
rbac:
## Create RBAC resources required for Datadog
create: true
## Use an existing service account with required role bindings.
## Ignored if rbac.create is true.
serviceAccountName:
## Create a kube-state-metrics deployment
## ref: https://github.com/kubernetes/charts/tree/master/stable/kube-state-metrics
deployKubeStateMetrics: false
## Values for kube-state-metrics child chart
kube-state-metrics:
rbac:
## Create RBAC resources required for kube-state-metrics
create: true
## Use an existing service account with required role bindings.
## Ignored if rbac.create is true.
serviceAccountName: