Commit 44cadd78 by Guangbo Chen

added kafka and kube-dashboard chart

parent 0a92af26
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
apiVersion: v1
description: Apache Kafka is publish-subscribe messaging rethought as a distributed commit log.
name: kafka
version: 0.7.3
appVersion: 4.0.1
keywords:
- kafka
- zookeeper
- kafka statefulset
home: https://kafka.apache.org/
sources:
- https://github.com/kubernetes/charts/tree/master/incubator/zookeeper
- https://github.com/Yolean/kubernetes-kafka
- https://github.com/confluentinc/cp-docker-images
- https://github.com/apache/kafka
maintainers:
- name: Faraaz Khan
email: faraaz@rationalizeit.us
- name: Marc Villacorta
email: marc.villacorta@gmail.com
- name: Ben Goldberg
email: ben@spothero.com
icon: https://kafka.apache.org/images/logo.png
approvers:
- benjigoldberg
reviewers:
- benjigoldberg
# Apache Kafka Helm Chart
This chart is an implementation of the Kafka StatefulSet found here:
* https://github.com/Yolean/kubernetes-kafka
## StatefulSet Caveats
* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations
### Installing the Chart
This chart includes a ZooKeeper chart as a dependency for the Kafka
cluster in its `requirements.yaml` by default. The chart can be customized using the
following configurable parameters:
| Parameter | Description | Default |
| ------------------------------ | --------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------- |
| `image` | Kafka Container image name | `confluentinc/cp-kafka` |
| `imageTag` | Kafka Container image tag | `4.0.0` |
| `imagePullPolicy` | Kafka Container pull policy | `IfNotPresent` |
| `replicas` | Kafka Brokers | `3` |
| `component` | Kafka k8s selector key | `kafka` |
| `resources` | Kafka resource requests and limits | `{}` |
| `kafkaHeapOptions`             | Kafka broker JVM heap options                                                                                     | `-Xmx1G -Xms1G`                                             |
| `logSubPath` | Subpath under `persistence.mountPath` where kafka logs will be placed. | `logs` |
| `schedulerName` | Name of Kubernetes scheduler (other than the default) | `nil` |
| `affinity` | Defines affinities and anti-affinities for pods as defined in: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity preferences | `{}` |
| `tolerations` | List of node tolerations for the pods. https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ | `[]` |
| `external.enabled` | If True, exposes Kafka brokers via NodePort (PLAINTEXT by default) | `false` |
| `external.servicePort`         | TCP port configured at external services (one per pod) to relay from NodePort to the external listener port.     | `19092` |
| `external.firstListenerPort`   | TCP port to which the pod index is added to arrive at the port used for the NodePort and external listener port. | `31090` |
| `external.domain` | Domain in which to advertise Kafka external listeners. | `cluster.local` |
| `external.init` | External init container settings. | (see `values.yaml`) |
| `rbac.enabled` | Enable a service account and role for the init container to use in an RBAC enabled cluster | `false` |
| `configurationOverrides`       | Kafka [configuration setting][brokerconfigs] overrides in dictionary format                                      | `{ offsets.topic.replication.factor: 3 }` |
| `additionalPorts` | Additional ports to expose on brokers. Useful when the image exposes metrics (like prometheus, etc.) through a javaagent instead of a sidecar | `{}` |
| `readinessProbe.initialDelaySeconds` | Number of seconds before probe is initiated. | `30` |
| `readinessProbe.periodSeconds` | How often (in seconds) to perform the probe. | `10` |
| `readinessProbe.timeoutSeconds` | Number of seconds after which the probe times out. | `5` |
| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | `1` |
| `readinessProbe.failureThreshold` | After the probe fails this many times, the pod will be marked Unready. | `3` |
| `terminationGracePeriodSeconds` | Wait up to this many seconds for a broker to shut down gracefully, after which it is killed | `60` |
| `updateStrategy` | StatefulSet update strategy to use. | `{ type: "OnDelete" }` |
| `podManagementPolicy` | Start and stop pods in Parallel or OrderedReady (one-by-one). Cannot be changed after the first release. | `OrderedReady` |
| `persistence.enabled` | Use a PVC to persist data | `true` |
| `persistence.size` | Size of data volume | `1Gi` |
| `persistence.mountPath` | Mount path of data volume | `/opt/kafka/data` |
| `persistence.storageClass` | Storage class of backing PVC | `nil` |
| `jmx.configMap.enabled` | Enable the default ConfigMap for JMX | `true` |
| `jmx.configMap.overrideConfig` | Allows config file to be generated by passing values to ConfigMap | `{}` |
| `jmx.configMap.overrideName` | Allows setting the name of the ConfigMap to be used | `""` |
| `jmx.port` | The port on which JMX-style metrics are exposed (note: these are not scrapeable by Prometheus) | `5555` |
| `jmx.whitelistObjectNames` | Allows setting which JMX objects to expose to the JMX Exporter | (see `values.yaml`) |
| `prometheus.jmx.resources` | Allows setting resource limits for jmx sidecar container | `{}` |
| `prometheus.jmx.enabled` | Whether or not to expose JMX metrics to Prometheus | `false` |
| `prometheus.jmx.image` | JMX Exporter container image | `solsson/kafka-prometheus-jmx-exporter@sha256` |
| `prometheus.jmx.imageTag` | JMX Exporter container image tag | `a23062396cd5af1acdf76512632c20ea6be76885dfc20cd9ff40fb23846557e8` |
| `prometheus.jmx.interval` | Interval that Prometheus scrapes JMX metrics when using Prometheus Operator | `10s` |
| `prometheus.jmx.port` | JMX Exporter Port which exposes metrics in Prometheus format for scraping | `5556` |
| `prometheus.kafka.enabled` | Whether or not to create a separate Kafka exporter | `false` |
| `prometheus.kafka.image` | Kafka Exporter container image | `danielqsj/kafka-exporter` |
| `prometheus.kafka.imageTag` | Kafka Exporter container image tag | `v1.0.1` |
| `prometheus.kafka.interval` | Interval that Prometheus scrapes Kafka metrics when using Prometheus Operator | `10s` |
| `prometheus.kafka.port` | Kafka Exporter Port which exposes metrics in Prometheus format for scraping | `9308` |
| `prometheus.kafka.resources` | Allows setting resource limits for kafka-exporter pod | `{}` |
| `prometheus.operator` | True if using the Prometheus Operator, False if not | `false` |
| `prometheus.operator.serviceMonitor.namespace` | Namespace in which Prometheus is running. Defaults to the kube-prometheus install. | `monitoring` |
| `prometheus.operator.serviceMonitor.selector` | Defaults to the kube-prometheus install (CoreOS recommended), but should be set according to your Prometheus install | `{ prometheus: kube-prometheus }` |
| `zookeeper.enabled` | If True, installs Zookeeper Chart | `true` |
| `zookeeper.resources` | Zookeeper resource requests and limits | `{}` |
| `zookeeper.heap` | JVM heap size to allocate to Zookeeper | `1G` |
| `zookeeper.storage` | Zookeeper Persistent volume size | `2Gi` |
| `zookeeper.imagePullPolicy` | Zookeeper Container pull policy | `IfNotPresent` |
| `zookeeper.url` | URL of Zookeeper Cluster (unneeded if installing Zookeeper Chart) | `""` |
| `zookeeper.port` | Port of Zookeeper Cluster | `2181` |
| `zookeeper.affinity` | Defines affinities and anti-affinities for pods as defined in: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity preferences | `{}` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
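For example (the release name and overrides below are illustrative, not recommendations):
```bash
$ helm install --name my-kafka --set replicas=3,persistence.size=10Gi incubator/kafka
```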
Alternatively, a YAML file that specifies the values for the parameters can be provided:
```bash
$ helm install --name my-kafka -f values.yaml incubator/kafka
```
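As a sketch, such a `values.yaml` might override a handful of the parameters documented above (the values shown are illustrative):
```yaml
replicas: 3
kafkaHeapOptions: "-Xmx1G -Xms1G"
persistence:
  enabled: true
  size: 10Gi
configurationOverrides:
  "offsets.topic.replication.factor": 3
```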
### Connecting to Kafka from inside Kubernetes
You can connect to Kafka by running a simple pod in the K8s cluster with a configuration like this:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: testclient
namespace: kafka
spec:
containers:
- name: kafka
image: solsson/kafka:0.11.0.0
command:
- sh
- -c
- "exec tail -f /dev/null"
```
Once you have the testclient pod above running, you can list all kafka
topics with:
`kubectl -n kafka exec -ti testclient -- ./bin/kafka-topics.sh --zookeeper my-release-zookeeper:2181 --list`
Where `my-release` is the name of your helm release.
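From the same testclient pod you can go a step further. As a sketch (the topic name `test1` is illustrative, and the script paths assume the `solsson/kafka` image used above), create a topic and then consume from it:
```bash
kubectl -n kafka exec -ti testclient -- ./bin/kafka-topics.sh --zookeeper my-release-zookeeper:2181 --create --topic test1 --partitions 1 --replication-factor 1
kubectl -n kafka exec -ti testclient -- ./bin/kafka-console-consumer.sh --bootstrap-server my-release-kafka:9092 --topic test1 --from-beginning
```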
## Extensions
Kafka has a rich ecosystem, with lots of tools. This section is intended to compile all of those tools for which a corresponding Helm chart has already been created.
- [Schema-registry](https://github.com/kubernetes/charts/tree/master/incubator/schema-registry) - A Confluent project that provides a serving layer for your metadata. It provides a RESTful interface for storing and retrieving Avro schemas.
### Connecting to Kafka from outside Kubernetes
Review and optionally override the example external-access configuration in `values.yaml` to enable this feature.
Once configured, you should be able to reach Kafka via NodePorts, one per replica. In kops, where private
topology is enabled, this feature publishes an internal round-robin DNS record using the naming scheme
shown below. The external access feature of this chart was tested with kops on AWS using flannel networking.
If you wish to enable external access to Kafka running in kops, your security groups will likely need to
be adjusted to allow non-Kubernetes nodes (e.g. a bastion) to access the Kafka external listener port range.
```
{{ .Release.Name }}.{{ .Values.external.domain }}
```
Port numbers for external access used at the container and NodePort are unique to each container in the StatefulSet.
Using the default `external.firstListenerPort` with a `replicas` value of `3`, the following
container ports and NodePorts are opened for external access: `31090`, `31091`, `31092`. All of these ports
are reachable from any host to which NodePorts are exposed, because Kubernetes routes each NodePort from the
entry node to the pod/container listening on the same port (e.g. `31091`).
The `external.servicePort` at each external access service (one such service per pod) relays toward
a `containerPort` whose number matches its respective NodePort. The full port range is configured on every
Kafka pod in the StatefulSet, but any given pod listens on only one such port at a time, so setting the
range on every pod is a reasonably safe configuration.
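A minimal `values.yaml` sketch that enables this feature with the defaults documented above:
```yaml
external:
  enabled: true
  servicePort: 19092
  firstListenerPort: 31090
  domain: cluster.local
```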
## Known Limitations
* Topic creation is not automated
* Only supports storage options that have backends for persistent volume claims (tested mostly on AWS)
* There must not exist a service called `kafka` in the same namespace
[brokerconfigs]: https://kafka.apache.org/documentation/#brokerconfigs
## Prometheus Stats
### Prometheus vs Prometheus Operator
Standard Prometheus is the default monitoring option for this chart. This chart also supports the CoreOS Prometheus Operator,
which can provide additional functionality like automatically updating Prometheus and Alert Manager configuration. If you are
interested in installing the Prometheus Operator please see the [CoreOS repository](https://github.com/coreos/prometheus-operator/tree/master/helm) for more information or
read through the [CoreOS blog post introducing the Prometheus Operator](https://coreos.com/blog/the-prometheus-operator.html)
### JMX Exporter
The majority of Kafka statistics are provided via JMX and are exposed via the [Prometheus JMX Exporter](https://github.com/prometheus/jmx_exporter).
The JMX Exporter is a general-purpose Prometheus exporter intended for use with any Java application. Because of this, it produces a number of statistics which
may not be of interest. To help reduce these statistics to their relevant components, we have created a curated whitelist, `whitelistObjectNames`, for the JMX Exporter.
This whitelist may be modified or removed via the values configuration.
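As a sketch, a narrowed whitelist might look like this in `values.yaml` (the object-name patterns are illustrative; see `values.yaml` for the curated defaults):
```yaml
jmx:
  whitelistObjectNames:
    - "kafka.controller:*"
    - "kafka.server:*"
    - "java.lang:*"
```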
For compatibility with Prometheus metric naming, this chart performs transformations of raw JMX metrics. For example, broker names and topic names are incorporated
into the metric name instead of becoming a label. If you are curious to learn more about the default transformations of the chart metrics, please refer to the [configmap template](https://github.com/kubernetes/charts/blob/master/incubator/kafka/templates/jmx-configmap.yaml).
### Kafka Exporter
The [Kafka Exporter](https://github.com/danielqsj/kafka_exporter) is a complementary metrics exporter to the JMX Exporter. The Kafka Exporter provides additional statistics on Kafka Consumer Groups.
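Enabling either exporter is a matter of flipping the flags documented in the table above; a sketch:
```yaml
prometheus:
  jmx:
    enabled: true
    port: 5556
  kafka:
    enabled: true
    port: 9308
```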
# Kafka
Kafka is used for building real-time data pipelines and streaming apps. It is horizontally scalable, fault-tolerant, wicked fast, and runs in production in thousands of companies.
## Chart Details
This chart bootstraps a [Confluent](https://docs.confluent.io/4.0.1/) Kafka deployment. The chart has the following components:
- Apache Kafka
- Zookeeper
- Kafka-Schema
- Kafka-REST
- Kafka-Topics-UI (This project is licensed under the [BSL](http://www.landoop.com/bsl) license.)
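As a sketch, an install that enables the optional Topics UI might look like this (the release name and chart path are illustrative):
```bash
$ helm install --name my-kafka --set kafka-topics-ui.enabled=true,kafka-topics-ui.service.type=NodePort ./kafka
```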
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
apiVersion: v1
description: A Helm chart for Kubernetes
name: kafka-rest
version: 0.1.0
1. Get the application URL by running these commands:
{{- if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "kafka-rest.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get svc -w {{ template "kafka-rest.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "kafka-rest.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.externalPort }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "kafka-rest.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:{{ .Values.service.internalPort }}
{{- end }}
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "kafka-rest.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "kafka-rest.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "kafka-rest.zookeeper.connect" }}
{{- printf "%s-zookeeper:2181" .Release.Name }}
{{- end -}}
{{- define "kafka-rest.schema-registry.url" }}
{{- printf "http://%s-schema-registry:8081" .Release.Name }}
{{- end -}}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ template "kafka-rest.fullname" . }}
labels:
app: {{ template "kafka-rest.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
template:
metadata:
labels:
app: {{ template "kafka-rest.name" . }}
release: {{ .Release.Name }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: KAFKA_REST_ZOOKEEPER_CONNECT
value: {{ template "kafka-rest.zookeeper.connect" . }}
- name: KAFKA_REST_LISTENERS
value: "http://0.0.0.0:8082"
- name: KAFKA_REST_SCHEMA_REGISTRY_URL
value: {{ template "kafka-rest.schema-registry.url" . }}
- name: KAFKA_REST_HOST_NAME
valueFrom:
fieldRef:
fieldPath: status.podIP
ports:
- containerPort: {{ .Values.service.internalPort }}
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "kafka-rest.fullname" . }}
labels:
app: {{ template "kafka-rest.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.externalPort }}
targetPort: {{ .Values.service.internalPort }}
protocol: TCP
name: {{ .Values.service.name }}
selector:
app: {{ template "kafka-rest.name" . }}
release: {{ .Release.Name }}
# Default values for kafka-rest.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: confluentinc/cp-kafka-rest
tag: 4.0.0
pullPolicy: IfNotPresent
service:
name: rest
type: ClusterIP
externalPort: 8082
internalPort: 8082
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
apiVersion: v1
description: A Helm chart for Kafka Topics UI
name: kafka-topics-ui
version: 0.1.0
sources:
- https://github.com/Landoop/kafka-topics-ui
Business Source License 1.0
Licensor: Landoop Ltd
Software: Landoop Apache Kafka tools. The Software is © 2016 Landoop
Use Limitation: Usage of the software is free when your application uses the Software with a total of less than five Kafka server instances for production purposes.
Change Date: 2019-01-01
Change License: Apache-2.0 license.
_____________________________________________
You are granted limited license to the Software under this Business Source License. Please read this Business Source License carefully, particularly the Use Limitation set forth above.
Subject to the Use Limitation, Licensor grants you a non-exclusive, worldwide (subject to applicable laws) license to copy, modify, display, use, create derivative works, and redistribute the Software until the Change Date. If your use of the Software exceeds, or will exceed, the foregoing limitations you MUST obtain alternative licensing terms for the Software directly from Licensor. For the avoidance of doubt, prior to the Change Date, there is no Use Limitations for non-production purposes.
After the Change Date, this Business Source License will convert to the Change License and your use of the Software, including modified versions of the Software, will be governed by such Change License.
All copies of original and modified Software, and derivative works of the Software, are subject to this Business Source License. This Business Source License applies separately for each version of the Software and the Change Date will vary for each version of the Software released by Licensor.
You must conspicuously display this Business Source License on each original or modified copy of the Software. If you receive the Software in original or modified form from a third party, the restrictions set forth in this Business Source License apply to your use of such Software.
Any use of the Software in violation of this Business Source License will automatically terminate your rights under this Business Source License for the current and all future versions of the Software.
You may not use the marks or logos of Licensor or its affiliates for commercial purposes without prior written consent from Licensor.
TO THE EXTENT PERMITTED BY APPLICABLE LAW, THE SOFTWARE AND ALL SERVICES PROVIDED BY LICENSOR OR ITS AFFILIATES UNDER OR IN CONNECTION WITH THIS BUSINESS SOURCE LICENSE ARE PROVIDED ON AN “AS IS” AND “AS AVAILABLE” BASIS. YOU EXPRESSLY WAIVE ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING (WITHOUT LIMITATION) WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, TITLE, SYSTEM INTEGRATION, AND ACCURACY OF INFORMATIONAL CONTENT.
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range .Values.ingress.hosts }}
http://{{ . }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "kafka-topics-ui.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get svc -w {{ template "kafka-topics-ui.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "kafka-topics-ui.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.externalPort }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "kafka-topics-ui.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:{{ .Values.service.internalPort }}
{{- end }}
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "kafka-topics-ui.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "kafka-topics-ui.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "kafka-rest-proxy.name" -}}
{{- $port := .Values.kafkaRest.port | toString -}}
{{- printf "http://%s-%s:%s" .Release.Name .Values.kafkaRest.name $port | trunc 63 | trimSuffix "-" -}}
{{- end -}}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ template "kafka-topics-ui.fullname" . }}
labels:
app: {{ template "kafka-topics-ui.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
template:
metadata:
labels:
app: {{ template "kafka-topics-ui.name" . }}
release: {{ .Release.Name }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: {{ .Values.service.internalPort }}
env:
- name: KAFKA_REST_PROXY_URL
value: {{ template "kafka-rest-proxy.name" . }}
- name: PROXY
value: "true"
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.ingress.enabled -}}
{{- $serviceName := include "kafka-topics-ui.fullname" . -}}
{{- $servicePort := .Values.service.externalPort -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ template "kafka-topics-ui.fullname" . }}
labels:
app: {{ template "kafka-topics-ui.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
annotations:
{{- range $key, $value := .Values.ingress.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
rules:
{{- range $host := .Values.ingress.hosts }}
- host: {{ $host }}
http:
paths:
- path:
backend:
serviceName: {{ $serviceName }}
servicePort: {{ $servicePort }}
{{- end -}}
{{- if .Values.ingress.tls }}
tls:
{{ toYaml .Values.ingress.tls | indent 4 }}
{{- end -}}
{{- end -}}
apiVersion: v1
kind: Service
metadata:
name: {{ template "kafka-topics-ui.fullname" . }}
labels:
app: {{ template "kafka-topics-ui.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.externalPort }}
targetPort: {{ .Values.service.internalPort }}
protocol: TCP
name: {{ .Values.service.name }}
{{- if .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
selector:
app: {{ template "kafka-topics-ui.name" . }}
release: {{ .Release.Name }}
# Default values for kafka-topics-ui.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: landoop/kafka-topics-ui
tag: latest
pullPolicy: IfNotPresent
service:
name: web
type: ClusterIP
externalPort: 8000
internalPort: 8000
ingress:
enabled: false
# Used to create an Ingress record.
hosts:
- xip.io
annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
tls:
# Secrets must be manually created in the namespace.
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
kafkaRest:
name: kafka-rest
port: 8082
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
name: schema-registry
home: https://docs.confluent.io/current/schema-registry/docs/index.html
version: 0.4.3
appVersion: 4.0.1
keywords:
- confluent
- kafka
- schema-registry
description: Schema Registry provides a serving layer for your metadata. It provides a RESTful interface for storing and retrieving Avro schemas. It stores a versioned history of all schemas, provides multiple compatibility settings and allows evolution of schemas according to the configured compatibility setting. It provides serializers that plug into Kafka clients that handle schema storage and retrieval for Kafka messages that are sent in the Avro format.
icon: https://www.confluent.io/wp-content/themes/confluent/assets/images/confluent-logo-300.png
apiVersion: v1
sources:
- https://github.com/confluentinc/schema-registry
- https://github.com/confluentinc/cp-docker-images
maintainers:
- name: benjigoldberg
email: ben@spothero.com
approvers:
- benjigoldberg
reviewers:
- benjigoldberg
# Schema-Registry Helm Chart
This helm chart creates a [Confluent Schema-Registry server](https://github.com/confluentinc/schema-registry).
## Prerequisites
* Kubernetes 1.6
* A running Kafka Installation
* A running Zookeeper Installation
## Chart Components
This chart will do the following:
* Create a Schema-Registry deployment
* Create a Service configured to connect to the available Schema-Registry pods on the configured
client port.
Note: Distributed Schema Registry Master Election is done via Kafka Coordinator Master Election
https://docs.confluent.io/current/schema-registry/docs/design.html#kafka-coordinator-master-election
## Installing the Chart
You can install the chart with the release name `mysr` as below.
```console
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
$ helm install --name mysr incubator/schema-registry
```
If you do not specify a name, helm will select a name for you.
### Installed Components
You can use `kubectl get` to view all of the installed components.
```console
$ kubectl get all -l app=schema-registry
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/mysr-schema-registry 1 1 1 1 23m
NAME DESIRED CURRENT READY AGE
rs/mysr-schema-registry-bcb4c994c 1 1 1 23m
NAME READY STATUS RESTARTS AGE
po/mysr-schema-registry-bcb4c994c-qjqbj 1/1 Running 1 23m
```
1. `deploy/mysr-schema-registry` is the Deployment created by this chart.
1. `rs/mysr-schema-registry-bcb4c994c` is the ReplicaSet created by this Chart's Deployment.
1. `po/mysr-schema-registry-bcb4c994c-qjqbj` is the Pod created by the ReplicaSet under this Chart's Deployment.
## Configuration
You can specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```console
$ helm install --name my-release -f values.yaml incubator/schema-registry
```
> **Tip**: You can use the default [values.yaml](values.yaml)
### Parameters
The following table lists the configurable parameters of the SchemaRegistry chart and their default values.
| Parameter | Description | Default |
| --------- | ----------- | ------- |
| `image` | The `SchemaRegistry` image repository | `confluentinc/cp-schema-registry` |
| `imageTag` | The `SchemaRegistry` image tag | `4.0.1` |
| `imagePullPolicy` | Image Pull Policy | `IfNotPresent` |
| `replicaCount` | The number of `SchemaRegistry` Pods in the Deployment | `1` |
| `configurationOverrides` | `SchemaRegistry` [configuration setting](https://github.com/confluentinc/schema-registry/blob/master/docs/config.rst#configuration-options) overrides in the dictionary format `setting.name: value` | `{}` |
| `schemaRegistryOpts` | Additional Java arguments to pass to the Schema Registry (also exported as `KAFKA_OPTS` for pre-flight checks). | ` ` |
| `sasl.configPath` | Where to store the SASL configuration | `/etc/kafka-config` |
| `sasl.scram.enabled` | Whether SASL/SCRAM is enabled | `false` |
| `sasl.scram.init.image` | Which image to use for initializing SASL/SCRAM | `confluentinc/cp-schema-registry` |
| `sasl.scram.init.imageTag` | Which version/tag to use for SASL/SCRAM init | `4.0.0` |
| `sasl.scram.init.imagePullPolicy` | The SASL/SCRAM init pull policy | `IfNotPresent` |
| `sasl.scram.clientUser` | The SASL/SCRAM user used to authenticate to Kafka | `kafka-client` |
| `sasl.scram.clientPassword` | The SASL/SCRAM password used to authenticate to Kafka | `kafka-password` |
| `sasl.scram.zookeeperClientUser` | The SASL/SCRAM user used to authenticate to ZooKeeper | `zookeeper-client` |
| `sasl.scram.zookeeperClientPassword` | The SASL/SCRAM password used to authenticate to ZooKeeper | `zookeeper-password` |
| `resources` | CPU/Memory resource requests/limits | `{}` |
| `servicePort` | The port on which the SchemaRegistry server will be exposed. | `8081` |
| `kafkaStore.overrideGroupId` | Group ID defaults to the Release Name, so each release is its own Schema Registry worker group; it can be overridden | `{{- .Release.Name -}}` |
| `kafkaStore.overrideBootstrapServers` | Defaults to the Kafka servers in the same release; it can be overridden in case Kafka was deployed in a separate release | `{{- printf "PLAINTEXT://%s-kafka-headless:9092" .Release.Name }}` |
| `kafka.enabled` | If `true`, install Kafka/Zookeeper alongside the `SchemaRegistry`. This is intended for testing and argument-less helm installs of this chart only and should not be used in Production. | `true` |
| `kafka.replicas` | The number of Kafka Pods to install as part of the `StatefulSet` if `kafka.enabled` is `true` | `1` |
| `kafka.zookeeper.servers` | The number of Zookeeper Pods to install as part of the `StatefulSet` if `kafka.enabled` is `true` | `1` |
| `ingress.enabled` | Enable Ingress? | `false` |
| `ingress.hostname` | Set hostname for ingress | `""` |
| `ingress.annotations` | Set annotations for ingress | `{}` |
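As a sketch, a `values.yaml` that overrides a setting and enables SASL/SCRAM might look like this (the usernames are illustrative; passwords are omitted here so the chart's Secret template generates random ones):
```yaml
configurationOverrides:
  "kafkastore.topic.replication.factor": 1
sasl:
  scram:
    enabled: true
    clientUser: kafka-client
    zookeeperClientUser: zookeeper-client
```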
Confluent Schema-Registry is now installed on your Kubernetes cluster. For more information on
Schema-Registry, please navigate to https://github.com/confluentinc/schema-registry
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "schema-registry.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:{{ .Values.servicePort }}
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "schema-registry.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "schema-registry.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Form the Kafka URL. If Kafka is installed as part of this chart, use k8s service discovery,
else use user-provided URL
*/}}
{{- define "schema-registry.kafkaStore.bootstrapServers" }}
{{- if .Values.kafkaStore.overrideBootstrapServers -}}
{{- .Values.kafkaStore.overrideBootstrapServers }}
{{- else -}}
{{- printf "PLAINTEXT://%s-kafka-headless:9092" .Release.Name }}
{{- end -}}
{{- end -}}
{{/*
Default GroupId to Release Name but allow it to be overridden
*/}}
{{- define "schema-registry.kafkaStore.groupId" -}}
{{- if .Values.kafkaStore.overrideGroupId -}}
{{- .Values.kafkaStore.overrideGroupId -}}
{{- else -}}
{{- .Release.Name -}}
{{- end -}}
{{- end -}}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ template "schema-registry.fullname" . }}
labels:
app: {{ template "schema-registry.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
template:
metadata:
labels:
app: {{ template "schema-registry.name" . }}
release: {{ .Release.Name }}
spec:
{{- if .Values.sasl.scram.enabled }}
initContainers:
## ref: https://github.com/Yolean/kubernetes-kafka/blob/master/kafka/50kafka.yml
- name: init-sasl
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
command:
- sh
- -euc
- |
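# Render the JAAS config by substituting the SCRAM and ZooKeeper credentials (from env vars) into the mounted template.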
sed "s/\$SCRAM_CLIENT_USER/${SCRAM_CLIENT_USER}/g; s/\$SCRAM_CLIENT_PASSWORD/${SCRAM_CLIENT_PASSWORD}/g; s/\$ZOOKEEPER_CLIENT_USER/${ZOOKEEPER_CLIENT_USER}/g; s/\$ZOOKEEPER_CLIENT_PASSWORD/${ZOOKEEPER_CLIENT_PASSWORD}/g;" /tmp/kafka-template/kafka_client_jaas.conf > /etc/kafka-config/kafka_client_jaas.conf
env:
- name: ZOOKEEPER_CLIENT_USER
value: {{ .Values.sasl.scram.zookeeperClientUser }}
- name: ZOOKEEPER_CLIENT_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "schema-registry.fullname" . }}
key: zookeeper-client-password
- name: SCRAM_CLIENT_USER
value: {{ .Values.sasl.scram.clientUser }}
- name: SCRAM_CLIENT_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "schema-registry.fullname" . }}
key: client-password
volumeMounts:
- name: jaastemplate
mountPath: "/tmp/kafka-template"
- name: jaasconfig
mountPath: {{ .Values.sasl.configPath | quote }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: {{ .Values.imagePullPolicy }}
ports:
- containerPort: {{ .Values.servicePort }}
livenessProbe:
httpGet:
path: /
port: {{ .Values.servicePort }}
initialDelaySeconds: 10
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /
port: {{ .Values.servicePort }}
initialDelaySeconds: 10
timeoutSeconds: 5
env:
- name: SCHEMA_REGISTRY_HOST_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
value: {{ template "schema-registry.kafkaStore.bootstrapServers" . }}
- name: SCHEMA_REGISTRY_KAFKASTORE_GROUP_ID
value: {{ template "schema-registry.kafkaStore.groupId" . }}
- name: SCHEMA_REGISTRY_MASTER_ELIGIBILITY
value: "true"
{{ range $configName, $configValue := .Values.configurationOverrides }}
- name: SCHEMA_REGISTRY_{{ $configName | replace "." "_" | upper }}
value: {{ $configValue | quote }}
{{ end }}
{{- if .Values.schemaRegistryOpts }}
# The pre-flight checks use KAFKA_OPTS instead of SCHEMA_REGISTRY_OPTS.
- name: KAFKA_OPTS
value: "{{ .Values.schemaRegistryOpts }}"
- name: SCHEMA_REGISTRY_OPTS
value: "{{ .Values.schemaRegistryOpts }}"
{{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
volumeMounts:
{{- if .Values.sasl.scram.enabled }}
- name: jaasconfig
mountPath: {{ .Values.sasl.configPath | quote }}
{{- end }}
volumes:
{{- if .Values.sasl.scram.enabled }}
- name: jaasconfig
emptyDir: { medium: "Memory" }
- name: jaastemplate
configMap:
name: {{ template "schema-registry.fullname" . }}
{{- end }}
{{- if .Values.ingress.enabled }}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ template "schema-registry.fullname" . }}
labels:
app: {{ template "schema-registry.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- if .Values.ingress.labels }}
{{ toYaml .Values.ingress.labels | indent 4 }}
{{- end }}
{{- if .Values.ingress.annotations }}
annotations:
{{ toYaml .Values.ingress.annotations | indent 4 }}
{{- end }}
spec:
rules:
- host: {{ .Values.ingress.hostname }}
http:
paths:
- path: /
backend:
serviceName: {{ template "schema-registry.fullname" . }}
servicePort: {{ .Values.servicePort }}
{{ end -}}
{{ if .Values.sasl.scram.enabled -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "schema-registry.fullname" . }}
labels:
app: {{ include "schema-registry.name" . | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
data:
kafka_client_jaas.conf: |-
// Info for Schema Registry to connect to Zookeeper
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="$ZOOKEEPER_CLIENT_USER"
password="$ZOOKEEPER_CLIENT_PASSWORD";
};
// Info for third-party clients to connect to Kafka
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="$SCRAM_CLIENT_USER"
password="$SCRAM_CLIENT_PASSWORD";
};
{{- end }}
{{- if and .Values.sasl.scram.enabled (not (hasKey .Values.sasl.scram "useExistingSecret")) -}}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "schema-registry.fullname" . }}-sasl-scram-secret
labels:
app: {{ template "schema-registry.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
type: Opaque
data:
scram-client-password: {{ .Values.sasl.scram.clientPassword | b64enc }}
zookeeper-client-password: {{ .Values.sasl.scram.zookeeperClientPassword | b64enc }}
{{- end -}}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "schema-registry.fullname" . }}
type: Opaque
data:
client-user: {{ .Values.sasl.scram.clientUser | b64enc | quote }}
{{- if .Values.sasl.scram.clientPassword }}
client-password: {{ .Values.sasl.scram.clientPassword | b64enc | quote }}
{{- else }}
client-password: {{ randAlphaNum 10 | b64enc | quote }}
{{- end }}
zookeeper-client-user: {{ .Values.sasl.scram.zookeeperClientUser | b64enc | quote }}
{{- if .Values.sasl.scram.zookeeperClientPassword }}
zookeeper-client-password: {{ .Values.sasl.scram.zookeeperClientPassword | b64enc | quote }}
{{- else }}
zookeeper-client-password: {{ randAlphaNum 10 | b64enc | quote }}
{{- end }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "schema-registry.fullname" . }}
labels:
app: {{ template "schema-registry.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
spec:
ports:
- name: schema-registry
port: {{ .Values.servicePort }}
selector:
app: {{ template "schema-registry.name" . }}
release: {{ .Release.Name | quote}}
# Default values for Confluent Schema-Registry
# This is a YAML-formatted file.
# Declare name/value pairs to be passed into your templates.
# name: value
## schema-registry repository
image: "confluentinc/cp-schema-registry"
## The container tag to use
imageTag: 4.0.1
## Specify a imagePullPolicy
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
imagePullPolicy: "IfNotPresent"
## Number of Schema Registry Pods to Deploy
replicaCount: 1
## Schema Registry Settings Overrides
## Configuration Options can be found here: https://docs.confluent.io/current/schema-registry/docs/config.html
configurationOverrides:
kafkastore.topic.replication.factor: 3
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
## Confluent has production deployment guidelines here:
## ref: https://github.com/confluentinc/schema-registry/blob/master/docs/deployment.rst
##
resources: {}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
## The port on which the SchemaRegistry will be available and serving requests
servicePort: 8081
## If `kafka.enabled` is `false`, `kafkaStore.overrideBootstrapServers` must be provided for Master Election.
## You can list a load-balanced service endpoint, or a list of all brokers (which is hard in K8s), e.g.:
## overrideBootstrapServers: "PLAINTEXT://dozing-prawn-kafka-headless:9092"
## The chart uses Kafka Coordinator Master Election: https://docs.confluent.io/current/schema-registry/docs/design.html#kafka-coordinator-master-election
kafkaStore:
overrideBootstrapServers: ""
# By default uses the Release Name, but can be overridden, which means each release is its own group of
# Schema Registry workers. You can have multiple groups talking to the same Kafka Cluster.
overrideGroupId: ""
## Additional Java arguments to pass to the Schema Registry.
# schemaRegistryOpts: -Dfoo=bar
# Options for connecting to SASL kafka brokers
sasl:
configPath: "/etc/kafka-config"
scram:
enabled: false
init:
image: "confluentinc/cp-schema-registry"
imageTag: "4.0.0"
imagePullPolicy: "IfNotPresent"
clientUser: "kafka-client"
zookeeperClientUser: "zookeeper-client"
# Passwords can be either provided here or pulled from an existing k8s secret.
# If you want to specify the passwords here:
# clientPassword: "client-password"
# zookeeperClientPassword: "zookeeper-client-password"
ingress:
enabled: false
annotations: {}
hostname: ""
labels: {}
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
apiVersion: v1
description: A Helm chart for Kubernetes
name: zookeeper
version: 0.1.0
{{- $root := . -}}
********************************************************************************
********************************************************************************
Your ZooKeeper v{{ .Chart.Version }} release is named '{{ .Release.Name }}'.
To wait for the ZooKeeper ensemble to become ready, run the following:
$ kubectl -n {{ .Release.Namespace }} get po -l release={{ .Release.Name }} -w
Once ready, to get a ZooKeeper shell on any pod, run the following:
{{- range $i, $e := until (.Values.replicaCount| int) }}
$ kubectl -n {{ $root.Release.Namespace }} exec -it {{ $root.Release.Name }}-{{ $root.Chart.Name }}-{{ $i }} zkCli.sh
{{- end }}
To get the config, run the following:
$ kubectl -n {{ .Release.Namespace }} get cm {{ .Release.Name }} -o yaml
********************************************************************************
********************************************************************************
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "zookeeper.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "zookeeper.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- $root := . -}}
{{- $ns := .Release.Namespace -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "zookeeper.fullname" . }}
labels:
app: {{ template "zookeeper.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
tick: "{{ .Values.env.ZK_TICK_TIME }}"
client_port: "{{ .Values.env.ZK_CLIENT_PORT }}"
purge_interval: "{{ .Values.env.PURGE_INTERVAL }}"
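# Builds the ensemble list, one entry per replica, semicolon-separated: <fullname>-<i>.<fullname>-headless.<namespace>.svc.cluster.local:2888:3888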
servers: "{{ range $i, $e := until (.Values.replicaCount | int) }}{{ if ne $i 0 }};{{ end }}{{ template "zookeeper.fullname" $ }}-{{ $i }}.{{ template "zookeeper.fullname" $ }}-headless.{{ $ns }}.svc.cluster.local:2888:3888{{ end }}"
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: {{ template "zookeeper.fullname" . }}
labels:
app: {{ template "zookeeper.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
selector:
matchLabels:
app: {{ template "zookeeper.name" . }}
release: {{ .Release.Name }}
minAvailable: {{ (div .Values.replicaCount 2) | add1 }} # Ensures a quorum of ZooKeeper pods remains available during voluntary disruptions.
apiVersion: v1
kind: Service
metadata:
name: {{ template "zookeeper.fullname" . }}
labels:
app: {{ template "zookeeper.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
protocol: TCP
name: {{ .Values.service.name }}
selector:
app: {{ template "zookeeper.name" . }}
release: {{ .Release.Name }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ template "zookeeper.fullname" . }}-headless
labels:
app: {{ template "zookeeper.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
clusterIP: None
ports:
- port: 2181
name: clients
- port: 2888
name: server
- port: 3888
name: leader-election
selector:
app: {{ template "zookeeper.name" . }}
release: {{ .Release.Name }}
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: {{ template "zookeeper.fullname" . }}
labels:
app: {{ template "zookeeper.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
podManagementPolicy: Parallel
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ template "zookeeper.name" . }}
release: {{ .Release.Name }}
serviceName: {{ template "zookeeper.fullname" . }}-headless
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: {{ template "zookeeper.name" . }}
release: {{ .Release.Name }}
spec:
securityContext:
{{ toYaml .Values.securityContext | indent 8 }}
containers:
- name: {{ template "zookeeper.name" . }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
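# Derive the 1-based ZooKeeper server id from the pod's StatefulSet ordinal (the numeric suffix of its hostname).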
command:
- bash
- -c
- ZOOKEEPER_SERVER_ID=$((${HOSTNAME##*-}+1)) && /etc/confluent/docker/run
env:
- name: ZOOKEEPER_TICK_TIME
valueFrom:
configMapKeyRef:
key: tick
name: {{ template "zookeeper.fullname" . }}
- name: ZOOKEEPER_SYNC_LIMIT
valueFrom:
configMapKeyRef:
key: tick
name: {{ template "zookeeper.fullname" . }}
- name: ZOOKEEPER_SERVERS
valueFrom:
configMapKeyRef:
key: servers
name: {{ template "zookeeper.fullname" . }}
- name: ZOOKEEPER_CLIENT_PORT
valueFrom:
configMapKeyRef:
key: client_port
name: {{ template "zookeeper.fullname" . }}
- name: ZOOKEEPER_AUTOPURGE_PURGE_INTERVAL
valueFrom:
configMapKeyRef:
key: purge_interval
name: {{ template "zookeeper.fullname" . }}
- name: ZOOKEEPER_SERVER_ID
valueFrom:
fieldRef:
fieldPath: metadata.name
ports:
- containerPort: 2181
name: client
protocol: TCP
- containerPort: 2888
name: server
protocol: TCP
- containerPort: 3888
name: leader-election
protocol: TCP
volumeMounts:
- mountPath: /var/lib/zookeeper
name: data
resources:
{{ toYaml .Values.resources | indent 10 }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.affinityEnabled }}
affinity:
podAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 50
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- {{ template "zookeeper.name" . }}
- key: release
operator: In
values:
- {{ .Release.Name }}
topologyKey: "kubernetes.io/hostname"
{{- end }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
{{- if not .Values.persistence.enabled }}
volumes:
- name: data
emptyDir: {}
{{- else }}
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- {{ .Values.persistence.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- end }}
# Default values for zookeeper.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 3
terminationGracePeriodSeconds: 120 # Duration in seconds a ZooKeeper pod needs to terminate gracefully.
securityContext:
fsGroup: 1000
runAsUser: 1000
image:
repository: confluentinc/cp-zookeeper
tag: 4.1.1
pullPolicy: IfNotPresent
service:
name: client
type: ClusterIP
externalPort: 2181
internalPort: 2181
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
persistence:
enabled: true
## zookeeper data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
accessMode: ReadWriteOnce
size: 5Gi
env:
## The port on which the server will accept client requests.
ZK_CLIENT_PORT: 2181
## The number of wall-clock milliseconds that corresponds to a tick for the
## ensemble's internal time.
ZK_TICK_TIME: 2000
PURGE_INTERVAL: 24
affinityEnabled: true
questions:
- variable: defaultImage
default: "true"
description: "Use default Docker image"
label: Use Default Image
type: boolean
show_subquestion_if: false
group: "Container Images"
subquestions:
- variable: image
default: "confluentinc/cp-kafka"
description: "Kafka image name"
type: string
label: Kafka Image Name
- variable: imageTag
default: "4.0.1-1"
description: "Kafka image tag"
type: string
label: Kafka Image Tag
- variable: zookeeper.image.repository
default: "confluentinc/cp-zookeeper"
description: "Zookeeper image name"
type: string
label: Zookeeper Image Name
- variable: zookeeper.image.tag
default: "4.1.1"
description: "Zookeeper image tag"
type: string
label: Zookeeper Image Tag
- variable: schema-registry.image
default: "confluentinc/cp-schema-registry"
description: "Kafka schema registry image name"
type: string
label: Kafka Schema-Registry Image Name
- variable: schema-registry.imageTag
default: "4.0.1"
description: "Kafka schema registry image tag"
type: string
label: Kafka Schema-Registry Image Tag
- variable: kafka-rest.image.repository
default: "confluentinc/cp-kafka-rest"
description: "Kafka-REST image name"
type: string
label: Kafka-REST Image Name
- variable: kafka-rest.image.tag
default: "4.0.0"
description: "Kafka rest image tag"
type: string
label: Kafka-REST Image Tag
- variable: kafka-topics-ui.image.repository
default: "landoop/kafka-topics-ui"
description: "kafka-topics-ui image name"
type: string
label: Kafka-Topics-UI Image Name
- variable: kafka-topics-ui.image.tag
default: "latest"
description: "kafka-topics-ui image tag"
type: string
label: Kafka-Topics-UI Image Tag
# Kafka Configurations
- variable: replicas
default: 3
description: "Replicas of Kafka Brokers"
type: int
min: 3
max: 30
required: true
label: Kafka Brokers
group: "Kafka Settings"
- variable: persistence.enabled
default: false
description: "Enable persistent volume for Kafka"
type: boolean
required: true
label: Kafka Persistent Volume Enabled
show_subquestion_if: true
group: "Kafka Settings"
subquestions:
- variable: persistence.size
default: "20Gi"
description: "Kafka Persistent Volume Size"
type: string
label: Kafka Volume Size
- variable: persistence.storageClass
default: ""
description: "If undefined or null, uses the default StorageClass. Default to null"
type: storageclass
label: Default StorageClass for Kafka
# Zookeeper Configurations
- variable: zookeeper.persistence.enabled
default: false
description: "Enable persistent volume for Zookeeper"
type: boolean
required: true
label: Zookeeper Persistent Volume Enabled
show_subquestion_if: true
group: "Zookeeper Settings"
subquestions:
- variable: zookeeper.persistence.size
default: "20Gi"
description: "Zookeeper Persistent Volume Size"
type: string
label: Zookeeper Volume Size
- variable: zookeeper.persistence.storageClass
default: ""
description: "If undefined or null, uses the default StorageClass. Default to null"
type: storageclass
label: Default StorageClass for Zookeeper
# kafka-Topics-UI Configurations
- variable: kafka-topics-ui.enabled
default: true
description: "Enable kafka topics ui dashboard"
type: boolean
label: Enable Kafka Topics UI Dashboard
group: "Kafka Topics UI"
- variable: kafka-topics-ui.ingress.enabled
default: true
description: "Expose kafka topics UI using Layer 7 Load Balancer - ingress"
type: boolean
label: Expose Kafka Topics UI using Layer 7 Load Balancer
show_if: "kafka-topics-ui.enabled=true"
show_subquestion_if: true
group: "Kafka Topics UI"
subquestions:
- variable: kafka-topics-ui.ingress.host
default: "xip.io"
description: "layer 7 Load Balancer hostname"
type: hostname
show_if: "kafka-topics-ui.enabled=true&&kafka-topics-ui.ingress.enabled=true"
required: true
label: Layer 7 Load Balancer Hostname
- variable: kafka-topics-ui.service.type
default: "NodePort"
description: "Kafka topics ui Service type"
type: enum
group: "Kafka Topics UI"
options:
- "ClusterIP"
- "NodePort"
required: true
label: Service Type of Kafka Topics UI
show_subquestion_if: "NodePort"
subquestions:
- variable: kafka-topics-ui.service.nodePort
default: ""
description: "NodePort port number(to set explicitly, choose port between 30000-32767)"
type: int
min: 30000
max: 32767
label: Service NodePort number
dependencies:
- name: zookeeper
repository: https://kubernetes-charts-incubator.storage.googleapis.com/
version: 0.5.0
digest: sha256:fdd5e2554c3bc2ab4d65600e6509dbc95356da42aa78efc4c9fb8e70a164b1c0
generated: 2018-02-28T10:42:32.184171-08:00
dependencies:
- name: zookeeper
version: 0.5.0
repository: https://kubernetes-charts-incubator.storage.googleapis.com/
condition: zookeeper.enabled
- name: kafka-topics-ui
version: 0.1.0
condition: kafka-topics-ui.enabled
- name: schema-registry
version: 0.4.3
condition: schema-registry.enabled
- name: kafka-rest
version: 0.1.0
condition: kafka-rest.enabled
### Connecting to Kafka from inside Kubernetes
You can connect to Kafka by running a simple pod in the K8s cluster with a configuration like this:
apiVersion: v1
kind: Pod
metadata:
name: testclient
namespace: {{ .Release.Namespace }}
spec:
containers:
- name: kafka
image: {{ .Values.image }}:{{ .Values.imageTag }}
command:
- sh
- -c
- "exec tail -f /dev/null"
Once you have the testclient pod above running, you can list all Kafka
topics with:
kubectl -n {{ .Release.Namespace }} exec testclient -- /usr/bin/kafka-topics --zookeeper {{ .Release.Name }}-zookeeper:2181 --list
To create a new topic:
kubectl -n {{ .Release.Namespace }} exec testclient -- /usr/bin/kafka-topics --zookeeper {{ .Release.Name }}-zookeeper:2181 --topic test1 --create --partitions 1 --replication-factor 1
To listen for messages on a topic:
kubectl -n {{ .Release.Namespace }} exec -ti testclient -- /usr/bin/kafka-console-consumer --bootstrap-server {{ .Release.Name }}-kafka:9092 --topic test1 --from-beginning
To stop the listener session above press: Ctrl+C
To start an interactive message producer session:
kubectl -n {{ .Release.Namespace }} exec -ti testclient -- /usr/bin/kafka-console-producer --broker-list {{ .Release.Name }}-kafka-headless:9092 --topic test1
To create a message in the above session, simply type the message and press "enter"
To end the producer session press: Ctrl+C
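As a quick non-interactive check (a sketch assuming the testclient pod and the test1 topic created above), you can also pipe a single message into the console producer:
kubectl -n {{ .Release.Namespace }} exec testclient -- sh -c 'echo hello | /usr/bin/kafka-console-producer --broker-list {{ .Release.Name }}-kafka-headless:9092 --topic test1'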
{{ if .Values.external.enabled }}
### Connecting to Kafka from outside Kubernetes
You have enabled the external access feature of this chart.
**WARNING:** By default this feature allows Kafka clients outside Kubernetes to
connect to Kafka via NodePort(s) in `PLAINTEXT`.
Please see this chart's README.md for more details and guidance.
If you wish to connect to Kafka from outside please configure your external Kafka
clients to point at the following brokers. Please allow a few minutes for all
associated resources to become healthy.
{{ $fullName := include "kafka.fullname" . }}
{{- $replicas := .Values.replicas | int }}
{{- $servicePort := .Values.external.servicePort }}
{{- $externalFqdn := printf "%s.%s" .Release.Name .Values.external.domain }}
{{- $root := . }}
{{- range $i, $e := until $replicas }}
{{- $externalListenerPort := add $root.Values.external.firstListenerPort $i }}
{{ printf "%s:%d" $externalFqdn $externalListenerPort | indent 2 }}
{{- end }}
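A minimal sketch of verifying external access from a machine outside the cluster (assuming a Kafka console client is installed locally and the test1 topic from the section above exists):
kafka-console-consumer --bootstrap-server <one-of-the-brokers-listed-above> --topic test1 --from-beginning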
{{- end }}
{{ if .Values.prometheus.jmx.enabled }}
To view the JMX configuration (pull requests to improve the defaults are encouraged):
{{ if .Values.jmx.configMap.overrideName }}
kubectl -n {{ .Release.Namespace }} describe configmap {{ .Values.jmx.configMap.overrideName }}
{{ else }}
kubectl -n {{ .Release.Namespace }} describe configmap {{ include "kafka.fullname" . }}-metrics
{{- end }}
{{- end }}
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "kafka.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "kafka.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
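{{/* For example, a release named "myrelease" (hypothetical) renders "myrelease-kafka" here. */}}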
{{/*
Create a default fully qualified zookeeper name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "kafka.zookeeper.fullname" -}}
{{- $name := default "zookeeper" .Values.zookeeper.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Form the ZooKeeper connection URL. ZooKeeper is installed as a dependency of this chart, so the URL is
built from the bundled chart's service name via k8s service discovery; to point Kafka at an external
ensemble, set the `zookeeper.connect` configuration override instead.
*/}}
{{- define "zookeeper.url" }}
{{- $port := .Values.zookeeper.service.port | toString }}
{{- printf "%s:%s" (include "kafka.zookeeper.fullname" .) $port }}
{{- end -}}
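{{/* For example, with the hypothetical release "myrelease" and the default client port, this renders "myrelease-zookeeper:2181". */}}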
{{- if and .Values.prometheus.jmx.enabled .Values.jmx.configMap.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "kafka.fullname" . }}-metrics
labels:
app: {{ include "kafka.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
jmx-kafka-prometheus.yml: |+
{{- if .Values.jmx.configMap.overrideConfig }}
{{ toYaml .Values.jmx.configMap.overrideConfig | indent 4 }}
{{- else }}
jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:{{ .Values.jmx.port }}/jmxrmi
lowercaseOutputName: true
lowercaseOutputLabelNames: true
ssl: false
{{ if .Values.jmx.whitelistObjectNames }}
whitelistObjectNames: ["{{ join "\",\"" .Values.jmx.whitelistObjectNames }}"]
{{ end }}
rules:
- pattern: kafka.controller<type=(ControllerChannelManager), name=(QueueSize), broker-id=(\d+)><>(Value)
name: kafka_controller_$1_$2_$4
labels:
broker_id: "$3"
- pattern: kafka.controller<type=(ControllerChannelManager), name=(TotalQueueSize)><>(Value)
name: kafka_controller_$1_$2_$3
- pattern: kafka.controller<type=(KafkaController), name=(.+)><>(Value)
name: kafka_controller_$1_$2_$3
- pattern: kafka.controller<type=(ControllerStats), name=(.+)><>(Count)
name: kafka_controller_$1_$2_$3
- pattern: kafka.server<type=(ReplicaFetcherManager), name=(.+), clientId=(.+)><>(Value)
name: kafka_server_$1_$2_$4
labels:
client_id: "$3"
- pattern: kafka.network<type=(Processor), name=(IdlePercent), networkProcessor=(.+)><>(Value)
name: kafka_network_$1_$2_$4
labels:
network_processor: $3
- pattern: kafka.network<type=(RequestMetrics), name=(RequestsPerSec), request=(.+)><>(Count)
name: kafka_network_$1_$2_$4
labels:
request: $3
- pattern: kafka.server<type=(.+), name=(.+), topic=(.+)><>(Count|OneMinuteRate)
name: kafka_server_$1_$2_$4
labels:
topic: $3
- pattern: kafka.server<type=(DelayedOperationPurgatory), name=(.+), delayedOperation=(.+)><>(Value)
name: kafka_server_$1_$2_$3_$4
- pattern: kafka.server<type=(.+), name=(.+)><>(Count|Value|OneMinuteRate)
name: kafka_server_$1_total_$2_$3
- pattern: kafka.server<type=(.+)><>(queue-size)
name: kafka_server_$1_$2
- pattern: java.lang<type=(.+), name=(.+)><(.+)>(\w+)
name: java_lang_$1_$4_$3_$2
- pattern: java.lang<type=(.+), name=(.+)><>(\w+)
name: java_lang_$1_$3_$2
- pattern: java.lang<type=(.*)>
- pattern: kafka.log<type=(.+), name=(.+), topic=(.+), partition=(.+)><>Value
name: kafka_log_$1_$2
labels:
topic: $3
partition: $4
{{- end }}
{{- end }}
{{- if .Values.prometheus.kafka.enabled }}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ template "kafka.fullname" . }}-exporter
labels:
app: "{{ template "kafka.name" . }}"
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
replicas: 1
selector:
matchLabels:
app: {{ template "kafka.name" . }}-exporter
release: {{ .Release.Name }}
template:
metadata:
annotations:
{{- if and .Values.prometheus.kafka.enabled (not .Values.prometheus.operator.enabled) }}
prometheus.io/scrape: "true"
prometheus.io/port: {{ .Values.prometheus.kafka.port | quote }}
{{- end }}
labels:
app: {{ template "kafka.name" . }}-exporter
release: {{ .Release.Name }}
spec:
containers:
- image: "{{ .Values.prometheus.kafka.image }}:{{ .Values.prometheus.kafka.imageTag }}"
name: kafka-exporter
args:
- --kafka.server={{ template "kafka.fullname" . }}:9092
- --web.listen-address=:{{ .Values.prometheus.kafka.port }}
ports:
- containerPort: {{ .Values.prometheus.kafka.port }}
resources:
{{ toYaml .Values.prometheus.kafka.resources | indent 10 }}
{{- end }}
{{- if .Values.rbac.enabled }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- list
- patch
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: {{ .Release.Name }}
roleRef:
kind: Role
name: {{ .Release.Name }}
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- end }}
{{- if .Values.external.enabled }}
{{- $fullName := include "kafka.fullname" . }}
{{- $replicas := .Values.replicas | int }}
{{- $servicePort := .Values.external.servicePort }}
{{- $root := . }}
{{- range $i, $e := until $replicas }}
{{- $externalListenerPort := add $root.Values.external.firstListenerPort $i }}
{{- $responsiblePod := printf "%s-%d" $fullName $i }}
---
apiVersion: v1
kind: Service
metadata:
annotations:
## ref: https://github.com/kubernetes/kops/blob/master/dns-controller/pkg/watchers/annotations.go#L21
dns.alpha.kubernetes.io/internal: "{{ $root.Release.Name }}.{{ $root.Values.external.domain }}"
name: {{ $root.Release.Name }}-{{ $i }}-external
labels:
app: {{ include "kafka.name" $root }}
chart: {{ $root.Chart.Name }}-{{ $root.Chart.Version }}
release: {{ $root.Release.Name }}
heritage: {{ $root.Release.Service }}
pod: {{ $responsiblePod | quote }}
spec:
type: NodePort
ports:
- name: external-broker
port: {{ $servicePort }}
targetPort: {{ $externalListenerPort }}
nodePort: {{ $externalListenerPort }}
protocol: TCP
selector:
app: {{ include "kafka.name" $root }}
release: {{ $root.Release.Name }}
pod: {{ $responsiblePod | quote }}
{{- end }}
{{- end }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "kafka.fullname" . }}
labels:
app: {{ include "kafka.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
ports:
- name: broker
port: 9092
{{- if and .Values.prometheus.jmx.enabled .Values.prometheus.operator.enabled }}
- name: jmx-exporter
protocol: TCP
port: {{ .Values.jmx.port }}
targetPort: {{ .Values.prometheus.jmx.port }}
{{- end }}
selector:
app: {{ include "kafka.name" . }}
release: {{ .Release.Name }}
---
{{- if and .Values.prometheus.kafka.enabled .Values.prometheus.operator.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "kafka.fullname" . }}-exporter
labels:
app: {{ include "kafka.name" . }}-exporter
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
ports:
- name: kafka-exporter
protocol: TCP
port: {{ .Values.prometheus.kafka.port }}
targetPort: {{ .Values.prometheus.kafka.port }}
selector:
app: {{ include "kafka.name" . }}-exporter
release: {{ .Release.Name }}
{{- end }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "kafka.fullname" . }}-headless
labels:
app: {{ include "kafka.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
ports:
- name: broker
port: 9092
clusterIP: None
selector:
app: {{ include "kafka.name" . }}
release: {{ .Release.Name }}
{{ if and .Values.prometheus.jmx.enabled .Values.prometheus.operator.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ include "kafka.fullname" . }}
namespace: {{ .Values.prometheus.operator.serviceMonitor.namespace }}
labels:
{{ toYaml .Values.prometheus.operator.serviceMonitor.selector | indent 4 }}
spec:
selector:
matchLabels:
app: {{ include "kafka.name" . }}
release: {{ .Release.Name }}
endpoints:
- port: jmx-exporter
interval: {{ .Values.prometheus.jmx.interval }}
namespaceSelector:
any: true
{{ end }}
---
{{ if and .Values.prometheus.kafka.enabled .Values.prometheus.operator.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ include "kafka.fullname" . }}-exporter
namespace: {{ .Values.prometheus.operator.serviceMonitor.namespace }}
labels:
{{ toYaml .Values.prometheus.operator.serviceMonitor.selector | indent 4 }}
spec:
selector:
matchLabels:
app: {{ include "kafka.name" . }}-exporter
release: {{ .Release.Name }}
endpoints:
- port: kafka-exporter
interval: {{ .Values.prometheus.kafka.interval }}
namespaceSelector:
any: true
{{ end }}
{{- $advertisedListenersOverride := first (pluck "advertised.listeners" .Values.configurationOverrides) }}
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: {{ include "kafka.fullname" . }}
labels:
app: {{ include "kafka.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
serviceName: {{ include "kafka.fullname" . }}-headless
podManagementPolicy: {{ .Values.podManagementPolicy }}
updateStrategy:
{{ toYaml .Values.updateStrategy | indent 4 }}
replicas: {{ default 3 .Values.replicas }}
template:
metadata:
{{- if and .Values.prometheus.jmx.enabled (not .Values.prometheus.operator.enabled) }}
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: {{ .Values.prometheus.jmx.port | quote }}
{{- end }}
labels:
app: {{ include "kafka.name" . }}
release: {{ .Release.Name }}
spec:
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
{{- if .Values.rbac.enabled }}
serviceAccountName: {{ .Release.Name }}
{{- end }}
{{- if .Values.external.enabled }}
## ref: https://github.com/Yolean/kubernetes-kafka/blob/master/kafka/50kafka.yml
initContainers:
- name: init-ext
image: "{{ .Values.external.init.image }}:{{ .Values.external.init.imageTag }}"
imagePullPolicy: "{{ .Values.external.init.imagePullPolicy }}"
command:
- sh
- -euxc
- "kubectl label pods ${POD_NAME} --namespace ${POD_NAMESPACE} pod=${POD_NAME}"
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
{{- end }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
{{- if .Values.affinity }}
affinity:
{{ toYaml .Values.affinity | indent 8 }}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
containers:
{{- if .Values.prometheus.jmx.enabled }}
- name: metrics
image: "{{ .Values.prometheus.jmx.image }}:{{ .Values.prometheus.jmx.imageTag }}"
command:
- java
- -XX:+UnlockExperimentalVMOptions
- -XX:+UseCGroupMemoryLimitForHeap
- -XX:MaxRAMFraction=1
- -XshowSettings:vm
- -jar
- jmx_prometheus_httpserver.jar
- {{ .Values.prometheus.jmx.port | quote }}
- /etc/jmx-kafka/jmx-kafka-prometheus.yml
ports:
- containerPort: {{ .Values.prometheus.jmx.port }}
resources:
{{ toYaml .Values.prometheus.jmx.resources | indent 10 }}
volumeMounts:
- name: jmx-config
mountPath: /etc/jmx-kafka
{{- end }}
- name: {{ include "kafka.name" . }}-broker
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
livenessProbe:
exec:
command:
- sh
- -ec
- /usr/bin/jps | /bin/grep -q SupportedKafka
initialDelaySeconds: 30
timeoutSeconds: 5
readinessProbe:
tcpSocket:
port: kafka
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
successThreshold: {{ .Values.readinessProbe.successThreshold }}
failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
ports:
- containerPort: 9092
name: kafka
{{- if .Values.external.enabled }}
{{- $replicas := .Values.replicas | int }}
{{- $root := . }}
{{- range $i, $e := until $replicas }}
- containerPort: {{ add $root.Values.external.firstListenerPort $i }}
name: external-{{ $i }}
{{- end }}
{{- end }}
{{- if .Values.prometheus.jmx.enabled }}
- containerPort: {{ .Values.jmx.port }}
name: jmx
{{- end }}
{{- if .Values.additionalPorts }}
{{ toYaml .Values.additionalPorts | indent 8 }}
{{- end }}
resources:
{{ toYaml .Values.resources | indent 10 }}
env:
{{- if .Values.prometheus.jmx.enabled }}
- name: JMX_PORT
value: "{{ .Values.jmx.port }}"
{{- end }}
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: KAFKA_HEAP_OPTS
value: {{ .Values.kafkaHeapOptions }}
{{- if not (hasKey .Values.configurationOverrides "zookeeper.connect") }}
- name: KAFKA_ZOOKEEPER_CONNECT
value: {{ include "zookeeper.url" . | quote }}
{{- end }}
{{- if not (hasKey .Values.configurationOverrides "log.dirs") }}
- name: KAFKA_LOG_DIRS
value: {{ printf "%s/%s" .Values.persistence.mountPath .Values.logSubPath | quote }}
{{- end }}
{{- range $key, $value := .Values.configurationOverrides }}
- name: {{ printf "KAFKA_%s" $key | replace "." "_" | upper | quote }}
value: {{ $value | quote }}
{{- end }}
{{- if .Values.jmx.port }}
- name: KAFKA_JMX_PORT
value: "{{ .Values.jmx.port }}"
{{- end }}
# This is required because the Downward API does not yet support identification of
# pod numbering in statefulsets. Thus, we are required to specify a command which
# allows us to extract the pod ID for usage as the Kafka Broker ID.
# See: https://github.com/kubernetes/kubernetes/issues/31218
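# e.g. for a pod named "myrelease-kafka-2" (release name hypothetical), the expansion
# ${HOSTNAME##*-} below yields broker ID 2.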
command:
- sh
- -exc
- |
export KAFKA_BROKER_ID=${HOSTNAME##*-} && \
export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${POD_IP}:9092{{ if kindIs "string" $advertisedListenersOverride }}{{ printf ",%s" $advertisedListenersOverride }}{{ end }} && \
exec /etc/confluent/docker/run
volumeMounts:
- name: datadir
mountPath: {{ .Values.persistence.mountPath | quote }}
volumes:
{{- if not .Values.persistence.enabled }}
- name: datadir
emptyDir: {}
{{- end }}
{{- if .Values.prometheus.jmx.enabled }}
- name: jmx-config
configMap:
{{- if .Values.jmx.configMap.overrideName }}
name: {{ .Values.jmx.configMap.overrideName }}
{{- else }}
name: {{ include "kafka.fullname" . }}-metrics
{{- end }}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: datadir
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: {{ .Values.persistence.size }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- end }}
# ------------------------------------------------------------------------------
# Kafka:
# ------------------------------------------------------------------------------
## The StatefulSet installs 3 pods by default
replicas: 3
## The kafka image repository
image: "confluentinc/cp-kafka"
## The kafka image tag
imageTag: "4.0.1-1"
## Specify an imagePullPolicy
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
imagePullPolicy: "IfNotPresent"
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
resources: {}
# limits:
# cpu: 200m
# memory: 1536Mi
# requests:
# cpu: 100m
# memory: 1024Mi
kafkaHeapOptions: "-Xmx1G -Xms1G"
## The StatefulSet Update Strategy which Kafka will use when changes are applied.
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
type: "OnDelete"
## Start and stop pods in Parallel or OrderedReady (one-by-one). Note: cannot be changed after the first release.
## ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pod-management-policy
podManagementPolicy: Parallel
## If RBAC is enabled on the cluster, the Kafka init container needs a service account
## with permissions sufficient to apply pod labels
rbac:
enabled: true
## The name of the storage class which the cluster should use.
# storageClass: default
## The subpath within the Kafka container's PV where logs will be stored.
## This is combined with `persistence.mountPath` to create, by default: /opt/kafka/data/logs
logSubPath: "logs"
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
## Pod scheduling preferences (by default keep pods within a release on separate nodes).
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## By default we don't set affinity
affinity: {}
## Alternatively, this typical example defines:
## antiAffinity (to keep Kafka pods on separate nodes)
## and affinity (to encourage Kafka pods to be collocated with Zookeeper pods)
# affinity:
# podAntiAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# - labelSelector:
# matchExpressions:
# - key: app
# operator: In
# values:
# - kafka
# topologyKey: "kubernetes.io/hostname"
# podAffinity:
# preferredDuringSchedulingIgnoredDuringExecution:
# - weight: 50
# podAffinityTerm:
# labelSelector:
# matchExpressions:
# - key: app
# operator: In
# values:
# - zookeeper
# topologyKey: "kubernetes.io/hostname"
## Node labels for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}
## Readiness probe config.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
##
readinessProbe:
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
## Period to wait for graceful broker shutdown (SIGTERM) before the pod is killed (SIGKILL)
## ref: https://kubernetes-v1-4.github.io/docs/user-guide/production-pods/#lifecycle-hooks-and-termination-notice
## ref: https://kafka.apache.org/10/documentation.html#brokerconfigs controlled.shutdown.*
terminationGracePeriodSeconds: 30
# Tolerations for nodes that have taints on them.
# Useful if you want to dedicate nodes to just run Kafka
# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# tolerations:
# - key: "key"
# operator: "Equal"
# value: "value"
# effect: "NoSchedule"
## External access.
##
external:
enabled: false
servicePort: 19092
firstListenerPort: 31090
domain: cluster.local
init:
image: "lwolf/kubectl_deployer"
imageTag: "0.4"
imagePullPolicy: "IfNotPresent"
## Configuration Overrides. Specify any Kafka settings you would like set on the StatefulSet
## here in map format, as defined in the official docs.
## ref: https://kafka.apache.org/documentation/#brokerconfigs
##
configurationOverrides:
"offsets.topic.replication.factor": 3
"auto.leader.rebalance.enable": true
"auto.create.topics.enable": true
"controlled.shutdown.enable": true
"controlled.shutdown.max.retries": 3
## Options required for external access via NodePort
## ref:
## - http://kafka.apache.org/documentation/#security_configbroker
## - https://cwiki.apache.org/confluence/display/KAFKA/KIP-103%3A+Separation+of+Internal+and+External+traffic
##
## Setting "advertised.listeners" here appends to "PLAINTEXT://${POD_IP}:9092,"
# "advertised.listeners": |-
# EXTERNAL://kafka.cluster.local:$((31090 + ${KAFKA_BROKER_ID}))
# "listener.security.protocol.map": |-
# PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
## A collection of additional ports to expose on brokers (formatted as normal containerPort yaml)
## Useful when the image exposes metrics (like Prometheus, etc.) through a javaagent instead of a sidecar
additionalPorts: {}
## Persistence configuration. Specify if and how to persist data to a persistent volume.
##
persistence:
enabled: false
## The size of the PersistentVolume to allocate to each Kafka Pod in the StatefulSet. For
## production servers this number should likely be much larger.
##
size: "1Gi"
## The location within the Kafka container where the PV will mount its storage and Kafka will
## store its logs.
##
mountPath: "/opt/kafka/data"
## Kafka data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass:
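## e.g. to pin Kafka data volumes to a specific class (class name hypothetical):
# storageClass: "gp2"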
jmx:
## Rules to apply to the Prometheus JMX Exporter. While many stats have been cleaned up and exposed, more
## remain to clean up, and others will never be exposed because they duplicate metrics that can easily be
## derived. The ConfigMap in this chart cleans up the metrics it exposes into a Prometheus format, e.g.
## topic and broker are labels rather than part of the metric name. Improvements are gladly accepted and encouraged.
configMap:
## Allows disabling the default ConfigMap; note that a ConfigMap is still required
enabled: true
## Allows setting values used to generate the ConfigMap
## To allow all metrics through (warning: extremely verbose), comment out `overrideConfig` below and set
## `whitelistObjectNames: []`
overrideConfig: {}
# jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:5555/jmxrmi
# lowercaseOutputName: true
# lowercaseOutputLabelNames: true
# ssl: false
# rules:
# - pattern: ".*"
## If you would like to supply your own ConfigMap for JMX metrics, supply the name of that
## ConfigMap as an `overrideName` here.
overrideName: ""
## Port on which JMX metrics are exposed in native JMX format (not Prometheus format)
port: 5555
## JMX whitelist objects can be set to control which JMX metrics are exposed. Only whitelisted values will
## be exposed via the JMX Exporter, and they must also be exposed via the rules above. To expose all metrics
## (warning: extremely verbose, and not formatted in a Prometheus style), (1) set `whitelistObjectNames: []`
## and (2) comment out `overrideConfig` above.
whitelistObjectNames: # []
- kafka.controller:*
- kafka.server:*
- java.lang:*
- kafka.network:*
- kafka.log:*
## Prometheus Exporters / Metrics
##
prometheus:
## Prometheus JMX Exporter: exposes the majority of Kafka's metrics
jmx:
enabled: false
## The image to use for the metrics collector
image: solsson/kafka-prometheus-jmx-exporter@sha256
## The image tag to use for the metrics collector
imageTag: a23062396cd5af1acdf76512632c20ea6be76885dfc20cd9ff40fb23846557e8
## Interval at which Prometheus scrapes metrics, note: only used by Prometheus Operator
interval: 10s
## Port jmx-exporter exposes Prometheus format metrics to scrape
port: 5556
resources: {}
# limits:
# cpu: 200m
# memory: 1Gi
# requests:
# cpu: 100m
# memory: 100Mi
## Prometheus Kafka Exporter: exposes metrics complementary to the JMX Exporter
kafka:
enabled: false
## The image to use for the metrics collector
image: danielqsj/kafka-exporter
## The image tag to use for the metrics collector
imageTag: v1.0.1
## Interval at which Prometheus scrapes metrics, note: only used by Prometheus Operator
interval: 10s
## Port kafka-exporter exposes for Prometheus to scrape metrics
port: 9308
## Resource limits
resources: {}
# limits:
# cpu: 200m
# memory: 1Gi
# requests:
# cpu: 100m
# memory: 100Mi
operator:
## Are you using Prometheus Operator?
enabled: false
serviceMonitor:
# Namespace Prometheus is installed in
namespace: monitoring
## Defaults to what's used if you follow CoreOS [Prometheus Install Instructions](https://github.com/coreos/prometheus-operator/tree/master/helm#tldr)
## [Prometheus Selector Label](https://github.com/coreos/prometheus-operator/blob/master/helm/prometheus/templates/prometheus.yaml#L65)
## [Kube Prometheus Selector Label](https://github.com/coreos/prometheus-operator/blob/master/helm/kube-prometheus/values.yaml#L298)
selector:
prometheus: kube-prometheus
# ------------------------------------------------------------------------------
# Zookeeper:
# ------------------------------------------------------------------------------
zookeeper:
## If true, install the Zookeeper chart alongside Kafka
## ref: https://github.com/kubernetes/charts/tree/master/incubator/zookeeper
enabled: true
# Default values for zookeeper.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 3
terminationGracePeriodSeconds: 30 # Duration in seconds a ZooKeeper pod needs to terminate gracefully.
securityContext:
fsGroup: 1000
runAsUser: 1000
image:
repository: confluentinc/cp-zookeeper
tag: 4.1.1
pullPolicy: IfNotPresent
service:
name: client
type: ClusterIP
port: 2181
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
persistence:
enabled: false
## zookeeper data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
accessMode: ReadWriteOnce
size: 5Gi
env:
## The port on which the server will accept client requests.
ZK_CLIENT_PORT: 2181
## The number of wall-clock ms that corresponds to a tick for the ensemble's
## internal time.
ZK_TICK_TIME: 2000
# The time interval in hours at which the purge task is triggered.
# Set to a positive integer (1 and above) to enable auto purging. Defaults to 0.
PURGE_INTERVAL: 24
# ------------------------------------------------------------------------------
# Kafka Schema Registry:
# ------------------------------------------------------------------------------
schema-registry:
enabled: true
## Number of Schema Registry Pods to Deploy
replicaCount: 1
## schema-registry repository
image: "confluentinc/cp-schema-registry"
## The container tag to use
imageTag: 4.0.1
## Specify an imagePullPolicy
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
imagePullPolicy: "IfNotPresent"
## Schema Registry Settings Overrides
## Configuration Options can be found here: https://docs.confluent.io/current/schema-registry/docs/config.html
configurationOverrides:
kafkastore.topic.replication.factor: 3
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
## Confluent has production deployment guidelines here:
## ref: https://github.com/confluentinc/schema-registry/blob/master/docs/deployment.rst
##
resources: {}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
## The port on which the SchemaRegistry will be available and serving requests
servicePort: 8081
## If `Kafka.Enabled` is `false`, `kafkaStore.overrideBootstrapServers` must be provided for master election.
## You can list a load-balanced service endpoint, or a list of all brokers (which is hard in K8s), e.g.:
## overrideBootstrapServers: "PLAINTEXT://dozing-prawn-kafka-headless:9092"
## The chart uses Kafka Coordinator Master Election: https://docs.confluent.io/current/schema-registry/docs/design.html#kafka-coordinator-master-election
kafkaStore:
overrideBootstrapServers: ""
# By default this uses the release name, but it can be overridden, which means each release is its own group
# of Schema Registry workers. You can have multiple groups talking to the same Kafka cluster.
overrideGroupId: ""
## Additional Java arguments to pass to the Schema Registry.
# schemaRegistryOpts: -Dfoo=bar
# Options for connecting to SASL kafka brokers
sasl:
configPath: "/etc/kafka-config"
scram:
enabled: false
clientUser: "kafka-client"
zookeeperClientUser: "zookeeper-client"
# Passwords can either be provided here or pulled from an existing k8s secret.
# If you want to specify the password here:
# clientPassword: "client-password"
# zookeeperClientPassword: "zookeeper-client-password"
# ------------------------------------------------------------------------------
# Kafka Rest:
# ------------------------------------------------------------------------------
kafka-rest:
enabled: true
replicaCount: 1
image:
repository: confluentinc/cp-kafka-rest
tag: 4.0.0
pullPolicy: IfNotPresent
service:
name: rest
type: ClusterIP
externalPort: 8082
internalPort: 8082
# ------------------------------------------------------------------------------
# Kafka Topics UI:
# ------------------------------------------------------------------------------
kafka-topics-ui:
## If true, install the kafka-topics-ui chart alongside Kafka
enabled: false
replicaCount: 1
image:
repository: landoop/kafka-topics-ui
tag: latest
pullPolicy: IfNotPresent
service:
type: ClusterIP
# nodePort:
ingress:
enabled: false
# Used to create an Ingress record.
hosts:
- xip.io
annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
tls:
# Secrets must be manually created in the namespace.
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
name: kubernetes-dashboard
version: 0.6.8
appVersion: 1.8.3
description: General-purpose web UI for Kubernetes clusters
keywords:
- kubernetes
- dashboard
home: https://github.com/kubernetes/dashboard
sources:
- https://github.com/kubernetes/dashboard
maintainers:
- name: kfox1111
email: Kevin.Fox@pnnl.gov
icon: https://raw.githubusercontent.com/kubernetes/kubernetes/master/logo/logo.svg
## Configuration
The following table lists the configurable parameters of the kubernetes-dashboard chart and their default values.
| Parameter | Description | Default |
|---------------------------|-----------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------|
| `image.repository` | Repository for container image | `k8s.gcr.io/kubernetes-dashboard-amd64` |
| `image.tag` | Image tag | `v1.8.3` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `extraArgs` | Additional container arguments | `[]` |
| `nodeSelector` | node labels for pod assignment | `{}` |
| `tolerations` | List of node taints to tolerate (requires Kubernetes >= 1.6) | `[]` |
| `service.externalPort` | Dashboard external port | 443 |
| `service.internalPort` | Dashboard internal port | 443 |
| `ingress.annotations` | Specify ingress class | `kubernetes.io/ingress.class: nginx` |
| `ingress.enabled` | Enable ingress controller resource | `false` |
| `ingress.path` | Path to match against incoming requests. Must begin with a '/' | `/` |
| `ingress.hosts` | Dashboard Hostnames | `nil` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `resources` | Pod resource requests & limits | `limits: {cpu: 100m, memory: 50Mi}, requests: {cpu: 100m, memory: 50Mi}` |
| `rbac.create` | Create & use RBAC resources | `true` |
| `rbac.clusterAdminRole` | "cluster-admin" ClusterRole will be used for dashboard ServiceAccount ([NOT RECOMMENDED](#access-control)) | `false` |
| `serviceAccount.create`   | Whether a new service account should be created for the dashboard to use.   | `true` |
| `serviceAccount.name` | Service account to be used. If not set and serviceAccount.create is `true` a name is generated using the fullname template. | |
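Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example (chart path and release name hypothetical):

    helm install ./kubernetes-dashboard --name my-dashboard --set rbac.create=true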
# kubernetes-dashboard
[Kubernetes Dashboard](https://github.com/kubernetes/dashboard) is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself.
## Access control
IMPORTANT:
You must be a cluster admin to be able to deploy Kubernetes Dashboard.
WARNING:
Once the Dashboard is deployed with the cluster-admin role, anyone with access to this project can access the Dashboard and therefore gain access to the entire Kubernetes cluster!
It is critical to correctly set up access control for the Kubernetes Dashboard. See this [guide](https://github.com/kubernetes/dashboard/wiki/Access-control) for best practices.
It is highly recommended to use RBAC with the minimal privileges needed for the Dashboard to run.
namespace: kube-system
categories:
- dashboard
- kubernetes
questions:
- variable: defaultImage
default: "true"
description: "Use default Docker image"
label: Use Default Image
type: boolean
show_subquestion_if: false
group: "Container Images"
subquestions:
- variable: image.repository
default: "k8s.gcr.io/kubernetes-dashboard-amd64"
description: "Docker image repository"
type: string
label: Image Repository
- variable: image.tag
default: "v1.8.3"
description: "Docker image tag"
type: string
label: Image Tag
- variable: rbac.clusterAdminRole
required: true
default: false
description: "IMPORTANT: Granting admin privileges to Dashboard's Service Account might be a security risk, makeing sure that you know what you are doing before proceeding."
type: boolean
label: "IMPORTANT: Enable Dashboard Cluster Admin Role"
*********************************************************************************
*** PLEASE BE PATIENT: kubernetes-dashboard may take a few minutes to install ***
*********************************************************************************
{{- if .Values.ingress.enabled }}
From outside the cluster, the server URL(s) are:
{{- range .Values.ingress.hosts }}
https://{{ . }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
Get the Kubernetes Dashboard URL by running:
export NODE_PORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "kubernetes-dashboard.fullname" . }})
export NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")
echo https://$NODE_IP:$NODE_PORT/
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc -w {{ template "kubernetes-dashboard.fullname" . }}'
Get the Kubernetes Dashboard URL by running:
export SERVICE_IP=$(kubectl get svc {{ template "kubernetes-dashboard.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo https://$SERVICE_IP/
{{- else if contains "ClusterIP" .Values.service.type }}
Get the Kubernetes Dashboard URL by running:
kubectl cluster-info | grep dashboard
{{- end }}
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "kubernetes-dashboard.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "kubernetes-dashboard.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "kubernetes-dashboard.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "kubernetes-dashboard.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "kubernetes-dashboard.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
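{{/* For example, with serviceAccount.create=true and no name set, this renders the chart fullname, e.g. "myrelease-kubernetes-dashboard" for a hypothetical release "myrelease". */}}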
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ template "kubernetes-dashboard.fullname" . }}
labels:
app: {{ template "kubernetes-dashboard.name" . }}
chart: {{ template "kubernetes-dashboard.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
kubernetes.io/cluster-service: "true"
{{- if .Values.labels }}
{{ toYaml .Values.labels | indent 4 }}
{{- end }}
spec:
replicas: 1
strategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
labels:
app: {{ template "kubernetes-dashboard.name" . }}
release: {{ .Release.Name }}
kubernetes.io/cluster-service: "true"
spec:
serviceAccountName: {{ template "kubernetes-dashboard.serviceAccountName" . }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
- --auto-generate-certificates
{{- if .Values.extraArgs }}
{{ toYaml .Values.extraArgs | indent 10 }}
{{- end }}
ports:
- name: https
containerPort: 8443
protocol: TCP
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
resources:
{{ toYaml .Values.resources | indent 10 }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: {{ template "kubernetes-dashboard.fullname" . }}
- name: tmp-volume
emptyDir: {}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
{{- if .Values.ingress.enabled -}}
{{- $serviceName := include "kubernetes-dashboard.fullname" . -}}
{{- $servicePort := .Values.service.externalPort -}}
{{- $path := .Values.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ template "kubernetes-dashboard.fullname" . }}
labels:
app: {{ template "kubernetes-dashboard.name" . }}
chart: {{ template "kubernetes-dashboard.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- if .Values.ingress.annotations }}
annotations:
nginx.org/redirect-to-https: "true"
{{ toYaml .Values.ingress.annotations | indent 4 }}
{{- end }}
spec:
rules:
{{- if .Values.ingress.hosts }}
{{- range $host := .Values.ingress.hosts }}
- host: {{ $host }}
http:
paths:
- path: {{ $path }}
backend:
serviceName: {{ $serviceName }}
servicePort: {{ $servicePort }}
{{- end -}}
{{- else }}
- http:
paths:
- path: {{ $path }}
backend:
serviceName: {{ $serviceName }}
servicePort: {{ $servicePort }}
{{- end -}}
{{- if .Values.ingress.tls }}
tls:
{{ toYaml .Values.ingress.tls | indent 4 }}
{{- end -}}
{{- end -}}
{{- if and .Values.rbac.create (not .Values.rbac.clusterAdminRole) }}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
labels:
app: {{ template "kubernetes-dashboard.name" . }}
chart: {{ template "kubernetes-dashboard.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "kubernetes-dashboard.fullname" . }}
namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups:
- ""
resources:
- secrets
resourceNames:
- kubernetes-dashboard-key-holder
- {{ template "kubernetes-dashboard.fullname" . }}
verbs:
- get
- update
- delete
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
- kubernetes-dashboard-settings
verbs:
- get
- update
# Allow Dashboard to get metrics from heapster.
- apiGroups:
- ""
resources:
- services
resourceNames:
- heapster
verbs:
- proxy
- apiGroups:
- ""
resources:
- services/proxy
resourceNames:
- heapster
- "http:heapster:"
- "https:heapster:"
verbs:
- get
{{- end -}}
{{- if .Values.rbac.create }}
{{- if .Values.rbac.clusterAdminRole }}
# Cluster role binding for clusterAdminRole == true
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
labels:
app: {{ template "kubernetes-dashboard.name" . }}
chart: {{ template "kubernetes-dashboard.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "kubernetes-dashboard.fullname" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: {{ template "kubernetes-dashboard.serviceAccountName" . }}
namespace: kube-system
{{- else -}}
# Role binding for clusterAdminRole == false
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
labels:
app: {{ template "kubernetes-dashboard.name" . }}
chart: {{ template "kubernetes-dashboard.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "kubernetes-dashboard.fullname" . }}
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "kubernetes-dashboard.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "kubernetes-dashboard.serviceAccountName" . }}
namespace: kube-system
{{- end -}}
{{- end -}}
apiVersion: v1
kind: Secret
metadata:
labels:
app: {{ template "kubernetes-dashboard.name" . }}
chart: {{ template "kubernetes-dashboard.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "kubernetes-dashboard.fullname" . }}
namespace: kube-system
type: Opaque
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: {{ template "kubernetes-dashboard.name" . }}
chart: {{ template "kubernetes-dashboard.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "kubernetes-dashboard.serviceAccountName" . }}
namespace: kube-system
{{- end -}}
apiVersion: v1
kind: Service
metadata:
name: {{ template "kubernetes-dashboard.fullname" . }}
labels:
app: {{ template "kubernetes-dashboard.name" . }}
chart: {{ template "kubernetes-dashboard.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
kubernetes.io/cluster-service: "true"
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4 }}
{{- end }}
{{- if .Values.service.annotations }}
annotations:
{{ toYaml .Values.service.annotations | indent 4 }}
{{- end }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.externalPort }}
targetPort: https
{{- if .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
selector:
app: {{ template "kubernetes-dashboard.name" . }}
release: {{ .Release.Name }}
# Default values for kubernetes-dashboard
# This is a YAML-formatted file.
# Declare name/value pairs to be passed into your templates.
# name: value
image:
repository: k8s.gcr.io/kubernetes-dashboard-amd64
tag: v1.8.3
pullPolicy: IfNotPresent
## Here labels can be added to the Kubernetes Dashboard deployment
##
labels: {}
# kubernetes.io/cluster-service: "true"
# kubernetes.io/name: "Kubernetes Dashboard"
## Additional container arguments
##
# extraArgs:
# - --enable-insecure-login
# - --system-banner="Welcome to Kubernetes"
## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## List of node taints to tolerate (requires Kubernetes >= 1.6)
tolerations: []
# - key: "key"
# operator: "Equal|Exists"
# value: "value"
# effect: "NoSchedule|PreferNoSchedule|NoExecute"
service:
type: ClusterIP
externalPort: 443
## This allows an override of the kubernetes-dashboard service name
## Default: {{ .Chart.Name }}
##
# nameOverride:
## Kubernetes Dashboard Service annotations
##
annotations: {}
# foo.io/bar: "true"
## Here labels can be added to the Kubernetes Dashboard service
##
labels: {}
# kubernetes.io/name: "Kubernetes Dashboard"
resources:
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
ingress:
## If true, Kubernetes Dashboard Ingress will be created.
##
enabled: false
## Kubernetes Dashboard Ingress annotations
##
# annotations:
# kubernetes.io/ingress.class: nginx
# nginx.ingress.kubernetes.io/secure-backends: "true"
# kubernetes.io/tls-acme: 'true'
## Kubernetes Dashboard Ingress path
##
path: ""
## Kubernetes Dashboard Ingress hostnames
## Must be provided if Ingress is enabled
##
# hosts:
# - kubernetes-dashboard.domain.com
## Kubernetes Dashboard Ingress TLS configuration
## Secrets must be manually created in the namespace
##
# tls:
# - secretName: kubernetes-dashboard-tls
# hosts:
# - kubernetes-dashboard.domain.com
rbac:
# Specifies whether RBAC resources should be created
create: true
# Specifies whether cluster-admin ClusterRole will be used for dashboard
# ServiceAccount (NOT RECOMMENDED).
clusterAdminRole: false
serviceAccount:
# Specifies whether a service account should be created
create: true
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: