Commit 4d99d1a5 by Guangbo Chen, committed by Guangbo

Update Magento dependency files with k8s 1.16 support

parent b44889ca
.git
# OWNERS file for Kubernetes
OWNERS
apiVersion: v1
appVersion: 6.8.2
description: Flexible and powerful open source, distributed real-time search and analytics engine.
home: https://www.elastic.co/products/elasticsearch
icon: https://static-www.elastic.co/assets/blteb1c97719574938d/logo-elastic-elasticsearch-lt.svg
maintainers:
- email: christian@jetstack.io
name: simonswine
- email: michael.haselton@gmail.com
name: icereval
- email: pete.brown@powerhrg.com
name: rendhalver
- email: cedric@desaintmartin.fr
name: desaintmartin
- email: goonohc@gmail.com
name: KongZ
- email: hfernandez@mesosphere.com
name: hectorj2f
name: elasticsearch
sources:
- https://www.elastic.co/products/elasticsearch
- https://github.com/jetstack/elasticsearch-pet
- https://github.com/giantswarm/kubernetes-elastic-stack
- https://github.com/GoogleCloudPlatform/elasticsearch-docker
- https://github.com/clockworksoul/helm-elasticsearch
- https://github.com/pires/kubernetes-elasticsearch-cluster
version: 1.31.0
# Elasticsearch Helm Chart
This chart uses the standard Elasticsearch Docker image (docker.elastic.co/elasticsearch/elasticsearch-oss) and a service pointing to the master's transport port for service discovery.
Elasticsearch does not communicate with the Kubernetes API, hence no need for RBAC permissions.
## **Pre-deprecation notice**
As mentioned in #10543, we are planning on deprecating this chart in favour of the official [Elastic Helm Chart](https://github.com/elastic/helm-charts/tree/master/elasticsearch).
We have taken steps towards that goal by producing a [migration guide](https://github.com/elastic/helm-charts/blob/master/elasticsearch/examples/migration/README.md) to help people switch the management of their clusters over to the new Charts.
The Elastic Helm Chart supports version 7 of Elasticsearch, and it was decided it would be easier for people to upgrade after migrating to the Elastic Helm Chart because its upgrade process works better.
During the deprecation process we want to make sure that the Elastic Helm Chart can do what people are using this chart to do.
Please look at the Elastic Helm Charts and if you see anything missing, please [open an issue](https://github.com/elastic/helm-charts/issues/new/choose) to let us know what you need.
The Elastic Chart repo is also in [Helm Hub](https://hub.helm.sh).
## Warning for previous users
If you are currently using an earlier version of this Chart you will need to redeploy your Elasticsearch clusters. The discovery method used here is incompatible with using RBAC.
If you are upgrading to Elasticsearch 6 from the 5.5 version previously used in this chart, please note that your cluster needs a full cluster restart.
The simplest way to do that is to delete the installation (keep the PVs) and install this chart again with the new version.
If you want to avoid that, upgrade to Elasticsearch 5.6 first before moving on to Elasticsearch 6.0.
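A minimal sketch of that full-restart path (assuming a Helm 2 release named `my-release`; PVCs created from `volumeClaimTemplates` are not managed by the release, so they survive the delete):
```bash
# Remove the release but keep the PersistentVolumeClaims and their PVs
$ helm delete my-release --purge

# Reinstall at the new chart version; data pods re-attach to the retained PVCs
$ helm install --name my-release stable/elasticsearch
```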
## Prerequisites Details
* Kubernetes 1.6+
* PV dynamic provisioning support on the underlying infrastructure
## StatefulSets Details
* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
## StatefulSets Caveats
* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations
## Todo
* Implement TLS/Auth/Security
* Smarter upscaling/downscaling
* Solution for memory locking
## Chart Details
This chart will do the following:
* Implement a dynamically scalable Elasticsearch cluster using Kubernetes StatefulSets/Deployments
* Multi-role deployment: master, client (coordinating) and data nodes
* StatefulSets support scaling down without degrading the cluster
## Installing the Chart
To install the chart with the release name `my-release`:
```bash
$ helm install --name my-release stable/elasticsearch
```
## Deleting the Chart
Delete the Helm deployment as normal
```bash
$ helm delete my-release
```
Deletion of the StatefulSet doesn't cascade to deleting associated PVCs. To delete them:
```bash
$ kubectl delete pvc -l release=my-release,component=data
```
## Configuration
The following table lists the configurable parameters of the elasticsearch chart and their default values.
| Parameter | Description | Default |
| ------------------------------------ | ------------------------------------------------------------------- | --------------------------------------------------- |
| `appVersion` | Application Version (Elasticsearch) | `6.8.2` |
| `image.repository` | Container image name | `docker.elastic.co/elasticsearch/elasticsearch-oss` |
| `image.tag` | Container image tag | `6.8.2` |
| `image.pullPolicy` | Container pull policy | `IfNotPresent` |
| `initImage.repository` | Init container image name | `busybox` |
| `initImage.tag` | Init container image tag | `latest` |
| `initImage.pullPolicy` | Init container pull policy | `Always` |
| `schedulerName` | Name of the k8s scheduler (other than default) | `nil` |
| `cluster.name` | Cluster name | `elasticsearch` |
| `cluster.xpackEnable` | Writes the X-Pack configuration options to the configuration file | `false` |
| `cluster.config` | Additional cluster config appended | `{}` |
| `cluster.keystoreSecret` | Name of secret holding secure config options in an es keystore | `nil` |
| `cluster.env` | Cluster environment variables | `{MINIMUM_MASTER_NODES: "2"}` |
| `cluster.bootstrapShellCommand` | Post-init command to run in separate Job | `""` |
| `cluster.additionalJavaOpts` | Cluster parameters to be added to `ES_JAVA_OPTS` environment variable | `""` |
| `cluster.plugins` | List of Elasticsearch plugins to install | `[]` |
| `cluster.loggingYml` | Cluster logging configuration for ES v2 | see `values.yaml` for defaults |
| `cluster.log4j2Properties` | Cluster logging configuration for ES v5 and 6 | see `values.yaml` for defaults |
| `client.name` | Client component name | `client` |
| `client.replicas` | Client node replicas (deployment) | `2` |
| `client.resources` | Client node resources requests & limits | `{} - cpu limit must be an integer` |
| `client.priorityClassName` | Client priorityClass | `nil` |
| `client.heapSize` | Client node heap size | `512m` |
| `client.podAnnotations` | Client Deployment annotations | `{}` |
| `client.nodeSelector` | Node labels for client pod assignment | `{}` |
| `client.tolerations` | Client tolerations | `[]` |
| `client.terminationGracePeriodSeconds` | Client nodes: Termination grace period (seconds) | `nil` |
| `client.serviceAnnotations` | Client Service annotations | `{}` |
| `client.serviceType` | Client service type | `ClusterIP` |
| `client.httpNodePort` | Client service HTTP NodePort port number. Has no effect if client.serviceType is not `NodePort`. | `nil` |
| `client.loadBalancerIP` | Client loadBalancerIP | `{}` |
| `client.loadBalancerSourceRanges` | Client loadBalancerSourceRanges | `{}` |
| `client.antiAffinity` | Client anti-affinity policy | `soft` |
| `client.nodeAffinity` | Client node affinity policy | `{}` |
| `client.initResources` | Client initContainer resources requests & limits | `{}` |
| `client.hooks.preStop` | Client nodes: Lifecycle hook script to execute before the pod stops | `nil` |
| `client.hooks.postStart` | Client nodes: Lifecycle hook script to execute after the pod starts | `nil` |
| `client.additionalJavaOpts` | Parameters to be added to `ES_JAVA_OPTS` environment variable for client | `""` |
| `client.ingress.enabled` | Enable Client Ingress | `false` |
| `client.ingress.user` | If this & password are set, enable basic-auth on ingress | `nil` |
| `client.ingress.password` | If this & user are set, enable basic-auth on ingress | `nil` |
| `client.ingress.annotations` | Client Ingress annotations | `{}` |
| `client.ingress.hosts` | Client Ingress Hostnames | `[]` |
| `client.ingress.tls` | Client Ingress TLS configuration | `[]` |
| `client.exposeTransportPort` | Expose transport port 9300 on client service (ClusterIP) | `false` |
| `master.initResources` | Master initContainer resources requests & limits | `{}` |
| `master.additionalJavaOpts` | Parameters to be added to `ES_JAVA_OPTS` environment variable for master | `""` |
| `master.exposeHttp` | Expose http port 9200 on master Pods for monitoring, etc | `false` |
| `master.name` | Master component name | `master` |
| `master.replicas` | Master node replicas (statefulset) | `2` |
| `master.resources` | Master node resources requests & limits | `{} - cpu limit must be an integer` |
| `master.priorityClassName` | Master priorityClass | `nil` |
| `master.podAnnotations` | Master Deployment annotations | `{}` |
| `master.nodeSelector` | Node labels for master pod assignment | `{}` |
| `master.tolerations` | Master tolerations | `[]` |
| `master.terminationGracePeriodSeconds` | Master nodes: Termination grace period (seconds) | `nil` |
| `master.heapSize` | Master node heap size | `512m` |
| `master.persistence.enabled` | Master persistence enabled/disabled | `true` |
| `master.persistence.name` | Master statefulset PVC template name | `data` |
| `master.persistence.size` | Master persistent volume size | `4Gi` |
| `master.persistence.storageClass` | Master persistent volume class | `nil` |
| `master.persistence.accessMode` | Master persistent volume access mode | `ReadWriteOnce` |
| `master.readinessProbe` | Master container readiness probes | see `values.yaml` for defaults |
| `master.antiAffinity` | Master anti-affinity policy | `soft` |
| `master.nodeAffinity` | Master node affinity policy | `{}` |
| `master.podManagementPolicy` | Master pod creation strategy | `OrderedReady` |
| `master.updateStrategy` | Master node update strategy policy | `{type: "OnDelete"}` |
| `master.hooks.preStop` | Master nodes: Lifecycle hook script to execute before the pod stops | `nil` |
| `master.hooks.postStart` | Master nodes: Lifecycle hook script to execute after the pod starts | `nil` |
| `data.initResources` | Data initContainer resources requests & limits | `{}` |
| `data.additionalJavaOpts` | Parameters to be added to `ES_JAVA_OPTS` environment variable for data | `""` |
| `data.exposeHttp` | Expose http port 9200 on data Pods for monitoring, etc | `false` |
| `data.replicas` | Data node replicas (statefulset) | `2` |
| `data.resources` | Data node resources requests & limits | `{} - cpu limit must be an integer` |
| `data.priorityClassName` | Data priorityClass | `nil` |
| `data.heapSize` | Data node heap size | `1536m` |
| `data.hooks.drain.enabled` | Data nodes: Enable drain pre-stop and post-start hook | `true` |
| `data.hooks.preStop` | Data nodes: Lifecycle hook script to execute before the pod stops. Ignored if `data.hooks.drain.enabled` is `true` | `nil` |
| `data.hooks.postStart` | Data nodes: Lifecycle hook script to execute after the pod starts. Ignored if `data.hooks.drain.enabled` is `true` | `nil` |
| `data.persistence.enabled` | Data persistence enabled/disabled | `true` |
| `data.persistence.name` | Data statefulset PVC template name | `data` |
| `data.persistence.size` | Data persistent volume size | `30Gi` |
| `data.persistence.storageClass` | Data persistent volume class | `nil` |
| `data.persistence.accessMode` | Data persistent volume access mode | `ReadWriteOnce` |
| `data.readinessProbe` | Readiness probes for data-containers | see `values.yaml` for defaults |
| `data.podAnnotations` | Data StatefulSet annotations | `{}` |
| `data.nodeSelector` | Node labels for data pod assignment | `{}` |
| `data.tolerations` | Data tolerations | `[]` |
| `data.terminationGracePeriodSeconds` | Data termination grace period (seconds) | `3600` |
| `data.antiAffinity` | Data anti-affinity policy | `soft` |
| `data.nodeAffinity` | Data node affinity policy | `{}` |
| `data.podManagementPolicy` | Data pod creation strategy | `OrderedReady` |
| `data.updateStrategy` | Data node update strategy policy | `{type: "OnDelete"}` |
| `sysctlInitContainer.enabled` | If true, the sysctl init container is enabled (does not stop chownInitContainer or extraInitContainers from running) | `true` |
| `chownInitContainer.enabled` | If true, the chown init container is enabled (does not stop sysctlInitContainer or extraInitContainers from running) | `true` |
| `extraInitContainers` | Additional init containers passed through the `tpl` function | `""` |
| `podSecurityPolicy.annotations` | Specify pod annotations in the pod security policy | `{}` |
| `podSecurityPolicy.enabled` | Specify if a pod security policy must be created | `false` |
| `securityContext.enabled` | If true, add securityContext to client, master and data pods | `false` |
| `securityContext.runAsUser` | User ID to run the containerized process | `1000` |
| `serviceAccounts.client.create` | If true, create the client service account | `true` |
| `serviceAccounts.client.name` | Name of the client service account to use or create | `{{ elasticsearch.client.fullname }}` |
| `serviceAccounts.master.create` | If true, create the master service account | `true` |
| `serviceAccounts.master.name` | Name of the master service account to use or create | `{{ elasticsearch.master.fullname }}` |
| `serviceAccounts.data.create` | If true, create the data service account | `true` |
| `serviceAccounts.data.name` | Name of the data service account to use or create | `{{ elasticsearch.data.fullname }}` |
| `testFramework.image` | `test-framework` image repository. | `dduportal/bats` |
| `testFramework.tag` | `test-framework` image tag. | `0.4.0` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
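For example (a sketch only; the parameter names come from the table above):
```bash
$ helm install --name my-release stable/elasticsearch \
    --set data.replicas=3,data.heapSize=2048m
```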
In terms of memory resources, you should make sure that you follow this inequality:
- `${role}HeapSize < ${role}MemoryRequests < ${role}MemoryLimits`
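For the data role, a minimal sketch of values that respect this inequality (the request and limit sizes are illustrative assumptions, not chart defaults):
```yaml
data:
  heapSize: "1536m"        # ${role}HeapSize
  resources:
    requests:
      memory: "2048Mi"     # ${role}MemoryRequests
    limits:
      cpu: "1"             # cpu limit must be an integer
      memory: "3072Mi"     # ${role}MemoryLimits
```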
The YAML value of `cluster.config` is appended to the `elasticsearch.yml` file for additional customization (for example, `script.inline: on` to allow inline scripting).
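A sketch of such an override (whether a given key, e.g. `script.inline`, applies depends on your Elasticsearch version):
```yaml
cluster:
  config:
    # appended verbatim to elasticsearch.yml
    script.inline: on
```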
# Deep dive
## Application Version
This chart aims to support Elasticsearch v2 to v6 deployments by specifying the `values.yaml` parameter `appVersion`.
### Version Specific Features
* Memory Locking *(variable renamed)*
* Ingest Node *(v5)*
* X-Pack Plugin *(v5)*
Upgrade paths & more info: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html
## Mlocking
This is a limitation in Kubernetes right now: there is no way to raise the
limits of lockable memory so that these memory areas won't be swapped out,
which would degrade performance heavily. The issue is tracked in
[kubernetes/#3595](https://github.com/kubernetes/kubernetes/issues/3595).
```
[WARN ][bootstrap] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[WARN ][bootstrap] This can result in part of the JVM being swapped out.
[WARN ][bootstrap] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
```
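The chart's ConfigMap template reads `BOOTSTRAP_MEMORY_LOCK` from the environment (defaulting to `false`), so if your nodes do permit memory locking you could opt in via `cluster.env`; a sketch, not a recommendation:
```yaml
cluster:
  env:
    BOOTSTRAP_MEMORY_LOCK: "true"
```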
## Minimum Master Nodes
> The minimum_master_nodes setting is extremely important to the stability of your cluster. This setting helps prevent split brains, the existence of two masters in a single cluster.
>When you have a split brain, your cluster is in danger of losing data. Because the master is considered the supreme ruler of the cluster, it decides when new indices can be created, how shards are moved, and so forth. If you have two masters, data integrity becomes perilous, since you have two nodes that think they are in charge.
>This setting tells Elasticsearch to not elect a master unless there are enough master-eligible nodes available. Only then will an election take place.
>This setting should always be configured to a quorum (majority) of your master-eligible nodes. A quorum is (number of master-eligible nodes / 2) + 1
More info: https://www.elastic.co/guide/en/elasticsearch/guide/1.x/_important_configuration_changes.html#_minimum_master_nodes
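As an illustrative sketch (the sizes are hypothetical), a cluster with five master-eligible nodes would set the quorum to (5 / 2) + 1 = 3:
```yaml
master:
  replicas: 5
cluster:
  env:
    MINIMUM_MASTER_NODES: "3"   # quorum of 5 master-eligible nodes
```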
# Client and Coordinating Nodes
Elasticsearch v5 terminology has been updated: what was previously a `Client Node` is now referred to as a `Coordinating Node`.
More info: https://www.elastic.co/guide/en/elasticsearch/reference/5.5/modules-node.html#coordinating-node
## Enabling Elasticsearch internal monitoring
Requires version 6.3+ and the standard (non-`oss`) image repository. Starting with 6.3, X-Pack is partially free and enabled by default, but you need to set a new config option to enable the collection of these internal metrics. (https://www.elastic.co/guide/en/elasticsearch/reference/6.3/monitoring-settings.html)
To do this through this Helm chart, override values with the three following changes:
```
image.repository: docker.elastic.co/elasticsearch/elasticsearch
cluster.xpackEnable: true
cluster.env.XPACK_MONITORING_ENABLED: true
```
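A sketch of applying those same overrides from the command line (the release name `my-release` is assumed):
```bash
$ helm upgrade my-release stable/elasticsearch \
    --set image.repository=docker.elastic.co/elasticsearch/elasticsearch \
    --set cluster.xpackEnable=true \
    --set cluster.env.XPACK_MONITORING_ENABLED=true
```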
Note: to see these changes, you will also need to update your Kibana chart to `image.repository: docker.elastic.co/kibana/kibana` instead of the `oss` version.
## Select the right storage class for SSD volumes
### GCE + Kubernetes 1.6+
Create a StorageClass for SSD persistent disks:
```bash
$ kubectl create -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
EOF
```
Create a cluster with storage class `ssd` on Kubernetes 1.6+:
```bash
$ helm install stable/elasticsearch --name my-release --set data.persistence.storageClass=ssd,data.persistence.size=100Gi
```
### Usage of the `tpl` Function
The `tpl` function allows us to pass string values from `values.yaml` through the templating engine. It is used for the following values:
* `extraInitContainers`
It is important that these values be configured as strings. Otherwise, installation will fail.
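For example, a minimal sketch (the busybox step is hypothetical); note the block scalar `|`, which keeps the value a string so it can be passed through `tpl`:
```yaml
extraInitContainers: |
  - name: "wait-a-bit"
    image: "busybox:latest"
    command: ["sh", "-c", "sleep 5"]
```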
---
# Expose transport port on ClusterIP service
client:
exposeTransportPort: true
extraInitContainers: |
- name: "plugin-install-ingest-attachment"
image: "docker.elastic.co/elasticsearch/elasticsearch-oss:6.6.1"
command: ["/bin/bash"]
args: ["-c", "yes | /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-attachment"]
- name: "plugin-install-mapper-size"
image: "docker.elastic.co/elasticsearch/elasticsearch-oss:6.6.1"
command: ["/bin/bash"]
args: ["-c", "yes | /usr/share/elasticsearch/bin/elasticsearch-plugin install mapper-size"]
---
# Enable custom lifecycle hooks for client, data and master pods
client:
hooks:
preStop: |-
#!/bin/bash
echo "Node {{ template "elasticsearch.client.fullname" . }} is shutting down"
postStart: |-
#!/bin/bash
echo "Node {{ template "elasticsearch.client.fullname" . }} is ready to be used"
data:
hooks:
drain:
enabled: false
preStop: |-
#!/bin/bash
echo "Node {{ template "elasticsearch.data.fullname" . }} is shutting down"
postStart: |-
#!/bin/bash
echo "Node {{ template "elasticsearch.data.fullname" . }} is ready to be used"
master:
hooks:
preStop: |-
#!/bin/bash
echo "Node {{ template "elasticsearch.master.fullname" . }} is shutting down"
postStart: |-
#!/bin/bash
echo "Node {{ template "elasticsearch.master.fullname" . }} is ready to be used"
---
# Deploy Chart as non-root and unprivileged
chownInitContainer:
enabled: false
securityContext:
enabled: true
runAsUser: 1000
sysctlInitContainer:
enabled: false
---
# Enable init container for installing plugins
cluster:
plugins:
- ingest-attachment
- mapper-size
data:
updateStrategy:
type: RollingUpdate
master:
updateStrategy:
type: RollingUpdate
The elasticsearch cluster has been installed.
Elasticsearch can be accessed:
* Within your cluster, at the following DNS name at port 9200:
{{ template "elasticsearch.client.fullname" . }}.{{ .Release.Namespace }}.svc
* From outside the cluster, run these commands in the same shell:
{{- if contains "NodePort" .Values.client.serviceType }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "elasticsearch.client.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.client.serviceType }}
WARNING: You have likely exposed your Elasticsearch cluster directly to the internet.
Elasticsearch does not implement any security for public-facing clusters by default.
As a minimum level of security, switch to ClusterIP/NodePort and place an Nginx gateway in front of the cluster in order to lock down access to dangerous HTTP endpoints and verbs.
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch its status by running 'kubectl get svc -w {{ template "elasticsearch.client.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "elasticsearch.client.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:9200
{{- else if contains "ClusterIP" .Values.client.serviceType }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "elasticsearch.name" . }},component={{ .Values.client.name }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:9200 to use Elasticsearch"
kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME 9200:9200
{{- end }}
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "elasticsearch.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create a default fully qualified client name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.client.fullname" -}}
{{ template "elasticsearch.fullname" . }}-{{ .Values.client.name }}
{{- end -}}
{{/*
Create a default fully qualified data name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.data.fullname" -}}
{{ template "elasticsearch.fullname" . }}-{{ .Values.data.name }}
{{- end -}}
{{/*
Create a default fully qualified master name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.master.fullname" -}}
{{ template "elasticsearch.fullname" . }}-{{ .Values.master.name }}
{{- end -}}
{{/*
Create the name of the service account to use for the client component
*/}}
{{- define "elasticsearch.serviceAccountName.client" -}}
{{- if .Values.serviceAccounts.client.create -}}
{{ default (include "elasticsearch.client.fullname" .) .Values.serviceAccounts.client.name }}
{{- else -}}
{{ default "default" .Values.serviceAccounts.client.name }}
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account to use for the data component
*/}}
{{- define "elasticsearch.serviceAccountName.data" -}}
{{- if .Values.serviceAccounts.data.create -}}
{{ default (include "elasticsearch.data.fullname" .) .Values.serviceAccounts.data.name }}
{{- else -}}
{{ default "default" .Values.serviceAccounts.data.name }}
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account to use for the master component
*/}}
{{- define "elasticsearch.serviceAccountName.master" -}}
{{- if .Values.serviceAccounts.master.create -}}
{{ default (include "elasticsearch.master.fullname" .) .Values.serviceAccounts.master.name }}
{{- else -}}
{{ default "default" .Values.serviceAccounts.master.name }}
{{- end -}}
{{- end -}}
{{/*
plugin installer template
*/}}
{{- define "plugin-installer" -}}
- name: es-plugin-install
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
securityContext:
capabilities:
add:
- IPC_LOCK
- SYS_RESOURCE
command:
- "sh"
- "-c"
- |
{{- range .Values.cluster.plugins }}
/usr/share/elasticsearch/bin/elasticsearch-plugin install -b {{ . }}
{{- end }}
volumeMounts:
- mountPath: /usr/share/elasticsearch/plugins/
name: plugindir
- mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
name: config
subPath: elasticsearch.yml
{{- end -}}
{{- if and ( .Values.client.ingress.user ) ( .Values.client.ingress.password ) }}
---
apiVersion: v1
kind: Secret
metadata:
name: '{{ include "elasticsearch.client.fullname" . }}-auth'
type: Opaque
data:
auth: {{ printf "%s:{PLAIN}%s\n" .Values.client.ingress.user .Values.client.ingress.password | b64enc | quote }}
{{- end }}
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
component: "{{ .Values.client.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "elasticsearch.client.fullname" . }}
spec:
selector:
matchLabels:
app: {{ template "elasticsearch.name" . }}
component: "{{ .Values.client.name }}"
release: {{ .Release.Name }}
replicas: {{ .Values.client.replicas }}
template:
metadata:
labels:
app: {{ template "elasticsearch.name" . }}
component: "{{ .Values.client.name }}"
release: {{ .Release.Name }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
{{- if .Values.client.podAnnotations }}
{{ toYaml .Values.client.podAnnotations | indent 8 }}
{{- end }}
spec:
serviceAccountName: {{ template "elasticsearch.serviceAccountName.client" . }}
{{- if .Values.client.priorityClassName }}
priorityClassName: "{{ .Values.client.priorityClassName }}"
{{- end }}
securityContext:
fsGroup: 1000
{{- if or .Values.client.antiAffinity .Values.client.nodeAffinity }}
affinity:
{{- end }}
{{- if eq .Values.client.antiAffinity "hard" }}
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: "kubernetes.io/hostname"
labelSelector:
matchLabels:
app: "{{ template "elasticsearch.name" . }}"
release: "{{ .Release.Name }}"
component: "{{ .Values.client.name }}"
{{- else if eq .Values.client.antiAffinity "soft" }}
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: kubernetes.io/hostname
labelSelector:
matchLabels:
app: "{{ template "elasticsearch.name" . }}"
release: "{{ .Release.Name }}"
component: "{{ .Values.client.name }}"
{{- end }}
{{- with .Values.client.nodeAffinity }}
nodeAffinity:
{{ toYaml . | indent 10 }}
{{- end }}
{{- if .Values.client.nodeSelector }}
nodeSelector:
{{ toYaml .Values.client.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.client.tolerations }}
tolerations:
{{ toYaml .Values.client.tolerations | indent 8 }}
{{- end }}
{{- if .Values.client.terminationGracePeriodSeconds }}
terminationGracePeriodSeconds: {{ .Values.client.terminationGracePeriodSeconds }}
{{- end }}
{{- if or .Values.extraInitContainers .Values.sysctlInitContainer.enabled .Values.cluster.plugins }}
initContainers:
{{- if .Values.sysctlInitContainer.enabled }}
# see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
# and https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html#mlockall
- name: "sysctl"
image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
imagePullPolicy: {{ .Values.initImage.pullPolicy | quote }}
resources:
{{ toYaml .Values.client.initResources | indent 12 }}
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
{{- end }}
{{- if .Values.extraInitContainers }}
{{ tpl .Values.extraInitContainers . | indent 6 }}
{{- end }}
{{- if .Values.cluster.plugins }}
{{ include "plugin-installer" . | indent 6 }}
{{- end }}
{{- end }}
containers:
- name: elasticsearch
env:
- name: NODE_DATA
value: "false"
{{- if hasPrefix "5." .Values.appVersion }}
- name: NODE_INGEST
value: "false"
{{- end }}
- name: NODE_MASTER
value: "false"
- name: DISCOVERY_SERVICE
value: {{ template "elasticsearch.fullname" . }}-discovery
- name: PROCESSORS
valueFrom:
resourceFieldRef:
resource: limits.cpu
- name: ES_JAVA_OPTS
value: "-Djava.net.preferIPv4Stack=true -Xms{{ .Values.client.heapSize }} -Xmx{{ .Values.client.heapSize }} {{ .Values.cluster.additionalJavaOpts }} {{ .Values.client.additionalJavaOpts }}"
{{- range $key, $value := .Values.cluster.env }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
resources:
{{ toYaml .Values.client.resources | indent 12 }}
readinessProbe:
httpGet:
path: /_cluster/health
port: 9200
initialDelaySeconds: 5
livenessProbe:
httpGet:
path: /_cluster/health?local=true
port: 9200
initialDelaySeconds: 90
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
{{- if .Values.securityContext.enabled }}
securityContext:
runAsUser: {{ .Values.securityContext.runAsUser }}
{{- end }}
ports:
- containerPort: 9200
name: http
- containerPort: 9300
name: transport
volumeMounts:
- mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
name: config
subPath: elasticsearch.yml
{{- if .Values.cluster.plugins }}
- mountPath: /usr/share/elasticsearch/plugins/
name: plugindir
{{- end }}
{{- if hasPrefix "2." .Values.appVersion }}
- mountPath: /usr/share/elasticsearch/config/logging.yml
name: config
subPath: logging.yml
{{- end }}
{{- if hasPrefix "5." .Values.appVersion }}
- mountPath: /usr/share/elasticsearch/config/log4j2.properties
name: config
subPath: log4j2.properties
{{- end }}
{{- if .Values.cluster.keystoreSecret }}
- name: keystore
mountPath: "/usr/share/elasticsearch/config/elasticsearch.keystore"
subPath: elasticsearch.keystore
readOnly: true
{{- end }}
{{- if .Values.client.hooks.preStop }}
- name: config
mountPath: /client-pre-stop-hook.sh
subPath: client-pre-stop-hook.sh
{{- end }}
{{- if .Values.client.hooks.postStart }}
- name: config
mountPath: /client-post-start-hook.sh
subPath: client-post-start-hook.sh
{{- end }}
{{- if or .Values.client.hooks.preStop .Values.client.hooks.postStart }}
lifecycle:
{{- if .Values.client.hooks.preStop }}
preStop:
exec:
command: ["/bin/bash","/client-pre-stop-hook.sh"]
{{- end }}
{{- if .Values.client.hooks.postStart }}
postStart:
exec:
command: ["/bin/bash","/client-post-start-hook.sh"]
{{- end }}
{{- end }}
{{- if .Values.image.pullSecrets }}
imagePullSecrets:
{{- range $pullSecret := .Values.image.pullSecrets }}
- name: {{ $pullSecret }}
{{- end }}
{{- end }}
volumes:
- name: config
configMap:
name: {{ template "elasticsearch.fullname" . }}
{{- if .Values.cluster.plugins }}
- name: plugindir
emptyDir: {}
{{- end }}
{{- if .Values.cluster.keystoreSecret }}
- name: keystore
secret:
secretName: {{ .Values.cluster.keystoreSecret }}
{{- end }}
{{- if .Values.client.ingress.enabled -}}
{{- $fullName := include "elasticsearch.client.fullname" . -}}
{{- $ingressPath := .Values.client.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
component: "{{ .Values.client.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
{{- with .Values.client.ingress.annotations }}
{{ toYaml . | indent 4 }}
{{- end }}
{{- if and ( .Values.client.ingress.user ) ( .Values.client.ingress.password ) }}
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: '{{ include "elasticsearch.client.fullname" . }}-auth'
nginx.ingress.kubernetes.io/auth-realm: "Authentication-Required"
{{- end }}
spec:
{{- if .Values.client.ingress.tls }}
tls:
{{- range .Values.client.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.client.ingress.hosts }}
- host: {{ . | quote }}
http:
paths:
- path: {{ $ingressPath }}
backend:
serviceName: {{ $fullName }}
servicePort: http
{{- end }}
{{- end }}
{{- if .Values.client.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
component: "{{ .Values.client.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "elasticsearch.client.fullname" . }}
spec:
{{- if .Values.client.podDisruptionBudget.minAvailable }}
minAvailable: {{ .Values.client.podDisruptionBudget.minAvailable }}
{{- end }}
{{- if .Values.client.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.client.podDisruptionBudget.maxUnavailable }}
{{- end }}
selector:
matchLabels:
app: {{ template "elasticsearch.name" . }}
component: "{{ .Values.client.name }}"
release: {{ .Release.Name }}
{{- end }}
{{- if .Values.serviceAccounts.client.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
component: "{{ .Values.client.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "elasticsearch.client.fullname" . }}
{{- end }}
apiVersion: v1
kind: Service
metadata:
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
component: "{{ .Values.client.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "elasticsearch.client.fullname" . }}
{{- if .Values.client.serviceAnnotations }}
annotations:
{{ toYaml .Values.client.serviceAnnotations | indent 4 }}
{{- end }}
spec:
ports:
- name: http
port: 9200
{{- if and .Values.client.httpNodePort (eq .Values.client.serviceType "NodePort") }}
nodePort: {{ .Values.client.httpNodePort }}
{{- end }}
targetPort: http
{{- if .Values.client.exposeTransportPort }}
- name: transport
port: 9300
targetPort: transport
{{- end }}
selector:
app: {{ template "elasticsearch.name" . }}
component: "{{ .Values.client.name }}"
release: {{ .Release.Name }}
type: {{ .Values.client.serviceType }}
{{- if .Values.client.loadBalancerIP }}
loadBalancerIP: "{{ .Values.client.loadBalancerIP }}"
{{- end }}
{{if .Values.client.loadBalancerSourceRanges}}
loadBalancerSourceRanges:
{{range $rangeList := .Values.client.loadBalancerSourceRanges}}
- {{ $rangeList }}
{{end}}
{{end}}
{{ $minorAppVersion := regexFind "[0-9]*.[0-9]*" .Values.appVersion | float64 -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "elasticsearch.fullname" . }}
labels:
app: {{ template "elasticsearch.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
data:
elasticsearch.yml: |-
cluster.name: {{ .Values.cluster.name }}
node.data: ${NODE_DATA:true}
node.master: ${NODE_MASTER:true}
{{- if hasPrefix "5." .Values.appVersion }}
node.ingest: ${NODE_INGEST:true}
{{- else if hasPrefix "6." .Values.appVersion }}
node.ingest: ${NODE_INGEST:true}
{{- end }}
node.name: ${HOSTNAME}
network.host: 0.0.0.0
{{- if hasPrefix "2." .Values.appVersion }}
# see https://github.com/kubernetes/kubernetes/issues/3595
bootstrap.mlockall: ${BOOTSTRAP_MLOCKALL:false}
discovery:
zen:
ping.unicast.hosts: ${DISCOVERY_SERVICE:}
minimum_master_nodes: ${MINIMUM_MASTER_NODES:2}
{{- else if hasPrefix "5." .Values.appVersion }}
# see https://github.com/kubernetes/kubernetes/issues/3595
bootstrap.memory_lock: ${BOOTSTRAP_MEMORY_LOCK:false}
discovery:
zen:
ping.unicast.hosts: ${DISCOVERY_SERVICE:}
minimum_master_nodes: ${MINIMUM_MASTER_NODES:2}
{{- if .Values.cluster.xpackEnable }}
# see https://www.elastic.co/guide/en/x-pack/current/xpack-settings.html
{{- if or ( gt $minorAppVersion 5.4 ) ( eq $minorAppVersion 5.4 ) }}
xpack.ml.enabled: ${XPACK_ML_ENABLED:false}
{{- end }}
xpack.monitoring.enabled: ${XPACK_MONITORING_ENABLED:false}
xpack.security.enabled: ${XPACK_SECURITY_ENABLED:false}
xpack.watcher.enabled: ${XPACK_WATCHER_ENABLED:false}
{{- else }}
{{- if or ( gt $minorAppVersion 5.4 ) ( eq $minorAppVersion 5.4 ) }}
xpack.ml.enabled: false
{{- end }}
xpack.monitoring.enabled: false
xpack.security.enabled: false
xpack.watcher.enabled: false
{{- end }}
{{- else if hasPrefix "6." .Values.appVersion }}
# see https://github.com/kubernetes/kubernetes/issues/3595
bootstrap.memory_lock: ${BOOTSTRAP_MEMORY_LOCK:false}
discovery:
zen:
ping.unicast.hosts: ${DISCOVERY_SERVICE:}
minimum_master_nodes: ${MINIMUM_MASTER_NODES:2}
{{- if and ( .Values.cluster.xpackEnable ) ( gt $minorAppVersion 6.3 ) }}
# see https://www.elastic.co/guide/en/x-pack/current/xpack-settings.html
# After 6.3 the X-Pack systems changed: they are enabled by default and managed by different config options; this setting enables monitoring collection
xpack.monitoring.collection.enabled: ${XPACK_MONITORING_ENABLED:false}
{{- else if .Values.cluster.xpackEnable }}
# see https://www.elastic.co/guide/en/x-pack/current/xpack-settings.html
xpack.ml.enabled: ${XPACK_ML_ENABLED:false}
xpack.monitoring.enabled: ${XPACK_MONITORING_ENABLED:false}
xpack.security.enabled: ${XPACK_SECURITY_ENABLED:false}
xpack.watcher.enabled: ${XPACK_WATCHER_ENABLED:false}
{{- end }}
{{- end }}
# see https://github.com/elastic/elasticsearch-definitive-guide/pull/679
processors: ${PROCESSORS:}
# avoid split-brain w/ a minimum consensus of two masters plus a data node
gateway.expected_master_nodes: ${EXPECTED_MASTER_NODES:2}
gateway.expected_data_nodes: ${EXPECTED_DATA_NODES:1}
gateway.recover_after_time: ${RECOVER_AFTER_TIME:5m}
gateway.recover_after_master_nodes: ${RECOVER_AFTER_MASTER_NODES:2}
gateway.recover_after_data_nodes: ${RECOVER_AFTER_DATA_NODES:1}
{{- with .Values.cluster.config }}
{{ toYaml . | indent 4 }}
{{- end }}
{{- if hasPrefix "2." .Values.appVersion }}
logging.yml: |-
{{ toYaml .Values.cluster.loggingYml | indent 4 }}
{{- else }}
log4j2.properties: |-
{{ tpl .Values.cluster.log4j2Properties . | indent 4 }}
{{- end }}
{{- if .Values.data.hooks.drain.enabled }}
data-pre-stop-hook.sh: |-
#!/bin/bash
exec &> >(tee -a "/var/log/elasticsearch-hooks.log")
NODE_NAME=${HOSTNAME}
echo "Prepare to migrate data of the node ${NODE_NAME}"
echo "Move all data from node ${NODE_NAME}"
curl -s -XPUT -H 'Content-Type: application/json' '{{ template "elasticsearch.client.fullname" . }}:9200/_cluster/settings' -d "{
\"transient\" :{
\"cluster.routing.allocation.exclude._name\" : \"${NODE_NAME}\"
}
}"
echo ""
while true ; do
echo -e "Wait for node ${NODE_NAME} to become empty"
SHARDS_ALLOCATION=$(curl -s -XGET 'http://{{ template "elasticsearch.client.fullname" . }}:9200/_cat/shards')
if ! echo "${SHARDS_ALLOCATION}" | grep -E "${NODE_NAME}"; then
break
fi
sleep 1
done
echo "Node ${NODE_NAME} is ready to shutdown"
data-post-start-hook.sh: |-
#!/bin/bash
exec &> >(tee -a "/var/log/elasticsearch-hooks.log")
NODE_NAME=${HOSTNAME}
CLUSTER_SETTINGS=$(curl -s -XGET "http://{{ template "elasticsearch.client.fullname" . }}:9200/_cluster/settings")
if echo "${CLUSTER_SETTINGS}" | grep -E "${NODE_NAME}"; then
echo "Activate node ${NODE_NAME}"
curl -s -XPUT -H 'Content-Type: application/json' "http://{{ template "elasticsearch.client.fullname" . }}:9200/_cluster/settings" -d "{
\"transient\" :{
\"cluster.routing.allocation.exclude._name\" : null
}
}"
fi
echo "Node ${NODE_NAME} is ready to be used"
{{- else }}
{{- if .Values.data.hooks.preStop }}
data-pre-stop-hook.sh: |-
{{ tpl .Values.data.hooks.preStop . | indent 4 }}
{{- end }}
{{- if .Values.data.hooks.postStart }}
data-post-start-hook.sh: |-
{{ tpl .Values.data.hooks.postStart . | indent 4 }}
{{- end }}
{{- end }}
{{- if .Values.client.hooks.preStop }}
client-pre-stop-hook.sh: |-
{{ tpl .Values.client.hooks.preStop . | indent 4 }}
{{- end }}
{{- if .Values.client.hooks.postStart }}
client-post-start-hook.sh: |-
{{ tpl .Values.client.hooks.postStart . | indent 4 }}
{{- end }}
{{- if .Values.master.hooks.preStop }}
master-pre-stop-hook.sh: |-
{{ tpl .Values.master.hooks.preStop . | indent 4 }}
{{- end }}
{{- if .Values.master.hooks.postStart }}
master-post-start-hook.sh: |-
{{ tpl .Values.master.hooks.postStart . | indent 4 }}
{{- end }}
{{- if .Values.data.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
component: "{{ .Values.data.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "elasticsearch.data.fullname" . }}
spec:
{{- if .Values.data.podDisruptionBudget.minAvailable }}
minAvailable: {{ .Values.data.podDisruptionBudget.minAvailable }}
{{- end }}
{{- if .Values.data.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.data.podDisruptionBudget.maxUnavailable }}
{{- end }}
selector:
matchLabels:
app: {{ template "elasticsearch.name" . }}
component: "{{ .Values.data.name }}"
release: {{ .Release.Name }}
{{- end }}
{{- if .Values.serviceAccounts.data.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
component: "{{ .Values.data.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "elasticsearch.data.fullname" . }}
{{- end }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
component: "{{ .Values.data.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "elasticsearch.data.fullname" . }}
spec:
selector:
matchLabels:
app: {{ template "elasticsearch.name" . }}
component: "{{ .Values.data.name }}"
release: {{ .Release.Name }}
role: data
serviceName: {{ template "elasticsearch.data.fullname" . }}
replicas: {{ .Values.data.replicas }}
template:
metadata:
labels:
app: {{ template "elasticsearch.name" . }}
component: "{{ .Values.data.name }}"
release: {{ .Release.Name }}
role: data
{{- if or .Values.data.podAnnotations (eq .Values.data.updateStrategy.type "RollingUpdate") }}
annotations:
{{- if .Values.data.podAnnotations }}
{{ toYaml .Values.data.podAnnotations | indent 8 }}
{{- end }}
{{- if eq .Values.data.updateStrategy.type "RollingUpdate" }}
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
{{- end }}
{{- end }}
spec:
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
serviceAccountName: {{ template "elasticsearch.serviceAccountName.data" . }}
{{- if .Values.data.priorityClassName }}
priorityClassName: "{{ .Values.data.priorityClassName }}"
{{- end }}
securityContext:
fsGroup: 1000
{{- if or .Values.data.antiAffinity .Values.data.nodeAffinity }}
affinity:
{{- end }}
{{- if eq .Values.data.antiAffinity "hard" }}
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: "kubernetes.io/hostname"
labelSelector:
matchLabels:
app: "{{ template "elasticsearch.name" . }}"
release: "{{ .Release.Name }}"
component: "{{ .Values.data.name }}"
{{- else if eq .Values.data.antiAffinity "soft" }}
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: kubernetes.io/hostname
labelSelector:
matchLabels:
app: "{{ template "elasticsearch.name" . }}"
release: "{{ .Release.Name }}"
component: "{{ .Values.data.name }}"
{{- end }}
{{- with .Values.data.nodeAffinity }}
nodeAffinity:
{{ toYaml . | indent 10 }}
{{- end }}
{{- if .Values.data.nodeSelector }}
nodeSelector:
{{ toYaml .Values.data.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.data.tolerations }}
tolerations:
{{ toYaml .Values.data.tolerations | indent 8 }}
{{- end }}
{{- if or .Values.extraInitContainers .Values.sysctlInitContainer.enabled .Values.chownInitContainer.enabled .Values.cluster.plugins }}
initContainers:
{{- end }}
{{- if .Values.sysctlInitContainer.enabled }}
# see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
# and https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html#mlockall
- name: "sysctl"
image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
imagePullPolicy: {{ .Values.initImage.pullPolicy | quote }}
resources:
{{ toYaml .Values.data.initResources | indent 12 }}
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
{{- end }}
{{- if .Values.chownInitContainer.enabled }}
- name: "chown"
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
resources:
{{ toYaml .Values.data.initResources | indent 12 }}
command:
- /bin/bash
- -c
- >
set -e;
set -x;
chown elasticsearch:elasticsearch /usr/share/elasticsearch/data;
for datadir in $(find /usr/share/elasticsearch/data -mindepth 1 -maxdepth 1 -not -name ".snapshot"); do
chown -R elasticsearch:elasticsearch $datadir;
done;
chown elasticsearch:elasticsearch /usr/share/elasticsearch/logs;
for logfile in $(find /usr/share/elasticsearch/logs -mindepth 1 -maxdepth 1 -not -name ".snapshot"); do
chown -R elasticsearch:elasticsearch $logfile;
done
securityContext:
runAsUser: 0
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: data
{{- end }}
{{- if .Values.extraInitContainers }}
{{ tpl .Values.extraInitContainers . | indent 6 }}
{{- end }}
{{- if .Values.cluster.plugins }}
{{ include "plugin-installer" . | indent 6 }}
{{- end }}
containers:
- name: elasticsearch
env:
- name: DISCOVERY_SERVICE
value: {{ template "elasticsearch.fullname" . }}-discovery
- name: NODE_MASTER
value: "false"
- name: PROCESSORS
valueFrom:
resourceFieldRef:
resource: limits.cpu
- name: ES_JAVA_OPTS
value: "-Djava.net.preferIPv4Stack=true -Xms{{ .Values.data.heapSize }} -Xmx{{ .Values.data.heapSize }} {{ .Values.cluster.additionalJavaOpts }} {{ .Values.data.additionalJavaOpts }}"
{{- range $key, $value := .Values.cluster.env }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
{{- if .Values.securityContext.enabled }}
securityContext:
runAsUser: {{ .Values.securityContext.runAsUser }}
{{- end }}
ports:
- containerPort: 9300
name: transport
{{ if .Values.data.exposeHttp }}
- containerPort: 9200
name: http
{{ end }}
resources:
{{ toYaml .Values.data.resources | indent 12 }}
readinessProbe:
{{ toYaml .Values.data.readinessProbe | indent 10 }}
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: data
- mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
name: config
subPath: elasticsearch.yml
{{- if .Values.cluster.plugins }}
- mountPath: /usr/share/elasticsearch/plugins/
name: plugindir
{{- end }}
{{- if hasPrefix "2." .Values.appVersion }}
- mountPath: /usr/share/elasticsearch/config/logging.yml
name: config
subPath: logging.yml
{{- end }}
{{- if hasPrefix "5." .Values.appVersion }}
- mountPath: /usr/share/elasticsearch/config/log4j2.properties
name: config
subPath: log4j2.properties
{{- end }}
{{- if .Values.cluster.keystoreSecret }}
- name: keystore
mountPath: "/usr/share/elasticsearch/config/elasticsearch.keystore"
subPath: elasticsearch.keystore
readOnly: true
{{- end }}
{{- if or .Values.data.hooks.preStop .Values.data.hooks.drain.enabled }}
- name: config
mountPath: /data-pre-stop-hook.sh
subPath: data-pre-stop-hook.sh
{{- end }}
{{- if or .Values.data.hooks.postStart .Values.data.hooks.drain.enabled }}
- name: config
mountPath: /data-post-start-hook.sh
subPath: data-post-start-hook.sh
{{- end }}
{{- if or .Values.data.hooks.preStop .Values.data.hooks.postStart .Values.data.hooks.drain.enabled }}
lifecycle:
{{- if or .Values.data.hooks.preStop .Values.data.hooks.drain.enabled }}
preStop:
exec:
command: ["/bin/bash","/data-pre-stop-hook.sh"]
{{- end }}
{{- if or .Values.data.hooks.postStart .Values.data.hooks.drain.enabled }}
postStart:
exec:
command: ["/bin/bash","/data-post-start-hook.sh"]
{{- end }}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.data.terminationGracePeriodSeconds }}
{{- if .Values.image.pullSecrets }}
imagePullSecrets:
{{- range $pullSecret := .Values.image.pullSecrets }}
- name: {{ $pullSecret }}
{{- end }}
{{- end }}
volumes:
- name: config
configMap:
name: {{ template "elasticsearch.fullname" . }}
{{- if .Values.cluster.plugins }}
- name: plugindir
emptyDir: {}
{{- end }}
{{- if .Values.cluster.keystoreSecret }}
- name: keystore
secret:
secretName: {{ .Values.cluster.keystoreSecret }}
{{- end }}
{{- if not .Values.data.persistence.enabled }}
- name: data
emptyDir: {}
{{- end }}
podManagementPolicy: {{ .Values.data.podManagementPolicy }}
updateStrategy:
type: {{ .Values.data.updateStrategy.type }}
{{- if .Values.data.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: {{ .Values.data.persistence.name }}
spec:
accessModes:
- {{ .Values.data.persistence.accessMode | quote }}
{{- if .Values.data.persistence.storageClass }}
{{- if (eq "-" .Values.data.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.data.persistence.storageClass }}"
{{- end }}
{{- end }}
resources:
requests:
storage: "{{ .Values.data.persistence.size }}"
{{- end }}
{{- if .Values.cluster.bootstrapShellCommand }}
apiVersion: batch/v1
kind: Job
metadata:
name: {{ template "elasticsearch.fullname" . }}-bootstrap
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "10"
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
metadata:
name: {{ template "elasticsearch.fullname" . }}-bootstrap
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
containers:
- name: bootstrap-elasticsearch
image: byrnedo/alpine-curl
command:
- "sh"
- "-c"
- {{ .Values.cluster.bootstrapShellCommand | quote }}
restartPolicy: Never
backoffLimit: 20
{{- end }}
{{- if .Values.master.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
component: "{{ .Values.master.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "elasticsearch.master.fullname" . }}
spec:
{{- if .Values.master.podDisruptionBudget.minAvailable }}
minAvailable: {{ .Values.master.podDisruptionBudget.minAvailable }}
{{- end }}
{{- if .Values.master.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.master.podDisruptionBudget.maxUnavailable }}
{{- end }}
selector:
matchLabels:
app: {{ template "elasticsearch.name" . }}
component: "{{ .Values.master.name }}"
release: {{ .Release.Name }}
{{- end }}
{{- if .Values.serviceAccounts.master.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
component: "{{ .Values.master.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "elasticsearch.master.fullname" . }}
{{- end }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
component: "{{ .Values.master.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "elasticsearch.master.fullname" . }}
spec:
selector:
matchLabels:
app: {{ template "elasticsearch.name" . }}
component: "{{ .Values.master.name }}"
release: {{ .Release.Name }}
role: master
serviceName: {{ template "elasticsearch.master.fullname" . }}
replicas: {{ .Values.master.replicas }}
template:
metadata:
labels:
app: {{ template "elasticsearch.name" . }}
component: "{{ .Values.master.name }}"
release: {{ .Release.Name }}
role: master
{{- if or .Values.master.podAnnotations (eq .Values.master.updateStrategy.type "RollingUpdate") }}
annotations:
{{- if .Values.master.podAnnotations }}
{{ toYaml .Values.master.podAnnotations | indent 8 }}
{{- end }}
{{- if eq .Values.master.updateStrategy.type "RollingUpdate" }}
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
{{- end }}
{{- end }}
spec:
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
serviceAccountName: {{ template "elasticsearch.serviceAccountName.master" . }}
{{- if .Values.master.priorityClassName }}
priorityClassName: "{{ .Values.master.priorityClassName }}"
{{- end }}
securityContext:
fsGroup: 1000
{{- if or .Values.master.antiAffinity .Values.master.nodeAffinity }}
affinity:
{{- end }}
{{- if eq .Values.master.antiAffinity "hard" }}
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: "kubernetes.io/hostname"
labelSelector:
matchLabels:
app: "{{ template "elasticsearch.name" . }}"
release: "{{ .Release.Name }}"
component: "{{ .Values.master.name }}"
{{- else if eq .Values.master.antiAffinity "soft" }}
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: kubernetes.io/hostname
labelSelector:
matchLabels:
app: "{{ template "elasticsearch.name" . }}"
release: "{{ .Release.Name }}"
component: "{{ .Values.master.name }}"
{{- end }}
{{- with .Values.master.nodeAffinity }}
nodeAffinity:
{{ toYaml . | indent 10 }}
{{- end }}
{{- if .Values.master.nodeSelector }}
nodeSelector:
{{ toYaml .Values.master.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.master.tolerations }}
tolerations:
{{ toYaml .Values.master.tolerations | indent 8 }}
{{- end }}
{{- if .Values.master.terminationGracePeriodSeconds }}
terminationGracePeriodSeconds: {{ .Values.master.terminationGracePeriodSeconds }}
{{- end }}
{{- if or .Values.extraInitContainers .Values.sysctlInitContainer.enabled .Values.chownInitContainer.enabled .Values.cluster.plugins }}
initContainers:
{{- end }}
{{- if .Values.sysctlInitContainer.enabled }}
# see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
# and https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html#mlockall
- name: "sysctl"
image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
imagePullPolicy: {{ .Values.initImage.pullPolicy | quote }}
resources:
{{ toYaml .Values.master.initResources | indent 12 }}
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
{{- end }}
{{- if .Values.chownInitContainer.enabled }}
- name: "chown"
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
resources:
{{ toYaml .Values.master.initResources | indent 12 }}
command:
- /bin/bash
- -c
- >
set -e;
set -x;
chown elasticsearch:elasticsearch /usr/share/elasticsearch/data;
for datadir in $(find /usr/share/elasticsearch/data -mindepth 1 -maxdepth 1 -not -name ".snapshot"); do
chown -R elasticsearch:elasticsearch $datadir;
done;
chown elasticsearch:elasticsearch /usr/share/elasticsearch/logs;
for logfile in $(find /usr/share/elasticsearch/logs -mindepth 1 -maxdepth 1 -not -name ".snapshot"); do
chown -R elasticsearch:elasticsearch $logfile;
done
securityContext:
runAsUser: 0
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: data
{{- end }}
{{- if .Values.extraInitContainers }}
{{ tpl .Values.extraInitContainers . | indent 6 }}
{{- end }}
{{- if .Values.cluster.plugins }}
{{ include "plugin-installer" . | indent 6 }}
{{- end }}
containers:
- name: elasticsearch
env:
- name: NODE_DATA
value: "false"
{{- if hasPrefix "5." .Values.appVersion }}
- name: NODE_INGEST
value: "false"
{{- end }}
- name: DISCOVERY_SERVICE
value: {{ template "elasticsearch.fullname" . }}-discovery
- name: PROCESSORS
valueFrom:
resourceFieldRef:
resource: limits.cpu
- name: ES_JAVA_OPTS
value: "-Djava.net.preferIPv4Stack=true -Xms{{ .Values.master.heapSize }} -Xmx{{ .Values.master.heapSize }} {{ .Values.cluster.additionalJavaOpts }} {{ .Values.master.additionalJavaOpts }}"
{{- range $key, $value := .Values.cluster.env }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
resources:
{{ toYaml .Values.master.resources | indent 12 }}
readinessProbe:
{{ toYaml .Values.master.readinessProbe | indent 10 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
{{- if .Values.securityContext.enabled }}
securityContext:
runAsUser: {{ .Values.securityContext.runAsUser }}
{{- end }}
ports:
- containerPort: 9300
name: transport
{{ if .Values.master.exposeHttp }}
- containerPort: 9200
name: http
{{ end }}
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: data
- mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
name: config
subPath: elasticsearch.yml
{{- if .Values.cluster.plugins }}
- mountPath: /usr/share/elasticsearch/plugins/
name: plugindir
{{- end }}
{{- if hasPrefix "2." .Values.appVersion }}
- mountPath: /usr/share/elasticsearch/config/logging.yml
name: config
subPath: logging.yml
{{- end }}
{{- if hasPrefix "5." .Values.appVersion }}
- mountPath: /usr/share/elasticsearch/config/log4j2.properties
name: config
subPath: log4j2.properties
{{- end }}
{{- if .Values.cluster.keystoreSecret }}
- name: keystore
mountPath: "/usr/share/elasticsearch/config/elasticsearch.keystore"
subPath: elasticsearch.keystore
readOnly: true
{{- end }}
{{- if .Values.master.hooks.preStop }}
- name: config
mountPath: /master-pre-stop-hook.sh
subPath: master-pre-stop-hook.sh
{{- end }}
{{- if .Values.master.hooks.postStart }}
- name: config
mountPath: /master-post-start-hook.sh
subPath: master-post-start-hook.sh
{{- end }}
{{- if or .Values.master.hooks.preStop .Values.master.hooks.postStart }}
lifecycle:
{{- if .Values.master.hooks.preStop }}
preStop:
exec:
command: ["/bin/bash","/master-pre-stop-hook.sh"]
{{- end }}
{{- if .Values.master.hooks.postStart }}
postStart:
exec:
command: ["/bin/bash","/master-post-start-hook.sh"]
{{- end }}
{{- end }}
{{- if .Values.image.pullSecrets }}
imagePullSecrets:
{{- range $pullSecret := .Values.image.pullSecrets }}
- name: {{ $pullSecret }}
{{- end }}
{{- end }}
volumes:
- name: config
configMap:
name: {{ template "elasticsearch.fullname" . }}
{{- if .Values.cluster.plugins }}
- name: plugindir
emptyDir: {}
{{- end }}
{{- if .Values.cluster.keystoreSecret }}
- name: keystore
secret:
secretName: {{ .Values.cluster.keystoreSecret }}
{{- end }}
{{- if not .Values.master.persistence.enabled }}
- name: data
emptyDir: {}
{{- end }}
podManagementPolicy: {{ .Values.master.podManagementPolicy }}
updateStrategy:
type: {{ .Values.master.updateStrategy.type }}
{{- if .Values.master.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: {{ .Values.master.persistence.name }}
spec:
accessModes:
- {{ .Values.master.persistence.accessMode | quote }}
{{- if .Values.master.persistence.storageClass }}
{{- if (eq "-" .Values.master.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.master.persistence.storageClass }}"
{{- end }}
{{- end }}
resources:
requests:
storage: "{{ .Values.master.persistence.size }}"
{{ end }}
apiVersion: v1
kind: Service
metadata:
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
component: "{{ .Values.master.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "elasticsearch.fullname" . }}-discovery
spec:
clusterIP: None
ports:
- port: 9300
targetPort: transport
selector:
app: {{ template "elasticsearch.name" . }}
component: "{{ .Values.master.name }}"
release: {{ .Release.Name }}
{{- if .Values.podSecurityPolicy.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "elasticsearch.fullname" . }}
labels:
app: {{ template "elasticsearch.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
{{- if .Values.podSecurityPolicy.annotations }}
{{ toYaml .Values.podSecurityPolicy.annotations | indent 4 }}
{{- end }}
spec:
privileged: true
allowPrivilegeEscalation: true
volumes:
- 'configMap'
- 'secret'
- 'emptyDir'
- 'persistentVolumeClaim'
hostNetwork: false
hostPID: false
hostIPC: false
runAsUser:
rule: 'RunAsAny'
runAsGroup:
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1000
max: 1000
readOnlyRootFilesystem: false
hostPorts:
- min: 1
max: 65535
{{- end }}
{{- if .Values.podSecurityPolicy.enabled }}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: {{ template "elasticsearch.fullname" . }}
labels:
app: {{ template "elasticsearch.name" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- {{ template "elasticsearch.fullname" . }}
{{- end }}
{{- if .Values.podSecurityPolicy.enabled }}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: {{ template "elasticsearch.fullname" . }}
labels:
app: {{ template "elasticsearch.name" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
roleRef:
kind: Role
name: {{ template "elasticsearch.fullname" . }}
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: {{ template "elasticsearch.serviceAccountName.client" . }}
namespace: {{ .Release.Namespace }}
- kind: ServiceAccount
name: {{ template "elasticsearch.serviceAccountName.data" . }}
namespace: {{ .Release.Namespace }}
- kind: ServiceAccount
name: {{ template "elasticsearch.serviceAccountName.master" . }}
namespace: {{ .Release.Namespace }}
{{- end }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "elasticsearch.fullname" . }}-test
labels:
app: {{ template "elasticsearch.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: "{{ .Release.Service }}"
release: "{{ .Release.Name }}"
data:
run.sh: |-
@test "Test Access and Health" {
curl -D - http://{{ template "elasticsearch.client.fullname" . }}:9200
curl -D - http://{{ template "elasticsearch.client.fullname" . }}:9200/_cluster/health?wait_for_status=green
}
apiVersion: v1
kind: Pod
metadata:
name: {{ template "elasticsearch.fullname" . }}-test
labels:
app: {{ template "elasticsearch.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: "{{ .Release.Service }}"
release: "{{ .Release.Name }}"
annotations:
"helm.sh/hook": test-success
spec:
initContainers:
- name: test-framework
image: "{{ .Values.testFramework.image}}:{{ .Values.testFramework.tag }}"
command:
- "bash"
- "-c"
- |
set -ex
# copy bats to tools dir
cp -R /usr/local/libexec/ /tools/bats/
volumeMounts:
- mountPath: /tools
name: tools
containers:
- name: {{ .Release.Name }}-test
image: "{{ .Values.testFramework.image}}:{{ .Values.testFramework.tag }}"
command: ["/tools/bats/bats", "-t", "/tests/run.sh"]
volumeMounts:
- mountPath: /tests
name: tests
readOnly: true
- mountPath: /tools
name: tools
volumes:
- name: tests
configMap:
name: {{ template "elasticsearch.fullname" . }}-test
- name: tools
emptyDir: {}
restartPolicy: Never
# Default values for elasticsearch.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
appVersion: "6.8.2"
## Define serviceAccount names for components. Defaults to component's fully qualified name.
##
serviceAccounts:
client:
create: true
name:
master:
create: true
name:
data:
create: true
name:
## Specify whether a Pod Security Policy must be created
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
##
podSecurityPolicy:
enabled: false
annotations: {}
## Specify pod annotations
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
##
# seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
# seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
# apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
securityContext:
enabled: false
runAsUser: 1000
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName: "default-scheduler"
image:
repository: "docker.elastic.co/elasticsearch/elasticsearch-oss"
tag: "6.8.2"
pullPolicy: "IfNotPresent"
# If specified, use these secrets to access the image
# pullSecrets:
# - registry-secret
testFramework:
image: "dduportal/bats"
tag: "0.4.0"
initImage:
repository: "busybox"
tag: "latest"
pullPolicy: "Always"
cluster:
name: "elasticsearch"
# If you want X-Pack installed, switch to an image that includes it, enable this option and toggle the features you want
# enabled in the environment variables outlined in the README
xpackEnable: false
# Some settings must be placed in a keystore, so they need to be mounted in from a secret.
# Use this setting to specify the name of the secret
# keystoreSecret: eskeystore
config: {}
# Custom parameters, as string, to be added to ES_JAVA_OPTS environment variable
additionalJavaOpts: ""
# Command to run at the end of deployment
bootstrapShellCommand: ""
env:
# IMPORTANT: https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html#minimum_master_nodes
# To prevent data loss, it is vital to configure the discovery.zen.minimum_master_nodes setting so that each master-eligible
# node knows the minimum number of master-eligible nodes that must be visible in order to form a cluster.
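    # As a rule of thumb, set this to (master-eligible nodes / 2) + 1; with the default 3 master replicas that is (3 / 2) + 1 = 2.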
MINIMUM_MASTER_NODES: "2"
# List of plugins to install via dedicated init container
plugins: []
# - ingest-attachment
# - mapper-size
loggingYml:
  # you can override this by setting a system property, for example -Des.logger.level=DEBUG
es.logger.level: INFO
rootLogger: ${es.logger.level}, console
logger:
# log action execution errors for easier debugging
action: DEBUG
# reduce the logging for aws, too much is logged under the default INFO
com.amazonaws: WARN
appender:
console:
type: console
layout:
type: consolePattern
conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
log4j2Properties: |
status = error
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
logger.searchguard.name = com.floragunn
logger.searchguard.level = info
client:
name: client
replicas: 2
serviceType: ClusterIP
  ## If coupled with serviceType = "NodePort", this will set a specific nodePort for the client HTTP port
# httpNodePort: 30920
loadBalancerIP: {}
loadBalancerSourceRanges: {}
## (dict) If specified, apply these annotations to the client service
# serviceAnnotations:
# example: client-svc-foo
heapSize: "512m"
# additionalJavaOpts: "-XX:MaxRAM=512m"
antiAffinity: "soft"
nodeAffinity: {}
nodeSelector: {}
tolerations: []
# terminationGracePeriodSeconds: 60
initResources: {}
# limits:
# cpu: "25m"
# # memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
resources:
limits:
cpu: "1"
# memory: "1024Mi"
requests:
cpu: "25m"
memory: "512Mi"
priorityClassName: ""
## (dict) If specified, apply these annotations to each client Pod
# podAnnotations:
# example: client-foo
podDisruptionBudget:
enabled: false
minAvailable: 1
# maxUnavailable: 1
hooks: {}
  ## (string) Script to execute before the client pod stops.
# preStop: |-
## (string) Script to execute after the client pod starts.
# postStart: |-
ingress:
enabled: false
# user: NAME
# password: PASSWORD
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
path: /
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
master:
name: master
exposeHttp: false
replicas: 3
heapSize: "512m"
# additionalJavaOpts: "-XX:MaxRAM=512m"
persistence:
enabled: true
accessMode: ReadWriteOnce
name: data
size: "4Gi"
# storageClass: "ssd"
readinessProbe:
httpGet:
path: /_cluster/health?local=true
port: 9200
initialDelaySeconds: 5
antiAffinity: "soft"
nodeAffinity: {}
nodeSelector: {}
tolerations: []
# terminationGracePeriodSeconds: 60
initResources: {}
# limits:
# cpu: "25m"
# # memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
resources:
limits:
cpu: "1"
# memory: "1024Mi"
requests:
cpu: "25m"
memory: "512Mi"
priorityClassName: ""
## (dict) If specified, apply these annotations to each master Pod
# podAnnotations:
# example: master-foo
podManagementPolicy: OrderedReady
podDisruptionBudget:
enabled: false
minAvailable: 2 # Same as `cluster.env.MINIMUM_MASTER_NODES`
# maxUnavailable: 1
updateStrategy:
type: OnDelete
hooks: {}
  ## (string) Script to execute before the master pod stops.
# preStop: |-
## (string) Script to execute after the master pod starts.
# postStart: |-
data:
name: data
exposeHttp: false
replicas: 2
heapSize: "1536m"
# additionalJavaOpts: "-XX:MaxRAM=1536m"
persistence:
enabled: true
accessMode: ReadWriteOnce
name: data
size: "30Gi"
# storageClass: "ssd"
readinessProbe:
httpGet:
path: /_cluster/health?local=true
port: 9200
initialDelaySeconds: 5
terminationGracePeriodSeconds: 3600
antiAffinity: "soft"
nodeAffinity: {}
nodeSelector: {}
tolerations: []
initResources: {}
# limits:
# cpu: "25m"
# # memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
resources:
limits:
cpu: "1"
# memory: "2048Mi"
requests:
cpu: "25m"
memory: "1536Mi"
priorityClassName: ""
## (dict) If specified, apply these annotations to each data Pod
# podAnnotations:
# example: data-foo
podDisruptionBudget:
enabled: false
# minAvailable: 1
maxUnavailable: 1
podManagementPolicy: OrderedReady
updateStrategy:
type: OnDelete
hooks:
## Drain the node before stopping it and re-integrate it into the cluster after start.
## When enabled, it supersedes `data.hooks.preStop` and `data.hooks.postStart` defined below.
drain:
enabled: true
  ## (string) Script to execute before the data pod stops. Ignored if `data.hooks.drain.enabled` is true (default).
# preStop: |-
# #!/bin/bash
# exec &> >(tee -a "/var/log/elasticsearch-hooks.log")
# NODE_NAME=${HOSTNAME}
# curl -s -XPUT -H 'Content-Type: application/json' '{{ template "elasticsearch.client.fullname" . }}:9200/_cluster/settings' -d "{
# \"transient\" :{
# \"cluster.routing.allocation.exclude._name\" : \"${NODE_NAME}\"
# }
# }"
# echo "Node ${NODE_NAME} is exluded from the allocation"
## (string) Script to execute after the data pod starts. Ignored if `data.hooks.drain.enabled` is true (default)
# postStart: |-
# #!/bin/bash
# exec &> >(tee -a "/var/log/elasticsearch-hooks.log")
# NODE_NAME=${HOSTNAME}
# CLUSTER_SETTINGS=$(curl -s -XGET "http://{{ template "elasticsearch.client.fullname" . }}:9200/_cluster/settings")
# if echo "${CLUSTER_SETTINGS}" | grep -E "${NODE_NAME}"; then
# echo "Activate node ${NODE_NAME}"
# curl -s -XPUT -H 'Content-Type: application/json' "http://{{ template "elasticsearch.client.fullname" . }}:9200/_cluster/settings" -d "{
# \"transient\" :{
# \"cluster.routing.allocation.exclude._name\" : null
# }
# }"
# fi
# echo "Node ${NODE_NAME} is ready to be used"
## Sysctl init container to set up vm.max_map_count
# see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
# and https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html#mlockall
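# When enabled, the init container runs "sysctl -w vm.max_map_count=262144" and therefore needs to run privileged.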
sysctlInitContainer:
enabled: true
## Chown init container to change ownership of the data and logs directories to the elasticsearch user
chownInitContainer:
enabled: true
## Additional init containers
extraInitContainers: |
apiVersion: v1
appVersion: 10.1.40
description: Fast, reliable, scalable, and easy to use open-source relational database
system. MariaDB Server is intended for mission-critical, heavy-load production systems
as well as for embedding into mass-deployed software. Highly available MariaDB cluster.
engine: gotpl
home: https://mariadb.org
icon: https://bitnami.com/assets/stacks/mariadb/img/mariadb-stack-220x234.png
keywords:
- mariadb
- mysql
- database
- sql
- prometheus
maintainers:
- email: containers@bitnami.com
name: Bitnami
name: mariadb
sources:
- https://github.com/bitnami/bitnami-docker-mariadb
- https://github.com/prometheus/mysqld_exporter
version: 5.11.3
approvers:
- prydonius
- tompizmor
- sameersbn
- carrodher
- juan131
reviewers:
- prydonius
- tompizmor
- sameersbn
- carrodher
- juan131
# MariaDB
[MariaDB](https://mariadb.org) is one of the most popular database servers in the world. It’s made by the original developers of MySQL and guaranteed to stay open source. Notable users include Wikipedia, Facebook and Google.
MariaDB is developed as open source software and as a relational database it provides an SQL interface for accessing data. The latest versions of MariaDB also include GIS and JSON features.
## TL;DR
```bash
$ helm install stable/mariadb
```
## Introduction
This chart bootstraps a [MariaDB](https://github.com/bitnami/bitnami-docker-mariadb) replication cluster deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
- Kubernetes 1.10+
- PV provisioner support in the underlying infrastructure
## Installing the Chart
To install the chart with the release name `my-release`:
```bash
$ helm install --name my-release stable/mariadb
```
The command deploys MariaDB on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
> **Tip**: List all releases using `helm list`
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
```bash
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the MariaDB chart and their default values.
| Parameter | Description | Default |
|-------------------------------------------|-----------------------------------------------------|-------------------------------------------------------------------|
| `global.imageRegistry` | Global Docker image registry | `nil` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | MariaDB image registry | `docker.io` |
| `image.repository` | MariaDB Image name | `bitnami/mariadb` |
| `image.tag` | MariaDB Image tag | `{VERSION}` |
| `image.pullPolicy`                        | MariaDB image pull policy                            | `Always` if `image.tag` is `latest`, else `IfNotPresent`           |
| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.debug` | Specify if debug logs should be enabled | `false` |
| `service.type` | Kubernetes service type | `ClusterIP` |
| `service.clusterIp` | Specific cluster IP when service type is cluster IP. Use None for headless service | `nil` |
| `service.port` | MySQL service port | `3306` |
| `serviceAccount.create` | Specifies whether a ServiceAccount should be created | `false` |
| `serviceAccount.name` | The name of the ServiceAccount to create | Generated using the mariadb.fullname template |
| `rbac.create` | Create and use RBAC resources | `false` |
| `securityContext.enabled` | Enable security context | `true` |
| `securityContext.fsGroup` | Group ID for the container | `1001` |
| `securityContext.runAsUser` | User ID for the container | `1001` |
| `existingSecret` | Use Existing secret for Password details (`rootUser.password`, `db.password`, `replication.password` will be ignored and picked up from this secret) | |
| `rootUser.password` | Password for the `root` user. Ignored if existing secret is provided. | _random 10 character alphanumeric string_ |
| `rootUser.forcePassword` | Force users to specify a password | `false` |
| `db.user` | Username of new user to create | `nil` |
| `db.password` | Password for the new user. Ignored if existing secret is provided. | _random 10 character alphanumeric string if `db.user` is defined_ |
| `db.name` | Name for new database to create | `my_database` |
| `replication.enabled` | MariaDB replication enabled | `true` |
| `replication.user` |MariaDB replication user | `replicator` |
| `replication.password` | MariaDB replication user password. Ignored if existing secret is provided. | _random 10 character alphanumeric string_ |
| `initdbScripts` | Dictionary of initdb scripts | `nil` |
| `initdbScriptsConfigMap` | ConfigMap with the initdb scripts (Note: Overrides `initdbScripts`) | `nil` |
| `master.annotations[].key`                | Key for the annotation list item                     | `nil`                                                              |
| `master.annotations[].value`              | Value for the annotation list item                   | `nil`                                                              |
| `master.affinity` | Master affinity (in addition to master.antiAffinity when set) | `{}` |
| `master.antiAffinity` | Master pod anti-affinity policy | `soft` |
| `master.tolerations` | List of node taints to tolerate (master) | `[]` |
| `master.updateStrategy` | Master statefulset update strategy policy | `RollingUpdate` |
| `master.persistence.enabled` | Enable persistence using PVC | `true` |
| `master.persistence.existingClaim` | Provide an existing `PersistentVolumeClaim` | `nil` |
| `master.persistence.mountPath` | Path to mount the volume at | `/bitnami/mariadb` |
| `master.persistence.annotations` | Persistent Volume Claim annotations | `{}` |
| `master.persistence.storageClass` | Persistent Volume Storage Class | `` |
| `master.persistence.accessModes` | Persistent Volume Access Modes | `[ReadWriteOnce]` |
| `master.persistence.size` | Persistent Volume Size | `8Gi` |
| `master.extraInitContainers` | Additional init containers as a string to be passed to the `tpl` function (master) | |
| `master.config` | Config file for the MariaDB Master server | `_default values in the values.yaml file_` |
| `master.resources` | CPU/Memory resource requests/limits for master node | `{}` |
| `master.livenessProbe.enabled` | Turn on and off liveness probe (master) | `true` |
| `master.livenessProbe.initialDelaySeconds`| Delay before liveness probe is initiated (master) | `120` |
| `master.livenessProbe.periodSeconds` | How often to perform the probe (master) | `10` |
| `master.livenessProbe.timeoutSeconds` | When the probe times out (master) | `1` |
| `master.livenessProbe.successThreshold` | Minimum consecutive successes for the probe (master)| `1` |
| `master.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe (master) | `3` |
| `master.readinessProbe.enabled` | Turn on and off readiness probe (master) | `true` |
| `master.readinessProbe.initialDelaySeconds`| Delay before readiness probe is initiated (master) | `30` |
| `master.readinessProbe.periodSeconds` | How often to perform the probe (master) | `10` |
| `master.readinessProbe.timeoutSeconds` | When the probe times out (master) | `1` |
| `master.readinessProbe.successThreshold` | Minimum consecutive successes for the probe (master)| `1` |
| `master.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe (master) | `3` |
| `master.podDisruptionBudget.enabled` | If true, create a pod disruption budget for master pods. | `false` |
| `master.podDisruptionBudget.minAvailable` | Minimum number / percentage of pods that should remain scheduled | `1` |
| `master.podDisruptionBudget.maxUnavailable`| Maximum number / percentage of pods that may be made unavailable | `nil` |
| `slave.replicas` | Desired number of slave replicas | `1` |
| `slave.annotations[].key`                 | Key for the annotation list item                     | `nil`                                                              |
| `slave.annotations[].value`               | Value for the annotation list item                   | `nil`                                                              |
| `slave.affinity` | Slave affinity (in addition to slave.antiAffinity when set) | `{}` |
| `slave.antiAffinity` | Slave pod anti-affinity policy | `soft` |
| `slave.tolerations` | List of node taints to tolerate for (slave) | `[]` |
| `slave.updateStrategy` | Slave statefulset update strategy policy | `RollingUpdate` |
| `slave.persistence.enabled` | Enable persistence using a `PersistentVolumeClaim` | `true` |
| `slave.persistence.annotations` | Persistent Volume Claim annotations | `{}` |
| `slave.persistence.storageClass` | Persistent Volume Storage Class | `` |
| `slave.persistence.accessModes` | Persistent Volume Access Modes | `[ReadWriteOnce]` |
| `slave.persistence.size` | Persistent Volume Size | `8Gi` |
| `slave.extraInitContainers` | Additional init containers as a string to be passed to the `tpl` function (slave) | |
| `slave.config` | Config file for the MariaDB Slave replicas | `_default values in the values.yaml file_` |
| `slave.resources` | CPU/Memory resource requests/limits for slave node | `{}` |
| `slave.livenessProbe.enabled` | Turn on and off liveness probe (slave) | `true` |
| `slave.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated (slave) | `120` |
| `slave.livenessProbe.periodSeconds` | How often to perform the probe (slave) | `10` |
| `slave.livenessProbe.timeoutSeconds` | When the probe times out (slave) | `1` |
| `slave.livenessProbe.successThreshold` | Minimum consecutive successes for the probe (slave) | `1` |
| `slave.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe (slave) | `3` |
| `slave.readinessProbe.enabled` | Turn on and off readiness probe (slave) | `true` |
| `slave.readinessProbe.initialDelaySeconds`| Delay before readiness probe is initiated (slave) | `45` |
| `slave.readinessProbe.periodSeconds` | How often to perform the probe (slave) | `10` |
| `slave.readinessProbe.timeoutSeconds` | When the probe times out (slave) | `1` |
| `slave.readinessProbe.successThreshold` | Minimum consecutive successes for the probe (slave) | `1` |
| `slave.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe (slave) | `3` |
| `slave.podDisruptionBudget.enabled` | If true, create a pod disruption budget for slave pods. | `false` |
| `slave.podDisruptionBudget.minAvailable` | Minimum number / percentage of pods that should remain scheduled | `1` |
| `slave.podDisruptionBudget.maxUnavailable`| Maximum number / percentage of pods that may be made unavailable | `nil` |
| `metrics.enabled` | Start a side-car prometheus exporter | `false` |
| `metrics.image.registry` | Exporter image registry | `docker.io` |
| `metrics.image.repository` | Exporter image name | `prom/mysqld-exporter` |
| `metrics.image.tag` | Exporter image tag | `v0.10.0` |
| `metrics.image.pullPolicy` | Exporter image pull policy | `IfNotPresent` |
| `metrics.resources` | Exporter resource requests/limit | `nil` |
The above parameters map to the env variables defined in [bitnami/mariadb](http://github.com/bitnami/bitnami-docker-mariadb). For more information please refer to the [bitnami/mariadb](http://github.com/bitnami/bitnami-docker-mariadb) image documentation.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```bash
$ helm install --name my-release \
--set rootUser.password=secretpassword,db.user=app_database \
stable/mariadb
```
The above command sets the MariaDB `root` account password to `secretpassword`. Additionally it creates a standard database user named `app_database`, along with the default `my_database` database.
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```bash
$ helm install --name my-release -f values.yaml stable/mariadb
```
> **Tip**: You can use the default [values.yaml](values.yaml)
## Initialize a fresh instance
The [Bitnami MariaDB](https://github.com/bitnami/bitnami-docker-mariadb) image allows you to use your custom scripts to initialize a fresh instance. In order to execute the scripts, they must be located inside the chart folder `files/docker-entrypoint-initdb.d` so they can be consumed as a ConfigMap.
Alternatively, you can specify custom scripts using the `initdbScripts` parameter as a dict.
In addition to these options, you can also set an external ConfigMap with all the initialization scripts. This is done by setting the `initdbScriptsConfigMap` parameter. Note that this will override the two previous options.
The allowed extensions are `.sh`, `.sql` and `.sql.gz`.
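For example, a minimal `values.yaml` sketch using `initdbScripts` (the script name and statement are illustrative):
```yaml
initdbScripts:
  create_schema.sql: |
    CREATE TABLE IF NOT EXISTS example (id INT PRIMARY KEY);
```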
## Persistence
The [Bitnami MariaDB](https://github.com/bitnami/bitnami-docker-mariadb) image stores the MariaDB data and configurations at the `/bitnami/mariadb` path of the container.
The chart mounts a [Persistent Volume](kubernetes.io/docs/user-guide/persistent-volumes/) at this location. By default, the volume is created using dynamic volume provisioning. Alternatively, an existing PersistentVolumeClaim can be defined.
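For instance, to reuse a pre-created claim (the claim name `my-mariadb-pvc` is illustrative):
```bash
$ helm install --name my-release \
  --set master.persistence.existingClaim=my-mariadb-pvc \
  stable/mariadb
```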
## Extra Init Containers
This feature allows you to specify a template string for an initContainer in the master/slave pod. Use cases include situations where you need some pre-run setup. For example, in IKS (IBM Cloud Kubernetes Service), non-root users do not have write permission on the volume mount path for NFS-backed file storage, so you could use an init container to `chown` the mount. See the example below, where we add an initContainer on the master pod that reports to an external resource that the database is about to start.
`values.yaml`
```yaml
master:
extraInitContainers: |
- name: initcontainer
image: alpine:latest
command: ["/bin/sh", "-c"]
args:
- curl http://api-service.local/db/starting;
```
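Note that the string is passed through the `tpl` function, so template expressions such as `{{ .Release.Name }}` can be used inside `extraInitContainers`.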
## Upgrading
It's necessary to set the `rootUser.password` parameter when upgrading for readiness/liveness probes to work properly. When you install this chart for the first time, some notes will be displayed with the credentials you must use, under the 'Administrator credentials' section. Note down the password and run the command below to upgrade your chart:
```bash
$ helm upgrade my-release stable/mariadb --set rootUser.password=[ROOT_PASSWORD]
```
> **Note**: you need to substitute the placeholder _[ROOT_PASSWORD]_ with the value obtained in the installation notes.
### To 5.0.0
Backwards compatibility is not guaranteed unless you modify the labels used on the chart's deployments.
Use the workaround below to upgrade from versions previous to 5.0.0. The following example assumes that the release name is opencart:
```console
$ kubectl delete statefulset opencart-mariadb --cascade=false
```
You can copy your custom `.sh`, `.sql` or `.sql.gz` files here so they are executed during the first boot of the image.
More info in the [bitnami-docker-mariadb](https://github.com/bitnami/bitnami-docker-mariadb#initializing-a-new-instance) repository.
Please be patient while the chart is being deployed
Tip:
Watch the deployment status using the command: kubectl get pods -w --namespace {{ .Release.Namespace }} -l release={{ .Release.Name }}
Services:
echo Master: {{ template "mariadb.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local:{{ .Values.service.port }}
{{- if .Values.replication.enabled }}
echo Slave: {{ template "slave.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local:{{ .Values.service.port }}
{{- end }}
Administrator credentials:
Username: root
Password: $(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "mariadb.fullname" . }} -o jsonpath="{.data.mariadb-root-password}" | base64 --decode)
To connect to your database:
1. Run a pod that you can use as a client:
kubectl run {{ template "mariadb.fullname" . }}-client --rm --tty -i --restart='Never' --image {{ template "mariadb.image" . }} --namespace {{ .Release.Namespace }} --command -- bash
2. To connect to master service (read/write):
mysql -h {{ template "mariadb.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local -uroot -p {{ .Values.db.name }}
{{- if .Values.replication.enabled }}
3. To connect to slave service (read-only):
mysql -h {{ template "slave.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local -uroot -p {{ .Values.db.name }}
{{- end }}
To upgrade this helm chart:
1. Obtain the password as described in the 'Administrator credentials' section and set the 'rootUser.password' parameter as shown below:
ROOT_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "mariadb.fullname" . }} -o jsonpath="{.data.mariadb-root-password}" | base64 --decode)
helm upgrade {{ .Release.Name }} stable/mariadb --set rootUser.password=$ROOT_PASSWORD
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "mariadb.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "mariadb.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
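{{/* Example: release "my-release" renders "my-release-mariadb"; a release already named "mariadb" contains the chart name, so it renders just "mariadb". */}}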
{{- define "master.fullname" -}}
{{- if .Values.replication.enabled -}}
{{- printf "%s-%s" (include "mariadb.fullname" .) "master" | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- include "mariadb.fullname" . -}}
{{- end -}}
{{- end -}}
{{- define "slave.fullname" -}}
{{- printf "%s-%s" (include "mariadb.fullname" .) "slave" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "mariadb.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Return the proper MariaDB image name
*/}}
{{- define "mariadb.image" -}}
{{- $registryName := .Values.image.registry -}}
{{- $repositoryName := .Values.image.repository -}}
{{- $tag := .Values.image.tag | toString -}}
{{/*
Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
Also, we can't use a single if because lazy evaluation is not an option
*/}}
{{- if .Values.global }}
{{- if .Values.global.imageRegistry }}
{{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
{{- else -}}
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- end -}}
{{- else -}}
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- end -}}
{{- end -}}
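{{/* Example: registry "docker.io", repository "bitnami/mariadb" and tag "10.1.40" render "docker.io/bitnami/mariadb:10.1.40"; a non-empty global.imageRegistry takes precedence over image.registry. */}}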
{{/*
Return the proper metrics image name
*/}}
{{- define "mariadb.metrics.image" -}}
{{- $registryName := .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := .Values.metrics.image.tag | toString -}}
{{/*
Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
Also, we can't use a single if because lazy evaluation is not an option
*/}}
{{- if .Values.global }}
{{- if .Values.global.imageRegistry }}
{{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
{{- else -}}
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- end -}}
{{- else -}}
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- end -}}
{{- end -}}
{{ template "mariadb.initdbScriptsCM" . }}
{{/*
Get the initialization scripts ConfigMap name.
*/}}
{{- define "mariadb.initdbScriptsCM" -}}
{{- if .Values.initdbScriptsConfigMap -}}
{{- printf "%s" .Values.initdbScriptsConfigMap -}}
{{- else -}}
{{- printf "%s-init-scripts" (include "master.fullname" .) -}}
{{- end -}}
{{- end -}}
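{{/* Example: with initdbScriptsConfigMap unset and replication enabled, release "my-release" renders "my-release-mariadb-master-init-scripts". */}}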
{{/*
Create the name of the service account to use
*/}}
{{- define "mariadb.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "mariadb.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
Return the proper Docker Image Registry Secret Names
*/}}
{{- define "mariadb.imagePullSecrets" -}}
{{/*
Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
Also, we cannot use a single if because lazy evaluation is not an option
*/}}
{{- if .Values.global }}
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.global.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
imagePullSecrets:
{{- range .Values.image.pullSecrets }}
- name: {{ . }}
{{- end }}
{{- range .Values.metrics.image.pullSecrets }}
- name: {{ . }}
{{- end }}
{{- end -}}
{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
imagePullSecrets:
{{- range .Values.image.pullSecrets }}
- name: {{ . }}
{{- end }}
{{- range .Values.metrics.image.pullSecrets }}
- name: {{ . }}
{{- end }}
{{- end -}}
{{- end -}}
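{{/* This helper is included with "indent 6" from each pod spec and emits nothing when no pull secrets are configured. */}}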
{{- if and (or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScripts) (not .Values.initdbScriptsConfigMap) }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "master.fullname" . }}-init-scripts
labels:
app: "{{ template "mariadb.name" . }}"
chart: "{{ template "mariadb.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
component: "master"
{{- if and (.Files.Glob "files/docker-entrypoint-initdb.d/*.sql.gz") (not .Values.initdbScriptsConfigMap) }}
binaryData:
{{- $root := . }}
{{- range $path, $bytes := .Files.Glob "files/docker-entrypoint-initdb.d/*.sql.gz" }}
{{ base $path }}: {{ $root.Files.Get $path | b64enc | quote }}
{{- end }}
{{- end }}
data:
{{- if and (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql}") (not .Values.initdbScriptsConfigMap) }}
{{ (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql}").AsConfig | indent 2 }}
{{- end }}
{{- with .Values.initdbScripts }}
{{ toYaml . | indent 2 }}
{{- end }}
{{ end }}
{{- if .Values.master.config }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "master.fullname" . }}
labels:
app: "{{ template "mariadb.name" . }}"
component: "master"
chart: "{{ template "mariadb.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
data:
my.cnf: |-
{{ .Values.master.config | indent 4 }}
{{- end -}}
{{- if .Values.master.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: {{ template "mariadb.fullname" . }}
labels:
app: "{{ template "mariadb.name" . }}"
component: "master"
chart: {{ template "mariadb.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
spec:
{{- if .Values.master.podDisruptionBudget.minAvailable }}
minAvailable: {{ .Values.master.podDisruptionBudget.minAvailable }}
{{- end }}
{{- if .Values.master.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.master.podDisruptionBudget.maxUnavailable }}
{{- end }}
selector:
matchLabels:
app: "{{ template "mariadb.name" . }}"
component: "master"
release: {{ .Release.Name | quote }}
{{- end }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "master.fullname" . }}
labels:
app: "{{ template "mariadb.name" . }}"
chart: "{{ template "mariadb.chart" . }}"
component: "master"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
spec:
selector:
matchLabels:
release: "{{ .Release.Name }}"
component: "master"
app: "{{ template "mariadb.name" . }}"
serviceName: "{{ template "master.fullname" . }}"
replicas: 1
updateStrategy:
type: {{ .Values.master.updateStrategy.type }}
{{- if (eq "Recreate" .Values.master.updateStrategy.type) }}
rollingUpdate: null
{{- end }}
template:
metadata:
{{- if .Values.master.annotations }}
annotations:
{{- range .Values.master.annotations }}
{{ .key }}: '{{ .value }}'
{{- end }}
{{- end }}
labels:
app: "{{ template "mariadb.name" . }}"
component: "master"
release: "{{ .Release.Name }}"
chart: "{{ template "mariadb.chart" . }}"
spec:
serviceAccountName: "{{ template "mariadb.serviceAccountName" . }}"
{{- if .Values.securityContext.enabled }}
securityContext:
fsGroup: {{ .Values.securityContext.fsGroup }}
runAsUser: {{ .Values.securityContext.runAsUser }}
{{- end }}
{{- if eq .Values.master.antiAffinity "hard" }}
affinity:
{{- with .Values.master.affinity }}
{{ toYaml . | indent 8 }}
{{- end }}
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: "kubernetes.io/hostname"
labelSelector:
matchLabels:
app: "{{ template "mariadb.name" . }}"
release: "{{ .Release.Name }}"
{{- else if eq .Values.master.antiAffinity "soft" }}
affinity:
{{- with .Values.master.affinity }}
{{ toYaml . | indent 8 }}
{{- end }}
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: kubernetes.io/hostname
labelSelector:
matchLabels:
app: "{{ template "mariadb.name" . }}"
release: "{{ .Release.Name }}"
      {{- else }}
{{- with .Values.master.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}
{{- with .Values.master.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
{{- include "mariadb.imagePullSecrets" . | indent 6 }}
{{- if .Values.master.extraInitContainers }}
initContainers:
{{ tpl .Values.master.extraInitContainers . | indent 6 }}
{{- end }}
containers:
- name: "mariadb"
image: {{ template "mariadb.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
env:
        {{- if .Values.image.debug }}
- name: BITNAMI_DEBUG
value: "true"
{{- end }}
- name: MARIADB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
{{- if .Values.existingSecret }}
name: {{ .Values.existingSecret }}
{{- else }}
name: {{ template "mariadb.fullname" . }}
{{- end }}
key: mariadb-root-password
{{- if .Values.db.user }}
- name: MARIADB_USER
value: "{{ .Values.db.user }}"
- name: MARIADB_PASSWORD
valueFrom:
secretKeyRef:
{{- if .Values.existingSecret }}
name: {{ .Values.existingSecret }}
{{- else }}
name: {{ template "mariadb.fullname" . }}
{{- end }}
key: mariadb-password
{{- end }}
- name: MARIADB_DATABASE
value: "{{ .Values.db.name }}"
{{- if .Values.replication.enabled }}
- name: MARIADB_REPLICATION_MODE
value: "master"
- name: MARIADB_REPLICATION_USER
value: "{{ .Values.replication.user }}"
- name: MARIADB_REPLICATION_PASSWORD
valueFrom:
secretKeyRef:
{{- if .Values.existingSecret }}
name: {{ .Values.existingSecret }}
{{- else }}
name: {{ template "mariadb.fullname" . }}
{{- end }}
key: mariadb-replication-password
{{- end }}
ports:
- name: mysql
containerPort: 3306
{{- if .Values.master.livenessProbe.enabled }}
livenessProbe:
exec:
command: ["sh", "-c", "exec mysqladmin status -uroot -p$MARIADB_ROOT_PASSWORD"]
initialDelaySeconds: {{ .Values.master.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.master.livenessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.master.livenessProbe.timeoutSeconds }}
successThreshold: {{ .Values.master.livenessProbe.successThreshold }}
failureThreshold: {{ .Values.master.livenessProbe.failureThreshold }}
{{- end }}
{{- if .Values.master.readinessProbe.enabled }}
readinessProbe:
exec:
command: ["sh", "-c", "exec mysqladmin status -uroot -p$MARIADB_ROOT_PASSWORD"]
initialDelaySeconds: {{ .Values.master.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.master.readinessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.master.readinessProbe.timeoutSeconds }}
successThreshold: {{ .Values.master.readinessProbe.successThreshold }}
failureThreshold: {{ .Values.master.readinessProbe.failureThreshold }}
{{- end }}
resources:
{{ toYaml .Values.master.resources | indent 10 }}
volumeMounts:
- name: data
mountPath: {{ .Values.master.persistence.mountPath }}
{{- if or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScriptsConfigMap .Values.initdbScripts }}
- name: custom-init-scripts
mountPath: /docker-entrypoint-initdb.d
{{- end }}
{{- if .Values.master.config }}
- name: config
mountPath: /opt/bitnami/mariadb/conf/my.cnf
subPath: my.cnf
{{- end }}
{{- if .Values.metrics.enabled }}
- name: metrics
image: {{ template "mariadb.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
env:
- name: MARIADB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
{{- if .Values.existingSecret }}
name: {{ .Values.existingSecret }}
{{- else }}
name: {{ template "mariadb.fullname" . }}
{{- end }}
key: mariadb-root-password
command: [ 'sh', '-c', 'DATA_SOURCE_NAME="root:$MARIADB_ROOT_PASSWORD@(localhost:3306)/" /bin/mysqld_exporter' ]
ports:
- name: metrics
containerPort: 9104
livenessProbe:
httpGet:
path: /metrics
port: metrics
initialDelaySeconds: 15
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /metrics
port: metrics
initialDelaySeconds: 5
timeoutSeconds: 1
resources:
{{ toYaml .Values.metrics.resources | indent 10 }}
{{- end }}
volumes:
{{- if .Values.master.config }}
- name: config
configMap:
name: {{ template "master.fullname" . }}
{{- end }}
{{- if or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScriptsConfigMap .Values.initdbScripts }}
- name: custom-init-scripts
configMap:
name: {{ template "mariadb.initdbScriptsCM" . }}
{{- end }}
{{- if and .Values.master.persistence.enabled .Values.master.persistence.existingClaim }}
- name: data
persistentVolumeClaim:
claimName: {{ .Values.master.persistence.existingClaim }}
{{- else if not .Values.master.persistence.enabled }}
- name: data
emptyDir: {}
{{- else if and .Values.master.persistence.enabled (not .Values.master.persistence.existingClaim) }}
volumeClaimTemplates:
- metadata:
name: data
labels:
app: "{{ template "mariadb.name" . }}"
component: "master"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
spec:
accessModes:
{{- range .Values.master.persistence.accessModes }}
- {{ . | quote }}
{{- end }}
resources:
requests:
storage: {{ .Values.master.persistence.size | quote }}
{{- if .Values.master.persistence.storageClass }}
{{- if (eq "-" .Values.master.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: {{ .Values.master.persistence.storageClass | quote }}
{{- end }}
{{- end }}
{{- end }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "mariadb.fullname" . }}
labels:
app: "{{ template "mariadb.name" . }}"
component: "master"
chart: "{{ template "mariadb.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- if .Values.metrics.enabled }}
annotations:
{{ toYaml .Values.metrics.annotations | indent 4 }}
{{- end }}
spec:
type: {{ .Values.service.type }}
{{- if eq .Values.service.type "ClusterIP" }}
{{- if .Values.service.clusterIp }}
clusterIP: {{ .Values.service.clusterIp }}
{{- end }}
{{- end }}
ports:
- name: mysql
port: {{ .Values.service.port }}
targetPort: mysql
{{- if eq .Values.service.type "NodePort" }}
{{- if .Values.service.nodePort }}
{{- if .Values.service.nodePort.master }}
nodePort: {{ .Values.service.nodePort.master }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.metrics.enabled }}
- name: metrics
port: 9104
targetPort: metrics
{{- end }}
selector:
app: "{{ template "mariadb.name" . }}"
component: "master"
release: "{{ .Release.Name }}"
{{- if and .Values.serviceAccount.create .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ template "master.fullname" . }}
labels:
app: "{{ template "mariadb.name" . }}"
chart: "{{ template "mariadb.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
rules:
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
{{- end }}
{{- if and .Values.serviceAccount.create .Values.rbac.create }}
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "master.fullname" . }}
labels:
app: "{{ template "mariadb.name" . }}"
chart: "{{ template "mariadb.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
subjects:
- kind: ServiceAccount
name: {{ template "mariadb.serviceAccountName" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "master.fullname" . }}
{{- end }}
{{- if (not .Values.existingSecret) -}}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "mariadb.fullname" . }}
labels:
app: "{{ template "mariadb.name" . }}"
chart: "{{ template "mariadb.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
type: Opaque
data:
{{- if .Values.rootUser.password }}
mariadb-root-password: "{{ .Values.rootUser.password | b64enc }}"
{{- else if (not .Values.rootUser.forcePassword) }}
mariadb-root-password: "{{ randAlphaNum 10 | b64enc }}"
  {{- else }}
mariadb-root-password: {{ required "A MariaDB Root Password is required!" .Values.rootUser.password }}
{{- end }}
{{- if .Values.db.user }}
{{- if .Values.db.password }}
mariadb-password: "{{ .Values.db.password | b64enc }}"
{{- else if (not .Values.db.forcePassword) }}
mariadb-password: "{{ randAlphaNum 10 | b64enc }}"
{{- else }}
mariadb-password: {{ required "A MariaDB Database Password is required!" .Values.db.password }}
{{- end }}
{{- end }}
{{- if .Values.replication.enabled }}
{{- if .Values.replication.password }}
mariadb-replication-password: "{{ .Values.replication.password | b64enc }}"
{{- else if (not .Values.replication.forcePassword) }}
mariadb-replication-password: "{{ randAlphaNum 10 | b64enc }}"
{{- else }}
mariadb-replication-password: {{ required "A MariaDB Replication Password is required!" .Values.replication.password }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "mariadb.serviceAccountName" . }}
labels:
app: "{{ template "mariadb.name" . }}"
chart: "{{ template "mariadb.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- end }}
{{- if and .Values.replication.enabled .Values.slave.config }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "slave.fullname" . }}
labels:
app: "{{ template "mariadb.name" . }}"
component: "slave"
chart: "{{ template "mariadb.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
data:
my.cnf: |-
{{ .Values.slave.config | indent 4 }}
{{- end }}
{{- if .Values.replication.enabled }}
{{- if .Values.slave.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: {{ template "mariadb.fullname" . }}
labels:
app: "{{ template "mariadb.name" . }}"
component: "slave"
chart: {{ template "mariadb.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
spec:
{{- if .Values.slave.podDisruptionBudget.minAvailable }}
minAvailable: {{ .Values.slave.podDisruptionBudget.minAvailable }}
{{- end }}
{{- if .Values.slave.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.slave.podDisruptionBudget.maxUnavailable }}
{{- end }}
selector:
matchLabels:
app: "{{ template "mariadb.name" . }}"
component: "slave"
release: {{ .Release.Name | quote }}
{{- end }}
{{- end }}
{{- if .Values.replication.enabled }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "slave.fullname" . }}
labels:
app: "{{ template "mariadb.name" . }}"
chart: "{{ template "mariadb.chart" . }}"
component: "slave"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
spec:
selector:
matchLabels:
release: "{{ .Release.Name }}"
component: "slave"
app: "{{ template "mariadb.name" . }}"
serviceName: "{{ template "slave.fullname" . }}"
replicas: {{ .Values.slave.replicas }}
updateStrategy:
type: {{ .Values.slave.updateStrategy.type }}
{{- if (eq "Recreate" .Values.slave.updateStrategy.type) }}
rollingUpdate: null
{{- end }}
template:
metadata:
{{- if .Values.slave.annotations }}
annotations:
{{- range .Values.slave.annotations }}
{{ .key }}: '{{ .value }}'
{{- end }}
{{- end }}
labels:
app: "{{ template "mariadb.name" . }}"
component: "slave"
release: "{{ .Release.Name }}"
chart: "{{ template "mariadb.chart" . }}"
spec:
serviceAccountName: "{{ template "mariadb.serviceAccountName" . }}"
{{- if .Values.securityContext.enabled }}
securityContext:
fsGroup: {{ .Values.securityContext.fsGroup }}
runAsUser: {{ .Values.securityContext.runAsUser }}
{{- end }}
{{- if eq .Values.slave.antiAffinity "hard" }}
affinity:
{{- with .Values.slave.affinity }}
{{ toYaml . | indent 8 }}
{{- end }}
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: "kubernetes.io/hostname"
labelSelector:
matchLabels:
app: "{{ template "mariadb.name" . }}"
release: "{{ .Release.Name }}"
{{- else if eq .Values.slave.antiAffinity "soft" }}
affinity:
{{- with .Values.slave.affinity }}
{{ toYaml . | indent 8 }}
{{- end }}
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: kubernetes.io/hostname
labelSelector:
matchLabels:
app: "{{ template "mariadb.name" . }}"
release: "{{ .Release.Name }}"
      {{- else }}
{{- with .Values.slave.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}
{{- with .Values.slave.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
{{- include "mariadb.imagePullSecrets" . | indent 6 }}
      {{- if .Values.slave.extraInitContainers }}
      initContainers:
{{ tpl .Values.slave.extraInitContainers . | indent 6 }}
{{- end }}
containers:
- name: "mariadb"
image: {{ template "mariadb.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
env:
        {{- if .Values.image.debug }}
- name: BITNAMI_DEBUG
value: "true"
{{- end }}
- name: MARIADB_REPLICATION_MODE
value: "slave"
- name: MARIADB_MASTER_HOST
value: {{ template "mariadb.fullname" . }}
- name: MARIADB_MASTER_PORT_NUMBER
value: "{{ .Values.service.port }}"
- name: MARIADB_MASTER_ROOT_USER
value: "root"
- name: MARIADB_MASTER_ROOT_PASSWORD
valueFrom:
secretKeyRef:
{{- if .Values.existingSecret }}
name: {{ .Values.existingSecret }}
{{- else }}
name: {{ template "mariadb.fullname" . }}
{{- end }}
key: mariadb-root-password
- name: MARIADB_REPLICATION_USER
value: "{{ .Values.replication.user }}"
- name: MARIADB_REPLICATION_PASSWORD
valueFrom:
secretKeyRef:
{{- if .Values.existingSecret }}
name: {{ .Values.existingSecret }}
{{- else }}
name: {{ template "mariadb.fullname" . }}
{{- end }}
key: mariadb-replication-password
ports:
- name: mysql
containerPort: 3306
{{- if .Values.slave.livenessProbe.enabled }}
livenessProbe:
exec:
command: ["sh", "-c", "exec mysqladmin status -uroot -p$MARIADB_MASTER_ROOT_PASSWORD"]
initialDelaySeconds: {{ .Values.slave.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.slave.livenessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.slave.livenessProbe.timeoutSeconds }}
successThreshold: {{ .Values.slave.livenessProbe.successThreshold }}
failureThreshold: {{ .Values.slave.livenessProbe.failureThreshold }}
{{- end }}
{{- if .Values.slave.readinessProbe.enabled }}
readinessProbe:
exec:
command: ["sh", "-c", "exec mysqladmin status -uroot -p$MARIADB_MASTER_ROOT_PASSWORD"]
initialDelaySeconds: {{ .Values.slave.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.slave.readinessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.slave.readinessProbe.timeoutSeconds }}
successThreshold: {{ .Values.slave.readinessProbe.successThreshold }}
failureThreshold: {{ .Values.slave.readinessProbe.failureThreshold }}
{{- end }}
resources:
{{ toYaml .Values.slave.resources | indent 10 }}
volumeMounts:
- name: data
mountPath: /bitnami/mariadb
{{- if .Values.slave.config }}
- name: config
mountPath: /opt/bitnami/mariadb/conf/my.cnf
subPath: my.cnf
{{- end }}
{{- if .Values.metrics.enabled }}
- name: metrics
image: {{ template "mariadb.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
env:
- name: MARIADB_MASTER_ROOT_PASSWORD
valueFrom:
secretKeyRef:
{{- if .Values.existingSecret }}
name: {{ .Values.existingSecret }}
{{- else }}
name: {{ template "mariadb.fullname" . }}
{{- end }}
key: mariadb-root-password
command: [ 'sh', '-c', 'DATA_SOURCE_NAME="root:$MARIADB_MASTER_ROOT_PASSWORD@(localhost:3306)/" /bin/mysqld_exporter' ]
ports:
- name: metrics
containerPort: 9104
livenessProbe:
httpGet:
path: /metrics
port: metrics
initialDelaySeconds: 15
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /metrics
port: metrics
initialDelaySeconds: 5
timeoutSeconds: 1
resources:
{{ toYaml .Values.metrics.resources | indent 10 }}
{{- end }}
volumes:
{{- if .Values.slave.config }}
- name: config
configMap:
name: {{ template "slave.fullname" . }}
{{- end }}
{{- if .Values.slave.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: data
labels:
app: "{{ template "mariadb.name" . }}"
component: "slave"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
spec:
accessModes:
{{- range .Values.slave.persistence.accessModes }}
- {{ . | quote }}
{{- end }}
resources:
requests:
storage: {{ .Values.slave.persistence.size | quote }}
{{- if .Values.slave.persistence.storageClass }}
{{- if (eq "-" .Values.slave.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: {{ .Values.slave.persistence.storageClass | quote }}
{{- end }}
{{- end }}
{{- else }}
- name: "data"
emptyDir: {}
{{- end }}
{{- end }}
{{- if .Values.replication.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "slave.fullname" . }}
labels:
app: "{{ template "mariadb.name" . }}"
chart: "{{ template "mariadb.chart" . }}"
component: "slave"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- if .Values.metrics.enabled }}
annotations:
{{ toYaml .Values.metrics.annotations | indent 4 }}
{{- end }}
spec:
type: {{ .Values.service.type }}
{{- if eq .Values.service.type "ClusterIP" }}
{{- if .Values.service.clusterIp }}
clusterIP: {{ .Values.service.clusterIp }}
{{- end }}
{{- end }}
ports:
- name: mysql
port: {{ .Values.service.port }}
targetPort: mysql
{{- if (eq .Values.service.type "NodePort") }}
{{- if .Values.service.nodePort }}
{{- if .Values.service.nodePort.slave }}
nodePort: {{ .Values.service.nodePort.slave }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.metrics.enabled }}
- name: metrics
port: 9104
targetPort: metrics
{{- end }}
selector:
app: "{{ template "mariadb.name" . }}"
component: "slave"
release: "{{ .Release.Name }}"
{{- end }}
apiVersion: v1
kind: Pod
metadata:
name: "{{ template "mariadb.fullname" . }}-test-{{ randAlphaNum 5 | lower }}"
annotations:
"helm.sh/hook": test-success
spec:
initContainers:
- name: "test-framework"
image: "dduportal/bats:0.4.0"
command:
- "bash"
- "-c"
- |
set -ex
# copy bats to tools dir
cp -R /usr/local/libexec/ /tools/bats/
volumeMounts:
- mountPath: /tools
name: tools
containers:
- name: mariadb-test
image: {{ template "mariadb.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
command: ["/tools/bats/bats", "-t", "/tests/run.sh"]
env:
- name: MARIADB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
{{- if .Values.existingSecret }}
name: {{ .Values.existingSecret }}
{{- else }}
name: {{ template "mariadb.fullname" . }}
{{- end }}
key: mariadb-root-password
volumeMounts:
- mountPath: /tests
name: tests
readOnly: true
- mountPath: /tools
name: tools
volumes:
- name: tests
configMap:
name: {{ template "mariadb.fullname" . }}-tests
- name: tools
emptyDir: {}
restartPolicy: Never
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "mariadb.fullname" . }}-tests
data:
run.sh: |-
@test "Testing MariaDB is accessible" {
mysql -h {{ template "mariadb.fullname" . }} -uroot -p$MARIADB_ROOT_PASSWORD -e 'show databases;'
}
## Global Docker image parameters
## Please note that this will override the image parameters, including those of dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
# imageRegistry: myRegistryName
# imagePullSecrets:
# - myRegistryKeySecretName
## Bitnami MariaDB image
## ref: https://hub.docker.com/r/bitnami/mariadb/tags/
##
image:
registry: docker.io
repository: bitnami/mariadb
tag: 10.1.40
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## Set to true if you would like to see extra information in the logs.
## It turns on BASH and NAMI debugging in minideb.
## ref: https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
debug: false
service:
## Kubernetes service type, ClusterIP and NodePort are supported at present
type: ClusterIP
# clusterIp: None
port: 3306
## Specify the nodePort value for the LoadBalancer and NodePort service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
##
# nodePort:
# master: 30001
# slave: 30002
## Pods Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
## Specifies whether a ServiceAccount should be created
##
create: false
## The name of the ServiceAccount to use.
## If not set and create is true, a name is generated using the mariadb.fullname template
# name:
## Role Based Access
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
create: false
## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
enabled: true
fsGroup: 1001
runAsUser: 1001
## Use existing secret (ignores root, db and replication passwords)
# existingSecret:
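## A minimal sketch, assuming a pre-created secret (the name below is
## hypothetical); the templates in this chart read at least the keys
## mariadb-root-password and mariadb-replication-password from it:
# existingSecret: my-mariadb-credentials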
rootUser:
## MariaDB admin password
## ref: https://github.com/bitnami/bitnami-docker-mariadb#setting-the-root-password-on-first-run
##
password:
##
## Option to force users to specify a password. This is required for 'helm upgrade' to work properly.
## If it is not forced, a random password will be generated.
forcePassword: true
db:
## MariaDB username and password
## ref: https://github.com/bitnami/bitnami-docker-mariadb#creating-a-database-user-on-first-run
##
user:
password:
## Password is ignored if existingSecret is specified.
## Database to create
## ref: https://github.com/bitnami/bitnami-docker-mariadb#creating-a-database-on-first-run
##
name: my_database
## Option to force users to specify a password. This is required for 'helm upgrade' to work properly.
## If it is not forced, a random password will be generated.
forcePassword: true
replication:
## Enable replication. This enables the creation of MariaDB replicas. If false, only a
## master deployment will be created
enabled: true
##
## MariaDB replication user
## ref: https://github.com/bitnami/bitnami-docker-mariadb#setting-up-a-replication-cluster
##
user: replicator
## MariaDB replication user password
## ref: https://github.com/bitnami/bitnami-docker-mariadb#setting-up-a-replication-cluster
##
password:
## Password is ignored if existingSecret is specified.
##
## Option to force users to specify a password. This is required for 'helm upgrade' to work properly.
## If it is not forced, a random password will be generated.
forcePassword: true
## initdb scripts
## Specify a dictionary of scripts to be run at first boot
## Alternatively, you can put your scripts under the files/docker-entrypoint-initdb.d directory
##
# initdbScripts:
# my_init_script.sh: |
# #!/bin/sh
# echo "Do something."
#
## ConfigMap with scripts to be run at first boot
## Note: This will override initdbScripts
# initdbScriptsConfigMap:
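## e.g. (hypothetical name; the ConfigMap must be created separately in
## the release namespace):
# initdbScriptsConfigMap: my-initdb-scripts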
master:
## Mariadb Master additional pod annotations
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
# annotations:
# - key: key1
# value: value1
## Affinity for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
## Kept for backwards compatibility. Remove it
## if you wish to set it through master.affinity.podAntiAffinity instead (see the example below).
##
antiAffinity: soft
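## A minimal sketch of a hard anti-affinity rule set through master.affinity
## instead; the label values assume the chart's default rendered labels
## (app from the mariadb.name template, component "master"):
# affinity:
#   podAntiAffinity:
#     requiredDuringSchedulingIgnoredDuringExecution:
#       - labelSelector:
#           matchLabels:
#             app: mariadb
#             component: master
#         topologyKey: kubernetes.io/hostname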
## Tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## updateStrategy for MariaDB Master StatefulSet
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
type: RollingUpdate
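## e.g., a valid alternative that rolls pods only when they are deleted
## manually:
# updateStrategy:
#   type: OnDelete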
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
## If true, use a Persistent Volume Claim; if false, use emptyDir
##
enabled: true
# Enable persistence using an existing PVC
# existingClaim:
mountPath: /bitnami/mariadb
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
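## e.g., to request a specific class (the class name below is hypothetical):
# storageClass: "fast-ssd"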
## Persistent Volume Claim annotations
##
annotations: {}
## Persistent Volume Access Mode
##
accessModes:
- ReadWriteOnce
## Persistent Volume size
##
size: 8Gi
##
extraInitContainers: |
# - name: do-something
# image: busybox
# command: ['do', 'something']
## Configure MariaDB with a custom my.cnf file
## ref: https://mariadb.com/kb/en/mariadb/configuring-mariadb-with-mycnf/#example-of-configuration-file
##
config: |-
[mysqld]
skip-name-resolve
explicit_defaults_for_timestamp
basedir=/opt/bitnami/mariadb
port=3306
socket=/opt/bitnami/mariadb/tmp/mysql.sock
tmpdir=/opt/bitnami/mariadb/tmp
max_allowed_packet=16M
bind-address=0.0.0.0
pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid
log-error=/opt/bitnami/mariadb/logs/mysqld.log
character-set-server=UTF8
collation-server=utf8_general_ci
[client]
port=3306
socket=/opt/bitnami/mariadb/tmp/mysql.sock
default-character-set=UTF8
[manager]
port=3306
socket=/opt/bitnami/mariadb/tmp/mysql.sock
pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid
## Configure master resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
livenessProbe:
enabled: true
##
## Initializing the database could take some time
initialDelaySeconds: 120
##
## Default Kubernetes values
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe:
enabled: true
initialDelaySeconds: 15
##
## Default Kubernetes values
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
podDisruptionBudget:
enabled: false
minAvailable: 1
# maxUnavailable: 1
slave:
replicas: 2
## Mariadb Slave additional pod annotations
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
# annotations:
# - key: key1
# value: value1
## Affinity for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
## Kept for backwards compatibility. Remove it
## if you wish to set it through slave.affinity.podAntiAffinity instead.
##
antiAffinity: soft
## Tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## updateStrategy for MariaDB Slave StatefulSet
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
type: RollingUpdate
persistence:
## If true, use a Persistent Volume Claim; if false, use emptyDir
##
enabled: true
# storageClass: "-"
annotations: {}
accessModes:
- ReadWriteOnce
## Persistent Volume size
##
size: 8Gi
##
extraInitContainers: |
# - name: do-something
# image: busybox
# command: ['do', 'something']
## Configure the MariaDB slave with a custom my.cnf file
## ref: https://mariadb.com/kb/en/mariadb/configuring-mariadb-with-mycnf/#example-of-configuration-file
##
config: |-
[mysqld]
skip-name-resolve
explicit_defaults_for_timestamp
basedir=/opt/bitnami/mariadb
port=3306
socket=/opt/bitnami/mariadb/tmp/mysql.sock
tmpdir=/opt/bitnami/mariadb/tmp
max_allowed_packet=16M
bind-address=0.0.0.0
pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid
log-error=/opt/bitnami/mariadb/logs/mysqld.log
character-set-server=UTF8
collation-server=utf8_general_ci
[client]
port=3306
socket=/opt/bitnami/mariadb/tmp/mysql.sock
default-character-set=UTF8
[manager]
port=3306
socket=/opt/bitnami/mariadb/tmp/mysql.sock
pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid
##
## Configure slave resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
livenessProbe:
enabled: true
##
## Initializing the database could take some time
initialDelaySeconds: 120
##
## Default Kubernetes values
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe:
enabled: true
initialDelaySeconds: 15
##
## Default Kubernetes values
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
podDisruptionBudget:
enabled: false
minAvailable: 1
# maxUnavailable: 1
metrics:
enabled: true
image:
registry: docker.io
repository: prom/mysqld-exporter
tag: v0.10.0
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
resources: {}
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9104"
## Global Docker image parameters
## Please note that this will override the image parameters, including those of dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
# imageRegistry: myRegistryName
# imagePullSecrets:
# - myRegistryKeySecretName
## Bitnami MariaDB image
## ref: https://hub.docker.com/r/bitnami/mariadb/tags/
##
image:
registry: docker.io
repository: bitnami/mariadb
tag: 10.1.40
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## Set to true if you would like to see extra information in the logs.
## It turns on BASH and NAMI debugging in minideb.
## ref: https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
debug: false
service:
## Kubernetes service type, ClusterIP and NodePort are supported at present
type: ClusterIP
# clusterIp: None
port: 3306
## Specify the nodePort value for the LoadBalancer and NodePort service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
##
# nodePort:
# master: 30001
# slave: 30002
## Pods Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
## Specifies whether a ServiceAccount should be created
##
create: false
## The name of the ServiceAccount to use.
## If not set and create is true, a name is generated using the mariadb.fullname template
# name:
## Role Based Access
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
create: false
## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
enabled: true
fsGroup: 1001
runAsUser: 1001
## Use existing secret (ignores root, db and replication passwords)
# existingSecret:
rootUser:
## MariaDB admin password
## ref: https://github.com/bitnami/bitnami-docker-mariadb#setting-the-root-password-on-first-run
##
password:
##
## Option to force users to specify a password. This is required for 'helm upgrade' to work properly.
## If it is not forced, a random password will be generated.
forcePassword: false
db:
## MariaDB username and password
## ref: https://github.com/bitnami/bitnami-docker-mariadb#creating-a-database-user-on-first-run
##
user:
password:
## Password is ignored if existingSecret is specified.
## Database to create
## ref: https://github.com/bitnami/bitnami-docker-mariadb#creating-a-database-on-first-run
##
name: my_database
## Option to force users to specify a password. This is required for 'helm upgrade' to work properly.
## If it is not forced, a random password will be generated.
forcePassword: false
replication:
## Enable replication. This enables the creation of MariaDB replicas. If false, only a
## master deployment will be created
enabled: true
##
## MariaDB replication user
## ref: https://github.com/bitnami/bitnami-docker-mariadb#setting-up-a-replication-cluster
##
user: replicator
## MariaDB replication user password
## ref: https://github.com/bitnami/bitnami-docker-mariadb#setting-up-a-replication-cluster
##
password:
## Password is ignored if existingSecret is specified.
##
## Option to force users to specify a password. This is required for 'helm upgrade' to work properly.
## If it is not forced, a random password will be generated.
forcePassword: false
## initdb scripts
## Specify a dictionary of scripts to be run at first boot
## Alternatively, you can put your scripts under the files/docker-entrypoint-initdb.d directory
##
# initdbScripts:
# my_init_script.sh: |
# #!/bin/sh
# echo "Do something."
#
## ConfigMap with scripts to be run at first boot
## Note: This will override initdbScripts
# initdbScriptsConfigMap:
master:
## Mariadb Master additional pod annotations
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
# annotations:
# - key: key1
# value: value1
## Affinity for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
## Kept for backwards compatibility. Remove it
## if you wish to set it through master.affinity.podAntiAffinity instead.
##
antiAffinity: soft
## Tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
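## e.g. (the taint key and value below are hypothetical):
# tolerations:
#   - key: "dedicated"
#     operator: "Equal"
#     value: "mariadb"
#     effect: "NoSchedule"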
## updateStrategy for MariaDB Master StatefulSet
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
type: RollingUpdate
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
## If true, use a Persistent Volume Claim; if false, use emptyDir
##
enabled: true
# Enable persistence using an existing PVC
# existingClaim:
mountPath: /bitnami/mariadb
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
## Persistent Volume Claim annotations
##
annotations: {}
## Persistent Volume Access Mode
##
accessModes:
- ReadWriteOnce
## Persistent Volume size
##
size: 8Gi
##
extraInitContainers: |
# - name: do-something
# image: busybox
# command: ['do', 'something']
## Configure MariaDB with a custom my.cnf file
## ref: https://mariadb.com/kb/en/mariadb/configuring-mariadb-with-mycnf/#example-of-configuration-file
##
config: |-
[mysqld]
skip-name-resolve
explicit_defaults_for_timestamp
basedir=/opt/bitnami/mariadb
port=3306
socket=/opt/bitnami/mariadb/tmp/mysql.sock
tmpdir=/opt/bitnami/mariadb/tmp
max_allowed_packet=16M
bind-address=0.0.0.0
pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid
log-error=/opt/bitnami/mariadb/logs/mysqld.log
character-set-server=UTF8
collation-server=utf8_general_ci
[client]
port=3306
socket=/opt/bitnami/mariadb/tmp/mysql.sock
default-character-set=UTF8
[manager]
port=3306
socket=/opt/bitnami/mariadb/tmp/mysql.sock
pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid
## Configure master resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
livenessProbe:
enabled: true
##
## Initializing the database could take some time
initialDelaySeconds: 120
##
## Default Kubernetes values
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe:
enabled: true
initialDelaySeconds: 30
##
## Default Kubernetes values
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
podDisruptionBudget:
enabled: false
minAvailable: 1
# maxUnavailable: 1
slave:
replicas: 1
## Mariadb Slave additional pod annotations
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
# annotations:
# - key: key1
# value: value1
## Affinity for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
## Kept for backwards compatibility. Remove it
## if you wish to set it through slave.affinity.podAntiAffinity instead.
##
antiAffinity: soft
## Tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## updateStrategy for MariaDB Slave StatefulSet
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
type: RollingUpdate
persistence:
## If true, use a Persistent Volume Claim; if false, use emptyDir
##
enabled: true
# storageClass: "-"
annotations: {}
accessModes:
- ReadWriteOnce
## Persistent Volume size
##
size: 8Gi
##
extraInitContainers: |
# - name: do-something
# image: busybox
# command: ['do', 'something']
## Configure the MariaDB slave with a custom my.cnf file
## ref: https://mariadb.com/kb/en/mariadb/configuring-mariadb-with-mycnf/#example-of-configuration-file
##
config: |-
[mysqld]
skip-name-resolve
explicit_defaults_for_timestamp
basedir=/opt/bitnami/mariadb
port=3306
socket=/opt/bitnami/mariadb/tmp/mysql.sock
tmpdir=/opt/bitnami/mariadb/tmp
max_allowed_packet=16M
bind-address=0.0.0.0
pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid
log-error=/opt/bitnami/mariadb/logs/mysqld.log
character-set-server=UTF8
collation-server=utf8_general_ci
[client]
port=3306
socket=/opt/bitnami/mariadb/tmp/mysql.sock
default-character-set=UTF8
[manager]
port=3306
socket=/opt/bitnami/mariadb/tmp/mysql.sock
pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid
##
## Configure slave resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
livenessProbe:
enabled: true
##
## Initializing the database could take some time
initialDelaySeconds: 120
##
## Default Kubernetes values
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe:
enabled: true
initialDelaySeconds: 45
##
## Default Kubernetes values
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
podDisruptionBudget:
enabled: false
minAvailable: 1
# maxUnavailable: 1
metrics:
enabled: false
image:
registry: docker.io
repository: prom/mysqld-exporter
tag: v0.10.0
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
resources: {}
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9104"
dependencies:
- name: mariadb
  version: 5.1.4
  repository: file://./charts/mariadb
  condition: mariadb.enabled
- name: elasticsearch
  version: 1.0.1
  repository: file://./charts/elasticsearch
  condition: elasticsearch.enabled