Commit c017960f by loganhz Committed by Alena Prokharchyk

Remove 0.0.1

parent 5f3cd2c3
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
tests/
apiVersion: v1
name: rancher-istio
version: 0.0.1
appVersion: 1.1.5
tillerVersion: ">=2.7.2-0"
description: Helm chart for all istio components
home: https://istio.io/
keywords:
- istio
- security
- sidecarInjectorWebhook
- mixer
- pilot
- galley
sources:
- http://github.com/istio/istio
engine: gotpl
icon: https://istio.io/favicons/android-192x192.png
maintainers:
- name: istio
# Istio
[Istio](https://istio.io/) is an open platform for providing a uniform way to integrate microservices, manage traffic flow across microservices, enforce policies and aggregate telemetry data.
## Introduction
This chart bootstraps the deployment of all Istio [components](https://istio.io/docs/concepts/what-is-istio/overview.html) on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
## Chart Details
This chart can install multiple istio components as subcharts:
- ingressgateway
- egressgateway
- sidecarInjectorWebhook
- galley
- mixer
- pilot
- security(citadel)
- grafana
- prometheus
- servicegraph
- tracing(jaeger)
- kiali
To enable or disable each component, change the corresponding `enabled` flag.
## Prerequisites
- A Kubernetes 1.9 or newer cluster with RBAC (Role-Based Access Control) enabled is required
- Helm 2.7.2 or newer, or alternatively the ability to modify RBAC rules, is also required
- If you want to enable automatic sidecar injection, Kubernetes 1.9+ with the `admissionregistration` API is required, and the `kube-apiserver` process must have the `admission-control` flag set with the `MutatingAdmissionWebhook` and `ValidatingAdmissionWebhook` admission controllers added and listed in the correct order (a quick check is shown after this list).
- The `istio-init` chart must be run to completion prior to installing the `istio` chart.
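You can check whether the `admissionregistration` API is enabled on your cluster with the following standard command (the exact versions listed depend on your cluster):
```
$ kubectl api-versions | grep admissionregistration
```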
## Resources Required
The chart deploys pods that consume minimal resources, as specified in the `resources` configuration parameter.
## Installing the Chart
1. If a service account has not already been installed for Tiller, install one:
```
$ kubectl apply -f install/kubernetes/helm/helm-service-account.yaml
```
1. Install Tiller on your cluster with the service account:
```
$ helm init --service-account tiller
```
1. Set and create the namespace where Istio will be installed:
```
$ NAMESPACE=istio-system
$ kubectl create ns $NAMESPACE
```
1. If you are enabling `kiali`, you need to create the secret that contains the username and passphrase for the `kiali` dashboard:
```
$ echo -n 'admin' | base64
YWRtaW4=
$ echo -n '1f2d1e2e67df' | base64
MWYyZDFlMmU2N2Rm
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: kiali
  namespace: $NAMESPACE
  labels:
    app: kiali
type: Opaque
data:
  username: YWRtaW4=
  passphrase: MWYyZDFlMmU2N2Rm
EOF
```
1. If you are using security mode for Grafana, create the secret first as follows:
- Encode the username. You can change it to any username you want:
```
$ echo -n 'admin' | base64
YWRtaW4=
```
- Encode the passphrase. You can change it to any passphrase you want:
```
$ echo -n '1f2d1e2e67df' | base64
MWYyZDFlMmU2N2Rm
```
- Create the secret for Grafana:
```
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: grafana
  namespace: $NAMESPACE
  labels:
    app: grafana
type: Opaque
data:
  username: YWRtaW4=
  passphrase: MWYyZDFlMmU2N2Rm
EOF
```
1. Add the `istio.io` chart repository and point it at the release:
```
$ helm repo add istio.io https://storage.googleapis.com/istio-prerelease/daily-build/release-1.1-latest-daily/charts
```
1. To install the chart with the release name `istio` in the namespace `$NAMESPACE` you defined above (a quick verification is shown after these steps):
- With [automatic sidecar injection](https://istio.io/docs/setup/kubernetes/sidecar-injection/#automatic-sidecar-injection) (requires Kubernetes >=1.9.0):
```
$ helm install istio --name istio --namespace $NAMESPACE
```
- Without the sidecar injection webhook:
```
$ helm install istio --name istio --namespace $NAMESPACE --set sidecarInjectorWebhook.enabled=false
```
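Once the install completes, you can verify the release and the pods it created (standard Helm/kubectl commands; the exact pod list depends on which components you enabled):
```
$ helm status istio
$ kubectl get pods -n $NAMESPACE
```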
## Configuration
The Helm chart ships with reasonable defaults. There may be circumstances in which those defaults require overrides.
To override Helm values, use the `--set key=value` argument with the `helm install` command. Multiple `--set` arguments may be used in the same Helm operation, as in the example after the table.
The chart exposes configuration options which are currently in alpha. The exposed options are explained in the following table:
| Parameter | Description | Values | Default |
| --- | --- | --- | --- |
| `global.hub` | Specifies the HUB for most images used by Istio | registry/namespace | `docker.io/istio` |
| `global.tag` | Specifies the TAG for most images used by Istio | valid image tag | `0.8.latest` |
| `global.proxy.image` | Specifies the proxy image name | valid proxy name | `proxyv2` |
| `global.proxy.concurrency` | Specifies the number of proxy worker threads | number, 0 = auto | `0` |
| `global.imagePullPolicy` | Specifies the image pull policy | valid image pull policy | `IfNotPresent` |
| `global.controlPlaneSecurityEnabled` | Specifies whether control plane mTLS is enabled | true/false | `false` |
| `global.mtls.enabled` | Specifies whether mTLS is enabled by default between services | true/false | `false` |
| `global.rbacEnabled` | Specifies whether to create Istio RBAC rules or not | true/false | `true` |
| `global.arch.amd64` | Specifies the scheduling policy for `amd64` architectures | 0 = never, 1 = least preferred, 2 = no preference, 3 = most preferred | `2` |
| `global.arch.s390x` | Specifies the scheduling policy for `s390x` architectures | 0 = never, 1 = least preferred, 2 = no preference, 3 = most preferred | `2` |
| `global.arch.ppc64le` | Specifies the scheduling policy for `ppc64le` architectures | 0 = never, 1 = least preferred, 2 = no preference, 3 = most preferred | `2` |
| `ingress.enabled` | Specifies whether Ingress should be installed | true/false | `true` |
| `gateways.enabled` | Specifies whether gateways (both Ingress and Egress) should be installed | true/false | `true` |
| `gateways.istio-ingressgateway.enabled` | Specifies whether Ingress gateway should be installed | true/false | `true` |
| `gateways.istio-egressgateway.enabled` | Specifies whether Egress gateway should be installed | true/false | `true` |
| `sidecarInjectorWebhook.enabled` | Specifies whether automatic sidecar-injector should be installed | true/false | `true` |
| `galley.enabled` | Specifies whether Galley should be installed for server-side config validation | true/false | `true` |
| `security.enabled` | Specifies whether Citadel should be installed | true/false | `true` |
| `mixer.policy.enabled` | Specifies whether Mixer Policy should be installed | true/false | `true` |
| `mixer.telemetry.enabled` | Specifies whether Mixer Telemetry should be installed | true/false | `true` |
| `pilot.enabled` | Specifies whether Pilot should be installed | true/false | `true` |
| `grafana.enabled` | Specifies whether Grafana addon should be installed | true/false | `false` |
| `grafana.persist` | Specifies whether Grafana addon should persist config data | true/false | `false` |
| `grafana.storageClassName` | If `grafana.persist` is true, specifies the [`StorageClass`](https://kubernetes.io/docs/concepts/storage/storage-classes/) to use for the `PersistentVolumeClaim` | `StorageClass` | "" |
| `grafana.accessMode` | If `grafana.persist` is true, specifies the [`Access Mode`](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) to use for the `PersistentVolumeClaim` | RWO/ROX/RWX | `ReadWriteMany` |
| `prometheus.enabled` | Specifies whether Prometheus addon should be installed | true/false | `true` |
| `servicegraph.enabled` | Specifies whether Servicegraph addon should be installed | true/false | `false` |
| `tracing.enabled` | Specifies whether Tracing (Jaeger) addon should be installed | true/false | `false` |
| `kiali.enabled` | Specifies whether Kiali addon should be installed | true/false | `false` |
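For example, several of the options above can be overridden in a single install command (the release name and namespace follow the installation steps above; the keys are taken from the table):
```
$ helm install istio --name istio --namespace $NAMESPACE \
    --set global.mtls.enabled=true \
    --set grafana.enabled=true \
    --set tracing.enabled=true
```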
## Uninstalling the Chart
To uninstall/delete the `istio` release but continue to track the release:
```
$ helm delete istio
```
To uninstall/delete the `istio` release completely and make its name free for later use:
```
$ helm delete istio --purge
```
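A release deleted without `--purge` is still tracked by Tiller; you can confirm its state with:
```
$ helm ls --all istio
```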
# Istio
[Istio](https://istio.io/) is an open platform for providing a uniform way to integrate microservices, manage traffic flow across microservices, enforce policies and aggregate telemetry data.
## Introduction
This chart bootstraps the deployment of all Istio [components](https://istio.io/docs/concepts/what-is-istio/overview.html) on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
## Chart Details
This chart can install multiple istio components as subcharts:
- gateways
- sidecarInjectorWebhook
- galley
- mixer
- pilot
- security(citadel)
- tracing(jaeger)
- kiali
- grafana
- prometheus
To enable or disable each component, change the corresponding `enabled` flag.
Note: You will need to run `kubectl label namespace $your-namespace istio-injection=enabled` to enable automatic sidecar injection in your desired Kubernetes namespaces, as shown in the example below.
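For example, to enable injection for the `default` namespace and confirm that the label was applied (the namespace is illustrative):
```
$ kubectl label namespace default istio-injection=enabled
$ kubectl get namespace -L istio-injection
```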
apiVersion: v1
description: A Helm chart for Kubernetes
name: certmanager
version: 1.1.0
appVersion: 0.6.2
tillerVersion: ">=2.7.2"
certmanager has been deployed successfully!
More information on the different types of issuers and how to configure them
can be found in our documentation:
https://cert-manager.readthedocs.io/en/latest/reference/issuers.html
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "certmanager.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "certmanager.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "certmanager.chart" -}}
{{- .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: certmanager
namespace: {{ .Release.Namespace }}
labels:
app: certmanager
chart: {{ template "certmanager.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
replicas: 1
selector:
matchLabels:
app: certmanager
template:
metadata:
labels:
app: certmanager
chart: {{ template "certmanager.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.podLabels }}
{{ toYaml .Values.podLabels | indent 8 }}
{{- end }}
annotations:
sidecar.istio.io/inject: "false"
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
serviceAccountName: certmanager
{{- if .Values.global.priorityClassName }}
priorityClassName: "{{ .Values.global.priorityClassName }}"
{{- end }}
containers:
- name: certmanager
{{- if .Values.global.systemDefaultRegistry }}
image: "{{ template "system_default_registry" . }}{{ .Values.image.repository }}:{{ .Values.image.tag }}"
{{- else }}
image: {{ .Values.image.hub }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}
{{- end }}
imagePullPolicy: {{ .Values.global.imagePullPolicy }}
args:
- --cluster-resource-namespace=$(POD_NAMESPACE)
- --leader-election-namespace=$(POD_NAMESPACE)
{{- if .Values.extraArgs }}
{{ toYaml .Values.extraArgs | indent 8 }}
{{- end }}
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
resources:
{{ toYaml .Values.resources | indent 10 }}
{{- if .Values.podDnsPolicy }}
dnsPolicy: {{ .Values.podDnsPolicy }}
{{- end }}
{{- if .Values.podDnsConfig }}
dnsConfig:
{{ toYaml .Values.podDnsConfig | indent 8 }}
{{- end }}
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
namespace: {{ .Release.Namespace }}
labels:
app: certmanager
chart: {{ template "certmanager.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
email: {{ .Values.email }}
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-staging
http01: {}
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
name: letsencrypt
namespace: {{ .Release.Namespace }}
labels:
app: certmanager
chart: {{ template "certmanager.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: {{ .Values.email }}
privateKeySecretRef:
name: letsencrypt
http01: {}
{{- if .Values.global.defaultPodDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: certmanager
namespace: {{ .Release.Namespace }}
labels:
app: certmanager
chart: {{ template "certmanager.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
version: {{ .Chart.Version }}
{{- if .Values.podLabels }}
{{ toYaml .Values.podLabels | indent 4 }}
{{- end }}
spec:
{{- if .Values.global.defaultPodDisruptionBudget.enabled }}
{{ include "podDisruptionBudget.spec" .Values.global.defaultPodDisruptionBudget }}
{{- end }}
selector:
matchLabels:
app: certmanager
release: {{ .Release.Name }}
{{- end }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: certmanager
labels:
app: certmanager
chart: {{ template "certmanager.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
rules:
- apiGroups: ["certmanager.k8s.io"]
resources: ["certificates", "certificates/finalizers", "issuers", "clusterissuers", "orders", "orders/finalizers", "challenges"]
verbs: ["*"]
- apiGroups: [""]
resources: ["configmaps", "secrets", "events", "services", "pods"]
verbs: ["*"]
- apiGroups: ["extensions"]
resources: ["ingresses"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: certmanager
labels:
app: certmanager
chart: {{ template "certmanager.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: certmanager
subjects:
- name: certmanager
namespace: {{ .Release.Namespace }}
kind: ServiceAccount
apiVersion: v1
kind: ServiceAccount
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.global.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
metadata:
name: certmanager
namespace: {{ .Release.Namespace }}
labels:
app: certmanager
chart: {{ template "certmanager.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
# Certmanager uses ACME to sign certificates. Since the Istio gateways mount
# the TLS secrets, the Certificate CRDs must be created in the
# istio-system namespace. Once the certificate has been created, the
# gateway must be updated by adding 'secretVolumes'. After the gateway
# restarts, DestinationRules can be created using the ACME-signed certificates.
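# For example, a Certificate for the ingress gateway might look like the following
# (illustrative names and domain, using the certmanager.k8s.io/v1alpha1 schema that
# matches the controller version in this chart; the secretName mirrors the
# istio-ingressgateway-certs secretVolume of the gateways chart):
#
# apiVersion: certmanager.k8s.io/v1alpha1
# kind: Certificate
# metadata:
#   name: istio-ingressgateway-certs
#   namespace: istio-system
# spec:
#   secretName: istio-ingressgateway-certs
#   issuerRef:
#     name: letsencrypt-staging
#     kind: ClusterIssuer
#   commonName: example.com
#   dnsNames:
#   - example.com
#   acme:
#     config:
#     - http01:
#         ingressClass: istio
#       domains:
#       - example.com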
enabled: false
image:
hub: quay.io
repository: rancher/jetstack-cert-manager-controller
tag: v0.6.2
resources: {}
nodeSelector: {}
# Specify the pod anti-affinity that allows you to constrain which nodes
# your pod is eligible to be scheduled on, based on labels on pods that are
# already running on the node rather than based on labels on nodes.
# There are currently two types of anti-affinity:
#  "requiredDuringSchedulingIgnoredDuringExecution"
#  "preferredDuringSchedulingIgnoredDuringExecution"
# which denote “hard” vs. “soft” requirements. You can define your values
# in "podAntiAffinityLabelSelector" and "podAntiAffinityTermLabelSelector"
# respectively.
# For example:
# podAntiAffinityLabelSelector:
# - key: security
#   operator: In
#   values: S1,S2
#   topologyKey: "kubernetes.io/hostname"
# This pod anti-affinity rule says that the pod must not be scheduled
# onto a node if that node is already running a pod with a label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
apiVersion: v1
name: galley
version: 1.1.0
appVersion: 1.1.0
tillerVersion: ">=2.7.2"
description: Helm chart for galley deployment
keywords:
- istio
- galley
sources:
- http://github.com/istio/istio
engine: gotpl
icon: https://istio.io/favicons/android-192x192.png
approvers:
- cmluciano
- geeknoid
- ozevren
- ayj
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "galley.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "galley.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "galley.chart" -}}
{{- .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: istio-galley-{{ .Release.Namespace }}
labels:
app: {{ template "galley.name" . }}
chart: {{ template "galley.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
rules:
- apiGroups: ["admissionregistration.k8s.io"]
resources: ["validatingwebhookconfigurations"]
verbs: ["*"]
- apiGroups: ["config.istio.io"] # istio mixer CRD watcher
resources: ["*"]
verbs: ["get", "list", "watch"]
- apiGroups: ["networking.istio.io"]
resources: ["*"]
verbs: ["get", "list", "watch"]
- apiGroups: ["authentication.istio.io"]
resources: ["*"]
verbs: ["get", "list", "watch"]
- apiGroups: ["rbac.istio.io"]
resources: ["*"]
verbs: ["get", "list", "watch"]
- apiGroups: ["extensions","apps"]
resources: ["deployments"]
resourceNames: ["istio-galley"]
verbs: ["get"]
- apiGroups: [""]
resources: ["pods", "nodes", "services", "endpoints"]
verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
resources: ["ingresses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
resources: ["deployments/finalizers"]
resourceNames: ["istio-galley"]
verbs: ["update"]
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: istio-galley-admin-role-binding-{{ .Release.Namespace }}
labels:
app: {{ template "galley.name" . }}
chart: {{ template "galley.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: istio-galley-{{ .Release.Namespace }}
subjects:
- kind: ServiceAccount
name: istio-galley-service-account
namespace: {{ .Release.Namespace }}
apiVersion: v1
kind: ConfigMap
metadata:
name: istio-galley-configuration
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "galley.name" . }}
chart: {{ template "galley.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
istio: galley
data:
validatingwebhookconfiguration.yaml: |-
{{- include "validatingwebhookconfiguration.yaml.tpl" . | indent 4}}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: istio-galley
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "galley.name" . }}
chart: {{ template "galley.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
istio: galley
spec:
replicas: {{ .Values.replicaCount }}
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: {{ template "galley.name" . }}
chart: {{ template "galley.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
istio: galley
annotations:
sidecar.istio.io/inject: "false"
spec:
serviceAccountName: istio-galley-service-account
{{- if .Values.global.priorityClassName }}
priorityClassName: "{{ .Values.global.priorityClassName }}"
{{- end }}
containers:
- name: galley
image: "{{ template "system_default_registry" . }}{{ .Values.repository }}:{{ .Values.tag }}"
imagePullPolicy: {{ .Values.global.imagePullPolicy }}
ports:
- containerPort: 443
- containerPort: {{ .Values.global.monitoringPort }}
- containerPort: 9901
command:
- /usr/local/bin/galley
- server
- --meshConfigFile=/etc/mesh-config/mesh
- --livenessProbeInterval=1s
- --livenessProbePath=/healthliveness
- --readinessProbePath=/healthready
- --readinessProbeInterval=1s
{{- if $.Values.global.controlPlaneSecurityEnabled}}
- --insecure=false
{{- else }}
- --insecure=true
{{- end }}
{{- if not $.Values.global.useMCP }}
- --enable-server=false
{{- end }}
- --validation-webhook-config-file
- /etc/config/validatingwebhookconfiguration.yaml
- --monitoringPort={{ .Values.global.monitoringPort }}
{{- if $.Values.global.logging.level }}
- --log_output_level={{ $.Values.global.logging.level }}
{{- end}}
volumeMounts:
- name: certs
mountPath: /etc/certs
readOnly: true
- name: config
mountPath: /etc/config
readOnly: true
- name: mesh-config
mountPath: /etc/mesh-config
readOnly: true
livenessProbe:
exec:
command:
- /usr/local/bin/galley
- probe
- --probe-path=/healthliveness
- --interval=10s
initialDelaySeconds: 5
periodSeconds: 5
readinessProbe:
exec:
command:
- /usr/local/bin/galley
- probe
- --probe-path=/healthready
- --interval=10s
initialDelaySeconds: 5
periodSeconds: 5
resources:
{{- if .Values.resources }}
{{ toYaml .Values.resources | indent 12 }}
{{- else }}
{{ toYaml .Values.global.defaultResources | indent 12 }}
{{- end }}
volumes:
- name: certs
secret:
secretName: istio.istio-galley-service-account
- name: config
configMap:
name: istio-galley-configuration
- name: mesh-config
configMap:
name: istio
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
{{- if .Values.global.defaultPodDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: istio-galley
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "galley.name" . }}
chart: {{ template "galley.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
istio: galley
spec:
{{- if .Values.global.defaultPodDisruptionBudget.enabled }}
{{ include "podDisruptionBudget.spec" .Values.global.defaultPodDisruptionBudget }}
{{- end }}
selector:
matchLabels:
app: {{ template "galley.name" . }}
release: {{ .Release.Name }}
istio: galley
{{- end }}
apiVersion: v1
kind: Service
metadata:
name: istio-galley
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "galley.name" . }}
chart: {{ template "galley.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
istio: galley
spec:
ports:
- port: 443
name: https-validation
- port: {{ .Values.global.monitoringPort }}
name: http-monitoring
- port: 9901
name: grpc-mcp
selector:
istio: galley
apiVersion: v1
kind: ServiceAccount
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.global.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
metadata:
name: istio-galley-service-account
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "galley.name" . }}
chart: {{ template "galley.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{ define "validatingwebhookconfiguration.yaml.tpl" }}
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
name: istio-galley
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "galley.name" . }}
chart: {{ template "galley.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
istio: galley
webhooks:
{{- if .Values.global.configValidation }}
- name: pilot.validation.istio.io
clientConfig:
service:
name: istio-galley
namespace: {{ .Release.Namespace }}
path: "/admitpilot"
caBundle: ""
rules:
- operations:
- CREATE
- UPDATE
apiGroups:
- config.istio.io
apiVersions:
- v1alpha2
resources:
- httpapispecs
- httpapispecbindings
- quotaspecs
- quotaspecbindings
- operations:
- CREATE
- UPDATE
apiGroups:
- rbac.istio.io
apiVersions:
- "*"
resources:
- "*"
- operations:
- CREATE
- UPDATE
apiGroups:
- authentication.istio.io
apiVersions:
- "*"
resources:
- "*"
- operations:
- CREATE
- UPDATE
apiGroups:
- networking.istio.io
apiVersions:
- "*"
resources:
- destinationrules
- envoyfilters
- gateways
- serviceentries
- sidecars
- virtualservices
failurePolicy: Fail
- name: mixer.validation.istio.io
clientConfig:
service:
name: istio-galley
namespace: {{ .Release.Namespace }}
path: "/admitmixer"
caBundle: ""
rules:
- operations:
- CREATE
- UPDATE
apiGroups:
- config.istio.io
apiVersions:
- v1alpha2
resources:
- rules
- attributemanifests
- circonuses
- deniers
- fluentds
- kubernetesenvs
- listcheckers
- memquotas
- noops
- opas
- prometheuses
- rbacs
- solarwindses
- stackdrivers
- cloudwatches
- dogstatsds
- statsds
- stdios
- apikeys
- authorizations
- checknothings
# - kuberneteses
- listentries
- logentries
- metrics
- quotas
- reportnothings
- tracespans
failurePolicy: Fail
{{- end }}
{{- end }}
suite: Test Galley Deployment
templates:
- deployment.yaml
tests:
- it: should pass all kinds of assertion
asserts:
- equal:
path: spec.replicas
value: 1
- equal:
path: spec.template.spec.containers[0].ports
value:
- containerPort: 443
- containerPort: 15014
- containerPort: 9901
- isNull:
path: spec.template.nodeSelector
- isNotNull:
path: spec.template
- isNotEmpty:
path: spec.template.spec.containers[0].resources
- isNotEmpty:
path: spec.template.spec.containers[0]
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 1
suite: Test Galley RBAC
templates:
- clusterrole.yaml
tests:
- it: should pass all kinds of assertion
set:
asserts:
- isNotNull:
path: rules
- isNotEmpty:
path: rules
- contains:
path: rules
content:
apiGroups: ["admissionregistration.k8s.io"]
resources: ["validatingwebhookconfigurations"]
verbs: ["*"]
- isKind:
of: ClusterRole
- isAPIVersion:
of: rbac.authorization.k8s.io/v1
- hasDocuments:
count: 1
#
# galley configuration
#
enabled: true
replicaCount: 1
nodeSelector: {}
# Specify the pod anti-affinity that allows you to constrain which nodes
# your pod is eligible to be scheduled on, based on labels on pods that are
# already running on the node rather than based on labels on nodes.
# There are currently two types of anti-affinity:
#  "requiredDuringSchedulingIgnoredDuringExecution"
#  "preferredDuringSchedulingIgnoredDuringExecution"
# which denote “hard” vs. “soft” requirements. You can define your values
# in "podAntiAffinityLabelSelector" and "podAntiAffinityTermLabelSelector"
# respectively.
# For example:
# podAntiAffinityLabelSelector:
# - key: security
#   operator: In
#   values: S1,S2
#   topologyKey: "kubernetes.io/hostname"
# This pod anti-affinity rule says that the pod must not be scheduled
# onto a node if that node is already running a pod with a label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
apiVersion: v1
name: gateways
version: 1.1.0
appVersion: 1.1.0
tillerVersion: ">=2.7.2"
description: Helm chart for deploying Istio gateways
keywords:
- istio
- ingressgateway
- egressgateway
- gateways
sources:
- http://github.com/istio/istio
engine: gotpl
icon: https://istio.io/favicons/android-192x192.png
{{/* affinity - https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ */}}
{{- define "gatewaynodeaffinity" }}
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
{{- include "gatewayNodeAffinityRequiredDuringScheduling" . }}
preferredDuringSchedulingIgnoredDuringExecution:
{{- include "gatewayNodeAffinityPreferredDuringScheduling" . }}
{{- end }}
{{- define "gatewayNodeAffinityRequiredDuringScheduling" }}
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
{{- range $key, $val := .root.Values.global.arch }}
{{- if gt ($val | int) 0 }}
- {{ $key }}
{{- end }}
{{- end }}
{{- $nodeSelector := default .root.Values.global.defaultNodeSelector .nodeSelector -}}
{{- range $key, $val := $nodeSelector }}
- key: {{ $key }}
operator: In
values:
- {{ $val }}
{{- end }}
{{- end }}
{{- define "gatewayNodeAffinityPreferredDuringScheduling" }}
{{- range $key, $val := .root.Values.global.arch }}
{{- if gt ($val | int) 0 }}
- weight: {{ $val | int }}
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- {{ $key }}
{{- end }}
{{- end }}
{{- end }}
{{- define "gatewaypodAntiAffinity" }}
{{- if or .podAntiAffinityLabelSelector .podAntiAffinityTermLabelSelector}}
podAntiAffinity:
{{- if .podAntiAffinityLabelSelector }}
requiredDuringSchedulingIgnoredDuringExecution:
{{- include "gatewaypodAntiAffinityRequiredDuringScheduling" . }}
{{- end }}
{{- if .podAntiAffinityTermLabelSelector }}
preferredDuringSchedulingIgnoredDuringExecution:
{{- include "gatewaypodAntiAffinityPreferredDuringScheduling" . }}
{{- end }}
{{- end }}
{{- end }}
{{- define "gatewaypodAntiAffinityRequiredDuringScheduling" }}
{{- range $index, $item := .podAntiAffinityLabelSelector }}
- labelSelector:
matchExpressions:
- key: {{ $item.key }}
operator: {{ $item.operator }}
{{- if $item.values }}
values:
{{- $vals := split "," $item.values }}
{{- range $i, $v := $vals }}
- {{ $v }}
{{- end }}
{{- end }}
topologyKey: {{ $item.topologyKey }}
{{- end }}
{{- end }}
{{- define "gatewaypodAntiAffinityPreferredDuringScheduling" }}
{{- range $index, $item := .podAntiAffinityTermLabelSelector }}
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: {{ $item.key }}
operator: {{ $item.operator }}
{{- if $item.values }}
values:
{{- $vals := split "," $item.values }}
{{- range $i, $v := $vals }}
- {{ $v }}
{{- end }}
{{- end }}
topologyKey: {{ $item.topologyKey }}
weight: 100
{{- end }}
{{- end }}
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "gateway.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "gateway.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "gateway.chart" -}}
{{- .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- range $key, $spec := .Values }}
{{- if ne $key "enabled" }}
{{- if and $spec.enabled $spec.autoscaleEnabled $spec.autoscaleMin $spec.autoscaleMax }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: {{ $key }}
namespace: {{ $spec.namespace | default $.Release.Namespace }}
labels:
app: {{ $spec.labels.istio }}
chart: {{ template "gateway.chart" $ }}
heritage: {{ $.Release.Service }}
release: {{ $.Release.Name }}
spec:
maxReplicas: {{ $spec.autoscaleMax }}
minReplicas: {{ $spec.autoscaleMin }}
scaleTargetRef:
apiVersion: apps/v1beta1
kind: Deployment
name: {{ $key }}
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: {{ $spec.cpu.targetAverageUtilization }}
---
{{- end }}
{{- end }}
{{- end }}
{{- range $key, $spec := .Values }}
{{- if ne $key "enabled" }}
{{- if $spec.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ $key }}-{{ $.Release.Namespace }}
labels:
app: {{ $spec.labels.istio }}
chart: {{ template "gateway.chart" $ }}
heritage: {{ $.Release.Service }}
release: {{ $.Release.Name }}
rules:
- apiGroups: ["networking.istio.io"]
resources: ["virtualservices", "destinationrules", "gateways"]
verbs: ["get", "watch", "list", "update"]
---
{{- end }}
{{- end }}
{{- end }}
{{- range $key, $spec := .Values }}
{{- if ne $key "enabled" }}
{{- if $spec.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ $key }}-{{ $.Release.Namespace }}
labels:
app: {{ $spec.labels.istio }}
chart: {{ template "gateway.chart" $ }}
heritage: {{ $.Release.Service }}
release: {{ $.Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ $key }}-{{ $.Release.Namespace }}
subjects:
- kind: ServiceAccount
name: {{ $key }}-service-account
namespace: {{ $.Release.Namespace }}
---
{{- end }}
{{- end }}
{{- end }}
{{- range $key, $spec := .Values }}
{{- if ne $key "enabled" }}
{{- if $spec.enabled }}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ $key }}
namespace: {{ $spec.namespace | default $.Release.Namespace }}
labels:
chart: {{ template "gateway.chart" $ }}
heritage: {{ $.Release.Service }}
release: {{ $.Release.Name }}
{{- range $key, $val := $spec.labels }}
{{ $key }}: {{ $val }}
{{- end }}
spec:
{{- if not $spec.autoscaleEnabled }}
{{- if $spec.replicaCount }}
replicas: {{ $spec.replicaCount }}
{{- else }}
replicas: 1
{{- end }}
{{- end }}
template:
metadata:
labels:
chart: {{ template "gateway.chart" $ }}
heritage: {{ $.Release.Service }}
release: {{ $.Release.Name }}
{{- range $key, $val := $spec.labels }}
{{ $key }}: {{ $val }}
{{- end }}
annotations:
sidecar.istio.io/inject: "false"
{{- if $spec.podAnnotations }}
{{ toYaml $spec.podAnnotations | indent 8 }}
{{ end }}
spec:
serviceAccountName: {{ $key }}-service-account
{{- if $.Values.global.priorityClassName }}
priorityClassName: "{{ $.Values.global.priorityClassName }}"
{{- end }}
{{- if $.Values.global.proxy.enableCoreDump }}
initContainers:
- name: enable-core-dump
image: "{{ template "system_default_registry" $ }}{{ $.Values.global.proxy_init.repository }}:{{ $.Values.global.proxy_init.tag }}"
imagePullPolicy: IfNotPresent
command:
- /bin/sh
args:
- -c
- sysctl -w kernel.core_pattern=/var/lib/istio/core.proxy && ulimit -c unlimited
securityContext:
privileged: true
{{- end }}
containers:
{{- if $spec.sds }}
{{- if $spec.sds.enabled }}
- name: ingress-sds
image: "{{ template "system_default_registry" $ }}{{ $.Values.global.nodeAgent.repository }}:{{ $.Values.global.nodeAgent.tag }}"
imagePullPolicy: {{ $.Values.global.imagePullPolicy }}
env:
- name: "ENABLE_WORKLOAD_SDS"
value: "false"
- name: "ENABLE_INGRESS_GATEWAY_SDS"
value: "true"
- name: "INGRESS_GATEWAY_NAMESPACE"
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
volumeMounts:
- name: ingressgatewaysdsudspath
mountPath: /var/run/ingress_gateway
{{- end }}
{{- end }}
- name: istio-proxy
image: "{{ template "system_default_registry" $ }}{{ $.Values.global.proxy.repository }}:{{ $.Values.global.proxy.tag }}"
imagePullPolicy: {{ $.Values.global.imagePullPolicy }}
ports:
{{- range $key, $val := $spec.ports }}
- containerPort: {{ $val.port }}
{{- end }}
- containerPort: 15090
protocol: TCP
name: http-envoy-prom
args:
- proxy
- router
- --domain
- $(POD_NAMESPACE).svc.{{ $.Values.global.proxy.clusterDomain }}
{{- if $.Values.global.proxy.logLevel }}
- --proxyLogLevel={{ $.Values.global.proxy.logLevel }}
{{- end}}
{{- if $.Values.global.logging.level }}
- --log_output_level={{ $.Values.global.logging.level }}
{{- end}}
- --drainDuration
- '45s' #drainDuration
- --parentShutdownDuration
- '1m0s' #parentShutdownDuration
- --connectTimeout
- '10s' #connectTimeout
- --serviceCluster
- {{ $key }}
- --zipkinAddress
{{- if $.Values.global.tracer.zipkin.address }}
- {{ $.Values.global.tracer.zipkin.address }}
{{- else if $.Values.global.istioNamespace }}
- zipkin.{{ $.Values.global.istioNamespace }}:9411
{{- else }}
- zipkin:9411
{{- end }}
{{- if $.Values.global.proxy.envoyStatsd.enabled }}
- --statsdUdpAddress
- {{ $.Values.global.proxy.envoyStatsd.host }}:{{ $.Values.global.proxy.envoyStatsd.port }}
{{- end }}
{{- if $.Values.global.proxy.envoyMetricsService.enabled }}
- --envoyMetricsServiceAddress
- {{ $.Values.global.proxy.envoyMetricsService.host }}:{{ $.Values.global.proxy.envoyMetricsService.port }}
{{- end }}
- --proxyAdminPort
- "15000"
- --statusPort
- "15020"
{{- if $.Values.global.controlPlaneSecurityEnabled }}
- --controlPlaneAuthPolicy
- MUTUAL_TLS
- --discoveryAddress
{{- if $.Values.global.istioNamespace }}
- istio-pilot.{{ $.Values.global.istioNamespace }}:15011
{{- else }}
- istio-pilot:15011
{{- end }}
{{- else }}
- --controlPlaneAuthPolicy
- NONE
- --discoveryAddress
{{- if $.Values.global.istioNamespace }}
- istio-pilot.{{ $.Values.global.istioNamespace }}:15010
{{- else }}
- istio-pilot:15010
{{- end }}
{{- end }}
{{- if $.Values.global.trustDomain }}
- --trust-domain={{ $.Values.global.trustDomain }}
{{- end }}
readinessProbe:
failureThreshold: 30
httpGet:
path: /healthz/ready
port: 15020
scheme: HTTP
initialDelaySeconds: 1
periodSeconds: 2
successThreshold: 1
timeoutSeconds: 1
resources:
{{- if $spec.resources }}
{{ toYaml $spec.resources | indent 12 }}
{{- else }}
{{ toYaml $.Values.global.defaultResources | indent 12 }}
{{- end }}
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: HOST_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.hostIP
- name: ISTIO_META_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: ISTIO_META_CONFIG_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
{{- if $spec.sds }}
{{- if $spec.sds.enabled }}
- name: ISTIO_META_USER_SDS
value: "true"
{{- end }}
{{- end }}
{{- if $spec.env }}
{{- range $key, $val := $spec.env }}
- name: {{ $key }}
value: {{ $val }}
{{- end }}
{{- end }}
volumeMounts:
{{- if $.Values.global.sds.enabled }}
- name: sdsudspath
mountPath: /var/run/sds/uds_path
readOnly: true
{{- if $.Values.global.sds.useTrustworthyJwt }}
- name: istio-token
mountPath: /var/run/secrets/tokens
{{- end }}
{{- end }}
{{- if $spec.sds }}
{{- if $spec.sds.enabled }}
- name: ingressgatewaysdsudspath
mountPath: /var/run/ingress_gateway
{{- end }}
{{- end }}
- name: istio-certs
mountPath: /etc/certs
readOnly: true
{{- range $spec.secretVolumes }}
- name: {{ .name }}
mountPath: {{ .mountPath | quote }}
readOnly: true
{{- end }}
{{- if $spec.additionalContainers }}
{{ toYaml $spec.additionalContainers | indent 8 }}
{{- end }}
volumes:
{{- if $spec.sds }}
{{- if $spec.sds.enabled }}
- name: ingressgatewaysdsudspath
emptyDir: {}
{{- end }}
{{- end }}
{{- if $.Values.global.sds.enabled }}
- name: sdsudspath
hostPath:
path: /var/run/sds/uds_path
type: Socket
{{- if $.Values.global.sds.useTrustworthyJwt }}
- name: istio-token
projected:
sources:
- serviceAccountToken:
path: istio-token
expirationSeconds: 43200
audience: {{ $.Values.global.trustDomain }}
{{- end }}
{{- end }}
- name: istio-certs
secret:
secretName: istio.{{ $key }}-service-account
optional: true
{{- range $spec.secretVolumes }}
- name: {{ .name }}
secret:
secretName: {{ .secretName | quote }}
optional: true
{{- end }}
{{- range $spec.configVolumes }}
- name: {{ .name }}
configMap:
name: {{ .configMapName | quote }}
optional: true
{{- end }}
affinity:
{{- include "gatewaynodeaffinity" (dict "root" $ "nodeSelector" $spec.nodeSelector) | indent 6 }}
{{- include "gatewaypodAntiAffinity" (dict "podAntiAffinityLabelSelector" $spec.podAntiAffinityLabelSelector "podAntiAffinityTermLabelSelector" $spec.podAntiAffinityTermLabelSelector) | indent 6 }}
---
{{- end }}
{{- end }}
{{- end }}
{{- range $key, $spec := .Values }}
{{- if and (ne $key "enabled") }}
{{- if $spec.enabled }}
{{- if $.Values.global.defaultPodDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: {{ $key }}
namespace: {{ $spec.namespace | default $.Release.Namespace }}
labels:
chart: {{ template "gateway.chart" $ }}
heritage: {{ $.Release.Service }}
release: {{ $.Release.Name }}
{{- range $key, $val := $spec.labels }}
{{ $key }}: {{ $val }}
{{- end }}
spec:
{{- if $.Values.global.defaultPodDisruptionBudget.enabled }}
{{ include "podDisruptionBudget.spec" $.Values.global.defaultPodDisruptionBudget }}
{{- end }}
selector:
matchLabels:
release: {{ $.Release.Name }}
{{- range $key, $val := $spec.labels }}
{{ $key }}: {{ $val }}
{{- end }}
---
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.global.k8sIngress.enabled }}
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: istio-autogenerated-k8s-ingress
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "gateway.name" . }}
chart: {{ template "gateway.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
selector:
istio: {{ .Values.global.k8sIngress.gatewayName }}
servers:
- port:
number: 80
protocol: HTTP2
name: http
hosts:
- "*"
{{ if .Values.global.k8sIngress.enableHttps }}
- port:
number: 443
protocol: HTTPS
name: https-default
tls:
mode: SIMPLE
serverCertificate: /etc/istio/ingress-certs/tls.crt
privateKey: /etc/istio/ingress-certs/tls.key
hosts:
- "*"
{{ end }}
---
{{ end }}
{{- if .Values.global.meshExpansion.enabled }}
{{- if .Values.global.meshExpansion.useILB }}
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: meshexpansion-ilb-gateway
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "gateway.name" . }}
chart: {{ template "gateway.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
selector:
istio: ilbgateway
servers:
- port:
number: 15011
protocol: TCP
name: tcp-pilot
hosts:
- "*"
- port:
number: 8060
protocol: TCP
name: tcp-citadel
hosts:
- "*"
- port:
number: 15004
name: tls-mixer
protocol: TLS
tls:
mode: AUTO_PASSTHROUGH
hosts:
- "*"
---
{{- else }}
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: meshexpansion-gateway
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "gateway.name" . }}
chart: {{ template "gateway.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 15011
protocol: TCP
name: tcp-pilot
hosts:
- "*"
- port:
number: 8060
protocol: TCP
name: tcp-citadel
hosts:
- "*"
- port:
number: 15004
name: tls-mixer
protocol: TLS
tls:
mode: AUTO_PASSTHROUGH
hosts:
- "*"
---
{{- end }}
{{- end }}
{{- if .Values.global.multiCluster.enabled }}
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: istio-multicluster-egressgateway
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "gateway.name" . }}
chart: {{ template "gateway.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
selector:
istio: egressgateway
servers:
- hosts:
- "*.global"
port:
name: tls
number: 15443
protocol: TLS
tls:
mode: AUTO_PASSTHROUGH
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: istio-multicluster-ingressgateway
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "gateway.name" . }}
chart: {{ template "gateway.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- "*.global"
port:
name: tls
number: 15443
protocol: TLS
tls:
mode: AUTO_PASSTHROUGH
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: istio-multicluster-ingressgateway
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "gateway.name" . }}
chart: {{ template "gateway.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
workloadLabels:
istio: ingressgateway
filters:
- listenerMatch:
portNumber: 15443
listenerType: GATEWAY
insertPosition:
index: AFTER
relativeTo: envoy.filters.network.sni_cluster
filterName: envoy.filters.network.tcp_cluster_rewrite
filterType: NETWORK
filterConfig:
cluster_pattern: "\\.global$"
cluster_replacement: ".svc.{{ .Values.global.proxy.clusterDomain }}"
---
## To ensure all traffic to *.global is using mTLS
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: istio-multicluster-destinationrule
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "gateway.name" . }}
chart: {{ template "gateway.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
host: "*.global"
{{- if .Values.global.defaultConfigVisibilitySettings }}
exportTo:
- '*'
{{- end }}
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
---
{{- end }}
{{- range $key, $spec := .Values }}
{{- if ne $key "enabled" }}
{{- if $spec.enabled }}
{{- if and ($spec.sds) (eq $spec.sds.enabled true) }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ $key }}-sds
namespace: {{ $.Release.Namespace }}
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "watch", "list"]
---
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- range $key, $spec := .Values }}
{{- if ne $key "enabled" }}
{{- if $spec.enabled }}
{{- if and ($spec.sds) (eq $spec.sds.enabled true) }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ $key }}-sds
namespace: {{ $.Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ $key }}-sds
subjects:
- kind: ServiceAccount
name: {{ $key }}-service-account
---
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- range $key, $spec := .Values }}
{{- if ne $key "enabled" }}
{{- if $spec.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ $key }}
namespace: {{ $spec.namespace | default $.Release.Namespace }}
annotations:
{{- range $key, $val := $spec.serviceAnnotations }}
{{ $key }}: {{ $val | quote }}
{{- end }}
labels:
chart: {{ template "gateway.chart" $ }}
heritage: {{ $.Release.Service }}
release: {{ $.Release.Name }}
{{- range $key, $val := $spec.labels }}
{{ $key }}: {{ $val }}
{{- end }}
spec:
{{- if $spec.loadBalancerIP }}
loadBalancerIP: "{{ $spec.loadBalancerIP }}"
{{- end }}
{{- if $spec.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml $spec.loadBalancerSourceRanges | indent 4 }}
{{- end }}
{{- if $spec.externalTrafficPolicy }}
externalTrafficPolicy: {{$spec.externalTrafficPolicy }}
{{- end }}
{{- if $spec.externalIPs }}
externalIPs:
{{ toYaml $spec.externalIPs | indent 4 }}
{{- end }}
type: {{ .type }}
selector:
release: {{ $.Release.Name }}
{{- range $key, $val := $spec.labels }}
{{ $key }}: {{ $val }}
{{- end }}
ports:
{{- range $key, $val := $spec.ports }}
-
{{- range $pkey, $pval := $val }}
{{ $pkey}}: {{ $pval }}
{{- end }}
{{- end }}
# range addon ports
{{- range $key, $val := $spec.addOnPorts }}
-
{{- range $pkey, $pval := $val }}
{{ $pkey}}: {{ $pval }}
{{- end }}
{{- end }}
# range meshExpansion ports
{{- if $.Values.global.meshExpansion.enabled }}
{{- range $key, $val := $spec.meshExpansionPorts }}
-
{{- range $pkey, $pval := $val }}
{{ $pkey}}: {{ $pval }}
{{- end }}
{{- end }}
{{- end }}
---
{{- end }}
{{- end }}
{{- end }}
{{- range $key, $spec := .Values }}
{{- if ne $key "enabled" }}
{{- if $spec.enabled }}
apiVersion: v1
kind: ServiceAccount
{{- if $.Values.global.imagePullSecrets }}
imagePullSecrets:
{{- range $.Values.global.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
metadata:
name: {{ $key }}-service-account
namespace: {{ $spec.namespace | default $.Release.Namespace }}
labels:
app: {{ $spec.labels.app }}
chart: {{ template "gateway.chart" $ }}
heritage: {{ $.Release.Service }}
release: {{ $.Release.Name }}
---
{{- end }}
{{- end }}
{{- end }}
suite: Test Gateway Deployment
templates:
- deployment.yaml
tests:
- it: should pass all kinds of assertion
set:
istio-ingressgateway.enabled: true
istio-ilbgateway.enabled: false
istio-egressgateway.enabled: false
istio-ingressgateway.autoscaleEnabled: true
asserts:
- isNull:
path: spec.replicas
- contains:
path: spec.template.spec.containers[0].ports
content:
containerPort: 80
- contains:
path: spec.template.spec.containers[0].ports
content:
containerPort: 443
- isNull:
path: spec.template.nodeSelector
- isNotNull:
path: spec.template
- isNotEmpty:
path: spec.template.spec.containers[0].resources
- isNotEmpty:
path: spec.template.spec.containers[0]
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 1
- it: should deploy 3 gateways
set:
istio-ingressgateway.enabled: true
istio-ilbgateway.enabled: true
istio-egressgateway.enabled: true
asserts:
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 3
- it: should not deploy gateways
set:
istio-ingressgateway.enabled: false
istio-ilbgateway.enabled: false
istio-egressgateway.enabled: false
asserts:
- hasDocuments:
count: 0
#
# Gateways Configuration
# By default (if enabled), a pair of Ingress and Egress Gateways will be created for the mesh.
# You can add more gateways in addition to the defaults, but make sure they are uniquely named
# and that their NodePorts do not conflict.
# Disable a specific gateway by setting its `enabled` flag to false.
#
enabled: true
istio-ingressgateway:
enabled: true
#
# Secret Discovery Service (SDS) configuration for ingress gateway.
#
sds:
# If true, ingress gateway fetches credentials from SDS server to handle TLS connections.
enabled: false
# SDS server that watches kubernetes secrets and provisions credentials to ingress gateway.
# This server runs in the same pod as ingress gateway.
labels:
app: istio-ingressgateway
istio: ingressgateway
autoscaleEnabled: true
autoscaleMin: 1
autoscaleMax: 5
# specify replicaCount when autoscaleEnabled: false
# replicaCount: 1
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 2000m
memory: 256Mi
cpu:
targetAverageUtilization: 80
loadBalancerIP: ""
loadBalancerSourceRanges: []
externalIPs: []
serviceAnnotations: {}
podAnnotations: {}
type: LoadBalancer #change to NodePort, ClusterIP or LoadBalancer if need be
#externalTrafficPolicy: Local #change to Local to preserve source IP or Cluster for default behaviour or leave commented out
ports:
## You can add custom gateway ports
# Note that AWS ELB will by default perform health checks on the first port
# on this list. Setting this to the health check port will ensure that health
# checks always work. https://github.com/istio/istio/issues/12503
- port: 80
targetPort: 80
name: http2
nodePort: 31380
- port: 443
name: https
nodePort: 31390
### PORTS FOR UI/metrics #####
## Disable if not needed
addOnPorts:
# - port: 15029
# targetPort: 15029
# name: https-kiali
# - port: 15030
# targetPort: 15030
# name: https-prometheus
# - port: 15031
# targetPort: 15031
# name: https-grafana
# - port: 15032
# targetPort: 15032
# name: https-tracing
# # This is the port where sni routing happens
# - port: 15443
# targetPort: 15443
# name: tls
# - port: 15020
# targetPort: 15020
# name: status-port
#### MESH EXPANSION PORTS ########
# Pilot and Citadel mTLS ports are enabled in the gateway, but traffic will only be redirected
# to Pilot/Citadel if the global.meshExpansion settings are enabled.
# Delete these ports if mesh expansion is not enabled, to avoid
# exposing unnecessary ports to the internet.
meshExpansionPorts:
- port: 15011
targetPort: 15011
name: tcp-pilot-grpc-tls
- port: 15004
targetPort: 15004
name: tcp-mixer-grpc-tls
- port: 8060
targetPort: 8060
name: tcp-citadel-grpc-tls
- port: 853
targetPort: 853
name: tcp-dns-tls
####### end MESH EXPANSION PORTS ######
##############
secretVolumes:
- name: ingressgateway-certs
secretName: istio-ingressgateway-certs
mountPath: /etc/istio/ingressgateway-certs
- name: ingressgateway-ca-certs
secretName: istio-ingressgateway-ca-certs
mountPath: /etc/istio/ingressgateway-ca-certs
### Advanced options ############
env:
# A gateway with this mode ensures that pilot generates an additional
# set of clusters for internal services but without Istio mTLS, to
# enable cross cluster routing.
ISTIO_META_ROUTER_MODE: "sni-dnat"
nodeSelector: {}
# Specify the pod anti-affinity that allows you to constrain which nodes
# your pod is eligible to be scheduled on, based on labels on pods that are
# already running on the node rather than based on labels on nodes.
# There are currently two types of anti-affinity:
#  "requiredDuringSchedulingIgnoredDuringExecution"
#  "preferredDuringSchedulingIgnoredDuringExecution"
# which denote “hard” vs. “soft” requirements. You can define your values
# in "podAntiAffinityLabelSelector" and "podAntiAffinityTermLabelSelector"
# respectively.
# For example:
# podAntiAffinityLabelSelector:
# - key: security
#   operator: In
#   values: S1,S2
#   topologyKey: "kubernetes.io/hostname"
# This pod anti-affinity rule says that the pod must not be scheduled
# onto a node if that node is already running a pod with a label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
istio-egressgateway:
enabled: false
labels:
app: istio-egressgateway
istio: egressgateway
autoscaleEnabled: true
autoscaleMin: 1
autoscaleMax: 5
# specify replicaCount when autoscaleEnabled: false
# replicaCount: 1
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 2000m
memory: 256Mi
cpu:
targetAverageUtilization: 80
serviceAnnotations: {}
podAnnotations: {}
type: ClusterIP #change to NodePort or LoadBalancer if need be
ports:
- port: 80
name: http2
- port: 443
name: https
# This is the port where sni routing happens
- port: 15443
targetPort: 15443
name: tls
secretVolumes:
- name: egressgateway-certs
secretName: istio-egressgateway-certs
mountPath: /etc/istio/egressgateway-certs
- name: egressgateway-ca-certs
secretName: istio-egressgateway-ca-certs
mountPath: /etc/istio/egressgateway-ca-certs
#### Advanced options ########
env:
# Set this to "external" if and only if you want the egress gateway to
# act as a transparent SNI gateway that routes mTLS/TLS traffic to
# external services defined using service entries, where the service
# entry has resolution set to DNS and has one or more endpoints with
# the network field set to "external". By default it is set to "" so that
# the egress gateway sees the same set of endpoints as the sidecars,
# preserving backward compatibility.
# ISTIO_META_REQUESTED_NETWORK_VIEW: ""
# A gateway with this mode ensures that pilot generates an additional
# set of clusters for internal services but without Istio mTLS, to
# enable cross cluster routing.
ISTIO_META_ROUTER_MODE: "sni-dnat"
nodeSelector: {}
# Specify the pod anti-affinity that allows you to constrain which nodes
# your pod is eligible to be scheduled on, based on labels on pods that are
# already running on the node rather than based on labels on nodes.
# There are currently two types of anti-affinity:
#  "requiredDuringSchedulingIgnoredDuringExecution"
#  "preferredDuringSchedulingIgnoredDuringExecution"
# which denote “hard” vs. “soft” requirements. You can define your values
# in "podAntiAffinityLabelSelector" and "podAntiAffinityTermLabelSelector"
# respectively.
# For example:
# podAntiAffinityLabelSelector:
# - key: security
#   operator: In
#   values: S1,S2
#   topologyKey: "kubernetes.io/hostname"
# This pod anti-affinity rule says that the pod must not be scheduled
# onto a node if that node is already running a pod with a label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
# The mesh ILB gateway creates a gateway of type InternalLoadBalancer
# for mesh expansion. It exposes the mTLS ports for Pilot and Citadel (CA), as well
# as non-mTLS ports, to support upgrades and gradual transition.
istio-ilbgateway:
enabled: false
labels:
app: istio-ilbgateway
istio: ilbgateway
autoscaleEnabled: true
autoscaleMin: 1
autoscaleMax: 5
# specify replicaCount when autoscaleEnabled: false
# replicaCount: 1
cpu:
targetAverageUtilization: 80
resources:
requests:
cpu: 800m
memory: 512Mi
#limits:
# cpu: 1800m
# memory: 256Mi
loadBalancerIP: ""
serviceAnnotations:
cloud.google.com/load-balancer-type: "internal"
podAnnotations: {}
type: LoadBalancer
ports:
## You can add custom gateway ports. Note that the Google ILB default quota is 5 ports.
- port: 15011
name: grpc-pilot-mtls
# Insecure port - only for migration from 0.8. Will be removed in 1.1
- port: 15010
name: grpc-pilot
- port: 8060
targetPort: 8060
name: tcp-citadel-grpc-tls
# Port 5353 is forwarded to kube-dns
- port: 5353
name: tcp-dns
secretVolumes:
- name: ilbgateway-certs
secretName: istio-ilbgateway-certs
mountPath: /etc/istio/ilbgateway-certs
- name: ilbgateway-ca-certs
secretName: istio-ilbgateway-ca-certs
mountPath: /etc/istio/ilbgateway-ca-certs
nodeSelector: {}
apiVersion: v1
description: A Helm chart for Kubernetes
name: grafana
version: 1.1.0
appVersion: 1.1.0
tillerVersion: ">=2.7.2"
#!/bin/bash
set -e
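# Rewrite the dashboard JSON files bundled with this chart, replacing the
# templated ${DS_PROMETHEUS} datasource placeholder with the literal
# "Prometheus" datasource configured in values.yaml.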
THIS_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
UX=$(uname)
for db in "${THIS_DIR}"/dashboards/*.json; do
if [[ ${UX} == "Darwin" ]]; then
# shellcheck disable=SC2016
sed -i '' 's/${DS_PROMETHEUS}/Prometheus/g' "$db"
else
# shellcheck disable=SC2016
sed -i 's/${DS_PROMETHEUS}/Prometheus/g' "$db"
fi
done
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "grafana.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "grafana.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
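{{/*
Example (a sketch, not rendered by the chart): with release name "istio" and no
overrides, "grafana.fullname" resolves to "istio-grafana"; if the release were
named "grafana-prod" (which already contains the chart name), it would resolve
to "grafana-prod". Both results are truncated to 63 characters.
*/}}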
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "grafana.chart" -}}
{{- .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: istio-grafana-custom-resources
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
istio: grafana
data:
custom-resources.yaml: |-
{{- include "grafana-default.yaml.tpl" . | indent 4}}
run.sh: |-
{{- include "install-custom-resources.sh.tpl" . | indent 4}}
{{- $files := .Files }}
{{- range $path, $bytes := .Files.Glob "dashboards/*.json" }}
{{- $filename := trimSuffix (ext $path) (base $path) }}
apiVersion: v1
kind: ConfigMap
metadata:
name: istio-grafana-configuration-dashboards-{{ $filename }}
namespace: {{ $.Release.Namespace }}
labels:
app: {{ template "grafana.name" $ }}
chart: {{ template "grafana.chart" $ }}
heritage: {{ $.Release.Service }}
release: {{ $.Release.Name }}
istio: grafana
data:
{{ base $path }}: '{{ $files.Get $path }}'
---
{{- end }}
apiVersion: v1
kind: ConfigMap
metadata:
name: istio-grafana
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
istio: grafana
data:
{{- if .Values.datasources }}
{{- range $key, $value := .Values.datasources }}
{{ $key }}: |
{{ toYaml $value | indent 4 }}
{{- end -}}
{{- end -}}
{{- if .Values.dashboardProviders }}
{{- range $key, $value := .Values.dashboardProviders }}
{{ $key }}: |
{{ toYaml $value | indent 4 }}
{{- end -}}
{{- end -}}
apiVersion: v1
kind: ServiceAccount
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.global.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
metadata:
name: istio-grafana-post-install-account
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: istio-grafana-post-install-{{ .Release.Namespace }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
rules:
- apiGroups: ["authentication.istio.io"] # needed to create default authn policy
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: istio-grafana-post-install-role-binding-{{ .Release.Namespace }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: istio-grafana-post-install-{{ .Release.Namespace }}
subjects:
- kind: ServiceAccount
name: istio-grafana-post-install-account
namespace: {{ .Release.Namespace }}
---
apiVersion: batch/v1
kind: Job
metadata:
name: istio-grafana-post-install-{{ .Values.global.tag | printf "%v" | trunc 32 }}
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": post-install
"helm.sh/hook-delete-policy": hook-succeeded
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
template:
metadata:
name: istio-grafana-post-install
labels:
app: istio-grafana
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
serviceAccountName: istio-grafana-post-install-account
containers:
- name: kubectl
image: "{{ template "system_default_registry" . }}{{ .Values.global.kubectl.repository }}:{{ .Values.global.kubectl.tag }}"
command: [ "/bin/bash", "/tmp/grafana/run.sh", "/tmp/grafana/custom-resources.yaml" ]
volumeMounts:
- mountPath: "/tmp/grafana"
name: tmp-configmap-grafana
volumes:
- name: tmp-configmap-grafana
configMap:
name: istio-grafana-custom-resources
restartPolicy: OnFailure
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: grafana
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
replicas: {{ .Values.replicaCount }}
template:
metadata:
labels:
app: grafana
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
sidecar.istio.io/inject: "false"
spec:
securityContext:
fsGroup: 472
{{- if .Values.global.priorityClassName }}
priorityClassName: "{{ .Values.global.priorityClassName }}"
{{- end }}
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.global.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ template "system_default_registry" . }}{{ .Values.repository }}:{{ .Values.tag }}"
imagePullPolicy: {{ .Values.global.imagePullPolicy }}
ports:
- containerPort: 3000
readinessProbe:
httpGet:
path: /login
port: 3000
env:
- name: GRAFANA_PORT
value: "3000"
{{- if .Values.security.enabled }}
- name: GF_SECURITY_ADMIN_USER
valueFrom:
secretKeyRef:
name: {{ .Values.security.secretName }}
key: {{ .Values.security.usernameKey }}
- name: GF_SECURITY_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Values.security.secretName }}
key: {{ .Values.security.passphraseKey }}
- name: GF_AUTH_BASIC_ENABLED
value: "true"
- name: GF_AUTH_ANONYMOUS_ENABLED
value: "false"
- name: GF_AUTH_DISABLE_LOGIN_FORM
value: "false"
{{- else }}
- name: GF_AUTH_BASIC_ENABLED
value: "false"
- name: GF_AUTH_ANONYMOUS_ENABLED
value: "true"
- name: GF_AUTH_ANONYMOUS_ORG_ROLE
value: Admin
{{- end }}
- name: GF_PATHS_DATA
value: /data/grafana
resources:
{{- if .Values.resources }}
{{ toYaml .Values.resources | indent 12 }}
{{- else }}
{{ toYaml .Values.global.defaultResources | indent 12 }}
{{- end }}
volumeMounts:
- name: data
mountPath: /data/grafana
{{- range $path, $bytes := .Files.Glob "dashboards/*.json" }}
{{- $filename := trimSuffix (ext $path) (base $path) }}
- name: dashboards-istio-{{ $filename }}
mountPath: "/var/lib/grafana/dashboards/istio/{{ base $path }}"
subPath: {{ base $path }}
readOnly: true
{{- end }}
- name: config
mountPath: "/etc/grafana/provisioning/datasources/datasources.yaml"
subPath: datasources.yaml
- name: config
mountPath: "/etc/grafana/provisioning/dashboards/dashboardproviders.yaml"
subPath: dashboardproviders.yaml
- name: grafana-proxy
image: "{{ template "system_default_registry" . }}{{ .Values.global.nginxProxy.repository }}:{{ .Values.global.nginxProxy.tag }}"
args:
- nginx
- -g
- daemon off;
- -c
- /nginx/nginx.conf
ports:
- name: http
containerPort: 80
protocol: TCP
volumeMounts:
- mountPath: /nginx/
name: grafana-nginx
{{- if and .Values.resources .Values.resources.proxy }}
resources:
{{ toYaml .Values.resources.proxy | indent 10 }}
{{- end }}
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
volumes:
- name: config
configMap:
name: istio-grafana
- name: grafana-nginx
configMap:
name: grafana-nginx
items:
- key: nginx.conf
mode: 438
path: nginx.conf
- name: data
{{- if .Values.persistence.enabled }}
persistentVolumeClaim:
claimName: {{ .Values.persistence.existingClaim | default ("istio-grafana-pvc") }}
{{- else }}
emptyDir: {}
{{- end }}
{{- range $path, $bytes := .Files.Glob "dashboards/*.json" }}
{{- $filename := trimSuffix (ext $path) (base $path) }}
- name: dashboards-istio-{{ $filename }}
configMap:
name: istio-grafana-configuration-dashboards-{{ $filename }}
{{- end }}
{{ define "grafana-default.yaml.tpl" }}
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
name: grafana-ports-mtls-disabled
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
targets:
- name: grafana
ports:
- number: {{ .Values.service.externalPort }}
{{- end }}
{{- if .Values.ingress.enabled -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: grafana
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
{{- range $key, $value := .Values.ingress.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
rules:
{{- if .Values.ingress.hosts }}
{{- range $host := .Values.ingress.hosts }}
- host: {{ $host }}
http:
paths:
- path: {{ if $.Values.contextPath }} {{ $.Values.contextPath }} {{ else }} / {{ end }}
backend:
serviceName: grafana
servicePort: 3000
{{- end -}}
{{- else }}
- http:
paths:
- path: {{ if .Values.contextPath }} {{ .Values.contextPath }} {{ else }} / {{ end }}
backend:
serviceName: grafana
servicePort: 3000
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{ toYaml .Values.ingress.tls | indent 4 }}
{{- end -}}
{{- end -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana-nginx
labels:
app: grafana-nginx
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
data:
nginx.conf: |-
user nginx;
worker_processes auto;
error_log /dev/stdout warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
log_format main '[$time_local - $status] $remote_addr - $remote_user $request ($http_referer)';
proxy_connect_timeout 10;
proxy_read_timeout 180;
proxy_send_timeout 5;
proxy_buffering off;
proxy_cache_path /tmp/nginx levels=1:2 keys_zone=my_zone:100m inactive=1d max_size=10g;
server {
listen 80;
access_log off;
gzip on;
gzip_min_length 1k;
gzip_comp_level 2;
gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript image/jpeg image/gif image/png;
gzip_vary on;
gzip_disable "MSIE [1-6]\.";
proxy_set_header Host $host;
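# The locations below proxy to the Grafana container on localhost:3000 and
# rewrite absolute URLs in API responses and HTML so the dashboard keeps
# working when served behind a path-based reverse proxy (for example, the
# Rancher cluster proxy).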
location /api/dashboards {
proxy_pass http://localhost:3000;
}
location /api/search {
proxy_pass http://localhost:3000;
sub_filter_types application/json;
sub_filter_once off;
sub_filter '"url":"/d' '"url":"d';
}
location / {
proxy_cache my_zone;
proxy_cache_valid 200 302 1d;
proxy_cache_valid 301 30d;
proxy_cache_valid any 5m;
proxy_cache_bypass $http_cache_control;
add_header X-Proxy-Cache $upstream_cache_status;
add_header Cache-Control "public";
proxy_pass http://localhost:3000/;
sub_filter_types text/html;
sub_filter_once off;
sub_filter '"appSubUrl":""' '"appSubUrl":"."';
sub_filter '"url":"/' '"url":"./';
sub_filter ':"/avatar/' ':"avatar/';
if ($request_filename ~ .*\.(?:js|css|jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm)$) {
expires 90d;
}
}
}
}
{{- if .Values.persistence.enabled }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: istio-grafana-pvc
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
storageClassName: {{ .Values.persistence.storageClass }}
accessModes:
- {{ .Values.persistence.accessMode }}
resources:
requests:
storage: {{ .Values.persistence.size }}
{{- end }}
apiVersion: v1
kind: Service
metadata:
name: grafana
namespace: {{ .Release.Namespace }}
annotations:
{{- range $key, $val := .Values.service.annotations }}
{{ $key }}: {{ $val | quote }}
{{- end }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
kubernetes.io/cluster-service: "true"
spec:
type: {{ .Values.service.type }}
ports:
- name: http-access-grafana
protocol: TCP
targetPort: 80
port: 80
selector:
app: grafana
{{- if .Values.global.enableHelmTest }}
apiVersion: v1
kind: Pod
metadata:
name: {{ template "grafana.fullname" . }}-test
namespace: {{ .Release.Namespace }}
labels:
app: grafana-test
chart: {{ template "grafana.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
istio: grafana
annotations:
sidecar.istio.io/inject: "false"
helm.sh/hook: test-success
spec:
{{- if .Values.global.priorityClassName }}
priorityClassName: "{{ .Values.global.priorityClassName }}"
{{- end }}
containers:
- name: "{{ template "grafana.fullname" . }}-test"
image: "{{ template "system_default_registry" . }}{{ .Values.global.proxy.repository }}:{{ .Values.global.proxy.tag }}"
imagePullPolicy: "{{ .Values.global.imagePullPolicy }}"
command: ['curl']
args: ['http://grafana:{{ .Values.grafana.service.externalPort }}']
restartPolicy: Never
affinity:
{{- include "nodeaffinity" . | indent 4 }}
{{- end }}
suite: Test Istio Grafana Deployment
templates:
- deployment.yaml
tests:
- it: should pass all kinds of assertion
set:
replicaCount: 1
asserts:
- equal:
path: spec.replicas
value: 1
- isNull:
path: spec.template.nodeSelector
- isNotNull:
path: spec.template
- isNotEmpty:
path: spec.template.spec.containers[0].resources
- isNotEmpty:
path: spec.template.spec.containers[0]
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 1
#
# addon grafana configuration
#
enabled: false
replicaCount: 1
ingress:
enabled: false
## Used to create an Ingress record.
hosts:
- grafana.local
annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
tls:
# Secrets must be manually created in the namespace.
# - secretName: grafana-tls
# hosts:
# - grafana.local
persistence:
enabled: false
storageClass: ""
accessMode: ReadWriteOnce
existingClaim: ""
size: 5Gi
security:
enabled: false
secretName: grafana
usernameKey: username
passphraseKey: passphrase
nodeSelector: {}
# Specify pod anti-affinity to constrain the nodes your pod is eligible to be
# scheduled onto, based on the labels of pods already running on the node
# rather than on the labels of the node itself.
# There are currently two types of anti-affinity:
# "requiredDuringSchedulingIgnoredDuringExecution"
# "preferredDuringSchedulingIgnoredDuringExecution"
# which denote "hard" and "soft" requirements respectively. Define your values
# in "podAntiAffinityLabelSelector" and "podAntiAffinityTermLabelSelector"
# accordingly.
# For example:
# podAntiAffinityLabelSelector:
# - key: security
# operator: In
# values: S1,S2
# topologyKey: "kubernetes.io/hostname"
# This pod anti-affinity rule requires that the pod not be scheduled onto a
# node that is already running a pod with a label whose key is "security" and
# whose value is "S1".
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
contextPath: /grafana
service:
annotations: {}
type: ClusterIP
datasources:
datasources.yaml:
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
orgId: 1
url: http://prometheus:9090
access: proxy
isDefault: true
jsonData:
timeInterval: 5s
editable: true
dashboardProviders:
dashboardproviders.yaml:
apiVersion: 1
providers:
- name: 'istio'
orgId: 1
folder: 'istio'
type: file
disableDeletion: false
options:
path: /var/lib/grafana/dashboards/istio
resources: {}
apiVersion: v1
description: Istio CoreDNS provides DNS resolution for services in multicluster setups.
name: istiocoredns
version: 1.1.0
appVersion: 0.1
tillerVersion: ">=2.7.2"
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "istiocoredns.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "istiocoredns.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "istiocoredns.chart" -}}
{{- .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: istiocoredns
labels:
app: {{ template "istiocoredns.name" . }}
chart: {{ template "istiocoredns.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
rules:
- apiGroups: ["networking.istio.io"]
resources: ["*"]
verbs: ["get", "watch", "list"]
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: istio-istiocoredns-role-binding-{{ .Release.Namespace }}
labels:
app: {{ template "istiocoredns.name" . }}
chart: {{ template "istiocoredns.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: istiocoredns
subjects:
- kind: ServiceAccount
name: istiocoredns-service-account
namespace: {{ .Release.Namespace }}
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "istiocoredns.name" . }}
chart: {{ template "istiocoredns.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
data:
Corefile: |
.:53 {
errors
health
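# Queries for the "global" zone are forwarded over gRPC to the
# istio-coredns-plugin sidecar listening on 127.0.0.1:8053; all other
# queries fall through to the cluster's upstream resolvers via
# /etc/resolv.conf below.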
proxy global 127.0.0.1:8053 {
protocol grpc insecure
}
prometheus :9153
proxy . /etc/resolv.conf
cache 30
reload
}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: istiocoredns
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "istiocoredns.name" . }}
chart: {{ template "istiocoredns.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
replicas: {{ .Values.replicaCount }}
template:
metadata:
name: istiocoredns
labels:
app: istiocoredns
chart: {{ template "istiocoredns.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
sidecar.istio.io/inject: "false"
spec:
serviceAccountName: istiocoredns-service-account
{{- if .Values.global.priorityClassName }}
priorityClassName: "{{ .Values.global.priorityClassName }}"
{{- end }}
containers:
- name: coredns
image: "{{ template "system_default_registry" . }}{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: IfNotPresent
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
resources:
{{- if .Values.resources }}
{{ toYaml .Values.resources | indent 10 }}
{{- else }}
{{ toYaml .Values.global.defaultResources | indent 10 }}
{{- end }}
- name: istio-coredns-plugin
command:
- /usr/local/bin/plugin
image: "{{ template "system_default_registry" . }}{{ .Values.pluginImage.repository }}:{{ .Values.pluginImage.tag }}"
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8053
name: dns-grpc
protocol: TCP
resources:
{{- if .Values.resources }}
{{ toYaml .Values.resources | indent 10 }}
{{- else }}
{{ toYaml .Values.global.defaultResources | indent 10 }}
{{- end }}
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
apiVersion: v1
kind: Service
metadata:
name: istiocoredns
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "istiocoredns.name" . }}
chart: {{ template "istiocoredns.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
selector:
app: istiocoredns
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
apiVersion: v1
kind: ServiceAccount
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.global.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
metadata:
name: istiocoredns-service-account
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "istiocoredns.name" . }}
chart: {{ template "istiocoredns.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
#
# addon istiocoredns configuration
#
enabled: false
replicaCount: 1
# Source code for the plugin can be found at
# https://github.com/istio-ecosystem/istio-coredns-plugin
# The plugin listens for DNS requests from the coredns server at 127.0.0.1:8053
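# To route a shared zone (for example ".global") through this service, the
# cluster's DNS must be pointed at the istiocoredns Service ClusterIP. A
# minimal kube-dns stub-domain sketch follows; it is an illustration only, and
# the exact ConfigMap and zone name depend on your cluster DNS and
# multicluster setup:
#
# apiVersion: v1
# kind: ConfigMap
# metadata:
#   name: kube-dns
#   namespace: kube-system
# data:
#   stubDomains: |
#     {"global": ["<istiocoredns service ClusterIP>"]}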
nodeSelector: {}
# Specify pod anti-affinity to constrain the nodes your pod is eligible to be
# scheduled onto, based on the labels of pods already running on the node
# rather than on the labels of the node itself.
# There are currently two types of anti-affinity:
# "requiredDuringSchedulingIgnoredDuringExecution"
# "preferredDuringSchedulingIgnoredDuringExecution"
# which denote "hard" and "soft" requirements respectively. Define your values
# in "podAntiAffinityLabelSelector" and "podAntiAffinityTermLabelSelector"
# accordingly.
# For example:
# podAntiAffinityLabelSelector:
# - key: security
# operator: In
# values: S1,S2
# topologyKey: "kubernetes.io/hostname"
# This pod anti-affinity rule requires that the pod not be scheduled onto a
# node that is already running a pod with a label whose key is "security" and
# whose value is "S1".
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
apiVersion: v1
description: Kiali is an open source project for service mesh observability; refer to https://www.kiali.io for details.
name: kiali
version: 1.1.0
appVersion: 0.17
tillerVersion: ">=2.7.2"
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "kiali.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "kiali.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "kiali.chart" -}}
{{- .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kiali
labels:
app: {{ template "kiali.name" . }}
chart: {{ template "kiali.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
rules:
- apiGroups: [""]
resources:
- configmaps
- endpoints
- namespaces
- nodes
- pods
- services
- replicationcontrollers
verbs:
- get
- list
- watch
- apiGroups: ["extensions", "apps"]
resources:
- deployments
- statefulsets
- replicasets
verbs:
- get
- list
- watch
- apiGroups: ["autoscaling"]
resources:
- horizontalpodautoscalers
verbs:
- get
- list
- watch
- apiGroups: ["batch"]
resources:
- cronjobs
- jobs
verbs:
- get
- list
- watch
- apiGroups: ["config.istio.io"]
resources:
- apikeys
- authorizations
- checknothings
- circonuses
- deniers
- fluentds
- handlers
- kubernetesenvs
- kuberneteses
- listcheckers
- listentries
- logentries
- memquotas
- metrics
- opas
- prometheuses
- quotas
- quotaspecbindings
- quotaspecs
- rbacs
- reportnothings
- rules
- solarwindses
- stackdrivers
- statsds
- stdios
verbs:
- create
- delete
- get
- list
- patch
- watch
- apiGroups: ["networking.istio.io"]
resources:
- destinationrules
- gateways
- serviceentries
- virtualservices
verbs:
- create
- delete
- get
- list
- patch
- watch
- apiGroups: ["authentication.istio.io"]
resources:
- policies
- meshpolicies
verbs:
- create
- delete
- get
- list
- patch
- watch
- apiGroups: ["rbac.istio.io"]
resources:
- clusterrbacconfigs
- rbacconfigs
- serviceroles
- servicerolebindings
verbs:
- create
- delete
- get
- list
- patch
- watch
- apiGroups: ["monitoring.kiali.io"]
resources:
- monitoringdashboards
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kiali-viewer
labels:
app: {{ template "kiali.name" . }}
chart: {{ template "kiali.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
rules:
- apiGroups: [""]
resources:
- configmaps
- endpoints
- namespaces
- nodes
- pods
- services
- replicationcontrollers
verbs:
- get
- list
- watch
- apiGroups: ["extensions", "apps"]
resources:
- deployments
- statefulsets
- replicasets
verbs:
- get
- list
- watch
- apiGroups: ["autoscaling"]
resources:
- horizontalpodautoscalers
verbs:
- get
- list
- watch
- apiGroups: ["batch"]
resources:
- cronjobs
- jobs
verbs:
- get
- list
- watch
- apiGroups: ["config.istio.io"]
resources:
- apikeys
- authorizations
- checknothings
- circonuses
- deniers
- fluentds
- handlers
- kubernetesenvs
- kuberneteses
- listcheckers
- listentries
- logentries
- memquotas
- metrics
- opas
- prometheuses
- quotas
- quotaspecbindings
- quotaspecs
- rbacs
- reportnothings
- rules
- servicecontrolreports
- servicecontrols
- solarwindses
- stackdrivers
- statsds
- stdios
verbs:
- get
- list
- watch
- apiGroups: ["networking.istio.io"]
resources:
- destinationrules
- gateways
- serviceentries
- virtualservices
verbs:
- get
- list
- watch
- apiGroups: ["authentication.istio.io"]
resources:
- policies
- meshpolicies
verbs:
- get
- list
- watch
- apiGroups: ["rbac.istio.io"]
resources:
- clusterrbacconfigs
- rbacconfigs
- serviceroles
- servicerolebindings
verbs:
- get
- list
- watch
- apiGroups: ["monitoring.kiali.io"]
resources:
- monitoringdashboards
verbs:
- get
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: istio-kiali-admin-role-binding-{{ .Release.Namespace }}
labels:
app: {{ template "kiali.name" . }}
chart: {{ template "kiali.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kiali
subjects:
- kind: ServiceAccount
name: kiali-service-account
namespace: {{ .Release.Namespace }}
apiVersion: v1
kind: ConfigMap
metadata:
name: kiali
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "kiali.name" . }}
chart: {{ template "kiali.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
data:
config.yaml: |
istio_namespace: {{ .Release.Namespace }}
server:
port: 20001
external_services:
istio:
url_service_version: http://istio-pilot:8080/version
jaeger:
service: "jaeger-query"
{{- if .Values.dashboard.jaegerURL }}
url: {{ .Values.dashboard.jaegerURL }}
{{- end }}
grafana:
custom_metrics_url: "http://prometheus.{{ .Release.Namespace }}:9090"
{{- if .Values.dashboard.grafanaURL }}
url: {{ .Values.dashboard.grafanaURL }}
{{- end }}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kiali
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "kiali.name" . }}
chart: {{ template "kiali.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: kiali
template:
metadata:
name: kiali
labels:
app: kiali
chart: {{ template "kiali.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
sidecar.istio.io/inject: "false"
spec:
serviceAccountName: kiali-service-account
{{- if .Values.global.priorityClassName }}
priorityClassName: "{{ .Values.global.priorityClassName }}"
{{- end }}
containers:
- image: "{{ template "system_default_registry" . }}{{ .Values.repository }}:{{ .Values.tag }}"
name: kiali
command:
- "/opt/kiali/kiali"
- "-config"
- "/kiali-configuration/config.yaml"
- "-v"
- "4"
env:
{{- if and .Values.global.rancher (and .Values.global.rancher.domain .Values.global.rancher.clusterId) }}
{{- if not .Values.dashboard.jaegerURL }}
- name: JAEGER_URL
value: 'https://{{ .Values.global.rancher.domain }}/k8s/clusters/{{ .Values.global.rancher.clusterId }}/api/v1/namespaces/{{ .Release.Namespace }}/services/tracing:80/proxy/jaeger'
{{- end }}
{{- if not .Values.dashboard.grafanaURL }}
- name: GRAFANA_URL
value: 'https://{{ .Values.global.rancher.domain }}/k8s/clusters/{{ .Values.global.rancher.clusterId }}/api/v1/namespaces/{{ .Release.Namespace }}/services/http:grafana:80/proxy/'
{{- end }}
{{- end }}
- name: ACTIVE_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: AUTH_STRATEGY
value: {{ .Values.dashboard.authStrategy }}
- name: SERVER_CREDENTIALS_USERNAME
valueFrom:
secretKeyRef:
name: {{ .Values.dashboard.secretName }}
key: username
optional: true
- name: SERVER_CREDENTIALS_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Values.dashboard.secretName }}
key: passphrase
optional: true
- name: PROMETHEUS_SERVICE_URL
value: {{ .Values.prometheusAddr }}
{{- if .Values.contextPath }}
- name: SERVER_WEB_ROOT
value: {{ .Values.contextPath }}
{{- end }}
volumeMounts:
- name: kiali-configuration
mountPath: "/kiali-configuration"
resources:
{{- if .Values.resources }}
{{ toYaml .Values.resources | indent 10 }}
{{- else }}
{{ toYaml .Values.global.defaultResources | indent 10 }}
{{- end }}
- name: kiali-proxy
image: "{{ template "system_default_registry" . }}{{ .Values.global.nginxProxy.repository }}:{{ .Values.global.nginxProxy.tag }}"
args:
- nginx
- -g
- daemon off;
- -c
- /nginx/nginx.conf
ports:
- name: http
containerPort: 80
protocol: TCP
volumeMounts:
- mountPath: /nginx/
name: kiali-nginx
{{- if and .Values.resources .Values.resources.proxy }}
resources:
{{ toYaml .Values.resources.proxy | indent 10 }}
{{- end }}
volumes:
- name: kiali-configuration
configMap:
name: kiali
- name: kiali-nginx
configMap:
name: kiali-nginx
items:
- key: nginx.conf
mode: 438
path: nginx.conf
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
{{- if .Values.ingress.enabled -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: kiali
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "kiali.name" . }}
chart: {{ template "kiali.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
{{- range $key, $value := .Values.ingress.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
rules:
{{- if .Values.ingress.hosts }}
{{- range $host := .Values.ingress.hosts }}
- host: {{ $host }}
http:
paths:
- path: {{ if $.Values.contextPath }} {{ $.Values.contextPath }} {{ else }} / {{ end }}
backend:
serviceName: kiali
servicePort: 20001
{{- end -}}
{{- else }}
- http:
paths:
- path: {{ if .Values.contextPath }} {{ .Values.contextPath }} {{ else }} / {{ end }}
backend:
serviceName: kiali
servicePort: 20001
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{ toYaml .Values.ingress.tls | indent 4 }}
{{- end -}}
{{- end -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: kiali-nginx
namespace: {{ .Release.Namespace }}
labels:
app: kiali-nginx
chart: {{ template "kiali.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
data:
nginx.conf: |-
user nginx;
worker_processes auto;
error_log /dev/stdout warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
log_format main '[$time_local - $status] $remote_addr - $remote_user $request ($http_referer)';
proxy_connect_timeout 10;
proxy_read_timeout 180;
proxy_send_timeout 5;
proxy_buffering off;
proxy_cache_path /tmp/nginx levels=1:2 keys_zone=my_zone:100m inactive=1d max_size=10g;
server {
listen 80;
access_log off;
gzip on;
gzip_min_length 1k;
gzip_comp_level 2;
gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript image/jpeg image/gif image/png;
gzip_vary on;
gzip_disable "MSIE [1-6]\.";
proxy_set_header Host $host;
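# Proxy the Kiali UI on localhost:20001 and inject a script that derives
# WEB_ROOT from the current request path, so the dashboard works when served
# behind a path-based reverse proxy (for example, the Rancher cluster proxy).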
location / {
proxy_cache my_zone;
proxy_cache_valid 200 302 1d;
proxy_cache_valid 301 30d;
proxy_cache_valid any 5m;
proxy_cache_bypass $http_cache_control;
add_header X-Proxy-Cache $upstream_cache_status;
add_header Cache-Control "public";
proxy_pass http://localhost:20001/;
sub_filter_types text/html;
sub_filter_once on;
sub_filter </head> '<script>var path = window.location.pathname; var pathName = path.substring(0, path.lastIndexOf("/proxy") + 6); window.WEB_ROOT = window.WEB_ROOT ? pathName + window.WEB_ROOT:pathName</script></head>';
if ($request_filename ~ .*\.(?:js|css|jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm)$) {
expires 90d;
}
}
}
}
apiVersion: v1
kind: Secret
metadata:
name: kiali
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "kiali.name" . }}
chart: {{ template "kiali.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
type: Opaque
data:
username: {{ .Values.dashboard.username | b64enc | quote }}
passphrase: {{ .Values.dashboard.passphrase | b64enc | quote }}
apiVersion: v1
kind: Service
metadata:
name: kiali
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "kiali.name" . }}
chart: {{ template "kiali.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
type: ClusterIP
ports:
- name: http-kiali
protocol: TCP
port: 20001
selector:
app: kiali
---
apiVersion: v1
kind: Service
metadata:
name: kiali-http
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "kiali.name" . }}
chart: {{ template "kiali.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
kubernetes.io/cluster-service: "true"
spec:
type: {{ .Values.service.type }}
ports:
- name: http-access-kiali
protocol: TCP
port: 80
selector:
app: kiali
apiVersion: v1
kind: ServiceAccount
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.global.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
metadata:
name: kiali-service-account
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "kiali.name" . }}
chart: {{ template "kiali.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.global.enableHelmTest }}
apiVersion: v1
kind: Pod
metadata:
name: {{ template "kiali.fullname" . }}-test
namespace: {{ .Release.Namespace }}
labels:
app: kiali-test
chart: {{ template "kiali.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
istio: kiali
annotations:
sidecar.istio.io/inject: "false"
helm.sh/hook: test-success
spec:
{{- if .Values.global.priorityClassName }}
priorityClassName: "{{ .Values.global.priorityClassName }}"
{{- end }}
containers:
- name: "{{ template "kiali.fullname" . }}-test"
image: "{{ template "system_default_registry" . }}{{ .Values.global.proxy.repository }}:{{ .Values.global.proxy.tag }}"
imagePullPolicy: "{{ .Values.global.imagePullPolicy }}"
command: ['curl']
args: ['http://kiali:20001']
restartPolicy: Never
affinity:
{{- include "nodeaffinity" . | indent 4 }}
{{- end }}
suite: Test Istio Kiali Deployment
templates:
- deployment.yaml
tests:
- it: should pass all kinds of assertion
set:
replicaCount: 1
asserts:
- equal:
path: spec.replicas
value: 1
- isNull:
path: spec.template.nodeSelector
- isNotNull:
path: spec.template
- isNotEmpty:
path: spec.template.spec.containers[0].resources
- isNotEmpty:
path: spec.template.spec.containers[0]
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 1
#
# addon kiali
#
enabled: false
replicaCount: 1
contextPath: /
nodeSelector: {}
# Specify pod anti-affinity to constrain the nodes your pod is eligible to be
# scheduled onto, based on the labels of pods already running on the node
# rather than on the labels of the node itself.
# There are currently two types of anti-affinity:
# "requiredDuringSchedulingIgnoredDuringExecution"
# "preferredDuringSchedulingIgnoredDuringExecution"
# which denote "hard" and "soft" requirements respectively. Define your values
# in "podAntiAffinityLabelSelector" and "podAntiAffinityTermLabelSelector"
# accordingly.
# For example:
# podAntiAffinityLabelSelector:
# - key: security
# operator: In
# values: S1,S2
# topologyKey: "kubernetes.io/hostname"
# This pod anti-affinity rule requires that the pod not be scheduled onto a
# node that is already running a pod with a label whose key is "security" and
# whose value is "S1".
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
ingress:
enabled: false
## Used to create an Ingress record.
hosts:
- kiali.local
annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
tls:
# Secrets must be manually created in the namespace.
# - secretName: kiali-tls
# hosts:
# - kiali.local
dashboard:
# login/anonymous
authStrategy: anonymous
secretName: kiali
username: admin
passphrase: admin
# Override the automatically detected Grafana URL, useful when the Grafana service has no ExternalIPs
grafanaURL:
# Override the automatically detected Jaeger URL, useful when the Jaeger service has no ExternalIPs
jaegerURL:
prometheusAddr: http://prometheus:9090
service:
type: ClusterIP
resources: {}
apiVersion: v1
name: mixer
version: 1.1.0
appVersion: 1.1.0
tillerVersion: ">=2.7.2"
description: Helm chart for mixer deployment
keywords:
- istio
- mixer
sources:
- http://github.com/istio/istio
engine: gotpl
icon: https://istio.io/favicons/android-192x192.png
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "mixer.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "mixer.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "mixer.chart" -}}
{{- .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- range $key, $spec := .Values }}
{{- if or (eq $key "policy") (eq $key "telemetry") }}
{{- if and $spec.enabled $spec.autoscaleEnabled $spec.autoscaleMin $spec.autoscaleMax }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: istio-{{ $key }}
namespace: {{ $.Release.Namespace }}
labels:
app: {{ template "mixer.name" $ }}
chart: {{ template "mixer.chart" $ }}
heritage: {{ $.Release.Service }}
release: {{ $.Release.Name }}
spec:
maxReplicas: {{ $spec.autoscaleMax }}
minReplicas: {{ $spec.autoscaleMin }}
scaleTargetRef:
apiVersion: apps/v1beta1
kind: Deployment
name: istio-{{ $key }}
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: {{ $spec.cpu.targetAverageUtilization }}
---
{{- end }}
{{- end }}
{{- end }}
{{- if or (.Values.policy.enabled) (.Values.telemetry.enabled) }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: istio-mixer-{{ .Release.Namespace }}
labels:
app: {{ template "mixer.name" . }}
chart: {{ template "mixer.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
rules:
- apiGroups: ["config.istio.io"] # istio CRD watcher
resources: ["*"]
verbs: ["create", "get", "list", "watch", "patch"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["configmaps", "endpoints", "pods", "services", "namespaces", "secrets", "replicationcontrollers"]
verbs: ["get", "list", "watch"]
- apiGroups: ["extensions", "apps"]
resources: ["replicasets"]
verbs: ["get", "list", "watch"]
{{- end }}
{{- if or (.Values.policy.enabled) (.Values.telemetry.enabled) }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: istio-mixer-admin-role-binding-{{ .Release.Namespace }}
labels:
app: {{ template "mixer.name" . }}
chart: {{ template "mixer.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: istio-mixer-{{ .Release.Namespace }}
subjects:
- kind: ServiceAccount
name: istio-mixer-service-account
namespace: {{ .Release.Namespace }}
{{- end }}
{{- range $key, $spec := .Values }}
{{- if or (eq $key "policy") (eq $key "telemetry") }}
{{- if $spec.enabled }}
{{- if $.Values.global.defaultPodDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: istio-{{ $key }}
namespace: {{ $.Release.Namespace }}
labels:
app: {{ $key }}
chart: {{ template "mixer.chart" $ }}
heritage: {{ $.Release.Service }}
release: {{ $.Release.Name }}
version: {{ $.Chart.Version }}
istio: mixer
istio-mixer-type: {{ $key }}
spec:
{{- if $.Values.global.defaultPodDisruptionBudget.enabled }}
{{ include "podDisruptionBudget.spec" $.Values.global.defaultPodDisruptionBudget }}
{{- end }}
selector:
matchLabels:
app: {{ $key }}
release: {{ $.Release.Name }}
istio: mixer
istio-mixer-type: {{ $key }}
---
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- range $key, $spec := .Values }}
{{- if or (eq $key "policy") (eq $key "telemetry") }}
{{- if $spec.enabled }}
apiVersion: v1
kind: Service
metadata:
name: istio-{{ $key }}
namespace: {{ $.Release.Namespace }}
annotations:
networking.istio.io/exportTo: "*"
labels:
app: {{ template "mixer.name" $ }}
chart: {{ template "mixer.chart" $ }}
heritage: {{ $.Release.Service }}
release: {{ $.Release.Name }}
istio: mixer
spec:
ports:
- name: grpc-mixer
port: 9091
- name: grpc-mixer-mtls
port: 15004
- name: http-monitoring
port: {{ $.Values.global.monitoringPort }}
{{- if eq $key "telemetry" }}
- name: prometheus
port: 42422
{{- if $spec.sessionAffinityEnabled }}
sessionAffinity: ClientIP
{{- end }}
{{- end }}
selector:
istio: mixer
istio-mixer-type: {{ $key }}
---
{{- end }}
{{- end }}
{{- end }}
{{- if or (.Values.policy.enabled) (.Values.telemetry.enabled) }}
apiVersion: v1
kind: ServiceAccount
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.global.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
metadata:
name: istio-mixer-service-account
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "mixer.name" . }}
chart: {{ template "mixer.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- end }}
suite: Test Istio Mixer Deployment
templates:
- deployment.yaml
tests:
- it: should pass all kinds of assertion
set:
policy.enabled: true
telemetry.enabled: false
asserts:
- isNull:
path: spec.replicas
- isNull:
path: spec.template.nodeSelector
- isNotNull:
path: spec.template
- isNotEmpty:
path: spec.template.spec.containers[0].resources
- isNotEmpty:
path: spec.template.spec.containers[0]
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 1
- it: should pass all kinds of assertion
set:
policy.enabled: false
telemetry.enabled: true
asserts:
- isNull:
path: spec.replicas
- isNull:
path: spec.template.nodeSelector
- isNotNull:
path: spec.template
- isNotEmpty:
path: spec.template.spec.containers[0].resources
- isNotEmpty:
path: spec.template.spec.containers[0]
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 1
- it: should pass all kinds of assertion
set:
policy.enabled: true
telemetry.enabled: true
asserts:
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 2