Unverified commit 89c34b61 by Denise, committed by GitHub

Merge pull request #68 from alena1108/jun19

[backport 2.3-preview2] Istio 1.2
parents 5f3cd2c3 9c5e36ae
apiVersion: v1
name: rancher-istio
version: 0.0.1
appVersion: 1.1.5
appVersion: 1.2.0
tillerVersion: ">=2.7.2-0"
description: Helm chart for all istio components
home: https://istio.io/
......
......@@ -2,13 +2,17 @@
[Istio](https://istio.io/) is an open platform for providing a uniform way to integrate microservices, manage traffic flow across microservices, enforce policies and aggregate telemetry data.
The documentation here is for developers only; for all other uses, please follow the installation instructions at [istio.io](https://istio.io/docs/setup/kubernetes/install/helm/).
## Introduction
This chart bootstraps all istio [components](https://istio.io/docs/concepts/what-is-istio/overview.html) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
This chart bootstraps the deployment of all Istio [components](https://istio.io/docs/concepts/what-is-istio/overview.html) on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
## Chart Details
This chart can install multiple istio components as subcharts:
This chart can install multiple Istio components as subcharts:
- ingressgateway
- egressgateway
- sidecarInjectorWebhook
......@@ -18,7 +22,6 @@ This chart can install multiple istio components as subcharts:
- security(citadel)
- grafana
- prometheus
- servicegraph
- tracing(jaeger)
- kiali
......@@ -105,12 +108,6 @@ The chart deploys pods that consume minimum resources as specified in the resour
EOF
```
1. Add the `istio.io` chart repository and point it to the release:
```
$ helm repo add istio.io https://storage.googleapis.com/istio-prerelease/daily-build/release-1.1-latest-daily/charts
```
1. To install the chart with the release name `istio` in the namespace `$NAMESPACE` you defined above:
- With [automatic sidecar injection](https://istio.io/docs/setup/kubernetes/sidecar-injection/#automatic-sidecar-injection) (requires Kubernetes >=1.9.0):
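The install command under this bullet is collapsed in the diff. A minimal sketch of what that step typically looks like with Helm 2 conventions (the chart reference is an assumption, not taken from this commit):
```
# Sketch only: assumes the repo added above serves an "istio" chart
$ helm install istio.io/istio --name istio --namespace $NAMESPACE
```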
......@@ -128,39 +125,7 @@ The chart deploys pods that consume minimum resources as specified in the resour
The Helm chart ships with reasonable defaults, but there may be circumstances in which the defaults require overrides.
To override Helm values, pass the `--set key=value` argument to the `helm install` command. Multiple `--set` flags may be used in the same Helm operation.
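For example, to enable the Grafana and Kiali addons from the option table below (a sketch only; the chart reference is an assumption):
```
# Sketch only; the keys come from the table below
$ helm install istio.io/istio --name istio --namespace $NAMESPACE \
    --set grafana.enabled=true --set kiali.enabled=true
```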
Helm charts expose configuration options which are currently in alpha. The currently exposed options are explained in the following table:
| Parameter | Description | Values | Default |
| --- | --- | --- | --- |
| `global.hub` | Specifies the HUB for most images used by Istio | registry/namespace | `docker.io/istio` |
| `global.tag` | Specifies the TAG for most images used by Istio | valid image tag | `0.8.latest` |
| `global.proxy.image` | Specifies the proxy image name | valid proxy name | `proxyv2` |
| `global.proxy.concurrency` | Specifies the number of proxy worker threads | number, 0 = auto | `0` |
| `global.imagePullPolicy` | Specifies the image pull policy | valid image pull policy | `IfNotPresent` |
| `global.controlPlaneSecurityEnabled` | Specifies whether control plane mTLS is enabled | true/false | `false` |
| `global.mtls.enabled` | Specifies whether mTLS is enabled by default between services | true/false | `false` |
| `global.rbacEnabled` | Specifies whether to create Istio RBAC rules or not | true/false | `true` |
| `global.arch.amd64` | Specifies the scheduling policy for `amd64` architectures | 0 = never, 1 = least preferred, 2 = no preference, 3 = most preferred | `2` |
| `global.arch.s390x` | Specifies the scheduling policy for `s390x` architectures | 0 = never, 1 = least preferred, 2 = no preference, 3 = most preferred | `2` |
| `global.arch.ppc64le` | Specifies the scheduling policy for `ppc64le` architectures | 0 = never, 1 = least preferred, 2 = no preference, 3 = most preferred | `2` |
| `ingress.enabled` | Specifies whether Ingress should be installed | true/false | `true` |
| `gateways.enabled` | Specifies whether gateways (both Ingress and Egress) should be installed | true/false | `true` |
| `gateways.istio-ingressgateway.enabled` | Specifies whether Ingress gateway should be installed | true/false | `true` |
| `gateways.istio-egressgateway.enabled` | Specifies whether Egress gateway should be installed | true/false | `true` |
| `sidecarInjectorWebhook.enabled` | Specifies whether automatic sidecar-injector should be installed | true/false | `true` |
| `galley.enabled` | Specifies whether Galley should be installed for server-side config validation | true/false | `true` |
| `security.enabled` | Specifies whether Citadel should be installed | true/false | `true` |
| `mixer.policy.enabled` | Specifies whether Mixer Policy should be installed | true/false | `true` |
| `mixer.telemetry.enabled` | Specifies whether Mixer Telemetry should be installed | true/false | `true` |
| `pilot.enabled` | Specifies whether Pilot should be installed | true/false | `true` |
| `grafana.enabled` | Specifies whether Grafana addon should be installed | true/false | `false` |
| `grafana.persist` | Specifies whether Grafana addon should persist config data | true/false | `false` |
| `grafana.storageClassName` | If `grafana.persist` is true, specifies the [`StorageClass`](https://kubernetes.io/docs/concepts/storage/storage-classes/) to use for the `PersistentVolumeClaim` | `StorageClass` | "" |
| `grafana.accessMode` | If `grafana.persist` is true, specifies the [`Access Mode`](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) to use for the `PersistentVolumeClaim` | RWO/ROX/RWX | `ReadWriteMany` |
| `prometheus.enabled` | Specifies whether Prometheus addon should be installed | true/false | `true` |
| `servicegraph.enabled` | Specifies whether Servicegraph addon should be installed | true/false | `false` |
| `tracing.enabled` | Specifies whether Tracing(jaeger) addon should be installed | true/false | `false` |
| `kiali.enabled` | Specifies whether Kiali addon should be installed | true/false | `false` |
Helm charts expose configuration options which are currently in alpha. The currently exposed options can be found [here](https://istio.io/docs/reference/config/installation-options/).
## Uninstalling the Chart
......
......@@ -4,7 +4,7 @@
## Introduction
This chart bootstraps all istio [components](https://istio.io/docs/concepts/what-is-istio/overview.html) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
This chart bootstraps the deployment of all Istio [components](https://istio.io/docs/concepts/what-is-istio/overview.html) on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
## Chart Details
......
apiVersion: apps/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: certmanager
......@@ -9,7 +9,7 @@ metadata:
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
replicas: 1
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: certmanager
......@@ -38,7 +38,7 @@ spec:
{{- if .Values.global.systemDefaultRegistry }}
image: "{{ template "system_default_registry" . }}{{ .Values.image.repository }}:{{ .Values.image.tag }}"
{{- else }}
image: {{ .Values.image.hub }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
{{- end }}
imagePullPolicy: {{ .Values.global.imagePullPolicy }}
args:
......@@ -54,6 +54,7 @@ spec:
fieldPath: metadata.namespace
resources:
{{ toYaml .Values.resources | indent 10 }}
{{- if .Values.podDnsPolicy }}
dnsPolicy: {{ .Values.podDnsPolicy }}
{{- end }}
......@@ -64,3 +65,7 @@ spec:
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}
\ No newline at end of file
......@@ -4,12 +4,13 @@
# gateway must be updated by adding 'secretVolumes'. After the gateway
# restart, DestinationRules can be created using the ACME-signed certificates.
enabled: false
replicaCount: 1
image:
hub: quay.io
repository: rancher/jetstack-cert-manager-controller
tag: v0.6.2
resources: {}
nodeSelector: {}
tolerations: []
# Specify the pod anti-affinity that allows you to constrain which nodes
# your pod is eligible to be scheduled based on labels on pods that are
......@@ -29,5 +30,5 @@ nodeSelector: {}
# This pod anti-affinity rule says that the pod requires not to be scheduled
# onto a node if that node is already running a pod with label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
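The `tolerations` and `podAntiAffinity*Selector` lists introduced in this and the other subchart values files feed the `tolerations:` blocks added throughout this diff and the existing anti-affinity helpers. A hedged sketch of what an operator might supply (the taint key and label values are illustrative only; the entry format follows the chart's affinity helper, which splits `values` on commas):
```yaml
# Sketch of operator-supplied values; taint key/values are illustrative
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "istio"
  effect: "NoSchedule"
# "values" is a comma-separated string, split by the affinity helper
podAntiAffinityLabelSelector:
- key: security
  operator: In
  values: "S1"
  topologyKey: "kubernetes.io/hostname"
```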
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: istio-galley
......@@ -11,6 +11,9 @@ metadata:
istio: galley
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
istio: galley
strategy:
rollingUpdate:
maxSurge: 1
......@@ -46,6 +49,7 @@ spec:
- --livenessProbePath=/healthliveness
- --readinessProbePath=/healthready
- --readinessProbeInterval=1s
- --deployment-namespace={{ .Release.Namespace }}
{{- if $.Values.global.controlPlaneSecurityEnabled}}
- --insecure=false
{{- else }}
......@@ -107,3 +111,7 @@ spec:
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}
......@@ -3,7 +3,6 @@ apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
name: istio-galley
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "galley.name" . }}
chart: {{ template "galley.chart" . }}
......@@ -65,6 +64,7 @@ webhooks:
- sidecars
- virtualservices
failurePolicy: Fail
sideEffects: None
- name: mixer.validation.istio.io
clientConfig:
service:
......@@ -109,6 +109,12 @@ webhooks:
- quotas
- reportnothings
- tracespans
- adapters
- handlers
- instances
- templates
- zipkins
failurePolicy: Fail
sideEffects: None
{{- end }}
{{- end }}
suite: Test Galley Deployment
templates:
- deployment.yaml
tests:
- it: should pass all kinds of assertion
asserts:
- equal:
path: spec.replicas
value: 1
- equal:
path: spec.template.spec.containers[0].ports
value:
- containerPort: 443
- containerPort: 15014
- containerPort: 9901
- isNull:
path: spec.template.nodeSelector
- isNotNull:
path: spec.template
- isNotEmpty:
path: spec.template.spec.containers[0].resources
- isNotEmpty:
path: spec.template.spec.containers[0]
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 1
suite: Test Galley RBAC
templates:
- clusterrole.yaml
tests:
- it: should pass all kinds of assertion
set:
asserts:
- isNotNull:
path: rules
- isNotEmpty:
path: rules
- contains:
path: rules
content:
apiGroups: ["admissionregistration.k8s.io"]
resources: ["validatingwebhookconfigurations"]
verbs: ["*"]
- isKind:
of: ClusterRole
- isAPIVersion:
of: rbac.authorization.k8s.io/v1
- hasDocuments:
count: 1
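The `suite:` files added above (and the similar ones for the other subcharts later in this diff) use the helm-unittest plugin's format. Assuming that plugin is installed, a suite can be run against its subchart directory along these lines (the path is illustrative):
```
# Run the Galley suite from the rancher-istio chart root
$ helm unittest charts/galley
```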
......@@ -4,6 +4,7 @@
enabled: true
replicaCount: 1
nodeSelector: {}
tolerations: []
# Specify the pod anti-affinity that allows you to constrain which nodes
# your pod is eligible to be scheduled based on labels on pods that are
......@@ -23,5 +24,5 @@ nodeSelector: {}
# This pod anti-affinity rule says that the pod requires not to be scheduled
# onto a node if that node is already running a pod with label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
......@@ -11,10 +11,6 @@
{{- define "gatewayNodeAffinityRequiredDuringScheduling" }}
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
......@@ -66,7 +62,7 @@
matchExpressions:
- key: {{ $item.key }}
operator: {{ $item.operator }}
{{- if $item.value }}
{{- if $item.values }}
values:
{{- $vals := split "," $item.values }}
{{- range $i, $v := $vals }}
......
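The hunk above is a bug fix: the condition previously checked `$item.value` (singular) while the body reads `$item.values`, so the `values:` list was skipped for entries that (as documented) set `values`. With the corrected condition, an entry such as this hedged example (label key and values are illustrative) renders a populated `matchExpressions` clause:
```yaml
# Illustrative entry; the "values" string is split on "," by the helper
podAntiAffinityLabelSelector:
- key: app
  operator: In
  values: "istio-ingressgateway,istio-egressgateway"
  topologyKey: "kubernetes.io/hostname"
```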
......@@ -7,15 +7,17 @@ metadata:
name: {{ $key }}
namespace: {{ $spec.namespace | default $.Release.Namespace }}
labels:
app: {{ $spec.labels.istio }}
chart: {{ template "gateway.chart" $ }}
heritage: {{ $.Release.Service }}
release: {{ $.Release.Name }}
{{- range $key, $val := $spec.labels }}
{{ $key }}: {{ $val }}
{{- end }}
spec:
maxReplicas: {{ $spec.autoscaleMax }}
minReplicas: {{ $spec.autoscaleMin }}
scaleTargetRef:
apiVersion: apps/v1beta1
apiVersion: apps/v1
kind: Deployment
name: {{ $key }}
metrics:
......
{{- range $key, $spec := .Values }}
{{- if ne $key "enabled" }}
{{- if $spec.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ $key }}-{{ $.Release.Namespace }}
labels:
app: {{ $spec.labels.istio }}
chart: {{ template "gateway.chart" $ }}
heritage: {{ $.Release.Service }}
release: {{ $.Release.Name }}
rules:
- apiGroups: ["networking.istio.io"]
resources: ["virtualservices", "destinationrules", "gateways"]
verbs: ["get", "watch", "list", "update"]
---
{{- end }}
{{- end }}
{{- end }}
{{- range $key, $spec := .Values }}
{{- if ne $key "enabled" }}
{{- if $spec.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ $key }}-{{ $.Release.Namespace }}
labels:
app: {{ $spec.labels.istio }}
chart: {{ template "gateway.chart" $ }}
heritage: {{ $.Release.Service }}
release: {{ $.Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ $key }}-{{ $.Release.Namespace }}
subjects:
- kind: ServiceAccount
name: {{ $key }}-service-account
namespace: {{ $.Release.Namespace }}
---
{{- end }}
{{- end }}
{{- end }}
{{- range $key, $spec := .Values }}
{{- if ne $key "enabled" }}
{{- if $spec.enabled }}
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ $key }}
......@@ -21,6 +21,12 @@ spec:
replicas: 1
{{- end }}
{{- end }}
selector:
matchLabels:
release: {{ $.Release.Name }}
{{- range $key, $val := $spec.labels }}
{{ $key }}: {{ $val }}
{{- end }}
template:
metadata:
labels:
......@@ -44,7 +50,7 @@ spec:
initContainers:
- name: enable-core-dump
image: "{{ template "system_default_registry" $ }}{{ $.Values.global.proxy_init.repository }}:{{ $.Values.global.proxy_init.tag }}"
imagePullPolicy: IfNotPresent
imagePullPolicy: {{ $.Values.global.imagePullPolicy }}
command:
- /bin/sh
args:
......@@ -59,6 +65,12 @@ spec:
- name: ingress-sds
image: "{{ template "system_default_registry" $ }}{{ $.Values.global.nodeAgent.repository }}:{{ $.Values.global.nodeAgent.tag }}"
imagePullPolicy: {{ $.Values.global.imagePullPolicy }}
resources:
{{- if $spec.sds.resources }}
{{ toYaml $spec.sds.resources | indent 12 }}
{{- else }}
{{ toYaml $.Values.global.defaultResources | indent 12 }}
{{- end }}
env:
- name: "ENABLE_WORKLOAD_SDS"
value: "false"
......@@ -92,6 +104,9 @@ spec:
{{- if $.Values.global.proxy.logLevel }}
- --proxyLogLevel={{ $.Values.global.proxy.logLevel }}
{{- end}}
{{- if $.Values.global.proxy.componentLogLevel }}
- --proxyComponentLogLevel={{ $.Values.global.proxy.componentLogLevel }}
{{- end}}
{{- if $.Values.global.logging.level }}
- --log_output_level={{ $.Values.global.logging.level }}
{{- end}}
......@@ -162,6 +177,11 @@ spec:
{{ toYaml $.Values.global.defaultResources | indent 12 }}
{{- end }}
env:
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
......@@ -206,7 +226,7 @@ spec:
volumeMounts:
{{- if $.Values.global.sds.enabled }}
- name: sdsudspath
mountPath: /var/run/sds/uds_path
mountPath: /var/run/sds
readOnly: true
{{- if $.Values.global.sds.useTrustworthyJwt }}
- name: istio-token
......@@ -240,8 +260,7 @@ spec:
{{- if $.Values.global.sds.enabled }}
- name: sdsudspath
hostPath:
path: /var/run/sds/uds_path
type: Socket
path: /var/run/sds
{{- if $.Values.global.sds.useTrustworthyJwt }}
- name: istio-token
projected:
......@@ -271,6 +290,10 @@ spec:
affinity:
{{- include "gatewaynodeaffinity" (dict "root" $ "nodeSelector" $spec.nodeSelector) | indent 6 }}
{{- include "gatewaypodAntiAffinity" (dict "podAntiAffinityLabelSelector" $spec.podAntiAffinityLabelSelector "podAntiAffinityTermLabelSelector" $spec.podAntiAffinityTermLabelSelector) | indent 6 }}
{{- if $spec.tolerations }}
tolerations:
{{ toYaml $spec.tolerations | indent 6 }}
{{- end }}
---
{{- end }}
{{- end }}
......
......@@ -84,7 +84,15 @@ metadata:
release: {{ .Release.Name }}
spec:
selector:
istio: ingressgateway
{{- range $key, $spec := .Values }}
{{- if eq $key "istio-ingressgateway" }}
{{- if $spec.enabled }}
{{- range $key, $val := $spec.labels }}
{{ $key }}: {{ $val }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
servers:
- port:
number: 15011
......@@ -123,7 +131,15 @@ metadata:
release: {{ .Release.Name }}
spec:
selector:
istio: egressgateway
{{- range $key, $spec := .Values }}
{{- if eq $key "istio-egressgateway" }}
{{- if $spec.enabled }}
{{- range $key, $val := $spec.labels }}
{{ $key }}: {{ $val }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
servers:
- hosts:
- "*.global"
......@@ -146,7 +162,15 @@ metadata:
release: {{ .Release.Name }}
spec:
selector:
istio: ingressgateway
{{- range $key, $spec := .Values }}
{{- if eq $key "istio-ingressgateway" }}
{{- if $spec.enabled }}
{{- range $key, $val := $spec.labels }}
{{ $key }}: {{ $val }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
servers:
- hosts:
- "*.global"
......@@ -169,7 +193,15 @@ metadata:
release: {{ .Release.Name }}
spec:
workloadLabels:
istio: ingressgateway
{{- range $key, $spec := .Values }}
{{- if eq $key "istio-ingressgateway" }}
{{- if $spec.enabled }}
{{- range $key, $val := $spec.labels }}
{{ $key }}: {{ $val }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
filters:
- listenerMatch:
portNumber: 15443
......
suite: Test Gateway Deployment
templates:
- deployment.yaml
tests:
- it: should pass all kinds of assertion
set:
istio-ingressgateway.enabled: true
istio-ilbgateway.enabled: false
istio-egressgateway.enabled: false
istio-ingressgateway.autoscaleEnabled: true
asserts:
- isNull:
path: spec.replicas
- contains:
path: spec.template.spec.containers[0].ports
content:
containerPort: 80
- contains:
path: spec.template.spec.containers[0].ports
content:
containerPort: 443
- isNull:
path: spec.template.nodeSelector
- isNotNull:
path: spec.template
- isNotEmpty:
path: spec.template.spec.containers[0].resources
- isNotEmpty:
path: spec.template.spec.containers[0]
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 1
- it: should deploy 3 gateways
set:
istio-ingressgateway.enabled: true
istio-ilbgateway.enabled: true
istio-egressgateway.enabled: true
asserts:
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 3
- it: should not deploy gateways
set:
istio-ingressgateway.enabled: false
istio-ilbgateway.enabled: false
istio-egressgateway.enabled: false
asserts:
- hasDocuments:
count: 0
......@@ -17,6 +17,14 @@ istio-ingressgateway:
enabled: false
# SDS server that watches kubernetes secrets and provisions credentials to ingress gateway.
# This server runs in the same pod as ingress gateway.
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 2000m
memory: 1024Mi
labels:
app: istio-ingressgateway
istio: ingressgateway
......@@ -31,7 +39,7 @@ istio-ingressgateway:
memory: 128Mi
limits:
cpu: 2000m
memory: 256Mi
memory: 1024Mi
cpu:
targetAverageUtilization: 80
loadBalancerIP: ""
......@@ -104,12 +112,22 @@ istio-ingressgateway:
secretName: istio-ingressgateway-ca-certs
mountPath: /etc/istio/ingressgateway-ca-certs
### Advanced options ############
# Ports to explicitly check for readiness. If configured, the readiness check will expect a
# listener on these ports. A comma-separated list is expected, such as "80,443".
#
# Warning: If you do not have a gateway configured for the ports provided, this check will always
# fail. This is intended for use cases where you always expect to have a listener on the port,
# such as 80 or 443 in typical setups.
applicationPorts: ""
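For example, to require listeners on the standard HTTP/HTTPS ports mentioned in the comment above, an operator-supplied override might look like this (a sketch, not a chart default):
```yaml
# Sketch of a user override, not a chart default
istio-ingressgateway:
  applicationPorts: "80,443"
```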
env:
# A gateway with this mode ensures that pilot generates an additional
# set of clusters for internal services but without Istio mTLS, to
# enable cross cluster routing.
ISTIO_META_ROUTER_MODE: "sni-dnat"
nodeSelector: {}
tolerations: []
# Specify the pod anti-affinity that allows you to constrain which nodes
# your pod is eligible to be scheduled based on labels on pods that are
......@@ -129,8 +147,8 @@ istio-ingressgateway:
# This pod anti-affinity rule says that the pod requires not to be scheduled
# onto a node if that node is already running a pod with label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
istio-egressgateway:
enabled: false
......@@ -185,6 +203,7 @@ istio-egressgateway:
# enable cross cluster routing.
ISTIO_META_ROUTER_MODE: "sni-dnat"
nodeSelector: {}
tolerations: []
# Specify the pod anti-affinity that allows you to constrain which nodes
# your pod is eligible to be scheduled based on labels on pods that are
......@@ -204,8 +223,8 @@ istio-egressgateway:
# This pod anti-affinity rule says that the pod requires not to be scheduled
# onto a node if that node is already running a pod with label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
# Mesh ILB gateway creates a gateway of type InternalLoadBalancer,
# for mesh expansion. It exposes the mtls ports for Pilot,CA as well
......@@ -255,3 +274,4 @@ istio-ilbgateway:
secretName: istio-ilbgateway-ca-certs
mountPath: /etc/istio/ilbgateway-ca-certs
nodeSelector: {}
tolerations: []
......@@ -296,14 +296,14 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(container_cpu_usage_seconds_total{container_name=~\"galley\", pod_name=~\"istio-galley-.*\"}[1m]))",
"expr": "sum(rate(container_cpu_usage_seconds_total{job=\"kubernetes-cadvisor\",container_name=~\"galley\", pod_name=~\"istio-galley-.*\"}[1m]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Total (k8s)",
"refId": "A"
},
{
"expr": "sum(rate(container_cpu_usage_seconds_total{container_name=~\"galley\", pod_name=~\"istio-galley-.*\"}[1m])) by (container_name)",
"expr": "sum(rate(container_cpu_usage_seconds_total{job=\"kubernetes-cadvisor\",container_name=~\"galley\", pod_name=~\"istio-galley-.*\"}[1m])) by (container_name)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{ container_name }} (k8s)",
......
......@@ -1742,7 +1742,7 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(irate(istio_requests_total{reporter=\"destination\", connection_security_policy=\"mutual_tls\", destination_service=~\"$service\",response_code!~\"5.*\", destination_workload=~\"$dstwl\", destination_workload_namespace=~\"$dstns\"}[5m])) by (destination_workload, destination_workload_namespace) / sum(rate(istio_requests_total{reporter=\"destination\", connection_security_policy=\"mutual_tls\", destination_service=~\"$service\", destination_workload=~\"$dstwl\", destination_workload_namespace=~\"$dstns\"}[5m])) by (destination_workload, destination_workload_namespace)",
"expr": "sum(irate(istio_requests_total{reporter=\"destination\", connection_security_policy=\"mutual_tls\", destination_service=~\"$service\",response_code!~\"5.*\", destination_workload=~\"$dstwl\", destination_workload_namespace=~\"$dstns\"}[5m])) by (destination_workload, destination_workload_namespace) / sum(irate(istio_requests_total{reporter=\"destination\", connection_security_policy=\"mutual_tls\", destination_service=~\"$service\", destination_workload=~\"$dstwl\", destination_workload_namespace=~\"$dstns\"}[5m])) by (destination_workload, destination_workload_namespace)",
"format": "time_series",
"hide": false,
"intervalFactor": 1,
......@@ -1751,7 +1751,7 @@
"step": 2
},
{
"expr": "sum(irate(istio_requests_total{reporter=\"destination\", connection_security_policy!=\"mutual_tls\", destination_service=~\"$service\",response_code!~\"5.*\", destination_workload=~\"$dstwl\", destination_workload_namespace=~\"$dstns\"}[5m])) by (destination_workload, destination_workload_namespace) / sum(rate(istio_requests_total{reporter=\"destination\", connection_security_policy!=\"mutual_tls\", destination_service=~\"$service\", destination_workload=~\"$dstwl\", destination_workload_namespace=~\"$dstns\"}[5m])) by (destination_workload, destination_workload_namespace)",
"expr": "sum(irate(istio_requests_total{reporter=\"destination\", connection_security_policy!=\"mutual_tls\", destination_service=~\"$service\",response_code!~\"5.*\", destination_workload=~\"$dstwl\", destination_workload_namespace=~\"$dstns\"}[5m])) by (destination_workload, destination_workload_namespace) / sum(irate(istio_requests_total{reporter=\"destination\", connection_security_policy!=\"mutual_tls\", destination_service=~\"$service\", destination_workload=~\"$dstwl\", destination_workload_namespace=~\"$dstns\"}[5m])) by (destination_workload, destination_workload_namespace)",
"format": "time_series",
"hide": false,
"intervalFactor": 1,
......
......@@ -654,7 +654,7 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(irate(istio_requests_total{reporter=\"destination\", connection_security_policy=\"mutual_tls\", destination_workload_namespace=~\"$namespace\", destination_workload=~\"$workload\",response_code!~\"5.*\", source_workload=~\"$srcwl\", source_workload_namespace=~\"$srcns\"}[5m])) by (source_workload, source_workload_namespace) / sum(rate(istio_requests_total{reporter=\"destination\", connection_security_policy=\"mutual_tls\", destination_workload_namespace=~\"$namespace\", destination_workload=~\"$workload\", source_workload=~\"$srcwl\", source_workload_namespace=~\"$srcns\"}[5m])) by (source_workload, source_workload_namespace)",
"expr": "sum(irate(istio_requests_total{reporter=\"destination\", connection_security_policy=\"mutual_tls\", destination_workload_namespace=~\"$namespace\", destination_workload=~\"$workload\",response_code!~\"5.*\", source_workload=~\"$srcwl\", source_workload_namespace=~\"$srcns\"}[5m])) by (source_workload, source_workload_namespace) / sum(irate(istio_requests_total{reporter=\"destination\", connection_security_policy=\"mutual_tls\", destination_workload_namespace=~\"$namespace\", destination_workload=~\"$workload\", source_workload=~\"$srcwl\", source_workload_namespace=~\"$srcns\"}[5m])) by (source_workload, source_workload_namespace)",
"format": "time_series",
"hide": false,
"intervalFactor": 1,
......@@ -663,7 +663,7 @@
"step": 2
},
{
"expr": "sum(irate(istio_requests_total{reporter=\"destination\", connection_security_policy!=\"mutual_tls\", destination_workload_namespace=~\"$namespace\", destination_workload=~\"$workload\",response_code!~\"5.*\", source_workload=~\"$srcwl\", source_workload_namespace=~\"$srcns\"}[5m])) by (source_workload, source_workload_namespace) / sum(rate(istio_requests_total{reporter=\"destination\", connection_security_policy!=\"mutual_tls\", destination_workload_namespace=~\"$namespace\", destination_workload=~\"$workload\", source_workload=~\"$srcwl\", source_workload_namespace=~\"$srcns\"}[5m])) by (source_workload, source_workload_namespace)",
"expr": "sum(irate(istio_requests_total{reporter=\"destination\", connection_security_policy!=\"mutual_tls\", destination_workload_namespace=~\"$namespace\", destination_workload=~\"$workload\",response_code!~\"5.*\", source_workload=~\"$srcwl\", source_workload_namespace=~\"$srcns\"}[5m])) by (source_workload, source_workload_namespace) / sum(irate(istio_requests_total{reporter=\"destination\", connection_security_policy!=\"mutual_tls\", destination_workload_namespace=~\"$namespace\", destination_workload=~\"$workload\", source_workload=~\"$srcwl\", source_workload_namespace=~\"$srcns\"}[5m])) by (source_workload, source_workload_namespace)",
"format": "time_series",
"hide": false,
"intervalFactor": 1,
......
......@@ -356,7 +356,7 @@
"steppedLine": false,
"targets": [
{
"expr": "label_replace(sum(rate(container_cpu_usage_seconds_total{container_name=~\"mixer|istio-proxy\", pod_name=~\"istio-telemetry-.*|istio-policy-.*\"}[1m])) by (pod_name), \"service\", \"$1\" , \"pod_name\", \"(istio-telemetry|istio-policy)-.*\")",
"expr": "label_replace(sum(rate(container_cpu_usage_seconds_total{job=\"kubernetes-cadvisor\",container_name=~\"mixer|istio-proxy\", pod_name=~\"istio-telemetry-.*|istio-policy-.*\"}[1m])) by (pod_name), \"service\", \"$1\" , \"pod_name\", \"(istio-telemetry|istio-policy)-.*\")",
"format": "time_series",
"hide": false,
"intervalFactor": 2,
......@@ -364,7 +364,7 @@
"refId": "A"
},
{
"expr": "label_replace(sum(rate(container_cpu_usage_seconds_total{container_name=~\"mixer|istio-proxy\", pod_name=~\"istio-telemetry-.*|istio-policy-.*\"}[1m])) by (container_name, pod_name), \"service\", \"$1\" , \"pod_name\", \"(istio-telemetry|istio-policy)-.*\")",
"expr": "label_replace(sum(rate(container_cpu_usage_seconds_total{job=\"kubernetes-cadvisor\",container_name=~\"mixer|istio-proxy\", pod_name=~\"istio-telemetry-.*|istio-policy-.*\"}[1m])) by (container_name, pod_name), \"service\", \"$1\" , \"pod_name\", \"(istio-telemetry|istio-policy)-.*\")",
"format": "time_series",
"hide": false,
"intervalFactor": 2,
......@@ -1599,7 +1599,7 @@
"steppedLine": false,
"targets": [
{
"expr": "label_replace(irate(mixer_runtime_dispatches_total{adapter=\"$adapter\"}[1m]),\"handler\", \"$1 ($3)\", \"handler\", \"(.*)\\\\.(.*)\\\\.(.*)\")",
"expr": "label_replace(irate(mixer_runtime_dispatches_total{adapter=~\"$adapter\"}[1m]),\"handler\", \"$1 ($3)\", \"handler\", \"(.*)\\\\.(.*)\\\\.(.*)\")",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{ handler }} (error: {{ error }})",
......@@ -1683,21 +1683,21 @@
"steppedLine": false,
"targets": [
{
"expr": "label_replace(histogram_quantile(0.5, sum(rate(mixer_runtime_dispatch_duration_seconds_bucket{adapter=\"$adapter\"}[1m])) by (handler, error, le)), \"handler_short\", \"$1 ($3)\", \"handler\", \"(.*)\\\\.(.*)\\\\.(.*)\")",
"expr": "label_replace(histogram_quantile(0.5, sum(rate(mixer_runtime_dispatch_duration_seconds_bucket{adapter=~\"$adapter\"}[1m])) by (handler, error, le)), \"handler_short\", \"$1 ($3)\", \"handler\", \"(.*)\\\\.(.*)\\\\.(.*)\")",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "p50 - {{ handler_short }} (error: {{ error }})",
"refId": "A"
},
{
"expr": "label_replace(histogram_quantile(0.9, sum(irate(mixer_runtime_dispatch_duration_seconds_bucket{adapter=\"$adapter\"}[1m])) by (handler, error, le)), \"handler_short\", \"$1 ($3)\", \"handler\", \"(.*)\\\\.(.*)\\\\.(.*)\")",
"expr": "label_replace(histogram_quantile(0.9, sum(irate(mixer_runtime_dispatch_duration_seconds_bucket{adapter=~\"$adapter\"}[1m])) by (handler, error, le)), \"handler_short\", \"$1 ($3)\", \"handler\", \"(.*)\\\\.(.*)\\\\.(.*)\")",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "p90 - {{ handler_short }} (error: {{ error }})",
"refId": "D"
},
{
"expr": "label_replace(histogram_quantile(0.99, sum(irate(mixer_runtime_dispatch_duration_seconds_bucket{adapter=\"$adapter\"}[1m])) by (handler, error, le)), \"handler_short\", \"$1 ($3)\", \"handler\", \"(.*)\\\\.(.*)\\\\.(.*)\")",
"expr": "label_replace(histogram_quantile(0.99, sum(irate(mixer_runtime_dispatch_duration_seconds_bucket{adapter=~\"$adapter\"}[1m])) by (handler, error, le)), \"handler_short\", \"$1 ($3)\", \"handler\", \"(.*)\\\\.(.*)\\\\.(.*)\")",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "p99 - {{ handler_short }} (error: {{ error }})",
......
File mode changed from 100644 to 100755
......@@ -14,6 +14,12 @@ metadata:
chart: {{ template "grafana.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.global.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
......
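The `imagePullSecrets` block added to the Grafana service account above (and to the security chart's service account later in this diff) is driven by a global values list of secret names; a hedged sketch of the corresponding override (the secret name is a placeholder):
```yaml
# Placeholder name; must reference an existing docker-registry secret
global:
  imagePullSecrets:
  - my-registry-credentials
```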
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: grafana
......@@ -10,6 +10,9 @@ metadata:
release: {{ .Release.Name }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: grafana
template:
metadata:
labels:
......@@ -19,6 +22,7 @@ spec:
release: {{ .Release.Name }}
annotations:
sidecar.istio.io/inject: "false"
prometheus.io/scrape: "true"
spec:
securityContext:
fsGroup: 472
......@@ -115,6 +119,10 @@ spec:
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}
volumes:
- name: config
configMap:
......
......@@ -26,4 +26,5 @@ spec:
restartPolicy: Never
affinity:
{{- include "nodeaffinity" . | indent 4 }}
{{- include "podAntiAffinity" . | indent 4 }}
{{- end }}
suite: Test Istio Grafana Deployment
templates:
- deployment.yaml
tests:
- it: should pass all kinds of assertion
set:
replicaCount: 1
asserts:
- equal:
path: spec.replicas
value: 1
- isNull:
path: spec.template.nodeSelector
- isNotNull:
path: spec.template
- isNotEmpty:
path: spec.template.spec.containers[0].resources
- isNotEmpty:
path: spec.template.spec.containers[0]
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 1
......@@ -28,6 +28,7 @@ security:
usernameKey: username
passphraseKey: passphrase
nodeSelector: {}
tolerations: []
# Specify the pod anti-affinity that allows you to constrain which nodes
# your pod is eligible to be scheduled based on labels on pods that are
......@@ -47,8 +48,8 @@ nodeSelector: {}
# This pod anti-affinity rule says that the pod requires not to be scheduled
# onto a node if that node is already running a pod with label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
contextPath: /grafana
service:
......
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: istiocoredns
......@@ -10,6 +10,9 @@ metadata:
release: {{ .Release.Name }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: istiocoredns
template:
metadata:
name: istiocoredns
......@@ -28,7 +31,7 @@ spec:
containers:
- name: coredns
image: "{{ template "system_default_registry" . }}{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: IfNotPresent
imagePullPolicy: {{ .Values.global.imagePullPolicy }}
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
......@@ -62,7 +65,7 @@ spec:
command:
- /usr/local/bin/plugin
image: "{{ template "system_default_registry" . }}{{ .Values.pluginImage.repository }}:{{ .Values.pluginImage.tag }}"
imagePullPolicy: IfNotPresent
imagePullPolicy: {{ .Values.global.imagePullPolicy }}
ports:
- containerPort: 8053
name: dns-grpc
......@@ -84,3 +87,7 @@ spec:
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}
......@@ -7,6 +7,7 @@ replicaCount: 1
# https://github.com/istio-ecosystem/istio-coredns-plugin
# The plugin listens for DNS requests from coredns server at 127.0.0.1:8053
nodeSelector: {}
tolerations: []
# Specify the pod anti-affinity that allows you to constrain which nodes
# your pod is eligible to be scheduled based on labels on pods that are
......@@ -26,5 +27,5 @@ nodeSelector: {}
# This pod anti-affinity rule says that the pod requires not to be scheduled
# onto a node if that node is already running a pod with label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
......@@ -2,5 +2,5 @@ apiVersion: v1
description: Kiali is an open source project for service mesh observability, refer to https://www.kiali.io for details.
name: kiali
version: 1.1.0
appVersion: 0.17
appVersion: 0.20
tillerVersion: ">=2.7.2"
......@@ -15,8 +15,9 @@ rules:
- namespaces
- nodes
- pods
- services
- pods/log
- replicationcontrollers
- services
verbs:
- get
- list
......@@ -24,8 +25,8 @@ rules:
- apiGroups: ["extensions", "apps"]
resources:
- deployments
- statefulsets
- replicasets
- statefulsets
verbs:
- get
- list
......@@ -47,13 +48,19 @@ rules:
- watch
- apiGroups: ["config.istio.io"]
resources:
- adapters
- apikeys
- bypasses
- authorizations
- checknothings
- circonuses
- cloudwatches
- deniers
- dogstatsds
- edges
- fluentds
- handlers
- instances
- kubernetesenvs
- kuberneteses
- listcheckers
......@@ -61,18 +68,24 @@ rules:
- logentries
- memquotas
- metrics
- noops
- opas
- prometheuses
- quotas
- quotaspecbindings
- quotaspecs
- rbacs
- redisquotas
- reportnothings
- rules
- signalfxs
- solarwindses
- stackdrivers
- statsds
- stdios
- templates
- tracespans
- zipkins
verbs:
- create
- delete
......@@ -95,8 +108,8 @@ rules:
- watch
- apiGroups: ["authentication.istio.io"]
resources:
- policies
- meshpolicies
- policies
verbs:
- create
- delete
......@@ -108,8 +121,8 @@ rules:
resources:
- clusterrbacconfigs
- rbacconfigs
- serviceroles
- servicerolebindings
- serviceroles
verbs:
- create
- delete
......@@ -122,6 +135,7 @@ rules:
- monitoringdashboards
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
......@@ -140,8 +154,9 @@ rules:
- namespaces
- nodes
- pods
- services
- pods/log
- replicationcontrollers
- services
verbs:
- get
- list
......@@ -149,8 +164,8 @@ rules:
- apiGroups: ["extensions", "apps"]
resources:
- deployments
- statefulsets
- replicasets
- statefulsets
verbs:
- get
- list
......@@ -172,13 +187,19 @@ rules:
- watch
- apiGroups: ["config.istio.io"]
resources:
- adapters
- apikeys
- bypasses
- authorizations
- checknothings
- circonuses
- cloudwatches
- deniers
- dogstatsds
- edges
- fluentds
- handlers
- instances
- kubernetesenvs
- kuberneteses
- listcheckers
......@@ -186,20 +207,24 @@ rules:
- logentries
- memquotas
- metrics
- noops
- opas
- prometheuses
- quotas
- quotaspecbindings
- quotaspecs
- rbacs
- redisquotas
- reportnothings
- rules
- servicecontrolreports
- servicecontrols
- signalfxs
- solarwindses
- stackdrivers
- statsds
- stdios
- templates
- tracespans
- zipkins
verbs:
- get
- list
......@@ -216,8 +241,8 @@ rules:
- watch
- apiGroups: ["authentication.istio.io"]
resources:
- policies
- meshpolicies
- policies
verbs:
- get
- list
......@@ -226,8 +251,8 @@ rules:
resources:
- clusterrbacconfigs
- rbacconfigs
- serviceroles
- servicerolebindings
- serviceroles
verbs:
- get
- list
......@@ -237,3 +262,4 @@ rules:
- monitoringdashboards
verbs:
- get
- list
......@@ -10,7 +10,7 @@ metadata:
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kiali
name: kiali{{- if .Values.dashboard.viewOnlyMode }}-viewer{{- end }}
subjects:
- kind: ServiceAccount
name: kiali-service-account
......
......@@ -14,12 +14,12 @@ data:
server:
port: 20001
external_services:
istio:
url_service_version: http://istio-pilot:8080/version
jaeger:
service: "jaeger-query"
{{- if .Values.dashboard.jaegerURL }}
url: {{ .Values.dashboard.jaegerURL }}
tracing:
service: "tracing/jaeger"
{{- if and .Values.global.rancher (and .Values.global.rancher.domain .Values.global.rancher.clusterId) }}
{{- if not .Values.dashboard.jaegerURL }}
url: 'https://{{ .Values.global.rancher.domain }}/k8s/clusters/{{ .Values.global.rancher.clusterId }}/api/v1/namespaces/{{ .Release.Namespace }}/services/http:tracing:80/proxy/jaeger'
{{- end }}
{{- end }}
grafana:
custom_metrics_url: "http://prometheus.{{ .Release.Namespace }}:9090"
......
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: kiali
......@@ -23,6 +23,9 @@ spec:
release: {{ .Release.Name }}
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
prometheus.io/scrape: "true"
prometheus.io/port: "9090"
spec:
serviceAccountName: kiali-service-account
{{- if .Values.global.priorityClassName }}
......@@ -39,10 +42,6 @@ spec:
- "4"
env:
{{- if and .Values.global.rancher (and .Values.global.rancher.domain .Values.global.rancher.clusterId) }}
{{- if not .Values.dashboard.jaegerURL }}
- name: JAEGER_URL
value: 'https://{{ .Values.global.rancher.domain }}/k8s/clusters/{{ .Values.global.rancher.clusterId }}/api/v1/namespaces/{{ .Release.Namespace }}/services/tracing:80/proxy/jaeger'
{{- end }}
{{- if not .Values.dashboard.grafanaURL }}
- name: GRAFANA_URL
value: 'https://{{ .Values.global.rancher.domain }}/k8s/clusters/{{ .Values.global.rancher.clusterId }}/api/v1/namespaces/{{ .Release.Namespace }}/services/http:grafana:80/proxy/'
......@@ -75,6 +74,8 @@ spec:
volumeMounts:
- name: kiali-configuration
mountPath: "/kiali-configuration"
- name: kiali-secret
mountPath: "/kiali-secret"
resources:
{{- if .Values.resources }}
{{ toYaml .Values.resources | indent 10 }}
......@@ -111,6 +112,10 @@ spec:
- key: nginx.conf
mode: 438
path: nginx.conf
- name: kiali-secret
secret:
secretName: {{ .Values.dashboard.secretName }}
optional: true
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
apiVersion: v1
kind: Secret
metadata:
name: kiali
name: {{ .Values.dashboard.secretName }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "kiali.name" . }}
......
......@@ -26,4 +26,5 @@ spec:
restartPolicy: Never
affinity:
{{- include "nodeaffinity" . | indent 4 }}
{{- include "podAntiAffinity" . | indent 4 }}
{{- end }}
suite: Test Istio Kiali Deployment
templates:
- deployment.yaml
tests:
- it: should pass all kinds of assertion
set:
replicaCount: 1
asserts:
- equal:
path: spec.replicas
value: 1
- isNull:
path: spec.template.nodeSelector
- isNotNull:
path: spec.template
- isNotEmpty:
path: spec.template.spec.containers[0].resources
- isNotEmpty:
path: spec.template.spec.containers[0]
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 1
#
# addon kiali
#
enabled: false
enabled: false # Note that if using the demo or demo-auth yaml when installing via Helm, this default will be `true`.
replicaCount: 1
contextPath: /
nodeSelector: {}
......@@ -24,8 +24,8 @@ nodeSelector: {}
# This pod anti-affinity rule says that the pod requires not to be scheduled
# onto a node if that node is already running a pod with label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
ingress:
enabled: false
......@@ -44,15 +44,15 @@ ingress:
dashboard:
# login/anonymous
authStrategy: anonymous
secretName: kiali
username: admin
passphrase: admin
# Override the automatically detected Grafana URL, useful when Grafana service has no ExternalIPs
grafanaURL:
secretName: kiali # You must create a secret with this name - one is not provided out-of-box.
viewOnlyMode: false # Bind the service account to a role with only read access
grafanaURL: # If you have Grafana installed and it is accessible to client browsers, then set this to its external URL. Kiali will redirect users to this URL when Grafana metrics are to be shown.
jaegerURL: # If you have Jaeger installed and it is accessible to client browsers, then set this property to its external URL. Kiali will redirect users to this URL when Jaeger tracing is to be shown.
# Override the automatically detected Jaeger URL, useful when Jaeger service has no ExternalIPs
jaegerURL:
prometheusAddr: http://prometheus:9090
service:
......
......@@ -15,7 +15,7 @@ spec:
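As the comment on `secretName` above notes, the chart no longer ships Kiali credentials; if `authStrategy` is switched from `anonymous` to `login`, the referenced secret has to be created separately. A hedged sketch using Kiali's documented `username`/`passphrase` keys (namespace and credentials are placeholders):
```
$ kubectl create secret generic kiali -n $NAMESPACE \
    --from-literal "username=admin" \
    --from-literal "passphrase=admin"
```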
maxReplicas: {{ $spec.autoscaleMax }}
minReplicas: {{ $spec.autoscaleMin }}
scaleTargetRef:
apiVersion: apps/v1beta1
apiVersion: apps/v1
kind: Deployment
name: istio-{{ $key }}
metrics:
......
......@@ -9,6 +9,20 @@
secret:
secretName: istio.istio-mixer-service-account
optional: true
{{- if $.Values.global.sds.enabled }}
- hostPath:
path: /var/run/sds
name: sds-uds-path
{{- if $.Values.global.sds.useTrustworthyJwt }}
- name: istio-token
projected:
sources:
- serviceAccountToken:
audience: {{ $.Values.global.trustDomain }}
expirationSeconds: 43200
path: istio-token
{{- end }}
{{- end }}
- name: uds-socket
emptyDir: {}
- name: policy-adapter-secret
......@@ -18,6 +32,10 @@
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}
containers:
- name: mixer
image: "{{ template "system_default_registry" . }}{{ $.Values.repository }}:{{ $.Values.tag }}"
......@@ -47,10 +65,15 @@
{{- else }}
- --useAdapterCRDs=false
{{- end }}
{{- if $.Values.templates.useTemplateCRDs }}
- --useTemplateCRDs=true
{{- else }}
- --useTemplateCRDs=false
{{- end }}
{{- if $.Values.global.tracer.zipkin.address }}
- --trace_zipkin_url=http://{{- $.Values.global.tracer.zipkin.address }}/api/v1/spans
{{- else }}
- --trace_zipkin_url=http://zipkin:9411/api/v1/spans
- --trace_zipkin_url=http://zipkin.{{ $.Release.Namespace }}:9411/api/v1/spans
{{- end }}
{{- if .Values.env }}
env:
......@@ -134,6 +157,15 @@
- name: istio-certs
mountPath: /etc/certs
readOnly: true
{{- if $.Values.global.sds.enabled }}
- name: sds-uds-path
mountPath: /var/run/sds
readOnly: true
{{- if $.Values.global.sds.useTrustworthyJwt }}
- name: istio-token
mountPath: /var/run/secrets/tokens
{{- end }}
{{- end }}
- name: uds-socket
mountPath: /sock
- name: policy-adapter-secret
......@@ -149,6 +181,20 @@
secret:
secretName: istio.istio-mixer-service-account
optional: true
{{- if $.Values.global.sds.enabled }}
- hostPath:
path: /var/run/sds
name: sds-uds-path
{{- if $.Values.global.sds.useTrustworthyJwt }}
- name: istio-token
projected:
sources:
- serviceAccountToken:
audience: {{ $.Values.global.trustDomain }}
expirationSeconds: 43200
path: istio-token
{{- end }}
{{- end }}
- name: uds-socket
emptyDir: {}
- name: telemetry-adapter-secret
......@@ -158,6 +204,10 @@
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}
containers:
- name: mixer
image: "{{ template "system_default_registry" . }}{{ $.Values.repository }}:{{ $.Values.tag }}"
......@@ -190,10 +240,15 @@
{{- else }}
- --useAdapterCRDs=false
{{- end }}
{{- if $.Values.templates.useTemplateCRDs }}
- --useTemplateCRDs=true
{{- else }}
- --useTemplateCRDs=false
{{- end }}
{{- if $.Values.global.tracer.zipkin.address }}
- --trace_zipkin_url=http://{{- $.Values.global.tracer.zipkin.address }}/api/v1/spans
{{- else }}
- --trace_zipkin_url=http://zipkin:9411/api/v1/spans
- --trace_zipkin_url=http://zipkin.{{ $.Release.Namespace }}:9411/api/v1/spans
{{- end }}
- --averageLatencyThreshold
- {{ $.Values.telemetry.loadshedding.latencyThreshold }}
......@@ -281,6 +336,15 @@
- name: istio-certs
mountPath: /etc/certs
readOnly: true
{{- if $.Values.global.sds.enabled }}
- name: sds-uds-path
mountPath: /var/run/sds
readOnly: true
{{- if $.Values.global.sds.useTrustworthyJwt }}
- name: istio-token
mountPath: /var/run/secrets/tokens
{{- end }}
{{- end }}
- name: uds-socket
mountPath: /sock
{{- end }}
......@@ -289,7 +353,7 @@
{{- range $key, $spec := .Values }}
{{- if or (eq $key "policy") (eq $key "telemetry") }}
{{- if $spec.enabled }}
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: istio-{{ $key }}
......
......@@ -36,3 +36,4 @@ spec:
{{- end }}
{{- end }}
{{- end }}
suite: Test Istio Mixer Deployment
templates:
- deployment.yaml
tests:
- it: should pass all kinds of assertion
set:
policy.enabled: true
telemetry.enabled: false
asserts:
- isNull:
path: spec.replicas
- isNull:
path: spec.template.nodeSelector
- isNotNull:
path: spec.template
- isNotEmpty:
path: spec.template.spec.containers[0].resources
- isNotEmpty:
path: spec.template.spec.containers[0]
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 1
- it: should pass all kinds of assertion
set:
policy.enabled: false
telemetry.enabled: true
asserts:
- isNull:
path: spec.replicas
- isNull:
path: spec.template.nodeSelector
- isNotNull:
path: spec.template
- isNotEmpty:
path: spec.template.spec.containers[0].resources
- isNotEmpty:
path: spec.template.spec.containers[0]
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 1
- it: should pass all kinds of assertion
set:
policy.enabled: true
telemetry.enabled: true
asserts:
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 2
#
# mixer configuration
#
enabled: true
env:
GODEBUG: gctrace=1
# max procs should be ceil(cpu limit + 1)
......@@ -47,6 +45,7 @@ telemetry:
podAnnotations: {}
nodeSelector: {}
tolerations: []
# Specify the pod anti-affinity that allows you to constrain which nodes
# your pod is eligible to be scheduled based on labels on pods that are
......@@ -66,8 +65,11 @@ nodeSelector: {}
# This pod anti-affinity rule says that the pod requires not to be scheduled
# onto a node if that node is already running a pod with label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
templates:
useTemplateCRDs: false
adapters:
kubernetesenv:
......@@ -81,4 +83,4 @@ adapters:
enabled: true
metricsExpiryDuration: 10m
# Setting this to false sets the useAdapterCRDs mixer startup argument to false
useAdapterCRDs: true
useAdapterCRDs: false
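These two values drive the `--useTemplateCRDs` flag added to the Mixer containers earlier in this diff and the pre-existing `--useAdapterCRDs` flag. Re-enabling the CRD-based adapter/template configuration would be an override along these lines (shown only as a sketch; this chart's defaults are `false`):
```yaml
# Sketch: values within the mixer subchart (prefix keys with "mixer." when
# overriding from the parent chart)
templates:
  useTemplateCRDs: true
adapters:
  useAdapterCRDs: true
```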
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: istio-nodeagent
......@@ -10,6 +10,9 @@ metadata:
heritage: {{ .Release.Service }}
istio: nodeagent
spec:
selector:
matchLabels:
istio: nodeagent
template:
metadata:
labels:
......@@ -18,8 +21,13 @@ spec:
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
istio: nodeagent
annotations:
sidecar.istio.io/inject: "false"
spec:
serviceAccountName: istio-nodeagent-service-account
{{- if .Values.global.priorityClassName }}
priorityClassName: "{{ .Values.global.priorityClassName }}"
{{- end }}
containers:
- name: nodeagent
image: "{{ template "system_default_registry" . }}{{ $.Values.global.nodeAgent.repository }}:{{ $.Values.global.nodeAgent.tag }}"
......@@ -43,3 +51,8 @@ spec:
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}
\ No newline at end of file
......@@ -2,7 +2,6 @@
# nodeagent configuration
#
enabled: false
image: node-agent-k8s
env:
# name of authentication provider.
CA_PROVIDER: ""
......@@ -11,6 +10,7 @@ env:
# names of authentication provider's plugins.
Plugins: ""
nodeSelector: {}
tolerations: []
# Specify the pod anti-affinity that allows you to constrain which nodes
# your pod is eligible to be scheduled based on labels on pods that are
......@@ -30,5 +30,5 @@ nodeSelector: {}
# This pod anti-affinity rule says that the pod requires not to be scheduled
# onto a node if that node is already running a pod with label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
......@@ -30,4 +30,3 @@ Create chart name and version as used by the chart label.
{{- define "pilot.chart" -}}
{{- .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
......@@ -13,7 +13,7 @@ spec:
maxReplicas: {{ .Values.autoscaleMax }}
minReplicas: {{ .Values.autoscaleMin }}
scaleTargetRef:
apiVersion: apps/v1beta1
apiVersion: apps/v1
kind: Deployment
name: istio-pilot
metrics:
......
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: istio-pilot
......@@ -173,8 +173,31 @@ spec:
- name: istio-certs
mountPath: /etc/certs
readOnly: true
{{- if $.Values.global.sds.enabled }}
- name: sds-uds-path
mountPath: /var/run/sds
readOnly: true
{{- if $.Values.global.sds.useTrustworthyJwt }}
- name: istio-token
mountPath: /var/run/secrets/tokens
{{- end }}
{{- end }}
{{- end }}
volumes:
{{- if $.Values.global.sds.enabled }}
- hostPath:
path: /var/run/sds
name: sds-uds-path
{{- if $.Values.global.sds.useTrustworthyJwt }}
- name: istio-token
projected:
sources:
- serviceAccountToken:
audience: {{ $.Values.global.trustDomain }}
expirationSeconds: 43200
path: istio-token
{{- end }}
{{- end }}
- name: config-volume
configMap:
name: istio
......@@ -185,3 +208,7 @@ spec:
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}
suite: Test Pilot Deployment
templates:
- deployment.yaml
tests:
- it: should pass all kinds of assertion
set:
service.internalPort: 8080
sidecar: true
asserts:
- equal:
path: spec.template.spec.containers[0].image
value: istio/pilot:1.1.5
- notEqual:
path: spec.template.spec.containers[0].image
value: istio/pilot:1.1
- matchRegex:
path: metadata.name
pattern: .*istio-pilot.*
- contains:
path: spec.template.spec.containers[0].ports
content:
containerPort: 8080
- isNull:
path: spec.template.nodeSelector
- isNotNull:
path: spec.template
- isNotEmpty:
path: spec.template.spec.containers[0].resources
- isNotEmpty:
path: spec.template.spec.containers[0]
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 1
# sidecar tests
- equal:
path: spec.template.spec.containers[1].name
value: istio-proxy
- notContains:
path: spec.template.spec.containers[0].ports
content:
containerPort: 15011
- it: should not set replicas
set:
autoscaleEnabled: true
asserts:
- equal:
path: spec.replicas
value: null
- it: should not add sidecar
set:
sidecar: false
asserts:
- contains:
path: spec.template.spec.containers[0].ports
content:
containerPort: 15011
- contains:
path: spec.template.spec.containers[0].args
content:
--secureGrpcAddr
......@@ -20,6 +20,7 @@ env:
cpu:
targetAverageUtilization: 80
nodeSelector: {}
tolerations: []
# Specify the pod anti-affinity that allows you to constrain which nodes
# your pod is eligible to be scheduled based on labels on pods that are
......@@ -39,8 +40,8 @@ nodeSelector: {}
# This pod anti-affinity rule says that the pod requires not to be scheduled
# onto a node if that node is already running a pod with label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
# The following is used to limit how long a sidecar can be connected
# to a pilot. It balances out load across pilot instances at the cost of
......
......@@ -2,5 +2,5 @@ apiVersion: v1
description: A Helm chart for Kubernetes
name: prometheus
version: 1.1.0
appVersion: 2.3.1
appVersion: 2.8.0
tillerVersion: ">=2.7.2"
......@@ -50,38 +50,6 @@ data:
action: replace
target_label: pod_name
metric_relabel_configs:
# Exclude some of the envoy metrics that have massive cardinality
# This list may need to be pruned further moving forward, as informed
# by performance and scalability testing.
- source_labels: [ cluster_name ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ tcp_prefix ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ listener_address ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_listener_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tls.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tcp_downstream.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_http_(stats|admin).*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_cluster_(lb|retry|bind|internal|max|original).*'
action: drop
- job_name: 'istio-policy'
kubernetes_sd_configs:
- role: endpoints
......
# TODO: the original template has service account, roles, etc
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: prometheus
......@@ -89,7 +89,9 @@ spec:
- name: istio-certs
secret:
defaultMode: 420
{{- if not .Values.security.enabled }}
optional: true
{{- end }}
secretName: istio.default
- name: prometheus-nginx
configMap:
......@@ -97,3 +99,7 @@ spec:
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}
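The guarded `tolerations` block above copies the value verbatim into the pod spec whenever it is non-empty. A sketch of a matching override, using a hypothetical master-node taint:

```
tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
```

With this set, `toYaml` emits the same fields under `spec.template.spec.tolerations`; with the default empty list the `if` guard keeps the key out of the rendered manifest entirely.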
......@@ -21,8 +21,9 @@ spec:
- name: "{{ template "prometheus.fullname" . }}-test"
image: {{ template "system_default_registry" . }}{{ .Values.global.proxy.repository }}:{{ .Values.global.proxy.tag }}
imagePullPolicy: "{{ .Values.global.imagePullPolicy }}"
command: ['sh', '-c', 'for i in 1 2 3; do curl http://prometheus:9090/-/ready && break || sleep 15; done']
command: ['sh', '-c', 'for i in 1 2 3; do curl http://prometheus:9090/-/ready && exit 0 || sleep 15; done; exit 1']
restartPolicy: Never
affinity:
{{- include "nodeaffinity" . | indent 4 }}
{{- include "podAntiAffinity" . | indent 4 }}
{{- end }}
suite: Test Istio Prometheus Deployment
templates:
- deployment.yaml
tests:
- it: should pass all kinds of assertion
set:
replicaCount: 1
asserts:
- equal:
path: spec.replicas
value: 1
- isNull:
path: spec.template.nodeSelector
- isNotNull:
path: spec.template
- isNotEmpty:
path: spec.template.spec.containers[0].resources
- isNotEmpty:
path: spec.template.spec.containers[0]
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 1
......@@ -6,6 +6,7 @@ replicaCount: 1
retention: 6h
nodeSelector: {}
tolerations: []
# Specify the pod anti-affinity that allows you to constrain which nodes
# your pod is eligible to be scheduled based on labels on pods that are
......@@ -25,8 +26,8 @@ nodeSelector: {}
# This pod anti-affinity rule says that the pod requires not to be scheduled
# onto a node if that node is already running a pod with label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
# Controls the frequency of prometheus scraping
scrapeInterval: 15s
......@@ -49,6 +50,9 @@ ingress:
service:
annotations: {}
nodePort:
enabled: false
port: 32090
security:
enabled: true
......
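The new `nodePort` block for the Prometheus service is off by default. An override sketch that turns it on with the default port shown above (any free port in the cluster's NodePort range would work):

```
service:
  annotations: {}
  nodePort:
    enabled: true
    port: 32090
```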
......@@ -27,6 +27,12 @@ metadata:
chart: {{ template "security.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.global.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
......
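The ServiceAccount now propagates `global.imagePullSecrets`, so Citadel images can be pulled from a private registry. A sketch of the corresponding override; the secret name is hypothetical and must already exist in the release namespace as a docker-registry secret:

```
global:
  imagePullSecrets:
    - private-registry-key
```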
# istio CA watching all namespaces
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: istio-citadel
......@@ -11,7 +11,10 @@ metadata:
release: {{ .Release.Name }}
istio: citadel
spec:
replicas: {{ .Values.replicaCount }}
replicas: 1
selector:
matchLabels:
istio: citadel
strategy:
rollingUpdate:
maxSurge: 1
......@@ -38,7 +41,6 @@ spec:
args:
- --append-dns-names=true
- --grpc-port=8060
- --grpc-hostname=citadel
- --citadel-storage-namespace={{ .Release.Namespace }}
- --custom-dns-names=istio-pilot-service-account.{{ .Release.Namespace }}:istio-pilot.{{ .Release.Namespace }}
- --monitoring-port={{ .Values.global.monitoringPort }}
......@@ -54,12 +56,22 @@ spec:
{{- if .Values.global.trustDomain }}
- --trust-domain={{ .Values.global.trustDomain }}
{{- end }}
{{- if .Values.citadelHealthCheck }}
- --liveness-probe-path=/tmp/ca.liveness # path to the liveness health check status file
- --liveness-probe-interval=60s # interval for health check file update
- --probe-check-interval=15s # interval for health status check
{{- end }}
{{- if .Values.citadelHealthCheck }}
livenessProbe:
httpGet:
path: /version
port: {{ .Values.global.monitoringPort }}
initialDelaySeconds: 5
periodSeconds: 5
exec:
command:
- /usr/local/bin/istio_ca
- probe
- --probe-path=/tmp/ca.liveness # path to the liveness health check status file
- --interval=125s # the maximum time gap allowed between the file mtime and the current sys clock
initialDelaySeconds: 60
periodSeconds: 60
{{- end }}
resources:
{{- if .Values.resources }}
{{ toYaml .Values.resources | indent 12 }}
......@@ -80,3 +92,7 @@ spec:
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}
\ No newline at end of file
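When `citadelHealthCheck` is enabled, the template above both passes the liveness-probe flags to Citadel and wires an exec probe that runs `istio_ca probe` against the status file. A sketch of the override, assuming the subchart is installed through the parent rancher-istio chart, where these values live under the `security` key:

```
security:
  enabled: true
  citadelHealthCheck: true
```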
......@@ -40,7 +40,7 @@ spec:
mode: ISTIO_MUTUAL
---
# Destination rule to disable (m)TLS when talking to API server, as API server doesn't have sidecar.
# Customer should add similar destination rules for other services that dont' have sidecar.
# Customer should add similar destination rules for other services that don't have sidecar.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
......
......@@ -21,8 +21,9 @@ spec:
- name: "{{ template "security.fullname" . }}-test"
image: "{{ template "system_default_registry" . }}{{ .Values.global.proxy.repository }}:{{ .Values.global.proxy.tag }}"
imagePullPolicy: "{{ .Values.global.imagePullPolicy }}"
command: ['sh', '-c', 'for i in 1 2 3; do curl http://istio-citadel:8060/-/ready && break || sleep 15; done']
command: ['sh', '-c', 'for i in 1 2 3; do curl http://istio-citadel:8060/-/ready && exit 0 || sleep 15; done; exit 1']
restartPolicy: Never
affinity:
{{- include "nodeaffinity" . | indent 4 }}
{{- include "podAntiAffinity" . | indent 4 }}
{{- end }}
suite: Test Istio Citadel Deployment
templates:
- deployment.yaml
tests:
- it: should pass all kinds of assertion
set:
replicaCount: 1
asserts:
- equal:
path: spec.replicas
value: 1
- isNull:
path: spec.template.nodeSelector
- isNotNull:
path: spec.template
- isNotEmpty:
path: spec.template.spec.containers[0].resources
- isNotEmpty:
path: spec.template.spec.containers[0]
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 1
......@@ -2,10 +2,13 @@
# security configuration
#
enabled: true
replicaCount: 1
selfSigned: true # indicate if self-signed CA is used.
createMeshPolicy: true
nodeSelector: {}
tolerations: []
# Enable health checking on the Citadel CSR signing API.
# https://istio.io/docs/tasks/security/health-check/
citadelHealthCheck: false
# Specify the pod anti-affinity that allows you to constrain which nodes
# your pod is eligible to be scheduled based on labels on pods that are
......@@ -25,5 +28,5 @@ nodeSelector: {}
# This pod anti-affinity rule says that the pod requires not to be scheduled
# onto a node if that node is already running a pod with label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: istio-sidecar-injector
......@@ -11,6 +11,9 @@ metadata:
istio: sidecar-injector
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
istio: sidecar-injector
strategy:
rollingUpdate:
maxSurge: 1
......@@ -27,7 +30,7 @@ spec:
sidecar.istio.io/inject: "false"
spec:
serviceAccountName: istio-sidecar-injector-service-account
{{- if .Values.global.priorityClassName }}
{{- if .Values.global.priorityClassName }}
priorityClassName: "{{ .Values.global.priorityClassName }}"
{{- end }}
containers:
......@@ -89,6 +92,8 @@ spec:
items:
- key: config
path: config
- key: values
path: values
affinity:
{{- include "nodeaffinity" . | indent 6 }}
{{- include "podAntiAffinity" . | indent 6 }}
......@@ -2,7 +2,6 @@ apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
name: istio-sidecar-injector
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "sidecar-injector.name" . }}
chart: {{ template "sidecar-injector.chart" . }}
......@@ -25,6 +24,10 @@ webhooks:
namespaceSelector:
{{- if .Values.enableNamespacesByDefault }}
matchExpressions:
- key: name
operator: NotIn
values:
- {{ .Release.Namespace }}
- key: istio-injection
operator: NotIn
values:
......
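With `enableNamespacesByDefault: true`, the added expression keeps the control-plane namespace itself out of automatic injection by matching on a `name` label. A sketch of the rendered fragment, assuming the release namespace is `istio-system` and that the namespace actually carries a `name: istio-system` label (the selector only excludes namespaces labeled this way); the existing `istio-injection` expression from the template follows it unchanged:

```
namespaceSelector:
  matchExpressions:
  - key: name
    operator: NotIn
    values:
    - istio-system
```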
{{- if .Values.global.defaultPodDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: istio-sidecar-injector
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "sidecar-injector.name" . }}
release: {{ .Release.Name }}
istio: sidecar-injector
spec:
{{ include "podDisruptionBudget.spec" .Values.global.defaultPodDisruptionBudget }}
selector:
matchLabels:
app: {{ template "sidecar-injector.name" . }}
release: {{ .Release.Name }}
istio: sidecar-injector
{{- end }}
\ No newline at end of file
suite: Test SidecarInjectorWebhook MutatingWebhook
templates:
- mutatingwebhook.yaml
tests:
- it: should pass all kinds of assertion
set:
enableNamespacesByDefault: false
asserts:
- isNull:
path: webhooks[0].namespaceSelector.matchExpressions
- isEmpty:
path: webhooks[0].namespaceSelector.matchExpressions
- isNotNull:
path: webhooks[0].namespaceSelector.matchLabels
- isNotEmpty:
path: webhooks[0].namespaceSelector.matchLabels
- contains:
path: webhooks[0].rules
content:
operations: [ "CREATE" ]
apiGroups: [""]
apiVersions: ["v1"]
resources: ["pods"]
- isKind:
of: MutatingWebhookConfiguration
- isAPIVersion:
of: admissionregistration.k8s.io/v1beta1
- hasDocuments:
count: 1
- it: should not set autoInjection selector
set:
enableNamespacesByDefault: true
asserts:
- isNotNull:
path: webhooks[0].namespaceSelector.matchExpressions
- isNotEmpty:
path: webhooks[0].namespaceSelector.matchExpressions
- isNull:
path: webhooks[0].namespaceSelector.matchLabels
- isEmpty:
path: webhooks[0].namespaceSelector.matchLabels
suite: Test SidecarInjectorWebhook RBAC
templates:
- clusterrole.yaml
tests:
- it: should pass all kinds of assertion
set:
provider: jaeger
asserts:
- isNotNull:
path: rules
- isNotEmpty:
path: rules
- contains:
path: rules
content:
apiGroups: ["admissionregistration.k8s.io"]
resources: ["mutatingwebhookconfigurations"]
verbs: ["get", "list", "watch", "patch"]
- isKind:
of: ClusterRole
- isAPIVersion:
of: rbac.authorization.k8s.io/v1
- hasDocuments:
count: 1
......@@ -24,10 +24,17 @@ nodeSelector: {}
# This pod anti-affinity rule says that the pod requires not to be scheduled
# onto a node if that node is already running a pod with label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
# If true, webhook or istioctl injector will rewrite PodSpec for liveness
# health check to redirect request to sidecar. This makes liveness check work
# even when mTLS is enabled.
rewriteAppHTTPProbe: false
# The fields alwaysInjectSelector and neverInjectSelector always inject the sidecar, or always skip the
# injection, on pods that match the given label selector, regardless of the global policy.
# See https://istio.io/docs/setup/kubernetes/additional-setup/sidecar-injection/#more-control-adding-exceptions
neverInjectSelector: []
alwaysInjectSelector: []
\ No newline at end of file
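Both fields take a list of standard Kubernetes label selectors, per the linked sidecar-injection docs. A sketch with hypothetical labels: skip injection for pods carrying an OpenShift build label, and force injection for pods labeled `critical=true`:

```
neverInjectSelector:
  - matchExpressions:
    - key: openshift.io/build.name
      operator: Exists
alwaysInjectSelector:
  - matchLabels:
      critical: "true"
```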
{{ if eq .Values.provider "jaeger" }}
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: istio-tracing
......@@ -11,6 +11,9 @@ metadata:
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
selector:
matchLabels:
app: jaeger
template:
metadata:
labels:
......
{{ if eq .Values.provider "zipkin" }}
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}-zipkin
name: istio-tracing
namespace: {{ .Release.Namespace }}
labels:
app: zipkin
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
chart: {{ template "tracing.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
selector:
matchLabels:
app: zipkin
template:
metadata:
labels:
app: zipkin
chart: {{ template "tracing.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
{{- if .Values.global.priorityClassName }}
priorityClassName: "{{ .Values.global.priorityClassName }}"
{{- end }}
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.global.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
- name: zipkin
image: "{{ template "system_default_registry" . }}{{ .Values.zipkin.repository }}:{{ .Values.zipkin.tag }}"
imagePullPolicy: {{ .Values.global.imagePullPolicy }}
ports:
- containerPort: {{ .Values.zipkin.queryPort }}
livenessProbe:
......
......@@ -29,4 +29,5 @@ spec:
restartPolicy: Never
affinity:
{{- include "nodeaffinity" . | indent 4 }}
{{- include "podAntiAffinity" . | indent 4 }}
{{- end }}
suite: Test Jaeger Deployment
templates:
- deployment-jaeger.yaml
- deployment-zipkin.yaml
tests:
- it: should pass all kinds of assertion
set:
provider: jaeger
asserts:
- equal:
path: spec.template.spec.containers[0].image
value: jaegertracing/all-in-one:1.9
- equal:
path: spec.template.metadata.labels.app
value: jaeger
- equal:
path: spec.template.spec.containers[0].name
value: jaeger
- isNull:
path: spec.template.nodeSelector
- isNotNull:
path: spec.template
- isNotEmpty:
path: spec.template.spec.containers[0].resources
- isNotEmpty:
path: spec.template.spec.containers[0]
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 1
- it: should not deploy jaeger
set:
provider: zipkin
asserts:
- hasDocuments:
count: 0
suite: Test Zipkin Deployment
templates:
- deployment-zipkin.yaml
tests:
- it: should pass all kinds of assertion
set:
provider: zipkin
asserts:
- equal:
path: spec.template.spec.containers[0].image
value: openzipkin/zipkin:2
- equal:
path: spec.template.metadata.labels.app
value: zipkin
- equal:
path: spec.template.spec.containers[0].name
value: tracing
- isNull:
path: spec.template.nodeSelector
- isNotNull:
path: spec.template
- isNotEmpty:
path: spec.template.spec.containers[0].resources
- isNotEmpty:
path: spec.template.spec.containers[0]
- isKind:
of: Deployment
- isAPIVersion:
of: extensions/v1beta1
- hasDocuments:
count: 1
- it: should not deploy zipkin
set:
provider: jaeger
asserts:
- hasDocuments:
count: 0
#
# addon jeager tracing configuration
# addon jaeger tracing configuration
#
enabled: false
......@@ -24,8 +24,8 @@ nodeSelector: {}
# This pod anti-affinity rule says that the pod requires not to be scheduled
# onto a node if that node is already running a pod with label having key
# “security” and value “S1”.
podAntiAffinityLabelSelector: {}
podAntiAffinityTermLabelSelector: {}
podAntiAffinityLabelSelector: []
podAntiAffinityTermLabelSelector: []
jaeger:
memory:
......
......@@ -66,7 +66,7 @@
matchExpressions:
- key: {{ $item.key }}
operator: {{ $item.operator }}
{{- if $item.value }}
{{- if $item.values }}
values:
{{- $vals := split "," $item.values }}
{{- range $i, $v := $vals }}
......
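The corrected guard checks `$item.values`, the same field the `split` call consumes, so an entry with a comma-separated `values` string now renders its value list. A sketch of one such entry and the matchExpressions it would produce; the label key is illustrative:

```
# values.yaml entry consumed by the helper:
#   - key: "beta.kubernetes.io/arch"
#     operator: "In"
#     values: "amd64,ppc64le"
# rendered output:
matchExpressions:
- key: beta.kubernetes.io/arch
  operator: In
  values:
  - amd64
  - ppc64le
```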
......@@ -47,7 +47,7 @@ apiVersion: v1
kind: Endpoints
metadata:
name: istio-telemetry
namespace: istio-system
namespace: {{ .Release.Namespace }}
subsets:
- addresses:
- ip: {{ .Values.global.remoteTelemetryAddress }}
......
suite: Test Certmanager CRDs
templates:
- crd-certmanager.yaml
tests:
- it: should create certmanager CRDs
set:
enableCRDs: true
certmanager.enabled: true
asserts:
- hasDocuments:
count: 5
- isKind:
of: CustomResourceDefinition
- it: should not render certmanager CRDs
set:
enableCRDs: true
certmanager.enabled: false
asserts:
- hasDocuments:
count: 0
- it: should set helm crd hook annotations
set:
enableCRDs: true
certmanager.enabled: true
asserts:
- equal:
path: metadata.annotations
value:
helm.sh/resource-policy: keep
helm.sh/hook: "crd-install"
suite: Test Istio CRDs
templates:
- crds.yaml
tests:
- it: should create custom resource definition
set:
enableCRDs: true
asserts:
- hasDocuments:
count: 53 # istio v1.1.5 contains a total of 53 CRDs
- isKind:
of: CustomResourceDefinition
- it: should not render custom resource definition
set:
enableCRDs: false
asserts:
- hasDocuments:
count: 0
- it: should set helm crd hook annotations
set:
enableCRDs: true
asserts:
- equal:
path: metadata.annotations
value:
helm.sh/resource-policy: keep
helm.sh/hook: "crd-install"
......@@ -23,7 +23,7 @@ gateways:
#
sidecarInjectorWebhook:
repository: rancher/istio-sidecar_injector
tag: "1.1.5"
tag: "1.2.0"
enabled: true
#
......@@ -32,7 +32,7 @@ sidecarInjectorWebhook:
#
galley:
repository: rancher/istio-galley
tag: 1.1.5
tag: 1.2.0
enabled: true
#
......@@ -41,7 +41,7 @@ galley:
# @see charts/mixer/values.yaml, it takes precedence
mixer:
repository: rancher/istio-mixer
tag: "1.1.5"
tag: "1.2.0"
enabled: true
policy:
# if policy is enabled the global.disablePolicyChecks has affect.
......@@ -55,7 +55,7 @@ mixer:
# @see charts/pilot/values.yaml
pilot:
repository: rancher/istio-pilot
tag: "1.1.5"
tag: "1.2.0"
enabled: true
#
......@@ -63,7 +63,7 @@ pilot:
#
security:
repository: rancher/istio-citadel
tag: "1.1.5"
tag: "1.2.0"
enabled: true
#
......@@ -77,7 +77,7 @@ nodeagent:
#
grafana:
repository: rancher/grafana-grafana
tag: 5.4.0
tag: 6.1.6
enabled: false
#
......@@ -85,7 +85,7 @@ grafana:
#
prometheus:
repository: rancher/prom-prometheus
tag: v2.3.1
tag: v2.8.0
enabled: true
#
......@@ -105,10 +105,16 @@ tracing:
#
kiali:
repository: rancher/kiali-kiali
tag: v0.17
tag: v0.20
enabled: true
#
# addon certmanager configuration
#
certmanager:
enabled: false
#
# Istio CNI plugin enabled
# This must be enabled to use the CNI plugin in Istio. The CNI plugin is installed separately.
# If true, the privileged initContainer istio-init is not needed to perform the traffic redirect
......@@ -128,9 +134,6 @@ istiocoredns:
tag: 0.2-istio-1.1
enabled: false
certmanager:
enabled: false
# Common settings used among istio subcharts.
global:
# Specify rancher domain and clusterId of external tracing config
......@@ -148,7 +151,7 @@ global:
# Default tag for Istio images.
# tag: release-1.1-latest-daily
tag: 1.1.5
tag: 1.2.0
# Comma-separated minimum per-scope logging level of messages to output, in the form of <scope>:<level>,<scope>:<level>
# The control plane has different scopes depending on component, but can configure default log level across all components
......@@ -158,7 +161,7 @@ global:
kubectl:
repository: rancher/istio-kubectl
tag: 1.1.5
tag: 1.2.0
# monitoring port used by mixer, pilot, galley
monitoringPort: 15014
......@@ -182,7 +185,7 @@ global:
proxy:
repository: rancher/istio-proxyv2
tag: 1.1.5
tag: 1.2.0
# cluster domain. Default value is "cluster.local".
clusterDomain: "cluster.local"
......@@ -194,7 +197,7 @@ global:
memory: 128Mi
limits:
cpu: 2000m
memory: 128Mi
memory: 1024Mi
# Controls number of Proxy worker threads.
# If set to 0 (default), then start worker thread for each CPU thread/core.
......@@ -217,9 +220,13 @@ global:
# Expected values are: trace|debug|info|warning|error|critical|off
logLevel: ""
# Per Component log level for proxy, applies to gateways and sidecars. If a component level is
# not set, then the global "logLevel" will be used. If left empty, "misc:error" is used.
componentLogLevel: ""
# Configure the DNS refresh rate for Envoy cluster of type STRICT_DNS
# 5 seconds is the default refresh rate used by Envoy
dnsRefreshRate: 5s
# This must be given in terms of seconds. For example, 300s is valid but 5m is invalid.
dnsRefreshRate: 300s
#If set to true, istio-proxy container will have privileged securityContext
privileged: false
......@@ -246,6 +253,7 @@ global:
# be allowed by the sidecar
includeIPRanges: "*"
excludeIPRanges: ""
excludeOutboundPorts: ""
# pod internal interfaces
kubevirtInterfaces: ""
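`excludeOutboundPorts` joins `includeIPRanges` / `excludeIPRanges` as a way to carve traffic out of the sidecar's iptables redirection, here by destination port. An override sketch, assuming these keys sit under `global.proxy` as in the surrounding hunk; the port numbers are hypothetical:

```
global:
  proxy:
    includeIPRanges: "*"
    excludeIPRanges: ""
    excludeOutboundPorts: "3306,5432"
```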
......@@ -291,14 +299,14 @@ global:
proxy_init:
# Base name for the proxy_init container, used to configure iptables.
repository: rancher/istio-proxy_init
tag: "1.1.5"
tag: "1.2.0"
# imagePullPolicy is applied to istio control plane components.
# local tests require IfNotPresent, to avoid uploading to dockerhub.
# TODO: Switch to Always as default, and override in the local tests.
imagePullPolicy: IfNotPresent
# controlPlaneMtls enabled. Will result in delays starting the pods while secrets are
# controlPlaneSecurityEnabled enabled. Will result in delays starting the pods while secrets are
# propagated, not recommended for tests.
controlPlaneSecurityEnabled: false
......@@ -348,7 +356,7 @@ global:
# to use for pulling any images in pods that reference this ServiceAccount.
# For components that don't use ServiceAccounts (i.e. grafana, servicegraph, tracing)
# ImagePullSecrets will be added to the corresponding Deployment(StatefulSet) objects.
# Must be set for any clustser configured with private docker registry.
# Must be set for any cluster configured with private docker registry.
imagePullSecrets:
# - private-registry-key
......@@ -466,7 +474,7 @@ global:
nodeAgent:
repository: rancher/istio-node-agent-k8s
tag: "1.1.5"
tag: "1.2.0"
sds:
# SDS enabled. IF set to true, mTLS certificates for the sidecars will be
# distributed through the SecretDiscoveryService instead of using K8S secrets to mount the certificates.
......@@ -484,8 +492,9 @@ global:
# The second network, `network2`, in this example is defined differently with all endpoints
# retrieved through the specified Multi-Cluster registry being mapped to network2. The
# gateway is also defined differently with the name of the gateway service on the remote
# cluster. The public IP for the gateway will be determined from that remote service (not
# supported yet).
# cluster. The public IP for the gateway will be determined from that remote service (only the
# LoadBalancer gateway service type is currently supported; a NodePort type gateway service
# still needs to be configured manually).
#
# meshNetworks:
# network1:
......@@ -498,7 +507,7 @@ global:
# endpoints:
# - fromRegistry: reg1
# gateways:
# - registryServiceName: istio-ingressgateway
# - registryServiceName: istio-ingressgateway.istio-system.svc.cluster.local
# port: 443
#
meshNetworks: {}
......