Unverified commit 28cb367a by Denise, committed by GitHub

Merge pull request #267 from rancher/partners

Update partner charts on 10/9
parents c9365145 a7eeab7f
apiVersion: v1
name: instana-agent
version: 1.0.16
appVersion: 1.0
description: Instana Agent for Kubernetes
home: https://www.instana.com/
icon: file://../stan_icon_front_black_big.png
sources:
- https://github.com/instana/instana-agent-docker
maintainers:
- name: jbrisbin
email: jon.brisbin@instana.com
- name: wiggzz
email: william.james@instana.com
- name: JeroenSoeters
email: jeroen.soeters@instana.com
- name: fstab
email: fabian.staeber@instana.com
- name: mdonkers
email: miel.donkers@instana.com
- name: dlbock
email: dahlia.bock@instana.com
- name: nfisher
email: nathan.fisher@instana.com
approvers:
- jbrisbin
- wiggzz
- JeroenSoeters
- fstab
- mdonkers
- dlbock
- nfisher
reviewers:
- jbrisbin
- wiggzz
- JeroenSoeters
- fstab
- mdonkers
- dlbock
- nfisher
# Instana
[Instana](https://www.instana.com/) is a Dynamic APM for Microservice Applications.
## Introduction
This chart adds the Instana Agent to all schedulable nodes (i.e., by default, not masters) in your cluster via a `DaemonSet`.
## Prerequisites
* Kubernetes 1.9.x - 1.14.x
* A working `helm` and `tiller` setup
_Note:_ Tiller may need a service account and role binding if RBAC is enabled in your cluster.
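For example, one common way to grant Tiller the needed permissions is a dedicated service account bound to `cluster-admin` (the account name and binding below are illustrative, not the only option):
```bash
$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller
$ helm init --service-account tiller
```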
## Installing the Chart
To configure the installation, you can either specify the options on the command line using the **--set** switch, or edit **values.yaml**. Either way, ensure that you set values for:
* agent.key
* zone.name or cluster.name
For most users, setting the `zone.name` is sufficient. However, if you would like to be able to group your hosts based on the availability zone rather than the cluster name, then you can specify the cluster name using the `cluster.name` setting instead of `zone.name`. If you omit `zone.name`, the host zone will be automatically determined from the availability zone information on the host.
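For example, a sketch that names the cluster and leaves the zone to be derived from the hosts (placeholder values):
```bash
$ helm install --name instana-agent --namespace instana-agent \
    --set agent.key=INSTANA_AGENT_KEY \
    --set cluster.name=CLUSTER_NAME \
    stable/instana-agent
```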
If you're in the EU, you'll probably also want to set the regional equivalent values for:
* agent.endpointHost
* agent.endpointPort
_Note:_ Check the values for the endpoint entries in the [agent backend configuration](https://docs.instana.io/quick_start/agent_configuration/#backend).
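For instance, a sketch of pointing agents at an EU region; the hostname below is illustrative, so take the real value from the linked documentation:
```bash
$ helm install --name instana-agent --namespace instana-agent \
    --set agent.key=INSTANA_AGENT_KEY \
    --set agent.endpointHost=saas-eu-west-1.instana.io \
    --set agent.endpointPort=443 \
    --set zone.name=ZONE_NAME \
    stable/instana-agent
```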
Optionally, if your infrastructure uses a proxy, you should ensure that you set values for the following (see the sketch after this list):
* agent.pod.proxyHost
* agent.pod.proxyPort
* agent.pod.proxyProtocol
* agent.pod.proxyUser
* agent.pod.proxyPassword
* agent.pod.proxyUseDNS
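A sketch of a proxied installation; the proxy host, port, and protocol are placeholders to be replaced with your own values:
```bash
$ helm install --name instana-agent --namespace instana-agent \
    --set agent.key=INSTANA_AGENT_KEY \
    --set zone.name=ZONE_NAME \
    --set agent.pod.proxyHost=proxy.example.com \
    --set agent.pod.proxyPort=3128 \
    --set agent.pod.proxyProtocol=http \
    stable/instana-agent
```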
Optionally, if your infrastructure has multiple networks defined, you might need to allow the agent to listen on all addresses (typically by setting the value to `'*'`):
* agent.listenAddress
If your agent requires a download key, ensure that you set a value for:
* agent.downloadKey
The agent can run in APM, INFRASTRUCTURE, or AWS mode. The default is APM; if you want to override that, set a value for:
* agent.mode
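For example, a sketch of switching an already-installed release to infrastructure-only monitoring with `helm upgrade`:
```bash
$ helm upgrade instana-agent stable/instana-agent \
    --reuse-values \
    --set agent.mode=INFRASTRUCTURE
```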
To install the chart with the release name `instana-agent` and set the values on the command line, run:
```bash
$ helm install --name instana-agent --namespace instana-agent \
--set agent.key=INSTANA_AGENT_KEY \
--set agent.endpointHost=HOST \
--set zone.name=ZONE_NAME \
stable/instana-agent
```
To install the chart with the release name `instana-agent` after editing the **values.yaml** file, run:
```bash
$ helm install --name instana-agent --namespace instana-agent stable/instana-agent
```
## Uninstalling the Chart
To uninstall/delete the `instana-agent` release:
```bash
$ helm del --purge instana-agent
```
## Configuration
### Helm Chart
The following table lists the configurable parameters of the Instana chart and their default values.
| Parameter | Description | Default |
|------------------------------------|-------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------|
| `agent.configuration_yaml` | Custom content for the agent configuration.yaml file | `nil` See [below](#agent) for more details |
| `agent.downloadKey` | Your Instana Download key | `nil` Usually not required |
| `agent.endpointHost` | Instana Agent backend endpoint host | `saas-us-west-2.instana.io` |
| `agent.endpointPort` | Instana Agent backend endpoint port | `443` |
| `agent.image.name` | The image name to pull | `instana/agent` |
| `agent.image.tag` | The image tag to pull | `1.0.17` |
| `agent.image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `agent.key` | Your Instana Agent key | `nil` You must provide your own key |
| `agent.leaderElectorPort` | Instana leader elector sidecar port | `42655` |
| `agent.listenAddress` | List of addresses to listen on, or "*" for all interfaces | `nil` |
| `agent.mode` | Agent mode (Supported values are APM, INFRASTRUCTURE, AWS) | `APM` |
| `agent.pod.annotations` | Additional annotations to apply to the pod | `{}` |
| `agent.pod.limits.cpu` | Container cpu limits in cpu cores | `1.5` |
| `agent.pod.limits.memory` | Container memory limits in MiB | `512` |
| `agent.pod.proxyHost` | Hostname/address of a proxy | `nil` |
| `agent.pod.proxyPort` | Port of a proxy | `nil` |
| `agent.pod.proxyProtocol` | Proxy protocol (Supported proxy types are "http", "socks4", "socks5") | `nil` |
| `agent.pod.proxyUser` | Username of the proxy auth | `nil` |
| `agent.pod.proxyPassword` | Password of the proxy auth | `nil` |
| `agent.pod.proxyUseDNS` | Boolean if proxy also does DNS | `nil` |
| `agent.pod.requests.memory` | Container memory requests in MiB | `512` |
| `agent.pod.requests.cpu` | Container cpu requests in cpu cores | `0.5` |
| `agent.pod.tolerations` | Tolerations for pod assignment | `[]` |
| `agent.redactKubernetesSecrets` | Enable additional secrets redaction for selected Kubernetes resources | `nil` See [Kubernetes secrets](https://docs.instana.io/quick_start/agent_setup/container/kubernetes/#secrets) for more details. |
| `cluster.name` | Display name of the monitored cluster | Value of `zone.name` |
| `podSecurityPolicy.enable` | Whether a PodSecurityPolicy should be authorized for the Instana Agent pods. Requires `rbac.create` to be `true` as well. | `false` See [PodSecurityPolicy](https://docs.instana.io/quick_start/agent_setup/container/kubernetes/#podsecuritypolicy) for more details. |
| `podSecurityPolicy.name` | Name of an _existing_ PodSecurityPolicy to authorize for the Instana Agent pods. If not provided and `podSecurityPolicy.enable` is `true`, a PodSecurityPolicy will be created for you. | `nil` |
| `rbac.create` | Whether RBAC resources should be created | `true` |
| `serviceAccount.create` | Whether a ServiceAccount should be created | `true` |
| `serviceAccount.name` | Name of the ServiceAccount to use | `instana-agent` |
| `zone.name` | Zone that detected technologies will be assigned to | `nil` You must provide either `zone.name` or `cluster.name`, see [above](#installing-the-chart) for details |
#### Development and debugging options
These options are rarely needed outside of development or debugging of the agent.
| Parameter | Description | Default |
|------------------------------------|-------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------|
| `agent.host.repository` | Host path to mount as the agent maven repository | `nil` |
### Agent
To configure the agent, you can either:
- edit the [config map](templates/configmap.yaml), or
- provide the configuration via the `agent.configuration_yaml` parameter in [values.yaml](values.yaml)
This configuration will be used for all Instana Agents on all nodes. Visit the [agent configuration documentation](https://docs.instana.io/quick_start/agent_configuration/#configuration) for more details on configuration options.
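For example, a minimal sketch of a **values.yaml** override; the host tags mirror the commented sample in the chart's config map and are illustrative only:
```yaml
agent:
  configuration_yaml: |
    # Tag the host sensor on every node (illustrative values)
    com.instana.plugin.host:
      tags:
        - 'dev'
        - 'app1'
```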
# Instana
[Instana](https://www.instana.com/) is a Dynamic APM for Microservice Applications.
## Introduction
This chart adds the Instana Agent to all schedulable nodes (i.e., by default, not masters) in your cluster via a `DaemonSet`.
name: instana-agent
rancher_min_version: 2.3.0-rc1
labels:
io.cattle.role: cluster
io.rancher.certified: partner
questions:
# Basic agent configuration
- variable: agent.key
label: agent.key
description: "Your Instana Agent key is the secret token which your agent uses to authenticate to Instana's servers"
type: string
required: true
group: "Agent Configuration"
- variable: agent.endpointHost
label: agent.endpointHost
description: "The hostname of the Instana server your agents will connect to"
type: string
required: true
default: "saas-us-west-2.instana.io"
group: "Agent Configuration"
- variable: zone.name
label: zone.name
description: "Custom zone that detected technologies will be assigned to"
type: string
required: true
group: "Agent Configuration"
# Advanced agent configuration
- variable: advancedAgentConfiguration
description: "Show advanced configuration for the Instana Agent"
label: Show advanced configuration
type: boolean
default: false
show_subquestion_if: true
group: "Advanced Agent Configuration"
subquestions:
- variable: agent.configuration_yaml
label: agent.configuration_yaml (Optional)
description: "Custom content for the agent configuration.yaml file in YAML format. Please use the 'Edit as YAML' feature in the Rancher UI for the best editing experience."
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.downloadKey
label: agent.downloadKey (Optional)
description: "Your Instana download key"
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.endpointPort
label: agent.endpointPort
description: "The Agent backend port number (as a string) of the Instana server your agents will connect to"
type: string
required: true
default: "443"
group: "Advanced Agent Configuration"
- variable: agent.image.name
label: agent.image.name
description: "The name of the container image of the Instana Agent"
type: string
required: true
default: "instana/agent"
group: "Advanced Agent Configuration"
- variable: agent.image.tag
label: agent.image.tag
description: "The tag name of the Instana Agent container image"
type: string
required: true
default: "1.0.17"
group: "Advanced Agent Configuration"
- variable: agent.image.pullPolicy
label: agent.image.pullPolicy
description: "Specifies when to pull the Instana Agent image container"
type: string
required: true
default: "IfNotPresent"
group: "Advanced Agent Configuration"
- variable: agent.leaderElectorPort
label: agent.leaderElectorPort
description: "The port on which the leader elector sidecar is exposed"
type: int
required: true
default: 42655
group: "Advanced Agent Configuration"
- variable: agent.listenAddress
label: agent.listenAddress (Optional)
description: "The IP address the agent HTTP server will listen to, or '*' for all interfaces"
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.mode
label: agent.mode (Optional)
description: "Agent mode. Possible options are: APM, INFRASTRUCTURE or AWS"
type: enum
options:
- "APM"
- "INFRASTRUCTURE"
- "AWS"
required: false
group: "Advanced Agent Configuration"
- variable: agent.pod.annotations
label: agent.pod.annotations (Optional)
description: "Additional annotations to be added to the agent pods in YAML format. Please use the 'Edit as YAML' feature in the Rancher UI for the best editing experience."
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.pod.limits.cpu
label: agent.pod.limits.cpu
description: "CPU units allocation limits for the agent pods"
type: string
required: true
default: "1.5"
group: "Advanced Agent Configuration"
- variable: agent.pod.limits.memory
label: agent.pod.limits.memory
description: "Memory allocation limits in MiB for the agent pods"
type: int
required: true
default: 512
group: "Advanced Agent Configuration"
- variable: agent.pod.proxyHost
label: agent.pod.proxyHost (Optional)
description: "Hostname/address of a proxy. Sets the INSTANA_AGENT_PROXY_HOST environment variable"
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.pod.proxyPort
label: agent.pod.proxyPort (Optional)
description: "Port of a proxy. Sets the INSTANA_AGENT_PROXY_PORT environment variable"
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.pod.proxyProtocol
label: agent.pod.proxyProtocol (Optional)
description: "Proxy protocol. Sets the INSTANA_AGENT_PROXY_PROTOCOL environment variable. Supported proxy types are http, socks4, socks5"
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.pod.proxyUser
label: agent.pod.proxyUser (Optional)
description: "Username of the proxy auth. Sets the INSTANA_AGENT_PROXY_USER environment variable"
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.pod.proxyPassword
label: agent.pod.proxyPassword (Optional)
description: "Password of the proxy auth. Sets the INSTANA_AGENT_PROXY_PASSWORD environment variable"
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.pod.proxyUseDNS
label: agent.pod.proxyUseDNS (Optional)
description: "Boolean if proxy also does DNS. Sets the INSTANA_AGENT_PROXY_USE_DNS environment variable"
type: enum
options:
- "true"
- "false"
required: false
group: "Advanced Agent Configuration"
- variable: agent.pod.requests.cpu
label: agent.pod.requests.cpu
description: "Requested CPU units allocation for the agent pods"
type: string
required: true
default: "0.5"
group: "Advanced Agent Configuration"
- variable: agent.pod.requests.memory
label: agent.pod.requests.memory
description: "Requested memory allocation in MiB for the agent pods"
type: int
required: true
default: 512
group: "Advanced Agent Configuration"
- variable: agent.pod.tolerations
label: agent.pod.tolerations (Optional)
description: "Tolerations to influence agent pod assignment in YAML format. Please use the 'Edit as YAML' feature in the Rancher UI for the best editing experience."
type: string
required: false
group: "Advanced Agent Configuration"
- variable: agent.redactKubernetesSecrets
label: agent.redactKubernetesSecrets (Optional)
description: "Enable additional secrets redaction for selected Kubernetes resources"
type: boolean
required: false
default: false
group: "Advanced Agent Configuration"
- variable: cluster.name
label: cluster.name (Optional)
description: "The name that will be assigned to this cluster in Instana. See the 'Installing the Chart' section in the 'Detailed Descriptions' tab for more details"
type: string
required: false
group: "Advanced Agent Configuration"
- variable: podSecurityPolicy.enable
label: podSecurityPolicy.enable (Optional)
description: "Specifies whether a PodSecurityPolicy should be authorized for the Instana Agent pods. Requires `rbac.create` to also be `true`"
type: boolean
show_if: "rbac.create=true"
required: false
default: false
group: "Pod Security Policy Configuration"
- variable: podSecurityPolicy.name
label: podSecurityPolicy.name (Optional)
description: "The name of an existing PodSecurityPolicy you would like to authorize for the Instana Agent pods. If not set and `podSecurityPolicy.enable` is `true`, a PodSecurityPolicy will be created with a name generated using the fullname template"
type: string
show_if: "rbac.create=true&&podSecurityPolicy.enable=true"
required: false
group: "Pod Security Policy Configuration"
- variable: rbac.create
label: rbac.create
description: "Specifies whether RBAC resources should be created"
type: boolean
required: true
default: true
group: "RBAC Configuration"
- variable: serviceAccount.create
label: serviceAccount.create
description: "Specifies whether a ServiceAccount should be created"
type: boolean
required: true
default: true
show_subquestion_if: true
group: "RBAC Configuration"
subquestions:
- variable: serviceAccount.name
label: Name of the ServiceAccount (Optional)
description: "The name of the ServiceAccount to use. If not set and `serviceAccount.create` is true, a name is generated using the fullname template."
type: string
required: false
group: "RBAC Configuration"
{{- if (and (not .Values.agent.key) (and (not .Values.zone.name) (not .Values.cluster.name))) }}
##############################################################################
#### ERROR: You did not specify your secret agent key. ####
#### ERROR: You also did not specify a zone or name for this cluster. ####
##############################################################################
This agent deployment will be incomplete until you set your agent key and zone or name for this cluster:
helm upgrade {{ .Release.Name }} --reuse-values \
--set agent.key=$(YOUR_SECRET_AGENT_KEY) \
--set zone.name=$(YOUR_ZONE_NAME) stable/instana-agent
Alternatively, you may specify a cluster name and the zone will be detected from availability zone information on the host:
helm upgrade {{ .Release.Name }} --reuse-values \
--set agent.key=$(YOUR_SECRET_AGENT_KEY) \
--set cluster.name=$(YOUR_CLUSTER_NAME) stable/instana-agent
- YOUR_SECRET_AGENT_KEY can be obtained from the Management Portal section of your Instana installation.
- YOUR_ZONE_NAME should be the zone that detected technologies will be assigned to.
- YOUR_CLUSTER_NAME should be the custom name of your cluster.
At least one of zone.name or cluster.name is required. This cluster will be reported with the name of the zone unless you specify a cluster name.
{{- else if (and (not .Values.zone.name) (not .Values.cluster.name)) }}
##############################################################################
#### ERROR: You did not specify a zone or name for this cluster. ####
##############################################################################
This agent deployment will be incomplete until you set a zone for this cluster:
helm upgrade {{ .Release.Name }} --reuse-values \
--set zone.name=$(YOUR_ZONE_NAME) stable/instana-agent
Alternatively, you may specify a cluster name and the zone will be detected from availability zone information on the host:
helm upgrade {{ .Release.Name }} --reuse-values \
--set cluster.name=$(YOUR_CLUSTER_NAME) stable/instana-agent
- YOUR_ZONE_NAME should be the zone that detected technologies will be assigned to.
- YOUR_CLUSTER_NAME should be the custom name of your cluster.
At least one of zone.name or cluster.name is required. This cluster will be reported with the name of the zone unless you specify a cluster name.
{{- else if not .Values.agent.key }}
##############################################################################
#### ERROR: You did not specify your secret agent key. ####
##############################################################################
This agent deployment will be incomplete until you set your agent key:
helm upgrade {{ .Release.Name }} --reuse-values \
--set agent.key=$(YOUR_SECRET_AGENT_KEY) stable/instana-agent
- YOUR_SECRET_AGENT_KEY can be obtained from the Management Portal section of your Instana installation.
{{- else -}}
It may take a few moments for the agents to fully deploy. You can see what agents are running by listing resources in the {{ .Release.Namespace }} namespace:
kubectl get all -n {{ .Release.Namespace }}
You can get the logs for all of the agents with `kubectl logs`:
kubectl logs -l app.kubernetes.io/instance={{ .Release.Name }} -n {{ .Release.Namespace }} -c instana-agent
{{- end }}
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "instana-agent.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "instana-agent.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
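{{/*
For example, a release named "prod" yields the fullname "prod-instana-agent",
while a release named "instana-agent" already contains the chart name and so
collapses to just "instana-agent".
*/}}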
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "instana-agent.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
The name of the ServiceAccount used.
*/}}
{{- define "instana-agent.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "instana-agent.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
The name of the PodSecurityPolicy used.
*/}}
{{- define "instana-agent.podSecurityPolicyName" -}}
{{- if .Values.podSecurityPolicy.enable -}}
{{ default (include "instana-agent.fullname" .) .Values.podSecurityPolicy.name }}
{{- end -}}
{{- end -}}
{{/*
Add Helm metadata to resource labels.
*/}}
{{- define "instana-agent.commonLabels" -}}
app.kubernetes.io/name: {{ include "instana-agent.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ include "instana-agent.chart" . }}
{{- end -}}
{{/*
Add Helm metadata to selector labels specifically for deployments/daemonsets/statefulsets.
*/}}
{{- define "instana-agent.selectorLabels" -}}
app.kubernetes.io/name: {{ include "instana-agent.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{- if .Values.agent.key }}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "instana-agent.fullname" . }}-agent-secret
labels:
{{- include "instana-agent.commonLabels" . | nindent 4 }}
type: Opaque
data:
key: {{ .Values.agent.key | b64enc | quote }}
{{- end }}
{{- if .Values.rbac.create -}}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "instana-agent.fullname" . }}
labels:
{{- include "instana-agent.commonLabels" . | nindent 4 }}
rules:
- nonResourceURLs:
- "/version"
- "/healthz"
verbs: ["get"]
- apiGroups: ["batch"]
resources:
- "jobs"
verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
resources:
- "deployments"
- "replicasets"
- "ingresses"
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
resources:
- "deployments"
- "replicasets"
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- "namespaces"
- "events"
- "services"
- "endpoints"
- "nodes"
- "pods"
- "replicationcontrollers"
- "componentstatuses"
- "resourcequotas"
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- "endpoints"
verbs: ["create", "update", "patch"]
{{- if .Values.podSecurityPolicy.enable}}
- apiGroups: ["policy"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames:
- {{ template "instana-agent.podSecurityPolicyName" . }}
{{- end -}}
{{- end -}}
{{- if .Values.rbac.create -}}
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "instana-agent.fullname" . }}
labels:
{{- include "instana-agent.commonLabels" . | nindent 4 }}
subjects:
- kind: ServiceAccount
name: {{ template "instana-agent.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: {{ template "instana-agent.fullname" . }}
apiGroup: rbac.authorization.k8s.io
{{- end -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "instana-agent.fullname" . }}
labels:
{{- include "instana-agent.commonLabels" . | nindent 4 }}
data:
configuration.yaml: |
# Manual a-priori configuration. Configuration will only be used when the sensor
# is actually installed by the agent.
# The commented out example values represent example configuration and are not
# necessarily defaults. Defaults are usually 'absent' or mentioned separately.
# Changes are hot reloaded unless otherwise mentioned.
# It is possible to create files called 'configuration-abc.yaml' which are
# merged with this file in file system order. So 'configuration-cde.yaml' comes
# after 'configuration-abc.yaml'. Only nested structures are merged, values are
# overwritten by subsequent configurations.
# Secrets
# To filter sensitive data from collection by the agent, all sensors respect
# the following secrets configuration. If a key collected by a sensor matches
# an entry from the list, the value is redacted.
#com.instana.secrets:
# matcher: 'contains-ignore-case' # 'contains-ignore-case', 'contains', 'regex'
# list:
# - 'key'
# - 'password'
# - 'secret'
# Host
#com.instana.plugin.host:
# tags:
# - 'dev'
# - 'app1'
# Hardware & Zone
#com.instana.plugin.generic.hardware:
# enabled: true # disabled by default
# availability-zone: 'zone'
{{- if .Values.agent.configuration_yaml -}}
{{ .Values.agent.configuration_yaml | nindent 4 }}
{{- end }}
{{- if .Values.agent.key -}}
{{- if or .Values.zone.name .Values.cluster.name -}}
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ template "instana-agent.fullname" . }}
labels:
{{- include "instana-agent.commonLabels" . | nindent 4 }}
spec:
selector:
matchLabels:
{{- include "instana-agent.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "instana-agent.commonLabels" . | nindent 8 }}
{{- if .Values.agent.pod.annotations }}
annotations:
{{- toYaml .Values.agent.pod.annotations | nindent 8 }}
{{- end }}
spec:
serviceAccount: {{ template "instana-agent.serviceAccountName" . }}
hostIPC: true
hostNetwork: true
hostPID: true
containers:
- name: {{ template "instana-agent.name" . }}
image: "{{ .Values.agent.image.name }}:{{ .Values.agent.image.tag }}"
imagePullPolicy: {{ .Values.agent.image.pullPolicy }}
env:
- name: INSTANA_AGENT_LEADER_ELECTOR_PORT
value: {{ .Values.agent.leaderElectorPort | quote }}
- name: INSTANA_ZONE
value: {{ .Values.zone.name | quote }}
- name: INSTANA_KUBERNETES_CLUSTER_NAME
value: {{ .Values.cluster.name | quote }}
- name: INSTANA_AGENT_ENDPOINT
value: {{ .Values.agent.endpointHost | quote }}
- name: INSTANA_AGENT_ENDPOINT_PORT
value: {{ .Values.agent.endpointPort | quote }}
- name: INSTANA_AGENT_KEY
valueFrom:
secretKeyRef:
name: {{ template "instana-agent.fullname" . }}-agent-secret
key: key
{{- if .Values.agent.mode }}
- name: INSTANA_AGENT_MODE
value: {{ .Values.agent.mode | quote }}
{{- end }}
{{- if .Values.agent.downloadKey }}
- name: INSTANA_DOWNLOAD_KEY
valueFrom:
secretKeyRef:
name: {{ template "instana-agent.fullname" . }}-download-secret
key: key
{{- end }}
{{- if .Values.agent.proxyHost }}
- name: INSTANA_AGENT_PROXY_HOST
value: {{ .Values.agent.proxyHost | quote }}
{{- end }}
{{- if .Values.agent.proxyPort }}
- name: INSTANA_AGENT_PROXY_PORT
value: {{ .Values.agent.proxyPort | quote }}
{{- end }}
{{- if .Values.agent.proxyProtocol }}
- name: INSTANA_AGENT_PROXY_PROTOCOL
value: {{ .Values.agent.proxyProtocol | quote }}
{{- end }}
{{- if .Values.agent.proxyUser }}
- name: INSTANA_AGENT_PROXY_USER
value: {{ .Values.agent.proxyUser | quote }}
{{- end }}
{{- if .Values.agent.proxyPassword }}
- name: INSTANA_AGENT_PROXY_PASSWORD
value: {{ .Values.agent.proxyPassword | quote }}
{{- end }}
{{- if .Values.agent.proxyUseDNS }}
- name: INSTANA_AGENT_PROXY_USE_DNS
value: {{ .Values.agent.proxyUseDNS | quote }}
{{- end }}
{{- if .Values.agent.listenAddress }}
- name: INSTANA_AGENT_HTTP_LISTEN
value: {{ .Values.agent.listenAddress | quote }}
{{- end }}
{{- if .Values.agent.redactKubernetesSecrets }}
- name: INSTANA_KUBERNETES_REDACT_SECRETS
value: {{ .Values.agent.redactKubernetesSecrets | quote }}
{{- end }}
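# Size the agent JVM heap at one third of the pod memory request (512 MiB by default).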
- name: JAVA_OPTS
value: "-Xmx{{ div (default 512 .Values.agent.pod.requests.memory) 3 }}M -XX:+ExitOnOutOfMemoryError"
- name: INSTANA_AGENT_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
securityContext:
privileged: true
volumeMounts:
- name: dev
mountPath: /dev
- name: run
mountPath: /run
- name: var-run
mountPath: /var/run
- name: sys
mountPath: /sys
- name: var-log
mountPath: /var/log
- name: machine-id
mountPath: /etc/machine-id
- name: configuration
subPath: configuration.yaml
mountPath: /root/configuration.yaml
{{- if .Values.agent.host.repository }}
- name: repo
mountPath: /opt/instana/agent/data/repo
{{- end }}
livenessProbe:
httpGet:
path: /status
port: 42699
initialDelaySeconds: 75
periodSeconds: 5
resources:
requests:
memory: "{{ default 512 .Values.agent.pod.requests.memory }}Mi"
cpu: {{ default 0.5 .Values.agent.pod.requests.cpu }}
limits:
memory: "{{ default 512 .Values.agent.pod.limits.memory }}Mi"
cpu: {{ default 1.5 .Values.agent.pod.limits.cpu }}
ports:
- containerPort: 42699
- name: {{ template "instana-agent.name" . }}-leader-elector
image: instana/leader-elector:0.5.4
env:
- name: INSTANA_AGENT_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
command:
- "/app/server"
- "--election=instana"
- "--http=localhost:{{ default 42655 .Values.agent.leaderElectorPort }}"
- "--id=$(INSTANA_AGENT_POD_NAME)"
resources:
requests:
cpu: 0.1
memory: 64Mi
livenessProbe:
httpGet:
path: /status
port: 42699
initialDelaySeconds: 75
periodSeconds: 5
ports:
- containerPort: {{ .Values.agent.leaderElectorPort }}
{{- if .Values.agent.pod.tolerations }}
tolerations:
{{- toYaml .Values.agent.pod.tolerations | nindent 8 }}
{{- end }}
volumes:
- name: dev
hostPath:
path: /dev
- name: run
hostPath:
path: /run
- name: var-run
hostPath:
path: /var/run
- name: sys
hostPath:
path: /sys
- name: var-log
hostPath:
path: /var/log
- name: machine-id
hostPath:
path: /etc/machine-id
- name: configuration
configMap:
name: {{ template "instana-agent.fullname" . }}
{{- if .Values.agent.host.repository }}
- name: repo
hostPath:
path: {{ .Values.agent.host.repository }}
{{- end }}
{{- end -}}
{{- end -}}
{{- if .Values.agent.downloadKey }}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "instana-agent.fullname" . }}-download-secret
labels:
{{- include "instana-agent.commonLabels" . | nindent 4 }}
type: Opaque
data:
key: {{ .Values.agent.downloadKey | b64enc | quote }}
{{- end }}
{{- if .Values.rbac.create -}}
{{- if (and .Values.podSecurityPolicy.enable (not .Values.podSecurityPolicy.name)) -}}
kind: PodSecurityPolicy
apiVersion: policy/v1beta1
metadata:
name: {{ template "instana-agent.podSecurityPolicyName" . }}
labels:
{{- include "instana-agent.commonLabels" . | nindent 4 }}
spec:
privileged: true
allowPrivilegeEscalation: true
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- secret
- projected
- hostPath
allowedHostPaths:
- pathPrefix: "/dev"
readOnly: false
- pathPrefix: "/run"
readOnly: false
- pathPrefix: "/var/run"
readOnly: false
- pathPrefix: "/sys"
readOnly: false
- pathPrefix: "/var/log"
readOnly: false
- pathPrefix: "/etc/machine-id"
readOnly: false
{{- if .Values.agent.host.repository }}
- pathPrefix: {{ .Values.agent.host.repository }}
readOnly: false
{{- end }}
hostNetwork: true
hostPorts:
- min: 0
max: 65535
hostIPC: true
hostPID: true
runAsUser:
rule: "RunAsAny"
seLinux:
rule: "RunAsAny"
supplementalGroups:
rule: "RunAsAny"
fsGroup:
rule: "RunAsAny"
{{- end -}}
{{- end -}}
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "instana-agent.serviceAccountName" . }}
labels:
{{- include "instana-agent.commonLabels" . | nindent 4 }}
{{- end -}}
# name is the value which will be used as the base resource name for various resources associated with the agent.
# name: instana-agent
zone:
# zone.name is the custom zone that detected technologies will be assigned to
name: null
agent:
# agent.key is the secret token which your agent uses to authenticate to Instana's servers.
key: null
# agent.mode is used to set agent mode and it can be APM, INFRASTRUCTURE or AWS
# mode: APM
# agent.downloadKey is optional, if used it doesn't have to match agent.key
# downloadKey: null
# agent.listenAddress is the IP address the agent HTTP server will listen to.
# listenAddress: *
# agent.leaderElectorPort is the port on which the leader elector sidecar is exposed.
leaderElectorPort: 42655
# agent.endpointHost is the hostname of the Instana server your agents will connect to.
endpointHost: saas-us-west-2.instana.io
# agent.endpointPort is the port number (as a String) of the Instana server your agents will connect to.
endpointPort: 443
image:
# agent.image.name is the name of the container image of the Instana agent.
name: instana/agent
# agent.image.tag is the tag name of the agent container image.
tag: 1.0.17
# agent.image.pullPolicy specifies when to pull the image container.
pullPolicy: IfNotPresent
pod:
# agent.pod.annotations are additional annotations to be added to the agent pods.
annotations: {}
# agent.pod.tolerations are tolerations to influence agent pod assignment.
# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
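# For example, an illustrative toleration that also lets the agent run on tainted master nodes:
# tolerations:
#   - key: node-role.kubernetes.io/master
#     operator: Exists
#     effect: NoSchedule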
requests:
# agent.pod.requests.memory is the requested memory allocation in MiB for the agent pods.
memory: 512
# agent.pod.requests.cpu are the requested CPU units allocation for the agent pods.
cpu: 0.5
limits:
# agent.pod.limits.memory set the memory allocation limits in MiB for the agent pods.
memory: 512
# agent.pod.limits.cpu sets the CPU units allocation limits for the agent pods.
cpu: 1.5
# agent.proxyHost sets the INSTANA_AGENT_PROXY_HOST environment variable.
# proxyHost: null
# agent.proxyPort sets the INSTANA_AGENT_PROXY_PORT environment variable.
# proxyPort: null
# agent.proxyProtocol sets the INSTANA_AGENT_PROXY_PROTOCOL environment variable.
# proxyProtocol: null
# agent.proxyUser sets the INSTANA_AGENT_PROXY_USER environment variable.
# proxyUser: null
# agent.proxyPassword sets the INSTANA_AGENT_PROXY_PASSWORD environment variable.
# proxyPassword: null
# agent.proxyUseDNS sets the INSTANA_AGENT_PROXY_USE_DNS environment variable.
# proxyUseDNS: null
configuration_yaml: |
# Place agent configuration here
# agent.redactKubernetesSecrets sets the INSTANA_KUBERNETES_REDACT_SECRETS environment variable.
# redactKubernetesSecrets: null
# agent.host.repository sets a host path to be mounted as the agent maven repository (for debugging or development purposes)
host:
repository: null
rbac:
# Specifies whether RBAC resources should be created
create: true
serviceAccount:
# Specifies whether a ServiceAccount should be created
create: true
# The name of the ServiceAccount to use.
# If not set and `create` is true, a name is generated using the fullname template
# name: instana-agent
podSecurityPolicy:
# Specifies whether a PodSecurityPolicy should be authorized for the Instana Agent pods.
# Requires `rbac.create` to be `true` as well.
enable: false
# The name of an existing PodSecurityPolicy you would like to authorize for the Instana Agent pods.
# If not set and `enable` is true, a PodSecurityPolicy will be created with a name generated using the fullname template.
name: null
cluster:
# cluster.name represents the name that will be assigned to this cluster in Instana
name: null
 apiVersion: v1
-version: 1.1.0
+version: 1.2.0
 name: openebs
-appVersion: 1.1.0
+appVersion: 1.2.0
 description: Containerized Storage for Containers
 icon: https://raw.githubusercontent.com/cncf/artwork/master/projects/openebs/icon/color/openebs-icon-color.png
 home: http://www.openebs.io/
...
@@ -39,27 +39,33 @@ The following table lists the configurable parameters of the OpenEBS chart and t
 | ----------------------------------------| --------------------------------------------- | ----------------------------------------- |
 | `rbac.create`                            | Enable RBAC Resources                          | `true`                                     |
 | `image.pullPolicy`                       | Container pull policy                          | `IfNotPresent`                             |
+| `apiserver.enabled`                      | Enable API Server                              | `true`                                     |
 | `apiserver.image`                        | Image for API Server                           | `quay.io/openebs/m-apiserver`              |
-| `apiserver.imageTag`                     | Image Tag for API Server                       | `1.1.0`                                    |
+| `apiserver.imageTag`                     | Image Tag for API Server                       | `1.2.0`                                    |
 | `apiserver.replicas`                     | Number of API Server Replicas                  | `1`                                        |
 | `apiserver.sparse.enabled`               | Create Sparse Pool based on Sparsefile         | `false`                                    |
+| `provisioner.enabled`                    | Enable Provisioner                             | `true`                                     |
 | `provisioner.image`                      | Image for Provisioner                          | `quay.io/openebs/openebs-k8s-provisioner`  |
-| `provisioner.imageTag`                   | Image Tag for Provisioner                      | `1.1.0`                                    |
+| `provisioner.imageTag`                   | Image Tag for Provisioner                      | `1.2.0`                                    |
 | `provisioner.replicas`                   | Number of Provisioner Replicas                 | `1`                                        |
-| `localProvisioner.image`                 | Image for localProvisioner                     | `quay.io/openebs/provisioner-localpv`      |
-| `localProvisioner.imageTag`              | Image Tag for localProvisioner                 | `1.1.0`                                    |
-| `localProvisioner.replicas`              | Number of localProvisioner Replicas            | `1`                                        |
-| `localProvisioner.basePath`              | BasePath for hostPath volumes on Nodes         | `/var/openebs/local`                       |
-| `webhook.image`                          | Image for admision server                      | `quay.io/openebs/admission-server`         |
-| `webhook.imageTag`                       | Image Tag for admission server                 | `1.1.0`                                    |
+| `localprovisioner.enabled`               | Enable localProvisioner                        | `true`                                     |
+| `localprovisioner.image`                 | Image for localProvisioner                     | `quay.io/openebs/provisioner-localpv`      |
+| `localprovisioner.imageTag`              | Image Tag for localProvisioner                 | `1.2.0`                                    |
+| `localprovisioner.replicas`              | Number of localProvisioner Replicas            | `1`                                        |
+| `localprovisioner.basePath`              | BasePath for hostPath volumes on Nodes         | `/var/openebs/local`                       |
+| `webhook.enabled`                        | Enable admission server                        | `true`                                     |
+| `webhook.image`                          | Image for admission server                     | `quay.io/openebs/admission-server`         |
+| `webhook.imageTag`                       | Image Tag for admission server                 | `1.2.0`                                    |
 | `webhook.replicas`                       | Number of admission server Replicas            | `1`                                        |
+| `snapshotOperator.enabled`               | Enable Snapshot Provisioner                    | `true`                                     |
 | `snapshotOperator.provisioner.image`     | Image for Snapshot Provisioner                 | `quay.io/openebs/snapshot-provisioner`     |
-| `snapshotOperator.provisioner.imageTag`  | Image Tag for Snapshot Provisioner             | `1.1.0`                                    |
+| `snapshotOperator.provisioner.imageTag`  | Image Tag for Snapshot Provisioner             | `1.2.0`                                    |
 | `snapshotOperator.controller.image`      | Image for Snapshot Controller                  | `quay.io/openebs/snapshot-controller`      |
-| `snapshotOperator.controller.imageTag`   | Image Tag for Snapshot Controller              | `1.1.0`                                    |
+| `snapshotOperator.controller.imageTag`   | Image Tag for Snapshot Controller              | `1.2.0`                                    |
 | `snapshotOperator.replicas`              | Number of Snapshot Operator Replicas           | `1`                                        |
+| `ndm.enabled`                            | Enable Node Disk Manager                       | `true`                                     |
 | `ndm.image`                              | Image for Node Disk Manager                    | `quay.io/openebs/node-disk-manager-amd64`  |
-| `ndm.imageTag`                           | Image Tag for Node Disk Manager                | `v0.4.1`                                   |
+| `ndm.imageTag`                           | Image Tag for Node Disk Manager                | `v0.4.2`                                   |
 | `ndm.sparse.path`                        | Directory where Sparse files are created       | `/var/openebs/sparse`                      |
 | `ndm.sparse.size`                        | Size of the sparse file in bytes               | `10737418240`                              |
 | `ndm.sparse.count`                       | Number of sparse files to be created           | `1`                                        |
@@ -68,21 +74,23 @@ The following table lists the configurable parameters of the OpenEBS chart and t
 | `ndm.filters.includePaths`               | Include devices with specified path patterns   | `""`                                       |
 | `ndm.filters.excludePaths`               | Exclude devices with specified path patterns   | `loop,fd0,sr0,/dev/ram,/dev/dm-,/dev/md`   |
 | `ndm.probes.enableSeachest`              | Enable Seachest probe for NDM                  | `false`                                    |
+| `ndmOperator.enabled`                    | Enable NDM Operator                            | `true`                                     |
 | `ndmOperator.image`                      | Image for NDM Operator                         | `quay.io/openebs/node-disk-operator-amd64` |
-| `ndmOperator.imageTag`                   | Image Tag for NDM Operator                     | `v0.4.1`                                   |
+| `ndmOperator.imageTag`                   | Image Tag for NDM Operator                     | `v0.4.2`                                   |
 | `jiva.image`                             | Image for Jiva                                 | `quay.io/openebs/jiva`                     |
-| `jiva.imageTag`                          | Image Tag for Jiva                             | `1.1.0`                                    |
+| `jiva.imageTag`                          | Image Tag for Jiva                             | `1.2.0`                                    |
 | `jiva.replicas`                          | Number of Jiva Replicas                        | `3`                                        |
+| `jiva.defaultStoragePath`                | hostpath used by default Jiva StorageClass     | `/var/openebs`                             |
 | `cstor.pool.image`                       | Image for cStor Pool                           | `quay.io/openebs/cstor-pool`               |
-| `cstor.pool.imageTag`                    | Image Tag for cStor Pool                       | `1.1.0`                                    |
+| `cstor.pool.imageTag`                    | Image Tag for cStor Pool                       | `1.2.0`                                    |
 | `cstor.poolMgmt.image`                   | Image for cStor Pool Management                | `quay.io/openebs/cstor-pool-mgmt`          |
-| `cstor.poolMgmt.imageTag`                | Image Tag for cStor Pool Management            | `1.1.0`                                    |
+| `cstor.poolMgmt.imageTag`                | Image Tag for cStor Pool Management            | `1.2.0`                                    |
 | `cstor.target.image`                     | Image for cStor Target                         | `quay.io/openebs/cstor-istgt`              |
-| `cstor.target.imageTag`                  | Image Tag for cStor Target                     | `1.1.0`                                    |
+| `cstor.target.imageTag`                  | Image Tag for cStor Target                     | `1.2.0`                                    |
 | `cstor.volumeMgmt.image`                 | Image for cStor Volume Management              | `quay.io/openebs/cstor-volume-mgmt`        |
-| `cstor.volumeMgmt.imageTag`              | Image Tag for cStor Volume Management          | `1.1.0`                                    |
+| `cstor.volumeMgmt.imageTag`              | Image Tag for cStor Volume Management          | `1.2.0`                                    |
 | `policies.monitoring.image`              | Image for Prometheus Exporter                  | `quay.io/openebs/m-exporter`               |
-| `policies.monitoring.imageTag`           | Image Tag for Prometheus Exporter              | `1.1.0`                                    |
+| `policies.monitoring.imageTag`           | Image Tag for Prometheus Exporter              | `1.2.0`                                    |
 | `analytics.enabled`                      | Enable sending stats to Google Analytics       | `true`                                     |
 | `analytics.pingInterval`                 | Duration(hours) between sending ping stat      | `24h`                                      |
 | `defaultStorageConfig.enabled`           | Enable default storage class installation      | `true`                                     |
...
@@ -13,9 +13,18 @@ rules:
     resources: ["nodes", "nodes/proxy"]
     verbs: ["*"]
   - apiGroups: ["*"]
-    resources: ["namespaces", "services", "pods", "deployments", "events", "endpoints", "configmaps", "jobs"]
+    resources: ["namespaces", "services", "pods", "pods/exec", "deployments", "replicationcontrollers", "replicasets", "events", "endpoints", "configmaps", "secrets", "jobs", "cronjobs" ]
     verbs: ["*"]
   - apiGroups: ["*"]
+    resources: ["statefulsets", "daemonsets"]
+    verbs: ["*"]
+  - apiGroups: ["*"]
+    resources: ["resourcequotas", "limitranges"]
+    verbs: ["list", "watch"]
+  - apiGroups: ["*"]
+    resources: ["ingresses", "horizontalpodautoscalers", "verticalpodautoscalers", "poddisruptionbudgets", "certificatesigningrequests"]
+    verbs: ["list", "watch"]
+  - apiGroups: ["*"]
     resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"]
     verbs: ["*"]
   - apiGroups: ["volumesnapshot.external-storage.k8s.io"]
@@ -28,7 +37,7 @@ rules:
     resources: [ "disks", "blockdevices", "blockdeviceclaims"]
     verbs: ["*" ]
   - apiGroups: ["*"]
-    resources: [ "storagepoolclaims", "storagepoolclaims/finalizers","storagepools"]
+    resources: [ "cstorpoolclusters", "storagepoolclaims", "storagepoolclaims/finalizers", "cstorpoolclusters/finalizers", "storagepools"]
     verbs: ["*" ]
   - apiGroups: ["*"]
     resources: [ "castemplates", "runtasks"]
@@ -37,6 +46,9 @@ rules:
     resources: [ "cstorpools", "cstorpools/finalizers", "cstorvolumereplicas", "cstorvolumes", "cstorvolumeclaims"]
     verbs: ["*" ]
   - apiGroups: ["*"]
+    resources: [ "cstorpoolinstances", "cstorpoolinstances/finalizers"]
+    verbs: ["*" ]
+  - apiGroups: ["*"]
     resources: [ "cstorbackups", "cstorrestores", "cstorcompletedbackups"]
     verbs: ["*" ]
   - apiGroups: ["*"]
...
+{{- if .Values.ndm.enabled }}
 # This is the node-disk-manager related config.
 # It can be used to customize the disks probes and filters
 apiVersion: v1
@@ -10,9 +11,10 @@ metadata:
     release: {{ .Release.Name }}
     heritage: {{ .Release.Service }}
     component: ndm-config
+    openebs.io/component-name: ndm-config
 data:
   # udev-probe is default or primary probe which should be enabled to run ndm
-  # filterconfigs contails configs of filters - in ther form fo include
+  # filterconfigs contains configs of filters - in the form of include
   # and exclude comma separated strings
   node-disk-manager.config: |
     probeconfigs:
@@ -21,7 +23,7 @@ data:
         state: true
       - key: seachest-probe
         name: seachest probe
-        state: true
+        state: {{ .Values.ndm.probes.enableSeachest }}
      - key: smart-probe
        name: smart probe
        state: true
@@ -41,3 +43,4 @@ data:
       include: "{{ .Values.ndm.filters.includePaths }}"
       exclude: "{{ .Values.ndm.filters.excludePaths }}"
 ---
+{{- end }}
-apiVersion: extensions/v1beta1
+{{- if .Values.ndm.enabled }}
+apiVersion: apps/v1
 kind: DaemonSet
 metadata:
   name: {{ template "openebs.fullname" . }}-ndm
@@ -8,6 +9,8 @@ metadata:
     release: {{ .Release.Name }}
     heritage: {{ .Release.Service }}
     component: ndm
+    openebs.io/component-name: ndm
+    openebs.io/version: {{ .Values.release.version }}
 spec:
   updateStrategy:
     type: "RollingUpdate"
@@ -35,6 +38,8 @@ spec:
         securityContext:
           privileged: true
         env:
+        # namespace in which NDM is installed will be passed to NDM Daemonset
+        # as environment variable
         - name: NAMESPACE
           valueFrom:
             fieldRef:
@@ -119,3 +124,8 @@ spec:
       nodeSelector:
{{ toYaml .Values.ndm.nodeSelector | indent 8 }}
      {{- end }}
+      {{- if .Values.ndm.tolerations }}
+      tolerations:
+{{ toYaml .Values.ndm.tolerations | indent 8 }}
+      {{- end }}
+{{- end }}
+{{- if .Values.webhook.enabled }}
 apiVersion: apps/v1
 kind: Deployment
 metadata:
@@ -8,8 +9,13 @@ metadata:
     release: {{ .Release.Name }}
     heritage: {{ .Release.Service }}
     component: admission-webhook
+    openebs.io/component-name: admission-webhook
+    openebs.io/version: {{ .Values.release.version }}
 spec:
   replicas: {{ .Values.webhook.replicas }}
+  strategy:
+    type: "Recreate"
+    rollingUpdate: null
   selector:
     matchLabels:
       app: admission-webhook
@@ -18,6 +24,7 @@ spec:
       labels:
         app: admission-webhook
         name: admission-webhook
+        release: {{ .Release.Name }}
         openebs.io/version: {{ .Values.release.version }}
         openebs.io/component-name: admission-webhook
     spec:
@@ -52,3 +59,4 @@ spec:
       - name: webhook-certs
        secret:
          secretName: admission-server-certs
+{{- end }}
+{{- if .Values.localprovisioner.enabled }}
 apiVersion: apps/v1
 kind: Deployment
 metadata:
@@ -9,8 +10,12 @@ metadata:
     heritage: {{ .Release.Service }}
     component: localpv-provisioner
     openebs.io/component-name: openebs-localpv-provisioner
+    openebs.io/version: {{ .Values.release.version }}
 spec:
-  replicas: {{ .Values.provisioner.replicas }}
+  replicas: {{ .Values.localprovisioner.replicas }}
+  strategy:
+    type: "Recreate"
+    rollingUpdate: null
   selector:
     matchLabels:
       app: {{ template "openebs.name" . }}
@@ -78,3 +83,4 @@ spec:
       affinity:
{{ toYaml .Values.localprovisioner.affinity | indent 8 }}
      {{- end }}
+{{- end }}
+{{- if .Values.apiserver.enabled }}
 apiVersion: apps/v1
 kind: Deployment
 metadata:
@@ -13,6 +14,9 @@ metadata:
     openebs.io/version: {{ .Values.release.version }}
 spec:
   replicas: {{ .Values.apiserver.replicas }}
+  strategy:
+    type: "Recreate"
+    rollingUpdate: null
   selector:
     matchLabels:
       app: {{ template "openebs.name" . }}
@@ -45,17 +49,6 @@ spec:
         # This is supported for maya api server version 0.5.2 onwards
         #- name: OPENEBS_IO_K8S_MASTER
        #  value: "http://172.28.128.3:8080"
-        # OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL decides whether default cstor sparse pool should be
-        # configured as a part of openebs installation.
-        # If "true" a default cstor sparse pool will be configured, if "false" it will not be configured.
-        - name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL
-          value: "{{ .Values.apiserver.sparse.enabled }}"
-        - name: OPENEBS_IO_CSTOR_POOL_SPARSE_DIR
-          value: "{{ .Values.ndm.sparse.path }}"
-        - name: OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
-          value: "{{ .Values.defaultStorageConfig.enabled }}"
-        - name: OPENEBS_IO_CSTOR_TARGET_DIR
-          value: "{{ .Values.ndm.sparse.path }}"
         # OPENEBS_NAMESPACE provides the namespace of this deployment as an
         # environment variable
         - name: OPENEBS_NAMESPACE
@@ -74,6 +67,44 @@ spec:
           valueFrom:
             fieldRef:
               fieldPath: metadata.name
+        # If OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG is false then OpenEBS default
+        # storageclass and storagepool will not be created.
+        - name: OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
+          value: "{{ .Values.defaultStorageConfig.enabled }}"
+        # OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL decides whether default cstor sparse pool should be
+        # configured as a part of openebs installation.
+        # If "true" a default cstor sparse pool will be configured, if "false" it will not be configured.
+        # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
+        # is set to true
+        - name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL
+          value: "{{ .Values.apiserver.sparse.enabled }}"
+        # OPENEBS_IO_CSTOR_TARGET_DIR can be used to specify the hostpath
+        # to be used for saving the shared content between the side cars
+        # of cstor volume pod.
+        # The default path used is /var/openebs/sparse
+        - name: OPENEBS_IO_CSTOR_TARGET_DIR
+          value: "{{ .Values.ndm.sparse.path }}"
+        # OPENEBS_IO_CSTOR_POOL_SPARSE_DIR can be used to specify the hostpath
+        # to be used for saving the shared content between the side cars
+        # of cstor pool pod. This ENV is also used to indicate the location
+        # of the sparse devices.
+        # The default path used is /var/openebs/sparse
+        - name: OPENEBS_IO_CSTOR_POOL_SPARSE_DIR
+          value: "{{ .Values.ndm.sparse.path }}"
+        # OPENEBS_IO_JIVA_POOL_DIR can be used to specify the hostpath
+        # to be used for default Jiva StoragePool loaded by OpenEBS
+        # The default path used is /var/openebs
+        # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
+        # is set to true
+        - name: OPENEBS_IO_JIVA_POOL_DIR
+          value: "{{ .Values.jiva.defaultStoragePath }}"
+        # OPENEBS_IO_LOCALPV_HOSTPATH_DIR can be used to specify the hostpath
+        # to be used for default openebs-hostpath storageclass loaded by OpenEBS
+        # The default path used is /var/openebs/local
+        # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
+        # is set to true
+        - name: OPENEBS_IO_LOCALPV_HOSTPATH_DIR
+          value: "{{ .Values.localprovisioner.basePath }}"
         - name: OPENEBS_IO_JIVA_CONTROLLER_IMAGE
           value: "{{ .Values.jiva.image }}:{{ .Values.jiva.imageTag }}"
         - name: OPENEBS_IO_JIVA_REPLICA_IMAGE
@@ -121,3 +152,4 @@ spec:
       affinity:
{{ toYaml .Values.apiserver.affinity | indent 8 }}
      {{- end }}
+{{- end }}
+{{- if .Values.provisioner.enabled }}
 apiVersion: apps/v1
 kind: Deployment
 metadata:
@@ -8,8 +9,14 @@ metadata:
     release: {{ .Release.Name }}
     heritage: {{ .Release.Service }}
     component: provisioner
+    name: openebs-provisioner
+    openebs.io/component-name: openebs-provisioner
+    openebs.io/version: {{ .Values.release.version }}
 spec:
   replicas: {{ .Values.provisioner.replicas }}
+  strategy:
+    type: "Recreate"
+    rollingUpdate: null
   selector:
     matchLabels:
       app: {{ template "openebs.name" . }}
@@ -81,3 +88,4 @@ spec:
       affinity:
{{ toYaml .Values.provisioner.affinity | indent 8 }}
      {{- end }}
+{{- end }}
{{- if .Values.snapshotOperator.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -8,6 +9,8 @@ metadata:
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: snapshot-operator
openebs.io/component-name: openebs-snapshot-operator
openebs.io/version: {{ .Values.release.version }}
spec:
replicas: {{ .Values.snapshotOperator.replicas }}
selector:
@@ -15,7 +18,8 @@ spec:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
strategy:
type: "Recreate"
rollingUpdate: null
template:
metadata:
labels:
@@ -110,3 +114,4 @@ spec:
affinity:
{{ toYaml .Values.snapshotOperator.affinity | indent 8 }}
{{- end }}
{{- end }}
{{- if .Values.ndmOperator.enabled }}
---
apiVersion: apps/v1
kind: Deployment
@@ -10,11 +11,13 @@ metadata:
heritage: {{ .Release.Service }}
component: ndm-operator
openebs.io/component-name: ndm-operator
openebs.io/version: {{ .Values.release.version }}
name: ndm-operator
spec:
replicas: {{ .Values.ndmOperator.replicas }}
strategy:
type: "Recreate"
rollingUpdate: null
selector:
matchLabels:
app: {{ template "openebs.name" . }}
@@ -56,10 +59,11 @@ spec:
- name: CLEANUP_JOB_IMAGE
value: "{{ .Values.ndmOperator.cleanupImage }}:{{ .Values.ndmOperator.cleanupImageTag }}"
{{- if .Values.ndmOperator.nodeSelector }}
nodeSelector:
{{ toYaml .Values.ndmOperator.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.ndmOperator.tolerations }}
tolerations:
{{ toYaml .Values.ndmOperator.tolerations | indent 8 }}
{{- end }}
{{- end }}
{{- if .Values.webhook.enabled }}
apiVersion: v1
kind: Service
metadata:
@@ -7,9 +8,11 @@ metadata:
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
openebs.io/component-name: admission-webhook-svc
spec:
ports:
- port: 443
targetPort: 443
selector:
app: admission-webhook
{{- end }}
{{- if .Values.apiserver.enabled }}
apiVersion: v1
kind: Service
metadata:
@@ -7,6 +8,7 @@ metadata:
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
openebs.io/component-name: maya-apiserver-svc
spec:
ports:
- name: api
@@ -18,3 +20,4 @@ spec:
release: {{ .Release.Name }}
component: apiserver
sessionAffinity: None
{{- end }}
{{- if .Values.webhook.enabled }}
{{- $ca := genCA "admission-server-ca" 3650 }}
{{- $cn := printf "admission-server-svc" }}
{{- $altName1 := printf "admission-server-svc.%s" .Release.Namespace }}
@@ -18,7 +19,7 @@ metadata:
webhooks:
# failurePolicy Fail means that an error calling the webhook causes the admission to fail.
- name: admission-webhook.openebs.io
failurePolicy: {{ .Values.webhook.failurePolicy }}
clientConfig:
service:
name: admission-server-svc
@@ -34,6 +35,10 @@ webhooks:
apiGroups: ["*"]
apiVersions: ["*"]
resources: ["persistentvolumeclaims"]
- operations: [ "CREATE", "UPDATE" ]
apiGroups: ["*"]
apiVersions: ["*"]
resources: ["cstorpoolclusters"]
---
apiVersion: v1
kind: Secret
@@ -55,3 +60,4 @@ data:
cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ3VENDQXRXZ0F3SUJBZ0lVYk84NS9JR0ZXYTA2Vm11WVdTWjdxaTUybmRRd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1hERUxNQWtHQTFVRUJoTUNlSGd4Q2pBSUJnTlZCQWdNQVhneENqQUlCZ05WQkFjTUFYZ3hDakFJQmdOVgpCQW9NQVhneENqQUlCZ05WQkFzTUFYZ3hDekFKQmdOVkJBTU1BbU5oTVJBd0RnWUpLb1pJaHZjTkFRa0JGZ0Y0Ck1CNFhEVEU1TURNd01qQTNNek13TUZvWERUSXdNRE13TVRBM01qYzFNbG93S3pFcE1DY0dBMVVFQXhNZ1lXUnQKYVhOemFXOXVMWE5sY25abGNpMXpkbU11YjNCbGJtVmljeTV6ZG1Nd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQQpBNElCRHdBd2dnRUtBb0lCQVFERk5MRE1xKzd6eFZidDNPcnFhaVUyOFB6K25ZeFRCblA0NVhFWGFjSUpPWG1aClM1c2ZjMjM3WVNWS0I5Tlp4cXNYT08wcXpWb0xtNlZ0UDJjREpWZGZIVUQ0QXBZSC94UVBVTktrcFg3K0NVTFEKZ3VBNWowOXozdkFaeDJidXBTaXFFdE1mVldqNkh5V0Jyd2FuZW9IaVVXVVdpbmtnUXpCQzR1SWtiRkE2djYrZwp4ZzAwS09TY2NFRWY3eU5McjBvejBKVHRpRm1aS1pVVVBwK3N3WTRpRTZ3RER5bVVnTmY4SW8wUEExVkQ1TE9vCkFwQ0l2WDJyb1RNd3VkR1VrZUc1VTA2OWIrMWtQMEJsUWdDZk9TQTBmZEN3Snp0aWE1aHpaUlVIWGxFOVArN0kKekgyR0xXeHh1aHJPTlFmT25HcVRiUE13UmowekZIdmcycUo1azJ2VkFnTUJBQUdqZ2Rjd2dkUXdEZ1lEVlIwUApBUUgvQkFRREFnV2dNQk1HQTFVZEpRUU1NQW9HQ0NzR0FRVUZCd01CTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEClZSME9CQllFRklnOVFSOSsyVW12THQwQXY4MlYwZml0bU81WE1COEdBMVVkSXdRWU1CYUFGTU5HNkZ4aUxhYWYKMWV3bDVEN3VJcmIrRStIT01GOEdBMVVkRVFSWU1GYUNGR0ZrYldsemMybHZiaTF6WlhKMlpYSXRjM1pqZ2h4aApaRzFwYzNOcGIyNHRjMlZ5ZG1WeUxYTjJZeTV2Y0dWdVpXSnpnaUJoWkcxcGMzTnBiMjR0YzJWeWRtVnlMWE4yCll5NXZjR1Z1WldKekxuTjJZekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBSlpJRzd2d0RYaWxhWUFCS1Brc0oKZVJtdml4ZnYybTRVTVdzdlBKVVVJTXhHbzhtc1J6aWhBRjVuTExzaURKRDl4MjhraXZXaGUwbWE4aWVHYjY5Sgp1U1N4bys0OStaV3NVaTB3UlRDMi9ZWGlkWS9xNDU2c1g4ck9qQURDZlFUcFpYc2ZyekVWa2Q4NE0zdU5GTmhnCnMyWmxJMnNDTWljYXExNWxIWEh3akFkY2FqZit1VklwOXNHUElsMUhmZFcxWVFLc0NoU3dhdi80NUZJcFlMSVYKM3hiS2ZIbmh2czhJck5ZbTVIenAvVVdvcFN1Tm5tS1IwWGo3cXpGcllUYzV3eHZ3VVZrKzVpZFFreWMwZ0RDcApGbkFVdEdmaUVUQnBhU3pISjQ4STZqUFpneVE0NzlZMmRxRUtXcWtyc0RkZ2tVcXlnNGlQQ0YwWC9YVU9YU3VGClNnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
key.pem: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeFRTd3pLdnU4OFZXN2R6cTZtb2xOdkQ4L3AyTVV3WnorT1Z4RjJuQ0NUbDVtVXViCkgzTnQrMkVsU2dmVFdjYXJGemp0S3MxYUM1dWxiVDluQXlWWFh4MUErQUtXQi84VUQxRFNwS1YrL2dsQzBJTGcKT1k5UGM5N3dHY2RtN3FVb3FoTFRIMVZvK2g4bGdhOEdwM3FCNGxGbEZvcDVJRU13UXVMaUpHeFFPcit2b01ZTgpOQ2prbkhCQkgrOGpTNjlLTTlDVTdZaFptU21WRkQ2ZnJNR09JaE9zQXc4cGxJRFgvQ0tORHdOVlErU3pxQUtRCmlMMTlxNkV6TUxuUmxKSGh1Vk5PdlcvdFpEOUFaVUlBbnprZ05IM1FzQ2M3WW11WWMyVVZCMTVSUFQvdXlNeDkKaGkxc2Nib2F6alVIenB4cWsyenpNRVk5TXhSNzROcWllWk5yMVFJREFRQUJBb0lCQVFDcXRIT2VsKzRlUWVKLwp3RTN4WUxTYUhIMURnZWxvTFJ2U2hmb2hSRURjYjA0ZExsODNHRnBKMGN2UGkzcWVLZVVNRXhEcGpoeTJFNk5kCk1CYmhtRDlMYkMxREFpb1EvZkxGVnpjZm9zcU02RU5YN3hKZGdQcEwyTjJKMHh2ODFDYWhJZTV6SHlIaDhYZ3MKQysvOHBZVXMvVHcrQ052VTI1UTVNZUNEbXViUUVuemJqQ3lIQm5SVmw1dVF6bk8zWEt2NEVyejdBT1BBWmFJTQozYmNFNC83c1JGczM4SE1aMVZTZ2JxUi9rM1N5SEFzNXhNWHVtY0hMMTBkK0FVK21BQ0svUThpdWJHMm9kNnJiCko3S0RONmFuUzRPZk4zZ3RtaEppN3ZsTjJVL3JycHdnblI0d3Y0bmV4U1ZlamYzQU9iaU9jNnYzZ0xJbXJ2Q3oKNzFETDFPaTVBb0dCQU9HeFp2RWFUSFFnNFdaQVJZbXlGZEtZeXY2MURDc1JycElmUlh3Q1YrcnBZTFM2NlV4SQprWHJISlNreWFqTjNTOXVsZUtUTXRWaU5wY2JCcjVNZ0lOaFFvdThRc2dpZlZHWFJGQ3d0OXJ3MGNDbEc1Y2pCClZ3bUQzYWFBTGR5WVQvbHc4dnk1Zndqc1hFZHd1OEQ2cC9rd0ZzMmlwZWQ4QVFPUVZlQ1dPeXF6QW9HQkFOK3YKL2VxKzZ5NHhPZ2ZtQ01KcHJ0THBBN1J0M3FsU0JKbEw3RkNsQXRCeUUxazBPTVIrZTdhSDBVTDdYWVR4YlBLOApBYnRZR3lzWDkydGM3RHlaU0k0cDFjUHhvcHdzNkt3N0RYZUt0YTNnVkRmSXVuZ3haR25XWjk2WmNjcEhyVzgyCnl5OTk5dTQ2WE1tQWZwSzEvbGxjdGdLem5FUVp5ZkhEUmlWdVVQTlhBb0dCQUxkMGxORDNKNTVkKzlvNTlFeHgKVGZ2WjUyZ1Rrc2lQbnU5NEsrc1puSTEvRnZUUjJrSC8yd0dLVDFLbGdGNUZZb3d3ZlZpNGJkQ0ZrM04walZ0eQppa0JMaTZYNFZEOWVCQ1NmUjE2Q0hrWHQraDRUVzBWTW80dEFmVE9TamJUNnVrZHc0Sk05MVYxVGc4OHVlKy9wCjBCQm1YcUxZeXpMWFFadTcvNUtIaTZDeEFvR0FaTWV2R0E5eWVEcFhrZTF6THR4Y2xzdkREb3lkMEIyUzB0cGgKR3lodEx5cm1Tcjk3Z0JRWWV2R1FONlIyeXduVzh6bi9jYi9OWmNvRGdFeTZac2NNNkhneXhuaGNzZzZOdWVOVgpPdkcwenlVTjdLQTBXeWl0dS8yTWlMOExoSDVzeG5taWE4Qk4rNkV4NHR0UXE1cnhnS09Eb1kzNHJyb0x3VEFnCnI0YVhWRHNDZ1lBYnRwZXhvNTJ4VmJkTzZCL3B5RUU2cEJCS1FkK3hiVkJNMDZwUzArSlFudSt5SVBmeXFhekwKbGdYTEhBSm01bU9Sb2RFRHk0WlVJRkM5RmhraGcrV0ZzSHJCOXpGU1IrZFc2Uzg1eFA4ZGxHVE42S2cydXJNQQowNTRCQUh4RWhPNU9QblNqT0VHSmQwYTdGQmc1UlkxN0RRQlFxV25SZENURHlDWmU0OStLcWc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
{{- end }}
{{- end }}
@@ -12,14 +12,15 @@ serviceAccount:
release:
# "openebs.io/version" label for control plane components
version: "1.2.0"
image:
pullPolicy: IfNotPresent
apiserver:
enabled: true
image: "quay.io/openebs/m-apiserver" image: "quay.io/openebs/m-apiserver"
imageTag: "1.1.0" imageTag: "1.2.0"
replicas: 1 replicas: 1
ports: ports:
externalPort: 5656 externalPort: 5656
...@@ -37,8 +38,9 @@ defaultStorageConfig: ...@@ -37,8 +38,9 @@ defaultStorageConfig:
enabled: "true" enabled: "true"
provisioner: provisioner:
enabled: true
image: "quay.io/openebs/openebs-k8s-provisioner" image: "quay.io/openebs/openebs-k8s-provisioner"
imageTag: "1.1.0" imageTag: "1.2.0"
replicas: 1 replicas: 1
nodeSelector: {} nodeSelector: {}
tolerations: [] tolerations: []
...@@ -48,8 +50,9 @@ provisioner: ...@@ -48,8 +50,9 @@ provisioner:
periodSeconds: 60 periodSeconds: 60
localprovisioner: localprovisioner:
enabled: true
image: "quay.io/openebs/provisioner-localpv" image: "quay.io/openebs/provisioner-localpv"
imageTag: "1.1.0" imageTag: "1.2.0"
helperImage: "quay.io/openebs/openebs-tools" helperImage: "quay.io/openebs/openebs-tools"
helperImageTag: "3.8" helperImageTag: "3.8"
replicas: 1 replicas: 1
...@@ -62,12 +65,13 @@ localprovisioner: ...@@ -62,12 +65,13 @@ localprovisioner:
periodSeconds: 60 periodSeconds: 60
snapshotOperator: snapshotOperator:
enabled: true
controller:
image: "quay.io/openebs/snapshot-controller"
imageTag: "1.2.0"
provisioner:
image: "quay.io/openebs/snapshot-provisioner"
imageTag: "1.2.0"
replicas: 1
upgradeStrategy: "Recreate"
nodeSelector: {}
@@ -78,8 +82,9 @@ snapshotOperator:
periodSeconds: 60
ndm:
enabled: true
image: "quay.io/openebs/node-disk-manager-amd64" image: "quay.io/openebs/node-disk-manager-amd64"
imageTag: "v0.4.1" imageTag: "v0.4.2"
sparse: sparse:
path: "/var/openebs/sparse" path: "/var/openebs/sparse"
size: "10737418240" size: "10737418240"
...@@ -91,13 +96,15 @@ ndm: ...@@ -91,13 +96,15 @@ ndm:
probes: probes:
enableSeachest: false enableSeachest: false
nodeSelector: {} nodeSelector: {}
tolerations: []
healthCheck:
initialDelaySeconds: 30
periodSeconds: 60
ndmOperator:
enabled: true
image: "quay.io/openebs/node-disk-operator-amd64" image: "quay.io/openebs/node-disk-operator-amd64"
imageTag: "v0.4.1" imageTag: "v0.4.2"
replicas: 1 replicas: 1
upgradeStrategy: Recreate upgradeStrategy: Recreate
nodeSelector: {} nodeSelector: {}
...@@ -110,9 +117,11 @@ ndmOperator: ...@@ -110,9 +117,11 @@ ndmOperator:
cleanupImageTag: "3.9" cleanupImageTag: "3.9"
webhook: webhook:
enabled: true
image: "quay.io/openebs/admission-server" image: "quay.io/openebs/admission-server"
imageTag: "1.1.0" imageTag: "1.2.0"
generateTLS: true generateTLS: true
failurePolicy: Ignore
replicas: 1
nodeSelector: {}
tolerations: []
@@ -120,28 +129,29 @@ webhook:
jiva:
image: "quay.io/openebs/jiva"
imageTag: "1.2.0"
replicas: 3
defaultStoragePath: "/var/openebs"
cstor:
pool:
image: "quay.io/openebs/cstor-pool"
imageTag: "1.2.0"
poolMgmt:
image: "quay.io/openebs/cstor-pool-mgmt"
imageTag: "1.2.0"
target:
image: "quay.io/openebs/cstor-istgt"
imageTag: "1.2.0"
volumeMgmt:
image: "quay.io/openebs/cstor-volume-mgmt"
imageTag: "1.2.0"
policies:
monitoring:
enabled: true
image: "quay.io/openebs/m-exporter"
imageTag: "1.2.0"
analytics:
enabled: true
apiVersion: v1
version: 1.1.0
name: openebs
appVersion: 1.1.0
description: Containerized Storage for Containers
icon: https://raw.githubusercontent.com/cncf/artwork/master/projects/openebs/icon/color/openebs-icon-color.png
home: http://www.openebs.io/
keywords:
- cloud-native-storage
- block-storage
- iSCSI
- storage
sources:
- https://github.com/openebs/openebs
maintainers:
- name: kmova
email: kiran.mova@openebs.io
- name: prateekpandey14
email: prateek.pandey@openebs.io
OpenEBS
=======
[OpenEBS](https://github.com/openebs/openebs) is an open source storage platform that provides persistent and containerized block storage for DevOps and container environments.
OpenEBS can be deployed on any Kubernetes cluster - either in the cloud, on-premises, or on a developer laptop (minikube). OpenEBS itself is deployed as just another container on your cluster, and enables storage services that can be designated on a per-pod, application, cluster, or container level.
Introduction
------------
This chart bootstraps OpenEBS deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
## Prerequisites
- Kubernetes 1.10+ with RBAC enabled
- iSCSI PV support in the underlying infrastructure (a quick check is sketched below)
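As a quick, illustrative way to confirm the iSCSI prerequisite on a node, you can check for the initiator; this sketch assumes a Debian/Ubuntu host with systemd, so the package and service names (`open-iscsi`, `iscsid`) may differ on other distributions:
```
# Install and verify the iSCSI initiator (Debian/Ubuntu example)
sudo apt-get install -y open-iscsi
cat /etc/iscsi/initiatorname.iscsi   # an initiator name should be present
sudo systemctl status iscsid         # the service should be active
```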
## Installing OpenEBS
```
helm install --namespace openebs stable/openebs
```
## Installing OpenEBS with the release name `my-release`:
```
helm install --name my-release --namespace openebs stable/openebs
```
## To uninstall/delete the `my-release` deployment:
```
helm ls --all
helm delete my-release
```
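Note that `helm delete` on its own keeps the release record around, so the name `my-release` cannot be reused. To remove the release completely, purge it:
```
helm delete --purge my-release
```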
## Configuration
The following table lists the configurable parameters of the OpenEBS chart and their default values.
| Parameter | Description | Default |
| ----------------------------------------| --------------------------------------------- | ----------------------------------------- |
| `rbac.create` | Enable RBAC Resources | `true` |
| `image.pullPolicy` | Container pull policy | `IfNotPresent` |
| `apiserver.image` | Image for API Server | `quay.io/openebs/m-apiserver` |
| `apiserver.imageTag` | Image Tag for API Server | `1.1.0` |
| `apiserver.replicas` | Number of API Server Replicas | `1` |
| `apiserver.sparse.enabled` | Create Sparse Pool based on Sparsefile | `false` |
| `provisioner.image` | Image for Provisioner | `quay.io/openebs/openebs-k8s-provisioner` |
| `provisioner.imageTag` | Image Tag for Provisioner | `1.1.0` |
| `provisioner.replicas` | Number of Provisioner Replicas | `1` |
| `localProvisioner.image` | Image for localProvisioner | `quay.io/openebs/provisioner-localpv` |
| `localProvisioner.imageTag` | Image Tag for localProvisioner | `1.1.0` |
| `localProvisioner.replicas` | Number of localProvisioner Replicas | `1` |
| `localProvisioner.basePath` | BasePath for hostPath volumes on Nodes | `/var/openebs/local` |
| `webhook.image`                          | Image for admission server                    | `quay.io/openebs/admission-server`        |
| `webhook.imageTag` | Image Tag for admission server | `1.1.0` |
| `webhook.replicas` | Number of admission server Replicas | `1` |
| `snapshotOperator.provisioner.image` | Image for Snapshot Provisioner | `quay.io/openebs/snapshot-provisioner` |
| `snapshotOperator.provisioner.imageTag` | Image Tag for Snapshot Provisioner | `1.1.0` |
| `snapshotOperator.controller.image` | Image for Snapshot Controller | `quay.io/openebs/snapshot-controller` |
| `snapshotOperator.controller.imageTag` | Image Tag for Snapshot Controller | `1.1.0` |
| `snapshotOperator.replicas` | Number of Snapshot Operator Replicas | `1` |
| `ndm.image` | Image for Node Disk Manager | `quay.io/openebs/node-disk-manager-amd64` |
| `ndm.imageTag` | Image Tag for Node Disk Manager | `v0.4.1` |
| `ndm.sparse.path` | Directory where Sparse files are created | `/var/openebs/sparse` |
| `ndm.sparse.size` | Size of the sparse file in bytes | `10737418240` |
| `ndm.sparse.count` | Number of sparse files to be created | `1` |
| `ndm.filters.excludeVendors` | Exclude devices with specified vendor | `CLOUDBYT,OpenEBS` |
| `ndm.filters.excludePaths` | Exclude devices with specified path patterns | `loop,fd0,sr0,/dev/ram,/dev/dm-,/dev/md` |
| `ndm.filters.includePaths` | Include devices with specified path patterns | `""` |
| `ndm.probes.enableSeachest` | Enable Seachest probe for NDM | `false` |
| `ndmOperator.image` | Image for NDM Operator | `quay.io/openebs/node-disk-operator-amd64`|
| `ndmOperator.imageTag` | Image Tag for NDM Operator | `v0.4.1` |
| `jiva.image` | Image for Jiva | `quay.io/openebs/jiva` |
| `jiva.imageTag` | Image Tag for Jiva | `1.1.0` |
| `jiva.replicas` | Number of Jiva Replicas | `3` |
| `cstor.pool.image` | Image for cStor Pool | `quay.io/openebs/cstor-pool` |
| `cstor.pool.imageTag` | Image Tag for cStor Pool | `1.1.0` |
| `cstor.poolMgmt.image` | Image for cStor Pool Management | `quay.io/openebs/cstor-pool-mgmt` |
| `cstor.poolMgmt.imageTag` | Image Tag for cStor Pool Management | `1.1.0` |
| `cstor.target.image` | Image for cStor Target | `quay.io/openebs/cstor-istgt` |
| `cstor.target.imageTag` | Image Tag for cStor Target | `1.1.0` |
| `cstor.volumeMgmt.image` | Image for cStor Volume Management | `quay.io/openebs/cstor-volume-mgmt` |
| `cstor.volumeMgmt.imageTag` | Image Tag for cStor Volume Management | `1.1.0` |
| `policies.monitoring.image` | Image for Prometheus Exporter | `quay.io/openebs/m-exporter` |
| `policies.monitoring.imageTag` | Image Tag for Prometheus Exporter | `1.1.0` |
| `analytics.enabled` | Enable sending stats to Google Analytics | `true` |
| `analytics.pingInterval` | Duration(hours) between sending ping stat | `24h` |
| `defaultStorageConfig.enabled` | Enable default storage class installation | `true` |
| `healthCheck.initialDelaySeconds`       | Delay before liveness probe is initiated      | `30`                                      |
| `healthCheck.periodSeconds`             | How often to perform the liveness probe       | `60`                                      |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
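For example, one possible invocation that disables anonymous analytics and lowers the Jiva replica count (both parameters are listed in the table above):
```shell
helm install --name my-release --namespace openebs \
  --set analytics.enabled=false \
  --set jiva.replicas=2 \
  stable/openebs
```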
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```shell
helm install --name my-release -f values.yaml stable/openebs
```
> **Tip**: You can use the default [values.yaml](values.yaml)
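As a sketch of that workflow, a minimal override file might pin just a couple of parameters and be passed with `-f` (the file name `custom-values.yaml` is arbitrary):
```shell
# Write a small override file and install with it
cat > custom-values.yaml <<EOF
analytics:
  enabled: false
ndm:
  sparse:
    path: /var/openebs/sparse
EOF
helm install --name my-release --namespace openebs -f custom-values.yaml stable/openebs
```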
# OpenEBS
OpenEBS is an open source storage platform that provides persistent, container-attached, cloud-native block storage for DevOps and Kubernetes environments.
OpenEBS allows you to treat your persistent workload containers, such as DBs in containers, just like other containers. OpenEBS itself is deployed as just another container on your host and enables storage services that can be designated on a per-pod, application, cluster, or container level, including:
- Data persistence across nodes, dramatically reducing time spent rebuilding Cassandra rings for example.
- Synchronization of data across availability zones and cloud providers.
- Use of commodity hardware plus a container engine to deliver so-called container-attached block storage.
- Integration with Kubernetes, so developer and application intent flows into OpenEBS configurations automatically.
- Management of tiering to and from S3 and other targets.
categories:
- storage
namespace: openebs
labels:
io.rancher.certified: partner
questions:
- variable: defaultImage
default: "true"
description: "Use default OpenEBS images"
label: Use Default Image
type: boolean
show_subquestion_if: false
group: "Container Images"
subquestions:
- variable: apiserver.image
default: "quay.io/openebs/m-apiserver"
description: "Default API Server image for OpenEBS"
type: string
label: API Server Image
- variable: apiserver.imageTag
default: "1.1.0"
description: "The image tag of API Server image"
type: string
label: Image Tag For OpenEBS API Server Image
- variable: provisioner.image
default: "quay.io/openebs/openebs-k8s-provisioner"
description: "Default K8s Provisioner image for OpenEBS"
type: string
label: Provisioner Image
- variable: provisioner.imageTag
default: "1.1.0"
description: "The image tag of Provisioner image"
type: string
label: Image Tag For Provisioner Image
- variable: snapshotOperator.controller.image
default: "quay.io/openebs/snapshot-controller"
description: "Default Snapshot Controller image for OpenEBS"
type: string
label: Snapshot Controller Image
- variable: snapshotOperator.controller.imageTag
default: "1.1.0"
description: "The image tag of Snapshot Controller image"
type: string
label: Image Tag For OpenEBS Snapshot Controller Image
- variable: snapshotOperator.provisioner.image
default: "quay.io/openebs/snapshot-provisioner"
description: "Default Snapshot Provisioner image for OpenEBS"
type: string
label: Snapshot Provisioner Image
- variable: snapshotOperator.provisioner.imageTag
default: "1.1.0"
description: "The image tag of Snapshot Provisioner image"
type: string
label: Image Tag For OpenEBS Snapshot Provisioner Image
- variable: ndm.image
default: "quay.io/openebs/node-disk-manager-amd64"
description: "Default NDM image"
type: string
label: Node Disk Manager Image
- variable: ndm.imageTag
default: "v0.4.1"
description: "The image tag of NDM image"
type: string
label: Image Tag For Node Disk Manager Image
- variable: ndo.image
default: "quay.io/openebs/node-disk-operator-amd64"
description: "Default NDO image"
type: string
label: Node Disk Operator Image
- variable: ndo.imageTag
default: "v0.4.1"
description: "The image tag of NDO image"
type: string
label: Image Tag For Node Disk Operator Image
- variable: jiva.image
default: "quay.io/openebs/jiva"
description: "Default Jiva Storage Engine image for OpenEBS"
type: string
label: Jiva Storage Engine Image
- variable: jiva.imageTag
default: "1.1.0"
description: "The image tag of Jiva image"
type: string
label: Image Tag For OpenEBS Jiva Storage Engine Image
- variable: cstor.pool.image
default: "quay.io/openebs/cstor-pool"
description: "Default cStor Storage Engine Pool image for OpenEBS"
type: string
label: cStor Storage Engine Pool Image
- variable: cstor.pool.imageTag
default: "1.1.0"
description: "The image tag of cStor Storage Engine Pool image"
type: string
label: Image Tag For OpenEBS cStor Storage Engine Pool Image
- variable: cstor.poolMgmt.image
default: "quay.io/openebs/cstor-pool-mgmt"
description: "Default cStor Storage Engine Pool Management image for OpenEBS"
type: string
label: cStor Storage Engine Pool Management Image
- variable: cstor.poolMgmt.imageTag
default: "1.1.0"
description: "The image tag of cStor Storage Engine Pool Management image"
type: string
label: Image Tag For OpenEBS cStor Storage Engine Pool Management Image
- variable: cstor.target.image
default: "quay.io/openebs/cstor-istgt"
description: "Default cStor Storage Engine Target image for OpenEBS"
type: string
label: cStor Storage Engine Target Image
- variable: cstor.target.imageTag
default: "1.1.0"
description: "The image tag of cStor Storage Engine Target image"
type: string
label: Image Tag For OpenEBS cStor Storage Engine Target Image
- variable: cstor.volumeMgmt.image
default: "quay.io/openebs/cstor-volume-mgmt"
description: "Default cStor Storage Engine Target Management image for OpenEBS"
type: string
label: cStor Storage Engine Target Management Image
- variable: cstor.volumeMgmt.imageTag
default: "1.1.0"
description: "The image tag of cStor Storage Engine Target Management image"
type: string
label: Image Tag For OpenEBS cStor Storage Engine Target Management Image
- variable: policies.monitoring.image
default: "quay.io/openebs/m-exporter"
description: "Default OpeneEBS Volume and pool Exporter image"
type: string
label: Monitoring Exporter Image
show_if: "policies.monitoring.enabled=true&&defaultImage=false"
- variable: policies.monitoring.imageTag
default: "1.1.0"
description: "The image tag of OpenEBS Exporter"
type: string
label: Image Tag For OpenEBS Exporter Image
show_if: "policies.monitoring.enabled=true&&defaultImage=false"
- variable: ndm.filters.excludeVendors
default: 'CLOUDBYT\,OpenEBS'
type: string
description: "Configure NDM to filter disks from following vendors"
label: Filter Disks belonging to vendors
group: "NDM Disk Filter by Vendor "
- variable: ndm.filters.excludePaths
default: 'loop\,fd0\,sr0\,/dev/ram\,/dev/dm-\,/dev/md'
type: string
description: "Configure NDM to filter disks from following paths"
label: Filter Disks belonging to paths
group: "NDM Disk Filter by Path"
- variable: ndm.sparse.enabled
default: "true"
description: "Create a cStor Pool on Sparse Disks"
label: Create cStor Pool on Sparse Disks
type: boolean
show_subquestion_if: true
group: "NDM Sparse Disk Settings"
subquestions:
- variable: ndm.sparse.size
default: "10737418240"
description: "Default Size of Sparse Disk"
type: string
label: Sparse Disk Size in bytes
- variable: ndm.sparse.count
default: "1"
description: "Number of Sparse Disks"
type: string
label: Number of Sparse Disks
- variable: ndm.sparse.path
default: "/var/openebs/sparse"
description: "Directory where Sparse Disks should be created"
type: string
label: Directory for Sparse Disks
- variable: defaultPorts
default: "true"
description: "Use default Communication Ports"
label: Use Default Ports
type: boolean
show_subquestion_if: false
group: "Communication Ports"
subquestions:
- variable: apiserver.ports.externalPort
default: 5656
description: "Default External Port for OpenEBS API Server"
type: int
min: 0
max: 9999
label: OpenEBS API Server External Port
- variable: apiserver.ports.internalPort
default: 5656
description: "Default Internal Port for OpenEBS API Server"
type: int
min: 0
max: 9999
label: OpenEBS API Server Internal Port
- variable: policies.monitoring.enabled
default: true
description: "Enable prometheus monitoring"
type: boolean
label: Enable Prometheus Monitoring
group: "Monitoring Settings"
- variable: analytics.enabled
default: true
description: "Enable sending anonymous statistics to OpenEBS Google Analytics"
type: boolean
label: Enable updating OpenEBS with usage details
group: "Anonymous Analytics"
OpenEBS has been installed. Check its status by running:
$ kubectl get pods -n {{ .Release.Namespace }}
For dynamically creating OpenEBS Volumes, you can either create a new StorageClass or
use one of the default storage classes provided by OpenEBS.
Use `kubectl get sc` to see the list of installed OpenEBS StorageClasses. A sample
PVC spec using the `openebs-jiva-default` StorageClass is given below:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: demo-vol-claim
spec:
storageClassName: openebs-jiva-default
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5G
---
For more information, please visit http://docs.openebs.io/.
Please note that OpenEBS uses iSCSI for connecting applications with the
OpenEBS Volumes, so your nodes should have the iSCSI initiator installed.
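As an illustration, saving the sample spec above to a file (the name demo-vol-claim.yaml is arbitrary) and applying it should leave the claim in the Bound state once a volume is provisioned:

$ kubectl apply -f demo-vol-claim.yaml
$ kubectl get pvc demo-vol-claim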
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "openebs.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "openebs.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "openebs.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "openebs.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "openebs.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ template "openebs.fullname" . }}
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups: ["*"]
resources: ["nodes", "nodes/proxy"]
verbs: ["*"]
- apiGroups: ["*"]
resources: ["namespaces", "services", "pods", "deployments", "events", "endpoints", "configmaps", "jobs"]
verbs: ["*"]
- apiGroups: ["*"]
resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"]
verbs: ["*"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
resources: ["volumesnapshots", "volumesnapshotdatas"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: [ "get", "list", "create", "update", "delete", "patch"]
- apiGroups: ["*"]
resources: [ "disks", "blockdevices", "blockdeviceclaims"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "storagepoolclaims", "storagepoolclaims/finalizers","storagepools"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "castemplates", "runtasks"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "cstorpools", "cstorpools/finalizers", "cstorvolumereplicas", "cstorvolumes", "cstorvolumeclaims"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "cstorbackups", "cstorrestores", "cstorcompletedbackups"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "upgradetasks"]
verbs: ["*" ]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
{{- end }}
{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ template "openebs.fullname" . }}
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "openebs.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "openebs.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}
# This is the node-disk-manager related config.
# It can be used to customize the disks probes and filters
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "openebs.fullname" . }}-ndm-config
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: ndm-config
data:
# udev-probe is the default or primary probe which should be enabled to run ndm
# filterconfigs contains configs of filters, in the form of include
# and exclude comma separated strings
node-disk-manager.config: |
probeconfigs:
- key: udev-probe
name: udev probe
state: true
- key: seachest-probe
name: seachest probe
state: true
- key: smart-probe
name: smart probe
state: true
filterconfigs:
- key: os-disk-exclude-filter
name: os disk exclude filter
state: true
exclude: "/,/etc/hosts,/boot"
- key: vendor-filter
name: vendor filter
state: true
include: ""
exclude: "{{ .Values.ndm.filters.excludeVendors }}"
- key: path-filter
name: path filter
state: true
include: "{{ .Values.ndm.filters.includePaths }}"
exclude: "{{ .Values.ndm.filters.excludePaths }}"
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: {{ template "openebs.fullname" . }}-ndm
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: ndm
spec:
updateStrategy:
type: "RollingUpdate"
selector:
matchLabels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
component: ndm
template:
metadata:
labels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
component: ndm
openebs.io/component-name: ndm
name: openebs-ndm
openebs.io/version: {{ .Values.release.version }}
spec:
serviceAccountName: {{ template "openebs.serviceAccountName" . }}
hostNetwork: true
containers:
- name: {{ template "openebs.name" . }}-ndm
image: "{{ .Values.ndm.image }}:{{ .Values.ndm.imageTag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
securityContext:
privileged: true
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# pass hostname as env variable using downward API to the NDM container
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
{{- if .Values.ndm.sparse }}
{{- if .Values.ndm.sparse.path }}
# specify the directory where the sparse files need to be created.
# if not specified, then sparse files will not be created.
- name: SPARSE_FILE_DIR
value: "{{ .Values.ndm.sparse.path }}"
{{- end }}
{{- if .Values.ndm.sparse.size }}
# Size(bytes) of the sparse file to be created.
- name: SPARSE_FILE_SIZE
value: "{{ .Values.ndm.sparse.size }}"
{{- end }}
{{- if .Values.ndm.sparse.count }}
# Specify the number of sparse files to be created
- name: SPARSE_FILE_COUNT
value: "{{ .Values.ndm.sparse.count }}"
{{- end }}
{{- end }}
livenessProbe:
exec:
command:
- pgrep
- ".*ndm"
initialDelaySeconds: {{ .Values.ndm.healthCheck.initialDelaySeconds }}
periodSeconds: {{ .Values.ndm.healthCheck.periodSeconds }}
volumeMounts:
- name: config
mountPath: /host/node-disk-manager.config
subPath: node-disk-manager.config
readOnly: true
- name: udev
mountPath: /run/udev
- name: procmount
mountPath: /host/proc
readOnly: true
{{- if .Values.ndm.sparse }}
{{- if .Values.ndm.sparse.path }}
- name: sparsepath
mountPath: {{ .Values.ndm.sparse.path }}
{{- end }}
{{- end }}
volumes:
- name: config
configMap:
name: {{ template "openebs.fullname" . }}-ndm-config
- name: udev
hostPath:
path: /run/udev
type: Directory
# mount the host's /proc (to access the mount file of the host's process 1) inside the container
# to read mount-points of disks and partitions
- name: procmount
hostPath:
path: /proc
type: Directory
{{- if .Values.ndm.sparse }}
{{- if .Values.ndm.sparse.path }}
- name: sparsepath
hostPath:
path: {{ .Values.ndm.sparse.path }}
{{- end }}
{{- end }}
# By default the node-disk-manager will be run on all kubernetes nodes
# If you would like to limit this to only some nodes, say the nodes
# that have storage attached, you could label those node and use
# nodeSelector.
#
# e.g. label the storage nodes with - "openebs.io/nodegroup"="storage-node"
# kubectl label node <node-name> "openebs.io/nodegroup"="storage-node"
#nodeSelector:
# "openebs.io/nodegroup": "storage-node"
{{- if .Values.ndm.nodeSelector }}
nodeSelector:
{{ toYaml .Values.ndm.nodeSelector | indent 8 }}
{{- end }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "openebs.fullname" . }}-admission-server
labels:
app: admission-webhook
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: admission-webhook
spec:
replicas: {{ .Values.webhook.replicas }}
selector:
matchLabels:
app: admission-webhook
template:
metadata:
labels:
app: admission-webhook
name: admission-webhook
openebs.io/version: {{ .Values.release.version }}
openebs.io/component-name: admission-webhook
spec:
{{- if .Values.webhook.nodeSelector }}
nodeSelector:
{{ toYaml .Values.webhook.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.webhook.tolerations }}
tolerations:
{{ toYaml .Values.webhook.tolerations | indent 8 }}
{{- end }}
{{- if .Values.webhook.affinity }}
affinity:
{{ toYaml .Values.webhook.affinity | indent 8 }}
{{- end }}
serviceAccountName: {{ template "openebs.serviceAccountName" . }}
containers:
- name: admission-webhook
image: "{{ .Values.webhook.image }}:{{ .Values.webhook.imageTag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
- -tlsCertFile=/etc/webhook/certs/cert.pem
- -tlsKeyFile=/etc/webhook/certs/key.pem
- -alsologtostderr
- -v=8
- 2>&1
volumeMounts:
- name: webhook-certs
mountPath: /etc/webhook/certs
readOnly: true
volumes:
- name: webhook-certs
secret:
secretName: admission-server-certs
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "openebs.fullname" . }}-localpv-provisioner
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: localpv-provisioner
openebs.io/component-name: openebs-localpv-provisioner
spec:
replicas: {{ .Values.provisioner.replicas }}
selector:
matchLabels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
component: localpv-provisioner
name: openebs-localpv-provisioner
openebs.io/component-name: openebs-localpv-provisioner
openebs.io/version: {{ .Values.release.version }}
spec:
serviceAccountName: {{ template "openebs.serviceAccountName" . }}
containers:
- name: {{ template "openebs.name" . }}-localpv-provisioner
image: "{{ .Values.localprovisioner.image }}:{{ .Values.localprovisioner.imageTag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
# OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s
# based on this address. This is ignored if empty.
# This is supported for openebs provisioner version 0.5.2 onwards
#- name: OPENEBS_IO_K8S_MASTER
# value: "http://10.128.0.12:8080"
# OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s
# based on this config. This is ignored if empty.
# This is supported for openebs provisioner version 0.5.2 onwards
#- name: OPENEBS_IO_KUBE_CONFIG
# value: "/home/ubuntu/.kube/config"
# OPENEBS_NAMESPACE is the namespace that this provisioner will
# lookup to find maya api service
- name: OPENEBS_NAMESPACE
value: "{{ .Release.Namespace }}"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: OPENEBS_IO_ENABLE_ANALYTICS
value: "{{ .Values.analytics.enabled }}"
# OPENEBS_IO_BASE_PATH is the environment variable that provides the
# default base path on the node where host-path PVs will be provisioned.
- name: OPENEBS_IO_BASE_PATH
value: "{{ .Values.localprovisioner.basePath }}"
- name: OPENEBS_IO_HELPER_IMAGE
value: "{{ .Values.localprovisioner.helperImage }}:{{ .Values.localprovisioner.helperImageTag }}"
- name: OPENEBS_IO_INSTALLER_TYPE
value: "charts-helm"
livenessProbe:
exec:
command:
- pgrep
- ".*localpv"
initialDelaySeconds: {{ .Values.localprovisioner.healthCheck.initialDelaySeconds }}
periodSeconds: {{ .Values.localprovisioner.healthCheck.periodSeconds }}
{{- if .Values.localprovisioner.nodeSelector }}
nodeSelector:
{{ toYaml .Values.localprovisioner.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.localprovisioner.tolerations }}
tolerations:
{{ toYaml .Values.localprovisioner.tolerations | indent 8 }}
{{- end }}
{{- if .Values.localprovisioner.affinity }}
affinity:
{{ toYaml .Values.localprovisioner.affinity | indent 8 }}
{{- end }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "openebs.fullname" . }}-apiserver
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: apiserver
name: maya-apiserver
openebs.io/component-name: maya-apiserver
openebs.io/version: {{ .Values.release.version }}
spec:
replicas: {{ .Values.apiserver.replicas }}
selector:
matchLabels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
component: apiserver
name: maya-apiserver
openebs.io/component-name: maya-apiserver
openebs.io/version: {{ .Values.release.version }}
spec:
serviceAccountName: {{ template "openebs.serviceAccountName" . }}
containers:
- name: {{ template "openebs.name" . }}-apiserver
image: "{{ .Values.apiserver.image }}:{{ .Values.apiserver.imageTag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: {{ .Values.apiserver.ports.internalPort }}
env:
# OPENEBS_IO_KUBE_CONFIG enables maya api service to connect to K8s
# based on this config. This is ignored if empty.
# This is supported for maya api server version 0.5.2 onwards
#- name: OPENEBS_IO_KUBE_CONFIG
# value: "/home/ubuntu/.kube/config"
# OPENEBS_IO_K8S_MASTER enables maya api service to connect to K8s
# based on this address. This is ignored if empty.
# This is supported for maya api server version 0.5.2 onwards
#- name: OPENEBS_IO_K8S_MASTER
# value: "http://172.28.128.3:8080"
# OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL decides whether default cstor sparse pool should be
# configured as a part of openebs installation.
# If "true" a default cstor sparse pool will be configured, if "false" it will not be configured.
- name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL
value: "{{ .Values.apiserver.sparse.enabled }}"
- name: OPENEBS_IO_CSTOR_POOL_SPARSE_DIR
value: "{{ .Values.ndm.sparse.path }}"
- name: OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
value: "{{ .Values.defaultStorageConfig.enabled }}"
- name: OPENEBS_IO_CSTOR_TARGET_DIR
value: "{{ .Values.ndm.sparse.path }}"
# OPENEBS_NAMESPACE provides the namespace of this deployment as an
# environment variable
- name: OPENEBS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as
# environment variable
- name: OPENEBS_SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
# OPENEBS_MAYA_POD_NAME provides the name of this pod as
# environment variable
- name: OPENEBS_MAYA_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: OPENEBS_IO_JIVA_CONTROLLER_IMAGE
value: "{{ .Values.jiva.image }}:{{ .Values.jiva.imageTag }}"
- name: OPENEBS_IO_JIVA_REPLICA_IMAGE
value: "{{ .Values.jiva.image }}:{{ .Values.jiva.imageTag }}"
- name: OPENEBS_IO_JIVA_REPLICA_COUNT
value: "{{ .Values.jiva.replicas }}"
- name: OPENEBS_IO_CSTOR_TARGET_IMAGE
value: "{{ .Values.cstor.target.image }}:{{ .Values.cstor.target.imageTag }}"
- name: OPENEBS_IO_CSTOR_POOL_IMAGE
value: "{{ .Values.cstor.pool.image }}:{{ .Values.cstor.pool.imageTag }}"
- name: OPENEBS_IO_CSTOR_POOL_MGMT_IMAGE
value: "{{ .Values.cstor.poolMgmt.image }}:{{ .Values.cstor.poolMgmt.imageTag }}"
- name: OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGE
value: "{{ .Values.cstor.volumeMgmt.image }}:{{ .Values.cstor.volumeMgmt.imageTag }}"
- name: OPENEBS_IO_VOLUME_MONITOR_IMAGE
value: "{{ .Values.policies.monitoring.image }}:{{ .Values.policies.monitoring.imageTag }}"
- name: OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGE
value: "{{ .Values.policies.monitoring.image }}:{{ .Values.policies.monitoring.imageTag }}"
# OPENEBS_IO_ENABLE_ANALYTICS if set to true sends anonymous usage
# events to Google Analytics
- name: OPENEBS_IO_ENABLE_ANALYTICS
value: "{{ .Values.analytics.enabled }}"
# OPENEBS_IO_ANALYTICS_PING_INTERVAL can be used to specify the duration (in hours)
# for periodic ping events sent to Google Analytics. Default is 24 hours.
- name: OPENEBS_IO_ANALYTICS_PING_INTERVAL
value: "{{ .Values.analytics.pingInterval }}"
- name: OPENEBS_IO_INSTALLER_TYPE
value: "charts-helm"
livenessProbe:
exec:
command:
- /usr/local/bin/mayactl
- version
initialDelaySeconds: {{ .Values.apiserver.healthCheck.initialDelaySeconds }}
periodSeconds: {{ .Values.apiserver.healthCheck.periodSeconds }}
{{- if .Values.apiserver.nodeSelector }}
nodeSelector:
{{ toYaml .Values.apiserver.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.apiserver.tolerations }}
tolerations:
{{ toYaml .Values.apiserver.tolerations | indent 8 }}
{{- end }}
{{- if .Values.apiserver.affinity }}
affinity:
{{ toYaml .Values.apiserver.affinity | indent 8 }}
{{- end }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "openebs.fullname" . }}-provisioner
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: provisioner
spec:
replicas: {{ .Values.provisioner.replicas }}
selector:
matchLabels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
component: provisioner
name: openebs-provisioner
openebs.io/component-name: openebs-provisioner
openebs.io/version: {{ .Values.release.version }}
spec:
serviceAccountName: {{ template "openebs.serviceAccountName" . }}
containers:
- name: {{ template "openebs.name" . }}-provisioner
image: "{{ .Values.provisioner.image }}:{{ .Values.provisioner.imageTag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
# OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s
# based on this address. This is ignored if empty.
# This is supported for openebs provisioner version 0.5.2 onwards
#- name: OPENEBS_IO_K8S_MASTER
# value: "http://10.128.0.12:8080"
# OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s
# based on this config. This is ignored if empty.
# This is supported for openebs provisioner version 0.5.2 onwards
#- name: OPENEBS_IO_KUBE_CONFIG
# value: "/home/ubuntu/.kube/config"
# OPENEBS_NAMESPACE is the namespace that this provisioner will
# lookup to find maya api service
- name: OPENEBS_NAMESPACE
value: "{{ .Release.Namespace }}"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
# to which the provisioner should forward the volume create/delete requests.
# If not present, "maya-apiserver-service" will be used for lookup.
# This is supported for openebs provisioner version 0.5.3-RC1 onwards
- name: OPENEBS_MAYA_SERVICE_NAME
value: "{{ template "openebs.fullname" . }}-apiservice"
# The following values will be set as annotations to the PV object.
# Refer : https://github.com/openebs/external-storage/pull/15
#- name: OPENEBS_MONITOR_URL
# value: "{{ .Values.provisioner.monitorUrl }}"
#- name: OPENEBS_MONITOR_VOLKEY
# value: "{{ .Values.provisioner.monitorVolumeKey }}"
#- name: MAYA_PORTAL_URL
# value: "{{ .Values.provisioner.mayaPortalUrl }}"
livenessProbe:
exec:
command:
- pgrep
- ".*openebs"
initialDelaySeconds: {{ .Values.provisioner.healthCheck.initialDelaySeconds }}
periodSeconds: {{ .Values.provisioner.healthCheck.periodSeconds }}
{{- if .Values.provisioner.nodeSelector }}
nodeSelector:
{{ toYaml .Values.provisioner.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.provisioner.tolerations }}
tolerations:
{{ toYaml .Values.provisioner.tolerations | indent 8 }}
{{- end }}
{{- if .Values.provisioner.affinity }}
affinity:
{{ toYaml .Values.provisioner.affinity | indent 8 }}
{{- end }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "openebs.fullname" . }}-snapshot-operator
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: snapshot-operator
spec:
replicas: {{ .Values.snapshotOperator.replicas }}
selector:
matchLabels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
strategy:
type: {{ .Values.snapshotOperator.upgradeStrategy }}
template:
metadata:
labels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
component: snapshot-operator
name: openebs-snapshot-operator
openebs.io/version: {{ .Values.release.version }}
openebs.io/component-name: openebs-snapshot-operator
spec:
serviceAccountName: {{ template "openebs.serviceAccountName" . }}
containers:
- name: {{ template "openebs.name" . }}-snapshot-controller
image: "{{ .Values.snapshotOperator.controller.image }}:{{ .Values.snapshotOperator.controller.imageTag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
# OPENEBS_IO_K8S_MASTER enables openebs snapshot controller to connect to K8s
# based on this address. This is ignored if empty.
# This is supported for openebs snapshot controller version 0.6-RC1 onwards
#- name: OPENEBS_IO_K8S_MASTER
# value: "http://10.128.0.12:8080"
# OPENEBS_IO_KUBE_CONFIG enables openebs snapshot controller to connect to K8s
# based on this config. This is ignored if empty.
# This is supported for openebs snapshot controller version 0.6-RC1 onwards
#- name: OPENEBS_IO_KUBE_CONFIG
# value: "/home/ubuntu/.kube/config"
# OPENEBS_NAMESPACE is the namespace that this snapshot controller will
# lookup to find maya api service
- name: OPENEBS_NAMESPACE
value: "{{ .Release.Namespace }}"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
# to which the snapshot controller should forward the volume snapshot requests.
# If not present, "maya-apiserver-service" will be used for lookup.
# This is supported for openebs snapshot controller version 0.6-RC1 onwards
- name: OPENEBS_MAYA_SERVICE_NAME
value: "{{ template "openebs.fullname" . }}-apiservice"
livenessProbe:
exec:
command:
- pgrep
- ".*controller"
initialDelaySeconds: {{ .Values.snapshotOperator.healthCheck.initialDelaySeconds }}
periodSeconds: {{ .Values.snapshotOperator.healthCheck.periodSeconds }}
- name: {{ template "openebs.name" . }}-snapshot-provisioner
image: "{{ .Values.snapshotOperator.provisioner.image }}:{{ .Values.snapshotOperator.provisioner.imageTag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
# OPENEBS_IO_K8S_MASTER enables openebs snapshot provisioner to connect to K8s
# based on this address. This is ignored if empty.
# This is supported for openebs snapshot provisioner version 0.6-RC1 onwards
#- name: OPENEBS_IO_K8S_MASTER
# value: "http://10.128.0.12:8080"
# OPENEBS_IO_KUBE_CONFIG enables openebs snapshot provisioner to connect to K8s
# based on this config. This is ignored if empty.
# This is supported for openebs snapshot provisioner version 0.6-RC1 onwards
#- name: OPENEBS_IO_KUBE_CONFIG
# value: "/home/ubuntu/.kube/config"
# OPENEBS_NAMESPACE is the namespace that this snapshot provisioner will
# lookup to find maya api service
- name: OPENEBS_NAMESPACE
value: "{{ .Release.Namespace }}"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
# to which the snapshot provisioner should forward the volume snapshot PV requests.
# If not present, "maya-apiserver-service" will be used for lookup.
# This is supported for openebs snapshot provisioner version 0.6-RC1 onwards
- name: OPENEBS_MAYA_SERVICE_NAME
value: "{{ template "openebs.fullname" . }}-apiservice"
livenessProbe:
exec:
command:
- pgrep
- ".*provisioner"
initialDelaySeconds: {{ .Values.snapshotOperator.healthCheck.initialDelaySeconds }}
periodSeconds: {{ .Values.snapshotOperator.healthCheck.periodSeconds }}
{{- if .Values.snapshotOperator.nodeSelector }}
nodeSelector:
{{ toYaml .Values.snapshotOperator.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.snapshotOperator.tolerations }}
tolerations:
{{ toYaml .Values.snapshotOperator.tolerations | indent 8 }}
{{- end }}
{{- if .Values.snapshotOperator.affinity }}
affinity:
{{ toYaml .Values.snapshotOperator.affinity | indent 8 }}
{{- end }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "openebs.fullname" . }}-ndm-operator
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: ndm-operator
openebs.io/component-name: ndm-operator
name: ndm-operator
spec:
replicas: {{ .Values.ndmOperator.replicas }}
strategy:
type: {{ .Values.ndmOperator.upgradeStrategy }}
selector:
matchLabels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
component: ndm-operator
name: ndm-operator
openebs.io/component-name: ndm-operator
openebs.io/version: {{ .Values.release.version }}
spec:
serviceAccountName: {{ template "openebs.serviceAccountName" . }}
containers:
- name: {{ template "openebs.fullname" . }}-ndm-operator
image: "{{ .Values.ndmOperator.image }}:{{ .Values.ndmOperator.imageTag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
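        # The probe stats /tmp/operator-sdk-ready, a file the operator is
        # expected to create once it is up (the operator-sdk file-ready
        # convention), so readiness tracks operator startup.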
readinessProbe:
exec:
command:
- stat
- /tmp/operator-sdk-ready
initialDelaySeconds: {{ .Values.ndmOperator.readinessCheck.initialDelaySeconds }}
periodSeconds: {{ .Values.ndmOperator.readinessCheck.periodSeconds }}
failureThreshold: {{ .Values.ndmOperator.readinessCheck.failureThreshold }}
env:
- name: WATCH_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: OPERATOR_NAME
value: "node-disk-operator"
- name: CLEANUP_JOB_IMAGE
value: "{{ .Values.ndmOperator.cleanupImage }}:{{ .Values.ndmOperator.cleanupImageTag }}"
{{- if .Values.ndmOperator.nodeSelector }}
nodeSelector:
{{ toYaml .Values.ndmOperator.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.ndmOperator.tolerations }}
tolerations:
{{ toYaml .Values.ndmOperator.tolerations | indent 8 }}
{{- end }}
apiVersion: v1
kind: Service
metadata:
name: admission-server-svc
labels:
app: admission-webhook
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
ports:
- port: 443
targetPort: 443
selector:
app: admission-webhook
apiVersion: v1
kind: Service
metadata:
name: {{ template "openebs.fullname" . }}-apiservice
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
ports:
- name: api
port: {{ .Values.apiserver.ports.externalPort }}
targetPort: {{ .Values.apiserver.ports.internalPort }}
protocol: TCP
selector:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
component: apiserver
sessionAffinity: None
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "openebs.serviceAccountName" . }}
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- end }}
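{{/*
Generate a self-signed CA and a serving certificate for the admission
webhook. genCA and genSignedCert are Sprig template functions evaluated at
render time, so a fresh certificate pair is produced on each install or
upgrade; the results are used only when webhook.generateTLS is true.
*/}}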
{{- $ca := genCA "admission-server-ca" 3650 }}
{{- $cn := printf "admission-server-svc" }}
{{- $altName1 := printf "admission-server-svc.%s" .Release.Namespace }}
{{- $altName2 := printf "admission-server-svc.%s.svc" .Release.Namespace }}
{{- $cert := genSignedCert $cn nil (list $altName1 $altName2) 3650 $ca }}
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
name: openebs-validation-webhook-cfg
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: admission-webhook
openebs.io/component-name: admission-webhook
webhooks:
# failurePolicy Fail means that an error calling the webhook causes the admission to fail.
- name: admission-webhook.openebs.io
failurePolicy: Fail
clientConfig:
service:
name: admission-server-svc
namespace: {{ .Release.Namespace }}
path: "/validate"
{{- if .Values.webhook.generateTLS }}
caBundle: {{ b64enc $ca.Cert }}
{{- else }}
caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURpekNDQW5PZ0F3SUJBZ0lKQUk5NG9wdWdKb1drTUEwR0NTcUdTSWIzRFFFQkN3VUFNRnd4Q3pBSkJnTlYKQkFZVEFuaDRNUW93Q0FZRFZRUUlEQUY0TVFvd0NBWURWUVFIREFGNE1Rb3dDQVlEVlFRS0RBRjRNUW93Q0FZRApWUVFMREFGNE1Rc3dDUVlEVlFRRERBSmpZVEVRTUE0R0NTcUdTSWIzRFFFSkFSWUJlREFlRncweE9UQXpNREl3Ck56TXlOREZhRncweU1EQXpNREV3TnpNeU5ERmFNRnd4Q3pBSkJnTlZCQVlUQW5oNE1Rb3dDQVlEVlFRSURBRjQKTVFvd0NBWURWUVFIREFGNE1Rb3dDQVlEVlFRS0RBRjRNUW93Q0FZRFZRUUxEQUY0TVFzd0NRWURWUVFEREFKagpZVEVRTUE0R0NTcUdTSWIzRFFFSkFSWUJlRENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DCmdnRUJBT0pxNmI2dnI0cDMzM3FRaHJQbmNCVFVIUE1ESnJtaEYvOU44NjZodzFvOGZLclFwNkJmRkcvZEQ0N2gKVGcvWnJ0U2VHT0NoRjFxSEk1dGp3SlVEeGphSUM3U0FkZGpxb1pJUGFoT1pjVlpxZE1POVVFTlFUbktIRXczVQpCUjJUaHdydi9QTTRxZitUazdRa1J6Y2VJQXg1VS9lbUlEV2t4NEk3RlRYQk1XT1hGUTNoRlFtWFppZHpHN21mCnZJTlhYN0krOHR3QVM0alNSdGhxYjVUTzMwYmpxQTFzY0RRdXlZU2R6OVg5TGw1WU1QSUtSZHpnYUR1d1Q5QkQKZjNxT1VqazN6M1FZd0IvWmowaXJtQlpKejJla0V3a1QxbWlyUHF2NTA5QVJ5V1U2QUlSSTN6dnB6S2tWeFJUaApmcUROa1M5SmRRV1Q3RW9vN2lITmRtZlhOYmtDQXdFQUFhTlFNRTR3SFFZRFZSME9CQllFRk1ORzZGeGlMYWFmCjFld2w1RDd1SXJiK0UrSE9NQjhHQTFVZEl3UVlNQmFBRk1ORzZGeGlMYWFmMWV3bDVEN3VJcmIrRStIT01Bd0cKQTFVZEV3UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHQnYxeC92OWRnWU1ZY1h5TU9MUUNENgpVZWNsS3YzSFRTVGUybXZQcTZoTW56K0ExOGF6RWhPU0xONHZuQUNSd2pzRmVobWIrWk9wMVlYWDkzMi9OckRxCk1XUmh1bENiblFndjlPNVdHWXBDQUR1dnBBMkwyT200aU50S0FucUpGNm5ubHI1UFdQZnVJelB1eVlvQUpKRDkKSFpZRjVwa2hac0EwdDlUTDFuUmdPbFY4elZ0eUg2TTVDWm5nSEpjWG9CWlVvSlBvcGJsc3BpUnh6dzBkMUU0SgpUVmVHaXZFa0RJNFpFYTVuTzZyTUZzcXJ1L21ydVQwN1FCaWd5ZzlEY3h0QU5TUTczQUhOemNRUWpZMWg3L2RiCmJ6QXQ2aWxNZXZKc2lpVFlGYjRPb0dIVW53S2tTQUJuazFNQW5oUUhvYUNuS2dXZE1vU3orQWVuYkhzYXJSMD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
{{- end }}
rules:
- operations: [ "CREATE", "DELETE" ]
apiGroups: ["*"]
apiVersions: ["*"]
resources: ["persistentvolumeclaims"]
---
apiVersion: v1
kind: Secret
metadata:
name: admission-server-certs
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
openebs.io/component-name: admission-webhook
type: Opaque
data:
{{- if .Values.webhook.generateTLS }}
cert.pem: {{ b64enc $cert.Cert }}
key.pem: {{ b64enc $cert.Key }}
{{- else }}
cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ3VENDQXRXZ0F3SUJBZ0lVYk84NS9JR0ZXYTA2Vm11WVdTWjdxaTUybmRRd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1hERUxNQWtHQTFVRUJoTUNlSGd4Q2pBSUJnTlZCQWdNQVhneENqQUlCZ05WQkFjTUFYZ3hDakFJQmdOVgpCQW9NQVhneENqQUlCZ05WQkFzTUFYZ3hDekFKQmdOVkJBTU1BbU5oTVJBd0RnWUpLb1pJaHZjTkFRa0JGZ0Y0Ck1CNFhEVEU1TURNd01qQTNNek13TUZvWERUSXdNRE13TVRBM01qYzFNbG93S3pFcE1DY0dBMVVFQXhNZ1lXUnQKYVhOemFXOXVMWE5sY25abGNpMXpkbU11YjNCbGJtVmljeTV6ZG1Nd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQQpBNElCRHdBd2dnRUtBb0lCQVFERk5MRE1xKzd6eFZidDNPcnFhaVUyOFB6K25ZeFRCblA0NVhFWGFjSUpPWG1aClM1c2ZjMjM3WVNWS0I5Tlp4cXNYT08wcXpWb0xtNlZ0UDJjREpWZGZIVUQ0QXBZSC94UVBVTktrcFg3K0NVTFEKZ3VBNWowOXozdkFaeDJidXBTaXFFdE1mVldqNkh5V0Jyd2FuZW9IaVVXVVdpbmtnUXpCQzR1SWtiRkE2djYrZwp4ZzAwS09TY2NFRWY3eU5McjBvejBKVHRpRm1aS1pVVVBwK3N3WTRpRTZ3RER5bVVnTmY4SW8wUEExVkQ1TE9vCkFwQ0l2WDJyb1RNd3VkR1VrZUc1VTA2OWIrMWtQMEJsUWdDZk9TQTBmZEN3Snp0aWE1aHpaUlVIWGxFOVArN0kKekgyR0xXeHh1aHJPTlFmT25HcVRiUE13UmowekZIdmcycUo1azJ2VkFnTUJBQUdqZ2Rjd2dkUXdEZ1lEVlIwUApBUUgvQkFRREFnV2dNQk1HQTFVZEpRUU1NQW9HQ0NzR0FRVUZCd01CTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEClZSME9CQllFRklnOVFSOSsyVW12THQwQXY4MlYwZml0bU81WE1COEdBMVVkSXdRWU1CYUFGTU5HNkZ4aUxhYWYKMWV3bDVEN3VJcmIrRStIT01GOEdBMVVkRVFSWU1GYUNGR0ZrYldsemMybHZiaTF6WlhKMlpYSXRjM1pqZ2h4aApaRzFwYzNOcGIyNHRjMlZ5ZG1WeUxYTjJZeTV2Y0dWdVpXSnpnaUJoWkcxcGMzTnBiMjR0YzJWeWRtVnlMWE4yCll5NXZjR1Z1WldKekxuTjJZekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBSlpJRzd2d0RYaWxhWUFCS1Brc0oKZVJtdml4ZnYybTRVTVdzdlBKVVVJTXhHbzhtc1J6aWhBRjVuTExzaURKRDl4MjhraXZXaGUwbWE4aWVHYjY5Sgp1U1N4bys0OStaV3NVaTB3UlRDMi9ZWGlkWS9xNDU2c1g4ck9qQURDZlFUcFpYc2ZyekVWa2Q4NE0zdU5GTmhnCnMyWmxJMnNDTWljYXExNWxIWEh3akFkY2FqZit1VklwOXNHUElsMUhmZFcxWVFLc0NoU3dhdi80NUZJcFlMSVYKM3hiS2ZIbmh2czhJck5ZbTVIenAvVVdvcFN1Tm5tS1IwWGo3cXpGcllUYzV3eHZ3VVZrKzVpZFFreWMwZ0RDcApGbkFVdEdmaUVUQnBhU3pISjQ4STZqUFpneVE0NzlZMmRxRUtXcWtyc0RkZ2tVcXlnNGlQQ0YwWC9YVU9YU3VGClNnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
key.pem: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeFRTd3pLdnU4OFZXN2R6cTZtb2xOdkQ4L3AyTVV3WnorT1Z4RjJuQ0NUbDVtVXViCkgzTnQrMkVsU2dmVFdjYXJGemp0S3MxYUM1dWxiVDluQXlWWFh4MUErQUtXQi84VUQxRFNwS1YrL2dsQzBJTGcKT1k5UGM5N3dHY2RtN3FVb3FoTFRIMVZvK2g4bGdhOEdwM3FCNGxGbEZvcDVJRU13UXVMaUpHeFFPcit2b01ZTgpOQ2prbkhCQkgrOGpTNjlLTTlDVTdZaFptU21WRkQ2ZnJNR09JaE9zQXc4cGxJRFgvQ0tORHdOVlErU3pxQUtRCmlMMTlxNkV6TUxuUmxKSGh1Vk5PdlcvdFpEOUFaVUlBbnprZ05IM1FzQ2M3WW11WWMyVVZCMTVSUFQvdXlNeDkKaGkxc2Nib2F6alVIenB4cWsyenpNRVk5TXhSNzROcWllWk5yMVFJREFRQUJBb0lCQVFDcXRIT2VsKzRlUWVKLwp3RTN4WUxTYUhIMURnZWxvTFJ2U2hmb2hSRURjYjA0ZExsODNHRnBKMGN2UGkzcWVLZVVNRXhEcGpoeTJFNk5kCk1CYmhtRDlMYkMxREFpb1EvZkxGVnpjZm9zcU02RU5YN3hKZGdQcEwyTjJKMHh2ODFDYWhJZTV6SHlIaDhYZ3MKQysvOHBZVXMvVHcrQ052VTI1UTVNZUNEbXViUUVuemJqQ3lIQm5SVmw1dVF6bk8zWEt2NEVyejdBT1BBWmFJTQozYmNFNC83c1JGczM4SE1aMVZTZ2JxUi9rM1N5SEFzNXhNWHVtY0hMMTBkK0FVK21BQ0svUThpdWJHMm9kNnJiCko3S0RONmFuUzRPZk4zZ3RtaEppN3ZsTjJVL3JycHdnblI0d3Y0bmV4U1ZlamYzQU9iaU9jNnYzZ0xJbXJ2Q3oKNzFETDFPaTVBb0dCQU9HeFp2RWFUSFFnNFdaQVJZbXlGZEtZeXY2MURDc1JycElmUlh3Q1YrcnBZTFM2NlV4SQprWHJISlNreWFqTjNTOXVsZUtUTXRWaU5wY2JCcjVNZ0lOaFFvdThRc2dpZlZHWFJGQ3d0OXJ3MGNDbEc1Y2pCClZ3bUQzYWFBTGR5WVQvbHc4dnk1Zndqc1hFZHd1OEQ2cC9rd0ZzMmlwZWQ4QVFPUVZlQ1dPeXF6QW9HQkFOK3YKL2VxKzZ5NHhPZ2ZtQ01KcHJ0THBBN1J0M3FsU0JKbEw3RkNsQXRCeUUxazBPTVIrZTdhSDBVTDdYWVR4YlBLOApBYnRZR3lzWDkydGM3RHlaU0k0cDFjUHhvcHdzNkt3N0RYZUt0YTNnVkRmSXVuZ3haR25XWjk2WmNjcEhyVzgyCnl5OTk5dTQ2WE1tQWZwSzEvbGxjdGdLem5FUVp5ZkhEUmlWdVVQTlhBb0dCQUxkMGxORDNKNTVkKzlvNTlFeHgKVGZ2WjUyZ1Rrc2lQbnU5NEsrc1puSTEvRnZUUjJrSC8yd0dLVDFLbGdGNUZZb3d3ZlZpNGJkQ0ZrM04walZ0eQppa0JMaTZYNFZEOWVCQ1NmUjE2Q0hrWHQraDRUVzBWTW80dEFmVE9TamJUNnVrZHc0Sk05MVYxVGc4OHVlKy9wCjBCQm1YcUxZeXpMWFFadTcvNUtIaTZDeEFvR0FaTWV2R0E5eWVEcFhrZTF6THR4Y2xzdkREb3lkMEIyUzB0cGgKR3lodEx5cm1Tcjk3Z0JRWWV2R1FONlIyeXduVzh6bi9jYi9OWmNvRGdFeTZac2NNNkhneXhuaGNzZzZOdWVOVgpPdkcwenlVTjdLQTBXeWl0dS8yTWlMOExoSDVzeG5taWE4Qk4rNkV4NHR0UXE1cnhnS09Eb1kzNHJyb0x3VEFnCnI0YVhWRHNDZ1lBYnRwZXhvNTJ4VmJkTzZCL3B5RUU2cEJCS1FkK3hiVkJNMDZwUzArSlFudSt5SVBmeXFhekwKbGdYTEhBSm01bU9Sb2RFRHk0WlVJRkM5RmhraGcrV0ZzSHJCOXpGU1IrZFc2Uzg1eFA4ZGxHVE42S2cydXJNQQowNTRCQUh4RWhPNU9QblNqT0VHSmQwYTdGQmc1UlkxN0RRQlFxV25SZENURHlDWmU0OStLcWc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
{{- end }}
# Default values for openebs.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
rbac:
# Specifies whether RBAC resources should be created
create: true
serviceAccount:
create: true
name:
release:
# "openebs.io/version" label for control plane components
version: "1.1.0"
image:
pullPolicy: IfNotPresent
apiserver:
image: "quay.io/openebs/m-apiserver"
imageTag: "1.1.0"
replicas: 1
ports:
externalPort: 5656
internalPort: 5656
sparse:
enabled: "false"
nodeSelector: {}
tolerations: []
affinity: {}
healthCheck:
initialDelaySeconds: 30
periodSeconds: 60
defaultStorageConfig:
enabled: "true"
provisioner:
image: "quay.io/openebs/openebs-k8s-provisioner"
imageTag: "1.1.0"
replicas: 1
nodeSelector: {}
tolerations: []
affinity: {}
healthCheck:
initialDelaySeconds: 30
periodSeconds: 60
localprovisioner:
image: "quay.io/openebs/provisioner-localpv"
imageTag: "1.1.0"
helperImage: "quay.io/openebs/openebs-tools"
helperImageTag: "3.8"
replicas: 1
basePath: "/var/openebs/local"
nodeSelector: {}
tolerations: []
affinity: {}
healthCheck:
initialDelaySeconds: 30
periodSeconds: 60
snapshotOperator:
controller:
image: "quay.io/openebs/snapshot-controller"
imageTag: "1.1.0"
provisioner:
image: "quay.io/openebs/snapshot-provisioner"
imageTag: "1.1.0"
replicas: 1
upgradeStrategy: "Recreate"
nodeSelector: {}
tolerations: []
affinity: {}
healthCheck:
initialDelaySeconds: 30
periodSeconds: 60
ndm:
image: "quay.io/openebs/node-disk-manager-amd64"
imageTag: "v0.4.1"
sparse:
path: "/var/openebs/sparse"
size: "10737418240"
count: "1"
filters:
excludeVendors: "CLOUDBYT,OpenEBS"
includePaths: ""
excludePaths: "loop,fd0,sr0,/dev/ram,/dev/dm-,/dev/md"
probes:
enableSeachest: false
nodeSelector: {}
healthCheck:
initialDelaySeconds: 30
periodSeconds: 60
ndmOperator:
image: "quay.io/openebs/node-disk-operator-amd64"
imageTag: "v0.4.1"
replicas: 1
upgradeStrategy: Recreate
nodeSelector: {}
tolerations: []
readinessCheck:
initialDelaySeconds: 4
periodSeconds: 10
failureThreshold: 1
cleanupImage: "quay.io/openebs/linux-utils"
cleanupImageTag: "3.9"
webhook:
image: "quay.io/openebs/admission-server"
imageTag: "1.1.0"
generateTLS: true
replicas: 1
nodeSelector: {}
tolerations: []
affinity: {}
jiva:
image: "quay.io/openebs/jiva"
imageTag: "1.1.0"
replicas: 3
cstor:
pool:
image: "quay.io/openebs/cstor-pool"
imageTag: "1.1.0"
poolMgmt:
image: "quay.io/openebs/cstor-pool-mgmt"
imageTag: "1.1.0"
target:
image: "quay.io/openebs/cstor-istgt"
imageTag: "1.1.0"
volumeMgmt:
image: "quay.io/openebs/cstor-volume-mgmt"
imageTag: "1.1.0"
policies:
monitoring:
enabled: true
image: "quay.io/openebs/m-exporter"
imageTag: "1.1.0"
analytics:
enabled: true
# Specify in hours the duration after which a ping event needs to be sent.
pingInterval: "24h"
apiVersion: v1
appVersion: v1.1.1
description: |
An AIOps platform for deploying, scaling and managing containerized applications in Kubernetes environments.
home: https://redskyops.dev/
icon: file://../icon.png
name: redskyops
version: 0.1.1
maintainers:
- name: redskyops
email: info@redskyops.dev
# Red Sky Ops
## Chart Repository
The Red Sky Ops chart repository can be configured in Helm as follows:
```sh
helm repo add redsky https://redskyops.dev/charts/
helm repo update
```
## Installing the Chart
The Red Sky Ops manager can be installed using the Helm command:
```sh
helm install --namespace redsky-system --name redsky redsky/redskyops
```
The recommended namespace (`redsky-system`) and release name (`redsky`) are consistent with an install performed using the `redskyctl` tool (see the [install guide](https://redskyops.dev/docs/install/) for more information).
## Configuration
The following configuration options are available:
| Parameter | Description |
| -------------------- | ------------------------------------------------ |
| `redskyImage` | Docker image name |
| `redskyTag` | Docker image tag |
| `address` | Fully qualified URL of the remote server |
| `oauth2ClientID` | OAuth2 client identifier |
| `oauth2ClientSecret` | OAuth2 client secret |
| `oauth2TokenURL` | Override default OAuth2 token URL |
| `rbac.create` | Specify whether RBAC resources should be created |
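For example, these parameters can be supplied through a custom values file and passed to `helm install` with `-f` (the file name and values below are placeholders):

```yaml
# my-values.yaml (illustrative)
redskyImage: "gcr.io/redskyops/k8s-experiment"
redskyTag: "1.1.1"
address: "https://api.example.com/"
oauth2ClientID: "example-client-id"
oauth2ClientSecret: "example-client-secret"
rbac:
  create: true
```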
# Red Sky Ops
Red Sky Ops is an AIOps platform for deploying and optimizing containerized applications in Kubernetes environments. It makes it easy for DevOps teams to manage the millions of possible combinations of application variables they're confronted with when deploying applications. With Red Sky Ops, they can identify and implement the best settings for each application in any cloud environment. Red Sky Ops allows teams to streamline their application tuning process and have a centralized, organized view of their tuning results.
**Installation Note:**
It is recommended that you launch using the name **"redsky"**.
labels:
io.cattle.role: cluster # options are cluster/project
questions:
- variable: defaultImage
default: true
description: "Use default Docker image"
label: Use Default Image
type: boolean
show_subquestion_if: false
group: "Container Images"
subquestions:
- variable: redskyImage
default: "gcr.io/redskyops/k8s-experiment"
description: "Docker image name"
type: string
label: Image Name
- variable: redskyTag
default: "1.1.1"
description: "Docker image tag"
type: string
label: Image Tag
- variable: remoteServerEnabled
default: false
description: "Use a remote Red Sky Ops server"
label: Use Remote Server
type: boolean
show_subquestion_if: true
group: "Remote Server"
subquestions:
- variable: address
default: ""
description: "Fully qualified URL of the remote server"
type: string
label: Address
- variable: oauth2ClientID
default: ""
description: "OAuth2 client identifier"
type: string
label: Client ID
- variable: oauth2ClientSecret
default: ""
description: "OAuth2 client secret"
type: string
label: Client Secret
Red Sky Ops is ready to run within your cluster.
You may want to install the Red Sky Ops Tool (redskyctl) locally; see the install guide for more information:
https://redskyops.dev/docs/install/
apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/port: "8443"
prometheus.io/scheme: https
prometheus.io/scrape: "true"
labels:
app.kubernetes.io/name: redskyops
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
control-plane: controller-manager
name: "{{ .Release.Name }}-controller-manager-metrics-service"
spec:
ports:
- name: https
port: 8443
targetPort: https
selector:
app.kubernetes.io/name: redskyops
control-plane: controller-manager
---
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
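    # Ties the Deployment to the rendered client configuration: this checksum
    # changes whenever secret.yaml renders differently.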
checksum/config: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
labels:
app.kubernetes.io/name: redskyops
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
control-plane: controller-manager
name: "{{ .Release.Name }}-controller-manager"
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: redskyops
control-plane: controller-manager
template:
metadata:
labels:
app.kubernetes.io/name: redskyops
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
control-plane: controller-manager
spec:
containers:
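      # kube-rbac-proxy sidecar: terminates TLS on 8443 and authorizes callers
      # via TokenReview/SubjectAccessReview before proxying to the manager's
      # plaintext metrics endpoint on 127.0.0.1:8080.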
- args:
- --secure-listen-address=0.0.0.0:8443
- --upstream=http://127.0.0.1:8080/
- --logtostderr=true
- --v=10
image: gcr.io/kubebuilder/kube-rbac-proxy:v0.4.0
name: kube-rbac-proxy
ports:
- containerPort: 8443
name: https
- args:
- --metrics-addr=127.0.0.1:8080
command:
- /manager
image: {{ .Values.redskyImage }}:{{ .Values.redskyTag }}
name: manager
resources:
limits:
cpu: 100m
memory: 30Mi
requests:
cpu: 100m
memory: 20Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsGroup: 65532
runAsNonRoot: true
runAsUser: 65532
volumeMounts:
- mountPath: /home/nonroot
name: client-config
readOnly: true
terminationGracePeriodSeconds: 10
volumes:
- name: client-config
secret:
items:
- key: client.yaml
path: .redsky
secretName: client-config
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/name: redskyops
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
name: "{{ .Release.Name }}-manager-role"
rules:
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- list
- patch
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- apiGroups:
- ""
resources:
- services
verbs:
- list
- apiGroups:
- apps
- extensions
resources:
- deployments
- statefulsets
verbs:
- get
- list
- patch
- apiGroups:
- batch
- extensions
resources:
- jobs
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- batch
- extensions
resources:
- jobs/status
verbs:
- get
- patch
- update
- apiGroups:
- redskyops.dev
resources:
- experiments
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- redskyops.dev
resources:
- experiments/status
verbs:
- get
- patch
- update
- apiGroups:
- redskyops.dev
resources:
- trials
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- redskyops.dev
resources:
- trials/status
verbs:
- get
- patch
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/name: redskyops
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
name: "{{ .Release.Name }}-proxy-role"
rules:
- apiGroups:
- authentication.k8s.io
resources:
- tokenreviews
verbs:
- create
- apiGroups:
- authorization.k8s.io
resources:
- subjectaccessreviews
verbs:
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/name: redskyops
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
name: "{{ .Release.Name }}-manager-rolebinding"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: "{{ .Release.Name }}-manager-role"
subjects:
- kind: ServiceAccount
name: default
namespace: {{ .Release.Namespace | quote }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/name: redskyops
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
name: "{{ .Release.Name }}-proxy-rolebinding"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: "{{ .Release.Name }}-proxy-role"
subjects:
- kind: ServiceAccount
name: default
namespace: {{ .Release.Namespace | quote }}
{{- end -}}
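{{/*
Render the client configuration for the manager. The output is base64-encoded
into the client-config Secret below and mounted by the controller-manager
Deployment at /home/nonroot/.redsky.
*/}}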
{{- define "client.config" }}
address: {{ .Values.address | quote }}
oauth2:
client_id: {{ .Values.oauth2ClientID | quote }}
client_secret: {{ .Values.oauth2ClientSecret | quote }}
token_url: {{ .Values.oauth2TokenURL | quote }}
{{- end }}
apiVersion: v1
kind: Secret
metadata:
name: client-config
labels:
app.kubernetes.io/name: redskyops
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
type: Opaque
data:
client.yaml: {{ include "client.config" . | b64enc }}
# Override the Red Sky manager image name and tag
redskyImage: "gcr.io/redskyops/k8s-experiment"
redskyTag: "1.1.1"
# Configure the Red Sky server address
address: ""
# Red Sky server client identifier and secret
oauth2ClientID: ""
oauth2ClientSecret: ""
# Override the Red Sky server token URL, this is not typically necessary
oauth2TokenURL: ""
rbac:
# Specifies whether RBAC resources should be created
create: true
apiVersion: v1
appVersion: "1.4.0"
description: Cloud Native storage for containers
name: storageos-operator
version: 0.2.13
tillerVersion: ">=2.10.0"
keywords:
- storage
- block-storage
- volume
- operator
home: https://storageos.com
icon: https://storageos.com/wp-content/themes/storageOS/images/logo.svg
sources:
- https://github.com/storageos
maintainers:
- name: croomes
email: simon.croome@storageos.com
- name: darkowlzz
email: sunny.gogoi@storageos.com
MIT License
Copyright (c) 2019 StorageOS
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# StorageOS Operator Helm Chart
> **Note**: This is the recommended chart to use for installing StorageOS. It
installs the StorageOS Operator, and then installs a StorageOS cluster with a
minimal configuration. Other Helm charts
([storageoscluster-operator](https://github.com/storageos/charts/tree/master/stable/storageoscluster-operator)
and
[storageos](https://github.com/storageos/charts/tree/master/stable/storageos))
will be deprecated.
[StorageOS](https://storageos.com) is a software-based storage platform
designed for cloud-native applications. By deploying StorageOS on your
Kubernetes cluster, local storage from cluster nodes is aggregated into a
distributed pool, and persistent volumes created from it using the native
Kubernetes volume driver are available instantly to pods wherever they move in
the cluster.
Features such as replication, encryption and caching help protect data and
maximise performance.
This chart installs a StorageOS Cluster Operator which helps deploy and
configure a StorageOS cluster on kubernetes.
## Prerequisites
- Helm 2.10+
- Kubernetes 1.9+
- Privileged mode containers (enabled by default)
- Kubernetes 1.9 only:
- Feature gate: MountPropagation=true. This can be done by appending
`--feature-gates MountPropagation=true` to the kube-apiserver and kubelet
services.
Refer to the [StorageOS prerequisites
docs](https://docs.storageos.com/docs/prerequisites/overview) for more
information.
## Installing the chart
```console
# Add storageos charts repo.
$ helm repo add storageos https://charts.storageos.com
# Install the chart in a namespace.
$ helm install storageos/storageos-operator --namespace storageos-operator
```
This will install the StorageOSCluster operator in the `storageos-operator`
namespace and deploy StorageOS with a minimal configuration.
> **Tip**: List all releases using `helm list`
## Creating a StorageOS cluster manually
The Helm chart supports a subset of StorageOSCluster custom resource parameters.
For advanced configurations, you may wish to create the cluster resource
manually and only use the Helm chart to install the Operator.
To disable auto-provisioning the cluster with the Helm chart, set
`cluster.create` to false:
```yaml
cluster:
...
create: false
```
Create a secret to store storageos cluster secrets:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: "storageos-api"
namespace: "default"
labels:
app: "storageos"
type: "kubernetes.io/storageos"
data:
# echo -n '<secret>' | base64
apiAddress: c3RvcmFnZW9zOjU3MDU=
apiUsername: c3RvcmFnZW9z
apiPassword: c3RvcmFnZW9z
```
Create a `StorageOSCluster` custom resource and reference the above secret in
the `secretRefName` and `secretRefNamespace` fields.
```yaml
apiVersion: "storageos.com/v1"
kind: "StorageOSCluster"
metadata:
name: "example-storageos"
namespace: "default"
spec:
secretRefName: "storageos-api"
secretRefNamespace: "default"
```
Once the `StorageOSCluster` configuration is applied, the StorageOSCluster
operator will create a StorageOS cluster in the `storageos` namespace by
default.
Most installations will want to use the default [CSI](https://kubernetes-csi.github.io/docs/)
driver. To use the [Native Driver](https://kubernetes.io/docs/concepts/storage/volumes/#storageos)
instead, disable CSI:
```yaml
spec:
...
csi:
enable: false
...
```
in the above `StorageOSCluster` resource config.
Learn more about advanced configuration options
[here](https://github.com/storageos/cluster-operator/blob/master/README.md#storageoscluster-resource-configuration).
To check cluster status, run:
```bash
$ kubectl get storageoscluster
NAME READY STATUS AGE
example-storageos 3/3 Running 4m
```
All the events related to this cluster are logged as part of the cluster object
and can be viewed by describing the object.
```bash
$ kubectl describe storageoscluster example-storageos
Name: example-storageos
Namespace: default
Labels: <none>
...
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ChangedStatus 1m (x2 over 1m) storageos-operator 0/3 StorageOS nodes are functional
Normal ChangedStatus 35s storageos-operator 3/3 StorageOS nodes are functional. Cluster healthy
```
## Configuration
The following tables lists the configurable parameters of the StorageOSCluster
Operator chart and their default values.
Parameter | Description | Default
--------- | ----------- | -------
`operator.image.repository` | StorageOS Operator container image repository | `storageos/cluster-operator`
`operator.image.tag` | StorageOS Operator container image tag | `1.4.0`
`operator.image.pullPolicy` | StorageOS Operator container image pull policy | `IfNotPresent`
`podSecurityPolicy.enabled` | If true, create & use PodSecurityPolicy resources | `false`
`podSecurityPolicy.annotations` | Specify pod annotations in the pod security policy | `{}`
`cluster.create` | If true, auto-create the StorageOS cluster | `true`
`cluster.name` | Name of the storageos deployment | `storageos`
`cluster.namespace` | Namespace to install the StorageOS cluster into | `kube-system`
`cluster.secretRefName` | Name of the secret containing StorageOS API credentials | `storageos-api`
`cluster.admin.username` | Username to authenticate to the StorageOS API with | `storageos`
`cluster.admin.password` | Password to authenticate to the StorageOS API with |
`cluster.sharedDir` | The path shared into the kubelet container when running kubelet in a container |
`cluster.kvBackend.embedded` | Use StorageOS embedded etcd | `true`
`cluster.kvBackend.address` | List of etcd targets, in the form ip[:port], separated by commas |
`cluster.kvBackend.backend` | Key-Value store backend name | `etcd`
`cluster.kvBackend.tlsSecretName` | Name of the secret containing kv backend tls cert |
`cluster.kvBackend.tlsSecretNamespace` | Namespace of the secret containing kv backend tls cert |
`cluster.nodeSelectorTerm.key` | Key of the node selector term used for pod placement |
`cluster.nodeSelectorTerm.value` | Value of the node selector term used for pod placement |
`cluster.toleration.key` | Key of the pod toleration parameter |
`cluster.toleration.value` | Value of the pod toleration parameter |
`cluster.disableTelemetry` | If true, no telemetry data will be collected from the cluster | `false`
`cluster.images.node.repository` | StorageOS Node container image repository | `storageos/node`
`cluster.images.node.tag` | StorageOS Node container image tag | `1.4.0`
`cluster.csi.enable` | If true, CSI driver is enabled | `true`
`cluster.csi.deploymentStrategy` | Whether CSI helpers should be deployed as a `deployment` or `statefulset` | `deployment`
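For example, a values file for a production-style install that points the
cluster at an external etcd might look like the following (the etcd addresses
are placeholders):

```yaml
cluster:
  create: true
  namespace: kube-system
  disableTelemetry: true
  kvBackend:
    embedded: false
    address: "10.0.0.1:2379,10.0.0.2:2379,10.0.0.3:2379"
    backend: etcd
```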
## Deleting a StorageOS Cluster
Deleting the `StorageOSCluster` custom resource object would delete the
storageos cluster and all the associated resources.
In the above example,
```bash
kubectl delete storageoscluster example-storageos
```
would delete the custom resource and the cluster.
## Uninstalling the Chart
To uninstall/delete the storageos cluster operator deployment:
```bash
helm delete --purge <release-name>
```
Learn more about configuring the StorageOS Operator on
[GitHub](https://github.com/storageos/cluster-operator).
# StorageOS Operator
[StorageOS](https://storageos.com) is a cloud native, software-defined storage
platform that transforms commodity server or cloud based disk capacity into
enterprise-class persistent storage for containers. StorageOS is ideal for
deploying databases, message buses, and other mission-critical stateful
solutions, where rapid recovery and fault tolerance are essential.
The StorageOS Operator installs and manages StorageOS within a cluster.
Cluster nodes may contribute local or attached disk-based storage into a
distributed pool, which is then available to all cluster members via a
global namespace.
By default, a minimal configuration of StorageOS is installed. To set advanced
configurations, disable the default installation of StorageOS and create a
custom StorageOSCluster resource
([documentation](https://docs.storageos.com/docs/reference/cluster-operator/examples)).
> **Note**: The StorageOS Operator must be installed in the System Project
with Cluster Role.
podSecurityPolicy:
enabled: true
cluster:
  # Disable cluster creation in CI; only the operator should be installed.
create: false
categories:
- storage
labels:
io.rancher.certified: partner
questions:
- variable: k8sDistro
default: rancher
description: "Kubernetes Distribution"
show_if: false
# Operator image configuration.
- variable: defaultImage
default: true
description: "Use default Docker images"
label: Use Default Images
type: boolean
show_subquestion_if: false
group: "Container Images"
subquestions:
- variable: operator.image.pullPolicy
default: IfNotPresent
description: "Operator Image pull policy"
type: enum
label: Operator Image pull policy
options:
- IfNotPresent
- Always
- Never
- variable: operator.image.repository
default: "storageos/cluster-operator"
description: "StorageOS operator image name"
type: string
label: StorageOS Operator Image Name
- variable: operator.image.tag
default: "1.4.0"
description: "StorageOS Operator image tag"
type: string
label: StorageOS Operator Image Tag
# Default minimal cluster configuration.
- variable: cluster.create
default: true
type: boolean
description: "Install StorageOS cluster with minimal configurations"
label: "Install StorageOS cluster"
show_subquestion_if: true
group: "StorageOS Cluster"
subquestions:
# CSI configuration.
- variable: cluster.csi.enable
default: true
description: "Use Container Storage Interface (CSI) driver"
label: Use CSI Driver
type: boolean
# Cluster metadata.
- variable: cluster.name
default: "storageos"
description: "Name of the StorageOS cluster deployment"
type: string
label: Name
- variable: cluster.namespace
default: "kube-system"
description: "Namespace of the StorageOS cluster deployment. `kube-system` recommended to avoid pre-emption when node is under load."
type: string
label: Namespace
# Node container image.
- variable: cluster.images.node.repository
default: "storageos/node"
description: "StorageOS node container image name"
type: string
label: StorageOS Node Container Image Name
- variable: cluster.images.node.tag
default: "1.4.0"
description: "StorageOS Node container image tag"
type: string
label: StorageOS Node Container Image Tag
# Credentials.
- variable: cluster.admin.username
default: "admin"
description: "Username of the StorageOS administrator account"
type: string
label: Username
- variable: cluster.admin.password
default: ""
description: "Password of the StorageOS administrator account. If empty, a random password will be generated."
type: password
label: Password
# Telemetry.
- variable: cluster.disableTelemetry
default: false
type: boolean
description: "Disable telemetry data collection. See https://docs.storageos.com/docs/reference/telemetry for more information."
label: Disable Telemetry
# KV store backend.
- variable: cluster.kvBackend.embedded
default: true
type: boolean
description: "Use embedded KV store for testing. Select false to use external etcd for production deployments."
label: "Use embedded KV store"
- variable: cluster.kvBackend.address
default: "10.0.0.1:2379"
description: "List of etcd targets, in the form ip[:port], separated by commas. Prefer multiple direct endpoints over a single load-balanced endpoint. Only used if not using embedded KV store."
type: string
label: External etcd address(es)
show_if: "cluster.kvBackend.embedded=false"
- variable: cluster.kvBackend.tls
default: false
type: boolean
description: "Enable etcd TLS"
label: "TLS should be configured for external etcd to protect configuration data (Optional)."
show_if: "cluster.kvBackend.embedded=false"
- variable: cluster.kvBackend.tlsSecretName
required: false
default: ""
description: "Name of the secret that contains the etcd TLS certs. This secret is typically shared with etcd."
type: string
label: External etcd TLS secret name
show_if: "cluster.kvBackend.tls=true"
- variable: cluster.kvBackend.tlsSecretNamespace
required: false
default: ""
description: "Namespace of the secret that contains the etcd TLS certs. This secret is typically shared with etcd."
type: string
label: External etcd TLS secret namespace
show_if: "cluster.kvBackend.tls=true"
# Node Selector Term.
- variable: cluster.nodeSelectorTerm.key
required: false
default: ""
description: "Key of the node selector term match expression used to select the nodes to install StorageOS on, e.g. `node-role.kubernetes.io/worker`"
type: string
label: Node selector term key
- variable: cluster.nodeSelectorTerm.value
required: false
default: ""
description: "Value of the node selector term match expression used to select the nodes to install StorageOS on."
type: string
label: Node selector term value
# Pod tolerations.
- variable: cluster.toleration.key
required: false
default: ""
description: "Key of pod toleration with operator 'Equal' and effect 'NoSchedule'"
type: string
label: Pod toleration key
- variable: cluster.toleration.value
required: false
default: ""
description: "Value of pod toleration with operator 'Equal' and effect 'NoSchedule'"
type: string
label: Pod toleration value
# Shared Directory
- variable: cluster.sharedDir
required: false
default: "/var/lib/kubelet/plugins/kubernetes.io~storageos"
description: "Shared Directory should be set if running kubelet in a container. This should be the path shared into to kubelet container, typically: '/var/lib/kubelet/plugins/kubernetes.io~storageos'. If not set, defaults will be used."
type: string
label: Shared Directory
StorageOS Operator deployed.
If you disabled automatic cluster creation, you can deploy a StorageOS cluster
by creating a custom StorageOSCluster resource:
1. Create a secret containing StorageOS cluster credentials. This secret
contains the API username and password that will be used to authenticate to the
StorageOS cluster. Base64 encode the username and password that you want to use
for your StorageOS cluster.
apiVersion: v1
kind: Secret
metadata:
name: storageos-api
namespace: default
labels:
app: storageos
type: kubernetes.io/storageos
data:
# echo -n '<secret>' | base64
apiUsername: c3RvcmFnZW9z
apiPassword: c3RvcmFnZW9z
2. Create a StorageOS custom resource that references the secret created
above (storageos-api in the above example). When the resource is created, the
cluster will be deployed.
apiVersion: storageos.com/v1
kind: StorageOSCluster
metadata:
name: example-storageos
namespace: default
spec:
secretRefName: storageos-api
secretRefNamespace: default
csi:
enable: true
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "storageos.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "storageos.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "storageos.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "storageos.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "storageos.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{- if .Values.cluster.create }}
# ClusterRole, ClusterRoleBinding and ServiceAccounts have hook-failed in
# hook-delete-policy to make it easy to rerun the whole setup even after a
# failure; otherwise the rerun fails with an existing-resource error.
# Hook delete policy before-hook-creation ensures any other leftover resources
# from a previous run get deleted when run again.
# The Job resources will not be deleted, to help investigate failures.
# Since the resources created by the operator are not managed by the chart,
# each of them must be individually deleted in separate jobs.
apiVersion: v1
kind: ServiceAccount
metadata:
name: storageos-cleanup
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": "hook-succeeded, hook-failed, before-hook-creation"
"helm.sh/hook-weight": "1"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: storageos:cleanup
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": "hook-succeeded, hook-failed, before-hook-creation"
"helm.sh/hook-weight": "1"
rules:
# Using apiGroup "apps" for daemonsets fails and the permission error indicates
# that it's in group "extensions". Not sure if it's a Job specific behavior,
# because the daemonsets deployed by the operator use "apps" apiGroup.
- apiGroups:
- extensions
resources:
- daemonsets
- deployments
verbs:
- delete
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- delete
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
- rolebindings
- clusterroles
- clusterrolebindings
verbs:
- delete
- apiGroups:
- storage.k8s.io
resources:
- storageclasses
verbs:
- delete
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- delete
- apiGroups:
- ""
resources:
- serviceaccounts
- secrets
- services
- configmaps
verbs:
- delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: storageos:cleanup
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": "hook-succeeded, hook-failed, before-hook-creation"
"helm.sh/hook-weight": "2"
subjects:
- name: storageos-cleanup
kind: ServiceAccount
namespace: {{ .Release.Namespace }}
roleRef:
name: storageos:cleanup
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
---
# Iterate through the Values.cleanup list and create jobs to delete all the
# unmanaged resources of the cluster.
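# Based on how the fields are referenced below, each entry in Values.cleanup
# is expected to provide a name, a namespace for the Job, and a command list
# (the extra kubectl arguments naming the resource type to delete).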
{{- range .Values.cleanup }}
apiVersion: batch/v1
kind: Job
metadata:
name: "storageos-{{ .name }}-cleanup"
namespace: {{ .namespace }}
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": "hook-succeeded, before-hook-creation"
"helm.sh/hook-weight": "3"
spec:
template:
spec:
serviceAccountName: storageos-cleanup
containers:
- name: "storageos-{{ .name }}-cleanup"
image: bitnami/kubectl:1.14.1
command:
- kubectl
- -n
- {{ $.Values.cluster.namespace }}
- delete
{{- range .command }}
- {{ . | quote }}
{{- end }}
- --ignore-not-found=true
restartPolicy: Never
backoffLimit: 4
---
{{- end }}
{{- end }}
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: jobs.storageos.com
annotations:
"helm.sh/hook": crd-install
spec:
group: storageos.com
names:
kind: Job
listKind: JobList
plural: jobs
singular: job
scope: Namespaced
validation:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
properties:
args:
description: Args is an array of strings passed as an argument to the
job container.
items:
type: string
type: array
completionWord:
description: CompletionWord is the word that's looked for in the pod
logs to find out if a DaemonSet Pod has completed its task.
type: string
hostPath:
description: HostPath is the path in the host that's mounted into a
job container.
type: string
image:
description: Image is the container image to run as the job.
type: string
labelSelector:
description: LabelSelector is the label selector for the job Pods.
type: string
mountPath:
description: MountPath is the path in the job container where a volume
is mounted.
type: string
nodeSelectorTerms:
              description: NodeSelectorTerms controls the placement of the job pods
                using node affinity requiredDuringSchedulingIgnoredDuringExecution.
items:
type: object
type: array
tolerations:
              description: Tolerations controls the placement of storageos pods
                using pod tolerations.
items:
type: object
type: array
required:
- image
- args
- mountPath
- hostPath
- completionWord
type: object
status:
properties:
completed:
description: Completed indicates the complete status of job.
type: boolean
type: object
version: v1
versions:
- name: v1
served: true
storage: true
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: nfsservers.storageos.com
annotations:
"helm.sh/hook": crd-install
spec:
additionalPrinterColumns:
- JSONPath: .status.phase
description: Status of the NFS server.
name: status
type: string
- JSONPath: .spec.resources.requests.storage
description: Capacity of the NFS server.
name: capacity
type: string
- JSONPath: .status.remoteTarget
description: Remote target address of the NFS server.
name: target
type: string
- JSONPath: .status.accessModes
description: Access modes supported by the NFS server.
name: access modes
type: string
- JSONPath: .spec.storageClassName
description: StorageClass used for creating the NFS volume.
name: storageclass
type: string
- JSONPath: .metadata.creationTimestamp
name: age
type: date
group: storageos.com
names:
kind: NFSServer
listKind: NFSServerList
plural: nfsservers
shortNames:
- nfsserver
singular: nfsserver
scope: Namespaced
validation:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
properties:
annotations:
additionalProperties:
type: string
description: The annotations-related configuration to add/set on each
Pod related object.
type: object
export:
description: The parameters to configure the NFS export
properties:
name:
description: Name of the export
type: string
persistentVolumeClaim:
description: PVC from which the NFS daemon gets storage for sharing
type: object
server:
description: The NFS server configuration
properties:
accessMode:
                  description: Reading and writing permissions on the export. Valid
                    values are "ReadOnly", "ReadWrite" and "none".
type: string
squash:
                  description: This prevents root users connected remotely
                    from having root privileges. Valid values are "none", "rootid",
                    "root", and "all".
type: string
type: object
type: object
mountOptions:
description: PV mount options. Not validated - mount of the PVs will
simply fail if one is invalid.
items:
type: string
type: array
nfsContainer:
description: NFSContainer is the container image to use for the NFS
server.
type: string
persistentVolumeReclaimPolicy:
description: Reclamation policy for the persistent volume shared to
the user's pod.
type: string
resources:
description: Resources represents the minimum resources required
type: object
storageClassName:
description: StorageClassName is the name of the StorageClass used by
the NFS volume.
type: string
tolerations:
              description: Tolerations controls the placement of NFS server pods
                using pod tolerations.
items:
type: object
type: array
type: object
status:
properties:
accessModes:
description: AccessModes is the access modes supported by the NFS server.
type: string
phase:
description: 'Phase is a simple, high-level summary of where the NFS
Server is in its lifecycle. Phase will be set to Ready when the NFS
Server is ready for use. It is intended to be similar to the PodStatus
Phase described at: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/#podstatus-v1-core There
are five possible phase values: - Pending: The NFS Server has been
accepted by the Kubernetes system, but one or more of the components
has not been created. This includes time before being scheduled
as well as time spent downloading images over the network, which
could take a while. - Running: The NFS Server has been bound to
a node, and all of the dependencies have been created. - Succeeded:
All NFS Server dependencies have terminated in success, and will
not be restarted. - Failed: All NFS Server dependencies in the pod
have terminated, and at least one container has terminated in
failure. The container either exited with non-zero status or was
terminated by the system. - Unknown: For some reason the state of
the NFS Server could not be obtained, typically due to an error
in communicating with the host of the pod.'
type: string
remoteTarget:
description: RemoteTarget is the connection string that clients can
use to access the shared filesystem.
type: string
type: object
version: v1
versions:
- name: v1
served: true
storage: true
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "storageos.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: 1
selector:
matchLabels:
app: {{ template "storageos.name" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "storageos.name" . }}
release: {{ .Release.Name }}
spec:
serviceAccountName: {{ template "storageos.serviceAccountName" . }}
containers:
- name: storageos-operator
image: "{{ .Values.operator.image.repository }}:{{ .Values.operator.image.tag }}"
imagePullPolicy: {{ .Values.operator.image.pullPolicy }}
ports:
- containerPort: 60000
name: metrics
command:
- cluster-operator
env:
- name: WATCH_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: OPERATOR_NAME
value: "cluster-operator"
{{- if .Values.podSecurityPolicy.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "storageos.fullname" . }}-psp
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
{{- if .Values.podSecurityPolicy.annotations }}
{{ toYaml .Values.podSecurityPolicy.annotations | indent 4 }}
{{- end }}
spec:
volumes:
- '*'
runAsUser:
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
fsGroup:
rule: 'RunAsAny'
{{- end }}
# Role for storageos operator
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: storageos:operator
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups:
- storageos.com
resources:
- storageosclusters
- storageosupgrades
- jobs
- nfsservers
verbs:
- "*"
- apiGroups:
- apps
resources:
- statefulsets
- daemonsets
- deployments
- replicasets
verbs:
- "*"
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- get
- update
- create
- patch
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- watch
- get
- update
- patch
- delete
- create
- apiGroups:
- ""
resources:
- events
- namespaces
- serviceaccounts
- secrets
- services
- persistentvolumeclaims
- persistentvolumes
- configmaps
- replicationcontrollers
- pods/binding
- endpoints
verbs:
- create
- patch
- get
- list
- delete
- watch
- update
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
- rolebindings
- clusterroles
- clusterrolebindings
verbs:
- create
- delete
- apiGroups:
- storage.k8s.io
resources:
- storageclasses
- volumeattachments
- csinodeinfos
verbs:
- create
- delete
- watch
- list
- get
- update
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- create
- delete
- apiGroups:
- csi.storage.k8s.io
resources:
- csidrivers
verbs:
- create
- delete
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- list
- watch
- apiGroups:
- security.openshift.io
resourceNames:
- privileged
resources:
- securitycontextconstraints
verbs:
- create
- delete
- update
- get
- use
---
# Bind operator service account to storageos-operator role
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: storageos:operator
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
subjects:
- kind: ServiceAccount
name: {{ template "storageos.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: storageos:operator
apiGroup: rbac.authorization.k8s.io
{{- if .Values.podSecurityPolicy.enabled }}
---
# ClusterRole for using pod security policy.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: storageos:psp-user
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
rules:
- apiGroups: ["extensions"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames:
- {{ template "storageos.fullname" . }}-psp
---
# Bind pod security policy cluster role to the operator service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: storageos:psp-user
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: storageos:psp-user
subjects:
- kind: ServiceAccount
name: {{ template "storageos.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}
{{- if .Values.cluster.create }}
apiVersion: v1
kind: Secret
metadata:
name: {{ .Values.cluster.secretRefName }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
type: "kubernetes.io/storageos"
data:
apiUsername: {{ default "" .Values.cluster.admin.username | b64enc | quote }}
{{ if .Values.cluster.admin.password }}
apiPassword: {{ default "" .Values.cluster.admin.password | b64enc | quote }}
{{ else }}
apiPassword: {{ randAlphaNum 10 | b64enc | quote }}
{{ end }}
# Add base64 encoded TLS cert and key below if ingress.tls is set to true.
# tls.crt:
# tls.key:
# Add base64 encoded creds below for CSI credentials.
# csiProvisionUsername:
# csiProvisionPassword:
# csiControllerPublishUsername:
# csiControllerPublishPassword:
# csiNodePublishUsername:
# csiNodePublishPassword:
{{- end }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "storageos.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- if .Values.cluster.create }}
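{{/* Minimal StorageOSCluster custom resource. The operator watches for this
resource and deploys a StorageOS cluster into .Values.cluster.namespace. */}}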
apiVersion: storageos.com/v1
kind: StorageOSCluster
metadata:
name: {{ .Values.cluster.name }}
namespace: {{ .Release.Namespace }}
spec:
namespace: {{ .Values.cluster.namespace }}
secretRefName: {{ .Values.cluster.secretRefName }}
secretRefNamespace: {{ .Release.Namespace }}
disableTelemetry: {{ .Values.cluster.disableTelemetry }}
{{- if .Values.k8sDistro }}
k8sDistro: {{ .Values.k8sDistro }}
{{- end }}
{{- if .Values.cluster.images.node.repository }}
images:
nodeContainer: "{{ .Values.cluster.images.node.repository }}:{{ .Values.cluster.images.node.tag }}"
{{- end }}
csi:
enable: {{ .Values.cluster.csi.enable }}
deploymentStrategy: {{ .Values.cluster.csi.deploymentStrategy }}
{{- if .Values.cluster.sharedDir }}
sharedDir: {{ .Values.cluster.sharedDir }}
{{- end }}
{{- if eq .Values.cluster.kvBackend.embedded false }}
kvBackend:
address: {{ .Values.cluster.kvBackend.address }}
backend: {{ .Values.cluster.kvBackend.backend }}
{{- end }}
{{- if .Values.cluster.kvBackend.tlsSecretName }}
tlsEtcdSecretRefName: {{ .Values.cluster.kvBackend.tlsSecretName }}
{{- end }}
{{- if .Values.cluster.kvBackend.tlsSecretNamespace }}
tlsEtcdSecretRefNamespace: {{ .Values.cluster.kvBackend.tlsSecretNamespace }}
{{- end }}
{{- if .Values.cluster.nodeSelectorTerm.key }}
nodeSelectorTerms:
- matchExpressions:
- key: {{ .Values.cluster.nodeSelectorTerm.key }}
operator: In
values:
- "{{ .Values.cluster.nodeSelectorTerm.value }}"
{{- end }}
{{- if .Values.cluster.toleration.key }}
tolerations:
- key: {{ .Values.cluster.toleration.key }}
operator: "Equal"
value: {{ .Values.cluster.toleration.value }}
effect: "NoSchedule"
{{- end }}
{{- end }}
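# The optional blocks above (external kvBackend, node selector term,
# toleration) map one-to-one onto values. A hedged install example run from
# the chart directory that points the cluster at an external etcd; the etcd
# address and node label are illustrative:
#
#   helm install --name storageos-operator --namespace storageos-operator . \
#     --set cluster.kvBackend.embedded=false \
#     --set cluster.kvBackend.address=etcd-client.etcd.svc:2379 \
#     --set cluster.kvBackend.backend=etcd \
#     --set cluster.nodeSelectorTerm.key=node-role.kubernetes.io/worker \
#     --set cluster.nodeSelectorTerm.value=true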
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: storageosclusters.storageos.com
annotations:
"helm.sh/hook": crd-install
spec:
additionalPrinterColumns:
- JSONPath: .status.ready
description: Ready status of the storageos nodes.
name: ready
type: string
- JSONPath: .status.phase
description: Status of the whole cluster.
name: status
type: string
- JSONPath: .metadata.creationTimestamp
name: age
type: date
group: storageos.com
names:
kind: StorageOSCluster
listKind: StorageOSClusterList
plural: storageosclusters
shortNames:
- stos
singular: storageoscluster
scope: Namespaced
validation:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
properties:
csi:
description: CSI defines the configurations for CSI.
properties:
deploymentStrategy:
type: string
deviceDir:
type: string
driverRegisterationMode:
type: string
driverRequiresAttachment:
type: string
enable:
type: boolean
enableControllerPublishCreds:
type: boolean
enableNodePublishCreds:
type: boolean
enableProvisionCreds:
type: boolean
endpoint:
type: string
kubeletDir:
type: string
kubeletRegistrationPath:
type: string
pluginDir:
type: string
registrarSocketDir:
type: string
registrationDir:
type: string
version:
type: string
type: object
debug:
description: Debug is to set debug mode of the cluster.
type: boolean
disableFencing:
description: 'Disable Pod Fencing. With StatefulSets, Pods are only
re-scheduled if the Pod has been marked as killed. In practice this
means that failover of a StatefulSet pod is a manual operation. By
enabling Pod Fencing and setting the `storageos.com/fenced=true` label
on a Pod, StorageOS will enable automated Pod failover (by killing
the application Pod on the failed node) if the following conditions
exist: - Pod fencing has not been explicitly disabled. - StorageOS
has determined that the node the Pod is running on is offline. StorageOS
uses Gossip and TCP checks and will retry for 30 seconds. At this
point all volumes on the failed node are marked offline (irrespective
of whether fencing is enabled) and volume failover starts. - The
Pod has the label `storageos.com/fenced=true` set. - The Pod has at
least one StorageOS volume attached. - Each StorageOS volume has at
least 1 healthy replica. When Pod Fencing is disabled, StorageOS
will not perform any interaction with Kubernetes when it detects that
a node has gone offline. Additionally, the Kubernetes permissions
required for Fencing will not be added to the StorageOS role.'
type: boolean
disableScheduler:
description: Disable StorageOS scheduler extender.
type: boolean
disableTCMU:
description: Disable TCMU can be set to true to disable the TCMU storage
driver. This is required when there are multiple storage systems
running on the same node and you wish to avoid conflicts. Only one
TCMU-based storage system can run on a node at a time. Disabling
TCMU will degrade performance.
type: boolean
disableTelemetry:
description: Disable Telemetry.
type: boolean
forceTCMU:
description: Force TCMU can be set to true to ensure that TCMU is enabled
or cause StorageOS to abort startup. At startup, StorageOS will automatically
fallback to non-TCMU mode if another TCMU-based storage system is
running on the node. Since non-TCMU will degrade performance, this
may not always be desired.
type: boolean
images:
description: Images defines the various container images used in the
cluster.
properties:
csiClusterDriverRegistrarContainer:
type: string
csiExternalAttacherContainer:
type: string
csiExternalProvisionerContainer:
type: string
csiLivenessProbeContainer:
type: string
csiNodeDriverRegistrarContainer:
type: string
hyperkubeContainer:
type: string
initContainer:
type: string
nodeContainer:
type: string
type: object
ingress:
description: Ingress defines the ingress configurations used in the
cluster.
properties:
annotations:
additionalProperties:
type: string
type: object
enable:
type: boolean
hostname:
type: string
tls:
type: boolean
type: object
join:
description: Join is the join token used for service discovery.
type: string
k8sDistro:
description: 'K8sDistro is the name of the Kubernetes distribution where
the operator is being deployed. It should be in the format: `name[-1.0]`,
where the version is optional and should only be appended if known. Suitable
names include: `openshift`, `rancher`, `aks`, `gke`, `eks`, or the
deployment method if using upstream directly, e.g. `minishift` or `kubeadm`. Setting
k8sDistro is optional, and will be used to simplify cluster configuration
by setting appropriate defaults for the distribution. The distribution
information will also be included in the product telemetry (if enabled),
to help focus development efforts.'
type: string
kvBackend:
description: KVBackend defines the key-value store backend used in the
cluster.
properties:
address:
type: string
backend:
type: string
type: object
namespace:
description: Namespace is the kubernetes Namespace where storageos resources
are provisioned.
type: string
nodeSelectorTerms:
description: NodeSelectorTerms is to set the placement of storageos
pods using node affinity requiredDuringSchedulingIgnoredDuringExecution.
items:
type: object
type: array
pause:
description: Pause is to pause the operator for the cluster.
type: boolean
resources:
description: Resources is to set the resource requirements of the storageos
containers.
type: object
secretRefName:
description: SecretRefName is the name of the secret object that contains
all the sensitive cluster configurations.
type: string
secretRefNamespace:
description: SecretRefNamespace is the namespace of the secret reference.
type: string
service:
description: Service is the Service configuration for the cluster nodes.
properties:
annotations:
additionalProperties:
type: string
type: object
externalPort:
format: int64
type: integer
internalPort:
format: int64
type: integer
name:
type: string
type:
type: string
required:
- name
- type
type: object
sharedDir:
description: 'SharedDir is the shared directory to be used when the
kubelet is running in a container. Typically: "/var/lib/kubelet/plugins/kubernetes.io~storageos".
If not set, defaults will be used.'
type: string
storageClassName:
description: StorageClassName is the name of default StorageClass created
for StorageOS volumes.
type: string
tlsEtcdSecretRefName:
description: TLSEtcdSecretRefName is the name of the secret object that
contains the etcd TLS certs. This secret is shared with etcd, therefore
it's not part of the main storageos secret.
type: string
tlsEtcdSecretRefNamespace:
description: TLSEtcdSecretRefNamespace is the namespace of the etcd
TLS secret object.
type: string
tolerations:
description: Tolerations is to set the placement of storageos pods using
pod toleration.
items:
type: object
type: array
required:
- secretRefName
- secretRefNamespace
type: object
status:
properties:
members:
properties:
ready:
description: Ready are the storageos cluster members that are ready
to serve requests. The member names are the same as the node IPs.
items:
type: string
type: array
unready:
description: Unready are the storageos cluster nodes not ready to
serve requests.
items:
type: string
type: array
type: object
nodeHealthStatus:
additionalProperties:
properties:
directfsInitiator:
type: string
director:
type: string
kv:
type: string
kvWrite:
type: string
nats:
type: string
presentation:
type: string
rdb:
type: string
type: object
type: object
nodes:
items:
type: string
type: array
phase:
type: string
ready:
type: string
type: object
version: v1
versions:
- name: v1
served: true
storage: true
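# Per the schema above, only spec.secretRefName and spec.secretRefNamespace
# are required; everything else is optional. An illustrative minimal object
# (names and namespace are placeholders):
#
#   apiVersion: storageos.com/v1
#   kind: StorageOSCluster
#   metadata:
#     name: example-storageos
#     namespace: storageos-operator
#   spec:
#     secretRefName: storageos-api
#     secretRefNamespace: storageos-operator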
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: storageosupgrades.storageos.com
annotations:
"helm.sh/hook": crd-install
spec:
group: storageos.com
names:
kind: StorageOSUpgrade
listKind: StorageOSUpgradeList
plural: storageosupgrades
singular: storageosupgrade
scope: Namespaced
validation:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
properties:
newImage:
description: NewImage is the new StorageOS node container image.
type: string
required:
- newImage
type: object
status:
properties:
completed:
description: Completed is the status of the upgrade process.
type: boolean
type: object
version: v1
versions:
- name: v1
served: true
storage: true
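# The upgrade schema above requires only spec.newImage. An illustrative
# object (name, namespace, and image tag are placeholders):
#
#   apiVersion: storageos.com/v1
#   kind: StorageOSUpgrade
#   metadata:
#     name: example-upgrade
#     namespace: storageos-operator
#   spec:
#     newImage: storageos/node:1.4.0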
# Default values for storageos.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
name: storageos-operator
k8sDistro: default
serviceAccount:
create: true
name: storageos-operator-sa
podSecurityPolicy:
enabled: false
annotations: {}
## Specify pod annotations
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
##
# seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
# seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
# apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
# operator-specific configuration parameters.
operator:
image:
repository: storageos/cluster-operator
tag: 1.4.0
pullPolicy: IfNotPresent
# cluster-specific configuration parameters.
cluster:
# set create to true if the operator should auto-create the StorageOS cluster.
create: true
# Name of the deployment.
name: storageos
# Namespace to install the StorageOS cluster into.
namespace: kube-system
# Name of the secret containing StorageOS API credentials.
secretRefName: storageos-api
# Default admin account.
admin:
# Username to authenticate to the StorageOS API with.
username: storageos
# Password to authenticate to the StorageOS API with. If empty, a random
# password will be generated and set in the secretRefName secret.
password:
# sharedDir should be set if running kubelet in a container. This should
# be the path shared into the kubelet container, typically:
# "/var/lib/kubelet/plugins/kubernetes.io~storageos". If not set, defaults
# will be used.
sharedDir:
# Key-Value store backend.
kvBackend:
embedded: true
address:
backend: etcd
tlsSecretName:
tlsSecretNamespace:
# Node selector terms to install StorageOS on.
nodeSelectorTerm:
key:
value:
# Pod toleration for the StorageOS pods.
toleration:
key:
value:
# To help improve the product, data such as API usage and StorageOS
# configuration information is collected. To disable this anonymous usage
# reporting across the cluster, set to true. Defaults to false.
disableTelemetry: false
images:
# nodeContainer is the StorageOS node image to use, available from the
# [Docker Hub](https://hub.docker.com/r/storageos/node/).
node:
repository: storageos/node
tag: 1.4.0
csi:
enable: true
deploymentStrategy: deployment
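# A minimal override file (my-values.yaml, illustrative) that keeps the
# defaults above but pins the admin password and opts out of telemetry:
#
#   cluster:
#     admin:
#       password: "example-password"
#     disableTelemetry: true
#
# Applied from the chart directory with, e.g.:
#
#   helm install --name storageos-operator --namespace storageos-operator \
#     -f my-values.yaml .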
# The following is used for cleaning up unmanaged cluster resources when
# auto-install is enabled.
cleanup:
- name: daemonset
command:
- "daemonset"
- "storageos-daemonset"
- name: statefulset
command:
- "statefulset"
- "storageos-statefulset"
- name: csi-helper
command:
- "deployment"
- "storageos-csi-helper"
- name: scheduler
command:
- "deployment"
- "storageos-scheduler"
- name: configmap
command:
- "configmap"
- "storageos-scheduler-config"
- "storageos-scheduler-policy"
- name: serviceaccount
command:
- "serviceaccount"
- "storageos-daemonset-sa"
- "storageos-statefulset-sa"
- name: role
command:
- "role"
- "storageos:key-management"
- name: rolebinding
command:
- "rolebinding"
- "storageos:key-management"
- name: secret
command:
- "secret"
- "init-secret"
- name: service
command:
- "service"
- "storageos"
- name: clusterrole
command:
- "clusterrole"
- "storageos:driver-registrar"
- "storageos:csi-attacher"
- "storageos:csi-provisioner"
- "storageos:pod-fencer"
- "storageos:scheduler-extender"
- "storageos:init"
- name: clusterrolebinding
command:
- "clusterrolebinding"
- "storageos:csi-provisioner"
- "storageos:csi-attacher"
- "storageos:driver-registrar"
- "storageos:k8s-driver-registrar"
- "storageos:pod-fencer"
- "storageos:scheduler-extender"
- "storageos:init"
- name: storageclass
command:
- "storageclass"
- "fast"