Commit 4f46758c by Denise, committed by GitHub

Merge pull request #197 from rancher/dev

Sync dev to master with partner PRs
parents 454ad004 b213608a
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
name: neuvector
apiVersion: v1
version: 1.2.3
appVersion: 2.4.2
description: NeuVector Container Security Platform includes layer 7 container firewall, end-to-end vulnerability scanning, and container process/file monitoring.
home: https://neuvector.com
icon: https://github.com/neuvector/kubernetes-cis-benchmark/blob/master/NeuVector-Logo.png
maintainers:
- name: Sun
email: xfsun@neuvector.com
engine: gotpl
# NeuVector
Visibility and Security: The NeuVector ‘Multi-Vector Container Security Platform’
[NeuVector](https://neuvector.com) provides a real-time Kubernetes and OpenShift container security solution that adapts easily to your changing environment and secures containers at their most vulnerable point – during run-time. The declarative security policy ensures that applications scale up or scale down quickly without manual intervention. The NeuVector solution itself is a Red Hat and Docker Certified container that deploys easily on each host, providing a container firewall, host monitoring and security, security auditing with CIS benchmarks, and vulnerability scanning.
The installation deploys the NeuVector Enforcer container on each worker node as a DaemonSet and, by default, 3 controller containers (for HA; one is elected the leader). The controllers can be deployed on any node, including master, infra, or management nodes. See the NeuVector docs for node labeling to control where the controllers are deployed.
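For example, after completing the installation steps below, you can check where the controller and enforcer pods were scheduled (the `neuvector` namespace is created during installation):
```console
$ kubectl get pods -n neuvector -o wide
```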
## Prerequisites
- Kubernetes 1.7+
- Helm installed and the Tiller pod running
- The `cluster-admin` cluster role available; check with:
```console
$ kubectl get clusterrole cluster-admin
```
If nothing is returned, create the `cluster-admin` cluster role. Save the following as `cluster-admin.yaml`:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cluster-admin
rules:
- apiGroups:
- '*'
resources:
- '*'
verbs:
- '*'
- nonResourceURLs:
- '*'
verbs:
- '*'
```
```console
$ kubectl create -f cluster-admin.yaml
```
- If you have not yet created a service account for Tiller, create one and give it admin privileges on the cluster:
```console
$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ kubectl patch deployment tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}' -n kube-system
```
## Downloading the Chart
Clone or download this repository.
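For example (the URL below is an assumption; substitute the actual location of this repository):
```console
$ git clone https://github.com/neuvector/neuvector-helm.git
```
The install commands below reference the chart directory as `./neuvector-helm/`.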
## Installing the Chart
#### Kubernetes
- Create the NeuVector namespace.
```console
$ kubectl create namespace neuvector
```
- Configure Kubernetes to pull from the private NeuVector registry on Docker Hub.
```console
$ kubectl create secret docker-registry regsecret -n neuvector --docker-server=https://index.docker.io/v1/ --docker-username=your-name --docker-password=your-pword --docker-email=your-email
```
Where `your-name` is your Docker username, `your-pword` is your Docker password, and `your-email` is your Docker email address.
To install the chart with the release name `my-release` and image pull secret:
```console
$ helm install --name my-release --namespace neuvector ./neuvector-helm/ --set imagePullSecrets=regsecret
```
> If you have already installed NeuVector in your cluster without using Helm, please run `kubectl delete -f your-neuvector-yaml.yaml` before installing with Helm.
#### RedHat OpenShift
- Create a new project. Note: if the `--node-selector` argument is used when creating the project, it will restrict pod placement, such as for the NeuVector enforcer, to specific nodes.
```console
$ oc new-project neuvector
```
- Grant Service Account Access to the Privileged SCC.
```console
$ oc -n neuvector adm policy add-scc-to-user privileged -z default
```
To install the chart with the release name `my-release` and your private registry:
```console
$ helm install --name my-release --namespace neuvector ./neuvector-helm/ --set openshift=true,registry=your-private-registry
```
If you are using a private registry and want to enable the updater cronjob, create a script like the one below and run it as a cron job before midnight (i.e., before the updater's default daily schedule):
```console
$ docker login docker.io
$ docker pull docker.io/neuvector/updater
$ docker logout docker.io
$ oc login -u <user_name>
# this user_name is the one used when you installed NeuVector
$ docker login -u <user_name> -p `oc whoami -t` docker-registry.default.svc:5000
$ docker tag docker.io/neuvector/updater docker-registry.default.svc:5000/neuvector/updater
$ docker push docker-registry.default.svc:5000/neuvector/updater
$ docker logout docker-registry.default.svc:5000
```
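One way to schedule the commands above is to save them as an executable script (the path below is hypothetical) and add a crontab entry that runs before the updater's default `0 0 * * *` schedule:
```console
# example crontab entry (path is hypothetical): run daily at 23:30, before the midnight updater job
30 23 * * * /usr/local/bin/sync-nv-updater.sh
```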
## Rolling upgrade
Please `git pull` the latest neuvector-helm/ before upgrading. For example, to upgrade to image tag 2.2.0:
```console
$ helm upgrade my-release --set imagePullSecrets=regsecret,tag=2.2.0 ./neuvector-helm/
```
During a rolling upgrade, keep (re-specify) all of the previous settings that you do not want to change. For example, on OpenShift:
```console
$ helm upgrade my-release --set openshift=true,registry=your-private-registry,cve.updater.enabled=true ./neuvector-helm/
```
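To review which values are currently applied to a release (useful for preserving previous settings during an upgrade; Helm 2 syntax, matching the commands above):
```console
$ helm get values my-release
```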
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
```console
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
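With Helm 2 the release record is kept by default; to remove it as well, so the release name can be reused, purge the release:
```console
$ helm delete --purge my-release
```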
## Configuration
The following table lists the configurable parameters of the NeuVector chart and their default values.
Parameter | Description | Default | Notes
--------- | ----------- | ------- | -----
`openshift` | If deploying in OpenShift, set this to true | `false` |
`registry` | image registry | `docker.io` | If Azure, set to my-reg.azurecr.io;<br>if OpenShift, set to docker-registry.default.svc:5000
`tag` | image tag for controller, enforcer, and manager | `latest` |
`imagePullSecrets` | image pull secret | `{}` |
`controller.enabled` | If true, create controller | `true` |
`controller.image.repository` | controller image repository | `neuvector/controller` |
`controller.replicas` | controller replicas | `3` |
`controller.pvc.enabled` | If true, enable persistence for controller using PVC | `false` | Requires a persistent volume with RWX access mode and 1Gi of storage
`controller.pvc.storageClass` | Storage Class to be used | `default` |
`controller.azureFileShare.enabled` | If true, enable the usage of an existing or statically provisioned Azure File Share | `false` |
`controller.azureFileShare.secretName` | The name of the secret containing the Azure file share storage account name and key | `{}` |
`controller.azureFileShare.shareName` | The name of the Azure file share to use | `{}` |
`enforcer.enabled` | If true, create enforcer | `true` |
`enforcer.image.repository` | enforcer image repository | `neuvector/enforcer` |
`enforcer.tolerations` | List of node taints to tolerate | `- effect: NoSchedule`<br>`key: node-role.kubernetes.io/master` | other taints can be added after the default
`manager.enabled` | If true, create manager | `true` |
`manager.image.repository` | manager image repository | `neuvector/manager` |
`manager.env.ssl` | enable/disable HTTPS and disable/enable HTTP access | `on`;<br>if ingress is enabled, then default is `off` |
`manager.svc.type` | set manager service type for native Kubernetes | `NodePort`;<br>if it is OpenShift platform or ingress is enabled, then default is `ClusterIP` | set to LoadBalancer if using cloud providers, such as Azure, Amazon, Google
`manager.ingress.enabled` | If true, create ingress, must also set ingress host value | `false` | enable this if ingress controller is installed
`manager.ingress.host` | Must set this host value if ingress is enabled | `{}` |
`manager.ingress.path` | Set ingress path |`/` | If set, it might be necessary to set a rewrite rule in annotations. Currently only supports `/`
`manager.ingress.annotations` | Add annotations to ingress to influence behavior | `{}` | see examples in [values.yaml](values.yaml)
`manager.ingress.tls` | If true, TLS is enabled for ingress |`false` | If set, the tls-host used is the one set with `manager.ingress.host`. It might be necessary to set `manager.env.ssl="off"`
`manager.ingress.secretName` | Name of the secret to be used for TLS-encryption | `{}` | Secret must be created separately (Let's encrypt, manually)
`cve.updater.enabled` | If true, create cve updater | `false` |
`cve.updater.image.repository` | cve updater image repository | `neuvector/updater` |
`cve.updater.image.tag` | image tag for cve updater | `latest` |
`cve.updater.schedule` | cronjob cve updater schedule | `0 0 * * *` |
`containerd.enabled` | If true, use containerd instead of docker | `false` |
`containerd.path` | If containerd enabled, this local containerd sock path will be used | `/var/run/containerd/containerd.sock` |
`admissionwebhook.type` | admission webhook type | `ClusterIP` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
$ helm install --name my-release --namespace neuvector ./neuvector-helm/ --set manager.env.ssl=off
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
$ helm install --name my-release --namespace neuvector ./neuvector-helm/ -f values.yaml
```
> **Tip**: You can use the default [values.yaml](values.yaml)
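For example, a minimal override file might look like this (illustrative values only; parameter names are taken from the table above):
```yaml
# my-values.yaml -- illustrative overrides; adjust to your environment
registry: docker.io
tag: 2.4.2
imagePullSecrets: regsecret
manager:
  svc:
    type: LoadBalancer
cve:
  updater:
    enabled: true
```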
## RBAC Configuration
If you previously installed NeuVector and manually created the `neuvector-binding` cluster role and cluster role binding, delete the cluster role binding first, then delete the cluster role:
```console
$ kubectl delete clusterrolebinding neuvector-binding
$ kubectl delete clusterrole neuvector-binding
```
If `helm install` returns an error because of an existing cluster role, delete the release before installing again:
```console
$ helm delete --purge my-release
```
## Enabling/Disabling Ingress
Toggling ingress by changing `manager.ingress.enabled` between `true` and `false` and then simply upgrading the chart will fail, because `manager.svc.type` would have to switch between `NodePort` (the default) and `ClusterIP`, which is not possible in a single update. The working way is (see the example after the list):
- Disable 'manager' (`manager.enabled=false`)
- Update chart
- Enable/Disable ingress and re-enable manager
- Update chart
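For example, to enable ingress on an existing release (the release name and host are illustrative; also repeat any other `--set` values you used previously):
```console
$ helm upgrade my-release ./neuvector-helm/ --set manager.enabled=false
$ helm upgrade my-release ./neuvector-helm/ --set manager.enabled=true,manager.ingress.enabled=true,manager.ingress.host=nv.example.com
```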
---
Contact <support@neuvector.com> for access to Docker Hub and docs.
### Run-Time Protection Without Compromise
NeuVector delivers a complete run-time security solution with container process/file system protection and vulnerability scanning combined with the only true Layer 7 container firewall. Protect sensitive data with a complete container security platform.
NeuVector integrates tightly with Rancher and Kubernetes to extend the built-in security features for applications that require defense in depth. Security features include:
+ Build phase vulnerability scanning with Jenkins plug-in and registry scanning
+ Admission control to prevent vulnerable or unauthorized image deployments using Kubernetes admission control webhooks
+ Complete run-time scanning with network, process, and file system monitoring and protection
+ The industry's only layer 7 container firewall for multi-protocol threat detection and automated segmentation
+ Advanced network controls including DLP detection, service mesh integration, connection blocking and packet captures
+ Run-time vulnerability scanning and CIS benchmarks
labels:
io.rancher.certified: partner
questions:
- variable: registry
default: "docker.io"
description: image registry
type: string
label: Image Registry
- variable: imagePullSecrets
default: ""
description: secret name to pull image
type: string
label: Image Pull Secrets
{{- if and .Values.manager.enabled .Values.manager.ingress.enabled }}
From outside the cluster, the NeuVector URL is:
http://{{ .Values.manager.ingress.host }}
{{- else if not .Values.openshift }}
Get the NeuVector URL by running these commands:
{{- if contains "NodePort" .Values.manager.svc.type }}
NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services neuvector-service-webui)
NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo https://$NODE_IP:$NODE_PORT
{{- else if contains "ClusterIP" .Values.manager.svc.type }}
CLUSTER_IP=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.clusterIP}" services neuvector-service-webui)
echo https://$CLUSTER_IP:8443
{{- else if contains "LoadBalancer" .Values.manager.svc.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status by running 'kubectl get svc --namespace {{ .Release.Namespace }} -w neuvector-service-webui'
SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} neuvector-service-webui -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
echo https://$SERVICE_IP:8443
{{- end }}
{{- end }}
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "neuvector.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "neuvector.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "neuvector.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
apiVersion: v1
kind: Service
metadata:
name: neuvector-svc-admission-webhook
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
ports:
- port: 443
targetPort: 20443
protocol: TCP
name: admission-webhook
type: {{ .Values.admissionwebhook.type }}
selector:
app: neuvector-controller-pod
{{- if and .Values.openshift (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRole
metadata:
name: neuvector-binding-app
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups:
- ""
resources:
- nodes
- pods
- services
verbs:
- get
- list
- watch
- update
---
{{- if and .Values.openshift (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRole
metadata:
name: neuvector-binding-rbac
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
{{- if .Values.openshift }}
- apiGroups:
- image.openshift.io
resources:
- imagestreams
verbs:
- get
- list
- watch
{{- end }}
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
- roles
- clusterrolebindings
- clusterroles
verbs:
- get
- list
- watch
---
{{- if and .Values.openshift (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRole
metadata:
name: neuvector-binding-admission
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
- mutatingwebhookconfigurations
verbs:
- get
- list
- watch
- create
- update
- delete
{{- if and .Values.openshift (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRoleBinding
metadata:
name: neuvector-binding-app
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
roleRef:
{{- if not .Values.openshift }}
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
{{- end }}
name: neuvector-binding-app
subjects:
- kind: ServiceAccount
name: default
namespace: {{ .Release.Namespace }}
{{- if .Values.openshift }}
userNames:
- system:serviceaccount:{{ .Release.Namespace }}:default
{{- end }}
---
{{- if and .Values.openshift (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRoleBinding
metadata:
name: neuvector-binding-rbac
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
roleRef:
{{- if not .Values.openshift }}
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
{{- end }}
name: neuvector-binding-rbac
subjects:
- kind: ServiceAccount
name: default
namespace: {{ .Release.Namespace }}
{{- if .Values.openshift }}
userNames:
- system:serviceaccount:{{ .Release.Namespace }}:default
{{- end }}
---
{{- if and .Values.openshift (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: authorization.openshift.io/v1
{{- else if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: ClusterRoleBinding
metadata:
name: neuvector-binding-admission
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
roleRef:
{{- if not .Values.openshift }}
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
{{- end }}
name: neuvector-binding-admission
subjects:
- kind: ServiceAccount
name: default
namespace: {{ .Release.Namespace }}
{{- if .Values.openshift }}
userNames:
- system:serviceaccount:{{ .Release.Namespace }}:default
{{- end }}
{{- if .Values.controller.enabled -}}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: neuvector-controller-pod
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.controller.replicas }}
minReadySeconds: 60
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: neuvector-controller-pod
release: {{ .Release.Name }}
spec:
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
{{- end }}
containers:
- name: neuvector-controller-pod
image: "{{ .Values.registry }}/{{ .Values.controller.image.repository }}:{{ .Values.tag }}"
securityContext:
privileged: true
readinessProbe:
exec:
command:
- cat
- /tmp/ready
initialDelaySeconds: 5
periodSeconds: 5
env:
- name: CLUSTER_JOIN_ADDR
value: neuvector-svc-controller.{{ .Release.Namespace }}
- name: CLUSTER_ADVERTISED_ADDR
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: CLUSTER_BIND_ADDR
valueFrom:
fieldRef:
fieldPath: status.podIP
{{- if or .Values.controller.pvc.enabled .Values.controller.azureFileShare.enabled }}
- name: CTRL_PERSIST_CONFIG
value: "1"
{{- end }}
volumeMounts:
- mountPath: /var/neuvector
name: nv-share
readOnly: false
{{- if .Values.containerd.enabled }}
- mountPath: /var/run/containerd/containerd.sock
{{- else }}
- mountPath: /var/run/docker.sock
{{- end }}
name: runtime-sock
readOnly: false
- mountPath: /host/proc
name: proc-vol
readOnly: true
- mountPath: /host/cgroup
name: cgroup-vol
readOnly: true
lifecycle:
preStop:
exec:
command: ["/usr/local/bin/consul", "leave"]
terminationGracePeriodSeconds: 60
restartPolicy: Always
volumes:
- name: nv-share
{{- if .Values.controller.pvc.enabled }}
persistentVolumeClaim:
claimName: neuvector-data
{{- else if .Values.controller.azureFileShare.enabled }}
azureFile:
secretName: {{ .Values.controller.azureFileShare.secretName }}
shareName: {{ .Values.controller.azureFileShare.shareName }}
readOnly: false
{{- else }}
hostPath:
path: /var/neuvector
{{- end }}
- name: runtime-sock
hostPath:
{{- if .Values.containerd.enabled }}
path: {{ .Values.containerd.path }}
{{- else }}
path: /var/run/docker.sock
{{- end }}
- name: proc-vol
hostPath:
path: /proc
- name: cgroup-vol
hostPath:
path: /sys/fs/cgroup
{{- end }}
{{- if .Values.controller.enabled -}}
apiVersion: v1
kind: Service
metadata:
name: neuvector-svc-controller
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
ports:
- port: 18300
protocol: "TCP"
name: "cluster-tcp-18300"
- port: 18301
protocol: "TCP"
name: "cluster-tcp-18301"
- port: 18301
protocol: "UDP"
name: "cluster-udp-18301"
clusterIP: None
selector:
app: neuvector-controller-pod
{{- end }}
{{- if .Values.enforcer.enabled -}}
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: neuvector-enforcer-pod
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: neuvector-enforcer-pod
release: {{ .Release.Name }}
spec:
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
{{- end }}
{{- if .Values.enforcer.tolerations }}
tolerations:
{{ toYaml .Values.enforcer.tolerations | indent 8 }}
{{- end }}
hostPID: true
containers:
- name: neuvector-enforcer-pod
image: "{{ .Values.registry }}/{{ .Values.enforcer.image.repository }}:{{ .Values.tag }}"
securityContext:
privileged: true
env:
- name: CLUSTER_JOIN_ADDR
value: neuvector-svc-controller.{{ .Release.Namespace }}
- name: CLUSTER_ADVERTISED_ADDR
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: CLUSTER_BIND_ADDR
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
{{- if .Values.containerd.enabled }}
- mountPath: /var/run/containerd/containerd.sock
{{- else }}
- mountPath: /var/run/docker.sock
{{- end }}
name: runtime-sock
readOnly: false
- mountPath: /host/proc
name: proc-vol
readOnly: true
- mountPath: /host/cgroup
name: cgroup-vol
readOnly: true
- mountPath: /lib/modules
name: modules-vol
readOnly: true
restartPolicy: Always
volumes:
- name: runtime-sock
hostPath:
{{- if .Values.containerd.enabled }}
path: {{ .Values.containerd.path }}
{{- else }}
path: /var/run/docker.sock
{{- end }}
- name: proc-vol
hostPath:
path: /proc
- name: cgroup-vol
hostPath:
path: /sys/fs/cgroup
- name: modules-vol
hostPath:
path: /lib/modules
{{- end }}
{{- if and .Values.manager.enabled .Values.manager.ingress.enabled -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: neuvector-webui-ingress
namespace: {{ .Release.Namespace }}
{{- with .Values.manager.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if .Values.manager.ingress.tls }}
tls:
- hosts:
- {{ .Values.manager.ingress.host }}
{{- if .Values.manager.ingress.secretName }}
secretName: {{ .Values.manager.ingress.secretName }}
{{- end }}
{{- end }}
rules:
- host: {{ .Values.manager.ingress.host }}
http:
paths:
- path: {{ .Values.manager.ingress.path }}
backend:
serviceName: neuvector-service-webui
servicePort: 8443
{{- end }}
{{- if .Values.manager.enabled -}}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: neuvector-manager-pod
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: 1
template:
metadata:
labels:
app: neuvector-manager-pod
release: {{ .Release.Name }}
spec:
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
{{- end }}
containers:
- name: neuvector-manager-pod
image: "{{ .Values.registry }}/{{ .Values.manager.image.repository }}:{{ .Values.tag }}"
env:
- name: CTRL_SERVER_IP
value: neuvector-svc-controller.{{ .Release.Namespace }}
- name: MANAGER_SSL
{{- if .Values.manager.ingress.enabled }}
value: "off"
{{- else }}
value: "{{ .Values.manager.env.ssl }}"
{{- end }}
restartPolicy: Always
{{- end }}
{{- if .Values.manager.enabled -}}
apiVersion: v1
kind: Service
metadata:
name: neuvector-service-webui
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if or .Values.openshift .Values.manager.ingress.enabled }}
type: ClusterIP
{{- else }}
type: {{ .Values.manager.svc.type }}
{{- end }}
ports:
- port: 8443
name: manager
protocol: TCP
selector:
app: neuvector-manager-pod
{{- end }}
{{- if and .Values.controller.enabled .Values.controller.pvc.enabled -}}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: neuvector-data
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
accessModes:
- ReadWriteMany
volumeMode: Filesystem
{{- if .Values.controller.pvc.storageClass }}
storageClassName: {{ .Values.controller.pvc.storageClass }}
{{- end }}
resources:
requests:
storage: 1Gi
{{- end }}
{{- if .Values.openshift -}}
{{- if (semverCompare ">=1.9-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: route.openshift.io/v1
{{- else }}
apiVersion: v1
{{- end }}
kind: Route
metadata:
name: neuvector-route-webui
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
to:
kind: Service
name: neuvector-service-webui
port:
targetPort: manager
tls:
termination: passthrough
{{- end }}
{{- if .Values.cve.updater.enabled -}}
{{- if (semverCompare ">=1.8-0" (substr 1 -1 .Capabilities.KubeVersion.GitVersion)) }}
apiVersion: batch/v1beta1
{{- else }}
apiVersion: batch/v2alpha1
{{- end }}
kind: CronJob
metadata:
name: neuvector-updater-pod
namespace: {{ .Release.Namespace }}
labels:
chart: {{ template "neuvector.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
schedule: {{ .Values.cve.updater.schedule | quote }}
jobTemplate:
spec:
template:
metadata:
labels:
app: neuvector-updater-pod
release: {{ .Release.Name }}
spec:
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
{{- end }}
containers:
- name: neuvector-updater-pod
image: "{{ .Values.registry }}/{{ .Values.cve.updater.image.repository }}:{{ .Values.cve.updater.image.tag }}"
env:
- name: CLUSTER_JOIN_ADDR
value: neuvector-svc-controller.{{ .Release.Namespace }}
restartPolicy: Never
{{- end }}
# Default values for neuvector.
# This is a YAML-formatted file.
# Declare variables to be passed into the templates.
openshift: false
registry: docker.io
tag: latest
imagePullSecrets: {}
controller:
# If false, controller will not be installed
enabled: true
image:
repository: neuvector/controller
replicas: 3
pvc:
enabled: false
storageClass:
azureFileShare:
enabled: false
secretName: {}
shareName: {}
enforcer:
# If false, enforcer will not be installed
enabled: true
image:
repository: neuvector/enforcer
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
manager:
# If false, manager will not be installed
enabled: true
image:
repository: neuvector/manager
env:
ssl: on
svc:
type: NodePort
ingress:
enabled: false
host: {}
# MUST be set, if ingress is enabled
path: "/"
annotations: {}
# kubernetes.io/ingress.class: my-nginx
# nginx.ingress.kubernetes.io/whitelist-source-range: "1.1.1.1"
# nginx.ingress.kubernetes.io/rewrite-target: /
# nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
tls: false
secretName: {}
# my-tls-secret
cve:
updater:
# If false, cve updater will not be installed
enabled: false
image:
repository: neuvector/updater
tag: latest
schedule: "0 0 * * *"
containerd:
enabled: false
path: /var/run/containerd/containerd.sock
admissionwebhook:
type: ClusterIP
apiVersion: v1
version: 0.9.0
version: 1.0.0
name: openebs
appVersion: 0.9.0
appVersion: 1.0.0
description: Containerized Storage for Containers
icon: https://raw.githubusercontent.com/cncf/artwork/master/projects/openebs/icon/color/openebs-icon-color.png
home: http://www.openebs.io/
......
......@@ -11,7 +11,7 @@ Introduction
This chart bootstraps OpenEBS deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
## Prerequisites
- Kubernetes 1.7.5+ with RBAC enabled
- Kubernetes 1.10+ with RBAC enabled
- iSCSI PV support in the underlying infrastructure
## Installing OpenEBS
......@@ -40,45 +40,47 @@ The following table lists the configurable parameters of the OpenEBS chart and t
| `rbac.create` | Enable RBAC Resources | `true` |
| `image.pullPolicy` | Container pull policy | `IfNotPresent` |
| `apiserver.image` | Image for API Server | `quay.io/openebs/m-apiserver` |
| `apiserver.imageTag` | Image Tag for API Server | `0.9.0` |
| `apiserver.imageTag` | Image Tag for API Server | `1.0.0` |
| `apiserver.replicas` | Number of API Server Replicas | `1` |
| `apiserver.sparse.enabled` | Create Sparse Pool based on Sparsefile | `false` |
| `provisioner.image` | Image for Provisioner | `quay.io/openebs/openebs-k8s-provisioner` |
| `provisioner.imageTag` | Image Tag for Provisioner | `0.9.0` |
| `provisioner.imageTag` | Image Tag for Provisioner | `1.0.0` |
| `provisioner.replicas` | Number of Provisioner Replicas | `1` |
| `localProvisioner.image` | Image for localProvisioner | `quay.io/openebs/provisioner-localpv` |
| `localProvisioner.imageTag` | Image Tag for localProvisioner | `0.9.0` |
| `localProvisioner.imageTag` | Image Tag for localProvisioner | `1.0.0` |
| `localProvisioner.replicas` | Number of localProvisioner Replicas | `1` |
| `localProvisioner.basePath` | BasePath for hostPath volumes on Nodes | `/var/openebs/local` |
| `localProvisioner.basePath` | BasePath for hostPath volumes on Nodes | `/var/openebs/local` |
| `webhook.image` | Image for admission server | `quay.io/openebs/admission-server` |
| `webhook.imageTag` | Image Tag for admission server | `0.9.0` |
| `webhook.imageTag` | Image Tag for admission server | `1.0.0` |
| `webhook.replicas` | Number of admission server Replicas | `1` |
| `snapshotOperator.provisioner.image` | Image for Snapshot Provisioner | `quay.io/openebs/snapshot-provisioner` |
| `snapshotOperator.provisioner.imageTag` | Image Tag for Snapshot Provisioner | `0.9.0` |
| `snapshotOperator.provisioner.imageTag` | Image Tag for Snapshot Provisioner | `1.0.0` |
| `snapshotOperator.controller.image` | Image for Snapshot Controller | `quay.io/openebs/snapshot-controller` |
| `snapshotOperator.controller.imageTag` | Image Tag for Snapshot Controller | `0.9.0` |
| `snapshotOperator.controller.imageTag` | Image Tag for Snapshot Controller | `1.0.0` |
| `snapshotOperator.replicas` | Number of Snapshot Operator Replicas | `1` |
| `ndm.image` | Image for Node Disk Manager | `quay.io/openebs/node-disk-manager-amd64` |
| `ndm.imageTag` | Image Tag for Node Disk Manager | `v0.3.5` |
| `ndm.imageTag` | Image Tag for Node Disk Manager | `v0.4.0` |
| `ndm.sparse.path` | Directory where Sparse files are created | `/var/openebs/sparse` |
| `ndm.sparse.size` | Size of the sparse file in bytes | `10737418240` |
| `ndm.sparse.count` | Number of sparse files to be created | `1` |
| `ndm.filters.excludeVendors` | Exclude devices with specified vendor | `CLOUDBYT,OpenEBS` |
| `ndm.filters.excludePaths` | Exclude devices with specified path patterns | `loop,fd0,sr0,/dev/ram,/dev/dm-,/dev/md` |
| `ndm.filters.includePaths` | Include devices with specified path patterns | `""` |
| `ndmOperator.image` | Image for NDM Operator | `quay.io/openebs/node-disk-operator-amd64`|
| `ndmOperator.imageTag` | Image Tag for NDM Operator | `v0.4.0` |
| `jiva.image` | Image for Jiva | `quay.io/openebs/jiva` |
| `jiva.imageTag` | Image Tag for Jiva | `0.9.0` |
| `jiva.imageTag` | Image Tag for Jiva | `1.0.0` |
| `jiva.replicas` | Number of Jiva Replicas | `3` |
| `cstor.pool.image` | Image for cStor Pool | `quay.io/openebs/cstor-pool` |
| `cstor.pool.imageTag` | Image Tag for cStor Pool | `0.9.0` |
| `cstor.pool.imageTag` | Image Tag for cStor Pool | `1.0.0` |
| `cstor.poolMgmt.image` | Image for cStor Pool Management | `quay.io/openebs/cstor-pool-mgmt` |
| `cstor.poolMgmt.imageTag` | Image Tag for cStor Pool Management | `0.9.0` |
| `cstor.poolMgmt.imageTag` | Image Tag for cStor Pool Management | `1.0.0` |
| `cstor.target.image` | Image for cStor Target | `quay.io/openebs/cstor-istgt` |
| `cstor.target.imageTag` | Image Tag for cStor Target | `0.9.0` |
| `cstor.target.imageTag` | Image Tag for cStor Target | `1.0.0` |
| `cstor.volumeMgmt.image` | Image for cStor Volume Management | `quay.io/openebs/cstor-volume-mgmt` |
| `cstor.volumeMgmt.imageTag` | Image Tag for cStor Volume Management | `0.9.0` |
| `cstor.volumeMgmt.imageTag` | Image Tag for cStor Volume Management | `1.0.0` |
| `policies.monitoring.image` | Image for Prometheus Exporter | `quay.io/openebs/m-exporter` |
| `policies.monitoring.imageTag` | Image Tag for Prometheus Exporter | `0.9.0` |
| `policies.monitoring.imageTag` | Image Tag for Prometheus Exporter | `1.0.0` |
| `analytics.enabled` | Enable sending stats to Google Analytics | `true` |
| `analytics.pingInterval` | Duration(hours) between sending ping stat | `24h` |
| `HealthCheck.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
......
......@@ -18,7 +18,7 @@ questions:
type: string
label: API Server Image
- variable: apiserver.imageTag
default: "0.9.0"
default: "1.0.0"
description: "The image tag of API Server image"
type: string
label: Image Tag For OpenEBS API Server Image
......@@ -28,7 +28,7 @@ questions:
type: string
label: Provisioner Image
- variable: provisioner.imageTag
default: "0.9.0"
default: "1.0.0"
description: "The image tag of Provisioner image"
type: string
label: Image Tag For Provisioner Image
......@@ -38,7 +38,7 @@ questions:
type: string
label: Snapshot Controller Image
- variable: snapshotOperator.controller.imageTag
default: "0.9.0"
default: "1.0.0"
description: "The image tag of Snapshot Controller image"
type: string
label: Image Tag For OpenEBS Snapshot Controller Image
......@@ -48,7 +48,7 @@ questions:
type: string
label: Snapshot Provisioner Image
- variable: snapshotOperator.provisioner.imageTag
default: "0.9.0"
default: "1.0.0"
description: "The image tag of Snapshot Provisioner image"
type: string
label: Image Tag For OpenEBS Snapshot Provisioner Image
......@@ -58,17 +58,27 @@ questions:
type: string
label: Node Disk Manager Image
- variable: ndm.imageTag
default: "v0.3.5"
default: "v0.4.0"
description: "The image tag of NDM image"
type: string
label: Image Tag For Node Disk Manager Image
- variable: ndo.image
default: "quay.io/openebs/node-disk-operator-amd64"
description: "Default NDO image"
type: string
label: Node Disk Operator Image
- variable: ndo.imageTag
default: "v0.4.0"
description: "The image tag of NDO image"
type: string
label: Image Tag For Node Disk Manager Image
- variable: jiva.image
default: "quay.io/openebs/jiva"
description: "Default Jiva Storage Engine image for OpenEBS"
type: string
label: Jiva Storage Engine Image
- variable: jiva.imageTag
default: "0.9.0"
default: "1.0.0"
description: "The image tag of Jiva image"
type: string
label: Image Tag For OpenEBS Jiva Storage Engine Image
......@@ -78,7 +88,7 @@ questions:
type: string
label: cStor Storage Engine Pool Image
- variable: cstor.pool.imageTag
default: "0.9.0"
default: "1.0.0"
description: "The image tag of cStor Storage Engine Pool image"
type: string
label: Image Tag For OpenEBS cStor Storage Engine Pool Image
......@@ -88,7 +98,7 @@ questions:
type: string
label: cStor Storage Engine Pool Management Image
- variable: cstor.poolMgmt.imageTag
default: "0.9.0"
default: "1.0.0"
description: "The image tag of cStor Storage Engine Pool Management image"
type: string
label: Image Tag For OpenEBS cStor Storage Engine Pool Management Image
......@@ -98,7 +108,7 @@ questions:
type: string
label: cStor Storage Engine Target Image
- variable: cstor.target.imageTag
default: "0.9.0"
default: "1.0.0"
description: "The image tag of cStor Storage Engine Target image"
type: string
label: Image Tag For OpenEBS cStor Storage Engine Target Image
......@@ -108,7 +118,7 @@ questions:
type: string
label: cStor Storage Engine Target Management Image
- variable: cstor.volumeMgmt.imageTag
default: "0.9.0"
default: "1.0.0"
description: "The image tag of cStor Storage Engine Target Management image"
type: string
label: Image Tag For OpenEBS cStor Storage Engine Target Management Image
......@@ -119,7 +129,7 @@ questions:
label: Monitoring Exporter Image
show_if: "policies.monitoring.enabled=true&&defaultImage=false"
- variable: policies.monitoring.imageTag
default: "0.9.0"
default: "1.0.0"
description: "The image tag of OpenEBS Exporter"
type: string
label: Image Tag For OpenEBS Exporter Image
......
......@@ -25,7 +25,7 @@ rules:
resources: ["customresourcedefinitions"]
verbs: [ "get", "list", "create", "update", "delete"]
- apiGroups: ["*"]
resources: [ "disks"]
resources: [ "disks", "blockdevices", "blockdeviceclaims"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "storagepoolclaims", "storagepoolclaims/finalizers","storagepools"]
......
......@@ -35,6 +35,10 @@ spec:
securityContext:
privileged: true
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# pass hostname as env variable using downward API to the NDM container
- name: NODE_NAME
valueFrom:
......
......@@ -8,6 +8,7 @@ metadata:
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: localpv-provisioner
openebs.io/component-name: openebs-localpv-provisioner
spec:
replicas: {{ .Values.provisioner.replicas }}
selector:
......@@ -21,8 +22,8 @@ spec:
release: {{ .Release.Name }}
component: localpv-provisioner
name: openebs-localpv-provisioner
openebs.io/version: {{ .Values.release.version }}
openebs.io/component-name: openebs-localpv-provisioner
openebs.io/version: {{ .Values.release.version }}
spec:
serviceAccountName: {{ template "openebs.serviceAccountName" . }}
containers:
......@@ -50,8 +51,14 @@ spec:
fieldPath: spec.nodeName
# OPENEBS_IO_BASE_PATH is the environment variable that provides the
# default base path on the node where host-path PVs will be provisioned.
- name: OPENEBS_IO_ENABLE_ANALYTICS
value: "{{ .Values.analytics.enabled }}"
- name: OPENEBS_IO_BASE_PATH
value: "{{ .Values.localprovisioner.basePath }}"
- name: OPENEBS_IO_HELPER_IMAGE
value: "{{ .Values.localprovisioner.helperImage }}:{{ .Values.localprovisioner.helperImageTag }}"
- name: OPENEBS_IO_INSTALLER_TYPE
value: "charts-helm"
livenessProbe:
exec:
command:
......
......@@ -9,6 +9,7 @@ metadata:
heritage: {{ .Release.Service }}
component: apiserver
name: maya-apiserver
openebs.io/component-name: maya-apiserver
spec:
replicas: {{ .Values.apiserver.replicas }}
selector:
......@@ -94,6 +95,8 @@ spec:
# for periodic ping events sent to Google Analytics. Default is 24 hours.
- name: OPENEBS_IO_ANALYTICS_PING_INTERVAL
value: "{{ .Values.analytics.pingInterval }}"
- name: OPENEBS_IO_INSTALLER_TYPE
value: "charts-helm"
livenessProbe:
exec:
command:
......
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "openebs.fullname" . }}-ndm-operator
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: ndm-operator
openebs.io/component-name: ndm-operator
name: ndm-operator
spec:
replicas: {{ .Values.ndmOperator.replicas }}
strategy:
type: {{ .Values.ndmOperator.upgradeStrategy }}
selector:
matchLabels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
component: ndm-operator
name: ndm-operator
openebs.io/component-name: ndm-operator
openebs.io/version: {{ .Values.release.version }}
spec:
serviceAccountName: {{ template "openebs.serviceAccountName" . }}
containers:
- name: {{ template "openebs.fullname" . }}-ndm-operator
image: "{{ .Values.ndmOperator.image }}:{{ .Values.ndmOperator.imageTag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
readinessProbe:
exec:
command:
- stat
- /tmp/operator-sdk-ready
initialDelaySeconds: {{ .Values.ndmOperator.readinessCheck.initialDelaySeconds }}
periodSeconds: {{ .Values.ndmOperator.readinessCheck.periodSeconds }}
failureThreshold: {{ .Values.ndmOperator.readinessCheck.failureThreshold }}
env:
- name: WATCH_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: OPERATOR_NAME
value: "node-disk-operator"
- name: CLEANUP_JOB_IMAGE
value: "{{ .Values.ndmOperator.cleanupImage }}:{{ .Values.ndmOperator.cleanupImageTag }}"
{{- if .Values.ndmOperator.nodeSelector }}
nodeSelector:
{{ toYaml .Values.ndmOperator.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.ndmOperator.tolerations }}
tolerations:
{{ toYaml .Values.ndmOperator.tolerations | indent 8 }}
{{- end }}
......@@ -12,14 +12,14 @@ serviceAccount:
release:
# "openebs.io/version" label for control plane components
version: "0.9.0"
version: "1.0.0"
image:
pullPolicy: IfNotPresent
apiserver:
image: "quay.io/openebs/m-apiserver"
imageTag: "0.9.0"
imageTag: "1.0.0"
replicas: 1
ports:
externalPort: 5656
......@@ -35,7 +35,7 @@ apiserver:
provisioner:
image: "quay.io/openebs/openebs-k8s-provisioner"
imageTag: "0.9.0"
imageTag: "1.0.0"
replicas: 1
nodeSelector: {}
tolerations: []
......@@ -46,8 +46,11 @@ provisioner:
localprovisioner:
image: "quay.io/openebs/provisioner-localpv"
imageTag: "0.9.0"
imageTag: "1.0.0"
helperImage: "quay.io/openebs/openebs-tools"
helperImageTag: "3.8"
replicas: 1
basePath: "/var/openebs/local"
nodeSelector: {}
tolerations: []
affinity: {}
......@@ -58,10 +61,10 @@ localprovisioner:
snapshotOperator:
controller:
image: "quay.io/openebs/snapshot-controller"
imageTag: "0.9.0"
imageTag: "1.0.0"
provisioner:
image: "quay.io/openebs/snapshot-provisioner"
imageTag: "0.9.0"
imageTag: "1.0.0"
replicas: 1
upgradeStrategy: "Recreate"
nodeSelector: {}
......@@ -73,22 +76,37 @@ snapshotOperator:
ndm:
image: "quay.io/openebs/node-disk-manager-amd64"
imageTag: "v0.3.5"
imageTag: "v0.4.0"
sparse:
path: "/var/openebs/sparse"
size: "10737418240"
count: "1"
filters:
excludeVendors: "CLOUDBYT,OpenEBS"
includePaths: ""
excludePaths: "loop,fd0,sr0,/dev/ram,/dev/dm-,/dev/md"
nodeSelector: {}
healthCheck:
initialDelaySeconds: 30
periodSeconds: 60
ndmOperator:
image: "quay.io/openebs/node-disk-operator-amd64"
imageTag: "v0.4.0"
replicas: 1
upgradeStrategy: Recreate
nodeSelector: {}
tolerations: []
readinessCheck:
initialDelaySeconds: 4
periodSeconds: 10
failureThreshold: 1
cleanupImage: "quay.io/openebs/linux-utils"
cleanupImageTag: "3.9"
webhook:
image: "quay.io/openebs/admission-server"
imageTag: "0.9.0"
imageTag: "1.0.0"
generateTLS: true
replicas: 1
nodeSelector: {}
......@@ -97,28 +115,28 @@ webhook:
jiva:
image: "quay.io/openebs/jiva"
imageTag: "0.9.0"
imageTag: "1.0.0"
replicas: 3
cstor:
pool:
image: "quay.io/openebs/cstor-pool"
imageTag: "0.9.0"
imageTag: "1.0.0"
poolMgmt:
image: "quay.io/openebs/cstor-pool-mgmt"
imageTag: "0.9.0"
imageTag: "1.0.0"
target:
image: "quay.io/openebs/cstor-istgt"
imageTag: "0.9.0"
imageTag: "1.0.0"
volumeMgmt:
image: "quay.io/openebs/cstor-volume-mgmt"
imageTag: "0.9.0"
imageTag: "1.0.0"
policies:
monitoring:
enabled: true
image: "quay.io/openebs/m-exporter"
imageTag: "0.9.0"
imageTag: "1.0.0"
analytics:
enabled: true
......
apiVersion: v1
version: 0.9.0
name: openebs
appVersion: 0.9.0
description: Containerized Storage for Containers
icon: https://raw.githubusercontent.com/cncf/artwork/master/projects/openebs/icon/color/openebs-icon-color.png
home: http://www.openebs.io/
keywords:
- cloud-native-storage
- block-storage
- iSCSI
- storage
sources:
- https://github.com/openebs/openebs
maintainers:
- name: kmova
email: kiran.mova@openebs.io
- name: prateekpandey14
email: prateek.pandey@openebs.io
OpenEBS
=======
[OpenEBS](https://github.com/openebs/openebs) is an open source storage platform that provides persistent and containerized block storage for DevOps and container environments.
OpenEBS can be deployed on any Kubernetes cluster - either in the cloud, on-premises, or on a developer laptop (minikube). OpenEBS itself is deployed as just another container on your cluster and enables storage services that can be designated on a per pod, application, cluster or container level.
Introduction
------------
This chart bootstraps OpenEBS deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
## Prerequisites
- Kubernetes 1.7.5+ with RBAC enabled
- iSCSI PV support in the underlying infrastructure
## Installing OpenEBS
```
helm install --namespace openebs stable/openebs
```
## Installing OpenEBS with the release name `my-release`:
```
helm install --name `my-release` --namespace openebs stable/openebs
```
## To uninstall/delete the `my-release` deployment:
```
helm ls --all
helm delete `my-release`
```
## Configuration
The following table lists the configurable parameters of the OpenEBS chart and their default values.
| Parameter | Description | Default |
| ----------------------------------------| --------------------------------------------- | ----------------------------------------- |
| `rbac.create` | Enable RBAC Resources | `true` |
| `image.pullPolicy` | Container pull policy | `IfNotPresent` |
| `apiserver.image` | Image for API Server | `quay.io/openebs/m-apiserver` |
| `apiserver.imageTag` | Image Tag for API Server | `0.9.0` |
| `apiserver.replicas` | Number of API Server Replicas | `1` |
| `apiserver.sparse.enabled` | Create Sparse Pool based on Sparsefile | `false` |
| `provisioner.image` | Image for Provisioner | `quay.io/openebs/openebs-k8s-provisioner` |
| `provisioner.imageTag` | Image Tag for Provisioner | `0.9.0` |
| `provisioner.replicas` | Number of Provisioner Replicas | `1` |
| `localProvisioner.image` | Image for localProvisioner | `quay.io/openebs/provisioner-localpv` |
| `localProvisioner.imageTag` | Image Tag for localProvisioner | `0.9.0` |
| `localProvisioner.replicas` | Number of localProvisioner Replicas | `1` |
| `localProvisioner.basePath` | BasePath for hostPath volumes on Nodes | `/var/openebs/local` |
| `webhook.image` | Image for admission server | `quay.io/openebs/admission-server` |
| `webhook.imageTag` | Image Tag for admission server | `0.9.0` |
| `webhook.replicas` | Number of admission server Replicas | `1` |
| `snapshotOperator.provisioner.image` | Image for Snapshot Provisioner | `quay.io/openebs/snapshot-provisioner` |
| `snapshotOperator.provisioner.imageTag` | Image Tag for Snapshot Provisioner | `0.9.0` |
| `snapshotOperator.controller.image` | Image for Snapshot Controller | `quay.io/openebs/snapshot-controller` |
| `snapshotOperator.controller.imageTag` | Image Tag for Snapshot Controller | `0.9.0` |
| `snapshotOperator.replicas` | Number of Snapshot Operator Replicas | `1` |
| `ndm.image` | Image for Node Disk Manager | `quay.io/openebs/node-disk-manager-amd64` |
| `ndm.imageTag` | Image Tag for Node Disk Manager | `v0.3.5` |
| `ndm.sparse.path` | Directory where Sparse files are created | `/var/openebs/sparse` |
| `ndm.sparse.size` | Size of the sparse file in bytes | `10737418240` |
| `ndm.sparse.count` | Number of sparse files to be created | `1` |
| `ndm.filters.excludeVendors` | Exclude devices with specified vendor | `CLOUDBYT,OpenEBS` |
| `ndm.filters.excludePaths` | Exclude devices with specified path patterns | `loop,fd0,sr0,/dev/ram,/dev/dm-,/dev/md` |
| `ndm.filters.includePaths` | Include devices with specified path patterns | `""` |
| `jiva.image` | Image for Jiva | `quay.io/openebs/jiva` |
| `jiva.imageTag` | Image Tag for Jiva | `0.9.0` |
| `jiva.replicas` | Number of Jiva Replicas | `3` |
| `cstor.pool.image` | Image for cStor Pool | `quay.io/openebs/cstor-pool` |
| `cstor.pool.imageTag` | Image Tag for cStor Pool | `0.9.0` |
| `cstor.poolMgmt.image` | Image for cStor Pool Management | `quay.io/openebs/cstor-pool-mgmt` |
| `cstor.poolMgmt.imageTag` | Image Tag for cStor Pool Management | `0.9.0` |
| `cstor.target.image` | Image for cStor Target | `quay.io/openebs/cstor-istgt` |
| `cstor.target.imageTag` | Image Tag for cStor Target | `0.9.0` |
| `cstor.volumeMgmt.image` | Image for cStor Volume Management | `quay.io/openebs/cstor-volume-mgmt` |
| `cstor.volumeMgmt.imageTag` | Image Tag for cStor Volume Management | `0.9.0` |
| `policies.monitoring.image` | Image for Prometheus Exporter | `quay.io/openebs/m-exporter` |
| `policies.monitoring.imageTag` | Image Tag for Prometheus Exporter | `0.9.0` |
| `analytics.enabled` | Enable sending stats to Google Analytics | `true` |
| `analytics.pingInterval` | Duration(hours) between sending ping stat | `24h` |
| `HealthCheck.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
| `HealthCheck.periodSeconds` | How often to perform the liveness probe | `60` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
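For example (an illustrative override using parameters from the table above):
```shell
helm install --name my-release --namespace openebs stable/openebs --set ndm.sparse.count=3,analytics.enabled=false
```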
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```shell
helm install --name `my-release` -f values.yaml stable/openebs
```
> **Tip**: You can use the default [values.yaml](values.yaml)
# OpenEBS
OpenEBS is an open source storage platform that provides persistent, container-attached, cloud-native block storage for DevOps and Kubernetes environments.
OpenEBS allows you to treat your persistent workload containers, such as DBs in containers, just like other containers. OpenEBS itself is deployed as just another container on your host and enables storage services that can be designated on a per pod, application, cluster or container level, including:
- Data persistence across nodes, dramatically reducing time spent rebuilding Cassandra rings for example.
- Synchronization of data across availability zones and cloud providers.
- Use of commodity hardware plus a container engine to deliver so-called container-attached block storage.
- Integration with Kubernetes, so developer and application intent flows into OpenEBS configurations automatically.
- Management of tiering to and from S3 and other targets.
categories:
- storage
namespace: openebs
labels:
io.rancher.certified: partner
questions:
- variable: defaultImage
default: "true"
description: "Use default OpenEBS images"
label: Use Default Image
type: boolean
show_subquestion_if: false
group: "Container Images"
subquestions:
- variable: apiserver.image
default: "quay.io/openebs/m-apiserver"
description: "Default API Server image for OpenEBS"
type: string
label: API Server Image
- variable: apiserver.imageTag
default: "0.9.0"
description: "The image tag of API Server image"
type: string
label: Image Tag For OpenEBS API Server Image
- variable: provisioner.image
default: "quay.io/openebs/openebs-k8s-provisioner"
description: "Default K8s Provisioner image for OpenEBS"
type: string
label: Provisioner Image
- variable: provisioner.imageTag
default: "0.9.0"
description: "The image tag of Provisioner image"
type: string
label: Image Tag For Provisioner Image
- variable: snapshotOperator.controller.image
default: "quay.io/openebs/snapshot-controller"
description: "Default Snapshot Controller image for OpenEBS"
type: string
label: Snapshot Controller Image
- variable: snapshotOperator.controller.imageTag
default: "0.9.0"
description: "The image tag of Snapshot Controller image"
type: string
label: Image Tag For OpenEBS Snapshot Controller Image
- variable: snapshotOperator.provisioner.image
default: "quay.io/openebs/snapshot-provisioner"
description: "Default Snapshot Provisioner image for OpenEBS"
type: string
label: Snapshot Provisioner Image
- variable: snapshotOperator.provisioner.imageTag
default: "0.9.0"
description: "The image tag of Snapshot Provisioner image"
type: string
label: Image Tag For OpenEBS Snapshot Provisioner Image
- variable: ndm.image
default: "quay.io/openebs/node-disk-manager-amd64"
description: "Default NDM image"
type: string
label: Node Disk Manager Image
- variable: ndm.imageTag
default: "v0.3.5"
description: "The image tag of NDM image"
type: string
label: Image Tag For Node Disk Manager Image
- variable: jiva.image
default: "quay.io/openebs/jiva"
description: "Default Jiva Storage Engine image for OpenEBS"
type: string
label: Jiva Storage Engine Image
- variable: jiva.imageTag
default: "0.9.0"
description: "The image tag of Jiva image"
type: string
label: Image Tag For OpenEBS Jiva Storage Engine Image
- variable: cstor.pool.image
default: "quay.io/openebs/cstor-pool"
description: "Default cStor Storage Engine Pool image for OpenEBS"
type: string
label: cStor Storage Engine Pool Image
- variable: cstor.pool.imageTag
default: "0.9.0"
description: "The image tag of cStor Storage Engine Pool image"
type: string
label: Image Tag For OpenEBS cStor Storage Engine Pool Image
- variable: cstor.poolMgmt.image
default: "quay.io/openebs/cstor-pool-mgmt"
description: "Default cStor Storage Engine Pool Management image for OpenEBS"
type: string
label: cStor Storage Engine Pool Management Image
- variable: cstor.poolMgmt.imageTag
default: "0.9.0"
description: "The image tag of cStor Storage Engine Pool Management image"
type: string
label: Image Tag For OpenEBS cStor Storage Engine Pool Management Image
- variable: cstor.target.image
default: "quay.io/openebs/cstor-istgt"
description: "Default cStor Storage Engine Target image for OpenEBS"
type: string
label: cStor Storage Engine Target Image
- variable: cstor.target.imageTag
default: "0.9.0"
description: "The image tag of cStor Storage Engine Target image"
type: string
label: Image Tag For OpenEBS cStor Storage Engine Target Image
- variable: cstor.volumeMgmt.image
default: "quay.io/openebs/cstor-volume-mgmt"
description: "Default cStor Storage Engine Target Management image for OpenEBS"
type: string
label: cStor Storage Engine Target Management Image
- variable: cstor.volumeMgmt.imageTag
default: "0.9.0"
description: "The image tag of cStor Storage Engine Target Management image"
type: string
label: Image Tag For OpenEBS cStor Storage Engine Target Management Image
- variable: policies.monitoring.image
default: "quay.io/openebs/m-exporter"
description: "Default OpeneEBS Volume and pool Exporter image"
type: string
label: Monitoring Exporter Image
show_if: "policies.monitoring.enabled=true&&defaultImage=false"
- variable: policies.monitoring.imageTag
default: "0.9.0"
description: "The image tag of OpenEBS Exporter"
type: string
label: Image Tag For OpenEBS Exporter Image
show_if: "policies.monitoring.enabled=true&&defaultImage=false"
- variable: ndm.filters.excludeVendors
default: 'CLOUDBYT\,OpenEBS'
type: string
description: "Configure NDM to filter disks from following vendors"
label: Filter Disks belonging to vendors
group: "NDM Disk Filter by Vendor "
- variable: ndm.filters.excludePaths
default: 'loop\,fd0\,sr0\,/dev/ram\,/dev/dm-\,/dev/md'
type: string
description: "Configure NDM to filter disks from following paths"
label: Filter Disks belonging to paths
group: "NDM Disk Filter by Path"
- variable: ndm.sparse.enabled
default: "true"
description: "Create a cStor Pool on Sparse Disks"
label: Create cStor Pool on Sparse Disks
type: boolean
show_subquestion_if: true
group: "NDM Sparse Disk Settings"
subquestions:
- variable: ndm.sparse.size
default: "10737418240"
description: "Default Size of Sparse Disk"
type: string
label: Sparse Disk Size in bytes
- variable: ndm.sparse.count
default: "1"
description: "Number of Sparse Disks"
type: string
label: Number of Sparse Disks
- variable: ndm.sparse.path
default: "/var/openebs/sparse"
description: "Directory where Sparse Disks should be created"
type: string
label: Directory for Sparse Disks
- variable: defaultPorts
default: "true"
description: "Use default Communication Ports"
label: Use Default Ports
type: boolean
show_subquestion_if: false
group: "Communication Ports"
subquestions:
- variable: apiserver.ports.externalPort
default: 5656
description: "Default External Port for OpenEBS API Server"
type: int
min: 0
max: 9999
label: OpenEBS API Server External Port
- variable: apiserver.ports.internalPort
default: 5656
description: "Default Internal Port for OpenEBS API Server"
type: int
min: 0
max: 9999
label: OpenEBS API Server Internal Port
- variable: policies.monitoring.enabled
default: true
description: "Enable prometheus monitoring"
type: boolean
label: Enable Prometheus Monitoring
group: "Monitoring Settings"
- variable: analytics.enabled
default: true
description: "Enable sending anonymous statistics to OpenEBS Google Analytics"
type: boolean
label: Enable updating OpenEBS with usage details
group: "Anonymous Analytics"
OpenEBS has been installed. Check its status by running:
$ kubectl get pods -n {{ .Release.Namespace }}
For dynamically creating OpenEBS Volumes, you can either create a new StorageClass or
use one of the default storage classes provided by OpenEBS.
Use `kubectl get sc` to see the list of installed OpenEBS StorageClasses. A sample
PVC spec using the `openebs-jiva-default` StorageClass is given below:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: demo-vol-claim
spec:
storageClassName: openebs-jiva-default
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5G
---
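To create the claim (assuming the spec above is saved as demo-vol-claim.yaml):
$ kubectl apply -f demo-vol-claim.yaml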
For more information, please visit http://docs.openebs.io/.
Please note that OpenEBS uses iSCSI to connect applications to the
OpenEBS Volumes, so your nodes should have the iSCSI initiator installed.
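As an illustration only (assuming Debian/Ubuntu worker nodes; the package and
service names differ on other distributions), the initiator can be installed
and verified with:
$ sudo apt-get install -y open-iscsi
$ sudo systemctl enable --now iscsid
$ sudo systemctl status iscsid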
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "openebs.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "openebs.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "openebs.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "openebs.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "openebs.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ template "openebs.fullname" . }}
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups: ["*"]
resources: ["nodes", "nodes/proxy"]
verbs: ["*"]
- apiGroups: ["*"]
resources: ["namespaces", "services", "pods", "deployments", "events", "endpoints", "configmaps", "jobs"]
verbs: ["*"]
- apiGroups: ["*"]
resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"]
verbs: ["*"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
resources: ["volumesnapshots", "volumesnapshotdatas"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: [ "get", "list", "create", "update", "delete"]
- apiGroups: ["*"]
resources: [ "disks"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "storagepoolclaims", "storagepoolclaims/finalizers","storagepools"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "castemplates", "runtasks"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "cstorpools", "cstorpools/finalizers", "cstorvolumereplicas", "cstorvolumes"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "cstorbackups", "cstorrestores", "cstorcompletedbackups"]
verbs: ["*" ]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
{{- end }}
{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ template "openebs.fullname" . }}
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "openebs.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "openebs.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}
# This is the node-disk-manager related config.
# It can be used to customize the disks probes and filters
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "openebs.fullname" . }}-ndm-config
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: ndm-config
data:
# udev-probe is the default or primary probe, which should be enabled for NDM to run
# filterconfigs contains the configuration of filters, in the form of include
# and exclude comma-separated strings
node-disk-manager.config: |
probeconfigs:
- key: udev-probe
name: udev probe
state: true
- key: seachest-probe
name: seachest probe
state: true
- key: smart-probe
name: smart probe
state: true
filterconfigs:
- key: os-disk-exclude-filter
name: os disk exclude filter
state: true
exclude: "/,/etc/hosts,/boot"
- key: vendor-filter
name: vendor filter
state: true
include: ""
exclude: "{{ .Values.ndm.filters.excludeVendors }}"
- key: path-filter
name: path filter
state: true
include: "{{ .Values.ndm.filters.includePaths }}"
exclude: "{{ .Values.ndm.filters.excludePaths }}"
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: {{ template "openebs.fullname" . }}-ndm
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: ndm
spec:
updateStrategy:
type: "RollingUpdate"
selector:
matchLabels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
component: ndm
template:
metadata:
labels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
component: ndm
openebs.io/component-name: ndm
name: openebs-ndm
openebs.io/version: {{ .Values.release.version }}
spec:
serviceAccountName: {{ template "openebs.serviceAccountName" . }}
hostNetwork: true
containers:
- name: {{ template "openebs.name" . }}-ndm
image: "{{ .Values.ndm.image }}:{{ .Values.ndm.imageTag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
securityContext:
privileged: true
env:
# pass hostname as env variable using downward API to the NDM container
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
{{- if .Values.ndm.sparse }}
{{- if .Values.ndm.sparse.path }}
# specify the directory where the sparse files need to be created.
# if not specified, then sparse files will not be created.
- name: SPARSE_FILE_DIR
value: "{{ .Values.ndm.sparse.path }}"
{{- end }}
{{- if .Values.ndm.sparse.size }}
# Size(bytes) of the sparse file to be created.
- name: SPARSE_FILE_SIZE
value: "{{ .Values.ndm.sparse.size }}"
{{- end }}
{{- if .Values.ndm.sparse.count }}
# Specify the number of sparse files to be created
- name: SPARSE_FILE_COUNT
value: "{{ .Values.ndm.sparse.count }}"
{{- end }}
{{- end }}
livenessProbe:
exec:
command:
- pgrep
- ".*ndm"
initialDelaySeconds: {{ .Values.ndm.healthCheck.initialDelaySeconds }}
periodSeconds: {{ .Values.ndm.healthCheck.periodSeconds }}
volumeMounts:
- name: config
mountPath: /host/node-disk-manager.config
subPath: node-disk-manager.config
readOnly: true
- name: udev
mountPath: /run/udev
- name: procmount
mountPath: /host/proc
readOnly: true
{{- if .Values.ndm.sparse }}
{{- if .Values.ndm.sparse.path }}
- name: sparsepath
mountPath: {{ .Values.ndm.sparse.path }}
{{- end }}
{{- end }}
volumes:
- name: config
configMap:
name: {{ template "openebs.fullname" . }}-ndm-config
- name: udev
hostPath:
path: /run/udev
type: Directory
# mount the host's /proc (to access the mount file of the host's PID 1) inside the container
# to read the mount points of disks and partitions
- name: procmount
hostPath:
path: /proc
type: Directory
{{- if .Values.ndm.sparse }}
{{- if .Values.ndm.sparse.path }}
- name: sparsepath
hostPath:
path: {{ .Values.ndm.sparse.path }}
{{- end }}
{{- end }}
# By default the node-disk-manager will be run on all kubernetes nodes
# If you would like to limit this to only some nodes, say the nodes
# that have storage attached, you could label those nodes and use
# nodeSelector.
#
# e.g. label the storage nodes with - "openebs.io/nodegroup"="storage-node"
# kubectl label node <node-name> "openebs.io/nodegroup"="storage-node"
#nodeSelector:
# "openebs.io/nodegroup": "storage-node"
{{- if .Values.ndm.nodeSelector }}
nodeSelector:
{{ toYaml .Values.ndm.nodeSelector | indent 8 }}
{{- end }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "openebs.fullname" . }}-admission-server
labels:
app: admission-webhook
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: admission-webhook
spec:
replicas: {{ .Values.webhook.replicas }}
selector:
matchLabels:
app: admission-webhook
template:
metadata:
labels:
app: admission-webhook
name: admission-webhook
openebs.io/version: {{ .Values.release.version }}
openebs.io/component-name: admission-webhook
spec:
{{- if .Values.webhook.nodeSelector }}
nodeSelector:
{{ toYaml .Values.webhook.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.webhook.tolerations }}
tolerations:
{{ toYaml .Values.webhook.tolerations | indent 8 }}
{{- end }}
{{- if .Values.webhook.affinity }}
affinity:
{{ toYaml .Values.webhook.affinity | indent 8 }}
{{- end }}
serviceAccountName: {{ template "openebs.serviceAccountName" . }}
containers:
- name: admission-webhook
image: "{{ .Values.webhook.image }}:{{ .Values.webhook.imageTag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
- -tlsCertFile=/etc/webhook/certs/cert.pem
- -tlsKeyFile=/etc/webhook/certs/key.pem
- -alsologtostderr
- -v=8
- 2>&1
volumeMounts:
- name: webhook-certs
mountPath: /etc/webhook/certs
readOnly: true
volumes:
- name: webhook-certs
secret:
secretName: admission-server-certs
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "openebs.fullname" . }}-localpv-provisioner
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: localpv-provisioner
spec:
replicas: {{ .Values.provisioner.replicas }}
selector:
matchLabels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
component: localpv-provisioner
name: openebs-localpv-provisioner
openebs.io/version: {{ .Values.release.version }}
openebs.io/component-name: openebs-localpv-provisioner
spec:
serviceAccountName: {{ template "openebs.serviceAccountName" . }}
containers:
- name: {{ template "openebs.name" . }}-localpv-provisioner
image: "{{ .Values.localprovisioner.image }}:{{ .Values.localprovisioner.imageTag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
# OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s
# based on this address. This is ignored if empty.
# This is supported for openebs provisioner version 0.5.2 onwards
#- name: OPENEBS_IO_K8S_MASTER
# value: "http://10.128.0.12:8080"
# OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s
# based on this config. This is ignored if empty.
# This is supported for openebs provisioner version 0.5.2 onwards
#- name: OPENEBS_IO_KUBE_CONFIG
# value: "/home/ubuntu/.kube/config"
# OPENEBS_NAMESPACE is the namespace that this provisioner will
# look up to find the maya api service
- name: OPENEBS_NAMESPACE
value: "{{ .Release.Namespace }}"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# OPENEBS_IO_BASE_PATH is the environment variable that provides the
# default base path on the node where host-path PVs will be provisioned.
- name: OPENEBS_IO_BASE_PATH
value: "{{ .Values.localprovisioner.basePath }}"
livenessProbe:
exec:
command:
- pgrep
- ".*localpv"
initialDelaySeconds: {{ .Values.localprovisioner.healthCheck.initialDelaySeconds }}
periodSeconds: {{ .Values.localprovisioner.healthCheck.periodSeconds }}
{{- if .Values.localprovisioner.nodeSelector }}
nodeSelector:
{{ toYaml .Values.localprovisioner.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.localprovisioner.tolerations }}
tolerations:
{{ toYaml .Values.localprovisioner.tolerations | indent 8 }}
{{- end }}
{{- if .Values.localprovisioner.affinity }}
affinity:
{{ toYaml .Values.localprovisioner.affinity | indent 8 }}
{{- end }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "openebs.fullname" . }}-apiserver
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: apiserver
name: maya-apiserver
spec:
replicas: {{ .Values.apiserver.replicas }}
selector:
matchLabels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
component: apiserver
name: maya-apiserver
openebs.io/component-name: maya-apiserver
openebs.io/version: {{ .Values.release.version }}
spec:
serviceAccountName: {{ template "openebs.serviceAccountName" . }}
containers:
- name: {{ template "openebs.name" . }}-apiserver
image: "{{ .Values.apiserver.image }}:{{ .Values.apiserver.imageTag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: {{ .Values.apiserver.ports.internalPort }}
env:
# OPENEBS_IO_KUBE_CONFIG enables maya api service to connect to K8s
# based on this config. This is ignored if empty.
# This is supported for maya api server version 0.5.2 onwards
#- name: OPENEBS_IO_KUBE_CONFIG
# value: "/home/ubuntu/.kube/config"
# OPENEBS_IO_K8S_MASTER enables maya api service to connect to K8s
# based on this address. This is ignored if empty.
# This is supported for maya api server version 0.5.2 onwards
#- name: OPENEBS_IO_K8S_MASTER
# value: "http://172.28.128.3:8080"
# OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL decides whether default cstor sparse pool should be
# configured as a part of openebs installation.
# If "true" a default cstor sparse pool will be configured, if "false" it will not be configured.
- name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL
value: "{{ .Values.apiserver.sparse.enabled }}"
- name: OPENEBS_IO_CSTOR_POOL_SPARSE_DIR
value: "{{ .Values.ndm.sparse.path }}"
# OPENEBS_NAMESPACE provides the namespace of this deployment as an
# environment variable
- name: OPENEBS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as
# environment variable
- name: OPENEBS_SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
# OPENEBS_MAYA_POD_NAME provides the name of this pod as
# environment variable
- name: OPENEBS_MAYA_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: OPENEBS_IO_JIVA_CONTROLLER_IMAGE
value: "{{ .Values.jiva.image }}:{{ .Values.jiva.imageTag }}"
- name: OPENEBS_IO_JIVA_REPLICA_IMAGE
value: "{{ .Values.jiva.image }}:{{ .Values.jiva.imageTag }}"
- name: OPENEBS_IO_JIVA_REPLICA_COUNT
value: "{{ .Values.jiva.replicas }}"
- name: OPENEBS_IO_CSTOR_TARGET_IMAGE
value: "{{ .Values.cstor.target.image }}:{{ .Values.cstor.target.imageTag }}"
- name: OPENEBS_IO_CSTOR_POOL_IMAGE
value: "{{ .Values.cstor.pool.image }}:{{ .Values.cstor.pool.imageTag }}"
- name: OPENEBS_IO_CSTOR_POOL_MGMT_IMAGE
value: "{{ .Values.cstor.poolMgmt.image }}:{{ .Values.cstor.poolMgmt.imageTag }}"
- name: OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGE
value: "{{ .Values.cstor.volumeMgmt.image }}:{{ .Values.cstor.volumeMgmt.imageTag }}"
- name: OPENEBS_IO_VOLUME_MONITOR_IMAGE
value: "{{ .Values.policies.monitoring.image }}:{{ .Values.policies.monitoring.imageTag }}"
- name: OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGE
value: "{{ .Values.policies.monitoring.image }}:{{ .Values.policies.monitoring.imageTag }}"
# OPENEBS_IO_ENABLE_ANALYTICS if set to true sends anonymous usage
# events to Google Analytics
- name: OPENEBS_IO_ENABLE_ANALYTICS
value: "{{ .Values.analytics.enabled }}"
# OPENEBS_IO_ANALYTICS_PING_INTERVAL can be used to specify the duration (in hours)
# for periodic ping events sent to Google Analytics. Default is 24 hours.
- name: OPENEBS_IO_ANALYTICS_PING_INTERVAL
value: "{{ .Values.analytics.pingInterval }}"
livenessProbe:
exec:
command:
- /usr/local/bin/mayactl
- version
initialDelaySeconds: {{ .Values.apiserver.healthCheck.initialDelaySeconds }}
periodSeconds: {{ .Values.apiserver.healthCheck.periodSeconds }}
{{- if .Values.apiserver.nodeSelector }}
nodeSelector:
{{ toYaml .Values.apiserver.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.apiserver.tolerations }}
tolerations:
{{ toYaml .Values.apiserver.tolerations | indent 8 }}
{{- end }}
{{- if .Values.apiserver.affinity }}
affinity:
{{ toYaml .Values.apiserver.affinity | indent 8 }}
{{- end }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "openebs.fullname" . }}-provisioner
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: provisioner
spec:
replicas: {{ .Values.provisioner.replicas }}
selector:
matchLabels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
component: provisioner
name: openebs-provisioner
openebs.io/component-name: openebs-provisioner
openebs.io/version: {{ .Values.release.version }}
spec:
serviceAccountName: {{ template "openebs.serviceAccountName" . }}
containers:
- name: {{ template "openebs.name" . }}-provisioner
image: "{{ .Values.provisioner.image }}:{{ .Values.provisioner.imageTag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
# OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s
# based on this address. This is ignored if empty.
# This is supported for openebs provisioner version 0.5.2 onwards
#- name: OPENEBS_IO_K8S_MASTER
# value: "http://10.128.0.12:8080"
# OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s
# based on this config. This is ignored if empty.
# This is supported for openebs provisioner version 0.5.2 onwards
#- name: OPENEBS_IO_KUBE_CONFIG
# value: "/home/ubuntu/.kube/config"
# OPENEBS_NAMESPACE is the namespace that this provisioner will
# look up to find the maya api service
- name: OPENEBS_NAMESPACE
value: "{{ .Release.Namespace }}"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
# to which the provisioner should forward the volume create/delete requests.
# If not present, "maya-apiserver-service" will be used for lookup.
# This is supported for openebs provisioner version 0.5.3-RC1 onwards
- name: OPENEBS_MAYA_SERVICE_NAME
value: "{{ template "openebs.fullname" . }}-apiservice"
# The following values will be set as annotations to the PV object.
# Refer : https://github.com/openebs/external-storage/pull/15
#- name: OPENEBS_MONITOR_URL
# value: "{{ .Values.provisioner.monitorUrl }}"
#- name: OPENEBS_MONITOR_VOLKEY
# value: "{{ .Values.provisioner.monitorVolumeKey }}"
#- name: MAYA_PORTAL_URL
# value: "{{ .Values.provisioner.mayaPortalUrl }}"
livenessProbe:
exec:
command:
- pgrep
- ".*openebs"
initialDelaySeconds: {{ .Values.provisioner.healthCheck.initialDelaySeconds }}
periodSeconds: {{ .Values.provisioner.healthCheck.periodSeconds }}
{{- if .Values.provisioner.nodeSelector }}
nodeSelector:
{{ toYaml .Values.provisioner.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.provisioner.tolerations }}
tolerations:
{{ toYaml .Values.provisioner.tolerations | indent 8 }}
{{- end }}
{{- if .Values.provisioner.affinity }}
affinity:
{{ toYaml .Values.provisioner.affinity | indent 8 }}
{{- end }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "openebs.fullname" . }}-snapshot-operator
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: snapshot-operator
spec:
replicas: {{ .Values.snapshotOperator.replicas }}
selector:
matchLabels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
strategy:
type: {{ .Values.snapshotOperator.upgradeStrategy }}
template:
metadata:
labels:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
component: snapshot-operator
name: openebs-snapshot-operator
openebs.io/version: {{ .Values.release.version }}
openebs.io/component-name: openebs-snapshot-operator
spec:
serviceAccountName: {{ template "openebs.serviceAccountName" . }}
containers:
- name: {{ template "openebs.name" . }}-snapshot-controller
image: "{{ .Values.snapshotOperator.controller.image }}:{{ .Values.snapshotOperator.controller.imageTag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
# OPENEBS_IO_K8S_MASTER enables openebs snapshot controller to connect to K8s
# based on this address. This is ignored if empty.
# This is supported for openebs snapshot controller version 0.6-RC1 onwards
#- name: OPENEBS_IO_K8S_MASTER
# value: "http://10.128.0.12:8080"
# OPENEBS_IO_KUBE_CONFIG enables openebs snapshot controller to connect to K8s
# based on this config. This is ignored if empty.
# This is supported for openebs snapshot controller version 0.6-RC1 onwards
#- name: OPENEBS_IO_KUBE_CONFIG
# value: "/home/ubuntu/.kube/config"
# OPENEBS_NAMESPACE is the namespace that this snapshot controller will
# look up to find the maya api service
- name: OPENEBS_NAMESPACE
value: "{{ .Release.Namespace }}"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
# to which the snapshot controller should forward the volume snapshot requests.
# If not present, "maya-apiserver-service" will be used for lookup.
# This is supported for openebs snapshot controller version 0.6-RC1 onwards
- name: OPENEBS_MAYA_SERVICE_NAME
value: "{{ template "openebs.fullname" . }}-apiservice"
livenessProbe:
exec:
command:
- pgrep
- ".*controller"
initialDelaySeconds: {{ .Values.snapshotOperator.healthCheck.initialDelaySeconds }}
periodSeconds: {{ .Values.snapshotOperator.healthCheck.periodSeconds }}
- name: {{ template "openebs.name" . }}-snapshot-provisioner
image: "{{ .Values.snapshotOperator.provisioner.image }}:{{ .Values.snapshotOperator.provisioner.imageTag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
# OPENEBS_IO_K8S_MASTER enables openebs snapshot provisioner to connect to K8s
# based on this address. This is ignored if empty.
# This is supported for openebs snapshot provisioner version 0.6-RC1 onwards
#- name: OPENEBS_IO_K8S_MASTER
# value: "http://10.128.0.12:8080"
# OPENEBS_IO_KUBE_CONFIG enables openebs snapshot provisioner to connect to K8s
# based on this config. This is ignored if empty.
# This is supported for openebs snapshot provisioner version 0.6-RC1 onwards
#- name: OPENEBS_IO_KUBE_CONFIG
# value: "/home/ubuntu/.kube/config"
# OPENEBS_NAMESPACE is the namespace that this snapshot provisioner will
# look up to find the maya api service
- name: OPENEBS_NAMESPACE
value: "{{ .Release.Namespace }}"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
# to which the snapshot provisioner should forward the volume snapshot PV requests.
# If not present, "maya-apiserver-service" will be used for lookup.
# This is supported for openebs snapshot provisioner version 0.6-RC1 onwards
- name: OPENEBS_MAYA_SERVICE_NAME
value: "{{ template "openebs.fullname" . }}-apiservice"
livenessProbe:
exec:
command:
- pgrep
- ".*provisioner"
initialDelaySeconds: {{ .Values.snapshotOperator.healthCheck.initialDelaySeconds }}
periodSeconds: {{ .Values.snapshotOperator.healthCheck.periodSeconds }}
{{- if .Values.snapshotOperator.nodeSelector }}
nodeSelector:
{{ toYaml .Values.snapshotOperator.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.snapshotOperator.tolerations }}
tolerations:
{{ toYaml .Values.snapshotOperator.tolerations | indent 8 }}
{{- end }}
{{- if .Values.snapshotOperator.affinity }}
affinity:
{{ toYaml .Values.snapshotOperator.affinity | indent 8 }}
{{- end }}
apiVersion: v1
kind: Service
metadata:
name: admission-server-svc
labels:
app: admission-webhook
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
ports:
- port: 443
targetPort: 443
selector:
app: admission-webhook
apiVersion: v1
kind: Service
metadata:
name: {{ template "openebs.fullname" . }}-apiservice
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
ports:
- name: api
port: {{ .Values.apiserver.ports.externalPort }}
targetPort: {{ .Values.apiserver.ports.internalPort }}
protocol: TCP
selector:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
component: apiserver
sessionAffinity: None
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "openebs.serviceAccountName" . }}
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- end }}
{{- $ca := genCA "admission-server-ca" 3650 }}
{{- $cn := printf "admission-server-svc" }}
{{- $altName1 := printf "admission-server-svc.%s" .Release.Namespace }}
{{- $altName2 := printf "admission-server-svc.%s.svc" .Release.Namespace }}
{{- $cert := genSignedCert $cn nil (list $altName1 $altName2) 3650 $ca }}
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
name: openebs-validation-webhook-cfg
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: admission-webhook
webhooks:
- name: admission-webhook.openebs.io
clientConfig:
service:
name: admission-server-svc
namespace: {{ .Release.Namespace }}
path: "/validate"
{{- if .Values.webhook.generateTLS }}
caBundle: {{ b64enc $ca.Cert }}
{{- else }}
caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURpekNDQW5PZ0F3SUJBZ0lKQUk5NG9wdWdKb1drTUEwR0NTcUdTSWIzRFFFQkN3VUFNRnd4Q3pBSkJnTlYKQkFZVEFuaDRNUW93Q0FZRFZRUUlEQUY0TVFvd0NBWURWUVFIREFGNE1Rb3dDQVlEVlFRS0RBRjRNUW93Q0FZRApWUVFMREFGNE1Rc3dDUVlEVlFRRERBSmpZVEVRTUE0R0NTcUdTSWIzRFFFSkFSWUJlREFlRncweE9UQXpNREl3Ck56TXlOREZhRncweU1EQXpNREV3TnpNeU5ERmFNRnd4Q3pBSkJnTlZCQVlUQW5oNE1Rb3dDQVlEVlFRSURBRjQKTVFvd0NBWURWUVFIREFGNE1Rb3dDQVlEVlFRS0RBRjRNUW93Q0FZRFZRUUxEQUY0TVFzd0NRWURWUVFEREFKagpZVEVRTUE0R0NTcUdTSWIzRFFFSkFSWUJlRENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DCmdnRUJBT0pxNmI2dnI0cDMzM3FRaHJQbmNCVFVIUE1ESnJtaEYvOU44NjZodzFvOGZLclFwNkJmRkcvZEQ0N2gKVGcvWnJ0U2VHT0NoRjFxSEk1dGp3SlVEeGphSUM3U0FkZGpxb1pJUGFoT1pjVlpxZE1POVVFTlFUbktIRXczVQpCUjJUaHdydi9QTTRxZitUazdRa1J6Y2VJQXg1VS9lbUlEV2t4NEk3RlRYQk1XT1hGUTNoRlFtWFppZHpHN21mCnZJTlhYN0krOHR3QVM0alNSdGhxYjVUTzMwYmpxQTFzY0RRdXlZU2R6OVg5TGw1WU1QSUtSZHpnYUR1d1Q5QkQKZjNxT1VqazN6M1FZd0IvWmowaXJtQlpKejJla0V3a1QxbWlyUHF2NTA5QVJ5V1U2QUlSSTN6dnB6S2tWeFJUaApmcUROa1M5SmRRV1Q3RW9vN2lITmRtZlhOYmtDQXdFQUFhTlFNRTR3SFFZRFZSME9CQllFRk1ORzZGeGlMYWFmCjFld2w1RDd1SXJiK0UrSE9NQjhHQTFVZEl3UVlNQmFBRk1ORzZGeGlMYWFmMWV3bDVEN3VJcmIrRStIT01Bd0cKQTFVZEV3UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHQnYxeC92OWRnWU1ZY1h5TU9MUUNENgpVZWNsS3YzSFRTVGUybXZQcTZoTW56K0ExOGF6RWhPU0xONHZuQUNSd2pzRmVobWIrWk9wMVlYWDkzMi9OckRxCk1XUmh1bENiblFndjlPNVdHWXBDQUR1dnBBMkwyT200aU50S0FucUpGNm5ubHI1UFdQZnVJelB1eVlvQUpKRDkKSFpZRjVwa2hac0EwdDlUTDFuUmdPbFY4elZ0eUg2TTVDWm5nSEpjWG9CWlVvSlBvcGJsc3BpUnh6dzBkMUU0SgpUVmVHaXZFa0RJNFpFYTVuTzZyTUZzcXJ1L21ydVQwN1FCaWd5ZzlEY3h0QU5TUTczQUhOemNRUWpZMWg3L2RiCmJ6QXQ2aWxNZXZKc2lpVFlGYjRPb0dIVW53S2tTQUJuazFNQW5oUUhvYUNuS2dXZE1vU3orQWVuYkhzYXJSMD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
{{- end }}
rules:
- operations: [ "CREATE", "DELETE" ]
apiGroups: ["*"]
apiVersions: ["*"]
resources: ["persistentvolumeclaims"]
---
apiVersion: v1
kind: Secret
metadata:
name: admission-server-certs
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "openebs.name" . }}
chart: {{ template "openebs.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
type: Opaque
data:
{{- if .Values.webhook.generateTLS }}
cert.pem: {{ b64enc $cert.Cert }}
key.pem: {{ b64enc $cert.Key }}
{{- else }}
cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ3VENDQXRXZ0F3SUJBZ0lVYk84NS9JR0ZXYTA2Vm11WVdTWjdxaTUybmRRd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1hERUxNQWtHQTFVRUJoTUNlSGd4Q2pBSUJnTlZCQWdNQVhneENqQUlCZ05WQkFjTUFYZ3hDakFJQmdOVgpCQW9NQVhneENqQUlCZ05WQkFzTUFYZ3hDekFKQmdOVkJBTU1BbU5oTVJBd0RnWUpLb1pJaHZjTkFRa0JGZ0Y0Ck1CNFhEVEU1TURNd01qQTNNek13TUZvWERUSXdNRE13TVRBM01qYzFNbG93S3pFcE1DY0dBMVVFQXhNZ1lXUnQKYVhOemFXOXVMWE5sY25abGNpMXpkbU11YjNCbGJtVmljeTV6ZG1Nd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQQpBNElCRHdBd2dnRUtBb0lCQVFERk5MRE1xKzd6eFZidDNPcnFhaVUyOFB6K25ZeFRCblA0NVhFWGFjSUpPWG1aClM1c2ZjMjM3WVNWS0I5Tlp4cXNYT08wcXpWb0xtNlZ0UDJjREpWZGZIVUQ0QXBZSC94UVBVTktrcFg3K0NVTFEKZ3VBNWowOXozdkFaeDJidXBTaXFFdE1mVldqNkh5V0Jyd2FuZW9IaVVXVVdpbmtnUXpCQzR1SWtiRkE2djYrZwp4ZzAwS09TY2NFRWY3eU5McjBvejBKVHRpRm1aS1pVVVBwK3N3WTRpRTZ3RER5bVVnTmY4SW8wUEExVkQ1TE9vCkFwQ0l2WDJyb1RNd3VkR1VrZUc1VTA2OWIrMWtQMEJsUWdDZk9TQTBmZEN3Snp0aWE1aHpaUlVIWGxFOVArN0kKekgyR0xXeHh1aHJPTlFmT25HcVRiUE13UmowekZIdmcycUo1azJ2VkFnTUJBQUdqZ2Rjd2dkUXdEZ1lEVlIwUApBUUgvQkFRREFnV2dNQk1HQTFVZEpRUU1NQW9HQ0NzR0FRVUZCd01CTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEClZSME9CQllFRklnOVFSOSsyVW12THQwQXY4MlYwZml0bU81WE1COEdBMVVkSXdRWU1CYUFGTU5HNkZ4aUxhYWYKMWV3bDVEN3VJcmIrRStIT01GOEdBMVVkRVFSWU1GYUNGR0ZrYldsemMybHZiaTF6WlhKMlpYSXRjM1pqZ2h4aApaRzFwYzNOcGIyNHRjMlZ5ZG1WeUxYTjJZeTV2Y0dWdVpXSnpnaUJoWkcxcGMzTnBiMjR0YzJWeWRtVnlMWE4yCll5NXZjR1Z1WldKekxuTjJZekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBSlpJRzd2d0RYaWxhWUFCS1Brc0oKZVJtdml4ZnYybTRVTVdzdlBKVVVJTXhHbzhtc1J6aWhBRjVuTExzaURKRDl4MjhraXZXaGUwbWE4aWVHYjY5Sgp1U1N4bys0OStaV3NVaTB3UlRDMi9ZWGlkWS9xNDU2c1g4ck9qQURDZlFUcFpYc2ZyekVWa2Q4NE0zdU5GTmhnCnMyWmxJMnNDTWljYXExNWxIWEh3akFkY2FqZit1VklwOXNHUElsMUhmZFcxWVFLc0NoU3dhdi80NUZJcFlMSVYKM3hiS2ZIbmh2czhJck5ZbTVIenAvVVdvcFN1Tm5tS1IwWGo3cXpGcllUYzV3eHZ3VVZrKzVpZFFreWMwZ0RDcApGbkFVdEdmaUVUQnBhU3pISjQ4STZqUFpneVE0NzlZMmRxRUtXcWtyc0RkZ2tVcXlnNGlQQ0YwWC9YVU9YU3VGClNnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
key.pem: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeFRTd3pLdnU4OFZXN2R6cTZtb2xOdkQ4L3AyTVV3WnorT1Z4RjJuQ0NUbDVtVXViCkgzTnQrMkVsU2dmVFdjYXJGemp0S3MxYUM1dWxiVDluQXlWWFh4MUErQUtXQi84VUQxRFNwS1YrL2dsQzBJTGcKT1k5UGM5N3dHY2RtN3FVb3FoTFRIMVZvK2g4bGdhOEdwM3FCNGxGbEZvcDVJRU13UXVMaUpHeFFPcit2b01ZTgpOQ2prbkhCQkgrOGpTNjlLTTlDVTdZaFptU21WRkQ2ZnJNR09JaE9zQXc4cGxJRFgvQ0tORHdOVlErU3pxQUtRCmlMMTlxNkV6TUxuUmxKSGh1Vk5PdlcvdFpEOUFaVUlBbnprZ05IM1FzQ2M3WW11WWMyVVZCMTVSUFQvdXlNeDkKaGkxc2Nib2F6alVIenB4cWsyenpNRVk5TXhSNzROcWllWk5yMVFJREFRQUJBb0lCQVFDcXRIT2VsKzRlUWVKLwp3RTN4WUxTYUhIMURnZWxvTFJ2U2hmb2hSRURjYjA0ZExsODNHRnBKMGN2UGkzcWVLZVVNRXhEcGpoeTJFNk5kCk1CYmhtRDlMYkMxREFpb1EvZkxGVnpjZm9zcU02RU5YN3hKZGdQcEwyTjJKMHh2ODFDYWhJZTV6SHlIaDhYZ3MKQysvOHBZVXMvVHcrQ052VTI1UTVNZUNEbXViUUVuemJqQ3lIQm5SVmw1dVF6bk8zWEt2NEVyejdBT1BBWmFJTQozYmNFNC83c1JGczM4SE1aMVZTZ2JxUi9rM1N5SEFzNXhNWHVtY0hMMTBkK0FVK21BQ0svUThpdWJHMm9kNnJiCko3S0RONmFuUzRPZk4zZ3RtaEppN3ZsTjJVL3JycHdnblI0d3Y0bmV4U1ZlamYzQU9iaU9jNnYzZ0xJbXJ2Q3oKNzFETDFPaTVBb0dCQU9HeFp2RWFUSFFnNFdaQVJZbXlGZEtZeXY2MURDc1JycElmUlh3Q1YrcnBZTFM2NlV4SQprWHJISlNreWFqTjNTOXVsZUtUTXRWaU5wY2JCcjVNZ0lOaFFvdThRc2dpZlZHWFJGQ3d0OXJ3MGNDbEc1Y2pCClZ3bUQzYWFBTGR5WVQvbHc4dnk1Zndqc1hFZHd1OEQ2cC9rd0ZzMmlwZWQ4QVFPUVZlQ1dPeXF6QW9HQkFOK3YKL2VxKzZ5NHhPZ2ZtQ01KcHJ0THBBN1J0M3FsU0JKbEw3RkNsQXRCeUUxazBPTVIrZTdhSDBVTDdYWVR4YlBLOApBYnRZR3lzWDkydGM3RHlaU0k0cDFjUHhvcHdzNkt3N0RYZUt0YTNnVkRmSXVuZ3haR25XWjk2WmNjcEhyVzgyCnl5OTk5dTQ2WE1tQWZwSzEvbGxjdGdLem5FUVp5ZkhEUmlWdVVQTlhBb0dCQUxkMGxORDNKNTVkKzlvNTlFeHgKVGZ2WjUyZ1Rrc2lQbnU5NEsrc1puSTEvRnZUUjJrSC8yd0dLVDFLbGdGNUZZb3d3ZlZpNGJkQ0ZrM04walZ0eQppa0JMaTZYNFZEOWVCQ1NmUjE2Q0hrWHQraDRUVzBWTW80dEFmVE9TamJUNnVrZHc0Sk05MVYxVGc4OHVlKy9wCjBCQm1YcUxZeXpMWFFadTcvNUtIaTZDeEFvR0FaTWV2R0E5eWVEcFhrZTF6THR4Y2xzdkREb3lkMEIyUzB0cGgKR3lodEx5cm1Tcjk3Z0JRWWV2R1FONlIyeXduVzh6bi9jYi9OWmNvRGdFeTZac2NNNkhneXhuaGNzZzZOdWVOVgpPdkcwenlVTjdLQTBXeWl0dS8yTWlMOExoSDVzeG5taWE4Qk4rNkV4NHR0UXE1cnhnS09Eb1kzNHJyb0x3VEFnCnI0YVhWRHNDZ1lBYnRwZXhvNTJ4VmJkTzZCL3B5RUU2cEJCS1FkK3hiVkJNMDZwUzArSlFudSt5SVBmeXFhekwKbGdYTEhBSm01bU9Sb2RFRHk0WlVJRkM5RmhraGcrV0ZzSHJCOXpGU1IrZFc2Uzg1eFA4ZGxHVE42S2cydXJNQQowNTRCQUh4RWhPNU9QblNqT0VHSmQwYTdGQmc1UlkxN0RRQlFxV25SZENURHlDWmU0OStLcWc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
{{- end }}
# Default values for openebs.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
rbac:
# Specifies whether RBAC resources should be created
create: true
serviceAccount:
create: true
name:
release:
# "openebs.io/version" label for control plane components
version: "0.9.0"
image:
pullPolicy: IfNotPresent
apiserver:
image: "quay.io/openebs/m-apiserver"
imageTag: "0.9.0"
replicas: 1
ports:
externalPort: 5656
internalPort: 5656
sparse:
enabled: "false"
nodeSelector: {}
tolerations: []
affinity: {}
healthCheck:
initialDelaySeconds: 30
periodSeconds: 60
provisioner:
image: "quay.io/openebs/openebs-k8s-provisioner"
imageTag: "0.9.0"
replicas: 1
nodeSelector: {}
tolerations: []
affinity: {}
healthCheck:
initialDelaySeconds: 30
periodSeconds: 60
localprovisioner:
image: "quay.io/openebs/provisioner-localpv"
imageTag: "0.9.0"
replicas: 1
nodeSelector: {}
tolerations: []
affinity: {}
healthCheck:
initialDelaySeconds: 30
periodSeconds: 60
snapshotOperator:
controller:
image: "quay.io/openebs/snapshot-controller"
imageTag: "0.9.0"
provisioner:
image: "quay.io/openebs/snapshot-provisioner"
imageTag: "0.9.0"
replicas: 1
upgradeStrategy: "Recreate"
nodeSelector: {}
tolerations: []
affinity: {}
healthCheck:
initialDelaySeconds: 30
periodSeconds: 60
ndm:
image: "quay.io/openebs/node-disk-manager-amd64"
imageTag: "v0.3.5"
sparse:
path: "/var/openebs/sparse"
size: "10737418240"
count: "1"
filters:
excludeVendors: "CLOUDBYT,OpenEBS"
excludePaths: "loop,fd0,sr0,/dev/ram,/dev/dm-,/dev/md"
nodeSelector: {}
healthCheck:
initialDelaySeconds: 30
periodSeconds: 60
webhook:
image: "quay.io/openebs/admission-server"
imageTag: "0.9.0"
generateTLS: true
replicas: 1
nodeSelector: {}
tolerations: []
affinity: {}
jiva:
image: "quay.io/openebs/jiva"
imageTag: "0.9.0"
replicas: 3
cstor:
pool:
image: "quay.io/openebs/cstor-pool"
imageTag: "0.9.0"
poolMgmt:
image: "quay.io/openebs/cstor-pool-mgmt"
imageTag: "0.9.0"
target:
image: "quay.io/openebs/cstor-istgt"
imageTag: "0.9.0"
volumeMgmt:
image: "quay.io/openebs/cstor-volume-mgmt"
imageTag: "0.9.0"
policies:
monitoring:
enabled: true
image: "quay.io/openebs/m-exporter"
imageTag: "0.9.0"
analytics:
enabled: true
# Specify in hours the duration after which a ping event needs to be sent.
pingInterval: "24h"
apiVersion: v1
appVersion: "1.5.0"
description: Windocks SQL Server containers
name: windocks
version: 1.5.0
home: https://www.windocks.com/
icon: https://windocks.com/img/windockslogo.png
sources:
- https://github.com/WinDocks/rancher
maintainers:
- name: WinDocks
email: support@windocks.com
Windocks SQL proxy
The Windocks SQL proxy delivers Windows SQL Server containers with database clones to a cluster. The proxy:
- Creates a Windows SQL Server container on a designated external machine that already has Windocks (cloud or on-prem)
- Clones terabyte-sized SQL Server databases in seconds and delivers them to the container
- Proxies SQL traffic from the client applications (users, .Net apps, SQL Server Management Studio, NodeJs apps, etc.) to the Windocks container
- Enables the client applications to work on the cloned databases (usually production database clones)
- Deletes the Windocks SQL Server container when the SQL proxy pod / container is deleted
Pre-requisites
1. Windocks installed on a machine accessible to the cluster
2. For TLS connections, the required TLS setup on the Windocks machine and an SSL certificate and key for the proxy
Steps
1. Enter the values for the proxy image name/tag and environment variables (Windocks host IP, Windocks server port, etc.). Use the default values where provided
2. Create the auth secret: kubectl create secret generic proxy-secrets --from-literal=WINDOCKS_REQUIRED_USERNAME='windocks-api-username' --from-literal=WINDOCKS_REQUIRED_PASSWORD='windocks-api-password' --from-literal=WINDOCKS_REQUIRED_CONTAINER_SAPASSWORD='sa-password-to-set-for-windocks-container'
3. For TLS: Create a secret containing tls.key and tls.crt, both of which are mounted as files into the container (see the example after these steps). Separate configuration is required on the Windocks server
4. Deploy the app and use SQL Server Management Studio or Azure Management Studio to connect to <Windocks-host-IP>,3087 using SQL auth: sa and the password above
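Example for step 3 (a sketch only: the secret name matches the chart's default sslSecretName, while the key/cert file paths and namespace are assumptions to adjust for your setup):
kubectl create secret generic proxy-secret-ssl --from-file=tls.key=./tls.key --from-file=tls.crt=./tls.crt -n <namespace>
The two keys are mounted into the container as the files tls.key and tls.crt.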
Email support@windocks.com for issues
categories:
- Database
- SQL Server
labels:
io.rancher.certified: partner
questions:
- variable: image.repository
default: "windocks/windocks-sql-server-proxy"
description: "Docker image name"
type: string
required: true
label: Image Name for Sqlproxy
group: "Sqlproxy Settings"
- variable: image.tag
default: "1.5.0"
description: "Image tag"
type: string
required: true
label: Image tag
group: "Sqlproxy Settings"
- variable: image.pullPolicy
default: "Always"
description: "Image pull policy"
type: enum
required: true
options:
- "Always"
- "IfNotPresent"
label: Image pull policy
group: "Sqlproxy Settings"
- variable: sqlproxy.windocksServerHostname
default: "34.220.44.23"
description: "IP or hostname for Windocks server"
type: string
required: true
label: Windocks Server hostname
group: "Sqlproxy Settings"
- variable: sqlproxy.windocksServerPort
default: "3000"
description: "Port for Windocks API"
type: string
required: false
label: Windocks port
group: "Sqlproxy Settings"
- variable: sqlproxy.windocksImageName
default: "clone"
description: "Windocks image name from which SQL Server containers and database clones are created"
type: string
required: true
label: Windocks image name
group: "Sqlproxy Settings"
- variable: sqlproxy.windocksContainerName
default: ""
description: "Name to use for Windocks container created by Sqlproxy"
type: string
required: false
label: Windocks container name
group: "Sqlproxy Settings"
- variable: sqlproxy.windocksPersistentContainerPort
default: ""
description: "Set this if you do not want Sqlproxy to create and manage the Windocks container. You must create the Windocks container using the Windocks web app or a docker client"
type: string
required: false
label: Pre-existing Windocks container port
group: "Sqlproxy Settings"
- variable: sqlproxy.port
default: "3087"
description: "Container port for access to Windocks Sql proxy"
type: string
required: true
label: Sqlproxy listening port
group: "Sqlproxy Settings"
- variable: sqlproxy.authSecretName
default: ""
description: "Secret; WINDOCKS_REQUIRED_USERNAME='' WINDOCKS_REQUIRED_PASSWORD='' WINDOCKS_REQUIRED_CONTAINER_SAPASSWORD=''"
type: string
label: Secret for Windocks API user, passwd, and desired SQL sa password
required: true
group: "Sqlproxy Settings"
- variable: sqlproxy.tls
default: ""
description: "Set to true for TLS"
type: string
label: TLS connection
required: false
group: "Sqlproxy Settings"
- variable: sqlproxy.sslSecretName
default: ""
description: "Secret - in a file with tls.key:... and tls.crt:......"
type: string
label: Secret for ssl cert and key (files)
required: false
group: "Sqlproxy Settings"
- variable: sqlproxy.localHostNameForTls
default: ""
description: "For TLS connections, hostname for the sql proxy"
type: string
label: For TLS connections, hostname for sql proxy service
required: false
group: "Sqlproxy Settings"
- variable: service.port
default: "3087"
description: "Service port for access to Windocks Sql proxy"
type: string
label: Windocks SQL proxy NodePort number
required: true
group: "Sqlproxy Settings"
- variable: service.loadBalancerIP
default: ""
description: "Load balancer IP"
type: string
label: Load balancer IP
required: false
group: "Sqlproxy Settings"
- variable: service.type
default: "ClusterIP"
description: "MySQL K8s Service type"
type: enum
group: "Services and Load Balancing"
options:
- "ClusterIP"
- "LoadBalancer"
- "NodePort"
required: true
label: Sqlproxy Service Type
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range .Values.ingress.hosts }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "windocks-sql-proxy.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch its status by running 'kubectl get svc -w {{ template "windocks-sql-proxy.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "windocks-sql-proxy.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "windocks-sql-proxy.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
{{- end }}
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "windocks-sql-proxy.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "windocks-sql-proxy.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "windocks-sql-proxy.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: {{ template "windocks-sql-proxy.fullname" . }}
labels:
app: {{ template "windocks-sql-proxy.name" . }}
chart: {{ template "windocks-sql-proxy.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ template "windocks-sql-proxy.name" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "windocks-sql-proxy.name" . }}
release: {{ .Release.Name }}
spec:
{{- if contains "true" .Values.sqlproxy.tls }}
volumes:
- name: proxy-secret-ssl
secret:
secretName: {{ .Values.sqlproxy.sslSecretName }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: tcp-proxy
containerPort: {{ .Values.sqlproxy.port }}
protocol: TCP
envFrom:
- secretRef:
name: {{ .Values.sqlproxy.authSecretName }}
env:
- name: WINDOCKS_REQUIRED_HOSTNAME
value: {{ .Values.sqlproxy.windocksServerHostname | quote }}
- name: WINDOCKS_OPTIONAL_PORT
value: {{ .Values.sqlproxy.windocksServerPort | quote}}
- name: WINDOCKS_REQUIRED_IMAGE_NAME
value: {{ .Values.sqlproxy.windocksImageName | quote}}
- name: WINDOCKS_SQL_PROXY_OPTIONAL_LISTENING_PORT
value: {{ .Values.sqlproxy.port | quote}}
- name: WINDOCKS_SQL_PROXY_OPTIONAL_LOCAL_HOSTNAME_FOR_TLS
value: {{ .Values.sqlproxy.localHostNameForTls | quote }}
- name: WINDOCKS_SQL_PROXY_OPTIONAL_TLS
value: {{ .Values.sqlproxy.tls | quote}}
# - name: WINDOCKS_OPTIONAL_CONTAINER_NAME
# value: {{ .Values.sqlproxy.windocksContainerName }}
# If WINDOCKS_OPTIONAL_PERSISTENT_CONTAINER_PORT is set, then the proxy will not create or delete the Windocks container.
# - name: WINDOCKS_OPTIONAL_PERSISTENT_CONTAINER_PORT
# value: {{ .Values.sqlproxy.windocksPersistentContainerPort }}
{{- if contains "true" .Values.sqlproxy.tls }}
volumeMounts:
- mountPath: "/usr/src/app/ssl"
name: proxy-secret-ssl
readOnly: true
{{- end }}
#livenessProbe:
#tcpSocket:
#port: {{ .Values.sqlproxy.port }}
#readinessProbe:
#tcpSocket:
#port: {{ .Values.sqlproxy.port }}
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "windocks-sql-proxy.fullname" . -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
app: {{ template "windocks-sql-proxy.name" . }}
chart: {{ template "windocks-sql-proxy.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ . }}
http:
paths:
- path: {{ $ingressPath }}
backend:
serviceName: {{ $fullName }}
servicePort: http
{{- end }}
{{- end }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "windocks-sql-proxy.fullname" . }}
labels:
app: {{ template "windocks-sql-proxy.name" . }}
chart: {{ template "windocks-sql-proxy.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
sessionAffinity: ClientIP
type: {{ .Values.service.type }}
{{- if and (hasKey .Values.service "loadBalancerIP") (eq .Values.service.type "LoadBalancer") }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
ports:
- port: {{ .Values.service.port }}
targetPort: {{ .Values.service.targetPort }}
protocol: TCP
name: tcp
selector:
app: {{ template "windocks-sql-proxy.name" . }}
release: {{ .Release.Name }}
# Default values for windocks-sql-proxy.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: windocks/windocks-sql-server-proxy
tag: 1.5.0
pullPolicy: Always
service:
type: LoadBalancer
port: 3087
securePort: 3088
targetPort: 3087
targetSecurePort: 3088
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
path: /
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
sqlproxy:
port: "3087"
windocksServerHostname: "34.220.44.23"
windocksServerPort: "3000"
windocksImageName: "clone"
# windocksContainerName: "mycontainername"
# windocksPersistentContainerPort: "10122"
# windocksPersistentContainerPort is the port on which the proxy expects the pre-created Windocks container
tls: "false"
localHostNameForTls: ""
# Secret names
authSecretName: proxy-secrets
sslSecretName: proxy-secret-ssl
# kubectl create secret generic proxy-secrets --from-literal=WINDOCKS_REQUIRED_USERNAME='user' --from-literal=WINDOCKS_REQUIRED_PASSWORD='pass' --from-literal=WINDOCKS_REQUIRED_CONTAINER_SAPASSWORD='sapass'
# kubectl create -f file-that-contains-tls.key-and-tls.crt-and-name-proxy-secret-ssl
apiVersion: v1
appVersion: "1.3.0"
description: Cloud Native storage for containers
name: storageos-operator
version: 0.2.10
tillerVersion: ">=2.10.0"
keywords:
- storage
- block-storage
- volume
- operator
home: https://storageos.com
icon: https://storageos.com/wp-content/themes/storageOS/images/logo.svg
sources:
- https://github.com/storageos
maintainers:
- name: croomes
email: simon.croome@storageos.com
- name: darkowlzz
email: sunny.gogoi@storageos.com
MIT License
Copyright (c) 2019 StorageOS
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# StorageOS Operator Helm Chart
> **Note**: This is the recommended chart to use for installing StorageOS. It
installs the StorageOS Operator, and then installs a StorageOS cluster with a
minimal configuration. Other Helm charts
([storageoscluster-operator](https://github.com/storageos/charts/tree/master/stable/storageoscluster-operator)
and
[storageos](https://github.com/storageos/charts/tree/master/stable/storageos))
will be deprecated.
[StorageOS](https://storageos.com) is a software-based storage platform
designed for cloud-native applications. By deploying StorageOS on your
Kubernetes cluster, local storage from cluster nodes is aggregated into a
distributed pool, and persistent volumes created from it using the native
Kubernetes volume driver are available instantly to pods wherever they move in
the cluster.
Features such as replication, encryption and caching help protect data and
maximise performance.
This chart installs a StorageOS Cluster Operator which helps deploy and
configure a StorageOS cluster on Kubernetes.
## Prerequisites
- Helm 2.10+
- Kubernetes 1.9+.
- Privileged mode containers (enabled by default)
- Kubernetes 1.9 only:
- Feature gate: MountPropagation=true. This can be done by appending
`--feature-gates MountPropagation=true` to the kube-apiserver and kubelet
services.
Refer to the [StorageOS prerequisites
docs](https://docs.storageos.com/docs/prerequisites/overview) for more
information.
## Installing the chart
```console
# Add storageos charts repo.
$ helm repo add storageos https://charts.storageos.com
# Install the chart in a namespace.
$ helm install storageos/storageos-operator --namespace storageos-operator
```
This will install the StorageOSCluster operator in `storageos-operator`
namespace and deploys StorageOS with a minimal configuration.
> **Tip**: List all releases using `helm list`
## Creating a StorageOS cluster manually
The Helm chart supports a subset of StorageOSCluster custom resource parameters.
For advanced configurations, you may wish to create the cluster resource
manually and only use the Helm chart to install the Operator.
To disable auto-provisioning the cluster with the Helm chart, set
`cluster.create` to false:
```yaml
cluster:
...
create: false
```
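Equivalently, the same override can be passed at install time (a sketch reusing the install command shown earlier):

```console
$ helm install storageos/storageos-operator --namespace storageos-operator \
    --set cluster.create=false
```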
Create a secret to store storageos cluster secrets:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: "storageos-api"
namespace: "default"
labels:
app: "storageos"
type: "kubernetes.io/storageos"
data:
# echo -n '<secret>' | base64
apiAddress: c3RvcmFnZW9zOjU3MDU=
apiUsername: c3RvcmFnZW9z
apiPassword: c3RvcmFnZW9z
```
Create a `StorageOSCluster` custom resource and refer the above secret in
`secretRefName` and `secretRefNamespace` fields.
```yaml
apiVersion: "storageos.com/v1"
kind: "StorageOSCluster"
metadata:
name: "example-storageos"
namespace: "default"
spec:
secretRefName: "storageos-api"
secretRefNamespace: "default"
```
Once the `StorageOSCluster` configuration is applied, the StorageOSCluster
operator will create a StorageOS cluster in the `storageos` namespace by
default.
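A minimal sketch of applying the two manifests above (the file names are assumptions):

```console
$ kubectl apply -f storageos-api-secret.yaml
$ kubectl apply -f storageos-cluster.yaml
```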
Most installations will want to use the default [CSI](https://kubernetes-csi.github.io/docs/)
driver. To use the [Native Driver](https://kubernetes.io/docs/concepts/storage/volumes/#storageos)
instead, disable CSI:
```yaml
spec:
...
csi:
enable: false
...
```
in the above `StorageOSCluster` resource config.
Learn more about advanced configuration options
[here](https://github.com/storageos/cluster-operator/blob/master/README.md#storageoscluster-resource-configuration).
To check cluster status, run:
```bash
$ kubectl get storageoscluster
NAME READY STATUS AGE
example-storageos 3/3 Running 4m
```
All the events related to this cluster are logged as part of the cluster object
and can be viewed by describing the object.
```bash
$ kubectl describe storageoscluster example-storageos
Name: example-storageos
Namespace: default
Labels: <none>
...
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ChangedStatus 1m (x2 over 1m) storageos-operator 0/3 StorageOS nodes are functional
Normal ChangedStatus 35s storageos-operator 3/3 StorageOS nodes are functional. Cluster healthy
```
## Configuration
The following tables lists the configurable parameters of the StorageOSCluster
Operator chart and their default values.
Parameter | Description | Default
--------- | ----------- | -------
`operator.image.repository` | StorageOS Operator container image repository | `storageos/cluster-operator`
`operator.image.tag` | StorageOS Operator container image tag | `1.3.0`
`operator.image.pullPolicy` | StorageOS Operator container image pull policy | `IfNotPresent`
`podSecurityPolicy.enabled` | If true, create & use PodSecurityPolicy resources | `false`
`podSecurityPolicy.annotations` | Specify pod annotations in the pod security policy | `{}`
`cluster.create` | If true, auto-create the StorageOS cluster | `true`
`cluster.name` | Name of the storageos deployment | `storageos`
`cluster.namespace` | Namespace to install the StorageOS cluster into | `kube-system`
`cluster.secretRefName` | Name of the secret containing StorageOS API credentials | `storageos-api`
`cluster.admin.username` | Username to authenticate to the StorageOS API with | `storageos`
`cluster.admin.password` | Password to authenticate to the StorageOS API with |
`cluster.sharedDir` | The path shared into the kubelet container when running kubelet in a container |
`cluster.kvBackend.embedded` | Use StorageOS embedded etcd | `true`
`cluster.kvBackend.address` | List of etcd targets, in the form ip[:port], separated by commas |
`cluster.kvBackend.backend` | Key-Value store backend name | `etcd`
`cluster.kvBackend.tlsSecretName` | Name of the secret containing kv backend tls cert |
`cluster.kvBackend.tlsSecretNamespace` | Namespace of the secret containing kv backend tls cert |
`cluster.nodeSelectorTerm.key` | Key of the node selector term used for pod placement |
`cluster.nodeSelectorTerm.value` | Value of the node selector term used for pod placement |
`cluster.toleration.key` | Key of the pod toleration parameter |
`cluster.toleration.value` | Value of the pod toleration parameter |
`cluster.disableTelemetry` | If true, no telemetry data will be collected from the cluster | `false`
`cluster.images.node.repository` | StorageOS Node container image repository | `storageos/node`
`cluster.images.node.tag` | StorageOS Node container image tag | `1.3.0`
`cluster.csi.enable` | If true, CSI driver is enabled | `true`
`cluster.csi.deploymentStrategy` | Whether CSI helpers should be deployed as a `deployment` or `statefulset` | `deployment`
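Any of these parameters can be overridden at install time with `--set`, or with a custom values file passed via `-f`. For example (the release name and chart path are illustrative):
```bash
$ helm install --name my-release --namespace storageos-operator ./storageos-operator \
    --set cluster.namespace=kube-system \
    --set cluster.disableTelemetry=true
```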
## Deleting a StorageOS Cluster
Deleting the `StorageOSCluster` custom resource object deletes the
StorageOS cluster and all of its associated resources.
In the above example,
```bash
kubectl delete storageoscluster example-storageos
```
would delete the custom resource and the cluster.
## Uninstalling the Chart
To uninstall/delete the StorageOS Cluster Operator deployment:
```bash
helm delete --purge <release-name>
```
Learn more about configuring the StorageOS Operator on
[GitHub](https://github.com/storageos/cluster-operator).
# StorageOS Operator
[StorageOS](https://storageos.com) is a cloud native, software-defined storage
platform that transforms commodity server or cloud based disk capacity into
enterprise-class persistent storage for containers. StorageOS is ideal for
deploying databases, message buses, and other mission-critical stateful
solutions, where rapid recovery and fault tolerance are essential.
The StorageOS Operator installs and manages StorageOS within a cluster.
Cluster nodes may contribute local or attached disk-based storage into a
distributed pool, which is then available to all cluster members via a
global namespace.
By default, a minimal configuration of StorageOS is installed. To set advanced
configurations, disable the default installation of StorageOS and create a
custom StorageOSCluster resource
([documentation](https://docs.storageos.com/docs/reference/cluster-operator/examples)).
`Note: The StorageOS Operator must be installed in the System Project with a Cluster Role.`
podSecurityPolicy:
enabled: true
cluster:
# Disable cluster creation in CI; only the operator should be installed.
create: false
categories:
- storage
labels:
io.rancher.certified: partner
questions:
- variable: k8sDistro
default: rancher
description: "Kubernetes Distribution"
show_if: false
# Operator image configuration.
- variable: defaultImage
default: true
description: "Use default Docker images"
label: Use Default Images
type: boolean
show_subquestion_if: false
group: "Container Images"
subquestions:
- variable: operator.image.pullPolicy
default: IfNotPresent
description: "Operator Image pull policy"
type: enum
label: Operator Image pull policy
options:
- IfNotPresent
- Always
- Never
- variable: operator.image.repository
default: "storageos/cluster-operator"
description: "StorageOS operator image name"
type: string
label: StorageOS Operator Image Name
- variable: operator.image.tag
default: "1.3.0"
description: "StorageOS Operator image tag"
type: string
label: StorageOS Operator Image Tag
# Default minimal cluster configuration.
- variable: cluster.create
default: true
type: boolean
description: "Install StorageOS cluster with minimal configurations"
label: "Install StorageOS cluster"
show_subquestion_if: true
group: "StorageOS Cluster"
subquestions:
# CSI configuration.
- variable: cluster.csi.enable
default: true
description: "Use Container Storage Interface (CSI) driver"
label: Use CSI Driver
type: boolean
# Cluster metadata.
- variable: cluster.name
default: "storageos"
description: "Name of the StorageOS cluster deployment"
type: string
label: Name
- variable: cluster.namespace
default: "kube-system"
description: "Namespace of the StorageOS cluster deployment. `kube-system` recommended to avoid pre-emption when node is under load."
type: string
label: Namespace
# Node container image.
- variable: cluster.images.node.repository
default: "storageos/node"
description: "StorageOS node container image name"
type: string
label: StorageOS Node Container Image Name
- variable: cluster.images.node.tag
default: "1.3.0"
description: "StorageOS Node container image tag"
type: string
label: StorageOS Node Container Image Tag
# Credentials.
- variable: cluster.admin.username
default: "admin"
description: "Username of the StorageOS administrator account"
type: string
label: Username
- variable: cluster.admin.password
default: ""
description: "Password of the StorageOS administrator account. If empty, a random password will be generated."
type: password
label: Password
# Telemetry.
- variable: cluster.disableTelemetry
default: false
type: boolean
description: "Disable telemetry data collection. See https://docs.storageos.com/docs/reference/telemetry for more information."
label: Disable Telemetry
# KV store backend.
- variable: cluster.kvBackend.embedded
default: true
type: boolean
description: "Use embedded KV store for testing. Select false to use external etcd for production deployments."
label: "Use embedded KV store"
- variable: cluster.kvBackend.address
default: "10.0.0.1:2379"
description: "List of etcd targets, in the form ip[:port], separated by commas. Prefer multiple direct endpoints over a single load-balanced endpoint. Only used if not using embedded KV store."
type: string
label: External etcd address(es)
show_if: "cluster.kvBackend.embedded=false"
- variable: cluster.kvBackend.tls
default: false
type: boolean
description: "Enable etcd TLS"
label: "TLS should be configured for external etcd to protect configuration data (Optional)."
show_if: "cluster.kvBackend.embedded=false"
- variable: cluster.kvBackend.tlsSecretName
required: false
default: ""
description: "Name of the secret that contains the etcd TLS certs. This secret is typically shared with etcd."
type: string
label: External etcd TLS secret name
show_if: "cluster.kvBackend.tls=true"
- variable: cluster.kvBackend.tlsSecretNamespace
required: false
default: ""
description: "Namespace of the secret that contains the etcd TLS certs. This secret is typically shared with etcd."
type: string
label: External etcd TLS secret namespace
show_if: "cluster.kvBackend.tls=true"
# Node Selector Term.
- variable: cluster.nodeSelectorTerm.key
required: false
default: ""
description: "Key of the node selector term match expression used to select the nodes to install StorageOS on, e.g. `node-role.kubernetes.io/worker`"
type: string
label: Node selector term key
- variable: cluster.nodeSelectorTerm.value
required: false
default: ""
description: "Value of the node selector term match expression used to select the nodes to install StorageOS on."
type: string
label: Node selector term value
# Pod tolerations.
- variable: cluster.toleration.key
required: false
default: ""
description: "Key of pod toleration with operator 'Equal' and effect 'NoSchedule'"
type: string
label: Pod toleration key
- variable: cluster.toleration.value
required: false
default: ""
description: "Value of pod toleration with operator 'Equal' and effect 'NoSchedule'"
type: string
label: Pod toleration value
# Shared Directory
- variable: cluster.sharedDir
required: false
default: "/var/lib/kubelet/plugins/kubernetes.io~storageos"
description: "Shared Directory should be set if running kubelet in a container. This should be the path shared into to kubelet container, typically: '/var/lib/kubelet/plugins/kubernetes.io~storageos'. If not set, defaults will be used."
type: string
label: Shared Directory
StorageOS Operator deployed.
If you disabled automatic cluster creation, you can deploy a StorageOS cluster
by creating a custom StorageOSCluster resource:
1. Create a secret containing StorageOS cluster credentials. This secret
contains the API username and password that will be used to authenticate to the
StorageOS cluster. Base64 encode the username and password that you want to use
for your StorageOS cluster.
apiVersion: v1
kind: Secret
metadata:
name: storageos-api
namespace: default
labels:
app: storageos
type: kubernetes.io/storageos
data:
# echo -n '<secret>' | base64
apiUsername: c3RvcmFnZW9z
apiPassword: c3RvcmFnZW9z
2. Create a StorageOS custom resource that references the secret created
above (storageos-api in the above example). When the resource is created, the
cluster will be deployed.
apiVersion: storageos.com/v1
kind: StorageOSCluster
metadata:
name: example-storageos
namespace: default
spec:
secretRefName: storageos-api
secretRefNamespace: default
csi:
enable: true
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "storageos.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "storageos.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "storageos.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "storageos.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "storageos.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{- if .Values.cluster.create }}
# ClusterRole, ClusterRoleBinding and ServiceAccounts have hook-failed in their
# hook-delete-policy to make it easy to rerun the whole setup even after a
# failure; otherwise the rerun fails with an existing-resource error.
# The before-hook-creation hook delete policy ensures any other leftover resources
# from a previous run get deleted when run again.
# The Job resources will not be deleted, to help investigate the failure.
# Since the resources created by the operator are not managed by the chart, each
# of them must be deleted individually in separate jobs.
apiVersion: v1
kind: ServiceAccount
metadata:
name: storageos-cleanup
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": "hook-succeeded, hook-failed, before-hook-creation"
"helm.sh/hook-weight": "1"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: storageos:cleanup
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": "hook-succeeded, hook-failed, before-hook-creation"
"helm.sh/hook-weight": "1"
rules:
# Using apiGroup "apps" for daemonsets fails, and the permission error indicates
# that it's in group "extensions". Not sure if this is Job-specific behavior,
# because the daemonsets deployed by the operator use the "apps" apiGroup.
- apiGroups:
- extensions
resources:
- daemonsets
- deployments
verbs:
- delete
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- delete
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
- rolebindings
- clusterroles
- clusterrolebindings
verbs:
- delete
- apiGroups:
- storage.k8s.io
resources:
- storageclasses
verbs:
- delete
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- delete
- apiGroups:
- ""
resources:
- serviceaccounts
- secrets
- services
verbs:
- delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: storageos:cleanup
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": "hook-succeeded, hook-failed, before-hook-creation"
"helm.sh/hook-weight": "2"
subjects:
- name: storageos-cleanup
kind: ServiceAccount
namespace: {{ .Release.Namespace }}
roleRef:
name: storageos:cleanup
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
---
# Iterate through the Values.cleanup list and create jobs to delete all the
# unmanaged resources of the cluster.
{{- range .Values.cleanup }}
apiVersion: batch/v1
kind: Job
metadata:
name: "storageos-{{ .name }}-cleanup"
namespace: {{ .namespace }}
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": "hook-succeeded, before-hook-creation"
"helm.sh/hook-weight": "3"
spec:
template:
spec:
serviceAccountName: storageos-cleanup
containers:
- name: "storageos-{{ .name }}-cleanup"
image: bitnami/kubectl:1.14.1
command:
- kubectl
- -n
- {{ $.Values.cluster.namespace }}
- delete
{{- range .command }}
- {{ . | quote }}
{{- end }}
- --ignore-not-found=true
restartPolicy: Never
backoffLimit: 4
---
{{- end }}
{{- end }}
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: jobs.storageos.com
annotations:
"helm.sh/hook": crd-install
spec:
group: storageos.com
names:
kind: Job
listKind: JobList
plural: jobs
singular: job
scope: Namespaced
version: v1
validation:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata: {}
spec:
properties:
image:
type: string
args: {}
mountPath:
type: string
hostPath:
type: string
completionWord:
type: string
labelSelector:
type: string
nodeSelectorTerms: {}
tolerations: {}
status:
properties:
completed:
type: boolean
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "storageos.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: 1
selector:
matchLabels:
app: {{ template "storageos.name" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "storageos.name" . }}
release: {{ .Release.Name }}
spec:
serviceAccountName: {{ template "storageos.serviceAccountName" . }}
containers:
- name: storageos-operator
image: "{{ .Values.operator.image.repository }}:{{ .Values.operator.image.tag }}"
imagePullPolicy: {{ .Values.operator.image.pullPolicy }}
ports:
- containerPort: 60000
name: metrics
command:
- cluster-operator
env:
- name: WATCH_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: OPERATOR_NAME
value: "cluster-operator"
{{- if .Values.podSecurityPolicy.enabled }}
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "storageos.fullname" . }}-psp
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
{{- if .Values.podSecurityPolicy.annotations }}
{{ toYaml .Values.podSecurityPolicy.annotations | indent 4 }}
{{- end }}
spec:
volumes:
- '*'
runAsUser:
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
fsGroup:
rule: 'RunAsAny'
{{- end }}
# Role for storageos operator
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: storageos:operator
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups:
- storageos.com
resources:
- storageosclusters
- storageosupgrades
- jobs
verbs:
- "*"
- apiGroups:
- apps
resources:
- statefulsets
- daemonsets
- deployments
- replicasets
verbs:
- "*"
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- get
- update
- create
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- watch
- get
- update
- patch
- delete
- create
- apiGroups:
- ""
resources:
- events
- namespaces
- serviceaccounts
- secrets
- services
- persistentvolumeclaims
- persistentvolumes
- configmaps
- replicationcontrollers
- pods/binding
- endpoints
verbs:
- create
- patch
- get
- list
- delete
- watch
- update
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
- rolebindings
- clusterroles
- clusterrolebindings
verbs:
- create
- delete
- apiGroups:
- storage.k8s.io
resources:
- storageclasses
- volumeattachments
- csinodeinfos
verbs:
- create
- delete
- watch
- list
- get
- update
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- create
- delete
- apiGroups:
- csi.storage.k8s.io
resources:
- csidrivers
verbs:
- create
- delete
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- list
- watch
# OpenShift specific rule.
- apiGroups:
- security.openshift.io
resources:
- securitycontextconstraints
verbs:
- create
- delete
- update
- get
- use
resourceNames:
- privileged
---
# Bind operator service account to storageos-operator role
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: storageos:operator
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
subjects:
- kind: ServiceAccount
name: {{ template "storageos.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: storageos:operator
apiGroup: rbac.authorization.k8s.io
{{- if .Values.podSecurityPolicy.enabled }}
---
# ClusterRole for using pod security policy.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: storageos:psp-user
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
rules:
- apiGroups: ["extensions"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames:
- {{ template "storageos.fullname" . }}-psp
---
# Bind pod security policy cluster role to the operator service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: storageos:psp-user
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: storageos:psp-user
subjects:
- kind: ServiceAccount
name: {{ template "storageos.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}
{{- if .Values.cluster.create }}
apiVersion: v1
kind: Secret
metadata:
name: {{ .Values.cluster.secretRefName }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
type: "kubernetes.io/storageos"
data:
apiUsername: {{ default "" .Values.cluster.admin.username | b64enc | quote }}
{{ if .Values.cluster.admin.password }}
apiPassword: {{ default "" .Values.cluster.admin.password | b64enc | quote }}
{{ else }}
apiPassword: {{ randAlphaNum 10 | b64enc | quote }}
{{ end }}
# Add base64 encoded TLS cert and key below if ingress.tls is set to true.
# tls.crt:
# tls.key:
# Add base64 encoded creds below for CSI credentials.
# csiProvisionUsername:
# csiProvisionPassword:
# csiControllerPublishUsername:
# csiControllerPublishPassword:
# csiNodePublishUsername:
# csiNodePublishPassword:
{{- end }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "storageos.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- if .Values.cluster.create }}
apiVersion: storageos.com/v1
kind: StorageOSCluster
metadata:
name: {{ .Values.cluster.name }}
namespace: {{ .Release.Namespace }}
spec:
namespace: {{ .Values.cluster.namespace }}
secretRefName: {{ .Values.cluster.secretRefName }}
secretRefNamespace: {{ .Release.Namespace }}
disableTelemetry: {{ .Values.cluster.disableTelemetry }}
{{- if .Values.k8sDistro }}
k8sDistro: {{ .Values.k8sDistro }}
{{- end }}
{{- if .Values.cluster.images.node.repository }}
images:
nodeContainer: "{{ .Values.cluster.images.node.repository }}:{{ .Values.cluster.images.node.tag }}"
{{- end }}
csi:
enable: {{ .Values.cluster.csi.enable }}
deploymentStrategy: {{ .Values.cluster.csi.deploymentStrategy }}
{{- if .Values.cluster.sharedDir }}
sharedDir: {{ .Values.cluster.sharedDir }}
{{- end }}
{{- if eq .Values.cluster.kvBackend.embedded false }}
kvBackend:
address: {{ .Values.cluster.kvBackend.address }}
backend: {{ .Values.cluster.kvBackend.backend }}
tlsEtcdSecretRefName: {{ .Values.cluster.kvBackend.tlsSecretName }}
tlsEtcdSecretRefNamespace: {{ .Values.cluster.kvBackend.tlsSecretNamespace }}
{{- end }}
{{- if .Values.cluster.nodeSelectorTerm.key }}
nodeSelectorTerms:
- matchExpressions:
- key: {{ .Values.cluster.nodeSelectorTerm.key }}
operator: In
values:
- "{{ .Values.cluster.nodeSelectorTerm.value }}"
{{- end }}
{{- if .Values.cluster.toleration.key }}
tolerations:
- key: {{ .Values.cluster.toleration.key }}
operator: "Equal"
value: {{ .Values.cluster.toleration.value }}
effect: "NoSchedule"
{{- end }}
{{- end }}
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: storageosclusters.storageos.com
annotations:
"helm.sh/hook": crd-install
spec:
group: storageos.com
names:
kind: StorageOSCluster
listKind: StorageOSClusterList
plural: storageosclusters
singular: storageoscluster
shortNames:
- stos
scope: Namespaced
version: v1
additionalPrinterColumns:
- name: Ready
type: string
description: Ready status of the storageos nodes.
JSONPath: .status.ready
- name: Status
type: string
description: Status of the whole cluster.
JSONPath: .status.phase
- name: Age
type: date
JSONPath: .metadata.creationTimestamp
validation:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata: {}
spec:
properties:
join:
type: string
namespace:
type: string
k8sDistro:
type: string
disableFencing:
type: boolean
disableTelemetry:
type: boolean
disableTCMU:
type: boolean
forceTCMU:
type: boolean
disableScheduler:
type: boolean
images:
properties:
nodeContainer:
type: string
initContainer:
type: string
csiDriverRegistrarContainer:
type: string
csiExternalProvisionerContainer:
type: string
csiExternalAttacherContainer:
type: string
csiLivenessProbeContainer:
type: string
csi:
properties:
enable:
type: boolean
enableProvisionCreds:
type: boolean
enableControllerPublishCreds:
type: boolean
enableNodePublishCreds:
type: boolean
deploymentStrategy:
type: string
service:
properties:
name:
type: string
type:
type: string
externalPort:
type: integer
format: int32
internalPort:
type: integer
format: int32
secretRefName:
type: string
secretRefNamespace:
type: string
tlsEtcdSecretRefName:
type: string
tlsEtcdSecretRefNamespace:
type: string
sharedDir:
type: string
ingress:
properties:
enable:
type: boolean
hostname:
type: string
tls:
type: boolean
annotations: {}
kvBackend:
properties:
address:
type: string
backend:
type: string
pause:
type: boolean
debug:
type: boolean
nodeSelectorTerms: {}
tolerations: {}
resources:
properties:
limits: {}
requests: {}
status:
properties:
phase:
type: string
nodeHealthStatus: {}
nodes:
type: array
items:
type: string
ready:
type: string
members:
properties:
ready: {}
unready: {}
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: storageosupgrades.storageos.com
annotations:
"helm.sh/hook": crd-install
spec:
group: storageos.com
names:
kind: StorageOSUpgrade
listKind: StorageOSUpgradeList
plural: storageosupgrades
singular: storageosupgrade
scope: Namespaced
version: v1
validation:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata: {}
spec:
properties:
newImage:
type: string
status:
properties:
completed:
type: boolean
# Default values for storageos.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
name: storageos-operator
k8sDistro: default
serviceAccount:
create: true
name: storageos-operator-sa
podSecurityPolicy:
enabled: false
annotations: {}
## Specify pod annotations
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
##
# seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
# seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
# apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
# Operator-specific configuration parameters.
operator:
image:
repository: storageos/cluster-operator
tag: 1.3.0
pullPolicy: IfNotPresent
# Cluster-specific configuration parameters.
cluster:
# set create to true if the operator should auto-create the StorageOS cluster.
create: true
# Name of the deployment.
name: storageos
# Namespace to install the StorageOS cluster into.
namespace: kube-system
# Name of the secret containing StorageOS API credentials.
secretRefName: storageos-api
# Default admin account.
admin:
# Username to authenticate to the StorageOS API with.
username: storageos
# Password to authenticate to the StorageOS API with. If empty, a random
# password will be generated and set in the secretRefName secret.
password:
# sharedDir should be set if running kubelet in a container. This should
# be the path shared into the kubelet container, typically:
# "/var/lib/kubelet/plugins/kubernetes.io~storageos". If not set, defaults
# will be used.
sharedDir:
# Key-Value store backend.
kvBackend:
embedded: true
address:
backend: etcd
tlsSecretName:
tlsSecretNamespace:
# Node selector terms to install StorageOS on.
nodeSelectorTerm:
key:
value:
# Pod toleration for the StorageOS pods.
toleration:
key:
value:
# To disable anonymous usage reporting across the cluster, set to true.
# Defaults to false. To help improve the product, data such as API usage and
# StorageOS configuration information is collected.
disableTelemetry: false
images:
# nodeContainer is the StorageOS node image to use, available from the
# [Docker Hub](https://hub.docker.com/r/storageos/node/).
node:
repository: storageos/node
tag: 1.3.0
csi:
enable: true
deploymentStrategy: deployment
# The following is used for cleaning up unmanaged cluster resources when
# auto-install is enabled.
cleanup:
- name: daemonset
command:
- "daemonset"
- "storageos-daemonset"
- name: statefulset
command:
- "statefulset"
- "storageos-statefulset"
- name: csi-helper
command:
- "deployment"
- "storageos-csi-helper"
- name: serviceaccount
command:
- "serviceaccount"
- "storageos-daemonset-sa"
- "storageos-statefulset-sa"
- name: role
command:
- "role"
- "storageos:key-management"
- name: rolebinding
command:
- "rolebinding"
- "storageos:key-management"
- name: secret
command:
- "secret"
- "init-secret"
- name: service
command:
- "service"
- "storageos"
- name: clusterrole
command:
- "clusterrole"
- "storageos:driver-registrar"
- "storageos:csi-attacher"
- "storageos:csi-provisioner"
- name: clusterrolebinding
command:
- "clusterrolebinding"
- "storageos:csi-provisioner"
- "storageos:csi-attacher"
- "storageos:driver-registrar"
- "storageos:k8s-driver-registrar"
- name: storageclass
command:
- "storageclass"
- "fast"