
Merge pull request #365 from darkowlzz/v0.2.19

Update storageos-operator to version 0.2.19
apiVersion: v1
appVersion: "1.5.3"
description: Cloud Native storage for containers
name: storageos-operator
version: 0.2.19
tillerVersion: ">=2.10.0"
keywords:
- storage
- block-storage
- volume
- operator
home: https://storageos.com
icon: https://storageos.com/wp-content/themes/storageOS/images/logo.svg
sources:
- https://github.com/storageos
maintainers:
- name: croomes
  email: simon.croome@storageos.com
- name: darkowlzz
  email: sunny.gogoi@storageos.com
MIT License
Copyright (c) 2020 StorageOS
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# StorageOS Operator Helm Chart
> **Note**: This is the recommended chart to use for installing StorageOS. It
installs the StorageOS Operator, and then installs a StorageOS cluster with a
minimal configuration. Other Helm charts
([storageoscluster-operator](https://github.com/storageos/charts/tree/master/stable/storageoscluster-operator)
and
[storageos](https://github.com/storageos/charts/tree/master/stable/storageos))
will be deprecated.
[StorageOS](https://storageos.com) is a software-based storage platform
designed for cloud-native applications. By deploying StorageOS on your
Kubernetes cluster, local storage from cluster nodes is aggregated into a
distributed pool, and persistent volumes created from it using the native
Kubernetes volume driver are available instantly to pods wherever they move in
the cluster.
Features such as replication, encryption and caching help protect data and
maximise performance.
This chart installs a StorageOS Cluster Operator which helps deploy and
configure a StorageOS cluster on Kubernetes.
## Prerequisites
- Helm 2.10+
- Kubernetes 1.9+.
- Privileged mode containers (enabled by default)
- Kubernetes 1.9 only:
  - Feature gate: MountPropagation=true. This can be done by appending
    `--feature-gates MountPropagation=true` to the kube-apiserver and kubelet
    services.
Refer to the [StorageOS prerequisites
docs](https://docs.storageos.com/docs/prerequisites/overview) for more
information.
## Installing the chart
```console
# Add storageos charts repo.
$ helm repo add storageos https://charts.storageos.com
# Install the chart in a namespace.
$ helm install storageos/storageos-operator --namespace storageos-operator
```
This will install the StorageOSCluster operator in the `storageos-operator`
namespace and deploy StorageOS with a minimal configuration.
> **Tip**: List all releases using `helm list`
## Creating a StorageOS cluster manually
The Helm chart supports a subset of StorageOSCluster custom resource parameters.
For advanced configurations, you may wish to create the cluster resource
manually and only use the Helm chart to install the Operator.
To disable auto-provisioning the cluster with the Helm chart, set
`cluster.create` to false:
```yaml
cluster:
  ...
  create: false
```
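With auto-provisioning disabled in the values file, the operator can be
installed on its own. The same override can also be passed on the command line
(the namespace here follows the install example above):

```console
$ helm install storageos/storageos-operator \
    --namespace storageos-operator \
    --set cluster.create=false
```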
Create a secret to store the StorageOS cluster credentials:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: "storageos-api"
  namespace: "default"
  labels:
    app: "storageos"
type: "kubernetes.io/storageos"
data:
  # echo -n '<secret>' | base64
  apiAddress: c3RvcmFnZW9zOjU3MDU=
  apiUsername: c3RvcmFnZW9z
  apiPassword: c3RvcmFnZW9z
```
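The `data` values above are the base64 encodings of the defaults
`storageos:5705` (API address) and `storageos` (username and password). They
can be reproduced, or replaced with your own values, as follows:

```shell
# Encode the credential strings as required by the Secret's data fields.
# Replace the example values with your own before creating the secret.
echo -n 'storageos:5705' | base64   # -> c3RvcmFnZW9zOjU3MDU=
echo -n 'storageos' | base64        # -> c3RvcmFnZW9z
```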
Create a `StorageOSCluster` custom resource and reference the secret above in
its `secretRefName` and `secretRefNamespace` fields.
```yaml
apiVersion: "storageos.com/v1"
kind: "StorageOSCluster"
metadata:
  name: "example-storageos"
  namespace: "default"
spec:
  secretRefName: "storageos-api"
  secretRefNamespace: "default"
```
Once the `StorageOSCluster` configuration is applied, the StorageOSCluster
operator will create a StorageOS cluster in the `storageos` namespace by
default.
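For example, assuming the resource above was saved as `storageos-cluster.yaml`
(the filename is illustrative):

```console
$ kubectl apply -f storageos-cluster.yaml
$ kubectl get pods -n storageos
```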
Most installations will want to use the default [CSI](https://kubernetes-csi.github.io/docs/)
driver. To use the [Native Driver](https://kubernetes.io/docs/concepts/storage/volumes/#storageos)
instead, disable CSI:
```yaml
spec:
  ...
  csi:
    enable: false
  ...
```
in the above `StorageOSCluster` resource config.
Learn more about advanced configuration options
[here](https://github.com/storageos/cluster-operator/blob/master/README.md#storageoscluster-resource-configuration).
To check cluster status, run:
```bash
$ kubectl get storageoscluster
NAME                READY     STATUS    AGE
example-storageos   3/3       Running   4m
```
All the events related to this cluster are logged as part of the cluster object
and can be viewed by describing the object.
```bash
$ kubectl describe storageoscluster example-storageos
Name:         example-storageos
Namespace:    default
Labels:       <none>
...
...
Events:
  Type     Reason         Age              From                Message
  ----     ------         ----             ----                -------
  Warning  ChangedStatus  1m (x2 over 1m)  storageos-operator  0/3 StorageOS nodes are functional
  Normal   ChangedStatus  35s              storageos-operator  3/3 StorageOS nodes are functional. Cluster healthy
```
## Configuration
The following table lists the configurable parameters of the StorageOSCluster
Operator chart and their default values.
Parameter | Description | Default
--------- | ----------- | -------
`operator.image.repository` | StorageOS Operator container image repository | `storageos/cluster-operator`
`operator.image.tag` | StorageOS Operator container image tag | `1.5.3`
`operator.image.pullPolicy` | StorageOS Operator container image pull policy | `IfNotPresent`
`podSecurityPolicy.enabled` | If true, create & use PodSecurityPolicy resources | `false`
`podSecurityPolicy.annotations` | Specify pod annotations in the pod security policy | `{}`
`cluster.create` | If true, auto-create the StorageOS cluster | `true`
`cluster.name` | Name of the storageos deployment | `storageos`
`cluster.namespace` | Namespace to install the StorageOS cluster into | `kube-system`
`cluster.secretRefName` | Name of the secret containing StorageOS API credentials | `storageos-api`
`cluster.admin.username` | Username to authenticate to the StorageOS API with | `storageos`
`cluster.admin.password` | Password to authenticate to the StorageOS API with |
`cluster.sharedDir` | The path shared into the kubelet container when running kubelet in a container |
`cluster.kvBackend.embedded` | Use StorageOS embedded etcd | `true`
`cluster.kvBackend.address` | List of etcd targets, in the form ip[:port], separated by commas |
`cluster.kvBackend.backend` | Key-Value store backend name | `etcd`
`cluster.kvBackend.tlsSecretName` | Name of the secret containing the KV backend TLS certificates |
`cluster.kvBackend.tlsSecretNamespace` | Namespace of the secret containing the KV backend TLS certificates |
`cluster.nodeSelectorTerm.key` | Key of the node selector term used for pod placement |
`cluster.nodeSelectorTerm.value` | Value of the node selector term used for pod placement |
`cluster.toleration.key` | Key of the pod toleration parameter |
`cluster.toleration.value` | Value of the pod toleration parameter |
`cluster.disableTelemetry` | If true, no telemetry data will be collected from the cluster | `false`
`cluster.images.node.repository` | StorageOS Node container image repository |
`cluster.images.node.tag` | StorageOS Node container image tag |
`cluster.images.init.repository` | StorageOS init container image repository |
`cluster.images.init.tag` | StorageOS init container image tag |
`cluster.images.csiV1ClusterDriverRegistrar.repository` | CSI v1 Cluster Driver Registrar image repository |
`cluster.images.csiV1ClusterDriverRegistrar.tag` | CSI v1 Cluster Driver Registrar image tag |
`cluster.images.csiV1NodeDriverRegistrar.repository` | CSI v1 Node Driver Registrar image repository |
`cluster.images.csiV1NodeDriverRegistrar.tag` | CSI v1 Node Driver Registrar image tag |
`cluster.images.csiV1ExternalProvisioner.repository` | CSI v1 External Provisioner image repository |
`cluster.images.csiV1ExternalProvisioner.tag` | CSI v1 External Provisioner image tag |
`cluster.images.csiV1ExternalAttacher.repository` | CSI v1 External Attacher image repository |
`cluster.images.csiV1ExternalAttacher.tag` | CSI v1 External Attacher image tag |
`cluster.images.csiV1ExternalAttacherV2.repository` | CSI v1 External Attacher v2 image repository |
`cluster.images.csiV1ExternalAttacherV2.tag` | CSI v1 External Attacher v2 image tag |
`cluster.images.csiV1LivenessProbe.repository` | CSI v1 Liveness Probe image repository |
`cluster.images.csiV1LivenessProbe.tag` | CSI v1 Liveness Probe image tag |
`cluster.images.csiV0DriverRegistrar.repository` | CSI v0 Driver Registrar image repository |
`cluster.images.csiV0DriverRegistrar.tag` | CSI v0 Driver Registrar image tag |
`cluster.images.csiV0ExternalProvisioner.repository` | CSI v0 External Provisioner image repository |
`cluster.images.csiV0ExternalProvisioner.tag` | CSI v0 External Provisioner image tag |
`cluster.images.csiV0ExternalAttacher.repository` | CSI v0 External Attacher image repository |
`cluster.images.csiV0ExternalAttacher.tag` | CSI v0 External Attacher image tag |
`cluster.images.nfs.repository` | NFS container image repository |
`cluster.images.nfs.tag` | NFS container image tag |
`cluster.images.kubeScheduler.repository` | Kube Scheduler container image repository |
`cluster.images.kubeScheduler.tag` | Kube Scheduler container image tag |
`cluster.csi.enable` | If true, CSI driver is enabled | `true`
`cluster.csi.deploymentStrategy` | Whether CSI helpers should be deployed as a `deployment` or `statefulset` | `deployment`
## Deleting a StorageOS Cluster
Deleting the `StorageOSCluster` custom resource object deletes the StorageOS
cluster and all of its associated resources.
In the above example,
```bash
kubectl delete storageoscluster example-storageos
```
would delete the custom resource and the cluster.
## Uninstalling the Chart
To uninstall/delete the StorageOS Cluster Operator deployment:
```bash
helm delete --purge <release-name>
```
Learn more about configuring the StorageOS Operator on
[GitHub](https://github.com/storageos/cluster-operator).
# StorageOS Operator
[StorageOS](https://storageos.com) is a cloud native, software-defined storage
platform that transforms commodity server or cloud-based disk capacity into
enterprise-class persistent storage for containers. StorageOS is ideal for
deploying databases, message buses, and other mission-critical stateful
solutions where rapid recovery and fault tolerance are essential.
The StorageOS Operator installs and manages StorageOS within a cluster.
Cluster nodes may contribute local or attached disk-based storage into a
distributed pool, which is then available to all cluster members via a
global namespace.
By default, a minimal configuration of StorageOS is installed. To set advanced
configurations, disable the default installation of StorageOS and create a
custom StorageOSCluster resource
([documentation](https://docs.storageos.com/docs/reference/cluster-operator/examples)).
> **Note**: The StorageOS Operator must be installed in the System Project
with the Cluster Role.
podSecurityPolicy:
  enabled: true
cluster:
  # Disable cluster creation in CI; install the operator only.
  create: false
categories:
- storage
labels:
  io.rancher.certified: partner
questions:
- variable: k8sDistro
  default: rancher
  description: "Kubernetes Distribution"
  show_if: false
# Operator image configuration.
- variable: defaultImage
  default: true
  description: "Use default Docker images"
  label: Use Default Images
  type: boolean
  show_subquestion_if: false
  group: "Container Images"
  subquestions:
  - variable: operator.image.pullPolicy
    default: IfNotPresent
    description: "Operator Image pull policy"
    type: enum
    label: Operator Image pull policy
    options:
    - IfNotPresent
    - Always
    - Never
  - variable: operator.image.repository
    default: "storageos/cluster-operator"
    description: "StorageOS operator image name"
    type: string
    label: StorageOS Operator Image Name
  - variable: operator.image.tag
    default: "1.5.3"
    description: "StorageOS Operator image tag"
    type: string
    label: StorageOS Operator Image Tag
# Default minimal cluster configuration.
- variable: cluster.create
  default: true
  type: boolean
  description: "Install StorageOS cluster with minimal configurations"
  label: "Install StorageOS cluster"
  show_subquestion_if: true
  group: "StorageOS Cluster"
  subquestions:
  # CSI configuration.
  - variable: cluster.csi.enable
    default: true
    description: "Use Container Storage Interface (CSI) driver"
    label: Use CSI Driver
    type: boolean
  # Cluster metadata.
  - variable: cluster.name
    default: "storageos"
    description: "Name of the StorageOS cluster deployment"
    type: string
    label: Name
  - variable: cluster.namespace
    default: "kube-system"
    description: "Namespace of the StorageOS cluster deployment. `kube-system` recommended to avoid pre-emption when the node is under load."
    type: string
    label: Namespace
  # Node container image.
  - variable: cluster.images.node.repository
    default: "storageos/node"
    description: "StorageOS node container image name"
    type: string
    label: StorageOS Node Container Image Name
  - variable: cluster.images.node.tag
    default: "1.5.3"
    description: "StorageOS Node container image tag"
    type: string
    label: StorageOS Node Container Image Tag
  # Credentials.
  - variable: cluster.admin.username
    default: "admin"
    description: "Username of the StorageOS administrator account"
    type: string
    label: Username
  - variable: cluster.admin.password
    default: ""
    description: "Password of the StorageOS administrator account. If empty, a random password will be generated."
    type: password
    label: Password
  # Telemetry.
  - variable: cluster.disableTelemetry
    default: false
    type: boolean
    description: "Disable telemetry data collection. See https://docs.storageos.com/docs/reference/telemetry for more information."
    label: Disable Telemetry
  # KV store backend.
  - variable: cluster.kvBackend.embedded
    default: true
    type: boolean
    description: "Use the embedded KV store for testing. Select false to use external etcd for production deployments."
    label: "Use embedded KV store"
  - variable: cluster.kvBackend.address
    default: "10.0.0.1:2379"
    description: "List of etcd targets, in the form ip[:port], separated by commas. Prefer multiple direct endpoints over a single load-balanced endpoint. Only used if not using the embedded KV store."
    type: string
    label: External etcd address(es)
    show_if: "cluster.kvBackend.embedded=false"
  - variable: cluster.kvBackend.tls
    default: false
    type: boolean
    description: "Enable etcd TLS"
    label: "TLS should be configured for external etcd to protect configuration data (Optional)."
    show_if: "cluster.kvBackend.embedded=false"
  - variable: cluster.kvBackend.tlsSecretName
    required: false
    default: ""
    description: "Name of the secret that contains the etcd TLS certs. This secret is typically shared with etcd."
    type: string
    label: External etcd TLS secret name
    show_if: "cluster.kvBackend.tls=true"
  - variable: cluster.kvBackend.tlsSecretNamespace
    required: false
    default: ""
    description: "Namespace of the secret that contains the etcd TLS certs. This secret is typically shared with etcd."
    type: string
    label: External etcd TLS secret namespace
    show_if: "cluster.kvBackend.tls=true"
  # Node Selector Term.
  - variable: cluster.nodeSelectorTerm.key
    required: false
    default: ""
    description: "Key of the node selector term match expression used to select the nodes to install StorageOS on, e.g. `node-role.kubernetes.io/worker`"
    type: string
    label: Node selector term key
  - variable: cluster.nodeSelectorTerm.value
    required: false
    default: ""
    description: "Value of the node selector term match expression used to select the nodes to install StorageOS on."
    type: string
    label: Node selector term value
  # Pod tolerations.
  - variable: cluster.toleration.key
    required: false
    default: ""
    description: "Key of the pod toleration with operator 'Equal' and effect 'NoSchedule'"
    type: string
    label: Pod toleration key
  - variable: cluster.toleration.value
    required: false
    default: ""
    description: "Value of the pod toleration with operator 'Equal' and effect 'NoSchedule'"
    type: string
    label: Pod toleration value
  # Shared Directory.
  - variable: cluster.sharedDir
    required: false
    default: "/var/lib/kubelet/plugins/kubernetes.io~storageos"
    description: "Shared Directory should be set if running kubelet in a container. This should be the path shared into the kubelet container, typically: '/var/lib/kubelet/plugins/kubernetes.io~storageos'. If not set, defaults will be used."
    type: string
    label: Shared Directory
StorageOS Operator deployed.
If you disabled automatic cluster creation, you can deploy a StorageOS cluster
by creating a custom StorageOSCluster resource:
1. Create a secret containing StorageOS cluster credentials. This secret
contains the API username and password that will be used to authenticate to the
StorageOS cluster. Base64 encode the username and password that you want to use
for your StorageOS cluster.
apiVersion: v1
kind: Secret
metadata:
  name: storageos-api
  namespace: default
  labels:
    app: storageos
type: kubernetes.io/storageos
data:
  # echo -n '<secret>' | base64
  apiUsername: c3RvcmFnZW9z
  apiPassword: c3RvcmFnZW9z
2. Create a StorageOS custom resource that references the secret created
above (storageos-api in the above example). When the resource is created, the
cluster will be deployed.
apiVersion: storageos.com/v1
kind: StorageOSCluster
metadata:
  name: example-storageos
  namespace: default
spec:
  secretRefName: storageos-api
  secretRefNamespace: default
  csi:
    enable: true
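Saved to files, the two resources above can be applied in order; the filenames
here are illustrative:

```console
$ kubectl apply -f storageos-api-secret.yaml
$ kubectl apply -f storageos-cluster.yaml
$ kubectl get storageoscluster -n default
```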
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "storageos.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "storageos.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "storageos.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "storageos.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "storageos.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{- if .Values.cluster.create }}
# ClusterRole, ClusterRoleBinding and ServiceAccounts have hook-failed in
# hook-delete-policy to make it easy to rerun the whole setup even after a
# failure, else the rerun fails with existing resource error.
# Hook delete policy before-hook-creation ensures any other leftover resources
# from previous run gets deleted when run again.
# The Job resources will not be deleted, to help investigate the failure.
# Since the resources created by the operator are not managed by the chart, each
# of them must be individually deleted in separate jobs.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: storageos-cleanup
  namespace: {{ .Release.Namespace }}
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": "hook-succeeded, hook-failed, before-hook-creation"
    "helm.sh/hook-weight": "1"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: storageos:cleanup
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": "hook-succeeded, hook-failed, before-hook-creation"
    "helm.sh/hook-weight": "1"
rules:
# Using apiGroup "apps" for daemonsets fails and the permission error indicates
# that it's in group "extensions". Not sure if it's a Job specific behavior,
# because the daemonsets deployed by the operator use the "apps" apiGroup.
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - deployments
  verbs:
  - delete
- apiGroups:
  - apps
  resources:
  - statefulsets
  - deployments
  - daemonsets
  verbs:
  - delete
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  - rolebindings
  - clusterroles
  - clusterrolebindings
  verbs:
  - delete
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  verbs:
  - delete
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - delete
- apiGroups:
  - ""
  resources:
  - serviceaccounts
  - secrets
  - services
  - configmaps
  verbs:
  - delete
- apiGroups:
  - storageos.com
  resources:
  - storageosclusters
  verbs:
  - get
  - patch
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: storageos:cleanup
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": "hook-succeeded, hook-failed, before-hook-creation"
    "helm.sh/hook-weight": "2"
subjects:
- name: storageos-cleanup
  kind: ServiceAccount
  namespace: {{ .Release.Namespace }}
roleRef:
  name: storageos:cleanup
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
---
# Delete the StorageOSCluster object by removing the finalizer.
apiVersion: batch/v1
kind: Job
metadata:
  name: "storageos-storageoscluster-cleanup"
  namespace: {{ .Release.Namespace }}
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": "hook-succeeded, before-hook-creation"
    "helm.sh/hook-weight": "3"
spec:
  template:
    spec:
      serviceAccountName: storageos-cleanup
      containers:
      - name: "storageos-storageoscluster-cleanup"
        image: "{{ $.Values.cleanup.images.kubectl.repository }}:{{ $.Values.cleanup.images.kubectl.tag }}"
        command:
        - kubectl
        - -n
        - {{ $.Release.Namespace }}
        - patch
        - stos
        - {{ $.Values.cluster.name }}
        - --type=merge
        - --patch={"metadata":{"finalizers":null}}
      restartPolicy: Never
  backoffLimit: 4
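For reference, the patch this Job performs is equivalent to running the
following by hand against the release namespace (`storageos-operator` in the
install example) and the default cluster name; `stos` is the short name for
the `storageoscluster` resource:

```console
$ kubectl -n storageos-operator patch stos storageos --type=merge \
    --patch '{"metadata":{"finalizers":null}}'
```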
---
# Iterate through the Values.cleanup list and create jobs to delete all the
# unmanaged resources of the cluster.
{{- range .Values.cleanup.resources }}
apiVersion: batch/v1
kind: Job
metadata:
  name: "storageos-{{ .name }}-cleanup"
  namespace: {{ .namespace }}
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": "hook-succeeded, before-hook-creation"
    "helm.sh/hook-weight": "3"
spec:
  template:
    spec:
      serviceAccountName: storageos-cleanup
      containers:
      - name: "storageos-{{ .name }}-cleanup"
        image: "{{ $.Values.cleanup.images.kubectl.repository }}:{{ $.Values.cleanup.images.kubectl.tag }}"
        command:
        - kubectl
        - -n
        - {{ $.Values.cluster.namespace }}
        - delete
        {{- range .command }}
        - {{ . | quote }}
        {{- end }}
        - --ignore-not-found=true
      restartPolicy: Never
  backoffLimit: 4
---
{{- end }}
{{- end }}
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: jobs.storageos.com
  annotations:
    "helm.sh/hook": crd-install
spec:
  group: storageos.com
  names:
    kind: Job
    listKind: JobList
    plural: jobs
    singular: job
  scope: Namespaced
  subresources:
    status: {}
  validation:
    openAPIV3Schema:
      properties:
        apiVersion:
          description: 'APIVersion defines the versioned schema of this representation
            of an object. Servers should convert recognized schemas to the latest
            internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources'
          type: string
        kind:
          description: 'Kind is a string value representing the REST resource this
            object represents. Servers may infer this from the endpoint the client
            submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
          type: string
        metadata:
          type: object
        spec:
          properties:
            args:
              description: Args is an array of strings passed as an argument to the
                job container.
              items:
                type: string
              type: array
            completionWord:
              description: CompletionWord is the word that's looked for in the pod
                logs to find out if a DaemonSet Pod has completed its task.
              type: string
            hostPath:
              description: HostPath is the path in the host that's mounted into a
                job container.
              type: string
            image:
              description: Image is the container image to run as the job.
              type: string
            labelSelector:
              description: LabelSelector is the label selector for the job Pods.
              type: string
            mountPath:
              description: MountPath is the path in the job container where a volume
                is mounted.
              type: string
            nodeSelectorTerms:
              description: NodeSelectorTerms is the set of placement of the job pods
                using node affinity requiredDuringSchedulingIgnoredDuringExecution.
              items:
                type: object
              type: array
            tolerations:
              description: Tolerations is to set the placement of storageos pods using
                pod toleration.
              items:
                type: object
              type: array
          required:
          - image
          - args
          - mountPath
          - hostPath
          - completionWord
          type: object
        status:
          properties:
            completed:
              description: Completed indicates the complete status of job.
              type: boolean
          type: object
  version: v1
  versions:
  - name: v1
    served: true
    storage: true
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: nfsservers.storageos.com
  annotations:
    "helm.sh/hook": crd-install
spec:
  additionalPrinterColumns:
  - JSONPath: .status.phase
    description: Status of the NFS server.
    name: status
    type: string
  - JSONPath: .spec.resources.requests.storage
    description: Capacity of the NFS server.
    name: capacity
    type: string
  - JSONPath: .status.remoteTarget
    description: Remote target address of the NFS server.
    name: target
    type: string
  - JSONPath: .status.accessModes
    description: Access modes supported by the NFS server.
    name: access modes
    type: string
  - JSONPath: .spec.storageClassName
    description: StorageClass used for creating the NFS volume.
    name: storageclass
    type: string
  - JSONPath: .metadata.creationTimestamp
    name: age
    type: date
  group: storageos.com
  names:
    kind: NFSServer
    listKind: NFSServerList
    plural: nfsservers
    shortNames:
    - nfsserver
    singular: nfsserver
  scope: Namespaced
  subresources:
    status: {}
  validation:
    openAPIV3Schema:
      properties:
        apiVersion:
          description: 'APIVersion defines the versioned schema of this representation
            of an object. Servers should convert recognized schemas to the latest
            internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources'
          type: string
        kind:
          description: 'Kind is a string value representing the REST resource this
            object represents. Servers may infer this from the endpoint the client
            submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
          type: string
        metadata:
          type: object
        spec:
          properties:
            annotations:
              additionalProperties:
                type: string
              description: The annotations-related configuration to add/set on each
                Pod related object.
              type: object
            export:
              description: The parameters to configure the NFS export
              properties:
                name:
                  description: Name of the export
                  type: string
                persistentVolumeClaim:
                  description: PVC from which the NFS daemon gets storage for sharing
                  type: object
                server:
                  description: The NFS server configuration
                  properties:
                    accessMode:
                      description: Reading and Writing permissions on the export Valid
                        values are "ReadOnly", "ReadWrite" and "none"
                      type: string
                    squash:
                      description: This prevents the root users connected remotely
                        from having root privileges Valid values are "none", "rootid",
                        "root", and "all"
                      type: string
                  type: object
              type: object
            mountOptions:
              description: PV mount options. Not validated - mount of the PVs will
                simply fail if one is invalid.
              items:
                type: string
              type: array
            nfsContainer:
              description: NFSContainer is the container image to use for the NFS
                server.
              type: string
            persistentVolumeClaim:
              description: PersistentVolumeClaim is the PVC source of the PVC to be
                used with the NFS Server. If not specified, a new PVC is provisioned
                and used.
              type: object
            persistentVolumeReclaimPolicy:
              description: Reclamation policy for the persistent volume shared to
                the user's pod.
              type: string
            resources:
              description: Resources represents the minimum resources required
              type: object
            storageClassName:
              description: StorageClassName is the name of the StorageClass used by
                the NFS volume.
              type: string
            tolerations:
              description: Tolerations is to set the placement of NFS server pods
                using pod toleration.
              items:
                type: object
              type: array
          type: object
        status:
          properties:
            accessModes:
              description: AccessModes is the access modes supported by the NFS server.
              type: string
            phase:
              description: 'Phase is a simple, high-level summary of where the NFS
                Server is in its lifecycle. Phase will be set to Ready when the NFS
                Server is ready for use. It is intended to be similar to the PodStatus
                Phase described at: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/#podstatus-v1-core There
                are five possible phase values: - Pending: The NFS Server has been
                accepted by the Kubernetes system, but one or more of the components
                has not been created. This includes time before being scheduled
                as well as time spent downloading images over the network, which
                could take a while. - Running: The NFS Server has been bound to
                a node, and all of the dependencies have been created. - Succeeded:
                All NFS Server dependencies have terminated in success, and will
                not be restarted. - Failed: All NFS Server dependencies in the pod
                have terminated, and at least one container has terminated in
                failure. The container either exited with non-zero status or was
                terminated by the system. - Unknown: For some reason the state of
                the NFS Server could not be obtained, typically due to an error
                in communicating with the host of the pod.'
              type: string
            remoteTarget:
              description: RemoteTarget is the connection string that clients can
                use to access the shared filesystem.
              type: string
          type: object
  version: v1
  versions:
  - name: v1
    served: true
    storage: true
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "storageos.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    app: {{ template "storageos.name" . }}
    chart: {{ template "storageos.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ template "storageos.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "storageos.name" . }}
        release: {{ .Release.Name }}
    spec:
      serviceAccountName: {{ template "storageos.serviceAccountName" . }}
      containers:
      - name: storageos-operator
        image: "{{ .Values.operator.image.repository }}:{{ .Values.operator.image.tag }}"
        imagePullPolicy: {{ .Values.operator.image.pullPolicy }}
        ports:
        - containerPort: 8383
          name: metrics
        - containerPort: 8686
          name: operatormetrics
        - containerPort: 5720
          name: podschedwebhook
        command:
        - cluster-operator
        env:
        {{- if and .Values.cluster.images.node.repository .Values.cluster.images.node.tag }}
        - name: RELATED_IMAGE_STORAGEOS_NODE
          value: "{{ .Values.cluster.images.node.repository }}:{{ .Values.cluster.images.node.tag }}"
        {{- end }}
        {{- if and .Values.cluster.images.init.repository .Values.cluster.images.init.tag }}
        - name: RELATED_IMAGE_STORAGEOS_INIT
          value: "{{ .Values.cluster.images.init.repository }}:{{ .Values.cluster.images.init.tag }}"
        {{- end }}
        {{- if and .Values.cluster.images.csiV1ClusterDriverRegistrar.repository .Values.cluster.images.csiV1ClusterDriverRegistrar.tag }}
        - name: RELATED_IMAGE_CSIV1_CLUSTER_DRIVER_REGISTRAR
          value: "{{ .Values.cluster.images.csiV1ClusterDriverRegistrar.repository }}:{{ .Values.cluster.images.csiV1ClusterDriverRegistrar.tag }}"
        {{- end }}
        {{- if and .Values.cluster.images.csiV1NodeDriverRegistrar.repository .Values.cluster.images.csiV1NodeDriverRegistrar.tag }}
        - name: RELATED_IMAGE_CSIV1_NODE_DRIVER_REGISTRAR
          value: "{{ .Values.cluster.images.csiV1NodeDriverRegistrar.repository }}:{{ .Values.cluster.images.csiV1NodeDriverRegistrar.tag }}"
        {{- end }}
        {{- if and .Values.cluster.images.csiV1ExternalProvisioner.repository .Values.cluster.images.csiV1ExternalProvisioner.tag }}
        - name: RELATED_IMAGE_CSIV1_EXTERNAL_PROVISIONER
          value: "{{ .Values.cluster.images.csiV1ExternalProvisioner.repository }}:{{ .Values.cluster.images.csiV1ExternalProvisioner.tag }}"
        {{- end }}
        {{- if and .Values.cluster.images.csiV1ExternalAttacher.repository .Values.cluster.images.csiV1ExternalAttacher.tag }}
        - name: RELATED_IMAGE_CSIV1_EXTERNAL_ATTACHER
          value: "{{ .Values.cluster.images.csiV1ExternalAttacher.repository }}:{{ .Values.cluster.images.csiV1ExternalAttacher.tag }}"
        {{- end }}
        {{- if and .Values.cluster.images.csiV1ExternalAttacherV2.repository .Values.cluster.images.csiV1ExternalAttacherV2.tag }}
        - name: RELATED_IMAGE_CSIV1_EXTERNAL_ATTACHER_V2
          value: "{{ .Values.cluster.images.csiV1ExternalAttacherV2.repository }}:{{ .Values.cluster.images.csiV1ExternalAttacherV2.tag }}"
        {{- end }}
        {{- if and .Values.cluster.images.csiV1LivenessProbe.repository .Values.cluster.images.csiV1LivenessProbe.tag }}
        - name: RELATED_IMAGE_CSIV1_LIVENESS_PROBE
          value: "{{ .Values.cluster.images.csiV1LivenessProbe.repository }}:{{ .Values.cluster.images.csiV1LivenessProbe.tag }}"
        {{- end }}
        {{- if and .Values.cluster.images.csiV0DriverRegistrar.repository .Values.cluster.images.csiV0DriverRegistrar.tag }}
        - name: RELATED_IMAGE_CSIV0_DRIVER_REGISTRAR
value: "{{ .Values.cluster.images.csiV0DriverRegistrar.repository }}:{{ .Values.cluster.images.csiV0DriverRegistrar.tag }}"
{{- end }}
{{- if and .Values.cluster.images.csiV0ExternalProvisioner.repository .Values.cluster.images.csiV0ExternalProvisioner.tag }}
- name: RELATED_IMAGE_CSIV0_EXTERNAL_PROVISIONER
value: "{{ .Values.cluster.images.csiV0ExternalProvisioner.repository }}:{{ .Values.cluster.images.csiV0ExternalProvisioner.tag }}"
{{- end }}
{{- if and .Values.cluster.images.csiV0ExternalAttacher.repository .Values.cluster.images.csiV0ExternalAttacher.tag }}
- name: RELATED_IMAGE_CSIV0_EXTERNAL_ATTACHER
value: "{{ .Values.cluster.images.csiV0ExternalAttacher.repository }}:{{ .Values.cluster.images.csiV0ExternalAttacher.tag }}"
{{- end }}
{{- if and .Values.cluster.images.nfs.repository .Values.cluster.images.nfs.tag }}
- name: RELATED_IMAGE_NFS
value: "{{ .Values.cluster.images.nfs.repository }}:{{ .Values.cluster.images.nfs.tag }}"
{{- end }}
{{- if and .Values.cluster.images.kubeScheduler.repository .Values.cluster.images.kubeScheduler.tag }}
- name: RELATED_IMAGE_KUBE_SCHEDULER
value: "{{ .Values.cluster.images.kubeScheduler.repository }}:{{ .Values.cluster.images.kubeScheduler.tag }}"
{{- end }}
- name: WATCH_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: OPERATOR_NAME
value: "storageos-cluster-operator"
- name: DISABLE_SCHEDULER_WEBHOOK
value: "false"
{{- if .Values.podSecurityPolicy.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "storageos.fullname" . }}-psp
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
annotations:
{{- if .Values.podSecurityPolicy.annotations }}
{{ toYaml .Values.podSecurityPolicy.annotations | indent 4 }}
{{- end }}
spec:
volumes:
- '*'
runAsUser:
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
fsGroup:
rule: 'RunAsAny'
{{- end }}
# ClusterRole for the storageos operator
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: storageos:operator
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups:
- storageos.com
resources:
- storageosclusters
- storageosclusters/status
- storageosupgrades
- storageosupgrades/status
- jobs
- jobs/status
- nfsservers
- nfsservers/status
verbs:
- "*"
- apiGroups:
- apps
resources:
- statefulsets
- daemonsets
- deployments
- replicasets
verbs:
- "*"
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- get
- update
- create
- patch
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- watch
- get
- update
- patch
- delete
- create
- apiGroups:
- ""
resources:
- events
- namespaces
- serviceaccounts
- secrets
- services
- services/finalizers
- persistentvolumeclaims
- persistentvolumes
- configmaps
- replicationcontrollers
- pods/binding
- endpoints
verbs:
- create
- patch
- get
- list
- delete
- watch
- update
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
- rolebindings
- clusterroles
- clusterrolebindings
verbs:
- create
- delete
- apiGroups:
- storage.k8s.io
resources:
- storageclasses
- volumeattachments
- csinodeinfos
- csinodes
- csidrivers
verbs:
- create
- delete
- watch
- list
- get
- update
- patch
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- create
- delete
- apiGroups:
- csi.storage.k8s.io
resources:
- csidrivers
verbs:
- create
- delete
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- list
- watch
- apiGroups:
- security.openshift.io
resourceNames:
- privileged
resources:
- securitycontextconstraints
verbs:
- create
- delete
- update
- get
- use
- apiGroups:
- admissionregistration.k8s.io
resources:
- mutatingwebhookconfigurations
verbs:
- "*"
- apiGroups:
- monitoring.coreos.com
resources:
- servicemonitors
verbs:
- "*"
- apiGroups:
- apps
resources:
- deployments/finalizers
resourceNames:
- storageos-cluster-operator
verbs:
- update
- apiGroups:
- events.k8s.io
resources:
- events
verbs:
- create
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- get
- create
- update
---
# Bind the operator service account to the storageos:operator ClusterRole
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: storageos:operator
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
subjects:
- kind: ServiceAccount
name: {{ template "storageos.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: storageos:operator
apiGroup: rbac.authorization.k8s.io
{{- if .Values.podSecurityPolicy.enabled }}
---
# ClusterRole for using pod security policy.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: storageos:psp-user
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
rules:
- apiGroups: ["extensions"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames:
- {{ template "storageos.fullname" . }}-psp
---
# Bind pod security policy cluster role to the operator service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: storageos:psp-user
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: storageos:psp-user
subjects:
- kind: ServiceAccount
name: {{ template "storageos.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}
{{- if .Values.cluster.create }}
apiVersion: v1
kind: Secret
metadata:
name: {{ .Values.cluster.secretRefName }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
type: "kubernetes.io/storageos"
data:
apiUsername: {{ default "" .Values.cluster.admin.username | b64enc | quote }}
{{ if .Values.cluster.admin.password }}
apiPassword: {{ default "" .Values.cluster.admin.password | b64enc | quote }}
{{ else }}
apiPassword: {{ randAlphaNum 10 | b64enc | quote }}
{{ end }}
# Add base64 encoded TLS cert and key below if ingress.tls is set to true.
# tls.crt:
# tls.key:
# Add base64 encoded creds below for CSI credentials.
# csiProvisionUsername:
# csiProvisionPassword:
# csiControllerPublishUsername:
# csiControllerPublishPassword:
# csiNodePublishUsername:
# csiNodePublishPassword:
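# Each value above must be base64 encoded without a trailing newline.
# For example (the username shown is a hypothetical placeholder):
#   echo -n 'csi-user' | base64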
{{- end }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "storageos.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "storageos.name" . }}
chart: {{ template "storageos.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- if .Values.cluster.create }}
apiVersion: storageos.com/v1
kind: StorageOSCluster
metadata:
name: {{ .Values.cluster.name }}
namespace: {{ .Release.Namespace }}
spec:
namespace: {{ .Values.cluster.namespace }}
secretRefName: {{ .Values.cluster.secretRefName }}
secretRefNamespace: {{ .Release.Namespace }}
disableTelemetry: {{ .Values.cluster.disableTelemetry }}
{{- if .Values.k8sDistro }}
k8sDistro: {{ .Values.k8sDistro }}
{{- end }}
csi:
enable: {{ .Values.cluster.csi.enable }}
deploymentStrategy: {{ .Values.cluster.csi.deploymentStrategy }}
{{- if .Values.cluster.sharedDir }}
sharedDir: {{ .Values.cluster.sharedDir }}
{{- end }}
{{- if eq .Values.cluster.kvBackend.embedded false }}
kvBackend:
address: {{ .Values.cluster.kvBackend.address }}
backend: {{ .Values.cluster.kvBackend.backend }}
{{- end }}
{{- if .Values.cluster.kvBackend.tlsSecretName }}
tlsEtcdSecretRefName: {{ .Values.cluster.kvBackend.tlsSecretName }}
{{- end }}
{{- if .Values.cluster.kvBackend.tlsSecretNamespace }}
tlsEtcdSecretRefNamespace: {{ .Values.cluster.kvBackend.tlsSecretNamespace }}
{{- end }}
{{- if .Values.cluster.nodeSelectorTerm.key }}
nodeSelectorTerms:
- matchExpressions:
- key: {{ .Values.cluster.nodeSelectorTerm.key }}
operator: In
values:
- "{{ .Values.cluster.nodeSelectorTerm.value }}"
{{- end }}
{{- if .Values.cluster.toleration.key }}
tolerations:
- key: {{ .Values.cluster.toleration.key }}
operator: "Equal"
value: {{ .Values.cluster.toleration.value }}
effect: "NoSchedule"
{{- end }}
{{- end }}
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: storageosupgrades.storageos.com
annotations:
"helm.sh/hook": crd-install
spec:
group: storageos.com
names:
kind: StorageOSUpgrade
listKind: StorageOSUpgradeList
plural: storageosupgrades
singular: storageosupgrade
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
properties:
newImage:
description: NewImage is the new StorageOS node container image.
type: string
required:
- newImage
type: object
status:
properties:
completed:
description: Completed is the status of upgrade process.
type: boolean
type: object
version: v1
versions:
- name: v1
served: true
storage: true
# Default values for storageos.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
name: storageos-operator
k8sDistro: default
serviceAccount:
create: true
name: storageos-operator-sa
podSecurityPolicy:
enabled: false
annotations: {}
## Specify pod annotations
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
##
# seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
# seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
# apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
# Operator-specific configuration parameters.
operator:
image:
repository: storageos/cluster-operator
tag: 1.5.3
pullPolicy: IfNotPresent
# Cluster-specific configuration parameters.
cluster:
# Set create to true if the operator should auto-create the StorageOS cluster.
create: true
# Name of the deployment.
name: storageos
# Namespace to install the StorageOS cluster into.
namespace: kube-system
# Name of the secret containing StorageOS API credentials.
secretRefName: storageos-api
# Default admin account.
admin:
# Username to authenticate to the StorageOS API with.
username: storageos
# Password to authenticate to the StorageOS API with. If empty, a random
# password will be generated and set in the secretRefName secret.
password:
# sharedDir should be set if running kubelet in a container. This should
# be the path shared into the kubelet container, typically:
# "/var/lib/kubelet/plugins/kubernetes.io~storageos". If not set, defaults
# will be used.
sharedDir:
# Key-Value store backend.
kvBackend:
embedded: true
address:
backend: etcd
tlsSecretName:
tlsSecretNamespace:
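# Example configuration for an external etcd cluster (the address below is
# an assumed endpoint; substitute your own etcd client service):
# kvBackend:
#   embedded: false
#   address: "storageos-etcd-client.etcd.svc.cluster.local:2379"
#   backend: etcd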
# Node selector terms to install StorageOS on.
nodeSelectorTerm:
key:
value:
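# Example, assuming nodes intended for StorageOS carry a hypothetical
# "storageos=true" label (kubectl label node <node> storageos=true):
# nodeSelectorTerm:
#   key: storageos
#   value: "true"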
# Pod toleration for the StorageOS pods.
toleration:
key:
value:
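# Example, assuming storage nodes are tainted with a hypothetical
# "storageos=true:NoSchedule" taint; the operator renders this as an
# Equal/NoSchedule toleration on the StorageOS pods:
# toleration:
#   key: storageos
#   value: "true"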
# Set to true to disable anonymous usage reporting across the cluster.
# Defaults to false. To help improve the product, data such as API usage and
# StorageOS configuration information is collected.
disableTelemetry: false
images:
    # node is the StorageOS node container image to use, available from
    # [Docker Hub](https://hub.docker.com/r/storageos/node/).
node:
repository:
tag:
init:
repository:
tag:
csiV1ClusterDriverRegistrar:
repository:
tag:
csiV1NodeDriverRegistrar:
repository:
tag:
csiV1ExternalProvisioner:
repository:
tag:
csiV1ExternalAttacher:
repository:
tag:
csiV1ExternalAttacherV2:
repository:
tag:
csiV1LivenessProbe:
repository:
tag:
csiV0DriverRegistrar:
repository:
tag:
csiV0ExternalProvisioner:
repository:
tag:
csiV0ExternalAttacher:
repository:
tag:
nfs:
repository:
tag:
kubeScheduler:
repository:
tag:
csi:
enable: true
deploymentStrategy: deployment
# The following is used for cleaning up unmanaged cluster resources when
# auto-install is enabled.
cleanup:
images:
kubectl:
repository: bitnami/kubectl
tag: 1.14.1
resources:
- name: daemonset
command:
- "daemonset"
- "storageos-daemonset"
- name: statefulset
command:
- "statefulset"
- "storageos-statefulset"
- name: csi-helper
command:
- "deployment"
- "storageos-csi-helper"
- name: scheduler
command:
- "deployment"
- "storageos-scheduler"
- name: configmap
command:
- "configmap"
- "storageos-scheduler-config"
- "storageos-scheduler-policy"
- name: serviceaccount
command:
- "serviceaccount"
- "storageos-daemonset-sa"
- "storageos-statefulset-sa"
- name: role
command:
- "role"
- "storageos:key-management"
- name: rolebinding
command:
- "rolebinding"
- "storageos:key-management"
- name: secret
command:
- "secret"
- "init-secret"
- name: service
command:
- "service"
- "storageos"
- name: clusterrole
command:
- "clusterrole"
- "storageos:driver-registrar"
- "storageos:csi-attacher"
- "storageos:csi-provisioner"
- "storageos:pod-fencer"
- "storageos:scheduler-extender"
- "storageos:init"
- "storageos:nfs-provisioner"
- name: clusterrolebinding
command:
- "clusterrolebinding"
- "storageos:csi-provisioner"
- "storageos:csi-attacher"
- "storageos:driver-registrar"
- "storageos:k8s-driver-registrar"
- "storageos:pod-fencer"
- "storageos:scheduler-extender"
- "storageos:init"
- "storageos:nfs-provisioner"
- name: storageclass
command:
- "storageclass"
- "fast"