Commit cc069322 by Harsh Desai, committed by Guangbo

Add portworx chart v1.0.0 (#48)

* Add portworx chart v1.0.0
  Signed-off-by: Harsh Desai <harsh@portworx.com>
* Update questions.yml
  1. set default namespace to kube-system
  2. added partner label
  3. add show_if dependency to kvdb variables
* Update Chart.yaml: remove helm tillerVersion requirements
* Remove tiller 2.9.0 requirements from the chart
  Signed-off-by: Harsh Desai <harsh@portworx.com>
* Update pre-requisites in the README
  Signed-off-by: Harsh Desai <harsh@portworx.com>
parent d6a7bf7d
name: portworx
appVersion: 1.0.0
version: 1.0.0
description: A Helm chart for installing Portworx on Kubernetes.
keywords:
- Storage
- ICP
- persistent disk
- pvc
- cloud native storage
- persistent storage
- portworx
- amd64
home: https://portworx.com/
maintainers:
- name: harsh-px
email: harsh@portworx.com
sources:
- https://github.com/portworx/helm
icon: https://raw.githubusercontent.com/portworx/helm/master/doc/media/k8s-porx.png
# Portworx
## Pre-requisites
This helm chart deploys [Portworx](https://portworx.com/) and [Stork](https://docs.portworx.com/scheduler/kubernetes/stork.html) on your Kubernetes cluster. The minimum requirements for deploying the helm chart are as follows:
- All [Pre-requisites](https://docs.portworx.com/scheduler/kubernetes/install.html#prerequisites) for Portworx must be fulfilled.
## Limitations
* The Portworx helm chart can only be deployed in the kube-system namespace. Hence, use "kube-system" as the "Target namespace" during configuration.
* You can deploy only one Portworx helm chart per Kubernetes cluster.
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
> **Tip**: The Portworx configuration files under the `/etc/pwx/` directory are preserved and will not be deleted.
```
helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Documentation
* [Portworx docs site](https://docs.portworx.com/scheduler/kubernetes/)
* [Portworx interactive tutorials](https://docs.portworx.com/scheduler/kubernetes/px-k8s-interactive.html)
## Installing the Chart using the CLI
To install the chart with the release name `my-release`, run the following command, substituting values relevant to your setup:
##### NOTE:
`etcdEndPoint` is a required field. The chart installation will not proceed unless this option is provided.
If the etcd cluster being used is secured (SSL/TLS), follow the instructions at https://docs.portworx.com/scheduler/kubernetes/etcd-certs-using-secrets.html#create-kubernetes-secret to create a Kubernetes secret with the certificates.
`clusterName` should be a unique name identifying your Portworx cluster. The default value is `mycluster`, but it is recommended to update it to match your naming scheme.
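For a secured etcd cluster, the secret can be created along these lines. This is a sketch: the certificate file names are examples, while the secret name `px-etcd-certs` and the `kube-system` namespace are what this chart's pre-install hook mounts.

```shell
# Sketch: create a Kubernetes secret holding the etcd certificates.
# File names (etcd-ca.crt, etcd.crt, etcd.key) are examples; substitute
# the names you pass via the etcd.ca/etcd.cert/etcd.key chart values.
kubectl create secret generic px-etcd-certs -n kube-system \
  --from-file=etcd-ca.crt \
  --from-file=etcd.crt \
  --from-file=etcd.key
```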
Example of using the helm CLI to install the chart:
```
helm install --debug --name my-release --set etcdEndPoint=etcd:http://192.168.70.90:2379,clusterName=$(uuid) ./helm/charts/portworx/
```
## Basic troubleshooting
#### Helm install errors with "no available release name found"
```
helm install --dry-run --debug --set etcdEndPoint=etcd:http://192.168.70.90:2379,clusterName=$(uuid) ./helm/charts/px/
[debug] Created tunnel using local port: '37304'
[debug] SERVER: "127.0.0.1:37304"
[debug] Original chart version: ""
[debug] CHART PATH: /root/helm/charts/px
Error: no available release name found
```
This most likely indicates that Tiller doesn't have the right RBAC permissions.
You can verify this in the Tiller logs:
```
[storage/driver] 2018/02/07 06:00:13 get: failed to get "singing-bison.v1": configmaps "singing-bison.v1" is forbidden: User "system:serviceaccount:kube-system:default" cannot get configmaps in the namespace "kube-system"
[tiller] 2018/02/07 06:00:13 info: generated name singing-bison is taken. Searching again.
[tiller] 2018/02/07 06:00:13 warning: No available release names found after 5 tries
[tiller] 2018/02/07 06:00:13 failed install prepare step: no available release name found
```
#### Helm install errors with "Job failed: BackoffLimitExceeded"
```
helm install --debug --set dataInterface=eth1,managementInterface=eth1,etcdEndPoint=etcd:http://192.168.70.179:2379,clusterName=$(uuid) ./helm/charts/px/
[debug] Created tunnel using local port: '36389'
[debug] SERVER: "127.0.0.1:36389"
[debug] Original chart version: ""
[debug] CHART PATH: /root/helm/charts/px
Error: Job failed: BackoffLimitExceeded
```
This most likely indicates that the pre-install hook for the helm chart has failed due to a misconfigured or inaccessible etcd URL.
Follow these steps to find the reason for the failure.
```
kubectl get pods -nkube-system -a | grep preinstall
px-etcd-preinstall-hook-hxvmb 0/1 Error 0 57s
kubectl logs po/px-etcd-preinstall-hook-hxvmb -nkube-system
Initializing...
Verifying if the provided etcd url is accessible: http://192.168.70.179:2379
Response Code: 000
Incorrect ETCD URL provided. It is either not reachable or is incorrect...
```
Ensure the correct etcd URL is set as a parameter to the `helm install` command.
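Independently of the hook, you can probe the endpoint from any machine that can reach etcd. This is a sketch: it mirrors the "Response Code" check the hook prints, using etcd's `/version` endpoint on the client port; the URL shown is the example from the logs above.

```shell
# Sketch: probe the etcd client endpoint the same way the pre-install
# hook does. A reachable etcd returns an HTTP code such as 200; "000"
# means the endpoint could not be contacted at all.
ETCD_URL="http://192.168.70.179:2379"   # substitute your endpoint
curl -s -o /dev/null -w "%{http_code}\n" "$ETCD_URL/version"
```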
#### Helm install errors with "Job failed: Deadline exceeded"
```
helm install --debug --set dataInterface=eth1,managementInterface=eth1,etcdEndPoint=etcd:http://192.168.20.290:2379,clusterName=$(uuid) ./charts/px/
[debug] Created tunnel using local port: '39771'
[debug] SERVER: "127.0.0.1:39771"
[debug] Original chart version: ""
[debug] CHART PATH: /root/helm/charts/px
Error: Job failed: DeadlineExceeded
```
This error indicates that the pre-install hook for the helm chart failed to run to completion. Verify that the etcd URL is accessible. This error occurs on Kubernetes clusters running versions below 1.8.
Follow these steps to find the reason for the failure.
```
kubectl get pods -nkube-system -a | grep preinstall
px-hook-etcd-preinstall-dzmkl 0/1 Error 0 6m
px-hook-etcd-preinstall-nlqwl 0/1 Error 0 6m
px-hook-etcd-preinstall-nsjrj 0/1 Error 0 5m
px-hook-etcd-preinstall-r9gmz 0/1 Error 0 6m
kubectl logs po/px-hook-etcd-preinstall-dzmkl -nkube-system
Initializing...
Verifying if the provided etcd url is accessible: http://192.168.20.290:2379
Response Code: 000
Incorrect ETCD URL provided. It is either not reachable or is incorrect...
```
Ensure the correct etcd URL is set as a parameter to the `helm install` command.
# Portworx
[Portworx](https://portworx.com/) is a software-defined persistent storage solution designed and purpose-built for applications deployed as containers, via container orchestrators such as Kubernetes, Marathon and Swarm. It is a clustered block storage solution that provides a cloud-native layer from which containerized stateful applications programmatically consume block, file and object storage services directly through the scheduler.
categories:
- storage
namespace: kube-system
labels:
io.rancher.certified: partner
questions:
################################### KVDB options ################################
- variable: internalKVDB
description: "Select if you wish to run internal kvdb. Note internal kvdb is in beta. DO NOT enable internal kvdb when running with KOPS. The kvdb endpoints provided above will be ignored."
type: boolean
label: Enable Internal KVDB store
default: false
group: "Key value store parameters (Required)"
- variable: kvdb
description: "Points to your key value store, such as an etcd cluster or a consul cluster. Use semicolon separated for multiple endpoints. (example: etcd:http://etcd-1.com.net:2379;etcd:http://etcd-2.com.net:2379;etcd:http://etcd-3.com.net:2379)"
type: string
label: "Endpoint address"
required: true
show_if: "internalKVDB=false"
group: "Key value store parameters (Required)"
- variable: etcd.ca
description: "Name of CA file for ETCD authentication. Example: etcd-ca.crt. Follow https://docs.portworx.com/scheduler/kubernetes/etcd-certs-using-secrets.html to create a Kubernetes secret for the etcd certs."
type: string
label: "ETCD CA file"
group: "Key value store security Parameters"
show_if: "internalKVDB=false"
- variable: etcd.cert
description: "Name of certificate for ETCD authentication. Example: etcd.crt"
type: string
label: "ETCD cert file"
group: "Key value store security Parameters"
show_if: "internalKVDB=false"
- variable: etcd.key
description: "Name of certificate key for ETCD authentication. Example: etcd.key"
type: string
label: "ETCD cert key file"
group: "Key value store security Parameters"
show_if: "internalKVDB=false"
- variable: etcd.credentials
description: "Username and password for ETCD authentication in the form user:password. Not needed if using certificates."
type: string
label: "ETCD credentials"
group: "Key value store security Parameters"
show_if: "internalKVDB=false"
- variable: consul.acl
description: "ACL token value used for Consul authentication. (example: 398073a8-5091-4d9c-871a-bbbeb030d1f6). Needed only for consul."
type: string
group: "Key value store security Parameters"
label: Consul ACL Token
show_if: "internalKVDB=false"
################################### Storage options ################################
- variable: drives
description: "This is a ';'-separated list of drives. For example: '/dev/sda;/dev/sdb;/dev/sdc'. If left empty, Portworx will try to use available drives."
label: "Drives"
type: string
group: "Storage Parameters"
- variable: usefileSystemDrive
default: false
label: "Use drives with a filesystem."
description: "Instructs PX to use drives with a filesystem."
type: boolean
group: "Storage Parameters"
- variable: usedrivesAndPartitions
default: false
description: "Instructs PX to use unmounted drives and partitions."
type: boolean
label: "Use unmounted drives and partitions"
group: "Storage Parameters"
- variable: journalDevice
description: "This allows PX to create its own journal partition on the best drive to absorb PX metadata writes. Journal writes are small with frequent syncs, and hence a separate journal partition enables better performance. Use the value 'auto' if you want Portworx to create its own journal partition."
type: string
label: "Journal Device"
group: "Storage Parameters"
################################### Network options ################################
- variable: dataInterface
description: "Specify data network interface. This is useful if your instances have non-standard network interfaces. (example: eth1). By default, Portworx will select the first routable interface."
type: string
label: "Data Network Interface"
group: "Network Parameters"
- variable: managementInterface
description: "Specify management network interface. This is useful if your instances have non-standard network interfaces. (example: eth1). By default, Portworx will select the first routable interface."
type: string
label: "Management Network Interface"
group: "Network Parameters"
################################### Platform options ################################
- variable: pksInstall
default: false
label: "Installing on Pivotal Container Service (PKS)"
description: "Select if installing on Pivotal Container Service (PKS)."
type: boolean
group: "Platform Parameters"
- variable: AKSorEKSInstall
default: false
label: "Installing on AKS or EKS"
description: "Select if installing on Amazon Elastic Container Service for Kubernetes (EKS) or Azure Kubernetes Service (AKS)."
type: boolean
group: "Platform Parameters"
- variable: isTargetOSCoreOS
default: false
label: "Installing on CoreOS"
description: "Select if installing on CoreOS"
type: boolean
group: "Platform Parameters"
################################### Registry settings options ################################
- variable: registrySecret
description: "Specify a custom Kubernetes secret that will be used to authenticate with a container registry. Must be defined in kube-system namespace. (example: regcred)"
type: string
label: "Registry Kubernetes Secret"
group: "Container Registry Parameters"
- variable: customRegistryURL
description: "Specify a custom container registry server (including repository) that will be used instead of index.docker.io to download Docker images. (example: dockerhub.acme.net:5443 or myregistry.com/myrepository/)"
label: "Custom Registry URL"
type: string
group: "Container Registry Parameters"
################################## Optional features ############################
- variable: csi
description: "Select if you want to enable CSI (Container Storage Interface). CSI is still in ALPHA."
type: boolean
label: "Enable CSI"
default: false
required: false
group: "Advanced parameters"
- variable: stork
default: true
label: "Install Stork"
description: "Storage Orchestration Runtime for Kubernetes (STORK) is a scheduler plugin that provides hyper-convergence, snapshots and storage-aware scheduling (recommended)."
type: boolean
group: "Advanced parameters"
- variable: storkVersion
default: "1.2.0"
label: "Stork version"
description: "Version of Stork to be used"
type: string
group: "Advanced parameters"
- variable: lighthouse
default: false
label: "Lighthouse"
description: "Select if you want to install Portworx Lighthouse GUI."
type: boolean
group: "Advanced parameters"
- variable: lighthouseVersion
default: "1.4.0"
description: "Version of the Lighthouse GUI to be used"
type: string
label: "Lighthouse version"
group: "Advanced parameters"
- variable: envVars
label: "Environment variables"
description: "Semicolon-separated list of environment variables that will be exported to Portworx. (example: API_SERVER=http://lighthouse-new.portworx.com;MYENV1=val1;MYENV2=val2)"
type: string
group: "Advanced parameters"
- variable: imageVersion
default: "1.5.1"
description: "The Portworx image version to be used while deploying"
type: string
label: Portworx version to be deployed.
group: "Advanced parameters"
- variable: clusterName
description: "Name of the Portworx Cluster"
type: string
label: Portworx cluster name
default: mycluster
group: "Advanced parameters"
Your Release is named {{ .Release.Name | quote }}
Portworx Pods should be running on each node in your cluster.
Portworx creates a unified pool of the disks attached to your Kubernetes nodes. No further action should be required and you are ready to consume Portworx volumes as part of your application data requirements.
For further information on using Portworx, refer to the following doc pages:
- For dynamically provisioning volumes: https://docs.portworx.com/scheduler/kubernetes/dynamic-provisioning.html
- For preprovisioned volumes: https://docs.portworx.com/scheduler/kubernetes/preprovisioned-volumes.html
- To use Stork (Storage Orchestration Runtime for Kubernetes) for hyperconvergence and snapshots: https://docs.portworx.com/scheduler/kubernetes/stork.html
- For stateful application solutions using Portworx: https://docs.portworx.com/scheduler/kubernetes/k8s-px-app-samples.html
- For interactive tutorials on using Portworx on Kubernetes: https://docs.portworx.com/scheduler/kubernetes/px-k8s-interactive.html
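To confirm the installation, the Portworx pods can be listed by label. This is a sketch: it assumes the Portworx DaemonSet pods carry the `name: portworx` label, which is the selector the chart's `portworx-service` uses.

```shell
# Sketch: list the Portworx pods deployed by this chart and the node
# each one runs on.
kubectl get pods -n kube-system -l name=portworx -o wide
```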
{{/* Gets the correct API Version based on the version of the cluster
*/}}
{{- define "rbac.apiVersion" -}}
{{- if semverCompare ">= 1.8-0" .Capabilities.KubeVersion.GitVersion -}}
"rbac.authorization.k8s.io/v1"
{{- else -}}
"rbac.authorization.k8s.io/v1beta1"
{{- end -}}
{{- end -}}
{{- define "px.labels" -}}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
{{- end -}}
{{- define "driveOpts" }}
{{ $v := .Values.installOptions.drives | split "," }}
{{$v._0}}
{{- end -}}
{{- define "px.kubernetesVersion" -}}
{{$version := .Capabilities.KubeVersion.GitVersion | regexFind "^v\\d+\\.\\d+\\.\\d+"}}{{$version}}
{{- end -}}
{{- define "px.getImage" -}}
{{- if (.Values.customRegistryURL) -}}
{{- if (eq "/" (.Values.customRegistryURL | regexFind "/")) -}}
{{- if .Values.openshiftInstall -}}
{{ cat (trim .Values.customRegistryURL) "/px-monitor" | replace " " ""}}
{{- else -}}
{{ cat (trim .Values.customRegistryURL) "/oci-monitor" | replace " " ""}}
{{- end -}}
{{- else -}}
{{- if .Values.openshiftInstall -}}
{{cat (trim .Values.customRegistryURL) "/portworx/px-monitor" | replace " " ""}}
{{- else -}}
{{cat (trim .Values.customRegistryURL) "/portworx/oci-monitor" | replace " " ""}}
{{- end -}}
{{- end -}}
{{- else -}}
{{- if .Values.openshiftInstall -}}
{{ "registry.connect.redhat.com/portworx/px-monitor" }}
{{- else -}}
{{ "portworx/oci-monitor" }}
{{- end -}}
{{- end -}}
{{- end -}}
{{- define "px.getStorkImage" -}}
{{- if (.Values.customRegistryURL) -}}
{{- if (eq "/" (.Values.customRegistryURL | regexFind "/")) -}}
{{ cat (trim .Values.customRegistryURL) "/stork" | replace " " ""}}
{{- else -}}
{{cat (trim .Values.customRegistryURL) "/openstorage/stork" | replace " " ""}}
{{- end -}}
{{- else -}}
{{ "openstorage/stork" }}
{{- end -}}
{{- end -}}
{{- define "px.getk8sImages" -}}
{{- if (.Values.customRegistryURL) -}}
{{- if (eq "/" (.Values.customRegistryURL | regexFind "/")) -}}
{{ trim .Values.customRegistryURL }}
{{- else -}}
{{cat (trim .Values.customRegistryURL) "/gcr.io/google_containers" | replace " " ""}}
{{- end -}}
{{- else -}}
{{ "gcr.io/google_containers" }}
{{- end -}}
{{- end -}}
{{- define "px.getcsiImages" -}}
{{- if (.Values.customRegistryURL) -}}
{{- if (eq "/" (.Values.customRegistryURL | regexFind "/")) -}}
{{ trim .Values.customRegistryURL }}
{{- else -}}
{{cat (trim .Values.customRegistryURL) "/quay.io/k8scsi" | replace " " ""}}
{{- end -}}
{{- else -}}
{{ "quay.io/k8scsi" }}
{{- end -}}
{{- end -}}
{{- define "px.getLighthouseImages" -}}
{{- if (.Values.customRegistryURL) -}}
{{- if (eq "/" (.Values.customRegistryURL | regexFind "/")) -}}
{{ trim .Values.customRegistryURL }}
{{- else -}}
{{cat (trim .Values.customRegistryURL) "/portworx/" | replace " " ""}}
{{- end -}}
{{- else -}}
{{ "portworx/" }}
{{- end -}}
{{- end -}}
{{- define "px.registryConfigType" -}}
{{- if semverCompare ">=1.9-0" .Capabilities.KubeVersion.GitVersion -}}
".dockerconfigjson"
{{- else -}}
".dockercfg"
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account to use for hooks
*/}}
{{- define "px.hookServiceAccount" -}}
{{- if .Values.serviceAccount.hook.create -}}
{{- printf "%s-hook" .Chart.Name | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{ default "default" .Values.serviceAccount.hook.name }}
{{- end -}}
{{- end -}}
{{/*
Create the name of the cluster role to use for hooks
*/}}
{{- define "px.hookClusterRole" -}}
{{- if .Values.serviceAccount.hook.create -}}
{{- printf "%s-hook" .Chart.Name | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{ default "default" .Values.serviceAccount.hook.name }}
{{- end -}}
{{- end -}}
{{/*
Create the name of the cluster role binding to use for hooks
*/}}
{{- define "px.hookClusterRoleBinding" -}}
{{- if .Values.serviceAccount.hook.create -}}
{{- printf "%s-hook" .Chart.Name | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{ default "default" .Values.serviceAccount.hook.name }}
{{- end -}}
{{- end -}}
{{- $customRegistryURL := .Values.customRegistryURL | default "none" }}
{{- $registrySecret := .Values.registrySecret | default "none" }}
apiVersion: batch/v1
kind: Job
metadata:
namespace: kube-system
name: px-hook-postdelete-unlabelnode
labels:
heritage: {{.Release.Service | quote }}
release: {{.Release.Name | quote }}
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
app.kubernetes.io/managed-by: {{.Release.Service | quote }}
app.kubernetes.io/instance: {{.Release.Name | quote }}
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": before-hook-creation
spec:
{{ if semverCompare ">= 1.8-0" .Capabilities.KubeVersion.GitVersion }}
backoffLimit: 0
{{ else }}
activeDeadlineSeconds: 30
{{ end }}
template:
spec:
{{- if not (eq $registrySecret "none") }}
imagePullSecrets:
- name: {{ $registrySecret }}
{{- end }}
restartPolicy: Never
serviceAccountName: {{ template "px.hookServiceAccount" . }}
containers:
- name: post-delete-job
{{- if eq $customRegistryURL "none" }}
image: "lachlanevenson/k8s-kubectl:{{ template "px.kubernetesVersion" . }}"
{{- else}}
image: "{{ $customRegistryURL }}/lachlanevenson/k8s-kubectl:{{ template "px.kubernetesVersion" . }}"
{{- end}}
args: ['label','nodes','--all','px/enabled-']
{{- $customRegistryURL := .Values.customRegistryURL | default "none" }}
{{- $registrySecret := .Values.registrySecret | default "none" }}
apiVersion: batch/v1
kind: Job
metadata:
namespace: kube-system
name: px-hook-predelete-nodelabel
labels:
heritage: {{.Release.Service | quote }}
release: {{.Release.Name | quote }}
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
app.kubernetes.io/managed-by: {{.Release.Service | quote }}
app.kubernetes.io/instance: {{.Release.Name | quote }}
annotations:
"helm.sh/hook": pre-delete
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": before-hook-creation
spec:
{{ if semverCompare ">= 1.8-0" .Capabilities.KubeVersion.GitVersion }}
backoffLimit: 0
{{ else }}
activeDeadlineSeconds: 30
{{ end }}
template:
spec:
{{- if not (eq $registrySecret "none") }}
imagePullSecrets:
- name: {{ $registrySecret }}
{{- end }}
serviceAccountName: {{ template "px.hookServiceAccount" . }}
restartPolicy: Never
containers:
- name: pre-delete-job
{{- if eq $customRegistryURL "none" }}
image: "lachlanevenson/k8s-kubectl:{{ template "px.kubernetesVersion" . }}"
{{- else}}
image: "{{ $customRegistryURL }}/lachlanevenson/k8s-kubectl:{{ template "px.kubernetesVersion" . }}"
{{- end}}
args: ['label','nodes','--all','px/enabled=remove','--overwrite']
{{- $customRegistryURL := .Values.customRegistryURL | default "none" }}
{{- $registrySecret := .Values.registrySecret | default "none" }}
{{- $etcdCA := .Values.etcd.ca | default "none" }}
{{- $etcdCert := .Values.etcd.cert | default "none" }}
{{- $etcdKey := .Values.etcd.key | default "none" }}
apiVersion: batch/v1
kind: Job
metadata:
namespace: kube-system
name: px-hook-etcd-preinstall
labels:
heritage: {{.Release.Service | quote }}
release: {{.Release.Name | quote }}
app.kubernetes.io/managed-by: {{.Release.Service | quote }}
app.kubernetes.io/instance: {{.Release.Name | quote }}
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
"helm.sh/hook": pre-install
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": before-hook-creation
spec:
{{ if semverCompare ">= 1.8-0" .Capabilities.KubeVersion.GitVersion }}
backoffLimit: 0
{{ else }}
activeDeadlineSeconds: 30
{{ end }}
template:
spec:
{{- if not (eq $registrySecret "none") }}
imagePullSecrets:
- name: {{ $registrySecret }}
{{- end }}
restartPolicy: Never
containers:
- name: pre-install-job
terminationMessagePath: '/dev/termination-log'
terminationMessagePolicy: 'FallbackToLogsOnError'
imagePullPolicy: Always
{{- if eq $customRegistryURL "none" }}
image: "hrishi/px-etcd-preinstall-hook:v2"
{{- else}}
image: "{{ $customRegistryURL }}/hrishi/px-etcd-preinstall-hook:v2"
{{- end }}
{{- if not (eq $etcdCert "none") }}
command: ['/bin/bash']
args: ['/usr/bin/etcdStatus.sh',
"{{ .Values.kvdb }}",
"/etc/pwx/etcdcerts/{{ $etcdCA }}",
"/etc/pwx/etcdcerts/{{ $etcdCert }}",
"/etc/pwx/etcdcerts/{{ $etcdKey }}"]
volumeMounts:
- mountPath: /etc/pwx/etcdcerts
name: etcdcerts
volumes:
- name: etcdcerts
secret:
secretName: px-etcd-certs
items:
- key: {{ $etcdCA }}
path: {{ $etcdCA }}
- key: {{ $etcdCert }}
path: {{ $etcdCert }}
- key: {{ $etcdKey }}
path: {{ $etcdKey }}
{{- else}}
command: ['/bin/bash']
args: ['/usr/bin/etcdStatus.sh',"{{ .Values.kvdb }}"]
{{- end}}
{{- if or (and .Values.openshiftInstall (eq .Values.openshiftInstall true)) (and .Values.AKSorEKSInstall (eq .Values.AKSorEKSInstall true)) (.Capabilities.KubeVersion.GitVersion | regexMatch "gke") }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: portworx-pvc-controller-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: portworx-pvc-controller-role
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["create","delete","get","list","update","watch"]
- apiGroups: [""]
resources: ["persistentvolumes/status"]
verbs: ["update"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "update", "watch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims/status"]
verbs: ["update"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["create", "delete", "get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["endpoints", "services"]
verbs: ["create", "delete", "get"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["events"]
verbs: ["watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch", "update"]
- apiGroups: [""]
resources: ["serviceaccounts"]
verbs: ["get", "create"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "update"]
---
kind: ClusterRoleBinding
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: portworx-pvc-controller-role-binding
subjects:
- kind: ServiceAccount
name: portworx-pvc-controller-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: portworx-pvc-controller-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
tier: control-plane
name: portworx-pvc-controller
namespace: kube-system
spec:
replicas: 3
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
name: portworx-pvc-controller
tier: control-plane
spec:
{{- if (and (.Values.openshiftInstall) (eq .Values.openshiftInstall true))}}
imagePullSecrets:
- name: {{ required "A registry secret is required for openshift installation" .Values.registrySecret }}
{{- else }}
{{- if not (empty .Values.registrySecret) }}
imagePullSecrets:
- name: {{ .Values.registrySecret }}
{{- end }}
{{- end }}
containers:
- command:
- kube-controller-manager
- --leader-elect=true
- --address=0.0.0.0
- --controllers=persistentvolume-binder
- --use-service-account-credentials=true
- --leader-elect-resource-lock=configmaps
image: "{{ template "px.getk8sImages" . }}/kube-controller-manager-amd64:{{ template "px.kubernetesVersion" . }}"
livenessProbe:
failureThreshold: 8
httpGet:
host: 127.0.0.1
path: /healthz
port: 10252
scheme: HTTP
initialDelaySeconds: 15
timeoutSeconds: 15
name: portworx-pvc-controller-manager
resources:
requests:
cpu: 200m
hostNetwork: true
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "name"
operator: In
values:
- portworx-pvc-controller
topologyKey: "kubernetes.io/hostname"
serviceAccountName: portworx-pvc-controller-account
{{- end }}
{{- if and .Values.csi (eq .Values.csi true) }}
{{- $customRegistryURL := .Values.customRegistryURL | default "none" }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: px-csi-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: px-csi-role
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete", "update"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: px-csi-role-binding
subjects:
- kind: ServiceAccount
name: px-csi-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: px-csi-role
apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
name: px-csi-service
namespace: kube-system
spec:
clusterIP: None
---
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
name: px-csi-ext
namespace: kube-system
spec:
serviceName: "px-csi-service"
replicas: 1
template:
metadata:
labels:
app: px-csi-driver
spec:
serviceAccount: px-csi-account
containers:
- name: csi-external-provisioner
imagePullPolicy: Always
image: "{{ template "px.getcsiImages" . }}/csi-provisioner:v0.2.0"
args:
- "--v=5"
- "--provisioner=com.openstorage.pxd"
- "--csi-address=$(ADDRESS)"
env:
- name: ADDRESS
value: /csi/csi.sock
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: csi-attacher
imagePullPolicy: Always
image: "{{ template "px.getcsiImages" . }}/csi-attacher:v0.2.0"
args:
- "--v=5"
- "--csi-address=$(ADDRESS)"
env:
- name: ADDRESS
value: /csi/csi.sock
volumeMounts:
- name: socket-dir
mountPath: /csi
volumes:
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/com.openstorage.pxd
type: DirectoryOrCreate
{{- end }}
{{- if and .Values.lighthouse (eq .Values.lighthouse true) -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: px-lh-account
namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-lh-role
namespace: kube-system
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "update"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-lh-role-binding
namespace: kube-system
subjects:
- kind: ServiceAccount
name: px-lh-account
namespace: kube-system
roleRef:
kind: Role
name: px-lh-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
name: px-lighthouse
namespace: kube-system
labels:
tier: px-web-console
spec:
type: NodePort
ports:
- name: http
port: 80
nodePort: 32678
- name: https
port: 443
nodePort: 32679
selector:
tier: px-web-console
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: px-lighthouse
namespace: kube-system
labels:
tier: px-web-console
spec:
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
tier: px-web-console
replicas: 1
template:
metadata:
labels:
tier: px-web-console
spec:
initContainers:
- name: config-init
image: "{{ template "px.getLighthouseImages" . }}lh-config-sync:0.2"
imagePullPolicy: Always
args:
- "init"
volumeMounts:
- name: config
mountPath: /config/lh
containers:
- name: px-lighthouse
image: "{{ template "px.getLighthouseImages" . }}px-lighthouse:{{ required "A valid lighthouse image version is required" .Values.lighthouseVersion }}"
imagePullPolicy: Always
ports:
- containerPort: 80
- containerPort: 443
volumeMounts:
- name: config
mountPath: /config/lh
- name: config-sync
image: "{{ template "px.getLighthouseImages" . }}lh-config-sync:0.2"
imagePullPolicy: Always
args:
- "sync"
volumeMounts:
- name: config
mountPath: /config/lh
serviceAccountName: px-lh-account
volumes:
- name: config
emptyDir: {}
{{- end -}}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: px-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: node-get-put-list-role
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["watch", "get", "update", "list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["delete", "get", "list"]
- apiGroups: [""]
resources: ["persistentvolumeclaims", "persistentvolumes"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "update", "create"]
- apiGroups: ["extensions"]
resources: ["podsecuritypolicies"]
resourceNames: ["privileged"]
verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: node-role-binding
subjects:
- kind: ServiceAccount
name: px-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: node-get-put-list-role
apiGroup: rbac.authorization.k8s.io
kind: Service
apiVersion: v1
metadata:
name: portworx-service
namespace: kube-system
labels:
name: portworx
spec:
selector:
name: portworx
ports:
- name: px-api
protocol: TCP
port: 9001
targetPort: 9001
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: portworx-db-sc
provisioner: kubernetes.io/portworx-volume
parameters:
repl: "3"
io_profile: "db"
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: portworx-db2-sc
provisioner: kubernetes.io/portworx-volume
parameters:
repl: "3"
block_size: "512b"
io_profile: "db"
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: portworx-shared-sc
provisioner: kubernetes.io/portworx-volume
parameters:
repl: "3"
shared: "true"
---
#
# NULL StorageClass that documents all possible
# Portworx StorageClass parameters
#
# Please refer to : https://docs.portworx.com/scheduler/kubernetes/dynamic-provisioning.html
#
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: portworx-null-sc
annotations:
params/docs: 'https://docs.portworx.com/scheduler/kubernetes/dynamic-provisioning.html'
params/fs: "Filesystem to be laid out: none|xfs|ext4 "
params/block_size: "Block size"
params/repl: "Replication factor for the volume: 1|2|3"
params/shared: "Flag to create a globally shared namespace volume which can be used by multiple pods : true|false"
params/priority_io: "IO Priority: low|medium|high"
params/io_profile: "IO Profile can be used to override the I/O algorithm Portworx uses for the volumes. Supported values are [db](/maintain/performance/tuning.html#db), [sequential](/maintain/performance/tuning.html#sequential), [random](/maintain/performance/tuning.html#random), [cms](/maintain/performance/tuning.html#cms)"
    params/group: "The group a volume should belong to. Portworx restricts replication sets of volumes in the same group to different nodes. If the force group option 'fg' is set to true, the group rule is strictly enforced; by default it is not."
    params/fg: "This option enforces the volume group policy. If a volume in a group cannot find nodes for its replication sets that do not already hold other volumes of the same group, volume creation will fail."
params/label: "List of comma-separated name=value pairs to apply to the Portworx volume"
params/nodes: "Comma-separated Portworx Node ID's to use for replication sets of the volume"
params/aggregation_level: "Specifies the number of replication sets the volume can be aggregated from"
params/snap_schedule: "Snapshot schedule. Following are the accepted formats: periodic=_mins_,_snaps-to-keep_ daily=_hh:mm_,_snaps-to-keep_ weekly=_weekday@hh:mm_,_snaps-to-keep_ monthly=_day@hh:mm_,_snaps-to-keep_ _snaps-to-keep_ is optional. Periodic, Daily, Weekly and Monthly keep last 5, 7, 5 and 12 snapshots by default respectively"
params/sticky: "Flag to create sticky volumes that cannot be deleted until the flag is disabled"
params/journal: "Flag to indicate if you want to use journal device for the volume's metadata. This will use the journal device that you used when installing Portworx. As of PX version 1.3, it is recommended to use a journal device to absorb PX metadata writes"
provisioner: kubernetes.io/portworx-volume
parameters:
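# The annotations on the null StorageClass above document each parameter. As an
# illustration only (the class name "portworx-example-sc" and all values below
# are made up and not installed by this chart), a StorageClass combining a few
# of those parameters might look like:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: portworx-example-sc      # hypothetical name
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"                       # keep 2 replicas of each volume
  priority_io: "high"             # high IO priority
  snap_schedule: "periodic=60,10" # snapshot every 60 mins, keep the last 10
  sticky: "true"                  # volume cannot be deleted until the flag is cleared
```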
{{- if and (.Values.stork) (eq .Values.stork true)}}
{{- $isCoreOS := .Values.isTargetOSCoreOS | default false }}
{{- $customRegistryURL := .Values.customRegistryURL | default "none" }}
{{- $registrySecret := .Values.registrySecret | default "none" }}
apiVersion: v1
kind: ConfigMap
metadata:
name: stork-config
namespace: kube-system
data:
policy.cfg: |-
{
"kind": "Policy",
"apiVersion": "v1",
{{- if semverCompare "< 1.10-0" .Capabilities.KubeVersion.GitVersion }}
"predicates": [
{{- if semverCompare "< 1.9-0" .Capabilities.KubeVersion.GitVersion }}
{"name": "NoVolumeNodeConflict"},
{{- end}}
{"name": "MaxAzureDiskVolumeCount"},
{"name": "NoVolumeZoneConflict"},
{"name": "PodToleratesNodeTaints"},
{"name": "CheckNodeMemoryPressure"},
{"name": "MaxEBSVolumeCount"},
{"name": "MaxGCEPDVolumeCount"},
{"name": "MatchInterPodAffinity"},
{"name": "NoDiskConflict"},
{"name": "GeneralPredicates"},
{"name": "CheckNodeDiskPressure"}
],
"priorities": [
{"name": "NodeAffinityPriority", "weight": 1},
{"name": "TaintTolerationPriority", "weight": 1},
{"name": "SelectorSpreadPriority", "weight": 1},
{"name": "InterPodAffinityPriority", "weight": 1},
{"name": "LeastRequestedPriority", "weight": 1},
{"name": "BalancedResourceAllocation", "weight": 1},
{"name": "NodePreferAvoidPodsPriority", "weight": 1}
],
{{- end}}
"extenders": [
{
"urlPrefix": "http://stork-service.kube-system.svc.cluster.local:8099",
"apiVersion": "v1beta1",
"filterVerb": "filter",
"prioritizeVerb": "prioritize",
"weight": 5,
"enableHttps": false,
"nodeCacheCapable": false
}
]
}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: stork-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: stork-role
rules:
- apiGroups: [""]
resources: ["pods", "pods/exec"]
verbs: ["get", "list", "delete", "create"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: ["stork.libopenstorage.org"]
resources: ["rules"]
verbs: ["get", "list"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["create", "list", "watch", "delete", "get"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
resources: ["volumesnapshotdatas"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "update"]
- apiGroups: [""]
resources: ["services"]
verbs: ["get"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["*"]
resources: ["deployments", "deployments/extensions"]
verbs: ["list", "get", "watch", "patch", "update", "initialize"]
- apiGroups: ["*"]
resources: ["statefulsets", "statefulsets/extensions"]
verbs: ["list", "get", "watch", "patch", "update", "initialize"]
---
kind: ClusterRoleBinding
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: stork-role-binding
subjects:
- kind: ServiceAccount
name: stork-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: stork-role
apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
name: stork-service
namespace: kube-system
spec:
selector:
name: stork
ports:
- protocol: TCP
port: 8099
targetPort: 8099
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
tier: control-plane
name: stork
namespace: kube-system
spec:
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
replicas: 3
template:
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
name: stork
tier: control-plane
spec:
{{- if not (eq $registrySecret "none") }}
imagePullSecrets:
- name: {{ $registrySecret }}
{{- end }}
containers:
- command:
- /stork
- --driver=pxd
- --verbose
- --leader-elect=true
imagePullPolicy: Always
image: {{ template "px.getStorkImage" . }}:{{ required "A valid Image tag is required in the SemVer format" .Values.storkVersion }}
resources:
requests:
cpu: '0.1'
name: stork
hostPID: false
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "name"
operator: In
values:
- stork
topologyKey: "kubernetes.io/hostname"
serviceAccountName: stork-account
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: stork-snapshot-sc
provisioner: stork-snapshot
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: stork-scheduler-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: stork-scheduler-role
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "update"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch", "update"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["create"]
- apiGroups: [""]
resourceNames: ["kube-scheduler"]
resources: ["endpoints"]
verbs: ["delete", "get", "patch", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["delete", "get", "list", "watch"]
- apiGroups: [""]
resources: ["bindings", "pods/binding"]
verbs: ["create"]
- apiGroups: [""]
resources: ["pods/status"]
verbs: ["patch", "update"]
- apiGroups: [""]
resources: ["replicationcontrollers", "services"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps", "extensions"]
resources: ["replicasets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
resources: ["statefulsets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["policy"]
resources: ["poddisruptionbudgets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims", "persistentvolumes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: stork-scheduler-role-binding
subjects:
- kind: ServiceAccount
name: stork-scheduler-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: stork-scheduler-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
labels:
component: scheduler
tier: control-plane
name: stork-scheduler
namespace: kube-system
spec:
replicas: 3
template:
metadata:
labels:
component: scheduler
tier: control-plane
name: stork-scheduler
spec:
containers:
- command:
- /usr/local/bin/kube-scheduler
- --address=0.0.0.0
- --leader-elect=true
- --scheduler-name=stork
- --policy-configmap=stork-config
- --policy-configmap-namespace=kube-system
- --lock-object-name=stork-scheduler
image: "{{ template "px.getk8sImages" . }}/kube-scheduler-amd64:{{ template "px.kubernetesVersion" . }}"
livenessProbe:
httpGet:
path: /healthz
port: 10251
initialDelaySeconds: 15
name: stork-scheduler
readinessProbe:
httpGet:
path: /healthz
port: 10251
resources:
requests:
cpu: '0.1'
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "name"
operator: In
values:
- stork-scheduler
topologyKey: "kubernetes.io/hostname"
hostPID: false
serviceAccountName: stork-scheduler-account
{{- end }}
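# Since the stork-scheduler Deployment above registers itself under
# --scheduler-name=stork, a workload opts into it through spec.schedulerName.
# A minimal sketch (the pod and image names are illustrative, not part of the
# chart):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app      # hypothetical pod name
spec:
  schedulerName: stork   # route scheduling through the stork extender
  containers:
  - name: app
    image: nginx
```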
{{- if .Values.serviceAccount.hook.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "px.hookServiceAccount" . }}
namespace: kube-system
annotations:
"helm.sh/hook-delete-policy": before-hook-creation
"helm.sh/hook": "pre-install,pre-delete,post-delete"
labels:
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{.Release.Service | quote }}
app.kubernetes.io/instance: {{.Release.Name | quote }}
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
---
kind: ClusterRole
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
annotations:
"helm.sh/hook-delete-policy": before-hook-creation
"helm.sh/hook": "pre-install,pre-delete,post-delete"
name: {{ template "px.hookClusterRole" . }}
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["patch", "get", "update", "list"]
---
kind: ClusterRoleBinding
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
annotations:
"helm.sh/hook-delete-policy": before-hook-creation
"helm.sh/hook": "pre-install,pre-delete,post-delete"
name: {{ template "px.hookClusterRoleBinding" . }}
subjects:
- kind: ServiceAccount
name: {{ template "px.hookServiceAccount" . }}
namespace: kube-system
roleRef:
kind: ClusterRole
name: {{ template "px.hookClusterRole" . }}
apiGroup: rbac.authorization.k8s.io
{{- end }}
# Please uncomment and specify values for these options as per your requirements.
drives: none # NOTE: This is a ";" separated list of drives, e.g. "/dev/sda;/dev/sdb;/dev/sdc". Defaults to using the -A switch.
usefileSystemDrive: false # true/false. Instructs PX to use an unmounted drive even if it has a filesystem.
usedrivesAndPartitions: false # Defaults to false. Change to true and PX will use unmounted drives and partitions.
journalDevice:
kvdb:
internalKVDB: false # internal KVDB
etcd:
credentials: none:none # Username and password for ETCD authentication in the form user:password
ca: none # Name of CA file for ETCD authentication. server.ca
cert: none # Name of certificate for ETCD authentication. Should be server.crt
key: none # Name of certificate key for ETCD authentication Should be server.key
consul:
token: none # ACL token value used for Consul authentication. (example: 398073a8-5091-4d9c-871a-bbbeb030d1f6)
dataInterface: none # Name of the interface <ethX>
managementInterface: none # Name of the interface <ethX>
isTargetOSCoreOS: false # Is your target OS CoreOS? Defaults to false.
pksInstall: false # installation on PKS (Pivotal Container Service)
AKSorEKSInstall: false # installation on AKS or EKS.
customRegistryURL:
registrySecret:
clusterName: mycluster # This is the default. Please change it to your cluster name.
secretType: none # Defaults to none, but can be AWS / KVDB / Vault.
envVars: none # NOTE: This is a ";" separated list of environment variables, e.g. MYENV1=myvalue1;MYENV2=myvalue2
stork: true # Use Stork https://docs.portworx.com/scheduler/kubernetes/stork.html for hyperconvergence.
storkVersion: 1.2.0
lighthouse: false
lighthouseVersion: 1.4.0
deployOnMaster: false # For POC only
csi: false # Enable CSI
serviceAccount:
hook:
create: true
name:
deploymentType: oci # accepts "oci" or "docker"
imageType: none
imageVersion: 1.5.1 # Version of the PX Image.
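# As a usage sketch, the defaults above can be overridden with a custom values
# file passed to "helm install -f". The file name "my-values.yaml" and every
# value shown are illustrative assumptions; keys mirror this values.yaml:

```yaml
# my-values.yaml -- hypothetical overrides for this chart
drives: "/dev/sdb;/dev/sdc"  # ";" separated list of drives to use
clusterName: prod-px-cluster # name for this Portworx cluster
lighthouse: true             # also deploy the Lighthouse UI
storkVersion: 1.2.0
```

# One might then install with Helm 2 syntax, e.g.
# "helm install --name my-release --namespace kube-system -f my-values.yaml ./portworx"
# (the chart path is an assumption; the chart must be installed into kube-system,
# per the Limitations section of the README).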