Commit 90a6a8d8 by Denise, committed by GitHub

Merge pull request #367 from trierra/px-release-106

px release 106
parents 6af89855 f1febc25
name: portworx
appVersion: 1.0.6
version: 1.0.6
description: A Helm chart for installing Portworx on Kubernetes.
keywords:
- Storage
- ICP
- persistent disk
- pvc
- cloud native storage
- persistent storage
- portworx
- amd64
home: https://portworx.com/
maintainers:
- name: harsh-px
email: harsh@portworx.com
- name: trierra
email: oksana@portworx.com
sources:
- https://github.com/portworx/helm
icon: https://raw.githubusercontent.com/portworx/helm/master/doc/media/k8s-porx.png
# Portworx
Use this Helm chart to deploy [Portworx](https://portworx.com/) and [Stork](https://docs.portworx.com/scheduler/kubernetes/stork.html) to your Kubernetes cluster.
## **Prerequisites**
Refer to the [Install Portworx on Kubernetes via Helm](https://docs.portworx.com/portworx-install-with-kubernetes/install-px-helm/#pre-requisites) page for the list of prerequisites.
## **Limitations**
* The Portworx Helm chart can only be deployed in the `kube-system` namespace, so use "kube-system" as the "Target namespace" during configuration (the CLI install example below passes the namespace explicitly).
## **Uninstalling the Chart**
You can uninstall Portworx using one of the following methods:
#### **1. Delete all the Kubernetes components associated with the chart and the release.**
> **Note:** The Portworx configuration files under the `/etc/pwx/` directory are preserved and will not be deleted.
To perform this operation, simply delete the application from the Apps page.
#### **2. Wipe your Portworx installation**
> **Note:** The commands in this section are disruptive and will lead to data loss. Please use caution.
See more details [here](https://docs.portworx.com/portworx-install-with-kubernetes/install-px-helm/#uninstall)
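If the chart was installed with the Helm CLI rather than through the Apps page, the release can be removed the same way. A minimal sketch, assuming Helm 2 and a release named `my-release` (the chart's pre- and post-delete hooks will clean up the `px/enabled` node labels):
```
# helm delete --purge my-release
```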
## **Documentation**
* [Portworx docs site](https://docs.portworx.com/install-with-other/rancher/rancher-2.x/#step-1-install-rancher)
* [Portworx interactive tutorials](https://docs.portworx.com/scheduler/kubernetes/px-k8s-interactive.html)
## **Installing the Chart using the CLI**
See the installation details [here](https://docs.portworx.com/portworx-install-with-kubernetes/install-px-helm/)
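A minimal sketch of a CLI install, assuming Helm 2, a local clone of this repository, and the chart living at `charts/portworx` (the release name and the `--set` values are placeholders; see `values.yaml` for the full list of options):
```
# git clone https://github.com/portworx/helm.git
# helm install --name my-release --namespace kube-system \
    --set clusterName=mycluster \
    ./helm/charts/portworx
```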
## **Installing Portworx on AWS**
See the installation details [here](https://docs.portworx.com/cloud-references/auto-disk-provisioning/aws)
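Cloud drives can be requested through the per-drive values this chart exposes. A sketch, assuming the `drive_1.aws.*` paths from this chart's `values.yaml` (the drive type and size are placeholders to adjust for your workload):
```
# helm install --name my-release --namespace kube-system \
    --set drive_1.aws.type=gp2 \
    --set drive_1.aws.size=1000 \
    ./helm/charts/portworx
```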
## **Giving your etcd certificates to Portworx using Kubernetes Secrets**
This is the recommended way of providing etcd certificates, as the certificates will be automatically available to new nodes joining the cluster.
* Create a Kubernetes secret:
* Copy all your etcd certificates and the key into a directory such as `etcd-secrets/` to create a Kubernetes secret from it. Make sure the file names are the same as the ones you specified in your configuration.
```
# ls -1 etcd-secrets/
etcd-ca.crt
etcd.crt
etcd.key
```
* Use `kubectl` to create the secret named `px-etcd-certs` from the above files:
```
# kubectl -n kube-system create secret generic px-etcd-certs --from-file=etcd-secrets/
```
* Notice that the secret has 3 keys, `etcd-ca.crt`, `etcd.crt`, and `etcd.key`, corresponding to the file names in the `etcd-secrets` folder. We will use these keys in the Portworx spec file to reference the certificates.
```
# kubectl -n kube-system describe secret px-etcd-certs
Name: px-etcd-certs
Namespace: kube-system
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
etcd-ca.crt: 1679 bytes
etcd.crt: 1680 bytes
etcd.key: 414 bytes
```
Once the above secret is created, proceed to the next steps.
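With the secret created, point the chart at the three keys so Portworx can locate the certificates. A sketch of the same install command as above, extended with the etcd values, assuming the `kvdb.etcd.ca`, `kvdb.etcd.cert`, and `kvdb.etcd.key` value paths match the `kvdb.etcd` block in this chart's `values.yaml` (the release name is a placeholder):
```
# helm install --name my-release --namespace kube-system \
    --set kvdb.etcd.ca=etcd-ca.crt \
    --set kvdb.etcd.cert=etcd.crt \
    --set kvdb.etcd.key=etcd.key \
    ./helm/charts/portworx
```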
# Portworx
[Portworx](https://portworx.com/) is a software-defined storage overlay that allows you to:
* Run containerized stateful applications that are highly available (HA) across multiple nodes, cloud instances, regions, data centers, or even clouds
* Migrate workflows between multiple clusters running across the same or hybrid clouds
* Run hyperconverged workloads where the data resides on the same host as the applications
* Have programmatic control over your storage resources
etcdType: Built-in
Your Release is named {{ .Release.Name | quote }}
Portworx Pods should be running on each node in your cluster.
Portworx creates a unified pool of the disks attached to your Kubernetes nodes. No further action is required; you are ready to consume Portworx volumes for your application data requirements.
For further information on using Portworx, refer to the following doc pages; a quick provisioning test follows the list.
- For dynamically provisioning volumes: https://docs.portworx.com/scheduler/kubernetes/dynamic-provisioning.html
- For preprovisioned volumes: https://docs.portworx.com/scheduler/kubernetes/preprovisioned-volumes.html
- To use Stork (Storage Orchestration Runtime for Kubernetes) for hyperconvergence and snapshots: https://docs.portworx.com/scheduler/kubernetes/stork.html
- For stateful application solutions using Portworx: https://docs.portworx.com/scheduler/kubernetes/k8s-px-app-samples.html
- For interactive tutorials on using Portworx on Kubernetes: https://docs.portworx.com/scheduler/kubernetes/px-k8s-interactive.html
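As a quick provisioning test, a PVC against one of the StorageClasses created by this chart (for example portworx-db-sc) should be bound automatically. A minimal sketch; the claim name and size are placeholders:
```
# cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: px-test-pvc
spec:
  storageClassName: portworx-db-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```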
{{- $customRegistryURL := .Values.customRegistryURL | default "none" }}
{{- $registrySecret := .Values.registrySecret | default "none" }}
apiVersion: batch/v1
kind: Job
metadata:
namespace: kube-system
name: px-hook-postdelete-unlabelnode
labels:
heritage: {{.Release.Service | quote }}
release: {{.Release.Name | quote }}
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
app.kubernetes.io/managed-by: {{.Release.Service | quote }}
app.kubernetes.io/instance: {{.Release.Name | quote }}
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": before-hook-creation
spec:
{{ if semverCompare ">= 1.8-0" .Capabilities.KubeVersion.GitVersion }}
backoffLimit: 0
{{ else }}
activeDeadlineSeconds: 30
{{ end }}
template:
spec:
{{- if not (eq $registrySecret "none") }}
imagePullSecrets:
- name: {{ $registrySecret }}
{{- end }}
restartPolicy: Never
serviceAccountName: {{ template "px.hookServiceAccount" . }}
containers:
- name: post-delete-job
{{- if eq $customRegistryURL "none" }}
image: "lachlanevenson/k8s-kubectl:{{ template "px.kubernetesVersion" . }}"
{{- else}}
image: "{{ $customRegistryURL }}/lachlanevenson/k8s-kubectl:{{ template "px.kubernetesVersion" . }}"
{{- end}}
args: ['label','nodes','--all','px/enabled-']
{{- $customRegistryURL := .Values.customRegistryURL | default "none" }}
{{- $registrySecret := .Values.registrySecret | default "none" }}
apiVersion: batch/v1
kind: Job
metadata:
namespace: kube-system
name: px-hook-predelete-nodelabel
labels:
heritage: {{.Release.Service | quote }}
release: {{.Release.Name | quote }}
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
app.kubernetes.io/managed-by: {{.Release.Service | quote }}
app.kubernetes.io/instance: {{.Release.Name | quote }}
annotations:
"helm.sh/hook": pre-delete
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": before-hook-creation
spec:
{{ if semverCompare ">= 1.8-0" .Capabilities.KubeVersion.GitVersion }}
backoffLimit: 0
{{ else }}
activeDeadlineSeconds: 30
{{ end }}
template:
spec:
{{- if not (eq $registrySecret "none") }}
imagePullSecrets:
- name: {{ $registrySecret }}
{{- end }}
serviceAccountName: {{ template "px.hookServiceAccount" . }}
restartPolicy: Never
containers:
- name: pre-delete-job
{{- if eq $customRegistryURL "none" }}
image: "lachlanevenson/k8s-kubectl:{{ template "px.kubernetesVersion" . }}"
{{- else}}
image: "{{ $customRegistryURL }}/lachlanevenson/k8s-kubectl:{{ template "px.kubernetesVersion" . }}"
{{- end}}
args: ['label','nodes','--all','px/enabled=remove','--overwrite']
{{- if or .Values.openshiftInstall .Values.AKSorEKSInstall (.Capabilities.KubeVersion.GitVersion | regexMatch "gke") }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: portworx-pvc-controller-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: portworx-pvc-controller-role
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["create","delete","get","list","update","watch"]
- apiGroups: [""]
resources: ["persistentvolumes/status"]
verbs: ["update"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "update", "watch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims/status"]
verbs: ["update"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["create", "delete", "get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["endpoints", "services"]
verbs: ["create", "delete", "get", "update"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch", "update"]
- apiGroups: [""]
resources: ["serviceaccounts"]
verbs: ["get", "create"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "update"]
---
kind: ClusterRoleBinding
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: portworx-pvc-controller-role-binding
subjects:
- kind: ServiceAccount
name: portworx-pvc-controller-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: portworx-pvc-controller-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
tier: control-plane
name: portworx-pvc-controller
namespace: kube-system
spec:
replicas: 3
# selector is required for apps/v1 Deployments
selector:
  matchLabels:
    name: portworx-pvc-controller
    tier: control-plane
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
name: portworx-pvc-controller
tier: control-plane
spec:
{{- if not (empty .Values.registrySecret) }}
imagePullSecrets:
- name: {{ .Values.registrySecret }}
{{- end }}
containers:
- command:
- kube-controller-manager
- --leader-elect=true
- --address=0.0.0.0
- --controllers=persistentvolume-binder
- --use-service-account-credentials=true
- --leader-elect-resource-lock=configmaps
image: "{{ template "px.getk8sImages" . }}/kube-controller-manager-amd64:{{ template "px.kubernetesVersion" . }}"
livenessProbe:
failureThreshold: 8
httpGet:
host: 127.0.0.1
path: /healthz
port: 10252
scheme: HTTP
initialDelaySeconds: 15
timeoutSeconds: 15
name: portworx-pvc-controller-manager
resources:
requests:
cpu: 200m
hostNetwork: true
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "name"
operator: In
values:
- portworx-pvc-controller
topologyKey: "kubernetes.io/hostname"
serviceAccountName: portworx-pvc-controller-account
{{- end }}
{{- if .Values.csi }}
{{- $customRegistryURL := .Values.customRegistryURL | default "none" }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: px-csi-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: px-csi-role
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete", "update"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: px-csi-role-binding
subjects:
- kind: ServiceAccount
name: px-csi-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: px-csi-role
apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
name: px-csi-service
namespace: kube-system
spec:
clusterIP: None
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
name: px-csi-ext
namespace: kube-system
spec:
serviceName: "px-csi-service"
replicas: 1
# selector is required for apps/v1 StatefulSets
selector:
  matchLabels:
    app: px-csi-driver
template:
metadata:
labels:
app: px-csi-driver
spec:
serviceAccount: px-csi-account
containers:
- name: csi-external-provisioner
imagePullPolicy: Always
image: "{{ template "px.getcsiProvisioner" . }}/csi-provisioner:v0.2.0"
args:
- "--v=5"
- "--provisioner=com.openstorage.pxd"
- "--csi-address=$(ADDRESS)"
env:
- name: ADDRESS
value: /csi/csi.sock
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: csi-attacher
imagePullPolicy: Always
image: "{{ template "px.getcsiImages" . }}/csi-attacher:v0.2.0"
args:
- "--v=5"
- "--csi-address=$(ADDRESS)"
env:
- name: ADDRESS
value: /csi/csi.sock
volumeMounts:
- name: socket-dir
mountPath: /csi
volumes:
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/com.openstorage.pxd
type: DirectoryOrCreate
{{- end }}
{{- if .Values.lighthouse -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: px-lh-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-lh-role
namespace: kube-system
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "create", "update"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "update"]
- apiGroups: [""]
resources: ["nodes", "services"]
verbs: ["get", "list", "watch"]
- apiGroups: ["stork.libopenstorage.org"]
resources: ["clusterpairs", "migrations", "groupvolumesnapshots"]
verbs: ["get", "list", "create", "update", "delete"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-lh-role-binding
namespace: kube-system
subjects:
- kind: ServiceAccount
name: px-lh-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: px-lh-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
name: px-lighthouse
namespace: kube-system
labels:
tier: px-web-console
spec:
type: NodePort
ports:
- name: http
port: 80
- name: https
port: 443
selector:
tier: px-web-console
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: px-lighthouse
namespace: kube-system
labels:
tier: px-web-console
spec:
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
tier: px-web-console
replicas: 1
template:
metadata:
labels:
tier: px-web-console
spec:
initContainers:
- name: config-init
image: "{{ template "px.getLighthouseImages" . }}/lh-config-sync:{{.Values.lighthouseSyncVersion}}"
imagePullPolicy: Always
args:
- "init"
volumeMounts:
- name: config
mountPath: /config/lh
containers:
- name: px-lighthouse
image: "{{ template "px.getLighthouseImages" . }}/px-lighthouse:{{ required "A valid lighthouse image version is required" .Values.lighthouseVersion}}"
imagePullPolicy: Always
args: [ "-kubernetes", "true" ]
ports:
- containerPort: 80
- containerPort: 443
volumeMounts:
- name: config
mountPath: /config/lh
- name: config-sync
image: "{{ template "px.getLighthouseImages" . }}/lh-config-sync:{{.Values.lighthouseSyncVersion}}"
imagePullPolicy: Always
args:
- "sync"
volumeMounts:
- name: config
mountPath: /config/lh
- name: stork-connector
image: "{{ template "px.getLighthouseImages" . }}/lh-stork-connector:{{.Values.lighthouseStorkConnectorVersion}}"
imagePullPolicy: Always
serviceAccountName: px-lh-account
volumes:
- name: config
emptyDir: {}
{{- end -}}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: px-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: node-get-put-list-role
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["watch", "get", "update", "list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["delete", "get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["persistentvolumeclaims", "persistentvolumes"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "update", "create"]
- apiGroups: ["extensions"]
resources: ["podsecuritypolicies"]
resourceNames: ["privileged"]
verbs: ["use"]
- apiGroups: ["portworx.io"]
resources: ["volumeplacementstrategies"]
verbs: ["get", "list"]
- apiGroups: ["stork.libopenstorage.org"]
resources: ["backuplocations"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: node-role-binding
subjects:
- kind: ServiceAccount
name: px-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: node-get-put-list-role
apiGroup: rbac.authorization.k8s.io
kind: Service
apiVersion: v1
metadata:
name: portworx-service
namespace: kube-system
labels:
name: portworx
spec:
selector:
name: portworx
type: ClusterIP
ports:
- name: px-api
protocol: TCP
port: 9001
targetPort: 9001
- name: px-kvdb
protocol: TCP
port: 9019
targetPort: 9019
- name: px-sdk
protocol: TCP
port: 9020
targetPort: 9020
- name: px-rest-gateway
protocol: TCP
port: 9021
targetPort: 9021
---
kind: Service
apiVersion: v1
metadata:
name: portworx-api
namespace: kube-system
labels:
name: portworx-api
spec:
selector:
name: portworx-api
type: ClusterIP
ports:
- name: px-api
protocol: TCP
port: 9001
targetPort: 9001
- name: px-sdk
protocol: TCP
port: 9020
targetPort: 9020
- name: px-rest-gateway
protocol: TCP
port: 9021
targetPort: 9021
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: portworx-db-sc
provisioner: kubernetes.io/portworx-volume
parameters:
repl: "3"
io_profile: "db"
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: portworx-db2-sc
provisioner: kubernetes.io/portworx-volume
parameters:
repl: "3"
block_size: "512b"
io_profile: "db"
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: portworx-shared-sc
provisioner: kubernetes.io/portworx-volume
parameters:
repl: "3"
shared: "true"
---
#
# NULL StorageClass that documents all possible
# Portworx StorageClass parameters
#
# Please refer to : https://docs.portworx.com/scheduler/kubernetes/dynamic-provisioning.html
#
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: portworx-null-sc
annotations:
params/docs: 'https://docs.portworx.com/scheduler/kubernetes/dynamic-provisioning.html'
params/fs: "Filesystem to be laid out: none|xfs|ext4 "
params/block_size: "Block size"
params/repl: "Replication factor for the volume: 1|2|3"
params/shared: "Flag to create a globally shared namespace volume which can be used by multiple pods: true|false"
params/priority_io: "IO Priority: low|medium|high"
params/io_profile: "IO Profile can be used to override the I/O algorithm Portworx uses for the volumes. Supported values are [db](/maintain/performance/tuning.html#db), [sequential](/maintain/performance/tuning.html#sequential), [random](/maintain/performance/tuning.html#random), [cms](/maintain/performance/tuning.html#cms)"
params/group: "The group a volume should belong to. Portworx will restrict replication sets of volumes of the same group on different nodes. If the force group option 'fg' is set to true, the volume group rule will be strictly enforced. By default, it is not strictly enforced."
params/fg: "This option enforces the volume group policy. If a volume belonging to a group cannot find nodes for its replication sets which don't have other volumes of the same group, the volume creation will fail."
params/label: "List of comma-separated name=value pairs to apply to the Portworx volume"
params/nodes: "Comma-separated Portworx Node IDs to use for replication sets of the volume"
params/aggregation_level: "Specifies the number of replication sets the volume can be aggregated from"
params/snap_schedule: "Snapshot schedule. The accepted formats are: periodic=<mins>,<snaps-to-keep>; daily=<hh:mm>,<snaps-to-keep>; weekly=<weekday@hh:mm>,<snaps-to-keep>; monthly=<day@hh:mm>,<snaps-to-keep>. <snaps-to-keep> is optional; periodic, daily, weekly and monthly schedules keep the last 5, 7, 5 and 12 snapshots by default, respectively."
params/sticky: "Flag to create sticky volumes that cannot be deleted until the flag is disabled"
params/journal: "Flag to indicate if you want to use journal device for the volume's metadata. This will use the journal device that you used when installing Portworx. As of PX version 1.3, it is recommended to use a journal device to absorb PX metadata writes"
provisioner: kubernetes.io/portworx-volume
parameters:
{{- if .Values.stork }}
{{- $isCoreOS := .Values.isTargetOSCoreOS | default false }}
{{- $customRegistryURL := .Values.customRegistryURL | default "none" }}
{{- $registrySecret := .Values.registrySecret | default "none" }}
apiVersion: v1
kind: ConfigMap
metadata:
name: stork-config
namespace: kube-system
data:
policy.cfg: |-
{
"kind": "Policy",
"apiVersion": "v1",
{{- if semverCompare "< 1.10-0" .Capabilities.KubeVersion.GitVersion }}
"predicates": [
{{- if semverCompare "< 1.9-0" .Capabilities.KubeVersion.GitVersion }}
{"name": "NoVolumeNodeConflict"},
{{- end}}
{"name": "MaxAzureDiskVolumeCount"},
{"name": "NoVolumeZoneConflict"},
{"name": "PodToleratesNodeTaints"},
{"name": "CheckNodeMemoryPressure"},
{"name": "MaxEBSVolumeCount"},
{"name": "MaxGCEPDVolumeCount"},
{"name": "MatchInterPodAffinity"},
{"name": "NoDiskConflict"},
{"name": "GeneralPredicates"},
{"name": "CheckNodeDiskPressure"}
],
"priorities": [
{"name": "NodeAffinityPriority", "weight": 1},
{"name": "TaintTolerationPriority", "weight": 1},
{"name": "SelectorSpreadPriority", "weight": 1},
{"name": "InterPodAffinityPriority", "weight": 1},
{"name": "LeastRequestedPriority", "weight": 1},
{"name": "BalancedResourceAllocation", "weight": 1},
{"name": "NodePreferAvoidPodsPriority", "weight": 1}
],
{{- end}}
"extenders": [
{
"urlPrefix": "http://stork-service.kube-system:8099",
"apiVersion": "v1beta1",
"filterVerb": "filter",
"prioritizeVerb": "prioritize",
"weight": 5,
"enableHttps": false,
"nodeCacheCapable": false
}
]
}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: stork-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: stork-role
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: stork-role-binding
subjects:
- kind: ServiceAccount
name: stork-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: stork-role
apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
name: stork-service
namespace: kube-system
spec:
selector:
name: stork
ports:
- protocol: TCP
port: 8099
targetPort: 8099
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: volumeplacementstrategies.portworx.io
spec:
group: portworx.io
versions:
- name: v1beta2
served: true
storage: true
- name: v1beta1
served: false
storage: false
scope: Cluster
names:
plural: volumeplacementstrategies
singular: volumeplacementstrategy
kind: VolumePlacementStrategy
shortNames:
- vps
- vp
---
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
tier: control-plane
name: stork
namespace: kube-system
spec:
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
replicas: 3
selector:
matchLabels:
name: stork
tier: control-plane
template:
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
name: stork
tier: control-plane
spec:
{{- if not (eq $registrySecret "none") }}
imagePullSecrets:
- name: {{ $registrySecret }}
{{- end }}
containers:
- command:
- /stork
- --driver=pxd
- --verbose
- --leader-elect=true
imagePullPolicy: Always
image: {{ template "px.getStorkImage" . }}:{{ required "A valid Image tag is required in the SemVer format" .Values.storkVersion }}
resources:
requests:
cpu: '0.1'
name: stork
hostPID: false
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "name"
operator: In
values:
- stork
topologyKey: "kubernetes.io/hostname"
serviceAccountName: stork-account
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: stork-snapshot-sc
provisioner: stork-snapshot
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: stork-scheduler-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: stork-scheduler-role
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "update"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "watch"]
- apiGroups: ["", "events.k8s.io"]
resources: ["events"]
verbs: ["create", "patch", "update"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["create"]
- apiGroups: [""]
resourceNames: ["kube-scheduler"]
resources: ["endpoints"]
verbs: ["delete", "get", "patch", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["delete", "get", "list", "watch"]
- apiGroups: [""]
resources: ["bindings", "pods/binding"]
verbs: ["create"]
- apiGroups: [""]
resources: ["pods/status"]
verbs: ["patch", "update"]
- apiGroups: [""]
resources: ["replicationcontrollers", "services"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps", "extensions"]
resources: ["replicasets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
resources: ["statefulsets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["policy"]
resources: ["poddisruptionbudgets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims", "persistentvolumes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses", "csinodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["create", "update", "get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
name: stork-scheduler-role-binding
subjects:
- kind: ServiceAccount
name: stork-scheduler-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: stork-scheduler-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
component: scheduler
tier: control-plane
name: stork-scheduler
namespace: kube-system
spec:
replicas: 3
selector:
matchLabels:
component: scheduler
tier: control-plane
template:
metadata:
labels:
component: scheduler
tier: control-plane
name: stork-scheduler
spec:
containers:
- command:
- /usr/local/bin/kube-scheduler
- --address=0.0.0.0
- --leader-elect=true
- --scheduler-name=stork
- --policy-configmap=stork-config
- --policy-configmap-namespace=kube-system
- --lock-object-name=stork-scheduler
image: "{{ template "px.getk8sImages" . }}/kube-scheduler-amd64:{{ template "px.kubernetesVersion" . }}"
livenessProbe:
httpGet:
path: /healthz
port: 10251
initialDelaySeconds: 15
name: stork-scheduler
readinessProbe:
httpGet:
path: /healthz
port: 10251
resources:
requests:
cpu: '0.1'
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "name"
operator: In
values:
- stork-scheduler
topologyKey: "kubernetes.io/hostname"
hostPID: false
serviceAccountName: stork-scheduler-account
{{- end }}
{{- if .Values.serviceAccount.hook.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "px.hookServiceAccount" . }}
namespace: kube-system
annotations:
"helm.sh/hook-delete-policy": before-hook-creation
"helm.sh/hook": "pre-delete,post-delete"
labels:
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{.Release.Service | quote }}
app.kubernetes.io/instance: {{.Release.Name | quote }}
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
---
kind: ClusterRole
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
annotations:
"helm.sh/hook-delete-policy": before-hook-creation
"helm.sh/hook": "pre-delete,post-delete"
name: {{ template "px.hookClusterRole" . }}
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["patch", "get", "update", "list"]
---
kind: ClusterRoleBinding
apiVersion: {{ template "rbac.apiVersion" . }}
metadata:
annotations:
"helm.sh/hook-delete-policy": before-hook-creation
"helm.sh/hook": "pre-delete,post-delete"
name: {{ template "px.hookClusterRoleBinding" . }}
subjects:
- kind: ServiceAccount
name: {{ template "px.hookServiceAccount" . }}
namespace: kube-system
roleRef:
kind: ClusterRole
name: {{ template "px.hookClusterRole" . }}
apiGroup: rbac.authorization.k8s.io
{{- end }}
# Please uncomment and specify values for these options as per your requirements.
kvdb:
ownEtcdOption: none
etcdAuth: none
etcdType: none # KVDB type
etcd:
credentials: none:none # Username and password for ETCD authentication in the form user:password
ca: none # Name of CA file for ETCD authentication. Should be server.ca
cert: none # Name of certificate for ETCD authentication. Should be server.crt
key: none # Name of certificate key for ETCD authentication. Should be server.key
consul:
token: none # ACL token value used for Consul authentication. (example: 398073a8-5091-4d9c-871a-bbbeb030d1f6)
region: none # US or EU regions for Portworx hosted etcds
dataInterface: none # Name of the interface <ethX>
managementInterface: none # Name of the interface <ethX>
platformOptions: none # AKS, EKS or GKE platforms
customRegistryURL:
registrySecret:
clusterName: mycluster # This is the default; please change it to your cluster name.
secretType: k8s # Defaults to None, but can be AWS / KVDB / Vault.
envVars: none # NOTE: This is a ";" separated list of environment variables. For example: MYENV1=myvalue1;MYENV2=myvalue2
stork: true # Use Stork https://docs.portworx.com/scheduler/kubernetes/stork.html for hyperconvergence.
storkVersion: 2.3.1
lighthouse: true
lighthouseVersion: 2.0.5
lighthouseSyncVersion: 2.0.5
lighthouseStorkConnectorVersion: 2.0.5
deployOnMaster: false # For POC only
csi: false # Enable CSI
serviceAccount:
hook:
create: true
name:
deploymentType: oci # accepts "oci" or "docker"
imageType: none #
imageVersion: 2.3.4 # Version of the PX Image.
result: none
environment: none
onpremStorage: none
maxStorageNodes: none
journalDevice: none
usefileSystemDrive: false # true/false. Instructs PX to use an unmounted drive even if it has a filesystem.
usedrivesAndPartitions: false # Use unmounted disks even if they have a partition or filesystem on them. PX will never use a drive or partition that is mounted. (useDrivesAndPartitions)
provider: none
deviceConfig: none
drive_1:
aws:
type: none
size: none
iops: none
gc:
type: standard
size: 1000
drive_2:
aws:
type: none
size: none
iops: none
gc:
type: none
size: none
drive_3:
aws:
type: none
size: none
iops: none
gc:
type: none
size: none
drive_4:
aws:
type: none
size: none
iops: none
gc:
type: none
size: none
drive_5:
aws:
type: none
size: none
iops: none
gc:
type: none
size: none
drive_6:
aws:
type: none
size: none
iops: none
gc:
type: none
size: none
drive_7:
aws:
type: none
size: none
iops: none
gc:
type: none
size: none
drive_8:
aws:
type: none
size: none
iops: none
gc:
type: none
size: none
drive_9:
aws:
type: none
size: none
iops: none
gc:
type: none
size: none
drive_10:
aws:
type: none
size: none
iops: none
gc:
type: none
size: none
existingDisk1: none
existingDisk2: none
existingDisk3: none
existingDisk4: none
existingDisk5: none