Commit 18fe8a48 by Guangbo Chen

remove zetcd chart

parent 5a0e217c
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
apiVersion: v1
description: CoreOS zetcd Helm chart for Kubernetes
name: zetcd
version: 0.1.7
appVersion: 0.0.3
home: https://github.com/coreos/zetcd
sources:
- https://github.com/coreos/zetcd
maintainers:
- name: hunter
# CoreOS zetcd chart
This chart runs zetcd, a ZooKeeper "personality" for etcd.
## Introduction
This chart bootstraps zetcd and, optionally, an etcd-operator to provision the backing etcd cluster.
## Official Documentation
Official project documentation can be found [here](https://github.com/coreos/zetcd).
## Prerequisites
- Kubernetes 1.4+ with Beta APIs enabled
- __Suggested:__ PV provisioner support in the underlying infrastructure, to enable etcd backups
## Installing the Chart
To install the chart with the release name `my-release`:
```bash
$ helm install stable/zetcd --name my-release
```
__Note__: By default, etcd-operator is installed together with zetcd. `cluster.enabled` is set on install, but it has no effect: the etcd cluster TPR must first be registered by the operator, so this option is ignored during the initial `helm install`. Instead, the release can be upgraded after installation to launch the etcd cluster pods, as shown below.
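For example, assuming the release was installed as `my-release` above, a follow-up upgrade against the same chart is sufficient to launch the etcd cluster pods once the operator is running:
```bash
$ helm upgrade my-release stable/zetcd
```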
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
```bash
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the chart EXCEPT the persistent volume.
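If etcd backups were enabled via etcd-operator, the volumes they created are left behind. As a rough sketch (the exact resource names depend on the etcd-operator backup configuration), leftover volumes can be inspected and removed manually:
```bash
$ kubectl get pv,pvc --namespace <release-namespace>
$ kubectl delete pvc <backup-pvc-name> --namespace <release-namespace>
```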
## Updating
Updating the TPR resource will not result in the cluster being updated until `kubectl apply` for
TPRs is fixed; see [kubernetes/issues/29542](https://github.com/kubernetes/kubernetes/issues/29542).
Workaround options are documented [here](https://github.com/coreos/etcd-operator#resize-an-etcd-cluster).
## Configuration
The following table lists the configurable parameters of the zetcd chart and their default values. See the etcd-operator chart for additional configuration options.
| Parameter | Description | Default |
| ------------------------------------------------- | -------------------------------------------------------------------- | ---------------------------------------------- |
| `replicaCount` | Number of zetcd replicas to create | `1` |
| `image.repository` | zetcd container image | `quay.io/coreos/zetcd` |
| `image.tag` | zetcd container image tag | `v0.0.3` |
| `image.pullPolicy` | zetcd container image pull policy | `IfNotPresent` |
| `resources.limits.cpu` | CPU limit per zetcd pod | |
| `resources.limits.memory` | Memory limit per zetcd pod | |
| `resources.requests.cpu` | CPU request per zetcd pod | |
| `resources.requests.memory` | Memory request per zetcd pod | |
| `nodeSelector` | Node labels for pod assignment |`{}` |
| `etcd.operatorEnabled` | Whether to use etcd-operator to launch a cluster | `true` |
| `etcd.endpoints` | Existing etcd endpoints to be used when etcd-operator is disabled | `localhost:2379` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example:
```bash
$ helm install --name my-release --set image.tag=v0.0.3 stable/zetcd
```
Alternatively, a YAML file that specifies the values for the parameters can be provided while
installing the chart. For example:
```bash
$ helm install --name my-release --values values.yaml stable/zetcd
```
questions:
- variable: replicaCount
default: "1"
description: "Replica count"
type: string
required: true
label: Replicas
dependencies:
- name: etcd-operator
repository: https://kubernetes-charts.storage.googleapis.com/
version: 0.4.3
digest: sha256:769c1306d6c388ec19d119171b0c37c27a24ad93fc239506e3f4110563f8af2c
generated: 2017-09-03T14:47:23.883138886-04:00
dependencies:
- name: etcd-operator
version: 0.4.3
repository: https://kubernetes-charts.storage.googleapis.com/
condition: etcd.operatorEnabled
{{- if and .Release.IsInstall .Values.etcd.operatorEnabled -}}
The etcd cluster has been requested, but the TPR must be registered by etcd-operator before the etcd deployment can start.
Upgrading this zetcd chart will trigger the TPR, e.g.:
helm upgrade {{ .Release.Name }} stable/zetcd
{{ end -}}
1. Get the zetcd endpoint by running these commands:
{{- if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "zetcd.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get svc -w {{ template "zetcd.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "zetcd.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.externalPort }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "zetcd.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:{{ .Values.service.internalPort }}
{{- end }}
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "zetcd.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "zetcd.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ template "zetcd.fullname" . }}
labels:
app: {{ template "zetcd.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
template:
metadata:
labels:
app: {{ template "zetcd.name" . }}
release: {{ .Release.Name }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- "/usr/local/bin/zetcd"
- "-zkaddr"
- "0.0.0.0:{{ .Values.service.internalPort }}"
{{- if .Values.etcd.operatorEnabled }}
- "-endpoints"
- "{{ index .Values "etcd-operator" "cluster" "name" }}-client:2379"
{{- else }}
- "-endpoints"
- "{{ .Values.etcd.endpoints }}"
{{- end }}
ports:
- containerPort: {{ .Values.service.internalPort }}
livenessProbe:
tcpSocket:
port: {{ .Values.service.internalPort }}
readinessProbe:
tcpSocket:
port: {{ .Values.service.internalPort }}
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "zetcd.fullname" . }}
labels:
app: {{ template "zetcd.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.externalPort }}
targetPort: {{ .Values.service.internalPort }}
protocol: TCP
name: {{ .Values.service.name }}
selector:
app: {{ template "zetcd.name" . }}
release: {{ .Release.Name }}
# Default values for zetcd.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: quay.io/coreos/zetcd
tag: v0.0.3
pullPolicy: IfNotPresent
service:
name: zetcd
type: ClusterIP
externalPort: 2181
internalPort: 2181
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
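## Etcd connection settings.
## When operatorEnabled is true, the bundled etcd-operator subchart launches and manages the etcd cluster.
## When operatorEnabled is false, zetcd connects to the existing endpoints listed below.
##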
etcd:
operatorEnabled: true
endpoints: localhost:2379
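## Values passed through to the etcd-operator subchart.
## Note: cluster.enabled has no effect on the initial install; see the README for details.
##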
etcd-operator:
cluster:
enabled: true