Unverified commit b44889ca by Denise, committed by GitHub

Merge pull request #284 from guangbochen/fluentd2.3

Add Fluentd-aggregator 0.3.1
parents a10682fa 22798c4d
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
apiVersion: v1
description: A Fluentd aggregator Helm chart that continuously receives events from the log forwarders.
icon: file://../fluentd-logo.png
name: fluentd-aggregator
version: 0.3.1
appVersion: v1.6.3
home: https://www.fluentd.org/
sources:
- https://www.fluentd.org/
maintainers:
- name: guangbochen
email: support@rancher.com
## Configuration
The following table lists the configurable parameters of the fluentd-aggregator chart and their default values.
| Parameter | Description | Default |
| ---------------------------------- | ------------------------------------------ | ---------------------------------------------------------- |
| `image.repository` | Image | `ranchercharts/fluentd-aggregator` |
| `image.tag` | Image tag | `1.6.3` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `configMaps` | Fluentd configmaps | `default conf files` |
| `env` | List of environment variables that are added to the fluentd pods | `` |
| `nodeSelector` | Optional statefulset nodeSelector | `{}` |
| `resources.limits.cpu` | CPU limit | `500m` |
| `resources.limits.memory` | Memory limit | `200Mi` |
| `resources.requests.cpu` | CPU request | `100m` |
| `resources.requests.memory` | Memory request | `200Mi` |
| `service` | Service definition | `{}` |
| `service.type` | Service type (ClusterIP/NodePort) | `ClusterIP` |
| `service.ports` | List of service ports dict [{name:...}...] | Not Set |
| `service.ports[].name` | One of service ports name | Not Set |
| `service.ports[].port` | Service port | Not Set |
| `service.ports[].nodePort` | NodePort port(when service.type is NodePort) | Not Set |
| `service.ports[].protocol` | Service protocol(optional, can be TCP/UDP) | Not Set |
| `tolerations` | Optional statefulset tolerations | `[]` |
| `annotations` | Optional statefulset annotations | `NULL` |
| `persistence.enabled` | Enable persistence using PVC | `false` |
| `persistence.storageClass` | PVC Storage Class | `nil` (uses alpha storage class annotation) |
| `persistence.accessMode` | PVC Access Mode | `ReadWriteOnce` |
| `persistence.size` | PVC Storage Request | `10Gi` |
| `extraPersistence.enabled` | Enable extra persistence using PVC | `false` |
| `extraPersistence.storageClass` | PVC extra Storage Class | `nil` (uses alpha storage class annotation) |
| `extraPersistence.accessMode` | PVC extra Access Mode | `ReadWriteOnce` |
| `extraPersistence.size` | PVC extra Storage Request | `10Gi` |
| `extraPersistence.mountPath` | PVC extra Mount Path | `/extra` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
$ helm install --name my-release \
  --set image.tag=1.6.3,service.type=NodePort \
  stable/fluentd-aggregator
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
$ helm install --name my-release -f values.yaml stable/fluentd-aggregator
```
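For reference, a minimal values file built from the parameters in the table above might look like the following sketch (the values shown are illustrative, not recommendations):
```yaml
# Example values.yaml for fluentd-aggregator (illustrative values only)
image:
  repository: ranchercharts/fluentd-aggregator
  tag: 1.6.3
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  ports:
    - name: forward-input
      protocol: TCP
      containerPort: 24224

persistence:
  enabled: true
  size: 10Gi
```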
# Fluentd Aggregator
Fluentd log aggregators are statefulsets that continuously receive events from the log forwarders (Fluentd daemonsets). They buffer the events and periodically upload the data to the cloud or to the user's logging system.
### This chart includes the following pre-installed additional Fluentd plugins. Fluentd's core plugins are enabled by [default](https://docs.fluentd.org/v1.0/articles/filter-plugin-overview):
### Output Plugins:
[Elasticsearch](https://github.com/uken/fluent-plugin-elasticsearch) / [Splunk](https://github.com/fluent/fluent-plugin-splunk) / [Kafka](https://github.com/fluent/fluent-plugin-kafka) / [Remote Syslog](https://github.com/dlackty/fluent-plugin-remote_syslog) / [Kinesis](https://github.com/awslabs/aws-fluent-plugin-kinesis) / [AWS S3](https://github.com/fluent/fluent-plugin-s3)
### Filter Plugins:
[Rewrite Tag Filter](https://github.com/fluent/fluent-plugin-rewrite-tag-filter) / [Record Modifier](https://github.com/repeatedly/fluent-plugin-record-modifier) / [Concat](https://github.com/fluent-plugins-nursery/fluent-plugin-concat) / [Fields Parser](https://github.com/tomas-zemres/fluent-plugin-fields-parser)
### Parser Plugins:
[Grok parser](https://github.com/fluent/fluent-plugin-grok-parser) / [Multi Format Parser](https://github.com/repeatedly/fluent-plugin-multi-format-parser)
### Formatter Plugins:
[Formatter Sprintf](https://github.com/toyama0919/fluent-plugin-formatter_sprintf)
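These plugins can be wired in through the chart's `configMaps` values. As a minimal sketch using the bundled Record Modifier plugin (the `aggregator_release` field below is purely illustrative), a custom filter section could be supplied like this:
```yaml
# values.yaml fragment: a custom filter.conf rendered into the aggregator's ConfigMap
configMaps:
  filter.conf: |
    <filter **>
      @type record_modifier
      <record>
        # illustrative field; replace with whatever metadata you want to attach
        aggregator_release my-release
      </record>
    </filter>
```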
## Fault Tolerant and Persistent Storage
To make the aggregator fault tolerant, enable persistent storage so that buffered data is stored on disk. After Fluentd recovers, it will try to send the buffered data to the destination again.
Please note that the data will be lost if the buffer file is broken due to I/O errors. The data will also be lost if the disk is full, since there is nowhere left to buffer new data.
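For example, persistence could be enabled at install time with something like the following (a sketch; the release name is a placeholder and the `persistence.*` parameters come from the configuration table above):
```console
$ helm install --name my-release \
  --set persistence.enabled=true \
  --set persistence.size=10Gi \
  stable/fluentd-aggregator
```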
### Limitation
`Caution:` the file buffer implementation depends on the characteristics of the local file system. Do not use a file buffer on a remote file system such as NFS, GlusterFS or HDFS; we observed major data loss with remote file systems.
labels:
io.cattle.role: project # options are cluster/project
questions:
- variable: defaultImage
default: true
description: "Use default Docker image"
label: Use Default Image
type: boolean
show_subquestion_if: false
group: "Container Images"
subquestions:
- variable: image.repository
default: "ranchercharts/fluentd-aggregator"
description: "Fluentd image name"
type: string
label: Fluentd Image Name
- variable: image.tag
default: "1.6.3"
description: "Fluentd image tag"
type: string
label: Image Tag
- variable: replicas
default: 1
description: "fluentd replicas"
type: int
required: true
label: Fluentd Replicas
group: "Fluentd Settings"
min: 1
max: 99
- variable: service.type
default: "ClusterIP"
description: "Fluentd Service type"
type: enum
options:
- "ClusterIP"
- "NodePort"
required: true
label: Fluentd Service Type
group: "Fluentd Settings"
- variable: persistence.enabled
default: false
description: "Enable persistent volume for fluentd aggregator"
type: boolean
required: true
label: Fluentd Persistent Volume Enabled
show_subquestion_if: true
group: "Fluentd Settings"
subquestions:
- variable: persistence.size
default: "10Gi"
description: "Fluntd Persistent Volume Size"
type: string
label: Fluntd Volume Size
- variable: persistence.storageClass
default: ""
description: "If undefined or null, uses the default StorageClass. Default to null"
type: storageclass
label: Default StorageClass for Fluntd
- variable: extraPersistence.enabled
default: false
description: "Enable extra persistent volume for fluentd aggregator"
type: boolean
required: true
label: Fluentd extra Persistent Volume Enabled
show_subquestion_if: true
group: "Fluentd Settings"
subquestions:
- variable: extraPersistence.size
default: "10Gi"
description: "Fluntd extra Persistent Volume Size"
type: string
label: Fluntd extra Volume Size
- variable: extraPersistence.mountPath
default: "/extra"
description: "Fluntd extra Persistent Volume Mount Path"
type: string
label: Fluntd extra Volume Size
- variable: extraPersistence.storageClass
default: ""
description: "If undefined or null, uses the default StorageClass. Default to null"
type: storageclass
label: Default StorageClass for Fluntd
# output configs
- variable: output.type
default: "elasticsearch"
description: "config the fluentd output type"
type: enum
label: Fluentd Output Type
required: true
group: "Output Configs"
options:
- "elasticsearch"
- "splunk_hec"
- "kafka"
- "syslog"
- "custom"
- variable: output.flushInterval
default: "5s"
description: "How often buffered logs would be flushed"
type: string
label: Flush Interval
group: "Output Configs"
required: true
- variable: env.OUTPUT_ES_HOSTS
default: 'http://elasticsearch:9200'
description: "Endpoint should start with \"http://\" or \"https://\"."
type: string
label: Elasticsearch Endpoint
group: "Output Configs"
show_if: "output.type=elasticsearch"
required: true
- variable: env.OUTPUT_ES_PREFIX
default: "k8s"
description: "Index patterns are used to generate Elacticsearch index"
type: string
label: Elasticsearch Index Prefix
required: true
group: "Output Configs"
show_if: "output.type=elasticsearch"
- variable: env.OUTPUT_ES_DATEFORMAT
default: "%Y.%m.%d"
description: "The strftime format to generate index target index"
type: enum
label: Elasticsearch Index Dateformat
group: "Output Configs"
show_if: "output.type=elasticsearch"
required: true
options:
- "%Y.%m.%d"
- "%Y.%m."
- "%Y."
- variable: env.OUTPUT_SPLUNK_HOST
default: ""
description: "e.g. 192.168.1.10"
type: string
label: Splunk Endpoint
required: true
group: "Output Configs"
show_if: "output.type=splunk_hec"
- variable: env.OUTPUT_SPLUNK_PORT
default: "9200"
description: "The splunk port"
type: string
label: Splunk Port
required: true
group: "Output Configs"
show_if: "output.type=splunk_hec"
- variable: env.OUTPUT_SPLUNK_TOKEN
default: ""
description: "Tokens are entities that let logging agents and HTTP clients connect to the HEC input"
type: string
label: Splunk Token
required: true
group: "Output Configs"
show_if: "output.type=splunk_hec"
- variable: env.OUTPUT_SPLUNK_SOURCE_TYPE
default: ""
description: "A default field that identifies the source of an event, that is, where the event originated"
type: string
label: Splunk Source
group: "Output Configs"
show_if: "output.type=splunk_hec"
- variable: env.OUTPUT_SPLUNK_INDEX
default: ""
description: "The index you specify here must within the list of this token’s allowed indexes"
type: string
label: Splunk Index
group: "Output Configs"
show_if: "output.type=splunk_hec"
- variable: env.OUTPUT_SPLUNK_ACK
default: false
description: "Enable/Disable Indexer acknowledgement. When this is set true, channel parameter is required"
type: boolean
label: Enable/Disable Indexer Acknowledgement
required: true
group: "Output Configs"
show_if: "output.type=splunk_hec"
show_subquestion_if: true
subquestions:
- variable: env.OUTPUT_SPLUNK_CHANNEL
default: ""
description: "This is used as channel identifier. When you set use_ack or raw, this parameter is required."
type: string
label: Splunk ACK Channel
required: true
# kafka config
- variable: env.OUTPUT_KAFKA_HOST_TYPE
default: "zookeeper"
description: "Kafka zookeeper endpoints: e.g. https://192.168.1.10:9200"
type: enum
label: Kafka Output Endpoint Type
required: true
group: "Output Configs"
show_if: "output.type=kafka"
options:
- "zookeeper"
- "brokers"
- variable: env.OUTPUT_KAFKA_ZK_HOSTS
default: ""
description: "Kafka zookeeper endpoints: e.g. https://192.168.1.10:9200"
type: string
label: Zookeeper Endpoint of Kafka Output
required: true
group: "Output Configs"
show_if: "output.type=kafka&&env.OUTPUT_KAFKA_HOST_TYPE=zookeeper"
- variable: env.OUTPUT_KAFKA_BROKER_HOSTS
default: ""
description: "Use either Zookeeper or Broker list as the Kafka connection entrypoint.e.g. <broker1_host>:<broker1_port>,<broker2_host>:<broker2_port>"
type: string
label: Kafka Broker Endpoints
required: true
group: "Output Configs"
show_if: "output.type=kafka&&env.OUTPUT_KAFKA_HOST_TYPE=brokers"
- variable: env.OUTPUT_KAFKA_TOPIC_KEY
default: "topic"
description: "Logs will be send to this topic"
type: string
label: Kafka Topic
group: "Output Configs"
show_if: "output.type=kafka"
- variable: env.OUTPUT_KAFKA_PARTITION
default: "partition"
description: "Kafka partition value"
type: string
label: Kafka Partition
group: "Output Configs"
show_if: "output.type=kafka"
- variable: env.OUTPUT_KAFKA_PARTITION_KEY
default: "partition_key"
description: "Kafka partition key"
type: string
label: Kafka Partition Key
group: "Output Configs"
show_if: "output.type=kafka"
- variable: env.OUTPUT_KAFKA_MESSAGE_KEY
default: "message_key"
description: "Kafka message key"
type: string
label: Kafka Message Key
group: "Output Configs"
show_if: "output.type=kafka"
# syslog config
- variable: env.OUTPUT_SYSLOG_HOST
default: ""
description: "Syslog endpoint, e.g. 192.168.1.10:514"
type: string
label: Syslog Endpoint
group: "Output Configs"
required: true
show_if: "output.type=syslog"
- variable: env.OUTPUT_SYSLOG_PROTOCOL
default: "udp"
description: "syslog transfer protocol"
type: enum
label: Syslog Transfer Protocol
required: true
group: "Output Configs"
show_if: "output.type=syslog"
options:
- "udp"
- "tcp"
- variable: output.syslogCaFile
default: ""
description: "syslog tls certificate file"
type: multiline
label: Syslog Certificate File
group: "Output Configs"
show_if: "output.type=syslog&&env.OUTPUT_SYSLOG_PROTOCOL=tcp"
- variable: env.OUTPUT_SYSLOG_SEVERITY
default: "notice"
description: "The severity of logs"
type: string
label: Syslog Severity
group: "Output Configs"
show_if: "output.type=syslog"
- variable: env.OUTPUT_SYSLOG_PROGRAM
default: "fluentd"
description: "The program name of the log."
type: string
label: Syslog Program
group: "Output Configs"
show_if: "output.type=syslog"
- variable: env.OUTPUT_SYSLOG_TOKEN
default: ""
description: "Will add token to structured data in every syslog message. For cloud syslog like Sumologic, Loggly etc, you could generate token on their configure page"
type: string
label: Syslog Token
group: "Output Configs"
show_if: "output.type=syslog"
- variable: output.customConf
default: "<match pattern>\n @type stdout\n</match>"
description: "fluentd custom output config"
type: multiline
label: Fluentd Custom Output Config
group: "Output Configs"
show_if: "output.type=custom"
# fluentd configs
- variable: configMaps.filter\.conf
default: ""
description: "fluentd filter config, https://docs.fluentd.org/v1.0/articles/filter-plugin-overview"
type: multiline
label: Fluentd Filter Config
group: "Filter Configs"
- variable: configMaps.parser\.conf
default: ""
description: "fluentd parser config, https://docs.fluentd.org/v1.0/articles/parser-plugin-overview"
type: multiline
label: Fluentd Parser Config
group: "Fluentd Parser Configs"
- variable: configMaps.formatter\.conf
default: ""
description: "fluentd formatter config, https://docs.fluentd.org/v1.0/articles/formatter-plugin-overview"
type: multiline
label: Fluentd Formatter Config
group: "Formatter Configs"
To verify that the Fluentd aggregator has started, run:
kubectl --namespace={{ .Release.Namespace }} get all -l "app={{ template "fluentd.name" . }},release={{ .Release.Name }}"
THIS APPLICATION CAPTURES ALL CONSOLE OUTPUT AND FORWARDS IT TO Elasticsearch. Anything that might be identifying,
including IP addresses, container images, and object names, will NOT be anonymized.
# Limitation
Caution: the file buffer implementation depends on the characteristics of the local file system. Do not use a file buffer on a remote file system such as NFS, GlusterFS or HDFS; we observed major data loss with remote file systems.
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "fluentd.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "fluentd.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "fluentd.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "fluentd.fullname" . }}
labels:
app: {{ template "fluentd.name" . }}
chart: {{ template "fluentd.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
{{- range $key, $value := .Values.configMaps }}
{{- if $value }}
{{ $key }}: |-
{{ $value | indent 4 }}
{{- end }}
{{- end }}
output.conf: |-
{{- if eq .Values.output.type "custom" }}
{{ .Values.output.customConf | indent 4 }}
{{- else }}
<match **>
{{- if eq .Values.output.type "elasticsearch" }}
@type elasticsearch
@log_level info
include_tag_key true
hosts "#{ENV['OUTPUT_ES_HOSTS']}"
logstash_format true
logstash_prefix "#{ENV['OUTPUT_ES_PREFIX']}"
logstash_dateformat "#{ENV['OUTPUT_ES_DATEFORMAT']}"
type_name "#{ENV['RELEASENAME']}"
{{- else if eq .Values.output.type "kafka" }}
@type kafka_buffered
{{- if eq .Values.env.OUTPUT_KAFKA_HOST_TYPE "brokers" }}
brokers "#{ENV['OUTPUT_KAFKA_BROKER_HOSTS']}" #<broker1_host>:<broker1_port>,<broker2_host>:<broker2_port>,..
{{- else if eq .Values.env.OUTPUT_KAFKA_HOST_TYPE "zookeeper" }}
zookeeper "#{ENV['OUTPUT_KAFKA_ZK_HOSTS']}" #<broker1_host>:<broker1_port>,<broker2_host>:<broker2_port>,..
{{- end }}
topic_key "#{ENV['OUTPUT_KAFKA_TOPIC_KEY']}"
partition_key "#{ENV['OUTPUT_KAFKA_PARTITION']}"
partition_key_key "#{ENV['OUTPUT_KAFKA_PARTITION_KEY']}"
message_key_key "#{ENV['OUTPUT_KAFKA_MESSAGE_KEY']}"
# default_topic, default_partition_key and default_message_key keep the plugin defaults (nil)
exclude_topic_key false
exclude_partition_key false
get_kafka_client_log false
# ruby-kafka producer options
max_send_retries 1
required_acks -1
# ack_timeout uses the ruby-kafka default; compression_codec defaults to no compression
{{- else if eq .Values.output.type "syslog" }}
@type remote_syslog
host_with_port "#{ENV['OUTPUT_SYSLOG_HOST']}"
severity "#{ENV['OUTPUT_SYSLOG_SEVERITY']}"
program "#{ENV['OUTPUT_SYSLOG_PROGRAM']}"
hostname ${tag[1]}
{{- if eq .Values.env.OUTPUT_SYSLOG_PROTOCOL "udp" }}
protocol udp
{{- else if eq .Values.env.OUTPUT_SYSLOG_PROTOCOL "tcp" }}
protocol tcp
tls true
ca_file /etc/fluent/ssl/ca.pem
{{- end }}
{{- if .Values.env.OUTPUT_SYSLOG_TOKEN }}
<filter **>
@type record_transformer
<record>
tag ${tag} "#{ENV['OUTPUT_SYSLOG_TOKEN']}"
</record>
</filter>
{{- end }}
{{- else if eq .Values.output.type "splunk_hec" }}
@type splunk_hec
host "#{ENV['OUTPUT_SPLUNK_HOST']}"
port "#{ENV['OUTPUT_SPLUNK_PORT']}"
token "#{ENV['OUTPUT_SPLUNK_TOKEN']}"
# metadata parameter
# default_source fluentd
sourcetype "#{ENV['OUTPUT_SPLUNK_SOURCE_TYPE']}"
index_key "#{ENV['OUTPUT_SPLUNK_INDEX']}"
{{- if and .Values.env.OUTPUT_SPLUNK_ACK .Values.env.OUTPUT_SPLUNK_CHANNEL }}
# ack parameter
use_ack true
channel "#{ENV['OUTPUT_SPLUNK_CHANNEL']}"
ack_retry 8
{{- end }}
# ssl parameter
# use_ssl true
# ca_file /path/to/ca.pem
{{- end }}
# fluentd file buffer config
<buffer>
@type file
path /var/log/fluentd-buffers/*.buffer
flush_mode interval
retry_type exponential_backoff
flush_thread_count 2
flush_interval "#{ENV['OUTPUT_BUFFER_FLUSH_INTERVAL']}"
retry_forever
retry_max_interval 30
chunk_limit_size "#{ENV['OUTPUT_BUFFER_CHUNK_LIMIT']}"
queue_limit_length "#{ENV['OUTPUT_BUFFER_QUEUE_LIMIT']}"
overflow_action block
</buffer>
</match>
{{- end }}
{{- if .Values.ingress.enabled -}}
{{- $serviceName := include "fluentd.fullname" . -}}
{{- $servicePort := .Values.service.externalPort -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ template "fluentd.fullname" . }}
labels:
app: {{ template "fluentd.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
annotations:
{{- range $key, $value := .Values.ingress.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
rules:
{{- range $host := .Values.ingress.hosts }}
- host: {{ $host.name }}
http:
paths:
- path: /
backend:
serviceName: {{ $host.serviceName }}
servicePort: {{ $host.servicePort }}
{{- end -}}
{{- if .Values.ingress.tls }}
tls:
{{ toYaml .Values.ingress.tls | indent 4 }}
{{- end -}}
{{- end -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "fluentd.fullname" . }}-ca
labels:
app: {{ template "fluentd.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
ca.pem: |-
{{ .Values.output.syslogCaFile | indent 4 }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "fluentd.fullname" . }}-headless
labels:
app: {{ template "fluentd.name" . }}
chart: {{ template "fluentd.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
annotations:
# service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
ports:
- name: tcp
port: 24220
clusterIP: None
selector:
app: {{ template "fluentd.name" . }}
release: {{ .Release.Name }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "fluentd.fullname" . }}
labels:
app: {{ template "fluentd.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.service.type }}
ports:
{{- range $port := .Values.service.ports }}
- name: {{ $port.name }}
port: {{ $port.containerPort }}
targetPort: {{ $port.containerPort }}
protocol: {{ $port.protocol }}
{{- end }}
selector:
app: {{ template "fluentd.name" . }}
release: {{ .Release.Name }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "fluentd.fullname" . }}
labels:
app: {{ template "fluentd.name" . }}
chart: {{ template "fluentd.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
selector:
matchLabels:
app: {{ template "fluentd.name" . }} # has to match .spec.template.metadata.labels
release: {{ .Release.Name }}
serviceName: {{ include "fluentd.fullname" . }}-headless
podManagementPolicy: {{ .Values.podManagementPolicy }}
updateStrategy:
{{ toYaml .Values.updateStrategy | indent 4 }}
replicas: {{ default 1 .Values.replicas }}
template:
metadata:
labels:
app: {{ template "fluentd.name" . }}
release: {{ .Release.Name }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
{{- if .Values.annotations }}
{{ toYaml .Values.annotations | indent 8 }}
{{- end }}
spec:
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
{{- if .Values.affinity }}
affinity:
{{ toYaml .Values.affinity | indent 8 }}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.image.pullSecrets }}
imagePullSecrets:
{{- range $pullSecret := .Values.image.pullSecrets }}
- name: {{ $pullSecret }}
{{- end }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: RELEASENAME
value: {{ .Release.Name | quote }}
- name: OUTPUT_BUFFER_FLUSH_INTERVAL
value: {{ .Values.output.flushInterval | quote }}
- name: OUTPUT_BUFFER_CHUNK_LIMIT
value: {{ .Values.output.bufferChunkLimit | quote }}
- name: OUTPUT_BUFFER_QUEUE_LIMIT
value: {{ .Values.output.bufferQueueLimit | quote }}
{{- range $key, $value := .Values.env }}
- name: {{ $key | quote }}
value: {{ $value | quote }}
{{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
ports:
{{- range $port := .Values.service.ports }}
- name: {{ $port.name }}
containerPort: {{ $port.containerPort }}
protocol: {{ $port.protocol }}
{{- end }}
- name: http-input
containerPort: 9880
protocol: TCP
livenessProbe:
httpGet:
# Use percent encoding for query param.
# The value is {"log": "health check"}.
# the endpoint itself results in a new fluentd
# tag 'fluentd.pod.healthcheck'
path: /fluentd.pod.healthcheck?json=%7B%22log%22%3A+%22health+check%22%7D
port: 9880
initialDelaySeconds: 5
timeoutSeconds: 1
volumeMounts:
- name: config-volume
mountPath: /etc/fluent/config.d
- name: buffer
mountPath: "/var/log/fluentd-buffers"
- name: extra
mountPath: {{ .Values.extraPersistence.mountPath | default "/extra" | quote }}
- name: ca
mountPath: "/etc/fluent/ssl"
readOnly: true
volumes:
- name: config-volume
configMap:
name: {{ template "fluentd.fullname" . }}
- name: ca
configMap:
name: {{ template "fluentd.fullname" . }}-ca
{{- if not .Values.persistence.enabled }}
- name: buffer
emptyDir: {}
{{- end }}
{{- if not .Values.extraPersistence.enabled }}
- name: extra
emptyDir: {}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
{{- if or .Values.persistence.enabled .Values.extraPersistence.enabled }}
volumeClaimTemplates:
{{- end }}
{{- if .Values.persistence.enabled }}
- metadata:
name: buffer
spec:
accessModes:
- {{ .Values.persistence.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.size }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.extraPersistence.enabled }}
- metadata:
name: extra
spec:
accessModes:
- {{ .Values.extraPersistence.accessMode | quote }}
resources:
requests:
storage: {{ .Values.extraPersistence.size }}
{{- if .Values.extraPersistence.storageClass }}
{{- if (eq "-" .Values.extraPersistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.extraPersistence.storageClass }}"
{{- end }}
{{- end }}
{{- end }}
# Default values for fluentd.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
repository: ranchercharts/fluentd-aggregator
tag: 1.6.3
pullPolicy: IfNotPresent
# pullSecrets:
# - secret1
# - secret2
replicas: 1
## Start and stop pods in Parallel or OrderedReady (one-by-one). Note: this cannot be changed after the first release.
## ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pod-management-policy
podManagementPolicy: OrderedReady
## The StatefulSet Update Strategy which the Fluentd aggregator will use when changes are applied.
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
type: "RollingUpdate"
output:
bufferChunkLimit: "2M"
bufferQueueLimit: "8"
flushInterval: "5s"
# options are elasticsearch, splunk_hec, kafka, syslog, custom
type: "elasticsearch"
syslogCaFile: ""
customConf: ""
env:
OUTPUT_ES_HOSTS: "elasticsearch:9200"
OUTPUT_ES_PREFIX: "k8s"
OUTPUT_ES_DATEFORMAT: "%Y.%m.%d"
# OUTPUT_SPLUNK_HOST:
# OUTPUT_SPLUNK_PORT:
# OUTPUT_SPLUNK_TOKEN:
# OUTPUT_SPLUNK_SOURCE_TYPE:
# OUTPUT_SPLUNK_INDEX:
# OUTPUT_SPLUNK_ACK: false
# OUTPUT_SPLUNK_CHANNEL:
# OUTPUT_KAFKA_HOST_TYPE: "zookeeper"
# OUTPUT_KAFKA_ZK_HOSTS: ""
# OUTPUT_KAFKA_BROKER_HOSTS: ""
# OUTPUT_KAFKA_TOPIC_KEY: topic
# OUTPUT_KAFKA_PARTITION: partition
# OUTPUT_KAFKA_PARTITION_KEY: partition_key
# OUTPUT_KAFKA_MESSAGE_KEY: message_key
# OUTPUT_SYSLOG_PROTOCOL: udp
# OUTPUT_SYSLOG_HOST:
# OUTPUT_SYSLOG_SEVERITY: notice
# OUTPUT_SYSLOG_PROGRAM: fluentd
# OUTPUT_SYSLOG_TOKEN:
service:
type: ClusterIP
externalPort: 80
ports:
- name: "prometheus"
protocol: TCP
containerPort: 24231
- name: "forward-input"
protocol: TCP
containerPort: 24224
- name: "input-udp"
protocol: UDP
containerPort: 24224
# - name: "monitor-agent"
# protocol: TCP
# containerPort: 24220
ingress:
enabled: false
# Used to create an Ingress and Service record.
# hosts:
# - name: "http-input.local"
# protocol: TCP
# serviceName: http-input
# servicePort: 9880
annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
tls:
# Secrets must be manually created in the namespace.
# - secretName: http-input-tls
# hosts:
# - http-input.local
configMaps:
general.conf: |
# Prevent fluentd from handling records containing its own logs. Otherwise
# it can lead to an infinite loop, when error in sending one message generates
# another message which also fails to be sent and so on.
<match fluentd.**>
@type null
</match>
# Used for health checking
<source>
@type http
port 9880
bind 0.0.0.0
body_size_limit 32m
keepalive_timeout 10s
</source>
# Emits internal metrics every minute, and also exposes them on port
# 24220. Useful for determining if an output plugin is retrying/erroring,
# or determining the buffer queue length.
<source>
@type monitor_agent
bind 0.0.0.0
port 24220
tag fluentd.monitor.metrics
</source>
system.conf: |-
<system>
root_dir /tmp/fluentd-buffers/
</system>
monitoring.conf: |
# expose metrics in prometheus format
<source>
@type prometheus
bind 0.0.0.0
port 24231
metrics_path /metrics
</source>
<source>
@type prometheus_output_monitor
interval 10
<labels>
hostname ${hostname}
</labels>
</source>
<source>
@type prometheus_monitor
</source>
forward-input.conf: |
<source>
@type forward
port 24224
bind 0.0.0.0
</source>
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user; this also increases the chance the chart runs on environments with
# little resources, such as Minikube. This chart ships with the modest defaults below;
# adjust them as necessary for your workload.
limits:
cpu: 500m
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
## Persist buffer data to a persistent volume
## Caution: the file buffer implementation depends on the characteristics of the local file system.
## Do not use a file buffer on a remote file system such as NFS, GlusterFS or HDFS; we observed major data loss with remote file systems.
persistence:
enabled: false
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
# annotations: {}
accessMode: ReadWriteOnce
size: 10Gi
extraPersistence:
enabled: false
accessMode: ReadWriteOnce
size: 10Gi
nodeSelector: {}
tolerations: []
affinity: {}