Commit 96592d24 by kmova

update readme and go imports to new location

This commit fixes the following:

- fixes the go imports to use the new location
- sets up go modules for this project
- updates the docker files to use the new binary name
- updates the README to use the new repo name

Signed-off-by: kmova <kiran.mova@mayadata.io>
(cherry picked from commit 234543e14d416979d3397e40940cae99bc241365)
parent f07117e2
Makefile:

```diff
@@ -18,20 +18,20 @@ endif
 ifeq ($(VERSION),)
 	VERSION = latest
 endif
-IMAGE = $(REGISTRY)nfs-client-provisioner:$(VERSION)
-IMAGE_ARM = $(REGISTRY)nfs-client-provisioner-arm:$(VERSION)
-MUTABLE_IMAGE = $(REGISTRY)nfs-client-provisioner:latest
-MUTABLE_IMAGE_ARM = $(REGISTRY)nfs-client-provisioner-arm:latest
+IMAGE = $(REGISTRY)nfs-subdir-external-provisioner:$(VERSION)
+IMAGE_ARM = $(REGISTRY)nfs-subdir-external-provisioner-arm:$(VERSION)
+MUTABLE_IMAGE = $(REGISTRY)nfs-subdir-external-provisioner:latest
+MUTABLE_IMAGE_ARM = $(REGISTRY)nfs-subdir-external-provisioner-arm:latest
 all: build image build_arm image_arm
 container: build image build_arm image_arm
 build:
-	CGO_ENABLED=0 GOOS=linux go build -a -ldflags '-extldflags "-static"' -o docker/x86_64/nfs-client-provisioner ./cmd/nfs-client-provisioner
+	CGO_ENABLED=0 GOOS=linux go build -a -ldflags '-extldflags "-static"' -o docker/x86_64/nfs-subdir-external-provisioner ./cmd/nfs-subdir-external-provisioner
 build_arm:
-	CGO_ENABLED=0 GOOS=linux GOARCH=arm GOARM=7 go build -a -ldflags '-extldflags "-static"' -o docker/arm/nfs-client-provisioner ./cmd/nfs-client-provisioner
+	CGO_ENABLED=0 GOOS=linux GOARCH=arm GOARM=7 go build -a -ldflags '-extldflags "-static"' -o docker/arm/nfs-subdir-external-provisioner ./cmd/nfs-subdir-external-provisioner
 image:
 	docker build -t $(MUTABLE_IMAGE) docker/x86_64
...
```
README.md:

# Kubernetes NFS-Client Provisioner

NFS subdir external provisioner is an automatic provisioner that uses your *existing and already configured* NFS server to support dynamic provisioning of Kubernetes Persistent Volumes via Persistent Volume Claims. Persistent volumes are provisioned as ``${namespace}-${pvcName}-${pvName}``.
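For example (hypothetical names: a PVC `test-claim` in namespace `default` bound to a PV named `pvc-1234`, on an NFS export mounted at `/exported/path`), the backing directory on the server would be:

```sh
# Directory created by the provisioner for this claim:
ls /exported/path/default-test-claim-pvc-1234
```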
Note: This repository is being migrated from https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client. Some of the following instructions will be updated once the migration is completed. To test a container image built from this repository, you will have to build and push the provisioner image using the instructions below.
```sh
make build
# Set a custom image registry to push the container image
# Example REGISTRY="quay.io/myorg"
make image
```
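For instance, to build and push to a custom registry (note that the Makefile shown above concatenates `REGISTRY` directly in front of the image name, so include the trailing slash; `quay.io/myorg/` is a placeholder):

```sh
make build
REGISTRY="quay.io/myorg/" make image
docker push quay.io/myorg/nfs-subdir-external-provisioner:latest
```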
# How to deploy nfs-client to your cluster.
To note again, you must *already* have an NFS Server.
## With Helm
Follow the instructions for the stable helm chart maintained at https://github.com/helm/charts/tree/master/stable/nfs-client-provisioner
The tl;dr is
```bash
$ helm install stable/nfs-client-provisioner --set nfs.server=x.x.x.x --set nfs.path=/exported/path
```
## Without Helm
**Step 1: Get connection information for your NFS server**. Make sure your NFS server is accessible from your Kubernetes cluster and get the information you need to connect to it. At a minimum you will need its hostname.
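One quick sanity check (assuming the `showmount` tool from your distribution's NFS utilities is installed, and with `nfs.example.com` as a placeholder hostname):

```sh
# List the server's exports; the path you configure below must appear here.
showmount -e nfs.example.com
```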
**Step 2: Get the NFS-Client Provisioner files**. To set up the provisioner you will download a set of YAML files, edit them to add your NFS server's connection information, and then apply each with the ``kubectl`` / ``oc`` command.
Get all of the files in the [deploy](https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/tree/master/deploy) directory of this repository. These instructions assume that you have cloned the [kubernetes-sigs/nfs-subdir-external-provisioner](https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/) repository and have a bash-shell open in the root directory.
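For example:

```sh
git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git
cd nfs-subdir-external-provisioner
ls deploy
```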
**Step 3: Set up authorization**. If your cluster has RBAC enabled or you are running OpenShift, you must authorize the provisioner. If you are in a namespace/project other than "default", edit `deploy/rbac.yaml`.
Kubernetes:
```sh
# Set the subject of the RBAC objects to the current namespace where the provisioner is being deployed
$ NS=$(kubectl config get-contexts|grep -e "^\*" |awk '{print $5}')
$ NAMESPACE=${NS:-default}
$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml ./deploy/deployment.yaml
$ kubectl create -f deploy/rbac.yaml
```
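A quick check that the objects landed in the right namespace (the service account name comes from `deploy/deployment.yaml`, shown in Step 4 below):

```sh
# Should return the service account the provisioner pod will run as.
kubectl -n $NAMESPACE get serviceaccount nfs-client-provisioner
```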
OpenShift:
On some installations of OpenShift the default admin user does not have cluster-admin permissions. If these commands fail, refer to the OpenShift documentation for **User and Role Management** or contact your OpenShift provider to help you grant the right permissions to your admin user.
```sh
# Set the subject of the RBAC objects to the current namespace where the provisioner is being deployed
$ NAMESPACE=`oc project -q`
$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml
$ oc create -f deploy/rbac.yaml
$ oadm policy add-scc-to-user hostmount-anyuid system:serviceaccount:$NAMESPACE:nfs-client-provisioner
```
**Step 4: Configure the NFS-Client provisioner**
Note: To deploy to an ARM-based environment, use `deploy/deployment-arm.yaml`; otherwise use `deploy/deployment.yaml`.
You must edit the provisioner's deployment file to specify the correct location of your nfs-client-provisioner container image.
Next you must edit the provisioner's deployment file to add connection information for your NFS server. Edit `deploy/deployment.yaml` and replace the two occurrences of `<YOUR NFS SERVER HOSTNAME>` with your server's hostname.
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: <YOUR NFS SERVER HOSTNAME>
            - name: NFS_PATH
              value: /var/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: <YOUR NFS SERVER HOSTNAME>
            path: /var/nfs
```
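If you prefer to script the substitution (here `nfs.example.com` and `/exported/path` are placeholders for your own server and export path):

```sh
sed -i'' "s/<YOUR NFS SERVER HOSTNAME>/nfs.example.com/g" deploy/deployment.yaml
# If your export path differs from /var/nfs, update NFS_PATH and the volume path too:
sed -i'' "s|/var/nfs|/exported/path|g" deploy/deployment.yaml
```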
You may also want to change the PROVISIONER_NAME above from ``fuseim.pri/ifs`` to something more descriptive like ``nfs-storage``, but if you do, remember to also change the PROVISIONER_NAME in the storage class definition below:
This is `deploy/class.yaml` which defines the NFS-Client's Kubernetes Storage Class:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's PROVISIONER_NAME env value
parameters:
  archiveOnDelete: "false" # When set to "false" your PVs will not be archived
                           # by the provisioner upon deletion of the PVC.
```
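Apply the storage class before testing. Note that with `archiveOnDelete: "true"`, released directories are kept rather than removed (in this provisioner's lineage they are renamed with an `archived-` prefix; worth verifying against your version):

```sh
kubectl create -f deploy/class.yaml
```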
**Step 5: Finally, test your environment!**
Now we'll test your NFS provisioner.
Deploy:
```sh
$ kubectl create -f deploy/test-claim.yaml -f deploy/test-pod.yaml
```
Now check your NFS Server for the file `SUCCESS`.
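For example, on the NFS server itself (assuming the `/var/nfs` export configured above; the test pod writes an empty `SUCCESS` file into its provisioned directory):

```sh
ls /var/nfs/*/SUCCESS
```

Then delete the test pod and claim: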
```sh
kubectl delete -f deploy/test-pod.yaml -f deploy/test-claim.yaml
```
Now check the folder has been deleted.
**Step 6: Deploying your own PersistentVolumeClaims**. To deploy your own PVC, make sure that you request the correct storage class, as defined in your `deploy/class.yaml` file.
For example:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```
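Note that this commit also adds `spec.storageClassName` to `deploy/test-claim.yaml` (see the diff further down); on current Kubernetes versions that field is preferred over the beta annotation, e.g.:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```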
Provisioner source (Go):

```diff
@@ -28,7 +28,7 @@ import (
 	"k8s.io/kubernetes/pkg/apis/core/v1/helper"
 	"github.com/golang/glog"
-	"github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller"
+	"sigs.k8s.io/sig-storage-lib-external-provisioner/controller"
 	"k8s.io/api/core/v1"
 	storage "k8s.io/api/storage/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -53,7 +53,7 @@ const (
 var _ controller.Provisioner = &nfsProvisioner{}
-func (p *nfsProvisioner) Provision(options controller.VolumeOptions) (*v1.PersistentVolume, error) {
+func (p *nfsProvisioner) Provision(options controller.ProvisionOptions) (*v1.PersistentVolume, error) {
 	if options.PVC.Spec.Selector != nil {
 		return nil, fmt.Errorf("claim Selector is not supported")
 	}
@@ -78,9 +78,9 @@ func (p *nfsProvisioner) Provision(options controller.VolumeOptions) (*v1.Persis
 			Name: options.PVName,
 		},
 		Spec: v1.PersistentVolumeSpec{
-			PersistentVolumeReclaimPolicy: options.PersistentVolumeReclaimPolicy,
+			PersistentVolumeReclaimPolicy: *options.StorageClass.ReclaimPolicy,
 			AccessModes:                   options.PVC.Spec.AccessModes,
-			MountOptions:                  options.MountOptions,
+			//MountOptions:                  options.MountOptions,
 			Capacity: v1.ResourceList{
 				v1.ResourceName(v1.ResourceStorage): options.PVC.Spec.Resources.Requests[v1.ResourceName(v1.ResourceStorage)],
 			},
...
```
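The library bump changes the `Provision` signature from `controller.VolumeOptions` to `controller.ProvisionOptions`, where the reclaim policy now hangs off the StorageClass object instead of the options struct. A minimal sketch of the new shape (`buildPVSpec` is a hypothetical helper, not code from this repository; field types per k8s.io/api v0.18 and sig-storage-lib-external-provisioner v4):

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	"sigs.k8s.io/sig-storage-lib-external-provisioner/controller"
)

// buildPVSpec mirrors the change above: the reclaim policy is read from the
// StorageClass carried in ProvisionOptions rather than from the options struct.
func buildPVSpec(options controller.ProvisionOptions) v1.PersistentVolumeSpec {
	return v1.PersistentVolumeSpec{
		// StorageClass.ReclaimPolicy is a *v1.PersistentVolumeReclaimPolicy,
		// hence the dereference introduced by this commit. The controller
		// supplies a non-nil StorageClass for dynamic provisioning.
		PersistentVolumeReclaimPolicy: *options.StorageClass.ReclaimPolicy,
		AccessModes:                   options.PVC.Spec.AccessModes,
		Capacity: v1.ResourceList{
			v1.ResourceStorage: options.PVC.Spec.Resources.Requests[v1.ResourceStorage],
		},
	}
}

func main() {}
```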
deploy/test-claim.yaml:

```diff
@@ -5,6 +5,7 @@ metadata:
   annotations:
     volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
 spec:
+  storageClassName: managed-nfs-storage
   accessModes:
     - ReadWriteMany
   resources:
...
```
docker/arm/Dockerfile:

```diff
@@ -14,5 +14,5 @@
 FROM hypriot/rpi-alpine:3.6
 RUN apk update --no-cache && apk add ca-certificates
-COPY nfs-client-provisioner /nfs-client-provisioner
-ENTRYPOINT ["/nfs-client-provisioner"]
+COPY nfs-subdir-external-provisioner /nfs-subdir-external-provisioner
+ENTRYPOINT ["/nfs-subdir-external-provisioner"]
```
docker/x86_64/Dockerfile:

```diff
@@ -14,5 +14,5 @@
 FROM alpine:3.6
 RUN apk update --no-cache && apk add ca-certificates
-COPY nfs-client-provisioner /nfs-client-provisioner
-ENTRYPOINT ["/nfs-client-provisioner"]
+COPY nfs-subdir-external-provisioner /nfs-subdir-external-provisioner
+ENTRYPOINT ["/nfs-subdir-external-provisioner"]
```
go.mod (new file):

```
module github.com/kubernetes-sigs/nfs-subdir-external-provisioner

go 1.14

require (
	github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b
	github.com/miekg/dns v1.1.29 // indirect
	github.com/onsi/ginkgo v1.12.0 // indirect
	github.com/onsi/gomega v1.9.0 // indirect
	github.com/prometheus/client_golang v1.5.1 // indirect
	golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d // indirect
	golang.org/x/time v0.0.0-20191024005414-555d28b269f0 // indirect
	k8s.io/api v0.18.0
	k8s.io/apimachinery v0.18.0
	k8s.io/client-go v0.18.0
	k8s.io/klog v1.0.0 // indirect
	k8s.io/kubernetes v1.15.0
	k8s.io/utils v0.0.0-20200327001022-6496210b90e8 // indirect
	sigs.k8s.io/sig-storage-lib-external-provisioner v4.1.0+incompatible
)

replace (
	k8s.io/api => k8s.io/api v0.0.0-20190620084959-7cf5895f2711
	k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.0.0-20190620085554-14e95df34f1f
	k8s.io/apimachinery => k8s.io/apimachinery v0.0.0-20190612205821-1799e75a0719
	k8s.io/apiserver => k8s.io/apiserver v0.0.0-20190620085212-47dc9a115b18
	k8s.io/cli-runtime => k8s.io/cli-runtime v0.0.0-20190620085706-2090e6d8f84c
	k8s.io/client-go => k8s.io/client-go v0.0.0-20190620085101-78d2af792bab
	k8s.io/cloud-provider => k8s.io/cloud-provider v0.0.0-20190620090043-8301c0bda1f0
	k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.0.0-20190620090013-c9a0fc045dc1
	k8s.io/code-generator => k8s.io/code-generator v0.0.0-20190612205613-18da4a14b22b
	k8s.io/component-base => k8s.io/component-base v0.0.0-20190620085130-185d68e6e6ea
	k8s.io/cri-api => k8s.io/cri-api v0.0.0-20190531030430-6117653b35f1
	k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.0.0-20190620090116-299a7b270edc
	k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.0.0-20190620085325-f29e2b4a4f84
	k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.0.0-20190620085942-b7f18460b210
	k8s.io/kube-proxy => k8s.io/kube-proxy v0.0.0-20190620085809-589f994ddf7f
	k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.0.0-20190620085912-4acac5405ec6
	k8s.io/kubelet => k8s.io/kubelet v0.0.0-20190620085838-f1cb295a73c9
	k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.0.0-20190620090156-2138f2c9de18
	k8s.io/metrics => k8s.io/metrics v0.0.0-20190620085625-3b22d835f165
	k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.0.0-20190620085408-1aef9010884e
)
```
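Since the commit sets up Go modules, a quick local smoke test (mirroring the Makefile's build target, minus the static-link flags) might be:

```sh
go mod download
CGO_ENABLED=0 GOOS=linux go build ./cmd/nfs-subdir-external-provisioner
```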