[Docker Repository on Quay](https://quay.io/repository/external_storage/nfs-client-provisioner)
**nfs-client** is an automatic provisioner that uses your *existing and already configured* NFS server to support dynamic provisioning of Kubernetes Persistent Volumes via Persistent Volume Claims. Persistent volumes are provisioned as ``${namespace}-${pvcName}-${pvName}``.
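For illustration, a claim named `test-claim` in the `default` namespace, bound to a generated PV named `pvc-a1b2c3d4`, would produce a directory like the following under the NFS export root (all names here are hypothetical):

```
# Directory created under the NFS export root, following
# the ${namespace}-${pvcName}-${pvName} scheme (hypothetical names):
default-test-claim-pvc-a1b2c3d4
```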
# How to deploy nfs-client to your cluster.
To note, you must *already* have an NFS Server.
**Step 1: Get connection information for your NFS server**. Make sure your NFS server is accessible from your Kubernetes cluster and get the information you need to connect to it. At a minimum you will need its hostname.
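Before continuing, it can help to verify that the server exports a share your cluster nodes can reach. A quick check from any machine with the NFS client utilities installed (the hostname and export shown are placeholders):

```shell
# List the exports offered by the NFS server (hostname is a placeholder)
$ showmount -e nfs-server.example.com
Export list for nfs-server.example.com:
/var/nfs *
```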
**Step 2: Get the NFS-Client Provisioner files**. To set up the provisioner you will download a set of YAML files, edit them to add your NFS server's connection information, and then apply each with the ``oc`` command.
Get all of the files in the [deploy](https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client/deploy) directory of this repository. These instructions assume that you have cloned the [external-storage](https://github.com/kubernetes-incubator/external-storage) repository and have a bash-shell open in the ``nfs-client`` directory.
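If you have not cloned the repository yet, the steps above amount to the following (the repository URL is from this document; the checkout location is your choice):

```shell
# Clone the external-storage repository and enter the nfs-client directory
$ git clone https://github.com/kubernetes-incubator/external-storage.git
$ cd external-storage/nfs-client
```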
**Step 3: Setup authorization**. If your cluster has RBAC enabled or you are running OpenShift you must authorize the provisioner. If you are in a namespace/project other than "default" either edit `deploy/auth/clusterrolebinding.yaml` or edit the `oadm policy` command accordingly.
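As a sketch, authorization might look like the following; the manifest filenames are assumed to match the `deploy/auth/` directory mentioned above, and the `oadm policy` command assumes the `default` namespace and a service account named `nfs-client-provisioner`:

```shell
# Create the service account, cluster role, and binding (filenames assumed)
$ oc create -f deploy/auth/serviceaccount.yaml \
    -f deploy/auth/clusterrole.yaml \
    -f deploy/auth/clusterrolebinding.yaml
# Allow the provisioner pod to mount host volumes
# (namespace and service account name assumed)
$ oadm policy add-scc-to-user hostmount-anyuid \
    system:serviceaccount:default:nfs-client-provisioner
```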
```
clusterrole "nfs-client-provisioner-runner" created
clusterrolebinding "run-nfs-client-provisioner" created
```
**OpenShift:** On some installations of OpenShift the default admin user does not have cluster-admin permissions. If these commands fail, refer to the Red Hat OpenShift documentation on **User and Role Management** or contact Red Hat support to help you grant the right permissions to your admin user.
Note: To deploy to an ARM-based environment, use `deploy/deployment-arm.yaml` instead; otherwise use `deploy/deployment.yaml`.
**Step 4: Configure the NFS-Client provisioner**. Next you must edit the provisioner's deployment file to add connection information for your NFS server. Edit `deploy/deployment.yaml` and replace the two occurrences of `<YOURNFSSERVERHOSTNAME>` with your server's hostname.
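After editing, the relevant portion of `deploy/deployment.yaml` should look roughly like this sketch (`nfs-server.example.com` and `/var/nfs` are placeholders for your own server's hostname and export path):

```yaml
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: nfs-server.example.com   # your NFS server's hostname
            - name: NFS_PATH
              value: /var/nfs                 # your exported path
      volumes:
        - name: nfs-client-root
          nfs:
            server: nfs-server.example.com    # must match NFS_SERVER above
            path: /var/nfs                    # must match NFS_PATH above
```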
You may also want to change the `PROVISIONER_NAME` above from ``fuseim.pri/ifs`` to something more descriptive like ``nfs-storage``, but if you do, remember to also change the `provisioner` value in the storage class definition below.

**Step 5: Finalize setup of your StorageClass**. This is `deploy/class.yaml`, which defines the NFS-Client's Kubernetes Storage Class; modify it so the `provisioner` field matches the value of `PROVISIONER_NAME` in the deployment:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false" # When set to "false" your PVs will not be archived
                           # by the provisioner upon deletion of the PVC.
```
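With both files edited, they can be applied together. This sketch uses `oc` as elsewhere in this document (`kubectl create` is the equivalent on plain Kubernetes):

```shell
# Apply the edited deployment and storage class
$ oc create -f deploy/deployment.yaml -f deploy/class.yaml
```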
**Step 6: Deploying your own PersistentVolumeClaims**. To deploy your own PVC, make sure that you have the correct `storage-class` as indicated by your `deploy/class.yaml` file.
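As a minimal sketch, a claim requesting storage from the class defined above might look like this (the claim name and size are arbitrary, and `managed-nfs-storage` assumes you kept the class name from `deploy/class.yaml`):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim                        # arbitrary claim name
spec:
  storageClassName: managed-nfs-storage   # must match deploy/class.yaml
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi                        # arbitrary size for a quick test
```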