
Using AWS EFS as Kubernetes Persistent Volume

Written on February 12, 2018 by Ron Rivera.


Kubernetes supports Amazon's Elastic File System (EFS) as a storage backend, which makes it possible to share data between containers running in a Pod while preserving it across restarts.

In this post, we will set up a Persistent Volume backed by Amazon EFS in Kubernetes.

Prerequisites

  • Amazon VPC, EC2 and EFS resources are created in the same region as per the user guide
  • EFS Security Groups are configured to allow inbound NFS access (TCP port 2049) from the worker nodes (a quick CLI check follows below)
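
If you want to double-check the security group rules from the command line, something like the following should work. This is a sketch assuming the AWS CLI is configured, where sg-xxxxxxxx stands for the security group attached to your mount targets:

# Inspect the inbound rules of the mount targets' security group
aws ec2 describe-security-groups --group-ids sg-xxxxxxxx --region eu-west-1 \
  --query 'SecurityGroups[].IpPermissions'

Look for an inbound rule covering TCP port 2049 from the worker nodes' security group or CIDR range.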

Connect EFS on each worker node

Install NFS utils first.

yum install -y nfs-utils
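
If your worker nodes run a Debian-based distribution instead, the equivalent package is nfs-common:

apt-get install -y nfs-common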

Map the EFS mount target IPs to the EFS hostname

This step is only required if your k8s cluster runs in a VPC that is not configured to use Amazon's DNS servers and has DNS hostnames disabled. This is a typical configuration for a VPC with private subnets only (as in my case).

Otherwise, proceed to the next step.

Take note of the EFS ID and mount target IPs, which can be obtained from the AWS console.
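
Alternatively, assuming the AWS CLI is configured for your account, the mount target IPs can be listed directly:

# List the IP addresses of all mount targets for the file system
aws efs describe-mount-targets --file-system-id fs-xxxxxxxx --region eu-west-1 \
  --query 'MountTargets[].IpAddress'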

Append the EFS mount target IPs to each worker's /etc/hosts file:

echo "10.6.148.66 fs-xxxxxxxx.efs.eu-west-1.amazonaws.com" >> /etc/hosts
echo "10.6.149.70 fs-xxxxxxxx.efs.eu-west-1.amazonaws.com" >> /etc/hosts
echo "10.6.148.243 fs-xxxxxxxx.efs.eu-west-1.amazonaws.com" >> /etc/hosts

Mount the EFS file system on the worker node

mkdir -p /k8spv
echo "fs-xxxxxxxx.efs.eu-west-1.amazonaws.com:/ /k8spv nfs nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,proto=tcp,sync 0 0" >> /etc/fstab
mount -a -t nfs
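
To confirm the mount succeeded, check that the file system shows up and is writable with a quick smoke test:

df -h /k8spv
touch /k8spv/.mount-test && rm /k8spv/.mount-test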

Every PersistentVolumeClaim created will show up under /k8spv/pvc-<claim_id>.

Repeat the above steps on all the worker nodes in your cluster.

Configure the Storage Provisioner

Now that the EFS target is mounted, let's configure the provisioner.

Create the EFS ConfigMap

This should contain references to your EFS ID.

$ cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner
data:
  file.system.id: fs-xxxxxxxx
  aws.region: eu-west-1
  provisioner.name: kubernetes-tools-cluster/aws-efs
EOF

Create the EFS Provisioner

$ cat <<EOF | kubectl apply -f -
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: efs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest
          env:
            - name: FILE_SYSTEM_ID
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: file.system.id
            - name: AWS_REGION
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: aws.region
            - name: PROVISIONER_NAME
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: provisioner.name
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: fs-xxxxxxxx.efs.eu-west-1.amazonaws.com
            path: /
EOF

ProTip: Make sure to check the logs of the EFS provisioner Pod for permission errors, e.g. $ kubectl logs efs-provisioner-76fc7ff666-l7jwf.

If it complains about permission errors for the default service account, you might need to grant it the cluster-admin role so it can create the required volume resources, as shown below. (Note that cluster-admin is broader than strictly necessary; a narrowly scoped ClusterRole would be safer in production.)

$ cat <<EOF | kubectl apply -f -
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: clusterrolebinding-cluster-admin-default-default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF
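
You can confirm the service account now has the permissions it needs with kubectl auth can-i:

$ kubectl auth can-i create persistentvolumes --as system:serviceaccount:default:default
yes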

Create the EFS StorageClass and set it as default

$ cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aws-efs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes-tools-cluster/aws-efs
EOF
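
Verify that the class was created and flagged as default:

$ kubectl get storageclass

The default class shows up with (default) next to its name.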

With the default storage class configured, persistent volumes will now be automatically provisioned as requested.

Now it's time to test that this works.

A PersistentVolumeClaim is the mechanism for requesting storage, including a specific size and access mode, so let's create one.

$ cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-deployment
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: aws-efs
  resources:
    requests:
      storage: 500Gi
EOF
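
The claim should bind shortly after the provisioner creates the backing directory on EFS. Check its status with:

$ kubectl get pvc efs-deployment

The STATUS column should read Bound, and a matching PersistentVolume should appear under $ kubectl get pv.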

To use this PVC, we must refer to this claim within our Deployment spec, e.g.

      volumes:
        - name: efs
          persistentVolumeClaim:
            claimName: efs-deployment

Here's a complete Deployment manifest with this PVC.

$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: efs-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: efs-deployment
  template:
    metadata:
      labels:
        app: efs-deployment
    spec:
      containers:
      - name: efs-deployment
        image: alpine:latest
        # alpine exits immediately without a command; keep it running so we can exec in
        command: ["tail", "-f", "/dev/null"]
        volumeMounts:
        - mountPath: /efs
          name: efs
      volumes:
        - name: efs
          persistentVolumeClaim:
            claimName: efs-deployment
EOF
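
To verify that the volume is writable from inside the Pod, here's a quick smoke test (the Pod name is looked up by label; yours will differ):

$ POD=$(kubectl get pods -l app=efs-deployment -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec $POD -- sh -c 'echo hello > /efs/hello.txt && cat /efs/hello.txt'
hello

The same file should also be visible on the worker nodes under the corresponding /k8spv/pvc-<claim_id> directory.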

Voila! You're done.

Conclusion

Using Amazon EFS as the storage backend for Persistent Volumes provides the capability to share data between containers running in a Pod, and also allows Pods to be scheduled across different Availability Zones while maintaining access to the same data.
