Persistent Volume Provisioning

This example shows how to use dynamic persistent volume provisioning.

Prerequisites

This example assumes that you have an understanding of Kubernetes administration and can modify the scripts that launch kube-controller-manager.

Admin Configuration

The admin must define StorageClass objects that describe named "classes" of storage offered in a cluster. Different classes might map to arbitrary levels or policies determined by the admin. When configuring a StorageClass object for persistent volume provisioning, the admin will need to describe the type of provisioner to use and the parameters that will be used by the provisioner when it provisions a PersistentVolume belonging to the class.

The name of a StorageClass object is significant: it is how users request a particular class, by specifying the name in their PersistentVolumeClaim. The provisioner field must be specified, as it determines which volume plugin is used for provisioning PVs. The parameters field contains the parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. For example, the value io1 for the parameter type and the parameter iopsPerGB are specific to EBS. When a parameter is omitted, some default is used.

See Kubernetes StorageClass documentation for complete reference of all supported parameters.

AWS

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  zones: us-east-1d, us-east-1c
  iopsPerGB: "10"
  fsType: ext4

  • type: io1, gp2, sc1, st1. See AWS docs for details. Default: gp2.
  • zone: AWS zone. If neither zone nor zones is specified, volumes are generally round-robined across all active zones where the Kubernetes cluster has a node. Note: the zone and zones parameters must not be used at the same time.
  • zones: a comma-separated list of AWS zones. If neither zone nor zones is specified, volumes are generally round-robined across all active zones where the Kubernetes cluster has a node. Note: the zone and zones parameters must not be used at the same time.
  • iopsPerGB: only for io1 volumes. I/O operations per second per GiB. The AWS volume plugin multiplies this by the size of the requested volume to compute the IOPS of the volume (for example, a 100 GiB volume with iopsPerGB: "10" gets 1000 IOPS) and caps it at 20,000 IOPS (the maximum supported by AWS, see AWS docs).
  • encrypted: denotes whether the EBS volume should be encrypted or not. Valid values are true or false.
  • kmsKeyId: optional. The full Amazon Resource Name of the key to use when encrypting the volume. If none is supplied but encrypted is true, a key is generated by AWS. See AWS docs for valid ARN values.
  • fsType: a filesystem type supported by Kubernetes. Default: "ext4".

GCE

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-central1-a
  fsType: ext4

  • type: pd-standard or pd-ssd. Default: pd-ssd
  • zone: GCE zone. If neither zone nor zones is specified, volumes are generally round-robined across all active zones where the Kubernetes cluster has a node. Note: the zone and zones parameters must not be used at the same time.
  • zones: a comma-separated list of GCE zones. If neither zone nor zones is specified, volumes are generally round-robined across all active zones where the Kubernetes cluster has a node. Note: the zone and zones parameters must not be used at the same time.

vSphere

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: eagerzeroedthick
  fsType:     ext3

  • diskformat: thin, zeroedthick or eagerzeroedthick. See vSphere docs for details. Default: "thin".
  • fsType: a filesystem type supported by Kubernetes. Default: "ext4".

Portworx Volume

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-io-priority-high
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "1"
  snap_interval:   "70"
  io_priority:  "high"

  • fs: filesystem to be laid out: [none/xfs/ext4] (default: ext4)
  • block_size: block size in Kbytes (default: 32)
  • repl: replication factor [1..3] (default: 1)
  • io_priority: IO Priority: [high/medium/low] (default: low)
  • snap_interval: snapshot interval in minutes, 0 disables snaps (default: 0)
  • aggregation_level: specifies the number of chunks the volume would be distributed into, 0 indicates a non-aggregated volume (default: 0)
  • ephemeral: ephemeral storage [true/false] (default false)

For a complete example, refer to the Portworx Volume docs.

StorageOS

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-fast
provisioner: kubernetes.io/storageos
parameters:
  pool: default
  description: Kubernetes volume
  fsType: ext4
  adminSecretNamespace: default
  adminSecretName: storageos-secret

  • pool: The name of the StorageOS distributed capacity pool to provision the volume from. If not specified, the default pool, which is normally present, is used.
  • description: The description to assign to volumes that are created dynamically. All volume descriptions will be the same for the storage class, but different storage classes can be used to allow descriptions for different use cases. Defaults to Kubernetes volume.
  • fsType: The default filesystem type to request. Note that user-defined rules within StorageOS may override this value. Defaults to ext4.
  • adminSecretNamespace: The namespace where the API configuration secret is located. Required if adminSecretName is set.
  • adminSecretName: The name of the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted.

For a complete example, refer to the StorageOS example.
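
As a complement to the adminSecretName / adminSecretNamespace parameters above, a minimal sketch of what the referenced secret might look like is shown below, assuming the apiAddress / apiUsername / apiPassword key layout used by the StorageOS plugin (the base64-encoded values are placeholders, not real credentials):

apiVersion: v1
kind: Secret
metadata:
  name: storageos-secret
  namespace: default
type: "kubernetes.io/storageos"
data:
  # base64-encoded API endpoint and credentials (placeholder values)
  apiAddress: dGNwOi8vMTI3LjAuMC4xOjU3MDU=
  apiUsername: c3RvcmFnZW9z
  apiPassword: c3RvcmFnZW9z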

GLUSTERFS

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  clusterid: "630372ccdc720a92c681fb928f27b53f"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
  volumeoptions: "client.ssl on, server.ssl on"
  volumenameprefix: "dept-dev"
  snapfactor: "10"
  customepnameprefix: "dbstorage"

Example storageclass can be found in glusterfs-storageclass.yaml.

  • resturl : Gluster REST service/Heketi service URL which provisions gluster volumes on demand. The general format should be IPaddress:Port and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in an OpenShift/Kubernetes setup, this can have a format similar to http://heketi-storage-project.cloudapps.mystorage.com, where the fqdn is a resolvable Heketi service URL.

  • restauthenabled : Gluster REST service authentication boolean that enables authentication to the REST server. If this value is 'true', restuser and restuserkey or secretNamespace + secretName have to be filled. This option is deprecated; authentication is enabled when any of restuser, restuserkey, secretName or secretNamespace is specified.

  • restuser : Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool.

  • restuserkey : Gluster REST service/Heketi user's password which will be used for authentication to the REST server. This parameter is deprecated in favor of secretNamespace + secretName.

  • secretNamespace + secretName : Identification of the Secret instance that contains the user password to use when talking to the Gluster REST service. These parameters are optional; an empty password will be used when both secretNamespace and secretName are omitted. The provided secret must have type "kubernetes.io/glusterfs". When both restuserkey and secretNamespace + secretName are specified, the secret will be used.

  • clusterid: 630372ccdc720a92c681fb928f27b53f is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of clusterids, for ex: "8452344e2becec931ece4e33c4674e4e,42982310de6c63381718ccfa6d8cf397". This is an optional parameter.

Example of a secret can be found in glusterfs-secret.yaml.

  • gidMin + gidMax : The minimum and maximum value of the GID range for the storage class. A unique value (GID) in this range ( gidMin-gidMax ) will be used for dynamically provisioned volumes. These are optional values. If not specified, the volume will be provisioned with a value between 2000 and 2147483647, which are the defaults for gidMin and gidMax respectively.

  • volumetype : The volume type and its parameters can be configured with this optional value. If the volume type is not mentioned, it's up to the provisioner to decide the volume type. For example:

    'Replica volume': volumetype: replicate:3 where '3' is the replica count.
    'Disperse/EC volume': volumetype: disperse:4:2 where '4' is the data count and '2' is the redundancy count.
    'Distribute volume': volumetype: none

For available volume types and their administration options, refer to the Administration Guide.

  • volumeoptions : This option allows you to specify the gluster volume options to set on the dynamically provisioned GlusterFS volume. The value should be a comma-separated list of option strings to set on the volume. As shown in the example, if you want to enable encryption on dynamically provisioned gluster volumes you can pass the client.ssl on, server.ssl on options. This is an optional parameter.

For available volume options and their administration, refer to the Administration Guide.

  • volumenameprefix : By default, dynamically provisioned volumes use the naming schema vol_UUID. With this option present in the storageclass, an admin can prefix the volume name with a desired string. If the volumenameprefix storageclass parameter is set, the dynamically provisioned volumes are created in the format below, where _ is the field separator/delimiter:

volumenameprefix_Namespace_PVCname_randomUUID

Please note that the value for this parameter cannot contain _ in the storageclass. This is an optional parameter.

  • snapfactor : The thinpool size of dynamically provisioned volumes can be configured with this parameter. The value should be in the range of 1-100; it is taken into account while creating the thinpool for the provisioned volume. This is an optional parameter with a default value of 1.

  • customepnameprefix : By default, dynamically provisioned volumes have an endpoint and service created with the naming schema glusterfs-dynamic-<PVC UUID>. With this option present in the storageclass, an admin can prefix the endpoint with a desired string. If the customepnameprefix storageclass parameter is set, the dynamically provisioned volumes will have an endpoint and service created in the following format, where - is the field separator/delimiter: customepnameprefix-<PVC UUID>

Reference: How to configure Gluster on Kubernetes

Reference: How to configure Heketi

When persistent volumes are dynamically provisioned, the Gluster plugin automatically creates an endpoint and a headless service named glusterfs-dynamic-<claimname>. This dynamic endpoint and service will be deleted automatically when the persistent volume claim is deleted.
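
As a complement to the secretName / secretNamespace parameters above, a minimal sketch of what glusterfs-secret.yaml might contain is shown below (the base64-encoded key is a placeholder, not a real password; the secret must have type "kubernetes.io/glusterfs"):

apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
type: "kubernetes.io/glusterfs"
data:
  # base64-encoded Heketi REST user password, e.g. produced with: echo -n mypassword | base64
  key: bXlwYXNzd29yZA==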

OpenStack Cinder

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  type: fast
  availability: nova
  fsType: ext4

  • type: VolumeType created in Cinder. Default is empty.
  • availability: Availability Zone. Default is empty.
  • fsType: a filesystem type supported by Kubernetes. Default: "ext4".

Ceph RBD

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: fast
  provisioner: kubernetes.io/rbd
  parameters:
    monitors: 10.16.153.105:6789
    adminId: kube
    adminSecretName: ceph-secret
    adminSecretNamespace: kube-system
    pool: kube
    userId: kube
    userSecretName: ceph-secret-user
    fsType: ext4
    imageFormat: "2"

  • monitors: Ceph monitors, comma delimited. It is required.
  • adminId: Ceph client ID that is capable of creating images in the pool. Default is "admin".
  • adminSecretName: Secret Name for adminId. It is required. The provided secret must have type "kubernetes.io/rbd".
  • adminSecretNamespace: The namespace for adminSecret. Default is "default".
  • pool: Ceph RBD pool. Default is "rbd".
  • userId: Ceph client ID that is used to map the RBD image. Default is the same as adminId.
  • userSecretName: The name of the Ceph Secret for userId to map the RBD image. It must exist in the same namespace as PVCs. It is required.
  • fsType: a filesystem type supported by Kubernetes. Default: "ext4".
  • imageFormat: Ceph RBD image format, "1" or "2". Default is "2".
  • imageFeatures: Ceph RBD image format 2 features, comma delimited. This is optional and is only used if you set imageFormat to "2". Currently, layering is the only supported feature. Default is "", and no features are turned on.

NOTE: We cannot turn on the exclusive-lock feature for now (nor object-map, fast-diff and journaling, which require exclusive-lock), because exclusive lock and advisory lock cannot work together. (See #45805)

Quobyte

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: slow
provisioner: kubernetes.io/quobyte
parameters:
    quobyteAPIServer: "http://138.68.74.142:7860"
    registry: "138.68.74.142:7861"
    adminSecretName: "quobyte-admin-secret"
    adminSecretNamespace: "kube-system"
    user: "root"
    group: "root"
    quobyteConfig: "BASE"
    quobyteTenant: "DEFAULT"

Download example

  • quobyteAPIServer API Server of Quobyte in the format http(s)://api-server:7860
  • registry Quobyte registry to use to mount the volume. You can specify the registry as a host:port pair, or if you want to specify multiple registries, just put a comma between them, e.g. host1:port,host2:port,host3:port. The host can be an IP address, or if you have a working DNS you can also provide the DNS names.
  • adminSecretName secret that holds information about the Quobyte user and the password to authenticate against the API server. The provided secret must have type "kubernetes.io/quobyte".
  • adminSecretNamespace The namespace for adminSecretName. Default is default.
  • user maps all access to this user. Default is root.
  • group maps all access to this group. Default is nfsnobody.
  • quobyteConfig use the specified configuration to create the volume. You can create a new configuration or modify an existing one with the Web console or the quobyte CLI. Default is BASE
  • quobyteTenant use the specified tenant ID to create/delete the volume. This Quobyte tenant has to be already present in Quobyte. For Quobyte < 1.4 use an empty string "" as DEFAULT tenant. Default is DEFAULT
  • createQuota if set, all volumes created by this storage class will get a quota for the specified size. The quota is set for the logical disk size (which can differ from the physical size, e.g. if replication is used). Default is False

First create Quobyte admin's Secret in the system namespace. Here the Secret is created in kube-system:

$ kubectl create -f examples/persistent-volume-provisioning/quobyte/quobyte-admin-secret.yaml --namespace=kube-system
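
The contents of quobyte-admin-secret.yaml are not reproduced here; a minimal sketch, assuming the user/password key layout used by the Quobyte plugin (the base64-encoded values are placeholders), might look like:

apiVersion: v1
kind: Secret
metadata:
  name: quobyte-admin-secret
  namespace: kube-system
type: "kubernetes.io/quobyte"
data:
  # base64-encoded Quobyte API user and password (placeholder values)
  user: YWRtaW4=
  password: cXVvYnl0ZQ==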

Then create the Quobyte storage class:

$ kubectl create -f examples/persistent-volume-provisioning/quobyte/quobyte-storage-class.yaml

Now create a PVC:

$ kubectl create -f examples/persistent-volume-provisioning/claim1.json

Check the created PVC:

$ kubectl describe pvc
Name:       claim1
Namespace:      default
Status:     Bound
Volume:     pvc-bdb82652-694a-11e6-b811-080027242396
Labels:     <none>
Capacity:       3Gi
Access Modes:   RWO
No events.

$ kubectl describe pv
Name:       pvc-bdb82652-694a-11e6-b811-080027242396
Labels:		<none>
Status:		Bound
Claim:      default/claim1
Reclaim Policy:	Delete
Access Modes:   RWO
Capacity:       3Gi
Message:
Source:
    Type:       Quobyte (a Quobyte mount on the host that shares a pod's lifetime)
    Registry:   138.68.79.14:7861
    Volume:     kubernetes-dynamic-pvc-bdb97c58-694a-11e6-91b6-080027242396
    ReadOnly:   false
No events.

Create a Pod to use the PVC:

$ kubectl create -f examples/persistent-volume-provisioning/quobyte/example-pod.yaml

Azure Disk

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
  location: eastus
  storageAccount: azure_storage_account_name
  fsType: ext4

  • skuName: Azure storage account SKU tier. Default is empty.
  • location: Azure storage account location. Default is empty.
  • storageAccount: Azure storage account name. If a storage account is not provided, all storage accounts associated with the resource group are searched to find one that matches skuName and location. If a storage account is provided, it must reside in the same resource group as the cluster, and skuName and location are ignored.
  • fsType: a filesystem type supported by Kubernetes. Default: "ext4".

Azure File

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
  location: eastus
  storageAccount: azure_storage_account_name

The parameters are the same as those used by Azure Disk.

User provisioning requests

Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim using the spec.storageClassName attribute. This value must match the name of a StorageClass configured by the administrator.

{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "claim1"
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "3Gi"
      }
    },
    "storageClassName": "slow"
  }
}
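
For reference, the same claim expressed as YAML (an equivalent rendering of the JSON above, not a separate file in this example) would be:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: slow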

Sample output

GCE

This example uses GCE but any provisioner would follow the same flow.

First we note there are no Persistent Volumes in the cluster. After creating a storage class and a claim including that storage class, we see a new PV is created and automatically bound to the claim requesting storage.

$ kubectl get pv

$ kubectl create -f examples/persistent-volume-provisioning/gce-pd.yaml
storageclass "slow" created

$ kubectl create -f examples/persistent-volume-provisioning/claim1.json
persistentvolumeclaim "claim1" created

$ kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   STATUS    CLAIM                        REASON    AGE
pvc-bb6d2f0c-534c-11e6-9348-42010af00002   3Gi        RWO           Bound     default/claim1                         4s

$ kubectl get pvc
NAME      LABELS    STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
claim1    <none>    Bound     pvc-bb6d2f0c-534c-11e6-9348-42010af00002   3Gi        RWO           7s

# delete the claim to release the volume
$ kubectl delete pvc claim1
persistentvolumeclaim "claim1" deleted

# the volume is deleted in response to the release of its claim
$ kubectl get pv

Ceph RBD

This section will guide you on how to configure and use the Ceph RBD provisioner.

Pre-requisites

For this to work you must have a functional Ceph cluster, and the rbd command line utility must be installed on any host/container that kube-controller-manager or kubelet is running on.

Configuration

First we must identify the Ceph client admin key. This is usually found in /etc/ceph/ceph.client.admin.keyring on your Ceph cluster nodes. The file will look something like this:

[client.admin]
  key = AQBfxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
  auid = 0
  caps mds = "allow"
  caps mon = "allow *"
  caps osd = "allow *"

From the key value, we will create a secret. We must create the Ceph admin Secret in the namespace defined in our StorageClass. In this example we've set the namespace to kube-system.

$ kubectl create secret generic ceph-secret-admin --from-literal=key='AQBfxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==' --namespace=kube-system --type=kubernetes.io/rbd

Now modify examples/persistent-volume-provisioning/rbd/rbd-storage-class.yaml to reflect your environment, particularly the monitors field.
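
For example, with three monitors the relevant line might look like this (the addresses are placeholders for your own monitor endpoints):

    monitors: 10.16.153.105:6789,10.16.153.106:6789,10.16.153.107:6789

We are now ready to create our RBD Storage Class: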

$ kubectl create -f examples/persistent-volume-provisioning/rbd/rbd-storage-class.yaml

The kube-controller-manager is now able to provision storage; however, we still need to be able to map the RBD volume to a node. Mapping should be done with a non-privileged key; if you have existing users, you can get all keys by running ceph auth list on your Ceph cluster with the admin key. For this example we will create a new user and pool.

$ ceph osd pool create kube 512
$ ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'
[client.kube]
    key = AQBQyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy==

This key will be made into a secret, just like the admin secret. However, this user secret will need to be created in every namespace where you intend to consume RBD volumes provisioned in our example storage class. Let's create a namespace called myns, and create the user secret in that namespace.

kubectl create namespace myns
kubectl create secret generic ceph-secret-user --from-literal=key='AQBQyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy==' --namespace=myns --type=kubernetes.io/rbd

You are now ready to provision and use RBD storage.

Usage

With the storageclass configured, let's create a PVC in our example namespace, myns:

$ kubectl create -f examples/persistent-volume-provisioning/claim1.json --namespace=myns

Eventually, the PVC creation will result in a matching PV and RBD volume:

$ kubectl describe pvc --namespace=myns
Name:		claim1
Namespace:	myns
Status:		Bound
Volume:		pvc-1cfa23b3-664b-11e6-9eb9-90b11c09520d
Labels:		<none>
Capacity:	3Gi
Access Modes:	RWO
No events.

$ kubectl describe pv
Name:		pvc-1cfa23b3-664b-11e6-9eb9-90b11c09520d
Labels:		<none>
Status:		Bound
Claim:		myns/claim1
Reclaim Policy:	Delete
Access Modes:	RWO
Capacity:	3Gi
Message:
Source:
    Type:		RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
    CephMonitors:	[127.0.0.1:6789]
    RBDImage:		kubernetes-dynamic-pvc-1cfb1862-664b-11e6-9a5d-90b11c09520d
    FSType:
    RBDPool:		kube
    RadosUser:		kube
    Keyring:		/etc/ceph/keyring
    SecretRef:		&{ceph-secret-user}
    ReadOnly:		false
No events.

With our storage provisioned, we can now create a Pod to use the PVC:

$ kubectl create -f examples/persistent-volume-provisioning/rbd/pod.yaml --namespace=myns
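
The contents of rbd/pod.yaml are not reproduced here; a minimal sketch of a pod that mounts the claim, consistent with the role=server selector and mount path shown in the output below (the container image is illustrative), might look like:

apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    role: server
spec:
  containers:
  - name: server
    image: nginx
    volumeMounts:
    - name: rbd-vol
      mountPath: /var/lib/www/html
  volumes:
  - name: rbd-vol
    persistentVolumeClaim:
      claimName: claim1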

Now our pod has an RBD mount!

$ export PODNAME=`kubectl get pod --selector='role=server' --namespace=myns --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}"`
$ kubectl exec -it $PODNAME --namespace=myns -- df -h | grep rbd
/dev/rbd1       2.9G  4.5M  2.8G   1% /var/lib/www/html
