
An earlier post used the community-provided cephfs provisioner to dynamically provision PVs. This post uses the official ceph-csi driver for persistence instead; ceph-csi supports both RBD and CephFS modes. Only the CephFS mode is covered here.

k8s: v1.18.3

ceph: 14.2.19 nautilus (stable)

ceph-csi: v3.3.1

Deployment

  • Clone the code
$ git clone https://github.com/ceph/ceph-csi.git
$ cd ceph-csi
$ git checkout v3.3.1
$ cd deploy/cephfs/kubernetes
  • Edit the yaml files

    • csi-config-map.yaml

      $ vim csi-config-map.yaml
      ---
      apiVersion: v1
      kind: ConfigMap
      data:
        # clusterID is the cluster fsid; look it up with "ceph mon dump" or "ceph fsid".
        # Keep comments out of config.json itself -- it must stay valid JSON.
        config.json: |-
          [
            {
              "clusterID": "71057bd8-9a18-42ea-95e0-7596901370fe",
              "monitors": [
                "172.16.77.67:6789"
              ]
            }
          ]
      metadata:
        name: ceph-csi-config
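Since config.json must be valid JSON, and the clusterID is reused later in the StorageClass (where ceph-csi requires it to be at most 36 bytes; a Ceph fsid is a UUID, which is exactly 36 characters), a quick sanity check can catch mistakes before applying the ConfigMap. A minimal sketch (the helper name and checks are my own, not part of ceph-csi):

```python
import json
import uuid

def check_csi_config(config_text: str) -> bool:
    """Validate the config.json payload of the ceph-csi-config ConfigMap."""
    # json.loads raises ValueError if stray comments break the JSON
    clusters = json.loads(config_text)
    for cluster in clusters:
        cluster_id = cluster["clusterID"]
        # ceph-csi requires clusterID to be at most 36 bytes
        assert len(cluster_id.encode()) <= 36, "clusterID longer than 36 bytes"
        # a Ceph fsid is a UUID; raises ValueError if malformed
        uuid.UUID(cluster_id)
        assert cluster["monitors"], "at least one monitor address is required"
    return True

config_json = '''
[
  {
    "clusterID": "71057bd8-9a18-42ea-95e0-7596901370fe",
    "monitors": ["172.16.77.67:6789"]
  }
]
'''
print(check_csi_config(config_json))  # → True
```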
      
    • Change the namespace inside csi-provisioner-rbac.yaml and csi-nodeplugin-rbac.yaml to ceph

  • 部署cephfs csi

    The cluster here does not have the PodSecurityPolicy admission controller enabled, so the two psp yaml files are not deployed.

    # Create the ceph namespace; everything Ceph-related is deployed into it
    $ kubectl create ns ceph
    $ kubectl apply -f csi-config-map.yaml -n ceph
    $ kubectl create -f csi-provisioner-rbac.yaml -n ceph
    $ kubectl create -f csi-nodeplugin-rbac.yaml -n ceph
    $ kubectl create -f csi-cephfsplugin-provisioner.yaml -n ceph
    $ kubectl create -f csi-cephfsplugin.yaml -n ceph
    
  • Check the status

    $ kubectl get pods -n ceph  # single-node cluster here, so csi-cephfsplugin-provisioner is scaled to one replica
    NAME                                            READY   STATUS    RESTARTS   AGE
    csi-cephfsplugin-p8qfk                          3/3     Running   0          21s
    csi-cephfsplugin-provisioner-5f9c4db495-qsq8w   6/6     Running   0          8s
    

Verification

Note: the Ceph cluster here already has a storage pool and filesystem created, so they are not created again.

$ ceph fs ls # list the filesystems
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
  • Create the secret

    $ cd ceph-csi/examples/cephfs
    $ vim secret.yaml
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: csi-cephfs-secret
      namespace: ceph
    stringData:
      # Look the key up with "ceph auth get client.admin"
      # Required for statically provisioned volumes
      userID: admin              
      userKey: AQBgCIpg8OC9LRAAcl8XOfU9/71WiZNLGgnjgA==
      
      # Required for dynamically provisioned volumes
      adminID: admin
      adminKey: AQBgCIpg8OC9LRAAcl8XOfU9/71WiZNLGgnjgA==
        
    $ kubectl apply -f secret.yaml
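In the Secret above, stringData holds the keys in plain text; on admission the API server base64-encodes them into the data field, which is what you see in kubectl get secret output. A small sketch of that standard Kubernetes behavior, using the userKey from the manifest:

```python
import base64

user_key = "AQBgCIpg8OC9LRAAcl8XOfU9/71WiZNLGgnjgA=="

# What the API server stores under data.userKey for the stringData above
encoded = base64.b64encode(user_key.encode()).decode()
print(encoded)

# Decoding round-trips back to the original plain-text key
assert base64.b64decode(encoded).decode() == user_key
```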
    
  • Create storageclass.yaml

    $ vim storageclass.yaml
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-cephfs-sc
    provisioner: cephfs.csi.ceph.com
    parameters:
      # (required) String representing a Ceph cluster to provision storage from.
      # Should be unique across all Ceph clusters in use for provisioning,
      # cannot be greater than 36 bytes in length, and should remain immutable for
      # the lifetime of the StorageClass in use.
      # Ensure to create an entry in the configmap named ceph-csi-config, based on
      # csi-config-map-sample.yaml, to accompany the string chosen to
      # represent the Ceph cluster in clusterID below
      clusterID: 71057bd8-9a18-42ea-95e0-7596901370fe  # the clusterID from the ConfigMap above
      
      # (required) CephFS filesystem name into which the volume shall be created
      # eg: fsName: myfs
      fsName: cephfs # the filesystem shown by "ceph fs ls" above
      
      # (optional) Ceph pool into which volume data shall be stored
      # pool: <cephfs-data-pool>
      
      # (optional) Comma separated string of Ceph-fuse mount options.
      # For eg:
      # fuseMountOptions: debug
      
      # (optional) Comma separated string of Cephfs kernel mount options.
      # Check man mount.ceph for mount options. For eg:
      # kernelMountOptions: readdir_max_bytes=1048576,norbytes
      
      # The secrets have to contain user and/or Ceph admin credentials.
      # Note: all the secret namespaces below are changed to ceph
      csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
      csi.storage.k8s.io/provisioner-secret-namespace: ceph 
      csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
      csi.storage.k8s.io/controller-expand-secret-namespace: ceph
      csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
      csi.storage.k8s.io/node-stage-secret-namespace: ceph
      
      # (optional) The driver can use either ceph-fuse (fuse) or
      # ceph kernelclient (kernel).
      # If omitted, default volume mounter will be used - this is
      # determined by probing for ceph-fuse and mount.ceph
      # mounter: kernel
      
      # (optional) Prefix to use for naming subvolumes.
      # If omitted, defaults to "csi-vol-".
      # volumeNamePrefix: "foo-bar-"
      
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    mountOptions:
      - discard
        
    $ kubectl apply -f storageclass.yaml
    $ kubectl get sc
    NAME                    PROVISIONER           RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    csi-cephfs-sc           cephfs.csi.ceph.com   Delete          Immediate           true                   4s
    
  • Create the PVC

    $ kubectl apply -f pvc.yaml
    $ kubectl get pvc
    NAME                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
    csi-cephfs-pvc                 Bound    pvc-5c5fb10a-c8db-48da-b71c-b1cefc9ebb6e   1Gi        RWX            csi-cephfs-sc   18s
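    The pvc.yaml applied here is the one from ceph-csi/examples/cephfs; a sketch consistent with the output above (name, 1Gi size, and RWX access mode taken from that output, the rest assumed from the example):

    ```yaml
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-cephfs-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      storageClassName: csi-cephfs-sc
    ```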
    
  • Create the pod

    $ kubectl apply -f pod.yaml
    $ kubectl get pods
    NAME                  READY   STATUS    RESTARTS   AGE
    csi-cephfs-demo-pod   1/1     Running   0          30s
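    Similarly, pod.yaml is the demo pod from ceph-csi/examples/cephfs, which mounts the PVC created above; roughly (container image and mount path assumed from that example):

    ```yaml
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: csi-cephfs-demo-pod
    spec:
      containers:
        - name: web-server
          image: nginx
          volumeMounts:
            - name: mypvc
              mountPath: /var/lib/www
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: csi-cephfs-pvc
            readOnly: false
    ```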
    

With that, the verification is complete. The official docs also cover PVC snapshots, but that seems to require a Ceph cluster at Octopus (O) or later and appears to have kernel requirements on the hosts as well; something to look into later.

References

https://github.com/ceph/ceph-csi/blob/devel/docs/deploy-cephfs.md

https://github.com/ceph/ceph-csi/blob/devel/examples/README.md#deploying-the-storage-class