
This document uses the official ceph-csi driver for persistent storage. ceph-csi supports both RBD and CephFS; only the RBD approach is covered here.

k8s:v1.18.3

ceph:14.2.19 nautilus (stable)

ceph-csi:v3.3.1

Deployment

  • Clone the code

    $ git clone https://github.com/ceph/ceph-csi.git
    $ cd ceph-csi
    $ git checkout v3.3.1
    $ cd deploy/rbd/kubernetes
    
  • Edit the YAML files

    • csi-config-map.yaml

      ---
      apiVersion: v1
      kind: ConfigMap
      data:
        config.json: |-
          [
            {
              "clusterID": "71057bd8-9a18-42ea-95e0-7596901370fe", # 此内容可以使用ceph mon dump来查看,clusterID对应fsid
              "monitors": [
                "172.16.77.67:6789"
              ]
            }
          ]
      metadata:
        name: ceph-csi-config
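
Note that config.json is parsed as strict JSON, which has no comment syntax, so any inline note on the clusterID line must be removed before applying. A minimal sanity check of the embedded JSON, assuming python3 is on the PATH:

```shell
# Write the config.json payload (comments stripped) to a temp file and parse it
cat > /tmp/csi-config.json <<'EOF'
[
  {
    "clusterID": "71057bd8-9a18-42ea-95e0-7596901370fe",
    "monitors": [
      "172.16.77.67:6789"
    ]
  }
]
EOF
# Prints the clusterID if the JSON is well-formed, errors out otherwise
python3 -c 'import json; print(json.load(open("/tmp/csi-config.json"))[0]["clusterID"])'
```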
      
    • Change the namespace in csi-provisioner-rbac.yaml and csi-nodeplugin-rbac.yaml to ceph

    • Edit csi-rbdplugin-provisioner.yaml and csi-rbdplugin.yaml, commenting out the kms-config settings

      # Note: no KMS ConfigMap is created here, so the kms-config sections are commented out.
      # To use a KMS ConfigMap instead, see kms-config.yaml under ceph-csi/examples/kms/vault.
      $ vim csi-rbdplugin-provisioner.yaml  # comment out the same sections in csi-rbdplugin.yaml as well
      ......  # comment out the following parts
                 # - name: ceph-csi-encryption-kms-config
                 #   mountPath: /etc/ceph-csi-encryption-kms-config/
      ......
             # - name: ceph-csi-encryption-kms-config
             #   configMap:
             #     name: ceph-csi-encryption-kms-config
      ......
      
  • Deploy the RBD CSI driver

    This cluster does not have the PodSecurityPolicy admission controller enabled, so the two psp YAML files are not deployed.

    # Create the ceph namespace; everything Ceph-related is deployed into it
    $ kubectl create ns ceph
    $ kubectl apply -f csi-config-map.yaml -n ceph
    $ kubectl create -f csi-provisioner-rbac.yaml -n ceph
    $ kubectl create -f csi-nodeplugin-rbac.yaml -n ceph
    $ kubectl create -f csi-rbdplugin-provisioner.yaml -n ceph
    $ kubectl create -f csi-rbdplugin.yaml -n ceph
    
  • Check the status

    $ kubectl get pods -n ceph  # single-node cluster, so csi-rbdplugin-provisioner runs with one replica
    NAME                                        READY   STATUS    RESTARTS   AGE
    csi-rbdplugin-r2z22                         3/3     Running   0          21s
    csi-rbdplugin-provisioner-db7b756df-htlrd   7/7     Running   0          8s
    

Verification

Create the resources and configuration on the Ceph cluster

  • Create the storage pool and set pg_num/pgp_num

    $ ceph osd pool create k8s 32 32
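
The two 32s are pg_num and pgp_num. A common rule of thumb (general Ceph sizing guidance, not from this document) is to target roughly 100 PGs per OSD divided by the pool's replica count, rounded to the nearest power of two; for a single-node cluster with the default pool size of 3 this works out to 32:

```shell
python3 - <<'EOF'
import math
osds, replicas = 1, 3                   # hypothetical single-node cluster, default pool size
target = osds * 100 / replicas          # ~33 placement groups
pg_num = 2 ** round(math.log2(target))  # nearest power of two
print(pg_num)
EOF
```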
    
  • Initialize the pool

    $ rbd pool init k8s
    
  • Create a dedicated k8s user (optional; this document keeps using the admin user)

    $ ceph auth get-or-create client.k8s mon 'profile rbd' osd 'profile rbd pool=k8s' mgr 'profile rbd pool=k8s'
    

Verification on the Kubernetes cluster

  • Create the secret

    $ cd ceph-csi/examples/rbd
    $ vim secret.yaml
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: csi-rbd-secret
      namespace: ceph
    stringData:
      # Key values correspond to a user name and its key, as defined in the
      # ceph cluster. User ID should have required access to the 'pool'
      # specified in the storage class
      userID: admin
      userKey: AQBgCIpg8OC9LRAAcl8XOfU9/71WiZNLGgnjgA==
      
      # Encryption passphrase
      encryptionPassphrase: test_passphrase
    $ kubectl apply -f secret.yaml
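
Because the Secret uses stringData, userID and userKey stay in plain text; if you switch to the data field instead, each value must be base64-encoded first, e.g.:

```shell
# Equivalent base64 value for a "data:" style Secret (stringData does this for you)
key='AQBgCIpg8OC9LRAAcl8XOfU9/71WiZNLGgnjgA=='
printf '%s' "$key" | base64
```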
    
  • Create storageclass.yaml

    $ vim storageclass.yaml
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
       name: csi-rbd-sc
    provisioner: rbd.csi.ceph.com
    # If topology based provisioning is desired, delayed provisioning of
    # PV is required and is enabled using the following attribute
    # For further information read TODO<doc>
    # volumeBindingMode: WaitForFirstConsumer
    parameters:
       # (required) String representing a Ceph cluster to provision storage from.
       # Should be unique across all Ceph clusters in use for provisioning,
       # cannot be greater than 36 bytes in length, and should remain immutable for
       # the lifetime of the StorageClass in use.
       # Ensure to create an entry in the configmap named ceph-csi-config, based on
       # csi-config-map-sample.yaml, to accompany the string chosen to
       # represent the Ceph cluster in clusterID below
   clusterID: 71057bd8-9a18-42ea-95e0-7596901370fe  # the same clusterID as in the ConfigMap above
      
       # (optional) If you want to use erasure coded pool with RBD, you need to
       # create two pools. one erasure coded and one replicated.
       # You need to specify the replicated pool here in the `pool` parameter, it is
       # used for the metadata of the images.
       # The erasure coded pool must be set as the `dataPool` parameter below.
       # dataPool: <ec-data-pool>
      
       # (required) Ceph pool into which the RBD image shall be created
       # eg: pool: rbdpool
   pool: k8s  # the pool created above
      
       # Set thickProvision to true if you want RBD images to be fully allocated on
       # creation (thin provisioning is the default).
       thickProvision: "false"
       # (required) RBD image features, CSI creates image with image-format 2
       # CSI RBD currently supports `layering`, `journaling`, `exclusive-lock`
       # features. If `journaling` is enabled, must enable `exclusive-lock` too.
       # imageFeatures: layering,journaling,exclusive-lock
       imageFeatures: layering
      
       # (optional) mapOptions is a comma-separated list of map options.
       # For krbd options refer
       # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
       # For nbd options refer
       # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
       # mapOptions: lock_on_read,queue_depth=1024
      
       # (optional) unmapOptions is a comma-separated list of unmap options.
       # For krbd options refer
       # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
       # For nbd options refer
       # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
       # unmapOptions: force
      
       # The secrets have to contain Ceph credentials with required access
       # to the 'pool'.
       csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
       csi.storage.k8s.io/provisioner-secret-namespace: ceph
       csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
       csi.storage.k8s.io/controller-expand-secret-namespace: ceph
       csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
       csi.storage.k8s.io/node-stage-secret-namespace: ceph
      
       # (optional) Specify the filesystem type of the volume. If not specified,
       # csi-provisioner will set default as `ext4`.
       csi.storage.k8s.io/fstype: ext4
      
       # (optional) uncomment the following to use rbd-nbd as mounter
       # on supported nodes
       # mounter: rbd-nbd
      
       # (optional) Prefix to use for naming RBD images.
       # If omitted, defaults to "csi-vol-".
       # volumeNamePrefix: "foo-bar-"
      
       # (optional) Instruct the plugin it has to encrypt the volume
       # By default it is disabled. Valid values are "true" or "false".
       # A string is expected here, i.e. "true", not true.
       # encrypted: "true"
      
       # (optional) Use external key management system for encryption passphrases by
       # specifying a unique ID matching KMS ConfigMap. The ID is only used for
       # correlation to configmap entry.
       # encryptionKMSID: <kms-config-id>
      
       # Add topology constrained pools configuration, if topology based pools
       # are setup, and topology constrained provisioning is required.
       # For further information read TODO<doc>
       # topologyConstrainedPools: |
       #   [{"poolName":"pool0",
       #     "dataPool":"ec-pool0" # optional, erasure-coded pool for data
       #     "domainSegments":[
       #       {"domainLabel":"region","value":"east"},
       #       {"domainLabel":"zone","value":"zone1"}]},
       #    {"poolName":"pool1",
       #     "dataPool":"ec-pool1" # optional, erasure-coded pool for data
       #     "domainSegments":[
       #       {"domainLabel":"region","value":"east"},
       #       {"domainLabel":"zone","value":"zone2"}]},
       #    {"poolName":"pool2",
       #     "dataPool":"ec-pool2" # optional, erasure-coded pool for data
       #     "domainSegments":[
       #       {"domainLabel":"region","value":"west"},
       #       {"domainLabel":"zone","value":"zone1"}]}
       #   ]
      
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    mountOptions:
       - discard
         
         
    $ kubectl apply -f storageclass.yaml
    $ kubectl get sc
    NAME         PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    csi-rbd-sc   rbd.csi.ceph.com   Delete          Immediate           true                   4s
      
    
  • Create the PVC

    $ kubectl apply -f pvc.yaml
    $ kubectl get pvc
    NAME                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
    rbd-pvc                        Bound    pvc-31d6b34a-65f8-4f51-983f-b12c62807084   1Gi        RWO            csi-rbd-sc      18s
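
For reference, pvc.yaml under ceph-csi/examples/rbd looks roughly like the following (a sketch based on the v3.3.1 examples; verify against your checkout, in particular the storageClassName):

```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
```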
    
  • Create the pod

    $ kubectl apply -f pod.yaml
    $ kubectl get pods
    NAME                  READY   STATUS    RESTARTS   AGE
    csi-rbd-demo-pod      1/1     Running   0          28s
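
Likewise, pod.yaml from ceph-csi/examples/rbd mounts the PVC into an nginx container, roughly as follows (again a sketch; verify against your checkout):

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc
        readOnly: false
```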
    

At this point, the verification is complete.

References

https://github.com/ceph/ceph-csi/blob/devel/docs/deploy-rbd.md

https://github.com/ceph/ceph-csi/tree/devel/examples