Our company's new Kubernetes cluster uses Ceph for persistent storage; this document records how it was set up.

Kubernetes version: v1.14.3


Background

CephFS is the POSIX-compatible file system provided by Ceph. Of Ceph's interfaces (alongside RBD and RGW) it was the last to become production-ready, and under the hood it still stores its data in RADOS.

In this setup Ceph provides the underlying storage for Kubernetes. CephFS supports all three PV access modes (ReadWriteOnce, ReadOnlyMany, ReadWriteMany), while RBD supports only ReadWriteOnce and ReadOnlyMany.

ReadWriteOnce – the volume can be mounted read-write by a single node

ReadOnlyMany – the volume can be mounted read-only by many nodes

ReadWriteMany – the volume can be mounted read-write by many nodes

On the command line, the access modes are abbreviated as:

RWO - ReadWriteOnce
ROX - ReadOnlyMany
RWX - ReadWriteMany
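
For example, the mode is declared in the spec.accessModes field of a PV or PVC:

spec:
  accessModes:
    - ReadWriteMany    # RWX: read-write on many nodes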

There are three main ways for Kubernetes to persist data on CephFS:

  • Mount a cephfs-type volume directly in the pod spec (Kubernetes supports this natively; a sketch follows this list).
  • Create a PV and PVC by hand and mount the claim into the pod.
  • Use the community-maintained cephfs-provisioner to provision PVs dynamically through a StorageClass; this is the approach used below.
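
For reference, the first approach mounts a cephfs volume straight from the pod spec using the in-tree volume plugin. The sketch below only illustrates the shape of that spec: the pod name is made up, and the monitor addresses and secret name are placeholders that must match your own cluster. The rest of this document uses the third, StorageClass-based approach.

apiVersion: v1
kind: Pod
metadata:
  name: cephfs-direct-demo            # hypothetical pod, only to show the volume type
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /data
      name: cephfs-vol
  volumes:
  - name: cephfs-vol
    cephfs:                            # in-tree cephfs volume plugin
      monitors:
      - 172.16.7.xxx:6789              # placeholder, use your own mon addresses
      user: admin
      secretRef:
        name: ceph-secret-admin        # the secret created later in this document
      readOnly: false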

Configuring CephFS storage in Kubernetes

Generate the Ceph secret

  • Method 1
$ ceph auth get-key client.admin > /tmp/secret
$ cat /tmp/secret
AQAbKRxeBG6MEhAABKES4f8UqzEDCdPE6wXiWg==
  • Method 2
$ ceph auth get-key client.admin |base64
QVFBYktSeGVCRzZNRWhBQUJLRVM0ZjhVcXpFRENkUEU2d1hpV2c9PQ==

Create the ceph-secret in Kubernetes

  • Method 1
$ kubectl create ns storage
$ kubectl create secret generic ceph-secret-admin --from-file=/tmp/secret --namespace=storage
  • Method 2
$ kubectl create ns storage
$ vim ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
  namespace: storage
data:
  key: QVFCSmQ3ZGFyMTUvSGhBQXF2VVAySU5pSmhmQTZ1SjVBUTkxNFE9PQo=   # base64-encoded admin key (output of `ceph auth get-key client.admin | base64`)
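
If you used method 2, apply the manifest, then confirm the secret exists in the storage namespace:

$ kubectl apply -f ceph-secret.yaml
$ kubectl get secret ceph-secret-admin -n storage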

Deploy the cephfs-provisioner

  • clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: storage
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
  • clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  • deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
      - name: cephfs-provisioner
        image: "quay.io/external_storage/cephfs-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/cephfs     # must match the provisioner field of the StorageClass below
        - name: PROVISIONER_SECRET_NAMESPACE
          value: storage
        command:
        - "/usr/local/bin/cephfs-provisioner"
        args:
        - "-id=cephfs-provisioner-1"
      serviceAccountName: cephfs-provisioner
  • rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: storage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
- kind: ServiceAccount
  name: cephfs-provisioner
  namespace: storage

  • role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: storage
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  • serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: storage

Run the following commands to deploy everything:

$ kubectl apply -f clusterrolebinding.yaml
$ kubectl apply -f clusterrole.yaml
$ kubectl apply -f deployment.yaml
$ kubectl apply -f rolebinding.yaml
$ kubectl apply -f role.yaml
$ kubectl apply -f serviceaccount.yaml
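
Before moving on, check that the provisioner pod is running, and look at its logs if it is not:

$ kubectl -n storage get pods -l app=cephfs-provisioner
$ kubectl -n storage logs deploy/cephfs-provisioner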

Testing

  • storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: bl-test
provisioner: ceph.com/cephfs
parameters:
    monitors: 172.16.7.xxx:6789,172.16.7.xxx:6789,172.16.7.xxx:6789
    adminId: admin
    adminSecretName: ceph-secret-admin
    adminSecretNamespace: "storage"
    claimRoot: /volumes/kubernetes/test/bl-test
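
Apply the StorageClass and confirm it is registered (bl-test is just the name used in this example):

$ kubectl apply -f storageclass.yaml
$ kubectl get storageclass bl-test
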
  • pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: bl-raptor-151
  annotations:
    volume.beta.kubernetes.io/storage-class: "bl-test"
spec:
  accessModes: [ "ReadWriteMany" ]
  resources:
    requests:
      storage: 2Gi
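
Apply the claim and wait for it to be provisioned; the STATUS column should change from Pending to Bound once the provisioner has created the backing PV:

$ kubectl apply -f pvc.yaml
$ kubectl get pvc bl-raptor-151
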
  • deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: raptor
spec:
  selector:
    matchLabels:
      app: raptor
  replicas: 2
  minReadySeconds: 80  # a new pod must stay ready for 80s before it counts as available during a rolling update
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1  # at most 1 extra pod is started during a rolling update
      maxUnavailable: 1 # at most 1 pod may be unavailable during a rolling update
  template:
    metadata:
      labels:
        app: raptor
    spec:
      terminationGracePeriodSeconds: 3  # seconds allowed for graceful shutdown
      hostAliases:
      - ip: "172.16.xxx.xxx"
        hostnames:
        - "xxxxx.xxxxx.com"
      - ip: "172.16.xx.xxx"
        hostnames:
        - "xxxxx.xxxxx.com"
      containers:
      - name: raptor
        image: 172.16.77.53:30882/bl-raptor-jdk151:4833
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          name: raptor
        command: ["/jetty/docker-entrypoint.sh"]
        env:
        - name: JETTY_HOME
          value: "/jetty"
        volumeMounts:
          - mountPath: "/jetty/work/jetty-0_0_0_0-8080-ROOT_war-_-any-/webapp/tmp"
            name: data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: bl-raptor-151
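
Finally, apply the test deployment and check from inside a pod that the CephFS volume is mounted at the path given in volumeMounts (the pod name below is a placeholder; take the real one from kubectl get pods):

$ kubectl apply -f deployment.yaml
$ kubectl get pods -l app=raptor
$ kubectl exec -it <raptor-pod-name> -- df -h /jetty/work/jetty-0_0_0_0-8080-ROOT_war-_-any-/webapp/tmp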