Kasten Elite Bootcamp Courseware Upgrade: Adding CSI Snapshot-Based Backup Operations

Preface

Innovation and iteration are among Veeam's core values. We recently upgraded the courseware for the Kasten Elite Bootcamp; the new edition includes the following updates:

  • Upgraded Ubuntu to 22.04, while continuing to complete the labs on a single VM with 4 vCPUs, 8 GB RAM, and 200 GB of disk
  • Switched to the external HostPath CSI driver and external-snapshotter
  • Upgraded Kasten K10 to 5.0.11
  • Used Kasten to drive backups through the HostPath CSI driver and volume snapshots

In This Article

[toc]

Author's Notes: News from KubeCon

KubeCon + CloudNativeCon 2022 recently took place in Detroit, Michigan. As a diamond sponsor, Kasten by Veeam announced the all-new Kasten by Veeam K10 V5.5 at KubeCon, which includes a series of new features.


New features in Kasten K10 V5.5 include:

  • Smart Policies
    Kasten K10 now makes intelligent decisions that simplify data protection at scale. Following policy recommendations, users can schedule backup windows during off-peak hours. Kasten K10 not only suggests backup windows flexibly, it also automates the sequencing of the underlying backup jobs. This optimizes utilization of the underlying infrastructure and automatically resolves conflicts when multiple policies run concurrently.

  • Easier Deployment and Scaling
    Kasten K10 further streamlines deployment by providing an intuitive graphical wizard that generates the most suitable installation manifest. Kasten K10 5.5 also adds IPv6 support, including communication with IPv6 pods on Amazon EKS, and GitOps workflow integration, providing extensible workflows for efficient application deployment and backup/restore.

  • An Expanded Cloud Native Ecosystem
    Ensures customers benefit from the latest advances across a growing range of workload types, geographic regions, storage types, and security capabilities. Kasten K10 now supports Red Hat OpenShift Virtualization, letting you run and manage VM and container workloads side by side on Red Hat OpenShift. In addition, Kasten K10 adds support for OCP 4.10, Kubernetes 1.23, Azure Files as a backup target, Azure managed identities, and more AWS and GCP regions.

Kasten by Veeam Announces NEW Kasten K10 V5.5 to Simplify Kubernetes Data Protection at Scale with Autonomous Operations and Cloud Native Expansion
https://www.veeam.com/news/kasten-by-veeam-announces-new-kasten-k10-v5-5-to-simplify-kubernetes-data-protection-at-scale-with-autonomous-operations-and-cloud-native-expansion.html
KubeCon + CloudNativeCon event details
https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/

1. Preparing the Kasten K10 Lab Environment

1.1. Configuring the Virtual Machine

  • VM configuration
    4 vCPUs, 8 GB RAM, 200 GB HDD; networking: NAT with a static IP; users: a sudo-capable user, plus root (optional)
  • Operating system
    Ubuntu 20.04 LTS or Ubuntu 22.04 LTS. For lab convenience the Desktop edition is fine; it can be downloaded from the link below.

    Ubuntu download link

1.2. Preparing the VM Operating System

After installing the operating system, install a series of packages on the VM:

# Host configuration (optional; just to get familiar with the environment)
$ sudo apt install systemd
# Hostname configuration
$ sudo hostnamectl set-hostname mars-k8s1
$ hostnamectl
$ sudo apt install vim
# Only needed for multi-node setups (optional)
$ sudo vim /etc/hosts
172.16.124.70 mars-k8s-master1
172.16.124.71 mars-k8s-worker1
# Software configuration (optional; just to get familiar with the environment)
$ sudo apt-get install openssh-server
$ sudo apt-get install -y apt-transport-https ca-certificates \
    curl gnupg lsb-release
$ sudo apt-get update
# Install helm
$ sudo snap install helm --classic

1.3. Preparing the Docker Environment

# Clean up any old Docker packages
$ sudo apt-get remove -y docker docker-engine docker.io containerd runc
# Disable swap
$ sudo swapoff -a
$ sudo sed -i -e '/swap/d' /etc/fstab
$ swapon --show
# Install Docker
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor \
    -o /usr/share/keyrings/docker-archive-keyring.gpg
$ echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
    | sudo tee /etc/apt/sources.list.d/docker.list
$ sudo apt-get update
$ sudo apt-get install -y docker-ce docker-ce-cli containerd.io
$ sudo mkdir -p /etc/docker
$ cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"},
  "storage-driver": "overlay2"
}
EOF
# Start Docker on boot and grant the current user access
$ sudo systemctl restart docker
$ sudo systemctl enable docker
$ sudo usermod -aG docker $USER

1.4. Installing and Configuring Kubernetes

# Install the Kubernetes packages
$ curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo tee /etc/apt/sources.list.d/kubernetes.list <<EOF
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
$ sudo apt-get update
$ sudo apt-get install -y kubeadm=1.23.1-00 kubectl=1.23.1-00 kubelet=1.23.1-00
# Start kubelet on boot
$ sudo systemctl enable kubelet
$ sudo -i
# Commands prefixed with ♯ below must be run as root
♯ systemctl stop etcd
♯ rm -rf /var/lib/etcd
# Initialize the Kubernetes cluster
♯ kubeadm init --kubernetes-version=1.23.1 --pod-network-cidr 10.16.0.0/16 --image-repository registry.aliyuncs.com/google_containers
♯ exit
# Set up kubeconfig
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Check node status
$ kubectl get node
# Remove the control-plane taint so workloads can schedule on this single node
$ kubectl taint nodes --all node-role.kubernetes.io/master-

1.5. Configuring Kubernetes Networking

# Install the CNI plugin
$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
# Watch the Calico services come up
$ kubectl get pod -n kube-system -w
# Note: the Calico CNI is a prerequisite for CoreDNS to start; image pulls can take a while, so wait and check progress.

$ kubectl get po -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS      AGE
kube-system   calico-kube-controllers-54965c7ccb-w944s   1/1     Running   0             40m
kube-system   calico-node-rzrww                          1/1     Running   0             39m
kube-system   coredns-6d8c4cb4d-5gz58                    1/1     Running   0             39m
kube-system   coredns-6d8c4cb4d-kl28q                    1/1     Running   0             39m
kube-system   etcd-mars-k8s-1                            1/1     Running   2 (32m ago)   8h
kube-system   kube-apiserver-mars-k8s-1                  1/1     Running   2 (32m ago)   8h
kube-system   kube-controller-manager-mars-k8s-1         1/1     Running   2 (32m ago)   8h
kube-system   kube-proxy-k78cr                           1/1     Running   2 (32m ago)   8h
kube-system   kube-scheduler-mars-k8s-1                  1/1     Running   3 (32m ago)   8h

1.6. Installing and Configuring the Storage and Snapshot Components: StorageClass and VolumeSnapshotClass

HostPath is a cloud native storage type whose volumes are directories on the node (VM/machine) where the Pod is scheduled. We will enable HostPath for the Kubernetes cluster and test the VolumeSnapshot feature, which is a prerequisite for running Kasten K10. Follow the steps below to install and verify the HostPath CSI driver.

1.6.1 Installing the Storage and Snapshot Classes

Install the volumesnapshot, volumesnapshotcontent, and volumesnapshotclass CRDs:

# Create snapshot controller CRDs
$ kubectl apply -f https://github.com/kubernetes-csi/external-snapshotter/raw/v5.0.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
$ kubectl apply -f https://github.com/kubernetes-csi/external-snapshotter/raw/v5.0.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
$ kubectl apply -f https://github.com/kubernetes-csi/external-snapshotter/raw/v5.0.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml

# Create the snapshot controller; use the same release as the CRDs above
$ SNAPSHOTTER_VERSION=v5.0.1
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_VERSION}/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_VERSION}/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml

Reference links for the external-snapshotter CRDs and controller:
external snapshotter controller
external snapshotter crd

1.6.2 Deploying the HostPath CSI Driver

$ git clone https://github.com/kubernetes-csi/csi-driver-host-path.git
$ cd csi-driver-host-path/deploy/kubernetes-1.XX   # choose the directory matching your Kubernetes version
$ ./deploy.sh

# Check that everything is running
$ kubectl get po -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS        AGE
default       csi-hostpath-socat-0                       1/1     Running   0               4m47s
default       csi-hostpathplugin-0                       8/8     Running   0               4m47s
kube-system   calico-kube-controllers-54965c7ccb-w944s   1/1     Running   0               5h46m
kube-system   calico-node-rzrww                          1/1     Running   0               5h46m
kube-system   coredns-6d8c4cb4d-5gz58                    1/1     Running   0               5h45m
kube-system   coredns-6d8c4cb4d-kl28q                    1/1     Running   0               5h46m
kube-system   etcd-mars-k8s-1                            1/1     Running   2 (5h39m ago)   13h

# csi-hostpathplugin runs 8 containers; their names give a rough idea of what each one does
$ kubectl describe po csi-hostpathplugin-0 | grep Image:
    Image:         registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
    Image:         registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.6.0
    Image:         registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1
    Image:         registry.k8s.io/sig-storage/livenessprobe:v2.7.0
    Image:         registry.k8s.io/sig-storage/csi-attacher:v3.5.0
    Image:         registry.k8s.io/sig-storage/csi-provisioner:v3.2.1
    Image:         registry.k8s.io/sig-storage/csi-resizer:v1.5.0
    Image:         registry.k8s.io/sig-storage/csi-snapshotter:v6.0.1

1.6.3 Inspecting and Configuring the StorageClass and VolumeSnapshotClass

# StorageClass
$ kubectl get sc
NAME                        PROVISIONER           RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
csi-hostpath-sc (default)   hostpath.csi.k8s.io   Retain          Immediate           false                  3s
# VolumeSnapshotClass
$ kubectl get volumesnapshotclasses
NAME                     DRIVER                DELETIONPOLICY   AGE
csi-hostpath-snapclass   hostpath.csi.k8s.io   Delete           31m

# Mark the snapshot class for use by K10
$ kubectl annotate volumesnapshotclass csi-hostpath-snapclass k10.kasten.io/is-snapshot-class=true
volumesnapshotclass.snapshot.storage.k8s.io/csi-hostpath-snapclass annotated
# Annotate the storage class with its snapshot class
$ kubectl annotate storageclass csi-hostpath-sc k10.kasten.io/volume-snapshot-class=csi-hostpath-snapclass
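
Before running the Kasten pre-flight check, the snapshot path can also be exercised by hand. The manifest below is a minimal smoke-test sketch (the `snap-test-pvc` and `snap-test` names are illustrative, not part of the lab) that carves a small volume out of csi-hostpath-sc and then snapshots it with csi-hostpath-snapclass:

```yaml
# Illustrative smoke test: a small PVC on the hostpath storage class...
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: snap-test-pvc            # illustrative name
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# ...and a VolumeSnapshot taken from it with the hostpath snapshot class
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: snap-test                # illustrative name
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: snap-test-pvc
```

Apply it with kubectl apply -f, confirm that `kubectl get volumesnapshot snap-test` reports READYTOUSE as true, then delete both objects before continuing.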

1.6.4. Verifying the Storage and Snapshot Classes with the Kasten Pre-Flight Checks Script

$ curl https://docs.kasten.io/tools/k10_primer.sh | bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7113  100  7113    0     0  12513      0 --:--:-- --:--:-- --:--:-- 12522
Namespace option not provided, using default namespace
Checking for tools
 --> Found kubectl
 --> Found helm
Checking if the Kasten Helm repo is present
 --> The Kasten Helm repo was found
Checking for required Helm version (>= v3.0.0)
 --> No Tiller needed with Helm v3.7.0
K10Primer image
 --> Using Image (gcr.io/kasten-images/k10tools:5.0.11) to run test
Checking access to the Kubernetes context kubernetes-admin@kubernetes
 --> Able to access the default Kubernetes namespace
K10 Kanister tools image
 --> Using Kanister tools image (ghcr.io/kanisterio/kanister-tools:0.83.0) to run test

Running K10Primer Job in cluster with command- 
     ./k10tools primer
serviceaccount/k10-primer created
clusterrolebinding.rbac.authorization.k8s.io/k10-primer created
job.batch/k10primer created
Waiting for pod k10primer-lfc5l to be ready - ContainerCreating
Waiting for pod k10primer-lfc5l to be ready - ContainerCreating
Pod Ready!

Kubernetes Version Check:
  Valid kubernetes version (v1.23.1)  -  OK

RBAC Check:
  Kubernetes RBAC is enabled  -  OK

Aggregated Layer Check:
  The Kubernetes Aggregated Layer is enabled  -  OK

         Found multiple snapshot API group versions, using preferred.
CSI Capabilities Check:
  Using CSI GroupVersion snapshot.storage.k8s.io/v1  -  OK

Validate Generic Volume Snapshot:
  Pod created successfully  -  OK
  GVS Backup command executed successfully  -  OK
  Pod deleted successfully  -  OK

serviceaccount "k10-primer" deleted
clusterrolebinding.rbac.authorization.k8s.io "k10-primer" deleted
job.batch "k10primer" deleted

1.7. Installing a Test Workload: MySQL

# add bitnami repo
$ helm repo add bitnami https://charts.bitnami.com/bitnami 
# helm install mysql
$ helm install mysql-release bitnami/mysql --namespace mysql --create-namespace \
 --set auth.rootPassword='Start123' \
 --set primary.persistence.size=10Gi
# check mysql status
$ kubectl get all -n mysql
NAME                  READY   STATUS    RESTARTS   AGE
pod/mysql-release-0   1/1     Running   0          99s

NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/mysql-release            ClusterIP   10.104.247.217   <none>        3306/TCP   48m
service/mysql-release-headless   ClusterIP   None             <none>        3306/TCP   48m

NAME                             READY   AGE
statefulset.apps/mysql-release   1/1     48m

2. Installing and Configuring Kasten K10

2.1. Installing Kasten K10

# Add helm chart for Kasten
$ helm repo add kasten https://charts.kasten.io/ 
# Check if kasten repo already added
$ helm repo list
# Fetch the charts
$ helm fetch kasten/k10 --version=5.0.11
# Install Kasten with the following parameters
$ helm install k10 k10-5.0.11.tgz --namespace kasten-io --create-namespace --set global.airgapped.repository=ccr.ccs.tencentyun.com/kasten \
--set auth.tokenAuth.enabled=true \
--set metering.mode=airgap \
--set injectKanisterSidecar.enabled=true \
--set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true \
--set global.persistence.storageClass=csi-hostpath-sc
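
The long --set chain can equally be kept in a values file. The fragment below is a sketch of the equivalent of the flags above (same air-gapped registry and storage class), passed to Helm with -f instead:

```yaml
# values.yaml -- equivalent to the --set flags above
global:
  airgapped:
    repository: ccr.ccs.tencentyun.com/kasten
  persistence:
    storageClass: csi-hostpath-sc
auth:
  tokenAuth:
    enabled: true
metering:
  mode: airgap
injectKanisterSidecar:
  enabled: true
  namespaceSelector:
    matchLabels:
      "k10/injectKanisterSidecar": "true"
```

Then install with: helm install k10 k10-5.0.11.tgz --namespace kasten-io --create-namespace -f values.yaml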

# All pods must be in the Running state before proceeding
$ kubectl get po -n kasten-io
NAME                                     READY   STATUS    RESTARTS   AGE
aggregatedapis-svc-5cb69ccb87-f728h      1/1     Running   0          7m18s
auth-svc-59f8597cd-kkm87                 1/1     Running   0          7m19s
catalog-svc-5cc7f8c865-g9wqt             2/2     Running   0          7m18s
controllermanager-svc-64c5fcf9c4-ncwjf   1/1     Running   0          7m19s
crypto-svc-cf68bdd99-n528w               4/4     Running   0          7m19s
dashboardbff-svc-669449d5f4-kxhrw        1/1     Running   0          7m19s
executor-svc-56d8fff95d-7q9gf            2/2     Running   0          7m19s
executor-svc-56d8fff95d-kc769            2/2     Running   0          7m19s
executor-svc-56d8fff95d-zzmhm            2/2     Running   0          7m19s
frontend-svc-7bff945b6c-mndws            1/1     Running   0          7m19s
gateway-78946b9fd7-rsq4t                 1/1     Running   0          7m19s
jobs-svc-c556bffd7-jgwng                 1/1     Running   0          7m18s
k10-grafana-58b85c856d-8pctz             1/1     Running   0          7m19s
kanister-svc-6db7f4d4bc-nhxdw            1/1     Running   0          7m19s
logging-svc-7b87c6dc8-zqfgg              1/1     Running   0          7m19s
metering-svc-b45bfb675-xzp26             1/1     Running   0          7m19s
prometheus-server-64877fdd68-z4ws6       2/2     Running   0          7m19s
state-svc-786864ddc6-tk99x               2/2     Running   0          7m18s

# Expose the Kasten K10 gateway service via NodePort to reach the GUI
$ kubectl expose service gateway -n kasten-io --type=NodePort --name=gateway-nodeport
$ kubectl get svc -n kasten-io gateway-nodeport
NAME               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
gateway-nodeport   NodePort   10.97.233.211   <none>        8000:32612/TCP   93s
$ kubectl get nodes -owide
NAME         STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
mars-k8s-1   Ready    control-plane,master   20h   v1.23.1   172.16.60.90   <none>        Ubuntu 22.04.1 LTS   5.15.0-52-generic   docker://20.10.21 

☞ Open the K10 UI in a browser: http://172.16.60.90:32612/k10/#


2.2. Logging in to the Kasten K10 GUI

☞ Browse to: http://172.16.60.90:32612/k10/#

☞ Retrieve the login token with the following command:

$ sa_secret=$(kubectl get serviceaccount k10-k10 -o jsonpath="{.secrets[0].name}" --namespace kasten-io) && kubectl get secret $sa_secret --namespace kasten-io -ojsonpath="{.data.token}{'\n'}" | base64 --decode

☞ Use the token to log in to the K10 console.


2.3. Installing MinIO as a Backup Repository

# Create MinIO Namespace
$ kubectl create ns minio
# Create MinIO by manifest
$ cat <<EOF | kubectl -n minio create -f -
apiVersion: v1
kind: Service
metadata:
  name: minio
spec:
  ports:
  - port: 9000
    targetPort: 9000
  selector:
    app: minio
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
spec:
  serviceName: "minio"
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: minio-pvc
      containers:
      - name: minio
        volumeMounts:
        - name: data 
          mountPath: "/data"
        image: minio/minio:RELEASE.2020-12-10T01-54-29Z
        args:
        - server
        - /data
        env:
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
          hostPort: 9000
EOF

# Check that the MinIO service is running normally
$ kubectl get all -n minio
NAME          READY   STATUS    RESTARTS   AGE
pod/minio-0   1/1     Running   0          21m

NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/minio   ClusterIP   10.110.230.23   <none>        9000/TCP   21m

NAME                     READY   AGE
statefulset.apps/minio   1/1     21m 

# Expose the MinIO service via NodePort for UI access
$ kubectl expose service minio -n minio --type=NodePort --name=minio-nodeport
$ kubectl get svc -n minio && kubectl get node -o wide
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
minio            ClusterIP   10.110.230.23    <none>        9000/TCP         23m
minio-nodeport   NodePort    10.106.243.153   <none>        9000:31106/TCP   29s
NAME         STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
mars-k8s-1   Ready    control-plane,master   21h   v1.23.1   172.16.60.90   <none>        Ubuntu 22.04.1 LTS   5.15.0-52-generic   docker://20.10.21

☞ Browse to: http://172.16.60.90:31106/
Access Key: minio
Secret Key: minio123

☞ Click the disk icon and create a bucket named kasten


2.4. Configuring MinIO as a Location Profile in K10

Configure the storage repository in the K10 console:

Main dashboard: Settings > Locations > Location Profiles > New Profile


Set the following parameters and click "Save Profile":

  • Profile Name: minio
  • Cloud Storage Provider: S3 Compatible
  • Access Key: minio
  • Secret Key: minio123
  • Endpoint: http://minio.minio:9000
  • Check ☑ Skip certificate chain and hostname verification
  • Region: (leave blank)
  • Bucket: kasten

3. Backing Up and Restoring an Application with Snapshots

3.1. Discovering Applications in K10

Main dashboard > Applications > mysql: locate the application we want to back up


3.2. Creating a Policy to Protect the MySQL Application

Click Create Policy to create a backup policy.

All options can be left at their defaults; click Create Policy.


3.3. Running the Backup Policy

Main dashboard > Policies > mysql-backup > Run Once to run the application backup


3.4. Checking the Backup in the Dashboard and on the Command Line

Main dashboard > Actions > Policy Run > click the run to view the backup result


Inspect the snapshot objects on the command line:

# List the VolumeSnapshots
$ kubectl get volumesnapshot -n mysql
NAME                            READYTOUSE   SOURCEPVC              SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS            SNAPSHOTCONTENT                                    CREATIONTIME   AGE
k10-csi-snap-r4tn4qk94h674rpg   true         data-mysql-release-0                           10Gi          csi-hostpath-snapclass   snapcontent-af9a8c05-b077-4294-aa4f-e7e1d4265759   2d             2d
# List the VolumeSnapshotContents
$ kubectl get volumesnapshotcontent -n mysql
NAME                                               READYTOUSE   RESTORESIZE   DELETIONPOLICY   DRIVER                VOLUMESNAPSHOTCLASS      VOLUMESNAPSHOT                  VOLUMESNAPSHOTNAMESPACE   AGE
snapcontent-af9a8c05-b077-4294-aa4f-e7e1d4265759   true         10737418240   Delete           hostpath.csi.k8s.io   csi-hostpath-snapclass   k10-csi-snap-r4tn4qk94h674rpg   mysql                     2d

3.5. Restoring the MySQL Data to Another Namespace in K10

Main dashboard > Applications > mysql > click Restore to view the restore points


Click a restore point > under Application Name > click Create a New Namespace > enter mysql-restore > click Create > click Restore


In the Confirm Restore dialog, click Restore.

3.6. Checking the mysql-restore Restore in the Dashboard


3.7. Analyzing the Restore

Now let's examine the snapshot restore mechanism. Looking at the VolumeSnapshotContents, we find one backing a snapshot named k10-csi-snap-wh9m8wgjg5gzj2x4, whose snapshot class is k10-clone-csi-hostpath-snapclass and whose DELETIONPOLICY is Retain; clearly K10 created it for the restore. Likewise, describing the PVC in mysql-restore shows that its volume originates from that very snapshot.
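
This mechanism is plain Kubernetes: K10 restores by creating a PVC whose dataSource points at a VolumeSnapshot. Stripped of labels and annotations, the restored PVC reduces to a manifest like this sketch (a simplified reconstruction using the names and sizes from this lab, not the exact object K10 submits):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-mysql-release-0
  namespace: mysql-restore
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  dataSource:                       # tells the CSI provisioner to pre-populate the new volume
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: k10-csi-snap-wh9m8wgjg5gzj2x4
```

Any PVC built this way is pre-populated from the snapshot by the CSI provisioner, which is exactly what the describe output below shows.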

$ kubectl get volumesnapshotcontent -n mysql-restore
NAME                                                                         READYTOUSE   RESTORESIZE   DELETIONPOLICY   DRIVER                VOLUMESNAPSHOTCLASS                VOLUMESNAPSHOT                  VOLUMESNAPSHOTNAMESPACE   AGE
k10-csi-snap-wh9m8wgjg5gzj2x4-content-9e561840-c004-4e0f-88e9-6c7374438a2f   true         10737418240   Retain           hostpath.csi.k8s.io   k10-clone-csi-hostpath-snapclass   k10-csi-snap-wh9m8wgjg5gzj2x4   mysql-restore             2m41s
snapcontent-af9a8c05-b077-4294-aa4f-e7e1d4265759                             true         10737418240   Delete           hostpath.csi.k8s.io   csi-hostpath-snapclass             k10-csi-snap-r4tn4qk94h674rpg   mysql                     2d1h

$ kubectl get pvc -n mysql-restore
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
data-mysql-release-0   Bound    pvc-13d8ed12-0173-4f59-b7d5-b5b01d98ea16   10Gi       RWO            csi-hostpath-sc   4m51s

$ kubectl -n mysql-restore describe pvc data-mysql-release-0
Name:          data-mysql-release-0
Namespace:     mysql-restore
StorageClass:  csi-hostpath-sc
Status:        Bound
Volume:        pvc-13d8ed12-0173-4f59-b7d5-b5b01d98ea16
Labels:        app.kubernetes.io/component=primary
               app.kubernetes.io/instance=mysql-release
               app.kubernetes.io/name=mysql
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: hostpath.csi.k8s.io
               volume.kubernetes.io/storage-provisioner: hostpath.csi.k8s.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      10Gi
Access Modes:  RWO
VolumeMode:    Filesystem
DataSource:
  APIGroup:  snapshot.storage.k8s.io
  Kind:      VolumeSnapshot
  Name:      k10-csi-snap-wh9m8wgjg5gzj2x4
Used By:     mysql-release-0
Events:
  Type    Reason                 Age    From                                                                           Message
  ----    ------                 ----   ----                                                                           -------
  Normal  ExternalProvisioning   7m29s  persistentvolume-controller                                                    waiting for a volume to be created, either by external provisioner "hostpath.csi.k8s.io" or manually created by system administrator
  Normal  Provisioning           7m29s  hostpath.csi.k8s.io_csi-hostpathplugin-0_38bf5516-630c-48d9-abf4-56ec8da0599b  External provisioner is provisioning volume for claim "mysql-restore/data-mysql-release-0"
  Normal  ProvisioningSucceeded  7m28s  hostpath.csi.k8s.io_csi-hostpathplugin-0_38bf5516-630c-48d9-abf4-56ec8da0599b  Successfully provisioned volume pvc-13d8ed12-0173-4f59-b7d5-b5b01d98ea16

4. Summary

In this article, starting from a single, simple Ubuntu VM, we created a storage class and a snapshot class and then used Kasten K10 to back up and restore an application through those snapshots, so your K10 labs deliver the same experience as enterprise-grade storage snapshots. Follow and share!

5. Reference Links

Ubuntu download link
https://ubuntu.com/download
external snapshotter controller
https://github.com/kubernetes-csi/external-snapshotter/tree/master/deploy/kubernetes/snapshot-controller
external snapshotter crd
https://github.com/kubernetes-csi/external-snapshotter/tree/v5.0.1/client/config/crd
Kasten Document
https://docs.kasten.io/latest/