This product or feature is in a pre-release state and might change or have limited support. For more information, see the product launch stages.

Prerequisites

  1. A GKE cluster
  2. A new Cloud KMS key in the available state (see the key-creation sketch after this list)
  3. The Compute Engine Persistent Disk CSI driver deployed to the GKE cluster
  4. The Cloud KMS CryptoKey Encrypter/Decrypter role assigned to the Compute Engine service account
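
If the key does not exist yet, it can be created with gcloud. This is a minimal sketch; the key ring name, key name, and location match the StorageClass example later in this guide and are otherwise just assumptions.

gcloud kms keyrings create gcePdKmsKeyring --location europe-west2
gcloud kms keys create gcePdKmsCrypto \
    --keyring gcePdKmsKeyring \
    --location europe-west2 \
    --purpose encryption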

1. Kubernetes Driver Installation Guide

The following roles are required to run "setup-project.sh" and "deploy-driver.sh":

roles/container.developer
roles/iam.roleAdmin
roles/iam.serviceAccountAdmin
roles/iam.serviceAccountKeyAdmin
roles/resourcemanager.projectIamAdmin
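
As a sketch, these roles can be granted to the operator account with gcloud; [OPERATOR_EMAIL] is a placeholder of this guide's own, not something created by the scripts.

# Grant one of the required roles to the account that will run the scripts; repeat for the other roles above
gcloud projects add-iam-policy-binding reborn \
    --member user:[OPERATOR_EMAIL] \
    --role roles/container.developer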

git clone https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver $GOPATH/src/sigs.k8s.io/gcp-compute-persistent-disk-csi-driver

export PROJECT=reborn                     # GCP project
export GCE_PD_SA_NAME=my-gce-pd-csi-sa    # Name of the service account to create
export GCE_PD_SA_DIR=/root/infra          # Directory to save the service account key
export GOPATH=""                          # Set GOPATH if you have one; an empty value is also fine

./deploy/setup-project.sh
# You will get a new service account and a new custom role once "setup-project.sh" finishes

# Service Account
my-gce-pd-csi-sa@reborn.iam.gserviceaccount.com
includedRoles:
- projects/reborn/roles/gcp_compute_pd_csi_driver_custom_role
- roles/compute.storageAdmin
- roles/iam.serviceAccountUser

# Custom role with the permissions required by the gcp-compute-persistent-disk-csi-driver
projects/reborn/roles/gcp_compute_pd_csi_driver_custom_role
includedPermissions:
- compute.instances.attachDisk
- compute.instances.detachDisk
- compute.instances.get
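
To double-check what the script created, the new service account and custom role can be inspected with gcloud (a verification sketch using the project and names above):

gcloud iam service-accounts describe my-gce-pd-csi-sa@reborn.iam.gserviceaccount.com
gcloud iam roles describe gcp_compute_pd_csi_driver_custom_role --project reborn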

export GCE_PD_DRIVER_VERSION=stable       # Driver version to deploy

./deploy/kubernetes/deploy-driver.sh
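
After the script finishes, it is worth confirming that the driver pods are running; the pod name prefix and namespace depend on the driver version, so the grep pattern below is an assumption:

kubectl get pods --all-namespaces | grep csi-gce-pd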

1) “no matches for kind “PriorityClass” in version “scheduling.k8s.io/v1””

issues-276

Edit the generated spec at /tmp/gcp-compute-persistent-disk-csi-driver-specs-generated.yaml:

apiVersion: scheduling.k8s.io/v1beta1
# The generated spec originally uses apiVersion scheduling.k8s.io/v1.
# Change it to v1beta1 for clusters below GKE 1.14; once the cluster is
# upgraded to GKE 1.14 or higher, the apiVersion does not need to be changed.
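
A possible one-liner for making this edit (a sketch, assuming GNU sed and the generated file path above):

sed -i 's#scheduling.k8s.io/v1$#scheduling.k8s.io/v1beta1#' /tmp/gcp-compute-persistent-disk-csi-driver-specs-generated.yaml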

2) “Unable to deploy driver to k8s cluster: error listing CSINodes: the server could not find the requested resource”

issues-420

gcloud projects add-iam-policy-binding reborn \
    --member serviceAccount:"206384265366-compute@developer.gserviceaccount.com" \
    --role roles/cloudkms.cryptoKeyEncrypterDecrypter

# The Compute Engine default service account performs the encryption/decryption,
# so binding the role to the infra SA does not work, even if it is the node pool's default service account.
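
The Compute Engine default service account has the form [PROJECT_NUMBER]-compute@developer.gserviceaccount.com; the project number can be looked up as follows (shown here for the reborn project used above):

gcloud projects describe reborn --format 'value(projectNumber)'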

2. Create a StorageClass referencing the new KMS key

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: csi-gce-pd
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  disk-encryption-kms-key: projects/[project]/locations/europe-west2/keyRings/gcePdKmsKeyring/cryptoKeys/gcePdKmsCrypto

# Save the manifest above as gcepd-sc.yaml, then:
kubectl apply -f gcepd-sc.yaml
kubectl delete -f gcepd-sc.yaml             # Remove the StorageClass by yaml
kubectl describe storageclass csi-gce-pd    # Check the StorageClass

3. Create an encrypted Persistent Disk in GKE

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-gce-pd
  resources:
    requests:
      storage: 6Gi

# Save the manifest above as pvc.yaml, then:
kubectl apply -f pvc.yaml
kubectl delete -f pvc.yaml                  # Remove the PVC by yaml
kubectl get sc,pvc
kubectl get pvc podpvc --watch              # Follow the PVC until it is Bound

NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
podpvc    Bound    pvc-66c4cd4c-3060-11ea-994c-4201ac101003   6Gi        RWO            csi-gce-pd     4m6s
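
To actually consume the encrypted disk, the claim can be mounted by a Pod. The sketch below is minimal; the pod name, container name, and image are illustrative assumptions, and only claimName: podpvc comes from the manifest above.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web-server                # illustrative name
spec:
  containers:
    - name: web-server
      image: nginx                # illustrative image
      volumeMounts:
        - mountPath: /var/lib/www/html
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: podpvc         # the PVC created above
EOF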

4. Create a new node pool

gcloud container node-pools create pool \
    --cluster private-gke-cluster \
    --service-account infra-build@reborn.iam.gserviceaccount.com \
    --machine-type n1-standard-4 \
    --num-nodes 1
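
To confirm the new pool and its nodes (a verification sketch; add --zone or --region if no default is configured):

gcloud container node-pools list --cluster private-gke-cluster
kubectl get nodes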

5. Resize cluster

gcloud container clusters resize [CLUSTER_NAME] --node-pool [NODE_POOL] \
    --num-nodes [NUM_NODES]
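
For example, with the cluster and node pool created above (the node count of 2 is only an illustration):

gcloud container clusters resize private-gke-cluster --node-pool pool --num-nodes 2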

6. Blocker

“failed to insert zonal disk: unkown Insert disk error: googleapi: Error 503: Internal error. Please try again or contact Google Support. (Code: ‘0’), backendError”

An issue has been raised on the project's GitHub page and needs further discussion.

issues-446
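
While this is under discussion, one way to check whether a backing disk was actually created, and whether it is encrypted with the KMS key, is to inspect it directly (a sketch; replace [DISK_NAME] and [ZONE] with the values from the PV):

gcloud compute disks list --filter "name~^pvc-"
gcloud compute disks describe [DISK_NAME] --zone [ZONE] --format 'value(diskEncryptionKey.kmsKeyName)'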

References

Dynamic provisioning CMEK

Deploy driver to Cluster

node-pool

roles