Certified Kubernetes Administrator (CKA) Practice Exam: Part 1

Introduction

This lab provides practice scenarios to help prepare you for the Certified Kubernetes Administrator (CKA) exam. You will be presented with tasks to complete as well as server(s) and/or an existing Kubernetes cluster to complete them in. You will need to use your knowledge of Kubernetes to successfully complete the provided tasks, much like you would on the real CKA exam. Good luck!

Solution

Log in to the server using the credentials provided:

ssh cloud_user@<PUBLIC_IP_ADDRESS>

Count the Number of Nodes That Are Ready to Run Normal Workloads

  1. Switch to the appropriate context with kubectl:

    kubectl config use-context acgk8s
    
  2. Count the number of nodes ready to run a normal workload:

    kubectl get nodes
    
  3. Check that the worker nodes can run normal workloads:

    kubectl describe node acgk8s-worker1
    
  4. Scroll to the top of the output and check the list of taints. You should see none.

  5. Repeat the steps above for acgk8s-worker2. You should see no taints on that node either.

  6. Save this number to the file /k8s/0001/count.txt:

    echo 2 > /k8s/0001/count.txt
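
As a quicker cross-check than describing each node, you can list every node's taints in one command; the Ready nodes whose TAINTS column shows <none> are the ones to count:

    kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints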
    

Retrieve Error Messages from a Container Log

  1. Obtain error messages from a container’s log:

    kubectl logs -n backend data-handler -c proc
    
  2. Return only the error messages:

    kubectl logs -n backend data-handler -c proc | grep ERROR
    
  3. Save this output to the file /k8s/0002/errors.txt:

    kubectl logs -n backend data-handler -c proc | grep ERROR > /k8s/0002/errors.txt
    

Find the Pod That Is Utilizing the Most CPU within a Namespace

  1. Identify the Pod in the web namespace with the label app=auth that is using the most CPU (Pods outside this selector may show higher CPU usage, but only Pods matching the label count for this task):

    kubectl top pod -n web --sort-by cpu --selector app=auth
    
  2. Save the name of this Pod to /k8s/0003/cpu-pod.txt:

    echo auth-web > /k8s/0003/cpu-pod.txt
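
If you would rather capture the name programmatically than type it (auth-web is the name found in this lab), a sketch that assumes the pod name is the first column of the metrics output:

    kubectl top pod -n web --sort-by cpu --selector app=auth --no-headers \
      | head -n 1 | awk '{print $1}' > /k8s/0003/cpu-pod.txt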
    

Certified Kubernetes Administrator (CKA) Practice Exam: Part 2

Introduction

This lab provides practice scenarios to help prepare you for the Certified Kubernetes Administrator (CKA) exam. You will be presented with tasks to complete as well as server(s) and/or an existing Kubernetes cluster to complete them in. You will need to use your knowledge of Kubernetes to successfully complete the provided tasks, much like you would on the real CKA exam. Good luck!

Solution

Log in to the server using the credentials provided:

ssh cloud_user@<PUBLIC_IP_ADDRESS>

Edit the web-frontend Deployment to Expose the HTTP Port

  1. Switch to the appropriate context with kubectl:

    kubectl config use-context acgk8s
    
  2. Edit the web-frontend deployment in the web namespace:

    kubectl edit deployment -n web web-frontend
    
  3. Change the Pod template to expose port 80 on the NGINX container:

    spec:
      containers:
      - image: nginx:1.14.2
        ports:
        - containerPort: 80
    
  4. Press Esc and enter :wq to save and exit.

Create a Service to Expose the web-frontend Deployment’s Pods Externally

  1. Open a web-frontend service file:

    vi web-frontend-svc.yml
    
  2. Define the service in the YAML document:

    apiVersion: v1
    kind: Service
    metadata:
      name: web-frontend-svc
      namespace: web
    spec:
      type: NodePort
      selector:
        app: web-frontend
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80
        nodePort: 30080
    
  3. Press Esc and enter :wq to save and exit.

  4. Create the service:

    kubectl create -f web-frontend-svc.yml
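
To verify the Service and test external access (replace <node-ip> with the address of any cluster node):

    kubectl get svc web-frontend-svc -n web
    curl http://<node-ip>:30080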
    

Scale Up the Web Frontend Deployment

  1. Scale up the deployment:

    kubectl scale deployment web-frontend -n web --replicas=5
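
Confirm that all five replicas come up:

    kubectl get deployment web-frontend -n web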
    

Create an Ingress That Maps to the New Service

  1. Create a web-frontend-ingress file:

    vi web-frontend-ingress.yml
    
  2. Define an Ingress in the YAML document:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-frontend-ingress
      namespace: web
    spec:
      rules:
      - http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-frontend-svc
                port:
                  number: 80
    
  3. Press Esc and enter :wq to save and exit.

  4. Create the Ingress:

    kubectl create -f web-frontend-ingress.yml
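
To confirm the Ingress was created and points at the Service:

    kubectl get ingress web-frontend-ingress -n web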
    

Certified Kubernetes Administrator (CKA) Practice Exam: Part 3

Introduction

This lab provides practice scenarios to help prepare you for the Certified Kubernetes Administrator (CKA) exam. You will be presented with tasks to complete as well as server(s) and/or an existing Kubernetes cluster to complete them in. You will need to use your knowledge of Kubernetes to successfully complete the provided tasks, much like you would on the real CKA exam. Good luck!

Solution

Log in to the server using the credentials provided:

ssh cloud_user@<PUBLIC_IP_ADDRESS>

Create a Service Account

  1. Switch to the appropriate context with kubectl:

    kubectl config use-context acgk8s
    
  2. Create a service account:

    kubectl create sa webautomation -n web
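
Verify the ServiceAccount exists:

    kubectl get sa webautomation -n web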
    

Create a ClusterRole That Provides Read Access to Pods

  1. Create a pod-reader.yml file:

    vi pod-reader.yml
    
  2. Define the ClusterRole:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: pod-reader
    rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["get", "watch", "list"]
    
  3. Press Esc and enter :wq to save and exit.

  4. Create the ClusterRole:

    kubectl create -f pod-reader.yml
    

Bind the ClusterRole to the Service Account to Only Read Pods in the web Namespace

  1. Create the rb-pod-reader.yml file:

    vi rb-pod-reader.yml
    
  2. Define the RoleBinding:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: rb-pod-reader
      namespace: web
    subjects:
    - kind: ServiceAccount
      name: webautomation
      namespace: web
    roleRef:
      kind: ClusterRole
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io
    
  3. Press Esc and enter :wq to save and exit.

  4. Create the RoleBinding:

    kubectl create -f rb-pod-reader.yml
    
  5. Verify the RoleBinding works:

    kubectl get pods -n web --as=system:serviceaccount:web:webautomation
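
You can also probe the binding's scope with kubectl auth can-i. The first check should print yes and the second no, since the RoleBinding only grants access within the web namespace:

    kubectl auth can-i list pods -n web --as=system:serviceaccount:web:webautomation
    kubectl auth can-i list pods -n default --as=system:serviceaccount:web:webautomation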
    

Certified Kubernetes Administrator (CKA) Practice Exam: Part 4

Introduction

This lab provides practice scenarios to help prepare you for the Certified Kubernetes Administrator (CKA) exam. You will be presented with tasks to complete as well as server(s) and/or an existing Kubernetes cluster to complete them in. You will need to use your knowledge of Kubernetes to successfully complete the provided tasks, much like you would on the real CKA exam. Good luck!

Solution

Log in to the server using the credentials provided:

ssh cloud_user@<PUBLIC_IP_ADDRESS>

Back Up the etcd Data

  1. From the terminal, log in to the etcd server:

    ssh etcd1
    
  2. Back up the etcd data:

    ETCDCTL_API=3 etcdctl snapshot save /home/cloud_user/etcd_backup.db \
    --endpoints=https://etcd1:2379 \
    --cacert=/home/cloud_user/etcd-certs/etcd-ca.pem \
    --cert=/home/cloud_user/etcd-certs/etcd-server.crt \
    --key=/home/cloud_user/etcd-certs/etcd-server.key
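
To sanity-check the snapshot file before moving on:

    ETCDCTL_API=3 etcdctl snapshot status /home/cloud_user/etcd_backup.db -w table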
    

Restore the etcd Data from the Backup

  1. Stop etcd:

    sudo systemctl stop etcd
    
  2. Delete the existing etcd data:

    sudo rm -rf /var/lib/etcd
    
  3. Restore etcd data from a backup:

    sudo ETCDCTL_API=3 etcdctl snapshot restore /home/cloud_user/etcd_backup.db \
    --initial-cluster etcd-restore=https://etcd1:2380 \
    --initial-advertise-peer-urls https://etcd1:2380 \
    --name etcd-restore \
    --data-dir /var/lib/etcd
    
  4. Set database ownership:

    sudo chown -R etcd:etcd /var/lib/etcd
    
  5. Start etcd:

    sudo systemctl start etcd
    
  6. Verify the system is working:

    ETCDCTL_API=3 etcdctl get cluster.name \
    --endpoints=https://etcd1:2379 \
    --cacert=/home/cloud_user/etcd-certs/etcd-ca.pem \
    --cert=/home/cloud_user/etcd-certs/etcd-server.crt \
    --key=/home/cloud_user/etcd-certs/etcd-server.key
    

Certified Kubernetes Administrator (CKA) Practice Exam: Part 5

Introduction

This lab provides practice scenarios to help prepare you for the Certified Kubernetes Administrator (CKA) exam. You will be presented with tasks to complete as well as server(s) and/or an existing Kubernetes cluster to complete them in. You will need to use your knowledge of Kubernetes to successfully complete the provided tasks, much like you would on the real CKA exam. Good luck!

Solution

Log in to the server using the credentials provided:

ssh cloud_user@<PUBLIC_IP_ADDRESS>

Upgrade All Kubernetes Components on the Control Plane Node

  1. Switch to the appropriate context with kubectl:

    kubectl config use-context acgk8s
    
  2. Upgrade kubeadm:

    sudo apt-get update && \
    sudo apt-get install -y --allow-change-held-packages kubeadm=1.21.1-00
    
  3. Drain the control plane node:

    kubectl drain acgk8s-control --ignore-daemonsets
    
  4. Check whether the components are upgradeable to v1.21.1:

    sudo kubeadm upgrade plan v1.21.1
    
  5. Apply the upgrade:

    sudo kubeadm upgrade apply v1.21.1
    
  6. Upgrade kubelet and kubectl:

    sudo apt-get update && \
    sudo apt-get install -y --allow-change-held-packages kubelet=1.21.1-00 kubectl=1.21.1-00
    
  7. Reload the systemd configuration:

    sudo systemctl daemon-reload
    
  8. Restart kubelet:

    sudo systemctl restart kubelet
    
  9. Uncordon the control plane node:

    kubectl uncordon acgk8s-control
    

Upgrade All Kubernetes Components on the Worker Node

  1. Drain the worker1 node:

    kubectl drain acgk8s-worker1 --ignore-daemonsets --force
    
  2. SSH into the node:

    ssh acgk8s-worker1
    
  3. Install a new version of kubeadm:

    sudo apt-get update && \
    sudo apt-get install -y --allow-change-held-packages kubeadm=1.21.1-00
    
  4. Upgrade the node:

    sudo kubeadm upgrade node
    
  5. Upgrade kubelet and kubectl:

    sudo apt-get update && \
    sudo apt-get install -y --allow-change-held-packages kubelet=1.21.1-00 kubectl=1.21.1-00
    
  6. Reload the systemd configuration:

    sudo systemctl daemon-reload
    
  7. Restart kubelet:

    sudo systemctl restart kubelet
    
  8. Type exit to exit the node.

  9. Uncordon the node:

    kubectl uncordon acgk8s-worker1
    
  10. Repeat the process above for acgk8s-worker2 to upgrade the other worker node.
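
Once both workers are upgraded, every node should report VERSION v1.21.1 and STATUS Ready:

    kubectl get nodes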

Certified Kubernetes Administrator (CKA) Practice Exam: Part 6

Introduction

This lab provides practice scenarios to help prepare you for the Certified Kubernetes Administrator (CKA) Exam. You will be presented with tasks to complete, as well as server(s) and/or an existing Kubernetes cluster to complete them in. You will need to use your knowledge of Kubernetes to successfully complete the provided tasks, much like you would on the real CKA exam. Good luck!

Solution

Log in to the server using the credentials provided:

ssh cloud_user@<PUBLIC_IP_ADDRESS>

Drain the Worker1 Node

  1. Switch to the appropriate context with kubectl:

    kubectl config use-context acgk8s
    
  2. Attempt to drain the worker1 node:

    kubectl drain acgk8s-worker1
    
  3. Does the node drain successfully? It does not; some Pods on the node cannot be evicted without additional flags.

  4. Override the errors and drain the node:

    kubectl drain acgk8s-worker1 --delete-local-data --ignore-daemonsets --force
    
  5. Check the status of the exam objectives:

    ./verify.sh
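
To see what remains on the drained node (DaemonSet pods stay behind), a field selector works:

    kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=acgk8s-worker1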


Create a Pod That Will Only Be Scheduled on Nodes with a Specific Label

  1. Add the disk=fast label to the worker2 node:

    kubectl label nodes acgk8s-worker2 disk=fast
    
  2. Create a YAML file named fast-nginx.yml:

    vim fast-nginx.yml
    
  3. In the file, paste the following:

    apiVersion: v1
    kind: Pod
    metadata:
      name: fast-nginx
      namespace: dev
    spec:
      nodeSelector:
        disk: fast
      containers:
      - name: nginx
        image: nginx
    
  4. Press Esc and enter :wq to save and exit.

  5. Create the fast-nginx pod:

    kubectl create -f fast-nginx.yml
    
  6. Check the status of the pod:

    kubectl get pod fast-nginx -n dev -o wide
    
  7. Check the status of the exam objectives:

    ./verify.sh
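
If the pod stays in Pending, confirm the label landed on the node:

    kubectl get nodes -l disk=fast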


Certified Kubernetes Administrator (CKA) Practice Exam: Part 7

Introduction

This lab provides practice scenarios to help prepare you for the Certified Kubernetes Administrator (CKA) Exam. You will be presented with tasks to complete, as well as server(s) and/or an existing Kubernetes cluster to complete them in. You will need to use your knowledge of Kubernetes to successfully complete the provided tasks, much like you would on the real CKA exam. Good luck!

Solution

Log in to the server using the credentials provided:

ssh cloud_user@<PUBLIC_IP_ADDRESS>


Note: When copying and pasting code into Vim from the lab guide, first enter :set paste (and then i to enter insert mode) to avoid adding unnecessary spaces and hashes. To save and quit the file, press Escape followed by :wq. To exit the file without saving, press Escape followed by :q!.

Create a PersistentVolume

  1. Switch to the appropriate context with kubectl:

    kubectl config use-context acgk8s
    
  2. Create a YAML file named localdisk.yml:

    vim localdisk.yml
    
  3. In the file, paste the following:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: localdisk
    provisioner: kubernetes.io/no-provisioner
    allowVolumeExpansion: true
    
  4. Press Esc and enter :wq to save and exit.

  5. Create a storage class using the YAML file:

    kubectl create -f localdisk.yml
    
  6. Create a YAML file named host-storage-pv.yml:

    vim host-storage-pv.yml
    
  7. In the file, paste the following:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: host-storage-pv
    spec:
      storageClassName: localdisk
      persistentVolumeReclaimPolicy: Recycle
      capacity:
        storage: 1Gi
      accessModes:
      - ReadWriteOnce
      hostPath:
        path: /etc/data
    
  8. Press Esc and enter :wq to save and exit.

  9. Create the PersistentVolume:

    kubectl create -f host-storage-pv.yml
    
  10. Check the status of the volume:

    kubectl get pv host-storage-pv
    
  11. Check the status of the exam objectives:

    ./verify.sh

Create a Pod That Uses the PersistentVolume for Storage

  1. Create a YAML file named host-storage-pvc.yml:

    vim host-storage-pvc.yml
    
  2. In the file, paste the following:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: host-storage-pvc
      namespace: auth
    spec:
      storageClassName: localdisk
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
    
  3. Press Esc and enter :wq to save and exit.

  4. Create the PersistentVolumeClaim:

    kubectl create -f host-storage-pvc.yml
    
  5. Check that the PersistentVolume's status is now Bound:

    kubectl get pv
    
  6. Verify that the claim is bound to the volume:

    kubectl get pvc -n auth
    
  7. Create a YAML file named pv-pod.yml:

    vim pv-pod.yml
    
  8. In the file, paste the following:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pv-pod
      namespace: auth
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ['sh', '-c', 'while true; do echo success > /output/output.log; sleep 5; done']
        volumeMounts:
        - name: pv-storage
          mountPath: /output
      volumes:
      - name: pv-storage
        persistentVolumeClaim:
          claimName: host-storage-pvc
    
  9. Press Esc and enter :wq to save and exit.

  10. Create the pod:

    kubectl create -f pv-pod.yml
    
  11. Check the status of the exam objectives:

    ./verify.sh

Expand the PersistentVolumeClaim

  1. Edit the host-storage-pvc PersistentVolumeClaim:

    kubectl edit pvc host-storage-pvc -n auth
    
  2. Under spec, change the storage value to 200Mi.

  3. Press Esc and enter :wq to save and exit.

  4. Check the status of the exam objectives:

    ./verify.sh
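
To confirm the new request was recorded on the claim (the underlying PV itself remains 1Gi):

    kubectl get pvc host-storage-pvc -n auth -o jsonpath='{.spec.resources.requests.storage}'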


Certified Kubernetes Administrator (CKA) Practice Exam: Part 8

Introduction

This lab provides practice scenarios to help prepare you for the Certified Kubernetes Administrator (CKA) Exam. You will be presented with tasks to complete, as well as server(s) and/or an existing Kubernetes cluster to complete them in. You will need to use your knowledge of Kubernetes to successfully complete the provided tasks, much like you would on the real CKA exam. Good luck!

Solution

Log in to the server using the credentials provided:

ssh cloud_user@<PUBLIC_IP_ADDRESS>


Create a NetworkPolicy That Denies All Access to the Maintenance Pod

  1. Switch to the appropriate context with kubectl:

    kubectl config use-context acgk8s
    
  2. Check the Pods in the foo namespace:

    kubectl get pods -n foo
    
  3. Check the maintenance Pod's labels:

    kubectl describe pod maintenance -n foo
    
  4. Create a new YAML file named np-maintenance.yml:

    vim np-maintenance.yml
    
  5. In the file, paste the following:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: np-maintenance
      namespace: foo
    spec:
      podSelector:
        matchLabels:
          app: maintenance
      policyTypes:
      - Ingress
      - Egress
    
  6. Press Esc and enter :wq to save and exit.

  7. Create the NetworkPolicy:

    kubectl create -f np-maintenance.yml
    
  8. Check the status of the exam objectives:

    ./verify.sh
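
To review the policy as the API server recorded it:

    kubectl describe networkpolicy np-maintenance -n foo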


Create a NetworkPolicy That Allows All Pods in the users-backend Namespace to Communicate with Each Other Only on a Specific Port

  1. Label the users-backend namespace:

    kubectl label namespace users-backend app=users-backend
    
  2. Create a YAML file named np-users-backend-80.yml:

    vim np-users-backend-80.yml
    
  3. In the file, paste the following:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: np-users-backend-80
      namespace: users-backend
    spec:
      podSelector: {}
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              app: users-backend
        ports:
        - protocol: TCP
          port: 80
    
  4. Press Esc and enter :wq to save and exit.

  5. Create the NetworkPolicy:

    kubectl create -f np-users-backend-80.yml
    
  6. Check the status of the exam objectives:

    ./verify.sh
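
As with the previous task, describe the policy to confirm the namespace selector and port were applied:

    kubectl describe networkpolicy np-users-backend-80 -n users-backend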


Certified Kubernetes Administrator (CKA) Practice Exam: Part 9

Introduction

This lab provides practice scenarios to help prepare you for the Certified Kubernetes Administrator (CKA) exam. You will be presented with tasks to complete as well as server(s) and/or an existing Kubernetes cluster to complete them in. You will need to use your knowledge of Kubernetes to successfully complete the provided tasks, much like you would on the real CKA exam. Good luck!

Solution

Log in to the server using the credentials provided:

ssh cloud_user@<PUBLIC_IP_ADDRESS>


Create a Multi-Container Pod

  1. Switch to the appropriate context with kubectl:

    kubectl config use-context acgk8s
    
  2. Create a YAML file named multi.yml:

    vim multi.yml
    
  3. In the file, paste the following:

    apiVersion: v1
    kind: Pod
    metadata:
      name: multi
      namespace: baz
    spec:
      containers:
      - name: nginx
        image: nginx
      - name: redis
        image: redis
    
  4. Press Esc and enter :wq to save and exit.

  5. Create the multi-container pod:

    kubectl create -f multi.yml
    
  6. Check the status of the pod:

    kubectl get pods -n baz
    
  7. Check the status of the exam objectives:

    ./verify.sh
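
To confirm both containers are present in the pod spec:

    kubectl get pod multi -n baz -o jsonpath='{.spec.containers[*].name}'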


Create a Pod Which Uses a Sidecar to Expose the Main Container's Log File to Stdout

  1. Create a YAML file named logging-sidecar.yml:

    vim logging-sidecar.yml
    
  2. In the file, paste the following:

    apiVersion: v1
    kind: Pod
    metadata:
      name: logging-sidecar
      namespace: baz
    spec:
      containers:
      - name: busybox1
        image: busybox
        command: ['sh', '-c', 'while true; do echo Logging data > /output/output.log; sleep 5; done']
        volumeMounts:
        - name: sharedvol
          mountPath: /output
      - name: sidecar
        image: busybox
        command: ['sh', '-c', 'tail -f /input/output.log']
        volumeMounts:
        - name: sharedvol
          mountPath: /input
      volumes:
      - name: sharedvol
        emptyDir: {}
    
  3. Press Esc and enter :wq to save and exit.

  4. Create the logging-sidecar pod:

    kubectl create -f logging-sidecar.yml
    
  5. Check the status of the pod:

    kubectl get pods -n baz
    
  6. Check the logging-sidecar logs:

    kubectl logs logging-sidecar -n baz -c sidecar
    
  7. Check the status of the exam objectives:

    ./verify.sh


Certified Kubernetes Administrator (CKA) Practice Exam: Part 10

Introduction

This lab provides practice scenarios to help prepare you for the Certified Kubernetes Administrator (CKA) exam. You will be presented with tasks to complete as well as server(s) and/or an existing Kubernetes cluster to complete them in. You will need to use your knowledge of Kubernetes to successfully complete the provided tasks, much like you would on the real CKA exam. Good luck!

Solution

Log in to the server using the credentials provided:

ssh cloud_user@<PUBLIC_IP_ADDRESS>


Determine What Is Wrong with the Broken Node

  1. Switch to the appropriate context with kubectl:

    kubectl config use-context acgk8s
    
  2. Check the current status of the nodes:

    kubectl get nodes
    
  3. Identify the broken node and save its name to the file /k8s/0004/broken-node.txt:

    echo acgk8s-worker2 > /k8s/0004/broken-node.txt
    
  4. Attempt to determine the cause of the issue:

    kubectl describe node acgk8s-worker2


Fix the Problem

  1. Access the node using SSH:

    ssh acgk8s-worker2
    
  2. Check the kubelet log:

    sudo journalctl -u kubelet
    
  3. Check the last entry in the log.
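
If the log is long, you can jump straight to the most recent entries:

    sudo journalctl -u kubelet -n 20 --no-pager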

  4. Check the kubelet status:

    sudo systemctl status kubelet
    
  5. Enable kubelet:

    sudo systemctl enable kubelet
    
  6. Start kubelet:

    sudo systemctl start kubelet
    
  7. Verify that kubelet started successfully:

    sudo systemctl status kubelet
    
  8. Return to the control plane node:

    exit
    
  9. Check the status of the nodes:

    kubectl get nodes
    
  10. Check the status of the exam objectives:

    ./verify.sh