Latest CKA Free Dumps - Linux Foundation Certified Kubernetes Administrator (CKA) Program
Check to see how many worker nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/KUCC00104/kucc00104.txt.
Answer:
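The original answer content is not shown here. A minimal sketch of one common approach (the placeholders <node-name> and <number> are to be filled in from your own output):
kubectl get nodes
## For each node reported as Ready, check whether it carries a NoSchedule taint
kubectl describe node <node-name> | grep -i taint
## Write the count of Ready worker nodes that are not tainted NoSchedule
echo <number> > /opt/KUCC00104/kucc00104.txt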


Task Weight: 4%

Task
Scale the deployment webserver to 3 pods.

Answer:
Solution:
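The original answer content is not shown here. The usual one-line solution, as a sketch:
kubectl scale deployment webserver --replicas=3
## Confirm that 3/3 replicas are ready
kubectl get deployment webserver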


You must connect to the correct host.
Failure to do so may result in a zero score.
[candidate@base] $ ssh cka000037
Context
A legacy app needs to be integrated into the Kubernetes built-in logging architecture (i.e., kubectl logs). Adding a streaming co-located container is a good and common way to accomplish this requirement.
Task
Update the existing Deployment synergy-leverager, adding a co-located container named sidecar using the busybox:stable image to the existing Pod. The new co-located container has to run the following command:
/bin/sh -c "tail -n+1 -f /var/log/synergy-leverager.log"
Use a Volume mounted at /var/log to make the log file synergy-leverager.log available to the co-located container.
Do not modify the specification of the existing container other than adding the required volume mount.
Failure to do so may result in a reduced score.
Answer:
Task Summary
* SSH into the correct node: cka000037
* Modify existing deployment synergy-leverager
* Add a sidecar container:
* Name: sidecar
* Image: busybox:stable
* Command:
/bin/sh -c "tail -n+1 -f /var/log/synergy-leverager.log"
* Use a shared volume mounted at /var/log
* Don't touch existing container config except adding volume mount
Step-by-Step Solution
1## SSH into the correct node
ssh cka000037
## Skipping this will result in a zero score.
2## Edit the deployment
kubectl edit deployment synergy-leverager
This opens the deployment YAML in your default editor (vi or similar).
3## Modify the spec as follows
# Inside the spec.template.spec, do these 3 things:
# A. Define a shared volume
Add under volumes: (at the same level as containers):
volumes:
- name: log-volume
  emptyDir: {}
# B. Add volume mount to the existing container
Locate the existing container under containers: and add this:
volumeMounts:
- name: log-volume
  mountPath: /var/log
# Do not change any other configuration for this container.
# C. Add the sidecar container
Still inside containers:, add the new container definition after the first one:
- name: sidecar
  image: busybox:stable
  command:
  - /bin/sh
  - -c
  - "tail -n+1 -f /var/log/synergy-leverager.log"
  volumeMounts:
  - name: log-volume
    mountPath: /var/log
The resulting pod template spec should look similar to this:
spec:
  containers:
  - name: main-container
    image: your-existing-image
    volumeMounts:
    - name: log-volume
      mountPath: /var/log
  - name: sidecar
    image: busybox:stable
    command:
    - /bin/sh
    - -c
    - "tail -n+1 -f /var/log/synergy-leverager.log"
    volumeMounts:
    - name: log-volume
      mountPath: /var/log
  volumes:
  - name: log-volume
    emptyDir: {}
4## Save and exit
If using vi or vim, type:
:wq
5## Verify
Check the updated pods:
kubectl get pods -l app=synergy-leverager
Pick a pod name and describe it:
kubectl describe pod <pod-name>
Confirm:
* 2 containers running (main-container + sidecar)
* Volume mounted at /var/log
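A quicker check, as a sketch (using the same app=synergy-leverager label as above), prints the container names directly:
kubectl get pods -l app=synergy-leverager -o jsonpath='{.items[*].spec.containers[*].name}'
## Expected output: the existing container name followed by sidecar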
Final Command Summary
ssh cka000037
kubectl edit deployment synergy-leverager
# Modify as explained above
kubectl get pods -l app=synergy-leverager
kubectl describe pod <pod-name>
List the nginx pod with custom columns POD_NAME and POD_STATUS
Answer:
kubectl get po -o=custom-columns="POD_NAME:.metadata.name,POD_STATUS:.status.containerStatuses[].state"
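If a single-word status is preferred, .status.phase can be used instead of the container state object. This variation is not part of the original answer, just a commonly used alternative:
kubectl get po -o=custom-columns="POD_NAME:.metadata.name,POD_STATUS:.status.phase"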
You must connect to the correct host.
Failure to do so may result in a zero score.
[candidate@base] $ ssh cka000054
Context:
Your cluster's CNI has failed a security audit. It has been removed. You must install a new CNI that can enforce network policies.
Task
Install and set up a Container Network Interface (CNI) that meets these requirements:
Pick and install one of these CNI options:
Flannel version 0.26.1
Manifest:
https://github.com/flannel-io/flannel/releases/download/v0.26.1/kube-flannel.yml
Calico version 3.28.2
Manifest:
https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml
Answer:
Task Summary
* SSH into cka000054
* Install a CNI plugin that supports NetworkPolicies
* Two CNI options provided:
* Flannel v0.26.1 (does NOT support NetworkPolicies)
* Calico v3.28.2 (does support NetworkPolicies)
Decision Point: Which CNI to choose?
Choose Calico, because only Calico supports enforcing NetworkPolicies natively. Flannel does not.
Step-by-Step Solution
1## SSH into the correct node
ssh cka000054
## Required. Skipping this results in zero score.
2## Install Calico CNI (v3.28.2)
Use the official manifest provided:
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml
This installs the Calico Operator, which then deploys the full Calico CNI stack.
3## Wait for Calico components to come up
Check the pods in tigera-operator and calico-system namespaces:
kubectl get pods -n tigera-operator
kubectl get pods -n calico-system
# You should see pods like:
* calico-kube-controllers
* calico-node
* calico-typha
* tigera-operator
Wait for all to be in Running state.
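One way to wait for this, as a sketch (the namespace and timeout value are assumptions, not part of the original answer):
kubectl wait --for=condition=Ready pods --all -n calico-system --timeout=300s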
4## (Optional) Confirm the CNI is enforcing NetworkPolicies
You can check:
kubectl get crds | grep networkpolicy
You should see:
* networkpolicies.crd.projectcalico.org
* This confirms Calico's CRDs are installed for policy enforcement.
Final Command Summary
ssh cka000054
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml
kubectl get pods -n tigera-operator
kubectl get pods -n calico-system
kubectl get crds | grep networkpolicy
Score: 4%

Context
You have been asked to create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific namespace.
Task
Create a new ClusterRole named deployment-clusterrole, which only allows the creation of the following resource types:
* Deployment
* StatefulSet
* DaemonSet
Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to the namespace app-team1.

Answer:
Solution:
The task should be completed on cluster k8s (1 master, 2 workers). To connect, use the command:
[student@node-1] > ssh k8s
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
kubectl create serviceaccount cicd-token --namespace=app-team1
kubectl create rolebinding deployment-clusterrole --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token --namespace=app-team1
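An optional verification, as a sketch (not part of the original answer), using kubectl auth can-i to impersonate the ServiceAccount:
kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1
## Expected: yes
kubectl auth can-i create pods --as=system:serviceaccount:app-team1:cicd-token -n app-team1
## Expected: no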
Create a deployment as follows:
* Name: nginx-random
* Exposed via a service nginx-random
* Ensure that the service and pod are accessible via their respective DNS records
* The container(s) within any pod(s) running as a part of this deployment should use the nginx image
Next, use the utility nslookup to look up the DNS records of the service and pod and write the output to /opt/KUNW00601/service.dns and /opt/KUNW00601/pod.dns respectively.
Answer:
Solution:
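The original answer content is not shown here. A minimal sketch of one common approach (the busybox image tag, the temporary pod name test-dns, and the assumption that the deployment runs in the default namespace are illustrative, not from the original answer):
kubectl create deployment nginx-random --image=nginx
kubectl expose deployment nginx-random --port=80
## Service DNS record, looked up from a temporary pod
kubectl run test-dns --image=busybox:1.28 --rm -it --restart=Never -- nslookup nginx-random > /opt/KUNW00601/service.dns
## Find the pod IP, then look up its DNS record (dots in the IP become dashes)
kubectl get pods -l app=nginx-random -o wide
kubectl run test-dns --image=busybox:1.28 --rm -it --restart=Never -- nslookup <pod-ip-with-dashes>.default.pod > /opt/KUNW00601/pod.dns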






Create a deployment as follows:
* Name: nginx-app
* Using container nginx with version 1.11.10-alpine
* The deployment should contain 3 replicas
Next, deploy the application with the new version 1.11.13-alpine by performing a rolling update.
Finally, roll back that update to the previous version 1.11.10-alpine.
Answer:
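The original answer content is not shown here. A minimal sketch of one common approach (it assumes the container is named nginx, which is what kubectl create deployment derives from the image name):
kubectl create deployment nginx-app --image=nginx:1.11.10-alpine --replicas=3
## Rolling update to the new version
kubectl set image deployment/nginx-app nginx=nginx:1.11.13-alpine
kubectl rollout status deployment/nginx-app
## Roll back to the previous version
kubectl rollout undo deployment/nginx-app
kubectl rollout status deployment/nginx-app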


