This page was exported from Best Free Exam Guide [ http://free.exams4sures.com ]. Export date: Sat Mar 15 5:35:05 2025 / +0000 GMT

Updated CKS Dumps Questions Are Available [2024] For Passing Linux Foundation Exam [Q29-Q53]

Free UPDATED Linux Foundation CKS Certification Exam Dumps is Online

NO.29 SIMULATION
A container image scanner is set up on the cluster. Given an incomplete configuration in the directory /etc/kubernetes/confcontrol and a functional container image scanner with HTTPS endpoint https://acme.local.8081/image_policy:
1. Enable the admission plugin.
2. Validate the control configuration and change it to implicit deny.
Finally, test the configuration by deploying a pod whose image tag is latest.

NO.30 A container image scanner is set up on the cluster. Given an incomplete configuration in the directory /etc/kubernetes/confcontrol and a functional container image scanner with HTTPS endpoint https://test-server.local.8081/image_policy:
1. Enable the admission plugin.
2. Validate the control configuration and change it to implicit deny.
Finally, test the configuration by deploying a pod whose image tag is latest.

NO.31 A service is running on port 389 inside the system. Find the process ID of the process, store the names of all its open files in /candidate/KH77539/files.txt, and delete the binary.
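NO.29 and NO.30 hinge on the ImagePolicyWebhook admission plugin. Below is a minimal sketch of the admission configuration; the admission-config and kubeconfig filenames and the TTL values are illustrative assumptions, not given by the question:

```yaml
# Sketch: /etc/kubernetes/confcontrol/admission_config.yaml (assumed filename)
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/confcontrol/kubeconf   # assumed filename; points at the scanner endpoint
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: false   # implicit deny: reject pods when the webhook cannot be reached
```

The plugin is then enabled on the API server via --enable-admission-plugins=...,ImagePolicyWebhook and --admission-control-config-file pointing at this file; after restarting the API server, deploying a pod with an image tagged latest should be rejected by the scanner policy.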
NO.32 Fix all issues via configuration and restart the affected components to ensure the new settings take effect.

Fix all of the following violations that were found against the API server:
a. Ensure the --authorization-mode argument includes RBAC
b. Ensure the --authorization-mode argument includes Node
c. Ensure that the --profiling argument is set to false

Fix all of the following violations that were found against the kubelet:
a. Ensure the --anonymous-auth argument is set to false
b. Ensure that the --authorization-mode argument is set to Webhook

Fix all of the following violations that were found against etcd:
a. Ensure that the --auto-tls argument is not set to true

Hint: Make use of the kube-bench tool.

API server:
Ensure the --authorization-mode argument includes RBAC. Turn on Role-Based Access Control (RBAC), which allows fine-grained control over the operations that different entities can perform on different objects in the cluster. It is recommended to use the RBAC authorization mode.

Fix - Buildtime (Kubernetes manifest):

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=RBAC,Node   # added
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver-should-pass
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/
      name: k8s
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /etc/pki
      name: pki
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes
    name: k8s
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /etc/pki
    name: pki

Ensure the --authorization-mode argument includes Node.
Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the
--authorization-mode parameter to a value that includes Node:
--authorization-mode=Node,RBAC
Audit:
/bin/ps -ef | grep kube-apiserver | grep -v grep
Expected result: 'Node,RBAC' has 'Node'

Ensure that the --profiling argument is set to false.
Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the parameter below:
--profiling=false
Audit:
/bin/ps -ef | grep kube-apiserver | grep -v grep
Expected result: 'false' is equal to 'false'

Fix all of the following violations that were found against the kubelet:
1) Ensure the --anonymous-auth argument is set to false.
Remediation: If using a kubelet config file, edit the file to set authentication: anonymous: enabled to false. If using executable arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and set the parameter below in the KUBELET_SYSTEM_PODS_ARGS variable:
--anonymous-auth=false
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
Audit:
/bin/ps -fC kubelet
Audit Config:
/bin/cat /var/lib/kubelet/config.yaml
Expected result: 'false' is equal to 'false'

2) Ensure that the --authorization-mode argument is set to Webhook.
Audit:
docker inspect kubelet | jq -e '.[0].Args[] | match("--authorization-mode=Webhook").string'
Returned value: --authorization-mode=Webhook

Fix all of the following violations that were found against etcd:
a. Ensure that the --auto-tls argument is not set to true. Do not use self-signed certificates for TLS. etcd is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should not be available to unauthenticated clients.
You should enable client authentication via valid certificates to secure access to the etcd service.

Fix - Buildtime (Kubernetes manifest):

apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    # --auto-tls=true removed: do not use self-signed certificates for TLS
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -ec
        - ETCDCTL_API=3 etcdctl --endpoints=https://[192.168.22.9]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo
      failureThreshold: 8
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd-should-fail
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
status: {}

NO.33 Cluster: qa-cluster. Master node: master. Worker node: worker1.
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context qa-cluster
Task: Create a NetworkPolicy named restricted-policy to restrict access to Pod product running in namespace dev. Only allow the following Pods to connect to Pod products-service:
1. Pods in the namespace qa
2.
Pods with label environment: stage, in any namespace.

NO.34 Cluster: scanner. Master node: controlplane. Worker node: worker1.
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context scanner
Given: You may use Trivy's documentation.
Task: Use the Trivy open-source container scanner to detect images with severe vulnerabilities used by Pods in the namespace nato. Look for images with High or Critical severity vulnerabilities and delete the Pods that use those images. Trivy is pre-installed on the cluster's master node; use the master node to run Trivy.

NO.35 A service is running on port 389 inside the system. Find the process ID of the process, store the names of all its open files in /candidate/KH77539/files.txt, and delete the binary.

root# netstat -ltnup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
tcp   0      0      127.0.0.1:17600  0.0.0.0:*        LISTEN  1293/dropbox
tcp   0      0      127.0.0.1:17603  0.0.0.0:*        LISTEN  1293/dropbox
tcp   0      0      0.0.0.0:22       0.0.0.0:*        LISTEN  575/sshd
tcp   0      0      127.0.0.1:9393   0.0.0.0:*        LISTEN  900/perl
tcp   0      0      :::80            :::*             LISTEN  9583/docker-proxy
tcp   0      0      :::443           :::*             LISTEN  9571/docker-proxy
udp   0      0      0.0.0.0:68       0.0.0.0:*                8822/dhcpcd
...
root# netstat -ltnup | grep ':22'
tcp   0      0      0.0.0.0:22       0.0.0.0:*        LISTEN  575/sshd

The ss command is the replacement for netstat. To see which process is listening on port 22:

root# ss -ltnup 'sport = :22'
Netid State  Recv-Q Send-Q Local Address:Port  Peer Address:Port
tcp   LISTEN 0      128    0.0.0.0:22          0.0.0.0:*          users:(("sshd",pid=575,fd=3))

NO.36 Cluster: qa-cluster. Master node: master. Worker node: worker1.
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context qa-cluster
Task: Create a NetworkPolicy named restricted-policy to restrict access to Pod product running in namespace dev. Only allow the following Pods to
connect to Pod products-service:
1. Pods in the namespace qa
2. Pods with label environment: stage, in any namespace

[desk@cli] $ k get ns qa --show-labels
NAME  STATUS  AGE  LABELS
qa    Active  47m  env=stage
[desk@cli] $ k get pods -n dev --show-labels
NAME     READY  STATUS   RESTARTS  AGE  LABELS
product  1/1    Running  0         3s   env=dev-team
[desk@cli] $ vim netpol2.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restricted-policy
  namespace: dev
spec:
  podSelector:
    matchLabels:
      env: dev-team
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          env: stage
    - podSelector:
        matchLabels:
          env: stage

[desk@cli] $ k apply -f netpol2.yaml
Reference: https://kubernetes.io/docs/concepts/services-networking/network-policies/

NO.37 You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context qa
Context: A pod fails to run because of an incorrectly specified ServiceAccount.
Task: Create a new service account named backend-qa in the existing namespace qa, which must not have access to any Secret. Edit the frontend pod YAML to use the backend-qa service account.
Note: You can find the frontend pod YAML at /home/cert_masters/frontend-pod.yaml

[desk@cli] $ k create sa backend-qa -n qa
serviceaccount/backend-qa created
[desk@cli] $ k get role,rolebinding -n qa
No resources found in qa namespace.
[desk@cli] $ k create role backend -n qa --resource pods,namespaces,configmaps --verb list   # No access to secrets
role.rbac.authorization.k8s.io/backend created
[desk@cli] $ k create rolebinding backend -n qa --role backend --serviceaccount qa:backend-qa
rolebinding.rbac.authorization.k8s.io/backend created
[desk@cli] $ vim /home/cert_masters/frontend-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  serviceAccountName: backend-qa   # Add this
  containers:
  - image: nginx
    name: frontend

[desk@cli] $ k apply -f /home/cert_masters/frontend-pod.yaml
pod/frontend created
Reference: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/

NO.38 SIMULATION
Before making any changes, build the Dockerfile with tag base:v1. Then analyze and edit the given Dockerfile (based on ubuntu 16.04), fixing two instructions present in the file from a security aspect and from an image-size reduction point of view.

Dockerfile:
FROM ubuntu:latest
RUN apt-get update -y
RUN apt install nginx -y
COPY entrypoint.sh /
RUN useradd ubuntu
ENTRYPOINT ["/entrypoint.sh"]
USER ubuntu

entrypoint.sh:
#!/bin/bash
echo "Hello from CKS"

After fixing the Dockerfile, build the image with the tag base:v2.
To verify: check the size of the image before and after the build.
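The two fixes usually expected in NO.38 are pinning the base image to the stated version (security and reproducibility) and collapsing the apt layers with cache cleanup (size). A sketch of one possible solution; the exact changes the grader accepts are not given in the question:

```dockerfile
# Possible fixed Dockerfile for NO.38 (illustrative, not the only valid answer)
FROM ubuntu:16.04                      # pin the stated base instead of :latest
RUN apt-get update -y && \
    apt-get install -y --no-install-recommends nginx && \
    rm -rf /var/lib/apt/lists/*        # single layer + apt cache cleanup reduces size
COPY entrypoint.sh /
RUN useradd ubuntu
ENTRYPOINT ["/entrypoint.sh"]
USER ubuntu
```

Build and compare sizes with: docker build -t base:v2 . and then docker images | grep base.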
NO.39 You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context prod-account
Context: A Role bound to a Pod's ServiceAccount grants overly permissive permissions. Complete the following tasks to reduce the set of permissions.
Task: Given an existing Pod named web-pod running in the namespace database:
1. Edit the existing Role bound to the Pod's ServiceAccount test-sa to only allow performing get operations, only on resources of type Pods.
2. Create a new Role named test-role-2 in the namespace database, which only allows performing update operations, only on resources of type StatefulSets.
3. Create a new RoleBinding named test-role-2-bind binding the newly created Role to the Pod's ServiceAccount.
Note: Don't delete the existing RoleBinding.

NO.40 SIMULATION
Analyze and edit the given Dockerfile, fixing two instructions that are prominent security best-practice issues:

FROM ubuntu:latest
RUN apt-get update -y
RUN apt install nginx -y
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
USER ROOT

Analyze and edit the deployment manifest file, fixing two fields that are prominent security best-practice issues:

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-2
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: sec-ctx-demo-2
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      runAsUser: 0
      privileged: true
      allowPrivilegeEscalation: false

Don't add or remove configuration settings; only modify the existing configuration settings. Whenever you need an unprivileged user for any of the tasks, use the user test-user with the user ID 5487.
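For NO.40's manifest, the two fields typically flagged are the root user ID and the privileged flag. A sketch of the manifest with only those two existing fields modified, per the question's constraint (which fields the grader checks is an assumption):

```yaml
# Sketch: NO.40 manifest with the two security-context fields fixed in place
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-2
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: sec-ctx-demo-2
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      runAsUser: 5487     # was 0 (root); use the unprivileged test-user ID
      privileged: false   # was true
      allowPrivilegeEscalation: false
```

In the Dockerfile half of NO.40, the analogous fixes would be pinning the base image and replacing USER ROOT (which is also invalid casing for the root account) with a dedicated non-root user.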
NO.41 Before making any changes, build the Dockerfile with tag base:v1. Then analyze and edit the given Dockerfile (based on ubuntu 16.04), fixing two instructions present in the file from a security aspect and from an image-size reduction point of view.

Dockerfile:
FROM ubuntu:latest
RUN apt-get update -y
RUN apt install nginx -y
COPY entrypoint.sh /
RUN useradd ubuntu
ENTRYPOINT ["/entrypoint.sh"]
USER ubuntu

entrypoint.sh:
#!/bin/bash
echo "Hello from CKS"

After fixing the Dockerfile, build the image with the tag base:v2.
To verify: check the size of the image before and after the build.

NO.42 Cluster: dev. Master node: master1. Worker node: worker1.
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context dev
Task: Retrieve the content of the existing secret named adam in the safe namespace. Store the username field in a file named /home/cert-masters/username.txt, and the password field in a file named /home/cert-masters/password.txt.
1. You must create both files; they don't exist yet.
2. Do not use/modify the created files in the following steps; create new temporary files if needed.
Create a new secret named newsecret in the safe namespace, with the following content:
Username: dbadmin
Password: moresecurepas
Finally, create a new Pod that has access to the secret newsecret via a volume:
Namespace: safe
Pod name: mysecret-pod
Container name: db-container
Image: redis
Volume name: secret-vol
Mount path: /etc/mysecret

1. Get the secret, decode it, and save the fields to the files:
k get secret adam -n safe -o yaml
2. Create the new secret using --from-literal:
[desk@cli] $ k create secret generic newsecret -n safe --from-literal=username=dbadmin --from-literal=password=moresecurepass
3.
Mount it as a volume of db-container in mysecret-pod:

[desk@cli] $ vim /home/certs_masters/secret-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mysecret-pod
  namespace: safe
  labels:
    run: mysecret-pod
spec:
  containers:
  - name: db-container
    image: redis
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/mysecret
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: newsecret

[desk@cli] $ k apply -f /home/certs_masters/secret-pod.yaml
pod/mysecret-pod created
[desk@cli] $ k exec -it mysecret-pod -n safe -- cat /etc/mysecret/username
dbadmin
[desk@cli] $ k exec -it mysecret-pod -n safe -- cat /etc/mysecret/password
moresecurepass

NO.43 On the cluster worker node, enforce the prepared AppArmor profile:

#include <tunables/global>
profile nginx-deny flags=(attach_disconnected) {
  #include <abstractions/base>
  file,
  # Deny all file writes.
  deny /** w,
}

Edit the prepared manifest file to include the AppArmor profile:

apiVersion: v1
kind: Pod
metadata:
  name: apparmor-pod
spec:
  containers:
  - name: apparmor-pod
    image: nginx

Finally, apply the manifest file and create the Pod specified in it.
Verify: try to create a file inside the restricted directory.

NO.44 Create a new ServiceAccount named backend-sa in the existing namespace default, which has the capability to list the pods inside the namespace default. Create a new Pod named backend-pod in the namespace default, mount the newly created SA backend-sa to the pod, and verify that the pod is able to list pods. Ensure that the Pod is running.

A service account provides an identity for processes that run in a Pod. When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster).
Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default). When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace. If you get the raw JSON or YAML for a pod you have created (for example, kubectl get pods/<podname> -o yaml), you can see the spec.serviceAccountName field has been set automatically.

You can access the API from inside a pod using automatically mounted service account credentials, as described in Accessing the Cluster. The API permissions of the service account depend on the authorization plugin and policy in use. In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
...

In version 1.6+, you can also opt out of automounting API credentials for a particular pod:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
...

The pod spec takes precedence over the service account if both specify an automountServiceAccountToken value.

NO.45 a. Retrieve the content of the existing secret named default-token-xxxxx in the testing namespace. Store the value of the token in token.txt.
b.
Create a new secret named test-db-secret in the db namespace with the following content:
username: mysql
password: password@123
Create a Pod named test-db-pod of image nginx in the namespace db that can access test-db-secret via a volume at path /etc/mysql-credentials.

To add a Kubernetes cluster to your project, group, or instance:
Navigate to your:
- Project's Operations > Kubernetes page, for a project-level cluster.
- Group's Kubernetes page, for a group-level cluster.
- Admin Area > Kubernetes page, for an instance-level cluster.
Click Add Kubernetes cluster, then click the Add existing cluster tab and fill in the details:
- Kubernetes cluster name (required) - The name you wish to give the cluster.
- Environment scope (required) - The associated environment for this cluster.
- API URL (required) - The URL that GitLab uses to access the Kubernetes API. Kubernetes exposes several APIs; we want the "base" URL that is common to all of them, for example https://kubernetes.example.com rather than https://kubernetes.example.com/api/v1. Get the API URL by running this command:
kubectl cluster-info | grep -E 'Kubernetes master|Kubernetes control plane' | awk '/http/ {print $NF}'
- CA certificate (required) - A valid Kubernetes certificate is needed to authenticate to the cluster. We use the certificate created by default. List the secrets with kubectl get secrets; one should be named similar to default-token-xxxxx. Copy that token name for use below. Get the certificate by running this command:
kubectl get secret <secret name> -o jsonpath="{['data']['ca.crt']}"

NO.46 A container image scanner is set up on the cluster. Given an incomplete configuration in the directory /etc/kubernetes/confcontrol and a functional container image scanner with HTTPS endpoint https://test-server.local.8081/image_policy:
1. Enable the admission plugin.
2.
Validate the control configuration and change it to implicit deny. Finally, test the configuration by deploying a pod whose image tag is latest.

NO.47 SIMULATION
Using the runtime detection tool Falco, analyze the container behavior for at least 30 seconds, using filters that detect newly spawning and executing processes. Store the incident file at /opt/falco-incident.txt, containing the detected incidents, one per line, in the format:
[timestamp],[uid],[user-name],[processName]

NO.48 Task: Create a NetworkPolicy named pod-access to restrict access to Pod users-service running in namespace dev-team. Only allow the following Pods to connect to Pod users-service:

NO.49 Context: Cluster: gvisor. Master node: master1. Worker node: worker1.
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context gvisor
Context: This cluster has been prepared to support the runtime handler runsc as well as the traditional one.
Task: Create a RuntimeClass named not-trusted using the prepared runtime handler named runsc. Update all Pods in the namespace server to run on the new runtime.

[desk@cli] $ vim runtime.yaml

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: not-trusted
handler: runsc

[desk@cli] $ k apply -f runtime.yaml
[desk@cli] $ k get pods
NAME                    READY  STATUS   RESTARTS  AGE
nginx-6798fc88e8-chp6r  1/1    Running  0         11m
nginx-6798fc88e8-fs53n  1/1    Running  0         11m
nginx-6798fc88e8-ndved  1/1    Running  0         11m
[desk@cli] $ k get deploy
NAME   READY  UP-TO-DATE  AVAILABLE  AGE
nginx  3/3    3           3          5m

Find all the pods/deployments and set the runtimeClassName parameter to not-trusted under spec:
[desk@cli] $ k edit deploy nginx
spec:
  runtimeClassName: not-trusted   # Add this

NO.50 Create a network policy named restrict-np to restrict access to Pod nginx-test running in namespace testing. Only allow the following Pods to connect to Pod nginx-test:
1. pods in the namespace default
2.
pods with label version: v1, in any namespace.
Make sure to apply the network policy.

NO.51 You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context test-account
Task: Enable audit logs in the cluster. To do so, enable the log backend, and ensure that:
1. logs are stored at /var/log/kubernetes/logs.txt
2. log files are retained for 5 days
3. at maximum, 10 old audit log files are retained
A basic policy is provided at /etc/kubernetes/log-policy/audit-policy.yaml. It only specifies what not to log.
Note: The base policy is located on the cluster's master node.
Edit and extend the basic policy to log:
1. Node changes at the RequestResponse level
2. The request body of persistentvolumes changes in the namespace frontend
3. ConfigMap and Secret changes in all namespaces at the Metadata level
Also, add a catch-all rule to log all other requests at the Metadata level.
Note: Don't forget to apply the modified policy.
[desk@cli] $ ssh master1
[master1@cli] $ vim /etc/kubernetes/log-policy/audit-policy.yaml

apiVersion: audit.k8s.io/v1   # This is required.
kind: Policy
# Don't generate audit events for all requests in the RequestReceived stage.
omitStages:
- "RequestReceived"
rules:
# Don't log watch requests by "system:kube-proxy" on endpoints or services
- level: None
  users: ["system:kube-proxy"]
  verbs: ["watch"]
  resources:
  - group: ""   # core API group
    resources: ["endpoints", "services"]
# Don't log authenticated requests to certain non-resource URL paths.
- level: None
  userGroups: ["system:authenticated"]
  nonResourceURLs:
  - "/api*"   # Wildcard matching.
  - "/version"
# Add your changes below
- level: RequestResponse
  userGroups: ["system:nodes"]   # Rule for node changes
- level: Request
  resources:
  - group: ""   # core API group
    resources: ["persistentvolumes"]   # Rule for persistentvolumes
  namespaces: ["frontend"]   # ...in the frontend namespace
- level: Metadata
  resources:
  - group: ""   # core API group
    resources: ["configmaps", "secrets"]   # Rule for ConfigMaps & Secrets
- level: Metadata   # Catch-all rule for everything else

[master1@cli] $ vim /etc/kubernetes/manifests/kube-apiserver.yaml

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.0.0.5:6443
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.0.0.5
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --audit-policy-file=/etc/kubernetes/log-policy/audit-policy.yaml   # Add this
    - --audit-log-path=/var/log/kubernetes/logs.txt   # Add this
    - --audit-log-maxage=5   # Add this
    - --audit-log-maxbackup=10   # Add this
... output truncated

Note: The log volume and policy volume are already mounted in /etc/kubernetes/manifests/kube-apiserver.yaml, so there is no need to mount them.
Reference: https://kubernetes.io/docs/tasks/debug-application-cluster/audit/

NO.52 Use Trivy to scan the following images:
1. amazonlinux:1
2. k8s.gcr.io/kube-controller-manager:v1.18.6
Look for images with HIGH or CRITICAL severity vulnerabilities and store the output in /opt/trivy-vulnerable.txt.

NO.53 SIMULATION
Create a network policy named restrict-np to restrict access to Pod nginx-test running in namespace testing. Only allow the following Pods to connect to Pod nginx-test:
1. pods in the namespace default
2. pods with label version: v1, in any namespace
Make sure to apply the network policy.

The Linux Foundation CKS (Certified Kubernetes Security Specialist) exam is a certification program designed to test and validate the knowledge and skills of professionals in Kubernetes security. Kubernetes is an open-source container orchestration platform that is widely used by organizations to manage their containerized applications. As Kubernetes grows in popularity, the need for professionals with expertise in securing Kubernetes environments has also increased.
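The severity-filtering step of NO.52 can be sketched against simulated scanner output. A real run would invoke trivy image on each listed image and write to /opt/trivy-vulnerable.txt; the CVE lines below are made-up stand-ins, used only to show the filter:

```shell
# Simulated Trivy findings (fabricated lines standing in for real scan output).
cat > /tmp/trivy-out.txt <<'EOF'
| CVE-2021-0001 | openssl | MEDIUM   |
| CVE-2021-0002 | glibc   | HIGH     |
| CVE-2021-0003 | bash    | CRITICAL |
EOF
# Keep only HIGH/CRITICAL findings (NO.52 would store these in /opt/trivy-vulnerable.txt;
# /tmp is used here so the sketch runs anywhere).
grep -E 'HIGH|CRITICAL' /tmp/trivy-out.txt > /tmp/trivy-vulnerable.txt
wc -l < /tmp/trivy-vulnerable.txt
```

On a real exam host the same filter can be applied directly, e.g. trivy image amazonlinux:1 | grep -E 'HIGH|CRITICAL' >> /opt/trivy-vulnerable.txt, or via Trivy's own --severity HIGH,CRITICAL option.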
Linux Foundation Exam 2024 CKS Dumps Updated Questions: https://www.exams4sures.com/Linux-Foundation/CKS-practice-exam-dumps.html

Post date: 2024-09-12 13:06:21