Can you explain the fundamental difference between the Kubernetes Control Plane and the Worker Nodes?
The Control Plane is the “brain” of the cluster. It runs the services responsible for managing the cluster’s state, such as the API server, controller manager, scheduler, and the etcd datastore. The Worker Nodes, on the other hand, carry the actual workload. They receive instructions from the control plane to run containers inside pods and report back on their health.
What are the specific responsibilities of the kube-scheduler?
The scheduler is responsible for assigning (scheduling) new pods to specific nodes. It watches for pods that do not yet have a node assignment, evaluates the available resources and other constraints on each node, and places the pod accordingly. When more than one node could run the pod, the scheduler scores the candidates and selects the best placement.
Why is etcd considered a critical component of the Kubernetes architecture?
etcd is a consistent distributed key-value store that holds all the Kubernetes cluster configuration data and the state of the resources. It acts as the single source of truth for the cluster. Losing etcd is catastrophic because it contains the persistent state of all objects in the cluster.
How does the kubelet differ from other Kubernetes components in terms of how it is run on a node?
While many Kubernetes components run as pods, the kubelet typically runs as a standard Linux background process (a systemd service) on the host operating system. It is one of the few background processes that you generally do not want Kubernetes to manage as a container itself, because it is responsible for managing the containers on that node.
Explain the concept of “Declarative Syntax” versus “Imperative” commands in Kubernetes.
Imperative means running a specific sequence of commands to achieve a result (e.g., kubectl run...), which can be hard to track or reproduce. Declarative syntax involves describing the desired end state of the resource (usually in a YAML manifest) and submitting it to the API. Kubernetes then handles the logic to reconcile the current state to match that desired state.
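As an illustration, a minimal declarative manifest might look like the sketch below (the names and image tag are placeholders, not anything prescribed by the source):

```yaml
# Sketch of a declarative manifest; names and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # desired state: three replicas
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # placeholder image tag
```

Submitting this with kubectl apply -f deployment.yaml asks Kubernetes to reconcile toward the described state, whereas the imperative equivalent would be a one-off command such as kubectl create deployment web --image=nginx --replicas=3.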
What is the role of kube-proxy on a worker node?
kube-proxy runs on every node and acts as the network communication mechanism for Kubernetes Services. It maintains network rules (often using iptables) to ensure that traffic sent to a Service is routed and forwarded to the correct pods backing that Service.
If you needed to troubleshoot a failing kubelet process on a node, which tool would you use?
Since the kubelet runs as a systemd service, you would use journalctl to view its logs (e.g., journalctl -u kubelet). You would also use systemctl to check its status (systemctl status kubelet) or to start/enable the service.
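A typical troubleshooting sequence on the affected node (run over SSH) might look like this sketch:

```bash
# Check whether the kubelet systemd unit is running
systemctl status kubelet

# Review the most recent kubelet log entries
journalctl -u kubelet --no-pager -n 100

# After fixing the underlying issue, restart and enable the service
systemctl restart kubelet
systemctl enable kubelet
```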
Fundamentally, how does a user or tool interact with Kubernetes?
Kubernetes is essentially an application that wraps a RESTful API. Users interact with this API (usually via kubectl) to perform CRUD (Create, Read, Update, Delete) actions on resources. The API server acts as the entry point and gatekeeper for the cluster.
What is the Public Key Infrastructure (PKI) model in Kubernetes?
Kubernetes uses a PKI model for security, where a Certificate Authority (CA) acts as the root of trust. Components like the API server, kubelet, and scheduler use certificates to authenticate with each other. For example, a client presents its certificate to prove its identity, and because both sides trust the cluster CA, the CA's signature on the server's certificate lets the client verify that the server is who it claims to be.
What is a “Pod” in the context of Kubernetes architecture?
A pod is the smallest deployable unit in Kubernetes. It is an abstraction that represents a group of one or more containers running together on a node. Rather than managing individual containers directly, Kubernetes manages pods.
What are the four main components running on a Kubernetes control plane node, and what are their functions?
The four core components are the API server, which exposes the REST interface and acts as the cluster’s entry point; the Controller Manager, which ensures the current state matches the desired state (e.g., maintaining replica counts); the Scheduler, which assigns new pods to available nodes based on resources and constraints; and etcd, the distributed key-value store that holds all cluster configuration data.
How would you determine if a node is running into disk or memory pressure using the command line?
You can use the kubectl describe node <node-name> command. This output provides detailed conditions of the node, such as DiskPressure, MemoryPressure, and PIDPressure. Additionally, checking the status section will reveal if the node is Ready or NotReady due to these underlying resource issues.
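For example, either of the following surfaces the node conditions (the node name is a placeholder):

```bash
# Human-readable conditions, taints, and allocated resources
kubectl describe node <node-name>

# Or print only each condition type and its status via jsonpath
kubectl get node <node-name> \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
```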
What is the difference between a Taint and a Toleration?
A Taint is applied to a node to repel pods from being scheduled on it unless they have a matching toleration. A Toleration is applied to a pod, allowing (but not requiring) it to be scheduled on a node with a matching taint. This mechanism is primarily used to dedicate nodes to specific users or workloads, or to keep application workloads off the control plane.
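A minimal sketch, assuming a node named node1 and an illustrative key/value pair:

```bash
# Taint the node so only pods tolerating dedicated=gpu are scheduled there
kubectl taint nodes node1 dedicated=gpu:NoSchedule
```

```yaml
# Corresponding toleration in the pod spec
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "gpu"
  effect: "NoSchedule"
```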
Why do standard application pods not run on the control plane node by default?
The control plane node has a specific taint applied to it during cluster creation (e.g., node-role.kubernetes.io/control-plane:NoSchedule). This prevents application pods from being scheduled there to ensure the control plane has sufficient resources to manage the cluster. Only system pods with the specific toleration for that taint can run there.
How do you upgrade the control plane components of a cluster using kubeadm?
You first run kubeadm upgrade plan to check the current version and see available upgrades. Then, you execute kubeadm upgrade apply <version> to perform the upgrade. This updates components like the API server, controller manager, and scheduler to the target version.
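A hedged sketch of that flow (the target version is illustrative, and the kubeadm binary itself is normally upgraded via the OS package manager first):

```bash
# Show current/target versions and the component upgrade table
kubeadm upgrade plan

# Upgrade the control plane components to the chosen version
kubeadm upgrade apply v1.29.0   # illustrative version

# Afterwards, upgrade the kubelet/kubectl packages and restart the kubelet
systemctl daemon-reload && systemctl restart kubelet
```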
What is the role of the kubelet service, and how is it different from other control plane components?
The kubelet runs on every node (both worker and control plane) and is responsible for ensuring containers are running in a pod as specified by the API server. Unlike other components that run as pods, kubelet runs as a standard Linux background process (systemd service) on the host operating system.
If the kubelet service is down on a worker node, how would you troubleshoot it?
Since kubelet runs as a systemd service, you cannot debug it with kubectl logs. Instead, you must SSH into the node and use systemctl status kubelet to check its state. You can also view its logs using journalctl -u kubelet to identify why it failed.
What is the Container Runtime Interface (CRI), and which tool do you use to debug it?
CRI is a standard interface that enables the kubelet to communicate with various container runtimes (like containerd) without needing to recompile Kubernetes. To debug running containers directly at the runtime level (bypassing the Kubernetes API), you use the crictl command-line tool.
Where does the kubelet look for static pod manifests, and what is a static pod?
Static pods are managed directly by the kubelet on a specific node, not by the API server or scheduler. The kubelet watches the directory /etc/kubernetes/manifests for YAML files and automatically creates pods for any manifests found there. This is how control plane components like the API server and etcd are typically run.
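For example, on a kubeadm control plane node you can list the static pod manifests directly; dropping a new manifest into this directory causes the kubelet to create the pod, and deleting the file removes it:

```bash
# Static pod manifests watched by the kubelet (kubeadm default path)
ls /etc/kubernetes/manifests
# typically: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
```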
What is the purpose of the kube-proxy component?
kube-proxy runs on every node and maintains network rules (using technologies like iptables) to allow network communication to your pods. It facilitates the forwarding of traffic to the correct pod IP addresses when services are accessed.
How do you manually add a new worker node to an existing cluster using kubeadm?
You must generate a bootstrap token on the control plane using kubeadm token create --print-join-command. This outputs a complete kubeadm join command containing the token and the discovery token CA cert hash, which you then run on the new worker node to join it to the cluster.
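A sketch of the join flow (the endpoint, token, and hash below are placeholders printed by the first command):

```bash
# On the control plane node: print a ready-to-run join command
kubeadm token create --print-join-command

# On the new worker node: run the printed command, e.g.
kubeadm join <control-plane-endpoint>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```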
Explain the significance of the /etc/kubernetes/pki directory.
This directory contains the Public Key Infrastructure (PKI) files, including the Certificate Authority (CA), client certificates, and server certificates. These are essential for authentication and secure communication between cluster components like the API server, etcd, and kubelet.
What is a DaemonSet, and give an example of a component that runs as one.
A DaemonSet ensures that a copy of a specific pod runs on all (or selected) nodes in the cluster. System components like kube-proxy and CNI plugins (like Flannel or Calico) run as DaemonSets to ensure networking and proxy services are available on every node.
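A minimal DaemonSet sketch (the name and image are placeholders, not a real CNI or kube-proxy manifest):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent           # illustrative name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule   # also run on control plane nodes
      containers:
      - name: agent
        image: busybox:1.36  # placeholder image
        command: ["sh", "-c", "sleep infinity"]
```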
What is the Container Network Interface (CNI), and why is it necessary?
CNI is a specification and set of libraries for configuring network interfaces in Linux containers. It is necessary because Kubernetes does not provide a native network solution; it relies on CNI plugins (like Calico or Flannel) to configure pod networking, IP address management, and routing.
If a pod is stuck in CrashLoopBackOff, which tool would you use to inspect the container if kubectl logs is unavailable?
You would use crictl: specifically, crictl ps -a to list the containers (including exited ones) and identify the container ID, followed by crictl logs <container-id> or crictl inspect <container-id> to view the output and state directly from the container runtime on the node.
How does Kubernetes authenticate “normal” users differently from Service Accounts?
Kubernetes does not manage “normal” users; they are assumed to be managed externally (e.g., via certificates or LDAP) and cannot be created as Kubernetes objects. Service Accounts, however, are namespaced Kubernetes resources managed by the API, intended for processes running inside pods to communicate with the API server.
Describe the workflow to create a new user with specific permissions in Kubernetes.
- Generate a private key and a Certificate Signing Request (CSR) containing the user’s name (CN) using OpenSSL.
- Create a CertificateSigningRequest resource in Kubernetes with the base64-encoded CSR.
- Approve the CSR using kubectl certificate approve.
- Extract the signed certificate and add it to the user’s kubeconfig.
- Create a Role and RoleBinding to grant the user specific permissions (a condensed command sketch follows this list).
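A condensed sketch of these steps on a Linux workstation, assuming a user named carol in a developers group (all names, paths, and the cluster name are illustrative):

```bash
# 1. Key + CSR with the user name as the CN (and optional group as O)
openssl genrsa -out carol.key 2048
openssl req -new -key carol.key -subj "/CN=carol/O=developers" -out carol.csr

# 2. Wrap the base64-encoded CSR in a CertificateSigningRequest object
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: carol
spec:
  request: $(base64 -w0 < carol.csr)
  signerName: kubernetes.io/kube-apiserver-client
  usages: ["client auth"]
EOF

# 3. Approve the CSR and extract the signed certificate
kubectl certificate approve carol
kubectl get csr carol -o jsonpath='{.status.certificate}' | base64 -d > carol.crt

# 4. Add the credentials and a context to the user's kubeconfig
kubectl config set-credentials carol --client-key=carol.key --client-certificate=carol.crt
kubectl config set-context carol --cluster=<cluster-name> --user=carol
```

The final step (Role and RoleBinding) follows the same pattern as the RBAC examples shown later in this section.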
What is the difference between a Role and a ClusterRole?
A Role is a namespaced resource used to grant permissions within a single namespace (e.g., reading pods in “default”). A ClusterRole is a cluster-wide resource used to grant permissions across all namespaces or to cluster-scoped resources like Nodes and PersistentVolumes.
Can you bind a ClusterRole to a user for a specific namespace?
Yes. You can create a RoleBinding in a specific namespace that references a ClusterRole. This grants the user the permissions defined in the ClusterRole, but only within that specific namespace.
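For instance, a sketch binding the built-in view ClusterRole to a hypothetical user only in a staging namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view-staging         # illustrative name
  namespace: staging         # permissions apply only in this namespace
subjects:
- kind: User
  name: carol                # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                 # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io
```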
What command would you use to verify if a specific user (e.g., “carol”) can delete pods?
You would use the kubectl auth can-i command with the --as flag. Specifically: kubectl auth can-i delete pods --as carol.
What are the four essential components of a Role definition YAML?
The YAML must define the apiVersion (rbac.authorization.k8s.io/v1), the kind (Role), the metadata (name and namespace), and the rules. The rules specify the apiGroups, resources (e.g., pods), and verbs (e.g., get, list, watch) allowed.
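A minimal Role showing those four parts (the name and namespace are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader           # illustrative name
  namespace: default
rules:
- apiGroups: [""]            # "" means the core API group, where pods live
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```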
How do pods automatically authenticate to the API server?
Pods use a Service Account. By default, Kubernetes mounts a token for the default Service Account into the pod at /var/run/secrets/kubernetes.io/serviceaccount. The pod uses this token to authenticate its requests to the API server.
How can you prevent a Service Account token from being automatically mounted to a pod?
You can set the field automountServiceAccountToken: false in the ServiceAccount definition or directly in the Pod specification. This prevents the secret containing the token from being mounted as a volume.
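For example, either of the following works (the account and pod names are placeholders); the Pod-level setting takes precedence over the ServiceAccount-level one:

```yaml
# On the ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-token-sa
automountServiceAccountToken: false
---
# Or directly on the Pod spec
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod
spec:
  serviceAccountName: no-token-sa
  automountServiceAccountToken: false
  containers:
  - name: app
    image: busybox:1.36      # placeholder image
    command: ["sleep", "infinity"]
```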
What is the cluster-admin ClusterRole?
The cluster-admin role provides unrestricted “super-user” access to the cluster. It allows performing any action on any resource. It is typically bound to the system:masters group or cluster administrators.
If you create a RoleBinding but forget to create the Role it references, what happens?
The RoleBinding can still be created; Kubernetes does not require the referenced Role to exist at creation time. However, it grants no permissions until a Role with that name exists in the namespace, and because the roleRef field is immutable, you cannot later point the binding at a different Role without recreating it.
How do you bind a role to a group rather than a specific user?
In the subjects section of the RoleBinding YAML, you specify the kind as Group and the name as the group name (e.g., developers) defined in the certificate’s Organization (O) field or external identity provider.
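For example, the subjects section might reference a group instead of a user (the group name is illustrative):

```yaml
subjects:
- kind: Group
  name: developers           # matches the certificate's O field or the IdP group
  apiGroup: rbac.authorization.k8s.io
```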
Explain the “Principle of Least Privilege” in the context of Kubernetes RBAC.
It means granting only the minimum permissions necessary for a user or service account to perform their function. For example, if a pod only needs to view other pods, you should create a Role allowing only get and list verbs on pods, rather than giving it admin access.
What are the three standard groups managed by the API server?
- system:authenticated: included in the group list for all authenticated users.
- system:unauthenticated: included for users who cannot be authenticated.
- system:masters: a group that generally has unrestricted access to the cluster (often bound to cluster-admin).
How can you determine which actions are allowed for a specific ClusterRole?
You can use kubectl describe clusterrole <role-name> to see a human-readable list of rules, or kubectl get clusterrole <role-name> -o yaml to see the raw rule definitions including resources and verbs.
Why might you use a RoleBinding with a Service Account?
This is used to authorize applications running inside pods (machines) to access the API. For example, if you have a CI/CD tool running as a pod that needs to deploy applications, you would bind a Role that allows the create verb on deployments to that pod's Service Account.
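A sketch under those assumptions (the namespace, names, and Role rules are all illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: ci
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["create", "update", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer-binding
  namespace: ci
subjects:
- kind: ServiceAccount
  name: ci-runner            # hypothetical Service Account used by the CI pod
  namespace: ci
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
```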
