What Kubernetes Pods and Services are and why they matter in 2026
A Pod is the smallest deployable unit in Kubernetes, encapsulating one or more containers that share storage, a network namespace, and a lifecycle. A Service is an abstraction that defines a logical set of Pods and a policy for accessing them, providing stable networking endpoints even as Pods are created and destroyed. In 2026, as Indian enterprises migrate from monolithic applications to microservices, driven by RBI's digital banking mandates and SEBI's cloud-first compliance frameworks, understanding how Pods communicate internally and how Services expose them externally is foundational for DevOps engineers, cloud architects, and network security professionals working at Cisco India, HCL, Akamai, and Aryaka.
Kubernetes orchestrates containerized workloads across clusters of nodes, but without Services, Pods would be ephemeral islands with no stable addressing. Services provide three primary exposure patterns: ClusterIP (internal-only access within the cluster), NodePort (exposes the Service on each node's IP at a static port), and LoadBalancer (provisions an external load balancer, typically in cloud environments like AWS, Azure, or GCP). Each pattern solves distinct networking challenges, from microservice-to-microservice communication to public internet ingress, and choosing the wrong type can lead to security vulnerabilities, latency spikes, or failed production deployments.
At Networkers Home's HSR Layout lab, we run a 24×7 Kubernetes cluster where students in our AWS DevOps course in Bangalore deploy multi-tier applications—frontend Pods communicating with backend Pods via ClusterIP Services, and exposing APIs to external clients via LoadBalancer Services integrated with AWS Elastic Load Balancing. This hands-on exposure mirrors real-world production patterns at our 800+ hiring partners, including Movate, Wipro, TCS, and Infosys, where graduates manage Kubernetes clusters serving millions of requests daily.
How Pods work under the hood
Every Pod receives a unique IP address from the cluster's Pod network CIDR, assigned by the Container Network Interface (CNI) plugin—commonly Calico, Flannel, Cilium, or AWS VPC CNI in managed Kubernetes services like EKS. Containers within the same Pod share this IP and communicate over localhost, making Pods ideal for tightly coupled processes like a web server and a logging sidecar. The kubelet agent on each node pulls container images from registries (Docker Hub, Amazon ECR, or private registries), starts containers using the container runtime (containerd or CRI-O), and mounts volumes defined in the Pod spec.
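To make the shared network namespace concrete, here is a minimal Pod sketch with a web server and a logging sidecar; the names, image tags, and command are illustrative rather than taken from any production deployment:
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar          # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.27             # assumed image tag
    ports:
    - containerPort: 80
  - name: log-sidecar
    image: busybox:1.36           # assumed image tag
    # Shares the Pod's network namespace, so it reaches the web
    # container on localhost:80 without any Service in between.
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 60; done"]
Both containers receive the same Pod IP; the Pod as a whole, not the individual containers, is what the rest of the cluster addresses.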
Pods are ephemeral by design. When a Pod crashes or a node fails, the ReplicaSet or Deployment controller creates a replacement Pod with a new IP address. This dynamic IP allocation breaks traditional networking assumptions where clients cache server IPs. Kubernetes solves this with Services, which maintain stable virtual IPs (VIPs) and use kube-proxy or eBPF-based implementations to load-balance traffic across healthy Pod replicas. The kube-proxy component on each node watches the Kubernetes API server for Service and Endpoint updates, then programs iptables rules (or IPVS rules in IPVS mode) to NAT incoming traffic to backend Pod IPs.
In our HSR Layout lab, we demonstrate this by intentionally killing Pods and watching how the Service endpoint list updates in real time. Students use kubectl get endpoints <service-name> to observe IP changes, then trace packet flows with tcpdump on worker nodes to see NAT transformations. This exercise clarifies why DNS-based service discovery—where applications resolve Service names like backend-svc.default.svc.cluster.local to ClusterIP addresses—is critical for resilient microservices architectures.
Pod networking model and CNI plugins
Kubernetes mandates a flat network model: every Pod can communicate with every other Pod without NAT, and nodes can communicate with all Pods. CNI plugins implement this model differently. Calico uses BGP to distribute Pod routes across nodes, enabling high-performance routing without overlay encapsulation. Flannel creates a VXLAN overlay network, encapsulating Pod traffic inside UDP packets for cross-node communication. Cilium uses eBPF for kernel-level packet filtering and observability, offering superior performance and security policy enforcement. In AWS EKS, the VPC CNI assigns secondary ENI IP addresses to Pods, integrating Kubernetes networking directly with VPC routing tables and security groups.
Choosing the right CNI impacts latency, throughput, and security posture. During our 4-month paid internship at the Network Security Operations Division, interns benchmark CNI plugins under simulated production loads—measuring packet loss, CPU overhead, and policy enforcement latency. These metrics inform architecture decisions at hiring partners like Barracuda and Aryaka, where network performance directly affects customer SLAs.
Understanding ClusterIP Services
A ClusterIP Service exposes Pods on an internal, cluster-scoped IP address. This IP is virtual—no physical interface binds to it—and is reachable only from within the cluster. ClusterIP is the default Service type and is used for microservice-to-microservice communication, database access from application tiers, and internal APIs that should never be exposed to the public internet.
When you create a ClusterIP Service, Kubernetes allocates an IP from the service CIDR range (configured during cluster bootstrap, typically 10.96.0.0/12 or 172.20.0.0/16). The kube-proxy on each node programs iptables rules to intercept traffic destined for this ClusterIP and DNAT it to one of the backend Pod IPs, chosen randomly in iptables mode or via schedulers such as round-robin in IPVS mode, with optional session affinity. CoreDNS, the cluster DNS server, creates an A record mapping the Service name to the ClusterIP, enabling applications to use DNS names instead of hardcoded IPs.
In practice, a three-tier application—frontend, backend, database—uses two ClusterIP Services: one for the backend API and one for the database. The frontend Pods resolve backend-svc to the backend ClusterIP, and backend Pods resolve postgres-svc to the database ClusterIP. This decouples service discovery from Pod lifecycle, allowing rolling updates and autoscaling without client-side configuration changes.
ClusterIP configuration example
apiVersion: v1
kind: Service
metadata:
  name: backend-svc
  namespace: production
spec:
  type: ClusterIP
  selector:
    app: backend
    tier: api
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600
This manifest creates a ClusterIP Service named backend-svc in the production namespace. The selector matches Pods labeled app=backend and tier=api. Traffic sent to backend-svc:8080 is load-balanced to port 8080 on matching Pods. The sessionAffinity: ClientIP directive ensures requests from the same client IP are routed to the same Pod for one hour, useful for stateful sessions or WebSocket connections.
Students in our AWS DevOps course in Bangalore deploy this configuration in our lab, then use kubectl exec to shell into a frontend Pod and curl the backend Service. They observe DNS resolution with nslookup backend-svc.production.svc.cluster.local, inspect iptables rules with iptables-save | grep backend-svc, and trace packet flows to understand NAT mechanics.
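For backend-svc to have any endpoints, a workload with matching labels must exist. The following Deployment sketch pairs with the Service above; the image name and replica count are assumptions for illustration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
      tier: api
  template:
    metadata:
      labels:
        app: backend              # must match the Service selector
        tier: api
    spec:
      containers:
      - name: backend
        image: registry.example.com/backend:1.0   # hypothetical image
        ports:
        - containerPort: 8080     # matches the Service targetPort
If the template labels drift from the Service selector, the Service silently ends up with zero endpoints, a pitfall covered later in this section.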
Understanding NodePort Services
A NodePort Service exposes Pods on a static port (30000-32767 by default) on every node's IP address. External clients can reach the Service by connecting to <NodeIP>:<NodePort>, and Kubernetes routes the traffic to backend Pods regardless of which node they run on. NodePort builds on ClusterIP—every NodePort Service also gets a ClusterIP for internal access—and adds an external entry point.
NodePort is commonly used in on-premises Kubernetes clusters where cloud load balancers are unavailable, in development environments for quick external access, and in hybrid architectures where an external hardware load balancer (F5, Citrix ADC, or Cisco ACI) distributes traffic across node IPs. However, NodePort has limitations: clients must know node IPs (which change if nodes are replaced), the port range is restricted, and there's no built-in health checking or SSL termination.
In production, NodePort is often paired with an external load balancer. For example, at Akamai India, edge nodes use NodePort Services to expose Kubernetes-hosted APIs, with Akamai's CDN performing SSL termination and global load balancing. At Cisco India, ACI fabric integrates with Kubernetes via the ACI CNI, automatically configuring endpoint groups (EPGs) and contracts to allow NodePort traffic through the data center firewall.
NodePort configuration example
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
  namespace: production
spec:
  type: NodePort
  selector:
    app: frontend
    tier: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30080
  externalTrafficPolicy: Local
This manifest creates a NodePort Service on port 30080. External clients access the frontend by connecting to any node's IP on port 30080, e.g., http://192.168.1.10:30080. The externalTrafficPolicy: Local directive preserves client source IPs and avoids an extra hop by routing traffic only to Pods on the receiving node. Without this setting, kube-proxy may forward traffic to Pods on other nodes, adding latency and obscuring the true client IP behind the node's IP.
We test this in our HSR Layout lab by deploying a NodePort Service, then using curl from an external VM to hit each node's IP. Students observe that with externalTrafficPolicy: Cluster (the default), requests are load-balanced across all Pods cluster-wide, but with Local, only Pods on the target node respond. This distinction is critical for logging, rate limiting, and compliance with DPDP Act requirements to track user IP addresses.
Understanding LoadBalancer Services
A LoadBalancer Service provisions an external load balancer from the cloud provider—AWS Elastic Load Balancing (ELB/ALB/NLB), Azure Load Balancer, or GCP Cloud Load Balancing—and assigns a public IP or DNS name. Traffic sent to this external endpoint is distributed across backend Pods, with the cloud load balancer performing health checks, SSL termination, and cross-availability-zone failover. LoadBalancer is the preferred Service type for production internet-facing applications in cloud environments.
When you create a LoadBalancer Service in EKS, the AWS Cloud Controller Manager (CCM) calls the ELB API to provision a Classic Load Balancer or, with the appropriate annotations, a Network Load Balancer (NLB); Application Load Balancers are typically created by the AWS Load Balancer Controller from Ingress resources rather than from Service objects. The load balancer's target group points to NodePort endpoints on worker nodes, and the cloud provider's health checks ensure traffic is routed only to healthy nodes. The external IP or DNS name is populated in the Service's status.loadBalancer.ingress field, which clients use to access the application.
LoadBalancer Services incur cloud provider costs—each Service provisions a separate load balancer, which can become expensive in multi-tenant clusters with dozens of Services. To optimize costs, many organizations use a single Ingress controller (NGINX, Traefik, or AWS ALB Ingress Controller) with a single LoadBalancer Service, then route traffic to multiple backend Services based on HTTP host headers or paths. This pattern is standard at our hiring partners like HCL and Movate, where cost efficiency is a KPI for cloud operations teams.
LoadBalancer configuration example for AWS EKS
apiVersion: v1
kind: Service
metadata:
  name: api-gateway-svc
  namespace: production
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
spec:
  type: LoadBalancer
  selector:
    app: api-gateway
    tier: edge
  ports:
  - protocol: TCP
    port: 443
    targetPort: 8443
  externalTrafficPolicy: Local
This manifest provisions an internet-facing Network Load Balancer in AWS, listening on port 443 and forwarding to port 8443 on Pods labeled app=api-gateway. The cross-zone-load-balancing-enabled annotation distributes traffic evenly across availability zones, improving fault tolerance. The externalTrafficPolicy: Local setting preserves client IPs and reduces latency by avoiding cross-node hops.
In our AWS DevOps course in Bangalore, students deploy this configuration in a live EKS cluster, then inspect the provisioned NLB in the AWS console. They configure Route 53 DNS records pointing to the NLB's DNS name, enable access logs to S3 for compliance auditing, and integrate AWS WAF for DDoS protection—mirroring production architectures at Cisco India and Barracuda.
ClusterIP vs NodePort vs LoadBalancer: choosing the right Service type
Selecting the appropriate Service type depends on access requirements, cost constraints, and infrastructure capabilities. The following table summarizes key differences and use cases:
| Service Type | Accessibility | IP Assignment | Cost | Use Cases | Limitations |
|---|---|---|---|---|---|
| ClusterIP | Internal only (within cluster) | Virtual IP from service CIDR | None | Microservice-to-microservice communication, internal APIs, databases | No external access; requires Ingress or NodePort for internet exposure |
| NodePort | External via <NodeIP>:<NodePort> | ClusterIP + static port (30000-32767) on all nodes | None (but requires external LB for HA) | On-premises clusters, dev/test environments, integration with external hardware LB | Clients must know node IPs; limited port range; no built-in SSL termination |
| LoadBalancer | External via cloud provider LB (public IP/DNS) | ClusterIP + NodePort + external LB IP | Cloud provider charges per LB | Production internet-facing apps in AWS/Azure/GCP, auto-scaling workloads | Cloud-only; cost scales with number of Services; requires cloud controller manager |
In practice, most production Kubernetes clusters use a combination: ClusterIP for internal Services, a single LoadBalancer Service for an Ingress controller, and NodePort sparingly for legacy integrations. At Aryaka, for example, SD-WAN edge appliances use NodePort to expose Kubernetes-hosted control-plane APIs to on-premises branch offices, while customer-facing dashboards use LoadBalancer Services with AWS ALB for HTTPS termination and WAF integration.
Founder Vikas Swami architected QuickZTNA's control plane using this hybrid pattern: internal microservices communicate via ClusterIP, the admin portal is exposed via a LoadBalancer Service with AWS NLB, and agent registration endpoints use NodePort with Calico network policies to restrict access to known data center IP ranges. This architecture balances security, cost, and operational simplicity—principles we teach in our DevOps curriculum.
Service discovery and DNS in Kubernetes
Kubernetes uses CoreDNS (or kube-dns in older clusters) to provide DNS-based service discovery. Every Service gets a DNS A record in the format <service-name>.<namespace>.svc.cluster.local, resolving to the ClusterIP. Pods in the same namespace can use the short name <service-name>, while Pods in other namespaces must qualify the name with at least the namespace (e.g., <service-name>.<namespace>) or use the fully qualified domain name (FQDN).
For example, a Pod in the frontend namespace can reach a backend Service in the backend namespace by resolving api-svc.backend.svc.cluster.local. CoreDNS watches the Kubernetes API for Service and Endpoint changes, updating DNS records in real time. This dynamic DNS eliminates the need for hardcoded IPs or external service registries like Consul or etcd, simplifying application configuration.
Headless Services—created by setting clusterIP: None—return the Pod IPs directly instead of a virtual ClusterIP. This is useful for stateful applications like databases or message queues, where clients need to connect to specific Pods rather than load-balanced endpoints. For example, a Cassandra cluster uses a headless Service so that each Cassandra Pod can discover and connect to its peers by DNS.
DNS resolution example
apiVersion: v1
kind: Service
metadata:
  name: postgres-svc
  namespace: database
spec:
  clusterIP: None
  selector:
    app: postgres
    role: primary
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
This headless Service creates DNS A records for each Pod matching the selector. If three Pods exist with IPs 10.244.1.5, 10.244.2.8, and 10.244.3.12, a DNS query for postgres-svc.database.svc.cluster.local returns all three IPs. Applications can then implement client-side load balancing or, when the Pods are managed by a StatefulSet (which gives each Pod a stable per-Pod DNS record), connect to a specific Pod by querying <pod-name>.postgres-svc.database.svc.cluster.local.
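The per-Pod DNS names come from pairing the headless Service with a StatefulSet through its serviceName field. A minimal sketch, with image and replica details chosen for illustration (a real database would also define volumeClaimTemplates):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: database
spec:
  serviceName: postgres-svc       # links Pods to the headless Service for stable DNS
  replicas: 3
  selector:
    matchLabels:
      app: postgres
      role: primary               # kept identical to the Service selector above
  template:
    metadata:
      labels:
        app: postgres
        role: primary
    spec:
      containers:
      - name: postgres
        image: postgres:16        # assumed image tag
        ports:
        - containerPort: 5432
This yields Pods postgres-0, postgres-1, and postgres-2, each resolvable as <pod-name>.postgres-svc.database.svc.cluster.local.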
We demonstrate this in our HSR Layout lab by deploying a StatefulSet with a headless Service, then using nslookup and dig from a debug Pod to observe DNS responses. Students compare headless Service behavior to standard ClusterIP Services, understanding when each pattern is appropriate. This knowledge is directly applicable at hiring partners like IBM and Accenture, where stateful workloads like Kafka and Elasticsearch run on Kubernetes.
Common pitfalls and interview gotchas
During CCIE DevOps and CCNP Cloud interviews at Cisco India, candidates are frequently asked to troubleshoot Service connectivity issues. Common pitfalls include:
- Selector mismatch: The Service selector doesn't match any Pod labels, resulting in zero endpoints. Use kubectl get endpoints <service-name> to verify backend Pods are registered.
- Port vs targetPort confusion: The port field is the Service port (what clients connect to), while targetPort is the container port. Mismatched values cause connection refused errors.
- Network policy blocking traffic: Calico or Cilium network policies may deny traffic between namespaces or from external sources. Use kubectl describe networkpolicy and test with kubectl exec from a debug Pod (see the sketch after this list).
- externalTrafficPolicy: Local with no local Pods: If a node receives traffic but has no matching Pods, the request fails. This is a common issue in autoscaling scenarios where Pods are unevenly distributed.
- LoadBalancer Service stuck in Pending: The cloud controller manager may not be installed, or IAM permissions may be insufficient to provision load balancers. Check kubectl describe service for error events.
- DNS resolution failures: CoreDNS Pods may be unhealthy, or the cluster DNS service IP may be misconfigured. Verify with kubectl get pods -n kube-system and kubectl exec <pod> -- nslookup kubernetes.default.
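Several of these checks run from a throwaway debug Pod. A minimal sketch of one (the busybox tag is an assumption; any image with nslookup and wget works):
apiVersion: v1
kind: Pod
metadata:
  name: debug
  namespace: production
spec:
  restartPolicy: Never
  containers:
  - name: debug
    image: busybox:1.36           # assumed image tag
    # Keep the Pod alive so you can kubectl exec into it and run
    # nslookup <service> or wget -qO- http://<service>:<port>.
    command: ["sleep", "3600"]
The DNS troubleshooting FAQ later in this section shows the equivalent one-shot kubectl run command.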
In our 4-month paid internship, interns troubleshoot these scenarios in simulated production environments, using kubectl logs, kubectl describe, tcpdump, and curl to diagnose and resolve issues. This hands-on debugging experience is what differentiates Networkers Home graduates in technical interviews at HCL, Wipro, and TCS, where troubleshooting skills are tested rigorously.
Interview question: How does kube-proxy implement Services?
Interviewers at Cisco and Akamai often ask candidates to explain how kube-proxy implements Services. kube-proxy itself runs in iptables mode (the default) or IPVS mode, and eBPF dataplanes such as Cilium can replace kube-proxy entirely. In iptables mode, kube-proxy programs DNAT rules to rewrite destination IPs from the ClusterIP to a randomly selected Pod IP. In IPVS mode, kube-proxy uses the Linux IPVS kernel module for more efficient load balancing with algorithms like least-connection and weighted round-robin. With an eBPF dataplane (Cilium), packet processing happens in the kernel without iptables, reducing latency and CPU overhead.
A strong answer includes mentioning that iptables mode has O(n) rule traversal complexity, causing performance degradation in clusters with thousands of Services, while IPVS and eBPF scale to tens of thousands of Services with minimal overhead. Candidates who reference real-world benchmarks—like those we conduct in our HSR Layout lab—stand out in interviews.
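For completeness in such interviews, switching kube-proxy to IPVS mode is a configuration change rather than a code change. A minimal KubeProxyConfiguration sketch, with the scheduler choice purely illustrative:
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"                      # default is "iptables"
ipvs:
  scheduler: "lc"                 # least-connection; "wrr" selects weighted round-robin
In kubeadm-built clusters this document lives in the kube-proxy ConfigMap in kube-system, and the kube-proxy Pods must be restarted to pick up the change.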
Real-world deployment scenarios at Indian enterprises
At Barracuda Networks India, Kubernetes clusters host web application firewalls (WAF) and email security gateways. Each customer tenant runs in a separate namespace, with ClusterIP Services for internal microservices and LoadBalancer Services for customer-facing APIs. AWS NLBs distribute traffic across availability zones, and Calico network policies enforce tenant isolation. During peak loads—such as phishing campaigns targeting Indian banks—autoscaling adds Pods dynamically, and Services automatically update endpoint lists to include new replicas.
At Aryaka Networks, SD-WAN controllers run on Kubernetes in AWS. Branch offices connect to NodePort Services on worker nodes, with IPsec tunnels terminating at node IPs. The control plane uses ClusterIP Services for internal communication between API servers, database replicas, and analytics engines. This architecture supports 10,000+ concurrent branch connections with sub-50ms latency, meeting SLAs for customers like HDFC Bank and Tata Consultancy Services.
At Cisco India's Bangalore office, the ACI fabric integrates with Kubernetes via the ACI CNI plugin. Services are automatically mapped to ACI endpoint groups (EPGs), and contracts define allowed traffic flows. For example, a frontend Service in the web EPG can communicate with a backend Service in the app EPG, but not with a database Service in the data EPG unless a contract explicitly permits it. This zero-trust networking model aligns with CERT-In's cybersecurity guidelines and is a key topic in our Container & Kubernetes Networking course.
Case study: Scaling a fintech application for UPI traffic
A fintech startup in Bengaluru—one of our hiring partners—processes UPI transactions for 5 million users. Their payment gateway runs on EKS with a LoadBalancer Service exposing an HTTPS API. During Diwali 2025, transaction volume spiked 10x, from 1,000 TPS to 10,000 TPS. The Horizontal Pod Autoscaler (HPA) scaled Pods from 10 to 100 replicas, and the LoadBalancer Service automatically updated its target group to include new Pod endpoints. AWS NLB health checks ensured only healthy Pods received traffic, and CloudWatch metrics tracked latency and error rates in real time.
The team used externalTrafficPolicy: Local to preserve client IPs for fraud detection and compliance with RBI's Payment and Settlement Systems Act. They also configured session affinity to route repeat requests from the same user to the same Pod, reducing database query overhead. This architecture handled the traffic surge with 99.99% uptime, and the team credited their success to training at Networkers Home, where they learned Kubernetes networking and AWS integration in our DevOps curriculum.
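A minimal HorizontalPodAutoscaler sketch consistent with that scenario; the Deployment name and CPU threshold are assumptions, not details from the actual deployment:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: payment-gateway-hpa       # hypothetical name
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: payment-gateway         # hypothetical Deployment
  minReplicas: 10
  maxReplicas: 100
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70    # illustrative scaling threshold
As Pods scale from 10 to 100 replicas, the Service's endpoint list grows with them, and the NLB continues to target healthy nodes without any manifest changes.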
How Kubernetes Pods and Services connect to CCNA, CCNP, and CCIE syllabus
While Kubernetes is not part of the traditional Cisco certification tracks, the networking concepts underlying Pods and Services map directly to CCNA, CCNP, and CCIE topics:
- CCNA: IP addressing, subnetting, NAT, DNS, and TCP/IP fundamentals. Understanding how Pods receive IPs from CIDR ranges and how Services perform DNAT is analogous to configuring NAT on Cisco routers.
- CCNP Enterprise: OSPF, BGP, VRFs, and policy-based routing. CNI plugins like Calico use BGP to advertise Pod routes, and Kubernetes network policies resemble ACLs and firewall rules on Cisco ASA and Firepower devices.
- CCIE Security: Zero-trust networking, micro-segmentation, and application-layer firewalling. Kubernetes network policies and service meshes (Istio, Linkerd) implement these concepts at the container level, complementing perimeter security with Cisco Secure Firewall and Cisco Umbrella.
- CCIE DevOps (emerging track): CI/CD pipelines, infrastructure as code, and cloud-native architectures. Kubernetes is central to this track, with Services and Ingress controllers enabling automated deployments and blue-green rollouts.
At Networkers Home, our curriculum bridges traditional networking and cloud-native technologies. Students who complete our CCNA and CCNP courses often transition to our AWS DevOps course in Bangalore, where they apply routing, switching, and security knowledge to Kubernetes and AWS. This integrated approach prepares graduates for roles at Cisco India, where network engineers increasingly manage hybrid infrastructures spanning data centers, public clouds, and edge locations.
Advanced Service features: session affinity, headless Services, and ExternalName
Beyond the three primary Service types, Kubernetes offers advanced features for specialized use cases:
Session affinity
By default, Services load-balance requests across Pods using round-robin or random selection. For stateful applications—like shopping carts or WebSocket connections—you can enable session affinity with sessionAffinity: ClientIP. This ensures all requests from the same client IP are routed to the same Pod for a configurable timeout period (default 10,800 seconds). Session affinity is implemented by kube-proxy, which hashes the client IP and consistently maps it to a backend Pod.
Headless Services
Setting clusterIP: None creates a headless Service, which returns Pod IPs directly in DNS queries instead of a virtual ClusterIP. This is essential for stateful applications like Cassandra, MongoDB, or Kafka, where clients need to connect to specific Pods. StatefulSets use headless Services to provide stable network identities: each Pod gets a DNS name like <pod-name>.<service-name>.<namespace>.svc.cluster.local, which persists across Pod restarts.
ExternalName Services
An ExternalName Service maps a Kubernetes Service name to an external DNS name, enabling applications to use Kubernetes DNS for external dependencies. For example, a Service named legacy-db can point to mysql.example.com, allowing Pods to connect to legacy-db.default.svc.cluster.local without hardcoding the external hostname. This is useful during migrations, where applications gradually move from external databases to Kubernetes-hosted replicas.
apiVersion: v1
kind: Service
metadata:
  name: legacy-db
  namespace: default
spec:
  type: ExternalName
  externalName: mysql.example.com
We use ExternalName Services in our lab to simulate hybrid architectures, where Kubernetes workloads integrate with on-premises databases and APIs. Students configure ExternalName Services, then trace DNS resolution and connection establishment with tcpdump and openssl s_client, understanding how Kubernetes abstracts external dependencies.
Security considerations for Pods and Services
Exposing Services—especially via NodePort and LoadBalancer—introduces security risks. Best practices include:
- Network policies: Use Calico or Cilium policies to restrict traffic between namespaces and Pods. For example, allow only frontend Pods to connect to backend Services, and deny all other traffic by default (a minimal policy sketch follows this list).
- TLS termination: Terminate TLS at the Ingress controller or LoadBalancer, not at individual Pods, to centralize certificate management and reduce attack surface. Use AWS Certificate Manager (ACM) or Let's Encrypt for automated certificate provisioning.
- Service accounts and RBAC: Assign minimal IAM roles to Pods using Kubernetes service accounts and AWS IAM Roles for Service Accounts (IRSA). Prevent Pods from accessing the Kubernetes API unless necessary.
- Pod security standards: Enforce restricted Pod security standards to prevent privileged containers, host network access, and privilege escalation. Use admission controllers like OPA Gatekeeper or Kyverno to validate Pod specs.
- DDoS protection: For LoadBalancer Services, enable AWS Shield Standard (free) or Shield Advanced (paid) to mitigate DDoS attacks. Configure rate limiting and WAF rules on ALBs to block malicious traffic.
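As referenced in the first item above, here is a minimal NetworkPolicy sketch that admits only frontend Pods to the backend on its service port and implicitly drops all other ingress; the labels reuse those from the earlier examples:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend                # policy applies to backend Pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend           # only frontend Pods may connect
    ports:
    - protocol: TCP
      port: 8080
Because the policy selects the backend Pods and lists a single ingress rule, any traffic not matching that rule is denied by the CNI's policy engine.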
At our Network Security Operations Division internship, students deploy Kubernetes clusters with defense-in-depth: network policies isolate workloads, Falco detects runtime anomalies, and Trivy scans container images for CVEs. This security-first approach mirrors production environments at Cisco India and Barracuda, where compliance with ISO 27001, SOC 2, and DPDP Act is mandatory.
Frequently asked questions
What happens if a Pod crashes while a Service is routing traffic to it?
When a Pod crashes, the kubelet detects the failure via liveness probes and restarts the container. During the restart, the Pod is marked as not ready, and kube-proxy removes it from the Service's endpoint list. Traffic is automatically redistributed to healthy Pods. If all Pods fail, the Service returns connection refused errors until at least one Pod becomes ready. Readiness probes ensure Pods receive traffic only when they can handle requests, preventing cascading failures.
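Probes are declared per container. A minimal Pod sketch showing both probe types, with paths and timings chosen for illustration:
apiVersion: v1
kind: Pod
metadata:
  name: backend-probe-demo        # hypothetical name
spec:
  containers:
  - name: backend
    image: registry.example.com/backend:1.0   # hypothetical image
    ports:
    - containerPort: 8080
    livenessProbe:                # failure triggers a container restart by the kubelet
      httpGet:
        path: /healthz            # assumed health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:               # failure removes the Pod from Service endpoint lists
      httpGet:
        path: /ready              # assumed readiness endpoint
        port: 8080
      periodSeconds: 5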
Can I use the same NodePort across multiple Services?
No. Each NodePort must be unique cluster-wide. If you try to create two Services with the same NodePort, the second creation will fail with a port conflict error. Kubernetes reserves the NodePort range (default 30000-32767) to avoid collisions with application ports. You can customize the range by setting --service-node-port-range on the kube-apiserver, but this is rarely necessary.
How do I expose multiple ports on a single Service?
A Service can expose multiple ports by defining multiple entries in the ports array. Each port must have a unique name field. For example, a web application might expose port 80 for HTTP and port 443 for HTTPS on the same Service. Backend Pods must listen on the corresponding targetPort values.
spec:
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
  - name: https
    protocol: TCP
    port: 443
    targetPort: 8443
Why does my LoadBalancer Service show <pending> for the external IP?
This indicates the cloud controller manager failed to provision a load balancer. Common causes include: missing cloud controller manager installation, insufficient IAM permissions (e.g., EKS worker nodes lack elasticloadbalancing:CreateLoadBalancer), or quota limits in the cloud account. Check kubectl describe service <name> for error events, and verify the cloud controller manager logs with kubectl logs -n kube-system <cloud-controller-manager-pod>.
How does externalTrafficPolicy: Local differ from Cluster?
With externalTrafficPolicy: Cluster (default), kube-proxy load-balances traffic across all Pods cluster-wide, potentially forwarding traffic from the receiving node to Pods on other nodes. This adds latency and obscures the client IP behind the node's IP. With externalTrafficPolicy: Local, traffic is routed only to Pods on the receiving node, preserving the client IP and reducing latency. However, if a node has no matching Pods, requests to that node fail, so you must ensure even Pod distribution or use a load balancer that performs node-level health checks.
Can I change a Service type from ClusterIP to LoadBalancer without downtime?
Yes. Update the Service manifest with type: LoadBalancer and apply it with kubectl apply -f service.yaml. Kubernetes provisions the load balancer asynchronously while the ClusterIP remains active. Once the external IP is assigned, clients can switch to the new endpoint. Existing ClusterIP connections continue uninterrupted. This zero-downtime migration is common during production cutover from internal testing to public launch.
How do I troubleshoot DNS resolution failures for Services?
First, verify CoreDNS Pods are running: kubectl get pods -n kube-system -l k8s-app=kube-dns. Check CoreDNS logs for errors: kubectl logs -n kube-system <coredns-pod>. Test DNS from a debug Pod: kubectl run -it --rm debug --image=busybox --restart=Never -- nslookup <service-name>.<namespace>.svc.cluster.local. If DNS fails, verify the cluster DNS service IP is correct in /etc/resolv.conf inside Pods, and ensure network policies allow traffic to CoreDNS on port 53.
What is the difference between a Service and an Ingress?
A Service exposes Pods at the transport layer (Layer 4), routing TCP or UDP traffic to backend Pods. An Ingress operates at the application layer (Layer 7), routing HTTP/HTTPS traffic based on hostnames and URL paths. Ingress requires an Ingress controller (NGINX, Traefik, AWS ALB Ingress Controller) to implement routing rules. Typically, you create a single LoadBalancer Service for the Ingress controller, then define Ingress resources to route traffic to multiple backend Services. This reduces cloud load balancer costs and centralizes SSL termination and WAF integration.
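A minimal Ingress sketch routing two paths to the Services defined earlier in this section; the host name and Ingress class are assumptions:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: production
spec:
  ingressClassName: nginx         # assumes the NGINX Ingress controller is installed
  rules:
  - host: app.example.com         # hypothetical host
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend-svc     # ClusterIP Service from the earlier example
            port:
              number: 8080
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-svc
            port:
              number: 80
One LoadBalancer Service fronts the Ingress controller itself, so both applications share a single cloud load balancer.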