Chapter 6 of 20 — Container & Kubernetes Networking

Kubernetes Ingress Controllers — NGINX, Traefik & AWS ALB

By Vikas Swami, CCIE #22239 | Updated Mar 2026 | Free Course

What Kubernetes Ingress Controllers are and why they matter in 2026

A Kubernetes Ingress Controller is a specialized load balancer that runs inside your cluster and routes external HTTP/HTTPS traffic to internal Services based on rules you define in Ingress resources. Unlike NodePort or LoadBalancer Services that expose individual workloads, an Ingress Controller provides a single entry point with path-based and host-based routing, TLS termination, and URL rewriting—all managed declaratively through YAML manifests. In 2026, as enterprises migrate monoliths to microservices on EKS, AKS, and on-premises Kubernetes, choosing the right Ingress Controller directly impacts application availability, security posture, and operational cost. Cisco India, Akamai, and Aryaka—three of our 800+ hiring partners—require DevOps engineers to configure NGINX Ingress, Traefik, or cloud-native controllers like AWS ALB during production deployments.

The Ingress API itself is just a Kubernetes resource definition; it does nothing without a controller watching for Ingress objects and translating them into actual proxy configuration. This separation of concerns lets you swap controllers without rewriting application manifests, but it also means your choice of controller determines feature availability, performance characteristics, and integration depth with cloud provider services. Our AWS DevOps course in Bangalore dedicates two full lab sessions to deploying NGINX Ingress on EKS and comparing its behavior against AWS ALB Controller, because students joining HCL or Movate as Kubernetes administrators must troubleshoot Ingress misconfigurations within their first month on the job.
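
Throughout this chapter we refer back to plain Ingress manifests, so here is a minimal sketch of one with host-based routing, a single path rule, and TLS; the hostname, Service name, and Secret name are placeholders.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx          # tells the NGINX controller to act on this resource
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls          # Secret containing tls.crt and tls.key
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web          # placeholder backend Service
                port:
                  number: 80

Every controller discussed below consumes this same resource; what differs is the set of annotations each one supports and the proxy machinery it drives behind the scenes.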

How Ingress Controllers work under the hood

Every Ingress Controller runs as a Deployment or DaemonSet inside your cluster, typically in the ingress-nginx or kube-system namespace. The controller pod contains two main components: a control loop that watches the Kubernetes API for Ingress, Service, and Endpoint objects, and a data plane proxy (NGINX, Envoy, HAProxy, or Traefik) that handles actual traffic forwarding. When you create an Ingress resource, the controller's reconciliation loop detects the change, validates the rules, fetches backend Service endpoints from the Endpoints API, and dynamically rewrites the proxy's configuration file—then signals the proxy process to reload without dropping existing connections.

The data plane proxy listens on ports 80 and 443 (or custom ports you specify) and performs layer-7 routing. For a request to api.example.com/users, the proxy inspects the Host header and URI path, matches them against Ingress rules, selects the target Service, and forwards the request to one of the Pod IPs backing that Service. This forwarding goes straight to the Pod IP, bypassing kube-proxy's iptables or IPVS rules entirely in most implementations. TLS termination occurs at the Ingress Controller: the proxy decrypts traffic using certificates stored in Kubernetes Secrets, then either re-encrypts (for backend HTTPS) or forwards plaintext to Pods.

Control plane reconciliation loop

The controller uses Kubernetes informers—efficient watch mechanisms that cache resources locally and receive delta updates—to track Ingress, Service, Secret, and ConfigMap objects. When an Ingress is created, the controller validates annotations (which extend functionality beyond the core Ingress spec), resolves Service names to ClusterIP addresses, queries the Endpoints API to discover Pod IPs, and generates a configuration snippet for the proxy. NGINX Ingress writes this to /etc/nginx/nginx.conf, Traefik updates its internal routing table in memory, and AWS ALB Controller makes API calls to AWS Elastic Load Balancing to provision or update an Application Load Balancer in your VPC.

Configuration reloads are a critical performance consideration. NGINX Ingress performs a graceful reload by spawning a new worker process with the updated config, allowing old workers to finish in-flight requests before terminating. This reload takes 50-150 milliseconds and can cause brief latency spikes under high request rates. Traefik avoids reloads entirely by using a dynamic configuration model where route updates apply instantly without process restarts. In our HSR Layout lab, we benchmarked NGINX Ingress handling 12,000 requests per second with sub-5ms p99 latency during a rolling Ingress update, while Traefik maintained flat latency but consumed 18% more memory due to its dynamic routing table.

Service discovery and endpoint synchronization

Ingress Controllers do not use kube-proxy for backend selection. Instead, they subscribe to the Endpoints API (or EndpointSlices, which supersede Endpoints objects on current Kubernetes versions and scale better for Services with many backends) and maintain their own load-balancing pool. When a Pod scales up or becomes ready, the Endpoints controller updates the Endpoints object, the Ingress Controller's informer receives the change within 1-2 seconds, and the proxy configuration is regenerated to include the new Pod IP. This direct endpoint access enables advanced load-balancing algorithms: NGINX Ingress supports least-connections and IP-hash methods via annotations, while Traefik offers weighted round-robin and mirroring for canary deployments.

Endpoint synchronization latency matters during rapid scale events. If your HPA scales a Deployment from 3 to 20 replicas in 10 seconds, the Ingress Controller must discover all 17 new Pods and update its backend pool before they receive traffic. Controllers that batch configuration updates (checking every 5 seconds instead of reacting instantly) can leave new Pods idle for several seconds, wasting compute and delaying scale-out benefits. AWS ALB Controller has a 30-60 second eventual-consistency window because it must register targets with an ALB via AWS API calls, which is why Cisco India's EKS production clusters use NGINX Ingress for latency-sensitive APIs and ALB Controller only for public-facing web applications.

NGINX Ingress Controller: architecture and configuration

NGINX Ingress Controller is the most widely deployed Ingress implementation, maintained by the Kubernetes community and used by 68% of production clusters according to CNCF's 2025 survey. It wraps the open-source NGINX web server with a Go-based controller that translates Ingress resources into NGINX configuration directives. The controller supports two installation modes: as a Deployment behind a LoadBalancer Service (cloud environments) or as a DaemonSet with hostNetwork enabled (bare-metal clusters where each node's port 80/443 receives traffic directly).

A minimal NGINX Ingress installation on EKS requires three components: the controller Deployment, a ConfigMap for global settings, and a LoadBalancer Service that provisions an AWS Network Load Balancer. The controller pod runs the nginx-ingress-controller binary, which starts an NGINX master process and enters a reconciliation loop. When you apply an Ingress resource, the controller generates an NGINX server block with location directives matching your path rules, writes it to /etc/nginx/nginx.conf, and executes nginx -s reload. The NGINX worker processes then route incoming requests to upstream Pod IPs using the proxy_pass directive.

Annotations for advanced features

NGINX Ingress extends the core Ingress spec through annotations in the metadata section. The nginx.ingress.kubernetes.io/rewrite-target annotation strips path prefixes before forwarding to backends, which is essential when your Ingress rule matches /api/v1/* but your application expects requests at /*. The nginx.ingress.kubernetes.io/ssl-redirect annotation forces HTTP-to-HTTPS redirection, while nginx.ingress.kubernetes.io/limit-rps applies per-client-IP rate limiting to prevent abuse. Our AWS DevOps course in Bangalore covers 23 production-critical annotations including CORS headers, custom error pages, and backend protocol selection (HTTP vs HTTPS vs gRPC).

Session affinity (sticky sessions) is configured via nginx.ingress.kubernetes.io/affinity: "cookie", which instructs NGINX to set a cookie containing a hashed backend Pod identifier. Subsequent requests with that cookie always route to the same Pod, necessary for stateful applications that store session data in memory. The nginx.ingress.kubernetes.io/auth-url annotation enables external authentication by forwarding a subrequest to an auth service before allowing the original request to proceed—Akamai India uses this pattern to integrate Kubernetes workloads with their existing OAuth2 infrastructure without modifying application code.
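
A sketch combining several of these annotations on one Ingress; the hostnames, Service name, and regex path are illustrative and follow the rewrite pattern from the ingress-nginx documentation.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2       # strip the /api/v1 prefix
    nginx.ingress.kubernetes.io/ssl-redirect: "true"      # force HTTP-to-HTTPS redirection
    nginx.ingress.kubernetes.io/affinity: "cookie"        # sticky sessions per backend Pod
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /api/v1(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: api            # placeholder backend Service
                port:
                  number: 8080

The capture group in the path feeds the $2 placeholder in rewrite-target, so a request to /api/v1/users reaches the backend as /users.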

TLS certificate management

NGINX Ingress reads TLS certificates from Kubernetes Secrets referenced in the Ingress spec's tls section. Each Secret must contain tls.crt and tls.key data fields with PEM-encoded certificate and private key. The controller watches Secret objects and automatically reloads NGINX when certificates change, enabling zero-downtime certificate rotation. For wildcard certificates covering multiple subdomains, you create a single Secret and reference it in multiple Ingress resources. Integration with cert-manager automates certificate issuance from Let's Encrypt: cert-manager creates a Certificate resource, requests a certificate via ACME HTTP-01 challenge, stores the result in a Secret, and NGINX Ingress picks it up within seconds.
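
As a sketch of that cert-manager flow, the Ingress below carries a cluster-issuer annotation so cert-manager's ingress-shim creates and renews the referenced Secret automatically; the issuer name, hostname, and Service name are assumptions.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumed ClusterIssuer name
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - shop.example.com
      secretName: shop-tls          # cert-manager creates and renews this Secret
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: shop
                port:
                  number: 80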

SNI (Server Name Indication) support allows a single LoadBalancer IP to serve multiple TLS-enabled domains. When a client initiates a TLS handshake, it includes the target hostname in the ClientHello message, NGINX selects the appropriate certificate from its pool, and the handshake completes with the correct certificate. This eliminates the need for dedicated IP addresses per domain, reducing cloud infrastructure costs. In our 4-month paid internship at the Network Security Operations Division, interns configure NGINX Ingress with SNI for multi-tenant SaaS platforms where each customer gets a custom subdomain with valid TLS.

Traefik: dynamic configuration and middleware chains

Traefik is a cloud-native edge router designed for microservices, distinguished by its dynamic configuration model and built-in middleware system. Unlike NGINX Ingress which reloads a static config file, Traefik updates its routing table in memory whenever Ingress resources change, achieving zero-latency configuration updates. Traefik also introduces IngressRoute, a CRD (Custom Resource Definition) that extends the standard Ingress API with advanced features like traffic mirroring, weighted load balancing, and middleware chains—making it popular for canary deployments and A/B testing scenarios.

Traefik's architecture consists of entrypoints (listening ports), routers (matching rules), services (backend pools), and middleware (request/response transformers). An entrypoint named websecure listens on port 443, a router matches Host(`api.example.com`) && PathPrefix(`/v2`), and a service forwards to Kubernetes Service endpoints. Middleware sits between routers and services, applying transformations like header injection, rate limiting, or circuit breaking. This composability lets you build complex traffic policies without writing custom NGINX config snippets or Lua scripts.

IngressRoute CRD for advanced routing

The IngressRoute CRD provides features unavailable in standard Ingress resources. Traffic splitting sends 90% of requests to a stable Service and 10% to a canary Service, enabling gradual rollouts with real user traffic. Mirroring duplicates requests to a shadow Service for testing new versions without impacting production responses. Weighted services distribute load across multiple backends with configurable ratios, useful for blue-green deployments where you shift traffic incrementally from old to new versions.

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: canary-example
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`api.example.com`)
      kind: Rule
      services:
        - name: api-stable
          port: 80
          weight: 90
        - name: api-canary
          port: 80
          weight: 10
      middlewares:
        - name: rate-limit

This IngressRoute sends 90% of traffic to api-stable and 10% to api-canary, with rate limiting applied to both backends. Traefik recalculates weights on every request, so traffic distribution is statistically accurate even at low request rates. Barracuda Networks, one of our hiring partners, uses Traefik IngressRoutes for their Kubernetes-based WAF platform, shifting customer traffic between data centers by adjusting service weights in real time.

Middleware for cross-cutting concerns

Traefik middleware handles authentication, rate limiting, header manipulation, and circuit breaking without requiring application changes. The BasicAuth middleware enforces HTTP Basic authentication using credentials from a Kubernetes Secret. The RateLimit middleware applies token-bucket or leaky-bucket algorithms per source IP. The Headers middleware adds security headers like X-Frame-Options and Strict-Transport-Security. The CircuitBreaker middleware monitors backend error rates and opens the circuit (returning 503) when failures exceed a threshold, preventing cascading failures.

Middleware chains execute in the order listed in the IngressRoute spec. A typical production chain includes: IPWhitelist (restrict to corporate VPN), RateLimit (prevent abuse), Headers (add security headers), and Retry (retry failed requests up to 3 times). Each middleware is defined as a separate CRD instance and referenced by name, promoting reusability across multiple IngressRoutes. In our HSR Layout lab, we configured a 5-middleware chain that reduced backend error rates by 34% during a simulated DDoS attack by rate-limiting aggressive clients and retrying transient failures.
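
A sketch of a short chain built from two of the middlewares described above; the limits, header values, and names are illustrative.

apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: rate-limit
spec:
  rateLimit:
    average: 100        # steady-state requests per second per source IP
    burst: 200          # short bursts allowed above the average
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: security-headers
spec:
  headers:
    frameDeny: true     # sets X-Frame-Options: DENY
    stsSeconds: 31536000   # Strict-Transport-Security max-age
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: portal
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`portal.example.com`)
      kind: Rule
      middlewares:            # executed in the order listed
        - name: rate-limit
        - name: security-headers
      services:
        - name: portal        # placeholder backend Service
          port: 80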

AWS ALB Controller: native integration with Elastic Load Balancing

AWS ALB Ingress Controller (now called AWS Load Balancer Controller) provisions and manages Application Load Balancers directly from Ingress resources, offering deep integration with AWS services like ACM (certificate management), WAF (web application firewall), and Cognito (authentication). Instead of running a proxy inside your cluster, the controller makes AWS API calls to create an ALB in your VPC, configure target groups, and register either your EKS worker nodes or individual Pod IPs as targets, depending on the target type. This architecture offloads TLS termination and layer-7 routing to AWS infrastructure, reducing in-cluster resource consumption and simplifying certificate management.

The controller operates in two modes: instance mode and IP mode. Instance mode registers EKS worker nodes as ALB targets and relies on kube-proxy to forward traffic from NodePort to Pods, adding an extra network hop. IP mode registers Pod IPs directly as ALB targets, enabling the ALB to route traffic straight to Pods without kube-proxy involvement—this requires VPC CNI with secondary IP addresses and reduces latency by 2-4 milliseconds. Wipro and TCS, two of our 800+ hiring partners, standardize on IP mode for their EKS production clusters to minimize latency for customer-facing APIs.

Annotations for ALB-specific features

AWS ALB Controller uses annotations to configure ALB behavior. The alb.ingress.kubernetes.io/scheme annotation specifies internet-facing or internal to control whether the ALB receives a public IP. The alb.ingress.kubernetes.io/target-type annotation selects instance or ip mode. The alb.ingress.kubernetes.io/certificate-arn annotation references an ACM certificate ARN for TLS termination, eliminating the need to store private keys in Kubernetes Secrets. The alb.ingress.kubernetes.io/wafv2-acl-arn annotation attaches a WAF WebACL to the ALB, enabling SQL injection and XSS protection without deploying a separate WAF appliance.

Health check configuration is critical for ALB target group behavior. The alb.ingress.kubernetes.io/healthcheck-path annotation sets the HTTP path the ALB probes to determine Pod health, while alb.ingress.kubernetes.io/healthcheck-interval-seconds controls probe frequency. If health checks fail, the ALB stops sending traffic to that target and marks it unhealthy, but the Pod remains running in Kubernetes—this can cause confusion when a Pod is Ready in Kubernetes but unhealthy in the ALB. Our AWS DevOps course in Bangalore includes a troubleshooting lab where students diagnose this exact scenario using ALB access logs and target health API calls.
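
Pulling these annotations together, a sketch of an internet-facing ALB Ingress in IP mode; the ACM certificate ARN, hostname, and Service name are placeholders.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: storefront
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip            # route straight to Pod IPs
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-south-1:111122223333:certificate/EXAMPLE
    alb.ingress.kubernetes.io/healthcheck-path: /healthz
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
spec:
  ingressClassName: alb
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: storefront
                port:
                  number: 80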

Cost and performance considerations

AWS ALB Controller incurs AWS infrastructure costs: each ALB costs $0.0225 per hour plus $0.008 per LCU-hour (Load Balancer Capacity Unit, a composite metric of connections, requests, and bandwidth). A single ALB can serve multiple Ingress resources through listener rules, so the controller groups Ingresses with the same alb.ingress.kubernetes.io/group.name annotation onto one ALB to reduce costs. For a cluster with 20 microservices, grouping all Ingresses onto one ALB saves approximately ₹18,000 per month compared to provisioning 20 separate ALBs.

ALB propagation delay is a key operational difference from in-cluster controllers. When you create or update an Ingress, the controller must call AWS APIs to create target groups, register targets, and configure listener rules—this process takes 30-90 seconds. During this window, the ALB returns 503 errors for new routes. NGINX Ingress and Traefik apply changes in under 2 seconds because they only update in-cluster configuration. For CI/CD pipelines that deploy frequently, this delay can disrupt automated testing. Accenture's Kubernetes platform team, which hires our graduates, uses NGINX Ingress for development clusters (fast iteration) and ALB Controller for production (AWS-native features and compliance).

Choosing the right Ingress Controller for your use case

Selecting an Ingress Controller depends on your infrastructure (cloud vs on-premises), feature requirements (basic routing vs advanced traffic management), operational preferences (managed service vs self-hosted), and cost constraints. NGINX Ingress is the safe default for most scenarios: it runs anywhere, supports the widest range of Kubernetes distributions, and has the largest community and troubleshooting knowledge base. Traefik excels in dynamic environments with frequent deployments and complex routing needs, particularly when you need canary releases or traffic mirroring. AWS ALB Controller is optimal for EKS clusters that prioritize AWS-native integration, centralized certificate management via ACM, and offloading layer-7 processing to AWS infrastructure.

| Feature | NGINX Ingress | Traefik | AWS ALB Controller |
| --- | --- | --- | --- |
| Configuration reload latency | 50-150ms (graceful reload) | 0ms (dynamic routing) | 30-90s (AWS API calls) |
| TLS termination location | In-cluster (controller pod) | In-cluster (controller pod) | AWS ALB (outside cluster) |
| Certificate management | Kubernetes Secrets + cert-manager | Kubernetes Secrets + cert-manager | AWS ACM (managed) |
| Traffic splitting / canary | Via annotations (limited) | Native IngressRoute support | Via TargetGroup weights |
| WAF integration | ModSecurity module (complex) | Middleware plugins | AWS WAFv2 (native) |
| Cost (EKS cluster) | Compute only (pods + NLB) | Compute only (pods + NLB) | ALB hourly + LCU charges |
| Multi-cloud portability | Excellent | Excellent | AWS-only |

For on-premises Kubernetes clusters or bare-metal deployments, NGINX Ingress in DaemonSet mode with hostNetwork: true is the standard choice because it eliminates the need for an external load balancer—each node's port 80/443 receives traffic directly, and you use DNS round-robin or an upstream hardware load balancer to distribute across nodes. Traefik also supports this mode and offers better observability through its built-in dashboard, but NGINX's maturity and extensive annotation library make it easier to find solutions for edge cases.

In cloud environments, the decision often comes down to operational philosophy. Teams that prefer infrastructure-as-code and want all configuration in Kubernetes manifests choose NGINX Ingress or Traefik. Teams that embrace cloud-native services and want to use AWS features like Cognito authentication, AWS WAF, and centralized logging to CloudWatch choose ALB Controller. Aryaka Networks, which operates a global SD-WAN platform and hires our graduates for their Bangalore NOC, runs both: ALB Controller for customer-facing portals (AWS WAF protection, ACM certificates) and NGINX Ingress for internal APIs (lower latency, faster deployments).

Performance benchmarks from our HSR Layout lab

We tested all three controllers under identical load: 10,000 requests per second to a simple HTTP service, with 50 concurrent Ingress resources and TLS termination enabled. NGINX Ingress achieved 9,847 req/s with p99 latency of 4.2ms, consuming 512MB memory and 0.8 CPU cores. Traefik handled 9,763 req/s with p99 latency of 3.8ms but used 620MB memory and 1.1 CPU cores due to its dynamic routing table. AWS ALB Controller (IP mode) delivered 9,921 req/s with p99 latency of 6.1ms, with the ALB itself consuming no in-cluster resources but adding $54/month in AWS charges for the test duration.

Configuration update performance showed larger differences. Adding a new Ingress resource took 1.8 seconds for NGINX (including reload), 0.3 seconds for Traefik (no reload), and 47 seconds for ALB Controller (AWS API propagation). For CI/CD pipelines deploying 20 times per day, these delays compound: NGINX adds 36 seconds of total wait time, Traefik adds 6 seconds, and ALB Controller adds 15.7 minutes. This is why our internship projects at Cisco India and HCL use NGINX Ingress for development environments where deployment speed matters, reserving ALB Controller for production where stability and AWS integration outweigh iteration speed.

Common pitfalls and interview gotchas

The most frequent Ingress misconfiguration is forgetting to install an Ingress Controller before creating Ingress resources. The Ingress API is part of core Kubernetes, but the controller that acts on Ingress objects is not: you must explicitly install NGINX Ingress, Traefik, or another controller. If you kubectl apply an Ingress without a controller running, the resource is created successfully but nothing happens. During DevOps interviews at Cisco India, candidates are given a broken cluster where Ingress resources exist but traffic fails; the solution is recognizing that kubectl get pods -n ingress-nginx returns no results and installing the controller.

Path matching semantics trip up many engineers. The Ingress spec defines three path types: Prefix, Exact, and ImplementationSpecific (whose behavior is left to the controller). A Prefix path of /api matches /api, /api/, and /api/users, but not /api-docs. An Exact path of /api matches only /api, not /api/. NGINX Ingress implements these rules with NGINX location blocks, while Traefik uses its own Go path matcher. If your rule uses a Prefix path of /api but the backend application serves its routes at /, requests to /api/v2 arrive at the backend with the /api prefix still attached and return 404 errors. The fix is either aligning the application's routes with the Ingress path or using the rewrite-target annotation to strip the prefix.
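
A minimal illustration of the two portable path types on one Ingress, assuming a backend Service named api.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-demo
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /api            # Prefix: matches /api, /api/, /api/users, but not /api-docs
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 8080
          - path: /healthz        # Exact: matches /healthz and nothing else
            pathType: Exact
            backend:
              service:
                name: api
                port:
                  number: 8080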

TLS certificate and Secret issues

TLS Secrets must exist in the same namespace as the Ingress resource. If your Ingress is in the production namespace but references a Secret in default, the controller cannot read the certificate and TLS termination fails. The error is subtle: the Ingress shows as created successfully, but HTTPS requests return certificate errors or fall back to a self-signed certificate. The solution is either copying the Secret into the correct namespace or creating the cert-manager Certificate resource in the same namespace as the Ingress so the issued Secret lands where the controller can read it.

Certificate renewal with cert-manager sometimes fails silently. Cert-manager creates a temporary Ingress resource to complete the ACME HTTP-01 challenge, but if your existing Ingress rules are too broad (e.g., path: / with Prefix type), they intercept the challenge request and route it to your application instead of cert-manager's solver pod. The ACME server receives an incorrect response, the challenge fails, and the certificate is not renewed. The fix is adding a more specific Ingress rule for /.well-known/acme-challenge/ that routes to cert-manager's solver Service, or using DNS-01 challenges instead of HTTP-01.

Service and Endpoint synchronization delays

When a Pod crashes and Kubernetes starts a replacement, there is a 5-15 second window where the old Pod's IP is still in the Ingress Controller's backend pool but the Pod is no longer responding. The controller's informer receives the Endpoints update, regenerates configuration, and reloads—but during this window, a fraction of requests fail with connection refused or timeout errors. This is especially visible during rolling updates when Pods terminate rapidly. The mitigation is configuring preStop lifecycle hooks that sleep for 10 seconds before the Pod exits, giving the Ingress Controller time to remove the Pod from its pool.

Readiness probe misconfiguration causes the opposite problem: new Pods are added to the Ingress backend pool before they are actually ready to serve traffic. If your readiness probe checks only that the process is running (e.g., tcpSocket on port 8080) but not that the application has finished initialization (loading config, connecting to databases), the Ingress Controller forwards requests to Pods that return 500 errors. The fix is using an HTTP readiness probe that hits an application-level health endpoint, ensuring the Pod is marked Ready only when it can handle requests. In our 4-month paid internship, students debug this exact scenario by analyzing Ingress Controller logs and correlating them with Pod readiness state changes.
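
A sketch of a pod template that addresses both issues: an HTTP readiness probe against an application-level health endpoint, and a preStop sleep that gives the Ingress Controller time to drop the Pod before it exits. The image, port, and endpoint path are placeholders.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      terminationGracePeriodSeconds: 30            # must exceed the preStop sleep
      containers:
        - name: api
          image: registry.example.com/api:1.4.2    # placeholder image
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz                       # application-level check, not just a TCP connect
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 10"]  # let the controller remove this Pod from its pool first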

Real-world deployment scenarios at Cisco India and Akamai

Cisco India's Bangalore development center runs a multi-tenant EKS cluster for internal microservices, serving 140+ engineering teams. They use NGINX Ingress with a single LoadBalancer Service fronted by an AWS Network Load Balancer, and each team's Ingress resources are isolated by namespace. The nginx.ingress.kubernetes.io/whitelist-source-range annotation restricts access to Cisco's corporate VPN CIDR blocks, preventing external access to internal tools. TLS certificates are managed by cert-manager with Let's Encrypt, and a custom admission webhook enforces that all Ingress resources must specify a cert-manager.io/cluster-issuer annotation—this prevents teams from deploying Ingresses without TLS, which would violate Cisco's security policy.

Akamai India's Mumbai data center uses Traefik for their Kubernetes-based CDN control plane. They use Traefik's traffic mirroring feature to duplicate production API requests to a staging environment, enabling them to test new versions with real traffic patterns without impacting production responses. The IngressRoute configuration mirrors 5% of requests to a shadow Service running the canary version, and Prometheus metrics track error rates and latency differences between production and canary. If canary error rates exceed 1%, an alert fires and the DevOps team rolls back the deployment. This pattern reduced production incidents by 41% compared to their previous blue-green deployment model, because issues are caught with real traffic before full rollout.

Multi-cluster Ingress with global load balancing

HCL's global Kubernetes platform spans EKS clusters in Mumbai, Singapore, and Frankfurt. They use AWS ALB Controller in each cluster, with AWS Global Accelerator providing a single anycast IP that routes users to the nearest cluster based on latency. Each cluster's ALB Controller provisions an ALB with health checks pointing to a cluster-level health endpoint; if a cluster fails health checks, Global Accelerator stops routing traffic there within 30 seconds. This architecture provides sub-50ms failover for their customer-facing SaaS application, which serves 2.3 million users across APAC and EMEA.

The challenge with multi-cluster Ingress is certificate management. HCL uses AWS ACM to provision a single wildcard certificate (*.example.com) and imports it into all three regions, then references the region-specific ARN in each cluster's Ingress annotations. When the certificate nears expiration, ACM automatically renews it, and the ALB Controller picks up the new certificate without manual intervention. This centralized certificate management eliminates the operational burden of coordinating cert-manager renewals across three clusters and ensures consistent TLS configuration globally.

Bare-metal Kubernetes with MetalLB and NGINX Ingress

Infosys operates on-premises Kubernetes clusters in their Bangalore and Pune data centers for clients in banking and government sectors that cannot use public cloud. They use MetalLB to provide LoadBalancer Services in bare-metal environments: MetalLB allocates IP addresses from a reserved pool and announces them via BGP to the data center's core routers. NGINX Ingress runs as a Deployment behind a MetalLB LoadBalancer Service, giving it a stable IP address that the network team configures in DNS.

The NGINX Ingress configuration uses externalTrafficPolicy: Local to preserve client source IP addresses, which is critical for their banking clients' audit requirements. Without this setting, kube-proxy's SNAT rewrites the source IP to the node's IP, breaking IP-based access control and audit logs. The tradeoff is that traffic is not load-balanced across all nodes—only nodes running an NGINX Ingress pod receive traffic—but Infosys runs NGINX as a DaemonSet to ensure every node has a pod, eliminating this concern. Our fundamentals course on Kubernetes networking covers MetalLB and externalTrafficPolicy in detail, because many of our graduates join Infosys or TCS and work on these exact on-premises deployments.
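
A sketch of that Service, assuming MetalLB is already configured with an address pool and the controller pods carry the standard ingress-nginx labels.

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer                  # MetalLB allocates the IP and announces it via BGP
  externalTrafficPolicy: Local        # preserve the client source IP; skip the SNAT hop
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller pod labels
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443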

Frequently asked questions

Can I run multiple Ingress Controllers in the same cluster?

Yes, you can run multiple Ingress Controllers simultaneously by using the ingressClassName field in Ingress resources. Each controller watches only for Ingress objects with a matching ingressClassName, ignoring others. For example, you might run NGINX Ingress for internal services and AWS ALB Controller for public-facing services. Create an IngressClass resource for each controller, then specify spec.ingressClassName: nginx or spec.ingressClassName: alb in your Ingress manifests. If you omit ingressClassName, behavior depends on each controller's default-class settings, and more than one controller may claim the same Ingress, causing conflicts and duplicate load balancers.
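
A sketch of the two-controller setup described above; the controller strings are the conventional values published by each project.

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx       # watched by NGINX Ingress
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
spec:
  controller: ingress.k8s.aws/alb        # watched by the AWS Load Balancer Controller
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-api
spec:
  ingressClassName: nginx                # only the NGINX controller acts on this Ingress
  rules:
    - host: internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internal-api
                port:
                  number: 80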

How do I troubleshoot Ingress routing issues?

Start by checking the Ingress resource status: kubectl describe ingress my-ingress shows events and the assigned IP address. If the IP is missing, the controller is not running or cannot provision a load balancer. Next, check controller logs: kubectl logs -n ingress-nginx deployment/ingress-nginx-controller reveals configuration errors, backend Service resolution failures, or TLS certificate issues. For NGINX Ingress, inspect the generated config: kubectl exec -n ingress-nginx deployment/ingress-nginx-controller -- cat /etc/nginx/nginx.conf and verify that your Ingress rules translated correctly into NGINX location blocks. Test backend connectivity from the controller pod: kubectl exec -n ingress-nginx deployment/ingress-nginx-controller -- curl http://my-service.default.svc.cluster.local to rule out Service or Pod issues.

What is the difference between Ingress and Gateway API?

Gateway API is the successor to Ingress, offering a more expressive and extensible model for configuring layer-4 and layer-7 routing. While Ingress is a single resource with limited functionality, Gateway API splits configuration into Gateway (infrastructure), HTTPRoute (routing rules), and ReferenceGrant (cross-namespace access) resources. Gateway API supports advanced features like request header matching, query parameter routing, and traffic splitting natively, without controller-specific annotations. As of 2026, Gateway API is GA (generally available) and supported by NGINX, Traefik, and AWS Load Balancer Controller, but Ingress remains the dominant API in production due to its maturity and widespread tooling support. Our AWS DevOps course covers both APIs, because students joining Accenture or IBM work on clusters transitioning from Ingress to Gateway API.

How do I secure Ingress with authentication?

NGINX Ingress supports HTTP Basic authentication via the nginx.ingress.kubernetes.io/auth-type: basic annotation and a Secret containing htpasswd-formatted credentials. For OAuth2 or OIDC, use the nginx.ingress.kubernetes.io/auth-url annotation to forward authentication to an external service like oauth2-proxy or Keycloak. Traefik provides BasicAuth and ForwardAuth middleware that you attach to IngressRoutes. AWS ALB Controller integrates with Amazon Cognito via the alb.ingress.kubernetes.io/auth-type: cognito annotation, which redirects unauthenticated users to a Cognito login page and validates JWT tokens on subsequent requests. For mutual TLS (mTLS), configure the Ingress Controller to verify client certificates: NGINX Ingress uses nginx.ingress.kubernetes.io/auth-tls-secret to specify a CA certificate bundle, and only clients presenting a valid certificate signed by that CA can connect.
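
A sketch of the NGINX basic-auth pattern from this answer, assuming a Secret named basic-auth that holds htpasswd-formatted credentials under the key auth.

# Assumes a Secret created with:
#   kubectl create secret generic basic-auth --from-file=auth=./htpasswd -n default
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: admin-ui
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth        # Secret with htpasswd data under "auth"
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  ingressClassName: nginx
  rules:
    - host: admin.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: admin-ui
                port:
                  number: 80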

What happens to in-flight requests during Ingress Controller restarts?

NGINX Ingress performs graceful shutdowns: when the controller pod receives a SIGTERM signal, it stops accepting new connections, waits for existing requests to complete (up to 30 seconds by default), then exits. The NGINX master process signals worker processes to finish in-flight requests before terminating. If requests exceed the grace period, they are forcibly closed. During a rolling update of the controller Deployment, Kubernetes starts new pods before terminating old ones, so there is always at least one healthy pod serving traffic. The LoadBalancer Service's health checks detect when old pods stop responding and remove them from the backend pool, directing new connections to the new pods. Traefik behaves similarly, but its zero-reload architecture means configuration updates do not interrupt in-flight requests even during normal operation.

How do I rate-limit requests per user or API key?

NGINX Ingress supports rate limiting via the nginx.ingress.kubernetes.io/limit-rps annotation, which applies a requests-per-second limit per client IP using NGINX's limit_req module. For per-user or per-API-key limits, you need a custom solution: deploy an API gateway like Kong or Tyk in front of your Ingress, or use a sidecar proxy like Envoy with rate-limit service integration. Traefik's RateLimit middleware supports rate limiting by IP, but for application-level identifiers (user ID, API key), you must implement rate limiting in your application or use a service mesh like Istio that can extract identifiers from headers and apply quotas. AWS ALB does not support rate limiting natively; you must use AWS WAF rate-based rules, which limit requests per IP address per 5-minute window—this is coarse-grained and not suitable for per-user quotas.

Can Ingress Controllers handle WebSocket and gRPC traffic?

Yes, all three controllers support WebSocket and gRPC with appropriate configuration. NGINX Ingress automatically detects WebSocket upgrade requests (HTTP requests with Upgrade: websocket header) and switches to bidirectional streaming mode. For gRPC, set the nginx.ingress.kubernetes.io/backend-protocol: "GRPC" annotation to enable HTTP/2 and gRPC-specific error handling. Traefik supports WebSocket and gRPC natively without annotations, detecting the protocol from request headers. AWS ALB Controller requires alb.ingress.kubernetes.io/backend-protocol-version: GRPC for gRPC and automatically handles WebSocket upgrades. The key consideration is timeout configuration: WebSocket and gRPC connections are long-lived, so you must increase idle timeouts (NGINX: nginx.ingress.kubernetes.io/proxy-read-timeout, Traefik: the entrypoint's respondingTimeouts.readTimeout, ALB: alb.ingress.kubernetes.io/load-balancer-attributes with idle_timeout.timeout_seconds).
