What Container Networking Is and Why It Matters in 2026
Container networking is the set of kernel-level primitives and user-space tools that enable isolated processes (containers) to communicate with each other, the host system, and external networks while maintaining security boundaries. At its core, Linux containers rely on three foundational technologies: network namespaces for isolation, virtual Ethernet (veth) pairs for point-to-point links, and software bridges for Layer 2 switching. In 2026, every DevOps engineer working with Docker, Kubernetes, or OpenShift must understand these primitives because they underpin service mesh architectures, CNI plugins, and zero-trust network policies deployed at scale by Cisco India, Akamai, and Aryaka Networks across production environments.
Unlike traditional virtual machines that each run a full network stack with dedicated NICs, containers share the host kernel but achieve network isolation through namespaces—a Linux kernel feature that partitions network resources so each container sees its own routing table, firewall rules, and interface list. This lightweight approach enables the density required for microservices: a single Kubernetes worker node can host 110+ pods (the Kubernetes term for container groups) compared to 10-15 VMs on equivalent hardware. Organizations hiring through Networkers Home's 800+ partner network—including HCL, Wipro, TCS, and IBM—expect candidates to demonstrate hands-on proficiency with namespace manipulation, veth pair creation, and bridge configuration during technical rounds.
The shift from monolithic applications to containerized microservices has made container networking a mandatory skill for network engineers transitioning into cloud-native roles. In our HSR Layout lab, we observe that freshers who master these fundamentals during the best AWS DevOps course in Bangalore secure placements 40% faster than peers who only understand VM networking. The 4-month paid internship at our Network Security Operations Division requires interns to troubleshoot container connectivity issues in live environments, mirroring the challenges faced by Barracuda and Movate operations teams.
How Network Namespaces Provide Container Isolation
A network namespace is a kernel construct that creates a completely isolated copy of the network stack—including interfaces, IP addresses, routing tables, iptables rules, and socket bindings. When you launch a container, the runtime (Docker, containerd, CRI-O) creates a new network namespace and moves the container's root process into it. From inside that namespace, the process sees only the interfaces explicitly assigned to it; the host's physical NICs and other containers' interfaces are invisible unless explicitly bridged.
The ip netns command suite manages namespaces manually. Creating a namespace is straightforward:
sudo ip netns add blue-ns
sudo ip netns list
Once created, you can execute commands inside the namespace using ip netns exec:
sudo ip netns exec blue-ns ip addr show
sudo ip netns exec blue-ns ip route show
Initially, a new namespace contains only a loopback interface in DOWN state. No default route exists, no DNS resolver is configured, and no connection to the outside world is possible. This total isolation is the security foundation: a compromised container cannot directly access the host network or peer containers without explicit plumbing.
Namespaces also isolate iptables/nftables rules. Firewall policies applied inside a namespace affect only that namespace's traffic. This enables per-container network policies—the mechanism behind Kubernetes NetworkPolicy objects and Calico/Cilium enforcement. When Cisco India deploys ACI with Kubernetes integration, the CNI plugin creates namespaces for each pod and injects iptables rules that map to ACI contracts, ensuring micro-segmentation at the container level.
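A minimal sketch of this isolation, using the blue-ns namespace created above: a firewall rule added inside the namespace has no effect on the host's own chains.

# Block inbound SSH only for traffic inside blue-ns
sudo ip netns exec blue-ns iptables -A INPUT -p tcp --dport 22 -j DROP
# Compare rule sets: the namespace and the host keep independent tables
sudo ip netns exec blue-ns iptables -L INPUT -v -n
sudo iptables -L INPUT -v -n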
Understanding namespace lifecycle is critical for troubleshooting. Namespaces persist until explicitly deleted or until all processes referencing them terminate. Orphaned namespaces, common after unclean container shutdowns, consume kernel memory and count against the per-user limit exposed by the user.max_net_namespaces sysctl on modern kernels. During the AWS DevOps course in Bangalore practicals, students learn to audit namespaces with lsns -t net and correlate them to running containers using docker inspect or crictl inspectp.
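That audit workflow can be sketched with standard tools; the container name web is illustrative, and the SandboxKey field assumes the Docker runtime.

# List network namespaces alongside the processes that hold them
sudo lsns -t net
# For a Docker container, the sandbox key points at its namespace handle
sudo docker inspect --format '{{ .NetworkSettings.SandboxKey }}' web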
Virtual Ethernet Pairs: The Point-to-Point Link Between Namespaces
A veth (virtual Ethernet) pair functions as a software-emulated Ethernet cable with two ends. Packets transmitted into one end immediately appear at the other end, regardless of which namespace each end resides in. This bidirectional pipe is the fundamental building block for connecting isolated namespaces to each other or to the host's network stack.
Creating a veth pair and assigning ends to different namespaces establishes connectivity:
sudo ip link add veth-blue type veth peer name veth-blue-br
sudo ip link set veth-blue netns blue-ns
sudo ip netns exec blue-ns ip addr add 10.200.1.2/24 dev veth-blue
sudo ip netns exec blue-ns ip link set veth-blue up
sudo ip link set veth-blue-br up
In this example, veth-blue now exists inside the blue-ns namespace with IP 10.200.1.2/24, while veth-blue-br remains in the host's default namespace. The host can now communicate directly with the container by assigning an IP in the same subnet to veth-blue-br or by attaching veth-blue-br to a bridge.
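The first option can be sketched as follows; the 10.200.1.1/24 address is an assumption for this example and should be skipped if veth-blue-br will instead be attached to a bridge as shown in the next section.

# Give the host-side peer an address in the container's subnet
sudo ip addr add 10.200.1.1/24 dev veth-blue-br
# Verify reachability in both directions
ping -c 2 10.200.1.2
sudo ip netns exec blue-ns ping -c 2 10.200.1.1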
Veth pairs have no hardware offload capabilities; TSO, GSO, and checksum handling are emulated in software because the devices are purely software constructs. This means high-throughput workloads (10+ Gbps) can saturate CPU cores with packet processing. In production Kubernetes clusters at Akamai India, network architects mitigate this by enabling XDP (eXpress Data Path) on veth interfaces when using Cilium CNI, offloading packet filtering to eBPF programs that run before the kernel network stack.
The naming convention matters for operational clarity. Docker uses veth<random> on the host side and eth0 inside the container. Kubernetes CNI plugins vary: Calico uses cali<hash>, Flannel uses veth<hash>, and Cilium uses lxc<hash>. During troubleshooting, mapping a container's eth0 to its host-side veth peer requires inspecting /sys/class/net/eth0/iflink inside the namespace and matching it to /sys/class/net/veth*/ifindex on the host—a technique we drill extensively in the lab practicals at L-149 Sector 6, HSR Layout.
Linux Bridges: The Layer 2 Switch for Container Networks
A Linux bridge is a kernel module that emulates a hardware Ethernet switch, forwarding frames between attached interfaces based on learned MAC addresses. In container networking, the bridge serves as the central switching fabric: each container's veth peer attaches to the bridge, enabling any-to-any communication within the same subnet without routing.
Docker's default docker0 bridge exemplifies this architecture. When Docker starts, it creates a bridge interface, assigns it an IP (typically 172.17.0.1/16), and configures iptables NAT rules for outbound traffic. Each new container gets a veth pair with one end attached to docker0 and the other end (renamed eth0) inside the container's namespace with an IP from the bridge subnet.
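You can observe this plumbing on any Docker host; the read-only sketch below assumes Docker is running with its default bridge configuration.

# Show the address the daemon assigned to docker0
ip addr show docker0
# Show the NAT rule covering outbound traffic from the bridge subnet
sudo iptables -t nat -S POSTROUTING | grep -i masquerade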
Creating a custom bridge manually demonstrates the mechanics:
sudo ip link add name br-containers type bridge
sudo ip addr add 10.200.1.1/24 dev br-containers
sudo ip link set br-containers up
sudo ip link set veth-blue-br master br-containers
sudo ip link set veth-red-br master br-containers
Now containers in blue-ns and red-ns (assuming both have veth peers attached to br-containers) can ping each other via Layer 2 forwarding. The bridge maintains a MAC address table viewable with bridge fdb show dev br-containers, analogous to show mac address-table on Cisco switches.
Bridges support VLAN filtering (802.1Q), STP (Spanning Tree Protocol), and IGMP snooping, though these features are rarely enabled in container environments due to the ephemeral nature of container networks. However, in hybrid deployments where containers coexist with VMs on the same host—common in OpenStack with Kubernetes integration—VLAN tagging on veth interfaces allows containers to participate in existing VLANs. Aryaka's SD-WAN edge appliances, which run containerized network functions, use this technique to segregate management, data-plane, and control-plane traffic onto separate VLANs.
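A rough sketch of per-port VLAN tagging on a Linux bridge follows; the bridge and interface names reuse the earlier examples, and VLAN 100 is an arbitrary choice for illustration.

# Enable 802.1Q filtering on the bridge
sudo ip link set br-containers type bridge vlan_filtering 1
# Make a container's host-side veth an access port in VLAN 100
sudo bridge vlan add vid 100 dev veth-blue-br pvid untagged
# Verify per-port VLAN membership
sudo bridge vlan show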
Bridge performance scales to thousands of attached interfaces, but broadcast/multicast traffic impacts all members. In large Kubernetes clusters (500+ nodes, 50,000+ pods), CNI plugins like Calico avoid bridges entirely, using pure Layer 3 routing with BGP to eliminate broadcast domains. This architectural choice—bridge-based vs. routed—is a frequent interview topic for senior DevOps roles at Cisco India and HCL, where candidates must justify trade-offs between simplicity and scale.
Container Networking Basics vs. Traditional VM Networking
Understanding the distinctions between container and VM networking clarifies why containers dominate cloud-native architectures and where VMs remain superior. The table below compares key dimensions:
| Dimension | Container Networking | VM Networking |
|---|---|---|
| Isolation Mechanism | Kernel namespaces (shared kernel) | Hypervisor with separate kernel per VM |
| Network Interface | veth pair (software-only) | virtio-net, vmxnet3 (paravirtualized or emulated NIC) |
| Startup Time | Milliseconds (namespace creation) | Seconds to minutes (full OS boot) |
| Density per Host | 100+ containers typical | 10-20 VMs typical |
| IP Address Assignment | Overlay networks (VXLAN, Geneve) or host routing | Direct VLAN assignment or overlay |
| Hardware Offload | Limited (no TSO/GSO on veth by default) | Full offload via SR-IOV or virtio |
| Security Boundary | Process-level (kernel vulnerabilities affect all) | Hardware-assisted (Intel VT-x, AMD-V) |
| Use Case | Microservices, CI/CD, ephemeral workloads | Legacy apps, Windows workloads, strong isolation |
The shared-kernel model of containers delivers speed and density but introduces a security trade-off: a kernel exploit in one container can potentially compromise the host and all sibling containers. This is why financial institutions and government agencies—including those regulated by RBI and CERT-In—often mandate VM-based isolation for sensitive workloads while using containers for stateless application tiers. During the best AWS DevOps course in Bangalore, we dedicate an entire module to hybrid architectures where Kubernetes runs inside VMs (e.g., EKS on EC2, GKE on Compute Engine) to combine container agility with hypervisor-grade isolation.
Step-by-Step: Building a Two-Container Network from Scratch
Constructing a minimal container network manually—without Docker or Kubernetes—solidifies understanding of the primitives. This procedure creates two isolated namespaces, connects them via a bridge, and enables bidirectional communication:
- Create two network namespaces:
sudo ip netns add ns-web
sudo ip netns add ns-db
- Create a bridge to act as the switch:
sudo ip link add name br0 type bridge
sudo ip addr add 192.168.100.1/24 dev br0
sudo ip link set br0 up
- Create veth pairs for each namespace:
sudo ip link add veth-web type veth peer name veth-web-br
sudo ip link add veth-db type veth peer name veth-db-br
- Move one end of each pair into its namespace:
sudo ip link set veth-web netns ns-web
sudo ip link set veth-db netns ns-db
- Attach the bridge-side ends to the bridge:
sudo ip link set veth-web-br master br0
sudo ip link set veth-db-br master br0
sudo ip link set veth-web-br up
sudo ip link set veth-db-br up
- Configure IP addresses inside each namespace:
sudo ip netns exec ns-web ip addr add 192.168.100.10/24 dev veth-web
sudo ip netns exec ns-web ip link set veth-web up
sudo ip netns exec ns-web ip link set lo up
sudo ip netns exec ns-db ip addr add 192.168.100.20/24 dev veth-db
sudo ip netns exec ns-db ip link set veth-db up
sudo ip netns exec ns-db ip link set lo up
- Add default routes pointing to the bridge:
sudo ip netns exec ns-web ip route add default via 192.168.100.1
sudo ip netns exec ns-db ip route add default via 192.168.100.1
- Test connectivity between namespaces:
sudo ip netns exec ns-web ping -c 3 192.168.100.20
- Enable NAT for outbound internet access (optional):
sudo iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE
sudo sysctl -w net.ipv4.ip_forward=1
This nine-step procedure replicates what Docker does automatically when you run docker run. In our 24×7 rack access lab at HSR Layout, students execute this workflow on bare-metal Ubuntu servers, then use tcpdump on the bridge and veth interfaces to observe ARP requests, ICMP echo packets, and TCP three-way handshakes—building the packet-level intuition that distinguishes senior engineers from junior operators. Employers like Infosys and Accenture specifically test this hands-on capability during technical assessments for cloud infrastructure roles.
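When the exercise is finished, the lab can be torn down in a few commands; this sketch assumes the names and the optional NAT rule from the procedure above.

# Deleting one end of a veth pair removes both ends
sudo ip link del veth-web-br
sudo ip link del veth-db-br
# Remove the namespaces and the bridge
sudo ip netns del ns-web
sudo ip netns del ns-db
sudo ip link del br0
# Remove the optional NAT rule if it was added
sudo iptables -t nat -D POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE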
Common Pitfalls and Interview Gotchas in Container Networking
Technical interviews for DevOps and cloud roles at Cisco India, Akamai, and Barracuda frequently probe edge cases and failure modes in container networking. Candidates who demonstrate troubleshooting methodology—not just theoretical knowledge—advance to final rounds. Below are the most common pitfalls:
Forgetting to Bring Interfaces UP
Newly created veth interfaces and bridges default to the DOWN state. A common mistake is assigning IP addresses but forgetting ip link set <interface> up. Symptoms include "Network is unreachable" errors despite correct IP configuration. Always verify with ip link show that the interface state is UP and LOWER_UP (the latter indicates the peer end is also up).
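A quick check, using interface names from the two-container lab as an example:

# Brief output shows state and flags; look for UP and LOWER_UP
ip -br link show veth-web-br
sudo ip netns exec ns-web ip -br link show veth-web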
Incorrect Subnet Masks Leading to Routing Failures
Assigning a /32 mask instead of /24 on a container interface creates a host route with no local subnet. The container cannot ARP for the gateway because the gateway IP is not considered on-link. For example, ip addr add 192.168.100.10/32 dev veth-web requires an explicit on-link route: ip route add 192.168.100.0/24 dev veth-web. This subtlety trips up candidates who memorize commands without understanding CIDR implications.
MTU Mismatches Causing Silent Packet Loss
If the bridge MTU is 1500 but a veth pair has MTU 1450, frames larger than 1450 bytes are silently dropped; because this is Layer 2 forwarding, no ICMP fragmentation-needed message is generated. This manifests as working ping (small packets) but failing HTTP downloads (large packets). The fix: ensure all interfaces in the path share the same MTU, typically 1500 for standard Ethernet or 1450 for VXLAN overlays (accounting for 50-byte encapsulation overhead).
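A short diagnostic sketch, reusing the lab names; the 1472-byte ICMP payload plus 28 bytes of headers probes a full 1500-byte packet:

# Compare MTUs along the path
ip link show br0 | grep -o 'mtu [0-9]*'
sudo ip netns exec ns-web ip link show veth-web | grep -o 'mtu [0-9]*'
# Send a full-size packet with the don't-fragment bit set
sudo ip netns exec ns-web ping -c 2 -M do -s 1472 192.168.100.20
# Align MTUs if the probe fails
sudo ip netns exec ns-web ip link set veth-web mtu 1500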
Namespace Deletion Without Cleaning Up Interfaces
Deleting a namespace with ip netns del removes its named handle and, once nothing references it, the interfaces inside it; however, when container shutdowns are unclean and a process or mount still holds the namespace, host-side veth peers linger in a DOWN state. Over time, hundreds of orphaned veth interfaces accumulate, visible in ip link show. Best practice: explicitly delete veth pairs before deleting namespaces, or use ip link del <veth-name>, which removes both ends atomically.
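Auditing for leftovers can be sketched like this; the interface name placeholder follows the document's convention:

# Orphaned host-side peers typically show DOWN or LOWERLAYERDOWN
ip -br link show type veth | grep DOWN
# Deleting the surviving end removes the pair
sudo ip link del <orphaned-veth-name>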
iptables Rules Blocking Container Traffic
Host firewall rules in the FORWARD chain can silently drop container traffic. Docker and Kubernetes CNI plugins insert ACCEPT rules, but custom iptables configurations may have a default DROP policy. Debugging requires checking iptables -L FORWARD -v -n and ensuring rules permit traffic between the bridge subnet and external networks. In our 4-month paid internship, interns troubleshoot scenarios where corporate firewall policies conflict with container networking, mirroring real issues at Wipro and TCS data centers.
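A sketch of the check plus a permissive pair of rules for the manual lab subnet; tighten these to specific ports in production:

# Inspect policy and per-rule counters on the FORWARD chain
sudo iptables -L FORWARD -v -n
# Allow traffic sourced from the bridge subnet and the return traffic
sudo iptables -A FORWARD -s 192.168.100.0/24 -j ACCEPT
sudo iptables -A FORWARD -d 192.168.100.0/24 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT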
Real-World Deployment Scenarios: How Enterprises Use These Primitives
Understanding container networking primitives is not academic—these technologies underpin production systems at India's largest enterprises and global service providers. Below are three deployment patterns observed across Networkers Home's 800+ hiring partners:
Scenario 1: Docker Bridge Networks at HCL and Infosys
HCL's DevOps teams use Docker's default bridge network for development and testing environments where simplicity outweighs performance. Each developer workstation runs 10-20 containers on docker0, with NAT providing outbound internet access. For production, HCL migrates to overlay networks (Docker Swarm or Kubernetes) to enable multi-host communication, but the underlying veth-bridge architecture remains identical—just extended with VXLAN tunnels between hosts.
Scenario 2: Kubernetes with Calico CNI at Cisco India
Cisco India's ACI-integrated Kubernetes clusters use Calico in BGP mode, eliminating bridges entirely. Each pod gets a /32 IP from a cluster-wide IPAM pool, and the host runs a BGP daemon (BIRD) that advertises pod routes to upstream leaf switches. Veth pairs still connect pods to the host, but instead of attaching to a bridge, the host-side veth has IP forwarding enabled and routes packets directly. This pure Layer 3 approach scales to 10,000+ nodes and integrates seamlessly with ACI's policy model, where Kubernetes NetworkPolicy objects map to ACI contracts enforced in hardware.
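On a node running Calico in this mode, the resulting state can be inspected as sketched below; the cali prefix is Calico's host-side veth naming convention, and calicoctl must be installed:

# Each pod appears as a /32 route pointing at its host-side veth
ip route show | grep cali
# Show BGP peering status with the upstream fabric
sudo calicoctl node status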
Scenario 3: Service Mesh with Istio at Akamai India
Akamai's edge computing platform runs Kubernetes with Istio service mesh. Each pod contains an application container and an Envoy sidecar proxy. The CNI plugin (Cilium with eBPF) creates a veth pair per pod, but instead of traditional iptables NAT, Cilium uses eBPF programs attached to the veth interfaces to redirect traffic to the Envoy proxy. This approach reduces latency by 30% compared to iptables-based redirection and enables per-connection telemetry for Akamai's observability platform. Founder Vikas Swami architected QuickZTNA using a similar eBPF-based interception mechanism, demonstrating how container networking primitives extend into zero-trust security architectures.
How Container Networking Maps to Cisco Certification Tracks
Container networking concepts increasingly appear in Cisco certification exams as the industry converges on cloud-native infrastructure. Understanding how these topics align with CCNA, CCNP, and CCIE blueprints helps candidates prioritize study efforts:
CCNA 200-301
The current CCNA blueprint includes "Explain the role of network virtualization" and "Describe characteristics of overlay networks." While the exam doesn't test Linux namespaces directly, understanding veth pairs and bridges provides the conceptual foundation for VXLAN and LISP overlays covered in the curriculum. Candidates who grasp container networking find VXLAN encapsulation intuitive—it's the same principle as connecting namespaces across physical hosts.
CCNP Enterprise and Data Center
CCNP Enterprise (350-401 ENCOR) covers SD-Access fabric, which uses VXLAN overlays and LISP for endpoint mobility—directly analogous to Kubernetes overlay networks. CCNP Data Center (350-601 DCCOR) includes ACI integration with container orchestrators, requiring knowledge of how Kubernetes CNI plugins interact with ACI OpFlex agents. Our Container & Kubernetes Networking course dedicates two weeks to ACI-Kubernetes integration, preparing students for both CCNP Data Center and real-world deployments at Cisco India partners.
CCIE Enterprise Infrastructure and Data Center
CCIE lab exams now include troubleshooting scenarios where containerized network functions (routers, firewalls) run on Linux hosts. Candidates must diagnose connectivity issues involving namespaces, veth pairs, and iptables rules—skills identical to those used in production Kubernetes clusters. The CCIE Data Center v3.0 lab explicitly tests ACI integration with OpenShift, requiring candidates to configure CNI plugins and verify pod-to-endpoint communication across the fabric. Vikas Swami, Dual CCIE #22239, designed our lab topology to mirror these exam scenarios, giving students hands-on experience with the exact configurations tested in San Jose and Bangalore CCIE labs.
Tools and Commands for Container Network Troubleshooting
Effective troubleshooting requires fluency with Linux networking tools. Below are the essential commands and their use cases, practiced daily in our HSR Layout lab:
Namespace and Interface Inspection
# List all network namespaces
ip netns list
# Show interfaces in a specific namespace
ip netns exec <namespace> ip link show
# Find which namespace a process belongs to
sudo ls -l /proc/<PID>/ns/net
# Map container eth0 to host veth peer
# Inside container:
cat /sys/class/net/eth0/iflink
# On host, find interface with matching ifindex:
grep -l <iflink-number> /sys/class/net/veth*/ifindex
Bridge and Forwarding Table Inspection
# Show all bridges and attached interfaces
bridge link show
# Display MAC address forwarding table
bridge fdb show dev <bridge-name>
# Show bridge STP state (if enabled)
bridge -d link show
Packet Capture Across Namespace Boundaries
# Capture on host-side veth
sudo tcpdump -i veth-web-br -nn
# Capture inside namespace
sudo ip netns exec ns-web tcpdump -i veth-web -nn
# Capture on bridge (sees all member traffic)
sudo tcpdump -i br0 -nn
Route and ARP Verification
# Show routing table in namespace
ip netns exec <namespace> ip route show
# Display ARP cache
ip netns exec <namespace> ip neigh show
# Force ARP resolution
ip netns exec <namespace> arping -I veth-web 192.168.100.1
During the 4-month paid internship at our Network Security Operations Division, interns use these commands to diagnose production issues in containerized environments at Movate and Barracuda, building the muscle memory required for rapid incident response. The 8-month verified experience letter provided upon completion documents proficiency with these tools, strengthening resumes for roles at Cisco, Akamai, and Aryaka.
Container Networking Security Considerations for 2026
As container adoption grows, so does the attack surface. Security teams at organizations regulated by CERT-In and RBI mandate specific controls around container networking. Key considerations include:
Namespace Escape Vulnerabilities
Kernel vulnerabilities (e.g., CVE-2022-0847 "Dirty Pipe") can allow a container process to escape its namespace and access the host network stack. Mitigation requires running containers with minimal capabilities (--cap-drop=ALL), using seccomp profiles to block dangerous syscalls, and applying kernel patches promptly. In our lab, we demonstrate namespace escape techniques in a sandboxed environment so students understand the threat model and can design defense-in-depth strategies.
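A minimal-capability launch can be sketched as follows; the nginx image is only an example, and real workloads add back individual capabilities as needed:

# Drop all capabilities and forbid privilege escalation; Docker's default seccomp profile still applies
docker run --rm --cap-drop=ALL --security-opt no-new-privileges nginx
# Re-add only what the workload genuinely needs, e.g. binding to ports below 1024
docker run --rm --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx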
Inter-Container Traffic Encryption
By default, traffic between containers on the same bridge is unencrypted Layer 2 forwarding. For compliance with DPDP (Digital Personal Data Protection Act 2023), financial services companies encrypt inter-pod traffic using service mesh (Istio, Linkerd) or WireGuard tunnels between nodes. Calico supports WireGuard encryption natively, adding approximately 5% CPU overhead—a trade-off acceptable for PCI-DSS and RBI-regulated workloads.
Network Policy Enforcement
Kubernetes NetworkPolicy objects define ingress and egress rules per pod, but enforcement depends on the CNI plugin. Calico and Cilium implement policies using iptables or eBPF, while Flannel does not support NetworkPolicy at all. During interviews at Cisco India and HCL, candidates must explain how NetworkPolicy translates to iptables rules and why default-deny policies are essential for zero-trust architectures. We cover this extensively in the AWS DevOps course in Bangalore, including hands-on labs where students write and test policies against simulated attack scenarios.
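A default-deny policy is short enough to show in full; this sketch assumes a namespace named prod and a CNI plugin (Calico, Cilium) that enforces NetworkPolicy:

# Select every pod in the namespace and allow no ingress or egress
kubectl apply -n prod -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
EOF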
Bridge Hairpin Mode and MAC Spoofing
Linux bridges support hairpin mode, allowing a packet received on a port to be sent back out the same port—necessary for containers to reach their own published services via the host IP. However, hairpin mode combined with promiscuous mode enables MAC spoofing attacks where one container impersonates another. Production bridges should disable promiscuous mode (ip link set <bridge> promisc off) and use ebtables to filter spoofed MAC addresses.
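Hairpin mode is a per-port flag; this sketch reuses the bridge port names from the manual lab:

# Allow frames received on this port to be forwarded back out the same port
sudo bridge link set dev veth-web-br hairpin on
# Confirm the detailed per-port flags
bridge -d link show dev veth-web-br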
Frequently Asked Questions About Container Networking Basics
What is the difference between a Docker bridge network and a Kubernetes pod network?
A Docker bridge network is a single-host construct: all containers attached to docker0 or a custom bridge must reside on the same physical or virtual machine. Kubernetes pod networks are cluster-wide: pods on different nodes communicate via overlay networks (VXLAN, Geneve) or routed fabrics (BGP), with the CNI plugin abstracting the underlying transport. Both use veth pairs and namespaces at the node level, but Kubernetes adds cross-node connectivity and centralized IPAM.
Can containers on different bridges communicate without routing?
No. Bridges operate at Layer 2 and forward frames only between their member ports. Containers on br0 and br1 require Layer 3 routing to communicate, either via the host's routing table or an external router. This is why Docker's default bridge network cannot reach user-defined bridge networks without explicit docker network connect commands or host routing rules.
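A brief sketch; the network name backend and container name web-frontend are illustrative:

# Create a second user-defined bridge network
docker network create --subnet=10.0.1.0/24 backend
# Attach a running container to it, giving it an interface and IP on both networks
docker network connect backend web-frontend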
Why do some CNI plugins avoid bridges entirely?
Bridges introduce broadcast domains and require MAC learning, which doesn't scale to 50,000+ pods. CNI plugins like Calico and Cilium use pure Layer 3 routing: each pod gets a /32 route advertised via BGP or programmed into the kernel routing table. This eliminates ARP traffic, reduces latency, and integrates cleanly with existing data center fabrics. The trade-off is increased complexity in routing configuration and troubleshooting.
How does Docker assign IP addresses to containers?
Docker runs an embedded IPAM (IP Address Management) service that allocates IPs from the bridge subnet (default 172.17.0.0/16 for docker0). Each container receives the next available IP via DHCP-like assignment, stored in Docker's internal database. For user-defined networks, you can specify custom subnets with docker network create --subnet=10.0.0.0/24 mynet. Kubernetes delegates IPAM to the CNI plugin; Calico uses IPAM pools, Flannel uses per-node subnets, and AWS VPC CNI assigns IPs from the VPC subnet.
What happens if two containers are assigned the same IP address?
IP conflicts cause intermittent connectivity as peers' ARP caches flip between the two containers' MAC addresses. Symptoms include roughly 50% packet loss (half the packets reach the correct container) and behavior resembling ARP cache poisoning. Docker's IPAM prevents this within a single daemon, but in multi-host clusters with misconfigured CNI plugins, conflicts can occur. Calico's IPAM includes conflict detection via BGP route advertisements; if two nodes advertise the same /32, Calico logs an error and blocks the conflicting pod from starting.
How do I troubleshoot "No route to host" errors in container networks?
This error indicates a Layer 3 routing failure or firewall block. Systematic troubleshooting steps: (1) Verify the container has a default route (ip route show); (2) Confirm the gateway IP is reachable (ping <gateway>); (3) Check host iptables FORWARD chain for DROP rules (iptables -L FORWARD -v -n); (4) Verify the destination IP is routable from the host (ping <destination> from host namespace); (5) Use traceroute inside the container to identify where packets are dropped. In 80% of cases, the issue is a missing default route or iptables DROP rule—both covered in our lab troubleshooting drills.
Are veth pairs a performance bottleneck for high-throughput applications?
Yes, for workloads exceeding 5-10 Gbps per container. Veth pairs lack hardware offload (TSO, GSO, checksum offload), forcing the CPU to process every packet. Solutions include: (1) SR-IOV passthrough, giving containers direct access to physical NIC VFs (requires compatible hardware and drivers); (2) DPDK (Data Plane Development Kit) for userspace packet processing, bypassing the kernel entirely; (3) eBPF-based CNI plugins like Cilium, which reduce per-packet overhead by 40% compared to iptables. For typical microservices (HTTP APIs, databases), veth performance is sufficient; for NFV (network functions virtualization) workloads like routers and firewalls, SR-IOV or DPDK is mandatory.
How does container networking integrate with existing VLANs and VXLANs?
Two approaches: (1) VLAN trunking to the host—the physical NIC is configured as a trunk, and the CNI plugin creates VLAN sub-interfaces (eth0.100, eth0.200) that attach to separate bridges, allowing containers to participate in existing VLANs; (2) VXLAN overlay—the CNI plugin encapsulates container traffic in VXLAN and integrates with the data center's VXLAN fabric (e.g., Cisco ACI, VMware NSX). Aryaka's SD-WAN edge devices use approach #1 to segregate customer traffic onto per-tenant VLANs, while Akamai's Kubernetes clusters use approach #2 to span pods across multiple availability zones.
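Approach #1 can be sketched on a plain Linux host; eth0, VLAN 100, and the bridge name are illustrative, and the upstream switch port must trunk the VLAN:

# Create a VLAN sub-interface on the trunked NIC
sudo ip link add link eth0 name eth0.100 type vlan id 100
# Bridge the sub-interface; container veth peers attached to this bridge join VLAN 100
sudo ip link add name br-vlan100 type bridge
sudo ip link set eth0.100 master br-vlan100
sudo ip link set eth0.100 up
sudo ip link set br-vlan100 up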