Chapter 5 of 20 — DevOps Fundamentals

Docker Fundamentals — Containers, Images, Volumes & Networking

By Vikas Swami, CCIE #22239 | Updated Mar 2026 | Free Course

What Docker is and why it matters in 2026

Docker is an open-source containerization platform that packages applications and their dependencies into isolated, portable units called containers. Unlike virtual machines that virtualize hardware, Docker containers share the host operating system kernel while maintaining process-level isolation, enabling developers to ship software that runs identically across laptop, staging, and production environments. In 2026, Docker remains the de facto standard for microservices deployment, CI/CD pipelines, and cloud-native infrastructure—skills that Cisco India, HCL, Akamai, and Aryaka actively seek when hiring DevOps engineers across Bengaluru and Hyderabad.

The platform solves the "works on my machine" problem by bundling application code, runtime, system libraries, and configuration files into a single image. When you execute docker run, the Docker Engine instantiates that image into a running container with its own filesystem, network stack, and process tree. This architecture reduces deployment friction, accelerates testing cycles, and enables horizontal scaling—critical capabilities for organizations migrating legacy monoliths to Kubernetes-orchestrated microservices. Our AWS DevOps course in Bangalore dedicates four weeks to Docker fundamentals, container orchestration, and production troubleshooting because 78% of our hiring partners now mandate hands-on container experience during technical rounds.
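
For readers new to the CLI, a minimal end-to-end sketch of that workflow looks like this (the image tag and host port are arbitrary choices for illustration):

# Pull the image (if not cached) and start a container in the background,
# publishing container port 80 on host port 8080
docker run -d --name hello-web -p 8080:80 nginx:1.25

# Confirm it is running and serving the default page
docker ps --filter name=hello-web
curl http://localhost:8080

# Tear it down when finished
docker rm -f hello-web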

Docker's relevance extends beyond application deployment. Network engineers use containerized test environments to simulate multi-vendor topologies without physical hardware. Security teams deploy honeypots and malware sandboxes in ephemeral containers that reset after each analysis session. In our HSR Layout lab, we run 240+ concurrent containers across 18 physical hosts to provide each student with isolated practice environments for Ansible playbooks, Terraform modules, and Jenkins pipelines—infrastructure that would require 60+ virtual machines under traditional hypervisor architectures.

How Docker containers differ from virtual machines

Virtual machines emulate complete hardware stacks. Each VM runs a full guest operating system atop a hypervisor (VMware ESXi, KVM, Hyper-V), consuming gigabytes of RAM and minutes to boot. A typical Ubuntu 22.04 VM requires 2 GB RAM minimum and 30-45 seconds to reach a login prompt. Containers share the host kernel and isolate processes using Linux namespaces and cgroups, resulting in sub-second startup times and memory footprints measured in megabytes rather than gigabytes.

Dimension        | Virtual Machine                  | Docker Container
Boot Time        | 30-60 seconds                    | 0.1-2 seconds
Memory Overhead  | 1-4 GB per instance              | 5-50 MB per instance
Isolation Level  | Hardware-level (hypervisor)      | Process-level (kernel namespaces)
Portability      | Hypervisor-dependent (OVA/VMDK)  | Platform-agnostic (OCI image spec)
Density          | 10-20 VMs per host               | 100-1,000 containers per host

This architectural difference makes containers ideal for microservices where you need to scale individual components independently. A typical e-commerce platform might run separate containers for product catalog, payment gateway, inventory management, and recommendation engine—each scaling based on real-time load. Akamai India's CDN edge nodes use containerized workloads to deploy custom logic at 2,400+ global locations without provisioning full VMs at each point of presence.

However, containers sacrifice some isolation guarantees. Because all containers share the host kernel, a kernel exploit in one container can theoretically compromise the host. Virtual machines provide stronger security boundaries—critical for multi-tenant cloud providers or regulated workloads under RBI or SEBI compliance mandates. Modern production environments often combine both: VMs for tenant isolation, containers within each VM for application density. Our 4-month paid internship at the Network Security Operations Division exposes students to hybrid architectures where Cisco Secure Workload monitors container-to-container traffic within Kubernetes clusters running atop VMware vSphere.

Linux kernel primitives that enable containerization

Docker uses six core Linux kernel features to achieve isolation without hypervisor overhead:

  • Namespaces — Isolate process IDs, network interfaces, mount points, user IDs, IPC mechanisms, and hostnames. A container sees only its own process tree (PID namespace) and network stack (network namespace).
  • Control groups (cgroups) — Limit CPU shares, memory allocation, disk I/O, and network bandwidth per container. Prevents one container from starving others of resources.
  • Union filesystems (OverlayFS, AUFS) — Layer read-only image layers with a writable container layer, enabling efficient storage and fast image pulls.
  • Seccomp profiles — Restrict system calls available to containerized processes, reducing attack surface.
  • AppArmor/SELinux — Mandatory access control policies that confine container capabilities beyond standard Unix permissions.
  • Capabilities — Fine-grained privilege model that grants specific root powers (e.g., CAP_NET_ADMIN for network configuration) without full root access.

Understanding these primitives helps troubleshoot production issues. When a container cannot bind to port 80, the problem often traces to missing CAP_NET_BIND_SERVICE capability. When disk I/O throttles unexpectedly, cgroup blkio limits are the culprit. Cisco's DevNet Associate certification now includes container networking questions that probe namespace isolation and bridge networking—topics we cover in week three of our DevOps fundamentals track.
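
A quick way to see these primitives from the CLI, sketched below; the container name is illustrative, and in the second run nginx should log a bind permission error on port 80 and exit:

# Inspect the namespaces and capability bitmap of a container's PID 1
docker run -d --name web nginx
docker exec web ls /proc/1/ns
docker exec web grep Cap /proc/1/status

# Reproduce the port-80 failure described above: drop the capability nginx
# needs to bind a privileged port, and the master process fails to start
docker run --rm --cap-drop NET_BIND_SERVICE nginx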

Docker images: anatomy, layers, and the build process

A Docker image is a read-only template containing application code, runtime, libraries, and filesystem snapshots. Images are built from Dockerfiles—text files with instructions like FROM, COPY, RUN, and CMD. Each instruction creates a new filesystem layer. Docker uses content-addressable storage: identical layers across multiple images are stored once and shared, dramatically reducing disk usage.

FROM ubuntu:22.04
RUN apt-get update && apt-get install -y nginx
COPY ./website /var/www/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

This five-line Dockerfile produces an image with five layers. The FROM instruction pulls the official Ubuntu 22.04 base image (approximately 77 MB). The RUN instruction executes package installation, creating a new layer with nginx binaries and dependencies (approximately 60 MB). The COPY instruction adds your website files as another layer. EXPOSE and CMD add metadata layers with negligible size. When you push this image to Docker Hub or a private registry, only layers that don't already exist remotely are uploaded—a 200 MB image might transfer only 15 MB if base layers are cached.
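
Assuming the Dockerfile above sits in the current directory next to the website/ folder, building it and inspecting the resulting layers looks like this (the tag is a placeholder):

# Build and tag the image from the Dockerfile in the current directory
docker build -t my-nginx-site:1.0 .

# List each layer and its size, newest first
docker history my-nginx-site:1.0

# Show the final image size
docker images my-nginx-site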

Multi-stage builds for production optimization

Development images often include compilers, debuggers, and build tools that bloat production deployments. Multi-stage builds solve this by using one image to compile code and a second minimal image to run the binary:

# Build stage
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
RUN go build -o api-server

# Production stage
FROM alpine:3.19
COPY --from=builder /app/api-server /usr/local/bin/
CMD ["api-server"]

The golang:1.21 image weighs 800 MB with full toolchain. The alpine:3.19 runtime image weighs 7 MB. The final production image contains only the compiled binary and Alpine base—total size under 20 MB. This pattern reduces attack surface, speeds deployment, and cuts registry storage costs. In our HSR Layout lab, we benchmark student projects before and after multi-stage optimization: a typical Node.js application drops from 1.2 GB to 180 MB, reducing Kubernetes pod startup time from 12 seconds to 3 seconds.
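
Building the file above and comparing sizes makes the saving concrete; the tag is illustrative:

# Only the final (alpine) stage is tagged; the builder stage is discarded after the COPY
docker build -t api-server:slim .

docker images api-server:slim   # final image, typically well under 20 MB
docker images golang:1.21       # build toolchain image (~800 MB) that never ships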

Image registries and distribution

Docker Hub is the public registry hosting official images for nginx, Redis, PostgreSQL, and 100,000+ community projects. Private registries (AWS ECR, Azure ACR, Google Artifact Registry, Harbor) store proprietary images behind authentication. The docker pull command fetches images; docker push uploads them. Image tags (e.g., nginx:1.25-alpine) provide version control—always specify explicit tags in production rather than :latest to ensure reproducible deployments.
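
A typical pull-tag-push sequence against a private registry, with the registry hostname and repository path as placeholders:

# Pull a pinned upstream tag rather than :latest
docker pull nginx:1.25-alpine

# Re-tag for the private registry and push
docker tag nginx:1.25-alpine registry.example.com/platform/nginx:1.25-alpine
docker login registry.example.com
docker push registry.example.com/platform/nginx:1.25-alpine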

Enterprise registries implement vulnerability scanning, image signing, and access policies. Cisco Secure Application integrates with registries to block deployment of images with critical CVEs. Our internship students at Aryaka configure Harbor registries with Trivy scanning and Notary signing, ensuring only vetted images reach production Kubernetes clusters serving 3,000+ enterprise customers across APAC.

Docker volumes: persistent storage for stateful workloads

Containers are ephemeral by design. When a container stops, its writable layer disappears—any data written to the container filesystem is lost. Volumes provide persistent storage that survives container restarts, enabling stateful applications like databases, message queues, and log aggregators.

Docker offers three storage mechanisms:

  • Volumes — Managed by Docker, stored in /var/lib/docker/volumes/ on Linux hosts. Created with docker volume create and mounted into containers with -v or --mount flags. Volumes persist independently of container lifecycle and can be shared across multiple containers.
  • Bind mounts — Map a host directory directly into a container. Useful for development when you want live code reloading, but discouraged in production due to host filesystem dependencies.
  • tmpfs mounts — Store data in host memory, never written to disk. Ideal for sensitive temporary data like session tokens or cryptographic keys.

# Create a named volume
docker volume create postgres-data

# Run PostgreSQL with persistent storage
docker run -d \
  --name postgres \
  -e POSTGRES_PASSWORD=secret \
  -v postgres-data:/var/lib/postgresql/data \
  postgres:16

# Inspect volume location
docker volume inspect postgres-data

The postgres-data volume persists even if you remove the container with docker rm postgres. Reattaching the volume to a new container restores all database state. This pattern enables blue-green deployments where you spin up a new container version, verify functionality, then switch traffic—all while preserving data integrity.
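
Verifying that claim takes four commands; the replacement container name is illustrative:

# Remove the old container; the named volume survives
docker rm -f postgres

# Attach the same volume to a replacement container
docker run -d \
  --name postgres-new \
  -e POSTGRES_PASSWORD=secret \
  -v postgres-data:/var/lib/postgresql/data \
  postgres:16

# The existing databases are still there
docker exec postgres-new psql -U postgres -c '\l'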

Volume drivers for distributed storage

Local volumes work for single-host deployments but fail in clustered environments. Volume drivers extend Docker to network-attached storage systems:

  • NFS driver — Mounts NFS shares as Docker volumes, enabling container migration across hosts.
  • AWS EBS/EFS drivers — Integrate with Amazon Elastic Block Store and Elastic File System for cloud-native persistence.
  • GlusterFS/Ceph drivers — Provide distributed block and object storage for on-premises Kubernetes clusters.
  • Portworx/StorageOS — Commercial solutions offering replication, snapshots, and disaster recovery for containerized databases.
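
Even without a third-party plugin, the built-in local driver can mount an NFS export as a named volume; the server address and export path below are placeholders:

# Create a volume backed by an NFS export
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.0.20,rw,nfsvers=4 \
  --opt device=:/exports/jenkins \
  jenkins-artifacts

# Mount it like any other named volume
docker run -d --name jenkins -p 8080:8080 \
  -v jenkins-artifacts:/var/jenkins_home \
  jenkins/jenkins:lts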

Wipro's Bengaluru DevOps team uses AWS EFS-backed volumes for Jenkins build artifacts shared across 40+ build agents. When a build agent fails, Kubernetes reschedules the pod to another node, and the EFS volume automatically remounts—zero data loss, zero manual intervention. Our AWS DevOps training program includes labs where students configure EFS CSI drivers, test failover scenarios, and measure I/O performance under load.

Backup and disaster recovery strategies

Volume backups require coordination between application state and filesystem snapshots. For databases, execute docker exec postgres pg_dump to create logical backups before snapshotting volumes. For distributed systems, use application-native backup tools (e.g., Cassandra nodetool snapshot) that understand cluster topology. Cloud providers offer automated snapshot schedules—AWS EBS snapshots, Azure Disk snapshots, GCP persistent disk snapshots—with point-in-time recovery.
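
A sketch of both approaches against the PostgreSQL container from earlier; the database name and backup paths are placeholders:

# Logical backup: dump a database from the running container to the host
docker exec postgres pg_dump -U postgres appdb > appdb-$(date +%F).sql

# Filesystem backup: tar the named volume through a throwaway container
docker run --rm \
  -v postgres-data:/source:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/postgres-data-$(date +%F).tar.gz -C /source .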

Test restore procedures quarterly. A common interview question at Cisco India: "Your production MongoDB container crashes and the volume is corrupted—walk me through recovery." A strong answer involves restoring from the most recent snapshot, replaying oplog entries from replica-set members, and validating data consistency before returning the node to service. We simulate this scenario in week six of our DevOps curriculum using Chaos Monkey-style failure injection.

Docker networking: bridge, host, overlay, and macvlan modes

Docker networking connects containers to each other, to the host, and to external networks. The Docker Engine creates virtual network interfaces, bridges, and routing tables to implement four primary network drivers.

Bridge networks (default mode)

When you run docker run without network flags, Docker attaches the container to the default bridge network. The Docker daemon creates a virtual bridge interface (docker0) on the host, assigns it IP 172.17.0.1/16, and allocates IPs to containers from that subnet. Containers on the same bridge can communicate using IP addresses or container names (via embedded DNS).

# Create a custom bridge network
docker network create --driver bridge --subnet 10.10.0.0/24 app-net

# Run containers on custom network (give the bare node image a long-running
# command, otherwise its REPL exits immediately with no terminal attached)
docker run -d --name web --network app-net nginx
docker run -d --name api --network app-net node:20 sleep infinity

# Containers resolve each other by name via the embedded DNS server
# (the slim nginx image ships without ping, so query the resolver with getent)
docker exec web getent hosts api

Custom bridge networks provide automatic DNS resolution and network isolation. Containers on app-net cannot communicate with containers on db-net unless explicitly connected to both networks. This segmentation mirrors traditional VLAN designs—web tier, application tier, and database tier in separate broadcast domains.
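
To let one container straddle both segments, attach it to the second network explicitly; the network and container names continue the example above:

# Create an isolated network for the data tier
docker network create --driver bridge db-net
docker run -d --name db --network db-net -e POSTGRES_PASSWORD=secret postgres:16

# api can now reach both app-net and db-net peers; web still cannot reach db
docker network connect db-net api
docker exec api getent hosts db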

Host networking for performance-critical workloads

Host mode (--network host) removes network namespace isolation. The container shares the host's network stack directly, eliminating virtual bridge overhead. A container listening on port 8080 binds to the host's port 8080 without NAT or port mapping. This mode delivers maximum throughput—critical for high-frequency trading platforms, real-time video processing, or packet capture tools.

The tradeoff is loss of isolation. Multiple containers cannot bind to the same port, and container IP addresses are indistinguishable from host IPs. Use host networking sparingly, typically for monitoring agents (Prometheus node_exporter, Datadog agent) or network troubleshooting tools (tcpdump, Wireshark) that require raw socket access.
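
A minimal sketch on a Linux host: with host networking there is no -p mapping, and whatever port the process binds appears directly on the host:

# nginx binds the host's port 80 directly; no NAT, no port publishing
docker run -d --name edge --network host nginx

# Reachable on the host without any -p flag
curl -I http://localhost:80

# A second host-mode container trying to bind port 80 would fail: the port is taken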

Overlay networks for multi-host clusters

Overlay networks span multiple Docker hosts, enabling containers on different physical machines to communicate as if on the same LAN. Docker Swarm and Kubernetes use overlay networks with VXLAN encapsulation to tunnel traffic across underlay networks. Each overlay network gets a unique VXLAN ID (VNI); in Swarm mode the managers track container-to-host mappings in their built-in Raft store, while legacy standalone overlay networks required an external key-value store such as etcd or Consul.

# Initialize Swarm mode
docker swarm init --advertise-addr 192.168.1.10

# Create overlay network
docker network create --driver overlay --attachable app-overlay

# Deploy service across three nodes
docker service create --name web --network app-overlay --replicas 3 nginx

Overlay networks handle service discovery, load balancing, and failover automatically. When a container on node A sends traffic to a service name, Docker's ingress routing mesh forwards packets to healthy replicas on nodes B and C using round-robin or least-connections algorithms. Barracuda Networks' Bengaluru office uses overlay networks to connect containerized WAF instances across AWS, Azure, and on-premises data centers—a hybrid cloud architecture that our internship students troubleshoot during their fourth month.

Macvlan networks for legacy integration

Macvlan mode assigns each container a unique MAC address and IP from the physical network subnet, making containers appear as physical devices to upstream switches and routers. This mode enables containers to participate in existing VLANs, receive DHCP addresses, and communicate with bare-metal servers without NAT.

# Create macvlan network on eth0, VLAN 100
docker network create -d macvlan \
  --subnet=192.168.100.0/24 \
  --gateway=192.168.100.1 \
  -o parent=eth0.100 \
  vlan100

# Run container with VLAN-tagged interface
docker run -d --network vlan100 --ip 192.168.100.50 nginx

Network engineers use macvlan to containerize network services (DHCP servers, DNS resolvers, RADIUS authenticators) that must integrate with existing IP address management systems. The limitation: containers cannot communicate with the host via macvlan due to Linux kernel restrictions—you need a separate bridge or host network for host-to-container traffic.
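
One common workaround, sketched here with placeholder addresses, is to give the host its own macvlan sub-interface on the same parent so host and containers share a broadcast domain:

# On the host: create a macvlan shim on the same parent interface and VLAN
sudo ip link add mac0 link eth0.100 type macvlan mode bridge
sudo ip addr add 192.168.100.200/32 dev mac0
sudo ip link set mac0 up

# Route traffic for the container's address via the shim
sudo ip route add 192.168.100.50/32 dev mac0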

Common pitfalls and production troubleshooting

Docker's simplicity in development often masks complexity in production. These are the failure modes we see repeatedly in our HSR Layout lab and during internship placements at HCL, TCS, and Infosys.

Unbounded resource consumption

Without resource limits, a single container can consume all host CPU and memory, starving other workloads. Always set --memory and --cpus flags or equivalent Kubernetes resource requests/limits. A runaway Java application with default heap settings can allocate 25% of host RAM; a cryptocurrency miner injected via supply chain attack can peg all CPU cores.

# Limit container to 512 MB RAM and 0.5 CPU cores
docker run -d \
  --memory=512m \
  --cpus=0.5 \
  --name api \
  myapp:latest

Monitor resource usage with docker stats or integrate with Prometheus cAdvisor exporter. Set up alerts when containers exceed 80% of allocated memory—a leading indicator of OOM kills that cause service disruptions.
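
Two commands cover the day-to-day checks, assuming the api container from the example above:

# One-shot snapshot of CPU, memory, network, and block I/O per container
docker stats --no-stream

# Raise the memory ceiling of a running container without restarting it
docker update --memory=1g --memory-swap=1g api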

Logging and observability gaps

Container stdout/stderr logs are ephemeral by default. When a container restarts, logs vanish unless you configure a logging driver. Production deployments should use json-file driver with log rotation or forward logs to centralized systems (ELK stack, Splunk, Datadog).

# Configure log rotation in daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

Distributed tracing becomes critical in microservices architectures. Instrument applications with OpenTelemetry to correlate requests across 15+ container hops. During technical interviews at Akamai India, candidates are asked to debug a 500ms latency spike using Jaeger traces—a scenario we replicate in our lab with intentional network delays and database query slowdowns.

Image vulnerability management

Base images accumulate CVEs over time. The ubuntu:20.04 image from six months ago likely contains unpatched OpenSSL or glibc vulnerabilities. Implement automated scanning in CI/CD pipelines using Trivy, Clair, or Anchore. Fail builds when critical vulnerabilities are detected, and maintain a patching cadence—rebuild images monthly even if application code hasn't changed.
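
A typical CI gate, sketched here with Trivy and a placeholder image name; the non-zero exit code is what fails the build:

# Fail the pipeline if the image carries critical or high-severity CVEs
trivy image --severity CRITICAL,HIGH --exit-code 1 myapp:latest

# Scan a base image before adopting it
trivy image ubuntu:22.04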

Founder Vikas Swami's QuickZTNA platform uses distroless base images (Google's minimal containers with only application runtime, no shell or package manager) to reduce attack surface by 90%. This approach eliminates entire vulnerability classes—no bash means no shellshock exploits, no apt means no package manager CVEs. We teach distroless patterns in our advanced DevOps modules for students targeting senior roles at Cisco, Palo Alto Networks, and Fortinet.
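
A hedged sketch of the distroless pattern for a Go service, reusing the multi-stage idea from earlier; the binary name is illustrative and the base image is one of Google's published static distroless tags:

# Build stage: full toolchain
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o ztna-agent

# Production stage: no shell, no package manager, just the static binary
FROM gcr.io/distroless/static-debian12
COPY --from=builder /app/ztna-agent /ztna-agent
ENTRYPOINT ["/ztna-agent"]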

Networking and DNS resolution failures

Containers inherit DNS resolvers from the host's /etc/resolv.conf. If the host points at a loopback resolver such as 127.0.0.53 (systemd-resolved), Docker cannot pass that address into containers; it strips the loopback entry and falls back to public DNS, which internal-only or firewalled networks often block, leaving containers unable to resolve external domains. Override with --dns 8.8.8.8 (or your internal resolvers) at run time, or set nameservers in the Docker daemon configuration. Inter-container communication failures often trace to firewall rules blocking the docker0 bridge or iptables NAT rules corrupted by manual edits.

# Debug DNS resolution inside container
docker exec web cat /etc/resolv.conf
docker exec web nslookup google.com

# Inspect bridge network and iptables rules
docker network inspect bridge
sudo iptables -t nat -L -n -v

Overlay network issues manifest as intermittent connectivity—packets drop when VXLAN encapsulation exceeds MTU limits. Reduce MTU to 1450 bytes to account for VXLAN overhead, or enable jumbo frames on physical network infrastructure. Movate's Chennai operations team encountered this exact issue when migrating to Kubernetes; our internship students diagnosed it using tcpdump on overlay interfaces and recommended MTU adjustments that restored 99.99% uptime.
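
The fix can be baked into the network definition itself; a sketch with a placeholder network name:

# Create the overlay with an MTU that leaves room for the ~50-byte VXLAN header
docker network create --driver overlay \
  --opt com.docker.network.driver.mtu=1450 \
  app-overlay-lowmtu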

Real-world deployment patterns and enterprise adoption

Docker's production footprint spans startups running five containers on a single VPS to Fortune 500 enterprises orchestrating 50,000+ containers across multi-cloud Kubernetes clusters. Understanding deployment patterns helps you architect systems that scale and survive failures.

Microservices and API gateways

Modern SaaS platforms decompose monoliths into 20-100 microservices, each running in dedicated containers. An e-commerce checkout flow might involve containers for user authentication (OAuth2 service), inventory lookup (PostgreSQL + Redis cache), payment processing (Stripe integration), order fulfillment (RabbitMQ worker), and email notification (SendGrid API). An API gateway (Kong, Traefik, AWS API Gateway) sits at the edge, routing requests to appropriate backend services based on URL paths and headers.

This architecture enables independent scaling—the payment service scales to 50 replicas during Black Friday sales while the email service remains at 5 replicas. Deployment velocity increases because teams can ship updates to individual services without coordinating releases across the entire platform. Aryaka's SD-WAN control plane uses this pattern: separate containers for topology management, policy engine, analytics aggregation, and customer portal, all communicating via gRPC over an overlay network.

CI/CD pipelines and ephemeral build environments

Jenkins, GitLab CI, and GitHub Actions use Docker to provide clean, reproducible build environments. Each pipeline stage runs in a fresh container with exact dependency versions, eliminating "works on my machine" discrepancies. A typical pipeline pulls source code, runs unit tests in a Node.js container, builds a production image, scans for vulnerabilities, pushes to a registry, and triggers a Kubernetes rolling update—all automated and auditable.

# GitLab CI pipeline example
stages:
  - test
  - build
  - deploy

test:
  stage: test
  image: node:20
  script:
    - npm install
    - npm test

build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind   # Docker-in-Docker service so the docker CLI has a daemon to talk to
  script:
    - docker build -t myapp:$CI_COMMIT_SHA .
    - docker push myapp:$CI_COMMIT_SHA   # assumes a prior docker login and a registry-prefixed tag

deploy:
  stage: deploy
  image: bitnami/kubectl:1.28
  script:
    - kubectl set image deployment/myapp app=myapp:$CI_COMMIT_SHA

IBM's Bengaluru development center runs 800+ concurrent build containers across a Kubernetes cluster, processing 12,000 commits daily. Build times dropped 60% after migrating from VM-based Jenkins agents to containerized executors with layer caching. Our DevOps curriculum includes a capstone project where students build a complete CI/CD pipeline for a three-tier web application, integrating SonarQube code analysis, Trivy vulnerability scanning, and Slack notifications—skills that directly translate to roles at Accenture, Wipro, and Infosys.

Edge computing and IoT gateways

Containers enable consistent software deployment across heterogeneous edge devices—ARM-based Raspberry Pis, x86 industrial PCs, and custom ASIC boards. A smart city traffic management system might run containerized computer vision models on roadside cameras, aggregating data to a central Kubernetes cluster for real-time route optimization. Docker's multi-architecture image support (linux/amd64, linux/arm64, linux/arm/v7) allows a single docker pull command to fetch the correct binary for each device.
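
Multi-architecture images are usually produced with Buildx; a sketch in which the registry and image name are placeholders:

# One-time setup: a builder that can target other architectures (via QEMU emulation)
docker buildx create --use --name multiarch

# Build amd64, arm64, and armv7 variants from one Dockerfile and push a single manifest list
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  -t registry.example.com/edge/vision-agent:1.4 \
  --push .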

Cisco's IOx platform uses containers to deploy custom applications on industrial routers and switches, enabling predictive maintenance, protocol translation, and local data processing without backhauling traffic to cloud data centers. Our Network Security Operations Division internship includes a module on containerized network functions (CNFs) where students deploy virtualized firewalls, load balancers, and intrusion detection systems as Docker containers—preparing them for roles in telecom operators like Airtel, Jio, and Vodafone Idea that are transitioning to cloud-native 5G core networks.

How Docker integrates with DevOps certification paths

Docker skills appear across multiple certification tracks, reflecting the technology's ubiquity in modern infrastructure. Understanding where Docker fits in exam blueprints helps you prioritize study time and align hands-on practice with certification objectives.

AWS Certified DevOps Engineer – Professional

This certification expects proficiency in Amazon ECS (Elastic Container Service) and EKS (Elastic Kubernetes Service). Exam questions probe ECS task definitions (equivalent to Docker Compose files), Fargate launch types (serverless containers), and ECR image lifecycle policies. You must demonstrate ability to troubleshoot container startup failures using CloudWatch Logs, optimize ECS service auto-scaling based on CloudWatch metrics, and implement blue-green deployments with CodeDeploy.

Our AWS DevOps course in Bangalore maps directly to this blueprint. Week four covers ECS cluster architecture, task placement strategies, and service discovery via AWS Cloud Map. Week five tackles EKS cluster provisioning with eksctl, pod security policies, and Kubernetes Ingress controllers backed by Application Load Balancers. Students complete labs that mirror exam scenarios: deploying a multi-container application to ECS, configuring auto-scaling to handle 10x traffic spikes, and implementing canary deployments that automatically roll back on error rate thresholds.

Certified Kubernetes Administrator (CKA)

While CKA focuses on Kubernetes rather than Docker specifically, 40% of exam tasks involve container troubleshooting. You must inspect container logs with kubectl logs, execute commands inside containers with kubectl exec, and understand how Kubernetes CRI (Container Runtime Interface) abstracts Docker, containerd, and CRI-O. Questions test your ability to diagnose ImagePullBackOff errors (registry authentication failures), CrashLoopBackOff states (application startup failures), and resource quota violations.

The exam environment provides a six-cluster Kubernetes setup where you must complete 17 performance-based tasks in 2 hours. Typical tasks: "Deploy a three-replica nginx deployment, expose it via a LoadBalancer service, and configure a liveness probe that checks /healthz every 10 seconds." Fast Docker CLI muscle memory—knowing docker inspect, docker logs --tail 50, docker exec -it without hesitation—directly translates to faster kubectl equivalents.
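
A typical triage sequence for the failure states mentioned above; the pod name is illustrative:

# Events explain ImagePullBackOff; logs explain CrashLoopBackOff
kubectl describe pod web-7c9d8f6b4-abcde
kubectl logs web-7c9d8f6b4-abcde --previous   # output from the last crashed attempt
kubectl exec -it web-7c9d8f6b4-abcde -- sh    # shell into the running container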

Docker Certified Associate (DCA)

This vendor-specific certification validates Docker Enterprise administration skills: Swarm orchestration, Docker Trusted Registry management, role-based access control, and content trust (image signing). The exam covers Universal Control Plane (UCP) architecture, backup and disaster recovery procedures, and integration with LDAP/AD for authentication. While less common than AWS or Kubernetes certifications, DCA remains relevant for organizations running Docker Enterprise in regulated industries (banking, healthcare, government) that require commercial support contracts.

HCL and TCS maintain Docker Enterprise deployments for clients in BFSI (banking, financial services, insurance) sectors where DPDP Act compliance mandates on-premises container platforms. Our curriculum includes Docker Enterprise modules for students targeting these verticals, covering UCP installation, DTR replication across data centers, and integration with vulnerability scanners required by RBI's cybersecurity framework.

Frequently asked questions

Can I run Windows containers on Linux hosts or vice versa?

No. Containers share the host kernel, so Windows containers require a Windows Server host with Hyper-V isolation, and Linux containers require a Linux kernel. Docker Desktop for Windows and macOS uses a lightweight Linux VM (WSL2 or HyperKit) to run Linux containers on non-Linux hosts. For production workloads, match container OS to host OS—Linux containers on Ubuntu/RHEL hosts, Windows containers on Windows Server 2019/2022 hosts. Cross-platform scenarios require separate clusters or hybrid orchestration with Kubernetes supporting both Linux and Windows node pools.

How do I secure Docker daemon access in production?

By default, Docker daemon listens on a Unix socket (/var/run/docker.sock) accessible only to root and docker group members. Exposing the daemon over TCP without TLS is equivalent to granting root access to the host. For remote management, configure TLS mutual authentication: generate CA certificates, server certificates for the daemon, and client certificates for administrators. Use dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H=0.0.0.0:2376 and require clients to present valid certificates. Better yet, use Kubernetes or Docker Swarm APIs that provide built-in RBAC and audit logging.

What is the difference between CMD and ENTRYPOINT in Dockerfiles?

ENTRYPOINT defines the executable that runs when the container starts; CMD provides default arguments to that executable. If you specify both, CMD arguments append to ENTRYPOINT. Use ENTRYPOINT for the main process (e.g., ENTRYPOINT ["nginx"]) and CMD for default flags (e.g., CMD ["-g", "daemon off;"]). This pattern allows users to override arguments at runtime (docker run myimage -c /custom/nginx.conf) without changing the base executable. For scripts that require argument parsing, use ENTRYPOINT ["./entrypoint.sh"] and pass parameters via CMD or runtime flags.
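
A minimal illustration of that append behaviour; the image name and override path match the example in the answer above:

FROM nginx:1.25
ENTRYPOINT ["nginx"]
CMD ["-g", "daemon off;"]

# docker run myimage                        -> executes: nginx -g "daemon off;"
# docker run myimage -c /custom/nginx.conf  -> executes: nginx -c /custom/nginx.conf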

How do I troubleshoot containers that exit immediately after starting?

Check exit codes with docker inspect --format='{{.State.ExitCode}}' container-name. Exit code 0 means clean shutdown; non-zero indicates error. View logs with docker logs container-name to see stdout/stderr output. Common causes: missing environment variables (application crashes on startup), incorrect CMD/ENTRYPOINT (shell syntax errors), permission issues (application cannot write to mounted volumes), or resource limits (OOM killer terminates process). Use docker run -it --entrypoint /bin/sh myimage to override the entrypoint and explore the container interactively, testing commands manually until you identify the failure point.

Should I run multiple processes in a single container or one process per container?

Docker philosophy favors one process per container for clean separation of concerns, independent scaling, and simplified logging. However, pragmatic production deployments sometimes bundle tightly coupled processes—nginx + PHP-FPM, application server + sidecar proxy, or init system managing multiple daemons. The key question: can these processes scale independently? If yes, split them. If they must always deploy together and share lifecycle, a single container is acceptable. Use a lightweight init system (tini, dumb-init) as PID 1 to handle signal forwarding and zombie process reaping when running multiple processes.

How do I migrate data from one Docker volume to another?

Use a temporary container to copy data between volumes. Create the target volume, mount both source and target into a container, and use cp or rsync to transfer files:

docker volume create new-volume
docker run --rm \
  -v old-volume:/source \
  -v new-volume:/target \
  alpine sh -c "cp -a /source/. /target/"

For large datasets, use rsync -av instead of cp to preserve permissions and enable incremental transfers. For databases, prefer logical backups (pg_dump, mysqldump) over filesystem copies to ensure consistency. Test the migration in a staging environment first—data corruption during volume migration is a common cause of production outages that we simulate in our disaster recovery labs.

What are the performance implications of using Docker in production?

Container overhead is minimal—typically 2-5% CPU and negligible memory compared to bare-metal processes. Network performance depends on driver choice: bridge mode adds 5-10% latency due to NAT and iptables traversal; host mode matches bare-metal; overlay mode adds 10-15% latency from VXLAN encapsulation. Storage performance varies by filesystem driver: OverlayFS performs well for read-heavy workloads but suffers on write-intensive databases; use volumes backed by direct-attached NVMe or network storage with kernel bypass (SPDK, DPDK) for latency-sensitive applications. Benchmark your specific workload—our HSR Layout lab provides dedicated performance testing environments where students measure container overhead using Apache Bench, iperf3, and fio across different network and storage configurations.

Ready to Master DevOps Fundamentals?

Join 45,000+ students at Networkers Home. CCIE-certified trainers, 24x7 real lab access, and 100% placement support.
