Chapter 19 of 20 — Linux Administration

Containers on Linux — Docker, LXC, Namespaces & Cgroups

By Vikas Swami, CCIE #22239 | Updated Mar 2026 | Free Course

Container Technology — What Makes Containers Possible on Linux

Linux containers have revolutionized the way applications are developed, deployed, and managed by enabling lightweight, isolated environments on a single host. Unlike traditional virtualization, which requires full OS instances per VM, Linux containers share the host kernel, providing efficiency and speed. This is made possible through a combination of kernel features that allow multiple isolated user spaces to coexist seamlessly.

Fundamentally, Linux containers leverage several core technologies: Linux namespaces, control groups (cgroups), and auxiliary features like chroot, seccomp, and AppArmor. These components work together to isolate processes, control resource consumption, and enforce security boundaries. For example, namespaces provide process, network, and filesystem isolation, while cgroups restrict CPU, memory, and I/O usage.

Container runtimes such as Docker, LXC, and Podman utilize these kernel features to create flexible and portable container environments. The evolution of container technology on Linux has led to a rich ecosystem, supporting both system containers—running entire OS instances—and application containers focused on single services or microservices architectures. As an advanced Linux administrator, understanding these foundational elements is crucial for designing scalable, secure, and efficient containerized systems, whether on-premises or in cloud environments. For comprehensive training, Networkers Home offers the best courses in Bangalore.

Linux Namespaces — PID, Network, Mount, User & IPC Isolation

Linux namespaces form the core mechanism enabling container isolation by creating separate "views" of system resources for each containerized process. Each namespace type isolates a specific aspect of the system, ensuring that processes within a container see only their own environment, independently of others on the host or in other containers.

PID Namespace: Isolates process IDs, ensuring processes inside a container have their own init process (PID 1). This prevents processes in different containers from interfering with each other’s process trees and enhances security and process management.

unshare --pid --fork --mount-proc bash
# Starts a new shell with a separate PID namespace

Network Namespace: Provides each container with its own network stack, including interfaces, IP addresses, routing tables, and firewall rules. This allows containers to have independent network configurations, making them appear as separate hosts.

ip netns add mynamespace
ip netns exec mynamespace bash
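By itself, a namespace created with ip netns has no connectivity to the host. A veth pair is the usual way to link the two; a minimal sketch, requiring root, where the interface names and the 10.0.0.0/24 addresses are illustrative:

```shell
ip link add veth-host type veth peer name veth-ns   # create the veth pair
ip link set veth-ns netns mynamespace               # move one end into the namespace
ip addr add 10.0.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec mynamespace ip addr add 10.0.0.2/24 dev veth-ns
ip netns exec mynamespace ip link set veth-ns up
ip netns exec mynamespace ping -c 1 10.0.0.1        # verify the namespace can reach the host
```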

Mount Namespace: Isolates filesystem mount points, allowing containers to have their own root filesystem views. Changes in mount points within a container do not affect the host or other containers.

unshare --mount bash  # enter a new mount namespace first
mount --bind /my/container/root /mnt
mount --make-private /mnt

User Namespace: Maps user and group IDs inside a container to different IDs outside, enabling root inside the container without granting root privileges on the host. This is critical for security, especially when running untrusted containers.

# Map UID/GID 0 inside process 1234's user namespace to host IDs starting at 100000
newuidmap 1234 0 100000 65536
newgidmap 1234 0 100000 65536

IPC Namespace: Isolates inter-process communication mechanisms such as semaphores, message queues, and shared memory segments, preventing processes in different containers from interfering with each other's IPC resources.
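The same unshare tool used above can demonstrate IPC isolation. A minimal sketch, combining an unprivileged user namespace with a new IPC namespace so no root is needed (assumes unprivileged user namespaces are enabled on the host):

```shell
# The new IPC namespace starts empty: ipcs lists none of the message
# queues, semaphores, or shared memory segments created outside it.
unshare --user --ipc sh -c 'ipcs -q'
```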

Mastering Linux namespaces allows administrators to craft finely tuned container environments, ensuring security, resource separation, and operational independence. These features underpin container runtimes like Docker and LXC, making them indispensable tools in modern Linux system administration. To explore more advanced container concepts, visit Networkers Home Blog.

Cgroups — Resource Limits for CPU, Memory & I/O

Control groups (cgroups) are a Linux kernel feature that enables fine-grained control over resource allocation and management for processes. In containerized environments, cgroups are essential for ensuring fair resource distribution, preventing resource starvation, and maintaining system stability. They allow administrators to specify limits on CPU usage, memory consumption, disk I/O, and network bandwidth for individual containers or groups of containers.

CPU Subsystem: Cgroups can restrict CPU time using parameters like cpu.shares (a relative weight) and cpu.cfs_quota_us (a hard cap per cpu.cfs_period_us, which defaults to 100000 µs). For example, limiting a container to 50% of one CPU:

echo 512 > /sys/fs/cgroup/cpu/mycontainer/cpu.shares
echo 50000 > /sys/fs/cgroup/cpu/mycontainer/cpu.cfs_quota_us
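The paths above assume the cgroup v1 hierarchy. On distributions using the unified cgroup v2 hierarchy (the default on most modern systems), the equivalent knobs are cpu.weight, cpu.max, and memory.max. A sketch, assuming the unified hierarchy is mounted at /sys/fs/cgroup (requires root):

```shell
mkdir /sys/fs/cgroup/mycontainer
echo 100 > /sys/fs/cgroup/mycontainer/cpu.weight          # relative weight (1-10000)
echo "50000 100000" > /sys/fs/cgroup/mycontainer/cpu.max  # 50000 µs quota per 100000 µs period = 50%
echo 2G > /sys/fs/cgroup/mycontainer/memory.max           # hard memory limit
```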

Memory Subsystem: Memory limits are enforced through parameters like memory.limit_in_bytes. This prevents containers from exhausting host memory, which could lead to system instability:

echo 2G > /sys/fs/cgroup/memory/mycontainer/memory.limit_in_bytes

I/O Subsystem: I/O bandwidth can be controlled using the blkio subsystem, which throttles read/write rates per block device, identified by major:minor numbers. For example, to cap reads from /dev/sda (device 8:0) at 10 MB/s:

echo "8:0 10485760" > /sys/fs/cgroup/blkio/mycontainer/blkio.throttle.read_bps_device

Practical Example: To create a cgroup for a container and limit its CPU and memory, the commands are:

mkdir /sys/fs/cgroup/cpu/mycontainer
echo 512 > /sys/fs/cgroup/cpu/mycontainer/cpu.shares
mkdir /sys/fs/cgroup/memory/mycontainer
echo 1G > /sys/fs/cgroup/memory/mycontainer/memory.limit_in_bytes
# Assign a process to the cgroup by writing its PID to the tasks file
# ($$ is the current shell; substitute the target PID)
echo $$ > /sys/fs/cgroup/cpu/mycontainer/tasks
echo $$ > /sys/fs/cgroup/memory/mycontainer/tasks

Comparison of container resource management tools:

Feature      | Cgroups                                       | Docker Resource Limits      | LXC/LXD
Granularity  | Fine-grained control at process level         | Applies at container level  | Configurable per container
Ease of Use  | Requires manual setup or scripting            | Integrated with Docker CLI  | Configurable via profile files
Flexibility  | Highly flexible, supports multiple subsystems | Limited to container scope  | Flexible, with detailed profiles

Understanding cgroups is key for advanced Linux administrators who need to optimize resource utilization, ensure container isolation, and prevent resource contention. For deeper insights, check out the Networkers Home Blog.

LXC/LXD — System Containers vs Application Containers

Linux Containers (LXC) and its successor, LXD, are mature container management tools that provide lightweight virtualization at the system level. They enable running multiple isolated Linux systems (containers) on a single host, sharing the kernel but maintaining separate user spaces. LXC is a low-level toolset, while LXD offers a more user-friendly, REST API-driven experience, making it suitable for managing large-scale container deployments.

System Containers: These are designed to run full Linux distributions, similar to lightweight virtual machines, complete with init systems and package managers. LXC/LXD excels here, making system containers well suited for development, testing, and isolated server environments.

Application Containers: Focused on single application or microservice deployment, these are more lightweight, often managed with Docker or Podman. They typically do not include a full OS but package only the application and its dependencies.
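As a quick illustration of the system-container workflow, LXD can launch a full Ubuntu container in a few commands ("web1" is an illustrative name; assumes LXD is installed and initialized):

```shell
lxc launch ubuntu:22.04 web1   # pull the image and start a system container
lxc exec web1 -- bash          # open a shell inside the container's full OS
lxc list                       # show running containers and their addresses
```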

Key differences summarized:

Feature           | LXC/LXD                                | Docker
Use Case          | System containers, full OS environment | Application containers, microservices
Isolation Level   | Complete OS-level isolation            | Application process isolation
Management        | Command-line, REST API, GUI (LXD)      | CLI, Docker Compose, Docker Swarm
Resource Overhead | Higher, due to full OS environment     | Lower, minimal footprint

Choosing between LXC/LXD and Docker depends on the use case: LXC/LXD for full system virtualization with multiple Linux distributions, and Docker for rapid deployment of isolated applications. Both complement each other in complex environments. For hands-on training, explore courses at Networkers Home.

Docker on Linux — Installation, Daemon & Storage Drivers

Docker is the most popular platform for containerization, offering a simple yet powerful interface to build, run, and manage containers on Linux. Installing Docker on Linux involves setting up the Docker Engine, which comprises a daemon process, CLI, and container runtime. The installation process varies slightly across distributions but generally follows a standardized procedure using repositories.

Installation Steps (Ubuntu/Debian):

1. Add Docker’s official GPG key and repository:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list

2. Update the package index and install Docker:

sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io

Once installed, start and enable the Docker daemon:

sudo systemctl start docker
sudo systemctl enable docker
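A quick sanity check that the daemon is running and can pull and run containers:

```shell
sudo docker version              # shows both client and daemon versions if the daemon is up
sudo docker run --rm hello-world # pulls and runs a minimal test container
```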

Docker’s architecture consists of the Docker daemon, which manages containers, images, networks, and storage, and the CLI for user interaction. The daemon communicates via REST API, allowing integration with orchestration tools.

Storage drivers are critical for container filesystem management. Popular options include overlay2, btrfs, and the older aufs (now deprecated). Each has trade-offs in performance, stability, and feature support. overlay2 is the default on most Linux distributions due to its efficiency and stability.

docker info | grep Storage
# Displays current storage driver in use
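The driver can also be pinned explicitly in the daemon configuration. A sketch using /etc/docker/daemon.json (the daemon must be restarted for the change to take effect):

```shell
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker
```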

Choosing the right storage driver impacts container startup times, snapshot capabilities, and filesystem performance. Advanced users should monitor and tune these settings based on workload requirements. For more detailed tutorials, refer to Networkers Home Blog.

Podman — Rootless, Daemonless Docker Alternative

Podman has emerged as a compelling alternative to Docker, especially suited for environments emphasizing security and simplicity. Unlike Docker, Podman operates without a central daemon, running containers as ordinary child processes. Combined with rootless operation, this architecture reduces attack surfaces and simplifies permissions management, making it ideal for multi-tenant or sensitive deployments.

Podman's CLI syntax is compatible with Docker, easing migration and adoption. To run containers as a non-root user, simply install Podman and execute commands like:

podman run -d -p 8080:80 nginx

Podman leverages the same container image formats as Docker, supporting OCI images. It integrates seamlessly with systemd, enabling containers to be managed as native system services, which is advantageous for production environments.
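A sketch of that systemd integration ("web" is an illustrative container name; podman generate systemd emits unit files for an existing container):

```shell
podman run -d --name web -p 8080:80 nginx
podman generate systemd --name web --files  # writes container-web.service to the current directory
# Install the unit under ~/.config/systemd/user/ and manage it with:
#   systemctl --user enable --now container-web
```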

Key advantages include:

  • Rootless operation for enhanced security
  • No daemon, reducing complexity and resource usage
  • Compatibility with Docker images and commands
  • Integration with existing Linux security mechanisms

Despite its benefits, Podman may lack some ecosystem integrations available to Docker, such as Docker Compose or Swarm. Nevertheless, it is gaining popularity among advanced Linux administrators seeking secure, lightweight container solutions. To master Podman, explore courses at Networkers Home.

Container Networking on Linux — Bridge, macvlan & Overlay

Networking is fundamental to container orchestration, enabling containers to communicate both internally and externally. Linux provides multiple networking modes for containers, each suitable for different use cases:

  • Bridge Networking: The default mode, creating an internal virtual network with a Linux bridge (e.g., docker0). Containers connected to this bridge can communicate with each other and with the host via NAT.
  • Macvlan Networking: Assigns containers unique MAC addresses, making them appear as separate physical devices on the network. Ideal for legacy systems or when containers need direct access to the physical network.
  • Overlay Networking: Used in multi-host container clusters, overlay networks create a virtual network across multiple hosts, enabling container-to-container communication over the physical network. Technologies like VXLAN underpin overlay networks in Docker Swarm or Kubernetes.

Example: Creating a macvlan network in Docker:

docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  -o parent=eth0 my_macvlan_network
docker run -d --net=my_macvlan_network --name=mycontainer nginx

Overlay networks typically require a container orchestrator like Kubernetes or Docker Swarm to manage multi-host communication. They provide scalability and flexibility but introduce complexity in setup and security considerations.
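A minimal overlay sketch using Docker Swarm as the orchestrator (a single-node swarm for illustration; "my_overlay" and "web" are illustrative names):

```shell
docker swarm init                            # turn this host into a swarm manager
docker network create -d overlay my_overlay  # VXLAN-backed overlay network
docker service create --network my_overlay --name web nginx
```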

Proper network configuration ensures container environments are secure, scalable, and performant. For advanced networking topics, visit Networkers Home Blog.

Container Security — Seccomp, Capabilities & Rootless Containers

Security remains a critical concern in container deployment. Linux provides various mechanisms to harden containers against exploits and unauthorized access:

  • Seccomp: Filters system calls made by containers, limiting the attack surface. Docker, for instance, uses default seccomp profiles that restrict dangerous syscalls, but custom profiles can be crafted for specific security policies.
  • Capabilities: Linux capabilities divide root privileges into discrete units. Containers can be granted only the necessary capabilities, such as CAP_NET_ADMIN, reducing the risk if compromised. Dropping all capabilities and adding only those needed enhances security.
  • Rootless Containers: Running containers without root privileges minimizes risk and aligns with the principle of least privilege. Tools like Podman facilitate rootless container execution, enabling secure multi-user environments.

Example: Running a container with limited capabilities:

docker run --cap-drop=ALL --cap-add=NET_ADMIN nginx
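A custom seccomp profile, as mentioned above, is applied the same way via --security-opt (the profile path is illustrative; the file is a JSON seccomp policy in the format Docker documents):

```shell
docker run --security-opt seccomp=/path/to/profile.json nginx
# For debugging only — disables syscall filtering entirely:
docker run --security-opt seccomp=unconfined nginx
```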

Additional security layers include using AppArmor or SELinux policies, encrypted container images, and regular security audits. Container security strategies must be integrated into deployment pipelines to prevent vulnerabilities. For more insights, refer to Networkers Home Blog.

Key Takeaways

  • Linux containers leverage kernel features like namespaces and cgroups for isolation and resource management.
  • Namespaces isolate processes, network, filesystem, user IDs, and IPC, forming the backbone of container security.
  • Cgroups enable precise control over CPU, memory, and I/O, preventing resource contention.
  • LXC/LXD are suitable for full system containers, while Docker and Podman focus on application containers.
  • Docker on Linux simplifies container management but requires understanding storage drivers and daemon configurations.
  • Podman offers a rootless, daemonless alternative, enhancing security for containerized workloads.
  • Container networking modes like bridge, macvlan, and overlay cater to diverse deployment needs, from simple setups to multi-host clustering.
  • Security mechanisms such as seccomp, capabilities, and rootless containers are essential for protecting container environments.

Frequently Asked Questions

What are the key kernel features that enable Linux containers?

Linux containers rely primarily on namespaces, cgroups, and seccomp. Namespaces provide process, network, mount, user, and IPC isolation, ensuring containers run in separate environments. Cgroups manage resource allocation, preventing containers from monopolizing system resources. Seccomp filters system calls, reducing attack surfaces. These kernel features work collectively to deliver lightweight, secure, and isolated container environments on Linux.

How does Docker differ from LXC/LXD in container management?

Docker primarily focuses on application containers, providing an easy-to-use interface for building and deploying microservices with less overhead. LXC/LXD, on the other hand, manage system containers that run full Linux distributions, offering a more complete OS environment. Docker uses a daemon-based architecture with container images optimized for application deployment, while LXC/LXD provides system-level virtualization akin to lightweight VMs. Both serve different use cases but can complement each other in complex infrastructure setups.

Why are security features like seccomp and capabilities crucial in container environments?

Containers share the host kernel, making security a priority. Seccomp filters restrict the system calls that containers can execute, reducing the risk of kernel exploits. Capabilities divide root privileges into finer-grained permissions, limiting what containers can do even if compromised. Using these features, along with rootless containers and security modules like AppArmor or SELinux, helps prevent privilege escalation and data breaches and contains potential threats, ensuring a secure containerized infrastructure.

Ready to Master Linux Administration?

Join 45,000+ students at Networkers Home. CCIE-certified trainers, 24x7 real lab access, and 100% placement support.

Explore Course