Docker Container Size Calculator
Estimate Docker image and container sizes from base image, layers, and dependencies. Enter values for instant results with step-by-step formulas.
Formula
Image Size = Base Image + (Layers x Avg Layer Size) + Dependencies + App Code
The total uncompressed image size sums all layer sizes starting from the base image. Compressed registry size is estimated here at 45% of uncompressed (actual ratios vary with layer content). Container disk includes the shared image layers plus a per-container writable layer and runtime overhead. Multiple containers share read-only image layers through the copy-on-write filesystem.
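The formula above can be sketched as a small Python helper. The 45% compression ratio, 50 MB writable layer, and 10 MB + 5% runtime overhead are this calculator's assumptions, not Docker constants:

```python
def image_size_mb(base_mb, layers, avg_layer_mb, deps_mb, app_mb):
    """Uncompressed image size: base image plus all added layers."""
    return base_mb + layers * avg_layer_mb + deps_mb + app_mb

def compressed_mb(image_mb, ratio=0.45):
    """Estimated registry (compressed) size; 45% is a rough default."""
    return image_mb * ratio

def total_disk_mb(image_mb, containers, writable_mb=50,
                  fixed_overhead_mb=10, overhead_pct=0.05):
    """Shared image layers stored once, plus a writable layer and
    runtime overhead for each running container."""
    per_container = writable_mb + fixed_overhead_mb + image_mb * overhead_pct
    return image_mb + containers * per_container

img = image_size_mb(150, 8, 25, 200, 50)  # Node.js example below: 600 MB
print(img, compressed_mb(img), total_disk_mb(img, 10))
```

Plugging in the Node.js figures from Example 1 reproduces the 600 MB image, 270 MB compressed size, and 1,500 MB total disk for 10 containers.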
Worked Examples
Example 1: Node.js Production Application
Problem: A Node.js app uses node:18-slim base (150 MB), 8 layers averaging 25 MB, 200 MB node_modules, and 50 MB app code. Running 10 containers with 2 registry replicas.
Solution:
Total layer size = 8 x 25 = 200 MB
Uncompressed image = 150 + 200 + 200 + 50 = 600 MB
Compressed (registry) = 600 x 0.45 = 270 MB
Runtime overhead per container = 10 + (600 x 0.05) = 40 MB
Per-container writable layer = 50 MB
Shared image layers = 600 MB
Total disk (10 containers) = 600 + 10 x (50 + 40) = 1,500 MB = 1.46 GB
Registry storage = 270 x 2 = 540 MB = 0.53 GB
Result: 600 MB image | 270 MB compressed | 1.46 GB total disk for 10 containers | 0.53 GB registry storage
Example 2: Go Microservice with Multi-Stage Build
Problem: Before optimization: golang:1.21 base (800 MB), 5 layers at 30 MB, 150 MB deps, 20 MB app. After multi-stage: scratch base (0 MB), 1 layer, 15 MB static binary. Running 50 containers.
Solution:
Before optimization:
Image = 800 + 150 + 150 + 20 = 1,120 MB
Disk (50 containers) = 1,120 + 50 x 60 = 4,120 MB = 4.02 GB (assumes 60 MB per container for writable layer plus overhead)

After multi-stage:
Image = 0 + 15 = 15 MB
Disk (50 containers) = 15 + 50 x 11 = 565 MB = 0.55 GB (assumes 11 MB per container for writable layer plus overhead)
Savings = 1,120 - 15 = 1,105 MB (98.7% reduction)
Result: 1,120 MB reduced to 15 MB (98.7% savings) | 4.02 GB to 0.55 GB total disk | Multi-stage builds are transformative for compiled languages
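The before/after comparison from Example 2 can be checked with a few lines of Python. The 60 MB and 11 MB figures are the example's assumed per-container cost (writable layer plus runtime overhead):

```python
def total_disk_mb(image_mb, containers, per_container_mb):
    # Shared read-only layers stored once, plus each container's
    # writable layer and runtime overhead.
    return image_mb + containers * per_container_mb

before = total_disk_mb(1120, 50, 60)  # golang:1.21 single-stage build
after = total_disk_mb(15, 50, 11)     # scratch-based multi-stage build
savings_pct = (1120 - 15) / 1120 * 100
print(before, after, round(savings_pct, 1))  # 4120 565 98.7
```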
Frequently Asked Questions
What determines the final size of a Docker image?
A Docker image's size is determined by the cumulative size of all its layers, starting from the base image. Each instruction in a Dockerfile (RUN, COPY, ADD) creates a new layer that adds to the total image size. The base image is typically the largest component, ranging from 5 MB for Alpine Linux to over 900 MB for full Ubuntu distributions. Dependencies installed via package managers (apt-get, npm, pip) often add hundreds of megabytes. Importantly, deleting files in a later layer does not reduce the image size because earlier layers are immutable. This is why multi-stage builds are essential for optimization, as they allow you to copy only the final artifacts into a clean final image without carrying build-time dependencies.
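The additive nature of layers can be shown with a toy model: a layer that deletes files adds a whiteout marker but never subtracts from the total. The instructions and sizes below are hypothetical:

```python
# Each Dockerfile instruction adds a layer; sizes only accumulate.
layers = [
    ("FROM ubuntu", 78),
    ("RUN apt-get install -y build-essential", 350),
    ("COPY . /app", 50),
    ("RUN rm -rf /var/lib/apt/lists/*", 0),  # deletion layer: frees nothing
]
total = sum(size for _, size in layers)
print(total)  # 478 MB -- the 350 MB install layer is still shipped
```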
How do Docker image layers work and why do they matter for storage?
Docker images use a layered filesystem where each Dockerfile instruction creates an immutable layer stacked on top of previous ones. When you pull an image, Docker downloads each layer independently and caches them locally. This layering system enables efficient storage because containers sharing the same base image reuse those cached layers rather than duplicating them. For example, if you run 10 containers from the same image, the shared read-only layers exist only once on disk. Each container adds only a thin writable layer for runtime changes. However, this also means that installing packages in one RUN command and cleaning them up in a separate RUN command still retains the full package data in the installation layer. Combining commands with && in a single RUN instruction reduces layer count and total size.
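The storage benefit of shared layers can be sketched as follows, comparing copy-on-write sharing against a hypothetical world where every container duplicated the full image. The writable-layer size is an assumed parameter:

```python
def disk_with_sharing(image_mb, containers, writable_mb):
    # Read-only image layers stored once via copy-on-write.
    return image_mb + containers * writable_mb

def disk_without_sharing(image_mb, containers, writable_mb):
    # Hypothetical: every container carrying its own full copy.
    return containers * (image_mb + writable_mb)

shared = disk_with_sharing(600, 10, 50)         # 1,100 MB
duplicated = disk_without_sharing(600, 10, 50)  # 6,500 MB
print(duplicated - shared)  # 5,400 MB saved by layer sharing
```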
What is the difference between Docker image size and container size?
Docker image size refers to the total size of all read-only layers that define the filesystem template. Container size includes the image layers plus a thin writable layer (called the container layer or upper dir) where all runtime filesystem changes are stored. The writable layer is copy-on-write, meaning files from the image are only copied to the writable layer when they are modified. A container also consumes memory for its running processes, network stacks, and mount points, but this is separate from disk size. When examining disk usage with docker system df, you will see the distinction between image storage (shared across containers) and container-specific writable layer storage. The writable layer is lost when the container is removed unless data is persisted using volumes or bind mounts.
How can I reduce Docker image size effectively?
The most impactful optimization techniques are multi-stage builds, base image selection, and layer management. Multi-stage builds let you use a full development environment for building but copy only the compiled output into a minimal runtime image, often saving 50-80 percent of the final size. Switching from ubuntu or debian base images to alpine (5 MB) or distroless images (2-20 MB) can save hundreds of megabytes. Combine RUN commands using && to reduce layer count and remove temporary files in the same layer where they are created. Use .dockerignore to prevent unnecessary files from being added to the build context. Pin specific package versions and avoid installing recommended or suggested packages by using apt-get install --no-install-recommends. For compiled languages like Go, consider building fully static binaries and using scratch as the base image for the absolute minimum size.
How do multi-stage builds reduce image size?
Multi-stage builds use multiple FROM instructions in a single Dockerfile, where each FROM starts a new build stage with its own base image. The key insight is that only the final stage becomes the output image. Earlier stages can install heavy build tools, compilers, and development dependencies needed to build your application, but these tools are not included in the final image. You selectively copy only the built artifacts (compiled binaries, bundled assets, or transpiled code) from earlier stages using COPY --from=stagename. For a Node.js application, a build stage might include node_modules with dev dependencies totaling 800 MB, but the final stage copies only the bundled production output of 50 MB. For Go applications, the savings are even more dramatic: a build stage with the full Go toolchain (over 1 GB) produces a single binary that runs in a scratch image under 20 MB.
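Using the Node.js figures from this answer, the stage-size arithmetic looks like this. The numbers are illustrative, and the final image would also include its runtime base layers:

```python
build_stage_mb = 800  # node_modules including dev dependencies
final_stage_mb = 50   # bundled production output copied via COPY --from
# Only the final stage ships; earlier stages are discarded.
reduction_pct = (1 - final_stage_mb / build_stage_mb) * 100
print(round(reduction_pct, 1))  # 93.8
```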
How should I manage Docker image storage in CI/CD pipelines?
CI/CD pipelines can consume enormous amounts of Docker storage because each build creates new image layers, and build caches accumulate over time. Implement regular cleanup policies using docker system prune to remove unused images, stopped containers, and build cache. Configure your CI system to only retain the most recent N image versions in the local cache, deleting older ones automatically. Use build cache efficiently by ordering Dockerfile instructions from least to most frequently changed, so base image and dependency installation layers remain cached while application code changes trigger only the final layers. Consider using remote build caches (BuildKit cache export) shared across CI runners to avoid redundant builds. Monitor CI runner disk usage and set alerts at 70 percent capacity to prevent build failures from disk exhaustion.
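A retention policy's storage footprint can be estimated with a sketch like this. The layer-reuse fraction is an assumption; real reuse depends on how your Dockerfile orders its instructions:

```python
def ci_cache_mb(image_mb, retained_versions, changed_fraction=0.1):
    """Estimate local cache size when retaining N image versions.

    Assumes each new version shares (1 - changed_fraction) of its
    layers with the previous one through the build cache, so only
    the changed fraction adds new storage per version.
    """
    if retained_versions == 0:
        return 0.0
    return image_mb + (retained_versions - 1) * image_mb * changed_fraction

# Retaining 10 versions of a 600 MB image where ~10% changes per build:
print(ci_cache_mb(600, 10))  # 600 + 9 * 60 = 1140.0 MB
```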