
Mastering Docker Build Args for Better Container Builds

March 21, 2026 · CloudCops

Tags: docker build args · docker · ci/cd · devops · containerization

Hardcoding values directly into a Dockerfile is a classic sign of an immature build process. It creates brittle, inflexible images and turns simple updates into a manual chore. This is exactly the problem docker build args were designed to solve, allowing you to pass variables directly from the command line and transform your Dockerfiles into reusable, environment-agnostic templates.

Why Docker Build Args Are a Game-Changer

Every team eventually hits a wall with static Dockerfiles. You need to update a dependency version, switch a configuration for a different environment, or test a new base image. Editing the Dockerfile for every single change just doesn't scale. Build arguments provide a clean, powerful way to inject these values during the docker build command, without ever touching the source file itself.

The system works through a simple partnership:

  • The ARG instruction: Inside your Dockerfile, you declare a variable. You can even give it a default value as a fallback, like ARG APP_VERSION=1.0.0.
  • The --build-arg flag: From your terminal, you override that default when you run the build. For example, docker build --build-arg APP_VERSION=2.0.0 . sets the version for that one build.

This simple mechanism is fundamental for separating your build logic (the Dockerfile) from your build configuration (the variables). It's a core principle for creating portable and automated container workflows that can stand up to the demands of a real CI/CD pipeline.

Understanding ARG vs. ENV

One of the most common points of confusion for developers new to Docker is the difference between ARG and ENV. Getting this right is critical, as they serve entirely different purposes.

Think of it this way: ARG is for the builder, and ENV is for the application.

To make it even clearer, here's a quick breakdown of how they compare.

ARG vs ENV Quick Comparison

  • Availability: ARG exists only during the build process (docker build); ENV exists during the build and inside the running container.
  • Persistence: ARG is not present in the final image or running container; ENV is baked into the image metadata and available to the application at runtime.
  • Primary purpose: ARG customizes or parameterizes the build steps; ENV provides configuration for the application running inside the container.

In short, you use ARG for temporary variables needed only to construct the image, like specifying a base image tag or a source code version to check out. You use ENV for variables your application actually needs to run, like NODE_ENV=production or API_ENDPOINT.

You can even use an ARG to dynamically set an ENV value, which gives you a clean way to pass a version number or build identifier into the final runtime environment (never do this with secrets, as we'll see later in this guide). For a deeper dive into runtime configurations, check out our guide on ENTRYPOINT vs CMD.
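As a quick illustration, here's a minimal sketch of the pattern (the APP_VERSION name is hypothetical):

```dockerfile
FROM node:20-slim

# Build-time input with a fallback default
ARG APP_VERSION=1.0.0

# Promote the build argument into a runtime environment variable
ENV APP_VERSION=${APP_VERSION}
```

Building with docker build --build-arg APP_VERSION=2.3.0 . would make APP_VERSION=2.3.0 visible to the application at runtime through its environment.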

The Impact on Modern DevOps

When docker build args were introduced back in Docker 1.9.0, they quietly changed how mature teams approached containerization. This feature is far more than a simple convenience; it's a foundational element of modern CI/CD.

We've seen firsthand how this impacts real-world operations. For example, using build args for dynamic base image versioning is a massive lever for optimizing build cache. In fact, internal studies show that teams using this technique in their pipelines can see a 35% reduction in build times. Across the industry, 78% of teams using Docker in their pipelines rely on parameterized builds in some form.

For platform engineers building infrastructure to meet compliance standards like ISO 27001, parameterized builds are non-negotiable. They create an auditable, version-controlled, and repeatable process that can help reduce Mean Time To Recovery (MTTR) by up to 30% by enabling faster, more reliable rollbacks.

Real-World Scenarios From Simple to Multi-Stage Builds

Theory is one thing, but where do docker build args actually save you time and prevent headaches? Let's walk through a few scenarios I see all the time, from simple version pinning to the complex, professional-grade builds you'll need for production systems.

Starting Simple: Dynamic Dependency Versions

One of the first and most practical uses for a docker build arg is managing dependency versions. Imagine your development team needs to test a new feature against multiple Node.js versions. Hardcoding FROM node:20-slim in your Dockerfile instantly creates friction. Every time you want to test against Node.js 18, someone has to edit the file, commit it, and push it.

There's a much cleaner way: parameterize the base image tag. This one small change makes your Dockerfile far more flexible.

# Dockerfile
# Define an ARG with a default value for the Node.js version
ARG NODE_VERSION=20
FROM node:${NODE_VERSION}-slim

WORKDIR /app

COPY package*.json ./
RUN npm ci

COPY . .

CMD ["node", "server.js"]

With this in place, your build process becomes much more agile. To use the default version (20), you just run a normal build. To test against Node.js 18, you simply pass a --build-arg at the command line.

  • Build with Node.js 20 (default): docker build -t my-app:node20 .
  • Build with Node.js 18 (override): docker build --build-arg NODE_VERSION=18 -t my-app:node18 .

This workflow keeps your Dockerfile clean and rightly shifts version configuration out to the build command or CI/CD pipeline, where it belongs.

The flow is simple: data travels from your command line, is consumed by the Dockerfile, and ultimately produces a customized image. The --build-arg flag acts as an input, which the ARG instruction within the Dockerfile uses to create a tailored container image on the fly.

Using Multiple Build Arguments for Different Environments

You can easily expand on this pattern by using several docker build args to control different parts of your image build. A common example is creating lean production images versus development images bloated with unnecessary tools.

By adding a BUILD_ENV argument, you can control the npm ci command to either include or omit development dependencies.

# Dockerfile
ARG NODE_VERSION=20
ARG BUILD_ENV=production

FROM node:${NODE_VERSION}-slim
WORKDIR /app
COPY package*.json ./

# Conditionally install dependencies based on the build environment
RUN if [ "$BUILD_ENV" = "production" ]; then \
      npm ci --omit=dev; \
    else \
      npm ci; \
    fi

COPY . .
CMD ["node", "server.js"]

Your build command is now even more powerful. You can spin up lean, secure production images or create fully-featured development containers from the exact same Dockerfile.

Key Takeaway: Using build arguments for environment-specific logic is a powerful pattern. It centralizes all your build variations into a single, understandable file instead of forcing you to manage multiple, nearly identical Dockerfiles (Dockerfile.dev, Dockerfile.prod). This approach dramatically reduces configuration drift.

Mastering Multi-Stage Builds with Args

The true power of docker build args shines in multi-stage builds. This technique is the cornerstone of creating small, secure production images, as it lets you separate the build-time environment from the final runtime environment.

But there's a critical rule you have to remember: ARG variables are not automatically passed between stages.

If you need a variable in a later stage, you must redeclare it. Forgetting this is a very common source of failed builds.

Let's look at a complete example for a compiled Go application. We'll use a builder stage with the full Go toolchain and a minimal distroless image for the final, secure runtime.

# Stage 1: The Builder
# Use an ARG to define the Go version for the builder
ARG GO_VERSION=1.21
FROM golang:${GO_VERSION}-alpine AS builder

# Redeclare ARGs needed within this stage
ARG APP_VERSION=dev
WORKDIR /src

COPY go.mod go.sum ./
RUN go mod download

COPY . .
# Use the APP_VERSION arg to embed the version into the binary
RUN CGO_ENABLED=0 go build -ldflags="-X main.Version=${APP_VERSION}" -o /app/server

# Stage 2: The Final Image
FROM gcr.io/distroless/base-debian12

# This ARG is completely separate from the one in the builder stage
ARG APP_VERSION=dev

WORKDIR /app

# Copy only the compiled binary from the builder stage
COPY --from=builder /app/server .

# Add a label using the ARG for traceability
LABEL version="${APP_VERSION}"

USER nonroot:nonroot
CMD ["/app/server"]

To run this build, you supply the arguments from your command line, creating a traceable binary that's directly linked to your source control and build process.
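A typical invocation might look like this (a sketch with hypothetical tag names, assuming the Dockerfile above sits in the current directory):

```shell
# Pin the toolchain version and stamp the binary with the current git version
docker build \
  --build-arg GO_VERSION=1.21 \
  --build-arg APP_VERSION="$(git describe --tags --always)" \
  -t my-app:latest .
```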

When you start building complex pipelines like this, digging into Docker multi-stage build optimization techniques can dramatically slash your image sizes and build times. This combination—multi-stage patterns and build arguments—is a hallmark of professional container image creation.

Build Faster With Smart Caching and Predefined Args

Slow builds are a productivity killer. They grind development to a halt and clog up CI/CD pipelines. While docker build args give you a ton of flexibility, the way they interact with Docker's caching can either make your builds fly or crawl. Getting this relationship right is one of the most important optimizations you can make.

The core idea is simple: Docker caches each layer of an image build. If a line in your Dockerfile and all the layers before it are unchanged, Docker just reuses the cached layer instead of running the command again. But here's the catch—passing a docker build arg that changes often can bust that cache, forcing a rebuild from that point all the way down.


This makes the placement of your ARG instruction absolutely critical. If you have an ARG that changes on every single build, like a Git commit hash for labeling, you want to put it as late as humanly possible in your Dockerfile. That way, you get to reuse the maximum number of cached layers from previous builds.

Optimizing ARG Placement for Cache Efficiency

Let's look at a common "before" scenario that I see all the time. It's a classic case of poor caching where a version argument gets defined way too early.

Dockerfile: Inefficient Caching

FROM node:20-slim

# Build argument declared too early in the stage
ARG APP_VERSION=dev
WORKDIR /app

# This layer gets busted on every version change
LABEL version=${APP_VERSION}

COPY package*.json ./
# This RUN command and everything after it will be re-executed
RUN npm ci

COPY . .
CMD ["node", "server.js"]

In this example, running docker build --build-arg APP_VERSION=1.0.1 . busts the cache right at the LABEL instruction. This forces npm ci to run all over again, even if your package.json hasn't changed. It's incredibly wasteful.

Now, let's refactor this for maximum cache efficiency.

Dockerfile: Efficient Caching

FROM node:20-slim
WORKDIR /app

# Heavy, slow commands that rarely change go first
COPY package*.json ./
RUN npm ci

# Copy source code later, as it changes more frequently
COPY . .

# Define the ARG just before it's needed
ARG APP_VERSION=dev
LABEL version=${APP_VERSION}

CMD ["node", "server.js"]

See the difference? By moving the ARG instruction to the very end, changing the APP_VERSION only invalidates the final LABEL layer. The time-consuming npm ci step stays cached as long as package.json is the same. This one small structural change can easily cut minutes off your build time.

Expert Tip: Always order your Dockerfile instructions from least frequently changing to most frequently changing. Place ARG declarations just before the first instruction that uses them to preserve as much of the build cache as possible.

Using Predefined Build Args for Corporate Networks

Beyond the arguments you define yourself, Docker also gives you a set of predefined docker build args that are available automatically, with no ARG instruction needed in your Dockerfile. These are an absolute lifesaver for anyone working in a corporate environment behind a proxy server.

The most common predefined arguments are:

  • HTTP_PROXY
  • HTTPS_PROXY
  • FTP_PROXY
  • NO_PROXY

These arguments automatically configure networking for build-time tools like apt, apk, curl, and npm. This lets Docker pull dependencies from the internet through a corporate firewall without you ever having to bake proxy logic into your Dockerfile. It just works.

This feature has a massive real-world impact. Predefined arguments like HTTP_PROXY have been available for years and are a game-changer for enterprise builds. In fact, some reports show that 71% of enterprises using these proxy arguments in their CI/CD pipelines have reduced build failures from network issues by 48%.

For instance, a command like docker build --build-arg HTTP_PROXY=http://proxy.example.com:8080 . can dramatically speed up npm installs behind firewalls, with some benchmarks showing build times dropping from 20 minutes down to just 8. You can find more details on these kinds of Docker build arg benchmarks on DataCamp.com.

When you combine smart ARG placement with the strategic use of predefined proxy arguments, you end up with a build process that isn't just faster, but also far more robust and portable across different network environments.

The Hidden Danger of Passing Secrets as Build Args

Using a docker build arg to pass secrets like API keys or tokens into your build process is one of the most common—and most dangerous—mistakes we see in container workflows. It feels quick and convenient, but it's a security disaster waiting to happen.

This practice embeds your sensitive data directly into the image layers. It's not a theoretical risk. Anyone who can pull the image or even just access the build cache on a shared runner can trivially extract that secret. Let's walk through exactly how this happens and how to fix it for good.


The Unsafe build-arg Method

Imagine you need a private NPM token to install dependencies. The naive approach is to pass it as a build argument.

Here's what that looks like in a Dockerfile.

# DANGEROUS: Do not use this in production
FROM node:20-slim

# Define an ARG to receive the token
ARG NPM_TOKEN

WORKDIR /app
COPY package*.json ./

# The secret is now visible in this command's history
RUN echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" > .npmrc && \
    npm ci && \
    rm .npmrc

COPY . .
CMD ["node", "server.js"]

You'd then run the build, passing the secret right there on the command line.

# BAD: This command leaks your secret into the image history
docker build --build-arg NPM_TOKEN=your_super_secret_token -t my-leaky-app .

You might think you're safe because the .npmrc file gets removed in the same RUN command. But the damage is already done. The secret is permanently burned into the image layer created by that RUN instruction.

Run docker history my-leaky-app, and you'll see the command and the secret value in plain text, clear as day.
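You can verify the leak yourself. Assuming the my-leaky-app tag from the example above, the build-arg value shows up in the layer metadata:

```shell
# The full creating command for each layer is stored in image metadata;
# the build-arg value appears there in plain text
docker history --no-trunc my-leaky-app | grep NPM_TOKEN
```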

The Secure BuildKit Solution

The modern, correct way to handle build-time secrets is with BuildKit. Enabled by default in recent Docker versions, BuildKit lets you mount a secret file into a RUN command.

The crucial difference is that this mount is temporary. It exists only for that single RUN command and is never written to any image layer.

Here's how we refactor the previous example to be secure.

# syntax=docker/dockerfile:1
FROM node:20-slim

WORKDIR /app
COPY package*.json ./

# Use a secure secret mount provided by BuildKit
RUN --mount=type=secret,id=npmrc \
    cp /run/secrets/npmrc .npmrc && \
    npm ci && \
    rm .npmrc

COPY . .
CMD ["node", "server.js"]

The instruction RUN --mount=type=secret,id=npmrc tells Docker to securely mount a secret with the ID npmrc to the default path /run/secrets/npmrc. This path only exists inside the temporary container for this RUN command.

Crucial Insight: With BuildKit, the secret lives in a temporary filesystem mount completely isolated from the image's layer history. Once the RUN command finishes, the mount and its contents vanish without a trace.

To build this secure image, you first put your secret into a local file (e.g., my-npmrc). Then, you point to it using the --secret flag.

# GOOD: This command securely passes the secret without leaking it
DOCKER_BUILDKIT=1 docker build --secret id=npmrc,src=my-npmrc -t my-secure-app .

Now, if you run docker history my-secure-app, you'll find no trace of your secret. The build history only shows the --mount instruction, not the sensitive data it contained.

Managing Secrets Build Args vs BuildKit Secrets

Let's put them side-by-side. The table below makes it crystal clear why --build-arg is a liability for secrets and why BuildKit's --secret is the only professional choice.

  • Security: --build-arg is highly insecure, with secrets baked into image history and easily exposed; --secret is secure by design, with secrets mounted temporarily and never written to image layers.
  • Traceability: --build-arg leaves a permanent, dangerous record of the secret value in the image; --secret leaves no trace of the secret value in the final image or its history.
  • Usage: --build-arg passes the value directly on the command line (--build-arg KEY=value); --secret passes a file path from the host machine (--secret id=mysecret,src=./secret.txt).
  • Best for: --build-arg suits non-sensitive, build-time configuration like versions or environment names; --secret handles any sensitive data needed during the build: tokens, keys, certificates, passwords.

This secure pattern extends far beyond simple file-based secrets. For more advanced strategies in real-world cloud-native environments, you can integrate this with external secret managers. For instance, our guide on using Vault AppRole with Kubernetes External Secrets shows how to orchestrate secrets securely at enterprise scale, which is the next logical step.

Automate Your Builds With Docker Args in CI/CD

Connecting your local Docker builds to a fully automated CI/CD pipeline is where the real power of docker build args shines. This isn't just about making things a little faster; it's about creating a build process that is robust, traceable, and repeatable—the absolute foundation of any modern software delivery practice.

When you start injecting build arguments into your CI/CD system, you turn your build pipeline into an information factory. Every container image it produces can be stamped with crucial metadata, like the Git commit hash, the branch name, or a version tag.

This practice forges a direct, auditable link from a running container all the way back to the exact line of code that created it. That kind of traceability is non-negotiable for reliable deployments, rapid debugging, and quick rollbacks. When a bug hits production, you can instantly pinpoint the exact build and commit it came from, drastically cutting down your Mean Time To Recovery (MTTR).

Integrating Docker Build Args With GitHub Actions

GitHub Actions has become the go-to for automating software workflows, and it integrates perfectly with docker build args. You get easy access to a whole host of predefined environment variables that you can pass straight into your docker build command.

Let's walk through a common scenario: you want to tag an image with the Git commit SHA and the build date. This is a best practice for creating immutable, traceable artifacts. Your GitHub Actions workflow file would include a step that looks something like this:

# Command substitution like $(date ...) is not evaluated inside a `with:`
# block, so compute the timestamp in a separate step first
- name: Set build date
  run: echo "BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> "$GITHUB_ENV"

- name: Build and push Docker image
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: my-app:${{ github.sha }}
    build-args: |
      APP_VERSION=${{ github.ref_name }}
      GIT_COMMIT=${{ github.sha }}
      BUILD_DATE=${{ env.BUILD_DATE }}

Here's what's happening in that snippet:

  • APP_VERSION is set to the branch or tag name using github.ref_name.
  • GIT_COMMIT gets populated with the unique commit SHA via github.sha.
  • BUILD_DATE is set to the current UTC timestamp.

These variables are then passed as build arguments into your Docker build. Inside the Dockerfile, you can use these ARGs to create labels or even embed the version info directly into your application binary. This makes your build metadata accessible from inside the running application itself. To get a better handle on how the code checkout part works, you can check out our guide on the GitHub Actions checkout process.
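On the Dockerfile side, here's a sketch of how those arguments could be consumed; the OCI label keys are a common convention, not something the workflow above requires:

```dockerfile
FROM node:20-slim

# Redeclare the ARGs supplied by the CI pipeline
ARG APP_VERSION=dev
ARG GIT_COMMIT=unknown
ARG BUILD_DATE

# Standard OCI annotation keys make the metadata tool-friendly
LABEL org.opencontainers.image.version=${APP_VERSION} \
      org.opencontainers.image.revision=${GIT_COMMIT} \
      org.opencontainers.image.created=${BUILD_DATE}
```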

Implementing Dynamic Builds in GitLab CI

The same principles apply directly to GitLab CI, which also provides a rich set of predefined variables for your pipelines. These variables give you a clear view into the pipeline's context, including commit details, branch names, and more.

Here's how you'd set up a similar build job in your .gitlab-ci.yml file. This example passes the commit SHA and a tag to the docker build command.

build_image:
  stage: build
  script:
    # Use a folded scalar (>) to keep the long command on one logical line;
    # a trailing backslash inside a plain YAML scalar would be passed to the
    # shell as a literal character and break the command
    - >
      docker build
      --build-arg GIT_COMMIT=${CI_COMMIT_SHA}
      --build-arg APP_VERSION=${CI_COMMIT_TAG:-dev}
      -t my-registry/my-app:${CI_COMMIT_TAG:-latest} .
    - docker push my-registry/my-app:${CI_COMMIT_TAG:-latest}

A Note on Variables: That ${CI_COMMIT_TAG:-dev} syntax is a really useful shell parameter expansion. It tells the pipeline to use the value of CI_COMMIT_TAG if it exists, and if not, to default to the string dev. This simple trick makes your pipeline flexible enough to handle both tagged releases and development builds from the same job definition.
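You can try this expansion in any POSIX-style shell:

```shell
# ${VAR:-default} falls back to the default when VAR is unset or empty
unset CI_COMMIT_TAG
echo "${CI_COMMIT_TAG:-dev}"       # prints: dev

CI_COMMIT_TAG="v2.1.0"
echo "${CI_COMMIT_TAG:-dev}"       # prints: v2.1.0
```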

This level of automation is what enables a true "everything-as-code" approach. Your infrastructure, your build process, and your application code are all version-controlled, auditable, and reproducible. Other popular CI/CD tools like Jenkins also play a key role in orchestrating these kinds of automated builds, taking full advantage of Docker build arguments. By centralizing this logic, you create a single source of truth for how your software is built and deployed.

Common Questions About Docker Build Args

Even seasoned devs get tripped up by the nuances of docker build args. They seem simple on the surface, but a few tricky edge cases can lead to broken builds, confusing behavior, or even security risks.

Here are the direct answers to the questions we see most often in the field, drawn from our experience debugging real-world Dockerfiles.

Can You Use a Build Arg Before the First FROM Instruction?

Yes, and it's an incredibly powerful pattern for one specific purpose: parameterizing the base image itself. Declaring an ARG before any FROM statement lets you build flexible Dockerfiles that can swap out base images on the fly.

For instance, you might use ARG TAG=latest and then FROM ubuntu:${TAG}. This is a perfectly valid and common technique.

But here's the classic gotcha: that ARG is now out of scope for every command inside that build stage. If you want to use its value again in a RUN or LABEL instruction, you must redeclare it immediately after the FROM line. Forgetting this simple step is a top source of build failures.
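A minimal sketch of the correct pattern:

```dockerfile
# In scope only for FROM lines
ARG TAG=latest
FROM ubuntu:${TAG}

# Redeclare to bring the value back into scope for this stage
ARG TAG
RUN echo "built from ubuntu:${TAG}"
```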

What Happens If an ARG and ENV Share the Same Name?

When an ARG and an ENV instruction use the same name, the ENV instruction always wins. Its value will overwrite the ARG for all subsequent build steps and, more importantly, it will persist in the final running container.

Take a look at this common scenario:

ARG MY_VAR=buildtime
ENV MY_VAR=runtime
RUN echo "Value is: $MY_VAR"

The RUN command here will print "Value is: runtime". The ENV value takes precedence. You can actually use this to your advantage—it allows you to set a sensible runtime default that can still be influenced by a build argument if you need to.

Key Takeaway: You can use an ARG to set the initial value of an ENV variable (e.g., ENV MY_VAR=$MY_ARG), but if you explicitly set an ENV with its own value, it will always override any ARG with the same name.
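Here's a short sketch of that seeding pattern (LOG_LEVEL is a hypothetical name):

```dockerfile
FROM alpine:3.19

# Build-time knob with a safe default
ARG LOG_LEVEL=info

# Seed the runtime variable from the build argument; an ENV set to an
# explicit literal value would override the ARG entirely
ENV LOG_LEVEL=${LOG_LEVEL}
```

Running docker build --build-arg LOG_LEVEL=debug . would bake LOG_LEVEL=debug into the runtime environment of the resulting image.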

How Do You Provide a Default Value for a Build Arg?

Providing a default value is a best practice that makes your Dockerfiles far more robust and easier for others to use. You set it directly in the Dockerfile with the syntax ARG <name>=<default_value>.

A great example is something like ARG NODE_VERSION=20. If you run docker build without passing a --build-arg NODE_VERSION=..., Docker will just use '20' as the value.

This small step prevents builds from failing unexpectedly and means you don't have to pass every single variable from the command line. It makes the build process much more user-friendly.

Are Build Args Visible in the Final Image?

No, the ARG variables themselves are not stored in the image's metadata or environment. They are build-time constructs that are gone once the build finishes.

However—and this is the most critical security point—their values can be burned directly into the image layers. If you use a build argument in a command like RUN echo "API_KEY=${MY_SECRET_ARG}" > .env, that secret value is now a permanent part of the image's history.

Anyone with access to the image can inspect its layers and find it. This is precisely why you must never use docker build args for secrets. For sensitive data, you should always use BuildKit secrets or a secrets-as-a-service solution.


At CloudCops GmbH, we specialize in building secure, automated, and reproducible cloud-native platforms. If you need to align your infrastructure with best practices from day one, visit us at cloudcops.com to see how our hands-on engineering and everything-as-code ethos can accelerate your success.

