Docker for Rocky Linux 10
- Update your system and install necessary utilities:
sudo dnf update -y
sudo dnf install dnf-plugins-core -y
- Add the official Docker repository:
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
- Install Docker Engine, CLI, containerd, and Compose:
sudo dnf install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
- Start and enable the Docker service:
sudo systemctl enable --now docker
- Verify the installation:
sudo docker run hello-world
- Optional Post-Installation Step: Manage Docker as a non-root user. By default, you must use sudo to run Docker commands. To run them without sudo, add your user to the docker group that was created during installation:
sudo usermod -aG docker $USER
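The group change isn't picked up by your current shell session. One common way to apply it without logging out is to start a shell with the new group via newgrp (a quick sketch; logging out and back in works just as well):
# Pick up the new group membership in the current terminal (or log out and back in)
newgrp docker

# Verify that Docker commands now work without sudo
docker run hello-world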
https://docs.docker.com/get-started/introduction/whats-next/
https://youtu.be/W1kWqFkiu7k?si=mAjpfz2RqVNd_1ER
Imagine you're developing a killer web app that has three main components - a React frontend, a Python API, and a PostgreSQL database. If you wanted to work on this project, you'd have to install Node, Python, and PostgreSQL.
How do you make sure you have the same versions as the other developers on your team? Or your CI/CD system? Or what's used in production?
How do you ensure the version of Python (or Node or the database) your app needs isn't affected by what's already on your machine? How do you manage potential conflicts?
Enter containers!
What is a container? Simply put, containers are isolated processes for each of your app's components. Each component - the frontend React app, the Python API engine, and the database - runs in its own isolated environment, completely isolated from everything else on your machine.
Here's what makes them awesome. Containers are:
- Self-contained. Each container has everything it needs to function with no reliance on any pre-installed dependencies on the host machine.
- Isolated. Since containers run in isolation, they have minimal influence on the host and other containers, increasing the security of your applications.
- Independent. Each container is independently managed. Deleting one container won't affect any others.
- Portable. Containers can run anywhere! The container that runs on your development machine will work the same way in a data center or anywhere in the cloud!
Without getting too deep, a VM is an entire operating system with its own kernel, hardware drivers, programs, and applications. Spinning up a VM only to isolate a single application is a lot of overhead.
A container is simply an isolated process with all of the files it needs to run. If you run multiple containers, they all share the same kernel, allowing you to run more applications on less infrastructure.
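As a small illustration (using the public nginx image as an arbitrary example), each container below runs as its own isolated process, yet all of them share the host's kernel:
# Start two independent containers from the same image
docker run -d --name web1 nginx
docker run -d --name web2 nginx

# Each shows up as a separate, isolated container sharing the host kernel
docker ps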
Using VMs and containers together
Quite often, you will see containers and VMs used together. As an example, in a cloud environment, the provisioned machines are typically VMs. However, instead of provisioning one machine to run one application, a VM with a container runtime can run multiple containerized applications, increasing resource utilization and reducing costs.
In this hands-on, you will see how to run a Docker container using the Docker Desktop GUI.
Using the GUI
Use the following instructions to run a container.
- Open Docker Desktop and select the Search field on the top navigation bar.
- Specify welcome-to-docker in the search input and then select the Pull button.
- Once the image is successfully pulled, select the Run button.
- Expand the Optional settings.
- In the Container name, specify welcome-to-docker.
- In the Host port, specify 8080.
- Select Run to start your container.
Congratulations! You just ran your first container! 🎉
You can view all of your containers by going to the Containers view of the Docker Desktop Dashboard.
This container runs a web server that displays a simple website. When working with more complex projects, you'll run different parts in different containers. For example, you might run a different container for the frontend, backend, and database.
When you launched the container, you exposed one of the container's ports onto your machine. Think of this as creating configuration to let you connect through the isolated environment of the container.
For this container, the frontend is accessible on port 8080. To open the website, select the link in the Port(s) column of your container or visit http://localhost:8080 in your browser.
Docker Desktop lets you explore and interact with different aspects of your container. Try it out yourself.
- Go to the Containers view in the Docker Desktop Dashboard.
- Select your container.
- Select the Files tab to explore your container's isolated file system.
The docker/welcome-to-docker container continues to run until you stop it.
- Go to the Containers view in the Docker Desktop Dashboard.
- Locate the container you'd like to stop.
- Select the Stop action in the Actions column.
Using the CLI
Follow the instructions to run a container using the CLI:
- Open your CLI terminal and start a container by using the docker run command:
$ docker run -d -p 8080:80 docker/welcome-to-docker
The output from this command is the full container ID.
Congratulations! You just fired up your first container! 🎉
You can verify if the container is up and running by using the docker ps command:
docker ps
You will see output like the following:
CONTAINER ID   IMAGE                      COMMAND                  CREATED          STATUS          PORTS                  NAMES
a1f7a4bb3a27   docker/welcome-to-docker   "/docker-entrypoint.…"   11 seconds ago   Up 11 seconds   0.0.0.0:8080->80/tcp   gracious_keldysh

This container runs a web server that displays a simple website. When working with more complex projects, you'll run different parts in different containers. For example, a different container for the frontend, backend, and database.
Tip
The docker ps command will show you only running containers. To view stopped containers, add the -a flag to list all containers: docker ps -a
When you launched the container, you exposed one of the container's ports onto your machine. Think of this as creating configuration to let you connect through the isolated environment of the container.
For this container, the frontend is accessible on port 8080. To open the website, select the link in the Port(s) column of your container or visit http://localhost:8080 in your browser.
The docker/welcome-to-docker container continues to run until you stop it. You can stop a container using the docker stop command.
- Run docker ps to get the ID of the container.
- Provide the container ID or name to the docker stop command:
docker stop <the-container-id>
Tip
When referencing containers by ID, you don't need to provide the full ID. You only need to provide enough of the ID to make it unique. As an example, the previous container could be stopped by running the following command:
docker stop a1f

The following links provide additional guidance on containers:
https://youtu.be/NyvT9REqLe4?si=Wwnad5zBDUHkaoYB
Seeing as a container is an isolated process, where does it get its files and configuration? How do you share those environments?
That's where container images come in. A container image is a standardized package that includes all of the files, binaries, libraries, and configurations to run a container.
For a PostgreSQL image, that image will package the database binaries, config files, and other dependencies. For a Python web app, it'll include the Python runtime, your app code, and all of its dependencies.
There are two important principles of images:
- Images are immutable. Once an image is created, it can't be modified. You can only make a new image or add changes on top of it.
- Container images are composed of layers. Each layer represents a set of file system changes that add, remove, or modify files.
These two principles let you extend or add to existing images. For example, if you are building a Python app, you can start from the Python image and add additional layers to install your app's dependencies and add your code. This lets you focus on your app, rather than Python itself.
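As a rough sketch of that idea (the requirements.txt file and src directory here are placeholders for a hypothetical app), each instruction below adds a new layer on top of the official Python base image:
# Reuse the existing Python base image layers
FROM python:3.13

# New layer: the dependency list
COPY requirements.txt .

# New layer: the installed dependencies
RUN pip install -r requirements.txt

# New layer: your application code
COPY src ./src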
Docker Hub is the default global marketplace for storing and distributing images. It has over 100,000 images created by developers that you can run locally. You can search for Docker Hub images and run them directly from Docker Desktop.
Docker Hub provides a variety of Docker-supported and endorsed images known as Docker Trusted Content. These provide fully managed services or great starters for your own images. These include:
- Docker Official Images - a curated set of Docker repositories that serve as the starting point for the majority of users and are some of the most secure images on Docker Hub
- Docker Hardened Images - minimal, secure, production-ready images with near-zero CVEs, designed to reduce attack surface and simplify compliance. Free and open source under Apache 2.0
- Docker Verified Publishers - high-quality images from commercial publishers verified by Docker
- Docker-Sponsored Open Source - images published and maintained by open-source projects sponsored by Docker through Docker's open source program
For example, Redis and Memcached are a few popular ready-to-go Docker Official Images. You can download these images and have these services up and running in a matter of seconds. There are also base images, like the Node.js Docker image, that you can use as a starting point and add your own files and configurations. For production workloads requiring enhanced security, Docker Hardened Images offer minimal variants of popular images like Node.js, Python, and Go.
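For instance, with the Redis Docker Official Image a working Redis server is one command away (the container name and published port below are arbitrary choices):
# Pull and start a Redis server from the Docker Official Image
docker run -d --name my-redis -p 6379:6379 redis

# Confirm it's up
docker ps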
Using the GUI
In this hands-on, you will learn how to search and pull a container image using the Docker Desktop GUI.
- Open the Docker Desktop Dashboard and select the Images view in the left-hand navigation menu.
- Select the Search images to run button. If you don't see it, select the global search bar at the top of the screen.
- In the Search field, enter "welcome-to-docker". Once the search has completed, select the docker/welcome-to-docker image.
- Select Pull to download the image.
Once you have an image downloaded, you can learn quite a few details about the image either through the GUI or the CLI.
- In the Docker Desktop Dashboard, select the Images view.
- Select the docker/welcome-to-docker image to open details about the image.
- The image details page presents you with information regarding the layers of the image, the packages and libraries installed in the image, and any discovered vulnerabilities.
Using the CLI
Follow the instructions to search for and pull a Docker image using the CLI, and to view its layers.
- Open a terminal and search for images using the docker search command:
docker search docker/welcome-to-docker
You will see output like the following:
NAME                       DESCRIPTION                                      STARS     OFFICIAL
docker/welcome-to-docker   Docker image for new users getting started w…   20
This output shows you information about relevant images available on Docker Hub.
- Pull the image using the docker pull command:
docker pull docker/welcome-to-docker
You will see output like the following:
Using default tag: latest
latest: Pulling from docker/welcome-to-docker
579b34f0a95b: Download complete
d11a451e6399: Download complete
1c2214f9937c: Download complete
b42a2f288f4d: Download complete
54b19e12c655: Download complete
1fb28e078240: Download complete
94be7e780731: Download complete
89578ce72c35: Download complete
Digest: sha256:eedaff45e3c78538087bdd9dc7afafac7e110061bbdd836af4104b10f10ab693
Status: Downloaded newer image for docker/welcome-to-docker:latest
docker.io/docker/welcome-to-docker:latest
Each line represents a different downloaded layer of the image. Remember that each layer is a set of filesystem changes that provides part of the image's functionality.
- List your downloaded images using the docker image ls command:
docker image ls
You will see output like the following:
REPOSITORY                 TAG       IMAGE ID       CREATED        SIZE
docker/welcome-to-docker   latest    eedaff45e3c7   4 months ago   29.7MB
The command shows a list of Docker images currently available on your system. The docker/welcome-to-docker image has a total size of approximately 29.7MB.
Image size
The image size represented here reflects the uncompressed size of the image, not the download size of the layers.
- List the image's layers using the docker image history command:
docker image history docker/welcome-to-docker
You will see output like the following:
IMAGE          CREATED        CREATED BY                                      SIZE      COMMENT
648f93a1ba7d   4 months ago   COPY /app/build /usr/share/nginx/html # buil…   1.6MB     buildkit.dockerfile.v0
<missing>      5 months ago   /bin/sh -c #(nop)  CMD ["nginx" "-g" "daemon…   0B
<missing>      5 months ago   /bin/sh -c #(nop)  STOPSIGNAL SIGQUIT           0B
<missing>      5 months ago   /bin/sh -c #(nop)  EXPOSE 80                    0B
<missing>      5 months ago   /bin/sh -c #(nop)  ENTRYPOINT ["/docker-entr…   0B
<missing>      5 months ago   /bin/sh -c #(nop) COPY file:9e3b2b63db9f8fc7…   4.62kB
<missing>      5 months ago   /bin/sh -c #(nop) COPY file:57846632accc8975…   3.02kB
<missing>      5 months ago   /bin/sh -c #(nop) COPY file:3b1b9915b7dd898a…   298B
<missing>      5 months ago   /bin/sh -c #(nop) COPY file:caec368f5a54f70a…   2.12kB
<missing>      5 months ago   /bin/sh -c #(nop) COPY file:01e75c6dd0ce317d…   1.62kB
<missing>      5 months ago   /bin/sh -c set -x && addgroup -g 101 -S …       9.7MB
<missing>      5 months ago   /bin/sh -c #(nop)  ENV PKG_RELEASE=1            0B
<missing>      5 months ago   /bin/sh -c #(nop)  ENV NGINX_VERSION=1.25.3     0B
<missing>      5 months ago   /bin/sh -c #(nop)  LABEL maintainer=NGINX Do…   0B
<missing>      5 months ago   /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B
<missing>      5 months ago   /bin/sh -c #(nop) ADD file:ff3112828967e8004…   7.66MB
This output shows you all of the layers, their sizes, and the command used to create the layer.
Viewing the full command
If you add the --no-trunc flag to the command, you will see the full command. Note that, since the output is in a table-like format, longer commands will cause the output to be very difficult to navigate.
In this walkthrough, you searched for and pulled a Docker image. In addition to pulling the image, you also learned about the layers of a Docker image.
The following resources will help you learn more about exploring, finding, and building images:
https://youtu.be/2WDl10Wv5rs?si=Vkaddntu7-Tq5Kcn
Now that you know what a container image is and how it works, you might wonder - where do you store these images?
Well, you can store your container images on your computer system, but what if you want to share them with your friends or use them on another machine? That's where the image registry comes in.
An image registry is a centralized location for storing and sharing your container images. It can be either public or private. Docker Hub is a public registry that anyone can use and is the default registry.
While Docker Hub is a popular option, there are many other container registries available today, including Amazon Elastic Container Registry (ECR), Azure Container Registry (ACR), and Google Container Registry (GCR). You can even run your private registry on your local system or inside your organization, for example with Harbor, JFrog Artifactory, or GitLab Container Registry.
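As an illustration of that last point, the open source registry image lets you stand up a private registry on your own machine (the port and names below are arbitrary):
# Run a local private registry on port 5000
docker run -d -p 5000:5000 --name my-registry registry:2

# Tag an image for the local registry and push it there
docker tag docker/welcome-to-docker localhost:5000/welcome-to-docker
docker push localhost:5000/welcome-to-docker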
While you're working with registries, you might hear the terms registry and repository as if they're interchangeable. Even though they're related, they're not quite the same thing.
A registry is a centralized location that stores and manages container images, whereas a repository is a collection of related container images within a registry. Think of it as a folder where you organize your images based on projects. Each repository contains one or more container images.
The following diagram shows the relationship between a registry, repositories, and images.
+---------------------------------------+
| Registry |
|---------------------------------------|
| |
| +-----------------------------+ |
| | Repository A | |
| |-----------------------------| |
| | Image: project-a:v1.0 | |
| | Image: project-a:v2.0 | |
| +-----------------------------+ |
| |
| +-----------------------------+ |
| | Repository B | |
| |-----------------------------| |
| | Image: project-b:v1.0 | |
| | Image: project-b:v1.1 | |
| | Image: project-b:v2.0 | |
| +-----------------------------+ |
| |
+---------------------------------------+
Note
You can create one private repository and unlimited public repositories using the free version of Docker Hub. For more information, visit the Docker Hub subscription page.
In this hands-on, you will learn how to build and push a Docker image to the Docker Hub repository.
- If you haven't created one yet, head over to the Docker Hub page to sign up for a new Docker account. Be sure to finish the verification steps sent to your email. You can use your Google or GitHub account to authenticate.
- Sign in to Docker Hub.
- Select the Create repository button in the top-right corner.
- Select your namespace (most likely your username) and enter docker-quickstart as the repository name.
- Set the visibility to Public.
- Select the Create button to create the repository.
That's it. You've successfully created your first repository. 🎉
This repository is empty right now. You'll now fix this by pushing an image to it.
- Download and install Docker Desktop, if not already installed.
- In the Docker Desktop GUI, select the Sign in button in the top-right corner
In order to create an image, you first need a project. To get you started quickly, you'll use a sample Node.js project found at github.com/dockersamples/helloworld-demo-node. This repository contains a pre-built Dockerfile necessary for building a Docker image.
Don't worry about the specifics of the Dockerfile, as you'll learn about that in later sections.
- Clone the GitHub repository using the following command:
git clone https://github.com/dockersamples/helloworld-demo-node
- Navigate into the newly created directory:
cd helloworld-demo-node
- Run the following command to build a Docker image, swapping out YOUR_DOCKER_USERNAME with your username:
docker build -t <YOUR_DOCKER_USERNAME>/docker-quickstart .
Note: Make sure you include the dot (.) at the end of the docker build command. This tells Docker where to find the Dockerfile.
- Run the following command to list the newly created Docker image:
docker images
You will see output like the following:
REPOSITORY                                 TAG       IMAGE ID       CREATED         SIZE
<YOUR_DOCKER_USERNAME>/docker-quickstart   latest    476de364f70e   2 minutes ago   170MB
- Start a container to test the image by running the following command (swap out the username with your own username):
docker run -d -p 8080:8080 <YOUR_DOCKER_USERNAME>/docker-quickstart
You can verify if the container is working by visiting http://localhost:8080 with your browser.
- Use the docker tag command to tag the Docker image. Docker tags allow you to label and version your images.
docker tag <YOUR_DOCKER_USERNAME>/docker-quickstart <YOUR_DOCKER_USERNAME>/docker-quickstart:1.0
- Finally, it's time to push the newly built image to your Docker Hub repository by using the docker push command:
docker push <YOUR_DOCKER_USERNAME>/docker-quickstart:1.0
- Open Docker Hub and navigate to your repository. Navigate to the Tags section and see your newly pushed image.
In this walkthrough, you signed up for a Docker account, created your first Docker Hub repository, and built, tagged, and pushed a container image to your Docker Hub repository.
If you've been following the guides so far, you've been working with single container applications. But, now you're wanting to do something more complicated - run databases, message queues, caches, or a variety of other services. Do you install everything in a single container? Run multiple containers? If you run multiple, how do you connect them all together?
One best practice for containers is that each container should do one thing and do it well. While there are exceptions to this rule, avoid the tendency to have one container do multiple things.
You can use multiple docker run commands to start multiple containers. But, you'll soon realize you'll need to manage networks, all of the flags needed to connect containers to those networks, and more. And when you're done, cleanup is a little more complicated.
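For a sense of what that manual approach looks like, wiring up just two containers by hand might involve something like the following (the network name is arbitrary, and my-todo-app is a hypothetical application image):
# Create a network, then attach each container to it with the right flags
docker network create todo-net
docker run -d --network todo-net --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql:8
docker run -d --network todo-net --name app -p 3000:3000 my-todo-app

# Cleanup is just as manual
docker rm -f app mysql
docker network rm todo-net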
With Docker Compose, you can define all of your containers and their configurations in a single YAML file. If you include this file in your code repository, anyone that clones your repository can get up and running with a single command.
It's important to understand that Compose is a declarative tool - you simply define it and go. You don't always need to recreate everything from scratch. If you make a change, run docker compose up again and Compose will reconcile the changes in your file and apply them intelligently.
Dockerfile versus Compose file
A Dockerfile provides instructions to build a container image while a Compose file defines your running containers. Quite often, a Compose file references a Dockerfile to build an image to use for a particular service.
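A minimal sketch of what such a Compose file can look like (the service names, image tag, ports, and password here are illustrative, not the exact file used in the hands-on below):
services:
  app:
    build: .              # build the app image from a Dockerfile in this directory
    ports:
      - "3000:3000"
    depends_on:
      - mysql
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - todo-mysql-data:/var/lib/mysql

volumes:
  todo-mysql-data: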
In this hands-on, you will learn how to use Docker Compose to run a multi-container application. You'll use a simple to-do list app built with Node.js, with MySQL as the database server.
Follow the instructions to run the to-do list app on your system.
- Download and install Docker Desktop.
- Open a terminal and clone this sample application:
git clone https://github.com/dockersamples/todo-list-app
- Navigate into the todo-list-app directory:
cd todo-list-app
Inside this directory, you'll find a file named compose.yaml. This YAML file is where all the magic happens! It defines all the services that make up your application, along with their configurations. Each service specifies its image, ports, volumes, networks, and any other settings necessary for its functionality. Take some time to explore the YAML file and familiarize yourself with its structure.
- Use the docker compose up command to start the application:
docker compose up -d --build
When you run this command, you should see an output like this:
[+] Running 5/5
 ✔ app 3 layers [⣿⣿⣿]      0B/0B      Pulled      7.1s
   ✔ e6f4e57cc59e Download complete               0.9s
   ✔ df998480d81d Download complete               1.0s
   ✔ 31e174fedd23 Download complete               2.5s
   ✔ 43c47a581c29 Download complete               2.0s
[+] Running 4/4
 ⠸ Network todo-list-app_default           Created  0.3s
 ⠸ Volume "todo-list-app_todo-mysql-data"  Created  0.3s
 ✔ Container todo-list-app-app-1           Started  0.3s
 ✔ Container todo-list-app-mysql-1         Started  0.3s
A lot happened here! A couple of things to call out:
- Two container images were downloaded from Docker Hub - node and MySQL
- A network was created for your application
- A volume was created to persist the database files between container restarts
- Two containers were started with all of their necessary config
If this feels overwhelming, don't worry! You'll get there!
- With everything now up and running, you can open http://localhost:3000 in your browser to see the site. Note that the application may take 10-15 seconds to fully start. If the page doesn't load right away, wait a moment and refresh. Feel free to add items to the list, check them off, and remove them.
- If you look at the Docker Desktop GUI, you can see the containers and dive deeper into their configuration.
Since this application was started using Docker Compose, it's easy to tear it all down when you're done.
- In the CLI, use the docker compose down command to remove everything:
docker compose down
You'll see output similar to the following:
[+] Running 3/3
 ✔ Container todo-list-app-mysql-1  Removed  2.9s
 ✔ Container todo-list-app-app-1    Removed  0.1s
 ✔ Network todo-list-app_default    Removed  0.1s
Volume persistence
By default, volumes aren't automatically removed when you tear down a Compose stack. The idea is that you might want the data back if you start the stack again.
If you do want to remove the volumes, add the --volumes flag when running the docker compose down command:
docker compose down --volumes

[+] Running 1/0
 ✔ Volume todo-list-app_todo-mysql-data  Removed
- Alternatively, you can use the Docker Desktop GUI to remove the containers by selecting the application stack and selecting the Delete button.
Using the GUI for Compose stacks
Note that if you remove the containers for a Compose app in the GUI, it's removing only the containers. You'll have to manually remove the network and volumes if you want to do so.
In this walkthrough, you learned how to use Docker Compose to start and stop a multi-container application.
This page was a brief introduction to Compose. In the following resources, you can dive deeper into Compose and how to write Compose files.
As you learned in What is an image?, container images are composed of layers. And each of these layers, once created, is immutable. But, what does that actually mean? And how are those layers used to create the filesystem a container can use?
Each layer in an image contains a set of filesystem changes - additions, deletions, or modifications. Let's look at a theoretical image:
- The first layer adds basic commands and a package manager, such as apt.
- The second layer installs a Python runtime and pip for dependency management.
- The third layer copies in an application's specific requirements.txt file.
- The fourth layer installs that application's specific dependencies.
- The fifth layer copies in the actual source code of the application.
This is beneficial because it allows layers to be reused between images. For example, imagine you wanted to create another Python application. Due to layering, you can leverage the same Python base. This will make builds faster and reduce the amount of storage and bandwidth required to distribute the images.
Layers let you extend images of others by reusing their base layers, allowing you to add only the data that your application needs.
Layering is made possible by content-addressable storage and union filesystems. While this will get technical, here's how it works:
- After each layer is downloaded, it is extracted into its own directory on the host filesystem.
- When you run a container from an image, a union filesystem is created where layers are stacked on top of each other, creating a new and unified view.
- When the container starts, its root directory is set to the location of this unified directory, using chroot.
When the union filesystem is created, in addition to the image layers, a directory is created specifically for the running container. This allows the container to make filesystem changes while allowing the original image layers to remain untouched. This enables you to run multiple containers from the same underlying image.
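You can peek at this yourself with docker inspect (a quick sketch; the writable-layer path shown assumes the default overlay2 storage driver on Linux, so the exact fields may differ on your setup):
# List the content-addressed digests of an image's layers
docker image inspect --format '{{json .RootFS.Layers}}' docker/welcome-to-docker

# Start a container, then look up its private writable directory (overlay2 "upper" dir)
docker run -d --name welcome docker/welcome-to-docker
docker container inspect --format '{{.GraphDriver.Data.UpperDir}}' welcome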
In this hands-on guide, you will create new image layers manually using the docker container commit command. Note that you'll rarely create images this way, as you'll normally use a Dockerfile. But, it makes it easier to understand how it's all working.
In this first step, you will create your own base image that you will then use for the following steps.
- Download and install Docker Desktop.
- In a terminal, run the following command to start a new container:
$ docker run --name=base-container -ti ubuntu
Once the image has been downloaded and the container has started, you should see a new shell prompt. This is running inside your container. It will look similar to the following (the container ID will vary):
root@d8c5ca119fcd:/#
- Inside the container, run the following command to install Node.js:
$ apt update && apt install -y nodejs
When this command runs, it downloads and installs Node inside the container. In the context of the union filesystem, these filesystem changes occur within the directory unique to this container.
- Validate that Node is installed by running the following command:
$ node -e 'console.log("Hello world!")'
You should then see a “Hello world!” appear in the console.
- Now that you have Node installed, you're ready to save the changes you've made as a new image layer, from which you can start new containers or build new images. To do so, you will use the docker container commit command. Run the following command in a new terminal:
$ docker container commit -m "Add node" base-container node-base
- View the layers of your image using the docker image history command:
$ docker image history node-base
You will see output similar to the following:
IMAGE          CREATED          CREATED BY                                      SIZE      COMMENT
9e274734bb25   10 seconds ago   /bin/bash                                       157MB     Add node
cd1dba651b30   7 days ago       /bin/sh -c #(nop)  CMD ["/bin/bash"]            0B
<missing>      7 days ago       /bin/sh -c #(nop) ADD file:6089c6bede9eca8ec…   110MB
<missing>      7 days ago       /bin/sh -c #(nop)  LABEL org.opencontainers.…   0B
<missing>      7 days ago       /bin/sh -c #(nop)  LABEL org.opencontainers.…   0B
<missing>      7 days ago       /bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH     0B
<missing>      7 days ago       /bin/sh -c #(nop)  ARG RELEASE                  0B
Note the “Add node” comment on the top line. This layer contains the Node.js install you just made.
- To prove your image has Node installed, you can start a new container using this new image:
$ docker run node-base node -e "console.log('Hello again')"
With that, you should get a “Hello again” output in the terminal, showing Node was installed and working.
- Now that you're done creating your base image, you can remove that container:
$ docker rm -f base-container
Base image definition
A base image is a foundation for building other images. It's possible to use any image as a base image. However, some images are intentionally created as building blocks, providing a foundation or starting point for an application.
In this example, you probably won't deploy this node-base image, as it doesn't actually do anything yet. But it's a base you can use for other builds.
Now that you have a base image, you can extend that image to build additional images.
- Start a new container using the newly created node-base image:
$ docker run --name=app-container -ti node-base
- Inside of this container, run the following command to create a Node program:
$ echo 'console.log("Hello from an app")' > app.js
To run this Node program, you can use the following command and see the message printed on the screen:
$ node app.js
- In another terminal, run the following command to save this container's changes as a new image:
$ docker container commit -c "CMD node app.js" -m "Add app" app-container sample-app
This command not only creates a new image named sample-app, but also adds additional configuration to the image to set the default command when starting a container. In this case, you are setting it to automatically run node app.js.
- In a terminal outside of the container, run the following command to view the updated layers:
$ docker image history sample-app
You'll then see output that looks like the following. Note the top layer comment has “Add app” and the next layer has “Add node”:
IMAGE          CREATED              CREATED BY                                      SIZE      COMMENT
c1502e2ec875   About a minute ago   /bin/bash                                       33B       Add app
5310da79c50a   4 minutes ago        /bin/bash                                       126MB     Add node
2b7cc08dcdbb   5 weeks ago          /bin/sh -c #(nop)  CMD ["/bin/bash"]            0B
<missing>      5 weeks ago          /bin/sh -c #(nop) ADD file:07cdbabf782942af0…   69.2MB
<missing>      5 weeks ago          /bin/sh -c #(nop)  LABEL org.opencontainers.…   0B
<missing>      5 weeks ago          /bin/sh -c #(nop)  LABEL org.opencontainers.…   0B
<missing>      5 weeks ago          /bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH     0B
<missing>      5 weeks ago          /bin/sh -c #(nop)  ARG RELEASE                  0B
- Finally, start a new container using the brand new image. Since you specified the default command, you can use the following command:
$ docker run sample-app
You should see your greeting appear in the terminal, coming from your Node program.
- Now that you're done with your containers, you can remove them using the following command:
$ docker rm -f app-container
If you'd like to dive deeper into the things you learned, check out the following resources:
A Dockerfile is a text-based document that's used to create a container image. It provides instructions to the image builder on the commands to run, files to copy, startup command, and more.
As an example, the following Dockerfile would produce a ready-to-run Python application:
FROM python:3.13
WORKDIR /usr/local/app
# Install the application dependencies
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
# Copy in the source code
COPY src ./src
EXPOSE 8080
# Setup an app user so the container doesn't run as the root user
RUN useradd app
USER app
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8080"]

This Dockerfile builds a container image that runs a Python web application with Uvicorn. It installs dependencies from requirements.txt, copies the app source into the image, creates a non-root user, and sets the container's default command to start Uvicorn on port 8080.
FROM python:3.13
Sets the base image to the official Python image for version 3.13. All subsequent instructions build on top of this image.
WORKDIR /usr/local/app
Sets the working directory inside the image. Subsequent COPY, RUN, and the default command execute with /usr/local/app as the current directory.
COPY requirements.txt ./
Copies the local requirements.txt file into the image at /usr/local/app/requirements.txt. This is done before copying source code to take advantage of Docker layer caching for dependency installation.
RUN pip install --no-cache-dir -r requirements.txt
Installs Python dependencies listed in requirements.txt. The --no-cache-dir flag prevents pip from storing package caches in the image, keeping the image smaller.
COPY src ./src
Copies the application source directory src from the build context into /usr/local/app/src inside the image.
EXPOSE 8080
Documents that the container listens on port 8080. This does not publish the port to the host by itself; it's informational and used by tooling.
RUN useradd app
Creates a new user named app. This is intended so the container does not run as root.
USER app
Switches the default user for subsequent instructions and for the running container to app. This improves security by avoiding running the app as root.
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8080"]
Sets the container's default command. When the container starts without an overriding command, Docker runs Uvicorn to serve the ASGI app app.main:app on all interfaces at port 8080.
- When you docker run the image, Uvicorn starts and serves the application on port 8080 inside the container. To reach it from the host you must publish the port, for example docker run -p 8080:8080 <image>.
- The container runs as the app user, not root. Files copied earlier may still be owned by root unless ownership is adjusted during build.
- Pin the base image tag to a specific digest or minor version to ensure reproducible builds, for example python:3.13.2-slim.
- Create a non-root user with a fixed UID and home, and set ownership of app files so the app user can read them:
RUN useradd --create-home --uid 1000 app \
    && chown -R app:app /usr/local/app
USER app
- Use a slim image to reduce size, e.g., python:3.13-slim or python:3.13-alpine (note compatibility differences).
- Add a .dockerignore to exclude files not needed in the image (tests, local env files, .git) to speed builds and reduce image size.
- Install only production deps or separate dev/test deps into different requirement files to avoid shipping unnecessary packages.
- Use multi-stage builds if you need build tools for compiling dependencies, keeping the final image minimal.
- Set explicit file ownership after COPY so the non-root user can access the files:
COPY --chown=app:app src ./src
- Consider ENTRYPOINT vs CMD: use ENTRYPOINT for the executable and CMD for default args if you want easier argument overrides.
- Add a HEALTHCHECK to let orchestrators detect unhealthy containers.
- Avoid docker container commit style workflows; prefer a Dockerfile and docker build for reproducible images.
FROM python:3.13-slim
WORKDIR /usr/local/app
# Install dependencies
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
# Copy source and set ownership for non-root user
COPY --chown=1000:1000 src ./src
# Create app user with fixed UID and switch to it
RUN useradd --create-home --uid 1000 app
USER app
EXPOSE 8080
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8080"]

- Ensure requirements.txt and src exist in the build context.
- Add a .dockerignore to exclude unnecessary files.
- When running locally, publish the port: docker run -p 8080:8080 <image>
- Verify file permissions inside the container if the app fails to read files after switching to the non-root user.
Some of the most common instructions in a Dockerfile include:
- FROM <image> - this specifies the base image that the build will extend.
- WORKDIR <path> - this instruction specifies the "working directory" or the path in the image where files will be copied and commands will be executed.
- COPY <host-path> <image-path> - this instruction tells the builder to copy files from the host and put them into the container image.
- RUN <command> - this instruction tells the builder to run the specified command.
- ENV <name> <value> - this instruction sets an environment variable that a running container will use.
- EXPOSE <port-number> - this instruction sets configuration on the image that indicates a port the image would like to expose.
- USER <user-or-uid> - this instruction sets the default user for all subsequent instructions.
- CMD ["<command>", "<arg1>"] - this instruction sets the default command a container using this image will run.
To read through all of the instructions or go into greater detail, check out the Dockerfile reference.
Just as you saw with the previous example, a Dockerfile typically follows these steps:
- Determine your base image
- Install application dependencies
- Copy in any relevant source code and/or binaries
- Configure the final image
In this quick hands-on guide, you'll write a Dockerfile that builds a simple Node.js application. If you're not familiar with JavaScript-based applications, don't worry. It isn't necessary for following along with this guide.
Download this ZIP file and extract the contents into a directory on your machine.
If you'd rather not download a ZIP file, clone the https://github.com/docker/getting-started-todo-app project and check out the build-image-from-scratch branch.
Now that you have the project, you're ready to create the Dockerfile.
- Download and install Docker Desktop.
- Examine the project.
Explore the contents of getting-started-todo-app/app/. You'll notice that a Dockerfile already exists. It is a simple text file that you can open in any text or code editor.
- Delete the existing Dockerfile.
For this exercise, you'll pretend you're starting from scratch and will create a new Dockerfile.
- Create a file named Dockerfile in the getting-started-todo-app/app/ folder.
Dockerfile file extensions
It's important to note that the Dockerfile has no file extension. Some editors will automatically add an extension to the file (or complain it doesn't have one).
- In the Dockerfile, define your base image by adding the following line:
FROM node:22-alpine
- Now, define the working directory by using the WORKDIR instruction. This will specify where future commands will run and the directory files will be copied inside the container image.
WORKDIR /app
- Copy all of the files from your project on your machine into the container image by using the COPY instruction:
COPY . .
- Install the app's dependencies by using the yarn CLI and package manager. To do so, run a command using the RUN instruction:
RUN yarn install --production
- Finally, specify the default command to run by using the CMD instruction:
CMD ["node", "./src/index.js"]
And with that, you should have the following Dockerfile:
FROM node:22-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "./src/index.js"]
This Dockerfile isn't production-ready yet
It's important to note that this Dockerfile is not following all of the best practices yet (by design). It will build the app, but the builds won't be as fast, or the images as secure, as they could be.
Keep reading to learn more about how to make the image maximize the build cache, run as a non-root user, and use multi-stage builds.
Containerize new projects quickly with docker init
The docker init command will analyze your project and quickly create a Dockerfile, a compose.yaml, and a .dockerignore, helping you get up and going. Since you're learning about Dockerfiles specifically here, you won't use it now. But, learn more about it here.
To learn more about writing a Dockerfile, visit the following resources:
Now that you have created a Dockerfile and learned the basics, it's time to learn about building, tagging, and pushing the images.
In this guide, you will learn the following:
- Building images - the process of building an image based on a Dockerfile
- Tagging images - the process of giving an image a name, which also determines where the image can be distributed
- Publishing images - the process to distribute or share the newly created image using a container registry
Most often, images are built using a Dockerfile. The most basic docker build command might look like the following:
docker build .

The final . in the command provides the path or URL to the build context. At this location, the builder will find the Dockerfile and other referenced files.
When you run a build, the builder pulls the base image, if needed, and then runs the instructions specified in the Dockerfile.
With the previous command, the image will have no name, but the output will provide the ID of the image. As an example, the previous command might produce the following output:
$ docker build .
[+] Building 3.5s (11/11) FINISHED docker:desktop-linux
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 308B 0.0s
=> [internal] load metadata for docker.io/library/python:3.12 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [1/6] FROM docker.io/library/python:3.12 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 123B 0.0s
=> [2/6] WORKDIR /usr/local/app 0.0s
=> [3/6] RUN useradd app 0.1s
=> [4/6] COPY ./requirements.txt ./requirements.txt 0.0s
=> [5/6] RUN pip install --no-cache-dir --upgrade -r requirements.txt 3.2s
=> [6/6] COPY ./app ./app 0.0s
=> exporting to image 0.1s
=> => exporting layers 0.1s
=> => writing image sha256:9924dfd9350407b3df01d1a0e1033b1e543523ce7d5d5e2c83a724480ebe8f00 0.0s

With the previous output, you could start a container by using the referenced image:
docker run sha256:9924dfd9350407b3df01d1a0e1033b1e543523ce7d5d5e2c83a724480ebe8f00

That name certainly isn't memorable, which is where tagging becomes useful.
Tagging images is the method to provide an image with a memorable name. However, there is a structure to the name of an image. A full image name has the following structure:
[HOST[:PORT_NUMBER]/]PATH[:TAG]
- HOST: The optional registry hostname where the image is located. If no host is specified, Docker's public registry at docker.io is used by default.
- PORT_NUMBER: The registry port number if a hostname is provided.
- PATH: The path of the image, consisting of slash-separated components. For Docker Hub, the format follows [NAMESPACE/]REPOSITORY, where namespace is either a user's or organization's name. If no namespace is specified, library is used, which is the namespace for Docker Official Images.
- TAG: A custom, human-readable identifier that's typically used to identify different versions or variants of an image. If no tag is specified, latest is used by default.
Some examples of image names include:
- nginx, equivalent to docker.io/library/nginx:latest: this pulls an image from the docker.io registry, the library namespace, the nginx image repository, and the latest tag.
- docker/welcome-to-docker, equivalent to docker.io/docker/welcome-to-docker:latest: this pulls an image from the docker.io registry, the docker namespace, the welcome-to-docker image repository, and the latest tag.
- ghcr.io/dockersamples/example-voting-app-vote:pr-311: this pulls an image from the GitHub Container Registry, the dockersamples namespace, the example-voting-app-vote image repository, and the pr-311 tag.
To tag an image during a build, add the -t or --tag flag:
docker build -t my-username/my-image .

If you've already built an image, you can add another tag to the image by using the docker image tag command:

docker image tag my-username/my-image another-username/another-image:v1

Once you have an image built and tagged, you're ready to push it to a registry. To do so, use the docker push command:

docker push my-username/my-image

Within a few seconds, all of the layers for your image will be pushed to the registry.
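The same naming structure is how you target a registry other than Docker Hub: include the host (and optional port) in the tag before pushing. The registry host below is a hypothetical example:
# Tag an existing image for a private registry at registry.example.com:5000
docker image tag my-username/my-image registry.example.com:5000/my-team/my-image:v1

# docker push now sends the layers to that registry instead of Docker Hub
docker push registry.example.com:5000/my-team/my-image:v1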
Requiring authentication
Before you're able to push an image to a repository, you will need to be authenticated. To do so, simply use the docker login command.
In this hands-on guide, you will build a simple image using a provided Dockerfile and push it to Docker Hub.
- Get the sample application.
If you have Git, you can clone the repository for the sample application. Otherwise, you can download the sample application. Choose one of the following options.
Clone with git
Use the following command in a terminal to clone the sample application repository.
$ git clone https://github.com/docker/getting-started-todo-app
Download
Download the source and extract it.
- Download and install Docker Desktop.
- If you don't have a Docker account yet, create one now. Once you've done that, sign in to Docker Desktop using that account.
Now that you have a repository on Docker Hub, it's time for you to build an image and push it to the repository.
- Using a terminal in the root of the sample app repository, run the following command. Replace YOUR_DOCKER_USERNAME with your Docker Hub username:
$ docker build -t <YOUR_DOCKER_USERNAME>/concepts-build-image-demo .
As an example, if your username is mobywhale, you would run the command:
$ docker build -t mobywhale/concepts-build-image-demo .
- Once the build has completed, you can view the image by using the following command:
$ docker image ls
The command will produce output similar to the following:
REPOSITORY                            TAG       IMAGE ID       CREATED          SIZE
mobywhale/concepts-build-image-demo   latest    746c7e06537f   24 seconds ago   354MB
- You can actually view the history (or how the image was created) by using the docker image history command:
$ docker image history mobywhale/concepts-build-image-demo
You'll then see output similar to the following:
IMAGE          CREATED          CREATED BY                                      SIZE      COMMENT
f279389d5f01   8 seconds ago    CMD ["node" "./src/index.js"]                   0B        buildkit.dockerfile.v0
<missing>      8 seconds ago    EXPOSE map[3000/tcp:{}]                         0B        buildkit.dockerfile.v0
<missing>      8 seconds ago    WORKDIR /app                                    8.19kB    buildkit.dockerfile.v0
<missing>      4 days ago       /bin/sh -c #(nop)  CMD ["node"]                 0B
<missing>      4 days ago       /bin/sh -c #(nop)  ENTRYPOINT ["docker-entry…   0B
<missing>      4 days ago       /bin/sh -c #(nop) COPY file:4d192565a7220e13…   20.5kB
<missing>      4 days ago       /bin/sh -c apk add --no-cache --virtual .bui…   7.92MB
<missing>      4 days ago       /bin/sh -c #(nop)  ENV YARN_VERSION=1.22.19     0B
<missing>      4 days ago       /bin/sh -c addgroup -g 1000 node && addu…       126MB
<missing>      4 days ago       /bin/sh -c #(nop)  ENV NODE_VERSION=20.12.0     0B
<missing>      2 months ago     /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B
<missing>      2 months ago     /bin/sh -c #(nop) ADD file:d0764a717d1e9d0af…   8.42MB

This output shows the layers of the image, highlighting the layers you added and those that were inherited from your base image.
Now that you have an image built, it's time to push the image to a registry.
- Push the image using the docker push command:
$ docker push <YOUR_DOCKER_USERNAME>/concepts-build-image-demo
If you receive a "requested access to the resource is denied" error, make sure you are both logged in and that your Docker username is correct in the image tag.
After a moment, your image should be pushed to Docker Hub.
To learn more about building, tagging, and publishing images, visit the following resources:
- What is a build context?
- docker build reference
- docker image tag reference
- docker push reference
- What is a registry?
Now that you have learned about building and publishing images, it's time to learn how to speed up the build process using the Docker build cache.
Consider the following Dockerfile that you created for the getting-started app.
FROM node:22-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "./src/index.js"]

When you run the docker build command to create a new image, Docker executes each instruction in your Dockerfile, creating a layer for each command and in the order specified. For each instruction, Docker checks whether it can reuse the instruction from a previous build. If it finds that you've already executed a similar instruction before, Docker doesn't need to redo it. Instead, it'll use the cached result. This way, your build process becomes faster and more efficient, saving you valuable time and resources.
Using the build cache effectively lets you achieve faster builds by reusing results from previous builds and skipping unnecessary work. In order to maximize cache usage and avoid resource-intensive and time-consuming rebuilds, it's important to understand how cache invalidation works. Here are a few examples of situations that can cause cache to be invalidated:
- Any changes to the command of a RUN instruction invalidates that layer. Docker detects the change and invalidates the build cache if there's any modification to a RUN command in your Dockerfile.
- Any changes to files copied into the image with the COPY or ADD instructions. Docker keeps an eye on any alterations to files within your project directory. Whether it's a change in content or properties like permissions, Docker considers these modifications as triggers to invalidate the cache.
- Once one layer is invalidated, all following layers are also invalidated. If any previous layer, including the base image or intermediary layers, has been invalidated due to changes, Docker ensures that subsequent layers relying on it are also invalidated. This keeps the build process synchronized and prevents inconsistencies.
When you're writing or editing a Dockerfile, keep an eye out for unnecessary cache misses to ensure that builds run as fast and efficiently as possible.
In this hands-on guide, you will learn how to use the Docker build cache effectively for a Node.js application.
- Download and install Docker Desktop.
- Open a terminal and clone this sample application:
$ git clone https://github.com/dockersamples/todo-list-app
- Navigate into the todo-list-app directory:
$ cd todo-list-app
Inside this directory, you'll find a file named Dockerfile with the following content:
FROM node:22-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
EXPOSE 3000
CMD ["node", "./src/index.js"]
- Execute the following command to build the Docker image:
$ docker build .
Here's the result of the build process:
[+] Building 20.0s (10/10) FINISHED
The first line indicates that the entire build process took 20.0 seconds. The first build may take some time as it installs dependencies.
- Rebuild without making changes.
Now, re-run the docker build command without making any change in the source code or Dockerfile as shown:
$ docker build .
Subsequent builds after the initial are faster due to the caching mechanism, as long as the commands and context remain unchanged. Docker caches the intermediate layers generated during the build process. When you rebuild the image without making any changes to the Dockerfile or the source code, Docker can reuse the cached layers, significantly speeding up the build process.
[+] Building 1.0s (9/9) FINISHED                              docker:desktop-linux
 => [internal] load build definition from Dockerfile                          0.0s
 => => transferring dockerfile: 187B                                          0.0s
 ...
 => [internal] load build context                                             0.0s
 => => transferring context: 8.16kB                                           0.0s
 => CACHED [2/4] WORKDIR /app                                                 0.0s
 => CACHED [3/4] COPY . .                                                     0.0s
 => CACHED [4/4] RUN yarn install --production                                0.0s
 => exporting to image                                                        0.0s
 => => exporting layers                                                       0.0s
 => => exporting manifest
The subsequent build was completed in just 1.0 second by leveraging the cached layers. No need to repeat time-consuming steps like installing dependencies.
| Step | Description | Time Taken (1st Run) | Time Taken (2nd Run) |
|------|-------------|----------------------|----------------------|
| 1 | Load build definition from Dockerfile | 0.0 seconds | 0.0 seconds |
| 2 | Load metadata for docker.io/library/node:22-alpine | 2.7 seconds | 0.9 seconds |
| 3 | Load .dockerignore | 0.0 seconds | 0.0 seconds |
| 4 | Load build context (context size: 4.60MB) | 0.1 seconds | 0.0 seconds |
| 5 | Set the working directory (WORKDIR) | 0.1 seconds | 0.0 seconds |
| 6 | Copy the local code into the container | 0.0 seconds | 0.0 seconds |
| 7 | Run yarn install --production | 10.0 seconds | 0.0 seconds |
| 8 | Exporting layers | 2.2 seconds | 0.0 seconds |
| 9 | Exporting the final image | 3.0 seconds | 0.0 seconds |

Going back to the docker image history output, you see that each command in the Dockerfile becomes a new layer in the image. You might remember that when you made a change to the image, the yarn dependencies had to be reinstalled. Is there a way to fix this? It doesn't make much sense to reinstall the same dependencies every time you build, right?

To fix this, restructure your Dockerfile so that the dependency cache remains valid unless it really needs to be invalidated. For Node-based applications, dependencies are defined in the package.json file. You'll want to reinstall the dependencies if that file changes, but use cached dependencies if the file is unchanged. So, start by copying only that file first, then install the dependencies, and finally copy everything else. Then, you only need to recreate the yarn dependencies if there was a change to the package.json file.

- Update the Dockerfile to copy in the package.json file first, install dependencies, and then copy everything else in:
FROM node:22-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --production
COPY . .
EXPOSE 3000
CMD ["node", "src/index.js"]
- Create a file named .dockerignore in the same folder as the Dockerfile with the following contents:
node_modules
- Build the new image:
$ docker build .
You'll then see output similar to the following:
[+] Building 16.1s (10/10) FINISHED
 => [internal] load build definition from Dockerfile                          0.0s
 => => transferring dockerfile: 175B                                          0.0s
 => [internal] load .dockerignore                                             0.0s
 => => transferring context: 2B                                               0.0s
 => [internal] load metadata for docker.io/library/node:22-alpine             0.0s
 => [internal] load build context                                             0.8s
 => => transferring context: 53.37MB                                          0.8s
 => [1/5] FROM docker.io/library/node:22-alpine                               0.0s
 => CACHED [2/5] WORKDIR /app                                                 0.0s
 => [3/5] COPY package.json yarn.lock ./                                      0.2s
 => [4/5] RUN yarn install --production                                      14.0s
 => [5/5] COPY . .                                                            0.5s
 => exporting to image                                                        0.6s
 => => exporting layers                                                       0.6s
 => => writing image sha256:d6f819013566c54c50124ed94d5e66c452325327217f4f04399b45f94e37d25  0.0s
 => => naming to docker.io/library/node-app:2.0                               0.0s
You'll see that all layers were rebuilt. Perfectly fine since you changed the Dockerfile quite a bit.
- Now, make a change to the src/static/index.html file (like change the title to say "The Awesome Todo App").
- Build the Docker image. This time, your output should look a little different:
$ docker build -t node-app:3.0 .
You'll then see output similar to the following:
[+] Building 1.2s (10/10) FINISHED
 => [internal] load build definition from Dockerfile                          0.0s
 => => transferring dockerfile: 37B                                           0.0s
 => [internal] load .dockerignore                                             0.0s
 => => transferring context: 2B                                               0.0s
 => [internal] load metadata for docker.io/library/node:22-alpine             0.0s
 => [internal] load build context                                             0.2s
 => => transferring context: 450.43kB                                         0.2s
 => [1/5] FROM docker.io/library/node:22-alpine                               0.0s
 => CACHED [2/5] WORKDIR /app                                                 0.0s
 => CACHED [3/5] COPY package.json yarn.lock ./                               0.0s
 => CACHED [4/5] RUN yarn install --production                                0.0s
 => [5/5] COPY . .                                                            0.5s
 => exporting to image                                                        0.3s
 => => exporting layers                                                       0.3s
 => => writing image sha256:91790c87bcb096a83c2bd4eb512bc8b134c757cda0bdee4038187f98148e2eda  0.0s
 => => naming to docker.io/library/node-app:3.0                               0.0s
First off, you should notice that the build was much faster. You'll see that several steps are using previously cached layers. That's good news; you're using the build cache. Pushing and pulling this image and updates to it will be much faster as well.
By following these optimization techniques, you can make your Docker builds faster and more efficient, leading to quicker iteration cycles and improved development productivity.
Now that you understand how to use the Docker build cache effectively, you're ready to learn about Multi-stage builds.
In a traditional build, all build instructions are executed in sequence, and in a single build container: downloading dependencies, compiling code, and packaging the application. All those layers end up in your final image. This approach works, but it leads to bulky images carrying unnecessary weight and increasing your security risks. This is where multi-stage builds come in.
Multi-stage builds introduce multiple stages in your Dockerfile, each with a specific purpose. Think of it like the ability to run different parts of a build in multiple different environments, concurrently. By separating the build environment from the final runtime environment, you can significantly reduce the image size and attack surface. This is especially beneficial for applications with large build dependencies.
Multi-stage builds are recommended for all types of applications.
- For interpreted languages, like JavaScript or Ruby or Python, you can build and minify your code in one stage, and copy the production-ready files to a smaller runtime image. This optimizes your image for deployment.
- For compiled languages, like C or Go or Rust, multi-stage builds let you compile in one stage and copy the compiled binaries into a final runtime image. No need to bundle the entire compiler in your final image.
Here's a simplified example of a multi-stage build structure using pseudo-code. Notice there are multiple FROM statements and a new AS <stage-name>. In addition, the COPY statement in the second stage is copying --from the previous stage.
# Stage 1: Build Environment
FROM builder-image AS build-stage
# Install build tools (e.g., Maven, Gradle)
# Copy source code
# Build commands (e.g., compile, package)
# Stage 2: Runtime environment
FROM runtime-image AS final-stage
# Copy application artifacts from the build stage (e.g., JAR file)
COPY --from=build-stage /path/in/build/stage /path/to/place/in/final/stage
# Define runtime configuration (e.g., CMD, ENTRYPOINT)

This Dockerfile uses two stages:
- The build stage uses a base image containing build tools needed to compile your application. It includes commands to install build tools, copy source code, and execute build commands.
- The final stage uses a smaller base image suitable for running your application. It copies the compiled artifacts (a JAR file, for example) from the build stage. Finally, it defines the runtime configuration (using `CMD` or `ENTRYPOINT`) for starting your application.
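To make the pattern concrete before the Java hands-on that follows, here's a minimal sketch for a hypothetical Go project. The image tags, paths, and the assumption that the module's main package sits at the repository root are all illustrative:

```dockerfile
# Stage 1: build environment with the full Go toolchain
FROM golang:1.22 AS build-stage
WORKDIR /src
COPY . .
# Produce a statically linked binary so it runs on a minimal base image
RUN CGO_ENABLED=0 go build -o /bin/app .

# Stage 2: minimal runtime image containing only the compiled binary
FROM alpine:3.20 AS final-stage
COPY --from=build-stage /bin/app /bin/app
CMD ["/bin/app"]
```

The final image contains only the binary and the small Alpine base, not the Go toolchain.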
In this hands-on guide, you'll unlock the power of multi-stage builds to create lean and efficient Docker images for a sample Java application. You'll use a simple “Hello World” Spring Boot-based application built with Maven as your example.
-
Download and install Docker Desktop.
- Open this pre-initialized project to generate a ZIP file.
Spring Initializr is a quickstart generator for Spring projects. It provides an extensible API to generate JVM-based projects, with implementations for several common concepts like basic language generation for Java, Kotlin, Groovy, and Maven.
Select Generate to create and download the zip file for this project.
For this demonstration, the project metadata uses Maven for build automation, Java 21 as the language version, and the Spring Web dependency.
- Navigate to the project directory. Once you unzip the file, you'll see the following project directory structure:

```text
spring-boot-docker
├── HELP.md
├── mvnw
├── mvnw.cmd
├── pom.xml
└── src
    ├── main
    │   ├── java
    │   │   └── com
    │   │       └── example
    │   │           └── spring_boot_docker
    │   │               └── SpringBootDockerApplication.java
    │   └── resources
    │       ├── application.properties
    │       ├── static
    │       └── templates
    └── test
        └── java
            └── com
                └── example
                    └── spring_boot_docker
                        └── SpringBootDockerApplicationTests.java

15 directories, 7 files
```

The `src/main/java` directory contains your project's source code, the `src/test/java` directory contains the test source, and the `pom.xml` file is your project's Project Object Model (POM).

The `pom.xml` file is the core of a Maven project's configuration. It's a single configuration file that contains most of the information needed to build a customized project. The POM is huge and can seem daunting. Thankfully, you don't yet need to understand every intricacy to use it effectively.
- Create a RESTful web service that displays "Hello World!".

Under the `src/main/java/com/example/spring_boot_docker/` directory, modify your `SpringBootDockerApplication.java` file so it has the following content:

```java
package com.example.spring_boot_docker;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@SpringBootApplication
public class SpringBootDockerApplication {

    @RequestMapping("/")
    public String home() {
        return "Hello World";
    }

    public static void main(String[] args) {
        SpringApplication.run(SpringBootDockerApplication.class, args);
    }

}
```

The `SpringBootDockerApplication.java` file starts by declaring your `com.example.spring_boot_docker` package and importing the necessary Spring framework classes. This Java file creates a simple Spring Boot web application that responds with "Hello World" when a user visits its homepage.
Now that you have the project, you're ready to create the Dockerfile.
- Create a file named `Dockerfile` in the same folder that contains all the other folders and files (like src, pom.xml, etc.).
- In the `Dockerfile`, define your base image by adding the following line:

FROM eclipse-temurin:21.0.8_9-jdk-jammy
- Now, define the working directory by using the `WORKDIR` instruction. This specifies where future commands will run and where files will be copied inside the container image.

WORKDIR /app

- Copy both the Maven wrapper script and your project's `pom.xml` file into the current working directory `/app` within the Docker container.

COPY .mvn/ .mvn
COPY mvnw pom.xml ./
- Execute a command within the container. It runs the `./mvnw dependency:go-offline` command, which uses the Maven wrapper (`./mvnw`) to download all dependencies for your project without building the final JAR file (useful for faster builds).

RUN ./mvnw dependency:go-offline

- Copy the `src` directory from your project on the host machine to the `/app` directory within the container.

COPY src ./src

- Set the default command to be executed when the container starts. This command instructs the container to run the Maven wrapper (`./mvnw`) with the `spring-boot:run` goal, which will build and execute your Spring Boot application.

CMD ["./mvnw", "spring-boot:run"]
And with that, you should have the following Dockerfile:
```dockerfile
FROM eclipse-temurin:21.0.8_9-jdk-jammy
WORKDIR /app
COPY .mvn/ .mvn
COPY mvnw pom.xml ./
RUN ./mvnw dependency:go-offline
COPY src ./src
CMD ["./mvnw", "spring-boot:run"]
```
- Execute the following command to build the Docker image:

$ docker build -t spring-helloworld .
- Check the size of the Docker image by using the `docker images` command:

$ docker images

Doing so will produce output like the following:

```console
REPOSITORY          TAG       IMAGE ID       CREATED         SIZE
spring-helloworld   latest    ff708d5ee194   3 minutes ago   880MB
```
This output shows that your image is 880MB in size. It contains the full JDK, Maven toolchain, and more. In production, you don't need that in your final image.
- Now that you have an image built, it's time to run the container:

$ docker run -p 8080:8080 spring-helloworld

You'll then see output similar to the following in the container log:
[INFO] --- spring-boot:3.3.4:run (default-cli) @ spring-boot-docker --- [INFO] Attaching agents: [] . ____ _ __ _ _ /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ \\/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |_\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v3.3.4) 2024-09-29T23:54:07.157Z INFO 159 --- [spring-boot-docker] [ main] c.e.s.SpringBootDockerApplication : Starting SpringBootDockerApplication using Java 21.0.2 with PID 159 (/app/target/classes started by root in /app) …. -
Access your “Hello World” page through your web browser at http://localhost:8080, or via this curl command:
$ curl localhost:8080
Hello World
- Consider the following Dockerfile:

```dockerfile
FROM eclipse-temurin:21.0.8_9-jdk-jammy AS builder
WORKDIR /opt/app
COPY .mvn/ .mvn
COPY mvnw pom.xml ./
RUN ./mvnw dependency:go-offline
COPY ./src ./src
RUN ./mvnw clean install

FROM eclipse-temurin:21.0.8_9-jre-jammy AS final
WORKDIR /opt/app
EXPOSE 8080
COPY --from=builder /opt/app/target/*.jar /opt/app/*.jar
ENTRYPOINT ["java", "-jar", "/opt/app/*.jar"]
```
Notice that this Dockerfile has been split into two stages.
- The first stage remains the same as the previous Dockerfile, providing a Java Development Kit (JDK) environment for building the application. This stage is given the name `builder`.
- The second stage is a new stage named `final`. It uses a slimmer `eclipse-temurin:21.0.8_9-jre-jammy` image, which contains just the Java Runtime Environment (JRE), enough to run the compiled application (JAR file) without any of the build tooling.
For production use, it's highly recommended that you produce a custom JRE-like runtime using jlink. JRE images are available for all versions of Eclipse Temurin, but `jlink` allows you to create a minimal runtime containing only the Java modules your application actually needs. This can significantly reduce the size and improve the security of your final image. Refer to this page for more information.

With multi-stage builds, a Docker build uses one base image for compilation, packaging, and unit tests, and then a separate image for the application runtime. As a result, the final image is smaller since it doesn't contain any development or debugging tools. By separating the build environment from the final runtime environment, you can significantly reduce the image size and increase the security of your final images.
- Now, rebuild your image and run your ready-to-use production build.

$ docker build -t spring-helloworld-builder .

This command builds a Docker image named `spring-helloworld-builder` using the final stage from your `Dockerfile` located in the current directory.

[!NOTE]
In your multi-stage Dockerfile, the final stage (`final`) is the default target for building. This means that if you don't explicitly specify a target stage using the `--target` flag in the `docker build` command, Docker will automatically build the last stage by default. You could use `docker build -t spring-helloworld-builder --target builder .` to build only the builder stage with the JDK environment.
- Look at the image size difference by using the `docker images` command:

$ docker images

You'll get output similar to the following:

```console
REPOSITORY                  TAG       IMAGE ID       CREATED             SIZE
spring-helloworld-builder   latest    c5c76cb815c0   24 minutes ago      428MB
spring-helloworld           latest    ff708d5ee194   About an hour ago   880MB
```
Your final image is just 428 MB, compared to the original build size of 880 MB.
By optimizing each stage and only including what's necessary, you were able to significantly reduce the overall image size while still achieving the same functionality. This not only improves performance but also makes your Docker images more lightweight, more secure, and easier to manage.
If you've been following the guides so far, you understand that containers provide isolated processes for each component of your application. Each component - a React frontend, a Python API, and a Postgres database - runs in its own sandbox environment, completely isolated from everything else on your host machine. This isolation is great for security and managing dependencies, but it also means you can't access them directly. For example, you can't access the web app in your browser.
That's where port publishing comes in.
Publishing a port provides the ability to break through a little bit of networking isolation by setting up a forwarding rule. As an example, you can indicate that requests on your host's port 8080 should be forwarded to the container's port 80. Publishing ports happens during container creation using the -p (or --publish) flag with docker run. The syntax is:
$ docker run -d -p HOST_PORT:CONTAINER_PORT nginx

- `HOST_PORT`: The port number on your host machine where you want to receive traffic
- `CONTAINER_PORT`: The port number within the container that's listening for connections
For example, to publish the container's port 80 to host port 8080:
$ docker run -d -p 8080:80 nginx

Now, any traffic sent to port 8080 on your host machine will be forwarded to port 80 within the container.
Important
When a port is published, it's published to all network interfaces by default. This means any traffic that reaches your machine can access the published application. Be mindful of publishing databases or any sensitive information. Learn more about published ports here.
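If you only want the application reachable from the host itself, you can bind the published port to a specific interface. For example, the following variation publishes only on the loopback address:

$ docker run -d -p 127.0.0.1:8080:80 nginx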
At times, you may want to simply publish the port but don't care which host port is used. In these cases, you can let Docker pick the port for you. To do so, simply omit the HOST_PORT configuration.
For example, the following command will publish the container's port 80 onto an ephemeral port on the host:
$ docker run -p 80 nginx

Once the container is running, using `docker ps` will show you the port that was chosen:

```console
$ docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED         STATUS         PORTS                   NAMES
a527355c9c53   nginx     "/docker-entrypoint.…"   4 seconds ago   Up 3 seconds   0.0.0.0:54772->80/tcp   romantic_williamson
```

In this example, the app is exposed on the host at port 54772.
When creating a container image, the EXPOSE instruction is used to indicate that the packaged application will use the specified port. These ports aren't published by default.
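As a minimal illustration (the base image and port are placeholders), an image author documents the listening port in the Dockerfile like this:

```dockerfile
FROM nginx
# Documents that the packaged app listens on port 80; it still isn't
# published to the host until you use -p or -P at run time.
EXPOSE 80
```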
With the -P or --publish-all flag, you can automatically publish all exposed ports to ephemeral ports. This is quite useful when you're trying to avoid port conflicts in development or testing environments.
For example, the following command will publish all of the exposed ports configured by the image:
$ docker run -P nginx

In this hands-on guide, you'll learn how to publish container ports using both the CLI and Docker Compose for deploying a web application.
In this step, you will run a container and publish its port using the Docker CLI.
-
Download and install Docker Desktop.
-
In a terminal, run the following command to start a new container:
$ docker run -d -p 8080:80 docker/welcome-to-docker

The first `8080` refers to the host port. This is the port on your local machine that will be used to access the application running inside the container. The second `80` refers to the container port. This is the port that the application inside the container listens on for incoming connections. Hence, the command binds host port `8080` to port `80` in the container.
Verify the published port by going to the Containers view of the Docker Desktop Dashboard.
-
Open the website by either selecting the link in the Port(s) column of your container or visiting http://localhost:8080 in your browser.
This example will launch the same application using Docker Compose:
- Create a new directory and, inside that directory, create a `compose.yaml` file with the following contents:

```yaml
services:
  app:
    image: docker/welcome-to-docker
    ports:
      - 8080:80
```

The `ports` configuration accepts a few different forms of syntax for the port definition (see the long-form sketch after this list). In this case, you're using the same `HOST_PORT:CONTAINER_PORT` used in the `docker run` command.
- Open a terminal and navigate to the directory you created in the previous step.
- Use the `docker compose up` command to start the application.
- Open your browser to http://localhost:8080.
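As referenced in the first step, the `ports` setting also accepts a long-form syntax that spells each field out explicitly. A sketch of the equivalent definition:

```yaml
services:
  app:
    image: docker/welcome-to-docker
    ports:
      - target: 80        # container port
        published: 8080   # host port
        protocol: tcp
```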
If you'd like to dive in deeper on this topic, be sure to check out the following resources:
Now that you understand how to publish and expose ports, you're ready to learn how to override the container defaults using the docker run command.
When a Docker container starts, it executes an application or command. The container gets this executable (script or file) from its image's configuration. Containers come with default settings that usually work well, but you can change them if needed. These adjustments help the container's program run exactly how you want it to.
For example, if you have an existing database container that listens on the standard port and you want to run a new instance of the same database container, then you might want to change the port settings the new container listens on so that it doesn't conflict with the existing container. Sometimes you might want to increase the memory available to the container if the program needs more resources to handle a heavy workload or set the environment variables to provide specific configuration details the program needs to function properly.
The docker run command offers a powerful way to override these defaults and tailor the container's behavior to your liking. The command provides several flags that let you customize container behavior on the fly.
Here are a few ways you can achieve this.
Sometimes you might want to use separate database instances for development and testing purposes. Running these database instances on the same port would conflict. You can use the -p option in docker run to map container ports to host ports, allowing you to run multiple instances of the container without any conflict.

$ docker run -d -p HOST_PORT:CONTAINER_PORT postgres

You can also set environment variables with the -e option. The following command sets an environment variable foo inside the container with the value bar:
$ docker run -e foo=bar postgres env

You will see output like the following:

```console
HOSTNAME=2042f2e6ebe4
foo=bar
```

Tip
The .env file acts as a convenient way to set environment variables for your Docker containers without cluttering your command line with numerous -e flags. To use a .env file, pass the --env-file option with the docker run command.
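For instance, a hypothetical .env file for the Postgres examples in this section might contain the following (the variable names follow the official Postgres image, but the values are placeholders):

```text
POSTGRES_PASSWORD=secret
POSTGRES_USER=app_user
```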
$ docker run --env-file .env postgres env

You can use the --memory and --cpus flags with the docker run command to restrict how much CPU and memory a container can use. For example, you can set a memory limit for the Python API container, preventing it from consuming excessive resources on your host. Here's the command:
$ docker run -e POSTGRES_PASSWORD=secret --memory="512m" --cpus="0.5" postgres

This command limits container memory usage to 512 MB and sets a CPU quota of 0.5, i.e. half a core.
Monitor the real-time resource usage
You can use the `docker stats` command to monitor the real-time resource usage of running containers. This helps you understand whether the allocated resources are sufficient or need adjustment.
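For example, to take a one-off snapshot for a single container (the container name here is just a placeholder), you could run:

$ docker stats --no-stream my-postgres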
By effectively using these docker run flags, you can tailor your containerized application's behavior to fit your specific requirements.
In this hands-on guide, you'll see how to use the docker run command to override the container defaults.
- Download and install Docker Desktop.
-
Start a container using the Postgres image with the following command:
$ docker run -d -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres

This will start the Postgres database in the background, listening on the standard container port `5432` and mapped to port `5432` on the host machine.
- Start a second Postgres container mapped to a different port.

$ docker run -d -e POSTGRES_PASSWORD=secret -p 5433:5432 postgres

This will start another Postgres container in the background, listening on the standard Postgres port `5432` in the container, but mapped to port `5433` on the host machine. You override the host port just to ensure that this new container doesn't conflict with the existing running container.
- Verify that both containers are running by going to the Containers view in the Docker Desktop Dashboard.
By default, containers automatically connect to a special network called a bridge network when you run them. This bridge network acts like a virtual bridge, allowing containers on the same host to communicate with each other while keeping them isolated from the outside world and other hosts. It's a convenient starting point for most container interactions. However, for specific scenarios, you might want more control over the network configuration.
Here's where the custom network comes in. You create a custom network and attach containers to it by passing the --network flag with the docker run command. All containers without a --network flag are attached to the default bridge network.
Follow the steps to see how to connect a Postgres container to a custom network.
- Create a new custom network by using the following command:

$ docker network create mynetwork

- Verify the network by running the following command:

$ docker network ls

This command lists all networks, including the newly created "mynetwork".
- Connect Postgres to the custom network by using the following command:

$ docker run -d -e POSTGRES_PASSWORD=secret -p 5434:5432 --network mynetwork postgres

This will start a Postgres container in the background, mapped to host port 5434 and attached to the `mynetwork` network. You passed the `--network` parameter to override the container default by connecting the container to a custom Docker network for better isolation and communication with other containers. You can use the `docker network inspect` command to see if the container is tied to this new bridge network.

Key differences between the default bridge and custom networks
- DNS resolution: By default, containers connected to the default bridge network can communicate with each other, but only by IP address (unless you use the `--link` option, which is considered legacy and not recommended for production use due to its technical shortcomings). On a custom network, containers can resolve each other by name or alias; see the short example after this list.
- Isolation: All containers without a `--network` specified are attached to the default bridge network, which can be a risk, as unrelated containers are then able to communicate. A custom network provides a scoped network in which only containers attached to that network are able to communicate, providing better isolation.
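As a quick sketch of the DNS difference described above (network and container names are arbitrary):

```console
$ docker network create demo-net
$ docker run -d --name web --network demo-net nginx
$ docker run --rm --network demo-net alpine ping -c 1 web
```

The last command resolves web by its container name; the same lookup by name would fail for containers on the default bridge network.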
By default, containers are not limited in their resource usage. However, on shared systems, it's crucial to manage resources effectively. It's important not to let a running container consume too much of the host machine's memory.
This is where the docker run command shines again. It offers flags like --memory and --cpus to restrict how much CPU and memory a container can use.
$ docker run -d -e POSTGRES_PASSWORD=secret --memory="512m" --cpus=".5" postgres

The --cpus flag specifies the CPU quota for the container; here, it's set to half a CPU core (0.5). The --memory flag specifies the memory limit for the container; in this case, it's set to 512 MB.
Sometimes, you might need to override the default commands (CMD) or entry points (ENTRYPOINT) defined in a Docker image, especially when using Docker Compose.
- Create a `compose.yml` file with the following content:

```yaml
services:
  postgres:
    image: postgres:18
    entrypoint: ["docker-entrypoint.sh", "postgres"]
    command: ["-h", "localhost", "-p", "5432"]
    environment:
      POSTGRES_PASSWORD: secret
```

The Compose file defines a service named `postgres` that uses the official Postgres image, sets an entrypoint script, and starts the container with password authentication.
- Bring up the service by running the following command:

$ docker compose up -d

This command starts the Postgres service defined in the Docker Compose file.
-
Verify the authentication with Docker Desktop Dashboard.
Open the Docker Desktop Dashboard, select the Postgres container and select Exec to enter into the container shell. You can type the following command to connect to the Postgres database:
# psql -U postgres

[!NOTE]
The PostgreSQL image sets up trust authentication locally so you may notice a password isn't required when connecting from localhost (inside the same container). However, a password will be required if connecting from a different host/container.
You can also override defaults directly using the docker run command with the following command:
$ docker run -e POSTGRES_PASSWORD=secret postgres docker-entrypoint.sh -h localhost -p 5432

This command runs a Postgres container, sets an environment variable for password authentication, overrides the default startup command, and passes the hostname and port settings to the Postgres server.
Now that you have learned about overriding container defaults, it's time to learn how to persist container data.
When a container starts, it uses the files and configuration provided by the image. Each container is able to create, modify, and delete files and does so without affecting any other containers. When the container is deleted, these file changes are also deleted.
While this ephemeral nature of containers is great, it poses a challenge when you want to persist the data. For example, if you restart a database container, you might not want to start with an empty database. So, how do you persist files?
Volumes are a storage mechanism that provide the ability to persist data beyond the lifecycle of an individual container. Think of it like providing a shortcut or symlink from inside the container to outside the container.
As an example, imagine you create a volume named log-data.
$ docker volume create log-data

When starting a container with the following command, the volume will be mounted (or attached) into the container at /logs:

$ docker run -d -p 80:80 -v log-data:/logs docker/welcome-to-docker

If the volume log-data doesn't exist, Docker will automatically create it for you.
When the container runs, all files it writes into the /logs folder will be saved in this volume, outside of the container. If you delete the container and start a new container using the same volume, the files will still be there.
Sharing files using volumes
You can attach the same volume to multiple containers to share files between containers. This might be helpful in scenarios such as log aggregation, data pipelines, or other event-driven applications.
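As a small sketch of that idea (the container names and the write loop are purely illustrative), two containers could share the log-data volume like this:

```console
$ docker run -d --name log-writer -v log-data:/logs alpine sh -c 'while true; do date >> /logs/app.log; sleep 5; done'
$ docker run --rm --name log-reader -v log-data:/logs alpine tail /logs/app.log
```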
Volumes have their own lifecycle beyond that of containers and can grow quite large depending on the type of data and applications you're using. The following commands will be helpful to manage volumes:
- `docker volume ls` - list all volumes
- `docker volume rm <volume-name-or-id>` - remove a volume (only works when the volume is not attached to any containers)
- `docker volume prune` - remove all unused (unattached) volumes
In this guide, you'll practice creating and using volumes to persist data created by a Postgres container. When the database runs, it stores files into the /var/lib/postgresql directory. By attaching the volume here, you will be able to restart the container multiple times while keeping the data.
-
Download and install Docker Desktop.
-
Start a container using the Postgres image with the following command:
$ docker run --name=db -e POSTGRES_PASSWORD=secret -d -v postgres_data:/var/lib/postgresql postgres:18

This will start the database in the background, configure it with a password, and attach a volume to the directory in which PostgreSQL will persist the database files.
-
Connect to the database by using the following command:
$ docker exec -ti db psql -U postgres

- In the PostgreSQL command line, run the following to create a database table and insert two records:

```sql
CREATE TABLE tasks (
    id SERIAL PRIMARY KEY,
    description VARCHAR(100)
);
INSERT INTO tasks (description) VALUES ('Finish work'), ('Have fun');
```
- Verify the data is in the database by running the following in the PostgreSQL command line:

SELECT * FROM tasks;

You should get output that looks like the following:

```console
 id | description
----+-------------
  1 | Finish work
  2 | Have fun
(2 rows)
```
- Exit out of the PostgreSQL shell by running the following command:

\q
- Stop and remove the database container. Remember that, even though the container has been deleted, the data is persisted in the `postgres_data` volume.

$ docker stop db
$ docker rm db
- Start a new container by running the following command, attaching the same volume with the persisted data:

$ docker run --name=new-db -d -v postgres_data:/var/lib/postgresql postgres:18

You might have noticed that the `POSTGRES_PASSWORD` environment variable has been omitted. That's because that variable is only used when bootstrapping a new database.
- Verify the database still has the records by running the following command:

$ docker exec -ti new-db psql -U postgres -c "SELECT * FROM tasks"
The Docker Desktop Dashboard provides the ability to view the contents of any volume, as well as the ability to export, import, empty, delete and clone volumes.
-
Open the Docker Desktop Dashboard and navigate to the Volumes view. In this view, you should see the postgres_data volume.
-
Select the postgres_data volume's name.
-
The Stored Data tab shows the contents of the volume and provides the ability to navigate the files. The Container in-use tab displays the name of the container using the volume, the image name, the port number used by the container, and the target. A target is a path inside a container that gives access to the files in the volume. The Exports tab lets you export the volume. Double-clicking on a file will let you see the contents and make changes.
-
Right-click on any file to save it or delete it.
Before removing a volume, it must not be attached to any containers. If you haven't removed the previous container, do so with the following command (the -f will stop the container first and then remove it):
$ docker rm -f new-db

There are a few methods to remove volumes, including the following:
- Select the Delete Volume option on a volume in the Docker Desktop Dashboard.
- Use the `docker volume rm` command:

$ docker volume rm postgres_data

- Use the `docker volume prune` command to remove all unused volumes:

$ docker volume prune
The following resources will help you learn more about volumes:
Now that you have learned about persisting container data, it's time to learn about sharing local files with containers.
Each container has everything it needs to function with no reliance on any pre-installed dependencies on the host machine. Since containers run in isolation, they have minimal influence on the host and other containers. This isolation has a major benefit: containers minimize conflicts with the host system and other containers. However, this isolation also means containers can't directly access data on the host machine by default.
Consider a scenario where you have a web application container that requires access to configuration settings stored in a file on your host system. This file may contain sensitive data such as database credentials or API keys. Storing such sensitive information directly within the container image poses security risks, especially during image sharing. To address this challenge, Docker offers storage options that bridge the gap between container isolation and your host machine's data.
Docker offers two primary storage options for persisting data and sharing files between the host machine and containers: volumes and bind mounts.
If you want to ensure that data generated or modified inside the container persists even after the container stops running, you would opt for a volume. See Persisting container data to learn more about volumes and their use cases.
If you have specific files or directories on your host system that you want to directly share with your container, like configuration files or development code, then you would use a bind mount. It's like opening a direct portal between your host and container for sharing. Bind mounts are ideal for development environments where real-time file access and sharing between the host and container are crucial.
Both -v (or --volume) and --mount flags used with the docker run command let you share files or directories between your local machine (host) and a Docker container. However, there are some key differences in their behavior and usage.
The -v flag is simpler and more convenient for basic volume or bind mount operations. If the host location doesn't exist when using -v or --volume, a directory will be automatically created.
Imagine you're a developer working on a project. You have a source directory on your development machine where your code resides. When you compile or build your code, the generated artifacts (compiled code, executables, images, etc.) are saved in a separate subdirectory within your source directory. In the following examples, this subdirectory is /HOST/PATH. Now you want these build artifacts to be accessible within a Docker container running your application. Additionally, you want the container to automatically access the latest build artifacts whenever you rebuild your code.
Here's a way to use docker run to start a container using a bind mount and map it to the container file location.
$ docker run -v /HOST/PATH:/CONTAINER/PATH -it nginx

The --mount flag offers more advanced features and granular control, making it suitable for complex mount scenarios or production deployments. If you use --mount to bind-mount a file or directory that doesn't yet exist on the Docker host, the docker run command doesn't automatically create it for you but generates an error.

$ docker run --mount type=bind,source=/HOST/PATH,target=/CONTAINER/PATH,readonly nginx

Note
Docker recommends using the --mount syntax instead of -v. It provides better control over the mounting process and avoids potential issues with missing directories.
When using bind mounts, it's crucial to ensure that Docker has the necessary permissions to access the host directory. To control the level of access, you can use the :ro flag (read-only) or the :rw flag (read-write) with the -v or --mount flag during container creation.
For example, the following command grants read-write access permission.
$ docker run -v HOST-DIRECTORY:/CONTAINER-DIRECTORY:rw nginx

Read-only bind mounts let the container read the mounted files on the host, but it can't change or delete them. With read-write bind mounts, containers can modify or delete mounted files, and these changes or deletions will also be reflected on the host system. Read-only bind mounts ensure that files on the host can't be accidentally modified or deleted by a container.
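For comparison, the read-only variant of the same command just swaps the suffix (the paths remain placeholders):

$ docker run -v HOST-DIRECTORY:/CONTAINER-DIRECTORY:ro nginx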
Synchronized File Share
As your codebase grows larger, traditional methods of file sharing like bind mounts may become inefficient or slow, especially in development environments where frequent access to files is necessary. Synchronized file shares improve bind mount performance by leveraging synchronized filesystem caches. This optimization ensures that file access between the host and virtual machine (VM) is fast and efficient.
In this hands-on guide, you'll practice how to create and use a bind mount to share files between a host and a container.
-
Download and install Docker Desktop.
-
Start a container using the httpd image with the following command:
$ docker run -d -p 8080:80 --name my_site httpd:2.4

This will start the `httpd` service in the background and publish the webpage to port `8080` on the host.
- Open the browser and access http://localhost:8080, or use the curl command to verify that it's working:

$ curl localhost:8080
Using a bind mount, you can map the configuration file on your host computer to a specific location within the container. In this example, you'll see how to change the look and feel of the webpage by using a bind mount:
-
Delete the existing container by using the Docker Desktop Dashboard:
- Create a new directory called `public_html` on your host system.

$ mkdir public_html

- Navigate into the newly created directory `public_html` and create a file called `index.html` with the following content. This is a basic HTML document that creates a simple webpage that welcomes you with a friendly whale.

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title> My Website with a Whale & Docker!</title>
</head>
<body>
  <h1>Whalecome!!</h1>
  <p>Look! There's a friendly whale greeting you!</p>
  <pre id="docker-art">
          ##         .
    ## ## ##        ==
 ## ## ## ## ##    ===
/"""""""""""""""""\___/ ===
{                       /  ===-
\______ O           __/
  \    \         __/
   \____\_______/

   Hello from Docker!
  </pre>
</body>
</html>
```
- It's time to run the container. The `--mount` and `-v` examples produce the same result. You can't run them both unless you remove the `my_site` container after running the first one.

-v:

$ docker run -d --name my_site -p 8080:80 -v .:/usr/local/apache2/htdocs/ httpd:2.4

--mount:

$ docker run -d --name my_site -p 8080:80 --mount type=bind,source=./,target=/usr/local/apache2/htdocs/ httpd:2.4

[!TIP]
When using the `-v` or `--mount` flag in Windows PowerShell, you need to provide the absolute path to your directory instead of just `./`. This is because PowerShell handles relative paths differently from bash (commonly used in Mac and Linux environments).

With everything now up and running, you should be able to access the site via http://localhost:8080 and find a new webpage that welcomes you with a friendly whale.
-
You can view the mounted files inside a container by selecting the container's Files tab and then selecting a file inside the `/usr/local/apache2/htdocs/` directory. Then, select Open file editor.
- Delete the file on the host and verify the file is also deleted in the container. You will find that the file no longer exists under Files in the Docker Desktop Dashboard.
-
Recreate the HTML file on the host system and see that the file reappears under the Files tab under Containers on the Docker Desktop Dashboard. You will be able to access the site again, too.
The container continues to run until you stop it.
-
Go to the Containers view in the Docker Desktop Dashboard.
-
Locate the container you'd like to stop.
-
Select the Stop action in the Actions column.
The following resources will help you learn more about bind mounts:
- Manage data in Docker
- Volumes
- Bind mounts
- Running containers
- Troubleshoot storage errors
- Persisting container data
Now that you have learned about sharing local files with containers, it's time to learn about multi-container applications.
Starting up a single-container application is easy. For example, a Python script that performs a specific data processing task runs within a container with all its dependencies. Similarly, a Node.js application serving a static website with a small API endpoint can be effectively containerized with all its necessary libraries and dependencies. However, as applications grow in size, managing them as individual containers becomes more difficult.
Imagine the data processing Python script needs to connect to a database. Suddenly, you're now managing not just the script but also a database server within the same container. If the script requires user logins, you'll need an authentication mechanism, further bloating the container size.
One best practice for containers is that each container should do one thing and do it well. While there are exceptions to this rule, avoid the tendency to have one container do multiple things.
Now you might ask, "Do I need to run these containers separately? If I run them separately, how shall I connect them all together?"
While docker run is a convenient tool for launching containers, it becomes difficult to manage a growing application stack with it. Here's why:
- Imagine running several `docker run` commands (frontend, backend, and database) with different configurations for development, testing, and production environments. It's error-prone and time-consuming.
- Applications often rely on each other. Manually starting containers in a specific order and managing network connections becomes difficult as the stack expands.
- Each application needs its own `docker run` command, making it difficult to scale individual services. Scaling the entire application means potentially wasting resources on components that don't need a boost.
- Persisting data for each application requires separate volume mounts or configurations within each `docker run` command. This creates a scattered data management approach.
- Setting environment variables for each application through separate `docker run` commands is tedious and error-prone.
That's where Docker Compose comes to the rescue.
Docker Compose defines your entire multi-container application in a single YAML file called compose.yml. This file specifies configurations for all your containers, their dependencies, environment variables, and even volumes and networks. With Docker Compose:
- You don't need to run multiple `docker run` commands. All you need to do is define your entire multi-container application in a single YAML file. This centralizes configuration and simplifies management.
- You can run containers in a specific order and manage network connections easily.
- You can simply scale individual services up or down within the multi-container setup. This allows for efficient allocation based on real-time needs.
- You can implement persistent volumes with ease.
- It's easy to set environment variables once in your Docker Compose file.
By leveraging Docker Compose for running multi-container setups, you can build complex applications with modularity, scalability, and consistency at their core.
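To make that concrete, here's a minimal, hypothetical sketch of a compose.yml for a three-tier app like the one described earlier in this guide. The image names, ports, and connection string are placeholders, not part of the sample project used below:

```yaml
services:
  frontend:
    image: example/react-frontend   # placeholder image
    ports:
      - 3000:3000
    depends_on:
      - api
  api:
    image: example/python-api       # placeholder image
    environment:
      DATABASE_URL: postgres://postgres:secret@db:5432/postgres
    depends_on:
      - db
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```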
In this hands-on guide, you'll first see how to build and run a counter web application based on Node.js, an Nginx reverse proxy, and a Redis database using the docker run commands. You'll also see how you can simplify the entire deployment process using Docker Compose.
-
Get the sample application. If you have Git, you can clone the repository for the sample application. Otherwise, you can download the sample application. Choose one of the following options.
Clone with git
Use the following command in a terminal to clone the sample application repository.
$ git clone https://github.com/dockersamples/nginx-node-redis

Navigate into the `nginx-node-redis` directory:

$ cd nginx-node-redis

Inside this directory, you'll find two sub-directories - `nginx` and `web`.

Download

Download the source and extract it.

Navigate into the `nginx-node-redis-main` directory:

$ cd nginx-node-redis-main

Inside this directory, you'll find two sub-directories - `nginx` and `web`.
- Download and install Docker Desktop.
- Navigate into the `nginx` directory to build the image by running the following command:

$ docker build -t nginx .

- Navigate into the `web` directory and run the following command to build the first web image:

$ docker build -t web .
- Before you can run a multi-container application, you need to create a network for all the containers to communicate through. You can do so using the `docker network create` command:

$ docker network create sample-app

- Start the Redis container by running the following command, which will attach it to the previously created network and create a network alias (useful for DNS lookups):

$ docker run -d --name redis --network sample-app --network-alias redis redis

- Start the first web container by running the following command:

$ docker run -d --name web1 -h web1 --network sample-app --network-alias web1 web

- Start the second web container by running the following:

$ docker run -d --name web2 -h web2 --network sample-app --network-alias web2 web

- Start the Nginx container by running the following command:

$ docker run -d --name nginx --network sample-app -p 80:80 nginx

[!NOTE]
Nginx is typically used as a reverse proxy for web applications, routing traffic to backend servers. In this case, it routes to the Node.js backend containers (web1 or web2).
- Verify the containers are up by running the following command:

$ docker ps

You will see output like the following:

```console
CONTAINER ID   IMAGE     COMMAND                  CREATED              STATUS              PORTS                NAMES
2cf7c484c144   nginx     "/docker-entrypoint.…"   9 seconds ago        Up 8 seconds        0.0.0.0:80->80/tcp   nginx
7a070c9ffeaa   web       "docker-entrypoint.s…"   19 seconds ago       Up 18 seconds                            web2
6dc6d4e60aaf   web       "docker-entrypoint.s…"   34 seconds ago       Up 33 seconds                            web1
008e0ecf4f36   redis     "docker-entrypoint.s…"   About a minute ago   Up About a minute   6379/tcp             redis
```
If you look at the Docker Desktop Dashboard, you can see the containers and dive deeper into their configuration.
- With everything up and running, you can open http://localhost in your browser to see the site. Refresh the page several times to see the host that's handling the request and the total number of requests:

```console
web2: Number of visits is: 9
web1: Number of visits is: 10
web2: Number of visits is: 11
web1: Number of visits is: 12
```
[!NOTE]
You might have noticed that Nginx, acting as a reverse proxy, likely distributes incoming requests in a round-robin fashion between the two backend containers. This means each request might be directed to a different container (web1 or web2) on a rotating basis. The output shows consecutive increments for both the web1 and web2 containers, and the actual counter value stored in Redis is updated only after the response is sent back to the client.
-
You can use the Docker Desktop Dashboard to remove the containers by selecting the containers and selecting the Delete button.
Docker Compose provides a structured and streamlined approach for managing multi-container deployments. As stated earlier, with Docker Compose, you don't need to run multiple docker run commands. All you need to do is define your entire multi-container application in a single YAML file called compose.yml. Let's see how it works.
Navigate to the root of the project directory. Inside this directory, you'll find a file named compose.yml. This YAML file is where all the magic happens. It defines all the services that make up your application, along with their configurations. Each service specifies its image, ports, volumes, networks, and any other settings necessary for its functionality.
- Use the `docker compose up` command to start the application:

$ docker compose up -d --build

When you run this command, you should see output similar to the following:

```console
✔ Network nginx-node-redis_default     Created   0.0s
✔ Container nginx-node-redis-web2-1    Created   0.1s
✔ Container nginx-node-redis-web1-1    Created   0.1s
✔ Container nginx-node-redis-redis-1   Created   0.1s
✔ Container nginx-node-redis-nginx-1   Created
```
-
If you look at the Docker Desktop Dashboard, you can see the containers and dive deeper into their configuration.
-
Alternatively, you can use the Docker Desktop Dashboard to remove the containers by selecting the application stack and selecting the Delete button.
In this guide, you learned how easy it is to use Docker Compose to start and stop a multi-container application, compared to managing each container with docker run, which is error-prone and difficult to maintain.