@PatrickKennedy
Last active July 18, 2017 21:06
A quick reference to Docker

The Basics

There are a ton of great resources for Docker, so this document aims to serve as a collection of the best of them and a practical guide to using Docker.

For any OS-specific resources, this introduction will focus on Docker for Windows, as it's the recommended way of using Docker on Windows, but much of the information here and across the Docker docs applies to all platforms.

What is Docker?

Docker is a standard and a set of tools that facilitate a Build Once, Run Anywhere methodology of interacting with applications.

Docker has a phenomenal Getting Started guide, which consists of both an overview of Docker and a tutorial on using it.

Installation

Windows

Docker has two solutions for using Docker on Windows, both of which work essentially the same across Windows 7 and later:

Docker for Windows provides a Hyper-V backed solution on Windows. This is the recommended solution, but enabling Hyper-V disables VirtualBox.

Docker Toolbox uses VirtualBox for its virtualization. This version isn't as nicely integrated with Windows, and interacting with the Docker VM requires some additional command-line setup.

The installation process is intended to be a full-service operation: it will install and enable the tools and services it needs to run. Both install the Docker CLI tools and Kitematic, a GUI for managing containers.

OSX

Docker for Mac provides a very similar experience to Docker for Windows. Once installed, it includes Kitematic and the docker CLI tools, the same as on Windows.

Troubleshooting

By and large Docker does a reasonable job of self-correcting errors, but when it's unable to, Docker provides great resources for fixing common problems.

Usage

As mentioned earlier, Docker has a great official Getting Started introduction. This section will focus on the concepts that aren't necessarily well explained elsewhere, in more of a cookbook style.

Basics

Conventions

The Docker CLI follows some typical conventions:

  • colon-separated values are in host:container order, e.g. -p 8080:80 maps port 8080 on the host to port 80 in the container
  • positional arguments can be placed anywhere in a command, e.g. docker build . -t name:tag and docker build -t name:tag . are equivalent commands
  • much like git, unnamed containers and images can be referenced by a truncated id, e.g. 4d2eab1c0b9a13c83abd72b38e5d4b4315de3c9967165f78a7b817ca99bf191e can be referenced as 4d2eab1 (or shorter, as long as there are no collisions)
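The first two conventions can be seen together in a pair of commands (myapp is a hypothetical image name, and a running Docker daemon is assumed):

```shell
# build the current directory (.) into an image; -t and . may appear in either order
docker build -t myapp:latest .

# -p 8080:80 maps host port 8080 to container port 80 (host:container order)
docker run --rm -p 8080:80 myapp:latest
```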

Run vs Attach vs Exec

Depending on the image, it may have an ENTRYPOINT to run a long-running process, a CMD to run a short-lived process that may take parameters, or neither, leaving the process to be decided by docker-compose or the user at runtime.

Most application images are designed to be run and then left alone; in those cases docker run image is probably enough. However, there are times when you want to run an interactive process with the image (e.g. a Mongo console). In those cases you'll need to add -it (-i keeps STDIN open for interactivity, -t allocates a TTY) and generally --rm to clean up the container once you exit the process.

For example, you can run bash in a centos container with:

docker run --rm -it centos bash

If you want to see the stdout of a container that's running:

docker attach <container-name or container-id>

It's also possible to create new processes in already-running containers:

docker exec -it <container-name> bash
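Putting these together, a typical workflow (using a hypothetical container named sandbox, and assuming a running Docker daemon) might look like:

```shell
# start a long-running container in the background (-d) with a memorable name
docker run -d --name sandbox centos sleep infinity

# open an interactive shell inside the running container
docker exec -it sandbox bash

# stop and remove the container when finished
docker rm -f sandbox
```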

Running a general-purpose development container

There are a few ways of running a general-purpose container.

Note: I include --rm to clean up the container to emphasize the idea that containers are transient. You could keep a container around like a typical VM by removing --rm and using docker start <container-name>, but if you ever need to remove your default Docker machine, that container and its state will be lost. Instead, think of containers as lightweight tools you create for singular purposes. If there's something you need to set up each time, it's better to do it once and create an image.

If you're just looking to test something in an OS it can be as simple as running a base image:

docker run --rm -it centos bash

Customized Images

On the other hand, if there's a non-vanilla package you want installed each time, you can create a custom image based on that base image:

FROM centos:7

# install python3 (combined into one RUN to keep the layer count down)
RUN yum -y update && \
    yum -y install yum-utils && \
    yum -y groupinstall development && \
    yum -y install https://centos7.iuscommunity.org/ius-release.rpm && \
    yum -y install python36u python36u-pip python36u-devel

# create a virtualenv to run the package in
RUN mkdir -p /usr/src/venv
WORKDIR /usr/src/venv
RUN python3.6 -m venv default

And then, in the directory with the Dockerfile, you can build it into a reusable image:

docker build . -t centos:py3

At which point you can use it in other Dockerfiles or run it directly:

docker run --rm -it centos:py3 python3.6

Note: It's also possible to create a new image from the current state of a container with docker commit <container-name> <new-image-name>, then export it for sharing with docker save <new-image-name> > /tmp/<new-image-name>.tar (and import it elsewhere with docker load).

Containers with volumes

Finally, if you want to interact with your local file system or persistent storage, use volumes to make them available to the container:

docker run --rm -it \
--volumes-from persistent-data \
-v c:\src\app:/opt/app \
centos bash

Let's talk about volumes

Briefly, volumes are a way of mounting drives into a container. Docker has an excellent write-up about Volumes that explains how to create and use them.

There are two main types of volumes, aside from host volumes, which refer to a location on the host OS itself:

  1. Data Volumes
  2. Data Volume Containers

Data volumes are persistent across containers of a given image, while data volume containers can be mounted in containers built with any image.
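As a sketch (the volume name app-data is hypothetical, and a running Docker daemon is assumed), a named data volume can be created and shared across containers like this:

```shell
# create a named data volume managed by Docker
docker volume create app-data

# any container can mount it; files written there outlive the container
docker run --rm -v app-data:/data centos touch /data/hello

# a second container sees the same data through the same volume
docker run --rm -v app-data:/data centos ls /data
```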


Networking is important

Hint: it's not that important. From a local development perspective, networking rarely extends beyond the simplest networks. Generally docker-compose will handle them for you, and in the cases where you do need one, Docker will handle the DNS for you.

Like volumes, Docker has a great Networking reference.

The important takeaway is that named containers on the same network can be referenced by their names.
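For example (the network name devnet and the containers here are hypothetical, and a running Docker daemon is assumed):

```shell
# create a user-defined network; Docker provides DNS on these automatically
docker network create devnet

# start a named container on the network
docker run -d --name db --network devnet mongo

# other containers on devnet can reach it simply as "db"
docker run --rm --network devnet centos ping -c 1 db
```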


Docker Compose yourself

docker-compose is a tool that simplifies the management of complex container clusters. The docker-compose.yml file describes what the cluster should look like. Most (maybe all at this point) CLI arguments can be configured within the file.

The official docker-compose documentation is also quite good.

Additionally, docker-compose can be used to interact with its cluster similarly to the docker CLI, but using the service names defined in docker-compose.yml.
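As an illustration, a minimal docker-compose.yml describing a web service (the images, ports, and volume names here are hypothetical) might look like:

```yaml
version: "3"
services:
  web:
    build: .                # build the image from the Dockerfile in this directory
    ports:
      - "8080:80"           # host:container, same convention as docker run -p
    depends_on:
      - db
  db:
    image: mongo
    volumes:
      - db-data:/data/db    # named volume for persistent storage
volumes:
  db-data:
```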

For example, if there's a service named web in the configuration, then after running docker-compose up --build you can run:

docker-compose exec web bash

docker-compose can also run individual containers, making running tests very convenient:

docker-compose build; if ($?) { docker-compose run --rm web py.test -v }

(That's PowerShell syntax; the bash equivalent is docker-compose build && docker-compose run --rm web py.test -v.)