Docker Bootcamp

I have wanted to check out Docker for quite a while now. Eventually I found the time to do so and started looking for learning materials online. Since Docker has been around for three years now, there are tons of materials out there. Therefore I’m not going to write an in-depth tutorial about all the various commands, but rather give a quick overview of the technology.

What is Docker?

Docker is an open-source project that automates the deployment of Linux applications inside software containers.

This is how Wikipedia defines it. Docker itself describes it similarly, in the following words:

Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.

In short, Docker is a container technology that executes your software in a sandboxed environment.

Each of these images can be shared via a public or private registry (similar to Maven artifacts). Docker runs its own public registry, the Docker Hub, where you can download prebuilt images and share your own images.
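For example, downloading a prebuilt image and sharing one of your own looks like this (the image and user names are placeholders):

    # Download the prebuilt Ubuntu image from the Docker Hub
    docker pull ubuntu

    # Log in and upload your own image (it must be tagged <username>/<imagename>)
    docker login
    docker push myusername/myimage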

What is Docker not?

Docker is not a virtual machine technology. In contrast to virtual machines, Docker containers don’t boot their own operating system but depend on the kernel of their host OS. When a Docker image is pulled on a different machine than the one it was built on, it uses the host OS kernel of that new machine.

Getting started

First, you have to install Docker on your laptop. For instructions, please check out the official docs on that topic.

To run your first container, just type docker run ubuntu. This will pull down the Ubuntu image and run it on your laptop. Since we didn’t pass it any command, the container exits immediately. If you want to do something more spectacular, you can run docker run -it ubuntu sh. This will start an interactive shell in your Ubuntu container that you can play with. When you want to leave the container, just type the exit command.
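Put together, such a first session could look like this:

    # Pull the Ubuntu image and run it; without a command the container exits immediately
    docker run ubuntu

    # Start an interactive shell inside an Ubuntu container
    docker run -it ubuntu sh

    # ...play around inside the container, then leave it again
    exit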

After you have started your first container, you can take a look at all the images Docker has already pulled down. To do this, run docker images and you should get an output that looks something like this:
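The exact IDs, tags, and sizes are of course just illustrative and will differ on your machine:

    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    ubuntu              latest              42118e3df429        2 weeks ago         124.8 MB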

Of course, you can also run a Docker container as a daemon. For example, you could start an Elasticsearch container that runs in the background like this: docker run -d elasticsearch. The -d option tells Docker to daemonize the container.

Elasticsearch runs on port 9200 by default, but since it is running inside a container you cannot reach it from outside. To fix this, we need to map the port used inside the container to a port that can be reached from outside the container. If you want to publish all exposed ports to random host ports, you can add the -P option. If you want to publish only specific ports (e.g. Elasticsearch’s default HTTP port), you can add the -p 9200:9200 parameter instead. The full command is then docker run -d -p 9200:9200 elasticsearch.

Now you can reach Elasticsearch from the browser at http://localhost:9200 (it might take some time until Elasticsearch has booted up).
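To verify that the port mapping works, you can also query Elasticsearch from the command line (the exact response, such as the version details, will vary):

    # Ask Elasticsearch for its basic info via the published port
    curl http://localhost:9200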

In the meantime you can check Docker for its running containers using docker ps. This will display a list of running containers similar to this one:
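Again, the concrete values here are only illustrative:

    CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
    865d8295fe70        elasticsearch       "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:9200->9200/tcp   serene_bartik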

When you want to stop a Docker container again, you do this by running docker stop 865d8295fe70. As you might have noticed, the parameter passed to the stop command is the container ID displayed in the docker ps listing above. If you want to stop all running Docker containers at once, you can do this using docker stop $(docker ps -q).

The last command I quickly want to discuss here is docker build. With docker build you can create your own Docker images. In the Dockerfile you define all the necessary steps to build your image. The following file shows a very minimalistic (and not so useful) example of such a Dockerfile.
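For instance, something along these lines (the base image and the command are just placeholders):

    # A very minimal Dockerfile: start from Ubuntu and print a greeting
    FROM ubuntu
    CMD ["echo", "Hello from my first Docker image"]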

With docker build -t <username>/<imagename> . (replace <username> with your username and <imagename> with the name of the image) you can build the Docker image.
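Assuming the hypothetical names from above, building and running your own image then looks like this:

    # Build the image from the Dockerfile in the current directory
    docker build -t myusername/myimage .

    # Run a container based on the freshly built image
    docker run myusername/myimage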

Advanced Docker tools

When you switch to Docker for managing your application, you will have to create a few Docker images: at least one for your application and one for your datastore. You can run each Docker image individually, or you can use Docker Compose, which is a Docker tool for multi-container applications.

You can create a Compose file to configure how your application starts. Each container still has a Dockerfile, which configures the Docker image itself, but additionally you create a docker-compose.yml configuration file where you define how the individual Docker images are started. Lastly, you can run docker-compose up to tell Docker Compose to run your entire app.
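As a sketch, a docker-compose.yml for a web application backed by Elasticsearch could look something like this (the service names and the web service’s setup are placeholders):

    # docker-compose.yml - illustrative example
    version: '2'
    services:
      web:
        build: .                # built from the Dockerfile in this directory
        ports:
          - "8080:8080"
        depends_on:
          - elasticsearch
      elasticsearch:
        image: elasticsearch
        ports:
          - "9200:9200"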

Another useful tool is Docker Machine. With Docker Machine you can manage remote machines easily. The following video gives you a great overview of how it works and what you can do with it.
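As a quick taste, creating a new machine and pointing your Docker client at it looks roughly like this (the VirtualBox driver and the machine name are just one possible choice):

    # Create a new machine named "dev" using the VirtualBox driver
    docker-machine create --driver virtualbox dev

    # List all machines managed by Docker Machine
    docker-machine ls

    # Configure the local Docker client to talk to the new machine
    eval $(docker-machine env dev)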

The last Docker tool I want to present here is Docker Swarm. It is the native clustering tool for Docker and works tightly together with Docker Machine. With the swarm image you can launch Docker Swarm on the machines in your cluster. There you have a master node, called the Swarm manager, and client nodes, the so-called Swarm nodes.

The Swarm manager manages the cluster and is responsible for starting the correct number of Docker containers on the right nodes in your cluster. Similar to Docker Machine, Docker Swarm serves the Docker API, which makes it incredibly easy to run highly distributed Docker applications on your cluster with just a few commands.
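To give an impression, setting up a small cluster with the swarm image and Docker Machine looks roughly like this (the driver, the machine names, and the <TOKEN> placeholder are illustrative):

    # Generate a discovery token for the cluster using the swarm image
    docker run --rm swarm create

    # Create the Swarm manager (insert the token from the previous step)
    docker-machine create -d virtualbox --swarm --swarm-master \
        --swarm-discovery token://<TOKEN> swarm-manager

    # Create a Swarm node that joins the same cluster
    docker-machine create -d virtualbox --swarm \
        --swarm-discovery token://<TOKEN> swarm-node-01

    # Point the Docker client at the whole cluster
    eval $(docker-machine env --swarm swarm-manager)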

Next steps

This was a very quick overview of my selection of the most important commands and features of Docker. For a deeper look at all the details, you can check out the following resources:

Of course, you should also use Docker in your next or current project. Only that way will you truly learn how Docker works.