Docker for Development: Service Containers vs Executable Containers

Chris Morrow · Level Up Coding · Jul 10, 2020

Photo by Author via Flickr

Quite often, software projects rely on third-party services and software. Examples include a relational database, message broker, or email service. When starting a new software project or joining the development team for an existing project, installing all of the required dependencies can be a daunting and time-consuming task. Instead of poring over numerous onboarding instructions and meticulously installing all the required software dependencies, you can use Docker to ensure you’re using the correct version and configuration of the required software. This is the beauty of working with Docker containers!

What is Docker?

Docker is a software platform for launching and managing containers. A Docker container is a self-contained environment for running processes and commands. It is isolated from the rest of the host machine’s operating system as well as from other Docker containers. Each container has its own network and file system, and it starts up fast because it doesn’t carry everything you would find in a full operating system (such as a graphical user interface). Containers are instantiated from images, where an image is all the “stuff” you want Docker to run. Images start with a base image, which might be a particular version of the PostgreSQL database, the Ubuntu Linux distribution, or any other software that can run on Linux. You can even make your own Docker images by layering custom configurations on top of other images!

Docker for Development

Docker is a great tool that can make development easier by making a project’s software dependencies reliable, repeatable, and easy to install and run. However, all of Docker’s flexibility comes with a steep learning curve and a body of best practices to absorb. In some cases, you might think it’s easier to just install the software directly on your machine. Installing software natively on your machine vs. using Docker containers is a bit like buying a book vs. checking it out from the library. As you buy more and more books, you have to make space for all of them in your home. If you mostly check books out from the library, you return them when you’re done and don’t need additional bookshelf space.

Another problem developers new to Docker often encounter is knowing what software is worth running as a Docker container. It might be tempting to run multiple applications and frameworks all in one Docker container or to develop entirely within a Docker container. This is a sure-fire way to build containers that are massive in size or make your development environment much more complex than it needs to be. Simplicity is key, and by sticking to only running individual services or executables in Docker containers, you can save time and work more efficiently.

Service Containers

Typically, services are good candidates to run as Docker containers. A service is any third-party software running as a separate, independent, long-running background process that your project interacts with. Examples include databases, web servers, and full-stack applications like JIRA or Jenkins. Running a service locally as a Docker container is easy because you install exactly the version of the service your project requires, and you don’t have to install any of its dependencies. (They’re already included in the container!)

As an example, let’s look at how to run the RabbitMQ message broker as a Docker container. First, make sure you have Docker installed; official installation instructions are available for every major operating system. Next, copy and execute the code below in a terminal:
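The exact command from the original post isn’t preserved here, but a command along these lines matches the description that follows (a reconstruction, so treat the flag ordering as an assumption):

```shell
# Run the official RabbitMQ image (management tag) in detached mode,
# name the container dev-rabbit, set its hostname, and map the
# management UI port 15672 to the same port on the host.
docker run -d --name dev-rabbit --hostname dev-rabbit \
    -p 15672:15672 rabbitmq:management
```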

What does this command do? It runs a new Docker container named dev-rabbit in detached mode, based on the official RabbitMQ Docker image tagged management. It also sets the hostname of the container to dev-rabbit and maps the container’s port 15672 to our host machine’s port 15672. What does all this mean? The official RabbitMQ Docker image will be downloaded from Docker Hub if it doesn’t already exist on the host machine; specifically, the management-tagged version will be downloaded and used. More on this in a minute. By setting the port mapping and hostname, RabbitMQ in the container will act just as if it were running on a native Linux host. The management-tagged version includes the Management Plugin, so we can go to http://localhost:15672 with user/pass guest/guest to access the RabbitMQ Management Plugin’s browser-based UI.

The container will continue to run until the host machine is shut down or we manually tell Docker to stop running the container. We can stop the container at any time with the command docker stop dev-rabbit and manually start it back up again with docker start dev-rabbit. Because we initially ran the Docker container in detached mode, the dev-rabbit container always runs as a background process, and its output to stdout can be viewed with docker logs dev-rabbit.

Let’s look at another example of a service container. Let’s run the Neo4j Graph Database as a Docker container. Run the terminal command below to configure and start the container:
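A command along these lines matches the description that follows (again a reconstruction, so the exact flags are an assumption):

```shell
# Run the official Neo4j image in detached mode, mapping the browser
# (7474) and Bolt (7687) ports to the host, and persisting the
# database files to $HOME/neo4j/data on the host machine.
docker run -d --name dev-neo4j \
    -p 7474:7474 -p 7687:7687 \
    --volume=$HOME/neo4j/data:/data \
    neo4j:latest
```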

This command runs a new Docker container named dev-neo4j in detached mode based on the official neo4j Docker image tagged latest. This command also maps ports 7474 and 7687 on the host machine to 7474 and 7687 in the container. The primary difference in this example compared to the RabbitMQ example is the --volume option. This option maps the $HOME/neo4j/data directory (on a Unix-like operating system) on the host machine to the /data directory in the Docker container. The /data directory happens to be where Neo4j will store the data for its database. By using the --volume option this way, we’ve added data persistence, meaning even if the container were deleted, the data would remain on the host machine’s disk. You can test this by adding a few nodes to a new Neo4j database (visit http://localhost:7474 to use Neo4j), then run the following command to delete the container:

docker rm dev-neo4j

Running the below command will show that the data is still there!

ls $HOME/neo4j/data

This means the same data can be used for different versions of Neo4j, or even different Docker containers.

As you can see, running a service container locally on your machine is straightforward and easy to manage. Even upgrading to a newer version of the service is easy by changing the tagged version, or running the Docker container with the latest tag. Since there are numerous tagged versions of official Docker images, it shouldn’t be a problem to find the version of the service that matches your production deployment. Also, if you finish working on the project and don’t need the service installed locally anymore, it’s as simple as deleting the Docker container with the following command to remove the service from your machine:

docker rm [container name]

Executable Containers

Services are the most common use case for Docker containers. However, executable command line applications can run as Docker containers too! An executable is a command line application that launches, performs an action, and quits when it completes or encounters an error. They are short-lived and singularly focused. Some examples of executables are command line tools like grep, ls, and cp. Other examples are command line applications you might have installed like curl, ImageMagick, or git. It turns out, with a little bit more work, you can run executables as Docker containers too!

Why would you want to run a command line application as a container though? It’s easy enough to install applications from a package manager or an installer. As it turns out, this isn’t always so simple. What if you need to install different versions of the same application? What if an installer isn’t available for your operating system, or the application isn’t available from a package manager? What if your only choice is to install from source? To alleviate the pain caused by these situations, you can create an ephemeral Docker container that runs your application and destroys the container when it’s done.

Let’s look at an example. Say for instance you’re working on a Python project that also has a portion written in Go. You could install Go on your machine and remove it when you’re done working on the project, or you can install the go command as a Docker container. This will allow you to compile Go code for the target system regardless of what operating system you are developing on, all without installing Go!

First, let’s use the example Hello World program below as the Go code we want to compile. Copy the code below to a file named hello.go:
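A standard Hello World program works fine here; something like the following:

```go
// hello.go -- a minimal program to exercise the containerized Go toolchain.
package main

import "fmt"

func main() {
	fmt.Println("Hello World!")
}
```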

Next, we need to create our Docker container run command as a shell script that we can call from the command line. Copy the following code to a file named go:

This shell script runs a new Docker container by using the official latest golang image from Docker Hub. Importantly, the --rm option tells Docker to “remove”, i.e. delete the container when it’s done executing. We also need a way of getting our Go code into the container. This happens with the --volume option by mapping the environment variable $PWD, the directory you’re running the shell script from, to /usr/src/app in the container. The -w option sets the working directory in the container to /usr/src/app, the directory where the Go code will be compiled. The -e options set environment variables in the container, in this case GOOS and GOARCH to the value of $GOOS and $GOARCH in our shell. The Go language has the really useful ability to cross-compile Go code for a specific operating system and architecture regardless of the operating system and architecture you are compiling on. Both of these environment variables allow us to set which operating system and architecture we want to compile our code for. The last line of our shell script tells Docker to run the go command in the container along with any arguments that are passed to the shell script. Notice how there is no --name option this time. Without this option, Docker will assign a random, anonymous name for the container.

Next, save the go file to /usr/local/bin or another directory that is in your $PATH. Make sure the file is executable by changing the permissions with:

chmod 755 /usr/local/bin/go

Be sure that you don’t already have Go installed on your system. If you do, check which go executable will actually be called with:

which go

As long as the result is /usr/local/bin/go, you’re all set. Otherwise, you will need to move your go shell script to a directory that appears earlier in $PATH, modify your $PATH order, or rename the go shell script.

Be sure to set your $GOOS and $GOARCH environment variables for the machine you are currently working from. This is important because incorrect values will produce a binary that your machine can’t execute. In my case, I’m using macOS, so I would set my environment variables with:

export GOOS=darwin
export GOARCH=amd64

Navigate to the directory where you saved hello.go. Now, run the below command on a terminal to compile hello.go:

go build hello.go

After a second or two, the compilation will complete and you will have a hello executable in your current directory. Run the compiled program with ./hello.

Congratulations, you compiled Go code without installing Go!

Let’s look at one last example. We can run the open-source Tesseract OCR (Optical Character Recognition) command line application with an executable Docker container. In this case, there isn’t an official image on Docker Hub, but we can make a Docker image for Tesseract by creating a Dockerfile. A Dockerfile has all the instructions for building a Docker image. Copy the code below into a new file named Dockerfile:
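A Dockerfile matching the line-by-line walkthrough in the next paragraph would look roughly like this (the exact Ubuntu package name, tesseract-ocr, is an assumption):

```dockerfile
FROM ubuntu

RUN apt-get update && apt-get install -y tesseract-ocr
WORKDIR /tesseract
ENTRYPOINT ["tesseract"]
```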

Line 1 indicates that the ubuntu Docker image will be used as the base for our new Docker image. Line 3 installs the Tesseract packages into the image. Line 4 sets the working directory to /tesseract. Finally, the tesseract command is set as the command to execute when a container starts. Note that the install and working-directory instructions run once, when the image is built, while the tesseract command runs every time a container is created from the image. Before we can run our container, we first need to build the Docker image.

We can build our Docker image using the command below in the same directory as the Dockerfile:

docker build -t tesseract_ocr:latest .

The -t option specifies the name of the image as tesseract_ocr with the tag latest.

Now that we have a Tesseract image, we can write a shell script that creates and executes a Docker container based on the tesseract_ocr image. Copy the code below into a new file named tesseract:

This shell script runs a new Docker container using our tesseract_ocr image. Just as with the Go script, the --rm option tells Docker to “remove”, i.e. delete the container when it’s done executing. The first volume binding maps the directory where tesseract is executed from to the /tesseract directory, which, as you may recall, is the working directory of the tesseract_ocr image. The second volume binding maps the host machine’s /var/folders directory, which Tesseract uses for temporary file storage, to the same path in the container. Finally, the last line runs a container from the tesseract_ocr image, which in turn executes the tesseract command along with any command line arguments we provide.

Copy the tesseract script to /usr/local/bin or another directory in your $PATH and be sure to change the permission with:

chmod 755 /usr/local/bin/tesseract
[Image: hello.png, a picture of the text “Hello World!”]

Using the image hello.png above, we can pass the image to Tesseract to read the text “Hello World!” from the image. The following command runs Tesseract OCR on hello.png, saves the text to hello.txt, and displays the result to stdout:

tesseract hello.png hello && cat hello.txt

Pretty cool right?

Conclusion

We’ve seen examples of service containers and executable containers. Service containers are fairly simple and run long-running processes that need to be started and stopped. Executable containers are generally more complex to configure and run singularly-focused, short-lived applications. Both have their use cases and can help you work more efficiently. With a bit of creativity and experimenting, you’ll be able to turn any service or executable into a Docker container. Happy hacking!
