In the world of software development, we once built large monolithic applications capable of performing all the tasks required for the job at hand. As time has passed, however, the decoupling practices we so often apply within our lines of code have spread to the architecture of the application itself. Perhaps that is why we see a growing tendency to build microservice architectures today.
In a microservice architecture, we gain the advantages of decoupling the application modules. This means that we can use the most appropriate tools to build each individual module (e.g. a Java backend, an Angular frontend, a Python utility module). It also means that any failure in the system can be contained to that specific module. Ideally, a microservice application could be made from a set of microservices which run on different operating systems (OS) or on systems with different prerequisites (e.g. a JRE (Java Runtime Environment), Python, MySQL). These individual microservices might only communicate with each other through an agreed-on API (e.g. HTTP, MQTT). Whereas a monolithic application will always have a single OS as its foundation, a microservice application might have several different OS foundations (e.g. Windows, Ubuntu, CentOS). A tool perfect for solving this issue is Docker, with the addition of Docker-compose.
For the purposes of this article, I use a simple working example. It illustrates how to set up a microservice application architecture and shows how and why Docker and Docker-compose are effective at facilitating this architecture.
The example is based on a Python service and a Java service, with an MQTT application acting as the communication bridge between the two.
Initially, we start out by finding appropriate container images to facilitate the services. A large collection of public Docker images is available on Docker Hub. By searching and investigating the images' documentation, it is highly likely that we will find images which suit our purposes.
These public images can be viewed as environments set up with all the requirements to run our services. What we need to do now is get our services into those images. This is achieved through a Dockerfile. An example of the Dockerfile for the Java application can be seen below. The general idea is to specify the parent Docker image, add the files necessary to run the Java application, and finally prescribe the command the container should use to start the application.
# Use the official image as a parent image. (openjdk is used here as an example)
FROM openjdk:11-jre-slim
# Add the necessary files to run the application. (jar, lib, logging etc…)
COPY application.jar application.jar
# Run the specified command within the container.
CMD java -jar application.jar
For the purposes of this tutorial, I have kept the Dockerfile to a minimum. Other instructions could have been added to provide further functionality, but that will have to wait for another day. For the same reason, I only include the example for the Java application; however, creating the other Dockerfiles follows the same pattern.
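As a sketch of that same pattern, a Dockerfile for the Python service might look as follows. The python:3.9-slim base image and the file names (requirements.txt, app.py) are illustrative assumptions, not part of the original example:

```dockerfile
# Use the official Python image as a parent image. (python:3.9-slim is an example)
FROM python:3.9-slim
# Add the necessary files and install the application's dependencies.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
# Run the specified command within the container.
CMD ["python", "app.py"]
```

The structure mirrors the Java Dockerfile: parent image, application files, and a start command.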
To ensure the containers run as expected, try starting them using the commands below. A prerequisite for starting a Docker container is, of course, to have Docker installed. The official guide for how to do this can be found here.
docker build --tag java-service .
docker run --name java-service java-service
This should start the application within the Docker container, and the application's log output should be visible in the command terminal. (The name java-service is only an example; use a name that matches your service.)
With all Dockerfiles created and tested, it is time to put them together in a Docker-compose framework. Docker-compose is an addition to Docker which allows multiple Docker containers to be started as one unit. All containers started within the Docker-compose unit can see and access each other through the service network. This means that Docker-compose provides us with easy initialization of all the microservices and, at the same time, a contained environment.
Below we can see an example of how the docker-compose.yml file will look.
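A minimal sketch of such a docker-compose.yml for this setup could look like the following. The eclipse-mosquitto broker image and the ./java-service and ./python-service build directories are assumptions for illustration:

```yaml
version: "3"
services:
  mqtt-broker:
    # Public MQTT broker image from Docker Hub.
    image: eclipse-mosquitto
    networks:
      - service-network
  java-service:
    # Built from the Java Dockerfile shown earlier.
    build: ./java-service
    depends_on:
      - mqtt-broker
    networks:
      - service-network
  python-service:
    # Built from the Python Dockerfile.
    build: ./python-service
    depends_on:
      - mqtt-broker
    networks:
      - service-network
networks:
  # Internal network shared by all services.
  service-network:
```

With this file in place, the whole application can be started with `docker-compose up --build` and stopped again with `docker-compose down`.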
The docker-compose file is divided into two sections: the service section and the network section. The service section defines each of the containers to be started: which image to use or which Dockerfile to build from, dependencies between services, and the networks each service is attached to.
The network section in this case only defines one internal network, the service network. If desired, networks and individual services can be exposed to an external environment, which can be very useful; however, that is beyond the scope of this tutorial.
It should be mentioned that this tutorial covers only the minimum functionality required to get a Docker-compose container environment up and running. Both Docker and Docker-compose include many other features which can add versatility to any microservice architecture.
In case you have not already figured out how to install Docker-compose, you can find the official guide here.
This concludes the basic guide on how to get started with Docker and Docker-compose. I hope it helped you!
My name is Daniel H. Jacobsen, and I'm a dedicated and highly motivated software developer with a master's degree in engineering within the field of ICT.
Through many years of constantly learning and adapting to new challenges, I have gained a well-rounded understanding of what it takes to stay up to date with new technologies, tools, and utilities.
The purpose of this blog is to share my learnings and knowledge with other like-minded developers, as well as to illustrate how these topics can be taught in a different, alternative manner.
If you like the idea of that, I would encourage you to sign up for the newsletter.