Using Container Juggler to manage development environments

In a previous post, we shared some of our experiences using a microservice architecture in production for the last few years.

One practical issue that comes up during development with such an architecture is that, in order to test cross-service integration locally, you need a strategy to either mock the other services or run them, together with their whole stack of databases and caches, somewhere.

The Problem

Depending on the area of the system you work on, these dependencies can range from a single service with a database to several interconnected services.

This doesn’t mean that the services can’t run on their own – they should always be able to be run independently. But for some tasks you will want to integrate with the services which actually provide the functionality you rely on in the real cluster.

One valid strategy at this point is mocking. Instead of actually running the services you need, you simply provide fakes for them. This is a fine approach, but it has its limits.

First of all, you depend on your mocks being correct, and because mocks are also just code, they can have bugs.
Another issue is the subtle interactions that naturally happen in a sufficiently complex web of services. These interactions are very hard, or downright impossible, to reliably simulate using mocks.

Also, maintaining and documenting these mocks is extra work. Furthermore, it adds technical complexity and, over time, likely some technical debt as well.

So ideally, we would like to run the actual services locally – without the fuss of having to keep all of them locally available, up to date and properly configured for testing.

One technology that comes to mind given these requirements is docker.

In our case, because we use docker containers for deploying services to our kubernetes cluster anyway, this is a perfect fit, as we have ready-to-run docker images of all services available at any time.

Great, so let’s use docker for running the services. This means we only need one piece of software installed to run all services – docker. However, the next problem lurks right around the corner.

How can we run several services at the same time? Luckily, there is docker-compose, which lets users define and run multi-container applications with minimal configuration.

This already gets us quite far and if the whole system doesn’t consist of too many services, we could stop here.

However, depending on the complexity of the system, we might only want to run a subset of services, which might change depending on what we’re working on.

If the number of configurations isn’t too high, we could create several configs and be done with it. But at some point, this gets tedious to manage.

Another common problem with this approach is that, depending on the use-case, we might want to run some services locally in a debug/development environment and others in our local docker cluster.
In this case, you have to add /etc/hosts entries, via the extra_hosts option, to every service running inside docker for each service you run outside the cluster, so they can still communicate with each other.
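To illustrate this manual approach, a hand-written docker-compose entry could look roughly like the sketch below. The service names and the host IP are made up for illustration; on Linux, the docker bridge typically makes the host reachable at 172.17.0.1:

```yaml
# docker-compose.yml (manual approach, illustrative names)
services:
  auth-service:
    image: example/auth-service:latest
    extra_hosts:
      # other-service runs outside docker, directly on the host machine
      - "other-service:172.17.0.1"
```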

In a real-world system with many services, databases, caches, workers etc., creating these docker-compose configurations becomes a real maintenance burden and having to edit them all the time is tedious and error-prone.

The Solution

This is where the Container Juggler comes into play! This nifty tool solves exactly the problem outlined above – managing complex combinations of docker-compose configurations.

Container Juggler has two key concepts: templates and scenarios. A template is essentially the definition of a runnable entity in the system – for example a service, or a database.

A template can be as simple as this for a redis cache:
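A sketch of such a template, assuming – as the examples in this post do – that templates are plain docker-compose service snippets; the image tag and port are illustrative:

```yaml
# templates/redis.yml (illustrative)
redis:
  image: redis:6-alpine
  ports:
    - "6379:6379"
```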

A scenario, on the other hand, is a combination of these templates to run together. Let’s say we have the templates redis, auth-db and auth-service. If we wanted to run these services together, we could define a scenario such as this:
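A minimal sketch of such a scenario definition – the exact file layout is defined by the tool's configuration, so treat the key names here as illustrative:

```yaml
# container-juggler configuration (key names illustrative)
scenarios:
  auth:
    - redis
    - auth-db
    - auth-service
```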

Then, we could create a docker-compose config for this by using the command:
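The invocation below is only a sketch – the actual command name and flags are documented in the tool's README:

```shell
# generate a docker-compose.yml for the "auth" scenario (flags illustrative)
container-juggler generate -s auth
```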

And this will spit out a docker-compose.yml file containing all the configured templates and variables, with every existing template that is NOT part of the scenario added to each service as a hosts entry.

An example could look like this:
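A sketch of what the generated file could contain, assuming the three templates from the scenario above plus one additional existing template, other-service, which is not part of the scenario. The image names and the host IP are illustrative:

```yaml
# docker-compose.yml (generated, illustrative)
services:
  redis:
    image: redis:6-alpine
    extra_hosts:
      - "other-service:172.17.0.1"
  auth-db:
    image: postgres:13
    extra_hosts:
      - "other-service:172.17.0.1"
  auth-service:
    image: example/auth-service:latest
    extra_hosts:
      - "other-service:172.17.0.1"
```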

For these hosts entries to work, each service needs to be configured to call, e.g., http://other-service when it calls that service.

For that purpose, it makes sense to add a specific container-juggler configuration to each of your services.

If we run this using docker-compose up, the redis cache, auth-db and auth-service will start up inside docker.
If we run our other-service locally – say it’s a Rust service using cargo run, then as long as everything is configured properly, auth-service will be able to call our locally running service and vice-versa.

Another niche, but quite useful feature for persistent containers such as databases is volume-init. The idea is that you can define a .zip file which is used to initialize a container’s volume.
This is useful if you have a baseline state for a database you always want to reset to, or if you migrate from one machine to another.


As mentioned above, this approach makes sense if you have several different scenarios you need regularly. An example could be a scenario that starts the whole cluster except for the reverse proxy in front of it to test new proxy rules locally.

Another common use-case is to start a set of “basics” many of your services might rely on to be there. This could be a cache, a queue and cross-cutting concern services such as authentication and user-management.
Such a scenario is useful as a cookie-cutter starting point, regardless of what you plan to work on.

You can imagine the list of possibilities goes on and on, and it will look different for each workload. Once you have created the templates, infrastructure parts and scenarios, there is no additional maintenance work, and any combination of services is right at your fingertips.


Managing local development environments has become considerably more convenient since the dawn of docker.

With several complex scenarios, for example in a microservice context, configuring these environments can get time-consuming and error-prone.

Container Juggler solves this problem in a simple way, combining convenient configuration with the power to model many different setups.

We’ve been using the juggler for years and we never hit a point where this simple, yet powerful tool didn’t meet our needs. 🙂