Dockerizing Icinga: any benefits?

With the rush to containerize all and sundry, I have noticed that a Docker image is available.

I can see the benefits for learning and for quickly spinning up a container, but would you use it in production environments? What advantages does it bring for Icinga?

I think if you like pain, go and use Docker for Icinga 2 in production. Even for testing I would not recommend it.




One main benefit of running applications in containers and layered images is that your base system doesn’t actually install the application, its dependencies, and so on. One thing to keep in mind here: each application should run in its own container. For Icinga this would be:

  • Icinga 2 core
  • MySQL/PostgreSQL
  • Web server (Apache, Nginx) with Icinga Web 2

Additional containers would add InfluxDB, Graphite, Elasticsearch, etc.

At this point, it makes sense to look into a container orchestrator which ensures that the container platform is running, the network links between containers are in place, and so on. The simplest approach is to use docker-compose and its YAML configuration file.
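To illustrate the container split from the list above, a minimal docker-compose sketch could look like the following. The image names, ports and credentials are assumptions for illustration, not a blessed official setup:

```yaml
# Hypothetical docker-compose.yml sketch for the three-container split
# described above; image names and credentials are placeholders.
version: "3"
services:
  icinga2:
    image: icinga/icinga2          # assumed core image name
    ports:
      - "5665:5665"                # Icinga 2 API/cluster port
    volumes:
      - icinga2-data:/var/lib/icinga2
    depends_on:
      - db
  db:
    image: mariadb:10
    environment:
      MYSQL_ROOT_PASSWORD: secret  # placeholder credential
      MYSQL_DATABASE: icinga
    volumes:
      - db-data:/var/lib/mysql
  web:
    image: icinga/icingaweb2       # assumed web image name
    ports:
      - "8080:8080"
    depends_on:
      - db
      - icinga2

volumes:
  icinga2-data:
  db-data:
```

Started with `docker-compose up -d`, this gives you the core, database and web containers on one shared network, with named volumes keeping the stateful data.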

Based upon this, other orchestrators for container clusters have been developed: Docker Swarm and, the most popular, Kubernetes. All of them serve the purpose of running small isolated environments for applications, enable high availability, and allow scaling in large environments, e.g. sharing the workload of web applications or database backends.

For Icinga as a master instance, there are certain things to keep in mind:

  • How is the monitored object configuration being deployed?
    • Static configuration files need a shared directory mapped to the outside host.
    • REST API with runtime created objects
    • Or a deployment via the Icinga Director from the web container to the core container (REST API)
  • Stateful data, mapping /var/lib/icinga2 outside as persistent storage
  • Enabling specific features, e.g. InfluxDB/Graphite writers via environment variable on-demand
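Translated into a compose service, the persistence and feature points above might look like the sketch below. The `/var/lib/icinga2` path is the standard state directory; the image name and the environment variable names for toggling features are hypothetical:

```yaml
# Sketch of the stateful-data and feature-toggle points above; the
# environment variable names are hypothetical placeholders.
services:
  icinga2:
    image: icinga/icinga2            # assumed image name
    volumes:
      - icinga2-lib:/var/lib/icinga2 # persistent state outside the container
      - ./conf.d:/etc/icinga2/conf.d # static config mapped from the host
    environment:
      ICINGA_FEATURE_INFLUXDB: "1"   # hypothetical switch for a writer feature
      ICINGA_INFLUXDB_HOST: influxdb # hypothetical writer target

volumes:
  icinga2-lib:
```

Alternatively, skip the mapped config directory entirely and create objects at runtime via the REST API or the Icinga Director, as mentioned above.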

The web container is relatively straightforward, with e.g. Nginx, PHP and Icinga Web inside. Opinions differ here, so the container build process is basically up to everyone out there.

For the database container, one can re-use existing images such as mysql:5.7 or mariadb, for example.

When extending this into a distributed monitoring setup, a Docker container for the Icinga 2 agent makes sense as a sidecar, e.g. in a container cluster such as Kubernetes. Any checks fired from the main Icinga 2 master instance run against the agent, which then queries local and remote endpoints.
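The sidecar pattern just described can be sketched as a Kubernetes pod spec like the one below. The agent image name and the zone environment variable are assumptions; the point is only that the agent container sits next to the workload and exposes the cluster port to the master:

```yaml
# Hypothetical pod sketch: an Icinga 2 agent as a sidecar next to the
# application container; image and variable names are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-icinga-agent
spec:
  containers:
    - name: app
      image: my-app:latest           # the actual workload
    - name: icinga2-agent
      image: icinga/icinga2          # assumed agent image
      ports:
        - containerPort: 5665        # cluster/API port the master connects to
      env:
        - name: ICINGA_ZONE          # hypothetical variable selecting the
          value: app-zone            # agent's zone/endpoint configuration
```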

Thing is, containers are rather short-lived. If you’re planning to monitor not only typical services (ping, databases, SNMP, etc.) but also containers and Kubernetes clusters, this can become relatively tricky, mainly because a host/service config object does not necessarily apply to a container being monitored, or to a group of containers.

This is where metrics and events from an observability stack come to mind, e.g. Prometheus scraping application metric endpoints and collecting data points over time, later generating alerts and reports. One can seek an integration with the “classic” way of monitoring objects with Icinga, but that’s not an easy task on its own.
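For illustration, the Prometheus scraping mentioned above is only a few lines of configuration; the job name and target here are made-up examples:

```yaml
# Example prometheus.yml fragment: scrape an application's metrics
# endpoint every 15 seconds; job name and target are illustrative.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: my-app
    metrics_path: /metrics
    static_configs:
      - targets: ["my-app:8080"]
```

Since targets can also be discovered dynamically (e.g. via Kubernetes service discovery), this model copes better with short-lived containers than static host/service objects do.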

Still, the IT world is moving fast, and being able to monitor containers is becoming more important than ever. If you e.g. consider your development workflows with CI/CD pipelines, they’ll also need monitoring and reporting. A common approach is to use reliable and reproducible test environments, put into containers and container clusters (example: GitLab CI). Monitoring the development and build pipelines before finally deploying to production is a key element of this shifted mindset.

In order to make this happen with Icinga, there are some architectural changes required which may or may not happen in the future. One goal is also not to re-invent the wheel over and over again, but to integrate existing solutions.

Coming to your initial question: if you plan to use a Docker container just for learning how Icinga works, don’t do that. Better to install Icinga from the package repository into your own VM or server and learn the basics, then monitor your first service and later set up distributed monitoring with agents and satellites. Once you feel confident enough, and you e.g. already have a Kubernetes cluster running, you can try the things mentioned above, starting simple with docker-compose for instance.


PS: 5 years ago, I wasn’t convinced by the maturity of containers. Nowadays, they help me every day, e.g. when I need to test a package on a specific platform (macOS here), or to create a local distributed setup for testing specific applications working together.

My personal website runs in Docker as well, with Ghost and MySQL containers.


Thank you all for the useful answers, appreciated.

It is like pain.
I have been going through the whole topic for a long time, dealing with restarts and persistence of configurations.
My recommendation: no monitoring stuffed into a container!


Hi Bodo,

I arrived at a new employer where Icinga 2 and Icinga Web have been deployed within a container. See my comment at the top of this thread. Persistence of configuration is done by using volumes, but I agree it’s not ideal and can lead to headaches.

I am now building a new Icinga stack that is not going to be containerised: a fully automated build, provisioned using Ansible. It will be much easier to manage.

So from our standpoint, containerising Icinga was, and is, an unnecessary pain.


Also I am currently creating ansible roles for icinga2 and icingaweb2.
At the moment they work well enough that I am using them in a customer project.
If you are interested, want to join in or maybe shake your head:
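For illustration, consuming such roles from a playbook is typically as simple as the sketch below; the role names are placeholders, not necessarily what the actual roles are called:

```yaml
# Hypothetical playbook sketch; the role names are placeholders for
# whatever the icinga2/icingaweb2 roles are actually published as.
- hosts: monitoring
  become: true
  roles:
    - role: icinga2      # installs and configures the Icinga 2 core
    - role: icingaweb2   # installs and configures Icinga Web 2
```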


As always, I would keep it as simple as possible. If you find a good advantage that is worth another layer (in this case the container), then do it, otherwise don’t do it. Oh, and if you find one, tell me about it :wink:

Best regards

While it might work, we strongly suggest you don’t do it, for several reasons:

  • Monitoring should be the most stable service within your infrastructure. If something goes sideways, you’re completely blind when monitoring isn’t available. I know of customers who have a fully virtualized infrastructure, but their Icinga servers are hardware boxes with their own UPS and SMS gateways attached locally.
  • With containers it’s very hard to tell whether your setup follows best practices or even a supported setup scheme. So many things can be changed that it’s nearly impossible to offer support for such a setup. An Icinga partner might well refuse to support a containerized environment if you ever need professional help.

Hi Bodo,

thanks for your offer, I will take a look at your Gitlab projects. Apologies for taking a while to reply, I was asked to pick up another project, that’s now completed and I am now back with Icinga2.

I hope you are fit and well.

I’m fine, thanks for asking! :slight_smile:

Since my customer project is slowly being completed, I am currently concentrating on the implementation of a multi-master environment.

If you have any questions, wishes or suggestions … always bring them to me!

Regards, Bodo


After a long time …
You can now find my roles on Ansible Galaxy:

Have fun.
