Hi,
one main benefit of running applications in containers with layered images is that the application, its dependencies, etc. are never actually installed on your base system. One thing to keep in mind here: each application should run in its own container. For Icinga this would be:
- Icinga 2 core
- MySQL/PostgreSQL
- Web server (Apache, Nginx) with Icinga Web 2
Additional containers would add InfluxDB, Graphite, Elasticsearch, etc.
At this point it makes sense to look into a container orchestrator, which ensures that the container platform is running, the network links between containers exist, and so on. The simplest approach is to use docker-compose and its YAML configuration file.
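To illustrate the container split from above, here's a minimal docker-compose sketch. Treat it as a sketch only: the image names, tags and credentials are assumptions on my side, not a tested setup.

```yaml
version: '3'
services:
  icinga2:
    image: icinga/icinga2:latest       # Icinga 2 core (assumed image name)
    volumes:
      - icinga2-data:/var/lib/icinga2  # persistent state, more on that below
    depends_on:
      - db
  db:
    image: mysql:5.7                   # database backend
    environment:
      MYSQL_ROOT_PASSWORD: secret      # example credentials only
      MYSQL_DATABASE: icinga
  web:
    image: icinga/icingaweb2:latest    # web server with Icinga Web 2 (assumed image name)
    ports:
      - '8080:8080'                    # assumed HTTP port of the web image
    depends_on:
      - icinga2
      - db
volumes:
  icinga2-data:
```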
Based upon this, other orchestrators for container clusters have been developed: Docker Swarm and, the most popular one, Kubernetes. All of them serve the purpose of running small isolated environments for applications, enable high availability and allow scaling in large environments, e.g. sharing the workload of web applications or database backends.
For Icinga as a master instance, there are certain things to keep in mind:
- How is the monitored object configuration being deployed?
  - Static configuration files need a shared directory mapped from the outside host.
  - The REST API with runtime-created objects
  - Or a deployment via the Icinga Director from the web container to the core container (REST API)
- Stateful data: mapping `/var/lib/icinga2` to the outside as persistent storage
- Enabling specific features, e.g. the InfluxDB/Graphite writers, on demand via environment variables (see the fragment below)
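A hedged compose fragment for the core container illustrating the last two points. The feature toggle variable names are hypothetical (check the documentation of the image you use for the real ones), while the `curl` call follows the documented Icinga 2 REST API:

```yaml
services:
  icinga2:
    image: icinga/icinga2:latest
    volumes:
      - ./icinga2-data:/var/lib/icinga2   # stateful data mapped to the host
      - ./conf.d:/etc/icinga2/conf.d      # static configuration from the host
    environment:
      ICINGA2_FEATURE_INFLUXDB: '1'       # hypothetical toggle for the influxdb writer
      ICINGA2_FEATURE_GRAPHITE: '1'       # hypothetical toggle for the graphite writer
    ports:
      - '5665:5665'                       # Icinga 2 REST API

# Runtime-created objects go through the REST API instead, e.g.:
#   curl -k -s -u root:icinga -X PUT \
#     'https://localhost:5665/v1/objects/hosts/docker-host' \
#     -H 'Accept: application/json' \
#     -d '{ "templates": [ "generic-host" ], "attrs": { "address": "127.0.0.1" } }'
```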
The web container is relatively straightforward, with e.g. Nginx, PHP and Icinga Web 2 inside. Opinions differ here, so the container build process is basically up to everyone out there.
For the database container, one can re-use existing images such as `mysql:5.7` or `mariadb`, for example.
When leveraging this in a distributed monitoring cluster, a Docker container for the Icinga 2 agent makes sense as a sidecar, e.g. in a container cluster such as Kubernetes. Any checks fired from the main Icinga 2 master instance run against the agent, which then queries local and remote endpoints.
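A rough Kubernetes pod sketch of that sidecar idea; the application image and names are placeholders, and 5665 is the default Icinga 2 API/cluster port:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: myapp:latest            # the application to monitor (placeholder)
    - name: icinga2-agent
      image: icinga/icinga2:latest   # agent sidecar (assumed image name)
      ports:
        - containerPort: 5665        # cluster/API port the master connects to
```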
Thing is, containers are rather short-lived. If you're planning not only to monitor typical services (ping, databases, SNMP, etc.) but also containers and Kubernetes clusters, this can become relatively tricky, mainly because of the "problem" that a host/service config object does not necessarily map to a container being monitored, or to a group of containers.
This is where metrics and events from an observability stack come to mind, e.g. Prometheus scraping application metric endpoints and collecting data points over time, with alerts and reports generated later on. One can seek an integration with the "classic" way of monitoring objects in Icinga, but that's not an easy task on its own.
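For illustration, a minimal `prometheus.yml` scrape job; the job name, target and port are assumptions:

```yaml
scrape_configs:
  - job_name: 'myapp'
    scrape_interval: 15s
    static_configs:
      - targets: ['myapp:8080']   # application exposing a /metrics endpoint (assumed)
```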
Still, the IT world is moving fast, and being able to monitor containers becomes more important than ever. If you e.g. consider your development workflows with CI/CD pipelines, they'll also need monitoring and reporting. A common approach is to use reliable and reproducible test environments, packaged into containers and container clusters (example: GitLab CI). Monitoring the development and build pipelines up to the final deployment to production is a key element of this shifted mindset.
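As a sketch of the reproducible test environment idea, a hypothetical `.gitlab-ci.yml` job that validates an Icinga 2 configuration inside a container (image name assumed):

```yaml
validate-config:
  image: icinga/icinga2:latest   # assumed image providing a reproducible environment
  script:
    - icinga2 daemon -C          # validate the configuration, fail the job on errors
```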
In order to make this happen with Icinga, some architectural changes are required, which may or may not happen in the future. Another point is not to re-invent the wheel over and over again, but to integrate existing solutions.
Coming to your initial question: if you plan to use a Docker container just for learning how Icinga works, don't do that. Better to install Icinga from the package repository into your own VM or server and learn the basics, then monitor your first service, and later set up distributed monitoring with agents and satellites. Once you feel confident enough, and you e.g. already have a Kubernetes cluster running, you can try the things mentioned above, starting simple with docker-compose for instance.
Cheers,
Michael
PS: 5 years ago, I wasn't convinced by the maturity of containers. Nowadays, they help me every day, e.g. when I need to test a package on a specific platform (macOS here), or when I create a local distributed setup to test specific applications working together.
My personal website dnsmichi.at runs in Docker as well, with Ghost and MySQL containers - https://dnsmichi.at/new-blog/