Feedback Week Thread!

A good way of syncing config would be to use configuration management tools like Puppet, Chef, Ansible and the like.
Since Icinga Web is configured almost exclusively through files, this is actually rather easy to do, IMHO.
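
For example, something like this Ansible sketch could push the config tree from a primary node to a standby. The group name `icingaweb_standby` and the host name `icingaweb-primary` are placeholders, not anyone's real inventory; `/etc/icingaweb2` is Icinga Web 2's default config location:

```yaml
# Minimal sketch: push the Icinga Web 2 config tree from a primary node
# to every standby. Group and host names are assumptions.
- hosts: icingaweb_standby
  become: true
  tasks:
    - name: Sync /etc/icingaweb2 from the primary node
      ansible.posix.synchronize:
        src: /etc/icingaweb2/
        dest: /etc/icingaweb2/
      delegate_to: icingaweb-primary  # rsync runs from here, pushing to each standby
```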

That’s what I mean. IMHO it is not a good approach to use a third-party tool for syncing configs, particularly not when the core has a built-in config sync/HA feature via its API. Why not have an API for syncing Icinga Web configs, too? Yes, I built my own config sync via GitLab and CI/CD, but that was something I had to build myself. IMHO a professional monitoring solution that focuses on high availability (and monitoring has to be HA) should ship this out of the box, at least for the most important module, which is Icinga Web besides the core.
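
Roughly, such a pipeline can look like the sketch below. This is hedged, not my exact setup: the hostnames, the `deploy` user, and the `DEPLOY_SSH_KEY` CI/CD variable are placeholders, and the runner image is assumed to ship `ssh` and `rsync`:

```yaml
# Hedged .gitlab-ci.yml sketch: the repo tracks the /etc/icingaweb2
# content and a pipeline pushes it to both web nodes on changes to main.
deploy_icingaweb_config:
  stage: deploy
  script:
    - eval "$(ssh-agent -s)"
    - echo "$DEPLOY_SSH_KEY" | ssh-add -   # CI/CD variable holding the deploy key
    - for host in web1.example.com web2.example.com; do rsync -a --delete icingaweb2/ "deploy@$host:/etc/icingaweb2/"; done
  only:
    - main
```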
There are pros and cons with the Icinga Web modules. For example, you cannot simply sync the config of the x509 module: if you just copy the configs, the scan jobs will run twice at the same time from both Icinga Web instances and hammer your database. The same goes for the reporting module, where a naive config sync results in two reports being sent instead of one.

All of this means you don’t have a real HA Icinga Web setup, because you always end up with a “master” where you make config changes (and where your x509 scan jobs and reporting jobs run) and a “slave” that merely receives the synced config and serves as a fallback when the master is down.

Don’t get me wrong, I love Icinga and have been using it for over 5 years(?) now. But as @bodsch once described in great detail in his thread Architecture at icinga (a little bit of criticism ...) - #15 by bodsch, there should be more standardization, more built-in HA features, and, last but not least, no module should depend on another (e.g. no module should depend on the Director, as there are still DSL lovers [like me] who don’t use it). I don’t know if this is still the case, but there were Director dependencies in the past.
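
For the duplicate-job problem, one possible workaround (a sketch only, not something shipped by the modules) is to make sure scheduled jobs are installed on just one node at a time, e.g. via Ansible. The `icingaweb` group name, the “first host in the group is active” convention, and the job name/schedule are all assumptions; `icingacli x509 scan --job <name>` is the x509 module’s cron-based scan command per its docs:

```yaml
# Sketch: keep the x509 scan cron entry only on the node currently
# treated as active, so the scan never runs on both nodes at once.
- hosts: icingaweb
  become: true
  tasks:
    - name: x509 scan only on the active node
      ansible.builtin.cron:
        name: icingaweb-x509-scan
        minute: "0"
        hour: "2"
        job: "icingacli x509 scan --job default"
        state: "{{ 'present' if inventory_hostname == groups['icingaweb'][0] else 'absent' }}"
```

The same pattern would apply to the reporting module’s schedules; it is still a workaround, not the built-in HA behaviour argued for above.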
