I have active-active Pacemaker clusters with two or more resource groups containing floating IP addresses, which I want to monitor with the Icinga2 agent. My clients accept both configs and commands, and the agents connect to my master(s).
I prepared service checks for certain services. I defined a “services” array in my host definitions, and if it contains e.g. “apache”, then all of my apache2 checks are applied to the host. I’d like to use this method on my clusters.
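To illustrate, this is roughly how that pattern looks in the Icinga2 DSL (the host name, address, and check names here are made up for the example):

```
// Hedged sketch: a host carrying a "services" array, and an apply rule
// that attaches an apache2 check when the array contains "apache".
object Host "web1.example.com" {
  address = "192.0.2.10"
  vars.services = [ "apache", "mysql" ]
}

apply Service "apache-http" {
  check_command = "http"
  // The "in" operator matches against the host's services array
  assign where "apache" in host.vars.services
}
```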
With Icinga1 I used NRPE to monitor my cluster services on the floating IP, and it worked great. I’d like to do something similar with Icinga2 clients. What’s the official recommendation for monitoring clusters like these?
I found two solutions, but both are quite ugly…
#1: I assign my service checks to all cluster nodes. One node will have “OK” checks and the others will be “CRITICAL”. Then I create a “virtual” host object, assign dummy checks to it, and use “if” statements in the config to set the status of the dummy checks based on the real checks running on the nodes. I don’t like this because I always have a lot of misleading “CRITICAL” checks… If I could somehow hide the real checks and show only the “virtual” ones, it would be a good solution.
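A minimal sketch of that “virtual host” idea in Icinga2 config, using the built-in “dummy” CheckCommand whose state can be computed by a lambda (the node names, service name, and VIP host here are hypothetical):

```
// Hedged sketch: a virtual host representing the cluster VIP, with a
// dummy check whose state is derived from the real checks on the nodes.
object Host "cluster-vip.example.com" {
  address = "192.0.2.100"  // floating IP
  check_command = "dummy"
}

apply Service "apache-cluster" {
  check_command = "dummy"
  // dummy_state may be a function: return OK (0) if any node's
  // real apache check is OK, otherwise CRITICAL (2)
  vars.dummy_state = {{
    var ok = false
    for (node in [ "node1.example.com", "node2.example.com" ]) {
      var s = get_service(node, "apache-http")
      if (s && s.state == 0) { ok = true }
    }
    if (ok) { return 0 } else { return 2 }
  }}
  vars.dummy_text = "State derived from the real checks on the cluster nodes"
  assign where host.name == "cluster-vip.example.com"
}
```

This keeps the real per-node checks in place; the misleading “CRITICAL” results on the passive nodes would still show unless they are hidden or downtimed separately.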
#2: I found this solution: https://www.netways.de/blog/2018/06/08/wie-ueberwache-ich-eine-cluster-applikation-in-icinga-2/
It works too, but I can’t reuse my existing checks with it.
So, my question is: what’s the best practice for monitoring Pacemaker clusters with Icinga2?