2 Icinga Masters sync, but no checks on second Master

Hello,

I have a hopefully small issue with my config, but I have no idea where in that puzzle my piece is missing. I followed the official guide for clustering (and some others).

Setup:
Master 1, Icingaweb2, Director, Database
Master 2, Icingaweb2, Database
“enable_ha = false” is set for the IDO feature; each master has its own database
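
For reference, this is roughly what the ido-mysql feature config looks like on each master (credentials and names below are placeholders, not my real values):

object IdoMysqlConnection "ido-mysql" {
        user = "icinga"
        password = "SECRET"
        host = "localhost"
        database = "icinga"
        enable_ha = false    // each master writes to its own local database
}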

What works:
Master 1 does checks, shows data, everything fine.
Config changes in the Director on M1 are applied and synced to Master 2
/var/lib/icinga2/api/zones on M2 is populated with fresh config every time it changes.
icinga2.log shows successful sync

What does not work:
M2 runs no checks (at least none are showing in the log).
Icinga Web 2 on M2 is empty. No hosts, no checks.

Thank you for your hints on what's missing. :slight_smile:

The ApiListener has "accept_config = true"
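
For completeness, the relevant part of /etc/icinga2/features-enabled/api.conf (a sketch – accept_commands is shown for context only; accept_config is the setting in question):

object ApiListener "api" {
        accept_config = true     // accept config synced from the config master (M1)
        accept_commands = true   // assumption: command execution from other endpoints is allowed too
}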

cat /etc/icinga2/zones.conf

object Endpoint "M1" {
        host = "IP"
        port = "PORT"
}

object Zone "master" {
        endpoints = [ "M1", "M2" ]
}

object Endpoint "M2" {
}

object Zone "global-templates" {
        global = true
}

object Zone "director-global" {
        global = true
}

IIRC only one master at a time can hold the IDO object, which might cause the behaviour you're seeing. What happens on M2 if you stop Icinga 2 on M1?
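
One way to see which instance currently has an active IDO connection is the REST API status endpoint on each master (a sketch – the root API user and the default port 5665 are assumptions about your setup):

curl -k -s -u root:PASSWORD 'https://localhost:5665/v1/status/IdoMysqlConnection'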


Hi Ben,
forgot to mention that:
When I stop Icinga 2 on M1, nothing happens on M2. Nothing in Icinga Web 2, no checking activity visible in the log file.

You need different zones.conf files on the two masters. The host/port attributes define whether a node should actively try to connect to the other host, so both should try to connect to each other:

on M1:

object Endpoint "M1" {
  // That's us
}

object Endpoint "M2" {
  Host = <M2 IP>
}
object Zone "master" {
        endpoints = [ "M1", "M2" ]
}
[...]

and on M2:

object Endpoint "M1" {
   Host = <M1 IP>
}

object Endpoint "M2" {
  // That's us
}
object Zone "master" {
        endpoints = [ "M1", "M2" ]
}
[...]

Sorry to hijack this thread, but I'm dealing with a similar issue.

Do the zones.conf files have to be set up in that fashion on both masters?

I currently have the same zones.conf on both masters.

Thanks

Yes. Per the documentation (or at least its examples), you never want an Endpoint object to connect to itself. In my experience doing so creates some noisy logs.

https://icinga.com/docs/icinga-2/latest/doc/06-distributed-monitoring/#master-with-agents


The issue seems to be solved, but I don’t really know where the error was in the end.

The zones.conf already looked more or less like Marcus' example (I had only added the IP to the M2 Endpoint on the first master, as was recommended somewhere). Trying to change that led to weird errors, with the Director complaining that the M2 Endpoint is in more than one zone (I only have one and could not find another one anywhere).
After removing and reconfiguring the Director the same way as before, all errors were gone and the second master takes over. When both are running they share the work, but only one monitoring backend is active in Icinga Web 2; I assume this is expected behaviour.
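
In case anyone needs it: the Director kickstart can also be re-run from the CLI (assuming the Director module is set up with a kickstart.ini):

icingacli director kickstart required   # check whether a (re-)run is needed
icingacli director kickstart run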

Now… one last question: how can I define which node is “primary”? At the moment, as soon as both masters are available, the active master is always M2. I would prefer M1, as the Director also lives there.

I haven't had direct experience with it, but I recall this post containing the necessary steps – TL;DR: you rename the IDO object on both masters… I assume you change it first on the server that you wish to be the primary master, but I could be wrong (you may need to figure out which master to rename it on first).
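
A rough sketch of what that renaming could look like (the object names here are made up for illustration; which name wins the failover is exactly the part you'd want to verify against that post first):

on M1 (preferred primary):

object IdoMysqlConnection "ido-mysql-m1" {
        // existing attributes unchanged
}

on M2:

object IdoMysqlConnection "ido-mysql-m2" {
        // existing attributes unchanged
}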


Thank you. That was a really helpful pointer in the right direction. Now it works as it should :slight_smile:
