Icinga Cluster+Satellites Agents CA Sign

Hi,

we have an Icinga 2 cluster with two masters and multiple satellites.
The database and Icinga Web run on a separate system. Now we have the problem that the self-generated certificates (icinga2 ca list) for the connected agents do not get automatically renewed/signed by the masters. We also have some certificate requests stuck on satellites that could not be signed because of a missing CA (key file). I think it is a configuration problem, but I cannot find it.
So maybe someone can give me a hint how to fix it.

System information:

  • Version used: 2.14.2-1
  • Operating System and version: debian11, 5.10.0-31
  • Enabled features: api checker graphite ido-mysql mainlog
  • Icinga Web 2 version and modules: 2.12.1, director(1.11.0), doc(2.12.1), graphite(1.2.1), idoreports(0.10.0), incubator(0.20.0), ipl(v0.5.0), monitoring(2.12.1), pdfexport(0.10.2), reporting(0.10.0), translation(2.12.1), x509(1.3.2)
  • the zones.conf file of the masters:

object Endpoint "master02.tld.local" {
  host = "192.168.100.102"
  port = "5665"
}

object Endpoint "master01.tld.local" {
}

object Zone "masters" {
  endpoints = [ "master02.tld.local", "master01.tld.local" ]
}

object Zone "global-templates" {
  global = true
}

object Zone "director-global" {
  global = true
}

object Endpoint "master00.tld.local" {
  host = "192.168.100.100"
  port = "5665"
}

# satellites
object Endpoint "sat01.tld.local" {
  host = "192.168.100.111"
  port = "5665"
}

object Endpoint "sat02.tld.local" {
  host = "192.168.100.112"
  port = "5665"
}

object Endpoint "sat03.tld.local" {
  host = "192.168.100.113"
  port = "5665"
}

object Zone "satellites" {
  endpoints = [ "sat01.tld.local", ... ]
  parent = "masters"
}

object Zone "master00.tld.local" {
  endpoints = [ "master00.tld.local" ]
  parent = "masters"
}

  • the zones.conf file of one satellite:

object Endpoint "master01.tld.local" {
  host = "192.168.100.101"
  port = "5665"
}

object Endpoint "master02.tld.local" {
  host = "192.168.100.102"
  port = "5665"
}

object Zone "masters" {
  endpoints = [ "master01.tld.local", "master02.tld.local" ]
}

object Endpoint "sat02.tld.local" {
}

object Zone "global-templates" {
  global = true
}

object Zone "director-global" {
  global = true
}

# connect the others
object Endpoint "sat03.tld.local" {
  host = "192.168.100.113"
  port = "5665"
}

object Zone "satellites" {
  endpoints = [ "sat01.tld.local", ... ]
  parent = "masters"
}

Satellites do not need to communicate with each other, hence you could remove the other satellites' zone and endpoint objects from each satellite's zones.conf. More precisely, each satellite should have its own zone if they are not configured for HA, and HA allows a maximum of two nodes within a zone.
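
For example (a sketch, using the satellite names from your snippet), the masters' zones.conf could define one zone per satellite instead of the shared "satellites" zone:

object Zone "sat01.tld.local" {
  endpoints = [ "sat01.tld.local" ]
  parent = "masters"
}

object Zone "sat02.tld.local" {
  endpoints = [ "sat02.tld.local" ]
  parent = "masters"
}

object Zone "sat03.tld.local" {
  endpoints = [ "sat03.tld.local" ]
  parent = "masters"
}

Each satellite then only keeps its own zone (plus the masters zone and the global zones) in its zones.conf.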


Hi :)
What @rsx said is correct: a shared "satellites" zone like yours will not work for more than two satellites.
From the docs:

There is a known problem with >2 endpoints in a zone and a message routing loop. The config validation will log a warning to let you know about this too.

In addition, you should decide which side should initiate the connection between master and satellite.
Currently both sides try to connect to each other, because the host attribute is set in the Endpoint objects on both ends.
I would suggest leaving the satellites' Endpoint objects in the masters' zones.conf empty, so that the masters expect a connection from the satellites. That keeps a bit of load off the masters and clutter out of their log.
See Endpoint Connection Direction
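
For example (sketch), the satellite Endpoint objects on the masters would then look like this:

object Endpoint "sat01.tld.local" {
  # no host attribute: the masters wait for the satellite to connect
}

The satellites keep host and port set for the master endpoints in their zones.conf, as you already have it.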

Regarding your question about the certificate signing:
To have certificates signed automatically, you need to supply a ticket when requesting them from the agent.
See CSR Auto-Signing
We, for example, generate the ticket for the agent node via an API call against the master and then pass that ticket along with the certificate request. That way the master can verify the request's authenticity and will sign it.
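
Roughly like this (a sketch; the agent name agent01.tld.local and the API credentials root:icinga are placeholders):

# On a master: generate a ticket for the agent's CN via the API
curl -k -s -u root:icinga -H 'Accept: application/json' \
  -X POST 'https://master01.tld.local:5665/v1/actions/generate-ticket' \
  -d '{ "cn": "agent01.tld.local" }'

# On the agent: pass the ticket during node setup so the master signs the request automatically
# (trusted-parent.crt was saved beforehand with "icinga2 pki save-cert")
icinga2 node setup --ticket '<ticket from the API response>' \
  --cn agent01.tld.local --zone agent01.tld.local \
  --parent_zone masters --parent_host master01.tld.local \
  --endpoint master01.tld.local,192.168.100.101,5665 \
  --trustedcert /var/lib/icinga2/certs/trusted-parent.crt \
  --accept-commands --accept-config

Requests that arrive without a valid ticket stay pending on the master and can still be signed manually with icinga2 ca sign <fingerprint> (the fingerprint is shown by icinga2 ca list).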