It depends on how you decide to check your clients. There are several ways to do this.
For example, on Linux hosts you can check via:
Icinga Agent
SSH
NRPE
If you use the Icinga agent, you have to define the command endpoint in the service (correct me if I am wrong, I don’t use the Icinga agent myself). If you use NRPE or SSH, the satellite connects to the client, executes the plugin and collects the result.
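For illustration, a minimal DSL sketch of that agent variant (the vars.agent custom variable and the assumption that the endpoint name equals the host name are made up for this example):

apply Service "disk" {
  check_command    = "disk"
  // assumption: the agent's Endpoint object carries the same name as the Host
  command_endpoint = host.name
  assign where host.vars.agent == true
}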
Afaik you can think of the client just like a satellite, with the sole difference that it only checks itself.
A satellite, or rather the term “satellite”, refers to an Icinga 2 instance that executes checks both on itself and on remote hosts, for example SNMP checks against remote machines/switches, or triggering NRPE checks.
The command endpoint can only be set in a host or service template, at least in the Icinga Director.
I’m not sure if the Director sets the command endpoint to the client itself if it is an agent. But the behavior is correct for the apt check, as it has to run locally on the machine, or else you would have x checks reporting the updates of your satellite.
That’s the “agent” option; the Director then automatically sets the command_endpoint configuration.
Since the agent checks are triggered from the satellite zone, the corresponding agent host also needs to be put into the satellite cluster zone.
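In plain DSL, placing such an agent host into the satellite zone could look roughly like this (host name and address are made up):

object Host "agent.example.com" {
  check_command = "hostalive"
  address       = "192.0.2.10"
  // the satellite zone schedules and forwards the checks for this agent
  zone          = "satellite-zone-1"
}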
Cheers,
Michael
PS: I strongly advise against NRPE, as it has multiple security flaws. SSH would be an option, but the configuration is not really a breeze compared to what’s possible with the Director and agents.
Well, since I am just using the DSL (not the Director) and checking all my Linux hosts via SSH, I must say it works like a charm (1,100+ hosts, 9,200+ services). And since Windows Server 2019 you can also use SSH to get a PowerShell session and trigger PS scripts. So you get a totally agentless monitoring solution via SSH, WMI and SNMP. For me, the Director is much more complicated than the DSL.
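For illustration, one of these SSH checks could look roughly like this with the ITL’s by_ssh CheckCommand (plugin path, thresholds and the os custom variable are assumptions):

apply Service "disk-ssh" {
  check_command = "by_ssh"
  // assumption: the plugin is installed at this path on the remote host
  vars.by_ssh_command = [ "/usr/lib/nagios/plugins/check_disk", "-w", "20%", "-c", "10%" ]
  vars.by_ssh_logname = "icinga"
  assign where host.vars.os == "Linux"
}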
That’s what I tried. The Director sets command_endpoint to the host_name when I use Agent = yes.
For my understanding: set the host to the satellite zone. Then the checks are triggered from the satellite and not from the master, but the actual checks are executed on the client, and that’s why check_source = client.
Correct?
Sorry for the many questions, I’m a bit stuck in tunnel vision here and I’ve tried a lot.
Right. NRPE is a small daemon which runs on the agent and can be queried with check_nrpe. This originates from Nagios and is their preferred way. NRPE is known for weak TLS and possible MITM attacks. Also, it doesn’t integrate as transparently as the Icinga 2 agent, which shares the same binary and configuration language as a satellite/master instance.
Coming back to the question - does that then work on a deployment?
In terms of the zones.conf files, I’d suggest some changes.
Avoid the ZoneName and NodeName constants; these have been removed from the CLI wizards in 2.10. Use real FQDN strings instead.
On the master and satellites, only configure their endpoints and zones.
Master:
object Endpoint "icinga-master" {
//host is not needed
}
object Zone "master" {
endpoints = [ "icinga-master" ]
}
object Endpoint "icinga-sat-1" {
host = "icinga-sat-1"
}
object Zone "satellite-zone-1" {
endpoints = [ "icinga-sat-1" ]
parent = "master"
}
object Zone "global-templates" {
global = true
}
object Zone "director-global" {
global = true
}
Satellite:
object Endpoint "icinga-master" {
host = "XXX"
port = "5665"
}
object Zone "director-global" {
global = true
}
object Endpoint "icinga-sat-1" {
host = "icinga-sat-1"
}
object Zone "satellite-zone-1" {
endpoints = [ "icinga-sat-1" ]
parent = "master"
}
object Zone "master" {
endpoints = [ "icinga-master" ]
}
object Zone "global-templates" {
global = true
}
On the agent, configure the local endpoint and the parent endpoint and zones.
object Endpoint "agent-FQDN" {}
object Endpoint "icinga-sat-1" {
//set the host attribute, if the agent should actively connect to the satellite
}
object Zone "satellite-zone-1" {
endpoints = [ "icinga-sat-1" ]
}
object Zone "agent-FQDN" {
endpoints = [ "agent-FQDN" ]
parent = "satellite-zone-1"
}
object Zone "global-templates" {
global = true
}
object Zone "director-global" {
global = true
}
SSH can be rather complex, also in combination with the Director. Coming back to the original question, what exactly is not working with the agent in the Director?
Install the second server and use the node wizard to add the second master as a satellite to the first master.
Then edit the zones.conf on both masters so that both of them are in the master zone (see the sketch below these steps):
– add the second master endpoint to the zones.conf of the first master and put it into the master zone
– add the first master endpoint to the zones.conf of the second master, add the master zone and remove the satellite zone of the second master
Add the second master to the master zone on the satellites as well.
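The master zone would then look like this on both masters (a sketch with assumed endpoint names):

object Endpoint "icinga-master-1" { }

object Endpoint "icinga-master-2" { }

object Zone "master" {
  endpoints = [ "icinga-master-1", "icinga-master-2" ]
}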
The second master needs the same features activated as the first master.
A second DB is not really needed (unless you want some form of HA there too, with all its complications like master/slave, Pacemaker, a cluster IP and so on).
The masters automatically decide which one is writing to the database. You just have to configure the ido-mysql.conf on the second master like on the first master. They also automatically distribute the checks based on an algorithm.
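A sketch of the identical ido-mysql.conf for both masters (host and credentials are placeholders; enable_ha defaults to true, so the cluster decides which master actively writes):

object IdoMysqlConnection "ido-mysql" {
  host     = "db.example.com"
  user     = "icinga"
  password = "secret"
  database = "icinga"
  // enable_ha = true is the default, so only one master writes at a time
}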
As the second master connects to the first master (node wizard setup as satellite), the CA from the first master is used.
That’s all I can think of right now. Hope I haven’t said something stupid/wrong here.
Later I’ll also create replications for the satellite hosts.
All works fine.
The only thing I’m missing is a flag on the icinga2 node setup command to set a custom local endpoint name, or maybe I overlooked it:
--zone arg The name of the local zone
--endpoint arg Connect to remote endpoint; syntax: cn[,host,port]
--parent_host arg The name of the parent host for auto-signing the csr; syntax: host[,port]
--parent_zone arg The name of the parent zone
--listen arg Listen on host,port
--ticket arg Generated ticket number for this request (optional)
--trustedcert arg Trusted master certificate file
--cn arg The certificates common name
--accept-config Accept config from master
--accept-commands Accept commands from master
--master Use setup for a master instance
--global_zones arg The names of the additional global zones to 'global-templates' and 'director-global'.
--disable-confd Disables the conf.d directory during the setup
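For reference, a hypothetical agent setup call using only the flags listed above (all names, the ticket and the certificate path are placeholders):

icinga2 node setup \
  --cn agent.example.com \
  --zone agent.example.com \
  --endpoint icinga-sat-1,icinga-sat-1,5665 \
  --parent_host icinga-sat-1,5665 \
  --parent_zone satellite-zone-1 \
  --ticket <ticket> \
  --trustedcert /var/lib/icinga2/certs/trusted-parent.crt \
  --accept-config --accept-commands \
  --disable-confd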