Then I changed the load service in the file /etc/icinga2/conf.d/services.conf:
apply Service "load" {
import "generic-service"
check_command = "load"
/* Used by the ScheduledDowntime apply rule in `downtimes.conf`. */
vars.backup_downtime = "02:00-03:00"
// assign where host.name == NodeName
assign where host.vars.os == "Linux"
}
I know this can’t work because load has to be checked by an agent on the host, but I was curious to see what would happen.
The load service is shown OK (green) on both hosts (localhost and zimbra), even if I shut zimbra down.
Could you help me understand how I should configure the zimbra load check, and how the load check is performed on localhost?
Add command_endpoint = host.vars.client_endpoint in your service definition:
apply Service "load" {
import "generic-service"
check_command = "load"
/* Used by the ScheduledDowntime apply rule in `downtimes.conf`. */
vars.backup_downtime = "02:00-03:00"
// specify where the check is executed
command_endpoint = host.vars.client_endpoint
// assign where host.name == NodeName
assign where host.vars.os == "Linux"
}
Icinga 2 then knows that this command should be executed on that host and not on localhost.
More info in the docs.
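For this to work, the host object must define vars.client_endpoint. A minimal sketch, assuming the host name matches the endpoint name (the import and address are taken from the values in this thread):

object Host "zimbra.domain.com" {
  import "generic-host"
  address = "192.168.178.16"
  vars.os = "Linux"
  /* name of this host's Endpoint object; here it equals the host name */
  vars.client_endpoint = name
}

With that in place, the apply rule above picks up the endpoint per host, instead of hard-coding it in every service.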
Furthermore, you’ll need to take care of setting up the master for a distributed setup, and likewise the agent/client, using the CLI tools.
Lastly, I’d suggest moving the configuration into the master zone in /etc/icinga2/zones.d/master. Likewise, move any additionally created commands into the global-templates zone directory to sync them to the agents, if needed.
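As an example, a custom check command synced to the agents could live in a file such as /etc/icinga2/zones.d/global-templates/commands.conf (the command name and plugin file here are made up for illustration):

object CheckCommand "my_custom_check" {
  /* plugin must exist on the agent, since the check runs there */
  command = [ PluginDir + "/check_my_custom" ]
}

Anything in a global zone directory is synced to all endpoints that accept config, so the agent can resolve the command locally.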
Thank you Alex and Michael for the answers.
Right now I’m struggling a bit with the endpoint concept.
“Nodes which are a member of a zone are so-called Endpoint objects.”
First of all I wish to clarify what an endpoint is not.
Let’s say I want to check only exposed services like http or ssh; then I just need a host definition without any endpoint/zone declaration, right?
In case I have an agent running on the host to check things like load and disks, I enter the distributed monitoring scenario.
So for the same zimbra host I need to declare 3 objects:
object Host (with address = "192.168.178.16")
object Endpoint (with host = "192.168.178.16")
a unique object Zone (with that Endpoint as member)
This is a very simple scenario with a master and a single endpoint.
If I change zimbra’s IP, I have to edit both the Host and the Endpoint object, right?
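To make it concrete, here is what I mean, using the values above (just a sketch of my understanding, not tested):

object Host "zimbra.domain.com" {
  address = "192.168.178.16"
  vars.os = "Linux"
  vars.client_endpoint = name
}

object Endpoint "zimbra.domain.com" {
  host = "192.168.178.16"
}

object Zone "zimbra.domain.com" {
  endpoints = [ "zimbra.domain.com" ]
  parent = "master"
}

So the IP appears twice: once in Host.address and once in Endpoint.host.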
cat /etc/icinga2/zones.conf
object Endpoint "icinga.domain.com" {
}
object Zone "master" {
endpoints = [ "icinga.domain.com" ]
}
object Zone "global-templates" {
global = true
}
object Zone "director-global" {
global = true
}
object Endpoint "zimbra.domain.com" {
}
object Zone "zimbra.domain.com" {
endpoints = [ "zimbra.domain.com" ]
parent = "master"
}
And these are my zone definitions on the client (zimbra):
cat /etc/icinga2/zones.conf
object Endpoint "icinga.domain.com" {
host = "icinga.domain.com"
port = "5665"
}
object Zone "master" {
endpoints = [ "icinga.domain.com" ]
}
object Endpoint "zimbra.domain.com" {
}
object Zone "zimbra.domain.com" {
endpoints = [ "zimbra.domain.com" ]
parent = "master"
}
object Zone "global-templates" {
global = true
}
object Zone "director-global" {
global = true
}
The wizard (run on zimbra) didn’t add “host =” to the Endpoint definition.
I copy/pasted this definition into the master’s zones.conf.
I guess host = can be omitted if the endpoint FQDN resolves to a valid IP.
Right?
Then I modified the load, proc, swap, and users services this way:
// assign where host.name == NodeName
assign where host.vars.os == "Linux"
command_endpoint = host.vars.client_endpoint
And it seems to work.
By default only ‘include “zones.conf”’ is enabled in icinga2.conf.
I can add ‘include_recursive “zones.d”’ to split the configuration later, once I have more hosts to monitor.
Please note that the wizard also added the director-global zone.
Is this zone added in case of a later installation of Icinga Director, or for something else?
The master then doesn’t need to forcefully connect to the client. It is advised to use only one connection direction, for performance reasons: Icinga will detect two similar connections and kill one of them, but that still wastes resources.
Another consideration is the network being reachable: typically firewalls prevent one or the other connection direction.
And performance: if the master actively connects to all clients, this eats CPU resources. If that’s not a problem, let the master connect. If you prefer to let the clients handle the reconnects, and save some CPU cycles on the master, go for that.
Hi Michael, thanks again for your reply.
My concern right now isn’t about a real use case, so no firewall or CPU problems; it’s just about the syntax and the logic.
In my testing configuration right now I have only one “host =” statement, on the client side (zimbra).
That makes me think the connection direction is from client to master.
Notice there’s no “host” parameter in the zimbra endpoint declaration (neither on the client side nor on the master side).
object Endpoint "zimbra.pbds.eu" {
}
In the documentation, endpoint declarations are the same for both cases (Top Down Command Endpoint and Top Down Config Sync).
Nonetheless, it doesn’t mention how endpoints are declared on the master side.
That’s why I’m confused.
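For comparison, this is what I would expect on the master side if the master were the connecting party (just my guess, not tested; the IP is the one from my setup above):

object Endpoint "zimbra.domain.com" {
  /* setting host here would, as I understand it, make the master
     initiate the connection to the agent */
  host = "192.168.178.16"
}

Whereas with host = omitted on the master, the master would just wait for the client to connect. Is that reading correct?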