Understand command checks and agents

Hi, I’m new to icinga2.
I installed icinga 2.10-4 and icingaweb2 on ubuntu 18.04.
There are some localhost service samples.

I added a new host in /etc/icinga2/conf.d/test.conf

object Host "zimbra" {
  import "generic-host"
  address = "192.168.178.16"
  check_command = "hostalive"
  vars.os = "Linux"
}

Then I changed the load service in the file /etc/icinga2/conf.d/services.conf:

apply Service "load" {
  import "generic-service"

  check_command = "load"

  /* Used by the ScheduledDowntime apply rule in `downtimes.conf`. */
  vars.backup_downtime = "02:00-03:00"

//   assign where host.name == NodeName
  assign where host.vars.os == "Linux"
}

I know this can’t work, because load has to be checked by an agent on the host, but I was curious to see what would happen.
The load service is shown as OK (green) for both hosts (localhost and zimbra), even if I shut zimbra down.
Could you help me understand how I should configure the zimbra load check, and how the load check is performed on localhost?

Thank you.

Hi and welcome to our community!

You have to set vars.client_endpoint = name in your Host object:

object Host "zimbra" {
  import "generic-host"
  address = "192.168.178.16"
  check_command = "hostalive"
  vars.os = "Linux"
  vars.client_endpoint = name
}

and in your service definition command_endpoint = host.vars.client_endpoint:

apply Service "load" {
  import "generic-service"

  check_command = "load"

  /* Used by the ScheduledDowntime apply rule in `downtimes.conf`. */
  vars.backup_downtime = "02:00-03:00"

  // specify where the check is executed
  command_endpoint = host.vars.client_endpoint

  //   assign where host.name == NodeName
  assign where host.vars.os == "Linux"
}

Icinga 2 then knows that this command should be executed on that endpoint and not on localhost.
More info in the docs.

Greetz

Hi,

furthermore, you’ll need to take care of setting up the master for a distributed setup, and likewise the agent/client, by using the CLI tools.

Lastly, I’d suggest moving the configuration into the master zone in /etc/icinga2/zones.d/master. Likewise, move any additionally created commands into the global-templates zone directory to sync them to the agent, if needed.
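A synced command could look like the following sketch (the command name and plugin file are hypothetical; only the PluginDir constant and the zone directory convention come from the defaults):

```
/* /etc/icinga2/zones.d/global-templates/commands.conf */
object CheckCommand "my-custom-check" {
  // hypothetical plugin; PluginDir is a default Icinga 2 constant
  command = [ PluginDir + "/check_my_custom" ]
}
```

Because global-templates is a global zone, a definition placed there is synced to every endpoint that accepts config, including the agent.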

Cheers,
Michael

Thank you Alex and Michael for the answers.
Right now I’m struggling a bit with the endpoint concept.
“Nodes which are a member of a zone are so-called Endpoint objects.”

First of all, I wish to clarify what an endpoint is not.
Let’s say I want to check only exposed services like HTTP or SSH: then I just need a Host definition, without any Endpoint/Zone declaration, right?

In case I have an agent running on the host to check things like load and disks, I enter the distributed monitoring scenario.
So for the same zimbra host I need to declare 3 objects:

  • object Host (with address = "192.168.178.16")
  • object Endpoint (with host = "192.168.178.16")
  • a unique object Zone containing that Endpoint

This is a very simple scenario with a master and a single endpoint.
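As a sketch, using the names and IP from this thread (with the Zone name matching the Endpoint name, as the docs suggest):

```
object Host "zimbra" {
  import "generic-host"
  address = "192.168.178.16"
}

object Endpoint "zimbra" {
  host = "192.168.178.16"
}

object Zone "zimbra" {
  endpoints = [ "zimbra" ]
  parent = "master"
}
```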

If I change zimbra’s IP, I have to edit both the Host and the Endpoint object, right?

Yes, and yes. Go on and try it out, then you’ll see what’s done by this concept.
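One way to avoid editing the address twice is to declare it once as a constant and reuse it in both objects (a sketch; the constant name "ZimbraAddress" is my own invention):

```
/* e.g. in constants.conf; "ZimbraAddress" is a hypothetical name */
const ZimbraAddress = "192.168.178.16"

object Host "zimbra" {
  import "generic-host"
  address = ZimbraAddress
}

object Endpoint "zimbra" {
  host = ZimbraAddress
}
```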

I succeeded in setting up a master -> client configuration.

I ran icinga2 node wizard on the master (icinga.domain.com) and on the client (zimbra.domain.com).
I haven’t disabled conf.d on the master yet.

This is the zimbra host definition:

mv /etc/icinga2/conf.d/test.conf /etc/icinga2/conf.d/zimbra.conf
cat /etc/icinga2/conf.d/zimbra.conf

object Host "zimbra.domain.com" {
  import "generic-host"
  address = "192.168.178.16"
  check_command = "hostalive"
  vars.os = "Linux"
  vars.http_vhosts["http"] = {
    http_uri = "/"
  }
  vars.client_endpoint = name
}

These are my zone definitions on icinga master

cat /etc/icinga2/zones.conf
object Endpoint "icinga.domain.com" {
}

object Zone "master" {
	endpoints = [ "icinga.domain.com" ]
}

object Zone "global-templates" {
	global = true
}

object Zone "director-global" {
	global = true
}

object Endpoint "zimbra.domain.com" {
}

object Zone "zimbra.domain.com" {
        endpoints = [ "zimbra.domain.com" ]
        parent = "master"
}

And these are my zone definitions on the client (zimbra)

cat /etc/icinga2/zones.conf
object Endpoint "icinga.domain.com" {
	host = "icinga.domain.com"
	port = "5665"
}

object Zone "master" {
	endpoints = [ "icinga.domain.com" ]
}

object Endpoint "zimbra.domain.com" {
}

object Zone "zimbra.domain.com" {
	endpoints = [ "zimbra.domain.com" ]
	parent = "master"
}

object Zone "global-templates" {
	global = true
}

object Zone "director-global" {
	global = true
}

The wizard (run on zimbra) didn’t add “host =” to the Endpoint definition.
I copy/pasted this definition into the master’s zones.conf.
I guess host = can be omitted if the endpoint FQDN resolves to a valid IP.
Right?

Then I modified the load, proc, swap, and users services this way:

//   assign where host.name == NodeName
  assign where host.vars.os == "Linux"
  command_endpoint = host.vars.client_endpoint

And it seems to work 🙂

By default, only ‘include “zones.conf”’ is enabled in icinga2.conf.
I can add ‘include_recursive “zones.d”’ to split the configuration later, once I have more hosts to monitor.

Please note that the wizard also added the director-global zone.
Is this zone added in case of a later installation of Icinga Director, or for something else?

Thank you.

No. A host attribute specified for an endpoint makes the local node attempt to connect to that endpoint at a regular interval.

In other words, your client currently connects to the master with this configuration:

object Endpoint "icinga.domain.com" {
	host = "icinga.domain.com"
	port = "5665"
}

The master then doesn’t need to forcefully connect to the client. It is advised to use only one connection direction, for performance reasons. Icinga will detect two simultaneous connections and close one, but that still wastes resources.
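To keep a single connection direction (agent connects to master), the master-side Endpoint for the agent simply omits the host attribute, exactly as in the master’s zones.conf shown earlier in this thread:

```
/* master's zones.conf: no host attribute, so the master
   does not try to connect and waits for the agent instead */
object Endpoint "zimbra.domain.com" {
}
```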

Cheers,
Michael

The master then doesn’t need to forcefully connect to the client.

Honestly, I’m confused.
First of all, should the direction be chosen via the “host” attribute in the Endpoint object declaration, this way:

TOP DOWN ENDPOINT

master

        object Endpoint "icinga2-client1.localdomain" {
            host = "192.168.56.111"
        }

client

        object Endpoint "icinga2-master1.localdomain" {
            //
        }

TOP DOWN SYNC

master

        object Endpoint "icinga2-client1.localdomain" {
            //
        }

client

        object Endpoint "icinga2-master1.localdomain" {
            host = "192.168.56.101"
        }

This seems clear enough from reading “Master with Clients”.

This depends on:

  1. The network being reachable. Typically, firewalls prevent one or the other connection direction.
  2. Performance. If the master actively connects to all clients, this eats CPU resources on the master. If that’s not a problem, let the master connect. If you prefer to let the clients handle the (re)connects and save some CPU cycles on the master, go for that.

Have a nice weekend,
Michael

Hi Michael, thanks again for your reply.
My concern right now isn’t about a real use case, so no firewall or CPU problems; it’s just about the syntax and the logic.

In my testing configuration right now I have only one “host =” statement, on the client side (zimbra):

object Endpoint "icinga.domain.com" {
	host = "icinga.pbds.eu"
	port = "5665"
}

That makes me think the direction is from client to master.
Notice there’s no “host” parameter in the zimbra Endpoint declaration (neither on the client side nor on the master side).

object Endpoint "zimbra.pbds.eu" {
}

In the documentation, the Endpoint declarations are the same for both cases (Top Down Command Endpoint and Top Down Config Sync).
Nonetheless, it doesn’t mention how Endpoints are declared on the master side.
That’s why I’m confused.