How to set up groups for satellite nodes

Hi,
I had set up a master node and several clients. Everything was fine; I could create host groups and service groups.
But now I have installed the Icinga agent on the clients and get their config under /etc/icinga2/repository.d/…
Where can I now define a host group, for example? Before, I configured it under /etc/icinga2/conf.d/groups with:

object HostGroup "ep-staging-server" {
  display_name = "EP Staging All Servers"

  assign where host.vars.stage == "ep-staging"
}

and under /etc/icinga2/conf.d/ep-staging-host.conf with:

object Host "stagingserver" {
  import "generic-host"
  address = "xxx.xxx.xxx.xxx"
  check_command = "hostalive"
  vars.stage = "ep-staging"
  vars.host_group = "ep-staging-front"
  vars.load["load"] = {}
  vars.nginx_status["nginx_status"] = {
    nginx_status_servername = "$hostname$"
  }
  vars.disks["disk"] = {}
  vars.notification["mail"] = {
    groups = [ "webadmin" ]
  }
  vars.notification["sms"] = {
    groups = [ "webadmin" ]
  }
}

But now I have to remove the stagingserver from the host file, because it now lives under the repository directory.

I now put the variable vars.stage = "ep-staging" on the agent, but that doesn't seem to work.

Any idea?

BTW, all checks run well and I get data from the clients, so the communication between the master and the client nodes is fine.

Hi,

That's the old bottom-up mode, which was deprecated in 2.6 and removed in 2.8. Are you sure you want to go this route?

Cheers,
Michael

No, not really, but what is the right way? :open_mouth:

Configure the agents and integrate them with command endpoint checks from the master. This chapter gives a good overview of the capabilities.
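
As a minimal sketch (the host name and address here are placeholders, not your actual setup), the pattern is a Host object whose name matches the agent's endpoint, plus a Service apply rule that sets command_endpoint so the check is executed on the agent:

object Host "agent1.example.com" {
  import "generic-host"
  check_command = "hostalive"                  // executed on the master
  address = "xxx.xxx.xxx.xxx"
  vars.client_endpoint = name                  // convention: host name == endpoint name
}

apply Service "load" {
  import "generic-service"
  check_command = "load"
  command_endpoint = host.vars.client_endpoint // run the check plugin on the agent
  assign where host.vars.client_endpoint != ""
}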

OK, now it works for me, thanks.
But I don't really understand how I create more zones. In my example I would like to have a zone for each environment (Development, Staging and Production), so I would like to create three zones and put the servers into these zones. Does that make sense? The environments are in different networks. This is my first host file for the Development environment, where I try to put several servers into one zone, but it doesn't work :frowning:

object Zone "master" {
  endpoints = [ "icinga-mon.net1.com" ]
}

object Endpoint "devfront1.net2.com" {
  host = "xxx.xxx.xxx.xxx"
}

object Zone "development" {
  endpoints = [ "devfront1.net2.com" ]
  endpoints = [ "devmiddle1.net2.com" ]
  parent = "master"
}

object Host "devfront1.net2.com" {
  check_command = "hostalive" //check is executed on the master
  address = "xxx.xxx.xxx.xxx"
  vars.client_endpoint = name //follows the convention that host name == endpoint name

  vars.stage = "ep-develop"
  vars.load["load"] = {}
  vars.disks["disk"] = {}
  vars.notification["mail"] = {
    groups = [ "icingaadmins" ]
  }
}

object Endpoint "devmiddle1.net2.com" {
  host = "xxx.xxx.xxx.xxx"
}

object Host "devmiddle1.net2.com" {
  check_command = "hostalive" //check is executed on the master
  address = "xxx.xxx.xxx.xxx"
  vars.client_endpoint = name //follows the convention that host name == endpoint name

  vars.stage = "ep-develop"
  vars.load["load"] = {}
  vars.disks["disk"] = {}
  vars.notification["mail"] = {
    groups = [ "icingaadmins" ]
  }
}

Hi,

devfront1 and devmiddle1 are agent hosts, right? You would need a satellite endpoint host for each zone, e.g. development, which then schedules the checks against the agents. Otherwise the zone for each environment doesn’t really make sense.

If you need the environment split only for visualization purposes, use custom variables and/or hostgroups for this and fire the checks directly from the master zone.
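
For example, reusing the custom variable you already set on your hosts, a host group per environment could look like this (the group name is just an example):

object HostGroup "ep-develop-server" {
  display_name = "EP Development Servers"

  assign where host.vars.stage == "ep-develop"
}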

If it is for network or DMZ reasons, set up a satellite zone for each environment and then proceed with adding agents to it. That's the three-level cluster scenario in the docs.
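
Roughly sketched (the satellite endpoint name is made up for illustration), that third level means one satellite zone per environment below the master, with the agent zones pointing at the satellite as their parent instead of the master:

object Endpoint "dev-satellite.net2.com" {   // hypothetical satellite in the development network
  host = "xxx.xxx.xxx.xxx"
}

object Zone "development" {
  endpoints = [ "dev-satellite.net2.com" ]
  parent = "master"
}

object Zone "devfront1.net2.com" {           // agent zone, now a child of the satellite zone
  endpoints = [ "devfront1.net2.com" ]
  parent = "development"
}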

Cheers,
Michael

Yes, the dev servers are agents.
And yes, it's only for visualisation; that's what I have done at the moment.
I don't understand why the Zone object has the same name as the Host object.
This is my host file for development. Is this the way I can continue?

object Zone "master" {
  endpoints = [ "icinga-mon.net1.com" ]
}

object Endpoint "devfront1.net2.com" {
  host = "xxx.xxx.xxx.xxx"
}

object Zone "devfront1.net2.com" {
  endpoints = [ "devfront1.net2.com" ]
  parent = "master"
}

object Host "devfront1.net2.com" {
  check_command = "hostalive" //check is executed on the master
  address = "xxx.xxx.xxx.xxx"
  vars.client_endpoint = name //follows the convention that host name == endpoint name

  vars.stage = "ep-develop"
  vars.load["load"] = {}
  vars.disks["disk"] = {}
  vars.notification["mail"] = {
    groups = [ "icingaadmins" ]
  }
}

object Endpoint "devmiddle1.net2.com" {
  host = "xxx.xxx.xxx.xxx"
}

object Zone "devmiddle1.net2.com" {
  endpoints = [ "devmiddle1.net2.com" ]
  parent = "master"
}

object Host "devmiddle1.net2.com" {
  check_command = "hostalive" //check is executed on the master
  address = "xxx.xxx.xxx.xxx"
  vars.client_endpoint = name //follows the convention that host name == endpoint name

  vars.stage = "ep-develop"
  vars.load["load"] = {}
  vars.disks["disk"] = {}
  vars.notification["mail"] = {
    groups = [ "icingaadmins" ]
  }
}

Hi,

OK, then let's keep it simple. The master zone schedules the checks that are run on the agents. Pick one of the agents first and make sure it works, then iterate over the rest.

zones.conf on the Master

vim /etc/icinga2/zones.conf

object Endpoint "icinga-mon.net1.com" {

}

object Zone "master" {
  endpoints = [ "icinga-mon.net1.com" ]
}

Then add the agent's endpoint and zone objects; the zone's parent attribute establishes the trust relationship between the zones. Each agent needs its own zone, which is mandatory for this setup.

vim /etc/icinga2/zones.conf

object Endpoint "devfront1.net2.com" {
  host = "xxx.xxx.xxx.xxx" // master actively connects to the endpoint
}

object Zone "devfront1.net2.com" {
  endpoints = [ "devfront1.net2.com" ]
  parent = "master"
}

Host and Service Checks from the Master executed on the Agent

Best practice is to define these objects inside the master zone in zones.d. This makes it easier to set up a secondary master later on.

mkdir -p /etc/icinga2/zones.d/master

vim /etc/icinga2/zones.d/master/devfront1.net2.com.conf

object Host "devfront1.net2.com" {
  check_command = "hostalive" //check is executed on the master
  address = "xxx.xxx.xxx.xxx"
  vars.client_endpoint = name //follows the convention that host name == endpoint name

  vars.stage = "ep-develop"
  vars.load["load"] = {}
  vars.disks["disk"] = {}
  vars.notification["mail"] = {
    groups = [ "icingaadmins" ]
  }
}

In terms of the service apply rules, put them into zones.d/master/services.conf for example.

Start simple with just one service check.

vim /etc/icinga2/zones.d/master/services.conf

apply Service for (disk => config in host.vars.disks) {
  import "generic-service"

  check_command = "disk"

  command_endpoint = host.vars.client_endpoint // execute this check on the agent endpoint

  vars += config

  assign where host.vars.client_endpoint != ""
}

Run a config validation and restart Icinga 2. Then navigate into Icinga Web 2 and check the detail view for the disk service. Verify that the check source is the agent hostname.
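
On a systemd-based system that would be something along the lines of:

icinga2 daemon -C
systemctl restart icinga2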

Cheers,
Michael

PS: This follows the best practices documented in the "master with agents/clients" chapter of the docs.