Hi,
I had set up a master node and several clients. Everything was OK, and I could create Hostgroups and Servicegroups.
But now I have installed the Icinga agent on the clients and get their config under /etc/icinga2/repository.d/…
Where can I now define, for example, a Hostgroup? Before, I configured it under /etc/icinga2/conf.d/groups with:
…
object HostGroup "ep-staging-server" {
display_name = "EP Staging All Servers"
}
OK now it works for me, thx.
But I don't really understand how to create more Zones. In my example I would like to have a Zone for each environment (Development, Staging and Production), so I would like to create 3 Zones and put the servers into these Zones. Does that make sense? The environments are in different networks. This is my first host file for the Development environment, where I try to put several servers into one Zone, but it doesn't work:
object Host "devfront1.net2.com" {
check_command = "hostalive" //check is executed on the master
address = "xxx.xxx.xxx.xxx"
vars.client_endpoint = name //follows the convention that host name == endpoint name
}
object Host "devmiddle1.net2.com" {
check_command = "hostalive" //check is executed on the master
address = "xxx.xxx.xxx.xxx"
vars.client_endpoint = name //follows the convention that host name == endpoint name
}
devfront1 and devmiddle1 are agent hosts, right? You would need a satellite endpoint host for each zone, e.g. development, which then schedules the checks against the agents. Otherwise the zone for each environment doesn’t really make sense.
If you need the environment split only for visualization purposes, use custom variables and/or hostgroups for this and fire the checks directly from the master zone.
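A minimal sketch of that approach (the group name and the vars.stage value "ep-develop" are just assumed examples, adjust them to your naming), e.g. in zones.d/master/groups.conf:
object HostGroup "ep-develop-server" {
display_name = "EP Development All Servers"
assign where host.vars.stage == "ep-develop" // membership via a custom variable set on the host
}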
If it is for network/DMZ reasons, set up a satellite zone for each environment and then proceed with adding agents to it. That's the three-level cluster scenario in the docs.
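For the satellite variant, a rough sketch on the master could look like this (the endpoint name dev-satellite.net2.com is just a placeholder); the agent zones would then use parent = "development" instead of "master":
object Endpoint "dev-satellite.net2.com" {
host = "xxx.xxx.xxx.xxx" // master actively connects to the satellite
}
object Zone "development" {
endpoints = [ "dev-satellite.net2.com" ]
parent = "master"
}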
Yes, the dev servers are agents.
And yes, it's only for visualisation; that's what I have done at the moment.
I don't understand why the Zone object has the same name as the Host object.
That's my host file for Development. Is this the way I can continue?
object Host "devfront1.net2.com" {
check_command = "hostalive" //check is executed on the master
address = "xxx.xxx.xxx.xxx"
vars.client_endpoint = name //follows the convention that host name == endpoint name
}
object Host "devmiddle1.net2.com" {
check_command = "hostalive" //check is executed on the master
address = "xxx.xxx.xxx.xxx"
vars.client_endpoint = name //follows the convention that host name == endpoint name
}
ok then let’s keep it simple. The master zone schedules the checks being run on the agents. Pick one of the agents first, and ensure it works. Then iterate over the rest.
zones.conf on the Master
vim /etc/icinga2/zones.conf
object Endpoint "icinga-mon.net1.com" {
}
object Zone "master" {
endpoints = [ "icinga-mon.net1.com" ]
}
Then add the agent's Endpoint and Zone objects; the Zone establishes the trust relationship between zones via the parent attribute. Each agent needs its own zone; this is mandatory for this setup.
vim /etc/icinga2/zones.conf
object Endpoint "devfront1.net2.com" {
host = "xxx.xxx.xxx.xxx" // master actively connects to the endpoint
}
object Zone "devfront1.net2.com" {
endpoints = [ "devfront1.net2.com" ]
parent = "master"
}
Host and Service Checks from the Master executed on the Agent
Best practice is to define these objects inside the master zone in zones.d. This makes it easier to set up a secondary master later on.
mkdir -p /etc/icinga2/zones.d/master
vim /etc/icinga2/zones.d/master/devfront1.net2.com.conf
object Host "devfront1.net2.com" {
check_command = "hostalive" //check is executed on the master
address = "xxx.xxx.xxx.xxx"
vars.client_endpoint = name //follows the convention that host name == endpoint name
vars.stage = "ep-develop"
vars.load["load"] = {}
vars.disks["disk"] = {}
vars.notification["mail"] = {
groups = [ "icingaadmins" ]
}
}
In terms of the service apply rules, put them into zones.d/master/services.conf for example.
Start simple with just one service check.
vim /etc/icinga2/zones.d/master/services.conf
apply Service for (disk => config in host.vars.disks) {
import "generic-service"
check_command = "disk"
command_endpoint = host.vars.client_endpoint // execute this check on the agent endpoint
vars += config
assign where host.vars.client_endpoint != ""
}
Run a config validation and restart Icinga 2. Then navigate into Icinga Web 2 and check the detail view for the disk service. Verify that the check source is the agent hostname.
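For the validation and restart on a systemd-based system, that would be something like (assuming the default service name icinga2):
icinga2 daemon -C
systemctl restart icinga2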
Cheers,
Michael
PS: This follows best practices documented in the master with agents/clients chapter in the docs.