Yes - you specify the address of the master on the agent (as its parent) but
you do not specify the address of the agent on the master.
Icinga will happily connect either way round; this forces the agent to connect
to the master and prevents the master from trying something that won’t work.
Once a connection is in place, started from either end, Icinga just works.
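To illustrate, an agent-side /etc/icinga2/zones.conf as produced by the node wizard usually looks roughly like this (a minimal sketch, hostnames are placeholders; only the parent endpoint gets a host attribute, so it is the agent that dials out):

object Endpoint "master1.example.com" {
  // host set here -> the agent actively connects to the master
  host = "master1.example.com"
  port = "5665"
}

object Zone "master" {
  endpoints = [ "master1.example.com" ]
}

object Endpoint NodeName {
}

object Zone ZoneName {
  endpoints = [ NodeName ]
  parent = "master"
}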
I noticed that I need to understand the difference between active and passive checks better. If I block incoming traffic on port 5665, checks are no longer scheduled as planned. So maybe I need to understand how to let the agent do its own scheduling.
So I think I'm missing something about how to configure the master not to run the checks itself but to wait for results, and how to configure the agent to send data instead of waiting for commands.
For the complete picture: I configured both using the node wizard as if they were in the same network and then closed the agent's firewall for incoming traffic.
I'm also interested in how this would work. Looking at this setup, I'm asking myself how the agent would know when to check itself, since it cannot receive any configuration info like check intervals from the master.
I also don't know how the agent would even schedule its own checks.
"Icinga2 Agent = Yes" will create the necessary Endpoint and Zone objects for the agent, which, together with "Accepts config", should switch the check execution to the agent IF the service template has the command_endpoint attribute set (Run on Agent = Yes in the Director).
vars.notification["mail"] = {
    groups = [ "icingaadmins" ]
  }
}
As already said, the setup is very basic, with accept_commands and accept_config enabled. The idea of a centralized configuration will most likely not work if the agent will not connect and ask for updates itself, yes or no?
As @moreamazingnick said, setting the host attribute for the master in the agent's zones.conf file is enough to tell the agent to connect to the master.
Therefore you then don't need the host attribute set in the master's configuration for the agent.
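On the master side, the corresponding objects can then simply omit the host attribute (a sketch with a hypothetical agent name):

object Endpoint "agent01.example.com" {
  // no host attribute: the master waits for the agent to connect
}

object Zone "agent01.example.com" {
  endpoints = [ "agent01.example.com" ]
  parent = "master"
}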
Windows agent example from our setup (done via Director)
To then have the check executed on the agent, the service template needs the command_endpoint attribute set. That's why I wanted to see one of the service configs.
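In plain Icinga 2 DSL (outside the Director) that corresponds to something like the following sketch. host.vars.agent_endpoint is an assumed custom variable holding the agent's endpoint name:

apply Service "disk" {
  check_command = "disk"

  // run the check on the agent's endpoint instead of on the master
  command_endpoint = host.vars.agent_endpoint

  assign where host.vars.agent_endpoint
}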
AND IT WORKS! Thanks for giving me this additional hint.
And yes, the services are being told to run on the agent and not on the master. I don't really see the benefit in running something on the master unless it is some kind of "check through SSH" scenario.
The last thing I can't figure out: if the agent now runs "stand-alone" and cannot get config updates through the API, where is the agent's config stored so I can edit it myself? The standard config in /etc/icinga2/conf.d has been excluded. Do I simply include it, or do you suggest finding and updating the config created by the API?
What makes you think/say that? Without a connection to the master/parent the agent doesn’t do anything, because it has no config.
Synced configuration ends up in /var/lib/icinga2/api/.
It will get replaced/updated with every agent reload or config deployment.
For the case that the agent is not connected, you could have some form of check (executed by the parent, the master in your case) that monitors the connection status.
The cluster-zone check command would be suitable for that. Either configure it as an additional service check or even as the host check command (instead of ping/hostalive/…).
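A sketch of such a connectivity check on the master; the assign rule and the agent_endpoint custom variable are assumptions from my own setup:

apply Service "agent-connection" {
  check_command = "cluster-zone"

  // the zone to check; this defaults to the host name anyway
  vars.cluster_zone = host.name

  assign where host.vars.agent_endpoint
}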
Mostly lack of knowledge and experience. Sync did not work because of an incorrect zone configuration on the agent. Now that I have corrected the agent's zones.conf, updates work as well. So, despite my lack of experience, my problem has been solved.
I don't see anything I could configure in the given API path, but as long as the config sync from the master works, everything is good.
Apart from the folders that hold the configuration the agent got from the master, there is nothing in that path to configure.
The whole process is explained in the docs: technical-concepts/#config-sync