Config Sync Mode

Is it possible to use “Top Down Config Sync” with the Director (as stated here)?

@dnsmichi stated here in Icinga Agent: Top Down Config Sync that the “local scheduler” on agents will eventually be deprecated. How should we realize SLA reports then?


Sure, you can always sync objects from masters to satellites. Agents/clients should be used in combination with the parent zone scheduling the checks and using the command endpoint to tell the agent to execute the check.

That’s common best practice. If you’re planning to let the agent use its own local scheduler, you can do so. This adds the overhead of syncing config objects, spending local CPU resources on scheduling and, in addition, storing replay logs on connection loss. For a “dumb” agent which should just execute checks, typically against the local filesystem and processes, the “top down config sync” method is a bit of an overhead.
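For reference, the command endpoint approach described above follows the pattern from the Icinga 2 distributed monitoring docs; in plain Icinga 2 DSL it might look like this (endpoint, zone and host names here are placeholders, not taken from this thread):

```
// Top-down command endpoint: the parent zone schedules the check and
// instructs the agent to execute it. No config objects are synced to
// the agent, and no local scheduler runs there.
object Endpoint "agent1.example.com" {
  host = "agent1.example.com"
}

object Zone "agent1.example.com" {
  endpoints = [ "agent1.example.com" ]
  parent = "master"
}

object Host "agent1.example.com" {
  check_command = "hostalive"
  address = "192.0.2.10"
  vars.agent_endpoint = name
}

apply Service "disk" {
  check_command = "disk"
  command_endpoint = host.vars.agent_endpoint
  assign where host.vars.agent_endpoint
}
```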

The above role typically refers to masters and satellites, not agents. The docs might need some clarification here, but I am waiting for others to do that; I’ve written the entire distributed monitoring docs twice already.

I’m not sure how SLA reporting is related here: whenever the checks return check results, state changes are logged on the master and stored in the backend. That is what is used for SLA reporting.


Thanks @dnsmichi for your fast response (as always :wink: )
I know that Icinga supports this, but how do I set it up in the Director?

My specific use case is that I have a satellite/agent which executes some important checks but has an unreliable connection. Currently it works as a “dumb” execution bridge, but I want it to execute the checks using config sync (because of the unreliable connection to the master), so no check results are lost (or checks left unexecuted).
Is this possible using the Director? I tried to create a zone for the satellite/agent and assign my remotely checked host to that zone. This fails because it complains about the zone name being redefined (it is already set up by the Director because of the agent setup).

Can/should I name the zone differently than the satellite’s FQDN? Can the satellite even execute checks on its own, or can it only “relay” and delegate checks to downstream agents?

Currently it also seems impossible to “reuse” a remote check for multiple zones because you can’t set the service’s zone dynamically, correct? (You can only specify the zone once per service template.)


The host must not be set up as an agent; instead, you define its cluster zone as the satellite zone (one of the many satellite zones you’ll likely have). That zone must be configured manually in Icinga 2’s zones.conf and then imported with the kickstart wizard.
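A minimal zones.conf fragment on the master for such a satellite zone might look like this (endpoint and zone names are placeholders):

```
// zones.conf on the master — sketch only, names are placeholders.
object Endpoint "satellite1.example.com" {
  host = "satellite1.example.com"   // master actively connects to the satellite
}

object Zone "satellite1.example.com" {
  endpoints = [ "satellite1.example.com" ]
  parent = "master"
}
```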

Once the host belongs to the satellite zone, the master expects the endpoints in this zone to actively execute the checks. The easiest way to verify this is to assign a simple disk service check to that host and see what happens.
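For example, the rendered config for such a test could look roughly like this (host and zone names are placeholders); the Director deploys it into the satellite zone’s directory, so the satellite schedules and executes the checks itself:

```
// Deployed into zones.d/satellite1.example.com/ — sketch only.
// Objects in this zone are synced to the satellite, which runs
// them with its own local scheduler.
object Host "remote-host.example.com" {
  check_command = "hostalive"
  address = "192.0.2.20"
}

object Service "disk" {
  host_name = "remote-host.example.com"
  check_command = "disk"
}
```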


Thanks again for your response.
I manually created a zone in zones.conf named after the satellite’s FQDN and imported it with the kickstart wizard.
Sadly, it does not execute the checks, or at least the master is not informed. According to the Director’s Deployment Log the check is placed in the correct zone (the satellite’s), but on the satellite only the “director-global” zone shows up in /var/lib/icinga2/api/zones.
Thanks for your help and greetings from drageekeksi-city :wink:

Can you share the rendered config snippet from the Director deployment? Or query the REST API on the master for this host object, e.g.:

```
curl -k -s -u root:icinga 'https://localhost:5665/v1/objects/hosts/<hostname>?pretty=1'
```

I’m primarily interested in how the master sees that object.
On the satellite, check the logs upon reconnect to see which zones are synced, and post that here too.
Last but not least, please share the zones.conf you currently have on both the master and the satellite.
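For comparison, a satellite-side zones.conf matching the setup discussed above might look like this (endpoint and zone names are placeholders). Note that a satellite only syncs zones it is a member of, plus global zones such as director-global:

```
// zones.conf on the satellite — sketch only, names are placeholders.
object Endpoint "master1.example.com" {
  host = "master1.example.com"
}

object Zone "master" {
  endpoints = [ "master1.example.com" ]
}

object Endpoint "satellite1.example.com" {
}

object Zone "satellite1.example.com" {
  endpoints = [ "satellite1.example.com" ]
  parent = "master"
}

object Zone "director-global" {
  global = true
}
```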


PS: Every city in Austria is a drageekeksi city :slight_smile: