Yeah, thank you, I know the ITL; I only mentioned this to explain why I need to “segregate” certain notification types to a specific master/satellite.
I see what you mean about assigning the notification object to a specific zone, but the Master would still need to monitor and notify on any cluster issue (through the ITL), therefore it would still need the notification object assigned to itself as well.
My understanding was that by placing certain host objects in a specific zones.d folder, only that satellite would perform the needed checks and send the notifications, but it looks like the Master still sees them as well, and still performs any possible checks/notifications too.
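For reference, this is the kind of layout I mean (zone and file names are just examples from my setup):

```
/etc/icinga2/zones.d/
  satellite-01/
    hosts.conf      # host objects intended to be checked by satellite-01
  satellite-02/
    hosts.conf      # host objects intended to be checked by satellite-02
  master/
    cluster.conf    # cluster checks meant to run on the master
```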
As I said, I still need the notification and check objects on the Master as well, so it can monitor itself.
Sending notifications depends on how you define your notification objects. If you apply all of them on your master, then the master will send notifications no matter which of the satellites performed the check.
I have to admit that I am then a bit lost on how exactly host.zone == ZoneName works.
I thought it was a variable, and that you were telling me to assign the notification only to hosts that are part of a specific satellite: assign where host.zone == "satellite-02" || host.zone == "satellite-01"
Therefore, should I assign this condition to all the services as well, so that each zone performs the checks for the host objects of its own zone?
ZoneName is a constant that is defined individually on each Icinga machine (master and satellites). The restriction host.zone == ZoneName will only create notification objects for hosts and services that belong to that particular Icinga machine.
This has nothing to do with where a check is performed.
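A minimal sketch of such a rule, assuming a standard mail notification setup (the template and user group names here are placeholders):

```icinga2
apply Notification "mail-host-admins" to Host {
  import "mail-host-notification"   // placeholder template
  user_groups = [ "icingaadmins" ]  // placeholder user group

  // ZoneName resolves differently on each endpoint, so this single rule
  // only creates notification objects for hosts of the local zone.
  assign where host.zone == ZoneName
}
```

Because ZoneName is evaluated locally, the same rule yields different notification objects on the master and on each satellite.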
Satellite → sends metrics/notifications for the hosts it monitors
Master → sends metrics/notifications for the cluster check. And since only the master has the IDO DB feature enabled (as suggested for best practice), it’s the only one that can properly run the cluster check and return the IDO DB metrics.
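For completeness, ZoneName is usually set in constants.conf on each node, which is why the same assign rule behaves differently per machine (zone names below are examples):

```icinga2
// /etc/icinga2/constants.conf on the master
const ZoneName = "master"

// /etc/icinga2/constants.conf on satellite-01
const ZoneName = "satellite-01"
```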