IC2 API Persistence Question in Cluster Scenario


Can somebody give me a hint on how API persistence works in a cluster scenario (2 x masters)?

What I mean is: is there a safe way to create objects (e.g. scheduled downtimes) and persist them through a cluster config change/reload? In my architecture, I have 2 x master nodes (one of them connecting to the other), so when, for example, I’m adding a new Endpoint (using config files), I need to reload IC2 on both servers via systemctl reload icinga2. Every time I do it (one node at a time), my API-created objects disappear.

I’ve seen this post:

and it explains why I cannot see my API-created objects in the icinga2 object list output, and why icinga2 daemon -C resolves that. But it’s still not clear to me what the process of changing the config in a cluster scenario should be (we have zones.d on the 1st master, and the 2nd one receives it via the IC2 API) so that API-created downtimes persist.

I know I can create these in zones.d as .conf files, but I want to integrate the IC2 REST API into some scripts that may be run ad hoc and may need to create some extra/temporary downtimes.
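Something along these lines is what I have in mind — just a sketch; the hostname, credentials and host filter below are placeholders:

```shell
# Sketch of an ad-hoc downtime via the Icinga2 REST API.
# master1.example.com, apiuser:apipass and web01 are made-up placeholders.
API_URL="https://master1.example.com:5665"
START=$(date +%s)
END=$((START + 3600))   # one-hour maintenance window

# Build the JSON payload for the /v1/actions/schedule-downtime action
PAYLOAD=$(printf '{"type":"Host","filter":"host.name==\\"web01\\"","start_time":%s,"end_time":%s,"fixed":true,"author":"adhoc-script","comment":"temporary maintenance"}' "$START" "$END")
echo "$PAYLOAD"

# Uncomment to actually submit the call:
# curl -k -s -u apiuser:apipass -H 'Accept: application/json' \
#   -X POST "$API_URL/v1/actions/schedule-downtime" -d "$PAYLOAD"
```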

Many thanks

As far as the Icinga2 API goes, only one master will hold the IDO object at a given time. From some very brief and not thorough observations of mine: if you make an API call that creates/updates/deletes an object against the active master, everything should be fine, but if you make it against the non-active master, nothing happens with it (or maybe the call gets forwarded to the active master, I don’t remember :sweat: )

As far as integrating the API into some scripts in an HA master scenario goes:
if I’m right about the API behaviour, I think you may need some logic/load balancing/something to determine which master is active before submitting the calls.
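For what it’s worth, a rough sketch of that kind of logic could just probe each master and use the first one whose API answers. The hostnames, credentials and the pick_master helper below are all made up, and note this only checks reachability — it does not tell you which node currently holds the IDO object:

```shell
# Hypothetical helper: return the first master whose REST API responds.
# API_USER/API_PASS and the hostnames are placeholders.
API_USER="apiuser"
API_PASS="apipass"

pick_master() {
  for m in "$@"; do
    if curl -k -s --max-time 2 -u "$API_USER:$API_PASS" \
         "https://$m:5665/v1/status" >/dev/null 2>&1; then
      echo "$m"
      return 0
    fi
  done
  return 1   # no master reachable
}

# TARGET=$(pick_master master1.example.com master2.example.com)
# ...then submit all API calls to "$TARGET" only.
```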

It’s worth noting here that my HA master setup was short-lived, because I made the assumption that I could load balance the API this way, but that was not the case. I could also have had something misconfigured and experienced the behaviour I mentioned. One thing is clear though: only 1 master holds the IDO object at a given time, and HA master setups are active-passive.

As far as the post you linked goes, I can expand on it a little with an example (even though you seem to understand it): when Icinga Director creates objects via the API, Director must also trigger a reload (i.e. a “deployment” in Director terms) for Icinga to see the new objects.
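As a rough illustration of that reload step (hostname and credentials are placeholders, and I’m assuming the generic restart-process API action here rather than anything Director-specific):

```shell
# Two ways to make a running Icinga2 pick up newly deployed objects.
# Hostname and credentials are placeholders.

# 1) Locally on the master:
# systemctl reload icinga2

# 2) Remotely, via the API's restart-process action:
URL="https://master1.example.com:5665/v1/actions/restart-process"
echo "$URL"
# curl -k -s -u apiuser:apipass -H 'Accept: application/json' -X POST "$URL"
```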

Thanks a lot, so this may be part of my problem, as I’m currently load balancing API calls across both cluster members, so technically the server without the IDO object can receive them.

I will test the behaviour after submitting API calls only to the active master. For now, my problem is that when I add new endpoints/zones to zones.conf in my cluster, I need to reload it to apply the config changes, and when I do, my REST-API-created objects (downtimes) disappear.
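For reference, the kind of zones.conf change I mean looks like this (endpoint/zone names are examples only):

```
// zones.conf on both masters -- names are examples only
object Endpoint "master1.example.com" { }
object Endpoint "master2.example.com" { }
object Endpoint "satellite1.example.com" { }   // the newly added endpoint

object Zone "master" {
  endpoints = [ "master1.example.com", "master2.example.com" ]
}

object Zone "satellite" {
  endpoints = [ "satellite1.example.com" ]
  parent = "master"
}
```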