I have a question regarding the concept we want to implement. Before I proceed to test it in the test environment, I was hoping someone with experience could share some insights.
Currently, we have two masters in a high-availability (HA) setup. Both masters use the Graphite object writer, and there is a single Graphite server where the object writers from both masters store their performance data.
Now, my question is: if we used two Graphite servers, one per object writer, would this create a “split brain” scenario? For instance, Master1’s object writer pointing to graphite-srv1, and Master2’s object writer pointing to graphite-srv2.
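For context, here is a minimal sketch of what I mean (hostnames are placeholders, not our real servers):

```
# On Master1 – /etc/icinga2/features-available/graphite.conf
object GraphiteWriter "graphite" {
  host = "graphite-srv1.example.com"  // hypothetical FQDN
  port = 2003
}

# On Master2 – same file, pointing at the second server
object GraphiteWriter "graphite" {
  host = "graphite-srv2.example.com"  // hypothetical FQDN
  port = 2003
}
```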
I noticed that the IDO feature (icinga_ido) is responsible for ensuring that both masters store data in both databases (master1 has db1, and master2 has db2), so the data should remain available even if one master goes down.
Alternatively, if we plan to use more Graphite nodes instead of just one, do we need to consider Graphite clustering, where we would only point icinga2’s object writer configuration at the IP address/FQDN of the Graphite cluster?
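In that case I imagine both masters would use an identical writer config, pointing at a single cluster entry point (again a placeholder name, assuming something like a VIP or a carbon-relay in front of the cluster):

```
# Same config on both masters
object GraphiteWriter "graphite" {
  host = "graphite-cluster.example.com"  // hypothetical VIP/FQDN of the cluster frontend
  port = 2003
}
```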
Thank you all for brainstorming and sharing your insights.
So the default should be “enable_ha = false”? If it is not specified in the object writer configuration, does that mean both masters actively store perf data in Graphite’s database, so there will be no “split brain” scenario?
So, if we want the object writers of both HA satellites in Zone 1 to store data simultaneously in both Graphite servers (one per satellite), we need to set “enable_ha” to “false”, right?
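In other words, something like this on each satellite (hostname is a placeholder; I am assuming the GraphiteWriter supports the enable_ha attribute here):

```
# On each Zone 1 satellite – graphite.conf
object GraphiteWriter "graphite" {
  host = "graphite-srv1.example.com"  // graphite-srv2 on the second satellite
  port = 2003
  enable_ha = false  // both HA endpoints write metrics independently
}
```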
When one satellite goes offline, there will be missing data on its Graphite server for that period. Is there some kind of synchronization when it comes back online, so the previously offline satellite can receive the missing data from the one that kept working?
Thanks. Sorry if these questions seem basic, but the HA concept in Icinga 2 is still a bit unclear to me.