Behavior Master HA cluster and migration

Hey everybody!

I have a few questions about how icinga2 works.

I have two Masters in HA cluster (normally it works well), a Satellite linked to the Masters and two Agents on the Satellite.

My main goal is to always receive notifications via Slack AND be able to access the Icinga Web 2 interface, even if one of the Masters is down. But the problem is: when I turn off Master 1, Master 2 doesn't send any notifications, and I can't tell what's going on since Icinga Web 2 becomes inaccessible…

So first question: is it possible to do that with icinga2? If so, how can I do it?

Second question: when I set up a check to see how the cluster works, the check reports that the cluster consists of Satellite 1 and Master 1… but it's supposed to be Master 1 and Master 2…

I would really appreciate some help; I've been trying to find the answer for days.

Thank you

Best regards,



Can you please share more details about the setup, such as:

  • icinga2 --version from all nodes
  • zones.conf from all nodes, to get a better idea about the setup and possible missing configuration
  • Specific Notification and NotificationCommand objects via icinga2 object list --type ...
  • Are the notification scripts available on the second master?
  • For checks: Add a concrete host/service object, share screenshots and API queries to illustrate your problem

In case you haven’t found it yet, the troubleshooting docs also hold some more details on collecting insights into your system.


Hi Michael,
Thank you for your reply :slight_smile:

For icinga2 --version:
Master 1: r2.11.2-1
Master 2: r2.11.2-1
Satellite 1: r2.11.2-1
Agent 1: r2.11.2-1
Agent 2: r2.10.5-1

> Are the notification scripts available on the second master?

Yes, Master 2 retrieves all scripts and configuration from Master 1 during synchronization. The files are then located in /var/lib/icinga2/api/zones/…

And these are my zones.conf files:
Agent-1:
Agent-2:
Satellite-1:
Master-1:

Master-2:

For the cluster check:
Conf file in /etc/icinga2/zones.d/master/cluster.conf

And the output of this:

Thank you man! :smiley:


That won't work, since the cluster sync only handles configuration files.

Please show a specific host/service object where these notifications do not work, and extract the following bits:

```
icinga2 object list --type Host --name ...
icinga2 object list --type Notification --name ...
```

Also, please show the NotificationCommand used for Slack, including the file location on both masters (that’s what’s referenced in the command attribute).
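For reference, a Slack NotificationCommand typically looks something like the sketch below. The path and variable names are assumptions for illustration, not your actual config:

```
object NotificationCommand "slack-notifications-command" {
  // The script must exist at this path on BOTH masters, since either
  // one may execute the notification after a failover.
  command = [ "/etc/icinga2/scripts/slack-notifications.sh" ]

  env = {
    // Illustrative environment variables; your script may use different ones.
    SLACK_WEBHOOK_URL = "$slack_webhook_url$"
    NOTIFICATION_TYPE = "$notification.type$"
    HOST_NAME         = "$host.name$"
  }
}
```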


Only the configuration files? So the HA cluster failover does not work? Is that why, when I switch off Master 1, Master 2 doesn't take over?

This is the output for the Host:
```
root@*****:/etc/icingaweb2# icinga2 object list --type Host --name master-2
Object 'master-2' of type 'Host':
  % declared in '/etc/icinga2/zones.d/master/hosts-master.conf', lines 35:1-35:29

  * __name = "master-2"
  * action_url = ""
  * address = "192.168.x.x"
    % = modified in '/etc/icinga2/zones.d/master/hosts-master.conf', lines 38:2-38:26
  * address6 = ""
  * check_command = "hostalive"
    % = modified in '/etc/icinga2/zones.d/global-templates/templates.conf', lines 19:3-19:29
    % = modified in '/etc/icinga2/zones.d/master/hosts-master.conf', lines 37:2-37:28
  * check_interval = 60
    % = modified in '/etc/icinga2/zones.d/global-templates/templates.conf', lines 16:3-16:21
  * check_period = ""
  * check_timeout = null
  * command_endpoint = ""
  * display_name = "master-2"
  * enable_active_checks = true
  * enable_event_handler = true
  * enable_flapping = false
  * enable_notifications = true
  * enable_passive_checks = true
  * enable_perfdata = true
  * event_command = ""
  * flapping_threshold = 0
  * flapping_threshold_high = 30
  * flapping_threshold_low = 25
  * groups = [ ]
  * icon_image = ""
  * icon_image_alt = ""
  * max_check_attempts = 3
    % = modified in '/etc/icinga2/zones.d/global-templates/templates.conf', lines 15:3-15:24
  * name = "master-2"
  * notes = ""
  * notes_url = ""
  * package = "_etc"
  * retry_interval = 30
    % = modified in '/etc/icinga2/zones.d/global-templates/templates.conf', lines 17:3-17:22
  * source_location
    * first_column = 1
    * first_line = 35
    * last_column = 29
    * last_line = 35
    * path = "/etc/icinga2/zones.d/master/hosts-master.conf"
  * templates = [ "master-2", "generic-host" ]
    % = modified in '/etc/icinga2/zones.d/master/hosts-master.conf', lines 35:1-35:29
    % = modified in '/etc/icinga2/zones.d/global-templates/templates.conf', lines 14:1-14:28
  * type = "Host"
  * vars
    * slack_notifications = "enabled"
      % = modified in '/etc/icinga2/zones.d/global-templates/templates.conf', lines 22:2-22:37
  * volatile = false
  * zone = "master"
    % = modified in '/etc/icinga2/zones.d/master/hosts-master.conf', lines 39:2-39:16
```
And for the Notification object: when I ran "icinga2 object list --type Notification --name master-2" I didn't get any output…

This is my Slack configuration on Master 1, at /etc/icinga2/zones.d/master/slack-notifications/slack-notifications-user-configuration.conf:


Not sure what you are talking about. Syncing a notification script with the cluster config sync is not supported; that's what I wanted to say here.

The HA cluster failover is a different thing in this regard. The object authority updates will fire once master2 detects that master1 has been disconnected.
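In case it helps, the relevant knob for this is `enable_ha` on the features involved. A sketch with the defaults shown (the `host` value is a placeholder, adjust to your environment):

```
// /etc/icinga2/features-available/notification.conf (on both masters)
object NotificationComponent "notification" {
  // true (the default) means only one master sends notifications at a time;
  // the partner takes over once it detects the disconnect.
  enable_ha = true
}

// /etc/icinga2/features-available/ido-mysql.conf (on both masters)
object IdoMysqlConnection "ido-mysql" {
  host = "localhost"   // point both masters at the same database
  enable_ha = true     // only the active master writes to the IDO
}
```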

You can read more about this here:

In terms of the notification object, try something like icinga2 object list --type Notification --name 'master-2*'.

An alternative way would be to query the REST API at /v1/objects/notifications on both masters.
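Something along these lines, run on each master (assumes the default API port 5665 and the `root` API user created during setup; adjust credentials to yours):

```
curl -k -s -u root:yourpassword 'https://localhost:5665/v1/objects/notifications'
```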

Still, I’d like to see the NotificationCommand itself, and the script location referenced in the command attribute.


Okay, let me rephrase that! With my current configuration, if Master 1 is down, does Master 2 take over?
In that case, with only Master 2 running, could I still open Icinga Web 2 to see the hosts etc.? At the moment I can't: if Master 1 is down, Icinga Web 2 becomes inaccessible too. I was expecting that with the HA cluster, Icinga Web 2 would migrate to Master 2 (with all the checks) when Master 1 is down, so the interface stays reachable.
Does my current configuration allow that, or are some parameters missing, e.g. for HA IDO?

Not supported? Oh, I see. So that means that even if the HA cluster is working properly, Slack notifications won't be sent. But if I set up email notifications, should those work?

Oh yes, that's better! :slight_smile:
This is my notification output:
output.txt (29.2 KB)


And this is the output of icinga2 object list --type NotificationCommand:
output_command.txt (7.4 KB)

Since the host/service objects are located inside the master zone, the failover will happen and master2 alone will run the checks and notifications.
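For reference, the zones.conf on both masters should define the master zone with both endpoints, roughly like this (hostnames are placeholders):

```
object Endpoint "master1.example.com" { }
object Endpoint "master2.example.com" { }

object Zone "master" {
  // Two endpoints in the same zone enable HA failover between them.
  endpoints = [ "master1.example.com", "master2.example.com" ]
}

object Endpoint "satellite1.example.com" { }

object Zone "satellite" {
  endpoints = [ "satellite1.example.com" ]
  parent = "master"
}

object Zone "global-templates" {
  global = true
}
```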

That's a different thing: the web server application runs independently of the Icinga 2 HA cluster.

If you want to use two web server instances and switch between them, you'll need to look into HAProxy and the like. @Carsten has written some howtos in this regard.
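As a rough sketch of what that could look like (hostnames, ports, and the health-check URL are assumptions, not a tested config):

```
# /etc/haproxy/haproxy.cfg (excerpt)
frontend icingaweb
    bind *:80
    default_backend icingaweb_servers

backend icingaweb_servers
    balance roundrobin
    # Illustrative health check; pick a URL that exists in your setup.
    option httpchk GET /icingaweb2
    server master1 master1.example.com:80 check
    server master2 master2.example.com:80 check backup
```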

To conclude here - Icinga 2 and Icinga Web 2 are two different applications.


As with every notification script, you need to install it on the host itself, e.g. in /etc/icinga2/scripts, /usr/lib/nagios/plugins, or your own custom path.

The configuration is a different bit.
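Concretely, copying the script over from the first master could look like this (hostname and paths are assumptions based on typical setups):

```
# Run on master 1: copy the Slack script to master 2 and make it executable.
scp /etc/icinga2/scripts/slack-notifications.sh master2:/etc/icinga2/scripts/
ssh master2 chmod 0755 /etc/icinga2/scripts/slack-notifications.sh
```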

Trust me, that works. I have designed and built this architecture together with Gunnar several years ago.

Since the NotificationCommand actually uses a Function, this isn’t printable. Please show the DSL code from /etc/icinga2/zones.d/master/slack-notifications/slack-notifications-command.conf instead.



All right. So to summarize: my Icinga 2 cluster itself works well, but not the Icinga Web 2 part with IDO MySQL, which I still need to configure. Thanks, I'll read the articles from @Carsten.

For Slack notifications via the script, it doesn't work because I didn't install the script locally on Master 2?

This is my slack-notifications-command.conf:
slack-notifications-command.conf (6.3 KB)

Thank you