Set up a virtual master

api.conf.orig isn't active and gets ignored; only the contents of api.conf matter.

master 1

```
object ApiListener "api" {
  ticket_salt = TicketSalt
}
```

master 2

```
object ApiListener "api" {
  accept_config = true
  accept_commands = false
}
```
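For comparison, a symmetric api.conf for an HA master pair usually enables both accept flags and uses the same ticket salt on both nodes. This is a sketch, not your exact config; TicketSalt refers to the constant normally defined in constants.conf, which must hold the same value on both masters:

```
// Sketch: identical api.conf on both HA masters.
// In an HA zone, both masters typically accept config and commands.
object ApiListener "api" {
  accept_config = true
  accept_commands = true
  ticket_salt = TicketSalt  // constant from constants.conf, same on both nodes
}
```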

OK… I updated api.conf with accept_config and accept_commands set to true on both nodes, and ticket_salt = TicketSalt on master1, but the second master still doesn't take over after I shut down the first master.

Can you see any problems in the logs? Maybe activate the debuglog feature and restart Icinga on both nodes. I hope your number of hosts and services is limited, as I can only activate it for minutes at a time before my disk fills up.
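Toggling the debug log can be done with the icinga2 CLI. A sketch, assuming a systemd-based host; the debug output lands in /var/log/icinga2/debug.log by default:

```shell
# Enable the debug log and restart so it takes effect
icinga2 feature enable debuglog
systemctl restart icinga2

# ... reproduce the problem, then inspect /var/log/icinga2/debug.log ...

# Disable it again before the disk fills up
icinga2 feature disable debuglog
systemctl restart icinga2
```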

I see only this in /var/log/icinga2/icinga2.log on master 1
Dumping program state to file '/var/lib/icinga2/icinga2.state'
[2022-09-28 10:25:12 +0200] information/WorkQueue: #7 (ApiListener, RelayQueue) items: 0, rate: 0.55/s (33/min 161/5min 491/15min);
[2022-09-28 10:25:12 +0200] information/WorkQueue: #8 (ApiListener, SyncQueue) items: 0, rate: 0/s (0/min 0/5min 0/15min);
[2022-09-28 10:25:32 +0200] information/IdoMysqlConnection: Pending queries: 5 (Input: 4/s; Output: 3/s)

I'm out of ideas, but I would just use the Linuxfabrik lfops Ansible roles to set up the master zone.
If that fails, I would file the bug as a GitHub issue :wink:

Should I otherwise remove this satellite node, which was later made a master, and create a new node as a master from the start?

As far as I understand, it should all be defined in /etc/icinga2/ and /var/lib/icinga2/, so maybe there is some kind of remnant of node 2 as a satellite under /var/lib/icinga2/ on node1. Maybe try grep -r hostname_of_node2 /var/lib/icinga2/ on node1?
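If the plain grep is too noisy, -l lists only the file names that mention the node instead of every matching line. A minimal sketch against a throwaway directory (ftest02 stands in for the real node name; the file names mimic what lives under /var/lib/icinga2/):

```shell
# Demonstrate grep -rl on a throwaway directory instead of the live state dir
tmpdir=$(mktemp -d)
echo '{"name":"ftest02!ping4","type":"Service"}' > "$tmpdir/icinga2.state"
echo '{"name":"othernode!load"}' > "$tmpdir/modified-attributes.conf"

# -r: recurse, -l: print only the names of files with a match
grep -rl ftest02 "$tmpdir"   # prints only .../icinga2.state

rm -rf "$tmpdir"
```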

I get a lot of output when I run grep -r ftest02 /var/lib/icinga2 on node1… I don't really understand what it is.

It should be every line containing the host name, but if the file has no line breaks, grep will return the whole file.

There is some info about ftest02; I took a sample, and there were a few more lines like this:

1710:{"name":"ftest02!ping4","type":"Service","update":{"acknowledgement":0,"acknowledgement_expiry":0,"acknowledgement_last_change":0,"check_attempt":1,"executions":null,"flapping":false,"flapping_buffer":0,"flapping_current":0,"flapping_index":7,"flapping_last_change":0,"flapping_last_state":0,"force_next_check":false,"force_next_notification":false,"last_check_result":{"active":true,"check_source":

"check_source":"ftest02","command":["/usr/lib/nagios/plugins/check_ping","-H","127.0.0.1","-c","5000,100%","-w","3000,80%"],"execution_end":1664368763.672962,"execution_start":1664368759.560634,"exit_status":0,"output":"PING OK - Packet loss = 0
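As far as I know, each entry in icinga2.state is stored on one long line as <length>:<json> (so the 1710 is a length prefix, not a line number — that's an assumption based on the output above). Stripping the prefix and piping through a JSON formatter makes the fields readable; a sketch with a shortened sample line:

```shell
# Sample state-file entry, shortened; the real ones are much longer
line='1710:{"name":"ftest02!ping4","type":"Service"}'

# ${line#*:} strips everything up to the first colon (the length prefix),
# leaving plain JSON that python's json.tool can pretty-print
echo "${line#*:}" | python3 -m json.tool
```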

This looks like the config it gets from the config master. Does this end up on your second node?

I am not sure; it's difficult to figure out… Is there anything else from which it can be confirmed?

If you stop Icinga on the second node, clear out most of the config in conf.d/ and zones.d/, delete /var/lib/icinga2/api, and then start the Icinga service, most of the objects in /var/lib/icinga2/ should come from node 1. I'm not sure, but on node 2 everything in /var/lib/icinga2/api could be sent by node 1 anyway.
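The steps above could look like this. A sketch only: paths are the defaults, moving the directory aside is safer than deleting it outright, and anything destructive should be backed up first:

```shell
# On node 2: stop the daemon, move the replicated API state aside,
# then start again so node 1 can resync the zone config
systemctl stop icinga2
mv /var/lib/icinga2/api /var/lib/icinga2/api.bak
systemctl start icinga2

# After a minute or two, check what node 1 pushed back
ls /var/lib/icinga2/api/zones/
```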

I deleted the api directory on node2 after stopping Icinga and then restarted… The api directory came back…
Should I also have removed the files from conf.d/ and zones.d/?

Hi Dominik,

Is there anything else we can try? What could be the problem?

Just saw Distributed Monitoring - Icinga 2 and remembered this thread. Could this be the solution?

@rivad
When I add any checks like disk or ping on master1 in the zones.d/master directory, I can see those on master2 as well in /var/lib/icinga2/api/zones/ …etc, so config sync is happening. But here is my question now: I use the web interface of master1, with the hosts and services of both masters added to it, and with local databases on both. When I shut down master1, the web interface doesn't load. How can this be troubleshot?

I use a shared DB for both masters, so I don't know what happens in your scenario.
First I would look in the web server logs.
Does the web interface on master2 work while master1 is running?

I would also like to have just one database instead of two…
If I use the web interface of master1, both masters are there, but when master1 is shut down, the web interface doesn't work either. The same happens with master2.

Do I also have to install icinga and icinga-ido-mysql on the dedicated database server if I want to use a shared database?
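For reference, with a shared database the IdoMysqlConnection object on both masters points at the same DB host, and the IDO feature's built-in HA makes sure only the active master writes at a time. The database server itself only needs MySQL/MariaDB plus the imported IDO schema, not a running Icinga daemon. A sketch of /etc/icinga2/features-available/ido-mysql.conf; db.example.com and the credentials are placeholders:

```
// Identical on both masters; host and credentials are placeholders.
object IdoMysqlConnection "ido-mysql" {
  host = "db.example.com"
  user = "icinga"
  password = "changeme"
  database = "icinga"
  enable_ha = true  // only the active HA master writes to the DB
}
```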