So we have a standalone Icinga 2 server at a location and would like to set up a master on a VM, so that if any problem happens the other can take over. I am pretty new to this and would like to explore more and set up a nice environment. Can someone guide me?
So that basically means we will have 2 masters, one on the VM and the other standalone.
Version used: icinga2 2.10.1
Operating System and version: Debian 11
Icinga Web 2 version and modules (System - About):

| Module | Version |
| --- | --- |
| grafana | 1.3.4 |
| idoreports | 0.9.1 |
| ipl | v0.3.0 |
| monitoring | 2.10.1 |
| setup | 2.10.1 |
| test | 2.10.1 |
As per this document, it sets up one node as a master and the second one as an agent, but the naming convention says first master/second master. If I follow this process, will one support the other? Meaning, if one goes down, will the other take over?
If you set up Icinga 2 in a cluster, the nodes in the same zone (here: master) run simultaneously (active-active).

By default:

- they distribute the checks between them
- both send notifications
- both write into the database
- both receive updates from child nodes
- both sign certificate requests

Therefore both masters have to have the same features enabled.
If one of the masters goes down, the remaining one will take over its checks.
For the configuration there is only one master (the first to be set up), meaning you won’t be able to do config changes/deploys when the config master is down.
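A minimal zones.conf sketch for such an active-active master zone could look like the following. The hostnames and addresses are placeholders, not taken from this thread; the same file would go on both masters:

```
// zones.conf (sketch; endpoint names and addresses are placeholders)
object Endpoint "master1.example.com" {
  host = "192.0.2.10"   // first (config) master
}

object Endpoint "master2.example.com" {
  host = "192.0.2.11"   // second master
}

object Zone "master" {
  // both endpoints in one zone = active-active HA pair
  endpoints = [ "master1.example.com", "master2.example.com" ]
}
```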
I did not properly understand what this means:
For the configuration there is only one master (the first to be set up), meaning you won’t be able to do config changes/deploys when the config master is down.
Also, can you please share a document that guides setting up another master when one master already exists?
The config master holds the configuration for hosts, services, templates, … under /etc/icinga2/[conf.d|zones.d]
This configuration gets replicated to the second master via the API when both are connected.
If the config master is down you obviously can’t change the config there, and therefore the second master will not receive any new configuration, but will remain running with the config it was last sent via the API.
As the config received via the API resides in a different folder and is basically read-only, manual changes there will not take effect (and could possibly break the configuration).
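To make the two folders concrete, this is roughly how they map on a default installation (paths are the usual Debian defaults, shown here as an assumption):

```
/etc/icinga2/zones.d/<zone>/        # editable; only maintained on the config master
/var/lib/icinga2/api/zones/<zone>/  # replica received via the API on the other
                                    # master; treated as read-only, do not edit
```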
Can we make config changes on the second master while the first is down, so that the first later gets updated with the changes made on the second? Sorry for asking so many questions; I want to understand completely before starting. Hope you don't mind. Thanks for the documentation link.
Maybe not the original config master, but would it be possible to make the second master node the new config master and add a new node to the master zone, which would then receive the config from the former second master that is now the config master?
Hi
I am back.
So I set up a master with the web UI and one satellite.
The web interface is up and running on the master IP. I have also added a check for the satellite host.
I made config as per How to set up High-Availability Masters
After this I followed a few more steps as per Initial Sync for new Endpoints in a Zone, where I copied the contents (/var/lib/icinga2/icinga2.state and /var/lib/icinga2/api/packages/_api) from the master to the satellite (this is to make the satellite also a master).
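For reference, that initial sync roughly amounts to the following, run on the node being promoted. This is a sketch assuming default paths, SSH access to the first master (here called master1, a placeholder), and the Debian default service user:

```
# on the new second master, with Icinga 2 stopped
systemctl stop icinga2
scp master1:/var/lib/icinga2/icinga2.state /var/lib/icinga2/
scp -r master1:/var/lib/icinga2/api/packages/_api /var/lib/icinga2/api/packages/
# Debian packages run Icinga 2 as the "nagios" user; adjust if yours differs
chown -R nagios:nagios /var/lib/icinga2
systemctl start icinga2
```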
I tested shutting down the master to see if the other node would take over, but it didn't… maybe I am missing part of the process.
Could you please guide me on the next steps?
Can you post the zones.conf of the original master and the new master?
Also have a look into the logs and review the output of icinga2 feature list on both nodes.
I see no problem, except that the director-global zone isn't actually global if not all nodes have it defined.
Did you also enable the HA settings of the enabled features, like ido-mysql?
Does the NodeName (/etc/icinga2/constants.conf) match the endpoint names?
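For example, with a database shared by both masters, the ido-mysql feature carries an enable_ha setting along these lines (connection values here are illustrative placeholders, not from this thread):

```
// /etc/icinga2/features-enabled/ido-mysql.conf (sketch)
object IdoMysqlConnection "ido-mysql" {
  host = "db.example.com"   // placeholder
  user = "icinga"
  password = "changeme"
  database = "icinga"
  enable_ha = true   // only one master writes at a time; the other takes over on failure
                     // set to false if each master writes to its own database
}
```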
I actually commented out the director part on master2 afterwards, since it was not syncing… yes, the ido-mysql feature is already enabled.
Is it something to do with the Director? I mean, some installation step… I see something in this link: Installation - Icinga Director
I am using a separate database for each node, so in that case I set enable_ha = false…
So there are 2 conf files, one api.conf and the other api.conf.orig. In api.conf.orig I see this on both nodes:
```
object ApiListener "api" {
  //accept_config = false
  //accept_commands = false

  ticket_salt = TicketSalt
}
```
Should I uncomment these and change them to true?
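For what it's worth, api.conf.orig is typically just a backup left behind by the node wizard; the live file is api.conf. On a node that should receive config and commands from its peer (here, the second master), the ApiListener usually ends up looking something like this sketch (adjust to your setup):

```
object ApiListener "api" {
  accept_config = true     // allow config sync from the config master
  accept_commands = true   // allow commands from other endpoints

  ticket_salt = TicketSalt
}
```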