Icingaweb2 Director configuration master/slave: how to do it?

Hi,

I agree with Roland and would just start from scratch.

Choose one node as the master, create all hosts and configure the remaining 8 nodes as satellites by using the node wizard.

And one thing that might be important when it comes to your boss’ expectations because you mention master/slave and master-satellites:
Master/slave setup: there is no such thing in the Icinga 2 universe, unlike databases such as MySQL. “Masters” are the “configuration master(s)”. If you have just a single master node, that node will always be the master. If you configure 2 nodes in the “master zone”, Icinga 2 takes care of the HA mechanisms and clustering, and you should treat both nodes as masters at all times.
(If you add a 3rd node to the zone -> everything will go BOOM! Seriously, don’t do it. :wink: A max of 2 nodes per cluster is what’s supported by Icinga).
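For reference, a minimal zones.conf sketch of such a two-node master zone (hostnames and addresses below are placeholders, not from this thread):

```icinga2
// zones.conf on both master nodes; names and addresses are examples only
object Endpoint "master1.example.com" {
  host = "10.0.0.1"
}

object Endpoint "master2.example.com" {
  host = "10.0.0.2"
}

object Zone "master" {
  // exactly two endpoints: Icinga 2 handles HA between them
  endpoints = [ "master1.example.com", "master2.example.com" ]
}
```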

And the actually important part is to manage your boss’s expectations and tell him that the setup you described means you’ll end up with a single master, which is a SPOF. The connected satellites won’t (be able to) take over its work if the master node crashes. So I would recommend adding a 10th node to get HA and redundancy for the master.

All this and much more is also explained pretty well in the link @rsx shared above.

Cheers


We have had many conversations about managing zones and endpoints with the Director, and the conclusion and recommendation is to have them in zones.conf only (and import them afterwards with kickstart).
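As a sketch of that recommendation (names and addresses are placeholders), a satellite defined in the master’s zones.conf, which the Director’s kickstart then imports as external objects:

```icinga2
// /etc/icinga2/zones.conf on the master
object Endpoint "satellite1.example.com" {
  host = "10.7.0.11"   // address on the satellite network
}

object Zone "satellite1" {
  endpoints = [ "satellite1.example.com" ]
  parent = "master"
}
```

After changing zones.conf, re-run the kickstart (Icinga Web 2 → Director → Kickstart, or `icingacli director kickstart run`) so the Director picks up the zones and endpoints as external objects.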


Thanks for all the answers, they are very important.
Just for clarification:
I’ll start from scratch, building a parallel system.
I could choose 2 master nodes and 6 satellites if that’s better.

What the boss wants is:

  • Connect to one server and see the situation of all sites
  • Be able to add/remove services/checks on all satellites

Assuming the solution with 1 master and all satellites:

On the master node I’ll install icinga2 + icingaweb2 + director.
On the satellites, only icinga2.

From the CLI (on the master) I’ll create a ticket and add the nodes via the wizard; the same on each satellite, specifying the master node name, and then accept the request on the master.
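A sketch of those steps, assuming default paths and a placeholder CN:

```shell
# On the master: generate a ticket for the satellite's certificate CN
icinga2 pki ticket --cn satellite1.example.com

# On each satellite: run the node wizard, answer that this node is a
# satellite/client, point it at the master, and paste the ticket when prompted
icinga2 node wizard
```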

Now, via the Director, I’ll create a zone for each satellite, right?

If I want to add a service to host B, I’ll build the service and specify the host’s zone name as the zone.

Is it right?

Thanks
Davide

Most of your assumptions/expectations are correct, but still do not use director for managing zones and endpoints.

Certificates can be handled in two ways. I prefer on-demand signing.

With that setup your boss will have just one interface for managing the whole environment.


Yes, actually I use on-demand certificates in all my other installations.

How can I manage without the Director on the master? I use it to create/manage hosts and services.

Managing hosts, services, commands etc. with the Director is fine and of course intended. But not zones and endpoints; those shall live in zones.conf only.


OK, understood.
I can configure zones in zones.conf, but can I specify a zone for a service/host when using the Director?

Unfortunately, I admit that I still confuse endpoints and zones; I still do not understand how they work.

Yes, for hosts (it is not needed for services, as they belong to a host that already carries the zone information).
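A hypothetical Director CLI example of that (host name, address, and zone are placeholders, and the exact flag names may differ; the same can be done in the web form by setting the host’s cluster zone field):

```shell
# Create a host in the Director and pin it to a satellite zone,
# so checks for it are executed by that satellite
icingacli director host create webserver-site1 \
  --address 10.7.0.50 \
  --zone satellite1
```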

Yeah, this is one of the more complicated topics for newcomers, but also crucial to understand. Therefore, I’d recommend a training, a consultant, or just a test environment where you can learn and test the basics.

I’ll find info to learn that!

" Nodes which are a member of a zone are so-called Endpoint objects."

In my case, with 1 master and 8 satellites:
a master zone for the master server,
and a satellite zone for each satellite.

In my case, are the endpoints the hosts that the satellites monitor?

Hi @asyscom,

every node is an endpoint, the master as well as each satellite. So you’ll end up with 9 zones (1 master zone, 8 satellite zones) and 9 or 10 endpoints (9 = no redundant master; 10 = redundant master, i.e. 2 master nodes + 8 satellites).

Perfect, it’s clear now… I hope :slight_smile:

Hello,
I’ve tried to build the configuration via the Director and it seems everything is OK.
I’ve added 2 satellites, using a dedicated host template for each one where the zone is the satellite itself and the parent is the master.
I’ve added 4 services (ssh, disk, load, user) and verified via the debug log on the satellite that the checks are run by it and not directly by the master; I stopped Icinga on the satellite and received an error… so it works, I suppose.
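For anyone reproducing this, a sketch of one way to verify it on the satellite (default packaging paths assumed):

```shell
# On the satellite: enable the debug log and restart Icinga 2
icinga2 feature enable debuglog
systemctl restart icinga2

# Watch for check results being produced locally
tail -f /var/log/icinga2/debug.log
```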

Now, to finalize my job, I need more help because the situation is a little bit complicated.

The problem is that each satellite has 2 network interfaces, one for talking to the other satellites and one for monitoring the datacenter via the sensor gateway.

The subnet the satellites use to talk to each other is 10.7.XX, and each satellite has a small switch on the 192.168.0.x subnet where the sensor gateway lives.

How should I deal with creating the zones? If I make a template for each satellite, I have to put in the IP on the subnet that talks to the master, but locally it will have to check via the other network interface.
I hope I was clear.

Could I specify the satellite IP in the host object and create an endpoint for each satellite with the IP on the sensor gateway subnet? Might that work?

I’m sorry, but I’m not able to understand what you’re trying to do. In general, satellites don’t talk to each other. A check is executed on the master or on a satellite, depending on the zone the host is assigned to, OR on an agent (which is where command_endpoint comes into play).

For example, http can be used to check a web server remotely, which means this kind of check is typically executed on a master or satellite. Whereas disk or load needs to run directly on the target machine, i.e. be executed by the agent.
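Applied to the sensor-gateway case above, the rendered object could roughly look like this (host name, address, and check command are placeholders). Because the host sits in the satellite’s zone, that satellite executes the check, reaching the gateway via its second interface:

```icinga2
object Host "sensor-gw-site1" {
  address = "192.168.0.10"      // only reachable from the satellite
  zone = "satellite1"           // pins check execution to that satellite
  check_command = "hostalive"
}
```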

Yes, it’s similar: on each of my satellites I add a host (the sensor gateway), and the checks are executed by the satellite directly against it, without an agent, using check_modbus on the second network interface.

How can I manage this from the master machine? The master cannot reach the sensor gateway interface directly.

Looking at those concepts, can you tell me where the Director server is, or is the director running on the masters?