Master - Satellite - Client Setup (How?)


First, what I want:

  • One server with Icinga Web 2 and the Director to manage configurations/hosts/groups etc. from Company/Network “B” & “C”, and also to see the checks/problems.

  • Two other Icinga 2 servers that do their jobs/checks in Company/Network “B” and “C”. I want to use one Icinga 2 server per Company/Network.

What I already have:

  • one Ubuntu server with Icinga 2 + Web + Director (works fine)
  • two Ubuntu servers with Icinga 2

My question:

How can I connect these Icinga 2 servers to the “master” so that I can manage these instances from the master and also see the checks in the master’s web interface?

P.S. I don’t understand the part about the “endpoints” and the “zones”. I read a lot about it, but I don’t understand it. I think the servers use zones to connect to each other, right?

Please let me know what I can do, or send me tutorials for setting this up. :slight_smile:

Best regards,


I understand it looks complicated at the beginning. But it isn’t.

Each zone contains one or two nodes (called endpoints).

zone1 (e.g. called master) -> connects to zone2 (e.g. zone worker1). If there are two nodes in a zone, both agree with each other on who checks what.

You have to extend your zones.conf as described here.
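A minimal sketch of what such a zones.conf extension could look like on the master. All hostnames, addresses and zone names here are placeholders, not taken from this thread:

```
// zones.conf on the master (placeholder names)

object Endpoint "master.example.com" {
}

object Zone "master" {
  endpoints = [ "master.example.com" ]
}

object Endpoint "worker1.example.com" {
  host = "192.0.2.10"   // satellite address, so the master can connect to it
}

object Zone "worker1" {
  endpoints = [ "worker1.example.com" ]
  parent = "master"     // makes worker1 a child zone of master
}
```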


because my plan is e.g.:

Icinga 2 + Web + Director (Network A - overview of the checks and problems from Icinga 2 in Network B)
Icinga 2 (Network B - does checks like ping)
Switch / Desktop PC (Network B - a normal network device, monitored to see whether it is alive)

Yes, it’s like described in the docs.

The master zone (network A) has to know the child zone (network B), and the child zone has to know about the master zone.
If you install Icinga as an agent on the desktop PC, this installation only has to know about the parent zone (network B).
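On the agent (the desktop PC) the zones.conf would then only contain the parent zone and the agent’s own zone. A sketch with placeholder names:

```
// zones.conf on the desktop PC agent (placeholder names)

object Endpoint "sat-b.example.com" {
  host = "192.0.2.20"   // satellite in network B; the agent connects here
}

object Zone "network-b" {
  endpoints = [ "sat-b.example.com" ]
}

object Endpoint "desktop-pc.example.com" {
}

object Zone "desktop-pc.example.com" {
  endpoints = [ "desktop-pc.example.com" ]
  parent = "network-b"  // the agent only knows its parent zone
}
```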

Okay, do I need to run the icinga2 node wizard or not?

It depends.
The node wizard overwrites existing config files like zones.conf, constants.conf etc. Whether that is a problem for you is something to consider. You could also extend the config files by hand and restart Icinga 2. Certificates can also be created manually.
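For the manual route, the `icinga2 pki` CLI can create and sign the certificates without the wizard. A rough sketch; all hostnames, the ticket value and the paths are placeholders:

```
# On the master: generate a ticket for the new node
# (requires the TicketSalt constant to be set; the CN is a placeholder)
icinga2 pki ticket --cn sat-b.example.com

# On the satellite/agent: create a local key pair
icinga2 pki new-cert --cn sat-b.example.com \
  --key /var/lib/icinga2/certs/sat-b.example.com.key \
  --cert /var/lib/icinga2/certs/sat-b.example.com.crt

# Fetch the master's certificate so it can be verified and trusted
icinga2 pki save-cert --host master.example.com --port 5665 \
  --trusted-cert /var/lib/icinga2/certs/trusted-parent.crt

# Request a signed certificate from the master, using the ticket from above
icinga2 pki request --host master.example.com --port 5665 \
  --ticket <ticket-from-master> \
  --key /var/lib/icinga2/certs/sat-b.example.com.key \
  --cert /var/lib/icinga2/certs/sat-b.example.com.crt \
  --trusted-cert /var/lib/icinga2/certs/trusted-parent.crt \
  --ca /var/lib/icinga2/certs/ca.crt
```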

I totally agree that at the beginning, the concept of zones, endpoints etc. is quite hard to understand.
It is especially hard to grasp if you are new to monitoring.

The master(s) are the alpha and omega. They know all hosts, services and so on.
In short: They know everything.
Masters can execute checks directly or indirectly by a satellite.
If the master checks directly, the endpoint (see below) is in the zone of the master.

Satellites execute the checks of a specific zone. Instead of the masters, the checks for this zone are executed by the satellites and reported back to the master.
If the zone gets too big, another satellite can be installed and the satellites share the work for their zone.
It is not always necessary to have satellites; that depends on the infrastructure.

Each host in a zone is an endpoint; this includes masters and satellites. Each endpoint belongs to exactly one zone, and only one.
The endpoints which are not a master or satellite only know their parent zone (master or satellite zone, whatever it is called) and their own, unique zone; they do not know the other endpoints.
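Putting the three roles together, one way this could look in zones.conf (all names are placeholders):

```
object Endpoint "master1" { }
object Zone "master" {
  endpoints = [ "master1" ]
}

// a satellite zone with two endpoints sharing the checks of that zone
object Endpoint "sat-b1" { }
object Endpoint "sat-b2" { }
object Zone "satellite-b" {
  endpoints = [ "sat-b1", "sat-b2" ]
  parent = "master"
}

// an agent: one endpoint in its own, unique zone below the satellite zone
object Endpoint "agent1" { }
object Zone "agent1" {
  endpoints = [ "agent1" ]
  parent = "satellite-b"
}
```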

I described something here.

Every Icinga agent, whatever its role, can be installed via the node wizard, but there are also other ways. Since this is a totally different question from your initial one, I am not sure what the actual problem is.

Maybe it is a bit easier if you explain to us which points are not clear.


Personally I think the word “zone” should be changed, and especially that the
examples in the documentation
which use “EU” and “USA” should be changed, because they really do make people
think that a zone is some geographical region, or at the very least a group of
related machines.

Anyone not using a High Availability setup will have each endpoint in exactly
one zone, and each zone will contain exactly one endpoint (an HA setup will
have exactly two endpoints in the Master zone, but that’s the only exception).

In general it is a direct one-to-one correspondence, and I think this is not
clear to people new to Icinga.

The word “client” got changed a few months ago to “agent”, and this has
probably helped people to understand the structure of things a bit better; I
hope the same can be done for the word “zone” to help avoid people thinking
that a zone is a group of several machines all sharing some common property
such as location, function, operating system, development/production, business
unit, etc. It isn’t.

I suggest the word “node” to replace “zone”, because the things shown on
distributed monitoring setups with Master, Satellites and Agents, joined to
each other by lines in a hierarchy, are graph nodes.

There may well be other, possibly better, suggestions :slight_smile:



My problem now is: what is my next step?

I have my three Icinga servers and all of them work fine. What should I do now?

I think I have to create the zones.conf, right? But how? Does anyone maybe have an example config for this thread to explain it? Maybe that would make it easier for me to understand.

But so far, many thanks for all the comments and help!

If I’ve understood everything correctly, you would need these steps (at least):

  1. Run icinga2 node wizard on your master (selecting to configure it as a master, which is the very first question, and disable the conf.d inclusion).
  2. Run icinga2 node wizard on your first satellite (selecting to configure it as a satellite/agent with its own zone name and your master as the parent, and disable the conf.d inclusion).
  3. Do the same as #2 for your second satellite (selecting to configure it as a satellite/agent with its own zone name and your master as the parent, and disable the conf.d inclusion).
  4. Manually add zone and endpoint objects to zones.conf on your master
  5. Sign your certificates if required
  6. restart all icinga cores
  7. Configure director and run kickstart wizard
  8. Start configuring your host and service objects
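For step 4, the zones.conf on the master could look roughly like this for the setup in this thread. The endpoint names and addresses are placeholders; the zone names have to match whatever you answered in the wizards in steps 2 and 3:

```
object Endpoint "master.example.com" { }
object Zone "master" {
  endpoints = [ "master.example.com" ]
}

object Endpoint "sat1.example.com" {
  host = "192.0.2.11"
}
object Zone "sat1" {
  endpoints = [ "sat1.example.com" ]
  parent = "master"
}

object Endpoint "sat2.example.com" {
  host = "192.0.2.12"
}
object Zone "sat2" {
  endpoints = [ "sat2.example.com" ]
  parent = "master"
}

// global zones for templates/commands synced to all nodes
object Zone "global-templates" {
  global = true
}

object Zone "director-global" {
  global = true
}
```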


Perfect instructions! That’s what I need.

Just one question:

Point “4.” - do I have to manually create the folders/zones in my folder /etc/icinga2/zones.d/?

Like …:


And I have to create this manually on all Icinga cores, right?

After that… can I see the status of my satellites in the Director module? To check whether the satellites are online/connected to the master?

One question for the future… Where in the Director can I select in which zone a host/service is created or executed?

Big thanks!

No, as you are using the Director (meaning it will do the job for you).

No, as you are using the Director (meaning it will do the job for you).

It depends on how you arrange your service objects.

Please do not create any zones with the Director (even though this is still possible). Each host object needs to be a member of a zone (in your case: master, sat1 or sat2), and this defines where a service check is executed by default. For checks that should be executed on an agent, you need to define these checks with the option run on agent in the Director.
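In the rendered Icinga config this ends up looking roughly like the following sketch (object names and the address are made up for illustration): the host is pinned to a satellite zone, and the run on agent option sets command_endpoint on the service:

```
object Host "linux01" {
  check_command = "hostalive"
  address = "192.0.2.30"
  zone = "sat1"                  // check is executed from satellite zone sat1
}

apply Service "ssh" {
  check_command = "ssh"
  command_endpoint = host.name   // "run on agent": execute on the agent itself
  assign where host.address
}
```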



I’ll test it and come back to this thread tomorrow. :slight_smile:

Hey, I’m back.

Now my setup is installed.

Via the Director I added one host (Ubuntu Server 20.04) with a simple SSH check, and a hostalive ping check for an HP switch.

So far so good, everything is fine, both checks work perfectly.

But now I see this:

icinga01 - satellite 1 for network A
icinga02 - satellite 2 for network B
icinga03 - master

The switch and my Ubuntu server are in the same network as the master.

But how can I assign a host that is in another network, e.g. satellite 1’s network, to that satellite?

My plan is to manage all services/hosts and commands just via the Director. :slight_smile:

Best regards,

I think I got it.

Is the option found in the “host templates” section at the bottom? :slight_smile:

Just change the zone a host belongs to, and yes, it’s part of a host template or of the host itself.



But in the overview for creating a new host I have no entry to set the zone, is that right?

It’s called Cluster Zone.

Many thanks.

Everything works fine. You are my man.