Icinga2/Icingaweb2 server running - now to add 4 remote hosts

Before I start, a story… the folks at our office do not allow our servers direct access to the internet. We have an internal yum server from which we must perform all our installs. Since I am the one doing monitoring, I spun up a separate instance that was connected to the internet, ran all the downloads with yumdownloader, and then copied all the rpms to the yum server. I wrote a script so I can do it faster next time. I did a few packages manually, but this gets most of what was needed for Icinga2 and Icingaweb2. Perhaps it could be provided as a utility for others who face this predicament.


# downloads icinga2 and dependent packages

yum -y install epel-release yum-utils

yum -y install http://rpms.remirepo.net/enterprise/remi-release-7.rpm
yum-config-manager --disable remi-php56
yum-config-manager --enable  remi-php73


# packages to pull down along with their dependencies (trim or extend to taste)
pkgList=(icinga2 icingaweb2)

for i in "${pkgList[@]}"; do
   sudo yumdownloader --resolve "$i"
done
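For the copy-back step, the idea is roughly this. A hedged sketch: the host name `yumserver` and the repo path are placeholders, and it assumes `createrepo` is installed on the repo host:

```shell
# push the downloaded rpms to the internal yum server (host/path are placeholders)
rsync -av ./*.rpm yumserver:/var/www/html/repo/icinga/

# on the yum server: (re)build the repo metadata so clients can install from it
ssh yumserver 'createrepo --update /var/www/html/repo/icinga/'
```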

If nothing else, the script above provides a list (perhaps incomplete) of what the heck I need to install the base components.

My new dilemma

But now with Icinga2 and Icingaweb2 installed, I am only monitoring the Icinga server.

I added 4 remote servers to the hosts.conf file, so now I can see the extra 4 servers, each with the 2 default services: ping and ssh. Good start. Small, simple accomplishment.
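For reference, the additions to hosts.conf were along these lines; the hostnames and addresses here are placeholders, and the stock conf.d/services.conf apply rules pick up ping and ssh on their own:

```
// /etc/icinga2/conf.d/hosts.conf (names/IPs are placeholders)
object Host "remote-1.localdomain" {
  import "generic-host"
  address = "10.0.0.11"
  vars.os = "Linux"   // the default ssh apply rule matches on this
}
// ...repeated for remote-2 through remote-4
```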

Next task - I must decide how I will run all the checks on remote servers.

I see the Configuration section and that all looks approachable. All the config files in the documentation look more or less like what I see in the /etc/icinga2/ directory.

Next I see a chapter on Services and Plugins, but this seems too detailed and I will read it later. I already saw and understood services.conf from the previous chapter.

Next up, Distributed Monitoring. Oh my. It’s a long chapter; at a quick look it is 120 web screens long. I need a web server (Master?) with Icinga2/Icingaweb2, and I have 4 “satellites”. I am starting to wonder how much work this will be… and whether I will be done in time to meet my friends at the beer garden in a few hours.

So now I am desperate to know the fastest, best-practice, recommended approach to just get this done.

I just searched the Distributed Monitoring chapter and there are 26 “you want” references. IOW: if you want to, whether you want to, in case you want to, etc. There are too many choices, and I hear my friends laughing and having fun.

I skip Distributed Monitoring for now in the hope that there is a simpler approach.

Ah. Here is Agent Based Monitoring. Now it feels like we get to the heart of monitoring… but wait; it says: Prior to installing and configuring an agent service, evaluate possible options based on these requirements (I count 8 things I must evaluate). Another decision about things I will have to research on Google.

The first agent described is Icinga as agent (I think it is just Icinga2) and here are the kind of words I’ve been looking for… “For the most common setups on Linux/Unix and Windows, we recommend to setup the Icinga agent in a distributed environment.” But now I’m back to Distributed Monitoring.

Next I see a list of key benefits of using the Icinga Agent. Do I need convincing? The document already said “For the most common setups on Linux… we recommend”, so let’s get to it.

Then to learn how to Install the Icinga Agent, a link takes me back to the Distributed Monitoring chapter where it is titled: Agent/Satellite Setup and in the first sentence of that section it informs me that I must first run the master setup.

So it seems the most common and recommended approach is Distributed Monitoring with Icinga as agent. OK. Fine. Why didn’t the docs just say that in the beginning?

I can tell by now, I will not be meeting my friends for beer, or finishing this within the next few days.

If anyone has a TL;DR version or an outline for the least number of steps to set up a master/satellite configuration with ssh keys, I would appreciate any references.

I love Icinga. I’m trying to keep it light and a bit funny, but I’m also making the point, as I did in another post, that the documentation feels like it is getting in the way of just monitoring. Monitoring is a critical function within any operation, but for me it’s one of many pieces I manage. I don’t have the bandwidth to become an expert in the different ways to install and integrate this monitoring tool. I need the developers of Icinga to be the experts. My expertise comes near the end of all the configuring, when it is time to make sense of the data flowing through graphite/grafana and to make sure that only truly critical issues get sent as alerts.

One other point, if you please. I see references to using vim as an editor over and over, when we often just need to see the contents of the config files. It seems to me that if we are logging into hosts to edit configs, instead of deploying locally edited, templated configs with tools like ansible, we’re dealing with too much tedium. In general, if we edited a file once, we’ll likely edit it again and again, so we might as well deploy it in a reliable, repeatable manner. And I recommend ansible because I think it hits the right balance between simplicity and power.

I’ll be impressed if you get here without thinking to yourself, TL;DR. But I appreciate if you can understand what I’m trying to say.

Now it’s late, but I’ll have a beer anyway. Cheers.

Continuing now on my journey…

As instructed in Master Setup I ran the node setup wizard on my icinga server and went with the defaults. It said everything is OK.
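For anyone who would rather script it than answer wizard prompts, the same thing can be done with the node setup CLI; the CN below is a placeholder, and the exact flags may vary a bit by Icinga2 version:

```shell
# non-interactive equivalent of the master node wizard (CN is a placeholder)
icinga2 node setup --master --cn icinga-master.localdomain --zone master
systemctl restart icinga2
```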

Then I went to look at my Icingaweb2 and everything is gone. I would have expected the defaults I had for the server’s self-monitoring to be somehow carried over from my conf.d settings. I see that the conf.d include was commented out (disabled) in icinga2.conf. Where do I now define the hosts and services I already had? I shall proceed and hope that the answer becomes obvious in a future step.

The next section describes Signing Certificates on the Master but I want to get back to what I was doing, Agent/Satellite Setup.

I see it says: Icinga 2 on the master node must be running and accepting connections on port 5665. Well, maybe, I hope so. How would I know? I ran the node wizard on my Icinga server (Master) and I assume it got that all configured.
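To answer my own “How would I know?”, a few quick checks; the hostname is a placeholder, and this assumes ss and nc are available on the boxes:

```shell
# is the api feature enabled on the master?
icinga2 feature list

# is anything listening on 5665?
ss -tlnp | grep 5665

# from the remote: can we actually reach the master on 5665?
nc -zv icinga-master.localdomain 5665
```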

Agent/Satellite Setup on Linux - it says: Please ensure that you’ve run all the steps mentioned in the agent/satellite section… Wait, where am I? This is a few lines down from the section I’m reading, and it is telling me to ensure I’ve done the steps from the prior 2 sentences, which told me to run the master setup; which I think I just did. Moving on…

“The next step is to run the node wizard CLI command.” But actually it looks like I am to run the icinga2 pki ticket command, although I’m not sure how that fits. Wait. I misread the instructions, because first I am to go to my first remote server and install Icinga2 and the plugins. OK. Done. Proceeding.

So I go back to the master and run icinga2 pki ticket --cn remote-1.localdomain - does it give me a key or a ticket?
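It is a ticket: a string tied to the CN that the agent presents along with its certificate request. Roughly, assuming my hostnames:

```shell
# on the master: generate a ticket for the agent's CN
TICKET=$(icinga2 pki ticket --cn remote-1.localdomain)

# the node wizard on the remote will prompt for this value,
# so keep it handy (or feed it to a scripted "icinga2 node setup")
echo "$TICKET"
```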

Now partially through the node wizard on the remote and realize port 5665 is not open on my master server:

firewall-cmd --zone=public --add-port=5665/tcp --permanent
firewall-cmd --reload

I go through the node wizard on my remote and go with defaults, enter the ticket generated on the master and at the end it says Done. Yay! I restart Icinga2 on the remote, I do the same on the Master and I go to Icingaweb2… still nothing. I would expect at least the hosts to be viewable…

Next I am directed to the section on Configuration Modes. That seems a little strange to me, because I thought I was already on a configuration mode; master/satellite with Icinga2 as agent.

Right off the bat I see a new term: Icinga 2 cluster nodes. When I see cluster, I think of failover or HA implementations. I think maybe this just means the collection of servers in the master/satellite mode. We’ll see.

Next it says: “The preferred method is to configure monitoring objects on the master and distribute the configuration to satellites and agents.” - Perfect. That sounds like the way to go. Let’s go.

I see the Top Down configuration modes, Top Down Command Endpoint and Top Down Config Sync, and I cannot tell exactly which one is “the preferred method”, so I go to the next chapter and go with the scenario: Master with Agents.

I add a dir called master to zones.d/ and I add 2 files to that dir:
hosts.conf - exactly as shown except with FQDNs and IPs changed.
services.conf - exactly as it is.
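For the record, trimmed down, the two files look roughly like this (FQDNs/IPs are placeholders, per the docs’ Master with Agents scenario):

```
// zones.d/master/hosts.conf
object Host "remote-1.localdomain" {
  check_command = "hostalive"      // executed on the master
  address = "10.0.0.11"
  vars.agent_endpoint = name       // endpoint name == host name
}

// zones.d/master/services.conf
apply Service "disk" {
  check_command = "disk"
  command_endpoint = host.vars.agent_endpoint  // executed on the agent
  assign where host.vars.agent_endpoint
}
```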

When I restart Icinga2, I see my agent node is shown in Icingaweb2 with the 2 services ping and disks. Woohoo. But I don’t see the master server with the collection of services I had when I was using conf.d/ files. Ah. I just tested adding the master server reference to the zones.d/master/hosts.conf file and it did not break anything. I now see it in Icingaweb2. Yay. Small steps.

Next session I will try to understand how the sync thing works. Although I think I’d be happy with the approach I used with NRPE as the agent. We’ll see.

The learning curve for distributed monitoring is very steep and its documentation is overwhelming. But once you get familiar with it, everything looks easy and logical.

In every case you need a master, and you have already run the node wizard to achieve this. During this step you are asked whether to disable the conf.d inclusion, and the recommendation is yes. But this requires a new place for your conf files. In case you have satellite(s) and want to use config sync, you need to create a directory in /etc/icinga2/zones.d with a name identical to your satellite’s zone. In these directories you place the host object conf files according to which satellite each host belongs to. Service object conf files are best stored in one or more global zones.
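A sketch of the layout described above; the zone names are examples:

```
/etc/icinga2/zones.d/
├── master/             # host conf files checked from the master zone
├── satellite-zone-1/   # host conf files belonging to that satellite's zone
└── global-templates/   # service/template conf files, synced to all nodes
```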

Without satellite(s) you can place all conf files in any directory, as long as it is included in icinga2.conf.

To be continued…