Making a single Icinga endpoint into a master

Good morning,

We have come to a point where we need to scale up our environment, and I want to turn the server that is currently running into a master, since I didn’t do that when it was installed (which in retrospect was stupid). To make a long story short, we preferred to have it split up between one machine in the production environment, one for the test environment and one in the cloud.

Now the production machine needs to run as a master, with one or more satellites added so that different zones can monitor different things.

I’ve been googling and searching the forum etc. for the last week without finding an answer to my biggest concern: is it possible to do this without wiping the configuration for the checks that are already set up? It would be a great deal of work to restore that functionality if that were the case.

This is what I’m having to deal with today:

  • Version used: icinga2 2.12.1-1, will be upgraded to 2.12.3 in the coming days.

  • Operating System: RHEL8

  • Enabled features: api, checker, command, ido-mysql, influxdb, mainlog, notification

  • Icinga Web 2 2.8.2

  • director 1.7.2

  • doc 2.8.2

  • grafana 1.3.6

  • idoreports 0.9.1

  • incubator 0.5.0

  • ipl 0.4.0

  • monitoring 2.8.2

  • pdfexport 0.9.1

  • reactbundle 0.7.0

  • reporting 0.9.2

And this is how the Endpoint object for the machine currently looks (declared in zones.conf):

    % declared in '/etc/icinga2/zones.conf', lines 7:1-7:24
    * __name = "the.machine.that.i.work.with"
    * host = "the.machine.that.i.work.with"
      % = modified in '/etc/icinga2/zones.conf', lines 8:3-8:17
    * log_duration = 86400
    * name = "the.machine.that.i.work.with"
    * package = "_etc"
    * port = "5665"
    * source_location
      * first_column = 1
      * first_line = 7
      * last_column = 24
      * last_line = 7
      * path = "/etc/icinga2/zones.conf"
    * templates = [ "the.machine.that.i.work.with" ]
      % = modified in '/etc/icinga2/zones.conf', lines 7:1-7:24
    * type = "Endpoint"
    * zone = ""
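
For reference, the two zones.conf lines the dump points at boil down to roughly this (port 5665 and the log_duration are just the defaults):

    object Endpoint "the.machine.that.i.work.with" {
      host = "the.machine.that.i.work.with"
    }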

Thanks for any feedback and insights regarding this.

Best regards
Michael

Could you please explain what that means? Did you run icinga2 node wizard at any time, or did you configure the certificate stuff in some other, more manual way? Do you have any agents configured?

That depends. I’d assume you at least have to rearrange your configs with regard to the required new global zones. If your question is more about the check result history, then there is no disruption (even if config files are rearranged), as long as your new setup can continue to use the current host and service names. Example: best practice for host names is to use the FQDN. If you use shorter names, e.g. server01, it will be hard to identify the machine across the distributed setup, and you can only have one machine with that name in the whole environment.
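
To give you an idea of what I mean by global zones: declarations along these lines in zones.conf on every node. The names below are just the usual conventions, and director-global only matters because you run the Director:

    object Zone "global-templates" {
      global = true
    }

    object Zone "director-global" {
      global = true
    }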

Could you please explain what that means? Did you run icinga2 node wizard at any time, or did you configure the certificate stuff in some other, more manual way? Do you have any agents configured?

I didn’t run the node wizard during the installation, which in hindsight might have been a stupid decision.
And when checking ca list, it is empty since there is nothing connected to it; this has been a 1:1 move from OP5 where we only had checks running over NRPE etc., which makes me think that a few steps might have gone missing.
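
If I read the distributed monitoring docs right, the step that seems to be missing on my side would be something along these lines (not tested on this box yet):

    # turn the existing node into a master: sets up the local CA,
    # creates a certificate for this node and enables the api feature
    icinga2 node setup --master --cn the.machine.that.i.work.with
    systemctl restart icinga2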

That depends. I’d assume you at least have to rearrange your configs with regard to the required new global zones. If your question is more about the check result history, then there is no disruption (even if config files are rearranged), as long as your new setup can continue to use the current host and service names. Example: best practice for host names is to use the FQDN. If you use shorter names, e.g. server01, it will be hard to identify the machine across the distributed setup, and you can only have one machine with that name in the whole environment.

That is good to know; the to-be master has a very specific hostname/FQDN, so that shouldn’t be an issue.
But what I’m really fishing for is whether the actual hosts and their service checks would disappear from the server, because that would not be a good step. If that were the case, then I guess it’s better to just order a new machine and redo the whole thing rather than being wild and crazy and risking the work that has been done up until today.

To explain it a bit further, the procedure will most likely stay the same with the new machines as well.

Master → Satellite 1 + Satellite 2.

The master will still perform checks in the network it is situated in, whilst the satellites will perform checks in the networks they are situated in.
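
As far as I understand the docs, that layout would translate into a zones.conf on the master roughly like this (all names below are placeholders, not our real hosts):

    object Endpoint "master.example.com" {
    }
    object Zone "master" {
      endpoints = [ "master.example.com" ]
    }

    object Endpoint "satellite1.example.com" {
      host = "satellite1.example.com"
    }
    object Zone "network-1" {
      endpoints = [ "satellite1.example.com" ]
      parent = "master"
    }

    object Endpoint "satellite2.example.com" {
      host = "satellite2.example.com"
    }
    object Zone "network-2" {
      endpoints = [ "satellite2.example.com" ]
      parent = "master"
    }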

Today it’s only one machine that runs checks against a whole lot of NRPE endpoints etc.

I meant all of your monitored hosts (and your Icinga machine).

I don’t think icinga2 node wizard has an impact on your setup since you are using local checks and NRPE. If I were you, I’d take a snapshot or a copy of the monitoring machine and simply give it a try.
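
If a snapshot is not an option, at least copy the configuration, the local state and the IDO database before changing anything; the paths and database name below are the defaults, adjust them to your setup:

    # configuration plus local state (certificates, state file, api packages)
    tar czf icinga2-backup.tar.gz /etc/icinga2 /var/lib/icinga2
    # check result history lives in the IDO database (ido-mysql is enabled)
    mysqldump icinga > icinga2-ido-backup.sql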

Once this is working fine I’d continue with the satellites and finally replace NRPE with the Icinga agent.
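
Just to illustrate that last step: with the agent, the check is still scheduled on the master or satellite but executed on the agent via command_endpoint, something like this (the host name is made up):

    object Host "server01.example.com" {
      check_command = "hostalive"
      address = "192.0.2.10"
      // by convention the agent endpoint has the same name as the host
      vars.agent_endpoint = name
    }

    object Service "disk" {
      host_name = "server01.example.com"
      check_command = "disk"
      // run on the agent instead of calling check_nrpe
      command_endpoint = host.vars.agent_endpoint
    }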

We’ve had a discussion internally, and we will start from scratch with new machines; it’s far better.

Since the current setup was built from scratch with all kinds of trial and error, this will be much smoother and we’ll get it up and running in no time, rather than having possible issues with things that shouldn’t be a hassle at all.

Thanks for the reply though, much appreciated.

BR
Michael