Automated Installation of an Icinga2 Satellite via Puppet

Hi!

I’ve been given the task of automating the installation of an Icinga2 satellite via Puppet. A colleague gave me a fully installed VM with an Icinga2 master setup and the following /etc/icinga2/zones.conf:

/*
 * Generated by Icinga 2 node setup commands
 * on 2019-05-13 08:35:52 +0200
 */

object Endpoint "icinga2master.vorlage.local" {
}

object Zone "master" {
        endpoints = [ "icinga2master.vorlage.local" ]
}

object Zone "global-templates" {
        global = true
}

object Zone "director-global" {
        global = true
}

object Zone "basic-checks" {
        global = true
}

I now have the following tasks to automate via Puppet:

  1. On this Icinga2 master, the satellites (we call them workers) that are installed via Puppet should integrate themselves automatically, without any manual involvement.
  2. It should be possible to set the respective SatelliteZone (WorkerZone) and SatelliteName (WorkerName) via the Hiera config.
  3. Via Hiera it should also be possible to control, per node, whether a web interface (icingaweb2) is installed or not.
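One way to model the third requirement is a boolean profile parameter that Hiera can set per node. A rough sketch, with all class and parameter names hypothetical:

```puppet
# Rough sketch (hypothetical names): expose the icingaweb2 choice as a
# class parameter, so Hiera can set it per node, e.g.:
#   profile::icinga2::satellite::manage_icingaweb2: true
class profile::icinga2::satellite (
  Boolean $manage_icingaweb2 = false,
) {
  if $manage_icingaweb2 {
    # assumes the icinga/icingaweb2 Puppet module is available
    include ::icingaweb2
  }
}
```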

I’m supposed to use the following Puppet modules for these tasks:

For the first of the three tasks (the satellites automatically integrating themselves into the master), I adapted this example and added ‘pki’ and ‘fingerprint’ from my colleague’s installed Icinga2 master VM.

My /etc/puppetlabs/code/environments/puppettest/manifests/icinga2/satellite.pp then had this content:

$master_cert = 'icinga2master.vorlage.local'
$master_ip = '192.168.117.30'

# get it on CA host 'openssl x509 -noout -fingerprint -sha1 -inform pem -in /var/lib/icinga2/certs/master.localdomain.crt'
$fingerprint = '<very_safe_fingerprint>'


class { '::icinga2':
  manage_repo    => true,
  manage_package => true,
  confd          => false,
  features       => ['checker', 'mainlog', 'notification', 'statusdata', 'compatlog', 'command'],
  constants      => {
    'NodeName' => $facts['fqdn'],
  },
}

class { '::icinga2::feature::api':
  ensure          => 'present',
  pki             => 'icinga2',
  ca_host         => $master_ip,
  ticket_salt     => '<very_safe_ticket_salt>',
  accept_config   => true,
  accept_commands => true,
  endpoints       => {
    'NodeName'       => {},
    "${master_cert}" => {
      'host' => $master_ip,
    },
  },
  zones           => {
    'ZoneName' => {
      'endpoints' => [ 'NodeName' ],
      'parent'    => 'master',
    },
    'master' => {
      'endpoints' => [ $master_cert ],
    },
  },
  fingerprint     => $fingerprint,
}

icinga2::object::zone { 'global-templates':
  global => true,
}
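As a side note, the master’s zones.conf above also defines the global zones ‘director-global’ and ‘basic-checks’. If the satellite should accept config for all of them, the single zone resource can be generalized; a sketch using the same icinga2::object::zone defined type:

```puppet
# Declare every global zone the master's zones.conf defines, so the
# satellite accepts cluster config for each of them. Zone names are
# taken from the zones.conf shown above.
['global-templates', 'director-global', 'basic-checks'].each |String $zone| {
  icinga2::object::zone { $zone:
    global => true,
  }
}
```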

But now I want to move this content into Hiera, so I changed the content of my /etc/puppetlabs/code/environments/puppettest/manifests/icinga2/satellite.pp to this:

# Declaration

  class profile::icinga2::satellite {
    endpoints => $endpoints
    zones     => $zones

  class { '::icinga2':
                  manage_repo => $manage_repo
                  manage_package =>  $manage_package
                  confd => $confd
                  features => $features
                  constants => $constants
                }
  class { '::icinga2::features::api':
                  pki => $pki
                  ca_host => $ca_host
                  ticket_salt => $ticket_salt
                  ensure => $ensure
                  accept_config => $accept_config
                  accept_commands => $accept_commands
                  endpoints => $endpoints
                  zones => $zones
                  fingerprint => $fingerprint
                }

}

I’ve created a dedicated YAML file for my Icinga2 satellite test host under ‘/etc/puppetlabs/code/environments/puppettest/hieradata/nodes/worker-template.local.yaml’ with the following content:

---
classes:
  - 'profile::icinga2::satellite'
profile::icinga2::satellite::endpoints:
  "%{::fqdn}": {}
  icinga2master.vorlage.local:
    'host': '192.168.117.30'
profile::icinga2::satellite::zones:
  master:
    endpoints: ['icinga2master.vorlage.local']
  NOC:
    endpoints: "%{::fqdn}"
    parent: master
profile::icinga2::manage_repo: true
profile::icinga2::confd: false
profile::icinga2::features:
  - 'api'
  - 'checker'
  - 'mainlog'
profile::icinga2::constants:
  NodeName: "%{::fqdn}"
  ZoneName: 'NOC'
profile::icinga2::feature::api::pki:
  'icinga2'
profile::icinga2::feature::api::ca_host:
  'icinga2master.vorlage.local'
profile::icinga2::feature::api::ticket_salt:
  '<very_safe_ticket_salt>'
profile::icinga2::feature::api::ensure:
  'present'
profile::icinga2::feature::api::accept_config:
  'true'
profile::icinga2::feature::api::accept_commands:
  'true'
profile::icinga2::feature::api::endpoints:
  "%{::fqdn}": {}
  'icinga2master.vorlage.local':
    host: 192.168.117.20
profile::icinga2::feature::api::zones:
  master:
    endpoints:
      - 'icinga2master.vorlage.local'
  satellite:
    endpoints:
      - "%{::fqdn}"
    parent: 'master'
profile::icinga2::feature::fingerprint:
  '<very_safe_fingerprint>'
profile::object::zone::global: true

The problem is: when I run ‘puppet agent -t --debug’ on my future Icinga2 satellite, I get the following error:

Debug: Caching connection for https://pm-neu.local:8140
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Could not parse for environment puppettest: Syntax error at '=>' (file: /etc/puppetlabs/code/environments/puppettest/manifests/icinga2/satellite.pp, line: 4, column: 15) on node worker-template.local
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Debug: Dynamically-bound server lookup failed, falling back to report_server setting: pm-neu.local

Can anyone help me with my last two tasks, and has anyone gotten this combination (an Icinga2 satellite registering at an Icinga2 master, set up via Puppet) working before?

Thanks for your help and best regards,

Matthias

This is not valid Puppet syntax…

I guess this should be some kind of site module? The path directly under <environment>/manifests seems weird…

A profile is like any other class:

class profile::icinga2::satellite (
    $endpoints = {}, # these parameters will be settable via Hiera
    $zones     = {},
) {
  # content
}
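A fuller version of that skeleton, wiring the Hiera-settable parameters through to the classes from the working satellite.pp above, might look like this (a sketch; the parameter defaults are illustrative, and note the class name is icinga2::feature::api, not icinga2::features::api):

```puppet
# Sketch of the profile: every parameter can be set per node via Hiera
# automatic parameter lookup, e.g.
#   profile::icinga2::satellite::endpoints: ...
#   profile::icinga2::satellite::zones: ...
# Defaults below are illustrative placeholders, not recommendations.
class profile::icinga2::satellite (
  Hash             $endpoints       = {},
  Hash             $zones           = {},
  Boolean          $manage_repo     = true,
  Boolean          $confd           = false,
  Array            $features        = ['checker', 'mainlog'],
  Hash             $constants       = {},
  String           $ca_host         = 'icinga2master.vorlage.local',
  String           $ticket_salt     = '<very_safe_ticket_salt>',
  Boolean          $accept_config   = true,
  Boolean          $accept_commands = true,
  Optional[String] $fingerprint     = undef,
) {
  class { '::icinga2':
    manage_repo => $manage_repo,
    confd       => $confd,
    features    => $features,
    constants   => $constants,
  }

  class { '::icinga2::feature::api':
    pki             => 'icinga2',
    ca_host         => $ca_host,
    ticket_salt     => $ticket_salt,
    accept_config   => $accept_config,
    accept_commands => $accept_commands,
    endpoints       => $endpoints,
    zones           => $zones,
    fingerprint     => $fingerprint,
  }
}
```

Note that each attribute needs a trailing comma, and the Hiera keys have to match the profile’s full parameter names (profile::icinga2::satellite::...) for the lookup to find them.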

Okay, that means the curly braces show that I have an attribute which will be settable via Hiera?
But can I change my whole satellite.pp to this, as you’ve written it?

What do you mean by “site module”, and why does the path under <environment>/manifests seem weird?

hi,

I would like to explain our setup. First I have to say: it is a “grown” setup. We started out doing everything via PuppetDB, which meant we had to execute “puppet agent -t (--noop)” twice on the master and/or agent:

  1. First, fill the PuppetDB
  2. Get all the info for setting up the agent
  3. Get the new node out of the PuppetDB on the master

After working with it for a while on master and agents, that became a huge mess. It also didn’t work with our other datacenters, because every DC has its own PuppetDB, so the Icinga2 master doesn’t know anything about the other DCs.
(Not all Puppet files have been cleaned up yet, so you will find lines in my GitHub Gist which no longer make sense…)

So we switched to another combination:

  • PuppetDB on every DC and Master
  • Icingaweb2 + Director + PuppetDB module

We have three sources for hosts to be monitored:

  1. Hiera master node file -> mostly all switches and routers, because of the dictionary usage
  2. PuppetDB PostgreSQL DBs -> we pull the nodes via the Icingaweb2 Director module and change or fill in values for our needs, like the Icinga2 zone (host.zone = dc1 / host.zone = dc2 …)
  3. The Director itself -> that is for our Windows hosts (maintained by a Windows admin)

In the Director we also maintain all host templates (and we have a lot of them) and assign them via the Director PuppetDB module (facts.role = mariadb -> import the MariaDB host template …). For the templates we created many data fields, for use with the templates and later by the apply rules.

If we have new (VM/server) nodes to monitor, the workflow is:

  1. Run puppet agent on the new node
  2. Let Puppet configure the agent configs / repo / plugins
  3. Go to the Icingaweb2 Director PuppetDB module and import the new changes into the Director puppet DB
  4. Sync the changes
  5. Apply the new configs to the Icinga2 master
  6. Sign the certificate: icinga2 ca sign <....>

Nearly all of this can be executed automatically (but we don’t do it; I want to review the changes before rolling them out to Icinga2).

With this setup, we maintain four Icinga2 zones (master and three datacenters).

Here is my GitHub Gist

I changed some values / removed private things, and some lines are obsolete…, but it gives you an idea of how we do it. It is far from perfect and it may look different in one or two years, but for now it works pretty well.

If you have questions: ask me.

Update: I forgot the satellite YAML 🙂 Added it to the Gist.
