Set up Icinga2 automatically via Puppet

Hi!

I have the task of setting up Icinga2 automatically via Puppet. I have an Icinga2 master, and the satellites that are configured via Puppet should be integrated into the Icinga2 master automatically…

For this task I'm supposed to use this Puppet module: https://forge.puppet.com/icinga/icinga2/changelog

I've installed it via 'puppet module install icinga-icinga2 --version 2.1.0 --environment puppettest' and built the following manifest to install Icinga2 on my node:

class { '::icinga2':
        confd     => false,
        features  => ['checker','mainlog'],
        constants => {
                'ZoneName' => 'TESTZONE',
        },
}
class { '::icinga2::feature::api':
        pki             => 'icinga2',
        ca_host         => 'icinga2master.vorlage.local',
        fingerprint     => 'D8:98:82:1B:14:8A:6A:89:4B:7A:40:32:50:68:01:99:3D:96:72:72',
        ticket_salt     => '<very-save-ticket_salt>',
        accept_config   => true,
        accept_commands => true,
        endpoints       => {
                'NodeName'                   => {},
                'icinga2master.vorlage.local' => {
                  'host' => '192.168.117.30',
                }
        },
        zones           => {
                'NodeName' => {
                   'endpoints' => ["${facts['fqdn']}"],
                   'parent'    => 'master',
                },
                'master' => {
                   'endpoints' => ['icinga2master.vorlage.local']
                }

        }
}

The installation via Puppet on my node works fine, and Icinga2 is installed cleanly after a 'puppet agent -t'.

But I also got the following error:

Notice: /Stage[main]/Icinga2::Feature::Api/Exec[icinga2 pki request]/returns: critical/cli: Invalid ticket for CN 'worker-template.local'.
Error: '"/usr/sbin/icinga2" pki request --host icinga2master.vorlage.local --port 5665 --ca /var/lib/icinga2/certs/ca.crt --key /var/lib/icinga2/certs/worker-template.local.key --cert /var/lib/icinga2/certs/worker-template.local.crt --trustedcert /var/lib/icinga2/certs/trusted-cert.crt --ticket <very-save-ticket_salt>' returned 1 instead of one of [0]
Error: /Stage[main]/Icinga2::Feature::Api/Exec[icinga2 pki request]/returns: change from 'notrun' to ['0'] failed: '"/usr/sbin/icinga2" pki request --host icinga2master.vorlage.local --port 5665 --ca /var/lib/icinga2/certs/ca.crt --key /var/lib/icinga2/certs/worker-template.local.key --cert /var/lib/icinga2/certs/worker-template.local.crt --trustedcert /var/lib/icinga2/certs/trusted-cert.crt --ticket <very-save-ticket_salt>' returned 1 instead of one of [0] (corrective)
Debug: Class[Icinga2::Service]: Resource is being skipped, unscheduling all events
Notice: /Service[icinga2]: Dependency Exec[icinga2 pki request] has failures: true
Warning: /Service[icinga2]: Skipping because of failed dependencies
Debug: /Service[icinga2]: Resource is being skipped, unscheduling all events
Debug: Class[Icinga2::Service]: Resource is being skipped, unscheduling all events
Warning: /Stage[main]/Icinga2/Anchor[::icinga2::end]: Skipping because of failed dependencies
Debug: /Stage[main]/Icinga2/Anchor[::icinga2::end]: Resource is being skipped, unscheduling all events

I tried to execute the command manually on the Icinga2 satellite:

root@worker-template:~# /usr/sbin/icinga2 pki request --host icinga2master.vorlage.local --port 5665 --ca /var/lib/icinga2/certs/ca.crt --key /var/lib/icinga2/certs/worker-template.local.key --cert /var/lib/icinga2/certs/worker-template.local.crt --trustedcert /var/lib/icinga2/certs/trusted-cert.crt --ticket <very-save-ticket_salt>

…and got the following response:

critical/cli: Invalid ticket for CN 'worker-template.local'.

When I look at /var/log/icinga2/icinga2.log on my Icinga2 master, I can see that my Icinga2 satellite is making a request to the master:

[2019-05-14 22:55:08 +0200] information/ConfigObject: Dumping program state to file '/var/lib/icinga2/icinga2.state'
[2019-05-14 22:55:08 +0200] information/WorkQueue: #10 (JsonRpcConnection, #0) items: 0, rate:  0/s (0/min 0/5min 2/15min);
[2019-05-14 22:55:58 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.01667/s (181/min 933/5min 2813/15min);
[2019-05-14 22:56:04 +0200] information/ApiListener: New client connection for identity 'worker-template.local' from [192.168.117.25]:51972 (certificate validation failed: code 18: self signed certificate)
[2019-05-14 22:56:04 +0200] information/JsonRpcConnection: Received certificate request for CN 'worker-template.local' not signed by our CA.
[2019-05-14 22:56:04 +0200] warning/JsonRpcConnection: Ticket '6f912a8966ef9e46278e77847e93e901c83adde7' for CN 'worker-template.local' is invalid.
[2019-05-14 22:56:04 +0200] warning/TlsStream: TLS stream was disconnected.
[2019-05-14 22:56:04 +0200] warning/JsonRpcConnection: API client disconnected for identity 'worker-template.local'
[2019-05-14 22:56:18 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 0, rate: 3.01667/s (181/min 933/5min 2809/15min);
[2019-05-14 22:56:18 +0200] information/WorkQueue: #5 (ApiListener, RelayQueue) items: 0, rate: 0.65/s (39/min 195/5min 579/15min);
[2019-05-14 22:56:18 +0200] information/WorkQueue: #6 (ApiListener, SyncQueue) items: 0, rate:  0/s (0/min 0/5min 0/15min);
[2019-05-14 22:56:28 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.01667/s (181/min 931/5min 2807/15min);
[2019-05-14 22:56:38 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 2.95/s (177/min 929/5min 2809/15min);
[2019-05-14 22:56:48 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.01667/s (181/min 933/5min 2813/15min);
[2019-05-14 22:57:18 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.01667/s (181/min 933/5min 2809/15min);
[2019-05-14 22:57:28 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.01667/s (181/min 933/5min 2807/15min);
[2019-05-14 22:57:38 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 2.95/s (177/min 927/5min 2807/15min);
[2019-05-14 22:57:48 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.01667/s (181/min 933/5min 2813/15min);
[2019-05-14 22:58:18 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.01667/s (181/min 933/5min 2809/15min);
[2019-05-14 22:58:28 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.01667/s (181/min 933/5min 2807/15min);
[2019-05-14 22:58:38 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.01667/s (181/min 927/5min 2807/15min);
[2019-05-14 22:58:48 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.01667/s (181/min 933/5min 2813/15min);
[2019-05-14 22:59:35 +0200] information/ApiListener: New client connection for identity 'worker-template.local' from [192.168.117.25]:51976 (certificate validation failed: code 18: self signed certificate)
[2019-05-14 22:59:35 +0200] information/JsonRpcConnection: Received certificate request for CN 'worker-template.local' not signed by our CA.
[2019-05-14 22:59:35 +0200] warning/JsonRpcConnection: Ticket '6f912a8966ef9e46278e77847e93e901c83adde7' for CN 'worker-template.local' is invalid.
[2019-05-14 22:59:35 +0200] warning/TlsStream: TLS stream was disconnected.
[2019-05-14 22:59:35 +0200] warning/JsonRpcConnection: API client disconnected for identity 'worker-template.local'

The only problem (if I can believe this article) seems to be the different Icinga2 versions:

Icinga2-Master:

root@icinga2master:~# dpkg -l icinga2
Gewünscht=Unbekannt/Installieren/R=Entfernen/P=Vollständig Löschen/Halten
| Status=Nicht/Installiert/Config/U=Entpackt/halb konFiguriert/
         Halb installiert/Trigger erWartet/Trigger anhängig
|/ Fehler?=(kein)/R=Neuinstallation notwendig (Status, Fehler: GROSS=schlecht)
||/ Name           Version      Architektur  Beschreibung
+++-==============-============-============-=================================
ii  icinga2        2.10.4-1.xen amd64        host and network monitoring syste

Icinga2-Satellite:

root@worker-template:~# dpkg -l icinga2
Gewünscht=Unbekannt/Installieren/R=Entfernen/P=Vollständig Löschen/Halten
| Status=Nicht/Installiert/Config/U=Entpackt/halb konFiguriert/
         Halb installiert/Trigger erWartet/Trigger anhängig
|/ Fehler?=(kein)/R=Neuinstallation notwendig (Status, Fehler: GROSS=schlecht)
||/ Name           Version      Architektur  Beschreibung
+++-==============-============-============-=================================
ii  icinga2        2.4.1-2ubunt amd64        host and network monitoring syste
root@worker-template:~#

Has anyone of you installed Icinga2 via Puppet and knows how to get the satellite signed automatically on the Icinga2 master?

And has anyone had the same problem with signing the satellite on the master?

Okay, I’ve reset my VM and adjusted my class again:

class { '::icinga2':
    manage_repo    => true,
    manage_package => true,
    confd          => false,
    features       => ['checker','mainlog'],
    constants      => {
        'ZoneName' => 'TESTZONE',
    },
}

Because of setting manage_repo => true and manage_package => true, I now have the newest version of Icinga2 on my node…

But when I tried to execute this command manually:

root@worker-template:/etc/icinga2# /usr/sbin/icinga2 pki request --host icinga2master.vorlage.local --port 5665 --ca /var/lib/icinga2/certs/ca.crt --key /var/lib/icinga2/certs/worker-template.local.key --cert /var/lib/icinga2/certs/worker-template.local.crt --trustedcert /var/lib/icinga2/certs/trusted-cert.crt --ticket <very-save-ticket_salt>

…I got the following response:

information/cli: Writing CA certificate to file '/var/lib/icinga2/certs/ca.crt'.
critical/cli: !!! Invalid ticket for CN 'worker-template.local'.

Does anyone have an idea what I have to do to get automatic signing of my request working?

Hi,

run icinga2 pki ticket --cn worker-template.local on the master, and verify that the generated ticket is the same as the one the command is using. Your Puppet manifest might not have triggered a reload of the daemon, so the private TicketSalt is not loaded into memory yet.

Btw, since you've posted the private TicketSalt here, I strongly advise changing it now for security reasons.
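
If you want to avoid hardcoding the salt in the manifest at all, you could also look it up from Hiera; a minimal sketch, assuming a Hiera key of your own choosing (here 'profile::icinga2::ticket_salt', ideally stored encrypted, e.g. with eyaml):

class { '::icinga2::feature::api':
  pki         => 'icinga2',
  ca_host     => 'icinga2master.vorlage.local',
  # hypothetical Hiera key; it must resolve to exactly the same value as the
  # TicketSalt constant in /etc/icinga2/constants.conf on the master
  ticket_salt => lookup('profile::icinga2::ticket_salt'),
}

That way the master and the agents can never drift apart on the salt.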

Cheers,
Michael

Thanks for your note, I’ve changed it :)…

But do you know whether there is a way to automate this? We're currently doing these steps on our Icinga2 master:

  1. Create a certificate request (in /var/lib/icinga2/ca) with:

    icinga2 pki new-cert --cn customer.hostname.local --key customer.hostname.local.key --csr customer.hostname.local.csr

  2. Sign the Request:

    icinga2 pki sign-csr --csr customer.hostname.local.csr --cert customer.hostname.local.crt

  3. Move the certificates to the correct directory / change ownership:

    mv customer.hostname.local.* /etc/icinga2/pki/
    chown nagios:nagios /etc/icinga2/pki/customer.hostname.local.*

…and then we're doing these steps on our Icinga2 master and satellite:

  1. Copy the signed certificates from the Icinga2 master to the Icinga2 satellite:

    ssh customer.hostname.local "mkdir /var/lib/icinga2/certs"
    scp /etc/icinga2/pki/customer.hostname.local.* /etc/icinga2/pki/ca.crt customer.hostname.local:/var/lib/icinga2/certs/.
    ssh customer.hostname.local "chown -R nagios:nagios /var/lib/icinga2/certs/*"

Does anyone know a solution to automate these steps with Puppet?
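
Just to illustrate what I have in mind: if the module can take pre-signed certificates directly, I imagine something roughly like this on the satellite (untested sketch; I'm assuming the pki => 'none' mode and the ssl_cacert/ssl_cert/ssl_key parameters behave the way I read the module README, and the 'profile/icinga2/...' paths are just placeholders for wherever we would stage the signed files on the Puppet master):

class { '::icinga2::feature::api':
  pki        => 'none',
  # contents of the CA certificate and the pre-signed host certificate/key,
  # served as static files from a (hypothetical) profile module
  ssl_cacert => file('profile/icinga2/ca.crt'),
  ssl_cert   => file("profile/icinga2/${facts['fqdn']}.crt"),
  ssl_key    => file("profile/icinga2/${facts['fqdn']}.key"),
}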

Hi,

the Puppet module is capable of doing so, but for some reason the ticket generated from the given salt fails. In order to proceed, please answer the question above: is the ticket the same on the master?

Cheers,
Michael

Okay, I've run "icinga2 pki ticket --cn worker-template.local" and put its output into my manifest for the Icinga2 node. After running 'puppet agent -t --debug' I can see the following in /var/log/icinga2/icinga2.log:

[2019-05-16 15:24:10 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.08333/s (185/min 943/5min 2823/15min);
[2019-05-16 15:24:22 +0200] information/ApiListener: New client connection from [192.168.117.25]:55888 (no client certificate)
[2019-05-16 15:24:22 +0200] warning/TlsStream: TLS stream was disconnected.
[2019-05-16 15:24:22 +0200] information/ApiListener: No data received on new API connection. Ensure that the remote endpoints are properly configured in a cluster setup.
[2019-05-16 15:24:30 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 3, rate: 3.08333/s (185/min 941/5min 2817/15min);
[2019-05-16 15:24:40 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 3, rate: 3.05/s (183/min 939/5min 2815/15min);
[2019-05-16 15:24:50 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.13333/s (188/min 942/5min 2820/15min);
[2019-05-16 15:25:50 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.08333/s (185/min 939/5min 2819/15min);
[2019-05-16 15:26:00 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.08333/s (185/min 935/5min 2821/15min);
[2019-05-16 15:26:20 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.08333/s (185/min 937/5min 2821/15min);
[2019-05-16 15:26:40 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.08333/s (185/min 935/5min 2815/15min);
[2019-05-16 15:26:50 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.08333/s (185/min 939/5min 2819/15min);
[2019-05-16 15:27:00 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.08333/s (185/min 935/5min 2821/15min);
[2019-05-16 15:27:10 +0200] information/ConfigObject: Dumping program state to file ‘/var/lib/icinga2/icinga2.state’
[2019-05-16 15:27:20 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.08333/s (185/min 937/5min 2821/15min);
[2019-05-16 15:27:50 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.08333/s (185/min 939/5min 2819/15min);
[2019-05-16 15:28:00 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.08333/s (185/min 935/5min 2821/15min);
[2019-05-16 15:28:10 +0200] information/WorkQueue: #10 (JsonRpcConnection, #0) items: 0, rate: 0/s (0/min 0/5min 0/15min);
[2019-05-16 15:28:10 +0200] information/WorkQueue: #5 (ApiListener, RelayQueue) items: 0, rate: 0.633333/s (38/min 192/5min 585/15min);
[2019-05-16 15:28:10 +0200] information/WorkQueue: #6 (ApiListener, SyncQueue) items: 0, rate: 0/s (0/min 0/5min 0/15min);
[2019-05-16 15:28:20 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.08333/s (185/min 937/5min 2821/15min);
[2019-05-16 15:28:40 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.08333/s (185/min 935/5min 2815/15min);
[2019-05-16 15:29:00 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.05/s (183/min 935/5min 2819/15min);
[2019-05-16 15:29:10 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.08333/s (185/min 935/5min 2821/15min);
[2019-05-16 15:29:45 +0200] information/Application: Got reload command: Starting new instance.
[2019-05-16 15:29:45 +0200] information/Application: Reload requested, letting new process take over.
[2019-05-16 15:29:45 +0200] information/ApiListener: ‘api’ stopped.
[2019-05-16 15:29:45 +0200] information/CheckerComponent: ‘checker’ stopped.
[2019-05-16 15:29:45 +0200] information/FileLogger: ‘main-log’ started.
[2019-05-16 15:29:45 +0200] information/ApiListener: ‘api’ started.
[2019-05-16 15:29:45 +0200] information/ApiListener: Copying 1 zone configuration files for zone ‘global-templates’ to ‘/var/lib/icinga2/api/zones/global-templates’.
[2019-05-16 15:29:45 +0200] information/ApiListener: Applying configuration file update for path ‘/var/lib/icinga2/api/zones/global-templates’ (0 Bytes). Received timestamp ‘2019-05-16 15:29:45 +0200’ (1558013385.780473), Current timestamp ‘2019-05-15 22:24:06 +0200’ (1557951846.464606).
[2019-05-16 15:29:45 +0200] information/ApiListener: Copying 12 zone configuration files for zone ‘master’ to ‘/var/lib/icinga2/api/zones/master’.
[2019-05-16 15:29:45 +0200] information/ApiListener: Applying configuration file update for path ‘/var/lib/icinga2/api/zones/master’ (0 Bytes). Received timestamp ‘2019-05-16 15:29:45 +0200’ (1558013385.781473), Current timestamp ‘2019-05-15 22:24:06 +0200’ (1557951846.481061).
[2019-05-16 15:29:45 +0200] information/ApiListener: Started new listener on ‘[0.0.0.0]:5665’
[2019-05-16 15:29:45 +0200] information/CheckerComponent: ‘checker’ started.
[2019-05-16 15:29:45 +0200] information/DbConnection: ‘ido-mysql’ started.
[2019-05-16 15:29:45 +0200] information/ConfigItem: Activated all objects.
[2019-05-16 15:29:45 +0200] information/cli: Closing console log.
[2019-05-16 15:29:45 +0200] information/DbConnection: Resuming IDO connection: ido-mysql
[2019-05-16 15:29:45 +0200] information/IdoMysqlConnection: ‘ido-mysql’ resumed.
[2019-05-16 15:29:45 +0200] information/IdoMysqlConnection: MySQL IDO instance id: 1 (schema version: ‘1.14.3’)
[2019-05-16 15:29:45 +0200] information/IdoMysqlConnection: Finished reconnecting to MySQL IDO database in 0.0244312 second(s).
[2019-05-16 15:29:55 +0200] information/WorkQueue: #5 (ApiListener, RelayQueue) items: 0, rate: 0.25/s (15/min 15/5min 15/15min);
[2019-05-16 15:29:55 +0200] information/WorkQueue: #6 (ApiListener, SyncQueue) items: 0, rate: 0/s (0/min 0/5min 0/15min);
[2019-05-16 15:29:55 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 0, rate: 1.48333/s (89/min 89/5min 89/15min);

It looks as if I still have the problem :(…

@dnsmichi: And is it right that I always have to generate a new ticket whenever a new host has to be set up in the monitoring? Because I always have a different hostname / FQDN!?

Okay, I've tried a 'service icinga2 reload' on my Icinga2 node installed via Puppet… When I do a 'service icinga2 status' I get the following:

root@worker-template:~# service icinga2 status
● icinga2.service - Icinga host/service/network monitoring system
   Loaded: loaded (/lib/systemd/system/icinga2.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/icinga2.service.d
           └─limits.conf
   Active: failed (Result: exit-code) since Do 2019-05-16 17:59:03 CEST; 2min 36s ago
  Process: 23854 ExecStart=/usr/sbin/icinga2 daemon --close-stdio -e ${ICINGA2_ERROR_LOG} (code=exited, status=1/FAILURE)
  Process: 23844 ExecStartPre=/usr/lib/icinga2/prepare-dirs /etc/default/icinga2 (code=exited, status=0/SUCCESS)
 Main PID: 23854 (code=exited, status=1/FAILURE)

Mai 16 17:59:03 worker-template icinga2[23854]: /etc/icinga2/zones.conf(11): object Zone "master" {
Mai 16 17:59:03 worker-template icinga2[23854]: /etc/icinga2/zones.conf(12): endpoints = [ "icinga2master.vorlage.local", ]
Mai 16 17:59:03 worker-template icinga2[23854]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Mai 16 17:59:03 worker-template icinga2[23854]: /etc/icinga2/zones.conf(13): }
Mai 16 17:59:03 worker-template icinga2[23854]: /etc/icinga2/zones.conf(14):
Mai 16 17:59:03 worker-template icinga2[23854]: [2019-05-16 17:59:03 +0200] critical/config: 1 error
Mai 16 17:59:03 worker-template systemd[1]: icinga2.service: Main process exited, code=exited, status=1/FAILURE
Mai 16 17:59:03 worker-template systemd[1]: Failed to start Icinga host/service/network monitoring system.
Mai 16 17:59:03 worker-template systemd[1]: icinga2.service: Unit entered failed state.
Mai 16 17:59:03 worker-template systemd[1]: icinga2.service: Failed with result 'exit-code'.
root@worker-template:~#

No, don't do it this way. This was just to compare what's going on. For some reason, the ticket calculation doesn't work … and it would be cumbersome to do this for every new host. As said, the Puppet module should work OOTB, generating the ticket and feeding it into the Puppet run for every new host.

Question aside - did you set up Icinga manually once before diving into Puppet? This helps to understand how zones, endpoints and TLS certificates work before abstracting this into automation.

The reload output looks like a wrong configuration is now being deployed; you can run icinga2 daemon -C for a full validation output.

Cheers,
Michael

I've set up Icinga manually, but that was a few months ago and I think I've forgotten a little bit :(…

When I ran icinga2 daemon -C I got the following:

root@worker-template:~# icinga2 daemon -C
[2019-05-16 18:29:45 +0200] information/cli: Icinga application loader (version: r2.10.4-1)
[2019-05-16 18:29:45 +0200] information/cli: Loading configuration file(s).
[2019-05-16 18:29:45 +0200] information/ConfigItem: Committing config item(s).
[2019-05-16 18:29:45 +0200] information/ApiListener: My API identity: worker-template.local
[2019-05-16 18:29:45 +0200] critical/config: Error: Validation failed for object 'master' of type 'Zone'; Attribute 'endpoints': Object 'icinga2master.vorlage.local' of type 'Endpoint' does not exist.
Location: in /etc/icinga2/zones.conf: 12:3-12:48
/etc/icinga2/zones.conf(10):
/etc/icinga2/zones.conf(11): object Zone "master" {
/etc/icinga2/zones.conf(12):   endpoints = [ "icinga2master.vorlage.local", ]
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/etc/icinga2/zones.conf(13): }
/etc/icinga2/zones.conf(14):

[2019-05-16 18:29:45 +0200] critical/config: 1 error
root@worker-template:~#

It seems that I have to define an Endpoint for "icinga2master.vorlage.local", right?

EDIT:

I’ve tried this and now my manifest looks like this:

# include icinga2

class { '::icinga2':
        manage_repo    => true,
        manage_package => true,
        confd          => false,
        features       => ['checker','mainlog','notification','statusdata','compatlog','command'],
        constants      => {
                'ZoneName' => 'master',
        },
}

class { '::icinga2::feature::api':
        pki             => 'icinga2',
        ca_host         => 'icinga2master.vorlage.local',
        ticket_salt     => '<very-save-ticket_salt>',
        ensure          => 'present',
        accept_config   => true,
        accept_commands => true,
        endpoints       => {
                $facts['fqdn'] => {
                        'host' => $facts['ipaddress'],
                },
                'icinga2master.vorlage.local' => {
                        'host' => '192.168.117.30',
                },
        },
        zones           => {
                'master' => {
                        'endpoints' => ['icinga2master.vorlage.local'],
                },
        },
}

icinga2::object::zone { 'global-templates':
  global => true,
}

I'm wondering why the resource doesn't create the Endpoint object. Is it still defined like this?

...
        endpoints       => {
                'NodeName'                   => {},
                'icinga2master.vorlage.local' => {
                  'host' => '192.168.117.30',
                }
        },
        zones           => {
                'NodeName' => {
                   'endpoints' => ["${facts['fqdn']}"],
                   'parent'    => 'master',
                },
                'master' => {
                   'endpoints' => ['icinga2master.vorlage.local']
                }

        }

Can you extract the report from Puppet to see whether this actually happened?

Cheers,
Michael

I just added the entry manually:

endpoints       => {
        $facts['fqdn'] => {
                'host' => $facts['ipaddress'],
        },
        'icinga2master.vorlage.local' => {
                'host' => '192.168.117.30',
        },
},

By 'extract the report', do you mean that I should paste the output of my 'puppet agent -t --debug'?

Only the relevant parts which include creating Icinga objects. I haven't used the module for a while now; maybe 2.1.0 broke something or the behaviour changed. So to speak, it could be a bug as well.

I hope these are the relevant parts :slight_smile: :

Debug: Executing: '/bin/systemctl start icinga2'
Debug: Running journalctl command to get logs for systemd start failure: journalctl -n 50 --since '5 minutes ago' -u icinga2 --no-pager
Debug: Executing: 'journalctl -n 50 --since '5 minutes ago' -u icinga2 --no-pager'
Error: Systemd start for icinga2 failed!
journalctl log for icinga2:
-- Logs begin at Thu 2019-05-16 14:03:32 CEST, end at Thu 2019-05-16 19:02:07 CEST. --
May 16 19:02:07 worker-template systemd[1]: Starting Icinga host/service/network monitoring system...
May 16 19:02:07 worker-template icinga2[25989]: [2019-05-16 19:02:07 +0200] information/cli: Icinga application loader (version: r2.10.4-1)
May 16 19:02:07 worker-template icinga2[25989]: [2019-05-16 19:02:07 +0200] information/cli: Loading configuration file(s).
May 16 19:02:07 worker-template icinga2[25989]: [2019-05-16 19:02:07 +0200] information/ConfigItem: Committing config item(s).
May 16 19:02:07 worker-template icinga2[25989]: [2019-05-16 19:02:07 +0200] information/ApiListener: My API identity: worker-template.local
May 16 19:02:07 worker-template icinga2[25989]: [2019-05-16 19:02:07 +0200] critical/config: Error: Endpoint 'worker-template.local' does not belong to a zone.
May 16 19:02:07 worker-template icinga2[25989]: Location: in /etc/icinga2/zones.conf: 7:1-7:47
May 16 19:02:07 worker-template icinga2[25989]: /etc/icinga2/zones.conf(5): }
May 16 19:02:07 worker-template icinga2[25989]: /etc/icinga2/zones.conf(6):
May 16 19:02:07 worker-template icinga2[25989]: /etc/icinga2/zones.conf(7): object Endpoint "worker-template.local" {
May 16 19:02:07 worker-template icinga2[25989]:                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
May 16 19:02:07 worker-template icinga2[25989]: /etc/icinga2/zones.conf(8):   host = "192.168.117.25"
May 16 19:02:07 worker-template icinga2[25989]: /etc/icinga2/zones.conf(9): }
May 16 19:02:07 worker-template icinga2[25989]: [2019-05-16 19:02:07 +0200] critical/config: 1 error
May 16 19:02:07 worker-template systemd[1]: icinga2.service: Main process exited, code=exited, status=1/FAILURE
May 16 19:02:07 worker-template systemd[1]: Failed to start Icinga host/service/network monitoring system.
May 16 19:02:07 worker-template systemd[1]: icinga2.service: Unit entered failed state.
May 16 19:02:07 worker-template systemd[1]: icinga2.service: Failed with result 'exit-code'.

Error: /Stage[main]/Icinga2::Service/Service[icinga2]/ensure: change from 'stopped' to 'running' failed: Systemd start for icinga2 failed!
journalctl log for icinga2:
-- Logs begin at Thu 2019-05-16 14:03:32 CEST, end at Thu 2019-05-16 19:02:07 CEST. --
May 16 19:02:07 worker-template systemd[1]: Starting Icinga host/service/network monitoring system...
May 16 19:02:07 worker-template icinga2[25989]: [2019-05-16 19:02:07 +0200] information/cli: Icinga application loader (version: r2.10.4-1)
May 16 19:02:07 worker-template icinga2[25989]: [2019-05-16 19:02:07 +0200] information/cli: Loading configuration file(s).
May 16 19:02:07 worker-template icinga2[25989]: [2019-05-16 19:02:07 +0200] information/ConfigItem: Committing config item(s).
May 16 19:02:07 worker-template icinga2[25989]: [2019-05-16 19:02:07 +0200] information/ApiListener: My API identity: worker-template.local
May 16 19:02:07 worker-template icinga2[25989]: [2019-05-16 19:02:07 +0200] critical/config: Error: Endpoint 'worker-template.local' does not belong to a zone.
May 16 19:02:07 worker-template icinga2[25989]: Location: in /etc/icinga2/zones.conf: 7:1-7:47
May 16 19:02:07 worker-template icinga2[25989]: /etc/icinga2/zones.conf(5): }
May 16 19:02:07 worker-template icinga2[25989]: /etc/icinga2/zones.conf(6):
May 16 19:02:07 worker-template icinga2[25989]: /etc/icinga2/zones.conf(7): object Endpoint "worker-template.local" {
May 16 19:02:07 worker-template icinga2[25989]:                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
May 16 19:02:07 worker-template icinga2[25989]: /etc/icinga2/zones.conf(8):   host = "192.168.117.25"
May 16 19:02:07 worker-template icinga2[25989]: /etc/icinga2/zones.conf(9): }
May 16 19:02:07 worker-template icinga2[25989]: [2019-05-16 19:02:07 +0200] critical/config: 1 error
May 16 19:02:07 worker-template systemd[1]: icinga2.service: Main process exited, code=exited, status=1/FAILURE
May 16 19:02:07 worker-template systemd[1]: Failed to start Icinga host/service/network monitoring system.
May 16 19:02:07 worker-template systemd[1]: icinga2.service: Unit entered failed state.
May 16 19:02:07 worker-template systemd[1]: icinga2.service: Failed with result 'exit-code'.
 (corrective)
Debug: Class[Icinga2::Service]: Resource is being skipped, unscheduling all events
Notice: /Stage[main]/Icinga2/Anchor[::icinga2::end]: Dependency Service[icinga2] has failures: true
Warning: /Stage[main]/Icinga2/Anchor[::icinga2::end]: Skipping because of failed dependencies
Debug: /Stage[main]/Icinga2/Anchor[::icinga2::end]: Resource is being skipped, unscheduling all events
Debug: Class[Icinga2]: Resource is being skipped, unscheduling all events
Debug: Stage[main]: Resource is being skipped, unscheduling all events
Debug: Finishing transaction 29383120

Hmm, that's too late in the Puppet run. I am looking for the endpoint/zone creation bits, specifically the calls here.

Cheers,
Michael

Hm, okay… Where can I find these calls?

…and I have another question…

How can I automate this scenario with Puppet:

"If the host ‘puppet-node1.customerA.local’ is signed at the Puppet Master, it should get e.g. the WorkerName ‘worker-customerA.local’. and the WorkerZone ‘customerA’. "

Is it also possible to automate this with Puppet and the module?
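
My first rough idea would be to derive the worker name and zone from the Puppet certname in the node's manifest, roughly like this (untested sketch; the naming scheme and the use of the NodeName/ZoneName constants are just my assumption of how it could be wired up):

# e.g. 'puppet-node1.customerA.local' -> ['puppet-node1', 'customerA', 'local']
$certname_parts = split($trusted['certname'], '\.')
# hypothetical naming scheme: the second label is the customer name
$customer    = $certname_parts[1]
$worker_name = "worker-${customer}.local"
$worker_zone = $customer

class { '::icinga2':
  confd     => false,
  features  => ['checker','mainlog'],
  constants => {
    'NodeName' => $worker_name,
    'ZoneName' => $worker_zone,
  },
}

But I don't know whether that's the intended way to do it with the module.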

Hmm, I've reset the VM where I set up my Icinga2 node via Puppet…
When I now run a 'puppet agent -t', Icinga2 gets installed properly…

When I do a service icinga2 status I get the following:

root@worker-template:~# service icinga2 status
● icinga2.service - Icinga host/service/network monitoring system
   Loaded: loaded (/lib/systemd/system/icinga2.service; enabled; vendor preset: enabl
  Drop-In: /etc/systemd/system/icinga2.service.d
           └─limits.conf
   Active: active (running) since Fr 2019-05-17 11:07:01 CEST; 5min ago
 Main PID: 18914 (icinga2)
   CGroup: /system.slice/icinga2.service
           ├─18914 /usr/lib/x86_64-linux-gnu/icinga2/sbin/icinga2 --no-stack-rlimit d
           └─18948 /usr/lib/x86_64-linux-gnu/icinga2/sbin/icinga2 --no-stack-rlimit d

Mai 17 11:07:01 worker-template icinga2[18914]: [2019-05-17 11:07:01 +0200] i
Mai 17 11:07:01 worker-template icinga2[18914]: [2019-05-17 11:07:01 +0200] i
Mai 17 11:07:01 worker-template icinga2[18914]: [2019-05-17 11:07:01 +0200] i
Mai 17 11:07:01 worker-template icinga2[18914]: [2019-05-17 11:07:01 +0200] i
Mai 17 11:07:01 worker-template icinga2[18914]: [2019-05-17 11:07:01 +0200] i
Mai 17 11:07:01 worker-template icinga2[18914]: [2019-05-17 11:07:01 +0200] i
Mai 17 11:07:01 worker-template icinga2[18914]: [2019-05-17 11:07:01 +0200] i
Mai 17 11:07:01 worker-template icinga2[18914]: [2019-05-17 11:07:01 +0200] i
Mai 17 11:07:01 worker-template icinga2[18914]: [2019-05-17 11:07:01 +0200] i
Mai 17 11:07:01 worker-template systemd[1]: Started Icinga host/service/netwo
root@worker-template:~#

…and when I do a service icinga2 restart after that, I get the following:

root@worker-template:~# service icinga2 restart
Job for icinga2.service failed because the control process exited with error code. See "systemctl status icinga2.service" and "journalctl -xe" for details.
root@worker-template:~# service icinga2 status
● icinga2.service - Icinga host/service/network monitoring system
   Loaded: loaded (/lib/systemd/system/icinga2.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/icinga2.service.d
           └─limits.conf
   Active: failed (Result: exit-code) since Fr 2019-05-17 11:12:43 CEST; 8s ago
  Process: 20471 ExecStart=/usr/sbin/icinga2 daemon --close-stdio -e ${ICINGA2_ERROR_LOG} (code=exited, status=1/FAILURE)
  Process: 20460 ExecStartPre=/usr/lib/icinga2/prepare-dirs /etc/default/icinga2 (code=exited, status=0/SUCCESS)
 Main PID: 20471 (code=exited, status=1/FAILURE)

Mai 17 11:12:43 worker-template icinga2[20471]: /etc/icinga2/zones.conf(6):
Mai 17 11:12:43 worker-template icinga2[20471]: /etc/icinga2/zones.conf(7): object Endpoint "worker-template.local" {
Mai 17 11:12:43 worker-template icinga2[20471]:                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Mai 17 11:12:43 worker-template icinga2[20471]: /etc/icinga2/zones.conf(8):   host = "192.168.117.25"
Mai 17 11:12:43 worker-template icinga2[20471]: /etc/icinga2/zones.conf(9): }
Mai 17 11:12:43 worker-template icinga2[20471]: [2019-05-17 11:12:43 +0200] critical/config: 1 error
Mai 17 11:12:43 worker-template systemd[1]: icinga2.service: Main process exited, code=exited, status=1/FAILURE
Mai 17 11:12:43 worker-template systemd[1]: Failed to start Icinga host/service/network monitoring system.
Mai 17 11:12:43 worker-template systemd[1]: icinga2.service: Unit entered failed state.
Mai 17 11:12:43 worker-template systemd[1]: icinga2.service: Failed with result 'exit-code'.

When I look at my Puppet-generated /etc/icinga2/zones.conf, it has the following content:

# This file is managed by Puppet. DO NOT EDIT.

object Endpoint "icinga2master.vorlage.local" {
  host = "192.168.117.30"
}

object Endpoint "worker-template.local" {
  host = "192.168.117.25"
}

object Zone "global-templates" {
  global = true
}

object Zone "master" {
  endpoints = [ "icinga2master.vorlage.local", ]
}

I think it looks good, but I don't understand why I still get an error.
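
(Just a guess on my side: maybe the Endpoint "worker-template.local" additionally needs its own Zone with parent "master", roughly like this in the manifest; untested, and I'm not sure whether the target parameter is required:)

icinga2::object::zone { $facts['fqdn']:
  endpoints => [ $facts['fqdn'] ],
  parent    => 'master',
  target    => '/etc/icinga2/zones.conf',
}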

I still have the problem that the ticket generated for my new Icinga2 node (which is set up via Puppet) seems to be different from the one on my Icinga2 master. After running 'puppet agent -t --debug' I got the following error:

Notice: /Stage[main]/Icinga2::Feature::Api/Exec[icinga2 pki request]/returns: information/cli: Writing CA certificate to file '/var/lib/icinga2/certs/ca.crt'.
Notice: /Stage[main]/Icinga2::Feature::Api/Exec[icinga2 pki request]/returns: critical/cli: !!! Invalid ticket for CN 'worker-template.local'.
Error: '"/usr/sbin/icinga2" pki request --host icinga2master.vorlage.local --port 5665 --ca /var/lib/icinga2/certs/ca.crt --key /var/lib/icinga2/certs/worker-template.local.key --cert /var/lib/icinga2/certs/worker-template.local.crt --trustedcert /var/lib/icinga2/certs/trusted-cert.crt --ticket <ticketsalt>' returned 1 instead of one of [0]
Error: /Stage[main]/Icinga2::Feature::Api/Exec[icinga2 pki request]/returns: change from 'notrun' to ['0'] failed: '"/usr/sbin/icinga2" pki request --host icinga2master.vorlage.local --port 5665 --ca /var/lib/icinga2/certs/ca.crt --key /var/lib/icinga2/certs/worker-template.local.key --cert /var/lib/icinga2/certs/worker-template.local.crt --trustedcert /var/lib/icinga2/certs/trusted-cert.crt --ticket <ticketsalt>' returned 1 instead of one of [0]
Debug: Executing: 'diff -u /etc/icinga2/features-available/checker.conf /tmp/puppet-file20190517-17846-179ftqa'

…and when I look into /var/log/icinga2/icinga2.log on my Icinga2 master, I see the following errors:

[2019-05-17 09:05:30 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 7, rate: 3.01667/s (181/min 933/5min 2811/15min);
[2019-05-17 09:05:40 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 7, rate: 3.01667/s (181/min 929/5min 2807/15min);
[2019-05-17 09:06:00 +0200] information/ConfigObject: Dumping program state to file '/var/lib/icinga2/icinga2.state'
[2019-05-17 09:06:00 +0200] information/WorkQueue: #6 (ApiListener, SyncQueue) items: 0, rate:  0/s (0/min 0/5min 0/15min);
[2019-05-17 09:06:00 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 0, rate: 3.13333/s (188/min 938/5min 2818/15min);
[2019-05-17 09:06:00 +0200] information/WorkQueue: #5 (ApiListener, RelayQueue) items: 0, rate: 0.65/s (39/min 192/5min 582/15min);
[2019-05-17 09:06:10 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.01667/s (181/min 933/5min 2813/15min);
[2019-05-17 09:06:40 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 1, rate: 3.01667/s (181/min 929/5min 2807/15min);
[2019-05-17 09:07:08 +0200] information/ApiListener: New client connection from [192.168.117.25]:51044 (no client certificate)
[2019-05-17 09:07:08 +0200] warning/TlsStream: TLS stream was disconnected.
[2019-05-17 09:07:08 +0200] information/ApiListener: No data received on new API connection. Ensure that the remote endpoints are properly configured in a cluster setup.
[2019-05-17 09:07:08 +0200] information/ApiListener: New client connection for identity 'worker-template.local' from [192.168.117.25]:51046 (certificate validation failed: code 9: certificate is not yet valid)
[2019-05-17 09:07:08 +0200] information/JsonRpcConnection: Received certificate request for CN 'worker-template.local' not signed by our CA.
[2019-05-17 09:07:08 +0200] warning/JsonRpcConnection: Ticket '<ticketsalt>' for CN 'worker-template.local' is invalid.
[2019-05-17 09:07:08 +0200] warning/TlsStream: TLS stream was disconnected.
[2019-05-17 09:07:08 +0200] warning/JsonRpcConnection: API client disconnected for identity 'worker-template.local'
[2019-05-17 09:07:10 +0200] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 6, rate: 2.98333/s (179/min 931/5min 2811/15min);
[2019-05-17 09:07:18 +0200] information/WorkQueue: #11 (JsonRpcConnection, #0) items: 0, rate: 0.0166667/s (1/min 1/5min 1/15min);

Therefore I re-executed @dnsmichi's command icinga2 pki ticket --cn sws-noc-worker-template.local and got the following:

root@icinga2master:~# icinga2 pki ticket --cn worker-template.local
<ticketsalt>
root@icinga2master:~#

I don't understand why the generation of the ticket doesn't seem to work properly :sleepy:!?

(By the way, @dnsmichi, I've changed the salt/ticket IDs :slight_smile: :wink: )

Hi,

this message appears if

  1. the certificate name mismatches the associated endpoint name.
  2. the ticketsalt is wrong; it has to be the same on master and agent/satellite.
  3. the optional fingerprint isn't correct; it's the fingerprint of the certificate of your CA master, not of the CA itself.

The configuration procedure is described in the documentation and as complete code in examples/init_master.pp and init_slave.pp (init_slave_validate.pp).

You get the fingerprint by
openssl x509 -noout -fingerprint -sha1 -inform pem -in master.crt
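
To tie these three points together, a minimal agent-side manifest could look roughly like this (a sketch based on the module docs, not tested against your environment; the Hiera key 'profile::icinga2::ticket_salt' and the fingerprint value are placeholders you have to replace):

class { '::icinga2':
  confd     => false,
  features  => ['checker','mainlog'],
  constants => {
    # 1. NodeName must match the certificate CN and the agent's Endpoint name
    'NodeName' => $facts['fqdn'],
  },
}

class { '::icinga2::feature::api':
  pki             => 'icinga2',
  ca_host         => 'icinga2master.vorlage.local',
  # 2. must be exactly the TicketSalt from the master's constants.conf
  ticket_salt     => lookup('profile::icinga2::ticket_salt'),
  # 3. optional: SHA1 fingerprint of the CA master's certificate (master.crt), not of the CA itself
  fingerprint     => '<sha1 fingerprint of master.crt>',
  accept_config   => true,
  accept_commands => true,
  endpoints       => {
    $facts['fqdn']                => {},
    'icinga2master.vorlage.local' => { 'host' => '192.168.117.30' },
  },
  zones           => {
    $facts['fqdn'] => {
      'endpoints' => [ $facts['fqdn'] ],
      'parent'    => 'master',
    },
    'master' => {
      'endpoints' => ['icinga2master.vorlage.local'],
    },
  },
}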


Hmm…

  1. the certificate name mismatches the associated endpoint name.

The relevant endpoint name is the FQDN of the Icinga2 node (in this case worker-template.local), right?

  2. the ticketsalt is wrong; it has to be the same on master and agent/satellite.

Where can I find the ticketsalt? Is it the one on the master in /etc/icinga2/constants.conf:

/* Secret key for remote node tickets */
const TicketSalt = "very_secure_ticket_salt"

  3. the optional fingerprint isn't correct; it's the fingerprint of the certificate of your CA master, not of the CA itself.

I don't use the optional fingerprint in my test setup ;)… So I think this shouldn't be the problem, right?

1-2) right
3) you posted code and the fingerprint was set

Please post your complete code for master and slave, without the ticketsalt shown.