How to sync endpoints and zones in a master-satellite-client setup to the satellite?

Hi,

I followed the blog post “how-to-setup-icinga2-master-satellite-client-using-director-module” (no links allowed) to set up a master-satellite-client environment from scratch using Icinga Web 2.
I’d like to run “run on agent” services on the client with this setup, but the satellite’s config breaks on the first sync because command_endpoint needs an Endpoint declaration, which is not synced by the master.

Log:

[2019-08-16 17:39:07 +0200] critical/config: Error: Validation failed for object 'test.local.seffner-schlesier.de!swap' of type 'Service'; Attribute 'command_endpoint': Object 'test.local.seffner-schlesier.de' of type 'Endpoint' does not exist.
Location: in /var/lib/icinga2/api/zones/director-global/director/service_templates.conf: 5:5-5:32
/var/lib/icinga2/api/zones/director-global/director/service_templates.conf(3):     check_interval = 1m
/var/lib/icinga2/api/zones/director-global/director/service_templates.conf(4):     retry_interval = 30s
/var/lib/icinga2/api/zones/director-global/director/service_templates.conf(5):     command_endpoint = host_name
                                                                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/var/lib/icinga2/api/zones/director-global/director/service_templates.conf(6): }
/var/lib/icinga2/api/zones/director-global/director/service_templates.conf(7):

This happens only if I set up a “run on agent” service, and it is resolvable by adding the endpoint and zone of the client to the satellite’s zones.conf - but I’d like to push the whole config from the master.

How can I force the master to sync the needed definitions?
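
For reference, this is roughly what I currently have to add to the satellite’s zones.conf by hand (a sketch; the agent name is taken from the log above, the satellite zone name from my setup) - these are the definitions I’d like the master to sync:

object Endpoint "test.local.seffner-schlesier.de" {
}

object Zone "test.local.seffner-schlesier.de" {
    // agent zone, parented to the satellite zone
    parent = "klipphausen.ddns.seffner-schlesier.de"
    endpoints = [ "test.local.seffner-schlesier.de" ]
}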

kind regards
Ronny

Hi Ronny,

I’m not 100% sure if the setup you’re looking for is supported, and I don’t have access to my laptop right now, but you can just drop everything in /var/lib/icinga2/api/zones/ and restart Icinga 2 to force a complete config sync from the master.

Hi,

the host itself needs to be put into the satellite cluster zone in order to sync Host, Endpoint, Zone objects for this host. Can you share a screenshot of your host definition inside the Director?
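
Roughly, the goal is a rendered Host object that carries the satellite zone, along the lines of this sketch (using names that come up later in this thread, not your actual config):

object Host "test.local.seffner-schlesier.de" {
    check_command = "hostalive"
    address = "192.168.100.159"
    // placing the host in the satellite cluster zone is what
    // makes the related objects sync down to the satellite
    zone = "klipphausen.ddns.seffner-schlesier.de"
}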

Which versions of Icinga 2 and the Director are involved here?

Cheers,
Michael

Dropping /var/lib/icinga2/api on different nodes is something I had tried before, but your post gives me a bit more confidence that this is not too bad for Icinga :wink:
But this does not solve my issue.

I’m using Icinga 2 version r2.10.5-1 and Icinga Web 2 2.7.1.

Sometimes I read just “zone” and sometimes “cluster zone” - are they the same thing?
At the moment I have “master” for the master, “klipphausen*” for the satellite, and “test*” for the node behind the satellite.

satellite configuration (screenshot)

node configuration (screenshot)

What does the rendered configuration for this deployment look like inside the Director?

Did you mean the resolved text based config preview?

satellite

zones.d/klipphausen.ddns.seffner-schlesier.de/hosts.conf

object Host "klipphausen.ddns.seffner-schlesier.de" {
    display_name = "SUS GW"
    address = "klipphausen.ddns.seffner-schlesier.de"
    address6 = "klipphausen.ddns.seffner-schlesier.de"
    check_command = "hostalive"
    max_check_attempts = "3"
    check_interval = 1m
    retry_interval = 30s
    enable_active_checks = true
    enable_passive_checks = true
    zone = "klipphausen.ddns.seffner-schlesier.de"
    vars.os = "Linux"
    vars.sla = "24x7"
}

node

zones.d/klipphausen.ddns.seffner-schlesier.de/hosts.conf

object Host "test.local.seffner-schlesier.de" {
    display_name = "SUS test"
    address = "192.168.100.159"
    check_command = "hostalive"
    max_check_attempts = "3"
    check_interval = 1m
    retry_interval = 30s
    enable_active_checks = true
    enable_passive_checks = true
    zone = "klipphausen.ddns.seffner-schlesier.de"
    vars.command_endpoint = "host.vars.client_endpoint"
    vars.sla = "5x10"
}

You can use the three-backticks notation for code blocks; that’s like <code> tags - see the guide “Create topics and master Markdown formatting”.

So the rendered configuration for zones.d/klipphausen.ddns.seffner-schlesier.de/ does not include Endpoint and Zone configuration? Can you show the full output?

Cheers,
Michael

How do I “render” the whole configuration?
For now, I found zones and endpoints under the Director’s Infrastructure section:

zones.d/master/endpoints.conf
object Endpoint "klipphausen.ddns.seffner-schlesier.de" {
}
object Endpoint "test.local.seffner-schlesier.de" {
}

zones.d/master/zones.conf
object Zone "klipphausen.ddns.seffner-schlesier.de" {
    parent = "master"
    endpoints = [
        "klipphausen.ddns.seffner-schlesier.de",
        "test.local.seffner-schlesier.de"
    ]
}

Inside the deployment section, when you render and deploy the configuration, you can render the full config output.

In terms of the infrastructure - did you put them there yourself?

> Inside the deployment section, when you render and deploy the configuration, you can render the full config output.

Ah, I see - there are the deployment times, and inside each one the rendered config files.

> In terms of the infrastructure - did you put them there yourself?

I don’t think so. Perhaps I should use ‘icinga2 object list --type Endpoint’ and so on to get an overview of all the Director and file-based config?

But back to the main question: it seems the endpoint (node) is configured correctly in the satellite’s zone, but this zone isn’t synced from the master to the satellite - why?

The rendered output shows the object in the master zone; this is what Icinga 2 receives from the Director, and it gets deployed into zones.d/master instead of your satellite zone. So I see the problem somewhere in the Director settings … try resetting the cluster zone to “none”, deploy, let it fail, and then set it again. Maybe there’s a form storage bug preventing this setting from taking effect.
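
For comparison, for the sync to work, the agent objects would have to be rendered below the satellite zone directory instead of zones.d/master, roughly like this (a sketch with hypothetical file placement, not the Director’s literal output):

zones.d/klipphausen.ddns.seffner-schlesier.de/endpoints.conf

object Endpoint "test.local.seffner-schlesier.de" {
}

zones.d/klipphausen.ddns.seffner-schlesier.de/zones.conf

object Zone "test.local.seffner-schlesier.de" {
    // the agent gets its own zone whose parent is the satellite zone
    parent = "klipphausen.ddns.seffner-schlesier.de"
    endpoints = [ "test.local.seffner-schlesier.de" ]
}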

Which Director version is used here?

Cheers,
Michael

> Which Director version is used here?

I checked it out using git a few days ago; the version is “master”. Changing to stable 1.6.2 isn’t possible because of different DB schemas ;-(

> The rendered output shows the object in the master zone; this is what Icinga 2 receives from the Director, and it gets deployed into zones.d/master instead of your satellite zone. So I see the problem somewhere in the Director settings … try resetting the cluster zone to “none”, deploy, let it fail, and then set it again. Maybe there’s a form storage bug preventing this setting from taking effect.

Oh, I see. The host object is in zones.d/satellite-zone, but the endpoint object is in zones.d/master …
I set the host’s cluster zone to “select one” in one try and to “master” in another; in each case I deployed, changed it back to the cluster zone klipphausen.ddns.seffner-schlesier.de, but nothing changed.

In the rendered config I see the satellite and the agent endpoints in zones.d/master/endpoints, but other agent endpoints in zones.d/master/agent_endpoints. As I understand you, for the sync to the satellite to work, the endpoint configuration needs to be under zones.d/klipphausen.ddns.seffner-schlesier.de.

The initially mentioned blog post said to manually set up an endpoint for the agent in order to use the API user. It seems that is only needed for the satellite, but not for the node.
So I removed the agent’s endpoint in the Director, and everything works as expected after cleaning up the manually edited zones.conf on the satellite.

Many thanks for your support.

Strange - then the blog post is wrong. What you typically need before starting with the Director is a master with a zones.conf specifying the master-satellite hierarchy. With the Director kickstart, this then gets imported as external objects, visible via the Infrastructure section.

The satellite also needs the master-satellite hierarchy, but nothing else.
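
A minimal zones.conf expressing that hierarchy could look like this (a sketch; the master endpoint name is a placeholder, the global zone name is taken from the log above):

object Endpoint "master1.example.com" {
    // hypothetical master endpoint name - use your master's FQDN
}

object Endpoint "klipphausen.ddns.seffner-schlesier.de" {
}

object Zone "master" {
    endpoints = [ "master1.example.com" ]
}

object Zone "klipphausen.ddns.seffner-schlesier.de" {
    parent = "master"
    endpoints = [ "klipphausen.ddns.seffner-schlesier.de" ]
}

object Zone "director-global" {
    global = true
}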

Then you can put a host into the satellite zone in the Director. Once you mark it as an agent, the Director renders Host, Endpoint, and Zone config objects for your convenience. Since the host is put into the satellite zone, these render into zones.d/satellite.

Once Icinga receives the deployed config from the Director, it recognizes the directory as belonging to the satellite zone and syncs it straight there.

When the satellite connects, it gets that config synced, including the Host, Endpoint, and Zone objects for the agent. It validates and reloads, then starts to schedule the checks, which are executed via command endpoint on the agent.
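
To tie this back to your original validation error: once the agent’s Endpoint object exists on the satellite, a “run on agent” service along these lines validates and runs (a sketch; vars.agent_endpoint is a common convention, not necessarily what the Director renders for you):

apply Service "swap" {
    check_command = "swap"
    // execute the check on the agent instead of locally on the satellite
    command_endpoint = host.vars.agent_endpoint
    assign where host.vars.agent_endpoint
}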

Cheers,
Michael
