I have some hosts that run the agent locally. Due to networking constraints, the master will connect to those hosts, but there are no other connections possible, especially not agent/satellite to master.
I wanted to execute some passive checks on those hosts and submit results to the agent installed on those hosts. Unfortunately, all I get on my API call is an empty result set. Checking the configuration with an API call against https://localhost:5665/v1/objects/hosts (or services) would yield an empty list as well. The only configuration that I can see is the checkcommands, but that seems to be the default list only.
Checking the documentation and googling for examples, I couldn't find any answer to my problem. So, how can I submit passive check results in a scenario like mine (where the master can only connect to the agent on the remote host)?
> I have some hosts that run the agent locally. Due to networking
> constraints, the master will connect to those hosts, but there are no
> other connections possible, especially not agent/satellite to master.
Icinga should be perfectly happy with that arrangement.
> I wanted to execute some passive checks on those hosts and submit results
> to the agent installed on those hosts.
Do you also have active checks running on these machines, and are those
working as expected?
> Unfortunately, all I get on my API call is an empty result set.
What API call are you using - and on which machine?
That seems odd. This is on the machine running the agent, yes?
Have you looked in /var/lib/icinga2/api/zones and its subdirectories to see
whether your service checks are defined there?
> The only configuration that I can see is the checkcommands, but that seems to
> be the default list only.
> Checking the documentation or googling examples I couldn't find any answer
> to my problem. So, how could I submit passive check results in a scenario
> like mine (master can only connect to the agent on the remote host)?
I would have thought that you would POST the service check results to
/v1/actions/process-check-result on the agent, and that will then send the
results to the master when the master next asks it for an update.
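Such a call could be sketched like this (the service name, host name, and API credentials are placeholders, not taken from your setup):

```shell
# Hypothetical sketch: submit a passive result for service "backup" on
# host "my.host.name" directly to the agent's local API.
curl -k -s -u apiuser:apipassword \
  -H 'Accept: application/json' \
  -X POST 'https://localhost:5665/v1/actions/process-check-result' \
  -d '{
        "type": "Service",
        "filter": "host.name==\"my.host.name\" && service.name==\"backup\"",
        "exit_status": 0,
        "plugin_output": "backup finished successfully"
      }'
```

This requires an ApiUser object with the `actions/process-check-result` permission on the agent, and of course the service object has to exist there — which is exactly what your empty result set suggests is missing.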
An empty result set means that your agent doesn't know the service. You can verify that using icinga2 object list --type service --name backup on your agent.
It seems like you use apply rules in your global templates. In that case the service object is created in the zone of its host. That zone needs to be the agent's zone, not the master zone.
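As a sketch in config terms (file names and the backup variable are illustrative, not taken from your setup): placing the host object in the agent's zone directory makes the globally applied service land in that zone.

```
# zones.d/my.host.name/hosts.conf on the master -- the host lives in
# the agent's zone, so services applied to it end up there as well:
object Host "my.host.name" {
  check_command = "hostalive"
  address = "198.51.100.10"
  vars.backup = true
}

# zones.d/global-templates/services.conf -- the apply rule is global,
# but the resulting service inherits the host's zone:
apply Service "backup" {
  check_command = "passive"
  enable_active_checks = false
  assign where host.vars.backup == true
}
```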
Yes, because I might need the check in all zones, so I put it in the global templates.
It's really hard for me to wrap my head around the configuration, especially since file locations seem to carry semantic meaning.
Okay, I have tried to understand this and to re-configure everything, but I've failed.
What I have done is create a new folder in zones.d, named like the agent (i.e. the agent's zone), and moved the host configuration of that particular host there. That unfortunately didn't change anything at all. I still don't see any services on the agent/host. (Side note: I'm starting to wonder how any check on that agent/host is working anyway.)
Can I use an apply rule to have the object created in the agent's zone? If not, do I need to create a service object configuration for each agent (i.e. zone) individually?
That's fine and how it should be. It's just important to know that the zones of the services resulting from those apply rules will by default equal the host's zone.
That's the right way of doing it. Are you able to find the host on your agent using icinga2 object list --type host? If so, there is something wrong with your apply rule. Maybe you've set the zone attribute on the apply rule to something?
Another way of executing checks on an agent is to use the command_endpoint attribute. Maybe you're using that? Unfortunately, that's not an option for your backup services.
As I said above, it's fine to use global apply rules, as long as the host objects are in the right place (the zone directory, in your case).
If nothing works, could you show us your apply rule and host config?
Okay, that's odd. Did the host disappear completely, or does it still appear on your master and/or in your frontend (if you're using any)?
If it disappeared completely, try icinga2 daemon -C on your master and look for something like this:
[2020-06-17 09:30:51 +0200] warning/config: Ignoring directory '/etc/icinga2/zones.d/my.host.name' for unknown zone 'my.host.name'.
If such a message appears, you either messed up your names or you have configured your zones via the Director. Configuring your zones via the Director and using those zones manually with config files won't work because of the order in which Icinga 2 evaluates the config.
Your host config looks fine to me, nothing special here. For now, focus on fixing the host first and ignore the service, because if the host doesn't appear on your agent, your service won't either.
Another thing that can cause this issue is your agent not accepting config. Have a look at /etc/icinga2/features-enabled/api.conf and make sure accept_config is set to true.
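For reference, the relevant part of the agent's api.conf might look like this:

```
# /etc/icinga2/features-enabled/api.conf on the agent:
object ApiListener "api" {
  accept_config = true    # let the master sync zone configuration
  accept_commands = true  # let the master execute commands via command_endpoint
}
```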
It has always been there and always worked as expected, as well as the checks themselves. When I execute icinga2 object list --type host on the master, I can see the host and its variables. There have been no changes after moving the host's configuration from zones.d/master/hosts to zones.d/my.host.name/...
In Icingaweb2 (and the check objects) I can see the backup check configured on that host.
My configuration (as reported by icinga2 daemon -C) also is free of any warnings.
All check objects that I can see on the host are declared somewhere in /usr/share/icinga2/include/. I can also see the proper Zone object, but no host object.
OK, good to know. I think I will start with a re-installation of icinga2 on that particular host and see where that takes me. As (most of) the configuration is on the master, that should be no big deal…
Okay, so there was a change after all. I didn't notice it at first, but it seems that once I moved the config from zones.d/master/hosts.conf to zones.d/my.host.name/..., all the checks became marked as "overdue". I would have expected some kind of alert on those, but that's another topic.
I have moved the config back to the master zone, which fixed the overdue issue. However…
Well, I obviously was wrong. Now I'm missing ALL Service objects for that host. Checks still seem to work, except all the checks that are not part of the default library. Those are marked as unknown with "Check command 'commandname' does not exist."
Until three days ago, I thought I had figured out Icinga2 and its configuration and all. But now, with every little piece that I touch, I realise: boy, was I wrong.
OK, so just to clarify: I have a master and only hosts with agents. I don't have a satellite that I need as a "jump host" to monitor hosts.
From what I understood, "satellite" and "agent" are interchangeable in the documentation, so I have a "master" zone and, for each host running an agent, an "agent" zone. As a best practice/convention, zone names are equal to the host's FQDN.
Assuming I have the master host and a host my.host.name running the agent, then, generally speaking, I would have a zone and an endpoint configured in ./zones.conf for each of them: two zones, two endpoints. One endpoint and zone are named master; the other endpoint is my.host.name in zone my.host.name, the latter with master as parent.
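Sketched out, the master's zones.conf would then contain something like the following (the global zone name is an assumption; on the agent, the host attribute of the master endpoint is simply omitted, since the agent can't connect out):

```
# zones.conf on the master:
object Endpoint "master" {
}

object Endpoint "my.host.name" {
  host = "my.host.name"  # the master actively connects to the agent
  port = "5665"
}

object Zone "master" {
  endpoints = [ "master" ]
}

object Zone "my.host.name" {
  endpoints = [ "my.host.name" ]
  parent = "master"
}

object Zone "global-templates" {
  global = true
}
```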
The configuration for the host running the agent my.host.name (or, to be more precise: the configuration for the agent running on the host my.host.name) would need to be in the zones.d/my.host.name/ directory, because that's also the zone name of that agent.
Okay, so @rsx's comment sort of pushed me in the right direction. It seems that the upgrade from 2.10 to 2.11 (or rather: the wrong configuration of 2.10) was the issue here, which I didn't notice until I tried to use the API on one of the agents.
Simply moving the agent configuration to zones.d/* wasn't enough, because I still had global configs in conf.d. I had to move the appropriate configuration items to zones.d/global-templates as well.
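The resulting layout under /etc/icinga2/zones.d/ on the master then roughly looks like this (file names are illustrative):

```
zones.d/
├── master/               # objects that stay on the master only
│   └── hosts.conf
├── global-templates/     # commands, templates and apply rules for all zones
│   ├── commands.conf
│   └── services.conf
└── my.host.name/         # objects synced to the agent's zone
    └── host.conf
```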
I assume that the other agents didn't fail in the same way as the one I moved around for testing (and subsequently reinstalled) because they still had proper configurations sitting locally in /var/lib/icinga2/api/zones/.