Submit passive result via remote agent API?

Hi everyone!

I have some hosts that run the agent locally. Due to networking constraints, the master will connect to those hosts, but there are no other connections possible, especially not agent/satellite to master.

I wanted to execute some passive checks on those hosts and submit results to the agent installed on those hosts. Unfortunately, all I get on my API call is an empty result set. Checking the configuration with an API call against https://localhost:5665/v1/objects/hosts (or services) would yield an empty list as well. The only configuration that I can see is the checkcommands, but that seems to be the default list only.

Checking the documentation and googling for examples, I couldn’t find an answer to my problem. So, how could I submit passive check results in a scenario like mine (the master can only connect to the agent on the remote host)?

Thank you

Cheers
Steffen

Hi everyone!

I have some hosts that run the agent locally. Due to networking
constraints, the master will connect to those hosts, but there are no
other connections possible, especially not agent/satellite to master.

Icinga should be perfectly happy with that arrangement.

I wanted to execute some passive checks on those hosts and submit results
to the agent installed on those hosts.

Do you also have active checks running on these machines, and are those
working as expected?

Unfortunately, all I get on my API call is an empty result set.

What API call are you using - and on which machine?

https://icinga.com/docs/icinga2/latest/doc/12-icinga2-api/#icinga2-api-actions-process-check-result
suggests that you should be doing a POST to /v1/actions/process-check-result

Checking the configuration with an API call against
https://localhost:5665/v1/objects/hosts (or services) would yield
an empty list as well.

That seems odd. This is on the machine running the agent, yes?

Have you looked in /var/lib/icinga2/api/zones and its subdirectories to see
whether your service checks are defined there?

The only configuration that I can see is the checkcommands, but that seems to
be the default list only.

Checking the documentation and googling for examples, I couldn’t find an
answer to my problem. So, how could I submit passive check results in a
scenario like mine (the master can only connect to the agent on the remote
host)?

I would have thought that you would POST the service check results to
/v1/actions/process-check-result on the agent, and that will then send the
results to the master when the master next asks it for an update.

Regards,

Antony.

Yes

The one mentioned in the documentation. To be more precise:

curl -k -s -u root:password -H 'Accept: application/json' \
  -X POST 'https://127.0.0.1:5665/v1/actions/process-check-result' \
  -d '{ "type": "Service", "filter": "host.name==\"myhostname\" && service.name==\"backup\"", "exit_status": 0, "plugin_output": "OK: backup successful" }'

Yes, because that’s where I would submit the passive check result.

I can see the “backup” service configuration in /var/lib/icinga2/api/zones/global-templates/_etc/services.conf if that’s what you mean.

Yeah, me too.
:man_shrugging:

I assume something went wrong when enabling the API, but I have no idea what…

Hi @Bebef,

an empty result set means that your agent doesn’t know the service. You can verify that using icinga2 object list --type service --name backup on your agent.

It seems like you are using apply rules in your global templates. In that case the service object is created in the zone of its host. That zone needs to be the agent’s zone, not the master zone.
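
To illustrate what I mean (just a sketch with made-up names), a layout like this puts the applied service into the agent’s zone:

// zones.d/my-agent.example.com/hosts.conf (on the master)
object Host "my-agent.example.com" {
  check_command = "hostalive"
  address = "192.0.2.10"
  vars.backup = true
}

// zones.d/global-templates/services.conf
apply Service "backup" {
  check_command = "dummy"
  enable_active_checks = false
  enable_passive_checks = true
  assign where host.vars.backup == true
}

// The Service created by the apply rule inherits the zone of its Host,
// i.e. "my-agent.example.com", so the agent knows about it.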

Greetings
Noah

Yes, because I might need the check in all zones, so I put it in the global templates.

It’s really hard for me to wrap my head around the configuration, especially since file locations seem to carry semantic meaning.

Okay, I have tried to understand this and to re-configure everything, but I’ve failed.

What I have done is create a new folder in zones.d, named after the agent (i.e. the agent’s zone), and moved the host configuration of that particular host there. Unfortunately, that didn’t change anything at all. I still don’t see any services on the agent/host. (Side note: I’m starting to wonder how any check on that agent/host works at all.)

Can I use an apply rule to have the object created in the agent’s zone? If not, do I need to create a service object configuration for each agent (i.e. zone) individually?

:weary:

Hi @Bebef,

that’s fine and how it should be. It’s just important to know that the zones of the services resulting from those apply rules will by default equal the host’s zone.

That’s the right way of doing it. Are you able to find the host on your agent using icinga2 object list --type host? If so, there is something wrong with your apply rule. Maybe you’ve set the zone attribute on the apply rule to something?

Another way of executing checks on an agent is to use the command_endpoint attribute. Maybe you’re using that? Unfortunately that’s not an option for your backup service.
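
Just for comparison, a command_endpoint based check would look roughly like this (a sketch; the service and plugin names are only examples):

apply Service "disk" {
  check_command = "disk"
  // execute the plugin on the agent instead of on the master
  command_endpoint = host.vars.agent_endpoint
  assign where host.vars.agent_endpoint
}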

As I said above, it’s fine to use global apply rules, as long as the host objects are in the right place (zone directory in your case).

If nothing works, could you show us your apply rule and host config?

Greetings
Noah

Nope, no host (and no services either).

Unfortunately, nothing worked :frowning_face: So, here goes nothing:

zones.d/global-templates/services.conf:
apply Service "backup" {
  import "generic-service"

  enable_active_checks = false
  enable_passive_checks = true

  check_command = "dummy"
  check_interval = 1m
  vars.dummy_state = 3
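  // build the plugin output dynamically: report when the last result was received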
  vars.dummy_text = {{
    var service = get_service(macro("$host.name$"), macro("$service.name$"))
    var lastCheck = DateTime(service.last_check).to_string()

    return "No check results received. Last result time: " + lastCheck
  }}
  vars.grafana_graph_disable = true

  assign where host.zone == "my.host.name"
}


zones.d/my.host.name/my.host.name.conf:
object Host "my.host.name" {
  display_name = "My Host"
  import "generic-host"
  check_command = "hostalive"
  address = "1.2.3.4"
  vars.agent_endpoint = name //follows the convention that host name == endpoint name
  vars.os = "Linux"
  vars.backup = true
}

I took the liberty of changing sensitive data and removing the vars for all other checks. I hope that’s OK.

It’s also the latest state after all my iterations; “assign where host.zone” was my last act of desperation.

Okay, that’s odd. Did the host disappear completely or did it still appear on your master and/or in your frontend (if you’re using any)?

If it disappeared completely, try icinga2 daemon -C on your master and look for something like this:

[2020-06-17 09:30:51 +0200] warning/config: Ignoring directory '/etc/icinga2/zones.d/my.host.name' for unknown zone 'my.host.name'.

If such a message appears, you either messed up your names or you have configured your zones via the Director. Configuring your zones via the Director and using those zones manually with config files won’t work because of the order in which Icinga 2 evaluates the config.

Your host config looks fine to me, nothing special here. For now, focus on fixing the host first and ignore the service, because if the host doesn’t appear on your agent, the service won’t either.

One thing that can also cause this issue is your agent not accepting config. Have a look at /etc/icinga2/features-enabled/api.conf and make sure accept_config is set to true.
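
On the agent that file should contain something along these lines (a sketch; accept_commands is only required for command_endpoint checks):

object ApiListener "api" {
  accept_config = true     // accept config updates from the parent zone
  accept_commands = true   // only needed for command_endpoint checks
}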

It has always been there and always worked as expected, as well as the checks themselves. When I execute icinga2 object list --type host on the master I can see the host and its variables. There have been no changes after moving the host’s configuration from zones.d/master/hosts to zones.d/my.host.name/...
In Icingaweb2 (and the check objects) I can see the backup check configured on that host.

My configuration (as reported by icinga2 daemon -C) also is free of any warnings.

All check objects that I can see on the host are declared somewhere in /usr/share/icinga2/include/. I can also see the proper Zone object, but no host object.

OK, good to know. I think I will start with a re-installation of icinga2 on that particular host and see where that takes me. As (most of) the configuration is on the master, that should be no big deal…

Okay, so there was a change after all. I didn’t notice it at first, but it seems that once I moved the config from zones.d/master/hosts.conf to zones.d/my.host.name/... all the checks were marked as “overdue”. I would have expected some kind of alert on those, but that’s another topic.

I have moved the config back to the master zone which fixed the overdue issue. However…

Well, I obviously was wrong. Now I’m missing ALL Service objects for that host. Checks still seem to work, except for those that are not part of the default library; those are marked as unknown with “Check command ‘commandname’ does not exist.”

Until three days ago, I thought I had figured out Icinga 2 and its configuration and all. But now, with every little piece that I touch I realise: boy, was I wrong :weary:

I’d assume your zone definitions are not OK, e.g. zones nested within zones. Also, global zones need to be defined in zones.conf only.

OK, so just to clarify: I have a master and only hosts with agents. I don’t have a satellite that I need as a ‘jump host’ to monitor hosts.

From what I understood, “satellite” and “agent” are interchangeable in the documentation, so I have a “master” zone and for each host running an agent I have an “agent” zone. As a best practice/convention, zone names are equal to the host’s FQDN.

Assuming I have the master host and a host my.host.name running the agent, then, generally speaking, I would have a zone and an endpoint configured in ./zones.conf for each of them: two zones, two endpoints. One pair is the master endpoint and the master zone; the other is the my.host.name endpoint and the my.host.name zone, the latter with master as its parent.

The configuration for the host running the agent my.host.name (or, to be more precise: the configuration for the agent running on the host my.host.name) would need to be in the zones.d/my.host.name/ directory, because that’s also the zone name of that agent.
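
In config terms, I would expect something like this in zones.conf on the master (a sketch; the master endpoint name is made up, the rest is taken from my config above):

object Endpoint "master.example.com" { }

object Zone "master" {
  endpoints = [ "master.example.com" ]
}

object Endpoint "my.host.name" {
  host = "1.2.3.4"   // the master actively connects to the agent
}

object Zone "my.host.name" {
  endpoints = [ "my.host.name" ]
  parent = "master"
}

object Zone "global-templates" {
  global = true
}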

Is that correct?

Okay, so @rsx’s comment sort of pushed me in the right direction. It seems that the upgrade from 2.10 to 2.11 (or rather: the wrong configuration of 2.10) was the issue here, which I didn’t notice until I tried to use the API on one of the agents.

Simply moving the agent configuration to zones.d/* wasn’t enough, because I still had global configs in conf.d. I had to move the appropriate configuration items to zones.d/global-templates as well.
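
For the record, the layout on the master now looks roughly like this:

// /etc/icinga2/zones.conf                 -> Endpoint and Zone objects, including the global zone
// /etc/icinga2/zones.d/master/            -> objects that only concern the master
// /etc/icinga2/zones.d/my.host.name/      -> Host object for that agent
// /etc/icinga2/zones.d/global-templates/  -> apply rules, templates, custom CheckCommands
// /etc/icinga2/conf.d/                    -> no cluster-relevant objects anymore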

I assume that the other agents didn’t fail the same way as the one I moved around for testing (and subsequently reinstalled), because they still had proper configurations sitting in /var/lib/icinga2/api/zones/ locally.

Anyway, it works again and makes sense now.

Thanks to @nhilverling for your patience as well!
