Multiple zones and vSphereDB module

We have one master and two satellite instances, each of which monitors a single site. As per the documentation, the master runs Icinga 2, Icinga Web 2, the DB module, and Redis. Additionally, we have deployed the Director and the vSphereDB plugin.

The satellites communicate correctly with the master, and each agent talks to its respective satellite. All passive checks are performed on the agents, active checks on the satellites. Everything works perfectly, except the vSphereDB plugin.

Here is the issue: the vSphereDB plugin runs only on the master, so the import/sync jobs etc. are defined there. I would love to use the `icingacli vsphere check vm --uuid xxxx` command; however, only the master server can do that. As an additional bit of info, all sites share a common vCenter, which is located at the master site.
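For reference, a CheckCommand wrapping that CLI call might look roughly like this; the command name, binary path, and custom variable are my own placeholders, not the module's shipped definition:

```icinga2
// Sketch only: wraps "icingacli vsphere check vm --uuid <uuid>".
// Object name, icingacli path, and custom variable are assumptions.
object CheckCommand "vspheredb-check-vm" {
  command = [ "/usr/bin/icingacli", "vsphere", "check", "vm" ]
  arguments = {
    "--uuid" = "$vspheredb_vm_uuid$"
  }
}
```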

What do I need to do in order to get those checks down to the satellites? Am I correct in assuming that I would need to install the vSphereDB module plus Icinga Web 2 on both satellites? I am not really sure how to proceed, as the documentation clearly states that the web instance needs access to the DB, which can only be reached at the master site, and to Redis (same).

I tried to specify the `command_endpoint` for the VMware check service to point to the master, but Icinga 2 is too smart and tells me that the endpoint needs to be inside, or a direct child of, each satellite zone. So that doesn't work either.
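A sketch of the kind of config that gets rejected; zone, endpoint, and command names are placeholders:

```icinga2
// Sketch of the failing attempt. Icinga 2 rejects this because the
// command_endpoint must lie in the service's own zone or a direct
// child zone, and "master" is the satellite zone's *parent*.
apply Service "vmware-vm-status" {
  check_command = "vspheredb-check-vm" // assumed command name
  command_endpoint = "master-host"     // endpoint in the parent zone: invalid
  zone = "satellite-a"
  assign where host.vars.vspheredb_vm_uuid
}
```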

Any insight would be highly appreciated.


I’m also very interested in this topic. We have a setup with two masters in a remote (cloud) location and two zones with one satellite each. I fail to see where the vSphereDB module, service, and database should be located and activated.

Icinga Web 2 is served by the two masters; the VMware servers are located in the satellite zones.

If I understand correctly, this is simple.

The vSphereDB module has three parts:

  • Service
  • DB
  • Icingaweb2 module

The check runs against the Icinga Web 2 module.
You stated that the vCenter is central and located at the master site.
So the vSphereDB service can run on the master, and the Icinga Web 2 vSphereDB module is also installed on the master. This is why you need to state in your service “vSphereDB - Host Status” that the check needs to run on the master, and this isn’t a problem at all.

Leaving the endpoint empty in the Director results in `command_endpoint = null` in the config, so the check will run on the master.
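In plain Icinga 2 DSL, the resulting object would look roughly like this; service and command names are illustrative, not taken from an actual Director deployment:

```icinga2
// Illustrative sketch: with no command_endpoint set, the check is
// scheduled and executed inside the zone the object belongs to.
apply Service "vSphereDB - Host Status" {
  check_command = "vspheredb-check-host" // assumed command name
  zone = "master"                        // object lives in the master zone
  // command_endpoint deliberately unset (null): runs on the master
  assign where host.vars.vsphere_monitored
}
```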

I don’t get this; active checks most of the time need to be executed on the target machine by the agent.
Passive check results need to reach an Icinga 2 API endpoint that knows the object, or they get ignored or result in an error; the master knows all objects.

Thank you @rivad for your reply. Indeed, the vCenter is central; however, this is going to change soon. We want to make the monitoring local to each site, involving only the satellites, if that is possible.

Guess I phrased it quite badly :slight_smile: What I was trying to say is that each satellite is responsible for both passive and active checks for all objects at its site: the active checks are scheduled by the satellite but executed by the agent, which reports back to the satellite. For passive checks, the agents likewise report back to the satellite of their site. Does that make more sense?

So basically, I want to make sure that all checks are performed locally at each site.

The satellite isn’t doing much besides establishing connections and passing config to the agents in its attached zones, providing an API endpoint, and sending results back to the parent zone.
@tgelf is working on making the vSphereDB module able to use satellites (Feature Request: Using vspheredb via remote agent · Issue #105 · Icinga/icingaweb2-module-vspheredb · GitHub), but this changes nothing about the fact that the check has to run on a node that has the Icinga Web 2 vSphereDB module installed, as it checks against the DB and not against the vCenter directly. The check just needs a connection to the DB; the vSphereDB service doesn’t need to run on that node for the check to work.

Thank you Dominik, for all your assistance. We have finally figured out what the issue was in my deployment.

The vSphereDB component must be installed on the endpoint performing the active check. The check is actually a two-step process: in the first step, the vSphereDB daemon reads all information from the vCenter server and updates its internal database. The second step (the actual check command) only queries this database for the current state of the host, datastore, or virtual machine.

In our case, we wanted to limit the number of cross-datacenter requests by offloading the checks to the satellite servers; however, I originally thought the second step was the actual check against vCenter, which it clearly is not. So in order to achieve our goal, we would basically need to deploy a full Icinga stack in each datacenter, including vSphereDB and Icinga Web, in order to run both steps independently of the other datacenters. I will work on that in the near future and update this thread once I have finished the setup.

In the meantime, for any environment this might apply to, we can still assign the clients to our satellite zones and have the vSphere checks executed by the master only. For that to work, the service template must be deployed to the zone of the master, and with the help of https://github.com/Icinga/icingaweb2-module-director/issues/808 the endpoint executing the check also needs to be set to the master.
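Sketched in plain Icinga 2 DSL, the interim setup amounts to something like the following; template, command, and endpoint names are placeholders, and a real deployment would set these through the Director rather than hand-written config:

```icinga2
// Sketch only: service template pinned to the master zone, with the
// executing endpoint forced to the master. Since the service objects
// live in the master zone here, pointing command_endpoint at a master
// endpoint is valid (unlike doing so from a satellite zone).
template Service "vspheredb-check" {
  check_command = "vspheredb-check-vm" // assumed command name
  zone = "master"                      // deploy into the master zone
  command_endpoint = "master-host"     // assumed master endpoint name
}
```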