This works well enough except for one thing: it looks like api-users.conf isn’t synced as part of the configuration. Do I need to set that up manually on the clients, or is there something I’ve missed?
In fact, if I have to create api-users.conf on the clients, does that mean I have to give up the top-down architecture, or can I get away with just creating that one file on the clients?
For security reasons, I wouldn’t sync the API credentials to those nodes but rather set them up locally where they’re really needed. If you’re really sure that this doesn’t open up security exploits, you can use the cluster config sync zones, e.g. put api-users.conf into the master or satellite zones. I would also advise using different ApiUsers for each zone to make identifying (un)wanted access easier.
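For illustration, a minimal sketch of what that could look like on the config master — the zone names `satellite-a` and `satellite-b` and the ApiUser names are placeholders, and the passwords are elided. The cluster config sync then ships each file only to the endpoints of its zone:

```
# /etc/icinga2/zones.d/satellite-a/api-users.conf
object ApiUser "api-passive-satellite-a" {
  password = "..."   // per-zone secret, not shared with other zones
}

# /etc/icinga2/zones.d/satellite-b/api-users.conf
object ApiUser "api-passive-satellite-b" {
  password = "..."   // per-zone secret, not shared with other zones
}
```

With a distinct ApiUser per zone, an access attempt in the logs immediately tells you which zone’s credentials were used.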
Before digging into the risks here, please clarify why you need the REST API on the client? The client/agent should solely execute checks e.g. via command endpoint, and nothing else.
Perhaps I use the wrong terminology? We have a master zone and two client/satellite (?) zones; I wish to implement passive checks, and the master can only be reached from the two clients/satellites due to configurations outside my control. It seems to work, BTW.
So, you want to feed in passive check results to the satellites which have the host/service objects synchronized, e.g. with an external script executed on a specific server, right?
The reason I am nitpicking here is that when you just sync that “one” ApiUser with broad permissions, someone can hijack the agent and extract those credentials. Having them, the most logical next step is to go up the chain and test whether they also work against the satellite and/or master.
In terms of passive check results, limit the permissions to exactly that (actions/process-check-result; check the docs for the proper setting). Do not use an ApiUser object which grants more or all (*) permissions.
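A sketch of such a locked-down ApiUser — the object name and password are placeholders; the permissions attribute and the actions/process-check-result permission are as documented for Icinga 2:

```
object ApiUser "passive-result-writer" {
  password = "..."
  // Only allow submitting check results, nothing else:
  permissions = [ "actions/process-check-result" ]
  // Avoid: permissions = [ "*" ]
}
```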
Meaning to say, the ApiUser synced to that zone should have a descriptive name and limited permissions.