Host not monitored if configured within a satellite zone

I have two servers:

master.x.y.z: 10.0.0.1
satellite.x.y.z: 10.2.0.3

On master.x.y.z:

#icinga2.conf...
include_recursive "conf.d"
#zones.conf 
object Endpoint "master.x.y.z" {
        host = "10.0.0.1"
}
object Endpoint "satellite.x.y.z" {
        host = "10.2.0.3"
}
object Zone "master" {
        endpoints = [ "master.x.y.z" ]
}
object Zone "satellite" {
        endpoints = ["satellite.x.y.z"]
        parent = "master"
}
object Zone "director-global" {
        global = true
}
object Zone "global-templates" {
        global = true
}
#constants.conf...
const NodeName = "master.x.y.z"
const ZoneName = "master"
const TicketSalt = "ABC....ZYZ"
#conf.d/templates.conf
template Host "generic-host" {
  max_check_attempts = 3
  check_interval = 3m
  retry_interval = 30s
  check_command = "hostalive"
}
#zones.d/master/master.conf
object Host "Device 10.1.0.1" {
        import "generic-host"
        address = "10.1.0.1"
        zone    = "master"
}
#zones.d/master/satellite/satellite.conf
object Host "Device 10.2.0.1" {
        import "generic-host"
        address = "10.2.0.1"
        zone    = "satellite"
}

On satellite.x.y.z:

#icinga2.conf...
include_recursive "conf.d"
#zones.conf 
object Endpoint "master.x.y.z" {
        host = "10.0.0.1"
}
object Endpoint "satellite.x.y.z" {
        host = "10.2.0.3"
}
object Zone "master" {
        endpoints = [ "master.x.y.z" ]
}
object Zone "satellite" {
        endpoints = ["satellite.x.y.z"]
        parent = "master"
}
object Zone "director-global" {
        global = true
}
object Zone "global-templates" {
        global = true
}
#constants.conf...
const NodeName = "satellite.x.y.z"
const ZoneName = "satellite"
const TicketSalt = "ABC....ZYZ"

Device 10.1.0.1 is correctly monitored, but Device 10.2.0.1 is not. If I change Device 10.2.0.1 to use command_endpoint, it works. If I change Device 10.1.0.1 to use another zone name, it fails, which suggests that zone mapping is working. I need to move to a load-balanced zone configuration, hence the change from the previous command_endpoint configuration. (Note: this example is simplified for this post. The actual configuration has many thousands of hosts and services; services are omitted here for brevity. If I can fix the hosts, the services should similarly be monitored.)
Why does the satellite zone not behave as I expect? What configuration changes would fix this?

The host file for the satellite zone has to be

#zones.d/satellite/satellite.conf

otherwise it would not be synchronized to the satellite.
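
For reference, this is the same host definition from your example, just moved into the satellite zone directory on the master (a sketch; the file name is arbitrary):

#zones.d/satellite/satellite.conf
object Host "Device 10.2.0.1" {
        import "generic-host"
        address = "10.2.0.1"
}

With the file placed under zones.d/satellite, the zone is implied by the directory name, so the explicit zone attribute can be dropped.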

BTW: Some } are missing in your examples, but I’d assume they are just copy/paste errors?

I started with zones.d/satellite/satellite.conf, but this did not work. I moved it back to master when I had to revert to using command_endpoint for each host; it would not work otherwise.

So, with zones.d/satellite/satellite.conf I have the same issue: none of the satellite hosts are monitored.

Regarding the closing curly brackets: yes, just a typo whilst creating this contrived example. I will fix the syntax in the question.

I’d recommend adding a cluster-zone check first, to keep an eye on the connection. You may also find some hints in the Icinga log files.
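
A minimal sketch of such a check, using the endpoint and zone names from your example (the service name is arbitrary; place it somewhere the master zone reads, e.g. zones.d/master):

// Alerts when the master loses the connection to the "satellite" zone
apply Service "cluster-satellite" {
        check_command = "cluster-zone"
        vars.cluster_zone = "satellite"
        assign where host.name == "master.x.y.z"
}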

command_endpoint is typically used for agents only. Hence, your service definition might be wrong and/or in the wrong directory. Could you please share an example?

Sorry, I am not sure what further examples to share. I posted what I thought was a pretty extensive example of the configuration, demonstrating my attempt to monitor host availability within two zones. Each of the servers acts as an ICMP (ping) actor, identifying the availability of hosts within its zone. I want to get this working before I re-enable the services, which are mostly SNMP polls from each of these servers to hosts within their respective zones.
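
To illustrate the kind of service I will re-enable later, a zone-local ping check would look something like this (a sketch; the service name is arbitrary and ping4 is the stock ITL command):

apply Service "ping4" {
        check_command = "ping4"
        assign where host.address
}

Because each host carries a zone attribute, I expect such a check to be scheduled by the endpoint(s) of that zone rather than via command_endpoint.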

Please tell me what I can post to give a more explicit example.

Okay - I now have a development environment configured (separate from our staging environment), so I can play…

I have now got the configuration working as I want. I think the issue is that I need to ensure the configuration exists within the zones.d path. Our current configuration has all services under the conf.d path, which I think is why this wasn’t working. I have created directories below zones.d:

zones.d/master
zones.d/satellite
zones.d/global-templates

I have moved the templates to the global-templates folder and the specific services to their relevant zone directories. This works in my small development environment, so the next step is to apply it to the large data set within our staging environment.
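
For example, the generic-host template from the question now lives in the global zone, so that both endpoints receive it via config sync:

#zones.d/global-templates/templates.conf
template Host "generic-host" {
  max_check_attempts = 3
  check_interval = 3m
  retry_interval = 30s
  check_command = "hostalive"
}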

The main lesson learned here (and please shout if I have this wrong) is that when using zones, all host and service configuration should live within the zones.d path. (This isn’t explicitly evident to me from the documentation.)

Yeah, you got it. It took me some time to understand as well, even though it’s described here and here.