Scheduled Downtime Missing

Hi,

I’m having an issue where scheduled downtimes are only visible on the Endpoint object - and not on the masters / the Icinga Web 2 interface. I think the problem is described here:

But I’m struggling to understand the final solution there. Please note I’m not sure whether the GUID (I guess it’s generated) after the ! in the object name needs to be identical across nodes.

What I have is a ScheduledDowntime apply rule, for example:

apply ScheduledDowntime "somedowntime" to Service {
    author = "scheduled-downtime"
    comment = "Example"
    ranges = {
        monday = "14:25-00:45"
    }
    assign where "somedowntime" in service.vars.downtimes
}

Then, in my service template (used by an apply-for rule), I’m doing vars += config, where config is the content of my dictionary entry. So, for example, on the host:

vars.some_check["something"] = { downtimes = [ "somedowntime" ] }
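
For completeness, the apply-for rule looks roughly like this (simplified sketch; the real template, check command and names differ):

apply Service "some_check-" for (something => config in host.vars.some_check) {
    import "generic-service"

    check_command = "dummy"    // placeholder, my real check is different

    // merges e.g. downtimes = [ "somedowntime" ] into service.vars,
    // so the ScheduledDowntime assign rule above can match
    vars += config
}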

This results in a kind of weird state:

  • Downtime is not visible in the Icinga Web 2 interface
  • No downtime at all is visible on master1 (the HA master where I keep the config)
  • Downtime is visible on master2 (the other master in the HA pair)
  • Downtime is visible on satellite01 (the satellite)
  • Downtime is not visible on the Endpoint

By “visible” I mean I can see the same downtime object name with the ! separator (which I believe needs to match across servers). I can see some downtimes on the Endpoint, but not the one I can see on the 2nd master and the satellite. The 1st master, where I keep my config (and which syncs it to the 2nd master), does not see any downtime object for this host/service.

Zone-wise, this is how it is set up.

master1 (the master in the HA pair where I keep the config):

object Endpoint "master1" {
}

object Endpoint "master2" {
    host = "master2"
}

object Zone "master" {
  endpoints = [ "master1", "master2" ]
}

master2 (the 2nd master in the HA pair, which gets the config synced from master1):

object Endpoint "master1" {
}

object Zone "master" {
  endpoints = [ "master1", "master2" ]
}

object Endpoint "master2" {
}

Then both masters have the same entries for the satellite and the endpoint:

object Zone "endpoint" { endpoints = [ "endpoint" ], parent = "sattellite" }
object Endpoint "endpointl" { log_duration = 0 }

object Endpoint "sattellite" {
object Zone "sattellite" {
    endpoints = [ "sattellite" ]

Then this is on the satellite that checks this Endpoint:

object Zone "endpoint" { endpoints = [ "endpoint" ], parent = "sattellite" }
object Endpoint "endpoint" { }

object Endpoint "master1" { }
object Endpoint "master2" { }
object Zone "master" {
        endpoints = [ "master1", "master2" ]
}
object Endpoint "sattellite" {
}
object Zone "sattellite" {
        endpoints = [ "sattellite" ]
        parent = "master"
}

I know it’s a lot of info, but it would be great if somebody could tell me what I’m doing wrong!

Thanks
Dariusz

Hi Dariusz!

Please share the /etc/icinga2/features-available/api.conf of all nodes that you think should see the respective downtimes.

Also, where is all of this located?

Best,
A/K

Hi @Al2Klimov

Sorry for the late answer!

Regarding the second question: the apply-for rule is located in global-templates, so it’s replicated everywhere. The host object on which we use the service is of course in its own zone (endpoint). Hope this answers the question.
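
To make it concrete, the layout on master1 is roughly like this (the file names here are just placeholders):

/etc/icinga2/zones.d/
  global-templates/
    downtimes.conf    (the apply ScheduledDowntime rule above)
    services.conf     (the apply-for Service rule / template)
  endpoint/
    hosts.conf        (the host with vars.some_check["something"])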

Regarding the API settings on all the hosts, here they are:

master1: (the one with the config in /etc/icinga2/zones.d)
  //accept_config = false
  //accept_commands = false

master2:
  accept_config = true
  accept_commands = true

satellite:
  accept_config = true
  accept_commands = true

agent:
  accept_config = true
  accept_commands = true

Here is a direct example of this missing on one of the masters - HOSTNAME and SERVICE are of course replaced, but these counts should all be equal, right?

Command to count:
icinga2 object list --type downtime --name 'HOSTNAME!SERVICE-*' | grep -i 'of type' | wc -l

Results:
Master1: 4
Master2: 6
Satellite: 6
Agent: 6
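
To see which objects actually differ (not only the counts), something like this could be run on each node and the resulting files diffed; each matching line looks like Object '…' of type 'Downtime':

# per node: dump the sorted downtime object names, then diff the files between nodes
icinga2 object list --type downtime --name 'HOSTNAME!SERVICE-*' \
  | grep -i 'of type' | sort > /tmp/downtimes_$(hostname -s).txt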

Not sure why my main master is missing something if it’s the one with the config - it should create all the runtime objects, right?

Thanks
D

What happens if you set:

master1:
  accept_config = true
  accept_commands = true
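
i.e., in /etc/icinga2/features-available/api.conf on master1 the ApiListener object would then look roughly like this (any existing certificate/bind settings stay as they are):

object ApiListener "api" {
  accept_config = true
  accept_commands = true
}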

Hi,

So I’ve changed both settings to true on master1. Is that OK in a cluster scenario where both masters now have the same settings?

Then I did a systemctl reload → no change. So I thought, let me try a systemctl restart, and now I can see all the downtimes.
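
For the record, roughly what I ran on master1 (assuming the service unit is called icinga2 here):

systemctl reload icinga2     # downtimes still missing after this
systemctl restart icinga2    # after a full restart all downtimes show up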

So, to confirm: all objects are now present on both masters, and I can see all 6 created and propagated down.

Thanks
D

As I understand it, the two masters divide their tasks between them; if master2 schedules the downtime, master1 has to get this information from it somehow - which is why master1 needs accept_config = true as well.

Also, please hit the solution button.