Trouble changing/removing scheduled downtime created from apply rules

Hi,

I use Director and have created a ScheduledDowntime apply rule. For example:

apply ScheduledDowntime "downer" to Service {
    author = "me"
    comment = "TEST"
    fixed = true
    assign where match("several-hosts-*", host.name) && service.name == "service-name"
    ranges = {
        "wednesday"	= "21:01-00:00"
    }
}

But when I remove the apply rule, the downtimes don’t disappear from the list of scheduled downtimes. And if I change the ranges, or even just the comment, the changes aren’t propagated either.

The service itself is also created through an apply rule, in case that matters.

What I’m really after is a more complex ranges setup, with hours running until midnight and whole weekends, but I don’t think the ranges are the issue. I just can’t get rid of the old apply rule until I make a new one. How do I cancel them completely?
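For reference, a ranges block covering evenings until midnight plus whole weekends could look roughly like this (the days and times below are only an illustration of the syntax, not my actual schedule):

ranges = {
    "friday"   = "21:00-24:00"
    "saturday" = "00:00-24:00"
    "sunday"   = "00:00-24:00"
}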


Icinga Core 2.12.0-1
Icingaweb2 2.8.2
Director 1.7.2

Two master node setup.

Hello @daniel_hamberg!

Please share a screenshot showing that list.

Best,
AK

Here you can see that I’ve deleted the apply rules and that the change has been deployed, but the downtimes are still active under Scheduled Downtimes at /icingaweb2/monitoring/list/downtimes.

Strange. This should have been fixed:

https://github.com/Icinga/icinga2/commit/8f03adf76f59e5108c6c9e13fe03646da20ec504

Is the downtime’s checkable (host/service) on a satellite/agent? If yes, which Icinga 2 version is it running?

Not sure how to check whether downtimes specifically are checkable, but checks are run on both nodes, yes. Both nodes were recently installed and have exactly the same setup. I have a master/master setup with just two nodes, no satellites/agents apart from that.

And both are v2.12.0?

Exactly. I have even double-checked the versions in Icinga Web 2 and with icinga2 --version, because I have upgraded the environment once since installation.
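For example, something like this compares the two masters quickly (the node names are placeholders):

for node in master1 master2; do
    ssh "$node" 'icinga2 --version | head -n 1'
done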

Any update on that? I’m having the same issue in a slightly different way:
no apply rule sets that downtime, but the behavior is the same.

  • a removed scheduled downtime is not removed from the host/service object it was assigned to
  • it is still visible in Icinga Web
  • but it is removed from the “icinga_scheduled_downtime” table in the database
  • every downtime still exists on the filesystem (counted below):
    ls /var/lib/icinga2/api/packages/_api/53a45f8d-aaea-4873-bc96-0b38dc866eef/conf.d/downtimes/
    Display all 39806 possibilities? (y or n)
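A quicker way to count what is still on disk would be something like this (the _api package UUID differs per installation, hence the glob):

ls /var/lib/icinga2/api/packages/_api/*/conf.d/downtimes/ | wc -l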

When removing it manually in Icinga Web, the log says:

[2021-12-15 23:25:13 +0100] information/ExternalCommandListener: Executing external command: [1639607113] DEL_HOST_DOWNTIME;2228
…[2021-12-15 23:25:13 +0100] warning/Downtime: Cannot remove downtime 'host-bla-bla-bla!c59e6db2-64fc-4214-86eb-464a625f9a17'. It is owned by scheduled downtime object 'bla bla bla'

The remove-downtime API call returns 200 ("successfully removed"), but the log says:

[2021-12-15 23:23:15 +0100] warning/Downtime: Cannot remove downtime 'host-bla-bla-bla!c03fc0b5-9b5c-4a2b-9aec-c4c958604080'. It is owned by scheduled downtime object 'bla bla bla'
[2021-12-15 23:23:15 +0100] information/HttpServerConnection: Request: POST /v1/actions/remove-downtime (from [0.0.0.0]:57334), user: bla, agent: curl/7.68.0, status: OK).
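For context, the call behind that log entry looked roughly like this (credentials, endpoint and port are placeholders; the downtime name is the one from the warning above):

curl -k -s -u apiuser:apipass -H 'Accept: application/json' \
    -X POST 'https://localhost:5665/v1/actions/remove-downtime' \
    -d '{ "downtime": "host-bla-bla-bla!c03fc0b5-9b5c-4a2b-9aec-c4c958604080", "pretty": true }'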

Strange thing → same problem/bug, I guess.

Having the same issue with Icinga 2 (2.10.1).
I’m even removing the records from the DB directly, and they still reappear.

I had to change (or remove) several thousand downtimes because of a wrong date (my fault).
I use module ‘Director’ → ‘Scheduled Downtimes’ (by Apply Rules)

what worked fine for me:

  1. disable (or delete) the wrong ‘ScheduledDowntime Apply Rules’ + deploy
  2. disable the hosts holding the downtime objects + deploy
  3. enable the hosts again + deploy

result:
all nnnn unwanted downtime entries are gone! :+1:
Now I set up a regular new ‘Scheduled Downtimes’ apply rule (+ deploy).
After a few minutes all the correct downtimes appeared nicely.
Total time used (incl. coffee): approx. 15 min.

:sunglasses:


This did not work for me. Any other solutions for removing a scheduled downtime created with Director or the Director API?

Yup. Wait for 2.14.3 to be released.
This is a(nother) known bug: