Removing downtime not possible


I ran into some strange behaviour when I accidentally created a wrong downtime for a host and then tried to remove it afterwards. The creation of the downtime object was not successful (according to the API):

    "results": [
        {
            "code": 500.0,
            "status": "Action execution failed: 'Error: Function call 'localtime_r' failed with error code 75, 'Value too large for defined data type'\n'."
        }
    ]

Although I can see the downtime in Icinga Web 2:

… but if I try to remove the downtime, I get an error:

icinga2: Can't send external Icinga command: 404 No objects found.

Even if I try to remove all downtimes for this host via the API, the broken downtime is still there.
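For reference, a "remove all downtimes for this host" request through the REST API looks roughly like this (host name, port, and credentials are placeholders for illustration; it requires a running Icinga 2 instance):

```shell
# Placeholder host name and credentials; the remove-downtime action
# removes every downtime matching the filter.
curl -k -s -u root:icinga \
  -H 'Accept: application/json' \
  -X POST 'https://localhost:5665/v1/actions/remove-downtime' \
  -d '{ "type": "Host", "filter": "host.name == \"example-host\"" }'
```

This is the request that answers with `404 No objects found` while the zombie downtime remains visible.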

Any ideas? Did I overlook something, or is this a bug (which I should report)?

Icinga2 version: 2.10.2-1
OS: CentOS 7.6.1810

I’ve seen a similar thing, which seems to happen only when specific times are 0 or lower. What parameters are needed to reproduce the problem?

I used a value for time_end which was bigger than the datatype allows, so the downtime has no end. I’d guess that the downtime only persists as a zombie entry in the IDO.

Ok, so you’ve hit an overflow in localtime_r which causes the core action to fail. For some reason the IDO creation event still happens, leaving this zombie entry behind. If you’re using an end_time after 2038, MySQL tends to throw an error as well.


OK. Can I safely delete the downtime entry in the IDO or do I have to keep something else in mind?

Deleting the entry from icinga_scheduleddowntime and icinga_downtimehistory should do the trick.

In terms of the problem itself, how should we fix this? Putting sanitizers directly into the API sounds best, doesn’t it? Or should the caller receive an immediate error?


The best approach, imho, would be to prevent creating such broken downtimes in the first place (check boundaries and datatypes), so the REST request would fail with a meaningful error message.
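As a sketch of that idea (a hypothetical helper, not actual Icinga 2 code, assuming 64-bit `time_t`): validate that a requested timestamp is representable before accepting the downtime, so the request can be rejected with a clear message instead of failing later inside `localtime_r`:

```c
#include <limits.h>
#include <stdbool.h>
#include <time.h>

/* Hypothetical boundary check: a downtime timestamp is acceptable only
   if it is non-negative, fits into time_t, and localtime_r can actually
   represent it as a struct tm. */
static bool downtime_time_valid(double ts) {
    if (ts < 0.0 || ts > (double)LONG_MAX)
        return false;
    time_t t = (time_t)ts;
    struct tm buf;
    return localtime_r(&t, &buf) != NULL;
}
```

A REST handler could run such a check on start_time and end_time and answer with a 400 and a descriptive message, instead of the opaque 500 from the failed action.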


Ok, then we’ll have a lot of fun between 32-bit time_t (year 2038) and 64-bit time_t (year 3000) when using INT_MAX.

TL;DR - 32 bit systems should die.


I don’t know much about C++, but isn’t there some kind of “exception” raised if the value is too great? Or is this too late in this context?

The scope where the exception would be thrown is different … actually, I’m thinking that the Log() call with FormatDateTime() triggers this in AddDowntime(). I haven’t yet had the chance to reproduce this error with lldb attached.