We are using Icinga with the InfluxDB (1.8) writer feature in an environment with about 2,000 hosts and 30,000 service checks.
We upgraded Icinga2 from 2.13.4 to 2.14.0 two months ago.
It looks like we have significantly more disk usage on our InfluxDB machines since upgrading to 2.14.0; the increase starts at exactly the moment the upgrade was installed.
I recently upgraded my system to Icinga 2.14.0 as well and I’m not seeing any increase in the usage of my InfluxDB’s data partition (but we are using InfluxDB 2).
I am a bit surprised by the host and service templates in your InfluxdbWriter though, since you put a lot of additional information in there.
Depending on the cardinality of your tag values, writing that many tags into the time series can definitely become a storage problem, since every unique combination of tag values creates its own series in InfluxDB.
Since you started to see the increase in data usage after an Icinga2 update, my guess would be that the amount of metadata being sent to InfluxDB has increased, but I can’t find anything in the changelog regarding that.
Is there a specific reason for sending metadata into InfluxDB?
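For comparison, a stripped-down InfluxdbWriter that only sends the default measurement and tags, with the optional metadata and threshold fields switched off, would look roughly like this; host, port and database name are placeholders for your environment:

```
/* Minimal InfluxdbWriter sketch: default measurement and tags only.
 * Host, port and database are placeholders, adjust to your setup. */
object InfluxdbWriter "influxdb" {
  host = "influxdb.example.com"
  port = 8086
  database = "icinga2"

  // Keep the tag set small: every unique tag combination becomes its own series.
  host_template = {
    measurement = "$host.check_command$"
    tags = {
      hostname = "$host.name$"
    }
  }
  service_template = {
    measurement = "$service.check_command$"
    tags = {
      hostname = "$host.name$"
      service = "$service.name$"
    }
  }

  // Optional extras; leaving them disabled keeps the per-check payload small.
  enable_send_thresholds = false
  enable_send_metadata = false
}
```

Each additional tag in those templates multiplies the number of possible series, so trimming them down is usually the first thing to try.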
Hi all, just to say “me too” to Devopstt’s first post. Since I upgraded from 2.13.8 to 2.14.0, the disk usage of my InfluxDB 1.8 has increased a lot, roughly 100%, and my setup is pretty similar to Devopstt’s. Overall performance has not been affected, only the disk usage. The only workaround that has helped so far has been to reduce the InfluxDB retention period.
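For anyone who wants to see where the growth is coming from before cutting retention, something along these lines can be run against a 1.8 instance; the database and retention policy names ("icinga2", "autogen") are assumptions and need to match your setup:

```
-- Rough overview of what the icinga2 database is storing (names are assumptions).
SHOW SERIES CARDINALITY ON "icinga2"      -- how many unique series exist
SHOW TAG KEYS ON "icinga2"                -- which tags the writer is actually sending
SHOW RETENTION POLICIES ON "icinga2"      -- current duration and shard group duration

-- The workaround mentioned above: shorten the retention period, e.g. to 30 days.
ALTER RETENTION POLICY "autogen" ON "icinga2" DURATION 30d SHARD DURATION 7d
```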
Hi everyone,
I can confirm this too after the upgrade from 1.8 to 2.14. We see this in both our prod and test environments. Retention of the “icinga2” bucket is 8736h0m0s (= 364d), shard group duration: 168h.