I still have a few gaps in maintaining the Graphite data… We have around 120 hosts, and we have created a separate mount point of around 200 GB; right now it is 65% full… I would like to understand how to maintain Graphite data. Do I need to schedule a cron job to clean data that is older than 7 days? I am a little scared to purge any Graphite data…
When whisper files are created by the carbon-cache daemon, they claim all the disk space they will ever need up front. That is, if you tell a metric to retain 1,800 days' worth of data, carbon creates a file large enough to hold all of that data from the start.
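Because of this preallocation, you can estimate disk usage per metric directly from the retention schema. This is a rough sketch assuming the whisper on-disk layout of a 16-byte file header, a 12-byte header per archive, and 12 bytes per datapoint; the retentions here (60s for 1 day, 10m for 70 days) are hypothetical examples, not your config:

```shell
#!/bin/sh
# Estimate the on-disk size of one whisper file from its archive sizes.
# Assumed format: 16-byte file header + 12 bytes per archive header
# + 12 bytes per stored datapoint.
points_per_archive="1440 10080"   # e.g. 60s:1d (1440 pts) and 10m:70d (10080 pts)
size=16
for pts in $points_per_archive; do
    size=$((size + 12 + pts * 12))
done
echo "estimated size: ${size} bytes"
```

Multiply that by your metric count to sanity-check whether the 200 GB mount can hold the schema you configured.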
The second link you provided describes a good way to remove metrics that have become obsolete, e.g. service checks you've removed from Icinga.
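One common way to spot such orphaned metrics is by modification time: carbon no longer writes to whisper files for removed checks, so their mtime stops advancing. A hedged sketch, assuming the default storage path `/opt/graphite/storage/whisper` and a 30-day threshold (both are assumptions; adjust to your setup, and always dry-run first):

```shell
#!/bin/sh
# List whisper files that have not been written to in over 30 days.
# WHISPER_DIR defaults to the usual Graphite storage path (an assumption).
WHISPER_DIR="${WHISPER_DIR:-/opt/graphite/storage/whisper}"

list_stale() {
    # Print .wsp files under $1 whose mtime is older than 30 days.
    find "$1" -name '*.wsp' -mtime +30 -print 2>/dev/null
}

list_stale "$WHISPER_DIR"

# Once the listed files look right, delete them and prune empty dirs:
#   find "$WHISPER_DIR" -name '*.wsp' -mtime +30 -delete
#   find "$WHISPER_DIR" -type d -empty -delete
```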
If you want shorter retention for existing Graphite data, look into the whisper-resize script that ships with it. Be careful, though: it irreversibly changes these files.
$ /usr/local/bin/whisper-resize.py --help
Usage: whisper-resize.py path timePerPoint:timeToStore [timePerPoint:timeToStore]*

timePerPoint and timeToStore specify lengths of time, for example:

60:1440      60 seconds per datapoint, 1440 datapoints = 1 day of retention
15m:8        15 minutes per datapoint, 8 datapoints = 2 hours of retention
1h:7d        1 hour per datapoint, 7 days of retention
12h:2y       12 hours per datapoint, 2 years of retention

Options:
  -h, --help            show this help message and exit
  --xFilesFactor=XFILESFACTOR
                        Change the xFilesFactor
  --aggregationMethod=AGGREGATIONMETHOD
                        Change the aggregation function (average, sum, last,
                        max, min, avg_zero, absmax, absmin)
  --force               Perform a destructive change
  --newfile=NEWFILE     Create a new database file without removing the
                        existing one
  --nobackup            Delete the .bak file after successful execution
  --aggregate           Try to aggregate the values to fit the new archive
                        better. Note that this will make things slower and use
                        more memory.