I migrated Icinga2 from an old server running CentOS 6 to a new server running Oracle Linux 8.1. As part of the migration, I performed an SQL dump from the old server and restored it on the new one. Additionally, I transferred the necessary files:
/etc/icinga2/
/var/lib/icinga2/
/etc/icingaweb2/
/var/cache/icinga2/
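For reference, the transfer was roughly along these lines (a minimal sketch; the database name, host name, and options are illustrative, not the exact commands used):

    # on the old server: dump the IDO database (name assumed here to be "icinga")
    mysqldump icinga > icinga.sql
    # on the new server: restore the dump
    mysql icinga < icinga.sql
    # copy state and configuration over (run on the new server)
    rsync -a old-server:/etc/icinga2/ /etc/icinga2/
    rsync -a old-server:/var/lib/icinga2/ /var/lib/icinga2/
    rsync -a old-server:/etc/icingaweb2/ /etc/icingaweb2/
    rsync -a old-server:/var/cache/icinga2/ /var/cache/icinga2/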
After restoring the database, all hosts, alerts, and email notifications appear correctly in the Icinga Web interface. However, after restarting the icinga2 service, all data disappears.
Any advice on resolving this issue would be greatly appreciated.
Hi @Dani, icinga2 itself does not store state in the SQL database (that’s mostly for the web interface), but in the directory /var/lib/icinga2.
My guess is that either a) the migration process was faulty (e.g. /var/lib/icinga2 was not replicated properly) or b) icinga2 used some other configuration, not the one from the old setup.
Hard to tell without more information.
Could you provide some more information about both setups?
For example:
Single-node setup or HA setup? (Satellites?)
Versions of icinga2 and icingaweb2 (and web modules) on both machines
Configuration via the Director or directly in config files?
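(To check the versions, something like this should work on both machines; the rpm query assumes an RPM-based install:)

    icinga2 --version
    rpm -q icinga2 icingaweb2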
I’ve migrated the /var/lib/icinga2 directory again, but it only worked after modifying the zones.conf file, specifically the object Endpoint and object Zone "master" settings.
However, I’m still using the old server’s hostname.
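For context, the zones.conf on the new server now looks roughly like this (the endpoint name shown is a placeholder for the old server's hostname):

    object Endpoint "old-server.example.com" {
    }

    object Zone "master" {
      endpoints = [ "old-server.example.com" ]
    }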
Icinga2 version: r2.14.3-1
Icinga Web 2 version: 2.9.5
Do you have any suggestions on how I can test the newly migrated Icinga setup, considering the old server is still running as production?
I do have a suspicion. Could it be that /etc/icinga2/constants.conf was not migrated? In most cases it contains the configuration which tells icinga2 who it is (its own Endpoint and Zone). If that is different, it might cause icinga2 to throw away all Downtimes, since it might not be responsible (according to the configuration logic). Especially the Zone would be interesting.
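On a default install, the relevant part of /etc/icinga2/constants.conf looks something like this (the value is a placeholder):

    /* The name of this icinga2 instance, i.e. its own Endpoint. Defaults to the FQDN if unset. */
    const NodeName = "old-server.example.com"
    /* The Zone this instance belongs to. Defaults to NodeName. */
    const ZoneName = NodeName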
As for testing it: do you have a "distributed setup", meaning more than one instance of icinga2 running in the whole system?
From my understanding, we have a single icinga2 instance running as the master (standalone); it's running CentOS 6, which is EOL.
So basically we want to do a full migration to a newer server,
and mark the old one for termination once we see that everything works correctly.
Thanks,
Dani.
As you might guess, if it shows 0 for all the objects, you have a problem with a configuration file somewhere and icinga2 does not properly read and evaluate them.
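(For clarity, I mean the object counts printed by the config validation; the numbers below are just an example:)

    icinga2 daemon -C
    ...
    information/ConfigItem: Instantiated 50 Hosts.
    information/ConfigItem: Instantiated 200 Services.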
Thanks Lorenz, the output looks the same on both sides.
Looks like after changing zones.conf on the NEW server
to the OLD server's icinga hostname, the data remains the same and doesn't disappear after restarting the service. THANKS!
When I changed it to object Zone "old-server",
all data came back.
Now the problem is that I can't test the new server without powering off or stopping the Icinga services on the old one, as they use the same hostname in zones.conf.
Any suggestion on how I can test while the old one is still running (production)?
/etc/icinga2/zones.d/ is empty on both servers, and
/etc/icinga2/conf.d contains the following:
api-users.conf
app.conf
commands.conf
downtimes.conf
groups.conf
hosts.conf
notifications.conf
services.conf
templates.conf
timeperiods.conf
users.conf
OK, some semantics first: every icinga2 instance has two variables which define who this instance is, NodeName and ZoneName.
Since this is a single-node setup, NodeName is not as important right now, but ZoneName is.
Regarding the configuration logic for Host and Service objects, the relevant components of an icinga2 system (meaning one or more connected icinga2 instances) are the Zones: every Host and Service is placed in a Zone, and the Endpoints of that Zone are responsible for that object.
Neither the NodeName (the Endpoint name of an icinga2 instance) nor the ZoneName (the Zone an icinga2 instance belongs to) has to be the same as the hostname; icinga2 effectively does not care what your hostname is.
Therefore, changing NodeName and ZoneName is not strictly necessary.
But since the main Zone you used previously seems to be the hostname of the monitoring machine, it would be rather confusing NOT to change it in the future.
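To illustrate: the default zones.conf simply reuses those constants, so an instance can be named entirely independently of the OS hostname (names below are made up):

    /* /etc/icinga2/constants.conf */
    const NodeName = "monitoring-master"
    const ZoneName = "master"

    /* /etc/icinga2/zones.conf */
    object Endpoint NodeName {
    }

    object Zone ZoneName {
      endpoints = [ NodeName ]
    }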
In any case, my suspicion is:
you changed the configuration in /etc/icinga2/zones.conf but not in /etc/icinga2/constants.conf. The local icinga2 then had a new Zone (something like "new-host") but thought it was not responsible (since ZoneName in /etc/icinga2/constants.conf was still "old-host") and therefore just did nothing, as there was nothing to do, really.
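In other words, a mismatch like this (names made up) would produce exactly that symptom:

    /* /etc/icinga2/zones.conf -- changed during the migration */
    object Zone "new-host" {
      endpoints = [ "new-host" ]
    }

    /* /etc/icinga2/constants.conf -- left untouched */
    const ZoneName = "old-host"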
Could you check that for me?
Thank you once again for your time, I really appreciate it.
The issue was that after migrating the zones.conf to the new server, I updated it with the new server name. However, all the monitored hosts were still configured to the old server name. Once I reverted the zones.conf to the old server name and restarted the Icinga server, all the hosts and details were restored.
Currently, I’m doing the final tunings and installing the required plugins. The next steps will involve testing the migration. We plan to power off both the old and new Icinga servers, power on the new server, and change its IP and hostname at the OS level. I will then verify if everything works as expected. As of now, I can see that the new server is successfully monitoring passive checks (HTTP, ping, etc.).
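For the cutover itself, the OS-level change would be roughly the following (the interface name and address are placeholders):

    # give the new server the old production hostname
    hostnamectl set-hostname old-server.example.com
    # move the old IP over (NetworkManager on Oracle Linux 8)
    nmcli connection modify eth0 ipv4.method manual ipv4.addresses 192.0.2.10/24
    nmcli connection up eth0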
Please let me know if you have any other suggestions on how we can test, or whether we can test the new server while the old one is still running.