So far I’ve been running our Icinga2 setup with a master and two satellites, both of which just run the Icinga2 agent and sit in their respective zones (one being our test environment, the other being the office-internal infrastructure).
I’m now tasked with expanding the setup with another zone and a respective satellite for an external client site, with Icingaweb2 running on the satellite for local dashboard access. Configuration and notifications will still be handled on the master (mostly in Director), but since Icingaweb2 needs IcingaDB and Redis nowadays, I find myself at a bit of an impasse with the configuration.
If I am correct, I need to configure /etc/icingadb-redis/icingadb-redis.conf with the following settings (see the sketch after this list):
- `bind x.x.x.x` to bind the appropriate interface
- `protected-mode no` to open it for external access by the satellite IcingaDB instance used by Icingaweb2
- set the ACL for the user
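For reference, a minimal sketch of what I think those lines would look like; the bind address and password are placeholders, and note that the Icinga packaging runs Redis on port 6380 by default if I remember correctly:

```conf
# /etc/icingadb-redis/icingadb-redis.conf -- sketch, addresses and
# passwords are examples, not my real values
bind 127.0.0.1 10.27.1.5   # keep localhost, add the externally reachable interface
protected-mode no          # allow non-localhost clients; pair this with ACLs and firewalling
user icingaredis on +@all ~* >SUPERSTRONGPASS
```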
The last part is (mostly) what’s giving me pause: what is the appropriate ACL for such a user? Do I use a single user with `user icingaredis on +@all ~* >SUPERSTRONGPASS` for both the master and the external Icingaweb2 instance, or is a more limited user in addition to the master’s user the correct way, and if so, in what way should such a user be limited?
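To illustrate what I mean by a more limited user, something along these lines is what I had in mind; the key pattern and command categories are guesses on my part, not verified against what Icinga DB Web actually requires:

```conf
# Hypothetical restricted ACL user for the remote Icinga DB Web instance.
# Assumptions: Icinga DB keys are prefixed with "icinga:" and the web UI
# only ever reads from Redis, so read/pubsub/connection might suffice.
user icingaweb on ~icinga:* +@read +@pubsub +@connection >OTHERSTRONGPASS
```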
This use case (satellite with local Icingaweb2) will be coming up a good bit more often for me now, so getting it right the first time around would be very much preferred.
Versions on all machines involved are as follows:
- Icinga DB Web version (System - About): 1.2.2
- Icinga Web 2 version (System - About): 2.12.5
- Web browser: Chrome
- Icinga 2 version (`icinga2 --version`): 2.15.0
- Icinga DB version (`icingadb --version`): 1.4.0
- PHP version used (`php --version`): 8.2.28
- Server operating system and version: Debian 12
With a weekend to think about the problem, a clarification might be in order to simplify the problem description:
I’d like to run Icingaweb2 on a satellite that will mostly function as a glorified dashboard for this specific satellite zone, so that we keep monitoring capability on site if the connection to the master goes down (internet drop). The goal is a local dashboard and zone health overview that keeps working even when the internet is down, which also reduces the number of users on the master’s Icingaweb2.
Will such an installation need its own local Redis instance, or do I need to connect it to the Redis running on my master? And if the latter, what sort of user rights am I looking at giving such a user?
My current leaning would be to install Redis on the satellite as well and just connect Icingaweb2 to the local instance, but I have no real clue whether this will cause trouble down the line.
Another day, another round of testing, and lo and behold, I seem to have figured out how to achieve what I wanted. Whether this is best practice and works without breaking something down the line, however, I can’t predict.
What I did now was:
- set up a satellite and all services / hosts in Icinga Director
- install IcingaDB, IcingaDB-Redis, MariaDB and Icingaweb2 on the satellite
- configure Icingaweb2 to have everything local (database, auth, Redis) except the API connection, which points to the master (see the sketch after this list)
- verify the zone is set up correctly, and don’t miss a zone template that doesn’t actually have the zone configured (don’t ask…)
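In case it helps, the resulting layout looks roughly like this; the file contents are sketches with example names and credentials (and the Redis section names are how I remember them from the Icinga DB Web docs), not copies of my actual setup:

```ini
; /etc/icingaweb2/resources.ini -- both databases stay local on the satellite
[icingaweb2_db]
type     = db
db       = mysql
host     = localhost
dbname   = icingaweb2
username = icingaweb2
password = ...

[icingadb_db]
type     = db
db       = mysql
host     = localhost
dbname   = icingadb
username = icingadb
password = ...

; /etc/icingaweb2/modules/icingadb/config.ini -- Redis stays local too
; (the Icinga packages run it on port 6380 by default)
[redis1]
host = localhost
port = 6380

; /etc/icingaweb2/modules/icingadb/commandtransports.ini -- the one remote
; piece: commands (acknowledgements, downtimes, ...) go to the master's API
[master]
transport = api
host      = master.example.com
port      = 5665
username  = icingaweb2
password  = ...
```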
This seems to work as I want it to so far: the satellite is scheduling checks and acts as the check source for the zone, Icingaweb2 shows only the zone it is in, and the general configuration is still pushed by the master via Director.
While this technically does not answer my initial question about what sort of user rights / ACL a pure Icingaweb2 user would need when Redis is not running on the same host, it does solve the overarching problem for me, and I’m documenting my findings here so they might help someone else running into the same problem.
Bonus fact: if you run a small satellite on a Raspberry Pi 5, IcingaDB-Redis will NOT run with the normal setup due to page size differences. You will have to set `kernel=kernel8.img` somewhere near the beginning of /boot/firmware/config.txt and reboot to get the needed page size of 4096 instead of the 16k the regular Pi 5 kernel uses.
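For reference, the line in question; after the reboot, `getconf PAGESIZE` should report 4096 (as far as I know, the line has to sit above any board-specific `[sections]` in the file):

```ini
# /boot/firmware/config.txt -- force the 4 KiB page size kernel on the Pi 5
# so that icingadb-redis starts
kernel=kernel8.img
```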