How to store passwords/credentials?

Hello community!
We run our Icinga cluster with 3 zones (master zone, satellite zone 1 and satellite zone 2) and many Windows and Linux agents. For the configuration we use the Icinga Director and, in addition, the zone “director-global”, which by default contains all configured check commands, service templates, service sets, …
This way we can take advantage of the fact that the configuration is available on every single Icinga node and also on every Icinga agent.

But this can also be a problem. For example, if you store a password as an argument to one of your check commands/service templates, the password becomes available on every node, even on nodes that don’t need that command/service template.
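To make the problem concrete, here is a sketch (the command name and credential are made up for illustration) of what a password baked into a global-zone check command might look like in the rendered config:

```
// Everything in director-global is synced to every endpoint,
// so this password ends up in plain text on all agents.
object CheckCommand "check_app_login" {
  command = [ PluginDir + "/check_app" ]
  arguments = {
    "--user" = "monitoring"
    "--password" = "S3cret!"
  }
}
```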

I wanted to ask you how you handle such situations?
In my case, I wanted to set up a command/service for sat zone 1 and sat zone 2, but not for all agents as they should not receive the password.
So I created a service template for Sat Zone 1 and another service template for Sat Zone 2 and stored the password for the check command in them. Unfortunately, it is not possible to store a service template for multiple zones…
The next step was to create two rules for services (one for Sat Zone 1 and one for Sat Zone 2) and apply them to the correct hosts in each zone.
So the configs and passwords are stored in /var/lib/icinga2/api/zones/sat-zone1/director … and our agents don’t receive them.
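Rendered roughly, this per-zone workaround looks like the following (names are examples, not our real config). Because the template sits in the satellite zone rather than director-global, only that zone’s endpoints receive the password:

```
// Deployed into sat-zone1 only, not into director-global.
template Service "app-check-sat1" {
  check_command = "check_app_login"
  vars.app_password = "S3cret!"   // stays inside sat-zone1
}

apply Service "app-login" {
  import "app-check-sat1"
  assign where host.zone == "sat-zone1"
}
```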

I find this configuration very cumbersome. How do you store your passwords for commands/services so that not every Icinga node gets them?


We store passwords only at host objects (using Director data fields).
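For illustration (host name and variable name are invented), the rendered host object carries the credential from the Director data field, while the check command only references it via a macro. The literal password then lives in the host object, which is deployed only to its own zone:

```
object Host "app-server-01" {
  import "generic-host"
  address = "10.0.0.5"
  vars.app_password = "S3cret!"   // filled via a Director data field
}

object CheckCommand "check_app_login" {
  command = [ PluginDir + "/check_app" ]
  arguments = {
    // only the macro is global; the value stays with the host
    "--password" = "$host.vars.app_password$"
  }
}
```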


We do the same (albeit it’s just the SNMP RO community string). In our case we are using the Director plugin to run an SQL query against our Network Management DB (much like a CMDB) that holds all of the device info, such as SNMP credentials, make and model.

If you have users (such as an entry-level help desk) who don’t need to see the password, you may want to restrict what they can see via protected custom variables (under Configuration > Modules > Monitoring > Security).

Another way:
I have a few check plugins that I wrote myself that perform passive checking. For those I run the script as root via cron, and it reads the password from a file it opens; in this case, however, the script only ever runs on one node.

Of course this only works if you also write your own plugin. If it runs on multiple nodes, you’ll need some sort of config management tool to push the password file out with the right permissions.
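A minimal sketch of the password-file approach described above (the path layout and helper name are my own, not from the thread): the script refuses to use the file if its permissions would expose the secret to other users.

```python
import os
import stat

def read_secret(path: str) -> str:
    """Return the credential stored in `path`, refusing to use the file
    if its permissions allow group or world access."""
    st = os.stat(path)
    if st.st_mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(f"{path} must not be group/world accessible")
    with open(path) as f:
        return f.read().strip()
```

Combined with `chmod 600` and root ownership, this keeps the credential out of the Icinga configuration entirely.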

I have also started to shift to using Consul (which our environment already has set up for other applications we run). I can then use a Consul client (in Python, for instance) to grab a username/password from a KV store.
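As a hedged sketch of that idea using only the standard library: Consul’s HTTP KV API (`GET /v1/kv/<key>`) returns a JSON array whose `Value` field is base64-encoded. The key path and agent address below are assumptions for illustration.

```python
import base64
import json
from urllib.request import urlopen

def decode_kv(payload: str) -> str:
    """Decode the base64-encoded 'Value' field of a Consul KV response."""
    entry = json.loads(payload)[0]
    return base64.b64decode(entry["Value"]).decode()

def fetch_credential(key: str) -> str:
    # Assumes a local Consul agent on the default port.
    with urlopen(f"http://127.0.0.1:8500/v1/kv/{key}") as resp:
        return decode_kv(resp.read().decode())
```

The check plugin then calls `fetch_credential("icinga/app_password")` at runtime, so the secret never appears in the deployed Icinga config.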

Okay, so storing the password in the host object would solve the security problem, because agents do not receive the hosts.conf file after deployment!?
I will think about it. It is more complicated because you always have to check that new hosts get the password applied.
If I store passwords in services/commands, I only have to maintain one service, but the passwords would be in plain text on all agents if configured in the director-global zone…

When I write my own checks, I make sure that no passwords are passed as arguments. But when I use standard plugins like check_snmp, I have to store the password in the Icinga configuration.

You could use one or more dedicated host template(s) containing the according data fields and credentials, and import them into your host objects. It would be even easier if you can “reuse” an already existing host template.
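Rendered, this could look something like the following (template and host names are hypothetical); new hosts only need to import the one credentials template:

```
// Dedicated credentials template, imported wherever needed.
template Host "app-credentials" {
  vars.app_user = "monitoring"
  vars.app_password = "S3cret!"
}

object Host "app-server-01" {
  import "app-credentials"
  address = "10.0.0.5"
}
```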