I am trying to write a playbook that runs on the Icinga2 master. It should generate the .crt and .key files for each of my client nodes and sign the certs as well. I had a question on how to use the icinga2_host module for this. Right now I have:
But I am receiving the error "missing required arguments: name, ip".
So for name would I put the hostname of the node I am targeting, and for ip the IP of the node I am targeting?
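In other words — and this is just my guess at the syntax, with the URL and credentials below being placeholders — would it look something like this?

```yaml
- name: Create the host object on the master (placeholder values)
  community.general.icinga2_host:
    url: "https://icinga-master.example.com:5665"   # placeholder API endpoint
    url_username: "apiuser"                          # placeholder credentials
    url_password: "apipassword"
    name: "{{ inventory_hostname }}"                 # the targeted node's hostname?
    ip: "{{ ansible_default_ipv4.address }}"         # the targeted node's IP?
    state: present
```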
Hello again. Thanks again for helping me with my previous issue, and for your reply to this one. Your playbook will definitely come in handy. I had a couple of questions if you don't mind. I was originally using shell as well, but changed it to local_action, because won't shell run the action on the target node? I was trying to generate and sign all the tickets on the master node and then transfer the files over to the target node. Also, my version of Icinga is a little older, so the ca list and ca sign commands are not available.
Yes, shell will run on the target node unless you use delegate_to as in the example above. That way you can run the playbook from your laptop or Ansible server and delegate (hand the task) to a different server. In the example above:
delegate_to: "{{ ic_parenthost }}"
is our satellite node that will sign the cert for the agent.
As an example: it will run all the tasks on the new agent, including generating the new certs and connecting to the satellite, before we sign/accept the request. We then connect to the satellite through the delegate_to option and sign the request from the new agent. This way you don't need to copy the certificates over, and it lets us sign using the built-in Icinga tools.
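Roughly, the signing part of that flow looks like this on Icinga versions that have the ca commands. This is only a sketch: ic_parenthost is the satellite as above, and agent_fingerprint is an illustrative variable you would fill in from the agent's pending request, not a built-in.

```yaml
- name: List pending certificate requests on the satellite
  ansible.builtin.command: icinga2 ca list
  delegate_to: "{{ ic_parenthost }}"
  register: ca_pending
  changed_when: false

- name: Sign the agent's request (agent_fingerprint is illustrative)
  ansible.builtin.command: icinga2 ca sign {{ agent_fingerprint }}
  delegate_to: "{{ ic_parenthost }}"
```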
I do recommend upgrading Icinga though; a lot of bugs have been fixed.
Hey, just marked it solved. So I can delegate to my master node as well and everything should work. I'm assuming you set a variable at the beginning of the playbook for 'ic_parenthost'? Did you just insert the satellite's IP/hostname? Thanks again, this is what I was looking for!
Depending on how you are using things: if you have a role, you need to put this in two spots:
role/defaults/main.yml
and/or in group_vars / host_vars.
That way each host can have its own master or satellite to connect to, depending on how your cluster is set up.
You could also put this at the beginning of the playbook, but things will get complicated quickly.
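For example, the variable itself is just a one-liner wherever you define it (the role name and hostname here are placeholders for your own setup):

```yaml
# roles/icinga2_agent/defaults/main.yml, or host_vars/<agent>.yml
ic_parenthost: satellite01.example.com
```

A value in host_vars wins over the role default, so individual agents can point at a different satellite.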
So the best approach is to create a role; you will then get this file structure:
drwxr-xr-x 15 w staff 480B Nov 18 11:01 .git
-rw-r--r-- 1 w staff 3.6K Jul 7 11:36 defaults
drwxr-xr-x 6 w staff 192B May 18 2020 files
drwxr-xr-x 3 w staff 96B Apr 28 2020 handlers
drwxr-xr-x 3 w staff 96B May 11 2020 meta
drwxr-xr-x 8 w staff 256B Aug 17 18:57 tasks
drwxr-xr-x 37 w staff 1.2K Nov 16 16:49 templates
If you then take a deeper look into tasks:
-rw-r--r-- 1 w staff 454B Aug 13 12:14 certs.yml
-rw-r--r-- 1 w staff 2.7K Nov 16 16:47 config.yml
-rw-r--r-- 1 w staff 1.4K May 14 2020 ec2.yml
-rw-r--r-- 1 w staff 6.7K Aug 13 10:56 install.yml
-rw-r--r-- 1 w staff 341B Jul 9 11:46 main.yml
-rw-r--r-- 1 w staff 4.6K Jul 17 13:53 win_install.yml
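main.yml mostly just ties the other task files together; a simplified sketch of how it can do that (the when conditions are illustrative, not my exact file):

```yaml
# tasks/main.yml (simplified sketch)
- ansible.builtin.include_tasks: win_install.yml
  when: ansible_os_family == "Windows"

- ansible.builtin.include_tasks: install.yml
  when: ansible_os_family != "Windows"

- ansible.builtin.include_tasks: certs.yml
- ansible.builtin.include_tasks: config.yml
```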
Ok, I understand. Thanks again for all your help. I'll be sure to check out Icinga's playbooks on GitHub as well. You're right: if I have any Ansible-related questions, I'll be sure to contact you, just not in this forum.