Agent machine won't connect to master

I'm attempting to set up a simple master-to-agent setup using Linux machines. The icinga2 daemon -C command on both machines works without outputting any errors, and on the master it shows that there are 2 hosts/endpoints/zones, which is correct. The features enabled on the client are "api checker mainlog debuglog notification". On the master the features are "api command debuglog mainlog ido-mysql notification". In the Icinga Web 2 interface the agent machine shows up, but it stays in a permanent DOWN state.

There are two messages from the debug logs which I thought I should mention as they could be relevant. In the master's debug log I see "debug/ApiListener: Not connecting to endpoint 'clienthostname.com' because the host/port attributes are missing."

The second message is in the client's debug log: "debug/ApiListener: Not connecting to zone 'master' because it's not in the same zone, a parent or a child zone."

The machines can ping each other and networking is working correctly. The icinga2 service on both machines is listening on port 5665.

Hello @hza2331,

According to the debug logs it looks like you didn't configure it properly. Can you post the content of /etc/icinga2/zones.conf from the master and the agent? You probably forgot to set the agent's parent zone to master.
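
For context, "setting the parent zone" means that the agent's own zone object declares the master zone as its parent, roughly like this (the names below are placeholders based on your hostnames):

object Zone "clienthostname.com" {
    endpoints = [ "clienthostname.com" ]
    parent = "master"    // the agent's zone must name the master zone as its parent
}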

Best,
Yonas

Hello. Thanks for your reply. I am using an older version of Icinga, so the way I have done my configuration is still valid; I know that with newer versions my config would no longer be valid.

Master Config starts here
The master's zones.conf is empty. Instead I have defined the hosts/endpoints/zones under the repository.d directory.

/etc/icinga2/repository.d/hosts/clienthostname.conf

object Host "clienthostname.com" {
    import "satellite-host"
    check_command = "cluster-zone"
    vars.os = "com"
}

/etc/icinga2/repository.d/endpoints/clienthostname.com.conf

object Endpoint "clienthostname.com" {
}

/etc/icinga2/repository.d/zones/clienthostname.conf

object Zone "clienthostname.com" {
    endpoints = [ "clienthostname.com" ]
    parent = "masterhostname.com"
}

Agent config starts here

object Endpoint "masterhostname.com" {
    host = "masterhostname.com"
    port = "5665"
}

object Zone "master" {
    endpoints = [ "masterhostname.com" ]
}

Sorry, but this way I can't see at all which configs are on the master and which are on the agent. This is a bit confusing; please structure the configs between the master and the agent like this:
Master:
Master configs here. And…

Agent:
Agent configs here.

But if I understand it correctly, you are still missing the endpoint and zone definitions for the client itself in the client's zones.conf.

Best,
Yonas

Sorry about that. I have put the master config and agent config in bold. Let me know if I should edit it another way for better readability. Essentially all the config is done on the master aside from the client's zones.conf. Regarding the client zone/endpoint definitions, I'm a little confused. What else would I need to add to the client's zones.conf to make this a valid config?

The readability looks better now, thanks. However, I am a bit confused why you are doing the client endpoint and zone definitions on the master. To test my suggestion, please comment out all endpoint and zone definitions on the master and add the following configs to zones.conf.

Master:
Please comment out the Endpoint and Zone definitions in these files: /etc/icinga2/repository.d/endpoints/clienthostname.com.conf and /etc/icinga2/repository.d/zones/clienthostname.conf.

Then add the following config to the master's /etc/icinga2/zones.conf.

object Endpoint "masterhostname.com" {
}

object Endpoint "clienthostname.com" {
}

object Zone "master" {
    endpoints = [ "masterhostname.com"]
}

object Zone "clienthostname.com" {
    endpoints = [ "clienthostname.com" ]
    parent = "master"
}

Agent:
Also comment out the master endpoint and zone definitions in zones.conf on the client and add the following configuration.

object Endpoint "masterhostname.com" {
    host = "masterhostname.com"
    port = "5665"
}

object Zone "master" {
    endpoints = [ "masterhostname.com"]
}

object Endpoint "clienthostname.com" {
}

object Zone "clienthostname.com" {
    endpoints = [ "clienthostname.com" ]
    parent = "master"
}

Please make sure that neither object Endpoint ... nor object Zone ... is defined anywhere except in zones.conf, then restart the icinga2 daemon.
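
On both machines you can then validate the config and restart the daemon, roughly like this (assuming a systemd-based system; adjust the service command to your init system):

# validate the configuration first
icinga2 daemon -C

# then restart the daemon so the new zones.conf is loaded
systemctl restart icinga2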

Hope this helps!
Yonas

I have applied your changes to my setup. The agent machine still shows DOWN in the Icinga Web interface, but I think the changes you suggested have me on the right track, so once again thank you for your help. What's interesting is that I had defined a ping check in /etc/icinga2/repository.d/hosts/client.hostname.com/ping4.conf. Before I applied your changes this service was stuck in "PENDING" in Icinga Web, but now it says "OK" even though the actual host is showing DOWN.

Master debug.log and icinga.log messages

I'm still getting the message in the debug log "Not connecting to Endpoint 'client.hostname.com' because the host/port attributes are missing.", but a little later in the same log there is also a message saying "notice/ApiListener: Connected endpoints: client.hostname.com (1)", so I think the machines are at least connected now. In the icinga.log on the master I also see the message "Ignoring config update from 'client.hostname.com' for object 'client.hostname.com!load!client.hostname.comf of type 'Downtime'. api does not accept config". But in the client's api.conf I have specified 'accept_config = true', and from this log it seems like Icinga is sending config updates to the client machine.

Agent icinga.log and debug.log messages

The logs also seem to show that the machine is connected to the master. I'm seeing messages like "Not connecting to Endpoint 'master.hostname.com' because we're already connected to it." and "Received 'event::Heartbeat' message from 'master.hostname.com'". So it looks like the machines are communicating now; I just need to figure out why the agent host shows DOWN in the Icinga Web interface.

As for why the host keeps showing a DOWN state in Icinga Web 2, I unfortunately can't tell you much because I don't know the configs yet, but if you share them here clearly I might be able to help you.

1. The reason you are getting these log messages on the master is that the connection is established from the agent and not from the master, i.e. the master does not actively try to connect to the agent but only accepts incoming connections. If you want the connection to be initiated from the master, you can set the host and port attributes of the agent in the master's zones.conf within object Endpoint "clienthostname.com" { ... } and remove the host and port attributes of the master from the agent's zones.conf within object Endpoint "masterhostname.com" { ... }.
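
That would look roughly like this; only one side should carry the host/port attributes, so that only one side actively dials out:

Master's zones.conf:

object Endpoint "clienthostname.com" {
    host = "clienthostname.com"    // master now connects out to the agent
    port = "5665"
}

Agent's zones.conf:

object Endpoint "masterhostname.com" {
    // no host/port here, so the agent only accepts the incoming connection
}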

2. The reason the master ignores the config updates from the agent is that you didn't set accept_config in the master's api.conf, which is totally correct. After all, you don't want the master to be updated by the agent. The agent, on the other hand, accepts all config updates from the master as you have confirmed, and that is what you want.
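
For reference, the ApiListener in the agent's api.conf would then contain something like the following; other attributes, such as the certificate paths that older versions keep in this file, are omitted here:

object ApiListener "api" {
    accept_config = true      // agent accepts config pushed from its parent zone (the master)
    accept_commands = true    // optional, but typically wanted so the master can execute checks via the agent
}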

Hint:
These are Icinga2 basics, which you can and actually should read up on beforehand in the Distributed Monitoring and Technical Concepts chapters of the documentation, so you can understand for yourself what's going on.

Best,
Yonas

Yes, I can provide the configs for you. Just let me know what exactly you need.
OK, so those log messages aren't anything to be concerned about. I have read over the Icinga docs, although I still don't have a great understanding of them. I will read them again to get a better understanding. Thanks.

Also, I just wanted to add: in the Icinga Web interface, when I click on the agent that is down, go to the "Check Executions" section and click "Check now", for some reason the "Last check" still says it was checked almost two weeks ago. Just thought I'd mention it since it might be relevant to why the host is showing DOWN.

Ahh, OK, then please make sure that you don't have an object Host "clienthostname.com" definition anywhere on the master. Now add the following config to /etc/icinga2/zones.d/master/hosts.conf on the master and restart Icinga2. Then check in Icinga Web 2 whether it is still in a DOWN state.

object Host "clienthostname.com" {
  check_command = "hostalive"
  address = "clienthostname.com's IP-Adress"
}

Best,
Yonas

Hey Yonas, sorry for the delayed response. I edited out my other Host definition and created the directory and hosts.conf as you suggested, but the results are the same: the machine is DOWN and the last check result occurred two weeks ago. If I hover over the red button next to "Last check" it displays "Check result is late", which I guess is obvious at this point; I just thought I'd mention it. Do you think I should open a new issue for this, i.e. for why the ping check for the agent is working but the agent host itself is showing DOWN?

Hi @hza2331,

I don't really think that the issue is due to Icinga2 itself; it is most likely the configuration. Do you have only a master <-> agent setup, or also an HA setup, i.e. two masters in the same zone? Can you please share a screenshot of /icingaweb2/monitoring/health/info?

So far I just have one master and one agent. Eventually I would like to add more agents to the setup. Under the /etc/icingaweb2 directory there isn't a 'monitoring' directory. There are enabledModules and Modules directories under icingaweb2 which have a monitoring directory, but there isn't a /health/info. The monitoring directory just includes 'backends.ini', 'commandtransports.ini' and 'config.ini'.

I have solved this issue! I thought I had enabled the 'checker' feature, but it was disabled on my master. I enabled it and now the host is showing up. Thank you so much for your guidance, Yonas. You have helped me a lot in working through this issue. Enjoy your weekend.
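
For anyone who runs into the same thing later, the fix boils down to roughly this on the master (assuming systemd):

# see which features are enabled/disabled
icinga2 feature list

# enable the checker feature so the master actually executes checks
icinga2 feature enable checker

# restart so the change takes effect
systemctl restart icinga2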