Disk check not showing the right value or result for my server

I suspect you are measuring the disk usage of the Icinga server and not of the
object server@domain.com.

You’ll probably find that this applies to all your other service checks too,
unless you have command_endpoint defined for those, and you just missed it out
of this one for some reason…

You need to have “command_endpoint = host.name” in order to run the checks on
the host itself.
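For example, something along these lines (a minimal sketch; the service body and the vars.agent_endpoint custom variable in the assign rule are assumptions you would adapt to your own host objects):

apply Service "disk" {
  import "generic-service"
  check_command = "disk"

  // execute the check on the monitored host's own agent, not on the master
  command_endpoint = host.name

  // only assign to hosts that are marked as agent endpoints
  assign where host.vars.agent_endpoint
}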

Antony.

Hello again, thanks for replying.
So I should put this in my service.conf:

apply Service "disk" {
  check_command = "disk"

  // Specify the remote agent as command execution endpoint, fetch the host custom variable
  command_endpoint = host.name.domain.com

  // Only assign where a host is marked as agent endpoint
  assign where host.name == "name.domain.com"
}

Is that right? Because I tested it and it does not provide the right value.

The command endpoint method requires the icinga2 agent to be installed on the system you want to monitor.

If you don’t want to install the agent you can “fall back” to methods like SNMP, check_by_ssh, or NRPE/NSClient++.
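A fallback with by_ssh could look roughly like this (just a sketch; the plugin path, thresholds and assign rule are assumptions you would need to adapt):

apply Service "disk-by-ssh" {
  import "generic-service"
  check_command = "by_ssh"

  // run check_disk on the remote host over SSH instead of via the agent
  vars.by_ssh_command = "/usr/lib/nagios/plugins/check_disk -w 20% -c 10%"
  vars.by_ssh_logname = "icinga"

  assign where host.vars.os == "Linux"
}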

Is “host.name.domain.com” the value of the “name” in the host definition?

I somehow doubt it.

Did you try “command_endpoint = host.name”?

Maybe give us some quoted examples from your config files…

Regards,

Antony.

Hello, here is my config:

object Host "pro.test.com" {
  import "generic-host"
  display_name = "ICEWARP"
  address = "x.x.x.x"
  check_command = "hostalive"
  vars.agent_endpoint = "pro.test.com"
  vars.notification["mail"] = {
    groups = [ "icingaadmins" ]
    users = [ "icingaadmin" ]
  }
  enable_notifications = true
  vars.notification_type = "email"
}

For my service:

apply Service "disk" {
  check_command = "disk"
  command_endpoint = host.vars.pro.test.com
  assign where host.vars.pro.test.com
}

The agent is already installed; my host.conf contains the same host definition as shown above.

If the agent is already installed, the command endpoint has to be set to exactly the name of the endpoint.
Check the zones.conf and constants.conf files on the agent to see the name of the endpoint.
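On the agent that typically looks something like this (the hostname here is just the example from this thread; yours is whatever the node wizard wrote into those files):

# File: /etc/icinga2/constants.conf (on the agent)
const NodeName = "pro.test.com"

# File: /etc/icinga2/zones.conf (on the agent)
object Endpoint "pro.test.com" {
}

The service then needs command_endpoint = "pro.test.com", or simply command_endpoint = host.name if the host object on the master carries exactly that name.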

Also, to improve readability, please use backticks to enclose code snippets.
Here is a FAQ/HowTo on posting :slight_smile:

Further discussion about this problem here:

Hello everyone,
I have posted an issue about the disk check for my remote hosts (with the agent installed), but the result I get is only for my master.
For example, I monitor procs and disk, but the results for my remote servers are the same as for my master server (I'm using a master -> client architecture).
Here is an example of how I monitor procs:
host.conf (example of a node):
object Host "ssl3.domaine.com" {
  import "generic-host"
  address = "x.x.x.x"
  check_command = "dummy"
  vars.notification["mail"] = {
    groups = [ "icingaadmins" ]
    users = [ "icingaadmin" ]
  }
  enable_notifications = true
  vars.notification_type = "email"
}

My service.conf

apply Service "procs" {
  display_name = "PROCS"
  import "generic-service"
  check_command = "procs"
  vars.procs_warning = "800"
  vars.procs_critical = "900"
  assign where host.name == "ssl3.domaine.com"
  vars.notification["mail"] = {
    groups = [ "icingaadmins" ]
    users = [ "icingaadmin" ]
  }
}

It happens with the disk and memory checks all the time.
What should I do, please?

PS: I'm NOT disabling the inclusion of conf.d.
Thank you

Have you defined “command_endpoint = host.name” in your service check?

I’ve suggested this previously but I still do not see it in your configuration.

Antony.

Hi.

Your question seems quite similar to this one.
Is there something wrong with the answers?

As already said:
It might be enough to just add

command_endpoint = host.name

to your service definition. It says “execute this check on the host object the service is assigned to”. So a host which gets this service assigned executes the check locally.

Example
apply Service "your_awesome_service" {
  ...
  command_endpoint = host.name
  ...
}

You also might have a look here.

Greetings.

Thank you so much.
But then I should re-configure with the node wizard and set “disable the inclusion of conf.d” to yes (in my case it is no)? Because when I did that (and you are right, I added the command), this output came out:

Remote Icinga instance ‘ssl3.domaine.com’ is not connected to ‘icingamaster.domaine.com’

I think I should set the inclusion of conf.d to yes?
Thank you so much

Hello Antony,
Thank you for replying.
Yes, I forgot to add it, but now, after moving host.conf to /etc/icinga2/zones.d/master (service.conf as well),
I can’t get the disk value because of this error:
Remote agent is not connected to the master
Should I answer YES to “disable the inclusion of conf.d”?

Did you make any of the changes suggested in your other thread about the same problem?

Also quoting myself again:

Hey @log1c,
I saw that you closed the other topic, but I think we should just merge the two instead, as it is basically one discussion :slight_smile:
Makes it a lot easier to follow, when you don’t have to switch tabs all the time :slight_smile:

Hi again.

I spun up a new Icinga master with a single host “ssl3.domaine.com” (don’t worry, just as an example, adapted from your config).
Here are the relevant configuration parts to

  • check disk on the agent, not on the master and
  • connect agent and master.

The “address” (see below in the example) has to be changed.

I assume:

  • that you still have your hosts.conf in /etc/icinga2/zones.d/master/
  • you still include the other files besides zones.conf under /etc/icinga2/conf.d/ (check that include_recursive "conf.d" is not commented out in /etc/icinga2/icinga2.conf; see the excerpt after this list)
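The line in question looks like this in the default icinga2.conf:

# File: /etc/icinga2/icinga2.conf (excerpt)
include_recursive "conf.d"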

Here are the relevant parts of the configs, with paths. Omitted parts are represented by dots (…)

The part on the icinga2 master
# File: /etc/icinga2/zones.d/master/hosts.conf
...
object Endpoint "ssl3.domaine.com" {
}

object Zone "ssl3.domaine.com" {
  endpoints = [ "ssl3.domaine.com", ]
  parent = "master"
}

object Host "ssl3.domaine.com" {

  ...
  import "generic-host"

  address = "<change to your ip or fqdn here, without braces>"

  vars.disks["disk /"] = {
    disk_partitions = "/"
  }
 ...
}

# ------------------

# File: /etc/icinga2/conf.d/services.conf
...
apply Service for (disk => config in host.vars.disks) {
  import "generic-service"

  command_endpoint = host.name
  check_command = "disk"

  vars += config
}
...
The part on the agent
# This config-part will be generated by the "icinga2 node wizard"

# File: /etc/icinga2/zones.conf
...
object Endpoint "ssl3.domaine.com" {
}

object Zone "ssl3.domaine.com" {
        endpoints = [ "ssl3.domaine.com" ]
        parent = "master"
}
...

The name of the agent node has to match the name on the master-node to connect to it successfully.

Watch the “Check Source” field in icingaweb2 (see the check_source screenshot).

Greetings

You need to put every zone and endpoint object in /etc/icinga2/zones.conf only if you are using v2.11.
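On the master that would mean defining something like this directly in /etc/icinga2/zones.conf rather than in zones.d (hostnames are just the examples from this thread):

# File: /etc/icinga2/zones.conf (on the master, Icinga 2.11+)
object Endpoint "ssl3.domaine.com" {
}

object Zone "ssl3.domaine.com" {
  endpoints = [ "ssl3.domaine.com" ]
  parent = "master"
}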

Tbh that was my original plan, but I did not find the option to do that :blush:
