Icinga certs questions

Hello Community and Devs,
I have several questions about what's possible with the Icinga CA.

  1. Is it possible (now or in the future) to use an external CA (a company one, for example) to sign endpoint CSRs for Icinga, instead of using the Icinga-generated one (see the OpenSSL sketch after this list)?
    As far as I understood from this related post, it seems possible but may introduce unexpected behaviors which could be hard to debug and solve.
    Own CA for Icinga Cluster/API communication?

  2. Is it possible (now or in the future) to have multiple SANs (subject alternative names; endpoint FQDNs for Icinga) in an endpoint certificate?
    The idea is to have both the endpoint FQDN and another FQDN pointing to a virtual address, to ensure high availability.
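
What I have in mind would be something like this with plain OpenSSL, as a rough sketch: it assumes OpenSSL 1.1.1+ (for -addext), the company CA files and cluster.example.com are hypothetical placeholders, and /var/lib/icinga2/certs is the Icinga 2 default certificate directory:

# 1. Key and CSR for the endpoint, with the cluster FQDN as an extra SAN
#    (file names and FQDNs are hypothetical):
openssl req -new -newkey rsa:4096 -nodes \
  -keyout icinga2-agent1.key -out icinga2-agent1.csr \
  -subj "/CN=icinga2-agent1" \
  -addext "subjectAltName=DNS:icinga2-agent1,DNS:cluster.example.com"

# 2. Sign the CSR with the company CA; 'openssl x509 -req' does not copy
#    CSR extensions by default, so the SANs are passed again via -extfile:
echo "subjectAltName=DNS:icinga2-agent1,DNS:cluster.example.com" > san.ext
openssl x509 -req -in icinga2-agent1.csr \
  -CA company-ca.crt -CAkey company-ca.key -CAcreateserial \
  -days 825 -sha256 -extfile san.ext \
  -out icinga2-agent1.crt

# 3. Place the files where Icinga 2 looks for them by default:
cp icinga2-agent1.key icinga2-agent1.crt /var/lib/icinga2/certs/
cp company-ca.crt /var/lib/icinga2/certs/ca.crt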

Thanks in advance,

Hello there,
Is there any chance this could come true in the future?

Hello @sysres-dev!

  1. You understood the mentioned post correctly. You can use your own CA – at your own risk.
  2. Please describe in more detail what you'd like to do and what for.

Best
AK

I agree. I'd like to see it just support standard OpenSSL and its interactions with CAs and certs. Or simply use the OS trusted CA and certificate stores like basically any application does. Linux and Windows both have them, and browsers too; that's the way of the world nowadays, and only getting more so.

I would like to answer this question with regard to our use case:
We have some HA clusters with two endpoints. Each endpoint has an Icinga agent certificate with a common name equal to the endpoint name.
There is also a cluster configured in Icinga (which also has an endpoint of its own).
The IP of the cluster is mounted as a virtual IP on one of the node endpoints. When the cluster endpoint is contacted, a node endpoint naturally answers with its own endpoint certificate, which generates a log entry similar to: warning/ApiListener: Unexpected certificate common name while connecting to endpoint 'cluster_endpoint': got 'node_endpoint'.
Also, because of the mismatched common name, the check against the cluster endpoint fails to receive an appropriate response.

My idea is that if both node endpoint certificates contained the common name (which is the node endpoint name) and, as an additional SAN (e.g. DNS.1), the cluster endpoint name, the check could successfully connect to the cluster and receive a valid response.
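
Whether a certificate already carries such a SAN can be inspected with OpenSSL (the file name here is hypothetical; /var/lib/icinga2/certs is the Icinga 2 default certificate directory):

openssl x509 -in /var/lib/icinga2/certs/node_endpoint.crt -noout -text \
  | grep -A1 "Subject Alternative Name"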

I hope this is as detailed as needed.

That is OK for Icinga Web 2, but not for Icinga 2, since it manages the cluster on its own.

Your agent config should look something like this if there are no satellites involved and the connection is established by the agent:

object Endpoint "icinga2-agent1" {
  // no host attribute: the agent establishes the connection itself
}

object Zone "icinga2-agent1" {
  endpoints = [ "icinga2-agent1" ]
  parent = "master"
}

// both master endpoints, so the agent can reach either of them
object Endpoint "icinga2-master1.localdomain" {
  host = "192.168.56.101"
}

object Endpoint "icinga2-master2.localdomain" {
  host = "192.168.56.102"
}

object Zone "master" {
  endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
}

The Icinga agent will connect to both endpoints; if one connection is gone, the other one will handle the check updates.

Thanks Nick, but I am not talking about an Icinga (master) cluster.
I am talking about, for example, an HA webserver cluster.
Both webserver nodes have a unique hostname / Icinga agent endpoint (e.g. ubuntu-apache-node1.local and ubuntu-apache-node2.local) and a unique physical IP.
The webserver cluster also has a unique endpoint name (ubuntu-apache-cluster.local) and a virtual IP that resolves to ubuntu-apache-cluster.local and floats between both nodes (or simply stays on the master while the other node is a slave).

Hope this is understandable.

Your floating IP can float between the two nodes; your Icinga 2 instances should still be reachable via the individual IPs that are not floating.

warning/ApiListener: Unexpected certificate common name while connecting to endpoint 'cluster_endpoint': got 'node_endpoint'

This means your config and/or IP addresses are not correct, which will get you into trouble sooner or later.

This part I don’t understand. The way we have done it is to define a Host for the cluster IP, but no Endpoint. On this Host, we have an HTTP Service (port 443), checked from the Satellite.

As Moreamazingnick stated, we do have a Host + Endpoint (agent) for each cluster member (with their individual name and IP), on which we have an HTTP Service (port 8085, for instance) checked by the agent locally on the server (to avoid opening flows on the network firewalls). A sketch of that layout follows below.
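
A minimal sketch of that layout, with hypothetical names and IPs:

// cluster VIP: a Host only, no Endpoint/agent – checked from the satellite
object Host "ubuntu-apache-cluster.local" {
  check_command = "hostalive"
  address = "192.168.56.200"        // hypothetical virtual IP
}

apply Service "http-vip" {
  check_command = "http"
  vars.http_port = 443
  assign where host.name == "ubuntu-apache-cluster.local"
}

// cluster member: Host + agent endpoint, HTTP checked locally via the agent
object Host "ubuntu-apache-node1.local" {
  check_command = "hostalive"
  address = "192.168.56.201"        // hypothetical physical IP
  vars.agent_endpoint = name        // endpoint name equals host name
}

apply Service "http-local" {
  check_command = "http"
  vars.http_port = 8085
  command_endpoint = host.vars.agent_endpoint
  assign where host.vars.agent_endpoint
}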

I know; this was just an example to clarify why checking the cluster won't work.

Yes, that is one way we execute some checks that only work on the cluster object.
Those are no problem because no Icinga agent is involved.

What I am trying to address are, for example, checks that would succeed on the cluster, succeed on the active master node, and fail on the inactive slave node, e.g. a check_disk for a cluster resource like /data/drbd, which is always mounted on the master node but never on the inactive one.
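
For reference, a rough sketch of the kind of check I mean (host.vars.drbd_cluster is a hypothetical custom variable used only for the assign rule):

apply Service "disk-drbd" {
  check_command = "disk"
  vars.disk_partitions = [ "/data/drbd" ]
  command_endpoint = host.vars.agent_endpoint
  assign where host.vars.drbd_cluster
}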

What would your approach be for such cases, besides for example rivad's approach with a dummy check combined with Icinga DSL?

From my point of view, an Icinga agent certificate with a node's endpoint name as CN and the cluster's endpoint name as a Subject Alternative Name would be a solution without much Icinga DSL wizardry.

You may want to read through this question: Monitoring Avaya Communication Management - Service Monitoring - Icinga Community

It starts with Avaya stuff, but later in the conversation, @rivad very kindly shared his "Icinga DSL wizardry", as you call it 🙂.

To my understanding and knowledge, there is no alternative with identical functionality.

Oh yes! 🙂
I know of his approach; I have already implemented it for tests as an "at least one service" command and as his "only one service" command, and I like it.
I was trying to explore different approaches, hence my answer in this certificate-related thread.
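
For completeness, my own rough sketch of that "at least one service" pattern – not rivad's exact code, and the names are hypothetical:

apply Service "disk-drbd-cluster" {
  check_command = "dummy"
  vars.dummy_state = function() {
    // OK if the disk check succeeds on at least one member node
    for (node in [ "ubuntu-apache-node1.local", "ubuntu-apache-node2.local" ]) {
      var svc = get_service(node, "disk-drbd")
      if (svc && svc.state == 0) {
        return 0
      }
    }
    return 2
  }
  vars.dummy_text = "derived from disk-drbd on the member nodes"
  assign where host.name == "ubuntu-apache-cluster.local"
}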