Hello Community and Devs,
I have several questions about what’s possible with the Icinga CA.
Is it possible (now or in the future) to use an external CA (a company one, for example) for Icinga to sign endpoint CSRs instead of using the Icinga-generated one?
As far as I understood this related post, it seems possible but may introduce unexpected behaviors that could be hard to debug and solve: Own CA for Icinga Cluster/API communication?
Is it possible (now or in the future) to have multiple SANs (subject alternative names; for Icinga, the endpoint FQDN) in an endpoint certificate?
The idea is to have both the endpoint FQDN and another FQDN pointing to a virtual address, to ensure high availability.
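For context, this is how the CN and SAN entries of an existing endpoint certificate can be inspected with plain OpenSSL (the path below is the default certificate directory on Linux since Icinga 2.8; adjust to your setup):

```
# Show the subject (CN) and SAN entries of the local endpoint certificate.
# /var/lib/icinga2/certs is the default cert directory (Icinga >= 2.8).
openssl x509 -in /var/lib/icinga2/certs/$(hostname -f).crt \
  -noout -subject -ext subjectAltName
```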
I agree. I’d like to see it simply support standard OpenSSL and its interactions with CAs and certificates, or just use the OS trusted CA and certificate stores like basically any other application. Linux and Windows both have them, and browsers too; it’s the way of the world nowadays, and only getting more so.
I would like to answer this question with regard to our use case:
We have some HA clusters with two endpoints. Each endpoint has an Icinga agent certificate whose common name equals the endpoint name.
There is also a cluster configured in Icinga (which also has an endpoint).
The cluster’s IP is mounted as a virtual IP on one of the node endpoints. When the cluster endpoint is contacted, a node endpoint naturally answers with its own endpoint certificate, which generates a log entry similar to warning/ApiListener: Unexpected certificate common name while connecting to endpoint 'cluster_endpoint': got 'node_endpoint'.
Also, because of the mismatched common name, the check on the cluster endpoint fails to receive an appropriate response.
My idea is that if both node endpoints’ certificates contained their own common name (the node endpoint name) and additionally the cluster endpoint name as a SAN (e.g. DNS.1), the check could successfully connect to the cluster and receive a valid response.
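A minimal sketch of what such a CSR could look like if it were created and signed outside of Icinga’s own PKI commands (the endpoint names are the placeholders from above; as far as I know Icinga offers no option to add extra SAN entries today, so this is purely illustrative):

```
# Hypothetical CSR: CN is the node endpoint, the SAN list additionally
# carries the cluster endpoint name (-addext requires OpenSSL >= 1.1.1).
openssl req -new -newkey rsa:4096 -nodes \
  -keyout node_endpoint.key -out node_endpoint.csr \
  -subj "/CN=node_endpoint" \
  -addext "subjectAltName=DNS:node_endpoint,DNS:cluster_endpoint"
```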
Thanks Nick, but I am not talking about an Icinga (master) cluster.
I am talking about, for example, an HA webserver cluster.
Both webserver nodes have a unique hostname / Icinga agent endpoint (e.g. ubuntu-apache-node1.local and ubuntu-apache-node2.local) and a unique physical IP.
The webserver cluster also has a unique endpoint name (ubuntu-apache-cluster.local) and a virtual IP that resolves to ubuntu-apache-cluster.local and floats between the two nodes (or simply sits on the master while the other node is a slave).
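To make that concrete, this is roughly how the three endpoints could be defined on the config master (a sketch; the parent zone name "master" is an assumption, adjust to your environment):

```
// Sketch of the endpoints described above.
object Endpoint "ubuntu-apache-node1.local" {
  host = "ubuntu-apache-node1.local"
}

object Endpoint "ubuntu-apache-node2.local" {
  host = "ubuntu-apache-node2.local"
}

object Endpoint "ubuntu-apache-cluster.local" {
  host = "ubuntu-apache-cluster.local"  // resolves to the floating virtual IP
}

object Zone "ubuntu-apache-node1.local" {
  endpoints = [ "ubuntu-apache-node1.local" ]
  parent = "master"  // assumed parent zone
}

object Zone "ubuntu-apache-node2.local" {
  endpoints = [ "ubuntu-apache-node2.local" ]
  parent = "master"
}

object Zone "ubuntu-apache-cluster.local" {
  endpoints = [ "ubuntu-apache-cluster.local" ]
  parent = "master"
}
```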
This part I don’t understand. The way we have done it is to define a Host for the cluster IP, but no Endpoint. On this Host, we have an HTTP Service (port 443), checked from the Satellite.
As Moreamazingnick stated, we do have a Host + Endpoint (agent) for each cluster member (with their individual name and IP), on which we have an HTTP Service (port 8085, for instance) checked by the agent locally on the server (to avoid opening flows on the network firewalls).
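A sketch of that pattern, reusing the thread’s example names (the ports and check commands are illustrative):

```
// Cluster Host with no Endpoint object: checked over the network.
object Host "ubuntu-apache-cluster.local" {
  address = "ubuntu-apache-cluster.local"  // resolves to the virtual IP
  check_command = "hostalive"
}

// HTTPS check executed from the satellite against the virtual IP.
object Service "https" {
  host_name = "ubuntu-apache-cluster.local"
  check_command = "http"
  vars.http_port = 443
  vars.http_ssl = true
}

// Per-member check run by the local agent, so no firewall flows are needed.
object Service "http-local" {
  host_name = "ubuntu-apache-node1.local"
  check_command = "http"
  vars.http_address = "127.0.0.1"
  vars.http_port = 8085
  command_endpoint = "ubuntu-apache-node1.local"
}
```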
I know; this was just an example to clarify why checking the cluster won’t work.
Yes, that is one way we execute some checks that only work on the cluster object.
Those are no problem because no Icinga agent is involved.
What I am trying to address are checks that would succeed on the cluster, succeed on the active master node, and fail on the inactive slave node, e.g. a check_disk for a cluster resource like /data/drbd, which is always mounted on the master node but never on the inactive one.
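To illustrate, this is the kind of service I have in mind, addressed to the cluster endpoint (names are from the earlier example). It only makes sense if the node certificates also carried the cluster endpoint name, which is exactly the SAN idea above; today it fails with the "Unexpected certificate common name" warning quoted earlier:

```
// Illustrative only: an agent check routed via the cluster endpoint,
// which the virtual IP maps to the currently active node.
object Service "disk-drbd" {
  host_name = "ubuntu-apache-cluster.local"
  check_command = "disk"
  vars.disk_partitions = [ "/data/drbd" ]
  command_endpoint = "ubuntu-apache-cluster.local"
}
```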
What is your approach for such cases? Besides, for example, rivad’s approach with a dummy check combined with Icinga DSL:
From my point of view, an Icinga agent certificate with a node’s endpoint name as CN and the cluster’s endpoint name as Subject Alternative Name would be such an approach, without much Icinga DSL wizardry.
Oh yes!
I know of his approach; I already implemented it for testing, both as an “at least one service” command and as his “only one service” command, and I like it (a rough sketch of the general pattern follows below).
I was trying to explore different approaches, hence my answer in this certificate-related thread.
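For reference, the general shape of that dummy-check pattern as I understand it; this is a sketch rather than rivad’s exact code, reusing the earlier example names and the hypothetical per-node "disk-drbd" service:

```
// "At least one service OK" across the cluster members, computed in the
// DSL via a dummy check. All names here are illustrative.
object Service "cluster-disk-drbd" {
  host_name = "ubuntu-apache-cluster.local"
  check_command = "dummy"
  vars.dummy_text = "at least one node reports the DRBD resource OK"
  vars.dummy_state = {{
    var members = [ "ubuntu-apache-node1.local", "ubuntu-apache-node2.local" ]
    for (m in members) {
      var s = get_service(m, "disk-drbd")
      if (s && s.state == ServiceOK) {
        return 0  // OK as soon as one member reports OK
      }
    }
    return 2  // CRITICAL if no member is OK
  }}
}
```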