dcz01
March 28, 2024, 2:20pm
1
Hello,
We’re using the latest version of Icinga 2 and Icinga Web 2.
We have a problem, or rather: we want to change the user that is used for the API call when acknowledging problems in Icinga Web 2.
Where can we configure that the logged-in user is used for the API call that acknowledges the problem?
Currently it is always our configured user or the default “Icinga2” user.
Greetings
dcz01
You can change the user in the command transports:
icingadb:
https://your-icinga/icingaweb2/icingadb/command-transport
or
monitoring/ido:
https://your-icinga/icingaweb2/monitoring/config
But this user is of course also used for a manual “check now”.
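For reference, a transport that talks to the Icinga 2 API is defined in the respective module’s commandtransports.ini and looks roughly like this (host, username and password are placeholders, not values from this thread):

; /etc/icingaweb2/modules/icingadb/commandtransports.ini
; (monitoring module: /etc/icingaweb2/modules/monitoring/commandtransports.ini)
[icinga2]
transport = "api"
host = "localhost"
port = "5665"
username = "icingaweb2"
password = "secret"

Whatever is set as username here is the user Icinga Web authenticates with for all commands sent over this transport.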
nilmerg
(Johannes Meyer)
March 28, 2024, 3:45pm
3
The API authentication only works with the user that is configured for it; this has nothing to do with who is responsible for an acknowledgement.
If an acknowledgement is made through Icinga Web, the logged-in user who performs the action is set as its author. This is also visible in Web when looking at the acknowledgement that was just made.
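To illustrate the distinction (credentials, filter and author below are made-up values): the transport user only authenticates the request, while the author is a separate parameter of the acknowledge-problem API action:

curl -k -s -S -u icingaweb2:secret \
  -H 'Accept: application/json' \
  -X POST 'https://localhost:5665/v1/actions/acknowledge-problem' \
  -d '{ "type": "Service", "filter": "service.name==\"disk_root\"", "author": "jdoe", "comment": "Investigating", "pretty": true }'

Here icingaweb2:secret authenticates the call, while jdoe is recorded as the author of the acknowledgement.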
dcz01
April 2, 2024, 2:43pm
4
Thanks a lot for the help.
And how can I get the tool “icinga2opsgenie” to send the author of the acknowledgement?
dcz01
April 3, 2024, 5:08am
6
Thanks for the reply and the code part.
But how can I make this variable flexible for multiple users?
We have several users who acknowledge problems, and each user should appear correctly in Opsgenie.
Can this be done?
dcz01
April 3, 2024, 8:22am
8
Thanks a lot for that post.
But that’s the direction from Opsgenie to Icinga 2.
Is there also a way from Icinga 2 (Web) to Opsgenie?
Currently, Icinga 2 isn’t sending any user to Opsgenie…
That’s right,
but you can add almost anything to the Opsgenie command.
As the last argument of your Opsgenie command, add something like notification_author
without any dash; as a value, use $notification.author$.
This data will be added to your Opsgenie alert and you can use it there; don’t ask me exactly how, though.
Here is the documentation for the Notification object, so you can see what else you could append to your Opsgenie alert/ack:
https://icinga.com/docs/icinga-2/latest/doc/09-object-types/#notification
Here is the code snippet (just as a reference) that adds everything that is not a flag to the parameters that will be sent to Opsgenie:
https://github.com/opsgenie/opsgenie-integration/blob/b2a6cb78d01225fc07d0084a399192069ecb1188/icinga2/icinga2/icinga2opsgenie.go#L392
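A minimal sketch of how that could look in the NotificationCommand (command name and binary path are just examples here; the full working config is posted further down in this thread):

object NotificationCommand "opsgenie-service-notification" {
  import "plugin-notification-command"

  command = [ "/usr/bin/icinga2opsgenie" ]

  arguments = {
    /* ... the existing dashed flags (-t, -hn, -s, ...) ... */
    /* no leading dash, so icinga2opsgenie forwards it as an extra parameter */
    "notification_author" = "$notification.author$"
  }
}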
dcz01
April 3, 2024, 10:43am
10
Thanks for that good answer.
That was the solution, but when I use that extra code in my notification config for Icinga 2, Opsgenie receives the test alert message from Icinga 2 but no longer escalates an alarm.
It then ignores any incoming alarm:
[Icinga2-Icinga2] Skipping incomingData, no matching actions found
service_down_time 0
apiKey ...
last_service_state_change 1712140070
-lhc 1712139943
service_output DISK CRITICAL - free space: CRITICAL [ / 37 MB (0% inode=94%)];
host_attempt
service_attempt 1
event_type Create
-hpd rta=0.299000ms;200.000000;300.000000;0.000000 pl=0%;10;20;0
host_state
-hgns mariadb
last_service_check 1712140070
long_date_time
host_address
max_host_attempts 3
host_percent_change
max_service_attempts 5
host_display_name
host_execution_time
service_state_id 2
last_host_state_id
**notification_author**
-hdn pr-be-mdb-galera02-03
service_desc disk_root
host_group_name
host_perf_data
-ldt 2024-04-03 12:27:50 +0200
-hds 112272.706592
-lhs UP
-hdt 0
service_latency 0.000357
service_execution_time 0.494404
notification_type PROBLEM
-originalTags [ ]
service_state CRITICAL
host_group_names
_originalExtraProperties { icinga_server: default, max_host_attempts: 3 }
host_duration_sec
service_group_names
service_perf_data
-hal pr-be-mdb-galera02-03
host_down_time
last_host_state_change
-lhsi 0
-lhsc 1712027797
logPath /var/log/opsgenie/icinga2opsgenie.log
service_duration_sec 0.000688
delayIfDoesNotExists true
last_service_state_id 1
host_alias
-het 4.071381
service_state_type HARD
service_check_command check_disk_dynamic_by_ssh
-entityType service
-ha 1
-recipients [ ]
-hsi 0
host_state_type
-hst HARD
icinga_server default
-hl 0.000477
last_service_state WARNING
-hn pr-be-mdb-galera02-03
entity_type
service_display_name Host - Disk:Root
-ho PING OK - Packet loss = 0%, RTA = 0.30 ms
-teams [ ]
last_host_check
last_host_state
-tags [ ]
-hs UP
host_latency
host_state_id
-haddr 10.49.80.137
host_name
integrationType Icinga2
integrationName Icinga2
integrationId 933063bf-243b-4909-9ede-9776d45fe098
incomingDataId 15333816-6563-4a8b-89a0-50b7a152977f
And without the new notification_author it works normally:
[Icinga2-Icinga2] Started to execute action: Create Service Alert
service_down_time 0
apiKey ...
last_service_state_change 1712140473
service_output DISK CRITICAL - free space: CRITICAL [ / 38 MB (0% inode=94%)];
host_attempt 1
service_attempt 1
event_type Create
host_state UP
last_service_check 1712140473
long_date_time 2024-04-03 12:34:33 +0200
host_address 10.49.80.137
max_host_attempts 3
host_percent_change
max_service_attempts 5
host_display_name pr-be-mdb-galera02-03
host_execution_time 4.083692
service_state_id 2
last_host_state_id 0
service_desc disk_root
host_group_name
host_perf_data rta=0.280000ms;200.000000;300.000000;0.000000 pl=0%;10;20;0
service_latency 0.000451
service_execution_time 0.495932
notification_type PROBLEM
-originalTags [ ]
service_state CRITICAL
host_group_names mariadb
_originalExtraProperties { service_state: CRITICAL, host_address: 10.49.80.137, max_host_attempts: 3, max_service_attempts: HARD, host_group_names: mariadb, service_check_command: check_disk_dynamic_by_ssh, host_state_type: HARD, last_service_state_change: 1712140473, service_output: DISK CRITICAL - free space: CRITICAL [ / 38 MB (0% inode=94%)];, icinga_server: default, last_host_state_change: 1712027797, service_attempt: 1, host_attempt: 1, service_desc: disk_root, last_host_check: 1712140238, host_perf_data: rta=0.280000ms;200.000000;300.000000;0.000000 pl=0%;10;20;0, host_state: UP, host_latency: 0.000355, host_alias: pr-be-mdb-galera02-03, last_service_check: 1712140473, service_latency: 0.000451, service_state_type: HARD, host_name: pr-be-mdb-galera02-03, host_output: PING OK - Packet loss = 0%, RTA = 0.28 ms }
host_duration_sec 112675.946984
service_group_names
service_perf_data
host_down_time 0
last_host_state_change 1712027797
logPath /var/log/opsgenie/icinga2opsgenie.log
service_duration_sec 0.009055
delayIfDoesNotExists true
last_service_state_id 1
host_alias pr-be-mdb-galera02-03
service_state_type HARD
service_check_command check_disk_dynamic_by_ssh
-recipients [ ]
host_state_type HARD
icinga_server default
last_service_state WARNING
entity_type service
service_display_name Host - Disk:Root
-teams [ ]
last_host_check 1712140238
last_host_state UP
-tags [ ]
host_latency 0.000355
host_state_id 0
host_name pr-be-mdb-galera02-03
host_output PING OK - Packet loss = 0%, RTA = 0.28 ms
integrationType Icinga2
integrationName Icinga2
integrationId 933063bf-243b-4909-9ede-9776d45fe098
incomingDataId 2d906b9c-8eaa-48fd-a716-c48fcb28010b
Do you know a solution for that?
It seems that the problem is now on the Opsgenie side; what does their support say?
dcz01
April 3, 2024, 1:39pm
12
I have now opened a ticket with Atlassian and will post the solution here.
dcz01
April 10, 2024, 1:19pm
13
The problem has now been solved with the help of Atlassian/Opsgenie support.
The error was in the Icinga config file for the icinga2opsgenie command:
object NotificationCommand "opsgenie-service-notification" {
  import "plugin-notification-command"

  vars.hgns = {{ host.groups.join(",") }}
  vars.sgns = {{ service.groups.join(",") }}

  command = [ "/usr/bin/icinga2opsgenie" ]

  arguments = {
    "-entityType" = "service"
    "-t" = "$notification.type$"
    "-ldt" = "$icinga.long_date_time$"
    "-hn" = "$host.name$"
    "-hdn" = "$host.display_name$"
    "-hal" = "$host.display_name$"
    "-haddr" = "$host.address$"
    "-hs" = "$host.state$"
    "-hsi" = "$host.state_id$"
    "-lhs" = "$host.last_state$"
    "-lhsi" = "$host.last_state_id$"
    "-hst" = "$host.state_type$"
    "-ha" = "$host.check_attempt$"
    "-mha" = "$host.max_check_attempts$"
    "-hl" = "$host.latency$"
    "-het" = "$host.execution_time$"
    "-hds" = "$host.duration_sec$"
    "-hdt" = "$host.downtime_depth$"
    "-hgn" = "$host.group$"
    "-hgns" = "$command.vars.hgns$"
    "-lhc" = "$host.last_check$"
    "-lhsc" = "$host.last_state_change$"
    "-ho" = "$host.output$"
    "-hpd" = "$host.perfdata$"
    "-s" = "$service.name$"
    "-sdn" = "$service.display_name$"
    "-ss" = "$service.state$"
    "-ssi" = "$service.state_id$"
    "-lss" = "$service.last_state$"
    "-lssi" = "$service.last_state_id$"
    "-sst" = "$service.state_type$"
    "-sa" = "$service.check_attempt$"
    "-sc" = "$service.check_command$"
    "-msa" = "$service.max_check_attempts$"
    "-sl" = "$service.latency$"
    "-set" = "$service.execution_time$"
    "-sds" = "$service.duration_sec$"
    "-sdt" = "$service.downtime_depth$"
    "-sgns" = "$command.vars.sgns$"
    "-lsch" = "$service.last_check$"
    "-lssc" = "$service.last_state_change$"
    "-so" = "$service.output$"
    "-spd" = "$service.perfdata$"
    "notification_author" = "$notification.author$"
    "entity_type" = "service"
  }
}

object NotificationCommand "opsgenie-host-notification" {
  import "plugin-notification-command"

  vars.hgns = {{ host.groups.join(",") }}

  command = [ "/usr/bin/icinga2opsgenie" ]

  arguments = {
    "-entityType" = "host"
    "-t" = "$notification.type$"
    "-ldt" = "$icinga.long_date_time$"
    "-hn" = "$host.name$"
    "-hdn" = "$host.display_name$"
    "-hal" = "$host.display_name$"
    "-haddr" = "$host.address$"
    "-hs" = "$host.state$"
    "-hsi" = "$host.state_id$"
    "-lhs" = "$host.last_state$"
    "-lhsi" = "$host.last_state_id$"
    "-hst" = "$host.state_type$"
    "-ha" = "$host.check_attempt$"
    "-mha" = "$host.max_check_attempts$"
    "-hl" = "$host.latency$"
    "-het" = "$host.execution_time$"
    "-hds" = "$host.duration_sec$"
    "-hdt" = "$host.downtime_depth$"
    "-hgn" = "$host.group$"
    "-hgns" = "$command.vars.hgns$"
    "-lhc" = "$host.last_check$"
    "-lhsc" = "$host.last_state_change$"
    "-ho" = "$host.output$"
    "-hpd" = "$host.perfdata$"
    "notification_author" = "$notification.author$"
    "entity_type" = "host"
  }
}
We needed both notification_author and entity_type without a leading dash.