Migration from Icinga 1.x
Configuration Migration
The Icinga 2 configuration format introduces plenty of behavioural changes. In order to ease migration from Icinga 1.x, Icinga 2 ships its own config migration script.
Configuration Migration Script
A standalone configuration migration script is available at https://github.com/Icinga/icinga2-migration. All further details on the command line parameters are documented there too.
This script will be merged back upstream into the Icinga Web 2 CLI once there is a final stable release.
Please note that not every configuration detail, trick or attribute will be converted. Some specific migration steps will still have to be done manually, especially if you want to preserve your existing file layout or any other object-specific policies.
If you encounter a bug, please open an issue at https://dev.icinga.org
Manual Config Migration
For a long-term migration of your configuration you should consider re-creating your configuration based on the proposed Icinga 2 configuration paradigm.
Please read the next chapter to find out more about the differences between 1.x and 2.
Manual Config Migration Hints
These hints should provide you with enough details for manually migrating your configuration, or for adapting your configuration export tool to dump Icinga 2 configuration instead of Icinga 1.x configuration.
The examples are taken from Icinga 1.x test and production environments and converted straight into a possible Icinga 2 format. If you have found a different strategy, send a patch!
If you require in-depth explanations, please check the next chapter.
Manual Config Migration Hints for Intervals
By default all intervals without a duration literal are interpreted as seconds. Therefore all existing Icinga 1.x *_interval attributes require an additional m duration literal.
Icinga 1.x:
define service {
service_description service1
host_name localhost1
check_command test_customvar
use generic-service
check_interval 5
retry_interval 1
}
Icinga 2:
object Service "service1" {
import "generic-service"
host_name = "localhost1"
check_command = "test_customvar"
check_interval = 5m
retry_interval = 1m
}
Manual Config Migration Hints for Services
If you have used the host_name attribute in Icinga 1.x with one or more host names this service belongs to, you can migrate this to the apply rules syntax.
Icinga 1.x:
define service {
service_description service1
host_name localhost1,localhost2
check_command test_check
use generic-service
}
Icinga 2:
apply Service "service1" {
import "generic-service"
check_command = "test_check"
assign where host.name in [ "localhost1", "localhost2" ]
}
In Icinga 1.x you would have organized your services with hostgroups using the hostgroup_name attribute, like in the following example:
define service {
service_description servicewithhostgroups
hostgroup_name hostgroup1,hostgroup3
check_command test_check
use generic-service
}
Using Icinga 2 you can migrate this to the apply rules syntax:
apply Service "servicewithhostgroups" {
import "generic-service"
check_command = "test_check"
assign where "hostgroup1" in host.groups
assign where "hostgroup3" in host.groups
}
Manual Config Migration Hints for Group Members
The Icinga 1.x hostgroup hg1 has two members, host1 and host2. The hostgroup hg2 has host3 as a member and includes all members of the hg1 hostgroup.
define hostgroup {
hostgroup_name hg1
members host1,host2
}
define hostgroup {
hostgroup_name hg2
members host3
hostgroup_members hg1
}
This can be migrated to Icinga 2 using group assign rules. The additional nested hostgroup hg1 is included into hg2 with the groups attribute.
object HostGroup "hg1" {
assign where host.name in [ "host1", "host2" ]
}
object HostGroup "hg2" {
groups = [ "hg1" ]
assign where host.name == "host3"
}
These assign rules can be applied to all groups: HostGroup, ServiceGroup and UserGroup (the latter requires renaming from contactgroup).
Tip
Define custom attributes and assign/ignore members based on these attribute pattern matches.
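A minimal sketch of that tip, assuming a hypothetical vars.os custom attribute set on your hosts and a hypothetical *-test naming scheme for test hosts:
object HostGroup "linux-servers" {
display_name = "Linux Servers"
assign where host.vars.os == "Linux"
ignore where match("*-test", host.name)
}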
Manual Config Migration Hints for Check Command Arguments
Host and service check command arguments are separated by a ! in Icinga 1.x. Their order is important and they are referenced as $ARGn$ where n is the argument counter.
define command {
command_name my-ping
command_line $USER1$/check_ping -H $HOSTADDRESS$ -w $ARG1$ -c $ARG2$ -p 5
}
define service {
use generic-service
host_name my-server
service_description my-ping
check_command my-ping!100.0,20%!500.0,60%
}
You could migrate this manually as follows (please note the new generic command arguments and default argument values!):
object CheckCommand "my-ping-check" {
import "plugin-check-command"
command = [
PluginDir + "/check_ping", "-4"
]
arguments = {
"-H" = "$ping_address$"
"-w" = "$ping_wrta$,$ping_wpl$%"
"-c" = "$ping_crta$,$ping_cpl$%"
"-p" = "$ping_packets$"
"-t" = "$ping_timeout$"
}
vars.ping_address = "$address$"
vars.ping_wrta = 100
vars.ping_wpl = 5
vars.ping_crta = 200
vars.ping_cpl = 15
}
object Service "my-ping" {
import "generic-service"
host_name = "my-server"
check_command = "my-ping-check"
vars.ping_wrta = 100
vars.ping_wpl = 20
vars.ping_crta = 500
vars.ping_cpl = 60
}
There is also a quick programmatic workaround for this (example exported from LConf): define a generic check command importing the basic template and set the $USER1$ macro to the global PluginDir constant.
template CheckCommand "generic-check-command" {
import "plugin-check-command"
vars.USER1 = PluginDir
}
Every check command importing the generic-check-command template will now automatically set the new plugin directory - one major problem solved.
For the check command it is required to:
- Escape all double quotes with an additional \.
- Replace all runtime macros, e.g. $HOSTADDRESS$ with $address$.
- Replace custom variable macros if any.
- Keep $ARGn$ macros.
The final check command looks like this in Icinga 2:
object CheckCommand "ping4" {
import "generic-check-command"
command = "$USER1$/check_ping -H $address$ -w $ARG1$ -c $ARG2$ -p 5"
}
The service object will now set the command arguments as ARGn custom attributes.
check_command ping4!100.0,20%!500.0,60%
This command line can be split by the ! separator into:
- ping4 (the command name; keep it for Icinga 2)
- 100.0,20% as vars.ARG1
- 500.0,60% as vars.ARG2
The final service could look like:
apply Service "ping4" {
import "generic-service"
check_command = "ping4"
vars.ARG1 = "100.0,20%"
vars.ARG2 = "500.0,60%"
assign where host.name == "my-server"
}
That way the old command argument style can be carried over to Icinga 2, although it's not recommended.
Manual Config Migration Hints for Runtime Macros
Runtime macros have been renamed. A detailed comparison table can be found here.
For example, accessing the service check output looks like the following in Icinga 1.x:
$SERVICEOUTPUT$
In Icinga 2 you will need to write:
$service.output$
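These macros can be used anywhere command lines are defined. A small sketch of a notification command using the renamed macros; the object name is made up, and plugin-notification-command is the generic template for notification commands:
object NotificationCommand "mail-service-output" {
import "plugin-notification-command"
command = "printf \"%b\" \"$service.output$\" | mail -s \"$service.state$: $service.name$ on $host.name$\" $user.email$"
}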
Manual Config Migration Hints for Runtime Custom Attributes
Custom variables from Icinga 1.x are available as Icinga 2 custom attributes.
define command {
command_name test_customvar
command_line echo "Host CV: $_HOSTCVTEST$ Service CV: $_SERVICECVTEST$\n"
}
define host {
host_name localhost1
check_command test_customvar
use generic-host
_CVTEST host cv value
}
define service {
service_description service1
host_name localhost1
check_command test_customvar
use generic-service
_CVTEST service cv value
}
Can be written as the following in Icinga 2:
object CheckCommand "test_customvar" {
import "plugin-check-command"
command = "echo "Host CV: $host.vars.CVTEST$ Service CV: $service.vars.CVTEST$\n""
}
object Host "localhost1" {
import "generic-host"
check_command = "test_customvar"
vars.CVTEST = "host cv value"
}
object Service "service1" {
host_name = "localhost1"
check_command = "test_customvar"
vars.CVTEST = "service cv value"
}
If you are just defining $CVTEST$ in your command definition, its value depends on the execution scope: the host check command will fetch the host attribute value of vars.CVTEST, while the service check command resolves its value to the service attribute vars.CVTEST.
Manual Config Migration Hints for Contacts (Users)
Contacts in Icinga 1.x act as users in Icinga 2, but do not have any notification commands specified. This migration part is explained in the next chapter.
define contact{
contact_name testconfig-user
use generic-user
alias Icinga Test User
service_notification_options c,f,s,u
email icinga@localhost
}
The service_notification_options can be mapped into generic state and type filters, if additional notification filtering is required. alias gets renamed to display_name.
object User "testconfig-user" {
import "generic-user"
display_name = "Icinga Test User"
email = "icinga@localhost"
}
This user can be put into usergroups (former contactgroups) or referenced in newly migrated notification objects.
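If you do want to keep the filtering from service_notification_options c,f,s,u, it could be expressed on a notification object instead. A sketch assuming the service-mail-notification template used in the examples below; the states and types follow the filter table later in this chapter:
apply Notification "mail-testconfig-user" to Service {
import "service-mail-notification"
users = [ "testconfig-user" ]
states = [ Critical, Unknown ]
types = [ Problem, Custom, FlappingStart, FlappingEnd, DowntimeStart, DowntimeEnd, DowntimeRemoved ]
assign where host.name == "localhost1"
}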
Manual Config Migration Hints for Notifications
If you are migrating a host or service notification, you'll need to extract the following information from your existing Icinga 1.x configuration objects:
- the host/service attributes contacts and contact_groups
- the host/service attribute notification_options
- the host/service attribute notification_period
- the host/service attribute notification_interval
The clean approach is to refactor your current contacts and their notification command methods into a generic strategy:
- host or service has a notification type (for example mail)
- which contacts (users) are notified by mail?
- do the notification filters, periods, intervals still apply for them? (do a cleanup during migration)
- assign users and groups to these notifications
- redesign the notifications into generic apply rules
The ugly workaround solution could look like this:
Extract all contacts from the remaining groups, and create a unique list. This is required for determining the host and service notification commands involved:
- contact attributes host_notification_commands and service_notification_commands (can be a comma separated list)
- get the command line for each notification command and store them for later
- create a new notification name and command name
Generate a new notification object based on these values. Import the generic template based on the type (host or service). Assign it to the host or service and set the newly generated notification command name as the command attribute.
object Notification "<notificationname>" {
import "mail-host-notification"
host_name = "<thishostname>"
command = "<notificationcommandname>"
Convert the notification_options attribute from Icinga 1.x to Icinga 2 states and types. Details here. Add the notification period.
states = [ OK, Warning, Critical ]
types = [ Recovery, Problem, Custom ]
period = "24x7"
The current contact acts as the users attribute.
users = [ "<contactwithnotificationcommand>" ]
}
Do this in a loop for all notification commands (depending on whether it is a host or service contact). Once done, dump the collected notification commands.
The result of this migration is lots of unnecessary notification objects and commands, but it will unroll the Icinga 1.x logic into the revamped Icinga 2 notification object schema. If you are looking for code examples, try LConf.
Manual Config Migration Hints for Notification Filters
Icinga 1.x defines all notification filters in an attribute called notification_options. Using Icinga 2 you will have to split these values into the states and types attributes.
Note
The Recovery type requires the OK state. Custom and Problem should always be set as type filters.
Icinga 1.x option | Icinga 2 state | Icinga 2 type |
---|---|---|
o | OK (Up for hosts) | |
w | Warning | Problem |
c | Critical | Problem |
u | Unknown | Problem |
d | Down | Problem |
s | . | DowntimeStart / DowntimeEnd / DowntimeRemoved |
r | OK | Recovery |
f | . | FlappingStart / FlappingEnd |
n | 0 (none) | 0 (none) |
. | . | Custom |
Manual Config Migration Hints for Escalations
Escalations in Icinga 1.x are a bit tricky. By default service escalations can be applied to hosts and hostgroups and require a defined service object.
The following example applies a service escalation to the service dep_svc01 and all hosts in the hg_svcdep2 hostgroup. The default notification_interval is set to 10 minutes notifying the cg_admin contact.
After 20 minutes (10*2, notification_interval * first_notification) the notification is escalated to the cg_ops contactgroup until 60 minutes (10*6) have passed.
define service {
service_description dep_svc01
host_name dep_hostsvc01,dep_hostsvc03
check_command test2
use generic-service
notification_interval 10
contact_groups cg_admin
}
define hostgroup {
hostgroup_name hg_svcdep2
members dep_hostsvc03
}
# with hostgroup_name and service_description
define serviceescalation {
hostgroup_name hg_svcdep2
service_description dep_svc01
first_notification 2
last_notification 6
contact_groups cg_ops
}
In Icinga 2 the service and hostgroup definition will look quite the same. Save the notification_interval and contact_groups attributes for an additional notification.
apply Service "dep_svc01" {
import "generic-service"
check_command = "test2"
assign where host.name == "dep_hostsvc01"
assign where host.name == "dep_hostsvc03"
}
object HostGroup "hg_svcdep2" {
assign where host.name == "dep_hostsvc03"
}
apply Notification "email" to Service {
import "service-mail-notification"
interval = 10m
user_groups = [ "cg_admin" ]
assign where service.name == "dep_svc01" && (host.name == "dep_hostsvc01" || host.name == "dep_hostsvc03")
}
Calculate the begin and end time for the newly created escalation notification:
- begin = first_notification * notification_interval = 2 * 10m = 20m
- end = last_notification * notification_interval = 6 * 10m = 60m = 1h
Assign the notification escalation to the service dep_svc01 on all hosts in the hostgroup hg_svcdep2.
apply Notification "email-escalation" to Service {
import "service-mail-notification"
interval = 10m
user_groups = [ "cg_ops" ]
times = {
begin = 20m
end = 1h
}
assign where service.name == "dep_svc01" && "hg_svcdep2" in host.groups
}
The assign rule could be made more generic so that the notification applies to more than just this one service on hosts in the matched hostgroup.
Note
When the notification is escalated, Icinga 1.x suppresses notifications to the default contacts. In Icinga 2 an escalation is an additional notification with a defined begin and end time; the default notification continues as usual.
Manual Config Migration Hints for Dependencies
There are some dependency examples already in the basics chapter. Dependencies in Icinga 1.x can be confusing in terms of which host/service is the parent and which host/service acts as the child.
While Icinga 1.x defines notification_failure_criteria and execution_failure_criteria as dependency filters, this behaviour has changed in Icinga 2. There is no 1:1 migration, but generally speaking the state filter defined in the execution_failure_criteria defines the Icinga 2 states attribute: if the state filter matches, you can define whether to disable checks and notifications or not.
The following example describes service dependencies. If you migrate from Icinga 1.x you will only want to use the classic Host-to-Host and Service-to-Service dependency relationships.
define service {
service_description dep_svc01
hostgroup_name hg_svcdep1
check_command test2
use generic-service
}
define service {
service_description dep_svc02
hostgroup_name hg_svcdep2
check_command test2
use generic-service
}
define hostgroup {
hostgroup_name hg_svcdep2
members host2
}
define host{
use linux-server-template
host_name host1
address 192.168.1.10
}
# with hostgroup_name and service_description
define servicedependency {
host_name host1
dependent_hostgroup_name hg_svcdep2
service_description dep_svc01
dependent_service_description *
execution_failure_criteria u,c
notification_failure_criteria w,u,c
inherits_parent 1
}
Map the dependency attributes accordingly.
Icinga 1.x | Icinga 2 |
---|---|
host_name | parent_host_name |
dependent_host_name | child_host_name (used in assign/ignore) |
dependent_hostgroup_name | all child hosts in group (used in assign/ignore) |
service_description | parent_service_name |
dependent_service_description | child_service_name (used in assign/ignore) |
And migrate the host and services.
object Host "host1" {
import "linux-server-template"
address = "192.168.1.10"
}
object HostGroup "hg_svcdep2" {
assign where host.name == "host2"
}
apply Service "dep_svc01" {
import "generic-service"
check_command = "test2"
assign where "hp_svcdep1" in host.groups
}
apply Service "dep_svc02" {
import "generic-service"
check_command = "test2"
assign where "hp_svcdep2" in host.groups
}
When it comes to migrating the execution_failure_criteria and notification_failure_criteria attributes, you will need to map the most common values. In this example u,c means Unknown and Critical will cause the dependency to fail; therefore the Dependency stays valid on the OK and Warning states. inherits_parent is always enabled in Icinga 2.
apply Dependency "all-svc-for-hg-hg_svcdep2-on-host1-dep_svc01" to Service {
parent_host_name = "host1"
parent_service_name = "dep_svc01"
states = [ OK, Warning ]
disable_checks = true
disable_notifications = true
assign where "hg_svcdep2" in host.groups
}
Host dependencies are explained in the next chapter.
Manual Config Migration Hints for Host Parents
Host parents from Icinga 1.x are migrated into Host-to-Host dependencies in Icinga 2.
The following example defines the vmware-master host as parent host for the guest virtual machines vmware-vm1 and vmware-vm2.
define host{
use linux-server-template
host_name vmware-master
hostgroups vmware
address 192.168.1.10
}
define host{
use linux-server-template
host_name vmware-vm1
hostgroups vmware
address 192.168.27.1
parents vmware-master
}
define host{
use linux-server-template
host_name vmware-vm2
hostgroups vmware
address 192.168.28.1
parents vmware-master
}
By default all hosts in the hostgroup vmware should get the parent assigned (but not the vmware-master host itself). This isn't really solvable with Icinga 1.x parents, but only with host dependencies as shown below:
define hostdependency {
dependent_hostgroup_name vmware
dependent_host_name !vmware-master
host_name vmware-master
inherits_parent 1
notification_failure_criteria d,u
execution_failure_criteria d,u
dependency_period testconfig-24x7
}
When migrating to Icinga 2, the parents must be changed to a newly created host dependency. Map the following attributes:
Icinga 1.x | Icinga 2 |
---|---|
host_name | parent_host_name |
dependent_host_name | child_host_name (used in assign/ignore) |
dependent_hostgroup_name | all child hosts in group (used in assign/ignore) |
The Icinga 2 configuration looks like this:
object Host "vmware-master" {
import "linux-server-template"
groups += [ "vmware" ]
address = "192.168.1.10"
vars.is_vmware_master = true
}
object Host "vmware-vm1" {
import "linux-server-template"
groups += [ "vmware" ]
address = "192.168.27.1"
}
object Host "vmware-vm2" {
import "linux-server-template"
groups += [ "vmware" ]
address = "192.168.28.1"
}
apply Dependency "vmware-master" to Host {
parent_host_name = "vmware-master"
assign where "vmware" in host.groups
ignore where host.vars.is_vmware_master
ignore where host.name == "vmware-master"
}
For easier identification you could add the vars.is_vmware_master attribute to the vmware-master host and let the dependency ignore that instead of the hardcoded host name. That differs from the Icinga 1.x example and is a best practice hint only.
Manual Config Migration Hints for Distributed Setups
- Icinga 2 does not use active/passive instances calling OCSP commands and requiring the NSCA daemon for passing check results between instances.
- Icinga 2 does not support any 1.x NEB addons for check load distribution.
- If your current setup consists of instances distributing the check load, you should consider building a load distribution setup with Icinga 2.
- If your current setup includes active/passive clustering with external tools like Pacemaker/DRBD, consider the High Availability setup.
- If you have built your own custom configuration deployment and check result collecting mechanism, you should re-design your setup and re-evaluate your requirements, and how they may be fulfilled using the Icinga 2 cluster capabilities.
Differences between Icinga 1.x and 2
Configuration Format
Icinga 1.x supports two configuration formats: key-value-based settings in the icinga.cfg configuration file and object-based settings in included files (cfg_dir, cfg_file). The path to the icinga.cfg configuration file must be passed to the Icinga daemon at startup.
enable_notifications=1
define service {
notifications_enabled 0
}
Icinga 2 supports objects and (global) variables, but does not distinguish between the main configuration file and any included file.
const EnableNotifications = true
object Service "test" {
enable_notifications = 0
}
Sample Configuration and ITL
While Icinga 1.x ships sample configuration and templates spread in various object files, Icinga 2 moves all templates into the Icinga Template Library (ITL) and includes them in the sample configuration.
Additional plugin check commands are shipped with Icinga 2 as well.
The ITL will be updated on every release and should not be edited by the user. There are still generic templates available for your convenience which may or may not be re-used in your configuration. For instance, generic-service includes all required attributes except check_command for a service.
Sample configuration files are located in the conf.d/ directory which is included in icinga2.conf by default.
Main Config File
In Icinga 1.x there are many global configuration settings available in icinga.cfg. Icinga 2 only uses a small set of global constants allowing you to specify certain settings, such as the NodeName in a cluster scenario. Aside from that, icinga2.conf takes care of including global constants, enabled features and the object configuration.
Include Files and Directories
In Icinga 1.x the icinga.cfg file contains cfg_file and cfg_dir directives. The cfg_dir directive recursively includes all files with a .cfg suffix in the given directory. Only absolute paths may be used. The cfg_file and cfg_dir directives can include the same file twice, which leads to configuration errors in Icinga 1.x.
cfg_file=/etc/icinga/objects/commands.cfg
cfg_dir=/etc/icinga/objects
Icinga 2 supports wildcard includes and relative paths, e.g. for including conf.d/*.conf in the same directory.
include "conf.d/*.conf"
If you want to include files and directories recursively, use the include_recursive directive with the directory and an optional pattern.
include_recursive "conf.d"
A global search path for includes is available for advanced features like the Icinga Template Library (ITL) or additional monitoring plugins check command configuration.
include <itl>
include <plugins>
By convention the .conf suffix is used for Icinga 2 configuration files.
Resource File and Global Macros
Global macros such as the plugin directory, usernames and passwords can be set in the resource.cfg configuration file in Icinga 1.x. By convention the USER1 macro is used to define the directory for the plugins.
Icinga 2 uses global constants instead. In the default config these are set in the constants.conf configuration file:
/**
* This file defines global constants which can be used in
* the other configuration files. At a minimum the
* PluginDir constant should be defined.
*/
const PluginDir = "/usr/lib/nagios/plugins"
Global constants can only be defined once. Trying to modify a global constant will result in an error.
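A short sketch of what that means in practice; the second line would trigger a configuration error:
const PluginDir = "/usr/lib/nagios/plugins"
PluginDir = "/usr/local/plugins" // error: global constants cannot be modified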
Configuration Comments
In Icinga 1.x comments are made using a leading hash (#) or a semi-colon (;) for inline comments.
In Icinga 2 comments can either be encapsulated by /* and */ (allowing for multi-line comments) or start with two slashes (//). A leading hash (#) can also be used.
Object names
Object names must not contain an exclamation mark (!). Use the display_name attribute to specify user-friendly names which should be shown in UIs (supported by Icinga 1.x Classic UI and Web).
Object names are not specified using attributes (e.g. service_description for services) like in Icinga 1.x, but directly after their type definition.
define service {
host_name localhost
service_description ping4
}
object Service "ping4" {
host_name = "localhost"
}
Templates
In Icinga 1.x templates are identified using the register 0 setting. Icinga 2 uses the template identifier:
template Service "ping4-template" { }
Icinga 1.x objects inherit from templates using the use attribute. Icinga 2 uses the keyword import with template names in double quotes.
define service {
service_description testservice
use tmpl1,tmpl2,tmpl3
}
object Service "testservice" {
import "tmpl1"
import "tmpl2"
import "tmpl3"
}
The last template overrides previously set values.
Object attributes
Icinga 1.x separates attribute and value pairs with whitespaces/tabs. Icinga 2 requires an equal sign (=) between them.
define service {
check_interval 5
}
object Service "test" {
check_interval = 5m
}
Please note that the default time value is seconds if no duration literal is given. check_interval = 5 behaves the same as check_interval = 5s.
All strings require double quotes in Icinga 2. Therefore a double quote must be escaped with a backslash (e.g. in a command line). If an attribute identifier starts with a number, it must be enclosed in double quotes as well.
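A small sketch illustrating both rules; the object and attribute names are made up for illustration:
object CheckCommand "echo-quoted" {
import "plugin-check-command"
command = "echo \"quotes must be escaped\"" // escaped double quotes inside a string
vars += {
"1st_floor" = true // identifiers starting with a number must be quoted
}
}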
Alias vs. Display Name
In Icinga 1.x a host can have an alias and a display_name attribute used for a more descriptive name. A service can only have a display_name attribute. The alias is used for group, timeperiod, etc. objects too.
Icinga 2 only supports the display_name attribute which is also taken into account by Icinga web interfaces.
Custom Attributes
Icinga 2 allows you to define custom attributes in the vars dictionary. The notes, notes_url, action_url, icon_image and icon_image_alt attributes for host and service objects are still available in Icinga 2. 2d_coords and statusmap_image are not supported in Icinga 2.
Custom Variables
Icinga 1.x custom variable attributes must be prefixed with an underscore (_). In Icinga 2 these attributes must be added to the vars dictionary as custom attributes.
vars.dn = "cn=icinga2-dev-host,ou=icinga,ou=main,ou=IcingaConfig,ou=LConf,dc=icinga,dc=org"
vars.cv = "my custom cmdb description"
These custom attributes are also used as command parameters.
Host Service Relation
In Icinga 1.x a service object is associated with a host by defining the host_name attribute in the service definition. Alternate methods refer to hostgroup_name or behaviour-changing regular expressions.
The preferred way of associating hosts with services in Icinga 2 is by using the apply keyword.
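A minimal sketch of that approach; generic-service and the ping4 check command are part of the sample configuration and ITL:
apply Service "ping4" {
import "generic-service"
check_command = "ping4"
assign where host.address // all hosts with an address attribute set
}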
Users
Contacts have been renamed to users (same for groups). A user not only provides attributes and custom attributes used for notifications, but is also used for authorization checks.
In Icinga 2 notification commands are not directly associated with users. Instead the notification command is specified using Notification objects.
The StatusDataWriter, IdoMysqlConnection and LivestatusListener types will provide the contact and contactgroups attributes for services for compatibility reasons. These values are calculated from all services, their notifications, and their users.
Macros
Various object attributes and runtime variables can be accessed as macros in commands in Icinga 1.x - Icinga 2 supports all required custom attributes.
Command Arguments
If you have previously used Icinga 1.x you may already be familiar with user and argument definitions (e.g. USER1 or ARG1). Unlike in Icinga 1.x the Icinga 2 custom attributes may have arbitrary names, and arguments are no longer specified in the check_command setting.
In Icinga 1.x arguments are specified in the check_command attribute and are separated from the command name using an exclamation mark (!).
define command {
command_name ping4
command_line $USER1$/check_ping -H $address$ -w $ARG1$ -c $ARG2$ -p 5
}
define service {
use local-service
host_name localhost
service_description PING
check_command ping4!100.0,20%!500.0,60%
}
With the freely definable custom attributes in Icinga 2 it looks like this:
object CheckCommand "ping4" {
command = PluginDir + "/check_ping -H $address$ -w $wrta$,$wpl$% -c $crta$,$cpl$%"
}
object Service "PING" {
check_command = "ping4"
vars.wrta = 100
vars.wpl = 20
vars.crta = 500
vars.cpl = 60
}
Note
For better maintainability you should consider using command arguments for your check commands.
Note
The Classic UI feature named Command Expander does not work with Icinga 2.
Environment Macros
The global configuration setting enable_environment_macros does not exist in Icinga 2. Macros exported into the environment must be set using the env attribute in command objects.
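A sketch of the env attribute, assuming a hypothetical proxy_url custom attribute:
object CheckCommand "http-via-proxy" {
import "plugin-check-command"
command = PluginDir + "/check_http -H $address$"
env = {
"HTTP_PROXY" = "$proxy_url$" // exported into the plugin's environment
}
}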
Runtime Macros
Icinga 2 requires an object specific namespace when accessing configuration and stateful runtime macros. Custom attributes can be accessed directly.
Changes to user (contact) runtime macros
Icinga 1.x | Icinga 2 |
---|---|
CONTACTNAME | user.name |
CONTACTALIAS | user.display_name |
CONTACTEMAIL | user.email |
CONTACTPAGER | user.pager |
CONTACTADDRESS* is not supported but can be accessed as $user.vars.address1$ if set.
Changes to service runtime macros
Icinga 1.x | Icinga 2 |
---|---|
SERVICEDESC | service.name |
SERVICEDISPLAYNAME | service.display_name |
SERVICECHECKCOMMAND | service.check_command |
SERVICESTATE | service.state |
SERVICESTATEID | service.state_id |
SERVICESTATETYPE | service.state_type |
SERVICEATTEMPT | service.check_attempt |
MAXSERVICEATTEMPT | service.max_check_attempts |
LASTSERVICESTATE | service.last_state |
LASTSERVICESTATEID | service.last_state_id |
LASTSERVICESTATETYPE | service.last_state_type |
LASTSERVICESTATECHANGE | service.last_state_change |
SERVICEDURATIONSEC | service.duration_sec |
SERVICELATENCY | service.latency |
SERVICEEXECUTIONTIME | service.execution_time |
SERVICEOUTPUT | service.output |
SERVICEPERFDATA | service.perfdata |
LASTSERVICECHECK | service.last_check |
SERVICENOTES | service.notes |
SERVICENOTESURL | service.notes_url |
SERVICEACTIONURL | service.action_url |
Changes to host runtime macros
Icinga 1.x | Icinga 2 |
---|---|
HOSTNAME | host.name |
HOSTADDRESS | host.address |
HOSTADDRESS6 | host.address6 |
HOSTDISPLAYNAME | host.display_name |
HOSTALIAS | (use host.display_name instead) |
HOSTCHECKCOMMAND | host.check_command |
HOSTSTATE | host.state |
HOSTSTATEID | host.state_id |
HOSTSTATETYPE | host.state_type |
HOSTATTEMPT | host.check_attempt |
MAXHOSTATTEMPT | host.max_check_attempts |
LASTHOSTSTATE | host.last_state |
LASTHOSTSTATEID | host.last_state_id |
LASTHOSTSTATETYPE | host.last_state_type |
LASTHOSTSTATECHANGE | host.last_state_change |
HOSTDURATIONSEC | host.duration_sec |
HOSTLATENCY | host.latency |
HOSTEXECUTIONTIME | host.execution_time |
HOSTOUTPUT | host.output |
HOSTPERFDATA | host.perfdata |
LASTHOSTCHECK | host.last_check |
HOSTNOTES | host.notes |
HOSTNOTESURL | host.notes_url |
HOSTACTIONURL | host.action_url |
TOTALSERVICES | host.num_services |
TOTALSERVICESOK | host.num_services_ok |
TOTALSERVICESWARNING | host.num_services_warning |
TOTALSERVICESUNKNOWN | host.num_services_unknown |
TOTALSERVICESCRITICAL | host.num_services_critical |
Changes to command runtime macros
Icinga 1.x | Icinga 2 |
---|---|
COMMANDNAME | command.name |
Changes to notification runtime macros
Icinga 1.x | Icinga 2 |
---|---|
NOTIFICATIONTYPE | notification.type |
NOTIFICATIONAUTHOR | notification.author |
NOTIFICATIONCOMMENT | notification.comment |
NOTIFICATIONAUTHORNAME | (use notification.author) |
NOTIFICATIONAUTHORALIAS | (use notification.author) |
Changes to global runtime macros:
Icinga 1.x | Icinga 2 |
---|---|
TIMET | icinga.timet |
LONGDATETIME | icinga.long_date_time |
SHORTDATETIME | icinga.short_date_time |
DATE | icinga.date |
TIME | icinga.time |
PROCESSSTARTTIME | icinga.uptime |
Changes to global statistic macros:
Icinga 1.x | Icinga 2 |
---|---|
TOTALHOSTSUP | icinga.num_hosts_up |
TOTALHOSTSDOWN | icinga.num_hosts_down |
TOTALHOSTSUNREACHABLE | icinga.num_hosts_unreachable |
TOTALHOSTSDOWNUNHANDLED | -- |
TOTALHOSTSUNREACHABLEUNHANDLED | -- |
TOTALHOSTPROBLEMS | down |
TOTALHOSTPROBLEMSUNHANDLED | down-(downtime+acknowledged) |
TOTALSERVICESOK | icinga.num_services_ok |
TOTALSERVICESWARNING | icinga.num_services_warning |
TOTALSERVICESCRITICAL | icinga.num_services_critical |
TOTALSERVICESUNKNOWN | icinga.num_services_unknown |
TOTALSERVICESWARNINGUNHANDLED | -- |
TOTALSERVICESCRITICALUNHANDLED | -- |
TOTALSERVICESUNKNOWNUNHANDLED | -- |
TOTALSERVICEPROBLEMS | ok+warning+critical+unknown |
TOTALSERVICEPROBLEMSUNHANDLED | warning+critical+unknown-(downtime+acknowledged) |
External Commands
CHANGE_CUSTOM_CONTACT_VAR was renamed to CHANGE_CUSTOM_USER_VAR.
CHANGE_CONTACT_MODATTR was renamed to CHANGE_USER_MODATTR.
The following external commands are not supported:
CHANGE_CONTACT_HOST_NOTIFICATION_TIMEPERIOD
CHANGE_HOST_NOTIFICATION_TIMEPERIOD
CHANGE_SVC_NOTIFICATION_TIMEPERIOD
DEL_DOWNTIME_BY_HOSTGROUP_NAME
DEL_DOWNTIME_BY_START_TIME_COMMENT
DISABLE_ALL_NOTIFICATIONS_BEYOND_HOST
DISABLE_CONTACT_HOST_NOTIFICATIONS
DISABLE_CONTACT_SVC_NOTIFICATIONS
DISABLE_CONTACTGROUP_HOST_NOTIFICATIONS
DISABLE_CONTACTGROUP_SVC_NOTIFICATIONS
DISABLE_FAILURE_PREDICTION
DISABLE_HOST_AND_CHILD_NOTIFICATIONS
DISABLE_HOST_FRESHNESS_CHECKS
DISABLE_HOST_SVC_NOTIFICATIONS
DISABLE_NOTIFICATIONS_EXPIRE_TIME
DISABLE_SERVICE_FRESHNESS_CHECKS
ENABLE_ALL_NOTIFICATIONS_BEYOND_HOST
ENABLE_CONTACT_HOST_NOTIFICATIONS
ENABLE_CONTACT_SVC_NOTIFICATIONS
ENABLE_CONTACTGROUP_HOST_NOTIFICATIONS
ENABLE_CONTACTGROUP_SVC_NOTIFICATIONS
ENABLE_FAILURE_PREDICTION
ENABLE_HOST_AND_CHILD_NOTIFICATIONS
ENABLE_HOST_FRESHNESS_CHECKS
ENABLE_HOST_SVC_NOTIFICATIONS
ENABLE_SERVICE_FRESHNESS_CHECKS
READ_STATE_INFORMATION
SAVE_STATE_INFORMATION
SCHEDULE_AND_PROPAGATE_HOST_DOWNTIME
SCHEDULE_AND_PROPAGATE_TRIGGERED_HOST_DOWNTIME
SET_HOST_NOTIFICATION_NUMBER
SET_SVC_NOTIFICATION_NUMBER
START_ACCEPTING_PASSIVE_HOST_CHECKS
START_ACCEPTING_PASSIVE_SVC_CHECKS
START_OBSESSING_OVER_HOST
START_OBSESSING_OVER_HOST_CHECKS
START_OBSESSING_OVER_SVC
START_OBSESSING_OVER_SVC_CHECKS
STOP_ACCEPTING_PASSIVE_HOST_CHECKS
STOP_ACCEPTING_PASSIVE_SVC_CHECKS
STOP_OBSESSING_OVER_HOST
STOP_OBSESSING_OVER_HOST_CHECKS
STOP_OBSESSING_OVER_SVC
STOP_OBSESSING_OVER_SVC_CHECKS
Asynchronous Event Execution
Unlike Icinga 1.x, Icinga 2 does not block while waiting for a check command to execute. The same applies when a notification or event handler is triggered: they run asynchronously in their own threads.
Writing performance data files, status data and log files doesn't block either. Last but not least, the external command pipe runs asynchronously and accepts multiple connections at once.
Checks
Check Output
Icinga 2 does not make a difference between output (first line) and long_output (remaining lines) like in Icinga 1.x. Performance data is provided separately.
There is no output length restriction as known from Icinga 1.x using an 8KB static buffer.
The StatusDataWriter, IdoMysqlConnection and LivestatusListener types split the raw output into output (first line) and long_output (remaining lines) for compatibility reasons.
Initial State
Icinga 1.x uses the max_service_check_spread setting to specify a timerange where the initial state checks must have happened. Icinga 2 will use the retry_interval setting instead, and check_interval divided by 5 if retry_interval is not defined.
Comments
Icinga 2 doesn't support non-persistent comments.
Commands
Unlike in Icinga 1.x there are three different command types in Icinga 2: CheckCommand, NotificationCommand and EventCommand.
For example, in Icinga 1.x it is possible to accidentally use a notification command as an event handler, which might cause problems depending on which runtime macros are used in the notification command.
In Icinga 2 these command types are separated and will generate an error on configuration validation if used in the wrong context.
While Icinga 2 still supports the complete command line in command objects, it's also possible to encapsulate all arguments into double quotes and pass them as an array to the command attribute, e.g. for better readability.
It's also possible to define default custom attributes for the command itself which can be overridden by a service macro.
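A sketch of such a default, using a made-up disk check; the service overrides the command's default threshold:
object CheckCommand "my-disk" {
import "plugin-check-command"
command = PluginDir + "/check_disk -w $disk_wfree$% -c $disk_cfree$%"
vars.disk_wfree = 20 // default, used unless the service overrides it
vars.disk_cfree = 10
}
object Service "disk" {
import "generic-service"
host_name = "localhost1"
check_command = "my-disk"
vars.disk_wfree = 10 // overrides the command default
}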
Command Timeouts
In Icinga 1.x there were two global options defining a host and service check timeout. This was problematic when only a couple of check plugins required extended command timeouts.
Icinga 2 allows you to specify the command timeout directly on the command. So if your VMware check plugin takes 15 minutes, increase the timeout accordingly.
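For example (the command name and plugin path are made up):
object CheckCommand "vmware-health" {
import "plugin-check-command"
command = PluginDir + "/check_vmware_health"
timeout = 15m // allow this long-running plugin to finish
}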
Groups
In Icinga 2 hosts, services and users are added to groups using the groups attribute in the object. The old way of listing all group members in the group's members attribute is available through assign where and ignore where conditions.
object Host "web-dev" {
import "generic-host"
}
object HostGroup "dev-hosts" {
display_name = "Dev Hosts"
assign where match("*-dev", host.name)
}
Add Service to Hostgroup where Host is Member
In order to associate a service with all hosts in a host group, the apply keyword can be used:
apply Service "ping4" {
import "generic-service"
check_command = "ping4"
assign where "dev-hosts" in host.groups
}
Notifications
Notifications are a new object type in Icinga 2. Imagine the following notification configuration problem in Icinga 1.x:
- Service A should notify contact X via SMS
- Service B should notify contact X via Mail
- Service C should notify contact Y via Mail and SMS
- Contact X and Y should also be used for authorization (e.g. in Classic UI)
The only way to achieve a semi-clean solution is to:
- Create contact X-sms, set service_notification_command for sms, assign contact to service A
- Create contact X-mail, set service_notification_command for mail, assign contact to service B
- Create contact Y, set service_notification_command for sms and mail, assign contact to service C
- Create contact X without notification commands, assign to service A and B
Basically you are required to create duplicated contacts, either one per notification method or one used for authorization only.
Icinga 2 attempts to solve that problem in this way:
- Create user X, set SMS and Mail attributes, used for authorization
- Create user Y, set SMS and Mail attributes, used for authorization
- Create notification A-SMS, set command for sms, add user X, assign notification A-SMS to service A
- Create notification B-Mail, set command for mail, add user X, assign notification B-Mail to service B
- Create notification C-SMS, set command for sms, add user Y, assign notification C-SMS to service C
- Create notification C-Mail, set command for mail, add user Y, assign notification C-Mail to service C
Previously in Icinga 1.x it looked like this:
service -> (contact, contactgroup) -> notification command
In Icinga 2 it will look like this:
Service -> Notification -> NotificationCommand
-> User, UserGroup
Escalations
Escalations in Icinga 1.x require a separate object matching existing objects. Escalations happen between a defined start and end time which is calculated from the notification_interval:
start = notification start + (notification_interval * first_notification)
end = notification start + (notification_interval * last_notification)
In theory first_notification and last_notification can be set to readable numbers. In practice users manipulate those attributes in combination with notification_interval in order to get a start and end time.
In Icinga 2 the notification object can be used as a notification escalation if the start and end times are defined within the times attribute using duration literals (e.g. 30m).
The Icinga 2 escalation does not replace the currently running notification. In Icinga 1.x it's required to copy the contacts from the service notification to the escalation to guarantee the normal notifications once an escalation happens. That's not necessary with Icinga 2, which only requires an additional notification object for the escalation itself.
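A minimal escalation sketch, assuming the service-mail-notification template used in the migration hints above and a hypothetical icingaadmin user:
apply Notification "mail-escalation" to Service {
import "service-mail-notification"
users = [ "icingaadmin" ]
interval = 10m
times = {
begin = 30m // escalation starts 30 minutes after the first problem notification
end = 2h // and stops two hours in
}
assign where service.name == "ping4"
}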
Notification Options
Unlike Icinga 1.x, where the notification_options attribute contains comma-separated state and type filters, Icinga 2 uses two configuration attributes for that. All state and type filters use long names, given as arrays:
notification_options w,u,c,r,f,s
states = [ Warning, Unknown, Critical ]
types = [ Problem, Recovery, FlappingStart, FlappingEnd, DowntimeStart, DowntimeEnd, DowntimeRemoved ]
Icinga 2 adds more fine-grained type filters for acknowledgements, downtimes and flapping (start, end, ...).
Dependencies and Parents
In Icinga 1.x it's possible to define host parents to determine network reachability and keep a host's state unreachable rather than down. Furthermore there are host and service dependencies preventing unnecessary checks and notifications. A host must not depend on a service, and vice versa. All dependencies are configured as separate objects and cannot be set directly on the host or service object.
In Icinga 2 a service can depend on a host, and vice versa. A service has an implicit dependency (parent) on its host. A host-to-host dependency acts implicitly as a host parent relation.
The former host_name and dependent_host_name have been renamed to parent_host_name and child_host_name (same for the service attributes). When using apply rules the child attributes may be omitted.
For detailed examples on how to use the dependencies please check the dependencies chapter.
Dependencies can be applied to hosts or services using the apply rules.
The StatusDataWriter, IdoMysqlConnection and LivestatusListener types support the Icinga 1.x schema with dependencies and parent attributes for compatibility reasons.
Flapping
The Icinga 1.x flapping detection uses the last 21 states of a service. This value is hardcoded and cannot be changed. The algorithm for determining a flapping state is as follows:
flapping value = (number of actual state changes / number of possible state changes)
The flapping value is then compared to the low and high flapping thresholds.
The algorithm used in Icinga 2 does not store the past states but calculates the flapping threshold from a single value based on counters and half-life values. Icinga 2 compares the value with a single flapping threshold configuration attribute.
Check Result Freshness
Freshness of check results must be enabled explicitly in Icinga 1.x. The attribute freshness_threshold defines the threshold in seconds. Once the threshold is triggered, an active freshness check is executed as defined by the check_command attribute. Both check methods (active and passive) use the same freshness check method.
In Icinga 2 active check freshness is determined by the check_interval attribute and no incoming check results in that period of time (last check + check interval). Passive check freshness is calculated from the check_interval attribute if set. There is no extra freshness_threshold attribute in Icinga 2. If the freshness checks are invalid, a new service check is forced.
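A sketch of a passive-only service where check_interval acts as the freshness threshold, assuming the passive check command from the ITL:
object Service "passive-events" {
import "generic-service"
host_name = "localhost1"
check_command = "passive"
enable_active_checks = false
check_interval = 1h // passive results older than this are considered stale
}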
Real Reload
In Nagios / Icinga 1.x a daemon reload happens like this:
- receive reload signal SIGHUP
- stop all events (checks, notifications, etc)
- read the configuration from disk and validate all config objects in a single threaded fashion
- validation NOT ok: stop the daemon (cannot restore old config state)
- validation ok: start with new objects, dump status.dat / ido
Unlike Icinga 1.x, the Icinga 2 daemon reload happens asynchronously:
- receive reload signal SIGHUP
- fork a child process, start configuration validation in parallel work queues
- parent process continues with old configuration objects and the event scheduling (doing checks, replicating cluster events, triggering alert notifications, etc.)
- validation NOT ok: child process terminates, parent process continues with old configuration state (this is ESSENTIAL for the cluster config synchronisation)
- validation ok: child process signals parent process to terminate and save its current state (all events until now) into the icinga2 state file
- parent process shuts down, writing the icinga2.state file
- child process waits for the parent process to exit, reads the icinga2 state file and synchronizes all historical and status data
- child becomes the new session leader
The DB IDO configuration dump and status/historical event updates also run asynchronously in a queue, not blocking the core anymore. The same goes for any other enabled feature running in its own thread. The configuration validation itself runs in parallel, allowing fast verification checks.
That way you are no longer blind during a configuration reload and benefit from a real scalable architecture.
State Retention
Icinga 1.x uses the retention.dat file to save its state in order to be able to reload it after a restart. In Icinga 2 this file is called icinga2.state.
The format the objects are stored in is not compatible with Icinga 1.x.
Logging
Icinga 1.x supports syslog facilities and writes its own icinga.log log file and archives. These logs are used in Icinga 1.x Classic UI to generate historical reports.
The Icinga 2 compat library provides the CompatLogger object which writes icinga.log and the archive in Icinga 1.x format in order to stay compatible with Classic UI and other addons.
The native Icinga 2 logging facilities are split into three configuration objects: SyslogLogger, FileLogger, StreamLogger. Each of them has their own severity and target configuration.
The Icinga 2 daemon log does not log any alerts but is considered an application log only.
Broker Modules and Features
Icinga 1.x broker modules are incompatible with Icinga 2.
In order to provide compatibility with Icinga 1.x the functionality of several popular broker modules was implemented for Icinga 2:
- IDOUtils
- Livestatus
- Cluster (allows for high availability and load balancing)
Distributed Monitoring
Icinga 1.x uses the native "obsess over host/service" method which requires the NSCA addon for passively passing the slave's check results to the master's external command pipe. While this method may be used for check load distribution, it does not provide any configuration distribution out-of-the-box. Furthermore, comments, downtimes and other stateful runtime data are not synced between the master and slave nodes. There are addons available solving the check and configuration distribution problems Icinga 1.x distributed monitoring currently suffers from.
Icinga 2 implements a new built-in distributed monitoring architecture, including config and check distribution, IPv4/IPv6 support, SSL certificates and zone support for DMZ. High Availability and load balancing are also part of the Icinga 2 Cluster setup.