mirror of https://github.com/Icinga/icinga2.git

Merge pull request #6998 from Icinga/bugfix/doc-code-formatting

Unify documentation code formatting

Commit b9b171b084

@@ -77,12 +77,14 @@ If you prefer to organize your own local object tree, you can also remove

Create a new configuration directory, e.g. `objects.d` and include it
in your icinga2.conf file.

```
[root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/objects.d

[root@icinga2-master1.localdomain /]# vim /etc/icinga2/icinga2.conf

/* Local object configuration on our master instance. */
include_recursive "objects.d"
```

This approach is used by the [Icinga 2 Puppet module](https://github.com/Icinga/puppet-icinga2).

@@ -97,74 +99,82 @@ An example configuration file is installed for you in `/etc/icinga2/icinga2.conf

Here's a brief description of the example configuration:

```
/**
 * Icinga 2 configuration file
 * -- this is where you define settings for the Icinga application including
 * which hosts/services to check.
 *
 * For an overview of all available configuration options please refer
 * to the documentation that is distributed as part of Icinga 2.
 */
```

Icinga 2 supports [C/C++-style comments](17-language-reference.md#comments).

```
/**
 * The constants.conf defines global constants.
 */
include "constants.conf"
```

The `include` directive can be used to include other files.

```
/**
 * The zones.conf defines zones for a cluster setup.
 * Not required for single instance setups.
 */
include "zones.conf"
```

The [Icinga Template Library](10-icinga-template-library.md#icinga-template-library) provides a set of common templates
and [CheckCommand](03-monitoring-basics.md#check-commands) definitions.

```
/**
 * The Icinga Template Library (ITL) provides a number of useful templates
 * and command definitions.
 * Common monitoring plugin command definitions are included separately.
 */
include <itl>
include <plugins>
include <plugins-contrib>
include <manubulon>

/**
 * This includes the Icinga 2 Windows plugins. These command definitions
 * are required on a master node when a client is used as command endpoint.
 */
include <windows-plugins>

/**
 * This includes the NSClient++ check commands. These command definitions
 * are required on a master node when a client is used as command endpoint.
 */
include <nscp>

/**
 * The features-available directory contains a number of configuration
 * files for features which can be enabled and disabled using the
 * icinga2 feature enable / icinga2 feature disable CLI commands.
 * These commands work by creating and removing symbolic links in
 * the features-enabled directory.
 */
include "features-enabled/*.conf"
```

This `include` directive takes care of including the configuration files for all
the features which have been enabled with `icinga2 feature enable`. See
[Enabling/Disabling Features](11-cli-commands.md#enable-features) for more details.
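
For example, a feature can be enabled on the CLI and activated with a restart (the `api` feature and the `systemctl` call are placeholders for your feature and service manager):

```
[root@icinga2-master1.localdomain /]# icinga2 feature enable api
[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```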

```
/**
 * Although in theory you could define all your objects in this file
 * the preferred way is to create separate directories and files in the conf.d
 * directory. Each of these files must have the file extension ".conf".
 */
include_recursive "conf.d"
```

You can put your own configuration files in the [conf.d](04-configuring-icinga-2.md#conf-d) directory. This
directive makes sure that all of your own configuration files are included.

@@ -184,24 +194,26 @@ cluster setup.

Example:

```
/* The directory which contains the plugins from the Monitoring Plugins project. */
const PluginDir = "/usr/lib64/nagios/plugins"

/* The directory which contains the Manubulon plugins.
 * Check the documentation, chapter "SNMP Manubulon Plugin Check Commands", for details.
 */
const ManubulonPluginDir = "/usr/lib64/nagios/plugins"

/* Our local instance name. By default this is the server's hostname as returned by `hostname --fqdn`.
 * This should be the common name from the API certificate.
 */
//const NodeName = "localhost"

/* Our local zone name. */
const ZoneName = NodeName

/* Secret key for remote node tickets */
const TicketSalt = ""
```
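
A resolved constant can also be inspected on the CLI, e.g. (a sketch; `PluginDir` is taken from the block above):

```
[root@icinga2-master1.localdomain /]# icinga2 variable get PluginDir
/usr/lib64/nagios/plugins
```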

The `ZoneName` and `TicketSalt` constants are required for remote client
and distributed setups only.

@@ -286,57 +298,59 @@ rules in [services.conf](04-configuring-icinga-2.md#services-conf) will automati

generate a new service checking the `/icingaweb2` URI using the `http`
check.

```
/*
 * Host definitions with object attributes
 * used for apply rules for Service, Notification,
 * Dependency and ScheduledDowntime objects.
 *
 * Tip: Use `icinga2 object list --type Host` to
 * list all host objects after running
 * configuration validation (`icinga2 daemon -C`).
 */

/*
 * This is an example host based on your
 * local host's FQDN. Specify the NodeName
 * constant in `constants.conf` or use your
 * own description, e.g. "db-host-1".
 */

object Host NodeName {
  /* Import the default host template defined in `templates.conf`. */
  import "generic-host"

  /* Specify the address attributes for checks e.g. `ssh` or `http`. */
  address = "127.0.0.1"
  address6 = "::1"

  /* Set custom attribute `os` for hostgroup assignment in `groups.conf`. */
  vars.os = "Linux"

  /* Define http vhost attributes for service apply rules in `services.conf`. */
  vars.http_vhosts["http"] = {
    http_uri = "/"
  }
  /* Uncomment if you've successfully installed Icinga Web 2. */
  //vars.http_vhosts["Icinga Web 2"] = {
  //  http_uri = "/icingaweb2"
  //}

  /* Define disks and attributes for service apply rules in `services.conf`. */
  vars.disks["disk"] = {
    /* No parameters. */
  }
  vars.disks["disk /"] = {
    disk_partitions = "/"
  }

  /* Define notification mail attributes for notification apply rules in `notifications.conf`. */
  vars.notification["mail"] = {
    /* The UserGroup `icingaadmins` is defined in `users.conf`. */
    groups = [ "icingaadmins" ]
  }
}
```
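
As the comment block above already suggests, you can validate the configuration and inspect the resulting host objects afterwards:

```
[root@icinga2-master1.localdomain /]# icinga2 daemon -C
[root@icinga2-master1.localdomain /]# icinga2 object list --type Host
```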

This is only the host object definition. Now we'll need to make sure that this
host and your additional hosts are getting [services](04-configuring-icinga-2.md#services-conf) applied.

@@ -376,16 +390,18 @@ which we enabled earlier by including the `itl` and `plugins` configuration file

Example `load` service apply rule:

```
apply Service "load" {
  import "generic-service"

  check_command = "load"

  /* Used by the ScheduledDowntime apply rule in `downtimes.conf`. */
  vars.backup_downtime = "02:00-03:00"

  assign where host.name == NodeName
}
```

The `apply` keyword can be used to create new objects which are associated with
another group of objects. You can `import` existing templates, define (custom)

@@ -403,13 +419,15 @@ may be used in `assign where` conditions.

Multiple `assign where` conditions can be combined with `AND` using the `&&` operator
as shown in the `ssh` example:

```
apply Service "ssh" {
  import "generic-service"

  check_command = "ssh"

  assign where host.address && host.vars.os == "Linux"
}
```

In this example, the service `ssh` is applied to all hosts having the `address`
attribute defined `AND` having the custom attribute `os` set to the string

@@ -429,16 +447,17 @@ The idea is simple: Your host in [hosts.conf](04-configuring-icinga-2.md#hosts-c

Remember the example from [hosts.conf](04-configuring-icinga-2.md#hosts-conf):

```
...
  /* Define disks and attributes for service apply rules in `services.conf`. */
  vars.disks["disk"] = {
    /* No parameters. */
  }
  vars.disks["disk /"] = {
    disk_partitions = "/"
  }
...
```

This dictionary contains multiple service names we want to monitor. `disk`
should just check all available disks, while `disk /` will pass an additional

@@ -466,13 +485,15 @@ generated service

Configuration example:

```
apply Service for (disk => config in host.vars.disks) {
  import "generic-service"

  check_command = "disk"

  vars += config
}
```

A similar example is used for the `http` services. That way you can make your
host the information provider for all apply rules. Define them once, and only

@@ -494,19 +515,20 @@ Defines the `icingaadmin` User and the `icingaadmins` UserGroup. The latter is u

[hosts.conf](04-configuring-icinga-2.md#hosts-conf) for defining a custom host attribute later used in
[notifications.conf](04-configuring-icinga-2.md#notifications-conf) for notification apply rules.

```
object User "icingaadmin" {
  import "generic-user"

  display_name = "Icinga 2 Admin"
  groups = [ "icingaadmins" ]

  email = "icinga@localhost"
}

object UserGroup "icingaadmins" {
  display_name = "Icinga 2 Admin Group"
}
```

#### notifications.conf <a id="notifications-conf"></a>

@@ -527,23 +549,25 @@ By setting the `user_groups` to the value provided by the

respective [host.vars.notification.mail](04-configuring-icinga-2.md#hosts-conf) attribute we'll
implicitly use the `icingaadmins` UserGroup defined in [users.conf](04-configuring-icinga-2.md#users-conf).

```
apply Notification "mail-icingaadmin" to Host {
  import "mail-host-notification"

  user_groups = host.vars.notification.mail.groups
  users = host.vars.notification.mail.users

  assign where host.vars.notification.mail
}

apply Notification "mail-icingaadmin" to Service {
  import "mail-service-notification"

  user_groups = host.vars.notification.mail.groups
  users = host.vars.notification.mail.users

  assign where host.vars.notification.mail
}
```

More details on defining notifications and their additional attributes such as
filters can be read in [this chapter](03-monitoring-basics.md#alert-notifications).

@@ -565,85 +589,91 @@ a member of the host group `linux-servers`.

This is done by using the [group assign](17-language-reference.md#group-assign) expressions similar
to previously seen [apply rules](03-monitoring-basics.md#using-apply).

```
object HostGroup "linux-servers" {
  display_name = "Linux Servers"

  assign where host.vars.os == "Linux"
}

object HostGroup "windows-servers" {
  display_name = "Windows Servers"

  assign where host.vars.os == "Windows"
}
```

Services can be grouped into service groups by similar pattern matches.
The [match function](18-library-reference.md#global-functions-match) expects a wildcard match string
and the attribute string to match with.

```
object ServiceGroup "ping" {
  display_name = "Ping Checks"

  assign where match("ping*", service.name)
}

object ServiceGroup "http" {
  display_name = "HTTP Checks"

  assign where match("http*", service.check_command)
}

object ServiceGroup "disk" {
  display_name = "Disk Checks"

  assign where match("disk*", service.check_command)
}
```

#### templates.conf <a id="templates-conf"></a>

Most of the example configuration objects use generic global templates by
default:

```
template Host "generic-host" {
  max_check_attempts = 5
  check_interval = 1m
  retry_interval = 30s

  check_command = "hostalive"
}

template Service "generic-service" {
  max_check_attempts = 3
  check_interval = 1m
  retry_interval = 30s
}
```

The `hostalive` check command is part of the
[Plugin Check Commands](10-icinga-template-library.md#icinga-template-library).

```
template Notification "mail-host-notification" {
  command = "mail-host-notification"

  states = [ Up, Down ]
  types = [ Problem, Acknowledgement, Recovery, Custom,
            FlappingStart, FlappingEnd,
            DowntimeStart, DowntimeEnd, DowntimeRemoved ]

  period = "24x7"
}

template Notification "mail-service-notification" {
  command = "mail-service-notification"

  states = [ OK, Warning, Critical, Unknown ]
  types = [ Problem, Acknowledgement, Recovery, Custom,
            FlappingStart, FlappingEnd,
            DowntimeStart, DowntimeEnd, DowntimeRemoved ]

  period = "24x7"
}
```

More details on `Notification` object attributes can be found [here](09-object-types.md#objecttype-notification).

@@ -658,23 +688,24 @@ for the time ranges required for recurring downtime slots.

Learn more about downtimes in [this chapter](08-advanced-topics.md#downtimes).

```
apply ScheduledDowntime "backup-downtime" to Service {
  author = "icingaadmin"
  comment = "Scheduled downtime for backup"

  ranges = {
    monday = service.vars.backup_downtime
    tuesday = service.vars.backup_downtime
    wednesday = service.vars.backup_downtime
    thursday = service.vars.backup_downtime
    friday = service.vars.backup_downtime
    saturday = service.vars.backup_downtime
    sunday = service.vars.backup_downtime
  }

  assign where service.vars.backup_downtime != ""
}
```

#### timeperiods.conf <a id="timeperiods-conf"></a>

@@ -15,24 +15,28 @@ The recommended way of setting up these plugins is to copy them to a common dire

and create a new global constant, e.g. `CustomPluginDir` in your [constants.conf](04-configuring-icinga-2.md#constants-conf)
configuration file:

```
# cp check_snmp_int.pl /opt/monitoring/plugins
# chmod +x /opt/monitoring/plugins/check_snmp_int.pl

# cat /etc/icinga2/constants.conf
/**
 * This file defines global constants which can be used in
 * the other configuration files. At a minimum the
 * PluginDir constant should be defined.
 */

const PluginDir = "/usr/lib/nagios/plugins"
const CustomPluginDir = "/opt/monitoring/plugins"
```

Prior to using the check plugin with Icinga 2 you should ensure that it is working properly
by trying to run it on the console using whichever user Icinga 2 is running as:

```
# su - icinga -s /bin/bash
$ /opt/monitoring/plugins/check_snmp_int.pl --help
```

Additional libraries may be required for some plugins. Please consult the plugin
documentation and/or the included README file for installation instructions.

@@ -64,30 +68,31 @@ set them on host/service level and you'll always know which command they control

This is an example for a custom `my-snmp-int` check command:

```
object CheckCommand "my-snmp-int" {
  command = [ CustomPluginDir + "/check_snmp_int.pl" ]

  arguments = {
    "-H" = "$snmp_address$"
    "-C" = "$snmp_community$"
    "-p" = "$snmp_port$"
    "-2" = {
      set_if = "$snmp_v2$"
    }
    "-n" = "$snmp_interface$"
    "-f" = {
      set_if = "$snmp_perf$"
    }
    "-w" = "$snmp_warn$"
    "-c" = "$snmp_crit$"
  }

  vars.snmp_v2 = true
  vars.snmp_perf = true
  vars.snmp_warn = "300,400"
  vars.snmp_crit = "0,600"
}
```
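
A service can then override these defaults by setting the corresponding custom attributes. A minimal sketch (host name and values are placeholders, not part of the original example):

```
object Service "my-snmp-int-eth0" {
  host_name = "snmp-host.example.com"

  check_command = "my-snmp-int"

  /* Anything not set here falls back to the vars defined in the CheckCommand above. */
  vars.snmp_address = "192.0.2.10"
  vars.snmp_community = "public"
  vars.snmp_interface = "eth0"
}
```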

For further information on your monitoring configuration read the
[Monitoring Basics](03-monitoring-basics.md#monitoring-basics) chapter.

@@ -127,28 +132,30 @@ Common best practices when creating a new plugin are for example:

Example skeleton:

```
# 1. include optional libraries
# 2. global variables
# 3. helper functions and/or classes
# 4. define timeout condition

if (<timeout_reached>) then
  print "UNKNOWN - Timeout (...) reached | 'time'=30.0
endif

# 5. main method

<execute and fetch data>

if (<threshold_critical_condition>) then
  print "CRITICAL - ... | 'time'=0.1 'myperfdatavalue'=5.0
  exit(2)
else if (<threshold_warning_condition>) then
  print "WARNING - ... | 'time'=0.1 'myperfdatavalue'=3.0
  exit(1)
else
  print "OK - ... | 'time'=0.2 'myperfdatavalue'=1.0
endif
```
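
A minimal runnable shell variant of this skeleton could look like the following sketch (the checked value and thresholds are arbitrary and omit the timeout handling):

```
#!/bin/sh
# Fetch the data to check -- here simply the current number of logged-in users.
users=$(who | wc -l)

if [ "$users" -gt 20 ]; then
  echo "CRITICAL - $users users logged in | 'users'=$users"
  exit 2
elif [ "$users" -gt 10 ]; then
  echo "WARNING - $users users logged in | 'users'=$users"
  exit 1
else
  echo "OK - $users users logged in | 'users'=$users"
  exit 0
fi
```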

There are various plugin libraries available which will help
with plugin execution and output formatting too, for example

File diff suppressed because it is too large.

@@ -15,15 +15,17 @@ The following example uses the [SNMP ITL](10-icinga-template-library.md#plugin-c

overrides the `snmp_oid` custom attribute. A service is created for all hosts which
have the `snmp_community` custom attribute.

```
apply Service "uptime" {
  import "generic-service"

  check_command = "snmp"
  vars.snmp_oid = "1.3.6.1.2.1.1.3.0"
  vars.snmp_miblist = "DISMAN-EVENT-MIB"

  assign where host.vars.snmp_community != ""
}
```
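
For this apply rule to match, a host only needs the `snmp_community` custom attribute set, for example (host name and address are placeholders):

```
object Host "snmp-host.example.com" {
  import "generic-host"

  address = "192.0.2.20"
  vars.snmp_community = "public"
}
```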

Additional SNMP plugins are available using the [Manubulon SNMP Plugins](10-icinga-template-library.md#snmp-manubulon-plugin-check-commands).

@@ -37,23 +39,25 @@ Calling a plugin using the SSH protocol to execute a plugin on the remote server

its return code and output. The `by_ssh` command object is part of the built-in templates and
requires the `check_by_ssh` check plugin which is available in the [Monitoring Plugins package](02-getting-started.md#setting-up-check-plugins).

```
object CheckCommand "by_ssh_swap" {
  import "by_ssh"

  vars.by_ssh_command = "/usr/lib/nagios/plugins/check_swap -w $by_ssh_swap_warn$ -c $by_ssh_swap_crit$"
  vars.by_ssh_swap_warn = "75%"
  vars.by_ssh_swap_crit = "50%"
}

object Service "swap" {
  import "generic-service"

  host_name = "remote-ssh-host"

  check_command = "by_ssh_swap"

  vars.by_ssh_logname = "icinga"
}
```

## NSClient++ <a id="agent-based-checks-nsclient"></a>

@@ -67,18 +71,20 @@ Icinga 2 provides the [nscp check command](10-icinga-template-library.md#plugin-

Example:

```
object Service "disk" {
  import "generic-service"

  host_name = "remote-windows-host"

  check_command = "nscp"

  vars.nscp_variable = "USEDDISKSPACE"
  vars.nscp_params = "c"
  vars.nscp_warn = 70
  vars.nscp_crit = 80
}
```

For details on the `NSClient++` configuration please refer to the [official documentation](https://docs.nsclient.org/).

@@ -116,18 +122,22 @@ Icinga 2 provides the [nrpe check command](10-icinga-template-library.md#plugin-

Example:

```
object Service "users" {
  import "generic-service"

  host_name = "remote-nrpe-host"

  check_command = "nrpe"
  vars.nrpe_command = "check_users"
}
```

nrpe.cfg:

```
command[check_users]=/usr/local/icinga/libexec/check_users -w 5 -c 10
```

If you are planning to pass arguments to NRPE using the `-a`
command line parameter, make sure that your NRPE daemon has them

@@ -144,19 +154,23 @@ attribute which expects either a single value or an array of values.

Example:

```
object Service "nrpe-disk-/" {
  import "generic-service"

  host_name = "remote-nrpe-host"

  check_command = "nrpe"
  vars.nrpe_command = "check_disk"
  vars.nrpe_arguments = [ "20%", "10%", "/" ]
}
```

Icinga 2 will execute the nrpe plugin like this:

```
/usr/lib/nagios/plugins/check_nrpe -H <remote-nrpe-host> -c 'check_disk' -a '20%' '10%' '/'
```

NRPE expects all additional arguments in an ordered fashion
and interprets the first value as `$ARG1$` macro, the second

@@ -164,12 +178,16 @@ value as `$ARG2$`, and so on.

nrpe.cfg:

```
command[check_disk]=/usr/local/icinga/libexec/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$
```

Using the above example with `nrpe_arguments` the command
executed by the NRPE daemon looks similar to this:

```
/usr/local/icinga/libexec/check_disk -w 20% -c 10% -p /
```

You can pass arguments in a similar manner to [NSClient++](07-agent-based-monitoring.md#agent-based-checks-nsclient)
when using its NRPE supported check method.

@@ -193,14 +211,16 @@ state or from a missed reset event.

Add a directive in `snmptt.conf`:

```
EVENT coldStart .1.3.6.1.6.3.1.1.5.1 "Status Events" Normal
FORMAT Device reinitialized (coldStart)
EXEC echo "[$@] PROCESS_SERVICE_CHECK_RESULT;$A;Coldstart;2;The snmp agent has reinitialized." >> /var/run/icinga2/cmd/icinga2.cmd
SDESC
A coldStart trap signifies that the SNMPv2 entity, acting
in an agent role, is reinitializing itself and that its
configuration may have been altered.
EDESC
```

1. Define the `EVENT` as per your need.
2. Construct the `EXEC` statement with the service name matching your template

@@ -210,105 +230,111 @@ match your Icinga convention.

Add an `EventCommand` configuration object for the passive service auto reset event.

```
object EventCommand "coldstart-reset-event" {
  command = [ ConfigDir + "/conf.d/custom/scripts/coldstart_reset_event.sh" ]

  arguments = {
    "-i" = "$service.state_id$"
    "-n" = "$host.name$"
    "-s" = "$service.name$"
  }
}
```

Create the `coldstart_reset_event.sh` shell script to pass the expanded variable
data in. The `$service.state_id$` is important in order to prevent an endless loop
of event firing after the service has been reset.

```
#!/bin/bash

SERVICE_STATE_ID=""
HOST_NAME=""
SERVICE_NAME=""

show_help()
{
cat <<-EOF
Usage: ${0##*/} [-h] -n HOST_NAME -s SERVICE_NAME
Writes a coldstart reset event to the Icinga command pipe.

  -h                  Display this help and exit.
  -i SERVICE_STATE_ID The associated service state id.
  -n HOST_NAME        The associated host name.
  -s SERVICE_NAME     The associated service name.
EOF
}

while getopts "hi:n:s:" opt; do
  case "$opt" in
    h)
      show_help
      exit 0
      ;;
    i)
      SERVICE_STATE_ID=$OPTARG
      ;;
    n)
      HOST_NAME=$OPTARG
      ;;
    s)
      SERVICE_NAME=$OPTARG
      ;;
    '?')
      show_help
      exit 0
      ;;
  esac
done

if [ -z "$SERVICE_STATE_ID" ]; then
  show_help
  printf "\n Error: -i required.\n"
  exit 1
fi

if [ -z "$HOST_NAME" ]; then
  show_help
  printf "\n Error: -n required.\n"
  exit 1
fi

if [ -z "$SERVICE_NAME" ]; then
  show_help
  printf "\n Error: -s required.\n"
  exit 1
fi

if [ "$SERVICE_STATE_ID" -gt 0 ]; then
  echo "[`date +%s`] PROCESS_SERVICE_CHECK_RESULT;$HOST_NAME;$SERVICE_NAME;0;Auto-reset (`date +"%m-%d-%Y %T"`)." >> /var/run/icinga2/cmd/icinga2.cmd
fi
```

Finally create the `Service` and assign it:

```
apply Service "Coldstart" {
  import "generic-service-custom"

  check_command = "dummy"
  event_command = "coldstart-reset-event"

  enable_notifications = 1
  enable_active_checks = 0
  enable_passive_checks = 1
  enable_flapping = 0
  volatile = 1
  enable_perfdata = 0

  vars.dummy_state = 0
  vars.dummy_text = "Manual reset."

  vars.sla = "24x7"

  assign where (host.vars.os == "Linux" || host.vars.os == "Windows")
}
```

### Complex SNMP Traps <a id="complex-traps"></a>

@@ -321,13 +347,15 @@ As long as the most recent passive update has occurred, the active check is bypa

Add a directive in `snmptt.conf`:

```
EVENT enterpriseSpecific <YOUR OID> "Status Events" Normal
FORMAT Enterprise specific trap
EXEC echo "[$@] PROCESS_SERVICE_CHECK_RESULT;$A;$1;$2;$3" >> /var/run/icinga2/cmd/icinga2.cmd
SDESC
An enterprise specific trap.
The varbinds in order denote the Icinga service name, state and text.
EDESC
```

1. Define the `EVENT` as per your need using your actual OID.
2. The service name, state and text are extracted from the first three varbinds.

@@ -337,22 +365,24 @@ Create a `Service` for the specific use case associated to the host. If the host

matches and the first varbind value is `Backup`, SNMPTT will submit the corresponding
passive update with the state and text from the second and third varbind:

```
object Service "Backup" {
  import "generic-service-custom"

  host_name = "host.domain.com"
  check_command = "dummy"

  enable_notifications = 1
  enable_active_checks = 1
  enable_passive_checks = 1
  enable_flapping = 0
  volatile = 1
  max_check_attempts = 1
  check_interval = 87000
  enable_perfdata = 0

  vars.sla = "24x7"
  vars.dummy_state = 2
  vars.dummy_text = "No passive check result received."
}
```

@@ -387,12 +387,16 @@ In Icinga 2 active check freshness is enabled by default. It is determined by th

The threshold is calculated based on the last check execution time for actively executed checks:

```
(last check execution time + check interval) > current time
```

If this host/service receives check results from an [external source](08-advanced-topics.md#external-check-results),
the threshold is based on the last time a check result was received:

```
(last check result time + check interval) > current time
```
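
As a concrete illustration (times are made up):

```
last check result time = 10:00:00
check interval         = 5m

10:00:00 + 5m = 10:05:00 > 10:03:00 (current time)  => result is still fresh
```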

> **Tip**
>

@@ -579,65 +583,69 @@ In addition to that you can optionally define the `ssl` attribute which enables

Host definition:

```
object Host "webserver01" {
  import "generic-host"
  address = "192.168.56.200"
  vars.os = "Linux"

  vars.webserver = {
    instance["status"] = {
      address = "192.168.56.201"
      port = "80"
      url = "/status"
    }
    instance["tomcat"] = {
      address = "192.168.56.202"
      port = "8080"
    }
    instance["icingaweb2"] = {
      address = "192.168.56.210"
      port = "443"
      url = "/icingaweb2"
      ssl = true
    }
  }
}
```

Service `apply for` definitions:

```
apply Service "webserver_ping" for (instance => config in host.vars.webserver.instance) {
  display_name = "webserver_" + instance
  check_command = "ping4"

  vars.ping_address = config.address

  assign where host.vars.webserver.instance
}

apply Service "webserver_port" for (instance => config in host.vars.webserver.instance) {
  display_name = "webserver_" + instance + "_" + config.port
  check_command = "tcp"

  vars.tcp_address = config.address
  vars.tcp_port = config.port

  assign where host.vars.webserver.instance
}

apply Service "webserver_url" for (instance => config in host.vars.webserver.instance) {
  display_name = "webserver_" + instance + "_" + config.url
  check_command = "http"

  vars.http_address = config.address
  vars.http_port = config.port
  vars.http_uri = config.url

  if (config.ssl) {
    vars.http_ssl = config.ssl
  }

  assign where config.url != ""
}
```

The variables defined in the host dictionary are not using the typical custom attribute
prefix recommended for CheckCommand parameters. Instead they are re-used for multiple

@@ -756,25 +764,27 @@ slightly unexpected way. The following example shows how to assign values

depending on group membership. All hosts in the `slow-lan` host group use 300
as the value for `ping_wrta`, while all other hosts use 100.

```
globals.group_specific_value = function(group, group_value, non_group_value) {
  return function() use (group, group_value, non_group_value) {
    if (group in host.groups) {
      return group_value
    } else {
      return non_group_value
    }
  }
}

apply Service "ping4" {
  import "generic-service"
  check_command = "ping4"

  vars.ping_wrta = group_specific_value("slow-lan", 300, 100)
  vars.ping_crta = group_specific_value("slow-lan", 500, 200)

  assign where true
}
```

#### Use Functions in Assign Where Expressions <a id="use-functions-assign-where"></a>

@@ -790,36 +800,37 @@ The following example requires the host `myprinter` being added

to the host group `printers-lexmark` but only if the host uses
a template matching the name `lexmark*`.

```
template Host "lexmark-printer-host" {
  vars.printer_type = "Lexmark"
}

object Host "myprinter" {
  import "generic-host"
  import "lexmark-printer-host"

  address = "192.168.1.1"
}

/* register a global function for the assign where call */
globals.check_host_templates = function(host, search) {
  /* iterate over all host templates and check if the search matches */
  for (tmpl in host.templates) {
    if (match(search, tmpl)) {
      return true
    }
  }

  /* nothing matched */
  return false
}

object HostGroup "printers-lexmark" {
  display_name = "Lexmark Printers"
  /* call the global function and pass the arguments */
  assign where check_host_templates(host, "lexmark*")
}
```

Take a different, more complex example: All hosts with the
custom attribute `vars_app` as nested dictionary should be

@@ -828,43 +839,46 @@ added to the host group `ABAP-app-server`. But only if the

It could read as wildcard match for nested dictionaries:

```
where host.vars.vars_app["*"].app_type == "ABAP"
```

The solution for this problem is to register a global
function which checks the `app_type` for all hosts
with the `vars_app` dictionary.

```
object Host "appserver01" {
  check_command = "dummy"
  vars.vars_app["ABC"] = { app_type = "ABAP" }
}
object Host "appserver02" {
  check_command = "dummy"
  vars.vars_app["DEF"] = { app_type = "ABAP" }
}

globals.check_app_type = function(host, type) {
  /* ensure that other hosts without the custom attribute do not match */
  if (typeof(host.vars.vars_app) != Dictionary) {
    return false
  }

  /* iterate over the vars_app dictionary */
  for (key => val in host.vars.vars_app) {
    /* if the value is a dictionary and contains an app_type matching the requested type */
    if (typeof(val) == Dictionary && val.app_type == type) {
      return true
    }
  }

  /* nothing matched */
  return false
}

object HostGroup "ABAP-app-server" {
  assign where check_app_type(host, "ABAP")
}
```

#### Use Functions in Command Arguments set_if <a id="use-functions-command-arguments-setif"></a>

@@ -879,13 +893,15 @@ multiple conditions and attributes.

The following example was found on the community support channels. The user had defined a host
dictionary named `compellent` with the key `disks`. This was then used inside service `apply for` rules.

```
object Host "dict-host" {
  check_command = "check_compellent"
  vars.compellent["disks"] = {
    file = "/var/lib/check_compellent/san_disks.0.json",
    checks = ["disks"]
  }
}
```

The more significant problem was to only add the command parameter `--disks` to the plugin call
when the dictionary `compellent` contains the key `disks`, and omit it if not found.

@@ -894,20 +910,22 @@ By defining `set_if` as [abbreviated lambda function](17-language-reference.md#n

and evaluating whether the host custom attribute `compellent` contains the `disks` key, this problem
was solved like this:

```
object CheckCommand "check_compellent" {
  command = [ "/usr/bin/check_compellent" ]
  arguments = {
    "--disks" = {
      set_if = {{
        var host_vars = host.vars
        log(host_vars)
        var compel = host_vars.compellent
        log(compel)
        compel.contains("disks")
      }}
    }
  }
}
```

This implementation uses the dictionary type method [contains](18-library-reference.md#dictionary-contains)
and will fail if `host.vars.compellent` is not of the type `Dictionary`.

@ -915,35 +933,38 @@ Therefore you can extend the checks using the [typeof](17-language-reference.md#
|
|||
|
||||
You can test the types using the `icinga2 console`:
|
||||
|
||||
# icinga2 console
|
||||
Icinga (version: v2.3.0-193-g3eb55ad)
|
||||
<1> => srv_vars.compellent["check_a"] = { file="outfile_a.json", checks = [ "disks", "fans" ] }
|
||||
null
|
||||
<2> => srv_vars.compellent["check_b"] = { file="outfile_b.json", checks = [ "power", "voltages" ] }
|
||||
null
|
||||
<3> => typeof(srv_vars.compellent)
|
||||
type 'Dictionary'
|
||||
<4> =>
|
||||
```
|
||||
# icinga2 console
|
||||
Icinga (version: v2.3.0-193-g3eb55ad)
|
||||
<1> => srv_vars.compellent["check_a"] = { file="outfile_a.json", checks = [ "disks", "fans" ] }
|
||||
null
|
||||
<2> => srv_vars.compellent["check_b"] = { file="outfile_b.json", checks = [ "power", "voltages" ] }
|
||||
null
|
||||
<3> => typeof(srv_vars.compellent)
|
||||
type 'Dictionary'
|
||||
<4> =>
|
||||
```
|
||||
|
||||
The more programmatic approach for `set_if` could look like this:

```
    "--disks" = {
      set_if = {{
        var srv_vars = service.vars
        if (len(srv_vars) > 0) {
          if (typeof(srv_vars.compellent) == Dictionary) {
            return srv_vars.compellent.contains("disks")
          } else {
            log(LogInformation, "checkcommand set_if", "custom attribute compellent_checks is not a dictionary, ignoring it.")
            return false
          }
        } else {
          log(LogWarning, "checkcommand set_if", "empty custom attributes")
          return false
        }
      }}
    }
```

#### Use Functions as Command Attribute <a id="use-functions-command-attribute"></a>

@@ -955,20 +976,22 @@ The following example was taken from the community support channels. The require

specify a custom attribute inside the notification apply rule and decide which notification
script to call based on that.

```
object User "short-dummy" {
}

object UserGroup "short-dummy-group" {
  assign where user.name == "short-dummy"
}

apply Notification "mail-admins-short" to Host {
  import "mail-host-notification"
  command = "mail-host-notification-test"
  user_groups = [ "short-dummy-group" ]
  vars.short = true
  assign where host.vars.notification.mail
}
```

The solution is fairly simple: The `command` attribute is implemented as a function returning
an array required by the caller Icinga 2.

@@ -980,25 +1003,26 @@ returned.

You can omit the `log()` calls, they only help debugging.

```
object NotificationCommand "mail-host-notification-test" {
  command = {{
    log("command as function")
    var mailscript = "mail-host-notification-long.sh"
    if (notification.vars.short) {
      mailscript = "mail-host-notification-short.sh"
    }
    log("Running command")
    log(mailscript)

    var cmd = [ ConfigDir + "/scripts/" + mailscript ]
    log(LogCritical, "me", cmd)
    return cmd
  }}

  env = {
  }
}
```

### Access Object Attributes at Runtime <a id="access-object-attributes-at-runtime"></a>

@@ -396,12 +396,14 @@ Configuration Attributes:

Available state filters:

```
OK
Warning
Critical
Unknown
Up
Down
```

When using [apply rules](03-monitoring-basics.md#using-apply) for dependencies, you can leave out certain attributes which will be
automatically determined by Icinga 2.
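
For illustration only (not part of the diff above), such an apply rule might look like the
following minimal sketch; the host name and custom attribute are made up:

```
apply Dependency "internet-uplink" to Host {
  parent_host_name = "dsl-router"

  // child_host_name is determined automatically from the host
  // this rule is applied to
  assign where host.vars.behind_dsl_router == true
}
```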

@@ -1154,27 +1156,33 @@ Configuration Attributes:

Available notification state filters for Service:

```
OK
Warning
Critical
Unknown
```

Available notification state filters for Host:

```
Up
Down
```

Available notification type filters:

```
DowntimeStart
DowntimeEnd
DowntimeRemoved
Custom
Acknowledgement
Problem
Recovery
FlappingStart
FlappingEnd
```
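
For illustration (not part of the reference above), these state and type filters are typically
set on a notification object like the following sketch; the object, command and user names are
made up:

```
apply Notification "mail-service-problems" to Service {
  command = "mail-service-notification"
  users = [ "icingaadmin" ]

  // only notify for these states and notification types
  states = [ Warning, Critical, Unknown ]
  types  = [ Problem, Acknowledgement, Recovery ]

  assign where host.vars.notification.mail
}
```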

Runtime Attributes:

@@ -1687,24 +1695,28 @@ object User "icingaadmin" {

Available notification state filters:

```
OK
Warning
Critical
Unknown
Up
Down
```

Available notification type filters:

```
DowntimeStart
DowntimeEnd
DowntimeRemoved
Custom
Acknowledgement
Problem
Recovery
FlappingStart
FlappingEnd
```

Configuration Attributes:

@@ -25,7 +25,9 @@ You are advised to create your own CheckCommand definitions in

By default the generic templates are included in the [icinga2.conf](04-configuring-icinga-2.md#icinga2-conf) configuration file:

```
include <itl>
```

These templates are imported by the provided example configuration.

@@ -19,7 +19,9 @@ You need to install Graphite first, then proceed with configuring it in Icinga 2

Use the [GraphiteWriter](14-features.md#graphite-carbon-cache-writer) feature
for sending real-time metrics from Icinga 2 to Graphite.

```
# icinga2 feature enable graphite
```

A popular alternative frontend for Graphite is for example [Grafana](https://grafana.org).

@@ -36,7 +38,9 @@ It’s written in Go and has no external dependencies.

Use the [InfluxdbWriter](14-features.md#influxdb-writer) feature
for sending real-time metrics from Icinga 2 to InfluxDB.

```
# icinga2 feature enable influxdb
```

A popular frontend for InfluxDB is for example [Grafana](https://grafana.org).

@@ -61,11 +65,15 @@ data files which Icinga 2 generates.

Enable the performance data writer in Icinga 2:

```
# icinga2 feature enable perfdata
```

Configure npcd to use the performance data created by Icinga 2:

```
vim /etc/pnp4nagios/npcd.cfg
```

Set `perfdata_spool_dir = /var/spool/icinga2/perfdata` and restart the `npcd` daemon.
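
For example, the relevant change might look like this sketch; the exact `npcd.cfg` layout and
the service name used for the restart depend on your PNP4Nagios installation:

```
# vim /etc/pnp4nagios/npcd.cfg

perfdata_spool_dir = /var/spool/icinga2/perfdata

# systemctl restart npcd
```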

@@ -120,9 +128,11 @@ based on your monitoring configuration and status data using [NagVis](https://ww

The configuration in nagvis.ini.php should look like this for Livestatus for example:

```
[backend_live_1]
backendtype="mklivestatus"
socket="unix:/var/run/icinga2/cmd/livestatus"
```

If you are planning an integration into Icinga Web 2, look at [this module](https://github.com/Icinga/icingaweb2-module-nagvis).

@@ -190,13 +200,15 @@ These tools are currently in development and require feedback and tests:

They work in a similar fashion for Icinga 2 and are used for 1.x web interfaces (Icinga Web 2 doesn't require
the action url attribute in its own module).

```
template Host "pnp-hst" {
  action_url = "/pnp4nagios/graph?host=$HOSTNAME$"
}

template Service "pnp-svc" {
  action_url = "/pnp4nagios/graph?host=$HOSTNAME$&srv=$SERVICEDESC$"
}
```

### PNP Custom Templates with Icinga 2 <a id="addons-graphing-pnp-custom-templates"></a>

@@ -213,24 +225,26 @@ and use that inside the formatting templates as `SERVICECHECKCOMMAND` for instan

Example for services:

```
# vim /etc/icinga2/features-enabled/perfdata.conf

service_format_template = "DATATYPE::SERVICEPERFDATA\tTIMET::$icinga.timet$\tHOSTNAME::$host.name$\tSERVICEDESC::$service.name$\tSERVICEPERFDATA::$service.perfdata$\tSERVICECHECKCOMMAND::$service.check_command$$pnp_check_arg1$\tHOSTSTATE::$host.state$\tHOSTSTATETYPE::$host.state_type$\tSERVICESTATE::$service.state$\tSERVICESTATETYPE::$service.state_type$"

# vim /etc/icinga2/conf.d/services.conf

template Service "pnp-svc" {
  action_url = "/pnp4nagios/graph?host=$HOSTNAME$&srv=$SERVICEDESC$"
  vars.pnp_check_arg1 = ""
}

apply Service "nrpe-check" {
  import "pnp-svc"
  check_command = "nrpe"
  vars.nrpe_command = "check_disk"

  vars.pnp_check_arg1 = "!$nrpe_command$"
}
```

If there are warnings about unresolved macros, make sure to specify a default value for `vars.pnp_check_arg1` inside the

@@ -172,7 +172,9 @@ through the web interface).

In order to enable the `ExternalCommandListener` configuration use the
following command and restart Icinga 2 afterwards:

```
# icinga2 feature enable command
```

Icinga 2 creates the command pipe file as `/var/run/icinga2/cmd/icinga2.cmd`
using the default configuration.

@@ -181,12 +183,14 @@ Web interfaces and other Icinga addons are able to send commands to

Icinga 2 through the external command pipe, for example for rescheduling
a forced service check:

```
# /bin/echo "[`date +%s`] SCHEDULE_FORCED_SVC_CHECK;localhost;ping4;`date +%s`" >> /var/run/icinga2/cmd/icinga2.cmd

# tail -f /var/log/messages

Oct 17 15:01:25 icinga-server icinga2: Executing external command: [1382014885] SCHEDULE_FORCED_SVC_CHECK;localhost;ping4;1382014885
Oct 17 15:01:25 icinga-server icinga2: Rescheduling next check for service 'ping4'
```

A list of currently supported external commands can be found [here](24-appendix.md#external-commands-list-detail).

@@ -216,13 +220,17 @@ Therefore the Icinga 2 [PerfdataWriter](09-object-types.md#objecttype-perfdatawr

feature allows you to define the output template format for hosts and services
with the help of Icinga 2 runtime macros.

```
host_format_template = "DATATYPE::HOSTPERFDATA\tTIMET::$icinga.timet$\tHOSTNAME::$host.name$\tHOSTPERFDATA::$host.perfdata$\tHOSTCHECKCOMMAND::$host.check_command$\tHOSTSTATE::$host.state$\tHOSTSTATETYPE::$host.state_type$"
service_format_template = "DATATYPE::SERVICEPERFDATA\tTIMET::$icinga.timet$\tHOSTNAME::$host.name$\tSERVICEDESC::$service.name$\tSERVICEPERFDATA::$service.perfdata$\tSERVICECHECKCOMMAND::$service.check_command$\tHOSTSTATE::$host.state$\tHOSTSTATETYPE::$host.state_type$\tSERVICESTATE::$service.state$\tSERVICESTATETYPE::$service.state_type$"
```

The default templates are already provided with the Icinga 2 feature configuration
which can be enabled using

```
# icinga2 feature enable perfdata
```

By default all performance data files are rotated in a 15 second interval into
the `/var/spool/icinga2/perfdata/` directory as `host-perfdata.<timestamp>` and

@@ -240,7 +248,9 @@ write them to the defined Graphite Carbon daemon tcp socket.

You can enable the feature using

```
# icinga2 feature enable graphite
```

By default the [GraphiteWriter](09-object-types.md#objecttype-graphitewriter) feature
expects the Graphite Carbon Cache to listen at `127.0.0.1` on TCP port `2003`.

@@ -253,8 +263,10 @@ depends on this schema.

The default prefix for hosts and services is configured using
[runtime macros](03-monitoring-basics.md#runtime-macros) like this:

```
icinga2.$host.name$.host.$host.check_command$
icinga2.$host.name$.services.$service.name$.$service.check_command$
```

You can customize the prefix name by using the `host_name_template` and
`service_name_template` configuration attributes.
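
A customized writer object could then look like the following sketch; the values shown simply
repeat the documented defaults and would be adjusted to your own naming schema:

```
object GraphiteWriter "graphite" {
  host = "127.0.0.1"
  port = 2003

  // templates for the metric prefix, defaults shown
  host_name_template = "icinga2.$host.name$.host.$host.check_command$"
  service_name_template = "icinga2.$host.name$.services.$service.name$.$service.check_command$"
}
```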

@@ -274,7 +286,9 @@ The following characters are escaped in prefix labels:

Metric values are stored like this:

```
<prefix>.perfdata.<perfdata-label>.value
```

The following characters are escaped in perfdata labels:

@@ -292,22 +306,26 @@ and is therefore replaced by `.`.

By enabling `enable_send_thresholds` Icinga 2 automatically adds the following threshold metrics:

```
<prefix>.perfdata.<perfdata-label>.min
<prefix>.perfdata.<perfdata-label>.max
<prefix>.perfdata.<perfdata-label>.warn
<prefix>.perfdata.<perfdata-label>.crit
```

By enabling `enable_send_metadata` Icinga 2 automatically adds the following metadata metrics:

```
<prefix>.metadata.current_attempt
<prefix>.metadata.downtime_depth
<prefix>.metadata.acknowledgement
<prefix>.metadata.execution_time
<prefix>.metadata.latency
<prefix>.metadata.max_check_attempts
<prefix>.metadata.reachable
<prefix>.metadata.state
<prefix>.metadata.state_type
```

Metadata metric overview:

@@ -326,10 +344,12 @@ Metadata metric overview:

The following example illustrates how to configure the storage schemas for Graphite Carbon
Cache.

```
[icinga2_default]
# intervals like PNP4Nagios uses them per default
pattern = ^icinga2\.
retentions = 1m:2d,5m:10d,30m:90d,360m:4y
```

### InfluxDB Writer <a id="influxdb-writer"></a>

@@ -339,7 +359,9 @@ defined InfluxDB HTTP API.

You can enable the feature using

```
# icinga2 feature enable influxdb
```

By default the [InfluxdbWriter](09-object-types.md#objecttype-influxdbwriter) feature
expects the InfluxDB daemon to listen at `127.0.0.1` on port `8086`.
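
If your InfluxDB daemon runs elsewhere, the feature configuration can be adjusted accordingly;
a minimal sketch (the host and database names are placeholders):

```
object InfluxdbWriter "influxdb" {
  host = "influxdb.example.com" // placeholder host
  port = 8086
  database = "icinga2"          // placeholder database name
}
```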

@@ -456,7 +478,9 @@ attribute.

Metric values are stored like this:

```
check_result.perfdata.<perfdata-label>.value
```

The following characters are escaped in perfdata labels:

@@ -475,10 +499,12 @@ and is therefore replaced by `.`.

Icinga 2 automatically adds the following threshold metrics
if existing:

```
check_result.perfdata.<perfdata-label>.min
check_result.perfdata.<perfdata-label>.max
check_result.perfdata.<perfdata-label>.warn
check_result.perfdata.<perfdata-label>.crit
```

### Graylog Integration <a id="graylog-integration"></a>

@@ -494,7 +520,9 @@ While it has been specified by the [Graylog](https://www.graylog.org) project as

You can enable the feature using

```
# icinga2 feature enable gelf
```

By default the `GelfWriter` object expects the GELF receiver to listen at `127.0.0.1` on TCP port `12201`.
The default `source` attribute is set to `icinga2`. You can customize that for your needs if required.
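
A customized configuration might look like this sketch (the receiver host name is a placeholder):

```
object GelfWriter "gelf" {
  host = "graylog.example.com" // placeholder GELF receiver
  port = 12201
  source = "icinga2-master1"   // custom source name
}
```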

@@ -514,27 +542,35 @@ write them to the defined TSDB TCP socket.

You can enable the feature using

```
# icinga2 feature enable opentsdb
```

By default the `OpenTsdbWriter` object expects the TSD to listen at
`127.0.0.1` on port `4242`.

The current naming schema is

```
icinga.host.<metricname>
icinga.service.<servicename>.<metricname>
```

for host and service checks. The tag `host` is always applied.

To make sure Icinga 2 writes a valid metric into OpenTSDB some characters are replaced
with `_` in the target name:

```
\ (and space)
```

The resulting name in OpenTSDB might look like:

```
www-01 / http-cert / response time
icinga.http_cert.response_time
```

In addition to the performance data retrieved from the check plugin, Icinga 2 sends
internal check statistic data to OpenTSDB:

@@ -554,7 +590,9 @@ internal check statistic data to OpenTSDB:

While reachable, state and state_type are metrics for the host or service, the
other metrics follow the current naming schema

```
icinga.check.<metricname>
```

with the following tags

@@ -592,18 +630,24 @@ in the [Livestatus Schema](24-appendix.md#schema-livestatus) section.

You can enable Livestatus using `icinga2 feature enable`:

```
# icinga2 feature enable livestatus
```

After that you will have to restart Icinga 2:

```
# systemctl restart icinga2
```

By default the Livestatus socket is available in `/var/run/icinga2/cmd/livestatus`.

In order for queries and commands to work you will need to add your query user
(e.g. your web server) to the `icingacmd` group:

```
# usermod -a -G icingacmd www-data
```

The Debian packages use `nagios` as the user and group name. Make sure to change `icingacmd` to
`nagios` if you're using Debian.
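
On Debian the equivalent command would then be, for example:

```
# usermod -a -G nagios www-data
```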

@@ -615,8 +659,9 @@ In order to use the historical tables provided by the livestatus feature (for ex

are expected to be in `/var/log/icinga2/compat`. A different path can be set using the
`compat_log_path` configuration attribute.

```
# icinga2 feature enable compatlog
```

### Livestatus Sockets <a id="livestatus-sockets"></a>

@@ -642,26 +687,28 @@ programmatically: [Monitoring::Livestatus](http://search.cpan.org/~nierlein/Moni

Example using the unix socket:

```
# echo -e "GET services\n" | /usr/bin/nc -U /var/run/icinga2/cmd/livestatus
```

Example using the tcp socket listening on port `6558`:

```
# echo -e 'GET services\n' | netcat 127.0.0.1 6558

# cat servicegroups <<EOF
GET servicegroups

EOF

(cat servicegroups; sleep 1) | netcat 127.0.0.1 6558
```

### Livestatus COMMAND Queries <a id="livestatus-command-queries"></a>

A list of available external commands and their parameters can be found [here](24-appendix.md#external-commands-list-detail).

```
$ echo -e 'COMMAND <externalcommandstring>' | netcat 127.0.0.1 6558
```

### Livestatus Filters <a id="livestatus-filters"></a>

@@ -696,20 +743,22 @@ Schema: "Stats: aggregatefunction aggregateattribute"

Example:

```
GET hosts
Filter: has_been_checked = 1
Filter: check_type = 0
Stats: sum execution_time
Stats: sum latency
Stats: sum percent_state_change
Stats: min execution_time
Stats: min latency
Stats: min percent_state_change
Stats: max execution_time
Stats: max latency
Stats: max percent_state_change
OutputFormat: json
ResponseHeader: fixed16
```

### Livestatus Output <a id="livestatus-output"></a>

@@ -721,7 +770,9 @@ is a pipe (2nd level).

Separators can be set using ASCII codes like:

```
Separators: 10 59 44 124
```

* JSON

@@ -773,7 +824,9 @@ interval to its `objects.cache` and `status.dat` files. Icinga 2 provides

the `StatusDataWriter` object which dumps all configuration objects and
status updates in a regular interval.

```
# icinga2 feature enable statusdata
```

If you are not using any web interface or addon which uses these files,
you can safely disable this feature.
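
For example:

```
# icinga2 feature disable statusdata
```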

@@ -795,7 +848,9 @@ for answering queries to historical tables.

The `CompatLogger` object can be enabled with

```
# icinga2 feature enable compatlog
```

By default, the Icinga 1.x log file called `icinga.log` is located
in `/var/log/icinga2/compat`. Rotated log files are moved into

@@ -820,7 +875,8 @@ environments, Icinga 2 supports the `CheckResultReader` object.

There is no feature configuration available, but it must be defined
on-demand in your Icinga 2 objects configuration.

```
object CheckResultReader "reader" {
  spool_dir = "/data/check-results"
}
```

@@ -547,15 +547,19 @@ settings of the Icinga 2 systemd service by creating

`/etc/systemd/system/icinga2.service.d/override.conf` with the following
content:

```
[Service]
Restart=always
RestartSec=1
StartLimitInterval=10
StartLimitBurst=3
```

Using the watchdog can also help with monitoring Icinga 2. To activate and use it, add the following to the override:

```
WatchdogSec=30s
```

This way systemd will kill Icinga 2 if it does not notify for over 30 seconds; a timeout of less than 10 seconds is not
recommended. When the watchdog is activated, `Restart=` can be set to `on-watchdog` to restart Icinga 2 in the case of a

@@ -3,7 +3,9 @@

You can run the Icinga 2 daemon with the `-X` (`--script-debugger`)
parameter to enable the script debugger:

```
# icinga2 daemon -X
```

When an exception occurs or the [debugger](17-language-reference.md#breakpoints)
keyword is encountered in a user script, Icinga 2 launches a console that

@@ -11,7 +13,9 @@ allows the user to debug the script.

You can also attach the script debugger to the [configuration validation](11-cli-commands.md#config-validation):

```
# icinga2 daemon -C -X
```

Here is a list of common errors which can be diagnosed with the script debugger:

@@ -24,48 +28,54 @@ The following example illustrates the problem of a service [apply rule](03-monit

which expects a dictionary value for `config`, but the host custom attribute only
provides a string value:

```
object Host "script-debugger-host" {
  check_command = "icinga"

  vars.http_vhosts["example.org"] = "192.168.1.100" // a string value
}

apply Service for (http_vhost => config in host.vars.http_vhosts) {
  import "generic-service"

  vars += config // expects a dictionary

  check_command = "http"
}
```

The error message on config validation will warn about the wrong value type,
but does not provide any context which objects are affected.

Enable the script debugger and run the config validation:

```
# icinga2 daemon -C -X

Breakpoint encountered in /etc/icinga2/conf.d/services.conf: 59:67-65:1
Exception: Error: Error while evaluating expression: Cannot convert value of type 'String' to an object.
Location:
/etc/icinga2/conf.d/services.conf(62):   check_command = "http"
/etc/icinga2/conf.d/services.conf(63):
/etc/icinga2/conf.d/services.conf(64):   vars += config
                                         ^^^^^^^^^^^^^^
/etc/icinga2/conf.d/services.conf(65): }
/etc/icinga2/conf.d/services.conf(66):
You can inspect expressions (such as variables) by entering them at the prompt.
To leave the debugger and continue the program use "$continue".
<1> =>
```

You can print the variables `vars` and `config` to get an idea about
their values:

```
<1> => vars
null
<2> => config
"192.168.1.100"
<3> =>
```

The `vars` attribute has to be a dictionary. Trying to set this attribute to a string caused
the error in our configuration example.
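
A fix (shown here only as a sketch, not part of the original example) is to make the custom
attribute a dictionary, so that `vars += config` merges attributes instead of a plain string;
the `http_address` key is an assumption for the `http` check:

```
object Host "script-debugger-host" {
  check_command = "icinga"

  vars.http_vhosts["example.org"] = {
    http_address = "192.168.1.100" // now a dictionary value
  }
}
```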

@@ -73,10 +83,12 @@ the error in our configuration example.

In order to determine the name of the host where the value of the `config` variable came from
you can inspect attributes of the service object:

```
<3> => host_name
"script-debugger-host-01"
<4> => name
"http"
```

Additionally you can view the service object attributes by printing the value of `this`.

@@ -84,28 +96,31 @@ Additionally you can view the service object attributes by printing the value of

In order to halt execution in a script you can use the `debugger` keyword:

```
object Host "script-debugger-host-02" {
  check_command = "dummy"
  check_interval = 5s

  vars.dummy_text = {{
    var text = "Hello from " + macro("$name$")
    debugger
    return text
  }}
}
```

Icinga 2 will spawn a debugger console every time the function is executed:

```
# icinga2 daemon -X
...
Breakpoint encountered in /etc/icinga2/tests/script-debugger.conf: 7:5-7:12
You can inspect expressions (such as variables) by entering them at the prompt.
To leave the debugger and continue the program use "$continue".
<1> => text
"Hello from script-debugger-host-02"
<2> => $continue
```

## Debugging API Filters <a id="script-debugger-api-filters"></a>

@@ -213,32 +213,34 @@ If you want to delete all breakpoints, use `d` and select `yes`.

Breakpoint Example:

```
(gdb) b __cxa_throw
(gdb) r
(gdb) up
....
(gdb) up
#11 0x00007ffff7cbf9ff in icinga::Utility::GlobRecursive(icinga::String const&, icinga::String const&, boost::function<void (icinga::String const&)> const&, int) (path=..., pattern=..., callback=..., type=1)
    at /home/michi/coding/icinga/icinga2/lib/base/utility.cpp:609
609                     callback(cpath);
(gdb) l
604
605     #endif /* _WIN32 */
606
607             std::sort(files.begin(), files.end());
608             BOOST_FOREACH(const String& cpath, files) {
609                     callback(cpath);
610             }
611
612             std::sort(dirs.begin(), dirs.end());
613             BOOST_FOREACH(const String& cpath, dirs) {
(gdb) p files
$3 = std::vector of length 11, capacity 16 = {{static NPos = 18446744073709551615, m_Data = "/etc/icinga2/conf.d/agent.conf"}, {static NPos = 18446744073709551615,
    m_Data = "/etc/icinga2/conf.d/commands.conf"}, {static NPos = 18446744073709551615, m_Data = "/etc/icinga2/conf.d/downtimes.conf"}, {static NPos = 18446744073709551615,
    m_Data = "/etc/icinga2/conf.d/groups.conf"}, {static NPos = 18446744073709551615, m_Data = "/etc/icinga2/conf.d/notifications.conf"}, {static NPos = 18446744073709551615,
    m_Data = "/etc/icinga2/conf.d/satellite.conf"}, {static NPos = 18446744073709551615, m_Data = "/etc/icinga2/conf.d/services.conf"}, {static NPos = 18446744073709551615,
    m_Data = "/etc/icinga2/conf.d/templates.conf"}, {static NPos = 18446744073709551615, m_Data = "/etc/icinga2/conf.d/test.conf"}, {static NPos = 18446744073709551615,
    m_Data = "/etc/icinga2/conf.d/timeperiods.conf"}, {static NPos = 18446744073709551615, m_Data = "/etc/icinga2/conf.d/users.conf"}}
```

### Core Dump <a id="development-debug-core-dump"></a>

@@ -1580,66 +1582,76 @@ Please check `appveyor.yml` for instructions.

Install the `boost`, `python` and `icinga2` pretty printers. Absolute paths are required,
so please make sure to update the installation paths accordingly (`pwd`).

```
$ mkdir -p ~/.gdb_printers && cd ~/.gdb_printers
```

Boost Pretty Printers compatible with Python 3:

```
$ git clone https://github.com/mateidavid/Boost-Pretty-Printer.git && cd Boost-Pretty-Printer
$ git checkout python-3
$ pwd
/home/michi/.gdb_printers/Boost-Pretty-Printer
```

Python Pretty Printers:

```
$ cd ~/.gdb_printers
$ svn co svn://gcc.gnu.org/svn/gcc/trunk/libstdc++-v3/python
```

Icinga 2 Pretty Printers:

```
$ mkdir -p ~/.gdb_printers/icinga2 && cd ~/.gdb_printers/icinga2
$ wget https://raw.githubusercontent.com/Icinga/icinga2/master/tools/debug/gdb/icingadbg.py
```

Now you'll need to modify/setup your `~/.gdbinit` configuration file.
You can download the one from Icinga 2 and modify all paths.

Example on Fedora 22:

```
$ wget https://raw.githubusercontent.com/Icinga/icinga2/master/tools/debug/gdb/gdbinit -O ~/.gdbinit
$ vim ~/.gdbinit

set print pretty on

python
import sys
sys.path.insert(0, '/home/michi/.gdb_printers/icinga2')
from icingadbg import register_icinga_printers
register_icinga_printers()
end

python
import sys
sys.path.insert(0, '/home/michi/.gdb_printers/python')
from libstdcxx.v6.printers import register_libstdcxx_printers
try:
    register_libstdcxx_printers(None)
except:
    pass
end

python
import sys
sys.path.insert(0, '/home/michi/.gdb_printers/Boost-Pretty-Printer')
import boost_print
boost_print.register_printers()
end
```

If you are getting the following error when running gdb, the `libstdcxx`
printers are already preloaded in your environment and you can remove
the duplicate import in your `~/.gdbinit` file.

```
RuntimeError: pretty-printer already registered: libstdc++-v6
```

## Development Tests <a id="development-tests"></a>

@@ -18,16 +18,18 @@ There are two ways of installing the SELinux Policy for Icinga 2 on Enterprise L

If the system runs in enforcing mode and you encounter problems you can set Icinga 2's domain to permissive mode.

```
# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      28
```

You can change the configured mode by editing `/etc/selinux/config` and the current mode by executing `setenforce 0`.
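
To switch only Icinga 2's domain to permissive mode instead of the whole system, a command
along these lines should work (an assumption; it requires the `semanage` tool from the
policycoreutils Python utilities):

```
# semanage permissive -a icinga2_t
```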

@@ -35,13 +37,17 @@ You can change the configured mode by editing `/etc/selinux/config` and the curr

Simply add the `icinga2-selinux` package to your installation.

```
# yum install icinga2-selinux
```

Ensure that the `icinga2` process is running in its own `icinga2_t` domain after installing the policy package:

```
# systemctl restart icinga2.service
# ps -eZ | grep icinga2
system_u:system_r:icinga2_t:s0 2825 ?        00:00:00 icinga2
```

#### Manual installation <a id="selinux-policy-installation-manual"></a>

@@ -49,24 +55,32 @@ This section describes the installation to support development and testing. It a

As a prerequisite install the `git`, `selinux-policy-devel` and `audit` packages. Enable and start the audit daemon afterwards:

```
# yum install git selinux-policy-devel audit
# systemctl enable auditd.service
# systemctl start auditd.service
```

After that clone the icinga2 git repository:

```
# git clone https://github.com/icinga/icinga2
```

To create and install the policy package, run the installation script which also labels the resources. (The script assumes Icinga 2 was started once after system startup; the port labeling only happens once and later attempts will fail.)

```
# cd tools/selinux/
# ./icinga.sh
```

After that restart Icinga 2 and verify that it is running in its own domain `icinga2_t`.

```
# systemctl restart icinga2.service
# ps -eZ | grep icinga2
system_u:system_r:icinga2_t:s0 2825 ?        00:00:00 icinga2
```

### General <a id="selinux-policy-general"></a>

@@ -130,23 +144,29 @@ Make sure to report the bugs in the policy afterwards.

Download and install a plugin, for example check_mysql_health.

```
# wget https://labs.consol.de/download/shinken-nagios-plugins/check_mysql_health-2.1.9.2.tar.gz
# tar xvzf check_mysql_health-2.1.9.2.tar.gz
# cd check_mysql_health-2.1.9.2/
# ./configure --libexecdir /usr/lib64/nagios/plugins
# make
# make install
```

It is labeled `nagios_unconfined_plugin_exec_t` by default, so it runs without restrictions.

```
# ls -lZ /usr/lib64/nagios/plugins/check_mysql_health
-rwxr-xr-x. root root system_u:object_r:nagios_unconfined_plugin_exec_t:s0 /usr/lib64/nagios/plugins/check_mysql_health
```

In this case the plugin is monitoring a service, so it should be labeled `nagios_services_plugin_exec_t` to restrict its permissions.

```
# chcon -t nagios_services_plugin_exec_t /usr/lib64/nagios/plugins/check_mysql_health
# ls -lZ /usr/lib64/nagios/plugins/check_mysql_health
-rwxr-xr-x. root root system_u:object_r:nagios_services_plugin_exec_t:s0 /usr/lib64/nagios/plugins/check_mysql_health
```

The plugin still runs fine but if someone changes the script to do weird stuff it will fail to do so.
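
Note that `chcon` changes are lost on a filesystem relabel; to make the label persistent you
could additionally record a file context rule, sketched here:

```
# semanage fcontext -a -t nagios_services_plugin_exec_t "/usr/lib64/nagios/plugins/check_mysql_health"
# restorecon -v /usr/lib64/nagios/plugins/check_mysql_health
```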

@@ -156,25 +176,29 @@ You are running graphite on a different port than `2003` and want `icinga2` to c

Change the port value for the graphite feature according to your graphite installation before enabling it.

```
# cat /etc/icinga2/features-enabled/graphite.conf
/**
 * The GraphiteWriter type writes check result metrics and
 * performance data to a graphite tcp socket.
 */

library "perfdata"

object GraphiteWriter "graphite" {
  //host = "127.0.0.1"
  //port = 2003
  port = 2004
}
# icinga2 feature enable graphite
```

Before you restart the icinga2 service allow it to connect to all ports by enabling the boolean `icinga2_can_connect_all` (now and permanently).

```
# setsebool icinga2_can_connect_all true
# setsebool -P icinga2_can_connect_all true
```

If you restart the daemon now it will successfully connect to graphite.

@@ -209,49 +233,63 @@ this user. This is completly optional!

Start by adding the Icinga 2 administrator role `icinga2adm_r` to the administrative SELinux user `staff_u`.

```
# semanage user -m -R "staff_r sysadm_r system_r unconfined_r icinga2adm_r" staff_u
```

Confine your user login and create a sudo rule.

```
# semanage login -a dirk -s staff_u
# echo "dirk ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/dirk
```

Login to the system using ssh and verify your id.

```
$ id -Z
staff_u:staff_r:staff_t:s0-s0:c0.c1023
```

Try to execute some commands as root using sudo.

```
$ sudo id -Z
staff_u:staff_r:staff_t:s0-s0:c0.c1023
$ sudo vi /etc/icinga2/icinga2.conf
"/etc/icinga2/icinga2.conf" [Permission Denied]
$ sudo cat /var/log/icinga2/icinga2.log
cat: /var/log/icinga2/icinga2.log: Keine Berechtigung
$ sudo systemctl reload icinga2.service
Failed to get D-Bus connection: No connection to service manager.
```

Those commands fail because you only switch to root but do not change your SELinux role. Try again but tell sudo also to switch the SELinux role and type.

```
$ sudo -r icinga2adm_r -t icinga2adm_t id -Z
staff_u:icinga2adm_r:icinga2adm_t:s0-s0:c0.c1023
$ sudo -r icinga2adm_r -t icinga2adm_t vi /etc/icinga2/icinga2.conf
"/etc/icinga2/icinga2.conf"
$ sudo -r icinga2adm_r -t icinga2adm_t cat /var/log/icinga2/icinga2.log
[2015-03-26 20:48:14 +0000] information/DynamicObject: Dumping program state to file '/var/lib/icinga2/icinga2.state'
$ sudo -r icinga2adm_r -t icinga2adm_t systemctl reload icinga2.service
```

Now the commands will work, but you always have to remember to add the arguments, so change the sudo rule to set them by default.

```
# echo "dirk ALL=(ALL) ROLE=icinga2adm_r TYPE=icinga2adm_t NOPASSWD: ALL" > /etc/sudoers.d/dirk
```

Now try the commands again without providing the role and type and they will work, but if you try to read apache logs or restart apache for example it will still fail.

```
$ sudo cat /var/log/httpd/error_log
/bin/cat: /var/log/httpd/error_log: Keine Berechtigung
$ sudo systemctl reload httpd.service
Failed to issue method call: Access denied
```

## Bugreports <a id="selinux-bugreports"></a>