Update documentation

Gunnar Beutner 2015-01-23 15:32:41 +01:00
parent ceaaf58145
commit 51aa2dff13
10 changed files with 114 additions and 107 deletions


@@ -109,7 +109,7 @@ or similar.
* Check the debug log to see if the check command gets executed
* Verify that failed dependencies do not prevent command execution
* Make sure that the plugin is executable by the Icinga 2 user (run a manual test)
-* Make sure the [checker](8-cli-commands.md#features) feature is enabled.
+* Make sure the [checker](7-cli-commands.md#features) feature is enabled.

Examples:
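For instance, a quick way to confirm that the feature is enabled, and to enable it if it is not, might look like this (a sketch; the restart command assumes a SysV init system):

    # icinga2 feature list
    # icinga2 feature enable checker
    # service icinga2 restart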
@@ -131,7 +131,7 @@ Verify the following configuration
* Do the notification attributes `states`, `types`, `period` match the notification conditions?
* Do the user attributes `states`, `types`, `period` match the notification conditions?
* Are there any notification `begin` and `end` times configured?
-* Make sure the [notification](8-cli-commands.md#features) feature is enabled.
+* Make sure the [notification](7-cli-commands.md#features) feature is enabled.
* Does the referenced NotificationCommand work when executed as Icinga user on the shell?

If notifications are to be sent via mail, make sure that the mail program specified exists.
@@ -164,7 +164,7 @@ or modify these attributes in the current object.

## <a id="troubleshooting-cluster"></a> Cluster Troubleshooting

-You should configure the [cluster health checks](7-monitoring-remote-systems.md#cluster-health-check) if you haven't
+You should configure the [cluster health checks](8-monitoring-remote-systems.md#cluster-health-check) if you haven't
done so already.

> **Note**
@@ -218,7 +218,7 @@ If the cluster zones do not sync their configuration, make sure to check the fol
* Within a config master zone, only one configuration master is allowed to have its config in `/etc/icinga2/zones.d`.
** The master syncs the configuration to `/var/lib/icinga2/api/zones/` during startup and only syncs valid configuration to the other nodes
** The other nodes receive the configuration into `/var/lib/icinga2/api/zones/`
-* The `icinga2.log` log file will indicate whether this ApiListener [accepts config](7-monitoring-remote-systems.md#zone-config-sync-permissions), or not
+* The `icinga2.log` log file will indicate whether this ApiListener [accepts config](8-monitoring-remote-systems.md#zone-config-sync-permissions), or not
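As a quick check, one could inspect the synced configuration directory and grep the log on each node (a sketch; the log path assumes the default `mainlog` feature settings):

    # ls -la /var/lib/icinga2/api/zones/
    # grep -i apilistener /var/log/icinga2/icinga2.log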

## <a id="debug"></a> Debug Icinga 2


@@ -721,9 +721,9 @@ daemon for passing check results between instances.
* Icinga 2 does not support any 1.x NEB addons for check load distribution
* If your current setup consists of instances distributing the check load, you should consider
-building a [load distribution](7-monitoring-remote-systems.md#cluster-scenarios-load-distribution) setup with Icinga 2.
+building a [load distribution](8-monitoring-remote-systems.md#cluster-scenarios-load-distribution) setup with Icinga 2.
* If your current setup includes active/passive clustering with external tools like Pacemaker/DRBD
-consider the [High Availability](7-monitoring-remote-systems.md#cluster-scenarios-high-availability) setup.
+consider the [High Availability](8-monitoring-remote-systems.md#cluster-scenarios-high-availability) setup.
* If you have built your own custom configuration deployment and check result collecting mechanism
you should re-design your setup and re-evaluate your requirements, and how they may be fulfilled
using the Icinga 2 cluster capabilities.
@@ -777,7 +777,7 @@ Icinga 2 only uses a small set of [global constants](15-language-reference.md#co
you to specify certain settings such as the `NodeName` in a cluster scenario.
Aside from that, the [icinga2.conf](4-configuring-icinga-2.md#icinga2-conf) should take care of including
-global constants, enabled [features](8-cli-commands.md#features) and the object configuration.
+global constants, enabled [features](7-cli-commands.md#features) and the object configuration.
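To illustrate, `constants.conf` usually boils down to a handful of such definitions; a minimal sketch (the plugin path varies by distribution and is an assumption here):

    /* constants.conf -- adjust PluginDir to your distribution's plugin path */
    const PluginDir = "/usr/lib/nagios/plugins"
    const NodeName = "icinga2a"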

### <a id="differences-1x-2-include-files-dirs"></a> Include Files and Directories
@@ -1436,7 +1436,7 @@ Unlike Icinga 1.x the Icinga 2 daemon reload happens asynchronously.
* parent process continues with old configuration objects and the event scheduling
(doing checks, replicating cluster events, triggering alert notifications, etc.)
* validation NOT ok: child process terminates, parent process continues with old configuration state
-(this is ESSENTIAL for the [cluster config synchronisation](7-monitoring-remote-systems.md#cluster-zone-config-sync))
+(this is ESSENTIAL for the [cluster config synchronisation](8-monitoring-remote-systems.md#cluster-zone-config-sync))
* validation ok: child process signals parent process to terminate and save its current state
(all events until now) into the icinga2 state file
* parent process shuts down writing icinga2.state file
@@ -1491,6 +1491,6 @@ distribution out-of-the-box. Furthermore comments, downtimes, and other stateful
not synced between the master and slave nodes. There are addons available solving the check
and configuration distribution problems Icinga 1.x distributed monitoring currently suffers from.

-Icinga 2 implements a new built-in [distributed monitoring architecture](7-monitoring-remote-systems.md#distributed-monitoring-high-availability),
+Icinga 2 implements a new built-in [distributed monitoring architecture](8-monitoring-remote-systems.md#distributed-monitoring-high-availability),
including config and check distribution, IPv4/IPv6 support, SSL certificates and zone support for DMZ.
High Availability and load balancing are also part of the Icinga 2 Cluster setup.


@@ -487,7 +487,7 @@ Note the use of angle brackets instead of double quotes. This causes the
config compiler to search the include search paths for the specified
file. By default $PREFIX/share/icinga2/include is included in the list of search
paths. Additional include search paths can be added using
-[command-line options](8-cli-commands.md#config-include-path).
+[command-line options](7-cli-commands.md#config-include-path).

Wildcards are not permitted when using angle brackets.
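For example, the default `icinga2.conf` pulls in the Icinga Template Library and the plugin check commands this way:

    include <itl>
    include <plugins>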


@@ -112,7 +112,7 @@ Icinga 2 installation:
* `notification` for sending notifications
* `mainlog` for writing the `icinga2.log` file

-You can verify that by calling `icinga2 feature list` [CLI command](8-cli-commands.md#cli-command-feature)
+You can verify that by calling `icinga2 feature list` [CLI command](7-cli-commands.md#cli-command-feature)
to see which features are enabled and disabled.

    # icinga2 feature list
@@ -491,7 +491,7 @@ The `systemctl` command supports the following actions:
status | The `status` action checks if Icinga 2 is running.
enable | The `enable` action enables the service being started at system boot time (similar to `chkconfig`)

-If you're stuck with configuration errors, you can manually invoke the [configuration validation](8-cli-commands.md#config-validation).
+If you're stuck with configuration errors, you can manually invoke the [configuration validation](7-cli-commands.md#config-validation).

    # systemctl enable icinga2
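A common workflow is to validate the configuration before (re)starting the service, for instance:

    # icinga2 daemon -C
    # systemctl restart icinga2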


@@ -248,8 +248,8 @@ dictionaries](#using-apply-for) for example provided by
> **Tip**
>
> Building configuration in that dynamic way requires detailed information
-> of the generated objects. Use the `object list` [CLI command](8-cli-commands.md#cli-command-object)
-> after successful [configuration validation](8-cli-commands.md#config-validation).
+> of the generated objects. Use the `object list` [CLI command](7-cli-commands.md#cli-command-object)
+> after successful [configuration validation](7-cli-commands.md#config-validation).

#### <a id="using-apply-expressions"></a> Apply Rules Expressions
@@ -437,8 +437,8 @@ This can be achieved by wrapping them into the [string()](15-language-reference.
> **Tip**
>
> Building configuration in that dynamic way requires detailed information
-> of the generated objects. Use the `object list` [CLI command](8-cli-commands.md#cli-command-object)
-> after successful [configuration validation](8-cli-commands.md#config-validation).
+> of the generated objects. Use the `object list` [CLI command](7-cli-commands.md#cli-command-object)
+> after successful [configuration validation](7-cli-commands.md#config-validation).
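In practice that boils down to two commands, for example (the name filter is a placeholder):

    # icinga2 daemon -C
    # icinga2 object list --type Service --name "ping*"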

#### <a id="using-apply-object attributes"></a> Use Object Attributes in Apply Rules
@@ -594,7 +594,7 @@ Details on troubleshooting notification problems can be found [here](12-troubles
> **Note**
>
-> Make sure that the [notification](8-cli-commands.md#features) feature is enabled on your master instance
+> Make sure that the [notification](7-cli-commands.md#features) feature is enabled on your master instance
> in order to execute notification commands.

You should choose which information you (and your notified users) are interested in
@@ -895,7 +895,7 @@ using the `check_command` attribute.
> **Note**
>
-> Make sure that the [checker](8-cli-commands.md#features) feature is enabled in order to
+> Make sure that the [checker](7-cli-commands.md#features) feature is enabled in order to
> execute checks.

#### <a id="command-plugin-integration"></a> Integrate the Plugin with a CheckCommand Definition
@@ -1156,7 +1156,7 @@ interfaces (E-Mail, XMPP, IRC, Twitter, etc).
> **Note**
>
-> Make sure that the [notification](8-cli-commands.md#features) feature is enabled on your master instance
+> Make sure that the [notification](7-cli-commands.md#features) feature is enabled on your master instance
> in order to execute notification commands.

Below is an example using runtime macros from Icinga 2 (such as `$service.output$` for
@@ -2437,7 +2437,7 @@ chapter. Details on the configuration can be found in the
[IdoMysqlConnection](5-object-types.md#objecttype-idomysqlconnection) and
[IdoPgsqlConnection](5-object-types.md#objecttype-idopgsqlconnection)
object configuration documentation.

-The DB IDO feature supports [High Availability](7-monitoring-remote-systems.md#high-availability-db-ido) in
+The DB IDO feature supports [High Availability](8-monitoring-remote-systems.md#high-availability-db-ido) in
the Icinga 2 cluster.

The following example query checks the health of the current Icinga 2 instance


@@ -1,5 +1,12 @@
# <a id="configuring-icinga2-first-steps"></a> Configuring Icinga 2: First Steps

+This chapter provides an introduction to the configuration files which are automatically created
+when installing the Icinga 2 packages.
+
+If you're interested in a detailed explanation of each language feature used in those
+configuration files you can find more information in the [Language Reference](15-language-reference.md#language-reference)
+chapter.
+
## <a id="icinga2-conf"></a> icinga2.conf

An example configuration file is installed for you in `/etc/icinga2/icinga2.conf`.
@ -49,7 +56,7 @@ The `include` directive can be used to include other files.
This `include` directive takes care of including the configuration files for all This `include` directive takes care of including the configuration files for all
the features which have been enabled with `icinga2 feature enable`. See the features which have been enabled with `icinga2 feature enable`. See
[Enabling/Disabling Features](8-cli-commands.md#features) for more details. [Enabling/Disabling Features](7-cli-commands.md#features) for more details.
/** /**
* The repository.d directory contains all configuration objects * The repository.d directory contains all configuration objects
@@ -59,7 +66,7 @@ the features which have been enabled with `icinga2 feature enable`. See
This `include_recursive` directive is used for discovery of services on remote clients
and their generated configuration described in
-[this chapter](7-monitoring-remote-systems.md#icinga2-remote-monitoring-master-discovery-generate-config).
+[this chapter](8-monitoring-remote-systems.md#icinga2-remote-monitoring-master-discovery-generate-config).

    /**
@@ -586,8 +593,8 @@ objects such as hosts, services or notifications.
### <a id="satellite-conf"></a> satellite.conf

-Ships default templates and dependencies for [monitoring remote clients](7-monitoring-remote-systems.md#icinga2-remote-client-monitoring)
-using service discovery and [config generation](7-monitoring-remote-systems.md#icinga2-remote-monitoring-master-discovery-generate-config)
+Ships default templates and dependencies for [monitoring remote clients](8-monitoring-remote-systems.md#icinga2-remote-client-monitoring)
+using service discovery and [config generation](8-monitoring-remote-systems.md#icinga2-remote-monitoring-master-discovery-generate-config)
on the master. Can be ignored/removed on setups not using these features.


@@ -805,8 +805,8 @@ Attributes:
table\_prefix |**Optional.** MySQL database table prefix. Defaults to "icinga\_".
instance\_name |**Optional.** Unique identifier for the local Icinga 2 instance. Defaults to "default".
instance\_description|**Optional.** Description for the Icinga 2 instance.
-enable_ha |**Optional.** Enable the high availability functionality. Only valid in a [cluster setup](7-monitoring-remote-systems.md#high-availability-db-ido). Defaults to "true".
-failover_timeout | **Optional.** Set the failover timeout in a [HA cluster](7-monitoring-remote-systems.md#high-availability-db-ido). Must not be lower than 60s. Defaults to "60s".
+enable_ha |**Optional.** Enable the high availability functionality. Only valid in a [cluster setup](8-monitoring-remote-systems.md#high-availability-db-ido). Defaults to "true".
+failover_timeout | **Optional.** Set the failover timeout in a [HA cluster](8-monitoring-remote-systems.md#high-availability-db-ido). Must not be lower than 60s. Defaults to "60s".
cleanup |**Optional.** Dictionary with items for historical table cleanup.
categories |**Optional.** The types of information that should be written to the database.
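Putting a few of these attributes together, a hypothetical HA-aware `IdoMysqlConnection` definition might look like this (host and credentials are placeholders):

    object IdoMysqlConnection "ido-mysql" {
      /* placeholder connection settings */
      host = "127.0.0.1"
      database = "icinga"
      user = "icinga"
      password = "icinga"

      enable_ha = true
      failover_timeout = 60s
    }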
@@ -894,8 +894,8 @@ Attributes:
table\_prefix |**Optional.** PostgreSQL database table prefix. Defaults to "icinga\_".
instance\_name |**Optional.** Unique identifier for the local Icinga 2 instance. Defaults to "default".
instance\_description|**Optional.** Description for the Icinga 2 instance.
-enable_ha |**Optional.** Enable the high availability functionality. Only valid in a [cluster setup](7-monitoring-remote-systems.md#high-availability-db-ido). Defaults to "true".
-failover_timeout | **Optional.** Set the failover timeout in a [HA cluster](7-monitoring-remote-systems.md#high-availability-db-ido). Must not be lower than 60s. Defaults to "60s".
+enable_ha |**Optional.** Enable the high availability functionality. Only valid in a [cluster setup](8-monitoring-remote-systems.md#high-availability-db-ido). Defaults to "true".
+failover_timeout | **Optional.** Set the failover timeout in a [HA cluster](8-monitoring-remote-systems.md#high-availability-db-ido). Must not be lower than 60s. Defaults to "60s".
cleanup |**Optional.** Dictionary with items for historical table cleanup.
categories |**Optional.** The types of information that should be written to the database.


@@ -132,7 +132,7 @@ added.
### <a id="cli-command-daemon"></a> CLI command: Daemon

The CLI command `daemon` provides the functionality to start/stop Icinga 2.
-Furthermore it provides the [configuration validation](8-cli-commands.md#config-validation).
+Furthermore it provides the [configuration validation](7-cli-commands.md#config-validation).

    # icinga2 daemon --help
    icinga2 - The Icinga 2 network monitoring daemon (version: v2.1.1-299-gf695275)
@@ -176,7 +176,7 @@ Icinga 2 automatically falls back to using the configuration file
The `--validate` option can be used to check if your configuration files
contain errors. If any errors are found the exit status is 1, otherwise 0
-is returned. More details in the [configuration validation](8-cli-commands.md#config-validation) chapter.
+is returned. More details in the [configuration validation](7-cli-commands.md#config-validation) chapter.

### <a id="cli-command-feature"></a> CLI command: Feature
@@ -195,8 +195,8 @@ feature will only bring up all enabled features.
### <a id="cli-command-node"></a> CLI command: Node

Provides the functionality to install and manage master and client
-nodes in a [remote monitoring ](7-monitoring-remote-systems.md#icinga2-remote-client-monitoring) or
-[distributed cluster](7-monitoring-remote-systems.md#distributed-monitoring-high-availability) scenario.
+nodes in a [remote monitoring ](8-monitoring-remote-systems.md#icinga2-remote-client-monitoring) or
+[distributed cluster](8-monitoring-remote-systems.md#distributed-monitoring-high-availability) scenario.

    # icinga2 node --help
@@ -281,7 +281,7 @@ Provides the CLI commands to
* request a signed certificate from the master
* generate a new ticket for the client setup

-This functionality is used by the [node setup/wizard](8-cli-commands.md#cli-command-pki) CLI commands too.
+This functionality is used by the [node setup/wizard](7-cli-commands.md#cli-command-pki) CLI commands too.

    # icinga2 pki --help
    icinga2 - The Icinga 2 network monitoring daemon (version: v2.1.1-299-gf695275)
@@ -370,7 +370,7 @@ cleared after review.
### <a id="cli-command-variable"></a> CLI command: Variable

-Lists all configured variables (constants) in a similar fashion to [object list](8-cli-commands.md#cli-command-object).
+Lists all configured variables (constants) in a similar fashion to [object list](7-cli-commands.md#cli-command-object).

    # icinga2 variable --help
    icinga2 - The Icinga 2 network monitoring daemon (version: v2.1.1-299-gf695275)
@@ -406,7 +406,7 @@ Lists all configured variables (constants) in a similar fashion to [object list
Icinga 2 provides configuration files for some commonly used features. These
are installed in the `/etc/icinga2/features-available` directory and can be
enabled and disabled using the `icinga2 feature enable` and `icinga2 feature disable`
-[CLI commands](8-cli-commands.md#cli-command-feature), respectively.
+[CLI commands](7-cli-commands.md#cli-command-feature), respectively.

The `icinga2 feature enable` CLI command creates symlinks in the
`/etc/icinga2/features-enabled` directory which is included by default
@@ -486,7 +486,7 @@ Or manually passing the `-C` argument:
If you encounter errors during configuration validation, please make sure
to read the [troubleshooting](12-troubleshooting.md#troubleshooting) chapter.

-You can also use the [CLI command](8-cli-commands.md#cli-command-object) `icinga2 object list`
+You can also use the [CLI command](7-cli-commands.md#cli-command-object) `icinga2 object list`
after validation passes to analyze object attributes, inheritance or created
objects by apply rules.
Find more on troubleshooting with `object list` in [this chapter](12-troubleshooting.md#list-configuration-objects).
@@ -522,7 +522,7 @@ Example filtered by `Service` objects with the name `ping*`:
## <a id="config-change-reload"></a> Reload on Configuration Changes

Every time you have changed your configuration you should first tell Icinga 2
-to [validate](8-cli-commands.md#config-validation). If there are no validation errors you can
+to [validate](7-cli-commands.md#config-validation). If there are no validation errors you can
safely reload the Icinga 2 daemon.

    # /etc/init.d/icinga2 reload
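Chained into a single step, that could look like this (assuming the SysV init script shown above):

    # icinga2 daemon -C && /etc/init.d/icinga2 reload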


@@ -1,17 +1,17 @@
# <a id="monitoring-remote-systems"></a> Monitoring Remote Systems

-There are multiple ways you can monitor remote clients. Be it using [agent-less](7-monitoring-remote-systems.md#agent-less-checks)
+There are multiple ways you can monitor remote clients. Be it using [agent-less](8-monitoring-remote-systems.md#agent-less-checks)
or [agent-based](agent-based-checks-addons) using additional addons & tools.

Icinga 2 uses its own unique and secure communication protocol amongst instances.
Be it a High-Availability cluster setup, distributed load-balanced setup or just a single
-agent [monitoring a remote client](7-monitoring-remote-systems.md#icinga2-remote-client-monitoring).
+agent [monitoring a remote client](8-monitoring-remote-systems.md#icinga2-remote-client-monitoring).
All communication is secured by TLS with certificates, and fully supports IPv4 and IPv6.

If you are planning to use the native Icinga 2 cluster feature for distributed
monitoring and high-availability, please continue reading in
-[this chapter](7-monitoring-remote-systems.md#distributed-monitoring-high-availability).
+[this chapter](8-monitoring-remote-systems.md#distributed-monitoring-high-availability).

> **Tip**
>
@@ -58,13 +58,13 @@ First, you should decide which role the remote client has:
* a remote command execution client (similar to NRPE, NSClient++, etc)

Later on, you will be asked again and told how to proceed with these
-different [roles](7-monitoring-remote-systems.md#icinga2-remote-monitoring-client-roles).
+different [roles](8-monitoring-remote-systems.md#icinga2-remote-monitoring-client-roles).

> **Note**
>
> If you are planning to build an Icinga 2 distributed setup using the cluster feature, please skip
> the following instructions and jump directly to the
-> [cluster setup instructions](7-monitoring-remote-systems.md#distributed-monitoring-high-availability).
+> [cluster setup instructions](8-monitoring-remote-systems.md#distributed-monitoring-high-availability).

> **Note**
>
@@ -73,7 +73,7 @@ different [roles](7-monitoring-remote-systems.md#icinga2-remote-monitoring-clien
## <a id="icinga2-remote-monitoring-master"></a> Master Setup for Remote Monitoring

-If you are planning to use the [remote Icinga 2 clients](7-monitoring-remote-systems.md#icinga2-remote-monitoring-client)
+If you are planning to use the [remote Icinga 2 clients](8-monitoring-remote-systems.md#icinga2-remote-monitoring-client)
you'll first need to update your master setup.

Your master setup requires the following
@@ -82,7 +82,7 @@ Your master setup requires the following
* Enabled API feature, and a local Endpoint and Zone object configuration
* Firewall ACLs for the communication port (default 5665)

-You can use the [CLI command](8-cli-commands.md#cli-command-node) `node wizard` for setting up a new node
+You can use the [CLI command](7-cli-commands.md#cli-command-node) `node wizard` for setting up a new node
on the master. The command must be run as root; all Icinga 2 specific files
will be updated to the icinga user the daemon is running as (certificate files
for example).
@@ -148,13 +148,13 @@ The setup wizard does not automatically restart Icinga 2.
## <a id="icinga2-remote-monitoring-client"></a> Client Setup for Remote Monitoring

Icinga 2 can be installed on Linux/Unix and Windows. While
-[Linux/Unix](7-monitoring-remote-systems.md#icinga2-remote-monitoring-client-linux) will be using the [CLI command](8-cli-commands.md#cli-command-node)
+[Linux/Unix](8-monitoring-remote-systems.md#icinga2-remote-monitoring-client-linux) will be using the [CLI command](7-cli-commands.md#cli-command-node)
`node wizard` for a guided setup, you will need to use the
graphical installer for Windows based client setup.

Your client setup requires the following

-* A ready configured and installed [master node](7-monitoring-remote-systems.md#icinga2-remote-monitoring-master)
+* A ready configured and installed [master node](8-monitoring-remote-systems.md#icinga2-remote-monitoring-master)
* SSL signed certificate for communication with the master (Use [CSR auto-signing](certifiates-csr-autosigning)).
* Enabled API feature, and a local Endpoint and Zone object configuration
* Firewall ACLs for the communication port (default 5665)
@@ -169,7 +169,7 @@ If your remote clients are capable of connecting to the central master, Icinga 2
supports CSR auto-signing.

First you'll need to define a secure ticket salt in the [constants.conf](4-configuring-icinga-2.md#constants-conf).
-The [setup wizard for the master setup](7-monitoring-remote-systems.md#icinga2-remote-monitoring-master) will create
+The [setup wizard for the master setup](8-monitoring-remote-systems.md#icinga2-remote-monitoring-master) will create
one for you already.

    # grep TicketSalt /etc/icinga2/constants.conf
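With the salt in place, a ticket for a given client can then be generated on the master; a sketch with a placeholder common name:

    # icinga2 pki ticket --cn icinga2-node1.localdomain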
@@ -193,11 +193,11 @@ Example for a client notebook:
#### <a id="certificates-manual-creation"></a> Manual SSL Certificate Generation

-This is described separately in the [cluster setup chapter](7-monitoring-remote-systems.md#manual-certificate-generation).
+This is described separately in the [cluster setup chapter](8-monitoring-remote-systems.md#manual-certificate-generation).

> **Note**
>
-> If you're using [CSR Auto-Signing](7-monitoring-remote-systems.md#csr-autosigning-requirements), skip this step.
+> If you're using [CSR Auto-Signing](8-monitoring-remote-systems.md#csr-autosigning-requirements), skip this step.

#### <a id="icinga2-remote-monitoring-client-linux-setup"></a> Linux Client Setup Wizard for Remote Monitoring
@@ -205,8 +205,8 @@ This is described separately in the [cluster setup chapter](7-monitoring-remote-
Install Icinga 2 from your distribution's package repository as described in the
general [installation instructions](2-getting-started.md#setting-up-icinga2).

-Please make sure that either [CSR Auto-Signing](7-monitoring-remote-systems.md#csr-autosigning-requirements) requirements
-are fulfilled, or that you're using [manual SSL certificate generation](7-monitoring-remote-systems.md#manual-certificate-generation).
+Please make sure that either [CSR Auto-Signing](8-monitoring-remote-systems.md#csr-autosigning-requirements) requirements
+are fulfilled, or that you're using [manual SSL certificate generation](8-monitoring-remote-systems.md#manual-certificate-generation).

> **Note**
>
@@ -222,7 +222,7 @@ You'll need the following configuration details:
* The client's local zone name. Defaults to FQDN.
* The master endpoint name. Look into your master setup `zones.conf` file for the proper name.
* The master endpoint connection information. Your master's IP address and port (defaults to 5665)
-* The [request ticket number](7-monitoring-remote-systems.md#csr-autosigning-requirements) generated on your master
+* The [request ticket number](8-monitoring-remote-systems.md#csr-autosigning-requirements) generated on your master
for CSR Auto-Signing
* Bind host/port for the Api feature (optional)
@@ -325,7 +325,7 @@ You'll need the following configuration details:
* The client's local zone name. Defaults to FQDN.
* The master endpoint name. Look into your master setup `zones.conf` file for the proper name.
* The master endpoint connection information. Your master's IP address and port (defaults to 5665)
-* The [request ticket number](7-monitoring-remote-systems.md#csr-autosigning-requirements) generated on your master
+* The [request ticket number](8-monitoring-remote-systems.md#csr-autosigning-requirements) generated on your master
for CSR Auto-Signing
* Bind host/port for the Api feature (optional)
@@ -395,8 +395,8 @@ in [zones.conf](#zones-conf) and define a trusted master zone as `parent`.
    }

More details here:
-* [configure endpoints](7-monitoring-remote-systems.md#configure-cluster-endpoints)
-* [configure zones](7-monitoring-remote-systems.md#configure-cluster-zones)
+* [configure endpoints](8-monitoring-remote-systems.md#configure-cluster-endpoints)
+* [configure zones](8-monitoring-remote-systems.md#configure-cluster-zones)
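To recap the idea, a hypothetical client-side `zones.conf` fragment declaring the trusted master zone as `parent` could look like this (endpoint names are placeholders):

    object Zone "master" {
      endpoints = [ "icinga2-master" ] /* placeholder endpoint name */
    }

    object Zone "remote-client1" {
      endpoints = [ "remote-client1" ]
      parent = "master"
    }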

Configuration example for host and service objects running commands on the remote endpoint `remote-client1`:
@@ -447,7 +447,7 @@ schedule client updates in your management tool (e.g. Puppet).
> clients. There are no local configured objects available.
>
> If you require this, please install a full-featured
-> [local client](7-monitoring-remote-systems.md#icinga2-remote-monitoring-client-local-config).
+> [local client](8-monitoring-remote-systems.md#icinga2-remote-monitoring-client-local-config).

### <a id="icinga2-remote-monitoring-client-local-config"></a> Remote Client with Local Configuration
@@ -519,7 +519,7 @@ using the following CLI command:
> **Note**
>
-> Better use [blacklists and/or whitelists](7-monitoring-remote-systems.md#icinga2-remote-monitoring-master-discovery-blacklist-whitelist)
+> Better use [blacklists and/or whitelists](8-monitoring-remote-systems.md#icinga2-remote-monitoring-master-discovery-blacklist-whitelist)
> to control which clients and hosts/services are integrated into your master configuration repository.

### <a id="icinga2-remote-monitoring-master-discovery-generate-config"></a> Generate Icinga 2 Configuration for Client Services on the Master
@@ -602,13 +602,13 @@ You can `list` and `remove` existing blacklists:
Define a [Zone](5-object-types.md#objecttype-zone) with a new [Endpoint](5-object-types.md#objecttype-endpoint) similar to the cluster setup.

-* [configure the node name](7-monitoring-remote-systems.md#configure-nodename)
-* [configure the ApiListener object](7-monitoring-remote-systems.md#configure-apilistener-object)
-* [configure cluster endpoints](7-monitoring-remote-systems.md#configure-cluster-endpoints)
-* [configure cluster zones](7-monitoring-remote-systems.md#configure-cluster-zones)
+* [configure the node name](8-monitoring-remote-systems.md#configure-nodename)
+* [configure the ApiListener object](8-monitoring-remote-systems.md#configure-apilistener-object)
+* [configure cluster endpoints](8-monitoring-remote-systems.md#configure-cluster-endpoints)
+* [configure cluster zones](8-monitoring-remote-systems.md#configure-cluster-zones)

on a per remote client basis. If you prefer to synchronize the configuration to remote
-clients, you can also use the cluster provided [configuration sync](7-monitoring-remote-systems.md#cluster-zone-config-sync)
+clients, you can also use the cluster provided [configuration sync](8-monitoring-remote-systems.md#cluster-zone-config-sync)
in `zones.d`.
@@ -676,7 +676,7 @@ remote client.
> The NRPE protocol is considered insecure and has multiple flaws in its
> design. Upstream is not willing to fix these issues.
>
-> In order to stay safe, please use the native [Icinga 2 client](7-monitoring-remote-systems.md#icinga2-remote-monitoring-master)
+> In order to stay safe, please use the native [Icinga 2 client](8-monitoring-remote-systems.md#icinga2-remote-monitoring-master)
> instead.

The NRPE daemon uses its own configuration format in nrpe.cfg while `check_nrpe`
@@ -742,7 +742,7 @@ executed by the NRPE daemon looks similar to that:
    /usr/local/icinga/libexec/check_disk -w 20% -c 10% -p /

-You can pass arguments in a similar manner to [NSClient++](7-monitoring-remote-systems.md#agent-based-checks-nsclient)
+You can pass arguments in a similar manner to [NSClient++](8-monitoring-remote-systems.md#agent-based-checks-nsclient)
when using its NRPE supported check method.

### <a id="agent-based-checks-nsclient"></a> NSClient++
@@ -973,9 +973,9 @@ passive update with the state and text from the second and third varbind:
Building distributed environments with high availability included is fairly easy with Icinga 2.
The cluster feature is built-in and allows you to build many scenarios based on your requirements:

-* [High Availability](7-monitoring-remote-systems.md#cluster-scenarios-high-availability). All instances in the `Zone` elect one active master and run as Active/Active cluster.
-* [Distributed Zones](7-monitoring-remote-systems.md#cluster-scenarios-distributed-zones). A master zone and one or more satellites in their zones.
-* [Load Distribution](7-monitoring-remote-systems.md#cluster-scenarios-load-distribution). A configuration master and multiple checker satellites.
+* [High Availability](8-monitoring-remote-systems.md#cluster-scenarios-high-availability). All instances in the `Zone` elect one active master and run as Active/Active cluster.
+* [Distributed Zones](8-monitoring-remote-systems.md#cluster-scenarios-distributed-zones). A master zone and one or more satellites in their zones.
+* [Load Distribution](8-monitoring-remote-systems.md#cluster-scenarios-load-distribution). A configuration master and multiple checker satellites.

You can combine these scenarios into a global setup fitting your requirements.
@@ -999,7 +999,7 @@ Before you start deploying, keep the following things in mind:
* cluster zones can be built in a Top-Down-design where the child trusts the parent
* communication between zones is bi-directional, which means that a DMZ-located node can still reach the master node, or vice versa
* Update firewall rules and ACLs
-* Decide whether to use the built-in [configuration synchronization](7-monitoring-remote-systems.md#cluster-zone-config-sync) or use an external tool (Puppet, Ansible, Chef, Salt, etc) to manage the configuration deployment
+* Decide whether to use the built-in [configuration synchronization](8-monitoring-remote-systems.md#cluster-zone-config-sync) or use an external tool (Puppet, Ansible, Chef, Salt, etc) to manage the configuration deployment

> **Tip**
@@ -1010,7 +1010,7 @@ Before you start deploying, keep the following things in mind:
### <a id="manual-certificate-generation"></a> Manual SSL Certificate Generation

-Icinga 2 ships [CLI commands](8-cli-commands.md#cli-command-pki) assisting with CA and node certificate creation
+Icinga 2 ships [CLI commands](7-cli-commands.md#cli-command-pki) assisting with CA and node certificate creation
for your Icinga 2 distributed setup.
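A rough sketch of that workflow, assuming the `new-ca`, `new-cert` and `sign-csr` subcommands of the `pki` CLI command and placeholder file names:

    # icinga2 pki new-ca
    # icinga2 pki new-cert --cn icinga2a --key icinga2a.key --csr icinga2a.csr
    # icinga2 pki sign-csr --csr icinga2a.csr --cert icinga2a.crt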

> **Note**
@@ -1079,7 +1079,7 @@ The [Endpoint](5-object-types.md#objecttype-endpoint) name is further referenced
      endpoints = [ "icinga2a", "icinga2b" ]
    }

-Specifying the local node name using the [NodeName](7-monitoring-remote-systems.md#configure-nodename) variable requires
+Specifying the local node name using the [NodeName](8-monitoring-remote-systems.md#configure-nodename) variable requires
the same name as used for the endpoint name and common name above. If not set, the FQDN is used.

    const NodeName = "icinga2a"
@@ -1090,14 +1090,14 @@ the same name as used for the endpoint name and common name above. If not set, t
The following sections describe which configuration must be updated/created
in order to get your cluster running with basic functionality.

-* [configure the node name](7-monitoring-remote-systems.md#configure-nodename)
-* [configure the ApiListener object](7-monitoring-remote-systems.md#configure-apilistener-object)
-* [configure cluster endpoints](7-monitoring-remote-systems.md#configure-cluster-endpoints)
-* [configure cluster zones](7-monitoring-remote-systems.md#configure-cluster-zones)
+* [configure the node name](8-monitoring-remote-systems.md#configure-nodename)
+* [configure the ApiListener object](8-monitoring-remote-systems.md#configure-apilistener-object)
+* [configure cluster endpoints](8-monitoring-remote-systems.md#configure-cluster-endpoints)
+* [configure cluster zones](8-monitoring-remote-systems.md#configure-cluster-zones)

Once you're finished with the basic setup the following section will
-describe how to use [zone configuration synchronisation](7-monitoring-remote-systems.md#cluster-zone-config-sync)
-and configure [cluster scenarios](7-monitoring-remote-systems.md#cluster-scenarios).
+describe how to use [zone configuration synchronisation](8-monitoring-remote-systems.md#cluster-zone-config-sync)
+and configure [cluster scenarios](8-monitoring-remote-systems.md#cluster-scenarios).

#### <a id="configure-nodename"></a> Configure the Icinga Node Name
@@ -1112,7 +1112,7 @@ that value using the [NodeName](15-language-reference.md#constants) constant.
This setting must be unique for each node, and must also match
the name of the local [Endpoint](5-object-types.md#objecttype-endpoint) object and the
SSL certificate common name as described in the
-[cluster naming convention](7-monitoring-remote-systems.md#cluster-naming-convention).
+[cluster naming convention](8-monitoring-remote-systems.md#cluster-naming-convention).

    vim /etc/icinga2/constants.conf
@@ -1122,7 +1122,7 @@ SSL certificate common name as described in the
    const NodeName = "icinga2a"

-Read further about additional [naming conventions](7-monitoring-remote-systems.md#cluster-naming-convention).
+Read further about additional [naming conventions](8-monitoring-remote-systems.md#cluster-naming-convention).

Not specifying the node name will make Icinga 2 use the FQDN. Make sure that all
configured endpoint names and common names are in sync.
@@ -1177,9 +1177,9 @@ If this endpoint object is reachable on a different port, you must configure the
`Zone` objects specify the endpoints located in a zone. That way your distributed setup can be
seen as zones connected together instead of multiple instances in that specific zone.

-Zones can be used for [high availability](7-monitoring-remote-systems.md#cluster-scenarios-high-availability),
-[distributed setups](7-monitoring-remote-systems.md#cluster-scenarios-distributed-zones) and
-[load distribution](7-monitoring-remote-systems.md#cluster-scenarios-load-distribution).
+Zones can be used for [high availability](8-monitoring-remote-systems.md#cluster-scenarios-high-availability),
+[distributed setups](8-monitoring-remote-systems.md#cluster-scenarios-distributed-zones) and
+[load distribution](8-monitoring-remote-systems.md#cluster-scenarios-load-distribution).

Each Icinga 2 `Endpoint` must be put into its respective `Zone`. In this example, you will
define the zone `config-ha-master` where the `icinga2a` and `icinga2b` endpoints
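Such a zone definition, together with its endpoints, might look like the following sketch (the host addresses are placeholders):

    object Endpoint "icinga2a" {
      host = "192.168.2.101" /* placeholder address */
    }

    object Endpoint "icinga2b" {
      host = "192.168.2.102" /* placeholder address */
    }

    object Zone "config-ha-master" {
      endpoints = [ "icinga2a", "icinga2b" ]
    }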
@@ -1214,7 +1214,7 @@ on the configuration master.
Your child zones and endpoint members **must not** have their config copied to `zones.d`.
The built-in configuration synchronisation takes care of that if your nodes accept
configuration from the parent zone. You can define that in the
-[ApiListener](7-monitoring-remote-systems.md#configure-apilistener-object) object by configuring the `accept_config`
+[ApiListener](8-monitoring-remote-systems.md#configure-apilistener-object) object by configuring the `accept_config`
attribute accordingly.
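On a node that accepts synced configuration, the feature file might look like this sketch (the certificate paths follow the usual api feature layout and are assumptions):

    object ApiListener "api" {
      /* assumed default certificate locations */
      cert_path = SysconfDir + "/icinga2/pki/" + NodeName + ".crt"
      key_path = SysconfDir + "/icinga2/pki/" + NodeName + ".key"
      ca_path = SysconfDir + "/icinga2/pki/ca.crt"

      /* allow this node to receive config from its parent zone */
      accept_config = true
    }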

You should remove the sample config included in `conf.d` by commenting the `recursive_include`
@@ -1224,11 +1224,11 @@ statement in [icinga2.conf](4-configuring-icinga-2.md#icinga2-conf):
Better use a dedicated directory name like `cluster` or similar, and include that
one if your nodes require local configuration not being synced to other nodes. That's
-useful for local [health checks](7-monitoring-remote-systems.md#cluster-health-check) for example.
+useful for local [health checks](8-monitoring-remote-systems.md#cluster-health-check) for example.

> **Note**
>
-> In a [high availability](7-monitoring-remote-systems.md#cluster-scenarios-high-availability)
+> In a [high availability](8-monitoring-remote-systems.md#cluster-scenarios-high-availability)
> setup only one assigned node can act as configuration master. All other zone
> member nodes **must not** have the `/etc/icinga2/zones.d` directory populated.
@@ -1237,7 +1237,7 @@ to their respective target zone instances.
Each configured zone must exist with the same directory name. The parent zone
syncs the configuration to the child zones, if allowed using the `accept_config`
-attribute of the [ApiListener](7-monitoring-remote-systems.md#configure-apilistener-object) object.
+attribute of the [ApiListener](8-monitoring-remote-systems.md#configure-apilistener-object) object.

Config on node `icinga2a`:
@@ -1395,7 +1395,7 @@ additional security itself:
* Child zones only receive event updates (check results, commands, etc) for their configured updates.
* Zones cannot influence/interfere with other zones. Each checked object is assigned to only one zone.
* All nodes in a zone trust each other.
-* [Configuration sync](7-monitoring-remote-systems.md#zone-config-sync-permissions) is disabled by default.
+* [Configuration sync](8-monitoring-remote-systems.md#zone-config-sync-permissions) is disabled by default.

#### <a id="cluster-scenarios-features"></a> Features in Cluster Zones
@@ -1406,11 +1406,11 @@ re-schedule a check or acknowledge a problem on the master, and it gets replicat
actual slave checker node.

DB IDO on the left, graphite on the right side - works (if you disable
-[DB IDO HA](7-monitoring-remote-systems.md#high-availability-db-ido)).
+[DB IDO HA](8-monitoring-remote-systems.md#high-availability-db-ido)).
Icinga Web 2 on the left, checker and notifications on the right side - works too.
Everything on the left and on the right side - make sure to deal with
-[load-balanced notifications and checks](7-monitoring-remote-systems.md#high-availability-features) in a
-[HA zone](7-monitoring-remote-systems.md#cluster-scenarios-high-availability).
+[load-balanced notifications and checks](8-monitoring-remote-systems.md#high-availability-features) in a
+[HA zone](8-monitoring-remote-systems.md#cluster-scenarios-high-availability).
configure-cluster-zones

#### <a id="cluster-scenarios-distributed-zones"></a> Distributed Zones
@@ -1425,7 +1425,7 @@ graphing, etc. in their own specified zone.
Imagine the following example with a master node in Nuremberg, and two remote DMZ
based instances in Berlin and Vienna. Additionally you'll specify
-[global templates](7-monitoring-remote-systems.md#zone-global-config-templates) available in all zones.
+[global templates](8-monitoring-remote-systems.md#zone-global-config-templates) available in all zones.

The configuration tree on the master instance `nuremberg` could look like this:
@ -1489,7 +1489,7 @@ check results from the satellite nodes in the zones `berlin` and `vienna`.
> The child zones `berlin` and `vienna` will get their configuration synchronised > The child zones `berlin` and `vienna` will get their configuration synchronised
> from the configuration master 'nuremberg'. The endpoints in the child > from the configuration master 'nuremberg'. The endpoints in the child
> zones **must not** have their `zones.d` directory populated if this endpoint > zones **must not** have their `zones.d` directory populated if this endpoint
> [accepts synced configuration](7-monitoring-remote-systems.md#zone-config-sync-permissions). > [accepts synced configuration](8-monitoring-remote-systems.md#zone-config-sync-permissions).
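Accepting synced configuration is a property of the node's ApiListener; a sketch for such a child zone endpoint:

    object ApiListener "api" {
      accept_config = true
    }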
#### <a id="cluster-scenarios-load-distribution"></a> Load Distribution #### <a id="cluster-scenarios-load-distribution"></a> Load Distribution
@ -1548,15 +1548,15 @@ Zones:
> The child zone `checker` will get its configuration synchronised > The child zone `checker` will get its configuration synchronised
> from the configuration master 'master'. The endpoints in the child > from the configuration master 'master'. The endpoints in the child
> zone **must not** have their `zones.d` directory populated if this endpoint > zone **must not** have their `zones.d` directory populated if this endpoint
> [accepts synced configuration](7-monitoring-remote-systems.md#zone-config-sync-permissions). > [accepts synced configuration](8-monitoring-remote-systems.md#zone-config-sync-permissions).
#### <a id="cluster-scenarios-high-availability"></a> Cluster High Availability #### <a id="cluster-scenarios-high-availability"></a> Cluster High Availability
High availability with Icinga 2 is possible by putting multiple nodes into High availability with Icinga 2 is possible by putting multiple nodes into
a dedicated [zone](7-monitoring-remote-systems.md#configure-cluster-zones). All nodes will elect one a dedicated [zone](8-monitoring-remote-systems.md#configure-cluster-zones). All nodes will elect one
active master, and retry an election once the current active master is down. active master, and retry an election once the current active master is down.
Selected features provide advanced [HA functionality](7-monitoring-remote-systems.md#high-availability-features). Selected features provide advanced [HA functionality](8-monitoring-remote-systems.md#high-availability-features).
Checks and notifications are load-balanced between nodes in the high availability Checks and notifications are load-balanced between nodes in the high availability
zone. zone.
@ -1568,17 +1568,17 @@ commands, etc.
endpoints = [ "icinga2a", "icinga2b", "icinga2c" ] endpoints = [ "icinga2a", "icinga2b", "icinga2c" ]
} }
Two or more nodes in a high availability setup require an [initial cluster sync](7-monitoring-remote-systems.md#initial-cluster-sync). Two or more nodes in a high availability setup require an [initial cluster sync](8-monitoring-remote-systems.md#initial-cluster-sync).
> **Note** > **Note**
> >
> Keep in mind that **only one node acts as configuration master** having the > Keep in mind that **only one node acts as configuration master** having the
> configuration files in the `zones.d` directory. All other nodes **must not** > configuration files in the `zones.d` directory. All other nodes **must not**
> have that directory populated. Instead they are required to > have that directory populated. Instead they are required to
> [accept synced configuration](7-monitoring-remote-systems.md#zone-config-sync-permissions). > [accept synced configuration](8-monitoring-remote-systems.md#zone-config-sync-permissions).
> Details in the [Configuration Sync Chapter](7-monitoring-remote-systems.md#cluster-zone-config-sync). > Details in the [Configuration Sync Chapter](8-monitoring-remote-systems.md#cluster-zone-config-sync).
#### <a id="cluster-scenarios-multiple-hierachies"></a> Multiple Hierachies #### <a id="cluster-scenarios-multiple-hierarchies"></a> Multiple Hierarchies
Your master zone collects all check results for reporting and graphing and also Your master zone collects all check results for reporting and graphing and also
sends additional notifications. sends additional notifications.
@ -1610,9 +1610,9 @@ amongst them.
By default the following features provide advanced HA functionality: By default the following features provide advanced HA functionality:
* [Checks](7-monitoring-remote-systems.md#high-availability-checks) (load balanced, automated failover) * [Checks](8-monitoring-remote-systems.md#high-availability-checks) (load balanced, automated failover)
* [Notifications](7-monitoring-remote-systems.md#high-availability-notifications) (load balanced, automated failover) * [Notifications](8-monitoring-remote-systems.md#high-availability-notifications) (load balanced, automated failover)
* [DB IDO](7-monitoring-remote-systems.md#high-availability-db-ido) (Run-Once, automated failover) * [DB IDO](8-monitoring-remote-systems.md#high-availability-db-ido) (Run-Once, automated failover)
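For example, notification HA can be switched off on a node that should notify independently of the zone (beware of duplicated notifications); a sketch:

    object NotificationComponent "notification" {
      enable_ha = false
    }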
#### <a id="high-availability-checks"></a> High Availability with Checks #### <a id="high-availability-checks"></a> High Availability with Checks
@ -1682,11 +1682,11 @@ These steps are required for integrating a new cluster endpoint:
* generate a new [SSL client certificate](#certificate-authority-certificates) * generate a new [SSL client certificate](#certificate-authority-certificates)
* identify its location in the zones * identify its location in the zones
* update the `zones.conf` file on each involved node ([endpoint](7-monitoring-remote-systems.md#configure-cluster-endpoints), [zones](7-monitoring-remote-systems.md#configure-cluster-zones)) * update the `zones.conf` file on each involved node ([endpoint](8-monitoring-remote-systems.md#configure-cluster-endpoints), [zones](8-monitoring-remote-systems.md#configure-cluster-zones))
* a new slave zone node requires updates for the master and slave zones * a new slave zone node requires updates for the master and slave zones
* verify whether this endpoint requires [configuration synchronisation](7-monitoring-remote-systems.md#cluster-zone-config-sync) to be enabled * verify whether this endpoint requires [configuration synchronisation](8-monitoring-remote-systems.md#cluster-zone-config-sync) to be enabled
* if the node requires the existing zone history: [initial cluster sync](7-monitoring-remote-systems.md#initial-cluster-sync) * if the node requires the existing zone history: [initial cluster sync](8-monitoring-remote-systems.md#initial-cluster-sync)
* add a [cluster health check](7-monitoring-remote-systems.md#cluster-health-check) * add a [cluster health check](8-monitoring-remote-systems.md#cluster-health-check)
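The health check from the last step uses the built-in `cluster` check command; a sketch with a placeholder host name:

    object Service "cluster" {
      check_command = "cluster"
      check_interval = 5s
      retry_interval = 1s
      host_name = "icinga2a"
    }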
#### <a id="initial-cluster-sync"></a> Initial Cluster Sync #### <a id="initial-cluster-sync"></a> Initial Cluster Sync


@ -8,8 +8,8 @@ pages:
- [4-configuring-icinga-2.md, Configuring Icinga 2] - [4-configuring-icinga-2.md, Configuring Icinga 2]
- [5-object-types.md, Object Types] - [5-object-types.md, Object Types]
- [6-icinga-template-library.md, Icinga Template Library] - [6-icinga-template-library.md, Icinga Template Library]
- [7-monitoring-remote-systems.md, Monitoring Remote Systems] - [7-cli-commands.md, CLI Commands]
- [8-cli-commands.md, CLI Commands] - [8-monitoring-remote-systems.md, Monitoring Remote Systems]
- [9-addons-plugins.md, Addons and Plugins] - [9-addons-plugins.md, Addons and Plugins]
- [10-alternative-frontends.md, Alternative Frontends] - [10-alternative-frontends.md, Alternative Frontends]
- [11-livestatus.md, Livestatus] - [11-livestatus.md, Livestatus]