mirror of https://github.com/Icinga/icinga2.git (synced 2025-07-25 14:44:32 +02:00)

Documentation: Refactor RemoteClient/Cluster/etc from community & training feedback

fixes #8318, fixes #8522, fixes #6858

parent 77fc213d6d
commit a93b565868
@ -47,7 +47,7 @@ More details in the [Icinga FAQ](https://www.icinga.org/icinga/faq/).
 * [Register](https://exchange.icinga.org/authentication/register) an Icinga account.
 * Create a new issue at the [Icinga 2 Development Tracker](https://dev.icinga.org/projects/i2).
-* When reporting a bug, please include the details described in the [Troubleshooting](13-troubleshooting.md#troubleshooting-information-required) chapter (version, configs, logs, etc).
+* When reporting a bug, please include the details described in the [Troubleshooting](16-troubleshooting.md#troubleshooting-information-required) chapter (version, configs, logs, etc).

 ## <a id="whats-new"></a> What's new
@ -55,7 +55,7 @@ More details in the [Icinga FAQ](https://www.icinga.org/icinga/faq/).
 #### Changes

-* [DB IDO schema upgrade](14-upgrading-icinga-2.md#upgrading-icinga-2) to `1.13.0` required!
+* [DB IDO schema upgrade](17-upgrading-icinga-2.md#upgrading-icinga-2) to `1.13.0` required!

 TODO
doc/10-icinga2-client.md (new file, 736 lines)
@ -0,0 +1,736 @@
# <a id="icinga2-client"></a> Icinga 2 Client

## <a id="icinga2-client-introduction"></a> Introduction

Icinga 2 uses its own unique and secure communication protocol between instances.
Be it a High-Availability cluster setup, a distributed load-balanced setup or just a single
agent [monitoring a remote client](10-icinga2-client.md#icinga2-client).

All communication is secured by TLS with certificates, and fully supports IPv4 and IPv6.

If you are planning to use the native Icinga 2 cluster feature for distributed
monitoring and high-availability, please continue reading in
[this chapter](12-distributed-monitoring-ha.md#distributed-monitoring-high-availability).

> **Tip**
>
> Don't panic - there are CLI commands available, including setup wizards for easy installation
> with SSL certificates.
> If you prefer to use your own CA (for example Puppet) you can do that as well.
## <a id="icinga2-client-scenarios"></a> Client Scenarios

* Clients with [local configuration](10-icinga2-client.md#icinga2-client-configuration-local), sending their inventory to the master
* Clients as [command execution bridge](10-icinga2-client.md#icinga2-client-configuration-command-bridge) without local configuration
* Clients receiving their configuration from the master ([cluster config sync](10-icinga2-client.md#icinga2-client-configuration-master-config-sync))

### <a id="icinga2-client-configuration-combined-scenarios"></a> Combined Client Scenarios

If your setup consists of remote clients with local configuration, but also command execution bridges
and possibly global templates synced through the cluster config sync, you should take a deep
breath, grab pen and paper, and draw your design before you start.

Keep the following hints in mind:

* You can blacklist remote nodes entirely. They are then ignored by `node update-config`
on the master.
* Your remote instance can have local configuration **and** act as a remote command execution bridge.
* You can use `global` cluster zones to sync check commands, templates, etc. to your remote clients,
be it just for command execution or to support the local configuration.
* If your remote clients shouldn't have local configuration, remove the `conf.d` inclusion from `icinga2.conf`
and simply use the cluster configuration sync.
* `accept_config` and `accept_commands` are disabled by default in the `api` feature.

If you are planning to use the Icinga 2 client inside a distributed setup, refer to
[this chapter](12-distributed-monitoring-ha.md#cluster-scenarios-master-satellite-clients) with detailed instructions.
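A `global` cluster zone as mentioned above can be sketched as follows; the zone name `global-templates` is an example, any name works as long as `global = true` is set:

```
object Zone "global-templates" {
  global = true
}
```

Configuration placed in a matching directory below `zones.d` is then synced to all nodes that accept configuration.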
## <a id="icinga2-client-requirements"></a> Requirements

* Overview of configuration objects
* SSL certificates (manual, or using CSR auto-signing via the setup wizard)
## <a id="icinga2-client-installation"></a> Installation

### <a id="icinga2-client-installation-firewall"></a> Configure the Firewall

Icinga 2 master, satellite and client instances communicate using the default TCP
port `5665`. The communication is bi-directional, and if both connection directions are
allowed by your firewall policies, the first node to open the connection "wins".

If you are going to use CSR auto-signing, you must (temporarily) allow the client
to connect to the master instance and open the firewall port. Once the client installation
is done, you can close the port and use a different communication direction (master-to-client).
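As a sketch, opening port 5665 might look like this; the exact tool, zone names and persistence flags depend on your distribution and are assumptions here:

```shell
# firewalld (e.g. RHEL/CentOS 7):
firewall-cmd --permanent --add-port=5665/tcp
firewall-cmd --reload

# iptables:
iptables -A INPUT -p tcp --dport 5665 -j ACCEPT
```

Remember to persist iptables rules with your distribution's mechanism if you use them.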
### <a id="icinga2-client-installation-master-setup"></a> Setup the Master for Remote Clients

If you are planning to use [remote Icinga 2 clients](10-icinga2-client.md#icinga2-client)
you'll first need to update your master setup.

Your master setup requires the following:

* SSL CA and signed certificate for the master
* Enabled API feature, and a local Endpoint and Zone object configuration
* Firewall ACLs for the communication port (default 5665)

You can use the [CLI command](8-cli-commands.md#cli-command-node) `node wizard` to set up a new node
on the master. The command must be run as root; ownership of all Icinga 2 specific files
(certificate files, for example) will be updated to the icinga user the daemon is running as.

Make sure to answer the first question with `n` (no).
    # icinga2 node wizard
    Welcome to the Icinga 2 Setup Wizard!

    We'll guide you through all required configuration details.

    Please specify if this is a satellite setup ('n' installs a master setup) [Y/n]: n
    Starting the Master setup routine...
    Please specifiy the common name (CN) [icinga2-node1.localdomain]:
    information/base: Writing private key to '/var/lib/icinga2/ca/ca.key'.
    information/base: Writing X509 certificate to '/var/lib/icinga2/ca/ca.crt'.
    information/cli: Initializing serial file in '/var/lib/icinga2/ca/serial.txt'.
    information/cli: Generating new CSR in '/etc/icinga2/pki/icinga2-node1.localdomain.csr'.
    information/base: Writing private key to '/etc/icinga2/pki/icinga2-node1.localdomain.key'.
    information/base: Writing certificate signing request to '/etc/icinga2/pki/icinga2-node1.localdomain.csr'.
    information/cli: Signing CSR with CA and writing certificate to '/etc/icinga2/pki/icinga2-node1.localdomain.crt'.
    information/cli: Copying CA certificate to '/etc/icinga2/pki/ca.crt'.
    information/cli: Dumping config items to file '/etc/icinga2/zones.conf'.
    information/cli: Created backup file '/etc/icinga2/zones.conf.orig'.
    Please specify the API bind host/port (optional):
    Bind Host []:
    Bind Port []:
    information/cli: Enabling the APIlistener feature.
    Enabling feature api. Make sure to restart Icinga 2 for these changes to take effect.
    information/cli: Created backup file '/etc/icinga2/features-available/api.conf.orig'.
    information/cli: Updating constants.conf.
    information/cli: Created backup file '/etc/icinga2/constants.conf.orig'.
    information/cli: Updating constants file '/etc/icinga2/constants.conf'.
    information/cli: Updating constants file '/etc/icinga2/constants.conf'.
    Please edit the constants.conf file '/etc/icinga2/constants.conf' and set a secure 'TicketSalt' constant.
    Done.

    Now restart your Icinga 2 daemon to finish the installation!
The setup wizard will do the following:

* Generate a local CA in `/var/lib/icinga2/ca` or use the existing one
* Generate a new CSR, sign it with the local CA and copy it into `/etc/icinga2/pki`
* Generate a local zone and endpoint configuration for this master based on the FQDN
* Enable the API feature, and set optional `bind_host` and `bind_port`
* Set the `NodeName` and `TicketSalt` constants in [constants.conf](5-configuring-icinga-2.md#constants-conf)

The setup wizard does not automatically restart Icinga 2.

Verify the modified configuration:

    # egrep 'NodeName|TicketSalt' /etc/icinga2/constants.conf

    # cat /etc/icinga2/zones.conf
    /*
     * Generated by Icinga 2 node setup commands
     * on 2015-02-09 15:21:49 +0100
     */

    object Endpoint "icinga2-node1.localdomain" {
    }

    object Zone "master" {
            //this is the local node master named  = "master"
            endpoints = [ "icinga2-node1.localdomain" ]
    }

Validate the configuration and restart Icinga 2.

> **Note**
>
> This setup wizard will install a standalone master; HA cluster scenarios are currently
> not supported and require manual modifications afterwards.
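To illustrate what the verification step above matches, here is a sketch with a hypothetical `constants.conf` written to a temporary file; the constant values are examples only:

```shell
cat <<'EOF' > /tmp/constants.conf
const PluginDir = "/usr/lib/nagios/plugins"
const NodeName = "icinga2-node1.localdomain"
const TicketSalt = "example-secret-salt"
EOF

# Matches the NodeName and TicketSalt lines:
egrep 'NodeName|TicketSalt' /tmp/constants.conf
```

On a real master, run the same `egrep` against `/etc/icinga2/constants.conf`.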
## <a id="icinga2-client-setup"></a> Client Setup for Remote Monitoring

Icinga 2 can be installed on Linux/Unix and Windows. On
[Linux/Unix](10-icinga2-client.md#icinga2-client-installation-client-setup-linux) you will use the
[CLI command](8-cli-commands.md#cli-command-node) `node wizard` for a guided setup, while on Windows
you will use the graphical installer for the client setup.

Your client setup requires the following:

* A ready configured and installed [master node](10-icinga2-client.md#icinga2-client-installation-master-setup)
* An SSL-signed certificate for communication with the master (use [CSR auto-signing](10-icinga2-client.md#csr-autosigning-requirements))
* Enabled API feature, and a local Endpoint and Zone object configuration
* Firewall ACLs for the communication port (default 5665)
### <a id="csr-autosigning-requirements"></a> Requirements for CSR Auto-Signing

If your remote clients are capable of connecting to the central master, Icinga 2
supports CSR auto-signing.

First you'll need to define a secure ticket salt in [constants.conf](5-configuring-icinga-2.md#constants-conf).
The [setup wizard for the master setup](10-icinga2-client.md#icinga2-client-installation-master-setup) will already
have created one for you.

    # grep TicketSalt /etc/icinga2/constants.conf

The client setup wizard will ask you to generate a valid ticket number using its CN.
If you already know your remote clients' Common Names (CNs) - usually the FQDN - you
can generate all ticket numbers on demand.

This is also useful if you are not installing the remote client yourself, but
a colleague of yours, or a customer, is.

Example for a client:

    # icinga2 pki ticket --cn icinga2-node2.localdomain

> **Note**
>
> You can omit the `--salt` parameter if the `TicketSalt` constant is already defined in
> [constants.conf](5-configuring-icinga-2.md#constants-conf) and Icinga 2 was
> reloaded after the master setup.
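If you need tickets for several clients at once, a small loop over their CNs is a reasonable sketch; the hostnames below are examples, and the commands must run on the master with the `TicketSalt` already configured:

```shell
# Generate one request ticket per client CN (example hostnames):
for cn in icinga2-node2.localdomain icinga2-node3.localdomain; do
    printf '%s: ' "$cn"
    icinga2 pki ticket --cn "$cn"
done
```

Hand the printed tickets to whoever performs the client installations.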
### <a id="certificates-manual-creation"></a> Manual SSL Certificate Generation

This is described separately in the [cluster setup chapter](12-distributed-monitoring-ha.md#manual-certificate-generation).

> **Note**
>
> If you're using [CSR auto-signing](10-icinga2-client.md#csr-autosigning-requirements), skip this step.
### <a id="icinga2-client-installation-client-setup-linux"></a> Setup the Client on Linux

There is no extra client binary or package required. Install Icinga 2 from your distribution's package
repository as described in the general [installation instructions](2-getting-started.md#setting-up-icinga2).

Please make sure that either the [CSR auto-signing](10-icinga2-client.md#csr-autosigning-requirements) requirements
are fulfilled, or that you're using [manual SSL certificate generation](12-distributed-monitoring-ha.md#manual-certificate-generation).

> **Note**
>
> You don't need any features (DB IDO, Livestatus) or user interfaces on the remote client.
> Install them only if you're planning to use them.

Once the package installation has succeeded, use the `node wizard` CLI command to set up
the new Icinga 2 node as a client.

You'll need the following configuration details:

* The client common name (CN). Defaults to the FQDN.
* The client's local zone name. Defaults to the FQDN.
* The master endpoint name. Look into your master setup's `zones.conf` file for the proper name.
* The master endpoint connection information: your master's IP address and port (the port defaults to 5665).
* The [request ticket number](10-icinga2-client.md#csr-autosigning-requirements) generated on your master
for CSR auto-signing.
* Bind host/port for the API feature (optional).

The command must be run as root; ownership of all Icinga 2 specific files (certificate files,
for example) will be updated to the icinga user the daemon is running as. The wizard creates
backups of already existing configuration and certificate files.

Capitalized options in square brackets (e.g. `[Y/n]`) signal the default value and
allow you to continue by pressing `Enter` instead of entering a value.
    # icinga2 node wizard
    Welcome to the Icinga 2 Setup Wizard!
    We'll guide you through all required configuration details.

    Please specify if this is a satellite setup ('n' installs a master setup) [Y/n]:
    Starting the Node setup routine...
    Please specifiy the common name (CN) [icinga2-node2.localdomain]:
    Please specifiy the local zone name [icinga2-node2.localdomain]:
    Please specify the master endpoint(s) this node should connect to:
    Master Common Name (CN from your master setup): icinga2-node1.localdomain
    Please fill out the master connection information:
    Master endpoint host (optional, your master's IP address or FQDN): 192.168.56.101
    Master endpoint port (optional) []:
    Add more master endpoints? [y/N]
    Please specify the master connection for CSR auto-signing (defaults to master endpoint host):
    Host [192.168.56.101]:
    Port [5665]:
    information/base: Writing private key to '/etc/icinga2/pki/icinga2-node2.localdomain.key'.
    information/base: Writing X509 certificate to '/etc/icinga2/pki/icinga2-node2.localdomain.crt'.
    information/cli: Generating self-signed certifiate:
    information/cli: Fetching public certificate from master (192.168.56.101, 5665):

    information/cli: Writing trusted certificate to file '/etc/icinga2/pki/trusted-master.crt'.
    information/cli: Stored trusted master certificate in '/etc/icinga2/pki/trusted-master.crt'.

    Please specify the request ticket generated on your Icinga 2 master.
    (Hint: # icinga2 pki ticket --cn 'icinga2-node2.localdomain'): ead2d570e18c78abf285d6b85524970a0f69c22d
    information/cli: Processing self-signed certificate request. Ticket 'ead2d570e18c78abf285d6b85524970a0f69c22d'.

    information/cli: Writing signed certificate to file '/etc/icinga2/pki/icinga2-node2.localdomain.crt'.
    information/cli: Writing CA certificate to file '/etc/icinga2/pki/ca.crt'.
    Please specify the API bind host/port (optional):
    Bind Host []:
    Bind Port []:
    information/cli: Disabling the Notification feature.
    Disabling feature notification. Make sure to restart Icinga 2 for these changes to take effect.
    information/cli: Enabling the Apilistener feature.
    Enabling feature api. Make sure to restart Icinga 2 for these changes to take effect.
    information/cli: Created backup file '/etc/icinga2/features-available/api.conf.orig'.
    information/cli: Generating local zones.conf.
    information/cli: Dumping config items to file '/etc/icinga2/zones.conf'.
    information/cli: Created backup file '/etc/icinga2/zones.conf.orig'.
    information/cli: Updating constants.conf.
    information/cli: Created backup file '/etc/icinga2/constants.conf.orig'.
    information/cli: Updating constants file '/etc/icinga2/constants.conf'.
    information/cli: Updating constants file '/etc/icinga2/constants.conf'.
    Done.

    Now restart your Icinga 2 daemon to finish the installation!
The setup wizard will do the following:

* Generate a new self-signed certificate and copy it into `/etc/icinga2/pki`
* Store the master's certificate as trusted certificate for requesting a new signed certificate
(manual step when using `node setup`)
* Request a new signed certificate from the master and store the updated certificate and master CA in `/etc/icinga2/pki`
* Generate a local zone and endpoint configuration for this client and the provided master information
(based on the FQDN)
* Disable the `notification` feature for this client
* Enable the `api` feature, and set optional `bind_host` and `bind_port`
* Set the `NodeName` constant in [constants.conf](5-configuring-icinga-2.md#constants-conf)

The setup wizard does not automatically restart Icinga 2.

Verify the modified configuration:

    # grep 'NodeName' /etc/icinga2/constants.conf

    # cat /etc/icinga2/zones.conf
    /*
     * Generated by Icinga 2 node setup commands
     * on 2015-02-09 16:56:10 +0100
     */

    object Endpoint "icinga2-node1.localdomain" {
            host = "192.168.56.101"
    }

    object Zone "master" {
            endpoints = [ "icinga2-node1.localdomain" ]
    }

    object Endpoint "icinga2-node2.localdomain" {
    }

    object Zone "icinga2-node2.localdomain" {
            //this is the local node = "icinga2-node2.localdomain"
            endpoints = [ "icinga2-node2.localdomain" ]
            parent = "master"
    }

Validate the configuration and restart Icinga 2.

If you are getting an error when requesting the ticket number, please check the following:

* Can your client connect to the master instance?
* Is the CN the same (from `pki ticket` on the master and `node setup` on the client)?
* Has the ticket expired?
#### <a id="icinga2-client-installation-client-setup-linux-manual"></a> Manual Setup without Wizard

Instead of using the `node wizard` CLI command, there is an alternative `node setup`
CLI command available which has some prerequisites. Make sure that the
`/etc/icinga2/pki` directory exists and is owned by the `icinga` user (or the user Icinga 2 is
running as).

`icinga2-node1.localdomain` is the already installed master instance, while
`icinga2-node2.localdomain` is the instance where the installation CLI commands
are executed.

Required information:

* The client common name (CN). Use the FQDN, e.g. `icinga2-node2.localdomain`.
* The master host and zone name. Pass these to `pki save-cert` as the `--host` parameter, for example.
* The client ticket number generated on the master (`icinga2 pki ticket --cn icinga2-node2.localdomain`)
Generate a new local self-signed certificate:

    # icinga2 pki new-cert --cn icinga2-node2.localdomain \
    --key /etc/icinga2/pki/icinga2-node2.localdomain.key \
    --cert /etc/icinga2/pki/icinga2-node2.localdomain.crt

Request the master certificate from the master host (`icinga2-node1.localdomain`)
and store it as `trusted-master.crt`. Review it and continue:

    # icinga2 pki save-cert --key /etc/icinga2/pki/icinga2-node2.localdomain.key \
    --cert /etc/icinga2/pki/icinga2-node2.localdomain.crt \
    --trustedcert /etc/icinga2/pki/trusted-master.crt \
    --host icinga2-node1.localdomain

Send the self-signed certificate to the master host using the ticket number and
receive a CA-signed certificate and the master's `ca.crt` certificate.
Specify the path to the previously stored trusted master certificate:

    # icinga2 pki request --host icinga2-node1.localdomain \
    --port 5665 \
    --ticket ead2d570e18c78abf285d6b85524970a0f69c22d \
    --key /etc/icinga2/pki/icinga2-node2.localdomain.key \
    --cert /etc/icinga2/pki/icinga2-node2.localdomain.crt \
    --trustedcert /etc/icinga2/pki/trusted-master.crt \
    --ca /etc/icinga2/pki/ca.crt

Continue with the additional node setup steps. Specify a local endpoint and zone name (`icinga2-node2.localdomain`)
and set the master host (`icinga2-node1.localdomain`) as the parent zone configuration. Specify the path to
the previously stored trusted master certificate:

    # icinga2 node setup --ticket ead2d570e18c78abf285d6b85524970a0f69c22d \
    --endpoint icinga2-node1.localdomain \
    --zone icinga2-node2.localdomain \
    --master_host icinga2-node1.localdomain \
    --trustedcert /etc/icinga2/pki/trusted-master.crt

Restart Icinga 2 once complete:

    # service icinga2 restart
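The manual steps above can be collected into one small script as a sketch; `CN`, `MASTER` and `TICKET` are placeholders taken from the examples in this chapter and must be adjusted for your environment:

```shell
#!/bin/sh
set -e
CN=icinga2-node2.localdomain
MASTER=icinga2-node1.localdomain
TICKET=ead2d570e18c78abf285d6b85524970a0f69c22d
PKI=/etc/icinga2/pki

# Self-signed certificate, fetch and trust the master cert, request signing:
icinga2 pki new-cert --cn "$CN" --key "$PKI/$CN.key" --cert "$PKI/$CN.crt"
icinga2 pki save-cert --key "$PKI/$CN.key" --cert "$PKI/$CN.crt" \
    --trustedcert "$PKI/trusted-master.crt" --host "$MASTER"
icinga2 pki request --host "$MASTER" --port 5665 --ticket "$TICKET" \
    --key "$PKI/$CN.key" --cert "$PKI/$CN.crt" \
    --trustedcert "$PKI/trusted-master.crt" --ca "$PKI/ca.crt"

# Node setup and restart:
icinga2 node setup --ticket "$TICKET" --endpoint "$MASTER" \
    --zone "$CN" --master_host "$MASTER" \
    --trustedcert "$PKI/trusted-master.crt"
service icinga2 restart
```

Run it as root on the client; it stops on the first failing step thanks to `set -e`.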
### <a id="icinga2-client-installation-client-setup-windows"></a> Setup the Client on Windows

Download the MSI installer package from [http://packages.icinga.org/windows/](http://packages.icinga.org/windows/).

Requirements:

* [Microsoft .NET Framework 2.0](http://www.microsoft.com/de-de/download/details.aspx?id=1639) if not already installed.

The setup wizard will install Icinga 2 and then continue with SSL certificate generation,
CSR auto-signing and configuration setup.

You'll need the following configuration details:

* The client common name (CN). Defaults to the FQDN.
* The client's local zone name. Defaults to the FQDN.
* The master endpoint name. Look into your master setup's `zones.conf` file for the proper name.
* The master endpoint connection information: your master's IP address and port (defaults to 5665).
* The [request ticket number](10-icinga2-client.md#csr-autosigning-requirements) generated on your master
for CSR auto-signing.
* Bind host/port for the API feature (optional).

Once the installation is done, Icinga 2 is automatically started as a Windows service.

The Icinga 2 configuration is located inside the installation path and can be edited with
your favorite editor.

Configuration validation is done similar to the Linux pendant on the Windows shell:

    C:> icinga2.exe daemon -C
## <a id="icinga2-client-configuration-modes"></a> Client Configuration Modes

* Clients with [local configuration](10-icinga2-client.md#icinga2-client-configuration-local), sending their inventory to the master
* Clients as [command execution bridge](10-icinga2-client.md#icinga2-client-configuration-command-bridge) without local configuration
* Clients receiving their configuration from the master ([cluster config sync](10-icinga2-client.md#icinga2-client-configuration-master-config-sync))
### <a id="icinga2-client-configuration-local"></a> Clients with Local Configuration

This scenario is considered an independent satellite using a local scheduler, its own configuration,
and the possibility to add Icinga 2 features on demand.

There is no difference in the configuration syntax on clients compared to any other Icinga 2 installation.
You can also use additional features like notifications directly on the remote client, if
required - basically everything a single Icinga 2 instance provides by default.

The following conventions apply to remote clients:

* The hostname in the default host object should be the same as the Common Name (CN) used for the SSL setup
* New services and check commands are added locally

Locally configured checks are transferred to the central master. There are additional `node`
CLI commands available which allow you to list/add/remove/blacklist remote clients and
generate the configuration on the master.
#### <a id="icinga2-remote-monitoring-master-discovery"></a> Discover Client Services on the Master

Icinga 2 clients will sync their locally defined objects to the defined master node. That way you can
list, add, filter and remove nodes based on their `node`, `zone`, `host` or `service` name.

List all discovered nodes (satellites, agents) and their hosts/services:

    # icinga2 node list
    Node 'icinga2-node2.localdomain' (last seen: Mon Feb  9 16:58:21 2015)
        * Host 'icinga2-node2.localdomain'
            * Service 'ping4'
            * Service 'ping6'
            * Service 'ssh'
            * Service 'http'
            * Service 'disk'
            * Service 'disk /'
            * Service 'icinga'
            * Service 'load'
            * Service 'procs'
            * Service 'swap'
            * Service 'users'

Listing the node and its host(s) and service(s) does not modify the master configuration yet. You
need to generate the configuration in the next step.
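Since `node list` prints plain text, you can post-process it with standard tools. A sketch, using a shortened copy of the output above saved to a temporary file:

```shell
cat <<'EOF' > /tmp/node-list.txt
Node 'icinga2-node2.localdomain' (last seen: Mon Feb  9 16:58:21 2015)
    * Host 'icinga2-node2.localdomain'
        * Service 'ping4'
        * Service 'ping6'
        * Service 'ssh'
EOF

# Count the discovered services:
grep -c "Service '" /tmp/node-list.txt
```

On a real master, pipe `icinga2 node list` directly into `grep` instead of using a file.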
### <a id="icinga2-client-master-discovery-generate-config"></a> Generate Configuration for Client Services on the Master

There is a dedicated Icinga 2 CLI command for updating the client services on the master,
generating all required configuration:

    # icinga2 node update-config

The generated configuration of all nodes is stored in the `repository.d/` directory.

By default, the following additional configuration is generated:

* add `Endpoint` and `Zone` objects for the newly added node
* add a `cluster-zone` health check for the master host, for reachability and dependencies
* use the default templates `satellite-host` and `satellite-service` defined in `/etc/icinga2/conf.d/satellite.conf`
* apply a dependency for all other hosts on the remote satellite, preventing failure checks/notifications

If hosts or services disappeared from the client discovery, the command will remove the existing
configuration objects from the config repository. If there are existing hosts/services defined or
modified, the CLI command will not overwrite these (modified) configuration files.

After updating the configuration repository, make sure to reload Icinga 2:

    # service icinga2 reload

Using systemd:

    # systemctl reload icinga2

The `update-config` CLI command will fail if there are uncommitted changes in the
configuration repository.
Please review these changes manually, or clear the commit and try again. This is a
safety hook to prevent unwanted manual changes from being overwritten when updating only the
client-discovered objects.

    # icinga2 repository commit --simulate
    # icinga2 repository clear-changes
    # icinga2 repository commit
### <a id="icinga2-client-configuration-command-bridge"></a> Clients as Command Execution Bridge

Similar to other addons (NRPE, NSClient++, etc.) the remote Icinga 2 client will only
execute commands the master instance is sending. There are no local host or service
objects configured; only the check command definitions must be configured.

> **Note**
>
> For security reasons, remote clients must explicitly accept commands in a similar
> fashion as cluster nodes [accept configuration](#cluster-zone-config-sync).

Edit the `api` feature configuration in `/etc/icinga2/features-enabled/api.conf` on your client
and set `accept_commands` to `true`.

    object ApiListener "api" {
      cert_path = SysconfDir + "/icinga2/pki/" + NodeName + ".crt"
      key_path = SysconfDir + "/icinga2/pki/" + NodeName + ".key"
      ca_path = SysconfDir + "/icinga2/pki/ca.crt"
      accept_commands = true
    }

Icinga 2 on the remote client does not schedule checks locally, nor does it keep checking
hosts/services on connection loss. This mode also does not allow you to use features
for backend data writing (DB IDO, Perfdata, etc.) as the client does not have
local objects configured.

Icinga 2 already provides a variety of `CheckCommand` definitions using the Plugin
Check Commands, but you should also modify the local configuration inside `commands.conf`,
for example.

If you're wondering why you need to keep the same command configuration on the master and
the remote client: Icinga 2 calculates all required runtime macros used as command arguments on
the master and sends that information to the client.
In case you want to limit command arguments or handle values in a different manner, you
can modify the check command configuration on the remote client only. See [this issue](https://dev.icinga.org/issues/8221#note-3)
for more details.

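As referenced above, you can restrict what the client actually executes by shipping a modified `CheckCommand` on the client only. The following is a minimal sketch: the `my_users` command name is hypothetical (it would have to be referenced by the master's service objects), and the fixed thresholds are an assumption for illustration.

    /* local commands.conf on the remote client */
    object CheckCommand "my_users" {
      import "plugin-check-command"

      command = [ PluginDir + "/check_users" ]

      arguments = {
        /* fixed thresholds; any values calculated on the master are ignored
         * because this definition never references their runtime macros */
        "-w" = "5"
        "-c" = "10"
      }
    }
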
### <a id="icinga2-client-configuration-command-bridge-master-config"></a> Master Configuration for Clients as Command Execution Bridge

This step requires a little knowledge about the way Icinga 2 nodes communicate with and trust
each other. Each client is configured as an `Endpoint` object providing connection information.
As a matter of trust the client `Endpoint` is a member of its own `Zone` object which gets
the master zone configured as parent. That way the master knows how to connect to the client
and where to execute the check commands.

Add an `Endpoint` and `Zone` configuration object for the remote client
in `/etc/icinga2/zones.conf` and define a trusted master zone as `parent`.

    object Endpoint "icinga2-node2.localdomain" {
      host = "192.168.56.102"
    }

    object Zone "icinga2-node2.localdomain" {
      parent = "master"
      endpoints = [ "icinga2-node2.localdomain" ]
    }

More details here:

* [configure endpoints](12-distributed-monitoring-ha.md#configure-cluster-endpoints)
* [configure zones](12-distributed-monitoring-ha.md#configure-cluster-zones)

Once you have configured the required `Endpoint` and `Zone` object definitions, you can start
configuring your host and service objects. The configuration is simple: if the `command_endpoint`
attribute is set, Icinga 2 calculates all required runtime macros and sends them over to the
defined endpoint. The check result is then received asynchronously through the cluster protocol.

    object Host "host-remote" {
      import "generic-host"

      address = "127.0.0.1"
      address6 = "::1"

      vars.os = "Linux"
    }

    apply Service "users-remote" {
      import "generic-service"

      check_command = "users"
      command_endpoint = "remote-client1"

      vars.users_wgreater = 10
      vars.users_cgreater = 20

      /* assign where a remote client is set */
      assign where host.vars.remote_client
    }

If the command execution fails (for example, because the local check command configuration or the plugin
is missing), the check will return `UNKNOWN` and populate the check output with the error message.
This will happen in a similar fashion if you forgot to enable the `accept_commands` attribute
inside the `api` feature.

If you don't want to define the endpoint name inside the service apply rule every time, you can
also easily inherit it from a host's custom attribute as shown in the example below.

    object Host "host-remote" {
      import "generic-host"

      address = "127.0.0.1"
      address6 = "::1"

      vars.os = "Linux"

      vars.remote_client = "remote-client1"

      /* host specific check arguments */
      vars.users_wgreater = 10
      vars.users_cgreater = 20
    }

    apply Service "users-remote" {
      import "generic-service"

      check_command = "users"
      command_endpoint = host.vars.remote_client

      /* override (remote) command arguments with host settings */
      vars.users_wgreater = host.vars.users_wgreater
      vars.users_cgreater = host.vars.users_cgreater

      /* assign where a remote client is set */
      assign where host.vars.remote_client
    }

That way your generated host object is the information provider and the service apply
rules must only be configured once.

> **Tip**
>
> [Event commands](3-monitoring-basics.md#event-commands) are executed on the
> remote command endpoint as well. You do not need
> an additional transport layer such as SSH or similar.

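Building on the tip about event commands, a remotely executed event handler can be sketched like this. The `restart-httpd-event` command and its script path are hypothetical; the script would have to be deployed on the remote client:

    object EventCommand "restart-httpd-event" {
      import "plugin-event-command"

      /* hypothetical recovery script located on the remote client */
      command = [ SysconfDir + "/icinga2/scripts/restart-httpd.sh" ]
    }

    apply Service "http" {
      import "generic-service"

      check_command = "http"
      event_command = "restart-httpd-event"
      command_endpoint = host.vars.remote_client

      assign where host.vars.remote_client
    }
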
### <a id="icinga2-client-configuration-master-config-sync"></a> Clients with Master Config Sync

This is an advanced configuration mode which requires knowledge about the Icinga 2
cluster configuration and its object relations (zones, endpoints, etc.) and the way
the configuration is synced from the master to the remote satellite or client.

Please continue reading in the [distributed monitoring chapter](12-distributed-monitoring-ha.md#distributed-monitoring-high-availability),
especially the [configuration synchronisation section](12-distributed-monitoring-ha.md#cluster-zone-config-sync).

### <a id="icinga2-client-cli-node"></a> Advanced Node CLI Actions

#### <a id="icinga2-remote-monitoring-master-discovery-blacklist-whitelist"></a> Blacklist/Whitelist for Clients on the Master

It's sometimes necessary to `blacklist` an entire remote client, or specific hosts or services
provided by this client. While it's reasonable for the local admin to configure, for example, an
additional ping check, you're not interested in that on the master sending out notifications
and presenting the dashboard to your support team.

Blacklisting an entire set might not be sufficient for excluding several objects, for instance
when a specific remote client provides one ping service you're still interested in. Therefore you can `whitelist`
clients, hosts and services in a similar manner.

Example for blacklisting all `ping*` services, but allowing only the `probe` host with `ping4`:

    # icinga2 node blacklist add --zone "*" --host "*" --service "ping*"
    # icinga2 node whitelist add --zone "*" --host "probe" --service "ping*"

You can `list` and `remove` existing blacklists:

    # icinga2 node blacklist list
    Listing all blacklist entries:
    blacklist filter for Node: '*' Host: '*' Service: 'ping*'.

    # icinga2 node whitelist list
    Listing all whitelist entries:
    whitelist filter for Node: '*' Host: 'probe' Service: 'ping*'.

> **Note**
>
> The `--zone` and `--host` arguments are required. A zone is always where the remote client is in.
> If you are unsure about it, set a wildcard (`*`) for them and filter only by host/services.

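To remove an entry again, pass the same filter arguments to the `remove` sub-command. The invocation below mirrors the `add` syntax shown above and is an assumption based on the `list`/`remove` mention:

    # icinga2 node blacklist remove --zone "*" --host "*" --service "ping*"
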
#### <a id="icinga2-client-master-discovery-manual"></a> Manually Discover Clients on the Master

Add a to-be-discovered client to the master:

    # icinga2 node add my-remote-client

Set the connection details, and the Icinga 2 master will attempt to connect to this node and sync its
object repository.

    # icinga2 node set my-remote-client --host 192.168.33.101 --port 5665

You can verify that by calling the `node list` command:

    # icinga2 node list
    Node 'my-remote-client' (host: 192.168.33.101, port: 5665, log duration: 1 day, last seen: Sun Nov  2 17:46:29 2014)

#### <a id="icinga2-remote-monitoring-master-discovery-remove"></a> Remove Discovered Clients

If you don't require a connected agent, you can manually remove it and its discovered hosts and services
using the following CLI command:

    # icinga2 node remove my-discovered-agent

> **Note**
>
> Better use [blacklists and/or whitelists](10-icinga2-client.md#icinga2-remote-monitoring-master-discovery-blacklist-whitelist)
> to control which clients and hosts/services are integrated into your master configuration repository.

360
doc/11-agent-based-checks.md
Normal file
@ -0,0 +1,360 @@
# <a id="agent-based-checks-addon"></a> Additional Agent-based Checks

If the remote services are not directly accessible through the network, a
local agent installation exposing the results to check queries can
come in handy.

## <a id="agent-based-checks-snmp"></a> SNMP

The SNMP daemon runs on the remote system and answers SNMP queries by plugin
binaries. The [Monitoring Plugins package](2-getting-started.md#setting-up-check-plugins) ships
the `check_snmp` plugin binary, but there are plenty of [existing plugins](13-addons-plugins.md#plugins)
for specific use cases already around, for example for monitoring Cisco routers.

The following example uses the [SNMP ITL](7-icinga-template-library.md#plugin-check-command-snmp) `CheckCommand` and just
overrides the `snmp_oid` custom attribute. A service is created for all hosts which
have the `snmp_community` custom attribute.

    apply Service "uptime" {
      import "generic-service"

      check_command = "snmp"
      vars.snmp_oid = "1.3.6.1.2.1.1.3.0"
      vars.snmp_miblist = "DISMAN-EVENT-MIB"

      assign where host.vars.snmp_community != ""
    }

Additional SNMP plugins are available using the [Manubulon SNMP Plugins](7-icinga-template-library.md#snmp-manubulon-plugin-check-commands).

If no `snmp_miblist` is specified, the plugin will default to `ALL`. As the number of available MIB files
on the system increases, so will the load generated by this plugin if no `MIB` is specified.
As such, it is recommended to always specify at least one `MIB`.

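As a sketch of a Manubulon-based check, a CPU load service could look like the following. The `snmp-load` `CheckCommand` name and the `snmp_community` attribute are assumptions based on the ITL conventions used above; verify them against the template library reference:

    apply Service "load" {
      import "generic-service"

      check_command = "snmp-load"
      vars.snmp_community = host.vars.snmp_community

      assign where host.vars.snmp_community != ""
    }
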
## <a id="agent-based-checks-ssh"></a> SSH

Calling a plugin using the SSH protocol executes the plugin on the remote server and fetches
its return code and output. The `by_ssh` command object is part of the built-in templates and
requires the `check_by_ssh` check plugin which is available in the [Monitoring Plugins package](2-getting-started.md#setting-up-check-plugins).

    object CheckCommand "by_ssh_swap" {
      import "by_ssh"

      vars.by_ssh_command = "/usr/lib/nagios/plugins/check_swap -w $by_ssh_swap_warn$ -c $by_ssh_swap_crit$"
      vars.by_ssh_swap_warn = "75%"
      vars.by_ssh_swap_crit = "50%"
    }

    object Service "swap" {
      import "generic-service"

      host_name = "remote-ssh-host"

      check_command = "by_ssh_swap"

      vars.by_ssh_logname = "icinga"
    }

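If several hosts should receive this check, the same service can be expressed as an apply rule instead of a single `Service` object. The `remote_ssh` custom attribute used as a marker here is hypothetical:

    apply Service "swap" {
      import "generic-service"

      check_command = "by_ssh_swap"

      vars.by_ssh_logname = "icinga"

      /* hypothetical marker attribute set on SSH-checked hosts */
      assign where host.vars.remote_ssh
    }
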
## <a id="agent-based-checks-nrpe"></a> NRPE

[NRPE](http://docs.icinga.org/latest/en/nrpe.html) runs as a daemon on the remote client including
the required plugins and command definitions.
Icinga 2 calls the `check_nrpe` plugin binary in order to query the configured command on the
remote client.

> **Note**
>
> The NRPE protocol is considered insecure and has multiple flaws in its
> design. Upstream is not willing to fix these issues.
>
> In order to stay safe, please use the native [Icinga 2 client](10-icinga2-client.md#icinga2-client)
> instead.

The NRPE daemon uses its own configuration format in `nrpe.cfg` while `check_nrpe`
can be embedded into the Icinga 2 `CheckCommand` configuration syntax.

You can use the `check_nrpe` plugin from the NRPE project to query the NRPE daemon.
Icinga 2 provides the [nrpe check command](7-icinga-template-library.md#plugin-check-command-nrpe) for this.

Example:

    object Service "users" {
      import "generic-service"

      host_name = "remote-nrpe-host"

      check_command = "nrpe"
      vars.nrpe_command = "check_users"
    }

nrpe.cfg:

    command[check_users]=/usr/local/icinga/libexec/check_users -w 5 -c 10

If you are planning to pass arguments to NRPE using the `-a`
command line parameter, make sure that your NRPE daemon has them
supported and enabled.

> **Note**
>
> Enabling command arguments in NRPE is considered harmful
> and exposes a security risk allowing attackers to execute
> commands remotely. Details at [seclists.org](http://seclists.org/fulldisclosure/2014/Apr/240).

The plugin check command `nrpe` provides the `nrpe_arguments` custom
attribute which expects either a single value or an array of values.

Example:

    object Service "nrpe-disk-/" {
      import "generic-service"

      host_name = "remote-nrpe-host"

      check_command = "nrpe"
      vars.nrpe_command = "check_disk"
      vars.nrpe_arguments = [ "20%", "10%", "/" ]
    }

Icinga 2 will execute the nrpe plugin like this:

    /usr/lib/nagios/plugins/check_nrpe -H <remote-nrpe-host> -c 'check_disk' -a '20%' '10%' '/'

NRPE expects all additional arguments in an ordered fashion
and interprets the first value as `$ARG1$` macro, the second
value as `$ARG2$`, and so on.

nrpe.cfg:

    command[check_disk]=/usr/local/icinga/libexec/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$

Using the above example with `nrpe_arguments`, the command
executed by the NRPE daemon looks similar to this:

    /usr/local/icinga/libexec/check_disk -w 20% -c 10% -p /

You can pass arguments in a similar manner to [NSClient++](11-agent-based-checks.md#agent-based-checks-nsclient)
when using its NRPE supported check method.

## <a id="agent-based-checks-nsclient"></a> NSClient++

[NSClient++](http://nsclient.org) works on both Windows and Linux platforms and is well
known for its magnificent Windows support. There are alternatives like the WMI interface,
but using `NSClient++` will allow you to run local scripts similar to check plugins fetching
the required output and performance counters.

You can use the `check_nt` plugin from the Monitoring Plugins project to query NSClient++.
Icinga 2 provides the [nscp check command](7-icinga-template-library.md#plugin-check-command-nscp) for this.

Example:

    object Service "disk" {
      import "generic-service"

      host_name = "remote-windows-host"

      check_command = "nscp"

      vars.nscp_variable = "USEDDISKSPACE"
      vars.nscp_params = "c"
      vars.nscp_warn = 70
      vars.nscp_crit = 80
    }

For details on the `NSClient++` configuration please refer to the [official documentation](http://www.nsclient.org/nscp/wiki/doc/configuration/0.4.x).

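Another typical `check_nt` query is CPU load. The following is a hedged variant of the example above; the `CPULOAD` variable and its interval/warning/critical triplet follow `check_nt` conventions, but verify the exact `nscp_params` format against your plugin version:

    object Service "cpuload" {
      import "generic-service"

      host_name = "remote-windows-host"

      check_command = "nscp"

      vars.nscp_variable = "CPULOAD"
      /* interval (minutes), warning and critical thresholds */
      vars.nscp_params = "10,80,90"
    }
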
## <a id="agent-based-checks-nsca-ng"></a> NSCA-NG

[NSCA-ng](http://www.nsca-ng.org) provides a client-server pair that allows the
remote sender to push check results into the Icinga 2 `ExternalCommandListener`
feature.

> **Note**
>
> This addon works in a similar fashion to the Icinga 1.x distributed model. If you
> are looking for a real distributed architecture with Icinga 2, scroll down.

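On the receiving side this requires the external command pipe to be enabled. A minimal sketch of the listener object, shown with the default command pipe path as an assumption:

    object ExternalCommandListener "command" {
      command_path = "/var/run/icinga2/cmd/icinga2.cmd"
    }
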
## <a id="agent-based-checks-snmp-traps"></a> Passive Check Results and SNMP Traps

SNMP Traps can be received and filtered by using [SNMPTT](http://snmptt.sourceforge.net/)
and specific trap handlers passing the check results to Icinga 2.

Following the SNMPTT [Format](http://snmptt.sourceforge.net/docs/snmptt.shtml#SNMPTT.CONF-FORMAT)
documentation and the Icinga external command syntax found [here](22-appendix.md#external-commands-list-detail)
we can create generic services that can accommodate any number of hosts for a given scenario.

### <a id="simple-traps"></a> Simple SNMP Traps

A simple example might be monitoring host reboots indicated by an SNMP agent reset.
Building the event to auto-reset after dispatching a notification is important.
Set up the manual check parameters to reset the event from an initial unhandled
state or from a missed reset event.

Add a directive in `snmptt.conf`:

    EVENT coldStart .1.3.6.1.6.3.1.1.5.1 "Status Events" Normal
    FORMAT Device reinitialized (coldStart)
    EXEC echo "[$@] PROCESS_SERVICE_CHECK_RESULT;$A;Coldstart;2;The snmp agent has reinitialized." >> /var/run/icinga2/cmd/icinga2.cmd
    SDESC
    A coldStart trap signifies that the SNMPv2 entity, acting
    in an agent role, is reinitializing itself and that its
    configuration may have been altered.
    EDESC

1. Define the `EVENT` as per your need.
2. Construct the `EXEC` statement with the service name matching your template
   applied to your _n_ hosts. The host address inferred by SNMPTT will be the
   correlating factor. You can have snmptt provide host names or IP addresses to
   match your Icinga convention.

Add an `EventCommand` configuration object for the passive service auto-reset event.

    object EventCommand "coldstart-reset-event" {
      import "plugin-event-command"

      command = [ SysconfDir + "/icinga2/conf.d/custom/scripts/coldstart_reset_event.sh" ]

      arguments = {
        "-i" = "$service.state_id$"
        "-n" = "$host.name$"
        "-s" = "$service.name$"
      }
    }

Create the `coldstart_reset_event.sh` shell script to pass the expanded variable
data in. The `$service.state_id$` is important in order to prevent an endless loop
of event firing after the service has been reset.

    #!/bin/bash

    SERVICE_STATE_ID=""
    HOST_NAME=""
    SERVICE_NAME=""

    show_help()
    {
    cat <<-EOF
    Usage: ${0##*/} [-h] -i SERVICE_STATE_ID -n HOST_NAME -s SERVICE_NAME
    Writes a coldstart reset event to the Icinga command pipe.

      -h                   Display this help and exit.
      -i SERVICE_STATE_ID  The associated service state id.
      -n HOST_NAME         The associated host name.
      -s SERVICE_NAME      The associated service name.
    EOF
    }

    while getopts "hi:n:s:" opt; do
        case "$opt" in
          h)
            show_help
            exit 0
            ;;
          i)
            SERVICE_STATE_ID=$OPTARG
            ;;
          n)
            HOST_NAME=$OPTARG
            ;;
          s)
            SERVICE_NAME=$OPTARG
            ;;
          '?')
            show_help
            exit 1
            ;;
        esac
    done

    if [ -z "$SERVICE_STATE_ID" ]; then
        show_help
        printf "\n  Error: -i required.\n"
        exit 1
    fi

    if [ -z "$HOST_NAME" ]; then
        show_help
        printf "\n  Error: -n required.\n"
        exit 1
    fi

    if [ -z "$SERVICE_NAME" ]; then
        show_help
        printf "\n  Error: -s required.\n"
        exit 1
    fi

    if [ "$SERVICE_STATE_ID" -gt 0 ]; then
        echo "[`date +%s`] PROCESS_SERVICE_CHECK_RESULT;$HOST_NAME;$SERVICE_NAME;0;Auto-reset (`date +"%m-%d-%Y %T"`)." >> /var/run/icinga2/cmd/icinga2.cmd
    fi

Finally create the `Service` and assign it:

    apply Service "Coldstart" {
      import "generic-service-custom"

      check_command = "dummy"
      event_command = "coldstart-reset-event"

      enable_notifications = 1
      enable_active_checks = 0
      enable_passive_checks = 1
      enable_flapping = 0
      volatile = 1
      enable_perfdata = 0

      vars.dummy_state = 0
      vars.dummy_text = "Manual reset."

      vars.sla = "24x7"

      assign where (host.vars.os == "Linux" || host.vars.os == "Windows")
    }

### <a id="complex-traps"></a> Complex SNMP Traps

A more complex example might be passing dynamic data from a trap's varbind list
for a backup scenario where the backup software dispatches status updates. By
utilizing active and passive checks, the older freshness concept can be leveraged.

By defining the active check as a hard failed state, a missed backup can be reported.
As long as the most recent passive update has occurred, the active check is bypassed.

Add a directive in `snmptt.conf`:

    EVENT enterpriseSpecific <YOUR OID> "Status Events" Normal
    FORMAT Enterprise specific trap
    EXEC echo "[$@] PROCESS_SERVICE_CHECK_RESULT;$A;$1;$2;$3" >> /var/run/icinga2/cmd/icinga2.cmd
    SDESC
    An enterprise specific trap.
    The varbinds in order denote the Icinga service name, state and text.
    EDESC

1. Define the `EVENT` as per your need using your actual OID.
2. The service name, state and text are extracted from the first three varbinds.
   This has the advantage of accommodating an unlimited set of use cases.

Create a `Service` for the specific use case associated with the host. If the host
matches and the first varbind value is `Backup`, SNMPTT will submit the corresponding
passive update with the state and text from the second and third varbind:

    object Service "Backup" {
      import "generic-service-custom"

      host_name = "host.domain.com"
      check_command = "dummy"

      enable_notifications = 1
      enable_active_checks = 1
      enable_passive_checks = 1
      enable_flapping = 0
      volatile = 1
      max_check_attempts = 1
      check_interval = 87000
      enable_perfdata = 0

      vars.sla = "24x7"
      vars.dummy_state = 2
      vars.dummy_text = "No passive check result received."
    }

784
doc/12-distributed-monitoring-ha.md
Normal file
@ -0,0 +1,784 @@
# <a id="distributed-monitoring-high-availability"></a> Distributed Monitoring and High Availability

Building distributed environments with high availability included is fairly easy with Icinga 2.
The cluster feature is built-in and allows you to build many scenarios based on your requirements:

* [High Availability](12-distributed-monitoring-ha.md#cluster-scenarios-high-availability). All instances in the `Zone` elect one active master and run as Active/Active cluster.
* [Distributed Zones](12-distributed-monitoring-ha.md#cluster-scenarios-distributed-zones). A master zone and one or more satellites in their zones.
* [Load Distribution](12-distributed-monitoring-ha.md#cluster-scenarios-load-distribution). A configuration master and multiple checker satellites.

You can combine these scenarios into a global setup fitting your requirements.

Each instance has its own event scheduler and does not depend on a centralized master
coordinating and distributing the events. In case of a cluster failure, all nodes
continue to run independently. Be alarmed when your cluster fails and a split-brain scenario
is in effect: all alive instances continue to do their job, and history will begin to differ.

## <a id="cluster-requirements"></a> Cluster Requirements

Before you start deploying, keep the following things in mind:

* Your [SSL CA and certificates](12-distributed-monitoring-ha.md#manual-certificate-generation) are mandatory for secure communication.
* Get pen and paper or a drawing board and design your nodes and zones!
    * All nodes in a cluster zone provide high availability functionality and trust each other.
    * Cluster zones can be built in a top-down design where the child trusts the parent.
    * Communication between zones happens bi-directionally, which means that a DMZ-located node can still reach the master node, or vice versa.
* Update firewall rules and ACLs.
* Decide whether to use the built-in [configuration synchronization](12-distributed-monitoring-ha.md#cluster-zone-config-sync) or use an external tool (Puppet, Ansible, Chef, Salt, etc.) to manage the configuration deployment.

> **Tip**
>
> If you're looking for troubleshooting cluster problems, check the general
> [troubleshooting](16-troubleshooting.md#troubleshooting-cluster) section.

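Regarding the firewall rules mentioned above: cluster nodes communicate on TCP port 5665 by default. A hedged example for firewalld-based systems (adapt to your firewall tooling):

    # firewall-cmd --permanent --add-port=5665/tcp
    # firewall-cmd --reload
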
## <a id="manual-certificate-generation"></a> Manual SSL Certificate Generation

Icinga 2 provides [CLI commands](8-cli-commands.md#cli-command-pki) assisting with CA
and node certificate creation for your Icinga 2 distributed setup.

> **Tip**
>
> You can also use the master and client setup wizards to install the cluster nodes
> using CSR-Autosigning.
>
> The manual steps are helpful if you want to use your own and/or existing CA (for example
> a Puppet CA).

> **Note**
>
> You're free to use your own method to generate a valid CA and signed client
> certificates.

The first step is the creation of the certificate authority (CA) by running the
following command:

    # icinga2 pki new-ca

Now create a certificate and key file for each node by running the following commands
(replace `icinga2a` with the required hostname):

    # icinga2 pki new-cert --cn icinga2a --key icinga2a.key --csr icinga2a.csr
    # icinga2 pki sign-csr --csr icinga2a.csr --cert icinga2a.crt

Repeat this step for all nodes in your cluster scenario.

Save the CA key in a secure location in case you want to set up certificates for
additional nodes at a later time.

Navigate to the location of your newly generated certificate files, and manually
copy/transfer them to `/etc/icinga2/pki` in your Icinga 2 configuration folder.

> **Note**
>
> The certificate files must be readable by the user Icinga 2 is running as. Also,
> the private key file must not be world-readable.

Each node requires the following files in `/etc/icinga2/pki` (replace `fqdn-nodename` with
the host's FQDN):

* ca.crt
* <fqdn-nodename>.crt
* <fqdn-nodename>.key

If you're planning to use your existing CA and certificates please note that you *must not*
use wildcard certificates. The common name (CN) is mandatory for the cluster communication and
therefore must be unique for each connecting instance.

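To verify that the common name of a generated certificate matches the intended node name, you can inspect it with OpenSSL (a generic `openssl` invocation, not an Icinga CLI command):

    # openssl x509 -in icinga2a.crt -noout -subject
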
### <a id="cluster-naming-convention"></a> Cluster Naming Convention

The SSL certificate common name (CN) will be used by the [ApiListener](6-object-types.md#objecttype-apilistener)
object to determine the local authority. This name must match the local [Endpoint](6-object-types.md#objecttype-endpoint)
object name.

Example:

    # icinga2 pki new-cert --cn icinga2a --key icinga2a.key --csr icinga2a.csr
    # icinga2 pki sign-csr --csr icinga2a.csr --cert icinga2a.crt

    # vim zones.conf

    object Endpoint "icinga2a" {
      host = "icinga2a.icinga.org"
    }

The [Endpoint](6-object-types.md#objecttype-endpoint) name is further referenced as `endpoints` attribute on the
[Zone](6-object-types.md#objecttype-zone) object.

    object Endpoint "icinga2b" {
      host = "icinga2b.icinga.org"
    }

    object Zone "config-ha-master" {
      endpoints = [ "icinga2a", "icinga2b" ]
    }

Specifying the local node name using the [NodeName](12-distributed-monitoring-ha.md#configure-nodename) variable requires
the same name as used for the endpoint name and common name above. If not set, the FQDN is used.

    const NodeName = "icinga2a"

## <a id="cluster-configuration"></a> Cluster Configuration

The following section describes which configuration must be updated and/or created
in order to get your cluster running with basic functionality.

* [configure the node name](12-distributed-monitoring-ha.md#configure-nodename)
* [configure the ApiListener object](12-distributed-monitoring-ha.md#configure-apilistener-object)
* [configure cluster endpoints](12-distributed-monitoring-ha.md#configure-cluster-endpoints)
* [configure cluster zones](12-distributed-monitoring-ha.md#configure-cluster-zones)

Once you're finished with the basic setup the following section will
describe how to use [zone configuration synchronisation](12-distributed-monitoring-ha.md#cluster-zone-config-sync)
and how to configure [cluster scenarios](12-distributed-monitoring-ha.md#cluster-scenarios).

### <a id="configure-nodename"></a> Configure the Icinga Node Name

Instead of using the default FQDN as node name you can optionally set
that value using the [NodeName](19-language-reference.md#constants) constant.

> **Note**
>
> Skip this step if your FQDN already matches the default `NodeName` set
> in `/etc/icinga2/constants.conf`.

This setting must be unique for each node, and must also match
the name of the local [Endpoint](6-object-types.md#objecttype-endpoint) object and the
SSL certificate common name as described in the
[cluster naming convention](12-distributed-monitoring-ha.md#cluster-naming-convention).

    vim /etc/icinga2/constants.conf

    /* Our local instance name. By default this is the server's hostname as returned by `hostname --fqdn`.
     * This should be the common name from the API certificate.
     */
    const NodeName = "icinga2a"

Read further about additional [naming conventions](12-distributed-monitoring-ha.md#cluster-naming-convention).

If you do not specify the node name, Icinga 2 will use the FQDN. Make sure that all
configured endpoint names and common names are in sync.
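The endpoint/common-name consistency above can be sanity-checked with a few lines of shell. This is only a sketch: it operates on inlined sample files (`/tmp/constants-sample.conf`, `/tmp/zones-sample.conf` are illustrative stand-ins for your real `constants.conf` and `zones.conf`).

```shell
# Write sample config files standing in for the real ones.
cat > /tmp/constants-sample.conf <<'EOF'
const NodeName = "icinga2a"
EOF
cat > /tmp/zones-sample.conf <<'EOF'
object Endpoint "icinga2a" {
  host = "icinga2a.icinga.org"
}
EOF

# Extract the configured NodeName and look for a matching Endpoint object.
node_name=$(sed -n 's/^const NodeName = "\(.*\)"$/\1/p' /tmp/constants-sample.conf)
if grep -q "object Endpoint \"$node_name\"" /tmp/zones-sample.conf; then
  echo "OK: NodeName '$node_name' has a matching Endpoint object"
else
  echo "MISMATCH: no Endpoint object named '$node_name'"
fi
```

Running the sketch against the sample files prints the `OK` line; a renamed endpoint would surface as `MISMATCH`.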
### <a id="configure-apilistener-object"></a> Configure the ApiListener Object

The [ApiListener](6-object-types.md#objecttype-apilistener) object needs to be configured on
every node in the cluster with the following settings:

A sample config looks like:

    object ApiListener "api" {
      cert_path = SysconfDir + "/icinga2/pki/" + NodeName + ".crt"
      key_path = SysconfDir + "/icinga2/pki/" + NodeName + ".key"
      ca_path = SysconfDir + "/icinga2/pki/ca.crt"
      accept_config = true
      accept_commands = true
    }

You can simply enable the `api` feature using

    # icinga2 feature enable api

Edit `/etc/icinga2/features-enabled/api.conf` if you require the configuration
synchronisation enabled for this node. Set the `accept_config` attribute to `true`.

If you want to use this node as a [remote client for command execution](10-icinga2-client.md#icinga2-client-configuration-command-bridge),
set the `accept_commands` attribute to `true`.

> **Note**
>
> The certificate files must be readable by the user Icinga 2 is running as. Also,
> the private key file must not be world-readable.
### <a id="configure-cluster-endpoints"></a> Configure Cluster Endpoints

`Endpoint` objects specify the `host` and `port` settings for the cluster node
connections. This configuration can be the same on all nodes in the cluster,
containing connection information only.

A sample configuration looks like:

    /**
     * Configure config master endpoint
     */

    object Endpoint "icinga2a" {
      host = "icinga2a.icinga.org"
    }

If this endpoint object is reachable on a different port, you must configure the
`ApiListener` on the local `Endpoint` object accordingly too.

If you don't want the local instance to connect to the remote instance, remove the
`host` attribute locally. Keep in mind that the configuration is then different amongst
all instances and depends on the point of view.
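As a sketch for the non-default port case mentioned above: the `Endpoint` object accepts a `port` attribute (which defaults to `5665`), so a deviating port can be declared directly on the endpoint.

    object Endpoint "icinga2a" {
      host = "icinga2a.icinga.org"
      port = 5665
    }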
### <a id="configure-cluster-zones"></a> Configure Cluster Zones

`Zone` objects specify the endpoints located in a zone. That way your distributed setup can be
seen as zones connected together instead of multiple instances in each specific zone.

Zones can be used for [high availability](12-distributed-monitoring-ha.md#cluster-scenarios-high-availability),
[distributed setups](12-distributed-monitoring-ha.md#cluster-scenarios-distributed-zones) and
[load distribution](12-distributed-monitoring-ha.md#cluster-scenarios-load-distribution).
Furthermore zones are used for the [Icinga 2 remote client](10-icinga2-client.md#icinga2-client).

Each Icinga 2 `Endpoint` must be put into its respective `Zone`. In this example, you will
define the zone `config-ha-master` where the `icinga2a` and `icinga2b` endpoints
are located. The `check-satellite` zone consists of `icinga2c` only, but more nodes could
be added.

The `config-ha-master` zone acts as High-Availability setup - the Icinga 2 instances elect
one active master where all features are running on (for example `icinga2a`). In case of
failure of the `icinga2a` instance, `icinga2b` will take over automatically.

    object Zone "config-ha-master" {
      endpoints = [ "icinga2a", "icinga2b" ]
    }

The `check-satellite` zone is a separate location and only sends back its check results to
the defined parent zone `config-ha-master`.

    object Zone "check-satellite" {
      endpoints = [ "icinga2c" ]
      parent = "config-ha-master"
    }
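Every name listed in a zone's `endpoints` array must have a matching `Endpoint` object. A quick sanity check for that can be sketched in shell; the file path and object names below are illustrative (the sample `zones.conf` deliberately references an undefined `icinga2b`).

```shell
# Inline a sample zones.conf with one missing Endpoint definition.
cat > /tmp/zones-check.conf <<'EOF'
object Endpoint "icinga2a" {
  host = "icinga2a.icinga.org"
}

object Zone "config-ha-master" {
  endpoints = [ "icinga2a", "icinga2b" ]
}
EOF

# Collect defined Endpoint names and names referenced from endpoints arrays.
defined=$(sed -n 's/^object Endpoint "\([^"]*\)".*/\1/p' /tmp/zones-check.conf)
referenced=$(sed -n 's/.*endpoints = \[\(.*\)\].*/\1/p' /tmp/zones-check.conf | tr -d '",')

# Report every referenced endpoint that has no Endpoint object.
missing=""
for ep in $referenced; do
  echo "$defined" | grep -qx "$ep" || missing="$missing $ep"
done
echo "missing Endpoint objects:$missing"
```

For the sample file the sketch reports `icinga2b` as missing, which is exactly the kind of mismatch that otherwise only shows up at runtime.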
## <a id="cluster-zone-config-sync"></a> Zone Configuration Synchronisation

By default all objects for specific zones should be organized in

    /etc/icinga2/zones.d/<zonename>

on the configuration master.

Your child zones and endpoint members **must not** have their config copied to `zones.d`.
The built-in configuration synchronisation takes care of that if your nodes accept
configuration from the parent zone. You can define that in the
[ApiListener](12-distributed-monitoring-ha.md#configure-apilistener-object) object by configuring the `accept_config`
attribute accordingly.

You should remove the sample config included in `conf.d` by commenting out the `include_recursive`
statement in [icinga2.conf](5-configuring-icinga-2.md#icinga2-conf):

    //include_recursive "conf.d"

Better use a dedicated directory name like `cluster` or similar, and include that
one if your nodes require local configuration which is not synced to other nodes. That's
useful for local [health checks](12-distributed-monitoring-ha.md#cluster-health-check) for example.

> **Note**
>
> In a [high availability](12-distributed-monitoring-ha.md#cluster-scenarios-high-availability)
> setup only one assigned node can act as configuration master. All other zone
> member nodes **must not** have the `/etc/icinga2/zones.d` directory populated.

These zone packages are then distributed to all nodes in the same zone, and
to their respective target zone instances.

Each configured zone must exist with the same directory name. The parent zone
syncs the configuration to the child zones, if allowed using the `accept_config`
attribute of the [ApiListener](12-distributed-monitoring-ha.md#configure-apilistener-object) object.

Config on node `icinga2a`:

    object Zone "master" {
      endpoints = [ "icinga2a" ]
    }

    object Zone "checker" {
      endpoints = [ "icinga2b" ]
      parent = "master"
    }

    /etc/icinga2/zones.d
      master
        health.conf
      checker
        health.conf
        demo.conf

Config on node `icinga2b`:

    object Zone "master" {
      endpoints = [ "icinga2a" ]
    }

    object Zone "checker" {
      endpoints = [ "icinga2b" ]
      parent = "master"
    }

    /etc/icinga2/zones.d
      EMPTY_IF_CONFIG_SYNC_ENABLED

If the local configuration is newer than the received update, Icinga 2 will skip the synchronisation
process.

> **Note**
>
> `zones.d` must not be included in [icinga2.conf](5-configuring-icinga-2.md#icinga2-conf). Icinga 2 automatically
> determines the required include directory. This can be overridden using the
> [global constant](19-language-reference.md#constants) `ZonesDir`.

### <a id="zone-global-config-templates"></a> Global Configuration Zone for Templates

If your zone configuration setup shares the same templates, groups, commands, timeperiods, etc.,
you would have to duplicate quite a lot of configuration objects to keep the merged configuration
on your configuration master unique.

> **Note**
>
> Only put templates, groups, etc. into this zone. DO NOT add checkable objects such as
> hosts or services here. If they are checked by all instances globally, this will lead
> to duplicated check results and unclear state history. It is not easy to troubleshoot either -
> you've been warned.

This duplication is not necessary if you define a global zone shipping all those templates. By setting
`global = true` you ensure that this zone serving common configuration templates will be
synchronised to all involved nodes (only if they accept configuration though).

Config on the configuration master:

    /etc/icinga2/zones.d
      global-templates/
        templates.conf
        groups.conf
      master
        health.conf
      checker
        health.conf
        demo.conf

In this example, the global zone is called `global-templates` and must be defined in
your zone configuration visible to all nodes.

    object Zone "global-templates" {
      global = true
    }

> **Note**
>
> If the remote node does not have this zone configured, it will ignore the configuration
> update, even if it accepts synchronised configuration.

If you don't require any global configuration, skip this setting.

### <a id="zone-config-sync-permissions"></a> Zone Configuration Synchronisation Permissions

Each [ApiListener](6-object-types.md#objecttype-apilistener) object must have the `accept_config` attribute
set to `true` to receive configuration from the parent `Zone` members. The default value is `false`.

    object ApiListener "api" {
      cert_path = SysconfDir + "/icinga2/pki/" + NodeName + ".crt"
      key_path = SysconfDir + "/icinga2/pki/" + NodeName + ".key"
      ca_path = SysconfDir + "/icinga2/pki/ca.crt"
      accept_config = true
    }

If `accept_config` is set to `false`, this instance won't accept configuration from remote
master instances anymore.

> **Tip**
>
> Look into the [troubleshooting guides](16-troubleshooting.md#troubleshooting-cluster-config-sync) for debugging
> problems with the configuration synchronisation.

## <a id="cluster-health-check"></a> Cluster Health Check

The Icinga 2 [ITL](7-icinga-template-library.md#icinga-template-library) ships an internal check command checking all configured
endpoints in the cluster setup. The check result will become critical if
one or more configured nodes are not connected.

Example:

    object Service "cluster" {
      check_command = "cluster"
      check_interval = 5s
      retry_interval = 1s

      host_name = "icinga2a"
    }

Each cluster node should execute its own local cluster health check to
get an idea about network related connection problems from different
points of view.

Additionally you can monitor the connection from the local zone to the remote
connected zones.

Example for the `checker` zone checking the connection to the `master` zone:

    object Service "cluster-zone-master" {
      check_command = "cluster-zone"
      check_interval = 5s
      retry_interval = 1s
      vars.cluster_zone = "master"

      host_name = "icinga2b"
    }

## <a id="cluster-scenarios"></a> Cluster Scenarios

All cluster nodes are full-featured Icinga 2 instances. You only need to enable
the features required for their role (for example, a `Checker` node only requires the `checker`
feature enabled, but not the `notification` or `ido-mysql` features).

> **Tip**
>
> There's a [Vagrant demo setup](https://github.com/Icinga/icinga-vagrant/tree/master/icinga2x-cluster)
> available featuring a two node cluster showcasing several aspects (config sync,
> remote command execution, etc).

### <a id="cluster-scenarios-master-satellite-clients"></a> Cluster with Master, Satellites and Remote Clients

You can combine "classic" cluster scenarios from HA to Master-Checker with the
Icinga 2 Remote Client modes. Each instance plays a certain role in that picture.

Imagine the following scenario:

* The master zone acts as High-Availability zone
* Remote satellite zones execute local checks and report them to the master
* All satellites query remote clients and receive check results (which they also relay to the master)
* All involved nodes share the same configuration logic: zones, endpoints, apilisteners

You'll need to think about the following:

* Deploy the entire configuration from the master to satellites and cascading remote clients? ("top down")
* Use local client configuration instead and report the inventory to satellites and cascading to the master? ("bottom up")
* Combine that with command execution bridges on remote clients and also satellites

### <a id="cluster-scenarios-security"></a> Security in Cluster Scenarios

While there are certain capabilities to ensure the safe communication between all
nodes (firewalls, policies, software hardening, etc.) the Icinga 2 cluster also provides
additional security itself:

* [SSL certificates](12-distributed-monitoring-ha.md#manual-certificate-generation) are mandatory for cluster communication.
* Child zones only receive event updates (check results, commands, etc.) for their configured objects.
* Zones cannot interfere with other zones. Each checked object is assigned to only one zone.
* All nodes in a zone trust each other.
* [Configuration sync](12-distributed-monitoring-ha.md#zone-config-sync-permissions) is disabled by default.

### <a id="cluster-scenarios-features"></a> Features in Cluster Zones

Each cluster zone may use all available features. If you have multiple locations
or departments, they may write to their local database, or populate graphite.
Even further, all commands are distributed amongst connected nodes. For example, you could
re-schedule a check or acknowledge a problem on the master, and it gets replicated to the
actual slave checker node.

DB IDO on the left, graphite on the right side - works (if you disable
[DB IDO HA](12-distributed-monitoring-ha.md#high-availability-db-ido)).
Icinga Web 2 on the left, checker and notifications on the right side - works too.
Everything on the left and on the right side - make sure to deal with
[load-balanced notifications and checks](12-distributed-monitoring-ha.md#high-availability-features) in a
[HA zone](12-distributed-monitoring-ha.md#cluster-scenarios-high-availability).
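Per-role features are enabled with the usual CLI. As a sketch (which node gets which feature is your design decision, the comments below are examples):

    # icinga2 feature enable ido-mysql      # on the node writing to the database
    # icinga2 feature enable graphite       # on the node populating graphite
    # icinga2 feature enable notification   # on the node sending notifications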
### <a id="cluster-scenarios-distributed-zones"></a> Distributed Zones

That scenario fits if your instances are spread over the globe and they all report
to a master instance. Their network connection only works towards the master
(or the master is able to connect, depending on firewall policies) which means
remote instances won't see or connect to each other.

All events (check results, downtimes, comments, etc.) are synced to the master node,
but the remote nodes can still run local features such as a web interface, reporting,
graphing, etc. in their own specified zone.

Imagine the following example with a master node in Nuremberg, and two remote DMZ
based instances in Berlin and Vienna. Additionally you'll specify
[global templates](12-distributed-monitoring-ha.md#zone-global-config-templates) available in all zones.

The configuration tree on the master instance `nuremberg` could look like this:

    zones.d
      global-templates/
        templates.conf
        groups.conf
      nuremberg/
        local.conf
      berlin/
        hosts.conf
      vienna/
        hosts.conf

The configuration deployment will take care of automatically synchronising
the child zone configuration:

* The master node sends `zones.d/berlin` to the `berlin` child zone.
* The master node sends `zones.d/vienna` to the `vienna` child zone.
* The master node sends `zones.d/global-templates` to the `vienna` and `berlin` child zones.

The endpoint configuration would look like:

    object Endpoint "nuremberg-master" {
      host = "nuremberg.icinga.org"
    }

    object Endpoint "berlin-satellite" {
      host = "berlin.icinga.org"
    }

    object Endpoint "vienna-satellite" {
      host = "vienna.icinga.org"
    }

The zones would look like:

    object Zone "nuremberg" {
      endpoints = [ "nuremberg-master" ]
    }

    object Zone "berlin" {
      endpoints = [ "berlin-satellite" ]
      parent = "nuremberg"
    }

    object Zone "vienna" {
      endpoints = [ "vienna-satellite" ]
      parent = "nuremberg"
    }

    object Zone "global-templates" {
      global = true
    }

The `nuremberg-master` zone will only execute local checks, and receive
check results from the satellite nodes in the zones `berlin` and `vienna`.

> **Note**
>
> The child zones `berlin` and `vienna` will get their configuration synchronised
> from the configuration master `nuremberg`. The endpoints in the child
> zones **must not** have their `zones.d` directory populated if this endpoint
> [accepts synced configuration](12-distributed-monitoring-ha.md#zone-config-sync-permissions).

### <a id="cluster-scenarios-load-distribution"></a> Load Distribution

If you are planning to off-load the checks to a defined set of remote workers,
you can achieve that by:

* Deploying the configuration on all nodes.
* Letting Icinga 2 distribute the load amongst all available nodes.

That way all remote check instances will receive the same configuration
but only execute their part. The master instance located in the `master` zone
can also execute checks, but you may also disable the `Checker` feature.

Configuration on the master node:

    zones.d/
      global-templates/
      master/
      checker/

If you are planning to have some checks executed by a specific set of checker nodes,
you have to define additional zones and define these check objects there.

Endpoints:

    object Endpoint "master-node" {
      host = "master.icinga.org"
    }

    object Endpoint "checker1-node" {
      host = "checker1.icinga.org"
    }

    object Endpoint "checker2-node" {
      host = "checker2.icinga.org"
    }

Zones:

    object Zone "master" {
      endpoints = [ "master-node" ]
    }

    object Zone "checker" {
      endpoints = [ "checker1-node", "checker2-node" ]
      parent = "master"
    }

    object Zone "global-templates" {
      global = true
    }

> **Note**
>
> The child zone `checker` will get its configuration synchronised
> from the configuration master `master`. The endpoints in the child
> zone **must not** have their `zones.d` directory populated if this endpoint
> [accepts synced configuration](12-distributed-monitoring-ha.md#zone-config-sync-permissions).

### <a id="cluster-scenarios-high-availability"></a> Cluster High Availability

High availability with Icinga 2 is possible by putting multiple nodes into
a dedicated [zone](12-distributed-monitoring-ha.md#configure-cluster-zones). All nodes will elect one
active master, and retry the election once the current active master is down.

Selected features provide advanced [HA functionality](12-distributed-monitoring-ha.md#high-availability-features).
Checks and notifications are load-balanced between nodes in the high availability
zone.

Connections from other zones will be accepted by all active and passive nodes,
but all are forwarded to the current active master dealing with the check results,
commands, etc.

    object Zone "config-ha-master" {
      endpoints = [ "icinga2a", "icinga2b", "icinga2c" ]
    }

Two or more nodes in a high availability setup require an [initial cluster sync](12-distributed-monitoring-ha.md#initial-cluster-sync).

> **Note**
>
> Keep in mind that **only one node acts as configuration master** having the
> configuration files in the `zones.d` directory. All other nodes **must not**
> have that directory populated. Instead they are required to
> [accept synced configuration](12-distributed-monitoring-ha.md#zone-config-sync-permissions).
> Details in the [Configuration Sync Chapter](12-distributed-monitoring-ha.md#cluster-zone-config-sync).

### <a id="cluster-scenarios-multiple-hierarchies"></a> Multiple Hierarchies

Your master zone collects all check results for reporting and graphing and also
does some additional notifications.
The customers get their own instances in their local DMZ zones. They are limited to read/write
only their services, but replicate all events back to the master instance.
Within each DMZ there are additional check instances also serving interfaces for local
departments. The customers' instances will collect all results, but also send them back to
your master instance.
Additionally the customers' instance on the second level in the middle prohibits you from
sending commands to the subjacent department nodes. You're only allowed to receive the
results, and a subset of each customer's configuration too.

Your master zone will generate global reports, aggregate alert notifications, and check
additional dependencies (for example, the customers' internet uplink and bandwidth usage).

The customers' zone instances will only check a subset of local services and delegate the rest
to each department. Each customer instance acts as configuration master with a master dashboard
for all departments, managing their configuration tree which is then deployed to all
department instances. Furthermore the master NOC is able to see what's going on.

The instances in the departments will serve a local interface, and allow the administrators
to reschedule checks or acknowledge problems for their services.
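The hierarchy described above boils down to chained `parent` attributes in the zone configuration. A minimal sketch of such a three-level tree (all object names are illustrative):

    object Zone "master" {
      endpoints = [ "master-node" ]
    }

    object Zone "customer-dmz" {
      endpoints = [ "customer-node" ]
      parent = "master"
    }

    object Zone "department-a" {
      endpoints = [ "department-a-node" ]
      parent = "customer-dmz"
    }

Events from `department-a` travel upwards through `customer-dmz` to `master`, while each level only sees the zones below it.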
## <a id="high-availability-features"></a> High Availability for Icinga 2 features

All nodes in the same zone require the same features enabled for High Availability (HA)
amongst them.

By default the following features provide advanced HA functionality:

* [Checks](12-distributed-monitoring-ha.md#high-availability-checks) (load balanced, automated failover)
* [Notifications](12-distributed-monitoring-ha.md#high-availability-notifications) (load balanced, automated failover)
* [DB IDO](12-distributed-monitoring-ha.md#high-availability-db-ido) (Run-Once, automated failover)

### <a id="high-availability-checks"></a> High Availability with Checks

All nodes in the same zone load-balance the check execution. When one instance
fails, the other nodes will automatically take over the remaining checks.

> **Note**
>
> If a node should not check anything, disable the `checker` feature explicitly and
> reload Icinga 2.

    # icinga2 feature disable checker
    # service icinga2 reload

### <a id="high-availability-notifications"></a> High Availability with Notifications

Notifications are load-balanced amongst all nodes in a zone. By default this functionality
is enabled.
If your nodes should send notifications independently of any other nodes (this will cause
duplicated notifications if not properly handled!), you can set `enable_ha = false`
in the [NotificationComponent](6-object-types.md#objecttype-notificationcomponent) feature.
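A sketch of the corresponding feature configuration (typically found in `/etc/icinga2/features-available/notification.conf`; the object name `notification` matches the default feature file):

    object NotificationComponent "notification" {
      enable_ha = false
    }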
### <a id="high-availability-db-ido"></a> High Availability with DB IDO

All instances within the same zone (e.g. the `master` zone as HA cluster) must
have the DB IDO feature enabled.

Example DB IDO MySQL:

    # icinga2 feature enable ido-mysql
    The feature 'ido-mysql' is already enabled.

By default the DB IDO feature only runs on the elected zone master. All other passive
nodes disable the active IDO database connection at runtime.

> **Note**
>
> The DB IDO HA feature can be disabled by setting the `enable_ha` attribute to `false`
> for the [IdoMysqlConnection](6-object-types.md#objecttype-idomysqlconnection) or
> [IdoPgsqlConnection](6-object-types.md#objecttype-idopgsqlconnection) object on all nodes in the
> same zone.
>
> All endpoints will then enable the DB IDO feature, connect to the configured
> database and dump configuration, status and historical data on their own.

If the instance with the active DB IDO connection dies, the HA functionality will
re-enable the DB IDO connection on the newly elected zone master.

The DB IDO feature will try to determine which cluster endpoint is currently writing
to the database and bail out if another endpoint is active. You can manually verify that
by running the following query:

    icinga=> SELECT status_update_time, endpoint_name FROM icinga_programstatus;
       status_update_time   | endpoint_name
    ------------------------+---------------
     2014-08-15 15:52:26+02 | icinga2a
    (1 row)

This is useful when the cluster connection between endpoints breaks, and prevents
data duplication in split-brain scenarios. The failover timeout can be set using the
`failover_timeout` attribute, but not lower than 60 seconds.
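The `enable_ha` and `failover_timeout` attributes mentioned above live on the IDO connection object itself. A sketch for MySQL (the connection values are placeholders for your environment):

    library "db_ido_mysql"

    object IdoMysqlConnection "ido-mysql" {
      host = "localhost"
      user = "icinga"
      password = "icinga"
      database = "icinga"

      enable_ha = true
      failover_timeout = 60s
    }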
## <a id="cluster-add-node"></a> Add a new cluster endpoint

These steps are required for integrating a new cluster endpoint:

* generate a new [SSL client certificate](12-distributed-monitoring-ha.md#manual-certificate-generation)
* identify its location in the zones
* update the `zones.conf` file on each involved node ([endpoint](12-distributed-monitoring-ha.md#configure-cluster-endpoints), [zones](12-distributed-monitoring-ha.md#configure-cluster-zones))
* a new slave zone node requires updates for the master and slave zones
* verify if this endpoint requires [configuration synchronisation](12-distributed-monitoring-ha.md#cluster-zone-config-sync) enabled
* if the node requires the existing zone history: [initial cluster sync](12-distributed-monitoring-ha.md#initial-cluster-sync)
* add a [cluster health check](12-distributed-monitoring-ha.md#cluster-health-check)
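The first two steps of the list above could be sketched like this for a hypothetical new node `icinga2d` (following the same `pki` commands used earlier in this chapter):

    # icinga2 pki new-cert --cn icinga2d --key icinga2d.key --csr icinga2d.csr
    # icinga2 pki sign-csr --csr icinga2d.csr --cert icinga2d.crt

    # vim zones.conf

    object Endpoint "icinga2d" {
      host = "icinga2d.icinga.org"
    }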
### <a id="initial-cluster-sync"></a> Initial Cluster Sync

In order to make sure that all of your cluster nodes have the same state you will
have to pick one of the nodes as your initial "master" and copy its state file
to all the other nodes.

You can find the state file in `/var/lib/icinga2/icinga2.state`. Before copying
the state file you should make sure that all your cluster nodes are properly shut
down.
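A possible sequence, assuming `icinga2a` is the chosen initial master and `icinga2b.icinga.org` is one of the other nodes (repeat the copy for every node; stop and start Icinga 2 on all of them):

    # service icinga2 stop
    # scp /var/lib/icinga2/icinga2.state icinga2b.icinga.org:/var/lib/icinga2/
    # service icinga2 start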
## <a id="host-multiple-cluster-nodes"></a> Host With Multiple Cluster Nodes

Special scenarios might require multiple cluster nodes running on a single host.
By default Icinga 2 and its features will place their runtime data below the prefix
`LocalStateDir`. By default packages will set that path to `/var`.
You can either set that variable as a constant configuration
definition in [icinga2.conf](5-configuring-icinga-2.md#icinga2-conf) or pass it as a runtime variable to
the Icinga 2 daemon.

    # icinga2 -c /etc/icinga2/node1/icinga2.conf -DLocalStateDir=/opt/node1/var
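Alternatively, each instance can carry its own constant definition in its instance-specific `icinga2.conf`. The path below is an assumption for this sketch:

```
/* node1-specific runtime data prefix */
const LocalStateDir = "/opt/node1/var"
```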

# <a id="addons-plugins"></a> Icinga 2 Addons and Plugins

## <a id="addons-graphing"></a> Graphing

### <a id="addons-graphing-pnp"></a> PNP

[PNP](http://www.pnp4nagios.org) is an addon which adds a graphical representation of the performance data collected
by the monitoring plugins. The data is stored as rrd (round robin database) files.

Use your distribution's package manager to install the `pnp4nagios` package.

If you're planning to use it, configure it to use the
[bulk mode with npcd and npcdmod](http://docs.pnp4nagios.org/pnp-0.6/modes#bulk_mode_with_npcd_and_npcdmod)
in combination with Icinga 2's [PerfdataWriter](4-advanced-topics.md#performance-data). NPCD collects the performance
data files which Icinga 2 generates.

Enable the performance data writer in Icinga 2:

    # icinga2 feature enable perfdata

Configure npcd to use the performance data created by Icinga 2:

    vim /etc/pnp4nagios/npcd.cfg

Set `perfdata_spool_dir = /var/spool/icinga2/perfdata` and restart the `npcd` daemon.

There's also an Icinga Web 2 module for direct PNP graph integration
available at https://exchange.icinga.org/icinga/PNP4Nagios

More information on [action_url as attribute](12-addons-plugins.md#addons-graphing-pnp-action-url)
and [graph template names](12-addons-plugins.md#addons-graphing-pnp-custom-templates).

### <a id="addons-graphing-graphite"></a> Graphite

[Graphite](http://graphite.readthedocs.org/en/latest/) is a time-series database
storing collected metrics and making them available through restful apis
and web interfaces.

Graphite consists of 3 software components:

* carbon - a Twisted daemon that listens for time-series data
* whisper - a simple database library for storing time-series data (similar in design to RRD)
* graphite webapp - a Django webapp that renders graphs on-demand using Cairo

Use the [GraphiteWriter](4-advanced-topics.md#graphite-carbon-cache-writer) feature
for sending real-time metrics from Icinga 2 to Graphite:

    # icinga2 feature enable graphite

There are Graphite addons available for collecting the performance data files too (e.g. `Graphios`).
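The feature's configuration lives in `features-available/graphite.conf`; a minimal sketch, assuming a local carbon daemon listening on its default port:

```
object GraphiteWriter "graphite" {
  host = "127.0.0.1"
  port = 2003
}
```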

### <a id="addons-graphing-ingraph"></a> inGraph

[inGraph](https://www.netways.org/projects/ingraph/wiki) requires the ingraph-collector addon
to be configured to point at the perfdata files. Icinga 2's [PerfdataWriter](4-advanced-topics.md#performance-data) will
write to the performance data spool directory.

## <a id="addons-visualization"></a> Visualization

### <a id="addons-visualization-reporting"></a> Icinga Reporting

By enabling the DB IDO feature you can use the Icinga Reporting package.

### <a id="addons-visualization-nagvis"></a> NagVis

By using either Livestatus or DB IDO as a backend you can create your own network maps
based on your monitoring configuration and status data using [NagVis](http://www.nagvis.org).

### <a id="addons-visualization-thruk"></a> Thruk

[Thruk](http://www.thruk.org) is an alternative web interface which can be used with Icinga 2.

## <a id="log-monitoring"></a> Log Monitoring

Using Logstash or Graylog in your infrastructure and correlating events with your monitoring
is even simpler these days.

* Use the `GelfWriter` feature to write Icinga 2's check and notification events to Graylog or Logstash.
* Configure the logstash `nagios` output to send passive traps to Icinga 2 using the external command pipe.
* Execute a plugin to check Graylog alert streams.

More details can be found in [this blog post](https://www.icinga.org/2014/12/02/team-icinga-at-osmc-2014/).

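A minimal `GelfWriter` sketch; the endpoint is an assumption for this example and should point at your Graylog or Logstash GELF TCP input:

```
object GelfWriter "gelf" {
  host = "127.0.0.1"
  port = 12201
}
```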
## <a id="configuration-tools"></a> Configuration Management Tools

If you require your favourite configuration tool to export Icinga 2 configuration, please get in
touch with their developers. The Icinga project does not provide a configuration web interface
yet. Follow the [Icinga Blog](https://www.icinga.org/blog/) for updates on this topic.

If you're looking for puppet manifests, chef cookbooks, ansible recipes, etc. - we're happy
to integrate them upstream, so please get in touch at [https://support.icinga.org](https://support.icinga.org).

These tools are currently in development and require feedback and tests:

* [Ansible Roles](https://github.com/Icinga/icinga2-ansible)
* [Puppet Module](https://github.com/Icinga/puppet-icinga2)
* [Chef Cookbook](https://github.com/Icinga/chef-icinga2)

## <a id="plugins"></a> Plugins

There are no output length restrictions using Icinga 2. This is different to the
[Icinga 1.x plugin api definition](http://docs.icinga.org/latest/en/pluginapi.html#outputlengthrestrictions).

## <a id="addon-integration-hints"></a> More Addon Integration Hints

### <a id="addons-graphing-pnp-action-url"></a> PNP Action Url

They work in a similar fashion for Icinga 2 and are used for 1.x web interfaces (Icinga Web 2 doesn't require
the action url attribute in its own module).

    template Service "pnp-hst" {
      action_url = "/pnp4nagios/graph?host=$HOSTNAME$"
    }

    template Service "pnp-svc" {
      action_url = "/pnp4nagios/graph?host=$HOSTNAME$&srv=$SERVICEDESC$"
    }

### <a id="addons-graphing-pnp-custom-templates"></a> PNP Custom Templates with Icinga 2

PNP automatically determines the graph template from the check command name (or the argument's name).
This behavior changed in Icinga 2 compared to Icinga 1.x. There are, however, several ways to
fix this:

* Create a symlink for example from the `templates.dist/check_ping.php` template to the actual check name in Icinga 2 (`templates/ping4.php`)
* Pass the check command name inside the [format template configuration](4-advanced-topics.md#writing-performance-data-files)

The latter becomes difficult with agent based checks like NRPE or SSH where the first command argument acts as
graph template identifier. There is the possibility to define the pnp template name as custom attribute
and use that inside the formatting templates as `SERVICECHECKCOMMAND` for instance.

Example for services:

    # vim /etc/icinga2/features-enabled/perfdata.conf

    service_format_template = "DATATYPE::SERVICEPERFDATA\tTIMET::$icinga.timet$\tHOSTNAME::$host.name$\tSERVICEDESC::$service.name$\tSERVICEPERFDATA::$service.perfdata$\tSERVICECHECKCOMMAND::$service.checkcommand$$pnp_check_arg1$\tHOSTSTATE::$host.state$\tHOSTSTATETYPE::$host.statetype$\tSERVICESTATE::$service.state$\tSERVICESTATETYPE::$service.statetype$"

    # vim /etc/icinga2/conf.d/services.conf

    template Service "pnp-svc" {
      action_url = "/pnp4nagios/graph?host=$HOSTNAME$&srv=$SERVICEDESC$"
      vars.pnp_check_arg1 = ""
    }

    apply Service "nrpe-check" {
      import "pnp-svc"

      check_command = "nrpe"
      vars.nrpe_command = "check_disk"
      vars.pnp_check_arg1 = "!$nrpe_command$"
    }

If there are warnings about unresolved macros make sure to specify a default value for `vars.pnp_check_arg1` inside the
`pnp-svc` template.

In PNP, the custom template for nrpe is then defined in `/etc/pnp4nagios/custom/nrpe.cfg`
and the additional command arg string will be seen in the xml too for other templates.


Icinga 2 can write to the same schema supplied by `Icinga IDOUtils 1.x` which
is an explicit requirement to run `Icinga Web` next to the external command pipe.
Therefore you need to setup the [DB IDO feature](2-getting-started.md#configuring-db-ido-mysql) remarked in the previous sections.

### <a id="installing-icinga-web"></a> Installing Icinga Web 1.x

use one of the config packages:

- `icinga-web-config-icinga2-ido-mysql`
- `icinga-web-config-icinga2-ido-pgsql`

These packages take care of setting up the [DB IDO](2-getting-started.md#configuring-db-ido-mysql) configuration,
enabling the external command pipe for Icinga Web and depend on
the corresponding packages of Icinga 2.

> **Note**
>
> If you are using an older version of Icinga Web, install it like this and adapt
> the configuration manually as shown in [the RPM notes](14-alternative-frontends.md#icinga-web-rpm-notes):
>
> `apt-get install --no-install-recommends icinga-web`


>
> Only install the Livestatus feature if your web interface or addon requires
> you to do so (for example, [Icinga Web 2](2-getting-started.md#setting-up-icingaweb2)).
> [Icinga Classic UI](14-alternative-frontends.md#setting-up-icinga-classic-ui) and [Icinga Web](14-alternative-frontends.md#setting-up-icinga-web)
> do not use Livestatus as backend.

The Livestatus component that is distributed as part of Icinga 2 is a
re-implementation of the Livestatus protocol which is compatible with MK
Livestatus.

Details on the available tables and attributes with Icinga 2 can be found
in the [Livestatus Schema](22-appendix.md#schema-livestatus) section.

You can enable Livestatus using icinga2 feature enable:

    # icinga2 feature enable livestatus


### <a id="livestatus-command-queries"></a> Livestatus COMMAND Queries

A list of available external commands and their parameters can be found [here](22-appendix.md#external-commands-list-detail).

    $ echo -e 'COMMAND <externalcommandstring>' | netcat 127.0.0.1 6558

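For instance, a forced service check could be scheduled this way. This is a sketch only: `db01` and `disk` are placeholder object names, and the commented socket line assumes the Livestatus TCP listener on port `6558` from above:

```shell
# Build the external command string; the [timestamp] prefix is required.
now=$(date +%s)
cmd="COMMAND [${now}] SCHEDULE_FORCED_SVC_CHECK;db01;disk;${now}"
echo "$cmd"
# Send it once a Livestatus TCP socket is actually listening on 6558:
# echo "$cmd" | netcat 127.0.0.1 6558
```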
The `commands` table is populated with `CheckCommand`, `EventCommand` and `NotificationCommand` objects.

A detailed list on the available table attributes can be found in the [Livestatus Schema documentation](22-appendix.md#schema-livestatus).

* How was Icinga 2 installed (and from which repository, if any) and which distribution are you using
* Provide complete configuration snippets explaining your problem in detail
* If the check command failed - what's the output of your manual plugin tests?
* In case of [debugging](21-debug.md#debug) Icinga 2, the full back traces and outputs

## <a id="troubleshooting-enable-debug-output"></a> Enable Debug Output

You can find the debug log file in `/var/log/icinga2/debug.log`.

The `icinga2 object list` CLI command can be used to list all configuration objects and their
attributes. The tool also shows where each of the attributes was modified.

That way you can also identify which objects have been created from your [apply rules](19-language-reference.md#apply).

    # icinga2 object list

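The output can also be narrowed down. As a sketch (the type and name pattern are placeholder assumptions for this example):

```
# icinga2 object list --type Service --name "*disk*"
```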

## <a id="configuration-ignored"></a> Configuration is ignored

* Make sure that the line(s) are not [commented out](19-language-reference.md#comments) (starting with `//` or `#`, or
encapsulated by `/* ... */`).
* Is the configuration file included in [icinga2.conf](5-configuring-icinga-2.md#icinga2-conf)?

## <a id="configuration-attribute-inheritance"></a> Configuration attributes are inherited from

Icinga 2 allows you to import templates using the [import](19-language-reference.md#template-imports) keyword. If these templates
contain additional attributes, your objects will automatically inherit them. You can override
or modify these attributes in the current object.

## <a id="troubleshooting-cluster"></a> Cluster Troubleshooting

You should configure the [cluster health checks](12-distributed-monitoring-ha.md#cluster-health-check) if you haven't
done so already.

> **Note**

If the cluster zones do not sync their configuration, make sure to check the following:

* Within a config master zone, only one configuration master is allowed to have its config in `/etc/icinga2/zones.d`.
** The master syncs the configuration to `/var/lib/icinga2/api/zones/` during startup and only syncs valid configuration to the other nodes
** The other nodes receive the configuration into `/var/lib/icinga2/api/zones/`
* The `icinga2.log` log file will indicate whether this ApiListener [accepts config](12-distributed-monitoring-ha.md#zone-config-sync-permissions), or not

For a long-term migration of your configuration you should consider re-creating
your configuration based on the proposed Icinga 2 configuration paradigm.

Please read the [next chapter](18-migrating-from-icinga-1x.md#differences-1x-2) to find out more about the differences
between 1.x and 2.

### <a id="manual-config-migration-hints"></a> Manual Config Migration Hints

The examples are taken from Icinga 1.x test and production environments and converted
straight into a possible Icinga 2 format. If you find a different strategy, send a patch!

If you require in-depth explanations, please check the [next chapter](18-migrating-from-icinga-1x.md#differences-1x-2).

#### <a id="manual-config-migration-hints-Intervals"></a> Manual Config Migration Hints for Intervals

a member and includes all members of the `hg1` hostgroup.

        hostgroup_members               hg1
    }

This can be migrated to Icinga 2 [using group assign](19-language-reference.md#group-assign). The additional nested hostgroup
`hg1` is included into `hg2` with the `groups` attribute.

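A sketch of that nesting, using the group names from the 1.x example above:

```
object HostGroup "hg1" {
  groups = [ "hg2" ]
}

object HostGroup "hg2" { }
```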
For the check command it is required to:

* Escape all double quotes with an additional `\`.
* Replace all [runtime macros](18-migrating-from-icinga-1x.md#manual-config-migration-hints-runtime-macros), e.g. `$HOSTADDRESS$` with `$address$`.
* Replace [custom variable macros](18-migrating-from-icinga-1x.md#manual-config-migration-hints-runtime-custom-attributes) if any.
* Keep `$ARGn$` macros.

The final check command looks like this in Icinga 2:

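The concrete command is elided in this excerpt; purely as an illustration (the object name, plugin and arguments are assumptions, not from the original), a migrated command could look like:

```
object CheckCommand "testconfig-check" {
  import "plugin-check-command"

  command = [ PluginDir + "/check_tcp", "-H", "$address$", "-p", "$ARG1$" ]
}
```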

#### <a id="manual-config-migration-hints-runtime-macros"></a> Manual Config Migration Hints for Runtime Macros

Runtime macros have been renamed. A detailed comparison table can be found [here](18-migrating-from-icinga-1x.md#differences-1x-2-runtime-macros).

For example, accessing the service check output looks like the following in Icinga 1.x:

    $SERVICEOUTPUT$

In Icinga 2 you access the `service` object attribute instead:

    $service.output$

#### <a id="manual-config-migration-hints-contacts-users"></a> Manual Config Migration Hints for Contacts (Users)

Contacts in Icinga 1.x act as users in Icinga 2, but do not have any notification commands specified.
This migration part is explained in the [next chapter](18-migrating-from-icinga-1x.md#manual-config-migration-hints-notifications).

    define contact{
      contact_name                    testconfig-user
      email                           icinga@localhost
    }

The `service_notification_options` can be [mapped](18-migrating-from-icinga-1x.md#manual-config-migration-hints-notification-filters)
into generic `state` and `type` filters, if additional notification filtering is required. `alias` gets
renamed to `display_name`.

Convert the `notification_options` attribute from Icinga 1.x to Icinga 2 `states` and `types`. Details
[here](18-migrating-from-icinga-1x.md#manual-config-migration-hints-notification-filters). Add the notification period.

    states = [ OK, Warning, Critical ]
    types = [ Recovery, Problem, Custom ]

      assign where "hg_svcdep2" in host.groups
    }

Host dependencies are explained in the [next chapter](18-migrating-from-icinga-1x.md#manual-config-migration-hints-host-parents).

* Icinga 2 does not support any 1.x NEB addons for check load distribution
* If your current setup consists of instances distributing the check load, you should consider
building a [load distribution](12-distributed-monitoring-ha.md#cluster-scenarios-load-distribution) setup with Icinga 2.
* If your current setup includes active/passive clustering with external tools like Pacemaker/DRBD,
consider the [High Availability](12-distributed-monitoring-ha.md#cluster-scenarios-high-availability) setup.
* If you have built your own custom configuration deployment and check result collecting mechanism,
you should re-design your setup and re-evaluate your requirements, and how they may be fulfilled
using the Icinga 2 cluster capabilities.
@ -773,7 +773,7 @@ included in `icinga2.conf` by default.
|
|||||||
### <a id="differences-1x-2-main-config"></a> Main Config File
|
### <a id="differences-1x-2-main-config"></a> Main Config File
|
||||||
|
|
||||||
In Icinga 1.x there are many global configuration settings available in `icinga.cfg`.
|
In Icinga 1.x there are many global configuration settings available in `icinga.cfg`.
|
||||||
Icinga 2 only uses a small set of [global constants](16-language-reference.md#constants) allowing
|
Icinga 2 only uses a small set of [global constants](19-language-reference.md#constants) allowing
|
||||||
you to specify certain different setting such as the `NodeName` in a cluster scenario.
|
you to specify certain different setting such as the `NodeName` in a cluster scenario.
|
||||||
|
|
||||||
Aside from that, the [icinga2.conf](5-configuring-icinga-2.md#icinga2-conf) should take care of including
|
Aside from that, the [icinga2.conf](5-configuring-icinga-2.md#icinga2-conf) should take care of including
|
||||||
@ -826,7 +826,7 @@ set in the `constants.conf` configuration file:
|
|||||||
|
|
||||||
const PluginDir = "/usr/lib/nagios/plugins"
|
const PluginDir = "/usr/lib/nagios/plugins"
|
||||||
|
|
||||||
[Global macros](16-language-reference.md#constants) can only be defined once. Trying to modify a
|
[Global macros](19-language-reference.md#constants) can only be defined once. Trying to modify a
|
||||||
global constant will result in an error.
|
global constant will result in an error.
|
||||||
|
|
||||||
### <a id="differences-1x-2-configuration-comments"></a> Configuration Comments
|
### <a id="differences-1x-2-configuration-comments"></a> Configuration Comments
|
||||||
@ -1386,7 +1386,7 @@ child attributes may be omitted.
|
|||||||
For detailed examples on how to use the dependencies please check the [dependencies](3-monitoring-basics.md#dependencies)
|
For detailed examples on how to use the dependencies please check the [dependencies](3-monitoring-basics.md#dependencies)
|
||||||
chapter.
|
chapter.
|
||||||
|
|
||||||
Dependencies can be applied to hosts or services using the [apply rules](16-language-reference.md#apply).
|
Dependencies can be applied to hosts or services using the [apply rules](19-language-reference.md#apply).
|
||||||
|
|
||||||
The `StatusDataWriter`, `IdoMysqlConnection` and `LivestatusListener` types
|
The `StatusDataWriter`, `IdoMysqlConnection` and `LivestatusListener` types
|
||||||
support the Icinga 1.x schema with dependencies and parent attributes for
|
support the Icinga 1.x schema with dependencies and parent attributes for
|
||||||
@ -1436,7 +1436,7 @@ Unlike Icinga 1.x the Icinga 2 daemon reload happens asynchronously.
|
|||||||
* parent process continues with old configuration objects and the event scheduling
|
* parent process continues with old configuration objects and the event scheduling
|
||||||
(doing checks, replicating cluster events, triggering alert notifications, etc.)
|
(doing checks, replicating cluster events, triggering alert notifications, etc.)
|
||||||
* validation NOT ok: child process terminates, parent process continues with old configuration state
|
* validation NOT ok: child process terminates, parent process continues with old configuration state
|
||||||
(this is ESSENTIAL for the [cluster config synchronisation](9-monitoring-remote-systems.md#cluster-zone-config-sync))
|
(this is ESSENTIAL for the [cluster config synchronisation](12-distributed-monitoring-ha.md#cluster-zone-config-sync))
|
||||||
* validation ok: child process signals parent process to terminate and save its current state
|
* validation ok: child process signals parent process to terminate and save its current state
|
||||||
(all events until now) into the icinga2 state file
|
(all events until now) into the icinga2 state file
|
||||||
* parent process shuts down writing icinga2.state file
|
* parent process shuts down writing icinga2.state file
|
||||||
@ -1491,6 +1491,6 @@ distribution out-of-the-box. Furthermore comments, downtimes, and other stateful
|
|||||||
not synced between the master and slave nodes. There are addons available solving the check
|
not synced between the master and slave nodes. There are addons available solving the check
|
||||||
and configuration distribution problems Icinga 1.x distributed monitoring currently suffers from.
|
and configuration distribution problems Icinga 1.x distributed monitoring currently suffers from.
|
||||||
|
|
||||||
Icinga 2 implements a new built-in [distributed monitoring architecture](9-monitoring-remote-systems.md#distributed-monitoring-high-availability),
|
Icinga 2 implements a new built-in [distributed monitoring architecture](12-distributed-monitoring-ha.md#distributed-monitoring-high-availability),
|
||||||
including config and check distribution, IPv4/IPv6 support, SSL certificates and zone support for DMZ.
|
including config and check distribution, IPv4/IPv6 support, SSL certificates and zone support for DMZ.
|
||||||
High Availability and load balancing are also part of the Icinga 2 Cluster setup.
|
High Availability and load balancing are also part of the Icinga 2 Cluster setup.
|
@ -199,7 +199,7 @@ Functions can be called using the `()` operator:
|
|||||||
check_interval = len(MyGroups) * 1m
|
check_interval = len(MyGroups) * 1m
|
||||||
}
|
}
|
||||||
|
|
||||||
A list of available functions is available in the [Library Reference](17-library-reference.md#library-reference) chapter.
|
A list of available functions is available in the [Library Reference](20-library-reference.md#library-reference) chapter.
|
||||||
|
|
||||||
## <a id="dictionary-operators"></a> Assignments
|
## <a id="dictionary-operators"></a> Assignments
|
||||||
|
|
||||||
@ -394,7 +394,7 @@ another group of objects.
|
|||||||
|
|
||||||
In this example the `assign where` condition is a boolean expression which is
|
In this example the `assign where` condition is a boolean expression which is
|
||||||
evaluated for all objects of type `Host` and a new service with name "ping"
|
evaluated for all objects of type `Host` and a new service with name "ping"
|
||||||
is created for each matching host. [Expression operators](16-language-reference.md#expression-operators)
|
is created for each matching host. [Expression operators](19-language-reference.md#expression-operators)
|
||||||
may be used in `assign where` conditions.
|
may be used in `assign where` conditions.
|
||||||
|
|
||||||
The `to` keyword and the target type may be omitted if there is only one target
|
The `to` keyword and the target type may be omitted if there is only one target
|
||||||
@ -431,7 +431,7 @@ and `ignore where` conditions.
|
|||||||
In this example the `assign where` condition is a boolean expression which is evaluated
|
In this example the `assign where` condition is a boolean expression which is evaluated
|
||||||
for all objects of the type `Host`. Each matching host is added as member to the host group
|
for all objects of the type `Host`. Each matching host is added as member to the host group
|
||||||
with the name "linux-servers". Membership exclusion can be controlled using the `ignore where`
|
with the name "linux-servers". Membership exclusion can be controlled using the `ignore where`
|
||||||
condition. [Expression operators](16-language-reference.md#expression-operators) may be used in `assign where` and
|
condition. [Expression operators](19-language-reference.md#expression-operators) may be used in `assign where` and
|
||||||
`ignore where` conditions.
|
`ignore where` conditions.
|
||||||
|
|
||||||
Source Type | Variables
|
Source Type | Variables
|
||||||
@ -460,7 +460,7 @@ Empty dictionary | {} | false
|
|||||||
Non-empty dictionary | { key = "value" } | true
|
Non-empty dictionary | { key = "value" } | true
|
||||||
|
|
||||||
For a list of supported expression operators for `assign where` and `ignore where`
|
For a list of supported expression operators for `assign where` and `ignore where`
|
||||||
statements, see [expression operators](16-language-reference.md#expression-operators).
|
statements, see [expression operators](19-language-reference.md#expression-operators).
|
||||||
|
|
||||||
## <a id="comments"></a> Comments
|
## <a id="comments"></a> Comments
|
||||||
|
|
@ -158,7 +158,7 @@ update the global `PluginDir` constant in your [Icinga 2 configuration](5-config
|
|||||||
This constant is used by the check command definitions contained in the Icinga Template Library
|
This constant is used by the check command definitions contained in the Icinga Template Library
|
||||||
to determine where to find the plugin binaries.
|
to determine where to find the plugin binaries.
|
||||||
|
|
||||||
Please refer to the [plugins](10-addons-plugins.md#plugins) chapter for details about how to integrate
|
Please refer to the [plugins](13-addons-plugins.md#plugins) chapter for details about how to integrate
|
||||||
additional check plugins into your Icinga 2 setup.
|
additional check plugins into your Icinga 2 setup.
|
||||||
|
|
||||||
## <a id="running-icinga2"></a> Running Icinga 2
|
## <a id="running-icinga2"></a> Running Icinga 2
|
||||||
@ -233,11 +233,55 @@ Examples:
|
|||||||
If you're stuck with configuration errors, you can manually invoke the
|
If you're stuck with configuration errors, you can manually invoke the
|
||||||
[configuration validation](8-cli-commands.md#config-validation).
|
[configuration validation](8-cli-commands.md#config-validation).
|
||||||
|
|
||||||
|
|
||||||
|
## <a id="configuration-syntax-highlighting"></a> Configuration Syntax Highlighting
|
||||||
|
|
||||||
|
Icinga 2 ships configuration examples for syntax highlighting using the `vim` and `nano` editors.
|
||||||
|
The RHEL, SUSE and Debian package `icinga2-common` install these files into
|
||||||
|
`/usr/share/*/icinga2-common/syntax`. Sources provide these files in `tools/syntax`.
|
||||||
|
|
||||||
|
### <a id="configuration-syntax-highlighting-vim"></a> Configuration Syntax Highlighting using Vim
|
||||||
|
|
||||||
|
Create a new local vim configuration storage, if not already existing.
|
||||||
|
Edit `vim/ftdetect/icinga2.vim` if your paths to the Icinga 2 configuration
|
||||||
|
differ.
|
||||||
|
|
||||||
|
$ PREFIX=~/.vim
|
||||||
|
$ mkdir -p $PREFIX/{syntax,ftdetect}
|
||||||
|
$ cp vim/syntax/icinga2.vim $PREFIX/syntax/
|
||||||
|
$ cp vim/ftdetect/icinga2.vim $PREFIX/ftdetect/
|
||||||
|
|
||||||
|
Test it:
|
||||||
|
|
||||||
|
$ vim /etc/icinga2/conf.d/templates.conf
|
||||||
|
|
||||||
|
### <a id="configuration-syntax-highlighting-nano"></a> Configuration Syntax Highlighting using Nano
|
||||||
|
|
||||||
|
Copy the `/etc/nanorc` sample file to your home directory. Create the `/etc/nano` directory
|
||||||
|
and copy the provided `icinga2.nanorc` into it.
|
||||||
|
|
||||||
|
$ cp /etc/nanorc ~/.nanorc
|
||||||
|
|
||||||
|
# mkdir -p /etc/nano
|
||||||
|
# cp icinga2.nanorc /etc/nano/
|
||||||
|
|
||||||
|
Then include the icinga2.nanorc file in your ~/.nanorc by adding the following line:
|
||||||
|
|
||||||
|
$ vim ~/.nanorc
|
||||||
|
|
||||||
|
## Icinga 2
|
||||||
|
include "/etc/nano/icinga2.nanorc"
|
||||||
|
|
||||||
|
Test it:
|
||||||
|
|
||||||
|
$ nano /etc/icinga2/conf.d/templates.conf
|
||||||
|
|
||||||
|
|
||||||
## <a id="setting-up-the-user-interface"></a> Setting up Icinga Web 2
|
## <a id="setting-up-the-user-interface"></a> Setting up Icinga Web 2
|
||||||
|
|
||||||
Icinga 2 can be used with Icinga Web 2 and a number of other web interfaces.
|
Icinga 2 can be used with Icinga Web 2 and a number of other web interfaces.
|
||||||
This chapter explains how to set up Icinga Web 2. The
|
This chapter explains how to set up Icinga Web 2. The
|
||||||
[Alternative Frontends](11-alternative-frontends.md#alternative-frontends)
|
[Alternative Frontends](14-alternative-frontends.md#alternative-frontends)
|
||||||
chapter can be used as a starting point for installing some of the other web
|
chapter can be used as a starting point for installing some of the other web
|
||||||
interfaces which are also available.
|
interfaces which are also available.
|
||||||
|
|
||||||
@ -548,5 +592,4 @@ for further instructions on how to install Icinga Web 2.
|
|||||||
|
|
||||||
A number of additional features are available in the form of addons. A list of
|
A number of additional features are available in the form of addons. A list of
|
||||||
popular addons is available in the
|
popular addons is available in the
|
||||||
[Addons and Plugins](10-addons-plugins.md#addons-plugins) chapter.
|
[Addons and Plugins](13-addons-plugins.md#addons-plugins) chapter.
|
||||||
|
|
||||||
|
@ -43,7 +43,7 @@ check command.
|
|||||||
The `address` attribute is used by check commands to determine which network
|
The `address` attribute is used by check commands to determine which network
|
||||||
address is associated with the host object.
|
address is associated with the host object.
|
||||||
|
|
||||||
Details on troubleshooting check problems can be found [here](13-troubleshooting.md#troubleshooting).
|
Details on troubleshooting check problems can be found [here](16-troubleshooting.md#troubleshooting).
|
||||||
|
|
||||||
### <a id="host-states"></a> Host States
|
### <a id="host-states"></a> Host States
|
||||||
|
|
||||||
@ -167,7 +167,7 @@ the function and uses whatever value the function returns:
|
|||||||
vars.text = {{ Math.random() * 100 }}
|
vars.text = {{ Math.random() * 100 }}
|
||||||
}
|
}
|
||||||
|
|
||||||
This example uses the [abbreviated lambda syntax](16-language-reference.md#nullary-lambdas).
|
This example uses the [abbreviated lambda syntax](19-language-reference.md#nullary-lambdas).
|
||||||
|
|
||||||
These functions have access to a number of variables:
|
These functions have access to a number of variables:
|
||||||
|
|
||||||
@ -193,7 +193,7 @@ value of arbitrary macro expressions:
|
|||||||
return "Some text"
|
return "Some text"
|
||||||
}}
|
}}
|
||||||
|
|
||||||
The [Object Accessor Functions](17-library-reference.md#object-accessor-functions) can be used to retrieve references
|
The [Object Accessor Functions](20-library-reference.md#object-accessor-functions) can be used to retrieve references
|
||||||
to other objects by name.
|
to other objects by name.
|
||||||
|
|
||||||
## <a id="runtime-macros"></a> Runtime Macros
|
## <a id="runtime-macros"></a> Runtime Macros
|
||||||
@ -399,15 +399,15 @@ The following macros provide global statistics:
|
|||||||
Instead of assigning each object ([Service](6-object-types.md#objecttype-service),
|
Instead of assigning each object ([Service](6-object-types.md#objecttype-service),
|
||||||
[Notification](6-object-types.md#objecttype-notification), [Dependency](6-object-types.md#objecttype-dependency),
|
[Notification](6-object-types.md#objecttype-notification), [Dependency](6-object-types.md#objecttype-dependency),
|
||||||
[ScheduledDowntime](6-object-types.md#objecttype-scheduleddowntime))
|
[ScheduledDowntime](6-object-types.md#objecttype-scheduleddowntime))
|
||||||
based on attribute identifiers for example `host_name` objects can be [applied](16-language-reference.md#apply).
|
based on attribute identifiers for example `host_name` objects can be [applied](19-language-reference.md#apply).
|
||||||
|
|
||||||
Before you start using the apply rules keep the following in mind:
|
Before you start using the apply rules keep the following in mind:
|
||||||
|
|
||||||
* Define the best match.
|
* Define the best match.
|
||||||
* A set of unique [custom attributes](#custom-attributes-apply) for these hosts/services?
|
* A set of unique [custom attributes](#custom-attributes-apply) for these hosts/services?
|
||||||
* Or [group](3-monitoring-basics.md#groups) memberships, e.g. a host being a member of a hostgroup, applying services to it?
|
* Or [group](3-monitoring-basics.md#groups) memberships, e.g. a host being a member of a hostgroup, applying services to it?
|
||||||
* A generic pattern [match](16-language-reference.md#function-calls) on the host/service name?
|
* A generic pattern [match](19-language-reference.md#function-calls) on the host/service name?
|
||||||
* [Multiple expressions combined](3-monitoring-basics.md#using-apply-expressions) with `&&` or `||` [operators](16-language-reference.md#expression-operators)
|
* [Multiple expressions combined](3-monitoring-basics.md#using-apply-expressions) with `&&` or `||` [operators](19-language-reference.md#expression-operators)
|
||||||
* All expressions must return a boolean value (an empty string is equal to `false` e.g.)
|
* All expressions must return a boolean value (an empty string is equal to `false` e.g.)
|
||||||
|
|
||||||
> **Note**
|
> **Note**
|
||||||
@ -471,7 +471,7 @@ two condition passes: Either the `customer` host custom attribute is set to `cus
|
|||||||
`OR` the host custom attribute `always_notify` is set to `true`.
|
`OR` the host custom attribute `always_notify` is set to `true`.
|
||||||
|
|
||||||
The notification is ignored for services whose host name ends with `*internal`
|
The notification is ignored for services whose host name ends with `*internal`
|
||||||
`OR` the `priority` custom attribute is [less than](16-language-reference.md#expression-operators) `2`.
|
`OR` the `priority` custom attribute is [less than](19-language-reference.md#expression-operators) `2`.
|
||||||
|
|
||||||
template Notification "cust-xy-notification" {
|
template Notification "cust-xy-notification" {
|
||||||
users = [ "noc-xy", "mgmt-xy" ]
|
users = [ "noc-xy", "mgmt-xy" ]
|
||||||
@ -613,7 +613,7 @@ You can also specifiy the check command that way.
|
|||||||
}
|
}
|
||||||
|
|
||||||
Note that numbers must be explicitely casted to string when adding to strings.
|
Note that numbers must be explicitely casted to string when adding to strings.
|
||||||
This can be achieved by wrapping them into the [string()](16-language-reference.md#function-calls) function.
|
This can be achieved by wrapping them into the [string()](19-language-reference.md#function-calls) function.
|
||||||
|
|
||||||
> **Tip**
|
> **Tip**
|
||||||
>
|
>
|
||||||
@ -737,7 +737,7 @@ hosts or with the `test_server` attribute set to `true` are not added to this
|
|||||||
group.
|
group.
|
||||||
|
|
||||||
Details on the `assign where` syntax can be found in the
|
Details on the `assign where` syntax can be found in the
|
||||||
[Language Reference](16-language-reference.md#apply)
|
[Language Reference](19-language-reference.md#apply)
|
||||||
|
|
||||||
## <a id="notifications"></a> Notifications
|
## <a id="notifications"></a> Notifications
|
||||||
|
|
||||||
@ -771,7 +771,7 @@ The user `icingaadmin` in the example below will get notified only on `WARNING`
|
|||||||
If you don't set the `states` and `types` configuration attributes for the `User`
|
If you don't set the `states` and `types` configuration attributes for the `User`
|
||||||
object, notifications for all states and types will be sent.
|
object, notifications for all states and types will be sent.
|
||||||
|
|
||||||
Details on troubleshooting notification problems can be found [here](13-troubleshooting.md#troubleshooting).
|
Details on troubleshooting notification problems can be found [here](16-troubleshooting.md#troubleshooting).
|
||||||
|
|
||||||
> **Note**
|
> **Note**
|
||||||
>
|
>
|
||||||
@ -1433,7 +1433,7 @@ Rephrased: If the parent service object changes into the `Warning` state, this
|
|||||||
dependency will fail and render all child objects (hosts or services) unreachable.
|
dependency will fail and render all child objects (hosts or services) unreachable.
|
||||||
|
|
||||||
You can determine the child's reachability by querying the `is_reachable` attribute
|
You can determine the child's reachability by querying the `is_reachable` attribute
|
||||||
in for example [DB IDO](19-appendix.md#schema-db-ido-extensions).
|
in for example [DB IDO](22-appendix.md#schema-db-ido-extensions).
|
||||||
|
|
||||||
### <a id="dependencies-implicit-host-service"></a> Implicit Dependencies for Services on Host
|
### <a id="dependencies-implicit-host-service"></a> Implicit Dependencies for Services on Host
|
||||||
|
|
||||||
|
@ -273,7 +273,7 @@ a forced service check:
|
|||||||
Oct 17 15:01:25 icinga-server icinga2: Executing external command: [1382014885] SCHEDULE_FORCED_SVC_CHECK;localhost;ping4;1382014885
|
Oct 17 15:01:25 icinga-server icinga2: Executing external command: [1382014885] SCHEDULE_FORCED_SVC_CHECK;localhost;ping4;1382014885
|
||||||
Oct 17 15:01:25 icinga-server icinga2: Rescheduling next check for service 'ping4'
|
Oct 17 15:01:25 icinga-server icinga2: Rescheduling next check for service 'ping4'
|
||||||
|
|
||||||
A list of currently supported external commands can be found [here](19-appendix.md#external-commands-list-detail).
|
A list of currently supported external commands can be found [here](22-appendix.md#external-commands-list-detail).
|
||||||
|
|
||||||
Detailed information on the commands and their required parameters can be found
|
Detailed information on the commands and their required parameters can be found
|
||||||
on the [Icinga 1.x documentation](http://docs.icinga.org/latest/en/extcommands2.html).
|
on the [Icinga 1.x documentation](http://docs.icinga.org/latest/en/extcommands2.html).
|
||||||
@ -359,7 +359,7 @@ You can customize the metric prefix name by using the `host_name_template` and
|
|||||||
`service_name_template` configuration attributes.
|
`service_name_template` configuration attributes.
|
||||||
|
|
||||||
The example below uses [runtime macros](3-monitoring-basics.md#runtime-macros) and a
|
The example below uses [runtime macros](3-monitoring-basics.md#runtime-macros) and a
|
||||||
[global constant](16-language-reference.md#constants) named `GraphiteEnv`. The constant name
|
[global constant](19-language-reference.md#constants) named `GraphiteEnv`. The constant name
|
||||||
is freely definable and should be put in the [constants.conf](5-configuring-icinga-2.md#constants-conf) file.
|
is freely definable and should be put in the [constants.conf](5-configuring-icinga-2.md#constants-conf) file.
|
||||||
|
|
||||||
const GraphiteEnv = "icinga.env1"
|
const GraphiteEnv = "icinga.env1"
|
||||||
@ -516,7 +516,7 @@ in Icinga 2 provided with the `CompatLogger` object.
|
|||||||
These logs are not only used for informational representation in
|
These logs are not only used for informational representation in
|
||||||
external web interfaces parsing the logs, but also to generate
|
external web interfaces parsing the logs, but also to generate
|
||||||
SLA reports and trends in Icinga 1.x Classic UI. Furthermore the
|
SLA reports and trends in Icinga 1.x Classic UI. Furthermore the
|
||||||
[Livestatus](12-livestatus.md#setting-up-livestatus) feature uses these logs for answering queries to
|
[Livestatus](15-livestatus.md#setting-up-livestatus) feature uses these logs for answering queries to
|
||||||
historical tables.
|
historical tables.
|
||||||
|
|
||||||
The `CompatLogger` object can be enabled with
|
The `CompatLogger` object can be enabled with
|
||||||
@ -563,12 +563,12 @@ The IDO (Icinga Data Output) modules for Icinga 2 take care of exporting all
|
|||||||
configuration and status information into a database. The IDO database is used
|
configuration and status information into a database. The IDO database is used
|
||||||
by a number of projects including Icinga Web 1.x and 2.
|
by a number of projects including Icinga Web 1.x and 2.
|
||||||
|
|
||||||
Details on the installation can be found in the [Configuring DB IDO](#configuring-db-ido)
|
Details on the installation can be found in the [Configuring DB IDO](2-getting-started.md#configuring-db-ido-mysql)
|
||||||
chapter. Details on the configuration can be found in the
|
chapter. Details on the configuration can be found in the
|
||||||
[IdoMysqlConnection](6-object-types.md#objecttype-idomysqlconnection) and
|
[IdoMysqlConnection](6-object-types.md#objecttype-idomysqlconnection) and
|
||||||
[IdoPgsqlConnection](6-object-types.md#objecttype-idopgsqlconnection)
|
[IdoPgsqlConnection](6-object-types.md#objecttype-idopgsqlconnection)
|
||||||
object configuration documentation.
|
object configuration documentation.
|
||||||
The DB IDO feature supports [High Availability](9-monitoring-remote-systems.md#high-availability-db-ido) in
|
The DB IDO feature supports [High Availability](12-distributed-monitoring-ha.md#high-availability-db-ido) in
|
||||||
the Icinga 2 cluster.
|
the Icinga 2 cluster.
|
||||||
|
|
||||||
The following example query checks the health of the current Icinga 2 instance
|
The following example query checks the health of the current Icinga 2 instance
|
||||||
@ -579,7 +579,7 @@ the query returns an empty result.
|
|||||||
|
|
||||||
> **Tip**
|
> **Tip**
|
||||||
>
|
>
|
||||||
> Use [check plugins](10-addons-plugins.md#plugins) to monitor the backend.
|
> Use [check plugins](13-addons-plugins.md#plugins) to monitor the backend.
|
||||||
|
|
||||||
Replace the `default` string with your instance name, if different.
|
Replace the `default` string with your instance name, if different.
|
||||||
|
|
||||||
@ -610,7 +610,7 @@ Example for PostgreSQL:
|
|||||||
(1 Zeile)
|
(1 Zeile)
|
||||||
|
|
||||||
|
|
||||||
A detailed list on the available table attributes can be found in the [DB IDO Schema documentation](19-appendix.md#schema-db-ido).
|
A detailed list on the available table attributes can be found in the [DB IDO Schema documentation](22-appendix.md#schema-db-ido).
|
||||||
|
|
||||||
|
|
||||||
## <a id="check-result-files"></a> Check Result Files
|
## <a id="check-result-files"></a> Check Result Files
|
||||||
|
@ -5,7 +5,7 @@ The configuration files which are automatically created when installing the Icin
|
|||||||
are a good way to start with Icinga 2.
|
are a good way to start with Icinga 2.
|
||||||
|
|
||||||
If you're interested in a detailed explanation of each language feature used in those
|
If you're interested in a detailed explanation of each language feature used in those
|
||||||
configuration files you can find more information in the [Language Reference](16-language-reference.md#language-reference)
|
configuration files you can find more information in the [Language Reference](19-language-reference.md#language-reference)
|
||||||
chapter.
|
chapter.
|
||||||
|
|
||||||
## <a id="configuration-best-practice"></a> Configuration Best Practice
|
## <a id="configuration-best-practice"></a> Configuration Best Practice
|
||||||
@ -17,7 +17,7 @@ decide for a possible strategy.
|
|||||||
There are many ways of creating Icinga 2 configuration objects:
|
There are many ways of creating Icinga 2 configuration objects:
|
||||||
|
|
||||||
* Manually with your preferred editor, for example vi(m), nano, notepad, etc.
|
* Manually with your preferred editor, for example vi(m), nano, notepad, etc.
|
||||||
* Generated by a [configuration management too](10-addons-plugins.md#configuration-tools) such as Puppet, Chef, Ansible, etc.
|
* Generated by a [configuration management too](13-addons-plugins.md#configuration-tools) such as Puppet, Chef, Ansible, etc.
|
||||||
* A configuration addon for Icinga 2
|
* A configuration addon for Icinga 2
|
||||||
* A custom exporter script from your CMDB or inventory tool
|
* A custom exporter script from your CMDB or inventory tool
|
||||||
* your own.
|
* your own.
|
||||||
@ -79,7 +79,7 @@ Here's a brief description of the example configuration:
|
|||||||
* to the documentation that is distributed as part of Icinga 2.
|
* to the documentation that is distributed as part of Icinga 2.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
Icinga 2 supports [C/C++-style comments](16-language-reference.md#comments).
|
Icinga 2 supports [C/C++-style comments](19-language-reference.md#comments).
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* The constants.conf defines global constants.
|
* The constants.conf defines global constants.
|
||||||
@ -123,7 +123,7 @@ the features which have been enabled with `icinga2 feature enable`. See
|
|||||||
|
|
||||||
This `include_recursive` directive is used for discovery of services on remote clients
|
This `include_recursive` directive is used for discovery of services on remote clients
|
||||||
and their generated configuration described in
|
and their generated configuration described in
|
||||||
[this chapter](9-monitoring-remote-systems.md#icinga2-remote-monitoring-master-discovery-generate-config).
|
[this chapter](10-icinga2-client.md#icinga2-remote-monitoring-master-discovery).
|
||||||
|
|
||||||
|
|
||||||
/**
|
/**
|
||||||
@ -293,13 +293,13 @@ host and your additional hosts are getting [services](5-configuring-icinga-2.md#
|
|||||||
|
|
||||||
> **Tip**
|
> **Tip**
|
||||||
>
|
>
|
||||||
> If you don't understand all the attributes and how to use [apply rules](16-language-reference.md#apply)
|
> If you don't understand all the attributes and how to use [apply rules](19-language-reference.md#apply)
|
||||||
> don't worry - the [monitoring basics](3-monitoring-basics.md#monitoring-basics) chapter will explain
|
> don't worry - the [monitoring basics](3-monitoring-basics.md#monitoring-basics) chapter will explain
|
||||||
> that in detail.
|
> that in detail.
|
||||||
|
|
||||||
#### <a id="services-conf"></a> services.conf
|
#### <a id="services-conf"></a> services.conf
|
||||||
|
|
||||||
These service [apply rules](16-language-reference.md#apply) will show you how to monitor
|
These service [apply rules](19-language-reference.md#apply) will show you how to monitor
|
||||||
the local host, but also allow you to re-use or modify them for
|
the local host, but also allow you to re-use or modify them for
|
||||||
your own requirements.
|
your own requirements.
|
||||||
|
|
||||||
@ -347,7 +347,7 @@ these services in [downtimes.conf](5-configuring-icinga-2.md#downtimes-conf).
|
|||||||
|
|
||||||
In this example the `assign where` condition is a boolean expression which is
|
In this example the `assign where` condition is a boolean expression which is
|
||||||
evaluated for all objects of type `Host` and a new service with name "load"
|
evaluated for all objects of type `Host` and a new service with name "load"
|
||||||
is created for each matching host. [Expression operators](16-language-reference.md#expression-operators)
|
is created for each matching host. [Expression operators](19-language-reference.md#expression-operators)
|
||||||
may be used in `assign where` conditions.
|
may be used in `assign where` conditions.
|
||||||
|
|
||||||
Multiple `assign where` condition can be combined with `AND` using the `&&` operator
|
Multiple `assign where` condition can be combined with `AND` using the `&&` operator
|
||||||
@ -365,7 +365,7 @@ In this example, the service `ssh` is applied to all hosts having the `address`
|
|||||||
attribute defined `AND` having the custom attribute `os` set to the string
|
attribute defined `AND` having the custom attribute `os` set to the string
|
||||||
`Linux`.
|
`Linux`.
|
||||||
You can modify this condition to match multiple expressions by combinding `AND`
|
You can modify this condition to match multiple expressions by combinding `AND`
|
||||||
and `OR` using `&&` and `||` [operators](16-language-reference.md#expression-operators), for example
|
and `OR` using `&&` and `||` [operators](19-language-reference.md#expression-operators), for example
|
||||||
`assign where host.address && (vars.os == "Linux" || vars.os == "Windows")`.
|
`assign where host.address && (vars.os == "Linux" || vars.os == "Windows")`.
|
||||||
|
|
||||||
|
|
||||||
@@ -511,7 +511,7 @@ The example host defined in [hosts.conf](hosts-conf) already has the
 custom attribute `os` set to `Linux` and is therefore automatically
 a member of the host group `linux-servers`.
 
-This is done by using the [group assign](16-language-reference.md#group-assign) expressions similar
+This is done by using the [group assign](19-language-reference.md#group-assign) expressions similar
 to previously seen [apply rules](3-monitoring-basics.md#using-apply).
 
     object HostGroup "linux-servers" {
@@ -527,7 +527,7 @@ to previously seen [apply rules](3-monitoring-basics.md#using-apply).
     }
 
 Service groups can be grouped together by similar pattern matches.
-The [match() function](16-language-reference.md#function-calls) expects a wildcard match string
+The [match() function](19-language-reference.md#function-calls) expects a wildcard match string
 and the attribute string to match with.
 
     object ServiceGroup "ping" {
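The `match()` call described in the context above takes the wildcard pattern first, then the string to compare against. A sketch of how the truncated `ServiceGroup` snippet might continue (the `display_name` value is illustrative):

    object ServiceGroup "ping" {
      display_name = "Ping Checks"

      assign where match("ping*", service.name)
    }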
@@ -633,9 +633,9 @@ objects such as hosts, services or notifications.
 #### <a id="satellite-conf"></a> satellite.conf
 
 Includes default templates and dependencies for
-[monitoring remote clients](9-monitoring-remote-systems.md#icinga2-remote-client-monitoring)
+[monitoring remote clients](10-icinga2-client.md#icinga2-client)
 using service discovery and
-[config generation](9-monitoring-remote-systems.md#icinga2-remote-monitoring-master-discovery-generate-config)
+[config generation](10-icinga2-client.md#icinga2-remote-monitoring-master-discovery)
 on the master. Can be ignored/removed on setups not using this features.
 
 
@@ -82,7 +82,7 @@ A group of hosts.
 
 > **Best Practice**
 >
-> Assign host group members using the [group assign](16-language-reference.md#group-assign) rules.
+> Assign host group members using the [group assign](19-language-reference.md#group-assign) rules.
 
 Example:
 
@@ -189,7 +189,7 @@ A group of services.
 
 > **Best Practice**
 >
-> Assign service group members using the [group assign](16-language-reference.md#group-assign) rules.
+> Assign service group members using the [group assign](19-language-reference.md#group-assign) rules.
 
 Example:
 
@@ -273,7 +273,7 @@ A user group.
 
 > **Best Practice**
 >
-> Assign user group members using the [group assign](16-language-reference.md#group-assign) rules.
+> Assign user group members using the [group assign](19-language-reference.md#group-assign) rules.
 
 Example:
 
@@ -800,7 +800,7 @@ Configuration Attributes:
 
 Metric prefix names can be modified using [runtime macros](3-monitoring-basics.md#runtime-macros).
 
-Example with your custom [global constant](16-language-reference.md#constants) `GraphiteEnv`:
+Example with your custom [global constant](19-language-reference.md#constants) `GraphiteEnv`:
 
     const GraphiteEnv = "icinga.env1"
 
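The `GraphiteEnv` constant from the hunk above would typically be concatenated into the writer's metric prefix template. A sketch under the assumption that the surrounding (truncated) example uses the writer's `host_name_template` attribute — an assumption not confirmed by this hunk:

    const GraphiteEnv = "icinga.env1"

    object GraphiteWriter "graphite" {
      // prefix metrics with the environment constant, e.g. "icinga.env1.myhost"
      host_name_template = GraphiteEnv + ".$host.name$"
    }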
@@ -889,8 +889,8 @@ Configuration Attributes:
 table\_prefix |**Optional.** MySQL database table prefix. Defaults to "icinga\_".
 instance\_name |**Optional.** Unique identifier for the local Icinga 2 instance. Defaults to "default".
 instance\_description|**Optional.** Description for the Icinga 2 instance.
-enable_ha |**Optional.** Enable the high availability functionality. Only valid in a [cluster setup](9-monitoring-remote-systems.md#high-availability-db-ido). Defaults to "true".
-failover_timeout | **Optional.** Set the failover timeout in a [HA cluster](9-monitoring-remote-systems.md#high-availability-db-ido). Must not be lower than 60s. Defaults to "60s".
+enable_ha |**Optional.** Enable the high availability functionality. Only valid in a [cluster setup](12-distributed-monitoring-ha.md#high-availability-db-ido). Defaults to "true".
+failover_timeout | **Optional.** Set the failover timeout in a [HA cluster](12-distributed-monitoring-ha.md#high-availability-db-ido). Must not be lower than 60s. Defaults to "60s".
 cleanup |**Optional.** Dictionary with items for historical table cleanup.
 categories |**Optional.** The types of information that should be written to the database.
 
@@ -978,8 +978,8 @@ Configuration Attributes:
 table\_prefix |**Optional.** PostgreSQL database table prefix. Defaults to "icinga\_".
 instance\_name |**Optional.** Unique identifier for the local Icinga 2 instance. Defaults to "default".
 instance\_description|**Optional.** Description for the Icinga 2 instance.
-enable_ha |**Optional.** Enable the high availability functionality. Only valid in a [cluster setup](9-monitoring-remote-systems.md#high-availability-db-ido). Defaults to "true".
-failover_timeout | **Optional.** Set the failover timeout in a [HA cluster](9-monitoring-remote-systems.md#high-availability-db-ido). Must not be lower than 60s. Defaults to "60s".
+enable_ha |**Optional.** Enable the high availability functionality. Only valid in a [cluster setup](12-distributed-monitoring-ha.md#high-availability-db-ido). Defaults to "true".
+failover_timeout | **Optional.** Set the failover timeout in a [HA cluster](12-distributed-monitoring-ha.md#high-availability-db-ido). Must not be lower than 60s. Defaults to "60s".
 cleanup |**Optional.** Dictionary with items for historical table cleanup.
 categories |**Optional.** The types of information that should be written to the database.
 
@@ -106,12 +106,12 @@ Debian/Ubuntu:
 
 ### Libraries
 
-Instead of loading libraries using the [`library` config directive](16-language-reference.md#library)
+Instead of loading libraries using the [`library` config directive](19-language-reference.md#library)
 you can also use the `--library` command-line option.
 
 ### Constants
 
-[Global constants](16-language-reference.md#constants) can be set using the `--define` command-line option.
+[Global constants](19-language-reference.md#constants) can be set using the `--define` command-line option.
 
 ### <a id="config-include-path"></a> Config Include Path
 
@@ -220,8 +220,8 @@ The `feature list` command shows which features are currently enabled:
 ## <a id="cli-command-node"></a> CLI command: Node
 
 Provides the functionality to install and manage master and client
-nodes in a [remote monitoring ](9-monitoring-remote-systems.md#icinga2-remote-client-monitoring) or
-[distributed cluster](9-monitoring-remote-systems.md#distributed-monitoring-high-availability) scenario.
+nodes in a [remote monitoring ](10-icinga2-client.md#icinga2-client) or
+[distributed cluster](12-distributed-monitoring-ha.md#distributed-monitoring-high-availability) scenario.
 
 
     # icinga2 node --help
@@ -265,9 +265,9 @@ nodes in a [remote monitoring ](9-monitoring-remote-systems.md#icinga2-remote-cl
 
 The `object` CLI command can be used to list all configuration objects and their
 attributes. The command also shows where each of the attributes was modified.
-That way you can also identify which objects have been created from your [apply rules](16-language-reference.md#apply).
+That way you can also identify which objects have been created from your [apply rules](19-language-reference.md#apply).
 
-More information can be found in the [troubleshooting](13-troubleshooting.md#list-configuration-objects) section.
+More information can be found in the [troubleshooting](16-troubleshooting.md#list-configuration-objects) section.
 
     # icinga2 object --help
     icinga2 - The Icinga 2 network monitoring daemon (version: v2.1.1-299-gf695275)
@@ -395,7 +395,7 @@ cleared after review.
 
 ## <a id="cli-command-variable"></a> CLI command: Troubleshoot
 
-Collects basic information like version, paths, log files and crash reports for troubleshooting purposes and prints them to a file or the console. See [troubleshooting](13-troubleshooting.md#troubleshooting-information-required).
+Collects basic information like version, paths, log files and crash reports for troubleshooting purposes and prints them to a file or the console. See [troubleshooting](16-troubleshooting.md#troubleshooting-information-required).
 
 Its output defaults to a file named `troubleshooting-[TIMESTAMP].log` so it won't overwrite older troubleshooting files.
 
@@ -542,12 +542,12 @@ Or manually passing the `-C` argument:
 > `# icinga2 daemon -C`
 
 If you encouter errors during configuration validation, please make sure
-to read the [troubleshooting](13-troubleshooting.md#troubleshooting) chapter.
+to read the [troubleshooting](16-troubleshooting.md#troubleshooting) chapter.
 
 You can also use the [CLI command](8-cli-commands.md#cli-command-object) `icinga2 object list`
 after validation passes to analyze object attributes, inheritance or created
 objects by apply rules.
-Find more on troubleshooting with `object list` in [this chapter](13-troubleshooting.md#list-configuration-objects).
+Find more on troubleshooting with `object list` in [this chapter](16-troubleshooting.md#list-configuration-objects).
 
 Example filtered by `Service` objects with the name `ping*`:
 
@@ -591,5 +591,5 @@ safely reload the Icinga 2 daemon.
 > which will validate the configuration in a separate process and not stop
 > the other events like check execution, notifications, etc.
 >
-> Details can be found [here](15-migrating-from-icinga-1x.md#differences-1x-2-real-reload).
+> Details can be found [here](18-migrating-from-icinga-1x.md#differences-1x-2-real-reload).
 
File diff suppressed because it is too large

mkdocs.yml (23 changed lines)
@@ -11,16 +11,19 @@ pages:
 - [7-icinga-template-library.md, Icinga Template Library]
 - [8-cli-commands.md, CLI Commands]
 - [9-monitoring-remote-systems.md, Monitoring Remote Systems]
-- [10-addons-plugins.md, Addons and Plugins]
-- [11-alternative-frontends.md, Alternative Frontends]
-- [12-livestatus.md, Livestatus]
-- [13-troubleshooting.md, Troubleshooting]
-- [14-upgrading-icinga-2.md, Upgrading Icinga 2]
-- [15-migrating-from-icinga-1x.md, Migrating from Icinga 1.x]
-- [16-language-reference.md, Language Reference]
-- [17-library-reference.md, Library Reference]
-- [18-debug.md, Debug]
-- [19-appendix.md, Appendix]
+- [10-icinga2-client.md, Icinga 2 Client]
+- [11-agent-based-checks.md, Additional Agent-based Checks]
+- [12-distributed-monitoring-ha.md, Distributed Monitoring and High Availability]
+- [13-addons-plugins.md, Addons and Plugins]
+- [14-alternative-frontends.md, Alternative Frontends]
+- [15-livestatus.md, Livestatus]
+- [16-troubleshooting.md, Troubleshooting]
+- [17-upgrading-icinga-2.md, Upgrading Icinga 2]
+- [18-migrating-from-icinga-1x.md, Migrating from Icinga 1.x]
+- [19-language-reference.md, Language Reference]
+- [20-library-reference.md, Library Reference]
+- [21-debug.md, Debug]
+- [22-appendix.md, Appendix]
 theme: readthedocs
 markdown_extensions: [smarty]
 extra_javascript: [scroll.js]
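Every chapter-link rewrite in this commit follows a single renumbering map, which can be read directly off the mkdocs.yml hunk above (e.g. `16-language-reference.md` becomes `19-language-reference.md`). A hypothetical helper, not part of the commit, that applies the same mapping to stray markdown content:

```python
import re

# Chapter renumbering introduced by this commit (old file -> new file),
# taken from the mkdocs.yml pages hunk.
RENAMES = {
    "10-addons-plugins.md": "13-addons-plugins.md",
    "11-alternative-frontends.md": "14-alternative-frontends.md",
    "12-livestatus.md": "15-livestatus.md",
    "13-troubleshooting.md": "16-troubleshooting.md",
    "14-upgrading-icinga-2.md": "17-upgrading-icinga-2.md",
    "15-migrating-from-icinga-1x.md": "18-migrating-from-icinga-1x.md",
    "16-language-reference.md": "19-language-reference.md",
    "17-library-reference.md": "20-library-reference.md",
    "18-debug.md": "21-debug.md",
    "19-appendix.md": "22-appendix.md",
}

# Single-pass substitution, so a value that is also a key
# (e.g. "13-troubleshooting.md") is not rewritten twice.
_PATTERN = re.compile("|".join(re.escape(old) for old in RENAMES))

def rewrite_links(markdown: str) -> str:
    """Rewrite links that point at renamed chapter files."""
    return _PATTERN.sub(lambda m: RENAMES[m.group(0)], markdown)
```

Because `re.sub` replaces in a single pass, chains like 13 → 16 → 19 cannot cascade within one call.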