This section explains how to install a central single master node using
the `node wizard` command. If you prefer to do a manual installation, please
refer to the [manual setup]() section.
Required information:
Parameter | Description
--------------------|--------------------
Common name (CN) | **Required.** By convention this should be the host's FQDN. Defaults to the FQDN.
API bind host | **Optional.** Allows you to specify the address the ApiListener is bound to. For advanced usage only.
API bind port | **Optional.** Allows you to specify the port the ApiListener is bound to. For advanced usage only (requires changing the default port 5665 everywhere).
The setup wizard will ensure that the following steps are taken:
* Enable the `api` feature
* Generate a new certificate authority (CA) in `/var/lib/icinga2/ca` if it does not exist yet
* Create a certificate signing request (CSR) for the local node
* Sign the CSR with the local CA and copy all files into the `/etc/icinga2/pki` directory
* Update the `zones.conf` file with the new zone hierarchy
* Update `/etc/icinga2/features-enabled/api.conf` and `constants.conf`
Example master setup for the `icinga2-master1.localdomain` node on CentOS 7:
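A minimal sketch of the commands involved -- the wizard interactively prompts for the
parameters listed above, so its output is not reproduced here:

    [root@icinga2-master1.localdomain /]# icinga2 node wizard

Afterwards validate the configuration and restart Icinga 2 (assuming systemd):

    [root@icinga2-master1.localdomain /]# icinga2 daemon -C
    [root@icinga2-master1.localdomain /]# systemctl restart icinga2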
This section describes the setup of a satellite and/or client connected to an
existing master node setup. If you haven't done so already, please [run the master setup](6-distributed-monitoring.md#distributed-monitoring-setup-master).
Icinga 2 on the master node must be running and accepting connections on port `5665`.
Generate a ticket on the master node for each satellite/client (using its FQDN as the common name) and store that ticket number for the satellite/client setup below.
### <a id="distributed-monitoring-setup-client-linux"></a> Client/Satellite Linux Setup
Please ensure that you've run all the steps mentioned in the [client/satellite chapter](6-distributed-monitoring.md#distributed-monitoring-setup-satellite-client).
Required information:
Parameter | Description
--------------------|--------------------
Common name (CN) | **Required.** By convention this should be the host's FQDN. Defaults to the FQDN.
Master common name | **Required.** Use the common name you've specified for your master node before.
Establish connection to the master | **Optional.** Whether the client should attempt to connect to the master or not. Defaults to `y`.
Master endpoint host | **Required if the client should connect to the master.** The master's IP address or FQDN. This information is written to the `Endpoint` object configuration in the `zones.conf` file.
Master endpoint port | **Optional if the client should connect to the master.** The master's listening port. This information is written to the `Endpoint` object configuration.
Add more master endpoints | **Optional.** If you have multiple master nodes configured, add them here.
Master connection for CSR auto-signing | **Required.** The master node's IP address or FQDN and port where the client should request a certificate from. Defaults to the master endpoint host.
Certificate information | **Required.** Verify that the connecting host really is the requested master node.
API bind host | **Optional.** Allows you to specify the address the ApiListener is bound to. For advanced usage only.
API bind port | **Optional.** Allows you to specify the port the ApiListener is bound to. For advanced usage only (requires changing the default port 5665 everywhere).
Accept config | **Optional.** Whether this node accepts configuration sync from the master node (required for the [config sync mode](6-distributed-monitoring.md#distributed-monitoring-top-down-config-sync)). Defaults to `n`.
Accept commands | **Optional.** Whether this node accepts command execution messages from the master node (required for the [command endpoint mode](6-distributed-monitoring.md#distributed-monitoring-top-down-command-endpoint)). Defaults to `n`.
Example for the client `icinga2-client1.localdomain`, generating a ticket on the master node:
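A sketch using the `pki ticket` CLI command -- the common name must match the client's FQDN:

    [root@icinga2-master1.localdomain /]# icinga2 pki ticket --cn icinga2-client1.localdomain

The ticket is bound to that common name, so generate one ticket per satellite/client.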
There are different ways to ensure that the Icinga 2 cluster nodes execute
checks, send notifications, etc.
The following modes differ in the way the host/service object
configuration is synchronized among nodes and checks are executed.
* [Top down](6-distributed-monitoring.md#distributed-monitoring-top-down). This mode syncs the configuration and commands from the master into child zones.
* [Bottom up](6-distributed-monitoring.md#distributed-monitoring-bottom-up). This mode leaves the configuration on the child nodes and requires an import on the parent nodes.
Read this chapter carefully and decide which mode fits your requirements best.
You should not mix them -- that will overly complicate your setup.
Check results are synced all the way up from the child nodes to the parent nodes.
That happens automatically and is ensured by the cluster protocol.
### <a id="distributed-monitoring-top-down"></a> Top Down
According to community feedback, this is the most commonly used mode.
There are two different behaviours with check execution:
* Send a command execution event remotely, the scheduler still runs on the parent node
* Sync the host/service objects directly to the child node, checks are executed locally
Again -- it does not matter whether the node receiving configuration or command
execution events is a `client` or a `satellite`.
### <a id="distributed-monitoring-top-down-command-endpoint"></a> Top Down Command Endpoint
This mode will force the Icinga 2 node to execute commands remotely on a specified endpoint.
The host/service object configuration is located on the master/satellite, and the client only
needs the `CheckCommand` object definitions for the commands being used.
![Icinga 2 Distributed Top Down Command Endpoint](images/distributed-monitoring/icinga2_distributed_top_down_command_endpoint.png)
Advantages:
* No local checks defined on the child node (client)
* No replay log necessary on child node disconnect (ensure to set `log_duration=0` on the parent node; see the example after this list)
* Pin checks to specific endpoints (if the child zone consists of 2 endpoints)
Disadvantages:
* If the child node is not connected, no more checks are executed
* Requires additional configuration attribute specified in host/service objects
* Requires local `CheckCommand` object configuration. Best practice is to use a [global config zone](6-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync).
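As referenced in the advantages list above, the replay log can be disabled for a command
endpoint client by setting `log_duration` on its `Endpoint` object in the parent node's
configuration. A minimal sketch (the endpoint name and address are illustrative):

    object Endpoint "icinga2-client2.localdomain" {
      host = "192.168.56.112"
      log_duration = 0 //disable the replay log for this command endpoint client
    }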
For all involved nodes to accept configuration and/or commands, you'll
need to configure the `Zone` and `Endpoint` hierarchy on all nodes.
* `icinga2-master1.localdomain` is the configuration master in this scenario.
* `icinga2-client2.localdomain` acts as a client which receives command execution messages via command endpoint from the master. In addition, it receives the global check command configuration from the master.
Put the endpoint and zone configuration on **both** nodes into `/etc/icinga2/zones.conf`.
The endpoint configuration could look like this:
object Endpoint "icinga2-master1.localdomain" {
host = "192.168.56.101"
}
object Endpoint "icinga2-client2.localdomain" {
host = "192.168.56.112"
}
Then you'll need to define two zones. There is no naming convention but best practice
is to either use `master`, `satellite`/`client-fqdn` or go by region names.
> **Note**
>
> Each client requires its own zone and endpoint configuration. Best practice
> has been to use the client's FQDN for all object names.
The `master` zone is a parent of the `icinga2-client2.localdomain` zone.
object Zone "master" {
endpoints = [ "icinga2-master1.localdomain" ] //array with endpoint names
}
object Zone "icinga2-client2.localdomain" {
endpoints = [ "icinga2-client2.localdomain" ]
parent = "master" //establish zone hierarchy
}
In addition to that add a [global zone](6-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync)
for syncing check commands later.
object Zone "global-templates" {
global = true
}
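Anything you put into a directory named after this global zone underneath `/etc/icinga2/zones.d`
on the master is synced to all nodes which accept configuration. A sketch for a custom check
command (the command name and plugin are illustrative):

    [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/global-templates
    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/global-templates/commands.conf

    object CheckCommand "my-custom-check" {
      command = [ PluginDir + "/check_my_custom" ]
    }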
Edit the `api` feature on the client `icinga2-client2.localdomain` in
the `/etc/icinga2/features-enabled/api.conf` file and ensure to set
`accept_commands` and `accept_config` to `true`.
    [root@icinga2-client2.localdomain /]# vim /etc/icinga2/features-enabled/api.conf
object ApiListener "api" {
//...
accept_commands = true
accept_config = true
}
Now it is time to validate the configuration and restart the Icinga 2 daemon on both nodes.
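A minimal sketch, assuming systemd (run the validation and restart on both nodes after
changing `zones.conf` and the `api` feature):

    [root@icinga2-master1.localdomain /]# icinga2 daemon -C
    [root@icinga2-master1.localdomain /]# systemctl restart icinga2

To trigger remote command execution, the master needs a host and a service with the
`command_endpoint` attribute set -- a sketch mirroring the [scenarios](6-distributed-monitoring.md#distributed-monitoring-scenarios)
chapter (object names and address are illustrative):

    object Host "icinga2-client2.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.112"
      vars.client_endpoint = name //follows the convention host name == endpoint name
    }

    apply Service "disk" {
      check_command = "disk"
      command_endpoint = host.vars.client_endpoint
      assign where host.vars.client_endpoint
    }

In summary: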
* Icinga 2 validates the configuration on `icinga2-master1.localdomain` and restarts.
* The `icinga2-master1.localdomain` node schedules and executes the checks.
* The `icinga2-client2.localdomain` node receives the execute command event with additional command parameters.
* The `icinga2-client2.localdomain` node maps the command parameters onto the local check command, executes the check locally and sends back the check result message.
As you can see, no reload or any other interaction is required on the client
itself.
Now you have learned the basics about command endpoint checks. Proceed in
the [scenarios](6-distributed-monitoring.md#distributed-monitoring-scenarios)
chapter for more details on extending the setup.
### <a id="distributed-monitoring-top-down-config-sync"></a> Top Down Config Sync
This mode syncs the object configuration files within specified zones.
This comes in handy if you want to configure everything on the master node
and sync the satellite checks (disk, memory, etc.). The satellites run their
own local scheduler and will send the check result messages back to the master.
![Icinga 2 Distributed Top Down Config Sync](images/distributed-monitoring/icinga2_distributed_top_down_config_sync.png)
Advantages:
* Sync the configuration files from the parent zone to the child zones.
* No manual restart required on the child nodes - sync, validation and restarts happen automatically.
* Execute checks directly on the child node's scheduler.
* Replay log if the connection drops (important for keeping the check history in sync, e.g. for SLA reports).
* Use a global zone for syncing templates, groups, etc.
Disadvantages:
* Requires a config directory on the master node with the zone name underneath `/etc/icinga2/zones.d`.
* Additional zone and endpoint configuration.
* Replay log is replicated on reconnect. This might generate an overload on the used connection.
For all involved nodes to accept configuration and/or commands, you'll
need to configure the `Zone` and `Endpoint` hierarchy on all nodes.
* `icinga2-master1.localdomain` is the configuration master in this scenario.
* `icinga2-client1.localdomain` acts as a client which receives configuration from the master.
Put the endpoint and zone configuration on **both** nodes into `/etc/icinga2/zones.conf`.
The endpoint configuration could look like this:
object Endpoint "icinga2-master1.localdomain" {
host = "192.168.56.101"
}
object Endpoint "icinga2-client1.localdomain" {
host = "192.168.56.111"
}
Then you'll need to define two zones. There is no naming convention but best practice
is to either use `master`, `satellite`/`client-fqdn` or go by region names.
> **Note**
>
> Each client requires its own zone and endpoint configuration. Best practice
> has been to use the client's FQDN for all object names.
The `master` zone is a parent of the `icinga2-client1.localdomain` zone.
object Zone "master" {
endpoints = [ "icinga2-master1.localdomain" ] //array with endpoint names
}
object Zone "icinga2-client1.localdomain" {
endpoints = [ "icinga2-client1.localdomain" ]
parent = "master" //establish zone hierarchy
}
Edit the `api` feature on the client `icinga2-client1.localdomain` in
the `/etc/icinga2/features-enabled/api.conf` file and ensure to set
`accept_config` to `true`.
[root@icinga2-client1.localdomain /]# vim /etc/icinga2/features-enabled/api.conf
object ApiListener "api" {
//...
accept_config = true
}
Now it is time to validate the configuration and restart the Icinga 2 daemon on both nodes.
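A minimal sketch, assuming systemd:

    [root@icinga2-master1.localdomain /]# icinga2 daemon -C
    [root@icinga2-master1.localdomain /]# systemctl restart icinga2

To sync actual configuration, place it on the master in a directory named after the target zone
underneath `/etc/icinga2/zones.d`. A sketch for the client zone defined above (the host object
is illustrative):

    [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/icinga2-client1.localdomain
    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/icinga2-client1.localdomain/hosts.conf

    object Host "icinga2-client1.localdomain" {
      check_command = "hostalive"
      address = "192.168.56.111"
    }

The contents of this directory are synced to the `icinga2-client1.localdomain` zone; the client
validates the received configuration, reloads automatically and schedules the checks with its
local scheduler.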
These examples should give you an idea of how you can build your own
distributed monitoring environment. We've seen them all in production
environments and received feedback from our [community](https://www.icinga.org/community/get-help/)
and [partner support](https://www.icinga.org/services/support/) channels.
* Single master with clients
* HA master with clients as command endpoint
* Three level cluster with config HA masters, satellites receiving config sync and clients checked using command_endpoint
### <a id="distributed-monitoring-master-clients"></a> Master with Clients
![Icinga 2 Distributed Master with Clients](images/distributed-monitoring/icinga2_distributed_scenarios_master_clients.png)
* `icinga2-master1.localdomain` is the primary master node
* `icinga2-client1.localdomain` and `icinga2-client2.localdomain` are two child nodes as clients
Setup requirements:
* Install `icinga2-master1.localdomain` as [master setup](6-distributed-monitoring.md#distributed-monitoring-setup-master)
* Install `icinga2-client1.localdomain` and `icinga2-client2.localdomain` as [client setup](6-distributed-monitoring.md#distributed-monitoring-setup-satellite-client)
Edit the `zones.conf` configuration file on the master:
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf
object Endpoint "icinga2-master1.localdomain" {
}
object Endpoint "icinga2-client1.localdomain" {
host = "192.168.33.111" //the master actively tries to connect to the client
}
object Endpoint "icinga2-client2.localdomain" {
host = "192.168.33.112" //the master actively tries to connect to the client
}
object Zone "master" {
endpoints = [ "icinga2-master1.localdomain" ]
}
object Zone "icinga2-client1.localdomain" {
endpoints = [ "icinga2-client1.localdomain" ]
}
object Zone "icinga2-client2.localdomain" {
endpoints = [ "icinga2-client2.localdomain" ]
}
/* sync global commands */
object Zone "global-templates" {
global = true
}
The two client nodes do not necessarily need to know about each other. The only important thing
is that they know about the parent zone, their endpoint members and, optionally, the global zone.
If you specify the `host` attribute in the `icinga2-master1.localdomain` endpoint object,
the client will actively try to connect to the master node. Since we've already specified the
client endpoint's `host` attribute on the master node, we don't want the clients to connect to
the master. Choose one connection direction.
[root@icinga2-client1.localdomain /]# vim /etc/icinga2/zones.conf
object Endpoint "icinga2-master1.localdomain" {
//do not actively connect to the master by leaving out the 'host' attribute
}
object Endpoint "icinga2-client1.localdomain" {
}
object Zone "master" {
endpoints = [ "icinga2-master1.localdomain" ]
}
object Zone "icinga2-client1.localdomain" {
endpoints = [ "icinga2-client1.localdomain" ]
}
/* sync global commands */
object Zone "global-templates" {
global = true
}
[root@icinga2-client2.localdomain /]# vim /etc/icinga2/zones.conf
object Endpoint "icinga2-master1.localdomain" {
//do not actively connect to the master by leaving out the 'host' attribute
}
object Endpoint "icinga2-client2.localdomain" {
}
object Zone "master" {
endpoints = [ "icinga2-master1.localdomain" ]
}
object Zone "icinga2-client2.localdomain" {
endpoints = [ "icinga2-client2.localdomain" ]
}
/* sync global commands */
object Zone "global-templates" {
global = true
}
Now it is time to define the two client hosts and apply service checks to them using
the command endpoint execution method. Note: You can also use the
config sync mode here.
Create a new configuration directory on the master node.
    [root@icinga2-master1.localdomain /]# mkdir -p /etc/icinga2/zones.d/master
    [root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
object Host "icinga2-client1.localdomain" {
check_command = "hostalive"
address = "192.168.56.111"
      vars.client_endpoint = name //follows the convention host name == endpoint name
}
object Host "icinga2-client2.localdomain" {
check_command = "hostalive"
address = "192.168.56.112"
      vars.client_endpoint = name //follows the convention host name == endpoint name
}
Add services using command endpoint checks.
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
apply Service "ping4" {
check_command = "ping4"
//check is executed on the master node
assign where host.address
}
apply Service "disk" {
check_command = "disk"
//specify where the check is executed
command_endpoint = host.vars.client_endpoint
assign where host.vars.client_endpoint
}
Validate the configuration and restart Icinga 2 on the master node `icinga2-master1.localdomain`.
Open Icinga Web 2 and check the two newly created client hosts with their two new services
-- one executed locally (`ping4`) and one using command endpoint (`disk`).
### <a id="distributed-monitoring-scenarios-ha-master-clients"></a> High-Availability Master with Clients
![Icinga 2 Distributed High Availability Master with Clients](images/distributed-monitoring/icinga2_distributed_scenarios_ha_master_clients.png)
This scenario is quite similar to the one in the [previous chapter](6-distributed-monitoring.md#distributed-monitoring-master-clients).
The real difference is that we will now set up two master nodes in a high-availability setup.
These nodes must be configured as zone and endpoint objects.
This scenario uses the capabilities of the Icinga 2 cluster. All zone members
replicate cluster events amongst each other. In addition to that several Icinga 2
features can enable HA functionality.
> **Note**
>
> All nodes in the same zone require the same features enabled for High Availability (HA)
> amongst them.
Overview:
* `icinga2-master1.localdomain` is the config master node
* `icinga2-master2.localdomain` is the secondary master node without configuration in `zones.d`
* `icinga2-client1.localdomain` and `icinga2-client2.localdomain` are two child nodes as clients
Setup requirements:
* Install `icinga2-master1.localdomain` as [master setup](6-distributed-monitoring.md#distributed-monitoring-setup-master)
* Install `icinga2-master2.localdomain` as [client setup](6-distributed-monitoring.md#distributed-monitoring-setup-satellite-client) (we will modify the generated configuration)
* Install `icinga2-client1.localdomain` and `icinga2-client2.localdomain` as [client setup](6-distributed-monitoring.md#distributed-monitoring-setup-satellite-client) (when asked to add multiple masters, answer `y` and add the secondary master `icinga2-master2.localdomain`).
In case you don't want to use the CLI commands, you can also manually create and sync the
required SSL certificates. We will modify and discuss the generated configuration here
in detail.
Since there are now two nodes in the same zone, we must consider the following:
* Checks and notifications are balanced between the two master nodes. That's fine, but it requires check plugins and notification scripts to exist on both nodes.
* The IDO feature will only be active on one node by default. Since all events are replicated between both nodes it is easier to just have one central database.
Decide whether you want to use a dedicated MySQL cluster VIP (external application cluster)
and keep the IDO feature's HA capabilities enabled, or configure the feature to
disable HA and write to a locally installed database on each node. Both implementation methods
require you to configure Icinga Web 2 accordingly (monitoring backend, IDO database, used transports).
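If you go for locally installed databases, HA for the IDO feature can be disabled explicitly.
A minimal sketch for the MySQL variant -- edit the feature on both master nodes, each pointing
to its own database:

    [root@icinga2-master1.localdomain /]# vim /etc/icinga2/features-enabled/ido-mysql.conf

    object IdoMysqlConnection "ido-mysql" {
      //...
      enable_ha = false //each master node writes to its local database
    }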
The zone hierarchy could look like this. It involves putting the two master nodes
`icinga2-master1.localdomain` and `icinga2-master2.localdomain` into the `master` zone.
[root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.conf
object Endpoint "icinga2-master1.localdomain" {
host = "192.168.56.101"
}
object Endpoint "icinga2-master2.localdomain" {
host = "192.168.56.101"
}
object Endpoint "icinga2-client1.localdomain" {
host = "192.168.33.111" //the master actively tries to connect to the client
}
object Endpoint "icinga2-client2.localdomain" {
host = "192.168.33.112" //the master actively tries to connect to the client
[root@icinga2-master1.localdomain /]# cd /etc/icinga2/zones.d/master
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim hosts.conf
object Host "icinga2-client1.localdomain" {
check_command = "hostalive"
address = "192.168.56.111"
      vars.client_endpoint = name //follows the convention host name == endpoint name
}
object Host "icinga2-client2.localdomain" {
check_command = "hostalive"
address = "192.168.56.112"
      vars.client_endpoint = name //follows the convention host name == endpoint name
}
Add services using command endpoint checks.
[root@icinga2-master1.localdomain /etc/icinga2/zones.d/master]# vim services.conf
apply Service "ping4" {
check_command = "ping4"
//check is executed on the master node
assign where host.address
}
apply Service "disk" {
check_command = "disk"
//specify where the check is executed
command_endpoint = host.vars.client_endpoint
assign where host.vars.client_endpoint
}
Validate the configuration and restart Icinga 2 on the master node `icinga2-master1.localdomain`.
Open Icinga Web 2 and check the two newly created client hosts with their two new services
-- one executed locally (`ping4`) and one using command endpoint (`disk`).
In addition to that, you should add [health checks](6-distributed-monitoring.md#distributed-monitoring-health-checks)
to ensure that your cluster notifies you in case of failure.
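A sketch using the `cluster-zone` check command from the Icinga Template Library, assuming
the convention that the client zone name equals the host object name:

    apply Service "cluster-health" {
      check_command = "cluster-zone"
      //the cluster-zone check determines whether the configured target zone is currently connected
      vars.cluster_zone = host.name
      assign where host.vars.client_endpoint
    }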
### <a id="distributed-monitoring-scenarios-master-satellite-client"></a> Three Levels with Master, Satellites and Clients
![Icinga 2 Distributed Master and Satellites with Clients](images/distributed-monitoring/icinga2_distributed_scenarios_master_satellite_client.png)
This scenario combines everything you've learned so far. High-availability masters,
satellites receiving their config from the master zone, clients checked via command
endpoint from the satellite zones.
> **Tip**
>
> It can get complicated, so take pen and paper and sketch out your design first.
> Play around with a test setup before putting such a thing into production, too!
Overview:
* `icinga2-master1.localdomain` is the config master node
* `icinga2-master2.localdomain` is the secondary master node without configuration in `zones.d`
* `icinga2-satellite1.localdomain` and `icinga2-satellite2.localdomain` are satellite nodes in a `master` child zone
* `icinga2-client1.localdomain` and `icinga2-client2.localdomain` are two child nodes as clients
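The resulting zone hierarchy could look like this -- a sketch only; the `satellite` zone name
and the client zone membership are illustrative:

    object Zone "master" {
      endpoints = [ "icinga2-master1.localdomain", "icinga2-master2.localdomain" ]
    }

    object Zone "satellite" {
      endpoints = [ "icinga2-satellite1.localdomain", "icinga2-satellite2.localdomain" ]
      parent = "master"
    }

    object Zone "icinga2-client1.localdomain" {
      endpoints = [ "icinga2-client1.localdomain" ]
      parent = "satellite"
    }

    object Zone "icinga2-client2.localdomain" {
      endpoints = [ "icinga2-client2.localdomain" ]
      parent = "satellite"
    }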
Setup requirements:
* Install `icinga2-master1.localdomain` as [master setup](6-distributed-monitoring.md#distributed-monitoring-setup-master)
* Install `icinga2-master2.localdomain`, `icinga2-satellite1.localdomain` and `icinga2-satellite2.localdomain` as [client setup](6-distributed-monitoring.md#distributed-monitoring-setup-satellite-client) (we will modify the generated configuration)
* Install `icinga2-client1.localdomain` and `icinga2-client2.localdomain` as [client setup](6-distributed-monitoring.md#distributed-monitoring-setup-satellite-client)
When you are asked for the master endpoint providing CSR auto-signing capabilities,
please add the master node which holds the CA and has the ApiListener feature configured.
The parent endpoint must still remain the satellite endpoint name.
Example for `icinga2-client1.localdomain`:
Please specify the master endpoint(s) this node should connect to:
"master" is the first satellite `icinga2-satellite1.localdomain`.
Master Common Name (CN from your master setup): icinga2-satellite1.localdomain
Do you want to establish a connection to the master from this node? [Y/n]: y
Please fill out the master connection information:
Master endpoint host (Your master's IP address or FQDN): 192.168.56.105
Master endpoint port [5665]:
Add more "masters", the second satellite `icinga2-satellite2.localdomain`.
Add more master endpoints? [y/N]: y
Master Common Name (CN from your master setup): icinga2-satellite2.localdomain
Do you want to establish a connection to the master from this node? [Y/n]: y
Please fill out the master connection information:
Master endpoint host (Your master's IP address or FQDN): 192.168.56.106
Master endpoint port [5665]:
Add more master endpoints? [y/N]: n
Specify the master node `icinga2-master2.localdomain` with the CA private key and ticket salt configured.
Please specify the master connection for CSR auto-signing (defaults to master endpoint host):