Documentation: Add best practices for cluster config sync

fixes #9285
Michael Friedrich 2015-06-16 19:44:02 +02:00
parent 917c5da666
commit b70b594262
2 changed files with 77 additions and 3 deletions

View File

@ -659,7 +659,8 @@ cluster configuration and its object relation (Zones, Endpoints, etc) and the wa
will be able to sync the configuration from the master to the remote satellite or client.
Please continue reading in the [distributed monitoring chapter](12-distributed-monitoring-ha.md#distributed-monitoring-high-availability),
especially the [configuration synchronisation section](12-distributed-monitoring-ha.md#cluster-zone-config-sync).
especially the [configuration synchronisation](12-distributed-monitoring-ha.md#cluster-zone-config-sync)
and [best practices](12-distributed-monitoring-ha.md#zone-config-sync-best-practice).

View File

@ -267,8 +267,11 @@ statement in [icinga2.conf](4-configuring-icinga-2.md#icinga2-conf):
    //include_recursive "conf.d"
Better use a dedicated directory name like `cluster` or similar, and include that
one if your nodes require local configuration not being synced to other nodes. That's
This applies to any other configuration directories which are not used (e.g. `repository.d`).
Better use a dedicated directory name for local configuration like `local` or similar, and
include that one if your nodes require local configuration which should not be synced to other nodes. That's
useful for local [health checks](12-distributed-monitoring-ha.md#cluster-health-check) for example.
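
A minimal `icinga2.conf` sketch for such a node could look like this (the `local` directory
name is only an example):

    /* conf.d is not included since it would not be synced by the cluster config sync */
    //include_recursive "conf.d"

    /* dedicated directory for configuration which should stay on this node only,
       e.g. local cluster health checks */
    include_recursive "local"
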
> **Note**
@ -277,6 +280,7 @@ useful for local [health checks](12-distributed-monitoring-ha.md#cluster-health-
> setup only one assigned node can act as configuration master. All other zone
> member nodes **must not** have the `/etc/icinga2/zones.d` directory populated.
These zone packages are then distributed to all nodes in the same zone, and
to their respective target zone instances.
@ -389,6 +393,75 @@ master instances anymore.
> problems with the configuration synchronisation.
### <a id="zone-config-sync-best-practice"></a> Zone Configuration Synchronisation Best Practice
The configuration synchronisation works with multiple hierarchies. The following example
illustrates a fairly common setup where the master is responsible for configuration deployment:
* [High-Availability master zone](12-distributed-monitoring-ha.md#distributed-monitoring-high-availability)
* [Distributed satellites](12-distributed-monitoring-ha.md#)
* [Remote clients](10-icinga2-client.md#icinga2-client-scenarios) connected to the satellite
While you could use clients with local configuration and service discovery on the satellite/master
working **bottom-up**, the configuration sync is usually easier to manage **top-down** in a cascaded scenario.
Take pen and paper and draw your network scenario including the involved zone and endpoint names.
Once you've added them to your `zones.conf` as connection and permission configuration, continue with
the actual configuration organization:
* Ensure that `command` object definitions are globally available. That way you can use the
`command_endpoint` configuration more easily on clients as a [command execution bridge](10-icinga2-client.md#icinga2-client-configuration-command-bridge) (see the first sketch after this list).
* Generic templates, timeperiods and downtimes should be synchronized in a global zone as well.
* [Apply rules](3-monitoring-basics.md#using-apply) can be synchronized globally. Keep in mind that they are evaluated on each instance,
and might require additional filters (e.g. `match("icinga2*", NodeName)` or similar) based on the zone information (see the second sketch after this list).
* [Apply rules](3-monitoring-basics.md#using-apply) specified inside zone directories will only affect endpoints in the same zone or below.
* Host configuration must be put into the specific zone directory.
* Duplicated host and service objects (also generated by faulty apply rules) will generate a configuration error.
* Consider using custom constants in your host/service configuration. Each instance may set its own local value, e.g. for `PluginDir`.
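
A minimal sketch for the first point, a globally synced `CheckCommand` used as command execution
bridge via `command_endpoint`, could look like this (the object names, the `remote_client` custom
attribute and the file locations are only examples):

    /* zones.d/global-templates/commands.conf (synced to all instances) */
    object CheckCommand "my-disk" {
      import "plugin-check-command"
      command = [ PluginDir + "/check_disk" ]
    }

    /* zones.d/<zone>/hosts.conf (the check is executed on the remote client) */
    apply Service "disk" {
      check_command = "my-disk"
      command_endpoint = host.vars.remote_client
      assign where host.vars.remote_client
    }
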
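A sketch for the second point, a globally synced apply rule with an additional `NodeName` filter
so that the generated objects only show up on the intended instances, could look like this (the
pattern and the check command are examples):

    /* zones.d/global-templates/apply_services.conf (evaluated on every instance) */
    apply Service "cluster-health" {
      check_command = "cluster"
      /* only create this service for the local node object on matching instances */
      assign where host.name == NodeName && match("icinga2*", NodeName)
    }
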
This example specifies the following hierarchy over three levels:
* `ha-master` zone with two child zones `dmz1-checker` and `dmz2-checker`
* `dmz1-checker` has two client child zones `dmz1-client1` and `dmz1-client2`
* `dmz2-checker` has one client child zone `dmz2-client9`
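
A `zones.conf` sketch for this hierarchy could look like the following (endpoint names and
addresses are made up for illustration):

    object Endpoint "ha-master-node1" {
      host = "192.168.2.101"
    }

    object Endpoint "ha-master-node2" {
      host = "192.168.2.102"
    }

    object Endpoint "dmz1-checker-node1" {
      host = "192.168.2.110"
    }

    object Endpoint "dmz1-client1-node1" {
      host = "192.168.2.111"
    }

    object Zone "ha-master" {
      endpoints = [ "ha-master-node1", "ha-master-node2" ]
    }

    object Zone "dmz1-checker" {
      endpoints = [ "dmz1-checker-node1" ]
      parent = "ha-master"
    }

    object Zone "dmz1-client1" {
      endpoints = [ "dmz1-client1-node1" ]
      parent = "dmz1-checker"
    }

    /* dmz1-client2, dmz2-checker and dmz2-client9 follow the same pattern */

    object Zone "global-templates" {
      global = true
    }
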
The configuration tree could look like this:

    # tree /etc/icinga2/zones.d
    /etc/icinga2/zones.d
    ├── dmz1-checker
    │   └── health.conf
    ├── dmz1-client1
    │   └── hosts.conf
    ├── dmz1-client2
    │   └── hosts.conf
    ├── dmz2-checker
    │   └── health.conf
    ├── dmz2-client9
    │   └── hosts.conf
    ├── global-templates
    │   ├── apply_notifications.conf
    │   ├── apply_services.conf
    │   ├── commands.conf
    │   ├── groups.conf
    │   ├── templates.conf
    │   └── users.conf
    ├── ha-master
    │   └── health.conf
    └── README

    7 directories, 13 files

If you prefer a different naming scheme for directories or file names, go for it. If you
are unsure about the best method, join the [support channels](1-about.md#support) and discuss
with the community.
If you are planning to synchronize local service health checks inside a zone, look into the
[command endpoint](12-distributed-monitoring-ha.md#cluster-health-check-command-endpoint)
explanation.
## <a id="cluster-health-check"></a> Cluster Health Check
The Icinga 2 [ITL](7-icinga-template-library.md#icinga-template-library) provides