Update documentation.

Refs #5870
Gunnar Beutner 2014-03-31 10:14:45 +02:00
parent 93baea247c
commit 79b3afbcfb
4 changed files with 16 additions and 230 deletions

View File

@@ -9,3 +9,4 @@ touch with their developers.
If you're looking for Puppet manifests, Chef cookbooks, Ansible recipes, etc. - we're happy
to integrate them upstream, so please get in touch using [https://support.icinga.org](https://support.icinga.org) :-)

View File

@@ -1,197 +0,0 @@
## <a id="configuration-best-practice"></a> Configuration Best Practice
### <a id="best-practice-config-structure"></a> Configuration File and Directory Structure
Icinga 2 does not care how you name your files and/or directories as long as
you include them in the [icinga2.conf](#icinga2-conf) file.

By default, the `conf.d` directory is included recursively, looking for files
which match the pattern `*.conf`.

If you put or generate your configuration structure in there, you do not need
to touch the [icinga2.conf](#icinga2-conf) file at all. This is useful for
external addons such as LConf which do not have write permissions to this file.

Example:

    include_recursive "conf.d" "*.conf"
Below `conf.d` you're free to choose your own structure. An example based on
host objects with inline services in `conf.d/hosts` and their templates below
`conf.d/services/` would be:

    conf.d/
      services/
        templates.conf
      hosts/
        hosts.conf
If your setup consists of location based monitoring, you could reflect that with
your configuration directory tree and files:

    conf.d/
      germany/
        nuremberg/
          hosts.conf
          osmc.conf
        berlin/
          hosts.conf
          osdc.conf
      austria/
        linz/
          hosts.conf
        vienna/
          hosts.conf
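If a particular node should only load one of these locations, the recursive
include could point at that subtree instead of the whole `conf.d` directory.
A minimal sketch, assuming the example tree above:

    include_recursive "conf.d/germany/nuremberg" "*.conf"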
If you're planning a [cluster](#cluster) setup with Icinga 2 where the
configuration master deploys specific configuration parts to slave nodes,
it's reasonable not to mix that with the configuration below `conf.d`. Rather,
create a dedicated directory and give each node its own subdirectory:

    include_recursive "cluster" "*.conf"

    cluster/
      node1/
      node2/
      node99/
If you prefer to control what several parties drop into the configuration
pool (for example different departments with their own standalone configuration),
you can still deactivate the `conf.d` inclusion and use your own strategy.

Example:

    include_recursive "dep1" "*.conf"
    include_recursive "dep2" "*.conf"
    include_recursive "dep3" "*.conf"
    include_recursive "remotecust" "*.conf"
    include_recursive "cmdb" "*.conf"
> **Note**
>
> You can omit the file pattern `"*.conf"` because that's the Icinga 2 default already.
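Putting both ideas together, a trimmed-down [icinga2.conf](#icinga2-conf) could
deactivate the default `conf.d` inclusion and pull in the department directories
without an explicit pattern. This is only a sketch reusing the directory names
from the example above; the remaining directories follow the same pattern:

    // include_recursive "conf.d" "*.conf"

    include_recursive "dep1"
    include_recursive "dep2"
    include_recursive "remotecust"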
### <a id="best-practice-use-templates"></a> Use Templates

Templates are the key to minimizing configuration overhead: they share widely
used attributes among the objects that inherit their values. If one template
does not fit every object, split it into two, or derive a new template from it
and override/disable the unwanted values:

    template Service "generic-service-disable-notifications" {
      import "generic-service",

      notifications["mail-icingaadmin"] = null
    }
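A sketch of how such a derived template could then be used - the service object
and its name are made up for illustration only:

    object Service "localhost-ping4-quiet" {
      import "generic-service-disable-notifications",

      host = "localhost",
      short_name = "ping4-quiet",
      check_command = "ping4",
    }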
### <a id="best-practice-inline-objects-using-templates"></a> Inline Objects using Templates

While it is reasonable to create single objects with your preferred configuration
tool, using templates and the `apply` keyword will save you a lot of extra typing.

For instance, you could create a host object, then a service object linked to it,
after that a notification object referencing the service, and last but not least
a scheduled downtime object linked to that service:
    object Host "localhost" {
      display_name = "The best host there is",
      groups = [ "all-hosts" ],
      host_dependencies = [ "router" ],
    }

    object Service "localhost-ping4" {
      host = "localhost",
      short_name = "ping4",
      display_name = "localhost ping4",
      check_command = "ping4",
      check_interval = 60s,
      retry_interval = 15s,
      servicegroups = [ "all-services" ],
    }

    object Notification "localhost-ping4-notification" {
      host = "localhost",
      service = "ping4",
      notification_command = "mail-service-notification",
      users = [ "user1", "user2" ]
    }

    object ScheduledDowntime "some-downtime" {
      host = "localhost",
      service = "ping4",
      author = "icingaadmin",
      comment = "Some comment",
      fixed = false,
      duration = 30m,
      ranges = {
        "sunday" = "02:00-03:00"
      }
    }
Doing that every time for such a series of linked objects quickly makes your
configuration bloated and unreadable. You've already read that
[using templates](#best-practice-use-templates) helps here.

Using the `apply` keyword you can create services, notifications, scheduled downtimes
and dependencies for an arbitrary number of hosts and services:
    apply Notification "mail-notification" {
      notification_command = "mail-service-notification",
      users = [ "user1", "user2" ]

      assign where "generic-service" in service.templates
    }

    apply ScheduledDowntime "backup-downtime" {
      author = "icingaadmin",
      comment = "Some comment",
      fixed = false,
      duration = 30m,
      ranges = {
        "sunday" = "02:00-03:00"
      }

      assign where "generic-service" in service.templates
    }

    template Service "generic-service" {
      max_check_attempts = 3,
      check_interval = 5m,
      retry_interval = 1m,
      enable_perfdata = true,
    }

    apply Service "ping4" {
      import "generic-service",
      check_command = "ping4",

      assign where "linux-server" in host.templates
    }

    template Host "linux-server" {
      groups = [ "all-hosts" ],
      check = "ping4"
    }

    object Host "localhost" {
      import "linux-server",

      display_name = "The best host there is",
    }
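Since the `ping4` service is applied to every host that imports the `linux-server`
template, and the notification and downtime rules match every service importing
`generic-service`, an additional host only needs to import that template. A sketch
with a hypothetical second host:

    object Host "webserver01" {
      import "linux-server",

      display_name = "Another Linux host",
    }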

View File

@@ -13,9 +13,9 @@ following command:
> **Note**
>
> Vagrant and VirtualBox are available for various distributions. Please note
> that Vagrant version `1.0.x` is not supported. At least version `1.2.x` is
> required to be installed (for example from [http://downloads.vagrantup.com](http://downloads.vagrantup.com)).
> [Vagrant](http://www.vagrantup.com/) and [VirtualBox](https://www.virtualbox.org/wiki/Downloads)
> are available for various distributions. Please note that Vagrant version `1.0.x` is not
> supported. At least version `1.2.x` is required.
The Vagrant VM is based on CentOS 6.4 and uses the official Icinga 2 RPM
packages from `packages.icinga.org`. The check plugins are installed from
@@ -23,46 +23,28 @@ EPEL providing RPMs with sources from the Monitoring Plugins project.
SSH login is available using `vagrant ssh`.
## <a id="vagrant-demo-guis"></a> Vagrant Demo GUIs
## <a id="vagrant-demo-guis"></a> Demo GUIs
In addition to installing Icinga 2, the Vagrant Puppet modules also install the
Icinga 1.x Classic UI and Icinga Web.
GUI | Url | Credentials
----------------|----------------------------------------------------------------------|------------------------
Classic UI | [http://localhost:8080/icinga](http://localhost:8080/icinga) | icingaadmin/icingaadmin
Icinga Web | [http://localhost:8080/icinga-web](http://localhost:8080/icinga-web) | root/password
Classic UI | [http://localhost:8080/icinga](http://localhost:8080/icinga) | icingaadmin / icingaadmin
Icinga Web | [http://localhost:8080/icinga-web](http://localhost:8080/icinga-web) | root / password
## <a id="vagrant-windows"></a> Vagrant on Windows
## <a id="vagrant-windows"></a> SSH Access
You need to install [VirtualBox](https://www.virtualbox.org/wiki/Downloads)
next to [Vagrant for Windows](http://www.vagrantup.com/downloads.html). For SSH access
you need to install [Git for Windows](http://git-scm.com/download/win) too.
You can access the Vagrant VM using SSH:
Either download and extract the Icinga 2 tarball (or Git archive) or clone the
Git repository using your preferred Git GUI.
    $ vagrant ssh
Open the Windows command prompt (cmd+R) and change to your Icinga 2 directory
containing the `Vagrantfile`, then start the Vagrant box.
Alternatively you can use your favorite SSH client:
    c:> cd C:\Users\admin\icinga2
    c:> vagrant up
> **Note**
>
> If SSH access is not working, you may need to add the Git binary path to the system path.
    c:> set PATH=%PATH%;C:\Program Files (x86)\Git\bin
    c:> vagrant ssh
For manual SSH access using [PuTTY](http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html)
you'll need the following default credentials:
Name |Value
Name | Value
----------------|----------------
hostname | 127.0.0.1
port | 2222
connection type | ssh
username | vagrant
password | vagrant
Host | 127.0.0.1
Port | 2222
Username | vagrant
Password | vagrant
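Using these defaults, a plain OpenSSH client connection is a one-liner (just a
sketch; `vagrant ssh` handles these details for you automatically):

    $ ssh -p 2222 vagrant@127.0.0.1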