Compare commits

..

53 Commits

Author SHA1 Message Date
Alexander Aleksandrovič Klimov
f2c83fbbf2
Merge pull request #9950 from Icinga/probot/sync-changelog/support/2.14/9fa41f3b4fbae83d4ca5bbfce4771031b6bd1fa4
CHANGELOG.md: add v2.13.9
2023-12-20 17:30:01 +01:00
Alexander A. Klimov
b85cd98eb1 CHANGELOG.md: add v2.13.9 2023-12-20 16:27:56 +00:00
Alexander Aleksandrovič Klimov
9fa41f3b4f
Merge pull request #9940 from Icinga/2141
Icinga 2.14.1
2023-12-20 17:27:43 +01:00
Alexander Aleksandrovič Klimov
61d190f892
Merge pull request #9947 from Icinga/2141morebackport
Truncate too big notification command lines, fix GelfWriter deadlock and return 503 in /v1/console/* during reload
2023-12-20 12:44:07 +01:00
Alexander Aleksandrovič Klimov
3ddbbebc63
Merge pull request #9946 from Icinga/2141backport
Disable TLS renegotiation, bump Windows deps and fix Icinga DB crashes
2023-12-20 12:40:41 +01:00
Alexander A. Klimov
41b692793b Icinga 2.14.1 2023-12-20 10:56:15 +01:00
Alexander A. Klimov
fecb209fe0 GelfWriter: protect m_Stream via m_WorkQueue, not ObjectLock(this)
On shutdown or HA re-connect, ConfigObject#SetAuthority(false) is called, which
takes ObjectLock(this) and calls ConfigObject#Pause(). GelfWriter#Pause(), still
holding that ObjectLock, calls m_WorkQueue.Join(). But the queued items also take
ObjectLock(this), causing a deadlock.
2023-12-20 10:46:51 +01:00
Mattia Codato
85c5a7c901 Prevent calls to command API while the configuration is reloading.
Fixes #9840
2023-12-20 10:46:51 +01:00
Alexander A. Klimov
0eeac3b385 PluginNotificationTask::ScriptFunc(): on Linux, truncate output and comment
so as not to run into an exec(3) error E2BIG due to an overly long argument.
This sends a notification with truncated output instead of no notification at all.
2023-12-20 10:46:51 +01:00
Alexander A. Klimov
7efdae6a53 IcingaDB#SendConfigDelete(): fix missing nullptr check before deref 2023-12-20 10:30:01 +01:00
Alexander A. Klimov
79efda7a14 Icinga DB downtime history: provide cancel_time where has_been_cancelled may be 1
The table sla_history_downtime requires a downtime_end.
The Go daemon takes the cancel_time if has_been_cancelled is 1.
So we must supply a cancel_time wherever has_been_cancelled is 1.
Otherwise the Go daemon can't process some entries.
2023-12-20 10:30:01 +01:00
Alexander A. Klimov
8c9f3ede4a Bump OpenSSL shipped for Windows to v3.0.12 2023-12-20 10:14:00 +01:00
Alexander A. Klimov
4547c1e5a3 Bump Boost shipped for Windows to v1.83
Note: For doc/21-development.md use:

perl -pi -e 's/(boost[-\w]*?1[-_]?)82/${1}83/g' doc/21-development.md
2023-12-20 10:14:00 +01:00
Alexander A. Klimov
ec77b6f1e3 Disable TLS renegotiation
The API doesn't need it, and a customer's security scanner
flags it as a potential DoS attack vector.
2023-12-20 10:14:00 +01:00
Alexander Aleksandrovič Klimov
bbb45894dd
Merge pull request #9944 from Icinga/targeted-api-filter-214
FilterUtility::GetFilterTargets(): don't run filter for specific object(s) for all objects
2023-12-19 17:40:59 +01:00
Alexander Aleksandrovič Klimov
b55a14d536
Merge pull request #9921 from Icinga/doc2141
Update documentation
2023-12-19 17:01:31 +01:00
Alexander A. Klimov
03aa5adb7a Tests: config_apply/gettargetservices_*: use BOOST_CHECK_EQUAL_COLLECTIONS()
to show the value diff in case of mismatch.

Co-authored-by: Yonas Habteab <yonas.habteab@icinga.com>
2023-12-19 15:19:20 +01:00
Alexander A. Klimov
b99db24100 Test ApplyRule::GetTarget*s() 2023-12-19 15:19:20 +01:00
Alexander A. Klimov
bcbb1aee52 FilterUtility::GetFilterTargets(): don't run filter for specific object(s) for all objects 2023-12-19 15:19:20 +01:00
Alexander A. Klimov
60b7e96adc ApplyRule::GetTarget*s(): support constant strings from variables
in addition to literal strings. This is for sandboxed filters with some
variables pre-set by the caller. They're "constant" in that scope, too.
2023-12-19 15:19:20 +01:00
Alexander A. Klimov
8248fa110c Introduce DictExpression#GetExpressions() 2023-12-19 15:19:20 +01:00
Alexander A. Klimov
5c10bad86f Introduce Dictionary#GetRef() 2023-12-19 15:19:20 +01:00
Alexander Aleksandrovič Klimov
5059d0f8b0
Merge pull request #9933 from Icinga/renew-the-ca-9890-214
ApiListener#Start(): auto-renew CA on its owner
2023-12-19 15:15:00 +01:00
Alexander A. Klimov
12c706a8ac Update AUTHORS 2023-12-19 12:29:51 +01:00
Julian Brost
5835e2e03b Update .mailmap 2023-12-19 12:29:51 +01:00
Alexander A. Klimov
61900b73e1 Doc: Troubleshooting: remove obsolete section "Analyze Notification Result"
This feature has been reverted and won't be re-introduced anytime soon.
2023-12-19 12:29:51 +01:00
Alexander A. Klimov
4195f8d0f0 RequestCertificateHandler(): also renew if CA needs a renewal
and a newer one is available.
2023-12-18 17:04:59 +01:00
Alexander A. Klimov
6b000fbce6 CertificateToString(): allow raw pointer input 2023-12-18 17:04:59 +01:00
Alexander A. Klimov
32f43c4873 ApiListener#Start(): auto-renew CA on its owner
otherwise it would expire.
2023-12-18 17:04:59 +01:00
Alexander A. Klimov
b3dee0bb0a ApiListener#RenewCert(): enable optional CA creation 2023-12-18 17:04:59 +01:00
Alexander A. Klimov
0cb037c698 CreateCertIcingaCA(EVP_PKEY*, X509_NAME*): enable optional CA creation 2023-12-18 17:04:59 +01:00
Alexander A. Klimov
17eac30868 Test IsCertUptodate() and IsCaUptodate() 2023-12-18 17:04:59 +01:00
Alexander A. Klimov
0f4723e567 Introduce IsCaUptodate() by splitting IsCertUptodate() 2023-12-18 17:04:59 +01:00
Alexander Aleksandrovič Klimov
bff7e69991
Merge pull request #9932 from Icinga/do-not-re-notify-if-filtered-states-don-t-change-4503-214
Discard likely duplicate problem notifications via Notification#last_notified_state_per_user
2023-12-13 18:15:56 +01:00
Alexander A. Klimov
d7500ca1bd Notification#BeginExecuteNotification(): on recovery clear last_notified_state_per_user 2023-12-13 16:14:57 +01:00
Alexander A. Klimov
bbadf1f27b Notification#BeginExecuteNotification(): discard likely duplicate problem notifications 2023-12-13 16:14:57 +01:00
Alexander A. Klimov
0ae2bdc444 Cluster-sync Notification#last_notified_state_per_user 2023-12-13 16:14:57 +01:00
Alexander A. Klimov
66ba9f446a Notification#BeginExecuteNotification(): track state change notifications 2023-12-13 16:14:57 +01:00
Alexander A. Klimov
9a08472162 Docs: change "Amazon Linux 2" to "Amazon Linux" where applicable
We also support Amazon Linux 2023 now.
2023-11-24 17:29:35 +01:00
Alvar Penning
0dd66f9886 Document host Common Runtime Attribute 2023-11-24 17:29:35 +01:00
Alvar Penning
bb717cf177 Fix link text for Downtime* Event Stream Types
The link text for all Downtime* Event Stream Types contains "Comment"
instead of "Downtime" even when pointing to the correct object.
2023-11-24 17:29:35 +01:00
Yonas Habteab
04dbf4aa13 Fix downtime host/service name attribute descriptions 2023-11-24 17:29:35 +01:00
Alexander Aleksandrovič Klimov
2e03a2e528 Doc: ITL: correct $ifw_api_crl$ default
In contrast to cert/key/CA, no CRL means no CRL.
(The behavior of the API is the same.)
2023-11-24 17:29:35 +01:00
Mathias Aerts
99aa33b85f Fix typo 2023-11-24 17:29:35 +01:00
Alexander Aleksandrovič Klimov
0ceff4c09b
Merge pull request #9918 from Icinga/gha2141
Update GitHub actions
2023-11-24 17:26:27 +01:00
Alexander A. Klimov
9944437b7b Update AUTHORS 2023-11-24 15:06:52 +01:00
Lord Hepipud
968f7401cf Adds ProgressPreference SilentlyContinue
We should use `$Global:ProgressPreference = 'SilentlyContinue';` to disable the progress bar during download.
This way, data is written directly to disk instead of being buffered in memory and dumped to disk afterwards.
2023-11-24 14:43:52 +01:00
Alexander Aleksandrovič Klimov
d583598d08 GHA: drop EOL Fedora 36 2023-11-24 14:38:44 +01:00
Alexander A. Klimov
de47878991 GHA: complain if a PR adds commits from people not yet listed in ./AUTHORS
so that ./AUTHORS or .mailmap don't have to be updated after merging.
2023-11-24 14:38:40 +01:00
Alexander A. Klimov
3801b8a7cb GHA: cancel runs on PR, but not on push
In a PR one top commit replaces the previous one.
But the central branches are more like timelines.
It's nice to have red crosses in such a timeline
as clear indicators that something was actually broken.
2023-11-24 14:38:00 +01:00
Alexander Aleksandrovič Klimov
35ef622cc6 GHA: add upcoming (already frozen) Ubuntu 23.10 2023-11-24 14:38:00 +01:00
Alexander Aleksandrovič Klimov
1f4ac7e651 GHA: add upcoming (already frozen) Fedora 39 2023-11-24 14:38:00 +01:00
Alexander Aleksandrovič Klimov
19927d0043 GHA: drop EOL Ubuntu 22.10 2023-11-24 14:38:00 +01:00
354 changed files with 6739 additions and 10640 deletions

@ -1,49 +0,0 @@
---
name: '[INTERNAL] Release'
about: Release a version
title: 'Release Version v$version'
labels: ''
assignees: ''
---
# Release Workflow
- [ ] Update `ICINGA2_VERSION`
- [ ] Update bundled Windows dependencies
- [ ] Harden global TLS defaults (consult https://ssl-config.mozilla.org)
- [ ] Update `CHANGELOG.md`
- [ ] Update `doc/16-upgrading-icinga-2.md` if applicable
- [ ] Create and push a signed tag for the version
- [ ] Build and release DEB and RPM packages
- [ ] Build and release Windows packages
- [ ] Merge dependency updates in https://github.com/Icinga/docker-icinga2/pulls
- [ ] Create release on GitHub
- [ ] Update public docs
- [ ] Announce release
## Update Bundled Windows Dependencies
### Update packages.icinga.com
Add the latest Boost and OpenSSL versions to
https://packages.icinga.com/windows/dependencies/, e.g.:
* https://master.dl.sourceforge.net/project/boost/boost-binaries/1.82.0/boost_1_82_0-msvc-14.2-64.exe
* https://master.dl.sourceforge.net/project/boost/boost-binaries/1.82.0/boost_1_82_0-msvc-14.2-32.exe
* https://slproweb.com/download/Win64OpenSSL-3_0_9.exe
* https://slproweb.com/download/Win32OpenSSL-3_0_9.exe
### Update Build Server, CI/CD and Documentation
* [doc/win-dev.ps1](doc/win-dev.ps1) (also affects CI/CD)
* [tools/win32/configure.ps1](tools/win32/configure.ps1)
* [tools/win32/configure-dev.ps1](tools/win32/configure-dev.ps1)
### Re-provision Build Server
Even if there aren't any new releases of the dependencies with versions
hardcoded in the repos and files listed above (Boost, OpenSSL), there may be
new build versions of other dependencies (VS, MSVC).
Our GitHub Actions (tests) use the latest ones automatically,
but the GitLab runner (release packages) doesn't.

@ -1,7 +0,0 @@
version: 2
updates:
- package-ecosystem: github-actions
directory: /
schedule:
interval: daily

@ -1,8 +0,0 @@
# This Dockerfile is used in the linux job for Alpine Linux.
#
# As the linux.bash script is, in fact, a bash script and Alpine does not ship
# a bash by default, the "alpine:bash" container will be built using this
# Dockerfile in the GitHub Action.
FROM alpine:3
RUN ["apk", "--no-cache", "add", "bash"]

@ -10,7 +10,7 @@ jobs:
steps:
- name: Checkout HEAD
uses: actions/checkout@v4
uses: actions/checkout@v3
with:
fetch-depth: 0
@ -20,8 +20,8 @@ jobs:
sort -uo AUTHORS AUTHORS
git add AUTHORS
git log --format='format:%aN <%aE>' "$(
git merge-base HEAD^1 HEAD^2
)..HEAD^2" | sed '/^dependabot\[bot] /d' >> AUTHORS
git merge-base "origin/$GITHUB_BASE_REF" "origin/$GITHUB_HEAD_REF"
)..origin/$GITHUB_HEAD_REF" >> AUTHORS
sort -uo AUTHORS AUTHORS
git diff AUTHORS >> AUTHORS.diff

@ -5,6 +5,7 @@ on:
push:
branches:
- master
- 'support/*'
release:
types:
- published

@ -1,28 +1,19 @@
#!/bin/bash
set -exo pipefail
export PATH="/usr/lib/ccache/bin:/usr/lib/ccache:/usr/lib64/ccache:$PATH"
export PATH="/usr/lib/ccache:/usr/lib64/ccache:/opt/rh/devtoolset-11/root/usr/bin:$PATH"
export CCACHE_DIR=/icinga2/ccache
export CTEST_OUTPUT_ON_FAILURE=1
CMAKE_OPTS=()
CMAKE_OPTS=''
case "$DISTRO" in
alpine:*)
# Packages inspired by the Alpine package, just
# - LibreSSL instead of OpenSSL 3 and
# - no MariaDB or libpq as they depend on OpenSSL.
# https://gitlab.alpinelinux.org/alpine/aports/-/blob/master/community/icinga2/APKBUILD
apk add bison boost-dev ccache cmake flex g++ libedit-dev libressl-dev ninja-build tzdata
ln -vs /usr/lib/ninja-build/bin/ninja /usr/local/bin/ninja
;;
amazonlinux:2)
amazon-linux-extras install -y epel
yum install -y bison ccache cmake3 gcc-c++ flex ninja-build system-rpm-config \
yum install -y bison ccache cmake3 gcc-c++ flex ninja-build \
{libedit,mariadb,ncurses,openssl,postgresql,systemd}-devel
yum install -y bzip2 tar wget
wget https://archives.boost.io/release/1.69.0/source/boost_1_69_0.tar.bz2
wget https://boostorg.jfrog.io/artifactory/main/release/1.69.0/source/boost_1_69_0.tar.bz2
tar -xjf boost_1_69_0.tar.bz2
(
@ -33,34 +24,42 @@ case "$DISTRO" in
ln -vs /usr/bin/cmake3 /usr/local/bin/cmake
ln -vs /usr/bin/ninja-build /usr/local/bin/ninja
CMAKE_OPTS+=(-DBOOST_{INCLUDEDIR=/boost_1_69_0,LIBRARYDIR=/boost_1_69_0/stage/lib})
CMAKE_OPTS='-DBOOST_INCLUDEDIR=/boost_1_69_0 -DBOOST_LIBRARYDIR=/boost_1_69_0/stage/lib'
export LD_LIBRARY_PATH=/boost_1_69_0/stage/lib
;;
amazonlinux:20*)
dnf install -y amazon-rpm-config bison cmake flex gcc-c++ ninja-build \
{boost,libedit,mariadb-connector-c,ncurses,openssl,postgresql,systemd}-devel
dnf install -y bison cmake flex gcc-c++ ninja-build \
{boost,libedit,mariadb1\*,ncurses,openssl,postgresql,systemd}-devel
;;
centos:*)
yum install -y centos-release-scl epel-release
yum install -y bison ccache cmake3 devtoolset-11-gcc-c++ flex ninja-build \
{boost169,libedit,mariadb,ncurses,openssl,postgresql,systemd}-devel
ln -vs /usr/bin/cmake3 /usr/local/bin/cmake
ln -vs /usr/bin/ccache /usr/lib64/ccache/g++
CMAKE_OPTS='-DBOOST_INCLUDEDIR=/usr/include/boost169 -DBOOST_LIBRARYDIR=/usr/lib64/boost169'
;;
debian:*|ubuntu:*)
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install --no-install-{recommends,suggests} -y \
bison ccache cmake dpkg-dev flex g++ ninja-build tzdata \
lib{boost-all,edit,mariadb,ncurses,pq,ssl,systemd}-dev
DEBIAN_FRONTEND=noninteractive apt-get install --no-install-{recommends,suggests} -y bison \
ccache cmake flex g++ lib{boost-all,edit,mariadb,ncurses,pq,ssl,systemd}-dev ninja-build tzdata
;;
fedora:*)
dnf install -y bison ccache cmake flex gcc-c++ ninja-build redhat-rpm-config \
dnf install -y bison ccache cmake flex gcc-c++ ninja-build \
{boost,libedit,mariadb,ncurses,openssl,postgresql,systemd}-devel
;;
*suse*)
zypper in -y bison ccache cmake flex gcc-c++ ninja rpm-config-SUSE \
{lib{edit,mariadb,openssl},ncurses,postgresql,systemd}-devel \
opensuse/*)
zypper in -y bison ccache cmake flex gcc-c++ ninja {lib{edit,mariadb,openssl},ncurses,postgresql,systemd}-devel \
libboost_{context,coroutine,filesystem,iostreams,program_options,regex,system,test,thread}-devel
;;
*rockylinux:*)
rockylinux:*)
dnf install -y 'dnf-command(config-manager)' epel-release
case "$DISTRO" in
@ -72,22 +71,8 @@ case "$DISTRO" in
;;
esac
dnf install -y bison ccache cmake gcc-c++ flex ninja-build redhat-rpm-config \
{boost,bzip2,libedit,mariadb,ncurses,openssl,postgresql,systemd,xz,libzstd}-devel
;;
esac
case "$DISTRO" in
alpine:*)
CMAKE_OPTS+=(-DUSE_SYSTEMD=OFF -DICINGA2_WITH_MYSQL=OFF -DICINGA2_WITH_PGSQL=OFF)
;;
debian:*|ubuntu:*)
CMAKE_OPTS+=(-DICINGA2_LTO_BUILD=ON)
source <(dpkg-buildflags --export=sh)
;;
*)
CMAKE_OPTS+=(-DCMAKE_{C,CXX}_FLAGS="$(rpm -E '%{optflags} %{?march_flag}')")
export LDFLAGS="$(rpm -E '%{?build_ldflags}')"
dnf install -y bison ccache cmake gcc-c++ flex ninja-build \
{boost,libedit,mariadb,ncurses,openssl,postgresql,systemd}-devel
;;
esac
@ -96,14 +81,14 @@ cd /icinga2/build
cmake \
-GNinja \
-DCMAKE_BUILD_TYPE=RelWithDebInfo \
-DCMAKE_BUILD_TYPE=Release \
-DICINGA2_UNITY_BUILD=ON \
-DUSE_SYSTEMD=ON \
-DICINGA2_USER=$(id -un) \
-DICINGA2_GROUP=$(id -gn) \
"${CMAKE_OPTS[@]}" ..
$CMAKE_OPTS ..
ninja -v
ninja
ninja test
ninja install

@ -13,7 +13,7 @@ concurrency:
jobs:
linux:
name: ${{ matrix.distro }}${{ matrix.platform != 'linux/amd64' && format(' ({0})', matrix.platform) || '' }}
name: ${{ matrix.distro }}
runs-on: ubuntu-latest
strategy:
@ -21,67 +21,36 @@ jobs:
max-parallel: 2
matrix:
distro:
# Alpine Linux to build Icinga 2 with LibreSSL, OpenBSD's default.
# The "alpine:bash" image will be built below based on "alpine:3".
- alpine:bash
- amazonlinux:2
- amazonlinux:2023
# Raspberry Pi OS is close enough to Debian to test just one of them.
# Its architecture is different, though, and covered by the Docker job.
- debian:11
- debian:12
- centos:7 # and RHEL 7
- debian:10
- debian:11 # and Raspbian 11
- debian:12 # and Raspbian 12
- fedora:37
- fedora:38
- fedora:39
- fedora:40
- fedora:41
- fedora:42
- opensuse/leap:15.5
- opensuse/leap:15.6
# We don't actually support Rocky Linux as such!
# We just use that RHEL clone to test the original.
- rockylinux:8
- rockylinux:9
- rockylinux/rockylinux:10
- registry.suse.com/suse/sle15:15.5
- registry.suse.com/suse/sle15:15.6
- registry.suse.com/suse/sle15:15.7
- opensuse/leap:15.3 # SLES 15.3
- opensuse/leap:15.4 # and SLES 15.4
- opensuse/leap:15.5 # and SLES 15.5
- rockylinux:8 # RHEL 8
- rockylinux:9 # RHEL 9
- ubuntu:20.04
- ubuntu:22.04
- ubuntu:24.04
- ubuntu:24.10
- ubuntu:25.04
platform:
- linux/amd64
include:
- distro: debian:11
platform: linux/386
- distro: debian:12
platform: linux/386
- ubuntu:23.04
- ubuntu:23.10
steps:
- name: Checkout HEAD
uses: actions/checkout@v4
uses: actions/checkout@v3
- name: Restore/backup ccache
uses: actions/cache@v4
uses: actions/cache@v3
with:
path: ccache
key: ccache/${{ matrix.distro }}
- name: Build Alpine Docker Image
if: "matrix.distro == 'alpine:bash'"
run: >-
docker build --file .github/workflows/alpine-bash.Dockerfile
--tag alpine:bash `mktemp -d`
- name: Build Icinga
- name: Build
run: >-
docker run --rm -v "$(pwd):/icinga2" -e DISTRO=${{ matrix.distro }}
--platform ${{ matrix.platform }} ${{ matrix.distro }} /icinga2/.github/workflows/linux.bash
${{ matrix.distro }} /icinga2/.github/workflows/linux.bash

.github/workflows/rpm.yml vendored (new file, +116 lines)

@ -0,0 +1,116 @@
name: .rpm
on:
push:
branches:
- master
- 'support/*'
pull_request: {}
concurrency:
group: rpm-${{ github.event_name == 'push' && github.sha || github.ref }}
cancel-in-progress: true
jobs:
rpm:
name: .rpm (${{ matrix.distro.name }}, ${{ matrix.distro.release }})
strategy:
fail-fast: false
max-parallel: 1
matrix:
distro:
- name: sles
release: '12.5'
subscription: true
runs-on: ubuntu-latest
steps:
- name: Vars
id: vars
env:
GITLAB_RO_TOKEN: '${{ secrets.GITLAB_RO_TOKEN }}'
run: |
if [ ${{ matrix.distro.subscription }} = true ]; then
if [ "$(tr -d '\n' <<<"$GITLAB_RO_TOKEN" |wc -c)" -eq 0 ]; then
echo '::set-output name=CAN_BUILD::false'
echo '::set-output name=NEED_LOGIN::false'
else
echo '::set-output name=CAN_BUILD::true'
echo '::set-output name=NEED_LOGIN::true'
fi
else
echo '::set-output name=CAN_BUILD::true'
echo '::set-output name=NEED_LOGIN::false'
fi
- name: Checkout HEAD
if: "steps.vars.outputs.CAN_BUILD == 'true'"
uses: actions/checkout@v1
- name: Login
if: "steps.vars.outputs.NEED_LOGIN == 'true'"
env:
GITLAB_RO_TOKEN: '${{ secrets.GITLAB_RO_TOKEN }}'
run: |
docker login registry.icinga.com -u github-actions --password-stdin <<<"$GITLAB_RO_TOKEN"
- name: rpm-icinga2
if: "steps.vars.outputs.CAN_BUILD == 'true' && !matrix.distro.subscription"
run: |
set -exo pipefail
git clone https://git.icinga.com/packaging/rpm-icinga2.git
chmod o+w rpm-icinga2
- name: subscription-rpm-icinga2
if: "steps.vars.outputs.CAN_BUILD == 'true' && matrix.distro.subscription"
env:
GITLAB_RO_TOKEN: '${{ secrets.GITLAB_RO_TOKEN }}'
run: |
set -exo pipefail
git config --global credential.helper store
cat <<EOF >~/.git-credentials
https://github-actions:${GITLAB_RO_TOKEN}@git.icinga.com
EOF
git clone https://git.icinga.com/packaging/subscription-rpm-icinga2.git rpm-icinga2
chmod o+w rpm-icinga2
- name: Restore/backup ccache
if: "steps.vars.outputs.CAN_BUILD == 'true'"
id: ccache
uses: actions/cache@v1
with:
path: rpm-icinga2/ccache
key: |-
${{ matrix.distro.name }}/${{ matrix.distro.release }}-ccache-${{ hashFiles('rpm-icinga2/ccache') }}
- name: Binary
if: "steps.vars.outputs.CAN_BUILD == 'true'"
run: |
set -exo pipefail
git checkout -B master
if [ -e rpm-icinga2/ccache ]; then
chmod -R o+w rpm-icinga2/ccache
fi
docker run --rm \
-v "$(pwd)/rpm-icinga2:/rpm-icinga2" \
-v "$(pwd)/.git:/icinga2.git:ro" \
-w /rpm-icinga2 \
-e ICINGA_BUILD_PROJECT=icinga2 \
-e ICINGA_BUILD_TYPE=snapshot \
-e UPSTREAM_GIT_URL=file:///icinga2.git \
registry.icinga.com/build-docker/${{ matrix.distro.name }}/${{ matrix.distro.release }} \
icinga-build-package
- name: Test
if: "steps.vars.outputs.CAN_BUILD == 'true'"
run: |
set -exo pipefail
docker run --rm \
-v "$(pwd)/rpm-icinga2:/rpm-icinga2" \
-w /rpm-icinga2 \
-e ICINGA_BUILD_PROJECT=icinga2 \
-e ICINGA_BUILD_TYPE=snapshot \
registry.icinga.com/build-docker/${{ matrix.distro.name }}/${{ matrix.distro.release }} \
icinga-build-test

@ -21,39 +21,33 @@ jobs:
matrix:
bits: [32, 64]
runs-on: windows-2025
runs-on: windows-2019
env:
BITS: '${{ matrix.bits }}'
CMAKE_BUILD_TYPE: RelWithDebInfo
ICINGA_BUILD_TYPE: snapshot
UPSTREAM_GIT_URL: file://D:/a/icinga2/icinga2/.git
steps:
- name: Checkout HEAD
uses: actions/checkout@v4
with:
fetch-depth: 0
uses: actions/checkout@v1
- name: windows-icinga2
run: |
git clone https://git.icinga.com/packaging/windows-icinga2.git
- name: Build tools
run: |
Set-PSDebug -Trace 1
& .\doc\win-dev.ps1
- name: Binary
- name: Source
run: |
Set-PSDebug -Trace 1
& .\tools\win32\load-vsenv.ps1
& powershell.exe .\tools\win32\configure.ps1
if ($LastExitCode -ne 0) { throw "Error during configure" }
& powershell.exe .\tools\win32\build.ps1
if ($LastExitCode -ne 0) { throw "Error during build" }
& powershell.exe .\tools\win32\test.ps1
if ($LastExitCode -ne 0) { throw "Error during test" }
git checkout -B master
cd windows-icinga2
& .\source.ps1
- name: Show Log Files
if: ${{ always() }}
- name: Binary
working-directory: windows-icinga2
run: |
foreach ($file in Get-ChildItem -Recurse -Filter "*.log") {
Write-Host "::group::$($file.FullName)"
Get-Content $file.FullName
Write-Host "::endgroup::"
}
New-Item -ItemType Directory -Path 'C:\Program Files\Icinga2\WillBeRemoved' -ErrorAction SilentlyContinue
& .\build.ps1

@ -1,7 +1,6 @@
<alexander.klimov@icinga.com> <alexander.klimov@netways.de>
Alexander A. Klimov <alexander.klimov@icinga.com> <alexander.klimov@icinga.com>
<alexander.klimov@icinga.com> <grandmaster@al2klimov.de>
Alexander A. Klimov <alexander.klimov@icinga.com> <al2klimov@gmail.com>
<assaf@aikilinux.com> <assaf.flatto@livepopuli.com>
<atj@pulsewidth.org.uk> <adam.james@transitiv.co.uk>
<bernd.erk@icinga.com> <bernd.erk@icinga.org>
@ -35,7 +34,6 @@ Alexander A. Klimov <alexander.klimov@icinga.com> <al2klimov@gmail.com>
<tobias.vonderkrone@profitbricks.com> <tobias@vonderkrone.info>
<yonas.habteab@icinga.com> <yonas.habteab@netways.de>
Alex <alexp710@hotmail.com> <alexp710@hotmail.com>
Alvar Penning <alvar.penning@icinga.com> <8402811+oxzi@users.noreply.github.com>
Baptiste Beauplat <lyknode@cilg.org> <lyknode@cilg.org>
Carsten Köbke <carsten.koebke@gmx.de> Carsten Koebke <carsten.koebke@koebbes.de>
Claudio Kuenzler <ck@claudiokuenzler.com>
@ -43,7 +41,6 @@ Diana Flach <diana.flach@icinga.com> <crunsher@bamberg.ccc.de>
Diana Flach <diana.flach@icinga.com> <Crunsher@users.noreply.github.com>
Diana Flach <diana.flach@icinga.com> <jean-marcel.flach@netways.de>
Diana Flach <diana.flach@icinga.com> Jean Flach <jean-marcel.flach@icinga.com>
Dirk Wening <dirk.wening@netways.de> <170401214+SpeedD3@users.noreply.github.com>
Dolf Schimmel <dolf@transip.nl> <dolf@dolfschimmel.nl>
Gunnar Beutner <gunnar.beutner@icinga.com> <icinga@net-icinga2.adm.netways.de>
Henrik Triem <henrik.triem@icinga.com> <henrik.triem@netways.de>

AUTHORS (19 changed lines)

@ -21,7 +21,6 @@ Andres Ivanov <andres@andres.wtf>
Andrew Jaffie <ajaffie@gmail.com>
Andrew Meyer <ameyer+secure@nodnetwork.org>
Andy Grunwald <andygrunwald@gmail.com>
Angel Roman <angel.r.roman77@gmail.com>
Ant1x <37016240+Ant1x@users.noreply.github.com>
Arnd Hannemann <arnd@arndnet.de>
Assaf Flatto <assaf@aikilinux.com>
@ -48,13 +47,11 @@ C C Magnus Gustavsson <magnus@gustavsson.se>
Carlos Cesario <carloscesario@gmail.com>
Carsten Köbke <carsten.koebke@gmx.de>
Chris Boot <crb@tiger-computing.co.uk>
Chris Malton <chris@deltav-tech.co.uk>
Christian Birk <mail@birkc.de>
Christian Gut <cycloon@is-root.org>
Christian Harke <ch.harke@gmail.com>
Christian Jonak <christian@jonak.org>
Christian Lehmann <christian_lehmann@gmx.de>
Christian Lauf <github.com@christian-lauf.info>
Christian Loos <cloos@netsandbox.de>
Christian Schmidt <github@chsc.dk>
Christopher Peterson <3893680+cspeterson@users.noreply.github.com>
@ -75,10 +72,8 @@ Denis <zaharden@gmail.com>
Dennis Lichtenthäler <dennis.lichtenthaeler@stiftung-tannenhof.de>
dh.harald <dh.harald@gmail.com>
Diana Flach <diana.flach@icinga.com>
Didier 'OdyX' Raboud <didier.raboud@liip.ch>
Dinesh Majrekar <dinesh.majrekar@serverchoice.com>
Dirk Goetz <dirk.goetz@icinga.com>
Dirk Wening <dirk.wening@netways.de>
Dirk Melchers <dirk@dirk-melchers.de>
Dolf Schimmel <dolf@transip.nl>
Dominik Riva <driva@protonmail.com>
@ -137,10 +132,8 @@ Jesse Morgan <morgajel@gmail.com>
Jo Goossens <jo.goossens@hosted-power.com>
Jochen Friedrich <j.friedrich@nwe.de>
Johannes Meyer <johannes.meyer@icinga.com>
Johannes Schmidt <johannes.schmidt@icinga.com>
Jonas Meurer <jonas@freesources.org>
Jordi van Scheijen <jordi.vanscheijen@solvinity.com>
Josef Friedrich <josef@friedrich.rocks>
Joseph L. Casale <jcasale@activenetwerx.com>
jre3brg <jorge.rebelo@pt.bosch.com>
Julian Brost <julian.brost@icinga.com>
@ -168,7 +161,6 @@ Luca Lesinigo <luca@lm-net.it>
Lucas Bremgartner <breml@users.noreply.github.com>
Lucas Fairchild-Madar <lucas.madar@gmail.com>
Luiz Amaral <luiz.amaral@innogames.com>
Maciej Dems <maciej.dems@p.lodz.pl>
Magnus Bäck <magnus@noun.se>
Maik Stuebner <maik@stuebner.info>
Malte Rabenseifner <mail@malte-rabenseifner.de>
@ -181,7 +173,6 @@ Marius Bergmann <marius@yeai.de>
Marius Sturm <marius@graylog.com>
Mark Leary <mleary@mit.edu>
Markus Frosch <markus.frosch@icinga.com>
Markus Opolka <markus.opolka@netways.de>
Markus Waldmüller <markus.waldmueller@netways.de>
Markus Weber <github@ztweb.de>
Martijn van Duren <m.vanduren@itisit.nl>
@ -216,9 +207,7 @@ mocruz <mocruz@theworkshop.com>
Muhammad Mominul Huque <nahidbinbaten1995@gmail.com>
nemtrif <ntrifunovic@hotmail.com>
Nicolai <nbuchwitz@users.noreply.github.com>
Nicolas Berens <nicolas.berens@planet.com>
Nicolas Limage <github@xephon.org>
Nicolas Rodriguez <nico@nicoladmin.fr>
Nicole Lang <nicole.lang@icinga.com>
Niflou <dubuscyr@gmail.com>
Noah Hilverling <noah.hilverling@icinga.com>
@ -232,7 +221,6 @@ Patrick Dolinic <pdolinic@netways.de>
Patrick Huy <frz@frz.cc>
Paul Denning <paul.denning@dimensiondata.com>
Paul Richards <paul@minimoo.org>
Pavel Motyrev <legioner.r@gmail.com>
Pawel Szafer <pszafer@gmail.com>
Per von Zweigbergk <pvz@itassistans.se>
Peter Eckel <6815386+peteeckel@users.noreply.github.com>
@ -246,7 +234,7 @@ pv2b <pvz@pvz.pp.se>
Ralph Breier <ralph.breier@roedl.com>
Reto Zeder <reto.zeder@arcade.ch>
Ricardo Bartels <ricardo@bitchbrothers.com>
Richard Mortimer <richm@oldelvet.org.uk>
RincewindsHat <12514511+RincewindsHat@users.noreply.github.com>
Rinck H. Sonnenberg <r.sonnenberg@netson.nl>
Robert Lindgren <robert.lindgren@gmail.com>
Robert Scheck <robert@fedoraproject.org>
@ -263,7 +251,6 @@ Sascha Westermann <sascha.westermann@hl-services.de>
Sebastian Brückner <mail@invlid.com>
Sebastian Chrostek <sebastian@chrostek.net>
Sebastian Eikenberg <eikese@mail.uni-paderborn.de>
Sebastian Grund <s.grund@openinfrastructure.de>
Sebastian Marsching <sebastian-git-2016@marsching.com>
Silas <67681686+Tqnsls@users.noreply.github.com>
Simon Murray <spjmurray@yahoo.co.uk>
@ -284,7 +271,6 @@ Sven Wegener <swegener@gentoo.org>
sysadt <sysadt@protonmail.com>
T. Mulyana <nothinux@gmail.com>
teclogi <27726999+teclogi@users.noreply.github.com>
Theo Buehler <tb@openbsd.org>
Thomas Forrer <thomas.forrer@wuerth-phoenix.com>
Thomas Gelf <thomas.gelf@icinga.com>
Thomas Niedermeier <tniedermeier@thomas-krenn.com>
@ -292,7 +278,6 @@ Thomas Widhalm <thomas.widhalm@icinga.com>
Tim Hardeck <thardeck@suse.de>
Tim Weippert <weiti@weiti.eu>
Timo Buhrmester <van.fstd@gmail.com>
Tobias Bauriedel <tobias.bauriedel@netways.de>
Tobias Birnbaum <osterd@gmx.de>
Tobias Deiminger <haxtibal@posteo.de>
Tobias von der Krone <tobias.vonderkrone@profitbricks.com>
@ -303,12 +288,10 @@ vigiroux <vincent.giroux@nokia.com>
Vytenis Darulis <vytenis@uber.com>
Wenger Florian <wenger@unifox.at>
Will Frey <will.frey@digitalreasoning.com>
William Calliari <42240136+w1ll-i-code@users.noreply.github.com>
Winfried Angele <winfried.angele@gmail.com>
Wolfgang Nieder <wnd@gmx.net>
XnS <git@xns.be>
Yannick Charton <tontonitch-pro@yahoo.fr>
Yannick Martin <yannick.martin@ovhcloud.com>
Yohan Jarosz <yohanjarosz@yahoo.fr>
Yonas Habteab <yonas.habteab@icinga.com>
Zachary McGibbon <zachary.mcgibbon@gmail.com>

@ -7,275 +7,6 @@ documentation before upgrading to a new release.
Released closed milestones can be found on [GitHub](https://github.com/Icinga/icinga2/milestones?state=closed).
## 2.15.0 (2025-06-18)
This Icinga 2 release is focused on adding Icinga 2 dependencies support to Icinga DB, but also includes a number
of bugfixes, enhancements and code quality improvements. Below is a summary of the most important changes; for the
complete list of issues and PRs, please see the [milestone on GitHub](https://github.com/Icinga/icinga2/issues?q=is%3Aclosed+milestone%3A2.15.0).
### Notes
Thanks to all contributors:
[ChrLau](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3AChrLau),
[Josef-Friedrich](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3AJosef-Friedrich),
[LordHepipud](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3ALordHepipud),
[OdyX](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3AOdyX),
[RincewindsHat](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3ARincewindsHat),
[SebastianOpeni](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3ASebastianOpeni),
[SpeedD3](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3ASpeedD3),
[Tqnsls](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3ATqnsls),
[botovq](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3Abotovq),
[cycloon](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3Acycloon),
[legioner0](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3Alegioner0),
[legna-namor](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3Alegna-namor),
[macdems](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3Amacdems),
[mathiasaerts](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3Amathiasaerts),
[mcodato](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3Amcodato),
[n-rodriguez](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3An-rodriguez),
[netphantm](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3Anetphantm),
[nicolasberens](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3Anicolasberens),
[oldelvet](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3Aoldelvet),
[peteeckel](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3Apeteeckel),
[tbauriedel](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3Atbauriedel),
[w1ll-i-code](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3Aw1ll-i-code),
[ymartin-ovh](https://github.com/Icinga/icinga2/pulls?q=is%3Apr+is%3Aclosed+milestone%3A2.15.0+author%3Aymartin-ovh)
### Breaking Changes
* API: Fix `/v1/objects/*` queries with `attrs` set to `[]` to return empty attributes instead of all of them. #8169
* Drop the undocumented `Checkable#process_check_result` and broken `System#track_parents` DSL functions. #10457
### Enhancements
* Gracefully disconnect all clients on shutdown and stop accepting new connections. #10460
* Icinga DB: Send data to Redis® exactly as they're stored in the database to avoid extra value-mapping routines by the Go daemon. #10452
* Add support for Icinga 2 dependencies in Icinga DB. #10290
* Take host/service reachability into account when computing its severity. #10399
* Rework the dependency cycle detection to efficiently handle large configs and provide better error messages. #10360
* Don't log next check timestamp in scientific notation. #10352
* Automatically remove child downtimes when removing parent downtime. #10345
* Ensure compatibility with Boost version up to v1.88. #10278 #10419
* Reject infinite performance data values. #10077
* Support `host_template` and `service_template` tags in `ElasticsearchWriter`. #10074
* Icinga DB: Support Redis® username authentication. #10102
* Cluster: Distribute host child objects (e.g. services, notifications, etc.) based on the host's name. #10161
* Icinga DB Check: Report an error if both Icinga DB instances are responsible in an HA setup. #10188
* Windows: upgrade build toolchain to Visual Studio 2022. #9747
### Bugfixes
* Core
* Use `Checkable#check_timeout` also for rescheduling remote checks. #10443
* Log: Don't unnecessarily buffer log messages that are going to be dropped anyway. #10177
  * Don't lose the perfdata counter (`c`) unit when normalizing performance data for Icinga DB. #10432
* Fix broken SELinux policy on Fedora ≥ 41 due to the new `/usr/sbin` to `/usr/bin` equivalence. #10429
* Don't load `Notification` objects before `User` and `UserGroup` objects to allow them to be referenced in notifications. #10427
* Ensure consistent DST handling across different platforms. #10422
  * Fix Icinga 2 not generating a core dump when it crashes with SIGABRT. #10416
* Don't process concurrent checks for the same checkable. #10372
* Don't process check results after the checker and API listener have been stopped. #10397
* Avoid zombie processes on plugin execution timeout on busy systems. #10375
* Properly restore the notification object state on `Recovery` notification. #10361
* Fix incorrectly dropped acknowledgement and recovery notifications. #10211
* Prevent checks from always being rescheduled outside the configured `check_period`. #10070
* Don't send reminder notifications after a `Custom` notification while `interval` is set to `0`. #7818
* Reset all signal handlers of child processes to their defaults before starting a plugin. #8011
* tests: Fix `FormatDateTime` test cases with invalid formats on macOS and all BSD-based systems. #10149
* Mark move constructor and assignment operator in `String` as `noexcept` to allow optimizations. #10353 #10365
* Cluster and API
  * Fix an inverted condition in `ApiListener#IsHACluster()` that caused it to always return `true` in a non-HA setup. #10417
* Don't silently accept authenticated JSON-RPC connections with no valid endpoint. #10415
* Sync `Notification#notified_problem_users` across the cluster to prevent lost recovery notifications. #10380
  * Remove superfluous `)` from an HTTP request log message. #9966
* Disable TLS renegotiation (handshake on existing connection) on OpenBSD as well. #9943
  * Also log the underlying error message when an HTTP request is closed with `No data received` by Icinga 2. #9928
* Fix a deadlock triggered by concurrent `/v1/actions/add-comment` and `/v1/actions/acknowledge-problem` requests on
the same checkable, as well as a crash that might occur when running perfectly timed `/v1/actions/add-comment`
and `/v1/actions/remove-comment` requests targeting the same comment. #9924
* Icinga DB
* Fix missing acknowledgement and flapping history entries due to a number overflow. #10467
* Send downtime `cancel_time` only if it is cancelled. #10379
* Send only the necessary data to the `icinga:stats` Redis® stream. #10359
* Remove a spin lock in `RedisConnection#Connect()` to avoid busy waiting. #10265
* Writers
* Serialize all required metrics before queueing them to a `WorkQueue`. #10420
* `OpenTsdbWriter`: Include checkable name in log messages to ease troubleshooting. #10009
* `OpenTsdbWriter`: Don't send custom empty tags. #7928
* `InfluxDBWriter`: Add missing closing quote in validation error message. #10174
### ITL
* Add `--maintenance_mode_state` (`$vmware_maintenance_mode_state`) argument to `vmware-esx-command` check command. #10435
* Add `-n` (`$load_procs_to_show$`) argument to `load` check command. #10426
* Add `--inode-perfdata` (`$disk_np_inode_perfdata$`) argument to `disk` check command. #10395
* Add `-r` (`$ssh_remote_version$`) and `-P` (`$ssh_remote_protocol$`) arguments to `ssh` check command. #10283
* Add `--unplugged_nics_state` (`$vmware_unplugged_nics_state$`) argument to `vmware-esx-soap-host-net` and `vmware-esx-soap-host-net-nic` check commands. #10261
* Add `-X` (`$proc_exclude_process$`) argument to `procs` check command. #10232
* Add `--dane` (`$ssl_cert_dane$`) argument to `ssl_cert` check command. #10196
* Fix `check_ssl_cert` deprecation warnings. #9758
* Fix `check_systemd` executable name and add all missing arguments. #10035
* Add `-M` (`$snmp_multiplier$` & `$snmpv3_multiplier$`) argument to `snmp` and `snmpv3` check commands. #9975
* Add `--continue-after-certificate` (`$http_certificate_continue$`) argument to `http` check command. #9974
* Add `--ignore-maximum-validity` (`$ssl_cert_ignore_maximum_validity$`) argument to `ssl_cert` check command. #10396
* Add `--maximum-validity` (`$ssl_cert_maximum_validity$`) argument to `ssl_cert` check command. #9881
* Add `--url` (`$ssl_cert_http_url$`) argument to `ssl_cert` check command. #9759
* Add `fuse.sshfs` and `fuse.*` (supported only by Monitoring Plugins) to the list of default disk exclude types. #9749
* Add `check_curl` check command. #9205
* Add the `--extra-opts` argument to various commands that support it. #8010
### Documentation
* Don't use `dnf config-manager` to configure Fedora repository and mention `icingadb-redis-selinux` package. #10479
* Update the outdated cold startup duration documentation to reflect the current behavior. #10446
* Indent second-level unordered lists with four spaces to correctly render them in the HTML documentation. #10441
* Add a reference to the check result state documentation from within the Advanced Topics section. #10421
* Improve the documentation of how to generate Icinga 2 core dumps. #10418
* Update Icinga 2 CLI output examples to match the current output. #10323
* Fix incorrect `ping_timeout` value in the `hostalive` check command documentation. #10069
### Code Quality
* Simplify deferred SSL shutdown in `ApiListener#NewClientHandlerInternal()`. #10301
* Don't unnecessarily shuffle configuration items during config load. #10008
* Sort config types by their load dependencies at namespace initialization time to save some round trips during config load. #10148
* Fix `livestatus` build error on macOS without unity builds. #10176
* Remove unused methods in `SharedObject` class. #10456
* Remove unused `ProcessingResult#NoCheckResult` enum value. #10444
* CMake: Drop all third-party cmake modules and use the ones shipped with CMake v3.8+. #10403
* CMake: Raise the minimum required policy to `3.8`. #10402 #10478
* CMake: Turn on `-Wsuggest-override` to warn about missing `override` specifiers. #10225 #10356
* Make `icinga::Empty` a constant to prevent accidental modifications. #10224
* Remove various unused methods in the `Registry` class. #10222
* Fix missing parent `std::atomic<T>` constructor call in our `Atomic<T>` wrapper class. #10215
* Drop unused `m_NextHeartbeat` member variable from `JsonRpcConnection`. #10208
* Enhance some of the validation error messages. #10201
* Don't allow `Type#GetLoadDependencies()` to return non-config object type dependencies. #10169
* Don't allow `Type#GetLoadDependencies()` to return a set of nullptr type dependencies. #10155
* Remove EOL distros detection code from `Utility::ReleaseHelper()` function. #10147
* Remove dead code in TLS `GetSignatureAlgorithm()` function. #9882
* Mark `Logger#GetSeverity()` as non-virtual to avoid unnecessary vtable lookups. #9851
* Remove unused `Stream#Peak()` method and unused `allow_partial` parameter from `Stream#Read()`. #9734 #9736
* Suppress compiler warnings in third-party libraries. #9732
* Fix various compiler warnings. #9731 #10442
* Reduce task function allocation overhead by using a per-thread created lambda in `WorkQueue`. #9575
* Remove redundant trailing empty lines and add missing newlines in some files. #7799
## 2.14.6 (2025-05-27)
This security release fixes a critical issue in the certificate renewal logic in Icinga 2, which
might incorrectly renew an invalid certificate. However, only nodes with access to the Icinga CA
private key running with OpenSSL older than version 1.1.0 (released in 2016) are vulnerable. So this
typically affects Icinga 2 masters running on operating systems like RHEL 7 and Amazon Linux 2.
* CVE-2025-48057: Prevent invalid certificates from being renewed with OpenSSL older than v1.1.0.
* Fix a use-after-free in `VerifyCertificate()`: a use-after-free was found in the same
  function and is fixed as well; if triggered, it typically only results in a wrong error
  code being shown in a log message.
* Windows: Update OpenSSL shipped on Windows to v3.0.16.
## 2.14.5 (2025-02-06)
This release fixes a regression introduced in 2.14.4 that caused the `icinga2 node setup`,
`icinga2 node wizard`, and `icinga2 pki request` commands to fail if a certificate was
requested from a node that has to forward the request to another node for signing.
Additionally, it fixes a small bug in the performance data normalization and includes
various documentation improvements.
### Bug Fixes
* Don't close anonymous connections before sending the response for a certificate request #10337
* Performance data: Don't discard min/max values even if crit/warn thresholds aren't given #10339
* Fix a failing test case on systems where `time_t` is only 32 bits #10343
### Documentation
* Document the -X option for the mail-host-notification and mail-service-notification commands #10335
* Include Nagios in the migration docs #10324
* Remove RHEL 7 from installation instructions #10334
* Add instructions for installing build dependencies on Windows Server #10336
## 2.14.4 (2025-01-23)
This bugfix release is focused on improving HA cluster stability and easing
troubleshooting of issues in this area. It also addresses several crashes,
in the core itself and both in Icinga DB and IDO (numbers out of range).
In addition, it fixes several other issues such as lost notifications
or TimePeriod/ScheduledDowntime exceeding specified date ranges.
### Crash Fixes
* Invalid `DateTime#format()` arguments in config and console on Windows Server 2016 and older. #10112
* Downtime scheduling at runtime with non-existent trigger. #10049
* Object creation at runtime during Icinga DB initialization. #10151
* Comment on a service of a non-existent host. #9861
### Miscellaneous Bugfixes
* Lost notifications after recovery outside the notification time period. #10187
* TimePeriod/ScheduledDowntime exceeding specified date range. #9983 #10107
* Clean up failure for obsolete Downtimes. #10062
* ifw-api check command: use correct process-finished handler. #10140
* Email notification scripts: strip 0x0D (CR) for a proper Content-Type. #10061
* Several fixes and improvements of the code quality. #10066 #10214 #10254 #10263 #10264
### Cluster and API
* Sync runtime objects in topological order to honor their dependencies. #10000
* Make parallel config syncs more robust. #10013
* After object creation via API fails, clean up properly for the next try. #10111
* Close HTTPS connections properly to prevent leaks. #10005 #10006
* Reduce the number of cluster messages in memory at the same time. #9991 #9999 #10210
* Stop communicating once a cluster connection is about to be closed. #10213 #10221
* Remove unnecessary blocking of semaphores. #9992 #9994
* Reduce unnecessary cluster messages setting the next check time. #10011
### Icinga DB and IDO
* IDO: fix object relations after aborted synchronization. #10065
* Icinga DB, IDO: limit all timestamps to four year digits. #10058 #10059
* Icinga DB: limit execution\_time and latency (milliseconds) to database schema. #10060
### Troubleshooting
* Add `/v1/debug/malloc_info` which calls `malloc_info(3)` if available. #10015
* Add log messages about own network I/O. #9993 #10141 #10207
* Several fixes and improvements of log messages. #9997 #10021 #10209
### Windows
* Update OpenSSL shipped on Windows to v3.0.15. #10170
* Update Boost shipped on Windows to v1.86. #10114
* Support CMake v3.29. #10037
* Don't require to build .msi as admin. #10137
* Build configuration scripts: allow custom `$CMAKE_ARGS`. #10312
### Documentation
* Distributed Monitoring: add section "External CA/PKI". #9825
* Explain how to enable/disable debug logging on the fly. #9981
* Update supported OS versions and repository configuration. #10064 #10090 #10120 #10135 #10136 #10205
* Several fixes and improvements. #9960 #10050 #10071 #10156 #10194
* Replace broken links. #10115 #10118 #10282
* Fix typographical and similarly trivial errors. #9953 #9967 #10056 #10116 #10152 #10153 #10204
## 2.14.3 (2024-11-12)
This security release fixes a TLS certificate validation bypass.
Given the severity of that issue, users are advised to upgrade all nodes immediately.
* Security: fix TLS certificate validation bypass. CVE-2024-49369
* Security: update OpenSSL shipped on Windows to v3.0.15.
* Windows: sign MSI packages with a certificate the OS trusts by default.
## 2.14.2 (2024-01-18)
Version 2.14.2 is a hotfix release for master nodes that mainly
fixes excessive disk usage caused by the InfluxDB writers.
* InfluxDB: truncate timestamps to whole seconds to save disk space. #9969
* HttpServerConnection: log request processing time as well. #9970
* Update Boost shipped on Windows to v1.84. #9970
## 2.14.1 (2023-12-21)
Version 2.14.1 is a hotfix release for masters and satellites that mainly
@@ -494,58 +225,6 @@ Add `linux_netdev` check command. #9045
* Several code quality improvements. #8815 #9106 #9250
#9508 #9517 #9537 #9594 #9605 #9606 #9641 #9658 #9702 #9717 #9738
## 2.13.12 (2025-05-27)
This security release fixes a critical issue in the certificate renewal logic in Icinga 2, which
might incorrectly renew an invalid certificate. However, only nodes with access to the Icinga CA
private key running with OpenSSL older than version 1.1.0 (released in 2016) are vulnerable. So this
typically affects Icinga 2 masters running on operating systems like RHEL 7 and Amazon Linux 2.
* CVE-2025-48057: Prevent invalid certificates from being renewed with OpenSSL older than v1.1.0.
* Fix a use-after-free in `VerifyCertificate()`: a use-after-free was found in the same
  function and is fixed as well; if triggered, it typically only results in a wrong error
  code being shown in a log message.
* Windows: Update OpenSSL shipped on Windows to v3.0.16.
* Fix a failing test case on systems where `time_t` is only 32 bits. #10344
## 2.13.11 (2025-01-23)
This bugfix release addresses several crashes,
both in the core itself and in Icinga DB (numbers out of range).
In addition, it fixes several other issues such as lost notifications
or TimePeriod/ScheduledDowntime exceeding specified date ranges.
### Crash Fixes
* Invalid `DateTime#format()` arguments in config and console on Windows Server 2016 and older. #10165
* Downtime scheduling at runtime with non-existent trigger. #10127
* Object creation at runtime during Icinga DB initialization. #10164
* Icinga DB: several numbers out of database schema range. #10244
### Miscellaneous Bugfixes
* Lost notifications after recovery outside the notification time period. #10241
* TimePeriod/ScheduledDowntime exceeding specified date range. #10128 #10133
* Make parallel config syncs more robust. #10126
* Reduce unnecessary cluster messages setting the next check time. #10168
### Windows
* Update OpenSSL shipped on Windows to v3.0.15. #10175
* Update Boost shipped on Windows to v1.86. #10134
* Support CMake v3.29. #10087
* Don't require to build .msi as admin. #10305
* Build configuration scripts: allow custom `$CMAKE_ARGS`. #10315
## 2.13.10 (2024-11-12)
This security release fixes a TLS certificate validation bypass.
Given the severity of that issue, users are advised to upgrade all nodes immediately.
* Security: fix TLS certificate validation bypass. CVE-2024-49369
* Security: update OpenSSL shipped on Windows to v3.0.15.
* Windows: sign MSI packages with a certificate the OS trusts by default.
## 2.13.9 (2023-12-21)
Version 2.13.9 is a hotfix release for masters and satellites that mainly
@@ -1279,15 +958,6 @@ Thanks to all contributors:
* Code quality fixes
* Small documentation fixes
## 2.11.12 (2024-11-12)
This security release fixes a TLS certificate validation bypass.
Given the severity of that issue, users are advised to upgrade all nodes immediately.
* Security: fix TLS certificate validation bypass. CVE-2024-49369
* Security: update OpenSSL shipped on Windows to v3.0.15.
* Windows: sign MSI packages with a certificate the OS trusts by default.
## 2.11.11 (2021-08-19)
The main focus of these versions is a security vulnerability in the TLS certificate verification of our metrics writers ElasticsearchWriter, GelfWriter and InfluxdbWriter.


@@ -1,12 +1,17 @@
# Icinga 2 | (c) 2012 Icinga GmbH | GPLv2+
# CMake 3.8 is required, CMake policy compatibility was verified up to 3.17.
cmake_minimum_required(VERSION 3.8...3.17)
cmake_minimum_required(VERSION 2.8.12)
set(BOOST_MIN_VERSION "1.66.0")
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)
if("${CMAKE_VERSION}" VERSION_LESS "3.8") # SLES 12.5
if(NOT MSVC)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++17")
endif()
else()
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)
endif()
project(icinga2)
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
@@ -18,10 +23,6 @@ if(NOT CMAKE_BUILD_TYPE)
FORCE)
endif()
# Include symbols in executables so that function names can be printed in stack traces, for example in crash dumps.
set(CMAKE_ENABLE_EXPORTS ON) # Added in CMake 3.4
set(CMAKE_EXECUTABLE_ENABLE_EXPORTS ON) # Added in CMake 3.27 and supersedes the above one.
if(WIN32)
set(ICINGA2_MASTER OFF)
else()
@@ -185,21 +186,21 @@ add_definitions(-DBOOST_FILESYSTEM_NO_DEPRECATED)
add_definitions(-DBOOST_ASIO_USE_TS_EXECUTOR_AS_DEFAULT)
link_directories(${Boost_LIBRARY_DIRS})
include_directories(SYSTEM ${Boost_INCLUDE_DIRS})
include_directories(${Boost_INCLUDE_DIRS})
find_package(OpenSSL REQUIRED)
include_directories(SYSTEM ${OPENSSL_INCLUDE_DIR})
include_directories(${OPENSSL_INCLUDE_DIR})
set(base_DEPS ${CMAKE_DL_LIBS} ${Boost_LIBRARIES} ${OPENSSL_LIBRARIES})
set(base_OBJS $<TARGET_OBJECTS:mmatch> $<TARGET_OBJECTS:socketpair> $<TARGET_OBJECTS:base>)
# JSON
find_package(JSON)
include_directories(SYSTEM ${JSON_INCLUDE})
include_directories(${JSON_INCLUDE})
# UTF8CPP
find_package(UTF8CPP)
include_directories(SYSTEM ${UTF8CPP_INCLUDE})
include_directories(${UTF8CPP_INCLUDE})
find_package(Editline)
set(HAVE_EDITLINE "${EDITLINE_FOUND}")
@@ -222,23 +223,22 @@ endif()
if(EDITLINE_FOUND)
list(APPEND base_DEPS ${EDITLINE_LIBRARIES})
include_directories(SYSTEM ${EDITLINE_INCLUDE_DIR})
include_directories(${EDITLINE_INCLUDE_DIR})
endif()
if(TERMCAP_FOUND)
list(APPEND base_DEPS ${TERMCAP_LIBRARIES})
include_directories(SYSTEM ${TERMCAP_INCLUDE_DIR})
include_directories(${TERMCAP_INCLUDE_DIR})
endif()
if(WIN32)
list(APPEND base_DEPS ws2_32 dbghelp shlwapi msi)
endif()
set(CMAKE_MACOSX_RPATH 1)
set(CMAKE_INSTALL_RPATH "${CMAKE_INSTALL_RPATH};${CMAKE_INSTALL_FULL_LIBDIR}/icinga2")
if(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Winconsistent-missing-override -Wrange-loop-construct")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Qunused-arguments -fcolor-diagnostics -fno-limit-debug-info")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Qunused-arguments -fcolor-diagnostics -fno-limit-debug-info")
@@ -256,12 +256,6 @@ if(CMAKE_C_COMPILER_ID STREQUAL "SunPro")
endif()
if(CMAKE_C_COMPILER_ID STREQUAL "GNU")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wsuggest-override")
if("${CMAKE_CXX_COMPILER_VERSION}" VERSION_GREATER_EQUAL "11.0.0")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wrange-loop-construct")
endif()
if(CMAKE_SYSTEM_NAME MATCHES AIX)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -g -lpthread")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -g -lpthread")
@@ -371,7 +365,6 @@ check_function_exists(vfork HAVE_VFORK)
check_function_exists(backtrace_symbols HAVE_BACKTRACE_SYMBOLS)
check_function_exists(pipe2 HAVE_PIPE2)
check_function_exists(nice HAVE_NICE)
check_function_exists(malloc_info HAVE_MALLOC_INFO)
check_library_exists(dl dladdr "dlfcn.h" HAVE_DLADDR)
check_library_exists(execinfo backtrace_symbols "" HAVE_LIBEXECINFO)
check_include_file_cxx(cxxabi.h HAVE_CXXABI_H)
@@ -513,7 +506,6 @@ set(CPACK_WIX_UI_DIALOG "${CMAKE_CURRENT_SOURCE_DIR}/icinga-installer/dlgbmp.bmp
set(CPACK_WIX_PATCH_FILE "${CMAKE_CURRENT_BINARY_DIR}/icinga-installer/icinga2.wixpatch.Debug")
set(CPACK_WIX_PATCH_FILE "${CMAKE_CURRENT_BINARY_DIR}/icinga-installer/icinga2.wixpatch")
set(CPACK_WIX_EXTENSIONS "WixUtilExtension" "WixNetFxExtension")
set(CPACK_WIX_INSTALL_SCOPE NONE)
set(CMAKE_INSTALL_SYSTEM_RUNTIME_DESTINATION "sbin")
set(CMAKE_INSTALL_UCRT_LIBRARIES TRUE)


@@ -1,2 +1,2 @@
Version: 2.15.0
Version: 2.14.1
Revision: 1

RELEASE.md (new file, 421 lines)

@@ -0,0 +1,421 @@
# Release Workflow <a id="release-workflow"></a>
#### Table of Contents
- [1. Preparations](#preparations)
- [1.1. Issues](#issues)
- [1.2. Backport Commits](#backport-commits)
- [1.3. Windows Dependencies](#windows-dependencies)
- [2. Version](#version)
- [3. Changelog](#changelog)
- [4. Git Tag](#git-tag)
- [5. Package Builds](#package-builds)
- [5.1. RPM Packages](#rpm-packages)
- [5.2. DEB Packages](#deb-packages)
- [6. Build Server](#build-infrastructure)
- [7. Release Tests](#release-tests)
- [8. GitHub Release](#github-release)
- [9. Docker](#docker)
- [10. Post Release](#post-release)
- [10.1. Online Documentation](#online-documentation)
- [10.2. Announcement](#announcement)
- [10.3. Project Management](#project-management)
## Preparations <a id="preparations"></a>
Specify the release version.
```bash
VERSION=2.11.0
```
Add your signing key to your Git configuration file, if not already there.
```
vim $HOME/.gitconfig
[user]
email = michael.friedrich@icinga.com
name = Michael Friedrich
signingkey = D14A1F16
```
### Issues <a id="issues"></a>
Check issues at https://github.com/Icinga/icinga2
### Backport Commits <a id="backport-commits"></a>
For minor versions you need to manually backport all commits from the
master branch that should be part of this release.
### Windows Dependencies <a id="windows-dependencies"></a>
In contrast to Linux, the bundled Windows dependencies
(at least Boost and OpenSSL) aren't updated automatically.
(Neither by Icinga administrators, nor at package build time.)
To ensure the upcoming Icinga release ships the latest (i.e. most secure) dependencies on Windows:
#### Update packages.icinga.com
Add the latest Boost and OpenSSL versions to
https://packages.icinga.com/windows/dependencies/ like this:
```
localhost:~$ ssh aptly.vm.icinga.com
aptly:~$ sudo -i
aptly:~# cd /var/www/html/aptly/public/windows/dependencies
aptly:dependencies# wget https://master.dl.sourceforge.net/project/boost/boost-binaries/1.76.0/boost_1_76_0-msvc-14.2-64.exe
aptly:dependencies# wget https://master.dl.sourceforge.net/project/boost/boost-binaries/1.76.0/boost_1_76_0-msvc-14.2-32.exe
aptly:dependencies# wget https://slproweb.com/download/Win64OpenSSL-1_1_1k.exe
aptly:dependencies# wget https://slproweb.com/download/Win32OpenSSL-1_1_1k.exe
```
#### Ensure Compatibility
Preferably on a fresh Windows VM (so as not to accidentally build Icinga
with old dependency versions), set up a dev environment using the new dependency versions:
1. Download [doc/win-dev.ps1](doc/win-dev.ps1)
2. Edit your local copy, adjust the dependency versions
3. Ensure there are 35 GB of free space on C:
4. Run the following in an administrative Powershell:
1. `Enable-WindowsOptionalFeature -FeatureName "NetFx3" -Online`
(reboot when asked!)
2. `powershell -NoProfile -ExecutionPolicy Bypass -File "${Env:USERPROFILE}\Downloads\win-dev.ps1"`
(will take some time)
Actually clone and build Icinga using the new dependency versions as described
[here](https://github.com/Icinga/icinga2/blob/master/doc/21-development.md#tldr).
Fix incompatibilities if any.
#### Update Build Server, CI/CD and Documentation
* https://git.icinga.com/infra/ansible-windows-build
(don't forget to provision!)
* [doc/21-development.md](doc/21-development.md)
* [doc/win-dev.ps1](doc/win-dev.ps1)
(also affects CI/CD)
* [tools/win32/configure.ps1](tools/win32/configure.ps1)
* [tools/win32/configure-dev.ps1](tools/win32/configure-dev.ps1)
#### Re-provision Build Server
Re-provision the build server even if there aren't any new releases of the
dependencies whose versions are hardcoded in the repos and files listed
above (Boost, OpenSSL): there may be new builds of other dependencies (VS, MSVC).
Our GitHub Actions (tests) use the latest ones automatically,
but the GitLab runner (release packages) doesn't.
## Version <a id="version"></a>
Update the version:
```bash
perl -pi -e "s/Version: .*/Version: $VERSION/g" ICINGA2_VERSION
```
## Changelog <a id="changelog"></a>
Choose the most important issues and summarize them in multiple groups/paragraphs. Provide links to the mentioned
issues/PRs. At the start include a link to the milestone's closed issues.
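As raw material for the groups/paragraphs, listing the merge-commit subjects since the previous release tag is a useful starting point. A sketch, with an example tag name:

```bash
# List merge-commit subjects (typically "Merge pull request #NNNN ...")
# between the previous release tag and HEAD; tag name is an example.
git log --merges --pretty=format:'%s' v2.10.0..HEAD
```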
## Git Tag <a id="git-tag"></a>
```bash
git commit -v -a -m "Release version $VERSION"
```
Create a signed tag (tags/v<VERSION>) on the `master` branch (for major
releases) or the `support` branch (for minor releases).
```bash
git tag -s -m "Version $VERSION" v$VERSION
```
Push the tag:
```bash
git push origin v$VERSION
```
**For major releases:** Create a new `support` branch:
```bash
git checkout master
git push
git checkout -b support/2.12
git push -u origin support/2.12
```
## Package Builds <a id="package-builds"></a>
```bash
mkdir $HOME/dev/icinga/packaging
cd $HOME/dev/icinga/packaging
```
### RPM Packages <a id="rpm-packages"></a>
```bash
git clone git@git.icinga.com:packaging/rpm-icinga2.git && cd rpm-icinga2
```
### DEB Packages <a id="deb-packages"></a>
```bash
git clone git@git.icinga.com:packaging/deb-icinga2.git && cd deb-icinga2
```
### Raspbian Packages
```bash
git clone git@git.icinga.com:packaging/raspbian-icinga2.git && cd raspbian-icinga2
```
### Windows Packages
```bash
git clone git@git.icinga.com:packaging/windows-icinga2.git && cd windows-icinga2
```
### Branch Workflow
For each support branch in this repo (e.g. support/2.12), there exists a corresponding branch in the packaging repos
(e.g. 2.12). Each package revision is a tagged commit on these branches. When doing a major release, create the new
branch, otherwise switch to the existing one.
### Switch Build Type
Ensure that `ICINGA_BUILD_TYPE` is set to `release` in `.gitlab-ci.yml`. This should only be necessary after creating a
new branch.
```yaml
variables:
...
ICINGA_BUILD_TYPE: release
...
```
Commit the change.
```bash
git commit -av -m "Switch build type for 2.13"
```
#### RPM Release Preparations
Set the `Version`, `revision` and `%changelog` inside the spec file:
```
perl -pi -e "s/Version:.*/Version: $VERSION/g" icinga2.spec
vim icinga2.spec
%changelog
* Thu Sep 19 2019 Michael Friedrich <michael.friedrich@icinga.com> 2.11.0-1
- Update to 2.11.0
```
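The `%changelog` header line follows RPM's fixed date format (`Thu Sep 19 2019` style). A small sketch to generate it for today; the name and email are placeholders, and `$VERSION` is assumed to be set:

```bash
# Print an RPM %changelog header line for today's date.
# "Jane Doe <jane@example.com>" is a placeholder for the actual packager.
printf '* %s Jane Doe <jane@example.com> %s-1\n' "$(date +'%a %b %d %Y')" "$VERSION"
```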
#### DEB and Raspbian Release Preparations
Update file `debian/changelog` and add at the beginning:
```
icinga2 (2.11.0-1) icinga; urgency=medium
* Release 2.11.0
-- Michael Friedrich <michael.friedrich@icinga.com> Thu, 19 Sep 2019 10:50:31 +0200
```
#### Windows Release Preparations
Update the file `.gitlab-ci.yml`:
```
perl -pi -e "s/^ UPSTREAM_GIT_BRANCH: .*/ UPSTREAM_GIT_BRANCH: v$VERSION/g" .gitlab-ci.yml
perl -pi -e "s/^ ICINGA_FORCE_VERSION: .*/ ICINGA_FORCE_VERSION: v$VERSION/g" .gitlab-ci.yml
```
### Release Commit
Commit the changes and push the branch.
```bash
git commit -av -m "Release $VERSION-1"
git push origin 2.11
```
GitLab will now build snapshot packages based on the tag `v2.11.0` of Icinga 2.
### Package Tests
In order to test the created packages you can download a job's artifacts:
Visit [git.icinga.com](https://git.icinga.com/packaging/rpm-icinga2)
and navigate to the respective pipeline under `CI / CD -> Pipelines`.
There click on the job you want to download packages from.
The job's output appears. On the right-hand sidebar you can browse its artifacts.
Once there, navigate to `build/RPMS/noarch` where you'll find the packages.
### Release Packages
To build release packages and upload them to [packages.icinga.com](https://packages.icinga.com)
tag the release commit and push it.
RPM/DEB/Raspbian:
```bash
git tag -s $VERSION-1 -m "Release v$VERSION-1"
git push origin $VERSION-1
```
Windows:
```bash
git tag -s $VERSION -m "Release v$VERSION"
git push origin $VERSION
```
Now cherry-pick the release commit onto `master` so that the changes are transferred back to it.
**Attention**: Only the release commit. *NOT* the one switching the build type!
## Build Infrastructure <a id="build-infrastructure"></a>
https://git.icinga.com/packaging/rpm-icinga2/pipelines
https://git.icinga.com/packaging/deb-icinga2/pipelines
https://git.icinga.com/packaging/windows-icinga2/pipelines
https://git.icinga.com/packaging/raspbian-icinga2/pipelines
* Verify package build changes for this version.
* Test the snapshot packages for all distributions beforehand.
Once the release repository tags are pushed, release builds
are triggered and automatically published to packages.icinga.com.
## Release Tests <a id="release-tests"></a>
* Test DB IDO with MySQL and PostgreSQL.
* Provision the vagrant boxes and test the release packages.
* Test the [setup wizard](https://packages.icinga.com/windows/) inside a Windows VM.
* Start a new docker container and install/run icinga2.
### CentOS
```bash
docker run -ti centos:7 bash
yum -y install https://packages.icinga.com/epel/icinga-rpm-release-7-latest.noarch.rpm
yum -y install epel-release
yum -y install icinga2
icinga2 daemon -C
```
### Ubuntu
```bash
docker run -ti ubuntu:bionic bash
apt-get update
apt-get -y install apt-transport-https wget gnupg
wget -O - https://packages.icinga.com/icinga.key | apt-key add -
. /etc/os-release; if [ ! -z ${UBUNTU_CODENAME+x} ]; then DIST="${UBUNTU_CODENAME}"; else DIST="$(lsb_release -c| awk '{print $2}')"; fi; \
echo "deb https://packages.icinga.com/ubuntu icinga-${DIST} main" > \
/etc/apt/sources.list.d/${DIST}-icinga.list
echo "deb-src https://packages.icinga.com/ubuntu icinga-${DIST} main" >> \
/etc/apt/sources.list.d/${DIST}-icinga.list
apt-get update
apt-get -y install icinga2
icinga2 daemon -C
```
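The codename-detection one-liner above can be factored into a small function for readability. A sketch assuming the usual `/etc/os-release` semantics (`UBUNTU_CODENAME` is set on Ubuntu); `detect_dist` is a hypothetical name:

```bash
# Prefer UBUNTU_CODENAME (sourced from /etc/os-release on Ubuntu),
# fall back to parsing lsb_release output otherwise.
detect_dist() {
  if [ -n "${UBUNTU_CODENAME+x}" ]; then
    echo "$UBUNTU_CODENAME"
  else
    lsb_release -c | awk '{print $2}'
  fi
}
```

Usage: `. /etc/os-release; DIST="$(detect_dist)"`.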
## GitHub Release <a id="github-release"></a>
Create a new release for the newly created Git tag: https://github.com/Icinga/icinga2/releases
> Hint: Choose [tags](https://github.com/Icinga/icinga2/tags), pick one to edit and
> make this a release. You can also create a draft release.
The release body should contain a short changelog, with links
into the roadmap, changelog and blogpost.
## Post Release <a id="post-release"></a>
### Online Documentation <a id="online-documentation"></a>
> Only required for major releases.
Navigate to `puppet-customer/icinga.git` and do the following steps:
#### Testing
```bash
git checkout testing && git pull
vim files/var/www/docs/config/icinga2-latest.yml
git commit -av -m "icinga-web: Update docs for Icinga 2"
git push
```
SSH into the webserver and do a manual Puppet dry run with the testing environment.
```bash
puppet agent -t --environment testing --noop
```
Once succeeded, continue with production deployment.
#### Production
```bash
git checkout master && git pull
git merge testing
git push
```
SSH into the webserver and do a manual Puppet run from the production environment (default).
```bash
puppet agent -t
```
#### Manual Generation
SSH into the webserver or ask @bobapple.
```bash
cd /usr/local/icinga-docs-tools && ./build-docs.rb -c /var/www/docs/config/icinga2-latest.yml
```
### Announcement <a id="announcement"></a>
* Create a new blog post on [icinga.com/blog](https://icinga.com/blog) including a featured image
* Create a release topic on [community.icinga.com](https://community.icinga.com)
* Release email to net-tech & team
### Project Management <a id="project-management"></a>
* Add new minor version on [GitHub](https://github.com/Icinga/icinga2/milestones).

```
#cmakedefine HAVE_LIBEXECINFO
#cmakedefine HAVE_CXXABI_H
#cmakedefine HAVE_NICE
#cmakedefine HAVE_MALLOC_INFO
#cmakedefine HAVE_EDITLINE
#cmakedefine HAVE_SYSTEMD
```
Read more about development builds in the [development chapter](21-development.md).
Icinga 2 and the Icinga 2 documentation are licensed under the terms of the GNU
General Public License Version 2. You will find a copy of this license in the
LICENSE file included in the source package.

In case you are upgrading an existing setup, please ensure to
follow the [upgrade documentation](16-upgrading-icinga-2.md#upgrading-icinga-2).
<!-- {% else %} -->
<!-- {% if not windows %} -->
## Add Icinga Package Repository <a id="add-icinga-package-repository"></a>
We recommend using our official repositories.
All the following commands should be executed as the root user.
As pipes and nested commands are used, it is recommended to switch to a root user session, e.g., using `sudo -i`.
Here's how to add it to your system:
<!-- {% endif %} -->
<!-- {% if debian %} -->
```bash
apt update
apt -y install apt-transport-https wget
wget -O icinga-archive-keyring.deb "https://packages.icinga.com/icinga-archive-keyring_latest+debian$(
. /etc/os-release; echo "$VERSION_ID"
).deb"
apt install ./icinga-archive-keyring.deb
DIST=$(awk -F"[)(]+" '/VERSION=/ {print $2}' /etc/os-release); \
echo "deb [signed-by=/usr/share/keyrings/icinga-archive-keyring.gpg] https://packages.icinga.com/debian icinga-${DIST} main" > \
/etc/apt/sources.list.d/${DIST}-icinga.list
apt update
```
#### Debian Backports Repository <a id="debian-backports-repository"></a>
This repository is required for Debian Stretch since Icinga v2.11.
Debian Stretch:
```bash
DIST=$(awk -F"[)(]+" '/VERSION=/ {print $2}' /etc/os-release); \
echo "deb https://deb.debian.org/debian ${DIST}-backports main" > \
/etc/apt/sources.list.d/${DIST}-backports.list
apt update
```
<!-- {% endif %} -->
<!-- {% if ubuntu %} -->
```bash
apt update
apt -y install apt-transport-https wget
wget -O icinga-archive-keyring.deb "https://packages.icinga.com/icinga-archive-keyring_latest+ubuntu$(
. /etc/os-release; echo "$VERSION_ID"
).deb"
apt install ./icinga-archive-keyring.deb
. /etc/os-release; if [ ! -z ${UBUNTU_CODENAME+x} ]; then DIST="${UBUNTU_CODENAME}"; else DIST="$(lsb_release -c| awk '{print $2}')"; fi; \
echo "deb [signed-by=/usr/share/keyrings/icinga-archive-keyring.gpg] https://packages.icinga.com/ubuntu icinga-${DIST} main" > \
/etc/apt/sources.list.d/${DIST}-icinga.list
```
<!-- {% endif %} -->
<!-- {% if raspbian %} -->
### Raspbian Repository <a id="raspbian-repository"></a>
```bash
apt update
apt -y install apt-transport-https wget gnupg
wget -O - https://packages.icinga.com/icinga.key | gpg --dearmor -o /usr/share/keyrings/icinga-archive-keyring.gpg
DIST=$(awk -F"[)(]+" '/VERSION=/ {print $2}' /etc/os-release); \
echo "deb [signed-by=/usr/share/keyrings/icinga-archive-keyring.gpg] https://packages.icinga.com/raspbian icinga-${DIST} main" > \
/etc/apt/sources.list.d/icinga.list
echo "deb-src [signed-by=/usr/share/keyrings/icinga-archive-keyring.gpg] https://packages.icinga.com/raspbian icinga-${DIST} main" >> \
/etc/apt/sources.list.d/icinga.list
apt update
```
<!-- {% endif %} -->
<!-- {% if centos %} -->
### CentOS Repository <a id="centos-repository"></a>
```bash
rpm --import https://packages.icinga.com/icinga.key
wget https://packages.icinga.com/centos/ICINGA-release.repo -O /etc/yum.repos.d/ICINGA-release.repo
```
The packages for CentOS depend on other packages which are distributed
as part of the [EPEL repository](https://fedoraproject.org/wiki/EPEL):
```bash
yum install epel-release
```
<!-- {% endif %} -->
<!-- {% if rhel %} -->
### RHEL Repository <a id="rhel-repository"></a>
Don't forget to fill in the username and password section with your credentials in the local .repo file.
```bash
rpm --import https://packages.icinga.com/icinga.key
wget https://packages.icinga.com/subscription/rhel/ICINGA-release.repo -O /etc/yum.repos.d/ICINGA-release.repo
```
```bash
subscription-manager repos --enable "codeready-builder-for-rhel-${OSVER}-${ARCH}-rpms"
dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-${OSVER}.noarch.rpm
```
#### RHEL 7
```bash
subscription-manager repos --enable rhel-7-server-optional-rpms
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
```
<!-- {% endif %} -->
<!-- {% if fedora %} -->
### Fedora Repository <a id="fedora-repository"></a>
```bash
rpm --import https://packages.icinga.com/icinga.key
dnf install -y 'dnf-command(config-manager)'
dnf config-manager --add-repo https://packages.icinga.com/fedora/$(. /etc/os-release; echo "$VERSION_ID")/release
```
<!-- {% endif %} -->
<!-- {% if sles %} -->
Don't forget to fill in the username and password section with your credentials in the local .repo file.
```bash
rpm --import https://packages.icinga.com/icinga.key
zypper ar https://packages.icinga.com/subscription/sles/ICINGA-release.repo
zypper ref
```
```bash
SUSEConnect -p PackageHub/$VERSION_ID/x86_64
```
### openSUSE Repository <a id="opensuse-repository"></a>
```bash
rpm --import https://packages.icinga.com/icinga.key
zypper ar https://packages.icinga.com/openSUSE/ICINGA-release.repo
zypper ref
```
You need to additionally add the `server:monitoring` repository to fulfill dependencies:
```bash
zypper ar https://download.opensuse.org/repositories/server:/monitoring/15.3/server:monitoring.repo
```
<!-- {% endif %} -->
<!-- {% if amazon_linux %} -->
Don't forget to fill in the username and password section with your credentials in the local .repo file.
```bash
rpm --import https://packages.icinga.com/icinga.key
wget https://packages.icinga.com/subscription/amazon/ICINGA-release.repo -O /etc/yum.repos.d/ICINGA-release.repo
```
You can install Icinga 2 by using your distribution's package manager
to install the `icinga2` package. The following commands must be executed
with `root` permissions unless noted otherwise.
<!-- {% if centos or rhel or fedora or amazon_linux %} -->
!!! tip

    If you have [SELinux](22-selinux.md) enabled, the package `icinga2-selinux` is also required.
<!-- {% endif %} -->
<!-- {% if debian or ubuntu or raspbian %} -->
<!-- {% if not icingaDocs %} -->
#### Debian / Ubuntu / Raspbian
<!-- {% endif %} -->
```bash
apt install icinga2
```
<!-- {% endif %} -->
<!-- {% if centos %} -->
<!-- {% if not icingaDocs %} -->
#### CentOS
<!-- {% endif %} -->
!!! info

    Note that installing Icinga 2 is only supported on CentOS 7, as CentOS 8 is EOL.
```bash
yum install icinga2
systemctl enable icinga2
systemctl start icinga2
```
<!-- {% endif %} -->
<!-- {% if rhel %} -->
#### RHEL 8 or Later
```bash
dnf install icinga2
systemctl enable icinga2
systemctl start icinga2
```
#### RHEL 7
```bash
yum install icinga2
systemctl enable icinga2
systemctl start icinga2
```
<!-- {% endif %} -->
<!-- {% if fedora %} -->
<!-- {% if debian or ubuntu or raspbian %} -->
<!-- {% if not icingaDocs %} -->
#### Debian / Ubuntu / Raspbian
<!-- {% endif %} -->
```bash
apt install monitoring-plugins
```
<!-- {% endif %} -->
<!-- {% if centos %} -->
<!-- {% if not icingaDocs %} -->
#### CentOS
<!-- {% endif %} -->
The packages for CentOS depend on other packages which are distributed as part of the EPEL repository.
```bash
yum install nagios-plugins-all
```
<!-- {% endif %} -->
<!-- {% if rhel %} -->
<!-- {% if not icingaDocs %} -->
#### RHEL
The packages for RHEL depend on other packages which are distributed as part of the EPEL repository.
```bash
dnf install nagios-plugins-all
```
#### RHEL 7
```bash
yum install nagios-plugins-all
```
<!-- {% endif %} -->
<!-- {% if fedora %} -->
Restart Icinga 2 for these changes to take effect.

```bash
systemctl restart icinga2
```
<!-- {% if amazon_linux or centos or debian or rhel or sles or ubuntu %} -->
## Set up Icinga DB <a id="set-up-icinga-db"></a>
Icinga DB is a set of components for publishing, synchronizing and
A Redis server from version 6.2 is required.
#### Install Icinga DB Redis Package <a id="install-icinga-db-redis-package"></a>
Use your distribution's package manager to install the `icingadb-redis` package.
<!-- {% if amazon_linux or fedora or rhel or opensuse or sles %} -->
!!! tip

    If you have [SELinux](22-selinux.md) enabled, the package `icingadb-redis-selinux` is also required.
<!-- {% endif %} -->
<!-- {% if amazon_linux %} -->
<!-- {% if not icingaDocs %} -->
##### Amazon Linux
<!-- {% endif %} -->

```bash
yum install icingadb-redis
```
<!-- {% endif %} -->
<!-- {% if centos %} -->
<!-- {% if not icingaDocs %} -->
##### CentOS
<!-- {% endif %} -->
!!! info

    Note that installing Icinga DB Redis is only supported on CentOS 7, as CentOS 8 is EOL.
```bash
yum install icingadb-redis
```
<!-- {% endif %} -->
<!-- {% if debian or ubuntu %} -->
<!-- {% if not icingaDocs %} -->
##### Debian / Ubuntu
<!-- {% endif %} -->
```bash
apt install icingadb-redis
```
<!-- {% endif %} -->

<!-- {% if rhel %} -->
##### RHEL 8 or Later
```bash
dnf install icingadb-redis
```
##### RHEL 7
```bash
yum install icingadb-redis
```
<!-- {% endif %} -->
<!-- {% if fedora %} -->
<!-- {% if not icingaDocs %} -->
##### Fedora
<!-- {% endif %} -->
```bash
dnf install icingadb-redis
```
<!-- {% endif %} -->
<!-- {% if sles or opensuse %} -->
<!-- {% if not icingaDocs %} -->
##### SLES / openSUSE
<!-- {% endif %} -->
```bash
zypper install icingadb-redis
```
The Icinga DB daemon package is also included in the Icinga repository, and since it is already set up,
you have completed the instructions here and can proceed to
<!-- {% if amazon_linux %} -->
[install the Icinga DB daemon on Amazon Linux](https://icinga.com/docs/icinga-db/latest/doc/02-Installation/01-Amazon-Linux/#installing-icinga-db-package),
<!-- {% endif %} -->
<!-- {% if centos %} -->
[install the Icinga DB daemon on CentOS](https://icinga.com/docs/icinga-db/latest/doc/02-Installation/02-CentOS/#installing-icinga-db-package),
<!-- {% endif %} -->
<!-- {% if debian %} -->
[install the Icinga DB daemon on Debian](https://icinga.com/docs/icinga-db/latest/doc/02-Installation/03-Debian/#installing-icinga-db-package),
<!-- {% endif %} -->
<!-- {% if fedora %} -->
[install the Icinga DB daemon on Fedora](https://icinga.com/docs/icinga-db/latest/doc/02-Installation/Fedora/#installing-the-package),
<!-- {% endif %} -->
<!-- {% if rhel %} -->
[install the Icinga DB daemon on RHEL](https://icinga.com/docs/icinga-db/latest/doc/02-Installation/04-RHEL/#installing-icinga-db-package),
<!-- {% endif %} -->
<!-- {% if sles %} -->
[install the Icinga DB daemon on SLES](https://icinga.com/docs/icinga-db/latest/doc/02-Installation/05-SLES/#installing-icinga-db-package),
<!-- {% endif %} -->
<!-- {% if ubuntu %} -->
[install the Icinga DB daemon on Ubuntu](https://icinga.com/docs/icinga-db/latest/doc/02-Installation/06-Ubuntu/#installing-icinga-db-package),
<!-- {% endif %} -->
<!-- {% if opensuse %} -->
[install the Icinga DB daemon on openSUSE](https://icinga.com/docs/icinga-db/latest/doc/02-Installation/openSUSE/#installing-the-package),
<!-- {% endif %} -->
which will also guide you through the setup of the database and Icinga DB Web.
<!-- {% endif %} -->
## Backup <a id="install-backup"></a>


# Install Icinga 2 on Raspbian
<!-- {% set raspbian = True %} -->
<!-- {% include "02-installation.md" %} -->

# Install Icinga 2 on CentOS
<!-- {% set centos = True %} -->
<!-- {% include "02-installation.md" %} -->

A more advanced example is to use [apply rules with for loops on arrays or
dictionaries](03-monitoring-basics.md#using-apply-for) provided by
[custom attributes](03-monitoring-basics.md#custom-variables) or groups.
Remember the examples shown for [custom variable values](03-monitoring-basics.md#custom-variables-values):
A common pattern is to store the users and user groups
on the host or service objects instead of the notification
object itself.
The sample configuration provided in [hosts.conf](04-configuration.md#hosts-conf) and [notifications.conf](04-configuration.md#notifications-conf)
already provides an example for this question.
> **Tip**
In order to find out about the command argument, call the plugin's help
or consult the README.
```
./check_systemd --help
...
```
With the [example above](03-monitoring-basics.md#command-arguments-value),
inspect the parameter's help text.
```
./check_systemd --help
...
```
`notification_useremail` | **Required.** The notification's recipient(s). Defaults to `$user.email$`.
`notification_hoststate` | **Required.** Current state of host. Defaults to `$host.state$`.
`notification_type` | **Required.** Type of notification. Defaults to `$notification.type$`.
`notification_hostnotes` | **Optional.** The host's notes. Defaults to `$host.notes$`.
`notification_address` | **Optional.** The host's IPv4 address. Defaults to `$address$`.
`notification_address6` | **Optional.** The host's IPv6 address. Defaults to `$address6$`.
`notification_author` | **Optional.** Comment author. Defaults to `$notification.author$`.
`notification_useremail` | **Required.** The notification's recipient(s). Defaults to `$user.email$`.
`notification_servicestate` | **Required.** Current state of host. Defaults to `$service.state$`.
`notification_type` | **Required.** Type of notification. Defaults to `$notification.type$`.
`notification_hostnotes` | **Optional.** The host's notes. Defaults to `$host.notes$`.
`notification_servicenotes` | **Optional.** The service's notes. Defaults to `$service.notes$`.
`notification_address` | **Optional.** The host's IPv4 address. Defaults to `$address$`.
`notification_address6` | **Optional.** The host's IPv6 address. Defaults to `$address6$`.
`notification_author` | **Optional.** Comment author. Defaults to `$notification.author$`.
Requirements:
* Icinga 2 as client on the remote node
* icinga user with sudo permissions to the httpd daemon
Example on CentOS 7:
```
# visudo
```
i.e. to consider the parent unreachable only if no dependency is fulfilled.
Think of a host connected to both a network and a storage switch vs. a host connected to redundant routers.
Sometimes you even want a mixture of both.
Think of a service like SSH depending on both LDAP and DNS to function,
while operating redundant LDAP servers as well as redundant DNS resolvers.
Before v2.12, Icinga regarded all dependencies as cumulative.
In v2.12 and v2.13, Icinga regarded all dependencies redundant.
The latter led to unrelated services being inadvertently regarded to be redundant to each other.
v2.14 restored the former behavior and allowed to override it.
I.e. all dependencies are regarded as essential for the parent by default.
Specifying the `redundancy_group` attribute for two dependencies of a child object with the equal value
causes them to be regarded as redundant (only inside that redundancy group).
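As a minimal sketch (host names hypothetical), two dependencies of the same child sharing a `redundancy_group` are treated as redundant to each other:

```
// Hypothetical example: the agent stays reachable as long as at least
// one of the two routers is up, because both dependencies share the
// same redundancy group.
object Dependency "via-router1" {
  parent_host_name = "router1.example.com"
  child_host_name  = "agent.example.com"
  redundancy_group = "routers"
}

object Dependency "via-router2" {
  parent_host_name = "router2.example.com"
  child_host_name  = "agent.example.com"
  redundancy_group = "routers"
}
```

A dependency without a `redundancy_group` remains essential on its own, matching the v2.14 default described above.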
<!-- Keep this for compatibility -->

Read more on that topic [here](03-monitoring-basics.md#notification-commands).
#### groups.conf <a id="groups-conf"></a>
The example host defined in [hosts.conf](#hosts-conf) already has the
custom variable `os` set to `Linux` and is therefore automatically
a member of the host group `linux-servers`.

Try running the plugin after setup and [ensure it works](05-service-monitoring.md#service-monitoring-plugins-it-works).
Prior to using the check plugin with Icinga 2 you should ensure that it is working properly
by trying to run it on the console using whichever user Icinga 2 is running as:
RHEL/CentOS/Fedora
```bash
sudo -u icinga /usr/lib64/nagios/plugins/check_mysql_health --help
```
```
Can't locate Net/SNMP.pm in @INC (you may need to install the Net::SNMP module)
```
Prior to installing the Perl module via CPAN, look for a distribution
specific package, e.g. `libnet-snmp-perl` on Debian/Ubuntu or `perl-Net-SNMP`
on RHEL/CentOS.
#### Optional: Custom Path <a id="service-monitoring-plugins-custom-path"></a>
that [it works](05-service-monitoring.md#service-monitoring-plugins-it-works). Then use the
`--help` parameter to see the actual parameters (docs might be outdated).
```
./check_systemd --help
usage: check_systemd [-h] [-c SECONDS] [-e UNIT | -u UNIT] [-v] [-V]
[-w SECONDS]
...
```
Start with the basic plugin call without any parameters.
```
object CheckCommand "systemd" { // Plugin name without 'check_' prefix
command = [ PluginContribDir + "/check_systemd" ] // Use the 'PluginContribDir' constant, see the contributed ITL commands
}
```
Run a config validation with `icinga2 daemon -C` to see if that works.
Next, analyse the plugin parameters. Plugins with a good help output show
optional parameters in square brackets. This is the case for all parameters
for this plugin. If there are required parameters, use the `required` key
inside the argument.
liters (l) | ml, l, hl
The UoM "c" represents a continuous counter (e.g. interface traffic counters).
Unknown UoMs are discarded (as if none was given).
A value without any UoM may be an integer or floating point number
for any type (processes, users, etc.).
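For illustration, a hypothetical line of plugin output: everything after the pipe is performance data in the `label=value[UoM];warn;crit;min;max` layout, so the UoM rules above apply to the `MB` suffix:

```shell
# Hypothetical plugin output: one perfdata value "2643" with the UoM "MB",
# followed by the warn, crit, min and max thresholds.
echo 'DISK OK - free space: / 2643 MiB | /=2643MB;5948;5958;0;5968'
```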

The setup wizard will ensure that the following steps are taken:
* Update the [ApiListener](06-distributed-monitoring.md#distributed-monitoring-apilistener) and [constants](04-configuration.md#constants-conf) configuration.
* Update the [icinga2.conf](04-configuration.md#icinga2-conf) to disable the `conf.d` inclusion, and add the `api-users.conf` file inclusion.
Here is an example of a master setup for the `icinga2-master1.localdomain` node on CentOS 7:
```
[root@icinga2-master1.localdomain /]# icinga2 node wizard
```
in `/etc/icinga2/icinga2.conf`.
> Defaults to disabled.
Now it is time to validate the configuration and to restart the Icinga 2 daemon
on both nodes.
Example on CentOS 7:
```
[root@icinga2-agent1.localdomain /]# icinga2 daemon -C
```
Save the changes and validate the configuration on the master node:
```
[root@icinga2-master1.localdomain /]# icinga2 daemon -C
```
Restart the Icinga 2 daemon (example for CentOS 7):
```
[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```
Now it is time to validate the configuration and to restart the Icinga 2 daemon
on both nodes.
Example on CentOS 7:
```
[root@icinga2-satellite1.localdomain /]# icinga2 daemon -C
```
Save the changes and validate the configuration on the master node:

```
[root@icinga2-master1.localdomain /]# icinga2 daemon -C
```
Restart the Icinga 2 daemon (example for CentOS 7):
```
[root@icinga2-master1.localdomain /]# systemctl restart icinga2
```
The two agent nodes do not need to know about each other. The only important thing
is that they know about the parent zone (the satellite) and their endpoint members (and optionally the global zone).
> **Tip**
>
> In the example above we've specified the `host` attribute in the agent endpoint configuration. In this mode,
> the satellites actively connect to the agents. This costs some resources on the satellite -- if you prefer to
> **Note**
>
> This is required if you decide to change an already running single endpoint production
> environment into an HA-enabled cluster zone with two endpoints.
> The [initial setup](06-distributed-monitoring.md#distributed-monitoring-scenarios-ha-master-clients)
> with 2 HA masters doesn't require this step.
Create a certificate signing request (CSR) for the local instance:
Sign the CSR with the previously created CA:
```
[root@icinga2-master1.localdomain /root]# icinga2 pki sign-csr --csr icinga2-master1.localdomain.csr --cert icinga2-master1.localdomain.crt
```
Repeat the steps for all instances in your setup.
Copy and move these certificates to the respective instances e.g. with SSH/SCP.
#### External CA/PKI
Icinga works best with its own certificates.
The commands described above take care of the optimal certificate properties.
Also, Icinga renews them periodically at runtime to avoid expiry.
But you can also provide your own certificates,
just as for any other application that uses TLS.
!!! warning

    The only serious reason to generate your own certificates is company policy.

    You are responsible for making Icinga work with your certificates,
    as well as for [expiry monitoring](10-icinga-template-library.md#plugin-check-command-ssl_cert)
    and renewal.

    Especially the `icinga2 pki` CLI commands do not expect such certificates.
    Also, do not provide your custom CA private key to Icinga 2!
    Otherwise, it will automatically renew leaf certificates
    with our hardcoded properties, not your custom ones.
The CA certificate must be located in `/var/lib/icinga2/certs/ca.crt`.
The basic requirements for all leaf certificates are:
* Located in `/var/lib/icinga2/certs/NODENAME.crt`
and `/var/lib/icinga2/certs/NODENAME.key`
* Subject with CN matching the endpoint name
* A DNS SAN matching the endpoint name
Pretty much everything else is limited only by your company policy
and the OpenSSL versions your Icinga nodes use. E.g. the following works:
* Custom key sizes, e.g. 2048 bits
* Custom key types, e.g. ECC
* Any number of intermediate CAs (but see limitations below)
* Multiple trusted root CAs in `/var/lib/icinga2/certs/ca.crt`
* Different root CAs per cluster subtree, as long as each node trusts the
certificate issuers of all nodes it's directly connected to
Intermediate CA restrictions:
* Each side has to provide its intermediate CAs along with the leaf certificate
in `/var/lib/icinga2/certs/NODENAME.crt`, ordered from leaf to root.
* Intermediate CAs may not be used directly as root CAs. To trust only specific
intermediate CAs, cross-sign them with themselves, so that you get equal
certificates except that they're self-signed. Use them as root CAs in Icinga.
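The bundle layout can be sketched as follows (file names and contents are placeholders, not real certificates):

```shell
# Placeholder sketch: the file deployed to /var/lib/icinga2/certs/NODENAME.crt
# must contain the leaf certificate first, then the intermediate CA(s),
# ordered from the leaf towards the root.
printf 'LEAF CERTIFICATE\n' > leaf.crt
printf 'INTERMEDIATE CA CERTIFICATE\n' > intermediate.crt
cat leaf.crt intermediate.crt > NODENAME.crt
head -n1 NODENAME.crt
```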
## Automation <a id="distributed-monitoring-automation"></a>
These hints should get you started with your own automation tools (Puppet, Ansible, Chef, Salt, etc.)

host or service is considered flapping until it drops below the low flapping threshold.
The attribute `flapping_ignore_states` allows to ignore state changes to specified states during the flapping calculation.
`FlappingStart` and `FlappingEnd` notifications will be sent out accordingly, if configured. See the chapter on
[notifications](03-monitoring-basics.md#notifications) for details.
> Note: There is no distinction between hard and soft states with flapping. All state changes count and notifications
> will be sent out regardless of the object's state.
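A minimal sketch (service and threshold values hypothetical) enabling flapping detection while ignoring the Unknown state:

```
object Service "load" {
  host_name = "example-host"             // hypothetical host
  check_command = "load"

  enable_flapping = true                 // opt in, disabled by default
  flapping_threshold_high = 30           // considered flapping above 30% state change rate
  flapping_threshold_low = 25            // no longer flapping below 25%
  flapping_ignore_states = [ "Unknown" ] // changes to Unknown don't count
}
```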
to represent its internal state. The following types are exposed via the [API](1
performance\_data | Array | Array of [performance data values](08-advanced-topics.md#advanced-value-types-perfdatavalue).
check\_source | String | Name of the node executing the check.
scheduling\_source | String | Name of the node scheduling the check.
state | Number | The current state (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN).
command | Value | Array of command with shell-escaped arguments or command line string.
execution\_start | Timestamp | Check execution start time (as a UNIX timestamp).
execution\_end | Timestamp | Check execution end time (as a UNIX timestamp).

View File

Runtime Attributes:
last\_check\_result | CheckResult | The current [check result](08-advanced-topics.md#advanced-value-types-checkresult).
last\_state\_change | Timestamp | When the last state change occurred (as a UNIX timestamp).
last\_hard\_state\_change | Timestamp | When the last hard state change occurred (as a UNIX timestamp).
last\_in\_downtime | Boolean | Whether the host was in a downtime when the last check occurred.
acknowledgement | Number | The acknowledgement type (0 = NONE, 1 = NORMAL, 2 = STICKY).
acknowledgement\_expiry | Timestamp | When the acknowledgement expires (as a UNIX timestamp; 0 = no expiry).
downtime\_depth | Number | Whether the host has one or more active downtimes.
Runtime Attributes:
last\_check\_result | CheckResult | The current [check result](08-advanced-topics.md#advanced-value-types-checkresult).
last\_state\_change | Timestamp | When the last state change occurred (as a UNIX timestamp).
last\_hard\_state\_change | Timestamp | When the last hard state change occurred (as a UNIX timestamp).
last\_in\_downtime | Boolean | Whether the service was in a downtime when the last check occurred.
acknowledgement | Number | The acknowledgement type (0 = NONE, 1 = NORMAL, 2 = STICKY).
acknowledgement\_expiry | Timestamp | When the acknowledgement expires (as a UNIX timestamp; 0 = no expiry).
acknowledgement\_last\_change | Timestamp | When the acknowledgement has been set/cleared
for a more secure configuration is provided by the [Mozilla Wiki](https://wiki.m
Ensure that you use the same configuration for both attributes on **all** endpoints to avoid communication problems. This
requires using a `cipher_list` compatible with the endpoint running the oldest version of the OpenSSL library. If you use
other tools to connect to the API, ensure they are compatible as well, as this setting affects not only inter-cluster
communication but also the REST API.
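As an illustrative sketch only (the cipher string below is an example, not a recommendation), these attributes are set on the `ApiListener` object:

```
object ApiListener "api" {
  // Example value: keep it identical on all endpoints and no stricter
  // than what the oldest OpenSSL version in the cluster supports.
  cipher_list = "HIGH:MEDIUM:!aNULL:!MD5:!RC4:!3DES"
  tls_protocolmin = "TLSv1.2"
}
```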
### CheckerComponent <a id="objecttype-checkercomponent"></a>
### ElasticsearchWriter <a id="objecttype-elasticsearchwriter"></a>
Writes check result metrics and performance data to an Elasticsearch or OpenSearch instance.
This configuration object is available as [elasticsearch feature](14-features.md#elasticsearch-writer).
Example:
```
object ElasticsearchWriter "elasticsearch" {
enable_send_perfdata = true
host_tags_template = {
os_name = "$host.vars.os$"
}
flush_threshold = 1024
flush_interval = 10
}
```
Configuration Attributes:
Name                      | Type                  | Description
--------------------------|-----------------------|----------------------------------
host | String | **Required.** Elasticsearch host address. Defaults to `127.0.0.1`.
port | Number | **Required.** Elasticsearch port. Defaults to `9200`.
index | String | **Required.** Prefix for the index names. Defaults to `icinga2`.
enable\_send\_perfdata | Boolean | **Optional.** Send parsed performance data metrics for check results. Defaults to `false`.
flush\_interval | Duration | **Optional.** How long to buffer data points before transferring to Elasticsearch. Defaults to `10s`.
flush\_threshold | Number | **Optional.** How many data points to buffer before forcing a transfer to Elasticsearch. Defaults to `1024`.
@ -1219,8 +1217,6 @@ Configuration Attributes:
password | String | **Optional.** Basic auth password if Elasticsearch is hidden behind an HTTP proxy.
enable\_tls | Boolean | **Optional.** Whether to use a TLS stream. Defaults to `false`. Requires an HTTP proxy.
insecure\_noverify | Boolean | **Optional.** Disable TLS peer verification.
host\_tags\_template | Dictionary | **Optional.** Additional tags to apply to the Elasticsearch host entries.
service\_tags\_template | Dictionary | **Optional.** Additional tags to apply to the Elasticsearch service entries.
ca\_path | String | **Optional.** Path to CA certificate to validate the remote host. Requires `enable_tls` set to `true`.
cert\_path | String | **Optional.** Path to host certificate to present to the remote host for mutual verification. Requires `enable_tls` set to `true`.
key\_path | String | **Optional.** Path to host key to accompany the cert\_path. Requires `enable_tls` set to `true`.
@ -1229,11 +1225,6 @@ Configuration Attributes:
Note: If `flush_threshold` is set too low, the feature is forced to flush data to Elasticsearch too often.
Experiment with the setting if you are processing more than 1024 metrics per second.
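The interaction of `flush_interval` and `flush_threshold` amounts to a buffer that flushes on whichever limit is hit first. A minimal Python sketch of that behavior (the `flushed` list stands in for the actual bulk HTTP transfer; this is an illustration, not the writer's implementation):

```python
import time

class MetricBuffer:
    """Sketch of threshold/interval-based buffering, not the actual code."""

    def __init__(self, flush_threshold=1024, flush_interval=10):
        self.flush_threshold = flush_threshold
        self.flush_interval = flush_interval
        self.buffer = []
        self.last_flush = time.monotonic()
        self.flushed = []  # stand-in for batches sent to Elasticsearch

    def add(self, data_point):
        self.buffer.append(data_point)
        # Flush when either the size threshold or the interval is exceeded.
        if (len(self.buffer) >= self.flush_threshold
                or time.monotonic() - self.last_flush >= self.flush_interval):
            self.flush()

    def flush(self):
        if self.buffer:
            self.flushed.append(list(self.buffer))
            self.buffer.clear()
        self.last_flush = time.monotonic()

buf = MetricBuffer(flush_threshold=3)
for i in range(7):
    buf.add(i)
# 7 points with threshold 3 -> two full batches flushed, one point buffered
```

With a very small threshold, nearly every data point triggers a flush, which is exactly the overhead the note above warns about.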
> **Note**
>
> Be aware that `enable_send_perfdata` will create a new field mapping in the index for each performance data metric in a check plugin.
> Elasticsearch/OpenSearch have a maximum number of fields per index; the default limit is usually 1000 fields. See the [mapping settings limit](https://www.elastic.co/guide/en/elasticsearch/reference/8.18/mapping-settings-limit.html) documentation.
Basic auth is supported with the `username` and `password` attributes. This requires an
HTTP proxy (Nginx, etc.) in front of the Elasticsearch instance. Check [this blogpost](https://blog.netways.de/2017/09/14/secure-elasticsearch-and-kibana-with-an-nginx-http-proxy/)
for an example.
@ -1398,9 +1389,7 @@ Configuration Attributes:
host | String | **Optional.** Redis host. Defaults to `127.0.0.1`.
port | Number | **Optional.** Redis port. Defaults to `6380` since the Redis server provided by the `icingadb-redis` package listens on that port.
path | String | **Optional.** Redis unix socket path. Can be used instead of `host` and `port` attributes.
username | String | **Optional.** Redis auth username. Only possible if Redis ACLs are used. Requires `password` to be set as well.
password | String | **Optional.** Redis auth password.
db\_index | Number | **Optional.** Redis logical database by its number. Defaults to `0`.
enable\_tls | Boolean | **Optional.** Whether to use TLS.
cert\_path | String | **Optional.** Path to the certificate.
key\_path | String | **Optional.** Path to the private key.
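Putting the attributes above together, a minimal `IcingaDB` object with TLS enabled might look like this (host, password and certificate paths are placeholder values, not defaults):

```
object IcingaDB "icingadb" {
  host = "127.0.0.1"
  port = 6380
  password = "secret"
  enable_tls = true
  cert_path = "/var/lib/icinga2/certs/client.crt"
  key_path = "/var/lib/icinga2/certs/client.key"
}
```

In practice the feature is enabled via `icinga2 feature enable icingadb`, which ships such an object definition for you to adjust.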
@ -1685,11 +1674,11 @@ Configuration Attributes:
flush\_threshold | Number | **Optional.** How many data points to buffer before forcing a transfer to InfluxDB. Defaults to `1024`.
enable\_ha | Boolean | **Optional.** Enable the high availability functionality. Only valid in a [cluster setup](06-distributed-monitoring.md#distributed-monitoring-high-availability-features). Defaults to `false`.
> **Note**
>
> If `flush_threshold` is set too low, this will always force the feature to flush all data
> to InfluxDB. Experiment with the setting, if you are processing more than 1024 metrics per second
> or similar.
### Influxdb2Writer <a id="objecttype-influxdb2writer"></a>


@ -13,18 +13,18 @@ options.
```
# icinga2
icinga2 - The Icinga 2 network monitoring daemon (version: v2.14.4)
Usage:
icinga2 <command> [<arguments>]
Supported commands:
* api setup (setup for API)
* ca list (lists pending certificate signing requests)
* ca remove (removes an outstanding certificate request)
* ca restore (restores a removed certificate request)
* ca sign (signs an outstanding certificate request)
* console (Icinga console)
* daemon (starts Icinga 2)
* feature disable (disables specified feature)
* feature enable (enables specified feature)
@ -48,6 +48,8 @@ Global options:
--color use VT100 color codes even when stdout is not a
terminal
-D [ --define ] arg define a constant
-I [ --include ] arg add include search directory
-x [ --log-level ] arg specify the log level for the console log.
The valid value is either debug, notice,
@ -55,8 +57,6 @@ Global options:
-X [ --script-debugger ] whether to enable the script debugger
Report bugs at <https://github.com/Icinga/icinga2>
Get support: <https://icinga.com/support/>
Documentation: <https://icinga.com/docs/>
Icinga home page: <https://icinga.com/>
```
@ -73,7 +73,7 @@ RPM and Debian packages install the bash completion files into
You need to install the `bash-completion` package if not already installed.
RHEL/Fedora:
```bash
yum install bash-completion
@ -102,6 +102,18 @@ source /etc/bash-completion.d/icinga2
## Icinga 2 CLI Global Options <a id="cli-commands-global-options"></a>
### Constants
[Global constants](17-language-reference.md#constants) can be set using the `--define` command-line option.
@ -132,7 +144,7 @@ Provides helper functions to enable and setup the
```
# icinga2 api setup --help
icinga2 - The Icinga 2 network monitoring daemon (version: v2.14.4)
Usage:
icinga2 api setup [<arguments>]
@ -164,20 +176,20 @@ Icinga home page: <https://icinga.com/>
List and manage incoming certificate signing requests. More details
can be found in the [signing methods](06-distributed-monitoring.md#distributed-monitoring-setup-sign-certificates-master)
chapter.
```
# icinga2 ca --help
icinga2 - The Icinga 2 network monitoring daemon (version: v2.14.4)
Usage:
icinga2 <command> [<arguments>]
Supported commands:
* ca list (lists pending certificate signing requests)
* ca remove (removes an outstanding certificate request)
* ca restore (restores a removed certificate request)
* ca sign (signs an outstanding certificate request)
Global options:
-h [ --help ] show this help message
@ -185,6 +197,8 @@ Global options:
--color use VT100 color codes even when stdout is not a
terminal
-D [ --define ] arg define a constant
-I [ --include ] arg add include search directory
-x [ --log-level ] arg specify the log level for the console log.
The valid value is either debug, notice,
@ -192,8 +206,6 @@ Global options:
-X [ --script-debugger ] whether to enable the script debugger
Report bugs at <https://github.com/Icinga/icinga2>
Get support: <https://icinga.com/support/>
Documentation: <https://icinga.com/docs/>
Icinga home page: <https://icinga.com/>
```
@ -201,8 +213,8 @@ Icinga home page: <https://icinga.com/>
### CLI command: Ca List <a id="cli-command-ca-list"></a>
```
# icinga2 ca list --help
icinga2 - The Icinga 2 network monitoring daemon (version: v2.14.4)
Usage:
icinga2 ca list [<arguments>]
@ -237,14 +249,11 @@ Icinga home page: <https://icinga.com/>
## CLI command: Console <a id="cli-command-console"></a>
The CLI command `console` can be used to debug and evaluate Icinga 2 config expressions,
e.g., to test [functions](17-language-reference.md#functions) in your local sandbox.
This command can be executed by any user and does not require access to the Icinga 2 configuration.
```
# icinga2 console
Icinga 2 (version: v2.14.4)
Type $help to view available commands.
<1> => function test(name) {
<1> .. log("Hello " + name)
<1> .. }
@ -259,7 +268,7 @@ Further usage examples can be found in the [library reference](18-library-refere
```
# icinga2 console --help
icinga2 - The Icinga 2 network monitoring daemon (version: v2.14.4)
Usage:
icinga2 console [<arguments>]
@ -272,6 +281,8 @@ Global options:
--color use VT100 color codes even when stdout is not a
terminal
-D [ --define ] arg define a constant
-I [ --include ] arg add include search directory
-x [ --log-level ] arg specify the log level for the console log.
The valid value is either debug, notice,
@ -286,13 +297,11 @@ Command options:
--sandbox enable sandbox mode
Report bugs at <https://github.com/Icinga/icinga2>
Get support: <https://icinga.com/support/>
Documentation: <https://icinga.com/docs/>
Icinga home page: <https://icinga.com/>
```
On operating systems without the `libedit` library installed, there is no
support for line-editing or a command history. However, you can
use the `rlwrap` program if you require those features:
@ -302,7 +311,7 @@ rlwrap icinga2 console
The debug console can be used to connect to a running Icinga 2 instance using
the [REST API](12-icinga2-api.md#icinga2-api). [API permissions](12-icinga2-api.md#icinga2-api-permissions)
for `console` are required for executing config expressions and auto-completion.
> **Note**
>
@ -314,20 +323,20 @@ for `console` are required for executing config expressions and auto-completion.
You can specify the API URL using the `--connect` parameter.
Although the password can be specified there, process arguments are usually
visible to other users (e.g. through `ps`). In order to securely specify the
user credentials, the debug console supports two environment variables:
Environment variable | Description
---------------------|-------------
ICINGA2_API_USERNAME | The API username.
ICINGA2_API_PASSWORD | The API password.
Here is an example:
```
$ ICINGA2_API_PASSWORD=icinga icinga2 console --connect 'https://root@localhost:5665/'
Icinga 2 (version: v2.14.4)
<1> =>
```
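Under the hood this is plain HTTP Basic authentication. A minimal Python sketch (standard library only, not the actual icinga2 client code) of how such credentials become an `Authorization` header:

```python
import base64
import os
import urllib.request

# Set here only for the sketch; normally these are exported in the shell.
os.environ["ICINGA2_API_USERNAME"] = "root"
os.environ["ICINGA2_API_PASSWORD"] = "icinga"

user = os.environ["ICINGA2_API_USERNAME"]
password = os.environ["ICINGA2_API_PASSWORD"]
token = base64.b64encode(f"{user}:{password}".encode()).decode()

# The request is only constructed, not sent, so no running API is needed.
req = urllib.request.Request("https://localhost:5665/v1")
req.add_header("Authorization", f"Basic {token}")
```

Reading the credentials from the environment keeps them out of the process argument list, which is the whole point of the two variables.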
@ -374,7 +383,7 @@ The `--syntax-only` option can be used in combination with `--eval` or `--file`
to check a script for syntax errors. In this mode the script is parsed to identify
syntax errors but not evaluated.
Here is an example that retrieves the command that was used by Icinga to check the `icinga2-agent1.localdomain` host:
```
$ ICINGA2_API_PASSWORD=icinga icinga2 console --connect 'https://root@localhost:5665/' --eval 'get_host("icinga2-agent1.localdomain").last_check_result.command' | python -m json.tool
@ -396,7 +405,7 @@ Furthermore it allows to run the [configuration validation](11-cli-commands.md#c
```
# icinga2 daemon --help
icinga2 - The Icinga 2 network monitoring daemon (version: v2.14.4)
Usage:
icinga2 daemon [<arguments>]
@ -409,6 +418,8 @@ Global options:
--color use VT100 color codes even when stdout is not a
terminal
-D [ --define ] arg define a constant
-I [ --include ] arg add include search directory
-x [ --log-level ] arg specify the log level for the console log.
The valid value is either debug, notice,
@ -419,8 +430,7 @@ Command options:
-c [ --config ] arg parse a configuration file
-z [ --no-config ] start without a configuration file
-C [ --validate ] exit after validating the configuration
--dump-objects write icinga2.debug cache file for icinga2 object
list
-e [ --errorlog ] arg log fatal errors to the specified log file (only
works in combination with --daemonize or
--close-stdio)
@ -428,8 +438,6 @@ Command options:
--close-stdio do not log to stdout (or stderr) after startup
Report bugs at <https://github.com/Icinga/icinga2>
Get support: <https://icinga.com/support/>
Documentation: <https://icinga.com/docs/>
Icinga home page: <https://icinga.com/>
```
@ -468,8 +476,8 @@ The `feature list` command shows which features are currently enabled:
```
# icinga2 feature list
Disabled features: debuglog elasticsearch gelf ido-mysql ido-pgsql influxdb influxdb2 journald opentsdb perfdata syslog
Enabled features: api checker graphite icingadb mainlog notification
```
## CLI command: Node <a id="cli-command-node"></a>
@ -521,7 +529,7 @@ More information can be found in the [troubleshooting](15-troubleshooting.md#tro
```
# icinga2 object --help
icinga2 - The Icinga 2 network monitoring daemon (version: v2.14.4)
Usage:
icinga2 <command> [<arguments>]
@ -535,6 +543,8 @@ Global options:
--color use VT100 color codes even when stdout is not a
terminal
-D [ --define ] arg define a constant
-I [ --include ] arg add include search directory
-x [ --log-level ] arg specify the log level for the console log.
The valid value is either debug, notice,
@ -542,8 +552,6 @@ Global options:
-X [ --script-debugger ] whether to enable the script debugger
Report bugs at <https://github.com/Icinga/icinga2>
Get support: <https://icinga.com/support/>
Documentation: <https://icinga.com/docs/>
Icinga home page: <https://icinga.com/>
```
@ -563,7 +571,7 @@ You will need them in the [distributed monitoring chapter](06-distributed-monito
```
# icinga2 pki --help
icinga2 - The Icinga 2 network monitoring daemon (version: v2.14.4)
Usage:
icinga2 <command> [<arguments>]
@ -583,6 +591,8 @@ Global options:
--color use VT100 color codes even when stdout is not a
terminal
-D [ --define ] arg define a constant
-I [ --include ] arg add include search directory
-x [ --log-level ] arg specify the log level for the console log.
The valid value is either debug, notice,
@ -590,8 +600,6 @@ Global options:
-X [ --script-debugger ] whether to enable the script debugger
Report bugs at <https://github.com/Icinga/icinga2>
Get support: <https://icinga.com/support/>
Documentation: <https://icinga.com/docs/>
Icinga home page: <https://icinga.com/>
```
@ -601,7 +609,7 @@ Lists all configured variables (constants) in a similar fashion like [object lis
```
# icinga2 variable --help
icinga2 - The Icinga 2 network monitoring daemon (version: v2.14.4)
Usage:
icinga2 <command> [<arguments>]
@ -616,6 +624,8 @@ Global options:
--color use VT100 color codes even when stdout is not a
terminal
-D [ --define ] arg define a constant
-I [ --include ] arg add include search directory
-x [ --log-level ] arg specify the log level for the console log.
The valid value is either debug, notice,
@ -623,8 +633,6 @@ Global options:
-X [ --script-debugger ] whether to enable the script debugger
Report bugs at <https://github.com/Icinga/icinga2>
Get support: <https://icinga.com/support/>
Documentation: <https://icinga.com/docs/>
Icinga home page: <https://icinga.com/>
```
@ -643,8 +651,8 @@ You can view a list of enabled and disabled features:
```
# icinga2 feature list
Disabled features: debuglog elasticsearch gelf ido-mysql ido-pgsql influxdb influxdb2 journald opentsdb perfdata syslog
Enabled features: api checker graphite icingadb mainlog notification
```
Using the `icinga2 feature enable` command you can enable features:
@ -667,9 +675,10 @@ restart Icinga 2. You will need to restart Icinga 2 using the init script
after enabling or disabling features.
## Configuration Validation <a id="config-validation"></a>
Once you have edited the configuration, make sure to tell Icinga 2 to validate
the configuration changes. Icinga 2 will log any configuration error including
a hint on the file, the line number, and the affected configuration line itself.
@ -707,12 +716,12 @@ to read the [troubleshooting](15-troubleshooting.md#troubleshooting) chapter.
You can also use the [CLI command](11-cli-commands.md#cli-command-object) `icinga2 object list`
after validation passes to analyze object attributes, inheritance or created
objects by apply rules.
Find more on troubleshooting with `icinga2 object list` in [this chapter](15-troubleshooting.md#troubleshooting-list-configuration-objects).
## Reload on Configuration Changes <a id="config-change-reload"></a>
Every time you have changed your configuration, you should first tell Icinga 2
to [validate](11-cli-commands.md#config-validation). If there are no validation errors, you can
safely reload the Icinga 2 daemon.


@ -288,7 +288,6 @@ Available permissions for specific URL endpoints:
config/query | /v1/config | No | 1
config/modify | /v1/config | No | 512
console | /v1/console | No | 1
debug | /v1/debug | No | 1
events/&lt;type&gt; | /v1/events | No | 1
objects/query/&lt;type&gt; | /v1/objects | Yes | 1
objects/create/&lt;type&gt; | /v1/objects | No | 1
@ -498,7 +497,7 @@ The example below is not valid:
-d '{ "type": "Host", "filter": ""linux-servers" in host.groups" }'
```
The double quotes need to be escaped with a preceding backslash:
```
-d '{ "type": "Host", "filter": "\"linux-servers\" in host.groups" }'
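When generating such requests from a script, it is easier to let a JSON encoder produce the escaping. For example, in Python:

```python
import json

# The encoder escapes the inner double quotes automatically.
payload = json.dumps({"type": "Host", "filter": '"linux-servers" in host.groups'})
print(payload)
```

The resulting `payload` string is exactly what the `-d` argument above carries.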
@ -566,7 +565,7 @@ created by the API.
### Querying Objects <a id="icinga2-api-config-objects-query"></a>
You can request information about configuration objects by sending
a `GET` query to the `/v1/objects/<type>` URL endpoint. `<type>` has
to be replaced with the plural name of the object type you are interested
in:
@ -814,7 +813,7 @@ parameters need to be passed inside the JSON body:
Parameters | Type | Description
------------------|--------------|--------------------------
templates | Array | **Optional.** Import existing configuration templates for this object type. Note: These templates must either be statically configured or provided in [config packages](12-icinga2-api.md#icinga2-api-config-management).
attrs | Dictionary | **Required.** Set specific object attributes for this [object type](09-object-types.md#object-types).
ignore\_on\_error | Boolean | **Optional.** Ignore object creation errors and return an HTTP 200 status instead.
@ -951,7 +950,7 @@ list the latter in the `restore_attrs` parameter. E.g.:
```bash
curl -k -s -S -i -u root:icinga -H 'Accept: application/json' \
-X POST 'https://localhost:5665/v1/objects/hosts/example.localdomain' \
-d '{ "restore_attrs": [ "address", "vars.os" ], "pretty": true }'
```
```json
@ -1009,7 +1008,7 @@ curl -k -s -S -i -u root:icinga -H 'Accept: application/json' \
There are several actions available for Icinga 2 provided by the `/v1/actions`
URL endpoint. You can run actions by sending a `POST` request.
The following actions are also used by [Icinga Web 2](https://icinga.com/docs/icinga-web/latest/):
* sending check results to Icinga from scripts, remote agents, etc.
* scheduling downtimes from external scripts or cronjobs
@ -1073,7 +1072,7 @@ Send a `POST` request to the URL endpoint `/v1/actions/process-check-result`.
exit\_status | Number | **Required.** For services: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN, for hosts: 0=UP, 1=DOWN.
plugin\_output | String | **Required.** One or more lines of the plugin main output. Does **not** contain the performance data.
performance\_data | Array<code>&#124;</code>String | **Optional.** The performance data as array of strings. The raw performance data string can be used too.
check\_command | Array<code>&#124;</code>String | **Optional.** The first entry should be the check command's path, then one entry for each command-line option, followed by an entry for each of its arguments. Alternatively, a single string can be used.
check\_source | String | **Optional.** Usually the name of the `command_endpoint`.
execution\_start | Timestamp | **Optional.** The timestamp when a script/process started its execution.
execution\_end | Timestamp | **Optional.** The timestamp when a script/process ended its execution. This timestamp is used in features to determine e.g. the metric timestamp.
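Following the parameter table above, a request body for a passive service check result can be assembled like this (host, service, output and perfdata values are made-up examples):

```python
import json

body = {
    "type": "Service",
    "filter": 'host.name=="example.localdomain" && service.name=="passive-ping"',
    "exit_status": 2,  # CRITICAL for services
    "plugin_output": "PING CRITICAL - Packet loss = 100%",
    "performance_data": ["pl=100%;80;100;0"],
    "check_source": "example.localdomain",
}
payload = json.dumps(body)
```

`payload` is then sent as the JSON body of the `POST` request to `/v1/actions/process-check-result`.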
@ -1879,32 +1878,6 @@ Example for all object events:
timestamp | Timestamp | Unix timestamp when the event happened.
downtime | Dictionary | Serialized [Downtime](09-object-types.md#objecttype-downtime) object.
#### <a id="icinga2-api-event-streams-type-objectcreated"></a> Event Stream Type: ObjectCreated
| Name | Type | Description |
|--------------|-----------|----------------------------------------------------------------|
| type | String | Event type `ObjectCreated`. |
| timestamp | Timestamp | Unix timestamp when the event happened. |
| object\_type | String | Type of the newly created object, such as `Host` or `Service`. |
| object\_name | String | The full name of the object. |
#### <a id="icinga2-api-event-streams-type-objectmodified"></a> Event Stream Type: ObjectModified
| Name | Type | Description |
|--------------|-----------|-----------------------------------------------------------|
| type | String | Event type `ObjectModified`. |
| timestamp | Timestamp | Unix timestamp when the event happened. |
| object\_type | String | Type of the modified object, such as `Host` or `Service`. |
| object\_name | String | The full name of the object. |
#### <a id="icinga2-api-event-streams-type-objectdeleted"></a> Event Stream Type: ObjectDeleted
| Name | Type | Description |
|--------------|-----------|----------------------------------------------------------|
| type | String | Event type `ObjectDeleted`. |
| timestamp | Timestamp | Unix timestamp when the event happened. |
| object\_type | String | Type of the deleted object, such as `Host` or `Service`. |
| object\_name | String | The full name of the object. |
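Each event arrives as one JSON object per line. A consumer for the object events documented above can be sketched like this (the sample lines are fabricated, shaped like the tables describe):

```python
import json

# Fabricated sample lines as they might arrive on the event stream.
stream = [
    '{"type": "ObjectCreated", "timestamp": 1703066400.0,'
    ' "object_type": "Host", "object_name": "example.localdomain"}',
    '{"type": "ObjectDeleted", "timestamp": 1703066500.0,'
    ' "object_type": "Service", "object_name": "example.localdomain!ping4"}',
]

events = [json.loads(line) for line in stream]
created = [e["object_name"] for e in events if e["type"] == "ObjectCreated"]
```

A real client would read the lines from the long-polling HTTP response instead of a list.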
### Event Stream Filter <a id="icinga2-api-event-streams-filter"></a>
@ -2019,7 +1992,7 @@ validate the configuration asynchronously and populate a status log which
can be fetched in a separate request. Once the validation succeeds,
a reload is triggered by default.
This functionality was primarily developed for the [Icinga Director](https://icinga.com/docs/director/latest/)
but can be used with your own deployments too. It also solves the problem
with certain runtime objects (zones, endpoints) and can be used to
deploy global templates in [global cluster zones](06-distributed-monitoring.md#distributed-monitoring-global-zone-config-sync).
@ -2374,7 +2347,7 @@ Creation, modification and deletion of templates at runtime is not supported.
### Querying Templates <a id="icinga2-api-config-templates-query"></a>
You can request information about configuration templates by sending
a `GET` query to the `/v1/templates/<type>` URL endpoint. `<type>` has
to be replaced with the plural name of the object type you are interested
in:
@ -2529,72 +2502,6 @@ curl -k -s -S -i -u root:icinga -H 'Accept: application/json' \
}
```
## Memory Usage Analysis <a id="icinga2-api-memory"></a>
The GNU libc function `malloc_info(3)` provides memory allocation and usage
statistics of Icinga 2 itself. You can call it directly by sending a `GET`
request to the URL endpoint `/v1/debug/malloc_info`.
The [API permission](12-icinga2-api.md#icinga2-api-permissions) `debug` is required.
Example:
```bash
curl -k -s -S -i -u root:icinga https://localhost:5665/v1/debug/malloc_info
```
In contrast to other API endpoints, the response is not JSON,
but the raw XML output from `malloc_info(3)`. See also the
[glibc malloc(3) internals](https://sourceware.org/glibc/wiki/MallocInternals).
```xml
<malloc version="1">
<heap nr="0">
<sizes>
<size from="33" to="48" total="96" count="2"/>
<size from="49" to="64" total="192" count="3"/>
<size from="65" to="80" total="80" count="1"/>
<unsorted from="84817" to="84817" total="84817" count="1"/>
</sizes>
<total type="fast" count="6" size="368"/>
<total type="rest" count="2" size="859217"/>
<system type="current" size="7409664"/>
<system type="max" size="7409664"/>
<aspace type="total" size="7409664"/>
<aspace type="mprotect" size="7409664"/>
</heap>
<!-- ... -->
<heap nr="30">
<sizes>
<size from="17" to="32" total="96" count="3"/>
<size from="33" to="48" total="576" count="12"/>
<size from="49" to="64" total="64" count="1"/>
<size from="97" to="112" total="3584" count="32"/>
<size from="49" to="49" total="98" count="2"/>
<size from="81" to="81" total="810" count="10"/>
<size from="257" to="257" total="2827" count="11"/>
<size from="689" to="689" total="689" count="1"/>
<size from="705" to="705" total="705" count="1"/>
<unsorted from="81" to="81" total="81" count="1"/>
</sizes>
<total type="fast" count="48" size="4320"/>
<total type="rest" count="27" size="118618"/>
<system type="current" size="135168"/>
<system type="max" size="135168"/>
<aspace type="total" size="135168"/>
<aspace type="mprotect" size="135168"/>
<aspace type="subheaps" size="1"/>
</heap>
<total type="fast" count="938" size="79392"/>
<total type="rest" count="700" size="4409469"/>
<total type="mmap" count="0" size="0"/>
<system type="current" size="15114240"/>
<system type="max" size="15114240"/>
<aspace type="total" size="15114240"/>
<aspace type="mprotect" size="15114240"/>
</malloc>
```
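As the response is plain XML, the interesting totals can be pulled out with the standard library. A sketch against a stripped-down sample of the output above:

```python
import xml.etree.ElementTree as ET

# Stripped-down sample of a malloc_info(3) response.
sample = """<malloc version="1">
  <heap nr="0">
    <system type="current" size="7409664"/>
  </heap>
  <total type="fast" count="938" size="79392"/>
  <total type="rest" count="700" size="4409469"/>
  <system type="current" size="15114240"/>
</malloc>"""

root = ET.fromstring(sample)
# Only the direct children of <malloc> are process-wide totals.
totals = {t.get("type"): int(t.get("size")) for t in root.findall("total")}
current_bytes = int(root.find("system[@type='current']").get("size"))
```

In a monitoring script you would feed the raw response body into `ET.fromstring()` instead of the sample string.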
## API Clients <a id="icinga2-api-clients"></a>
After its initial release in 2015, community members
@ -2699,7 +2606,7 @@ The following languages are covered:
* [Golang](12-icinga2-api.md#icinga2-api-clients-programmatic-examples-golang)
* [Powershell](12-icinga2-api.md#icinga2-api-clients-programmatic-examples-powershell)
The [request method](#icinga2-api-requests) is `POST` using [X-HTTP-Method-Override: GET](12-icinga2-api.md#icinga2-api-requests-method-override)
which allows you to send a JSON request body. The examples request specific service
attributes joined with host attributes. `attrs` and `joins` are therefore specified
as arrays.


@ -32,7 +32,7 @@ vim /etc/icinga2/conf.d/templates.conf
Install the package `nano-icinga2` with your distribution's package manager.
**Note:** On Debian, Ubuntu and Raspberry Pi OS, the syntax files are installed with the `icinga2-common` package already.
Copy the `/etc/nanorc` sample file to your home directory.
@ -71,6 +71,9 @@ via email.
![Icinga Reporting](images/addons/icinga_reporting.png)
Follow along in this [hands-on blog post](https://icinga.com/2019/06/17/icinga-reporting-hands-on/).
## Graphs and Metrics <a id="addons-graphs-metrics"></a>
### Graphite <a id="addons-graphing-graphite"></a>
@ -122,7 +125,7 @@ icinga2 feature enable influxdb2
A popular frontend for InfluxDB is for example [Grafana](https://grafana.org).
Integration in Icinga Web 2 is possible by installing the community [Grafana module](https://github.com/NETWAYS/icingaweb2-module-grafana).
![Icinga Web 2 Detail View with Grafana](images/addons/icingaweb2_grafana.png)
@ -182,7 +185,7 @@ in a tree or list overview and can be added to any dashboard.
![Icinga Web 2 Business Process](images/addons/icingaweb2_businessprocess.png)
Read more [here](https://icinga.com/docs/icinga-business-process-modeling/latest/).
### Certificate Monitoring <a id="addons-visualization-certificate-monitoring"></a>
@ -191,7 +194,8 @@ actions and view all details at a glance.
![Icinga Certificate Monitoring](images/addons/icinga_certificate_monitoring.png)
Read more [here](https://icinga.com/products/icinga-certificate-monitoring/).
### Dashing Dashboard <a id="addons-visualization-dashing-dashboard"></a>
@ -200,7 +204,7 @@ on top of Dashing and uses the [REST API](12-icinga2-api.md#icinga2-api) to visu
on with your monitoring. It combines several popular widgets and provides development
instructions for your own implementation.
The dashboard also allows you to embed the [Icinga Web 2](https://icinga.com/docs/icinga-web/latest/)
host and service problem lists as an iframe.
![Dashing dashboard](images/addons/dashing_icinga2.png)
@ -230,6 +234,10 @@ There's a variety of resources available, for example different notification scr
* Ticket systems
* etc.
Blog posts and howtos:
* [Environmental Monitoring and Alerting](https://icinga.com/2019/09/02/environmental-monitoring-and-alerting-via-text-message/)
Additionally, external services can be [integrated with Icinga 2](https://icinga.com/products/integrations/):
* [Pagerduty](https://icinga.com/products/integrations/pagerduty/)
@ -106,7 +106,7 @@ The current naming schema is defined as follows. The [Icinga Web 2 Graphite modu
depends on this schema.
The default prefix for hosts and services is configured using
[runtime macros](03-monitoring-basics.md#runtime-macros) like this:
```
icinga2.$host.name$.host.$host.check_command$
@ -147,7 +147,7 @@ parsed from plugin output:
Note that labels may contain dots (`.`), allowing you to
add more subsequent levels inside the Graphite tree.
`::` adds support for [multi performance labels](https://github.com/flackem/check_multi/blob/next/doc/configuration/performance.md)
and is therefore replaced by `.`.
By enabling `enable_send_thresholds` Icinga 2 automatically adds the following threshold metrics:
@ -246,7 +246,7 @@ resolved, it will be dropped and not sent to the target host.
Backslashes are allowed in tag keys, tag values and field keys, however they are also
escape characters when followed by a space or comma, but cannot be escaped themselves.
As a result all trailing slashes in these fields are replaced with an underscore. This
predominantly affects Windows paths e.g. `C:\` becomes `C:_`.
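As a rough illustration of this rule (a sketch, not Icinga's actual code — the replacement happens inside the InfluxDB writer itself), a trailing backslash could be rewritten like this in shell:

```shell
# Sketch: replace a single trailing backslash with an underscore,
# mirroring the behaviour described above for InfluxDB fields.
sanitize() {
  local v="$1"
  if [ "${v%\\}" != "$v" ]; then   # value ends with a backslash
    v="${v%\\}_"
  fi
  printf '%s\n' "$v"
}

sanitize 'C:\'        # prints: C:_
sanitize '/var/lib'   # prints: /var/lib
```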
The database/bucket is assumed to exist so this object will make no attempt to create it currently.
@ -335,14 +335,16 @@ More integrations:
#### Elasticsearch Writer <a id="elasticsearch-writer"></a>
This feature forwards check results, state changes and notification events
to an [Elasticsearch](https://www.elastic.co/products/elasticsearch) or an [OpenSearch](https://opensearch.org/) installation over its HTTP API.
The check results include parsed performance data metrics if enabled.
> **Note**
>
> Elasticsearch 7.x, 8.x or Opensearch 2.12.x are required. This feature has been successfully tested with
> Elasticsearch 7.17.10, 8.8.1 and OpenSearch 2.13.0.
Enable the feature and restart Icinga 2.
@ -364,8 +366,7 @@ The following event types are written to Elasticsearch:
* icinga2.event.notification
Performance data metrics must be explicitly enabled with the `enable_send_perfdata`
attribute. Be aware that this will create a new field mapping in the index for each performance data metric in a check plugin.
See: [ElasticsearchWriter](09-object-types.md#objecttype-elasticsearchwriter)
Metric values are stored like this:
@ -384,7 +385,7 @@ The following characters are escaped in perfdata labels:
Note that perfdata labels may contain dots (`.`), allowing you to
add more subsequent levels inside the tree.
`::` adds support for [multi performance labels](https://github.com/flackem/check_multi/blob/next/doc/configuration/performance.md)
and is therefore replaced by `.`.
Icinga 2 automatically adds the following threshold metrics
@ -397,28 +398,6 @@ check_result.perfdata.<perfdata-label>.warn
check_result.perfdata.<perfdata-label>.crit
```
Additionally it is possible to configure custom tags that are applied to the metrics via `host_tags_template` or `service_tags_template`.
Depending on whether the write event was triggered on a service or host object, additional tags are added to the Elasticsearch entries.
A host metrics entry configured with the following `host_tags_template`:
```
host_tags_template = {
os_name = "$host.vars.os$"
custom_label = "A Custom Label"
list = [ "$host.groups$", "$host.vars.foo$" ]
}
```
will, in addition to the lines mentioned above, also contain:
```
os_name = "Linux"
custom_label = "A Custom Label"
list = [ "group-A;linux-servers", "bar" ]
```
#### Elasticsearch in Cluster HA Zones <a id="elasticsearch-writer-cluster-ha"></a>
The Elasticsearch feature supports [high availability](06-distributed-monitoring.md#distributed-monitoring-high-availability-features)
@ -443,11 +422,11 @@ or Logstash for additional filtering.
#### GELF Writer <a id="gelfwriter"></a>
The `Graylog Extended Log Format` (short: GELF)
can be used to send application logs directly to a TCP socket.
While it has been specified by the [Graylog](https://www.graylog.org) project as their
[input resource standard](https://go2docs.graylog.org/current/getting_in_log_data/inputs.htm), other tools such as
[Logstash](https://www.elastic.co/products/logstash) also support `GELF` as
[input type](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-gelf.html).
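A minimal configuration sketch (host and port are placeholders — point them at your actual GELF input):

```
object GelfWriter "gelf" {
  host = "127.0.0.1"
  port = 12201
  enable_send_perfdata = true
}
```

The feature can alternatively be enabled with `icinga2 feature enable gelf` and configured in the generated feature file.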
@ -575,7 +554,7 @@ with the following tags
Functionality exists to modify the built in OpenTSDB metric names that the plugin
writes to. By default this is `icinga.host` and `icinga.service.<servicename>`.
These prefixes can be modified as necessary to any arbitrary string. The prefix
configuration also supports Icinga macros, so if you rather use `<checkcommand>`
or any other variable instead of `<servicename>` you may do so.
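For example (a hypothetical snippet — verify the attribute names against the OpenTsdbWriter object reference before use):

```
object OpenTsdbWriter "opentsdb" {
  host = "127.0.0.1"
  port = 4242

  // use the check command instead of the service name in the metric prefix
  service_template = {
    metric = "icinga.service.$service.check_command$"
  }
}
```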
@ -836,6 +815,16 @@ apt-get install icinga2-ido-mysql
default. You can skip the automated setup and install/upgrade the
database manually if you prefer.
###### CentOS 7
!!! info
Note that installing `icinga2-ido-mysql` is only supported on CentOS 7 as CentOS 8 is EOL.
```bash
yum install icinga2-ido-mysql
```
###### RHEL 8
```bash
@ -925,6 +914,16 @@ apt-get install icinga2-ido-pgsql
You can skip the automated setup and install/upgrade the database manually
if you prefer that.
###### CentOS 7
!!! info
Note that installing `icinga2-ido-pgsql` is only supported on CentOS 7 as CentOS 8 is EOL.
```bash
yum install icinga2-ido-pgsql
```
###### RHEL 8
```bash
@ -1119,7 +1118,7 @@ As with any application database, there are ways to optimize and tune the databa
General tips for performance tuning:
* [MariaDB KB](https://mariadb.com/docs/server/ha-and-performance/optimization-and-tuning)
* [PostgreSQL Wiki](https://wiki.postgresql.org/wiki/Performance_Optimization)
Re-creation of indexes, changed column values, etc. will increase the database size. Ensure to
@ -1236,7 +1235,7 @@ on the [Icinga 1.x documentation](https://docs.icinga.com/latest/en/extcommands2
> This feature is DEPRECATED and may be removed in future releases.
> Check the [roadmap](https://github.com/Icinga/icinga2/milestones).
The [MK Livestatus](https://exchange.nagios.org/directory/Documentation/MK-Livestatus/details) project
implements a query protocol that lets users query their Icinga instance for
status information. It can also be used to send commands.
@ -19,8 +19,8 @@ findings and details please.
* `icinga2 --version`
* `icinga2 feature list`
* `icinga2 daemon -C`
* [Icinga Web 2](https://icinga.com/docs/icinga-web/latest/) version (screenshot from System - About)
* Icinga Web 2 modules e.g. the Icinga Director (optional)
* Configuration insights:
* Provide complete configuration snippets explaining your problem in detail
* Your [icinga2.conf](04-configuration.md#icinga2-conf) file
@ -42,7 +42,7 @@ is also key to identify bottlenecks and issues.
>
> [Monitor Icinga 2](08-advanced-topics.md#monitoring-icinga) and use the hints for further analysis.
* Analyze the system's performance and identify bottlenecks and issues.
* Collect details about all applications (e.g. Icinga 2, MySQL, Apache, Graphite, Elastic, etc.).
* If data is exchanged via network (e.g. central MySQL cluster) ensure to monitor the bandwidth capabilities too.
* Add graphs from Grafana or Graphite as screenshots to your issue description
@ -176,64 +176,6 @@ C:\> cd C:\ProgramData\icinga2\var\log\icinga2
C:\ProgramData\icinga2\var\log\icinga2> Get-Content .\debug.log -tail 10 -wait
```
### Enable/Disable Debug Output on the fly <a id="troubleshooting-enable-disable-debug-output-api"></a>
The `debuglog` feature can also be created and deleted at runtime without having to restart Icinga 2.
Technically, this is possible because this feature is a [FileLogger](09-object-types.md#objecttype-filelogger)
that can be managed through the [API](12-icinga2-api.md#icinga2-api-config-objects).
This is a good alternative to `icinga2 feature enable debuglog` as object
creation/deletion via API happens immediately and requires no restart.
The above matters in setups large enough for the reload to take a while.
These especially produce a lot of debug log output until disabled again.
!!! info
In case of [an HA zone](06-distributed-monitoring.md#distributed-monitoring-scenarios-ha-master-agents),
the following API examples toggle the feature on both nodes.
#### Enable Debug Output on the fly <a id="troubleshooting-enable-debug-output-api"></a>
```bash
curl -k -s -S -i -u root:icinga -H 'Accept: application/json' \
-X PUT 'https://localhost:5665/v1/objects/fileloggers/on-the-fly-debug-file' \
-d '{ "attrs": { "severity": "debug", "path": "/var/log/icinga2/on-the-fly-debug.log" }, "pretty": true }'
```
```json
{
"results": [
{
"code": 200.0,
"status": "Object was created."
}
]
}
```
#### Disable Debug Output on the fly <a id="troubleshooting-disable-debug-output-api"></a>
This works only for debug loggers enabled on the fly as above!
```bash
curl -k -s -S -i -u root:icinga -H 'Accept: application/json' \
-X DELETE 'https://localhost:5665/v1/objects/fileloggers/on-the-fly-debug-file?pretty=1'
```
```json
{
"results": [
{
"code": 200.0,
"name": "on-the-fly-debug-file",
"status": "Object was deleted.",
"type": "FileLogger"
}
]
}
```
## Icinga starts/restarts/reloads very slowly
### Try swapping out the allocator
@ -872,7 +814,7 @@ trying because you probably have a problem that requires manual intervention.
### Late Check Results <a id="late-check-results"></a>
[Icinga Web 2](https://icinga.com/docs/icinga-web/latest/) provides
a dashboard overview for `overdue checks`.
The REST API provides the [status](12-icinga2-api.md#icinga2-api-status) URL endpoint with some generic metrics
@ -887,7 +829,8 @@ You can also calculate late check results via the REST API:
* Fetch the `last_check` timestamp from each object
* Compare the timestamp with the current time and add `check_interval` multiple times (change it to see which results are really late, like five times check_interval)
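The comparison from the steps above can be sketched with plain shell arithmetic (the timestamps below are hypothetical values, as returned by the API):

```shell
# Decide whether a check result is late: it is considered late here when
# now > last_check + 5 * check_interval (i.e. five missed intervals).
is_late() {  # args: now last_check check_interval (all in seconds)
  if [ "$1" -gt $(( $2 + 5 * $3 )) ]; then
    echo "late"
  else
    echo "ok"
  fi
}

is_late 1703060000 1703059200 60   # prints: late
is_late 1703059300 1703059200 60   # prints: ok
```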
You can use the [icinga2 console](11-cli-commands.md#cli-command-console) to connect to the instance, fetch all data and calculate the differences.
```
# ICINGA2_API_USERNAME=root ICINGA2_API_PASSWORD=icinga icinga2 console --connect 'https://localhost:5665/'
@ -935,7 +878,7 @@ actively attempts to schedule and execute checks. Otherwise the node does not fe
}
```
You may ask why this analysis is important? Fair enough - if the numbers are not inverted in an HA zone
with two members, this may give a hint that the cluster nodes are in a split-brain scenario, or you've
found a bug in the cluster.
@ -1697,9 +1640,6 @@ Typical errors are:
* The api feature doesn't [accept config](06-distributed-monitoring.md#distributed-monitoring-top-down-config-sync). This is logged into `/var/lib/icinga2/icinga2.log`.
* The received configuration zone is not configured in [zones.conf](04-configuration.md#zones-conf) and Icinga denies it. This is logged into `/var/lib/icinga2/icinga2.log`.
* The satellite/agent has local configuration in `/etc/icinga2/zones.d` and thinks it is authoritative for this zone. It then denies the received update. Purge the content from `/etc/icinga2/zones.d`, `/var/lib/icinga2/api/zones/*` and restart Icinga to fix this.
* Configuration parts stored outside of `/etc/icinga2/zones.d` on the master, for example a constant in `/etc/icinga2/constants.conf`, are then missing on the satellite/agent.
Note that if set up, the [built-in icinga CheckCommand](10-icinga-template-library.md#icinga) will notify you in case the config sync wasn't successful.
#### New configuration does not trigger a reload <a id="troubleshooting-cluster-config-sync-no-reload"></a>
@ -8,28 +8,6 @@ Specific version upgrades are described below. Please note that version
updates are incremental. An upgrade from v2.6 to v2.8 requires you to
follow the instructions for v2.7 too.
## Upgrading to v2.15 <a id="upgrading-to-2-15"></a>
### Icinga DB <a id="upgrading-to-2-15-icingadb"></a>
Version 2.15.0 of Icinga 2 is released alongside Icinga DB 1.4.0 and Icinga DB
Web 1.2.0. A change to the internal communication API requires these updates to
be applied together. To put it simply, Icinga 2.15.0 needs Icinga DB 1.4.0 or
later.
### REST API Attribute Filter <a id="upgrading-to-2-15-attrs"></a>
When [querying objects](12-icinga2-api.md#icinga2-api-config-objects-query)
using the API, specifying `{"attrs":[]}` now returns the objects with no
attributes. Not supplying the parameter or using `{"attrs":null}` still returns
the unfiltered list of all attributes.
### Removed DSL Functions <a id="upgrading-to-2-15-dsl"></a>
The undocumented `Checkable#process_check_result` and `System#track_parents`
functions were removed from the Icinga 2 config language (the
`process-check-result` API action is unaffected by this).
## Upgrading to v2.14 <a id="upgrading-to-2-14"></a>
### Dependencies and Redundancy Groups <a id="upgrading-to-2-14-dependencies"></a>
@ -128,7 +106,7 @@ have been removed from the command and documentation.
### Bugfixes for 2.11 <a id="upgrading-to-2-11-bugfixes"></a>
2.11.1 on agents/satellites fixes a problem where 2.10.x as config master would send out an unwanted config marker file,
thus leading the agent to think it is authoritative for the config, and never accepting any new
config files for the zone(s). **If your config master is 2.11.x already, you are not affected by this problem.**
In order to fix this, upgrade to at least 2.11.1, and purge away the local config sync storage once, then restart.
@ -390,7 +368,7 @@ This affects the following features:
The reconnect failover has been improved, and the default `failover_timeout`
for the DB IDO features has been lowered from 60 to 30 seconds.
Object authority updates (required for balancing in the cluster) happen
more frequently (was 30, is 10 seconds).
Also the cold startup without object authority updates has been reduced
from 60 to 30 seconds. This is to allow cluster reconnects (lowered from 60s to 10s in 2.10)
before actually considering a failover/split brain scenario.
@ -97,7 +97,6 @@ Character | Escape sequence
--------------------------|------------------------------------
" | \\"
\\ | \\\\
$ | $$
&lt;TAB&gt; | \\t
&lt;CARRIAGE-RETURN&gt; | \\r
&lt;LINE-FEED&gt; | \\n
@ -108,10 +107,6 @@ In addition to these pre-defined escape sequences you can specify
arbitrary ASCII characters using the backslash character (\\) followed
by an ASCII character in octal encoding.
In Icinga 2, the `$` character is reserved for resolving [runtime macros](03-monitoring-basics.md#runtime-macros).
However, in situations where a string that isn't intended to be used as a runtime macro contains the `$` character,
it is necessary to escape it with another `$` character.
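For example, a hypothetical custom variable whose value should contain a literal `$`:

```
object Host "example-host" {
  check_command = "dummy"

  // "$$" resolves to a single literal "$" at runtime
  vars.price_note = "5$$ per check"
}
```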
### Multi-line String Literals <a id="multiline-string-literals"></a>
Strings spanning multiple lines can be specified by enclosing them in
@ -666,7 +661,7 @@ setting the `check_command` attribute or custom variables as command parameters.
and afterwards the `assign where` and `ignore where` conditions are evaluated.
It is not necessary to check attributes referenced in the `for loop` expression
for their existence using an additional `assign where` condition.
More usage examples are documented in the [monitoring basics](03-monitoring-basics.md#using-apply-for)
chapter.
@ -204,7 +204,7 @@ You can read the full story [here](https://github.com/Icinga/icinga2/issues/7309
With 2.11 you'll now see 3 processes:
- The umbrella process which takes care of signal handling and process spawning/stopping
- The main process with the check scheduler, notifications, etc.
- The execution helper process
@ -622,8 +622,7 @@ The algorithm works like this:
* Determine whether this instance is assigned to a local zone and endpoint.
* Collect all endpoints in this zone if they are connected.
* If there are two endpoints, but we only see ourselves and the application start is less than
30 seconds in the past, do nothing (wait for the cluster reconnect to take place, grace period).
* Sort the collected endpoints by name.
* Iterate over all config types and their respective objects
* Ignore !active objects
@ -633,12 +632,15 @@ The algorithm works like this:
* Set the authority (true or false)
The object authority calculation works "offline" without any message exchange.
Each instance calculates the SDBM hash of the config object name. However, for objects bound to some
host, i.e. the object name is composed of `<host_name>!<object_name>`, the SDBM hash is calculated based
on the host name only instead of the full object name. That way, each child object like services, downtimes,
etc. will be assigned to the same endpoint as the host object itself. The resulting hash modulo (`%`) the number of
connected endpoints produces the index of the endpoint which is authoritative for this config object. If the
endpoint at this index is equal to the local endpoint, the authority is set to `true`, otherwise it is set to `false`.
```cpp
authority = endpoints[Utility::SDBM(object->GetName()) % endpoints.size()] == my_endpoint;
```
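The following is a rough shell sketch of that calculation (the real implementation is the C++ shown above; the 32-bit masking and host-name splitting here are illustrative assumptions):

```shell
# SDBM hash, kept within 32 bits for illustration.
sdbm() {
  local s="$1" h=0 i c
  for (( i = 0; i < ${#s}; i++ )); do
    printf -v c '%d' "'${s:$i:1}"          # ASCII code of the character
    h=$(( (c + h * 65599) & 0xFFFFFFFF ))  # h = c + h*65536 + h*64 - h
  done
  echo "$h"
}

# Index of the authoritative endpoint for an object name.
authority_index() {  # args: object-name endpoint-count
  local host="${1%%!*}"   # objects bound to a host hash on the host name only
  echo $(( $(sdbm "$host") % $2 ))
}

authority_index 'web1!ping4' 2   # same endpoint index as for 'web1'
```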
`ConfigObject::SetAuthority(bool authority)` triggers the following events:
@ -649,7 +651,7 @@ endpoint at this index is equal to the local endpoint, the authority is set to `
that by querying the `paused` attribute for all objects via REST API
or debug console on both endpoints.
Endpoints inside an HA zone calculate the object authority independent from each other.
This object authority is important for selected features explained below.
Since features are configuration objects too, you must ensure that all nodes
@ -48,7 +48,7 @@ or `icinga2-ido-mysql`.
Distribution | Command
-------------------|------------------------------------------
Debian/Ubuntu | `apt-get install icinga2-dbg`
RHEL | `yum install icinga2-debuginfo`
Fedora | `dnf install icinga2-debuginfo icinga2-bin-debuginfo icinga2-ido-mysql-debuginfo`
SLES/openSUSE | `zypper install icinga2-bin-debuginfo icinga2-ido-mysql-debuginfo`
@ -65,7 +65,7 @@ Install GDB in your development environment.
Distribution | Command
-------------------|------------------------------------------
Debian/Ubuntu | `apt-get install gdb`
RHEL | `yum install gdb`
Fedora | `dnf install gdb`
SLES/openSUSE | `zypper install gdb`
@ -267,130 +267,73 @@ $3 = std::vector of length 11, capacity 16 = {{static NPos = 1844674407370955161
### Core Dump <a id="development-debug-core-dump"></a>
When the Icinga 2 daemon is terminated by `SIGSEGV` or `SIGABRT`, a core dump file
should be written. This will help developers to analyze and fix the problem.
#### Core Dump Kernel Pattern <a id="development-debug-core-dump-format"></a>
Core dumps are generated according to the format specified in
`/proc/sys/kernel/core_pattern`. This can either be a path relative to the
directory the program was started in, an absolute path or a pipe to a different
program.
For more information see the [core(5)](https://man7.org/linux/man-pages/man5/core.5.html) man page.
#### Systemd Coredumpctl <a id="development-debug-core-dump-systemd"></a>
Most distributions offer systemd's coredumpctl either by default or as a package.
Distributions that offer it by default include RHEL and SLES, on others like
Debian or Ubuntu it can be installed via the `systemd-coredump` package.
When set up correctly, `core_pattern` will look something like this:
```
# cat /proc/sys/kernel/core_pattern
|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
```
You can look at the generated core dumps with the `coredumpctl list` command.
You can show information, including a stack trace using
`coredumpctl show icinga2 -1` and retrieve the actual core dump file with
`coredumpctl dump icinga2 -1 --output <file>`.
For further information on how to configure and use coredumpctl, read the man pages
[coredumpctl(1)](https://man7.org/linux/man-pages/man1/coredumpctl.1.html) and
[coredump.conf(5)](https://man7.org/linux/man-pages/man5/coredump.conf.5.html).
#### Ubuntu Apport <a id="development-debug-core-dump-apport"></a>
Ubuntu uses their own application `apport` to record core dumps. When it is
enabled, your `core_pattern` will look like this:
```
# cat /proc/sys/kernel/core_pattern
|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E
```
Apport is unsuitable for development work, because by default it only works
with Ubuntu packages and it has a rather complicated interface for retrieving
the core dump. So unless you rely on Apport for some other workflow, systemd's
coredumpctl is a much better option and is available on Ubuntu in the
`systemd-coredump` package that can replace Apport on your system with no
further setup required.
If you still want to use Apport however, to set it up to work with unpackaged programs,
add the following (create the file if it doesn't exist) to `/etc/apport/settings`:
```
[main]
unpackaged=true
```
and restart Apport:
```
systemctl restart apport.service
```
When the program crashes you can then find an Apport crash report in `/var/crash/`
that you can read with the interactive `apport-cli` command. To extract the core
dump you run `apport-unpack /var/crash/<crash-file> <output-dir>` which then
saves an `<output-dir>/CoreDump` file that contains the actual core dump.
#### Directly to a File <a id="development-debug-core-dump-direct"></a>
If coredumpctl is not available, simply writing the core dump directly to a file
is also sufficient. You can set up your `core_pattern` to write a file to a
suitable path:
```bash
sysctl -w kernel.core_pattern=/var/lib/cores/core.%e.%p.%h.%t
install -m 1777 -d /var/lib/cores
```
If you want to make this setting permanent you can also add a file to
`/etc/sysctl.d`, named something like `80-coredumps.conf`:
```
kernel.core_pattern = /var/lib/cores/core.%e.%p.%h.%t
```
This will create core dump files in `/var/lib/cores`, where `%e` is the truncated
name of the program, `%p` is the program's PID, `%h` is the hostname, and `%t` a
timestamp.
Note that unlike the other methods this requires the core size limit to be set
for the process. When starting Icinga 2 via systemd you can set it to unlimited
by adding the following to `/etc/systemd/system/icinga2.service.d/limits.conf`:
```
[Service]
...
LimitCORE=infinity
```
Then reload and restart icinga:
```bash
systemctl daemon-reload
systemctl restart icinga2.service
```
Alternatively, you can edit and reload in one step:

```bash
systemctl edit --drop-in=limits icinga2.service
```
##### Init Script
When using an init script or starting manually, you need to run `ulimit -c unlimited`
before starting the program:
```bash
ulimit -c unlimited
./icinga2 daemon
```
To verify that the limit has been set to `unlimited` run the following:

```bash
for pid in $(pidof icinga2); do cat /proc/$pid/limits; done
```
And look for the line:
```
...
Max core file size unlimited unlimited bytes
```
#### MacOS <a id="development-debug-core-dump-macos"></a>
```bash
sysctl -w kern.corefile=/cores/core.%P
chmod 777 /cores
```
@ -534,18 +477,18 @@ File Type: EXECUTABLE IMAGE
Image has the following dependencies:
boost_coroutine-vc142-mt-gd-x64-1_85.dll
boost_date_time-vc142-mt-gd-x64-1_85.dll
boost_filesystem-vc142-mt-gd-x64-1_85.dll
boost_thread-vc142-mt-gd-x64-1_85.dll
boost_regex-vc142-mt-gd-x64-1_85.dll
libssl-3_0-x64.dll
libcrypto-3_0-x64.dll
WS2_32.dll
dbghelp.dll
SHLWAPI.dll
msi.dll
boost_unit_test_framework-vc142-mt-gd-x64-1_85.dll
KERNEL32.dll
SHELL32.dll
ADVAPI32.dll
@ -594,7 +537,7 @@ packages.
If you encounter a problem, please [open a new issue](https://github.com/Icinga/icinga2/issues/new/choose)
on GitHub and mention that you're testing the snapshot packages.
#### RHEL <a id="development-tests-snapshot-packages-rhel"></a>
2.11+ requires the EPEL repository for Boost 1.66+.
@ -740,7 +683,7 @@ these tools:
- vim
- CLion (macOS, Linux)
- MS Visual Studio (Windows)
- Emacs
Editors differ on the functionality. The more helpers you get for C++ development,
the faster your development workflow will be.
@ -798,12 +741,12 @@ perfdata | Performance data related, including Graphite, Elastic, etc.
db\_ido | IDO database abstraction layer.
db\_ido\_mysql | IDO database driver for MySQL.
db\_ido\_pgsql | IDO database driver for PgSQL.
mysql\_shim | Library stub for linking against the MySQL client libraries.
pgsql\_shim | Library stub for linking against the PgSQL client libraries.
#### Class Compiler <a id="development-develop-design-patterns-class-compiler"></a>
Something else you might notice are the `.ti` files which are compiled
by our own class compiler into actual source code. The meta language allows
developers to easily add object attributes and specify their behaviour.
@ -849,18 +792,17 @@ The most common benefits:
#### Unity Builds <a id="development-develop-builds-unity-builds"></a>
You should be aware that by default unity builds are enabled. You can turn them
off by setting the `ICINGA2_UNITY_BUILD` CMake option to `OFF`.
Typically, we already use caching mechanisms to reduce recompile time with ccache.
For release builds, there's always a new build needed as the difference is huge compared
to a previous (major) release.
Unity builds basically concatenate all source files into one big library source code file.
The compiler then doesn't need to load many small files, each with all of their includes,
but compiles and links only a few huge ones.
However, unity builds require more memory which is why you should disable them for development
builds in small sized VMs (Linux, Windows) and also Docker containers.
There's a couple of header files which are included everywhere. If you touch/edit them,
@ -1286,7 +1228,7 @@ every second.
Avoid log messages which could irritate the user. During
implementation, developers can change log levels to better
see what's going on, but remember to change this back to `debug`
or remove it entirely.
@ -1390,6 +1332,9 @@ autocmd BufWinLeave * call clearmatches()
### Linux Dev Environment <a id="development-linux-dev-env"></a>
Based on CentOS 7, we have an early draft available inside the Icinga Vagrant boxes:
[centos7-dev](https://github.com/Icinga/icinga-vagrant/tree/master/centos7-dev).
If you're compiling Icinga 2 natively without any virtualization layer in between,
this usually is faster. This is also the reason why developers on macOS prefer native builds
over Linux or Windows VMs. Don't forget to test the actual code on Linux later! Socket specific
@ -1412,20 +1357,21 @@ mkdir -p release debug
Proceed with the specific distribution examples below. Keep in mind that these instructions
are best effort and sometimes out-of-date. Git Master may contain updates.
* [Fedora 40](21-development.md#development-linux-dev-env-fedora)
* [CentOS 7](21-development.md#development-linux-dev-env-centos)
* [Debian 10 Buster](21-development.md#development-linux-dev-env-debian)
* [Ubuntu 18 Bionic](21-development.md#development-linux-dev-env-ubuntu)
#### CentOS 7 <a id="development-linux-dev-env-centos"></a>
```bash
yum -y install gdb vim git bash-completion htop centos-release-scl

yum -y install rpmdevtools ccache \
cmake make devtoolset-11-gcc-c++ flex bison \
openssl-devel boost169-devel systemd-devel \
mysql-devel postgresql-devel libedit-devel \
devtoolset-11-libstdc++-devel
groupadd icinga
groupadd icingacmd
@ -1443,42 +1389,47 @@ slower but allows for better debugging insights.
For benchmarks, change `CMAKE_BUILD_TYPE` to `RelWithDebInfo` and
build inside the `release` directory.
First, export some generics for Boost.

```bash
export I2_BOOST="-DBoost_NO_BOOST_CMAKE=TRUE -DBoost_NO_SYSTEM_PATHS=TRUE -DBOOST_LIBRARYDIR=/usr/lib64/boost169 -DBOOST_INCLUDEDIR=/usr/include/boost169 -DBoost_ADDITIONAL_VERSIONS='1.69;1.69.0'"
```

Second, add the prefix path to it.

```bash
export I2_GENERIC="$I2_BOOST -DCMAKE_INSTALL_PREFIX=/usr/local/icinga2"
```

Third, define the two build types with their specific CMake variables.
```bash
export I2_DEBUG="-DCMAKE_BUILD_TYPE=Debug -DICINGA2_UNITY_BUILD=OFF $I2_GENERIC"
export I2_RELEASE="-DCMAKE_BUILD_TYPE=RelWithDebInfo -DICINGA2_WITH_TESTS=ON -DICINGA2_UNITY_BUILD=ON $I2_GENERIC"
```
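A quick sanity check of the shell expansion (using only the prefix part of `I2_GENERIC` for brevity, without the Boost flags):

```bash
# Variables expand at assignment time inside double quotes,
# so I2_DEBUG already contains the fully combined flag string.
export I2_GENERIC="-DCMAKE_INSTALL_PREFIX=/usr/local/icinga2"
export I2_DEBUG="-DCMAKE_BUILD_TYPE=Debug -DICINGA2_UNITY_BUILD=OFF $I2_GENERIC"
echo "$I2_DEBUG"
# → -DCMAKE_BUILD_TYPE=Debug -DICINGA2_UNITY_BUILD=OFF -DCMAKE_INSTALL_PREFIX=/usr/local/icinga2
```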
Fourth, depending on your preferences, you may add a bash alias for building,
or invoke the commands inside:
```bash
alias i2_debug="cd /root/icinga2; mkdir -p debug; cd debug; scl enable devtoolset-11 -- cmake $I2_DEBUG ..; make -j2; sudo make -j2 install; cd .."
alias i2_release="cd /root/icinga2; mkdir -p release; cd release; scl enable devtoolset-11 -- cmake $I2_RELEASE ..; make -j2; sudo make -j2 install; cd .."
```
```bash
i2_debug
```
This is taken from the [centos7-dev](https://github.com/Icinga/icinga-vagrant/tree/master/centos7-dev) Vagrant box.
The source installation doesn't set proper permissions; this is
handled in the package builds, which are officially supported.
```bash
chown -R icinga:icinga /usr/local/icinga2/var/
/usr/local/icinga2/lib/icinga2/prepare-dirs /usr/local/icinga2/etc/sysconfig/icinga2
/usr/local/icinga2/sbin/icinga2 api setup
vim /usr/local/icinga2/etc/icinga2/conf.d/api-users.conf
/usr/local/icinga2/lib/icinga2/sbin/icinga2 daemon
```
#### Debian 10 <a id="development-linux-dev-env-debian"></a>
@ -1525,7 +1476,7 @@ The source installation doesn't set proper permissions, this is
handled in the package builds which are officially supported.
```bash
chown -R icinga:icinga /usr/local/icinga2/var/
/usr/local/icinga2/lib/icinga2/prepare-dirs /usr/local/icinga2/etc/sysconfig/icinga2
/usr/local/icinga2/sbin/icinga2 api setup
@ -1589,7 +1540,7 @@ The source installation doesn't set proper permissions, this is
handled in the package builds which are officially supported.
```bash
chown -R icinga:icinga /usr/local/icinga2/var/
/usr/local/icinga2/lib/icinga2/prepare-dirs /usr/local/icinga2/etc/sysconfig/icinga2
/usr/local/icinga2/sbin/icinga2 api setup
@ -1794,11 +1745,9 @@ and don't care for the details,
1. ensure there are 35 GB free space on C:
2. run the following in an administrative PowerShell:
1. `Enable-WindowsOptionalFeature -FeatureName "NetFx3" -Online`
(reboot when asked!)
2. `powershell -NoProfile -ExecutionPolicy Bypass -Command "Invoke-Expression (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/Icinga/icinga2/master/doc/win-dev.ps1')"`
(will take some time)
This installs everything needed for cloning and building Icinga 2
@ -1814,7 +1763,7 @@ mkdir build
cd .\build\
& "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\CMake\CMake\bin\cmake.exe" `
-DICINGA2_UNITY_BUILD=OFF -DBoost_INCLUDE_DIR=C:\local\boost_1_83_0-Win64 `
-DBISON_EXECUTABLE=C:\ProgramData\chocolatey\lib\winflexbison3\tools\win_bison.exe `
-DFLEX_EXECUTABLE=C:\ProgramData\chocolatey\lib\winflexbison3\tools\win_flex.exe ..
@ -1986,16 +1935,16 @@ Download the [boost-binaries](https://sourceforge.net/projects/boost/files/boost
- 64 for 64 bit builds
```
https://sourceforge.net/projects/boost/files/boost-binaries/1.83.0/boost_1_83_0-msvc-14.2-64.exe/download
```
Run the installer and leave the default installation path in `C:\local\boost_1_83_0`.
##### Source & Compile
In order to use the boost development header and library files you need to [download](https://www.boost.org/users/download/)
Boost and then extract it to e.g. `C:\local\boost_1_83_0`.
> **Note**
>
@ -2003,12 +1952,12 @@ Boost and then extract it to e.g. `C:\local\boost_1_85_0`.
> the archive contains more than 70k files.
In order to integrate Boost into Visual Studio, open the `Developer Command Prompt` from the start menu,
and navigate to `C:\local\boost_1_83_0`.
Execute `bootstrap.bat` first.
```
cd C:\local\boost_1_83_0
bootstrap.bat
```
@ -2091,8 +2040,8 @@ You need to specify the previously installed component paths.
Variable | Value | Description
----------------------|----------------------------------------------------------------------|-------------------------------------------------------
`BOOST_ROOT` | `C:\local\boost_1_83_0` | Root path where you've extracted and compiled Boost.
`BOOST_LIBRARYDIR` | Binary: `C:\local\boost_1_83_0\lib64-msvc-14.2`, Source: `C:\local\boost_1_83_0\stage` | Path to the static compiled Boost libraries, directory must contain `lib`.
`BISON_EXECUTABLE` | `C:\ProgramData\chocolatey\lib\winflexbison\tools\win_bison.exe` | Path to the Bison executable.
`FLEX_EXECUTABLE` | `C:\ProgramData\chocolatey\lib\winflexbison\tools\win_flex.exe` | Path to the Flex executable.
`ICINGA2_UNITY_BUILD` | OFF | Disable unity builds for development environments.
@ -2127,8 +2076,8 @@ $env:ICINGA2_INSTALLPATH = 'C:\Program Files\Icinga2-debug'
$env:ICINGA2_BUILDPATH='debug'
$env:CMAKE_BUILD_TYPE='Debug'
$env:OPENSSL_ROOT_DIR='C:\OpenSSL-Win64'
$env:BOOST_ROOT='C:\local\boost_1_83_0'
$env:BOOST_LIBRARYDIR='C:\local\boost_1_83_0\lib64-msvc-14.2'
```
#### Icinga 2 in Visual Studio
@ -2254,7 +2203,7 @@ Icinga application using a dist tarball (including notes for distributions):
* Debian/Ubuntu: libpq-dev
* postgresql-dev on Alpine
* libedit (CLI console)
* RHEL/CentOS/Fedora: libedit-devel (RHEL requires rhel-7-server-optional-rpms)
* Debian/Ubuntu/Alpine: libedit-dev
* Termcap (only required if libedit doesn't already link against termcap/ncurses)
* RHEL/Fedora: libtermcap-devel
@ -2320,7 +2269,7 @@ cmake .. -DCMAKE_INSTALL_PREFIX=/tmp/icinga2
### CMake Variables <a id="development-package-builds-cmake-variables"></a>
In addition to `CMAKE_INSTALL_PREFIX`, here are most of the supported Icinga-specific CMake variables.

For all variables regarding default paths in CMake, see
[GNUInstallDirs](https://cmake.org/cmake/help/latest/module/GNUInstallDirs.html).
@ -2394,7 +2343,7 @@ for implementation details.
CMake determines the Icinga 2 version number using `git describe` if the
source directory is contained in a Git repository. Otherwise the version number
is extracted from the [ICINGA2_VERSION](ICINGA2_VERSION) file. This behavior can be
overridden by creating a file called `icinga-version.h.force` in the source
directory. Alternatively the `-DICINGA2_GIT_VERSION_INFO=OFF` option for CMake
can be used to disable the usage of `git describe`.
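Roughly, the fallback behaves like this shell sketch (the `Version:` line is a hypothetical stand-in for the actual file contents):

```bash
# Work in an empty temp directory so `git describe` finds no
# repository and the file-based fallback kicks in.
cd "$(mktemp -d)"
printf 'Version: 2.14.1\n' > ICINGA2_VERSION   # hypothetical content

# Prefer `git describe`; on failure, read the version from the file.
version=$(git describe --tags 2>/dev/null || awk '/^Version:/ { print $2 }' ICINGA2_VERSION)
echo "$version"
# → 2.14.1
```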
@ -2402,7 +2351,7 @@ can be used to disable the usage of `git describe`.
### Building RPMs <a id="development-package-builds-rpms"></a>
#### Build Environment on RHEL, CentOS, Fedora, Amazon Linux
Set up your build environment:
@ -2458,7 +2407,7 @@ spectool -g ../SPECS/icinga2.spec
cd $HOME/rpmbuild
```
Install the build dependencies. Example for CentOS 7:
```bash
yum -y install libedit-devel ncurses-devel gcc-c++ libstdc++-devel openssl-devel \
@ -2487,9 +2436,21 @@ rpmbuild -ba SPECS/icinga2.spec
The following packages are required to build the SELinux policy module:
* checkpolicy
* selinux-policy (selinux-policy on CentOS 6, selinux-policy-devel on CentOS 7)
* selinux-policy-doc
##### RHEL/CentOS 7
The Red Hat Developer Toolset is required for building Icinga 2.
This contains a C++ compiler which supports C++17 features.
```bash
yum install centos-release-scl
```
Dependencies on devtoolset-11 are declared in the RPM spec, so the correct tools
are used for building.
##### Amazon Linux
If you prefer to build packages offline, a suitable Vagrant box is located
@ -2580,7 +2541,7 @@ chmod +x /etc/init.d/icinga2
Icinga 2 reads a single configuration file which is used to specify all
configuration settings (global settings, hosts, services, etc.). The
configuration format is explained in detail in the [doc/](doc/) directory.
By default `make install` installs example configuration files in
`/usr/local/etc/icinga2` unless you have specified a different prefix or


@ -122,7 +122,7 @@ Having this boolean enabled allows icinga2 to connect to all ports. This can be
**icinga2_run_sudo**
To allow Icinga 2 to execute plugins via sudo, you can toggle this boolean. It is disabled by default, resulting in error messages like `execvpe(sudo) failed: Permission denied`.
**httpd_can_write_icinga2_command**
@ -204,7 +204,7 @@ If you restart the daemon now it will successfully connect to graphite.
#### Running plugins requiring sudo <a id="selinux-policy-examples-sudo"></a>
Some plugins require privileged access to the system and are designed to be executed via `sudo` to get these privileges.
In this case it is the CheckCommand [running_kernel](10-icinga-template-library.md#plugin-contrib-command-running_kernel) which is set to use `sudo`.
@ -219,7 +219,7 @@ In this case it is the CheckCommand [running_kernel](10-icinga-template-library.
assign where host.name == NodeName
}
Having this Service defined will result in an UNKNOWN state and the error message `execvpe(sudo) failed: Permission denied` because SELinux denies the execution.
Switching the boolean `icinga2_run_sudo` to allow the execution will result in the check being executed successfully.
@ -229,7 +229,7 @@ Switching the boolean `icinga2_run_sudo` to allow the execution will result in t
#### Confining a user <a id="selinux-policy-examples-user"></a>
If you want to have an administrative account capable of only managing icinga2 and not the complete system, you can restrict the privileges by confining
this user. This is completely optional!
Start by adding the Icinga 2 administrator role `icinga2adm_r` to the administrative SELinux user `staff_u`.
@ -295,7 +295,7 @@ Failed to issue method call: Access denied
If you experience any problems while running in enforcing mode, try to reproduce them in permissive mode. If the problem persists, it is not related to SELinux, because in permissive mode SELinux will not deny anything.
After some feedback, Icinga 2 now runs in an enforced domain, but the policy also adds some rules for other necessary services, so no problems should occur at all. But you can help to enhance the policy by testing Icinga 2 running confined by SELinux.
Please add the following information to [bug reports](https://icinga.com/community/):


@ -1,8 +1,4 @@
# Migration from Icinga 1.x <a id="migration"></a>
## Configuration Migration <a id="configuration-migration"></a>


@ -692,3 +692,4 @@ the [servicegroups](24-appendix.md#schema-livestatus-servicegroups-table-attribu
All [services](24-appendix.md#schema-livestatus-services-table-attributes) table attributes grouped with
the [hostgroups](24-appendix.md#schema-livestatus-hostgroups-table-attributes) table prefixed with `hostgroup_`.


@ -11,10 +11,10 @@ function ThrowOnNativeFailure {
}
$VsVersion = 2019
$MsvcVersion = '14.2'
$BoostVersion = @(1, 83, 0)
$OpensslVersion = '3_0_12'
switch ($Env:BITS) {
32 { }
@ -74,6 +74,7 @@ try {
if (-not $Env:GITHUB_ACTIONS) {
choco install -y `
"visualstudio${VsVersion}community" `
"visualstudio${VsVersion}-workload-netcoretools" `
"visualstudio${VsVersion}-workload-vctools" `
"visualstudio${VsVersion}-workload-manageddesktop" `
"visualstudio${VsVersion}-workload-nativedesktop" `
@ -82,7 +83,6 @@ if (-not $Env:GITHUB_ACTIONS) {
git `
cmake `
winflexbison3 `
netfx-4.6-devpack `
windows-sdk-8.1 `
wixtoolset
ThrowOnNativeFailure


@ -165,15 +165,13 @@ if [ -n "$MAILFROM" ] ; then
## Debian/Ubuntu use mailutils which requires `-a` to append the header
if [ -f /etc/debian_version ]; then
/usr/bin/printf "%b" "$NOTIFICATION_MESSAGE" | tr -d '\015' \
| $MAILBIN -a "From: $MAILFROM" -s "$SUBJECT" $USEREMAIL
## Other distributions (RHEL/SUSE/etc.) prefer mailx which sets a sender address with `-r`
else
/usr/bin/printf "%b" "$NOTIFICATION_MESSAGE" | tr -d '\015' \
| $MAILBIN -r "$MAILFROM" -s "$SUBJECT" $USEREMAIL
fi
else
/usr/bin/printf "%b" "$NOTIFICATION_MESSAGE" | tr -d '\015' \
| $MAILBIN -s "$SUBJECT" $USEREMAIL
fi
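The `tr -d '\015'` stage strips carriage returns (octal 015, i.e. `\r`) so that notification bodies arriving with CRLF line endings are mailed with plain LF endings. A minimal standalone demonstration:

```bash
# A message with Windows-style CRLF line endings:
msg='Host is DOWN\r\nSince: 2023-12-20\r\n'

# The same pipeline stage used in the script removes every \r byte:
cleaned=$(printf "%b" "$msg" | tr -d '\015')

# Count remaining carriage returns -- none survive.
printf '%s' "$cleaned" | grep -c "$(printf '\r')" || true
# → 0
```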


@ -178,15 +178,13 @@ if [ -n "$MAILFROM" ] ; then
## Debian/Ubuntu use mailutils which requires `-a` to append the header
if [ -f /etc/debian_version ]; then
/usr/bin/printf "%b" "$NOTIFICATION_MESSAGE" | tr -d '\015' \
| $MAILBIN -a "From: $MAILFROM" -s "$SUBJECT" $USEREMAIL
## Other distributions (RHEL/SUSE/etc.) prefer mailx which sets a sender address with `-r`
else
/usr/bin/printf "%b" "$NOTIFICATION_MESSAGE" | tr -d '\015' \
| $MAILBIN -r "$MAILFROM" -s "$SUBJECT" $USEREMAIL
fi
else
/usr/bin/printf "%b" "$NOTIFICATION_MESSAGE" | tr -d '\015' \
| $MAILBIN -s "$SUBJECT" $USEREMAIL
fi


@ -19,7 +19,7 @@ set_target_properties (
FOLDER Lib
)
include_directories(SYSTEM ${Boost_INCLUDE_DIRS})
if(ICINGA2_WITH_CHECKER)
list(APPEND icinga_app_SOURCES $<TARGET_OBJECTS:checker>)
@ -95,8 +95,6 @@ install(
RUNTIME DESTINATION ${InstallPath}
)
install(CODE "file(MAKE_DIRECTORY \"\$ENV{DESTDIR}${ICINGA2_FULL_LOGDIR}\")")
install(CODE "file(MAKE_DIRECTORY \"\$ENV{DESTDIR}${ICINGA2_FULL_DATADIR}\")")
install(CODE "file(MAKE_DIRECTORY \"\$ENV{DESTDIR}${ICINGA2_FULL_INITRUNDIR}\")")
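These `install(CODE "file(MAKE_DIRECTORY ...)")` calls create the runtime directories at install time and honor `DESTDIR` for staged installs. A shell sketch of the effect (the `ICINGA2_FULL_LOGDIR` value here is a hypothetical example):

```bash
# Staged install: every install path is prefixed with DESTDIR.
DESTDIR=$(mktemp -d)
ICINGA2_FULL_LOGDIR=/usr/local/icinga2/var/log/icinga2   # hypothetical value

# This is what the install(CODE ...) snippet effectively performs:
mkdir -p "${DESTDIR}${ICINGA2_FULL_LOGDIR}"
test -d "${DESTDIR}/usr/local/icinga2/var/log/icinga2" && echo staged
# → staged
```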


@ -401,7 +401,7 @@ static int Main()
#endif /* _WIN32 */
if (vm.count("define")) {
for (const String& define : vm["define"].as<std::vector<std::string> >()) {
String key, value;
size_t pos = define.FindFirstOf('=');
if (pos != String::NPos) {
@ -420,10 +420,12 @@ static int Main()
for (size_t i = 1; i < keyTokens.size(); i++) {
std::unique_ptr<IndexerExpression> indexerExpr{new IndexerExpression(std::move(expr), MakeLiteral(keyTokens[i]))};
indexerExpr->SetOverrideFrozen();
expr = std::move(indexerExpr);
}
std::unique_ptr<SetExpression> setExpr{new SetExpression(std::move(expr), OpSetLiteral, MakeLiteral(value))};
setExpr->SetOverrideFrozen();
try {
ScriptFrame frame(true);
@ -458,7 +460,7 @@ static int Main()
ConfigCompiler::AddIncludeSearchDir(Configuration::IncludeConfDir);
if (!autocomplete && vm.count("include")) {
for (const String& includePath : vm["include"].as<std::vector<std::string> >()) {
ConfigCompiler::AddIncludeSearchDir(includePath);
}
}


@ -19,10 +19,6 @@ set_target_properties(
FOLDER Bin
OUTPUT_NAME icinga2-installer
LINK_FLAGS "/SUBSYSTEM:WINDOWS"
# Use a statically-linked runtime library as this binary is run during the installation process where the other DLLs
# may not have been installed already and the system-provided version may be too old.
MSVC_RUNTIME_LIBRARY "MultiThreaded$<$<CONFIG:Debug>:Debug>"
)
target_link_libraries(icinga-installer shlwapi)


@ -24,10 +24,6 @@ template CheckCommand "ping-common" {
value = "$ping_address$"
description = "host to ping"
}
"--extra-opts" = {
value = "$ping_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-w" = {
value = "$ping_wrta$,$ping_wpl$%"
description = "warning threshold pair"
@ -105,10 +101,6 @@ template CheckCommand "fping-common" {
]
arguments = {
"--extra-opts" = {
value = "$fping_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-w" = {
value = "$fping_wrta$,$fping_wpl$%"
description = "warning threshold pair"
@ -151,13 +143,6 @@ template CheckCommand "fping-common" {
vars.fping_interval = 500
}
object CheckCommand "fping" {
import "fping-common"
import "ipv4-or-ipv6"
vars.fping_address = "$check_address$"
}
object CheckCommand "fping4" {
import "fping-common"
@ -184,10 +169,6 @@ object CheckCommand "tcp" {
value = "$tcp_address$"
description = "Host name, IP Address, or unix socket (must be an absolute path)."
}
"--extra-opts" = {
value = "$tcp_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-p" = {
value = "$tcp_port$"
description = "The TCP port number."
@ -295,10 +276,6 @@ object CheckCommand "ssl" {
value = "$ssl_address$"
description = "Host address"
}
"--extra-opts" = {
value = "$ssl_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-p" = {
value = "$ssl_port$"
description = "TCP port (default: 443)"
@ -344,10 +321,6 @@ object CheckCommand "udp" {
]
arguments = {
"--extra-opts" = {
value = "$udp_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-s" = {
value = "$udp_send$"
required = true
@ -387,11 +360,6 @@ object CheckCommand "http" {
value = "$http_vhost$"
description = "Host name argument for servers using host headers (virtual host)"
}
"--extra-opts" = {
set_if = {{ string(macro("$http_extra_opts$")) != "" }}
value = "$http_extra_opts$"
description = "Read extra plugin options from an ini file"
}
"-I" = {
set_if = {{ string(macro("$http_address$")) != "" }}
value = "$http_address$"
@ -451,16 +419,12 @@ object CheckCommand "http" {
}
"--sni" = {
set_if = "$http_sni$"
description = "Enable SSL/TLS hostname extension support (SNI)"
}
"-C" = {
value = "$http_certificate$"
description = "Minimum number of days a certificate has to be valid. This parameter explicitly sets the port to 443 and ignores the URL if passed."
}
"--continue-after-certificate" = {
set_if = "$http_certificate_continue$"
description = "Allows the HTTP check to continue after performing the certificate check. Does nothing unless -C is used"
}
"-J" = {
value = "$http_clientcert$"
description = "Name of file contains the client certificate (PEM format)"
@ -593,212 +557,6 @@ object CheckCommand "http" {
vars.http_verbose = false
}
object CheckCommand "curl" {
import "ipv4-or-ipv6"
command = [ PluginDir + "/check_curl" ]
arguments += {
"--extra-opts" = {
value = "$curl_extra_opts$"
description = "Read options from an ini file"
}
"-H" = {
value = "$curl_vhost$"
description = "Host name argument for servers using host headers (virtual host). Append a port to include it in the header (eg: example.com:5000)"
}
"-I" = {
value = "$curl_ip$"
set_if = {{ string(macro("$curl_ip$")) != "" }}
description = "IP address or name (use numeric address if possible to bypass DNS lookup)."
}
"-p" = {
value = "$curl_port$"
description = "Port number (default: 80)"
}
"-4" = {
set_if = "$curl_ipv4$"
description = "Force `check_curl` to use IPv4 instead of choosing automatically"
}
"-6" = {
set_if = "$curl_ipv6$"
description = "Force `check_curl` to use IPv6 instead of choosing automatically"
}
"(-S w/ value)" = {
set_if = {{ macro("$curl_tls$") && string(macro("$curl_tls_version$")) != "" }}
key = "-S"
value = "$curl_tls_version$"
description = "Connect via SSL. Port defaults to 443. VERSION is optional, and prevents auto-negotiation"
}
"(-S w/o value)" = {
set_if = {{ macro("$curl_tls$") && string(macro("$curl_tls_version$")) == "" }}
key = "-S"
description = "Connect via SSL. Port defaults to 443. VERSION is optional, and prevents auto-negotiation"
}
"--sni" = {
set_if = "$curl_sni$"
description = "Enable SSL/TLS hostname extension support (SNI). Default if TLS version > 1.0"
}
"-C" = {
value = "$curl_certificate_valid_days_min_warning$,$curl_certificate_valid_days_min_critical$"
description = "Minimum number of days a certificate has to be valid."
}
"--continue-after-certificate" = {
value = "$curl_continue_after_certificate$"
description = "Allows the HTTP check to continue after performing the certificate check. Does nothing unless -C is used."
}
"-J" = {
value = "$curl_client_certificate_file$"
description = "Name of file that contains the client certificate (PEM format) to be used in establishing the SSL session"
}
"-K" = {
value = "$curl_client_certificate_key_file$"
description = "Name of file containing the private key (PEM format) matching the client certificate"
}
"--ca-cert" = {
value = "$curl_ca_cert_file$"
description = "CA certificate file to verify peer against"
}
"-D" = {
set_if = "$curl_verify_peer_cert$"
description = "Verify the peer's SSL certificate and hostname"
}
"-e" = {
value = "$curl_expect_string$"
description = "Comma-delimited list of strings, at least one of them is expected in the first (status) line of the server response (default: HTTP/), If specified skips all other status line logic (ex: 3xx, 4xx, 5xx processing)"
}
"-d" = {
value = "$curl_expect_header_string$"
description = "String to expect in the response headers"
}
"-s" = {
value = "$curl_expect_content_string$"
description = "String to expect in the content"
}
"-u" = {
value = "$curl_url$"
description = "URL to GET or POST (default: /)"
}
"-P" = {
value = "$curl_post_data$"
description = "URL encoded http POST data"
}
"-j" = {
value = "$curl_http_method$"
description = "Set HTTP method (for example: HEAD, OPTIONS, TRACE, PUT, DELETE, CONNECT)"
}
"-N" = {
value = "$curl_no_body$"
description = "Don't wait for document body: stop reading after headers. (Note that this still does an HTTP GET or POST, not a HEAD.)"
}
"-M" = {
value = "$curl_max_age$"
description = "Warn if document is more than SECONDS old. The number can also be of the form '10m' for minutes, '10h' for hours, or '10d' for days."
}
"-T" = {
value = "$curl_content_type$"
description = "specify Content-Type header media type when POSTing"
}
"-l" = {
value = "$curl_linespan$"
description = "Allow regex to span newlines (must precede -r or -R)"
}
"-r" = {
value = "$curl_ereg$"
description = "Search page for regex STRING"
}
"-R" = {
value = "$curl_eregi$"
description = "Search page for case-insensitive regex STRING"
}
"--invert-regex" = {
set_if = "$curl_invert_regex$"
description = "When using regex, return CRITICAL if found, OK if not"
}
"--state-regex" = {
value = "$curl_state_regex$"
description = "Return STATE if regex is found, OK if not"
}
"-a" = {
value = "$curl_authorization$"
description = "Username:password on sites with basic authentication"
}
"-b" = {
value = "$curl_proxy_authorization$"
description = "Username:password on proxy-servers with basic authentication"
}
"-A" = {
value = "$curl_user_agent$"
description = "String to be sent in http header as 'User Agent'"
}
"-k" = {
value = "$curl_header$"
repeat_key = true
description = "Any other tags to be sent in http header. Use multiple times for additional headers"
}
"-E" = {
set_if = "$curl_extended_perfdata$"
description = "Print additional performance data"
}
"-B" = {
set_if = "$curl_show_body$"
description = "Print body content below status line"
}
"-L" = {
set_if = "$curl_link$"
description = "Wrap output in HTML link (obsoleted by urlize)"
}
"-f" = {
value = "$curl_onredirect$"
description = "Options: <ok|warning|critical|follow|sticky|stickyport|curl> How to handle redirected pages."
}
"--max-redirs" = {
value = "$curl_max_redirs$"
description = "Maximal number of redirects (default: 15)"
}
"-m" = {
value = "$curl_pagesize$"
description = "Minimum page size required (bytes) : Maximum page size required (bytes)"
}
"--http-version" = {
value = "$curl_http_version$"
description = "Connect via specific HTTP protocol. 1.0 = HTTP/1.0, 1.1 = HTTP/1.1, 2.0 = HTTP/2 (HTTP/2 will fail without -S)"
}
"--enable-automatic-decompression" = {
set_if = "$curl_enable_automatic_decompression$"
description = "Enable automatic decompression of body (CURLOPT_ACCEPT_ENCODING)."
}
"--haproxy-protocol" = {
set_if = "$curl_haproxy_protocol$"
description = "Send HAProxy proxy protocol v1 header (CURLOPT_HAPROXYPROTOCOL)"
}
"--cookie-jar" = {
value = "$curl_cookie_jar_file$"
description = "Store cookies in the cookie jar file and send them out when requested."
}
"-w" = {
value = "$curl_warning$"
description = "Response time to result in warning status (seconds)"
}
"-c" = {
value = "$curl_critical$"
description = "Response time to result in critical status (seconds)"
}
"-t" = {
value = "$curl_timeout$"
description = "Seconds before connection times out (default: 10)"
}
}
vars.curl_ip = "$check_address$"
vars.curl_link = false
vars.curl_invert_regex = false
vars.curl_show_body = false
vars.curl_extended_perfdata = false
vars.check_ipv4 = "$curl_ipv4$"
vars.check_ipv6 = "$curl_ipv6$"
}
object CheckCommand "ftp" {
import "ipv4-or-ipv6"
@ -809,10 +567,6 @@ object CheckCommand "ftp" {
value = "$ftp_address$"
description = "The host's address. Defaults to $address$ or $address6$ if the address attribute is not set."
}
"--extra-opts" = {
value = "$ftp_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-p" = {
value = "$ftp_port$"
description = "The FTP port number. Defaults to none"
@ -916,10 +670,6 @@ object CheckCommand "smtp" {
value = "$smtp_address$"
description = "Host name, IP Address, or unix socket (must be an absolute path)"
}
"--extra-opts" = {
value = "$smtp_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-p" = {
value = "$smtp_port$"
description = "Port number (default: 25)"
@ -1005,10 +755,6 @@ object CheckCommand "ssmtp" {
value = "$ssmtp_address$"
description = "Host name, IP Address, or unix socket (must be an absolute path)"
}
"--extra-opts" = {
value = "$ssmtp_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-p" = {
value = "$ssmtp_port$"
description = "Port number (default: none)"
@ -1098,10 +844,6 @@ object CheckCommand "imap" {
value = "$imap_address$"
description = "Host name, IP Address, or unix socket (must be an absolute path)"
}
"--extra-opts" = {
value = "$imap_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-p" = {
value = "$imap_port$"
description = "Port number (default: none)"
@ -1191,10 +933,6 @@ object CheckCommand "simap" {
value = "$simap_address$"
description = "Host name, IP Address, or unix socket (must be an absolute path)"
}
"--extra-opts" = {
value = "$simap_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-p" = {
value = "$simap_port$"
description = "Port number (default: none)"
@ -1284,10 +1022,6 @@ object CheckCommand "pop" {
value = "$pop_address$"
description = "Host name, IP Address, or unix socket (must be an absolute path)"
}
"--extra-opts" = {
value = "$pop_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-p" = {
value = "$pop_port$"
description = "Port number (default: none)"
@ -1377,10 +1111,6 @@ object CheckCommand "spop" {
value = "$spop_address$"
description = "Host name, IP Address, or unix socket (must be an absolute path)"
}
"--extra-opts" = {
value = "$spop_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-p" = {
value = "$spop_port$"
description = "Port number (default: none)"
@ -1470,10 +1200,6 @@ object CheckCommand "ntp_time" {
value = "$ntp_address$"
description = "Host name, IP Address, or unix socket (must be an absolute path)"
}
"--extra-opts" = {
value = "$ntp_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-p" = {
value = "$ntp_port$"
description = "Port number (default: 123)"
@ -1523,10 +1249,6 @@ object CheckCommand "ntp_peer" {
value = "$ntp_address$"
description = "Host name, IP Address, or unix socket (must be an absolute path)"
}
"--extra-opts" = {
value = "$ntp_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-p" = {
value = "$ntp_port$"
description = "Port number (default: 123)"
@ -1592,10 +1314,6 @@ object CheckCommand "ssh" {
command = [ PluginDir + "/check_ssh" ]
arguments = {
"--extra-opts" = {
value = "$ssh_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-p" = {
value = "$ssh_port$"
description = "Port number (default: 22)"
@ -1617,14 +1335,6 @@ object CheckCommand "ssh" {
set_if = "$ssh_ipv6$"
description = "Use IPv6 connection"
}
"-r" = {
value = "$ssh_remote_version$"
description = "Alert if string doesn't match expected server version (ex: OpenSSH_3.9p1)"
}
"-P" = {
value = "$ssh_remote_protocol$"
description = "Alert if protocol doesn't match expected protocol version (ex: 2.0)"
}
}
vars.ssh_address = "$check_address$"
@ -1636,10 +1346,6 @@ object CheckCommand "disk" {
command = [ PluginDir + "/check_disk" ]
arguments = {
"--extra-opts" = {
value = "$disk_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-w" = {
value = "$disk_wfree$"
description = "Exit with WARNING status if less than INTEGER units of disk are free or Exit with WARNING status if less than PERCENT of disk space is free"
@ -1666,10 +1372,6 @@ object CheckCommand "disk" {
description = "Display inode usage in perfdata"
set_if = "$disk_inode_perfdata$"
}
"--inode-perfdata" = {
description = "Enable performance data for inode-based statistics (nagios-plugins)"
set_if = "$disk_np_inode_perfdata$"
}
"-p" = {
value = "$disk_partitions$"
description = "Path or partition (may be repeated)"
@ -1789,11 +1491,9 @@ object CheckCommand "disk" {
"mtmfs",
"tracefs",
"cgroup",
"fuse.*", // only Monitoring Plugins support this so far
"fuse.gvfsd-fuse",
"fuse.gvfs-fuse-daemon",
"fuse.portal",
"fuse.sshfs",
"fdescfs",
"overlay",
"nsfs",
@ -1851,10 +1551,6 @@ object CheckCommand "users" {
command = [ PluginDir + "/check_users" ]
arguments = {
"--extra-opts" = {
value = "$users_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-w" = {
value = "$users_wgreater$"
description = "Set WARNING status if more than INTEGER users are logged in"
@ -1873,10 +1569,6 @@ object CheckCommand "procs" {
command = [ PluginDir + "/check_procs" ]
arguments = {
"--extra-opts" = {
value = "$procs_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-w" = {
value = "$procs_warning$"
description = "Generate warning state if metric is outside this range"
@ -1933,10 +1625,6 @@ object CheckCommand "procs" {
value = "$procs_command$"
description = "Only scan for exact matches of COMMAND (without path)"
}
"-X" = {
value = "$procs_exclude_process$"
description = "Exclude processes which match this comma separated list"
}
"-k" = {
set_if = "$procs_nokthreads$"
description = "Only scan for non kernel threads"
@ -1953,10 +1641,6 @@ object CheckCommand "swap" {
command = [ PluginDir + "/check_swap" ]
arguments = {
"--extra-opts" = {
value = "$swap_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-w" = {{
if (macro("$swap_integer$")) {
return macro("$swap_wfree$")
@ -1991,10 +1675,6 @@ object CheckCommand "load" {
command = [ PluginDir + "/check_load" ]
arguments = {
"--extra-opts" = {
value = "$load_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-w" = {
value = "$load_wload1$,$load_wload5$,$load_wload15$"
description = "Exit with WARNING status if load average exceeds WLOADn"
@ -2007,10 +1687,6 @@ object CheckCommand "load" {
set_if = "$load_percpu$"
description = "Divide the load averages by the number of CPUs (when possible)"
}
"-n" = {
value = "$load_procs_to_show$"
description = "Number of processes to show when printing the top consuming processes. (Default value is 0)"
}
}
vars.load_wload1 = 5.0
@ -2032,10 +1708,6 @@ object CheckCommand "snmp" {
value = "$snmp_address$"
description = "Host name, IP Address, or unix socket (must be an absolute path)"
}
"--extra-opts" = {
value = "$snmp_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-o" = {
value = "$snmp_oid$"
description = "Object identifier(s) or SNMP variables whose value you wish to query"
@ -2096,10 +1768,6 @@ object CheckCommand "snmp" {
value = "$snmp_miblist$"
description = "List of MIBS to be loaded (default = none if using numeric OIDs or 'ALL' for symbolic OIDs.)"
}
"-M" = {
value = "$snmp_multiplier$"
description = "Multiplies current value, 0 < n < 1 works as divider, defaults to 1"
}
"--rate-multiplier" = {
value = "$snmp_rate_multiplier$"
description = "Converts rate per second. For example, set to 60 to convert to per minute"
@ -2152,10 +1820,6 @@ object CheckCommand "snmpv3" {
value = "$snmpv3_address$"
description = "Host name, IP Address, or unix socket (must be an absolute path)"
}
"--extra-opts" = {
value = "$snmpv3_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-p" = {
value = "$snmpv3_port$"
description = "Port number"
@ -2232,10 +1896,6 @@ object CheckCommand "snmpv3" {
value = "$snmpv3_miblist$"
description = "List of SNMP MIBs for translating OIDs between numeric and textual representation"
}
"-M" = {
value = "$snmpv3_multiplier$"
description = "Multiplies current value, 0 < n < 1 works as divider, defaults to 1"
}
"-u" = {
value = "$snmpv3_units$"
description = "Units label(s) for output data (e.g., 'sec.')"
@ -2341,10 +2001,6 @@ object CheckCommand "dhcp" {
command = [ PluginDir + "/check_dhcp" ]
arguments = {
"--extra-opts" = {
value = "$dhcp_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-s" = {
value = "$dhcp_serverip$"
description = "IP address of DHCP server that we must hear from"
@ -2384,10 +2040,6 @@ object CheckCommand "dns" {
value = "$dns_lookup$"
description = "The name or address you want to query."
}
"--extra-opts" = {
value = "$dns_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-s" = {
value = "$dns_server$"
description = "Optional DNS server you want to use for the lookup."
@ -2440,10 +2092,6 @@ object CheckCommand "dig" {
value = "$dig_server$"
description = "Host name, IP Address, or unix socket (must be an absolute path)"
}
"--extra-opts" = {
value = "$dig_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-p" = {
value = "$dig_port$"
description = "Port number (default: 53)"
@ -2502,10 +2150,6 @@ object CheckCommand "nscp" {
value = "$nscp_address$"
description = "Name of the host to check"
}
"--extra-opts" = {
value = "$nscp_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-p" = {
value = "$nscp_port$"
description = "Optional port number (default: 1248)"
@ -2557,10 +2201,6 @@ object CheckCommand "by_ssh" {
value = "$by_ssh_address$"
description = "Host name, IP Address, or unix socket (must be an absolute path)"
}
"--extra-opts" = {
value = "$by_ssh_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-p" = {
value = "$by_ssh_port$"
description = "Port number (default: none)"
@ -2638,10 +2278,6 @@ object CheckCommand "ups" {
description = "Address of the upsd server"
required = true
}
"--extra-opts" = {
value = "$ups_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-u" = {
value = "$ups_name$"
description = "Name of the UPS to monitor"
@ -2779,10 +2415,6 @@ object CheckCommand "hpjd" {
value = "$hpjd_address$"
description = "Host address"
}
"--extra-opts" = {
value = "$hpjd_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-C" = {
value = "$hpjd_community$"
description = "The SNMP community name (default=public)"
@ -2806,10 +2438,6 @@ object CheckCommand "icmp" {
order = 1
description = "Host address"
}
"--extra-opts" = {
value = "$icmp_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-w" = {
value = "$icmp_wrta$,$icmp_wpl$%"
description = "warning threshold (currently 200.000ms,10%)"
@ -2869,10 +2497,6 @@ object CheckCommand "ldap" {
value = "$ldap_address$"
description = "Host name, IP Address, or unix socket (must be an absolute path)"
}
"--extra-opts" = {
value = "$ldap_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-p" = {
value = "$ldap_port$"
description = "Port number (default: 389)"
@ -2952,10 +2576,6 @@ object CheckCommand "clamd" {
description = "The host's address or unix socket (must be an absolute path)."
required = true
}
"--extra-opts" = {
value = "$clamd_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-p" = {
value = "$clamd_port$"
description = "Port number (default: none)."
@ -3100,10 +2720,6 @@ object CheckCommand "pgsql" {
value = "$pgsql_hostname$"
description = "Host name, IP Address, or unix socket (must be an absolute path)"
}
"--extra-opts" = {
value = "$pgsql_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-P" = {
value = "$pgsql_port$"
description = "Port number (default: 5432)"
@ -3168,10 +2784,6 @@ object CheckCommand "mysql" {
value = "$mysql_hostname$"
description = "Host name, IP Address, or unix socket (must be an absolute path)"
}
"--extra-opts" = {
value = "$mysql_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-P" = {
value = "$mysql_port$"
description = "Port number (default: 3306)"
@ -3333,10 +2945,6 @@ object CheckCommand "smart" {
command = [ PluginDir + "/check_ide_smart" ]
arguments = {
"--extra-opts" = {
value = "$smart_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-d" = {
value = "$smart_device$"
description = "Name of a local hard drive to monitor"
@ -3399,10 +3007,6 @@ object CheckCommand "game" {
command = [ PluginDir + "/check_game" ]
arguments = {
"--extra-opts" = {
value = "$game_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-P" = {
value = "$game_port$"
description = "Port to connect to"
@ -3456,10 +3060,6 @@ object CheckCommand "mysql_query" {
value = "$mysql_query_hostname$"
description = "Host name, IP Address, or unix socket (must be an absolute path)"
}
"--extra-opts" = {
value = "$mysql_query_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-P" = {
value = "$mysql_query_port$"
description = "Port number (default: 3306)"
@ -3513,10 +3113,6 @@ object CheckCommand "radius" {
value = "$radius_address$",
description = "Host name, IP Address, or unix socket (must be an absolute path)"
}
"--extra-opts" = {
value = "$radius_extra_opts$"
description = "Read extra plugin options from an ini file."
}
"-F" = {
value = "$radius_config_file$",
description = "Configuration file"

View File

@ -1,10 +1,42 @@
/* Icinga 2 | (c) 2012 Icinga GmbH | GPLv2+ */
object CheckCommand "systemd" {
command = [ PluginContribDir + "/check_systemd" ]
command = [ PluginContribDir + "/check_systemd.py" ]
arguments = {
/* General options */
"--unit" = {
value = "$systemd_unit$"
description = "Name of the systemd unit that is being tested."
}
"--exclude" = {
value = "$systemd_exclude_unit$"
description = "Exclude a systemd unit from the checks. This option can be applied multiple times. Also supports regular expressions."
repeat_key = true
}
"--no-startup-time" = {
set_if = "$systemd_no_startup_time$"
description = "Don't check the startup time. Using this option, the options `systemd_warning` and `systemd_critical` have no effect. (Default: `false`)"
}
"--warning" = {
value = "$systemd_warning$"
description = "Startup time in seconds to result in a warning status. (Default: `60s`)"
}
"--critical" = {
value = "$systemd_critical$"
description = "Startup time in seconds to result in a critical status. (Default: `120s`)"
}
"--dead-timers" = {
set_if = "$systemd_dead_timers$"
description = "Detect dead / inactive timers. (Default: `false`)"
}
"--dead-timers-warning" = {
value = "$systemd_dead_timers_warning$"
description = "Time ago in seconds for dead / inactive timers to trigger a warning state (by default 6 days)."
}
"--dead-timers-critical" = {
value = "$systemd_dead_timers_critical$"
description = "Time ago in seconds for dead / inactive timers to trigger a critical state (by default 7 days)."
}
"-v" = {
set_if = {{ macro("$systemd_verbose_level$") == 1 }}
description = "Increase verbosity level (Accepted values: `1`, `2` or `3`). Defaults to none."
@ -15,85 +47,5 @@ object CheckCommand "systemd" {
"-vvv" = {
set_if = {{ macro("$systemd_verbose_level$") == 3 }}
}
/* Options related to unit selection */
"--ignore-inactive-state" = {
set_if = "$systemd_ignore_inactive_state$"
description = "Ignore an inactive state on a specific unit. Only effective if used with `systemd_unit`."
}
"--include" = {
value = "$systemd_include$"
description = "Include systemd units to the checks, regular expressions are supported. This option can be applied multiple times."
repeat_key = true
}
"--unit" = {
value = "$systemd_unit$"
description = "Name of the systemd unit that is being tested."
}
"--include-type" = {
value = "$systemd_include_type$"
description = "Unit types to be tested (for example: `service`, `timer`). This option can be applied multiple times."
repeat_key = true
}
"--exclude" = {
value = "$systemd_exclude_unit$"
description = "Exclude a systemd unit from the checks, regular expressions are supported. This option can be applied multiple times."
repeat_key = true
}
"--exclude-unit" = {
value = "$systemd_exclude_unit_name$"
description = "Exclude a systemd unit from the checks. This option can be applied multiple times."
repeat_key = true
}
"--exclude-type" = {
value = "$systemd_exclude_type$"
description = "Exclude a systemd unit type (for example: `service`, `timer`)"
}
"--state" = {
value = "$systemd_state$"
description = "Specify the active state that the systemd unit must have (for example: `active`, `inactive`)"
}
/* Timers related options */
"--dead-timers" = {
set_if = "$systemd_dead_timers$"
description = "Detect dead / inactive timers, see `systemd_dead_timers_{warning,critical}`. (Default `false`)"
}
"--dead-timers-warning" = {
value = "$systemd_dead_timers_warning$"
description = "Time ago in seconds for dead / inactive timers to trigger a warning state. (Default 6 days)"
}
"--dead-timers-critical" = {
value = "$systemd_dead_timers_critical$"
description = "Time ago in seconds for dead / inactive timers to trigger a critical state. (Default 7 days)"
}
/* Startup time related options */
"--no-startup-time" = {
set_if = "$systemd_no_startup_time$"
description = "Don't check the startup time. Using this option, the options `systemd_{warning,critical}` have no effect. (Default `false`)"
}
"--warning" = {
value = "$systemd_warning$"
description = "Startup time in seconds to result in a warning status. (Default 60 seconds)"
}
"--critical" = {
value = "$systemd_critical$"
description = "Startup time in seconds to result in a critical status. (Default 120 seconds)"
}
/* Monitoring data acquisition */
"--dbus" = {
set_if = "$systemd_dbus$"
description = "Use systemd's D-Bus API instead of parsing command output. Only partially implemented!"
}
"--cli" = {
set_if = "$systemd_cli$"
description = "Use text output from parsing command output. (Default)"
}
"--user" = {
set_if = "$systemd_user$"
description = "Also show user (systemctl --user) units."
}
}
}

View File

@ -50,10 +50,6 @@ template CheckCommand "vmware-esx-command" {
username=<username> \
password=<password>"
}
"--maintenance_mode_state" = {
value = "$vmware_maintenance_mode_state$"
description = "Set status in case ESX host is in maintenance mode. Possible values are: ok or OK, CRITICAL or critical or CRIT or crit, WARNING or warning or WARN or warn. Default is UNKNOWN because you do not know the real state. Values are case insensitive."
}
}
vars.vmware_timeout = "90"
@ -425,10 +421,6 @@ object CheckCommand "vmware-esx-soap-host-net" {
"--isregexp" = {
set_if = "$vmware_isregexp$"
}
"--unplugged_nics_state" = {
value = "$vmware_unplugged_nics_state$"
description = "Sets status for unplugged nics (Possible values are: [OK | ok] or [CRITICAL | critical | CRIT | crit] or [WARNING | warning | WARN | warn]. Default is WARNING. Values are case insensitive.)"
}
}
}
@ -475,10 +467,6 @@ object CheckCommand "vmware-esx-soap-host-net-nic" {
"--isregexp" = {
set_if = "$vmware_isregexp$"
}
"--unplugged_nics_state" = {
value = "$vmware_unplugged_nics_state$"
description = "Sets status for unplugged nics (Possible values are: [OK | ok] or [CRITICAL | critical | CRIT | crit] or [WARNING | warning | WARN | warn]. Default is WARNING. Values are case insensitive.)"
}
}
}

View File

@ -396,9 +396,13 @@ object CheckCommand "ssl_cert" {
value = "$ssl_cert_critical$"
description = "Minimum number of days a certificate has to be valid to issue a critical status"
}
"--match" = {
"-n" = {
value = "$ssl_cert_cn$"
description = "Pattern to match the CN or AltNames of the certificate"
description = "Pattern to match the CN of the certificate"
}
"--altnames" = {
set_if = "$ssl_cert_altnames$"
description = "Matches the pattern specified in -n with alternate names too"
}
"-i" = {
value = "$ssl_cert_issuer$"
@ -440,10 +444,6 @@ object CheckCommand "ssl_cert" {
value = "$ssl_cert_protocol$"
description = "Use the specific protocol {http|smtp|pop3|imap|ftp|xmpp|irc|ldap} (default: http)"
}
"--url" = {
value = "$ssl_cert_http_url$"
description = "HTTP request URL (default: /)"
}
"-C" = {
value = "$ssl_cert_clientssl_cert$"
description = "Use client certificate to authenticate"
@ -578,25 +578,11 @@ object CheckCommand "ssl_cert" {
set_if = "$ssl_cert_ignore_tls_renegotiation$"
description = "Do not check for renegotiation"
}
"--maximum-validity" = {
value = "$ssl_cert_maximum_validity$"
description = "The maximum validity of the certificate in days (default: 397)"
}
"--dane" = {
value = "$ssl_cert_dane$"
description = "verify that valid DANE records exist (since OpenSSL 1.1.0)"
repeat_key = false
}
"--ignore-maximum-validity" = {
description = "Ignore the certificate maximum validity"
set_if = "$ssl_cert_ignore_maximum_validity$"
}
}
vars.ssl_cert_address = "$check_address$"
vars.ssl_cert_port = 443
vars.ssl_cert_cn = "$ssl_cert_altnames$"
}
object CheckCommand "varnish" {

View File

@ -37,9 +37,7 @@ set(base_SOURCES
fifo.cpp fifo.hpp
filelogger.cpp filelogger.hpp filelogger-ti.hpp
function.cpp function.hpp function-ti.hpp function-script.cpp functionwrapper.hpp
generator.hpp
initialize.cpp initialize.hpp
intrusive-ptr.hpp
io-engine.cpp io-engine.hpp
journaldlogger.cpp journaldlogger.hpp journaldlogger-ti.hpp
json.cpp json.hpp json-script.cpp
@ -88,7 +86,6 @@ set(base_SOURCES
unixsocket.cpp unixsocket.hpp
utility.cpp utility.hpp
value.cpp value.hpp value-operators.cpp
wait-group.cpp wait-group.hpp
win32.hpp
workqueue.cpp workqueue.hpp
)
@ -133,7 +130,7 @@ if(HAVE_SYSTEMD)
find_path(SYSTEMD_INCLUDE_DIR
NAMES systemd/sd-daemon.h
HINTS ${SYSTEMD_ROOT_DIR})
include_directories(SYSTEM ${SYSTEMD_INCLUDE_DIR})
include_directories(${SYSTEMD_INCLUDE_DIR})
set_property(
SOURCE ${CMAKE_CURRENT_SOURCE_DIR}/journaldlogger.cpp
APPEND PROPERTY COMPILE_DEFINITIONS
@ -143,13 +140,13 @@ endif()
add_library(base OBJECT ${base_SOURCES})
include_directories(SYSTEM ${icinga2_SOURCE_DIR}/third-party/execvpe)
include_directories(${icinga2_SOURCE_DIR}/third-party/execvpe)
link_directories(${icinga2_BINARY_DIR}/third-party/execvpe)
include_directories(SYSTEM ${icinga2_SOURCE_DIR}/third-party/mmatch)
include_directories(${icinga2_SOURCE_DIR}/third-party/mmatch)
link_directories(${icinga2_BINARY_DIR}/third-party/mmatch)
include_directories(SYSTEM ${icinga2_SOURCE_DIR}/third-party/socketpair)
include_directories(${icinga2_SOURCE_DIR}/third-party/socketpair)
link_directories(${icinga2_BINARY_DIR}/third-party/socketpair)
set_target_properties (
@ -157,9 +154,7 @@ set_target_properties (
FOLDER Lib
)
if(NOT WIN32)
install(CODE "file(MAKE_DIRECTORY \"\$ENV{DESTDIR}${ICINGA2_FULL_CACHEDIR}\")")
install(CODE "file(MAKE_DIRECTORY \"\$ENV{DESTDIR}${ICINGA2_FULL_LOGDIR}/crash\")")
endif()
install(CODE "file(MAKE_DIRECTORY \"\$ENV{DESTDIR}${ICINGA2_FULL_CACHEDIR}\")")
install(CODE "file(MAKE_DIRECTORY \"\$ENV{DESTDIR}${ICINGA2_FULL_LOGDIR}/crash\")")
set(CPACK_NSIS_EXTRA_INSTALL_COMMANDS "${CPACK_NSIS_EXTRA_INSTALL_COMMANDS}" PARENT_SCOPE)

View File

@ -776,12 +776,6 @@ void Application::SigAbrtHandler(int)
}
AttachDebugger(fname, interactive_debugger);
#ifdef __linux__
prctl(PR_SET_DUMPABLE, 1);
#endif /* __linux__ */
abort();
}
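The `prctl(PR_SET_DUMPABLE, 1)` call in this hunk re-enables core dumping right before `abort()`, since a process that changed its credentials (e.g. via setuid) may have dumping disabled by the kernel. A minimal Linux-only sketch of that pattern — `AbortWithCoreDump` is a hypothetical helper name, not part of the Icinga code:

```cpp
#include <cassert>
#include <cstdlib>

#ifdef __linux__
#include <sys/prctl.h>
#endif

// Hypothetical helper mirroring the pattern in the hunk above:
// make sure the process may leave a core file, then abort.
[[noreturn]] void AbortWithCoreDump()
{
#ifdef __linux__
	// The dumpable flag is cleared by setuid/setgid transitions;
	// re-enable it so the abort() below can actually produce a core dump.
	prctl(PR_SET_DUMPABLE, 1);
#endif
	std::abort();
}
```

On non-Linux platforms the `prctl` step is simply skipped, matching the `#ifdef __linux__` guard in the original handler.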
#ifdef _WIN32

View File

@ -45,12 +45,13 @@ Value Array::Get(SizeType index) const
*
* @param index The index.
* @param value The value.
* @param overrideFrozen Whether to allow modifying frozen arrays.
*/
void Array::Set(SizeType index, const Value& value)
void Array::Set(SizeType index, const Value& value, bool overrideFrozen)
{
ObjectLock olock(this);
if (m_Frozen)
if (m_Frozen && !overrideFrozen)
BOOST_THROW_EXCEPTION(std::invalid_argument("Value in array must not be modified."));
m_Data.at(index) = value;
@ -61,12 +62,13 @@ void Array::Set(SizeType index, const Value& value)
*
* @param index The index.
* @param value The value.
* @param overrideFrozen Whether to allow modifying frozen arrays.
*/
void Array::Set(SizeType index, Value&& value)
void Array::Set(SizeType index, Value&& value, bool overrideFrozen)
{
ObjectLock olock(this);
if (m_Frozen)
if (m_Frozen && !overrideFrozen)
BOOST_THROW_EXCEPTION(std::invalid_argument("Array must not be modified."));
m_Data.at(index).Swap(value);
@ -76,12 +78,13 @@ void Array::Set(SizeType index, Value&& value)
* Adds a value to the array.
*
* @param value The value.
* @param overrideFrozen Whether to allow modifying frozen arrays.
*/
void Array::Add(Value value)
void Array::Add(Value value, bool overrideFrozen)
{
ObjectLock olock(this);
if (m_Frozen)
if (m_Frozen && !overrideFrozen)
BOOST_THROW_EXCEPTION(std::invalid_argument("Array must not be modified."));
m_Data.push_back(std::move(value));
@ -96,7 +99,7 @@ void Array::Add(Value value)
*/
Array::Iterator Array::Begin()
{
ASSERT(Frozen() || OwnsLock());
ASSERT(OwnsLock());
return m_Data.begin();
}
@ -110,7 +113,7 @@ Array::Iterator Array::Begin()
*/
Array::Iterator Array::End()
{
ASSERT(Frozen() || OwnsLock());
ASSERT(OwnsLock());
return m_Data.end();
}
@ -145,14 +148,15 @@ bool Array::Contains(const Value& value) const
*
* @param index The index
* @param value The value to add
* @param overrideFrozen Whether to allow modifying frozen arrays.
*/
void Array::Insert(SizeType index, Value value)
void Array::Insert(SizeType index, Value value, bool overrideFrozen)
{
ObjectLock olock(this);
ASSERT(index <= m_Data.size());
if (m_Frozen)
if (m_Frozen && !overrideFrozen)
BOOST_THROW_EXCEPTION(std::invalid_argument("Array must not be modified."));
m_Data.insert(m_Data.begin() + index, std::move(value));
@ -162,12 +166,13 @@ void Array::Insert(SizeType index, Value value)
* Removes the specified index from the array.
*
* @param index The index.
* @param overrideFrozen Whether to allow modifying frozen arrays.
*/
void Array::Remove(SizeType index)
void Array::Remove(SizeType index, bool overrideFrozen)
{
ObjectLock olock(this);
if (m_Frozen)
if (m_Frozen && !overrideFrozen)
BOOST_THROW_EXCEPTION(std::invalid_argument("Array must not be modified."));
if (index >= m_Data.size())
@ -180,42 +185,43 @@ void Array::Remove(SizeType index)
* Removes the item specified by the iterator from the array.
*
* @param it The iterator.
* @param overrideFrozen Whether to allow modifying frozen arrays.
*/
void Array::Remove(Array::Iterator it)
void Array::Remove(Array::Iterator it, bool overrideFrozen)
{
ASSERT(OwnsLock());
if (m_Frozen)
if (m_Frozen && !overrideFrozen)
BOOST_THROW_EXCEPTION(std::invalid_argument("Array must not be modified."));
m_Data.erase(it);
}
void Array::Resize(SizeType newSize)
void Array::Resize(SizeType newSize, bool overrideFrozen)
{
ObjectLock olock(this);
if (m_Frozen)
if (m_Frozen && !overrideFrozen)
BOOST_THROW_EXCEPTION(std::invalid_argument("Array must not be modified."));
m_Data.resize(newSize);
}
void Array::Clear()
void Array::Clear(bool overrideFrozen)
{
ObjectLock olock(this);
if (m_Frozen)
if (m_Frozen && !overrideFrozen)
BOOST_THROW_EXCEPTION(std::invalid_argument("Array must not be modified."));
m_Data.clear();
}
void Array::Reserve(SizeType newSize)
void Array::Reserve(SizeType newSize, bool overrideFrozen)
{
ObjectLock olock(this);
if (m_Frozen)
if (m_Frozen && !overrideFrozen)
BOOST_THROW_EXCEPTION(std::invalid_argument("Array must not be modified."));
m_Data.reserve(newSize);
@ -274,11 +280,11 @@ Array::Ptr Array::Reverse() const
return result;
}
void Array::Sort()
void Array::Sort(bool overrideFrozen)
{
ObjectLock olock(this);
if (m_Frozen)
if (m_Frozen && !overrideFrozen)
BOOST_THROW_EXCEPTION(std::invalid_argument("Array must not be modified."));
std::sort(m_Data.begin(), m_Data.end());
@ -327,26 +333,7 @@ Array::Ptr Array::Unique() const
void Array::Freeze()
{
ObjectLock olock(this);
m_Frozen.store(true, std::memory_order_release);
}
bool Array::Frozen() const
{
return m_Frozen.load(std::memory_order_acquire);
}
/**
* Returns an already locked ObjectLock if the array is frozen.
* Otherwise, returns an unlocked object lock.
*
* @returns An object lock.
*/
ObjectLock Array::LockIfRequired()
{
if (Frozen()) {
return ObjectLock(this, std::defer_lock);
}
return ObjectLock(this);
m_Frozen = true;
}
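The hunk above turns `m_Frozen` into an atomic flag so that a frozen (permanently immutable) array can be read without taking the object lock: `Freeze()` stores with release ordering, readers load with acquire ordering, and `LockIfRequired()` skips locking once the flag is set. A self-contained sketch of that freeze-then-lock-free-read pattern, using a hypothetical `FrozenVector` class rather than the actual Icinga `Array`:

```cpp
#include <atomic>
#include <cassert>
#include <mutex>
#include <stdexcept>
#include <vector>

// Container that may be frozen: after Freeze(), writers are rejected
// and readers may skip locking entirely.
class FrozenVector {
public:
	void Add(int value) {
		std::lock_guard<std::mutex> lock(m_Mutex);
		if (m_Frozen.load(std::memory_order_acquire))
			throw std::invalid_argument("container must not be modified");
		m_Data.push_back(value);
	}

	void Freeze() {
		std::lock_guard<std::mutex> lock(m_Mutex);
		// release pairs with the acquire loads below: once a reader
		// observes true, all writes made before Freeze() are visible to it.
		m_Frozen.store(true, std::memory_order_release);
	}

	bool Frozen() const {
		return m_Frozen.load(std::memory_order_acquire);
	}

	// Readers only need the lock while the container is still mutable.
	int Get(std::size_t index) const {
		if (Frozen())
			return m_Data.at(index); // immutable from here on: lock-free read
		std::lock_guard<std::mutex> lock(m_Mutex);
		return m_Data.at(index);
	}

private:
	mutable std::mutex m_Mutex;
	std::vector<int> m_Data;
	std::atomic<bool> m_Frozen{false};
};
```

The one-way nature of the flag is what makes the lock-free path safe: nothing ever stores `false` again, so a reader that sees the frozen state can never race with a later writer.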
Value Array::GetFieldByName(const String& field, bool sandboxed, const DebugInfo& debugInfo) const
@ -367,7 +354,7 @@ Value Array::GetFieldByName(const String& field, bool sandboxed, const DebugInfo
return Get(index);
}
void Array::SetFieldByName(const String& field, const Value& value, const DebugInfo& debugInfo)
void Array::SetFieldByName(const String& field, const Value& value, bool overrideFrozen, const DebugInfo& debugInfo)
{
ObjectLock olock(this);
@ -377,9 +364,9 @@ void Array::SetFieldByName(const String& field, const Value& value, const DebugI
BOOST_THROW_EXCEPTION(ScriptError("Array index '" + Convert::ToString(index) + "' is out of bounds.", debugInfo));
if (static_cast<size_t>(index) >= GetLength())
Resize(index + 1);
Resize(index + 1, overrideFrozen);
Set(index, value);
Set(index, value, overrideFrozen);
}
Array::Iterator icinga::begin(const Array::Ptr& x)

View File

@ -4,7 +4,6 @@
#define ARRAY_H
#include "base/i2-base.hpp"
#include "base/atomic.hpp"
#include "base/objectlock.hpp"
#include "base/value.hpp"
#include <boost/range/iterator.hpp>
@ -39,9 +38,9 @@ public:
Array(std::initializer_list<Value> init);
Value Get(SizeType index) const;
void Set(SizeType index, const Value& value);
void Set(SizeType index, Value&& value);
void Add(Value value);
void Set(SizeType index, const Value& value, bool overrideFrozen = false);
void Set(SizeType index, Value&& value, bool overrideFrozen = false);
void Add(Value value, bool overrideFrozen = false);
Iterator Begin();
Iterator End();
@ -49,14 +48,14 @@ public:
size_t GetLength() const;
bool Contains(const Value& value) const;
void Insert(SizeType index, Value value);
void Remove(SizeType index);
void Remove(Iterator it);
void Insert(SizeType index, Value value, bool overrideFrozen = false);
void Remove(SizeType index, bool overrideFrozen = false);
void Remove(Iterator it, bool overrideFrozen = false);
void Resize(SizeType newSize);
void Clear();
void Resize(SizeType newSize, bool overrideFrozen = false);
void Clear(bool overrideFrozen = false);
void Reserve(SizeType newSize);
void Reserve(SizeType newSize, bool overrideFrozen = false);
void CopyTo(const Array::Ptr& dest) const;
Array::Ptr ShallowClone() const;
@ -92,22 +91,20 @@ public:
Array::Ptr Reverse() const;
void Sort();
void Sort(bool overrideFrozen = false);
String ToString() const override;
Value Join(const Value& separator) const;
Array::Ptr Unique() const;
void Freeze();
bool Frozen() const;
ObjectLock LockIfRequired();
Value GetFieldByName(const String& field, bool sandboxed, const DebugInfo& debugInfo) const override;
void SetFieldByName(const String& field, const Value& value, const DebugInfo& debugInfo) override;
void SetFieldByName(const String& field, const Value& value, bool overrideFrozen, const DebugInfo& debugInfo) override;
private:
std::vector<Value> m_Data; /**< The data for the array. */
Atomic<bool> m_Frozen{false};
bool m_Frozen{false};
};
Array::Iterator begin(const Array::Ptr& x);

View File

@ -12,12 +12,7 @@ namespace icinga
{
/**
* Like std::atomic, but enforces usage of its only safe constructor.
*
* "The default-initialized std::atomic<T> does not contain a T object,
* and its only valid uses are destruction and
* initialization by std::atomic_init, see LWG issue 2334."
* -- https://en.cppreference.com/w/cpp/atomic/atomic/atomic
* Extends std::atomic with an atomic constructor.
*
* @ingroup base
*/
@ -25,12 +20,24 @@ template<class T>
class Atomic : public std::atomic<T> {
public:
/**
* The only safe constructor of std::atomic#atomic
* Like std::atomic#atomic, but operates atomically
*
* @param desired Initial value
*/
inline Atomic(T desired) : std::atomic<T>(desired)
inline Atomic(T desired)
{
this->store(desired);
}
/**
* Like std::atomic#atomic, but operates atomically
*
* @param desired Initial value
* @param order Initial store operation's memory order
*/
inline Atomic(T desired, std::memory_order order)
{
this->store(desired, order);
}
};
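The doc comment quoted above refers to LWG issue 2334: before C++20, a default-initialized `std::atomic<T>` does not contain a `T`, so its only valid uses are destruction and `std::atomic_init`. A sketch of the core idea of such a wrapper — forbid default construction so callers must always supply an initial value (simplified; the real class also has a constructor taking a memory order):

```cpp
#include <atomic>
#include <cassert>
#include <type_traits>

// Like std::atomic, but callers must always provide an initial value,
// so the unsafe (pre-C++20) default constructor can never be reached.
template<class T>
class Atomic : public std::atomic<T> {
public:
	Atomic() = delete; // no default construction

	// Forward to the safe value-initializing constructor of std::atomic.
	Atomic(T desired) : std::atomic<T>(desired) {}
};
```

All inherited operations (`load`, `store`, `fetch_add`, ...) remain usable; only the dangerous constructor is removed.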

View File

@ -23,3 +23,4 @@ Object::Ptr Boolean::GetPrototype()
return prototype;
}

View File

@ -6,3 +6,4 @@
using namespace icinga;
REGISTER_BUILTIN_TYPE(Boolean, Boolean::GetPrototype());

View File

@ -33,3 +33,4 @@ Object::Ptr ConfigObject::GetPrototype()
return prototype;
}

View File

@ -9,13 +9,11 @@
#include "base/dictionary.hpp"
#include <shared_mutex>
#include <unordered_map>
#include <boost/signals2.hpp>
namespace icinga
{
class ConfigObject;
class ConfigItems;
class ConfigType
{
@ -50,13 +48,6 @@ for (const auto& object : objects) {
int GetObjectCount() const;
/**
* Signal that allows hooking into the config loading process just before ConfigObject::OnAllConfigLoaded() is
* called for a bunch of objects. A vector of pointers to these objects is passed as an argument. All elements
* are of the object type the signal is called on.
*/
boost::signals2::signal<void (const ConfigItems&)> BeforeOnAllConfigLoaded;
private:
typedef std::unordered_map<String, intrusive_ptr<ConfigObject> > ObjectMap;
typedef std::vector<intrusive_ptr<ConfigObject> > ObjectVector;

View File

@ -25,3 +25,4 @@ Object::Ptr DateTime::GetPrototype()
return prototype;
}

View File

@ -35,7 +35,7 @@ DateTime::DateTime(const std::vector<Value>& args)
tms.tm_isdst = -1;
m_Value = Utility::TmToTimestamp(&tms);
m_Value = mktime(&tms);
} else if (args.size() == 1)
m_Value = args[0];
else

View File

@ -95,3 +95,4 @@ void icinga::ShowCodeLocation(std::ostream& out, const DebugInfo& di, bool verbo
}
}
}

View File

@ -22,8 +22,6 @@ public:
{
}
Defer() = default;
Defer(const Defer&) = delete;
Defer(Defer&&) = delete;
Defer& operator=(const Defer&) = delete;
@ -41,11 +39,6 @@ public:
}
}
inline void SetFunc(std::function<void()> func)
{
m_Func = std::move(func);
}
inline
void Cancel()
{

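`Defer` is a small RAII scope guard: it runs a stored callable when it leaves scope unless `Cancel()` was called first (this hunk removes the `SetFunc()` mutator and the copy/move constructors). A minimal sketch under that reading — the names mirror the diff, but error handling around the callable is simplified:

```cpp
#include <cassert>
#include <functional>
#include <utility>

// Scope guard: runs the stored function on destruction unless cancelled.
class Defer {
public:
	Defer() = default;
	explicit Defer(std::function<void()> func) : m_Func(std::move(func)) {}

	// A guard that could be copied or moved would risk running twice.
	Defer(const Defer&) = delete;
	Defer(Defer&&) = delete;
	Defer& operator=(const Defer&) = delete;

	~Defer() {
		if (m_Func)
			m_Func();
	}

	void Cancel() {
		m_Func = nullptr; // drop the callable; destructor becomes a no-op
	}

private:
	std::function<void()> m_Func;
};
```

Typical use is pairing a cleanup action with the statement that made it necessary, so the cleanup runs on every exit path, including exceptions.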
View File

@ -5,68 +5,46 @@
using namespace icinga;
std::mutex DependencyGraph::m_Mutex;
DependencyGraph::DependencyMap DependencyGraph::m_Dependencies;
std::map<Object *, std::map<Object *, int> > DependencyGraph::m_Dependencies;
void DependencyGraph::AddDependency(ConfigObject* child, ConfigObject* parent)
void DependencyGraph::AddDependency(Object *parent, Object *child)
{
std::unique_lock<std::mutex> lock(m_Mutex);
if (auto [it, inserted] = m_Dependencies.insert(Edge(parent, child)); !inserted) {
m_Dependencies.modify(it, [](Edge& e) { e.count++; });
}
m_Dependencies[child][parent]++;
}
void DependencyGraph::RemoveDependency(ConfigObject* child, ConfigObject* parent)
void DependencyGraph::RemoveDependency(Object *parent, Object *child)
{
std::unique_lock<std::mutex> lock(m_Mutex);
if (auto it(m_Dependencies.find(Edge(parent, child))); it != m_Dependencies.end()) {
if (it->count > 1) {
// Remove a duplicate edge from child to node, i.e. decrement the corresponding counter.
m_Dependencies.modify(it, [](Edge& e) { e.count--; });
} else {
// Remove the last edge from child to node (decrementing the counter would set it to 0),
// thus remove that connection from the data structure completely.
m_Dependencies.erase(it);
}
}
auto& refs = m_Dependencies[child];
auto it = refs.find(parent);
if (it == refs.end())
return;
it->second--;
if (it->second == 0)
refs.erase(it);
if (refs.empty())
m_Dependencies.erase(child);
}
/**
* Returns all the parent objects of the given child object.
*
* @param child The child object.
*
* @returns A list of the parent objects.
*/
std::vector<ConfigObject::Ptr> DependencyGraph::GetParents(const ConfigObject::Ptr& child)
std::vector<Object::Ptr> DependencyGraph::GetParents(const Object::Ptr& child)
{
std::vector<ConfigObject::Ptr> objects;
std::vector<Object::Ptr> objects;
std::unique_lock lock(m_Mutex);
auto [begin, end] = m_Dependencies.get<2>().equal_range(child.get());
std::transform(begin, end, std::back_inserter(objects), [](const Edge& edge) {
return edge.parent;
});
return objects;
}
/**
* Returns all the dependent objects of the given parent object.
*
* @param parent The parent object.
*
* @returns A list of the dependent objects.
*/
std::vector<ConfigObject::Ptr> DependencyGraph::GetChildren(const ConfigObject::Ptr& parent)
{
std::vector<ConfigObject::Ptr> objects;
std::unique_lock lock(m_Mutex);
auto [begin, end] = m_Dependencies.get<1>().equal_range(parent.get());
std::transform(begin, end, std::back_inserter(objects), [](const Edge& edge) {
return edge.child;
});
std::unique_lock<std::mutex> lock(m_Mutex);
auto it = m_Dependencies.find(child.get());
if (it != m_Dependencies.end()) {
typedef std::pair<Object *, int> kv_pair;
for (const kv_pair& kv : it->second) {
objects.emplace_back(kv.first);
}
}
return objects;
}
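The replacement side of this diff tracks duplicate parent/child links with a nested map of counters: adding the same edge twice increments a counter, and removing decrements it, erasing entries only when the count drops to zero. A minimal standalone sketch of that counting scheme (class and member names here are illustrative, not Icinga's API):

```cpp
#include <map>
#include <vector>

// Counts how many edges exist from each child to each parent, mirroring
// the std::map<Object*, std::map<Object*, int>> structure in the diff.
class EdgeCounter {
public:
	void Add(int* child, int* parent) { m_Deps[child][parent]++; }

	void Remove(int* child, int* parent) {
		auto& refs = m_Deps[child];
		auto it = refs.find(parent);
		if (it == refs.end())
			return;
		if (--it->second == 0)
			refs.erase(it);
		if (refs.empty())
			m_Deps.erase(child);
	}

	std::vector<int*> Parents(int* child) const {
		std::vector<int*> out;
		auto it = m_Deps.find(child);
		if (it != m_Deps.end())
			for (const auto& kv : it->second)
				out.push_back(kv.first);
		return out;
	}

private:
	std::map<int*, std::map<int*, int>> m_Deps;
};
```

Adding an edge twice and removing it once must still report the parent, which is exactly why the counter exists instead of a plain set of edges.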


@ -4,10 +4,8 @@
#define DEPENDENCYGRAPH_H
#include "base/i2-base.hpp"
#include "base/configobject.hpp"
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/hashed_index.hpp>
#include <boost/multi_index/member.hpp>
#include "base/object.hpp"
#include <map>
#include <mutex>
namespace icinga {
@ -20,84 +18,15 @@ namespace icinga {
class DependencyGraph
{
public:
static void AddDependency(ConfigObject* child, ConfigObject* parent);
static void RemoveDependency(ConfigObject* child, ConfigObject* parent);
static std::vector<ConfigObject::Ptr> GetParents(const ConfigObject::Ptr& child);
static std::vector<ConfigObject::Ptr> GetChildren(const ConfigObject::Ptr& parent);
static void AddDependency(Object *parent, Object *child);
static void RemoveDependency(Object *parent, Object *child);
static std::vector<Object::Ptr> GetParents(const Object::Ptr& child);
private:
DependencyGraph();
/**
* Represents an undirected dependency edge between two objects.
*
* It allows traversing the graph in both directions, i.e. from parent to child and vice versa.
*/
struct Edge
{
ConfigObject* parent; // The parent object of the child one.
ConfigObject* child; // The dependent object of the parent.
// Counter for the number of parent <-> child edges to allow duplicates.
int count;
Edge(ConfigObject* parent, ConfigObject* child, int count = 1): parent(parent), child(child), count(count)
{
}
struct Hash
{
/**
* Generates a unique hash of the given Edge object.
*
* Note, the hash value is generated only by combining the hash values of the parent and child pointers.
*
* @param edge The Edge object to be hashed.
*
* @return size_t The resulting hash value of the given object.
*/
size_t operator()(const Edge& edge) const
{
size_t seed = 0;
boost::hash_combine(seed, edge.parent);
boost::hash_combine(seed, edge.child);
return seed;
}
};
struct Equal
{
/**
* Compares whether the two Edge objects contain the same parent and child pointers.
*
* Note, the member property count is not taken into account for equality checks.
*
* @param a The first Edge object to compare.
* @param b The second Edge object to compare.
*
* @return bool Returns true if the two objects are equal, false otherwise.
*/
bool operator()(const Edge& a, const Edge& b) const
{
return a.parent == b.parent && a.child == b.child;
}
};
};
using DependencyMap = boost::multi_index_container<
Edge, // The value type we want to store in the container.
boost::multi_index::indexed_by<
// The first indexer is used for lookups by the Edge from child to parent, thus it
// needs its own hash function and comparison predicate.
boost::multi_index::hashed_unique<boost::multi_index::identity<Edge>, Edge::Hash, Edge::Equal>,
// These two indexers are used for lookups by the parent and child pointers.
boost::multi_index::hashed_non_unique<boost::multi_index::member<Edge, ConfigObject*, &Edge::parent>>,
boost::multi_index::hashed_non_unique<boost::multi_index::member<Edge, ConfigObject*, &Edge::child>>
>
>;
static std::mutex m_Mutex;
static DependencyMap m_Dependencies;
static std::map<Object *, std::map<Object *, int> > m_Dependencies;
};
}


@ -116,3 +116,4 @@ Object::Ptr Dictionary::GetPrototype()
return prototype;
}


@ -1,6 +1,7 @@
/* Icinga 2 | (c) 2012 Icinga GmbH | GPLv2+ */
#include "base/dictionary.hpp"
#include "base/objectlock.hpp"
#include "base/debug.hpp"
#include "base/primitivetype.hpp"
#include "base/configwriter.hpp"
@ -85,13 +86,14 @@ const Value * Dictionary::GetRef(const String& key) const
*
* @param key The key.
* @param value The value.
* @param overrideFrozen Whether to allow modifying frozen dictionaries.
*/
void Dictionary::Set(const String& key, Value value)
void Dictionary::Set(const String& key, Value value, bool overrideFrozen)
{
ObjectLock olock(this);
std::unique_lock<std::shared_timed_mutex> lock (m_DataMutex);
if (m_Frozen)
if (m_Frozen && !overrideFrozen)
BOOST_THROW_EXCEPTION(std::invalid_argument("Value in dictionary must not be modified."));
m_Data[key] = std::move(value);
@ -131,7 +133,7 @@ bool Dictionary::Contains(const String& key) const
*/
Dictionary::Iterator Dictionary::Begin()
{
ASSERT(Frozen() || OwnsLock());
ASSERT(OwnsLock());
return m_Data.begin();
}
@ -145,7 +147,7 @@ Dictionary::Iterator Dictionary::Begin()
*/
Dictionary::Iterator Dictionary::End()
{
ASSERT(Frozen() || OwnsLock());
ASSERT(OwnsLock());
return m_Data.end();
}
@ -275,26 +277,7 @@ String Dictionary::ToString() const
void Dictionary::Freeze()
{
ObjectLock olock(this);
m_Frozen.store(true, std::memory_order_release);
}
bool Dictionary::Frozen() const
{
return m_Frozen.load(std::memory_order_acquire);
}
/**
* Returns an already locked ObjectLock if the dictionary is frozen.
* Otherwise, returns an unlocked object lock.
*
* @returns An object lock.
*/
ObjectLock Dictionary::LockIfRequired()
{
if (Frozen()) {
return ObjectLock(this, std::defer_lock);
}
return ObjectLock(this);
m_Frozen = true;
}
Value Dictionary::GetFieldByName(const String& field, bool, const DebugInfo& debugInfo) const
@ -307,9 +290,9 @@ Value Dictionary::GetFieldByName(const String& field, bool, const DebugInfo& deb
return GetPrototypeField(const_cast<Dictionary *>(this), field, false, debugInfo);
}
void Dictionary::SetFieldByName(const String& field, const Value& value, const DebugInfo&)
void Dictionary::SetFieldByName(const String& field, const Value& value, bool overrideFrozen, const DebugInfo&)
{
Set(field, value);
Set(field, value, overrideFrozen);
}
bool Dictionary::HasOwnField(const String& field) const
@ -331,3 +314,4 @@ Dictionary::Iterator icinga::end(const Dictionary::Ptr& x)
{
return x->End();
}
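The frozen-flag guard shown in Dictionary::Set above can be reduced to a small self-contained sketch; the class below is illustrative, not the real Dictionary, and uses int values for brevity:

```cpp
#include <atomic>
#include <map>
#include <stdexcept>
#include <string>

// Once Freeze() is called, writes are rejected unless the caller
// explicitly opts in via overrideFrozen, matching the diff's semantics.
class FrozenMap {
public:
	void Set(const std::string& key, int value, bool overrideFrozen = false) {
		if (m_Frozen.load(std::memory_order_acquire) && !overrideFrozen)
			throw std::invalid_argument("Value in dictionary must not be modified.");
		m_Data[key] = value;
	}

	int Get(const std::string& key) const { return m_Data.at(key); }

	void Freeze() { m_Frozen.store(true, std::memory_order_release); }
	bool Frozen() const { return m_Frozen.load(std::memory_order_acquire); }

private:
	std::map<std::string, int> m_Data;
	std::atomic<bool> m_Frozen{false};
};
```

The release/acquire pair mirrors the Atomic&lt;bool&gt; usage on one side of the diff: once a reader observes Frozen() == true, the data written before Freeze() is safely visible without further locking.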


@ -4,9 +4,7 @@
#define DICTIONARY_H
#include "base/i2-base.hpp"
#include "base/atomic.hpp"
#include "base/object.hpp"
#include "base/objectlock.hpp"
#include "base/value.hpp"
#include <boost/range/iterator.hpp>
#include <map>
@ -45,7 +43,7 @@ public:
Value Get(const String& key) const;
bool Get(const String& key, Value *result) const;
const Value * GetRef(const String& key) const;
void Set(const String& key, Value value);
void Set(const String& key, Value value, bool overrideFrozen = false);
bool Contains(const String& key) const;
Iterator Begin();
@ -71,18 +69,16 @@ public:
String ToString() const override;
void Freeze();
bool Frozen() const;
ObjectLock LockIfRequired();
Value GetFieldByName(const String& field, bool sandboxed, const DebugInfo& debugInfo) const override;
void SetFieldByName(const String& field, const Value& value, const DebugInfo& debugInfo) override;
void SetFieldByName(const String& field, const Value& value, bool overrideFrozen, const DebugInfo& debugInfo) override;
bool HasOwnField(const String& field) const override;
bool GetOwnField(const String& field, Value *result) const override;
private:
std::map<String, Value> m_Data; /**< The data for the dictionary. */
mutable std::shared_timed_mutex m_DataMutex;
Atomic<bool> m_Frozen{false};
bool m_Frozen{false};
};
Dictionary::Iterator begin(const Dictionary::Ptr& x);


@ -54,11 +54,26 @@ void FIFO::Optimize()
}
}
size_t FIFO::Peek(void *buffer, size_t count, bool allow_partial)
{
ASSERT(allow_partial);
if (count > m_DataSize)
count = m_DataSize;
if (buffer)
std::memcpy(buffer, m_Buffer + m_Offset, count);
return count;
}
/**
* Implements IOQueue::Read.
*/
size_t FIFO::Read(void *buffer, size_t count)
size_t FIFO::Read(void *buffer, size_t count, bool allow_partial)
{
ASSERT(allow_partial);
if (count > m_DataSize)
count = m_DataSize;
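The Peek/Read pair added to FIFO above differs only in whether the copied bytes are consumed. A simplified sketch of that relationship, using a plain std::vector instead of Icinga's growable buffer (names are illustrative):

```cpp
#include <cstring>
#include <vector>

// Peek copies up to count bytes without consuming them; Read copies and
// then advances past them, mirroring the FIFO methods in the diff.
class ByteFifo {
public:
	void Write(const void* buffer, size_t count) {
		const char* p = static_cast<const char*>(buffer);
		m_Data.insert(m_Data.end(), p, p + count);
	}

	size_t Peek(void* buffer, size_t count) const {
		if (count > m_Data.size() - m_Offset)
			count = m_Data.size() - m_Offset;
		if (buffer)
			std::memcpy(buffer, m_Data.data() + m_Offset, count);
		return count;
	}

	size_t Read(void* buffer, size_t count) {
		count = Peek(buffer, count); // same copy as Peek...
		m_Offset += count;           // ...but the bytes are consumed
		return count;
	}

private:
	std::vector<char> m_Data;
	size_t m_Offset = 0;
};
```

Both methods return the number of bytes actually copied, which may be less than requested, matching the allow_partial semantics asserted in the diff.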


@ -23,7 +23,8 @@ public:
~FIFO() override;
size_t Read(void *buffer, size_t count) override;
size_t Peek(void *buffer, size_t count, bool allow_partial = false) override;
size_t Read(void *buffer, size_t count, bool allow_partial = false) override;
void Write(const void *buffer, size_t count) override;
void Close() override;
bool IsEof() const override;


@ -47,3 +47,4 @@ Object::Ptr Function::GetPrototype()
return prototype;
}


@ -1,48 +0,0 @@
/* Icinga 2 | (c) 2025 Icinga GmbH | GPLv2+ */
#pragma once
#include "base/i2-base.hpp"
#include "base/value.hpp"
#include <optional>
namespace icinga
{
/**
* ValueGenerator is a class that defines a generator function type for producing Values on demand.
*
* This class is used to create generator functions that can yield any values that can be represented by the
* Icinga Value type. The generator function is exhausted when it returns `std::nullopt`, indicating that there
* are no more values to produce. Subsequent calls to `Next()` will always return `std::nullopt` after exhaustion.
*
* @ingroup base
*/
class ValueGenerator final : public Object
{
public:
DECLARE_PTR_TYPEDEFS(ValueGenerator);
/**
* Generates a Value using the provided generator function.
*
* The generator function should return an `std::optional<Value>` which contains the produced Value or
* `std::nullopt` when there are no more values to produce. After the generator function returns `std::nullopt`,
* the generator is considered exhausted, and further calls to `Next()` will always return `std::nullopt`.
*/
using GenFunc = std::function<std::optional<Value>()>;
explicit ValueGenerator(GenFunc generator): m_Generator(std::move(generator))
{
}
std::optional<Value> Next() const
{
return m_Generator();
}
private:
GenFunc m_Generator; // The generator function that produces Values.
};
}
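The removed ValueGenerator header documents a generator that yields values until its function returns std::nullopt, after which it stays exhausted. A minimal sketch of that contract, using int instead of Icinga's Value type (names are illustrative):

```cpp
#include <functional>
#include <optional>
#include <utility>

// Produces values on demand; once the wrapped function returns
// std::nullopt the generator is exhausted, as the removed header documents.
class IntGenerator {
public:
	using GenFunc = std::function<std::optional<int>()>;

	explicit IntGenerator(GenFunc generator) : m_Generator(std::move(generator)) {}

	std::optional<int> Next() const { return m_Generator(); }

private:
	GenFunc m_Generator; // The generator function that produces values.
};

// A counting generator that yields 0 .. limit-1 and then stops.
inline IntGenerator MakeCounter(int limit) {
	int next = 0;
	return IntGenerator([next, limit]() mutable -> std::optional<int> {
		if (next >= limit)
			return std::nullopt;
		return next++;
	});
}
```

The mutable lambda keeps the iteration state, so exhaustion is sticky: every call after the first std::nullopt also returns std::nullopt.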


@ -10,3 +10,4 @@ bool icinga::InitializeOnceHelper(const std::function<void()>& func, InitializeP
Loader::AddDeferredInitializer(func, priority);
return true;
}


@ -23,7 +23,6 @@ enum class InitializePriority {
RegisterBuiltinTypes,
RegisterFunctions,
RegisterTypes,
SortTypes,
EvaluateConfigFragments,
Default,
FreezeNamespaces,


@ -1,22 +0,0 @@
/* Icinga 2 | (c) 2025 Icinga GmbH | GPLv2+ */
#pragma once
#include "base/i2-base.hpp"
#include <memory>
#include <boost/smart_ptr/intrusive_ptr.hpp>
#include <boost/version.hpp>
// std::hash is only implemented starting from Boost 1.74. Implement it ourselves for older version to allow using
// boost::intrusive_ptr inside std::unordered_set<> or as the key of std::unordered_map<>.
// https://github.com/boostorg/smart_ptr/commit/5a18ffdc5609a0e64b63e47cb81c4f0847e0c087
#if BOOST_VERSION < 107400
template<class T>
struct std::hash<boost::intrusive_ptr<T>>
{
std::size_t operator()(const boost::intrusive_ptr<T>& ptr) const noexcept
{
return std::hash<T*>{}(ptr.get());
}
};
#endif /* BOOST_VERSION < 107400 */
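The removed header backfills std::hash for boost::intrusive_ptr on Boost &lt; 1.74 by delegating to the raw-pointer hash. The same pattern works for any pointer wrapper; this sketch uses a toy non-owning handle instead of boost::intrusive_ptr so it stands alone:

```cpp
#include <cstddef>
#include <functional>
#include <unordered_set>

// A toy non-owning pointer wrapper standing in for boost::intrusive_ptr.
template<class T>
struct Handle {
	T* p = nullptr;
	T* get() const { return p; }
	bool operator==(const Handle& other) const { return p == other.p; }
};

// Delegate to the raw-pointer hash, exactly as the removed header does
// for boost::intrusive_ptr on Boost < 1.74.
template<class T>
struct std::hash<Handle<T>>
{
	std::size_t operator()(const Handle<T>& h) const noexcept
	{
		return std::hash<T*>{}(h.get());
	}
};
```

With the specialization in place, Handle can be used inside std::unordered_set or as the key of std::unordered_map, which is the stated purpose of the removed file.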


@ -124,63 +124,31 @@ void IoEngine::RunEventLoop()
}
}
AsioEvent::AsioEvent(boost::asio::io_context& io, bool init)
AsioConditionVariable::AsioConditionVariable(boost::asio::io_context& io, bool init)
: m_Timer(io)
{
m_Timer.expires_at(init ? boost::posix_time::neg_infin : boost::posix_time::pos_infin);
}
void AsioEvent::Set()
void AsioConditionVariable::Set()
{
m_Timer.expires_at(boost::posix_time::neg_infin);
}
void AsioEvent::Clear()
void AsioConditionVariable::Clear()
{
m_Timer.expires_at(boost::posix_time::pos_infin);
}
void AsioEvent::Wait(boost::asio::yield_context yc)
void AsioConditionVariable::Wait(boost::asio::yield_context yc)
{
boost::system::error_code ec;
m_Timer.async_wait(yc[ec]);
}
AsioDualEvent::AsioDualEvent(boost::asio::io_context& io, bool init)
: m_IsTrue(io, init), m_IsFalse(io, !init)
{
}
void AsioDualEvent::Set()
{
m_IsTrue.Set();
m_IsFalse.Clear();
}
void AsioDualEvent::Clear()
{
m_IsTrue.Clear();
m_IsFalse.Set();
}
void AsioDualEvent::WaitForSet(boost::asio::yield_context yc)
{
m_IsTrue.Wait(std::move(yc));
}
void AsioDualEvent::WaitForClear(boost::asio::yield_context yc)
{
m_IsFalse.Wait(std::move(yc));
}
/**
* Cancels any pending timeout callback.
*
* Must be called in the strand in which the callback was scheduled!
*/
void Timeout::Cancel()
{
m_Cancelled->store(true);
m_Cancelled.store(true);
boost::system::error_code ec;
m_Timer.cancel(ec);


@ -3,12 +3,10 @@
#ifndef IO_ENGINE_H
#define IO_ENGINE_H
#include "base/atomic.hpp"
#include "base/debug.hpp"
#include "base/exception.hpp"
#include "base/lazy-init.hpp"
#include "base/logger.hpp"
#include "base/shared.hpp"
#include "base/shared-object.hpp"
#include <atomic>
#include <exception>
#include <memory>
@ -16,16 +14,11 @@
#include <utility>
#include <vector>
#include <stdexcept>
#include <boost/context/fixedsize_stack.hpp>
#include <boost/exception/all.hpp>
#include <boost/asio/deadline_timer.hpp>
#include <boost/asio/io_context.hpp>
#include <boost/asio/spawn.hpp>
#if BOOST_VERSION >= 108700
# include <boost/asio/detached.hpp>
#endif // BOOST_VERSION >= 108700
namespace icinga
{
@ -105,32 +98,25 @@ public:
template <typename Handler, typename Function>
static void SpawnCoroutine(Handler& h, Function f) {
auto wrapper = [f = std::move(f)](boost::asio::yield_context yc) {
boost::asio::spawn(h,
[f](boost::asio::yield_context yc) {
try {
f(yc);
} catch (const std::exception& ex) {
Log(LogCritical, "IoEngine") << "Exception in coroutine: " << DiagnosticInformation(ex);
} catch (...) {
try {
Log(LogCritical, "IoEngine", "Exception in coroutine!");
} catch (...) {
}
} catch (const boost::coroutines::detail::forced_unwind &) {
// Required for proper stack unwinding when coroutines are destroyed.
// https://github.com/boostorg/coroutine/issues/39
throw;
} catch (const std::exception& ex) {
Log(LogCritical, "IoEngine", "Exception in coroutine!");
Log(LogDebug, "IoEngine") << "Exception in coroutine: " << DiagnosticInformation(ex);
} catch (...) {
Log(LogCritical, "IoEngine", "Exception in coroutine!");
}
};
#if BOOST_VERSION >= 108700
boost::asio::spawn(h,
std::allocator_arg, boost::context::fixedsize_stack(GetCoroutineStackSize()),
std::move(wrapper),
boost::asio::detached
},
boost::coroutines::attributes(GetCoroutineStackSize()) // Set a pre-defined stack size.
);
#else // BOOST_VERSION >= 108700
boost::asio::spawn(h, std::move(wrapper), boost::coroutines::attributes(GetCoroutineStackSize()));
#endif // BOOST_VERSION >= 108700
}
static inline
@ -158,14 +144,14 @@ class TerminateIoThread : public std::exception
};
/**
* Awaitable flag which doesn't block I/O threads, inspired by threading.Event from Python
* Condition variable which doesn't block I/O threads
*
* @ingroup base
*/
class AsioEvent
class AsioConditionVariable
{
public:
AsioEvent(boost::asio::io_context& io, bool init = false);
AsioConditionVariable(boost::asio::io_context& io, bool init = false);
void Set();
void Clear();
@ -175,103 +161,54 @@ private:
boost::asio::deadline_timer m_Timer;
};
/**
* Like AsioEvent, which only allows waiting for the event to be set, but additionally supports waiting for it to be cleared
*
* @ingroup base
*/
class AsioDualEvent
{
public:
AsioDualEvent(boost::asio::io_context& io, bool init = false);
void Set();
void Clear();
void WaitForSet(boost::asio::yield_context yc);
void WaitForClear(boost::asio::yield_context yc);
private:
AsioEvent m_IsTrue, m_IsFalse;
};
/**
* I/O timeout emulator
*
* This class provides a workaround for Boost.ASIO's lack of built-in timeout support.
* While Boost.ASIO handles asynchronous operations, it does not natively support timeouts for these operations.
* This class uses a boost::asio::deadline_timer to emulate a timeout by scheduling a callback to be triggered
* after a specified duration, effectively adding timeout behavior where none exists.
* The callback is executed within the provided strand, ensuring thread-safety.
*
* The constructor returns immediately after scheduling the timeout callback.
* The callback itself is invoked asynchronously when the timeout occurs.
* This allows the caller to continue execution while the timeout is running in the background.
*
* The class provides a Cancel() method to unschedule any pending callback. If the callback has already been run,
* calling Cancel() has no effect. This method can be used to abort the timeout early if the monitored operation
* completes before the callback has been run. The Timeout destructor also automatically cancels any pending callback.
* A callback is considered pending even if the timeout has already expired,
* but the callback has not been executed yet due to a busy strand.
*
* @ingroup base
*/
class Timeout
class Timeout : public SharedObject
{
public:
using Timer = boost::asio::deadline_timer;
DECLARE_PTR_TYPEDEFS(Timeout);
/**
* Schedules onTimeout to be triggered after timeoutFromNow on strand.
*
* @param strand The strand in which the callback will be executed.
* The caller must also run in this strand, as well as Cancel() and the destructor!
* @param timeoutFromNow The duration after which the timeout callback will be triggered.
* @param onTimeout The callback to invoke when the timeout occurs.
*/
template<class OnTimeout>
Timeout(boost::asio::io_context::strand& strand, const Timer::duration_type& timeoutFromNow, OnTimeout onTimeout)
: m_Timer(strand.context(), timeoutFromNow), m_Cancelled(Shared<Atomic<bool>>::Make(false))
template<class Executor, class TimeoutFromNow, class OnTimeout>
Timeout(boost::asio::io_context& io, Executor& executor, TimeoutFromNow timeoutFromNow, OnTimeout onTimeout)
: m_Timer(io)
{
VERIFY(strand.running_in_this_thread());
Ptr keepAlive (this);
m_Timer.async_wait(boost::asio::bind_executor(
strand, [cancelled = m_Cancelled, onTimeout = std::move(onTimeout)](boost::system::error_code ec) {
if (!ec && !cancelled->load()) {
onTimeout();
}
}
));
m_Cancelled.store(false);
m_Timer.expires_from_now(std::move(timeoutFromNow));
IoEngine::SpawnCoroutine(executor, [this, keepAlive, onTimeout](boost::asio::yield_context yc) {
if (m_Cancelled.load()) {
return;
}
Timeout(const Timeout&) = delete;
Timeout(Timeout&&) = delete;
Timeout& operator=(const Timeout&) = delete;
Timeout& operator=(Timeout&&) = delete;
/**
* Cancels any pending timeout callback.
*
* Must be called in the strand in which the callback was scheduled!
*/
~Timeout()
{
Cancel();
boost::system::error_code ec;
m_Timer.async_wait(yc[ec]);
if (ec) {
return;
}
}
if (m_Cancelled.load()) {
return;
}
auto f (onTimeout);
f(std::move(yc));
});
}
void Cancel();
private:
Timer m_Timer;
/**
* Indicates whether the Timeout has been cancelled.
*
* This must be Shared<> between the lambda in the constructor and Cancel() for the case
* the destructor calls Cancel() while the lambda is already queued in the strand.
* The whole Timeout instance can't be kept alive by the lambda because this would delay the destructor.
*/
Shared<Atomic<bool>>::Ptr m_Cancelled;
boost::asio::deadline_timer m_Timer;
std::atomic<bool> m_Cancelled;
};
}
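The reworked Timeout keeps m_Cancelled in a Shared&lt;Atomic&lt;bool&gt;&gt; so the scheduled callback can safely outlive the Timeout object: the lambda copies the flag, and the destructor only has to flip it. Stripped of Asio strands and timers, the pattern is a shared atomic flag checked by a deferred callback; this sketch substitutes a plain thread for the strand and is not the real implementation:

```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <memory>
#include <thread>

// Runs onTimeout after the given delay unless Cancel() is called first.
// The lambda copies the shared flag, so destroying SimpleTimeout early
// (which calls Cancel()) never leaves the callback with a dangling this.
class SimpleTimeout {
public:
	SimpleTimeout(std::chrono::milliseconds delay, std::function<void()> onTimeout)
		: m_Cancelled(std::make_shared<std::atomic<bool>>(false))
	{
		m_Thread = std::thread([cancelled = m_Cancelled, delay, onTimeout = std::move(onTimeout)]() {
			std::this_thread::sleep_for(delay);
			if (!cancelled->load())
				onTimeout();
		});
	}

	void Cancel() { m_Cancelled->store(true); }

	~SimpleTimeout()
	{
		Cancel();
		m_Thread.join();
	}

private:
	std::shared_ptr<std::atomic<bool>> m_Cancelled;
	std::thread m_Thread;
};
```

Unlike this sketch, the real class additionally relies on the strand to serialize Cancel() with the callback; the shared flag alone only covers the lifetime problem described in the diff's comment.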


@ -2,324 +2,22 @@
#include "base/json.hpp"
#include "base/debug.hpp"
#include "base/dictionary.hpp"
#include "base/namespace.hpp"
#include "base/dictionary.hpp"
#include "base/array.hpp"
#include "base/objectlock.hpp"
#include "base/convert.hpp"
#include "base/utility.hpp"
#include <boost/numeric/conversion/cast.hpp>
#include <bitset>
#include <boost/exception_ptr.hpp>
#include <cstdint>
#include <json.hpp>
#include <stack>
#include <utility>
#include <vector>
using namespace icinga;
JsonEncoder::JsonEncoder(std::string& output, bool prettify)
: JsonEncoder{nlohmann::detail::output_adapter<char>(output), prettify}
{
}
JsonEncoder::JsonEncoder(std::basic_ostream<char>& stream, bool prettify)
: JsonEncoder{nlohmann::detail::output_adapter<char>(stream), prettify}
{
}
JsonEncoder::JsonEncoder(nlohmann::detail::output_adapter_t<char> w, bool prettify)
: m_Pretty(prettify), m_Writer(std::move(w)), m_Flusher{m_Writer}
{
}
/**
* Encodes a single value into JSON and writes it to the underlying output stream.
*
* This method is the main entry point for encoding JSON data. It takes a value of any type that can
* be represented by our @c Value class recursively and encodes it into JSON in an efficient manner.
* If prettifying is enabled, the JSON output will be formatted with indentation and newlines for better
* readability, and the final JSON will also be terminated by a newline character.
*
* @note If the used output adapter performs asynchronous I/O operations (it's derived from @c AsyncJsonWriter),
* please provide a @c boost::asio::yield_context object to allow the encoder to flush the output stream in a
* safe manner. The encoder will try to regularly give the output stream a chance to flush its data when it is
* safe to do so, but for this to work, there must be a valid yield context provided. Otherwise, the encoder
* will not attempt to flush the output stream at all, which may lead to huge memory consumption when encoding
* large JSON objects or arrays.
*
* @param value The value to be JSON serialized.
* @param yc The optional yield context for asynchronous operations. If provided, it allows the encoder
* to flush the output stream safely when it has not acquired any object lock on the parent containers.
*/
void JsonEncoder::Encode(const Value& value, boost::asio::yield_context* yc)
{
switch (value.GetType()) {
case ValueEmpty:
Write("null");
break;
case ValueBoolean:
Write(value.ToBool() ? "true" : "false");
break;
case ValueString:
EncodeNlohmannJson(value.Get<String>());
break;
case ValueNumber:
EncodeNumber(value.Get<double>());
break;
case ValueObject: {
const auto& obj = value.Get<Object::Ptr>();
const auto& type = obj->GetReflectionType();
if (type == Namespace::TypeInstance) {
static constexpr auto extractor = [](const NamespaceValue& v) -> const Value& { return v.Val; };
EncodeObject(static_pointer_cast<Namespace>(obj), extractor, yc);
} else if (type == Dictionary::TypeInstance) {
static constexpr auto extractor = [](const Value& v) -> const Value& { return v; };
EncodeObject(static_pointer_cast<Dictionary>(obj), extractor, yc);
} else if (type == Array::TypeInstance) {
EncodeArray(static_pointer_cast<Array>(obj), yc);
} else if (auto gen(dynamic_pointer_cast<ValueGenerator>(obj)); gen) {
EncodeValueGenerator(gen, yc);
} else {
// Some other non-serializable object type!
EncodeNlohmannJson(obj->ToString());
}
break;
}
default:
VERIFY(!"Invalid variant type.");
}
// If we are at the top level of the JSON object and prettifying is enabled, we need to end
// the JSON with a newline character to ensure that the output is properly formatted.
if (m_Indent == 0 && m_Pretty) {
Write("\n");
}
}
/**
* Encodes an Array object into JSON and writes it to the output stream.
*
* @param array The Array object to be serialized into JSON.
* @param yc The optional yield context for asynchronous operations. If provided, it allows the encoder
* to flush the output stream safely when it has not acquired any object lock.
*/
void JsonEncoder::EncodeArray(const Array::Ptr& array, boost::asio::yield_context* yc)
{
BeginContainer('[');
auto olock = array->LockIfRequired();
if (olock) {
yc = nullptr; // We've acquired an object lock, never allow asynchronous operations.
}
bool isEmpty = true;
for (const auto& item : array) {
WriteSeparatorAndIndentStrIfNeeded(!isEmpty);
isEmpty = false;
Encode(item, yc);
m_Flusher.FlushIfSafe(yc);
}
EndContainer(']', isEmpty);
}
/**
* Encodes a ValueGenerator object into JSON and writes it to the output stream.
*
* This will iterate through the generator, encoding each value it produces until it is exhausted.
*
* @param generator The ValueGenerator object to be serialized into JSON.
* @param yc The optional yield context for asynchronous operations. If provided, it allows the encoder
* to flush the output stream safely when it has not acquired any object lock on the parent containers.
*/
void JsonEncoder::EncodeValueGenerator(const ValueGenerator::Ptr& generator, boost::asio::yield_context* yc)
{
BeginContainer('[');
bool isEmpty = true;
while (auto result = generator->Next()) {
WriteSeparatorAndIndentStrIfNeeded(!isEmpty);
isEmpty = false;
Encode(*result, yc);
m_Flusher.FlushIfSafe(yc);
}
EndContainer(']', isEmpty);
}
/**
* Encodes an Icinga 2 object (Namespace or Dictionary) into JSON and writes it to @c m_Writer.
*
* @tparam Iterable Type of the container (Namespace or Dictionary).
* @tparam ValExtractor Type of the value extractor function used to extract values from the container's iterator.
*
* @param container The container to JSON serialize.
* @param extractor The value extractor function used to extract values from the container's iterator.
* @param yc The optional yield context for asynchronous operations. It will only be set when the encoder
* has not acquired any object lock on the parent containers, allowing safe asynchronous operations.
*/
template<typename Iterable, typename ValExtractor>
void JsonEncoder::EncodeObject(const Iterable& container, const ValExtractor& extractor, boost::asio::yield_context* yc)
{
static_assert(std::is_same_v<Iterable, Namespace::Ptr> || std::is_same_v<Iterable, Dictionary::Ptr>,
"Container must be a Namespace or Dictionary");
BeginContainer('{');
auto olock = container->LockIfRequired();
if (olock) {
yc = nullptr; // We've acquired an object lock, never allow asynchronous operations.
}
bool isEmpty = true;
for (const auto& [key, val] : container) {
WriteSeparatorAndIndentStrIfNeeded(!isEmpty);
isEmpty = false;
EncodeNlohmannJson(key);
Write(m_Pretty ? ": " : ":");
Encode(extractor(val), yc);
m_Flusher.FlushIfSafe(yc);
}
EndContainer('}', isEmpty);
}
/**
* Dumps a nlohmann::json object to the output stream using the serializer.
*
* This function uses the @c nlohmann::detail::serializer to dump the provided @c nlohmann::json
* object to the output stream managed by the @c JsonEncoder. Strings will be properly escaped, and
* if any invalid UTF-8 sequences are encountered, it will replace them with the Unicode replacement
* character (U+FFFD).
*
* @param json The nlohmann::json object to encode.
*/
void JsonEncoder::EncodeNlohmannJson(const nlohmann::json& json) const
{
nlohmann::detail::serializer<nlohmann::json> s(m_Writer, ' ', nlohmann::json::error_handler_t::replace);
s.dump(json, m_Pretty, true, 0, 0);
}
/**
* Encodes a double value into JSON format and writes it to the output stream.
*
* This function checks if the double value can be safely cast to an integer or unsigned integer type
* without loss of precision. If it can, it will serialize it as such; otherwise, it will serialize
* it as a double. This is particularly useful for ensuring that values like 0.0 are serialized as 0,
* which can be important for compatibility with clients like Icinga DB that expect integers in such cases.
*
* @param value The double value to encode as JSON.
*/
void JsonEncoder::EncodeNumber(double value) const
{
try {
if (value < 0) {
if (auto ll(boost::numeric_cast<nlohmann::json::number_integer_t>(value)); ll == value) {
EncodeNlohmannJson(ll);
return;
}
} else if (auto ull(boost::numeric_cast<nlohmann::json::number_unsigned_t>(value)); ull == value) {
EncodeNlohmannJson(ull);
return;
}
// If we reach this point, the value cannot be safely cast to a signed or unsigned integer
// type because it would otherwise lose its precision. If the value was just too large to fit
// into the above types, then boost will throw an exception and end up in the below catch block.
// So, in either case, serialize the number as-is without any casting.
} catch (const boost::bad_numeric_cast&) {}
EncodeNlohmannJson(value);
}
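EncodeNumber above emits a double as a signed or unsigned integer only when the round-trip is lossless, falling back to double formatting otherwise. A standalone sketch of that decision without boost::numeric_cast (the range guards replace the exception the real code catches; the function name is illustrative):

```cpp
#include <limits>
#include <string>

// Formats value as an integer when it can be represented exactly,
// otherwise falls back to double formatting, approximating what
// EncodeNumber does with boost::numeric_cast.
inline std::string EncodeNumberSketch(double value) {
	if (value < 0) {
		if (value >= static_cast<double>(std::numeric_limits<long long>::min())) {
			auto ll = static_cast<long long>(value);
			if (static_cast<double>(ll) == value)
				return std::to_string(ll);
		}
	} else if (value < static_cast<double>(std::numeric_limits<unsigned long long>::max())) {
		auto ull = static_cast<unsigned long long>(value);
		if (static_cast<double>(ull) == value)
			return std::to_string(ull);
	}
	// Too large or not exactly representable: keep the double form.
	return std::to_string(value);
}
```

This is why 0.0 serializes as 0 rather than 0.0, the compatibility point the diff's comment raises for consumers like Icinga DB.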
/**
* Writes a string to the underlying output stream.
*
* This function writes the provided string view directly to the output stream without any additional formatting.
*
* @param sv The string view to write to the output stream.
*/
void JsonEncoder::Write(const std::string_view& sv) const
{
m_Writer->write_characters(sv.data(), sv.size());
}
/**
* Begins a JSON container (object or array) by writing the opening character and adjusting the
* indentation level if pretty-printing is enabled.
*
* @param openChar The character that opens the container (either '{' for objects or '[' for arrays).
*/
void JsonEncoder::BeginContainer(char openChar)
{
if (m_Pretty) {
m_Indent += m_IndentSize;
if (m_IndentStr.size() < m_Indent) {
m_IndentStr.resize(m_IndentStr.size() * 2, ' ');
}
}
m_Writer->write_character(openChar);
}
/**
* Ends a JSON container (object or array) by writing the closing character and adjusting the
* indentation level if pretty-printing is enabled.
*
* @param closeChar The character that closes the container (either '}' for objects or ']' for arrays).
* @param isContainerEmpty Whether the container is empty, used to determine if a newline should be written.
*/
void JsonEncoder::EndContainer(char closeChar, bool isContainerEmpty)
{
if (m_Pretty) {
ASSERT(m_Indent >= m_IndentSize); // Ensure we don't underflow the indent size.
m_Indent -= m_IndentSize;
if (!isContainerEmpty) {
Write("\n");
m_Writer->write_characters(m_IndentStr.c_str(), m_Indent);
}
}
m_Writer->write_character(closeChar);
}
/**
* Writes a separator (comma) and an indentation string if pretty-printing is enabled.
*
* This function is used to separate items in a JSON array or object and to maintain the correct indentation level.
*
* @param emitComma Whether to emit a comma. This is typically true for all but the first item in a container.
*/
void JsonEncoder::WriteSeparatorAndIndentStrIfNeeded(bool emitComma) const
{
if (emitComma) {
Write(",");
}
if (m_Pretty) {
Write("\n");
m_Writer->write_characters(m_IndentStr.c_str(), m_Indent);
}
}
/**
* Wraps any writer of type @c nlohmann::detail::output_adapter_t<char> into a Flusher
*
* @param w The writer to wrap.
*/
JsonEncoder::Flusher::Flusher(const nlohmann::detail::output_adapter_t<char>& w)
: m_AsyncWriter(dynamic_cast<AsyncJsonWriter*>(w.get()))
{
}
/**
* Flushes the underlying writer if it supports that operation and is safe to do so.
*
* Safe flushing means that it only performs the flush operation if the @c JsonEncoder has not acquired
* any object lock so far. This is to ensure that the stream can safely perform asynchronous operations
* without risking undefined behaviour due to coroutines being suspended while the stream is being flushed.
*
* When the @c yc parameter is provided, it indicates that it's safe to perform asynchronous operations,
* and the function will attempt to flush if the writer is an instance of @c AsyncJsonWriter. Otherwise,
* this function does nothing.
*
* @param yc The yield context to use for asynchronous operations.
*/
void JsonEncoder::Flusher::FlushIfSafe(boost::asio::yield_context* yc) const
{
if (yc && m_AsyncWriter) {
m_AsyncWriter->MayFlush(*yc);
}
}
class JsonSax : public nlohmann::json_sax<nlohmann::json>
{
public:
@@ -347,25 +45,165 @@ private:
void FillCurrentTarget(Value value);
};
String icinga::JsonEncode(const Value& value, bool prettify)
{
std::string output;
JsonEncoder encoder(output, prettify);
encoder.Encode(value);
return String(std::move(output));
}
const char l_Null[] = "null";
const char l_False[] = "false";
const char l_True[] = "true";
const char l_Indent[] = "    ";
// https://github.com/nlohmann/json/issues/1512
template<bool prettyPrint>
class JsonEncoder
{
public:
void Null();
void Boolean(bool value);
void NumberFloat(double value);
void Strng(String value);
void StartObject();
void Key(String value);
void EndObject();
void StartArray();
void EndArray();
String GetResult();
private:
std::vector<char> m_Result;
String m_CurrentKey;
std::stack<std::bitset<2>> m_CurrentSubtree;
void AppendChar(char c);
template<class Iterator>
void AppendChars(Iterator begin, Iterator end);
void AppendJson(nlohmann::json json);
void BeforeItem();
void FinishContainer(char terminator);
};
template<bool prettyPrint>
void Encode(JsonEncoder<prettyPrint>& stateMachine, const Value& value);
template<bool prettyPrint>
inline
void EncodeNamespace(JsonEncoder<prettyPrint>& stateMachine, const Namespace::Ptr& ns)
{
stateMachine.StartObject();
ObjectLock olock(ns);
for (const Namespace::Pair& kv : ns) {
stateMachine.Key(Utility::ValidateUTF8(kv.first));
Encode(stateMachine, kv.second.Val);
}
stateMachine.EndObject();
}
/**
* Serializes an Icinga Value into a JSON object and writes it to the given output stream.
*
* @param value The value to be JSON serialized.
* @param os The output stream to write the JSON data to.
* @param prettify Whether to pretty print the serialized JSON.
*/
void icinga::JsonEncode(const Value& value, std::ostream& os, bool prettify)
{
JsonEncoder encoder(os, prettify);
encoder.Encode(value);
}
template<bool prettyPrint>
inline
void EncodeDictionary(JsonEncoder<prettyPrint>& stateMachine, const Dictionary::Ptr& dict)
{
stateMachine.StartObject();
ObjectLock olock(dict);
for (const Dictionary::Pair& kv : dict) {
stateMachine.Key(Utility::ValidateUTF8(kv.first));
Encode(stateMachine, kv.second);
}
stateMachine.EndObject();
}
template<bool prettyPrint>
inline
void EncodeArray(JsonEncoder<prettyPrint>& stateMachine, const Array::Ptr& arr)
{
stateMachine.StartArray();
ObjectLock olock(arr);
for (const Value& value : arr) {
Encode(stateMachine, value);
}
stateMachine.EndArray();
}
template<bool prettyPrint>
void Encode(JsonEncoder<prettyPrint>& stateMachine, const Value& value)
{
switch (value.GetType()) {
case ValueNumber:
stateMachine.NumberFloat(value.Get<double>());
break;
case ValueBoolean:
stateMachine.Boolean(value.ToBool());
break;
case ValueString:
stateMachine.Strng(Utility::ValidateUTF8(value.Get<String>()));
break;
case ValueObject:
{
const Object::Ptr& obj = value.Get<Object::Ptr>();
{
Namespace::Ptr ns = dynamic_pointer_cast<Namespace>(obj);
if (ns) {
EncodeNamespace(stateMachine, ns);
break;
}
}
{
Dictionary::Ptr dict = dynamic_pointer_cast<Dictionary>(obj);
if (dict) {
EncodeDictionary(stateMachine, dict);
break;
}
}
{
Array::Ptr arr = dynamic_pointer_cast<Array>(obj);
if (arr) {
EncodeArray(stateMachine, arr);
break;
}
}
// obj is most likely a function => "Object of type 'Function'"
Encode(stateMachine, obj->ToString());
break;
}
case ValueEmpty:
stateMachine.Null();
break;
default:
VERIFY(!"Invalid variant type.");
}
}
String icinga::JsonEncode(const Value& value, bool pretty_print)
{
if (pretty_print) {
JsonEncoder<true> stateMachine;
Encode(stateMachine, value);
return stateMachine.GetResult() + "\n";
} else {
JsonEncoder<false> stateMachine;
Encode(stateMachine, value);
return stateMachine.GetResult();
}
}
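`JsonEncode()` above resolves the runtime `pretty_print` flag into one of two template instantiations, so every `prettyPrint` check inside `JsonEncoder<>` becomes a compile-time constant rather than a per-token branch. A minimal sketch of that dispatch pattern, using hypothetical `Emitter`/`Dispatch` names (not part of Icinga):

```cpp
#include <string>

// Illustrative only: a bool template parameter stands in for JsonEncoder's
// prettyPrint, so the flag is constant-folded inside each instantiation.
template<bool prettyPrint>
struct Emitter
{
    std::string Emit(const std::string& payload)
    {
        if (prettyPrint) // compile-time constant per instantiation
            return payload + "\n";
        return payload;
    }
};

template<bool prettyPrint>
std::string Run(const std::string& payload)
{
    Emitter<prettyPrint> e;
    return e.Emit(payload);
}

// The runtime flag is consulted exactly once, at the entry point.
std::string Dispatch(const std::string& payload, bool pretty)
{
    return pretty ? Run<true>(payload) : Run<false>(payload);
}
```

Each instantiation carries its own copy of the encoding logic; the cost is some code duplication, the gain is branch-free inner loops.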
Value icinga::JsonDecode(const String& data)
@@ -511,3 +349,177 @@ void JsonSax::FillCurrentTarget(Value value)
}
}
}
template<bool prettyPrint>
inline
void JsonEncoder<prettyPrint>::Null()
{
BeforeItem();
AppendChars((const char*)l_Null, (const char*)l_Null + 4);
}
template<bool prettyPrint>
inline
void JsonEncoder<prettyPrint>::Boolean(bool value)
{
BeforeItem();
if (value) {
AppendChars((const char*)l_True, (const char*)l_True + 4);
} else {
AppendChars((const char*)l_False, (const char*)l_False + 5);
}
}
template<bool prettyPrint>
inline
void JsonEncoder<prettyPrint>::NumberFloat(double value)
{
BeforeItem();
// Make sure 0.0 is serialized as 0, so e.g. Icinga DB can parse it as int.
if (value < 0) {
long long i = value;
if (i == value) {
AppendJson(i);
} else {
AppendJson(value);
}
} else {
unsigned long long i = value;
if (i == value) {
AppendJson(i);
} else {
AppendJson(value);
}
}
}
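`NumberFloat()` branches on the sign because neither `long long` nor `unsigned long long` alone covers both the negative range and the full positive range of integral doubles. The same rule in isolation, with a hypothetical `NumberToJson` helper (a sketch, not the Icinga implementation):

```cpp
#include <sstream>
#include <string>

// Serialize a double as a JSON number, preferring integer form when the
// value is exactly integral so consumers can parse e.g. 0.0 as the int 0.
std::string NumberToJson(double value)
{
    std::ostringstream out;
    if (value < 0) {
        long long i = static_cast<long long>(value);
        if (i == value) {
            out << i; // integral and negative: signed integer form
        } else {
            out << value;
        }
    } else {
        unsigned long long i = static_cast<unsigned long long>(value);
        if (i == value) {
            out << i; // integral and non-negative: unsigned integer form
        } else {
            out << value;
        }
    }
    return out.str();
}
```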
template<bool prettyPrint>
inline
void JsonEncoder<prettyPrint>::Strng(String value)
{
BeforeItem();
AppendJson(std::move(value));
}
template<bool prettyPrint>
inline
void JsonEncoder<prettyPrint>::StartObject()
{
BeforeItem();
AppendChar('{');
m_CurrentSubtree.push(2);
}
template<bool prettyPrint>
inline
void JsonEncoder<prettyPrint>::Key(String value)
{
m_CurrentKey = std::move(value);
}
template<bool prettyPrint>
inline
void JsonEncoder<prettyPrint>::EndObject()
{
FinishContainer('}');
}
template<bool prettyPrint>
inline
void JsonEncoder<prettyPrint>::StartArray()
{
BeforeItem();
AppendChar('[');
m_CurrentSubtree.push(0);
}
template<bool prettyPrint>
inline
void JsonEncoder<prettyPrint>::EndArray()
{
FinishContainer(']');
}
template<bool prettyPrint>
inline
String JsonEncoder<prettyPrint>::GetResult()
{
return String(m_Result.begin(), m_Result.end());
}
template<bool prettyPrint>
inline
void JsonEncoder<prettyPrint>::AppendChar(char c)
{
m_Result.emplace_back(c);
}
template<bool prettyPrint>
template<class Iterator>
inline
void JsonEncoder<prettyPrint>::AppendChars(Iterator begin, Iterator end)
{
m_Result.insert(m_Result.end(), begin, end);
}
template<bool prettyPrint>
inline
void JsonEncoder<prettyPrint>::AppendJson(nlohmann::json json)
{
nlohmann::detail::serializer<nlohmann::json>(nlohmann::detail::output_adapter<char>(m_Result), ' ').dump(std::move(json), prettyPrint, true, 0);
}
template<bool prettyPrint>
inline
void JsonEncoder<prettyPrint>::BeforeItem()
{
if (!m_CurrentSubtree.empty()) {
auto& node (m_CurrentSubtree.top());
if (node[0]) {
AppendChar(',');
} else {
node[0] = true;
}
if (prettyPrint) {
AppendChar('\n');
for (auto i (m_CurrentSubtree.size()); i; --i) {
AppendChars((const char*)l_Indent, (const char*)l_Indent + 4);
}
}
if (node[1]) {
AppendJson(std::move(m_CurrentKey));
AppendChar(':');
if (prettyPrint) {
AppendChar(' ');
}
}
}
}
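Each entry of `m_CurrentSubtree` is a two-bit state for one open container: bit 0 records that an item was already emitted (so the next item needs a comma), and bit 1 marks the container as an object (so the buffered key must be written before the value). A stripped-down sketch of that bookkeeping, using a hypothetical `MiniEncoder` class:

```cpp
#include <bitset>
#include <stack>
#include <string>

// Illustrative mini-encoder: one bitset<2> per open container.
// bit 0 = "previous item emitted, comma due"; bit 1 = "container is an object".
class MiniEncoder
{
public:
    void StartObject() { Before(); m_Out += '{'; m_State.push(std::bitset<2>(2)); }
    void StartArray()  { Before(); m_Out += '['; m_State.push(std::bitset<2>(0)); }
    void Key(std::string k) { m_Key = std::move(k); }
    void Number(long long n) { Before(); m_Out += std::to_string(n); }
    void EndObject() { m_Out += '}'; m_State.pop(); }
    void EndArray()  { m_Out += ']'; m_State.pop(); }
    const std::string& Result() const { return m_Out; }

private:
    void Before()
    {
        if (m_State.empty())
            return;
        auto& node = m_State.top();
        if (node[0])
            m_Out += ','; // a sibling was already written
        else
            node[0] = true;
        if (node[1])
            m_Out += '"' + m_Key + "\":"; // objects flush the pending key first
    }

    std::string m_Out, m_Key;
    std::stack<std::bitset<2>> m_State;
};
```

Pretty-printing and string escaping are omitted; the point is the per-container state stack.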
template<bool prettyPrint>
inline
void JsonEncoder<prettyPrint>::FinishContainer(char terminator)
{
if (prettyPrint && m_CurrentSubtree.top()[0]) {
AppendChar('\n');
for (auto i (m_CurrentSubtree.size() - 1u); i; --i) {
AppendChars((const char*)l_Indent, (const char*)l_Indent + 4);
}
}
AppendChar(terminator);
m_CurrentSubtree.pop();
}

@@ -4,121 +4,14 @@
#define JSON_H
#include "base/i2-base.hpp"
#include "base/array.hpp"
#include "base/generator.hpp"
#include <boost/asio/spawn.hpp>
#include <json.hpp>
namespace icinga
{
/**
* AsyncJsonWriter allows writing JSON data to any output stream asynchronously.
*
* All users of this class must ensure that the underlying output stream will not perform any asynchronous I/O
* operations when the @c write_character() or @c write_characters() methods are called. They shall only perform
* such ops when the @c JsonEncoder allows them to do so by calling the @c MayFlush() method.
*
* @ingroup base
*/
class AsyncJsonWriter : public nlohmann::detail::output_adapter_protocol<char>
{
public:
/**
* It instructs the underlying output stream to write any buffered data to wherever it is supposed to go.
*
* The @c JsonEncoder allows the stream to even perform asynchronous operations in a safe manner by calling
* this method with a dedicated @c boost::asio::yield_context object. The stream must not perform any async
* I/O operations triggered by methods other than this one. Any attempt to do so will result in undefined behavior.
*
* However, this doesn't necessarily enforce the stream to really flush its data immediately, but it's up
* to the implementation to do whatever it needs to. The encoder just gives it a chance to do so by calling
* this method.
*
* @param yield The yield context to use for asynchronous operations.
*/
virtual void MayFlush(boost::asio::yield_context& yield) = 0;
};
class String;
class Value;
/**
* JSON encoder.
*
* This class can be used to encode Icinga Value types into JSON format and write them to an output stream.
* The supported stream types include any @c std::ostream like objects and our own @c AsyncJsonWriter, which
* allows writing JSON data to an Asio stream asynchronously. The nlohmann/json library already provides
* full support for the former stream type, while the latter is fully implemented by our own and satisfies the
* @c nlohmann::detail::output_adapter_protocol<> interface as well.
*
* The JSON encoder generates most of the low level JSON tokens, but it still relies on the already existing
* @c nlohmann::detail::serializer<> class to dump numbers and ASCII validated JSON strings. This means that the
* encoder doesn't perform any kind of JSON validation or escaping on its own, but simply delegates all this kind
* of work to serializer<>.
*
* The generated JSON can be either prettified or compact, depending on your needs. The prettified JSON object
* is indented with 4 spaces and grows linearly with the depth of the object tree.
*
* @ingroup base
*/
class JsonEncoder
{
public:
explicit JsonEncoder(std::string& output, bool prettify = false);
explicit JsonEncoder(std::basic_ostream<char>& stream, bool prettify = false);
explicit JsonEncoder(nlohmann::detail::output_adapter_t<char> w, bool prettify = false);
void Encode(const Value& value, boost::asio::yield_context* yc = nullptr);
private:
void EncodeArray(const Array::Ptr& array, boost::asio::yield_context* yc);
void EncodeValueGenerator(const ValueGenerator::Ptr& generator, boost::asio::yield_context* yc);
template<typename Iterable, typename ValExtractor>
void EncodeObject(const Iterable& container, const ValExtractor& extractor, boost::asio::yield_context* yc);
void EncodeNlohmannJson(const nlohmann::json& json) const;
void EncodeNumber(double value) const;
void Write(const std::string_view& sv) const;
void BeginContainer(char openChar);
void EndContainer(char closeChar, bool isContainerEmpty = false);
void WriteSeparatorAndIndentStrIfNeeded(bool emitComma) const;
// The number of spaces to use for indentation in prettified JSON.
static constexpr uint8_t m_IndentSize = 4;
bool m_Pretty; // Whether to pretty-print the JSON output.
unsigned m_Indent{0}; // The current indentation level for pretty-printing.
/**
* Pre-allocate for 8 levels of indentation for pretty-printing.
*
* This is used to avoid reallocating the string on every indent level change.
* The size of this string is dynamically adjusted if the indentation level exceeds its initial size at some point.
*/
std::string m_IndentStr = std::string(8 * m_IndentSize, ' ');
// The output stream adapter for writing JSON data. This can be either a std::ostream or an Asio stream adapter.
nlohmann::detail::output_adapter_t<char> m_Writer;
/**
* This class wraps any @c nlohmann::detail::output_adapter_t<char> writer and provides a method to flush it as
* required. Only @c AsyncJsonWriter supports the flush operation, however, this class is also safe to use with
* other writer types and the flush method does nothing for them.
*/
class Flusher {
public:
explicit Flusher(const nlohmann::detail::output_adapter_t<char>& w);
void FlushIfSafe(boost::asio::yield_context* yc) const;
private:
AsyncJsonWriter* m_AsyncWriter;
} m_Flusher;
};
String JsonEncode(const Value& value, bool prettify = false);
void JsonEncode(const Value& value, std::ostream& os, bool prettify = false);
String JsonEncode(const Value& value, bool pretty_print = false);
Value JsonDecode(const String& data);
}

@@ -35,3 +35,4 @@ void Loader::AddDeferredInitializer(const std::function<void()>& callback, Initi
initializers->push(DeferredInitializer(callback, priority));
}

@@ -121,10 +121,7 @@ public:
template<typename T>
Log& operator<<(const T& val)
{
if (!m_IsNoOp) {
m_Buffer << val;
}
return *this;
}

@@ -9,7 +9,7 @@ namespace icinga
abstract class Logger : ConfigObject
{
[config, set_virtual] String severity {
[config, virtual] String severity {
default {{{ return "information"; }}}
};
};

@@ -81,3 +81,4 @@ Object::Ptr Namespace::GetPrototype()
return prototype;
}

@@ -1,6 +1,7 @@
/* Icinga 2 | (c) 2012 Icinga GmbH | GPLv2+ */
#include "base/namespace.hpp"
#include "base/objectlock.hpp"
#include "base/debug.hpp"
#include "base/primitivetype.hpp"
#include "base/debuginfo.hpp"
@@ -118,26 +119,7 @@ void Namespace::Remove(const String& field)
void Namespace::Freeze() {
ObjectLock olock(this);
m_Frozen.store(true, std::memory_order_release);
}
bool Namespace::Frozen() const
{
return m_Frozen.load(std::memory_order_acquire);
}
/**
* Returns an unlocked ObjectLock if the namespace is frozen, as no locking is required then.
* Otherwise, returns a locked object lock.
*
* @returns An object lock.
*/
ObjectLock Namespace::LockIfRequired()
{
if (Frozen()) {
return ObjectLock(this, std::defer_lock);
}
return ObjectLock(this);
}
void Namespace::Freeze() {
ObjectLock olock(this);
m_Frozen = true;
}
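One side of this hunk pairs `m_Frozen.store(true, std::memory_order_release)` in `Freeze()` with a `std::memory_order_acquire` load in `Frozen()`: a reader that observes the flag is guaranteed to also see every write made before the freeze, so it may skip locking entirely. A self-contained sketch of the pattern, assuming an illustrative `FreezableCounter` class (not Icinga code):

```cpp
#include <atomic>
#include <mutex>

// Writers mutate under the mutex; Freeze() publishes the flag with release
// ordering, after which readers that observe it may read without the mutex.
class FreezableCounter
{
public:
    void Increment()
    {
        std::lock_guard<std::mutex> lock(m_Mutex);
        ++m_Value;
    }

    void Freeze()
    {
        std::lock_guard<std::mutex> lock(m_Mutex);
        m_Frozen.store(true, std::memory_order_release);
    }

    int Get() const
    {
        if (m_Frozen.load(std::memory_order_acquire))
            return m_Value; // frozen: all prior writes are visible, no lock needed
        std::lock_guard<std::mutex> lock(m_Mutex);
        return m_Value;
    }

private:
    mutable std::mutex m_Mutex;
    std::atomic<bool> m_Frozen{false};
    int m_Value = 0;
};
```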
std::shared_lock<std::shared_timed_mutex> Namespace::ReadLockUnlessFrozen() const
@@ -161,8 +143,13 @@ Value Namespace::GetFieldByName(const String& field, bool, const DebugInfo& debu
return GetPrototypeField(const_cast<Namespace *>(this), field, false, debugInfo); /* Ignore indexer not found errors similar to the Dictionary class. */
}
void Namespace::SetFieldByName(const String& field, const Value& value, const DebugInfo& debugInfo)
void Namespace::SetFieldByName(const String& field, const Value& value, bool overrideFrozen, const DebugInfo& debugInfo)
{
// The override frozen parameter is mandated by the interface but ignored here. If the namespace is frozen, this
// disables locking for read operations, so it must not be modified again to ensure the consistency of the internal
// data structures.
(void) overrideFrozen;
Set(field, value, false, debugInfo);
}
@@ -178,14 +165,14 @@ bool Namespace::GetOwnField(const String& field, Value *result) const
Namespace::Iterator Namespace::Begin()
{
ASSERT(Frozen() || OwnsLock());
ASSERT(OwnsLock());
return m_Data.begin();
}
Namespace::Iterator Namespace::End()
{
ASSERT(Frozen() || OwnsLock());
ASSERT(OwnsLock());
return m_Data.end();
}
@@ -199,3 +186,4 @@ Namespace::Iterator icinga::end(const Namespace::Ptr& x)
{
return x->End();
}

@@ -5,7 +5,6 @@
#include "base/i2-base.hpp"
#include "base/object.hpp"
#include "base/objectlock.hpp"
#include "base/shared-object.hpp"
#include "base/value.hpp"
#include "base/debuginfo.hpp"
@@ -74,8 +73,6 @@ public:
bool Contains(const String& field) const;
void Remove(const String& field);
void Freeze();
bool Frozen() const;
ObjectLock LockIfRequired();
Iterator Begin();
Iterator End();
@@ -83,7 +80,7 @@ public:
size_t GetLength() const;
Value GetFieldByName(const String& field, bool sandboxed, const DebugInfo& debugInfo) const override;
void SetFieldByName(const String& field, const Value& value, const DebugInfo& debugInfo) override;
void SetFieldByName(const String& field, const Value& value, bool overrideFrozen, const DebugInfo& debugInfo) override;
bool HasOwnField(const String& field) const override;
bool GetOwnField(const String& field, Value *result) const override;

@@ -23,10 +23,12 @@ void NetworkStream::Close()
* @param count The number of bytes to read from the queue.
* @returns The number of bytes actually read.
*/
size_t NetworkStream::Read(void *buffer, size_t count)
size_t NetworkStream::Read(void *buffer, size_t count, bool allow_partial)
{
size_t rc;
ASSERT(allow_partial);
if (m_Eof)
BOOST_THROW_EXCEPTION(std::invalid_argument("Tried to read from closed socket."));

@@ -22,7 +22,7 @@ public:
NetworkStream(Socket::Ptr socket);
size_t Read(void *buffer, size_t count) override;
size_t Read(void *buffer, size_t count, bool allow_partial = false) override;
void Write(const void *buffer, size_t count) override;
void Close() override;

@@ -22,3 +22,4 @@ Object::Ptr Number::GetPrototype()
return prototype;
}

@@ -6,3 +6,4 @@
using namespace icinga;
REGISTER_BUILTIN_TYPE(Number, Number::GetPrototype());

@@ -42,3 +42,4 @@ Object::Ptr Object::GetPrototype()
return prototype;
}

@@ -125,7 +125,7 @@ Value Object::GetFieldByName(const String& field, bool sandboxed, const DebugInf
return GetField(fid);
}
void Object::SetFieldByName(const String& field, const Value& value, const DebugInfo& debugInfo)
void Object::SetFieldByName(const String& field, const Value& value, bool overrideFrozen, const DebugInfo& debugInfo)
{
Type::Ptr type = GetReflectionType();

@@ -5,7 +5,6 @@
#include "base/i2-base.hpp"
#include "base/debug.hpp"
#include "base/intrusive-ptr.hpp"
#include <boost/smart_ptr/intrusive_ptr.hpp>
#include <atomic>
#include <cstddef>
@@ -28,7 +27,7 @@ class String;
struct DebugInfo;
class ValidationUtils;
extern const Value Empty;
extern Value Empty;
#define DECLARE_PTR_TYPEDEFS(klass) \
typedef intrusive_ptr<klass> Ptr
@@ -171,7 +170,7 @@ public:
virtual void SetField(int id, const Value& value, bool suppress_events = false, const Value& cookie = Empty);
virtual Value GetField(int id) const;
virtual Value GetFieldByName(const String& field, bool sandboxed, const DebugInfo& debugInfo) const;
virtual void SetFieldByName(const String& field, const Value& value, const DebugInfo& debugInfo);
virtual void SetFieldByName(const String& field, const Value& value, bool overrideFrozen, const DebugInfo& debugInfo);
virtual bool HasOwnField(const String& field) const;
virtual bool GetOwnField(const String& field, Value *result) const;
virtual void ValidateField(int id, const Lazy<Value>& lvalue, const ValidationUtils& utils);

@@ -18,18 +18,6 @@ ObjectLock::ObjectLock(const Object::Ptr& object)
{
}
/**
* Constructs a lock for the given object without locking it immediately.
*
* The user must call Lock() explicitly when needed.
*
* @param object The object to lock.
*/
ObjectLock::ObjectLock(const Object::Ptr& object, std::defer_lock_t)
: m_Object(object.get()), m_Locked(false)
{
}
ObjectLock::ObjectLock(const Object *object)
: m_Object(object), m_Locked(false)
{
@@ -65,15 +53,3 @@ void ObjectLock::Unlock()
m_Locked = false;
}
}
/**
* Returns true if the object is locked, false otherwise.
*
* This operator allows using ObjectLock in boolean contexts.
*
* @returns true if the object is locked, false otherwise.
*/
ObjectLock::operator bool() const
{
return m_Locked;
}

@@ -15,7 +15,6 @@ struct ObjectLock
{
public:
ObjectLock(const Object::Ptr& object);
ObjectLock(const Object::Ptr& object, std::defer_lock_t);
ObjectLock(const Object *object);
ObjectLock(const ObjectLock&) = delete;
@@ -26,8 +25,6 @@ public:
void Lock();
void Unlock();
operator bool() const;
private:
const Object *m_Object{nullptr};
bool m_Locked{false};

@@ -54,3 +54,4 @@ ObjectFactory ObjectType::GetFactory() const
{
return DefaultObjectFactory<Object>;
}

@@ -259,10 +259,6 @@ PerfdataValue::Ptr PerfdataValue::Parse(const String& perfdata)
double value = Convert::ToDouble(tokens[0].SubStr(0, pos));
if (!std::isfinite(value)) {
BOOST_THROW_EXCEPTION(std::invalid_argument("Invalid performance data value: " + perfdata + " is outside of any reasonable range"));
}
bool counter = false;
String unit;
Value warn, crit, min, max;
@@ -270,11 +266,6 @@ PerfdataValue::Ptr PerfdataValue::Parse(const String& perfdata)
if (pos != String::NPos)
unit = tokens[0].SubStr(pos, String::NPos);
// UoM.Out is an empty string for "c". So set counter before parsing.
if (unit == "c") {
counter = true;
}
double base;
{
@@ -300,6 +291,10 @@ PerfdataValue::Ptr PerfdataValue::Parse(const String& perfdata)
}
}
if (unit == "c") {
counter = true;
}
warn = ParseWarnCritMinMaxToken(tokens, 1, "warning");
crit = ParseWarnCritMinMaxToken(tokens, 2, "critical");
min = ParseWarnCritMinMaxToken(tokens, 3, "minimum");
@@ -368,27 +363,20 @@ String PerfdataValue::Format() const
result << unit;
std::string interm(";");
if (!GetWarn().IsEmpty()) {
result << interm << Convert::ToString(GetWarn());
interm.clear();
}
result << ";" << Convert::ToString(GetWarn());
interm += ";";
if (!GetCrit().IsEmpty()) {
result << interm << Convert::ToString(GetCrit());
interm.clear();
}
result << ";" << Convert::ToString(GetCrit());
interm += ";";
if (!GetMin().IsEmpty()) {
result << interm << Convert::ToString(GetMin());
interm.clear();
}
result << ";" << Convert::ToString(GetMin());
interm += ";";
if (!GetMax().IsEmpty()) {
result << interm << Convert::ToString(GetMax());
result << ";" << Convert::ToString(GetMax());
}
}
}
}
return result.str();
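One side of the hunk above accumulates pending separators in `interm`: every absent optional field appends a `;` that is only flushed once a later field is present, so missing middle thresholds keep their positions while missing trailing thresholds emit nothing. The same trick in isolation, with a hypothetical `FormatThresholds` helper standing in for the warn/crit/min/max fields:

```cpp
#include <optional>
#include <sstream>
#include <string>
#include <vector>

// Join optional trailing fields ("label=value[;warn[;crit[;min[;max]]]]"):
// absent fields buffer a ';' in interm instead of printing it immediately.
std::string FormatThresholds(const std::vector<std::optional<std::string>>& fields)
{
    std::ostringstream result;
    std::string interm(";");
    for (const auto& f : fields) {
        if (f) {
            result << interm << *f; // flush buffered separators, then the value
            interm.clear();
        }
        interm += ";"; // reserve this field's slot for any later field
    }
    return result.str();
}
```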

@@ -61,3 +61,4 @@ ObjectFactory PrimitiveType::GetFactory() const
{
return m_Factory;
}

@@ -19,7 +19,6 @@
#ifndef _WIN32
# include <execvpe.h>
# include <poll.h>
# include <signal.h>
# include <string.h>
# ifndef __APPLE__
@@ -171,17 +170,6 @@ static Value ProcessSpawnImpl(struct msghdr *msgh, const Dictionary::Ptr& reques
}
#endif /* HAVE_NICE */
{
struct sigaction sa;
memset(&sa, 0, sizeof(sa));
sa.sa_handler = SIG_DFL;
for (int sig = 1; sig <= 31; ++sig) {
(void)sigaction(sig, &sa, nullptr);
}
}
sigset_t mask;
sigemptyset(&mask);
sigprocmask(SIG_SETMASK, &mask, nullptr);
@@ -643,7 +631,8 @@ void Process::IOThreadProc(int tid)
#endif /* _WIN32 */
int i = 1;
for (auto& kv : l_Processes[tid]) {
typedef std::pair<ProcessHandle, Process::Ptr> kv_pair;
for (const kv_pair& kv : l_Processes[tid]) {
const Process::Ptr& process = kv.second;
#ifdef _WIN32
handles[i] = kv.first;
@@ -1086,10 +1075,8 @@ bool Process::DoEvents()
Log(LogWarning, "Process")
<< "Couldn't kill the process group " << m_PID << " (" << PrettyPrintArguments(m_Arguments)
<< "): [errno " << error << "] " << strerror(error);
if (error != ESRCH) {
could_not_kill = true;
}
}
#endif /* _WIN32 */
is_timeout = true;

@@ -5,7 +5,6 @@
#include "base/i2-base.hpp"
#include "base/dictionary.hpp"
#include <cstdint>
#include <iosfwd>
#include <deque>
#include <vector>
@@ -26,7 +25,7 @@ struct ProcessResult
pid_t PID;
double ExecutionStart;
double ExecutionEnd;
int_fast64_t ExitStatus;
long ExitStatus;
String Output;
};

@@ -24,7 +24,7 @@ Value Reference::Get() const
void Reference::Set(const Value& value)
{
m_Parent->SetFieldByName(m_Index, value, DebugInfo());
m_Parent->SetFieldByName(m_Index, value, false, DebugInfo());
}
Object::Ptr Reference::GetParent() const

Some files were not shown because too many files have changed in this diff.