Service downtimes scheduled via the `all_services` flag are already
removed automatically when their parent downtimes are removed (introduced
with #8913). Now, this commit makes it possible to perform the same action
for all child downtimes, i.e. not only for those of service objects, but
for those of all child objects represented in the dependency tree.
This commit removes a distinction in how dependency objects are checked for
cycles in the resulting graph: previously, the check differed depending on
whether the objects were part of the configuration initially loaded at process
startup or part of a runtime update.
The DependencyCycleChecker helper class is extended with a mechanism that
allows additional dependencies to be considered during the cycle search. This
allows using it to check for cycles before actually registering the
dependencies with the checkables.
The aforementioned case distinction between the initial and the runtime-update
config is removed by using the newly added BeforeOnAllConfigLoaded signal to
perform the cycle check once per batch of dependencies inside
ConfigItem::CommitNewItems() in both cases. During the initial config loading,
there can be multiple batches of dependencies, as objects from apply rules are
created separately. Parts of the dependency graph might therefore be visited
multiple times now; however, this is kept to a minimum, as only those parts of
the graph that are reachable from the newly added dependencies are searched.
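
A minimal, self-contained sketch of that mechanism, assuming a plain
string-keyed graph (the names CycleChecker, AddExtraEdge and AssertNoCycle are
illustrative, not the actual Icinga 2 API):

    #include <map>
    #include <set>
    #include <stdexcept>
    #include <string>
    #include <vector>

    using Graph = std::map<std::string, std::vector<std::string>>;

    class CycleChecker {
    public:
        // Consider an edge during the search without registering it in the graph.
        void AddExtraEdge(const std::string& from, const std::string& to) {
            m_Extra[from].push_back(to);
        }

        // Throws if a cycle is reachable from `start` via registered or extra edges.
        void AssertNoCycle(const Graph& graph, const std::string& start) const {
            std::set<std::string> onStack, done;
            Visit(graph, start, onStack, done);
        }

    private:
        void Visit(const Graph& graph, const std::string& node,
                   std::set<std::string>& onStack, std::set<std::string>& done) const {
            if (done.count(node))
                return; // already fully explored, cannot be part of a new cycle
            if (!onStack.insert(node).second)
                throw std::runtime_error("dependency cycle involving '" + node + "'");

            for (const Graph* edges : {&graph, &m_Extra}) { // registered + pending edges
                auto it = edges->find(node);
                if (it != edges->end())
                    for (const auto& next : it->second)
                        Visit(graph, next, onStack, done);
            }

            onStack.erase(node);
            done.insert(node);
        }

        Graph m_Extra;
    };

Before a batch is committed, its pending dependencies would be added via
AddExtraEdge() and AssertNoCycle() run from each of them; only if no exception
is thrown are the dependencies actually registered with the checkables.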
This commit groups several structs and static functions inside
dependency.cpp into a new DependencyCycleChecker helper class. In the process,
the implementation was changed slightly; the behavior should be unchanged
except for a more user-friendly error message in the exception.
This fixes an issue where recovery notifications get lost if they happen
outside of a notification time period.
Not all calls to `Checkable::NotificationReasonApplies()` need
`GetStateBeforeSuppression()` to be checked. In fact, for one caller,
`FireSuppressedNotifications()` in
`lib/notification/notificationcomponent.cpp`, the state before suppression may
not even be initialized properly, so that the default value of OK is used,
which can lead to incorrect return values. Note the difference between
suppressions happening on the `Checkable` object level and on the
`Notification` object level. Only the former sets the state before suppression
in the `Checkable` object, but so far, the latter also incorrectly used that
value.
This commit moves the check of `GetStateBeforeSuppression()` from
`Checkable::NotificationReasonApplies()` to the one place where it's actually
relevant: `Checkable::FireSuppressedNotifications()`. This made the existing
call to `NotificationReasonApplies()` unnecessary as it would always return
true: the `type` argument is computed based on the current check result, so
there's no need to check it against the current check result.
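
A toy model of the changed control flow (types and names heavily simplified,
not the real Icinga 2 interfaces), showing that the comparison against the
state recorded before suppression now lives only in the caller that handles
`Checkable`-level suppressions:

    #include <iostream>

    enum class State { OK, Warning, Critical };

    struct Checkable {
        State currentState = State::OK;
        // Only set when a suppression happens on the Checkable level; for a
        // Notification-level suppression it keeps its default value.
        State stateBeforeSuppression = State::OK;

        void FireSuppressedNotifications() {
            // The comparison happens here, in the one caller where a
            // Checkable-level suppression guarantees the value was set.
            if (currentState != stateBeforeSuppression)
                std::cout << "sending suppressed notification\n";
            else
                std::cout << "state unchanged across the suppression, drop it\n";
        }
    };

    int main() {
        Checkable c;
        c.stateBeforeSuppression = State::Critical; // suppressed while critical
        c.currentState = State::OK;                 // recovered in the meantime
        c.FireSuppressedNotifications();            // -> notification is sent
    }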
    error: no matching function for call to 'intrusive_ptr_release'
    ...
    candidate function not viable: cannot convert argument of incomplete type 'icinga::Notification *' to 'Object *' for 1st argument
    void intrusive_ptr_release(Object *object);
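
This error pattern typically appears when a `boost::intrusive_ptr<T>` is
destroyed in a translation unit that only sees a forward declaration of T: the
compiler cannot tell that T derives from Object, so the
`intrusive_ptr_release(Object*)` overload is not viable. A generic
reproduction (a hypothetical stand-alone file, not the actual Icinga 2 source;
it intentionally does not compile):

    #include <boost/intrusive_ptr.hpp>

    class Object {
    public:
        virtual ~Object() = default;
    };

    void intrusive_ptr_add_ref(Object *object);
    void intrusive_ptr_release(Object *object);

    class Notification; // forward declaration only: incomplete type

    void f(boost::intrusive_ptr<Notification> p)
    {
        // Destroying p here requires calling intrusive_ptr_release() on a
        // Notification*. With only the forward declaration above, the compiler
        // cannot know that Notification derives from Object, so the overload
        // taking Object* is "not viable" and the build fails as quoted above.
    }

The fix is to include the header that provides the full class definition, so
the derived-to-base pointer conversion becomes possible.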
Currently, when processing a `CheckResult`, Icinga 2 will first trigger an
`OnNextCheckChanged` event, which is sent to all connected endpoints.
Then, when `Checkable::ProcessCheckResult()` returns, an `OnCheckResult`
event is fired, which is of course also sent to all connected endpoints.
Next, the other endpoints receive the `event::SetNextCheck` cluster
event followed by `event::CheckResult` and invoke
`Checkable#SetNextCheck()` and `Checkable#CheckResult()` with the newly
received check. So they also try to recalculate the next check
themselves and invalidate the previously received next check timestamp
from the source endpoint. Since each endpoint randomly initialises its
own scheduling offset, the recalculated next check will always differ by
a split second/millisecond on each of them. As a consequence, two Icinga
DB HA instances will generate two different checksums for the same state,
causing the state histories to be fully resynchronised after a
takeover/Icinga 2 reload.
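
An illustrative sketch of why the recalculated timestamps diverge (the
scheduling formula below is made up for the example and is not the actual
Icinga 2 code; the point is only the endpoint-local random offset):

    #include <cmath>
    #include <cstdio>

    // Hypothetical scheduling: align the next check to an endpoint-local grid
    // shifted by that endpoint's randomly initialised offset.
    double NextCheck(double now, double interval, double offset)
    {
        return now + interval - std::fmod(now - offset, interval);
    }

    int main()
    {
        double now = 1700000000.0, interval = 60.0;
        // Two endpoints recompute the next check for the same check result,
        // each with its own random offset: the results differ slightly.
        std::printf("endpoint A: %.3f\n", NextCheck(now, interval, 0.123));
        std::printf("endpoint B: %.3f\n", NextCheck(now, interval, 0.456));
    }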
A day specification like "monday -1" refers to the last Monday of the month.
However, there was an off-by-one error if the first day of the next month was
the same day of the week, i.e. a Monday in this example.
LegacyTimePeriod::FindNthWeekday() picks a day to start the search for the day
in question. When given a negative n to search for the n-th last day, it
wrongly used the first day of the following month as the start and counted it
as if it were within the current month. This resulted in a 1/7 chance that the
result was one week too late.
This is fixed by using the last day of the current month instead.
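
A self-contained illustration of the corrected approach (not the actual
LegacyTimePeriod::FindNthWeekday() code): start the backwards search on the
last day of the current month.

    #include <cstdio>
    #include <ctime>

    // Day of month of the last given weekday (0 = Sunday, 1 = Monday, ...) in
    // the given month (1-12): start from the last day of that month and walk
    // backwards, never from the first day of the following month.
    int LastWeekday(int year, int month, int wday)
    {
        std::tm t{};
        t.tm_year = year - 1900;
        t.tm_mon = month;   // 0-based index of the *following* month ...
        t.tm_mday = 0;      // ... and day 0 = last day of the wanted month
        t.tm_hour = 12;
        t.tm_isdst = -1;
        std::mktime(&t);    // normalizes the date and fills in t.tm_wday

        // step back to the most recent occurrence of the wanted weekday
        t.tm_mday -= (t.tm_wday - wday + 7) % 7;
        return t.tm_mday;
    }

    int main()
    {
        // Nov 1st 2021 is a Monday: a search starting there would report it as
        // the "last Monday of October", one week too late. Starting from
        // Oct 31st correctly yields the 25th.
        std::printf("last Monday of Oct 2021: %d\n", LastWeekday(2021, 10, 1));
    }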
MacroProcessor checked just for the CustomVarObject base class, but
IcingaApplication provided the vars attribute by itself, so it also had to
resolve custom variable macros by itself. That logic diverged from
MacroProcessor, so that macros inside IcingaApplication CVs weren't resolved.
Until now.
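
A toy sketch of the unified lookup (simplified types, not the real
MacroProcessor; MacroSource, Host and App are illustrative stand-ins): every
macro source exposes its custom variables through the same accessor, so there
is a single resolver and no second, diverging implementation.

    #include <iostream>
    #include <map>
    #include <optional>
    #include <string>

    // Shared interface: one way to get at the custom variables, regardless of
    // which concrete object provides them.
    struct MacroSource {
        virtual ~MacroSource() = default;
        virtual const std::map<std::string, std::string>& GetVars() const = 0;
    };

    struct Host : MacroSource { // stand-in for CustomVarObject descendants
        std::map<std::string, std::string> vars{{"dc", "berlin"}};
        const std::map<std::string, std::string>& GetVars() const override { return vars; }
    };

    struct App : MacroSource { // stand-in for IcingaApplication
        std::map<std::string, std::string> vars{{"env", "prod"}};
        const std::map<std::string, std::string>& GetVars() const override { return vars; }
    };

    // One resolver for everything.
    std::optional<std::string> ResolveVar(const MacroSource& s, const std::string& name)
    {
        const auto& vars = s.GetVars();
        auto it = vars.find(name);
        if (it == vars.end())
            return std::nullopt;
        return it->second;
    }

    int main()
    {
        Host h;
        App a;
        std::cout << *ResolveVar(h, "dc") << ' ' << *ResolveVar(a, "env") << '\n';
    }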
The last state change could be a long time ago. If it is longer ago than the
new downtime's duration, the downtime expires immediately:

    trigger time + duration < now
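
A short worked example of that condition, with illustrative numbers:

    #include <cassert>

    int main()
    {
        // The state changed two hours ago, so a flexible one-hour downtime
        // triggered at that state change is already past its end.
        double now = 10000.0;
        double triggerTime = now - 7200; // last state change: two hours ago
        double duration = 3600;          // downtime duration: one hour
        // trigger time + duration < now -> the downtime has already ended
        assert(triggerTime + duration < now);
    }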