Catch boost::context::detail::forced_unwind as well, not just boost::coroutines::detail::forced_unwind.
This is needed because as of Boost 1.87, boost::asio::spawn() uses Fiber, not Coroutine v1.
https://github.com/boostorg/asio/commit/df973a85ed69f021
This is safe because every actual exception is expected to inherit from std::exception. forced_unwind and its Fiber equivalent deliberately don't, so that `catch (const std::exception&)` misses them, and only them.
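A minimal sketch of the resulting pattern, assuming the usual Boost header locations (the guard function and the logging are illustrative, not the actual Icinga 2 code):

    #include <exception>
    #include <functional>
    #include <iostream>

    #include <boost/context/detail/exception.hpp>  // boost::context::detail::forced_unwind (Fiber)
    #include <boost/coroutine/exceptions.hpp>      // boost::coroutines::detail::forced_unwind (Coroutine v1)

    void RunGuarded(const std::function<void()>& f)
    {
        try {
            f();
        } catch (const boost::coroutines::detail::forced_unwind&) {
            throw; // let Coroutine v1 unwind its stack
        } catch (const boost::context::detail::forced_unwind&) {
            throw; // let Fiber, now used by boost::asio::spawn(), unwind as well
        } catch (const std::exception& ex) {
            // Every actual exception inherits from std::exception,
            // so only the two forced_unwind types bypass this handler.
            std::cerr << "Exception in coroutine: " << ex.what() << "\n";
        }
    }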
It's not used anyway. Also, the callback shall run completely at once, which ensures that it won't (continue to) run once another coroutine on the strand calls Timeout#Cancel().
The Redis ACL system was introduced with Redis 6.0 and provides users with fine-grained permissions. This change allows Icinga 2 to use the Icinga DB feature against a Redis server with such an ACL user.
This was reflected in the documentation, next to the already implemented, but previously undocumented, Redis database setting.
Closes #9536.
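For illustration, an Icinga DB feature configuration using such an ACL user could look like this (all values are placeholders; attribute names as documented):

    object IcingaDB "icingadb" {
      host = "192.0.2.10"
      port = 6380
      username = "icinga"  // requires a matching ACL user on Redis >= 6.0
      password = "secret"
      db_index = 2         // the previously undocumented Redis database selection
    }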
Each configuration field of an IcingaDB Object was marked with
no_user_modify as modifications via the API would not result in an
actual change. While the Object would be updated, the internal Redis
connection would not be restarted, resulting in unexpected behavior.
The missing db_index was added to the documentation.
The table sla_history_downtime requires a downtime_end.
The Go daemon takes the cancel_time if has_been_cancelled is 1.
So we must supply a cancel_time wherever has_been_cancelled is 1.
Otherwise the Go daemon can't process some entries.
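A minimal self-contained sketch of that rule (the function and the map representation are illustrative, not the actual serialization code):

    #include <map>
    #include <string>

    // The Go daemon uses cancel_time as the effective downtime_end
    // whenever has_been_cancelled is 1, so it must always be set then.
    std::map<std::string, std::string> MakeDowntimeEntry(long endTimeMs, bool cancelled, long cancelTimeMs)
    {
        std::map<std::string, std::string> entry;
        entry["downtime_end"] = std::to_string(endTimeMs); // required by sla_history_downtime
        entry["has_been_cancelled"] = cancelled ? "1" : "0";

        if (cancelled) {
            // Without this, the Go daemon can't process the entry.
            entry["cancel_time"] = std::to_string(cancelTimeMs);
        }

        return entry;
    }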
and, in case of null, fall back to Checkable#check_command.timeout, just like IcingaDB#SerializeState() does. Otherwise the Go daemon crashes, as it expects a number.
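A minimal sketch of the fallback, with the two timeouts passed in explicitly (the real code reads them from the Checkable and its CheckCommand):

    #include <optional>

    // check_timeout may be unset (null) on the checkable itself; falling back
    // to the check command's timeout guarantees a number is always serialized.
    double EffectiveCheckTimeout(std::optional<double> checkTimeout, double checkCommandTimeout)
    {
        return checkTimeout.value_or(checkCommandTimeout);
    }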
At the moment, the Icinga DB feature will use that value as-is and
serialize it to JSON, resulting in a crash in Icinga DB down the road
because it expects a boolean.
This commit removes EmbeddedNamespaceValue and ConstEmbeddedNamespaceValue and
reduces NamespaceValue down to a simple struct without inheritance or member
functions. The code from these classes is inlined into the Namespace class. The
class hierarchy determining whether a value is const is moved to an attribute
of NamespaceValue.
This is done in preparation for changes to the locking in the Namespace class.
Currently, it relies on a recursive mutex. In the future, a shared mutex
(read/write lock) should be used instead, which does not allow recursive locking (without failing or risking deadlocks on lock upgrades). With this change, everything requiring a lock for one operation happens within a single function, so recursive locking is no longer needed.
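A sketch of the intended shape, assuming std::shared_mutex (all names approximate): every public operation acquires the lock exactly once and never re-enters it.

    #include <map>
    #include <mutex>
    #include <shared_mutex>
    #include <stdexcept>
    #include <string>

    using Value = std::string; // stand-in for Icinga 2's Value type

    // Constness is now a plain attribute instead of a class hierarchy:
    struct NamespaceValue
    {
        Value Val;
        bool Const = false;
    };

    class Namespace
    {
    public:
        Value Get(const std::string& name) const
        {
            std::shared_lock lock(m_Mutex); // one read lock, taken exactly once
            return m_Data.at(name).Val;
        }

        void Set(const std::string& name, const Value& value)
        {
            std::unique_lock lock(m_Mutex); // one write lock, taken exactly once
            auto& nv = m_Data[name];

            if (nv.Const)
                throw std::runtime_error("constant must not be modified");

            nv.Val = value;
        }

    private:
        mutable std::shared_mutex m_Mutex;
        std::map<std::string, NamespaceValue> m_Data;
    };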
Before (time: vertical, stack: horizontal):

* Checkable::ExecuteCheck
  * Checkable::UpdateNextCheck
    * IcingaDB::NextCheckChangedHandler
      * HSET icinga:host:state
      * HSET icinga:checksum:host:state
      * ZADD icinga:nextupdate:host
  * RandomCheckTask::ScriptFunc
    * Checkable::ProcessCheckResult
      * Checkable::UpdateNextCheck
        * IcingaDB::NextCheckChangedHandler
          * HSET icinga:host:state
          * HSET icinga:checksum:host:state
          * ZADD icinga:nextupdate:host
      * IcingaDB::NewCheckResultHandler
        * HSET icinga:host:state
        * HSET icinga:checksum:host:state
        * ZADD icinga:nextupdate:host
      * IcingaDB::StateChangeHandler
        * XADD icinga:runtime:state
      * IcingaDB::ForwardHistoryEntries
        * XADD icinga:history:stream:state

After:

* Checkable::ExecuteCheck
  * Checkable::UpdateNextCheck
  * RandomCheckTask::ScriptFunc
    * Checkable::ProcessCheckResult
      * Checkable::UpdateNextCheck
      * IcingaDB::NewCheckResultHandler
        * HSET icinga:host:state
        * HSET icinga:checksum:host:state
        * ZADD icinga:nextupdate:host
      * IcingaDB::StateChangeHandler
        * XADD icinga:runtime:state
      * IcingaDB::ForwardHistoryEntries
        * XADD icinga:history:stream:state
The first state + nextupdate (for overdue) update comes from next_check being set to now + interval immediately before the actual check is performed (so that it isn't triggered twice). This update is not only unimportant to the end user, but even inappropriate: the end user SHALL see next_check being e.g. -4s, not 5m, as the check is running at the moment.
The second one is just redundant as IcingaDB::NewCheckResultHandler (the third
one) is called anyway and will update state + nextupdate as well.
If some kind of query is not supposed to be processed at the moment, there is little point in including it in the backlog check. During a full dump, state updates are suppressed (i.e. delayed), so a very long dump would have resulted in a false Redis backlog warning.
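A minimal self-contained sketch of the adjusted check (all names hypothetical): queues whose processing is currently suppressed are skipped rather than counted as backlog.

    #include <chrono>
    #include <iostream>
    #include <string>
    #include <vector>

    using Clock = std::chrono::steady_clock;

    struct QueryQueue
    {
        std::string Name;
        bool Suppressed;          // e.g. state updates while a full dump runs
        Clock::time_point Oldest; // enqueue time of the oldest pending query
    };

    void WarnAboutBacklog(const std::vector<QueryQueue>& queues, Clock::duration threshold)
    {
        auto now = Clock::now();

        for (const auto& q : queues) {
            // A suppressed queue is delayed on purpose, so its age
            // is not a backlog and must not trigger a warning.
            if (q.Suppressed)
                continue;

            if (now - q.Oldest > threshold)
                std::cerr << "Redis backlog in queue " << q.Name << "\n";
        }
    }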