In particular, ApiListener#ReplayLog() enqueued lots of messages into
JsonRpcConnection#{m_IoStrand,m_OutgoingMessagesQueue} (RAM) even if
the connection was shut(ting) down. Now #Disconnect() takes effect ASAP.
This allows the function to be used with either a double timestamp or a
pointer to a tm struct. With this, a similar implementation inside the tests
can simply use our regular function.
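A minimal sketch of such an overload pair (FormatDateTime is used here as a
placeholder name; the real function and its exact signature may differ):

    #include <ctime>
    #include <string>

    // Overload taking an already broken-down time, usable from the tests.
    std::string FormatDateTime(const char* format, const tm* t)
    {
        char buf[128];
        auto n = std::strftime(buf, sizeof(buf), format, t);
        return std::string(buf, n); // n == 0 -> empty string, see below
    }

    // Overload taking a UNIX timestamp as double, delegating to the tm-based
    // overload so both share a single implementation.
    std::string FormatDateTime(const char* format, double ts)
    {
        time_t tempts = (time_t)ts; // overflow handling: see numeric_cast below
        tm brokenDown {};
        localtime_r(&tempts, &brokenDown); // portable variant: see below
        return FormatDateTime(format, &brokenDown);
    }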
So far, the return value of strftime() was simply ignored and the output buffer
passed to the icinga::String constructor. However, there are error conditions
where strftime() returns 0 to signal an error, like if the buffer was too small
for the output. In that case, there's no guarantee on the buffer contents and
reading it can result in undefined behavior. Unfortunately, returning 0 can
also indicate success and strftime() doesn't set errno, so there's no reliable
way to distinguish both situations. Thus, the implementation now returns the
empty string in both cases.
I attempted to use std::put_time() at first as that allows for better error
handling. However, there were problems with the implementation on Windows
(see inline comment), so I put that plan on hold and left strftime() there
for the time being.
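A minimal sketch of the described handling, assuming a fixed-size output
buffer (the helper name is illustrative):

    #include <ctime>
    #include <string>

    // Returns the formatted time, or an empty string if strftime() returns 0.
    // A return value of 0 can mean "the output didn't fit into the buffer"
    // (error, buffer contents unspecified) or a genuinely empty result
    // (success); the two can't be told apart reliably, so both cases yield
    // the empty string.
    std::string FormatTm(const char* format, const tm* t)
    {
        char buf[128];
        auto n = std::strftime(buf, sizeof(buf), format, t);

        if (n == 0) {
            return std::string();
        }

        // Only read the n bytes strftime() actually wrote.
        return std::string(buf, n);
    }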
localtime() is not thread-safe as it returns a pointer to a shared tm struct.
Everywhere except on Windows, localtime_r() is already used, which avoids the
problem by using a struct allocated by the caller for the output.
Windows actually has a similar function called localtime_s() which has the same
properties, just with a different name and order of arguments.
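A minimal sketch of a portable wrapper (the name is illustrative): Microsoft's
localtime_s() takes the output struct first, POSIX localtime_r() takes it
second, but both write into caller-provided memory.

    #include <ctime>

    // Thread-safe replacement for localtime(): the result goes into a
    // caller-provided tm instead of a shared, statically allocated one.
    static void LocaltimeSafe(time_t ts, tm* out)
    {
    #ifdef _WIN32
        localtime_s(out, &ts);  // Microsoft CRT: (tm*, const time_t*)
    #else
        localtime_r(&ts, out);  // POSIX: (const time_t*, tm*)
    #endif
    }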
The previous implementation actually had undefined behavior when called with a
double that can't be represented as time_t. With boost::numeric_cast, there's a
convenient cast available that avoids this and throws an exception on
overflow.
It's undefined behavior ([0], where the implicit conversion rule comes into
play because the C-style cast uses static_cast [1] which in turn uses the
implicit conversion as per rule 5 of [2]):
> A prvalue of floating-point type can be converted to a prvalue of any integer
> type. The fractional part is truncated, that is, the fractional part is
> discarded.
>
> * If the truncated value cannot fit into the destination type, the behavior
> is undefined (even when the destination type is unsigned, modulo arithmetic
> does not apply).
Note that on Linux amd64, the undefined behavior typically manifests itself in
the result being the minimal value of time_t which then results in localtime_r
failing with EOVERFLOW.
[0]: https://en.cppreference.com/w/cpp/language/implicit_conversion#Floating.E2.80.93integral_conversions
[1]: https://en.cppreference.com/w/cpp/language/explicit_cast
[2]: https://en.cppreference.com/w/cpp/language/static_cast
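A minimal, self-contained example of the checked conversion (illustrative,
not the exact Icinga 2 code):

    #include <ctime>
    #include <iostream>
    #include <boost/numeric/conversion/cast.hpp>

    int main()
    {
        double ts = 1e200; // far outside the range of time_t

        try {
            // A plain (time_t)ts would be undefined behavior here;
            // numeric_cast checks the range and throws instead.
            auto tempts = boost::numeric_cast<time_t>(ts);
            std::cout << "converted: " << tempts << "\n";
        } catch (const boost::bad_numeric_cast& ex) {
            std::cout << "overflow: " << ex.what() << "\n";
        }
    }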
Currently, when processing a `CheckResult`, it will first trigger an
`OnNextCheckChanged` event, which is sent to all connected endpoints.
Then, when `Checkable::ProcessCheckResult()` returns, an `OnCheckResult`
event is fired, which is of course also sent to all connected endpoints.
Next, the other endpoints receive the `event::SetNextCheck` cluster
event followed by `event::CheckResult` and invoke
`Checkable#SetNextCheck()` and `Checkable#CheckResult()` with the newly
received check. So they also try to recalculate the next check
themselves and invalidate the previously received next check timestamp
from the source endpoint. Since each endpoint randomly initialises its
own scheduling offset, the recalculated next check will always differ by
a split second/millisecond on each of them. As a consequence, two Icinga
DB HA instances will generate two different checksums for the same state
and cause the state histories to be fully resynchronised after a
takeover/Icinga 2 reload.
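A simplified, self-contained illustration of the scheduling-offset point (a
toy model, not the actual Icinga 2 scheduler): two endpoints recomputing the
next check from their own random offsets practically never arrive at the same
timestamp, so any checksum derived from it differs as well.

    #include <cmath>
    #include <cstdio>
    #include <random>

    // Toy model: align the next check to an interval grid shifted by a
    // per-endpoint random offset.
    static double NextCheck(double now, double interval, double offset)
    {
        return now + interval - std::fmod(now + offset, interval);
    }

    int main()
    {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> dist(0.0, 60.0);

        double now = 1700000000.0, interval = 60.0;

        double a = NextCheck(now, interval, dist(rng)); // "endpoint A"
        double b = NextCheck(now, interval, dist(rng)); // "endpoint B"

        // The recalculated timestamps differ by a fraction of the interval.
        std::printf("A: %.3f\nB: %.3f\n", a, b);
    }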
A day specification like "monday -1" refers to the last Monday of the month.
However, there was an off-by-one if the first day of the next month is the same
day of the week, i.e. a Monday in this example.
LegacyTimePeriod::FindNthWeekday() picks a day to start the search for the day
in question. When given a negative n to search for the n-th last day, it
wrongly used the first day of the following month as the start and counted it
as if it was within the current month. This resulted in a 1/7 chance that the
result was one week too late.
This is fixed by using the last day of the current month instead.
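A minimal sketch of the underlying trick (illustrative, not the actual
LegacyTimePeriod code): mktime() normalises tm_mday == 0 of the following
month to the last day of the current month, which is the correct starting
point when searching backwards.

    #include <ctime>
    #include <iostream>

    // Returns a tm for the last day of the month that 'reference' lies in.
    static tm LastDayOfMonth(tm reference)
    {
        reference.tm_mon += 1;  // move to the next month ...
        reference.tm_mday = 0;  // ... day 0 = last day of the previous month
        reference.tm_hour = reference.tm_min = reference.tm_sec = 0;
        reference.tm_isdst = -1;

        mktime(&reference);     // normalises all fields in place
        return reference;
    }

    int main()
    {
        tm ref {};
        ref.tm_year = 2024 - 1900;
        ref.tm_mon = 5;   // June 2024; July 1st, 2024 is a Monday, i.e. the
        ref.tm_mday = 15; // constellation that used to trigger the bug

        tm last = LastDayOfMonth(ref);
        std::cout << "last day: " << last.tm_mday << std::endl; // 30
    }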
Before ae693cb7e1 (#9577), we repeatedly looped over all items in parallel like this:
    while not types.done:
      for t in types:
        if not t.done and t.dependencies.done:
          with parallel(all_items, CONCURRENCY) as some_items:
            for i in some_items:
              if i.type is t:
                i.commit()
I.e. all items got distributed over CONCURRENCY threads, but not always
equally. E.g. when it was the hosts' turn, sometimes only two threads got
hosts and did all the work. The others didn't do actual work (due to the lack
of hosts in their queue), which reduced performance. c721c302cd (#6581) fixed
it by shuffling all_items first. ae693cb7e1 (#9577) made the latter
unnecessary by replacing the above algorithm with this:
    while not types.done:
      for t in types:
        if not t.done and t.dependencies.done:
          with parallel(all_items[t], CONCURRENCY) as some_items:
            for i in some_items:
              if i.type is t:
                i.commit()
I.e. parallel() gets only items of type t, so all threads get e.g. hosts.
When an HTTP connection dies prematurely while the response is sent,
`http::async_write()` sets the error code to something like broken pipe.
When `async_flush()` is called afterwards, it sometimes never returns, which
leaks resources as the coroutine isn't cleaned up. This commit makes the
individual functions throw exceptions instead of silently ignoring the
errors, so that the function terminates early and an error is logged.
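A minimal sketch of the pattern, assuming Boost.Beast with stackful
coroutines (the function is simplified and not the actual Icinga 2 code):

    #include <boost/asio/spawn.hpp>
    #include <boost/beast/core.hpp>
    #include <boost/beast/http.hpp>
    #include <boost/system/system_error.hpp>

    namespace asio = boost::asio;
    namespace http = boost::beast::http;

    // Write the response and surface any transport error as an exception so
    // the calling coroutine unwinds instead of continuing with e.g. a flush.
    template<class Stream, class Body>
    void WriteResponse(Stream& stream, http::response<Body>& response,
        asio::yield_context yc)
    {
        boost::system::error_code ec;
        http::async_write(stream, response, yc[ec]);

        if (ec) {
            // E.g. broken pipe if the peer went away mid-response.
            throw boost::system::system_error(ec);
        }
    }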
Given that the internal `config::Update` cluster events use this as well to
create received runtime objects, we don't want to first persist the config
file and only then load and validate it with `CompileFile`. Otherwise, we are
forced to remove the newly created file whenever we can't validate, commit or
activate it. It would also have the downside that two cluster events for the
same object arriving at the same moment from two different endpoints would
result in two different threads simultaneously creating and loading the same
config file - whereby only one of them passes validation, while the other
faces an object `re-definition` error and tries to remove the config file it
mistakenly thinks it has created. As a consequence, an object successfully
created by the former thread is implicitly deleted by the latter one, causing
objects to mysteriously disappear.
Closing and re-opening that very same log file shouldn't reset the counter,
otherwise some log files may exceed the per-file limit, as their offset
indicator is reset each time they are re-opened.
On incoming connection timeout we log the remote endpoint, which isn't
available once the connection has already been closed - an exception is
thrown instead. So fetch the endpoint while we're still connected, both to
not lose it and to avoid the exception.
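A minimal sketch of the idea with plain Boost.ASIO (simplified, not the actual
Icinga 2 code): capture the remote endpoint right after the connection is
established, because socket::remote_endpoint() throws once the socket is no
longer connected.

    #include <boost/asio.hpp>
    #include <iostream>

    using boost::asio::ip::tcp;

    void HandleClient(tcp::socket socket)
    {
        // Grab the peer address while the socket is still connected; calling
        // remote_endpoint() on a disconnected socket throws
        // boost::system::system_error.
        auto endpoint = socket.remote_endpoint();

        // ... later, even if the socket has been closed in the meantime, the
        // endpoint is still available for logging:
        std::cout << "timeout for client " << endpoint << std::endl;
    }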
While analyzing a possible memory leak, we encountered several coroutine
exception messages, which unfortunately do not provide any information
about what exactly went wrong, as exception diagnostics were previously
only logged at the notice level.
Although there is locking involved here, it shouldn't take too long for the
thread to actually acquire the lock, since there aren't that many threads
dealing with endpoint clients concurrently. It would just pointlessly waste
time trying to obtain a CPU slot.