The initial config object sync for each new connection (in
`ApiListener::SendRuntimeConfigObjects()`) only considers currently
existing objects and has no way to pass the information that objects
were deleted in the meantime.
This commit logs config object deletions to the replay log if required,
so that there is a chance that they will be propagated to nodes that
were offline when the deletion happened.
Note that this can only be considered a workaround, as the replay log
might be pruned or could even be disabled completely. Also, there still
seems to be a race condition between the config sync and the replay log
when multiple new connections are established at the same time.
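A minimal sketch of the idea, assuming the existing RelayMessage()
helper on the ApiListener whose last parameter controls whether the
message is also persisted to the replay log (variable names are
illustrative):

```cpp
// Sketch: on config object deletion, relay a config::DeleteObject message
// and persist it to the replay log so that nodes which are currently
// offline still have a chance to receive the deletion on reconnect.
Dictionary::Ptr params = new Dictionary({
	{ "name", object->GetName() },
	{ "type", object->GetReflectionType()->GetName() },
	{ "version", object->GetVersion() }
});

Dictionary::Ptr message = new Dictionary({
	{ "jsonrpc", "2.0" },
	{ "method", "config::DeleteObject" },
	{ "params", params }
});

// The trailing 'true' asks RelayMessage() to also write the message to
// the replay log. The log may be pruned or disabled entirely, which is
// why this remains a best-effort workaround.
listener->RelayMessage(origin, object, message, true);
```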
config::UpdateObject would create a new object, but the creation may
have been silently ignored via 'ignore_on_error', e.g. for downtimes.
Since we cannot simply fetch the error from inside the config compiler,
we just check whether a config object exists at this stage. This happens
synchronously, and only once the object exists do we log its creation.
The previous code always logged the creation, even if the downtime
was ignored, e.g. when the first master sent one for local host objects.
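A sketch of that check, assuming the handler has the object's name and
type from the message parameters at hand (variable names are
illustrative):

```cpp
// Sketch: after the synchronous config commit for a config::UpdateObject
// message, look the object up before logging its creation. With
// 'ignore_on_error' the compiler may have skipped it silently, e.g. a
// downtime for a host that only exists on the other node.
Type::Ptr ptype = Type::GetByName(objType);
auto *ctype = dynamic_cast<ConfigType *>(ptype.get());

ConfigObject::Ptr newObject;
if (ctype)
	newObject = ctype->GetObject(objName);

if (newObject) {
	Log(LogNotice, "ApiListener")
		<< "Created object '" << objName << "' of type '" << objType << "'.";
} else {
	Log(LogNotice, "ApiListener")
		<< "Could not create object '" << objName << "' of type '"
		<< objType << "', it was probably ignored via 'ignore_on_error'.";
}
```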
This commit also extracts more details from the MessageOrigin (identity,
endpoint, zone) into the log messages for better troubleshooting and
debugging.
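A sketch of how those details could be pulled into a log message,
assuming the MessageOrigin carries the connecting client (FromClient)
and zone (FromZone); the exact log wording is illustrative:

```cpp
// Sketch: enrich log messages with the identity, endpoint and zone taken
// from the MessageOrigin, so it is visible which node sent the update.
String identity;
Endpoint::Ptr endpoint;

if (origin->FromClient) {
	identity = origin->FromClient->GetIdentity();
	endpoint = origin->FromClient->GetEndpoint();
}

Zone::Ptr zone = origin->FromZone;

Log(LogNotice, "ApiListener")
	<< "Processing config update from identity '" << identity
	<< "', endpoint '" << (endpoint ? endpoint->GetName() : String())
	<< "', zone '" << (zone ? zone->GetName() : String()) << "'.";
```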
refs #7198
What does this change?
* Remove use of spaces for formatting
  These could be found by using `grep -r -l -P '^\t+ +[^*]'`
* Removal of trailing whitespace
* Fix a few lines longer than 120 chars
1) No zone defined. The object will only be synced in the local zone for HA purposes.
2) Zone is set to 'master'. Only nodes in the master zone will get this object and updates synced.
3) Zone is set to 'satellite'. Only nodes in the satellite zone, or in parent zones above will get this object and updates synced.
4) Zone is set to 'client'. Only nodes in the client zone, and in parent zones (satellite, master) will get object updates.
Furthermore, this commit adds more security measures for syncing object
config bottom-up, which is clearly restricted at this time: clients
cannot send their config to the top, as for now we only support the
top-down direction that we also have with the cluster file config sync.
The initial sync also takes the zone relation model into account and
only allows object syncs when the same conditions described above apply.
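A sketch of the resulting zone relation check for the initial sync,
assuming a Zone::IsChildOf() helper that walks the parent chain (the
function name is illustrative):

```cpp
// Sketch: decide whether a config object may be synced to a node in
// 'targetZone'. Objects are synced to nodes in their own zone, and to
// nodes in parent zones above it; never to unrelated or child zones.
static bool ObjectZoneAllowsSync(const ConfigObject::Ptr& object, const Zone::Ptr& targetZone)
{
	Zone::Ptr objectZone = Zone::GetByName(object->GetZoneName());

	// 1) No zone defined: only sync within the local zone for HA purposes.
	if (!objectZone)
		return targetZone == Zone::GetLocalZone();

	// 2)-4) Zone is set: sync to the object's own zone and to the parent
	// zones above it (client -> satellite -> master).
	return targetZone == objectZone || objectZone->IsChildOf(targetZone);
}
```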
refs #9927
refs #9100
This is fairly ugly: it sets an extension in the ConfigObjectUtility
delete handler to signal the OnActiveChanged handler inside the cluster
config sync to send a delete event to the other nodes.
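A sketch of the mechanism, assuming ConfigObject supports key/value
extensions via SetExtension()/GetExtension(); function names and
signatures are illustrative:

```cpp
// Sketch: the ConfigObjectUtility delete handler tags the object before
// deactivating it, so the cluster config sync can tell a real deletion
// apart from a plain deactivation.
void DeleteObjectHelper(const ConfigObject::Ptr& object)
{
	object->SetExtension("ConfigObjectDeleted", true);
	object->Deactivate(); /* fires the OnActiveChanged signal */
}

// The OnActiveChanged handler inside the cluster config sync checks the
// extension and relays a delete event to the other nodes.
void OnActiveChangedHandler(const ConfigObject::Ptr& object, const MessageOrigin::Ptr& origin)
{
	ApiListener::Ptr listener = ApiListener::GetInstance();

	if (!listener)
		return;

	if (object->IsActive())
		listener->UpdateConfigObject(object, origin);
	else if (object->GetExtension("ConfigObjectDeleted"))
		listener->DeleteConfigObject(object, origin);
}
```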
refs #9927