Adjust the debouncing logic so that it applies to all inbound file
events, regardless of whether they match a sync or rebuild rule.
When the batch is flushed, if any event for the service is a
rebuild event, the service is rebuilt and all sync events in
the batch are discarded. If _all_ events in the batch are sync
events, a sync is triggered with the entire batch at once. This
provides a substantial performance win for the new `tar`-based
implementation, as it can efficiently transfer the changes in bulk.
Additionally, this helps with jitter: for example, it's not
uncommon for a file to be written twice in quick succession, so
even when few files are being modified at once, debouncing still
prevents some unnecessary transfers.
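A minimal sketch of the batch-and-flush idea (all names here are
illustrative, not the actual implementation):

```go
package watch

import "time"

// Event is an illustrative inbound file event; Rebuild marks whether
// the matched rule requires a full rebuild rather than a sync.
type Event struct {
	Path    string
	Rebuild bool
}

// batchDebounce buffers events and flushes after a quiet period.
// If any event in the batch is a rebuild event, the whole batch
// collapses into a single rebuild; otherwise the sync events are
// passed along in bulk.
func batchDebounce(events <-chan Event, quiet time.Duration, rebuild func(), sync func([]Event)) {
	var batch []Event
	var flush <-chan time.Time // nil (blocks forever) while the batch is empty
	for {
		select {
		case ev, ok := <-events:
			if !ok {
				return
			}
			batch = append(batch, ev)
			flush = time.After(quiet) // restart the quiet period on every event
		case <-flush:
			needsRebuild := false
			for _, ev := range batch {
				if ev.Rebuild {
					needsRebuild = true
					break
				}
			}
			if needsRebuild {
				rebuild() // sync events in the batch are discarded
			} else {
				sync(batch) // transfer the entire batch at once
			}
			batch, flush = nil, nil
		}
	}
}
```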
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
Support services with scale > 1 for the tar watch sync.
Add a "lossy" multi-writer specific to pipes that writes the
tar data to each `io.PipeWriter`, each of which is connected to
`stdin` for a `tar` process being exec'd in a container.
The data is written serially to each writer. This could be
adjusted to write concurrently, but that would rapidly increase
the I/O load, so it's not done here: in general, 99% of the
time you'll be developing (and thus using watch/sync) with a
single replica of a service.
If a write fails, the corresponding `io.PipeWriter` is removed
from the active set and closed with an error.
This means that a failed copy to one container won't stop
writes to the others that are succeeding. Of course, the failed
container will still be left in an inconsistent state afterwards,
but that's a different problem.
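A minimal sketch of that behavior (the type and method names are
illustrative, not the actual implementation):

```go
package pipes

import (
	"fmt"
	"io"
)

// LossyMultiWriter fans one tar stream out to several pipe writers,
// one per container replica. A failed writer is dropped and closed
// with the error instead of aborting the whole copy.
type LossyMultiWriter struct {
	writers []*io.PipeWriter
}

func NewLossyMultiWriter(writers ...*io.PipeWriter) *LossyMultiWriter {
	return &LossyMultiWriter{writers: writers}
}

// Write copies p serially to each writer in the active set. On error,
// that writer is closed with the error and removed from the set, while
// writes to the remaining writers continue. The write is "lossy": it
// always reports success so the source stream keeps flowing.
func (w *LossyMultiWriter) Write(p []byte) (int, error) {
	for i := 0; i < len(w.writers); i++ {
		if _, err := w.writers[i].Write(p); err != nil {
			w.writers[i].CloseWithError(fmt.Errorf("copy to container failed: %w", err))
			w.writers = append(w.writers[:i], w.writers[i+1:]...)
			i--
		}
	}
	return len(p), nil
}

// Close closes all remaining writers so each tar process sees EOF.
func (w *LossyMultiWriter) Close() error {
	for _, pw := range w.writers {
		pw.Close()
	}
	w.writers = nil
	return nil
}
```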
Signed-off-by: Milas Bowman <milas.bowman@docker.com>