This was left over from debugging, but we should not block waiting for
the connection; OTel will handle it in the background.
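As a rough illustration (assumed OpenTelemetry SDK usage, not the actual
Compose code), creating the exporter without a blocking dial looks
something like this:

```go
package telemetry

import (
	"context"

	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// newTracerProvider is a hypothetical helper: the OTLP exporter is created
// without any blocking dial option, so gRPC establishes the collector
// connection in the background and never holds up the command itself.
func newTracerProvider(ctx context.Context, endpoint string) (*sdktrace.TracerProvider, error) {
	exp, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint(endpoint),
		otlptracegrpc.WithInsecure(),
		// Deliberately no blocking dial option here.
	)
	if err != nil {
		return nil, err
	}
	return sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp)), nil
}
```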
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
Adjust the debouncing logic so that it applies to all inbound file
events, regardless of whether they match a sync or rebuild rule.
When the batch is flushed out, if any event for the service is a
rebuild event, then the service is rebuilt and all sync events for
the batch are ignored. If _all_ events in the batch are sync events,
then a sync is triggered, passing the entire batch at once. This
provides a substantial performance win for the new `tar`-based
implementation, as it can efficiently transfer the changes in bulk.
Additionally, this helps with jitter: it's not uncommon for a file to be
written twice in quick succession, so even when there aren't many files
being modified at once, the debounce can still prevent some unnecessary
transfers.
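A rough sketch of the flush decision, using hypothetical names and a
single per-service event stream (the real batching is per service and the
timing details differ):

```go
package watch

import "time"

// fileEvent is a hypothetical representation of an inbound file event.
type fileEvent struct {
	Path    string
	Rebuild bool // true if the path matches a rebuild rule, false if it matches a sync rule
}

// batchDebounce buffers events until quietPeriod passes without a new one,
// then flushes the whole batch: any rebuild event wins and the sync events
// are dropped; otherwise all sync events are handed off together in bulk.
func batchDebounce(events <-chan fileEvent, quietPeriod time.Duration,
	rebuild func(), sync func(batch []fileEvent)) {
	var batch []fileEvent
	timer := time.NewTimer(quietPeriod)
	if !timer.Stop() {
		<-timer.C
	}
	for {
		select {
		case ev, ok := <-events:
			if !ok {
				return
			}
			batch = append(batch, ev)
			// Drain any already-fired tick before restarting the quiet period.
			if !timer.Stop() {
				select {
				case <-timer.C:
				default:
				}
			}
			timer.Reset(quietPeriod)
		case <-timer.C:
			if len(batch) == 0 {
				continue
			}
			needsRebuild := false
			for _, ev := range batch {
				if ev.Rebuild {
					needsRebuild = true
					break
				}
			}
			if needsRebuild {
				rebuild() // sync events in this batch are ignored
			} else {
				sync(batch) // single bulk transfer of every changed path
			}
			batch = nil
		}
	}
}
```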
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
Support services with scale > 1 for the tar watch sync.
Add a "lossy" multi-writer specific to pipes that writes the
tar data to each `io.PipeWriter`, which is connected to `stdin`
for the `tar` process being exec'd in the container.
The data is written serially to each writer. This could be adjusted to do
concurrent writes, but that would rapidly increase the I/O load, so it is
not done here; in general, 99% of the time you'll be developing (and thus
using watch/sync) with a single replica of a service.
If a write fails, the corresponding `io.PipeWriter` is removed
from the active set and closed with an error.
This means that a failed copy to one container won't stop writes to the
others that are succeeding. Of course, the failed container will still be
in an inconsistent state afterwards, but that's a different problem.
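A minimal sketch of that lossy behavior, with a hypothetical type rather
than the actual implementation:

```go
package filesync

import "io"

// lossyMultiWriter fans the tar stream out to one pipe per container replica.
// A writer that fails is closed with the error and dropped, instead of
// aborting the copy for the replicas that are still healthy.
type lossyMultiWriter struct {
	writers []*io.PipeWriter
}

func (w *lossyMultiWriter) Write(p []byte) (int, error) {
	for i := range w.writers {
		if w.writers[i] == nil {
			continue
		}
		if _, err := w.writers[i].Write(p); err != nil {
			// Lossy: close this pipe with the error and stop writing to it,
			// but keep going for the remaining writers.
			w.writers[i].CloseWithError(err)
			w.writers[i] = nil
		}
	}
	return len(p), nil
}

func (w *lossyMultiWriter) Close() error {
	for _, pw := range w.writers {
		if pw != nil {
			pw.Close()
		}
	}
	return nil
}
```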
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
Just moving some code around in preparation for an alternative
sync implementation that can do bulk transfers by using `tar`.
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
* Move all the initialization code out of `main.go`
* Ensure spans are reported when there's an error with the
command
* Attach the Compose version & active Docker context to the
resource instead of the span
* Name the root CLI span `cli/<cmd>` for clarity and grab the full
  subcommand path (e.g. `alpha-viz` instead of just `viz`); a rough
  sketch of the resource and span setup follows below
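A rough sketch of the last two points, using assumed OTel SDK calls and
illustrative attribute keys rather than the Compose internals:

```go
package telemetry

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	"go.opentelemetry.io/otel/trace"
)

// startRootSpan is a hypothetical helper: the Compose version and the active
// Docker context are attached to the resource (so every span inherits them),
// and the root span is named "cli/<full subcommand path>", e.g. "cli/alpha-viz".
func startRootSpan(ctx context.Context, version, dockerContext, cmdPath string) (context.Context, trace.Span, error) {
	res, err := resource.Merge(resource.Default(), resource.NewSchemaless(
		attribute.String("compose.version", version), // illustrative keys
		attribute.String("docker.context", dockerContext),
	))
	if err != nil {
		return ctx, nil, err
	}
	tp := sdktrace.NewTracerProvider(sdktrace.WithResource(res))
	otel.SetTracerProvider(tp)
	ctx, span := tp.Tracer("compose").Start(ctx, "cli/"+cmdPath)
	return ctx, span, nil
}
```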
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
* If there's no `otel` key (or the value is `null`) in the config,
don't return an error
* Propagate error from the exporter instead of panicking (both
  behaviors are sketched below)
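A small sketch of the intended behavior, with a hypothetical config shape
standing in for the real one:

```go
package telemetry

import "encoding/json"

// OTLPConfig is a hypothetical stand-in for the OTel metadata carried in the
// Docker context / config file.
type OTLPConfig struct {
	Endpoint string `json:"otlp.endpoint"`
}

// configFromMetadata tolerates a missing or explicitly-null "otel" key by
// returning an empty config and no error; only malformed data is an error,
// and that error is returned to the caller rather than causing a panic.
func configFromMetadata(metadata map[string]json.RawMessage) (OTLPConfig, error) {
	var cfg OTLPConfig
	raw, ok := metadata["otel"]
	if !ok || string(raw) == "null" {
		return cfg, nil
	}
	if err := json.Unmarshal(raw, &cfg); err != nil {
		return cfg, err
	}
	return cfg, nil
}
```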
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
This is a bunch of OTEL initialization code. It's all in
`internal/` because there are re-usable parts here, but Compose
isn't the right place to expose them from. Once we've stabilized
the interfaces a bit and the need arises, we can move it to a
separate module.
Currently, a single span is produced to wrap the root Compose
command.
Compose will respect the standard OTEL environment variables
as well as OTEL metadata from the Docker context. Both can be
used simultaneously. The latter is intended for local system
integration and is restricted to Unix sockets / named pipes.
None of this is enabled by default; during development it's gated
behind the `COMPOSE_EXPERIMENTAL_OTEL=1` environment variable.
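A minimal sketch of the gating; the environment variable is from this
change, while the surrounding wiring is illustrative (the OTLP exporter
reads the standard OTEL_EXPORTER_OTLP_* variables on its own):

```go
package telemetry

import (
	"context"
	"os"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// initIfEnabled wires up tracing only when the experimental flag is set;
// otherwise it returns a no-op shutdown and tracing stays disabled.
func initIfEnabled(ctx context.Context) (shutdown func(context.Context) error, err error) {
	if os.Getenv("COMPOSE_EXPERIMENTAL_OTEL") != "1" {
		return func(context.Context) error { return nil }, nil
	}
	// The exporter picks up OTEL_EXPORTER_OTLP_ENDPOINT and friends itself.
	exp, err := otlptracegrpc.New(ctx)
	if err != nil {
		return nil, err
	}
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	otel.SetTracerProvider(tp)
	return tp.Shutdown, nil
}
```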
Signed-off-by: Milas Bowman <milas.bowman@docker.com>