Unfortunately, the feature flag mechanism for experimental features
isn't adequate. To avoid some edge cases where Compose might try to
use Synchronized file shares with Desktop when the feature isn't
available, we need to query a new endpoint.
Before we move any of this out of experimental, we need to improve
the integration here with Desktop & Compose so that we get all the
necessary feature/experiment state up-front and reduce the number
of IPC calls needed.
For now, there's some intentional redundancy so that we can skip
this extra call where possible. The actual endpoint is very cheap/
fast, but every IPC call is a potential point of failure, so
it's worth it.
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
Use an environment variable for the global opt-out and Docker
Desktop (if available) to determine specific experiment states.
In the future, we'll allow per-feature opt-in/opt-out via env vars
as well, but currently there is a single `COMPOSE_EXPERIMENTAL` env
var that can be used to opt out of all experimental features
independently of the Docker Desktop configuration.
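As a rough sketch of that precedence (the helper name and parsing
details here are illustrative, not the actual Compose internals):

```go
package experiments

import (
	"os"
	"strconv"
)

// experimentalAllowed is a hypothetical helper: it reports whether
// experimental features may be considered at all. Setting
// COMPOSE_EXPERIMENTAL to a falsy value opts out globally, regardless
// of any Docker Desktop experiment state queried afterwards.
func experimentalAllowed() bool {
	v, ok := os.LookupEnv("COMPOSE_EXPERIMENTAL")
	if !ok {
		return true // unset: defer to Docker Desktop (if available)
	}
	enabled, err := strconv.ParseBool(v)
	return err != nil || enabled
}
```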
The original `docker cp`-based sync has not been the default for
quite a while and required setting an environment variable to
revert to it. The tar implementation is more performant and
addresses several edge cases in the `docker cp` version, so it's
time to fully retire the old implementation.
The scaffolding for multiple sync implementations remains to
support future experimentation here.
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
This was a bad configuration (my fault) that meant each span was
exported synchronously, as it ended. That can cause weird behavior
such as stuttering/blocking.
There's really no reason not to use the batch processor; it's the
recommended configuration. In the future, it might make sense
to tune the intervals based on the fact that Compose is a CLI
rather than a long-running server app, but we already flush on
exit, so it's not a huge deal.
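For reference, a sketch against the OTel Go SDK (not the exact
Compose diff) of what the fix amounts to:

```go
package telemetry

import (
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func newTracerProvider(exporter sdktrace.SpanExporter) *sdktrace.TracerProvider {
	// The bad config used sdktrace.WithSyncer(exporter), which exports
	// each span synchronously as it ends. WithBatcher queues spans and
	// exports them in the background; Shutdown (called on exit) flushes
	// whatever is still queued.
	return sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exporter),
	)
}
```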
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
This is a good place to start introducing (local) exclusivity
to Compose. Now, when `alpha watch` launches, it will check for
a PID file in the user's XDG runtime directory, creating one if
it's missing or stale. If the PID file exists and is valid, an
error is returned and Compose exits.
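A minimal sketch of the check (the helper names are illustrative,
error handling is abbreviated, and the liveness check is Unix-only):

```go
package watch

import (
	"errors"
	"fmt"
	"os"
	"strconv"
	"syscall"
)

// acquirePIDFile is a hypothetical helper sketching the exclusivity
// check: refuse to start if a live process already owns the file,
// otherwise (re)claim it.
func acquirePIDFile(path string) error {
	data, err := os.ReadFile(path)
	switch {
	case err == nil:
		pid, perr := strconv.Atoi(string(data))
		if perr == nil && pidAlive(pid) {
			return fmt.Errorf("watch already running (pid %d)", pid)
		}
		// Stale file: the recorded process is gone, so claim it.
	case !errors.Is(err, os.ErrNotExist):
		return err
	}
	return os.WriteFile(path, []byte(strconv.Itoa(os.Getpid())), 0o600)
}

// pidAlive reports whether the process exists (signal 0 performs the
// check without actually delivering a signal).
func pidAlive(pid int) bool {
	proc, err := os.FindProcess(pid)
	if err != nil {
		return false
	}
	return proc.Signal(syscall.Signal(0)) == nil
}
```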
A slight tweak to the experimental remote Git loader has been
made to use the XDG package for consistency.
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
This was left over from debugging, but we should not block.
OTel will handle the connection in the background.
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
Adjust the debouncing logic so that it applies to all inbound file
events, regardless of whether they match a sync or rebuild rule.
When the batch is flushed out, if any event for the service is a
rebuild event, then the service is rebuilt and all sync events for
the batch are ignored. If _all_ events in the batch are sync events,
then a sync is triggered, passing the entire batch at once. This
provides a substantial performance win for the new `tar`-based
implementation, as it can efficiently transfer the changes in bulk.
Additionally, this helps with jitter: it's not uncommon for there
to be double-writes to a file in quick succession, so even if
there aren't many files being modified at once, it can still
prevent some unnecessary transfers.
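In pseudo-Go (the types and helpers here are illustrative, not
Compose's actual watch internals), the flush policy is:

```go
package watch

// fileEvent is an illustrative stand-in for a debounced watch event.
type fileEvent struct {
	path    string
	rebuild bool // matched a rebuild rule (false means a sync rule)
}

func flushBatch(service string, batch []fileEvent) {
	paths := make([]string, 0, len(batch))
	for _, ev := range batch {
		if ev.rebuild {
			// Any rebuild event wins: rebuild the service and drop
			// the sync events, which the rebuild supersedes.
			rebuildService(service)
			return
		}
		paths = append(paths, ev.path)
	}
	// All events were syncs: hand the whole batch over at once so the
	// tar-based sync can send a single archive.
	syncPaths(service, paths)
}

// Hypothetical stubs standing in for the real operations.
func rebuildService(service string)            {}
func syncPaths(service string, paths []string) {}
```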
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
Support services with scale > 1 for the tar watch sync.
Add a "lossy" multi-writer specific to pipes that writes the
tar data to each `io.PipeWriter`, which is connected to `stdin`
for the `tar` process being exec'd in the container.
The data is written serially to each writer. This could be
adjusted to do concurrent writes, but that would rapidly increase
the I/O load, so it's not done here - in general, 99% of the
time you'll be developing (and thus using watch/sync) with a
single replica of a service.
If a write fails, the corresponding `io.PipeWriter` is removed
from the active set and closed with an error.
This means that a single container copy failing won't stop
writes to the others that are succeeding. Of course, the failed
container will still be in an inconsistent state afterwards, but
that's a different problem.
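A sketch of the idea (the real type lives in the watch/sync code;
this omits locking):

```go
package watch

import "io"

// lossyMultiWriter fans tar data out to one pipe per container.
type lossyMultiWriter struct {
	writers []*io.PipeWriter
}

// Write copies p serially to each pipe. A pipe whose write fails is
// closed with that error and dropped from the set, so the remaining
// replicas keep receiving data. It always reports success to the
// producer ("lossy" by design).
func (w *lossyMultiWriter) Write(p []byte) (int, error) {
	for i := 0; i < len(w.writers); i++ {
		if _, err := w.writers[i].Write(p); err != nil {
			w.writers[i].CloseWithError(err)
			w.writers = append(w.writers[:i], w.writers[i+1:]...)
			i--
		}
	}
	return len(p), nil
}
```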
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
Just moving some code around in preparation for an alternative
sync implementation that can do bulk transfers by using `tar`.
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
* Move all the initialization code out of `main.go`
* Ensure spans are reported when there's an error with the
command
* Attach the Compose version & active Docker context to the
resource instead of the span
* Name the root CLI span `cli/<cmd>` for clarity and grab
the full subcommand path (e.g. `alpha-viz` instead of just
`viz`)
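For the last point, a sketch of the naming scheme (the helper is
illustrative):

```go
package telemetry

import (
	"context"
	"strings"

	"go.opentelemetry.io/otel"
)

// startRootSpan shows the naming convention: the full subcommand path
// is joined with dashes, so `compose alpha viz` becomes "cli/alpha-viz"
// rather than just "viz".
func startRootSpan(ctx context.Context, cmdPath []string) (context.Context, func()) {
	ctx, span := otel.Tracer("compose").Start(ctx, "cli/"+strings.Join(cmdPath, "-"))
	return ctx, func() { span.End() }
}
```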
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
* If there's no `otel` key (or the value is `null`) in the config,
don't return an error
* Propagate error from the exporter instead of panicking
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
This is a bunch of OTEL initialization code. It's all in
`internal/` because there are re-usable parts here, but Compose
isn't the right long-term home for them. Once we've stabilized the
interfaces a bit and the need arises, we can move it to a separate
module.
Currently, a single span is produced to wrap the root Compose
command.
Compose will respect the standard OTEL environment variables
as well as OTEL metadata from the Docker context. Both can be
used simultaneously. The latter is intended for local system
integration and is restricted to Unix sockets / named pipes.
None of this is enabled by default; during development it's gated
behind the `COMPOSE_EXPERIMENTAL_OTEL=1` environment variable.
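The gate itself is just an environment check, roughly (helper name
illustrative):

```go
package telemetry

import "os"

// otelEnabled gates all of the OTel wiring during development;
// nothing is initialized unless the variable is set.
func otelEnabled() bool {
	return os.Getenv("COMPOSE_EXPERIMENTAL_OTEL") == "1"
}
```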
Signed-off-by: Milas Bowman <milas.bowman@docker.com>