Mirror of https://github.com/docker/compose.git
Synced 2025-09-23 17:57:49 +02:00
Commit a8d7ebd987
@@ -1,2 +1,4 @@
.git
build
dist
venv
45 CHANGES.md
@@ -1,6 +1,51 @@
Change log
==========

1.3.0 (2015-06-18)
------------------

Firstly, two important notes:

- **This release contains breaking changes, and you will need to either remove or migrate your existing containers before running your app** - see the [upgrading section of the install docs](https://github.com/docker/compose/blob/1.3.0rc1/docs/install.md#upgrading) for details.

- Compose now requires Docker 1.6.0 or later.

We've done a lot of work in this release to remove hacks and make Compose more stable:

- Compose now uses container labels, rather than names, to keep track of containers. This makes Compose both faster and easier to integrate with your own tools.

- Compose no longer uses "intermediate containers" when recreating containers for a service. This makes `docker-compose up` less complex and more resilient to failure.

There are some new features:

- `docker-compose up` has an **experimental** new behaviour: it will only recreate containers for services whose configuration has changed in `docker-compose.yml`. This will eventually become the default, but for now you can take it for a spin:

        $ docker-compose up --x-smart-recreate

- When invoked in a subdirectory of a project, `docker-compose` will now climb up through parent directories until it finds a `docker-compose.yml`.
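The parent-directory search can be sketched in a few lines of Python; this mirrors the `find_candidates_in_parent_dirs` helper added to `compose/cli/utils.py` in this commit (the function name below is shortened for illustration):

```python
import os

def find_in_parent_dirs(filenames, path):
    """Walk upward from `path` until one of `filenames` exists in the
    current directory, or the filesystem root is reached.
    Returns a (candidates, directory) tuple."""
    candidates = [f for f in filenames if os.path.exists(os.path.join(path, f))]
    if not candidates:
        parent = os.path.abspath(os.path.join(path, '..'))
        if parent != os.path.abspath(path):  # stop at the filesystem root
            return find_in_parent_dirs(filenames, parent)
    return (candidates, path)
```

Invoked from a nested subdirectory, this resolves to the project root that actually contains the config file.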
Several new configuration keys have been added to `docker-compose.yml`:

- `dockerfile`, like `docker build --file`, lets you specify an alternate Dockerfile to use with `build`.
- `labels`, like `docker run --labels`, lets you add custom metadata to containers.
- `extra_hosts`, like `docker run --add-host`, lets you add entries to a container's `/etc/hosts` file.
- `pid: host`, like `docker run --pid=host`, lets you reuse the same PID namespace as the host machine.
- `cpuset`, like `docker run --cpuset-cpus`, lets you specify which CPUs to allow execution in.
- `read_only`, like `docker run --read-only`, lets you mount a container's filesystem as read-only.
- `security_opt`, like `docker run --security-opt`, lets you specify [security options](https://docs.docker.com/reference/run/#security-configuration).
- `log_driver`, like `docker run --log-driver`, lets you specify a [log driver](https://docs.docker.com/reference/run/#logging-drivers-log-driver).

Many bugs have been fixed, including the following:

- The output of `docker-compose run` was sometimes truncated, especially when running under Jenkins.
- A service's volumes would sometimes not update after volume configuration was changed in `docker-compose.yml`.
- Authenticating against third-party registries would sometimes fail.
- `docker-compose run --rm` would fail to remove the container if the service had a `restart` policy in place.
- `docker-compose scale` would refuse to scale a service beyond 1 container if it exposed a specific port number on the host.
- Compose would refuse to create multiple volume entries with the same host path.

Thanks @ahromis, @albers, @aleksandr-vin, @antoineco, @ccverak, @chernjie, @dnephin, @edmorley, @fordhurley, @josephpage, @KyleJamesWalker, @lsowen, @mchasal, @noironetworks, @sdake, @sdurrheimer, @sherter, @stephenlawrence, @thaJeztah, @thieman, @turtlemonvh, @twhiteman, @vdemeester, @xuxinkun and @zwily!

1.2.0 (2015-04-16)
------------------
@@ -1,6 +1,8 @@
# Contributing to Compose

Compose is a part of the Docker project, and follows the same rules and principles. Take a read of [Docker's contributing guidelines](https://github.com/docker/docker/blob/master/CONTRIBUTING.md) to get an overview.

## TL;DR

@@ -17,22 +19,32 @@
If you're looking to contribute to Compose but you're new to the project or maybe even to Python, here are the steps that should get you started.

1. Fork [https://github.com/docker/compose](https://github.com/docker/compose) to your username.
2. Clone your forked repository locally: `git clone git@github.com:yourusername/compose.git`.
3. Enter the local directory: `cd compose`.
4. Set up a development environment by running `python setup.py develop`. This will install the dependencies and set up a symlink from your `docker-compose` executable to the checkout of the repository. When you now run `docker-compose` from anywhere on your machine, it will run your development version of Compose.

## Running the test suite

Use the test script to run linting checks and then the full test suite against different Python interpreters:

    $ script/test

Tests are run against a Docker daemon inside a container, so that we can test against multiple Docker versions. By default they'll run against only the latest Docker version - set the `DOCKER_VERSIONS` environment variable to "all" to run against all supported versions:

    $ DOCKER_VERSIONS=all script/test

Arguments to `script/test` are passed through to the `nosetests` executable, so you can specify a test directory, file, module, class or method:

    $ script/test tests/unit
    $ script/test tests/unit/cli_test.py
@@ -41,35 +53,34 @@

## Building binaries

`script/build-linux` will build the Linux binary inside a Docker container:

    $ script/build-linux

`script/build-osx` will build the Mac OS X binary inside a virtualenv:

    $ script/build-osx

Note that this only works on Mountain Lion, not Mavericks, due to a [bug in PyInstaller](http://www.pyinstaller.org/ticket/807). For official releases, you should build inside a Mountain Lion VM for proper compatibility. Run this script first to prepare the environment before building - it will use Homebrew to make sure Python is installed and up-to-date:

    $ script/prepare-osx

## Release process

1. Open a pull request that:

   - Updates the version in `compose/__init__.py`
   - Updates the binary URL in `docs/install.md`
   - Updates the script URL in `docs/completion.md`
   - Adds release notes to `CHANGES.md`

2. Create an unpublished GitHub release with the release notes
3. Build the Linux version on any Docker host with `script/build-linux` and attach it to the release
4. Build the OS X version on Mountain Lion with `script/build-osx` and attach it to the release as `docker-compose-Darwin-x86_64` and `docker-compose-Linux-x86_64`.
5. Publish the GitHub release, creating the tag
6. Update the website with `script/deploy-docs`
7. Upload the PyPI package:

       $ git checkout $VERSION
51 Dockerfile
@@ -3,9 +3,11 @@ FROM debian:wheezy
RUN set -ex; \
    apt-get update -qq; \
    apt-get install -y \
        python \
        python-pip \
        python-dev \
        gcc \
        make \
        zlib1g \
        zlib1g-dev \
        libssl-dev \
        git \
        apt-transport-https \
        ca-certificates \
@@ -15,16 +17,47 @@ RUN set -ex; \
    ; \
    rm -rf /var/lib/apt/lists/*

ENV ALL_DOCKER_VERSIONS 1.3.3 1.4.1 1.5.0

# Build Python 2.7.9 from source
RUN set -ex; \
    curl -LO https://www.python.org/ftp/python/2.7.9/Python-2.7.9.tgz; \
    tar -xzf Python-2.7.9.tgz; \
    cd Python-2.7.9; \
    ./configure --enable-shared; \
    make; \
    make install; \
    cd ..; \
    rm -rf /Python-2.7.9; \
    rm Python-2.7.9.tgz

# Make libpython findable
ENV LD_LIBRARY_PATH /usr/local/lib

# Install setuptools
RUN set -ex; \
    curl -LO https://bootstrap.pypa.io/ez_setup.py; \
    python ez_setup.py; \
    rm ez_setup.py

# Install pip
RUN set -ex; \
    curl -LO https://pypi.python.org/packages/source/p/pip/pip-7.0.1.tar.gz; \
    tar -xzf pip-7.0.1.tar.gz; \
    cd pip-7.0.1; \
    python setup.py install; \
    cd ..; \
    rm -rf pip-7.0.1; \
    rm pip-7.0.1.tar.gz

ENV ALL_DOCKER_VERSIONS 1.6.0 1.7.0

RUN set -ex; \
    for v in ${ALL_DOCKER_VERSIONS}; do \
        curl https://get.docker.com/builds/Linux/x86_64/docker-$v -o /usr/local/bin/docker-$v; \
        chmod +x /usr/local/bin/docker-$v; \
    done
    curl https://get.docker.com/builds/Linux/x86_64/docker-1.6.0 -o /usr/local/bin/docker-1.6.0; \
    chmod +x /usr/local/bin/docker-1.6.0; \
    curl https://test.docker.com/builds/Linux/x86_64/docker-1.7.0 -o /usr/local/bin/docker-1.7.0; \
    chmod +x /usr/local/bin/docker-1.7.0

# Set the default Docker to be run
RUN ln -s /usr/local/bin/docker-1.3.3 /usr/local/bin/docker
RUN ln -s /usr/local/bin/docker-1.6.0 /usr/local/bin/docker

RUN useradd -d /home/user -m -s /bin/bash user
WORKDIR /code/
57 README.md
@@ -1,45 +1,35 @@
Docker Compose
==============
[](http://jenkins.dockerproject.com/job/Compose%20Master/)
*(Previously known as Fig)*

Compose is a tool for defining and running complex applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up in a single command which does everything that needs to be done to get it running.

Compose is a tool for defining and running multi-container applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up in a single command which does everything that needs to be done to get it running.

Compose is great for development environments, staging servers, and CI. We don't recommend that you use it in production yet.

Using Compose is basically a three-step process.

First, you define your app's environment with a `Dockerfile` so it can be reproduced anywhere:

```Dockerfile
FROM python:2.7
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code
CMD python app.py
```

Next, you define the services that make up your app in `docker-compose.yml` so they can be run together in an isolated environment:

```yaml
web:
  build: .
  links:
    - db
  ports:
    - "8000:8000"
db:
  image: postgres
```

1. Define your app's environment with a `Dockerfile` so it can be reproduced anywhere.
2. Define the services that make up your app in `docker-compose.yml` so they can be run together in an isolated environment.
3. Lastly, run `docker-compose up` and Compose will start and run your entire app.

A `docker-compose.yml` looks like this:

    web:
      build: .
      ports:
        - "5000:5000"
      volumes:
        - .:/code
      links:
        - redis
    redis:
      image: redis

Lastly, run `docker-compose up` and Compose will start and run your entire app.

Compose has commands for managing the whole lifecycle of your application:

@@ -52,4 +42,11 @@ Installation and documentation
------------------------------

- Full documentation is available on [Docker's website](http://docs.docker.com/compose/).
- Hop into #docker-compose on Freenode if you have any questions.
- If you have any questions, you can talk in real-time with other developers in the #docker-compose IRC channel on Freenode. [Click here to join using IRCCloud.](https://www.irccloud.com/invite?hostname=irc.freenode.net&channel=%23docker-compose)

Contributing
------------

[](http://jenkins.dockerproject.org/job/Compose%20Master/)

Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md).
45 SWARM.md
@@ -9,43 +9,24 @@ Still, Compose and Swarm can be useful in a “batch processing” scenario (whe

A number of things need to happen before full integration is achieved, which are documented below.

Re-deploying containers with `docker-compose up`
------------------------------------------------

Repeated invocations of `docker-compose up` will not work reliably when used against a Swarm cluster because of an under-the-hood design problem; [this will be fixed](https://github.com/docker/fig/pull/972) in the next version of Compose. For now, containers must be completely removed and re-created:

    $ docker-compose kill
    $ docker-compose rm --force
    $ docker-compose up

Links and networking
--------------------

The primary thing stopping multi-container apps from working seamlessly on Swarm is getting them to talk to one another: enabling private communication between containers on different hosts hasn’t been solved in a non-hacky way.

Long-term, networking is [getting overhauled](https://github.com/docker/docker/issues/9983) in such a way that it’ll fit the multi-host model much better. For now, containers on different hosts cannot be linked. In the next version of Compose, linked services will be automatically scheduled on the same host; for now, this must be done manually (see “Co-scheduling containers” below).

Long-term, networking is [getting overhauled](https://github.com/docker/docker/issues/9983) in such a way that it’ll fit the multi-host model much better. For now, **linked containers are automatically scheduled on the same host**.

Building
--------

`docker build` against a Swarm cluster is not implemented, so for now the `build` option will not work - you will need to manually build your service's image, push it somewhere and use `image` to instruct Compose to pull it. Here's an example using the Docker Hub:

    $ docker build -t myusername/web .
    $ docker push myusername/web
    $ cat docker-compose.yml
    web:
      image: myusername/web
      links: ["db"]
    db:
      image: postgres
    $ docker-compose up -d

`volumes_from` and `net: container`
-----------------------------------

For containers to share volumes or a network namespace, they must be scheduled on the same host - this is, after all, inherent to how both volumes and network namespaces work. In the next version of Compose, this co-scheduling will be automatic whenever `volumes_from` or `net: "container:..."` is specified; for now, containers which share volumes or a network namespace must be co-scheduled manually (see “Co-scheduling containers” below).

Co-scheduling containers
------------------------

For now, containers can be manually scheduled on the same host using Swarm’s [affinity filters](https://github.com/docker/swarm/blob/master/scheduler/filter/README.md#affinity-filter). Here’s a simple example:

```yaml
web:
  image: my-web-image
  links: ["db"]
  environment:
    - "affinity:container==myproject_db_*"
db:
  image: postgres
```

Here, we express an affinity filter on all web containers, saying that each one must run alongside a container whose name begins with `myproject_db_`.

- `myproject` is the common prefix Compose gives to all containers in your project, which is either generated from the name of the current directory or specified with `-p` or the `DOCKER_COMPOSE_PROJECT_NAME` environment variable.
- `*` is a wildcard, which works just like filename wildcards in a Unix shell.
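The `*` wildcard in affinity expressions behaves like shell filename globbing. As an illustrative sketch (the container names below are hypothetical examples, not output from a real cluster), Python's standard `fnmatch` module implements the same matching rules:

```python
from fnmatch import fnmatch

# Hypothetical container names for a project named "myproject"
names = ["myproject_db_1", "myproject_db_2", "myproject_web_1"]

# An affinity like container==myproject_db_* matches shell-style,
# so only the db containers satisfy it
matches = [n for n in names if fnmatch(n, "myproject_db_*")]
print(matches)  # → ['myproject_db_1', 'myproject_db_2']
```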
@@ -1,4 +1,3 @@
from __future__ import unicode_literals
from .service import Service  # noqa:flake8

__version__ = '1.2.0'
__version__ = '1.3.0'
@@ -10,7 +10,7 @@ from .. import config
from ..project import Project
from ..service import ConfigError
from .docopt_command import DocoptCommand
from .utils import call_silently, is_mac, is_ubuntu
from .utils import call_silently, is_mac, is_ubuntu, find_candidates_in_parent_dirs
from .docker_client import docker_client
from . import verbose_proxy
from . import errors
@@ -18,6 +18,13 @@ from .. import __version__

log = logging.getLogger(__name__)

SUPPORTED_FILENAMES = [
    'docker-compose.yml',
    'docker-compose.yaml',
    'fig.yml',
    'fig.yaml',
]


class Command(DocoptCommand):
    base_dir = '.'
@@ -100,20 +107,10 @@ class Command(DocoptCommand):
        if file_path:
            return os.path.join(self.base_dir, file_path)

        supported_filenames = [
            'docker-compose.yml',
            'docker-compose.yaml',
            'fig.yml',
            'fig.yaml',
        ]

        def expand(filename):
            return os.path.join(self.base_dir, filename)

        candidates = [filename for filename in supported_filenames if os.path.exists(expand(filename))]
        (candidates, path) = find_candidates_in_parent_dirs(SUPPORTED_FILENAMES, self.base_dir)

        if len(candidates) == 0:
            raise errors.ComposeFileNotFound(supported_filenames)
            raise errors.ComposeFileNotFound(SUPPORTED_FILENAMES)

        winner = candidates[0]

@@ -130,4 +127,4 @@ class Command(DocoptCommand):
        log.warning("%s is deprecated and will not be supported in future. "
                    "Please rename your config file to docker-compose.yml\n" % winner)

        return expand(winner)
        return os.path.join(path, winner)
@@ -32,4 +32,4 @@ def docker_client():
    )

    timeout = int(os.environ.get('DOCKER_CLIENT_TIMEOUT', 60))
    return Client(base_url=base_url, tls=tls_config, version='1.15', timeout=timeout)
    return Client(base_url=base_url, tls=tls_config, version='1.18', timeout=timeout)
@@ -33,6 +33,8 @@ class DocoptCommand(object):
        if command is None:
            raise SystemExit(getdoc(self))

        command = command.replace('-', '_')

        if not hasattr(self, command):
            raise NoSuchCommand(command, self)
@@ -58,7 +58,7 @@ class ConnectionErrorGeneric(UserError):
class ComposeFileNotFound(UserError):
    def __init__(self, supported_filenames):
        super(ComposeFileNotFound, self).__init__("""
Can't find a suitable configuration file. Are you in the right directory?
Can't find a suitable configuration file in this directory or any parent. Are you in the right directory?

Supported filenames: %s
""" % ", ".join(supported_filenames))
@@ -10,16 +10,16 @@ import sys
from docker.errors import APIError
import dockerpty

from .. import __version__
from .. import legacy
from ..project import NoSuchService, ConfigurationError
from ..service import BuildError, CannotBeScaledError
from ..service import BuildError, NeedsBuildError
from ..config import parse_environment
from .command import Command
from .docopt_command import NoSuchCommand
from .errors import UserError
from .formatter import Formatter
from .log_printer import LogPrinter
from .utils import yesno
from .utils import get_version_info, yesno

log = logging.getLogger(__name__)

@@ -32,7 +32,7 @@ def main():
    except KeyboardInterrupt:
        log.error("\nAborting.")
        sys.exit(1)
    except (UserError, NoSuchService, ConfigurationError) as e:
    except (UserError, NoSuchService, ConfigurationError, legacy.LegacyContainersError) as e:
        log.error(e.msg)
        sys.exit(1)
    except NoSuchCommand as e:
@@ -46,6 +46,9 @@ def main():
    except BuildError as e:
        log.error("Service '%s' failed to build: %s" % (e.service.name, e.reason))
        sys.exit(1)
    except NeedsBuildError as e:
        log.error("Service '%s' needs to be built, but --no-build was passed." % e.service.name)
        sys.exit(1)


def setup_logging():
@@ -68,38 +71,39 @@ def parse_doc_section(name, source):


class TopLevelCommand(Command):
    """Fast, isolated development environments using Docker.
    """Define and run multi-container applications with Docker.

    Usage:
      docker-compose [options] [COMMAND] [ARGS...]
      docker-compose -h|--help

    Options:
      --verbose                 Show more output
      --version                 Print version and exit
      -f, --file FILE           Specify an alternate compose file (default: docker-compose.yml)
      -p, --project-name NAME   Specify an alternate project name (default: directory name)
      --verbose                 Show more output
      -v, --version             Print version and exit

    Commands:
      build     Build or rebuild services
      help      Get help on a command
      kill      Kill containers
      logs      View output from containers
      port      Print the public port for a port binding
      ps        List containers
      pull      Pulls service images
      rm        Remove stopped containers
      run       Run a one-off command
      scale     Set number of containers for a service
      start     Start services
      stop      Stop services
      restart   Restart services
      up        Create and start containers
      build              Build or rebuild services
      help               Get help on a command
      kill               Kill containers
      logs               View output from containers
      port               Print the public port for a port binding
      ps                 List containers
      pull               Pulls service images
      restart            Restart services
      rm                 Remove stopped containers
      run                Run a one-off command
      scale              Set number of containers for a service
      start              Start services
      stop               Stop services
      up                 Create and start containers
      migrate-to-labels  Recreate containers to add labels

    """
    def docopt_options(self):
        options = super(TopLevelCommand, self).docopt_options()
        options['version'] = "docker-compose %s" % __version__
        options['version'] = get_version_info()
        return options

    def build(self, project, options):
@@ -108,7 +112,7 @@ class TopLevelCommand(Command):

        Services are built once and then tagged as `project_service`,
        e.g. `composetest_db`. If you change a service's `Dockerfile` or the
        contents of its build directory, you can run `compose build` to rebuild it.
        contents of its build directory, you can run `docker-compose build` to rebuild it.

        Usage: build [options] [SERVICE...]

@@ -165,13 +169,14 @@ class TopLevelCommand(Command):
        Usage: port [options] SERVICE PRIVATE_PORT

        Options:
            --protocol=proto  tcp or udp (defaults to tcp)
            --protocol=proto  tcp or udp [default: tcp]
            --index=index     index of the container if there are multiple
                              instances of a service (defaults to 1)
                              instances of a service [default: 1]
        """
        index = int(options.get('--index'))
        service = project.get_service(options['SERVICE'])
        try:
            container = service.get_container(number=options.get('--index') or 1)
            container = service.get_container(number=index)
        except ValueError as e:
            raise UserError(str(e))
        print(container.get_local_port(
@@ -295,9 +300,8 @@ class TopLevelCommand(Command):
            project.up(
                service_names=deps,
                start_deps=True,
                recreate=False,
                allow_recreate=False,
                insecure_registry=insecure_registry,
                detach=options['-d']
            )

        tty = True
@@ -317,14 +321,14 @@ class TopLevelCommand(Command):
        }

        if options['-e']:
            # Merge environment from config with -e command line
            container_options['environment'] = dict(
                parse_environment(service.options.get('environment')),
                **parse_environment(options['-e']))
            container_options['environment'] = parse_environment(options['-e'])

        if options['--entrypoint']:
            container_options['entrypoint'] = options.get('--entrypoint')

        if options['--rm']:
            container_options['restart'] = None

        if options['--user']:
            container_options['user'] = options.get('--user')

@@ -332,6 +336,7 @@ class TopLevelCommand(Command):
            container_options['ports'] = []

        container = service.create_container(
            quiet=True,
            one_off=True,
            insecure_registry=insecure_registry,
            **container_options
@@ -341,11 +346,9 @@ class TopLevelCommand(Command):
            service.start_container(container)
            print(container.name)
        else:
            service.start_container(container)
            dockerpty.start(project.client, container.id, interactive=not options['-T'])
            exit_code = container.wait()
            if options['--rm']:
                log.info("Removing %s..." % container.name)
                project.client.remove_container(container.id)
            sys.exit(exit_code)

@@ -369,15 +372,7 @@ class TopLevelCommand(Command):
        except ValueError:
            raise UserError('Number of containers for service "%s" is not a '
                            'number' % service_name)
        try:
            project.get_service(service_name).scale(num)
        except CannotBeScaledError:
            raise UserError(
                'Service "%s" cannot be scaled because it specifies a port '
                'on the host. If multiple containers for this service were '
                'created, the port would clash.\n\nRemove the ":" from the '
                'port definition in docker-compose.yml so Docker can choose a random '
                'port for each container.' % service_name)
        project.get_service(service_name).scale(num)

    def start(self, project, options):
        """
@@ -440,6 +435,8 @@ class TopLevelCommand(Command):
                                   print new container names.
            --no-color             Produce monochrome output.
            --no-deps              Don't start linked services.
            --x-smart-recreate     Only recreate containers whose configuration or
                                   image needs to be updated. (EXPERIMENTAL)
            --no-recreate          If containers already exist, don't recreate them.
            --no-build             Don't build an image, even if it's missing
            -t, --timeout TIMEOUT  When attached, use this timeout in seconds
@@ -452,15 +449,16 @@ class TopLevelCommand(Command):
        monochrome = options['--no-color']

        start_deps = not options['--no-deps']
        recreate = not options['--no-recreate']
        allow_recreate = not options['--no-recreate']
        smart_recreate = options['--x-smart-recreate']
        service_names = options['SERVICE']

        project.up(
            service_names=service_names,
            start_deps=start_deps,
            recreate=recreate,
            allow_recreate=allow_recreate,
            smart_recreate=smart_recreate,
            insecure_registry=insecure_registry,
            detach=detached,
            do_build=not options['--no-build'],
        )

@@ -483,6 +481,14 @@ class TopLevelCommand(Command):
        params = {} if timeout is None else {'timeout': int(timeout)}
        project.stop(service_names=service_names, **params)

    def migrate_to_labels(self, project, _options):
        """
        Recreate containers to add labels

        Usage: migrate-to-labels
        """
        legacy.migrate_project_to_labels(project)


def list_containers(containers):
    return ", ".join(c.name for c in containers)
@@ -5,6 +5,9 @@ import datetime
import os
import subprocess
import platform
import ssl

from .. import __version__


def yesno(prompt, default=None):
@@ -62,6 +65,25 @@ def mkdir(path, permissions=0o700):
    return path


def find_candidates_in_parent_dirs(filenames, path):
    """
    Given a directory path to start, looks for filenames in the
    directory, and then each parent directory successively,
    until found.

    Returns tuple (candidates, path).
    """
    candidates = [filename for filename in filenames
                  if os.path.exists(os.path.join(path, filename))]

    if len(candidates) == 0:
        parent_dir = os.path.join(path, '..')
        if os.path.abspath(parent_dir) != os.path.abspath(path):
            return find_candidates_in_parent_dirs(filenames, parent_dir)

    return (candidates, path)


def split_buffer(reader, separator):
    """
    Given a generator which yields strings and a separator string,
@@ -101,3 +123,11 @@ def is_mac():

def is_ubuntu():
    return platform.system() == 'Linux' and platform.linux_distribution()[0] == 'Ubuntu'


def get_version_info():
    return '\n'.join([
        'docker-compose version: %s' % __version__,
        "%s version: %s" % (platform.python_implementation(), platform.python_version()),
        "OpenSSL version: %s" % ssl.OPENSSL_VERSION,
    ])
@@ -7,22 +7,30 @@ DOCKER_CONFIG_KEYS = [
    'cap_add',
    'cap_drop',
    'cpu_shares',
    'cpuset',
    'command',
    'detach',
    'devices',
    'dns',
    'dns_search',
    'domainname',
    'entrypoint',
    'env_file',
    'environment',
    'extra_hosts',
    'read_only',
    'hostname',
    'image',
    'labels',
    'links',
    'mem_limit',
    'net',
    'log_driver',
    'pid',
    'ports',
    'privileged',
    'restart',
    'security_opt',
    'stdin_open',
    'tty',
    'user',
@@ -33,20 +41,25 @@ DOCKER_CONFIG_KEYS = [

ALLOWED_KEYS = DOCKER_CONFIG_KEYS + [
    'build',
    'dockerfile',
    'expose',
    'external_links',
    'name',
]

DOCKER_CONFIG_HINTS = {
    'cpu_share': 'cpu_shares',
    'add_host': 'extra_hosts',
    'hosts': 'extra_hosts',
    'extra_host': 'extra_hosts',
    'device': 'devices',
    'link': 'links',
    'port': 'ports',
    'privilege': 'privileged',
    'priviliged': 'privileged',
    'privilige': 'privileged',
    'volume': 'volumes',
    'workdir': 'working_dir',
}
|
||||
|
||||
|
||||
@ -63,6 +76,7 @@ def from_dictionary(dictionary, working_dir=None, filename=None):
|
||||
raise ConfigurationError('Service "%s" doesn\'t have any configuration options. All top level keys in your docker-compose.yml must map to a dictionary of configuration options.' % service_name)
|
||||
loader = ServiceLoader(working_dir=working_dir, filename=filename)
|
||||
service_dict = loader.make_service_dict(service_name, service_dict)
|
||||
validate_paths(service_dict)
|
||||
service_dicts.append(service_dict)
|
||||
|
||||
return service_dicts
|
||||
@ -174,6 +188,9 @@ def process_container_options(service_dict, working_dir=None):
|
||||
if 'build' in service_dict:
|
||||
service_dict['build'] = resolve_build_path(service_dict['build'], working_dir=working_dir)
|
||||
|
||||
if 'labels' in service_dict:
|
||||
service_dict['labels'] = parse_labels(service_dict['labels'])
|
||||
|
||||
return service_dict
|
||||
|
||||
|
||||
@ -186,10 +203,19 @@ def merge_service_dicts(base, override):
|
||||
override.get('environment'),
|
||||
)
|
||||
|
||||
if 'volumes' in base or 'volumes' in override:
|
||||
d['volumes'] = merge_volumes(
|
||||
base.get('volumes'),
|
||||
override.get('volumes'),
|
||||
path_mapping_keys = ['volumes', 'devices']
|
||||
|
||||
for key in path_mapping_keys:
|
||||
if key in base or key in override:
|
||||
d[key] = merge_path_mappings(
|
||||
base.get(key),
|
||||
override.get(key),
|
||||
)
|
||||
|
||||
if 'labels' in base or 'labels' in override:
|
||||
d['labels'] = merge_labels(
|
||||
base.get('labels'),
|
||||
override.get('labels'),
|
||||
)
|
||||
|
||||
if 'image' in override and 'build' in d:
|
||||
@ -210,7 +236,7 @@ def merge_service_dicts(base, override):
|
||||
if key in base or key in override:
|
||||
d[key] = to_list(base.get(key)) + to_list(override.get(key))
|
||||
|
||||
already_merged_keys = ['environment', 'volumes'] + list_keys + list_or_string_keys
|
||||
already_merged_keys = ['environment', 'labels'] + path_mapping_keys + list_keys + list_or_string_keys
|
||||
|
||||
for k in set(ALLOWED_KEYS) - set(already_merged_keys):
|
||||
if k in override:
|
||||
@ -326,7 +352,7 @@ def resolve_host_paths(volumes, working_dir=None):
|
||||
|
||||
|
||||
def resolve_host_path(volume, working_dir):
|
||||
container_path, host_path = split_volume(volume)
|
||||
container_path, host_path = split_path_mapping(volume)
|
||||
if host_path is not None:
|
||||
host_path = os.path.expanduser(host_path)
|
||||
host_path = os.path.expandvars(host_path)
|
||||
@ -338,32 +364,34 @@ def resolve_host_path(volume, working_dir):
|
||||
def resolve_build_path(build_path, working_dir=None):
|
||||
if working_dir is None:
|
||||
raise Exception("No working_dir passed to resolve_build_path")
|
||||
|
||||
_path = expand_path(working_dir, build_path)
|
||||
if not os.path.exists(_path) or not os.access(_path, os.R_OK):
|
||||
raise ConfigurationError("build path %s either does not exist or is not accessible." % _path)
|
||||
else:
|
||||
return _path
|
||||
return expand_path(working_dir, build_path)
|
||||
|
||||
|
||||
def merge_volumes(base, override):
|
||||
d = dict_from_volumes(base)
|
||||
d.update(dict_from_volumes(override))
|
||||
return volumes_from_dict(d)
|
||||
def validate_paths(service_dict):
|
||||
if 'build' in service_dict:
|
||||
build_path = service_dict['build']
|
||||
if not os.path.exists(build_path) or not os.access(build_path, os.R_OK):
|
||||
raise ConfigurationError("build path %s either does not exist or is not accessible." % build_path)
|
||||
|
||||
|
||||
def dict_from_volumes(volumes):
|
||||
if volumes:
|
||||
return dict(split_volume(v) for v in volumes)
|
||||
def merge_path_mappings(base, override):
|
||||
d = dict_from_path_mappings(base)
|
||||
d.update(dict_from_path_mappings(override))
|
||||
return path_mappings_from_dict(d)
|
||||
|
||||
|
||||
def dict_from_path_mappings(path_mappings):
|
||||
if path_mappings:
|
||||
return dict(split_path_mapping(v) for v in path_mappings)
|
||||
else:
|
||||
return {}
|
||||
|
||||
|
||||
def volumes_from_dict(d):
|
||||
return [join_volume(v) for v in d.items()]
|
||||
def path_mappings_from_dict(d):
|
||||
return [join_path_mapping(v) for v in d.items()]
|
||||
|
||||
|
||||
def split_volume(string):
|
||||
def split_path_mapping(string):
|
||||
if ':' in string:
|
||||
(host, container) = string.split(':', 1)
|
||||
return (container, host)
|
||||
@ -371,7 +399,7 @@ def split_volume(string):
|
||||
return (string, None)
|
||||
|
||||
|
||||
def join_volume(pair):
|
||||
def join_path_mapping(pair):
|
||||
(container, host) = pair
|
||||
if host is None:
|
||||
return container
|
||||
@ -379,6 +407,35 @@ def join_volume(pair):
|
||||
return ":".join((host, container))
|
||||
|
||||
|
||||
def merge_labels(base, override):
|
||||
labels = parse_labels(base)
|
||||
labels.update(parse_labels(override))
|
||||
return labels
|
||||
|
||||
|
||||
def parse_labels(labels):
|
||||
if not labels:
|
||||
return {}
|
||||
|
||||
if isinstance(labels, list):
|
||||
return dict(split_label(e) for e in labels)
|
||||
|
||||
if isinstance(labels, dict):
|
||||
return labels
|
||||
|
||||
raise ConfigurationError(
|
||||
"labels \"%s\" must be a list or mapping" %
|
||||
labels
|
||||
)
|
||||
|
||||
|
||||
def split_label(label):
|
||||
if '=' in label:
|
||||
return label.split('=', 1)
|
||||
else:
|
||||
return label, ''
|
||||
|
||||
|
||||
def expand_path(working_dir, path):
|
||||
return os.path.abspath(os.path.join(working_dir, path))
|
||||
|
||||
|
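The volume/device merging in this hunk keys each mapping by its container path, so an override file's mapping for the same container path replaces the base file's. A self-contained sketch of the round trip, reproducing `split_path_mapping` and `join_path_mapping` from the diff above plus a simplified, dependency-free `merge_path_mappings`:

```python
def split_path_mapping(string):
    # "host:container" -> (container, host); a bare "container" path -> (container, None)
    if ':' in string:
        host, container = string.split(':', 1)
        return (container, host)
    return (string, None)


def join_path_mapping(pair):
    # Inverse of split_path_mapping.
    container, host = pair
    if host is None:
        return container
    return ":".join((host, container))


def merge_path_mappings(base, override):
    # Keyed by container path, so the override's host path wins.
    d = dict(split_path_mapping(v) for v in base or [])
    d.update(dict(split_path_mapping(v) for v in override or []))
    return [join_path_mapping(item) for item in d.items()]
```

Because the dict is keyed on the container path, `['/a:/data']` merged with `['/b:/data']` yields a single `/b:/data` mapping.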
7 compose/const.py Normal file
@ -0,0 +1,7 @@

LABEL_CONTAINER_NUMBER = 'com.docker.compose.container-number'
LABEL_ONE_OFF = 'com.docker.compose.oneoff'
LABEL_PROJECT = 'com.docker.compose.project'
LABEL_SERVICE = 'com.docker.compose.service'
LABEL_VERSION = 'com.docker.compose.version'
LABEL_CONFIG_HASH = 'com.docker.compose.config-hash'
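These label constants are what the rest of the commit filters on. A minimal sketch of how the project-level filter strings are built, mirroring the `Project.labels()` method added later in this diff (the function name `project_filter_labels` is illustrative):

```python
LABEL_PROJECT = 'com.docker.compose.project'
LABEL_ONE_OFF = 'com.docker.compose.oneoff'


def project_filter_labels(project_name, one_off=False):
    # Build "key=value" label filters identifying all containers that
    # belong to one Compose project, split by one-off vs. service runs.
    return [
        '{0}={1}'.format(LABEL_PROJECT, project_name),
        '{0}={1}'.format(LABEL_ONE_OFF, "True" if one_off else "False"),
    ]
```

These strings are the format docker's label filter expects, so they can be passed as `filters={'label': ...}` when listing containers.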
@ -4,6 +4,8 @@ from __future__ import absolute_import
import six
from functools import reduce

from .const import LABEL_CONTAINER_NUMBER, LABEL_SERVICE


class Container(object):
    """

@ -44,6 +46,10 @@ class Container(object):
    def image(self):
        return self.dictionary['Image']

    @property
    def image_config(self):
        return self.client.inspect_image(self.image)

    @property
    def short_id(self):
        return self.id[:10]

@ -54,14 +60,15 @@ class Container(object):

    @property
    def name_without_project(self):
        return '_'.join(self.dictionary['Name'].split('_')[1:])
        return '{0}_{1}'.format(self.labels.get(LABEL_SERVICE), self.number)

    @property
    def number(self):
        try:
            return int(self.name.split('_')[-1])
        except ValueError:
            return None
        number = self.labels.get(LABEL_CONTAINER_NUMBER)
        if not number:
            raise ValueError("Container {0} does not have a {1} label".format(
                self.short_id, LABEL_CONTAINER_NUMBER))
        return int(number)

    @property
    def ports(self):

@ -79,6 +86,14 @@ class Container(object):
        return ', '.join(format_port(*item)
                         for item in sorted(six.iteritems(self.ports)))

    @property
    def labels(self):
        return self.get('Config.Labels') or {}

    @property
    def log_config(self):
        return self.get('HostConfig.LogConfig') or None

    @property
    def human_readable_state(self):
        if self.is_running:

@ -126,8 +141,8 @@ class Container(object):
    def kill(self, **options):
        return self.client.kill(self.id, **options)

    def restart(self):
        return self.client.restart(self.id)
    def restart(self, **options):
        return self.client.restart(self.id, **options)

    def remove(self, **options):
        return self.client.remove_container(self.id, **options)

@ -147,6 +162,7 @@ class Container(object):
        self.has_been_inspected = True
        return self.dictionary

    # TODO: only used by tests, move to test module
    def links(self):
        links = []
        for container in self.client.containers():

@ -163,13 +179,16 @@ class Container(object):
        return self.client.attach_socket(self.id, **kwargs)

    def __repr__(self):
        return '<Container: %s>' % self.name
        return '<Container: %s (%s)>' % (self.name, self.id[:6])

    def __eq__(self, other):
        if type(self) != type(other):
            return False
        return self.id == other.id

    def __hash__(self):
        return self.id.__hash__()


def get_container_name(container):
    if not container.get('Name') and not container.get('Names'):
122 compose/legacy.py Normal file
@ -0,0 +1,122 @@
import logging
import re

from .container import get_container_name, Container


log = logging.getLogger(__name__)


# TODO: remove this section when migrate_project_to_labels is removed
NAME_RE = re.compile(r'^([^_]+)_([^_]+)_(run_)?(\d+)$')

ERROR_MESSAGE_FORMAT = """
Compose found the following containers without labels:

{names_list}

As of Compose 1.3.0, containers are identified with labels instead of naming convention. If you want to continue using these containers, run:

    $ docker-compose migrate-to-labels

Alternatively, remove them:

    $ docker rm -f {rm_args}
"""


def check_for_legacy_containers(
        client,
        project,
        services,
        stopped=False,
        one_off=False):
    """Check if there are containers named using the old naming convention
    and warn the user that those containers may need to be migrated to
    using labels, so that compose can find them.
    """
    containers = list(get_legacy_containers(
        client,
        project,
        services,
        stopped=stopped,
        one_off=one_off))

    if containers:
        raise LegacyContainersError([c.name for c in containers])


class LegacyContainersError(Exception):
    def __init__(self, names):
        self.names = names

        self.msg = ERROR_MESSAGE_FORMAT.format(
            names_list="\n".join("    {}".format(name) for name in names),
            rm_args=" ".join(names),
        )

    def __unicode__(self):
        return self.msg

    __str__ = __unicode__


def add_labels(project, container):
    project_name, service_name, one_off, number = NAME_RE.match(container.name).groups()
    if project_name != project.name or service_name not in project.service_names:
        return
    service = project.get_service(service_name)
    service.recreate_container(container)


def migrate_project_to_labels(project):
    log.info("Running migration to labels for project %s", project.name)

    containers = get_legacy_containers(
        project.client,
        project.name,
        project.service_names,
        stopped=True,
        one_off=False)

    for container in containers:
        add_labels(project, container)


def get_legacy_containers(
        client,
        project,
        services,
        stopped=False,
        one_off=False):

    containers = client.containers(all=stopped)

    for service in services:
        for container in containers:
            name = get_container_name(container)
            if has_container(project, service, name, one_off=one_off):
                yield Container.from_ps(client, container)


def has_container(project, service, name, one_off=False):
    if not name or not is_valid_name(name, one_off):
        return False
    container_project, container_service, _container_number = parse_name(name)
    return container_project == project and container_service == service


def is_valid_name(name, one_off=False):
    match = NAME_RE.match(name)
    if match is None:
        return False
    if one_off:
        return match.group(3) == 'run_'
    else:
        return match.group(3) is None


def parse_name(name):
    match = NAME_RE.match(name)
    (project, service_name, _, suffix) = match.groups()
    return (project, service_name, int(suffix))
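All of the legacy detection above hinges on `NAME_RE` matching the pre-1.3.0 naming convention `<project>_<service>_[run_]<number>`. A standalone sketch, reproducing `NAME_RE` and `parse_name` from the file above, showing how old-style names parse:

```python
import re

# Pre-1.3.0 container names: <project>_<service>_[run_]<number>,
# where "run_" marks a one-off container.
NAME_RE = re.compile(r'^([^_]+)_([^_]+)_(run_)?(\d+)$')


def parse_name(name):
    # Split an old-style name into its (project, service, number) parts.
    match = NAME_RE.match(name)
    project, service_name, _, suffix = match.groups()
    return (project, service_name, int(suffix))
```

A regular service container like `myapp_web_1` parses to `('myapp', 'web', 1)`, while `myapp_web_run_2` additionally captures the `run_` marker, which is how `is_valid_name` distinguishes one-off containers.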
@ -74,8 +74,9 @@ def print_output_event(event, stream, is_terminal):
        stream.write("%s %s%s" % (status, event['progress'], terminator))
    elif 'progressDetail' in event:
        detail = event['progressDetail']
        if 'current' in detail:
            percentage = float(detail['current']) / float(detail['total']) * 100
        total = detail.get('total')
        if 'current' in detail and total:
            percentage = float(detail['current']) / float(total) * 100
            stream.write('%s (%.1f%%)%s' % (status, percentage, terminator))
        else:
            stream.write('%s%s' % (status, terminator))
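The hunk above guards against pull progress events whose `progressDetail` lacks a `total` (or has a zero one), which previously raised a `KeyError` or `ZeroDivisionError`. A hypothetical helper (`format_progress` is illustrative, not part of Compose) showing the same guard in isolation:

```python
def format_progress(status, detail):
    # Only compute a percentage when both 'current' and a truthy 'total'
    # are present; Docker pull progress events may omit 'total' or send 0.
    total = detail.get('total')
    if 'current' in detail and total:
        percentage = float(detail['current']) / float(total) * 100
        return '%s (%.1f%%)' % (status, percentage)
    return status
```

With the guard, an event like `{'current': 50}` degrades to the bare status line instead of crashing.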
@ -1,12 +1,15 @@
from __future__ import unicode_literals
from __future__ import absolute_import
import logging

from functools import reduce

from docker.errors import APIError

from .config import get_service_name_from_net, ConfigurationError
from .const import LABEL_PROJECT, LABEL_SERVICE, LABEL_ONE_OFF
from .service import Service
from .container import Container
from docker.errors import APIError
from .legacy import check_for_legacy_containers

log = logging.getLogger(__name__)

@ -60,6 +63,12 @@ class Project(object):
        self.services = services
        self.client = client

    def labels(self, one_off=False):
        return [
            '{0}={1}'.format(LABEL_PROJECT, self.name),
            '{0}={1}'.format(LABEL_ONE_OFF, "True" if one_off else "False"),
        ]

    @classmethod
    def from_dicts(cls, name, service_dicts, client):
        """

@ -75,6 +84,10 @@ class Project(object):
                volumes_from=volumes_from, **service_dict))
        return project

    @property
    def service_names(self):
        return [service.name for service in self.services]

    def get_service(self, name):
        """
        Retrieve a service by name. Raises NoSuchService

@ -102,7 +115,7 @@ class Project(object):
        """
        if service_names is None or len(service_names) == 0:
            return self.get_services(
                service_names=[s.name for s in self.services],
                service_names=self.service_names,
                include_deps=include_deps
            )
        else:

@ -158,7 +171,7 @@ class Project(object):
            try:
                net = Container.from_id(self.client, net_name)
            except APIError:
                raise ConfigurationError('Serivce "%s" is trying to use the network of "%s", which is not the name of a service or container.' % (service_dict['name'], net_name))
                raise ConfigurationError('Service "%s" is trying to use the network of "%s", which is not the name of a service or container.' % (service_dict['name'], net_name))
        else:
            net = service_dict['net']

@ -195,26 +208,62 @@ class Project(object):
    def up(self,
           service_names=None,
           start_deps=True,
           recreate=True,
           allow_recreate=True,
           smart_recreate=False,
           insecure_registry=False,
           detach=False,
           do_build=True):
        running_containers = []
        for service in self.get_services(service_names, include_deps=start_deps):
            if recreate:
                for (_, container) in service.recreate_containers(
                        insecure_registry=insecure_registry,
                        detach=detach,
                        do_build=do_build):
                    running_containers.append(container)
            else:
                for container in service.start_or_create_containers(
                        insecure_registry=insecure_registry,
                        detach=detach,
                        do_build=do_build):
                    running_containers.append(container)

        return running_containers
        services = self.get_services(service_names, include_deps=start_deps)

        plans = self._get_convergence_plans(
            services,
            allow_recreate=allow_recreate,
            smart_recreate=smart_recreate,
        )

        return [
            container
            for service in services
            for container in service.execute_convergence_plan(
                plans[service.name],
                insecure_registry=insecure_registry,
                do_build=do_build,
            )
        ]

    def _get_convergence_plans(self,
                               services,
                               allow_recreate=True,
                               smart_recreate=False):

        plans = {}

        for service in services:
            updated_dependencies = [
                name
                for name in service.get_dependency_names()
                if name in plans
                and plans[name].action == 'recreate'
            ]

            if updated_dependencies:
                log.debug(
                    '%s has upstream changes (%s)',
                    service.name, ", ".join(updated_dependencies),
                )
                plan = service.convergence_plan(
                    allow_recreate=allow_recreate,
                    smart_recreate=False,
                )
            else:
                plan = service.convergence_plan(
                    allow_recreate=allow_recreate,
                    smart_recreate=smart_recreate,
                )

            plans[service.name] = plan

        return plans

    def pull(self, service_names=None, insecure_registry=False):
        for service in self.get_services(service_names, include_deps=True):

@ -225,16 +274,29 @@ class Project(object):
            service.remove_stopped(**options)

    def containers(self, service_names=None, stopped=False, one_off=False):
        return [Container.from_ps(self.client, container)
                for container in self.client.containers(all=stopped)
                for service in self.get_services(service_names)
                if service.has_container(container, one_off=one_off)]
        containers = [
            Container.from_ps(self.client, container)
            for container in self.client.containers(
                all=stopped,
                filters={'label': self.labels(one_off=one_off)})]

        def matches_service_names(container):
            if not service_names:
                return True
            return container.labels.get(LABEL_SERVICE) in service_names

        if not containers:
            check_for_legacy_containers(
                self.client,
                self.name,
                self.service_names,
                stopped=stopped,
                one_off=one_off)

        return filter(matches_service_names, containers)

    def _inject_deps(self, acc, service):
        net_name = service.get_net_name()
        dep_names = (service.get_linked_names() +
                     service.get_volumes_from_names() +
                     ([net_name] if net_name else []))
        dep_names = service.get_dependency_names()

        if len(dep_names) > 0:
            dep_services = self.get_services(
@ -3,16 +3,27 @@ from __future__ import absolute_import
|
||||
from collections import namedtuple
|
||||
import logging
|
||||
import re
|
||||
from operator import attrgetter
|
||||
import sys
|
||||
from operator import attrgetter
|
||||
|
||||
import six
|
||||
|
||||
from docker.errors import APIError
|
||||
from docker.utils import create_host_config
|
||||
from docker.utils import create_host_config, LogConfig
|
||||
|
||||
from .config import DOCKER_CONFIG_KEYS
|
||||
from .container import Container, get_container_name
|
||||
from . import __version__
|
||||
from .config import DOCKER_CONFIG_KEYS, merge_environment
|
||||
from .const import (
|
||||
LABEL_CONTAINER_NUMBER,
|
||||
LABEL_ONE_OFF,
|
||||
LABEL_PROJECT,
|
||||
LABEL_SERVICE,
|
||||
LABEL_VERSION,
|
||||
LABEL_CONFIG_HASH,
|
||||
)
|
||||
from .container import Container
|
||||
from .legacy import check_for_legacy_containers
|
||||
from .progress_stream import stream_output, StreamOutputError
|
||||
from .utils import json_hash
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
@ -20,12 +31,19 @@ log = logging.getLogger(__name__)
|
||||
DOCKER_START_KEYS = [
|
||||
'cap_add',
|
||||
'cap_drop',
|
||||
'devices',
|
||||
'dns',
|
||||
'dns_search',
|
||||
'env_file',
|
||||
'extra_hosts',
|
||||
'read_only',
|
||||
'net',
|
||||
'log_driver',
|
||||
'pid',
|
||||
'privileged',
|
||||
'restart',
|
||||
'volumes_from',
|
||||
'security_opt',
|
||||
]
|
||||
|
||||
VALID_NAME_CHARS = '[a-zA-Z0-9]'
|
||||
@ -37,20 +55,24 @@ class BuildError(Exception):
|
||||
self.reason = reason
|
||||
|
||||
|
||||
class CannotBeScaledError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class ConfigError(ValueError):
|
||||
pass
|
||||
|
||||
|
||||
class NeedsBuildError(Exception):
|
||||
def __init__(self, service):
|
||||
self.service = service
|
||||
|
||||
|
||||
VolumeSpec = namedtuple('VolumeSpec', 'external internal mode')
|
||||
|
||||
|
||||
ServiceName = namedtuple('ServiceName', 'project service number')
|
||||
|
||||
|
||||
ConvergencePlan = namedtuple('ConvergencePlan', 'action containers')
|
||||
|
||||
|
||||
class Service(object):
|
||||
def __init__(self, name, client=None, project='default', links=None, external_links=None, volumes_from=None, net=None, **options):
|
||||
if not re.match('^%s+$' % VALID_NAME_CHARS, name):
|
||||
@ -59,6 +81,8 @@ class Service(object):
|
||||
raise ConfigError('Invalid project name "%s" - only %s are allowed' % (project, VALID_NAME_CHARS))
|
||||
if 'image' in options and 'build' in options:
|
||||
raise ConfigError('Service %s has both an image and build path specified. A service can either be built to image or use an existing image, not both.' % name)
|
||||
if 'image' not in options and 'build' not in options:
|
||||
raise ConfigError('Service %s has neither an image nor a build path specified. Exactly one must be provided.' % name)
|
||||
|
||||
self.name = name
|
||||
self.client = client
|
||||
@ -70,28 +94,29 @@ class Service(object):
|
||||
self.options = options
|
||||
|
||||
def containers(self, stopped=False, one_off=False):
|
||||
return [Container.from_ps(self.client, container)
|
||||
for container in self.client.containers(all=stopped)
|
||||
if self.has_container(container, one_off=one_off)]
|
||||
containers = [
|
||||
Container.from_ps(self.client, container)
|
||||
for container in self.client.containers(
|
||||
all=stopped,
|
||||
filters={'label': self.labels(one_off=one_off)})]
|
||||
|
||||
def has_container(self, container, one_off=False):
|
||||
"""Return True if `container` was created to fulfill this service."""
|
||||
name = get_container_name(container)
|
||||
if not name or not is_valid_name(name, one_off):
|
||||
return False
|
||||
project, name, _number = parse_name(name)
|
||||
return project == self.project and name == self.name
|
||||
if not containers:
|
||||
check_for_legacy_containers(
|
||||
self.client,
|
||||
self.project,
|
||||
[self.name],
|
||||
stopped=stopped,
|
||||
one_off=one_off)
|
||||
|
||||
return containers
|
||||
|
||||
def get_container(self, number=1):
|
||||
"""Return a :class:`compose.container.Container` for this service. The
|
||||
container must be active, and match `number`.
|
||||
"""
|
||||
for container in self.client.containers():
|
||||
if not self.has_container(container):
|
||||
continue
|
||||
_, _, container_number = parse_name(get_container_name(container))
|
||||
if container_number == number:
|
||||
return Container.from_ps(self.client, container)
|
||||
labels = self.labels() + ['{0}={1}'.format(LABEL_CONTAINER_NUMBER, number)]
|
||||
for container in self.client.containers(filters={'label': labels}):
|
||||
return Container.from_ps(self.client, container)
|
||||
|
||||
raise ValueError("No container found for %s_%s" % (self.name, number))
|
||||
|
||||
@ -125,13 +150,14 @@ class Service(object):
|
||||
- removes all stopped containers
|
||||
"""
|
||||
if not self.can_be_scaled():
|
||||
raise CannotBeScaledError()
|
||||
log.warn('Service %s specifies a port on the host. If multiple containers '
|
||||
'for this service are created on a single host, the port will clash.'
|
||||
% self.name)
|
||||
|
||||
# Create enough containers
|
||||
containers = self.containers(stopped=True)
|
||||
while len(containers) < desired_num:
|
||||
log.info("Creating %s..." % self._next_container_name(containers))
|
||||
containers.append(self.create_container(detach=True))
|
||||
containers.append(self.create_container())
|
||||
|
||||
running_containers = []
|
||||
stopped_containers = []
|
||||
@ -169,65 +195,168 @@ class Service(object):
|
||||
one_off=False,
|
||||
insecure_registry=False,
|
||||
do_build=True,
|
||||
intermediate_container=None,
|
||||
previous_container=None,
|
||||
number=None,
|
||||
quiet=False,
|
||||
**override_options):
|
||||
"""
|
||||
Create a container for this service. If the image doesn't exist, attempt to pull
|
||||
it.
|
||||
"""
|
||||
container_options = self._get_container_create_options(
|
||||
override_options,
|
||||
one_off=one_off,
|
||||
intermediate_container=intermediate_container,
|
||||
self.ensure_image_exists(
|
||||
do_build=do_build,
|
||||
insecure_registry=insecure_registry,
|
||||
)
|
||||
|
||||
if (do_build and
|
||||
self.can_be_built() and
|
||||
not self.client.images(name=self.full_name)):
|
||||
self.build()
|
||||
container_options = self._get_container_create_options(
|
||||
override_options,
|
||||
number or self._next_container_number(one_off=one_off),
|
||||
one_off=one_off,
|
||||
previous_container=previous_container,
|
||||
)
|
||||
|
||||
if 'name' in container_options and not quiet:
|
||||
log.info("Creating %s..." % container_options['name'])
|
||||
|
||||
return Container.create(self.client, **container_options)
|
||||
|
||||
def ensure_image_exists(self,
|
||||
do_build=True,
|
||||
insecure_registry=False):
|
||||
|
||||
if self.image():
|
||||
return
|
||||
|
||||
if self.can_be_built():
|
||||
if do_build:
|
||||
self.build()
|
||||
else:
|
||||
raise NeedsBuildError(self)
|
||||
else:
|
||||
self.pull(insecure_registry=insecure_registry)
|
||||
|
||||
def image(self):
|
||||
try:
|
||||
return Container.create(self.client, **container_options)
|
||||
return self.client.inspect_image(self.image_name)
|
||||
except APIError as e:
|
||||
if e.response.status_code == 404 and e.explanation and 'No such image' in str(e.explanation):
|
||||
log.info('Pulling image %s...' % container_options['image'])
|
||||
output = self.client.pull(
|
||||
container_options['image'],
|
||||
stream=True,
|
||||
insecure_registry=insecure_registry
|
||||
)
|
||||
stream_output(output, sys.stdout)
|
||||
return Container.create(self.client, **container_options)
|
||||
raise
|
||||
return None
|
||||
else:
|
||||
raise
|
||||
|
||||
def recreate_containers(self, insecure_registry=False, do_build=True, **override_options):
|
||||
@property
|
||||
def image_name(self):
|
||||
if self.can_be_built():
|
||||
return self.full_name
|
||||
else:
|
||||
return self.options['image']
|
||||
|
||||
def converge(self,
|
||||
allow_recreate=True,
|
||||
smart_recreate=False,
|
||||
insecure_registry=False,
|
||||
do_build=True):
|
||||
"""
|
||||
If a container for this service doesn't exist, create and start one. If there are
|
||||
any, stop them, create+start new ones, and remove the old containers.
|
||||
"""
|
||||
plan = self.convergence_plan(
|
||||
allow_recreate=allow_recreate,
|
||||
smart_recreate=smart_recreate,
|
||||
)
|
||||
|
||||
return self.execute_convergence_plan(
|
||||
plan,
|
||||
insecure_registry=insecure_registry,
|
||||
do_build=do_build,
|
||||
)
|
||||
|
||||
def convergence_plan(self,
|
||||
allow_recreate=True,
|
||||
smart_recreate=False):
|
||||
|
||||
containers = self.containers(stopped=True)
|
||||
|
||||
if not containers:
|
||||
log.info("Creating %s..." % self._next_container_name(containers))
|
||||
return ConvergencePlan('create', [])
|
||||
|
||||
if smart_recreate and not self._containers_have_diverged(containers):
|
||||
stopped = [c for c in containers if not c.is_running]
|
||||
|
||||
if stopped:
|
||||
return ConvergencePlan('start', stopped)
|
||||
|
||||
return ConvergencePlan('noop', containers)
|
||||
|
||||
if not allow_recreate:
|
||||
return ConvergencePlan('start', containers)
|
||||
|
||||
return ConvergencePlan('recreate', containers)
|
||||
|
||||
def _containers_have_diverged(self, containers):
|
||||
config_hash = self.config_hash()
|
||||
has_diverged = False
|
||||
|
||||
for c in containers:
|
||||
container_config_hash = c.labels.get(LABEL_CONFIG_HASH, None)
|
||||
if container_config_hash != config_hash:
|
||||
log.debug(
|
||||
'%s has diverged: %s != %s',
|
||||
c.name, container_config_hash, config_hash,
|
||||
)
|
||||
has_diverged = True
|
||||
|
||||
return has_diverged
|
||||
|
||||
     def execute_convergence_plan(self,
                                  plan,
                                  insecure_registry=False,
                                  do_build=True):
         (action, containers) = plan

         if action == 'create':
             container = self.create_container(
                 insecure_registry=insecure_registry,
                 do_build=do_build,
-                **override_options)
+            )
             self.start_container(container)
-            return [(None, container)]
-        else:
-            tuples = []
+
+            return [container]
+
+        elif action == 'recreate':
+            return [
+                self.recreate_container(
+                    c,
+                    insecure_registry=insecure_registry,
+                )
+                for c in containers
+            ]
+
+        elif action == 'start':
             for c in containers:
-                log.info("Recreating %s..." % c.name)
-                tuples.append(self.recreate_container(c, insecure_registry=insecure_registry, **override_options))
+                self.start_container_if_stopped(c)
-
-            return tuples
+
+            return containers

-    def recreate_container(self, container, **override_options):
-        """Recreate a container. An intermediate container is created so that
-        the new container has the same name, while still supporting
-        `volumes-from` the original container.
+        elif action == 'noop':
+            for c in containers:
+                log.info("%s is up-to-date" % c.name)
+
+            return containers
+
+        else:
+            raise Exception("Invalid action: {}".format(action))

+    def recreate_container(self,
+                           container,
+                           insecure_registry=False):
+        """Recreate a container.
+
+        The original container is renamed to a temporary name so that data
+        volumes can be copied to the new container, before the original
+        container is removed.
         """
         log.info("Recreating %s..." % container.name)
         try:
             container.stop()
         except APIError as e:
@@ -238,29 +367,21 @@ class Service(object):
             else:
                 raise

-        intermediate_container = Container.create(
-            self.client,
-            image=container.image,
-            entrypoint=['/bin/echo'],
-            command=[],
-            detach=True,
-            host_config=create_host_config(volumes_from=[container.id]),
-        )
-        intermediate_container.start()
-        intermediate_container.wait()
-        container.remove()
+        # Use a hopefully unique container name by prepending the short id
+        self.client.rename(
+            container.id,
+            '%s_%s' % (container.short_id, container.name))

-        options = dict(override_options)
         new_container = self.create_container(
             insecure_registry=insecure_registry,
             do_build=False,
-            intermediate_container=intermediate_container,
-            **options
+            previous_container=container,
+            number=container.labels.get(LABEL_CONTAINER_NUMBER),
+            quiet=True,
         )
         self.start_container(new_container)
-
-        intermediate_container.remove()
-
-        return (intermediate_container, new_container)
+        container.remove()
+        return new_container
     def start_container_if_stopped(self, container):
         if container.is_running:
@@ -273,23 +394,20 @@ class Service(object):
             container.start()
         return container

-    def start_or_create_containers(
-        self,
-        insecure_registry=False,
-        detach=False,
-        do_build=True):
-        containers = self.containers(stopped=True)
+    def config_hash(self):
+        return json_hash(self.config_dict())

-        if not containers:
-            log.info("Creating %s..." % self._next_container_name(containers))
-            new_container = self.create_container(
-                insecure_registry=insecure_registry,
-                detach=detach,
-                do_build=do_build,
-            )
-            return [self.start_container(new_container)]
-        else:
-            return [self.start_container_if_stopped(c) for c in containers]
+    def config_dict(self):
+        return {
+            'options': self.options,
+            'image_id': self.image()['Id'],
+        }
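The `config_hash`/`config_dict` pair added above is what powers the experimental `--x-smart-recreate` flag: a hash of the service's options and image ID is stored in a container label, and a container only needs recreating when the stored hash no longer matches the current config. A minimal sketch of that comparison (the simplified `json_hash` and the `needs_recreate` helper are illustrations, not Compose's actual classes; the label key follows the `com.docker.compose.*` convention this release introduces):

```python
import hashlib
import json


def json_hash(obj):
    # Stable serialization so the same config always hashes the same.
    dump = json.dumps(obj, sort_keys=True, separators=(',', ':'))
    return hashlib.sha256(dump.encode('utf-8')).hexdigest()


def needs_recreate(service_config, container_labels,
                   hash_label='com.docker.compose.config-hash'):
    # Recreate when the label is missing or the stored hash is stale.
    return container_labels.get(hash_label) != json_hash(service_config)


config = {'options': {'image': 'busybox', 'command': 'top'},
          'image_id': 'sha256:abc'}
labels = {'com.docker.compose.config-hash': json_hash(config)}

print(needs_recreate(config, labels))   # unchanged config -> False
config['options']['command'] = 'sleep 300'
print(needs_recreate(config, labels))   # changed config -> True
```

Because only the hash is compared, `up --x-smart-recreate` can skip unchanged services without diffing their full configuration.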
     def get_dependency_names(self):
         net_name = self.get_net_name()
         return (self.get_linked_names() +
                 self.get_volumes_from_names() +
                 ([net_name] if net_name else []))

     def get_linked_names(self):
         return [s.name for (s, _) in self.links]
@@ -303,14 +421,19 @@ class Service(object):
         else:
             return

-    def _next_container_name(self, all_containers, one_off=False):
-        bits = [self.project, self.name]
-        if one_off:
-            bits.append('run')
-        return '_'.join(bits + [str(self._next_container_number(all_containers))])
+    def get_container_name(self, number, one_off=False):
+        # TODO: Implement issue #652 here
+        return build_container_name(self.project, self.name, number, one_off)

-    def _next_container_number(self, all_containers):
-        numbers = [parse_name(c.name).number for c in all_containers]
+    # TODO: this would benefit from github.com/docker/docker/pull/11943
+    # to remove the need to inspect every container
+    def _next_container_number(self, one_off=False):
+        numbers = [
+            Container.from_ps(self.client, container).number
+            for container in self.client.containers(
+                all=True,
+                filters={'label': self.labels(one_off=one_off)})
+        ]
         return 1 if not numbers else max(numbers) + 1
     def _get_links(self, link_to_self):
@@ -333,7 +456,7 @@ class Service(object):
                 links.append((external_link, link_name))
         return links

-    def _get_volumes_from(self, intermediate_container=None):
+    def _get_volumes_from(self):
         volumes_from = []
         for volume_source in self.volumes_from:
             if isinstance(volume_source, Service):
@@ -346,9 +469,6 @@ class Service(object):
             elif isinstance(volume_source, Container):
                 volumes_from.append(volume_source.id)

-        if intermediate_container:
-            volumes_from.append(intermediate_container.id)
-
         return volumes_from

     def _get_net(self):
@@ -370,15 +490,31 @@ class Service(object):

         return net
-    def _get_container_create_options(self, override_options, one_off=False, intermediate_container=None):
+    def _get_container_create_options(
+            self,
+            override_options,
+            number,
+            one_off=False,
+            previous_container=None):
+
+        add_config_hash = (not one_off and not override_options)
+
         container_options = dict(
             (k, self.options[k])
             for k in DOCKER_CONFIG_KEYS if k in self.options)
         container_options.update(override_options)

-        container_options['name'] = self._next_container_name(
-            self.containers(stopped=True, one_off=one_off),
-            one_off)
+        container_options['name'] = self.get_container_name(number, one_off)
+
+        if add_config_hash:
+            config_hash = self.config_hash()
+            if 'labels' not in container_options:
+                container_options['labels'] = {}
+            container_options['labels'][LABEL_CONFIG_HASH] = config_hash
+            log.debug("Added config hash: %s" % config_hash)

         if 'detach' not in container_options:
             container_options['detach'] = True

         # If a qualified hostname was given, split it into an
         # unqualified hostname and a domainname unless domainname
@@ -403,36 +539,49 @@ class Service(object):
             ports.append(port)
             container_options['ports'] = ports

+        override_options['binds'] = merge_volume_bindings(
+            container_options.get('volumes') or [],
+            previous_container)
+
         if 'volumes' in container_options:
             container_options['volumes'] = dict(
                 (parse_volume_spec(v).internal, {})
                 for v in container_options['volumes'])

-        if self.can_be_built():
-            container_options['image'] = self.full_name
-        else:
-            container_options['image'] = self._get_image_name(container_options['image'])
+        container_options['environment'] = merge_environment(
+            self.options.get('environment'),
+            override_options.get('environment'))
+
+        if previous_container:
+            container_options['environment']['affinity:container'] = ('=' + previous_container.id)
+
+        container_options['image'] = self.image_name
+
+        container_options['labels'] = build_container_labels(
+            container_options.get('labels', {}),
+            self.labels(one_off=one_off),
+            number)

         # Delete options which are only used when starting
         for key in DOCKER_START_KEYS:
             container_options.pop(key, None)

-        container_options['host_config'] = self._get_container_host_config(override_options, one_off=one_off, intermediate_container=intermediate_container)
+        container_options['host_config'] = self._get_container_host_config(
+            override_options,
+            one_off=one_off)

         return container_options
-    def _get_container_host_config(self, override_options, one_off=False, intermediate_container=None):
+    def _get_container_host_config(self, override_options, one_off=False):
         options = dict(self.options, **override_options)
         port_bindings = build_port_bindings(options.get('ports') or [])

-        volume_bindings = dict(
-            build_volume_binding(parse_volume_spec(volume))
-            for volume in options.get('volumes') or []
-            if ':' in volume)
-
         privileged = options.get('privileged', False)
         cap_add = options.get('cap_add', None)
         cap_drop = options.get('cap_drop', None)
         log_config = LogConfig(type=options.get('log_driver', 'json-file'))
         pid = options.get('pid', None)
         security_opt = options.get('security_opt', None)

         dns = options.get('dns', None)
         if isinstance(dns, six.string_types):
@@ -444,35 +593,43 @@ class Service(object):

         restart = parse_restart_spec(options.get('restart', None))

         extra_hosts = build_extra_hosts(options.get('extra_hosts', None))
         read_only = options.get('read_only', None)

         devices = options.get('devices', None)

         return create_host_config(
             links=self._get_links(link_to_self=one_off),
             port_bindings=port_bindings,
-            binds=volume_bindings,
-            volumes_from=self._get_volumes_from(intermediate_container),
+            binds=options.get('binds'),
+            volumes_from=self._get_volumes_from(),
             privileged=privileged,
             network_mode=self._get_net(),
             devices=devices,
             dns=dns,
             dns_search=dns_search,
             restart_policy=restart,
             cap_add=cap_add,
             cap_drop=cap_drop,
             log_config=log_config,
             extra_hosts=extra_hosts,
             read_only=read_only,
             pid_mode=pid,
             security_opt=security_opt
         )

-    def _get_image_name(self, image):
-        repo, tag = parse_repository_tag(image)
-        if tag == "":
-            tag = "latest"
-        return '%s:%s' % (repo, tag)
     def build(self, no_cache=False):
         log.info('Building %s...' % self.name)

+        path = six.binary_type(self.options['build'])
+
         build_output = self.client.build(
-            self.options['build'],
-            tag=self.full_name,
+            path=path,
+            tag=self.image_name,
             stream=True,
             rm=True,
             nocache=no_cache,
             dockerfile=self.options.get('dockerfile', None),
         )

         try:
@@ -480,6 +637,11 @@ class Service(object):
         except StreamOutputError as e:
             raise BuildError(self, unicode(e))

+        # Ensure the HTTP connection is not reused for another
+        # streaming command, as the Docker daemon can sometimes
+        # complain about it
+        self.client.close()
+
         image_id = None

         for event in all_events:
@@ -503,6 +665,13 @@ class Service(object):
         """
         return '%s_%s' % (self.project, self.name)
+    def labels(self, one_off=False):
+        return [
+            '{0}={1}'.format(LABEL_PROJECT, self.project),
+            '{0}={1}'.format(LABEL_SERVICE, self.name),
+            '{0}={1}'.format(LABEL_ONE_OFF, "True" if one_off else "False")
+        ]
+
     def can_be_scaled(self):
         for port in self.options.get('ports', []):
             if ':' in str(port):
@@ -510,48 +679,91 @@ class Service(object):
         return True
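The filters produced by `labels()` are plain `key=value` strings that get passed to the daemon's label filtering. As a standalone illustration (the `LABEL_*` constants here are stand-ins assumed to follow the `com.docker.compose.*` convention, not imports from Compose):

```python
# Stand-ins for compose's label constants (assumed values).
LABEL_PROJECT = 'com.docker.compose.project'
LABEL_SERVICE = 'com.docker.compose.service'
LABEL_ONE_OFF = 'com.docker.compose.oneoff'


def service_labels(project, service, one_off=False):
    # Mirrors Service.labels(): the filters used to find this
    # service's containers without relying on container names.
    return [
        '{0}={1}'.format(LABEL_PROJECT, project),
        '{0}={1}'.format(LABEL_SERVICE, service),
        '{0}={1}'.format(LABEL_ONE_OFF, "True" if one_off else "False"),
    ]


print(service_labels('myapp', 'web'))
```

Filtering by these labels instead of parsing names is what makes the new container tracking both faster and friendlier to external tooling.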
     def pull(self, insecure_registry=False):
-        if 'image' in self.options:
-            image_name = self._get_image_name(self.options['image'])
-            log.info('Pulling %s (%s)...' % (self.name, image_name))
-            self.client.pull(
-                image_name,
-                insecure_registry=insecure_registry
-            )
+        if 'image' not in self.options:
+            return
+
+        repo, tag = parse_repository_tag(self.options['image'])
+        tag = tag or 'latest'
+        log.info('Pulling %s (%s:%s)...' % (self.name, repo, tag))
+        output = self.client.pull(
+            repo,
+            tag=tag,
+            stream=True,
+            insecure_registry=insecure_registry)
+        stream_output(output, sys.stdout)
-NAME_RE = re.compile(r'^([^_]+)_([^_]+)_(run_)?(\d+)$')
+# Names


-def is_valid_name(name, one_off=False):
-    match = NAME_RE.match(name)
-    if match is None:
-        return False
+def build_container_name(project, service, number, one_off=False):
+    bits = [project, service]
     if one_off:
-        return match.group(3) == 'run_'
-    else:
-        return match.group(3) is None
+        bits.append('run')
+    return '_'.join(bits + [str(number)])
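Under the new scheme a container name is a pure function of project, service, and instance number, rather than something that has to be parsed back out with a regex. Re-declaring the helper above so the snippet stands alone:

```python
def build_container_name(project, service, number, one_off=False):
    # Compose names: <project>_<service>_<n>, with an extra "run"
    # segment for one-off containers created by `docker-compose run`.
    bits = [project, service]
    if one_off:
        bits.append('run')
    return '_'.join(bits + [str(number)])


print(build_container_name('myapp', 'web', 1))                 # myapp_web_1
print(build_container_name('myapp', 'web', 2, one_off=True))   # myapp_web_run_2
```

The removed `NAME_RE`/`is_valid_name` pair went in the other direction, recovering structure from the name string; labels make that round trip unnecessary.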
-def parse_name(name):
-    match = NAME_RE.match(name)
-    (project, service_name, _, suffix) = match.groups()
-    return ServiceName(project, service_name, int(suffix))
+# Images
-def parse_restart_spec(restart_config):
-    if not restart_config:
-        return None
-    parts = restart_config.split(':')
-    if len(parts) > 2:
-        raise ConfigError("Restart %s has incorrect format, should be "
-                          "mode[:max_retry]" % restart_config)
-    if len(parts) == 2:
-        name, max_retry_count = parts
-    else:
-        name, = parts
-        max_retry_count = 0
+def parse_repository_tag(s):
+    if ":" not in s:
+        return s, ""
+    repo, tag = s.rsplit(":", 1)
+    if "/" in tag:
+        return s, ""
+    return repo, tag
-
-    return {'Name': name, 'MaximumRetryCount': int(max_retry_count)}
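`parse_repository_tag` has one subtlety worth spelling out: a colon only separates a tag when the part after it contains no `/`, so a registry host with a port is never mistaken for a tag. A standalone copy:

```python
def parse_repository_tag(s):
    # Split "repo:tag", but leave "host:port/image" untouched.
    if ":" not in s:
        return s, ""
    repo, tag = s.rsplit(":", 1)
    if "/" in tag:
        return s, ""
    return repo, tag


print(parse_repository_tag("ubuntu:14.04"))           # ('ubuntu', '14.04')
print(parse_repository_tag("localhost:5000/ubuntu"))  # ('localhost:5000/ubuntu', '')
```

Callers such as `pull()` then default an empty tag to `latest`.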
+# Volumes


+def merge_volume_bindings(volumes_option, previous_container):
+    """Return a list of volume bindings for a container. Container data volumes
+    are replaced by those from the previous container.
+    """
+    volume_bindings = dict(
+        build_volume_binding(parse_volume_spec(volume))
+        for volume in volumes_option or []
+        if ':' in volume)
+
+    if previous_container:
+        volume_bindings.update(
+            get_container_data_volumes(previous_container, volumes_option))
+
+    return volume_bindings.values()


+def get_container_data_volumes(container, volumes_option):
+    """Find the container data volumes that are in `volumes_option`, and return
+    a mapping of volume bindings for those volumes.
+    """
+    volumes = []
+
+    volumes_option = volumes_option or []
+    container_volumes = container.get('Volumes') or {}
+    image_volumes = container.image_config['ContainerConfig'].get('Volumes') or {}
+
+    for volume in set(volumes_option + image_volumes.keys()):
+        volume = parse_volume_spec(volume)
+        # No need to preserve host volumes
+        if volume.external:
+            continue
+
+        volume_path = container_volumes.get(volume.internal)
+        # New volume, doesn't exist in the old container
+        if not volume_path:
+            continue
+
+        # Copy existing volume from old container
+        volume = volume._replace(external=volume_path)
+        volumes.append(build_volume_binding(volume))
+
+    return dict(volumes)


+def build_volume_binding(volume_spec):
+    return volume_spec.internal, "{}:{}:{}".format(*volume_spec)
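`build_volume_binding` keys each bind by its in-container path, which is precisely what lets `merge_volume_bindings` overwrite a config-supplied bind with the preserved one from the previous container. A self-contained sketch (the `VolumeSpec` namedtuple mirrors the field order `parse_volume_spec` returns, but is declared here as an assumption):

```python
from collections import namedtuple

# Field order assumed to match parse_volume_spec's return value.
VolumeSpec = namedtuple('VolumeSpec', 'external internal mode')


def build_volume_binding(volume_spec):
    # Key by the internal path so a later binding for the same
    # path (e.g. from the previous container) replaces this one.
    return volume_spec.internal, "{}:{}:{}".format(*volume_spec)


spec = VolumeSpec(external='/host/data', internal='/data', mode='rw')
print(build_volume_binding(spec))  # ('/data', '/host/data:/data:rw')
```

Collecting these pairs into a dict and calling `dict.update` gives "last writer wins" semantics per container path, with no intermediate container involved.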
 def parse_volume_spec(volume_config):
@@ -574,18 +786,7 @@ def parse_volume_spec(volume_config):
     return VolumeSpec(external, internal, mode)


-def parse_repository_tag(s):
-    if ":" not in s:
-        return s, ""
-    repo, tag = s.rsplit(":", 1)
-    if "/" in tag:
-        return s, ""
-    return repo, tag
-
-
-def build_volume_binding(volume_spec):
-    internal = {'bind': volume_spec.internal, 'ro': volume_spec.mode == 'ro'}
-    return volume_spec.external, internal
+# Ports
 def build_port_bindings(ports):
@@ -614,3 +815,61 @@ def split_port(port):

     external_ip, external_port, internal_port = parts
     return internal_port, (external_ip, external_port or None)
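Only the three-part branch of `split_port` is visible in this hunk. A hedged standalone sketch of port-string splitting along the same lines (the one- and two-part handling below is inferred from the visible tail, not quoted from the source):

```python
def split_port(port):
    # Accepts "8000", "8001:8000", or "127.0.0.1:8001:8000" and
    # returns (container_port, host_binding).
    parts = str(port).split(':')
    if len(parts) == 1:
        (internal_port,) = parts
        return internal_port, None
    if len(parts) == 2:
        external_port, internal_port = parts
        return internal_port, external_port
    external_ip, external_port, internal_port = parts
    return internal_port, (external_ip, external_port or None)


print(split_port("127.0.0.1:8001:8000"))  # ('8000', ('127.0.0.1', '8001'))
```

Note the `external_port or None` in the three-part case: `"127.0.0.1::8000"` binds the container port to an ephemeral host port on that address.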
+# Labels


+def build_container_labels(label_options, service_labels, number, one_off=False):
+    labels = label_options or {}
+    labels.update(label.split('=', 1) for label in service_labels)
+    labels[LABEL_CONTAINER_NUMBER] = str(number)
+    labels[LABEL_VERSION] = __version__
+    return labels
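`build_container_labels` merges user-supplied labels with Compose's own `key=value` filter strings into the final label dict stamped onto each container. A standalone illustration (the constants and version string are stand-ins for `compose.const` values and `compose.__version__`, declared here as assumptions):

```python
# Stand-ins for compose's constants (assumed values).
LABEL_CONTAINER_NUMBER = 'com.docker.compose.container-number'
LABEL_VERSION = 'com.docker.compose.version'
__version__ = '1.3.0'


def build_container_labels(label_options, service_labels, number, one_off=False):
    labels = label_options or {}
    # Each "k=v" filter string becomes an individual label entry.
    labels.update(label.split('=', 1) for label in service_labels)
    labels[LABEL_CONTAINER_NUMBER] = str(number)
    labels[LABEL_VERSION] = __version__
    return labels


labels = build_container_labels(
    {'tier': 'frontend'},
    ['com.docker.compose.project=myapp', 'com.docker.compose.service=web'],
    1)
print(labels['com.docker.compose.project'])  # myapp
```

`dict.update` accepts the generator of `['k', 'v']` pairs produced by `split('=', 1)`, so the filter strings and custom labels end up in one flat mapping.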
+# Restart policy


+def parse_restart_spec(restart_config):
+    if not restart_config:
+        return None
+    parts = restart_config.split(':')
+    if len(parts) > 2:
+        raise ConfigError("Restart %s has incorrect format, should be "
+                          "mode[:max_retry]" % restart_config)
+    if len(parts) == 2:
+        name, max_retry_count = parts
+    else:
+        name, = parts
+        max_retry_count = 0
+
+    return {'Name': name, 'MaximumRetryCount': int(max_retry_count)}
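The restart spec accepts either `mode` or `mode:max_retry` and maps directly onto Docker's `RestartPolicy` structure. Standalone (with a plain `ValueError` standing in for Compose's `ConfigError`):

```python
def parse_restart_spec(restart_config):
    # "on-failure:5" -> {'Name': 'on-failure', 'MaximumRetryCount': 5}
    if not restart_config:
        return None
    parts = restart_config.split(':')
    if len(parts) > 2:
        raise ValueError("Restart %s has incorrect format, should be "
                         "mode[:max_retry]" % restart_config)
    if len(parts) == 2:
        name, max_retry_count = parts
    else:
        (name,) = parts
        max_retry_count = 0
    return {'Name': name, 'MaximumRetryCount': int(max_retry_count)}


print(parse_restart_spec('on-failure:5'))
# {'Name': 'on-failure', 'MaximumRetryCount': 5}
```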
+# Extra hosts


+def build_extra_hosts(extra_hosts_config):
+    if not extra_hosts_config:
+        return {}
+
+    if isinstance(extra_hosts_config, list):
+        extra_hosts_dict = {}
+        for extra_hosts_line in extra_hosts_config:
+            if not isinstance(extra_hosts_line, six.string_types):
+                raise ConfigError(
+                    "extra_hosts_config \"%s\" must be either a list of strings or a string->string mapping," %
+                    extra_hosts_config
+                )
+            host, ip = extra_hosts_line.split(':')
+            extra_hosts_dict.update({host.strip(): ip.strip()})
+        extra_hosts_config = extra_hosts_dict
+
+    if isinstance(extra_hosts_config, dict):
+        return extra_hosts_config
+
+    raise ConfigError(
+        "extra_hosts_config \"%s\" must be either a list of strings or a string->string mapping," %
+        extra_hosts_config
+    )
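`build_extra_hosts` normalizes both YAML shapes of the new `extra_hosts` key (a list of `"host:ip"` strings or a plain mapping) into one dict that becomes `docker run --add-host` entries. Standalone, with `ValueError` standing in for `ConfigError` and the `six` string check simplified to `str`:

```python
def build_extra_hosts(extra_hosts_config):
    # Normalize list-of-"host:ip" or dict into a {host: ip} mapping.
    if not extra_hosts_config:
        return {}
    if isinstance(extra_hosts_config, list):
        extra_hosts_dict = {}
        for line in extra_hosts_config:
            if not isinstance(line, str):
                raise ValueError("extra_hosts must be a list of strings "
                                 "or a string->string mapping")
            host, ip = line.split(':')
            extra_hosts_dict[host.strip()] = ip.strip()
        return extra_hosts_dict
    if isinstance(extra_hosts_config, dict):
        return extra_hosts_config
    raise ValueError("extra_hosts must be a list of strings "
                     "or a string->string mapping")


print(build_extra_hosts(["somehost: 162.242.195.82"]))
# {'somehost': '162.242.195.82'}
```

The `strip()` calls are why `"host: ip"` with a space after the colon, the natural YAML spelling, still parses cleanly.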
new file: compose/state.py (0 lines)

new file: compose/utils.py (9 lines)
@@ -0,0 +1,9 @@
+import json
+import hashlib


+def json_hash(obj):
+    dump = json.dumps(obj, sort_keys=True, separators=(',', ':'))
+    h = hashlib.sha256()
+    h.update(dump)
+    return h.hexdigest()
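Because `json_hash` serializes with `sort_keys=True` and fixed separators, equivalent configs hash identically regardless of dict key order. A quick check (with `.encode()` added so the sketch also runs on Python 3; the file above targets Python 2, where `update()` accepts `str`):

```python
import hashlib
import json


def json_hash(obj):
    dump = json.dumps(obj, sort_keys=True, separators=(',', ':'))
    h = hashlib.sha256()
    h.update(dump.encode('utf-8'))  # .encode() needed on Python 3
    return h.hexdigest()


a = json_hash({'image': 'busybox', 'command': 'top'})
b = json_hash({'command': 'top', 'image': 'busybox'})
print(a == b)  # True
```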
@@ -94,7 +94,7 @@ _docker-compose_build() {
 _docker-compose_docker-compose() {
 	case "$prev" in
 		--file|-f)
-			_filedir y?(a)ml
+			_filedir "y?(a)ml"
 			return
 			;;
 		--project-name|-p)
@@ -104,7 +104,7 @@ _docker-compose_docker-compose() {

 	case "$cur" in
 		-*)
-			COMPREPLY=( $( compgen -W "--help -h --verbose --version --file -f --project-name -p" -- "$cur" ) )
+			COMPREPLY=( $( compgen -W "--help -h --verbose --version -v --file -f --project-name -p" -- "$cur" ) )
 			;;
 		*)
 			COMPREPLY=( $( compgen -W "${commands[*]}" -- "$cur" ) )
@@ -293,7 +293,7 @@ _docker-compose_up() {

 	case "$cur" in
 		-*)
-			COMPREPLY=( $( compgen -W "--allow-insecure-ssl -d --no-build --no-color --no-deps --no-recreate -t --timeout" -- "$cur" ) )
+			COMPREPLY=( $( compgen -W "--allow-insecure-ssl -d --no-build --no-color --no-deps --no-recreate -t --timeout --x-smart-recreate" -- "$cur" ) )
 			;;
 		*)
 			__docker-compose_services_all
@@ -303,11 +303,15 @@ _docker-compose_up() {


 _docker-compose() {
+	local previous_extglob_setting=$(shopt -p extglob)
+	shopt -s extglob
+
 	local commands=(
 		build
 		help
 		kill
 		logs
+		migrate-to-labels
 		port
 		ps
 		pull
@@ -352,6 +356,7 @@ _docker-compose() {
 	local completions_func=_docker-compose_${command}
 	declare -F $completions_func >/dev/null && $completions_func

+	eval "$previous_extglob_setting"
 	return 0
 }
new file: contrib/completion/zsh/_docker-compose (304 lines)
@@ -0,0 +1,304 @@
#compdef docker-compose

# Description
# -----------
#  zsh completion for docker-compose
#  https://github.com/sdurrheimer/docker-compose-zsh-completion
# -------------------------------------------------------------------------
# Version
# -------
#  0.1.0
# -------------------------------------------------------------------------
# Authors
# -------
#  * Steve Durrheimer <s.durrheimer@gmail.com>
# -------------------------------------------------------------------------
# Inspiration
# -----------
#  * @albers docker-compose bash completion script
#  * @felixr docker zsh completion script : https://github.com/felixr/docker-zsh-completion
# -------------------------------------------------------------------------

# For compatibility reasons, Compose and therefore its completion supports several
# stack composition files as listed here, in descending priority.
# Support for these filenames might be dropped in some future version.
__docker-compose_compose_file() {
	local file
	for file in docker-compose.y{,a}ml fig.y{,a}ml ; do
		[ -e $file ] && {
			echo $file
			return
		}
	done
	echo docker-compose.yml
}

# Extracts all service names from docker-compose.yml.
___docker-compose_all_services_in_compose_file() {
	local already_selected
	local -a services
	already_selected=$(echo ${words[@]} | tr " " "|")
	awk -F: '/^[a-zA-Z0-9]/{print $1}' "${compose_file:-$(__docker-compose_compose_file)}" 2>/dev/null | grep -Ev "$already_selected"
}

# All services, even those without an existing container
__docker-compose_services_all() {
	services=$(___docker-compose_all_services_in_compose_file)
	_alternative "args:services:($services)"
}

# All services that have an entry with the given key in their docker-compose.yml section
___docker-compose_services_with_key() {
	local already_selected
	local -a buildable
	already_selected=$(echo ${words[@]} | tr " " "|")
	# flatten sections to one line, then filter lines containing the key and return section name.
	awk '/^[a-zA-Z0-9]/{printf "\n"};{printf $0;next;}' "${compose_file:-$(__docker-compose_compose_file)}" 2>/dev/null | awk -F: -v key=": +$1:" '$0 ~ key {print $1}' 2>/dev/null | grep -Ev "$already_selected"
}

# All services that are defined by a Dockerfile reference
__docker-compose_services_from_build() {
	buildable=$(___docker-compose_services_with_key build)
	_alternative "args:buildable services:($buildable)"
}

# All services that are defined by an image
__docker-compose_services_from_image() {
	pullable=$(___docker-compose_services_with_key image)
	_alternative "args:pullable services:($pullable)"
}

__docker-compose_get_services() {
	local kind expl
	declare -a running stopped lines args services

	docker_status=$(docker ps > /dev/null 2>&1)
	if [ $? -ne 0 ]; then
		_message "Error! Docker is not running."
		return 1
	fi

	kind=$1
	shift
	[[ $kind = (stopped|all) ]] && args=($args -a)

	lines=(${(f)"$(_call_program commands docker ps ${args})"})
	services=(${(f)"$(_call_program commands docker-compose 2>/dev/null ${compose_file:+-f $compose_file} ${compose_project:+-p $compose_project} ps -q)"})

	# Parse header line to find columns
	local i=1 j=1 k header=${lines[1]}
	declare -A begin end
	while (( $j < ${#header} - 1 )) {
		i=$(( $j + ${${header[$j,-1]}[(i)[^ ]]} - 1))
		j=$(( $i + ${${header[$i,-1]}[(i)  ]} - 1))
		k=$(( $j + ${${header[$j,-1]}[(i)[^ ]]} - 2))
		begin[${header[$i,$(($j-1))]}]=$i
		end[${header[$i,$(($j-1))]}]=$k
	}
	lines=(${lines[2,-1]})

	# Container ID
	local line s name
	local -a names
	for line in $lines; do
		if [[ $services == *"${line[${begin[CONTAINER ID]},${end[CONTAINER ID]}]%% ##}"* ]]; then
			names=(${(ps:,:)${${line[${begin[NAMES]},-1]}%% *}})
			for name in $names; do
				s="${${name%_*}#*_}:${(l:15:: :::)${${line[${begin[CREATED]},${end[CREATED]}]/ ago/}%% ##}}"
				s="$s, ${line[${begin[CONTAINER ID]},${end[CONTAINER ID]}]%% ##}"
				s="$s, ${${${line[$begin[IMAGE],$end[IMAGE]]}/:/\\:}%% ##}"
				if [[ ${line[${begin[STATUS]},${end[STATUS]}]} = Exit* ]]; then
					stopped=($stopped $s)
				else
					running=($running $s)
				fi
			done
		fi
	done

	[[ $kind = (running|all) ]] && _describe -t services-running "running services" running
	[[ $kind = (stopped|all) ]] && _describe -t services-stopped "stopped services" stopped
}

__docker-compose_stoppedservices() {
	__docker-compose_get_services stopped "$@"
}

__docker-compose_runningservices() {
	__docker-compose_get_services running "$@"
}

__docker-compose_services () {
	__docker-compose_get_services all "$@"
}

__docker-compose_caching_policy() {
	oldp=( "$1"(Nmh+1) ) # 1 hour
	(( $#oldp ))
}

__docker-compose_commands () {
	local cache_policy

	zstyle -s ":completion:${curcontext}:" cache-policy cache_policy
	if [[ -z "$cache_policy" ]]; then
		zstyle ":completion:${curcontext}:" cache-policy __docker-compose_caching_policy
	fi

	if ( [[ ${+_docker_compose_subcommands} -eq 0 ]] || _cache_invalid docker_compose_subcommands) \
		&& ! _retrieve_cache docker_compose_subcommands;
	then
		local -a lines
		lines=(${(f)"$(_call_program commands docker-compose 2>&1)"})
		_docker_compose_subcommands=(${${${lines[$((${lines[(i)Commands:]} + 1)),${lines[(I)  *]}]}## #}/ ##/:})
		_store_cache docker_compose_subcommands _docker_compose_subcommands
	fi
	_describe -t docker-compose-commands "docker-compose command" _docker_compose_subcommands
}

__docker-compose_subcommand () {
	local -a _command_args
	integer ret=1
	case "$words[1]" in
		(build)
			_arguments \
				'--no-cache[Do not use cache when building the image]' \
				'*:services:__docker-compose_services_from_build' && ret=0
			;;
		(help)
			_arguments ':subcommand:__docker-compose_commands' && ret=0
			;;
		(kill)
			_arguments \
				'-s[SIGNAL to send to the container. Default signal is SIGKILL.]:signal:_signals' \
				'*:running services:__docker-compose_runningservices' && ret=0
			;;
		(logs)
			_arguments \
				'--no-color[Produce monochrome output.]' \
				'*:services:__docker-compose_services_all' && ret=0
			;;
		(migrate-to-labels)
			_arguments \
				'(-):Recreate containers to add labels' && ret=0
			;;
		(port)
			_arguments \
				'--protocol=-[tcp or udp (defaults to tcp)]:protocol:(tcp udp)' \
				'--index=-[index of the container if there are multiple instances of a service (defaults to 1)]:index: ' \
				'1:running services:__docker-compose_runningservices' \
				'2:port:_ports' && ret=0
			;;
		(ps)
			_arguments \
				'-q[Only display IDs]' \
				'*:services:__docker-compose_services_all' && ret=0
			;;
		(pull)
			_arguments \
				'--allow-insecure-ssl[Allow insecure connections to the docker registry]' \
				'*:services:__docker-compose_services_from_image' && ret=0
			;;
		(rm)
			_arguments \
				'(-f --force)'{-f,--force}"[Don't ask to confirm removal]" \
				'-v[Remove volumes associated with containers]' \
				'*:stopped services:__docker-compose_stoppedservices' && ret=0
			;;
		(run)
			_arguments \
				'--allow-insecure-ssl[Allow insecure connections to the docker registry]' \
				'-d[Detached mode: Run container in the background, print new container name.]' \
				'--entrypoint[Overwrite the entrypoint of the image.]:entry point: ' \
				'*-e[KEY=VAL Set an environment variable (can be used multiple times)]:environment variable KEY=VAL: ' \
				'(-u --user)'{-u,--user=-}'[Run as specified username or uid]:username or uid:_users' \
				"--no-deps[Don't start linked services.]" \
				'--rm[Remove container after run. Ignored in detached mode.]' \
				"--service-ports[Run command with the service's ports enabled and mapped to the host.]" \
				'-T[Disable pseudo-tty allocation. By default `docker-compose run` allocates a TTY.]' \
				'(-):services:__docker-compose_services' \
				'(-):command: _command_names -e' \
				'*::arguments: _normal' && ret=0
			;;
		(scale)
			_arguments '*:running services:__docker-compose_runningservices' && ret=0
			;;
		(start)
			_arguments '*:stopped services:__docker-compose_stoppedservices' && ret=0
			;;
		(stop|restart)
			_arguments \
				'(-t --timeout)'{-t,--timeout}"[Specify a shutdown timeout in seconds. (default: 10)]:seconds: " \
				'*:running services:__docker-compose_runningservices' && ret=0
			;;
		(up)
			_arguments \
				'--allow-insecure-ssl[Allow insecure connections to the docker registry]' \
				'-d[Detached mode: Run containers in the background, print new container names.]' \
				'--no-color[Produce monochrome output.]' \
				"--no-deps[Don't start linked services.]" \
				"--no-recreate[If containers already exist, don't recreate them.]" \
				"--no-build[Don't build an image, even if it's missing]" \
				'(-t --timeout)'{-t,--timeout}"[Specify a shutdown timeout in seconds. (default: 10)]:seconds: " \
				"--x-smart-recreate[Only recreate containers whose configuration or image needs to be updated. (EXPERIMENTAL)]" \
				'*:services:__docker-compose_services_all' && ret=0
			;;
		(*)
			_message 'Unknown sub command'
	esac

	return ret
}

_docker-compose () {
	# Support for subservices, which allows for `compdef _docker docker-shell=_docker_containers`.
	# Based on /usr/share/zsh/functions/Completion/Unix/_git without support for `ret`.
	if [[ $service != docker-compose ]]; then
		_call_function - _$service
		return
	fi

	local curcontext="$curcontext" state line ret=1
	typeset -A opt_args

	_arguments -C \
		'(- :)'{-h,--help}'[Get help]' \
		'--verbose[Show more output]' \
		'(- :)'{-v,--version}'[Print version and exit]' \
		'(-f --file)'{-f,--file}'[Specify an alternate docker-compose file (default: docker-compose.yml)]:file:_files -g "*.yml"' \
		'(-p --project-name)'{-p,--project-name}'[Specify an alternate project name (default: directory name)]:project name:' \
		'(-): :->command' \
		'(-)*:: :->option-or-argument' && ret=0

	local counter=1
	#local compose_file compose_project
	while [ $counter -lt ${#words[@]} ]; do
		case "${words[$counter]}" in
			-f|--file)
				(( counter++ ))
				compose_file="${words[$counter]}"
				;;
			-p|--project-name)
				(( counter++ ))
				compose_project="${words[$counter]}"
				;;
			*)
				;;
		esac
		(( counter++ ))
	done

	case $state in
		(command)
			__docker-compose_commands && ret=0
			;;
		(option-or-argument)
			curcontext=${curcontext%:*:*}:docker-compose-$words[1]:
			__docker-compose_subcommand && ret=0
			;;
	esac

	return ret
}

_docker-compose "$@"
@@ -1,15 +1,24 @@
FROM docs/base:latest
MAINTAINER Sven Dowideit <SvenDowideit@docker.com> (@SvenDowideit)
FROM docs/base:hugo
MAINTAINER Mary Anthony <mary@docker.com> (@moxiegirl)

# to get the git info for this repo
# To get the git info for this repo
COPY . /src

# Reset the /docs dir so we can replace the theme meta with the new repo's git info
RUN git reset --hard
COPY . /docs/content/compose/

RUN grep "__version" /src/compose/__init__.py | sed "s/.*'\(.*\)'/\1/" > /docs/VERSION
COPY docs/* /docs/sources/compose/
COPY docs/mkdocs.yml /docs/mkdocs-compose.yml

# Then build everything together, ready for mkdocs
RUN /docs/build.sh
# Sed to process GitHub Markdown
# 1-2 Remove comment code from metadata block
# 3 Change ](/word to ](/project/ in links
# 4 Change ](word.md) to ](/project/word)
# 5 Remove .md extension from link text
# 6 Change ](../ to ](/project/word)
# 7 Change ](../../ to ](/project/ --> not implemented
#
#
RUN find /docs/content/compose -type f -name "*.md" -exec sed -i.old \
    -e '/^<!.*metadata]>/g' \
    -e '/^<!.*end-metadata.*>/g' \
    -e 's/\(\]\)\([(]\)\(\/\)/\1\2\/compose\//g' \
    -e 's/\(\][(]\)\([A-z].*\)\(\.md\)/\1\/compose\/\2/g' \
    -e 's/\([(]\)\(.*\)\(\.md\)/\1\2/g' \
    -e 's/\(\][(]\)\(\.\.\/\)/\1\/compose\//g' {} \;
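The `grep | sed` line above pulls the version string out of `compose/__init__.py` into `/docs/VERSION`. A hedged Python equivalent of that pipeline (the helper name is illustrative, not part of the build):

```python
import re

def extract_version(init_py_text):
    """Mirror of: grep "__version" __init__.py | sed "s/.*'\\(.*\\)'/\\1/"
    Return the single-quoted value on the line mentioning __version."""
    for line in init_py_text.splitlines():
        if "__version" in line:
            match = re.search(r"'(.*)'", line)  # greedy, like sed's .*' ... '
            if match:
                return match.group(1)
    return ""

print(extract_version("__version__ = '1.3.0'"))  # -> 1.3.0
```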
55 docs/Makefile (new file)
@@ -0,0 +1,55 @@
.PHONY: all binary build cross default docs docs-build docs-shell shell test test-unit test-integration test-integration-cli test-docker-py validate

# env vars passed through directly to Docker's build scripts
# to allow things like `make DOCKER_CLIENTONLY=1 binary` easily
# `docs/sources/contributing/devenvironment.md ` and `project/PACKAGERS.md` have some limited documentation of some of these
DOCKER_ENVS := \
	-e BUILDFLAGS \
	-e DOCKER_CLIENTONLY \
	-e DOCKER_EXECDRIVER \
	-e DOCKER_GRAPHDRIVER \
	-e TESTDIRS \
	-e TESTFLAGS \
	-e TIMEOUT
# note: we _cannot_ add "-e DOCKER_BUILDTAGS" here because even if it's unset in the shell, that would shadow the "ENV DOCKER_BUILDTAGS" set in our Dockerfile, which is very important for our official builds

# to allow `make DOCSDIR=docs docs-shell` (to create a bind mount in docs)
DOCS_MOUNT := $(if $(DOCSDIR),-v $(CURDIR)/$(DOCSDIR):/$(DOCSDIR))

# to allow `make DOCSPORT=9000 docs`
DOCSPORT := 8000

# Get the IP ADDRESS
DOCKER_IP=$(shell python -c "import urlparse ; print urlparse.urlparse('$(DOCKER_HOST)').hostname or ''")
HUGO_BASE_URL=$(shell test -z "$(DOCKER_IP)" && echo localhost || echo "$(DOCKER_IP)")
HUGO_BIND_IP=0.0.0.0

GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
DOCKER_IMAGE := docker$(if $(GIT_BRANCH),:$(GIT_BRANCH))
DOCKER_DOCS_IMAGE := docs-base$(if $(GIT_BRANCH),:$(GIT_BRANCH))


DOCKER_RUN_DOCS := docker run --rm -it $(DOCS_MOUNT) -e AWS_S3_BUCKET -e NOCACHE

# for some docs workarounds (see below in "docs-build" target)
GITCOMMIT := $(shell git rev-parse --short HEAD 2>/dev/null)

default: docs

docs: docs-build
	$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" hugo server --port=$(DOCSPORT) --baseUrl=$(HUGO_BASE_URL) --bind=$(HUGO_BIND_IP)

docs-draft: docs-build
	$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" hugo server --buildDrafts="true" --port=$(DOCSPORT) --baseUrl=$(HUGO_BASE_URL) --bind=$(HUGO_BIND_IP)

docs-shell: docs-build
	$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 "$(DOCKER_DOCS_IMAGE)" bash

docs-build:
#	( git remote | grep -v upstream ) || git diff --name-status upstream/release..upstream/docs ./ > ./changed-files
#	echo "$(GIT_BRANCH)" > GIT_BRANCH
#	echo "$(AWS_S3_BUCKET)" > AWS_S3_BUCKET
#	echo "$(GITCOMMIT)" > GITCOMMIT
	docker build -t "$(DOCKER_DOCS_IMAGE)" .
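The `DOCKER_IP` and `HUGO_BASE_URL` variables above shell out to Python 2's `urlparse` module. A Python 3 sketch of the same logic (function names are mine, for illustration only):

```python
from urllib.parse import urlparse  # Python 3 home of the legacy urlparse module

def docker_ip(docker_host):
    """Mirror of DOCKER_IP: the host part of $DOCKER_HOST, or ''."""
    return urlparse(docker_host).hostname or ""

def hugo_base_url(docker_host):
    """Mirror of HUGO_BASE_URL: fall back to localhost when no host is set."""
    return docker_ip(docker_host) or "localhost"

print(docker_ip("tcp://192.168.59.103:2376"))  # -> 192.168.59.103
print(hugo_base_url(""))                       # -> localhost
```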
77 docs/README.md (new file)
@@ -0,0 +1,77 @@
# Contributing to the Docker Compose documentation

The documentation in this directory is part of the [https://docs.docker.com](https://docs.docker.com) website. Docker uses [the Hugo static generator](http://gohugo.io/overview/introduction/) to convert project Markdown files to a static HTML site.

You don't need to be a Hugo expert to contribute to the Compose documentation. If you are familiar with Markdown, you can modify the content in the `docs` files.

If you want to add a new file or change the location of the document in the menu, you do need to know a little more.

## Documentation contributing workflow

1. Edit a Markdown file in the tree.

2. Save your changes.

3. Make sure you are in the `docs` subdirectory.

4. Build the documentation.

        $ make docs
        ---> ffcf3f6c4e97
        Removing intermediate container a676414185e8
        Successfully built ffcf3f6c4e97
        docker run --rm -it -e AWS_S3_BUCKET -e NOCACHE -p 8000:8000 -e DOCKERHOST "docs-base:test-tooling" hugo server --port=8000 --baseUrl=192.168.59.103 --bind=0.0.0.0
        ERROR: 2015/06/13 MenuEntry's .Url is deprecated and will be removed in Hugo 0.15. Use .URL instead.
        0 of 4 drafts rendered
        0 future content
        12 pages created
        0 paginator pages created
        0 tags created
        0 categories created
        in 55 ms
        Serving pages from /docs/public
        Web Server is available at http://0.0.0.0:8000/
        Press Ctrl+C to stop

5. Open the available server in your browser.

    The documentation server has the complete menu, but only the Docker Compose
    documentation resolves. You can't access the other project docs from this
    localized build.

## Tips on Hugo metadata and menu positioning

The top of each Docker Compose documentation file contains TOML metadata. The metadata is commented out to prevent it from appearing on GitHub.

    <!--[metadata]>
    +++
    title = "Extending services in Compose"
    description = "How to use Docker Compose's extends keyword to share configuration between files and projects"
    keywords = ["fig, composition, compose, docker, orchestration, documentation, docs"]
    [menu.main]
    parent="smn_workw_compose"
    weight=2
    +++
    <![end-metadata]-->

The metadata alone has this structure:

    +++
    title = "Extending services in Compose"
    description = "How to use Docker Compose's extends keyword to share configuration between files and projects"
    keywords = ["fig, composition, compose, docker, orchestration, documentation, docs"]
    [menu.main]
    parent="smn_workw_compose"
    weight=2
    +++

The `[menu.main]` section refers to navigation defined [in the main Docker menu](https://github.com/docker/docs-base/blob/hugo/config.toml). This metadata says *add a menu item called* Extending services in Compose *to the menu with the* `smn_workw_compose` *identifier*. If you locate the menu in the configuration, you'll find *Create multi-container applications* is the menu title.

You can move an article in the tree by specifying a new parent. You can shift the location of the item by changing its weight. Higher numbers are heavier and shift the item to the bottom of the menu. Low or no numbers shift it up.

## Other key documentation repositories

The `docker/docs-base` repository contains [the Hugo theme and menu configuration](https://github.com/docker/docs-base). If you open the `Dockerfile` you'll see that `make docs` relies on this as a base image for building the Compose documentation.

The `docker/docs.docker.com` repository contains the [build system for building the Docker documentation site](https://github.com/docker/docs.docker.com). Fork this repository to build the entire documentation site.
41 docs/cli.md
@@ -1,9 +1,16 @@
page_title: Compose CLI reference
page_description: Compose CLI reference
page_keywords: fig, composition, compose, docker, orchestration, cli, reference
<!--[metadata]>
+++
title = "Compose CLI reference"
description = "Compose CLI reference"
keywords = ["fig, composition, compose, docker, orchestration, cli, reference"]
[menu.main]
identifier = "smn_install_compose"
parent = "smn_compose_ref"
+++
<![end-metadata]-->

# CLI reference
# Compose CLI reference

Most Docker Compose commands are run against one or more services. If
the service is not specified, the command will apply to all services.
@@ -47,6 +54,10 @@ Lists containers.

Pulls service images.

### restart

Restarts services.

### rm

Removes stopped service containers.
@@ -91,7 +102,9 @@ specify the `--no-deps` flag:

Similarly, if you do want the service's ports to be created and mapped to the
host, specify the `--service-ports` flag:
    $ docker-compose run --service-ports web python manage.py shell

    $ docker-compose run --service-ports web python manage.py shell

### scale

@@ -130,13 +143,16 @@ By default, if there are existing containers for a service, `docker-compose up`

Shows more output

### --version
### -v, --version

Prints version and exits

### -f, --file FILE

Specifies an alternate Compose yaml file (default: `docker-compose.yml`)
Specify what file to read configuration from. If not provided, Compose will look
for `docker-compose.yml` in the current working directory, and then each parent
directory successively, until found.

### -p, --project-name NAME

@@ -148,7 +164,7 @@ By default, if there are existing containers for a service, `docker-compose up`
Several environment variables are available for you to configure Compose's behaviour.

Variables starting with `DOCKER_` are the same as those used to configure the
Docker command-line client. If you're using boot2docker, `$(boot2docker shellinit)`
Docker command-line client. If you're using boot2docker, `eval "$(boot2docker shellinit)"`
will set them to their correct values.

### COMPOSE\_PROJECT\_NAME
@@ -157,7 +173,9 @@ Sets the project name, which is prepended to the name of every container started

### COMPOSE\_FILE

Sets the path to the `docker-compose.yml` to use. Defaults to `docker-compose.yml` in the current working directory.
Specify what file to read configuration from. If not provided, Compose will look
for `docker-compose.yml` in the current working directory, and then each parent
directory successively, until found.

### DOCKER\_HOST

@@ -174,8 +192,11 @@ Configures the path to the `ca.pem`, `cert.pem`, and `key.pem` files used for TL

## Compose documentation

- [User guide](/)
- [Installing Compose](install.md)
- [User guide](index.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Yaml file reference](yml.md)
- [Compose environment variables](env.md)
- [Compose command line completion](completion.md)
@@ -1,28 +1,53 @@
---
layout: default
title: Command Completion
---
<!--[metadata]>
+++
title = "Command Completion"
description = "Compose CLI reference"
keywords = ["fig, composition, compose, docker, orchestration, cli, reference"]
[menu.main]
parent="smn_workw_compose"
weight=3
+++
<![end-metadata]-->

Command Completion
==================
# Command Completion

Compose comes with [command completion](http://en.wikipedia.org/wiki/Command-line_completion)
for the bash shell.
for the bash and zsh shell.

Installing Command Completion
-----------------------------
## Installing Command Completion

### Bash

Make sure bash completion is installed. If you use a current Linux in a non-minimal installation, bash completion should be available.
On a Mac, install with `brew install bash-completion`

Place the completion script in `/etc/bash_completion.d/` (`/usr/local/etc/bash_completion.d/` on a Mac), using e.g.

    curl -L https://raw.githubusercontent.com/docker/compose/1.2.0/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose

Place the completion script in `/etc/bash_completion.d/` (`/usr/local/etc/bash_completion.d/` on a Mac), using e.g.

    curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose --version | awk '{print $2}')/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose

Completion will be available upon next login.

Available completions
---------------------
### Zsh

Place the completion script in your `/path/to/zsh/completion`, using e.g. `~/.zsh/completion/`

    mkdir -p ~/.zsh/completion
    curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose --version | awk '{print $2}')/contrib/completion/zsh/_docker-compose > ~/.zsh/completion/_docker-compose

Include the directory in your `$fpath`, e.g. by adding in `~/.zshrc`

    fpath=(~/.zsh/completion $fpath)

Make sure `compinit` is loaded or do it by adding in `~/.zshrc`

    autoload -Uz compinit && compinit -i

Then reload your shell

    exec $SHELL -l

## Available completions

Depending on what you typed on the command line so far, it will complete

- available docker-compose commands
@@ -34,8 +59,11 @@ Enjoy working with Compose faster and with less typos!

## Compose documentation

- [User guide](/)
- [Installing Compose](install.md)
- [User guide](index.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Yaml file reference](yml.md)
- [Compose environment variables](env.md)
- [Compose environment variables](env.md)
@@ -1,10 +1,16 @@
page_title: Quickstart Guide: Compose and Django
page_description: Getting started with Docker Compose and Django
page_keywords: documentation, docs, docker, compose, orchestration, containers,
django
<!--[metadata]>
+++
title = "Quickstart Guide: Compose and Django"
description = "Getting started with Docker Compose and Django"
keywords = ["documentation, docs, docker, compose, orchestration, containers"]
[menu.main]
parent="smn_workw_compose"
weight=4
+++
<![end-metadata]-->

## Getting started with Compose and Django
## Quickstart Guide: Compose and Django

This Quick-start Guide will demonstrate how to use Compose to set up and run a
@@ -119,8 +125,11 @@ example, run `docker-compose up` and in another terminal run:

## More Compose documentation

- [User guide](/)
- [Installing Compose](install.md)
- [User guide](index.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Yaml file reference](yml.md)
- [Compose environment variables](env.md)
21 docs/env.md
@@ -1,9 +1,15 @@
---
layout: default
title: Compose environment variables reference
---
<!--[metadata]>
+++
title = "Compose environment variables reference"
description = "Compose CLI reference"
keywords = ["fig, composition, compose, docker, orchestration, cli, reference"]
[menu.main]
parent="smn_compose_ref"
weight=3
+++
<![end-metadata]-->

Environment variables reference
# Compose environment variables reference
===============================

**Note:** Environment variables are no longer the recommended method for connecting to linked services. Instead, you should use the link name (by default, the name of the linked service) as the hostname to connect to. See the [docker-compose.yml documentation](yml.md#links) for details.
@@ -34,8 +40,11 @@ Fully qualified container name, e.g. `DB_1_NAME=/myapp_web_1/myapp_db_1`

## Compose documentation

- [User guide](/)
- [Installing Compose](install.md)
- [User guide](index.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Yaml file reference](yml.md)
- [Compose command line completion](completion.md)
@@ -1,6 +1,13 @@
page_title: Extending services in Compose
page_description: How to use Docker Compose's "extends" keyword to share configuration between files and projects
page_keywords: fig, composition, compose, docker, orchestration, documentation, docs
<!--[metadata]>
+++
title = "Extending services in Compose"
description = "How to use Docker Compose's extends keyword to share configuration between files and projects"
keywords = ["fig, composition, compose, docker, orchestration, documentation, docs"]
[menu.main]
parent="smn_workw_compose"
weight=2
+++
<![end-metadata]-->

## Extending services in Compose
@@ -145,8 +152,7 @@ Defining the web application requires the following:

        FROM python:2.7
        ADD . /code
        WORKDIR /code
        RUN pip install -r
        requirements.txt
        RUN pip install -r requirements.txt
        CMD python app.py

4. Create a Compose configuration file called `common.yml`:
@@ -321,8 +327,8 @@ expose:
  - "5000"
```

In the case of `environment`, Compose "merges" entries together with
locally-defined values taking precedence:
In the case of `environment` and `labels`, Compose "merges" entries together
with locally-defined values taking precedence:

```yaml
# original service
@@ -342,8 +348,8 @@ environment:
  - BAZ=local
```

Finally, for `volumes`, Compose "merges" entries together with locally-defined
bindings taking precedence:
Finally, for `volumes` and `devices`, Compose "merges" entries together with
locally-defined bindings taking precedence:

```yaml
# original service
@@ -361,4 +367,15 @@ volumes:
  - /original-dir/foo:/foo
  - /local-dir/bar:/bar
  - /local-dir/baz/:baz
```
```

## Compose documentation

- [User guide](/)
- [Installing Compose](install.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Yaml file reference](yml.md)
- [Compose command line completion](completion.md)
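As a rough illustration of the `extends` merge semantics this diff documents for `environment` (this is a sketch of the described behaviour, not Compose's actual implementation):

```python
def merge_environment(original, local):
    """Merge KEY=value entries the way the doc describes for `environment`
    under `extends`: locally-defined values take precedence."""
    merged = {}
    for entry in original + local:  # local entries come last, so they win
        key, _, value = entry.partition("=")
        merged[key] = value
    return ["%s=%s" % (k, v) for k, v in merged.items()]

print(merge_environment(["FOO=original", "BAR=original"],
                        ["BAR=local", "BAZ=local"]))
```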
111 docs/index.md
@@ -1,50 +1,47 @@
page_title: Compose: Multi-container orchestration for Docker
page_description: Introduction and Overview of Compose
page_keywords: documentation, docs, docker, compose, orchestration, containers
<!--[metadata]>
+++
title = "Overview of Docker Compose"
description = "Introduction and Overview of Compose"
keywords = ["documentation, docs, docker, compose, orchestration, containers"]
[menu.main]
parent="smn_workw_compose"
+++
<![end-metadata]-->

# Docker Compose
# Overview of Docker Compose

## Overview

Compose is a tool for defining and running complex applications with Docker.
With Compose, you define a multi-container application in a single file, then
spin your application up in a single command which does everything that needs to
be done to get it running.
Compose is a tool for defining and running multi-container applications with
Docker. With Compose, you define a multi-container application in a single
file, then spin your application up in a single command which does everything
that needs to be done to get it running.

Compose is great for development environments, staging servers, and CI. We don't
recommend that you use it in production yet.

Using Compose is basically a three-step process.

First, you define your app's environment with a `Dockerfile` so it can be
reproduced anywhere:

```Dockerfile
FROM python:2.7
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code
CMD python app.py
```

Next, you define the services that make up your app in `docker-compose.yml` so
1. Define your app's environment with a `Dockerfile` so it can be
reproduced anywhere.
2. Define the services that make up your app in `docker-compose.yml` so
they can be run together in an isolated environment:
3. Lastly, run `docker-compose up` and Compose will start and run your entire app.

A `docker-compose.yml` looks like this:

```yaml
web:
  build: .
  links:
   - db
  ports:
   - "8000:8000"
db:
  image: postgres
   - "5000:5000"
  volumes:
   - .:/code
  links:
   - redis
redis:
  image: redis
```

Lastly, run `docker-compose up` and Compose will start and run your entire app.

Compose has commands for managing the whole lifecycle of your application:

 * Start, stop and rebuild services
@@ -55,6 +52,9 @@ Compose has commands for managing the whole lifecycle of your application:
## Compose documentation

- [Installing Compose](install.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Yaml file reference](yml.md)
- [Compose environment variables](env.md)
@@ -110,13 +110,19 @@ specify how to build the image using a file called

    ADD . /code
    WORKDIR /code
    RUN pip install -r requirements.txt
    CMD python app.py

This tells Docker to include Python, your code, and your Python dependencies in
a Docker image. For more information on how to write Dockerfiles, see the
[Docker user
guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile)
and the
[Dockerfile reference](http://docs.docker.com/reference/builder/).
This tells Docker to:

* Build an image starting with the Python 2.7 image.
* Add the current directory `.` into the path `/code` in the image.
* Set the working directory to `/code`.
* Install your Python dependencies.
* Set the default command for the container to `python app.py`

For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/).

You can test that this builds by running `docker build -t web .`.

### Define services

@@ -124,7 +130,6 @@ Next, define a set of services using `docker-compose.yml`:

    web:
      build: .
      command: python app.py
      ports:
       - "5000:5000"
      volumes:
@@ -136,19 +141,20 @@ Next, define a set of services using `docker-compose.yml`:

This defines two services:

- `web`, which is built from the `Dockerfile` in the current directory. It also
  says to run the command `python app.py` inside the image, forward the exposed
  port 5000 on the container to port 5000 on the host machine, connect up the
  Redis service, and mount the current directory inside the container so we can
  work on code without having to rebuild the image.
- `redis`, which uses the public image
  [redis](https://registry.hub.docker.com/_/redis/), which gets pulled from the
  Docker Hub registry.
#### web

* Builds from the `Dockerfile` in the current directory.
* Forwards the exposed port 5000 on the container to port 5000 on the host machine.
* Connects the web container to the Redis service via a link.
* Mounts the current directory on the host to `/code` inside the container allowing you to modify the code without having to rebuild the image.

#### redis

* Uses the public [Redis](https://registry.hub.docker.com/_/redis/) image which gets pulled from the Docker Hub registry.

### Build and run your app with Compose

Now, when you run `docker-compose up`, Compose will pull a Redis image, build an
image for your code, and start everything up:
Now, when you run `docker-compose up`, Compose will pull a Redis image, build an image for your code, and start everything up:

    $ docker-compose up
    Pulling image redis...
@@ -159,7 +165,12 @@ image for your code, and start everything up:
    web_1 | * Running on http://0.0.0.0:5000/

The web app should now be listening on port 5000 on your Docker daemon host (if
you're using Boot2docker, `boot2docker ip` will tell you its address).
you're using Boot2docker, `boot2docker ip` will tell you its address). In a browser,
open `http://ip-from-boot2docker:5000` and you should get a message in your browser saying:

`Hello World! I have been seen 1 times.`

Refreshing the page will increment the number.

If you want to run your services in the background, you can pass the `-d` flag
(for daemon mode) to `docker-compose up` and use `docker-compose ps` to see what
@@ -193,7 +204,7 @@ At this point, you have seen the basics of how Compose works.
  [Rails](rails.md), or [Wordpress](wordpress.md).
- See the reference guides for complete details on the [commands](cli.md), the
  [configuration file](yml.md) and [environment variables](env.md).

## Release Notes

### Version 1.2.0 (April 7, 2015)
@@ -202,7 +213,7 @@ For complete information on this release, see the [1.2.0 Milestone project page]
In addition to bug fixes and refinements, this release adds the following:

* The `extends` keyword, which adds the ability to extend services by sharing common configurations. For details, see
  [PR #972](https://github.com/docker/compose/pull/1088).
  [PR #1088](https://github.com/docker/compose/pull/1088).

* Better integration with Swarm. Swarm will now schedule inter-dependent
  containers on the same host. For details, see
@@ -1,26 +1,33 @@
page_title: Installing Compose
page_description: How to install Docker Compose
page_keywords: compose, orchestration, install, installation, docker, documentation
<!--[metadata]>
+++
title = "Docker Compose"
description = "How to install Docker Compose"
keywords = ["compose, orchestration, install, installation, docker, documentation"]
[menu.main]
parent="mn_install"
weight=4
+++
<![end-metadata]-->

## Installing Compose
# Install Docker Compose

To install Compose, you'll need to install Docker first. You'll then install
Compose with a `curl` command.

### Install Docker
## Install Docker

First, install Docker version 1.3 or greater:
First, install Docker version 1.6 or greater:

- [Instructions for Mac OS X](http://docs.docker.com/installation/mac/)
- [Instructions for Ubuntu](http://docs.docker.com/installation/ubuntulinux/)
- [Instructions for other systems](http://docs.docker.com/installation/)

### Install Compose
## Install Compose

To install Compose, run the following commands:

    curl -L https://github.com/docker/compose/releases/download/1.2.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
    curl -L https://github.com/docker/compose/releases/download/1.3.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
    chmod +x /usr/local/bin/docker-compose

> Note: If you get a "Permission denied" error, your `/usr/local/bin` directory probably isn't writable and you'll need to install Compose as the superuser. Run `sudo -i`, then the two commands above, then `exit`.
@@ -36,9 +43,24 @@ Compose can also be installed as a Python package:

No further steps are required; Compose should now be successfully installed.
You can test the installation by running `docker-compose --version`.

### Upgrading

If you're coming from Compose 1.2 or earlier, you'll need to remove or migrate your existing containers after upgrading Compose. This is because, as of version 1.3, Compose uses Docker labels to keep track of containers, and so they need to be recreated with labels added.

If Compose detects containers that were created without labels, it will refuse to run so that you don't end up with two sets of them. If you want to keep using your existing containers (for example, because they have data volumes you want to preserve) you can migrate them with the following command:

    docker-compose migrate-to-labels

Alternatively, if you're not worried about keeping them, you can remove them - Compose will just create new ones.

    docker rm -f myapp_web_1 myapp_db_1 ...

## Compose documentation

- [User guide](index.md)
- [User guide](/)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Yaml file reference](yml.md)
- [Compose environment variables](env.md)
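The `curl` one-liner above builds its download URL from `uname -s` and `uname -m`. A small Python sketch of the equivalent URL construction (the helper name is illustrative, not part of Compose):

```python
import platform

def compose_download_url(version="1.3.0"):
    """Build the release URL the way the install one-liner does:
    `uname -s` ~ platform.system(), `uname -m` ~ platform.machine()."""
    return ("https://github.com/docker/compose/releases/download/"
            "%s/docker-compose-%s-%s"
            % (version, platform.system(), platform.machine()))

print(compose_download_url())  # e.g. .../1.3.0/docker-compose-Linux-x86_64
```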
@@ -1,12 +0,0 @@

- ['compose/index.md', 'User Guide', 'Docker Compose' ]
- ['compose/production.md', 'User Guide', 'Using Compose in production' ]
- ['compose/extends.md', 'User Guide', 'Extending services in Compose']
- ['compose/install.md', 'Installation', 'Docker Compose']
- ['compose/cli.md', 'Reference', 'Compose command line']
- ['compose/yml.md', 'Reference', 'Compose yml']
- ['compose/env.md', 'Reference', 'Compose ENV variables']
- ['compose/completion.md', 'Reference', 'Compose commandline completion']
- ['compose/django.md', 'Examples', 'Getting started with Compose and Django']
- ['compose/rails.md', 'Examples', 'Getting started with Compose and Rails']
- ['compose/wordpress.md', 'Examples', 'Getting started with Compose and Wordpress']
@@ -1,6 +1,13 @@
page_title: Using Compose in production
page_description: Guide to using Docker Compose in production
page_keywords: documentation, docs, docker, compose, orchestration, containers, production
<!--[metadata]>
+++
title = "Using Compose in production"
description = "Guide to using Docker Compose in production"
keywords = ["documentation, docs, docker, compose, orchestration, containers, production"]
[menu.main]
parent="smn_workw_compose"
weight=1
+++
<![end-metadata]-->

## Using Compose in production
@@ -75,3 +82,15 @@ Compose against a Swarm instance and run your apps across multiple hosts.
Compose/Swarm integration is still in the experimental stage, and Swarm is still
in beta, but if you'd like to explore and experiment, check out the
[integration guide](https://github.com/docker/compose/blob/master/SWARM.md).

## Compose documentation

- [Installing Compose](install.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Yaml file reference](yml.md)
- [Compose environment variables](env.md)
- [Compose command line completion](completion.md)
@ -1,10 +1,15 @@

page_title: Quickstart Guide: Compose and Rails
page_description: Getting started with Docker Compose and Rails
page_keywords: documentation, docs, docker, compose, orchestration, containers, rails
<!--[metadata]>
+++
title = "Quickstart Guide: Compose and Rails"
description = "Getting started with Docker Compose and Rails"
keywords = ["documentation, docs, docker, compose, orchestration, containers"]
[menu.main]
parent="smn_workw_compose"
weight=5
+++
<![end-metadata]-->

## Getting started with Compose and Rails
## Quickstart Guide: Compose and Rails

This Quickstart guide will show you how to use Compose to set up and run a Rails/PostgreSQL app. Before starting, you'll need to have [Compose installed](install.md).

@ -119,8 +124,11 @@ you're using Boot2docker, `boot2docker ip` will tell you its address).

## More Compose documentation

- [User guide](/)
- [Installing Compose](install.md)
- [User guide](index.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Yaml file reference](yml.md)
- [Compose environment variables](env.md)
@ -1,14 +1,21 @@

page_title: Quickstart Guide: Compose and Wordpress
page_description: Getting started with Docker Compose and Rails
page_keywords: documentation, docs, docker, compose, orchestration, containers, wordpress
<!--[metadata]>
+++
title = "Quickstart Guide: Compose and Wordpress"
description = "Getting started with Compose and Wordpress"
keywords = ["documentation, docs, docker, compose, orchestration, containers"]
[menu.main]
parent="smn_workw_compose"
weight=6
+++
<![end-metadata]-->

## Getting started with Compose and Wordpress

# Quickstart Guide: Compose and Wordpress

You can use Compose to easily run Wordpress in an isolated environment built
with Docker containers.

### Define the project
## Define the project

First, [Install Compose](install.md) and then download Wordpress into the
current directory:

@ -114,8 +121,11 @@ address).

## More Compose documentation

- [User guide](/)
- [Installing Compose](install.md)
- [User guide](index.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Yaml file reference](yml.md)
- [Compose environment variables](env.md)
109 docs/yml.md
@ -1,10 +1,13 @@

---
layout: default
title: docker-compose.yml reference
page_title: docker-compose.yml reference
page_description: docker-compose.yml reference
page_keywords: fig, composition, compose, docker
---
<!--[metadata]>
+++
title = "docker-compose.yml reference"
description = "docker-compose.yml reference"
keywords = ["fig, composition, compose, docker"]
[menu.main]
parent="smn_compose_ref"
+++
<![end-metadata]-->

# docker-compose.yml reference

@ -29,8 +32,8 @@ image: a4bc65fd

### build

Path to a directory containing a Dockerfile. When the value supplied is a
relative path, it is interpreted as relative to the location of the yml file
itself. This directory is also the build context that is sent to the Docker daemon.

Compose will build and tag it with a generated name, and use that image thereafter.

@ -39,6 +42,16 @@ Compose will build and tag it with a generated name, and use that image thereaft

```
build: /path/to/build/dir
```

### dockerfile

Alternate Dockerfile.

Compose will use an alternate file to build with.

```
dockerfile: Dockerfile-alternate
```
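As a sketch of how the two keys combine (the service name and paths here are illustrative, not taken from this changeset), a service points `build` at a directory and `dockerfile` at an alternate file inside it:

```
web:
  build: .
  dockerfile: Dockerfile-alternate
```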
### command

Override the default command.
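For example (an illustrative value, not part of this changeset):

```
command: bundle exec thin -p 3000
```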
@ -87,6 +100,23 @@ external_links:
  - project_db_1:postgresql
```

### extra_hosts

Add hostname mappings. Use the same values as the docker client `--add-host` parameter.

```
extra_hosts:
  - "somehost:162.242.195.82"
  - "otherhost:50.31.209.229"
```

An entry with the IP address and hostname will be created in `/etc/hosts` inside containers for this service, e.g.:

```
162.242.195.82  somehost
50.31.209.229   otherhost
```

### ports

Expose ports. Either specify both ports (`HOST:CONTAINER`), or just the container

@ -226,6 +256,38 @@ environment variables (DEBUG) with a new value, and the other one

For more on `extends`, see the [tutorial](extends.md#example) and
[reference](extends.md#reference).

### labels

Add metadata to containers using [Docker labels](http://docs.docker.com/userguide/labels-custom-metadata/). You can use either an array or a dictionary.

It's recommended that you use reverse-DNS notation to prevent your labels from conflicting with those used by other software.

```
labels:
  com.example.description: "Accounting webapp"
  com.example.department: "Finance"
  com.example.label-with-empty-value: ""

labels:
  - "com.example.description=Accounting webapp"
  - "com.example.department=Finance"
  - "com.example.label-with-empty-value"
```

### log driver

Specify a logging driver for the service's containers, as with the ``--log-driver`` option for docker run ([documented here](http://docs.docker.com/reference/run/#logging-drivers-log-driver)).

Allowed values are currently ``json-file``, ``syslog`` and ``none``. The list will change over time as more drivers are added to the Docker engine.

The default value is ``json-file``.

```
log_driver: "json-file"
log_driver: "syslog"
log_driver: "none"
```

### net

Networking mode. Use the same values as the docker client `--net` parameter.
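A sketch of the common forms, mirroring the modes accepted by `docker run --net` (the container name is illustrative):

```
net: "bridge"
net: "none"
net: "container:name"
net: "host"
```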
@ -283,13 +345,34 @@ dns_search:
  - dc2.example.com
```

### working\_dir, entrypoint, user, hostname, domainname, mem\_limit, privileged, restart, stdin\_open, tty, cpu\_shares
### devices

List of device mappings. Uses the same format as the `--device` docker
client create option.

```
devices:
  - "/dev/ttyUSB0:/dev/ttyUSB0"
```

### security_opt

Override the default labeling scheme for each container.

```
security_opt:
  - label:user:USER
  - label:role:ROLE
```

### working\_dir, entrypoint, user, hostname, domainname, mem\_limit, privileged, restart, stdin\_open, tty, cpu\_shares, cpuset, read\_only

Each of these is a single value, analogous to its
[docker run](https://docs.docker.com/reference/run/) counterpart.

```
cpu_shares: 73
cpuset: 0,1

working_dir: /code
entrypoint: /code/entrypoint.sh

@ -305,12 +388,16 @@ restart: always

stdin_open: true
tty: true
read_only: true
```

## Compose documentation

- [User guide](/)
- [Installing Compose](install.md)
- [User guide](index.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Compose environment variables](env.md)
- [Compose command line completion](completion.md)
@ -1,8 +1,8 @@

PyYAML==3.10
docker-py==1.0.0
dockerpty==0.3.2
docker-py==1.2.3
dockerpty==0.3.4
docopt==0.6.1
requests==2.2.1
requests==2.6.1
six==1.7.3
texttable==0.8.2
websocket-client==0.11.0
@ -1,33 +0,0 @@

#!/bin/bash

if [ -z "$VALIDATE_UPSTREAM" ]; then
    # this is kind of an expensive check, so let's not do this twice if we
    # are running more than one validate bundlescript

    VALIDATE_REPO='https://github.com/docker/fig.git'
    VALIDATE_BRANCH='master'

    if [ "$TRAVIS" = 'true' -a "$TRAVIS_PULL_REQUEST" != 'false' ]; then
        VALIDATE_REPO="https://github.com/${TRAVIS_REPO_SLUG}.git"
        VALIDATE_BRANCH="${TRAVIS_BRANCH}"
    fi

    VALIDATE_HEAD="$(git rev-parse --verify HEAD)"

    git fetch -q "$VALIDATE_REPO" "refs/heads/$VALIDATE_BRANCH"
    VALIDATE_UPSTREAM="$(git rev-parse --verify FETCH_HEAD)"

    VALIDATE_COMMIT_LOG="$VALIDATE_UPSTREAM..$VALIDATE_HEAD"
    VALIDATE_COMMIT_DIFF="$VALIDATE_UPSTREAM...$VALIDATE_HEAD"

    validate_diff() {
        if [ "$VALIDATE_UPSTREAM" != "$VALIDATE_HEAD" ]; then
            git diff "$VALIDATE_COMMIT_DIFF" "$@"
        fi
    }
    validate_log() {
        if [ "$VALIDATE_UPSTREAM" != "$VALIDATE_HEAD" ]; then
            git log "$VALIDATE_COMMIT_LOG" "$@"
        fi
    }
fi
@ -1,7 +1,10 @@

#!/bin/bash
set -ex

PATH="/usr/local/bin:$PATH"

rm -rf venv
virtualenv venv
virtualenv -p /usr/local/bin/python venv
venv/bin/pip install -r requirements.txt
venv/bin/pip install -r requirements-dev.txt
venv/bin/pip install .
@ -8,9 +8,6 @@

set -e

>&2 echo "Validating DCO"
script/validate-dco

export DOCKER_VERSIONS=all
. script/test-versions
53 script/prepare-osx Executable file
@ -0,0 +1,53 @@

#!/bin/bash

set -ex

python_version() {
    python -V 2>&1
}

openssl_version() {
    python -c "import ssl; print ssl.OPENSSL_VERSION"
}

desired_python_version="2.7.9"
desired_python_brew_version="2.7.9"
python_formula="https://raw.githubusercontent.com/Homebrew/homebrew/1681e193e4d91c9620c4901efd4458d9b6fcda8e/Library/Formula/python.rb"

desired_openssl_version="1.0.1j"
desired_openssl_brew_version="1.0.1j_1"
openssl_formula="https://raw.githubusercontent.com/Homebrew/homebrew/62fc2a1a65e83ba9dbb30b2e0a2b7355831c714b/Library/Formula/openssl.rb"

PATH="/usr/local/bin:$PATH"

if !(which brew); then
    ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
fi

brew update

if !(python_version | grep "$desired_python_version"); then
    if brew list | grep python; then
        brew unlink python
    fi

    brew install "$python_formula"
    brew switch python "$desired_python_brew_version"
fi

if !(openssl_version | grep "$desired_openssl_version"); then
    if brew list | grep openssl; then
        brew unlink openssl
    fi

    brew install "$openssl_formula"
    brew switch openssl "$desired_openssl_brew_version"
fi

echo "*** Using $(python_version)"
echo "*** Using $(openssl_version)"

if !(which virtualenv); then
    pip install virtualenv
fi
@ -9,9 +9,9 @@ docker build -t "$TAG" .

docker run \
    --rm \
    --volume="/var/run/docker.sock:/var/run/docker.sock" \
    --volume="$(pwd):/code" \
    -e DOCKER_VERSIONS \
    -e "TAG=$TAG" \
    -e "affinity:image==$TAG" \
    --entrypoint="script/test-versions" \
    "$TAG" \
    "$@"
@ -5,10 +5,10 @@

set -e

>&2 echo "Running lint checks"
flake8 compose
flake8 compose tests setup.py

if [ "$DOCKER_VERSIONS" == "" ]; then
    DOCKER_VERSIONS="1.5.0"
    DOCKER_VERSIONS="default"
elif [ "$DOCKER_VERSIONS" == "all" ]; then
    DOCKER_VERSIONS="$ALL_DOCKER_VERSIONS"
fi
@ -1,58 +0,0 @@

#!/bin/bash

set -e

source "$(dirname "$BASH_SOURCE")/.validate"

adds=$(validate_diff --numstat | awk '{ s += $1 } END { print s }')
dels=$(validate_diff --numstat | awk '{ s += $2 } END { print s }')
notDocs="$(validate_diff --numstat | awk '$3 !~ /^docs\// { print $3 }')"

: ${adds:=0}
: ${dels:=0}

# "Username may only contain alphanumeric characters or dashes and cannot begin with a dash"
githubUsernameRegex='[a-zA-Z0-9][a-zA-Z0-9-]+'

# https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work
dcoPrefix='Signed-off-by:'
dcoRegex="^(Docker-DCO-1.1-)?$dcoPrefix ([^<]+) <([^<>@]+@[^<>]+)>( \\(github: ($githubUsernameRegex)\\))?$"

check_dco() {
    grep -qE "$dcoRegex"
}

if [ $adds -eq 0 -a $dels -eq 0 ]; then
    echo '0 adds, 0 deletions; nothing to validate! :)'
elif [ -z "$notDocs" -a $adds -le 1 -a $dels -le 1 ]; then
    echo 'Congratulations! DCO small-patch-exception material!'
else
    commits=( $(validate_log --format='format:%H%n') )
    badCommits=()
    for commit in "${commits[@]}"; do
        if [ -z "$(git log -1 --format='format:' --name-status "$commit")" ]; then
            # no content (ie, Merge commit, etc)
            continue
        fi
        if ! git log -1 --format='format:%B' "$commit" | check_dco; then
            badCommits+=( "$commit" )
        fi
    done
    if [ ${#badCommits[@]} -eq 0 ]; then
        echo "Congratulations! All commits are properly signed with the DCO!"
    else
        {
            echo "These commits do not have a proper '$dcoPrefix' marker:"
            for commit in "${badCommits[@]}"; do
                echo " - $commit"
            done
            echo
            echo 'Please amend each commit to include a properly formatted DCO marker.'
            echo
            echo 'Visit the following URL for information about the Docker DCO:'
            echo ' https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work'
            echo
        } >&2
        false
    fi
fi
@ -1,11 +1,9 @@

#!/bin/bash

if [ "$DOCKER_VERSION" == "" ]; then
    DOCKER_VERSION="1.5.0"
if [ "$DOCKER_VERSION" != "" ] && [ "$DOCKER_VERSION" != "default" ]; then
    ln -fs "/usr/local/bin/docker-$DOCKER_VERSION" "/usr/local/bin/docker"
fi

ln -fs "/usr/local/bin/docker-$DOCKER_VERSION" "/usr/local/bin/docker"

# If a pidfile is still around (for example after a container restart),
# delete it so that docker can start.
rm -rf /var/run/docker.pid
9 setup.py
@ -27,14 +27,15 @@ def find_version(*file_paths):

install_requires = [
    'docopt >= 0.6.1, < 0.7',
    'PyYAML >= 3.10, < 4',
    'requests >= 2.2.1, < 2.6',
    'requests >= 2.6.1, < 2.7',
    'texttable >= 0.8.1, < 0.9',
    'websocket-client >= 0.11.0, < 1.0',
    'docker-py >= 1.0.0, < 1.2',
    'dockerpty >= 0.3.2, < 0.4',
    'docker-py >= 1.2.3, < 1.3',
    'dockerpty >= 0.3.4, < 0.4',
    'six >= 1.3.0, < 2',
]


tests_require = [
    'mock >= 1.0.1',
    'nose',

@ -54,7 +55,7 @@ setup(
    url='https://www.docker.com/',
    author='Docker, Inc.',
    license='Apache License 2.0',
    packages=find_packages(exclude=[ 'tests.*', 'tests' ]),
    packages=find_packages(exclude=['tests.*', 'tests']),
    include_package_data=True,
    test_suite='nose.collector',
    install_requires=install_requires,
@ -1,7 +1,6 @@

import sys

if sys.version_info >= (2,7):
    import unittest
if sys.version_info >= (2, 7):
    import unittest  # NOQA
else:
    import unittest2 as unittest
    import unittest2 as unittest  # NOQA
@ -1,6 +1,6 @@

simple:
  image: busybox:latest
  command: /bin/sleep 300
  command: top
another:
  image: busybox:latest
  command: /bin/sleep 300
  command: top
@ -1,3 +1,3 @@

FROM busybox
VOLUME /data
CMD sleep 3000
CMD top
@ -1,6 +1,6 @@

service:
  image: busybox:latest
  command: sleep 5
  command: top

  environment:
    foo: bar
4 tests/fixtures/extends/docker-compose.yml vendored
@ -2,7 +2,7 @@ myweb:
  extends:
    file: common.yml
    service: web
  command: sleep 300
  command: top
  links:
    - "mydb:db"
  environment:
@ -13,4 +13,4 @@ myweb:
    BAZ: "2"
mydb:
  image: busybox
  command: sleep 300
  command: top
6 tests/fixtures/extends/nonexistent-path-base.yml vendored Normal file
@ -0,0 +1,6 @@

dnebase:
  build: nonexistent.path
  command: /bin/true
  environment:
    - FOO=1
    - BAR=1
8 tests/fixtures/extends/nonexistent-path-child.yml vendored Normal file
@ -0,0 +1,8 @@

dnechild:
  extends:
    file: nonexistent-path-base.yml
    service: dnebase
  image: busybox
  command: /bin/true
  environment:
    - BAR=2
@ -1,11 +1,11 @@

db:
  image: busybox:latest
  command: /bin/sleep 300
  command: top
web:
  image: busybox:latest
  command: /bin/sleep 300
  command: top
  links:
    - db:db
console:
  image: busybox:latest
  command: /bin/sleep 300
  command: top
@ -1,3 +1,3 @@

definedinyamlnotyml:
  image: busybox:latest
  command: /bin/sleep 300
  command: top
@ -1,3 +1,3 @@

yetanother:
  image: busybox:latest
  command: /bin/sleep 300
  command: top
@ -1,6 +1,6 @@

simple:
  image: busybox:latest
  command: /bin/sleep 300
  command: top
another:
  image: busybox:latest
  command: /bin/sleep 300
  command: top
6 tests/fixtures/ports-composefile-scale/docker-compose.yml vendored Normal file
@ -0,0 +1,6 @@

simple:
  image: busybox:latest
  command: /bin/sleep 300
  ports:
    - '3000'
@ -1,7 +1,7 @@

simple:
  image: busybox:latest
  command: /bin/sleep 300
  command: top
  ports:
    - '3000'
    - '49152:3001'
@ -1,6 +1,6 @@

simple:
  image: busybox:latest
  command: /bin/sleep 300
  command: top
another:
  image: busybox:latest
  command: /bin/sleep 300
  command: top
@ -1,6 +1,8 @@

from __future__ import absolute_import
from operator import attrgetter
import sys
import os
import shlex

from six import StringIO
from mock import patch

@ -21,6 +23,8 @@ class CLITestCase(DockerClientTestCase):
        sys.exit = self.old_sys_exit
        self.project.kill()
        self.project.remove_stopped()
        for container in self.project.containers(stopped=True, one_off=True):
            container.remove(force=True)

    @property
    def project(self):

@ -62,6 +66,10 @@ class CLITestCase(DockerClientTestCase):

    @patch('sys.stdout', new_callable=StringIO)
    def test_ps_alternate_composefile(self, mock_stdout):
        config_path = os.path.abspath(
            'tests/fixtures/multiple-composefiles/compose2.yml')
        self._project = self.command.get_project(config_path)

        self.command.base_dir = 'tests/fixtures/multiple-composefiles'
        self.command.dispatch(['-f', 'compose2.yml', 'up', '-d'], None)
        self.command.dispatch(['-f', 'compose2.yml', 'ps'], None)

@ -234,8 +242,8 @@ class CLITestCase(DockerClientTestCase):
        service = self.project.get_service(name)
        container = service.containers(stopped=True, one_off=True)[0]
        self.assertEqual(
            container.human_readable_command,
            u'/bin/echo helloworld'
            shlex.split(container.human_readable_command),
            [u'/bin/echo', u'helloworld'],
        )

    @patch('dockerpty.start')

@ -332,6 +340,17 @@ class CLITestCase(DockerClientTestCase):
        self.command.dispatch(['rm', '-f'], None)
        self.assertEqual(len(service.containers(stopped=True)), 0)

    def test_stop(self):
        self.command.dispatch(['up', '-d'], None)
        service = self.project.get_service('simple')
        self.assertEqual(len(service.containers()), 1)
        self.assertTrue(service.containers()[0].is_running)

        self.command.dispatch(['stop', '-t', '1'], None)

        self.assertEqual(len(service.containers(stopped=True)), 1)
        self.assertFalse(service.containers(stopped=True)[0].is_running)

    def test_kill(self):
        self.command.dispatch(['up', '-d'], None)
        service = self.project.get_service('simple')

@ -343,22 +362,22 @@ class CLITestCase(DockerClientTestCase):
        self.assertEqual(len(service.containers(stopped=True)), 1)
        self.assertFalse(service.containers(stopped=True)[0].is_running)

    def test_kill_signal_sigint(self):
    def test_kill_signal_sigstop(self):
        self.command.dispatch(['up', '-d'], None)
        service = self.project.get_service('simple')
        self.assertEqual(len(service.containers()), 1)
        self.assertTrue(service.containers()[0].is_running)

        self.command.dispatch(['kill', '-s', 'SIGINT'], None)
        self.command.dispatch(['kill', '-s', 'SIGSTOP'], None)

        self.assertEqual(len(service.containers()), 1)
        # The container is still running. It has been only interrupted
        # The container is still running. It has only been paused
        self.assertTrue(service.containers()[0].is_running)

    def test_kill_interrupted_service(self):
    def test_kill_stopped_service(self):
        self.command.dispatch(['up', '-d'], None)
        service = self.project.get_service('simple')
        self.command.dispatch(['kill', '-s', 'SIGINT'], None)
        self.command.dispatch(['kill', '-s', 'SIGSTOP'], None)
        self.assertTrue(service.containers()[0].is_running)

        self.command.dispatch(['kill', '-s', 'SIGKILL'], None)

@ -371,7 +390,7 @@ class CLITestCase(DockerClientTestCase):
        container = service.create_container()
        service.start_container(container)
        started_at = container.dictionary['State']['StartedAt']
        self.command.dispatch(['restart'], None)
        self.command.dispatch(['restart', '-t', '1'], None)
        container.inspect()
        self.assertNotEqual(
            container.dictionary['State']['FinishedAt'],

@ -405,7 +424,6 @@ class CLITestCase(DockerClientTestCase):
        self.assertEqual(len(project.get_service('another').containers()), 0)

    def test_port(self):

        self.command.base_dir = 'tests/fixtures/ports-composefile'
        self.command.dispatch(['up', '-d'], None)
        container = self.project.get_service('simple').get_container()

@ -419,6 +437,27 @@ class CLITestCase(DockerClientTestCase):
        self.assertEqual(get_port(3001), "0.0.0.0:49152")
        self.assertEqual(get_port(3002), "")

    def test_port_with_scale(self):

        self.command.base_dir = 'tests/fixtures/ports-composefile-scale'
        self.command.dispatch(['scale', 'simple=2'], None)
        containers = sorted(
            self.project.containers(service_names=['simple']),
            key=attrgetter('name'))

        @patch('sys.stdout', new_callable=StringIO)
        def get_port(number, mock_stdout, index=None):
            if index is None:
                self.command.dispatch(['port', 'simple', str(number)], None)
            else:
                self.command.dispatch(['port', '--index=' + str(index), 'simple', str(number)], None)
            return mock_stdout.getvalue().rstrip()

        self.assertEqual(get_port(3000), containers[0].get_local_port(3000))
        self.assertEqual(get_port(3000, index=1), containers[0].get_local_port(3000))
        self.assertEqual(get_port(3000, index=2), containers[1].get_local_port(3000))
        self.assertEqual(get_port(3002), "")

    def test_env_file_relative_to_compose_file(self):
        config_path = os.path.abspath('tests/fixtures/env-file/docker-compose.yml')
        self.command.dispatch(['-f', config_path, 'up', '-d'], None)
57 tests/integration/legacy_test.py Normal file
@ -0,0 +1,57 @@

from compose import legacy
from compose.project import Project
from .testcases import DockerClientTestCase


class ProjectTest(DockerClientTestCase):

    def setUp(self):
        super(ProjectTest, self).setUp()

        db = self.create_service('db')
        web = self.create_service('web', links=[(db, 'db')])
        nginx = self.create_service('nginx', links=[(web, 'web')])

        self.services = [db, web, nginx]
        self.project = Project('composetest', self.services, self.client)

        # Create a legacy container for each service
        for service in self.services:
            service.ensure_image_exists()
            container = self.client.create_container(
                name='{}_{}_1'.format(self.project.name, service.name),
                **service.options
            )
            self.client.start(container)

        # Create a single one-off legacy container
        self.client.create_container(
            name='{}_{}_run_1'.format(self.project.name, self.services[0].name),
            **self.services[0].options
        )

    def get_legacy_containers(self, **kwargs):
        return list(legacy.get_legacy_containers(
            self.client,
            self.project.name,
            [s.name for s in self.services],
            **kwargs
        ))

    def test_get_legacy_container_names(self):
        self.assertEqual(len(self.get_legacy_containers()), len(self.services))

    def test_get_legacy_container_names_one_off(self):
        self.assertEqual(len(self.get_legacy_containers(stopped=True, one_off=True)), 1)

    def test_migration_to_labels(self):
        with self.assertRaises(legacy.LegacyContainersError) as cm:
            self.assertEqual(self.project.containers(stopped=True), [])

        self.assertEqual(
            set(cm.exception.names),
            set(['composetest_db_1', 'composetest_web_1', 'composetest_nginx_1']),
        )

        legacy.migrate_project_to_labels(self.project)
        self.assertEqual(len(self.project.containers(stopped=True)), len(self.services))
@ -6,6 +6,29 @@ from .testcases import DockerClientTestCase
|
||||
|
||||
|
||||
class ProjectTest(DockerClientTestCase):
|
||||
|
||||
def test_containers(self):
|
||||
web = self.create_service('web')
|
||||
db = self.create_service('db')
|
||||
project = Project('composetest', [web, db], self.client)
|
||||
|
||||
project.up()
|
||||
|
||||
containers = project.containers()
|
||||
self.assertEqual(len(containers), 2)
|
||||
|
||||
def test_containers_with_service_names(self):
|
||||
web = self.create_service('web')
|
||||
db = self.create_service('db')
|
||||
project = Project('composetest', [web, db], self.client)
|
||||
|
||||
project.up()
|
||||
|
||||
containers = project.containers(['web'])
|
||||
self.assertEqual(
|
||||
[c.name for c in containers],
|
||||
['composetest_web_1'])
|
||||
|
||||
def test_volumes_from_service(self):
|
||||
service_dicts = config.from_dictionary({
|
||||
'data': {
|
||||
@ -55,12 +78,12 @@ class ProjectTest(DockerClientTestCase):
|
||||
service_dicts=config.from_dictionary({
|
||||
'net': {
|
||||
'image': 'busybox:latest',
|
||||
'command': ["/bin/sleep", "300"]
|
||||
'command': ["top"]
|
||||
},
|
||||
'web': {
|
||||
'image': 'busybox:latest',
|
||||
'net': 'container:net',
|
||||
'command': ["/bin/sleep", "300"]
|
||||
'command': ["top"]
|
||||
},
|
||||
}),
|
||||
client=self.client,
|
||||
@ -70,7 +93,7 @@ class ProjectTest(DockerClientTestCase):
|
||||
|
||||
web = project.get_service('web')
|
||||
net = project.get_service('net')
|
||||
self.assertEqual(web._get_net(), 'container:'+net.containers()[0].id)
|
||||
self.assertEqual(web._get_net(), 'container:' + net.containers()[0].id)
|
||||
|
||||
project.kill()
|
||||
project.remove_stopped()
|
||||
@ -80,7 +103,7 @@ class ProjectTest(DockerClientTestCase):
|
||||
self.client,
|
||||
image='busybox:latest',
|
||||
name='composetest_net_container',
|
||||
command='/bin/sleep 300'
|
||||
command='top'
|
||||
)
|
||||
net_container.start()
|
||||
|
||||
@ -98,7 +121,7 @@ class ProjectTest(DockerClientTestCase):
|
||||
project.up()
|
||||
|
||||
web = project.get_service('web')
|
||||
self.assertEqual(web._get_net(), 'container:'+net_container.id)
|
||||
self.assertEqual(web._get_net(), 'container:' + net_container.id)
|
||||
|
||||
project.kill()
|
||||
project.remove_stopped()
|
||||
@ -151,6 +174,18 @@ class ProjectTest(DockerClientTestCase):
|
||||
project.kill()
|
||||
project.remove_stopped()
|
||||
|
||||
def test_project_up_starts_uncreated_services(self):
|
||||
db = self.create_service('db')
|
||||
web = self.create_service('web', links=[(db, 'db')])
|
||||
project = Project('composetest', [db, web], self.client)
|
||||
project.up(['db'])
|
||||
self.assertEqual(len(project.containers()), 1)
|
||||
|
||||
project.up()
|
||||
self.assertEqual(len(project.containers()), 2)
|
||||
self.assertEqual(len(db.containers()), 1)
|
||||
self.assertEqual(len(web.containers()), 1)
|
||||
|
||||
def test_project_up_recreates_containers(self):
|
||||
web = self.create_service('web')
|
||||
db = self.create_service('db', volumes=['/etc'])
|
||||
@@ -185,7 +220,7 @@ class ProjectTest(DockerClientTestCase):
        old_db_id = project.containers()[0].id
        db_volume_path = project.containers()[0].inspect()['Volumes']['/var/db']

        project.up(recreate=False)
        project.up(allow_recreate=False)
        self.assertEqual(len(project.containers()), 2)

        db_container = [c for c in project.containers() if 'db' in c.name][0]
@@ -204,7 +239,7 @@ class ProjectTest(DockerClientTestCase):
        self.assertEqual(len(project.containers()), 0)

        project.up(['db'])
        project.stop()
        project.kill()

        old_containers = project.containers(stopped=True)

@@ -212,10 +247,11 @@ class ProjectTest(DockerClientTestCase):
        old_db_id = old_containers[0].id
        db_volume_path = old_containers[0].inspect()['Volumes']['/var/db']

        project.up(recreate=False)
        project.up(allow_recreate=False)

        new_containers = project.containers(stopped=True)
        self.assertEqual(len(new_containers), 2)
        self.assertEqual([c.is_running for c in new_containers], [True, True])

        db_container = [c for c in new_containers if 'db' in c.name][0]
        self.assertEqual(db_container.id, old_db_id)
@@ -264,20 +300,20 @@ class ProjectTest(DockerClientTestCase):
            service_dicts=config.from_dictionary({
                'console': {
                    'image': 'busybox:latest',
                    'command': ["/bin/sleep", "300"],
                    'command': ["top"],
                },
                'data' : {
                'data': {
                    'image': 'busybox:latest',
                    'command': ["/bin/sleep", "300"]
                    'command': ["top"]
                },
                'db': {
                    'image': 'busybox:latest',
                    'command': ["/bin/sleep", "300"],
                    'command': ["top"],
                    'volumes_from': ['data'],
                },
                'web': {
                    'image': 'busybox:latest',
                    'command': ["/bin/sleep", "300"],
                    'command': ["top"],
                    'links': ['db'],
                },
            }),
@@ -302,20 +338,20 @@ class ProjectTest(DockerClientTestCase):
            service_dicts=config.from_dictionary({
                'console': {
                    'image': 'busybox:latest',
                    'command': ["/bin/sleep", "300"],
                    'command': ["top"],
                },
                'data' : {
                'data': {
                    'image': 'busybox:latest',
                    'command': ["/bin/sleep", "300"]
                    'command': ["top"]
                },
                'db': {
                    'image': 'busybox:latest',
                    'command': ["/bin/sleep", "300"],
                    'command': ["top"],
                    'volumes_from': ['data'],
                },
                'web': {
                    'image': 'busybox:latest',
                    'command': ["/bin/sleep", "300"],
                    'command': ["top"],
                    'links': ['db'],
                },
            }),
263
tests/integration/resilience_test.py
Normal file
@@ -0,0 +1,37 @@
from __future__ import unicode_literals
from __future__ import absolute_import

import mock

from compose.project import Project
from .testcases import DockerClientTestCase


class ResilienceTest(DockerClientTestCase):
    def test_recreate_fails(self):
        db = self.create_service('db', volumes=['/var/db'], command='top')
        project = Project('composetest', [db], self.client)

        container = db.create_container()
        db.start_container(container)
        host_path = container.get('Volumes')['/var/db']

        project.up()
        container = db.containers()[0]
        self.assertEqual(container.get('Volumes')['/var/db'], host_path)

        with mock.patch('compose.service.Service.create_container', crash):
            with self.assertRaises(Crash):
                project.up()

        project.up()
        container = db.containers()[0]
        self.assertEqual(container.get('Volumes')['/var/db'], host_path)


class Crash(Exception):
    pass


def crash(*args, **kwargs):
    raise Crash()
@@ -4,8 +4,23 @@ import os
from os import path
import mock

from compose import Service
from compose.service import CannotBeScaledError
import tempfile
import shutil
import six

from compose import __version__
from compose.const import (
    LABEL_CONTAINER_NUMBER,
    LABEL_ONE_OFF,
    LABEL_PROJECT,
    LABEL_SERVICE,
    LABEL_VERSION,
)
from compose.service import (
    ConfigError,
    Service,
    build_extra_hosts,
)
from compose.container import Container
from docker.errors import APIError
from .testcases import DockerClientTestCase
@@ -99,7 +114,7 @@ class ServiceTest(DockerClientTestCase):
        service = self.create_service('db', volumes=['/var/db'])
        container = service.create_container()
        service.start_container(container)
        self.assertIn('/var/db', container.inspect()['Volumes'])
        self.assertIn('/var/db', container.get('Volumes'))

    def test_create_container_with_cpu_shares(self):
        service = self.create_service('db', cpu_shares=73)
@@ -107,6 +122,82 @@ class ServiceTest(DockerClientTestCase):
        service.start_container(container)
        self.assertEqual(container.inspect()['Config']['CpuShares'], 73)

    def test_build_extra_hosts(self):
        # string
        self.assertRaises(ConfigError, lambda: build_extra_hosts("www.example.com: 192.168.0.17"))

        # list of strings
        self.assertEqual(build_extra_hosts(
            ["www.example.com:192.168.0.17"]),
            {'www.example.com': '192.168.0.17'})
        self.assertEqual(build_extra_hosts(
            ["www.example.com: 192.168.0.17"]),
            {'www.example.com': '192.168.0.17'})
        self.assertEqual(build_extra_hosts(
            ["www.example.com: 192.168.0.17",
             "static.example.com:192.168.0.19",
             "api.example.com: 192.168.0.18"]),
            {'www.example.com': '192.168.0.17',
             'static.example.com': '192.168.0.19',
             'api.example.com': '192.168.0.18'})

        # list of dictionaries
        self.assertRaises(ConfigError, lambda: build_extra_hosts(
            [{'www.example.com': '192.168.0.17'},
             {'api.example.com': '192.168.0.18'}]))

        # dictionaries
        self.assertEqual(build_extra_hosts(
            {'www.example.com': '192.168.0.17',
             'api.example.com': '192.168.0.18'}),
            {'www.example.com': '192.168.0.17',
             'api.example.com': '192.168.0.18'})

    def test_create_container_with_extra_hosts_list(self):
        extra_hosts = ['somehost:162.242.195.82', 'otherhost:50.31.209.229']
        service = self.create_service('db', extra_hosts=extra_hosts)
        container = service.create_container()
        service.start_container(container)
        self.assertEqual(set(container.get('HostConfig.ExtraHosts')), set(extra_hosts))

    def test_create_container_with_extra_hosts_string(self):
        extra_hosts = 'somehost:162.242.195.82'
        service = self.create_service('db', extra_hosts=extra_hosts)
        self.assertRaises(ConfigError, lambda: service.create_container())

    def test_create_container_with_extra_hosts_list_of_dicts(self):
        extra_hosts = [{'somehost': '162.242.195.82'}, {'otherhost': '50.31.209.229'}]
        service = self.create_service('db', extra_hosts=extra_hosts)
        self.assertRaises(ConfigError, lambda: service.create_container())

    def test_create_container_with_extra_hosts_dicts(self):
        extra_hosts = {'somehost': '162.242.195.82', 'otherhost': '50.31.209.229'}
        extra_hosts_list = ['somehost:162.242.195.82', 'otherhost:50.31.209.229']
        service = self.create_service('db', extra_hosts=extra_hosts)
        container = service.create_container()
        service.start_container(container)
        self.assertEqual(set(container.get('HostConfig.ExtraHosts')), set(extra_hosts_list))

    def test_create_container_with_cpu_set(self):
        service = self.create_service('db', cpuset='0')
        container = service.create_container()
        service.start_container(container)
        self.assertEqual(container.inspect()['Config']['Cpuset'], '0')

    def test_create_container_with_read_only_root_fs(self):
        read_only = True
        service = self.create_service('db', read_only=read_only)
        container = service.create_container()
        service.start_container(container)
        self.assertEqual(container.get('HostConfig.ReadonlyRootfs'), read_only, container.get('HostConfig'))

    def test_create_container_with_security_opt(self):
        security_opt = ['label:disable']
        service = self.create_service('db', security_opt=security_opt)
        container = service.create_container()
        service.start_container(container)
        self.assertEqual(set(container.get('HostConfig.SecurityOpt')), set(security_opt))

    def test_create_container_with_specified_volume(self):
        host_path = '/tmp/host-path'
        container_path = '/container-path'
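The `test_build_extra_hosts` assertions in this hunk pin down the accepted input shapes: a list of `"host:ip"` strings or a dict is normalized to a dict, while a bare string or a list of dicts is rejected. A minimal sketch consistent with those assertions (this `build_extra_hosts` is a hypothetical re-implementation for illustration, not Compose's actual code):

```python
class ConfigError(ValueError):
    """Raised for unsupported extra_hosts shapes (hypothetical stand-in)."""


def build_extra_hosts(extra_hosts_config):
    """Normalize extra_hosts to a {hostname: ip} dict.

    Accepts a list of "host:ip" strings (whitespace around the ip is
    tolerated) or an already-normalized dict; rejects bare strings and
    lists of dicts, matching the tests above.
    """
    if not extra_hosts_config:
        return {}
    if isinstance(extra_hosts_config, list):
        extra_hosts_dict = {}
        for item in extra_hosts_config:
            if not isinstance(item, str):
                raise ConfigError("extra_hosts must be a list of strings")
            # split on the first colon only; strip stray whitespace
            host, _, ip = item.partition(':')
            extra_hosts_dict[host.strip()] = ip.strip()
        return extra_hosts_dict
    if isinstance(extra_hosts_config, dict):
        return dict(extra_hosts_config)
    raise ConfigError("extra_hosts must be a list or a dict")
```

Accepting both `"host:ip"` and `"host: ip"` keeps the YAML ergonomic, since a trailing `: ip` is how users naturally write mappings.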
@@ -121,7 +212,7 @@ class ServiceTest(DockerClientTestCase):
        # Match the last component ("host-path"), because boot2docker symlinks /tmp
        actual_host_path = volumes[container_path]
        self.assertTrue(path.basename(actual_host_path) == path.basename(host_path),
            msg=("Last component differs: %s, %s" % (actual_host_path, host_path)))
                        msg=("Last component differs: %s, %s" % (actual_host_path, host_path)))

    @mock.patch.dict(os.environ)
    def test_create_container_with_home_and_env_var_in_volume_path(self):
@@ -144,7 +235,7 @@ class ServiceTest(DockerClientTestCase):
    def test_create_container_with_volumes_from(self):
        volume_service = self.create_service('data')
        volume_container_1 = volume_service.create_container()
        volume_container_2 = Container.create(self.client, image='busybox:latest', command=["/bin/sleep", "300"])
        volume_container_2 = Container.create(self.client, image='busybox:latest', command=["top"])
        host_service = self.create_service('host', volumes_from=[volume_service, volume_container_2])
        host_container = host_service.create_container()
        host_service.start_container(host_container)
@@ -153,60 +244,57 @@ class ServiceTest(DockerClientTestCase):
        self.assertIn(volume_container_2.id,
                      host_container.get('HostConfig.VolumesFrom'))

    def test_recreate_containers(self):
    def test_converge(self):
        service = self.create_service(
            'db',
            environment={'FOO': '1'},
            volumes=['/etc'],
            entrypoint=['sleep'],
            command=['300']
            entrypoint=['top'],
            command=['-d', '1']
        )
        old_container = service.create_container()
        self.assertEqual(old_container.dictionary['Config']['Entrypoint'], ['sleep'])
        self.assertEqual(old_container.dictionary['Config']['Cmd'], ['300'])
        self.assertIn('FOO=1', old_container.dictionary['Config']['Env'])
        self.assertEqual(old_container.get('Config.Entrypoint'), ['top'])
        self.assertEqual(old_container.get('Config.Cmd'), ['-d', '1'])
        self.assertIn('FOO=1', old_container.get('Config.Env'))
        self.assertEqual(old_container.name, 'composetest_db_1')
        service.start_container(old_container)
        volume_path = old_container.inspect()['Volumes']['/etc']
        old_container.inspect()  # reload volume data
        volume_path = old_container.get('Volumes')['/etc']

        num_containers_before = len(self.client.containers(all=True))

        service.options['environment']['FOO'] = '2'
        tuples = service.recreate_containers()
        self.assertEqual(len(tuples), 1)
        new_container = service.converge()[0]

        intermediate_container = tuples[0][0]
        new_container = tuples[0][1]
        self.assertEqual(intermediate_container.dictionary['Config']['Entrypoint'], ['/bin/echo'])

        self.assertEqual(new_container.dictionary['Config']['Entrypoint'], ['sleep'])
        self.assertEqual(new_container.dictionary['Config']['Cmd'], ['300'])
        self.assertIn('FOO=2', new_container.dictionary['Config']['Env'])
        self.assertEqual(new_container.get('Config.Entrypoint'), ['top'])
        self.assertEqual(new_container.get('Config.Cmd'), ['-d', '1'])
        self.assertIn('FOO=2', new_container.get('Config.Env'))
        self.assertEqual(new_container.name, 'composetest_db_1')
        self.assertEqual(new_container.inspect()['Volumes']['/etc'], volume_path)
        self.assertIn(intermediate_container.id, new_container.dictionary['HostConfig']['VolumesFrom'])
        self.assertEqual(new_container.get('Volumes')['/etc'], volume_path)
        self.assertIn(
            'affinity:container==%s' % old_container.id,
            new_container.get('Config.Env'))

        self.assertEqual(len(self.client.containers(all=True)), num_containers_before)
        self.assertNotEqual(old_container.id, new_container.id)
        self.assertRaises(APIError,
                          self.client.inspect_container,
                          intermediate_container.id)
                          old_container.id)

    def test_recreate_containers_when_containers_are_stopped(self):
    def test_converge_when_containers_are_stopped(self):
        service = self.create_service(
            'db',
            environment={'FOO': '1'},
            volumes=['/var/db'],
            entrypoint=['sleep'],
            command=['300']
            entrypoint=['top'],
            command=['-d', '1']
        )
        old_container = service.create_container()
        service.create_container()
        self.assertEqual(len(service.containers(stopped=True)), 1)
        service.recreate_containers()
        service.converge()
        self.assertEqual(len(service.containers(stopped=True)), 1)


    def test_recreate_containers_with_image_declared_volume(self):
    def test_converge_with_image_declared_volume(self):
        service = Service(
            project='composetest',
            name='db',
@@ -218,9 +306,7 @@ class ServiceTest(DockerClientTestCase):
        self.assertEqual(old_container.get('Volumes').keys(), ['/data'])
        volume_path = old_container.get('Volumes')['/data']

        service.recreate_containers()
        new_container = service.containers()[0]
        service.start_container(new_container)
        new_container = service.converge()[0]
        self.assertEqual(new_container.get('Volumes').keys(), ['/data'])
        self.assertEqual(new_container.get('Volumes')['/data'], volume_path)

@@ -247,8 +333,7 @@ class ServiceTest(DockerClientTestCase):
            set([
                'composetest_db_1', 'db_1',
                'composetest_db_2', 'db_2',
                'db',
            ]),
                'db'])
        )

    def test_start_container_creates_links_with_names(self):
@@ -264,8 +349,7 @@ class ServiceTest(DockerClientTestCase):
            set([
                'composetest_db_1', 'db_1',
                'composetest_db_2', 'db_2',
                'custom_link_name',
            ]),
                'custom_link_name'])
        )

    def test_start_container_with_external_links(self):
@@ -283,8 +367,7 @@ class ServiceTest(DockerClientTestCase):
            set([
                'composetest_db_1',
                'composetest_db_2',
                'db_3',
            ]),
                'db_3']),
        )

    def test_start_normal_container_does_not_create_links_to_its_own_service(self):
@@ -309,8 +392,7 @@ class ServiceTest(DockerClientTestCase):
            set([
                'composetest_db_1', 'db_1',
                'composetest_db_2', 'db_2',
                'db',
            ]),
                'db'])
        )

    def test_start_container_builds_images(self):
@@ -343,13 +425,36 @@ class ServiceTest(DockerClientTestCase):
        self.assertEqual(list(container['NetworkSettings']['Ports'].keys()), ['8000/tcp'])
        self.assertNotEqual(container['NetworkSettings']['Ports']['8000/tcp'][0]['HostPort'], '8000')

    def test_build(self):
        base_dir = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, base_dir)

        with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f:
            f.write("FROM busybox\n")

        self.create_service('web', build=base_dir).build()
        self.assertEqual(len(self.client.images(name='composetest_web')), 1)

    def test_build_non_ascii_filename(self):
        base_dir = tempfile.mkdtemp()
        self.addCleanup(shutil.rmtree, base_dir)

        with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f:
            f.write("FROM busybox\n")

        with open(os.path.join(base_dir, b'foo\xE2bar'), 'w') as f:
            f.write("hello world\n")

        self.create_service('web', build=six.text_type(base_dir)).build()
        self.assertEqual(len(self.client.images(name='composetest_web')), 1)

    def test_start_container_stays_unpriviliged(self):
        service = self.create_service('web')
        container = create_and_start_container(service).inspect()
        self.assertEqual(container['HostConfig']['Privileged'], False)

    def test_start_container_becomes_priviliged(self):
        service = self.create_service('web', privileged = True)
        service = self.create_service('web', privileged=True)
        container = create_and_start_container(service).inspect()
        self.assertEqual(container['HostConfig']['Privileged'], True)

@@ -396,6 +501,11 @@ class ServiceTest(DockerClientTestCase):
            ],
        })

    def test_create_with_image_id(self):
        # Image id for the current busybox:latest
        service = self.create_service('foo', image='8c2e06607696')
        service.create_container()

    def test_scale(self):
        service = self.create_service('web')
        service.scale(1)
@@ -415,10 +525,6 @@ class ServiceTest(DockerClientTestCase):
        service.scale(0)
        self.assertEqual(len(service.containers()), 0)

    def test_scale_on_service_that_cannot_be_scaled(self):
        service = self.create_service('web', ports=['8000:8000'])
        self.assertRaises(CannotBeScaledError, lambda: service.scale(1))

    def test_scale_sets_ports(self):
        service = self.create_service('web', ports=['8000'])
        service.scale(2)
@@ -442,6 +548,16 @@ class ServiceTest(DockerClientTestCase):
        container = create_and_start_container(service)
        self.assertEqual(container.get('HostConfig.NetworkMode'), 'host')

    def test_pid_mode_none_defined(self):
        service = self.create_service('web', pid=None)
        container = create_and_start_container(service)
        self.assertEqual(container.get('HostConfig.PidMode'), '')

    def test_pid_mode_host(self):
        service = self.create_service('web', pid='host')
        container = create_and_start_container(service)
        self.assertEqual(container.get('HostConfig.PidMode'), 'host')

    def test_dns_no_value(self):
        service = self.create_service('web')
        container = create_and_start_container(service)
@@ -501,13 +617,13 @@ class ServiceTest(DockerClientTestCase):
    def test_split_env(self):
        service = self.create_service('web', environment=['NORMAL=F1', 'CONTAINS_EQUALS=F=2', 'TRAILING_EQUALS='])
        env = create_and_start_container(service).environment
        for k,v in {'NORMAL': 'F1', 'CONTAINS_EQUALS': 'F=2', 'TRAILING_EQUALS': ''}.items():
        for k, v in {'NORMAL': 'F1', 'CONTAINS_EQUALS': 'F=2', 'TRAILING_EQUALS': ''}.items():
            self.assertEqual(env[k], v)

    def test_env_from_file_combined_with_env(self):
        service = self.create_service('web', environment=['ONE=1', 'TWO=2', 'THREE=3'], env_file=['tests/fixtures/env/one.env', 'tests/fixtures/env/two.env'])
        env = create_and_start_container(service).environment
        for k,v in {'ONE': '1', 'TWO': '2', 'THREE': '3', 'FOO': 'baz', 'DOO': 'dah'}.items():
        for k, v in {'ONE': '1', 'TWO': '2', 'THREE': '3', 'FOO': 'baz', 'DOO': 'dah'}.items():
            self.assertEqual(env[k], v)

    @mock.patch.dict(os.environ)
@@ -517,5 +633,75 @@ class ServiceTest(DockerClientTestCase):
        os.environ['ENV_DEF'] = 'E3'
        service = self.create_service('web', environment={'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': None, 'NO_DEF': None})
        env = create_and_start_container(service).environment
        for k,v in {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''}.items():
        for k, v in {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''}.items():
            self.assertEqual(env[k], v)

    def test_labels(self):
        labels_dict = {
            'com.example.description': "Accounting webapp",
            'com.example.department': "Finance",
            'com.example.label-with-empty-value': "",
        }

        compose_labels = {
            LABEL_CONTAINER_NUMBER: '1',
            LABEL_ONE_OFF: 'False',
            LABEL_PROJECT: 'composetest',
            LABEL_SERVICE: 'web',
            LABEL_VERSION: __version__,
        }
        expected = dict(labels_dict, **compose_labels)

        service = self.create_service('web', labels=labels_dict)
        labels = create_and_start_container(service).labels.items()
        for pair in expected.items():
            self.assertIn(pair, labels)

        service.kill()
        service.remove_stopped()

        labels_list = ["%s=%s" % pair for pair in labels_dict.items()]

        service = self.create_service('web', labels=labels_list)
        labels = create_and_start_container(service).labels.items()
        for pair in expected.items():
            self.assertIn(pair, labels)

    def test_empty_labels(self):
        labels_list = ['foo', 'bar']

        service = self.create_service('web', labels=labels_list)
        labels = create_and_start_container(service).labels.items()
        for name in labels_list:
            self.assertIn((name, ''), labels)

    def test_log_drive_invalid(self):
        service = self.create_service('web', log_driver='xxx')
        self.assertRaises(ValueError, lambda: create_and_start_container(service))

    def test_log_drive_empty_default_jsonfile(self):
        service = self.create_service('web')
        log_config = create_and_start_container(service).log_config

        self.assertEqual('json-file', log_config['Type'])
        self.assertFalse(log_config['Config'])

    def test_log_drive_none(self):
        service = self.create_service('web', log_driver='none')
        log_config = create_and_start_container(service).log_config

        self.assertEqual('none', log_config['Type'])
        self.assertFalse(log_config['Config'])

    def test_devices(self):
        service = self.create_service('web', devices=["/dev/random:/dev/mapped-random"])
        device_config = create_and_start_container(service).get('HostConfig.Devices')

        device_dict = {
            'PathOnHost': '/dev/random',
            'CgroupPermissions': 'rwm',
            'PathInContainer': '/dev/mapped-random'
        }

        self.assertEqual(1, len(device_config))
        self.assertDictEqual(device_dict, device_config[0])
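The `test_labels` / `test_empty_labels` cases above accept labels either as a dict or as a list of `key=value` strings, with a bare `key` mapping to an empty value. A minimal normalizer sketch matching that behavior (this `build_labels` helper is hypothetical, written here only to make the accepted shapes concrete):

```python
def build_labels(label_options):
    """Normalize labels given as a dict or a list of 'key=value' strings.

    A bare 'key' with no '=' maps to '', matching test_empty_labels above.
    Hypothetical helper, not Compose's actual implementation.
    """
    if isinstance(label_options, dict):
        return dict(label_options)
    labels = {}
    for label in label_options:
        # split on the first '=' only, so values may contain '='
        key, sep, value = label.partition('=')
        labels[key] = value  # '' when no '=' was present
    return labels
```

Splitting on the first `=` only is what lets a value like `F=2` survive, mirroring the `test_split_env` behavior for environment variables.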
263
tests/integration/state_test.py
Normal file
@@ -0,0 +1,263 @@
from __future__ import unicode_literals
import tempfile
import shutil
import os

from compose import config
from compose.project import Project
from compose.const import LABEL_CONFIG_HASH

from .testcases import DockerClientTestCase


class ProjectTestCase(DockerClientTestCase):
    def run_up(self, cfg, **kwargs):
        if 'smart_recreate' not in kwargs:
            kwargs['smart_recreate'] = True

        project = self.make_project(cfg)
        project.up(**kwargs)
        return set(project.containers(stopped=True))

    def make_project(self, cfg):
        return Project.from_dicts(
            name='composetest',
            client=self.client,
            service_dicts=config.from_dictionary(cfg),
        )


class BasicProjectTest(ProjectTestCase):
    def setUp(self):
        super(BasicProjectTest, self).setUp()

        self.cfg = {
            'db': {'image': 'busybox:latest'},
            'web': {'image': 'busybox:latest'},
        }

    def test_no_change(self):
        old_containers = self.run_up(self.cfg)
        self.assertEqual(len(old_containers), 2)

        new_containers = self.run_up(self.cfg)
        self.assertEqual(len(new_containers), 2)

        self.assertEqual(old_containers, new_containers)

    def test_partial_change(self):
        old_containers = self.run_up(self.cfg)
        old_db = [c for c in old_containers if c.name_without_project == 'db_1'][0]
        old_web = [c for c in old_containers if c.name_without_project == 'web_1'][0]

        self.cfg['web']['command'] = '/bin/true'

        new_containers = self.run_up(self.cfg)
        self.assertEqual(len(new_containers), 2)

        preserved = list(old_containers & new_containers)
        self.assertEqual(preserved, [old_db])

        removed = list(old_containers - new_containers)
        self.assertEqual(removed, [old_web])

        created = list(new_containers - old_containers)
        self.assertEqual(len(created), 1)
        self.assertEqual(created[0].name_without_project, 'web_1')
        self.assertEqual(created[0].get('Config.Cmd'), ['/bin/true'])

    def test_all_change(self):
        old_containers = self.run_up(self.cfg)
        self.assertEqual(len(old_containers), 2)

        self.cfg['web']['command'] = '/bin/true'
        self.cfg['db']['command'] = '/bin/true'

        new_containers = self.run_up(self.cfg)
        self.assertEqual(len(new_containers), 2)

        unchanged = old_containers & new_containers
        self.assertEqual(len(unchanged), 0)

        new = new_containers - old_containers
        self.assertEqual(len(new), 2)

class ProjectWithDependenciesTest(ProjectTestCase):
    def setUp(self):
        super(ProjectWithDependenciesTest, self).setUp()

        self.cfg = {
            'db': {
                'image': 'busybox:latest',
                'command': 'tail -f /dev/null',
            },
            'web': {
                'image': 'busybox:latest',
                'command': 'tail -f /dev/null',
                'links': ['db'],
            },
            'nginx': {
                'image': 'busybox:latest',
                'command': 'tail -f /dev/null',
                'links': ['web'],
            },
        }

    def test_up(self):
        containers = self.run_up(self.cfg)
        self.assertEqual(
            set(c.name_without_project for c in containers),
            set(['db_1', 'web_1', 'nginx_1']),
        )

    def test_change_leaf(self):
        old_containers = self.run_up(self.cfg)

        self.cfg['nginx']['environment'] = {'NEW_VAR': '1'}
        new_containers = self.run_up(self.cfg)

        self.assertEqual(
            set(c.name_without_project for c in new_containers - old_containers),
            set(['nginx_1']),
        )

    def test_change_middle(self):
        old_containers = self.run_up(self.cfg)

        self.cfg['web']['environment'] = {'NEW_VAR': '1'}
        new_containers = self.run_up(self.cfg)

        self.assertEqual(
            set(c.name_without_project for c in new_containers - old_containers),
            set(['web_1', 'nginx_1']),
        )

    def test_change_root(self):
        old_containers = self.run_up(self.cfg)

        self.cfg['db']['environment'] = {'NEW_VAR': '1'}
        new_containers = self.run_up(self.cfg)

        self.assertEqual(
            set(c.name_without_project for c in new_containers - old_containers),
            set(['db_1', 'web_1', 'nginx_1']),
        )

    def test_change_root_no_recreate(self):
        old_containers = self.run_up(self.cfg)

        self.cfg['db']['environment'] = {'NEW_VAR': '1'}
        new_containers = self.run_up(self.cfg, allow_recreate=False)

        self.assertEqual(new_containers - old_containers, set())

class ServiceStateTest(DockerClientTestCase):
    def test_trigger_create(self):
        web = self.create_service('web')
        self.assertEqual(('create', []), web.convergence_plan(smart_recreate=True))

    def test_trigger_noop(self):
        web = self.create_service('web')
        container = web.create_container()
        web.start()

        web = self.create_service('web')
        self.assertEqual(('noop', [container]), web.convergence_plan(smart_recreate=True))

    def test_trigger_start(self):
        options = dict(command=["top"])

        web = self.create_service('web', **options)
        web.scale(2)

        containers = web.containers(stopped=True)
        containers[0].stop()
        containers[0].inspect()

        self.assertEqual([c.is_running for c in containers], [False, True])

        web = self.create_service('web', **options)
        self.assertEqual(
            ('start', containers[0:1]),
            web.convergence_plan(smart_recreate=True),
        )

    def test_trigger_recreate_with_config_change(self):
        web = self.create_service('web', command=["top"])
        container = web.create_container()

        web = self.create_service('web', command=["top", "-d", "1"])
        self.assertEqual(('recreate', [container]), web.convergence_plan(smart_recreate=True))

    def test_trigger_recreate_with_image_change(self):
        repo = 'composetest_myimage'
        tag = 'latest'
        image = '{}:{}'.format(repo, tag)

        image_id = self.client.images(name='busybox')[0]['Id']
        self.client.tag(image_id, repository=repo, tag=tag)

        try:
            web = self.create_service('web', image=image)
            container = web.create_container()

            # update the image
            c = self.client.create_container(image, ['touch', '/hello.txt'])
            self.client.commit(c, repository=repo, tag=tag)
            self.client.remove_container(c)

            web = self.create_service('web', image=image)
            self.assertEqual(('recreate', [container]), web.convergence_plan(smart_recreate=True))

        finally:
            self.client.remove_image(image)

    def test_trigger_recreate_with_build(self):
        context = tempfile.mkdtemp()

        try:
            dockerfile = os.path.join(context, 'Dockerfile')

            with open(dockerfile, 'w') as f:
                f.write('FROM busybox\n')

            web = self.create_service('web', build=context)
            container = web.create_container()

            with open(dockerfile, 'w') as f:
                f.write('FROM busybox\nCMD echo hello world\n')
            web.build()

            web = self.create_service('web', build=context)
            self.assertEqual(('recreate', [container]), web.convergence_plan(smart_recreate=True))
        finally:
            shutil.rmtree(context)


class ConfigHashTest(DockerClientTestCase):
    def test_no_config_hash_when_one_off(self):
        web = self.create_service('web')
        container = web.create_container(one_off=True)
        self.assertNotIn(LABEL_CONFIG_HASH, container.labels)

    def test_no_config_hash_when_overriding_options(self):
        web = self.create_service('web')
        container = web.create_container(environment={'FOO': '1'})
        self.assertNotIn(LABEL_CONFIG_HASH, container.labels)

    def test_config_hash_with_custom_labels(self):
        web = self.create_service('web', labels={'foo': '1'})
        container = web.converge()[0]
        self.assertIn(LABEL_CONFIG_HASH, container.labels)
        self.assertIn('foo', container.labels)

    def test_config_hash_sticks_around(self):
        web = self.create_service('web', command=["top"])
        container = web.converge()[0]
        self.assertIn(LABEL_CONFIG_HASH, container.labels)

        web = self.create_service('web', command=["top", "-d", "1"])
        container = web.converge()[0]
        self.assertIn(LABEL_CONFIG_HASH, container.labels)
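The `ServiceStateTest` cases above fix the semantics of the convergence plan: no containers means create; a config, image, or build change means recreate; stopped-but-current containers are simply started; otherwise nothing happens. A standalone decision-function sketch of that logic (the signature here is hypothetical and simplified, not Compose's actual `Service.convergence_plan` API):

```python
def convergence_plan(containers, config_changed, smart_recreate=True):
    """Return (action, affected_containers) for a service.

    containers: list of (container, is_running) pairs for the service.
    config_changed: whether current config/image differs from what the
    containers were created with (e.g. a config-hash label mismatch).
    """
    if not containers:
        return ('create', [])
    # without smart recreate, existing containers are always recreated
    if not smart_recreate or config_changed:
        return ('recreate', [c for c, _ in containers])
    stopped = [c for c, running in containers if not running]
    if stopped:
        return ('start', stopped)
    return ('noop', [c for c, _ in containers])
```

Modelling the trigger as a pure function of observed state is what makes the behavior above testable without touching a Docker daemon.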
@@ -12,6 +12,7 @@ class DockerClientTestCase(unittest.TestCase):
    def setUpClass(cls):
        cls.client = docker_client()

    # TODO: update to use labels in #652
    def setUp(self):
        for c in self.client.containers(all=True):
            if c['Names'] and 'composetest' in c['Names'][0]:
@@ -22,10 +23,11 @@ class DockerClientTestCase(unittest.TestCase):
            self.client.remove_image(i)

    def create_service(self, name, **kwargs):
        kwargs['image'] = "busybox:latest"
        if 'image' not in kwargs and 'build' not in kwargs:
            kwargs['image'] = 'busybox:latest'

        if 'command' not in kwargs:
            kwargs['command'] = ["/bin/sleep", "300"]
            kwargs['command'] = ["top"]

        return Service(
            project='composetest',
@@ -5,7 +5,7 @@ import os
import mock
from tests import unittest

from compose.cli import docker_client
from compose.cli import docker_client


class DockerClientTestCase(unittest.TestCase):
@@ -8,7 +8,6 @@ from .. import unittest

import docker
import mock
from six import StringIO

from compose.cli import main
from compose.cli.main import TopLevelCommand
@@ -63,30 +62,32 @@ class CLITestCase(unittest.TestCase):
self.assertEquals(project_name, name)

def test_filename_check(self):
self.assertEqual('docker-compose.yml', get_config_filename_for_files([
files = [
'docker-compose.yml',
'docker-compose.yaml',
'fig.yml',
'fig.yaml',
]))
]

self.assertEqual('docker-compose.yaml', get_config_filename_for_files([
'docker-compose.yaml',
'fig.yml',
'fig.yaml',
]))

self.assertEqual('fig.yml', get_config_filename_for_files([
'fig.yml',
'fig.yaml',
]))

self.assertEqual('fig.yaml', get_config_filename_for_files([
'fig.yaml',
]))
"""Test with files placed in the basedir"""

self.assertEqual('docker-compose.yml', get_config_filename_for_files(files[0:]))
self.assertEqual('docker-compose.yaml', get_config_filename_for_files(files[1:]))
self.assertEqual('fig.yml', get_config_filename_for_files(files[2:]))
self.assertEqual('fig.yaml', get_config_filename_for_files(files[3:]))
self.assertRaises(ComposeFileNotFound, lambda: get_config_filename_for_files([]))

"""Test with files placed in the subdir"""

def get_config_filename_for_files_in_subdir(files):
return get_config_filename_for_files(files, subdir=True)

self.assertEqual('docker-compose.yml', get_config_filename_for_files_in_subdir(files[0:]))
self.assertEqual('docker-compose.yaml', get_config_filename_for_files_in_subdir(files[1:]))
self.assertEqual('fig.yml', get_config_filename_for_files_in_subdir(files[2:]))
self.assertEqual('fig.yaml', get_config_filename_for_files_in_subdir(files[3:]))
self.assertRaises(ComposeFileNotFound, lambda: get_config_filename_for_files_in_subdir([]))

def test_get_project(self):
command = TopLevelCommand()
command.base_dir = 'tests/fixtures/longer-filename-composefile'
@@ -135,13 +136,65 @@ class CLITestCase(unittest.TestCase):
call_kwargs['environment'],
{'FOO': 'ONE', 'BAR': 'NEW', 'OTHER': 'THREE'})

def test_run_service_with_restart_always(self):
command = TopLevelCommand()
mock_client = mock.create_autospec(docker.Client)
mock_project = mock.Mock()
mock_project.get_service.return_value = Service(
'service',
client=mock_client,
restart='always',
image='someimage')
command.run(mock_project, {
'SERVICE': 'service',
'COMMAND': None,
'-e': [],
'--user': None,
'--no-deps': None,
'--allow-insecure-ssl': None,
'-d': True,
'-T': None,
'--entrypoint': None,
'--service-ports': None,
'--rm': None,
})
_, _, call_kwargs = mock_client.create_container.mock_calls[0]
self.assertEquals(call_kwargs['host_config']['RestartPolicy']['Name'], 'always')

def get_config_filename_for_files(filenames):
command = TopLevelCommand()
mock_client = mock.create_autospec(docker.Client)
mock_project = mock.Mock()
mock_project.get_service.return_value = Service(
'service',
client=mock_client,
restart='always',
image='someimage')
command.run(mock_project, {
'SERVICE': 'service',
'COMMAND': None,
'-e': [],
'--user': None,
'--no-deps': None,
'--allow-insecure-ssl': None,
'-d': True,
'-T': None,
'--entrypoint': None,
'--service-ports': None,
'--rm': True,
})
_, _, call_kwargs = mock_client.create_container.mock_calls[0]
self.assertFalse('RestartPolicy' in call_kwargs['host_config'])


def get_config_filename_for_files(filenames, subdir=None):
project_dir = tempfile.mkdtemp()
try:
make_files(project_dir, filenames)
command = TopLevelCommand()
command.base_dir = project_dir
if subdir:
command.base_dir = tempfile.mkdtemp(dir=project_dir)
else:
command.base_dir = project_dir
return os.path.basename(command.get_config_path())
finally:
shutil.rmtree(project_dir)
@@ -151,4 +204,3 @@ def make_files(dirname, filenames):
for fname in filenames:
with open(os.path.join(dirname, fname), 'w') as f:
f.write('')
@@ -4,6 +4,7 @@ from .. import unittest

from compose import config


class ConfigTest(unittest.TestCase):
def test_from_dictionary(self):
service_dicts = config.from_dictionary({
@@ -53,46 +54,61 @@ class VolumePathTest(unittest.TestCase):
self.assertEqual(d['volumes'], ['/home/user:/container/path'])


class MergeVolumesTest(unittest.TestCase):
class MergePathMappingTest(object):
def config_name(self):
return ""

def test_empty(self):
service_dict = config.merge_service_dicts({}, {})
self.assertNotIn('volumes', service_dict)
self.assertNotIn(self.config_name(), service_dict)

def test_no_override(self):
service_dict = config.merge_service_dicts(
{'volumes': ['/foo:/code', '/data']},
{self.config_name(): ['/foo:/code', '/data']},
{},
)
self.assertEqual(set(service_dict['volumes']), set(['/foo:/code', '/data']))
self.assertEqual(set(service_dict[self.config_name()]), set(['/foo:/code', '/data']))

def test_no_base(self):
service_dict = config.merge_service_dicts(
{},
{'volumes': ['/bar:/code']},
{self.config_name(): ['/bar:/code']},
)
self.assertEqual(set(service_dict['volumes']), set(['/bar:/code']))
self.assertEqual(set(service_dict[self.config_name()]), set(['/bar:/code']))

def test_override_explicit_path(self):
service_dict = config.merge_service_dicts(
{'volumes': ['/foo:/code', '/data']},
{'volumes': ['/bar:/code']},
{self.config_name(): ['/foo:/code', '/data']},
{self.config_name(): ['/bar:/code']},
)
self.assertEqual(set(service_dict['volumes']), set(['/bar:/code', '/data']))
self.assertEqual(set(service_dict[self.config_name()]), set(['/bar:/code', '/data']))

def test_add_explicit_path(self):
service_dict = config.merge_service_dicts(
{'volumes': ['/foo:/code', '/data']},
{'volumes': ['/bar:/code', '/quux:/data']},
{self.config_name(): ['/foo:/code', '/data']},
{self.config_name(): ['/bar:/code', '/quux:/data']},
)
self.assertEqual(set(service_dict['volumes']), set(['/bar:/code', '/quux:/data']))
self.assertEqual(set(service_dict[self.config_name()]), set(['/bar:/code', '/quux:/data']))

def test_remove_explicit_path(self):
service_dict = config.merge_service_dicts(
{'volumes': ['/foo:/code', '/quux:/data']},
{'volumes': ['/bar:/code', '/data']},
{self.config_name(): ['/foo:/code', '/quux:/data']},
{self.config_name(): ['/bar:/code', '/data']},
)
self.assertEqual(set(service_dict['volumes']), set(['/bar:/code', '/data']))
self.assertEqual(set(service_dict[self.config_name()]), set(['/bar:/code', '/data']))


class MergeVolumesTest(unittest.TestCase, MergePathMappingTest):
def config_name(self):
return 'volumes'


class MergeDevicesTest(unittest.TestCase, MergePathMappingTest):
def config_name(self):
return 'devices'


class BuildOrImageMergeTest(unittest.TestCase):
def test_merge_build_or_image_no_override(self):
self.assertEqual(
config.merge_service_dicts({'build': '.'}, {}),
@@ -184,9 +200,50 @@ class MergeStringsOrListsTest(unittest.TestCase):
self.assertEqual(set(service_dict['dns']), set(['8.8.8.8', '9.9.9.9']))


class MergeLabelsTest(unittest.TestCase):
def test_empty(self):
service_dict = config.merge_service_dicts({}, {})
self.assertNotIn('labels', service_dict)

def test_no_override(self):
service_dict = config.merge_service_dicts(
config.make_service_dict('foo', {'labels': ['foo=1', 'bar']}),
config.make_service_dict('foo', {}),
)
self.assertEqual(service_dict['labels'], {'foo': '1', 'bar': ''})

def test_no_base(self):
service_dict = config.merge_service_dicts(
config.make_service_dict('foo', {}),
config.make_service_dict('foo', {'labels': ['foo=2']}),
)
self.assertEqual(service_dict['labels'], {'foo': '2'})

def test_override_explicit_value(self):
service_dict = config.merge_service_dicts(
config.make_service_dict('foo', {'labels': ['foo=1', 'bar']}),
config.make_service_dict('foo', {'labels': ['foo=2']}),
)
self.assertEqual(service_dict['labels'], {'foo': '2', 'bar': ''})

def test_add_explicit_value(self):
service_dict = config.merge_service_dicts(
config.make_service_dict('foo', {'labels': ['foo=1', 'bar']}),
config.make_service_dict('foo', {'labels': ['bar=2']}),
)
self.assertEqual(service_dict['labels'], {'foo': '1', 'bar': '2'})

def test_remove_explicit_value(self):
service_dict = config.merge_service_dicts(
config.make_service_dict('foo', {'labels': ['foo=1', 'bar=2']}),
config.make_service_dict('foo', {'labels': ['bar']}),
)
self.assertEqual(service_dict['labels'], {'foo': '1', 'bar': ''})


class EnvTest(unittest.TestCase):
def test_parse_environment_as_list(self):
environment =[
environment = [
'NORMAL=F1',
'CONTAINS_EQUALS=F=2',
'TRAILING_EQUALS=',
@@ -218,9 +275,8 @@ class EnvTest(unittest.TestCase):
os.environ['ENV_DEF'] = 'E3'

service_dict = config.make_service_dict(
'foo',
{
'environment': {
'foo', {
'environment': {
'FILE_DEF': 'F1',
'FILE_DEF_EMPTY': '',
'ENV_DEF': None,
@@ -278,6 +334,7 @@ class EnvTest(unittest.TestCase):
{'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''},
)


class ExtendsTest(unittest.TestCase):
def test_extends(self):
service_dicts = config.load('tests/fixtures/extends/docker-compose.yml')
@@ -291,12 +348,12 @@ class ExtendsTest(unittest.TestCase):
{
'name': 'mydb',
'image': 'busybox',
'command': 'sleep 300',
'command': 'top',
},
{
'name': 'myweb',
'image': 'busybox',
'command': 'sleep 300',
'command': 'top',
'links': ['mydb:db'],
'environment': {
"FOO": "1",
@@ -335,10 +392,11 @@ class ExtendsTest(unittest.TestCase):
],
)


def test_extends_validation(self):
dictionary = {'extends': None}
load_config = lambda: config.make_service_dict('myweb', dictionary, working_dir='tests/fixtures/extends')

def load_config():
return config.make_service_dict('myweb', dictionary, working_dir='tests/fixtures/extends')

self.assertRaisesRegexp(config.ConfigurationError, 'dictionary', load_config)

@@ -396,6 +454,21 @@ class ExtendsTest(unittest.TestCase):

self.assertEqual(set(dicts[0]['volumes']), set(paths))

def test_parent_build_path_dne(self):
child = config.load('tests/fixtures/extends/nonexistent-path-child.yml')

self.assertEqual(child, [
{
'name': 'dnechild',
'image': 'busybox',
'command': '/bin/true',
'environment': {
"FOO": "1",
"BAR": "2",
},
},
])


class BuildPathTest(unittest.TestCase):
def setUp(self):
@@ -405,7 +478,10 @@ class BuildPathTest(unittest.TestCase):
options = {'build': 'nonexistent.path'}
self.assertRaises(
config.ConfigurationError,
lambda: config.make_service_dict('foo', options, 'tests/fixtures/build-path'),
lambda: config.from_dictionary({
'foo': options,
'working_dir': 'tests/fixtures/build-path'
})
)

def test_relative_path(self):
@@ -5,16 +5,16 @@ import mock
import docker

from compose.container import Container
from compose.container import get_container_name


class ContainerTest(unittest.TestCase):


def setUp(self):
self.container_dict = {
"Id": "abc",
"Image": "busybox:latest",
"Command": "sleep 300",
"Command": "top",
"Created": 1387384730,
"Status": "Up 8 seconds",
"Ports": None,
@@ -24,17 +24,26 @@ class ContainerTest(unittest.TestCase):
"NetworkSettings": {
"Ports": {},
},
"Config": {
"Labels": {
"com.docker.compose.project": "composetest",
"com.docker.compose.service": "web",
"com.docker.compose.container-number": 7,
},
}
}

def test_from_ps(self):
container = Container.from_ps(None,
self.container_dict,
has_been_inspected=True)
self.assertEqual(container.dictionary, {
"Id": "abc",
"Image":"busybox:latest",
"Name": "/composetest_db_1",
})
self.assertEqual(
container.dictionary,
{
"Id": "abc",
"Image": "busybox:latest",
"Name": "/composetest_db_1",
})

def test_from_ps_prefixed(self):
self.container_dict['Names'] = ['/swarm-host-1' + n for n in self.container_dict['Names']]
@@ -44,7 +53,7 @@ class ContainerTest(unittest.TestCase):
has_been_inspected=True)
self.assertEqual(container.dictionary, {
"Id": "abc",
"Image":"busybox:latest",
"Image": "busybox:latest",
"Name": "/composetest_db_1",
})

@@ -64,10 +73,8 @@ class ContainerTest(unittest.TestCase):
})

def test_number(self):
container = Container.from_ps(None,
self.container_dict,
has_been_inspected=True)
self.assertEqual(container.number, 1)
container = Container(None, self.container_dict, has_been_inspected=True)
self.assertEqual(container.number, 7)

def test_name(self):
container = Container.from_ps(None,
@@ -76,10 +83,8 @@ class ContainerTest(unittest.TestCase):
self.assertEqual(container.name, "composetest_db_1")

def test_name_without_project(self):
container = Container.from_ps(None,
self.container_dict,
has_been_inspected=True)
self.assertEqual(container.name_without_project, "db_1")
container = Container(None, self.container_dict, has_been_inspected=True)
self.assertEqual(container.name_without_project, "web_7")

def test_inspect_if_not_inspected(self):
mock_client = mock.create_autospec(docker.Client)
@@ -100,7 +105,7 @@ class ContainerTest(unittest.TestCase):

def test_human_readable_ports_public_and_private(self):
self.container_dict['NetworkSettings']['Ports'].update({
"45454/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "49197" } ],
"45454/tcp": [{"HostIp": "0.0.0.0", "HostPort": "49197"}],
"45453/tcp": [],
})
container = Container(None, self.container_dict, has_been_inspected=True)
@@ -110,7 +115,7 @@ class ContainerTest(unittest.TestCase):

def test_get_local_port(self):
self.container_dict['NetworkSettings']['Ports'].update({
"45454/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "49197" } ],
"45454/tcp": [{"HostIp": "0.0.0.0", "HostPort": "49197"}],
})
container = Container(None, self.container_dict, has_been_inspected=True)

@@ -120,12 +125,21 @@ class ContainerTest(unittest.TestCase):

def test_get(self):
container = Container(None, {
"Status":"Up 8 seconds",
"Status": "Up 8 seconds",
"HostConfig": {
"VolumesFrom": ["volume_id",]
"VolumesFrom": ["volume_id"]
},
}, has_been_inspected=True)

self.assertEqual(container.get('Status'), "Up 8 seconds")
self.assertEqual(container.get('HostConfig.VolumesFrom'), ["volume_id",])
self.assertEqual(container.get('HostConfig.VolumesFrom'), ["volume_id"])
self.assertEqual(container.get('Foo.Bar.DoesNotExist'), None)


class GetContainerNameTestCase(unittest.TestCase):

def test_get_container_name(self):
self.assertIsNone(get_container_name({}))
self.assertEqual(get_container_name({'Name': 'myproject_db_1'}), 'myproject_db_1')
self.assertEqual(get_container_name({'Names': ['/myproject_db_1', '/myproject_web_1/db']}), 'myproject_db_1')
self.assertEqual(get_container_name({'Names': ['/swarm-host-1/myproject_db_1', '/swarm-host-1/myproject_web_1/db']}), 'myproject_db_1')
@@ -2,10 +2,9 @@ from __future__ import unicode_literals
from __future__ import absolute_import
from tests import unittest

import mock
from six import StringIO

from compose import progress_stream
from compose import progress_stream


class ProgressStreamTestCase(unittest.TestCase):
@@ -18,3 +17,21 @@ class ProgressStreamTestCase(unittest.TestCase):
]
events = progress_stream.stream_output(output, StringIO())
self.assertEqual(len(events), 1)

def test_stream_output_div_zero(self):
output = [
'{"status": "Downloading", "progressDetail": {"current": '
'0, "start": 1413653874, "total": 0}, '
'"progress": "..."}',
]
events = progress_stream.stream_output(output, StringIO())
self.assertEqual(len(events), 1)

def test_stream_output_null_total(self):
output = [
'{"status": "Downloading", "progressDetail": {"current": '
'0, "start": 1413653874, "total": null}, '
'"progress": "..."}',
]
events = progress_stream.stream_output(output, StringIO())
self.assertEqual(len(events), 1)
@@ -8,6 +8,7 @@ from compose import config
import mock
import docker


class ProjectTest(unittest.TestCase):
def test_from_dict(self):
project = Project.from_dicts('composetest', [
@@ -79,10 +80,12 @@ class ProjectTest(unittest.TestCase):
web = Service(
project='composetest',
name='web',
image='foo',
)
console = Service(
project='composetest',
name='console',
image='foo',
)
project = Project('test', [web, console], None)
self.assertEqual(project.get_services(), [web, console])
@@ -91,10 +94,12 @@ class ProjectTest(unittest.TestCase):
web = Service(
project='composetest',
name='web',
image='foo',
)
console = Service(
project='composetest',
name='console',
image='foo',
)
project = Project('test', [web, console], None)
self.assertEqual(project.get_services(['console']), [console])
@@ -103,19 +108,23 @@ class ProjectTest(unittest.TestCase):
db = Service(
project='composetest',
name='db',
image='foo',
)
web = Service(
project='composetest',
name='web',
image='foo',
links=[(db, 'database')]
)
cache = Service(
project='composetest',
name='cache'
name='cache',
image='foo'
)
console = Service(
project='composetest',
name='console',
image='foo',
links=[(web, 'web')]
)
project = Project('test', [web, db, cache, console], None)
@@ -128,10 +137,12 @@ class ProjectTest(unittest.TestCase):
db = Service(
project='composetest',
name='db',
image='foo',
)
web = Service(
project='composetest',
name='web',
image='foo',
links=[(db, 'database')]
)
project = Project('test', [web, db], None)
@@ -211,7 +222,7 @@ class ProjectTest(unittest.TestCase):
}
], mock_client)
service = project.get_service('test')
self.assertEqual(service._get_net(), 'container:'+container_id)
self.assertEqual(service._get_net(), 'container:' + container_id)

def test_use_net_from_service(self):
container_name = 'test_aaa_1'
@@ -237,4 +248,4 @@ class ProjectTest(unittest.TestCase):
], mock_client)

service = project.get_service('test')
self.assertEqual(service._get_net(), 'container:'+container_name)
self.assertEqual(service._get_net(), 'container:' + container_name)
@ -5,16 +5,17 @@ from .. import unittest
|
||||
import mock
|
||||
|
||||
import docker
|
||||
from requests import Response
|
||||
|
||||
from compose import Service
|
||||
from compose.service import Service
|
||||
from compose.container import Container
|
||||
from compose.const import LABEL_SERVICE, LABEL_PROJECT, LABEL_ONE_OFF
|
||||
from compose.service import (
|
||||
APIError,
|
||||
ConfigError,
|
||||
NeedsBuildError,
|
||||
build_port_bindings,
|
||||
build_volume_binding,
|
||||
get_container_name,
|
||||
get_container_data_volumes,
|
||||
merge_volume_bindings,
|
||||
parse_repository_tag,
|
||||
parse_volume_spec,
|
||||
split_port,
|
||||
@ -38,59 +39,45 @@ class ServiceTest(unittest.TestCase):
|
||||
self.assertRaises(ConfigError, lambda: Service(name='foo_bar'))
|
||||
self.assertRaises(ConfigError, lambda: Service(name='__foo_bar__'))
|
||||
|
||||
Service('a')
|
||||
Service('foo')
|
||||
Service('a', image='foo')
|
||||
Service('foo', image='foo')
|
||||
|
||||
def test_project_validation(self):
|
||||
self.assertRaises(ConfigError, lambda: Service(name='foo', project='_'))
|
||||
Service(name='foo', project='bar')
|
||||
|
||||
def test_get_container_name(self):
|
||||
self.assertIsNone(get_container_name({}))
|
||||
self.assertEqual(get_container_name({'Name': 'myproject_db_1'}), 'myproject_db_1')
|
||||
self.assertEqual(get_container_name({'Names': ['/myproject_db_1', '/myproject_web_1/db']}), 'myproject_db_1')
|
||||
self.assertEqual(get_container_name({'Names': ['/swarm-host-1/myproject_db_1', '/swarm-host-1/myproject_web_1/db']}), 'myproject_db_1')
|
||||
self.assertRaises(ConfigError, lambda: Service('bar'))
|
||||
self.assertRaises(ConfigError, lambda: Service(name='foo', project='_', image='foo'))
|
||||
Service(name='foo', project='bar', image='foo')
|
||||
|
||||
def test_containers(self):
|
||||
service = Service('db', client=self.mock_client, project='myproject')
|
||||
|
||||
service = Service('db', self.mock_client, 'myproject', image='foo')
|
||||
self.mock_client.containers.return_value = []
|
||||
self.assertEqual(service.containers(), [])
|
||||
|
||||
def test_containers_with_containers(self):
|
||||
self.mock_client.containers.return_value = [
|
||||
{'Image': 'busybox', 'Id': 'OUT_1', 'Names': ['/myproject', '/foo/bar']},
|
||||
{'Image': 'busybox', 'Id': 'OUT_2', 'Names': ['/myproject_db']},
|
||||
{'Image': 'busybox', 'Id': 'OUT_3', 'Names': ['/db_1']},
|
||||
{'Image': 'busybox', 'Id': 'IN_1', 'Names': ['/myproject_db_1', '/myproject_web_1/db']},
|
||||
dict(Name=str(i), Image='foo', Id=i) for i in range(3)
|
||||
]
|
||||
self.assertEqual([c.id for c in service.containers()], ['IN_1'])
|
||||
service = Service('db', self.mock_client, 'myproject', image='foo')
|
||||
self.assertEqual([c.id for c in service.containers()], range(3))
|
||||
|
||||
def test_containers_prefixed(self):
|
||||
service = Service('db', client=self.mock_client, project='myproject')
|
||||
|
||||
self.mock_client.containers.return_value = [
|
||||
{'Image': 'busybox', 'Id': 'OUT_1', 'Names': ['/swarm-host-1/myproject', '/swarm-host-1/foo/bar']},
|
||||
{'Image': 'busybox', 'Id': 'OUT_2', 'Names': ['/swarm-host-1/myproject_db']},
|
||||
{'Image': 'busybox', 'Id': 'OUT_3', 'Names': ['/swarm-host-1/db_1']},
|
||||
{'Image': 'busybox', 'Id': 'IN_1', 'Names': ['/swarm-host-1/myproject_db_1', '/swarm-host-1/myproject_web_1/db']},
|
||||
expected_labels = [
|
||||
'{0}=myproject'.format(LABEL_PROJECT),
|
||||
'{0}=db'.format(LABEL_SERVICE),
|
||||
'{0}=False'.format(LABEL_ONE_OFF),
|
||||
]
|
||||
self.assertEqual([c.id for c in service.containers()], ['IN_1'])
|
||||
|
||||
self.mock_client.containers.assert_called_once_with(
|
||||
all=False,
|
||||
filters={'label': expected_labels})
|
||||
|
||||
def test_get_volumes_from_container(self):
|
||||
container_id = 'aabbccddee'
|
||||
service = Service(
|
||||
'test',
|
||||
image='foo',
|
||||
volumes_from=[mock.Mock(id=container_id, spec=Container)])
|
||||
|
||||
self.assertEqual(service._get_volumes_from(), [container_id])
|
||||
|
||||
def test_get_volumes_from_intermediate_container(self):
|
||||
container_id = 'aabbccddee'
|
||||
service = Service('test')
|
||||
container = mock.Mock(id=container_id, spec=Container)
|
||||
|
||||
self.assertEqual(service._get_volumes_from(container), [container_id])
|
||||
|
||||
def test_get_volumes_from_service_container_exists(self):
|
||||
container_ids = ['aabbccddee', '12345']
|
||||
from_service = mock.create_autospec(Service)
|
||||
@ -98,7 +85,7 @@ class ServiceTest(unittest.TestCase):
|
||||
mock.Mock(id=container_id, spec=Container)
|
||||
for container_id in container_ids
|
||||
]
|
||||
service = Service('test', volumes_from=[from_service])
|
||||
service = Service('test', volumes_from=[from_service], image='foo')
|
||||
|
||||
self.assertEqual(service._get_volumes_from(), container_ids)
|
||||
|
||||
@ -109,7 +96,7 @@ class ServiceTest(unittest.TestCase):
|
||||
from_service.create_container.return_value = mock.Mock(
|
||||
id=container_id,
|
||||
spec=Container)
|
||||
service = Service('test', volumes_from=[from_service])
|
||||
service = Service('test', image='foo', volumes_from=[from_service])
|
||||
|
||||
self.assertEqual(service._get_volumes_from(), [container_id])
|
||||
from_service.create_container.assert_called_once_with()
|
||||
@ -145,56 +132,62 @@ class ServiceTest(unittest.TestCase):
|
||||
|
||||
def test_build_port_bindings_with_one_port(self):
|
||||
port_bindings = build_port_bindings(["127.0.0.1:1000:1000"])
|
||||
self.assertEqual(port_bindings["1000"],[("127.0.0.1","1000")])
|
||||
self.assertEqual(port_bindings["1000"], [("127.0.0.1", "1000")])
|
||||
|
||||
def test_build_port_bindings_with_matching_internal_ports(self):
|
||||
port_bindings = build_port_bindings(["127.0.0.1:1000:1000","127.0.0.1:2000:1000"])
|
||||
self.assertEqual(port_bindings["1000"],[("127.0.0.1","1000"),("127.0.0.1","2000")])
|
||||
port_bindings = build_port_bindings(["127.0.0.1:1000:1000", "127.0.0.1:2000:1000"])
|
||||
self.assertEqual(port_bindings["1000"], [("127.0.0.1", "1000"), ("127.0.0.1", "2000")])
|
||||
|
||||
def test_build_port_bindings_with_nonmatching_internal_ports(self):
|
||||
port_bindings = build_port_bindings(["127.0.0.1:1000:1000","127.0.0.1:2000:2000"])
|
||||
self.assertEqual(port_bindings["1000"],[("127.0.0.1","1000")])
|
||||
self.assertEqual(port_bindings["2000"],[("127.0.0.1","2000")])
|
||||
port_bindings = build_port_bindings(["127.0.0.1:1000:1000", "127.0.0.1:2000:2000"])
|
||||
self.assertEqual(port_bindings["1000"], [("127.0.0.1", "1000")])
|
||||
self.assertEqual(port_bindings["2000"], [("127.0.0.1", "2000")])
|
||||
|
||||
def test_split_domainname_none(self):
|
||||
service = Service('foo', hostname='name', client=self.mock_client)
|
||||
service = Service('foo', image='foo', hostname='name', client=self.mock_client)
|
||||
self.mock_client.containers.return_value = []
|
||||
opts = service._get_container_create_options({'image': 'foo'})
|
||||
opts = service._get_container_create_options({'image': 'foo'}, 1)
|
||||
self.assertEqual(opts['hostname'], 'name', 'hostname')
|
||||
self.assertFalse('domainname' in opts, 'domainname')
|
||||
|
||||
def test_split_domainname_fqdn(self):
|
||||
service = Service('foo',
|
||||
hostname='name.domain.tld',
|
||||
client=self.mock_client)
|
||||
service = Service(
|
||||
'foo',
|
||||
hostname='name.domain.tld',
|
||||
image='foo',
|
||||
client=self.mock_client)
|
||||
self.mock_client.containers.return_value = []
|
||||
opts = service._get_container_create_options({'image': 'foo'})
|
||||
opts = service._get_container_create_options({'image': 'foo'}, 1)
|
||||
self.assertEqual(opts['hostname'], 'name', 'hostname')
|
||||
self.assertEqual(opts['domainname'], 'domain.tld', 'domainname')
|
||||
|
||||
def test_split_domainname_both(self):
|
||||
service = Service('foo',
|
||||
hostname='name',
|
||||
domainname='domain.tld',
|
||||
client=self.mock_client)
|
||||
service = Service(
|
||||
'foo',
|
||||
hostname='name',
|
||||
image='foo',
|
||||
domainname='domain.tld',
|
||||
client=self.mock_client)
|
||||
self.mock_client.containers.return_value = []
|
||||
opts = service._get_container_create_options({'image': 'foo'})
|
||||
opts = service._get_container_create_options({'image': 'foo'}, 1)
|
||||
self.assertEqual(opts['hostname'], 'name', 'hostname')
|
||||
self.assertEqual(opts['domainname'], 'domain.tld', 'domainname')
|
||||
|
||||
def test_split_domainname_weird(self):
|
||||
service = Service('foo',
|
||||
hostname='name.sub',
|
||||
domainname='domain.tld',
|
||||
client=self.mock_client)
|
||||
service = Service(
|
||||
'foo',
|
||||
hostname='name.sub',
|
||||
domainname='domain.tld',
|
||||
image='foo',
|
||||
client=self.mock_client)
|
||||
self.mock_client.containers.return_value = []
|
||||
opts = service._get_container_create_options({'image': 'foo'})
|
||||
opts = service._get_container_create_options({'image': 'foo'}, 1)
|
||||
self.assertEqual(opts['hostname'], 'name.sub', 'hostname')
|
||||
self.assertEqual(opts['domainname'], 'domain.tld', 'domainname')
|
||||
|
||||
def test_get_container_not_found(self):
|
||||
self.mock_client.containers.return_value = []
|
||||
service = Service('foo', client=self.mock_client)
|
||||
service = Service('foo', client=self.mock_client, image='foo')
|
||||
|
||||
self.assertRaises(ValueError, service.get_container)
|
||||
|
||||
@@ -202,7 +195,7 @@ class ServiceTest(unittest.TestCase):
    def test_get_container(self, mock_container_class):
        container_dict = dict(Name='default_foo_2')
        self.mock_client.containers.return_value = [container_dict]
        service = Service('foo', client=self.mock_client)
        service = Service('foo', image='foo', client=self.mock_client)

        container = service.get_container(number=2)
        self.assertEqual(container, mock_container_class.from_ps.return_value)
@@ -213,33 +206,53 @@ class ServiceTest(unittest.TestCase):
    def test_pull_image(self, mock_log):
        service = Service('foo', client=self.mock_client, image='someimage:sometag')
        service.pull(insecure_registry=True)
        self.mock_client.pull.assert_called_once_with('someimage:sometag', insecure_registry=True)
        mock_log.info.assert_called_once_with('Pulling foo (someimage:sometag)...')

    @mock.patch('compose.service.Container', autospec=True)
    @mock.patch('compose.service.log', autospec=True)
    def test_create_container_from_insecure_registry(
            self,
            mock_log,
            mock_container):
        service = Service('foo', client=self.mock_client, image='someimage:sometag')
        mock_response = mock.Mock(Response)
        mock_response.status_code = 404
        mock_response.reason = "Not Found"
        mock_container.create.side_effect = APIError(
            'Mock error', mock_response, "No such image")

        # We expect the APIError because our service requires a
        # non-existent image.
        with self.assertRaises(APIError):
            service.create_container(insecure_registry=True)

        self.mock_client.pull.assert_called_once_with(
            'someimage:sometag',
            'someimage',
            tag='sometag',
            insecure_registry=True,
            stream=True)
        mock_log.info.assert_called_once_with(
            'Pulling image someimage:sometag...')
        mock_log.info.assert_called_once_with('Pulling foo (someimage:sometag)...')

    def test_pull_image_no_tag(self):
        service = Service('foo', client=self.mock_client, image='ababab')
        service.pull()
        self.mock_client.pull.assert_called_once_with(
            'ababab',
            tag='latest',
            insecure_registry=False,
            stream=True)

    def test_create_container_from_insecure_registry(self):
        service = Service('foo', client=self.mock_client, image='someimage:sometag')
        images = []

        def pull(repo, tag=None, insecure_registry=False, **kwargs):
            self.assertEqual('someimage', repo)
            self.assertEqual('sometag', tag)
            self.assertTrue(insecure_registry)
            images.append({'Id': 'abc123'})
            return []

        service.image = lambda: images[0] if images else None
        self.mock_client.pull = pull

        service.create_container(insecure_registry=True)
        self.assertEqual(1, len(images))

    @mock.patch('compose.service.Container', autospec=True)
    def test_recreate_container(self, _):
        mock_container = mock.create_autospec(Container)
        service = Service('foo', client=self.mock_client, image='someimage')
        service.image = lambda: {'Id': 'abc123'}
        new_container = service.recreate_container(mock_container)

        mock_container.stop.assert_called_once_with()
        self.mock_client.rename.assert_called_once_with(
            mock_container.id,
            '%s_%s' % (mock_container.short_id, mock_container.name))

        new_container.start.assert_called_once_with()
        mock_container.remove.assert_called_once_with()

    def test_parse_repository_tag(self):
        self.assertEqual(parse_repository_tag("root"), ("root", ""))
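`test_recreate_container` asserts a specific ordering for replacing a service's container: stop the old one, rename it out of the way (to `<short_id>_<name>`), start the replacement, and only then remove the original. A rough sketch of that sequence (the function name, `create_new` callback, and client surface here are illustrative, not the actual `Service.recreate_container` implementation):

```python
def recreate(client, old_container, create_new):
    # Ordering the test asserts: stop the old container first.
    old_container.stop()
    # Rename it so the replacement can take over its name.
    client.rename(
        old_container.id,
        '%s_%s' % (old_container.short_id, old_container.name))
    # Create and start the replacement before discarding the old one,
    # so a failure partway through never leaves the service gone.
    new_container = create_new()
    new_container.start()
    old_container.remove()
    return new_container
```

Removing the old container last is what makes the flow resilient: if creation or startup fails, the renamed original still exists.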
@@ -249,32 +262,53 @@ class ServiceTest(unittest.TestCase):
        self.assertEqual(parse_repository_tag("url:5000/repo"), ("url:5000/repo", ""))
        self.assertEqual(parse_repository_tag("url:5000/repo:tag"), ("url:5000/repo", "tag"))

    def test_latest_is_used_when_tag_is_not_specified(self):
    @mock.patch('compose.service.Container', autospec=True)
    def test_create_container_latest_is_used_when_no_tag_specified(self, mock_container):
        service = Service('foo', client=self.mock_client, image='someimage')
        Container.create = mock.Mock()
        images = []

        def pull(repo, tag=None, **kwargs):
            self.assertEqual('someimage', repo)
            self.assertEqual('latest', tag)
            images.append({'Id': 'abc123'})
            return []

        service.image = lambda: images[0] if images else None
        self.mock_client.pull = pull

        service.create_container()
        self.assertEqual(Container.create.call_args[1]['image'], 'someimage:latest')
        self.assertEqual(1, len(images))

    def test_create_container_with_build(self):
        self.mock_client.images.return_value = []
        service = Service('foo', client=self.mock_client, build='.')
        service.build = mock.create_autospec(service.build)
        service.create_container(do_build=True)

        self.mock_client.images.assert_called_once_with(name=service.full_name)
        service.build.assert_called_once_with()
        images = []
        service.image = lambda *args, **kwargs: images[0] if images else None
        service.build = lambda: images.append({'Id': 'abc123'})

        service.create_container(do_build=True)
        self.assertEqual(1, len(images))

    def test_create_container_no_build(self):
        self.mock_client.images.return_value = []
        service = Service('foo', client=self.mock_client, build='.')
        service.create_container(do_build=False)
        service.image = lambda: {'Id': 'abc123'}

        self.assertFalse(self.mock_client.images.called)
        service.create_container(do_build=False)
        self.assertFalse(self.mock_client.build.called)

    def test_create_container_no_build_but_needs_build(self):
        service = Service('foo', client=self.mock_client, build='.')
        service.image = lambda: None

        with self.assertRaises(NeedsBuildError):
            service.create_container(do_build=False)


class ServiceVolumesTest(unittest.TestCase):

    def setUp(self):
        self.mock_client = mock.create_autospec(docker.Client)

    def test_parse_volume_spec_only_one_path(self):
        spec = parse_volume_spec('/the/volume')
        self.assertEqual(spec, (None, '/the/volume', 'rw'))
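`test_parse_repository_tag` pins down the tricky case in image-name parsing: a `:` that belongs to a registry `host:port` must not be mistaken for a tag separator. A plausible implementation satisfying exactly those assertions (a sketch, not necessarily the literal compose code):

```python
def parse_repository_tag(s):
    # The tag is whatever follows the last ":" -- unless that ":"
    # sits before a "/", in which case it is a registry port
    # (e.g. "url:5000/repo") and there is no tag.
    if ":" not in s:
        return s, ""
    repo, _, tag = s.rpartition(":")
    if "/" in tag:
        return s, ""
    return repo, tag
```

The `rpartition` keeps everything up to the final colon in `repo`, so `url:5000/repo:tag` splits into `("url:5000/repo", "tag")` while `url:5000/repo` falls back to an empty tag.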
@@ -297,6 +331,129 @@ class ServiceVolumesTest(unittest.TestCase):

    def test_build_volume_binding(self):
        binding = build_volume_binding(parse_volume_spec('/outside:/inside'))
        self.assertEqual(binding, ('/inside', '/outside:/inside:rw'))

    def test_get_container_data_volumes(self):
        options = [
            '/host/volume:/host/volume:ro',
            '/new/volume',
            '/existing/volume',
        ]

        self.mock_client.inspect_image.return_value = {
            'ContainerConfig': {
                'Volumes': {
                    '/mnt/image/data': {},
                }
            }
        }
        container = Container(self.mock_client, {
            'Image': 'ababab',
            'Volumes': {
                '/host/volume': '/host/volume',
                '/existing/volume': '/var/lib/docker/aaaaaaaa',
                '/removed/volume': '/var/lib/docker/bbbbbbbb',
                '/mnt/image/data': '/var/lib/docker/cccccccc',
            },
        }, has_been_inspected=True)

        expected = {
            '/existing/volume': '/var/lib/docker/aaaaaaaa:/existing/volume:rw',
            '/mnt/image/data': '/var/lib/docker/cccccccc:/mnt/image/data:rw',
        }

        binds = get_container_data_volumes(container, options)
        self.assertEqual(binds, expected)

    def test_merge_volume_bindings(self):
        options = [
            '/host/volume:/host/volume:ro',
            '/host/rw/volume:/host/rw/volume',
            '/new/volume',
            '/existing/volume',
        ]

        self.mock_client.inspect_image.return_value = {
            'ContainerConfig': {'Volumes': {}}
        }

        intermediate_container = Container(self.mock_client, {
            'Image': 'ababab',
            'Volumes': {'/existing/volume': '/var/lib/docker/aaaaaaaa'},
        }, has_been_inspected=True)

        expected = [
            '/host/volume:/host/volume:ro',
            '/host/rw/volume:/host/rw/volume:rw',
            '/var/lib/docker/aaaaaaaa:/existing/volume:rw',
        ]

        binds = merge_volume_bindings(options, intermediate_container)
        self.assertEqual(set(binds), set(expected))

    def test_mount_same_host_path_to_two_volumes(self):
        service = Service(
            'web',
            image='busybox',
            volumes=[
                '/host/path:/data1',
                '/host/path:/data2',
            ],
            client=self.mock_client,
        )

        self.mock_client.inspect_image.return_value = {
            'Id': 'ababab',
            'ContainerConfig': {
                'Volumes': {}
            }
        }

        create_options = service._get_container_create_options(
            override_options={},
            number=1,
        )

        self.assertEqual(
            binding,
            ('/outside', dict(bind='/inside', ro=False)))
            set(create_options['host_config']['Binds']),
            set([
                '/host/path:/data1:rw',
                '/host/path:/data2:rw',
            ]),
        )

    def test_different_host_path_in_container_json(self):
        service = Service(
            'web',
            image='busybox',
            volumes=['/host/path:/data'],
            client=self.mock_client,
        )

        self.mock_client.inspect_image.return_value = {
            'Id': 'ababab',
            'ContainerConfig': {
                'Volumes': {
                    '/data': {},
                }
            }
        }

        self.mock_client.inspect_container.return_value = {
            'Id': '123123123',
            'Image': 'ababab',
            'Volumes': {
                '/data': '/mnt/sda1/host/path',
            },
        }

        create_options = service._get_container_create_options(
            override_options={},
            number=1,
            previous_container=Container(self.mock_client, {'Id': '123123123'}),
        )

        self.assertEqual(
            create_options['host_config']['Binds'],
            ['/mnt/sda1/host/path:/data:rw'],
        )
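The `ServiceVolumesTest` hunk above revolves around the `host:container[:mode]` volume string format: `parse_volume_spec` yields an `(external, internal, mode)` triple and `build_volume_binding` turns it into a Docker `Binds` entry keyed by the container path. A sketch consistent with the expectations in the tests (treated as assumptions drawn from the assertions, not the exact library code):

```python
def parse_volume_spec(spec):
    # "/container"          -> (None, "/container", "rw")
    # "/host:/container"    -> ("/host", "/container", "rw")
    # "/host:/container:ro" -> ("/host", "/container", "ro")
    parts = spec.split(":")
    if len(parts) == 1:
        return (None, parts[0], "rw")
    if len(parts) == 2:
        return (parts[0], parts[1], "rw")
    return (parts[0], parts[1], parts[2])


def build_volume_binding(volume_spec):
    # Key by the internal (container) path; the value is the
    # "external:internal:mode" string Docker expects in Binds.
    external, internal, mode = volume_spec
    return internal, "%s:%s:%s" % (external, internal, mode)
```

Keying bindings by the internal path is what lets `merge_volume_bindings` overlay data volumes carried over from a previous container on top of the bindings declared in the config.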
@@ -3,6 +3,7 @@ from __future__ import absolute_import
from compose.cli.utils import split_buffer
from .. import unittest


class SplitBufferTest(unittest.TestCase):
    def test_single_line_chunks(self):
        def reader():
tox.ini
@@ -8,7 +8,7 @@ deps =
    -rrequirements-dev.txt
commands =
    nosetests -v {posargs}
    flake8 compose
    flake8 compose tests setup.py

[flake8]
# ignore line-length for now
wercker.yml
@@ -1,12 +0,0 @@
box: wercker-labs/docker
build:
  steps:
    - script:
        name: validate DCO
        code: script/validate-dco
    - script:
        name: run tests
        code: script/test
    - script:
        name: build binary
        code: script/build-linux